hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
sequencelengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
sequencelengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
sequencelengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
f4ed1d6ca2bd2e8988c2393449121a30793ef3d5
2,074
md
Markdown
Unused Files/ACScrollView.md
andrewthecope/TheCope
bc2d85a879321eaa71dd3c48468f4edbb33c4d95
[ "MIT" ]
null
null
null
Unused Files/ACScrollView.md
andrewthecope/TheCope
bc2d85a879321eaa71dd3c48468f4edbb33c4d95
[ "MIT" ]
null
null
null
Unused Files/ACScrollView.md
andrewthecope/TheCope
bc2d85a879321eaa71dd3c48468f4edbb33c4d95
[ "MIT" ]
null
null
null
@@ Title=Optimizing a Paged UIScrollView @@ Date=August 7th, 2016 Here's a Learn Your Lines support email I received a couple of months ago: >Andrew,<br> >Slow as sin. Fix. Nice and concise. Here's what that email was probably referring to (from version 2.0): [Picture of loading the ThreeModesViewController] Depending on the amount of text specified to load, selecting text to memorize could take a couple of seconds while blocking the main thread--certainly not my best work. The solution turned out to be fairly tricky. The big problem, I discovered, was programmatically creating two large `UIScrollView`s for Read and Memorize modes. I attempted to fix this in multiple different ways, and after much fiddling, I got something working. For vanity reasons, I named my solution `ACScrollView`. I figured that since these scrollviews are paged (meaning you only see one panel at a time), there's no reason to load all the pages at once. Better to load them one at a time as you scroll through. # ACScrollView I wrote out a long and confusing explanation, but after realizing that none of it was very interesting, I made little pictures instead. ACScrollView contains only three panes, which are reused as the user scrolls. Once the user finishes scrolling, the scrollview reloads the unseen panes and jumps back to the center pane. To the user, nothing looks out of the ordinary, except that the scrollview can contain a huge number of pages while maintaining a constant load time [O(1) if you are so inclined]. This method also enables infinite scroll, meaning that the user can keep looping through the pages while scrolling forward or backward, which works very well for Learn Your Lines and could be useful in many other situations. For Read Mode, I expanded ACScrollView to two dimensions, requiring 5 panes to be loaded at any given time. The picture below explains it better than I ever could. 
I have posted the generalized code for ACScrollView on my GitHub. Maybe it will be of use to someone out there.
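The recentering trick described above can be modeled without any UIKit at all. Here is a minimal Python sketch of the bookkeeping (the class name `PagedRecycler` and its methods are illustrative, not taken from the actual ACScrollView code):

```python
class PagedRecycler:
    """Model of a 3-pane paged scroll view: only the panes for
    page-1, page, and page+1 are ever loaded at once."""

    def __init__(self, page_count):
        self.page_count = page_count
        self.current = 0  # logical page index shown in the center pane

    def visible_pages(self):
        # Pages backing the left, center, and right panes.
        # Wrapping with modulo is what enables infinite scroll.
        n = self.page_count
        return [(self.current - 1) % n, self.current, (self.current + 1) % n]

    def did_finish_scrolling(self, direction):
        # direction: +1 for a scroll to the next page, -1 for previous.
        # After the scroll settles, advance the logical page and jump
        # back to the center pane; only one new pane ever needs loading,
        # so the cost stays O(1) regardless of page_count.
        self.current = (self.current + direction) % self.page_count
```

With 100 pages, `PagedRecycler(100)` still only ever holds three panes; scrolling forward or backward just shifts which logical pages back them.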
76.814815
613
0.786885
eng_Latn
0.999747
f4ed6632099876272d7830a523a72f273d8b0359
6,105
md
Markdown
articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
jos431/azure-docs.es-es
4356d4d92c8f616ab4bb3effae4e24b34b121ba8
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
jos431/azure-docs.es-es
4356d4d92c8f616ab4bb3effae4e24b34b121ba8
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
jos431/azure-docs.es-es
4356d4d92c8f616ab4bb3effae4e24b34b121ba8
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Manage NSG flow logs: Azure CLI' titleSuffix: Azure Network Watcher description: This page explains how to manage network security group flow logs in Azure Network Watcher with the Azure CLI services: network-watcher documentationcenter: na author: KumudD manager: twooley editor: '' ms.assetid: 2dfc3112-8294-4357-b2f8-f81840da67d3 ms.service: network-watcher ms.devlang: na ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services ms.date: 02/22/2017 ms.author: kumud ms.openlocfilehash: 950b014d7e08eeeeed40ba7b294e53e1c200474b ms.sourcegitcommit: 653e9f61b24940561061bd65b2486e232e41ead4 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 11/21/2019 ms.locfileid: "74278012" --- # <a name="configuring-network-security-group-flow-logs-with-azure-cli"></a>Configuring network security group flow logs with the Azure CLI > [!div class="op_single_selector"] > - [Azure portal](network-watcher-nsg-flow-logging-portal.md) > - [PowerShell](network-watcher-nsg-flow-logging-powershell.md) > - [Azure CLI](network-watcher-nsg-flow-logging-cli.md) > - [REST API](network-watcher-nsg-flow-logging-rest.md) Network security group flow logs are a feature of Network Watcher that lets you view information about ingress and egress IP traffic through a network security group. These flow logs are written in JSON format and show inbound and outbound flows on a per-rule basis, the NIC the flow applies to, 5-tuple information about the flow (source/destination IP, source/destination port, protocol), and whether the traffic was allowed or denied. To follow the steps in this article, you need to [install the Azure command-line interface for Mac, Linux, and Windows (CLI)](/cli/azure/install-azure-cli). 
## <a name="register-insights-provider"></a>Register the Insights provider For flow logging to work successfully, the **Microsoft.Insights** provider must be registered. If you are not sure whether the **Microsoft.Insights** provider is registered, run the following script. ```azurecli az provider register --namespace Microsoft.Insights ``` ## <a name="enable-network-security-group-flow-logs"></a>Enable network security group flow logs The following example shows the command to enable flow logs: ```azurecli az network watcher flow-log configure --resource-group resourceGroupName --enabled true --nsg nsgName --storage-account storageAccountName # Configure az network watcher flow-log configure --resource-group resourceGroupName --enabled true --nsg nsgName --storage-account storageAccountName --format JSON --log-version 2 ``` The storage account you specify cannot have network rules configured that restrict network access to only Microsoft services or specific virtual networks. The storage account can be in the same, or a different, Azure subscription than the NSG you enable flow logging for. If you use different subscriptions, they must both be associated with the same Azure Active Directory tenant. The account you use for each subscription must have the [necessary permissions](required-rbac-permissions.md). If the storage account is in a different resource group or subscription than the network security group, specify the full ID of the storage account rather than its name. For example, if the storage account is in a resource group named *RG-Storage*, rather than specifying *storageAccountName* in the previous command, you would specify */subscriptions/{SubscriptionID}/resourceGroups/RG-Storage/providers/Microsoft.Storage/storageAccounts/storageAccountName*. 
## <a name="disable-network-security-group-flow-logs"></a>Disable network security group flow logs Use the following example to disable flow logs: ```azurecli az network watcher flow-log configure --resource-group resourceGroupName --enabled false --nsg nsgName ``` ## <a name="download-a-flow-log"></a>Download a flow log The storage location of a flow log is defined at creation time. A convenient tool to access these flow logs saved to a storage account is Microsoft Azure Storage Explorer, which can be downloaded here: https://storageexplorer.com/ If a storage account is specified, flow log files are saved to the storage account at the following location: ``` https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json ``` > [!IMPORTANT] > There is currently an issue where [network security group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) for Network Watcher are not automatically deleted from blob storage according to the retention policy settings. If you have a non-zero retention policy, we recommend that you periodically delete storage blobs that exceed the retention period to avoid extra charges. For more information on how to delete the NSG flow log storage blobs, see [Delete NSG flow log storage blobs](network-watcher-delete-nsg-flow-log-blobs.md). 
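The storage path template above is mechanical enough to compose programmatically. As an illustration only (the helper name is mine, and the assumption that the resource-ID segments are uppercased and the date segments zero-padded follows the template shown above), a small Python sketch:

```python
def nsg_flow_log_blob_url(storage_account, subscription_id, resource_group,
                          nsg_name, year, month, day, hour, mac_address):
    """Build the blob URL where a given hour's NSG flow log is written.
    Assumes uppercased resource-ID segments and zero-padded date parts,
    per the path template; the minute segment is fixed at 00 and the
    hourly file is always named PT1H.json."""
    return (
        f"https://{storage_account}.blob.core.windows.net/"
        "insights-logs-networksecuritygroupflowevent/"
        f"resourceId=/SUBSCRIPTIONS/{subscription_id.upper()}/"
        f"RESOURCEGROUPS/{resource_group.upper()}/"
        "PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/"
        f"{nsg_name.upper()}/"
        f"y={year}/m={month:02d}/d={day:02d}/h={hour:02d}/m=00/"
        f"macAddress={mac_address}/PT1H.json"
    )
```

This kind of helper is handy when scripting bulk downloads or the periodic blob cleanup recommended above.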
## <a name="next-steps"></a>Next steps Learn how to [visualize NSG flow logs with Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md) Learn how to [visualize NSG flow logs with open-source tools](network-watcher-visualize-nsg-flow-logs-open-source-tools.md)
71.823529
721
0.800655
spa_Latn
0.966065
f4edc7810b334dbb357c0a052231c84d5786007d
4,427
md
Markdown
README.md
david-fong/ookiisudoku
39736aa5ef21326f6238cdd47dfc5dc1a5684def
[ "MIT" ]
null
null
null
README.md
david-fong/ookiisudoku
39736aa5ef21326f6238cdd47dfc5dc1a5684def
[ "MIT" ]
null
null
null
README.md
david-fong/ookiisudoku
39736aa5ef21326f6238cdd47dfc5dc1a5684def
[ "MIT" ]
null
null
null
# ōkiidoku A C++ library for variable-grid-size sudoku to: - generate random, full grids - generate puzzles - find all solutions to a puzzle using logic and brute force - (WIP) efficiently(?) put full grids in a canonical form - (WIP) archive collections of grids in compressed form It supports parametric grid sizes of 9x9, 16x16, 25x25, ..., up to 256x256, and uses template meta-programming to allow optimizing for each case. In the future, it will have language bindings for WASM/JS and Python. Programming-language-independent design notes can be found in the [design folder](./writings/design/). These are not specs or usage documentation. They are collections of the goals I have and the considerations I made in designing the containers and algorithms for the library. In Japanese, "ookii" (大きい) means "big". I chose the name because one of my priorities for this project is to support variable grid sizes, and to find and engineer (i.e. for cache and memory efficiency) algorithms that scale well with large grids. Also, I thought the name had a nice ring to it, like "okey-dokey". I decided not to call the project "ookiidoku" (despite that being the proper romanization) out of fear of English speakers like myself mispronouncing it like [this](https://en.wikipedia.org/wiki/Close_back_rounded_vowel) instead of like [this](https://en.wikipedia.org/wiki/Mid_back_rounded_vowel). I work on this project for fun. I like sudoku. I want to learn how to develop high-performance software libraries. I like reasoning about trade-offs. The algorithm design is like a puzzle to me; I always want to try to come up with things on my own before looking at how other people do things. Perhaps in doing so, I will come up with something new, and maybe even something that is better in some ways than existing approaches. Here is [the roadmap](./cpp/TODO.md) (subject to violent rearrangements at the mercy of my whim and fancy). 
## Non-Goals / Non-Priorities - Being on par with the fastest regular 9x9 sudoku solvers (tdoku, jczsolve, etc.). - With my limited time and knowledge, my higher priority is supporting variable grid sizes and keeping the code reasonably portable. - Being a puzzle rater / implementing many deductive techniques. - Despite the first bullet, I am more interested in throughput than deductive power. - Making explicit use of SIMD instructions. - My higher priority is keeping the code portable and somewhat malleable. Compilers and C++ standard library implementations can often infer and generate parallelized code, and for now, that's plenty good enough for me. - Supporting variants other than classic sudoku. - You are welcome to fork and do such work separately. ## Usage [See the readme in the cpp directory](./cpp/readme.md). This project is MIT licensed. If you have a question about the project and whether it is suitable for your own goals, please do your own research about [other existing projects](#other-existing-projects-and-solvers) and what they provide, and read through this project's [design docs](./writings/design/) _before_ asking. If you do something cool with it, that's great! I'd love to hear about it. ### Some Other Existing Projects and Solvers - [t-dillon/tdoku](https://t-dillon.github.io/tdoku/) - [jczsolve](http://forum.enjoysudoku.com/3-77us-solver-2-8g-cpu-testcase-17sodoku-t30470-210.html#p249309) - [Emerentius/sudoku](https://github.com/Emerentius/sudoku) - [Peter Norvig](https://norvig.com/sudoku.html) - There are more. Google it. - [github:sudoku-generator](https://github.com/topics/sudoku-generator), [github:sudoku-puzzle](https://github.com/topics/sudoku-puzzle) - [sudopedia programs list](http://sudopedia.enjoysudoku.com/Sudoku_Programs.html) - [repos I've starred on github](https://github.com/stars/david-fong/lists/sudoku) ## My Past and Future Related Works Sudoku is a favourite puzzle game of mine. 
After I took an intro to programming course in my first year of university, one of [my first personal projects](https://github.com/david-fong/my-first-projects) was a solution generator in C. I still remember the feelings of excitement and enjoyment that I felt. I reworked it after learning Java (dead private repo), then later started this. I've attempted a [hardware implementation using SystemVerilog](https://github.com/david-fong/Sudoku-SV), which so far has been a failure (probably due to too many wires).
76.327586
601
0.776146
eng_Latn
0.994436
f4edeb6878c305f65584fd92cef4b3ad5898f8b0
265
md
Markdown
docs/std/math/Max.md
shoyaaa/fract
8774119b362d018e45c7c60f4230eae33436924b
[ "MIT" ]
null
null
null
docs/std/math/Max.md
shoyaaa/fract
8774119b362d018e45c7c60f4230eae33436924b
[ "MIT" ]
null
null
null
docs/std/math/Max.md
shoyaaa/fract
8774119b362d018e45c7c60f4230eae33436924b
[ "MIT" ]
null
null
null
# ``Max`` function ## Description Returns the maximum of two numbers. ## Define ``` protected func Max(x, y) ``` ## Parameters + ``x`` <br> First numeric value. + ``y`` <br> Second numeric value. ## Examples ``` open std.math print(math.Max(4, 1)) # 4 print(math.Max(1, 5)) # 5 ```
11.041667
26
0.588679
eng_Latn
0.526673
f4ee34b583d71ccc3952c1a0d43e16f94f486885
1,772
md
Markdown
CHANGELOG.md
akropolisio/extension
102117cd72e1bf0bbe8cb2e6fd4f8ec2f46b48f8
[ "Apache-2.0" ]
null
null
null
CHANGELOG.md
akropolisio/extension
102117cd72e1bf0bbe8cb2e6fd4f8ec2f46b48f8
[ "Apache-2.0" ]
null
null
null
CHANGELOG.md
akropolisio/extension
102117cd72e1bf0bbe8cb2e6fd4f8ec2f46b48f8
[ "Apache-2.0" ]
1
2020-06-17T07:20:23.000Z
2020-06-17T07:20:23.000Z
# 0.11.0-beta.x - Integrate polkadot-js/common 1.4.1 & polkadot-js/ui 0.44.1 - Updated to Babel 7.6 (build and runtime improvements) # 0.10.1 - Support for external accounts as presented by mobile signers, e.g. the Parity Signer - Allow the extension UI to be opened in a new tab - Adjust embedded chain metadata to only contain actual calls (for decoding) - Minor code maintainability enhancements # 0.9.1 - Fix an initialization error in extension-dapp # 0.8.1 - Add basic support for seed derivation as part of the account import. Seeds can be followed by the derivation path, and derivation is applied on creation. - Update the polkadot-js/api version to 0.90.1, the first non-beta version with full support for Kusama # 0.7.1 - Updated the underlying polkadot-js/api version to support the most-recent signing payload extensions, as will be available on Kusama # 0.6.1 - Support Extrinsics v3 from substrate 2.x; this signs an extrinsic with the genesisHash # 0.5.1 - Always check for site permissions on messages; don't assume that messages originate from the libraries provided - Change the injected Signer interface to support the upcoming Kusama transaction format # 0.4.1 - Transactions are now signed with expiry information, so each transaction is mortal by default - Unneeded scrollbars no longer appear on Firefox (when the window is popped out) - Cater for the setting of multiple network prefixes, e.g. Kusama - Project icon has been updated # 0.3.1 - Signing a transaction now displays the Mortal/Immortal status - Don't request focus for the popup window (this is not available on FF) - `yarn build:zip` now builds a source zip as well (for store purposes) # 0.2.1 - First release to the Chrome and Firefox stores, basic functionality only
34.745098
155
0.770316
eng_Latn
0.996206
f4ee99f2147be29ecedb320ae92b5bf700c78a20
3,474
md
Markdown
docs/sharepoint/how-to-add-a-shortcut-menu-item-to-a-sharepoint-project-item-extension.md
Ric-Chang/visualstudio-docs.zh-tw
fab71387c0b4fd9853313d648522b5292ecee128
[ "CC-BY-4.0", "MIT" ]
1
2019-11-18T01:15:24.000Z
2019-11-18T01:15:24.000Z
docs/sharepoint/how-to-add-a-shortcut-menu-item-to-a-sharepoint-project-item-extension.md
Ric-Chang/visualstudio-docs.zh-tw
fab71387c0b4fd9853313d648522b5292ecee128
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/sharepoint/how-to-add-a-shortcut-menu-item-to-a-sharepoint-project-item-extension.md
Ric-Chang/visualstudio-docs.zh-tw
fab71387c0b4fd9853313d648522b5292ecee128
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'How to: Add a shortcut menu item to a SharePoint project item extension | Microsoft Docs' ms.date: 02/02/2017 ms.topic: conceptual dev_langs: - VB - CSharp helpviewer_keywords: - project items [SharePoint development in Visual Studio], extending - SharePoint project items, extending - SharePoint development in Visual Studio, extending project items author: John-Hart ms.author: johnhart manager: jillfra ms.workload: - office ms.openlocfilehash: 4502d938b9f268c389e224fe9040a99d24516d77 ms.sourcegitcommit: c0202a77d4dc562cdc55dc2e6223c062281d9749 ms.translationtype: MT ms.contentlocale: zh-TW ms.lasthandoff: 01/24/2019 ms.locfileid: "54869321" --- # <a name="how-to-add-a-shortcut-menu-item-to-a-sharepoint-project-item-extension"></a>How to: Add a shortcut menu item to a SharePoint project item extension You can add a shortcut menu item to an existing SharePoint project item through a project item extension. The menu item appears when a user right-clicks the project item in **Solution Explorer**. The following steps assume that you have already created a project item extension. For more information, see [How to: Create a SharePoint project item extension](../sharepoint/how-to-create-a-sharepoint-project-item-extension.md). ### <a name="to-add-a-shortcut-menu-item-in-a-project-item-extension"></a>To add a shortcut menu item in a project item extension 1. In the <xref:Microsoft.VisualStudio.SharePoint.ISharePointProjectItemTypeExtension.Initialize%2A> method of your <xref:Microsoft.VisualStudio.SharePoint.ISharePointProjectItemTypeExtension> implementation, handle the <xref:Microsoft.VisualStudio.SharePoint.ISharePointProjectItemEvents.ProjectItemMenuItemsRequested> event of the *projectItemType* parameter. 2. In your event handler for the <xref:Microsoft.VisualStudio.SharePoint.ISharePointProjectItemEvents.ProjectItemMenuItemsRequested> event, add a new <xref:Microsoft.VisualStudio.SharePoint.IMenuItem> object to the <xref:Microsoft.VisualStudio.SharePoint.SharePointProjectItemMenuItemsRequestedEventArgs.ViewMenuItems%2A> or <xref:Microsoft.VisualStudio.SharePoint.SharePointProjectItemMenuItemsRequestedEventArgs.AddMenuItems%2A> collection of the event arguments parameter. 3. 
In the <xref:Microsoft.VisualStudio.SharePoint.IMenuItem.Click> event handler of the new <xref:Microsoft.VisualStudio.SharePoint.IMenuItem> object, perform the tasks you want to run when the user clicks your shortcut menu item. ## <a name="example"></a>Example The following code example demonstrates how to add a shortcut menu item to event receiver project items. When the user right-clicks the project item in **Solution Explorer** and then clicks the **Write Message to Output Window** menu item, Visual Studio displays a message in the **Output** window. [!code-vb[SPExtensibility.ProjectItemExtension.MenuAndProperty#1](../sharepoint/codesnippet/VisualBasic/projectitemmenuandproperty/extension/projectitemextensionmenu.vb#1)] [!code-csharp[SPExtensibility.ProjectItemExtension.MenuAndProperty#1](../sharepoint/codesnippet/CSharp/projectitemmenuandproperty/extension/projectitemextensionmenu.cs#1)] This example uses the SharePoint project service to write the message to the **Output** window. For more information, see [Use the SharePoint project service](../sharepoint/using-the-sharepoint-project-service.md). ## <a name="compile-the-code"></a>Compile the code This example requires a class library project with references to the following assemblies: - Microsoft.VisualStudio.SharePoint - System.ComponentModel.Composition ## <a name="deploy-the-extension"></a>Deploy the extension To deploy the extension, create a [!include[vsprvs](../sharepoint/includes/vsprvs-md.md)] extension (VSIX) package for the assembly and any other files that you want to distribute with the extension. For more information, see [Deploy extensions for the SharePoint tools in Visual Studio](../sharepoint/deploying-extensions-for-the-sharepoint-tools-in-visual-studio.md). ## <a name="see-also"></a>See also [How to: Create a SharePoint project item extension](../sharepoint/how-to-create-a-sharepoint-project-item-extension.md) [How to: Add a property to a SharePoint project item extension](../sharepoint/how-to-add-a-property-to-a-sharepoint-project-item-extension.md) [Extend SharePoint project items](../sharepoint/extending-sharepoint-project-items.md) [Walkthrough: Extend a SharePoint project item type](../sharepoint/walkthrough-extending-a-sharepoint-project-item-type.md)
57.9
396
0.802533
yue_Hant
0.759712
f4eeb43d6331362c307be9af3dc12e8880ddfd03
127
md
Markdown
README.md
rushmoor/rushmoor.github.io
d30d3fcc5f0706c6926772170ec1bf074ce2964a
[ "MIT" ]
null
null
null
README.md
rushmoor/rushmoor.github.io
d30d3fcc5f0706c6926772170ec1bf074ce2964a
[ "MIT" ]
null
null
null
README.md
rushmoor/rushmoor.github.io
d30d3fcc5f0706c6926772170ec1bf074ce2964a
[ "MIT" ]
null
null
null
# rushmoor.github.io A blog by officers at Rushmoor Borough Council. Inspired by and forked from localdigitalchatbots.github.io
21.166667
55
0.818898
eng_Latn
0.982679
f4ef417b320fa31d6f0d6315775f87114ba1d7b3
27
md
Markdown
README.md
IndrekZombieRun/testrepo
26f4e6b1be6ac0ace49b208b871e05370db83241
[ "MIT" ]
null
null
null
README.md
IndrekZombieRun/testrepo
26f4e6b1be6ac0ace49b208b871e05370db83241
[ "MIT" ]
2
2016-03-29T07:28:32.000Z
2016-03-29T07:31:25.000Z
README.md
IndrekZombieRun/testrepo
26f4e6b1be6ac0ace49b208b871e05370db83241
[ "MIT" ]
null
null
null
# testrepo Let's test this
9
15
0.740741
eng_Latn
0.969736
f4ef61876f683f01038511830ab43d1cc708b9d9
1,247
md
Markdown
README.md
tkEzaki/gmm_hmm_comparison
ead91c7174a66f940dc122d56d94d80b3b1c1775
[ "MIT" ]
null
null
null
README.md
tkEzaki/gmm_hmm_comparison
ead91c7174a66f940dc122d56d94d80b3b1c1775
[ "MIT" ]
null
null
null
README.md
tkEzaki/gmm_hmm_comparison
ead91c7174a66f940dc122d56d94d80b3b1c1775
[ "MIT" ]
null
null
null
# Description This python package is a supplementary material to Ezaki et al., arXiv:2001.08369. This package contains three modules: (1) "synthetic_time_series.py" (2) "estimate_models.py" (3) "state_time_series_statistics.py" and sample data ("sample_cov_1.txt", "sample_cov_2.txt", "sample_mean_1_2.txt", "trans.txt") used for defining probability distributions of the hidden state in "synthetic_time_series.py". (1) "synthetic_time_series.py" contains functions to generate the three types of synthetic time series described in Ezaki et al., arXiv:2001.08369. (2) "estimate_models.py" contains functions to estimate GMM and HMM for given time series data. (3) "state_time_series_statistics.py" contains functions to compute statistics (i.e., frequency, switching rate, and dwell time of hidden states) from a time series of the state. For usage of each function, see the sample code in each module. # Requirements This package assumes python 3.6 or later. This package requires the following packages: - NumPy (https://numpy.org/) - scikit-learn (https://scikit-learn.org/stable/) - hmmlearn (https://hmmlearn.readthedocs.io/en/latest/) # License Copyright (c) 2020 Takahiro Ezaki Released under the MIT license
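The three statistics named for module (3) are simple to compute from a sequence of state labels. As a hedged illustration (the function name here is mine and not necessarily what "state_time_series_statistics.py" exposes), a self-contained sketch:

```python
from collections import Counter
from itertools import groupby

def state_statistics(states):
    """Compute, for a sequence of hidden-state labels:
    - frequency: fraction of time points spent in each state,
    - switching rate: fraction of consecutive pairs that change state,
    - mean dwell time: average run length of each state."""
    n = len(states)
    freq = {s: c / n for s, c in Counter(states).items()}
    # Count transitions where the state actually changes.
    switches = sum(a != b for a, b in zip(states, states[1:]))
    switching_rate = switches / (n - 1) if n > 1 else 0.0
    # groupby collapses the sequence into (state, run-length) runs.
    dwell_runs = {}
    for s, group in groupby(states):
        dwell_runs.setdefault(s, []).append(sum(1 for _ in group))
    mean_dwell = {s: sum(r) / len(r) for s, r in dwell_runs.items()}
    return freq, switching_rate, mean_dwell
```

For example, for the label sequence `[0, 0, 1, 1, 1, 0]` each state occupies half the samples, two of the five consecutive pairs switch state, and state 1 has a single dwell of length 3.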
40.225806
188
0.764234
eng_Latn
0.953488
f4f003dacca27955506a9643cc552ba697968c01
3,269
md
Markdown
docs/relational-databases/system-catalog-views/sys-external-data-sources-transact-sql.md
Misliplavo/sql-docs
ad7e4f44dab7c6caf32fd045d1546dc92cdf3b1e
[ "CC-BY-4.0", "MIT" ]
3
2019-06-06T07:50:33.000Z
2022-03-28T21:24:31.000Z
docs/relational-databases/system-catalog-views/sys-external-data-sources-transact-sql.md
Misliplavo/sql-docs
ad7e4f44dab7c6caf32fd045d1546dc92cdf3b1e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/system-catalog-views/sys-external-data-sources-transact-sql.md
Misliplavo/sql-docs
ad7e4f44dab7c6caf32fd045d1546dc92cdf3b1e
[ "CC-BY-4.0", "MIT" ]
1
2019-06-04T12:44:06.000Z
2019-06-04T12:44:06.000Z
--- description: "sys.external_data_sources (Transact-SQL)" title: "sys.external_data_sources (Transact-SQL) | Microsoft Docs" ms.custom: "" ms.date: "03/14/2017" ms.prod: sql ms.prod_service: "database-engine, sql-database, synapse-analytics, pdw" ms.reviewer: "" ms.technology: system-objects ms.topic: "reference" dev_langs: - "TSQL" ms.assetid: 1016db6e-9950-4ae2-a004-bd4171e27359 author: WilliamDAssafMSFT ms.author: wiassaf monikerRange: ">=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||>=sql-server-linux-2017||=azuresqldb-mi-current" --- # sys.external_data_sources (Transact-SQL) [!INCLUDE [sqlserver2016-asdb-asdbmi-asa-pdw](../../includes/applies-to-version/sqlserver2016-asdb-asdbmi-asa-pdw.md)] Contains a row for each external data source in the current database for [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], [!INCLUDE[ssSDS](../../includes/sssds-md.md)], and [!INCLUDE[ssSDW](../../includes/sssdw-md.md)]. Contains a row for each external data source on the server for [!INCLUDE[ssPDW](../../includes/sspdw-md.md)]. |Column Name|Data Type|Description|Range| |-----------------|---------------|-----------------|-----------| |data_source_id|**int**|Object ID for the external data source.|| |name|**sysname**|Name of the external data source.|| |location|**nvarchar(4000)**|The connection string, which includes the protocol, IP address, and port for the external data source.|| |type_desc|**nvarchar(255)**|Data source type displayed as a string.|HADOOP, RDBMS, SHARD_MAP_MANAGER, RemoteDataArchiveTypeExtDataSource| |type|**tinyint**|Data source type displayed as a number.|0 - HADOOP<br /><br /> 1 - RDBMS<br /><br /> 2 - SHARD_MAP_MANAGER<br /><br /> 3 - RemoteDataArchiveTypeExtDataSource| |resource_manager_location|**nvarchar(4000)**|For type HADOOP, the IP and port location of the Hadoop resource manager. 
This is used for submitting a job on a Hadoop data source.<br /><br /> NULL for other types of external data sources.|| |credential_id|**int**|The object ID of the database scoped credential used to connect to the external data source.|| |database_name|**sysname**|For type RDBMS, the name of the remote database. For type, SHARD_MAP_MANAGER, the name of the shard map manager database. NULL for other types of external data sources.|| |shard_map_name|**sysname**|For type SHARD_MAP_MANAGER, the name of the shard map. NULL for other types of external data sources.|| ## Permissions The visibility of the metadata in catalog views is limited to securables that a user either owns or on which the user has been granted some permission. For more information, see [Metadata Visibility Configuration](../../relational-databases/security/metadata-visibility-configuration.md). ## See Also [sys.external_file_formats &#40;Transact-SQL&#41;](../../relational-databases/system-catalog-views/sys-external-file-formats-transact-sql.md) [sys.external_tables &#40;Transact-SQL&#41;](../../relational-databases/system-catalog-views/sys-external-tables-transact-sql.md) [CREATE EXTERNAL DATA SOURCE &#40;Transact-SQL&#41;](../../t-sql/statements/create-external-data-source-transact-sql.md)
69.553191
292
0.719486
eng_Latn
0.501198
f4f011e47037947014e41116d9632c5143445010
431
md
Markdown
README.md
jewdore/laravel-tools
b2e9ab85675cd26dfcd51f663805c1bb3e56a454
[ "MIT" ]
1
2018-12-21T01:41:17.000Z
2018-12-21T01:41:17.000Z
README.md
jewdore/laravel-tools
b2e9ab85675cd26dfcd51f663805c1bb3e56a454
[ "MIT" ]
null
null
null
README.md
jewdore/laravel-tools
b2e9ab85675cd26dfcd51f663805c1bb3e56a454
[ "MIT" ]
1
2018-12-25T08:52:07.000Z
2018-12-25T08:52:07.000Z
# Laravel Tools Set A Laravel development tool set. The main goal is to avoid having to copy shared pieces from previous projects every time a new project is created, so you can build on what has already been accumulated and develop quickly. **Currently only tested against Laravel 5.7; other versions are not guaranteed to work** **This is experimental for now; contributions that pull existing wheels together are welcome** ## Install Require this package with composer. ```bash composer require wgqi1126/laravel-tools ``` ## Commands * `ltools:reset-db` Reset the database. Runs `migrate:reset` to drop all tables, then `migrate` to recreate them, and finally `db:seed` to generate data. ## TODOs * Integrate laravel-ide-helper * Add laravel-debugbar
15.962963
72
0.740139
eng_Latn
0.214665
f4f0cefcff86e545310d3c2f3abfae379209b001
1,261
md
Markdown
includes/virtual-machines-disks-incremental-snapshots-portal.md
tokawa-ms/azure-docs.ja-jp
c1db876db6cde4ac449e959e3bfe39f941660f8a
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/virtual-machines-disks-incremental-snapshots-portal.md
tokawa-ms/azure-docs.ja-jp
c1db876db6cde4ac449e959e3bfe39f941660f8a
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/virtual-machines-disks-incremental-snapshots-portal.md
tokawa-ms/azure-docs.ja-jp
c1db876db6cde4ac449e959e3bfe39f941660f8a
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: include file description: include file services: virtual-machines author: roygara ms.service: virtual-machines ms.topic: include ms.date: 04/02/2020 ms.author: rogarana ms.custom: include file ms.openlocfilehash: e207866b61d21334bc9923d0d784b900906b0045 ms.sourcegitcommit: fc23b4c625f0b26d14a5a6433e8b7b6fb42d868b ms.translationtype: HT ms.contentlocale: ja-JP ms.lasthandoff: 01/17/2021 ms.locfileid: "98539714" --- 1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to the disk you want to snapshot. 1. On the disk, select **Create snapshot**. :::image type="content" source="media/virtual-machines-disks-incremental-snapshots-portal/create-snapshot-button-incremental.png" alt-text="Screenshot. Disk blade, with the **+ Create snapshot** option you need to select highlighted."::: 1. Select the resource group you want to use and enter a name. 1. Select **Incremental** and select **Review + create**. :::image type="content" source="media/virtual-machines-disks-incremental-snapshots-portal/incremental-snapshot-create-snapshot-blade.png" alt-text="Screenshot. On the Create snapshot blade, enter a name, select Incremental, and create the snapshot."::: 1. Select **Create**. :::image type="content" source="media/virtual-machines-disks-incremental-snapshots-portal/create-incremental-snapshot-validation.png" alt-text="Screenshot. On the snapshot Validation page, review your selections and create the snapshot.":::
42.033333
218
0.776368
yue_Hant
0.202867
f4f0e5a80abaede88a4c7eb2982f6787c48e03bc
67
md
Markdown
Coding Assignment 4 & 5 - Proximity Matrix and page rank/readme.md
bereketeshete/Data-Mining
8dcafae420462e2c95bed41f2509676b0c79f998
[ "MIT" ]
null
null
null
Coding Assignment 4 & 5 - Proximity Matrix and page rank/readme.md
bereketeshete/Data-Mining
8dcafae420462e2c95bed41f2509676b0c79f998
[ "MIT" ]
null
null
null
Coding Assignment 4 & 5 - Proximity Matrix and page rank/readme.md
bereketeshete/Data-Mining
8dcafae420462e2c95bed41f2509676b0c79f998
[ "MIT" ]
null
null
null
Dataset: https://www.kaggle.com/datasets/Cornell-University/arxiv
22.333333
65
0.80597
yue_Hant
0.945624
f4f18a8365d6bea510e69280c5d2198cc18a1ac6
25,447
md
Markdown
_posts/java/2020-05-14-java-jdbc.md
universidadejava/universidadejava.github.io
8ddda7c3e1662794ce737e45cf200c8adc8773c0
[ "MIT" ]
null
null
null
_posts/java/2020-05-14-java-jdbc.md
universidadejava/universidadejava.github.io
8ddda7c3e1662794ce737e45cf200c8adc8773c0
[ "MIT" ]
null
null
null
_posts/java/2020-05-14-java-jdbc.md
universidadejava/universidadejava.github.io
8ddda7c3e1662794ce737e45cf200c8adc8773c0
[ "MIT" ]
null
null
null
--- layout: article title: "Conexão com bancos de dados usando JDBC" categories: java author: sakurai date: 2020-05-14 22:46:00 tags: [java, jdbc, mysql, banco de dados, crud] published: true excerpt: O banco de dados é onde persistimos (armazenamos) os dados que pertencem ao nosso sistema. comments: true image: teaser: teaser-java.png ads: false --- ## Conexão com bancos de dados - JDBC O banco de dados é onde persistimos (armazenamos) os dados que pertencem ao nosso sistema. A maioria dos bancos de dados comerciais hoje em dia é do tipo relacional e derivam de uma estrutura diferente da orientada a objetos. Para executarmos o manuseio de informações em um banco de dados, devemos fazer uso de sub-linguagens de banco de dados (Database Sub Language – DSL), voltadas para as operações do banco de dados. Geralmente, as sub-linguagens de dados são compostas da combinação de recursos para a definição de dados (Data Definition Language – DDL) e recursos específicos para a manipulação de dados (Data Manipulation Language – DML). A conhecida linguagem de consulta SQL (Structured Query Language) é uma destas linguagens que fornece suporte tanto a DDL como a DML. Para efetivarmos a conexão de um programa desenvolvido em Java com uma base de dados qualquer, a linguagem Java implementa um conceito de ponte, que implementa todas as funcionalidades que um banco de dados padrão deve nos fornecer. <figure> <a href="/images/2020-05-14-java-jdbc-01.png"><img src="/images/2020-05-14-java-jdbc-01.png" alt="Uso do JDBC na aplicação Java."></a> </figure> No desenho acima podemos notar o relacionamento direto da JDBC com o banco de dados, porém este relacionamento depende também da extensão desta ponte que é representada pela implementação JDBC escolhida. Esta implementação depende do banco de dados com o qual queremos nos comunicar e a ela damos o nome de **Driver**. Para gerenciar estes drivers de conexão, a linguagem Java possui um gerente de drivers chamado `java.sql.DriverManager`. 
O driver do MySQL pode ser adicionado através das Propriedades do Projeto, na aba Biblioteca utilize o botão Adicionar Biblioteca e selecione MySQL JDBC Driver. > caso você queria colocar o driver de outro banco de dados é preciso clicar no botão Adicionar JAR/Pasta e selecionar o arquivo .jar que representa o drive desejado (Obs: no site do fabricante é possível obter este driver). <figure> <a href="/images/2020-05-14-java-jdbc-02.png"><img src="/images/2020-05-14-java-jdbc-02.png" alt="Adicionar biblioteca no NetBeans."></a> </figure> <figure> <a href="/images/2020-05-14-java-jdbc-02.png"><img src="/images/2020-05-14-java-jdbc-03.png" alt="Adicionar driver do MySQL no NetBeans."></a> </figure> Para criarmos uma conexão com o banco de dados primeiramente é necessário informarmos qual driver de conexão o **DriverManager** deve utilizar. Feita essa disponibilização do arquivo de driver, na programação se faz necessário registrar o driver do banco de dados no sistema. Para isso, basta carregá-lo através do método `Class.forName()`. Esse método abre uma classe que se registra com o `DriverManager.getConnection()`. A partir da classe `DriverManager`, podemos utilizar um método chamado `getConnection`, com o qual poderemos nos conectar a uma determinada base de dados utilizando uma String padrão de conexão. {% highlight java %} package material.jdbc; import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; /** * Classe utilizada para conectar e desconectar do banco de dados. 
*/ public class Conexao { public static void main(String[] args) { Conexao conexao = new Conexao(); Connection conn = conexao.conectar(); conexao.desconectar(conn); } public Connection conectar() { Connection conn = null; try { Class.forName("com.mysql.jdbc.Driver"); conn = DriverManager.getConnection("jdbc:mysql://localhost/test", "root", "root"); System.out.println("Conectou no banco de dados."); } catch (SQLException ex) { System.out.println("Erro: Não conseguiu conectar no BD."); } catch (ClassNotFoundException ex) { System.out.println("Erro: Não encontrou o driver do BD."); } return conn; } public void desconectar(Connection conn) { try { if(conn != null && !conn.isClosed()) { conn.close(); System.out.println("Desconectou do banco de dados."); } } catch (SQLException ex) { System.out.println("Não conseguiu desconectar do BD."); } } } {% endhighlight %} Na linha 20 estamos carregando o driver do banco de dados MySQL `com.mysql.jdbc.Driver`. Na linha 21, utilizamos o método getConnection da classe `DriverManager` para criar uma conexão com o banco de dados armazenado em um objeto do tipo `java.sql.Connection`. O método `getConnection()` recebe três parâmetros: 1˚ url de conexão com o banco de dados, 2˚ usuário e 3˚ senha. Para fazer a conexão com o banco de dados precisamos tratar algumas exceções que podem ocorrer: Na linha 23, tratamos a exceção `java.sql.SQLException` que é lançada caso não consiga criar uma conexão com o Banco de Dados. Na linha 25, tratamos a exceção `java.lang.ClassNotFoundException` que é lançada caso não consiga carregar o driver do banco de dados. Depois de utilizar a conexão com o banco de dados é muito importante que liberemos esta conexão, ou seja, precisamos encerrar a conexão com o banco de dados. Na linha 32 temos o método desconectar que recebe uma Connection como parâmetro, dentro deste método estamos verificando se a conexão com o banco ainda não foi encerrada para depois encerra-la. 
Quando tentamos encerrar uma conexão com o banco também pode ocorrer alguma exceção para isso na linha 38 estamos tratando caso ocorra alguma `java.sql.SQLException`. Quando executamos esta classe, temos a seguinte saída no console: {% highlight java %} Conectou no banco de dados. Desconectou do banco de dados. {% endhighlight %} ### Consulta de dados Uma vez conectado, podemos então solicitar a execução de comandos SQL, para que tenhamos acesso a este tipo de chamada ao banco de dados é necessário criarmos um `java.sql.Statement`. O `Statement` é o objeto que servirá de instrumento para a execução de comandos SQL, porém é importante ressaltar que o intuito de uma consulta a um banco de dados é o de receber uma informação, logo, é fundamental que essa resposta seja armazenada em uma estrutura. Para isso podemos utilizar a classe `java.sql.ResultSet` que deve receber o retorno do método `executeQuery` que é responsável por executar a consulta. Exemplo: {% highlight java %} package material.jdbc; import java.sql.Connection; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; /** * Classe utilizada para demonstrar a consulta ao banco de dados. 
*/ public class Consulta { public static void main(String[] args) { Consulta consultaBD = new Consulta(); consultaBD.consulta(); } public void consulta() { Conexao conexao = new Conexao(); Connection conn = conexao.conectar(); try { String consulta = "SELECT * FROM PESSOA WHERE NOME like 'A%'"; Statement stm = conn.createStatement(); ResultSet resultado = stm.executeQuery(consulta); while(resultado.next()) { System.out.print(resultado.getLong("ID")); System.out.print(" - " + resultado.getString("NOME")); System.out.print(" - " + resultado.getInt("IDADE") + "\n"); } } catch (SQLException ex) { System.out.println("Não conseguiu consultar os dados de Pessoa."); } finally { conexao.desconectar(conn); } } } {% endhighlight %} Quando executamos essas classe, temos a seguinte saída no console: {% highlight java %} Conectou no banco de dados. 1 - Ana - 30 5 - Amanda - 23 Desconectou do banco de dados. {% endhighlight %} Note pelo exemplo anterior que o método responsável por executar a consulta na base de dados foi o `executeQuery` na linha 24. Observe também a forma como o `ResultSet` foi percorrido na linha 26. A classe `ResultSet` oferece o método `next()` para percorrer a resposta dada pelo banco de dados. Caso nossa consulta acima lista houvesse retornado 4 linhas, por padrão o `ResultSet` estaria com seu ponteiro posicionado na posição `-1`, logo, antes de ler a primeira linha de resposta de sua consulta é necessário a chamada a este método `next()` para que o ponteiro passe para a primeira posição e possa resgatar as informações desejadas. O método `next()` retorna `true`, caso ainda existam novas linhas na resposta e `false` quando não tem mais registros para percorrer. É importante ressaltar que apenas um único objeto `ResultSet` pode ser aberto por vez por `Statement` ao mesmo tempo. Para resgatar as informações desejadas, a classe `ResultSet` oferece métodos do tipo `get` para todos os tipos de dados primitivos, exceto `char`. 
Todos estes métodos devem receber como parâmetro o nome da coluna de resposta, assim como no exemplo anterior, ou o número correspondente a posição da coluna retornada. ### Manipulação de dados Podemos utilizar a classe Statement para guardar (persistir) ou atualizar as informações na base de dados, utilizando o método execute. {% highlight java %} package material.jdbc; import java.sql.Connection; import java.sql.SQLException; import java.sql.Statement; /** * Classe utilizada para demonstrar como adicionar ou atualizar dados * no banco de dados. */ public class Manipulacao { public static void main(String[] args) { Manipulacao manipulacao = new Manipulacao(); manipulacao.inserir(); manipulacao.atualizar(); } public void inserir() { Conexao conexao = new Conexao(); Connection conn = conexao.conectar(); try { String adicionar = "INSERT INTO PESSOA (NOME, IDADE) VALUES ('Rafael', 25)"; Statement stm = conn.createStatement(); stm.execute(adicionar); System.out.println("Adicionou a pessoa Rafael no BD."); } catch (SQLException ex) { System.out.println("Não conseguiu adicionar uma pessoa no BD."); } finally { conexao.desconectar(conn); } } public void atualizar() { Conexao conexao = new Conexao(); Connection conn = conexao.conectar(); try { String atualizar = "UPDATE PESSOA SET NOME = 'Cristiano' WHERE NOME = 'Rafael'"; Statement stm = conn.createStatement(); stm.executeUpdate(atualizar); System.out.println("Atualizou o nome de Rafael para Cristiano."); } catch (SQLException ex) { System.out.println("Não conseguiu atualizar uma pessoa no BD."); } finally { conexao.desconectar(conn); } } } {% endhighlight %} Na linha 24, pedimos para o `Statement` adicionar um novo registro na tabela `PESSOA` do banco de dados. Na linha 40, pedimos para o `Statement` atualizar um registro da tabela `PESSOA` do banco de dados. Note que para adicionar ou atualizar as informações no banco de dados, podemos utilizar o método `execute`. 
A classe `Statement` possui o método execute para adicionar ou atualizar um registro no banco de dados e o método `executeUpdate` para atualizar as informações no banco de dados, a diferença é que este método retorna um inteiro com a quantidade de registros que foram alterados. Após a conclusão das operações e leitura ou de manipulação de dados, é importante a chamada ao método `close()`, tanto da classe `Statement` como da classe `Connection`, para que a conexão com o banco de dados seja finalizada. ### Relação de algumas bases de dados {% highlight java %} | Banco | Driver | String de Conexão | | -------------- | ------------------------------- | --------------------------------------------------- | | Microsoft ODBC | sun.jdbc.odbc.JdbcOdbcDriver | jdbc:odbc:<<nome da base>> | | MySQL | com.mysql.jdbc.Driver | jdbc:mysql://<<ipDoBanco>>/<<baseDeDados>> | | Oracle | oracle.jdbc.driver.OracleDriver | jdbc:oracle:thin:@<<ipDoBanco>>:1521:<<nomeDaBase>> | {% endhighlight %} ### Exemplo de aplicação C.R.U.D. (Create, Read, Update, Delete) Neste exemplo vamos, passo a passo, criar uma aplicação completa de acesso a uma base de dados utilizando JDBC. Para esta aplicação, iremos criar um sistema de cadastro de veículos. Para tal, criamos a classe `Carro`: {% highlight java %} package material.jdbc.exemplo; /** * Classe utilizada para representar um carro. */ public class Carro { private String placa; private String modelo; private Double potencia; public String getModelo() { return modelo; } public void setModelo(String modelo) { this.modelo = modelo; } public String getPlaca() { return placa; } public void setPlaca(String placa) { this.placa = placa; } public Double getPotencia() { return potencia; } public void setPotencia(Double potencia) { this.potencia = potencia; } } {% endhighlight %} Agora que já temos modelado a nossa classe principal, devemos definir como nosso programa irá interagir com a base de dados. 
Para estabelecermos a conexão com a base de dados de uma maneira mais simples, faremos uso da classe `Conexao`: {% highlight java %} package material.jdbc.exemplo; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; /** * Classe utilizada para executar as ações sobre o banco de dados. */ public class Conexao { private Connection conn = null; private Statement stm = null; private ResultSet rs = null; private Connection conectar() { try { String usuario = "root"; String senha = "root"; String ipDoBanco = "localhost"; String nomeDoBanco = "carro"; String stringDeConexao = "jdbc:mysql://" + ipDoBanco + "/" + nomeDoBanco; Class.forName("com.mysql.jdbc.Driver"); conn = DriverManager.getConnection(stringDeConexao, usuario, senha); System.out.println("Conectou no banco de dados."); } catch (SQLException ex) { System.out.println("Erro: Não conseguiu conectar no BD."); } catch (ClassNotFoundException ex) { System.out.println("Erro: Não encontrou o driver do BD."); } return conn; } public ResultSet executarConsulta(String consulta) { conn = conectar(); try { stm = conn.createStatement(); rs = stm.executeQuery(consulta); } catch (SQLException ex) { System.out.println("Não conseguiu executar a consulta\n" + consulta); //Caso ocorra algum erro desconecta do banco de dados. 
desconectar(); } return rs; } public boolean executarDML(String dml) { boolean ok = false; conn = conectar(); try { stm = conn.createStatement(); stm.execute(dml); ok = true; } catch (SQLException ex) { System.out.println("Nao conseguiu executar o DML\n" + dml); } finally { desconectar(); } return ok; } public void desconectar() { fecharResultSet(this.rs); fecharStatement(this.stm); fecharConnection(this.conn); } public void fecharConnection(Connection conn) { try { if(conn != null && !conn.isClosed()) { conn.close(); System.out.println("Desconectou do banco de dados."); } } catch (SQLException ex) { System.out.println("Nao conseguiu desconectar do BD."); } } public void fecharStatement(Statement stm) { try { if(stm != null && !stm.isClosed()) { stm.close(); } } catch (SQLException ex) { System.out.println("Erro ao fechar o procedimento de consulta."); } } public void fecharResultSet(ResultSet resultado) { try { if(resultado != null && !resultado.isClosed()) { resultado.close(); } } catch (SQLException ex) { System.out.println("Erro ao fechar o resultado da consulta."); } } } {% endhighlight %} Utilizaremos um padrão de projeto chamado DAO (Data Acess Object). O DAO deve saber buscar os dados do banco e converter em objetos para ser usado pela sua aplicação. Semelhantemente, deve saber como pegar os objetos, converter em instruções SQL e mandar para o banco de dados. Desta forma conseguimos distinguir fortemente a modelagem do sistema da modelagem de dados e das regras de negócio. Geralmente, temos um DAO para cada objeto do domínio do sistema, ou seja, para nosso exemplo criaremos uma classe `CarroDAO` com as quatro operações básicas, definidas por seus métodos. {% highlight java %} package material.jdbc.exemplo; import java.sql.ResultSet; import java.sql.SQLException; /** * Classe utilizada para executar as operações no banco de dados, * que envolvem o Carro. 
*/ public class CarroDAO { public void incluir(Carro carro) { String incluir = "INSERT INTO CARRO VALUES ('" + carro.getPlaca() + "', '" + carro.getModelo() + "', " + carro.getPotencia() + ")"; Conexao conexao = new Conexao(); conexao.executarDML(incluir); } public Carro consultarPorPlaca(String placa) { Conexao conexao = new Conexao(); Carro carro = null; try { String consulta = "SELECT * FROM CARRO WHERE PLACA = '" + placa + "'"; ResultSet rs = conexao.executarConsulta(consulta); if(rs.next()) { carro = new Carro(); carro.setModelo(rs.getString("MODELO")); carro.setPlaca(rs.getString("PLACA")); carro.setPotencia(rs.getDouble("POTENCIA")); } } catch (SQLException ex) { System.out.println("Nao conseguiu consultar os dados do Carro."); } finally { conexao.desconectar(); } return carro; } public void alterarPorPlaca(Carro carro) { String update = "UPDATE CARRO SET MODELO = '" + carro.getModelo() + "', POTENCIA = " + carro.getPotencia() + " WHERE PLACA = '" + carro.getPlaca() + "'"; Conexao conexao = new Conexao(); conexao.executarDML(update); } public void excluir(Carro carro) { String delete = "DELETE FROM CARRO WHERE PLACA='" + carro.getPlaca() + "'"; Conexao conexao = new Conexao(); conexao.executarDML(delete); } } {% endhighlight %} > Para que a classe acima não apresente nenhum erro de compilação, é necessário que você acrescente a biblioteca (arquivo .jar) correspondente ao seu banco de dados. Para o nosso exemplo, devemos importar o arquivo correspondente ao banco de dados MySQL. Observe que nosso método `incluir()` apenas recebe o objeto `Carro` a ser inserido na base de dados e internamente faz toda a operação relacionada ao banco de dados. Desta forma, conseguimos isolar toda esta interação com a base de dados do restante do código, tornando-o mais simples de se realizar qualquer tipo de manutenção. 
O método `consultarPorPlaca` recebe apenas a placa de um carro (imagine que esta informação é uma chave da tabela e que não devemos ter mais de um carro com a mesma placa) e que retorna um objeto do tipo `Carro` com todos os seus atributos devidamente alimentados. O método `alterarPorPlaca()` recebe um objeto `Carro` e a partir de sua placa faz a atualização nos atributos `placa` e `potencia`. O método `excluir()` recebe o `Carro` como parâmetro e o apaga da base de dados, Para testarmos nosso sistema, crie um programa semelhante ao abaixo: {% highlight java %} package material.jdbc.exemplo; import java.util.Scanner; /** * Classe utilizada para testar o CRUD de Carro. */ public class TestarCarro { public static void main(String[] args) { CarroDAO carroDAO = new CarroDAO(); char opcao = ' '; do { Carro carro = null; opcao = menu(); switch(opcao) { case 'I': carro = coletarDados(); carroDAO.incluir(carro); break; case 'E': String placaExcluir = consultarPlaca(); carro = carroDAO.consultarPorPlaca(placaExcluir); carroDAO.excluir(carro); break; case 'A': carro = coletarDados(); carroDAO.alterarPorPlaca(carro); break; case 'C': String placaConsultar = consultarPlaca(); carro = carroDAO.consultarPorPlaca(placaConsultar); break; } mostrarDadosCarro(carro); } while(opcao != 'S'); } public static char menu() { Scanner s = new Scanner(System.in); char opcao = ' '; System.out.println("Escolha a sua opcao: "); System.out.println("\t(I)ncluir"); System.out.println("\t(E)xcluir"); System.out.println("\t(A)lterar"); System.out.println("\t(C)onsultar"); System.out.println("\t(S)air"); System.out.print("\nOpcao: "); opcao = s.nextLine().toUpperCase().charAt(0); return opcao; } public static String consultarPlaca() { Scanner s = new Scanner(System.in); System.out.print("Digite a placa do carro: "); return s.nextLine(); } public static Carro coletarDados() { Scanner s = new Scanner(System.in); Carro carro = new Carro(); System.out.print("Digite a placa do carro: "); 
carro.setPlaca(s.nextLine()); System.out.print("Digite o modelo do carro: "); carro.setModelo(s.nextLine()); System.out.print("Digite a potencia do carro: "); carro.setPotencia(s.nextDouble()); return carro; } public static void mostrarDadosCarro(Carro carro) { if(carro != null) { System.out.println("\n############### DADOS DO CARRO #################"); System.out.println("PLACA: " + carro.getPlaca()); System.out.println("MODELO: " + carro.getModelo()); System.out.println("POTENCIA DO MOTOR: " + carro.getPotencia()); System.out.println("############### DADOS DO CARRO #################\n"); } } } {% endhighlight %} Ao executarmos a classe `TestarCarro`, temos a seguinte saída no console: {% highlight java %} Escolha a sua opcao: (I)ncluir (E)xcluir (A)lterar (C)onsultar (S)air Opcao: {% endhighlight %} A console fica aguardando até que digitemos alguma opção, primeiro vamos criar um novo carro para isto vamos entrar com a opção `I`: {% highlight java %} Digite a placa do carro: abc-1234 Digite o modelo do carro: X3 Digite a potencia do carro: 450 Conectou no banco de dados. Desconectou do banco de dados. ############### DADOS DO CARRO ################# PLACA: abc-1234 MODELO: X3 POTENCIA DO MOTOR: 450.0 ############### DADOS DO CARRO ################# Escolha a sua opcao: (I)ncluir (E)xcluir (A)lterar (C)onsultar (S)air Opcao: {% endhighlight %} Se consultarmos todos os carros cadastrados no banco de dados aparecerá o carro que acabamos de criar: {% highlight sql %} mysql> select * from carro; +----------+--------+----------+ | placa | modelo | potencia | +----------+--------+----------+ | abc-1234 | X3 | 450.00 | +----------+--------+----------+ 1 row in set <0.05 sec> {% endhighlight %} Agora vamos utilizar a opção `C` para consultar um carro pela placa `abc-1234`: {% highlight java %} Opcao: C Digite a placa do carro: abc-1234 Conectou no banco de dados. Desconectou do banco de dados. 
############### DADOS DO CARRO ################# PLACA: abc-1234 MODELO: X3 POTENCIA DO MOTOR: 450.0 ############### DADOS DO CARRO ################# Escolha a sua opcao: (I)ncluir (E)xcluir (A)lterar (C)onsultar (S)air Opcao: {% endhighlight %} Agora vamos utilizar a opção `A` para alterar as informações de modelo e potencia: {% highlight java %} Opcao: A Digite a placa do carro: abc-1234 Digite o modelo do carro: X4 Digite a potencia do carro: 350 Conectou no banco de dados. Desconectou do banco de dados. ############### DADOS DO CARRO ################# PLACA: abc-1234 MODELO: X4 POTENCIA DO MOTOR: 350.0 ############### DADOS DO CARRO ################# Escolha a sua opcao: (I)ncluir (E)xcluir (A)lterar (C)onsultar (S)air Opcao: {% endhighlight %} Se consultarmos todos os carros cadastrados no banco de dados aparecerá o carro que acabamos de alterar: {% highlight sql %} mysql> select * from carro; +----------+--------+----------+ | placa | modelo | potencia | +----------+--------+----------+ | abc-1234 | X4 | 350.00 | +----------+--------+----------+ 1 row in set <0.00 sec> {% endhighlight %} Agora vamos utilizar a opção `E` para apagar o carro do banco de dados. {% highlight java %} Opcao: E Digite a placa do carro: abc-1234 Conectou no banco de dados. Desconectou do banco de dados. Conectou no banco de dados. Desconectou do banco de dados. 
############### DADOS DO CARRO ################# PLACA: abc-1234 MODELO: X4 POTENCIA DO MOTOR: 350.0 ############### DADOS DO CARRO ################# Escolha a sua opcao: (I)ncluir (E)xcluir (A)lterar (C)onsultar (S)air Opcao: {% endhighlight %} Se consultarmos todos os carros, não teremos nenhum registro no banco de dados: {% highlight sql %} mysql> select * from carro; Empty set <0.00 sec> {% endhighlight %} ### Conteúdos relacionados - [Criando um CRUD com JPA](http://www.universidadejava.com.br/javaee/jpa-exemplo-crud/) - [Leitura e escrita de arquivos em Java](http://www.universidadejava.com.br/java/java-leitura-arquivo/) - [Tratamento de exceções no Java](http://www.universidadejava.com.br/java/java-excecoes/) - [Conhecendo e previnindo a vulnerabilidade SQL Injection](http://www.universidadejava.com.br/outros/vulnerabilidade-sql-injection)
35.245152
575
0.687114
por_Latn
0.987388
f4f191238cad9f2f86080b1b47575a254c9d126c
1,185
md
Markdown
source/_drafts/mongo-rust.md
haoflynet/haofly.net
ef1dd48a2c4e541dbd23b1db9a2309ffd6eb90ed
[ "CC-BY-4.0" ]
null
null
null
source/_drafts/mongo-rust.md
haoflynet/haofly.net
ef1dd48a2c4e541dbd23b1db9a2309ffd6eb90ed
[ "CC-BY-4.0" ]
null
null
null
source/_drafts/mongo-rust.md
haoflynet/haofly.net
ef1dd48a2c4e541dbd23b1db9a2309ffd6eb90ed
[ "CC-BY-4.0" ]
null
null
null
- 这是MongoDb官方出的rust驱动[mongo-rust-driver](https://github.com/mongodb/mongo-rust-driver) ## 安装配置 ```rust use mongodb::{Client, Collection, Database}; let client = Client::with_uri_str("mongodb://localhost:27017")?; let database = client.database("mydb"); let collection: Collection = database.collection("col1"); ``` ## 增删改查 ```rust // 查询 // 单个查询,返回的是Result<Option<T>> let doc = collection.find_one(doc! {"_id": &id}, None).await; // nightly中可以map_err和ok_or_else直接抛出错误 let doc = collection.find_one(doc! {"_id": &id}, None).await.map_err(|e| {...})?.ok_or_else(|| Error::Notfound)?; // 批量查询,返回的是Result<Cursor<T>> let cursor = collection.find(doc! {"name": "abc"}, None).await?; for result in cursor { println!("title: {}", result?.title); } // 聚合查询 let pipeline = vec![ doc! { "$match": { "status": true } }, doc! { "$lookup": { "from": "users", "localField": "user_id", "foreignField": "_id", "as": "user" } } ]; collection.aggregate(pipeline, None).await?; // 创建 let docs = vec![ User { name: "abc".to_string() }, User { name: "def".to_string() } ]; collection.insert_many(docs, None).await?; // 更新 // 删除 ```
17.686567
113
0.618565
yue_Hant
0.56552
f4f1dbbf01ebb11747e8d4520c5c5ca334580c0c
1,182
md
Markdown
README.md
twinspica14/Meann
5cffb44279be84e0b3eda1ef919aaf48839f4d0a
[ "MIT" ]
null
null
null
README.md
twinspica14/Meann
5cffb44279be84e0b3eda1ef919aaf48839f4d0a
[ "MIT" ]
null
null
null
README.md
twinspica14/Meann
5cffb44279be84e0b3eda1ef919aaf48839f4d0a
[ "MIT" ]
null
null
null
#MEAN Stack for beginners #Demo #http://penyourstory.herokuapp.com npm install --save (this will install all libraries necessary for your app) Decide the structure of your app, or follow generators like MEAN.IO or MEANjs npm install nodemon --save (--save saves the version in current use in package.json) The steps can be found everywhere, but depending on the structure many different libraries have been generated for ease. For a first project always go to Firebase by Google, to get a basic understanding of mongodb and Nodejs. #starting mongodb locally is a challenge for first timers. First go to C:\Program Files\MongoDB\Server\3.2\bin then in shell or cmd, run mongod If a problem is faced, then mongod --dbpath C:\data --storageEngine=mmapv1 here C:\data\folder is where the db will exist. This will start the mongodb server. In another cmd again go to bin, and run mongo to open the mongo shell. This project will be updated as I keep updating in my free time. # If one gets "hotfix not found or later not installed" go to this site, it's not related to mongodb http://www.kriblog.com/bigdata/NoSQL/MongoDb/hotfix-kb2731284-or-later-update-is-not-installed-will-zero-out-data-files.html
39.4
124
0.782572
eng_Latn
0.986917
f4f1fe342d415b7a5a14d90db44895fd1b6ce843
2,439
md
Markdown
docs/standard/exceptions/using-user-filtered-exception-handlers.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/exceptions/using-user-filtered-exception-handlers.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/exceptions/using-user-filtered-exception-handlers.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
1
2021-01-06T09:36:01.000Z
2021-01-06T09:36:01.000Z
--- title: "Using User-Filtered Exception Handlers | Microsoft Docs" ms.custom: "" ms.date: "03/30/2017" ms.prod: ".net" ms.reviewer: "" ms.suite: "" ms.technology: dotnet-standard ms.tgt_pltfrm: "" ms.topic: "article" helpviewer_keywords: - "user-filtered exceptions" - "exceptions, user-filtered" ms.assetid: aa80d155-060d-41b4-a636-1ceb424afee8 caps.latest.revision: 10 author: "mairaw" ms.author: "mairaw" manager: "wpickett" --- # Using User-Filtered Exception Handlers Currently, Visual Basic supports user-filtered exceptions. User-filtered exception handlers catch and handle exceptions based on requirements you define for the exception. These handlers use the **Catch** statement with the **When** keyword. This technique is useful when a particular exception object corresponds to multiple errors. In this case, the object typically has a property that contains the specific error code associated with the error. You can use the error code property in the expression to select only the particular error you want to handle in that **Catch** clause. The following Visual Basic example illustrates the **Catch/When** statement. ``` Try 'Try statements. Catch When Err = VBErr_ClassLoadException 'Catch statements. End Try ``` The expression of the user-filtered clause is not restricted in any way. If an exception occurs during execution of the user-filtered expression, that exception is discarded and the filter expression is considered to have evaluated to false. In this case, the common language runtime continues the search for a handler for the current exception. ## Combining the Specific Exception and the User-Filtered Clauses A catch statement can contain both the specific exception and the user-filtered clauses. The runtime tests the specific exception first. If the specific exception succeeds, the runtime executes the user filter. The generic filter can contain a reference to the variable declared in the class filter. 
Note that the order of the two filter clauses cannot be reversed. The following Visual Basic example shows the specific exception `ClassLoadException` in the **Catch** statement as well as the user-filtered clause using the **When** keyword. ``` Try 'Try statements. Catch cle As ClassLoadException When cle.IsRecoverable() 'Catch statements. End Try ``` ## See Also [Exceptions](index.md)
47.823529
368
0.758508
eng_Latn
0.994512
f4f202db1db6c9befaba1db7ae802d787e584a7d
3,818
md
Markdown
content/blog/algorithm/458.md
guanpengchn/LeetCodeDrawing
661abf1cd472ecc1fb7e999d0db08ae75822c8fd
[ "MIT" ]
138
2019-06-10T17:34:57.000Z
2020-01-21T06:42:44.000Z
content/blog/algorithm/458.md
guanpengchn/nice-notes
661abf1cd472ecc1fb7e999d0db08ae75822c8fd
[ "MIT" ]
23
2019-05-17T10:37:18.000Z
2019-06-03T15:17:44.000Z
content/blog/algorithm/458.md
guanpengchn/LeetCodeDrawing
661abf1cd472ecc1fb7e999d0db08ae75822c8fd
[ "MIT" ]
35
2019-06-19T08:58:11.000Z
2020-01-06T08:21:46.000Z
--- title: 458. 可怜的小猪 date: 2019-09-14 tag: 画解算法 cover: https://imgkr.cn-bj.ufileos.com/e37c83ea-15bc-4704-874c-1548678db700.png --- ## 题目链接 https://leetcode-cn.com/problems/poor-pigs/ ## 题目描述 有 1000 只水桶,其中有且只有一桶装的含有毒药,其余装的都是水。它们从外观看起来都一样。如果小猪喝了毒药,它会在 15 分钟内死去。 问题来了,如果需要你在一小时内,弄清楚哪只水桶含有毒药,你最少需要多少只猪? 回答这个问题,并为下列的进阶问题编写一个通用算法。 进阶: 假设有 `n` 只水桶,猪饮水中毒后会在 `m` 分钟内死亡,你需要多少猪`(x)`就能在 `p` 分钟内找出 “有毒” 水桶?这 `n` 只水桶里有且仅有一只有毒的桶。 提示: 1. 可以允许小猪同时饮用任意数量的桶中的水,并且该过程不需要时间。 2. 小猪喝完水后,必须有 m 分钟的冷却时间。在这段时间里,只允许观察,而不允许继续饮水。 3. 任何给定的桶都可以无限次采样(无限数量的猪)。 ## 解题方案 ### 思路 **标签:数学** 这道题初看的时候,很多人会纠结:到底需要多少只小猪,而每只小猪又应该具体如何喝水才能判断出哪只水桶有毒药? 这道题最开始不要去关注细节,去想到底应该怎么喂水。而是应该先思考在考察哪方面的问题,数组、链表、二叉树还是数学?那么仔细思考就能得出结论,本质上在考察数学中的**进制**问题。 举例说明: - 假设:总时间 minutesToTest = 60,死亡时间 minutesToDie = 15,pow(x, y) 表示 x 的 y 次方,ceil(x)表示 x 向上取整 - 当前有1只小猪,最多可以喝 times = minutesToTest / minutesToDie = 4 次水 - 最多可以喝4次水,能够携带 base = times + 1 = 5 个的信息量,也就是(便于理解从0开始): - (1) 喝0号死去,0号桶水有毒 - (2) 喝1号死去,1号桶水有毒 - (3) 喝2号死去,2号桶水有毒 - (4) 喝3号死去,3号桶水有毒 - (5) 喝了上述所有水依然活蹦乱跳,4号桶水有毒 - 结论是1只小猪最多能够验证5桶水中哪只水桶含有毒药,当 buckets ≤ 5 时,answer = 1 - 那么2只小猪可以验证的范围最多到多少呢?我们把每只小猪携带的信息量看成是**base进制数**,2只小猪的信息量就是 pow(base, 2) = pow(5, 2) = 25,所以当 5 ≤ buckets ≤ 25时,anwser = 2 - 那么可以得到公式关系:pow(base, ans) ≥ buckets,取对数后即为:ans ≥ log(buckets) / log(base),因为ans为整数,所以 ans = ceil(log(buckets) / log(base)) **时间复杂度:O(1)** 看到这里我们再去关注细节,2只小猪到底怎么喂水,在上述假设下,能够最多验证25桶水呢?请看下方图画解答: ![458](https://imgkr.cn-bj.ufileos.com/e310cc25-d6c4-4eb9-95f8-dccb74c686f7.gif) ![frame_00001](https://imgkr.cn-bj.ufileos.com/244723b4-9588-472b-a780-357cae547b66.png) ![frame_00002](https://imgkr.cn-bj.ufileos.com/583c8ab2-96ba-4b1c-9a91-01d54c44eed3.png) ![frame_00003](https://imgkr.cn-bj.ufileos.com/206065ee-89c6-4550-b4b0-a3dbf821971c.png) ![frame_00004](https://imgkr.cn-bj.ufileos.com/fe37ebf6-707b-4e6f-90f4-b8befe55c55c.png) ![frame_00005](https://imgkr.cn-bj.ufileos.com/dae4deaf-465a-46f5-b5ac-c844f8397e77.png) 
![frame_00006](https://imgkr.cn-bj.ufileos.com/f994ee23-1499-4db7-a2ad-244945fb7eb1.png) ![frame_00007](https://imgkr.cn-bj.ufileos.com/aed3de14-b6e2-4ba9-8473-a2f9843afefc.png) ![frame_00008](https://imgkr.cn-bj.ufileos.com/ca37fcbc-fc06-4191-a263-c375a4eb67b8.png) ![frame_00009](https://imgkr.cn-bj.ufileos.com/99201e55-738e-4470-a9d3-a65f8ff1ccc0.png) ![frame_00010](https://imgkr.cn-bj.ufileos.com/19fd6539-d815-4e81-8554-a048245734d7.png) ![frame_00011](https://imgkr.cn-bj.ufileos.com/b92c3b5e-762c-47bd-aac3-aae53c636b6e.png) ![frame_00012](https://imgkr.cn-bj.ufileos.com/08e59a75-8a79-4ac7-b198-4c6b66c84e37.png) ![frame_00013](https://imgkr.cn-bj.ufileos.com/e37c83ea-15bc-4704-874c-1548678db700.png) ### 代码 - Java版本 ```Java class Solution { public int poorPigs(int buckets, int minutesToDie, int minutesToTest) { int times = minutesToTest / minutesToDie; int base = times + 1; // base ^ ans >= buckets // ans >= log(buckets) / log(base) double temp = Math.log(buckets) / Math.log(base); int ans = (int)Math.ceil(temp); return ans; } } ``` - JavaScript版本 ```JavaScript /** * @param {number} buckets * @param {number} minutesToDie * @param {number} minutesToTest * @return {number} */ var poorPigs = function(buckets, minutesToDie, minutesToTest) { const times = minutesToTest / minutesToDie; const base = times + 1; // base ^ ans >= buckets // ans >= log(buckets) / log(base) const temp = Math.log(buckets) / Math.log(base); const ans = Math.ceil(temp); return ans; }; ``` <span style="display:block;text-align:center;">后台回复「<strong>算法</strong>」,加入天天算法群</span> <span style="display:block;text-align:center;">觉得算法直击灵魂,欢迎点击<strong>在看</strong>和<strong>转发</strong></span> ![](https://gitee.com/guanpengchn/picture/raw/master/2020-9-11/1599805100027-image.png)
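As a quick sanity check of the closed-form answer above, here is a standalone sketch (my addition, not part of the original solution) that computes the same result with integer arithmetic instead of logarithms:

```javascript
// Smallest x such that (times + 1)^x >= buckets,
// where times = floor(minutesToTest / minutesToDie).
function poorPigs(buckets, minutesToDie, minutesToTest) {
  const base = Math.floor(minutesToTest / minutesToDie) + 1;
  let ans = 0;
  let reach = 1; // number of buckets that `ans` pigs can distinguish
  while (reach < buckets) {
    reach *= base;
    ans += 1;
  }
  return ans;
}

console.log(poorPigs(1000, 15, 60)); // 5: five pigs cover 5^5 = 3125 >= 1000 buckets
```

Counting up in integers gives the same value as ceil(log(buckets) / log(base)) while sidestepping floating-point rounding at exact powers of the base (e.g. buckets = 125 with base = 5, where the ratio of logarithms may not be exactly 3 in floating point).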
32.632479
124
0.716867
yue_Hant
0.414614
f4f305f202f3356807ca85e48bfa4324717dea7a
184
md
Markdown
docs/articles/configs/encoding.md
dvorn/BenchmarkDotNet
57005f05e7464ff5151ccdb81caef037dca6abac
[ "MIT" ]
null
null
null
docs/articles/configs/encoding.md
dvorn/BenchmarkDotNet
57005f05e7464ff5151ccdb81caef037dca6abac
[ "MIT" ]
null
null
null
docs/articles/configs/encoding.md
dvorn/BenchmarkDotNet
57005f05e7464ff5151ccdb81caef037dca6abac
[ "MIT" ]
null
null
null
--- uid: docs.encoding name: Encoding --- # Encoding --- [!include[IntroEncoding](../samples/IntroEncoding.md)] The link to this sample: @BenchmarkDotNet.Samples.IntroEncoding ---
13.142857
63
0.711957
yue_Hant
0.464891
f4f316f149f40aa087843def0e6ab1133444954d
2,022
md
Markdown
wiki/translations/fr/Mesh_RemoveCompByHand.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
wiki/translations/fr/Mesh_RemoveCompByHand.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
wiki/translations/fr/Mesh_RemoveCompByHand.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
--- - GuiCommand:/fr Name:Mesh RemoveCompByHand Name/fr:Mesh Suppression manuelle de composants MenuLocation:Maillages → Supprimer manuellement des composants... Workbenches:[Mesh](Mesh_Workbench/fr.md) SeeAlso:[Mesh Supprimer des composants](Mesh_RemoveComponents/fr.md), [Arch Séparer un objet Mesh](Arch_SplitMesh/fr.md) --- # Mesh RemoveCompByHand/fr ## Description La commande **Mesh Suppression manuelle de composants** supprime des composants d\'objets maillés. ## Utilisation 1. Un composant fait référence à un groupe complet de faces connectées. Un objet maillé contient généralement un seul composant. Mais, par exemple après avoir utilisé la commande [Mesh Fusionner](Mesh_Merge/fr.md), un objet maillé peut contenir plusieurs composants. 2. La commande utilise la couleur rouge pour marquer les composants sélectionnés. Pour les voir correctement: - Le {{PropertyView/fr|Display Mode}} des objets maillés doit montrer des faces. Si nécessaire, utilisez la commande [Std Style de représentation](Std_DrawStyle/fr.md) pour remplacer cette propriété. - La {{PropertyView/fr|Shape Color}} des objets maillés ne doit pas être rouge. 3. Sélectionnez l\'option **Maillages → <img src="images/Mesh_RemoveCompByHand.svg" width=16px> Supprimer manuellement des composants... ** dans le menu. 4. Le curseur se transforme en icône de main. 5. Sélectionnez les composants que vous souhaitez supprimer dans la [vue 3D](3D_view/fr.md). 6. Sélectionnez éventuellement **Effacer les faces sélectionnées** dans le menu contextuel de la vue 3D pour désélectionner tous les composants. 7. Sélectionnez **Supprimer les faces sélectionnées** dans le menu contextuel de la vue 3D pour supprimer les composants sélectionnés. 8. Sélectionnez **Quitter le mode de suppression** dans le menu contextuel de la vue 3D pour terminer la commande. {{Mesh Tools navi }} --- ![](images/Right_arrow.png) [documentation index](../README.md) > [Mesh](Mesh_Workbench.md) > Mesh RemoveCompByHand/fr
49.317073
267
0.772502
fra_Latn
0.97793
f4f377da5640d40103eb0a664c7d102dfaa324fd
2,631
md
Markdown
src/bg/2021-04/12/04.md
ArthurMota9/sabbath-school-lessons
a51fcdecb05cfbbfdacf201ccba9ba768b245885
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/bg/2021-04/12/04.md
ArthurMota9/sabbath-school-lessons
a51fcdecb05cfbbfdacf201ccba9ba768b245885
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/bg/2021-04/12/04.md
ArthurMota9/sabbath-school-lessons
a51fcdecb05cfbbfdacf201ccba9ba768b245885
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
--- title: 'Проклятието на дървото' date: 14/12/2021 --- `Прочетете Галатяни 3:1-4 . Какво казва тук Павел, което важи и за нас днес, и как използва Второзаконие 27:26 и Второзаконие 21:22,23 , за да се аргументира?` За жалост, много християни използват това послание като вид оправдание, за да не изпълняват закона, Десетте заповеди. Разбира се, същият аргумент се използва като причина да не се спазва четвъртата заповед, сякаш съблюдаването на тази една заповед, за разлика от останалите девет, по някакъв начин е израз на законничеството, за което Павел говори тук. И все пак Павел не говори против закона, а и определено нищо в този пасаж не би могло да оправдае нарушаването на съботната заповед. Разковничето се крие в Галатяни 3:10 , където той пише, че „всички, които се облягат на дела, изисквани от закона, са под клетва“ и след това цитира Второзаконие 27:26 . Проблемът не е в подчинението на закона, а в „облягането на закона“ – нещо много трудно, ако не и невъзможно, за паднали същества като нас. Аргументът на Павел е, че ние не се спасяваме чрез делата на закона, а чрез смъртта на Христос за нас, която ни се приписва чрез вяра. Тук той подчертава какво е направил Спасителят за нас на Кръста. И за да затвърди тази позиция, апостолът отново се позовава на „Второзаконие“, този път на Второзаконие 21:23 . Също като Исус, Павел казва „писано е“, разкривайки авторитета на Стария Завет, и сега цитира текст за човек, който е извършил углавно престъпление и е бил екзекутиран заради това, а после е бил провесен на едно дърво може би като предупреждение за останалите. Павел обаче използва това като символ на Христовата заместническа смърт за нас: Христос стана „проклет за нас“, защото пое върху Себе Си проклятието на закона, което е смърт. Всички хора би трябвало да понесат смъртта, понеже всички сме нарушители на закона. 
Добрата новина на евангелието обаче е, че проклятието, което трябваше да бъде наше, стана Негово на кръста, „за да приемем обещания Дух чрез вяра“ ( Галатяни 3:14 ). Или, както казва Елън Уайт: „Никой, освен Христос, не можеше да изкупи падналия човек от проклятието на закона и да го доведе до хармония с Небето. Той пожела да поеме вината и срама на греха – грях така оскърбителен за святия Бог, че трябваше да отдели Отец от Неговия Син“ (Уайт, Е. Патриарси и пророци. София: Нов живот, 2019, с. 30). `Помислете какво щяхте да преживеете, ако трябваше да получите съответното наказание за греховете, които сте извършили. Обаче, тъй като Христос е понесъл наказанието за вашите грехове, за да не ви се наложи да го преживявате вие, как следва да отговорите на Неговата жертва?`
146.166667
572
0.786013
bul_Cyrl
0.999992
f4f3f63864ea3f3effc55dec0a15879834184809
791
md
Markdown
windows.ui.xaml.controls/semanticzoomviewchangedeventargs_issourcezoomedinview.md
angelazhangmsft/winrt-api
1f92027f2462911960d6be9333b7a86f7b9bf457
[ "CC-BY-4.0", "MIT" ]
199
2017-02-09T23:13:51.000Z
2022-03-28T15:56:12.000Z
windows.ui.xaml.controls/semanticzoomviewchangedeventargs_issourcezoomedinview.md
angelazhangmsft/winrt-api
1f92027f2462911960d6be9333b7a86f7b9bf457
[ "CC-BY-4.0", "MIT" ]
2,093
2017-02-09T21:52:45.000Z
2022-03-25T22:23:18.000Z
windows.ui.xaml.controls/semanticzoomviewchangedeventargs_issourcezoomedinview.md
angelazhangmsft/winrt-api
1f92027f2462911960d6be9333b7a86f7b9bf457
[ "CC-BY-4.0", "MIT" ]
620
2017-02-08T19:19:44.000Z
2022-03-29T11:38:25.000Z
--- -api-id: P:Windows.UI.Xaml.Controls.SemanticZoomViewChangedEventArgs.IsSourceZoomedInView -api-type: winrt property --- <!-- Property syntax public bool IsSourceZoomedInView { get; set; } --> # Windows.UI.Xaml.Controls.SemanticZoomViewChangedEventArgs.IsSourceZoomedInView ## -description Gets or sets a value that indicates whether the starting view is the [ZoomedInView](semanticzoom_zoomedinview.md). Equivalent WinUI property: [Microsoft.UI.Xaml.Controls.SemanticZoomViewChangedEventArgs.IsSourceZoomedInView](/windows/winui/api/microsoft.ui.xaml.controls.semanticzoomviewchangedeventargs.issourcezoomedinview). ## -property-value **true** if the starting view is the [ZoomedInView](semanticzoom_zoomedinview.md); otherwise, **false**. ## -remarks ## -examples ## -see-also
31.64
211
0.79646
yue_Hant
0.407089
f4f4938f8ae131e01915ab801483cdc88a71aca1
6
md
Markdown
README.md
bejjo-mingkem/1
99bb25b2d22f44989dca2571e9f7ee17ba7c9c69
[ "Apache-2.0" ]
null
null
null
README.md
bejjo-mingkem/1
99bb25b2d22f44989dca2571e9f7ee17ba7c9c69
[ "Apache-2.0" ]
1
2019-09-10T05:45:33.000Z
2019-09-10T05:45:33.000Z
README.md
bejjo-mingkem/1
99bb25b2d22f44989dca2571e9f7ee17ba7c9c69
[ "Apache-2.0" ]
null
null
null
# 1 ?
2
3
0.166667
bod_Tibt
0.462233
f4f4bea6643f89eb7d996fb5f161e4542a31b2a4
856
md
Markdown
notes/How_to_Setup_a_Wallet.md
theyeshman/crypto-knowledge-map
c25d4588a15138fc23923526588e493f3a2b1128
[ "MIT" ]
null
null
null
notes/How_to_Setup_a_Wallet.md
theyeshman/crypto-knowledge-map
c25d4588a15138fc23923526588e493f3a2b1128
[ "MIT" ]
null
null
null
notes/How_to_Setup_a_Wallet.md
theyeshman/crypto-knowledge-map
c25d4588a15138fc23923526588e493f3a2b1128
[ "MIT" ]
null
null
null
# Prerequisites [[What_are_Seeds_and_Public_Private_Keys]] [[What_is_a_Wallet]] [[What_is_Bitcoin]] [[What_is_an_Exchange]] # Subgraph ```mermaid graph LR 1["What are Seeds and Public Private Keys"]-->0{"How to Setup a Wallet"} 2["What is a Wallet"]-->0{"How to Setup a Wallet"} 3["What is Bitcoin"]-->0{"How to Setup a Wallet"} 4["What is an Exchange"]-->0{"How to Setup a Wallet"} ``` # Description To set up a wallet, you will need to create an account with a cryptocurrency exchange. Once you have created an account, you will need to deposit funds into it. You can then use the funds to purchase a cryptocurrency. Once you have purchased a cryptocurrency, you will need to download a wallet to store it. You can then use the wallet to send and receive payments. # Links Links to other educational resources here:
26.75
387
0.747664
eng_Latn
0.998676
f4f506721434685c197e6ede3fb3ded2f23df924
20
md
Markdown
README.md
ilakshan/bb-black
6918d403ddb55fffd2f47110a16b621f380c442e
[ "BSL-1.0" ]
1
2021-05-23T20:14:40.000Z
2021-05-23T20:14:40.000Z
README.md
ilakshan/bb-black
6918d403ddb55fffd2f47110a16b621f380c442e
[ "BSL-1.0" ]
null
null
null
README.md
ilakshan/bb-black
6918d403ddb55fffd2f47110a16b621f380c442e
[ "BSL-1.0" ]
null
null
null
# bb-black Hack mod
6.666667
10
0.7
dan_Latn
0.326801
f4f537218f78099cd42433f2c9c6a7aea5cbd267
1,756
md
Markdown
browsers/edge/includes/configure-favorites-bar-include.md
tmlyon/windows-itpro-docs
c24c8846aac17204cb841cb73d31e89572fd06c6
[ "CC-BY-4.0", "MIT" ]
1
2018-10-22T13:44:34.000Z
2018-10-22T13:44:34.000Z
browsers/edge/includes/configure-favorites-bar-include.md
tmlyon/windows-itpro-docs
c24c8846aac17204cb841cb73d31e89572fd06c6
[ "CC-BY-4.0", "MIT" ]
null
null
null
browsers/edge/includes/configure-favorites-bar-include.md
tmlyon/windows-itpro-docs
c24c8846aac17204cb841cb73d31e89572fd06c6
[ "CC-BY-4.0", "MIT" ]
1
2021-05-26T07:31:01.000Z
2021-05-26T07:31:01.000Z
<!-- ##Configure Favorites Bar --> >*Supported versions: Microsoft Edge on Windows 10, new major release* >*Default setting: Not configured (Hidden)* [!INCLUDE [allow-favorites-bar-shortdesc](../shortdesc/configure-favorites-bar-shortdesc.md)] ### Supported values |Group Policy |MDM |Registry |Description | |---|:---:|:---:|---| |Not configured **(default)** |Blank |Blank |Hide the favorites bar but show it on the Start and New tab pages. The favorites bar toggle, in Settings, is set to Off but enabled allowing users to make changes. | |Disabled |0 |0 |Hide the favorites bar on all pages. Also, the favorites bar toggle, in Settings, is set to Off and disabled preventing users from making changes. Microsoft Edge also hides the “show bar/hide bar” option in the context menu. | |Enabled |1 |1 |Show the favorites bar on all pages. Also, the favorites bar toggle, in Settings, is set to On and disabled preventing users from making changes. Microsoft Edge also hides the “show bar/hide bar” option in the context menu. | --- ### ADMX info and settings #### ADMX info - **GP English name:** Configure Favorites Bar - **GP name:** ConfigureFavoritesBar - **GP path:** Windows Components/Microsoft Edge - **GP ADMX file name:** MicrosoftEdge.admx #### MDM settings - **MDM name:** Browser/[ConfigureFavoritesBar](https://docs.microsoft.com/en-us/windows/client-management/mdm/policy-csp-browser#browser-configurefavoritesbar) - **Supported devices:** Desktop and Mobile - **URI full path:** ./Vendor/MSFT/Policy/Config/Browser/ConfigureFavoritesBar - **Data type:** Integer #### Registry settings - **Path:** HKLM\Software\Policies\Microsoft\MicrosoftEdge\Main - **Value name:** ConfigureFavoritesBar - **Value type:** REG_DWORD <hr>
47.459459
244
0.731777
eng_Latn
0.759346
f4f5ba5d012c754ea80cbf0fbe246ff131a4f71c
1,072
md
Markdown
themes/lightum/README.md
liangxinhui/hexo_liangxh
b560caec9af7bad04e884e82130ef657d1a2481a
[ "MIT" ]
null
null
null
themes/lightum/README.md
liangxinhui/hexo_liangxh
b560caec9af7bad04e884e82130ef657d1a2481a
[ "MIT" ]
null
null
null
themes/lightum/README.md
liangxinhui/hexo_liangxh
b560caec9af7bad04e884e82130ef657d1a2481a
[ "MIT" ]
null
null
null
# Lightum This is the theme I currently use for [Hexo](http://hexo.io/). It is based on [Light](https://github.com/hexojs/hexo-theme-light). See my blog as a [Demo](http://zipperary.com/). For clarity, I leave all my settings unchanged, so feel free to modify the theme as you need. ## Install Execute the following command and modify `theme` in `_config.yml` to `lightum`. ``` git clone https://github.com/zippera/lightum.git themes/lightum ``` ## Update Execute the following command to update Lightum. ``` cd themes/lightum git pull ``` ## Customization - **About**: [FYI](http://zipperary.com/2013/05/30/hexo-guide-4/). Edit this file to add your information. - **RSS**: [FYI](http://zipperary.com/2013/06/02/hexo-guide-5/) - **Intro**: Go to `layout/_widget/intro.ejs` and add your information. - **Weibo**: Go to `layout/_widget/weibo.ejs` and add your Weibo configuration. [FYI](http://zipperary.com/2013/05/30/hexo-guide-4/) - **Fork me on Github**: Go to `layout/layout.ejs` and change the GitHub username to yours.
34.580645
179
0.689366
eng_Latn
0.863932
f4f5f0843a2b0492fd1ee2d7150a78285252be95
4,296
md
Markdown
content/en/tracing/guide/setting_up_APM_with_cpp.md
fhsgoncalves/documentation
27288118be53b7811751bf32ff3b78fa5a24e6bd
[ "BSD-3-Clause" ]
219
2015-04-07T18:24:09.000Z
2022-03-17T09:07:54.000Z
content/en/tracing/guide/setting_up_APM_with_cpp.md
fhsgoncalves/documentation
27288118be53b7811751bf32ff3b78fa5a24e6bd
[ "BSD-3-Clause" ]
2,304
2015-01-07T21:40:50.000Z
2022-03-31T18:54:17.000Z
content/en/tracing/guide/setting_up_APM_with_cpp.md
fhsgoncalves/documentation
27288118be53b7811751bf32ff3b78fa5a24e6bd
[ "BSD-3-Clause" ]
593
2015-01-09T19:06:59.000Z
2022-03-31T09:48:46.000Z
--- title: Setting Up APM with C++ kind: guide further_reading: - link: "/tracing/setup/cpp/" tag: "Documentation" text: "Learn more about tracing applications with C++" --- ## Overview This guide expands on the [C++ APM docs][1] to provide step-by-step instructions on how to set up a simple C++ app with APM on your VM for troubleshooting. ## Setting up your box ### Basic environment First, spin up a fresh `ubuntu/xenial64` Vagrant box and `ssh` into it with: ```bash vagrant init ubuntu/xenial64 vagrant up vagrant ssh ``` Next, install the agent with the [instructions in the UI][2]. ### Prepping for C++ Install `g++` and `cmake` with: ```shell sudo apt-get update sudo apt-get -y install g++ cmake ``` Run these two lines together to fetch the latest `dd-opentracing-cpp` release version: ```shell get_latest_release() { wget -qO- "https://api.github.com/repos/$1/releases/latest" | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/'; } DD_OPENTRACING_CPP_VERSION="$(get_latest_release DataDog/dd-opentracing-cpp)" ``` If you get a rate-limit message from GitHub, wait a few minutes and run the command again.
When the command completes, confirm that it succeeded by echoing the release version: ```shell echo $DD_OPENTRACING_CPP_VERSION ``` Then, download and install the `dd-opentracing-cpp` library with: ```shell wget https://github.com/DataDog/dd-opentracing-cpp/archive/${DD_OPENTRACING_CPP_VERSION}.tar.gz -O dd-opentracing-cpp.tar.gz ``` After downloading the `tar` file, create a `.build` directory for the library: ```shell mkdir -p dd-opentracing-cpp/.build ``` Then unzip it: ```bash tar zxvf dd-opentracing-cpp.tar.gz -C ./dd-opentracing-cpp/ --strip-components=1 ``` You should see a list of the library contents in your console: ```shell dd-opentracing-cpp-1.0.1/test/integration/nginx/nginx.conf dd-opentracing-cpp-1.0.1/test/integration/nginx/nginx_integration_test.sh ``` Next, go into your `.build` directory: ```shell cd dd-opentracing-cpp/.build ``` Finally, install the dependencies, then build and install the library with: ```bash sudo ../scripts/install_dependencies.sh cmake .. make sudo make install ``` ## Building a simple app Create a new file called `tracer_example.cpp` and populate it with the code below: ```cpp #include <datadog/opentracing.h> #include <iostream> #include <string> int main(int argc, char* argv[]) { datadog::opentracing::TracerOptions tracer_options{"localhost", 8126, "compiled-in example"}; auto tracer = datadog::opentracing::makeTracer(tracer_options); // Create some spans. { auto span_a = tracer->StartSpan("A"); span_a->SetTag("tag", 123); auto span_b = tracer->StartSpan("B", {opentracing::ChildOf(&span_a->context())}); span_b->SetTag("tag", "value"); } tracer->Close(); return 0; } ``` This creates a tracer that generates two spans, a parent span `span_a` and a child span `span_b`, and tags them.
Then, link against `libdd_opentracing` and `libopentracing` with: ```shell g++ -std=c++14 -o tracer_example tracer_example.cpp -ldd_opentracing -lopentracing ``` Finally, run the app with: ```shell LD_LIBRARY_PATH=/usr/local/lib/ ./tracer_example ``` ## Sending traces Now that an app exists, you can start sending traces and see the Trace Agent in action. First, tail the Trace Agent log with: ```shell tail -f /var/log/datadog/trace-agent.log ``` Next, open a new tab and run the example a couple times: ```shell LD_LIBRARY_PATH=/usr/local/lib/ ./tracer_example ``` On the Trace Agent tab, you will see something similar to: ```text 2019-08-09 20:02:26 UTC | TRACE | INFO | (pkg/trace/info/stats.go:108 in LogStats) | [lang:cpp lang_version:201402 tracer_version:v1.0.1] -> traces received: 1, traces filtered: 0, traces amount: 363 bytes, events extracted: 0, events sampled: 0 ``` The service then shows up in your APM services page in Datadog. {{< img src="tracing/guide/setting_up_APM_with_cpp/apm_services_page.png" alt="APM Services Page" >}} Click on the service to view your traces. {{< img src="tracing/guide/setting_up_APM_with_cpp/traces_ui.png" alt="APM Traces UI" >}} ## Further Reading {{< partial name="whats-next/whats-next.html" >}} [1]: /tracing/setup/cpp/#compile-against-dd-opentracing-cpp [2]: https://app.datadoghq.com/account/settings#agent/ubuntu
25.270588
245
0.719274
eng_Latn
0.799839
f4f63861034322aafead5eacff8329ae0632e749
352
md
Markdown
about.md
joswiatek/swe-blog
d94414f220841f70743f18b7d6c8779a7e2e8a88
[ "MIT" ]
null
null
null
about.md
joswiatek/swe-blog
d94414f220841f70743f18b7d6c8779a7e2e8a88
[ "MIT" ]
null
null
null
about.md
joswiatek/swe-blog
d94414f220841f70743f18b7d6c8779a7e2e8a88
[ "MIT" ]
null
null
null
--- layout: page title: About permalink: /about/ --- I'm a senior studying Computer Science at the University of Texas at Austin. My interests lie in Artificial Intelligence and Machine Learning. My current areas of focus include Natural Language Processing and Computer Vision. ### Contact me [joel.swiatek@gmail.com](mailto:joel.swiatek@gmail.com)
32
225
0.78125
eng_Latn
0.952332
f4f64457b4573453ed534b267c7fef5525499126
466
md
Markdown
catalog/this-is-it-seisaku-shinkou-shinonome-jirou/en-US_this-is-it-seisaku-shinkou-shinonome-jirou.md
htron-dev/baka-db
cb6e907a5c53113275da271631698cd3b35c9589
[ "MIT" ]
3
2021-08-12T20:02:29.000Z
2021-09-05T05:03:32.000Z
catalog/this-is-it-seisaku-shinkou-shinonome-jirou/en-US_this-is-it-seisaku-shinkou-shinonome-jirou.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
8
2021-07-20T00:44:48.000Z
2021-09-22T18:44:04.000Z
catalog/this-is-it-seisaku-shinkou-shinonome-jirou/en-US_this-is-it-seisaku-shinkou-shinonome-jirou.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
2
2021-07-19T01:38:25.000Z
2021-07-29T08:10:29.000Z
# This Is It!: Seisaku Shinkou Shinonome Jirou ![this-is-it-seisaku-shinkou-shinonome-jirou](https://cdn.myanimelist.net/images/manga/2/242405.jpg) - **type**: manga - **original-name**: THIS IS IT! 制作進行東雲次郎 - **start-date**: 2020-10-27 ## Tags - comedy - romance ## Authors - Inui - Sekihiko (Art) - Momose - Yuuichirou (Story) ## Links - [My Anime list](https://myanimelist.net/manga/134798/This_Is_It__Seisaku_Shinkou_Shinonome_Jirou)
19.416667
101
0.682403
yue_Hant
0.204324
f4f694bb9fbf79977093b788237fbe18000fc636
2,804
md
Markdown
docs/framework/data/adonet/ef/how-to-build-an-entityconnection-connection-string.md
AlejandraHM/docs.es-es
5f5b056e12f9a0bcccbbbef5e183657d898b9324
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/data/adonet/ef/how-to-build-an-entityconnection-connection-string.md
AlejandraHM/docs.es-es
5f5b056e12f9a0bcccbbbef5e183657d898b9324
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/data/adonet/ef/how-to-build-an-entityconnection-connection-string.md
AlejandraHM/docs.es-es
5f5b056e12f9a0bcccbbbef5e183657d898b9324
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Filtrar Compilar una cadena de conexión EntityConnection ms.date: 03/30/2017 dev_langs: - csharp - vb ms.assetid: 5bd1a748-3df7-4d0a-a607-14f25e3175e9 ms.openlocfilehash: 84cef3b874f7deab129fad6dcd363c078153c311 ms.sourcegitcommit: 3500c4845f96a91a438a02ef2c6b4eef45a5e2af ms.translationtype: MT ms.contentlocale: es-ES ms.lasthandoff: 02/07/2019 ms.locfileid: "55826400" --- # <a name="how-to-build-an-entityconnection-connection-string"></a>Filtrar Compilar una cadena de conexión EntityConnection En este tema se muestra un ejemplo para generar una <xref:System.Data.EntityClient.EntityConnection>. ### <a name="to-run-the-code-in-this-example"></a>Para ejecutar el código de este ejemplo 1. Agregar el [modelo AdventureWorks Sales](https://github.com/Microsoft/sql-server-samples/releases/tag/adventureworks) al proyecto y configurar el proyecto para usar el [!INCLUDE[adonet_ef](../../../../../includes/adonet-ef-md.md)]. Para obtener más información, vea [Cómo: Utilice el Asistente para Entity Data Model](https://docs.microsoft.com/previous-versions/dotnet/netframework-4.0/bb738677(v=vs.100)). 2. En la página de código de la aplicación, agregue las instrucciones `using` siguientes (`Imports` en Visual Basic): [!code-csharp[DP EntityServices Concepts#Namespaces](../../../../../samples/snippets/csharp/VS_Snippets_Data/dp entityservices concepts/cs/source.cs#namespaces)] [!code-vb[DP EntityServices Concepts#Namespaces](../../../../../samples/snippets/visualbasic/VS_Snippets_Data/dp entityservices concepts/vb/source.vb#namespaces)] ## <a name="example"></a>Ejemplo En el siguiente ejemplo se inicializa el <xref:System.Data.SqlClient.SqlConnectionStringBuilder?displayProperty=nameWithType> para el proveedor subyacente y, a continuación, se inicializa el objeto <xref:System.Data.EntityClient.EntityConnectionStringBuilder?displayProperty=nameWithType> y se pasa este objeto al constructor de <xref:System.Data.EntityClient.EntityConnection>. 
[!code-csharp[DP EntityServices Concepts#BuildingConnectionStringWithEntityCommand](../../../../../samples/snippets/csharp/VS_Snippets_Data/dp entityservices concepts/cs/source.cs#buildingconnectionstringwithentitycommand)] [!code-vb[DP EntityServices Concepts#BuildingConnectionStringWithEntityCommand](../../../../../samples/snippets/visualbasic/VS_Snippets_Data/dp entityservices concepts/vb/source.vb#buildingconnectionstringwithentitycommand)] ## <a name="see-also"></a>Vea también - [Cómo: Usar EntityConnection con un contexto del objeto](https://docs.microsoft.com/previous-versions/dotnet/netframework-4.0/bb738461(v=vs.100)) - [Proveedor de EntityClient para Entity Framework](../../../../../docs/framework/data/adonet/ef/entityclient-provider-for-the-entity-framework.md)
77.888889
413
0.78281
spa_Latn
0.285253
f4f6bb123162f9d43770b354d14aa21174723300
590
md
Markdown
content/faq/questions/company-faq.md
santoshyadavdev/cypress-documentation
5e1b3f1a14dde56f54fe1be01acad109653380a2
[ "MIT" ]
634
2017-09-12T16:36:49.000Z
2022-03-31T21:12:39.000Z
content/faq/questions/company-faq.md
santoshyadavdev/cypress-documentation
5e1b3f1a14dde56f54fe1be01acad109653380a2
[ "MIT" ]
2,241
2016-05-11T16:30:01.000Z
2022-03-31T20:24:40.000Z
content/faq/questions/company-faq.md
santoshyadavdev/cypress-documentation
5e1b3f1a14dde56f54fe1be01acad109653380a2
[ "MIT" ]
1,033
2016-10-13T14:03:32.000Z
2022-03-29T13:45:46.000Z
--- layout: toc-top title: Company containerClass: faq --- ## <Icon name="angle-right"></Icon> Who’s behind Cypress? You can read more about who's behind Cypress on our website [here](https://www.cypress.io/about/). ## <Icon name="angle-right"></Icon> Are you hiring? Yes! You can check our open positions and apply [here](https://www.cypress.io/jobs/). ## <Icon name="angle-right"></Icon> Why the name _Cypress_? We believe that tests should always pass -- in other words, should always be green. A cypress is an evergreen tree. So, Cypress! <Icon name="tree" color="green"></Icon>
25.652174
76
0.70678
eng_Latn
0.933005
f4f6f4b871ce5d606b713114104bf09eef958d77
352
md
Markdown
_posts/2021-07-08/2021-06-19-28f-wanting-other-females-20210619171921666979.md
ipussy/ipussy.github.io
95d19a74e38bb54303cf18057a99a57c783e76bf
[ "Apache-2.0" ]
null
null
null
_posts/2021-07-08/2021-06-19-28f-wanting-other-females-20210619171921666979.md
ipussy/ipussy.github.io
95d19a74e38bb54303cf18057a99a57c783e76bf
[ "Apache-2.0" ]
null
null
null
_posts/2021-07-08/2021-06-19-28f-wanting-other-females-20210619171921666979.md
ipussy/ipussy.github.io
95d19a74e38bb54303cf18057a99a57c783e76bf
[ "Apache-2.0" ]
null
null
null
--- title: "28f wanting other females 😛" metadate: "hide" categories: [ Pussy ] image: "https://preview.redd.it/h3tfap7k67671.jpg?auto=webp&s=6bbaa6cd232d041c0824f87dc2815294dfe76fc0" thumb: "https://preview.redd.it/h3tfap7k67671.jpg?width=960&crop=smart&auto=webp&s=76fcf5ca37671c76bbcf7643dc39afa3aa6c4084" visit: "" --- 28f wanting other females 😛
35.2
124
0.778409
yue_Hant
0.10241
f4f79d3374df3ef4f55b7b986be21ebd4766d5b8
1,740
md
Markdown
kernel-attacks-through-user-mode-callbacks/3.1-win32k-naming-convention.md
yax571/paper-translation
dba1c7d8e931b4c2b320726395de7e0af9e7f4fb
[ "CC-BY-3.0" ]
1
2021-03-24T10:08:15.000Z
2021-03-24T10:08:15.000Z
kernel-attacks-through-user-mode-callbacks/3.1-win32k-naming-convention.md
yax571/paper-translation
dba1c7d8e931b4c2b320726395de7e0af9e7f4fb
[ "CC-BY-3.0" ]
null
null
null
kernel-attacks-through-user-mode-callbacks/3.1-win32k-naming-convention.md
yax571/paper-translation
dba1c7d8e931b4c2b320726395de7e0af9e7f4fb
[ "CC-BY-3.0" ]
1
2021-03-24T10:08:16.000Z
2021-03-24T10:08:16.000Z
# 3.1. Win32k 命名约定 正如在 2.2. 节描述的那样,窗口管理器在操作内部管理结构时采用临界区与全局锁。 考虑到用户模式回调函数可能允许应用程序冻结 GUI 子系统, 因此 Win32k 总是会在调用回用户模式之前离开临界区。这样一来, 当用户模式的代码正在执行的时候,Win32k 可能在执行其他的任务。 当从回调函数返回时,Win32k 会在内核中的函数继续执行之前再次进入临界区。 我们可以在任何一个调用 KeUserModeCallback 的函数里观察到这一行为, 比如 __清单 7__ 的这个。 ``` call _UserSessionSwitchLeaveCrit@0 lea eax, [ebp+var_4] push eax lea eax, [ebp+var_8] push eax push 0 push 0 push 43h call ds:__imp__KeUserModeCallback@20 call _UserEnterUserCritSec@0 ``` __清单 7__ 在用户模式回调之前退出临界区 当从用户模式回调函数返回时,Win32k 必须确保引用了的对象和数据结构依旧符合预期。 由于在进入回调函数之前离开了临界区,所以用户模式的代码可以自由的更改对象的属性, 重新分配数组等等。举个例子,回调函数可以调用 SetParent 来改变窗口的父窗口。 如果内核在调用回调函数之前保存了对的父窗口的引用,并且在返回之后, 不经过正确的检查或者给对象加锁,继续使用这个父窗口的引用,这就造成安全漏洞。 为了让开发者采取必要的警觉,记录可能调用回用户模式的函数是非常重要的, 所以 Win32k.sys 使用了它自己的函数命名约定。更具体地讲, 根据函数调用用户模式回调函数的方式,将函数前缀“xxx”或者“zzz”。然而, 在一些情况下,函数可能需要特定地参数才会执行到调用回调函数的路径上。 这就是为什么有时会看到不带前缀的函数调用前缀为“xxx”的函数了, 因为这些函数传递给前缀为“xxx”的函数的参数根本不会导致回调函数调用。 带有“zzz”前缀的函数会调用异步或者延迟回调函数。典型的例子就是, 由于各种各样的原因,某些类型的窗口事件不能或者不应该立即处理。 在这种情况下,Win32k 调用 xxxFlushDeferredWindowEvents 来刷新事件队列。 需要注意的一件重要的事情是,“zzz”前缀的函数在调用 xxxWindowEvent 之前需要 win32k!gdwDeferWinEvent 为非空,否则会立即处理回调。 Win32k 使用的命名约定的问题就是缺乏一致性。 Win32k里有几个调用了用户模式回调的函数, 但是却没有标记成他们应该标记成的样子。这个问题的原因不清楚, 但可能的解释是随着时间的迁移,函数几经修改,但函数的名字没有同步更新。 因此,开发者可能错误地认为某个函数绝对不会调用用户模式回调函数, 并因此避免做看似不必要的检查与确认(比如,没有给对象加锁, 没有重新检查指针有效性)。在修复 MS11-034 里的漏洞时, 为了表明它们使用了用户模式回调函数, 微软给几个几个函数的名字加上了“xxx”前缀(__表 2__)。 ``` Windows 7 RTM Windows 7 (MS11-034) MNRecalcTabStrings xxxMNRecalcTabStrings FreeDDEHandle xxxFreeDDEHandle ClientFreeDDEHandle xxxClientFreeDDEHandle ClientGetDDEFlags xxxClientGetDDEFlags ClientGetDDEHookData xxxClientGetDDEHookData ``` __表 2__ 因修复 MS11-034 而带有正确前缀的函数
26.769231
54
0.816092
yue_Hant
0.940994
f4f89cd20cf7d1307917000f7debe94bda507b04
668
md
Markdown
_posts/fr/papa/2016/2016-05-08-comme-des-chiens.md
ryuran/borisschapira.com
c69ed14d4ae2ba026cc3e0ee2e72cd307c039113
[ "MIT" ]
3
2019-03-27T17:18:34.000Z
2019-09-16T12:07:24.000Z
_posts/fr/papa/2016/2016-05-08-comme-des-chiens.md
ryuran/borisschapira.com
c69ed14d4ae2ba026cc3e0ee2e72cd307c039113
[ "MIT" ]
597
2019-03-29T15:22:25.000Z
2022-03-28T04:10:14.000Z
_posts/fr/papa/2016/2016-05-08-comme-des-chiens.md
fredleger/borisschapira.com
2005b30ae41967d2c4670592be163efdece56575
[ "MIT" ]
17
2019-03-29T14:42:52.000Z
2021-02-25T13:39:10.000Z
--- title: 'Comme des chiens' --- Je regarde "La Belle et le Clochard" avec les enfants. La scène des pâtes à la bolognaise leur plait beaucoup. <!-- more --> > [Moi] : ils sont amoureux ? > [Tous les deux] : ah, oui ! > [Grand] : et ils font des bisous ! > [Moi] : comme Papa et Maman alors ! > [Petit] : oui. Maman, elle est belle. Elle est plus belle que Belle. > [Moi] : et Papa ? > [Petit] : Papa, il est plus clochard que Clochard ! {% include video_as_a_gif.html.liquid url="/assets/images/papa/2016-05-08/1" alt="Un homme portant un costume et une moustache, visiblement sans voix devant tant de bêtise." caption="Euh, tu peux développer ?" %}
30.363636
110
0.670659
fra_Latn
0.971518
f4f8dc9e95bfa6459342f67fa7a6ef3b2e922e07
1,274
md
Markdown
README.md
PotcFdk/html-webpack-banner-plugin
3da10d1f012e821acabe5b5a9b6ad10e22d91a88
[ "MIT" ]
1
2020-04-20T11:40:12.000Z
2020-04-20T11:40:12.000Z
README.md
PotcFdk/html-webpack-banner-plugin
3da10d1f012e821acabe5b5a9b6ad10e22d91a88
[ "MIT" ]
null
null
null
README.md
PotcFdk/html-webpack-banner-plugin
3da10d1f012e821acabe5b5a9b6ad10e22d91a88
[ "MIT" ]
1
2020-05-14T15:20:44.000Z
2020-05-14T15:20:44.000Z
# Forked from [html-webpack-banner-plugin](https://github.com/Cap32/html-webpack-banner-plugin) to support HtmlWebpackPlugin v4

# html-webpack-banner-plugin

[![Build Status](https://travis-ci.org/Cap32/html-webpack-banner-plugin.svg?branch=master)](https://travis-ci.org/Cap32/html-webpack-banner-plugin)

This is an extension plugin for the [webpack](http://webpack.github.io) plugin [html-webpack-plugin](https://github.com/ampedandwired/html-webpack-plugin) - a plugin that simplifies the creation of HTML files to serve your webpack bundles.

Adds a banner to the top of the generated html.

## Installation

Install the plugin with npm:

```bash
$ npm install --save-dev html-webpack-banner-plugin
```

Install the plugin with yarn:

```bash
$ yarn add -D html-webpack-banner-plugin
```

## Basic Usage

Load the plugin

```js
const HtmlWebpackBannerPlugin = require('html-webpack-banner-plugin');
```

and add it to your webpack config as follows:

```js
plugins: [
  new HtmlWebpackPlugin(),
  new HtmlWebpackBannerPlugin({
    banner: '<!-- my banner -->',
  }),
]
```

#### Options

```js
{
  banner: string, // the banner as string, it will be wrapped in a comment
  raw: boolean, // if true, banner will not be wrapped in a comment
}
```

## License

MIT
21.59322
239
0.710361
kor_Hang
0.66882
f4f925434fed8209482c9553b5b0cb33605f1818
728
md
Markdown
docs/c-language/linkage.md
changeworld/cpp-docs.zh-cn
fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-language/linkage.md
changeworld/cpp-docs.zh-cn
fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-language/linkage.md
changeworld/cpp-docs.zh-cn
fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 链接 ms.date: 11/04/2016 helpviewer_keywords: - linkage [C++] - linkage [C++], identifier names and scope ms.assetid: 986ee549-2d6c-487a-9e3b-a1f643bc5bdc ms.openlocfilehash: 9fb9be0cbcb0622a3d1af2ba4fe56ef8f1cf8f16 ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd ms.translationtype: HT ms.contentlocale: zh-CN ms.lasthandoff: 10/31/2018 ms.locfileid: "50476203" --- # <a name="linkage"></a>链接 标识符名称可引用不同范围内的各个标识符。 在不同的范围内或在同一范围内多次声明的标识符可以通过称为“链接”的过程来引用同一标识符或函数。 链接确定可在其中引用标识符的程序的部分(其“可见性”)。 有三种链接:[内部](../c-language/internal-linkage.md)、[外部](../c-language/external-linkage.md)和[无链接](../c-language/no-linkage.md)。 ## <a name="see-also"></a>请参阅 [使用 extern 指定链接](../cpp/using-extern-to-specify-linkage.md)
34.666667
219
0.766484
yue_Hant
0.333638
f4f95ab2903edf2ecae3f38042b076e6dc71c7d4
5,588
md
Markdown
docs/framework/unmanaged-api/hosting/iclrtask-interface.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/hosting/iclrtask-interface.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/hosting/iclrtask-interface.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: ICLRTask-Schnittstelle ms.date: 03/30/2017 api_name: - ICLRTask api_location: - mscoree.dll api_type: - COM f1_keywords: - ICLRTask helpviewer_keywords: - ICLRTask interface [.NET Framework hosting] ms.assetid: b3a44df3-578a-4451-b55e-70c8e7695f5e topic_type: - apiref ms.openlocfilehash: 5ecc42950775a620796a1c775e5f088f461a12c3 ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 11/24/2020 ms.locfileid: "95690823" --- # <a name="iclrtask-interface"></a>ICLRTask-Schnittstelle Stellt Methoden bereit, die es dem Host ermöglichen, Anforderungen an die Common Language Runtime (CLR) zu senden oder der CLR eine Benachrichtigung über die zugeordnete Aufgabe bereitzustellen. ## <a name="methods"></a>Methoden |Methode|BESCHREIBUNG| |------------|-----------------| |[Abort-Methode](iclrtask-abort-method.md)|Fordert an, dass die CLR die von der aktuellen Instanz dargestellte Aufgabe abbricht `ICLRTask` .| |[ExitTask-Methode](iclrtask-exittask-method.md)|Benachrichtigt die CLR, dass der Task, der der aktuellen `ICLRTask` Instanz zugeordnet ist, endet, und versucht, die Aufgabe ordnungsgemäß zu schließen.| |[GetMemStats-Methode](iclrtask-getmemstats-method.md)|Ruft statistische Informationen zur Verwendung von Speicherressourcen durch die Aufgabe ab, die durch die aktuelle Instanz dargestellt wird `ICLRTask` .| |[LocksHeld-Methode](iclrtask-locksheld-method.md)|Ruft die Anzahl der Sperren ab, die zurzeit für den Task ausgeführt werden.| |[NeedsPriorityScheduling-Methode](iclrtask-needspriorityscheduling-method.md)|Ruft einen Wert ab, der angibt, ob der Host eine hohe Priorität zuweisen soll, um die von der aktuellen Instanz dargestellte Aufgabe neu zu planen `ICLRTask` .| |[Reset-Methode](iclrtask-reset-method.md)|Teilt der CLR mit, dass der Host eine Aufgabe abgeschlossen hat, und ermöglicht der CLR die Wiederverwendung der aktuellen `ICLRTask` Instanz zur Darstellung einer anderen Aufgabe.| 
|[RudeAbort-Methode](iclrtask-rudeabort-method.md)|Bewirkt, dass die CLR die von der aktuellen Instanz dargestellte Aufgabe sofort abbricht `ICLRTask` , ohne sicherzustellen, dass Finalizer ausgeführt werden.| |[SetTaskIdentifier-Methode](iclrtask-settaskidentifier-method.md)|Legt einen eindeutigen Bezeichner für die von der aktuellen Instanz dargestellte Aufgabe `ICLRTask` für die Verwendung beim Debuggen fest.| |[SwitchIn-Methode](iclrtask-switchin-method.md)|Benachrichtigt die CLR, dass sich der von der aktuellen Instanz dargestellte Task `ICLRTask` in einem ausführbaren Zustand befindet.| |[SwitchOut-Methode](iclrtask-switchout-method.md)|Benachrichtigt die CLR, dass die von der aktuellen Instanz dargestellte Aufgabe `ICLRTask` nicht mehr in einem eines ausführbaren-Zustand ist.| |[YieldTask-Methode](iclrtask-yieldtask-method.md)|Fordert an, dass die CLR für andere Tasks Prozessorzeit zur Verfügung stellt. Die CLR garantiert nicht, dass die Aufgabe in einen Zustand versetzt wird, in dem Sie Verarbeitungszeit in sich bringen kann.| ## <a name="remarks"></a>Hinweise Eine `ICLRTask` ist die Darstellung einer Aufgabe für die CLR. Zu jedem Zeitpunkt während der Codeausführung kann eine Aufgabe entweder als ausgeführt oder wartet auf die Ausführung beschrieben werden. Der Host ruft die- `ICLRTask::SwitchIn` Methode auf, um die CLR zu benachrichtigen, dass die von der aktuellen Instanz dargestellte Aufgabe `ICLRTask` nun in einem ausführbaren Zustand ist. Nach einem Aufruf `ICLRTask::SwitchIn` von kann der Host die Aufgabe für jeden Betriebssystem Thread planen, außer in Fällen, in denen die Laufzeit Thread Affinität erfordert, wie durch Aufrufe der [IHostTaskManager:: beginthreadaffinität](ihosttaskmanager-beginthreadaffinity-method.md) -Methode und der [IHostTaskManager:: endthreadaffinitäts](ihosttaskmanager-endthreadaffinity-method.md) Methode angegeben. 
Später kann das Betriebssystem die Aufgabe aus dem Thread entfernen und in einen Zustand versetzen, der nicht ausgeführt wird. Dies kann beispielsweise der Fall sein, wenn der Task bei Synchronisierungs primitiven blockiert oder auf den Abschluss der e/a-Vorgänge wartet. Der Host ruft die [SwitchOut](iclrtask-switchout-method.md) -Methode auf, um die CLR zu benachrichtigen, dass der von der aktuellen Instanz dargestellte Task `ICLRTask` nicht mehr in einem eines ausführbaren-Zustand ist. Eine Aufgabe wird in der Regel am Ende der Codeausführung beendet. Zu diesem Zeitpunkt ruft der Host `ICLRTask::ExitTask` auf, um den zugeordneten zu zerstören `ICLRTask` . Tasks können jedoch auch mithilfe eines `ICLRTask::Reset` Aufrufes wieder verwendet werden, wodurch die- `ICLRTask` Instanz wieder verwendet werden kann. Diese Vorgehensweise verhindert den mehr Aufwand für das wiederholte erstellen und zerstören von Instanzen. ## <a name="requirements"></a>Requirements (Anforderungen) **Plattformen:** Informationen finden Sie unter [Systemanforderungen](../../get-started/system-requirements.md). **Header:** Mscoree. h **Bibliothek:** Als Ressource in MSCorEE.dll enthalten **.NET Framework Versionen:**[!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)] ## <a name="see-also"></a>Weitere Informationen - [ICLRTaskManager-Schnittstelle](iclrtaskmanager-interface.md) - [IHostTask-Schnittstelle](ihosttask-interface.md) - [IHostTaskManager-Schnittstelle](ihosttaskmanager-interface.md) - [Hosten von Schnittstellen](hosting-interfaces.md) - [ICLRTask2-Schnittstelle](iclrtask2-interface.md)
83.402985
1,298
0.794023
deu_Latn
0.991945
f4f9d9834d531aa97d850a38ac2167fde5675c33
3,926
md
Markdown
docs/source/03_tutorial/01_spaceflights_tutorial.md
WaylonWalker/kedro
ce39d3e2485772bfc4977ab892b29c5a377d998d
[ "Apache-2.0" ]
1
2022-01-26T04:50:53.000Z
2022-01-26T04:50:53.000Z
docs/source/03_tutorial/01_spaceflights_tutorial.md
WaylonWalker/kedro
ce39d3e2485772bfc4977ab892b29c5a377d998d
[ "Apache-2.0" ]
null
null
null
docs/source/03_tutorial/01_spaceflights_tutorial.md
WaylonWalker/kedro
ce39d3e2485772bfc4977ab892b29c5a377d998d
[ "Apache-2.0" ]
null
null
null
# Kedro spaceflights tutorial

**Scenario**: *It is 2160 and the space tourism industry is booming. Globally, there are thousands of space shuttle companies taking tourists to the Moon and back. You have been able to source amenities offered in each space shuttle, customer reviews and company information.*

**Project**: *You want to construct a model for predicting the price for each trip to the Moon and the corresponding return flight.*

In this tutorial, we illustrate the typical Kedro workflow and the steps necessary to convert an empty Kedro project template into a working project. In the text, we assume that you create an empty project and follow the flow of the tutorial by copying and pasting the example code into the project as we describe.

This tutorial will take approximately 2 hours, and you will learn each step of the Kedro project development workflow by working on an example to construct nodes and pipelines for the price-prediction model.

However, you may prefer to get up and running more swiftly, so we provide the full spaceflights example project as a [Kedro starter](../02_get_started/06_starters.md). To create the project:

```bash
kedro new --starter=spaceflights
```

This will generate a project from the [Kedro starter for the spaceflights tutorial](https://github.com/quantumblacklabs/kedro-starters/tree/master/spaceflights) so you can follow the tutorial without any of the copy/pasting.

## Kedro project development workflow

When building a Kedro project, you will typically follow a standard development workflow:

![](../meta/images/typical_workflow.png)

### 1. Set up the project template

* Create a new project with `kedro new`
* Install project dependencies with `kedro install`
* Configure the following in the `conf` folder:
  * Logging
  * Credentials
  * Any other sensitive / personal content

### 2. Set up the data

* Add data to the `data/` folder
* Reference all datasets for the project in `conf/base/catalog.yml`

### 3. Create the pipeline

* Create the data transformation steps as Python functions
* Construct the pipeline by adding your functions as nodes
* Choose how to run the pipeline: sequentially or in parallel

### 4. Package the project

* Build the project documentation
* Package the project for distribution

## Optional: Git workflow

### Creating a project repository

We recommend that you use `git` for source control, but Kedro doesn't require it, and can work without any source control management system. This section is optional if you choose not to use a `git` repository.

> Note: If you are unfamiliar with a typical git workflow, you can follow one of the most popular, known as [Gitflow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow).

If you don't have a local `git` repository for your project already, navigate to the project directory and create one:

```bash
git init
git remote add origin https://github.com/<your-repo>
```

### Submitting your changes to GitHub

As you work on a project, you will periodically save your changes. In a team, we suggest that you each develop your code on a branch and create pull requests to submit it to the `develop` or `master` branches:

```bash
# create a new feature branch called 'feature/project-template'
git checkout -b feature/project-template

# stage all the files you have changed
git add .

# commit changes to git with an instructive message
git commit -m 'Create project template'

# push changes to remote branch
git push origin feature/project-template
```

It isn't necessary to branch, but if everyone in a team works on the same branch (e.g. `master`), you may have to resolve merge conflicts more often. Here is an example of working directly on `master`:

```bash
# stage all files
git add .

# commit changes to git with an instructive message
git commit -m 'Create project template'

# push changes to remote master
git push origin master
```
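The data transformation steps in step 3 above are plain Python functions before Kedro wraps them as nodes. As a minimal sketch (the function names, column names, and cleaning logic here are hypothetical illustrations, not the actual spaceflights starter code):

```python
# Sketch of Kedro-style node functions: plain Python functions that take and
# return data. Column names and logic are illustrative, not the real starter.

def preprocess_shuttles(shuttles: list[dict]) -> list[dict]:
    """Convert price strings like '$1,325.0' into floats."""
    out = []
    for row in shuttles:
        row = dict(row)  # avoid mutating the caller's data
        row["price"] = float(row["price"].replace("$", "").replace(",", ""))
        out.append(row)
    return out


def create_model_input(shuttles: list[dict]) -> list[tuple]:
    """Build (features, target) pairs for the price-prediction model."""
    return [((row["engine_count"],), row["price"]) for row in shuttles]


# In a real project these functions would be wrapped as nodes, e.g.:
#   from kedro.pipeline import Pipeline, node
#   Pipeline([node(preprocess_shuttles, "shuttles", "preprocessed_shuttles"),
#             node(create_model_input, "preprocessed_shuttles", "model_input")])

if __name__ == "__main__":
    raw = [{"price": "$1,325.0", "engine_count": 2}]
    clean = preprocess_shuttles(raw)
    print(clean[0]["price"])
    print(create_model_input(clean))
```

Keeping the functions free of Kedro imports like this makes them easy to unit-test before they are registered in a pipeline.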
43.622222
372
0.775599
eng_Latn
0.998909
f4fa0b9cf82df7ca322ac41a3831018cf23981df
8,411
md
Markdown
proposed/event-dispatcher-meta.md
php-china/fig-standards-zh
611ebda1513d47e1b978499a86a8045488cdc795
[ "CC-BY-3.0", "MIT" ]
1
2018-11-20T09:30:40.000Z
2018-11-20T09:30:40.000Z
proposed/event-dispatcher-meta.md
php-china/fig-standards-zh
611ebda1513d47e1b978499a86a8045488cdc795
[ "CC-BY-3.0", "MIT" ]
null
null
null
proposed/event-dispatcher-meta.md
php-china/fig-standards-zh
611ebda1513d47e1b978499a86a8045488cdc795
[ "CC-BY-3.0", "MIT" ]
null
null
null
Event Dispatcher Meta Document
==============================

## 1. Summary

The purpose of this document is to describe the rationale and logic behind the Event Dispatcher PSR.

## 2. Why Bother?

Many libraries, components, and frameworks have long supported mechanisms for allowing arbitrary third party code to interact with them. Most are variations on the classic Observer pattern, often mediated through an intermediary object or service. Others take a more Aspect-Oriented Programming (AOP) approach. Nonetheless all share the same basic concept: interrupt program flow at a fixed point to provide arbitrary third party libraries with information about the action being performed, and allow them to either react to or influence the program's behavior.

This is a well-established model, but a standard mechanism by which libraries do so will allow them to interoperate with more and more varied third party libraries with less effort by both the original developer and extension developers.

## 3. Scope

### 3.1 Goals

* Simplify and standardize the process by which libraries and components may expose themselves to extension via "events" so that they may be more easily incorporated into applications and frameworks.
* Simplify and standardize the process by which libraries and components may register an interest in responding to an event so that they may be more easily incorporated into arbitrary applications and frameworks.
* To the extent feasible, ease the process for existing code bases to transition toward this specification.

### 3.2 Non-Goals

* Asynchronous systems often have a concept of an "event loop" to manage interleaving coroutines. That is an unrelated matter and explicitly irrelevant to this specification.
* Storage systems implementing an "Event Source" pattern also have a concept of an "event". That is unrelated to the events discussed here and explicitly out of scope.
* Strict backward compatibility with existing event systems is not a priority and is not expected.
* While this specification will undoubtedly suggest implementation patterns, it does not seek to define One True Event Dispatcher Implementation, only how callers and listeners communicate with that dispatcher.

## 4. Approaches

### 4.1 Use cases considered

The Working Group identified four possible workflows for event passing, based on use cases seen in the wild in various systems.

* One-way notification. ("I did a thing, if you care.")
* Object enhancement. ("Here's a thing, please modify it before I do something with it.")
* Collection. ("Give me all your things, that I may do something with that list.")
* Alternative chain. ("Here's a thing; the first one of you that can handle it do so, then stop.")

On further review, it was determined that Collection was a special case of Object enhancement (the collection being the object that is enhanced). Alternative chain is similarly a special case of Object enhancement, as the signature is identical and the dispatch workflow is nearly identical. That leaves two relevant workflows:

* Notification
* Modification

Notification can safely be done asynchronously (including delaying it through a queue), but Modification by nature involves passing data back to the caller and thus must be synchronous. Despite that difference, the Working Group determined that the use cases were close enough to be considered in a single PSR. The two workflows are, however, represented by two different interfaces, `MessageNotifierInterface` (for Messages) and `TaskProcessorInterface` (for Tasks).

### 4.2 Immutable events

Initially the Working Group wished to define all Events as immutable message objects, similar to PSR-7. However, that proved problematic in the Processor case, where Listeners needed a way to return data to the caller. In concept there were three possible avenues:

* Make the event mutable and modify it in place.
* Require that events be evolvable (immutable but with `with*()` methods like PSR-7 and PSR-13) and that listeners return the event to pass along.
* Make the Event immutable but aggregate and return the return value from each Listener.

However, Stoppable Task (the alternative chain case) also needed to have a channel by which to indicate that further listeners should not be called. That could be done in one of three ways:

* Modifying the Task (`stopPropagation()`)
* Returning a sentinel value from the listener (`true` or `false`) to indicate that propagation should terminate.
* Evolving the Task to be stopped (`withPropagationStopped()`)

Of those, the first would mandate a mutable Task in at least some cases. The second would mandate a mutable Task, as the return value was already in use. And the third seemed unnecessarily ceremonial and pedantic.

Having listeners return evolvable tasks also posed a challenge. That pattern is not used by any known implementations in PHP or elsewhere. It also relies on the listener to remember to return the object (extra work for the listener author) and to not return some other, new object that might not be fully compatible with later listeners (such as a subclass or superclass of the event).

Immutable events also rely on the event definer to respect the admonition to be immutable. Events are, by nature, very loosely designed and the potential for implementers to ignore that part of the spec, even inadvertently, is high.

That left two possible options:

* Allow tasks to be mutable.
* Require, but be unable to enforce, immutable objects with a high-ceremony interface, more work for listener authors, and a higher potential for breakage that may not be detectable at compile time.

Given those options the Working Group felt mutable tasks were the safer alternative.

The Message Notification use case, however, has no need for return communication, so those issues are moot.
Those objects therefore MUST be treated as immutable, regardless of what the object's methods are.

### 4.3 Listener registration

Experimentation during development of the specification determined that there was a wide range of viable, legitimate means by which a Notifier or Processor could be informed of a listener.

* A listener could be registered explicitly;
* it could be registered based on reflection of its signature;
* it could be registered with a numeric priority order;
* it could be registered using a before/after mechanism to control ordering more precisely;
* it could be registered from a service container;
* it could use a pre-compile step to generate code;
* it could be based on method names on objects in the event itself.

These and other mechanisms all exist in the wild today in PHP, all are valid use cases worth supporting, and few if any can be conveniently represented as a special case of another.

That is, standardizing one way, or even a small set of ways, to inform the system of a listener turned out to be impractical if not impossible without cutting off many use cases that should be supported.

The Working Group therefore chose to encapsulate the registration of listeners behind the `ListenerProviderInterface`. A provider object may have an explicit registration mechanism available, or multiple such mechanisms, or none. It could also be generated code produced by some compile step. That is up to the implementer.

However, that also splits the responsibility of managing the process of dispatching an event from the process of mapping an event to listeners. That way different implementations may be mixed and matched with different provider mechanisms as needed.

While combining the Notifier, Processor, and Provider into a single object is a valid and permissible degenerate case, it is NOT RECOMMENDED as it reduces the flexibility of system integrators. Instead, the provider should be composed as a dependent object.

## 5. People

The Event Manager Working Group consisted of:

### 5.1 Editor

* Larry Garfield

### 5.2 Sponsor

* Cees-Jan Kiewiet

### 5.3 Working Group Members

* Benjamin Mack
* Elizabeth Smith
* Ryan Weaver
* Matthew Weier O'Phinney

## 6. Votes

* [Entrance vote](https://groups.google.com/d/topic/php-fig/6kQFX-lhuk4/discussion)

## 7. Relevant Links

* [Inspiration Mailing List Thread](https://groups.google.com/forum/#!topic/php-fig/-EJOStgxAwY)
* [Entrance vote](https://groups.google.com/d/topic/php-fig/6kQFX-lhuk4/discussion)
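The dispatcher/provider split and the mutable, stoppable Task workflow described in sections 4.2 and 4.3 can be sketched concretely. PSR-14 itself targets PHP; the following is a Python illustration of the flow only, and all class and method names here are invented for the sketch rather than taken from the spec:

```python
# Sketch of the dispatcher / listener-provider split with a mutable,
# stoppable "Task"-style event. Names are illustrative, not PSR-14's own.

class Event:
    def __init__(self):
        self._stopped = False

    def stop_propagation(self):
        self._stopped = True

    def is_propagation_stopped(self):
        return self._stopped


class ListenerProvider:
    """Maps events to listeners; the registration mechanism is up to the implementer."""
    def __init__(self):
        self._listeners = {}

    def add(self, event_type, listener):
        self._listeners.setdefault(event_type, []).append(listener)

    def get_listeners_for(self, event):
        return self._listeners.get(type(event), [])


class Dispatcher:
    """Composes a provider and calls listeners until propagation is stopped."""
    def __init__(self, provider):
        self._provider = provider

    def dispatch(self, event):
        for listener in self._provider.get_listeners_for(event):
            if event.is_propagation_stopped():
                break
            listener(event)
        return event  # mutable event carries data back to the caller


class OrderPlaced(Event):
    def __init__(self, total):
        super().__init__()
        self.total = total


provider = ListenerProvider()
provider.add(OrderPlaced, lambda e: setattr(e, "total", e.total - 10))  # modify in place
provider.add(OrderPlaced, lambda e: e.stop_propagation())
provider.add(OrderPlaced, lambda e: setattr(e, "total", 0))             # never runs

result = Dispatcher(provider).dispatch(OrderPlaced(100.0))
print(result.total)  # 90.0
```

Because the dispatcher depends only on the provider's lookup method, any registration mechanism (explicit, reflection-based, compiled) can be swapped in without touching the dispatch logic, which is the flexibility argument the document makes.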
69.512397
578
0.79182
eng_Latn
0.999606
f4fa39f5e02788e963b485b9ad3e374018340509
942
md
Markdown
translations/pt-BR/content/admin/index.md
hoangcon12345/docs
cc61e14b7a075b06d79d263b764ae85a855c9aaf
[ "CC-BY-4.0", "MIT" ]
9
2021-04-02T15:46:42.000Z
2022-03-06T17:21:45.000Z
translations/pt-BR/content/admin/index.md
Anando12/docs
5d9c8304456158e7e0dee8bb8a373eca308a2351
[ "CC-BY-4.0", "MIT" ]
222
2021-04-08T20:13:34.000Z
2022-03-18T22:37:27.000Z
translations/pt-BR/content/admin/index.md
Anando12/docs
5d9c8304456158e7e0dee8bb8a373eca308a2351
[ "CC-BY-4.0", "MIT" ]
2
2021-05-21T01:41:05.000Z
2021-05-21T01:41:24.000Z
--- title: Administradores da Empresa redirect_from: - /enterprise/admin/hidden/migrating-from-github-fi/ - /enterprise/admin intro: Documentação e guias para administradores da empresa, administradores de sistema e especialistas em segurança que {% if enterpriseServerVersions contains currentVersion %}implementam, {% endif %}configuram{% if enterpriseServerVersions contains currentVersion %},{% endif %} e gerenciam {% data variables.product.product_name %}. versions: enterprise-server: '*' github-ae: '*' --- {% link_with_intro /overview %} {% link_with_intro /installation %} {% link_with_intro /configuration %} {% link_with_intro /authentication %} {% link_with_intro /user-management %} {% link_with_intro /policies %} {% link_with_intro /enterprise-management %} {% link_with_intro /github-actions %} {% link_with_intro /packages %} {% link_with_intro /enterprise-support %} {% link_with_intro /release-notes %}
28.545455
336
0.750531
eng_Latn
0.228994
f4fa6fc0a33be83542b201708413b52e34349939
159
md
Markdown
README.md
dli-invest/wsb-sentiment
35c85cedc9cf5ff73c0f9d17ab5c957d969aafd4
[ "Apache-2.0" ]
null
null
null
README.md
dli-invest/wsb-sentiment
35c85cedc9cf5ff73c0f9d17ab5c957d969aafd4
[ "Apache-2.0" ]
null
null
null
README.md
dli-invest/wsb-sentiment
35c85cedc9cf5ff73c0f9d17ab5c957d969aafd4
[ "Apache-2.0" ]
1
2021-10-02T20:23:07.000Z
2021-10-02T20:23:07.000Z
# wsb-sentiment

Scrapes sentiment data from WSB (r/wallstreetbets) and uploads it to Discord.

## References

https://github.com/tstewart161/Reddit_Sentiment_Trader/blob/main/main.py
22.714286
72
0.811321
eng_Latn
0.52825
f4fad544b747f253ea7435b803174cab1c9571b0
1,252
md
Markdown
src/posts/2010-04-30.motorola-was-an-apple.md
iChris/chrisennsdotcom
dfad8795c0652a5db0d36ba7f826ce7dfcd60e0c
[ "MIT" ]
2
2020-04-05T00:53:27.000Z
2020-08-07T17:46:13.000Z
src/posts/2010-04-30.motorola-was-an-apple.md
iChris/chrisennsdotcom
dfad8795c0652a5db0d36ba7f826ce7dfcd60e0c
[ "MIT" ]
7
2020-09-15T16:59:09.000Z
2021-03-19T22:11:15.000Z
src/posts/2010-04-30.motorola-was-an-apple.md
iChris/chrisennsdotcom
dfad8795c0652a5db0d36ba7f826ce7dfcd60e0c
[ "MIT" ]
2
2020-02-06T23:55:34.000Z
2020-02-12T05:13:45.000Z
---
title: "Motorola Was An Apple"
---

<p>Apple is doing well - amazingly well - with their iPhone sales. And they do a great job of giving the impression that they are the only player left, the only phone you'd want, etc. etc. But lest you think it's easy to stay at the top, it was only four years ago that <a href="https://www.forbes.com/feeds/ap/2010/04/29/technology-technology-hardware-amp-equipment-us-earns-motorola_7560046.html">Motorola sold 46.1 million phones in one quarter</a>. In Apple's best quarter so far, their most recent one, they sold 8.8 million iPhones.</p>

<p>It's hard work to get to the top spot in an industry. It's even harder to stay there.</p>

<p>An interesting side note - Motorola currently sells <a href="https://daringfireball.net/linked/2010/04/30/motorola-117"><strong>117</strong> different models of phones</a>. Apple has <a href="https://www.apple.com/iphone/">one</a> (two if you count the fact that the previous generation iPhone is available for sale in most markets). That's the benefit of deciding what you're going to do, focusing on it, and doing it well.</p>

<p><em>Via <a href="https://daringfireball.net/linked/2010/04/30/motorola">Daringfireball.net</a></em></p>
156.5
549
0.746006
eng_Latn
0.989172
f4faea39ef496e06d232e1b1c5ff6371a7b96bb5
1,077
md
Markdown
README.md
AmirHades/R-projects
da26e72ba7a98a51452f169f6b11a3f002976bf7
[ "Apache-2.0" ]
null
null
null
README.md
AmirHades/R-projects
da26e72ba7a98a51452f169f6b11a3f002976bf7
[ "Apache-2.0" ]
null
null
null
README.md
AmirHades/R-projects
da26e72ba7a98a51452f169f6b11a3f002976bf7
[ "Apache-2.0" ]
null
null
null
# R-projects

This repository contains files that implement the following:

1. String matching using a finite automaton, as given in Cormen et al., p. 995. In the first step, take a pattern as input and construct the transition table using the algorithm. In the second step, take a string as input from the user and check whether the pattern exists in it, using the transition table created in the first step.

2. A simulation of an embedded system (vending machine, washing machine, or microwave) using a Moore or Mealy machine (https://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Seq/impl.html).

The above two are complete and give correct outputs.

We did not implement Thompson's NFA construction algorithm for parsing regular expressions (https://swtch.com/~rsc/regexp/regexp1.html and https://xysun.github.io/posts/regex-parsing-thompsonsalgorithm.html; see also Thompson's regular expression search algorithm paper uploaded on Blackboard). In that task's first step, the NFA is created from a given regular expression; in the second step, a string is fed to the NFA to check whether it is accepted.
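The finite-automaton matcher from Cormen et al. can be sketched compactly. This is a Python sketch of the algorithm the repository implements in R, not the repository's own code:

```python
# Sketch of CLRS-style finite-automaton string matching: build the transition
# table once, then scan the text in a single left-to-right pass.

def compute_transition(pattern, alphabet):
    m = len(pattern)
    delta = [{} for _ in range(m + 1)]
    for q in range(m + 1):
        for a in alphabet:
            # Longest prefix of `pattern` that is a suffix of pattern[:q] + a.
            k = min(m, q + 1)
            while k > 0 and not (pattern[:q] + a).endswith(pattern[:k]):
                k -= 1
            delta[q][a] = k
    return delta


def fa_match(text, pattern):
    delta = compute_transition(pattern, set(text) | set(pattern))
    q, hits = 0, []
    for i, ch in enumerate(text):
        q = delta[q].get(ch, 0)        # one table lookup per character
        if q == len(pattern):
            hits.append(i - len(pattern) + 1)  # match starts here
    return hits


print(fa_match("abababacaba", "ababaca"))  # [2]
```

Building the table costs O(m^3 * |alphabet|) with this naive suffix check, but the scan itself is O(n), which is the point of the automaton approach.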
97.909091
434
0.805942
eng_Latn
0.997046
f4fafb77a66a8d604e5c4554744cde0af5c4e3ca
3,630
md
Markdown
docs/2014/integration-services/import-export-data/create-database-sql-server-import-and-export-wizard.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/integration-services/import-export-data/create-database-sql-server-import-and-export-wizard.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/integration-services/import-export-data/create-database-sql-server-import-and-export-wizard.md
SteSinger/sql-docs.de-de
2259e4fbe807649f6ad0d49b425f1f3fe134025d
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Datenbank erstellen (SQL Server-Import/Export-Assistent) | Microsoft-Dokumentation ms.custom: '' ms.date: 06/14/2017 ms.prod: sql-server-2014 ms.reviewer: '' ms.technology: integration-services ms.topic: conceptual f1_keywords: - sql12.dts.impexpwizard.createdatabase.f1 ms.assetid: 56a8a79f-086c-4bdc-8888-0045bb4b0cbf author: janinezhang ms.author: janinez manager: craigg ms.openlocfilehash: a80526b0f4a1b9f122ff79bbbb5a5a8ac08a2d07 ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 06/15/2019 ms.locfileid: "62893106" --- # <a name="create-database-sql-server-import-and-export-wizard"></a>Datenbank erstellen (SQL Server-Import/Export-Assistent) Verwenden der **Create Database** Seite, um eine neue Datenbank für eine Zieldatei definieren. Diese Seite stellt eine Teilmenge der verfügbaren Optionen zum Erstellen einer neuen Datenbank bereit. Verwenden Sie zum Anzeigen aller Optionen [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)]. Weitere Informationen zu diesem Assistenten finden Sie unter [SQL Server-Import / Export-Assistenten](import-and-export-data-with-the-sql-server-import-and-export-wizard.md). Informationen zu den Optionen zum Starten des Assistenten sowie zu den Berechtigungen erforderlich, um den Assistenten erfolgreich ausführen, finden Sie unter [führen Sie die SQL Server-Import / Export-Assistenten](start-the-sql-server-import-and-export-wizard.md). Mit dem SQL Server-Import/Export-Assistenten werden Daten aus einer Quelle in ein Ziel kopiert. Mit dem Assistenten können auch eine Zieldatenbank und Zieltabellen erstellt werden. Wenn Sie jedoch mehrere Datenbanken, Tabellen oder andere Datenbankobjekte kopieren müssen, verwenden Sie stattdessen den Assistenten zum Kopieren von Datenbanken. 
Weitere Informationen finden Sie unter [Use the Copy Database Wizard](../../relational-databases/databases/use-the-copy-database-wizard.md). ## <a name="options"></a>Optionen **Name** Geben Sie einen eindeutigen Namen für die SQL Server-Zieldatenbank an. Beachten Sie beim Benennen dieser Datenbank die entsprechenden Konventionen von SQL Server. **Der Datendateiname** Zeigen Sie den Namen der Datendatei an. Dieser wird aus dem zu einem früheren Zeitpunkt angegebenen Datenbanknamen abgeleitet. **Name der Protokolldatei** Zeigen Sie den Namen der Protokolldatei an. Dieser wird aus dem zu einem früheren Zeitpunkt angegebenen Datenbanknamen abgeleitet. **Anfangsgröße (Datendatei)** Geben Sie die Anzahl der Megabytes für die Anfangsgröße der Datendatei an. **Keine Vergrößerung zulässig (Datendatei)** Geben Sie an, ob die Datendatei die Größe der angegebenen Anfangsgröße übersteigen kann. **Vergrößerung nach Prozent (Datendatei)** Geben Sie einen Prozentsatz an, um den die Datendatei wachsen kann. **Vergrößerung nach Größe (Datendatei)** Geben Sie die Anzahl der Megabytes an, um die die Datendatei wachsen kann. **Anfangsgröße (Protokolldatei)** Geben Sie die Anzahl der Megabytes für die Anfangsgröße der Protokolldatei an. **Keine Vergrößerung zulässig (Protokolldatei)** Geben Sie an, ob die Protokolldatei die Größe der angegebenen Anfangsgröße übersteigen kann. **Vergrößerung nach Prozent (Protokolldatei)** Geben Sie einen Prozentsatz an, um den die Protokolldatei wachsen kann. **Vergrößerung nach Größe (Protokolldatei)** Geben Sie die Anzahl der Megabytes an, um die die Protokolldatei wachsen kann.
55
488
0.783471
deu_Latn
0.988549
f4fafbc24f9d915b7361dbaec305dcd74ebb0be2
7,720
md
Markdown
articles/cognitive-services/Speech-Service/get-started.md
EINSTEINPRACIANO/azure-docs.pt-br
93bbbf115ab76d31e6bc8919a338700294966913
[ "CC-BY-4.0", "MIT" ]
1
2019-05-02T14:26:54.000Z
2019-05-02T14:26:54.000Z
articles/cognitive-services/Speech-Service/get-started.md
jhomarolo/azure-docs.pt-br
d11ab7fab56d90666ea619c6b12754b7761aca97
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Speech-Service/get-started.md
jhomarolo/azure-docs.pt-br
d11ab7fab56d90666ea619c6b12754b7761aca97
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Try the Speech Services for free
titleSuffix: Azure Cognitive Services
description: It's easy and affordable to try the Speech Services. A 30-day free trial lets you discover what the service can do and decide whether it's right for your application's needs.
services: cognitive-services
author: erhopf
manager: nitinme
ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: conceptual
ms.date: 02/08/2019
ms.author: erhopf
ms.custom: seodec18
ms.openlocfilehash: 09cc38cd5343e8b01b3e704191ea40c133d724f8
ms.sourcegitcommit: 3102f886aa962842303c8753fe8fa5324a52834a
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/23/2019
ms.locfileid: "60541405"
---
# <a name="try-speech-services-for-free"></a>Try the Speech Services for free

It's easy and affordable to try the Speech Services. A 30-day free trial lets you discover what the service can do and decide whether it's right for your application's needs.

If you need more time, sign up for a Microsoft Azure account; it comes with $200 in service credit that you can apply toward a paid Speech Services subscription for up to 30 days. Finally, the Speech Services offer a free, low-volume tier that's suitable for developing applications. You can keep this free subscription even after your service credit expires.

## <a name="free-trial"></a>Free trial

The 30-day free trial gives you access to the standard pricing tier for a limited time.

To sign up for a 30-day free trial:

1. Go to [Try Cognitive Services](https://azure.microsoft.com/try/cognitive-services/).
1. Select the **Speech APIs** tab. ![Speech Services tab](media/index/try-speech-api-free-trial1.png)
1. Under **Speech Services**, select the **Get API Key** button. ![API key](media/index/try-speech-api-free-trial2.png)
1. Accept the terms and select your locale from the drop-down menu. ![Agree to the terms](media/index/try-speech-api-free-trial3.png)
1. Sign in by using your Microsoft, Facebook, LinkedIn, or GitHub account. You can sign up for a free Microsoft account at the [Microsoft account portal](https://account.microsoft.com/account). To get started, click **Sign in with Microsoft account** and, when asked to sign in, click **Create one.** Follow the steps to create and verify your new Microsoft account.

After you sign in to Try Cognitive Services, your free trial begins. The webpage that's displayed lists all the Azure Cognitive Services for which you currently have trial subscriptions. Two subscription keys are listed beside **Speech Services**. You can use either key in your applications.

> [!NOTE]
> All free trial subscriptions are in the West US region. When you make requests, use the `westus` endpoint.

## <a name="new-azure-account"></a>New Azure account

New Azure accounts receive a $200 service credit that's available for up to 30 days. You can use this credit to further explore the Speech Services or to begin application development.

To sign up for a new Azure account, go to the [Azure sign-up page](https://azure.microsoft.com/free/ai/), click **Start Free**, and create a new Azure account by using your Microsoft account. You can sign up for a free Microsoft account at the [Microsoft account portal](https://account.microsoft.com/account). To get started, click **Sign in with Microsoft account** and, when asked to sign in, click **Create one.** Follow the steps to create and verify your new Microsoft account.

After you create your Azure account, follow the steps in the next section to start a subscription to the Speech Services.

## <a name="create-a-speech-resource-in-azure"></a>Create a Speech resource in Azure

To add a Speech Services resource (free or paid tier) to your Azure account:

1. Sign in to the [Azure portal](https://portal.azure.com/) by using your Microsoft account.
1. Select **Create a resource** at the top left of the portal. ![Create a resource](media/index/try-speech-api-create-speech1.png)
1. In the **New** window, search for **speech**.
1. In the search results, select **Speech**. ![Select Speech](media/index/try-speech-api-create-speech2.png)
1. Under **Speech**, select the **Create** button. ![Select the Create button](media/index/try-speech-api-create-speech3.png)
1. Under **Create**, enter:
   * A name for the new resource. The name helps you distinguish among multiple subscriptions to the same service.
   * The Azure subscription that the new resource is associated with, which determines how the fees are billed.
   * The region where the resource will be used. Currently, the Speech Services are available in the East Asia, North Europe, and West US regions.
   * A free or paid pricing tier. Click **View full pricing details** for complete information about pricing and usage quotas for each tier.
   * A new resource group for this Speech subscription, or an existing resource group to assign the subscription to. Resource groups help you keep your various Azure subscriptions organized.
   * For convenient access to your subscription in the future, select the **Pin to dashboard** check box.
   * Select **Create.** ![Select the Create button](media/index/try-speech-api-create-speech4.png)

   It takes a moment to create and deploy your new Speech resource. Select **Quickstart** to see information about the new resource. ![Quickstart panel](media/index/try-speech-api-create-speech5.png)
1. Under **Quickstart**, click the **Keys** link under step 1 to display your subscription keys. Each subscription has two keys; you can use either key in your application. Select the button next to each key to copy it to the clipboard for pasting into your code.

> [!NOTE]
> You can create an unlimited number of standard-tier subscriptions in one or multiple regions. However, you can create only one free-tier subscription. Model deployments on the free tier that remain unused for 7 days are automatically decommissioned.

## <a name="switch-to-a-new-subscription"></a>Switch to a new subscription

To switch from one subscription to another, for example when your free trial expires or when you publish your application, replace the region and subscription key in your code with the region and subscription key of the new Azure resource.

> [!NOTE]
> Free trial keys are created in the West US (`westus`) region. A subscription created via the Azure dashboard may be in another region if you so choose.

* If your application uses a [Speech SDK](speech-sdk.md), you provide the region code, such as `westus`, when you create a speech configuration.
* If your application uses one of the Speech Services' [REST APIs](rest-apis.md), the region is part of the endpoint URI you use when making requests.

Keys created for a region are valid only in that region. Attempting to use them with other regions results in authentication errors.

## <a name="next-steps"></a>Next steps

Complete one of our 10-minute quickstarts or check out our SDK samples:

> [!div class="nextstepaction"]
> [Quickstart: Recognize speech in C#](quickstart-csharp-dotnet-windows.md)
> [Speech SDK samples](speech-sdk.md#get-the-samples)
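Because the region is part of the REST endpoint's host name and keys only work in their own region, switching subscriptions means rebuilding the endpoint from the new region code. A minimal sketch of that pattern (the helper function is ours, not part of any SDK; the token-issuing path shown is the standard Cognitive Services one, assuming a free-trial key in `westus`):

```python
# Sketch: the Azure region code forms part of the Speech REST endpoint host.
# A key is valid only in the region it was issued for, so the same region
# string must be used here and when the subscription was created.
def token_endpoint(region: str) -> str:
    """Build the token-issuing endpoint URI for a given Azure region."""
    return f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"

# Free-trial subscriptions live in West US:
print(token_endpoint("westus"))
# https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken
```

Switching to a paid North Europe resource, for example, would be just `token_endpoint("northeurope")` plus the new key; no other code changes are needed.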
60.3125
328
0.772021
por_Latn
0.999874
f4fb34caaf2b232e6e03adea72c244349017cc5a
51,155
md
Markdown
repos/php-zendserver/tag-details.md
satorici/test.docker-library.repo-info
dff4bf57df35649c3c4bacb605f89f64bf01bbbd
[ "Apache-2.0" ]
2
2020-01-03T00:11:05.000Z
2022-02-03T18:28:14.000Z
repos/php-zendserver/tag-details.md
Melon-Tropics/repo-info
6654e3ebb0f4af168acafd22a2e4a0764f18401a
[ "Apache-2.0" ]
null
null
null
repos/php-zendserver/tag-details.md
Melon-Tropics/repo-info
6654e3ebb0f4af168acafd22a2e4a0764f18401a
[ "Apache-2.0" ]
1
2022-01-26T17:51:39.000Z
2022-01-26T17:51:39.000Z
<!-- THIS FILE IS GENERATED VIA './update-remote.sh' --> # Tags of `php-zendserver` - [`php-zendserver:2019.0`](#php-zendserver20190) - [`php-zendserver:2021.0`](#php-zendserver20210) - [`php-zendserver:5.6`](#php-zendserver56) - [`php-zendserver:8.5`](#php-zendserver85) - [`php-zendserver:8.5-php5.6`](#php-zendserver85-php56) - [`php-zendserver:9.1`](#php-zendserver91) - [`php-zendserver:latest`](#php-zendserverlatest) ## `php-zendserver:2019.0` ```console $ docker pull php-zendserver@sha256:a8fa0aade4a25e054dcaf58c5212fb3a5891c15a3077be7ad943655afd10aeb1 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: 1 - linux; amd64 ### `php-zendserver:2019.0` - linux; amd64 ```console $ docker pull php-zendserver@sha256:9c73b855be887ab4696d5b2f733a43326398a42a465797a45f3f3a8f2833d00a ``` - Docker Version: 20.10.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **484.7 MB (484743521 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:576d3cd55f966f330a313911f3114b5febb4c3ae1d0978c15a5dfdf913b9d6dc` - Default Command: `["\/usr\/local\/bin\/run"]` ```dockerfile # Fri, 07 Jan 2022 02:25:21 GMT ADD file:2aa3e79e3cff3c048612744ac310cf86bc27de3433d420711bafc6612445befc in / # Fri, 07 Jan 2022 02:25:21 GMT CMD ["bash"] # Fri, 07 Jan 2022 05:31:33 GMT RUN apt-get update && apt-get install -y gnupg # Fri, 07 Jan 2022 05:31:38 GMT RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-key 799058698E65316A2E7A4FF42EAE1437F7D2C623 # Fri, 07 Jan 2022 05:31:38 GMT COPY file:23f8c2a96f087277b95ebfd7f401f5c1b95ec7f3443fa9231607566f1d8e7270 in /etc/apt/sources.list.d/zend-server.list # Fri, 07 Jan 2022 05:33:26 GMT RUN apt-get update && apt-get install -y iproute2 curl libmysqlclient20 unzip git zend-server-nginx=2019.0.7+b403 && rm -rf /var/lib/apt/lists/* && /usr/local/zend/bin/zendctl.sh stop # Fri, 07 Jan 2022 05:33:30 GMT ENV ZS_INIT_VERSION=0.3 # Fri, 07 Jan 2022 05:33:30 GMT ENV 
ZS_INIT_SHA256=e8d441d8503808e9fc0fafc762b2cb80d4a6e68b94fede0fe41efdeac10800cb # Fri, 07 Jan 2022 05:33:31 GMT COPY file:ad21ce0b2dc8345be0ef63836774934d6b2045ddc3685411221a07dd10b649d1 in /tmp/zs-init.patch # Fri, 07 Jan 2022 05:33:32 GMT RUN curl -fSL -o zs-init.tar.gz "http://repos.zend.com/zs-init/zs-init-docker-${ZS_INIT_VERSION}.tar.gz" && echo "${ZS_INIT_SHA256} *zs-init.tar.gz" | sha256sum -c - && mkdir /usr/local/zs-init && tar xzf zs-init.tar.gz --strip-components=1 -C /usr/local/zs-init && rm zs-init.tar.gz && patch -u /usr/local/zs-init/src/Init/Steps/AbstractStep.php -i /tmp/zs-init.patch && rm /tmp/zs-init.patch # Fri, 07 Jan 2022 05:33:32 GMT WORKDIR /usr/local/zs-init # Fri, 07 Jan 2022 05:33:38 GMT RUN /usr/local/zend/bin/php -r "readfile('https://getcomposer.org/installer');" | /usr/local/zend/bin/php && /usr/local/zend/bin/php composer.phar update # Fri, 07 Jan 2022 05:33:39 GMT COPY dir:eecd98e9ebf1c61a12ae67558eb2a6ce846b9ebfadabbf08503e90b3e30d9496 in /usr/local/bin # Fri, 07 Jan 2022 05:33:39 GMT COPY dir:80bde0d50316e7c9350262fe3b75826a91d075303027787e759d703b60df13d6 in /usr/local/zend/var/plugins/ # Fri, 07 Jan 2022 05:33:40 GMT RUN rm /var/www/html/index.nginx-debian.html # Fri, 07 Jan 2022 05:33:40 GMT COPY dir:d174a5d34625889b4356c566972566e0ca7da618b01ea42276562f8186517a67 in /var/www/html # Fri, 07 Jan 2022 05:33:40 GMT EXPOSE 80 # Fri, 07 Jan 2022 05:33:40 GMT EXPOSE 443 # Fri, 07 Jan 2022 05:33:40 GMT EXPOSE 10081 # Fri, 07 Jan 2022 05:33:41 GMT EXPOSE 10082 # Fri, 07 Jan 2022 05:33:41 GMT WORKDIR /var/www/html # Fri, 07 Jan 2022 05:33:41 GMT CMD ["/usr/local/bin/run"] ``` - Layers: - `sha256:2f94e549220aea96f00cae7eb95f401e61b41a16cc5eb0b4ea592d0ce871930a` Last Modified: Thu, 06 Jan 2022 23:50:21 GMT Size: 26.7 MB (26705027 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:57fc838010b9e661999aeaa9b8f201295e755543a4cdf71cc205b8c6f316ee63` Last Modified: Fri, 07 Jan 2022 05:36:02 GMT Size: 34.2 MB (34204735 
bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:95538f4f0f751a40bc37cf1d22a8101036eee4b20f36ba0eeed2fcf353944040` Last Modified: Fri, 07 Jan 2022 05:35:57 GMT Size: 1.4 KB (1395 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:3269468397ec430e279ab7621774c007915467710b388d2dac02102d1f314bf3` Last Modified: Fri, 07 Jan 2022 05:35:56 GMT Size: 236.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:1c78bcfb3204adc67799abf903554d457d595c1705bde6a5f6aaa89a192735a8` Last Modified: Fri, 07 Jan 2022 05:36:49 GMT Size: 418.6 MB (418619085 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f0cd0d3c2d497f0791a35f6d167fe71919d1bb48fab0d64842fef8708bf8dd3d` Last Modified: Fri, 07 Jan 2022 05:35:56 GMT Size: 445.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:59c45da426205857ad657e083179084c28bc94e79d77c2352649864a6b3c2e2a` Last Modified: Fri, 07 Jan 2022 05:35:56 GMT Size: 18.9 KB (18927 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:a0c987c6aef2b5b9d762927a5d089122daec917752f2403eceb0bdc91cc41073` Last Modified: Fri, 07 Jan 2022 05:35:55 GMT Size: 5.2 MB (5175377 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2457dcd0c1663ebf9c4ea660d8e5a2bae6db173e5b6e83340f6f349d50a8f4b8` Last Modified: Fri, 07 Jan 2022 05:35:53 GMT Size: 14.3 KB (14292 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:9d067adb2dead3eac08a0a4320e1c021ecb3ac1418b15743ad523058af866672` Last Modified: Fri, 07 Jan 2022 05:35:53 GMT Size: 2.6 KB (2560 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:68d87e3048385c8a89eb8fe1f5bd74313197dbb6976e34422e48d46cf9877c48` Last Modified: Fri, 07 Jan 2022 05:35:53 GMT Size: 187.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:715903be25e107fd0037c9ed66f8607a89112f4da5853d693c5c1a2b9062f04a` Last Modified: Fri, 07 Jan 2022 05:35:54 GMT Size: 1.3 
KB (1255 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ## `php-zendserver:2021.0` ```console $ docker pull php-zendserver@sha256:8aba264f01337eeb40297aeefdba03420957ea5d2520edc405f835e9a9b4dd32 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: 1 - linux; amd64 ### `php-zendserver:2021.0` - linux; amd64 ```console $ docker pull php-zendserver@sha256:60132736f3148d9ae841e49d7050ded15ffc44bb5accaa4f457f169b29ccf0fd ``` - Docker Version: 20.10.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **392.6 MB (392616981 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:c9b02abac2707b5fdce1480aeea0b49f0730303abe73317c430f506d47bfb1cc` - Default Command: `["\/usr\/local\/bin\/run"]` ```dockerfile # Fri, 07 Jan 2022 02:25:21 GMT ADD file:2aa3e79e3cff3c048612744ac310cf86bc27de3433d420711bafc6612445befc in / # Fri, 07 Jan 2022 02:25:21 GMT CMD ["bash"] # Fri, 07 Jan 2022 05:31:33 GMT RUN apt-get update && apt-get install -y gnupg # Fri, 07 Jan 2022 05:31:38 GMT RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-key 799058698E65316A2E7A4FF42EAE1437F7D2C623 # Fri, 07 Jan 2022 05:33:48 GMT COPY file:1e70d8fd6f9643bffb703528edddba0aa02a58e95cc53e92f58a86cde29e732a in /etc/apt/sources.list.d/zend-server.list # Fri, 07 Jan 2022 05:35:23 GMT RUN apt-get update && apt-get install -y iproute2 curl libmysqlclient20 unzip git zend-server-nginx=2021.0.0+b74 && rm -rf /var/lib/apt/lists/* && /usr/local/zend/bin/zendctl.sh stop # Fri, 07 Jan 2022 05:35:26 GMT ENV ZS_INIT_VERSION=0.3 # Fri, 07 Jan 2022 05:35:27 GMT ENV ZS_INIT_SHA256=e8d441d8503808e9fc0fafc762b2cb80d4a6e68b94fede0fe41efdeac10800cb # Fri, 07 Jan 2022 05:35:27 GMT COPY file:ad21ce0b2dc8345be0ef63836774934d6b2045ddc3685411221a07dd10b649d1 in /tmp/zs-init.patch # Fri, 07 Jan 2022 05:35:28 GMT RUN curl -fSL -o zs-init.tar.gz "http://repos.zend.com/zs-init/zs-init-docker-${ZS_INIT_VERSION}.tar.gz" && echo 
"${ZS_INIT_SHA256} *zs-init.tar.gz" | sha256sum -c - && mkdir /usr/local/zs-init && tar xzf zs-init.tar.gz --strip-components=1 -C /usr/local/zs-init && rm zs-init.tar.gz && patch -u /usr/local/zs-init/src/Init/Steps/AbstractStep.php -i /tmp/zs-init.patch && rm /tmp/zs-init.patch # Fri, 07 Jan 2022 05:35:28 GMT WORKDIR /usr/local/zs-init # Fri, 07 Jan 2022 05:35:34 GMT RUN /usr/local/zend/bin/php -r "readfile('https://getcomposer.org/installer');" | /usr/local/zend/bin/php && /usr/local/zend/bin/php composer.phar update # Fri, 07 Jan 2022 05:35:34 GMT COPY dir:eecd98e9ebf1c61a12ae67558eb2a6ce846b9ebfadabbf08503e90b3e30d9496 in /usr/local/bin # Fri, 07 Jan 2022 05:35:34 GMT COPY dir:80bde0d50316e7c9350262fe3b75826a91d075303027787e759d703b60df13d6 in /usr/local/zend/var/plugins/ # Fri, 07 Jan 2022 05:35:35 GMT RUN rm /var/www/html/index.nginx-debian.html # Fri, 07 Jan 2022 05:35:35 GMT COPY dir:d174a5d34625889b4356c566972566e0ca7da618b01ea42276562f8186517a67 in /var/www/html # Fri, 07 Jan 2022 05:35:36 GMT EXPOSE 80 # Fri, 07 Jan 2022 05:35:36 GMT EXPOSE 443 # Fri, 07 Jan 2022 05:35:36 GMT EXPOSE 10081 # Fri, 07 Jan 2022 05:35:36 GMT EXPOSE 10082 # Fri, 07 Jan 2022 05:35:36 GMT WORKDIR /var/www/html # Fri, 07 Jan 2022 05:35:37 GMT CMD ["/usr/local/bin/run"] ``` - Layers: - `sha256:2f94e549220aea96f00cae7eb95f401e61b41a16cc5eb0b4ea592d0ce871930a` Last Modified: Thu, 06 Jan 2022 23:50:21 GMT Size: 26.7 MB (26705027 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:57fc838010b9e661999aeaa9b8f201295e755543a4cdf71cc205b8c6f316ee63` Last Modified: Fri, 07 Jan 2022 05:36:02 GMT Size: 34.2 MB (34204735 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:95538f4f0f751a40bc37cf1d22a8101036eee4b20f36ba0eeed2fcf353944040` Last Modified: Fri, 07 Jan 2022 05:35:57 GMT Size: 1.4 KB (1395 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:7372f3cd060a0813cd3dbf5ac6b08cd055202ec4e89ecb4f0200c66a77a72be6` Last Modified: 
Fri, 07 Jan 2022 05:37:00 GMT Size: 238.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:dd982e11c27a8c46e622ff7fa218c5961dd748b7c78eeb89b7367e41796eebe1` Last Modified: Fri, 07 Jan 2022 05:37:43 GMT Size: 326.5 MB (326492733 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f51225a3a1caf0dade4501b63229615e4b22c3ace8496d8ad49fffcede5e7092` Last Modified: Fri, 07 Jan 2022 05:37:00 GMT Size: 446.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:156652f35f3018708ae6995b6443512c31312a32199ce7945e36ec8619c03fe0` Last Modified: Fri, 07 Jan 2022 05:36:59 GMT Size: 18.9 KB (18929 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:8eb899bdcf9814e787996585539324390d939436be5c091464265e12e473499c` Last Modified: Fri, 07 Jan 2022 05:36:59 GMT Size: 5.2 MB (5175183 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:40e1b87a966c33c3da01539760bf77af8e06f3164bc79c919c7201f6ae637286` Last Modified: Fri, 07 Jan 2022 05:36:57 GMT Size: 14.3 KB (14292 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f98ab6b5efa1a76687d4355077b8b80eb8f819614110709e7f1ebc11f3be3191` Last Modified: Fri, 07 Jan 2022 05:36:57 GMT Size: 2.6 KB (2561 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c0265bdb48187ecf23f832acadea9db04ae983aa259a54271bacf6b51093793b` Last Modified: Fri, 07 Jan 2022 05:36:57 GMT Size: 186.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d5d6e893a644f992442f05d2dfafc245e117726b0b60d6cf8843c0f25054ade6` Last Modified: Fri, 07 Jan 2022 05:36:57 GMT Size: 1.3 KB (1256 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ## `php-zendserver:5.6` ```console $ docker pull php-zendserver@sha256:46087476dd829477b235937b1b774e539c67177fb9c637bde91b28b2e6cf9299 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: 1 - linux; amd64 ### `php-zendserver:5.6` - linux; 
amd64 ```console $ docker pull php-zendserver@sha256:5393691a9d8617643c002a7ec19c90639a480868ae335ab538499d1a05865d4d ``` - Docker Version: 20.10.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **315.6 MB (315608578 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:513ed7b8cb626435a0c10f51c07b1631f83e70f06549e79357644c19c6d30670` - Default Command: `["\/usr\/local\/bin\/run"]` ```dockerfile # Tue, 31 Aug 2021 01:21:27 GMT ADD file:11b425d4c08e81a3e0cb2e0345d27cd5fc844dd83f1096af4cc05f635824ff5d in / # Tue, 31 Aug 2021 01:21:28 GMT RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests # Tue, 31 Aug 2021 01:21:29 GMT RUN rm -rf /var/lib/apt/lists/* # Tue, 31 Aug 2021 01:21:30 GMT RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container # Tue, 31 Aug 2021 01:21:30 GMT CMD ["/bin/bash"] # Tue, 31 Aug 2021 04:02:14 GMT RUN apt-key adv --keyserver keyserver.ubuntu.com 
--recv-key 799058698E65316A2E7A4FF42EAE1437F7D2C623 # Tue, 31 Aug 2021 04:02:15 GMT COPY file:6f0a5450842ae9c3e06d131c7180961d773ca33754e17af8ea2ac258fd0c6054 in /etc/apt/sources.list.d/zend-server.list # Tue, 31 Aug 2021 04:02:15 GMT COPY file:7a9d81c6298f71cfad4408c8b3d7c3bf53bf90083221ec55686e12b2eb6f16a4 in /etc/apt/sources.list.d/ubuntu-trusty.list # Tue, 31 Aug 2021 04:04:03 GMT RUN apt-get update && apt-get install -y curl libmysqlclient18 unzip git zend-server-php-5.6=8.5.17+b19 && rm -rf /var/lib/apt/lists/* && /usr/local/zend/bin/zendctl.sh stop # Tue, 31 Aug 2021 04:04:06 GMT COPY file:def46bbc651bcb61f92bcaa2f8d6edec0c22e51e86132fabd2e47542dcbec0bf in /etc/apache2/conf-available # Tue, 31 Aug 2021 04:04:06 GMT RUN /usr/sbin/a2enconf drop-http-proxy-header # Tue, 31 Aug 2021 04:04:07 GMT RUN /usr/sbin/a2enmod headers # Tue, 31 Aug 2021 04:04:07 GMT ENV ZS_INIT_VERSION=0.3 # Tue, 31 Aug 2021 04:04:08 GMT ENV ZS_INIT_SHA256=e8d441d8503808e9fc0fafc762b2cb80d4a6e68b94fede0fe41efdeac10800cb # Tue, 31 Aug 2021 04:04:09 GMT RUN curl -fSL -o zs-init.tar.gz "http://repos.zend.com/zs-init/zs-init-docker-${ZS_INIT_VERSION}.tar.gz" && echo "${ZS_INIT_SHA256} *zs-init.tar.gz" | sha256sum -c - && mkdir /usr/local/zs-init && tar xzf zs-init.tar.gz --strip-components=1 -C /usr/local/zs-init && rm zs-init.tar.gz # Tue, 31 Aug 2021 04:04:09 GMT WORKDIR /usr/local/zs-init # Tue, 31 Aug 2021 04:04:16 GMT RUN /usr/local/zend/bin/php -r "readfile('https://getcomposer.org/installer');" | /usr/local/zend/bin/php && /usr/local/zend/bin/php composer.phar self-update && /usr/local/zend/bin/php composer.phar update # Tue, 31 Aug 2021 04:04:16 GMT COPY dir:b75978d6e77900379bb898c52c86c408d7f6fcd06b5c66439d594a1a3dcca0b4 in /usr/local/bin # Tue, 31 Aug 2021 04:04:16 GMT COPY dir:80bde0d50316e7c9350262fe3b75826a91d075303027787e759d703b60df13d6 in /usr/local/zend/var/plugins/ # Tue, 31 Aug 2021 04:04:17 GMT RUN rm /var/www/html/index.html # Tue, 31 Aug 2021 04:04:17 GMT COPY 
dir:d174a5d34625889b4356c566972566e0ca7da618b01ea42276562f8186517a67 in /var/www/html # Tue, 31 Aug 2021 04:04:17 GMT EXPOSE 80 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 443 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 10081 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 10082 # Tue, 31 Aug 2021 04:04:18 GMT WORKDIR /var/www/html # Tue, 31 Aug 2021 04:04:18 GMT CMD ["/usr/local/bin/run"] ``` - Layers: - `sha256:58690f9b18fca6469a14da4e212c96849469f9b1be6661d2342a4bf01774aa50` Last Modified: Thu, 05 Aug 2021 00:25:05 GMT Size: 46.5 MB (46497548 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b51569e7c50720acf6860327847fe342a1afbe148d24c529fb81df105e3eed01` Last Modified: Tue, 31 Aug 2021 01:23:09 GMT Size: 857.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:da8ef40b9ecabc2679fe2419957220c0272a965c5cf7e0269fa1aeeb8c56f2e1` Last Modified: Tue, 31 Aug 2021 01:23:08 GMT Size: 528.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fb15d46c38dcd1ea0b1990006c3366ecd10c79d374f341687eb2cb23a2c8672e` Last Modified: Tue, 31 Aug 2021 01:23:08 GMT Size: 170.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2102dd9376f61b14df18139d78a96caeb38426131bb6a85847a5733f84d83c32` Last Modified: Tue, 31 Aug 2021 04:11:12 GMT Size: 14.7 KB (14704 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:ae050479344fbb7ea0f86b97e8c772b7ae45103fa5cfc2bea2a4b79c631a1534` Last Modified: Tue, 31 Aug 2021 04:11:11 GMT Size: 237.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d7d990e63418cea6828a1754a0cfd88662fe780fa88dd2de23528d624ab6bc35` Last Modified: Tue, 31 Aug 2021 04:11:11 GMT Size: 278.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:5e78264dffd8afd8084e307db7cfd44f971f55d76534559ccd72b50b64108726` Last Modified: Tue, 31 Aug 2021 04:11:40 GMT Size: 263.9 MB (263910611 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:cd783101a133a035071fca065237130e373e2f4cc4151957c9c461ae9a79cdf2` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 263.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d19cbd07b1b667393092b9d2df1833f0b331cfe1eb5e6ad304f4acbda03136ab` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 336.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c21aeee9b83798ca4fd42d7e5aa46ecd288c027160673f1be4f3d9d57b70a383` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 307.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:cf7e05a640abfb3d3865705d1a482da844acc818d5c32089ee78ec8ba477136d` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 18.9 KB (18859 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:be58af708ccc85c876f8148b3f7af699d37b1d23a25d76c1c8eaedd03b6dcc9b` Last Modified: Tue, 31 Aug 2021 04:11:08 GMT Size: 5.1 MB (5146536 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fb1df10206b55b2eb8839919bb0d056e7256e1dd2bb398a492c5b8317235fe6d` Last Modified: Tue, 31 Aug 2021 04:11:07 GMT Size: 13.4 KB (13357 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:703d37b5f26f094cbc47c55b9abf4e3fe103e96208d7e102c579384ffe6add64` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 2.6 KB (2566 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:15716512d8f97648b13d29fdbc7df5b8f6ec04e2ca06aa9447b31525f08c1c2c` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 171.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b72ae8545d8a1d76de6aadb8a94a839c2d60e2f164e0323e0ad4cc73a1853110` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 1.2 KB (1250 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ## `php-zendserver:8.5` ```console $ docker pull php-zendserver@sha256:46087476dd829477b235937b1b774e539c67177fb9c637bde91b28b2e6cf9299 ``` - Manifest MIME: 
`application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: 1 - linux; amd64 ### `php-zendserver:8.5` - linux; amd64 ```console $ docker pull php-zendserver@sha256:5393691a9d8617643c002a7ec19c90639a480868ae335ab538499d1a05865d4d ``` - Docker Version: 20.10.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **315.6 MB (315608578 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:513ed7b8cb626435a0c10f51c07b1631f83e70f06549e79357644c19c6d30670` - Default Command: `["\/usr\/local\/bin\/run"]` ```dockerfile # Tue, 31 Aug 2021 01:21:27 GMT ADD file:11b425d4c08e81a3e0cb2e0345d27cd5fc844dd83f1096af4cc05f635824ff5d in / # Tue, 31 Aug 2021 01:21:28 GMT RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests # Tue, 31 Aug 2021 01:21:29 GMT RUN rm -rf /var/lib/apt/lists/* # Tue, 31 Aug 2021 01:21:30 GMT RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container # Tue, 
31 Aug 2021 01:21:30 GMT CMD ["/bin/bash"] # Tue, 31 Aug 2021 04:02:14 GMT RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-key 799058698E65316A2E7A4FF42EAE1437F7D2C623 # Tue, 31 Aug 2021 04:02:15 GMT COPY file:6f0a5450842ae9c3e06d131c7180961d773ca33754e17af8ea2ac258fd0c6054 in /etc/apt/sources.list.d/zend-server.list # Tue, 31 Aug 2021 04:02:15 GMT COPY file:7a9d81c6298f71cfad4408c8b3d7c3bf53bf90083221ec55686e12b2eb6f16a4 in /etc/apt/sources.list.d/ubuntu-trusty.list # Tue, 31 Aug 2021 04:04:03 GMT RUN apt-get update && apt-get install -y curl libmysqlclient18 unzip git zend-server-php-5.6=8.5.17+b19 && rm -rf /var/lib/apt/lists/* && /usr/local/zend/bin/zendctl.sh stop # Tue, 31 Aug 2021 04:04:06 GMT COPY file:def46bbc651bcb61f92bcaa2f8d6edec0c22e51e86132fabd2e47542dcbec0bf in /etc/apache2/conf-available # Tue, 31 Aug 2021 04:04:06 GMT RUN /usr/sbin/a2enconf drop-http-proxy-header # Tue, 31 Aug 2021 04:04:07 GMT RUN /usr/sbin/a2enmod headers # Tue, 31 Aug 2021 04:04:07 GMT ENV ZS_INIT_VERSION=0.3 # Tue, 31 Aug 2021 04:04:08 GMT ENV ZS_INIT_SHA256=e8d441d8503808e9fc0fafc762b2cb80d4a6e68b94fede0fe41efdeac10800cb # Tue, 31 Aug 2021 04:04:09 GMT RUN curl -fSL -o zs-init.tar.gz "http://repos.zend.com/zs-init/zs-init-docker-${ZS_INIT_VERSION}.tar.gz" && echo "${ZS_INIT_SHA256} *zs-init.tar.gz" | sha256sum -c - && mkdir /usr/local/zs-init && tar xzf zs-init.tar.gz --strip-components=1 -C /usr/local/zs-init && rm zs-init.tar.gz # Tue, 31 Aug 2021 04:04:09 GMT WORKDIR /usr/local/zs-init # Tue, 31 Aug 2021 04:04:16 GMT RUN /usr/local/zend/bin/php -r "readfile('https://getcomposer.org/installer');" | /usr/local/zend/bin/php && /usr/local/zend/bin/php composer.phar self-update && /usr/local/zend/bin/php composer.phar update # Tue, 31 Aug 2021 04:04:16 GMT COPY dir:b75978d6e77900379bb898c52c86c408d7f6fcd06b5c66439d594a1a3dcca0b4 in /usr/local/bin # Tue, 31 Aug 2021 04:04:16 GMT COPY dir:80bde0d50316e7c9350262fe3b75826a91d075303027787e759d703b60df13d6 in 
/usr/local/zend/var/plugins/ # Tue, 31 Aug 2021 04:04:17 GMT RUN rm /var/www/html/index.html # Tue, 31 Aug 2021 04:04:17 GMT COPY dir:d174a5d34625889b4356c566972566e0ca7da618b01ea42276562f8186517a67 in /var/www/html # Tue, 31 Aug 2021 04:04:17 GMT EXPOSE 80 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 443 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 10081 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 10082 # Tue, 31 Aug 2021 04:04:18 GMT WORKDIR /var/www/html # Tue, 31 Aug 2021 04:04:18 GMT CMD ["/usr/local/bin/run"] ``` - Layers: - `sha256:58690f9b18fca6469a14da4e212c96849469f9b1be6661d2342a4bf01774aa50` Last Modified: Thu, 05 Aug 2021 00:25:05 GMT Size: 46.5 MB (46497548 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b51569e7c50720acf6860327847fe342a1afbe148d24c529fb81df105e3eed01` Last Modified: Tue, 31 Aug 2021 01:23:09 GMT Size: 857.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:da8ef40b9ecabc2679fe2419957220c0272a965c5cf7e0269fa1aeeb8c56f2e1` Last Modified: Tue, 31 Aug 2021 01:23:08 GMT Size: 528.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fb15d46c38dcd1ea0b1990006c3366ecd10c79d374f341687eb2cb23a2c8672e` Last Modified: Tue, 31 Aug 2021 01:23:08 GMT Size: 170.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2102dd9376f61b14df18139d78a96caeb38426131bb6a85847a5733f84d83c32` Last Modified: Tue, 31 Aug 2021 04:11:12 GMT Size: 14.7 KB (14704 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:ae050479344fbb7ea0f86b97e8c772b7ae45103fa5cfc2bea2a4b79c631a1534` Last Modified: Tue, 31 Aug 2021 04:11:11 GMT Size: 237.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d7d990e63418cea6828a1754a0cfd88662fe780fa88dd2de23528d624ab6bc35` Last Modified: Tue, 31 Aug 2021 04:11:11 GMT Size: 278.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:5e78264dffd8afd8084e307db7cfd44f971f55d76534559ccd72b50b64108726` Last Modified: Tue, 31 Aug 2021 
04:11:40 GMT Size: 263.9 MB (263910611 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:cd783101a133a035071fca065237130e373e2f4cc4151957c9c461ae9a79cdf2` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 263.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d19cbd07b1b667393092b9d2df1833f0b331cfe1eb5e6ad304f4acbda03136ab` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 336.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c21aeee9b83798ca4fd42d7e5aa46ecd288c027160673f1be4f3d9d57b70a383` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 307.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:cf7e05a640abfb3d3865705d1a482da844acc818d5c32089ee78ec8ba477136d` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 18.9 KB (18859 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:be58af708ccc85c876f8148b3f7af699d37b1d23a25d76c1c8eaedd03b6dcc9b` Last Modified: Tue, 31 Aug 2021 04:11:08 GMT Size: 5.1 MB (5146536 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fb1df10206b55b2eb8839919bb0d056e7256e1dd2bb398a492c5b8317235fe6d` Last Modified: Tue, 31 Aug 2021 04:11:07 GMT Size: 13.4 KB (13357 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:703d37b5f26f094cbc47c55b9abf4e3fe103e96208d7e102c579384ffe6add64` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 2.6 KB (2566 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:15716512d8f97648b13d29fdbc7df5b8f6ec04e2ca06aa9447b31525f08c1c2c` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 171.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b72ae8545d8a1d76de6aadb8a94a839c2d60e2f164e0323e0ad4cc73a1853110` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 1.2 KB (1250 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ## `php-zendserver:8.5-php5.6` ```console $ docker pull 
php-zendserver@sha256:46087476dd829477b235937b1b774e539c67177fb9c637bde91b28b2e6cf9299 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: 1 - linux; amd64 ### `php-zendserver:8.5-php5.6` - linux; amd64 ```console $ docker pull php-zendserver@sha256:5393691a9d8617643c002a7ec19c90639a480868ae335ab538499d1a05865d4d ``` - Docker Version: 20.10.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **315.6 MB (315608578 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:513ed7b8cb626435a0c10f51c07b1631f83e70f06549e79357644c19c6d30670` - Default Command: `["\/usr\/local\/bin\/run"]` ```dockerfile # Tue, 31 Aug 2021 01:21:27 GMT ADD file:11b425d4c08e81a3e0cb2e0345d27cd5fc844dd83f1096af4cc05f635824ff5d in / # Tue, 31 Aug 2021 01:21:28 GMT RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests # Tue, 31 Aug 2021 01:21:29 GMT RUN rm -rf 
/var/lib/apt/lists/* # Tue, 31 Aug 2021 01:21:30 GMT RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container # Tue, 31 Aug 2021 01:21:30 GMT CMD ["/bin/bash"] # Tue, 31 Aug 2021 04:02:14 GMT RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-key 799058698E65316A2E7A4FF42EAE1437F7D2C623 # Tue, 31 Aug 2021 04:02:15 GMT COPY file:6f0a5450842ae9c3e06d131c7180961d773ca33754e17af8ea2ac258fd0c6054 in /etc/apt/sources.list.d/zend-server.list # Tue, 31 Aug 2021 04:02:15 GMT COPY file:7a9d81c6298f71cfad4408c8b3d7c3bf53bf90083221ec55686e12b2eb6f16a4 in /etc/apt/sources.list.d/ubuntu-trusty.list # Tue, 31 Aug 2021 04:04:03 GMT RUN apt-get update && apt-get install -y curl libmysqlclient18 unzip git zend-server-php-5.6=8.5.17+b19 && rm -rf /var/lib/apt/lists/* && /usr/local/zend/bin/zendctl.sh stop # Tue, 31 Aug 2021 04:04:06 GMT COPY file:def46bbc651bcb61f92bcaa2f8d6edec0c22e51e86132fabd2e47542dcbec0bf in /etc/apache2/conf-available # Tue, 31 Aug 2021 04:04:06 GMT RUN /usr/sbin/a2enconf drop-http-proxy-header # Tue, 31 Aug 2021 04:04:07 GMT RUN /usr/sbin/a2enmod headers # Tue, 31 Aug 2021 04:04:07 GMT ENV ZS_INIT_VERSION=0.3 # Tue, 31 Aug 2021 04:04:08 GMT ENV ZS_INIT_SHA256=e8d441d8503808e9fc0fafc762b2cb80d4a6e68b94fede0fe41efdeac10800cb # Tue, 31 Aug 2021 04:04:09 GMT RUN curl -fSL -o zs-init.tar.gz "http://repos.zend.com/zs-init/zs-init-docker-${ZS_INIT_VERSION}.tar.gz" && echo "${ZS_INIT_SHA256} *zs-init.tar.gz" | sha256sum -c - && mkdir /usr/local/zs-init && tar xzf zs-init.tar.gz --strip-components=1 -C /usr/local/zs-init && rm zs-init.tar.gz # Tue, 31 Aug 2021 04:04:09 GMT WORKDIR /usr/local/zs-init # Tue, 31 Aug 2021 04:04:16 GMT RUN /usr/local/zend/bin/php -r "readfile('https://getcomposer.org/installer');" | /usr/local/zend/bin/php && /usr/local/zend/bin/php composer.phar self-update && /usr/local/zend/bin/php composer.phar update # Tue, 31 Aug 2021 04:04:16 GMT COPY dir:b75978d6e77900379bb898c52c86c408d7f6fcd06b5c66439d594a1a3dcca0b4 in 
/usr/local/bin # Tue, 31 Aug 2021 04:04:16 GMT COPY dir:80bde0d50316e7c9350262fe3b75826a91d075303027787e759d703b60df13d6 in /usr/local/zend/var/plugins/ # Tue, 31 Aug 2021 04:04:17 GMT RUN rm /var/www/html/index.html # Tue, 31 Aug 2021 04:04:17 GMT COPY dir:d174a5d34625889b4356c566972566e0ca7da618b01ea42276562f8186517a67 in /var/www/html # Tue, 31 Aug 2021 04:04:17 GMT EXPOSE 80 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 443 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 10081 # Tue, 31 Aug 2021 04:04:18 GMT EXPOSE 10082 # Tue, 31 Aug 2021 04:04:18 GMT WORKDIR /var/www/html # Tue, 31 Aug 2021 04:04:18 GMT CMD ["/usr/local/bin/run"] ``` - Layers: - `sha256:58690f9b18fca6469a14da4e212c96849469f9b1be6661d2342a4bf01774aa50` Last Modified: Thu, 05 Aug 2021 00:25:05 GMT Size: 46.5 MB (46497548 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b51569e7c50720acf6860327847fe342a1afbe148d24c529fb81df105e3eed01` Last Modified: Tue, 31 Aug 2021 01:23:09 GMT Size: 857.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:da8ef40b9ecabc2679fe2419957220c0272a965c5cf7e0269fa1aeeb8c56f2e1` Last Modified: Tue, 31 Aug 2021 01:23:08 GMT Size: 528.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fb15d46c38dcd1ea0b1990006c3366ecd10c79d374f341687eb2cb23a2c8672e` Last Modified: Tue, 31 Aug 2021 01:23:08 GMT Size: 170.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2102dd9376f61b14df18139d78a96caeb38426131bb6a85847a5733f84d83c32` Last Modified: Tue, 31 Aug 2021 04:11:12 GMT Size: 14.7 KB (14704 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:ae050479344fbb7ea0f86b97e8c772b7ae45103fa5cfc2bea2a4b79c631a1534` Last Modified: Tue, 31 Aug 2021 04:11:11 GMT Size: 237.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d7d990e63418cea6828a1754a0cfd88662fe780fa88dd2de23528d624ab6bc35` Last Modified: Tue, 31 Aug 2021 04:11:11 GMT Size: 278.0 B MIME: 
application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:5e78264dffd8afd8084e307db7cfd44f971f55d76534559ccd72b50b64108726` Last Modified: Tue, 31 Aug 2021 04:11:40 GMT Size: 263.9 MB (263910611 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:cd783101a133a035071fca065237130e373e2f4cc4151957c9c461ae9a79cdf2` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 263.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d19cbd07b1b667393092b9d2df1833f0b331cfe1eb5e6ad304f4acbda03136ab` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 336.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c21aeee9b83798ca4fd42d7e5aa46ecd288c027160673f1be4f3d9d57b70a383` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 307.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:cf7e05a640abfb3d3865705d1a482da844acc818d5c32089ee78ec8ba477136d` Last Modified: Tue, 31 Aug 2021 04:11:09 GMT Size: 18.9 KB (18859 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:be58af708ccc85c876f8148b3f7af699d37b1d23a25d76c1c8eaedd03b6dcc9b` Last Modified: Tue, 31 Aug 2021 04:11:08 GMT Size: 5.1 MB (5146536 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fb1df10206b55b2eb8839919bb0d056e7256e1dd2bb398a492c5b8317235fe6d` Last Modified: Tue, 31 Aug 2021 04:11:07 GMT Size: 13.4 KB (13357 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:703d37b5f26f094cbc47c55b9abf4e3fe103e96208d7e102c579384ffe6add64` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 2.6 KB (2566 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:15716512d8f97648b13d29fdbc7df5b8f6ec04e2ca06aa9447b31525f08c1c2c` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 171.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b72ae8545d8a1d76de6aadb8a94a839c2d60e2f164e0323e0ad4cc73a1853110` Last Modified: Tue, 31 Aug 2021 04:11:06 GMT Size: 1.2 KB (1250 bytes) MIME: 
application/vnd.docker.image.rootfs.diff.tar.gzip ## `php-zendserver:9.1` ```console $ docker pull php-zendserver@sha256:cc80372d5dcac564f69bb3c8c066a46638ee40115142f5247079084a5ddc1179 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: 1 - linux; amd64 ### `php-zendserver:9.1` - linux; amd64 ```console $ docker pull php-zendserver@sha256:7b079ac3a11ceab944593ab3fbfbbac482a4d6611fa431c710e2bd0472c87888 ``` - Docker Version: 20.10.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **399.3 MB (399260011 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:78ac8051a5f543e6f729a4fc404e53a1f68ef6dc6c6443b992634a95664429c3` - Default Command: `["\/usr\/local\/bin\/run"]` ```dockerfile # Tue, 31 Aug 2021 01:21:27 GMT ADD file:11b425d4c08e81a3e0cb2e0345d27cd5fc844dd83f1096af4cc05f635824ff5d in / # Tue, 31 Aug 2021 01:21:28 GMT RUN set -xe && echo '#!/bin/sh' > /usr/sbin/policy-rc.d && echo 'exit 101' >> /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d && dpkg-divert --local --rename --add /sbin/initctl && cp -a /usr/sbin/policy-rc.d /sbin/initctl && sed -i 's/^exit.*/exit 0/' /sbin/initctl && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes && echo 'Apt::AutoRemove::SuggestsImportant "false";' > 
/etc/apt/apt.conf.d/docker-autoremove-suggests # Tue, 31 Aug 2021 01:21:29 GMT RUN rm -rf /var/lib/apt/lists/* # Tue, 31 Aug 2021 01:21:30 GMT RUN mkdir -p /run/systemd && echo 'docker' > /run/systemd/container # Tue, 31 Aug 2021 01:21:30 GMT CMD ["/bin/bash"] # Tue, 31 Aug 2021 04:02:14 GMT RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-key 799058698E65316A2E7A4FF42EAE1437F7D2C623 # Tue, 31 Aug 2021 04:04:22 GMT COPY file:0d4830b5060fb75cec6a47b30d343d82f9c3d6f95f20c11635618b93dedb5720 in /etc/apt/sources.list.d/zend-server.list # Tue, 31 Aug 2021 04:05:46 GMT RUN apt-get update && apt-get install -y curl libmysqlclient20 unzip git zend-server-php-7.1=9.1.12+b211 && rm -rf /var/lib/apt/lists/* && /usr/local/zend/bin/zendctl.sh stop # Tue, 31 Aug 2021 04:05:50 GMT COPY file:def46bbc651bcb61f92bcaa2f8d6edec0c22e51e86132fabd2e47542dcbec0bf in /etc/apache2/conf-available # Tue, 31 Aug 2021 04:05:51 GMT RUN /usr/sbin/a2enconf drop-http-proxy-header # Tue, 31 Aug 2021 04:05:52 GMT RUN /usr/sbin/a2enmod headers # Tue, 31 Aug 2021 04:05:52 GMT ENV ZS_INIT_VERSION=0.3 # Tue, 31 Aug 2021 04:05:52 GMT ENV ZS_INIT_SHA256=e8d441d8503808e9fc0fafc762b2cb80d4a6e68b94fede0fe41efdeac10800cb # Tue, 31 Aug 2021 04:05:53 GMT RUN curl -fSL -o zs-init.tar.gz "http://repos.zend.com/zs-init/zs-init-docker-${ZS_INIT_VERSION}.tar.gz" && echo "${ZS_INIT_SHA256} *zs-init.tar.gz" | sha256sum -c - && mkdir /usr/local/zs-init && tar xzf zs-init.tar.gz --strip-components=1 -C /usr/local/zs-init && rm zs-init.tar.gz # Tue, 31 Aug 2021 04:05:53 GMT WORKDIR /usr/local/zs-init # Tue, 31 Aug 2021 04:06:00 GMT RUN /usr/local/zend/bin/php -r "readfile('https://getcomposer.org/installer');" | /usr/local/zend/bin/php && /usr/local/zend/bin/php composer.phar self-update && /usr/local/zend/bin/php composer.phar update # Tue, 31 Aug 2021 04:06:00 GMT COPY dir:5966ca4828c37f68d48d11da814350a590451453f42aa83ef2eab6893db3e4cc in /usr/local/bin # Tue, 31 Aug 2021 04:06:01 GMT COPY 
dir:80bde0d50316e7c9350262fe3b75826a91d075303027787e759d703b60df13d6 in /usr/local/zend/var/plugins/ # Tue, 31 Aug 2021 04:06:01 GMT RUN rm /var/www/html/index.html # Tue, 31 Aug 2021 04:06:02 GMT COPY dir:d174a5d34625889b4356c566972566e0ca7da618b01ea42276562f8186517a67 in /var/www/html # Tue, 31 Aug 2021 04:06:02 GMT EXPOSE 80 # Tue, 31 Aug 2021 04:06:02 GMT EXPOSE 443 # Tue, 31 Aug 2021 04:06:02 GMT EXPOSE 10081 # Tue, 31 Aug 2021 04:06:02 GMT EXPOSE 10082 # Tue, 31 Aug 2021 04:06:03 GMT WORKDIR /var/www/html # Tue, 31 Aug 2021 04:06:03 GMT CMD ["/usr/local/bin/run"] ``` - Layers: - `sha256:58690f9b18fca6469a14da4e212c96849469f9b1be6661d2342a4bf01774aa50` Last Modified: Thu, 05 Aug 2021 00:25:05 GMT Size: 46.5 MB (46497548 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b51569e7c50720acf6860327847fe342a1afbe148d24c529fb81df105e3eed01` Last Modified: Tue, 31 Aug 2021 01:23:09 GMT Size: 857.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:da8ef40b9ecabc2679fe2419957220c0272a965c5cf7e0269fa1aeeb8c56f2e1` Last Modified: Tue, 31 Aug 2021 01:23:08 GMT Size: 528.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fb15d46c38dcd1ea0b1990006c3366ecd10c79d374f341687eb2cb23a2c8672e` Last Modified: Tue, 31 Aug 2021 01:23:08 GMT Size: 170.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2102dd9376f61b14df18139d78a96caeb38426131bb6a85847a5733f84d83c32` Last Modified: Tue, 31 Aug 2021 04:11:12 GMT Size: 14.7 KB (14704 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:6ef9ee709ba9d99042fc4eae3385479692614cfd72cbc83c98522170944f9f1a` Last Modified: Tue, 31 Aug 2021 04:12:01 GMT Size: 234.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:837980bcf5c021764e8ff0b5e1826fa0d23d4d4bbd2fe02962a470f7d8df8f11` Last Modified: Tue, 31 Aug 2021 04:12:43 GMT Size: 347.6 MB (347557251 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:f311d7aa89c09f28fce59942e782699474050c73dc60b2c0570f5eb2fe64c4fc` Last Modified: Tue, 31 Aug 2021 04:11:59 GMT Size: 264.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:e8c9384096c8027661058200d5dd435498203ae6cea4ffada13ffcc34a17f730` Last Modified: Tue, 31 Aug 2021 04:11:59 GMT Size: 338.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:a391773894550b0dcf95133d7a0ebfc325acd5a660c35f96c77133e3fd8f740c` Last Modified: Tue, 31 Aug 2021 04:11:59 GMT Size: 308.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:e99104e97158863a1815e4b564104f84c3c0671a3ef2a49ddf935cb20a41e537` Last Modified: Tue, 31 Aug 2021 04:11:59 GMT Size: 18.9 KB (18859 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:9dc404237abddc7a47d1e606dc92470e0035dcd32745c3c0bf383528304c96d7` Last Modified: Tue, 31 Aug 2021 04:11:58 GMT Size: 5.2 MB (5150653 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:63099d2783ee41f72d28cc95ab35ba28c5674108eead4fba7719574ecddb2c99` Last Modified: Tue, 31 Aug 2021 04:11:57 GMT Size: 14.3 KB (14320 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:16df2874073685e8fadc4aa9c04fc75132336de6749145d708e7d664cc03a03f` Last Modified: Tue, 31 Aug 2021 04:11:57 GMT Size: 2.6 KB (2559 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b978107de3a3767f2ffe65e620f34ce62d93b74cbd20b77b1821327e7458539b` Last Modified: Tue, 31 Aug 2021 04:11:57 GMT Size: 169.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:fc3a81a9d2085d59971d4608251a552a6c9631466fcac109142e49c26602cf12` Last Modified: Tue, 31 Aug 2021 04:11:57 GMT Size: 1.2 KB (1249 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ## `php-zendserver:latest` ```console $ docker pull php-zendserver@sha256:8aba264f01337eeb40297aeefdba03420957ea5d2520edc405f835e9a9b4dd32 ``` - Manifest MIME: 
`application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: 1 - linux; amd64 ### `php-zendserver:latest` - linux; amd64 ```console $ docker pull php-zendserver@sha256:60132736f3148d9ae841e49d7050ded15ffc44bb5accaa4f457f169b29ccf0fd ``` - Docker Version: 20.10.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **392.6 MB (392616981 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:c9b02abac2707b5fdce1480aeea0b49f0730303abe73317c430f506d47bfb1cc` - Default Command: `["\/usr\/local\/bin\/run"]` ```dockerfile # Fri, 07 Jan 2022 02:25:21 GMT ADD file:2aa3e79e3cff3c048612744ac310cf86bc27de3433d420711bafc6612445befc in / # Fri, 07 Jan 2022 02:25:21 GMT CMD ["bash"] # Fri, 07 Jan 2022 05:31:33 GMT RUN apt-get update && apt-get install -y gnupg # Fri, 07 Jan 2022 05:31:38 GMT RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-key 799058698E65316A2E7A4FF42EAE1437F7D2C623 # Fri, 07 Jan 2022 05:33:48 GMT COPY file:1e70d8fd6f9643bffb703528edddba0aa02a58e95cc53e92f58a86cde29e732a in /etc/apt/sources.list.d/zend-server.list # Fri, 07 Jan 2022 05:35:23 GMT RUN apt-get update && apt-get install -y iproute2 curl libmysqlclient20 unzip git zend-server-nginx=2021.0.0+b74 && rm -rf /var/lib/apt/lists/* && /usr/local/zend/bin/zendctl.sh stop # Fri, 07 Jan 2022 05:35:26 GMT ENV ZS_INIT_VERSION=0.3 # Fri, 07 Jan 2022 05:35:27 GMT ENV ZS_INIT_SHA256=e8d441d8503808e9fc0fafc762b2cb80d4a6e68b94fede0fe41efdeac10800cb # Fri, 07 Jan 2022 05:35:27 GMT COPY file:ad21ce0b2dc8345be0ef63836774934d6b2045ddc3685411221a07dd10b649d1 in /tmp/zs-init.patch # Fri, 07 Jan 2022 05:35:28 GMT RUN curl -fSL -o zs-init.tar.gz "http://repos.zend.com/zs-init/zs-init-docker-${ZS_INIT_VERSION}.tar.gz" && echo "${ZS_INIT_SHA256} *zs-init.tar.gz" | sha256sum -c - && mkdir /usr/local/zs-init && tar xzf zs-init.tar.gz --strip-components=1 -C /usr/local/zs-init && rm zs-init.tar.gz && patch -u /usr/local/zs-init/src/Init/Steps/AbstractStep.php 
-i /tmp/zs-init.patch && rm /tmp/zs-init.patch # Fri, 07 Jan 2022 05:35:28 GMT WORKDIR /usr/local/zs-init # Fri, 07 Jan 2022 05:35:34 GMT RUN /usr/local/zend/bin/php -r "readfile('https://getcomposer.org/installer');" | /usr/local/zend/bin/php && /usr/local/zend/bin/php composer.phar update # Fri, 07 Jan 2022 05:35:34 GMT COPY dir:eecd98e9ebf1c61a12ae67558eb2a6ce846b9ebfadabbf08503e90b3e30d9496 in /usr/local/bin # Fri, 07 Jan 2022 05:35:34 GMT COPY dir:80bde0d50316e7c9350262fe3b75826a91d075303027787e759d703b60df13d6 in /usr/local/zend/var/plugins/ # Fri, 07 Jan 2022 05:35:35 GMT RUN rm /var/www/html/index.nginx-debian.html # Fri, 07 Jan 2022 05:35:35 GMT COPY dir:d174a5d34625889b4356c566972566e0ca7da618b01ea42276562f8186517a67 in /var/www/html # Fri, 07 Jan 2022 05:35:36 GMT EXPOSE 80 # Fri, 07 Jan 2022 05:35:36 GMT EXPOSE 443 # Fri, 07 Jan 2022 05:35:36 GMT EXPOSE 10081 # Fri, 07 Jan 2022 05:35:36 GMT EXPOSE 10082 # Fri, 07 Jan 2022 05:35:36 GMT WORKDIR /var/www/html # Fri, 07 Jan 2022 05:35:37 GMT CMD ["/usr/local/bin/run"] ``` - Layers: - `sha256:2f94e549220aea96f00cae7eb95f401e61b41a16cc5eb0b4ea592d0ce871930a` Last Modified: Thu, 06 Jan 2022 23:50:21 GMT Size: 26.7 MB (26705027 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:57fc838010b9e661999aeaa9b8f201295e755543a4cdf71cc205b8c6f316ee63` Last Modified: Fri, 07 Jan 2022 05:36:02 GMT Size: 34.2 MB (34204735 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:95538f4f0f751a40bc37cf1d22a8101036eee4b20f36ba0eeed2fcf353944040` Last Modified: Fri, 07 Jan 2022 05:35:57 GMT Size: 1.4 KB (1395 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:7372f3cd060a0813cd3dbf5ac6b08cd055202ec4e89ecb4f0200c66a77a72be6` Last Modified: Fri, 07 Jan 2022 05:37:00 GMT Size: 238.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:dd982e11c27a8c46e622ff7fa218c5961dd748b7c78eeb89b7367e41796eebe1` Last Modified: Fri, 07 Jan 2022 05:37:43 GMT Size: 326.5 
MB (326492733 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f51225a3a1caf0dade4501b63229615e4b22c3ace8496d8ad49fffcede5e7092` Last Modified: Fri, 07 Jan 2022 05:37:00 GMT Size: 446.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:156652f35f3018708ae6995b6443512c31312a32199ce7945e36ec8619c03fe0` Last Modified: Fri, 07 Jan 2022 05:36:59 GMT Size: 18.9 KB (18929 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:8eb899bdcf9814e787996585539324390d939436be5c091464265e12e473499c` Last Modified: Fri, 07 Jan 2022 05:36:59 GMT Size: 5.2 MB (5175183 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:40e1b87a966c33c3da01539760bf77af8e06f3164bc79c919c7201f6ae637286` Last Modified: Fri, 07 Jan 2022 05:36:57 GMT Size: 14.3 KB (14292 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f98ab6b5efa1a76687d4355077b8b80eb8f819614110709e7f1ebc11f3be3191` Last Modified: Fri, 07 Jan 2022 05:36:57 GMT Size: 2.6 KB (2561 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c0265bdb48187ecf23f832acadea9db04ae983aa259a54271bacf6b51093793b` Last Modified: Fri, 07 Jan 2022 05:36:57 GMT Size: 186.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d5d6e893a644f992442f05d2dfafc245e117726b0b60d6cf8843c0f25054ade6` Last Modified: Fri, 07 Jan 2022 05:36:57 GMT Size: 1.3 KB (1256 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
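The per-layer compressed sizes in these generated listings should sum to each image's reported "Total Size"; a quick sanity-check sketch (hypothetical helper, not part of the generated manifest docs — layer sizes copied from the `php-zendserver:latest` entry above):

```python
# Sanity-check: the per-layer compressed sizes should sum to the
# image's reported "Total Size" (392.6 MB = 392616981 bytes).
layer_sizes = [
    26705027, 34204735, 1395, 238, 326492733, 446,
    18929, 5175183, 14292, 2561, 186, 1256,
]

total = sum(layer_sizes)
print(total)  # → 392616981
assert total == 392616981
```

The same check can be run against any of the tags listed here, since every entry reports both the total and the individual layer sizes.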
52.955487
1,121
0.747864
yue_Hant
0.150699
f4fc645ac17edaed05ed669641668eaacb573c16
1,137
md
Markdown
powerapps-docs/developer/model-driven-apps/clientapi/reference/grids/gridrow.md
PathToSharePoint/powerapps-docs
897a3dd2b1afd272809a2261992b61d83eb9880a
[ "CC-BY-4.0", "MIT" ]
2
2020-11-24T18:59:33.000Z
2021-02-10T00:41:38.000Z
powerapps-docs/developer/model-driven-apps/clientapi/reference/grids/gridrow.md
PathToSharePoint/powerapps-docs
897a3dd2b1afd272809a2261992b61d83eb9880a
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/developer/model-driven-apps/clientapi/reference/grids/gridrow.md
PathToSharePoint/powerapps-docs
897a3dd2b1afd272809a2261992b61d83eb9880a
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "GridRow (Client API reference) in model-driven apps| MicrosoftDocs" ms.date: 10/31/2018 ms.service: powerapps ms.topic: "reference" applies_to: "Dynamics 365 (online)" ms.assetid: 02fef0b4-b895-4277-b604-3f525c29dca3 author: "Nkrb" ms.author: "nabuthuk" manager: "kvivek" search.audienceType: - developer search.app: - PowerApps - D365CE --- # GridRow (Client API reference) A collection of GridRow is returned by [Grid](grid.md).[getRows](grid/getRows.md) and [Grid](grid.md).[getSelectedRows](grid/getSelectedRows.md) methods. ```JavaScript var myRows = gridContext.getGrid().getRows(); var gridRow = myRows.get(arg); ``` ## Properties |Name|Description|Available for| |--|--|--| |data|An object containing the [GridRowData](gridrowdata.md) for the GridRow.|Read-only and editable grids| ## Methods |Name|Description|Available for| |--|--|--| |[getData](gridrow/getData.md)|[!INCLUDE[gridrow/includes/getData-description.md](gridrow/includes/getData-description.md)]|Read-only and editable grids| ### Related topics [GridRowData](gridrowdata.md) [Grids and subgrids in model-driven apps](../grids.md)
23.6875
153
0.73263
yue_Hant
0.571966
f4fcb89947d840effc7277ce11e918b5a8da0b61
699
md
Markdown
_posts/2003-11-17-39-4223.md
ashmaroli/mycad.github.io
47fb8e82e4801dd33a3d4335a3d4a6657ec604cf
[ "MIT" ]
null
null
null
_posts/2003-11-17-39-4223.md
ashmaroli/mycad.github.io
47fb8e82e4801dd33a3d4335a3d4a6657ec604cf
[ "MIT" ]
null
null
null
_posts/2003-11-17-39-4223.md
ashmaroli/mycad.github.io
47fb8e82e4801dd33a3d4335a3d4a6657ec604cf
[ "MIT" ]
null
null
null
--- layout: post amendno: 39-4223 cadno: CAD2002-A320-13R3 title: Deactivating the function of certain DDRMIs produced by THALES AVIONICS date: 2003-11-17 00:00:00 +0800 effdate: 2003-11-18 00:00:00 +0800 tag: A320 categories: CAAC Southwest Regional Administration, Airworthiness Division author: 倪咏涛 --- ## Applicability: This directive applies to the following aircraft registered in China: 1. --Airbus A319 aircraft, all certified models, with manufacturer serial numbers from 0910 onward, fitted with a DDRMI produced by THALES AVIONICS whose part number matches one listed later in this directive; --Airbus A320 aircraft, all certified models, with manufacturer serial numbers from 0844 onward, fitted with a DDRMI produced by THALES AVIONICS whose part number matches one listed later in this directive; --Airbus A321 aircraft, all certified models, with manufacturer serial numbers from 1012 onward, fitted with a DDRMI produced by THALES AVIONICS whose part number matches one listed later in this directive. 2. All other Airbus A319/A320/A321 aircraft that meet the following conditions: --fitted with a DDRMI produced by THALES AVIONICS whose product serial number does not match the "Aircraft Inspection Report" (AIR) record made when the aircraft was delivered from the production line; --or fitted with a DDRMI produced by THALES AVIONICS that was repaired after May 1999. Note 1: This directive does not apply to aircraft on which Airbus modifications 32414/32415/32416/32417 were embodied in production (DDRMI part?
34.95
245
0.832618
yue_Hant
0.800438
f4fcbcbbd06fab77551f42c70688515ef6fd4817
1,321
md
Markdown
README.md
mijho/pklVerifier
41a8a45d388f2800aab31e40e1ede571ea6b2a5d
[ "MIT" ]
1
2020-01-31T10:03:27.000Z
2020-01-31T10:03:27.000Z
README.md
mijho/pklVerifier
41a8a45d388f2800aab31e40e1ede571ea6b2a5d
[ "MIT" ]
null
null
null
README.md
mijho/pklVerifier
41a8a45d388f2800aab31e40e1ede571ea6b2a5d
[ "MIT" ]
null
null
null
### pklVerifier #### A DCP validation tool written in Golang This is currently in development and shouldn't be relied upon to validate accurately at this point. ##### Installation Instructions With go get: ``` $ go get github.com/mijho/pklVerifier ``` Or head to the releases page to download the prebuilt binary for your system. NB pklVerifier has been tested fully on OSX and Linux amd64 only; please let me know if there are any performance issues. A Windows release may be on the cards but would require some small changes first. ``` Usage of ./pklVerifier: -d string Specify the path to the DCP directory ``` Example usage: Verify a DCP: ``` $ pklVerifier -d /path/to/DCP Validating: DCP/example_aud.mxf The reported filesize is: 27031694 The actual filesize is: 27031694 Hash from PKL: XXXXXXXXXXXXXXXXXXXXXXXXX+s= Hash of file: XXXXXXXXXXXXXXXXXXXXXXXXX+s= Hash result: VALID Size result: VALID Validating: DCP/CPL.xml The reported filesize is: 230325 The actual filesize is: 230325 Hash from PKL: XXXXXXXXXXXXXXXXXXXXXXXXX+s= Hash of file: XXXXXXXXXXXXXXXXXXXXXXXXX+s= Hash result: VALID Size result: VALID The hashcheck has completed with 0 errors. ```
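The check the README's sample output shows — comparing each asset's recorded file size and base64-encoded digest against the file on disk — can be sketched as follows (hypothetical Python illustration; the real tool is written in Go and reads the expected values from the DCP's PKL XML, and the digest algorithm is assumed to be SHA-1 as commonly used in DCP packing lists):

```python
import base64
import hashlib
import os

def verify_asset(path, expected_size, expected_hash_b64):
    """Compare a file's size and base64-encoded SHA-1 digest against
    the values recorded for it (as a DCP packing list does)."""
    actual_size = os.path.getsize(path)
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large MXF assets don't load into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha1.update(chunk)
    actual_hash = base64.b64encode(sha1.digest()).decode("ascii")
    return actual_size == expected_size, actual_hash == expected_hash_b64
```

A `(True, True)` result corresponds to the tool's `Hash result: VALID` / `Size result: VALID` lines.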
28.106383
118
0.686601
eng_Latn
0.924008
f4fd5bb7f361b85427569a9cd8f3466168cc9672
609
md
Markdown
p243e/README.md
l33tdaima/l33tdaima
0a7a9573dc6b79e22dcb54357493ebaaf5e0aa90
[ "MIT" ]
1
2020-02-20T12:04:46.000Z
2020-02-20T12:04:46.000Z
p243e/README.md
l33tdaima/l33tdaima
0a7a9573dc6b79e22dcb54357493ebaaf5e0aa90
[ "MIT" ]
null
null
null
p243e/README.md
l33tdaima/l33tdaima
0a7a9573dc6b79e22dcb54357493ebaaf5e0aa90
[ "MIT" ]
null
null
null
# 243. Shortest Word Distance (Easy) Given a list of words and two words word1 and word2, return the shortest distance between these two words in the list. ### Example: ``` Assume that words = ["practice", "makes", "perfect", "coding", "makes"]. Input: word1 = “coding”, word2 = “practice” Output: 3 Input: word1 = "makes", word2 = "coding" Output: 1 ``` ### Note: You may assume that word1 does not equal word2, and word1 and word2 are both in the list. #LNKD #Indeed #MSFT #Array #Two Pointers #Similar questions [#243e](../p243e/README.md) [#244m](../p244m/README.md) [#245m](../p245m/README.md)
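The usual single-pass approach for the problem above — track the most recent index of each word and update the best distance as you scan — can be sketched like this (hypothetical Python; the repository stores its solutions separately):

```python
def shortest_distance(words, word1, word2):
    """Shortest index distance between word1 and word2 in words,
    assuming both occur and word1 != word2 (per the problem note)."""
    i1 = i2 = -1          # most recent index seen for each word
    best = len(words)
    for i, w in enumerate(words):
        if w == word1:
            i1 = i
        elif w == word2:
            i2 = i
        if i1 >= 0 and i2 >= 0:
            best = min(best, abs(i1 - i2))
    return best

# Examples from the problem statement:
words = ["practice", "makes", "perfect", "coding", "makes"]
print(shortest_distance(words, "coding", "practice"))  # → 3
print(shortest_distance(words, "makes", "coding"))     # → 1
```

This runs in O(n) time and O(1) extra space, which is why the problem is tagged "Two Pointers".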
26.478261
118
0.684729
eng_Latn
0.966644
f4fd8be70dbc8b580459a74f267f1c4872583358
2,627
md
Markdown
src/perception/filters/ray_ground_classifier/design/ray-aggregator-design.md
ruvus/auto
25ae62d6e575cae40212356eed43ec3e76e9a13e
[ "Apache-2.0" ]
19
2021-05-28T06:14:21.000Z
2022-03-10T10:03:08.000Z
src/perception/filters/ray_ground_classifier/design/ray-aggregator-design.md
ruvus/auto
25ae62d6e575cae40212356eed43ec3e76e9a13e
[ "Apache-2.0" ]
1
2021-05-23T16:45:11.000Z
2021-08-03T16:59:40.000Z
src/perception/filters/ray_ground_classifier/design/ray-aggregator-design.md
ruvus/auto
25ae62d6e575cae40212356eed43ec3e76e9a13e
[ "Apache-2.0" ]
14
2021-05-29T14:59:17.000Z
2022-03-10T10:03:09.000Z
ray_aggregator {#ray-aggregator-design} ============== # Purpose / Use cases The [RayGroundClassifier](@ref autoware::perception::filters::ray_ground_classifier::RayGroundClassifier) operates on Rays individually. This presupposes that ray information is provided to the module. While this can be true for scanning LiDAR, in practice this information is not always available, depending on interface definitions, driver implementation, and LiDAR scan patterns. As such, a component that generates rays from an unstructured point cloud for use in ground classification is needed to allow the `RayGroundClassifier` to work more generally. # Design The ray aggregator generally has three operations: 1. Add points to the aggregator 2. Check if any rays are ready 3. Get next ray The second and third operations were separated out from a single operation to make the API more concrete and immediately understandable. ## Algorithm Design This object in general operates in the following manner: 1. Buffers are preallocated according to configuration settings 2. For each point an angle and thus a bin is calculated, and the point is added to the respective ray 3. If any ray satisfies a given criterion, then it (its index) is added to the list of ready rays 4. When a ray is seen (gotten), it will be reset the next time it is touched to add points Finally, the criterion being used currently is the number of points in a given ray. This was for the sake of simplicity. When a full scan is received, all rays are considered ready, regardless of the number of points in a ray (if the number of points is positive). ## Inputs / Outputs / API See `autoware::perception::filters::ray_ground_classifier::RayAggregator` for more details. ## Inner-workings / Algorithms In general, the `RayAggregator` is fairly straightforward. However, there is one piece of complexity in its implementation: Each ray internally has three states: 1. `NOT_READY` 2. `READY` 3.
`RESET` Where the last state is set when a ray is seen, and is used to reset the state of a ray upon the next insertion to the ray. # Security considerations TBD by security expert. # References / External links See original Autoware implementation, where the `RayAggregator` is implicitly defined: [CPFL's Autoware](https://github.com/CPFL/Autoware/blob/7b384fbdc97793d8bf7c1387d22c50e1fc9109e3/ros/src/sensing/filters/packages/points_preprocessor/nodes/ray_ground_filter/ray_ground_filter.cpp) # Future extensions / Unimplemented parts More exotic "scan ready" criterion may be added and/or evaluated, e.g.: - Max point spacing - Point variance
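The three-state, lazy-reset behaviour described above can be sketched as a toy state machine. This is Python for illustration only; the actual Autoware implementation is C++, and its API, class names and angular binning differ from this sketch.

```python
from enum import Enum

class RayState(Enum):
    NOT_READY = 0
    READY = 1
    RESET = 2  # ray was retrieved; will be cleared on next insertion

class RayAggregatorSketch:
    """Toy model of the per-ray state machine (not the Autoware C++ API)."""

    def __init__(self, num_bins, min_points):
        self.num_bins = num_bins
        self.min_points = min_points          # readiness criterion: point count
        self.rays = [[] for _ in range(num_bins)]
        self.states = [RayState.NOT_READY] * num_bins

    def insert(self, angle_deg, point):
        # Crude angular binning, purely for illustration
        bin_idx = int(angle_deg % 360.0) * self.num_bins // 360
        if self.states[bin_idx] is RayState.RESET:
            # Lazily reset a previously retrieved ray on the next touch
            self.rays[bin_idx] = []
            self.states[bin_idx] = RayState.NOT_READY
        self.rays[bin_idx].append(point)
        if len(self.rays[bin_idx]) >= self.min_points:
            self.states[bin_idx] = RayState.READY

    def next_ready(self):
        # Return the next ready ray (if any), marking it as retrieved
        for idx, state in enumerate(self.states):
            if state is RayState.READY:
                self.states[idx] = RayState.RESET
                return self.rays[idx]
        return None
```

Note the deliberate design choice mirrored here: a retrieved ray is not cleared immediately, but only when the next point lands in it, so the caller can still hold a reference to the returned data.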
<!-- Source: topics/transcriptomics/tutorials/droplet-quantification-preprocessing/tutorial.md (davebx/training-material, MIT licence) -->
---
layout: tutorial_hands_on

title: "Generating a single cell matrix using Alevin"
subtopic: single-cell
priority: 9
zenodo_link: 'https://zenodo.org/record/4574153'

questions:
  - I have some single cell FASTQ files I want to analyse. Where do I start?

objectives:
  - Repeat matrix generation for any droplet-based single cell sequencing data
  - Apply data combination and metadata editing for particular experimental designs
  - Interpret quality control (QC) plots to make informed decisions on cell thresholds
  - Find relevant information in GTF files for the particulars of your study, and include this in data matrix metadata

time_estimation: 3H

key_points:
  - Create a scanpy-accessible AnnData object from FASTQ files, including relevant cell and gene metadata
  - Combine multiple samples and label according to study design

tags:
  - single-cell
  - 10x

contributors:
  - nomadscientist
  - pinin4fjords

requirements:
  - type: "internal"
    topic_name: transcriptomics
    tutorials:
      - scrna-intro
      - scrna-umis

gitter: Galaxy-Training-Network/galaxy-single-cell
---

# Introduction
{:.no_toc}

This tutorial will take you from raw FASTQ files to a cell x gene data matrix in AnnData format. What's a data matrix, and what's AnnData format? Well, you'll find out! Importantly, this is the first step in processing single cell data in order to start analysing it. Currently you have a bunch of strings of `ATGGGCTT` etc. in your sequencing files, and what you need to know is how many cells you have and what genes appear in those cells. In the second part of this tutorial, we will also look at combining FASTQ files and adding in metadata (for instance, SEX or GENOTYPE) for analysis later on.

These steps are the most computationally heavy in the single cell world, as you're starting with hundreds of millions of reads, each four lines of text. Later on in analysis, this data becomes simple gene counts such as 'Cell A has 4 GAPDHs', which is a lot easier to store!
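To make "cell x gene matrix" concrete before we start, here is a toy example in plain pandas. This is purely illustrative: the real object this tutorial builds is sparse and stored in AnnData/MTX format, and the cell and gene names below are made up.

```python
import pandas as pd

# A toy cell x gene count matrix -- the kind of end product this tutorial builds
# (the real thing is sparse and has thousands of cells and genes).
matrix = pd.DataFrame(
    [[4, 0],
     [1, 2],
     [0, 7]],
    index=["cell_A", "cell_B", "cell_C"],   # one row per cell barcode
    columns=["Gapdh", "Actb"],              # one column per gene
)

print(matrix.loc["cell_A", "Gapdh"])  # 'Cell A has 4 GAPDHs' -> prints 4
```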
Because of this data overload, we have downsampled the FASTQ files to speed up the analysis a bit. That said, you're still having to map loads of reads to the massive murine genome, so get yourself a cup of coffee and prepare to analyse!

> ### Agenda
>
> In this tutorial, we will cover:
>
> 1. TOC
> {:toc}
>
{: .agenda}

# Generating a matrix

In this section, we will show you the principles of the initial phase of single-cell RNA-seq analysis: generating expression measures in a matrix. We'll concentrate on droplet-based (rather than plate-based) methodology, since this is the process that differs most from conventional approaches developed for bulk RNA-seq.

Droplet-based data consists of three components: cell barcodes, unique molecular identifiers (UMIs) and cDNA reads. To generate cell-wise quantifications we need to:

* Process cell barcodes, working out which ones correspond to 'real' cells and which to sequencing artefacts, and possibly correct any barcodes likely to be the product of sequencing errors by comparison to more frequent sequences.
* Map biological sequences to the reference genome or transcriptome.
* 'De-duplicate' using the UMIs.

This used to be a complex process involving multiple algorithms, or was performed with technology-specific methods (such as 10x's 'Cell Ranger' tool), but is now much simpler thanks to the advent of a few new methods. When selecting methodology for your own work you should consider:

* [STARsolo](https://github.com/alexdobin/STAR) - a dscRNA-seq-specific variant of the popular genome alignment method STAR. Produces results very close to those of Cell Ranger (which itself uses STAR under the hood).
* [Kallisto/bustools](https://www.kallistobus.tools/) - developed by the originators of the transcriptome quantification method Kallisto.
* [Alevin](https://salmon.readthedocs.io/en/latest/alevin.html) - another transcriptome method, developed by the authors of the Salmon tool.
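To make the UMI de-duplication step above concrete, here is a minimal Python sketch on toy data. It is not any tool's actual implementation (real tools also correct sequencing errors in barcodes and UMIs); the barcodes, UMIs and gene names are invented.

```python
from collections import defaultdict

# Hypothetical mapped reads: (cell_barcode, UMI, gene) per read.
# PCR copies of the same original molecule share all three fields.
reads = [
    ("AAACCTG", "ATGC", "Gapdh"),
    ("AAACCTG", "ATGC", "Gapdh"),  # PCR duplicate of the read above
    ("AAACCTG", "GGTA", "Gapdh"),  # same cell and gene, but a new molecule
    ("TTTGGCA", "ATGC", "Actb"),
]

def umi_dedup_counts(reads):
    """Count unique (barcode, UMI) pairs per (cell, gene): a UMI count matrix."""
    molecules = defaultdict(set)
    for barcode, umi, gene in reads:
        molecules[(barcode, gene)].add(umi)
    return {key: len(umis) for key, umis in molecules.items()}

print(umi_dedup_counts(reads))
# {('AAACCTG', 'Gapdh'): 2, ('TTTGGCA', 'Actb'): 1}
```

Four reads collapse to three molecules: the duplicated (barcode, UMI, gene) triple counts only once.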
We're going to use Alevin {% cite article-Alevin %} for demonstration purposes, but we do not endorse one method over another.

## Get Data

We've provided you with some example data to play with: a small subset of the reads from a mouse dataset of fetal growth restriction {% cite Bacon2018 %} (see the study in Single Cell Expression Atlas [here](https://www.ebi.ac.uk/gxa/sc/experiments/E-MTAB-6945/results/tsne) and the project submission [here](https://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-6945/)). This study uses the Drop-seq chemistry; however, this tutorial is almost identical for 10x chemistry, and we will point out the one tool parameter you need to change to run 10x samples. This data is not carefully curated, standard tutorial data - it's real, it's messy, it desperately needs filtering, it has background RNA running around, and most of all it will give you a chance to practice your analysis as if this data were yours.

Down-sampled reads and some associated annotation can be downloaded from Zenodo below, or you can import this [example input history](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/h/input---pre-processing-with-alevin). How did I downsample these FASTQ files? Check out [this history](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/h/pre-processing-with-alevin---part-1---how-to-downsample) to find out!

To map your reads, you will also need a transcriptome to align against (a FASTA) as well as the gene information for each transcript (a GTF file). You can download these for your species of interest from [Ensembl](https://www.ensembl.org/info/data/ftp/index.html). These files are also available in the above history as well as the Zenodo links below. Keep in mind, these are big files, so they may take a bit to import!

> ### {% icon hands_on %} Hands-on: Data upload - Part 1
>
> 1. Create a new history for this tutorial
> 2. Import the Experimental Design table, sequencing reads 1 & 2, and the GTF and FASTA files from [Zenodo]({{ page.zenodo_link }})
>
>    ```
>    {{ page.zenodo_link }}/files/Experimental_Design.tabular
>    {{ page.zenodo_link }}/files/Mus_musculus.GRCm38.100.gtf.gff
>    {{ page.zenodo_link }}/files/Mus_musculus.GRCm38.cdna.all.fa.fasta
>    {{ page.zenodo_link }}/files/SLX-7632.TAAGGCGA.N701.s_1.r_1.fq-400k.fastq
>    {{ page.zenodo_link }}/files/SLX-7632.TAAGGCGA.N701.s_1.r_2.fq-400k.fastq
>    ```
>
>    {% snippet faqs/galaxy/datasets_import_via_link.md %}
>
> 3. Rename {% icon galaxy-pencil %} the datasets
>
{: .hands_on}

> ### {% icon question %} Questions
>
> Have a look at the files you now have in your history.
> 1. Which of the FASTQ files do you think contains the barcode sequences?
> 2. Given the chemistry this study should have, are the barcode/UMI reads the correct length?
> 3. What is the 'N701' referring to?
>
> > ### {% icon solution %} Solution
> >
> > 1. Read 1 (SLX-7632.TAAGGCGA.N701.s_1.r_1.fq-400k) contains the cell barcode and UMI, because it is significantly shorter (indeed, 20 bp!) than the longer r_2 transcript read. For ease, rename these files N701-Read1 and N701-Read2.
> > 2. You can see Read 1 is only 20 bp long, which for original Drop-seq is 12 bp of cell barcode and 8 bp of UMI. This is correct! Be warned - 10x Chromium (and many technologies) change their chemistry over time, so particularly when you are accessing public data, you want to check and make sure you have your numbers correct!
> > 3. 'N701' is referring to an index read. This sample was run alongside 6 other samples, each denoted by an Illumina Nextera Index (N70X). Later, this will tell you batch information. If you look at the 'Experimental Design' file, you'll see that the N701 sample was from a male wildtype neonatal thymus.
> {: .solution}
{: .question}

## Generate a transcript to gene map

Gene-level, rather than transcript-level, quantification is standard in scRNA-seq, which means that the expression levels of alternatively spliced RNA molecules are combined to create gene-level values. Droplet-based scRNA-seq techniques only sample one end of each transcript, so they lack the full-molecule coverage that would be required to accurately quantify different transcript isoforms.

To generate gene-level quantifications based on transcriptome quantification, Alevin and similar tools require a conversion between transcript and gene identifiers. We can derive a transcript-gene conversion from the gene annotations available in genome resources such as Ensembl. The transcripts in such a list need to match the ones we will use later to build a binary transcriptome index. If you were using spike-ins, you'd need to add these to the transcriptome and the transcript-gene mapping.

In your example data you will see the murine reference annotation as retrieved from Ensembl, in GTF format. This annotation contains gene, exon, transcript and all sorts of other information on the sequences. We will use it to generate the transcript/gene mapping, by passing that information to a tool that extracts just the transcript identifiers we need.

> ### {% icon question %} Questions
>
> Which of the 'attributes' in the last column of the GTF file contains the transcript and gene identifiers?
>
> > ### {% icon tip %} Hint
> >
> > The file is organised such that the last column (headed 'Group') contains a wealth of information in the format: attribute1 "information associated with attribute 1";attribute2 "information associated with attribute 2" etc.
> >
> {: .tip}
>
> > ### {% icon solution %} Solution
> >
> > *gene_id* and *transcript_id* are each followed by the Ensembl gene and transcript identifiers, respectively.
> >
> {: .solution}
{: .question}

It's now time to parse the GTF file using the [rtracklayer](https://bioconductor.org/packages/release/bioc/html/rtracklayer.html) package in R. This parsing will give us a conversion table with a list of transcript identifiers and their corresponding gene identifiers for counting. Additionally, because we will be generating our own binary index (more on this later!), we also need to input our FASTA so that it can be filtered to contain only the transcriptome information found in the GTF.

> ### {% icon hands_on %} Hands-on: Generate a filtered FASTA and transcript-gene map
>
> 1. {% tool [GTF2GeneList](toolshed.g2.bx.psu.edu/repos/ebi-gxa/gtf2gene_list/_ensembl_gtf2gene_list/1.42.1+galaxy6) %} with the following parameters:
>    - {% icon param-file %} *"Ensembl GTF file"*: `GTF file in the history` {% icon galaxy-history %}
>    - *"Feature type for which to derive annotation"*: `transcript` (your sequences are transcript sequences, so this is your starting point)
>    - *"Field to place first in output table"*: `transcript_id` (this is accessing the column you identified above!)
>    - *"Suppress header line in output?"*: `Yes` (the next tool, Alevin, does not expect a header)
>    - *"Comma-separated list of field names to extract from the GTF (default: use all fields)"*: `transcript_id,gene_id` (this makes the first column the transcript_id and the second the gene_id - thus, your key can turn transcripts into genes)
>    - *"Append version to transcript identifiers?"*: `Yes` (the Ensembl FASTA files usually have these, and since we need the FASTA transcriptome and the GTF gene information to work together, we need to append these!)
>    - *"Flag mitochondrial features?"*: `No`
>    - *"Filter a FASTA-format cDNA file to match annotations?"*: `Yes`
>      - {% icon param-file %} *"FASTA-format cDNA/transcript file"*: `FASTA file in your history` {% icon galaxy-history %}
>      - *"Annotation field to match with sequences"*: `transcript_id`
>
> 2. Rename {% icon galaxy-pencil %} the annotation table to `Map`
>
> 3. Rename {% icon galaxy-pencil %} the uncompressed filtered FASTA file to `Filtered FASTA`
>
{: .hands_on}

## Generate a transcriptome index & quantify!

Alevin collapses the steps involved in dealing with dscRNA-seq into a single process. Such tools need to compare the sequences in your sample to a reference containing all the likely transcript sequences (a 'transcriptome'). This will contain the biological transcript sequences known for a given species, and perhaps also technical sequences such as 'spike-ins' if you have those.

> ### {% icon details %} How does Alevin work?
>
> To be able to search a transcriptome quickly, Alevin needs to convert the text (FASTA) format sequences into something it can search quickly, called an 'index'. The index is in a binary rather than human-readable format, but allows fast lookup by Alevin. Because the types of biological and technical sequences we need to include in the index can vary between experiments, and because we often want to use the most up-to-date reference sequences from Ensembl or NCBI, we can end up re-making the indices quite often. Making these indices is time-consuming! Have a look at the uncompressed FASTA to see what it starts with.
>
{: .details}

We now have:

* Barcode/UMI reads
* cDNA reads
* a transcript/gene mapping
* a filtered FASTA

We can now run Alevin. In some public instances, Alevin won't show up if you search for it. Instead, you have to click the Single Cell tab at the left and scroll down to the Alevin tool. Tip: if you click the tools from the tutorial option within Galaxy, you'll always have the correct version of the tool! In this case, it is Galaxy Version 1.3.0+galaxy2 - it should be the default. If not, click 'Versions' and choose that version.

![Tutorial option in Galaxy](../../images/wab-tutorial-in-galaxy.png "Tutorial option at the top right in Galaxy")

![Accessing tools in the tutorial option within Galaxy](../../images/wab-tutorial-option-filler.png "Click the tool in the tutorial within Galaxy")

> ### {% icon hands_on %} Hands-on: Running Alevin
>
> 1. {% tool [Alevin](toolshed.g2.bx.psu.edu/repos/bgruening/alevin/alevin/1.3.0+galaxy2) %}
>
>    > ### {% icon question %} Questions
>    >
>    > Try to fill in the parameters of Alevin using what you know!
>    >
>    > > ### {% icon tip %} Tip: Strandedness?
>    > >
>    > > The Salmon documentation on 'Fragment Library Types' and running the Alevin command ([salmon.readthedocs.io/en/latest/library_type.html](https://salmon.readthedocs.io/en/latest/library_type.html) and [salmon.readthedocs.io/en/latest/alevin.html](https://salmon.readthedocs.io/en/latest/alevin.html)) will help here, although keep in mind the image there is drawn with the RNA 5' end on top, whereas in this scRNA-seq protocol, the polyA is captured by its 3' tail and is thus effectively the bottom or reverse strand...
>    > {: .tip}
>    >
>    > > ### {% icon solution %} Solution
>    > > - *"Select a reference transcriptome from your history or use a built-in index?"*: `Use one from the history`
>    > >   - You are going to generate the binary index using your filtered FASTA!
>    > >   - {% icon param-file %} *"Transcripts FASTA file"*: `Filtered FASTA`
>    > > - *"Single or paired-end reads?"*: `Paired-end`
>    > >   - {% icon param-file %} *"file1"*: `N701-Read1`
>    > >   - {% icon param-file %} *"file2"*: `N701-Read2`
>    > >   - *"Relative orientation of reads within a pair"*: `Mates are oriented towards each other (IR)`
>    > >   - *"Specify the strandedness of the reads"*: `read comes from the reverse strand (SR)`
>    > > - *"Protocol"*: `DropSeq Single Cell protocol`
>    > > - {% icon param-file %} *"Transcript to gene map file"*: `Map`
>    > > - *"Retrieve all output files"*: `Yes`
>    > > - In *"Optional commands"*:
>    > >   - *"dumpFeatures"*: `Yes`
>    > >   - *"dumpMTX"*: `Yes`
>    > >
>    > {: .solution}
>    {: .question}
{: .hands_on}

> ### {% icon comment %} What if I'm running a 10x sample?
>
> The main parameter that needs changing for a 10x Chromium sample is the 'Protocol' parameter of Alevin. Just select the correct 10x chemistry there instead.
{: .comment}

This tool will take a while to run. Alevin produces many file outputs, not all of which we'll use. You can refer to the [Alevin documentation](https://salmon.readthedocs.io/en/latest/alevin.html) if you're curious what they all are, but what we're most interested in is:

* the matrix itself (quants_mat.mtx.gz - the counts by gene and cell),
* the row (cell/barcode) identifiers (quants_mat_rows.txt), and
* the column (gene) labels (quants_mat_cols.txt).

This is the matrix market (MTX) format.

> ### {% icon question %} Questions
>
> After you've run Alevin, {% icon galaxy-eye %} look through all the different files. Can you find:
> 1. The mapping rate?
> 2. How many cells are present in the matrix output?
>
> > ### {% icon solution %} Solution
> >
> > 1. Inspect {% icon galaxy-eye %} the {% icon param-file %} meta_info.json file. You can see the mapping rate is a paltry `24.75%`. This is a terrible mapping rate. Why might this be? Remember this was downsampled, and specifically by taking only the last 400,000 reads of the FASTQ file. The overall mapping rate of the full file is more like 50%, which is still quite poor, but for early Drop-seq samples, and single-cell data in general, you might expect a slightly poorer mapping rate. 10x samples are much better these days! This is real data, not test data, after all!
> > 2. Inspect {% icon galaxy-eye %} the {% icon param-file %} quants_mat_rows.txt file, and you can see it has `2163` lines. The rows refer to the cells in the cell x gene matrix. According to this (rough) estimate, your sample has 2163 cells in it!
> >
> {: .solution}
{: .question}

> ### {% icon warning %} Warning: Choose the appropriate input going forward!
> Make certain to use **quants_mat.mtx.gz** and NOT **quants_tier.mtx.gz** going forward.
{: .warning}

{% icon congratulations %} Congratulations - you've made an expression matrix! We could almost stop here. But it's sensible to do some basic QC, and one of the things we can do is look at a barcode rank plot.

# Basic QC

The question we're looking to answer here is: "do we mostly have a single cell per droplet?" That's what experimenters are normally aiming for, but it's not entirely straightforward to get exactly one cell per droplet. Sometimes almost no cells make it into droplets, other times we have too many cells in each droplet. At a minimum, we should easily be able to distinguish droplets with cells from those without.

> ### {% icon hands_on %} Hands-on: Generate a raw barcode QC plot
>
> 1.
>    {% tool [Droplet barcode rank plot](toolshed.g2.bx.psu.edu/repos/ebi-gxa/droplet_barcode_plot/_dropletBarcodePlot/1.6.1+galaxy2) %} with the following parameters:
>    - *"Input MTX-format matrix?"*: `No`
>    - {% icon param-file %} *"A two-column tab-delimited file, with barcodes in the first column and frequencies in the second"*: `raw_cb_frequencies.txt`
>    - *"Label to place in plot title"*: `Barcode rank plot (raw barcode frequencies)`
>
> 2. Rename {% icon galaxy-pencil %} the image output `Barcode Plot - raw barcode frequencies`
>
{: .hands_on}

![raw droplet barcode plots-400k](../../images/wab-raw_barcodes-400k.png "400k subsample raw")

Now, the image generated here (400k) isn't the most informative - but you are dealing with a fraction of the reads! If you ran the total sample (identical steps to the above, but with significantly more time!) you'd get the image below.

![raw droplet barcode plots-total](../../images/wab-raw_barcodes-total.png "Total sample - 32,579,453 reads - raw")

This is our own formulation of the barcode plot, based on a [discussion](https://github.com/COMBINE-lab/salmon/issues/362#issuecomment-490160480) we had with community members. The left-hand plots with the smooth lines are the main plots, showing the UMI counts for individual cell barcodes ranked from high to low. We expect a sharp drop-off between cell-containing droplets and ones that are empty or contain only cell debris. Now, this is not an ideal dataset, so for perspective, in an ideal world with a very clean 10x run, the data will look a bit more like the following, taken from the lung atlas (see the study in Single Cell Expression Atlas [here](https://www.ebi.ac.uk/gxa/sc/experiments/E-MTAB-6653/results/tsne) and the project submission [here](https://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-6653/)).

![raw droplet barcode plots - lung atlas](../../images/wab-lung-atlas-barcodes-raw.png "Pretty data - raw")

In that plot, you can see the clearer 'knee' bend, showing the cut-off between empty droplets and cell-containing droplets. The right-hand plots are density plots derived from the first one, and the thresholds are generated either using [dropletUtils](https://bioconductor.org/packages/release/bioc/html/DropletUtils.html) or by the method described in the discussion mentioned above.

We could use any of these thresholds to select cells, assuming that anything with fewer counts is not a valid cell. By default, Alevin does something similar, and we can learn something about that by plotting just the barcodes Alevin retains.

> ### {% icon hands_on %} Hands-on: Generate Alevin's barcode QC plot
>
> 1. {% tool [Droplet barcode rank plot](toolshed.g2.bx.psu.edu/repos/ebi-gxa/droplet_barcode_plot/_dropletBarcodePlot/1.6.1+galaxy2) %} with the following parameters:
>    - *"Input MTX-format matrix?"*: `Yes`
>    - *"Matrix-market format matrix file, with cells by column (overrides --barcode-frequencies if supplied)"*: `quants_mat.mtx`
>    - *"For use with --mtx-matrix: force interpretation of matrix to assume cells are by row, rather than by column (default)"*: `Yes`
>    - *"Label to place in plot title"*: `Barcode rank plot (Alevin-processed)`
>    - *"Number of bins used in barcode count frequency distribution"*: `50`
>    - *"Above-baseline multiplier to calculate roryk threshold"*: `1.5`
>
> 2. Rename {% icon galaxy-pencil %} the image output `Barcode Plot - Alevin processed barcodes`
>
{: .hands_on}

![raw droplet barcode plots - 400k](../../images/wab-alevin-barcodes-400k.png "400k subsample - Alevin processed")

And the full sample looks like:

![raw droplet barcode plots - total](../../images/wab-alevin-barcodes-total.png "Total sample - 32,579,453 reads - Alevin processed")

And to round this off, here's the lung atlas plot.

![raw droplet barcode plots - lung atlas](../../images/wab-alevin-barcodes-lung.png "Pretty data - Alevin processed")

You should see a completely vertical drop-off where Alevin has truncated the distribution (after excluding any cell barcode that had fewer than 10 UMIs, Alevin chose a threshold based on the curve and removed all barcodes with fewer UMIs). In experiments with relatively simple characteristics, this 'knee detection' method works relatively well. But some populations (such as our sample!) present difficulties. For instance, sub-populations of small cells may not be distinguishable from empty droplets based purely on counts by barcode. Some libraries produce multiple 'knees' for multiple sub-populations. The [emptyDrops](https://genomebiology.biomedcentral.com/articles/10.1186/s13059-019-1662-y) method has become a popular way of dealing with this. emptyDrops still retains barcodes with very high counts, but also adds in barcodes that can be statistically distinguished from the ambient profile, even if their total counts are similar.

## Generate an unprocessed matrix in a usable format

In order to ultimately run emptyDrops (or indeed whatever tool you like that accomplishes biologically relevant thresholding), we first need to re-run Alevin, stopping it from applying its own less-than-ideal thresholds. Click the re-run icon {% icon galaxy-refresh %} on any Alevin output in your history; almost every parameter is the same as before, except you need to change the following:

> ### {% icon hands_on %} Hands-on: Stopping Alevin from thresholding
>
> 1. {% tool [Alevin](toolshed.g2.bx.psu.edu/repos/bgruening/alevin/alevin/1.3.0+galaxy2) %} (click re-run on the last Alevin output)
>    - In *"Optional commands"*:
>      - *"keepCBFraction"*: `1` - i.e. keep them all!
>      - *"freqThreshold"*: `3` - this will only remove cell barcodes with a frequency of less than 3: a low bar to pass, but a useful way of avoiding processing a bunch of almost certainly empty barcodes
>
>    {% snippet faqs/galaxy/tools_rerun.md %}
>
{: .hands_on}

> ### {% icon question %} Question
>
> How many cells are in the output now?
>
> > ### {% icon solution %} Solution
> >
> > `22539` cells are in quants_mat_rows.txt now! Far more than the Alevin-filtered `2163`. This needs some serious filtering with emptyDrops!
> >
> {: .solution}
{: .question}

Alevin outputs MTX format, which we can pass to the dropletUtils package to run emptyDrops. Unfortunately, the matrix is in the wrong orientation for tools expecting files like those produced by 10x software (which dropletUtils does). We need to 'transform' the matrix such that cells are in columns and genes are in rows.

> ### {% icon warning %} Be careful!
> Don't mix up files from the different Alevin runs! Use the later run, which has higher numbers in the history!
{: .warning}

> ### {% icon hands_on %} Hands-on: Transform matrix
>
> 1. {% tool [salmonKallistoMtxTo10x](toolshed.g2.bx.psu.edu/repos/ebi-gxa/salmon_kallisto_mtx_to_10x/_salmon_kallisto_mtx_to_10x/0.0.1+galaxy5) %} with the following parameters:
>    - {% icon param-file %} *".mtx-format matrix"*: `quants_mat.mtx.gz` (output of **Alevin** {% icon tool %})
>    - {% icon param-file %} *"Tab-delimited genes file"*: `quants_mat_cols.txt` (output of **Alevin** {% icon tool %})
>    - {% icon param-file %} *"Tab-delimited barcodes file"*: `quants_mat_rows.txt` (output of **Alevin** {% icon tool %})
>
> 2. Rename {% icon galaxy-pencil %} 'salmonKallistoMtxTo10x....:genes' to `Gene table`
> 3. Rename {% icon galaxy-pencil %} 'salmonKallistoMtxTo10x....:barcodes' to `Barcode table`
> 4. Rename {% icon galaxy-pencil %} 'salmonKallistoMtxTo10x....:matrix' to `Matrix table`
>
{: .hands_on}

The output is a matrix in the correct orientation for the rest of our tools.
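Conceptually, the conversion above is essentially a transpose (plus swapping which label file goes with rows versus columns). A minimal numpy sketch on toy counts, purely for illustration - this is not the salmonKallistoMtxTo10x source, and real matrices are sparse:

```python
import numpy as np

# Toy counts in Alevin's orientation: cells x genes (2 cells, 3 genes here).
cells_by_genes = np.array([[4, 0, 1],
                           [0, 2, 0]])

# 10x-style tools (and dropletUtils) expect genes x cells,
# so the core of the conversion is a transpose.
genes_by_cells = cells_by_genes.T

print(genes_by_cells.shape)  # (3, 2): genes are now rows, cells are columns
```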
However, our `Gene table` is looking a bit bare - click on it, for instance. I don't know about you, but I'd struggle to have a good biological discussion using only Ensembl gene_ids! What I'd really like is the more understandable 'GAPDH' or other gene acronym, as well as information on mitochondrial genes so that I can assess whether my cells were stressed out or not. In order to prepare our data for emptyDrops, we're going to combine this information into an object, and it's easiest to add in that information now.

## Adding in Gene metadata

> ### {% icon question %} Question
>
> Where can we find this gene information?
>
> > ### {% icon solution %} Solution
> >
> > Our old friend the GTF file!
> >
> {: .solution}
{: .question}

> ### {% icon question %} Question
>
> Which of the 'attributes' in the last column of that file contains the gene acronym?
>
> > ### {% icon solution %} Solution
> >
> > gene_name
> >
> {: .solution}
{: .question}

We're now going to re-run {% icon galaxy-refresh %} the tool that extracts information from our GTF file.

> ### {% icon hands_on %} Hands-on: Generate gene information
>
> 1. {% tool [GTF2GeneList](toolshed.g2.bx.psu.edu/repos/ebi-gxa/gtf2gene_list/_ensembl_gtf2gene_list/1.42.1+galaxy6) %} with the following parameters:
>    - *"Feature type for which to derive annotation"*: `gene`
>    - *"Field to place first in output table"*: `gene_id`
>    - *"Suppress header line in output?"*: `Yes`
>    - *"Comma-separated list of field names to extract from the GTF (default: use all fields)"*: `gene_id,gene_name,mito`
>    - *"Append version to transcript identifiers?"*: `Yes`
>    - *"Flag mitochondrial features?"*: `Yes` - note, this will auto-fill a bunch of acronyms used to search the GTF for mitochondria-associated genes. This is good!
>    - *"Filter a FASTA-format cDNA file to match annotations?"*: `No` - we don't need to, we're done with the FASTA!
>
> 2. Check that the output file type is `tabular`. If not, change the file type by clicking 'Edit attributes' {% icon galaxy-pencil %} on the dataset in the history (as if you were renaming the file), then click `Datatypes`, type in `tabular`, and click `Change datatype`.
>
> 3. Rename {% icon galaxy-pencil %} the annotation table to `Gene Information`
>
{: .hands_on}

Inspect {% icon galaxy-eye %} the **Gene Information** object in the history. You have now made a new key for gene_id, with gene name and a column of mitochondrial information (false = not mitochondrial, true = mitochondrial). We need to add this information into the salmonKallistoMtxTo10x output `Gene table`. But we need to keep `Gene table` in the same order, since it is referenced in the `Matrix table` by row.

> ### {% icon hands_on %} Hands-on: Combine MTX Gene Table with Gene Information
>
> 1. {% tool [Join two Datasets](join1) %} with the following parameters:
>    - *"Join"*: `Gene table`
>    - *"Using column"*: `Column: 1`
>    - *"with"*: `Gene Information`
>    - *"and column"*: `Column: 1`
>    - *"Keep lines of first input that do not join with second input"*: `Yes`
>    - *"Keep lines of first input that are incomplete"*: `Yes`
>    - *"Fill empty columns"*: `No`
>    - *"Keep the header lines"*: `No`
>
>    If you inspect {% icon galaxy-eye %} the output, you'll see we have joined these tables and now have quite a few gene_id repeats. Let's take those out, while keeping the order of the original `Gene table`.
>
> 2. {% tool [Cut columns from a table](Cut1) %} with the following parameters:
>    - *"Cut columns"*: `c1,c4,c5`
>    - *"Delimited by"*: `Tab`
>    - *"From"*: output of **Join two Datasets** {% icon tool %}
>
> 3. Rename the output `Annotated Gene Table`
>
{: .hands_on}

Inspect {% icon galaxy-eye %} your `Annotated Gene Table`. That's more like it! You now have `gene_id`, `gene_name`, and `mito`. Now let's get back to your journey to emptyDrops and sophisticated thresholding of empty droplets!
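The order-preserving join performed above can be mimicked in a few lines of pandas, on invented identifiers, just to show why a left join is the right shape of operation here:

```python
import pandas as pd

# Matrix gene table: its order MUST be preserved, because MTX rows
# reference genes by position.
gene_table = pd.DataFrame({"gene_id": ["ENSMUSG03", "ENSMUSG01", "ENSMUSG02"]})

# Annotation extracted from the GTF (gene_id, gene_name, mito flag).
gene_info = pd.DataFrame({
    "gene_id": ["ENSMUSG01", "ENSMUSG02", "ENSMUSG03"],
    "gene_name": ["Gapdh", "Actb", "mt-Nd1"],
    "mito": [False, False, True],
})

# A left merge keeps every row of gene_table, in its original order,
# attaching the annotation columns where gene_ids match.
annotated = gene_table.merge(gene_info, on="gene_id", how="left")

print(annotated["gene_name"].tolist())  # ['mt-Nd1', 'Gapdh', 'Actb']
```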
# emptyDrops emptyDrops {% cite article-emptyDrops %} works with a specific form of R object called a SingleCellExperiment. We need to convert our transformed MTX files into that form, using the DropletUtils Read10x tool: > ### {% icon hands_on %} Hands-on: Converting to SingleCellExperiment format > > 1. {% tool [DropletUtils Read10x](toolshed.g2.bx.psu.edu/repos/ebi-gxa/dropletutils_read_10x/dropletutils_read_10x/1.0.3+galaxy2){% icon tool %} with the following parameters: > - {% icon param-file %} *"Expression matrix in sparse matrix format (.mtx)"*: `Matrix table` > - {% icon param-file %} *"Gene Table"*: `Annotated Gene Table` > - {% icon param-file %} *"Barcode/cell table"*: `Barcode table` > - *"Should metadata file be added?"*: `No` > > 2. Rename {% icon galaxy-pencil %} output: `SCE Object` {: .hands_on} Fantastic! Now that our matrix is combined into an object, specifically the SingleCellExperiment format, we can now run emptyDrops! Let's get rid of those background droplets containing no cells! > ### {% icon hands_on %} Hands-on: Emptydrops > > 1. {% tool [DropletUtils emptyDrops](toolshed.g2.bx.psu.edu/repos/ebi-gxa/dropletutils_empty_drops/dropletutils_empty_drops/1.0.3+galaxy1){% icon tool %} with the following parameters: > - {% icon param-file %} *"SingleCellExperiment rdata object"*: `SCE Object` > - *"Should barcodes estimated to have no cells be removed from the output object?"*: `Yes` > > 2. Rename {% icon galaxy-pencil %} `serialised SingleCellExperiment` output as `Stringent-Object` > > 3. Rename {% icon galaxy-pencil %} `tabular output` as `Stringent-Tabular Output` {: .hands_on} > ### {% icon question %} Question > > How many cell barcodes remain after the emptyDrops treatment? Why might that be? > > > ### {% icon tip %} Hint > > If you click on the `Stringent-Object` in the {% icon galaxy-history %} history, the text in that window says `22 barcodes`. Why is this so low?? > > Consider...is this a complete set of data? 
> {: .tip}
>
> > ### {% icon solution %} Solution
> >
> > Remember this is a subsampled dataset. If you look carefully at the parameters of emptyDrops, you'll see it sets a minimum threshold at 100 UMI. If you look at the barcode plots above for the 400k read sample, you'll see this is far too stringent for this subsampled data! To satisfy your curiosity, this minimum threshold would yield `3011` barcodes for the total sample.
> >
> {: .solution}
{: .question}

Let's go back and tweak parameters, re-running the tool with a looser minimum threshold.

> ### {% icon details %} Working in a group? Decision-time!
> If you are working in a group, you can now divvy up a decision here with one *control* and the rest varied numbers so that you can compare results throughout the tutorials.
> - Variable: **UMI count lower bound**
> - Control: `5`
> - Everyone else: Consider the droplet barcode rank plots and choose (different) appropriate lower bounds.
{: .details}

> ### {% icon hands_on %} Hands-on: emptyDrops - do-over!
>
> 1. {% tool [DropletUtils emptyDrops](toolshed.g2.bx.psu.edu/repos/ebi-gxa/dropletutils_empty_drops/dropletutils_empty_drops/1.0.3+galaxy1){% icon tool %} with the following parameters:
>    - {% icon param-file %} *"SingleCellExperiment rdata object"*: `SCE Object`
>    - *"UMI count lower bound"*: `5` - you can input different numbers here and see what happens!
>    - *"Should barcodes estimated to have no cells be removed from the output object?"*: `Yes`
>
> 2. Rename {% icon galaxy-pencil %} `serialised SingleCellExperiment` output as `<yournumberhere>UMI-Object`
>
> 3. Rename {% icon galaxy-pencil %} `tabular output` as `<yournumberhere>UMI-Tabular Output`
{: .hands_on}

You should now have `111` barcodes! You now have an annotated expression matrix ready to go for further processing and analysis! Well done! However, the next tutorials we will link to use a tool called Scanpy.
You need to convert this SingleCellExperiment object into a format called `annData`, which is a variant of a file format called `hdf5`.

> ### {% icon hands_on %} Hands-on: Converting to AnnData format
>
> 1. {% tool [SCEasy convert](toolshed.g2.bx.psu.edu/repos/ebi-gxa/sceasy_convert/sceasy_convert/0.0.5+galaxy1){% icon tool %} with the following parameters:
>    - *"Direction of conversion"*: `SingleCellExperiment to AnnData`
>    - {% icon param-file %} *"Input object in SingleCellExperiment RDS format"*: `<yournumberhere>UMI-Object`
>    - *"Name of the assay to be transferred as main layer"*: `counts`
>
> 2. Rename {% icon galaxy-pencil %} output `N701-400k-AnnData`
>
{: .hands_on}

{% icon congratulations %} Congrats! Your object is ready for the scanpy pipeline! You can check your work against the [example history](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/h/pre-processing-with-alevin---part-1---answer-key). However, it may be that you want to combine this object with others like it, for instance, maybe you ran 5 samples, and you are starting with 10 FASTQ files...

# Combining FASTQ files

This sample was originally one of seven. So to run the other [12 downsampled FASTQ files](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/h/alevin-tutorial---all-samples---400k), you can use a [workflow](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/w/pre-processing-with-alevin---part-1-imported-from-uploaded-file)! Note - the N705 subsample is unluckily largely junk reads, so emptyDrops doesn't work. Instead, I processed it with Alevin. The total sample runs fine on emptyDrops of course. All these samples are going to take a while, so go and have several cups of tea...
Or, better yet, I have [run them myself](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/h/pre-processing-with-alevin---part-2---input-generation), and plopped them in a [new clean history](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/h/pre-processing-with-alevin---part-2---input) for you to import as a fresh history. Alternatively, you can get the data from Zenodo.

## Data

> ### {% icon hands_on %} Hands-on: Data upload - Combining files
>
> 1. Create a new history for this tutorial (if you're not importing the history above)
> 2. Import the different AnnData files and the experimental design table from [Zenodo](https://zenodo.org/record/4574153#.YD56YS-l2uU)
>
>    ```
>    {{ page.zenodo_link }}/files/Experimental_Design.tabular
>    {{ page.zenodo_link }}/files/N701-400k-AnnData.h5ad
>    {{ page.zenodo_link }}/files/N702-400k-AnnData.h5ad
>    {{ page.zenodo_link }}/files/N703-400k-AnnData.h5ad
>    {{ page.zenodo_link }}/files/N704-400k-AnnData.h5ad
>    {{ page.zenodo_link }}/files/N705-400k-AnnData.h5ad
>    {{ page.zenodo_link }}/files/N706-400k-AnnData.h5ad
>    {{ page.zenodo_link }}/files/N707-400k-AnnData.h5ad
>    ```
>
>    {% snippet faqs/galaxy/datasets_import_via_link.md %}
>
>    {% snippet faqs/galaxy/datasets_import_from_data_library.md %}
>
> 3. Rename the datasets
> 4. Check that the datatype is `h5ad`, otherwise you will need to change each file to `h5ad`!
>
>    {% snippet faqs/galaxy/datasets_change_datatype.md datatype="datatypes" %}
>
{: .hands_on}

Inspect the {% icon galaxy-eye %} `Experimental Design` text file. This shows you how each `N70X` corresponds to a sample, and whether that sample was from a male or female. This will be important metadata to add to our sample, which we will add very similarly to how you added the `gene_name` and `mito` metadata above!

## Concatenating Objects

> ### {% icon hands_on %} Hands-on: Concatenating AnnData objects
>
> 1. {% tool [Manipulate AnnData](toolshed.g2.bx.psu.edu/repos/iuc/anndata_manipulate/anndata_manipulate/0.7.5+galaxy0){% icon tool %} with the following parameters:
>    - {% icon param-file %} *"Annotated data matrix"*: `N701-400k-AnnData`
>    - *"Function to manipulate the object"*: `Concatenate along the observations axis`
>    - {% icon param-file %} *"Annotated data matrix to add"*: `Select all the other matrix files from bottom to top`
>    - *"Join method"*: `Intersection of variables`
>    - *"Key to add the batch annotation to obs"*: `batch`
>    - *"Separator to join the existing index names with the batch category"*: `-`
{: .hands_on}

Now let's look at what we've done! Unfortunately, AnnData objects are quite complicated, so the {% icon galaxy-eye %} won't help us too much here. Instead, we're going to use a tool to look into our object from now on.

> ### {% icon hands_on %} Hands-on: Inspecting AnnData Objects
>
> 1. {% tool [Inspect AnnData](toolshed.g2.bx.psu.edu/repos/iuc/anndata_inspect/anndata_inspect/0.7.5+galaxy0) %} with the following parameters:
>    - {% icon param-file %} *"Annotated data matrix"*: output of **Manipulate AnnData** {% icon tool %}
>    - *"What to inspect?"*: `General information about the object`
> 2. {% tool [Inspect AnnData](toolshed.g2.bx.psu.edu/repos/iuc/anndata_inspect/anndata_inspect/0.7.5+galaxy0) %} with the following parameters:
>    - {% icon param-file %} *"Annotated data matrix"*: output of **Manipulate AnnData** {% icon tool %}
>    - *"What to inspect?"*: `Key-indexed observations annotation (obs)`
> 3. {% tool [Inspect AnnData](toolshed.g2.bx.psu.edu/repos/iuc/anndata_inspect/anndata_inspect/0.7.5+galaxy0) %} with the following parameters:
>    - {% icon param-file %} *"Annotated data matrix"*: output of **Manipulate AnnData** {% icon tool %}
>    - *"What to inspect?"*: `Key-indexed annotation of variables/features (var)`
{: .hands_on}

Now have a look at the three {% icon tool %} **Inspect AnnData** outputs.
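Conceptually, the concatenation you just ran stacks the per-sample cell tables, keeps only the genes common to every sample (`Intersection of variables`), and records which input each cell came from in a `batch` column, joining each barcode to its batch with the `-` separator. A plain-Python sketch of that bookkeeping (the miniature sample data is invented):

```python
# Concatenate per-sample cell tables, keeping only shared genes and
# tagging each cell with the batch (input file) it came from.
# Miniature invented data; the real tool operates on AnnData objects.

samples = {
    0: {"genes": ["Sox2", "Pou5f1", "Nanog"], "cells": ["AAAC", "TTTG"]},
    1: {"genes": ["Sox2", "Nanog"],           "cells": ["GGGA"]},
}

# "Intersection of variables": keep genes present in every sample.
shared_genes = set.intersection(*(set(s["genes"]) for s in samples.values()))

# Stack cells, joining each barcode with its batch via the "-" separator.
concatenated = [
    (f"{barcode}-{batch}", batch)
    for batch, s in sorted(samples.items())
    for barcode in s["cells"]
]

print(sorted(shared_genes))  # genes kept after the inner join
print(concatenated)          # (cell index, batch) pairs
```

This is also why input order matters: the batch number is nothing more than the position in which each matrix was added.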
> ### {% icon question %} Question
>
> 1. How many cells do you have now?
> 2. Where is `batch` information stored?
>
> > ### {% icon solution %} Solution
> >
> > 1. If you look at the **General information** {% icon tool %} output, you can see there are now `4079 cells`, as the matrix is now 4079 cells x 35734 genes. You can see this as well in the **obs** {% icon tool %} (cells) and **var** {% icon tool %} (genes) file sizes.
> > 2. Under **Key-indexed observations annotation (obs)**. Different versions of the Manipulate tool will put the `batch` column in different locations. With the tool version used in this course, the `9th` column, at the farthest right, is `batch`. Batch refers to the order in which the matrices were added. The files are added from the bottom of the history upwards, so be careful how you set up your histories when running this!
> {: .solution}
{: .question}

# Adding batch metadata

I set up the example history with the earliest indices at the bottom.

![Ordered history](../../images/wab-history-files-ascending.png "Note how N701 is lowest, ordered ascending to N707")

Therefore, when it is all concatenated together, the `batch` appears as follows:

| Index | Batch | Genotype | Sex    |
|-------|-------|----------|--------|
| N701  | 0     | wildtype | male   |
| N702  | 1     | knockout | male   |
| N703  | 2     | knockout | female |
| N704  | 3     | wildtype | male   |
| N705  | 4     | wildtype | male   |
| N706  | 5     | wildtype | male   |
| N707  | 6     | knockout | male   |

If you used Zenodo to import files, they may not have imported in order (i.e. N701 to N707, ascending). In that case, you will need to tweak the parameters of the next tools appropriately to label your batches correctly!

The two critical pieces of metadata in this experiment are **sex** and **genotype**. I will later want to color my cell plots by these parameters, so I want to add them in now!

> ### {% icon hands_on %} Hands-on: Labelling sex
>
> 1. {% tool [Replace Text in a specific column](toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_replace_in_column/1.1.3) %} with the following parameters:
>    - {% icon param-file %} *"File to process"*: output of **Inspect AnnData: Key-indexed observations annotation (obs)** {% icon tool %}
>    - *"1. Replacement"*
>      - *"in column"*: `Column: 9` - or whichever column `batch` is in
>      - *"Find pattern"*: `0|1|3|4|5|6`
>      - *"Replace with"*: `male`
>    - **+ Insert Replacement**
>    - *"2. Replacement"*
>      - *"in column"*: `Column: 9`
>      - *"Find pattern"*: `2`
>      - *"Replace with"*: `female`
>    - **+ Insert Replacement**
>    - *"3. Replacement"*
>      - *"in column"*: `Column: 9`
>      - *"Find pattern"*: `batch`
>      - *"Replace with"*: `sex`
>
>    Now we want only the column containing the sex information - we will ultimately add this into the cell annotation in the AnnData object.
>
> 2. {% tool [Cut columns from a table](Cut1) %} with the following parameters:
>    - *"Cut columns"*: `c9`
>    - *"Delimited by"*: `Tab`
>    - {% icon param-file %} *"From"*: output of **Replace text** {% icon tool %}
>
> 3. Rename {% icon galaxy-pencil %} output `Sex metadata`
{: .hands_on}

That was so fun, let's do it all again but for genotype!

> ### {% icon hands_on %} Hands-on: Labelling genotype
>
> 1. {% tool [Replace Text in a specific column](toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_replace_in_column/1.1.3) %} with the following parameters:
>    - {% icon param-file %} *"File to process"*: output of **Inspect AnnData: Key-indexed observations annotation (obs)** {% icon tool %}
>    - *"1. Replacement"*
>      - *"in column"*: `Column: 9`
>      - *"Find pattern"*: `0|3|4|5`
>      - *"Replace with"*: `wildtype`
>    - **+ Insert Replacement**
>    - *"2. Replacement"*
>      - *"in column"*: `Column: 9`
>      - *"Find pattern"*: `1|2|6`
>      - *"Replace with"*: `knockout`
>    - **+ Insert Replacement**
>    - *"3. Replacement"*
>      - *"in column"*: `Column: 9`
>      - *"Find pattern"*: `batch`
>      - *"Replace with"*: `genotype`
>
>    Now we want only the column containing the genotype information - we will ultimately add this into the cell annotation in the AnnData object.
>
> 2. {% tool [Cut columns from a table](Cut1) %} with the following parameters:
>    - *"Cut columns"*: `c9`
>    - *"Delimited by"*: `Tab`
>    - {% icon param-file %} *"From"*: output of **Replace text** {% icon tool %}
>
> 3. Rename {% icon galaxy-pencil %} output `Genotype metadata`
{: .hands_on}

You might want to do this with all sorts of different metadata - which labs handled the samples, which days they were run, etc. Once you've created all your metadata columns, we can add them together before plugging them into the AnnData object itself.

> ### {% icon hands_on %} Hands-on: Combining metadata columns
>
> 1. {% tool [Paste two files side by side](Paste1) %} with the following parameters:
>    - {% icon param-file %} *"Paste"*: `Genotype metadata`
>    - {% icon param-file %} *"and"*: `Sex metadata`
>    - *"Delimit by"*: `Tab`
> 2. Rename {% icon galaxy-pencil %} output `Cell Metadata`
{: .hands_on}

Let's add it to the AnnData object!

> ### {% icon hands_on %} Hands-on: Adding metadata to AnnData object
>
> 1. {% tool [Manipulate AnnData](toolshed.g2.bx.psu.edu/repos/iuc/anndata_manipulate/anndata_manipulate/0.7.5+galaxy0) %} with the following parameters:
>    - {% icon param-file %} *"Annotated data matrix"*: output of previous **Manipulate AnnData** {% icon tool %}
>    - *"Function to manipulate the object"*: `Add new annotation(s) for observations or variables`
>    - *"What to annotate?"*: `Observations (obs)`
>    - {% icon param-file %} *"Table with new annotations"*: `Cell Metadata`
{: .hands_on}

Woohoo! We're there! You can run an **Inspect AnnData** to check now, but I want to clean up this AnnData object just a bit more first.
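The two Replace Text passes above are simply lookups from batch number to label. The same mapping in plain Python (batch assignments as in the table above; the barcodes are invented):

```python
# Map each cell's batch number to sex and genotype labels,
# mirroring the regex replacements done in Galaxy above.

SEX = {0: "male", 1: "male", 2: "female", 3: "male", 4: "male", 5: "male", 6: "male"}
GENOTYPE = {0: "wildtype", 1: "knockout", 2: "knockout",
            3: "wildtype", 4: "wildtype", 5: "wildtype", 6: "knockout"}

cells = [("AAAC-0", 0), ("GGGA-2", 2), ("TTTG-6", 6)]  # invented (barcode, batch) pairs

cell_metadata = [
    {"cell": barcode, "genotype": GENOTYPE[batch], "sex": SEX[batch]}
    for barcode, batch in cells
]

for row in cell_metadata:
    print(row)
```

An explicit dictionary like this is also less fragile than the regex approach: a pattern like `0|1|3|4|5|6` would silently mis-fire if a batch number ever appeared inside another value in that column.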
It would be a lot nicer if 'batch' meant something, rather than 'the order in which the Manipulate AnnData tool added my datasets'.

> ### {% icon hands_on %} Hands-on: Labelling batches
>
> 1. {% tool [Manipulate AnnData](toolshed.g2.bx.psu.edu/repos/iuc/anndata_manipulate/anndata_manipulate/0.7.5+galaxy0) %} with the following parameters:
>    - {% icon param-file %} *"Annotated data matrix"*: output of **Manipulate AnnData - Add new annotations** {% icon tool %}
>    - *"Function to manipulate the object"*: `Rename categories of annotation`
>    - *"Key for observations or variables annotation"*: `batch`
>    - *"Comma-separated list of new categories"*: `N701,N702,N703,N704,N705,N706,N707`
{: .hands_on}

Huzzah! We are JUST about there. However, while we've been focussing on our cell metadata (sample, batch, genotype, etc.) to relabel the 'observations' in our object...

# Mitochondrial reads

Do you remember when we mentioned mitochondria early on in this tutorial? And how, in single cell samples, mitochondrial RNA is often an indicator of stress during dissociation? We should probably do something with our column of true/false in the gene annotation that tells us which genes are mitochondrial. You will need to do this whether you have combined FASTQ files or are analysing just one (and thus skipping sections 4 & 5).

> ### {% icon hands_on %} Hands-on: Calculating mitochondrial RNA in cells
>
> 1. {% tool [AnnData Operations](toolshed.g2.bx.psu.edu/repos/ebi-gxa/anndata_ops/anndata_ops/0.0.3+galaxy1) %} with the following parameters:
>    - {% icon param-file %} *"Input object in hdf5 AnnData format"*: output of **Manipulate AnnData - Rename categories** {% icon tool %}
>    - *"Format of output object"*: `AnnData format`
>    - *"Copy AnnData to .raw"*: `No`
>    - *"Gene symbols field in AnnData"*: `NA.`
>    - *"Flag genes that start with these names"*: `Insert Flag genes that start with these names`
>      - *"Starts with"*: `True`
>      - *"Var name"*: `mito`
>    - *"Number of top genes"*: `50`
{: .hands_on}

{% icon congratulations %} Well done! I strongly suggest having a play with the **Inspect AnnData** {% icon tool %} on your final `Pre-processed object` to see the wealth of information that has been added. You are now ready to move along to further filtering!

There is a cheat that may save you time in the future though...

# Pulling single cell data from public resources

If you happen to be interested in analysing publicly available data, particularly from the [Single Cell Expression Atlas](https://www.ebi.ac.uk/gxa/sc/home), you may be interested in the following tool {% cite Moreno2020.04.08.032698 %}, which rather skips all these steps in one go! For this tutorial, the dataset can be seen [here](https://www.ebi.ac.uk/gxa/sc/experiments/E-MTAB-6945/downloads) with an experiment id of `E-MTAB-6945`.

> ### {% icon hands_on %} Hands-on: Retrieving data from Single Cell Expression Atlas
>
> 1. {% tool [EBI SCXA Data Retrieval](toolshed.g2.bx.psu.edu/repos/ebi-gxa/retrieve_scxa/retrieve_scxa/v0.0.2+galaxy2) %} with the following parameters:
>    - *"SC-Atlas experiment accession"*: `E-MTAB-6945`
>    - *"Choose the type of matrix to download"*: `Raw filtered counts`
>
>    Now we need to transform this into an AnnData object.
>
> 2. {% tool [Scanpy Read10x](toolshed.g2.bx.psu.edu/repos/ebi-gxa/scanpy_read_10x/scanpy_read_10x/1.6.0+galaxy0) %} with the following parameters:
>    - *"Expression matrix in sparse matrix format (.mtx)"*: `EBI SCXA Data Retrieval on E-MTAB-6945 matrix.mtx (Raw filtered counts)`
>    - *"Gene table"*: `EBI SCXA Data Retrieval on E-MTAB-6945 genes.tsv (Raw filtered counts)`
>    - *"Barcode/cell table"*: `EBI SCXA Data Retrieval on E-MTAB-6945 barcodes.tsv (Raw filtered counts)`
>    - *"Cell metadata table"*: `EBI SCXA Data Retrieval on E-MTAB-6945 exp_design.tsv`
{: .hands_on}

It's important to note that this matrix is processed somewhat through the SCXA pipeline, which is quite similar to this tutorial, and it contains any and all metadata provided by their pipeline as well as the authors (for instance, more cell or gene annotations).

# Conclusion
{:.no_toc}

![Workflow Part 1](../../images/wab-alevin-part1workflow.png "Workflow - Steps 1-3")
![Workflow Part 2](../../images/wab-alevin-part2workflow.png "Workflow - Steps 4-6")

You've reached the end of this session! You may be interested in seeing an [example history](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/h/pre-processing-with-alevin---part-2---answer-key-1) and [Part 2 workflow](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/w/pre-processing-with-alevin---part-2). Note that the workflow will require changing the `column` containing the batch metadata depending on how you are running it. The final object containing all the reads can be found [here](https://humancellatlas.usegalaxy.eu/u/wendi.bacon.training/h/pre-processing-with-alevin---part-2---total-anndata-example).

We have:

* Taken raw read data and annotations and the necessary input files for quantification.
* Run Alevin in two different parameterisations, first allowing Alevin to make its own calls on what constitutes empty droplets, and then applying emptyDrops instead.
* Deployed barcode rank plots as a way of quickly assessing the signal present in droplet datasets.
* Applied the necessary conversion to pass these data to downstream processes.
* Retrieved partially analysed data from the Single Cell Expression Atlas.

To discuss with like-minded scientists, join our Gitter channel for all things Galaxy single-cell! [![Gitter](https://badges.gitter.im/Galaxy-Training-Network/galaxy-single-cell.svg)](https://gitter.im/Galaxy-Training-Network/galaxy-single-cell?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
# hello-world

Just a repository

what the heck does this do
<properties pageTitle="Multitenant web application pattern | Microsoft Azure" description="Find architectural overviews and design patterns that describe how to implement a multitenant web application in Azure." services="" documentationCenter=".net" authors="wadepickett" manager="wpickett" editor=""/>

<tags ms.service="active-directory" ms.workload="identity" ms.tgt_pltfrm="na" ms.devlang="dotnet" ms.topic="article" ms.date="06/05/2015" ms.author="wpickett"/>

# <a name="multitenant-applications-in-azure"></a>Multitenant applications in Azure

A multitenant application is a shared resource that allows separate users, or "tenants", to view the application as if it were their own. A typical scenario that lends itself to a multitenant application is one in which all users of the application may wish to customize the user experience but otherwise have the same basic business requirements. Examples of large multitenant applications are Office 365, Outlook.com, and visualstudio.com.

From an application provider's perspective, the benefits of multitenancy mostly relate to operational and cost efficiencies. One version of your application can meet the needs of many tenants/customers, allowing consolidation of system administration tasks such as monitoring, performance tuning, software maintenance, and data backup.

The following list gives the most significant goals and requirements from a provider's perspective.

- **Provisioning**: You must be able to provision new tenants for the application. For multitenant applications with a large number of tenants, it is usually necessary to automate this process by enabling self-service provisioning.
- **Maintainability**: You must be able to upgrade the application and perform other maintenance tasks while multiple tenants are using it.
- **Monitoring**: You must be able to monitor the application at all times to identify any problems and to troubleshoot them. This includes monitoring how each tenant is using the application.

A properly implemented multitenant application provides the following benefits to users.

- **Isolation**: The activities of individual tenants do not affect the use of the application by other tenants. Tenants cannot access one another's data. It appears to each tenant as if they have exclusive use of the application.
- **Availability**: Individual tenants want the application to be constantly available, perhaps with guarantees defined in an SLA. Again, the activities of other tenants should not affect the availability of the application.
- **Scalability**: The application scales to meet the demand of individual tenants. The presence and actions of other tenants should not affect the performance of the application.
- **Costs**: Costs are lower than running a dedicated, single-tenant application, because multitenancy enables the sharing of resources.
- **Customizability**: The ability to customize the application for an individual tenant in various ways, such as adding or removing features, changing colors and logos, or even adding their own code or script.

In short, while there are many considerations you must take into account to provide a highly scalable service, there are also a number of goals and requirements that are common to many multitenant applications. Some may not be relevant in specific scenarios, and the importance of individual goals and requirements will differ in each scenario.
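To make the isolation requirement concrete, here is a minimal sketch (plain Python, invented tenant names and data, no Azure APIs) of a tenant-scoped data store: every operation is keyed by the caller's tenant id, so one tenant can never read another's records:

```python
# Minimal tenant-scoped data store: every operation is keyed by tenant id,
# so tenants cannot reach each other's records. Invented example data.

class TenantStore:
    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # Lookups are scoped to the caller's tenant; a missing key in
        # *this* tenant's partition raises KeyError even if another
        # tenant holds the same key.
        return self._data[tenant_id][key]

store = TenantStore()
store.put("contoso", "theme", "dark")
store.put("fabrikam", "theme", "light")

print(store.get("contoso", "theme"))   # dark
print(store.get("fabrikam", "theme"))  # light
```

In a real system the tenant id would come from an authenticated context (for example, a claim in the caller's token) rather than being passed in by the caller, but the principle is the same: scope every data access by tenant before anything else.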
As a provider of the multitenant application, you will also have goals and requirements of your own, such as meeting the tenants' goals and requirements, profitability, billing, multiple service levels, provisioning, maintainability, monitoring, and automation.

For more information about additional design considerations of a multitenant application, see [Hosting a Multi-Tenant Application on Azure][]. For information about common data architecture patterns of multi-tenant software-as-a-service (SaaS) database applications, see [Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database](./sql-database/sql-database-design-patterns-multi-tenancy-saas-applications.md).

Azure provides many features that allow you to address the key problems encountered when designing a multitenant system.

**Isolation**

- Segmenting website tenants by host headers, with or without SSL communication
- Segmenting website tenants by query parameters
- Web services in worker roles
- Worker roles, which typically process data in the background of an application
- Web roles, which typically act as the frontend for applications

**Storage**

Data management offerings such as Azure SQL Database, or Azure Storage services such as the Table service, which provides services for storing large amounts of unstructured data, and the Blob service, which provides services for storing large amounts of unstructured text or binary data such as video, audio, and images.

- Securing multitenant data in SQL Database with appropriate per-tenant SQL Server logins.
- Using Azure Tables for application resources: by specifying a container-level access policy, you gain the ability to adjust permissions without having to issue new URLs for the resources protected with shared access signatures.
- Azure Queues for application resources: Azure Queues are commonly used to drive processing on behalf of tenants, but may also be used to distribute the work required for provisioning or management.
- Service Bus Queues for application resources that push work to a shared service: you can use a single queue where each tenant sender only has permissions (as derived from claims issued by ACS) to push to that queue, while only the receivers from the service have permission to pull from the queue the data coming from multiple tenants.

**Connection and security services**

- Azure Service Bus, a messaging infrastructure that sits between applications, allowing them to exchange messages in a loosely coupled way for improved scale and resilience.

**Networking services**

Azure provides several networking services that support authentication and improve the manageability of your hosted applications. These services include the following:

- Azure Virtual Network lets you provision and manage virtual private networks (VPNs) in Azure, as well as securely link these with on-premises IT infrastructure.
- Virtual Network Traffic Manager allows you to load-balance incoming traffic across multiple hosted Azure services, whether they're running in the same datacenter or in different datacenters around the world.
- Azure Active Directory (Azure AD) is a modern, REST-based service that provides identity management and access control capabilities for your cloud applications. Using Azure AD for application resources, Azure AD can provide an easy way of authenticating and authorizing users to gain access to your web applications and services, while allowing the features of authentication and authorization to be factored out of your code.
- Azure Service Bus provides secure messaging and data flow capabilities for distributed and hybrid applications, such as communication between Azure-hosted applications and on-premises applications and services, without requiring complex firewall and security infrastructure. Using Service Bus Relay for application resources, the services exposed as endpoints may belong to the tenant (for example, hosted outside of the system, such as on-premises), or they may be services provisioned specifically for the tenant (because sensitive, tenant-specific data travels on them).

**Provisioning resources**

Azure provides a number of ways to provision new tenants for the application. For multitenant applications with a large number of tenants, it is usually necessary to automate this process by enabling self-service provisioning.

- Worker roles allow you to provision and de-provision per-tenant resources (such as when a new tenant signs up or cancels), collect metrics for metering use, and manage scale following a particular schedule or in response to the crossing of thresholds of key performance indicators. This same role may also be used to push updates and upgrades to the solution.
- Azure Blobs can be used to provision compute or pre-initialized storage resources for new tenants, while providing container-level access policies to protect the compute service packages, VHD images, and other resources.
- Options for provisioning SQL Database resources for a tenant include:
    - DDL in scripts or embedded as resources within assemblies
    - SQL Server 2008 R2 DAC packages deployed programmatically
    - Copying from a master reference database
    - Using database Import and Export to provision new databases from files

<!--links-->
[Hosting a Multi-Tenant Application on Azure]: http://msdn.microsoft.com/library/hh534480.aspx
[Designing Multitenant Applications on Azure]: http://msdn.microsoft.com/library/windowsazure/hh689716
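The "DDL in scripts" provisioning option above amounts to replaying a schema script into a fresh per-tenant database. A rough illustration using Python's built-in `sqlite3` standing in for SQL Database (the schema and tenant names are invented):

```python
import sqlite3

# Schema script replayed for every new tenant - the "DDL in scripts"
# provisioning option. The table layout here is an invented example.
TENANT_DDL = """
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE invoices  (id INTEGER PRIMARY KEY, customer_id INTEGER,
                        total REAL, FOREIGN KEY(customer_id) REFERENCES customers(id));
"""

def provision_tenant(tenant_id: str) -> sqlite3.Connection:
    # One database per tenant; ":memory:" stands in for a real server.
    conn = sqlite3.connect(":memory:")
    conn.executescript(TENANT_DDL)
    return conn

contoso = provision_tenant("contoso")
tables = [row[0] for row in contoso.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['customers', 'invoices']
```

Because each tenant gets its own database, the schema script is the single source of truth, and tenant isolation falls out of the one-database-per-tenant layout rather than application-level filtering.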
# @loopback/metrics

This module contains a component that reports metrics of Node.js, the LoopBack framework, and your application to [Prometheus](https://prometheus.io/).

## Stability: ⚠️Experimental⚠️

> Experimental packages provide early access to advanced or experimental
> functionality to get community feedback. Such modules are published to npm
> using `0.x.y` versions. Their APIs and functionality may be subject to
> breaking changes in future releases.

## Installation

```sh
npm install --save @loopback/metrics
```

## Basic use

The component should be loaded in the constructor of your custom Application class. Start by importing the component class:

```ts
import {MetricsComponent} from '@loopback/metrics';
```

In the constructor, add the component to your application:

```ts
this.component(MetricsComponent);
```

By default, the Metrics route is mounted at `/metrics`. This path can be customized via the Metrics configuration as follows:

```ts
this.configure(MetricsBindings.COMPONENT).to({
  endpoint: {
    basePath: '/metrics',
  },
  defaultMetrics: {
    timeout: 5000,
  },
  defaultLabels: {
    service: 'api',
    version: '1.0.0',
  },
});
```

{% include note.html content="this.configure() must be called before this.component() to take effect." %}

Note that, by default, the OpenAPI spec is disabled, so the metrics endpoint will not be visible in the API explorer. The spec can be enabled by setting `openApiSpec` to `true`.

```ts
this.configure(MetricsBindings.COMPONENT).to({
  openApiSpec: true,
});
```

## Metrics Collected

There are three types of metrics collected by this component:

1. Node.js metrics - built in by https://github.com/siimon/prom-client
2. LoopBack method invocations - built in by this module using an interceptor
3. Metrics from the application code or other modules - instrumentation is required

## Pull vs Push

Prometheus supports two modes to collect metrics:

- **pull** - scraping from a metrics http endpoint exposed by the system being monitored.
This is the usual mode of operation. See [Why do you pull rather than push?](https://prometheus.io/docs/introduction/faq/#why-do-you-pull-rather-than-push)
- **push** - pushing metrics from the system being monitored to a push gateway. Generally used for ephemeral jobs - see [When to use the Pushgateway](https://prometheus.io/docs/practices/pushing/)

## Try it out

```sh
git clone https://github.com/loopbackio/loopback-next
cd loopback-next
npm install
npm run build
cd examples/metrics-prometheus
npm run demo
```

Open http://localhost:9090 to load the Prometheus web UI.

### /metrics endpoint

http://localhost:3000/metrics returns metrics in plain text format. It includes information for the Node.js process as well as LoopBack method invocations.

<details>
<summary markdown="span">Example of plain text data</summary>
<pre>
# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total 0.132181 1564508354524
# HELP process_cpu_system_seconds_total Total system CPU time spent in seconds.
# TYPE process_cpu_system_seconds_total counter
process_cpu_system_seconds_total 0.023608999999999998 1564508354524
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.15578999999999998 1564508354524
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1564508343
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 61800448 1564508354524
# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds 0.002172946 1564508354526
# HELP nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name.
# TYPE nodejs_active_handles gauge nodejs_active_handles{type="WriteStream"} 2 1564508354524 nodejs_active_handles{type="Server"} 1 1564508354524 nodejs_active_handles{type="Socket"} 2 1564508354524 # HELP nodejs_active_handles_total Total number of active handles. # TYPE nodejs_active_handles_total gauge nodejs_active_handles_total 5 1564508354526 # HELP nodejs_active_requests Number of active libuv requests grouped by request type. Every request type is C++ class name. # TYPE nodejs_active_requests gauge # HELP nodejs_active_requests_total Total number of active requests. # TYPE nodejs_active_requests_total gauge nodejs_active_requests_total 0 1564508354526 # HELP nodejs_heap_size_total_bytes Process heap size from node.js in bytes. # TYPE nodejs_heap_size_total_bytes gauge nodejs_heap_size_total_bytes 27545600 1564508354526 # HELP nodejs_heap_size_used_bytes Process heap size used from node.js in bytes. # TYPE nodejs_heap_size_used_bytes gauge nodejs_heap_size_used_bytes 23788272 1564508354526 # HELP nodejs_external_memory_bytes Nodejs external memory size in bytes. # TYPE nodejs_external_memory_bytes gauge nodejs_external_memory_bytes 1234918 1564508354526 # HELP nodejs_heap_space_size_total_bytes Process heap space size total from node.js in bytes. 
# TYPE nodejs_heap_space_size_total_bytes gauge nodejs_heap_space_size_total_bytes{space="read_only"} 524288 1564508354526 nodejs_heap_space_size_total_bytes{space="new"} 1048576 1564508354526 nodejs_heap_space_size_total_bytes{space="old"} 16900096 1564508354526 nodejs_heap_space_size_total_bytes{space="code"} 688128 1564508354526 nodejs_heap_space_size_total_bytes{space="map"} 1576960 1564508354526 nodejs_heap_space_size_total_bytes{space="large_object"} 6758400 1564508354526 nodejs_heap_space_size_total_bytes{space="code_large_object"} 49152 1564508354526 nodejs_heap_space_size_total_bytes{space="new_large_object"} 0 1564508354526 # HELP nodejs_heap_space_size_used_bytes Process heap space size used from node.js in bytes. # TYPE nodejs_heap_space_size_used_bytes gauge nodejs_heap_space_size_used_bytes{space="read_only"} 31712 1564508354526 nodejs_heap_space_size_used_bytes{space="new"} 9584 1564508354526 nodejs_heap_space_size_used_bytes{space="old"} 15723128 1564508354526 nodejs_heap_space_size_used_bytes{space="code"} 377600 1564508354526 nodejs_heap_space_size_used_bytes{space="map"} 918480 1564508354526 nodejs_heap_space_size_used_bytes{space="large_object"} 6726408 1564508354526 nodejs_heap_space_size_used_bytes{space="code_large_object"} 3456 1564508354526 nodejs_heap_space_size_used_bytes{space="new_large_object"} 0 1564508354526 # HELP nodejs_heap_space_size_available_bytes Process heap space size available from node.js in bytes. 
# TYPE nodejs_heap_space_size_available_bytes gauge nodejs_heap_space_size_available_bytes{space="read_only"} 492264 1564508354526 nodejs_heap_space_size_available_bytes{space="new"} 1038368 1564508354526 nodejs_heap_space_size_available_bytes{space="old"} 1105240 1564508354526 nodejs_heap_space_size_available_bytes{space="code"} 285952 1564508354526 nodejs_heap_space_size_available_bytes{space="map"} 657072 1564508354526 nodejs_heap_space_size_available_bytes{space="large_object"} 0 1564508354526 nodejs_heap_space_size_available_bytes{space="code_large_object"} 0 1564508354526 nodejs_heap_space_size_available_bytes{space="new_large_object"} 1047952 1564508354526 # HELP nodejs_version_info Node.js version info. # TYPE nodejs_version_info gauge nodejs_version_info{version="v12.4.0",major="12",minor="4",patch="0"} 1 # HELP loopback_invocation_duration_seconds method invocation # TYPE loopback_invocation_duration_seconds gauge loopback_invocation_duration_seconds{targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002056 # HELP loopback_invocation_duration_histogram method invocation histogram # TYPE loopback_invocation_duration_histogram histogram loopback_invocation_duration_histogram_bucket{le="0.005",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="0.01",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="0.025",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="0.05",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="0.1",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 
loopback_invocation_duration_histogram_bucket{le="0.25",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="0.5",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="1",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="2.5",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="5",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="10",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_bucket{le="+Inf",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 loopback_invocation_duration_histogram_sum{targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002112 loopback_invocation_duration_histogram_count{targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 # HELP loopback_invocation_total method invocation count # TYPE loopback_invocation_total counter loopback_invocation_total{targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 # HELP loopback_invocation_duration_summary method invocation summary # TYPE loopback_invocation_duration_summary summary loopback_invocation_duration_summary{quantile="0.01",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002363 loopback_invocation_duration_summary{quantile="0.05",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002363 
loopback_invocation_duration_summary{quantile="0.5",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002363 loopback_invocation_duration_summary{quantile="0.9",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002363 loopback_invocation_duration_summary{quantile="0.95",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002363 loopback_invocation_duration_summary{quantile="0.99",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002363 loopback_invocation_duration_summary{quantile="0.999",targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002363 loopback_invocation_duration_summary_sum{targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 0.0002363 loopback_invocation_duration_summary_count{targetName="MockController.prototype.success",method="GET",path="/success",statusCode="204"} 1 </pre> </details> ## Contributions - [Guidelines](https://github.com/loopbackio/loopback-next/blob/master/docs/CONTRIBUTING.md) - [Join the team](https://github.com/loopbackio/loopback-next/issues/110) ## Tests Run `npm test` from the root folder. ## Contributors See [all contributors](https://github.com/loopbackio/loopback-next/graphs/contributors). ## License MIT
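The third category of metrics listed above (application metrics) requires your own instrumentation, which the component delegates to prom-client. As a dependency-free illustration of the text format shown in the `/metrics` sample, here is a tiny counter that renders itself in the same `# HELP` / `# TYPE` / sample layout. The class and metric names here are illustrative only, not part of the `@loopback/metrics` API:

```typescript
// Minimal, dependency-free sketch of the Prometheus text exposition format
// shown in the /metrics sample above. The real component delegates this to
// prom-client; TinyCounter and its method names are illustrative only.

type Labels = Record<string, string>;

class TinyCounter {
  private values = new Map<string, number>();

  constructor(
    readonly name: string,
    readonly help: string,
  ) {}

  // Increment the sample for a given label set.
  inc(labels: Labels = {}, by = 1): void {
    const key = JSON.stringify(labels);
    this.values.set(key, (this.values.get(key) ?? 0) + by);
  }

  // Render in the "# HELP / # TYPE / samples" layout used by Prometheus.
  expose(): string {
    const lines = [
      `# HELP ${this.name} ${this.help}`,
      `# TYPE ${this.name} counter`,
    ];
    for (const [key, value] of this.values) {
      const labels = Object.entries(JSON.parse(key) as Labels)
        .map(([k, v]) => `${k}="${v}"`)
        .join(',');
      lines.push(labels ? `${this.name}{${labels}} ${value}` : `${this.name} ${value}`);
    }
    return lines.join('\n');
  }
}

const counter = new TinyCounter('loopback_invocation_total', 'method invocation count');
counter.inc({method: 'GET', path: '/success'});
counter.inc({method: 'GET', path: '/success'});
console.log(counter.expose());
```

In a real application you would use prom-client's `Counter` and registry instead; the point here is only to make the scraped text format above concrete.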
Source: gregli-msft/powerapps-docs.es-es, path powerapps-docs/maker/canvas-apps/accessibility-checker.md, licenses CC-BY-4.0 / MIT
---
title: Review a canvas app for accessibility | Microsoft Docs
description: Identify ways to make a canvas app more accessible for users who have visual, hearing, and other disabilities
author: emcoope-msft
ms.service: powerapps
ms.topic: article
ms.date: 07/05/2018
ms.author: emcoope
search.audienceType:
  - maker
search.app:
  - PowerApps
ms.openlocfilehash: 11ec805a713743e2524651128b036ccaaade69e3
ms.sourcegitcommit: 429b83aaa5a91d5868e1fbc169bed1bac0c709ea
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/24/2018
ms.locfileid: "42842541"
---
# <a name="review-a-canvas-app-for-accessibility-in-powerapps"></a>Review a canvas app for accessibility in PowerApps

Users who have vision, hearing, or other impairments can use your canvas app more easily and successfully if you consider accessibility as you design how the app looks and behaves. If you're not sure how to make your app more accessible, you can run the Accessibility checker in PowerApps Studio. This tool not only finds potential accessibility issues, but also explains why each might be a problem for users who have a specific disability, and it offers suggestions on how to resolve each issue. The Accessibility checker detects screen-reader and keyboard issues, and you can find information about how to fix color-contrast issues by using [accessible colors](accessible-apps-color.md).

The Accessibility checker helps you identify settings that you might want to change, but you should always weigh the suggestions against what your app needs to do. Many suggestions may be worth following, but you can ignore any that would do more harm than good.

## <a name="find-accessibility-issues"></a>Find accessibility issues

1. In the upper-right corner of PowerApps Studio, select the icon for the App checker.

   ![App checker icon](./media/accessibility-checker/app-checker-icon.png)

2. In the menu that appears, select **Accessibility**.

   ![List of options in the App checker pane](./media/accessibility-checker/app-checker-menu.png)

   A list of issues appears, sorted first by severity and then by screen.

   ![Accessibility checker pane and list of items](./media/accessibility-checker/accessibility-checker-pane.png)

3. Select the arrow next to an item to show its details.

   ![Accessibility checker details](./media/accessibility-checker/details-pane.png)

4. Select the back arrow to return to the list of items.

5. If you decide to address an issue, select it to open the affected property.

6. After you change one or more properties, select **Re-check** to refresh the list of issues. Resolved items disappear from the list, and new items may appear.

## <a name="severity-of-issues"></a>Severity of issues

The Accessibility checker classifies each issue as an error, a warning, or a tip, depending on its severity.

- **Errors** identify issues that make the app difficult or impossible for users who have disabilities to use and understand.
- **Warnings** identify issues that make the app difficult to use or understand for many, but not all, users who have disabilities.
- **Tips** help you improve the experience of users who have disabilities.

## <a name="types-of-issues"></a>Types of issues

| Issue title | Severity | Issue description | How to fix | Why fix |
| ------------------------------ |:---------| -----| ------|------ |
| **Missing accessible label** | Error | The accessible-label property of an interactive control contains no text. A control can be inherently interactive, such as a button, or have interactive properties. For example, you might have set the **OnSelect** property of an image, or set its **TabIndex** property to 0 or greater. | Edit the accessible-label property to describe the element. | If the accessible-label property contains no text, users who can't see the screen won't understand what's in images and controls. |
| **Focus isn't showing** | Error | A control's **FocusedBorderThickness** property is set to 0. It's good practice to ensure an adequate color-contrast ratio between the focus border and the control itself so that the border is clearly visible. | Change the **FocusedBorderThickness** property to a value greater than 0. | If focus isn't visible, users who don't use a mouse can't see it when they interact with the app. |
| **Missing captions** | Warning | The **ClosedCaptionsURL** property of an **Audio** or **Video** control is empty. | Set the **ClosedCaptionsURL** property to the URL for the captions. | Without captions, people who have disabilities might not get any information from a video or audio segment. |
| **Missing helpful control settings** | Warning | Any of several settings (for example, showing labels and markers for charts, as well as default controls for **Audio**, **Video**, and **Pen input** controls) are turned off. | Select the warning, and then set the property to **true**. | By changing these property settings, you give the user more information about how the controls in your app work. |
| **HTML will not be accessible** | Warning | A control other than an HTML text control contains HTML. In that case, PowerApps doesn't support the accessibility of custom HTML elements. | Use a different method instead of HTML, or remove the HTML from this element. | Your app won't work correctly or be accessible if you place interactive HTML elements in it. |
| **Turn off auto-start** | Warning | The **Autostart** property of an **Audio** or **Video** control is set to **true**. | Set the control's **Autostart** property to **false**. | Video and audio files that play automatically can distract users. Let them choose whether to play a clip. |
| **Revise the screen name** | Tip | A screen has a default name, which screen readers announce as users navigate the app. | Give the screen a name that describes what's on it or what it's used for. | People who are blind, have low vision, or have a reading disability rely on screen names to navigate with a screen reader. |
| **Add state-indication text** | Tip | A control has a state (for example, a toggle), but its value labels are turned off. | Set the control's **ShowValue** property to **true** to show its current state. | Users won't get confirmation of their actions if the control's state isn't shown. |
| **Check the order of screen items** | Tip | The **TabIndex** property is greater than 1. App makers can set a custom tab order by setting the **TabIndex** property to a numeric value such as 1, 2, 3, and 4. This tip reminds you to review the interactive order of that screen. As a best practice, follow a design in which the **TabIndex** property is 0. | Make sure that the items on the screen match the order in which they should be reached by tabbing. | When a screen reader reads the elements of an app, they should appear in the order in which a user would see them, rather than in a less intuitive order. |
| **Add another input method** | Tip | The app contains a **Pen input** control. This tip reminds you to include a separate input method. | Add a **Text input** control in addition to the **Pen input** control to offer an accessible experience. | Some users can't use a pen and need another way to provide information, such as typing a signature. |

## <a name="next-steps"></a>Next steps

- [Create accessible apps](accessible-apps.md)
- [Accessible colors](accessible-apps-color.md)
- [Accessibility properties](controls/properties-accessibility.md)
Source: Zyrakia/warehouse, path README.md, license MIT
> READ: This is a library made by me, for me, published because it was the easiest thing to do, and I wanted to try publishing a roblox-ts package. If you decide to use this, it means that you are okay with a package in your dependency tree that is incomplete, could lose support at any moment, and will likely be abandoned one day.

# Warehouse

A basic DataStore abstraction library made kind of for myself, just so I could get the hang of making roblox-ts packages.

Most things are documented within the code, but there is a quick introduction below, in TypeScript (💙), since that is what I used to make this library.

I know I am not the best programmer, nor do I know much about Roblox yet, so if you have **ANY** suggestions for this library, or want to give me some constructive criticism (it can be harsh, I will cry about it and then improve the code based on it), just do it, I am completely open to it.

## Features

- Only addresses the actual DataStore on load or commit.
- Automatically uses a MockDataStore in development mode.
- Easily allows listening to updates with Signals.
- Automatically encodes / decodes tables; serialization is up to you though, maybe one day I will add automatic serialization for some basic things.
- Allows adding of transformers and guards to limit what is being put in the warehouse.
- Allows for easy processing of data that is loaded or committed, for things like serialization.
- Allows you to set a template value to reconcile any value loaded from the store; you can even have these templates be tables, which will be merged with the existing table if one is loaded from the store!
- Made with TypeScript, making its API fully typed.
## Quick Intro

```ts
const warehouse = WarehouseFactory.init<string[]>('LearnedRecipes', []);

warehouse.set('Joe', ['Vegan Tarts', 'Apple Fritters']);
warehouse.set('Mary', ['Apple Pie', 'Star Wars Muffins']);

print(warehouse.get('Joe')); // ["Vegan Tarts", "Apple Fritters"]
```

Ordered warehouses wrap around OrderedDataStores. They act like normal warehouses but can load ordered entries; these are separate from the normal values, but can be modified by them.

Ordered warehouses are decently limited currently: you can only load the first 100 descending or ascending values if you want to load ordered entries. This is because I am lazy and the current functionality is kind of enough for my use-case 😗.

But the good thing about these guys is that when a normal value is updated, if its key is inside the ordered values it will update those as well! Also, when the ordered values are loaded, if a key is loaded that is inside the normal values it will use that value instead.

```ts
const levels = WarehouseFactory.initOrdered('PlayerLevels');

class LevelBoostTransformer {
	public transform(info: UpdateInformation) {
		const userId = tonumber(info.key);
		if (!userId) return info.newValue;

		const player = Players.GetPlayerByUserId(userId);
		if (!player) return info.newValue;

		if (this.playerOwnsGamepass(player)) return info.newValue * 3;
		else return info.newValue;
	}

	private playerOwnsGamepass(player: Player) {
		return MarketplaceService.UserOwnsGamePassAsync(player.UserId /* GAMEPASS */);
	}
}

levels.addGuards(new ServerGuard()); // Only a server update source can set this key.
levels.addTransformers(new LevelBoostTransformer());

// User with ID 2274758232 owns the boost gamepass
// User with ID 128216998 does not own the boost gamepass

print(levels.get(2274758232)); // 0
levels.set(2274758232, 5);
print(levels.get(2274758232)); // 15

levels.set(128216998, 10);
print(levels.get(128216998)); // 10

print(levels.getOrdered()); // [["2274758232", 15], ["128216998", 10]]
```

## Definitions

Most things are defined in code comments, but if you want a quick overview of some of the terms, here they are 😃.

| Term | Definition |
| ---------------- | -------------------------------------------------------------------------------------------------------------------- |
| Active Document | The way the data looks while it is 'active' inside the warehouse. |
| Dormant Document | The way the data looks while it is 'dormant' inside the DataStore. |
| Template | Data that is used to reconcile any missing data in a dormant document before it is turned into an active document. |
| Processor | A class that can optionally hook into different lifecycles of a document to process it in some way. |
| Transformer | A class with a function that receives information about an update and returns what the updated value should be. |
| Guard | A class with a function that receives information about an update and returns whether the update should be allowed. |
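The README uses `ServerGuard` without showing what a guard looks like. Based on the Guard definition in the table above, a minimal sketch might look like the following. The `UpdateInformation` shape and the method name used here are assumptions for illustration; check the package's typings for the real interface:

```typescript
// Hypothetical guard sketch, based only on the Guard definition above:
// "a class with a function that receives information about an update and
// returns whether the update should be allowed."
// The UpdateInformation shape below is an assumption, not the package's type.
interface UpdateInformation {
  key: string;
  oldValue?: number;
  newValue: number;
}

// Rejects any update that would lower an existing value
// (e.g. a "levels never go down" rule).
class NoDecreaseGuard {
  public check(info: UpdateInformation): boolean {
    return info.oldValue === undefined || info.newValue >= info.oldValue;
  }
}

const guard = new NoDecreaseGuard();
const allowed = guard.check({ key: '2274758232', oldValue: 5, newValue: 15 });
const blocked = guard.check({ key: '2274758232', oldValue: 15, newValue: 3 });
```

A guard like this would then be registered the same way as in the snippet above, e.g. `levels.addGuards(new NoDecreaseGuard())`, assuming the real guard interface matches.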
Source: zhzhzhy/WeiBoHot_history, path 2020/12/05/2020-12-05 21:20.md, license MIT
Data for December 5, 2020, 21:00

Status: 200

1. Xiao Zhan; the "Everybody Stand By" finale

Weibo heat: 4441792

2. Biden says he hopes Trump will attend the inauguration

Weibo heat: 3834672

3. Angelababy, the girl group's "visual"

Weibo heat: 3602272

4. Zhang Yue's acting

Weibo heat: 2022222

5. "Shine! Super Brothers" (追光吧哥哥)

Weibo heat: 2006663

6. Migu Music Awards (音乐盛典咪咕汇)

Weibo heat: 1859670

7. "Everybody Stand By" grand finale gala

Weibo heat: 1672587

8. Bank of China fined 50.5 million yuan

Weibo heat: 1524949

9. US doctor photographed at work in an underground parking-garage ward

Weibo heat: 1309445

10. Father of the girl who died jumping into a river speaks out

Weibo heat: 1139261

11. Derek Yee, Paul Chun, and David Chiang in the same frame for the first time

Weibo heat: 1109844

12. Lei Jiayin responds

Weibo heat: 1088418

13. Ding Zhen spoils Zhenzhu with a double standard

Weibo heat: 957768

14. Trump orders withdrawal of most US troops from Somalia

Weibo heat: 954458

15. iQIYI Scream Night

Weibo heat: 954324

16. Paul Chun: "If you're unhappy with anything, just take it out on me"

Weibo heat: 954277

17. xxxfffff_97

Weibo heat: 856900

18. Happy Camp

Weibo heat: 713521

19. Robot vacuum sales in China rank first worldwide

Weibo heat: 705867

20. The most comfortable way to spend a weekend

Weibo heat: 701006

21. BLACKPINK wins the best female dance track award

Weibo heat: 691820

22. Parent makes a show of presenting a teacher with a banner for "inaction"

Weibo heat: 681207

23. When esports players go on a talent show

Weibo heat: 675545

24. Zhang Boli hides his hospital gown during a video meeting after gallbladder surgery

Weibo heat: 666807

25. Ma Boqian suddenly salutes Huang Zitao

Weibo heat: 658352

26. Zhang Meng liked a post

Weibo heat: 646219

27. AK joins "Produce Camp 4" (创4)

Weibo heat: 569233

28. Liu Yifei's braided hairstyle

Weibo heat: 432440

29. Australia responds to soldiers using a prosthetic leg as a drinking cup

Weibo heat: 431762

30. 2020 MMA

Weibo heat: 431711

31. Lei Jiayin

Weibo heat: 427755

32. Ratings for "冷血狂宴" (Cold Blooded Banquet)

Weibo heat: 407241

33. How long is China's longest train

Weibo heat: 394823

34. Vice foreign minister on the so-called "wolf warrior diplomacy" label

Weibo heat: 387433

35. Hong Kong reports 101 new confirmed COVID-19 cases

Weibo heat: 387225

36. The "melon-eating law" of the postgraduate entrance exam (考研吃瓜定律)

Weibo heat: 301716

37. Ding Zhen receives a racehorse named Qinglong

Weibo heat: 293535

38. Inner Mongolia strictly bans teachers from assigning homework via WeChat groups

Weibo heat: 291585

39. Taxi drivers queue late at night at Pudong Hospital to take patients home

Weibo heat: 289421

40. Eddie Peng's eyes turned red filming underwater scenes

Weibo heat: 287636

41. Hu Mingxuan blocks Sun Minghui

Weibo heat: 285040

42. Bank of China responds to the 50.5-million-yuan "Crude Oil Treasure" fine

Weibo heat: 282344

43. Cai Xukun's hair is sticking up

Weibo heat: 279557

44. Su Wei injured

Weibo heat: 277462

45. Wuyishan commemorative coins to be issued

Weibo heat: 277250

46. Lawyer discusses the girl who drowned as police looked on

Weibo heat: 277237

47. Art teacher sweeps a giant snow-bamboo picture into a snowy field

Weibo heat: 270326

48. Gao Ran

Weibo heat: 270269

49. Latin dance performed with the momentum of a martial-arts contest

Weibo heat: 251445

50. Should swimming be a required skill for police officers

Weibo heat: 251145
Source: tliu68/autogmm, path README.md, license MIT
# AutoGMM

## `AutoGMM` is a module for automatic and hierarchical Gaussian mixture modeling in `graspologic`.

# Documentation

The official documentation with usage is at https://graspologic.readthedocs.io/en/latest/

Please visit the [tutorial section](https://microsoft.github.io/graspologic/tutorials/clustering/autogmm.html) on the official website for more in-depth usage.

# System Requirements

## Hardware requirements

The `graspologic` package requires only a standard computer with enough RAM to support the in-memory operations.

## Software requirements

### OS Requirements

`graspologic` is tested on the following OSes:

- Linux x64
- macOS x64
- Windows 10 x64

And across the following versions of Python:

- 3.6 (x64)
- 3.7 (x64)
- 3.8 (x64)

# Installation Guide

## Install from pip

```
pip install graspologic
```

## Install from Github

```
git clone https://github.com/microsoft/graspologic
cd graspologic
python3 -m venv venv
source venv/bin/activate
python3 setup.py install
```

The algorithms are located in [microsoft/graspologic/graspologic/cluster/](https://github.com/microsoft/graspologic/tree/dev/graspologic/cluster).

To run the R scripts, you will need to install R and the mclust library (we use version 5.4.2 in the paper). We recommend the RStudio IDE https://www.rstudio.com/. Users may need to "Set Working Directory" to "Source File Location" for the scripts to find relative paths correctly.

# Directories

## scripts

### complete_experiments

These files reproduce Table 2, Figures 1-3, and Figure 5. They run the clustering algorithms on the complete datasets. Instructions within.

### subset_experiments

These files reproduce Figure 4. They run the clustering algorithms on the subsets of the data. Instructions within.

### option_runtimes

These files reproduce Figure 6. Instructions within.

### compare_clusterings

These files reproduce Figure 7. They run and compare various clustering algorithms on the double-cigar dataset. Instructions within.
### hgmm_experiments These files reproduce Figures 8-9. They run the hierarchical clustering algorithm on simulated and real datasets. Instructions within. **brute_cluser_graspyclust.py** - implementation of graspyclust \ **make_gmix.py** - script that was used to make data/synthetic.csv ## data contains the datasets that were used in the paper # Contributing We welcome contributions from anyone. Please see our [contribution guidelines](https://github.com/microsoft/graspologic/blob/dev/CONTRIBUTING.md) before making a pull request. Our [issues](https://github.com/microsoft/graspologic/issues) page is full of places we could use help! If you have an idea for an improvement not listed there, please [make an issue](https://github.com/microsoft/graspologic/issues/new) first so you can discuss with the developers. # License This project is covered under the MIT License. # Issues We appreciate detailed bug reports and feature requests (though we appreciate pull requests even more!). Please visit our [issues](https://github.com/microsoft/graspologic/issues) page if you have questions or ideas. # Citing `AutoGMM` If you find `AutoGMM` useful in your work, please cite the algorithm via the [AutoGMM paper](https://arxiv.org/abs/1909.02688) > Athey, T. L., Liu, T., Pedigo, B. D., & Vogelstein, J. T. (2021). AutoGMM: Automatic and Hierarchical Gaussian Mixture Modeling in Python. arXiv preprint arXiv:1909.02688.
Source: chaodongyang/chaodongyang.github.io, path _posts/blog/2017-11-23-javaHoldObject.md, license MIT
---
layout: post
title: "Thinking in Java: Holding Your Objects (Part 1)"
categories: javaThinking
description: "Thinking in Java: holding your objects"
keywords: java, holding objects in Java, Java containers
---

A program is simple if it holds only a fixed number of objects whose lifetimes are all known. Usually, though, a program creates new objects at run time based on conditions it discovers only while running; before then it knows neither the number nor the exact type of the objects it needs. The arrays discussed earlier can hold a group of primitives, but an array has a fixed size, and we need more sophisticated ways to store objects. The Java class library provides a fairly complete set of container classes to solve this problem; the basic types are List, Set, Queue, and Map. These types are known as collections, and we can refer to them as containers. You can place any number of objects into a container as your program runs, and each container has its own special properties: a Set holds only one of each object, a Map is an associative array that lets you associate objects with other objects, and Java containers can resize themselves.

## Generics and type-safe containers

Containers before Java SE5 let us insert incorrect types. For example, consider an ArrayList container meant to hold Apple objects; we can still put other objects into it. Normally the compiler reports a warning, because this example doesn't use generics. Here we can use a Java SE5 annotation to suppress the warning: `@SuppressWarnings("unchecked")` indicates that only "unchecked" warnings should be suppressed.

The Apple class:

```java
public class Apple {
	private static long counter;
	private final long ID = counter++;
	public long id() {
		return ID;
	}
}
```

The Orange class:

```java
public class Orange {

}
```

Adding them to a container and calling id():

```java
public class ApplesAndOrange {
	@SuppressWarnings("unchecked")
	public static void main(String[] args) {
		// TODO Auto-generated method stub
		ArrayList list = new ArrayList<>();
		for (int i = 0; i < 3; i++) {
			list.add(new Apple());
			list.add(new Orange());
		}
		for (int i = 0; i < list.size(); i++) {
			((Apple) list.get(i)).id();
		}
	}
}
```

The result:

```
Exception in thread "main" java.lang.ClassCastException: collection.Orange cannot be cast to collection.Apple
	at collection.ApplesAndOrange.main(ApplesAndOrange.java:18)
```

Adding to the container works, because both classes inherit from Object, and the container holds Objects by default. But when we use get() to fetch what we think is an Apple, we get back only an Object reference, and we need a cast before we can use the id() method. At run time the cast fails, because we are converting an Orange into an Apple, and the two types are completely unrelated.

With generics, the predefined generic classes usually solve this problem quite simply. To declare an ArrayList that holds Apple objects, you write `ArrayList<Apple>`. The angle brackets surround the type parameter, which specifies what type the container can hold, so the compiler can prevent the wrong type from being put into the container at compile time.

```java
public class ApplesAndOrange {
	public static void main(String[] args) {
		// TODO Auto-generated method stub
		ArrayList<Apple> list = new ArrayList<Apple>();
		for (int i = 0; i < 3; i++) {
			list.add(new Apple());
			//list.add(new Orange());
		}
		for (int i = 0; i < list.size(); i++) {
			System.out.println((list.get(i)).id());
		}
	}
}
```

Now there are no warnings and no cast is needed, because the compiler will stop you from putting an Orange into the container, and get() performs the conversion for you.

When we specify a container's type parameter, we are not limited to putting in exactly that type; we can also put subtypes of the type parameter into the container through upcasting.

Extending Apple:

```java
public class Orange extends Apple {

}
```

Running the earlier code again:

```java
public class ApplesAndOrange {
	public static void main(String[] args) {
		// TODO Auto-generated method stub
		ArrayList<Apple> list = new ArrayList<Apple>();
		for (int i = 0; i < 3; i++) {
			list.add(new Apple());
			list.add(new Orange());
		}
		for (int i = 0; i < list.size(); i++) {
			System.out.println((list.get(i)).id());
		}
	}
}
```

The result:

```
012345
```

Now we can see that Orange objects can go in again.

## Basic concepts

Java container classes are used to hold objects, and they divide into two distinct concepts:

- Collection: a sequence of independent elements that conform to one or more rules. A List must hold elements in insertion order, a Set cannot contain duplicate elements, and a Queue determines the order in which objects are produced according to its queuing rules.
- Map: a group of key-value object pairs that lets you look up a value using a key. An ArrayList lets you look up values using numbers, so it associates numbers with objects. A map lets you look up one object using another object; it is also called an associative array, or a dictionary, because you look up a value object using a key object, just as you look up a definition using a word in a dictionary.

In most programming situations we work with these interfaces. The only time you need to specify the precise type in use is when you create it:

```
List<Apple> apples = new ArrayList<Apple>();
```

Here ArrayList has been upcast to List. The point of using the interface is that if you later decide to change your implementation, all you need to change is the point of creation. So you should create an object of the concrete class and upcast it to the interface type. The classes that implement an interface have different capabilities, and if you need those capabilities, you can't upcast to the more general interface type.

The following example shows a Collection holding a group of Integer objects:

```java
public static void main(String[] args) {
	// TODO Auto-generated method stub
	Collection<Integer> cIntegers = new ArrayList<Integer>();
	for (int i = 0; i < 10; i++) {
		cIntegers.add(i);
	}
	for (Integer integer : cIntegers) {
		System.out.print(integer);
	}
}
```

The result:

```
0123456789
```

Here we used only Collection's add() method, so any object of a class that extends Collection can use add() to insert objects. But Set also extends Collection, and a Set doesn't allow duplicate elements, so add() only guarantees that the Collection will contain the specified element.

## Adding groups of elements

A quick look at the Arrays and Collections classes. Arrays.asList() takes an array, or a comma-separated list of elements, and turns it into a List object. Collections.addAll() takes the Collection object it needs, plus an array or a comma-separated list of elements, and adds the elements to the first Collection argument; in effect it initializes that Collection.

The first way to initialize:

```java
public static void main(String[] args) {
	// TODO Auto-generated method stub
	Collection<Integer> cIntegers = new ArrayList<Integer>(Arrays.asList(1,2,3,4,5));
	for (Integer integer : cIntegers) {
		System.out.print(integer);
	}
}
```

The second way to initialize:

```java
public static void main(String[] args) {
	// TODO Auto-generated method stub
	Collection<Integer> cIntegers = new ArrayList<Integer>();
	Collections.addAll(cIntegers, 1,2,3,4,5);
	for (Integer integer : cIntegers) {
		System.out.print(integer);
	}
}
```

You can also add an array:

```java
public static void main(String[] args) {
	// TODO Auto-generated method stub
	Collection<Integer> cIntegers = new ArrayList<Integer>();
	Integer[] integers = {1,2,3,4,5};
	Collections.addAll(cIntegers, integers);
	for (Integer integer : cIntegers) {
		System.out.print(integer);
	}
}
```

Or let the container itself add an array:

```java
public static void main(String[] args) {
	// TODO Auto-generated method stub
	Collection<Integer> cIntegers = new ArrayList<Integer>();
	Integer[] integers = {1,2,3,4,5};
	cIntegers.addAll(Arrays.asList(integers));
	//Collections.addAll(cIntegers, integers);
	for (Integer integer : cIntegers) {
		System.out.print(integer);
	}
}
```

Creating a List directly with Arrays.asList():

```java
List<Integer> list = Arrays.asList(1,2,3,4,5);
list.add(6);
list.set(1, 99);
for (Integer integer : list) {
	System.out.println(integer);
}
```

The result:

```
Exception in thread "main" java.lang.UnsupportedOperationException
	at java.util.AbstractList.add(AbstractList.java:148)
	at java.util.AbstractList.add(AbstractList.java:108)
	at collection.ApplesAndOrange.main(ApplesAndOrange.java:24)
```

We get an error: the operation isn't supported. Which operation? The List we get from Arrays.asList() is backed by an array underneath, so we cannot resize it, and calling add() on it is wrong. We can, however, replace values with set().

## Printing containers

Containers can be printed directly:

```java
public class PrintContaines {
	static Collection fill(Collection<String> collection){
		collection.add("aaa");
		collection.add("bbb");
		collection.add("ccc");
		collection.add("bbb");
		return collection;
	}
	static Map fill(Map<String,String> map){
		map.put("aaa", "哈哈");
		map.put("bbb", "等等");
		map.put("ccc", "洋洋");
		map.put("bbb", "小小");
		return map;
	}
	public static void main(String[] args) {
		// TODO
Auto-generated method stub System.out.println(fill(new ArrayList<String>())); System.out.println(fill(new LinkedList<String>())); System.out.println(fill(new HashSet<String>())); System.out.println(fill(new TreeSet<String>())); System.out.println(fill(new LinkedHashSet<String>())); System.out.println(fill(new HashMap<String,String>())); System.out.println(fill(new TreeMap<String,String>())); System.out.println(fill(new LinkedHashMap<String,String>())); } } ``` 执行结果: ``` [aaa, bbb, ccc, bbb] [aaa, bbb, ccc, bbb] [aaa, ccc, bbb] [aaa, bbb, ccc] [aaa, bbb, ccc] {aaa=哈哈, ccc=洋洋, bbb=小小} {aaa=哈哈, bbb=小小, ccc=洋洋} {aaa=哈哈, bbb=小小, ccc=洋洋} ``` 通过查看结果我们来看一下他们的区别和共性: - Collection - ArrarList 和 LinkedList 都是 List 类型,他们都是按照被插入的顺序保存元素。两者不同之处在于执行某些操作的性能。而且 LinkedList 包含的操作更多。 - HashSet、TreeSet、LinkedHashSet 都是 Set 类型。每个相同的元素只保存一次。HashSet 保存的方式最复杂,但是这种技术的获取速度是最快的。TreeSet 将按照比较结果的升序保存对象。LinkedHashSet 将按照保存的顺序保存对象。 - Map - HashMap 使用了一种非常快的算法来控制顺序,并不是按照存储顺序排列的。也提供了最快的查找速度。 - TreeMap 按照比较结果的升序来保存键。 - LinkedHashMap 按照插入顺序保存键,同时提供了 HashMap 的查询速度。 ## List List 承诺将元素维护在特定的序列中。List 接口在 Collection 的基础上增加了大量的方法。可以使得我们在 List 之间插入和移除元素。 有两种类型的 List: - ArrayList 随机访问性能高,但是插入和移除数据速度慢 - LinkedList 随机访问慢,他通过较低的代码在元素之间插入和删除操作。 ```java public class ListFeature { public static void main(String[] args) { // TODO Auto-generated method stub Random random = new Random(47); List<Apple> list = new ArrayList<>(Arrays.asList(new Apple())); System.out.println("1"+list); Orange orange = new Orange(); list.add(orange); System.out.println("2"+list); //查找集合中是否包含这个元素 System.out.println("3"+list.contains(orange)); //删除 orange元素 list.remove(orange); Apple apple = new Apple(); list.add(apple); list.add(orange); //查看apple元素的索引位置 System.out.println("4"+list.indexOf(apple)); //获取List的索引1和2的数据组成新的集合 List<Apple> sub = list.subList(1,3); System.out.println("5"+sub); //判断旧集合是否包含新的集合 System.out.println("6"+list.containsAll(sub)); //排序 Collections.sort(sub); System.out.println("排序"+sub); //排序之后仍然是true 
System.out.println("7"+list.containsAll(sub)); Banner banner = new Banner(); sub.add(banner); //打乱顺序 Collections.shuffle(sub,random); System.out.println("8"+sub); //拷贝上一个集合 List<Apple> list2 = new ArrayList<Apple>(list); sub = Arrays.asList(list.get(1),list.get(2)); System.out.println("9"+sub); System.out.println("10"+list2); list2 = new ArrayList<>(list); System.out.println("11"+list2); list2.remove(2); System.out.println("12"+list2); list2.removeAll(sub); System.out.println("13"+list2); list2.set(0, new Banner()); System.out.println("14"+list2); list2.addAll(2,sub); System.out.println("15"+list2); System.out.println("16"+list.isEmpty()); list.clear(); System.out.println("17"+list); } } ``` 执行结果: ``` 1[collection.Apple@6d06d69c] 2[collection.Apple@6d06d69c, collection.Orange@7852e922] 3true 41 5[collection.Apple@4e25154f, collection.Orange@7852e922] 6true 排序[collection.Apple@4e25154f, collection.Orange@7852e922] 7true 8[collection.Orange@7852e922, collection.Apple@4e25154f, collection.Banner@70dea4e] 9[collection.Orange@7852e922, collection.Apple@4e25154f] 10[collection.Apple@6d06d69c, collection.Orange@7852e922, collection.Apple@4e25154f, collection.Banner@70dea4e] 11[collection.Apple@6d06d69c, collection.Orange@7852e922, collection.Apple@4e25154f, collection.Banner@70dea4e] 12[collection.Apple@6d06d69c, collection.Orange@7852e922, collection.Banner@70dea4e] 13[collection.Apple@6d06d69c, collection.Banner@70dea4e] 14[collection.Banner@5c647e05, collection.Banner@70dea4e] 15[collection.Banner@5c647e05, collection.Banner@70dea4e, collection.Orange@7852e922, collection.Apple@4e25154f] 16false 17[] ``` ## 迭代器 迭代器是一种设计模式。迭代器是一个对象类,他的工作是遍历并选择序列中的对象,而程序员不必知道该序列底层的结构。迭代器通常被称为轻量级对象,创建它的代价很小。Java 中的迭代器只能单向移动。 - 使用方法 iterator() 要求容器返回一个 Iterator。Iterator 将准备好返回序列的第一个元素。 - 使用 next() 获得序列中下一个元素 - 使用 hasNext() 检查序列中是否还有元素 - 使用 remove() 将迭代器新近返回的元素删除 ```java public class SimpleIteration { static List<Integer> list = new ArrayList<>(Arrays.asList(1,2,3,4,5)); static 
Iterator<Integer> iterator = list.iterator(); public static void main(String[] args) { // TODO Auto-generated method stub /*while (iterator.hasNext()) { System.out.print(iterator.next()); }*/ System.out.println("-----------"); for (int i = 0; i < 5; i++) { if (iterator.hasNext()) { iterator.next(); iterator.remove(); } } System.out.print(list); } } ``` #### ListIterator ListIterator 是一个更加强大的 Iterator 的子类。他只能用于 List 类的访问。但是他可以双向移动。他还可以产生前一个和后一个的索引。并且可以使用 set() 方法替换他访问过的最后一个元素。 ```java public class SimpleIteration { public static void main(String[] args) { // TODO Auto-generated method stub List<Integer> list = new ArrayList<>(Arrays.asList(1,2,3,4,5)); ListIterator<Integer> iterator = list.listIterator(3); /* while (iterator.hasNext()) { System.out.println(iterator.next()+"----"+iterator.nextIndex()+"-------"+iterator.previousIndex()); iterator.set(10); } System.out.println(list);*/ while (iterator.hasPrevious()) { System.out.println(iterator.previous().intValue()); } } } ``` ## LinkedList LinkedList 也是实现了 List 接口,但是他执行插入和移除时更高效,随机访问稍微慢一点。它添加了很多独特的方法.由于方法很多我们可以写一下代码去观察一下这些方法都是什么用处。 ```java public class SimpleIteration { public static void main(String[] args) { // TODO Auto-generated method stub LinkedList<Integer> list = new LinkedList<>(Arrays.asList(1,2,3,4,5,6,7,8,9,10)); System.out.println(list); System.out.println(list.getFirst()); System.out.println(list.element()); System.out.println(list.peek()); System.out.println(list.remove()); System.out.println(list.removeFirst()); System.out.println(list.poll()); list.addFirst(11); System.out.println(list); list.offer(12); System.out.println(list); list.add(13); System.out.println(list); list.addLast(14); System.out.println(list); list.removeLast(); System.out.println(list); } } ``` 执行结果: ``` [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] 1 1 1 1 2 3 [11, 4, 5, 6, 7, 8, 9, 10] [11, 4, 5, 6, 7, 8, 9, 10, 12] [11, 4, 5, 6, 7, 8, 9, 10, 12, 13] [11, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14] [11, 4, 5, 6, 7, 8, 9, 10, 12, 13] ``` ## Stack 
栈通常是指先进后出的容器。有时候栈也被称之为叠加栈,因为最后压入栈的元素最先弹出栈。 LinkedList 具有实现栈的功能的所有方法,因此可以直接将 LinkedList 作为栈使用。使用到了泛型个概念。类名之后的 T 告诉编译器这将是一个参数化类型,而其中的类型参数,在类中被使用时将被替换为实际的类型参数。 ```java public class Stack<T> { LinkedList<T> linkedList = new LinkedList<T>(); public void push(T v) { linkedList.addFirst(v); } //返回容器的第一个元素,容器空报null public T peek() { return linkedList.peek(); } //删除并返回容器的头部,如果容器是空报null public T pop() { return linkedList.poll(); } public boolean empty() { return linkedList.isEmpty(); } public String toString() { return linkedList.toString(); } } ``` 使用我们自己创建的栈: ```java public class StackTest { public static void main(String[] args) { // TODO Auto-generated method stub Stack<String> stack = new Stack<>(); for (String string : new String[]{"a","b","c","d","e","f","g"}) { stack.push(string); } while (!stack.empty()) { System.out.println(stack.pop()); } } } ``` 执行的结果: ``` gfedcba ``` ## Set Set 不保存重复的元素。如果你试图将相同对象的多个实例添加到 Set 中,那么他会阻止这种重复现象。Set 最长用来测试归属性,可以很容易的询问某个对象是否在某个 Set 中。所以,查找成了 Set 中最重要的操作。通常我们会选择一个 HashSet 来实现。 Set 与 Collection 具有完全一样的接口,没有任何额外的功能。只是行为不同而已。 ```java public static void main(String[] args) { Random random = new Random(47); java.util.Set<Integer> integers = new HashSet<>(); for (int i = 0; i < 1000; i++) { integers.add(random.nextInt(10)); } System.out.println(integers); } ``` 执行结果: ``` [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` 我们循环 1000 次,但是你看到只有 10 个实例存在于容器中。这是因为 Set 将不会存储重复的数据。HashSet 是用散列的方式存储的数据,由于速度的原因我们看到了顺序排列,其实很多时候这些数据不是按照顺序排列的。TreeSet 不是散列排序。TreeSet 使用的是红黑树的数据结构。LinkedHashSet 也是用了散列但是同时使用了链表来维护插入速度。 我们使用 TreeSet 对数据排序: ```java public static void main(String[] args) { // TODO Auto-generated method stub SortedSet<Integer> set = new TreeSet<>(); Random random = new Random(47); for (int i = 0; i < 1000; i++) { set.add(random.nextInt(10)); } System.out.println(set); } ``` 执行结果: ``` [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` 还有一种最常见的操作,就是使用 contains() 测试 Set 的归属性。 ```java public class TreeSetTest { public static void main(String[] args) { // TODO Auto-generated method 
stub SortedSet<Integer> set = new TreeSet<>(); java.util.Set<Integer> integers = new HashSet<>(); Random random = new Random(47); for (int i = 0; i < 1000; i++) { integers.add(random.nextInt(20)); } System.out.println(integers); for (int i = 0; i < 1000; i++) { set.add(random.nextInt(10)); } System.out.println(set); System.out.println(integers.contains(10)); System.out.println(integers.containsAll(set)); } } ``` 执行结果: ``` [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] true true ```
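As a small supplement (a sketch, not from the original text): the difference between insertion order and sorted order described above can be seen directly by filling a LinkedHashSet and a TreeSet with the same elements. The class name `SetOrderDemo` is chosen here for illustration.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetOrderDemo {
    public static void main(String[] args) {
        Integer[] data = {30, 10, 20};
        // LinkedHashSet keeps insertion order.
        Set<Integer> linked = new LinkedHashSet<>(Arrays.asList(data));
        // TreeSet keeps ascending comparison order.
        Set<Integer> tree = new TreeSet<>(Arrays.asList(data));
        // HashSet order is unspecified and may vary.
        Set<Integer> hashed = new HashSet<>(Arrays.asList(data));
        System.out.println(linked); // [30, 10, 20]
        System.out.println(tree);   // [10, 20, 30]
        System.out.println(hashed);
    }
}
```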
# Semantic Versioning Changelog

## [1.1.11](https://github.com/iketari/preoccupyjs/compare/v1.1.10...v1.1.11) (2019-06-17)

### Bug Fixes

* **src/dom.ts:** Improve scroll action behaviour ([2192ce7](https://github.com/iketari/preoccupyjs/commit/2192ce7))

## [1.1.10](https://github.com/iketari/preoccupyjs/compare/v1.1.9...v1.1.10) (2019-06-12)

### Bug Fixes

* **dom.ts:** Fix bug with finding body element ([2bd3903](https://github.com/iketari/preoccupyjs/commit/2bd3903))

## [1.1.9](https://github.com/iketari/preoccupyjs/compare/v1.1.8...v1.1.9) (2019-06-11)

### Bug Fixes

* **src/*:** Export all neccessary classes to implement a Transport outside the library ([a1b6c35](https://github.com/iketari/preoccupyjs/commit/a1b6c35))

## [1.1.8](https://github.com/iketari/preoccupyjs/compare/v1.1.7...v1.1.8) (2019-04-16)

### Bug Fixes

* **package.json:** Rebuild ([057ba1d](https://github.com/iketari/preoccupyjs/commit/057ba1d))
* **RightClickToAction.ts:** Fix tslint ([952ab6e](https://github.com/iketari/preoccupyjs/commit/952ab6e))
* **RightClickToAction.ts:** Pass button field as a part of payload for a contextmenu event ([ca628cb](https://github.com/iketari/preoccupyjs/commit/ca628cb))

## [1.1.7](https://github.com/iketari/preoccupyjs/compare/v1.1.6...v1.1.7) (2019-02-05)

### Bug Fixes

* **transports/rxjs.ts:** Set up a universal interface to communicate with the server ([32fe7d6](https://github.com/iketari/preoccupyjs/commit/32fe7d6))

## [1.1.6](https://github.com/iketari/preoccupyjs/compare/v1.1.5...v1.1.6) (2019-01-28)

### Bug Fixes

* **package.json:** CI setting up ([cf5dd45](https://github.com/iketari/preoccupyjs/commit/cf5dd45))
# ⭐️ Easy Bigchain ⭐️

![Stars](https://img.shields.io/github/stars/knapsackt/easy-bigchain.svg?style=for-the-badge) ![Issues](https://img.shields.io/github/issues/knapsackt/easy-bigchain.svg?style=for-the-badge) ![MIT](https://img.shields.io/github/license/knapsackt/easy-bigchain.svg?style=for-the-badge)

> Helper library for BigChainDB tasks. Blockchain Database now with a high-level abstracted API. Blockchain, accessible to all!

[![HitCount](http://hits.dwyl.io/knapsackt/easy-bigchain.svg)](http://hits.dwyl.io/knapsackt/easy-bigchain)

---

## Installation 💻

```bash
$ npm install easy-bigchain --save
```

## Usage 👨‍💻

```js
import bigchain from 'easy-bigchain'
```

### 1. Connect to BigChainDB

```js
const connection = bigchain.connect(CONNECTION_STRING) // Default == https://test.bigchaindb.com/api/v1/
```

### 2. Generate KeyPairs for Users

```js
const user = bigchain.generateKeypair()
```

> OR

```js
const user = bigchain.generateKeypair(SEED_PHRASE)
```

### 3. Create Asset

```js
bigchain.createAsset(CONNECTION_OBJECT, ASSET, METADATA, USER, function(transaction) {
  // execute code
})
```

- **CONNECTION OBJECT**: Object returned from the _connect()_ function call
- **ASSET**: The main data object (immutable)
- **METADATA**: Additional information to be stored
- **USER**: Object returned from the _generateKeypair()_ function call

### 4. Transfer Asset

```js
bigchain.transferAsset(
  CONNECTION_OBJECT,
  TRANSACTION,
  METADATA,
  CURRENT_OWNER,
  NEW_OWNER,
  function(transaction) {
    // execute code
  }
)
```

- **CONNECTION OBJECT**: Object returned from the _connect()_ function call
- **TRANSACTION**: The original transaction from which the asset is to be transferred
- **METADATA**: Additional information to be stored
- **CURRENT OWNER and NEW OWNER**: Objects returned from the _generateKeypair()_ function call

#### Due to some unexpected behaviour of the _getTransaction()_ function, I have made use of the entire transaction itself. Not efficient, but it works until that issue is fixed.

---

## Contributing ✨

Please send a Pull Request with appropriate documentation and I would be more than happy to merge it! 😄
---
localization_priority: Normal
description: The Non-Owner Mailbox Access Report in the Exchange admin center (EAC) lists the mailboxes that have been accessed by someone other than the person who owns the mailbox. When a mailbox is accessed by a non-owner, Microsoft Exchange logs information about this action in a mailbox audit log that's stored as an email message in a hidden folder in the mailbox being audited. Entries from this log are displayed as search results and include a list of mailboxes accessed by a non-owner, who accessed the mailbox and when, the actions performed by the non-owner, and whether the action was successful. By default, entries in the mailbox audit log are retained for 90 days.
ms.topic: article
author: chrisda
ms.author: chrisda
ms.assetid: dbbef170-e726-4735-abf1-2857db9bb52d
ms.date: 6/23/2018
ms.reviewer:
title: Run a non-owner mailbox access report
ms.collection:
- exchange-online
- M365-email-calendar
audience: ITPro
ms.service: exchange-online
manager: dansimp
---

# Run a non-owner mailbox access report

The Non-Owner Mailbox Access Report in the Exchange admin center (EAC) lists the mailboxes that have been accessed by someone other than the person who owns the mailbox. When a mailbox is accessed by a non-owner, Microsoft Exchange logs information about this action in a mailbox audit log that's stored as an email message in a hidden folder in the mailbox being audited. Entries from this log are displayed as search results and include a list of mailboxes accessed by a non-owner, who accessed the mailbox and when, the actions performed by the non-owner, and whether the action was successful. By default, entries in the mailbox audit log are retained for 90 days.

When you enable mailbox audit logging for a mailbox, Microsoft Exchange logs specific actions by non-owners, including both administrators and users, called delegated users, who have been assigned permissions to a mailbox. You can also narrow the search to users inside or outside your organization.

## What do you need to know before you begin? <a name="beforeyoubegin"> </a>

- Estimated time to complete: 5 minutes.

- You need to be assigned permissions before you can perform this procedure or procedures. To see what permissions you need, see the "Mailbox audit logging" entry in the [Messaging policy and compliance permissions](https://technet.microsoft.com/library/ec4d3b9f-b85a-4cb9-95f5-6fc149c3899b.aspx) topic.

- For information about keyboard shortcuts that may apply to the procedures in this topic, see [Keyboard shortcuts for the Exchange admin center](../../accessibility/keyboard-shortcuts-in-admin-center.md).

> [!TIP]
> Having problems? Ask for help in the Exchange forums. Visit the forums at [Exchange Online](https://go.microsoft.com/fwlink/p/?linkId=267542) or [Exchange Online Protection](https://go.microsoft.com/fwlink/p/?linkId=285351).

## Enable mailbox audit logging <a name="beforeyoubegin"> </a>

You have to enable mailbox audit logging for each mailbox that you want to run a non-owner mailbox access report for. If mailbox audit logging isn't enabled, you won't get any results when you run a report.

To enable mailbox audit logging for a single mailbox, run the following command in Exchange Online PowerShell.

```
Set-Mailbox <Identity> -AuditEnabled $true
```

For example, to enable mailbox auditing for a user named Florence Flipo, run the following command.

```
Set-Mailbox "Florence Flipo" -AuditEnabled $true
```

To enable mailbox auditing for all user mailboxes in your organization, run the following commands.

```
$UserMailboxes = Get-mailbox -Filter {(RecipientTypeDetails -eq 'UserMailbox')}
```

```
$UserMailboxes | ForEach {Set-Mailbox $_.Identity -AuditEnabled $true}
```

### How do you know this worked?

Run the following command to verify that you've successfully configured mailbox audit logging.

```
Get-Mailbox | Format-List Name,AuditEnabled
```

A value of `True` for the _AuditEnabled_ property verifies that audit logging is enabled.

## Run a non-owner mailbox access report <a name="runreport"> </a>

1. In the EAC, navigate to **Compliance Management** \> **Auditing**.

2. Click **Run a non-owner mailbox access report**. By default, Microsoft Exchange runs the report for non-owner access to any mailboxes in the organization over the past two weeks. The mailboxes listed in the search results have been enabled for mailbox audit logging.

3. To view non-owner access for a specific mailbox, select the mailbox from the list of mailboxes. View the search results in the details pane.

> [!TIP]
> Want to narrow the search results? Select the start date, end date, or both, and select specific mailboxes to search. Click **Search** to re-run the report.

### Search for specific types of non-owner access

You can also specify the type of non-owner access, also called the logon type, to search for. Here are your options:

- **All non-owners**: Search for access by administrators and delegated users inside your organization. Also includes access by users outside of your organization.

- **External users**: Search for access by users outside of your organization.

- **Administrators and delegated users**: Search for access by administrators and delegated users inside your organization.

- **Administrators**: Search for access by administrators in your organization.

### How do you know this worked?

To verify that you've successfully run a non-owner mailbox access report, check the search results pane. Mailboxes that you ran the report for are displayed in this pane. If there are no results for a specific mailbox, it's possible there hasn't been access by a non-owner or that non-owner access hasn't taken place within the specified date range. As previously described, be sure to verify that audit logging has been enabled for the mailboxes you want to search for access by non-owners.

## What gets logged in the mailbox audit log? <a name="whatislogged"> </a>

When you run a non-owner mailbox access report, entries from the mailbox audit log are displayed in the search results in the EAC. Each report entry contains this information:

- Who accessed the mailbox and when

- The actions performed by the non-owner

- The affected message and its folder location

- Whether the action was successful

The following table lists the actions performed by non-owners that can be logged by mailbox audit logging. In the table, a **Yes** indicates that the action can be logged for that logon type, and a **No** indicates that an action can't be logged. An asterisk ( **\*** ) indicates that the action is logged by default when mailbox audit logging is enabled for the mailbox. If you want to track actions that aren't logged by default, you have to use PowerShell to enable logging of those actions.

> [!NOTE]
> An administrator who has been assigned the Full Access permission to a user's mailbox is considered a delegated user.

|**Action**|**Description**|**Administrator**|**Delegated user**|
|:-----|:-----|:-----|:-----|
|**Copy**|A message was copied to another folder.|Yes|No|
|**Create**|An item is created in the Calendar, Contacts, Notes, or Tasks folder in the mailbox; for example, a new meeting request is created. Note that message or folder creation isn't audited.|Yes\*|Yes\*|
|**FolderBind**|A mailbox folder was accessed.|Yes\*|Yes|
|**Hard-delete**|A message was purged from the Recoverable Items folder.|Yes\*|Yes\*|
|**MessageBind**|A message was viewed in the preview pane or opened.|Yes|No|
|**Move**|A message was moved to another folder.|Yes\*|Yes|
|**Move To Deleted Items**|A message was moved to the Deleted Items folder.|Yes\*|Yes|
|**Send as**|A message was sent using SendAs permission. This means another user sent the message as though it came from the mailbox owner.|Yes\*|Yes\*|
|**Send on behalf of**|A message was sent using SendOnBehalf permission. This means another user sent the message on behalf of the mailbox owner. The message will indicate to the recipient who the message was sent on behalf of and who actually sent the message.|Yes\*|Yes|
|**Soft-delete**|A message was deleted from the Deleted Items folder.|Yes\*|Yes\*|
|**Update**|A message was changed.|Yes\*|Yes\*|

> [!NOTE]
> <sup>\*</sup> Audited by default if auditing is enabled for a mailbox.
# Certificados

Technology certificates earned over time.
---
author: trevorbye
ms.service: cognitive-services
ms.topic: include
ms.date: 03/09/2020
ms.author: trbye
ms.openlocfilehash: e9b980dcbe42694137c7b29ee3bddaa7802db3da
ms.sourcegitcommit: 9514d24118135b6f753d8fc312f4b702a2957780
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 01/07/2021
ms.locfileid: "97978839"
---
To configure the Speech SDK to accept compressed audio input, create a `PullAudioInputStream` or `PushAudioInputStream`. Then create an `AudioConfig` from an instance of your stream class, specifying the compression format of the stream. Suppose you have an input stream class named `pullStream` and are using OPUS/OGG. Your code might look like this:

```java
import com.microsoft.cognitiveservices.speech.audio.AudioConfig;
import com.microsoft.cognitiveservices.speech.audio.AudioInputStream;
import com.microsoft.cognitiveservices.speech.audio.AudioStreamFormat;
import com.microsoft.cognitiveservices.speech.audio.PullAudioInputStream;
import com.microsoft.cognitiveservices.speech.internal.AudioStreamContainerFormat;

// ... omitted for brevity

SpeechConfig speechConfig = SpeechConfig.fromSubscription(
    "YourSubscriptionKey",
    "YourServiceRegion");

// Create an audio config specifying the compressed
// audio format and the instance of your input stream class.
AudioStreamFormat audioFormat = AudioStreamFormat.getCompressedFormat(
    AudioStreamContainerFormat.OGG_OPUS);
AudioConfig audioConfig = AudioConfig.fromStreamInput(
    pullStream,
    audioFormat);

SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioConfig);
SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();

String text = result.getText();
```
![](https://github.com/KushajveerSingh/blog/workflows/CI/badge.svg)
![](https://github.com/KushajveerSingh/blog/workflows/GH-Pages%20Status/badge.svg)

https://KushajveerSingh.github.io/blog/
### [9 Cancelling asynchronous operations: cancellable callback and Promise in JavaScript](https://www.youtube.com/watch?v=T8fXlnqI4Ws)

#### https://github.com/HowProgrammingWorks/Cancelable
---
layout: "docs"
page_title: "vagrant ssh - Command-Line Interface"
sidebar_current: "cli-ssh"
description: |-
  The "vagrant ssh" command is used to establish an SSH session into a
  running virtual machine to give you shell access.
---

# SSH

**Command: `vagrant ssh [name|id] [-- extra_ssh_args]`**

This will SSH into a running Vagrant machine and give you access to a shell.

In a simple Vagrant project with a single machine, the instance created is named `default`, and Vagrant will SSH into it without requiring the instance name:

```bash
$ vagrant ssh
Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2
$ logout
Connection to 127.0.0.1 closed.
```

Or you could use the name:

```bash
$ vagrant ssh default
Welcome to your Vagrant-built virtual machine.
Last login: Fri Jul 20 15:09:52 2018 from 10.0.2.2
$ logout
Connection to 127.0.0.1 closed.
$
```

On multi-machine setups, you can log in to each VM using the name as displayed by `vagrant status`:

```bash
$ vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state.

$ vagrant ssh node1
Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2
vagrant@precise64:~$ logout
Connection to 127.0.0.1 closed.
$ vagrant ssh node2
Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2
vagrant@precise64:~$ logout
Connection to 127.0.0.1 closed.
$
```

On a system with machines running from different projects, you can use the id as listed in `vagrant global-status`:

```bash
$ vagrant global-status
id       name    provider   state    directory
-----------------------------------------------------------------------
13759ff  node1   virtualbox running  /Users/user/vagrant/folder

The above shows information about all known Vagrant environments on this
machine. This data is cached and may not be completely up-to-date (use
"vagrant global-status --prune" to prune invalid entries). To interact
with any of the machines, you can go to that directory and run Vagrant,
or you can use the ID directly with Vagrant commands from any directory.

$ vagrant ssh 13759ff
Welcome to your Vagrant-built virtual machine.
Last login: Fri Jul 20 15:19:36 2018 from 10.0.2.2
vagrant@precise64:~$ logout
Connection to 127.0.0.1 closed.
$
```

If a `--` (two hyphens) is found on the command line, any arguments after it are passed directly to the `ssh` executable. This allows you to pass arbitrary options or commands, such as setting up reverse tunneling, directly to the `ssh` program.

## Options

* `-c COMMAND` or `--command COMMAND` - This executes a single SSH command, prints out the stdout and stderr, and exits.

* `-p` or `--plain` - This does an SSH without authentication, leaving authentication up to the user.

## SSH client usage

Vagrant will attempt to use the local SSH client installed on the host machine. On POSIX machines, an SSH client must be installed and available on the PATH.

For Windows installations, an SSH client is provided within the installer image. If no SSH client is found on the current PATH, Vagrant will use the SSH client it provided.

Depending on the local environment used for running Vagrant, the installer-provided SSH client may not work correctly. For example, when using a cygwin or msys2 shell, the SSH client will fail to work as expected when run interactively. Installing the SSH package built for the current working environment will resolve this issue.

## Background Execution

If the command you specify runs in the background (such as appending a `&` to a shell command), it will be terminated almost immediately. This is because when Vagrant executes the command, it executes it within the context of a shell, and when the shell exits, all of its child processes also exit.

To avoid this, you will need to detach the process from the shell. Consult your shell's documentation for how to do this; one common method is the `nohup` command.
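To illustrate the detachment, the sketch below uses a stand-in `sleep` in place of a real long-running task (the `vagrant ssh -c` line in the comment is the intended real-world shape, not something the script runs). An inner shell starts the task with `nohup` and I/O redirection, then exits, mimicking the SSH session ending:

```shell
#!/bin/sh
# Real-world shape (not executed here):
#   vagrant ssh -c "nohup long_task > /tmp/task.log 2>&1 &"
# Local stand-in: the inner shell exits immediately, like the SSH session would.
sh -c 'nohup sleep 5 >/dev/null 2>&1 & echo $! > /tmp/bg.pid'

# The shell that launched the task is gone, but the task lives on:
if kill -0 "$(cat /tmp/bg.pid)" 2>/dev/null; then
  echo "still running"
else
  echo "terminated"
fi
```

In the SSH case, without `nohup` and the redirection, the backgrounded process would typically be killed as soon as the `-c` shell exits.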
32.44186
82
0.735484
eng_Latn
0.997786
7603821302a5390d54a5541e073d1c554b3062c4
1,057
md
Markdown
source/manual/fall-back-to-aws-cloudfront.html.md
ostree/govuk-developer-docs
0d3f514fb6f6ef8415de75ab59870073131d28ec
[ "MIT" ]
null
null
null
source/manual/fall-back-to-aws-cloudfront.html.md
ostree/govuk-developer-docs
0d3f514fb6f6ef8415de75ab59870073131d28ec
[ "MIT" ]
null
null
null
source/manual/fall-back-to-aws-cloudfront.html.md
ostree/govuk-developer-docs
0d3f514fb6f6ef8415de75ab59870073131d28ec
[ "MIT" ]
1
2020-09-24T10:18:23.000Z
2020-09-24T10:18:23.000Z
---
owner_slack: "#re-govuk"
title: Fall back to AWS CloudFront
section: Deployment
layout: manual_layout
parent: "/manual.html"
last_reviewed_on: 2020-07-01
review_in: 3 months
---

There is a backup Content Delivery Network (CDN) that can be used if Fastly is down. This backup CDN is currently provided by AWS CloudFront.

**Important** At the time of writing, GOV.UK will be served by the static mirrors, not the origin of GOV.UK, if you fall back to the backup CDN.

## Procedure

You will have to make two DNS changes to GOV.UK:

1. Ask JISC (very few people are authorised to make such changes) to change the CNAME of `www.gov.uk` to the `www` AWS CloudFront distribution domain. You can log into the AWS web console to find the `www` AWS CloudFront distribution domain.
2. Change the CNAME of `assets.publishing.service.gov.uk` to the `assets` AWS CloudFront distribution domain using the usual GOV.UK processes. You can log into the AWS web console to find the `assets` AWS CloudFront distribution domain.
37.75
96
0.762535
eng_Latn
0.993949
7603f769ada69e976bb2eeffac83365039a4ea28
24,333
md
Markdown
UG103.6/UG103.6.md
shanyaoxing12/emberznet-doc
4453c89af032e5d48783896e1d3acabd931636fb
[ "MIT" ]
22
2019-12-09T07:19:12.000Z
2021-12-15T09:23:12.000Z
UG103.6/UG103.6.md
shanyaoxing12/emberznet-doc
4453c89af032e5d48783896e1d3acabd931636fb
[ "MIT" ]
null
null
null
UG103.6/UG103.6.md
shanyaoxing12/emberznet-doc
4453c89af032e5d48783896e1d3acabd931636fb
[ "MIT" ]
10
2020-03-26T06:49:43.000Z
2021-11-11T14:05:07.000Z
# Bootloading Fundamentals (Rev. 1.3) <!-- omit in toc -->

This document introduces bootloading for Silicon Labs networking devices. It summarizes the differences between the Silicon Labs Gecko Bootloader and the legacy Ember bootloaders, and discusses their applicability by platform. It describes the concepts of standalone and application bootloading and discusses their relative strengths and weaknesses. It also looks at the design and implementation details of each method. Finally, it describes the bootloader file formats.

## Table of Contents <!-- omit in toc -->

- [1. Introduction](#1-introduction)
  - [1.1 Standalone Bootloading](#11-standalone-bootloading)
  - [1.2 Application Bootloading](#12-application-bootloading)
- [2. About the Gecko Bootloader](#2-about-the-gecko-bootloader)
  - [2.1 Features](#21-features)
    - [2.1.1 Field-Updateable](#211-field-updateable)
    - [2.1.2 Secure Boot](#212-secure-boot)
    - [2.1.3 Signed GBL Firmware Update Image Files](#213-signed-gbl-firmware-update-image-files)
    - [2.1.4 Encrypted GBL Firmware Update Image Files](#214-encrypted-gbl-firmware-update-image-files)
  - [2.2 Applicability](#22-applicability)
- [3. Memory Space for Bootloading](#3-memory-space-for-bootloading)
  - [3.1 Gecko Bootloader](#31-gecko-bootloader)
  - [3.2 Legacy Ember Bootloaders](#32-legacy-ember-bootloaders)
- [4. Design Decisions](#4-design-decisions)
  - [4.1 Gecko Bootloader](#41-gecko-bootloader)
  - [4.2 Legacy Ember Bootloaders](#42-legacy-ember-bootloaders)
- [5. Bootload File Formats](#5-bootload-file-formats)
  - [5.1 Gecko Bootload (GBL) Files](#51-gecko-bootload-gbl-files)
    - [5.1.1 File Format](#511-file-format)
      - [5.1.1.1 File Structure](#5111-file-structure)
      - [5.1.1.2 Plaintext Tag Descriptions](#5112-plaintext-tag-descriptions)
      - [5.1.1.3 Encrypted Tag Descriptions](#5113-encrypted-tag-descriptions)
    - [5.1.2 Image Validation](#512-image-validation)
  - [5.2 Ember Bootload (EBL) Files](#52-ember-bootload-ebl-files)
    - [5.2.1 Basic File Format](#521-basic-file-format)
      - [5.2.1.1 Unencrypted Tag Descriptions](#5211-unencrypted-tag-descriptions)
      - [5.2.1.2 Data Validation](#5212-data-validation)
    - [5.2.2 Encrypted Ember Bootload File Format](#522-encrypted-ember-bootload-file-format)
      - [5.2.2.1 Encrypted Tag Descriptions](#5221-encrypted-tag-descriptions)
      - [5.2.2.2 Nonce Generation](#5222-nonce-generation)
      - [5.2.2.3 Image Validation](#5223-image-validation)

# 1. Introduction

A bootloader is a program stored in reserved flash memory that can initialize a device, update firmware images, and possibly perform some integrity checks. Firmware image updates are performed on demand, either over serial communication or over the air. Production-level programming is typically done during product manufacturing, but it is sometimes desirable to be able to reprogram the device after production is complete. More importantly, this makes it possible to update deployed devices with firmware containing new features and bug fixes.

Silicon Labs supports devices that do not use a bootloader, but this requires external hardware (such as a Debug Adapter — the Silicon Labs ISA3 or WSTK — or a third-party SerialWire/JTAG programming device) to update the firmware. Devices without a bootloader cannot be updated over the air after deployment, so Silicon Labs strongly recommends using a bootloader.

In March 2017, Silicon Labs introduced the Gecko Bootloader, a code library configurable through the Simplicity Studio IDE that is used to generate bootloaders that can be used with a variety of Silicon Labs protocol stacks. The Gecko Bootloader can be used with EFR32MG1/EFR32BG1 (EFR32xG1) and EFR32xG1 + Flash. Beginning with EFR32MG12/EFR32BG12/EFR32FG12 (EFR32xG12), all future Mighty Gecko, Flex Gecko, and Blue Gecko parts will use the Gecko Bootloader only. Legacy Ember bootloaders for specific protocols (such as EmberZNet PRO) and specific platforms (EM3x) will continue to be provided. Support for the legacy Bluetooth bootloaders was removed in the Bluetooth SDK version 2.7.0 release in December 2017.

The Gecko Bootloader and the legacy Ember bootloaders use custom update image file formats, discussed further in [5. Bootload File Formats](#5-bootload-file-formats). Application bootloaders generated by the Gecko Bootloader use GBL (Gecko BootLoader) update image files, while the legacy Ember bootloaders use EBL (Ember BootLoader) files.

Firmware update images can be bootloaded in two ways. The first is over the air (OTA), that is, over the wireless network, as shown in the following figure.

<div align=center title="Figure 1.1. OTA Bootloading Use Case"><img src="./Figure/Figure1.1.png" alt="Figure 1.1. OTA Bootloading Use Case"/></div>

The second is through the device's hardwired link. The following figure shows the serial bootloading use cases for an SoC (using UART, SPI, or USB) and an NCP (using UART or SPI).

<div align=center title="Figure 1.2. Serial Bootloaders Use Cases"><img src="./Figure/Figure1.2.png" alt="Figure 1.2. Serial Bootloaders Use Cases"/></div>

Silicon Labs networking devices use bootloaders in two different modes to perform firmware updates: standalone bootloading and application bootloading. Application bootloading is further divided into models using external storage and local storage to hold the downloaded update image. These two bootloader types are discussed in the next two sections.

The firmware update scenarios described in this document assume that the source node (the device sending the firmware image to the target over a serial or OTA link) obtains the new firmware by some other means. For example, if a device on the local Zigbee network is connected to an Ethernet gateway, that device can acquire or receive these firmware updates over the Internet. This necessary part of the firmware update process is system-dependent and beyond the scope of this document.

## 1.1 Standalone Bootloading

A standalone bootloader is a program that uses an external communication interface, such as UART or SPI, to acquire an application image. Standalone firmware update is a single-stage process that allows an application image to be placed into flash, overwriting the existing application image, without the participation of the application itself. There is little interaction between the standalone bootloader and the application running in flash. Typically, the only time the application interacts with the bootloader is when it requests a reboot into the bootloader. Once the bootloader is running, it receives bootload packets containing the (new) firmware image over a physical connection (such as UART or SPI) or over the air.

When the firmware update process is initiated, the new code overwrites the existing protocol stack and application code. If any errors occur during this process, the code cannot be recovered and the process must be restarted. For more information about the legacy standalone bootloaders, see **AN760: Using the Ember Standalone Bootloader**. For information on configuring the Gecko Bootloader as a standalone bootloader, see **UG266: Silicon Labs Gecko Bootloader User Guide**.

## 1.2 Application Bootloading

An application bootloader begins the firmware update process after the running application has downloaded the update image file. The application bootloader expects the image to reside in accessible external memory, or in main flash if the chip has enough memory to support this local storage model.

The application bootloader relies on the application to acquire the new firmware image. The application can download the image in any convenient way (UART, OTA, and so on), but it must store the image in a region called the download space. The download space is typically an external memory device, such as an EEPROM or dataflash, but it can also be a portion of the chip's internal flash when using the local storage variant of the application bootloader. Once the new image is stored, the application bootloader is invoked to validate the new image and copy it from the download space into flash.

Because the application bootloader does not participate in acquiring the image, and the image is downloaded before the firmware update process begins, download errors do not negatively affect the running image. The download process can be restarted or suspended at any time. The integrity of the downloaded image can be verified before the firmware update process starts, preventing a corrupt or non-functional image from being applied.

The legacy Ember application bootloader provides UART standalone bootloading capability as a recovery mechanism in case both the running application image and the upgrade image are corrupted. The Gecko Bootloader can be configured to accept a list of multiple upgrade images to attempt to verify and apply. This allows the Gecko Bootloader to store a backup copy of the update image that it can access if the first image is corrupted.

Note that the EmberZNet and Silicon Labs Thread NCP platforms do not use application bootloaders, because the application code resides on the host rather than directly on the NCP. Instead, a device acting as a serial coprocessor uses a standalone bootloader designed to accept code over the same serial interface used by the normal NCP firmware. The host application, however, may use any suitable bootloading scheme. The Silicon Labs Bluetooth NCP can use the legacy OTA DFU bootloader.

For more information about application bootloaders, see **UG266: Silicon Labs Gecko Bootloader User's Guide** and **AN772: Using the Ember Application Bootloader**.

# 2. About the Gecko Bootloader

The Silicon Labs Gecko Bootloader is a configurable code library that can be used with all the newer Silicon Labs Gecko MCUs and wireless MCUs. It uses update image files in the GBL format. The Gecko Bootloader uses a two-stage design in which a minimal first stage bootloader is used to update the main bootloader. This allows for field updates of the main bootloader, including adding new capabilities, changing communication protocols, adding new security features, and applying fixes. The Gecko Bootloader consists of three components:

* Core: contains the main functionality of both bootloader stages. It also contains functionality for writing to internal main flash, performing a bootloader update, and resetting into the application (marking the applicable reset reason).
* Driver: different bootloading applications require different hardware drivers for use by the other components of the bootloader.
* Plugin: all optional or configurable pieces of the main bootloader are implemented as plugins. Each plugin has a generic header file and one or more implementations. The current release contains plugins for functionality such as UART and SPI communication protocols, SPI flash storage, internal flash storage, and different cryptographic operations.

## 2.1 Features

Gecko Bootloader features include:

* Field-updateable
* Secure boot
* Signed GBL firmware update image files
* Encrypted GBL firmware update image files

These features are summarized in the following subsections and described in detail in **UG266: Silicon Labs Gecko Bootloader User Guide**. Protocol-specific information on using the Gecko Bootloader is provided in the following documents:

* **AN1084: Using the Gecko Bootloader with EmberZNet and Silicon Labs Thread**
* **AN1085: Using the Gecko Bootloader with Silicon Labs Connect**
* **AN1086: Using the Gecko Bootloader with Silicon Labs Bluetooth Applications**

### 2.1.1 Field-Updateable

The two-stage design (first stage and main stage) allows the bootloader firmware to be updated in the field. The minimal first stage of the bootloader is not field-updateable; it can only update the main bootloader by reading from and writing to fixed addresses in internal flash. To perform a main bootloader update, the running main bootloader verifies the integrity and authenticity of the bootloader update image, writes the bootloader update image to a fixed location in internal flash, and issues a reboot into the first stage bootloader. The first stage bootloader verifies the integrity of the main bootloader update image before copying it to the main bootloader location.

### 2.1.2 Secure Boot

Secure boot is designed to prevent an untrusted image from running on the device. When secure boot is enabled, the bootloader uses asymmetric cryptography to enforce cryptographic signature verification of the application image on every boot. The signature algorithm used is ECDSA-P256-SHA256. The public key is written to the device during manufacturing, while the private key is kept secret. This ensures that the application was created and signed by a trusted party.

### 2.1.3 Signed GBL Firmware Update Image Files

In addition to secure boot, the Gecko Bootloader supports enforcing cryptographic signature verification of the update image file. This allows the bootloader and application to verify that an application or bootloader update comes from a trusted source before starting the update process. The signature algorithm used is ECDSA-P256-SHA256. The public key is the same as the secure boot key, written to the device during manufacturing, while the private key is never distributed. This ensures that the GBL file was created and signed by a trusted party.

### 2.1.4 Encrypted GBL Firmware Update Image Files

The GBL update file can also be encrypted to prevent eavesdroppers from acquiring the plaintext firmware image. The encryption algorithm used is AES-CTR-128, and the encryption key is written to the device during manufacturing.

## 2.2 Applicability

The following table shows which bootloaders can be used with the different platforms.

<table>
<tr>
<th>Platform</th>
<th>Gecko Bootloader</th>
<th>Legacy Ember Bootloaders</th>
<th>Legacy Bluetooth Bootloaders</th>
</tr>
<tr>
<td>EM3x</td>
<td>No</td>
<td>Yes</td>
<td>*</td>
</tr>
<tr>
<td>EFR32MG1,<br>EFR32MG1+Flash</td>
<td>Yes</td>
<td>No**</td>
<td>*</td>
</tr>
<tr>
<td>EFR32BG1</td>
<td>Yes</td>
<td>No</td>
<td>*</td>
</tr>
<tr>
<td>EFR32BG1+Flash</td>
<td>Yes</td>
<td>No</td>
<td>*</td>
</tr>
<tr>
<td>EFR32FG1</td>
<td>Yes</td>
<td>No**</td>
<td>*</td>
</tr>
<tr>
<td>EFR32MG12, <br>EFR32BG12,<br>EFR32FG12</td>
<td>Yes</td>
<td>No</td>
<td>*</td>
</tr>
<tr>
<td>Future products</td>
<td>Yes</td>
<td>No</td>
<td>*</td>
</tr>
<tr>
<td colspan="4">* Support for the legacy Bluetooth Bootloaders was removed in the Bluetooth SDK version 2.7.0 release.<br>** Support for these platforms was deprecated in EmberZNet SDK version 6.1.0 and Silicon Labs Thread version 2.5.0.</td>
</tr>
</table>

# 3. Memory Space for Bootloading

## 3.1 Gecko Bootloader

The first stage of the bootloader occupies a single flash page. On devices with 2 kB flash pages, such as the EFR32MG1, this means the first stage requires 2 kB.

The size of the main bootloader depends on the functionality required. In a typical bootloader configuration, the main bootloader occupies 14 kB of flash, bringing the total bootloader size to 16 kB. Silicon Labs recommends reserving at least 16 kB for the bootloader.

On EFR32xG1 devices (the Mighty Gecko, Flex Gecko, and Blue Gecko families), the bootloader resides in main flash:

* First stage bootloader @ 0x0
* Main bootloader @ 0x800
* Application @ 0x4000

On newer devices (EFR32xG12 and later), the bootloader resides in the bootloader area of the information block:

* Application @ 0x0
* First stage bootloader @ 0x0FE10000
* Main bootloader @ 0x0FE10800

## 3.2 Legacy Ember Bootloaders

The following figure shows the memory map of a typical Silicon Labs mesh networking SoC or NCP.

<div align=center title="Figure 3.1. Typical Silicon Labs Mesh Networking Devices’ Memory Map"><img src="./Figure/Figure3.1.png" alt="Figure 3.1. Typical Silicon Labs Mesh Networking Devices’ Memory Map"/></div>

For each Silicon Labs mesh networking platform (in the SoC or NCP use case), one flash block at the start of main flash (typically 8 kB or 16 kB, depending on the IC used) is reserved to hold the bootloader, and one flash block at the end of flash (between 4 kB and 36 kB, depending on the implementation) is reserved for simulated EEPROM. In all cases except the local storage application bootloader, the remaining memory is unreserved and available to hold the network protocol stack and application code.

# 4. Design Decisions

Which type of bootloader to deploy depends on many factors. Note that platform type and available flash may constrain the choice of bootloader.

Some of the relevant questions are:

* Where does the device get the new update image? Is it transferred over the air via the networking protocol, or over a separate interface connected to the Internet?
* Does the device have an external memory chip for storing a new update image? If not, is there enough internal flash to store both copies (current and newly downloaded) of the largest expected application image?
* If the device receives the new image over the air, is it multiple hops away from the server holding the download image?
* What kind of image security is required?
* Which communication driver will be used (in the single-protocol case)?
* Does the use case require multiple protocols?

## 4.1 Gecko Bootloader

The configurable design of the Gecko Bootloader platform means that developers can create bootloaders to fit nearly any design choice. For details, see **UG266: Silicon Labs Gecko Bootloader User's Guide**.

## 4.2 Legacy Ember Bootloaders

The following table shows the different legacy Ember bootloader types and the features they support.

<table>
<tr>
<th>Features</th>
<th>Application-bootloader</th>
<th>Secure-application-bootloader</th>
<th>Local-storage-bootloader</th>
<th>Secure-local-storage-bootloader</th>
<th>Standalone-bootloader</th>
<th>Standalone-OTA-bootloader</th>
</tr>
<tr>
<td>Serial Link Download</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Over-the-air Image Transfer without Application Running</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Yes</td>
</tr>
<tr>
<td>Application runs while downloading new image</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Can be used in a multi-hop deployment</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Supports Encrypted Ember Bootloader Files (EBL)</td>
<td></td>
<td>Yes</td>
<td></td>
<td>Yes</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Bootload Failures can be recovered by loading stored image</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Requires External Storage</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>On-chip Flash Requirements</td>
<td>EM34x/5x: 8 kB<br><br>EM358x/9x: 16 kB<br><br>EFR32: not supported**</td>
<td>EM34x/5x: 8 kB<br><br>EM358x/9x: 16 kB<br><br>EFR32: not supported**</td>
<td>EM34x/5x: not supported<br><br>EM358x/9x: 16 kB + 246 kB*<br><br>EFR32: not supported**</td>
<td>EM34x/5x: not supported<br><br>EM358x/9x: 16 kB + 246 kB*<br><br>EFR32: not supported**</td>
<td>EM34x/5x: 8 kB<br><br>EM358x/9x: 16 kB<br><br>EFR32: not supported**</td>
<td>EM34x/5x: 8 kB<br><br>EM358x/9x: 16 kB<br><br>EFR32: not supported**</td>
</tr>
<tr>
<td>EM34x, EM351</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
<td></td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>EM357, EM3581,<br>EM3582<br><br>(192 kB &amp; 256 kB parts)</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
<td></td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>EM3585, EM3586,<br>EM3587, EM3588<br><br>(512 kB parts)</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>EFR32 <br><br>(128 kB and 256 kB parts)</td>
<td>Not supported**</td>
<td>Not supported**</td>
<td></td>
<td></td>
<td>Not supported**</td>
<td></td>
</tr>
<tr>
<td colspan="7"><span style="font-style:italic">* </span>The local storage can be configured to use more or less on-chip space for storage. 246 kB is a recommended amount based on a single, average-sized image kept on a 512 kB part. Individual application needs may vary. The actual bootloader is 16 kB.<br><br><span style="font-style:italic">** </span>Use the Gecko Bootloader to create a similar configuration for these platforms.</td>
</tr>
</table>

# 5. Bootload File Formats

The bootload file formats described in this section are generated by Simplicity Commander commands. For more information, see **UG162: Simplicity Commander Reference Guide**.

## 5.1 Gecko Bootload (GBL) Files

The Gecko Bootloader uses the GBL file format.

### 5.1.1 File Format

#### 5.1.1.1 File Structure

The GBL file format is composed of a number of tags that indicate the format of the data that follows and the length of the entire tag. Tags have the following format:

| Tag ID  | Tag Length | Tag Payload                        |
|:-------:|:----------:|:----------------------------------:|
| 4 bytes | 4 bytes    | Variable (according to tag length) |

#### 5.1.1.2 Plaintext Tag Descriptions

| Tag Name                 | ID                       | Description |
|--------------------------|:------------------------:|-------------|
| GBL Header Tag           | 0x03A617EB               | This must be the first tag in the file. The header tag contains the version number of the GBL file specification, and flags indicating the type of GBL file - whether it is signed or encrypted. |
| GBL Application Info Tag | 0xF40A0AF4               | This tag contains information about the application update image that is contained in this GBL file. |
| GBL Bootloader Tag       | 0xF50909F5               | This tag contains a complete bootloader update image. |
| GBL Program Data Tag     | 0xFE0101FE or 0xFD0303FD | This tag contains information about what application data to program at a specific address into the main flash memory. |
| GBL Metadata Tag         | 0xF60808F6               | This tag contains metadata that the bootloader does not parse, but can be returned to the application through a callback. |
| GBL Signature Tag        | 0xF70A0AF7               | This tag contains the ECDSA-P256 signature of all preceding data in the file. |
| GBL End Tag              | 0xFC0404FC               | This tag indicates the end of the GBL file. It contains a 32-bit CRC for the entire file as an integrity check. The CRC is a non-cryptographic check. This must be the last tag. |

The sequence of GBL tags allowed in a GBL file is shown in the following figure.

<div align=center title="Figure 5.1. GBL Tag Sequence"><img src="./Figure/Figure5.1.png" alt="Figure 5.1. GBL Tag Sequence"/></div>

#### 5.1.1.3 Encrypted Tag Descriptions

The encrypted GBL file format is similar to the plaintext version. It introduces a number of new tags.

| Tag Name                   | ID         | Description |
|----------------------------|:----------:|-------------|
| GBL Header Tag             | 0x03A617EB | The GBL header is the same as for a plaintext GBL file, but the flag indicating that the GBL file is encrypted must be set. |
| GBL Encryption Init Header | 0xFA0606FA | This contains information about the image encryption such as the Nonce and the amount of encrypted data. |
| GBL Encrypted Program Data | 0xF90707F9 | This contains an encrypted payload containing a plaintext GBL tag, one of Application Info, Bootloader, Metadata or Program Data. The data is encrypted using AES-CTR-128. |

The sequence of GBL tags allowed in an encrypted GBL file is shown in the following figure.

<div align=center title="Figure 5.2. Encrypted GBL Tag Sequence"><img src="./Figure/Figure5.2.png" alt="Figure 5.2. Encrypted GBL Tag Sequence"/></div>

### 5.1.2 Image Validation

The optional GBL signature tag can be used to ensure the authenticity of the GBL file. Silicon Labs strongly recommends configuring the bootloader to only accept signed GBL files.

## 5.2 Ember Bootload (EBL) Files

All Ember bootloaders require the images they process to be in the EBL file format.

### 5.2.1 Basic File Format

The EBL file format is composed of a number of tags that indicate the format of the data that follows and the length of the entire tag. Tags have the following format:

| Tag ID  | Tag Length | Tag Payload                        |
|:-------:|:----------:|:----------------------------------:|
| 2-bytes | 2-bytes    | Variable (according to tag length) |

Details of the tag format can be found in these header files:

| Platform     | Header Filename                    |
|--------------|------------------------------------|
| EM3x Series  | `/micro/cortexm3/bootloader/ebl.h` |
| EFR32 Series | Not supported.                     |

#### 5.2.1.1 Unencrypted Tag Descriptions

The following table lists the tags for unencrypted EBL images.

| Tag Name                     | ID     | Description |
|------------------------------|:------:|-------------|
| EBL Header Tag               | 0x0000 | This contains information about the chip the image is intended for, the AAT (application address table), Stack Version, Customer Application Version, build date, and build timestamp. This must be the first tag. |
| EBL Program Data             | 0xFE01 | This contains information about what data to program at a specific address into the main flash memory. |
| EBL Program Manufacture Data | 0x02FE | This contains information about what data to program at a specific address within the Customer Information Block (CIB) section (for EM35x devices) or UserData section (for EFR32™ devices) of the chip. |
| EBL End                      | 0xFC04 | This tag indicates the end of the EBL file. It contains a 32-bit CRC for the entire file as an integrity check. The CRC is a non-cryptographic check. This must be the last tag. |

A complete EBL image is shown in the following figure.

<div align=center title="Figure 5.3. EBL Image"><img src="./Figure/Figure5.3.png" alt="Figure 5.3. EBL Image"/></div>

#### 5.2.1.2 Data Validation

The EBL file format includes three 32-bit CRC values used to verify the integrity of the file. These values are calculated using the `halCommonCrc32()` function, which can be found in `hal/micro/generic/crc.c`. The initial value of the CRC used in the calculation is 0xFFFFFFFF.

The following table describes the data integrity checks built into the \.EBL download format.

| Integrity Check | Description |
|-----------------|-------------|
| Header CRC      | The header data contains the headerCrc field (also referred to as aatCrc in other areas of code), a 4-byte, one's complement, LSB-first CRC of the header bytes only. This is used to verify the integrity of the header. This CRC assumes that the value of the type field in the AAT is set to 0xFFFF. |
| EBLTAG_END CRC  | The end tag value is the one's complement, LSB-first CRC of the data download stream, including the header, the end tag, and the CRC value itself. This is used as a running CRC of the download stream, and it verifies that the download file was received properly. The CRC in the tag is the one's complement of the running CRC, and when that value is added to the running calculation of the CRC algorithm it results in a predefined remainder of 0xDEBB20E3. |
| Image CRC       | The header's imageCrc field is the one's complement, MSB-first CRC of all the flash pages to be written including any unused space in the page (initialized to 0xFF). It does not include the EBL tag data and assumes that the first 128 bytes of the AAT. This is used after the image download is complete and everything but the header has been written to flash to verify the download. The download program does this by reading each flash page written as it is defined in the header's `pageRanges[]` array and calculating a running CRC. |

### 5.2.2 Encrypted Ember Bootload File Format

The encrypted Ember bootload file format is similar to the unencrypted version. It introduces a number of new tags. A bootloader that accepts only encrypted EBL images is known as a "secure" bootloader.

#### 5.2.2.1 Encrypted Tag Descriptions

The following table lists the encrypted tags and their descriptions.

| Tag Name                   | ID     | Description |
|----------------------------|:------:|-------------|
| EBL Encryption Header      | 0xFB05 | This contains basic information about the image. The header is not authenticated or encrypted. |
| EBL Encryption Init Header | 0xFA06 | This contains information about the image encryption such as the Nonce, the amount of encrypted data, and an optional block of authenticated but non-encrypted data. The tag is authenticated. |
| EBL Encrypted Program Data | 0xF907 | This contains data about what to program into the flash memory. The contents are encrypted. The data is encrypted using AES-CCM. |
| EBL Encryption MAC         | 0xF709 | This contains the message authentication code used to validate the contents of the authenticated and encrypted portions of the image. |

An encrypted image wraps the normal, non-secure EBL tags inside EBL Encrypted Program Data tags. The contents of each tag are encrypted, but the Encrypted Program Data tag ID and tag length fields are not. For each tag present in the unencrypted EBL, a corresponding Encrypted Program Data tag is created.

The encrypted file format is shown in the following figure:

<div align=center title="Figure 5.4. Encrypted File Format"><img src="./Figure/Figure5.4.png" alt="Figure 5.4. Encrypted File Format"/></div>

#### 5.2.2.2 Nonce Generation

The nonce for an encrypted image is a 12-byte value contained in the EBL Encryption Init tag. The em3xx_convert or Simplicity Commander tool generates a random nonce value during encryption and stores it in the EBL Encryption Init tag.

It is important that a nonce value not be used twice to encrypt two different images with the same encryption key. This is because CCM relies on XORing the contents of the file with a pseudo-random noise block to encrypt it. However, with a 12-byte random nonce, the chance of a collision is approximately 1 in 2^96.

#### 5.2.2.3 Image Validation

Encrypted EBL images are protected by a message authentication code (MAC) computed over the unencrypted contents of each tag. The MAC is stored in the EBL MAC tag, and a secure bootloader computes and verifies the stored MAC before loading any data in the image.
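The GBL tag framing described above (a 4-byte tag ID, a 4-byte length, then a variable payload) is simple enough to walk with a few lines of code. The sketch below assumes little-endian field encoding and fabricates a toy two-tag blob purely for illustration; it is not a full GBL parser (no CRC, signature, or tag-sequence checks):

```python
import struct

def parse_gbl_tags(data: bytes):
    """Walk the top-level tag framing of a GBL-style byte stream.

    Each tag is assumed to be: 4-byte tag ID, 4-byte payload length
    (both little-endian), followed by `length` bytes of payload.
    """
    tags = []
    offset = 0
    while offset + 8 <= len(data):
        tag_id, length = struct.unpack_from("<II", data, offset)
        offset += 8
        payload = data[offset:offset + length]
        offset += length
        tags.append((tag_id, payload))
    return tags

# Toy "file": a header-like tag followed by an end-like tag, using the
# tag IDs from the plaintext tag table (payload contents are made up).
blob = struct.pack("<II", 0x03A617EB, 4) + b"\x03\x00\x00\x00"
blob += struct.pack("<II", 0xFC0404FC, 4) + b"\xAA\xBB\xCC\xDD"
print([hex(tag_id) for tag_id, _ in parse_gbl_tags(blob)])
# → ['0x3a617eb', '0xfc0404fc']
```

A real parser would additionally verify the end-tag CRC over the whole file and, for signed images, the ECDSA-P256 signature tag before trusting any payload.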
50.69375
554
0.589241
yue_Hant
0.360837
76040a7f92d03a981de014545eba3b9e26765020
363
md
Markdown
README.md
Col0ring/types-kit
09e0dc40b9430cf7a4ca4ce51bc562a8706a0858
[ "MIT" ]
11
2022-03-23T13:22:12.000Z
2022-03-24T05:59:31.000Z
README.md
Col0ring/types-kit
09e0dc40b9430cf7a4ca4ce51bc562a8706a0858
[ "MIT" ]
null
null
null
README.md
Col0ring/types-kit
09e0dc40b9430cf7a4ca4ce51bc562a8706a0858
[ "MIT" ]
1
2022-03-23T13:31:51.000Z
2022-03-23T13:31:51.000Z
# Types-Kit

A toolkit of TypeScript utility types.

## Usage

```sh
npm install types-kit -D
# or
yarn add types-kit -D
```

## API

[Docs](./docs/index.md)

## Inspired by

- [utility-types](https://github.com/piotrwitek/utility-types)
- [type-fest](https://github.com/sindresorhus/type-fest)
- [type-challenges](https://github.com/type-challenges/type-challenges)

## License

MIT
13.961538
71
0.68595
kor_Hang
0.237038
76040b5bcf32e309d1ef963a5cd5af71649d89aa
9,474
md
Markdown
articles/azure-monitor/log-query/query-audit.md
tillepille/azure-docs.de-de
bbf108b058e8185b482b1780a4f0858bad66b7bf
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-monitor/log-query/query-audit.md
tillepille/azure-docs.de-de
bbf108b058e8185b482b1780a4f0858bad66b7bf
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-monitor/log-query/query-audit.md
tillepille/azure-docs.de-de
bbf108b058e8185b482b1780a4f0858bad66b7bf
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Audit queries in Azure Monitor log queries
description: Details of audit logs for log queries, which provide telemetry about the log queries run in Azure Monitor.
ms.subservice: logs
ms.topic: conceptual
author: bwren
ms.author: bwren
ms.date: 09/03/2020
ms.openlocfilehash: 7ce0aea6bb257f0a52a843ce53cc904ed0a775dd
ms.sourcegitcommit: c95e2d89a5a3cf5e2983ffcc206f056a7992df7d
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 11/24/2020
ms.locfileid: "95536199"
---
# <a name="audit-queries-in-azure-monitor-logs-preview"></a>Audit queries in Azure Monitor Logs (preview)

Audit logs for log queries provide telemetry about the log queries run in Azure Monitor. This includes information such as when a query was run, who ran it, which tool was used, the query text, and performance statistics describing the query's execution.

## <a name="configure-query-auditing"></a>Configure query auditing

Query auditing is enabled with a [diagnostic setting](../platform/diagnostic-settings.md) on the Log Analytics workspace. This lets you send audit data to the current workspace or to any other workspace in your subscription, to Azure Event Hubs to send it outside of Azure, or to Azure Storage for archiving.

### <a name="azure-portal"></a>Azure portal

Access the diagnostic setting for a Log Analytics workspace in the Azure portal in either of the following locations:

- From the **Azure Monitor** menu, select **Diagnostic settings**, and then locate and select the workspace.

  [![Diagnostic settings in Azure Monitor](media/log-query-audit/diagnostic-setting-monitor.png) ](media/log-query-audit/diagnostic-setting-monitor.png#lightbox)

- From the **Log Analytics workspaces** menu, select the workspace, and then select **Diagnostic settings**.

  [![Diagnostic settings in Log Analytics workspace](media/log-query-audit/diagnostic-setting-workspace.png) ](media/log-query-audit/diagnostic-setting-workspace.png#lightbox)

### <a name="resource-manager-template"></a>Resource Manager template

For a sample Resource Manager template, see [Diagnostic setting for Log Analytics workspace](../samples/resource-manager-diagnostic-settings.md#diagnostic-setting-for-log-analytics-workspace).

## <a name="audit-data"></a>Audit data

An audit record is created each time a query is run. If you send the data to a Log Analytics workspace, it is stored in a table called *LAQueryLogs*. The following table describes the properties in each record of the audit data.

| Field | Description |
|:---|:---|
| TimeGenerated | UTC time at which the query was submitted. |
| CorrelationId | Unique ID identifying the query. Can be used in troubleshooting scenarios when contacting Microsoft for assistance. |
| AADObjectId | Azure Active Directory ID of the user account that started the query. |
| AADTenantId | ID of the tenant of the user account that started the query. |
| AADEmail | Email address of the tenant of the user account that started the query. |
| AADClientId | ID and resolved name of the application used to start the query. |
| RequestClientApp | Resolved name of the application used to start the query. |
| QueryTimeRangeStart | Start of the time range selected for the query. This may not be populated in certain scenarios, such as when the query is started from Log Analytics and the time range is specified inside the query rather than with the time picker. |
| QueryTimeRangeEnd | End of the time range selected for the query. This may not be populated in certain scenarios, such as when the query is started from Log Analytics and the time range is specified inside the query rather than with the time picker. |
| QueryText | Text of the query that was run. |
| RequestTarget | API URL used to submit the query. |
| RequestContext | List of resources that the query was requested against. Contains up to three string arrays: workspaces, applications, and resources. Queries targeting subscriptions or resource groups are shown as *resources*. Includes the target implied by RequestTarget.<br>The resource ID is provided for each resource if it can be resolved. It may not be resolved if an error is returned when accessing the resource; in that case, the specific text from the query is used.<br>If the query uses an ambiguous name, such as a workspace name that exists in multiple subscriptions, that ambiguous name is used. |
| RequestContextFilters | Set of filters specified as part of the query call. Includes up to three possible string arrays:<br>- ResourceTypes: type of resource to limit the scope of the query.<br>- Workspaces: list of workspaces the query should be limited to.<br>- WorkspaceRegions: list of workspace regions to limit the query. |
| ResponseCode | HTTP response code returned when the query was submitted. |
| ResponseDurationMs | Time for the response to be returned. |
| ResponseRowCount | Total number of rows returned by the query. |
| StatsCPUTimeMs | Total compute time used for computing, parsing, and retrieving data. Only populated when the query returns with status code 200. |
| StatsDataProcessedKB | Amount of data that was accessed to process the query. Influenced by the size of the target table, the time span used, the filters applied, and the number of columns referenced. Only populated when the query returns with status code 200. |
| StatsDataProcessedStart | Time of the oldest data accessed to process the query. Influenced by the query's explicit time span and the filters applied. Because of data partitioning, this may be larger than the explicit time span. Only populated when the query returns with status code 200. |
| StatsDataProcessedEnd | Time of the newest data accessed to process the query. Influenced by the query's explicit time span and the filters applied. Because of data partitioning, this may be larger than the explicit time span. Only populated when the query returns with status code 200. |
| StatsWorkspaceCount | Number of workspaces accessed by the query. Only populated when the query returns with status code 200. |
| StatsRegionCount | Number of regions accessed by the query. Only populated when the query returns with status code 200. |

## <a name="considerations"></a>Considerations

- Queries are only logged when they are run in a user context. Service-to-service queries within Azure are not logged. The two primary sets of queries affected by this exclusion are billing calculations and automated alert executions. In the case of alerts, only the scheduled alert query itself is not logged; the initial run of the alert on the alert creation screen is run in a user context and is available for audit purposes.
- Performance statistics are not available for queries coming from the Azure Data Explorer proxy. All other data for these queries is still populated.
- The *h* hint on strings, which [obfuscates string literals](/azure/data-explorer/kusto/query/scalar-data-types/string#obfuscated-string-literals), has no effect on the query audit logs. Queries are recorded exactly as they are submitted, without the string being obfuscated. Make sure that only users who have compliance rights to see this data can do so, using the various RBAC or Azure RBAC modes available in Log Analytics workspaces.
- For queries that include data from multiple workspaces, the query is only captured in those workspaces the user has access to.

## <a name="costs"></a>Costs

There is no cost for the Azure Diagnostics extension, but you may incur charges for the ingested data. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for information on the destination where you collect the data.

## <a name="next-steps"></a>Next steps

- Learn more about [diagnostic settings](../platform/diagnostic-settings.md).
- Learn more about [optimizing log queries](query-optimization.md).
119.924051
771
0.80589
deu_Latn
0.99822
76042411b76038638898a5b4f144dfb17e29f526
1,762
md
Markdown
business-central/LocalFunctionality/UnitedKingdom/how-to-set-up-a-posting-date-warning.md
AdriaticOrg/dynamics365smb-docs-sr-Latn-RS
dac820847071ebf13c6990c88b2817e7457cfddc
[ "CC-BY-4.0", "MIT" ]
null
null
null
business-central/LocalFunctionality/UnitedKingdom/how-to-set-up-a-posting-date-warning.md
AdriaticOrg/dynamics365smb-docs-sr-Latn-RS
dac820847071ebf13c6990c88b2817e7457cfddc
[ "CC-BY-4.0", "MIT" ]
null
null
null
business-central/LocalFunctionality/UnitedKingdom/how-to-set-up-a-posting-date-warning.md
AdriaticOrg/dynamics365smb-docs-sr-Latn-RS
dac820847071ebf13c6990c88b2817e7457cfddc
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: How to Set Up a Posting Date Warning | Microsoft Docs description: A warning message will be displayed when you post or batch post sales and purchase documents with a posting date that is not the same as the work date. You must set this up in the **Sales Receivables Setup** and **Purchases Payables Setup** windows. services: project-madeira documentationcenter: '' author: SorenGP ms.service: dynamics365-business-central ms.topic: article ms.devlang: na ms.tgt_pltfrm: na ms.workload: na ms.search.keywords: ms.date: 07/01/2017 ms.author: sgroespe --- # Set Up a Posting Date Warning A warning message will be displayed when you post or batch post sales and purchase documents with a posting date that is not the same as the work date. You must set this up in the **Sales Receivables Setup** and **Purchases Payables Setup** windows. ## To set up a posting date warning for sales and purchases 1. Choose the ![Search for Page or Report](../../media/ui-search/search_small.png "Search for Page or Report icon") icon, enter **Sales & Receivables Setup** or **Purchases & Payables Setup**, and then choose the related link. 2. In the **Sales Receivables Setup** window or the **Purchases Payables Setup** window, in the **General** FastTab, select the **Posting Date Check on Posting** check box. 3. Choose the **OK** button. > [!NOTE] > A warning message is displayed when you post or batch post documents with a posting date that is not the same as the work date. If you select the **Replace Posting Date** check box while batch posting, you will be warned if the replacement posting date is not the same as the work date. ## See Also [United Kingdom Local Functionality](united-kingdom-local-functionality.md)
55.0625
279
0.726447
eng_Latn
0.98768
7604c82ff052cc8d9dc5ef479ccc6acdd257f61a
11,506
md
Markdown
docs/RFCS/20160525_type_annotations.md
Yangjxxxxx/ZNBase
dcf993b73250dd5cb63041f4d9cf098941f67b2b
[ "MIT", "BSD-3-Clause" ]
null
null
null
docs/RFCS/20160525_type_annotations.md
Yangjxxxxx/ZNBase
dcf993b73250dd5cb63041f4d9cf098941f67b2b
[ "MIT", "BSD-3-Clause" ]
null
null
null
docs/RFCS/20160525_type_annotations.md
Yangjxxxxx/ZNBase
dcf993b73250dd5cb63041f4d9cf098941f67b2b
[ "MIT", "BSD-3-Clause" ]
null
null
null
- Feature Name: SQL Type Annotations - Status: completed - Start Date: 2016-05-25 - Authors: Nathan, knz - RFC PR: #6895 - ZNBase Issue: #6189 # Summary This RFC proposes to add a new "type annotation" syntax to our SQL dialect (referred to as ZNBaseSQL from now on). This was originally proposed as part of the Summer typing system, but is being split up separately because it is purely an extension which was not necessary for the immediate correctness of the type system. # Motivation In order to clarify the typing rules during the design of the Summer type system and to exercise the proposed system, we found it was useful to "force" certain expressions to be of a certain type. Unfortunately, the SQL cast expression (`CAST(... AS ...)` or `...::...`) is not appropriate for this, because although it guarantees a type to the surrounding expression it does not constrain its argument. For example `sign(1.2)::int` does not disambiguate which overload of `sign` to use. Therefore we propose the following SQL extension, which was not required for the correctness of the Summer typing system but offers opportunities to better exercise it in tests. The extension also gives users of our SQL dialect more control over the type inference decisions of the type system. The need for this type of extension was also implicitly present/expressed in the alternate proposals Rick and Morty. # Detailed design The extension is a new "type annotation" expression node. ### Operator Syntax We define the following SQL syntax: `E ::: T`, where `E` is a sub-expression and `T` is a type. We chose this syntax because it does not collide (visually or logically) with other SQL syntax, and because it draws parallels to the cast operator syntax. For example: `1:::int` or `1 ::: int`. The meaning of this at a first order approximation is "interpret the expression on the left using the type on the right". 
### Keyword Syntax In addition, we define the keyword syntax: `ANNOTATE_TYPE(E, T)`, where `E` is a sub-expression and `T` is a type. We chose this syntax because it draws parallels to the cast keyword syntax, without requiring special client handling. We could have opted for an `ANNOTATE_TYPE(E AS T)` syntax, which would have been more similar to the cast keyword syntax, but would not be usable by ORMs or other SQL client abstraction layers because it would not look like a function call. For example: `ANNOTATE_TYPE(1, int)`. ### Type Annotations vs. Type Casts As expressed above, type annotations and type casts play different roles. The primary purpose of a type cast is to take a given expression of type `T1` and map it during evaluation to a type `T2`. This cast can be used to work around disjoint types between a sub-expression and its parent context, but it has no effect on typing of the sub-expression itself. In practice with the Summer type system, this means that during type checking, a `CastExpr` will throw away any recursively passed `desiredType`, and will propagate down `NoPreference` to its sub-expressions. On the other hand, type annotations have no runtime effect (as implied by the term "annotation"). Instead, the annotation only has an effect during semantic analysis. The annotation will do what it can to type its sub-expression as its annotation type, and then assert that the sub-expression does get typed as this type, throwing an error if not. In a "bottom-up" type system, the only thing an annotation like this could do is assert after its sub-expression's type has been resolved that it matches its annotation type. However, in a bi-directional typing system like Summer, the annotation can be more effective by also "desiring" its annotation type from its children. In effect, this means that the sub-expression will become the desired type if possible. 
Examples ```sql SELECT 1.2 -> no preference for the return type of `1.2` -> numeric constant without a preference defaults to DECIMAL -> the expression returns a DECIMAL SELECT 1.2::float -> no preference for the return type of CAST(... as FLOAT) -> no preference for the return type of `1.2` -> numeric constant without a preference defaults to DECIMAL -> CastExpr returns a FLOAT instead, which it will cast during evaluation -> the expression returns a FLOAT SELECT 1.2::string -> no preference for the return type of CAST(... as STRING) -> no preference for the return type of `1.2` -> numeric constant without a preference defaults to DECIMAL -> CastExpr returns a STRING instead, which it will cast (format) during evaluation -> the expression returns a STRING SELECT 1.2:::float -> no preference for the return type of `... ::: FLOAT` -> FLOAT preference passed for the return type of `1.2` -> numeric constant contains FLOAT in its "resolvable type set", so it resolves itself as a FLOAT -> the type annotation correctly asserts that its sub-expression returns a FLOAT -> the type assertion itself returns a float, and does nothing during evaluation -> the expression returns a FLOAT SELECT 1.2:::string -> no preference for the return type of `... ::: STRING` -> STRING preference passed for the return type of `1.2` -> numeric constant does not contain STRING in its "resolvable type set", so it resolves itself to the default type of DECIMAL instead -> the type annotation throws an error when asserting that its sub-expression returns a STRING, type checking fails SELECT sign(1.2) -> no preference for the return type of `sign` -> no preference for the return type of `1.2` -> `1.2` defaults to a DECIMAL -> overload resolution chooses the DECIMAL implementation -> the expression returns a DECIMAL SELECT sign(1.2)::float -> no preference for the return type of CAST(... 
as FLOAT) -> no preference for the return type of `sign` -> no preference for the return type of `1.2` -> `1.2` defaults to a DECIMAL -> overload resolution chooses the DECIMAL implementation -> CastExpr returns a FLOAT instead, which it will cast during evaluation -> the expression returns a FLOAT SELECT sign(1.2)::string -> no preference for the return type of CAST(... as STRING) -> no preference for the return type of `sign` -> no preference for the return type of `1.2` -> `1.2` defaults to a DECIMAL -> overload resolution chooses the DECIMAL implementation -> CastExpr returns a STRING instead, which it will cast (format) during evaluation -> the expression returns a STRING SELECT sign(1.2):::float -> no preference for the return type of `... ::: FLOAT` -> FLOAT preference passed for the return type of `sign` -> FLOAT used in overload resolution to determine float overload and to resolve the numeric constant as a float -> the type annotation correctly asserts that its sub-expression returns a FLOAT -> the type assertion itself returns a float, and does nothing during evaluation -> the expression returns a FLOAT SELECT sign(1.2):::string -> no preference for the return type of `... ::: STRING` -> STRING preference passed for the return type of `sign` -> STRING is ignored in overload resolution because no overloads return a STRING, so the DECIMAL implementation is defaulted to -> the type annotation throws an error when asserting that its sub-expression returns a STRING, type checking fails ``` Note that in the above examples, adding a type annotation onto the top of a SELECT clause has an almost identical effect on type checking as INSERTing into a column with the annotation type. ### Interaction with Placeholders The initial Summer RFC defined a set of rules for determining the type of placeholders based on type casts and type annotations. 
The proposed rules are: - if any given placeholder appears as immediate argument of an explicit annotation, then assign that type to the placeholder (and reject conflicting annotations after the 1st). - otherwise (no direct annotations on a placeholder), if all occurrences of a placeholder appear as immediate argument to a cast expression then: - if all the cast(s) are homogeneous, then assign the placeholder the type indicated by the cast. - otherwise, assign the type "string" to the placeholder. - for anything else, defer to type checking to determine the placeholder type. Examples: ```sql SELECT $1:::float, $1::string -> $1 ::: float, execution will perform explicit cast float->string SELECT $1:::float, $1:::string -> error: conflicting types SELECT $1::float, $1::float -> $1 ::: float SELECT $1::float, $1::string -> $1 ::: string, execution will perform explicit cast $1 -> float SELECT $1:::float, $1 -> $1 ::: float SELECT $1::float, $1 -> nothing done during 1st pass, type checking will resolve ``` (Note that this rule does not interfere with the separate rule, customary in SQL interpreters, that the client may choose to disregard the stated type of a placeholder during execute and instead pass the value as a string. The query executor must then convert the string to the type for the placeholder that was determined during type checking. For example if a client prepares `select $1:::int + 2` and passes "123" (a string), the executor must convert "123" to 123 (an int) before running the query. The annotation expression is a type assertion, not conversion, at run-time.) This set of rules has already been partially implemented in the `parser.placeholderAnnotationVisitor`, which is used through `parser.ProcessPlaceholderAnnotations`. ### Implementation Plan A `TypeAnnotationExpr` or `AnnotationExpr` will be created in the `parser` package. 
Its implementation will mirror that of the `ParenExpr`, except in its `TypeCheck` method, where it will call `typeCheckAndRequire` on its sub-expression with its annotation type. The expression could also make a similar type assertion in its `Eval` method, but this shouldn't be necessary. The `ProcessPlaceholderAnnotations` will need to be adjusted to adopt the rules listed above for interactions between casts and annotations. This is already stubbed out. # Drawbacks Type annotations will be a ZNBase-specific language extension to SQL. In general, we have tried to avoid language extensions, as they limit portability of SQL statements and create a higher learning curve for moving to ZNBase. However, we concluded that this was ok, because these annotations are fully opt-in, and are not a requirement of the dialect. # Alternatives None. This is strictly an isolated extension. # Unresolved questions ### EXPLAIN(TYPES) An initial proof of concept implementation of type annotations made it clear that `EXPLAIN(TYPES)` in conjunction with type annotations results in a fairly verbose output. It might make sense to make type annotations "invisible" in the output of `EXPLAIN(TYPES)`, as they are annotations for the type system, as opposed to expressions which affect evaluation. Furthermore, if the annotation passes type checking, its child will already have the same type, so including it in the output is unnecessary.
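The first-pass placeholder rules described in this RFC lend themselves to a compact sketch. The Python below is a hypothetical illustration of the rule set only — the real implementation is the Go code in `parser.ProcessPlaceholderAnnotations`, and the occurrence encoding here is invented for the example:

```python
def resolve_placeholder_types(occurrences):
    """First-pass placeholder typing per the RFC's rules.

    occurrences: list of (placeholder, kind, type) tuples, where kind is
    "annotation" (:::), "cast" (::), or "bare" (no adjacent type syntax).
    Returns {placeholder: type}, with "string" for heterogeneous casts and
    None when typing is deferred to full type checking.
    """
    grouped = {}
    for name, kind, typ in occurrences:
        grouped.setdefault(name, []).append((kind, typ))

    resolved = {}
    for name, occs in grouped.items():
        annotations = {typ for kind, typ in occs if kind == "annotation"}
        if len(annotations) > 1:
            # e.g. SELECT $1:::float, $1:::string
            raise ValueError(f"conflicting type annotations for {name}")
        if annotations:
            # A direct annotation wins over any casts or bare uses.
            resolved[name] = annotations.pop()
        elif occs and all(kind == "cast" for kind, _ in occs):
            casts = {typ for _, typ in occs}
            # Homogeneous casts fix the type; mixed casts fall back to string.
            resolved[name] = casts.pop() if len(casts) == 1 else "string"
        else:
            resolved[name] = None  # defer to type checking
    return resolved

# SELECT $1:::float, $1::string  ->  $1 ::: float
assert resolve_placeholder_types(
    [("$1", "annotation", "float"), ("$1", "cast", "string")]) == {"$1": "float"}
# SELECT $1::float, $1::string   ->  $1 ::: string
assert resolve_placeholder_types(
    [("$1", "cast", "float"), ("$1", "cast", "string")]) == {"$1": "string"}
# SELECT $1::float, $1           ->  deferred
assert resolve_placeholder_types(
    [("$1", "cast", "float"), ("$1", "bare", None)]) == {"$1": None}
```

The asserts mirror the worked examples in the "Interaction with Placeholders" section above; each branch corresponds to one bullet of the rule list.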
45.478261
91
0.731184
eng_Latn
0.998488
7604f9af2dc2d70026965f78da8fb25f04af81ca
5,735
md
Markdown
website/translated_docs/ja/Project/documentation.md
aparajita/docs
21a596381f9bffe16a5c462dc24f10acd45e3d94
[ "CC-BY-4.0" ]
null
null
null
website/translated_docs/ja/Project/documentation.md
aparajita/docs
21a596381f9bffe16a5c462dc24f10acd45e3d94
[ "CC-BY-4.0" ]
null
null
null
website/translated_docs/ja/Project/documentation.md
aparajita/docs
21a596381f9bffe16a5c462dc24f10acd45e3d94
[ "CC-BY-4.0" ]
1
2019-03-27T06:57:51.000Z
2019-03-27T06:57:51.000Z
--- id: documentation title: Documentation --- In an application project, you can create documentation for methods, forms, tables, and fields. Creating documentation is especially suitable when a project is developed by several programmers, and it is generally recommended as good programming practice. Documentation can contain not only a description of an element, but any information necessary to understand the element's function within the application. The project elements that can be documented are: - Methods (database methods, component methods, project methods, form methods, 4D Mobile methods, triggers, classes) - Forms - Tables and fields Documentation files are written in Markdown syntax (.md files). You can use any editor that supports Markdown. Each is stored as an independent file inside the project folder. The documented content is displayed in the preview area on the right side of the Explorer: ![](assets/en/Project/explorer_Doc.png) It can also be displayed in part as a [help tip in the code editor](#コードエディターでドキュメンテーションを表示する). ## Documentation files ### Documentation file names A documentation file has the same file name as the element it documents, with the ".md" extension. For example, the documentation file attached to the `myMethod.4dm` project method is named `myMethod.md`. In the Explorer, the documentation whose file name matches the selected element is displayed automatically (see below). ### Documentation file architecture All documentation files are stored in the `Documentation` folder at the root of the database folder. The architecture of the `Documentation` folder is as follows: - `Documentation` + `Classes` * myClass.md + `DatabaseMethods` * onStartup.md * ... + `Forms` * loginDial.md * ... + `Methods` * myMethod.md * ... + `TableForms` * **1** - input.md - ... * ... + `Triggers` * table1.md * ... - A project form and its project form method share the same documentation file for both the form and the method. - A table form and its table form method share the same documentation file for both the form and the method. > If you rename or delete a documented project element, the Markdown file attached to that element is also automatically renamed or deleted. ## Explorer and documentation ### Displaying documentation To display documentation in the Explorer window: 1. Make sure the preview area is displayed. 2. Select the documented element in the Explorer list. 3. 
Click the **Documentation** button below the preview area. ![](assets/en/Project/comments-explo2.png) - If no documentation file is found for the selected element, a **Create** button is displayed. - If a documentation file exists for the selected element, its contents are displayed in the area. Note that the contents displayed in the area cannot be edited directly. ### Editing documentation files You can create and edit the Markdown documentation file of the selected element from the Explorer. If no documentation file exists for the selected element: - click the **Create** button in the `Documentation` pane, or - choose **Edit Documentation...** from the Explorer's options menu or context menu. ![](assets/en/Project/comments-explo3.png) An .md file is automatically created from a template, with the appropriate location and name, and opened in your default Markdown editor. If a documentation file already exists for the selected element, you can open it in the Markdown editor by choosing **Edit Documentation...** from the Explorer's options menu or context menu. ## Displaying documentation in the code editor The 4D code editor displays part of a method's documentation as a help tip. ![](assets/en/Project/codeEditor_Comments.png) If a "\<MethodName>.md" file exists in the "\<package>/documentation" folder, the code editor displays the help tip with the following priority: - the text enclosed in an HTML comment tag at the top of the Markdown file (*\<!-- description of the command -->*) - if no HTML comment tag is used, the first sentence after the `# Description` tag of the Markdown file. In this case, the first sentence contains the method's **prototype**, automatically generated by the 4D code parser. > Otherwise, the [comment block at the top of the method code](https://doc.4d.com/4Dv18R3/4D/18-R3/Writing-a-method.300-4919495.ja.html#4618226) is displayed in the code editor. ## Defining documentation files 4D creates new documentation files from a template. The template provides items that can be used for [displaying documentation in the code editor](#コードエディターでドキュメンテーションを表示する). You can also use any other [supported Markdown](#サポートされている-Markdown) tags. A newly created documentation file contains the following default items: ![](assets/en/Project/comments-explo4.png) | Line | Description | | -------------------------------------------------- | --------------------------------------------------------------------------------------- | | "\<!-- Type your summary here -->" | HTML comment tag. Displayed with priority as the method description in the [code editor tip](#コードエディターでドキュメンテーションを表示する). | | ## Description | Markdown level-2 heading tag. If no HTML comment tag is used, the first sentence after this tag is displayed as the method description in the code editor tip. | | ## Example | Level-2 heading tag, which can be used to write sample code. | | \``` 4D <br>Type your example here \` `` | Used to format 4D 
sample code (uses the highlight.js library). | ### Supported Markdown - Heading tags: ``` # Heading 1 ## Heading 2 ### Heading 3 ``` - Style tags (italic, bold, strikethrough): ``` _italic_ **bold** **_bold/italic_** ~~strikethrough~~ ``` - Code block tags with 4D code highlighting (\```4d ... ```) \``` 4d C_TEXT($txt) $txt:="Hello world!" \` `` - Table tags: ``` | Parameter | Type | Description | | --------- | ------ | ------------ | | wpArea | Text |Write pro area| | toolbar | Text |Toolbar name | ``` - Link tags: ``` // example 1 The command [documentation](https://doc.4d.com) is ... // example 2 [4D blog][1] [1]: https://blog.4d.com ``` - Image tags: ``` ![image description](pictures/image.png) ![4D logo](https://blog.4d.com/wp-content/uploads/2016/09/logoOrignal-1.png "4D blog logo") [![4D blog logo and link](https://blog.4d.com/wp-content/uploads/2016/09/logoOrignal-1.png "4D blog logo")](https://blog.4d.com) ``` [![4D blog logo and link](https://blog.4d.com/wp-content/uploads/2016/09/logoOrignal-1.png "4D blog logo")](https://blog.4d.com) > For more information, see the [GitHub Markdown guide](https://guides.github.com/features/mastering-markdown/). ## Example In the `WP SwitchToolbar.md` file, you could write: ```4d <!-- Returns a different logo depending on the size parameter --> GetLogo (size) -> logo | Parameter | Type | in/out | Description | | --------- | ------ | ------ | ----------- | | size | Longint | in | Logo style selector (1 to 5) | | logo | Picture | out | Selected logo | ## Description This method returns a logo of a specific size, depending on the value of the *size* parameter. 1 = smallest, 5 = largest ## Example C_PICTURE($logo) C_LONGINT($size) // get the largest logo $logo:=GetLogo(5) ``` - Display in the Explorer: ![](assets/en/Project/explorer_Doc.png) - Display in the code editor: ![](assets/en/Project/comments-explo5.png)
24.613734
227
0.655449
jpn_Jpan
0.490105
76052a1d28e5901c531625c995694020ede5ab94
773
md
Markdown
README.md
tkuchiki/rundeck-consul-nodes-plugin
f704113a3e408ffc59b3a20a0e556f91b3ba510a
[ "MIT" ]
null
null
null
README.md
tkuchiki/rundeck-consul-nodes-plugin
f704113a3e408ffc59b3a20a0e556f91b3ba510a
[ "MIT" ]
null
null
null
README.md
tkuchiki/rundeck-consul-nodes-plugin
f704113a3e408ffc59b3a20a0e556f91b3ba510a
[ "MIT" ]
null
null
null
# rundeck-consul-nodes-plugin Get resource node data from Consul/Serf members ## Installation 1. Download from https://github.com/tkuchiki/rundeck-consul-nodes-plugin/releases 1. `cp consul-nodes-VERSION-plugin.zip ${RDECK_BASE}/libext/` ## Configuration Edit `/var/rundeck/projects/YOUR_PROJECT/etc/project.properties` or "Simple Configuration", "Edit Configuration File". ``` resources.source.[index].type=consul-nodes resources.source.[index].service=consul # consul or serf (default: consul) resources.source.[index].target-statuses=alive|left # regexp for status (default: alive) resources.source.[index].exclude-nodes="nodename1" # exclude nodes(grep extended regexp) resources.source.[index].include-nodes="nodename2" # include nodes(grep extended regexp) ```
36.809524
118
0.780078
eng_Latn
0.332742
76052d45b00e228397f06f67f1f34b378ca5d8a0
1,129
md
Markdown
README.md
GeorgeCloud/YouTube-Portfolio
5b0fdf4c230c32939e1c4a33bbbcf33ad42df91b
[ "MIT" ]
null
null
null
README.md
GeorgeCloud/YouTube-Portfolio
5b0fdf4c230c32939e1c4a33bbbcf33ad42df91b
[ "MIT" ]
null
null
null
README.md
GeorgeCloud/YouTube-Portfolio
5b0fdf4c230c32939e1c4a33bbbcf33ad42df91b
[ "MIT" ]
null
null
null
# YouTube-Portfolio Your personal portfolio for your favorite YouTube videos https://georgecloud.github.io/YouTube-Portfolio/ # Description YouTube is the most popular video-sharing platform in the world, and because of that I decided to create a tool that allows YouTube users to play their favorite music on a single-page website (locally, for YOU!) without ads. ## Features - Download music for free / Ad-free / Offline playback / Open source - Single-page website with all your favorite music - Ability to capture the thumbnail URL - Command-line interface included ## Quickstart This guide covers the whole process of getting your own service running. Clone the project ```bash $ git clone https://github.com/GeorgeCloud/YouTube-Portfolio.git ``` Change to the project directory where the program lives ```bash $ cd MacOS ``` Install all the required libraries to start the server ```bash $ pip install -r requirements.txt ``` Start the server in the current console ```bash $ python3 live-server.py ``` Open a new console tab in the same directory ```bash $ Command ⌘ + T ``` Start the program ```bash $ python3 youtube_downloader.py ```
21.711538
227
0.759965
eng_Latn
0.984351
7606947886be7d2db69ae3338d6a8bf6eddaddbb
18
md
Markdown
grails-spring/README.md
moskauerst/grails-core
8cf8c4029a56e65db78efa43f85ec4d7ea89547b
[ "Apache-2.0" ]
1,953
2015-01-01T07:29:56.000Z
2022-03-29T14:22:16.000Z
grails-spring/README.md
moskauerst/grails-core
8cf8c4029a56e65db78efa43f85ec4d7ea89547b
[ "Apache-2.0" ]
3,198
2015-01-03T21:41:35.000Z
2022-03-31T22:21:43.000Z
grails-spring/README.md
moskauerst/grails-core
8cf8c4029a56e65db78efa43f85ec4d7ea89547b
[ "Apache-2.0" ]
756
2015-01-08T11:16:04.000Z
2022-03-23T03:56:40.000Z
## grails-spring
6
16
0.666667
eng_Latn
0.852188
7607106b21796e932ae41e7710129c753e26b984
1,520
md
Markdown
README.md
sharpecapital/data-access-api
0aa29c68e1e2b76c78270c815b234a12de2b562b
[ "MIT" ]
null
null
null
README.md
sharpecapital/data-access-api
0aa29c68e1e2b76c78270c815b234a12de2b562b
[ "MIT" ]
null
null
null
README.md
sharpecapital/data-access-api
0aa29c68e1e2b76c78270c815b234a12de2b562b
[ "MIT" ]
null
null
null
# Data Access API An abstraction API for accessing different data sources within the system architecture (RDBMS, NoSQL etc.) The platform architecture currently uses two data sources: 1. Cassandra NoSQL cluster 2. Oracle 11g Master-Slave RDBMS ## Cassandra NoSQL Cassandra is used to persist data sets in excess of 1m records, as it provides better scalability and performance with large data sets than traditional RDBMS. An example of data stored in Cassandra would be tick-by-tick exchange rates, which has the potential to grow to more than 1bn records in a short period of time (months, rather than years). #### Data Model _[INSERT DIAGRAM]_ #### Creation Steps _[INSERT SCRIPT]_ ## Oracle RDBMS Oracle 11g is used as a relational database for smaller sets of reference data (to the order of 10s or 100s of thousands, rather than millions / billions). This offers more flexibility in the way we access the data compared to Cassandra, and we don't need to deal with the complexities of NoSQL for smaller datasets. It is also possible to build batch processes to load subsets of data from the Cassandra store into Oracle, for user interfaces or third-party applications. #### Data Model _[INSERT DIAGRAM]_ #### Creation Steps _[INSERT SCRIPT]_ ## Contributing #### Tests TBC #### Documentation Add documentation for every change. Feel free to send corrections or better docs! #### Pull Requests Send _fixes_ PR on the `master` branch. ## License MIT © [Sharpe Capital](http://sharpecapital.co.uk)
33.043478
472
0.771711
eng_Latn
0.988155
760730d6695a7d575498e7c543570014e1c31d7c
4,642
md
Markdown
docs/code-quality/ca1024.md
sejal27/visualstudio-docs
d6ab256b17fde6098fca96b5c9d0ca3bf97a8fdb
[ "CC-BY-4.0", "MIT" ]
2
2019-11-12T19:19:12.000Z
2019-11-12T19:20:49.000Z
docs/code-quality/ca1024.md
Chatina73/visualstudio-docs
ccf6d62d3d254338318b31f52e76324492401c50
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/code-quality/ca1024.md
Chatina73/visualstudio-docs
ccf6d62d3d254338318b31f52e76324492401c50
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "CA1024: Use properties where appropriate" ms.date: 03/11/2019 ms.topic: reference f1_keywords: - "UsePropertiesWhereAppropriate" - "CA1024" helpviewer_keywords: - "CA1024" - "UsePropertiesWhereAppropriate" ms.assetid: 3a04f765-af7c-4872-87ad-9cc29e8e657f author: jillre ms.author: jillfra manager: jillfra dev_langs: - CSharp - VB ms.workload: - "multiple" --- # CA1024: Use properties where appropriate ||| |-|-| |TypeName|UsePropertiesWhereAppropriate| |CheckId|CA1024| |Category|Microsoft.Design| |Breaking change|Breaking| ## Cause A method has a name that starts with `Get`, takes no parameters, and returns a value that is not an array. By default, this rule only looks at public and protected methods, but this is [configurable](#configurability). ## Rule description In most cases, properties represent data and methods perform actions. Properties are accessed like fields, which makes them easier to use. A method is a good candidate to become a property if one of these conditions is present: - Takes no arguments and returns the state information of an object. - Accepts a single argument to set some part of the state of an object. Properties should behave as if they are fields; if the method cannot, it should not be changed to a property. Methods are better than properties in the following situations: - The method performs a time-consuming operation. The method is perceivably slower than the time that is required to set or get the value of a field. - The method performs a conversion. Accessing a field does not return a converted version of the data that it stores. - The Get method has an observable side effect. Retrieving the value of a field does not produce any side effects. - The order of execution is important. Setting the value of a field does not rely on the occurrence of other operations. - Calling the method two times in succession creates different results. - The method is static but returns an object that can be changed by the caller. 
Retrieving the value of a field does not allow the caller to change the data that is stored by the field. - The method returns an array. ## How to fix violations To fix a violation of this rule, change the method to a property. ## When to suppress warnings Suppress a warning from this rule if the method meets at least one of the previously listed criteria. ## Configurability If you're running this rule from [FxCop analyzers](install-fxcop-analyzers.md) (and not with legacy analysis), you can configure which parts of your codebase to run this rule on, based on their accessibility. For example, to specify that the rule should run only against the non-public API surface, add the following key-value pair to an .editorconfig file in your project: ```ini dotnet_code_quality.ca1024.api_surface = private, internal ``` You can configure this option for just this rule, for all rules, or for all rules in this category (Design). For more information, see [Configure FxCop analyzers](configure-fxcop-analyzers.md). ## Control property expansion in the debugger One reason programmers avoid using a property is because they do not want the debugger to autoexpand it. For example, the property might involve allocating a large object or calling a P/Invoke, but it might not actually have any observable side effects. You can prevent the debugger from autoexpanding properties by applying <xref:System.Diagnostics.DebuggerBrowsableAttribute?displayProperty=fullName>. The following example shows this attribute being applied to an instance property. ```vb Imports System Imports System.Diagnostics Namespace Microsoft.Samples Public Class TestClass ' [...] <DebuggerBrowsable(DebuggerBrowsableState.Never)> _ Public ReadOnly Property LargeObject() As LargeObject Get ' Allocate large object ' [...] End Get End Property End Class End Namespace ``` ```csharp using System; using System.Diagnostics; namespace Microsoft.Samples { public class TestClass { // [...] 
[DebuggerBrowsable(DebuggerBrowsableState.Never)] public LargeObject LargeObject { get { // Allocate large object // [...] } } } } ``` ## Example The following example contains several methods that should be converted to properties and several that should not because they don't behave like fields. [!code-csharp[FxCop.Design.MethodsProperties#1](../code-quality/codesnippet/CSharp/ca1024-use-properties-where-appropriate_1.cs)]
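The design guideline behind CA1024 translates to other languages as well. As a quick illustration (Python rather than .NET, and not part of the FxCop rule itself), a parameterless Get method that only returns state reads more naturally as a property:

```python
class Account:
    """A parameterless, side-effect-free getter is a property candidate."""

    def __init__(self, balance):
        self._balance = balance

    # Mirrors the CA1024 violation: Get-prefixed, takes no arguments,
    # and simply returns the state of the object.
    def get_balance(self):
        return self._balance

    # Preferred: expose the same state as a property.
    @property
    def balance(self):
        return self._balance


acct = Account(42)
assert acct.get_balance() == 42
assert acct.balance == 42  # accessed like a field, as the rule recommends
```

The same caveats apply as in the rule text: if the accessor were slow, converting, or side-effecting, a method would remain the better choice.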
34.385185
373
0.745799
eng_Latn
0.997606
76083347b56e83332d052f72d0bf840e31fc48de
699
md
Markdown
vendor/zendframework/zend-hydrator/README.md
Dturati/zend2Ionic
888c3f043e1c30b3417c1c9d7377520782fc1ed3
[ "BSD-3-Clause" ]
27
2015-07-08T03:50:09.000Z
2018-09-19T09:54:32.000Z
vendor/zendframework/zend-hydrator/README.md
Dturati/zend2Ionic
888c3f043e1c30b3417c1c9d7377520782fc1ed3
[ "BSD-3-Clause" ]
17
2020-01-23T11:20:10.000Z
2022-01-31T13:36:57.000Z
vendor/zendframework/zend-hydrator/README.md
Dturati/zend2Ionic
888c3f043e1c30b3417c1c9d7377520782fc1ed3
[ "BSD-3-Clause" ]
13
2016-03-25T07:42:15.000Z
2016-05-15T01:25:05.000Z
# zend-hydrator [![Build Status](https://secure.travis-ci.org/zendframework/zend-hydrator.svg?branch=master)](https://secure.travis-ci.org/zendframework/zend-hydrator) [![Coverage Status](https://coveralls.io/repos/zendframework/zend-hydrator/badge.svg?branch=master)](https://coveralls.io/r/zendframework/zend-hydrator?branch=master) `Zend\Hydrator` provides utilities for mapping arrays to objects, and vice versa, including facilities for filtering which data is mapped as well as providing mechanisms for mapping nested structures. - File issues at https://github.com/zendframework/zend-hydrator/issues - Documentation is at http://framework.zend.com/manual/current/en/index.html#zend-stdlib
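Conceptually, a hydrator is just a two-way mapping between arrays and objects. The sketch below illustrates the idea in Python for brevity — it is a made-up analogue, not zend-hydrator's PHP API:

```python
class ObjectPropertyHydrator:
    """Minimal hydrator: copies dict keys onto object attributes and back."""

    def hydrate(self, data, obj):
        # Map an array (dict) onto an object, key by key.
        for key, value in data.items():
            setattr(obj, key, value)
        return obj

    def extract(self, obj):
        # Map an object's public attributes back to a dict.
        return {k: v for k, v in vars(obj).items() if not k.startswith("_")}


class User:
    def __init__(self):
        self.name = None
        self.email = None


hydrator = ObjectPropertyHydrator()
user = hydrator.hydrate({"name": "Ada", "email": "ada@example.com"}, User())
assert user.name == "Ada"
assert hydrator.extract(user) == {"name": "Ada", "email": "ada@example.com"}
```

The real library layers filtering and naming strategies on top of this basic hydrate/extract pair, which is what the feature list above refers to.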
58.25
166
0.802575
eng_Latn
0.639217
76083c9ca83740e82cf78626b563f31bfa22139f
1,595
md
Markdown
docs/framework/unmanaged-api/metadata/cormanifestresourceflags-enumeration.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/metadata/cormanifestresourceflags-enumeration.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/metadata/cormanifestresourceflags-enumeration.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: CorManifestResourceFlags Enumeration ms.date: 03/30/2017 api_name: - CorManifestResourceFlags api_location: - mscoree.dll api_type: - COM f1_keywords: - CorManifestResourceFlags helpviewer_keywords: - CorManifestResourceFlags enumeration [.NET Framework metadata] ms.assetid: 1b0306b7-622b-4b57-8edc-3c713bb147ae topic_type: - apiref ms.openlocfilehash: f8334cb44042e21c086bc05c723e99b0c079fa2c ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 11/24/2020 ms.locfileid: "95677057" --- # <a name="cormanifestresourceflags-enumeration"></a>CorManifestResourceFlags Enumeration Indicates the visibility of resources encoded in an assembly manifest. ## <a name="syntax"></a>Syntax ```cpp typedef enum CorManifestResourceFlags { mrVisibilityMask = 0x0007, mrPublic = 0x0001, mrPrivate = 0x0002 } CorManifestResourceFlags; ``` ## <a name="members"></a>Members |Member|Description| |------------|-----------------| |`mrVisibilityMask`|Reserved.| |`mrPublic`|The resources are public.| |`mrPrivate`|The resources are private.| ## <a name="requirements"></a>Requirements **Platforms:** See [System Requirements](../../get-started/system-requirements.md). **Header:** CorHdr.h **.NET Framework versions:**[!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)] ## <a name="see-also"></a>See also - [Metadata enumerations](metadata-enumerations.md)
27.033898
109
0.702821
por_Latn
0.243447
760850e5ba8d05439ea7181e58aa7fd3d6ec87be
114
md
Markdown
README.md
take9i/landscapes_viewer
bbbb3b78bd902cb205ad0375e30a24de225e7c7f
[ "MIT" ]
null
null
null
README.md
take9i/landscapes_viewer
bbbb3b78bd902cb205ad0375e30a24de225e7c7f
[ "MIT" ]
null
null
null
README.md
take9i/landscapes_viewer
bbbb3b78bd902cb205ad0375e30a24de225e7c7f
[ "MIT" ]
null
null
null
# Landscapes Viewer This repository is a viewer for data created at https://github.com/take9i/landscapes_builder
28.5
92
0.815789
eng_Latn
0.981884
7608b8adce0c01158f4a00b952b73190e4a57114
1,670
md
Markdown
csharp-intermediate-bowling.md
williambogaiv/code-katas
e41ddd7c0dd706b5195d20353f0c641f3b3431b9
[ "BSD-3-Clause" ]
null
null
null
csharp-intermediate-bowling.md
williambogaiv/code-katas
e41ddd7c0dd706b5195d20353f0c641f3b3431b9
[ "BSD-3-Clause" ]
null
null
null
csharp-intermediate-bowling.md
williambogaiv/code-katas
e41ddd7c0dd706b5195d20353f0c641f3b3431b9
[ "BSD-3-Clause" ]
null
null
null
A Bowling We Will Go ======= ### Language *C#* ### Difficulty *Intermediate* - this is due to the expectation of having multiple projects within the solution and knowledge of ASP.NET MVC and writing tests. ### Abstract We have been requisitioned to build an app. for scoring a game of bowling for one player. The player is not expected to know the various rules for scoring between frames - they just want to knock down pins! The app. should be web-driven and allow for entering the number of pins knocked down per frame. Additionally, the player's overall tally should be easily visible at all times. Once the game is over, the player's total should be visible and the player should be given the option to start a new game. ### Goals - Learn how to score one game of bowling. - Learn how to keep state in a stateless environment (HTTP). - Learn how to write tests against the app.'s logic. ### Base Requirements - App. is written in C#. - Uses ASP.NET MVC5 for UI. - Valid tests for scoring are necessary. - if the player's first throw is three and the second is five, then the test should confirm the current score is eight. - complete test-coverage is not the goal. - UI needs to maintain the game tally for all frames. - UI needs to allow a way to enter pins knocked down per throw. ### Possible Enhancements - Add an API-driven back-end for logic - this could mean having a dynamic client making AJAX requests or just calling the API-endpoints from the MVC-endpoints. - Aim for eighty-five percent or higher test-coverage. - Add graphics to represent spares, strikes, and turkeys. - Provide styling for a print-out of the game scorecard.
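The kata itself calls for C#, but the scoring rule the tests must capture (strikes and spares borrow pins from later throws, and an in-progress game scores only what is known so far) can be sketched language-neutrally. The Python below is an illustrative reference scorer under that reading, not part of the kata's required solution; `score` is an invented name.

```python
def score(rolls):
    """Score a complete or in-progress game from a list of pins per roll."""
    total, i = 0, 0
    for _ in range(10):                  # a game has at most ten frames
        if i >= len(rolls):
            break                        # game still in progress
        if rolls[i] == 10:               # strike: 10 + next two rolls
            total += 10 + sum(rolls[i + 1:i + 3])
            i += 1
        elif i + 1 < len(rolls) and rolls[i] + rolls[i + 1] == 10:
            # spare: 10 + the first roll of the next frame (if thrown yet)
            total += 10 + (rolls[i + 2] if i + 2 < len(rolls) else 0)
            i += 2
        else:                            # open frame (or frame half-done)
            total += sum(rolls[i:i + 2])
            i += 2
    return total
```

This directly encodes the kata's example test: after throws of three and five, the running score is eight, while a perfect game of twelve strikes totals 300.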
49.117647
501
0.74491
eng_Latn
0.999769
7608c10a36c5491fdf9e93af068e4c689f355cb0
1,622
md
Markdown
docs/profiling/profiling-tools-apis.md
HiDeoo/visualstudio-docs.fr-fr
db4174a3cd6d03edc8bbf5744c3f917e4b582cb3
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/profiling/profiling-tools-apis.md
HiDeoo/visualstudio-docs.fr-fr
db4174a3cd6d03edc8bbf5744c3f917e4b582cb3
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/profiling/profiling-tools-apis.md
HiDeoo/visualstudio-docs.fr-fr
db4174a3cd6d03edc8bbf5744c3f917e4b582cb3
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: API des outils de profilage | Microsoft Docs ms.date: 11/04/2016 ms.topic: conceptual ms.assetid: bd9ca972-e5bf-45a6-9a5d-ac30a4d9ee02 author: mikejo5000 ms.author: mikejo manager: jillfra monikerRange: vs-2017 ms.workload: - multiple ms.openlocfilehash: ed4fef32c04d68f37df71101447a4e24f39eecba ms.sourcegitcommit: 6cfffa72af599a9d667249caaaa411bb28ea69fd ms.translationtype: MT ms.contentlocale: fr-FR ms.lasthandoff: 09/02/2020 ms.locfileid: "74772075" --- # <a name="profiling-tools-apis"></a>API des outils de profilage Vous pouvez insérer des méthodes managées ou natives des API des outils de profilage [!INCLUDE[vsprvs](../code-quality/includes/vsprvs_md.md)] pour contrôler la collecte de données lors d’une exécution du profilage. Cette section décrit les méthodes des API et explique comment les utiliser. ## <a name="in-this-section"></a>Contenu de cette section [Informations de référence sur l’API du profileur Visual Studio (natif)](../profiling/visual-studio-profiler-api-reference-native.md) Décrit les méthodes des outils de profilage C++. [Profiler](/previous-versions/ms242704(v=vs.140)) Décrit les méthodes des outils de profilage .NET. [Procédure pas à pas : utilisation des API du profileur](../profiling/walkthrough-using-profiler-apis.md) Découvrez comment utiliser les méthodes des outils de profilage .NET dans cet exemple complet. ## <a name="related-sections"></a>Sections connexes [Contrôler la collecte des données](../profiling/controlling-data-collection.md) ## <a name="see-also"></a>Voir aussi - [Explorateur de performances](../profiling/performance-explorer.md)
36.863636
291
0.784834
fra_Latn
0.630017
7609aa5cb8927a6d8bb6cee82caf22f27611baab
2,755
md
Markdown
docs/framework/data/adonet/ef/how-to-navigate-relationships-with-the-navigate-operator.md
adamsitnik/docs.cs-cz
7c534ad2e48aa0772412dc0ecf04945c08fa4211
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/data/adonet/ef/how-to-navigate-relationships-with-the-navigate-operator.md
adamsitnik/docs.cs-cz
7c534ad2e48aa0772412dc0ecf04945c08fa4211
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/data/adonet/ef/how-to-navigate-relationships-with-the-navigate-operator.md
adamsitnik/docs.cs-cz
7c534ad2e48aa0772412dc0ecf04945c08fa4211
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Postupy: Procházení relací pomocí navigačního operátoru' ms.date: 03/30/2017 dev_langs: - csharp - vb ms.assetid: 79996d2d-9b03-4a9d-82cc-7c5e7c2ad93d ms.openlocfilehash: dca8c25babcbc1676552af8ef49b7a7e71cd6136 ms.sourcegitcommit: 205b9a204742e9c77256d43ac9d94c3f82909808 ms.translationtype: MT ms.contentlocale: cs-CZ ms.lasthandoff: 09/10/2019 ms.locfileid: "70854525" --- # <a name="how-to-navigate-relationships-with-the-navigate-operator"></a>Postupy: Procházení relací pomocí navigačního operátoru Toto téma ukazuje, jak spustit příkaz pro koncepční model pomocí <xref:System.Data.EntityClient.EntityCommand> objektu a jak <xref:System.Data.Metadata.Edm.RefType> načíst výsledky pomocí <xref:System.Data.EntityClient.EntityDataReader>. ### <a name="to-run-the-code-in-this-example"></a>Spuštění kódu v tomto příkladu 1. Přidejte do svého projektu [model AdventureWorks Sales](https://github.com/Microsoft/sql-server-samples/releases/tag/adventureworks) a nakonfigurujte projekt tak, aby používal Entity Framework. Další informace najdete v tématu [jak: Použijte průvodce](https://docs.microsoft.com/previous-versions/dotnet/netframework-4.0/bb738677(v=vs.100))model EDM (Entity Data Model). 2. Na kódové stránce vaší aplikace přidejte následující `using` příkazy (`Imports` v Visual Basic): [!code-csharp[DP EntityServices Concepts#Namespaces](../../../../../samples/snippets/csharp/VS_Snippets_Data/dp entityservices concepts/cs/source.cs#namespaces)] [!code-vb[DP EntityServices Concepts#Namespaces](../../../../../samples/snippets/visualbasic/VS_Snippets_Data/dp entityservices concepts/vb/source.vb#namespaces)] ## <a name="example"></a>Příklad Následující příklad ukazuje, jak navigovat relace v [!INCLUDE[esql](../../../../../includes/esql-md.md)] nástroji pomocí operátoru [Navigace](./language-reference/navigate-entity-sql.md) . `Navigate` Operátor přebírá následující parametry: instance entity, typ vztahu, konec relace a začátek relace. 
Volitelně můžete `Navigate` operátorovi předat jenom instanci entity a typ vztahu. [!code-csharp[DP EntityServices Concepts#NavigateWithNavOperatorWithEntityCommand](../../../../../samples/snippets/csharp/VS_Snippets_Data/dp entityservices concepts/cs/source.cs#navigatewithnavoperatorwithentitycommand)] [!code-vb[DP EntityServices Concepts#NavigateWithNavOperatorWithEntityCommand](../../../../../samples/snippets/visualbasic/VS_Snippets_Data/dp entityservices concepts/vb/source.vb#navigatewithnavoperatorwithentitycommand)] ## <a name="see-also"></a>Viz také: - [Zprostředkovatel EntityClient pro Entity Framework](entityclient-provider-for-the-entity-framework.md) - [Jazyk Entity SQL](./language-reference/entity-sql-language.md)
74.459459
385
0.783303
ces_Latn
0.875546
7609ee4ab928ae32d6232640c2f440e590cfdba8
33
md
Markdown
README.md
utrumo/my-chat
54aec485385354acdbb281d0b0f3e1478411f4ef
[ "Apache-2.0" ]
null
null
null
README.md
utrumo/my-chat
54aec485385354acdbb281d0b0f3e1478411f4ef
[ "Apache-2.0" ]
5
2021-03-10T18:46:07.000Z
2022-02-27T05:00:29.000Z
README.md
utrumo/my-chat
54aec485385354acdbb281d0b0f3e1478411f4ef
[ "Apache-2.0" ]
null
null
null
# my-chat My chat implementation
11
22
0.787879
eng_Latn
0.914906
760a688dfc972c41dad73e06b207886b5b15939a
861
md
Markdown
sdk/resourcemanager/servicebus/armservicebus/CHANGELOG.md
aysabzevar/azure-sdk-for-go
e6b06d2dd24f6ef9ae69fcf2e424d60a77fe3d57
[ "MIT" ]
2
2021-10-12T06:53:01.000Z
2021-11-09T10:46:45.000Z
sdk/resourcemanager/servicebus/armservicebus/CHANGELOG.md
aysabzevar/azure-sdk-for-go
e6b06d2dd24f6ef9ae69fcf2e424d60a77fe3d57
[ "MIT" ]
1,015
2019-07-17T16:19:06.000Z
2021-07-29T17:12:08.000Z
sdk/resourcemanager/servicebus/armservicebus/CHANGELOG.md
aysabzevar/azure-sdk-for-go
e6b06d2dd24f6ef9ae69fcf2e424d60a77fe3d57
[ "MIT" ]
2
2020-11-23T10:25:48.000Z
2021-12-09T03:09:34.000Z
# Release History ## 0.2.1 (Unreleased) ### Features Added ### Breaking Changes ### Bugs Fixed ### Other Changes ## 0.2.0 (2021-10-29) ### Breaking Changes - `arm.Connection` has been removed in `github.com/Azure/azure-sdk-for-go/sdk/azcore/v0.20.0` - The parameters of `NewXXXClient` has been changed from `(con *arm.Connection, subscriptionID string)` to `(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions)` ## 0.1.0 (2021-10-08) - To better align with the Azure SDK guidelines (https://azure.github.io/azure-sdk/general_introduction.html), we have decided to change the module path to "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/servicebus/armservicebus". Therefore, we are deprecating the old module path (which is "github.com/Azure/azure-sdk-for-go/sdk/servicebus/armservicebus") to avoid confusion.
39.136364
383
0.753775
eng_Latn
0.657572
760a87ccb7c08da3f9e6aa65c485f8db0c56236a
2,896
md
Markdown
DevCenter/Node/LandingPages/common-tasks.md
pablissima/azure-content
75ceff54eb17131a78791dfa89c02a7f5250e41c
[ "CC-BY-3.0" ]
1
2019-04-22T16:41:52.000Z
2019-04-22T16:41:52.000Z
DevCenter/Node/LandingPages/common-tasks.md
pablissima/azure-content
75ceff54eb17131a78791dfa89c02a7f5250e41c
[ "CC-BY-3.0" ]
null
null
null
DevCenter/Node/LandingPages/common-tasks.md
pablissima/azure-content
75ceff54eb17131a78791dfa89c02a7f5250e41c
[ "CC-BY-3.0" ]
null
null
null
# Node.js Developer Center - Common tasks ## Deployment ### [Publishing with Git] Git is a popular, open source, distributed version control system. Windows Azure Web Sites allow you to enable a Git repository for your site, which allows you to quickly and easily push code changes to your site. This common task provides details about how to get started using Git with Windows Azure. ### [Staging a Cloud Service] Learn how to stage a new version of an application to a Windows Azure Cloud Service, and then deploy from staging to production. ## Configuration ### [Configuring a Custom Domain Name in Windows Azure] By default, Windows Azure applications and storage accounts can be accessed through friendly subdomains, for example, http://&lt;myapp&gt;.cloudapp.net and https://&lt;mydata&gt;.blob.core.windows.net. This article shows how you can expose your application and data on your own custom domain, such as http://&lt;myapp&gt;.com. ### [Enabling Remote Desktop in Windows Azure] Remote Desktop enables you to access the desktop of a role instance running in Windows Azure. You can use a remote desktop connection to configure the virtual machine or troubleshoot problems with your application. **Note:** This topic applies only to Cloud Services. ### [Configuring SSL for an Application in Windows Azure] Secure Socket Layer (SSL) encryption is the most commonly used method of securing data sent across the internet. This common task discusses how to specify an HTTPS endpoint for a web role and how to upload an SSL certificate to secure your application. ### [Using CDN for Windows Azure] The Windows Azure Content Delivery Network (CDN) offers a global solution for delivering high-bandwidth content by caching blobs and static content at physical nodes around the world. This common task describes how to enable CDN and add, access, and delete content.
[Publishing with Git]: /en-us/develop/nodejs/common-tasks/publishing-with-git/ [Continuous Delivery with Team Foundation Service]: /en-us/develop/nodejs/common-tasks/continuous-delivery-service/ [Continuous Delivery for Cloud Services with Team Foundation Server]: /en-us/develop/nodejs/common-tasks/continuous-delivery/ [Managing Windows Azure SQL Database using SQL Server Management Studio]: /en-us/develop/nodejs/common-tasks/sql-azure-management/ [Configuring a Custom Domain Name in Windows Azure]: /en-us/develop/nodejs/common-tasks/enable-custom-dns/ [Enabling Remote Desktop in Windows Azure]: /en-us/develop/nodejs/common-tasks/enable-remote-desktop/ [Configuring SSL for an Application in Windows Azure]: /en-us/develop/nodejs/common-tasks/enable-ssl/ [Using CDN for Windows Azure]: /en-us/develop/nodejs/common-tasks/cdn/ [Using performance counters in Windows Azure]: /en-us/develop/nodejs/common-tasks/profiling/ [Staging a Cloud Service]: /en-us/develop/nodejs/common-tasks/enable-staging-deployment/
85.176471
329
0.794544
eng_Latn
0.964833
760ab287b6dda48d529e059156c852ec4af69e70
1,747
md
Markdown
docs/visual-basic/misc/bc32085.md
emrekas/docs.tr-tr
027bd2c6c93900a75cac7ac42531c89085f87888
[ "CC-BY-4.0", "MIT" ]
1
2020-01-06T07:30:24.000Z
2020-01-06T07:30:24.000Z
docs/visual-basic/misc/bc32085.md
emrekas/docs.tr-tr
027bd2c6c93900a75cac7ac42531c89085f87888
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc32085.md
emrekas/docs.tr-tr
027bd2c6c93900a75cac7ac42531c89085f87888
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "'New' bir tür parametresiyle kullanılan bağımsız değişkenler geçirilemez." ms.date: 07/20/2015 f1_keywords: - BC32085 - vbc32085 helpviewer_keywords: - BC32085 ms.assetid: a60bf62d-2b2e-4621-b8db-e67720b918fb ms.openlocfilehash: 8ff99430f4cff414e9155054fbfb140da2dae871 ms.sourcegitcommit: 2701302a99cafbe0d86d53d540eb0fa7e9b46b36 ms.translationtype: MT ms.contentlocale: tr-TR ms.lasthandoff: 04/28/2019 ms.locfileid: "64618657" --- # <a name="arguments-cannot-be-passed-to-a-new-used-on-a-type-parameter"></a>'New' bir tür parametresiyle kullanılan bağımsız değişkenler geçirilemez. Bir bildirim veya atama ifadesi bir genel tür çağırır ve oluşturucu bağımsız değişkenleri olan bir tür parametresine sağlayan [New işleci](../../visual-basic/language-reference/operators/new-operator.md) kısıtlaması. Bir tür parametresi kısıtlaması listede, bu tür parametresi için geçirilen tür bağımsız değişkeni oluşturma kodunu erişip parametresiz bir oluşturucu kullanıma açmalıdır belirtebilirsiniz. Bir tür parametresi, parametre ve bir tür parametreyle alan bir oluşturucu gerekli kılamazsınız `New` sınırlama, bu tür bir oluşturucuya kabul edemez. **Hata Kimliği:** BC32085 ## <a name="to-correct-this-error"></a>Bu hatayı düzeltmek için - Tür bağımsız değişkeni genel tür çağırma deyiminde aşağıdaki bağımsız değişken listesi kaldırın. Oluşturucu bağımsız değişkenleri karşılık gelen tür parametresi için geçirilemez. ## <a name="see-also"></a>Ayrıca bkz. - [Visual Basic'de genel türler](../../visual-basic/programming-guide/language-features/data-types/generic-types.md) - [Değer Türleri ve Başvuru Türleri](../../visual-basic/programming-guide/language-features/data-types/value-types-and-reference-types.md)
54.59375
342
0.797367
tur_Latn
0.999139
760acaf07679a6ad491eb63284eb1430bc29ff39
1,538
md
Markdown
docs/release-notes/NuGet-2.9-RC.md
DalavanCloud/docs.microsoft.com-nuget.it-it
a64229582afdd0a42917dd2542051c62a33d884c
[ "MIT" ]
1
2019-01-05T03:19:42.000Z
2019-01-05T03:19:42.000Z
docs/release-notes/NuGet-2.9-RC.md
DalavanCloud/docs.microsoft.com-nuget.it-it
a64229582afdd0a42917dd2542051c62a33d884c
[ "MIT" ]
null
null
null
docs/release-notes/NuGet-2.9-RC.md
DalavanCloud/docs.microsoft.com-nuget.it-it
a64229582afdd0a42917dd2542051c62a33d884c
[ "MIT" ]
null
null
null
--- title: Note sulla versione 2.9-RC di NuGet description: Note sulla versione per NuGet 2.9 RC, tra cui i problemi noti, correzioni di bug, funzionalità aggiunte e dcr. author: karann-msft ms.author: karann ms.date: 11/11/2016 ms.topic: conceptual ms.openlocfilehash: 17c1c3a0c91928602aa47b5ba599faeac0424a4a ms.sourcegitcommit: 1d1406764c6af5fb7801d462e0c4afc9092fa569 ms.translationtype: MT ms.contentlocale: it-IT ms.lasthandoff: 09/04/2018 ms.locfileid: "43548324" --- # <a name="nuget-29-rc-release-notes"></a>Note sulla versione 2.9-RC di NuGet [Note sulla versione di NuGet 2.8.7](../release-notes/nuget-2.8.7.md) | [note sulla versione di anteprima di NuGet 3.0](../release-notes/nuget-3.0-preview.md) 2.9 di NuGet è stata rilasciata il 10 settembre 2015 come un aggiornamento per il 2.8.7 VSIX per Visual Studio 2012 e 2013. ### <a name="updates-in-this-release"></a>Aggiornamenti in questa versione * A questo punto verrà ignorata l'elaborazione pacchetti se la classe contenuta `.nuspec` documento non è valido - [PR8](https://github.com/NuGet/NuGet2/pull/8) * Corretta gestione multipartwebrequest di \r\n per gli scenari di Unix/Linux - [776](https://github.com/NuGet/Home/issues/776) * Corretta integrazione con gli eventi di compilazione in Visual Studio 2013 Community edition - [1180](https://github.com/NuGet/Home/issues/1180) L'elenco completo di correzioni disponibili in questa versione è reperibile in GitHub nel [2.8.8 attività cardine](https://github.com/NuGet/Home/issues?q=milestone%3A2.8.8+is%3Aclosed)
53.034483
184
0.775033
ita_Latn
0.912581
760b01d760a89ac26ed1c73da14060832103560e
1,202
md
Markdown
bash4.md
panUFO/SP
98f0b2410fef2f33a578b6ee184bf4e46631670c
[ "MIT" ]
null
null
null
bash4.md
panUFO/SP
98f0b2410fef2f33a578b6ee184bf4e46631670c
[ "MIT" ]
null
null
null
bash4.md
panUFO/SP
98f0b2410fef2f33a578b6ee184bf4e46631670c
[ "MIT" ]
2
2021-01-04T17:35:36.000Z
2021-04-29T07:29:20.000Z
### Lab 4 1\. Display the list of files in the current directory, converting all lowercase letters to uppercase. ```sh ls | tr [:lower:] [:upper:] ``` ```sh ls -1F | sed -e '/[/]/d' | tr 'a-z' 'A-Z' ``` 2\. Display the access permissions of the files in the current directory, together with their size and name. ```sh ls -l ``` ```sh ls -hgoF | tr -s " " | cut -f 1,3,7 -d " " | tr " " "\t" | sed -e '/[/]/d' ``` 3\. Display the list of files in the current directory, sorted by file size. ```sh ls -l -S ``` ```sh ls -h1lS | sed -e '/^d/d' | tr -s " " | cut -f 5,9 -d " " | tac ``` 4\. Display the contents of */etc/passwd* sorted by UID in descending order. ```sh cd /etc cat passwd | sort -r -t : -k 3 ``` 5\. Display the contents of */etc/passwd* sorted first by GID in descending order, and then by UID. ```sh cd /etc cat passwd | sort -r -t : -k 4 -k 3 ``` 6\. Report the number of files owned by each user. ```sh find / -printf '%u\n' 2>/dev/null | sort | uniq -c ``` 7\. Produce statistics of the access permissions (for each permission mode, report how many times it has been granted). ```sh find -printf "%m\n" | sort | uniq -c ```
24.04
146
0.637271
pol_Latn
0.999724
760b033814019995c177004c2d0b482507293050
4,711
md
Markdown
Using_Seasonal-Forecasts/february_forecasts/README.md
achevuturi/PEACFLOW_Manaus-flood-forecasting
f92e2a554a23b29734679721f0dcc630ff8e971b
[ "BSD-3-Clause" ]
1
2022-01-12T14:46:06.000Z
2022-01-12T14:46:06.000Z
Using_Seasonal-Forecasts/february_forecasts/README.md
achevuturi/PEACFLOW_Manaus-flood-forecasting
f92e2a554a23b29734679721f0dcc630ff8e971b
[ "BSD-3-Clause" ]
null
null
null
Using_Seasonal-Forecasts/february_forecasts/README.md
achevuturi/PEACFLOW_Manaus-flood-forecasting
f92e2a554a23b29734679721f0dcc630ff8e971b
[ "BSD-3-Clause" ]
null
null
null
**Flood forecasting for Negro River at Manaus using seasonal forecasts for February lead-time** ! ----------------------------------------------------------COPYRIGHT----------------------------------------------------------\ ! (C) Copyright 2021 University of Reading. All rights reserved.\ ! ----------------------------------------------------------COPYRIGHT----------------------------------------------------------\ !\ ! This file is part of the CSSP Brazil PEACFLOW Project. \ !\ ! Please include the following form of acknowledgement in any presentations/publications\ ! that use any of the code stored in this repository:\ ! *"The development of PEACFLOW_Manaus-flood-forecasting repository*\ ! *(https://github.com/achevuturi/PEACFLOW_Manaus-flood-forecasting)* \ ! *was supported by the Newton Fund through the Met Office*\ ! *Climate Science for Service Partnership Brazil (CSSP Brazil)*\ ! *and was developed at University of Reading."*\ !\ ! The CSSP Brazil PEACFLOW Project is free software: you can redistribute it and/or modify\ ! it under the terms of the Modified BSD License,\ ! as published by the Open Source Initiative.\ !\ ! The CSSP Brazil PEACFLOW Project is distributed in the hope that it will be useful,\ ! but WITHOUT ANY WARRANTY; without even the implied warranty\ ! of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Modified BSD License for more details.\ !\ ! For a copy of the Modified BSD License \ ! please see <http://opensource.org/licenses/BSD-3-Clause>.\ ! ----------------------------------------------------------------------------------------------------------------------------- **Description:** This module contains models for forecasting the maximum water level for the Negro River at Manaus for any year from 2017 onwards, using a combination of observations and ECMWF seasonal forecasts as input.
The required input data files are available in the parent directory (https://github.com/achevuturi/PEACFLOW_Manaus-flood-forecasting/tree/master/Using_Seasonal-Forecasts). This sub-folder has one model that provides forecasts at February lead-time for each year. **Modules:** This sub-folder contains the following files: - *calculate_amo_index.py:* Python script that calculates the ECMWF AMO index from the downloaded ECMWF SST forecast NetCDF file. - *read_amo_index.py:* Python script to read the AMO index from the downloaded amo.txt to get the monthly AMO index (as in https://github.com/achevuturi/PEACFLOW_Manaus-flood-forecasting/tree/master/Using_Observations) - *feb_forecast_data_download.py:* Python script that downloads the ECMWF February forecast for the forecast year over the Amazon region. This script downloads two NetCDF files, one for total-precipitation and another for sea-surface-temperature. - *model_uncertainity.py:* Python script that calculates the model uncertainty for the given forecast using the whole ensemble forecast. The scale for the model uncertainty is derived from the errors over the validation period. - *feb_forecast_output.py:* Python script that calculates the output using the downloaded observed and ECMWF forecast data and model information (https://github.com/achevuturi/PEACFLOW_Manaus-flood-forecasting/blob/master/Using_Seasonal-Forecasts/obs_forecast_model_mar.pkl). It also uses data from the numpy files (.npz) stored in https://github.com/achevuturi/PEACFLOW_Manaus-flood-forecasting/tree/master/Using_Seasonal-Forecasts. - *feb_forecast.sh:* This is the main script that downloads the data, runs the model, gives/saves the output and then deletes the downloaded data. **Execution:** The model works by running the shell script for the forecast using the example command **bash feb_forecast.sh** OR **./feb_forecast.sh** OR **source feb_forecast.sh**.
After this command, the user needs to provide the year for which the forecast is required (2017 onwards) when prompted by the script to complete the run. The first step of this model is to delete any old ensemble forecasts saved as a .csv file. **Output:** The ensemble mean forecast of the maximum water level of the Negro River at Manaus (in meters) is printed on the command line, and the ensemble forecast is saved as an output .csv file. The uncertainty bounds of the forecast (5<sup>th</sup> -- 95<sup>th</sup> percentile range) are also printed out. All the downloaded data is then deleted. **Citation:** Users who apply the code resulting in presentations/publications are kindly asked to cite the publication below:\ *Chevuturi A, Klingaman NP, Woolnough SJ, Rudorff CM, Coelho CAS, Schongart J (2021) Extending forecast lead time for annual maximum water level at Manaus using seasonal forecasts. Climate Resilience and Sustainability, in prep.*
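The quantities the output section describes, an ensemble mean plus 5th-95th percentile uncertainty bounds, can be reduced from a saved ensemble in a few lines. The sketch below is illustrative only and is not code from this repository; `summarize_ensemble` and the nearest-rank percentile rule are assumptions.

```python
import statistics


def summarize_ensemble(levels):
    """Reduce an ensemble of forecast water levels (metres) to an
    ensemble mean and 5th/95th percentile uncertainty bounds."""
    ordered = sorted(levels)
    n = len(ordered)

    def percentile(p):
        # Simple nearest-rank percentile; adequate for a sketch.
        k = max(0, min(n - 1, round(p / 100 * (n - 1))))
        return ordered[k]

    return {
        "mean": statistics.mean(ordered),
        "p05": percentile(5),
        "p95": percentile(95),
    }
```

Feeding it the ensemble members read back from the saved .csv would yield the headline number (the mean) and the printed percentile range in one pass.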
98.145833
660
0.732329
eng_Latn
0.985516
760b38b3f295e49f51abe79f3bdbe019b9ad3437
1,156
md
Markdown
CHANGELOG.md
achingbrain/observable-webworkers
e1d8039bca872a979a5daecb9cf4fad4652c31c7
[ "Apache-2.0", "MIT" ]
null
null
null
CHANGELOG.md
achingbrain/observable-webworkers
e1d8039bca872a979a5daecb9cf4fad4652c31c7
[ "Apache-2.0", "MIT" ]
2
2022-02-08T11:21:06.000Z
2022-02-09T06:41:06.000Z
CHANGELOG.md
achingbrain/observable-webworkers
e1d8039bca872a979a5daecb9cf4fad4652c31c7
[ "Apache-2.0", "MIT" ]
null
null
null
### [2.0.1](https://github.com/achingbrain/observable-webworkers/compare/v2.0.0...v2.0.1) (2022-02-09) ### Bug Fixes * update listener type to allow typing data field ([#2](https://github.com/achingbrain/observable-webworkers/issues/2)) ([e351de3](https://github.com/achingbrain/observable-webworkers/commit/e351de3bd95bbea3f436d1b7e6bac422830dd06f)) ## [2.0.0](https://github.com/achingbrain/observable-webworkers/compare/v1.0.0...v2.0.0) (2022-02-08) ### ⚠ BREAKING CHANGES * switch to named exports, ESM only ### Features * convert to typescript ([#1](https://github.com/achingbrain/observable-webworkers/issues/1)) ([aec9446](https://github.com/achingbrain/observable-webworkers/commit/aec9446f1dcbb9a02142f826f6410eff8b461b00)) ### Trivial Changes * add release script ([e1610a8](https://github.com/achingbrain/observable-webworkers/commit/e1610a879cc67703bf51ad96e522cc879dcc5de3)) * ignore dist ([3c32bd2](https://github.com/achingbrain/observable-webworkers/commit/3c32bd2bf7b68fb4268e1ca17545e0d01ec5127e)) * remove dist ([4b6efcc](https://github.com/achingbrain/observable-webworkers/commit/4b6efccbe03e8e261138afd27d73e7ffd93beb54))
46.24
233
0.782872
yue_Hant
0.269208
760c267736f3cd01213b47969cf57a5e85d978fe
5,688
md
Markdown
README.md
gprezza/MMM-RVV
435ba66d9e537511b86ac5ba29e566c1e626e110
[ "MIT" ]
null
null
null
README.md
gprezza/MMM-RVV
435ba66d9e537511b86ac5ba29e566c1e626e110
[ "MIT" ]
null
null
null
README.md
gprezza/MMM-RVV
435ba66d9e537511b86ac5ba29e566c1e626e110
[ "MIT" ]
null
null
null
# MMM-BFA Departure monitor for the local public transport (ÖPNV) of Bavaria. The data is fetched from [bayern-fahrplan.de](https://bayern-fahrplan.de "Bayern Fahrplan"), the distributor for public transport data in Bavaria. The module scrapes the departure data without requiring any API key or special permission and offers a bunch of options you can play around with. Feel free to contribute! The structure and layout of this MagicMirror module was inspired by [MMM-KVV](https://github.com/yo-less/MMM-KVV "Karlsruhe Public Transport"). ## Screenshots German (1):<p> ![German version (1)](res/screenshot_de.png)<p> German (2):<p> ![German version (2)](res/screenshot_de_wue.png)<p> English:<p> ![English version (1)](res/screenshot_en.png)<p> English (2):<p> ![English version (2)](res/screenshot_en_wue.png) ## Languages MMM-BFA features language support for `German (de)` and `English (en)` mirrors. ## Prerequisite A working installation of [MagicMirror<sup>2</sup>](https://github.com/MichMich/MagicMirror). ## Dependencies * npm * [request](https://www.npmjs.com/package/request) * [cheerio](https://www.npmjs.com/package/cheerio) ## Installation 1. Navigate into your MagicMirror's modules folder. 2. Execute git clone https://github.com/sebikolon/MMM-BFA.git. 3. Execute cd MMM-BFA. 4. Execute `npm install`. ## Module behavior Please note that this module **auto-creates a module header** which displays the text that was defined in the module settings. It is therefore recommended not to add a 'header' entry to your config.js for this module.<P> There is a **progress loading bar** displayed that runs from the left to the right side of the module border, indicating when the next data refresh is performed. You can adjust the color of this loading bar in the module config. 
In order to adjust the look-and-feel more granularly, add an override to the CSS identifiers `.MMM-BFA #divReload` and `.MMM-BFA #divReloadWrapper`.<P> The **delay** of an upcoming trip is marked in red color (if there is any), otherwise in green color. If defined, additional trip information like *Trip cancelled* will be shown instead of the delay.<P> This module has been programmed to allow for **multiple instances**. Simply add more MMM-BFA config entries to your config.js file to display multiple stations and configure them according to your needs. ## Configuration You can show the MMM-BFA module without setting any configuration options.<BR>In this case, the stop `Regensburg University` is set as the default *stop_from_ID*. Sample configuration entries for your `~/MagicMirror/config/config.js` with optional parameters: * To show departures of a single trip (**from** a stop **to** another stop): ``` ... { module: 'MMM-BFA', position: 'bottom_left', config: { updateInterval : 30 * 1000, stop_from_ID: 4014080, stop_to: ["Klinikum", "Roter-Brach-Weg"], maximumTripsToShow: 10, titleText : "Universität Regensburg" } } // If this isn't your last module, add a comma after the bracket ... ``` * To show all departures from a stop omit the *stop_to* entry: ``` ... { module: "MMM-BFA", position: "top_right", config: { updateInterval : 30 * 1000, stop_from_ID: 3700440, maximumTripsToShow: 4, titleText : "Rathaus Würzburg" } } // If this isn't your last module, add a comma after the bracket ... ``` ## How to get the correct stopID 1. Open your web browser and navigate to [the txt version of bayern-fahrplan.de](https://txt.bayern-fahrplan.de/textversion/bcl_abfahrtstafel). 2. Click on "Abfahrtsmonitor anfordern" to reveal a search field. 3. Write the name of your stop (e.g. Regensburg Universität) and press enter. 4. Choose the right stop if more than one stop is found with your keywords (N.B. only choose actual stops, labeled with "[Haltestelle]"). 5. 
Click on "Abfahrtsmonitor anfordern" to show the next departures from that stop. 6. Once the departures are shown, look at the source code of the page (e.g. in Firefox: right click -> View Page Source) 7. Search the name of the stop in the source code. The stopID (7-digit number) is indicated e few characters after it, right after ```value=```. For example, for "Regensburg Universität" (stopID 4014080) we'll have: ``` <b>Regensburg Universität<span class="odvDesc"> [Haltestelle]</span><input type="hidden" name="name_dm" value="4014080"/> ```. ## Config Options | **Option** | **Default** | **Description** | | :---: | :---: | --- | | stop_from_ID | 4014080 |<BR>Which stop would you like to have displayed? <BR><EM> Default: University Regensburg</EM><P> | | stop_to<BR>`optional` | [] |<BR>Which directions do you want to include into your trip list?<BR>Put the names of the stops into the array, separated by comma<BR><EM>Default: Show all directions </EM><P> | | maximumTripsToShow<BR>`optional` | 5 |<BR>How many trips to you want to show in total (including all directions)?<BR>This is a maximum value. Probably there are less trips available then desired<P> | | logToConsole<BR>`optional` | false |<BR>Turn on the log onto the console (for debugging purposes)<BR><P> | | progressColor<BR>`optional` | #6db64b |<BR> Default color name (or RGB code) of the progress bar<BR><EM>Default: light green</EM><P> | | updateInterval<BR>`optional` | 30 * 1000 | <BR>How often should the trip data be refreshed (in milliseconds)?<BR><EM> Default: Every 30 seconds </EM><P> |
# Rename Music Files from Y2Meta

## How to use

- Clone this repo: `git clone https://github.com/lucasgdb/rename-music-files-y2meta.git`
- Type `yarn` to install its dependencies.
- Move the music files downloaded from y2meta.com into the `input` dir.
- Type `yarn convert` to rename the downloaded songs into the `output` dir.

## Example

from `y2meta.com - Luis Fonsi - Despacito ft. Daddy Yankee.mp3`

to `Luis Fonsi - Despacito ft. Daddy Yankee.mp3`

## Author

| [<img src="https://avatars3.githubusercontent.com/u/13838273?s=115&u=ebbd853c5f90c7be064e2ee643df722676e5d13e&v=4"><br><sub>@lucasgdb</sub>](https://github.com/lucasgdb) |
| :---: |

Feel free to open pull requests to improve it.
---
title: Attach files in Outlook on the web
ms.author: daeite
author: daeite
manager: joallard
ms.date: 04/21/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.custom: ''
ms.openlocfilehash: 03ca82aee3feee879f6c1273da2246c948f0f7f1292421848289a4630e552dee
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: he-IL
ms.lasthandoff: 08/05/2021
ms.locfileid: "53999961"
---
# <a name="how-to-attach-files-in-outlook-on-the-web"></a>How to attach files in Outlook on the web

1. At the bottom of a message or calendar event, select the paper clip icon <img src='data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABUAAAAVCAYAAACpF6WWAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAB3RJTUUH4QkaFhg7CMMZsgAAAAd0RVh0QXV0aG9yAKmuzEgAAAAMdEVYdERlc2NyaXB0aW9uABMJISMAAAAKdEVYdENvcHlyaWdodACsD8w6AAAADnRFWHRDcmVhdGlvbiB0aW1lADX3DwkAAAAJdEVYdFNvZnR3YXJlAF1w/zoAAAALdEVYdERpc2NsYWltZXIAt8C0jwAAAAh0RVh0V2FybmluZwDAG+aHAAAAB3RFWHRTb3VyY2UA9f+D6wAAAAh0RVh0Q29tbWVudAD2zJa/AAAABnRFWHRUaXRsZQCo7tInAAABSklEQVQ4je2VL6+CUBiHH+/uBo1yZiSRHYlgwUBx00whWf0edj+HJGDqBpnMnMlZYM7ACFKs3CQbm+Carf8ozXbmcxPxInLYmjRNg/p+fkbpoCrKosjz2sVaBc/HMB6FUwVRnkRBpSJEVpbEviRJlpWELcucA2FEijiLhElyRiZnkgpXiqJICCHanULMcQ5PVRVbFmVeJECTJLmf4rHZbGyA8zzPx1nWtm0UKZWrFRjTNLXZbGyguq77n8+32y1N06Zpmr3PMAzDt20bSZLked7HcRzLsszzvM/hOI5pmgZBkGVZ9bpv2/Z93zzP6TiO9u5HRPR9vyRJqqqqjuOwQRB8H8dxTdP0fZ9SStu2VVXVuq6iKDKO4y3LSnO9Xg+CQJZlz/NSSnVd55wzDKMoimVZHvF6vZ7n+TzP13U9z/M8779er+u6DsdxkiQty5IkSQRBkOe5KIr2er3Wdf0Lv9/v8zzv+36/3z8ej+M4nuc5z/N931VVZVn2fd/3fY7jkGU5TVN93/d9n67rqqqqqmoURRzHcRzHcRzPcRzXdR2GYUopfd+3bds0TfM8z/M8z/M8z/M8z/M8z/M8z/M8z/M8z/M8z/M8v98BqBSKBLcjBBkAAAAASUVORK5CYII=' /> .
1. Select **Browse this computer** to attach a file from your computer, or browse **a cloud location** to attach an online file.

To learn more, see [Attach files in Outlook on the web](https://support.office.com/article/48b8dca1-7a76-43ce-97d1-e1cf73893f55).
# LWJGUI
An LWJGL3-based JavaFX alternative for making user interfaces in Java. It can be incorporated into your already existing OpenGL project, or you can let LWJGUI manage your project's rendering.

# Why LWJGUI?
JavaFX simply does not have the capability to be incorporated into custom OpenGL projects. LWJGUI is different in that it does not take over all of the rendering code for your project. This allows it to handle your user interfaces while still leaving rendering up to you!

# Current Features
- Label
- Font
- Button / SegmentedButton / ToggleButton / ToggleSwitch
- CheckBox / RadioButton
- ComboBox
- Popup / ContextMenu
- MenuBar / ToolBar
- SplitPane
- Pane / StackPane / BorderPane / ScrollPane / TabPane
- OpenGLPane
- BlurPane
- TextArea / CodeArea
- TextField / PasswordField / SearchField
- HBox / VBox

# Screenshots
![Modena](http://magaimg.net/img/7gkq.png)
![PicControlExample](http://magaimg.net/img/7gkp.png)
![SyntaxHighlight](http://magaimg.net/img/7cr8.png)
![PicOpenGL](http://magaimg.net/img/7utn.png)
![ScrollSplitPane](https://i.imgur.com/EKVvWdP.png)
![Gears](http://magaimg.net/img/7ux5.png)
![TextArea](http://magaimg.net/img/7upk.png)
![TreeView](https://i.imgur.com/WZQxpvU.png)
![IDE](http://magaimg.net/img/7upi.png)

# Required Libraries
- [LWJGL3](https://www.lwjgl.org/)
- [JOML](https://github.com/JOML-CI/JOML) (Can download with [LWJGL](https://www.lwjgl.org/customize))
- [NanoVG](https://github.com/memononen/nanovg) (Can download with [LWJGL](https://www.lwjgl.org/customize))
- [TinyFD](https://github.com/native-toolkit/tinyfiledialogs) (Can download with [LWJGL](https://www.lwjgl.org/customize))

# Setup with gradle/maven/sbt/leiningen
1. Open https://jitpack.io/
2. Enter `orange451/LWJGUI` into the 'search' field ![screenshot](https://i.imgur.com/yq5SHBH.png "screenshot")
3. Press "Look up"
4. Follow the simple steps ![screenshot](https://i.imgur.com/pTpsNKv.png "screenshot")

# Projects That Use LWJGUI
- [Anarchy Engine - 3D Java/lua game engine](https://github.com/orange451/AnarchyEngine)
- [JDialogue - branching dialogue editor for video games](https://github.com/SkyAphid/JDialogue)

# Similar projects
- [Clear](https://github.com/SkyAphid/Clear/)
- [Aerial](https://github.com/LacombeJ/Aerial)
- [Legui](https://github.com/LiquidEngine/legui)
- [imgui](https://github.com/kotlin-graphics/imgui)
- [lwjgl3-swt](https://github.com/LWJGLX/lwjgl3-swt)

If you need to contact me, my Discord is: NovaStrat#2111

Feel free to send me any suggestions/things you need!
--- title: "Sample XSD File: Customers and Orders2 | Microsoft Docs" ms.custom: "" ms.date: "2015-07-20" ms.prod: .net ms.reviewer: "" ms.suite: "" ms.technology: - "devlang-visual-basic" ms.tgt_pltfrm: "" ms.topic: "article" dev_langs: - "VB" ms.assetid: a0c0b414-c8e1-45e4-bb67-b5e650c97130 caps.latest.revision: 3 author: dotnet-bot ms.author: dotnetcontent translation.priority.mt: - "cs-cz" - "pl-pl" - "pt-br" - "tr-tr" --- # Sample XSD File: Customers and Orders The following XSD file is used in various examples in the [!INCLUDE[sqltecxlinq](../../../../csharp/programming-guide/concepts/linq/includes/sqltecxlinq_md.md)] documentation. This file contains a schema definition for the [Sample XML File: Customers and Orders (LINQ to XML)](../../../../visual-basic/programming-guide/concepts/linq/sample-xml-file-customers-and-orders-linq-to-xml.md). The schema uses the `xs:key` and `xs:keyref` features of XSD to establish that the `CustomerID` attribute of the `Customer` element is a key, and to establish a relationship between the `CustomerID` element in each `Order` element and the `CustomerID` attribute in each `Customer` element. For an example of writing LINQ queries that take advantage of this relationship using the `Join` clause, see [How to: Join Two Collections (LINQ to XML) (Visual Basic)](../../../../visual-basic/programming-guide/concepts/linq/how-to-join-two-collections-linq-to-xml.md). 
## CustomersOrders.xsd ```xml <?xml version="1.0" encoding="utf-8" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name='Root'> <xs:complexType> <xs:sequence> <xs:element name='Customers'> <xs:complexType> <xs:sequence> <xs:element name='Customer' type='CustomerType' minOccurs='0' maxOccurs='unbounded' /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name='Orders'> <xs:complexType> <xs:sequence> <xs:element name='Order' type='OrderType' minOccurs='0' maxOccurs='unbounded' /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> <xs:key name='CustomerIDKey'> <xs:selector xpath='Customers/Customer'/> <xs:field xpath='@CustomerID'/> </xs:key> <xs:keyref name='CustomerIDKeyRef' refer='CustomerIDKey'> <xs:selector xpath='Orders/Order'/> <xs:field xpath='CustomerID'/> </xs:keyref> </xs:element> <xs:complexType name='CustomerType'> <xs:sequence> <xs:element name='CompanyName' type='xs:string'/> <xs:element name='ContactName' type='xs:string'/> <xs:element name='ContactTitle' type='xs:string'/> <xs:element name='Phone' type='xs:string'/> <xs:element name='Fax' minOccurs='0' type='xs:string'/> <xs:element name='FullAddress' type='AddressType'/> </xs:sequence> <xs:attribute name='CustomerID' type='xs:token'/> </xs:complexType> <xs:complexType name='AddressType'> <xs:sequence> <xs:element name='Address' type='xs:string'/> <xs:element name='City' type='xs:string'/> <xs:element name='Region' type='xs:string'/> <xs:element name='PostalCode' type='xs:string' /> <xs:element name='Country' type='xs:string'/> </xs:sequence> <xs:attribute name='CustomerID' type='xs:token'/> </xs:complexType> <xs:complexType name='OrderType'> <xs:sequence> <xs:element name='CustomerID' type='xs:token'/> <xs:element name='EmployeeID' type='xs:token'/> <xs:element name='OrderDate' type='xs:dateTime'/> <xs:element name='RequiredDate' type='xs:dateTime'/> <xs:element name='ShipInfo' type='ShipInfoType'/> </xs:sequence> </xs:complexType> 
<xs:complexType name='ShipInfoType'> <xs:sequence> <xs:element name='ShipVia' type='xs:integer'/> <xs:element name='Freight' type='xs:decimal'/> <xs:element name='ShipName' type='xs:string'/> <xs:element name='ShipAddress' type='xs:string'/> <xs:element name='ShipCity' type='xs:string'/> <xs:element name='ShipRegion' type='xs:string'/> <xs:element name='ShipPostalCode' type='xs:string'/> <xs:element name='ShipCountry' type='xs:string'/> </xs:sequence> <xs:attribute name='ShippedDate' type='xs:dateTime'/> </xs:complexType> </xs:schema> ``` ## See Also [Sample XML Documents (LINQ to XML)](../../../../visual-basic/programming-guide/concepts/linq/sample-xml-documents-linq-to-xml.md)
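The `xs:key`/`xs:keyref` pair is what makes a customer–order join well defined: every `Order` must reference an existing `Customer`. The linked article shows the Visual Basic LINQ version of that join; the sketch below illustrates the same relationship in plain JavaScript, with sample rows assumed to mirror the sample XML file:

```javascript
// Illustrative data mirroring the schema: each Order's CustomerID element
// (the xs:keyref field) must match a Customer's CustomerID attribute (the xs:key field).
const customers = [
  { CustomerID: "GREAL", CompanyName: "Great Lakes Food Market" },
  { CustomerID: "HUNGC", CompanyName: "Hungry Coyote Import Store" },
];

const orders = [
  { CustomerID: "GREAL", EmployeeID: "6" },
  { CustomerID: "HUNGC", EmployeeID: "3" },
  { CustomerID: "GREAL", EmployeeID: "8" },
];

// Join orders to customers on CustomerID — the lookup is guaranteed to
// succeed for schema-valid data because of the keyref constraint.
const joined = orders.map(o => ({
  CompanyName: customers.find(c => c.CustomerID === o.CustomerID).CompanyName,
  EmployeeID: o.EmployeeID,
}));

console.log(joined);
```

The `Join` clause in the Visual Basic article performs exactly this pairing over the XML elements.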