<gh_stars>1-10 package za.co.entelect.persistence.history; import org.springframework.data.repository.PagingAndSortingRepository; import za.co.entelect.domain.entities.history.HistoryEventEntity; public interface HistoryEventEntityDao extends PagingAndSortingRepository<HistoryEventEntity, Long> { }
Opinion: Trusted computing initiatives are just sheep in security clothing. Back in the Spring of 2003, I wrote a column discussing Microsoft's Trusted Computing initiative (the name of which had just been changed from Palladium to Next-Generation Secure Computing Base). In that column, I talked about the need to be wary whenever Microsoft used the word "security" in conjunction with "trusted computing." If one looked at the design and intent of NGSCB, it quickly became clear that its goal wasn't to secure systems from attacks by hackers, worms and viruses. As I said two years ago, "NGSCB's main purpose is to make sure users such as yourself aren't pirating Microsoft's or partners' software or any other copyrighted content—even if that means taking over your system remotely and removing or disabling the offending untrusted software." I received plenty of reader responses in support of that column, but I also received some that said that I was making much ado about nothing, that no vendor would ever abuse trusted computing features—Microsoft's or anyone else's—and that trusted computing would never be used to limit a user's software or hardware capabilities. At any rate, Microsoft sure seemed worried about the many criticisms it was receiving about NGSCB, as the company pulled back on some of its plans and toned down discussions of NGSCB in the marketplace. But that doesn't mean that Microsoft pulled back the strategy. To a large degree, Microsoft's method of flying NGSCB under the radar has worked, because I rarely hear anyone talking about trusted computing nowadays. But we shouldn't let our attention stray, as trusted computing is still out there, and Microsoft and its partners are still working diligently on it. Before I throw NGSCB completely onto the fire, though, I should note that there are good elements to trusted computing. The core Trusted Platform Modules, if used properly, can be of help in many situations, offering greater hardware-based security for keys and tokens. But the operative words here are "if used properly." In all the white papers and FAQs at www.trustedcomputinggroup.org, it sounds like the vendors that have signed on to this initiative want to do only good with this technology and would never think of doing anything that would limit users' rights and access. But, as always, actions speak louder than words, and from news that has come out in the last few months, I think we should still be worried about vendors abusing trusted computing. Example No. 1 is a little something that Microsoft has in store for its lucky customers who upgrade to Vista next year: PVP-OPM (Protected Video Path-Output Protection Management), a technology that, while not a specific part of NGSCB, clearly shows where Microsoft's loyalties lie. What does this great thing do? It looks to see what type of monitor you have attached to your PC, and, if it doesn't like it, it will prevent you from watching DVDs and other digital content or will downgrade the quality of this content. Cool! I feel so much more protected. It's great to know that my own PC will work against me if I don't upgrade to a new Big Brother-enabled monitor. Example No. 2 is the current Sony DRM (digital rights management) rootkit fiasco.
The fact that Sony installed a dangerous Trojan-like program on unwitting users' systems is bad enough, but imagine if this type of program were installed under the protective wings of trusted computing. Would we have even found out about it? Could it have been uninstalled? This is the kind of thing some vendors will use trusted computing for—no matter what they say in white papers and FAQs. Because, remember, we aren't the customers of trusted computing—the corporations that hold the content rights for movies, music, games and software are the actual customers. No, we're the untrusted enemy who is naive enough to think that we have control over the hardware and software that we've purchased and that we have some kind of fair-use rights. When it comes to trusted computing, don't trust it. After all, it doesn't trust us.
What Impact Did Education Stimulus Funds Have on States and School Districts? Based at the George Washington University's Graduate School of Education and Human Development and founded in January 1995 by Jack Jennings, the Center on Education Policy is a national independent advocate for public education and for more effective public schools. The Center works to help Americans better understand the role of public education in a democracy and the need to improve the academic quality of public schools. We do not represent any special interests. Instead, we help citizens make sense of the conflicting opinions and perceptions about public education and create the conditions that will lead to better public schools. The Center on Education Policy receives nearly all of its funding from charitable foundations. We are grateful to the Hewlett Foundation for supporting this project and to the Phi Delta Kappa International Foundation for providing the Center with general support funding that assisted us in this endeavor. The statements made and views expressed are solely the responsibility of the Center. The American Recovery and Reinvestment Act (ARRA), the federal economic stimulus package enacted in 2009, had three primary goals: to save and create jobs, to cultivate economic activity and long-term growth, and to increase accountability and transparency in government spending. Federal appropriations for the ARRA eventually totaled approximately $840 billion and were directed toward tax cuts, funding for entitlement programs, and investments in infrastructure, health, energy, education, and other programs. In the area of education, the Act provided economic stimulus funds to states for both K-12 public schools (the focus of this report) and postsecondary education institutions. ARRA also included additional fiscal year (FY) 2009 funding for the Title I program for disadvantaged children and the Individuals with Disabilities Education Act. In 2010, states and school districts received an additional $10 billion to save or create educators' jobs through the Education Jobs Fund legislation. The Center on Education Policy (CEP) at the George Washington University has tracked the use of ARRA and Education Jobs funds and the implementation of ARRA-related reforms since these laws were enacted. Between December 2009 and February 2012, CEP released six reports looking at the effects of the ARRA on K-12 education across the United States, all available at www.cep-dc.org. These six reports were based on survey responses of state and local officials charged with implementing the ARRA and Education Jobs programs. In particular, CEP surveyed state education agency (SEA) officials and governors' staff and conducted nationally representative surveys of school district officials, including superintendents, chief financial officers, and program directors. Responses to all of the surveys
<gh_stars>0 from django import forms from django.contrib import admin from .models import * class BookCopyInline(admin.TabularInline): model = BookCopy extra = 0 max_num = 0 show_change_link = True readonly_fields = ['book', 'user'] class LibraryAdmin(admin.ModelAdmin): inlines = [BookCopyInline] list_display = ['name'] search_fields = ['name'] class BookAdmin(admin.ModelAdmin): inlines = [BookCopyInline] list_display = ['isbn', 'title', 'author'] list_per_page = 20 search_fields = ['title', 'author', 'isbn'] class BookCopyAdmin(admin.ModelAdmin): list_display = ['id', 'book', 'library', 'user'] list_per_page = 20 search_fields = ['book__title', 'user__username'] autocomplete_fields = ['book', 'library', 'user'] def add_view(self, request): self.exclude = ['user', 'borrow_date'] return super(BookCopyAdmin, self).add_view(request) admin.site.site_header = 'Kamu administration' admin.site.site_title = 'Kamu administration' admin.site.index_title = 'Kamu' admin.site.register(Book, BookAdmin) admin.site.register(Library, LibraryAdmin) admin.site.register(BookCopy, BookCopyAdmin)
CNFL: Categorical to Numerical Feature Learning for Clustering and Classification Categorical data exist in many domains, such as text data, gene sequences, or data from the Census Bureau. While such data are easy for humans to interpret, they cannot be directly used by many classification methods, such as support vector machines and others, which require the underlying data to be represented in a numerical format. To date, most existing learning methods convert categorical data into binary features, which may result in high dimensionality and sparsity. In this paper, we propose a method to convert categorical data into an arbitrary number of numerical features. Our method, named CNFL, uses simple matching to calculate proximity between instances, then uses an eigendecomposition to convert the proximity matrix into a low-dimensional space, which can be used to represent instances for classification or clustering. Experiments on 21 datasets demonstrate that numerical features learned by CNFL can effectively represent the original data for machine learning tasks.
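The pipeline described above (a simple-matching proximity matrix followed by an eigendecomposition into a low-dimensional space) can be sketched in a few lines of NumPy. This is an illustrative reconstruction under the abstract's description, not the authors' reference implementation; the function name, the scaling by the square root of the eigenvalues, and the example data are assumptions.

```python
import numpy as np

def cnfl_features(X_cat, n_components=2):
    """Sketch of a CNFL-style embedding: simple-matching proximity between
    instances, then the top eigenvectors as numerical features."""
    X_cat = np.asarray(X_cat, dtype=object)
    n = X_cat.shape[0]
    # Simple matching: fraction of attributes on which two instances agree.
    S = np.zeros((n, n))
    for i in range(n):
        S[i] = (X_cat == X_cat[i]).mean(axis=1)
    # Eigendecomposition of the symmetric proximity matrix.
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Scale by sqrt of (non-negative) eigenvalues, as in classical MDS.
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Toy example: three instances described by three categorical attributes.
X = [["red", "small", "round"],
     ["red", "large", "round"],
     ["blue", "small", "square"]]
print(cnfl_features(X))  # numerical features usable by an SVM or k-means
```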
/** * Copyright (c) 2020-present, <NAME> * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import type { Entity } from '../../../models/sdlc/models/entity/Entity'; import type { PlainObject } from '@finos/legend-studio-shared'; import { unitTest, guaranteeNonNullable } from '@finos/legend-studio-shared'; import { simpleDebuggingCase, testAutoImportsWithAny, testAutoImportsWithSystemProfiles, } from '../roundtrip/RoundtripTestData'; import m2mGraphEntities from './M2MGraphEntitiesTestData.json'; import { ProjectConfiguration } from '../../../models/sdlc/models/configuration/ProjectConfiguration'; import { waitFor } from '@testing-library/dom'; import { getTestEditorStore } from '../../StoreTestUtils'; import { simpleCoreModelData } from './CoreTestData'; import { DependencyManager } from '../../../models/metamodels/pure/graph/DependencyManager'; import { PackageableElementReference } from '../../../models/metamodels/pure/model/packageableElements/PackageableElementReference'; import { DeprecatedProjectVersionEntities } from '../../../models/metadata/models/ProjectVersionEntities'; import { flowResult } from 'mobx'; const testDependingOnDifferentProjectVersions = [ { projectId: 'PROD-A', versionId: '1.0.0', versionedEntity: false, entities: [], }, { projectId: 'PROD-A', versionId: '2.0.0', versionedEntity: false, entities: [], }, ]; const testDependingOnMoreThanOneproject = [ { projectId: 'PROD-A', versionId: '2.0.0', versionedEntity: false, entities: [ { path: 'org::finos::legend::model::ProfileExtensionA', content: { _type: 'profile', name: 'ProfileExtensionA', package: 'org::finos::legend::model', stereotypes: [], tags: ['docs'], }, classifierPath: 'meta::pure::metamodel::extension::Profile', }, ], }, { projectId: 'PROD-B', versionId: '2.0.0', versionedEntity: false, entities: [ { path: 'org::finos::legend::model::ProfileExtensionB', content: { _type: 'profile', name: 'ProfileExtensionB', package: 'org::finos::legend::model', stereotypes: [], tags: ['docs'], }, classifierPath: 'meta::pure::metamodel::extension::Profile', }, ], }, { projectId: 'PROD-C', versionId: '3.0.0', versionedEntity: false, entities: [ { path: 'org::finos::legend::model::ProfileExtensionC', content: { _type: 'profile', name: 'ProfileExtensionC', package: 'org::finos::legend::model', stereotypes: [], tags: ['docs'], }, classifierPath: 'meta::pure::metamodel::extension::Profile', }, ], }, ]; const TEST_DEPENDENCY_PROJECT_ID = 'UAT-TEST_DEPENDENCY'; const PROJECT_CONFIG = { projectStructureVersion: { version: 6, extensionVersion: 1 }, projectId: TEST_DEPENDENCY_PROJECT_ID, projectType: 'PROTOTYPE', groupId: 'com.test', artifactId: 'string', projectDependencies: [ { projectId: 'PROD_1', versionId: { majorVersion: 1, minorVersion: 0, patchVersion: 0, }, }, ], metamodelDependencies: [], }; const FILE_GENERATION_PATH = 'model::myFileGeneration'; const buildFileGenerationDepentOnDependencyElements = ( dependencyEntities: string[], ): Entity => { const fileGeneration = { path: FILE_GENERATION_PATH, content: { _type: 'fileGeneration', 
configurationProperties: [], name: 'myFileGeneration', package: 'model', scopeElements: [...dependencyEntities], type: 'testType', }, classifierPath: 'meta::pure::generation::metamodel::GenerationConfiguration', } as Entity; return fileGeneration; }; const testDependencyElements = async ( entities: Entity[], dependencyEntities: PlainObject<DeprecatedProjectVersionEntities>[], includeDependencyInFileGenerationScopeElements?: boolean, ): Promise<void> => { const projectVersionEntities = dependencyEntities.map((e) => DeprecatedProjectVersionEntities.serialization.fromJson(e), ); const keys = projectVersionEntities.map((e) => e.projectId); const dependencyElementPaths = projectVersionEntities .flatMap((e) => e.entities) .map((e) => e.path); if (includeDependencyInFileGenerationScopeElements) { entities.push( buildFileGenerationDepentOnDependencyElements(dependencyElementPaths), ); } const editorStore = getTestEditorStore(); editorStore.projectConfigurationEditorState.setProjectConfiguration( ProjectConfiguration.serialization.fromJson(PROJECT_CONFIG), ); // mock version entities api return jest .spyOn( guaranteeNonNullable( editorStore.applicationStore.networkClientManager.metadataClient, ), 'getProjectVersionsDependencyEntities', ) .mockResolvedValue(dependencyEntities); await flowResult(editorStore.graphState.initializeSystem()); const dependencyManager = new DependencyManager([]); const dependencyMap = await flowResult( editorStore.graphState.getConfigurationProjectDependencyEntities(), ); editorStore.graphState.graph.setDependencyManager(dependencyManager); await flowResult( editorStore.graphState.graphManager.buildDependencies( editorStore.graphState.coreModel, editorStore.graphState.systemModel, dependencyManager, dependencyMap, ), ); await waitFor(() => expect( editorStore.graphState.graph.dependencyManager.buildState.hasSucceeded, ).toBeTrue(), ); await flowResult( editorStore.graphState.graphManager.buildGraph( editorStore.graphState.graph, entities, { TEMPORARY__keepSectionIndex: true }, ), ); await waitFor(() => expect(editorStore.graphState.graph.buildState.hasSucceeded).toBeTrue(), ); Array.from(dependencyMap.keys()).forEach((k) => expect(dependencyManager.getModel(k)).toBeDefined(), ); Array.from(keys).forEach((k) => expect(dependencyManager.getModel(k)).toBeDefined(), ); expect(dependencyManager.allElements.length).toBe( dependencyElementPaths.length, ); dependencyElementPaths.forEach((e) => { const element = dependencyManager.getNullableElement(e); guaranteeNonNullable( element, `element ${e} not found in dependency manager`, ); const elementInGraph = editorStore.graphState.graph.getElement(e); guaranteeNonNullable( elementInGraph, `element ${e} not found in main graph`, ); const elementInMainGraph = editorStore.graphState.graph.allOwnElements.find( (el) => el.path === e, ); expect(elementInMainGraph).toBeUndefined(); expect(elementInGraph).toBe(element); expect(elementInGraph.isReadOnly).toBeTrue(); }); if (includeDependencyInFileGenerationScopeElements) { const fileGeneration = guaranteeNonNullable( editorStore.graphState.graph.getOwnFileGeneration(FILE_GENERATION_PATH), ); dependencyElementPaths.forEach((e) => { const elementInGraph = guaranteeNonNullable( editorStore.graphState.graph.getElement(e), ); expect( fileGeneration.scopeElements.find( (el) => el instanceof PackageableElementReference && el.value === elementInGraph, ), ).toBeDefined(); }); } const transformedEntities = editorStore.graphState.graph.allOwnElements.map( (el) => 
editorStore.graphState.graphManager.elementToEntity(el), ); expect(entities).toIncludeSameMembers(transformedEntities); // Ensure dependency elements are not transformed for (const entityPath of dependencyElementPaths) { expect( transformedEntities.find((el) => el.path === entityPath), ).toBeUndefined(); } }; const buildProjectVersionEntities = ( entities: Entity[], ): PlainObject<DeprecatedProjectVersionEntities>[] => [ { projectId: TEST_DEPENDENCY_PROJECT_ID, versionId: '1.0.0', entities, versionedEntity: false, }, ]; test(unitTest('M2M graph dependency check'), async () => { await testDependencyElements( [] as Entity[], buildProjectVersionEntities(m2mGraphEntities as Entity[]), true, ); await testDependencyElements( [] as Entity[], buildProjectVersionEntities(simpleDebuggingCase as Entity[]), true, ); }); test(unitTest('Auto-imports dependency check'), async () => { await testDependencyElements( [] as Entity[], buildProjectVersionEntities(testAutoImportsWithSystemProfiles as Entity[]), true, ); await testDependencyElements( [] as Entity[], buildProjectVersionEntities(testAutoImportsWithAny as Entity[]), true, ); }); test(unitTest('Core model dependency check'), async () => { await testDependencyElements( [] as Entity[], buildProjectVersionEntities(simpleCoreModelData as Entity[]), true, ); }); test( unitTest('Depending on more than one project dependency check'), async () => { await testDependencyElements( [] as Entity[], testDependingOnMoreThanOneproject, true, ); }, ); test( unitTest('Same project different versions dependency error check'), async () => { await expect( testDependencyElements( [] as Entity[], testDependingOnDifferentProjectVersions, true, ), ).rejects.toThrowError( "Depending on multiple versions of a project is not supported. Found dependency on project 'PROD-A' with versions: 1.0.0, 2.0.0.", ); }, );
<reponame>CHEWCHEWW/bezier-react /* External dependencies */ import { v4 as uuid } from 'uuid' /* Internal dependencies */ import { defaultOptions, ToastOptions, ToastId, ToastType, } from './Toast.types' /* Why ToastService is used. Notion: https://www.notion.so/channelio/Toast-bc13dfbc81314141909250d9cf02c4c7#82b94a73d2f34257ab4799cdeccbc70c */ class ToastService { toasts: ToastType[] = [] getToasts = () => this.toasts setToasts = (newToasts: ToastType[]) => { this.toasts = newToasts } has = (id: string) => { if (!this.toasts.length) { return false } return this.toasts.reduce((flag, cur) => (cur.id === id ? true : flag), false) } add = (content: string, options: ToastOptions = defaultOptions) => { const newId: ToastId = uuid() if (this.has(newId)) { return '' } const newToast: ToastType = { id: newId, content, ...options, } const newToasts: ToastType[] = [...this.toasts, newToast] this.setToasts(newToasts) return newId } remove = (id: ToastId): boolean => { if (!this.has(id)) { return false } const newToasts: ToastType[] = this.toasts.filter((toast) => toast.id !== id) this.setToasts(newToasts) return true // remove success } removeAll = () => { if (!this.toasts.length) { return } this.setToasts([]) } } export default ToastService
<reponame>myelin/myelin-kicad.pretty from myelin_kicad_mod import * X = Module( identifier="cypress_lae064_fbga", description="LAE064 64-ball 1.0mm BGA for flash memory" ) # References: # http://www.cypress.com/file/45826/download # AN79938 Design Guidelines for Cypress Ball Grid Array (BGA) Packaged Devices # http://www.cypress.com/file/202451/download # AN202751 Surface Mount Assembly Recommendations for Cypress FBGA Packages # http://www.cypress.com/file/202531/download # AN99178 Solder Mask and Trace Recommendations for FBGAs # Cypress recommends NSMD pads; for 1.00mm ball spacing that means: # NSMD pad 0.45mm, mask 0.60mm # SMD pad 0.6mm, mask 0.0.5mm # https://macrofab.com/blog/escaping-bgas-methods-routing-traces-bga-footprints/ # -> pads typ 20% smaller than ball dia, i.e. .48 # plastic chip is 9x9mm D = E = 9.0 # ball array size W = 8 H = 8 BALL_SPACING = 1.0 # ball pitch. ball dia is 0.6mm. PAD_DIA = 0.45 # diameter of copper pad MASK_DIA = 0.60 # diameter of opening in solder mask PAD_CLEARANCE = 0.1639 # mask + 3.5 mil pad clearance # top left ball x0 = -(BALL_SPACING * (W - 1.0) / 2) y0 = -(BALL_SPACING * (H - 1.0) / 2) # draw outline X.add(Line(-D/2, -E/2, -D/2, E/2)) X.add(Line(-D/2, E/2, D/2, E/2)) X.add(Line(D/2, E/2, D/2, -E/2)) X.add(Line(D/2, -E/2, -D/2, -E/2)) # draw pin 1 ID bubble X.add(Circle(-D/2 + 0.5, -E/2 + 0.5, 0.25)) for y in range(H): for x in range(W): X.add(Pad( name="%s%d" % ("ABCDEFGHJKLMN"[y], x + 1), x=x0 + x * BALL_SPACING, y=y0 + y * BALL_SPACING, w=PAD_DIA, h=PAD_DIA, shape='circle', solder_mask_margin=(MASK_DIA - PAD_DIA) / 2, pad_clearance=PAD_CLEARANCE, )) X.save()
Power-Aware Metrics for Wireless Sensor Networks Abstract Energy conservation is a critical issue in wireless sensor networks for node and network life, as the nodes are powered by batteries. One way of doing so is to use only local information available to the nodes in the network. This article evaluates a number of power-aware routing protocols based on local information only. The simulation shows that basing the routing decision on the remaining power of neighbouring nodes is not by itself enough. Instead, using the directional value and the sum of power remaining at the next neighbours gives the routing protocol a broader perspective about the condition of the network from a local point of view and enhances the decision process.
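Read as an algorithm, the metric evaluated above combines a neighbour's own remaining power, the power remaining at that neighbour's neighbours, and a directional value toward the destination, using local information only. Below is a minimal sketch of such a next-hop decision; the weighting scheme, field names and example values are assumptions for illustration, not the exact metric simulated in the article.

```python
import math

def choose_next_hop(current, destination, neighbours):
    """Pick a next hop from local information: residual power, the summed
    power of the candidate's own neighbours, and direction toward the sink."""
    def directional_value(node):
        # Cosine between current->node and current->destination; close to 1
        # when the neighbour lies roughly toward the destination.
        vx, vy = node['pos'][0] - current[0], node['pos'][1] - current[1]
        dx, dy = destination[0] - current[0], destination[1] - current[1]
        denom = math.hypot(vx, vy) * math.hypot(dx, dy) or 1.0
        return (vx * dx + vy * dy) / denom

    def score(node):
        # Illustrative weights only; the article evaluates several variants.
        return (0.4 * node['power']
                + 0.3 * node['neighbour_power_sum']
                + 0.3 * directional_value(node))

    return max(neighbours, key=score)

neighbours = [
    {'pos': (1.0, 0.0), 'power': 0.6, 'neighbour_power_sum': 2.1},
    {'pos': (0.0, 1.0), 'power': 0.9, 'neighbour_power_sum': 1.2},
]
print(choose_next_hop((0.0, 0.0), (5.0, 0.0), neighbours))
```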
Comparative Metallurgical and Mechanical Analysis of Nd:YAG Laser Welding of Austenitic Stainless Steels Laser welding is a significant method for welding all kinds of alloys. High welding speed, low distortion and easy automation are the advantages of this method. In this study, pulsed Nd:YAG laser butt-joint welding was performed on 304 and 316 austenitic stainless steel workpieces. All the parameters were designed by the full factorial method, in which the center point was repeated 10 times. The effects of power, pulse duration, laser velocity and chemical composition on the width and depth of the welding area, ultimate tensile strength, elongation and Ferrite Number were then studied. The welding process was performed at 1.2 and 1.8 kW power, 3 and 4.4 ms pulse duration, and 0.2 and 0.8 mm/s welding velocity. The upper levels of the input parameters were selected as the optimal case; this was done by surveying all the samples to achieve maximum UTS, weld size and Ferrite Number. Under these conditions, the output error for weld width, weld depth, UTS, elongation and Ferrite Number was 2, 5, 1.6, 25 and 2%, respectively.
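As a rough illustration of the experimental design just described (a two-level full factorial over power, pulse duration and welding speed with a repeated center point), the sketch below enumerates the runs. The variable names and the midpoint values used for the center point are assumptions for illustration, not taken from the study.

```python
from itertools import product

# Two-level full factorial over the three reported parameters.
levels = {
    "power_kW": (1.2, 1.8),
    "pulse_ms": (3.0, 4.4),
    "speed_mm_s": (0.2, 0.8),
}

runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
# Center point (assumed here to be the midpoint of each factor), repeated
# 10 times as stated, to estimate pure error.
center = {name: sum(vals) / 2 for name, vals in levels.items()}
runs += [dict(center) for _ in range(10)]

for i, run in enumerate(runs, 1):
    print(i, run)  # 8 corner runs + 10 center-point replicates = 18 runs
```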
/** * Creates a protocol provider for the given <tt>accountID</tt> and * registers it in the bundle context. This method has a persistent * effect. Once created the resulting account will remain installed until * removed through the uninstallAccount method. * * @param accountID the account identifier * @return <tt>true</tt> if the account with the given <tt>accountID</tt> is * successfully loaded, otherwise returns <tt>false</tt> */ public boolean loadAccount(AccountID accountID) { String userID = accountID.getAccountPropertyString( ProtocolProviderFactory.USER_ID); ProtocolProviderService service = createService(userID, accountID); Dictionary<String, String> properties = new Hashtable<String, String>(); properties.put(PROTOCOL, protocolName); properties.put(USER_ID, userID); ServiceRegistration<ProtocolProviderService> serviceRegistration = bundleContext.registerService( ProtocolProviderService.class, service, properties); if (serviceRegistration == null) { return false; } else { synchronized (registeredAccounts) { registeredAccounts.put(accountID, serviceRegistration); } return true; } }
It’s amazing how Attorney General Eric Holder and President Barack Obama are outraged over the shooting of Trayvon Martin by George Zimmerman, while they ignore the large number of blacks killing other blacks in urban areas. Obama should take a look at the number of shooting deaths in his adopted hometown of Chicago alone. Parents there are grieving just as surely and sorely as Martin’s parents are. Bill Cosby from time to time has pointed to the high rate of black-on-black crime, only to be tarred and feathered. It’s time people start taking notice.
/******************************************************************************* Omicron Player Classic Author: <NAME> Website: http://fabiopichler.net License: BSD 3-Clause License Copyright (c) 2015-2019, <NAME> All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Omicron Player Classic nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. *******************************************************************************/ /*! * * \author <NAME> * \copyright (c) 2015-2019, <NAME> * */ #pragma once #include "Core/Database.h" #include "Core/Global.h" #include "Core/Update.h" #include <QApplication> #include <QDir> #include <QFile> #include <QSettings> #include <QFontDatabase> #include <QSystemTrayIcon> //! Class responsible for the program's startup and main processing. /*! * This class is responsible for initializing the BASS library, plugins, the database, * configuration files, etc. * * It is also responsible for the main processing and for shutting the program down. */ class WindowBase; class Main : public QObject { Q_OBJECT public: Main(); ~Main(); bool init(const int &argc); public slots: void startMusicMode(); void startRadioMode(); void startRecorderMode(); void setWindowTitle(QString); void showError(QString); void showNotification(QString); void restart(); private: void setupRadiolist(); void updateTrayIconMenu(); private slots: void trayIconActivated(QSystemTrayIcon::ActivationReason); void checkUpdate(); void receiveMessage(QVector<QString>); void defaultConfig(); signals: //! Open songs in "music mode" void openMusic(QVector<QString>); //! Add songs to the "music mode" playlist void addMusic(QVector<QString>); //! Play the stream. void playStream(); //! Pause the current stream. void pauseStream(); //! Stop the current stream. void stopStream(); //! Previous stream. void prevStream(); //! Next stream. void nextStream(); public: UpdateApp *updateApp; bool continueRunning; private: QSystemTrayIcon *trayIcon; QSettings *iniSettings; WindowBase *window; };
<filename>Renderer.cpp #include "Renderer.h" #include "Mesh.h" #include "BufferStructs.h" #include <DirectXMath.h> #include <typeinfo> // For fancy position displacement #include <cmath> Renderer::Renderer() { printf("---> Renderer loaded\n"); } Renderer::~Renderer() { printf("---> Renderer unloaded\n"); } // Clears the background every frame to pure black void Renderer::ClearBackground(Microsoft::WRL::ComPtr<ID3D11DeviceContext> context, Microsoft::WRL::ComPtr<ID3D11RenderTargetView> backBufferRTV, Microsoft::WRL::ComPtr<ID3D11DepthStencilView> depthStencilView) { // Background color (#000000 black) for clearing const float color[4] = { 0.0f, 0.0f, 0.00f, 0.0f }; // Clear the render target and depth buffer (erases what's on the screen) context->ClearRenderTargetView(backBufferRTV.Get(), color); context->ClearDepthStencilView( depthStencilView.Get(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0); } // Cycles through all the meshes the renderer has a references to and renders them on the rendertarget to get presented to the screen void Renderer::DrawMeshes( Microsoft::WRL::ComPtr<ID3D11DeviceContext> context, Microsoft::WRL::ComPtr<ID3D11SamplerState> sampler, std::vector<Entity> entities, Camera* camera) { for (int i = 0; i < entities.size(); i++) { // Set sampler, diffuse, and maybe normal textures entities[i].GetMaterial()->GetPixelShader()->SetSamplerState("basicSampler", sampler.Get()); entities[i].GetMaterial()->GetPixelShader()->SetShaderResourceView("diffuseTexture", entities[i].GetMaterial()->GetTextureSRV().Get()); if (entities[i].GetMaterial()->hasNormalMap) { entities[i].GetMaterial()->GetPixelShader()->SetShaderResourceView("normalTexture", entities[i].GetMaterial()->GetNormalMap().Get()); } SimpleVertexShader* vsData = entities[i].GetMaterial()->GetVertexShader(); vsData->SetFloat4("colorTint", entities[i].GetMaterial()->GetColorTint()); vsData->SetMatrix4x4("world", entities[i].GetTransform()->GetWorldMatrix()); vsData->SetMatrix4x4("view", camera->GetViewMatrix()); vsData->SetMatrix4x4("proj", camera->GetProjectionMatrix()); // Set buffers in the input assembler UINT stride = sizeof(Vertex); UINT offset = 0; context->IASetVertexBuffers(0, 1, entities[i].GetMesh()->GetVertexBuffer().GetAddressOf(), &stride, &offset); context->IASetIndexBuffer(entities[i].GetMesh()->GetIndexBuffer().Get(), DXGI_FORMAT_R32_UINT, 0); // Copying to resource vsData->CopyAllBufferData(); // Set the shaders and sampler state entities[i].GetMaterial()->GetVertexShader()->SetShader(); // TODO: Clump together items to render based on which shader type they are entities[i].GetMaterial()->GetPixelShader()->SetShader(); entities[i].GetMaterial()->GetPixelShader()->CopyAllBufferData(); // Do the actual drawing context->DrawIndexed(entities[i].GetMesh()->GetIndexCount(), 0, 0); } } // Cycles through all the meshes the renderer has a references to and renders them on the rendertarget to get presented to the screen void Renderer::DrawMeshesQueued( Microsoft::WRL::ComPtr<ID3D11DeviceContext> context, Microsoft::WRL::ComPtr<ID3D11SamplerState> sampler, Camera* camera) { int currentPriority = -1; for (int i = 0; i < renderQueue.size(); i++) { // Set sampler, diffuse, and maybe normal textures renderQueue[i].GetMaterial()->GetPixelShader()->SetSamplerState("basicSampler", sampler.Get()); renderQueue[i].GetMaterial()->GetPixelShader()->SetShaderResourceView("diffuseTexture", renderQueue[i].GetMaterial()->GetTextureSRV().Get()); if (renderQueue[i].GetMaterial()->hasNormalMap) { 
renderQueue[i].GetMaterial()->GetPixelShader()->SetShaderResourceView("normalTexture", renderQueue[i].GetMaterial()->GetNormalMap().Get()); } SimpleVertexShader* vsData = renderQueue[i].GetMaterial()->GetVertexShader(); vsData->SetFloat4("colorTint", renderQueue[i].GetMaterial()->GetColorTint()); vsData->SetMatrix4x4("world", renderQueue[i].GetTransform()->GetWorldMatrix()); vsData->SetMatrix4x4("view", camera->GetViewMatrix()); vsData->SetMatrix4x4("proj", camera->GetProjectionMatrix()); // Set buffers in the input assembler UINT stride = sizeof(Vertex); UINT offset = 0; context->IASetVertexBuffers(0, 1, renderQueue[i].GetMesh()->GetVertexBuffer().GetAddressOf(), &stride, &offset); context->IASetIndexBuffer(renderQueue[i].GetMesh()->GetIndexBuffer().Get(), DXGI_FORMAT_R32_UINT, 0); // Copying to resource vsData->CopyAllBufferData(); // Set the shaders and sampler state, but only if we need to if (currentPriority != renderQueue[i].renderPriority) { currentPriority = renderQueue[i].renderPriority; renderQueue[i].GetMaterial()->GetPixelShader()->SetShader(); renderQueue[i].GetMaterial()->GetPixelShader()->CopyAllBufferData(); } renderQueue[i].GetMaterial()->GetVertexShader()->SetShader(); // TODO: Clump together items to render based on which shader type they are // Do the actual drawing context->DrawIndexed(renderQueue[i].GetMesh()->GetIndexCount(), 0, 0); } } void Renderer::GenerateRenderQueue(std::vector<Entity> entities) { // Clear out the old queue and set it up to be filled with a new one renderQueue.clear(); // Fill the queue with all the existing entities for (int i = 0; i < entities.size(); i++) renderQueue.push_back(entities[i]); // Sort that queue based on its priority std::sort(renderQueue.begin(), renderQueue.end()); dirty = false; } void Renderer::SetDirty() { dirty = true; } bool Renderer::GetDirty() { return dirty; }
Temperature dependence of nitrogen-vacancy optical center in diamond Diamond, a wide band gap semiconductor material, has been attracting interest in several fields, from electronics and optics to biomedicine and quantum computing, due to its outstanding properties. These properties of diamond are related to its unique lattice and optically active defect centers. In this paper, the dependence of the nitrogen-vacancy (NV) center on measurement temperature is studied by using low-temperature photoluminescence (PL) spectroscopy in a temperature range of 80-200 K. The results show that with the increase of the measurement temperature, the zero-phonon lines of the NV defects are red-shifted, their intensities decrease and their full widths at half maximum increase. These results are attributed to the synergetic process of lattice expansion and quadratic electron-phonon coupling. The NV and NV0 centers have similar values of the quenching activation energy and the thermal softening coefficient, resulting from their similar structures. The small differences may be associated with the electron-phonon coupling. The broadening mechanism of the NV centers is carefully distinguished by Voigt-function fitting with the $T^3$, $T^5$ and $T^7$ relations. These results show that the full widths at half maximum of the Gaussian components of the NV and NV0 centers are randomly distributed near 0.1 meV and 2.1 meV, respectively, while the full widths at half maximum of the Lorentz components of the NV and NV0 centers increase with increasing measurement temperature. The full widths at half maximum of the Lorentz components of the NV and NV0 centers conform to the $T^3$ relationship. It can be proved that, under the action of the fluctuating field, the zero-phonon lines of the NV defects exhibit an obvious homogeneous broadening mechanism.
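As a rough illustration of how a $T^3$ broadening law can be checked against measured linewidths, here is a minimal curve-fitting sketch. The temperature and FWHM values are invented placeholders, not data from the paper, and the two-parameter model (residual width plus a $T^3$ term) is only one of the candidate relations mentioned.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder Lorentzian FWHM values (meV) of a zero-phonon line versus
# temperature (K); replace with measured data.
T = np.array([80, 100, 120, 140, 160, 180, 200], dtype=float)
fwhm = np.array([0.4, 0.7, 1.2, 1.9, 2.8, 4.0, 5.5])

def broadening(T, gamma0, a):
    # Residual width plus a homogeneous T^3 broadening term.
    return gamma0 + a * T**3

(gamma0, a), _ = curve_fit(broadening, T, fwhm, p0=(0.1, 1e-7))
print(f"gamma0 = {gamma0:.3f} meV, a = {a:.3e} meV/K^3")
```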
Mobile Cardiac Outpatient Telemetry Patch vs Implantable Loop Recorder in Cryptogenic Stroke Patients in the US: A Cost-Minimization Model Purpose The aim of this study was to compare the costs and outcomes of mobile cardiac outpatient telemetry (MCOT) patch followed by implantable loop recorder (ILR) with those of ILR alone in cryptogenic stroke patients from the US health-care payor's perspective. Patients and Methods A quantitative decision-tree cost-minimization simulation model was developed. Eligible patients were 18 years of age or older and had been diagnosed with a cryptogenic stroke, without previously documented atrial fibrillation (AF). All patients were assigned first to one and then to the alternative monitoring strategy. Following AF detection, patients were initiated on oral anticoagulants (OAC). The model assessed direct costs for one year attributed to MCOT patch followed by ILR or ILR alone using a monitoring duration of 30 days post-cryptogenic stroke. Results In the base-case modeling, the MCOT patch arm detected 4.6 times more patients with AF compared to the ILR alone arm in a cohort of 1000 patients (209 vs 45 patients with detected AF, respectively). Using MCOT patch followed by ILR in half of the patients initially undiagnosed with AF leads to significant cost savings of US$4,083,214 compared to ILR alone in a cohort of 1000 patients. Cost per patient with detected AF was significantly lower in the MCOT patch arm: $29,598 vs $228,507 in the ILR only arm. Conclusion An initial strategy of 30-day electrocardiogram (ECG) monitoring with MCOT patch for diagnosis of AF in cryptogenic stroke patients realizes significant cost savings compared to proceeding directly to ILR only. Almost 8 times lower costs were achieved with improved detection rates and reduction of secondary stroke risk due to new anticoagulant use in subjects with MCOT patch detected AF. These results strengthen emerging recommendations for prolonged ECG monitoring in secondary stroke prevention. Introduction The fifth leading cause of death in the United States (US) is stroke. Annual incidence of stroke is 795,000 patients. 1 Stroke can be classified into two major subtypes: hemorrhagic, representing about 17%, and ischemic, representing around 83% of patients. Of the ischemic strokes, approximately 15-40% are considered to be cryptogenic strokes, ischemic strokes with no identifiable etiology. 1,2 Identifying the cause of a stroke in the one-third of patients suffering cryptogenic stroke is essential for the implementation of appropriate secondary stroke prevention strategies. 1, Newly diagnosed atrial fibrillation (AF) is only identified in ≈5% of patients with stroke in the inpatient setting, 6 but paroxysmal AF (PAF) may not be present at the time of the stroke or may escape detection during inpatient cardiac monitoring. 7 Thus, outpatient cardiac monitoring is often used to improve the identification of PAF. AF is defined as an episode of irregular heart rhythm, without detectable P waves, of any duration. AF is associated with an increase in the risk of stroke, cardiovascular morbidity and mortality, and significant increases in the total cost of care and impairment in quality of life (QoL). 11,12 Among patients with AF, those with a history of stroke carry the highest risk of recurrent stroke, with a 15% risk during the first year after stroke (2.5 times higher than in those without a previous stroke).
13,14 Management of stroke in the setting of AF is expensive, with one source citing an annual cost of approximately $26 billion. 15 Additionally, the major risks associated with undetected AF, both persistent and paroxysmal, are ischemic stroke and other thromboembolic events, which could be prevented by a prompt diagnosis of AF and consequent oral anticoagulant (OAC) therapy. Early identification of AF and treatment with OAC will reduce the risk of recurrent stroke and death in both the primary and secondary prevention settings. 20 The American Heart Association/American Stroke Association Guidelines recommend a confirmed diagnosis of AF following stroke before initiation of anticoagulant therapy, whereas in the absence of proven AF, antiplatelet therapy is usually recommended. 21 Atrial fibrillation can remain undetected in patients using the current standard of care (SoC) for AF detection: electrocardiogram (ECG) monitoring for at least 24h after a stroke. To detect AF, recommendations from the American Academy of Neurology suggest monitoring cardiac rhythm for prolonged periods, often for periods longer than 1 week, instead of shorter periods (ie, 24 hours) in patients with cryptogenic stroke without known AF. 25 Clinicians have several monitoring options offering different monitoring periods, detection rates and costs. Common monitoring solutions include: Holter monitors (short-term (24-48h) and long-term (1-2 weeks)), post-event recorders (non-looping recorders), external loop recorders (ELR), mobile cardiac outpatient telemetry (MCOT), and implantable loop recorders (ILR). 26 Due to variation in the costs and outcomes, an economic evaluation comparing some of these options would inform treatment choices and health system efficiency. Therefore, the analysis described here focused on a post-stroke population in which options included monitoring with MCOT® patch (BioTelemetry Inc, a Philips company, Malvern, PA, USA) for 30 days possibly followed by ILR if AF is not diagnosed, or ILR monitoring only with evaluation of up to the first 30 days of monitoring. Materials and Methods The aim of this economic analysis was to assess the costs associated with MCOT patch followed by ILR, compared to ILR alone, in cryptogenic stroke patients from the US payors' perspective. We designed a quantitative decision-tree simulation model with base values identified through targeted literature reviews. The analysis described will aid clinicians and hospital procurement staff in optimizing patient outcomes and improving health system efficiency. Several targeted literature searches were performed to obtain source data on costs, the probability of different events occurring, different model designs, modeling assumptions, current standard medical practice for monitoring cryptogenic stroke patients and different international medical guidelines. Search terms used to identify articles in PubMed included: disease terms (ischemic stroke, atrial fibrillation), intervention terms (cardiac monitoring, electrocardiography) and health economics terms (cost-minimization analysis, cost-effectiveness analysis, cost-benefit analysis, cost-utility analysis). Search strategies were restricted to publications written in English. There were no time restrictions for studies; however, most recently published studies were preferred. The main inclusion criteria were: cryptogenic stroke patients based in the USA wearing either MCOT or ILR.
Figure 1 illustrates the model structure used for quantifying costs and outcomes at every stage of monitoring and treatment. There are two diagnostic and monitoring arms in the model: MCOT patch followed by ILR, and ILR alone. Model Assumptions Several assumptions were applied in the model consistent with previous studies, identified through a targeted literature review, reporting costs of stroke prevention in patients with AF: 27,28 1. All patients entered the model with "no underlying AF" or "occult AF not detected." Depending on the diagnostic performance of the allocated ECG monitoring strategy, AF was subsequently detected, and patients were initiated on appropriate therapy (ie, anticoagulation with OACs). 2. The diagnostic yield of devices was constant for a 30-day period. The cumulative probability of diagnosis of post-stroke AF for MCOT was taken from a meta-analysis performed by Sposato et al 2015. 29 The cumulative probability of diagnosis of post-stroke AF for ILR was taken from the Ziegler et al 2015 study. 30 3. The selection of antithrombotic agents for secondary stroke prevention in patients who were found to have AF after cryptogenic stroke was at the discretion of their treating physician. 4. Absence of stroke implied that no additional costs other than monitoring were accrued. Therefore, we did not calculate any costs for those patients that did not have recurrent strokes and ongoing management costs. 5. AF was defined as AF of any duration, based on the study treatment protocol of a large number of cryptogenic stroke patients diagnosed with AF. These stroke patients, in whom AF of any duration was diagnosed, were all advised to begin anticoagulation unless clinical contraindications existed. 31 6. The percentage of patients using oral anticoagulants and aspirin was the same. However, two scenarios were considered: a. Base case: 100% for both aspirin and OAC (assumption). b. Scenario 1: 84% for both aspirin and OAC, based on Favilla et al 2015. 31 7. The cost analysis assumed 100% treatment compliance. 8. The time horizon of the model was 1 year post cryptogenic stroke. 9. All four OACs (dabigatran, rivaroxaban, apixaban, edoxaban) applied in the model were considered to have the same/similar efficacy. 10. All patients with newly detected AF receiving OAC in the study would derive the same clinical benefit regarding secondary stroke prevention. 11. Cryptogenic stroke was assumed to be similar/same for all patients. Therefore, the size and severity of stroke and the risk of recurrent bleeding were not taken into account. Perspective and Time Horizon The economic analysis of this model was performed from a US payor perspective including only direct medical costs. The current model assesses the costs accrued with MCOT patch followed by ILR or ILR alone using a monitoring time of 30 days and a time horizon of one year post-cryptogenic stroke. Input Parameters The dosages of OACs were aligned with the dosages prescribed in the summary of product characteristics. The acquisition costs were obtained for the smallest pack size. In our model, all patients started treatment with aspirin. OACs were initiated only upon AF detection. Of those receiving anticoagulant therapy (dabigatran, rivaroxaban, apixaban, edoxaban), an average price of all four drugs was used. The costs were inflated from the published cost year to 2021 levels using the Medical Consumer Price Index as reported by the US Bureau of Labor Statistics. 32 An overview of all cost inputs used in the model is provided in Table 1.
Event Probabilities Event probabilities used in the model are shown in Table 2. Values were derived from published studies identified through targeted literature review. All-cause mortality rates were derived from Sawyer et al. 33 Depending on the diagnostic performance of the allocated monitoring strategy, AF was subsequently detected, and patients were initiated on appropriate therapy (ie, OACs). If subsequent testing for AF was non-diagnostic, patients received aspirin alone (Figure 1). To account for differences in costs and post-stroke mortality, we further stratified the patients based on mild, moderate, or severe post-stroke disability classified by the Modified Rankin Scale (mRS). The mild, moderate, and severe cases were defined as a mRS score of 0-2, 3-4, and 5, respectively. At any point, there was a risk that patients experience an adverse event, such as bleeding as a side-effect of OAC therapy. Bleeding events incorporated into the model included major bleeding and clinically relevant nonmajor (CRNM) bleeds. Analyses The analysis quantifies the cumulative one-year costs following the initial treatment choice between MCOT and ILR and the incremental cost difference. The results of the model will be presented for the base case and 3 different scenario analyses. An overview of the differences among the base case and scenarios can be found in Table 3. Parameter uncertainty was explored by use of deterministic one-way sensitivity analyses (OWSAs). Fundamental clinical input parameters were individually varied as ±25% (user-modifiable) of the point estimate for event probabilities. Model Outcomes The primary model outcome was the difference in total costs between the MCOT patch and ILR only arms for the whole cohort of 1000 patients. Relevant secondary outcomes included: difference in costs per AF detected, average cost per one patient monitored, incremental recurrent strokes avoided and incremental infections avoided using MCOT patch vs ILR only arms. Base Case Results The results of the base case economic analysis (per diagnosis arm and incremental results) are displayed in Tables 4 and 5. Using the MCOT patch followed by ILR as the first choice for diagnosing AF after cryptogenic stroke leads to significant cost savings compared to ILR alone. The MCOT patch arm detected 4.6 times more patients with AF compared to the ILR alone arm based on a cohort of 1000 patients (209 vs 45 patients with detected AF, respectively). The number of bleeding events was higher in the MCOT patch arm because more patients with detected AF were initiated on OAC therapy. The cost per patient with detected AF was significantly lower in the MCOT patch arm than in the ILR arm ($29,598 vs $228,507, respectively). The average cost per patient monitored with MCOT followed by ILR compared to ILR alone was $6,192 vs $10,275, respectively. Model Uncertainty Varying important clinical parameters in the model influenced the incremental cost savings between both treatment arms (Figure 2). The incremental cost savings was most sensitive to the percentage of patients receiving ILR after MCOT with undetected AF, followed by the risk of recurrent stroke without OAC (when only aspirin is given). Varying the proportion of infection rates, major bleeding with ILR and CRNM bleeding had limited influence on the conclusions of the base case analysis.
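The headline figures above are internally consistent and can be reproduced with a few lines of arithmetic. The sketch below recomputes the cost per detected AF, the incremental savings and the detection ratio from the reported per-patient costs and detection counts; it is a back-of-the-envelope check on the published numbers, not the decision-tree model itself.

```python
# Reported base-case figures for a cohort of 1000 cryptogenic stroke patients.
COHORT = 1000
detected_af = {"MCOT patch + ILR": 209, "ILR only": 45}
avg_cost_per_patient = {"MCOT patch + ILR": 6_192, "ILR only": 10_275}  # USD

for arm in detected_af:
    total = avg_cost_per_patient[arm] * COHORT
    per_detected = total / detected_af[arm]
    print(f"{arm}: total ${total:,}, cost per detected AF ${per_detected:,.0f}")

savings = (avg_cost_per_patient["ILR only"]
           - avg_cost_per_patient["MCOT patch + ILR"]) * COHORT
ratio = detected_af["MCOT patch + ILR"] / detected_af["ILR only"]
print(f"Incremental savings: ${savings:,}; detection ratio: {ratio:.1f}x")
# Lands close to the published $29,598 vs $228,507 per detected AF, the
# ~$4.08M savings, and the 4.6x detection ratio (rounding differences aside).
```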
Scenario Analyses Scenario analyses were performed to investigate the impact on the total cost difference and number of detected patients with AF by varying the following parameters: Percentage of patients getting ILR after MCOT and undetected AF from 50% to 60% (Scenarios 2 and 3; Table 3); Changing OAC and aspirin usage from 100% in the base case to 84% based on Favilla et al 2015 study (Scenarios 1 and 3; Table 3). 31 An overview of the clinical results of the difference between the MCOT patch arm and the ILR arm is presented in Table 6. Total cost differences between the two arms are also presented in Table 6 for all three scenarios. The highest impact on the total cost difference is an increase in the percentage of patients getting ILR after MCOT and undetected AF from 50% to 60%. However, this increase in costs did not lead to significantly more patients with detected AF and led to >10% increase in costs per patient with detected AF. To summarize, base case results were confirmed by all three scenarios. Therefore, the results showed that 30-day ECG monitoring and diagnosis of AF in cryptogenic stroke patients with an MCOT patch arm, as the first choice, is cost-saving compared to ILR only arm. Discussion Conducting a cost analysis of AF poses technical and operational challenges as higher detection rates will lead to increased costs. In the analysis described here, the acquisition costs of MCOT are less than ILR; however, the improved detection rate with MCOT leads to higher costs associated with stroke prevention, but lower costs associated with stroke events. As is often the case in healthcare, investments in one technology can generate other health system costs and savings. This highlights the importance of conducting a comprehensive cost analysis taking into consideration the full range of costs and consequences. The success of a diagnostic and monitoring strategy is determined by the performance of the diagnostic tool and by the impact that an accurate and timely diagnosis can have on treatment and subsequent health events. In our model, we used AF episodes of any duration. As described here, the MCOT patch is associated with an approximate four-fold higher rate of patients with AF detection compared with ILR over 30 days of monitoring. This could be due to the studies showing a delay in the start of ILR monitoring as compared to the start of cardiac monitoring. 29 Additionally, it could be affected by AF detection criteria as AF episodes less than 2 minutes in duration are not detected by the ILR algorithm. 30 In the model, this resulted in more patients with detected AF, fewer recurrent strokes and fewer deaths compared with ILR. Consequently, due to improved AF detection, the costs associated with oral anticoagulants and bleeding events were higher in the MCOT managed subjects due to increased oral anticoagulant usage. These costs were offset by savings associated with a reduction in recurrent stroke events and lower device costs. All patients with newly detected AF (regardless of duration) receiving any of the available OAC in the study would derive the same clinical benefit regarding secondary stroke prevention. In the real-world setting, this assumption would need to be validated and tested. Other studies have examined the cost-effectiveness of ILR but have used a time horizon of over a lifetime, 24, which is not relevant to our one-year timeframe. No prior modeling study has examined the costs of MCOT from the US perspective. 
However, a study by Tsang et al 2014 conducted a retrospective database analysis comparing MCOT to event or Holter monitors in the US. 36 This study came to a conclusion similar to ours, namely that hospitals should be promoting the use of MCOT over event or Holter monitors. 36 Another study, Kaura et al 2016, examined the costs of MCOT from the UK perspective. 37 One study, Yong et al 2016, compared different monitoring durations from a Canadian perspective. 38 The conclusion from Yong et al's 2016 study is in line with ours, meaning that in patients after a cryptogenic stroke, 30-day ECG monitoring is likely to be cost-effective for preventing recurrent strokes. 38 Our study is unique in that it compares two diagnostic strategies that have not been compared previously from the US cost perspective in a cost-minimization analysis. Despite applying standard methodological approaches to our analysis, the study had several limitations that are worth taking into consideration when applying our results in practice. Firstly, the cost analysis considered only a 1-year time horizon for calculating costs and event rates. As strokes can occur after 1 year, extending the analysis to future years would influence the results. Secondly, the monitoring period for ILR is 30 days, but this technology is worn for much longer. Limiting the monitoring period for ILR was done to ensure consistency in monitoring time across technologies. Thirdly, the selection of antithrombotic agent for secondary stroke prevention in patients who were found to have AF after cryptogenic stroke was at the discretion of their treating physician. The analysis did not consider any drug-specific differences in efficacy, which could influence the results described here. Cryptogenic stroke was assumed to be similar/same for all patients. Therefore, the size and severity of stroke and the risk of recurrent bleeding were not taken into account. Furthermore, use of a payor perspective means that our model does not capture the societal costs of recurrent strokes from lost productivity. The inclusion of lost productivity would have a significant impact on our results, considering the likely work loss and carer time associated with caring for people with strokes. 39 Our findings have implications for both clinicians and policymakers that can improve health system efficiency. Cryptogenic stroke patients are common in everyday stroke practice. 38 With practice guidelines now recommending longer than 1 week of monitoring cardiac rhythm to detect AF after cryptogenic stroke, 25 our results support the recommendation that 30-day MCOT monitoring be made available to cryptogenic stroke patients. Use of MCOT as the first-line evaluation of cryptogenic stroke patients would further the goal of optimizing secondary stroke prevention by identifying as many patients with atrial fibrillation as possible at the lowest cost per identifiable AF and at the highest quality of life for these patients. Conclusion The results of this cost-minimization analysis indicate that 30-day ECG monitoring and diagnosis of AF in cryptogenic stroke patients with the MCOT patch arm as an initial diagnostic strategy is cost-saving compared to proceeding directly to ILR only. Cost savings were achieved due to improved detection rates and subsequent prevention of future strokes in subjects monitored with the MCOT patch. These results strengthen emerging recommendations for prolonged ECG monitoring in secondary stroke prevention.
Author Contributions All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work. Funding Philips funded all research activities for this work.
The staff of a nice hotel work with care and patience, despite absurd customer reviews online. Through poetry, Alejandro expresses his emotions of being separated from his two younger brothers in Guanajuato. Surrounded by the absurd world of humans, dogs struggle against their instincts to comply with their owners' wishes. In the town of the dead, life goes on. After a filmmaker accidentally kills a deer, he embarks on a cinematic exploration. A new phenomenon brings different classes of society on the same platform in Silicon Valley, and it is not Facebook. A firefighter works a 24-hour shift, balancing her job and new motherhood. After the California Wildfires, a child’s perspective and remembrance of what was.
from django.conf.urls.defaults import patterns, url
from django.contrib.admin.views.decorators import staff_member_required
from haystack.query import SearchQuerySet

from oscar.core.application import Application
from oscar.apps.search.views import SuggestionsView, MultiFacetedSearchView
from oscar.apps.search.search_indexes import ProductIndex
from oscar.apps.search.forms import MultiFacetedSearchForm


class SearchApplication(Application):
    name = 'search'
    suggestions_view = SuggestionsView
    search_view = MultiFacetedSearchView

    def get_urls(self):
        sqs = SearchQuerySet()
        for field_name, field in ProductIndex.fields.items():
            if field.faceted is True:
                # Ensure we facet the result set by the defined facetable fields.
                # SearchQuerySet methods return a clone, so the result must be
                # reassigned for the facet to actually be applied.
                sqs = sqs.facet(field_name)

        urlpatterns = patterns('',
            url(r'^suggest/$', self.suggestions_view.as_view(), name='suggest'),
            url(r'^$',
                self.search_view(form_class=MultiFacetedSearchForm, searchqueryset=sqs),
                name='search'),
        )
        return urlpatterns


application = SearchApplication()
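For context, a minimal sketch of how such an Application instance is typically wired into a project URLconf is shown below. It assumes the usual Oscar convention that the Application exposes a urls property wrapping get_urls(); the project import path is hypothetical.

# Project-level urls.py (illustrative sketch; the import path is hypothetical)
from django.conf.urls.defaults import patterns, include, url
from myproject.search.app import application  # the SearchApplication instance above

urlpatterns = patterns('',
    url(r'^search/', include(application.urls)),
)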
<filename>src/libxsmm_perf.h
/******************************************************************************
* Copyright (c) Intel Corporation - All rights reserved.                      *
* This file is part of the LIBXSMM library.                                   *
*                                                                             *
* For information on the license, see the LICENSE file.                       *
* Further information: https://github.com/libxsmm/libxsmm/                    *
* SPDX-License-Identifier: BSD-3-Clause                                       *
******************************************************************************/
/* <NAME> (Google Inc.)
******************************************************************************/
#ifndef LIBXSMM_PERF_H
#define LIBXSMM_PERF_H

#include <libxsmm_macros.h>

LIBXSMM_API_INTERN void libxsmm_perf_init(void);

LIBXSMM_API_INTERN void libxsmm_perf_finalize(void);

LIBXSMM_API_INTERN void libxsmm_perf_dump_code(
  const void* memory, size_t size, const char* name);

#endif /* LIBXSMM_PERF_H */
<reponame>Skyway666/Assignment-2--Pathfinding #include "j1App.h" #include "j1Input.h" #include "j1Render.h" #include "j1Collisions.h" #include "j1Map.h" #include "Pathfinding.h" #include "j1Entities.h" #include "Player.h" #include "GroundEnemy.h" j1Collisions::j1Collisions() { for (uint i = 0; i < MAX_COLLIDERS; ++i) colliders[i] = nullptr; matrix[COLLIDER_PLAYER][COLLIDER_PLAYER] = false; matrix[COLLIDER_PLAYER][COLLIDER_BONE] = true; matrix[COLLIDER_GOD][COLLIDER_BONE] = true; matrix[COLLIDER_PLAYER][COLLIDER_DEADLY] = true; matrix[COLLIDER_PLAYER][COLLIDER_ENEMY_GROUND] = true; matrix[COLLIDER_ENEMY_GROUND][COLLIDER_PATH] = true; matrix[COLLIDER_ENEMY_GROUND][COLLIDER_WALKABLE] = true; matrix[COLLIDER_COIN][COLLIDER_PLAYER] = true; matrix[COLLIDER_PLAYER][COLLIDER_COIN] = true; matrix[COLLIDER_COIN][COLLIDER_GOD] = true; matrix[COLLIDER_GOD][COLLIDER_COIN] = true; } // Destructor j1Collisions::~j1Collisions() {} bool j1Collisions::PreUpdate() { // Remove all colliders scheduled for deletion for (uint i = 0; i < MAX_COLLIDERS; ++i) { if (colliders[i] != nullptr && colliders[i]->to_delete == true) { delete colliders[i]; colliders[i] = nullptr; } } return true; } // Called before render is available bool j1Collisions::Update(float dt) { Collider* c1; Collider* c2; for (uint i = 0; i < MAX_COLLIDERS; ++i) { // skip empty and wall colliders if (colliders[i] == nullptr || colliders[i]->type == COLLIDER_WALL) continue; c1 = colliders[i]; // avoid checking collisions already checked for (uint k = i + 1; k < MAX_COLLIDERS; ++k) { // skip empty and wall colliders if (colliders[k] == nullptr || colliders[k]->type == COLLIDER_WALL) continue; c2 = colliders[k]; if (c1->CheckCollision(c2->rect) == true) { if (matrix[c1->type][c2->type] && c1->callback) c1->callback->OnCollision(c1, c2); if (matrix[c2->type][c1->type] && c2->callback) c2->callback->OnCollision(c2, c1); } } } UpdateGroundPath(); DebugDraw(); return true; } void j1Collisions::DebugDraw() { if (App->input->GetKey(SDL_SCANCODE_F9) == KEY_DOWN) debug = !debug; if (debug == false) return; Uint8 alpha = 80; for (uint i = 0; i < MAX_COLLIDERS; ++i) { if (colliders[i] == nullptr) continue; switch (colliders[i]->type) { case COLLIDER_NONE: // white App->render->DrawQuad(colliders[i]->rect, 255, 255, 255, alpha, false); break; case COLLIDER_PLAYER: // green App->render->DrawQuad(colliders[i]->rect, 0, 255, 0, alpha, false); break; case COLLIDER_GOD: // black App->render->DrawQuad(colliders[i]->rect, 0, 0, 0, alpha, false); break; case COLLIDER_WALL: // blue App->render->DrawQuad(colliders[i]->rect, 0, 0, 255, alpha, false); break; case COLLIDER_PIT: // pink App->render->DrawQuad(colliders[i]->rect, 243, 64, 147, alpha, false); break; case COLLIDER_DEADLY: // red App->render->DrawQuad(colliders[i]->rect, 255, 0, 0, alpha, true); break; case COLLIDER_ENEMY_GROUND: // cian App->render->DrawQuad(colliders[i]->rect, 0, 255, 255, alpha, true); break; case COLLIDER_BONE: // white App->render->DrawQuad(colliders[i]->rect, 255, 255, 255, alpha, true); break; case COLLIDER_WALKABLE: // purple App->render->DrawQuad(colliders[i]->rect, 101, 31, 180, alpha, true); break; case COLLIDER_PATH: // brown App->render->DrawQuad(colliders[i]->rect, 15, 50, 85, alpha, true); break; case COLLIDER_COIN: // orange App->render->DrawQuad(colliders[i]->rect, 255, 128, 0, alpha, true); break; } } } // Called before quitting bool j1Collisions::CleanUp() { LOG("Freeing all colliders"); for (uint i = 0; i < MAX_COLLIDERS; ++i) { if (colliders[i] != nullptr) { delete 
colliders[i]; colliders[i] = nullptr; } } return true; } Collider* j1Collisions::AddCollider(SDL_Rect rect, COLLIDER_TYPE type, j1Module* callback, uint lenght, uint height, uint column_height) { Collider* ret = nullptr; for (uint i = 0; i < MAX_COLLIDERS; ++i) { if (colliders[i] == nullptr) { ret = colliders[i] = new Collider(rect, type, callback, lenght, height, column_height); break; } } return ret; } void j1Collisions::Erase_Non_Player_Colliders() { for (uint i = 0; i < MAX_COLLIDERS; ++i) { if (colliders[i] != nullptr && colliders[i]->type != COLLIDER_PLAYER && colliders[i]->type != COLLIDER_GOD) { delete colliders[i]; colliders[i] = nullptr; } } } // ----------------------------------------------------- bool Collider::CheckCollision(const SDL_Rect& r) const { if (r.y + r.h > rect.y && r.y < rect.y + rect.h && r.x + r.w > rect.x && r.x < rect.x + rect.w) return true; else return false; } void Collider::WillCollide(GroundEntity* entity, float dt) { const SDL_Rect r = entity->collider->rect; if (r.y + r.h > rect.y && r.y < rect.y + rect.h && r.x < rect.x + rect.w + entity->speed_modifier.x * dt && r.x + r.w > rect.x) // Will collide left entity->contact.x = 1; if (r.y + r.h > rect.y && r.y < rect.y + rect.h && r.x + r.w > rect.x - entity->speed_modifier.x * dt && r.x < rect.x + rect.w) // Will collide right entity->contact.x = 2; if (r.y < rect.y + rect.h && r.y + r.h >(rect.y - entity->gravity * dt) && r.x + r.w > rect.x && r.x < rect.x + rect.w) // Will collide ground { entity->contact.y = 1; if (entity->type == GROUND_ENEMY) { GroundEnemy* enemy = (GroundEnemy*)entity; enemy->height = height; // Set height to ground enemies } } if (r.y + r.h > rect.y && r.y < rect.y + rect.h + entity->speed_modifier.y * dt && r.x + r.w > rect.x && r.x < rect.x + rect.w) // Will collide top entity->contact.y = 2; } void Collider::WillCollidePit(GroundEnemy* entity, float dt) { if (entity->collider != nullptr) { const SDL_Rect r = entity->collider->rect; // Will collide ground && contact.y == 1 if ((r.y < rect.y + rect.h && r.y + r.h > rect.y - entity->gravity * dt && r.x + r.w > rect.x && r.x < rect.x + rect.w) && entity->contact.y == 1) { if (lenght < 4) { // Prevent jumps upon landing if ((!entity->flip && entity->collider->rect.x + entity->collider->rect.w / 2 > rect.x) || (entity->flip && entity->collider->rect.x + entity->collider->rect.w / 2 < rect.x + rect.w)) { if (entity->just_landed) { if (entity->jump_timer.Read() != 0) entity->jump_x = lenght * App->map->data.tile_width / entity->jump_timer.Read(); entity->jump_timer.Reset(); entity->jumping = true; } entity->just_landed = false; } } else if (lenght >= 4) { entity->front_of_unwalkable = true; } } } } void Collider::WillCollideWall(GroundEnemy* entity, float dt) { if (entity->collider != nullptr) { const SDL_Rect r = entity->collider->rect; // Will collide left or Will collide right and contact.y == 1 if ((r.y + r.h > rect.y && r.y < rect.y + rect.h && r.x < rect.x + rect.w + App->map->data.tile_width && r.x + r.w > rect.x) || (r.y + r.h > rect.y && r.y < rect.y + rect.h && r.x + r.w > rect.x - App->map->data.tile_width && r.x < rect.x + rect.w)) { if (column_height <= 2 && entity->height - height <= 3 && entity->height - height > 0 && entity->contact.y == 1) // Check if height is bigger than 0 just in case { entity->jumping_wall = true; entity->jump_timer.Reset(); entity->jumping = true; } } } } bool j1Collisions::WillCollideAfterSlide(Player* entity, float dt) const { const SDL_Rect r = entity->collider->rect; for (uint i = 0; i < 
MAX_COLLIDERS; ++i) { // skip empty and player colliders if (colliders[i] == nullptr || colliders[i]->type == COLLIDER_NONE || colliders[i]->type == COLLIDER_PLAYER) continue; if ((colliders[i]->type == COLLIDER_WALL || (colliders[i]->type == COLLIDER_PIT && entity->collider->type == COLLIDER_GOD)) && r.y + r.h > colliders[i]->rect.y && r.y < colliders[i]->rect.y + colliders[i]->rect.h + App->map->data.tile_height && r.x + r.w > colliders[i]->rect.x && r.x < colliders[i]->rect.x + colliders[i]->rect.w) return true; } return false; } void j1Collisions::ManageGroundCollisions(GroundEntity* entity, float dt) { for (uint i = 0; i < MAX_COLLIDERS; ++i) { // skip empty and non-wall colliders if (colliders[i] == nullptr || colliders[i]->type == COLLIDER_NONE || colliders[i]->type == COLLIDER_PLAYER || colliders[i]->type == COLLIDER_BONE || colliders[i]->type == COLLIDER_ENEMY || colliders[i]->type == COLLIDER_GOD) continue; if (entity->collider != nullptr && entity->collider->type != COLLIDER_GOD && colliders[i]->type == COLLIDER_WALL) { colliders[i]->WillCollide(entity, dt); if (entity->collider->CheckCollision(colliders[i]->rect)) // In case the entity somehow passes thorugh a wall { if (entity->type == ENTITY_TYPES::PLAYER) { Player* player = (Player*)entity; if (player->flip && !player->walljumping && !player->StickToWall) player->position.x += player->speed_modifier.x * dt; else if (!player->flip && !player->walljumping && !player->StickToWall) player->position.x -= player->speed_modifier.x * dt; else if (player->walljumping && player->speed.x > 0 && !player->StickToWall) player->position.x -= player->speed_modifier.x * dt; else if (player->walljumping && player->speed.x < 0 && !player->StickToWall) player->position.x += player->speed_modifier.x * dt; } else { if (entity->flip) entity->position.x += App->map->data.tile_width / 2; else if (!entity->flip) entity->position.x -= App->map->data.tile_width / 2; } } } } } void j1Collisions::EnemyJump(GroundEnemy* entity, float dt) { for (uint i = 0; i < MAX_COLLIDERS; ++i) { // Skip empty, non-wall and non-pit colliders if (colliders[i] == nullptr || !(colliders[i]->type == COLLIDER_PIT || colliders[i]->type == COLLIDER_WALL)) continue; if (colliders[i]->type == COLLIDER_PIT) colliders[i]->WillCollidePit(entity, dt); else if (colliders[i]->type == COLLIDER_WALL) colliders[i]->WillCollideWall(entity, dt); } } void j1Collisions::UpdateGroundPath() { for (uint i = 0; i < MAX_COLLIDERS; ++i) { // Skip empty, non-walkable and non-path colliders if (colliders[i] == nullptr || !(colliders[i]->type == COLLIDER_PATH || colliders[i]->type == COLLIDER_WALKABLE)) continue; iPoint distance; if (App->entities->player->collider != nullptr) { distance.x = abs(colliders[i]->rect.x - (App->entities->player->position.x + App->entities->player->collider->rect.w / 2)); distance.y = abs(colliders[i]->rect.y - (App->entities->player->position.y + App->entities->player->collider->rect.h / 2)); } if (distance.x <= App->entities->player->pathfinding_distance.x && distance.y <= App->entities->player->pathfinding_distance.y) colliders[i]->type = COLLIDER_PATH; else colliders[i]->type = COLLIDER_WALKABLE; } }
Nonreversible immobilization of water-borne plutonium onto self-assembled adlayers of silanized humic materials. The objective was to study plutonium partitioning between immobile and mobile humic materials at the water-solid interface. Immobilization of the humic materials on solid supports was performed in situ using self-adhesive silanized humic derivatives. The presence of the humic adlayers on solid supports was shown to significantly enhance Pu sorption and its retention under both steady-state and dynamic conditions. While plutonium may exist in multiple oxidation states as well as colloidal forms, the major thrust of this work was to study the behavior of the most mobile form, PuO2⁺, in dilute solutions. The values of the plutonium partition coefficients (Kd) between water and humics-coated silica gels after 10 days of exposure reached 1.6 10 L kg⁻¹ at pH 7.5 under anaerobic conditions with a total plutonium concentration of 1.2 × 10⁻⁸ M, exceeding those for the uncoated SiO2 (6.3 10 L kg⁻¹). Column tests showed substantial sequestration of water-borne plutonium (up to 73%) on the humics-coated silica gels. Remobilization experiments conducted under batch conditions at different pH values (3.5, 4.5, 7.5) showed that no more than 3% of the sequestered Pu was remobilized from the humics-coated silica gels by treatment with dissolved humic materials at the environmentally relevant pH of 7.5. Consequently, silanized humic materials can be seen both as molecular probes and as potent candidate materials for scavenging mobile Pu from an aqueous phase.
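For readers less familiar with the notation, the partition coefficient reported above is conventionally defined as the equilibrium ratio of the sorbed to the dissolved concentration; this is the standard definition, not something specific to this study:

$$ K_d \;=\; \frac{[\mathrm{Pu}]_{\mathrm{sorbed}}\ (\mathrm{mol\ kg^{-1}})}{[\mathrm{Pu}]_{\mathrm{aq}}\ (\mathrm{mol\ L^{-1}})} \qquad [\mathrm{L\ kg^{-1}}] $$

so larger Kd values indicate stronger retention of plutonium on the humics-coated silica.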
import torch from torch import nn import torch.nn.functional as F from torch.utils.data import Dataset from torch.utils.data import DataLoader import numpy as np import matplotlib.pyplot as plt BATCH_SIZE = 1 SEQ_LEN = 20 # 输入Part1的批量大小应为BATCH_SIZE * SEQ_LEN class EEGdataset(Dataset): def __init__(self, is_train): filename = 'E:/data/combine_data.txt' if is_train \ else 'E:/test/data_7' data = np.loadtxt(filename, delimiter=',', dtype=np.float32) self.batch_size = BATCH_SIZE * SEQ_LEN num_batch = int(data.shape[0] / self.batch_size) data = data[:num_batch * self.batch_size] self.len = data.shape[0] self.x_data = torch.from_numpy(data[:, :-1]) self.y_data = torch.LongTensor(data[:, -1]) def __getitem__(self, index): return self.x_data[index], self.y_data[index] def __len__(self): return self.len class part1(nn.Module): def __init__(self): super(part1, self).__init__() self.DP = nn.Dropout(0.5) self.cnn1 = nn.Sequential(nn.Conv1d(in_channels=1, out_channels=64, padding=(10,), kernel_size=(50,), stride=(6,)), nn.MaxPool1d(kernel_size=8, stride=8), nn.ReLU(), nn.Dropout(0.5), nn.Conv1d(64, 128, (8,), (1,), padding=(4,)), nn.ReLU(), nn.Conv1d(128, 128, (8,), (1,), padding=(4,)), nn.ReLU(), nn.Conv1d(128, 128, (8,), (1,), padding=(4,)), nn.ReLU(), nn.MaxPool1d(4, 4)) self.cnn2 = nn.Sequential(nn.Conv1d(in_channels=1, out_channels=64, padding=(10,), kernel_size=(200,), stride=(16,)), nn.MaxPool1d(kernel_size=6, stride=6), nn.ReLU(), nn.Dropout(0.5), nn.Conv1d(64, 128, (7,), (1,), padding=(4,)), nn.ReLU(), nn.Conv1d(128, 128, (7,), (1,), padding=(4,)), nn.ReLU(), nn.Conv1d(128, 128, (7,), (1,), padding=(4,)), nn.ReLU(), nn.MaxPool1d(3, 3)) self.cnn3 = nn.Sequential(nn.Conv1d(in_channels=1, out_channels=64, padding=(10,), kernel_size=(400,), stride=(50,)), nn.MaxPool1d(kernel_size=4, stride=4), nn.ReLU(), nn.Dropout(0.5), nn.Conv1d(64, 128, (6,), (1,), padding=(3,)), nn.ReLU(), nn.Conv1d(128, 128, (6,), (1,), padding=(3,)), nn.ReLU(), nn.Conv1d(128, 128, (6,), (1,), padding=(3,)), nn.ReLU(), nn.MaxPool1d(2, 2)) self.encode_layer = nn.TransformerEncoderLayer(d_model=3000, nhead=8) self.transformer = nn.TransformerEncoder(self.encode_layer, num_layers=2) def forward(self, x): x = x.view(BATCH_SIZE * SEQ_LEN, 1, -1) # x_tmp1 = torch.tensor([[0] * 1480] * BATCH_SIZE*SEQ_LEN) # x_tmp2 = x.view(BATCH_SIZE*SEQ_LEN, -1) # x_add = torch.cat((x_tmp1, x_tmp2), dim=1) # # x = self.transformer(x) x1 = self.cnn1(x) # x1的size为BATCH_SIZE*128*16 x2 = self.cnn2(x) # x2的size为BATCH_SIZE*128*11 x3 = self.cnn3(x) # x3的size为BATCH_SIZE*128*8 x1 = x1.view(BATCH_SIZE * SEQ_LEN, -1) x2 = x2.view(BATCH_SIZE * SEQ_LEN, -1) x3 = x3.view(BATCH_SIZE * SEQ_LEN, -1) x = torch.cat((x1, x2, x3), dim=1) # x = x + x_add return self.DP(x) # x的size为BATCH_SIZE*4480 class part2(nn.Module): def __init__(self): super(part2, self).__init__() self.GRU1 = nn.GRU(input_size=4480, hidden_size=512, num_layers=1, bidirectional=True) self.DP = nn.Dropout(0.5) self.GRU2 = nn.GRU(input_size=2 * 512, hidden_size=512, num_layers=1, bidirectional=True) self.Linear = nn.Linear(4480, 1024) self.Relu = nn.ReLU() self.Tanh = nn.Tanh() def forward(self, x): x1 = x.view(SEQ_LEN, BATCH_SIZE, -1) x1, _ = self.GRU1(x1) x1 = self.DP(x1) x1 = self.Tanh(x1) x1, _ = self.GRU2(x1) x1 = self.DP(x1).view(SEQ_LEN * BATCH_SIZE, -1) x1 = self.Tanh(x1) x2 = self.Tanh(self.Linear(x)) return x1 + x2 class part3(nn.Module): def __init__(self): super(part3, self).__init__() self.DP = nn.Dropout(0.5) self.Linear = nn.Linear(1024, 5) def forward(self, x): x = 
self.Linear(self.DP(x)) return x train_data = EEGdataset(is_train=True) train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE * SEQ_LEN, shuffle=False) test_data = EEGdataset(is_train=False) test_loader = DataLoader(dataset=test_data, batch_size=BATCH_SIZE * SEQ_LEN, shuffle=False) # ===================================================================================================== model1 = part1() model2 = part2() model3 = part3() checkpoint = torch.load('C:/Users/jhon/Desktop/Sg/models/pre-train.pth') model1.load_state_dict(checkpoint['part1']) criterion = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam([{'params': model1.parameters(), 'lr': 3e-5}, {'params': model2.parameters(), 'lr': 3e-5}, {'params': model3.parameters(), 'lr': 3e-5}]) # ======================================================================================================= # ========================================= CUDA ====================================================== device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model1.to(device) model2.to(device) model3.to(device) # ===================================================================================================== def train(): running_loss, correct = 0, 0 for idx, (inputs, labels) in enumerate(train_loader): inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() outputs = model1(inputs) outputs = model2(outputs) outputs = model3(outputs) loss = criterion(outputs, labels) running_loss += loss.item() loss.backward() optimizer.step() _, prediction = outputs.max(dim=1) correct += (prediction == labels).sum().item() running_loss = running_loss / len(train_data) running_acc = correct / len(train_data) return running_loss, running_acc def test(): test_loss, correct = 0, 0 with torch.no_grad(): for idx, (inputs, labels) in enumerate(test_loader): inputs, labels = inputs.to(device), labels.to(device) outputs = model1(inputs) outputs = model3(model2(outputs)) loss = criterion(outputs, labels) test_loss += loss.item() _, prediction = outputs.max(dim=1) correct += (prediction == labels).sum().item() test_loss = test_loss / len(test_data) test_acc = correct / len(test_data) return test_loss, test_acc epoch_list = [] loss_list = [] acc_list = [] test_loss_list = [] test_acc_list = [] for epoch in range(100): running_loss, running_acc = train() print('Epoch : %d Loss : %.3f Accuracy: %.3f' % (epoch, running_loss, running_acc)) epoch_list.append(epoch) loss_list.append(running_loss) acc_list.append(running_acc) test_loss, test_acc = test() test_loss_list.append(test_loss) test_acc_list.append(test_acc) fig = plt.figure() ax1 = fig.add_subplot(111) ax2 = ax1.twinx() ax1.plot(np.array(epoch_list), np.array(loss_list), label='Train_loss', marker='o', linestyle='dashed', markersize=5) ax2.plot(np.array(epoch_list), np.array(acc_list), label='Train_accuracy', marker='s', markersize=5) ax1.plot(np.array(epoch_list), np.array(test_loss_list), label='Test_loss', marker='o', linestyle='dashed', markersize=5) ax2.plot(np.array(epoch_list), np.array(test_acc_list), label='Test_Accuracy', marker='s', markersize=5) ax1.set_ylim(0.02, 0.09) ax1.set_xlabel('Epoch') ax1.set_ylabel('Loss') ax2.set_ylim(0, 1) ax2.set_ylabel('Accuracy') ax1.legend(loc=2) ax2.legend(loc=0) plt.show()
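The training script above loads pre-trained weights for part1 but never saves the fine-tuned weights. A minimal sketch of persisting all three parts, mirroring the dictionary format already used for loading, is shown below; the output file name is a hypothetical placeholder.

# Save the fine-tuned weights after training (sketch; file name is hypothetical)
torch.save({'part1': model1.state_dict(),
            'part2': model2.state_dict(),
            'part3': model3.state_dict()},
           'C:/Users/jhon/Desktop/Sg/models/fine-tuned.pth')
# Reloading later mirrors the existing pre-train loading code:
# ckpt = torch.load('C:/Users/jhon/Desktop/Sg/models/fine-tuned.pth')
# model1.load_state_dict(ckpt['part1']); model2.load_state_dict(ckpt['part2']); model3.load_state_dict(ckpt['part3'])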
RISK MANAGEMENT ON MEDJEDJA DAM ON TAILING STORAGE FACILITY, OMARSKA MINE PRIJEDOR
Dams and tailing storage facilities are specific mining facilities that carry many potential hazards, and therefore risks. In this paper, for the tailings mud dam Medjedja and the tailing storage facility of the Omarska Mine near Prijedor, on the basis of the current state of the dam, past events, visual observations and specialist measurements, we have analysed possible accident scenarios, the likelihood of their occurrence, and their detrimental effects, including loss of life, material damage, environmental consequences and impact on the reputation of the Company. ISO standards and ICOLD recommendations related to dam risk management were used in the risk assessment.
INTRODUCTION
In mineral processing, it is necessary to build a suitable dam for the disposal of tailings mud (wastewater containing fine particles of tailings and mineral raw material) in a designated area. These dams are usually constructed of homogeneous or heterogeneous material from the immediate surroundings or of material separated from the tailings mud after cycloning. Their service life lasts for several decades, which is why they must be monitored and managed at all times, whether they are in use or disposal has been completed. Tailing storage facility dams carry multiple risks: threats to lives and human health, financial and material losses, damage to the environment (pollution of water, soil, flora, fauna, etc.), especially in downstream areas, and damage to the reputation of the Company managing them. Over the last six decades, the issue of dam safety has attracted a great deal of attention from the public, researchers, various agencies and legislators, who have issued regulations to help prevent and/or mitigate these risks. Many developed countries, especially the USA, England, Australia, Canada, the Netherlands and the EU member states, have set up various agencies, bureaus and committees (FEMA, ANCOLD, BC Hydro, ICOLD, etc.) to address dam safety issues, and have adopted risk management strategies to assess and manage risks for dams in their countries. The Republic of Srpska and Bosnia and Herzegovina have not yet adopted adequate regulations in this field, apart from the risk management standard BAS ISO 31000:2016, the mud characterization standard BAS CEN/TR 15584:2012 (Guidance on risk assessment, especially with regard to the use and disposal of mud), and the laws and regulations relating to waste management, namely the Rulebook on the categories, testing and classification of waste and the Rulebook relating to the observation of high dams. For these reasons, ISO standards, EU directives and the regulations of other developed countries should be used in the design and operation of dams and the management of their risks. Dam accidents occur because of natural hazards, technical damage, human activities, and combinations thereof. Many failures of and damage to tailing storage facility dams have occurred around the world, causing greater or lesser material damage and, in some cases, the loss of innocent lives.
Here are just a few examples: the collapse of part of a tailing storage facility dam near Trento, Italy, in 1985; part of the lead and zinc mine tailings storage facility dam at Los Frailes, Spain, in 1998; part of a gold mine tailings storage facility dam near Merriespruit, South Africa, in 1994; the tailings spill after the dam burst at an aluminium production plant in Hungary in 2010; and, most recently, the failure of the Vale Company's dam in Brazil in 2019. Earthquake-induced damage and cracking occurred at the following dams: Sheffield (1925), Loma Prieta (1989), Forster (1989), San Fernando (1994) and MacDonald (1998).
RISK MANAGEMENT
The risk management process is defined by BAS ISO 31000:2019, ISO/IEC 31010:2019, and ICOLD recommendations. Risk management is the process of systematically applying management policy to identifying, analysing and evaluating risk, treating risk, and monitoring and reviewing risk. Risk management allows the risk to be considered and its acceptability assessed; based on that assessment, it is possible to determine what measures should be taken to eliminate the risk or reduce it to acceptable limits. Picture 1 shows the scheme of activities in the risk management process and their relationships.
Picture 1. Relationship between risk analysis, risk assessment and risk management
Risk analysis is the first component of risk management. It is the part of the process in which potential hazards and events are identified (flood wave, earthquake, dam destruction, etc.), giving a quantitative or qualitative assessment of the likelihood of occurrence and the magnitude of adverse effects (environmental and material damage, human casualties, etc.). Risk analysis is a very extensive and complex job that requires the involvement of prominent experts from different professional profiles. Risk assessment is a process in which, using the results of the risk analysis and other information on costs and benefits, a decision is made on the acceptability of the risk. A risk that can be controlled under the conditions prescribed by regulations is acceptable; if the risk cannot be controlled under such conditions, it cannot be accepted. In order to determine the level and dimensions of risk, it is necessary to define risk in terms of time and its stages as precisely as possible. Risk involves the following stages:
- accident risk identification;
- modelling of accident and consequence development;
- vulnerability analysis (qualitative and quantitative ranking);
- response mode (response to an accident);
- post-accident monitoring;
- disaster relief measures (recovery).
Risk communication is a very important component of an effective risk decision-making process. It is not a separate component of the process; it must be integrated into all aspects of the process and is essential for the dam owner and other individuals and organizations that have a stake in, or might affect, the dam.
Basic information on the Medjedja dam
For the purpose of disposing of tailings mud from the gravity-magnetic separation plant (GMS) of the Omarska Iron Ore Mine, located approximately 5 km away (air line), the earth dam Medjedja was constructed by damming 3 intermittent watercourses. The dam was built in an area of seismic hazard rated at 7° MCS; an earthquake of this magnitude could cause moderate to significant damage to the dam and possible deformation at the crown and downstream slopes. It is important to note that the dam was constructed according to parameters of seismic resistance for earthquakes of up to 8° MCS.
The dam is constructed to be watertight, with a drainage system (Picture 2), and is built of the same material as its foundation, whose geomechanical characteristics are:
- unit weight 20.00 kN/m³
- cohesion 26.15 kPa
- internal friction angle 26.24°
- water permeability coefficient 1e-7
Picture 2. Dam with drainage system
The dam is 27 m high, the length of the dam crown is 485 m, and the width of the dam crown is 8 m. The elevation of the dam crown is 202 m asl. The slope of the dam is 1:2.3. The inner slope of the dam is protected from the erosive effects of atmospheric water and waves by a layer of large stones. In order to monitor changes on the dam, 29 benchmarks were installed for geodetic surveying, 6 point piezometers for measuring piezometric pressure, and 13 chemical piezometers for measuring the seepage water level, 3 of which are on the berm of the dam. The Gradina reservoir, created by the construction of the dam, covers an area of 76.7 ha and has a volume of 8 million m³. The catchment area of this tailing storage facility is 163.5 ha. The layout of the mud dam and tailing storage facility is shown in Picture 3.
Picture 3. Dam and tailing storage facility
The design elevation of the free water in the tailing storage facility is 196.70 m above sea level (asl). The average depth of free water in the lake is estimated at about 3-4 m, and the total amount of free water at about 1.1-1.5 million m³. The amount of free water should be reduced to below 0.5 million m³ by further depositing of mud. Part of the free water is returned through pipelines to the GMS plant as process water and is also used by the local population for irrigation of agricultural land. To date, 7,327,996 m³ have been deposited at the tailing storage facility, which means the storage area is almost filled with deposited mud. The remaining storage of 762,004 m³ enables the disposal of tailings for another year, at a processing capacity of approximately 2.7 million tonnes of average-grade raw ore. The construction of the second stage of the dam to an elevation of 208 m asl was cancelled because, in the meantime, conditions were created for the disposal of tailings in the abandoned SP "Jezero", which is closer to the GMS plant. The results of the granulometric tests of the deposited mud indicate that the average grain-size distribution is: +0.3 mm = 2.70%, -0.3+0.025 mm = 31.24% and -0.025+0.00 mm = 60.06%. The basic component in the chemical composition of the mud is Fe2O3 at about 56%, followed by silicon oxides at about 25% and aluminium oxides at about 7%; their combined share is over 87%. According to BAS CEN/TR 15584:2012, the Rulebook on the categories, tests and classification of waste and the instructions contained therein, as well as the chemical composition of the mud and the quality of the overflow water, it can be concluded that the deposited material is non-hazardous waste and can be categorized as 01 03 06, i.e. as a "non-Category A landfill".
RISK ASSESSMENT
There are several methods for risk assessment (PHA, HAZOP, HACCP, SWIFT, BIA, RCA, FMEA/FMECA, consequence and likelihood matrix, etc.), which take different information into account in the risk assessment process.
For the purpose of assessing the risk of the Medjedja tailing storage facility dam, potential accidents were identified, the probability of their occurrence was determined, and the severity of the consequences arising from them was then determined. Risk is obtained as a function of the probability of an accident and the severity of the consequences. Risk assessment for the Medjedja dam was performed according to the 4x4 risk matrix and ICOLD recommendations. Analysing the current state of the dam, the project technical documentation and the books of records and reports kept, the following dam damage scenarios are possible:
- overflow over the crown of the dam,
- damage due to static and seismic instability,
- contamination of watercourses by seepage water from the drainage system.
a) Accident in the scenario of water overflowing over the dam crown
The disposal of tailings mud at the "Gradina" tailing storage facility was completed in July 2017. During active use of the tailing storage facility, the water level in the lake depended on the inflow of hydro-mix from the GMS, the GMS demand for return water, rainfall over the contour of the tailing storage facility, and evaporation. Monitoring of the water level in the reservoir by month in 2017 and 2018 (through August) is given in Picture 4. In the observed period, the water level in the lake ranged from 196.32 (October 2017) to 199.29 (March 2017). The total oscillation of the water level was 2.97 m. Observed on a monthly basis, the oscillation of the maximum level was from -1.67 to +0.83 m and that of the minimum from -0.99 to +0.83 m. Two patterns are generally recorded: rising water levels from November to April/May and declining levels from April/May to October, with peaks in the spring months. If we consider the most unfavourable hydrological period in the region of the "Gradina" tailing storage facility, 19-22 June 2010, when 172 l/m² of rain fell, corresponding to a water column of 0.172 m, it can be concluded that, relative to a freeboard of 2 m, there is no danger of overflow of the storage facility's accumulation space. This is supported by the fact that the plan is to construct a safety overflow of rectangular section measuring 2.0x1.0 m, with the bottom of the overflow at an elevation of 200 m asl, to keep the water level in the storage facility within safe limits.
b) Accidents due to static and seismic instability
According to a 2017 seismic monitoring report for the dam, the strongest earthquake that occurred in the wider region of the tailing storage facility, at a distance of 24 km, measured 3.2 on the Richter scale. In the narrower region, with a radius of up to 15 km, … It is advisable to use a method of determining the likelihood of an accident based on a safety factor in the preliminary assessment of the risk of dam damage due to instability. Results of the calculation of the safety factor for static conditions on the characteristic profiles of the Medjedja dam in the Omarska Mine for the period 2015-2018 are shown in Table 1 and Picture 5, and the result for the stability factor under seismic conditions for profile 16 (the most sensitive) is shown in Picture 6. For the seismic safety factor of 1.326 determined for the Medjedja dam and the first category of structures to which the Medjedja dam belongs, an annual probability of 1×10⁻⁴ is obtained, which can be interpreted as "small".
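As a quick check of the overflow scenario in point a) above, the rainfall-versus-freeboard comparison can be reproduced in a few lines. The figures are those quoted in the text; the simplifying assumption that all rain falling on the pond translates directly into a rise of the water level (ignoring catchment inflow and evaporation) is ours, not the paper's.

# Worked check of the overflow scenario (simplified water balance)
rain_l_per_m2 = 172.0                   # most unfavourable recorded rainfall, 19-22 June 2010
level_rise_m = rain_l_per_m2 / 1000.0   # 1 l/m2 of rain = 1 mm = 0.001 m of water column
freeboard_m = 2.0                       # freeboard quoted in the text
print(f"level rise ~{level_rise_m:.3f} m vs freeboard {freeboard_m:.1f} m "
      f"-> margin {freeboard_m - level_rise_m:.3f} m, no overflow expected")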
c) Impact in the scenario of pollution of watercourses by seepage water from the drainage system
The Medjedja dam contains a pebble drain with a flow of 24 m³/day, which should accept all the water that seeps from the lake through the dam; its discharge is provided through 6 lateral outlets. Drainage water is continuously discharged into the Medjedja stream. In order to control the quantity and quality of the water, all outlets are grouped at one site. Water quality from the drainage system is satisfactory, as can be seen from the results of measurements made in 2015, 2016 and 2017. Table 2 shows the results of measuring the quality of water from the drainage system during the disposal of tailings mud in 2016. Based on these findings, the probability of each accident can be preliminarily determined:
- accident in the scenario of overflow over the dam crown - small
- static instability accident - small
- seismic instability accident - small
- accident in the scenario of pollution of watercourses by seepage water from the drainage system - small.
In assessing the consequences, the following were considered:
- human losses,
- economic consequences,
- environmental consequences,
- the reputation of the Company.
For a realistic risk assessment, it is very important to predict the characteristics of the flood wave that would form in the worst-case scenario, primarily the amount of material spilled from the tailing storage facility and the distance travelled by the flood wave. In the case of spilling of deposited material, over the dam crown or through the body of the dam, the flood wave formed would very quickly acquire the characteristics of laminar flow, since it would join the Medjedja stream and then the Gomjenica river, through whose valley it would progress further. For a rough estimate of the distance travelled, i.e. the so-called "danger zones", the Blight method may be used, according to which the maximum distance would be 2.7 km. Since the storage facility contains unconsolidated mud with micrometre-sized particles, it is likely that the water would carry a large portion of that mud with it during a breach of the dam. In the worst-case scenario, all the water could leak out, mobilizing additional mud, amounting to approximately 2 million m³ of deposited material and water. Potential human losses can be determined using the Graham method, suitable for an earthfill dam, which proposes fixed mortality rates depending on the characteristics of the accident and the number of persons at risk. Given that there are 4 residential units on the flood wave route, and assuming an average of four family members living in each, the number of people at risk is 16. For a low severity of danger, a warning time of 15 to 60 minutes and full understanding of the danger by the people at risk, Graham proposes an average mortality rate of 0.002, which for this number of people at risk does not result in a single expected casualty. The economic consequences would be local in nature, including the cost of clearing the surrounding terrain, repairing damaged local infrastructure and land remediation, as well as the cost of rebuilding residential properties on the flood wave route. The environmental consequences would be somewhat more serious, given that all environmental substrates would be endangered. There is a fair share of arable land on the flood wave route.
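The expected-fatality reasoning in the Graham-method paragraph above is a simple multiplication, reproduced here for clarity; the population at risk and the mortality rate are the values quoted in the text.

# Graham-method expected fatalities for the worst-case flood wave (worked check)
people_at_risk = 4 * 4        # 4 residential units, assumed 4 family members each
mortality_rate = 0.002        # Graham rate for this warning time and level of understanding
expected_fatalities = people_at_risk * mortality_rate
print(f"expected fatalities = {expected_fatalities:.3f}  (effectively zero casualties)")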
It is projected that the Medjedja stream and then the Gomjenica river would receive a large part of the spilled material, and their water quality would be compromised in the event of an accident. After the accident, particles would be emitted from the dried crusts of the spilled material and the air would be contaminated for some time. The reputational consequences for the owner of the dam are small in the present state of the dam. Since tailings mud will no longer be deposited, it is necessary to close and recultivate the tailing storage facility and to continue monitoring the condition of the dam and storage facility, on which the future level of risk, and consequently the consequences for the reputation of the Company, will depend. In view of the risks identified above and their significance for the risk analysis, the recommendations of ICOLD Bulletin 153 were used, which recommend a 4x4 risk matrix (Table 3), on the basis of which a risk ranking can be determined. It is recommended that the risk level of the closed storage facility be kept below level 7 (high risk) and reduced to level 2 or lower (negligible risk). Based on the preliminary results obtained, it can be concluded that the risk of the Medjedja dam is at level 3, which is interpreted as low risk.
MONITORING OF THE DAM AND TAILING STORAGE FACILITY
In order to detect and eliminate adverse impacts on the stability of the dam and tailing storage facility in a timely manner, it is necessary to establish a monitoring system that would include:
- visual observation of occurrences and events on the dam and in its surroundings (deformation of the foundation terrain and slopes of the dam, appearance of seepage water, erosion, damage to the drainage system and overflow structures, vegetation development on the crown and slopes of the dam);
- specialist measurements of parameters important for assessing the state of the dam (geodetic, hydrotechnical, seismic and meteorological measurements, and control of water quality from the drainage system);
- maintaining the alert system in good working order, with good cooperation and contact with residents regarding timely notification of dangers at the dam.
Should the Medjedja dam collapse, the Civil Protection Plan provides for engaging, in addition to the internal human and material resources of ArcelorMittal Prijedor, the Civil Protection Sector of the City of Prijedor, the Civil Protection of Republic of Srpska and B&H, and the Government of Republic of Srpska, in accordance with their competencies.
CONCLUSION
By analysing the risk for the Medjedja dam, it can be concluded that the risk is at the third level and can be interpreted as small. It should be noted that the safety factors for static and seismic conditions were calculated for the material from which the dam was constructed, assuming no change in its characteristics. Therefore, for a more detailed risk assessment, it is necessary to include this factor and further details about the storage facility after its closure, which will follow in the coming period, and in particular the modelling of the flood wave, whose characteristics determine the scale of the consequences and therefore the level of risk.
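A monitoring programme of the kind listed above ultimately comes down to comparing measured values against alert thresholds. The sketch below illustrates only that idea; the parameter names and threshold values are hypothetical placeholders, not values from this paper.

# Illustrative monitoring check (thresholds are hypothetical placeholders)
alert_thresholds = {
    "piezometric_level_m_asl": 199.0,  # hypothetical alert level for seepage water
    "crown_settlement_mm": 25.0,       # hypothetical allowable benchmark settlement
}

def exceeded(readings):
    """Return the parameters whose latest reading exceeds the alert threshold."""
    return [name for name, value in readings.items()
            if name in alert_thresholds and value > alert_thresholds[name]]

latest = {"piezometric_level_m_asl": 197.4, "crown_settlement_mm": 31.0}
print(exceeded(latest))  # -> ['crown_settlement_mm']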
<filename>dev/python/2019-02-15 bear.py
"""
Test a file that Bear had trouble opening
"""

import os
import sys
PATH_HERE = os.path.abspath(os.path.dirname(__file__))
PATH_DATA = os.path.abspath(PATH_HERE+"../../../data/abfs/")
PATH_SRC = os.path.abspath(PATH_HERE+"../../../src/")
sys.path.insert(0, PATH_SRC)

import matplotlib.pyplot as plt
import pyabf
import pyabf.filter

if __name__=="__main__":

    # load the ABF and show some info about it
    abf = pyabf.ABF(PATH_DATA+"/19212027.abf")
    print(abf)

    # apply a gentle filter because it's a bit noisy
    pyabf.filter.gaussian(abf, 2)

    # plot every sweep
    plt.figure(figsize=(8,4))
    plt.grid(alpha=.5, ls='--')
    for sweepNumber in abf.sweepList:
        abf.setSweep(sweepNumber)
        print(f"SWEEP {sweepNumber}: {abf.sweepY}")
        plt.plot(abf.sweepX, abf.sweepY)
    plt.title(abf.abfID)
    plt.xlabel(abf.sweepLabelX)
    plt.ylabel(abf.sweepLabelY)
    plt.tight_layout()
    plt.show()
Tobia Aoun Life Tobia Aoun was born in December 1803 in a small village along the banks of the Damour River in Lebanon, under the Maronite Patriarchy of Joseph VII Peter Tyan. He had five brothers: Abboud, Sleiman, Nasr, Shehdan, Salhab. In 1815, at the young age of 12, he joined the Congregation of the Virgin Mary. Three years later, at the age of 15, he joined the monastic order of the Antonins "Lebanese Maronite Order", vowing chastity, poverty, and obedience. On 30 September 1823, upon the recommendation of the monks of the monastery, he was ordained a priest by Maronite Patriarch Joseph VIII Peter Hobaish. In 1827, the same Maronite Patriarch called upon him to become his personal secretary. Satisfied with his hard work and dedication, the Maronite Patriarch requested that Aoun manage the finances of the Maronite Patriarchy, including the administration of the Maronite colleges and orphanages. On 13 March 1841, Patriarch Hobaish nominated him Maronite Bishop of Saint-John-Acre in partibus infidelium and Vicar General of the Patriarchy. Bishop Boutros Abu Karam, Maronite Bishop of Beirut since 18 November 1819, had died on 15 January 1844 thus leaving the Archbishopric of Beirut vacant. On 31 December 1844, Tobias Aoun was elected Archbishop of Beirut and installed in this archeparchy on February 9, 1845. The Maronite Patriarch's representative in Rome and Constantinople, Bishop Nicolas Murad, had been actively lobbying to become the new Archbishop of Beirut, but the Maronite Patriarch clearly confirmed Aoun as the new Archbishop. The Pope, however, suspended this confirmation upon the advice of the Apostolic Delegate, believing that neither Murad nor Aoun were suitable candidates for the Archbishopric of Beirut. A petition signed by 519 Maronite dignitaries protested the appointment of Aoun as Archbishop of Beirut, especially since Murad, they argued, had received a two-thirds majority. The French Consul in Beirut believed that the major Maronite families "favoured Murad because of his patriotism and devotion to the cause of his coreligionists". Bishop Aoun finally took possession of his chair on June 10, 1847. He would remain Archbishop of Beirut until his death. Bishop Tobia participated in two Maronite synods in electing the Patriarchs of the Maronite Church. According to Maronite procedures, the Patriarch is elected by the Maronite archbishops and bishops reunited in a synod, whereby a two-thirds majority is needed for the election to be validated. With the death of Patriarch Joseph Peter Hobaish on 23 May 1845, a synod was convened but did not meet until August due to the sectarian violence destabilizing Mount Lebanon in the early stages of the Double Qaimaqamate government. Bishop Joseph Ragi El Khazen was elected Patriarch on 18 August 1845 in Dimane and confirmed by Pope Gregory XVI on 19 January 1846. Nine years later, with the death of Patriarch Joseph IX Ragi El Khazen on 3 November 1854, Bishop Tobia participated in the synod of 12 November 1854 which elected Paul Peter Massad as Patriarch. This election was confirmed on 23 March 1855 by Pope Pius IX. Eugène Poujade, the French Consul of Beirut in the 1840s, writes the following description of Bishop Tobias in his memoirs: "The Bishop of Saint-John-Acre, Mautran Tobia (Mautran is the name the Maronites give to their bishops), was a man of roughly forty years, of an imposing figure. His eye was small but full of finesse, softness, and sincerity. He is one of the most distinguished men that I have met in the Near-East. 
He only speaks Arabic but his superior spirit has him realize the genius of Europe, and it is he who has played the most important role in the political affairs of Lebanon. He had once been a monk and led a rebellion against the abbot of the Convent of Saint Anthony. For his actions, he was exiled to the Isle of Cyprus by the Sacred Congregation for the Propagation of the Faith. Since then, his exemplary conduct has allowed him to successively be named Bishop of Saint-John-Acre in partibus and Bishop of Beirut, one of the most important diocese of Lebanon due to it being the residence of the muchir (governor) and the European general consuls. I've rarely seen in a man the same high degree of simplicity, gentleness, firmness, wisdom, the elevation of the soul and genuine Christian humility". The British envoy to Lebanon Sir Hugh Rose made the following description of Tobia Aoun on 9 September 1844: "Bishop Tubia is a violent, ambitious person of a Fellah family in the mixed district. He has strong anti-Druze and Ottoman feelings...at one time I thought that he had patriotic feelings; but if he has them, they are strongly mixed up with self-interest". This of course did not stop Rose in supporting Aoun over Murad as Archbishop of Beirut; writing to Aoun upon his nomination: "I was greatly pleased to hear you have been appointed over the bishopric of Beirut". Rose, writing to Bishop Aoun, expressed the desire that "we will always be friends". An 1862 publication on the history of Lebanon describes Aoun as "pretentious and arrogant with some people, clever and shy with others". He is described as "a real tyrant with small feet, who appears strong only when shielded far away from his enemy". Travels to Rome, Paris, Constantinople On 8 June 1862, Archbishop Tobia Aoun travelled to Rome and joined over 4000 Catholic priests in the canonization ceremony of the twenty-six Catholic martyrs of Japan. He was personally received by Pope Pius IX who named him Assistant to the Pontifical Throne, awarding him a gold and silver medal. In becoming Assistant to the Pontifical Throne, Tobia Aoun immediately entered the Papal nobility as Count of Rome. In 1862, Bishop Aoun was received by Emperor Napoleon III in Paris and awarded the French Legion of Honour. That same year, he was received by Sultan Abdul-Aziz in Constantinople and awarded the Ottoman Empire's Order of Medjidjie (Nishan-i-Majidia). In 1869, Tobia Aoun returned to Rome as Council Father of the Vatican Council called upon by Pope Pius IX. The Council had just met when King Victor Emmanuel II attacked Rome and deposed Pope Pius IX. Pius IX suspended the Council indefinitely on October 20, 1870. Tobias Aoun eventually returned home, dying on Holy Week April 4, 1871. Role in 1840–41, 1845 Civil Wars Concerning the 1840–1841 and 1845 civil conflicts in Mount Lebanon which followed the Crisis of 1840, the British envoy to Lebanon Sir Hugh Rose declared to Her Majesty's Secretary Lord Aberdeen on 17 May 1845 : "As regards the Maronite clergy, it is sufficient to say that they have organized the war...Your Lordship is already acquainted with the pecuniary aid given by Bishop Tubia for the purchase of arms. Indeed, when it is known that the Maronite Patriarch threatens to excommunicate those who do not obey the summons to go to war, all is said". 
On 4 May 1845, Sir Hugh Rose wrote to the British Ambassador in Constantinople, Sir Stratford Canning, stating that "the Maronite Patriarch, Bishop Tubia, and the clergy have taken a most decided line to induce the Christians to take arms". He states in the same letter that "Bishop Tubia gave 3000 piastres the other day to a village now burnt to purchase arms", as well as "a bond for 9000 piastres more for the same purpose". Aoun was responsible for drawing up a petition, signed by the Maronites, that "stated that under no circumstances whatever, would they, the Christians, ever voluntarily consent to be governed by a Turkish governor". Sir Hugh Rose explains on 12 January 1842 that "Bishop Tubia assembled the Christian deputies, both lay and clerical, and made them sign on the 10th ultimo a "Hedjé", a writing by which they bound themselves to petition the Porte for a Prince of the House of Shehab, taking, moreover, an oath, that whoever violated it should be answerable both with his life and his property to the remainder (an Arab formula)". As attested by the British Consul in Beirut Mr. Wood to Her Majesty's Secretary Henry John Temple Viscount Palmerston, Bishop Tobia played an active diplomatic role in establishing a new governing body for Lebanon following the 1840–1841 Civil War. In a letter dated 7 September 1841, Mr. Wood states: "Soon after my arrival, I received the visit of the Maronite Bishop Tubia, who was sent by the Patriarch to felicitate me, and to communicate to me the prelate's sentiments respecting the new arrangement to be made and the concessions granted by the Sublime Porte to the inhabitants of Lebanon". On 2 September 1841, the Maronite Patriarch wrote to Mr. Wood: "I send you Monsignor Tubia to converse with you, and to discuss this matter...I have again written to him, recommending him to communicate to you the impossibility of accomplishing your wishes". On 27 March 1842, Sir Hugh Rose wrote to Lord Aberdeen that "Bishop Tubia, in January, on an alarm of his arrest by the Turks, requested an asylum in my house, should it prove to be correct". Further proof of Tobia Aoun's role as diplomatic representative of the Maronite nation is attested by Sir Hugh Rose in a letter to Lord Aberdeen dated 7 June 1844 in which Rose states that Aoun is "the agent of the Patriarch and the real agent of the Maronite people, recognized as such by his Patriarch and the Turkish authorities in the important matters of the Government of the Mountain and the indemnities". In working towards the creation of a viable government for Mount Lebanon, Bishop Tobia declared to the Ottoman authorities on 20 March 1844 that the Christian deputies he represented would never "accept the Druze government, that they would rather that their heads should be cut off than accept that Government". The British Ambassador to the Ottoman Porte Sir Stratford Canning was satisfied by the diplomatic efforts of Bishop Aoun, stating on 17 September 1843 that "the moderation of the Christian party, as expressed by Bishop Tubia, is highly gratifying to those who take a real interest in the pacification of Mount Lebanon". Bishop Aoun had also served as the Patriarch's representative in the 'Indemnity Divan' which sought to restitute financial losses and offer compensation to all claimants who had suffered material losses during the 1840–1841 Civil War. 
Sir Hugh Rose confirms on 28 April 1844 that "Bishop Tubia (he is not himself a claimant) and those claiming compensation in and about Djouni deny strongly that any indemnity whatever has been paid for shops or coffee-shops in Djouni". On 10 August 1843, Bishop Aoun assured Sir Hugh Rose that "it was not his wish to ruin the Druzes by any excessive payment, but they should give a reasonable satisfaction to the Christians". Sir Hugh Rose states that Bishop Tobia declared: "Let the Druze, as a first step, restore at once all the plundered personal and moveable property still in their hands; and, secondly, let them engage to aid the building of such houses as were burnt by them. Let them do this; let them give this proof of atonement, and I will engage that the indemnification in money shall not be made a matter of difficulty by the Christians". Due to the various misunderstandings that arose during these meeting, Bishop Aoun decided to resign from his position at the Divan. Sir Hugh Rose states that both he and Mr. d'Adelbourg, the Austrian consul in Beirut, approved the Bishop's decision to resign. Though the deputies of the Divan requested that Aoun be "re-appointed, stating that he had great influence over them, and was devoted to their interests", the Ottoman authorities refused this option altogether. Sir Hugh Rose applauded this refusal, believing that "the absence of a political and violent bishop, possessing entire influence over a number of political agitators, was certainly, in the present state of affairs, much more desirable than his presence". In conclusion, it cannot be said that Sir Hugh Rose esteemed Bishop Tobia or the Maronites very much. On 9 August 1844, Rose deplored to Lord Aberdeen the "want of principle of all Arabs, both clergy and laity", stating that "they all, nearly without exception, have their price; if it is not money, it is something else". Rose affirmed in the same letter that "Bishop Tubia has given indication that his sternness is pliable; there have been secret requests on his part for permission to import corn duty free for his own house". Role in 1860 Civil War Bishop Aoun's role in the 1860 Civil War and its aftermath were much spoken of in the press and memoirs of the European diplomats who witnessed these events. In genuine calculated colonial strategy, their sentiments often shifted according to geopolitical alliances, with the Protestant British often at odds with the Bishop, and the Catholic French applauding his humanitarian and diplomatic efforts. The Franco-Maronite alliance, over a thousand years old, reached a culminating point in 1860 when Napoleon III dispatched 6000 French soldiers to Beirut in protection of the Maronites, essentially occupying Mount Lebanon with the approval of the Ottoman authorities. Though the French military presence would not last very long, it heightened tensions in London where British politicians were seeing the direct dismemberment of the Ottoman Empire to their geopolitical disadvantage. The possible colonization of Mount Lebanon by the French, applauded by the local Maronite clergy, was obviously contrary to British interests in the area. Khurshid Pasha, the Ottoman governor of Sidon, claimed "that at the beginning of the war, Maronite priests stirred up their parishioners by promising them that the French fleet would come to their assistance". 
For the Ottomans, European agents and local Maronites were responsible for enticing the war, with the ultimate goal of re-establishing a strong pro-European Christian government for Lebanon. Ultimately, the Ottomans would have no choice but to work with Tobia Aoun and the Maronite clergy in solidifying their rule over Mount Lebanon. This is best exemplified by the fact that Aoun, though attacked by the British as the prime instigator of the war, was immediately sent to the warring villages of Mount Lebanon by the Ottoman Porte's envoy, Fuad Pasha, to restore absolute peace and order in his name. In Bishop Aoun's archdiocese, a total of 67 churches were destroyed and 3 priests murdered in the 1860 Civil War. According to American Protestant missionary Henry Harris Jessup in Fifty-Three Years in Syria, Bishop Tobia was "the man who next to the Patriarch had done more than any other Maronite to precipitate this awful civil war". He writes of him numerous times as the "notorious Maronite Bishop Tobia". In the Missionary Herald, the official paper of the American Board of Commissioners for Foreign Missions, Jassup writes of Aoun on 14 March 1865 : "The Notorious Maronite Bishop came to (Baabda) with a swarm of priests, dispensing indulgences in accordance with the Pope's encyclical letter". Lord Dufferin, the United Kingdom's extraordinary envoy to Lebanon in 1860–1861, wrote to Her Majesty's Secretary of State Lord John Russell in his Correspondence Relating to the Affairs of Syria: "With regard to Bishop Tobia, who may be considered one of the chief causes of all the misery and bloodshed which has existed in the Lebanon, I would only say that his removal from the country is an absolute necessity. Unfortunately, it will be difficult to discover any direct evidence against him...(His) ambition and passion for intrigue verify one's conception of the worst specimen of a medieval ecclesiastic". Lord Dufferin, writing to Her Majesty's Secretary of State Lord Russell on 19 December 1860, stated that Tobia Aoun exercised a "sinister influence" in Lebanon, and that his "withdrawal from Beirut was insisted upon as a necessary preliminary to all chance of peace". In this dispatch, Lord Dufferin insinuates that Tobia Aoun is responsible for the 20,000 pistols imported directly into Lebanon between 1857 and 1860. He refutes the notion that the Maronites are "saintly martyrs" but rather "as savage and bloodthirsty in their traditional warfare as their pagan neighbours". On 18 January 1861, Lord Dufferin wrote Henry Bulwer, British ambassador to the Ottoman Empire, that during the Civil War the rebel Maronite leaders "were encouraged and countenanced in their excesses by Bishop Tobia and some of his brother ecclesiastics". In the British "Journal of the Foreign Affairs Committees" dated 4 December 1861, Tobia Aoun and the Maronite bishops are described as "unprincipled, ambitious priests", guilty of "wickedness and audacious treason". The same journal states that "It is really humiliating to hear men who call themselves the servants of Christ bellow forth the first principle of the devil -murder!". On 16 September 1860, the French general Auguste-Alexandre Ducrot wrote of Aoun: "I received today the visit of a figure who has, for many years, played a great role in all the affairs of Lebanon: it is Mgr. Tobie, Bishop of the Diocese of Beirut". 
An 1862 publication on the history of Lebanon states that Bishop Tobia was responsible for the conversion to Catholicism of Medjid Shehab, grandson of the famed Prince Bashir II. Having gone into exile in Constantinople with his grandfather Bashir II, Medjid had returned to Lebanon and was supported by Aoun and the Maronites as a possible candidate for governor of Mount Lebanon. In 1876, the French author and diplomat Eugène-Melchior de Vogüé wrote of "the savage and heroic figure of Bishop Tobias leading his flock to combat".
/* * Copyright 2016-2019 Axioma srl. * * Licensed under the Apache License, Version 2.0 (the "License"); you may not * use this file except in compliance with the License. You may obtain a copy of * the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the * License for the specific language governing permissions and limitations under * the License. */ package com.holonplatform.vaadin.flow.internal.components.builders; import com.holonplatform.core.internal.utils.ObjectUtils; import com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator; import com.vaadin.flow.component.Component; import com.vaadin.flow.component.KeyModifier; /** * A {@link ShortcutConfigurator} which delegates actual configuration to another one. * * @param <P> Parent configurator * * @since 5.2.3 */ public class DelegatedShortcutConfigurator<P> implements ShortcutConfigurator<P> { private final ShortcutConfigurator<?> delegate; private final P parent; /** * Constructor. * @param delegate Delegate configurator (not null) * @param parent Parent configurator (not null) */ public DelegatedShortcutConfigurator(ShortcutConfigurator<?> delegate, P parent) { super(); ObjectUtils.argumentNotNull(delegate, "Delegate configurator must be not null"); ObjectUtils.argumentNotNull(parent, "Parent configurator must be not null"); this.delegate = delegate; this.parent = parent; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#modifiers(com.vaadin.flow.component. * KeyModifier[]) */ @Override public ShortcutConfigurator<P> modifiers(KeyModifier... keyModifiers) { delegate.modifiers(keyModifiers); return this; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#withAlt() */ @Override public ShortcutConfigurator<P> withAlt() { delegate.withAlt(); return this; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#withCtrl() */ @Override public ShortcutConfigurator<P> withCtrl() { delegate.withCtrl(); return this; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#withShift() */ @Override public ShortcutConfigurator<P> withShift() { delegate.withShift(); return this; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#withMeta() */ @Override public ShortcutConfigurator<P> withMeta() { delegate.withMeta(); return this; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#allowBrowserDefault() */ @Override public ShortcutConfigurator<P> allowBrowserDefault() { delegate.allowBrowserDefault(); return this; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#allowEventPropagation() */ @Override public ShortcutConfigurator<P> allowEventPropagation() { delegate.allowEventPropagation(); return this; } /* * (non-Javadoc) * @see * com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#bindLifecycleTo(com.vaadin.flow.component. 
* Component) */ @Override public ShortcutConfigurator<P> bindLifecycleTo(Component component) { delegate.bindLifecycleTo(component); return this; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#listenOn(com.vaadin.flow.component. * Component) */ @Override public ShortcutConfigurator<P> listenOn(Component listenOnComponent) { delegate.listenOn(listenOnComponent); return this; } /* * (non-Javadoc) * @see com.holonplatform.vaadin.flow.components.builders.ShortcutConfigurator#add() */ @Override public P add() { return parent; } }
<gh_stars>0 /* Copyright 2020 The HAProxy Ingress Controller Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package tracker import ( "fmt" "sort" "strings" convtypes "github.com/jcmoraisjr/haproxy-ingress/pkg/converters/types" hatypes "github.com/jcmoraisjr/haproxy-ingress/pkg/haproxy/types" ) // NewTracker ... func NewTracker() convtypes.Tracker { return &tracker{} } type ( stringStringMap map[string]map[string]empty stringBackendMap map[string]map[hatypes.BackendID]empty backendStringMap map[hatypes.BackendID]map[string]empty // empty struct{} ) type tracker struct { // ingress ingressHostname stringStringMap hostnameIngress stringStringMap ingressBackend stringBackendMap backendIngress backendStringMap ingressStorages stringStringMap storagesIngress stringStringMap // ingressClass ingressClassHostname stringStringMap hostnameIngressClass stringStringMap // configMap configMapHostname stringStringMap hostnameConfigMap stringStringMap // service serviceHostname stringStringMap hostnameService stringStringMap // secret secretHostname stringStringMap hostnameSecret stringStringMap secretBackend stringBackendMap backendSecret backendStringMap secretUserlist stringStringMap userlistSecret stringStringMap // pod podBackend stringBackendMap backendPod backendStringMap // ingressClass (missing) ingressClassHostnameMissing stringStringMap hostnameIngressClassMissing stringStringMap // configMap (missing) configMapHostnameMissing stringStringMap hostnameConfigMapMissing stringStringMap // service (missing) serviceHostnameMissing stringStringMap hostnameServiceMissing stringStringMap // secret (missing) secretHostnameMissing stringStringMap hostnameSecretMissing stringStringMap secretBackendMissing stringBackendMap backendSecretMissing backendStringMap } func (t *tracker) Track(isMissing bool, track convtypes.TrackingTarget, rtype convtypes.ResourceType, name string) { if track.Hostname != "" { if isMissing { t.TrackMissingOnHostname(rtype, name, track.Hostname) } else { t.TrackHostname(rtype, name, track.Hostname) } } if track.Backend.Name != "" { if isMissing { t.TrackMissingOnBackend(rtype, name, track.Backend) } else { t.TrackBackend(rtype, name, track.Backend) } } if track.Userlist != "" { if !isMissing { t.TrackUserlist(rtype, name, track.Userlist) } } } func (t *tracker) TrackHostname(rtype convtypes.ResourceType, name, hostname string) { validName(rtype, name) switch rtype { case convtypes.IngressType: addStringTracking(&t.ingressHostname, name, hostname) addStringTracking(&t.hostnameIngress, hostname, name) case convtypes.IngressClassType: addStringTracking(&t.ingressClassHostname, name, hostname) addStringTracking(&t.hostnameIngressClass, hostname, name) case convtypes.ConfigMapType: addStringTracking(&t.configMapHostname, name, hostname) addStringTracking(&t.hostnameConfigMap, hostname, name) case convtypes.ServiceType: addStringTracking(&t.serviceHostname, name, hostname) addStringTracking(&t.hostnameService, hostname, name) case convtypes.SecretType: addStringTracking(&t.secretHostname, name, hostname) 
addStringTracking(&t.hostnameSecret, hostname, name) default: panic(fmt.Errorf("unsupported resource type %d", rtype)) } } func (t *tracker) TrackBackend(rtype convtypes.ResourceType, name string, backendID hatypes.BackendID) { validName(rtype, name) switch rtype { case convtypes.IngressType: addStringBackendTracking(&t.ingressBackend, name, backendID) addBackendStringTracking(&t.backendIngress, backendID, name) case convtypes.SecretType: addStringBackendTracking(&t.secretBackend, name, backendID) addBackendStringTracking(&t.backendSecret, backendID, name) case convtypes.PodType: addStringBackendTracking(&t.podBackend, name, backendID) addBackendStringTracking(&t.backendPod, backendID, name) default: panic(fmt.Errorf("unsupported resource type %d", rtype)) } } func (t *tracker) TrackUserlist(rtype convtypes.ResourceType, name, userlist string) { validName(rtype, name) switch rtype { case convtypes.SecretType: addStringTracking(&t.secretUserlist, name, userlist) addStringTracking(&t.userlistSecret, userlist, name) default: panic(fmt.Errorf("unsupported resource type %d", rtype)) } } func (t *tracker) TrackStorage(rtype convtypes.ResourceType, name, storage string) { validName(rtype, name) switch rtype { case convtypes.IngressType: addStringTracking(&t.ingressStorages, name, storage) addStringTracking(&t.storagesIngress, storage, name) default: panic(fmt.Errorf("unsupported resource type %d", rtype)) } } func (t *tracker) TrackMissingOnHostname(rtype convtypes.ResourceType, name, hostname string) { validName(rtype, name) switch rtype { case convtypes.IngressClassType: addStringTracking(&t.ingressClassHostnameMissing, name, hostname) addStringTracking(&t.hostnameIngressClassMissing, hostname, name) case convtypes.ConfigMapType: addStringTracking(&t.configMapHostnameMissing, name, hostname) addStringTracking(&t.hostnameConfigMapMissing, hostname, name) case convtypes.ServiceType: addStringTracking(&t.serviceHostnameMissing, name, hostname) addStringTracking(&t.hostnameServiceMissing, hostname, name) case convtypes.SecretType: addStringTracking(&t.secretHostnameMissing, name, hostname) addStringTracking(&t.hostnameSecretMissing, hostname, name) default: panic(fmt.Errorf("unsupported resource type %d", rtype)) } } func (t *tracker) TrackMissingOnBackend(rtype convtypes.ResourceType, name string, backendID hatypes.BackendID) { validName(rtype, name) switch rtype { case convtypes.SecretType: addStringBackendTracking(&t.secretBackendMissing, name, backendID) addBackendStringTracking(&t.backendSecretMissing, backendID, name) default: panic(fmt.Errorf("unsupported resource type %d", rtype)) } } func validName(rtype convtypes.ResourceType, name string) { if name == "" { panic(fmt.Errorf("tracking resource name cannot be empty")) } namespaced := rtype != convtypes.IngressClassType slashCount := strings.Count(name, "/") if (!namespaced && slashCount != 0) || (namespaced && slashCount != 1) { panic(fmt.Errorf("invalid resource name: %s", name)) } } // GetDirtyLinks lists all hostnames and backendIDs that a // list of ingress touches directly or indirectly: // // * when a hostname is listed, all other hostnames of all ingress that // references it should also be listed; // * when a backendID (service+port) is listed, all other backendIDs of // all ingress that references it should also be listed. 
// func (t *tracker) GetDirtyLinks( oldIngressList, addIngressList []string, oldIngressClassList, addIngressClassList []string, oldConfigMapList, addConfigMapList []string, oldServiceList, addServiceList []string, oldSecretList, addSecretList []string, addPodList []string, ) (dirtyIngs, dirtyHosts []string, dirtyBacks []hatypes.BackendID, dirtyUsers, dirtyStorages []string) { ingsMap := make(map[string]empty) hostsMap := make(map[string]empty) backsMap := make(map[hatypes.BackendID]empty) usersMap := make(map[string]empty) storagesMap := make(map[string]empty) // recursively fill hostsMap and backsMap from ingress and secrets // that directly or indirectly are referenced by them var build func([]string) build = func(ingNames []string) { for _, ingName := range ingNames { ingsMap[ingName] = empty{} for _, hostname := range t.getHostnamesByIngress(ingName) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } for _, backend := range t.getBackendsByIngress(ingName) { if _, found := backsMap[backend]; !found { backsMap[backend] = empty{} build(t.getIngressByBackend(backend)) } } for _, storage := range t.getStoragesByIngress(ingName) { if _, found := storagesMap[storage]; !found { storagesMap[storage] = empty{} build(t.getIngressByStorage(storage)) } } } } build(oldIngressList) build(addIngressList) // for _, className := range oldIngressClassList { for _, hostname := range t.getHostnamesByIngressClass(className) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } } for _, className := range addIngressClassList { for _, hostname := range t.getHostnamesByIngressClassMissing(className) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } } // for _, className := range oldConfigMapList { for _, hostname := range t.getHostnamesByConfigMap(className) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } } for _, className := range addConfigMapList { for _, hostname := range t.getHostnamesByConfigMapMissing(className) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } } // for _, svcName := range oldServiceList { for _, hostname := range t.getHostnamesByService(svcName) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } } for _, svcName := range addServiceList { for _, hostname := range t.getHostnamesByServiceMissing(svcName) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } } // for _, secretName := range oldSecretList { for _, hostname := range t.getHostnamesBySecret(secretName) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } for _, backend := range t.getBackendsBySecret(secretName) { if _, found := backsMap[backend]; !found { backsMap[backend] = empty{} build(t.getIngressByBackend(backend)) } } for _, userlist := range t.getUserlistsBySecret(secretName) { if _, found := usersMap[userlist]; !found { usersMap[userlist] = empty{} } } } for _, secretName := range addSecretList { for _, hostname := range t.getHostnamesBySecretMissing(secretName) { if _, found := hostsMap[hostname]; !found { hostsMap[hostname] = empty{} build(t.getIngressByHostname(hostname)) } } for _, 
backend := range t.getBackendsBySecretMissing(secretName) { if _, found := backsMap[backend]; !found { backsMap[backend] = empty{} build(t.getIngressByBackend(backend)) } } } // for _, podName := range addPodList { for _, backend := range t.getBackendsByPod(podName) { if _, found := backsMap[backend]; !found { backsMap[backend] = empty{} build(t.getIngressByBackend(backend)) } } } // convert hostsMap and backsMap to slices if len(ingsMap) > 0 { dirtyIngs = make([]string, 0, len(ingsMap)) for ing := range ingsMap { dirtyIngs = append(dirtyIngs, ing) } sort.Strings(dirtyIngs) } if len(hostsMap) > 0 { dirtyHosts = make([]string, 0, len(hostsMap)) for host := range hostsMap { dirtyHosts = append(dirtyHosts, host) } sort.Strings(dirtyHosts) } if len(backsMap) > 0 { dirtyBacks = make([]hatypes.BackendID, 0, len(backsMap)) for back := range backsMap { dirtyBacks = append(dirtyBacks, back) } sort.Slice(dirtyBacks, func(i, j int) bool { return dirtyBacks[i].String() < dirtyBacks[j].String() }) } if len(usersMap) > 0 { dirtyUsers = make([]string, 0, len(usersMap)) for user := range usersMap { dirtyUsers = append(dirtyUsers, user) } sort.Strings(dirtyUsers) } if len(storagesMap) > 0 { dirtyStorages = make([]string, 0, len(storagesMap)) for storage := range storagesMap { dirtyStorages = append(dirtyStorages, storage) } sort.Strings(dirtyStorages) } return dirtyIngs, dirtyHosts, dirtyBacks, dirtyUsers, dirtyStorages } func (t *tracker) DeleteHostnames(hostnames []string) { for _, hostname := range hostnames { for ing := range t.hostnameIngress[hostname] { deleteStringTracking(&t.ingressHostname, ing, hostname) } deleteStringMapKey(&t.hostnameIngress, hostname) for class := range t.hostnameIngressClass[hostname] { deleteStringTracking(&t.ingressClassHostname, class, hostname) } deleteStringMapKey(&t.hostnameIngressClass, hostname) for class := range t.hostnameIngressClassMissing[hostname] { deleteStringTracking(&t.ingressClassHostnameMissing, class, hostname) } deleteStringMapKey(&t.hostnameIngressClassMissing, hostname) for class := range t.hostnameConfigMap[hostname] { deleteStringTracking(&t.configMapHostname, class, hostname) } deleteStringMapKey(&t.hostnameConfigMap, hostname) for class := range t.hostnameConfigMapMissing[hostname] { deleteStringTracking(&t.configMapHostnameMissing, class, hostname) } deleteStringMapKey(&t.hostnameConfigMapMissing, hostname) for service := range t.hostnameService[hostname] { deleteStringTracking(&t.serviceHostname, service, hostname) } deleteStringMapKey(&t.hostnameService, hostname) for service := range t.hostnameServiceMissing[hostname] { deleteStringTracking(&t.serviceHostnameMissing, service, hostname) } deleteStringMapKey(&t.hostnameServiceMissing, hostname) for secret := range t.hostnameSecret[hostname] { deleteStringTracking(&t.secretHostname, secret, hostname) } deleteStringMapKey(&t.hostnameSecret, hostname) for secret := range t.hostnameSecretMissing[hostname] { deleteStringTracking(&t.secretHostnameMissing, secret, hostname) } deleteStringMapKey(&t.hostnameSecretMissing, hostname) } } func (t *tracker) DeleteBackends(backends []hatypes.BackendID) { for _, backend := range backends { for ing := range t.backendIngress[backend] { deleteStringBackendTracking(&t.ingressBackend, ing, backend) } deleteBackendStringMapKey(&t.backendIngress, backend) for secret := range t.backendSecret[backend] { deleteStringBackendTracking(&t.secretBackend, secret, backend) } deleteBackendStringMapKey(&t.backendSecret, backend) for secret := range t.backendSecretMissing[backend] 
{ deleteStringBackendTracking(&t.secretBackendMissing, secret, backend) } deleteBackendStringMapKey(&t.backendSecretMissing, backend) for pod := range t.backendPod[backend] { deleteStringBackendTracking(&t.podBackend, pod, backend) } deleteBackendStringMapKey(&t.backendPod, backend) } } func (t *tracker) DeleteUserlists(userlists []string) { for _, userlist := range userlists { for secret := range t.userlistSecret[userlist] { deleteStringTracking(&t.secretUserlist, secret, userlist) } deleteStringMapKey(&t.userlistSecret, userlist) } } func (t *tracker) DeleteStorages(storages []string) { for _, storage := range storages { for ing := range t.storagesIngress[storage] { deleteStringTracking(&t.ingressStorages, ing, storage) } deleteStringMapKey(&t.storagesIngress, storage) } } func (t *tracker) getIngressByHostname(hostname string) []string { if t.hostnameIngress == nil { return nil } return getStringTracking(t.hostnameIngress[hostname]) } func (t *tracker) getHostnamesByIngress(ingName string) []string { if t.ingressHostname == nil { return nil } return getStringTracking(t.ingressHostname[ingName]) } func (t *tracker) getIngressByBackend(backendID hatypes.BackendID) []string { if t.backendIngress == nil { return nil } return getStringTracking(t.backendIngress[backendID]) } func (t *tracker) getBackendsByIngress(ingName string) []hatypes.BackendID { if t.ingressBackend == nil { return nil } return getBackendTracking(t.ingressBackend[ingName]) } func (t *tracker) getIngressByStorage(storages string) []string { if t.storagesIngress == nil { return nil } return getStringTracking(t.storagesIngress[storages]) } func (t *tracker) getStoragesByIngress(ingName string) []string { if t.ingressStorages == nil { return nil } return getStringTracking(t.ingressStorages[ingName]) } func (t *tracker) getHostnamesByIngressClass(ingressClassName string) []string { if t.ingressClassHostname == nil { return nil } return getStringTracking(t.ingressClassHostname[ingressClassName]) } func (t *tracker) getHostnamesByIngressClassMissing(ingressClassName string) []string { if t.ingressClassHostnameMissing == nil { return nil } return getStringTracking(t.ingressClassHostnameMissing[ingressClassName]) } func (t *tracker) getHostnamesByConfigMap(configMapName string) []string { if t.configMapHostname == nil { return nil } return getStringTracking(t.configMapHostname[configMapName]) } func (t *tracker) getHostnamesByConfigMapMissing(configMapName string) []string { if t.configMapHostnameMissing == nil { return nil } return getStringTracking(t.configMapHostnameMissing[configMapName]) } func (t *tracker) getHostnamesByService(serviceName string) []string { if t.serviceHostname == nil { return nil } return getStringTracking(t.serviceHostname[serviceName]) } func (t *tracker) getHostnamesByServiceMissing(serviceName string) []string { if t.serviceHostnameMissing == nil { return nil } return getStringTracking(t.serviceHostnameMissing[serviceName]) } func (t *tracker) getHostnamesBySecret(secretName string) []string { if t.secretHostname == nil { return nil } return getStringTracking(t.secretHostname[secretName]) } func (t *tracker) getHostnamesBySecretMissing(secretName string) []string { if t.secretHostnameMissing == nil { return nil } return getStringTracking(t.secretHostnameMissing[secretName]) } func (t *tracker) getBackendsBySecret(secretName string) []hatypes.BackendID { if t.secretBackend == nil { return nil } return getBackendTracking(t.secretBackend[secretName]) } func (t *tracker) 
getBackendsBySecretMissing(secretName string) []hatypes.BackendID { if t.secretBackendMissing == nil { return nil } return getBackendTracking(t.secretBackendMissing[secretName]) } func (t *tracker) getUserlistsBySecret(secretName string) []string { if t.secretUserlist == nil { return nil } return getStringTracking(t.secretUserlist[secretName]) } func (t *tracker) getBackendsByPod(podName string) []hatypes.BackendID { if t.podBackend == nil { return nil } return getBackendTracking(t.podBackend[podName]) } func addStringTracking(trackingRef *stringStringMap, key, value string) { if *trackingRef == nil { *trackingRef = stringStringMap{} } tracking := *trackingRef trackingMap, found := tracking[key] if !found { trackingMap = map[string]empty{} tracking[key] = trackingMap } trackingMap[value] = empty{} } func addBackendStringTracking(trackingRef *backendStringMap, key hatypes.BackendID, value string) { if *trackingRef == nil { *trackingRef = backendStringMap{} } tracking := *trackingRef trackingMap, found := tracking[key] if !found { trackingMap = map[string]empty{} tracking[key] = trackingMap } trackingMap[value] = empty{} } func addStringBackendTracking(trackingRef *stringBackendMap, key string, value hatypes.BackendID) { if *trackingRef == nil { *trackingRef = stringBackendMap{} } tracking := *trackingRef trackingMap, found := tracking[key] if !found { trackingMap = map[hatypes.BackendID]empty{} tracking[key] = trackingMap } trackingMap[value] = empty{} } func getStringTracking(tracking map[string]empty) []string { stringList := make([]string, 0, len(tracking)) for value := range tracking { stringList = append(stringList, value) } return stringList } func getBackendTracking(tracking map[hatypes.BackendID]empty) []hatypes.BackendID { backendList := make([]hatypes.BackendID, 0, len(tracking)) for value := range tracking { backendList = append(backendList, value) } return backendList } func deleteStringTracking(trackingRef *stringStringMap, key, value string) { if *trackingRef == nil { return } tracking := *trackingRef trackingMap := tracking[key] delete(trackingMap, value) if len(trackingMap) == 0 { delete(tracking, key) } if len(tracking) == 0 { *trackingRef = nil } } func deleteStringBackendTracking(trackingRef *stringBackendMap, key string, value hatypes.BackendID) { if *trackingRef == nil { return } tracking := *trackingRef trackingMap := tracking[key] delete(trackingMap, value) if len(trackingMap) == 0 { delete(tracking, key) } if len(tracking) == 0 { *trackingRef = nil } } func deleteStringMapKey(stringMap *stringStringMap, key string) { delete(*stringMap, key) if len(*stringMap) == 0 { *stringMap = nil } } func deleteBackendStringMapKey(backendMap *backendStringMap, key hatypes.BackendID) { delete(*backendMap, key) if len(*backendMap) == 0 { *backendMap = nil } }
Q: What are the limitations for a guitarist with arthritis in his hands? I have found that arthritis (or general hand problems and pains) is very common and a big problem among guitar players. I am currently suffering from osteoarthritis, more seriously in my left hand, and I am looking into solving this problem. If there are any players who suffer from pain when playing, most specifically arthritis, could you please tell me the exact limitations and problems you encounter? For example, fingers can't reach the fret, or it is too painful to hold down strings on an acoustic?

A: Not necessarily about arthritis, but any hand problems. I suffered a slight stroke a few years ago and my left hand was missing chords by a couple of frets. After playing and earning a living from my passion, I thought it was over. But after I finished feeling sorry for myself and realizing it could have been a lot worse, I started over. My right hand and brain were still working, so I started on open tunings just to make some noise. This was the best thing that could happen. Not knowing the familiar chords or scale patterns, it was a new start. I know the pain and frustration that you may feel, but if you have music in you, you will find a way to get it out. Even a 12-bar and a slide sounds great to the average listener. A guitar is a box of tricks waiting for a magician to come along. Good luck. Regards.

A: I don't have arthritis, but this may be applicable: I have suffered for many years from recurrent RSI in my left wrist when it is bent over, which, when it is bad, makes playing more than a couple of barre chords in succession impossible, and even on a good day limits what I am capable of barring. My solutions were: Primarily - play fingerstyle pretty much exclusively. This means I can often fret only the strings I am currently playing, and not worry about all six; the wrist strength required to barre is far more of a problem for me than finger gymnastics. Secondarily, just change the arrangement. Intersperse non-barred chords between the barre chords to stretch my wrist back in the other direction frequently. I am an amateur with no requirement to learn songs "authentically", so both of these solutions work for me. And one more thing is to accept my limits - on a bad day, put the guitar down instead of getting frustrated and down; on a good day, play as much as possible.
# Tests for fitting specific distributions to censored data. import numpy as np from numpy.testing import assert_allclose from scipy.optimize import fmin from scipy.stats import (CensoredData, beta, cauchy, chi2, expon, gamma, gumbel_l, gumbel_r, invgauss, invweibull, laplace, logistic, lognorm, nct, ncx2, norm, weibull_max, weibull_min) # In some tests, we'll use this optimizer for improved accuracy. def optimizer(func, x0, args=(), disp=0): return fmin(func, x0, args=args, disp=disp, xtol=1e-12, ftol=1e-12) def test_beta(): """ Test fitting beta shape parameters to interval-censored data. Calculation in R: > library(fitdistrplus) > data <- data.frame(left=c(0.10, 0.50, 0.75, 0.80), + right=c(0.20, 0.55, 0.90, 0.95)) > result = fitdistcens(data, 'beta', control=list(reltol=1e-14)) > result Fitting of the distribution ' beta ' on censored data by maximum likelihood Parameters: estimate shape1 1.419941 shape2 1.027066 > result$sd shape1 shape2 0.9914177 0.6866565 """ data = CensoredData(interval=[[0.10, 0.20], [0.50, 0.55], [0.75, 0.90], [0.80, 0.95]]) # For this test, fit only the shape parameters; loc and scale are fixed. a, b, loc, scale = beta.fit(data, floc=0, fscale=1, optimizer=optimizer) assert_allclose(a, 1.419941, rtol=5e-6) assert_allclose(b, 1.027066, rtol=5e-6) assert loc == 0 assert scale == 1 def test_cauchy_right_censored(): """ Test fitting the Cauchy distribution to right-censored data. Calculation in R, with two values not censored [1, 10] and one right-censored value [30]. > library(fitdistrplus) > data <- data.frame(left=c(1, 10, 30), right=c(1, 10, NA)) > result = fitdistcens(data, 'cauchy', control=list(reltol=1e-14)) > result Fitting of the distribution ' cauchy ' on censored data by maximum likelihood Parameters: estimate location 7.100001 scale 7.455866 """ data = CensoredData(uncensored=[1, 10], right=[30]) loc, scale = cauchy.fit(data, optimizer=optimizer) assert_allclose(loc, 7.10001, rtol=5e-6) assert_allclose(scale, 7.455866, rtol=5e-6) def test_cauchy_mixed(): """ Test fitting the Cauchy distribution to data with mixed censoring. Calculation in R, with: * two values not censored [1, 10], * one left-censored [1], * one right-censored [30], and * one interval-censored [[4, 8]]. > library(fitdistrplus) > data <- data.frame(left=c(NA, 1, 4, 10, 30), right=c(1, 1, 8, 10, NA)) > result = fitdistcens(data, 'cauchy', control=list(reltol=1e-14)) > result Fitting of the distribution ' cauchy ' on censored data by maximum likelihood Parameters: estimate location 4.605150 scale 5.900852 """ data = CensoredData(uncensored=[1, 10], left=[1], right=[30], interval=[[4, 8]]) loc, scale = cauchy.fit(data, optimizer=optimizer) assert_allclose(loc, 4.605150, rtol=5e-6) assert_allclose(scale, 5.900852, rtol=5e-6) def test_chi2_mixed(): """ Test fitting just the shape parameter (df) of chi2 to mixed data. Calculation in R, with: * two values not censored [1, 10], * one left-censored [1], * one right-censored [30], and * one interval-censored [[4, 8]]. 
> library(fitdistrplus) > data <- data.frame(left=c(NA, 1, 4, 10, 30), right=c(1, 1, 8, 10, NA)) > result = fitdistcens(data, 'chisq', control=list(reltol=1e-14)) > result Fitting of the distribution ' chisq ' on censored data by maximum likelihood Parameters: estimate df 5.060329 """ data = CensoredData(uncensored=[1, 10], left=[1], right=[30], interval=[[4, 8]]) df, loc, scale = chi2.fit(data, floc=0, fscale=1, optimizer=optimizer) assert_allclose(df, 5.060329, rtol=5e-6) assert loc == 0 assert scale == 1 def test_expon_right_censored(): """ For the exponential distribution with loc=0, the exact solution for fitting n uncensored points x[0]...x[n-1] and m right-censored points x[n]..x[n+m-1] is scale = sum(x)/n That is, divide the sum of all the values (not censored and right-censored) by the number of uncensored values. (See, for example, https://en.wikipedia.org/wiki/Censoring_(statistics)#Likelihood.) The second derivative of the log-likelihood function is n/scale**2 - 2*sum(x)/scale**3 from which the estimate of the standard error can be computed. ----- Calculation in R, for reference only. The R results are not used in the test. > library(fitdistrplus) > dexps <- function(x, scale) { + return(dexp(x, 1/scale)) + } > pexps <- function(q, scale) { + return(pexp(q, 1/scale)) + } > left <- c(1, 2.5, 3, 6, 7.5, 10, 12, 12, 14.5, 15, + 16, 16, 20, 20, 21, 22) > right <- c(1, 2.5, 3, 6, 7.5, 10, 12, 12, 14.5, 15, + NA, NA, NA, NA, NA, NA) > result = fitdistcens(data, 'exps', start=list(scale=mean(data$left)), + control=list(reltol=1e-14)) > result Fitting of the distribution ' exps ' on censored data by maximum likelihood Parameters: estimate scale 19.85 > result$sd scale 6.277119 """ # This data has 10 uncensored values and 6 right-censored values. obs = [1, 2.5, 3, 6, 7.5, 10, 12, 12, 14.5, 15, 16, 16, 20, 20, 21, 22] cens = [False]*10 + [True]*6 data = CensoredData.right_censored(obs, cens) loc, scale = expon.fit(data, floc=0, optimizer=optimizer) assert loc == 0 # Use the analytical solution to compute the expected value. This # is the sum of the observed values divided by the number of uncensored # values. n = len(data) - data.num_censored() total = data._uncensored.sum() + data._right.sum() expected = total / n assert_allclose(scale, expected, 1e-8) def test_gamma_right_censored(): """ Fit gamma shape and scale to data with one right-censored value. Calculation in R: > library(fitdistrplus) > data <- data.frame(left=c(2.5, 2.9, 3.8, 9.1, 9.3, 12.0, 23.0, 25.0), + right=c(2.5, 2.9, 3.8, 9.1, 9.3, 12.0, 23.0, NA)) > result = fitdistcens(data, 'gamma', start=list(shape=1, scale=10), + control=list(reltol=1e-13)) > result Fitting of the distribution ' gamma ' on censored data by maximum likelihood Parameters: estimate shape 1.447623 scale 8.360197 > result$sd shape scale 0.7053086 5.1016531 """ # The last value is right-censored. x = CensoredData.right_censored([2.5, 2.9, 3.8, 9.1, 9.3, 12.0, 23.0, 25.0], [0]*7 + [1]) a, loc, scale = gamma.fit(x, floc=0, optimizer=optimizer) assert_allclose(a, 1.447623, rtol=5e-6) assert loc == 0 assert_allclose(scale, 8.360197, rtol=5e-6) def test_gumbel(): """ Fit gumbel_l and gumbel_r to censored data. This R calculation should match gumbel_r. 
> library(evd) > libary(fitdistrplus) > data = data.frame(left=c(0, 2, 3, 9, 10, 10), + right=c(1, 2, 3, 9, NA, NA)) > result = fitdistcens(data, 'gumbel', + control=list(reltol=1e-14), + start=list(loc=4, scale=5)) > result Fitting of the distribution ' gumbel ' on censored data by maximum likelihood Parameters: estimate loc 4.487853 scale 4.843640 """ # First value is interval-censored. Last two are right-censored. uncensored = np.array([2, 3, 9]) right = np.array([10, 10]) interval = np.array([[0, 1]]) data = CensoredData(uncensored, right=right, interval=interval) loc, scale = gumbel_r.fit(data, optimizer=optimizer) assert_allclose(loc, 4.487853, rtol=5e-6) assert_allclose(scale, 4.843640, rtol=5e-6) # Negate the data and reverse the intervals, and test with gumbel_l. data2 = CensoredData(-uncensored, left=-right, interval=-interval[:, ::-1]) # Fitting gumbel_l to data2 should give the same result as above, but # with loc negated. loc2, scale2 = gumbel_l.fit(data2, optimizer=optimizer) assert_allclose(loc2, -4.487853, rtol=5e-6) assert_allclose(scale2, 4.843640, rtol=5e-6) def test_invgauss(): """ Fit just the shape parameter of invgauss to data with one value left-censored and one value right-censored. Calculation in R; using a fixed dispersion parameter amounts to fixing the scale to be 1. > library(statmod) > library(fitdistrplus) > left <- c(NA, 0.4813096, 0.5571880, 0.5132463, 0.3801414, 0.5904386, + 0.4822340, 0.3478597, 3, 0.7191797, 1.5810902, 0.4442299) > right <- c(0.15, 0.4813096, 0.5571880, 0.5132463, 0.3801414, 0.5904386, + 0.4822340, 0.3478597, NA, 0.7191797, 1.5810902, 0.4442299) > data <- data.frame(left=left, right=right) > result = fitdistcens(data, 'invgauss', control=list(reltol=1e-12), + fix.arg=list(dispersion=1), start=list(mean=3)) > result Fitting of the distribution ' invgauss ' on censored data by maximum likelihood Parameters: estimate mean 0.853469 Fixed parameters: value dispersion 1 > result$sd mean 0.247636 Here's the R calculation with the dispersion as a free parameter to be fit. > result = fitdistcens(data, 'invgauss', control=list(reltol=1e-12), + start=list(mean=3, dispersion=1)) > result Fitting of the distribution ' invgauss ' on censored data by maximum likelihood Parameters: estimate mean 0.8699819 dispersion 1.2261362 The parametrization of the inverse Gaussian distribution in the `statmod` package is not the same as in SciPy (see https://arxiv.org/abs/1603.06687 for details). The translation from R to SciPy is scale = 1/dispersion mu = mean * dispersion > 1/result$estimate['dispersion'] # 1/dispersion dispersion 0.8155701 > result$estimate['mean'] * result$estimate['dispersion'] mean 1.066716 Those last two values are the SciPy scale and shape parameters. """ # One point is left-censored, and one is right-censored. x = [0.4813096, 0.5571880, 0.5132463, 0.3801414, 0.5904386, 0.4822340, 0.3478597, 0.7191797, 1.5810902, 0.4442299] data = CensoredData(uncensored=x, left=[0.15], right=[3]) # Fit only the shape parameter. mu, loc, scale = invgauss.fit(data, floc=0, fscale=1, optimizer=optimizer) assert_allclose(mu, 0.853469, rtol=5e-5) assert loc == 0 assert scale == 1 # Fit the shape and scale. mu, loc, scale = invgauss.fit(data, floc=0, optimizer=optimizer) assert_allclose(mu, 1.066716, rtol=5e-5) assert loc == 0 assert_allclose(scale, 0.8155701, rtol=5e-5) def test_invweibull(): """ Fit invweibull to censored data. Here is the calculation in R. The 'frechet' distribution from the evd package matches SciPy's invweibull distribution. 
The `loc` parameter is fixed at 0. > library(evd) > libary(fitdistrplus) > data = data.frame(left=c(0, 2, 3, 9, 10, 10), + right=c(1, 2, 3, 9, NA, NA)) > result = fitdistcens(data, 'frechet', + control=list(reltol=1e-14), + start=list(loc=4, scale=5)) > result Fitting of the distribution ' frechet ' on censored data by maximum likelihood Parameters: estimate scale 2.7902200 shape 0.6379845 Fixed parameters: value loc 0 """ # In the R data, the first value is interval-censored, and the last # two are right-censored. The rest are not censored. data = CensoredData(uncensored=[2, 3, 9], right=[10, 10], interval=[[0, 1]]) c, loc, scale = invweibull.fit(data, floc=0, optimizer=optimizer) assert_allclose(c, 0.6379845, rtol=5e-6) assert loc == 0 assert_allclose(scale, 2.7902200, rtol=5e-6) def test_laplace(): """ Fir the Laplace distribution to left- and right-censored data. Calculation in R: > library(fitdistrplus) > dlaplace <- function(x, location=0, scale=1) { + return(0.5*exp(-abs((x - location)/scale))/scale) + } > plaplace <- function(q, location=0, scale=1) { + z <- (q - location)/scale + s <- sign(z) + f <- -s*0.5*exp(-abs(z)) + (s+1)/2 + return(f) + } > left <- c(NA, -41.564, 50.0, 15.7384, 50.0, 10.0452, -2.0684, + -19.5399, 50.0, 9.0005, 27.1227, 4.3113, -3.7372, + 25.3111, 14.7987, 34.0887, 50.0, 42.8496, 18.5862, + 32.8921, 9.0448, -27.4591, NA, 19.5083, -9.7199) > right <- c(-50.0, -41.564, NA, 15.7384, NA, 10.0452, -2.0684, + -19.5399, NA, 9.0005, 27.1227, 4.3113, -3.7372, + 25.3111, 14.7987, 34.0887, NA, 42.8496, 18.5862, + 32.8921, 9.0448, -27.4591, -50.0, 19.5083, -9.7199) > data <- data.frame(left=left, right=right) > result <- fitdistcens(data, 'laplace', start=list(location=10, scale=10), + control=list(reltol=1e-13)) > result Fitting of the distribution ' laplace ' on censored data by maximum likelihood Parameters: estimate location 14.79870 scale 30.93601 > result$sd location scale 0.1758864 7.0972125 """ # The value -50 is left-censored, and the value 50 is right-censored. obs = np.array([-50.0, -41.564, 50.0, 15.7384, 50.0, 10.0452, -2.0684, -19.5399, 50.0, 9.0005, 27.1227, 4.3113, -3.7372, 25.3111, 14.7987, 34.0887, 50.0, 42.8496, 18.5862, 32.8921, 9.0448, -27.4591, -50.0, 19.5083, -9.7199]) x = obs[(obs != -50.0) & (obs != 50)] left = obs[obs == -50.0] right = obs[obs == 50.0] data = CensoredData(uncensored=x, left=left, right=right) loc, scale = laplace.fit(data, loc=10, scale=10, optimizer=optimizer) assert_allclose(loc, 14.79870, rtol=5e-6) assert_allclose(scale, 30.93601, rtol=5e-6) def test_logistic(): """ Fit the logistic distribution to left-censored data. Calculation in R: > library(fitdistrplus) > left = c(13.5401, 37.4235, 11.906 , 13.998 , NA , 0.4023, NA , + 10.9044, 21.0629, 9.6985, NA , 12.9016, 39.164 , 34.6396, + NA , 20.3665, 16.5889, 18.0952, 45.3818, 35.3306, 8.4949, + 3.4041, NA , 7.2828, 37.1265, 6.5969, 17.6868, 17.4977, + 16.3391, 36.0541) > right = c(13.5401, 37.4235, 11.906 , 13.998 , 0. , 0.4023, 0. , + 10.9044, 21.0629, 9.6985, 0. , 12.9016, 39.164 , 34.6396, + 0. , 20.3665, 16.5889, 18.0952, 45.3818, 35.3306, 8.4949, + 3.4041, 0. 
, 7.2828, 37.1265, 6.5969, 17.6868, 17.4977, + 16.3391, 36.0541) > data = data.frame(left=left, right=right) > result = fitdistcens(data, 'logis', control=list(reltol=1e-14)) > result Fitting of the distribution ' logis ' on censored data by maximum likelihood Parameters: estimate location 14.633459 scale 9.232736 > result$sd location scale 2.931505 1.546879 """ # Values that are zero are left-censored; the true values are less than 0. x = np.array([13.5401, 37.4235, 11.906, 13.998, 0.0, 0.4023, 0.0, 10.9044, 21.0629, 9.6985, 0.0, 12.9016, 39.164, 34.6396, 0.0, 20.3665, 16.5889, 18.0952, 45.3818, 35.3306, 8.4949, 3.4041, 0.0, 7.2828, 37.1265, 6.5969, 17.6868, 17.4977, 16.3391, 36.0541]) data = CensoredData.left_censored(x, censored=(x == 0)) loc, scale = logistic.fit(data, optimizer=optimizer) assert_allclose(loc, 14.633459, rtol=5e-7) assert_allclose(scale, 9.232736, rtol=5e-6) def test_lognorm(): """ Ref: https://math.montana.edu/jobo/st528/documents/relc.pdf The data is the locomotive control time to failure example that starts on page 8. That's the 8th page in the PDF; the page number shown in the text is 270). The document includes SAS output for the data. """ # These are the uncensored measurements. There are also 59 right-censored # measurements where the lower bound is 135. miles_to_fail = [22.5, 37.5, 46.0, 48.5, 51.5, 53.0, 54.5, 57.5, 66.5, 68.0, 69.5, 76.5, 77.0, 78.5, 80.0, 81.5, 82.0, 83.0, 84.0, 91.5, 93.5, 102.5, 107.0, 108.5, 112.5, 113.5, 116.0, 117.0, 118.5, 119.0, 120.0, 122.5, 123.0, 127.5, 131.0, 132.5, 134.0] data = CensoredData.right_censored(miles_to_fail + [135]*59, [0]*len(miles_to_fail) + [1]*59) sigma, loc, scale = lognorm.fit(data, floc=0) assert loc == 0 # Convert the lognorm parameters to the mu and sigma of the underlying # normal distribution. mu = np.log(scale) # The expected results are from the 17th page of the PDF document # (labeled page 279), in the SAS output on the right side of the page. assert_allclose(mu, 5.1169, rtol=5e-4) assert_allclose(sigma, 0.7055, rtol=5e-3) def test_nct(): """ Test fitting the noncentral t distribution to censored data. Calculation in R: > library(fitdistrplus) > data <- data.frame(left=c(1, 2, 3, 5, 8, 10, 25, 25), + right=c(1, 2, 3, 5, 8, 10, NA, NA)) > result = fitdistcens(data, 't', control=list(reltol=1e-14), + start=list(df=1, ncp=2)) > result Fitting of the distribution ' t ' on censored data by maximum likelihood Parameters: estimate df 0.5432336 ncp 2.8893565 """ data = CensoredData.right_censored([1, 2, 3, 5, 8, 10, 25, 25], [0, 0, 0, 0, 0, 0, 1, 1]) # Fit just the shape parameter df and nc; loc and scale are fixed. with np.errstate(over='ignore'): # remove context when gh-14901 is closed df, nc, loc, scale = nct.fit(data, floc=0, fscale=1, optimizer=optimizer) assert_allclose(df, 0.5432336, rtol=5e-6) assert_allclose(nc, 2.8893565, rtol=5e-6) assert loc == 0 assert scale == 1 def test_ncx2(): """ Test fitting the shape parameters (df, ncp) of ncx2 to mixed data. Calculation in R, with * 5 not censored values [2.7, 0.2, 6.5, 0.4, 0.1], * 1 interval-censored value [[0.6, 1.0]], and * 2 right-censored values [8, 8]. 
> library(fitdistrplus) > data <- data.frame(left=c(2.7, 0.2, 6.5, 0.4, 0.1, 0.6, 8, 8), + right=c(2.7, 0.2, 6.5, 0.4, 0.1, 1.0, NA, NA)) > result = fitdistcens(data, 'chisq', control=list(reltol=1e-14), + start=list(df=1, ncp=2)) > result Fitting of the distribution ' chisq ' on censored data by maximum likelihood Parameters: estimate df 1.052871 ncp 2.362934 """ data = CensoredData(uncensored=[2.7, 0.2, 6.5, 0.4, 0.1], right=[8, 8], interval=[[0.6, 1.0]]) with np.errstate(over='ignore'): # remove context when gh-14901 is closed df, ncp, loc, scale = ncx2.fit(data, floc=0, fscale=1, optimizer=optimizer) assert_allclose(df, 1.052871, rtol=5e-6) assert_allclose(ncp, 2.362934, rtol=5e-6) assert loc == 0 assert scale == 1 def test_norm(): """ Test fitting the normal distribution to interval-censored data. Calculation in R: > library(fitdistrplus) > data <- data.frame(left=c(0.10, 0.50, 0.75, 0.80), + right=c(0.20, 0.55, 0.90, 0.95)) > result = fitdistcens(data, 'norm', control=list(reltol=1e-14)) > result Fitting of the distribution ' norm ' on censored data by maximum likelihood Parameters: estimate mean 0.5919990 sd 0.2868042 > result$sd mean sd 0.1444432 0.1029451 """ data = CensoredData(interval=[[0.10, 0.20], [0.50, 0.55], [0.75, 0.90], [0.80, 0.95]]) loc, scale = norm.fit(data, optimizer=optimizer) assert_allclose(loc, 0.5919990, rtol=5e-6) assert_allclose(scale, 0.2868042, rtol=5e-6) def test_weibull_censored1(): # Ref: http://www.ams.sunysb.edu/~zhu/ams588/Lecture_3_likelihood.pdf # Survival times; '*' indicates right-censored. s = "3,5,6*,8,10*,11*,15,20*,22,23,27*,29,32,35,40,26,28,33*,21,24*" times, cens = zip(*[(float(t[0]), len(t) == 2) for t in [w.split('*') for w in s.split(',')]]) data = CensoredData.right_censored(times, cens) c, loc, scale = weibull_min.fit(data, floc=0) # Expected values are from the reference. assert_allclose(c, 2.149, rtol=1e-3) assert loc == 0 assert_allclose(scale, 28.99, rtol=1e-3) # Flip the sign of the data, and make the censored values # left-censored. We should get the same parameters when we fit # weibull_max to the flipped data. data2 = CensoredData.left_censored(-np.array(times), cens) c2, loc2, scale2 = weibull_max.fit(data2, floc=0) assert_allclose(c2, 2.149, rtol=1e-3) assert loc2 == 0 assert_allclose(scale2, 28.99, rtol=1e-3) def test_weibull_min_sas1(): # Data and SAS results from # https://support.sas.com/documentation/cdl/en/qcug/63922/HTML/default/ # viewer.htm#qcug_reliability_sect004.htm text = """ 450 0 460 1 1150 0 1150 0 1560 1 1600 0 1660 1 1850 1 1850 1 1850 1 1850 1 1850 1 2030 1 2030 1 2030 1 2070 0 2070 0 2080 0 2200 1 3000 1 3000 1 3000 1 3000 1 3100 0 3200 1 3450 0 3750 1 3750 1 4150 1 4150 1 4150 1 4150 1 4300 1 4300 1 4300 1 4300 1 4600 0 4850 1 4850 1 4850 1 4850 1 5000 1 5000 1 5000 1 6100 1 6100 0 6100 1 6100 1 6300 1 6450 1 6450 1 6700 1 7450 1 7800 1 7800 1 8100 1 8100 1 8200 1 8500 1 8500 1 8500 1 8750 1 8750 0 8750 1 9400 1 9900 1 10100 1 10100 1 10100 1 11500 1 """ life, cens = np.array([int(w) for w in text.split()]).reshape(-1, 2).T life = life/1000.0 data = CensoredData.right_censored(life, cens) c, loc, scale = weibull_min.fit(data, floc=0, optimizer=optimizer) assert_allclose(c, 1.0584, rtol=1e-4) assert_allclose(scale, 26.2968, rtol=1e-5) assert loc == 0 def test_weibull_min_sas2(): # http://support.sas.com/documentation/cdl/en/ormpug/67517/HTML/default/ # viewer.htm#ormpug_nlpsolver_examples06.htm # The last two values are right-censored. 
days = np.array([143, 164, 188, 188, 190, 192, 206, 209, 213, 216, 220, 227, 230, 234, 246, 265, 304, 216, 244]) data = CensoredData.right_censored(days, [0]*(len(days) - 2) + [1]*2) c, loc, scale = weibull_min.fit(data, 1, loc=100, scale=100, optimizer=optimizer) assert_allclose(c, 2.7112, rtol=5e-4) assert_allclose(loc, 122.03, rtol=5e-4) assert_allclose(scale, 108.37, rtol=5e-4)
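As a brief usage note on the CensoredData API exercised by these tests, the sketch below mirrors the analytical check in test_expon_right_censored: with the location fixed at zero, the fitted exponential scale should equal the sum of all observations divided by the number of uncensored ones. The failure times used here are made up for illustration only.

import numpy as np
from scipy.stats import CensoredData, expon

# Ten uncensored failure times and three right-censored ones (still running at t=30).
times = np.array([2.0, 3.5, 5.0, 7.5, 9.0, 12.0, 14.0, 18.0, 21.0, 25.0, 30.0, 30.0, 30.0])
censored = np.array([False]*10 + [True]*3)

data = CensoredData.right_censored(times, censored)

# Fit the exponential distribution with the location fixed at zero.
loc, scale = expon.fit(data, floc=0)

# Analytical MLE for this censoring pattern: total exposure / number of observed failures.
expected_scale = times.sum() / (~censored).sum()
print(scale, expected_scale)  # the two values should agree closely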
Wear a onesie and raise money for charity as part of an event in Northampton to banish the January blues. The onesie charity event will be held at Grosvenor Casino in Northampton tomorrow (Monday), known as Blue Monday because it is said to be the most depressing day of the year. Originally identified in 2005, the third Monday in January has come to be known as Blue Monday, as millions of Brits struggle with December credit card bills, failed post-festive diets and gloomy weather. This year the casino in Northampton has decided to help customers brighten up the day by encouraging them and staff to put on a onesie. All the money raised will then be donated to the Carers Trust, Grosvenor Casino's partner charity, which supports the millions of unpaid carers across the UK. Senior fundraising manager at Carers Trust, Pushpinder Gill, said: "We all need help and support sometimes. When your family or friends grow old or ill, you may find yourself looking after someone you love on a part-time, or even full-time, basis. That's when you become a carer." To make a donation to Carers Trust, visit www.justgiving.com/RanksCaresOnesieDay. To find out more about this event, go to www.grosvenorcasinos.com/local-casinos/northampton or contact the sales team on 01604 624916. Over-18s only; photographic ID will be required for all new customers.
package com.hz;

public class Bed {

    private Student sleeper;

    public Student getSleeper() {
        return sleeper;
    }

    public void setSleeper(Student sleeper) {
        this.sleeper = sleeper;
    }

    public boolean hasSleeper() {
        return this.sleeper != null;
    }
}
package com.utils.util;

import com.alibaba.fastjson.annotation.JSONType;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.extern.slf4j.Slf4j;

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Objects;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Stream;

/**
 * Numeric range helper class.
 *
 * @author 谢长春 2018/12/10 .
 */
@NoArgsConstructor
@AllArgsConstructor
@Builder
@Data
@JSONType(orders = {"min", "max"})
@Slf4j
public final class RangeLong implements Num.IRange<Long> {
    private Long min;
    private Long max;

    /**
     * Build a numeric range.
     *
     * @param min long minimum value
     * @param max long maximum value
     * @return {@link RangeLong}
     */
    public static RangeLong of(final long min, final long max) {
        if (max <= 0) {
            log.warn("参数【max】<=0");
        }
        return new RangeLong(min, max);
    }

    /**
     * Build a numeric range.
     *
     * @param values {@link Long[]} take the minimum and maximum values from the array
     * @return {@link RangeLong}
     */
    public static RangeLong of(final Long[] values) {
        Arrays.sort(values);
        return new RangeLong(values[0], values[values.length - 1]);
    }

    /**
     * Build a numeric range.
     *
     * @param values {@link List}{@link List<Long>} take the minimum and maximum values from the collection
     * @return {@link RangeLong}
     */
    public static RangeLong of(final List<Long> values) {
        values.sort(Comparator.naturalOrder());
        return new RangeLong(values.get(0), values.get(values.size() - 1));
    }

    /**
     * Iterate over the range, inclusive of the min and max values.
     *
     * @param action {@link Consumer}{@link Consumer<Long:value>}
     */
    public void forEach(final Consumer<Long> action) {
        Objects.requireNonNull(action, "参数【action】是必须的");
        for (Long i = min; i <= max; i++) {
            action.accept(i);
        }
    }

    /**
     * Map the range to another type, inclusive of the min and max values.
     *
     * @param mapper {@link Function}{@link Function<Long:value, R:return type>}
     * @param <R>    return type
     * @return {@link Stream<R>}
     */
    public <R> Stream<R> map(final Function<Long, ? extends R> mapper) {
        Objects.requireNonNull(mapper, "参数【mapper】是必须的");
        return Stream.iterate(min, n -> n + 1)
                .limit(max - min + 1)
                .map(mapper);
    }
}
package com.weiwend.fooldelivery.customviews;

import android.app.Dialog;
import android.content.Context;
import android.os.Bundle;
import android.view.View;
import android.view.WindowManager;
import android.widget.Button;
import android.widget.TextView;

import com.weiwend.fooldelivery.R;

// Prompt dialog shown when an app version update is available.
public class MyAppUpdatePromptDialog extends Dialog implements android.view.View.OnClickListener {

    // Context object.
    private Context mContext;
    // "OK" and "Cancel" buttons.
    private Button mSummitBtn, mCancelBtn;
    // TextView that displays the dialog content.
    private TextView mContentTv;
    // Content displayed in the dialog.
    private String content;

    // Listener invoked when the "OK" button is clicked.
    public interface MySubmmitListener {
        void summit(String phoneNumber);
    }

    // Listener instance for the "OK" button click.
    private MySubmmitListener mSubmmitListener;

    // Set the listener invoked when the "OK" button is clicked.
    public void setMySubmmitListener(MySubmmitListener mSubmmitListener) {
        this.mSubmmitListener = mSubmmitListener;
    }

    public MyAppUpdatePromptDialog(Context context, int theme) {
        super(context, theme);
        mContext = context;
    }

    public MyAppUpdatePromptDialog(Context context, int theme, String content) {
        this(context, theme);
        this.content = content;
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.custom_dialog_delete);
        mSummitBtn = (Button) findViewById(R.id.mSummitBtn);
        mCancelBtn = (Button) findViewById(R.id.mCancelBtn);
        mSummitBtn.setOnClickListener(this);
        mCancelBtn.setOnClickListener(this);
        TextView mTitleTv = (TextView) findViewById(R.id.mTitleTv);
        mTitleTv.setText("版本更新");
        mContentTv = (TextView) findViewById(R.id.mContentTv);
        mContentTv.setText(content);
        // Control the overall display size of the dialog.
        WindowManager.LayoutParams lp = getWindow().getAttributes();
        WindowManager wm = (WindowManager) mContext.getSystemService(Context.WINDOW_SERVICE);
        int width = wm.getDefaultDisplay().getWidth();
        lp.width = width * 4 / 5;
        getWindow().setAttributes(lp);
    }

    @Override
    public void onClick(View view) {
        switch (view.getId()) {
            case R.id.mSummitBtn:
                // "OK" button.
                if (mSubmmitListener != null) {
                    // Invoke the handler registered for the "OK" button click.
                    mSubmmitListener.summit(content);
                }
                dismiss();
                break;
            case R.id.mCancelBtn:
                // "Cancel" button.
                dismiss();
                break;
            default:
                break;
        }
    }
}
Cost Sensitive Learning of Deep Feature Representations from Imbalanced Data Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes is abundant making them an over-represented majority, and data of other classes is scarce, making them an under-represented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this work, we propose a cost sensitive deep neural network which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multi-class problems without any modification. Moreover, as opposed to data level approaches, we do not alter the original data distribution which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification datasets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and cost sensitive classifiers demonstrate the superior performance of our proposed method. I. INTRODUCTION In most real-world classification problems, the collected data follows a long tail distribution i.e., data for few object classes is abundant while data for others is scarce. This behaviour is termed the 'class-imbalance problem' and it is inherently manifested in nearly all of the collected image classification databases (e.g., Fig. 1). A multi-class dataset is said to be 'imbalanced' or 'skewed' if some of its (minority) classes, in the training set, are heavily under-represented compared to other (majority) classes. This skewed distribution of class instances forces the classification algorithms to be biased towards the majority classes. As a result, the characteristics of the minority classes are not adequately learned. The class imbalance problem is of particular interest in real-world scenarios, where it is essential to correctly classify examples from an infrequent but important minority class. For instance, a particular cancerous lesion (e.g., a melanoma) which appears rarely during dermoscopy should not be misclassified as benign (see Sec. IV). Similarly, for a continuous surveillance task, a dangerous activity which occurs occasionally should still be detected by the monitoring system. The same applies to many other application domains, e.g., object classification, where the correct classification of a minority class sample is equally important to the correct classification of a majority class sample. It is therefore required to enhance the overall accuracy of the system without unduly sacrificing the precision of any of the majority or minority classes. Most of the classification algorithms try to minimize the overall classification error during the training process. They, therefore, implicitly assign an identical misclassification cost to all types of errors assuming their equivalent importance. As a result the classifier tends to correctly classify and favour the more frequent classes. Despite the pertinence of the class imbalance problem to practical computer vision, there have been very few research works on this topic in the recent years. 
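To make the idea of class-dependent costs in the training objective concrete before the formal treatment later in the paper, the following is a minimal Python sketch. The inverse-frequency weighting used here is purely an illustrative assumption; it is not the cost-update rule proposed in this work, which jointly optimizes the costs with the network parameters.

import numpy as np

def class_costs_from_frequencies(labels, num_classes):
    """Illustrative class-dependent costs: rarer classes receive larger costs.

    This inverse-frequency heuristic is only a stand-in for the jointly
    optimized costs described in this paper.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    freqs = counts / counts.sum()
    return freqs.max() / np.maximum(freqs, 1e-12)  # majority class -> 1, minority -> >1

def cost_sensitive_cross_entropy(probs, labels, costs):
    """Weighted cross-entropy: each sample's loss is scaled by the cost of its true class."""
    eps = 1e-12
    sample_losses = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return np.mean(costs[labels] * sample_losses)

# Toy example: class 1 is a rare minority class and an uninformative classifier.
labels = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
probs = np.full((10, 2), 0.5)
costs = class_costs_from_frequencies(labels, num_classes=2)
print(costs)                                            # e.g. [1., 4.]
print(cost_sensitive_cross_entropy(probs, labels, costs))

With the example labels above, the minority class receives a four times larger cost, so mistakes on it contribute proportionally more to the average loss.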
Class imbalance is avoided in nearly all competitive datasets during the evaluation and training procedures (see Fig. 1). For instance, in the case of the popular image classification datasets (such as CIFAR-10/100, ImageNet, Caltech-101/256, and MIT-67), efforts have been made by the collectors to ensure that either all of the classes have a minimum representation with sufficient data, or that the experimental protocols are reshaped to use an equal number of images for all classes during the training and testing processes. This approach is reasonable in the case of datasets with only a few classes, which have an equal probability to appear in practical scenarios (e.g., digits in MNIST). However, with the increasing number of classes in the collected object datasets, it is becoming impractical to provide equal representations for all classes in the training and testing subsets. For example, in a fine-grained coral categorization dataset, endangered coral species have a significantly lower representation compared to the more abundant ones. In this work, we propose to jointly learn robust feature representations and classifier parameters, under a cost-sensitive setting. This enables us to learn not only an improved classifier that deals with the class imbalance problem, but also to extract suitably adapted intermediate feature representations from a deep Convolutional Neural Network (CNN). In this manner, we directly modify the learning procedure to incorporate class-dependent costs during training. In contrast, previous works (such as ) only readjust the training data distribution to learn better classifiers. Moreover, unlike the methods in e.g., , we do not use a handcrafted cost matrix whose design is based on expert judgement and turns into a tedious task for a large number of classes. In our case, the class-dependent costs are automatically set using data statistics (e.g., data distribution and separability measures) during the learning procedure. Another major difference with existing techniques is that our class-specific costs are only used during the training process and, once the optimal CNN parameters are learnt, predictions can be made without any modification to the trained network. From this perspective, our approach can be understood as a perturbation method, which forces the training algorithm to learn more discriminative features. Nonetheless, it is clearly different from the common perturbation mechanisms used during training, e.g., data distortions, corrupted features, affine transformations and activation dropout. Our contribution consists of the following: 1) We introduce cost-sensitive versions of three widely used loss functions for joint cost-sensitive learning of features and classifier parameters in the CNN (Sec. III-C). We also show that the improved loss functions have desirable properties such as classification calibration and guess-aversion. 2) We analyse the effect of these modified loss functions on the backpropagation algorithm by deriving relations for propagated gradients (Sec. III-E). 3) We propose an algorithm for joint alternate optimization of the network parameters and the class-sensitive costs (Sec. III-D). The proposed algorithm can automatically work for both binary and multi-class classification problems. We also show that the introduction of class-sensitive costs does not significantly affect the training and testing time of the original network (Sec. IV).
4-The proposed approach has been extensively tested on six major classification datasets and has shown to outperform baseline procedures and stateof-the-art approaches (Sec. IV-D). The remainder of this paper is organized as follows. We briefly discuss the related work in the next section. In Sec. III-A and III-B, we introduce our proposed approach and analyse the modified loss functions in Sec. III-C. The learning algorithm is then described in Sec. III-D and the CNN implementation details are provided in Sec. IV-C. Experiments and results are summarized in Sec. IV and the paper concludes in Sec. V. II. RELATED WORK Previous research on the class imbalance problem has concentrated mainly on two levels: the data level and the algorithmic level. Below, we briefly discuss the different research efforts that tackle the class imbalance problem. Data level approaches manipulate the class representations in the original dataset by either over-sampling the minority classes or under-sampling the majority classes to make the resulting data distribution balanced. However, these techniques change the original distribution of the data and consequently introduce drawbacks. While under-sampling can potentially lose useful information about the majority class data, over-sampling makes the training computationally burdensome by artificially increasing the size of the training set. Furthermore, over-sampling is prone to cause over-fitting, when exact copies of the minority class are replicated randomly. To address the over-fitting problem, Chawla et al. introduced a method, called SMOTE, to generate new instances by linear interpolation between closely lying minority class samples. These synthetically generated minority class instances may lie inside the convex hull of the majority class instances, a phenomenon known as over-generalization. Over the years, several variants of the SMOTE algorithm have been proposed to solve this problem. For example, Borderline SMOTE only over-samples the minority class samples which lie close to the class boundaries. Safe-level SMOTE carefully generates synthetic samples in the so called saferegions, where the majority and minority class regions are not overlapping. The local neighborhood SMOTE considers the neighboring majority class samples when generating synthetic minority class samples and reports a better performance compared to the former variants of SMOTE. The combination of under and over sampling procedures (e.g., ) to balance the training data have also shown to perform well. However, a drawback of these approaches is the increased computational cost that is required for data pre-processing and for the learning of a classification model. Algorithm level approaches directly modify the learning procedure to improve the sensitivity of the classifier towards minority classes. Zhang et al. first divided the data into smaller balanced subsets, followed by intelligent sampling and a cost-sensitive SVM learning to deal with the imbalance problem. A neuro-fuzzy modeling procedure was introduced in to perform leave-one-out cross-validation on imbalanced datasets. A scaling kernel along-with the standard SVM was used in to improve the generalization ability of learned classifiers for skewed datasets. Li et al. gave more importance to the minority class samples by setting weights with Adaboost during the training of an extreme learning machine (ELM). An ensemble of soft-margin SVMs was formed via boosting to perform well on both majority and minority classes. 
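A simple way to picture what many of these algorithm-level methods have in common is to weight each training sample by a class-dependent factor, for example the inverse class frequency. The Python/NumPy sketch below is a generic illustration of such a weighted cross-entropy; it is not the formulation of any particular work cited above, and the normalization of the weights is an arbitrary choice made for the example.

import numpy as np

def class_weights_from_counts(counts):
    # Inverse-frequency weights, scaled so that they average to 1.
    # (One simple heuristic; the cited works use more elaborate schemes.)
    counts = np.asarray(counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

def weighted_cross_entropy(probs, labels, weights, eps=1e-12):
    # Mean cross-entropy where each sample is weighted by the weight of its true class.
    sample_w = weights[labels]
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(sample_w * ce))

# toy usage: class 0 has 900 training samples, class 1 only 100
weights = class_weights_from_counts([900, 100])   # -> approx. [0.56, 5.0]
probs = np.array([[0.9, 0.1], [0.6, 0.4]])        # predicted class probabilities
labels = np.array([0, 1])
print(weights, weighted_cross_entropy(probs, labels, weights))

With such weights, an error on the rare class contributes far more to the objective than an error on the frequent class, which is the basic effect the methods above aim for.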
These previous works hint at the use of distinct costs for different training examples to improve the performance of the learning algorithm. However, they do not address class imbalance learning for CNNs, which have recently emerged as the most popular tool for supervised classification, recognition and segmentation problems in computer vision. Furthermore, they are mostly limited to binary class problems, do not perform joint feature and classifier learning, and do not explore computer vision tasks which inherently have imbalanced class distributions. In the context of neural networks, Kukar and Kononenko showed that the incorporation of costs in the error function improves performance. However, their costs are randomly chosen in multiple runs of the network and remain fixed during the learning process in each run. In contrast, this paper presents the first attempt to incorporate automatic cost-sensitive learning in deep neural networks for imbalanced data. After the submission of this work for review, we note that a number of new approaches have been proposed to incorporate class-specific costs in deep networks. Chung et al. proposed a new cost-sensitive loss function which replaces the traditional soft-max with a regression loss. In contrast, this work extends the traditionally used cost functions in CNNs to the cost-sensitive setting. Wang et al. and Raj et al. proposed a loss function which gives equal importance to mistakes in the minority and majority classes. Different from these works, our method is more flexible because it automatically learns the balanced error function depending on the end problem. A. Problem Formulation for Cost-Sensitive Classification Let the cost ξ_{p,q} denote the misclassification cost of classifying an instance belonging to a class p into a different class q. The diagonal of ξ (i.e., ξ_{p,p}, ∀p) represents the benefit or utility for a correct prediction. Given an input instance x and the cost matrix ξ, the classifier seeks to minimize the expected risk R(p|x), where p is the class prediction made by the classifier. The expected risk can be expressed as R(p|x) = Σ_q ξ_{p,q} P(q|x), where P(q|x) is the posterior probability over all possible classes given an instance x. According to Bayes decision theory, an ideal classifier will give a decision in favour of the class p* with the minimum expected risk, i.e., p* = argmin_{p∈D} R(p|x), where X and D define the input and output spaces, respectively. Since P(q|x) cannot be found trivially, we make use of the empirical distribution derived from the training data. Given a training dataset consisting of tuples of data and labels, {(x^(i), d^(i))}_{i=1,...,M}, we can define the empirical risk as the mean of the loss ℓ(ξ, d^(i), o^(i)) over the training set, where M is the total number of images, o^(i) ∈ R^N is the neural network output for the i-th sample, and ℓ(·) is the misclassification error (0-1 loss) or a surrogate loss function which is typically used during classifier training. For the case of the cost-insensitive 0-1 loss, ℓ(ξ, d^(i), o^(i)) = I(d^(i) ≠ o^(i)) and ξ is an N × N matrix, where ξ_{p,p} = 0 and ξ_{p,q} = 1, ∀p ≠ q. Next, we briefly describe the properties of the traditionally used cost matrix, before introducing the proposed cost matrix. Properties of the Cost Matrix ξ: Lemmas III.1 and III.2 describe the main properties of the cost matrix. Their proofs can be found in Appendix A (supplementary material). Lemma III.1. Offsetting the columns of the cost matrix by any constant 'c' does not affect the associated classification risk R.
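As a concrete illustration of this decision rule (with an invented cost matrix whose diagonal is set to zero, as Lemma III.1 permits), the following NumPy snippet shows how the risk-minimizing prediction can differ from the arg-max of the posterior:

import numpy as np

# Cost matrix xi[p, q]: cost of predicting class p when the true class is q.
# Values are purely illustrative.
xi = np.array([[0.0, 1.0, 4.0],
               [1.0, 0.0, 1.0],
               [8.0, 1.0, 0.0]])

def bayes_decision(posterior, cost):
    # Pick the class with minimum expected risk R(p|x) = sum_q cost[p, q] * P(q|x).
    risk = cost @ posterior
    return int(np.argmin(risk)), risk

# Example posterior P(q|x): class 2 is most probable, but predicting it is risky
# because confusing class 0 with class 2 carries a large cost.
posterior = np.array([0.30, 0.25, 0.45])
pred, risk = bayes_decision(posterior, xi)
print(risk)                          # expected risk of each candidate prediction
print("cost-sensitive:", pred)       # 1
print("cost-insensitive:", int(np.argmax(posterior)))  # 2

The cost-sensitive rule hedges toward the prediction with the lowest expected cost rather than the highest posterior probability, which is exactly the behaviour one wants when some confusions are much more harmful than others.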
For convenience, the utility vector (i.e., the diagonal of the cost matrix) for correct classification is usually set to zero with the help of the property from Lemma III.1. We also show next that even when the utility is not zero, it must satisfy the following condition: Lemma III.2. The cost of the true class should be less than the mean cost of all misclassifications. The entries of a traditional cost matrix (defined according to the properties above) usually have the form of: Such cost matrix can potentially increase the corresponding loss to a large value. During the CNN training, this network loss can make the training process unstable and can lead to the non-convergence of the error function. This requires the introduction of an alternative cost matrix. B. Our Proposed Cost Matrix We propose a new cost matrix, which is suitable for CNN training. The cost matrix is used to modify the output of the last layer of a CNN (before the softmax and the loss layer) (Fig. 2). The resulting activations are then squashed between before the computation of the classification loss. For the case of a CNN, the classification decision is made in favour of the class with the maximum classification score. During the training process, the classifier weights are modified in order to reshape the classifier confidences (class probabilities) such that the desired class has the maximum score and the other classes have a considerably lower score. However, since the less frequent classes are under-represented in the training set, we introduce new 'score-level costs' to encourage the correct classification of infrequent classes. Therefore the CNN outputs (o) are modified using the cost matrix () according to a function (F) as follows: where, y denotes the modified output, p is the desired class and F : R → R represents a function whose exact definition depends on the type of loss layer. As an example, for the case of cost-sensitive MSE loss, where denotes the hadamard product. In Sec. III-C, we will discuss in detail the definition of F for different surrogate losses. Note that the score-level costs perturb the classifier confidences. Such perturbation allows the classifier to give more importance to the less frequent and difficult-to-separate classes. Properties of the Proposed Cost Matrix : Next, we discuss few properties (lemmas A.3 -A.6) of the newly introduced cost matrix and its similarities/differences with the traditionally used cost matrix (Sec. III-A). The proofs of below mentioned properties can be found in Appendix A (supplementary material): Lemma III.3. The cost matrix for a cost-insensitive loss function is an all-ones matrix, 1 pp, rather than a 1−I matrix, as in the case of the traditionally used cost matrix. Lemma III.5. The cost matrix is defined such that all of its elements in are within the range (0, 1], i.e., p,q ∈ (0, 1]. Lemma III.6. Offsetting the columns of the cost matrix can lead to an equally probable guess point. The cost matrix configured according to the properties described above (Lemma A.3 -A.6) neither excessively increases the CNN outputs activations, nor does it reduce them to zero output values. This enables a smooth training process allowing the model parameters to be correctly updated. In the following section, we analyse the implications of the newly introduced cost matrix on the loss layer ( Fig. 2). C. Cost-Sensitive Surrogate Losses Our approach addresses the class imbalance problem during the training of CNNs. 
For this purpose, we introduce a costsensitive error function which can be expressed as the mean loss over the training set: where, the predicted output (y) of the penultimate layer (before the loss layer) is parameterized by (network weights and biases) and (class sensitive costs), M is the total number of training examples, d ∈ {0, 1} 1N is the desired output (s.t. n d n := 1) and N denotes the total number of neurons in the output layer. For conciseness, we will not explicitly mention the dependence of y on the parameters (, ) and only consider a single data instance in the discussion below. Note that the error is larger when the model performs poorly on the training set. The objective of the learning algorithm is to find the optimal parameters ( *, * ) which give the minimum possible cost E * (Eq. ). Therefore, the optimization objective is given by: The loss function () in Eq. can be any suitable surrogate loss such as the Mean Square Error (MSE), Support Vector Machine (SVM) hinge loss or a Cross Entropy (CE) loss (also called the 'soft-max log loss'). These popular loss functions are shown along-with other surrogate losses in Fig. 3. The cost-sensitive versions of these loss functions are discussed below: (a) Cost-Sensitive MSE loss: This loss minimizes the squared error of the predicted output with the desired ground-truth and can be expressed as follows: Neuron Output where, y n is related to the output of the previous layer o n via the logistic function, where, is the class sensitive penalty which depends on the desired class of a particular training sample, i.e., p = argmax m d m. The effect of this cost on the back-propagation algorithm is discussed in Sec. III-E1. (b) Cost-Sensitive SVM hinge loss: This loss maximizes the margin between each pair of classes and can be expressed as follows: where, y n can be represented in terms of the previous layer output o n and the cost, as follows: The effect of the introduced cost on the gradient computation is discussed in Sec. III-E2. (c) Cost-Sensitive CE loss: This loss maximizes the closeness of the prediction to the desired output and is given by: where y n incorporates the class-dependent cost () and is related to the output o n via the soft-max function, The effect of the modified CE loss on the back-propagation algorithm is discussed in Sec. III-E3. Classification Feasibility of Cost-Sensitive Losses: Next, we show (Lemmas III.7-III.9) that the cost-sensitive loss functions remain suitable for classification since they satisfy the following properties: 1) Classification Calibration 2) Guess Aversion Note that a classification calibrated (c-calibrated) loss is useful because the minimization of the empirical risk leads to classifiers which have risks that are closer to the Bayes-risk. Similarly, guess aversion implies that the loss function favours 'correct classification' instead of 'arbitrary guesses'. Since, CE loss usually performs best among the three loss functions we discussed above, Lemmas III.7-III. 9 show that the costsensitive CE loss is guess aversive and classification calibrated. Lemma III.7. For a real valued ( ∈ R CC ∈ (0, 1]), given d (i) and the CNN output o (i), the modified cost-sensitive CE loss will be guess-averse iff, where, g is the set of all guess points. Proof: For real valued CNN activations, the guess point maps to an all zero output: which can be satisfied if, where, n is the true class. Since, p,n ∈ (0, 1] and thus it is > 0. Also, if n is the true class then o n > o k, ∀k = n. 
Therefore, the above relation holds true. Lemma III.8. The cost matrix has diagonal entries greater than zero, i.e., diag() > 0. Proof: According to Lemma III.1, if the CE loss is guess aversive, it must satisfy, We prove the Lemma by contradiction. Let us suppose that p,n = 0, then the above relation does not hold true, since: and hence, diag() > 0. Lemma III.9. The cost-sensitive CE loss function Proof: Given an input sample x which belongs to class p * (i.e., d p * = 1), then the CE loss can be expressed as: The classification risk can be expressed in terms of the expected value as follows: Next, we compute the derivative and set it to zero to find the ideal set of CNN outputs 'o', Similarly, By adding the above two derived expression and setting them to zero, we have : Which shows that there exists an inverse relationship between the optimal CNN output and the Bayes cost of the t th class, and hence, the cost-sensitive CE loss is classification calibrated. Under the properties of Lemmas III.7-III.9, the modified loss functions are therefore suitable for classification. Having established the class-dependent costs (Sec. III-B) and their impact on the loss layer (Sec. III-C), we next describe the training algorithm to automatically learn all the parameters of our model ( and ). 18: end if 19: end for 20: return ( *, * ) the joint optimization, we alternatively solve for both types of parameters by keeping one fixed and minimizing the cost with respect to the other (Algorithm 1). Specifically, for the optimization of, we use the stochastic gradient descent (SGD) with the back-propagation of error (Eq. ). Next, to optimize, we again use the gradient descent algorithm to calculate the direction of the step to update the parameters. The cost function is also dependent on the class-to-class separability, the current classification errors made by the network with current estimate of parameters and the overall classification error. The class-to-class (c2c) separability is measured by estimating the spread of the with-in class samples (intraclass) compared to the between-class (interclass) ones. In other words, it measures the relationship between the with-in class sample distances and the size of the separating boundary between the different classes. Note that the proposed cost function can be easily extended to include an externally defined cost matrix for applications where expert opinion is necessary. However, this paper mainly deals with class-imbalance in image classification datasets where externally specified costs are not required. To calculate the c2c separability, we first compute a suitable distance measure between each point in a class c p and its nearest neighbour belonging to c p and the nearest neighbour in class c q. Note that these distances are calculated in the feature space where each point is a 4096 dimensional feature vector (f i : i ∈ , N bieng the samples belonging to class c p ) obtained from the penultimate CNN layer (just before the output layer). Next, we find the average of intraclass distances to interclass distance for each point in a class and compute the ratio of the averages to find the c2c separability index. Formally, the class separability between two classes, p and q is defined as: To avoid over-fitting and to keep this step computationally feasible, we measure the c2c separability on a small validation set. Also, the c2c separability was found to correlate well with the confusion matrix at each epoch. 
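A sketch of this separability computation is given below. The Euclidean distance and the toy feature dimensionality are assumptions made for illustration; the text only requires "a suitable distance measure" over penultimate-layer features, and the exact normalization used by the authors may differ.

import numpy as np

def c2c_separability(F_p, F_q):
    # For every feature vector in class p, take the distance to its nearest
    # neighbour within p (intra-class) and to its nearest neighbour in q
    # (inter-class), then return the ratio of the two averages.
    # Small values suggest that p is well separated from q.
    def nn_dist(A, B, exclude_self=False):
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        if exclude_self:
            np.fill_diagonal(d, np.inf)
        return d.min(axis=1)

    intra = nn_dist(F_p, F_p, exclude_self=True)  # nearest same-class point
    inter = nn_dist(F_p, F_q)                     # nearest other-class point
    return intra.mean() / inter.mean()

rng = np.random.default_rng(0)
F_p = rng.normal(loc=0.0, size=(50, 8))           # stand-ins for penultimate-layer features
F_q = rng.normal(loc=3.0, size=(40, 8))
print(c2c_separability(F_p, F_q))                 # well separated -> ratio well below 1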
Therefore the measure was calculated after every 10 epochs to minimize the computational overhead. Note that by simply setting the parameters () based on the percentages of the classes in the data distribution results in a poor performance (Sec. IV-D). This suggests that the optimal parameter values for class-dependent costs ( * ) should not be the same as the frequency of the classes in the training data distribution. The following cost function is used for the gradient computation to update : where E val is the validation error. The matrix T is defined as follows: where,, denote the parameters which are set using cross validation, R denotes the current classification errors as a confusion matrix, S denotes the class c2c separability matrix and H is a matrix defined using the histogram vector h which encodes the distribution of classes in the training set. The matrix H and vector h are linked as follows: where, c is the set of all classes in a given dataset. The resulting minimization objective to find the optimal * can be expressed as: In order to optimize the cost function in Eq., we use the gradient descent algorithm which computes the direction of the update step, as follows: where, v a = vec(T ), v b = vec() and J denotes the Jacobian matrix. Note that in order to incorporate the dependence of F () on the validation error E val, we take the update step only if it results in a decrease in E val (see Algorithm 1). Since, our approach involves the use of modified loss functions during the CNN parameter learning process, we will discuss their effect on the back-propagation algorithm in the next section. E. Effect on Error Back-propagation In this section, we discuss the impact of the modified loss functions on the gradient computation of the back-propagation algorithm. 1) Cost-Sensitive MSE: During the supervised training, the MSE loss minimizes the mean squared error between the predicted weighted outputs of the model y, and the groundtruth labels d, across the entire training set (Eq. ). The modification of the loss function changes the gradient computed during the back-propagation algorithm. Therefore, for the output layer, the mathematical expression of the gradient at each neuron is given by: The y n for the cost-sensitive MSE loss can be defined as: The partial derivative can be calculated as follows: The derivative of the loss function is therefore given by: 2) Cost-Sensitive SVM Hinge Loss: For the SVM hinge loss function given in Eq., the directional derivative can be computed at each neuron as follows: ∂ (d, y) ∂o n = −(2d n − 1) ∂y n ∂o n I{1 > y n (2d n − 1)}. The partial derivative of the output of the softmax w.r.t the output of the penultimate layer is given by: ∂y n /∂o n = p,n. By combining the above two expressions, the derivative of the loss function can be represented as: ∂ (d, y) ∂o n = −(2d n − 1) p,n I{1 > y n (2d n − 1)}. 3) Cost-Sensitive CE loss: The cost-sensitive softmax log loss function is defined in Eq.. Next, we show that the introduction of a cost in the CE loss does not change the gradient formulas and the cost is rather incorporated implicitly in the softmax output y m. The effect of costs on the CE loss surface is illustrated in Fig. 4. Proposition 1. The introduction of a class imbalance cost () in the softmax loss ( () in Eq. 10), does not affect the computation of the gradient during the back-propagation process. 
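The claim can also be checked numerically before the formal proof. The sketch below assumes one particular form of the cost-modified softmax, in which the exponentiated scores are weighted by the class-dependent costs of the sample's true class (this reduces to the ordinary softmax for unit costs); with that assumption, the central-difference gradient of the cost-sensitive CE loss matches y - d for arbitrary costs, in agreement with the proposition. The formal derivation follows.

import numpy as np

def cost_sensitive_softmax(o, cost_row):
    # Softmax whose exponentiated scores are weighted by the (assumed) class-dependent
    # costs of the sample's true class; an all-ones cost_row gives the ordinary softmax.
    z = cost_row * np.exp(o - o.max())        # max-shift for numerical stability
    return z / z.sum()

def cost_sensitive_ce(o, d, cost_row, eps=1e-12):
    return -np.sum(d * np.log(cost_sensitive_softmax(o, cost_row) + eps))

o = np.array([0.7, -0.4, 1.2])                # raw scores from the last layer
d = np.array([0.0, 0.0, 1.0])                 # one-hot desired output
cost_row = np.array([0.8, 0.5, 0.9])          # arbitrary costs in (0, 1]
y = cost_sensitive_softmax(o, cost_row)

# central-difference gradient of the loss with respect to each o_n
eps = 1e-6
numeric = np.array([(cost_sensitive_ce(o + eps * e, d, cost_row) -
                     cost_sensitive_ce(o - eps * e, d, cost_row)) / (2 * eps)
                    for e in np.eye(3)])
print(np.allclose(numeric, y - d, atol=1e-6))  # True: gradient is y - d, unchanged by the costs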
Proof: We start with the calculation of the partial derivative of the softmax neuron with respect to its input: Now, two cases can arise here, either m = n or m = n. We first solve for the case when n = m: After simplification we get: The loss function can be differentiated as follows: Since, d is defined as a probability distribution over all output classes ( n d n = 1), therefore: This result is the same as in the case when CE does not contain any cost-sensitive parameters. Therefore the costs affect the softmax output y m but the gradient formulas remain unchanged. In our experiments (Sec. IV), we will only report performances with the cost-sensitive CE loss function. This is because, it has been shown that the CE loss outperforms the other two loss functions in most cases . Moreover, it avoids the learning slowing down problem of the MSE loss. IV. EXPERIMENTS AND RESULTS The class imbalance problem is present in nearly all realworld object and image datasets. This is not because of any flawed data collection, but it is simply due to the natural frequency patterns of different object classes in real life. For example, a bed appears in nearly every bedroom scene, but a baby cot appears much less frequently. Consequently, from the perspective of class imbalance, the currently available image classification datasets can be divided into three categories: 1) Datasets with a significant class imbalance both in the training and the testing split (e.g., DIL, MLC), 2) Datasets with unbalanced class distributions but with experimental protocols that are designed in a way that an equal number of images from all classes are used during the training process (e.g., MIT-67, Caltech-101). The testing images can be equal or unequal for different classes. 3) Datasets with an equal representation of each class in the training and testing splits (e.g., MNIST, CIFAR-100). We perform extensive experiments on six challenging image classification datasets (two from each category) (see Sec. IV-B). For the case of imbalanced datasets (1 st category), we report results on the standard splits for two experiments. For the two datasets from the 2 nd category, we report our performances on the standard splits, deliberately deformed splits and the original data distributions. For the two datasets from the 3 rd category, we report results on the standard splits and on deliberately imbalanced splits. Since, our training procedure requires a small validation set (Algorithm 1), we use ∼ 5% of the training data in each experiment as a held-out validation set. A. Multi-class Performance Metric The main goal of this work is to enhance the overall classification accuracy without compromising the precision of minority and majority classes. Therefore, we report overall classification accuracy results in Tables I-VI, VIII and IX for comparisons with baseline and state-of-the art balanced and unbalanced data classification approaches. We report class recall rates in confusion matrices displayed in Fig. 6. We also show our results in terms of G-mean and F-measure scores on all the six datasets (see Table VII). Note that the F-measure and G-mean scores are primarily used for binary classification tasks. Here, we extend them to multi-class problem using the approach in, where these scores are calculated for each class in a one-vs-all setting and their weighted average is calculated using the class frequencies. 
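Concretely, this weighted one-vs-all extension can be computed as in the sketch below: per-class precision, recall and specificity are derived from one-vs-all counts and averaged with class-frequency weights. Edge cases (e.g., a class absent from the test set) are handled crudely here and would need more care in practice.

import numpy as np

def weighted_f_and_gmean(y_true, y_pred, n_classes):
    # Multi-class F-measure and G-mean, computed one-vs-all per class and
    # averaged with class-frequency weights.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    freqs = np.array([(y_true == c).sum() for c in range(n_classes)], float)
    weights = freqs / freqs.sum()
    f_scores, g_means = np.zeros(n_classes), np.zeros(n_classes)
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        tn = np.sum((y_pred != c) & (y_true != c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec  = tp / (tp + fn) if tp + fn else 0.0   # sensitivity
        spec = tn / (tn + fp) if tn + fp else 0.0
        f_scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        g_means[c] = np.sqrt(rec * spec)
    return float(weights @ f_scores), float(weights @ g_means)

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2]   # class 0 is the majority
y_pred = [0, 0, 0, 0, 0, 1, 1, 0, 2]
print(weighted_f_and_gmean(y_true, y_pred, 3))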
It is also important to note that neural networks give a single classification score and it is therefore not feasible to obtain ROC curves. As a result, we have not included AUC measurements in our experimental results. B. Datasets and Experimental Settings 1) Imbalanced Datasets: Melanoma Detection : Edinburgh Dermofit Image Library (DIL) consists of 1300 high quality skin lesion images based on diagnosis from dermatologists and dermatopathologists. There are 10 types of lesions identified in this dataset including melanomas, seborrhoeic keratosis and basal cell carcinomas. The number of images in each category varies between 24 and 331 (mean 130, median 83). Similar to, we report results with 3-fold cross validation. Coral Classification : Moorea Labelled Corals (MLC) contains 2055 images from three coral reef habitats during 2008-10. Each image is annotated with roughly 200 points belonging to the 9 classes (4 non-corals, 5 corals). Therefore in total, there are nearly 400,000 labelled points. The class representation varies approximately from 2622 to 196910 (mean 44387, median 30817). We perform two of the major standard experiments on this dataset similar to. The first experiment involves training and testing on data from year 2008. In the second experiment, training is carried out on data from year 2008 and testing on data from year 2009. 2) Imbalanced Datasets-Balanced Protocols: Object Classification: Caltech-101 contains a total of 9,144 images, divided into 102 categories (101 objects + background). The number of images for each category varies between 31 and 800 images (mean: 90, median 59). The dataset is originally imbalanced but the standard protocol which is balanced uses 30 or 15 images for each category during training, and testing is performed on the remaining images (max. 50). We perform experiments using the standard 60%/40% and 30%/70% train/test splits. Scene Classification: MIT-67 consists of 15,620 images belonging to 67 classes. The number of images varies between 101 and 738 (mean: 233, median: 157). The standard protocol uses a subset of 6700 images (100 per class) for training and evaluation to make the distribution uniform. We will, however, evaluate our approach both on the standard split (80 images for training, 20 for testing) and the complete dataset with imbalanced train/test splits of 60%/40% and 30%/70%. 3) Balanced Datasets-Balanced Protocols: Handwritten Digit Classification: MNIST consists of 70,000 images of digits. Out of the total, 60,000 images are used for training (∼600/class) and the remaining 10,000 for testing (∼100/class). We evaluate our approach on the standard split as well as the deliberately imbalanced splits. To imbalance the training distribution, we reduce the representation of even and odd digit classes to only 25% and 10% of images, respectively. Image Classification: CIFAR-100 contains 60,000 images belonging to 100 classes (600 images/class). The standard train/test split for each class is 500/100 images. We evaluate our approach on the standard split as well as on artificially imbalanced splits. To imbalance the training distribution, we reduce the representation of even-numbered and odd-numbered classes to only 25% and 10% of images, respectively. C. Convolutional Neural Network We use a deep CNN to learn robust feature representations for the task of image classification. The network architecture consists of a total of 18 weight layers (see Fig. 5 for details). 
Our architecture is similar to the state-of-the-art CNN (configuration D) proposed in the literature, except that our architecture has two extra fully connected layers before the output layer and the proposed loss layer is cost-sensitive. Since there is a huge number of parameters (∼139 million) in the network, it is not possible to learn all of them from scratch using a relatively small number of images. We therefore initialize the first 16 layers of our model with the corresponding pre-trained model and set random weights for the last two fully connected layers. We then train the full network with a relatively higher learning rate to allow a change in the network parameters. Note that the cost-sensitive (CoSen) CNN is trained with the modified cost functions introduced in Sec. III-C. The CNN trained without the cost-sensitive loss layer will be used as the baseline CNN in our experiments. Note that the baseline CNN architecture is exactly the same as the CoSen CNN, except that the final layer is a simple CE loss layer. D. Results and Comparisons For the two imbalanced datasets with imbalanced protocols, we summarize our experimental results and comparisons in Tables I, II. For each of the two datasets, we perform two standard experiments following the works of Beijbom et al. and Ballerini et al. In the first experiment on the DIL dataset, we perform 3-fold cross validation on the 5 classes (namely Actinic Keratosis, Basal Cell Carcinoma, Melanocytic Nevus, Squamous Cell Carcinoma and Seborrhoeic Keratosis) comprising a total of 960 images. In the second experiment, we perform 3-fold cross validation on all of the 10 classes in the DIL dataset. We achieved a performance boost of ∼5.0% and ∼3.1% over the baseline CNN in the first and second experiments, respectively (Table I). For the MLC dataset, in the first experiment we train on two-thirds of the data from 2008 and test on the remaining one-third. In the second experiment, data from year 2008 is used for training and tests are performed on data from year 2009. Note that in contrast to the 'multiple texton maps' (MTM) approach, which extracts features from multiple scales, we only extract features from 224×224 dimensional patches. While we could achieve a larger gain by using multiple scales with our approach, we kept the setting similar to the one used with the other datasets for consistency. For similar reasons, we used the RGB color space instead of LAB, which was shown to perform better on the MLC dataset. Compared to the baseline CNN, we achieved a gain of 2.3% and 2.5% on the first and second experiments, respectively. Although the gains in the overall accuracy may seem modest, it should be noted that the boost in the average class accuracy is more pronounced. For example, the confusion matrices for the DIL and MLC datasets in Fig. 6 (corresponding to Exp. 1 and Exp. 2, respectively) show an improvement of 9.5% and 11.8% in the average class accuracy. The confusion matrices in Figs. 6a, 6b, 6c and 6d also show a very significant boost in performance for the least frequent classes, e.g., Turf, Macro, Monti, AK and SCC. Our results for the two balanced datasets, MNIST and CIFAR-100, are reported in Tables III, IV on the standard splits along with the deliberately imbalanced splits. [Table: methods using the standard split and their reported performance - Network in Network: 64.3%; Probabilistic Maxout Network: 61.9%; Representation Learning: 60.8%; Deeply Supervised Nets: 65.4%; Generalized Pooling Func.: 67.6%; Maxout NIN: 71.1%.]
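The deliberately imbalanced splits referred to here, and detailed in the next paragraph, can be generated with a small helper of the following kind; it is an illustrative script, not the authors' exact protocol.

import numpy as np

def make_imbalanced_split(y, keep_fraction_per_class, rng=None):
    # Return indices that keep only a fraction of each class's training samples,
    # e.g. {even classes: 1.0, odd classes: 0.1}.
    rng = np.random.default_rng(rng)
    y = np.asarray(y)
    keep = []
    for c, frac in keep_fraction_per_class.items():
        idx = np.flatnonzero(y == c)
        n_keep = max(1, int(round(frac * len(idx))))
        keep.append(rng.choice(idx, size=n_keep, replace=False))
    return np.sort(np.concatenate(keep))

# toy usage: 10 classes, keep 100% of the even classes and 10% of the odd classes
y = np.repeat(np.arange(10), 100)
fractions = {c: (1.0 if c % 2 == 0 else 0.1) for c in range(10)}
idx = make_imbalanced_split(y, fractions, rng=0)
print(np.bincount(y[idx]))   # e.g. [100 10 100 10 ...]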
To imbalance the training distributions, we used the available/normal training data for the even classes and only 25% and 10% of the data for the odd classes. Similarly, we experimented by keeping the normal representation of the odd classes and reducing the representation of the even classes to only 25% and 10%. Our results show that the performance of our approach is equal to the performance of the baseline method when the distribution is balanced, but when the imbalance ratios increase, our approach produces significant improvements over the baseline CNN (which is trained without using the cost-sensitive loss layer). We also compare with the state-of-the-art techniques which report results on the standard split (note that the standard split on Caltech-101 and MIT-67 is different from the original data distribution; see Sec. IV-B for details) and demonstrate that our performances are better or comparable. Note that for the MNIST digit dataset, nearly all the top performing approaches use distortions (affine and/or elastic) and data augmentation to achieve a significant boost in performance. In contrast, our baseline and cost-sensitive CNNs do not use any such distortions or augmentation. [Table: methods using the standard split and their reported performance - Spatial Pooling Regions: 50.1%; VC + VQ: 52.3%; CNN-SVM: 58.4%; Improved Fisher Vectors: 60.8%; Mid-Level Representation: 64.0%; Multiscale Orderless Pooling: 68.9%.] We further decrease the data of the odd and even classes to just 10% and observe a better relative performance of our proposed approach compared to the baseline method. We report F-measure and G-mean scores on all six datasets in Table VII. The metric calculation details are provided in Sec. IV-A. The most unbalanced splits (Fig. 7) are used for each dataset to clearly demonstrate the benefit of class-specific costs. We note that the cost-sensitive CNN model clearly outperforms the baseline model for all experiments. The comparisons with the best approaches for class-imbalance learning are shown in Table VIII. Note that we used a high degree of imbalance for all six datasets to clearly show the impact of the class imbalance problem on the performance of the different approaches (Fig. 7). For fairness and conclusive comparisons, our experimental procedure was kept as close as possible to that of the proposed CoSen CNN. For example, for the CoSen Support Vector Machine (SVM) and Random Forest (RF) classifiers, we used the 4096-dimensional features extracted from the pre-trained deep CNN (D). Similarly, for the cases of over- and under-sampling, we used the same 4096-dimensional features, which have been shown to perform well on other classification datasets. A two-layered neural network was used for classification with these sampling procedures. We also report comparisons with all types of data sampling techniques, i.e., over-sampling (SMOTE), under-sampling (Random Under-Sampling, RUS) and hybrid sampling (SMOTE-RSB*). Note that despite their simplicity, these approaches have been shown to perform very well on imbalanced datasets in data mining. We also compare with cost-sensitive versions of popular classifiers (the weighted SVM, using the LIBSVM implementation, and the weighted RF), with the class-dependent costs set based on the proportion of each class in the training set. Finally, we experiment with a recent cost-sensitive deep-learning-based technique of Chung et al. Unlike our approach, it does not automatically learn class-specific costs.
To have a fair comparison, we incorporate their proposed smooth one-sided regression (SOSR) loss as the last layer of the baseline CNN model in our experiments. Similar to, we use the approach proposed in to generate fixed cost matrices. Our proposed approach demonstrates a significant improvement over all of the cost-sensitive class imbalance methods. Since our approach updates the costs with respect to the data statistics (i.e., data distribution, class separability and classification errors), an interesting aspect is to analyse the performance when the costs are fixed and set equal to these statistics instead of updating them adaptively. We experiment with fixed costs instead of adaptive costs in the case of CoSen-CNN. For this purpose, we used three versions of fixed costs, based on the class representation (H), data separability (S) and classification errors (M). Table IX shows the results for each dataset with four different types of costs. The results show that none of the fixed costs significantly improve the performance in comparison to the adaptive cost. This shows that the optimal costs are not the H, S and M themselves, rather an intermediate set of values give the best performance for cost-sensitive learning. Lastly, we observed a smooth reduction in training and validation error for the case of cost-sensitive CNN. We show a comparison of classification errors between baseline and costsensitive CNNs at different training epochs in Fig. 8. V. CONCLUSION We proposed a cost-sensitive deep CNN to deal with the class-imbalance problem, which is commonly encountered when dealing with real-world datasets. Our approach is able to automatically set the class-dependent costs based on the data statistics of the training set. We analysed three commonly used cost functions and introduced class-dependent costs for each case. We show that the cost-sensitive CE loss function is c-calibrated and guess aversive. Furthermore, we proposed an alternating optimization procedure to efficiently learn the class-dependent costs as well as the network parameters. Our results on six popular classification datasets show that the modified cost functions perform very well on the majority as well as on the minority classes in the dataset. Proof: From Eq. 1, we have: q p *,q P (q|x) ≤ q p,q P (q|x) ∀p = p * which gives the following relation: P (p * |x) p *,p * − p,p * ≤ q =p * P (q|x) p,q − p *,q, ∀p = p * As indicated in Sec. 3.1, the above expression holds for all p = p *. For a total number of N classes and an optimal prediction p *, there are N − 1 of the above relations. By adding up the left and the right hand sides of these N − 1 relations we get: This can be simplified to: where, Px = . Note that the posterior probabilities Px are positive ( i P (i|x) = 1 and P (i|x) > 0). It can be seen from the above equation that the addition of any constant c, does not affect the overall relation, i.e., for any column j, Therefore, the columns of the cost matrix can be shifted by a constant c without any effect on the associated risk. Lemma A.2. The cost of the true class should be less than the mean cost of all misclassification. Proof: Since, Px can take any distribution of values, we end up with the following constraint: For a correct prediction p *, P (p * |x) > P (p|x), ∀p = p *. Which implies that: It can be seen that the cost insensitive matrix (when diag( ) = 0 and i,j = 1, ∀j = i) satisfies this relation and provides the upper bound. Lemma A.3. 
The cost matrix for a cost-insensitive loss function is an all-ones matrix, 1 pp, rather than a 1 − I matrix, as in the case of the traditionally used cost matrix. Proof: With all costs equal to the multiplicative identity i.e., p,q = 1, the CNN activations will remain unchanged. Therefore, all decisions have a uniform cost of 1 and the classifier is costinsensitive. Proof: We adopt a proof by contradiction. Let us suppose that p,q = 0. During training in this case, the corresponding score for class q (sp,q) will always be zero for all samples belonging to class p. As a result, the output activation (yq) and the back-propagated error will be independent of the weight parameters of the network, which proves the Lemma. Proof: Based on Lemmas A.3 and A.4, it is trivial that the costs are with-in the range (0, 1]. Lemma A.6. Offsetting the columns of the cost matrix can lead to an equally probable guess point. Proof: Let us consider the case of a cost-insensitive loss function. In this case, = 1 (from Lemma A.3). Offsetting all of its columns by a constant c = 1 will lead to = 0. For = 0, the CNN outputs will be zero for any o (i) ∈ R N. Therefore, the classifier will make a random guess for classification.
SGK196 Is a Glycosylation-Specific O-Mannose Kinase Required for Dystroglycan Function Dissecting Dystrophies Defects in α-dystroglycan lead to various congenital muscular dystrophies, and its ability to bind to extracellular matrix (ECM) is dependent on formation of a specific O-linked sugar structure. Previous efforts to understand the molecular mechanisms underlying α-dystroglycan's ability to bind to the ECM led to the identification of a phosphorylated O-mannosyl trisaccharide on α-dystroglycan and to the demonstration that addition of this residue is a prerequisite for formation of the ligand-binding motif. However, the biosynthetic pathway that leads to production of the phosphorylated O-mannosyl glycan has not been delineated. Yoshida-Moriguchi et al. (p. 896, published online 8 August) elucidate the functions of three genes recently found to cause dystroglycan-related disorders and explain the defects in the production of the phosphorylated O-mannosyl glycan that underlie the pathologies of patients with the relevant mutations. An atypical kinase genetically associated with muscular dystrophies recognizes a unique trisaccharide structure. Phosphorylated O-mannosyl trisaccharide is required for dystroglycan to bind laminin-G domain-containing extracellular proteins with high affinity in muscle and brain. However, the enzymes that produce this structure have not been fully elucidated. We found that glycosyltransferase-like domain-containing 2 (GTDC2) is a protein O-linked mannose β-1,4-N-acetylglucosaminyltransferase whose product could be extended by β-1,3-N-acetylgalactosaminyltransferase 2 (B3GALNT2) to form the O-mannosyl trisaccharide. Furthermore, we identified SGK196 as an atypical kinase that phosphorylated the 6-position of O-mannose, specifically after the mannose had been modified by both GTDC2 and B3GALNT2. These findings suggest how mutations in GTDC2, B3GALNT2, and SGK196 disrupt dystroglycan receptor function and lead to congenital muscular dystrophy.
/* * Copyright 2010-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"). * You may not use this file except in compliance with the License. * A copy of the License is located at * * http://aws.amazon.com/apache2.0 * * or in the "license" file accompanying this file. This file is distributed * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either * express or implied. See the License for the specific language governing * permissions and limitations under the License. */ package software.amazon.awssdk.core.internal.async; import static org.assertj.core.api.Assertions.assertThat; import com.google.common.jimfs.Jimfs; import java.io.IOException; import java.nio.file.FileSystem; import java.nio.file.Path; import java.util.concurrent.CompletableFuture; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.reactivestreams.Subscription; /** * Tests for {@link FileAsyncResponseTransformer}. */ public class FileAsyncResponseTransfomerTest { private static FileSystem testFs; @BeforeClass public static void setup() { testFs = Jimfs.newFileSystem(); } @AfterClass public static void teardown() throws IOException { testFs.close(); } @Test public void errorInStream_completesFuture() { Path testPath = testFs.getPath("test_file.txt"); FileAsyncResponseTransformer xformer = new FileAsyncResponseTransformer(testPath); CompletableFuture prepareFuture = xformer.prepare(); xformer.onResponse(new Object()); xformer.onStream(subscriber -> { subscriber.onSubscribe(new Subscription() { @Override public void request(long l) { } @Override public void cancel() { } }); subscriber.onError(new RuntimeException("Something went wrong")); }); assertThat(prepareFuture.isCompletedExceptionally()).isTrue(); } }
The determination of time delays as an inverse problem - the case of the double quasar 0957+561 A common problem in astronomy is the determination of the time shift between two otherwise identical time series of measured flux from a variable source, in short, the determination of a time delay. One example of where this problem occurs is in the determination of the Hubble constant from multiple images of gravitationally lensed variable quasars. It is shown here that this problem is very similar to the problem of reverberation mapping of active galactic nuclei (AGN), and therefore the determination of time delays can also be seen as a restricted inverse problem. In this paper a method is developed that solves this inverse problem and it is applied to the time series measured for the double quasar QSO 0957+561. The resulting time delay is 425 ± 17 d. This leads to a best value for the Hubble constant of H0 = 66 ± 10 km s^-1 Mpc^-1.
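For illustration only, a brute-force estimate of such a delay for two evenly sampled, noise-free light curves can be obtained by scanning trial shifts and minimizing the mean squared difference, as in the Python sketch below; this is not the inverse-problem method developed in the paper and it ignores irregular sampling, measurement errors and microlensing.

import numpy as np

def estimate_delay(t, a, b, trial_delays):
    # Shift light curve b by each trial delay, resample it onto the times of a,
    # and pick the delay that minimizes the mean squared difference.
    costs = []
    for tau in trial_delays:
        b_advanced = np.interp(t + tau, t, b)     # b evaluated at t + tau
        costs.append(np.mean((a - b_advanced) ** 2))
    return trial_delays[int(np.argmin(costs))], np.array(costs)

# synthetic example: image B lags image A by 425 days
t = np.arange(0.0, 3000.0, 5.0)
a = np.sin(2 * np.pi * t / 600.0) + 0.3 * np.sin(2 * np.pi * t / 150.0)
b = np.interp(t - 425.0, t, a)                    # delayed copy: b(t) = a(t - 425)
trials = np.arange(300.0, 551.0, 5.0)
best, _ = estimate_delay(t, a, b, trials)
print(best)                                       # ~425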
package cryptopals import "testing" func TestMd4(t *testing.T) { assertValidMd41Hash(t, "The quick brown fox jumps over the lazy dog", "1bee69a46ba811185c194762abaeae90") assertValidMd41Hash(t, "The quick brown fox jumps over the lazy cog", "b86e130ce7028da59e672d56ad0113df") assertValidMd41Hash(t, "", "31d6cfe0d16ae931b73c59d7e0c089c0") } func assertValidMd41Hash(t *testing.T, data, hexEncodedHash string) { hash := md4([]byte(data)) hex := hexEncode(hash[:]) assertEqualArrays(t, hex, []byte(hexEncodedHash)) }
/** * Call this once per periodic loop. */ public void updateTimes() { m_lastRobotTime = m_robotTime; m_lastMatchTime = m_matchTime; m_currTime = System.currentTimeMillis(); m_robotTime = m_currTime - m_startRobotTime; m_deltaTime = m_robotTime - m_lastRobotTime; m_frameNumber++; if (m_startMatchTime != 0) { m_matchTime = m_currTime - m_startMatchTime; } }
/* Copyright (c) 2016, Rice University Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Rice University nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef ALLOCATOR_H #define ALLOCATOR_H #include <stdlib.h> #include <assert.h> #include <pthread.h> #include <stdio.h> #include "ocl_util.h" #ifdef USE_CUDA #include <cuda.h> #include <cuda_runtime.h> typedef CUdeviceptr cl_mem; #else #include <CL/cl.h> #endif #define ASSERT_MSG(conditional, msg) { \ if (!(conditional)) { \ fprintf(stderr, "Assertion failure at %s:%d - %s\n", __FILE__, \ __LINE__, msg); \ exit(1); \ } \ } #define ASSERT(conditional) { \ if (!(conditional)) { \ fprintf(stderr, "Assertion failure at %s:%d\n", __FILE__, \ __LINE__); \ exit(1); \ } \ } // #define MIN_ALLOC_SIZE (1 << 10) #define MIN_ALLOC_SIZE 1 #define NBUCKETS 20 #ifdef TRACE #ifdef __cplusplus extern "C" { #endif extern void enter_trace(const char *lbl); extern void exit_trace(const char *lbl); #ifdef __cplusplus } #endif #define ENTER_TRACE(lbl) enter_trace(lbl) #define EXIT_TRACE(lbl) exit_trace(lbl) #else #define ENTER_TRACE(lbl) #define EXIT_TRACE(lbl) #endif #define BUCKET_MIN_SIZE_INCL(my_bucket) ((size_t)(MIN_ALLOC_SIZE * (2 << (my_bucket)))) #define BUCKET_MAX_SIZE_EXCL(my_bucket) ((size_t)(BUCKET_MIN_SIZE_INCL(my_bucket + 1))) #define BELONGS_TO_BUCKET(my_size, my_bucket) \ (my_size >= BUCKET_MIN_SIZE_INCL(my_bucket) && \ my_size < BUCKET_MAX_SIZE_EXCL(my_bucket)) struct _cl_bucket; typedef struct _cl_bucket cl_bucket; struct _cl_region; typedef struct _cl_region cl_region; struct _cl_alloc; typedef struct _cl_alloc cl_alloc; struct _cl_allocator; typedef struct _cl_allocator cl_allocator; typedef struct _cl_region { cl_mem sub_mem; size_t offset, size; cl_bucket *parent; cl_alloc *grandparent; cl_region *bucket_next, *bucket_prev; cl_region *next, *prev; int refs; bool keeping; long birth; /* * assume there can only be one outstanding reference to a region in one * cache */ bool invalidated; } cl_region; typedef struct _cl_bucket { cl_alloc *parent; cl_region *head, *tail; } cl_bucket; typedef struct _cl_alloc { cl_mem mem; char *pinned; size_t size; size_t free_bytes; // purely for diagnostics and error-checking cl_bucket buckets[NBUCKETS]; cl_bucket large_bucket; 
cl_bucket keep_buckets[NBUCKETS]; cl_bucket keep_large_bucket; cl_region *region_list_head; cl_allocator *allocator; long curr_time; pthread_mutex_t lock; #ifdef PROFILE_LOCKS unsigned long long contention; #endif } cl_alloc; /* * There is a one-to-one mapping between allocators and OpenCL devices. All of * the fields of the allocator object are constant after creation. The allocator * is the root of a region of cl_alloc objects, each representing a subset of * the memory in a device. */ typedef struct _cl_allocator { cl_alloc *allocs; int nallocs; unsigned int address_align; #ifdef USE_CUDA CUcontext cu_ctx; #endif int device_index; } cl_allocator; #ifdef USE_CUDA /* * Assumes that the calling thread has already attached a CUDA context. This * call asserts that the current context matches the expected device index. */ extern cl_allocator *init_allocator(CUcontext ctx); #else extern cl_allocator *init_allocator(cl_device_id dev, int device_index, cl_mem_flags alloc_flags, size_t limit_size, cl_context ctx, cl_command_queue cmd); #endif extern bool re_allocate_cl_region(cl_region *target_region, int target_device); extern cl_region *allocate_cl_region(size_t size, cl_allocator *allocator, void (*callback)(void *), void *user_data); extern bool free_cl_region(cl_region *to_free, bool try_to_keep); extern void print_allocator(cl_allocator *allocator, int lbl); extern void bump_time(cl_allocator *allocator); extern size_t count_free_bytes(cl_allocator *allocator); extern unsigned long long get_contention(cl_allocator *allocator); extern void print_clalloc_profile(int thread); extern void *fetch_pinned(cl_region *region); #define GET_DEVICE_FOR(my_region) ((my_region)->grandparent->allocator->device_index) #endif
package xmilcode.gianttreemod.tree.material.leaves; import net.minecraftforge.common.config.Configuration; import xmilcode.gianttreemod.GiantTreeMod; import xmilcode.mclib.config.ConfigUtil; import xmilcode.mclib.config.IConfigSettings; import xmilcode.mclib.util.NamingUtil; // Configurable settings for leaves. public class LeavesSettings implements IConfigSettings { public enum LeavesType { STANDARD_LEAVES, SIMPLE_BLOCK_LEAVES }; private static final String LEAVES_TYPE_NAME_ID = "leaves_type"; private static final String LEAVES_TYPE_INTERNAL_NAME = "leavesType"; private LeavesType leavesType = LeavesType.SIMPLE_BLOCK_LEAVES; public LeavesType leavesType() { return leavesType; } @Override public void refreshSettings(Configuration config) { leavesType = LeavesType.valueOf( ConfigUtil.enumStringFromConfig( config.getString( LEAVES_TYPE_INTERNAL_NAME, Configuration.CATEGORY_GENERAL, ConfigUtil.configStringFromEnum(LeavesType.SIMPLE_BLOCK_LEAVES.toString()), "The way giant tree leaves are rendered.", getLeavesTypeValidValues(), NamingUtil.generateConfigSettingName(GiantTreeMod.MODID, LEAVES_TYPE_NAME_ID)))); } private static String[] getLeavesTypeValidValues() { return new String[] { ConfigUtil.configStringFromEnum(LeavesType.STANDARD_LEAVES.toString()), ConfigUtil.configStringFromEnum(LeavesType.SIMPLE_BLOCK_LEAVES.toString()), }; } }
<reponame>scloudic/rabbit-framework package com.scloudic.rabbitframework.jbatis.mapping.binding; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.Map; import com.scloudic.rabbitframework.jbatis.builder.Configuration; import com.scloudic.rabbitframework.jbatis.builder.MapperParser; import com.scloudic.rabbitframework.jbatis.dataaccess.SqlDataAccess; import com.scloudic.rabbitframework.jbatis.exceptions.BindingException; public class MapperRegistry { private Configuration configuration; private final Map<Class<?>, MapperProxyFactory<?>> mappers = new HashMap<Class<?>, MapperProxyFactory<?>>(); public MapperRegistry(Configuration configuration) { this.configuration = configuration; } public <T> boolean hasMapper(Class<T> mapperInterface) { return mappers.containsKey(mapperInterface); } @SuppressWarnings("unchecked") public <T> T getMapper(Class<T> mapperInterface, SqlDataAccess sqlDataAccess) { final MapperProxyFactory<T> mapperProxyFactory = (MapperProxyFactory<T>) mappers .get(mapperInterface); if (mapperProxyFactory == null) throw new BindingException("mapperInterface " + mapperInterface + " is not known to the MapperRegistry."); try { return mapperProxyFactory.newInstance(sqlDataAccess); } catch (Exception e) { throw new BindingException("Error getting mapper instance.Cause: " + e); } } public <T> void addMapper(Class<T> mapperInterface) { // 判断是否为接口类 if (mapperInterface.isInterface()) { if (hasMapper(mapperInterface)) { throw new BindingException("Type " + mapperInterface + "is already known to the MapperRegistry."); } boolean loadCompleted = false; try { mappers.put(mapperInterface, new MapperProxyFactory<T>( mapperInterface)); MapperParser parser = new MapperParser(configuration, mapperInterface); parser.parse(); loadCompleted = true; } finally { if (!loadCompleted) { mappers.remove(mapperInterface); } } } } public Collection<Class<?>> getMappers() { return Collections.unmodifiableCollection(mappers.keySet()); } }
Synergistic effect of fluoride and laser irradiation for the inhibition of the demineralization of dental enamel Both laser irradiation and fluoride treatment alone are known to provide increased resistance to acid dissolution. CO2 lasers tuned to a wavelength of 9.3 μm can be used to efficiently convert the carbonated hydroxyapatite of enamel to a much more acid-resistant, purer-phase hydroxyapatite (HAP). Further studies have shown that fluoride application to HAP yields fluorapatite (FAP), which is even more resistant to acid dissolution. Previous studies show that CO2 lasers and fluoride treatments interact synergistically to provide significantly higher protection than either method alone, but the mechanism of interaction has not been elucidated. Using high-resolution microscopy, we recently observed the formation of microcracks, or a crazed zone, in the irradiated region that is resistant to demineralization. The microcracks are formed due to the slight contraction of enamel caused by the transformation of carbonated hydroxyapatite to the more acid-resistant pure-phase hydroxyapatite (HAP), which has a smaller lattice. In this study, we test the hypothesis that these small cracks will provide greater adhesion for topical fluoride and thus greater protection against acid demineralization.
1. Field of the Invention The invention relates generally to automatic calibration circuits. More particularly, the invention relates to a method and apparatus for automatically calibrating the output of a plurality of pressure transducers used in a hydromechanical gear shaping machine. 2. Description of the Prior Art Hydraulic pressure transducers are used in hydromechanical gear shaping machines such as shown in U.S. Pat. Nos. 4,125,056, 4,136,302 and 4,254,690. Each of these patents has been assigned to the assignee of the present invention and is incorporated by reference herein. These pressure transducers generate analog electrical signals which are proportional to their associated hydraulic pressure levels. Proper operation of the hydromechanical gear shaper is highly dependent upon the proper operation of these transducers and the precise measurement of the load pressure and supply pressure. It has been found that the pressure transducers used in prior art hydromechanical gear shaping machines have, after repeated operation, tended to drift, with the electrical outputs of the sensors and their associated circuit components deviating as a result of amplifier gain or voltage offset changes. Such deviations are obviously detrimental to the proper operation of the machine and necessitate recalibration or replacement of the error-producing components. Additionally, the inaccuracies of the sensor output may lead to operating inefficiencies if the signals indicate that greater power is required than is actually necessary for the proper performance of the gear shaper. Accordingly, it is an object of this invention to overcome the disadvantages associated with these prior art pressure transducers by providing an automatic calibration means for compensating for detected errors. It is another object of this invention to produce such an automatic calibration means for periodically compensating for detected errors. It is also an object of this invention to avoid the necessity of precisely adjusting the initial values of various components associated with the transducer circuits.
import { IUser } from './IUser';

export interface IAuth {
  loginWithRedirect: (options?: any) => Promise<void>;
  logout: (options?: any) => void;
  isAuthenticated?: boolean;
  isReady?: boolean;
  user: IUser;
}
import network
import socket
import machine
import ntptime
import time
import LM75
# import mpu6050
import Render

# connect to wifi router
sta_if = network.WLAN(network.STA_IF)
if not sta_if.isconnected():
    print('connecting to network...')
    sta_if.active(True)
    sta_if.connect('DS', 'SputnikulOn4Antenni')
    while not sta_if.isconnected():
        time.sleep(0.3)
        print(".", end=" ")
    print("")
print('network config:', sta_if.ifconfig())

# create a socket and listener for art-Net packages
# https://github.com/jsbronder/asyncio-dgram

# i2c
i2c = machine.SoftI2C(scl=machine.Pin(22), sda=machine.Pin(23))

# imu 104 dec
# print(str(i2c.readfrom_mem(104,0x75,1)))
# imu = mpu6050.MPU6050(i2c)

# LED output RGBW
Output = Render.Render(i2cInterface=i2c)

# fan @ pin(0)
fan = machine.PWM(machine.Pin(0), duty=1023)

# Input Voltage sensor 1:7.81
inputVoltage = machine.ADC(machine.Pin(32))
inputVoltage.atten(machine.ADC.ATTN_11DB)

# enable outputs
enableAll = machine.Pin(27, machine.Pin.OUT, value=1)

# redComp
def calcRedcomp(self):
    pos = int(getTemp()*10) + 300

redComp = machine.Timer(-1)

# NTP
print(str(time.localtime()))
ntptime.settime()
print(str(time.localtime()))
def check_overlap_compatible(self, other_cluster, max_distance):
    self_switches = self.orientation_switches
    other_switches = other_cluster.orientation_switches
    single_orientation = len(self_switches) == len(other_switches) == 1
    tolerance = 0 if self.valid_tsd else max_distance
    if abs(other_cluster.min - self.max) < max_distance and overlap(start1=self.start,
                                                                    end1=self.end,
                                                                    start2=other_cluster.start,
                                                                    end2=other_cluster.end,
                                                                    tolerance=tolerance):
        if single_orientation:
            min_read_read_length = min([r.reference_length for r in other_cluster
                                        if r.reference_start == other_cluster.min])
            max_read_read_length = min([r.reference_length for r in self
                                        if r.reference_end == self.max])
            if abs(other_cluster.min - self.max) < (max_distance - min_read_read_length - max_read_read_length):
                if self_switches[0][0] == 'F':
                    if set(self.insert_reference_tags()) & set(other_cluster.insert_reference_tags()):
                        return True
                elif len(other_switches) == 1 and other_switches[0][0] == 'R':
                    if set(self.insert_reference_tags()) & set(other_cluster.insert_reference_tags()):
                        return True
Territory Brand: Approaches to Definition, Simulation Methodology Taking into account the specifics of modern geo-economic development, territorial branding plays the role of the basis for territorial management. Territorial branding is aimed at generating the competitive advantages of a particular territory, at improving its image, popularity, and reputation of goods and services produced in a given territory in the eyes of consumers. The analysis of the currently existing approaches to the understanding of the essence of the notion of territory brand and the essence of branding in general are given in this article, and also the features and constituent elements of foreign and Russian models of a territorial brand are revealed. This ultimately allowed the authors to clarify the concept of this definition. According to the authors, a territory brand is a combination of unique qualities, unfading universal values reflecting the originality, unique original consumer characteristics of this territory and community, widely known, received public recognition and enjoying strong demand among the consumers of the territory, contributing to the formation of preferences of this territory over others in a situation of choice. The brand is presented by the authors of the article as the most important way to realize the competitive advantages of a territory, a tool for competitiveness, differentiation, and uniqueness. Simulation is used as a research method. It became possible to clarify theoretically the essence of the concept of territory brand and systematically comprehend the territorial branding technology by building the authors model for the territory brand.
package com.javali.gleif.elvesmatcher.model; import java.util.Date; import java.util.List; import com.opencsv.bean.CsvBindAndSplitByPosition; import com.opencsv.bean.CsvBindByPosition; import com.opencsv.bean.CsvDate; import lombok.Data; /** @author javali on 13.12.2020. */ @Data public class ELF { @CsvBindByPosition(position = 0) private String elfcode; @CsvBindByPosition(position = 1) private String countryOfFormation; // (ISO 3166-1) @CsvBindByPosition(position = 2) private String countryCode; @CsvBindByPosition(position = 3) private String jurisdictionOfFormation; // (ISO 3166-2) @CsvBindByPosition(position = 4) private String countrySubDivisionCode; @CsvBindByPosition(position = 5) private String entityLegalFormNameLocalName; @CsvBindByPosition(position = 6) private String language; // (ISO 639-1) @CsvBindByPosition(position = 7) private String languageCode; // (per ISO 01-140-10) @CsvBindByPosition(position = 8) private String entityLegalFormNameTransliteratedName; //@CsvBindByName(column = "Abbreviations Local language") @CsvBindAndSplitByPosition(position = 9, elementType = String.class, splitOn = ";") private List<String> abbreviationsLocalLanguage; @CsvBindByPosition(position = 10) private String abbreviationsTransliterated; // YYYY-MM-DD (ISO 8601) @CsvBindByPosition(position = 11) @CsvDate("yyyy-MM-dd") private Date dateCreated; // ACTV/INAC @CsvBindByPosition(position = 12) private String elfStatus; @CsvBindByPosition(position = 13) private String modification; // YYYY-MM-DD (ISO 8601) @CsvBindByPosition(position = 14) @CsvDate("yyyy-MM-dd") private Date modificationDate; @CsvBindByPosition(position = 15) private String reason; }
Rain fell on the statue of Turlough O'Carolan, now a landmark looking up the main street in Mohill, Co Leitrim. This is the village where the blind harper, at the age of 50, on being given land by his patrons, the Crofton family, finally built a home of his own about 1720. Yesterday harpers and a piper stood on the site of O'Carolan's home. Now only a pile of stones and rubble remains. The ruins of the Crofton mansion, Lakefield House, continue to stand. It is a house in which O'Carolan often played his tunes; the music lingers, as does the enduring presence of the minstrel, who was so revered that his wake in 1738 spanned four days. O'Carolan's music was celebrated in two concerts during the weekend's Leitrim Fleadh. Performed by the members of the National Harp Orchestra under Janet Harbison, O'Carolan's tunes (of which more than 200 survive, thanks initially to the power of the oral folk memory and, from the mid-19th century, to the work of Edward Bunting, who notated and helped preserve them) retain their sense of period as well as their mood shifts from the lively to the melancholic. On Sunday evening in the ballroom of the local hotel, 17 members of the National Harp Orchestra took their places and performed some of the pieces O'Carolan had written for patrons, many of whom were also friends. The ethereal quality of the harp was complemented by the haunting keening of the uilleann pipes as played by Ryan Murphy from Cork. The absence of local man, the writer John McGahern, was noted, and a tune was played in honour of his memory. Tunes such as Planxty Brown, Eleanor Plunkett, Planxty Irwin, Fanny Brown and Hewlett, as well as his most famous work, O'Carolan's Concerto, first performed in the home of Jonathan Swift, shimmered on the air. The melodic music is delicate and precise, the work of Ireland's first great composer. O'Carolan was the Irish Vivaldi, shaped in part by the Italian school and a product of the age of the Baroque. Among the harpers playing on Sunday was a young Co Tipperary girl, aged 15 and already one of the finest Irish dancers in the world. As the music played on, she waited for her note, then left her harp, took her place on the small platform and executed intricate dance steps and high leaps without ever allowing the sheer athleticism of her performance to overshadow its grace. There was an otherworldly quality about her dancing. Ciara Callanan Ryan lives in Leap Castle, the most haunted house in Ireland. Earlier in the day, the orchestra had performed an outdoor recital on a stage erected beside O'Carolan's monument, which depicts him in bronze, larger than life and playing his harp. The rain poured down, but the music continued, as did the various musical competitions. Almost 20 teenagers competed in the tin whistle event. Through his career as a wandering musician, Turlough O'Carolan, who had been born in Nobber in Co Meath in 1670 and was blinded by smallpox at the age of 14, travelled throughout the northwest and was always guaranteed a welcome. His one enemy was the rain. And on Sunday it rained on the current generation of harpers, most of whom say it was O'Carolan's music that inspired them to play the harp; several of the younger ones want to travel the world, playing the master's tunes.
# coding= utf-8 # Copyright (c) 2014 Rackspace, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import uuid import ddt import re from nose.plugins import attrib from tests.api import providers @ddt.ddt class TestTransportFlavors(providers.TestProviderBase): """Security Tests for transport layer security vulnerablities for flavor calls.""" def setUp(self): """ Setup for the tests """ super(TestTransportFlavors, self).setUp() self.reset_defaults() def reset_defaults(self): """ Reset provider_list, limit_list and flavor_id to their default values. """ self.provider_list = [{"provider": "fastly", "links": [{"href": "www.watermelon.com", "rel": "provider_url"}]}] self.limits_list = [{"origins": {"min": 1, "max": 5}}, {"domains": {"min": 1, "max": 5}}, {"caching": {"min": 3600, "max": 604800, "incr": 300}}] self.flavor_id = str(uuid.uuid1()) def check_one_request(self): """ Create one flavor and check whether it has been sucessfully created. """ resp = self.client.create_flavor(flavor_id=self.flavor_id, provider_list=self.provider_list, limits=self.limits_list) self.assertTrue(resp.status_code == 202) @attrib.attr('security') def test_transport_check_https(self): """ Check whether https is used for all links returned from get_service calls. If https is not used in any link, the test fails. """ self.reset_defaults() self.flavor_id = str(uuid.uuid1()) # create one flavor self.check_one_request() resp = self.client.list_flavors() # make sure that http:// is not used anywhere self.assertTrue(re.search("http://", resp.text) is None) def tearDown(self): self.client.delete_flavor(flavor_id=self.flavor_id) super(TestTransportFlavors, self).tearDown()
What factors drive corporate customer satisfaction with e-banking services Due to the burgeoning development of electronic commerce (e-commerce), broader applications of an emerging service, Internet banking (e-banking), have been introduced and provided by financial holding companies or banks at an accelerating rate in recent years, since they can provide efficient, reliable, secure, and convenient financial services, such as online payment, deposit/loan, trading, and clearing/settlement, via electronic channels (e-channels, e.g., Internet and phone) for customers. E-banking services not only can create new competitive advantages for banks but may also improve their relationships with customers. Because e-banking can offer the better services required by corporations and individuals, it could be a strategic niche for banks and their customers alike. Conceivably, how to implement e-banking successfully is becoming a critical management issue. Unfortunately, research has paid scarce attention to what factors drive the success of e-banking, particularly from the corporate customer's perspective. For this reason, this paper explores what factors affect corporate customer satisfaction with e-banking (CCSEB), a surrogate variable for the success of e-banking services. Based on a survey of 178 respondents from Taiwanese companies, the results support that environmental, organizational, and globalization factors significantly affect customer satisfaction with e-banking. Furthermore, there exists a reciprocal relationship between customer satisfaction and post-usage favorable behavior. We believe the results and findings in this paper not only offer in-depth insights for practitioners about how to implement e-banking successfully, but also suggest further directions for researchers interested in designing related theories.
def _EarCheck(face, n, angk, vm1, v0, v1, points):
    for j in range(0, n):
        fv = face[j]
        k = angk[j]
        b = (k == Angreflex or k == Ang360) \
            and not(fv == vm1 or fv == v0 or fv == v1)
        if b:
            c = not(Ccw(v0, vm1, fv, points) \
                    or Ccw(vm1, v1, fv, points) \
                    or Ccw(v1, v0, fv, points))
            fvm1 = face[(j - 1) % n]
            fv1 = face[(j + 1) % n]
            d = SegsIntersect(fvm1, fv, vm1, v0, points) or \
                SegsIntersect(fvm1, fv, v0, v1, points) or \
                SegsIntersect(fv, fv1, vm1, v0, points) or \
                SegsIntersect(fv, fv1, v0, v1, points)
            if c or d:
                return False
    return True
from cl_3x3 import *

# --------------------------------------------------
conta = [
    {'x': 1, 'y': 2, 'z': 1, '=': 8},
    {'x': 2, 'y': -1, 'z': 1, '=': 3},
    {'x': 3, 'y': 1, 'z': -1, '=': 2}
]

# --------------------------------------------------
def enfeitar(oque='-', qtd=50):
    print(oque * qtd)

a = cl_3x3(conta)

print()
enfeitar()
print('conta:')
a.mostrar_conta()
enfeitar()
print('matriz delta:')
a.mostrar_matriz()
print(f'delta = {a.delta}')
enfeitar()
a.mostrar_matriz('x')
print(f'deltaX = {a.deltaX}')
enfeitar()
a.mostrar_matriz('y')
print(f'deltaY = {a.deltaY}')
enfeitar()
a.mostrar_matriz('z')
print(f'deltaZ = {a.deltaZ}')
enfeitar()
print(f'delta={a.delta}, deltaX={a.deltaX}, deltaY={a.deltaY}, deltaZ={a.deltaZ}')
print(f'x = {a.deltaX}/{a.delta} = {a.x}\n')
print(f'y = {a.deltaY}/{a.delta} = {a.y}\n')
print(f'z = {a.deltaZ}/{a.delta} = {a.z}\n')
enfeitar()
print(f'x = {a.x}, y = {a.y}, z = {a.z}')
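The script above solves a 3x3 linear system by Cramer's rule (delta, deltaX, deltaY, deltaZ). As a quick cross-check, the same system can be solved with NumPy, independently of the cl_3x3 class (which is not shown here); the coefficients below are copied from the conta list:

# Cross-check of the 3x3 system above using NumPy instead of cl_3x3.
import numpy as np

A = np.array([[1, 2, 1],
              [2, -1, 1],
              [3, 1, -1]], dtype=float)
b = np.array([8, 3, 2], dtype=float)

delta = np.linalg.det(A)         # should match a.delta
x, y, z = np.linalg.solve(A, b)  # should match a.x, a.y, a.z
print(f'delta={delta:.0f}, x={x}, y={y}, z={z}')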
import { capitalise } from './capitalise.util';

describe('capitalise', () => {
  it('should capitalise the first character of a string', () => {
    expect(capitalise('a')).toBe('A');
  });
});
import time import docker client = docker.from_env() class Dockering: def __init__(self, config): self.image = config['image'] self.ports = config['ports'] self.env = config.get('envs') self.volumes = config.get('volumes') if self.volumes: for key, val in self.volumes.items(): self.volumes[key] = {'bind': val, 'mode': 'ro'} client.images.pull(self.image) def up(self): self.container = client.containers.create( self.image, auto_remove=True, volumes=self.volumes, ports=self.ports, name='server', environment=self.env) self.container.start() if not self.container.logs(): print(' Waiting for container to come up...') time.sleep(1) # delay giving for services inside the container to come up time.sleep(3) def down(self): self.container.remove(v=True, force=True) def __enter__(self): self.up() def __exit__(self, type, value, traceback): self.down() if __name__ == '__main__': d = Dockering('tensorwerk/raibenchmarks:flask-optim-cpu') d.up() d.down() """ docker run --read-only -v /home/hhsecond/mypro/benchmarks/assets:/root/data \ --read-only -v /home/hhsecond/mypro/benchmarks/experiments/_tensorflow/_flask:/root \ -p 8000:8000 --name server --rm tensorwerk/raibenchmarks:flask-optim-cpu """
export * from "./webpinfo";
Q: What does "Ron's face was set" mean? Black conjured heavy manacles from thin air; soon Pettigrew was upright again, left arm chained to Lupin's right, right arm to Ron's left. Ron's face was set. He seemed to have taken Scabbers's true identity as a personal insult. Crookshanks leapt lightly off the bed and led the way out of the room, his bottlebrush tail held jauntily high. The closest phrase I can get from dictionaries is "set one's face against", which means "To be strongly opposed to or disapproving of something.", but I'm not sure if it's the intended meaning in this context. What does "Ron's face was set" exactly mean? A: When something sets, it becomes solid/unchanging. Ron's face has set into a single, grim, determined expression. to set if a liquid sets, or if you set it, it forms a solid substance if your face or a part of it sets into a particular expression, or if you set it into a particular expression, you have that expression on your face to be set (adj.) a set smile or expression does not change, and often hides what someone is really thinking the set of somebody’s face/jaw/shoulders etc. the expression on your face or the way you hold your body, which tells people how you are feeling
/* BEGIN_COMMON_COPYRIGHT_HEADER * (c)MIT * * Flacon - audio File Encoder * https://github.com/flacon/flacon * * Copyright: 2022 * <NAME> <<EMAIL>> * * MIT License * * Copyright (c) 2022 <NAME> * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in all * copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * END_COMMON_COPYRIGHT_HEADER */ #include "wavheader.h" #include "types.h" #include <iostream> #include <array> #include <cstring> static const char *WAV_RIFF = "RIFF"; static const char *WAV_WAVE = "WAVE"; static const char *WAV_FMT = "fmt "; static const char *WAV_DATA = "data"; static const char *WAVE64_RIFF = "riff"; static const char *WAVE64_WAVE = "wave"; static const std::array<uint8_t, 16> WAVE64_GUID_RIFF = { 0x72, 0x69, 0x66, 0x66, 0x2E, 0x91, 0xCF, 0x11, 0xA5, 0xD6, 0x28, 0xDB, 0x04, 0xC1, 0x00, 0x00 }; static const std::array<uint8_t, 16> WAVE64_GUID_WAVE = { 0x77, 0x61, 0x76, 0x65, 0xF3, 0xAC, 0xD3, 0x11, 0x8C, 0xD1, 0x00, 0xC0, 0x4F, 0x8E, 0xDB, 0x8A }; static const std::array<uint8_t, 16> WAVE64_GUID_FMT = { 0x66, 0x6D, 0x74, 0x20, 0xF3, 0xAC, 0xD3, 0x11, 0x8C, 0xD1, 0x00, 0xC0, 0x4F, 0x8E, 0xDB, 0x8A }; static const std::array<uint8_t, 16> WAVE64_GUID_DATA = { 0x64, 0x61, 0x74, 0x61, 0xF3, 0xAC, 0xD3, 0x11, 0x8C, 0xD1, 0x00, 0xC0, 0x4F, 0x8E, 0xDB, 0x8A }; // 16 bytes of GUID + 8 bytes of Int64 static constexpr int WAVE64_CHUNK_HEADER_SIZE = 16 + 8; // static const int READ_DELAY = 1000; #if ('A' << 24 | 'B' << 16 | 'C' << 8 | 'D') == 0x41424344 // little-endian target architecture static inline uint16_t fromLittleEndian(uint16_t value) { return value; } static inline uint32_t fromLittleEndian(uint32_t value) { return value; } static inline uint64_t fromLittleEndian(uint64_t value) { return value; } static inline uint16_t toLittleEndian(uint16_t value) { return value; } static inline uint32_t toLittleEndian(uint32_t value) { return value; } static inline uint64_t toLittleEndian(uint64_t value) { return value; } #else // big-endian target architecture static inline uint16_t fromLittleEndian(uint16_t value) { return uint16_t(0 | ((value & 0x00ff) << 8) | ((value & 0xff00) >> 8)); } static inline uint32_t fromLittleEndian(uint32_t value) { return 0 | ((value & 0x000000ff) << 24) | ((value & 0x0000ff00) << 8) | ((value & 0x00ff0000) >> 8) | ((value & 0xff000000) >> 24); } static inline uint64_t fromLittleEndian(uint64_t value) { return 0 | ((value & uint64_t(0x00000000000000ff)) << 56) | ((value & uint64_t(0x000000000000ff00)) << 40) | ((value & uint64_t(0x0000000000ff0000)) << 24) | ((value & uint64_t(0x00000000ff000000)) << 8) | ((value & 
uint64_t(0x000000ff00000000)) >> 8) | ((value & uint64_t(0x0000ff0000000000)) >> 24) | ((value & uint64_t(0x00ff000000000000)) >> 40) | ((value & uint64_t(0xff00000000000000)) >> 56); } #endif /************************************************ * ************************************************/ static inline void mustRead(std::istream *stream, char *data, size_t size) { if (!stream->read(data, size)) { throw WavHeaderError("Unexpected end of file on " + std::to_string(stream->tellg())); } } /************************************************ * ************************************************/ ByteArray &operator<<(ByteArray &out, const char val[4]) { out.push_back(val[0]); out.push_back(val[1]); out.push_back(val[2]); out.push_back(val[3]); return out; } /************************************************ * ************************************************/ ByteArray &operator<<(ByteArray &out, ByteArray val) { for (const uint8_t &b : val) { out.push_back(b); } return out; } /************************************************ * ************************************************/ ByteArray &operator<<(ByteArray &out, uint64_t val) { union { uint64_t n; char bytes[8]; }; n = toLittleEndian(val); out.push_back(bytes[0]); out.push_back(bytes[1]); out.push_back(bytes[2]); out.push_back(bytes[3]); out.push_back(bytes[4]); out.push_back(bytes[5]); out.push_back(bytes[6]); out.push_back(bytes[7]); return out; } /************************************************ * ************************************************/ ByteArray &operator<<(ByteArray &out, uint32_t val) { union { uint32_t n; char bytes[4]; }; n = toLittleEndian(val); out.push_back(bytes[0]); out.push_back(bytes[1]); out.push_back(bytes[2]); out.push_back(bytes[3]); return out; } /************************************************ * ************************************************/ ByteArray &operator<<(ByteArray &out, uint16_t val) { union { uint32_t n; char bytes[2]; }; n = toLittleEndian(val); out.push_back(bytes[0]); out.push_back(bytes[1]); return out; } /************************************************ * ************************************************/ static ByteArray &operator<<(ByteArray &out, const std::array<uint8_t, 16> &val) { for (const uint8_t &b : val) { out.push_back(b); } return out; } /************************************************ * ************************************************/ // struct SplitterError //{ // int trackNum; // QString msg; // SplitterError(int num, QString msg) : // trackNum(num), // msg(msg) // { // } //}; /************************************************ * ************************************************/ static ByteArray readBytes(std::istream *stream, size_t size) { ByteArray res(size); mustRead(stream, (char *)(res.data()), size); return res; } /************************************************ * ************************************************/ static uint64_t readUInt64(std::istream *stream) { uint64_t n; if (!stream->read((char *)&n, 8)) { throw WavHeaderError("Unexpected end of file"); } return fromLittleEndian(n); } /************************************************ * ************************************************/ static uint32_t readUInt32(std::istream *stream) { uint32_t n; if (!stream->read((char *)&n, 4)) throw WavHeaderError("Unexpected end of file"); return fromLittleEndian(n); } /************************************************ * ************************************************/ static uint16_t readUInt16(std::istream *stream) { uint16_t n; if (!stream->read((char *)&n, 2)) throw 
WavHeaderError("Unexpected end of file"); return fromLittleEndian(n); } /************************************************ * ************************************************/ class FourCC : public std::array<char, 4> { public: FourCC() : std::array<char, 4>({ '\0' }) { } inline void load(std::istream *stream) { return mustRead(stream, this->data(), this->size()); } inline bool operator==(const char *str) const { return strncmp(data(), str, size()) == 0; } inline bool operator!=(const char *str) const { return !this->operator==(str); } }; ByteArray &operator<<(ByteArray &out, const FourCC &val) { out.insert(out.end(), val.begin(), val.end()); return out; } /************************************************ * ************************************************/ class Guid : public std::array<char, 16> { public: static constexpr int SIZE = 16; Guid() : std::array<char, SIZE>({ '\0' }) { } inline void load(std::istream *stream) { return mustRead(stream, this->data(), this->size()); } inline bool operator==(const char *str) const { return strncmp(data(), str, size()) == 0; } inline bool operator!=(const char *str) const { return !this->operator==(str); } inline bool startsWidth(const char str[4]) const { return strncmp(data(), str, 4) == 0; } }; ByteArray &operator<<(ByteArray &out, const Guid &val) { out.insert(out.end(), val.begin(), val.end()); return out; } /************************************************ * See WAV specoification * http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/WAVE.html * https://en.wikipedia.org/wiki/WAV ************************************************/ WavHeader::WavHeader(std::istream *stream) noexcept(false) { char tag[] = "\0\0\0\0"; mustRead(stream, tag, 4); if (strcmp(tag, WAV_RIFF) == 0) { m64Bit = false; readWavHeader(stream); return; } if (strcmp(tag, WAVE64_RIFF) == 0) { readBytes(stream, 12); // Wave64 format uses 128-bit GUIDs, we readed 4 bytes, there are still 12 bytes m64Bit = true; readWave64Header(stream); return; } throw WavHeaderError("WAVE header is missing RIFF tag while processing file"); } /************************************************ * 52 49 46 46 RIFF * 24 B9 4D 02 file size - 8 * 57 41 56 45 WAVE * * // Chanks * 66 6D 74 20 SubchunkID "fmt " * 10 00 00 00 SubchunkSize 16 * 01 00 AudioFormat PCM * 02 00 NumChannels 2 * 44 AC 00 00 SampleRate 44100 * 10 B1 02 00 ByteRate 176400 * 04 00 BlockAlign 4 * 10 00 BitsPerSample 16 * // Data * 64 61 74 61 SubchunkID "data" * 00 B9 4D 02 SubchunkSize ************************************************/ void WavHeader::readWavHeader(std::istream *stream) { this->mFileSize = readUInt32(stream) + 8; FourCC waveTag; waveTag.load(stream); if (waveTag != WAV_WAVE) { throw WavHeaderError("WAVE header is missing WAVE tag while processing file"); } FourCC chunkId; uint64_t pos = 12; while (pos < this->mFileSize) { chunkId.load(stream); uint32_t chunkSize = readUInt32(stream); pos += 8; if (chunkId == WAV_DATA) { this->mDataSize = chunkSize; this->mDataStartPos = pos; return; } if (chunkSize < 1) { throw WavHeaderError("[WAV] incorrect chunk size " + std::to_string(chunkSize) + " at " + std::to_string(pos - 4)); } if (chunkId == WAV_FMT) { loadFmtChunk(stream, chunkSize); pos += chunkSize; } else { mOtherCunks << chunkId; mOtherCunks << chunkSize; mOtherCunks << readBytes(stream, chunkSize); pos += chunkSize; } } throw WavHeaderError("data chunk not found"); } /************************************************ * All chunks are byte-aligned on 8-byte boundaries, but their * chunk size fields do not include any 
padding if it is necessary. ************************************************/ void WavHeader::readWave64Header(std::istream *stream) { this->mFileSize = readUInt64(stream); Guid waveTag; waveTag.load(stream); if (!waveTag.startsWidth(WAVE64_WAVE)) { throw WavHeaderError("WAVE64 header is missing WAVE tag while processing file"); } Guid chunkId; uint64_t pos = 16 + 8 + 16; while (pos < this->mFileSize) { // All chunks are byte-aligned on 8-byte boundaries if (pos % 8) { char d[8]; mustRead(stream, d, 8 - (pos % 8)); } chunkId.load(stream); uint64_t chunkSize = readUInt64(stream); pos += WAVE64_CHUNK_HEADER_SIZE; if (chunkId.startsWidth(WAV_DATA)) { this->mDataSize = chunkSize - WAVE64_CHUNK_HEADER_SIZE; this->mDataStartPos = pos; return; } if (chunkSize < 1) { throw WavHeaderError("[WAVE] incorrect chunk size " + std::to_string(chunkSize) + " at " + std::to_string(pos - 4)); } if (chunkId.startsWidth(WAV_FMT)) { loadFmtChunk(stream, chunkSize - 16 - 8); pos += chunkSize - WAVE64_CHUNK_HEADER_SIZE; } else { mOtherCunks << chunkId; mOtherCunks << chunkSize; mOtherCunks << readBytes(stream, chunkSize - WAVE64_CHUNK_HEADER_SIZE); pos += chunkSize - WAVE64_CHUNK_HEADER_SIZE; } } throw WavHeaderError("data chunk not found"); } /************************************************ * ************************************************/ bool WavHeader::isCdQuality() const { static const int CD_NUM_CHANNELS = 2; static const int CD_BITS_PER_SAMPLE = 16; static const int CD_SAMPLE_RATE = 44100; static const int CD_BYTE_RATE = 176400; return mNumChannels == CD_NUM_CHANNELS && mBitsPerSample == CD_BITS_PER_SAMPLE && mSampleRate == CD_SAMPLE_RATE && mByteRate == CD_BYTE_RATE; } /************************************************ * ************************************************/ uint64_t WavHeader::duration() const { return (mDataSize * 1000ull) / mByteRate; } /************************************************ * ************************************************/ uint32_t WavHeader::bytesPerSecond(WavHeader::Quality quality) { switch (quality) { case Quality_Stereo_CD: return 2 * 16 * 44100 / 8; case Quality_Stereo_24_96: return 2 * 24 * 96000 / 8; case Quality_Stereo_24_192: return 2 * 24 * 192000 / 8; } return 0; } /************************************************ * ************************************************/ uint32_t WavHeader::bytesPerSecond() { return mNumChannels * mBitsPerSample * mSampleRate / 8; } /************************************************ * ************************************************/ void checkFormat(uint16_t format) { switch (format) { case WavHeader::Format_Unknown: case WavHeader::Format_PCM: case WavHeader::Format_ADPCM: case WavHeader::Format_IEEE_FLOAT: case WavHeader::Format_ALAW: case WavHeader::Format_MULAW: case WavHeader::Format_OKI_ADPCM: case WavHeader::Format_IMA_ADPCM: case WavHeader::Format_DIGISTD: case WavHeader::Format_DIGIFIX: case WavHeader::Format_DOLBY_AC2: case WavHeader::Format_GSM610: case WavHeader::Format_ROCKWELL_ADPCM: case WavHeader::Format_ROCKWELL_DIGITALK: case WavHeader::Format_G721_ADPCM: case WavHeader::Format_G728_CELP: case WavHeader::Format_MPEG: case WavHeader::Format_MPEGLAYER3: case WavHeader::Format_G726_ADPCM: case WavHeader::Format_G722_ADPCM: case WavHeader::Format_Extensible: return; } throw WavHeaderError("Unknown format (" + std::to_string(format) + " in WAVE header"); } /************************************************ * ************************************************/ void WavHeader::loadFmtChunk(std::istream *stream, const uint32_t 
chunkSize) { if (chunkSize != FmtChunkMin && chunkSize != FmtChunkMid && chunkSize != FmtChunkExt) throw WavHeaderError("fmt chunk in WAVE header hase incorrect length"); mFmtSize = FmtChunkSize(chunkSize); uint16_t format = readUInt16(stream); this->mFormat = static_cast<Format>(format); checkFormat(format); this->mNumChannels = readUInt16(stream); this->mSampleRate = readUInt32(stream); this->mByteRate = readUInt32(stream); this->mBlockAlign = readUInt16(stream); this->mBitsPerSample = readUInt16(stream); if (chunkSize == FmtChunkMin) return; mExtSize = readUInt16(stream); // Size of the extension: if (chunkSize == FmtChunkMid) return; if (mExtSize != FmtChunkExt - FmtChunkMid) throw WavHeaderError("Size of the extension in WAVE header hase incorrect length"); mValidBitsPerSample = readUInt16(stream); // at most 8*M mChannelMask = readUInt32(stream); // Speaker position mask mSubFormat = readBytes(stream, 16); // GUID (first two bytes are the data format code) } /************************************************ * ************************************************/ // QByteArray WavHeader::toByteArray() const //{ // if (m64Bit) { // return wave64ToByteArray(); // } // else { // return wavToByteArray(true); // } // } /************************************************ * ************************************************/ // QByteArray WavHeader::toLegacyWav() const //{ // return wavToByteArray(false); // } /************************************************ * 52 49 46 46 RIFF * 24 B9 4D 02 file size - 8 * 57 41 56 45 WAVE * * // Chanks * 66 6D 74 20 SubchunkID "fmt " * 10 00 00 00 SubchunkSize 16 * 01 00 AudioFormat PCM * 02 00 NumChannels 2 * 44 AC 00 00 SampleRate 44100 * 10 B1 02 00 ByteRate 176400 * 04 00 BlockAlign 4 * 10 00 BitsPerSample 16 * // Data * 64 61 74 61 SubchunkID "data" * 00 B9 4D 02 SubchunkSize ************************************************/ ByteArray WavHeader::wavToByteArray(bool keepOtherChunks) const { ByteArray res; res.reserve(mDataStartPos - 1); res << WAV_RIFF; res << uint32_t(0); res << WAV_WAVE; res << WAV_FMT; res << uint32_t(mFmtSize); res << uint16_t(mFormat); res << mNumChannels; res << mSampleRate; res << mByteRate; res << mBlockAlign; res << mBitsPerSample; if (mFmtSize > FmtChunkMin) { res << mExtSize; } if (mExtSize > 0) { res << mValidBitsPerSample; res << mChannelMask; res.insert(res.end(), mSubFormat.begin(), mSubFormat.end()); } if (keepOtherChunks) { res.insert(res.end(), mOtherCunks.begin(), mOtherCunks.end()); } res << WAV_DATA; res << uint32_t(mDataSize); // Write file size ......... uint64_t fileSize = mDataSize + res.size() - 8; if (fileSize > 0xFFFFFFFF) { throw WavHeaderError("Stream is too big to fit in a legacy WAVE file"); } uint32_t le = toLittleEndian(uint32_t(fileSize)); res[4] = (le >> 0) & 0xFF; res[5] = (le >> 8) & 0xFF; res[6] = (le >> 16) & 0xFF; res[7] = (le >> 24) & 0xFF; return res; } /************************************************ * The chunk size fields directly following the chunk-GUID and preceeding * the chunk body, include the size of the chunk-GUID and the chunk length * field itself. * Therefore, it corresponds to the chunk data size plus 24 (16 bytes for * the GUID, 8 bytes for the size field). 
************************************************/ ByteArray WavHeader::wave64ToByteArray() const { ByteArray res; res.reserve(mDataStartPos - 1); res << WAVE64_GUID_RIFF; res << uint64_t(mFileSize); res << WAVE64_GUID_WAVE; res << WAVE64_GUID_FMT; // The chunk size fields include the size of the chunk-GUID and the chunk length field itself. Therefore, it corresponds to the chunk data size plus 24 (16 bytes for the GUID, 8 bytes for the size field). res << uint64_t(mFmtSize + 24); // res << uint16_t(mFormat); res << mNumChannels; res << mSampleRate; res << mByteRate; res << mBlockAlign; res << mBitsPerSample; if (mFmtSize > FmtChunkMin) { res << mExtSize; } if (mExtSize > 0) { res << mValidBitsPerSample; res << mChannelMask; res.insert(res.end(), mSubFormat.begin(), mSubFormat.end()); } res.insert(res.end(), mOtherCunks.begin(), mOtherCunks.end()); res << WAVE64_GUID_DATA; res << uint64_t(mDataSize + WAVE64_CHUNK_HEADER_SIZE); return res; } /************************************************ * ************************************************/ void WavHeader::resizeData(uint32_t dataSize) { mDataSize = dataSize; mFileSize = mDataStartPos + mDataSize; }
Many of today's computing systems include computing resources that are not fully utilized. The owners of these systems often could benefit by increasing the utilization of these systems' computing resources. A number of approaches could be adopted in order to increase utilization. Under a “consolidation” approach, the processes and data of multiple parties might be co-located on a single hardware unit in order to more fully utilize the resources of the hardware unit. Under the consolidation approach, multiple parties might share a single hardware unit's resources, including file systems, network connections, and memory structures. For example, multiple businesses might have separate websites that are hosted by the same server. However, some of the parties might not know or trust each other. In some cases, some of the parties actually might be competitors with others of the parties. Under such circumstances, each party would want to ensure that its processes and data were shielded, or isolated, from access by other parties and those other parties' processes. Mechanisms that would isolate one party's processes and data from other parties sharing the same hardware unit have been proposed. For example, a “jail” mechanism provides the ability to partition an operating system environment into a “non-jailed” environment and one or more “jailed” environments. The jail mechanism allows users, processes, and data to be associated with a jailed environment. For example, one group of users, processes, and data may be associated with one jailed environment, and another group of users, processes, and data may be associated with another jailed environment. The jail mechanism restricts users and processes that are associated with a particular jailed environment from accessing processes and data that are associated with environments (both jailed and non-jailed) other than the particular jailed environment. Some operating system environments provide a system logging mechanism that permits processes to send, to a designated message stream, messages designated as “log messages.” A designated process may read the log messages from the designated message stream and write the log messages to a log file. A user may view the log file in order to diagnose problems occurring within the operating system environment. Processes also may read the log messages from the designated message stream. As discussed above, an operating system environment may be partitioned into a non-jailed environment and one or more jailed environments. When an operating system environment is so partitioned, the designated message stream, the designated process, and the log file remain associated with the non-jailed environment. As a result, when a process that is associated with a particular jailed environment sends a log message, the log message is sent to the designated message stream in the non-jailed environment. Unfortunately, other processes that are associated with the particular jailed environment are unable to read from the designated message stream, because the designated message stream is not associated with the particular jailed environment. Additionally, users that are associated with the particular jailed environment are unable to view the log file because the log file is not associated with the particular jailed environment.
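The problem described above is that log messages from processes in a jailed environment end up on a stream that only the non-jailed environment can read. One way to picture a remedy is a dispatcher that routes each message to a log file inside the originating environment. The Python sketch below only illustrates that idea; it is not the mechanism this text goes on to claim, and the paths and the jail-id lookup are assumptions:

# Conceptual sketch: route each log message to a per-environment log file
# so that users and processes in a jailed environment can read their own logs.
# Paths and the jail-id lookup are illustrative assumptions.
import os

def jail_id_of(pid):
    # Placeholder: a real system would ask the kernel which environment
    # (jailed or non-jailed) the process with this pid belongs to.
    return os.environ.get('JAIL_ID', 'non-jailed')

def log_path_for(jail_id):
    if jail_id == 'non-jailed':
        return '/var/log/messages'
    return f'/jails/{jail_id}/var/log/messages'

def write_log(pid, message):
    # Append the message to the log file of the originating environment.
    with open(log_path_for(jail_id_of(pid)), 'a') as fh:
        fh.write(message + '\n')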
/**
 * Package containing the ValueObjects.
 *
 * @author A-pZ
 *
 */
package lumi.vo;
Ludwig Klages and the philosophy of life: a vitalist toolkit index for each volume. In addition to the original scholarly apparatus, short introductions by Philip Schofield trace the editorial history of the letters. Volumes I to V follow the intellectual and social development of the precocious son of an upcoming London attorney and cast light on a formative period in the philosophers life. His education at Westminster School and his years at Oxford are well documented, as is his own intellectual development through his readings and his interest in the sciences (especially mathematics, physics and biology). His private life, from his strained but close relationship with his father to his affection for his brother Samuel and his London friendships, will also be of interest to social historians. Benthams travels through England, France, and Central Europe (during his 17851787 visit to Samuel in Russia) can be followed through his regular letters to family, friends and patrons. The letters also allow readers to follow Benthams gradual involvement in British politics: his part in the controversy on American independence around 1776, his involvement with Lord Shelburne in the early 1780s, his interest in the French revolution (which made him a French citizen in 1792) and his campaign to promote the Panopticon prison to British MPs in the late 1790s.
Peripheralization, Ejidos and Agricultural Livelihoods in Intermediate Mexican Cities: The Importance of Collective Agency to Reduce Vulnerabilities This paper focuses on the interactions between peripheralization, vulnerabilities of agricultural livelihoods, and local collective agency in the creation of new capabilities in intermediate cities. It discusses the theoretical implications of a study conducted in the municipality of Tarmbaro, part of the intermediate city of Morelia, Mexico; it expands on results already published in preliminary form. The unit of analysis was the ejido, since this type of social land tenure, granted to landless peasants in 1917 after the Mexican Revolution, is one of the most important forms of social organization in rural Mexico. About one-half of the Mexican territory comprises >30,000 community-based land tenures (mainly ejidos), and a high proportion of the land now occupied by urban centers was ejido land. This paper uses the example of 15 ejidos, notably affected by the expansion of Morelia city, to illustrate how local (rural) organizations can foster collective agency to reduce differential vulnerabilities in peri-urban agricultural livelihoods in intermediate cities. In 2015, a semi-structured interview was undertaken with the president of each ejido, followed by a survey of 61 individuals from 11 of the 15 ejidos. The peripheralization of Morelia has produced inequalities in the adjacent municipality of Tarmbaro. Differential vulnerabilities in peri-urban agricultural livelihoods were found in the participant ejidos. Not all the ejidos have been successful in addressing vulnerabilities associated with urbanization of agricultural land, but those who have achieved some success have certain characteristics that reinforce common values and motivations to establish common goals to sustain local livelihoods. This paper highlights the importance of functional (rural) organizations in regulating access to, and distribution of, resources in the peripheries of intermediate cities.
/** * This header is generated by class-dump-z 0.2a. * class-dump-z is Copyright (C) 2009 by KennyTM~, licensed under GPLv3. * * Source: /System/Library/Frameworks/MediaPlayer.framework/MediaPlayer */ #import "MPWildcatVideoOverlay.h" #import "MediaPlayer-Structs.h" @class MPInlineTransportControls; @interface MPInlineVideoOverlay : MPWildcatVideoOverlay { @private MPInlineTransportControls* _transportControls; } -(id)initWithFrame:(CGRect)frame; -(void)dealloc; -(void)layoutSubviews; -(void)setItem:(id)item; -(void)setDesiredParts:(unsigned)parts animate:(BOOL)animate; -(void)setVisibleParts:(unsigned)parts animate:(BOOL)animate; -(void)setDisabledParts:(unsigned)parts; -(void)_availableRoutesDidChangeNotification:(id)_availableRoutes; -(unsigned)_convertedPartsMask:(unsigned)mask; @end
package org.netlib.blas;

// DASUM takes the sum of the absolute values.
public final class Dasum {

    public static double dasum(int n, double[] dx, int _dx_offset, int incx) {
        double dasum;
        label0: {
            if (n <= 0 || incx <= 0) {
                return 0.0;
            }
            dasum = 0.0;
            int k = 0;

            // code for increment not equal to 1
            if (incx != 1) {
                int nincx = n * incx;
                k = 1;
                for (int i = (nincx - 1 + incx) / incx; i > 0; i--) {
                    dasum += Math.abs(dx[k - 1 + _dx_offset]);
                    k += incx;
                }
                return dasum;
            }

            // code for increment equal to 1
            int m = n % 6;
            if (m != 0) {
                k = 1;
                for (int i = m; i > 0; i--) {
                    dasum += Math.abs(dx[k - 1 + _dx_offset]);
                    k++;
                }
                if (n < 6) {
                    break label0;
                }
            }
            int mp1 = m + 1;
            k = mp1;
            for (int i = (n - mp1 + 6) / 6; i > 0; i--) {
                dasum = dasum + Math.abs(dx[k - 1 + _dx_offset])
                        + Math.abs(dx[k + _dx_offset])
                        + Math.abs(dx[k + 1 + _dx_offset])
                        + Math.abs(dx[k + 2 + _dx_offset])
                        + Math.abs(dx[k + 3 + _dx_offset])
                        + Math.abs(dx[k + 4 + _dx_offset]);
                k += 6;
            }
        }
        return dasum;
    }
}
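For reference, dasum returns the sum of absolute values over n elements of dx, starting at _dx_offset and stepping by incx; the six-way unrolled loop above is only an optimization. A short Python cross-check of that semantics (not part of the library):

# Python cross-check of the BLAS DASUM semantics implemented above.
def dasum(n, dx, dx_offset=0, incx=1):
    if n <= 0 or incx <= 0:
        return 0.0
    return sum(abs(dx[dx_offset + i * incx]) for i in range(n))

print(dasum(3, [1.0, -2.0, 3.0]))          # 6.0
print(dasum(2, [1.0, -2.0, 3.0], incx=2))  # |1.0| + |3.0| = 4.0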
A large choroid plexus papilloma removed by the cerebellomedullary fissure approach. Case report and review of the literature. We report a case of large choroid plexus papilloma of the fourth ventricle in a 23-year-old woman. She presented with severe headache, dysphagia, and gait disturbances. Horizontal nystagmus, ataxic gait and quadriparesis were detected on initial examination. Imaging studies showed a large mass in the left side of brain stem and a marked hydrocephalus. The tumour was removed by microsurgical dissection of the cerebellomedullary fissure. We have discussed the effectiveness of this approach for removal of bulky tumors of the fourth ventricle and reviewed the literature about its benefits and potential hazards.
class RubyUtils: @staticmethod def unicode_aware_len(string): # Any non-ASCII character takes up 2 spaces instead of one. length = 0 for c in string: if ord(c) > 128: length += 2 else: length += 1 return length @classmethod def noruby_len(cls, line): # Get the length of a line as if it did not contain any ruby text try: return cls.unicode_aware_len(cls.remove_ruby_text(line)) except AssertionError as e: # There are non-conformant lines in the current script. # Fail gracefully print(e) return cls.unicode_aware_len(line) @staticmethod def ruby_aware_split_words(line): # Split a line into words, but consider ruby groups to be a # single word. ret = [] acc = "" processing_ruby = False for c in line: # Begin ruby group? if c == '<': assert not processing_ruby, \ f"Encountered repeated ruby-start in line '{line}'" processing_ruby = True # End ruby group? if c == '>': assert processing_ruby, \ f"Encountered ruby-end without ruby-start in line '{line}'" processing_ruby = False # If we see a space and are _not_ inside a ruby group, copy the # accumulator to the output list and zero it out if (c == ' ' or c == '\n') and not processing_ruby: ret.append(acc) # Preserve line breaks if c == '\n': ret.append("\n") acc = "" continue # If this is not a space, or is a space but we are inside a # ruby group, append to current word accumulator. acc = acc + c # If the accumulator is non-empty, append to the return vector if acc: ret.append(acc) return ret @staticmethod def remove_ruby_text(line): # Ruby text consists of <bottom|top> text. # This function strips formatting characters and top text to get only # the baseline-level characters in a sentence. ret = "" processing_ruby = False seen_midline = False # Iterate each character in the line for c in line: # Is this the start of a ruby? if c == '<': # Sequential starts are likely an error in the input assert not processing_ruby, \ "Repeated ruby-start encountered in line '{line}'" processing_ruby = True seen_midline = False continue # Is this a ruby midline? if c == '|': assert processing_ruby, \ f"Found ruby-delimiter in non-ruby text for line '{line}'" seen_midline = True continue # Is this a ruby end? if c == '>': assert processing_ruby, \ f"Found ruby-end outside ruby context for line '{line}'" assert seen_midline, \ f"Found ruby-end without ruby-delimiter for line '{line}'" processing_ruby = False seen_midline = True continue # If this is a normal character, then append it to the output IFF # - We are outside a ruby context _or_ # - We are indisde a ruby context but are before the midline if not processing_ruby or not seen_midline: ret = ret + c return ret @staticmethod def apply_control_codes(text): # Convert any custom control codes into the appropriate # characters/control modes. # # %{n}: Force newline # %{s}: Force space # %{i}/%{/i}: Begin/end italics # %{r}/%{/r}: Begin/end reverse # %{ri}/%{/ri}: Begin/end reverse italics PUA_OFFSET = 0xE000 processed_line = "" has_pct = False # Did we see a % that might open a cc in_cc = False # Are we inside a control code segment cc_acc = "" glyph_offset = None for c in text: # Handle control mode entry if c == '%': has_pct = True continue if has_pct and c == '{': in_cc = True has_pct = False cc_acc = "" continue # If we hit the end of a control code, see what the command was if in_cc and c == '}': in_cc = False # What was the acc? 
if cc_acc == 'n': # Forced newline processed_line += "\n" elif cc_acc == 's': # Forced space processed_line += " " elif cc_acc == 'i': # Offset ascii glyphs into the italic text region glyph_offset = PUA_OFFSET + 128 * 0 elif cc_acc == 'r': # Offset ascii glyphs into the reverso text region glyph_offset = PUA_OFFSET + 128 * 1 elif cc_acc == 'ri': # Offset ascii glyphs into the reversed italics text region glyph_offset = PUA_OFFSET + 128 * 2 elif cc_acc == '/i' or cc_acc == "/r" or cc_acc == "/ri": glyph_offset = None else: assert False, \ f"Unhandled control code '{cc_acc}' in line '{text}'" continue # Non-control mode: just append character to output buffer if not in_cc: # If we have a glyph offset and this is an ASCII char, # map it to the right font region if glyph_offset and ord(c) < 128: processed_line += chr(ord(c) + glyph_offset) else: processed_line += c else: # CC mode: accumulate cc chars until cc end cc_acc += c return processed_line @classmethod def linebreak_text(cls, line, max_linelen, start_cursor_pos=0): # If the line is already shorter than the desired length, just return if cls.noruby_len(line) + start_cursor_pos < max_linelen: return(line) # Split the line into a list of words, where ruby groups count # as a single word splitLine = cls.ruby_aware_split_words(line) # If the length of the longest element in the line is larger than our # allotted limit, we can't break this line if max_linelen < max([cls.noruby_len(elem) for elem in splitLine]): return(line) # Actually break up the line broken_lines = [] acc = "" first_word = True for word in splitLine: # If adding the next word would overflow, break the line. len_if_added = cls.noruby_len(acc + ' ' + word) + start_cursor_pos if len_if_added > max_linelen: broken_lines.append(acc) # If we line break _right_ at 55 chars, and the next char is # a _forced_ linebreak, we'd end up double-breaking. if word == "\n": acc = "" first_word = True else: acc = word start_cursor_pos = 0 continue # If we run into a raw \n, that directly breaks the line if word == '\n': broken_lines.append(acc) acc = "" start_cursor_pos = 0 first_word = True continue # If we did't just break, then append this word to the line acc = acc + ' ' + word if not first_word else word first_word = False # If there is a trailing accumulator, append it now. # If the final character in the string was a newline, the accumulator # will be empty but still meaningful, so keep it. if acc or splitLine[-1] == '\n': broken_lines.append(acc) # Join our line fragments back together with \n ret = '\n'.join(broken_lines) return ret
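A short usage sketch for the RubyUtils helpers above; the example line is invented for illustration, and the expected behaviour noted in the comments follows from the methods as written:

# Usage sketch for RubyUtils; the input string is made up for illustration.
line = "She read the <漢字|kanji> aloud."

print(RubyUtils.remove_ruby_text(line))        # "She read the 漢字 aloud."
print(RubyUtils.noruby_len(line))              # non-ASCII characters count as two cells
print(RubyUtils.ruby_aware_split_words(line))  # the ruby group stays a single "word"
print(RubyUtils.linebreak_text(line, max_linelen=12))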
<reponame>lucaju/junochatbot import AddCircleOutlineIcon from '@mui/icons-material/AddCircleOutline'; import { Box, IconButton, Typography, useMediaQuery, useTheme } from '@mui/material'; import { useActions, useAppState } from '@src/overmind'; import React, { FC, useEffect, useState } from 'react'; import { useTranslation } from 'react-i18next'; import Collection from './Collection'; import Details from './details'; const GroupsView: FC = () => { const { users } = useAppState(); const actions = useActions(); const { t } = useTranslation(); const [isLoading, setIsLoading] = useState(true); const [detailsOpen, setDetailsOpen] = useState(false); const [currentGroupId, setCurrentGroupId] = useState<number | undefined>(); const theme = useTheme(); const isMobile = useMediaQuery(theme.breakpoints.down('sm')); useEffect(() => { const getCollection = async () => setTimeout(fetchGroups, 1000); getCollection(); }, []); const fetchGroups = async () => { setIsLoading(true); if (users.groups.length === 0) actions.users.getGroups(); setIsLoading(false); }; const handleDetailOpen = (groupId?: number) => { setCurrentGroupId(groupId); setDetailsOpen(true); }; const handleDetailClose = () => { setCurrentGroupId(undefined); setDetailsOpen(false); }; return ( <Box sx={{ mt: 2.5, pl: isMobile ? 0 : 1.5, pb: isMobile ? 1.5 : 0, borderStyle: 'solid', borderColor: ({ palette }) => palette.action.disabledBackground, borderTopWidth: 0, borderBottomWidth: isMobile ? 1 : 0, borderLeftWidth: isMobile ? 0 : 1, borderRightWidth: 0, }} > <Box display="flex" flexDirection="row" alignItems="center" columnGap={1}> <Typography sx={{ textTransform: 'capitalize' }} variant="h6"> {t('groups:groups')} </Typography> <IconButton color="primary" onClick={() => handleDetailOpen()} size="small"> <AddCircleOutlineIcon fontSize="inherit" /> </IconButton> </Box> <Details groupId={currentGroupId} handleClose={handleDetailClose} open={detailsOpen} /> <Box maxHeight={'calc(100vh - 154px)'} mt={3} sx={{ overflowY: 'scroll' }}> <Collection isLoading={isLoading} handleDetailOpen={handleDetailOpen} /> </Box> </Box> ); }; export default GroupsView;
import * as prettier from "prettier";

export default function prettierFormat(code: string, rootDir: string) {
  const prettierConfig = prettier.resolveConfig.sync(rootDir);

  return prettier.format(code, {
    ...prettierConfig,
    parser: "typescript"
  });
}
/** * DOC mhirt class global comment. Detailled comment <br/> * * $Id$ * */ public class LicenseManagement { // LICENSE_VALIDATION_DONE = 1 : registration OK private static final double LICENSE_VALIDATION_DONE = 2; public static void acceptLicense() throws BusinessException { PlatformUI.getPreferenceStore().setValue("LICENSE_VALIDATION_DONE", 1); //$NON-NLS-1$ } /** * DOC mhirt Comment method "isLicenseValidated". * @return */ public static boolean isLicenseValidated() { initPreferenceStore(); IPreferenceStore prefStore = PlatformUI.getPreferenceStore(); if (prefStore.getInt("LICENSE_VALIDATION_DONE") != 1) { //$NON-NLS-1$ return false; } return true; } /** * DOC mhirt Comment method "init". * @return */ private static void initPreferenceStore() { IPreferenceStore prefStore = PlatformUI.getPreferenceStore(); if (prefStore.getDefaultInt("LICENSE_VALIDATION_DONE") == 0) { //$NON-NLS-1$ prefStore.setDefault("LICENSE_VALIDATION_DONE", LICENSE_VALIDATION_DONE); //$NON-NLS-1$ } } }
import type { Object3D } from 'three'
import type { WorkerCollideEvent, WorkerRayhitEvent } from './Provider'
import type { AtomicProps } from './hooks'
import React, { Suspense, createContext, lazy } from 'react'
import { ProviderProps } from './Provider'

export * from './hooks'

export type Buffers = { positions: Float32Array; quaternions: Float32Array }
export type Refs = { [uuid: string]: Object3D }
export type Event =
  | (Omit<WorkerRayhitEvent['data'], 'body'> & { body: Object3D | null })
  | (Omit<WorkerCollideEvent['data'], 'body' | 'target'> & { body: Object3D; target: Object3D })
export type Events = { [uuid: string]: (e: Event) => void }
export type Subscriptions = {
  [id: string]: (value: AtomicProps[keyof AtomicProps] | number[]) => void
}

export type ProviderContext = {
  worker: Worker
  bodies: React.MutableRefObject<{ [uuid: string]: number }>
  buffers: Buffers
  refs: Refs
  events: Events
  subscriptions: Subscriptions
}

const context = createContext<ProviderContext>({} as ProviderContext)

const Provider = typeof window === 'undefined' ? () => null : lazy(() => import('./Provider'))

function Physics(props: ProviderProps) {
  return (
    <Suspense fallback={null}>
      <Provider {...props} />
    </Suspense>
  )
}

export { Physics, context }
import requests

def send_file(file_binary_data):
    # Upload the binary payload to the file server (FileSender.file_server is
    # defined elsewhere) and return the first value of the JSON response.
    files = {'file': file_binary_data}
    r = requests.post(FileSender.file_server, files=files)
    json_data = r.json()
    return next(iter(json_data.values()))  # dict values are not indexable in Python 3
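A minimal usage sketch for send_file, assuming a FileSender holder with a file_server URL; both the class shape and the URL are assumptions here, and a server must be listening for the call to succeed:

# Hypothetical endpoint holder; the URL is an assumption.
class FileSender:
    file_server = 'http://localhost:8000/upload'

with open('report.pdf', 'rb') as fh:
    result = send_file(fh.read())
print(result)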
It's a sunny morning at Musical.ly's Santa Monica, Calif. offices and dozens of so-called young "musers"--popular users on the music video sharing app--have gathered to do business: meet, take pictures and record the 15 second lip syncing clips for which they are Internet famous. Teenage girls with faces perfectly painted per the latest Instagram makeup trends snap selfies with younger tweens. Hip-hop blares in the background as a trio of young multi-hyphenates jostle for space in front of an iPhone where they gesticulate, mouth along to lyrics and pose. They are just a handful of the millions of U.S. teens to download Musical.ly since its 2014 inception. The app has grown to more than 200 million registered users, most between 13 and 21 years old, according to the company. But as a potential magic bullet for media organizations desperate to reach mobile-first teens, its business is only just coming of age. Started by Chinese entrepreneurs Alex Zhu and Luyu Yang as an education social network, the pair quickly found instructional videos were not hot. What was: clips that combined music and social media. They swiftly relaunched as a music video service and slowly started to sign up users. "Every Thursday evening, there was a spike in downloads," Zhu told FORBES last year. "We found out that Spike TV was airing Lip Sync Battle on Thursday evenings and after the show people used 'lip sync' to search on the app store, and found us." The company doubled down on lip syncing with design tweaks that made the feature more prominent and quickly rose to the top of the app store. Musical.ly users create short looping videos that are stamped with the Musical.ly logo and shared in the app and across social media sites including Facebook, Instagram and Twitter. Though the company declined to disclose active monthly users, it said over 13 million videos are uploaded daily. The number of registered users has more than tripled in the last year. Venture capitalists have taken note. The company closed an investment round of some $100 million last year led by GGV Capital, sources told FORBES. That series valued Musical.ly, which is registered in the Cayman Islands, at some $500 million, and brought its total funding to more than $116 million, FORBES estimates. Musical.ly declined to comment. The growth of the Shanghai-headquartered business has been juiced by the shuttering of fellow short-form video app Vine, which announced its closure in October 2016 and was discontinued three months later. The Twitter-owned app let users record looping six second videos--two and a half times shorter than Musical.ly's 15 second loops. It had grown to 200 million active monthly users by 2015, but stagnated facing competition from Facebook's Instagram before its publicly-traded parent killed it off. "People stopped doing just lip syncing videos--they were using it to make short form video that looked like Vine," said Andrew Graham, an agent in CAA's digital and packaging division who represents Musical.ly's second-most-followed act, Baby Ariel. "Musical.ly was perfectly positioned to capitalize on a market that didn’t have an outlet anymore." The app is now producing stars of its own. With over 19 million Musical.ly fans, 16-year-old Ariel Martin, known as Baby Ariel, gained popularity on the app with her lip sync videos of hip-hop and pop songs and short comedic skits. In the clips, she mouths lyrics and gesticulates with attitude in a young, white simulacrum of Nicki Minaj. 
She parlayed those fans into followings on Instagram, YouTube, Twitter and Facebook and is now moving into traditional media by auditioning for movies. "I started to research different social media influencers through YouTube, Snapchat, Instagram to see how people turn social media into a career," said Martin, who maintains a consistent posting schedule of Musical.ly videos, YouTube vlogs, Instagram pictures and Snapchat clips. Being a so-called social media influencer may seem an unlikely job. But top digital personalities can earn up to $300,000 per sponsored post on Instagram or Twitter, while the most-followed YouTubers can make millions from advertising on their videos. Unlike Vine, which failed to develop clear monetization routes for its users, since December 2016 followers have been able to pay Musical.ly acts directly through the app's virtual gift program. Users spend real money to buy coins which they tip acts with (100 coins for $0.99). The act receives 50% of every dollar in payment, while 30% goes to Apple and Google's standard platform fees and 20% to Musical.ly. The company said this is not a revenue stream, however, and that its share is spent covering costs associated with the money transfer. In addition to virtual tipping, some highly-followed acts can also make money by posting promotional videos for other companies on Musical.ly. This has the blessing of Zhu. "We definitely see monetization as a super important topic... to build an ecosystem to make sure that those top influencers have financial incentives to stay," Zhu told FORBES in 2016. Such incentives should encourage its most influential young acts to keep posting. Despite the fickle tastes of teens, Hofmann insists its users are spending more time in the app than ever before. Alex Zhu cofounded Musical.ly in 2014. Musical.ly itself is not currently generating sizeable revenue, but has been experimenting with in-app brand, movie and music advertising campaigns in the last year. Its huge, young audience makes it an appealing marketing tool for advertisers seeking youngsters who don't watch TV or listen to the radio. Its first brand deal with Coca Cola in June 2015 invited Musical.ly users to post videos of themselves drinking the soda with the hashtag #ShareACoke. One million clips were published just twelve days into the campaign, Musical.ly said. Jacob Sartorius, one of Musical.ly's most followed stars, has launched a singing career from the platform. Music is a more natural fit for advertising in the app. By nature, it helps youngsters find new tracks, which they can stream or buy elsewhere, because each video lists the song playing in the background. When stars ask users to post, it can be potent. Last year a promotion of Selena Gomez’s "Kill ‘Em With Kindness" rendered 1.3 million Musical.ly clips and 34.6 million likes, reportedly boosting the song’s performance, according to her label Interscope Geffen A&M. In exchange, Musical.ly has been able to readily secure licensing deals with all major labels and publishers to use their songs in videos on its app. It recently partnered with Apple Music, so paying subscribers of the streaming service are now able to play full songs from Apple’s catalog within the Musical.ly app. For now, the plan is to grow--and age up. The company hopes to attract older audiences by expanding further into comedy, fashion and sport videos, some of which already exist on the app. 
To do so, Musical.ly is signing up other media companies, including networks such as Hearst, Viacom, Disney and NBCUniversal, to produce content on the platform. It has also expanded beyond its initial product into a separate popular live streaming app, Live.ly, last year, in addition to a group video chat Squad and a newly-released video messaging app, Ping Pong. The company hopes such diversification will save it from the fate of its competitors. Early lip-syncing app Dubsmash spread quickly, but has seen usage drop significantly since its 2014 launch, while Los Angeles-based Flipagram reportedly ran into financial problems and sold to Chinese news aggregator Toutiao in January for an undisclosed sum. So-called "musers"--users of the app--mingle at the Musical.ly offices in Santa Monica, Calif. For Musical.ly, an acquisition may be on the cards. Sources tell FORBES both Apple and Disney are thought to have expressed interest in the company; Musical.ly would not comment on potential deals. "A lot of people asked me in 2015 if I think that Musical.ly's going to be a fad," said Hofmann. "[But] we are just getting the car out of the garage." Buckle up--time will tell whether its users stick around. Additional reporting by Ryan Mac.
A mazel tov is in order for Prime Minister Binyamin Netanyahu, who now has his first granddaughter, after his daughter previously gave birth to two boys. Mrs. Noa Roth, 36, is the prime minister's daughter from his first marriage, to Dr. Miki Haran. She became frum years ago and married Chabad businessman Daniel Roth about ten years ago; they live in Meah Shearim. Mr. Netanyahu released a message: "Sara and I are very happy that my daughter Noa had a girl, a sister to Shmuel and David and a niece to Yair and Avner. Mother and daughter are well and I am happy for our expanded family." According to OnlySimchas.com, it is rumored that Noa and her father do not have a very close relationship after her parents divorced when she was only three years old. (YWN – Israel Desk, Jerusalem)
The invention relates to a universal and generic system for environmental stress screening (ESS) of electronic devices such as printed wiring board assemblies. The devices are placed in a chamber and are temperature cycled between high and low limits. The temperature cycling stresses the electronic devices and serves to identify weak and defective mechanical joints, such as solder joints, before the components are shipped to the customer. It should be noted that environmental stress screening (ESS) is not a test of electronic devices but a stressing of electronic devices to weed out weak and defective components. Testing of the devices must take place both before and after environmental stress screening (ESS) to identify those devices that were forced to failure. In prior systems, stress screening was performed in chambers that limited the quantity of devices or the transition rate between temperature extremes. The need for greater reliability and quality has increased, and it has become common for stress screening to be performed in a more efficient and cost-effective manner. As a result, a chamber for production-oriented environmental stress screening (ESS) that can stress devices as part of production-line processes is required. Large chambers intended for high-production manufacturers, such as the Environmental Stress Screening Apparatus of Keel et al., U.S. Pat. No. 4,812,750, have been developed, but they are not adaptable or economically feasible for small to medium volume manufacturers. Small specialized chambers have also been designed to stress screen semiconductor devices, such as the Thermal Stress Screening System of Lesley et al., U.S. Pat. No. 4,854,726. A need for small, highly efficient, affordable, and convertible ESS chambers exists for those manufacturers whose volume of business does not justify a large capital outlay but whose market requires high-quality goods. This invention provides a means for the small to medium volume manufacturer to stress all electronic devices at a justifiable cost and allows for relatively simple reconfiguration to accommodate a variety of devices.
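As a rough illustration of the temperature-cycling process described above, the sketch below drives a hypothetical chamber controller between a hot and a cold setpoint. The limits, dwell time, cycle count, and the ChamberController API are all assumptions made for the sketch; none of them come from the patent.

import time

# Illustrative ESS profile: limits, dwell time, and cycle count are assumptions.
HIGH_C, LOW_C = 85.0, -40.0
DWELL_SECONDS = 15 * 60
CYCLES = 10


class ChamberController:
    """Stand-in for a real chamber interface (hypothetical API)."""

    def set_temperature(self, celsius):
        print(f"setpoint -> {celsius} C")

    def wait_until_stable(self):
        pass  # a real controller would poll the chamber sensor here


def run_ess_profile(chamber, cycles=CYCLES):
    # Cycle the load between the hot and cold extremes. ESS itself is not a test:
    # functional testing must happen before and after this profile.
    for _ in range(cycles):
        for setpoint in (HIGH_C, LOW_C):
            chamber.set_temperature(setpoint)
            chamber.wait_until_stable()
            time.sleep(DWELL_SECONDS)  # soak at the extreme


if __name__ == "__main__":
    run_ess_profile(ChamberController(), cycles=1)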
Resonant fluxon transmission through impurities Fluxon transmission through several impurities of different strength and type (i.e., microshorts and microresistors), placed in a long Josephson junction is investigated. Threshold pinning current on the impurities is computed as a function of the distance between them, their amplitudes and the dissipation parameter. It is shown that in the case of consequently placed microshorts or microresistors, the threshold pinning current exhibits a clear minimum as a function of the distance between the impurities. In the case of a microresistor, followed by a microshort, an opposite phenomenon is observed, namely the threshold pinning current exhibits maximum as a function of the distance between the impurities. Introduction and the background The dynamics of magnetic flux propagation in a long Josephson junction (LJJ) is a subject of increasing theoretical and practical interest. Magnetic flux quantum in a LJJ is a soliton (also known as fluxon) governed by the well-known sine-Gordon (SG) equation. A convenient way to prepare a junction with the required properties is to install various inhomogeneities into it. Up to now substantial work has been devoted to the study of the fluxon motion in the LJJs with point-like impurities. The interaction of a fluxon with a single impurity became a textbook example. On the other hand, the phenomenon of resonant tunneling of an electron through a double-well structure is well-known in quantum mechanics. A natural question arises: what is an analog of the quantum-mechanical resonant tunneling in the fluxon dynamics? Resonant soliton transmission has been investigated in detail for nondissipative systems and complex resonant behaviour has been reported. However, fluxon dynamics in a LJJ cannot be considered without taking into account dissipative effects, which are a consequence of the normal electron tunneling across the insulating barrier. As a result, transmission in a LJJ with constant bias and dissipation can yield only two scenarios: fluxon transmission or fluxon pinning on the impurities. And, consequently, the transmission ratio can attain only two values: zero or unity. Therefore the attention has to be turned toward other characteristic quantities, especially the minimal bias, necessary for the fluxon pinning on impurities. The present paper aims to investigate fluxon transmission through several (two or more) point-like impurities: microshorts, microresistors or a combination of both. Of particular interest is dependence of the threshold pinning current on the distance between the impurities and their amplitudes. The paper is organized as follows. In Section 2 we present the model and the basic equations of motion. In the next section we describe the methods of the analysis of the equations and motion and study the fluxon transmission through two microshorts, two microresistors and a microshort and a microresistor as a function of their amplitudes and distance between them. Discussion of the obtained results and final remarks are given in Sec. 4. The model We consider the long Josephson junction (LJJ) subjected to the external time-independent bias. The main dynamical variable is the difference between the phases 2 (x, t) − 1 (x, t) = (x, t) of the macroscopic wave functions of the the superconducting layers of the junction. The time evolution of the phase difference is governed by the perturbed sine-Gordon (SG) equation: n (x − a m ). 
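The displayed equation at the end of the passage above has not survived intact (only fragments such as "n (x − a m)" remain). A standard form of the perturbed sine-Gordon equation for a damped, dc-biased LJJ with N point impurities, consistent with the symbols used in the next paragraph (alpha for dissipation, gamma for the dc bias, epsilon_n and a_n for the impurity strengths and positions), is the reconstruction below; the exact sign conventions should be checked against the original paper:

\phi_{tt} - \phi_{xx} + \sin\phi \left[ 1 + \sum_{n=1}^{N} \varepsilon_n \, \delta(x - a_n) \right] = -\alpha\,\phi_t + \gamma ,

with epsilon_n > 0 corresponding to a microshort and epsilon_n < 0 to a microresistor.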
In this dimensionless equation spacial variable x is normalized to the Josephson penetration depth J, the temporal variable t is normalized to the inverse Josephson plasma frequency −1 J. Here the bias current is normalized to the critical Josephson current of the junction and the dimensionless parameter describes dissipation. It is supposed that there are N impurities in this junction, positioned at the points x = a n, n = 1, 2,..., N, a 1 ≡ 0 < a 2 <... < a m, with n being "strength" or amplitude of the nth impurity. The impurity is a microshort if n > 0 and a microresistor if n < 0. Fluxon transmission A standard tool for analyzing the fluxon dynamics in Josephson junctions is the McLaughlin-Scott perturbation theory. Also, direct numerical integration 1 of the perturbed SG equation will be performed to check the validity of the analytical approximation. We are going to solve the problem for the idealized case of an infinite junction with free ends boundary conditions, however, in actual simulation a sample with length that significantly exceeds the fluxon size will be used. Perturbation theory and collective coordinates Using the perturbation theory, one obtains in the first order the evolution equations for the fluxon parameters, i.e., its center of mass X and fluxon velocity v: For the sake of simplicity in the following only equidistant impurities will be considered, i.e., a n ≡ a, n = 1, 2,..., N. Also, only positive values of bias will be considered. The case of one impurity (N = 1, 1 ≡ ) has been discussed in detail in. There exist two characteristic values of the bias current, c ≡ 4 √ 3/(9) and thr, c > thr. If > c, the pinning on the impurity is not possible and only one attractor that corresponds to fluxon propagation does exist. In the interval thr < < c two attractors exist: one corresponds to fluxon pinning on the microshort and another one to fluxon propagation. If < thr, the only possible regime is fluxon pinning on the impurity. It has been shown that there exists a threshold value of the dc bias, which can be approximated as 1 In the numerical simulations, the space will be discretized as x → nh, so that the continuous variable (x, t) ≃ (nh, t) becomes a discrete set of variables n(t), and the second space derivative becomes xx(x, t) ≃ /h 2. The resulting set of the second order ODEs on n(t) will be solved using the 4th order Runge-Kutta scheme. The delta function is approximated as (x) ≃ n,0/h where m,n is Kronecker's symbol. In the case of strongly separated impurities (a ≫ 1), the potential U (X) has 2N extrema where each pair (a minimum and a maximum) is associated with a certain impurity. If there is an impurity at X = a(k − 1), the minimum at X = X 2k−1 always comes before the maximum at X = X 2k. Microshorts are repelling impurities, thus the fluxon that arrives from X = −∞ decelerates when approaching it and accelerates after passing the impurity until the fluxon velocity reaches the equilibrium value Microresitors are attractive impurities, and, as a result, the fluxon accelerates before approaching the microresistor and slows down to the equilibrium velocity after being released from it. Decrease of the distances between the impurities a causes disappearance of some of the extrema via inverse pitchfork bifurcations. The systematic phase plane analysis of equations - for the case of two microshorts has been performed in for the SG equation and in for the double SG equation. 
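The footnote above describes the numerical scheme only in words, and its finite-difference stencil has not survived intact here. Below is a minimal Python sketch of such a scheme: a three-point stencil for phi_xx, the Kronecker-delta/h approximation of delta(x), and 4th-order Runge-Kutta in time. All parameter values, the impurity positions and strengths, the initial kink profile, and the free-ends boundary treatment are illustrative assumptions; this is not the authors' code, and the signs follow the reconstructed equation given earlier.

import numpy as np

# Illustrative parameters only -- not the paper's values.
h = 0.025                            # space step (in units of lambda_J)
L = 40.0                             # junction length, L >> 1 to mimic an infinite LJJ
x = np.arange(-L / 2, L / 2 + h, h)
alpha, gamma = 0.1, 0.3              # dissipation and dc bias (assumed)
impurities = {0.0: 0.4, 2.0: 0.6}    # position a_n -> strength eps_n (assumed)

# delta(x - a_n) is approximated by a Kronecker delta divided by h, as in the footnote.
eps = np.zeros_like(x)
for a, strength in impurities.items():
    eps[np.argmin(np.abs(x - a))] = strength / h


def rhs(phi, v):
    """phi_t = v;  v_t = phi_xx - (1 + eps) sin(phi) - alpha v + gamma."""
    lap = np.empty_like(phi)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
    lap[0] = 2 * (phi[1] - phi[0]) / h**2        # free ends: phi_x = 0
    lap[-1] = 2 * (phi[-2] - phi[-1]) / h**2
    return v, lap - (1.0 + eps) * np.sin(phi) - alpha * v + gamma


def rk4_step(phi, v, dt):
    """One 4th-order Runge-Kutta step for the coupled first-order system."""
    k1p, k1v = rhs(phi, v)
    k2p, k2v = rhs(phi + 0.5 * dt * k1p, v + 0.5 * dt * k1v)
    k3p, k3v = rhs(phi + 0.5 * dt * k2p, v + 0.5 * dt * k2v)
    k4p, k4v = rhs(phi + dt * k3p, v + dt * k3v)
    phi += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return phi, v


# Initial condition: a single sine-Gordon kink far to the left, moving right.
X0, u = -10.0, 0.5
xi = (x - X0) / np.sqrt(1 - u**2)
phi = 4 * np.arctan(np.exp(xi))
v = -2 * u / (np.sqrt(1 - u**2) * np.cosh(xi))

for _ in range(20000):
    phi, v = rk4_step(phi, v, dt=0.01)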
In those papers the behaviour of the fixed points of the system - has been studied as a function of the distance between them, a. Our aim is to determine the threshold current thr = thr (a, { n } N 1 ; N ) as a function of the distance between impurities and their amplitudes. In the case of one impurity, thr (; 1) obviously does not depend on the distance a. It is described approximately by equations and for the microshort and microresistor, respectively. Some general statement can be made before one proceeds to specific cases. Two important limits should be mentioned. One case corresponds to impurities being separated by the distance much larger the fluxon size. Then the transmission will be governed by the fluxon interaction with each individual impurity. In the opposite limit (a → 0) the power of all impurities adds up. The effect of the both limits on the threshold current can be written as follows In the subsections below the transmission through impurities of different polarities (e.g, n < 0 and n > 0) will be considered. It appears that for N ≥ 2 the analytical treatment of equations - is virtually not possible even in the non-relativistic case, especially when none of the limits, described by the equation, hold. Therefore equations - are going to be solved numerically. Transmission through two microshorts Consider first the case of two microshorts (N = 2, 1,2 > 0). The problem is tackled in the following way. The fluxon approaches the system of two microshorts from X = −∞ with the equilibrium velocity v ∞, given by equation. Evolution of the system - on the phase plane (X, v) is shown in Figure 1. Depending on the strength of the bias, three scenarios are possible: trapping on the first microshort (curve 1); trapping on the second microshort, if the external bias is a bit larger (curve 2); or transmission (curve 3). If the microshorts are too close to each other, trapping on the second microshort does not happen (see references for details). We note that direct numerical simulations of the perturbed SG equation (curve 4 of Figure 1) are in good correspondence with the trajectories of the system. The oscillations after the collision with the microshort can be attributed to the fluxon radiation (not accounted by the first order perturbation theory) and errors in determination of the fluxon center. The systematic evaluation of the threshold current thr as a function of the distance a for different values of 1 and 2 is shown in Figure 2. The resonant nature of the dependence of thr on a for 1 < 2 can be observed clearly. While in the respective limiting cases it satisfies equation, a resonant value a = a r appears, at which the threshold current attains its minimal value. The explanation of the resonant transmission can be done on the following qualitative argument. The analysis of the phase portraits in Figure 1 shows that after being released from the microshort, the fluxon accelerates in order to regain its equilibrium velocity v ∞. This acceleration occurs in such a way that for some short interval the fluxon velocity exceeds the equilibrium value v ∞. Therefore the fluxon has kinetic energy, which is larger than it was while approaching the microshort from X = −∞, and consequently it has enough energy to pass the microshort with the amplitude larger than 1. Obviously, the best transmission would take place if a slightly exceeds |X 2 − X 1 |. 
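The equation reference for the equilibrium velocity v_infinity has been lost in this copy. In the McLaughlin-Scott perturbation theory the standard power-balance result, presumably the expression referred to here, is

v_\infty = \left[ 1 + \left( \frac{4\alpha}{\pi\gamma} \right)^{2} \right]^{-1/2} ,

the velocity at which the energy fed in by the bias gamma balances the dissipative losses governed by alpha. This is a reconstruction based on the standard theory, not a quotation from the paper itself.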
The estimation of the resonant distance a r can be made from the analysis of the fluxon dynamics in the non-relativistic limit, given by equations -. According to these equations the fluxon can be compared to the particle that slides down along the potential U (X) = U (X → ±∞) ∼ −2X. Depending on the value of, it can be trapped in one of the wells of this potential (shown in the inset of Figure 1). If the distance between microshorts is small enough, it can be considered as one microshort with the renormalized strength (a) = 1 + 2 / cosh 2 a. The trapping can occur at the only existing minimum X 1 (see curve 1 in the inset of Figure 1) as shown by the trajectory 1 of Figure 1. As a increases, the potential barriers separate and a local minimum X 3 appears, as shown by curves 2 − 4 in the inset of Figure 1. If a new minimum appears, the trapping can occur also at the second microshort, as shown by the trajectory 2 of Figure 1. Plots of the potential U (X) clearly demonstrate that the shape of the barrier will be optimal when the minimum at X = X 3 is quite shallow. Since the half-width of the function cosh −2 (X) is of the order of unity, it is expected that optimal separation of barriers occurs at a ∼ 2. Numerical evaluation of a r confirms this estimate: a r = 1.94 (for 1 = 0.4, 2 = 0.6); a r = 2.24 ( 1 = 0.3, 2 = 0.7) and a r = 2.36 (for 1 = 0.2, 2 = 0.8). If 1 ≥ 2, the transmission scenario is always determined by the first microshort and the trapping occurs only at X = X 1. Therefore the dependence thr on a is monotonically decreasing as shown in Figure 2 for 1 = 0.6, 2 = 0.4. It would be of interest to compare how the threshold pinning current depends on the dissipation parameter and the ratio of the microshort amplitudes 1 and 2. Since the resonant distance a r weakly depends on 1,2 and the pinning current depends strongly on the dissipation constant, it is convenient to normalize thr (a, 1, 2 ; 2) to the pinning current on the strongest microshort max . In Figure 3 the dependence of the enhancement factor (a, 1, 2 ; 2) = thr (a, 1, 2 ; 2) max , on the ratio 1 /( 1 + 2 ) for different values of dissipation is shown. The value of the distance between the microshorts has been fixed to a = 2. Increase of dissipation does not change much the resonant values of 1,2. However, the value of the enhancement factor at the minimum decreases significantly. In the inset of Figure 3 comparison of the fluxon slowing down on the microshorts is shown for different values of dissipation and dc bias. Note that the ratio / was kept constant in order to fix the equilibrium velocity v ∞. For stronger dissipation the fluxon slows down to smaller velocities (compare the black and blue curves that correspond to = 0.1 and = 0.3, respectively). Therefore after release from the microshort the fluxon can accelerate to greater values of velocity. As a result, it has more kinetic energy to pass the second microshort. In other words, for larger dissipation one needs larger bias,. Therefore, the tilt of the potential U (X) increases and it smears out the inhomogeneities, created by the impurities. It should be emphasized that the validity of the perturbation theory approach has been confirmed by the direct numerical integration of the original perturbed SG equation. In Figure 2, thr has been computed via integration of equation for 1 = 0.4 and 2 = 0.6. It is evident that the perturbation theory gives qualitatively the same result and the quantitative difference is not very large. 
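Two expressions in this passage have not survived intact. Read against the surrounding text, they appear to be (i) the renormalized strength of two closely spaced microshorts and (ii) the definition of the enhancement factor; the reconstructions below are inferred from context and should be checked against the original:

\varepsilon(a) \simeq \varepsilon_1 + \frac{\varepsilon_2}{\cosh^{2} a} ,
\qquad
k(a, \varepsilon_1, \varepsilon_2; 2) = \frac{\gamma_{\mathrm{thr}}(a, \varepsilon_1, \varepsilon_2; 2)}{\gamma_{\mathrm{thr}}^{\max}} ,

where \gamma_{\mathrm{thr}}^{\max} is the threshold pinning current of the stronger of the two microshorts taken alone.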
Similarly, in Figure 3 the results of the numerical integration of equation with = 0.3 are given alongside with the perturbation theory results. A good qualitative and quantitative correspondence between these two types of results is clearly demonstrated. Therefore the usage of the approximation - is justified. Transmission through N > 2 microshorts Now we extend the results of the previous subsection on the case of more then N = 2 microshorts. In Figure 4 the dependence of thr on a for N = 2, 3, 4, 5 is presented. It clearly demonstrates that addition of an extra impurity to the left from the weakest one decreases further the minimum of the threshold current. The explanation can be easily seen with the help of the effective potential U (X) . Its shape changes significantly when extra microshorts are added. Comparing curves 1 and 2 in the inset of Figure 4 one can see that the energy barrier, which the fluxon should cross, lowers. Adding yet another microshort further lowers the barrier (see curves 3 and 4), so that in the interval 0 < X < (N − 1)a the potential barrier almost turns into the decaying slope which is less steep then −2X. Decrease of a leads to the gradual raising of this slope (compare curves 4 and 5) and consequently to increase of thr. Transmission through microresistors A microresistor is an attracting impurity, therefore the fluxon accelerates when approaching it and decelerates back to v = v ∞ after passing through or remains trapped if its velocity (and consequently the external bias current) is less than the threshold value. The effective potential U (X) for a microresistor corresponds to the potential well. If two different microresistors are added consequently, the fluxon can be trapped on the first or on the second one, or, if the bias is large enough, pass through. In Figure 5 the phase portraits for the system with N = 2 microresistors is shown. The change of the shape of U (X) for the different distances between the microresistors is shown in the inset of Figure 5. The computation of the threshold current thr shows that resonant fluxon transmission is possible if 1 < 2 and does not happen if 1 ≥ 2 (see Figure 6a). Explanation of this phenomenon is similar to the case of two microshorts. If the microresistors are located very close to each other, then their amplitudes add up and the fluxon interacts with the microresistor of the amplitude ≃ 1 + 2. When the impurities start to separate, the effective energy barrier which the fluxon should surmount, lowers (compare curve 1 with curves 2 and 3 in the inset Figure 6). The distance between the wells becomes optimal for the best fluxon transmission before they are completely separated (compare curves 3 and 4). In contrast to the transmission through two microshorts, the resonant value a = a r depends strongly on the amplitudes 1,2. Indeed, for 1 = −0.7, 2 = −0.3 one obtains a r ≃ 3.62 and for 1 = −0.9, 2 = −0.1 the resonant dis- tance equals a r ≃ 2.35. Comparing curves 2−4 in the inset of Figure 5, one can notice that the fluxon needs enough kinetic energy to overcome the second maximum, located at X = X 4. Obviously, if X 2 and X 4 are not enough separated, the fluxon will have no time to accelerate in order to avoid trapping on the second microresistor. Therefore the case of curve 3 is the most optimal one: the height of the barrier at X 2 is not too large, as compared to the curve 4 and the distance between X 2 and X 3 is enough to gain velocity, sufficient for the successful passage over the second barrier. 
These considerations, of course, correspond to the situation, when the impurities are not strongly separated. Otherwise only the interaction with the first one would matter. For the same reason the position of the minimal threshold current, a r, (see Figure 6b) increases with decrease of the damping parameter. Depth of the minimum decays with decrease of similarly to the case of two microshorts because with the stronger bias the fluxon can pass through the impurities much easier. Putting additional microresistors after 2, 0 > N > > 2 > 1 further lowers the critical pinning current similarly to the case of N > 2 microshorts, described in the previous Subsection. Transmission through a microshort and a microresistor Finally, we consider the case when two impurities of different polarity (a microshort and a microresistor) are placed one after another. If the microresistor is located before the microshort ( 1 < 0, 2 > 0) resonant enhancement of the threshold pinning current does not happen. In Figure 7 (panel a) the phase portraits for this case are shown. The microresistor is an attracting impurity after which the fluxon slows down. On contrary, the microshort is a repelling impurity and the fluxon slows down when approaching it. Therefore it is obvious that by placing impurities in such a way one increases thr as compared to the case of each individual impurity. The analysis of the effective potential U (X), shown in Figure 7 (panel c), further confirms above considerations. The height of the effective barrier, which the fluxon should overcome, can be greater than the height of the individual barriers, created by the individual impurities. If the impurities are very close their influences cancel each other and the fluxon interacts with the impurity of the strength −| 1 | + 2. The dependence of the threshold pinning current thr on the distance between the impurities is given in Figure 8 (panel a). For 1 = − 2 = −0.5 the microshort and microresistor cancel each other for a = 0, therefore the dependence starts at zero and increases until it reaches the maximal value and then decreases, tending monotonically to the value of thr that corresponds to one microshort with = 0.5. The dependence of thr on a shows an "antiresonant" behaviour because it has a maximum at some certain value of a. Analysis of the shape of U (X) from the Figure 7c predicts that the worst transmission would occur when the potential well and the barrier, created by the microresistor and the microshort, respectively, separate from each other far enough to create the highest total barrier (see the curve 3 of Figure 7c), but not too far (as for the curve 5 of the same figure) so that each impurity interacts individually with the fluxon. Consider now the case 1 > 0, 2 < 0. The phase portraits for the fluxon dynamics are shown in Figure 7 (panel b). The dependence of the threshold pinning current on the distance between the impurities is shown in Figure 8 (panel b). In the case 1 = − 2 = 0.5 at a = 0 the impurities cancel each other. When a increases, thr monotonically increases, tending to the threshold value of one isolated microshort with the amplitude = 0.5. In this case trapping occurs only on the microshort because analysis of equations - shows that thr (0.5; 1) > thr (−0.5; 1). If 1 > | 2 | the dependence of thr on a is also monotonic. At a ≃ 0 the fluxon "feels" both impurities as one microshort with 1 − | 2 |. 
When the impurities separate, the contribution of the microresistor to the total amplitude weakens and the threshold current gradually increases till the value thr ( 1 ; 1). If 1 decreases and | 2 | increases, behaviour of the critical pinning current on a becomes more complicated. Consider first the case 1 = 0.4, 2 = −0.6. In the neighbourhood of a = 0 the system can be considered as a microresistor with the amplitude 1 − | 2 |. When a increases, the well and the barrier, created by the microshort 1 start to separate, increasing the depth of the well (created by the microresistor). After some value of the distance a the dependence of thr = thr (a) experiences sharp breaking and thr starts to grow with a. The difference between trapping before this breaking point In other words, the breaking point signals the value of the separation of the impurities, before which the fluxon "feels" them as one microresistor and after which the fluxon "feels" them separately. Decrease of 1, and subsequent increase of | 2 | leads to the gradual shift of the breaking point to the right and smoothing of the shape of the dependence thr (a). Further decrease of 1 and increase of | 2 | makes the dependence thr (a) more and more flat, so that in the limit 1 → 1, 2 → 0 it tends to the horizontal line thr = thr (−1; 1). Conclusions We have investigated the fluxon transmission in a dcbiased long Josephson junction (LJJ) through two or more impurities of different polarity: microshorts and/or microresistors. We have observed that the threshold pinning current can depend on the distance between impurities in the resonant way for the case of two or more microshorts or two or more microresistors. That means that at some value of the distance the threshold current attains a minimal value, which is less than the threshold current of the strongest impurity. The resonant transmission does not occur if the fluxon interacts with two impurities of different sign: a microshort and a microresistor. The observed effect should not be confused with the resonant soliton transmission in the non-dissipative cases. In the case of fluxon dynamics in a long Josephson junction the presence of dissipation is unavoidable. Far away from the impurities fluxon exists as an only one attractor of the system with the velocity, predefined by the damping parameter and external bias. Therefore, contrary to the non-dissipative case, there is no sense in computing the transmission ratio, which in our case can take only two values: zero (trapping) and unity (transmission). Also it should not be confused with the fluxon tunneling as a quantum-mechanical object across the double-barrier potential, created by two identical microshorts. The discussed phenomenon can be observed experimentally in an annular LJJ via monitoring the currentvoltage (IV) characteristics. For a LJJ with one impurity the fluxon IV curve has a hysteresis-like nature with two critical values of the dc bias (discussed in Section 2). The lower one is the threshold pinning current, which is the smallest current for which fluxon can propagate. Although simulation in this paper have been performed for the infinite junction, there should be no principal differences with the case of an annular junction with sufficiently large length L ≫ J. Currently experiments are performed in the junctions L ∼ 10 J (see ) that can be considered as long. 
For future research in this direction, it would be of interest to find out how the resonant fluxon transmission changes when the finite size of the impurities and the junction width along the y-axis are taken into account.
#![allow(clippy::unused_unit)]

use wasm_bindgen::prelude::*;
use wasm_bindgen::{JsCast, JsValue};

use ::eth_ton_abi_converter::*;

#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

#[wasm_bindgen(js_name = "mapTonCellIntoEthBytes")]
pub fn map_ton_cell_into_eth_bytes(abi: &str, boc: &str) -> Result<String, JsValue> {
    // Parse ABI
    let params = decode_ton_event_abi(abi).handle_error()?;

    // Parse boc
    let boc = base64::decode(boc).handle_error()?;
    let cell = ton_types::deserialize_tree_of_cells(&mut boc.as_slice()).handle_error()?;

    // Unpack tokens
    let tokens = unpack_from_cell(&params, cell.into()).handle_error()?;

    // Map tokens
    map_ton_tokens_to_eth_bytes(tokens)
        .handle_error()
        .map(hex::encode)
        .map(|bytes| format!("0x{}", bytes))
}

#[wasm_bindgen(js_name = "mapEthBytesIntoTonCell")]
pub fn map_eth_bytes_into_ton_cell(abi: &str, data: &str) -> Result<String, JsValue> {
    // Parse ABI
    let event = decode_eth_event_abi(abi).handle_error()?;
    let params = event
        .inputs
        .iter()
        .map(|item| item.kind.clone())
        .collect::<Vec<_>>();

    // Parse data
    let data = hex::decode(data.strip_prefix("0x").unwrap_or(data)).handle_error()?;
    let tokens = ethabi::decode(&params, &data).handle_error()?;

    // Map tokens
    let cell = map_eth_tokens_to_ton_cell(tokens, &params).handle_error()?;
    ton_types::serialize_toc(&cell)
        .handle_error()
        .map(base64::encode)
}

impl<T, E> HandleError for Result<T, E>
where
    E: ToString,
{
    type Output = T;

    fn handle_error(self) -> Result<Self::Output, JsValue> {
        self.map_err(|e| {
            let error = e.to_string();
            js_sys::Error::new(&error).unchecked_into()
        })
    }
}

pub trait HandleError {
    type Output;

    fn handle_error(self) -> Result<Self::Output, JsValue>;
}
// Copyright 2021 The Fuchsia Authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. package checklicenses import ( "bytes" "context" "fmt" "os" "os/exec" "strings" ) type Gn struct { gnPath string outDir string } func NewGn(ctx context.Context, config *Config) (*Gn, error) { gn := &Gn{} path, err := exec.LookPath(config.GnPath) if err != nil { return nil, err } if _, err := os.Stat(config.BuildDir); os.IsNotExist(err) { return nil, fmt.Errorf("out directory does not exist: %s", config.BuildDir) } gn.gnPath = path gn.outDir = config.BuildDir return gn, nil } func (gn *Gn) Dependencies(ctx context.Context, target string) ([]string, error) { args := []string{ "desc", gn.outDir, target, "deps", "--all", } cmd := exec.CommandContext(ctx, gn.gnPath, args...) var output bytes.Buffer cmd.Stdout = &output cmd.Stderr = os.Stderr err := cmd.Run() if err != nil { return []string{}, err } result := output.String() result = strings.TrimSpace(result) return strings.Split(result, "\n"), nil } func LabelToDirectory(label string) (string, error) { if !strings.HasPrefix(label, "//") { return "", fmt.Errorf("Label missing leading `//`: %s", label) } label = label[2:] location := label if strings.Contains(label, ":") { location = strings.SplitN(label, ":", 2)[0] } return location, nil }
<reponame>DalavanCloud/tensorrt-inference-server // Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions // are met: // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // * Neither the name of NVIDIA CORPORATION nor the names of its // contributors may be used to endorse or promote products derived // from this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY // EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE // IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR // PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR // CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, // EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR // PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY // OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #include "src/core/sequence_batch_scheduler.h" #include <sys/resource.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #include "src/core/constants.h" #include "src/core/logging.h" #include "src/core/provider.h" #include "src/core/server_status.h" #include "src/core/utils.h" namespace nvidia { namespace inferenceserver { tensorflow::Status SequenceBatchScheduler::Create( const ModelConfig& config, const uint32_t runner_cnt, StandardRunFunc OnSchedule, std::unique_ptr<Scheduler>* scheduler) { std::unique_ptr<SequenceBatchScheduler> sched(new SequenceBatchScheduler()); // For debugging, const char* dstr = getenv("TRTSERVER_BACKLOG_DELAY_SCHEDULER"); sched->backlog_delay_cnt_ = 0; if (dstr != nullptr) { sched->backlog_delay_cnt_ = atoi(dstr); LOG_INFO << "Delaying scheduler until " << sched->backlog_delay_cnt_ << " backlog queued payloads..."; } sched->queue_request_cnts_.resize(runner_cnt, 0); // Get the batch size to allow for each runner. This is at least 1 // even if the model doesn't support batching. size_t batch_size = std::max(1, config.max_batch_size()); // Based on the model configuration create input tensors for control // signals indicating sequence start, sequence continue, and // sequence not ready. std::shared_ptr<InferRequestProvider::InputOverrideMap> start; std::shared_ptr<InferRequestProvider::InputOverrideMap> cont; std::shared_ptr<InferRequestProvider::InputOverrideMap> notready; TF_RETURN_IF_ERROR( sched->CreateControlTensors(config, &start, &cont, &notready)); // Create one SequenceBatch object for each requested runner. The // SequenceBatch object has a thread that manages the batch of // requests. for (uint32_t c = 0; c < runner_cnt; ++c) { std::shared_ptr<SequenceBatch> sb = std::make_shared<SequenceBatch>( sched.get(), c, batch_size, config, OnSchedule, start, cont, notready); sched->batchers_.push_back(sb); // All slots in the batch are initially ready for a new sequence. 
for (size_t b = 0; b < batch_size; ++b) { sched->ready_batch_slots_.push(SequenceBatchScheduler::BatchSlot(c, b)); } } scheduler->reset(sched.release()); return tensorflow::Status::OK(); } tensorflow::Status SequenceBatchScheduler::CreateControlTensors( const ModelConfig& config, std::shared_ptr<InferRequestProvider::InputOverrideMap>* start_input_overrides, std::shared_ptr<InferRequestProvider::InputOverrideMap>* continue_input_overrides, std::shared_ptr<InferRequestProvider::InputOverrideMap>* notready_input_overrides) { // Currently only batch-size 1 requests are supported so only need // to provide control vectors of that size. *start_input_overrides = std::make_shared<InferRequestProvider::InputOverrideMap>(); *continue_input_overrides = std::make_shared<InferRequestProvider::InputOverrideMap>(); *notready_input_overrides = std::make_shared<InferRequestProvider::InputOverrideMap>(); std::string tensor_name; DataType tensor_datatype; int32_t int32_false_value, int32_true_value; float fp32_false_value, fp32_true_value; // START { TF_RETURN_IF_ERROR(GetSequenceControlProperties( config.sequence_batching(), config.name(), ModelSequenceBatching::Control::CONTROL_SEQUENCE_START, true /* required */, &tensor_name, &tensor_datatype, &fp32_false_value, &fp32_true_value, &int32_false_value, &int32_true_value)); uint8_t* false_p = ((tensor_datatype == DataType::TYPE_INT32) ? reinterpret_cast<uint8_t*>(&int32_false_value) : reinterpret_cast<uint8_t*>(&fp32_false_value)); uint8_t* true_p = ((tensor_datatype == DataType::TYPE_INT32) ? reinterpret_cast<uint8_t*>(&int32_true_value) : reinterpret_cast<uint8_t*>(&fp32_true_value)); auto false_override = std::make_shared<InferRequestProvider::InputOverride>(); false_override->content_.assign(false_p, false_p + sizeof(float)); false_override->dims_.Add(1); false_override->datatype_ = tensor_datatype; auto true_override = std::make_shared<InferRequestProvider::InputOverride>(); true_override->content_.assign(true_p, true_p + sizeof(float)); true_override->dims_.Add(1); true_override->datatype_ = tensor_datatype; (*start_input_overrides) ->insert(std::make_pair(tensor_name, true_override)); (*continue_input_overrides) ->insert(std::make_pair(tensor_name, false_override)); (*notready_input_overrides) ->insert(std::make_pair(tensor_name, false_override)); } // READY { TF_RETURN_IF_ERROR(GetSequenceControlProperties( config.sequence_batching(), config.name(), ModelSequenceBatching::Control::CONTROL_SEQUENCE_READY, true /* required */, &tensor_name, &tensor_datatype, &fp32_false_value, &fp32_true_value, &int32_false_value, &int32_true_value)); uint8_t* false_p = ((tensor_datatype == DataType::TYPE_INT32) ? reinterpret_cast<uint8_t*>(&int32_false_value) : reinterpret_cast<uint8_t*>(&fp32_false_value)); uint8_t* true_p = ((tensor_datatype == DataType::TYPE_INT32) ? 
reinterpret_cast<uint8_t*>(&int32_true_value) : reinterpret_cast<uint8_t*>(&fp32_true_value)); auto false_override = std::make_shared<InferRequestProvider::InputOverride>(); false_override->content_.assign(false_p, false_p + sizeof(float)); false_override->dims_.Add(1); false_override->datatype_ = tensor_datatype; auto true_override = std::make_shared<InferRequestProvider::InputOverride>(); true_override->content_.assign(true_p, true_p + sizeof(float)); true_override->dims_.Add(1); true_override->datatype_ = tensor_datatype; (*start_input_overrides) ->insert(std::make_pair(tensor_name, true_override)); (*continue_input_overrides) ->insert(std::make_pair(tensor_name, true_override)); (*notready_input_overrides) ->insert(std::make_pair(tensor_name, false_override)); } return tensorflow::Status::OK(); } void SequenceBatchScheduler::Enqueue( const std::shared_ptr<ModelInferStats>& stats, const std::shared_ptr<InferRequestProvider>& request_provider, const std::shared_ptr<InferResponseProvider>& response_provider, std::function<void(tensorflow::Status)> OnComplete) { // Queue timer starts at the beginning of the queueing and scheduling process std::unique_ptr<ModelInferStats::ScopedTimer> queue_timer( new ModelInferStats::ScopedTimer()); stats->StartQueueTimer(queue_timer.get()); const auto& request_header = request_provider->RequestHeader(); // For now the request must have batch-size 1 since the sequence // batcher does not yet support requests that are statically // batched. if (request_header.batch_size() != 1) { OnComplete(tensorflow::errors::InvalidArgument( "inference request to model '", request_provider->ModelName(), "' must specify batch-size 1 due to requirements of sequence batcher")); return; } // A request must have a correlation ID to be processed correctly by // this scheduler. A value of 0 (zero) indicates that the request // doesn't have a correlation ID. const CorrelationID correlation_id = request_header.correlation_id(); if (correlation_id == 0) { OnComplete(tensorflow::errors::InvalidArgument( "inference request to model '", request_provider->ModelName(), "' must specify a non-zero correlation ID")); return; } BatchSlot* target = nullptr; const bool seq_start = ((request_header.flags() & InferRequestHeader::FLAG_SEQUENCE_START) != 0); const bool seq_end = ((request_header.flags() & InferRequestHeader::FLAG_SEQUENCE_END) != 0); std::unique_lock<std::mutex> lock(mu_); auto sb_itr = sequence_to_batchslot_map_.find(correlation_id); auto bl_itr = sequence_to_backlog_map_.find(correlation_id); // If this request is not starting a new sequence its correlation ID // should already be known with a target in either a slot or in the // backlog. If it doesn't then the sequence wasn't started correctly // or there has been a correlation ID conflict. In either case fail // this request. if (!seq_start && (sb_itr == sequence_to_batchslot_map_.end()) && (bl_itr == sequence_to_backlog_map_.end())) { OnComplete(tensorflow::errors::InvalidArgument( "inference request for sequence ", std::to_string(correlation_id), " to model '", request_provider->ModelName(), "' must specify the START flag on the first request of the sequence")); return; } // If this requests starts a new sequence but the correlation ID // already has an in-progress sequence then that previous sequence // did not end correctly, or there is a correlation ID conflict. In // this case we continue the new sequence (in either backlog or // slot). It is ok for a backlog/slot to have multiple starts... 
as // long as it has a single end. The previous sequence that was not // correctly ended will have its existing requests handled and then // the new sequence will start. if (seq_start && ((sb_itr != sequence_to_batchslot_map_.end()) || (bl_itr != sequence_to_backlog_map_.end()))) { LOG_WARNING << "sequence " << correlation_id << " for model '" << request_provider->ModelName() << "' has a conflict. The previous sequence did not end before this " "sequence start. Previous sequence will be terminated early."; } // This request already has an assigned slot... if (sb_itr != sequence_to_batchslot_map_.end()) { target = &sb_itr->second; } // This request already has a queue in the backlog... else if (bl_itr != sequence_to_backlog_map_.end()) { LOG_VERBOSE(1) << "Enqueuing sequence inference request into backlog for model '" << request_provider->ModelName(); bl_itr->second->emplace_back( queue_timer, stats, request_provider, response_provider, OnComplete); // If the sequence is ending then forget correlation ID // connection to this backlog queue. If another sequence starts // with the same correlation ID it will be collected in another // backlog queue. if (seq_end) { sequence_to_backlog_map_.erase(bl_itr); } return; } // This request does not have an assigned backlog or slot. By the // above checks it must be starting. If there is a free slot // available then assign this sequence to that slot... else if (!ready_batch_slots_.empty()) { target = &sequence_to_batchslot_map_[correlation_id]; *target = ready_batch_slots_.top(); ready_batch_slots_.pop(); } // Last option is to assign this request to the backlog... else { LOG_VERBOSE(1) << "Enqueuing sequence inference request into new backlog for model '" << request_provider->ModelName(); auto backlog = std::make_shared<std::deque<Scheduler::Payload>>(); backlog_queues_.push_back(backlog); backlog->emplace_back( queue_timer, stats, request_provider, response_provider, OnComplete); if (!seq_end) { sequence_to_backlog_map_[correlation_id] = std::move(backlog); } return; } // At this point the request has been assigned to a slot. If the // sequence is ending then stop tracking the correlation. if (seq_end) { sequence_to_batchslot_map_.erase(correlation_id); } // Enqueue request into batcher and slot. const size_t batcher_idx = target->batcher_idx_; const uint32_t slot = target->slot_; // No need to hold the lock while enqueuing in a specific batcher. lock.unlock(); LOG_VERBOSE(1) << "Enqueuing sequence inference request for model '" << request_provider->ModelName() << "' into batcher " << batcher_idx << ", slot " << slot; batchers_[batcher_idx]->Enqueue( slot, correlation_id, queue_timer, stats, request_provider, response_provider, OnComplete); } bool SequenceBatchScheduler::ReleaseBatchSlot( const BatchSlot& batch_slot, const CorrelationID force_end_correlation_id, std::deque<Scheduler::Payload>* payloads) { std::unique_lock<std::mutex> lock(mu_); // If a force_end_correlation_id is given, then that correlation ID is being // forcibly ended from its slot and so must be removed from the // sequence map. if (force_end_correlation_id != 0) { sequence_to_batchslot_map_.erase(force_end_correlation_id); } // If there is a backlogged sequence and it is requested, return it // so that it can use the newly available slot. if (!backlog_queues_.empty()) { auto& backlog = backlog_queues_.front(); *payloads = std::move(*backlog); backlog_queues_.pop_front(); if (!payloads->empty()) { // should never be empty... 
const auto& request_provider = payloads->back().request_provider_; const auto& request_header = request_provider->RequestHeader(); const CorrelationID correlation_id = request_header.correlation_id(); // If the last queue entry is not an END request then the entire // sequence is not contained in the backlog. In that case must // update backlog and batchslot maps so that future requests get // directed to the batch slot instead of the backlog. const bool seq_end = ((request_header.flags() & InferRequestHeader::FLAG_SEQUENCE_END) != 0); if (!seq_end) { // Since the correlation ID is being actively collected in the // backlog, there should not be any in-flight sequences with // that same correlation ID that have an assigned slot. if (sequence_to_batchslot_map_.find(correlation_id) != sequence_to_batchslot_map_.end()) { LOG_ERROR << "internal: backlog sequence " << correlation_id << " conflicts with in-flight sequence for model '" << request_provider->ModelName() << "'"; } sequence_to_backlog_map_.erase(correlation_id); sequence_to_batchslot_map_[correlation_id] = batch_slot; } return false; } } // There is no backlogged sequence so just release the batch slot ready_batch_slots_.push(batch_slot); return true; } bool SequenceBatchScheduler::DelayScheduler( const uint32_t batcher_idx, const size_t cnt, const size_t total) { std::unique_lock<std::mutex> lock(mu_); queue_request_cnts_[batcher_idx] = cnt; size_t seen = 0; for (auto c : queue_request_cnts_) { seen += c; } if (seen < total) { return true; } if (backlog_delay_cnt_ > 0) { size_t backlog_seen = 0; for (const auto& q : backlog_queues_) { backlog_seen += q->size(); } if (backlog_seen < backlog_delay_cnt_) { return true; } } return false; } SequenceBatchScheduler::SequenceBatch::SequenceBatch( SequenceBatchScheduler* base, const uint32_t batcher_idx, const size_t batch_size, const ModelConfig& config, StandardRunFunc OnSchedule, const std::shared_ptr<InferRequestProvider::InputOverrideMap>& start_input_overrides, const std::shared_ptr<InferRequestProvider::InputOverrideMap>& continue_input_overrides, const std::shared_ptr<InferRequestProvider::InputOverrideMap>& notready_input_overrides) : OnSchedule_(OnSchedule), base_(base), batcher_idx_(batcher_idx), max_sequence_idle_ns_( config.sequence_batching().max_sequence_idle_microseconds() * 1000), scheduler_thread_exit_(false), scheduler_idle_(false), queues_(batch_size), max_active_slot_(-1), slot_correlation_ids_(batch_size, 0), slot_idle_timeouts_(batch_size, 0), start_input_overrides_(start_input_overrides), continue_input_overrides_(continue_input_overrides), notready_input_overrides_(notready_input_overrides) { // Create a scheduler thread associated with 'batcher_idx' that // executes the queued payloads. const int nice = GetCpuNiceLevel(config); scheduler_thread_.reset( new std::thread([this, nice]() { SchedulerThread(nice); })); } SequenceBatchScheduler::SequenceBatch::~SequenceBatch() { // Signal the scheduler thread to exit... 
{ std::unique_lock<std::mutex> lock(mu_); scheduler_thread_exit_ = true; } cv_.notify_one(); scheduler_thread_->join(); } void SequenceBatchScheduler::SequenceBatch::Enqueue( const uint32_t slot, const CorrelationID correlation_id, std::unique_ptr<ModelInferStats::ScopedTimer>& queue_timer, const std::shared_ptr<ModelInferStats>& stats, const std::shared_ptr<InferRequestProvider>& request_provider, const std::shared_ptr<InferResponseProvider>& response_provider, std::function<void(tensorflow::Status)> OnComplete) { bool wake_runner = false; { std::lock_guard<std::mutex> lock(mu_); // All requests in this SequenceBatch must have the same shape for // all inputs (since they are going to be executed together in a // batch). If this is the first request into this SequenceBatch // then grab a copy of the request header that is needed to create // NULL version request providers that can stand in as // representative when inference is issuing and there is no // request available in one or more slots. if (max_active_slot_ == -1) { null_request_header_ = request_provider->RequestHeader(); } queues_[slot].emplace_back( queue_timer, stats, request_provider, response_provider, OnComplete); slot_correlation_ids_[slot] = correlation_id; slot_idle_timeouts_[slot] = 0; max_active_slot_ = std::max(max_active_slot_, static_cast<int32_t>(slot)); // If runner is idle then wake it to service this request. We do // the actual wake outside of the lock to avoid having the woken // thread immediately block on the lock wake_runner = scheduler_idle_; } if (wake_runner) { cv_.notify_one(); } } void SequenceBatchScheduler::SequenceBatch::SchedulerThread(const int nice) { if (setpriority(PRIO_PROCESS, syscall(SYS_gettid), nice) == 0) { LOG_VERBOSE(1) << "Starting sequence-batch scheduler thread " << batcher_idx_ << " at nice " << nice << "..."; } else { LOG_VERBOSE(1) << "Starting sequence-batch scheduler thread " << batcher_idx_ << " at default nice (requested nice " << nice << " failed)..."; } // For debugging, delay start of thread until queues contain the // specified number of entries (across all SequenceBatchs in the // scheduler). const char* dstr = getenv("TRTSERVER_DELAY_SCHEDULER"); size_t delay_cnt = 0; if (dstr != nullptr) { delay_cnt = atoi(dstr); LOG_INFO << "Delaying scheduler thread " << batcher_idx_ << " until " << delay_cnt << " queued payloads..."; } const uint64_t default_wait_microseconds = 500 * 1000; while (!scheduler_thread_exit_) { auto payloads = std::make_shared<std::vector<Scheduler::Payload>>(); uint64_t wait_microseconds = 0; struct timespec now; clock_gettime(CLOCK_MONOTONIC, &now); uint64_t now_ns = now.tv_sec * NANOS_PER_SECOND + now.tv_nsec; // Hold the lock for as short a time as possible. { std::unique_lock<std::mutex> lock(mu_); bool adjust_max_active_slot = false; if (delay_cnt > 0) { wait_microseconds = 10 * 1000; // Debugging... wait until queues together contain at least // 'delay_cnt' items... size_t total_size = 0; for (const auto& q : queues_) { total_size += q.size(); } if (!base_->DelayScheduler(batcher_idx_, total_size, delay_cnt)) { delay_cnt = 0; } LOG_INFO << "Delaying scheduler thread " << batcher_idx_ << " until " << delay_cnt << " queued payloads, current total = " << total_size; } else { // Make sure there is at least one request that needs to be // handled. Find the largest slot index that has a payload // available... 
int32_t max_slot = max_active_slot_; while ((max_slot >= 0) && queues_[max_slot].empty()) { max_slot--; } if (max_slot < 0) { wait_microseconds = default_wait_microseconds; } else { // Collect payloads from slot 0 to max_slot. for (int32_t slot = 0; slot <= max_slot; ++slot) { // If 'slot' doesn't have any requests then change the // request provider to send dummy/null input tensors for // this slot. We need this so that other payloads stay in // the correct slot. std::deque<Scheduler::Payload>& queue = queues_[slot]; if (queue.empty()) { auto null_request_provider = std::make_shared<NULLInferRequestProvider>( null_request_header_); null_request_provider->SetInputOverride( notready_input_overrides_); std::unique_ptr<ModelInferStats::ScopedTimer> queue_timer; payloads->emplace_back( queue_timer, nullptr, null_request_provider, nullptr, nullptr); } else { slot_idle_timeouts_[slot] = now_ns + max_sequence_idle_ns_; Scheduler::Payload& slot_payload = queue.front(); const auto& request_provider = slot_payload.request_provider_; const auto& request_header = request_provider->RequestHeader(); // If this is the first payload in a sequence then send // the appropriate sequence start indicator to the // backend. if ((request_header.flags() & InferRequestHeader::FLAG_SEQUENCE_START) != 0) { request_provider->SetInputOverride(start_input_overrides_); } else { request_provider->SetInputOverride(continue_input_overrides_); } payloads->emplace_back( slot_payload.queue_timer_, slot_payload.stats_, request_provider, slot_payload.response_provider_, slot_payload.complete_function_); queue.pop_front(); // If this is the last payload in a sequence then // attempt to refill the slot with a sequence from the // backlog. If there is no backlog show that the slot is // no longer active, and if it is currently the maximum // active slot note that we need to adjust // max_active_slot_ once all slots are processed (we // defer processing because multiple slots could have // ending sequences). if ((request_header.flags() & InferRequestHeader::FLAG_SEQUENCE_END) != 0) { LOG_VERBOSE(1) << "Ending sequence for model '" << request_provider->ModelName() << "' in batcher " << batcher_idx_ << ", slot " << slot; // Should never be anything in a queue after the END // marker. If it happens that means we will clobber // that request if/when we swap in a backlog sequence // in ReleaseBatchSlot below. if (!queue.empty()) { LOG_ERROR << "internal: unexpected requests after sequence " "end in slot " << slot << " for model '" << request_provider->ModelName() << "'"; } SequenceBatchScheduler::BatchSlot batch_slot( batcher_idx_, slot); bool released = base_->ReleaseBatchSlot(batch_slot, 0, &queue); if (released) { slot_correlation_ids_[slot] = 0; if (slot == max_active_slot_) { adjust_max_active_slot = true; } } } } } } } // If an active slot's idle timeout is exceeded, release it. 
for (int32_t slot = 0; slot <= max_active_slot_; ++slot) { const uint64_t timeout = slot_idle_timeouts_[slot]; if ((slot_correlation_ids_[slot] != 0) && (timeout != 0) && (timeout <= now_ns)) { LOG_VERBOSE(1) << "Aborting sequence in batcher " << batcher_idx_ << ", slot " << slot; std::deque<Scheduler::Payload>& queue = queues_[slot]; if (!queue.empty()) { LOG_ERROR << "internal: unexpected idle timeout for sequence in slot " << slot; } SequenceBatchScheduler::BatchSlot batch_slot(batcher_idx_, slot); bool released = base_->ReleaseBatchSlot( batch_slot, slot_correlation_ids_[slot], &queue); if (released) { slot_correlation_ids_[slot] = 0; if (slot == max_active_slot_) { adjust_max_active_slot = true; } } } } // If one or more sequences ended, and one of them was in // max_active_slot_, then need to find the new max_active_slot_. if (adjust_max_active_slot) { while ((max_active_slot_ >= 0) && (slot_correlation_ids_[max_active_slot_] == 0)) { max_active_slot_--; } } // If no requests are to be handled, wait for notification or // for the specified timeout before checking the queues again. if (wait_microseconds > 0) { scheduler_idle_ = true; std::chrono::microseconds wait_timeout(wait_microseconds); cv_.wait_for(lock, wait_timeout); scheduler_idle_ = false; } } if ((payloads != nullptr) && !payloads->empty()) { auto OnCompleteQueuedPayloads = [payloads](tensorflow::Status status) { // Payloads that don't have a completion function don't have // anywhere to report their errors. Those errors could have // caused other payloads to have issues (due to mis-alignment // within the batch, etc.). So if any such payload has an // error we just fail all payloads. if (status.ok()) { for (auto& payload : *payloads) { if (payload.complete_function_ == nullptr) { if (!payload.status_.ok()) { status = payload.status_; break; } } } } // Complete each payload by calling the competion function. bool found_success = false; for (auto& payload : *payloads) { const tensorflow::Status& final_status = status.ok() ? payload.status_ : status; // All the payloads executed together, so count 1 execution in // the first successful payload. Other payloads stay at 0 // executions. if (!found_success && final_status.ok() && (payload.stats_ != nullptr)) { payload.stats_->SetModelExecutionCount(1); found_success = true; } if (payload.complete_function_ != nullptr) { payload.complete_function_(final_status); } } }; // Run the backend... OnSchedule_(batcher_idx_, payloads.get(), OnCompleteQueuedPayloads); } } // end runner loop LOG_VERBOSE(1) << "Stopping sequence-batch scheduler thread " << batcher_idx_ << "..."; } }} // namespace nvidia::inferenceserver
from PIL import Image
import numpy
import os

ImagesFolderPath = "./Images"
ResultsFolderPath = "./Results"

if not os.path.exists(ImagesFolderPath):
    print(f"Missing {ImagesFolderPath} directory, creating it...\n")
    os.mkdir(ImagesFolderPath)

if not os.path.exists(ResultsFolderPath):
    print(f"Missing {ResultsFolderPath} directory, creating it...\n")
    os.mkdir(ResultsFolderPath)

print(f"Place images into '{ImagesFolderPath}'\nThen press Enter to process them")
input("...")

print("Loading images\n")
for filename in os.listdir(ImagesFolderPath):
    print(f"Found '{filename}'")
    img = Image.open(f"{ImagesFolderPath}/{filename}")

    # Round-tripping through a numpy array is not strictly necessary, but it
    # keeps only the decoded pixel data and drops any bytes appended to the file.
    img_array = numpy.array(img)
    img_final = Image.fromarray(img_array)
    img_final.save(f"{ResultsFolderPath}/{filename}")

print(f"Done! Check {ResultsFolderPath}")
input("Press Enter to exit...")
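# The numpy round-trip above works because image decoders stop at the format's
# end-of-stream marker, so any bytes appended after the encoded image are not
# part of the decoded pixels and disappear when the pixels are re-encoded.
# A minimal sketch for checking whether a file carries such appended bytes is
# below; it assumes JPEG input (the 0xFFD9 end-of-image marker) and treats
# everything after the last marker as appended data, which is a simplification.

def jpeg_trailing_bytes(path):
    # Count the bytes that follow the last JPEG end-of-image marker.
    data = open(path, "rb").read()
    eoi = data.rfind(b"\xff\xd9")
    if eoi == -1:
        raise ValueError("no JPEG end-of-image marker found")
    return len(data) - (eoi + 2)

# Example (hypothetical path): print(jpeg_trailing_bytes("./Images/photo.jpg"))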
Embedding e-finance in e-government: a new e-government framework
E-government and e-finance are two different but closely related topics, both evolved from e-commerce and e-business. The literature suggests that e-government essentially uses current information technology, Internet technology and e-commerce practices to give citizens and organisations easier access to government information and services and to deliver public services to those who need them. However, e-government is not a complete concept without e-finance and other e-components. This research provides a new framework for future e-government models and studies the role of e-finance in e-government.
// Code generated by counterfeiter. DO NOT EDIT. package routingfakes import ( "sync" "github.com/livekit/livekit-server/pkg/routing" "google.golang.org/protobuf/reflect/protoreflect" ) type FakeMessageSource struct { ReadChanStub func() <-chan protoreflect.ProtoMessage readChanMutex sync.RWMutex readChanArgsForCall []struct { } readChanReturns struct { result1 <-chan protoreflect.ProtoMessage } readChanReturnsOnCall map[int]struct { result1 <-chan protoreflect.ProtoMessage } invocations map[string][][]interface{} invocationsMutex sync.RWMutex } func (fake *FakeMessageSource) ReadChan() <-chan protoreflect.ProtoMessage { fake.readChanMutex.Lock() ret, specificReturn := fake.readChanReturnsOnCall[len(fake.readChanArgsForCall)] fake.readChanArgsForCall = append(fake.readChanArgsForCall, struct { }{}) stub := fake.ReadChanStub fakeReturns := fake.readChanReturns fake.recordInvocation("ReadChan", []interface{}{}) fake.readChanMutex.Unlock() if stub != nil { return stub() } if specificReturn { return ret.result1 } return fakeReturns.result1 } func (fake *FakeMessageSource) ReadChanCallCount() int { fake.readChanMutex.RLock() defer fake.readChanMutex.RUnlock() return len(fake.readChanArgsForCall) } func (fake *FakeMessageSource) ReadChanCalls(stub func() <-chan protoreflect.ProtoMessage) { fake.readChanMutex.Lock() defer fake.readChanMutex.Unlock() fake.ReadChanStub = stub } func (fake *FakeMessageSource) ReadChanReturns(result1 <-chan protoreflect.ProtoMessage) { fake.readChanMutex.Lock() defer fake.readChanMutex.Unlock() fake.ReadChanStub = nil fake.readChanReturns = struct { result1 <-chan protoreflect.ProtoMessage }{result1} } func (fake *FakeMessageSource) ReadChanReturnsOnCall(i int, result1 <-chan protoreflect.ProtoMessage) { fake.readChanMutex.Lock() defer fake.readChanMutex.Unlock() fake.ReadChanStub = nil if fake.readChanReturnsOnCall == nil { fake.readChanReturnsOnCall = make(map[int]struct { result1 <-chan protoreflect.ProtoMessage }) } fake.readChanReturnsOnCall[i] = struct { result1 <-chan protoreflect.ProtoMessage }{result1} } func (fake *FakeMessageSource) Invocations() map[string][][]interface{} { fake.invocationsMutex.RLock() defer fake.invocationsMutex.RUnlock() fake.readChanMutex.RLock() defer fake.readChanMutex.RUnlock() copiedInvocations := map[string][][]interface{}{} for key, value := range fake.invocations { copiedInvocations[key] = value } return copiedInvocations } func (fake *FakeMessageSource) recordInvocation(key string, args []interface{}) { fake.invocationsMutex.Lock() defer fake.invocationsMutex.Unlock() if fake.invocations == nil { fake.invocations = map[string][][]interface{}{} } if fake.invocations[key] == nil { fake.invocations[key] = [][]interface{}{} } fake.invocations[key] = append(fake.invocations[key], args) } var _ routing.MessageSource = new(FakeMessageSource)
n = int(input())
s = input()

# Keep the first character of each run of equal consecutive characters;
# the answer is how many characters were dropped to remove adjacent duplicates.
p = s[0]
for i in range(1, n):
    if s[i] != p[-1]:
        p += s[i]

print(len(s) - len(p))
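# The loop above keeps the first character of every run of equal, adjacent
# characters, so the printed value is the number of deletions needed to leave
# no two equal neighbours. An equivalent formulation with itertools.groupby,
# useful as a cross-check on small inputs (assuming the same two-line input
# format of a length followed by the string):
from itertools import groupby

n2 = int(input())
s2 = input()
# Each maximal run of k equal characters contributes k - 1 deletions.
print(sum(len(list(g)) - 1 for _, g in groupby(s2)))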
Access to and value of information to support good practice for staff in Kenyan hospitals Background Studies have sought to define the information needs of health workers within very specific settings or projects. Lacking in the literature is how hospitals in low-income settings are able to meet the information needs of their staff, and the use of information communication technologies (ICT) in day-to-day information searching. Objective The study aimed to explore where professionals in Kenyan hospitals turn for work-related information in their day-to-day work. Additionally, it examined what existing solutions are provided by hospitals with regard to the provision of best practice care. Lastly, the study explored the use of ICT in information searching. Design Data for this study were collected in July 2012. Self-administered questionnaires (SAQs) were distributed across 22 study hospitals with the aim of getting a response from 34 health workers per hospital. Results SAQs were collected from 657 health workers. The most popular sources of information to guide work were fellow health workers and printed guidelines, while the least popular were scientific journals. Of value to health workers were: national treatment policies, new research findings, regular reports from surveillance data, information on costs of services and information on their performance of routine clinical tasks; however, hospitals only partially met these needs. Barriers to accessing information sources included: not available/difficult to get and difficult to understand. ICT use for information seeking was reported, with demographic-specific differences noted from the multivariate logistic regression model; nurses, compared to medical doctors, and older workers were less likely to use ICT for health information searching. Barriers to accessing the Internet were identified as: high costs and the lack of the service at home or at work. Conclusions Hospitals need to provide appropriate information by improving information dissemination efforts and providing an enabling environment that allows health workers to find the information they need for best practice. Information needs of health workers have been studied within a variety of settings in both developed and developing countries. These needs vary across different sectors of a health system. Different studies have identified these needs in line with specific themes such as reproductive health and clinical decision making, while others focussed on the needs of either particular cadres of health workers such as nurses or community health workers (5-7) or more general service providers. The use of information communication technologies (ICT) for information dissemination and seeking has also been assessed, albeit as part of specialised projects (4, 7, 10-12). Poor uptake of guidelines and poor access to relevant and reliable information by health workers would ultimately lead to poor care. A study by Nzinga et al. reveals that one of the
reasons for poor uptake of guidelines is that dissemination of information/knowledge tools like guidelines is inadequate. Formal attempts to develop simple clinical guidelines for health workers in Kenya have tried using 'push' methods to share information, which are prone to decay of knowledge and loss of skill over time. Perhaps insufficient attention is however given to 'pull' solutions which take advantage of health workers' efforts to look for information to meet their present need. Pakenham-Walsh suggests a needs-led approach 'where the information is based on research, informed by evidence, enabled by technology, and organized by subject (where appropriate)* but fundamentally led by needs'. Determining the information needs of health workers requires understanding of: health workers' knowledge and practice in relation to the care they give their patients, perceptions of information types and sources available (57, 10,18) and the means by which they seek information. These studies have however been conducted within specific projects or specific cadres of the health workforce and do not shed much light on user preferences for different sources of information. In Kenya, online resources have become more accessible with over 7 million Internet subscriptions as at June 2012 and almost universal mobile phone ownership among health workers. Mapping out the use of ICT by health workers for their day-to-day work could therefore inform policy makers on additional communication channels that they can make use of to enhance sharing of up-to-date and relevant information. This study therefore sought to investigate the information needs and preferences of health workers in Kenyan public hospitals. The specific objectives were: 1) to investigate where health workers in the Kenyan public hospitals go to when they have a question specific to the provision of modern/best practice in health care; 2) to investigate whether health workers in Kenyan public hospitals use ICT while seeking information related to their work; and 3) to investigate what solutions are provided formally by the hospital in an effort to support modern/best practice health care among health professionals in Kenyan public hospitals. The study The study was implemented as part of a larger study undertaken by KEMRI-Wellcome Trust Research Programme in collaboration with the Ministry of Medical Services (MOMs) as a partner in the Health Services, Implementation Research and Clinical Excellence (SIRCLE) collaboration in Kenya in the month of June 2012. The study was conducted in 22 hospitals that provide internship training to young doctors. MOMs purposefully identified these with a view to appropriate national, geographic representation from a total of 40 such hospitals linked to aims to assess the quality of care given to patients. The data reported here were collected using selfadministered questionnaires (SAQs) that were distributed to health workers across the 22 study hospitals. Questionnaire design and testing Questionnaire design drew on prior studies that conducted information needs assessments of health workers with additional questions formulated to meet the study objectives. The SAQ included closed questions with options on a five-point Likert scale or other appropriate choices and open-ended questions. It was then tested and re-tested on five clinicians not in the study and modified after considering the feedback given. 
A third round of questionnaire testing was done on a further six health workers before piloting in two hospitals not included in the study. At each point modifications were made to improve clarity and remove redundancy. Data collection Data for this study were collected for all hospitals by 22 health workers (one drawn from each study hospital). These health workers had undergone a 1-week training session together that introduced them to the study aims, principles of good research and data quality and that allowed them to pilot use of tools in a non-study hospital. Following this training, health workers returned to their hospitals and had between 2 and 3 weeks to distribute and collect the SAQs. Together with the survey team leader they checked the SAQs for completeness on collection, making efforts to obtain missing data where possible. Personal identification information, such as names, was not included in the questionnaire. Sample size Care is predominantly provided in each of the clinical areas by nurses, clinicians, and non-clinician physicians, with support from laboratory and pharmacy staff. During the limited survey period, health workers on duty within clinical sites were the main target of sampling as random selection from a staff list was not deemed feasible. The aim was to collect four SAQs in the mother and child health clinic (MCH) and outpatient department (OPD) and five each from the paediatrics, nursery and maternity, surgery, and internal medicine clinical areas, representing a mixture of clinicians and nurses. In addition, up to six more questionnaires were to be distributed to other cadres such as lab technologists, pharmacists, nutritionists, radiologists, and physiotherapists if available, making a maximum sample of 34 per hospital. Achieving this sample size would have been more than sufficient to report a prevalence of 50% for binary responses to SAQ questions with a precision of +/-10%. Even allowing for a design effect of 2 due to the clustered nature of the data, such precision would be achieved with a total of 211 respondents after adjusting for a 10% non-response rate. Achieving a sample of 34 per hospital would also allow for adequately powered exploratory analysis using regression modelling. Data entry, quality, and analysis Study data from the SAQs were entered and managed using Research Electronic Data Capture (REDCap), a secure, web-based application designed to support data capture for research studies. The data were single entered then checked for accuracy by doing a double entry for 10% of the records, with 97% agreement observed. The data were analysed using STATA version 11 (StataCorp LP, College Station, TX). Overall proportions (for binary responses) were calculated for all respondents adjusting for clustering at the hospital level. Two approaches were employed where the five-point Likert scale was used: cluster-adjusted mean scores were generated for the choices available; and, where appropriate, options on the scale were collapsed to give a proportion adjusted for clustering. We hypothesised that there were associations between two markers of ICT use (use of a computer to search the Internet for work-related information and use of a mobile device/tablet to access information related to work) and age, gender, cadre, and years of experience after internship. These associations were explored using simple univariable analyses.
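As a rough check on the sample-size reasoning above, the usual formula for estimating a proportion p to within an absolute precision d at 95% confidence is n = z^2 p(1 - p) / d^2, inflated by the design effect and by the expected non-response. The short sketch below reproduces a figure of roughly 211 respondents; the authors do not state exactly how the 10% non-response adjustment was applied, so multiplying by 1.1 is an assumption.

z, p, d, deff = 1.96, 0.5, 0.10, 2      # 95% confidence, assumed 50% prevalence, +/-10% precision, design effect
n_srs = z ** 2 * p * (1 - p) / d ** 2   # about 96 respondents under simple random sampling
n_clustered = n_srs * deff              # about 192 after allowing for clustering by hospital
n_final = n_clustered * 1.1             # about 211 after adding 10% for non-response (assumed adjustment)
print(round(n_srs), round(n_clustered), round(n_final))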
The covariables in these univariable models that appeared significant (p < 0.05) were explored further in multivariable logistic regression models (except years of work after internship, as it was poorly answered) for computer and mobile device use. Additionally, we carried out tests to ascertain whether clustering exists for the outcomes of interest. Where clustering existed it was adjusted for in the model. Ethics Scientific and ethical approval for the study was obtained from the Kenya Medical Research Institute (KEMRI) National Scientific and Ethical Review Boards. The Ministry of Health also approved the study, and the study was explained to hospital management teams who provided their assent prior to data collection. Results A total of 657 (median = 31, range per hospital = 16-34) SAQs were returned from a possible 748 distributed to the health workers (for a response rate of 86%). The majority of respondents were nurses (64%), followed by clinical officers (21%) and medical officers (8%). Over 50% of the respondents were aged over 36 years. The largest respondent group was from the maternal-child health and outpatient clinics (161/628, 26%). The full details of the characteristics of the respondents are shown in Table 1. Information needs of health workers and information provision by hospitals The health workers were asked to rate on a five-point scale how often in their day-to-day work (giving medical/nursing care) they felt the need to consult/get further information to help them do the right thing. Overall, 70.4% responded that they commonly need additional information while giving care, with no meaningful difference between cadres (data not shown). Types of information felt to be important, and whether or not such information was supplied by the hospital, were examined using a similar scale (1 = not important at all to 5 = very important). Responses were dichotomised (scores 4 and 5 indicating important) prior to analysis. Figure 1 shows the proportions of those who felt that different information types were important (scores 4 and 5) and whether or not the hospital provided that information. National treatment policies and information on costs of services were provided by the hospitals (90%, N = 644, 95% CI: 87.1-93%, and 76.5%, N = 620, 95% CI: 71.5-81.4%, respectively) and considered important by a large majority of the health workers (89.4%, N = 633, 95% CI: 86.4-92.5%, and 76.9%, N = 603, 95% CI: 73.9-80%, respectively). On the other hand, there were information types that the health workers reported to be important to them (over 70% of respondents) but whose provision by the hospital was lacking (reported as provided by fewer than 40% of respondents). These included: new research findings, information on health workers' performance of routine clinical tasks and regular reports from surveillance data (Fig. 1). The most common communication strategies used by hospitals to give information on best practice were one-to-one instruction and Continuous Medical Education (CME) sessions, while email and text messages were rarely used (data not shown). Sources of information for health workers and barriers Health workers were asked to rate on a five-point scale (least likely to very likely) where they were likely to seek further information to improve their assessment of patients or make the best diagnosis, for each of six choices.
Mean scores were calculated for each of the sources and indicate that talking to colleagues at the same level was the most likely source of information to guide day-to-day practice (4.08, N = 632, 95% CI: 3.98-4.19), closely followed by talking to consultants (4.01, N = 626, 95% CI: 3.90-4.12) and printed guidelines; scientific journals (N = 472, 95% CI: 2.13-2.51) were much less common. The most common reasons given for never using these poorly accessed sources were that they were 'Not available/difficult to get' and 'Difficult to understand'. Use of ICT in information search by health workers The use of ICT (computers and mobile phones) by health workers to find information for their day-to-day work was assessed on several aspects: access, use and barriers. The results show that 80.5% (N = 614, 95% CI: 73.8-87.1%) of health workers have access to a computer, whether at home, work or a cyber cafe, and that 62.5% (N = 602, 95% CI: 55.3-69.6%) use a computer to search for work-related information from the Internet, with 46.1% (185/401, 95% CI: 39.7-52.6%) and 29.4% (118/401, 95% CI: 24.4-34.4%) conducting a search at least once weekly or daily, respectively. Across the different cadres, usage of a computer to search the Internet for work-related information was lowest among nurses at 54.6% (N = 379, 95% CI: 45.4-63.9%). The most common barriers to using the Internet were: lack of access at home (39%, 95% CI: 33.9-44%) or at work (36.8%, 95% CI: 31.5-42.1%), followed by the high costs of accessing the Internet (34.1%, 95% CI: 29.1-39.1%). About 19.8% (95% CI: 14.7-24.9%) of the health workers reported having no time to search for information on the Internet. Ownership of mobile phones among the health workers was nearly universal (98.6%, N = 639, 95% CI: 97.5-99.7%), with 75.0% (N = 633, 95% CI: 70.9-79.2%) of health workers using their mobile devices to access work-related information. The most common uses of mobile phones for work were: calling colleagues to discuss a work-related question (60.3%, N = 657, 95% CI: 56.3-64.2%), sending text messages to answer or ask a question (57.5%, N = 657, 95% CI: 53-62.1%) and searching the Internet for further information for their work (55.9%, N = 657). Results from the univariable analysis exploring the association between computer and mobile device use to search for work-related information showed that there was a significant association with each of the characteristics (age, gender, cadre, years of work after internship). We then postulated potential interaction between age and gender, gender and cadre, and age and cadre, and tested each logistic model with and without the interaction term for both ICT use outcomes. The likelihood-ratio tests showed no evidence for any interaction between the covariables. The likelihood-ratio test comparing the logistic regression models with and without clustering showed that there was evidence to indicate the presence of clustering at the hospital level (p = 0.0002) for computer use, but little evidence for clustering at the hospital level for mobile phone use (p = 0.207). The covariates in the univariable models were then added into multivariable logistic regression models for computer and mobile device use, adjusted for clustering. The results from the multivariable models show that nurses were less likely to use a computer (OR = 0.32; 95% CI: 0.13-0.77; p = 0.011) or mobile device (OR = 0.18; 95% CI: 0.06-0.55; p = 0.003) to access information related to their work as compared to medical officers, after adjusting for all the covariates (Table 2).
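The odds ratios reported here and in the following paragraph come from multivariable logistic regression models adjusted for clustering at the hospital level. A minimal sketch of that kind of model, on synthetic stand-in data, might look as follows; the variable names, the synthetic values and the use of cluster-robust standard errors are illustrative assumptions rather than a reproduction of the authors' Stata analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per respondent, clustered by hospital.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "hospital": rng.integers(0, 22, n),
    "cadre": rng.choice(["medical officer", "clinical officer", "nurse"], n),
    "age_group": rng.choice(["<25", "25-34", "35-44", "45+"], n),
})
df["ict_use"] = rng.binomial(1, np.where(df["cadre"] == "nurse", 0.5, 0.7))

# Logistic regression (binomial GLM) with standard errors clustered by
# hospital, using medical officers as the reference cadre.
model = smf.glm(
    "ict_use ~ C(cadre, Treatment(reference='medical officer')) + C(age_group)",
    data=df, family=sm.families.Binomial())
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["hospital"]})

# Exponentiated coefficients give odds ratios with 95% confidence intervals.
ors = np.exp(pd.concat([result.params, result.conf_int()], axis=1))
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors.round(2))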
Although there was insufficient evidence for a trend, the results also suggested that health workers were less likely to use either a computer or mobile device to search for information as age increased, with those aged 45 years and above significantly less likely to use a computer (OR = 0.46; 95% CI: 0.22-0.98; p = 0.044) or mobile device (OR = 0.28; 95% CI: 0.11-0.74; p = 0.010) to search for information than those aged below 25 years. Discussion Overall, health workers reported that they would need information in their day-to-day work to help them provide appropriate care. According to the health workers, national treatment policies and information on treatment costs for patients were provided by the hospitals and were found to be important to the health workers; this finding is consistent with the literature. Gaps in the provision of new research findings and regular reports from surveillance data were identified, however. Information on health workers' performance of routine clinical tasks was found to be of value to health workers but not provided by hospitals. This information requires the presence of a good health information system that collects good quality data to feed into regular meetings by clinical teams to evaluate their performance, something currently lacking in this setting. In common with previous studies, health workers reported that the most popular source of information was informal communication with colleagues and consultants. Colleagues are likely to be 'easy to access' during routine activities as opposed to electronic or print media. This ease of access to information sources has been described as an attribute of useful medical information. Colleagues have also been viewed as trusted sources of information, suggesting that such informal sources should not be ignored as dissemination channels. Scientific journals, journal databases and the Internet in general were rarely used by the health workers, a finding that is similar to other studies. The most frequently cited barrier to sources that require adequate infrastructure (hardware and reliable and affordable access to the Internet) was that they were not available or difficult to use. Additionally, training and sometimes subscriptions are required to allow for their full potential to be realised. The use of ICT in information seeking by health workers has been assessed as part of specialised projects in developing countries. The results show that health workers who use electronic sources of information make better decisions for their work and that mobile devices were used to access further information. The widespread ownership of mobile phones has been made possible by an enabling environment in Kenya, where the last few years have seen an increase in the availability of a range of mobile phones (both low cost and high end), but it is noteworthy that none of the hospitals studied provide free Internet access for frontline health workers. Internet costs have dropped due to competition among service providers; however, some of the costs are still prohibitive for many. To put this into context, monthly basic salaries of health workers (using an exchange rate of 133 Kenyan shillings for 1 British pound in mid-2012) range from £232.65 to £418.80 for nurses and clinical officers and £268.09 to £489.68 for medical officers. The costs of browsing the Internet range from £0.52 to £0.59 per megabyte (MB). Although there are lower prices for bundles of data in pre-fixed MBs (as low as £0.04), this may still deter people from using data-heavy sites.
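To make the affordability point concrete, the quoted prices and salaries can be combined into a rough share-of-income calculation; the monthly data volume used below is purely illustrative and not a study figure.

price_per_mb = 0.52        # lower end of the quoted browsing cost, in £ per MB
monthly_mb = 200           # illustrative monthly data use, not from the study
lowest_salary = 232.65     # lower bound of the quoted monthly salaries, in £

monthly_cost = price_per_mb * monthly_mb
print(f"£{monthly_cost:.2f} per month, "
      f"{monthly_cost / lowest_salary:.0%} of the lowest quoted salary")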
Our data suggest that nurses, compared with medical officers, and those aged over 45 years were less likely to use ICT (computer and mobile device) for information searching; based on the findings of multivariable logistic regression model adjusted for demographic characteristics and clustering. This finding is perhaps not surprising as older individuals might not be as experienced in the use of technology for information seeking and are likely to be less frequent users of social media. Moreover, as one gets older, there may be more responsibilities financially and consequently accessing the Internet may become less of a priority. Conversely, it could be that uptake of ICT is more popular among the younger health workers and thus they are able to translate this use of technology to their work. This age effect may be important, however, as many nurses in rural areas are aged over 40 years in Kenya. Clustering at the hospital level was found to be present for the outcome: use of a computer to search the Internet. This might be explained by recent efforts by hospitals to digitise operations, which in turn would help improve access to a computer with Internet access and subsequently access to online resources. A possibility supported by linked data on hospital infrastructure indicating that computer availability in the hospitals and their Internet connectivity varied considerably across hospitals. Limitations The use of SAQs, while encouraging honest responses because of anonymity, limits the kind of data collected and does not allow for exploration of new ideas but does provide a starting point for more focussed work. The number of completed questionnaires received from various hospitals varied between 16 and 34 with a median response of 31 out of a possible 34 distributed to each hospital. The lower response rate in some hospitals might have been due to respondent fatigue as some health workers complained that there were too many surveys being conducted and no feedback on findings given. In addition, the varied numbers of responses and the length of the questionnaire might have contributed to poor completion by health workers. Lastly, the sample studied in this research comprised frontline health workers who interact directly with patients and are typically not involved in research activities. The consultants in the wards (e.g. Paediatricians) who may be classified as managers in the hospitals were not surveyed and these form a group that is likely to be different from the health workers surveyed hence their needs and sources might be different. It is therefore not possible to generalise these results to all health workers in internship hospitals in Kenya. Recommendations The potential for the use of ICT as a tool to aid in knowledge/information propagation has not been fully harnessed. The Ministry of Health should explore possibilities of supporting hospitals that are providing initial experiential training to provide reliable access to the Internet by setting up resource centres and local area networks (LANs). Wireless connectivity can enable access to health workers through laptops and mobile phones while at work. In addition health workers should be trained on the use of online resources such as journal databases and the Internet and information searching in general. This should ideally promote the use of correct ideally pre-digested, summarised and locally relevant information found to be most relevant in a review by Revere. 
Additionally, hospitals should make use of communication means frequently used by health workers such as mobile phones to rapidly convey information for example on local surveillance findings or, in the future perhaps, performance feedback. A central repository of such relevant and localised information for Kenyan health workers has the potential to increase dissemination of such materials, research findings and regular surveillance reports. LeMay and Bocock have shown that such central knowledge sharing efforts may overcome barriers to accessing up-to-date information. Conclusions This study set out to find out where health professionals in internship hospitals in Kenya turn to for work-related information, which information is important to them and what solutions are provided by hospitals to meet these needs. Colleagues were a popular source of knowledge as opposed to published sources, from either academic journal or the Internet, possibly because they are easier to access, or because they genuinely trust their knowledge and expertise. The barriers identified reinforced this need for ease of access, and in this context this was largely due to problems with the information infrastructure, including access to computers. Further to this, demographic specific differences were noted with nurses being less likely to use ICT for health information searching, and older workers being less likely to use the Internet. Providing ICT solutions and meeting health worker demand for information would, however, seem to be an increasingly promising mechanism to address information gaps.
#include <stdio.h> #include <stdlib.h> #define N 500000 long long min(long long a, long long b) { return a < b ? a : b; } int oo[1 + (N - 1) * 2], oj[1 + (N - 1) * 2]; int link(int o, int j) { static int _ = 1; oo[_] = o, oj[_] = j; return _++; } int ae[N], sz[N], n; long long dp[N], xx[N], yy[N]; int compare(const void *a, const void *b) { int i = *(int *) a; int j = *(int *) b; if (xx[i] != xx[j]) return xx[i] < xx[j] ? -1 : 1; if (yy[i] != yy[j]) return yy[i] < yy[j] ? -1 : 1; return 0; } long long cross2(long long x1, long long y1, long long x2, long long y2) { return x1 * y2 - x2 * y1; } long long cross(int i, int j, int k) { return cross2(xx[j] - xx[i], yy[j] - yy[i], xx[k] - xx[i], yy[k] - yy[i]); } long long dot2(long long x1, long long y1, long long x2, long long y2) { return x1 * x2 + y1 * y2; } long long ans; void dfs(int p, int i) { static int jj[N], qu[N]; int o, h, h_, k, cnt; sz[i] = 1; for (o = ae[i]; o; o = oo[o]) { int j = oj[o]; if (j != p) { dfs(i, j); sz[i] += sz[j]; } } /* dp_j1 + dp_j2 + ((n - sz_j1 - sz_j2) ch 2) * = dp_j1 + dp_j2 + (n - sz_j1 - sz_j2) (n - sz_j1 - sz_j2 - 1) / 2 * = dp_j1 + dp_j2 * + (n^2 - (2 (sz_j1 + sz_j2) + 1) n * + (sz_j1^2 + sz_j2^2 + 2 sz_j1 sz_j2 + sz_j1 + sz_j2)) / 2 * = dp_j1 + dp_j2 * + n ch 2 * - (sz_j1 + sz_j2) n * + ((sz_j1 + 1) ch 2 + (sz_j2 + 1) ch 2) * + sz_j1 sz_j2 * = n ch 2 * + (dp_j2 - sz_j2 n + (sz_j2 + 1) ch 2) * + (dp_j1 - sz_j1 n + (sz_j1 + 1) ch 2) * + sz_j1 sz_j2 * = n ch 2 * - (sz_j1, dp_j1 - sz_j1 n + (sz_j1 + 1) ch 2) dot (-sz_j2, -1) * + (dp_j2 - sz_j2 n + (sz_j2 + 1) ch 2) */ k = 0; dp[i] = (long long) sz[i] * (sz[i] - 1) / 2; for (o = ae[i]; o; o = oo[o]) { int j = oj[o]; if (j != p) { dp[i] = min(dp[i], dp[j] + (long long) (sz[i] - sz[j]) * (sz[i] - sz[j] - 1) / 2); ans = min(ans, dp[j] + (long long) (n - sz[j]) * (n - sz[j] - 1) / 2); jj[k++] = j; xx[j] = sz[j], yy[j] = dp[j] - (long long) sz[j] * n + (long long) sz[j] * (sz[j] + 1) / 2; } } qsort(jj, k, sizeof *jj, compare); cnt = 0; for (h = 0, h_ = 0; h < k; h++) { int j = jj[h]; while (h_ + 1 < cnt && dot2(xx[qu[h_]], yy[qu[h_]], -xx[j], -1) < dot2(xx[qu[h_ + 1]], yy[qu[h_ + 1]], -xx[j], -1)) h_++; if (cnt) ans = min(ans, (long long) n * (n - 1) / 2 - dot2(xx[qu[h_]], yy[qu[h_]], -xx[j], -1) + yy[j]); while (cnt >= 2 && cross(qu[cnt - 2], qu[cnt - 1], j) <= 0) cnt--; qu[cnt++] = j; if (h_ >= cnt) h_ = cnt - 1; } } int main() { int h, i, j; scanf("%d", &n); for (h = 0; h < n - 1; h++) { scanf("%d%d", &i, &j), i--, j--; ae[i] = link(ae[i], j); ae[j] = link(ae[j], i); } ans = (long long) n * (n - 1) / 2; dfs(-1, 0); ans = (long long) n * (n - 1) - ans; printf("%lld\n", ans); return 0; }
OF THE "SPECTATOR."1 SIR,—By way of rejoinder to the historical "facts" alleged in a letter in your columns on Saturday, May 31st, you may think it fair to print the following extract from the opinion of Lord Loreburn in giving judgment in the case of Nairn v University of St. Andrews, Law Reports, 1909, Appeal Cases, p. 147 at p. 160 :- "It is incomprehensible to me that anyone acquainted with our laws or the methods by which they are ascertained can think, if, indeed, anyone does think, there is room for argument on such a point. It is notorious that this right of voting has, in fact, been confined to men. Not only has it been the constant tradition alike of all the three kingdoms, but it has also been the constant practice, so far as we have knowledge of what has happened from the earliest times down to this day. Only the clearest proof that a different state of things prevailed in ancient times could be entertained by a court of law in probing the origin of so inveterate a usage."
Two of Dell’s largest shareholders have made a new takeover bid for the struggling computer company that will challenge a previous offer from Silver Lake and Michael Dell. Activist investor Carl Icahn and Southeastern Asset Management on Friday announced a new plan that would give current Dell shareholders the option to keep their stock and receive either $12 per share in cash or $12 in additional shares valued at $1.65 per share. The offer counters a $24.4 billion bid led by Dell founder Michael Dell and private equity firm Silver Lake Partners to take the company private. Icahn and Southeastern hold a combined 13% stake in the company, compared to the 16% controlled by Dell and Silver Lake. “It is insulting to shareholders’ intelligence for the board to tell them that this board only has the best interests of shareholders at heart, and then accept Michael Dell’s offer to purchase the company he founded for $13.65 per share, a price far below what we consider its value to be,” Icahn and Southeastern’s president G. Staley Cates wrote in a letter to the board of directors, according to Bloomberg. Shares of Dell have continued to rise this year as buyout talks have made shareholders feel optimistic about the company’s future. The stock has gained more than 31% in the past five months and now trades around $13.40 per share, making Dell’s offer of $13.65 per share less appealing. “Either give shareholders the real choice they are entitled to or face the legal liability for your failures,” Icahn and Southeastern said.
Mining Consumer Services Based on User Preference with Associative and Process Mining A customer makes decisions based on sentiments when using services. These sentiments drive selections among services that are correlated with each other. It is hard to grasp the sentiments of consumers when many combinations of options are available. This paper proposed a method to extract preferred services based on consumer behavior with strong correlation values between options in a service. The approach is to mine the association rules that correspond to the business process model. We first formalized the problem of extracting the preferred service model. Then, we proposed an extraction method that prunes the association rules with stronger relations using Cook's distance and visualizes consumer behavior with a process model. Finally, we illustrated the approach of the proposed method and showed that we could extract preferences with a higher correlation value compared to the conventional method.
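As a rough illustration of the pruning idea, Cook's distance can be used to flag association rules whose metrics are unusually influential in a simple regression of one interestingness measure on the others. The sketch below is only one possible reading of the method, with made-up rule metrics; the paper's actual formulation may differ.

import pandas as pd
import statsmodels.api as sm

# Hypothetical rule metrics; in practice these would come from an
# association-rule miner run over the service-selection logs.
rules = pd.DataFrame({
    "support":    [0.12, 0.30, 0.05, 0.22, 0.08, 0.15],
    "confidence": [0.55, 0.80, 0.35, 0.70, 0.90, 0.60],
    "lift":       [1.10, 1.40, 0.90, 1.30, 3.20, 1.20],
})

X = sm.add_constant(rules[["support", "confidence"]])
fit = sm.OLS(rules["lift"], X).fit()

# Cook's distance for each rule; rules above the common 4/n cut-off are kept
# here as the ones with the strongest (most influential) relation.
cooks_d = fit.get_influence().cooks_distance[0]
print(rules[cooks_d > 4 / len(rules)])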
Pope Francis plunged Sunday into Mideast politics during his Holy Land pilgrimage, calling the current stalemate in peace efforts "unacceptable" and winning the acceptance from the Israeli and Palestinian presidents to pay a symbolic visit to the Vatican next month to pray for peace. Francis issued the surprise, joint invitation after landing in Bethlehem, the cradle of Christianity, in a symbolic nod to Palestinian aspirations for their own state. In another unscripted moment, he prayed at the Israeli separation barrier surrounding the biblical West Bank town and briefly donned the checkered black and white headscarf that is a symbol of the Palestinian cause. Jubilant Palestinians cheered Francis as he arrived in Bethlehem's Manger Square, shouting "Viva al-Baba!" or "Long live the pope!" Giant Palestinian flags in red, white, green and black and the Vatican's yellow-and-white flags decorated the square, which is home to the Church of the Nativity, built over Jesus' traditional birth grotto. At the end of Mass in the square, Francis invited Palestinian President Mahmoud Abbas and Israeli President Shimon Peres to pray with him for peace, saying: "I offer my home in the Vatican as a place for this encounter of prayer." The offices of the Israeli and Palestinian presidents quickly confirmed that they had accepted the invitation, with the Palestinians saying the meeting would take place in June. The invitation -- and the acceptances -- were unexpected given Francis' insistence that his three-day visit was "strictly religious" pilgrimage to commemorate a Catholic-Orthodox anniversary. But it showed that the pope, who is named after the peace-loving St. Francis of Assisi, has been able to channel his immense popular appeal to be a moral force for peace, even though the proposed meeting will be largely a symbolic affair. Israeli-Palestinian peace talks broke down in late April, and there have been no public high-level meetings for a year. Peres, a 90-year-old Nobel Peace laureate, is set to step down over the summer, and the meeting would take place shortly before he leaves office. Peres, whose job is largely ceremonial, has no authority to negotiate peace, and the meeting will be merely symbolic. But he nonetheless risks upsetting Prime Minister Benjamin Netanyahu with the move. Netanyahu has expressed anger with politicians that have reached out to Abbas at a time when the Palestinian leader is reconciling with the Islamic militant group Hamas. Israel considers Hamas a terrorist group. There was no immediate comment from Netanyahu's office. Francis started out the second day of his three-day Mideast trip with a deeply symbolic decision to land in at a Bethlehem helipad, arriving from Jordan aboard a Jordanian helicopter. Previous popes have always come to the West Bank after first arriving in Tel Aviv, Israel. Palestinian officials hailed Francis' decision to arrive first in Bethlehem, and to refer to the "state of Palestine." In its official program, the Vatican referred to Abbas as the president of the "state of Palestine," and his Bethlehem office as the "presidential palace." "It's a blessed day," said Samar Sakkakini, 52, a Palestinian-American from Canton, Michigan, who attended the Mass in Manger Square. "Coming to Bethlehem and flying to Bethlehem from Jordan shows solidarity with the Palestinian people, which is wonderful. We need that." 
In November 2012, the United Nations General Assembly overwhelmingly recognized a "state of Palestine" in the West Bank, Gaza and east Jerusalem -- lands Israel captured in the 1967 war -- as a non-member observer. The recognition still has little meaning on the ground, with Israel remaining in full control of east Jerusalem, which it annexed in 1967, and the West Bank. Israel objects to the Palestinian campaign, saying it is an attempt to bypass negotiations. Standing alongside Abbas at a welcome ceremony, Francis declared: "The time has come to put an end to this situation which has become increasingly unacceptable." He said both sides needed to make sacrifices to create two states, with internationally recognized borders, based on mutual security and rights for everyone. "The time has come for everyone to find the courage to be generous and creative in the service of the common good," he said, urging both sides to refrain from any actions that would derail peace. In his remarks, Abbas voiced his concerns about the recent breakdown in U.S.-backed peace efforts and lamented the difficult conditions facing the Palestinians. He also expressed hope for peace. "Your visit is loaded with symbolic meaning as a defender of the poor and the marginalized," he said. Abbas listed a series of complaints against Israel, including continued settlement construction, the plight of thousands of Palestinian prisoners, Israel's control of east Jerusalem -- the Palestinians' would-be capital -- and Israel's construction of the "ugly wall" that encircles Bethlehem. "We welcome any initiative from you to make peace a reality in the Holy Land," Abbas said. "I am addressing our neighbors -- the Israelis. We are looking for the same thing that you are looking for, which is safety, security and stability." Security was lax by papal standards, even for a pope who has shunned the armored popemobile that his predecessors used on foreign trips. Only two bodyguards stood on the back of Francis' vehicle keeping watch as Palestinian police kept the crowd at bay. Francis waved and warmly smiled as his car made its way through the crowd in Manger Square, at one point holding a child passed up to him. In addition to the Israeli-Palestinian conflict, Francis also sought to encourage Palestinian Christians, whose numbers have dwindled as the conflict drags on. Currently, Christians are roughly 2 percent of the population of the Holy Land, down from about 10 percent at the time of Israel's establishment in 1948. In Bethlehem, they are less than one third of the population. Francis acknowledged the Palestinian Christian hardship and in his homily sought to encourage the younger generations with a strong plea for children around the globe to be protected and defended from war, poverty, disease and exile as refugees. "All too many children continue to be exploited, maltreated, enslaved, prey to violence and illicit trafficking," he said, a mural depicting the Nativity scene with the baby Jesus wrapped in the black-and-white checkered Palestinian headdress behind him. "Today in acknowledging this, we feel shame before God." After Mass, Francis had lunch with Palestinian families and visited a Palestinian refugee camp before flying by helicopter to Tel Aviv's Ben-Gurion airport for the Israeli leg of his trip.
#pragma once #include "./View.hpp" class JIntArray; namespace android::animation { class LayoutTransition; } namespace android::content { class Context; } namespace android::content::res { class Configuration; } namespace android::graphics { class Canvas; } namespace android::graphics { class Point; } namespace android::graphics { class Rect; } namespace android::graphics { class Region; } namespace android::os { class Bundle; } namespace android::util { class SparseArray; } namespace android::view { class ActionMode; } namespace android::view { class DragEvent; } namespace android::view { class KeyEvent; } namespace android::view { class MotionEvent; } namespace android::view { class PointerIcon; } namespace android::view { class View; } namespace android::view { class ViewGroup_LayoutParams; } namespace android::view { class ViewGroupOverlay; } namespace android::view { class ViewOverlay; } namespace android::view { class ViewStructure; } namespace android::view { class WindowInsets; } namespace android::view { class WindowInsetsAnimation; } namespace android::view { class WindowInsetsAnimation_Bounds; } namespace android::view { class WindowInsetsAnimation_Callback; } namespace android::view::accessibility { class AccessibilityEvent; } namespace android::view::accessibility { class AccessibilityNodeInfo; } namespace android::view::animation { class LayoutAnimationController; } namespace android::view::animation { class Transformation; } class JString; class JString; namespace java::util { class ArrayList; } namespace android::view { class ViewGroup : public android::view::View { public: // Fields static jint FOCUS_AFTER_DESCENDANTS(); static jint FOCUS_BEFORE_DESCENDANTS(); static jint FOCUS_BLOCK_DESCENDANTS(); static jint LAYOUT_MODE_CLIP_BOUNDS(); static jint LAYOUT_MODE_OPTICAL_BOUNDS(); static jint PERSISTENT_ALL_CACHES(); static jint PERSISTENT_ANIMATION_CACHE(); static jint PERSISTENT_NO_CACHE(); static jint PERSISTENT_SCROLLING_CACHE(); // QJniObject forward template<typename ...Ts> explicit ViewGroup(const char *className, const char *sig, Ts...agv) : android::view::View(className, sig, std::forward<Ts>(agv)...) 
{} ViewGroup(QJniObject obj); // Constructors ViewGroup(android::content::Context arg0); ViewGroup(android::content::Context arg0, JObject arg1); ViewGroup(android::content::Context arg0, JObject arg1, jint arg2); ViewGroup(android::content::Context arg0, JObject arg1, jint arg2, jint arg3); // Methods static jint getChildMeasureSpec(jint arg0, jint arg1, jint arg2); void addChildrenForAccessibility(java::util::ArrayList arg0) const; void addExtraDataToAccessibilityNodeInfo(android::view::accessibility::AccessibilityNodeInfo arg0, JString arg1, android::os::Bundle arg2) const; void addFocusables(java::util::ArrayList arg0, jint arg1, jint arg2) const; void addKeyboardNavigationClusters(JObject arg0, jint arg1) const; jboolean addStatesFromChildren() const; void addTouchables(java::util::ArrayList arg0) const; void addView(android::view::View arg0) const; void addView(android::view::View arg0, android::view::ViewGroup_LayoutParams arg1) const; void addView(android::view::View arg0, jint arg1) const; void addView(android::view::View arg0, jint arg1, android::view::ViewGroup_LayoutParams arg2) const; void addView(android::view::View arg0, jint arg1, jint arg2) const; void bringChildToFront(android::view::View arg0) const; void childDrawableStateChanged(android::view::View arg0) const; void childHasTransientStateChanged(android::view::View arg0, jboolean arg1) const; void clearChildFocus(android::view::View arg0) const; void clearDisappearingChildren() const; void clearFocus() const; android::view::WindowInsets dispatchApplyWindowInsets(android::view::WindowInsets arg0) const; jboolean dispatchCapturedPointerEvent(android::view::MotionEvent arg0) const; void dispatchConfigurationChanged(android::content::res::Configuration arg0) const; void dispatchDisplayHint(jint arg0) const; jboolean dispatchDragEvent(android::view::DragEvent arg0) const; void dispatchDrawableHotspotChanged(jfloat arg0, jfloat arg1) const; void dispatchFinishTemporaryDetach() const; jboolean dispatchKeyEvent(android::view::KeyEvent arg0) const; jboolean dispatchKeyEventPreIme(android::view::KeyEvent arg0) const; jboolean dispatchKeyShortcutEvent(android::view::KeyEvent arg0) const; void dispatchPointerCaptureChanged(jboolean arg0) const; void dispatchProvideAutofillStructure(android::view::ViewStructure arg0, jint arg1) const; void dispatchProvideStructure(android::view::ViewStructure arg0) const; void dispatchSetActivated(jboolean arg0) const; void dispatchSetSelected(jboolean arg0) const; void dispatchStartTemporaryDetach() const; void dispatchSystemUiVisibilityChanged(jint arg0) const; jboolean dispatchTouchEvent(android::view::MotionEvent arg0) const; jboolean dispatchTrackballEvent(android::view::MotionEvent arg0) const; jboolean dispatchUnhandledMove(android::view::View arg0, jint arg1) const; void dispatchWindowFocusChanged(jboolean arg0) const; void dispatchWindowInsetsAnimationEnd(android::view::WindowInsetsAnimation arg0) const; void dispatchWindowInsetsAnimationPrepare(android::view::WindowInsetsAnimation arg0) const; android::view::WindowInsets dispatchWindowInsetsAnimationProgress(android::view::WindowInsets arg0, JObject arg1) const; android::view::WindowInsetsAnimation_Bounds dispatchWindowInsetsAnimationStart(android::view::WindowInsetsAnimation arg0, android::view::WindowInsetsAnimation_Bounds arg1) const; void dispatchWindowSystemUiVisiblityChanged(jint arg0) const; void dispatchWindowVisibilityChanged(jint arg0) const; void endViewTransition(android::view::View arg0) const; android::view::View findFocus() 
const; void findViewsWithText(java::util::ArrayList arg0, JString arg1, jint arg2) const; android::view::View focusSearch(android::view::View arg0, jint arg1) const; void focusableViewAvailable(android::view::View arg0) const; jboolean gatherTransparentRegion(android::graphics::Region arg0) const; android::view::ViewGroup_LayoutParams generateLayoutParams(JObject arg0) const; JString getAccessibilityClassName() const; android::view::View getChildAt(jint arg0) const; jint getChildCount() const; jint getChildDrawingOrder(jint arg0) const; jboolean getChildVisibleRect(android::view::View arg0, android::graphics::Rect arg1, android::graphics::Point arg2) const; jboolean getClipChildren() const; jboolean getClipToPadding() const; jint getDescendantFocusability() const; android::view::View getFocusedChild() const; android::view::animation::LayoutAnimationController getLayoutAnimation() const; JObject getLayoutAnimationListener() const; jint getLayoutMode() const; android::animation::LayoutTransition getLayoutTransition() const; jint getNestedScrollAxes() const; android::view::ViewGroupOverlay getOverlay() const; jint getPersistentDrawingCache() const; jboolean getTouchscreenBlocksFocus() const; jboolean hasFocus() const; jboolean hasTransientState() const; jint indexOfChild(android::view::View arg0) const; void invalidateChild(android::view::View arg0, android::graphics::Rect arg1) const; JObject invalidateChildInParent(JIntArray arg0, android::graphics::Rect arg1) const; jboolean isAlwaysDrawnWithCacheEnabled() const; jboolean isAnimationCacheEnabled() const; jboolean isLayoutSuppressed() const; jboolean isMotionEventSplittingEnabled() const; jboolean isTransitionGroup() const; void jumpDrawablesToCurrentState() const; void layout(jint arg0, jint arg1, jint arg2, jint arg3) const; void notifySubtreeAccessibilityStateChanged(android::view::View arg0, android::view::View arg1, jint arg2) const; void offsetDescendantRectToMyCoords(android::view::View arg0, android::graphics::Rect arg1) const; void offsetRectIntoDescendantCoords(android::view::View arg0, android::graphics::Rect arg1) const; void onDescendantInvalidated(android::view::View arg0, android::view::View arg1) const; jboolean onInterceptHoverEvent(android::view::MotionEvent arg0) const; jboolean onInterceptTouchEvent(android::view::MotionEvent arg0) const; jboolean onNestedFling(android::view::View arg0, jfloat arg1, jfloat arg2, jboolean arg3) const; jboolean onNestedPreFling(android::view::View arg0, jfloat arg1, jfloat arg2) const; jboolean onNestedPrePerformAccessibilityAction(android::view::View arg0, jint arg1, android::os::Bundle arg2) const; void onNestedPreScroll(android::view::View arg0, jint arg1, jint arg2, JIntArray arg3) const; void onNestedScroll(android::view::View arg0, jint arg1, jint arg2, jint arg3, jint arg4) const; void onNestedScrollAccepted(android::view::View arg0, android::view::View arg1, jint arg2) const; jboolean onRequestSendAccessibilityEvent(android::view::View arg0, android::view::accessibility::AccessibilityEvent arg1) const; android::view::PointerIcon onResolvePointerIcon(android::view::MotionEvent arg0, jint arg1) const; jboolean onStartNestedScroll(android::view::View arg0, android::view::View arg1, jint arg2) const; void onStopNestedScroll(android::view::View arg0) const; void onViewAdded(android::view::View arg0) const; void onViewRemoved(android::view::View arg0) const; void recomputeViewAttributes(android::view::View arg0) const; void removeAllViews() const; void removeAllViewsInLayout() const; 
void removeView(android::view::View arg0) const; void removeViewAt(jint arg0) const; void removeViewInLayout(android::view::View arg0) const; void removeViews(jint arg0, jint arg1) const; void removeViewsInLayout(jint arg0, jint arg1) const; void requestChildFocus(android::view::View arg0, android::view::View arg1) const; jboolean requestChildRectangleOnScreen(android::view::View arg0, android::graphics::Rect arg1, jboolean arg2) const; void requestDisallowInterceptTouchEvent(jboolean arg0) const; jboolean requestFocus(jint arg0, android::graphics::Rect arg1) const; jboolean requestSendAccessibilityEvent(android::view::View arg0, android::view::accessibility::AccessibilityEvent arg1) const; void requestTransparentRegion(android::view::View arg0) const; jboolean restoreDefaultFocus() const; void scheduleLayoutAnimation() const; void setAddStatesFromChildren(jboolean arg0) const; void setAlwaysDrawnWithCacheEnabled(jboolean arg0) const; void setAnimationCacheEnabled(jboolean arg0) const; void setClipChildren(jboolean arg0) const; void setClipToPadding(jboolean arg0) const; void setDescendantFocusability(jint arg0) const; void setLayoutAnimation(android::view::animation::LayoutAnimationController arg0) const; void setLayoutAnimationListener(JObject arg0) const; void setLayoutMode(jint arg0) const; void setLayoutTransition(android::animation::LayoutTransition arg0) const; void setMotionEventSplittingEnabled(jboolean arg0) const; void setOnHierarchyChangeListener(JObject arg0) const; void setPersistentDrawingCache(jint arg0) const; void setTouchscreenBlocksFocus(jboolean arg0) const; void setTransitionGroup(jboolean arg0) const; void setWindowInsetsAnimationCallback(android::view::WindowInsetsAnimation_Callback arg0) const; jboolean shouldDelayChildPressedState() const; jboolean showContextMenuForChild(android::view::View arg0) const; jboolean showContextMenuForChild(android::view::View arg0, jfloat arg1, jfloat arg2) const; android::view::ActionMode startActionModeForChild(android::view::View arg0, JObject arg1) const; android::view::ActionMode startActionModeForChild(android::view::View arg0, JObject arg1, jint arg2) const; void startLayoutAnimation() const; void startViewTransition(android::view::View arg0) const; void suppressLayout(jboolean arg0) const; void updateViewLayout(android::view::View arg0, android::view::ViewGroup_LayoutParams arg1) const; }; } // namespace android::view
A group of Chinese investors has filed a lawsuit against Virginia Gov. Terry McAuliffe accusing him of an immigration scam in the electric car company he helped found — dredging up his past dealings as a businessman and political player as he eyes a possible 2020 presidential run. Filed last week, the lawsuit from 32 investors says Mr. McAuliffe ran a $120 million scam that tried to entice wealthy Chinese to invest in GreenTech Automotive in exchange for a pathway to U.S. citizenship. The investors said Mr. McAuliffe and Anthony Rodham, brother of Hillary Clinton, promised to exercise their political connections. McAuliffe spokeswoman Crystal Carson denied the accusations. But Ryan Mulvey, counsel at the conservative government watchdog Cause of Action Institute, said he was shocked the legal action did not come sooner. Mr. McAuliffe turned his attention to the electric-car market following his loss in the 2009 primary for the Democratic gubernatorial nomination in Virginia. He and Chinese-American securities lawyer Charles Wang founded GreenTech Automotive. They joined forces with Mr. Rodham’s Gulf Coast Funds Management, which specialized in a visa program known as EB-5 that rewarded big foreign investors with the chance to apply for U.S. citizenship in exchange for funneling large sums of money into the U.S. economy. Mr. McAuliffe called himself a “leader” in the hybrid and electric-car markets, and boasted about doing something that “no one has done” by buying a car company in China and moving it to the U.S. Concluding that Virginia wasn’t interested, he opened a factory instead in Mississippi, where it received millions of dollars in incentives. Mr. McAuliffe vowed to build a $60 million factory for car production. He pledged the company would yield thousands of new good-paying jobs and produce hundreds of thousands of cars. The first 100,000 cars would be sold for $10,000, he promised. The business unraveled thanks in part to a flawed private financing plan that centered on the EB-5 program. Gulf Coast Funds managed the EB-5 investments for GreenTech. Mr. McAuliffe stepped down as chairman in 2012 and turned his attention to a second bid for governor. Earlier this year, GreenTech shut down its Mississippi factory, which had opened in 2014. Mississippi officials are seeking to recoup more than $6 million from GreenTech related to defaulted loan payments after the state auditor released a report finding that as of February the car plant had “approximately 10 active employees” and failed to fulfill its financial promises. In addition, the Department of Homeland Security’s inspector general has said that Mr. McAuliffe received special treatment from a top DHS official in his quest to get green cards for his foreign investors under the EB-5 program.
/*
 * Copyright 2018 <NAME> <<EMAIL>>
 * and other copyright owners as documented in the project's IP log.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.basinmc.stormdrain.event;

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import edu.umd.cs.findbugs.annotations.NonNull;
import java.util.Objects;
import org.basinmc.stormdrain.resource.Repository;
import org.basinmc.stormdrain.resource.User;

/**
 * Represents an event which notifies its receiver of a newly created fork of a repository.
 *
 * @author <a href="mailto:<EMAIL>"><NAME></a>
 */
public class ForkEvent extends AbstractRepositoryEvent {

  private final Repository forkee;

  @JsonCreator
  public ForkEvent(
      @NonNull @JsonProperty(value = "forkee", required = true) Repository forkee,
      @NonNull @JsonProperty(value = "repository", required = true) Repository repository,
      @NonNull @JsonProperty(value = "sender", required = true) User sender) {
    super(repository, sender);
    this.forkee = forkee;
  }

  /**
   * Represents the newly created repository fork.
   *
   * @return a repository.
   */
  @NonNull
  public Repository getForkee() {
    return this.forkee;
  }

  /**
   * {@inheritDoc}
   */
  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof ForkEvent)) {
      return false;
    }
    if (!super.equals(o)) {
      return false;
    }
    ForkEvent forkEvent = (ForkEvent) o;
    return Objects.equals(this.forkee, forkEvent.forkee);
  }

  /**
   * {@inheritDoc}
   */
  @Override
  public int hashCode() {
    return Objects.hash(super.hashCode(), this.forkee);
  }
}
#include <cstdio>

// firstBeatsSecond: the first pair strictly wins against the second pair
bool fbs(int a1, int d1, int a2, int d2) { return a1 > d2 && d1 > a2; }
// secondBeatsFirst: the second pair strictly wins against the first pair
bool sbf(int a1, int d1, int a2, int d2) { return a1 < d2 && d1 < a2; }

int main() {
    const int N = 4;
    int a[N], d[N];
    // "%d %d" (without a trailing '\n') is enough: %d already skips leading whitespace,
    // and omitting the '\n' avoids waiting for extra input after the last line.
    for (int p = 0; p < N; p++) { scanf("%d %d", a + p, d + p); }

    if ((fbs(a[0], d[1], a[2], d[3]) && fbs(a[0], d[1], a[3], d[2])) ||
        (fbs(a[1], d[0], a[2], d[3]) && fbs(a[1], d[0], a[3], d[2]))) { puts("Team 1"); }
    else if ((sbf(a[0], d[1], a[2], d[3]) || sbf(a[0], d[1], a[3], d[2])) &&
             (sbf(a[1], d[0], a[2], d[3]) || sbf(a[1], d[0], a[3], d[2]))) { puts("Team 2"); }
    else { puts("Draw"); }
    return 0;
}
/* Copyright (c) 2000-2020, Board of Trustees of Leland Stanford Jr. University All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ package org.lockss.plugin.clockss.bioone; import org.lockss.config.TdbAu; import org.lockss.daemon.PluginException; import org.lockss.daemon.ShouldNotHappenException; import org.lockss.extractor.ArticleMetadata; import org.lockss.extractor.FileMetadataExtractor; import org.lockss.extractor.MetadataField; import org.lockss.extractor.MetadataTarget; import org.lockss.plugin.CachedUrl; import org.lockss.plugin.clockss.JatsPublishingSchemaHelper; import org.lockss.plugin.clockss.PubMedSchemaHelper; import org.lockss.plugin.clockss.SourceXmlMetadataExtractorFactory; import org.lockss.plugin.clockss.SourceXmlSchemaHelper; import org.lockss.util.Logger; import org.w3c.dom.Document; import java.util.ArrayList; import java.util.List; public class BioOneMetadataExtractorFactory extends SourceXmlMetadataExtractorFactory { static Logger log = Logger.getLogger(BioOneMetadataExtractorFactory.class); private static SourceXmlSchemaHelper JatsPublishingHelper = null; @Override public FileMetadataExtractor createFileMetadataExtractor(MetadataTarget target, String contentType) throws PluginException { return new JatsPublishingSourceXmlMetadataExtractor(); } public class JatsPublishingSourceXmlMetadataExtractor extends SourceXmlMetadataExtractor { /* * This setUpSchema shouldn't be called directly * but for safety, just use the CU to figure out which schema to use. 
* */ @Override protected SourceXmlSchemaHelper setUpSchema(CachedUrl cu) { throw new ShouldNotHappenException("This version of the schema setup cannot be used for this plugin"); } @Override protected SourceXmlSchemaHelper setUpSchema(CachedUrl cu, Document xmlDoc) { String url = cu.getUrl(); log.debug3("Setup Jats schema helper for url " + url); if (JatsPublishingHelper == null) { JatsPublishingHelper = new JatsPublishingSchemaHelper(); } return JatsPublishingHelper; } @Override protected void postCookProcess(SourceXmlSchemaHelper schemaHelper, CachedUrl cu, ArticleMetadata thisAM) { //If we didn't get a valid date value, use the copyright year if it's there if (thisAM.get(MetadataField.FIELD_DATE) == null) { if (thisAM.getRaw(JatsPublishingSchemaHelper.JATS_date) != null) { thisAM.put(MetadataField.FIELD_DATE, thisAM.getRaw(JatsPublishingSchemaHelper.JATS_date)); } else {// last chance thisAM.put(MetadataField.FIELD_DATE, thisAM.getRaw(JatsPublishingSchemaHelper.JATS_edate)); } } thisAM.put(MetadataField.FIELD_PUBLISHER, thisAM.getRaw(JatsPublishingSchemaHelper.JATS_pubname)); /* Comment out these changes, since we like to get publisher name from the xml files String publisherName = "BioOne"; TdbAu tdbau = cu.getArchivalUnit().getTdbAu(); if (tdbau != null) { publisherName = tdbau.getPublisherName(); } thisAM.put(MetadataField.FIELD_PUBLISHER, publisherName); */ thisAM.put(MetadataField.FIELD_PROVIDER, thisAM.getRaw(JatsPublishingSchemaHelper.JATS_pubname)); } } }
Combining Sentinel-1 Interferometry and Ground-Based Geomatics Techniques for Monitoring Buildings Affected by Mass Movements

Mass movements represent a serious threat to the stability of human structures and infrastructures, and they cause loss of life and severe damage to human properties every year worldwide. Built structures located on potentially unstable slopes are susceptible to deformations due to the displacement of the ground, which at worst can lead to total destruction. Synthetic aperture radar (SAR) data acquired by the Sentinel-1 satellites and processed by multi-temporal interferometric SAR (MT-InSAR) techniques can measure centimeter- to millimeter-level displacement with weekly to monthly updates, characterizing the long-term, large-scale behavior of buildings and slopes. However, the spatial resolution and short wavelength weaken the performance of Sentinel-1 in recognizing features (i.e., single buildings) inside image pixels and in maintaining coherence in mountainous vegetated areas. We have proposed and applied a methodology that combines Sentinel-1 interferometry with ground-based geomatics techniques, i.e., the global navigation satellite system (GNSS), terrestrial laser scanning (TLS) and terrestrial structure from motion photogrammetry (SfM), for fully assessing building deformations on a slope located in the north-eastern Italian pre-Alps. GNSS allows the ground deformation estimated by MT-InSAR to be verified and provides a reference system for the TLS and SfM measurements, while TLS and SfM allow the behavior of the buildings located on the investigated slope to be monitored in great detail. The obtained results show that the damaged buildings are located in the most unstable sectors of the slope, but there is no direct relationship between the rate of ground deformation of these sectors and the temporal evolution of the damage to a single building, indicating that mass movements cause the displacement of blocks of buildings and that each of them reacts differently according to its structural properties. This work shows the capability of MT-InSAR, GNSS, TLS and SfM in monitoring both the buildings and the geological processes that affect their stability, which plays a key role in geohazard analysis and assessment.

Introduction

Mass movements (i.e., fall, topple, slide, spread, flow and slope deformation) are abundant and frequent worldwide, threatening human life and property, with significant socio-economic losses. Buildings located on unstable slopes are subjected to rigid rotations or angular distortions due to differential displacements. Identifying and monitoring both building damage and slope movements is crucial for the implementation of effective risk mitigation strategies and of urbanization and development plans.

The bedrock of the slope is constituted by two heteropic formations deposited during the middle Triassic: the Recoaro limestone and the Gracilis Formation. The first outcrops in the upper part of the slope and is composed of limestones and marly and dolomitic limestones. The second outcrops in the middle and lower part of the slope and consists of an alternation of sandy and marly limestones interbedded with evaporitic dolomites. These formations have an average dip direction similar to that of the slope and are highly fractured due to the tectonic events that occurred during the Upper Triassic-Jurassic and the Alpine orogeny. The whole area is characterized by the extensive outcrop of a debris cover up to 10 m thick, derived from eluvial/colluvial processes and mass movements. The grain size of the debris is very heterogeneous and ranges from millimetric to decametric clasts immersed in a clayey and silty sand matrix. Locally, morphological evidence, such as bumps, dips and sudden changes in the slope gradient, reveals the presence of large boulders in the debris, dislocated from the Triassic calcareous formation located at the top of the area.

The slope instabilities were identified through in situ investigations, aerial photo interpretation, and GNSS and InSAR surveys. They consist of translational and rotational slides, soil slips and superficial slow deformations (creep), mainly involving the debris cover (Figure 1). Slide phenomena and soil slips have a high state of activity and mainly occur in the wet seasons after rainfall events. Superficial deformations have displacement rates of a few millimeters per year, as measured in our previous GNSS and InSAR surveys, and do not show clear geomorphological evidence, but the movements result in damage to buildings, walls and the road network, and in the upward curvature of trees.

In this work, two masonry structures affected by relevant crack patterns due to mass movements were investigated in detail: Rovegliana Church and a building located in the Cappellazzi district (Figures 1 and 2). The two structures were built using standard masonry formed by bricks and concrete blocks. Many documents about the history of Rovegliana village, and in particular of its Church, are available, but the current building was built in 1963. It covers an area of about 540 m² and has a height of 14 m. The building located in Cappellazzi was built in the seventies of the last century; it covers an area of about 130 m² and has an average height of about 10 m.

Material and Methods

The methodologies applied in this work consist of satellite and ground-based measurements of the displacements affecting the Rovegliana slope and the two damaged buildings investigated in detail. MT-InSAR techniques were used to monitor the instability phenomena and the damage to human structures and infrastructures and to explore their triggering factors. GNSS surveys were performed to integrate and verify the interferometry results and to provide a reference system for the multi-temporal TLS and SfM measurements, which allowed the behavior of the buildings to be monitored in great detail.

Multi-Temporal Interferometric Synthetic Aperture Radar (MT-InSAR)

Ground deformation over the study area was measured using both ascending and descending Sentinel-1A and -1B C-band images in interferometric wide swath mode, with a 12- or 6-day revisit time and a spatial resolution of 5 by 20 m in range and azimuth. Two hundred and sixteen images were acquired from ascending track 117; images acquired along a descending track were also used. The two representative classes of MT-InSAR approaches, PS and SBAS, mainly aim at eliminating atmospheric phase contributions, which are spatially correlated within a single SAR scene and temporally uncorrelated. Above all, these techniques estimate surface motion by exploiting the usually strong temporal correlation of deformation phenomena. The PS technique, first proposed by Ferretti et al., generates differential interferograms referred to one common master, identifying persistent point-wise reflectors such as man-made structures and rocks. In general, PS InSAR performs better in urban and nearby areas, where the number of persistent scatterers is higher than in natural terrain. The SBAS technique, first proposed by Berardino et al., relies on an appropriate combination of image pairs with small spatial and temporal baselines, overcoming some of the limitations due to atmospheric effects and detecting the temporal evolution of the surface deformations. This approach is more effective in the case of spatially correlated deformations. Moreover, it increases the spatial coverage, especially in non-urban areas. In this study, the SBAS technique was applied to obtain information on both urban and non-urban areas, in order to better monitor the mass movements and observe the behavior of most of the slope.

The SBAS processing workflow adopted in SARscape was used (Figure 3). In the first step, a network of reference and secondary image pairs is defined using temporal and perpendicular baseline constraints (Figure 4); the second step consists of image co-registration and interferogram generation (with the subtraction of the topographic low frequencies), filtering and unwrapping for each pair; inaccuracies due to the satellite orbits and phase ramps are removed in the third step; mean heights and velocities are estimated and used to re-flatten each interferogram in the first inversion (step 4); atmospheric corrections are performed in the second inversion (step 5), and the displacement- and height-related products (correction values and new elevation) are then generated; finally, all processing results are geocoded in the selected cartographic system. The topographic phase correction and the geocoding were run using the 1-arcsecond digital elevation model (DEM) from the Shuttle Radar Topography Mission (SRTM).
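As a minimal illustration of the first step of this workflow (not the SARscape implementation itself, whose internals are not described here), the following Python sketch builds a small-baseline network from hypothetical acquisition dates and perpendicular baselines, keeping only the pairs that satisfy the temporal and perpendicular baseline thresholds adopted in this study (36 days and 120 m, as reported below).

# Minimal sketch of small-baseline (SBAS) pair selection; the dates and
# perpendicular baselines below are hypothetical, not the actual Sentinel-1 catalogue.
from datetime import date
from itertools import combinations

# acquisition id -> (date, perpendicular baseline w.r.t. an arbitrary reference, in metres)
acquisitions = {
    "S1_001": (date(2017, 3, 1), 0.0),
    "S1_002": (date(2017, 3, 13), 45.0),
    "S1_003": (date(2017, 3, 25), -80.0),
    "S1_004": (date(2017, 4, 6), 30.0),
}

MAX_TEMPORAL_BASELINE_DAYS = 36    # threshold used in this study
MAX_PERPENDICULAR_BASELINE_M = 120.0

def small_baseline_pairs(acqs):
    """Return the (reference, secondary) pairs whose baselines are below both thresholds."""
    pairs = []
    for (id1, (t1, b1)), (id2, (t2, b2)) in combinations(sorted(acqs.items()), 2):
        dt = abs((t2 - t1).days)
        db = abs(b2 - b1)
        if dt <= MAX_TEMPORAL_BASELINE_DAYS and db <= MAX_PERPENDICULAR_BASELINE_M:
            pairs.append((id1, id2, dt, db))
    return pairs

for ref, sec, dt, db in small_baseline_pairs(acquisitions):
    print(f"{ref} - {sec}: {dt:3d} days, {db:6.1f} m")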
We initially generated 923 and 837 interferograms for the descending and ascending datasets, respectively, with a maximum temporal baseline of 36 days and a maximum perpendicular baseline of 120 m. Some acquisitions were discarded (red points in Figure 4) in both the ascending and descending datasets, due to the low coherence of most of the interferograms involving such acquisitions. The low coherence, below the threshold set in the processing (0.35), could be related to strong surface variations induced by extreme weather conditions. The interferograms were filtered with the Goldstein method. The alpha min and alpha max values, i.e., the exponents applied to the power spectrum of fully coherent (coherence = 1) and fully incoherent (coherence = 0) pixels, were set to 0.3 and 3, respectively. The value of alpha applied to each pixel varies linearly between the specified minimum and maximum values: the higher the alpha min and alpha max, the stronger the filter smoothing. The filter window size was set to 64 pixels. The SBAS inversion was run using a disconnected approach, which allows the inversion even in the case of some temporally sparse coherence drops, setting the percentage of interferograms and the minimum of valid acquisitions to 60% and 95%, respectively.

The output of both the ascending and descending datasets measures the component of the actual deformation projected along the direction of the line of sight (LoS). To characterize the long-term behavior of the buildings and landslides, the LoS velocity should be analyzed in terms of the actual ground surface motion. The availability of two different viewing geometries (ascending and descending) made it possible to retrieve the horizontal (east-west) and vertical components of the actual motion from the LoS velocities. In detail, this projection can be performed by assuming a null north-south deformation component and performing a 2D combination and decomposition of the ascending and descending LoS deformation vectors for each resolution cell, taking into account the local LoS direction of each dataset (Figure 5). This simplification is plausible considering that satellite SAR near-polar orbit acquisitions are barely sensitive to north-south deformation components, which describe objects moving almost parallel to the satellite flight direction. Eventually, the north-south horizontal component of the deformation could be retrieved if three different acquisition geometries spanning the same time interval were available over the same area.
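A minimal numerical sketch of this 2D decomposition is given below. It assumes, purely for illustration, typical Sentinel-1 incidence and heading angles (the actual local LoS geometry of each dataset must be taken from the SAR metadata) and solves, for one resolution cell, the two LoS observations for the east-west and vertical components under the zero north-south assumption.

import numpy as np

def los_unit_vector(incidence_deg, heading_deg):
    """ENU unit vector pointing from the ground target to the satellite
    for a right-looking SAR with the given incidence and heading (azimuth) angles."""
    inc = np.radians(incidence_deg)
    head = np.radians(heading_deg)
    return np.array([-np.sin(inc) * np.cos(head),   # east component
                     np.sin(inc) * np.sin(head),    # north component (small for near-polar orbits)
                     np.cos(inc)])                  # up component

def decompose_ew_up(d_los_asc, d_los_desc, geom_asc, geom_desc):
    """Solve the two LoS observations for (east-west, up) assuming zero north-south motion."""
    u_a = los_unit_vector(*geom_asc)
    u_d = los_unit_vector(*geom_desc)
    A = np.array([[u_a[0], u_a[2]],
                  [u_d[0], u_d[2]]])
    d_e, d_u = np.linalg.solve(A, np.array([d_los_asc, d_los_desc]))
    return d_e, d_u

# Hypothetical geometry (incidence, heading, in degrees) and LoS displacements in mm for one cell
asc_geom, desc_geom = (39.0, 348.0), (39.0, 192.0)
print(decompose_ew_up(+6.0, -8.0, asc_geom, desc_geom))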
The Sentinel-1 ascending and descending data are not contemporary. Therefore, to combine the LoS results into the east-west and up-down directions, a temporal interpolation of the LoS deformation time series is executed during the projection step.

Global Navigation Satellite System (GNSS)

GNSS observations have been widely applied to landslide identification and monitoring and to the calibration, validation and/or comparison of InSAR results. The capabilities of the two techniques complement each other in monitoring ground deformations. A GNSS-based analysis generally involves the establishment of a network of GNSS stations, data adjustment, transformation to a common datum, and differencing for displacement detection. It is possible to obtain a three-dimensional (3D) point position with horizontal and vertical accuracies of 7-8 mm and 1-2 cm, respectively. In detail, the study area was monitored using multi-temporal GNSS data acquired during three different survey campaigns performed in October 2018, June 2019 and October 2019. The measurements were planned by identifying 10 reliable non-permanent stations (NPS): eight NPS are located on stable foundations inside the unstable slope, to check and monitor the instability phenomena, while two NPS and a GNSS permanent station (SCHI) are positioned outside, in presumably stable areas, to check the co-registration of the reference system for each survey. The baselines, i.e., the distances between the points inside and outside the landslide, are less than 10 km. All points were selected to provide nearly ideal conditions, e.g., an unobstructed horizon view and the avoidance of multipath effects.
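The final differencing step of this workflow is straightforward; the short sketch below uses invented coordinates in place of the adjusted network solutions and derives the east, north and up displacement components of one NPS point between two campaigns, i.e., the same quantities that are later compared with the InSAR estimates.

import numpy as np

# Adjusted UTM 32N coordinates (east, north, elevation, in metres) of one NPS point at two epochs.
# The numbers are purely illustrative placeholders.
nps_oct2018 = np.array([665432.118, 5062871.452, 512.304])
nps_oct2019 = np.array([665432.110, 5062871.446, 512.301])

d_east, d_north, d_up = nps_oct2019 - nps_oct2018
horizontal = np.hypot(d_east, d_north)
azimuth = (np.degrees(np.arctan2(d_east, d_north)) + 360.0) % 360.0  # clockwise from north

print(f"dE = {d_east*1000:+.1f} mm, dN = {d_north*1000:+.1f} mm, dU = {d_up*1000:+.1f} mm")
print(f"horizontal displacement = {horizontal*1000:.1f} mm toward azimuth {azimuth:.0f} deg")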
The NPS points inside the unstable slope are distributed taking into account the need to georeference the terrestrial laser scanning data, the photogrammetric acquisitions and the local topographic measurements (the network used to survey the laser scanning ground control points and the natural and artificial points on the external walls of the buildings): two NPS points are located close to Rovegliana Church, two more close to the building in the Cappellazzi district, and the other four are distributed in the deformation area. These points are useful not only for the georeferencing of the terrestrial measurements, but also as a source of data for the comparison and validation of the interferometry results. The observations were performed using four double-frequency Leica Viva GNSS receivers for each campaign, adopting the static mode at a sampling rate of 15 s and a minimum acquisition time of 3 h for each baseline. The acquired data were processed using the Infinity software provided by Leica Geosystems, taking into account the precise satellite orbits downloaded from the International GPS Service for Geodynamics (IGS). The network connecting the three points outside the unstable area was used to check the stability of the NPS stations in the multi-temporal analysis and to constrain a common reference system for each survey. Subsequently, the adjustment of the network of points inside the unstable slope, transformed into the stable reference system, provided the coordinates of the NPS points for each measurement epoch. Finally, the differences in the 3D coordinates provided the displacements of the points along the north, east and elevation directions.

Terrestrial Laser Scanning (TLS) and Structure from Motion (SfM)

TLS is a ground-based technique that automatically collects the 3D spatial coordinates of a large number of points on an object, with a spatial resolution that ranges from millimeters to centimeters. Point clouds and red-green-blue (RGB) images acquired from the center of the instrument allow the texturing of the 3D scans, providing a photo-realistic metric representation of the surveyed objects. TLS data are usually acquired in the form of multiple scans that have to be aligned to create the global 3D model. Point clouds acquired at different times are co-registered and transformed into one coordinate system (e.g., using an affine transformation) for change detection studies. This contactless measurement technique has been widely used for long-range monitoring in the fields of architecture, civil engineering, geology and geomorphology. The TLS acquisitions were performed using a Leica ScanStation P20, characterized by a precision of 2 mm and 8" in distance and angular measurements, respectively. In the area surrounding the two analyzed buildings, two local topographic networks were defined and measured with a Leica TCR1201 total station for the survey of the targets necessary to align the TLS scans. The local networks are composed of five points in the Cappellazzi district and six points in the Rovegliana Church area. Each network includes two GNSS NPS points (1100 and 1200, 2100 and 2200, Figure 1), which were used for georeferencing the point clouds in the UTM 32N cartographic reference system.
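The target-based alignment and georeferencing described here can be sketched, in its simplest rigid-body form, with a least-squares (Kabsch/Horn) estimation of the rotation and translation that map the scanner-frame target coordinates onto their surveyed UTM coordinates. The point values below are placeholders, not the real network, and the commercial software used in this study applies its own, more complete registration procedure.

import numpy as np

def rigid_transform(source, target):
    """Least-squares rigid-body transform (R, t) mapping source points onto target points
    (Kabsch/Horn method); both arrays are N x 3 matched target coordinates."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical example: four TLS targets in the scanner's local frame and their
# surveyed UTM 32N coordinates (placeholder values with millimetre-level noise).
local = np.array([[0.000, 0.000, 0.000], [5.200, 0.300, 0.100],
                  [4.800, 6.100, -0.200], [-0.500, 5.900, 0.300]])
utm = np.array([[665400.002, 5062850.001, 510.003], [665405.198, 5062850.303, 510.098],
                [665404.803, 5062856.102, 509.801], [665399.497, 5062855.898, 510.302]])

R, t = rigid_transform(local, utm)
residuals = utm - (local @ R.T + t)
print("RMS registration error: %.4f m" % np.sqrt((residuals ** 2).mean()))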
At each site, the TLS and low-resolution image acquisitions started from the points of the topographic network, measuring the other visible targets and acquiring data of the building walls from different points of view, in order to reduce the shadow areas and reconstruct 3D models that are as complete as possible. All the scans were performed with a sampling distance of 3.1 mm at 10 m, corresponding to a real average grid size of about 2 mm, while the additional detailed scans of the most damaged and cracked parts of the walls have a sampling distance between 0.8 and 1.6 mm at 10 m, corresponding to a real average grid size between 0.4 and 0.8 mm. The acquired portions of the structures correspond to the walls affected by the relevant fractures rather than to the whole buildings, in order to optimize the work and manage the data more efficiently, focusing the analysis on the most problematic fracture patterns.

In the first phase of the processing, using the Leica Cyclone software provided by Leica Geosystems, the scans were optimized by removing unnecessary points and filtering the noise. Then, the identification of several targets and the assignment of their georeferenced coordinates allowed the scans to be registered (aligned), with the calculation of the related error for each target. These values provide the accuracy of the scan alignment, which can be verified through visual inspection of the 3D model, sections and plans. In the second phase of the processing, the comparison between the multi-temporal TLS data was performed using the M3C2 plugin of the CloudCompare software (https://www.danielgm.net/cc/). This is a specific tool that allows the distance between two 3D entities (cloud-to-cloud, cloud-to-mesh and mesh-to-mesh) to be calculated. To this end, a subset of points (core points) of one of the two point clouds (the reference cloud) is considered. The distance is calculated considering the points falling in a cylindrical volume around the local normal to each core point. In the processing, the following parameters have to be set: subsampling, normal scale, projection scale and maximum depth. The subsampling is the minimum distance between points in the reference cloud, and it was set to 1 cm. The normal scale is the diameter of the spherical volume around each core point used to compute the local normal; the adopted value is 5 cm. The projection scale is the diameter of the cylinder used as the search region to calculate the distance between the two point clouds; the value was set to 10 cm. The max depth is the height of the cylinder in both directions from the core point; it was set to 20 cm. These parameters were chosen based on our previous experience and on the literature. Once the parameters are set, the distance between the average positions of the points of each cloud falling in the cylindrical volume is calculated.

The total station was used for the measurement of the topographic reference network and the targets, and for the survey of natural and artificial points homogeneously distributed on both sides of the main cracks affecting the buildings (20 for the building in Cappellazzi and 20 for Rovegliana Church), performing distance and angular (triangulation) measurements from the points of the network to evaluate any differential displacement. In addition to the TLS surveys, terrestrial photogrammetric acquisitions were performed for the application of the SfM technique.
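Before turning to the photogrammetric processing, the core of the M3C2 computation described above can be sketched in a few lines. The following simplified version (it omits M3C2's robust normal estimation, uncertainty output and many options, all of which are handled inside CloudCompare) projects the points of two clouds that fall inside a cylinder oriented along the local normal of each core point and differences their mean positions, using the parameter values adopted in this study; the core points are assumed to be provided directly rather than generated by subsampling.

import numpy as np
from scipy.spatial import cKDTree

SUBSAMPLING = 0.01       # m, core point spacing (core points are given directly in this sketch)
NORMAL_SCALE = 0.05      # m, diameter of the sphere used to estimate the local normal
PROJECTION_SCALE = 0.10  # m, diameter of the search cylinder
MAX_DEPTH = 0.20         # m, cylinder half-length on each side of the core point

def local_normal(cloud, tree, p):
    """Normal of the best-fitting plane of the neighbours within NORMAL_SCALE / 2 of p
    (assumes at least three neighbours are found)."""
    idx = tree.query_ball_point(p, NORMAL_SCALE / 2)
    nbrs = cloud[idx] - cloud[idx].mean(axis=0)
    # the right singular vector with the smallest singular value is the plane normal
    return np.linalg.svd(nbrs, full_matrices=False)[2][-1]

def mean_along_normal(cloud, tree, p, n):
    """Mean signed offset (along n) of the cloud points inside the search cylinder at p."""
    idx = tree.query_ball_point(p, np.hypot(PROJECTION_SCALE / 2, MAX_DEPTH))
    d = cloud[idx] - p
    axial = d @ n
    radial = np.linalg.norm(d - np.outer(axial, n), axis=1)
    sel = (np.abs(axial) <= MAX_DEPTH) & (radial <= PROJECTION_SCALE / 2)
    return axial[sel].mean() if sel.any() else np.nan

def m3c2_like(reference, compared, core_points):
    """Signed cloud-to-cloud distances (metres) at each core point, along the local normal."""
    tree_ref, tree_cmp = cKDTree(reference), cKDTree(compared)
    out = []
    for p in core_points:
        n = local_normal(reference, tree_ref, p)
        out.append(mean_along_normal(compared, tree_cmp, p, n)
                   - mean_along_normal(reference, tree_ref, p, n))
    return np.array(out)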
The original idea of SfM was proposed by Ullman et al., who addressed how to infer the 3D structure and motion of objects from the 2D transformations of their projected images. SfM has been used to generate large point clouds for scene structure reconstruction and geoscience applications. This technique exploits an automatic pixel-by-pixel correlation approach, reconstructing the 3D model in a local or global reference system. The general workflow consists of an initial analysis of the imagery dataset, with feature extraction and matching through an object recognition algorithm (scale invariant feature transform, SIFT). This allows the camera positions to be reconstructed using the coordinates of the measured targets and a sparse point cloud to be generated, composed of the homologous points (tie points) identified in the different images. Then, the full-resolution images are used to densify the sparse point cloud, generating the dense cloud through the multi-view stereo (MVS) algorithm. The obtained 3D model can be used for the production of the mesh model, the DEM and the orthomosaic. The SfM processing was performed using the Agisoft Metashape software (www.agisoft.com). The images of the damaged buildings were acquired with a digital single-lens reflex (SLR) camera, a Canon EOS 5DS (50.6-megapixel CMOS sensor; sensor size 36 × 24 mm), with a 35 mm focal length lens. To monitor the evolution of the crack pattern affecting the walls of the buildings, 96 images in the Cappellazzi district and 127 images at Rovegliana Church were acquired, with a size of 5760 × 3840 pixels and an overlap from 40% to 90%. In the first case, the average camera-object distance is 4 m, which yields a ground sample distance (GSD, pixel size on the wall) of 0.7 mm. In the second, the average distance is 2 m and the GSD is 0.4 mm. A tripod and flash were not used thanks to the good and uniform light conditions.

Figure 6a,b show the velocity maps along the LoS direction derived from the SBAS-InSAR processing of the Sentinel-1 ascending and descending datasets. Positive values mean that the movement is towards the satellite, and negative values indicate that the movement is away from the satellite. The results were obtained exploiting the Sentinel-1 data at full resolution, gridded at 15 × 15 m to obtain square resolution pixels, and setting a coherence threshold of 0.35 for both the ascending and descending datasets. The interferometry results cover 85% (ascending) and 76% (descending) of the entire study area, providing information on both urban and non-urban areas. The landsliding areas are almost totally covered by both the ascending and descending SBAS results. Displacement rates range from −30 to 30 mm/year, with a mean precision of the calculated LoS mean velocities of 3.35 ± 1.1 and 3.33 ± 1.05 mm/year for the ascending and descending outputs, respectively. Higher deformation velocities were measured in areas with a high slope gradient, at the head of the gullies and landslides. In these areas, the direction of displacement is mainly vertical (Figure 6d). Lower velocities were estimated on gentle or flat slopes, which are characterized by a significant horizontal displacement along the slope facing (Figure 6c), as in the case of the two sites investigated in more detail (Rovegliana Church and the Cappellazzi district). The rate and direction of the detected displacements are clearly connected to the kinematics of the instability phenomena affecting the slope.
In the source areas, the mass movements have a rotational component with a depletion of the ground surface, which results in negative values in the LoS and vertical velocity maps (Figure 6a,b,d). Along the landslide bodies, the movement is mainly translational in the direction of the maximum slope (south-west). In this case, the LoS velocities estimated from the SAR images acquired in ascending orbit are positive, while the velocities estimated from the descending acquisitions are negative. This means that a prevailing horizontal component towards the west is present, as evaluated by combining the ascending and descending SBAS results (Figure 6c).

Results

To investigate the relationships between the damage to anthropic structures and the mass movements, the SBAS-InSAR displacement time series of points located in the Cappellazzi district and in the Rovegliana Church area were considered (Figure 7). The LoS, horizontal and vertical displacements affecting the two structures monitored by TLS and SfM (Figures 1 and 2) were compared to those estimated in the surrounding non-urban areas. The trends of the time series in Cappellazzi are very similar, with a total displacement between −8 and −20 mm in the LoS direction (Figure 7b). The small differences are due to the different morphological conditions of the selected points: 1101, 1102 and 1103 are on the slope, while 1104 is in a flat area. The total displacement along the horizontal and vertical directions is almost the same and is between −5 and −10 mm. These results show that the entire sector is moving to the SW at the same rate along the maximum slope, which is consistent with the kinematics of the translational slide affecting this area. In the case of the Rovegliana Church area, the trends of the time series are similar, but points 2103 and 2104, located north of the church, have a total displacement higher than that of points 2101 and 2102. These differences show that this area is affected by two different instability phenomena, which have been activated in the same periods but with different intensities, as occurred after July 2017 and July 2018 (Figure 7f). The negative trend of the horizontal and vertical components (Figure 7g,h) suggests that points 2103 and 2104 are moving to the west along the maximum slope due to the soil slips occurring in this sector, while points 2101 and 2102 are moving to the south-west, according to the local morphology, along the same direction as the translational slide affecting the area south of Rovegliana Church (Figure 1). Both in Cappellazzi and in the Rovegliana Church area, the time series clearly show that the displacements occur during the wet seasons, from March to June (spring) and from September to November (autumn) (Figure 7b-d,f-h), rainfall being the main triggering factor. Figure 8 shows a clear relationship between rainfall and the vertical displacement affecting the area of Rovegliana Church. It has to be noted that, after dry winter periods, spring rainfall events cause an upward displacement. Even if less evident, this phenomenon can also occur in the autumn season, after the main rainfall events. We do not have enough information to explain this occurrence; it can be supposed that it is related to the swelling of clayey deposits due to the variation in soil moisture, but more detailed geotechnical investigations and a hydromechanical modeling of the instability phenomena would be required.

The GNSS processing and the adjustment of the external network composed of 3 points provided the coordinates of the 10 NPS points in the UTM 32N cartographic reference system. The three external points showed differences in the coordinates ranging from 0 to 1.0 mm (October 2018-June 2019) and from 0.1 to 1.2 mm (June 2019-October 2019); thus, they can be considered stable during the observation period. The network of the NPS located inside the unstable area was adjusted by constraining the coordinates of the 3 stable external points. The results of the adjustment provided coordinate standard deviations ranging from 2 to 6 mm in planimetry and up to 9 mm in elevation. The differences of the coordinates of the 8 inside NPS points generated the 3D displacement vectors.

The cumulative 3D displacements of the GNSS NPS points are listed in Table 1. To verify the interferometry results, the horizontal (east-west) displacements estimated by the two techniques at the GNSS points inside the unstable slope were compared. The east-west component was considered because of the better accuracy of the GNSS technique in measuring planimetric displacements.
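In practice, this check amounts to comparing, point by point, the east-west displacement measured on the ground with the east-west component decomposed from the two SBAS geometries. A short sketch of such a comparison is given below, with placeholder values standing in for the actual Table 1 and SBAS outputs.

import numpy as np

# Placeholder east-west displacements (mm) over the same period at the NPS points;
# the real values come from the GNSS campaigns (Table 1) and from the SBAS decomposition.
gnss_ew = {"1100": -7.5, "1200": -8.0, "2100": -6.0, "2200": -6.5, "4100": -4.0}
insar_ew = {"1100": -6.8, "1200": -7.6, "2100": -6.4, "2200": -6.1, "4100": -6.5}

diffs = []
for point, g in gnss_ew.items():
    d = insar_ew[point] - g
    diffs.append(d)
    print(f"NPS {point}: GNSS {g:+.1f} mm, InSAR {insar_ew[point]:+.1f} mm, difference {d:+.1f} mm")
print(f"mean absolute difference: {np.mean(np.abs(diffs)):.1f} mm")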
The results show a very good agreement between the two techniques for almost all the points, except for point 4100, where a small difference was observed. However, this is within the limits of precision of the techniques. The cumulative horizontal displacement vectors derived from GNSS are directed to the south-west along the maximum slope (Figure 9), which is consistent with the kinematics of the instability phenomena, as observed from the interferometry.

The TLS methodology provided the 3D models of the two objects under investigation (Figure 10): for the building located in Cappellazzi, four scans were executed (two scans for the global views and two high-resolution scans for the damaged parts) from two different scanning points; for Rovegliana Church, a total of six scans (including two high-resolution scans) were necessary to cover the frontal part of the building, the fractured wall of the portico and the bell tower. The global models were built with the Leica Cyclone software using the georeferenced targets for the alignment procedure, and the same settings and parameters were used for the three survey campaigns of the years 2018-2019. The average registration errors obtained on the 6 targets of the Rovegliana Church area are 3 mm for the surveys performed in October 2018 and June 2019, and 4 mm for the October 2019 measurement. In the case of the Cappellazzi district, the average registration errors on 4 targets are 2, 3 and 2 mm in the three surveys. In the comparisons between models acquired at different times, to reduce the influence of the georeferencing and make the point clouds cleaner and more accurate, only the single high-resolution scans (with a sampling distance of 0.8 and 1.6 mm at 10 m) were used.
The first and the last scans were compared to detect displacements of the cracks affecting the structures (Figure 11). The M3C2 plugin of the CloudCompare software allowed the computation of the distances between two point clouds, given some input parameters concerning the density of the core points and the volume in which the homologous points for the distance calculation are identified. The results, highlighted by the color scale, show that the fractures remain stable, or that the deformation is too small to be observed with the TLS approach. The detected displacements concern the mobile parts (windows, doors, plants, objects in Figure 11a) and rigid translations of the whole surfaces, as shown by the two peaks of distances in Figure 11a,b (about 1 cm towards the south for the building in the Cappellazzi district and 1.2 cm towards the south-west for Rovegliana Church).

The SfM processing of the images was executed with the purpose of tracing the evolution of the fractures during the observation period, thanks to the high-resolution imagery acquired by the SLR camera. The reconstruction of a reliable 3D model and the extraction of orthophotos can validly be used for identifying and tracking the discontinuities, with a higher capacity of representing small details on the surface of the objects than a point cloud. From the 3D model, the textured mesh was obtained using an orthomosaic of the corresponding images; subsequently, the orthophoto was extracted by defining 3 points on the surface of the wall and using them as a reference for the projection of the textured mesh, with a pixel size of 1 mm. The orthophotos were used as base layers to draw the pattern of the cracks in the first (October 2018) and last (October 2019) surveys (Figure 12). It was decided to use the orthophotos from SfM rather than from the TLS textured model because of the far better resolution of the images acquired by the SLR camera and the impossibility of distinguishing millimetric details in the TLS point cloud. By superimposing the traces of the cracks detected in 2018 (red polylines in Figure 12) on the 2019 orthophoto, it can be seen that the damage on the walls did not increase during the observation period.
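The orthophoto extraction described above essentially reduces to defining a plane from three points on the wall and resampling the textured model onto a regular grid in that plane. The sketch below is independent of the Metashape implementation and uses three invented wall points; it builds such a plane coordinate system and converts 3D points into pixel coordinates at the 1 mm resolution used here.

import numpy as np

PIXEL_SIZE = 0.001  # m, 1 mm orthophoto resolution

# Three points picked on the wall surface (placeholder coordinates in metres).
p0, p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.1, 0.0]), np.array([0.1, 0.0, 3.0])

# Orthonormal basis of the wall plane: u along p0->p1, n normal to the plane, v completes it.
u = (p1 - p0) / np.linalg.norm(p1 - p0)
n = np.cross(p1 - p0, p2 - p0)
n /= np.linalg.norm(n)
v = np.cross(n, u)

def to_pixel(points_xyz):
    """Project 3D points onto the wall plane and convert them to (column, row) pixel indices."""
    d = np.atleast_2d(points_xyz) - p0
    cols = np.floor((d @ u) / PIXEL_SIZE).astype(int)
    rows = np.floor((d @ v) / PIXEL_SIZE).astype(int)
    return cols, rows

print(to_pixel([[1.234, 0.05, 2.001], [0.5, 0.02, 0.75]]))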
Discussion

The obtained results show the very good performance of the SBAS-InSAR technique in monitoring mass movements. The extended spatial coverage of the SBAS results allows the kinematics of the landslides to be identified and the most active sectors to be detected. In addition, this technique provides low-noise displacement time series, which allow the temporal evolution of the movements to be estimated and the triggering factors to be explored. For these reasons, although the SBAS technique is time-consuming from the computational viewpoint and in terms of operator intervention, it should be preferred to PS-InSAR-based techniques, which estimate deformations affecting more limited areas. PS techniques can be effectively used for analyzing highly urbanized unstable slopes, as shown in previous studies. The interferometry results allowed the role of rainfall as a triggering factor of the mass movements affecting the Rovegliana slope to be confirmed. The comparison of the displacement time series derived from the SBAS processing with the rainfall pattern shows that, in the monitoring period, the instabilities were activated during the wet seasons (spring and autumn). An oscillation of the displacements was measured, with upward movements that can be correlated to spring rainfall events occurring after the dry winter season. The full comprehension of this phenomenon requires a detailed geotechnical investigation and modeling.
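One simple way to reproduce this kind of comparison, sketched below with invented numbers (the real inputs are the SBAS time series and the local rain gauge records), is to difference the cumulative displacement over each season and set the result against the corresponding rainfall total.

import numpy as np

# Invented example: cumulative vertical displacement (mm) at the season boundaries and
# seasonal rainfall totals (mm); real values come from the SBAS time series and rain gauges.
seasons = ["winter", "spring", "summer", "autumn"]
cum_disp_mm = np.array([0.0, -0.5, -4.0, -4.5, -9.0])   # one value per season boundary
rainfall_mm = np.array([150.0, 420.0, 180.0, 510.0])

seasonal_disp = np.diff(cum_disp_mm)                     # displacement accumulated in each season
corr = np.corrcoef(rainfall_mm, np.abs(seasonal_disp))[0, 1]

for s, disp, rain in zip(seasons, seasonal_disp, rainfall_mm):
    print(f"{s}: {disp:+.1f} mm of displacement, {rain:.0f} mm of rain")
print(f"correlation between rainfall and displacement magnitude: {corr:.2f}")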
Integrating remote sensing methods with geotechnical surveys and with in situ monitoring of subsurface movements and pore pressure variations would help in defining the relationships between rainfall and displacements and in implementing effective mitigation measures. However, as observed in this study and in numerous previous surveys, MT-InSAR represents a useful tool for the preliminary analysis of rainfall-induced landslides.

The velocity maps and the displacement time series derived from the interferometry allowed the influence of the mass movements on the stability of the anthropic structures in the study area to be investigated. It was observed that the displacements measured at the locations of the building in Cappellazzi and of Rovegliana Church are directly correlated with the activation of the instability phenomena affecting these sectors. These phenomena cause the displacement of blocks of buildings, but it is not possible to correlate the landslide activations with the damage suffered by a single structure, due to the resolution of the Sentinel-1 SAR images. As shown by the overall results, this correlation becomes possible by integrating the interferometric data with ground-based surveys (i.e., GNSS, TLS and SfM).

The GNSS data provided the 3D deformation vector at each GNSS point, with an accuracy in the order of 10 mm. The GNSS measurements, in agreement with the MT-InSAR results, provide horizontal displacement vectors directed along the maximum slope (Figure 9). The 2 NPS points used for monitoring the deformations in the Cappellazzi district and for georeferencing the topographic, TLS and photogrammetric acquisitions (1100 and 1200 of Table 1) did not show any relative displacement with respect to each other: in fact, even if in the 3 survey campaigns the two GNSS vectors show relative differences of 8-9 mm, the direct measurements of the distance performed with the total station provided differences of 1 mm, a value smaller than the precision of the instrument. The same consideration can be extended to the area of Rovegliana Church: the two GNSS displacement vectors (relative to the NPS points 2100 and 2200 of Table 1) show relative movements of 6-7 mm, but the direct measurements of the distance performed in the three surveys provided differences of 2 mm. For this reason, no differential displacements were recognized in the analyzed period, and the relative differences between the GNSS displacement vectors are due to the precision of the methodology, which in this study can be assumed to be in the order of 1 cm at most, i.e., the expected value.

The comparison between the coordinates of the artificial and natural topographic points located in different portions of the two buildings and separated by significant cracks does not show relative displacements, indicating that the movements detected by the GNSS and InSAR surveys involved the whole structures rigidly. This is confirmed by the multi-temporal TLS scans, which did not detect any significant change in the crack pattern, as shown in Figure 11, where the rigid and constant movement of the masonry elements along the maximum slope is visible. Likewise, the comparison between the images acquired in the first and last SfM surveys does not show any progression of the damage affecting the two structures (Figure 12). This comparison was applied to the SfM models because, although TLS data are widely used for identifying the deterioration of buildings, in this case they are not reliable for the identification and monitoring of crack patterns, due to the insufficient resolution of the texture in the point clouds.
The TLS technique is very useful for detecting localized or distributed movements, but the displacements have to be larger than 10-15 mm to avoid errors due to the instrument and the georeferencing process. The findings of this study show that the mass movements observed by interferometry during the period October 2018-October 2019 (Figure 7) caused a rigid movement of the monitored buildings, since no differential deformations of the structures were detected by the topographic, TLS and SfM surveys. Therefore, it can be argued that slope instabilities cause the displacement of blocks of buildings (Figure 7a,e), but there is no direct relationship between landslide activity and the temporal evolution of damage: each building reacts at a different time according to its structural characteristics. These results can help in the assessment of landslide risk, but an in-depth geotechnical analysis of the landslides and of the structural properties of the buildings would help to better correlate the slope dynamics with the damage suffered by the anthropic structures, in order to implement the best mitigation strategies.

Conclusions

In this paper, we have proposed a methodology to investigate the relationships between mass movements and damage to human structures. The methodology is based on an effective integration of Sentinel-1 interferometry and ground-based geomatics techniques, such as global navigation satellite system (GNSS), terrestrial laser scanning (TLS) and structure from motion (SfM), supported by classical topographic measurements. The methodology was applied to the unstable slope of Rovegliana village (north-eastern Italian pre-Alps) to monitor the stability of anthropic structures after the activation of the mass movements. The SBAS-InSAR technique was used to process Sentinel-1 A/B SAR data acquired in the period 2014-2019. This technique performed very well, providing a large coverage of interferometric data over the study area and very low-noise displacement time series, which made it possible to identify the most unstable sectors of the slope and the kinematic evolution of the landslides. However, due to the resolution of the SAR images, monitoring the effects of mass movements on individual structures is not possible. To this end, two damaged buildings were monitored through topographic, TLS and SfM surveys performed in October 2018, June 2019 and October 2019. In the same periods, GNSS measurements were performed on eight non-permanent points inside the unstable area to verify the interferometry results and to provide a reference system for the ground-based surveys. The obtained results show that the locations of the buildings investigated in detail and their surrounding areas were affected by mass movements in the period October 2018-October 2019, but the structures did not suffer any further damage. This means that landslides cause the displacement of blocks of buildings and that each of them reacts at a different time depending on its structural properties. The methodology and the findings of this study can help in landslide risk prevention. In the case of the Rovegliana area, it is evident that mitigation measures have to be applied to ensure the global stability of the slope through structural and non-structural (monitoring) interventions. In the near future, we will continue the space-borne and ground-based surveys to deepen the cause (landslide)-effect (damage to buildings) relationship. We will consider processing high-resolution SAR data (i.e., COSMO-SkyMed data) to fill the gap left by Sentinel-1 in monitoring anthropic structures.
Given the high revisit frequency of COSMO-SkyMed and Sentinel acquisitions, we expect that combining the results from multi-temporal differential SAR interferometry processing of different types of SAR data will allow near real-time monitoring of the mass movements and of the elements at risk. Moreover, we will test the proposed methodology in other geological and geomorphological contexts, such as alluvial and coastal plains, which are among the most populated areas in the world and are affected by subsidence phenomena that can threaten the stability of anthropic structures and infrastructures.
Is Chytridiomycosis Driving Darwin's Frogs to Extinction?

Darwin's frogs (Rhinoderma darwinii and R. rufum) are two species of mouth-brooding frogs from Chile and Argentina that have experienced marked population declines. Rhinoderma rufum has not been found in the wild since 1980. We investigated historical and current evidence of Batrachochytrium dendrobatidis (Bd) infection in Rhinoderma spp. to determine whether chytridiomycosis is implicated in the population declines of these species. Archived and live specimens of Rhinoderma spp., sympatric amphibians and amphibians at sites where Rhinoderma sp. had recently gone extinct were examined for Bd infection using quantitative real-time PCR. Six (0.9%) of 662 archived anurans tested positive for Bd (4/289 R. darwinii; 1/266 R. rufum and 1/107 other anurans), all of which had been collected between 1970 and 1978. An overall Bd-infection prevalence of 12.5% was obtained from 797 swabs taken from 369 extant individuals of R. darwinii and 428 individuals representing 18 other species of anurans found at sites with current and recent presence of the two Rhinoderma species. In extant R. darwinii, Bd-infection prevalence (1.9%) was significantly lower than that found in other anurans (7.3%). The prevalence of infection (30%) in other amphibian species was significantly higher in sites where either Rhinoderma spp. had become extinct or was experiencing severe population declines than in sites where there had been no apparent decline (3.0%; χ² = 106.407, P < 0.001). This is the first report of widespread Bd presence in Chile, and our results are consistent with Rhinoderma spp. declines being due to Bd infection, although additional field and laboratory investigations are required to explore this further.

Introduction

There are two species of Darwin's frogs, both of which inhabit the temperate forests of South America: the northern Darwin's frog (Rhinoderma rufum), which is endemic to central Chile, and the southern Darwin's frog (R. darwinii), which is found in south and southern Chile and also in adjacent areas of Argentina. The behaviour that sets these frogs apart from all other amphibians is that the males care for their young by incubating them in their vocal sacs for at least part of their development, a process known as neomelia. In recent decades, both species have undergone marked population declines, and R. rufum has not been recorded since 1980. The reasons for these apparent disappearances remain poorly understood. Throughout the historical distribution of R. rufum, and within the northern range of R. darwinii, there has been extensive habitat degradation, due to the large-scale replacement of native forest with pine (Pinus radiata) and eucalyptus (Eucalyptus globulus) plantations, and land use change to agriculture. Habitat loss, however, does not fully explain the enigmatic disappearances of R. rufum from its entire historical range or the declines of R. darwinii from undisturbed ecosystems, including National Parks. In this context, it has been hypothesised that amphibian chytridiomycosis, an infectious disease caused by the non-hyphal zoosporic chytrid fungus Batrachochytrium dendrobatidis (Bd), might be implicated in the disappearances of Darwin's frogs. Amphibian chytridiomycosis, a recently-described emerging disease of amphibians, has been associated with amphibian epizootic mass mortalities, population declines and global extinctions in different regions of the world.
Different genotypes of the fungus have been described, with the most virulent being a recombinant lineage, termed the global panzootic lineage (BdGPL). Recently, Bd whole-genome sequencing has demonstrated a higher genetic differentiation than previously recognised (including within BdGPL) and a complex evolutionary history that predates contemporary amphibian declines. This highly pathogenic and readily transmissible pathogen appears to be capable of infecting an entire class of organisms (the Amphibia), with devastating effects. It has been described as "the worst infectious disease ever recorded among vertebrates in terms of the number of species impacted and its propensity to drive them to extinction". In 2007, chytridiomycosis was identified as the cause of death of a group of 30 wild-caught R. darwinii exported to Germany for captive breeding. Infection with Bd has been reported in populations of the invasive African clawed frog, Xenopus laevis, in central Chile. Additionally, Bourke et al. recently described Bd infection in R. darwinii and two other native frog species in the south of the country. The impacts of this emerging disease on amphibian populations in Chile, including Darwin's frogs, however, have not been studied. Here, we investigate whether amphibian chytridiomycosis is implicated in the population declines of Darwin's frogs. We looked for evidence of historical Bd infection in Rhinoderma spp. and in amphibians at current and former Rhinoderma sp. sites before and after the onset of the declines. Also, we determined how widespread Bd infection is, both in contemporary populations of R. darwinii across its current range and in other anuran species at sites of Rhinoderma spp. population decline or recent extinction.

Ethics statement

This study was carried out in strict accordance with the recommendations in the guidelines for use of live amphibians and reptiles in field research compiled by the American Society of Ichthyologists and Herpetologists (ASIH). Research was approved by the ZSL Ethics Committee and was conducted following Chilean and Argentinian wildlife regulations and according to permits 1241/08, 7377/09, 7993/10 and 300/12 of the Livestock and Agriculture Service (SAG) and 20/09, XI-01/09, 28/11 and X-03/11 of the National Forestry Corporation (CONAF), both in Chile, and permit 1119/11 of the National Parks Administration (APN) in Argentina. Archived amphibians were examined in their museum of origin by the authors, with specific permission given by all five zoological institutions.

Study area

Archived amphibian specimens from museum collections in Europe and Chile were examined for evidence of Bd infection. Also, extensive surveys for Bd infection throughout the historical ranges of R. rufum and R. darwinii were conducted from October 2008 to March 2012. These ranges extended from Zapallar (32° 33′ 03″ S, 71° 26′ 37″ W) to Aysén (45° 24′ 24″ S, 72° 41′ 52″ W) in Chile, and included adjacent areas in the Andes in the Neuquén and Río Negro Provinces in Argentina (Figure 1).

Living anurans

Cross-sectional studies were carried out at sites where R. darwinii was extant and at sites where Rhinoderma spp. had recently (since 1966) become extinct. Sites were delimited, and a search effort of one hour by two researchers was conducted during daylight hours using a standardised methodology, as previously described.

Sampling

Archived anurans.
The skin of the ventral pelvis and ventral hind limbs of each amphibian museum specimen was sampled by brushing with a tapered inter-dental brush (3.2 to 6.0 mm; Oral B Laboratories), following Soto-Azat et al. Where multiple specimens were held in a single jar, they were rinsed with running tap water prior to sampling to remove possible surface contamination with Bd. Each specimen was handled using a new pair of disposable nitrile or latex gloves.

Live anurans.

Only post-metamorphic and adult anurans were sampled. Frogs were captured by hand, safely contained in individual sealed plastic bags and returned to the exact place of capture immediately after the capture session. Each individual was handled with the use of clean disposable nitrile gloves. A sterile, dry, rayon-tipped swab (MW100, Medical & Wire Equipment Co.) was firmly run five times each over the ventral abdomen and pelvis, each ventral hind limb (femur and tibia) and the plantar surface of each hind foot, to complete a total of 35 strokes. Dorsal and ventral pattern photographs were taken of each Darwin's frog sampled, for identification purposes. In order to minimise any Bd contamination of samples or the spread of pathogens within or between study sites by researchers, equipment or materials, a strict field sampling and disinfection protocol was followed, according to that recommended by the Amphibian and Reptile Groups, UK: ARG Advice Note 4 (http://www.arguk.org/advice-andguidance/view-category). All samples were stored at -80 °C until processed.

Diagnostic analysis

Post sampling, whole interdental brushes and swab tips were deposited separately in 1.5 ml Eppendorf tubes containing 50 and 60 µl, respectively, of PrepMan Ultra (Applied Biosystems) and between 30 and 40 mg of zirconium/silica beads of 0.5 mm diameter (Biospec Products). For each sample, DNA was extracted following the protocol of Boyle et al. Extracted DNA was diluted (1:10) in double-distilled water and analysed using a quantitative real-time polymerase chain reaction (qPCR) Taqman assay with primers specific for the ITS-1/5.8S ribosomal DNA region of Bd. In addition, bovine serum albumin (BSA) was included in the Taqman mastermix to minimise inhibition of the PCR. For each sample, diagnostic assays were performed in duplicate, and standards of known zoospore concentration were included within each PCR plate, as were negative controls. A result was considered positive when amplification (i.e., a clearly sigmoid curve) occurred in both replicated PCR assays, values higher than 0.1 genomic equivalents (GE) were obtained from both replicated reactions, and the average GE of the two replicates was higher than its standard deviation. Extracted DNA from any positive sample was re-tested in duplicate and only determined to be positive for the purposes of this study if Bd DNA was clearly amplified in duplicate wells for a second time.
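The duplicate-well positivity rule described in the diagnostic analysis above lends itself to a simple check. The sketch below is one possible interpretation of that rule, for illustration only; it is not the software used in the study, and the example GE values are hypothetical.

# Illustrative sketch of the duplicate-well positivity rule described above
# (amplification in both replicates, both > 0.1 genomic equivalents, and the
# replicate mean greater than the replicate standard deviation). This is an
# interpretation for illustration, not the authors' qPCR software output.
import statistics

GE_THRESHOLD = 0.1  # genomic equivalents per reaction

def duplicate_wells_positive(ge_rep1, ge_rep2, amplified1, amplified2):
    """True if a duplicate qPCR run satisfies all three positivity criteria."""
    if not (amplified1 and amplified2):              # sigmoid amplification in both wells
        return False
    if ge_rep1 <= GE_THRESHOLD or ge_rep2 <= GE_THRESHOLD:
        return False
    mean_ge = statistics.mean([ge_rep1, ge_rep2])
    sd_ge = statistics.stdev([ge_rep1, ge_rep2])
    return mean_ge > sd_ge

def sample_is_positive(first_run, confirmation_run):
    """A sample counts as Bd-positive only if both the first run and the re-test pass."""
    return duplicate_wells_positive(*first_run) and duplicate_wells_positive(*confirmation_run)

# Hypothetical example: an initially positive sample confirmed on re-testing.
first = (12.4, 9.8, True, True)
retest = (10.1, 11.6, True, True)
print(sample_is_positive(first, retest))   # True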
Data analysis

Areas with historical and current presence of Rhinoderma spp. more than 2 km from each other were considered to be separate sites or populations. Statistical analyses were performed using SPSS (v. 20.0) to detect any significant difference between: 1) Bd prevalence and time in archived R. darwinii (using Fisher's exact test for small sample sizes), 2) Bd prevalence in extant R. darwinii and sympatric amphibians (using the chi-squared test), 3) Bd intensity in extant R. darwinii and all other amphibian species tested (using the Mann-Whitney U-test), and 4) Bd prevalence in extant R. darwinii at sites with and without evidence of recent Rhinoderma spp. population decline (using the chi-squared test). Data on Rhinoderma spp. abundance are scarce. To classify a population as having evidence of recent decline, we used data from a previous study, which investigated population sizes and the extent of declines in Darwin's frogs. Briefly, populations categorised as having declined comprised those known to have disappeared since 1966, or (in one case) known to have undergone a recent marked population decline. A relationship between Bd prevalence at sites with historical and current Rhinoderma spp. populations and latitude was also tested using a simple linear regression model.

Batrachochytrium dendrobatidis in living amphibians

We found Bd to be widespread in central-south Chile, from the region of Valparaíso to the region of Aysén, and also present in Argentina (Figure 1), spanning a distance of 1,305 km, with an estimated overall infection prevalence of 12.5%, varying by site from 0 to 69.4%. The prevalence of Bd infection varied amongst species, from 0 to 100% of individuals tested, although sample sizes for many species were small and distributed across multiple sites (Table 3). Of the 369 R. darwinii tested, seven frogs from four different populations were positive for Bd (Table 4). Overall, the Bd prevalence in R. darwinii (1.9%) was significantly lower than that in the sympatric amphibians tested (n = 109, 7.3%; χ² = 8.200; Table 5). Although R. darwinii had the highest infection intensities (median: 127.1; range: 6.7-27,059.1 GE) when compared with all other infected species (13.9; 0.1-24,481.0 GE), the difference was not statistically significant (Mann-Whitney U-test; U = 188.0, P = 0.063). Of particular interest were two R. darwinii in which GE counts over 1,000 were detected. Both frogs belonged to the northernmost known populations. Of these, one individual (NATRE74/12; 1,020 GE) was found dead at the capture site, and subsequent histopathological examination revealed chytridiomycosis as the cause of death (Figure 2). The prevalence of Bd infection was significantly higher at sites with either Rhinoderma spp. extinction or severe population decline (30.0%) than at sites with no apparent Rhinoderma spp. declines (3.0%; χ² = 106.407, P < 0.001). Additionally, when Bd prevalence by site and geographical location were analysed, a linear regression revealed an inverse relationship between Bd prevalence and latitude (R² = 0.405, P < 0.001; Figure 1).

Discussion

Museum amphibian specimens have been increasingly recognised as a valuable source of information for retrospective epidemiological studies. Using such specimens, we demonstrated historical evidence of Bd infection in three species of native frogs from southern Chile (R. darwinii, R. rufum and P. thaul). Although we examined similar numbers of frogs that had been collected prior to 1970 and post-1970, all six Bd-positive archived amphibians were collected from 1970 to 1978 inclusive: a time coincident with the onset of the global amphibian population decline phenomenon, including the disappearance of R. rufum, and the occurrence of the first amphibian global extinctions subsequently associated with Bd. The only Bd-positive R. rufum was an individual kept in a jar with 179 other R. rufum specimens, all of which had been collected from Chiguayante (Biobío Region, near Concepción) during a two-day collection session in December 1975.
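For illustration, the two statistical analyses reported above (the chi-squared comparison of Bd prevalence between site groups and the linear regression of prevalence on latitude) could be reproduced along the following lines. The counts and per-site values below are hypothetical placeholders, not the study data, and SciPy is used here in place of SPSS.

# Illustrative sketch of the two analyses reported above: a chi-squared test of
# Bd prevalence between site groups and a linear regression of prevalence on
# latitude. The counts and site values are hypothetical placeholders.
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: [Bd-positive, Bd-negative] for declined vs. non-declined sites.
contingency = np.array([
    [60, 140],   # sites with Rhinoderma extinction / severe decline
    [12, 388],   # sites with no apparent decline
])
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.3f}, dof = {dof}, P = {p:.3g}")

# Hypothetical per-site prevalence (%) against latitude (degrees south).
latitude_s = np.array([33.0, 35.5, 37.2, 39.1, 40.8, 42.5, 44.0, 45.4])
prevalence = np.array([55.0, 40.0, 28.0, 20.0, 12.0, 6.0, 3.0, 1.0])
fit = stats.linregress(latitude_s, prevalence)
print(f"R^2 = {fit.rvalue**2:.3f}, slope = {fit.slope:.2f} %/degree, P = {fit.pvalue:.3g}")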
As the fixation history of the examined archived amphibians is not known, the overall infection prevalence (0.9%) and intensity of infection (GE values 0.1-20.6) obtained are likely an underestimation of the true situation. For example, although all of the archived specimens examined were preserved in alcohol, it is highly possible that many had been initially fixed in formalin, a chemical known to degrade DNA, reducing the likelihood of Bd detection. Also, the fixative, IMS, can inhibit PCR. A previous study, however, was successful in detecting Bd DNA from the skin of amphibian specimens fixed in IMS, and in the current study we incorporated BSA into the PCR protocol to minimise the effect of any PCR inhibitors present. Our field surveys failed to detect R. rufum. Infection with Bd was found in extant R. darwinii, but at a lower prevalence (1.9%) than in the other sympatric amphibian species tested (7.3%), possibly as a consequence of different habitat use by the studied species (e.g., dependence on water for breeding). If highly susceptible to chytridiomycosis, however, it is possible that R. darwinii die soon after infection. This would also result in a low infection prevalence and might explain the disappearance of Rhinoderma spp. from many of the sites where Bd was found, especially if other amphibians act as reservoirs of infection, as might be predicted from their higher Bd prevalences. Amphibian chytridiomycosis is thought to have caused 100% mortality of 30 wild-caught R. darwinii exported to Germany in 2007. According to these authors, travel stress and lack of isolation between individuals during transportation might have contributed to this high mortality rate. In the current study, two of seven Bd-positive wild R. darwinii had infection loads >1,000 GE, including an individual found dead with chytridiomycosis. Disease and mortality caused by chytridiomycosis have been associated with infections higher than 1,000 GE in experimentally infected green tree frogs (Litoria caerulea). Experimental Bd infection trials in R. darwinii, similar to those performed with the Critically Endangered New Zealand Archey's frog (Leiopelma archeyi) and with the Panamanian golden frog (Atelopus zeteki), should be considered to further investigate the susceptibility of R. darwinii to chytridiomycosis. As the outcomes of Bd infection are often highly context-specific, experimental infection studies using R. darwinii under different hydric environments could help to infer the likely effects of Bd infection on R. darwinii under different climate and land-use change scenarios. In a declining species like R. darwinii, however, promoting the survival of the species has to take priority: the use of animals in experiments should be internationally justifiable, and only surplus captive-bred animals not suitable for conservation programmes should be used. Batrachochytrium dendrobatidis is a waterborne pathogen, and stream-living has been identified as a risk factor for Bd-associated declines. Rhinoderma darwinii has evolved an extreme form of parental care in which the species does not depend on water bodies for tadpole development. In contrast, while R. rufum tadpoles spend their first two weeks of development in the vocal sacs of their male parents, they are then released into water as larvae, where they live for approximately the next 120 days until metamorphosis takes place. This association of R.
rufum with streams in central Chile could render this species even more susceptible to population declines and extinction due to chytridiomycosis. Although it was detected in only a single archived specimen, evidence of Bd infection was found in possibly the largest known R. rufum population five years before the species was last recorded. This, along with a positive association between Bd prevalence and Rhinoderma spp. population extinction/decline, suggests a possible association between chytridiomycosis and the disappearance of R. rufum. We detected an inverse relationship between Bd prevalence and latitude, similar to that found by Kriger et al. in the stony creek frog (Litoria lesueuri) in eastern Australia. Whether this is a reflection of the historical introduction and spread of Bd in Chile, with the organism not yet having reached the south of the country, or whether it is due to environmental factors (e.g., temperature), is not yet clear. Longitudinal sampling of sites across the gradient would help to answer this question. That such a gradient exists, however, indicates that northern populations of R. darwinii are likely to be under a greater threat from chytridiomycosis than those in the south. It also suggests that the instigation of biosecurity measures might decrease the rate of spread of the disease to the southern populations of R. darwinii (assuming that Bd has not already reached this region and is simply less readily detected there because low temperatures limit its growth). It is not known whether the Bd detected in the archived or extant specimens in the current study is the hypervirulent BdGPL, a BdGPL hybrid, or perhaps an endemic lineage (or lineages) of the fungus. If BdGPL is present in Chile, its spread to the country might have occurred via the introduction of X. laevis. Feral populations of this invasive species, which have been established in central Chile since the 1970s, are known to be Bd-positive, although other mechanisms of pathogen introduction cannot be excluded.

Conclusions

This is the first report of widespread Bd presence in Chile, and our results provide evidence of an association between the presence of Bd and mortality in wild R. darwinii. Although assessing the role of pathogens in extinctions remains problematic and infectious diseases are probably an underestimated cause of biodiversity loss, retrospective and prospective epidemiological data provide evidence that Bd infection is probably implicated in the enigmatic disappearance of R. rufum and the declines of R. darwinii, particularly from the northern part of their historical range. Nevertheless, further studies, such as the isolation and DNA sequencing of Bd in Chile, are required to further investigate the possible role of Bd in Rhinoderma spp. declines.
/*
 * Copyright (c) 2016-2020 VMware, Inc. All Rights Reserved.
 * This software is released under MIT license.
 * The full license information can be found in LICENSE in the root directory of this project.
 */
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

// Root Angular module: registers the top-level component, pulls in browser
// support and the application's routing module, and bootstraps AppComponent.
@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, AppRoutingModule],
  providers: [],
  bootstrap: [AppComponent],
})
export class AppModule {}
/*
 * ---------------------------------------------------------------------
 * get_bkpt_ins
 *      Returns the operation code of the breakpoint instruction.
 *      Needed by the debugger to insert breakpoints.
 * ---------------------------------------------------------------------
 */
static void
debugger_get_bkpt_ins(void *clientData, uint8_t *ins, uint64_t addr, int len)
{
	if (len == 2) {
		/* 16-bit Thumb BKPT #0 instruction. */
		uint16_t value = 0xbe00;
		if (MMU_Byteorder() == BYTE_ORDER_BIG) {
			BYTE_WriteToBe16(ins, 0, value);
		} else {
			BYTE_WriteToLe16(ins, 0, value);
		}
	} else if (len == 4) {
		/* 32-bit ARM BKPT #0 instruction. */
		uint32_t value = 0xe1200070;
		if (MMU_Byteorder() == BYTE_ORDER_BIG) {
			BYTE_WriteToBe32(ins, 0, value);
		} else {
			BYTE_WriteToLe32(ins, 0, value);
		}
	}
}
Mumbai: ICICI Bank on Wednesday announced that it has successfully executed transactions in international trade finance and remittances using blockchain technology in partnership with Emirates NBD. "ICICI Bank is the first bank in the country and among the first few globally to exchange and authenticate remittance transaction messages as well as original international trade documents related to purchase order, invoice, shipping and insurance, among others, electronically on blockchain in real time," the bank said in a statement. The use of blockchain technology simplifies the process and makes it almost instant, taking only a few minutes. This is in contrast to the current process, which involves a complex and lengthy paper trail that requires international shipping and courier services, it said. "ICICI Bank executed these pilot transactions via its blockchain network with Emirates NBD on a custom-made blockchain application, co-created with EdgeVerve Systems, a wholly-owned subsidiary of Infosys," the release said. The blockchain application replicates the paper-intensive international trade finance process as an electronic decentralised ledger that gives all the participating entities, including banks, the ability to access a single source of information. This enables all the parties, viz., the importer in Mumbai; ICICI Bank, Mumbai; the exporter in Dubai; and Emirates NBD, Dubai, to view the data in real time, the bank said. It also enables them to track documentation and authenticate ownership of assets digitally, as an unalterable ledger, in real time. Chanda Kochhar, managing director and chief executive officer of ICICI Bank, said: "I envision that the emerging technology of blockchain will play a significant role in banking in the coming years by making complex bilateral and multi-lateral banking transactions seamless, quick and more secure." She added that, going forward, the bank also intends to work on expanding the blockchain ecosystem and creating common working standards to contribute to the commercial adoption of this initiative. The pilot transaction was executed to showcase confirmation of the import of shredded steel melting scrap by a Mumbai-based export-import firm from a Dubai-based supplier, the bank said. The second initiative involved a transaction on the blockchain application that enabled an ICICI Bank branch in Mumbai to remit funds to an Emirates NBD branch in Dubai in real time.
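As a purely illustrative sketch of the idea described in the article, an append-only shared ledger in which every participant sees the same trade documents and can verify that none have been altered might look like the following. This is not the ICICI Bank/EdgeVerve application; all class names, fields and values are hypothetical.

# Purely illustrative sketch of the idea described above: an append-only shared
# ledger in which every party (importer, exporter, and their banks) sees the same
# trade documents and can verify that none have been altered. Not the actual
# ICICI Bank / EdgeVerve application; all names and fields are hypothetical.
import hashlib
import json
import time

class TradeFinanceLedger:
    def __init__(self):
        self.entries = []  # each entry links to the previous one via its hash

    def _hash(self, payload: dict, prev_hash: str) -> str:
        data = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
        return hashlib.sha256(data.encode()).hexdigest()

    def append(self, doc_type: str, submitted_by: str, document: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = {
            "doc_type": doc_type,          # e.g. purchase order, invoice, shipping document
            "submitted_by": submitted_by,  # which participant added the record
            "document": document,
            "timestamp": time.time(),
        }
        entry = {"payload": payload, "prev": prev_hash, "hash": self._hash(payload, prev_hash)}
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute the hash chain; tampering with any earlier entry breaks it."""
        prev = "genesis"
        for e in self.entries:
            if e["prev"] != prev or self._hash(e["payload"], e["prev"]) != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Hypothetical flow: importer's bank and exporter's bank share one ledger view.
ledger = TradeFinanceLedger()
ledger.append("purchase_order", "importer_mumbai", {"goods": "shredded steel melting scrap", "qty_t": 500})
ledger.append("invoice", "exporter_dubai", {"amount_usd": 120000})
print(ledger.verify())  # True while no entry has been altered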
Method for tissue clearing: temporal tissue optical clearing. Light absorption and scattering in biological tissue are significant variables in optical imaging technologies, and regulating them enhances optical imaging quality. Optical clearing methods can decrease light scattering and improve optical imaging quality to some extent, but owing to their limited efficacy and the potential influence of optical clearing agents on tissue function, complementary approaches must be investigated. In this paper, a new optical clearing strategy, referred to as time-dependent or temporal tissue optical clearing (TTOC), is described. In the TTOC technique, the absorption and scattering in the light-tissue interaction are regulated by altering the pulse width. Here, the dependence of the optical properties of matter on the pulse width was investigated experimentally in a gelatin-based phantom. Then, a semi-classical model was introduced for the computational study of ultra-short laser/matter interaction. After studying the phantom, the absorption and scattering probabilities in the interaction of the pulse with modeled human skin tissue were investigated using the proposed model for pulse widths ranging from 1 s to 10 fs. The propagation of the pulse through the skin tissue was simulated using the Monte Carlo technique by computing the pulse-width-dependent optical properties (absorption coefficient μa, scattering coefficient μs, and anisotropy factor g). Finally, the penetration depth of light into the tissue and the reflectance for different pulse widths were determined.
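As a generic illustration of the Monte Carlo step mentioned above, the sketch below propagates photon packets through a semi-infinite tissue slab using Beer-Lambert step sampling and Henyey-Greenstein scattering, with the absorption coefficient, scattering coefficient and anisotropy factor supplied as inputs. It is a textbook-style scheme under assumed placeholder coefficients, not the authors' simulation code.

# Minimal sketch of Monte Carlo photon transport in a semi-infinite tissue slab,
# illustrating how pulse-width-dependent optical properties (mu_a, mu_s, g) could
# feed such a simulation. Generic textbook-style scheme; coefficient values are
# placeholders, not measured data.
import math
import random

random.seed(1)

def henyey_greenstein_cos(g):
    """Sample the cosine of the scattering angle from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * random.random() - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * random.random())
    return (1.0 + g * g - frac * frac) / (2.0 * g)

def simulate(mu_a, mu_s, g, n_photons=3000, w_min=1e-3):
    """Return (diffuse reflectance, mean depth of absorption) for a slab occupying z >= 0."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected, absorbed, absorbed_depth = 0.0, 0.0, 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0               # launch at the surface, heading into the tissue
        while w > w_min:
            step = -math.log(1.0 - random.random()) / mu_t   # free path from Beer-Lambert statistics
            z += step * uz
            if z < 0.0:                         # photon escaped back through the surface
                reflected += w
                break
            absorbed += w * (1.0 - albedo)      # deposit part of the weight at each interaction
            absorbed_depth += w * (1.0 - albedo) * z
            w *= albedo
            cos_t = henyey_greenstein_cos(g)    # redirect the photon (only the z-component is tracked)
            sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            uz = uz * cos_t + math.sqrt(max(0.0, 1.0 - uz * uz)) * sin_t * math.cos(2.0 * math.pi * random.random())
    return reflected / n_photons, absorbed_depth / max(absorbed, 1e-12)

# Placeholder optical properties (mm^-1) standing in for two different pulse widths.
for label, mu_a, mu_s, g in [("longer pulse", 0.05, 2.0, 0.9), ("shorter pulse", 0.15, 1.5, 0.8)]:
    refl, depth = simulate(mu_a, mu_s, g)
    print(f"{label}: diffuse reflectance ~ {refl:.3f}, mean absorption depth ~ {depth:.2f} mm")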