Dataset schema (column: type, with observed min/max string lengths):
content: string (86 to 88.9k characters)
title: string (0 to 150 characters)
question: string (1 to 35.8k characters)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (30 to 130 characters)
Q: Module not found: Can't resolve 'graphql'? My Next.js app worked fine yesterday but today it has an error like this: error - ./node_modules/@urql/core/dist/ddbb86ae.mjs:1:0 Module not found: Can't resolve 'graphql' Import trace for requested module: ./node_modules/@urql/core/dist/urql-core.mjs ./node_modules/urql/dist/urql.es.js ./pages/_app.js https://nextjs.org/docs/messages/module-not-found I have no idea what happened so I git reset --hard but the problem is still there. Please help me fix it. I appreciate it. _app.js: import { StateContext } from "../lib/context"; import { Provider, createClient } from "urql"; const client = createClient({ url: "http://localhost:1337/graphql" }); function MyApp({ Component, pageProps }) { return ( <StateContext> <Provider value={client}> <Nav /> <Component {...pageProps} /> </Provider> </StateContext> ); } export default MyApp; A: One possible solution to this issue is to check if the graphql package is installed in your project by running npm list graphql or yarn list graphql. If the package is not installed, you will have to install it. Alternatively, you can try to update all the dependencies in your project by running npm update or yarn upgrade. This might fix any potential conflicts or issues with the dependencies in your project. If the issue persists, you can try to clear the cache by running npm cache clean --force or yarn cache clean and then try rebuilding the project by running npm run build or yarn build. This might fix any potential issues with the build process. It is also possible that the issue is caused by a conflict with the versions of the dependencies in your project. In this case, you can try to use the npm-check-updates or yarn upgrade-interactive commands to check for and update the outdated dependencies in your project. Finally, if none of the above solutions work, you can try to create a new Next.js project and move your code to the new project. This might fix any potential issues with the project setup or configuration.
Module not found: Can't resolve 'graphql'?
My Next.js app worked fine yesterday but today it has an error like this: error - ./node_modules/@urql/core/dist/ddbb86ae.mjs:1:0 Module not found: Can't resolve 'graphql' Import trace for requested module: ./node_modules/@urql/core/dist/urql-core.mjs ./node_modules/urql/dist/urql.es.js ./pages/_app.js https://nextjs.org/docs/messages/module-not-found I have no idea what happened so I git reset --hard but the problem is still there. Please help me fix it. I appreciate it. _app.js: import { StateContext } from "../lib/context"; import { Provider, createClient } from "urql"; const client = createClient({ url: "http://localhost:1337/graphql" }); function MyApp({ Component, pageProps }) { return ( <StateContext> <Provider value={client}> <Nav /> <Component {...pageProps} /> </Provider> </StateContext> ); } export default MyApp;
[ "One possible solution to this issue is to check if the graphql package is installed in your project by running npm list graphql or yarn list graphql. If the package is not installed, you will have to install it.\nAlternatively, you can try to update all the dependencies in your project by running npm update or yarn upgrade. This might fix any potential conflicts or issues with the dependencies in your project.\nIf the issue persists, you can try to clear the cache by running npm cache clean --force or yarn cache clean and then try rebuilding the project by running npm run build or yarn build. This might fix any potential issues with the build process.\nIt is also possible that the issue is caused by a conflict with the versions of the dependencies in your project. In this case, you can try to use the npm-check-updates or yarn upgrade-interactive commands to check for and update the outdated dependencies in your project.\nFinally, if none of the above solutions work, you can try to create a new Next.js project and move your code to the new project. This might fix any potential issues with the project setup or configuration.\n" ]
[ 1 ]
[]
[]
[ "graphql", "next.js", "urql" ]
stackoverflow_0074673781_graphql_next.js_urql.txt
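A minimal sketch of the fix suggested in the answer above, assuming npm is the package manager (urql declares graphql as a peer dependency, so it must be resolvable in node_modules):

npm list graphql          # check whether graphql is resolvable
npm install graphql       # add it as a direct dependency if missing

# If the dependency tree itself is corrupted, a clean reinstall often helps:
rm -rf node_modules package-lock.json
npm install

After reinstalling, restart the Next.js dev server so the module graph is rebuilt.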
Q: webpack-dev-server --hot - didn't refresh browser files I have no idea anymore. After updating Node.js and webpack, I cannot get devServer to reload. I have tried mode: "development", static:, hot: true, and a few more things from Google. What am I doing wrong, and where is the error? There are no errors in the console. I want to configure webpack to write in ES6, nothing else. package.json { "name": "calc", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "start": "webpack-dev-server --hot", "build": "webpack -d" }, "keywords": [], "author": "", "license": "ISC", "devDependencies": { "@babel/core": "^7.19.3", "@babel/preset-env": "^7.19.3", "babel-loader": "^8.2.5", "webpack": "^5.74.0", "webpack-cli": "^4.10.0", "webpack-dev-server": "^4.11.1" } } webpack.config.js const path = require("path"); const entryPath = "."; module.exports = { mode: "development", entry: `./${entryPath}/js/app.js`, output: { filename: "out.js", path: path.resolve(__dirname, `${entryPath}/build`), }, r: { static: path.join(__dirname, `${entryPath}`), hot: true, compress: true, port: 3001, open: true, headers: { "Access-Control-Allow-Origin": "*" }, }, module: { rules: [ { test: /\.js$/, exclude: /node_modules/, use: { loader: "babel-loader", options: { presets: ["@babel/preset-env"], }, }, }, ], }, }; directory structure Node.js version: v18.9.0 NPM version: 8.19.1 Thanks for the answer. A: Please provide your webpack version. For webpack 5 use devServer: { watchFiles: ['src/**/*.php', 'public/**/*'], } See details here: https://webpack.js.org/configuration/dev-server/#devserverwatchfiles
webpack-dev-server --hot - didn't refresh browser files
I have no idea anymore. After updating Node.js and webpack, I cannot get devServer to reload. I have tried mode: "development", static:, hot: true, and a few more things from Google. What am I doing wrong, and where is the error? There are no errors in the console. I want to configure webpack to write in ES6, nothing else. package.json { "name": "calc", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "start": "webpack-dev-server --hot", "build": "webpack -d" }, "keywords": [], "author": "", "license": "ISC", "devDependencies": { "@babel/core": "^7.19.3", "@babel/preset-env": "^7.19.3", "babel-loader": "^8.2.5", "webpack": "^5.74.0", "webpack-cli": "^4.10.0", "webpack-dev-server": "^4.11.1" } } webpack.config.js const path = require("path"); const entryPath = "."; module.exports = { mode: "development", entry: `./${entryPath}/js/app.js`, output: { filename: "out.js", path: path.resolve(__dirname, `${entryPath}/build`), }, r: { static: path.join(__dirname, `${entryPath}`), hot: true, compress: true, port: 3001, open: true, headers: { "Access-Control-Allow-Origin": "*" }, }, module: { rules: [ { test: /\.js$/, exclude: /node_modules/, use: { loader: "babel-loader", options: { presets: ["@babel/preset-env"], }, }, }, ], }, }; directory structure Node.js version: v18.9.0 NPM version: 8.19.1 Thanks for the answer.
[ "Please provide your webpack verison.\nFor webpack 5 use\n devServer: {\n watchFiles: ['src/**/*.php', 'public/**/*'],\n }\n\nSee details here https://webpack.js.org/configuration/dev-server/#devserverwatchfiles\n" ]
[ 0 ]
[]
[]
[ "webpack.config.js", "webpack_dev_server" ]
stackoverflow_0073966115_webpack.config.js_webpack_dev_server.txt
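One detail worth noting in the question's config: the dev-server options sit under a key named r: where devServer: would normally go (possibly a transcription artifact, but worth checking, since webpack only reads these options from devServer). A sketch of a conventional webpack 5 / webpack-dev-server 4 section, assuming the rest of the config stays as posted; with webpack-cli 4 the dev server is usually started with "webpack serve" rather than the old webpack-dev-server binary:

// webpack.config.js (excerpt)
module.exports = {
  // ...mode, entry, output, module as in the question...
  devServer: {
    static: path.join(__dirname, "."),   // serve static files from the project root
    hot: true,                           // hot module replacement
    watchFiles: ["src/**/*", "*.html"],  // full reload for files outside the JS module graph
    compress: true,
    port: 3001,
    open: true,
  },
};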
Q: Calculate pi of a radian that can be negative or positive? I have a variable A which is a radian angle value. I also have a variable B that should always be PI away from A. How can I verify that B is PI off of A with 0.01 accuracy in negative or positive direction (C++)? A's value can be negative. A: Peter's answer in the comments suited this problem well: std::abs(std::abs(a - b) - pi) <= 0.01
Calculate pi of a radian that can be negative or positive?
I have a variable A which is a radian angle value. I also have a variable B that should always be PI away from A. How can I verify that B is PI off of A with 0.01 accuracy in negative or positive direction (C++)? A's value can be negative.
[ "Peter's answer in the comments suited well to this problem: std::abs(std::abs(a - b) - pi) <= 0.01\n" ]
[ 0 ]
[]
[]
[ "angle", "c++", "radians" ]
stackoverflow_0074319231_angle_c++_radians.txt
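A self-contained sketch of the accepted check. It assumes A and B are plain radian values that do not wrap past a full turn; if they can differ by multiples of 2*pi, they would need normalising first (not shown):

#include <cmath>
#include <iostream>

constexpr double kPi = 3.14159265358979323846; // M_PI is not guaranteed by standard C++

// True when a and b are pi apart in either direction, within tol radians.
bool isPiApart(double a, double b, double tol = 0.01) {
    return std::abs(std::abs(a - b) - kPi) <= tol;
}

int main() {
    std::cout << std::boolalpha
              << isPiApart(-1.0, -1.0 + kPi) << '\n'  // true
              << isPiApart(0.5, 0.6) << '\n';         // false
}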
Q: Javascript function to send post requests, can't return object I want to create a simplified, reusable Ajax function for my project. After wrapping XMLHttpRequest into a function, I cannot return a response object. The response object can only be printed with console.log(obj). return obj returns undefined instead of returning an object. What am I doing wrong? function xhr(xhrObject) { let xhr = new XMLHttpRequest(); xhr.open(xhrObject.type, xhrObject.destination, true); xhr.getResponseHeader("Content-type", "application/json"); xhr.responseType = xhrObject.response; xhr.onreadystatechange = function () { if(this.readyState === 4 && this.status === 200) { let obj = xhr.response; console.log(obj); //return obj; instead of returning objects, it returns undefined } }; // Send request let json = JSON.stringify(xhrObject.data); xhr.send(json); } To use the function, I pass an object to it. let object = { type: 'POST', destination: 'request.php', selector: '.result', data: {a: "a", b: "b", c: "c"}, response: 'json' // text, json }; xhr(object); Thanks to the solution here, now I can get the response object. Change return xhr.response; to callback(xhr.response); And now call the function like this: xhr(object, function(result) { // now result will return an object, object values can be accessed like this: alert(result.Name); }); My last question is: Is there any possible optimization for this? Or is the final result good enough and nothing else should be done?
Javascript function to send post requests, can't return object
I want to create a simplified, reusable Ajax function for my project. After wrapping XMLHttpRequest into a function, I cannot return a response object. The response object can only be printed with console.log(obj). return obj returns undefined instead of returning an object. What am I doing wrong? function xhr(xhrObject) { let xhr = new XMLHttpRequest(); xhr.open(xhrObject.type, xhrObject.destination, true); xhr.getResponseHeader("Content-type", "application/json"); xhr.responseType = xhrObject.response; xhr.onreadystatechange = function () { if(this.readyState === 4 && this.status === 200) { let obj = xhr.response; console.log(obj); //return obj; instead of returning objects, it returns undefined } }; // Send request let json = JSON.stringify(xhrObject.data); xhr.send(json); } To use the function, I pass an object to it. let object = { type: 'POST', destination: 'request.php', selector: '.result', data: {a: "a", b: "b", c: "c"}, response: 'json' // text, json }; xhr(object); Thanks to the solution here, now I can get the response object. Change return xhr.response; to callback(xhr.response); And now call the function like this: xhr(object, function(result) { // now result will return an object, object values can be accessed like this: alert(result.Name); }); My last question is: Is there any possible optimization for this? Or is the final result good enough and nothing else should be done?
[]
[]
[ "To return a value from the function xhr(), you can use the return keyword outside of the onreadystatechange event handler. The onreadystatechange event is asynchronous, meaning that the code inside the event handler will be executed at a later time when the response is ready. Therefore, you can't return the value directly from within the event handler.\nfunction xhr(xhrObject) {\n let xhr = new XMLHttpRequest();\n xhr.open(xhrObject.type, xhrObject.destination, true);\n xhr.getResponseHeader(\"Content-type\", \"application/json\");\n xhr.responseType = xhrObject.response;\n\n // Create a variable to store the response value\n let response;\n\n xhr.onreadystatechange = function () {\n if(this.readyState === 4 && this.status === 200) {\n response = xhr.response;\n }\n };\n\n // Send request\n let json = JSON.stringify(xhrObject.data);\n xhr.send(json);\n\n // Return the response value\n return response;\n}\n\n" ]
[ -1 ]
[ "ajax", "javascript", "xmlhttprequest" ]
stackoverflow_0074673776_ajax_javascript_xmlhttprequest.txt
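On the closing optimisation question above: a common refinement is to return a Promise instead of taking a callback, which makes the wrapper usable with async/await. The sketch below also uses setRequestHeader, since the original's getResponseHeader call has no effect before a response exists:

function xhrRequest(opts) {
  return new Promise(function (resolve, reject) {
    const req = new XMLHttpRequest();
    req.open(opts.type, opts.destination, true);
    req.setRequestHeader("Content-Type", "application/json"); // set the request header
    req.responseType = opts.response;
    req.onload = function () {
      if (req.status >= 200 && req.status < 300) resolve(req.response);
      else reject(new Error("HTTP " + req.status));
    };
    req.onerror = function () { reject(new Error("network error")); };
    req.send(JSON.stringify(opts.data));
  });
}

// usage, with the same option object as in the question
xhrRequest(object).then(function (result) { alert(result.Name); });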
Q: why angular makes http request to https://localhost:4200 instead of http://localhost:5001 to get data from Restful Asp.net? I am using Angular version 15.0, and everything goes well on the backend. But on the frontend side, when the service requests data, unfortunately an error is raised as below: Failed to load resource: the server responded with a status of 404 (Not Found) because the request URL is: http://localhost:4200/api/commodityTypes/getAllCommodityTypes On the other side, when we use Swagger with this URL: https://localhost:5001/api/CommodityTypes/getAllCommodityTypes the data is fetched successfully. The service code is: @Injectable({ providedIn: 'root' }) export class CommodityTypesService { private baseUrl = 'api/commodityTypes'; constructor(private http: HttpClient) { } /** GET all commodityTypes from the server */ getAllCommodityTypes(): Observable<CommodityType[]> { return this.http.get<CommodityType[]>(this.baseUrl + '/getAllCommodityTypes/'); } // rest of code ... } and the error is: HttpErrorResponse error: "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>Cannot GET /api/commodityTypes/getAllCommodityTypes/</pre>\n</body>\n</html>\n" headers: HttpHeaders {normalizedNames: Map(0), lazyUpdate: null, lazyInit: ƒ} message: "Http failure response for http://localhost:4200/api/commodityTypes/getAllCommodityTypes/: 404 Not Found" name: "HttpErrorResponse" ok: false status: 404 statusText: "Not Found" url: "http://localhost:4200/api/commodityTypes/getAllCommodityTypes/" [[Prototype]]: HttpResponseBase How can I fix this problem? A: I think you should adjust your baseUrl: export class CommodityTypesService { private baseUrl = 'https://localhost:5001/api/CommodityTypes'; If you get a CORS error: You have to enable CORS in your backend for requests coming from http://localhost:4200 (the origin of the Angular dev server).
why angular makes http request to https://localhost:4200 instead of http://localhost:5001 to get data from Restful Asp.net?
I am using Angular version 15.0, and everything goes well on the backend. But on the frontend side, when the service requests data, unfortunately an error is raised as below: Failed to load resource: the server responded with a status of 404 (Not Found) because the request URL is: http://localhost:4200/api/commodityTypes/getAllCommodityTypes On the other side, when we use Swagger with this URL: https://localhost:5001/api/CommodityTypes/getAllCommodityTypes the data is fetched successfully. The service code is: @Injectable({ providedIn: 'root' }) export class CommodityTypesService { private baseUrl = 'api/commodityTypes'; constructor(private http: HttpClient) { } /** GET all commodityTypes from the server */ getAllCommodityTypes(): Observable<CommodityType[]> { return this.http.get<CommodityType[]>(this.baseUrl + '/getAllCommodityTypes/'); } // rest of code ... } and the error is: HttpErrorResponse error: "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>Cannot GET /api/commodityTypes/getAllCommodityTypes/</pre>\n</body>\n</html>\n" headers: HttpHeaders {normalizedNames: Map(0), lazyUpdate: null, lazyInit: ƒ} message: "Http failure response for http://localhost:4200/api/commodityTypes/getAllCommodityTypes/: 404 Not Found" name: "HttpErrorResponse" ok: false status: 404 statusText: "Not Found" url: "http://localhost:4200/api/commodityTypes/getAllCommodityTypes/" [[Prototype]]: HttpResponseBase How can I fix this problem?
[ "I think you should adjust your baseUrl:\nexport class CommodityTypesService {\n\n private baseUrl = https://localhost:5001/api/CommodityTypes;\n\nIf you get a CORS Error:\nYou have to enable CORS in your backend for requests coming from localhost:5001.\n" ]
[ 0 ]
[]
[]
[ "angular", "asp.net", "asp.net_core", "typescript" ]
stackoverflow_0074673743_angular_asp.net_asp.net_core_typescript.txt
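An alternative that keeps a relative baseUrl like 'api/commodityTypes' working in development is the Angular CLI's dev-server proxy, so the code does not hard-code the backend origin. A sketch, assuming a standard Angular CLI project — proxy.conf.json:

{
  "/api": {
    "target": "https://localhost:5001",
    "secure": false,
    "changeOrigin": true
  }
}

Then start the dev server with: ng serve --proxy-config proxy.conf.json. With the proxy in place, requests to /api/... from http://localhost:4200 are forwarded to the ASP.NET backend, and no CORS configuration is needed in development.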
Q: Why does my code return nothing and stops returning anything? This is part of my homework and I am having difficulty understanding why the output is empty and then the code ends. I don't really understand the class pointer so it would help a lot if there is an explanation of how the class pointer affects the code. Why can I call employee_name using secretary and receptionist but I can't do it using emp1 or emp2. It would be great if someone can link me to videos where I can learn more on similar topics and better understand classes. Header File #ifndef DEPARTMENT_H #define DEPARTMENT_H #include <string> class Employee{ public: Employee(std::string n, double s); std::string employee_name; double salary; }; class Department{ public: Department(std::string n, Employee* s, Employee* r); /* set the receptionist to r; note that both receptionist and r are pointers */ void set_receptionist(Employee* r); /* set the secretary to s; note that both secretary and s are pointers */ void set_secretary(Employee* s); /* calculate the total salary and return it. neglect the receptionist or secretary if it is nullptr. count only once if receptionist and secretary point to the same employee (check their address instead of name!) */ double get_total_salary() const; /* display the department information, including department name, employees, and total salaries. see details in main function. neglect the receptionist or secretary if it is nullptr. */ void display_department() const; private: std::string department_name; Employee* receptionist; Employee* secretary; }; #endif Implementation File Employee::Employee(std::string n, double s) { employee_name = n; salary = s; } Department::Department(std::string n, Employee* s, Employee* r) { department_name = n; } void Department::set_receptionist(Employee* r) { receptionist = r; } void Department::set_secretary(Employee* s) { secretary = s; } double Department::get_total_salary() const { return 0.0; } void Department::display_department() const { cout << "department name: " << department_name << endl; cout << "secretary name: " << secretary->employee_name << ", " << "salary: " << secretary->salary << endl; cout << "receptionist name: " << receptionist->employee_name << ", " << "salary: " << receptionist->salary << endl; //cout << "total salary: " << << endl; } Main File int main(){ Employee emp1("Alice", 6000); Employee emp2("Bob", 5500); Department dep1("IT", &emp1, &emp2); dep1.display_department(); /* department name: IT secretary name: Alice, salary: 6000 receptionist name: Bob, salary: 5500 total salary: 11500 */ dep1.set_receptionist(&emp1); dep1.display_department(); /* department name: IT secretary name: Alice, salary: 6000 receptionist name: Alice, salary: 6000 total salary: 6000 */ dep1.set_secretary(nullptr); dep1.display_department(); /* department name: IT receptionist name: Alice, salary: 6000 total salary: 6000 */ } The expected output is in the comment of the main file. I am trying to figure out display_department of the Employee class and I know that get_total_salary is incorrect. Output: department name: IT secretary name: It outputs this and then the program ends. A: Your problem is that you are not initialising or testing your pointers before you use them. First you should set both pointers when you construct your department object (this doesn't happen automatically). 
Department::Department(std::string n, Employee* s, Employee* r) { department_name = n; secretary = s; receptionist = r; } Then you should test whether each pointer equals nullptr before you try to use it to print the secretary or receptionist name. void Department::display_department() const { cout << "department name: " << department_name << endl; if (secretary == nullptr) cout << "no secretary:" << endl; else cout << "secretary name: " << secretary->employee_name << ", " << "salary: " << secretary->salary << endl; if (receptionist == nullptr) cout << "no receptionist:" << endl; else cout << "receptionist name: " << receptionist->employee_name << ", " << "salary: " << receptionist->salary << endl; //cout << "total salary: " << << endl; }
Why does my code return nothing and stops returning anything?
This is part of my homework and I am having difficulty understanding why the output is empty and then the code ends. I don't really understand the class pointer so it would help a lot if there is an explanation of how the class pointer affects the code. Why can I call employee_name using secretary and receptionist but I can't do it using emp1 or emp2. It would be great if someone can link me to videos where I can learn more on similar topics and better understand classes. Header File #ifndef DEPARTMENT_H #define DEPARTMENT_H #include <string> class Employee{ public: Employee(std::string n, double s); std::string employee_name; double salary; }; class Department{ public: Department(std::string n, Employee* s, Employee* r); /* set the receptionist to r; note that both receptionist and r are pointers */ void set_receptionist(Employee* r); /* set the secretary to s; note that both secretary and s are pointers */ void set_secretary(Employee* s); /* calculate the total salary and return it. neglect the receptionist or secretary if it is nullptr. count only once if receptionist and secretary point to the same employee (check their address instead of name!) */ double get_total_salary() const; /* display the department information, including department name, employees, and total salaries. see details in main function. neglect the receptionist or secretary if it is nullptr. */ void display_department() const; private: std::string department_name; Employee* receptionist; Employee* secretary; }; #endif Implementation File Employee::Employee(std::string n, double s) { employee_name = n; salary = s; } Department::Department(std::string n, Employee* s, Employee* r) { department_name = n; } void Department::set_receptionist(Employee* r) { receptionist = r; } void Department::set_secretary(Employee* s) { secretary = s; } double Department::get_total_salary() const { return 0.0; } void Department::display_department() const { cout << "department name: " << department_name << endl; cout << "secretary name: " << secretary->employee_name << ", " << "salary: " << secretary->salary << endl; cout << "receptionist name: " << receptionist->employee_name << ", " << "salary: " << receptionist->salary << endl; //cout << "total salary: " << << endl; } Main File int main(){ Employee emp1("Alice", 6000); Employee emp2("Bob", 5500); Department dep1("IT", &emp1, &emp2); dep1.display_department(); /* department name: IT secretary name: Alice, salary: 6000 receptionist name: Bob, salary: 5500 total salary: 11500 */ dep1.set_receptionist(&emp1); dep1.display_department(); /* department name: IT secretary name: Alice, salary: 6000 receptionist name: Alice, salary: 6000 total salary: 6000 */ dep1.set_secretary(nullptr); dep1.display_department(); /* department name: IT receptionist name: Alice, salary: 6000 total salary: 6000 */ } The expected output is in the comment of the main file. I am trying to figure out display_department of the Employee class and I know that get_total_salary is incorrect. Output: department name: IT secretary name: It outputs this and then the program ends.
[ "Your problem is that you are not initialising or testing your pointers before you use them.\nFirst you should set both pointers when you construct your department object (this doesn't happen automatically).\nDepartment::Department(std::string n, Employee* s, Employee* r)\n{\n department_name = n;\n secretary = s;\n receptionist = r;\n}\n\nThen you should test if the pointer equals nullptr before you try an use them to print the secretary or receptionist name.\nvoid Department::display_department() const\n{\n cout << \"department name: \" << department_name << endl;\n if (secretary == nullptr)\n cout << \"no secretary:\" << endl;\n else\n cout << \"secretary name: \" << secretary->employee_name << \", \" << \"salary: \" << secretary->salary << endl;\n if (receptionist == nullptr)\n cout << \"no receptionist:\" << endl;\n else\n cout << \"receptionist name: \" << receptionist->employee_name << \", \" << \"salary: \" << receptionist->salary << endl;\n //cout << \"total salary: \" << << endl;\n\n}\n\n" ]
[ 0 ]
[]
[]
[ "c++", "c++11", "oop" ]
stackoverflow_0074673694_c++_c++11_oop.txt
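Two small refinements of the accepted fix, sketched below: initialising the members in the constructor's initialiser list guarantees the pointers are never left dangling, and the same nullptr checks complete the get_total_salary stub the way the homework comments describe (count a shared employee once, comparing addresses):

Department::Department(std::string n, Employee* s, Employee* r)
    : department_name(n), receptionist(r), secretary(s) {}

double Department::get_total_salary() const {
    double total = 0.0;
    if (secretary != nullptr) total += secretary->salary;
    // add the receptionist only if it points to a different Employee object
    if (receptionist != nullptr && receptionist != secretary) total += receptionist->salary;
    return total;
}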
Q: Can you explain how the feature is extracted from the following code of CNN How the Image Features are extracted from the following convolutional neural network code import tensorflow as tf from tensorflow.keras.utils import img_to_array df['PubChem_ID'] = df['PubChem_ID'].apply(str) df_image = [] for i in tqdm(range(df.shape[0])): img = image.load_img('/content/drive/MyDrive/3D Conformer/Conformer/'+df['PubChem_ID'] [i]+'.png',target_size=(256,256,3)) img = image.img_to_array(img) img = img/255 df_image.append(img) X = np.array(df_image) The image is converted into the size 256 x 256 x 3 in matrix with three layers (RGB), where each layer contains 256 x 256 values. y = np.array(df.drop(['PubChem_ID'],axis=1)) model = Sequential() model.add(Convolution2D(64, kernel_size=(3, 3),padding='same',input_shape=(256,256,3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(29)) model.add(Activation('sigmoid')) A: In the given code, a convolutional neural network (CNN) is used to extract image features from a dataset of images. The images in the dataset are first converted to a size of 256 x 256 x 3, where the 3 represents the 3 color channels (red, green, and blue) of the image. The image features are extracted using the following steps: The Convolution2D layer applies a set of filters to the input image, each of which is a 3 x 3 matrix of weights. This layer performs a convolution operation on the input image to create a new feature map. The Activation layer applies a non-linear activation function (in this case, the ReLU function) to the output of the Convolution2D layer. This allows the network to learn more complex patterns in the data. The MaxPooling2D layer performs a max pooling operation on the output of the Activation layer, which reduces the spatial dimensions of the feature map. This helps to reduce the number of parameters in the model and to prevent overfitting. The Dropout layer randomly sets a fraction of the output values to zero, which helps to prevent overfitting by reducing the dependence on any one feature. The Flatten layer flattens the output of the Dropout layer into a single vector of values. This allows the output to be fed into the next layer of the network. The Dense layer applies a linear transformation to the flattened feature vector, which produces a 29-dimensional output vector. This layer represents the final set of image features extracted by the network. The Activation layer applies the sigmoid activation function to the output of the Dense layer, which produces a final output vector of probabilities. This output can be used for classification or other tasks. Overall, the given code uses a CNN to extract a set of 29 image features from the input images. These features are learned by the network during training and can be used to represent the visual content of the images in a compact and useful form.
Can you explain how the feature is extracted from the following code of CNN
How the Image Features are extracted from the following convolutional neural network code import tensorflow as tf from tensorflow.keras.utils import img_to_array df['PubChem_ID'] = df['PubChem_ID'].apply(str) df_image = [] for i in tqdm(range(df.shape[0])): img = image.load_img('/content/drive/MyDrive/3D Conformer/Conformer/'+df['PubChem_ID'] [i]+'.png',target_size=(256,256,3)) img = image.img_to_array(img) img = img/255 df_image.append(img) X = np.array(df_image) The image is converted into the size 256 x 256 x 3 in matrix with three layers (RGB), where each layer contains 256 x 256 values. y = np.array(df.drop(['PubChem_ID'],axis=1)) model = Sequential() model.add(Convolution2D(64, kernel_size=(3, 3),padding='same',input_shape=(256,256,3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(29)) model.add(Activation('sigmoid'))
[ "In the given code, a convolutional neural network (CNN) is used to extract image features from a dataset of images. The images in the dataset are first converted to a size of 256 x 256 x 3, where the 3 represents the 3 color channels (red, green, and blue) of the image.\nThe image features are extracted using the following steps:\nThe Convolution2D layer applies a set of filters to the input image, each of which is a 3 x 3 matrix of weights. This layer performs a convolution operation on the input image to create a new feature map.\nThe Activation layer applies a non-linear activation function (in this case, the ReLU function) to the output of the Convolution2D layer. This allows the network to learn more complex patterns in the data.\nThe MaxPooling2D layer performs a max pooling operation on the output of the Activation layer, which reduces the spatial dimensions of the feature map. This helps to reduce the number of parameters in the model and to prevent overfitting.\nThe Dropout layer randomly sets a fraction of the output values to zero, which helps to prevent overfitting by reducing the dependence on any one feature.\nThe Flatten layer flattens the output of the Dropout layer into a single vector of values. This allows the output to be fed into the next layer of the network.\nThe Dense layer applies a linear transformation to the flattened feature vector, which produces a 29-dimensional output vector. This layer represents the final set of image features extracted by the network.\nThe Activation layer applies the sigmoid activation function to the output of the Dense layer, which produces a final output vector of probabilities. This output can be used for classification or other tasks.\nOverall, the given code uses a CNN to extract a set of 29 image features from the input images. These features are learned by the network during training and can be used to represent the visual content of the images in a compact and useful form.\n" ]
[ 1 ]
[]
[]
[ "conv_neural_network", "feature_extraction", "image_preprocessing", "python" ]
stackoverflow_0074673792_conv_neural_network_feature_extraction_image_preprocessing_python.txt
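The layer-by-layer shapes the answer describes can be verified directly with Keras; a sketch, assuming the model object built above:

model.summary()
# conv2d:        (None, 256, 256, 64)  - 'same' padding keeps 256x256, with 64 feature maps
# max_pooling2d: (None, 128, 128, 64)  - 2x2 pooling halves each spatial dimension
# flatten:       (None, 1048576)       - 128 * 128 * 64 values per image
# dense:         (None, 29)            - the 29-value output vector

# To reuse the 29-dimensional vectors as features for other tasks, take the output
# of the Dense layer (the second-to-last layer, before the sigmoid Activation):
from tensorflow.keras.models import Model
feature_extractor = Model(inputs=model.input, outputs=model.layers[-2].output)
features = feature_extractor.predict(X[:5])  # shape: (5, 29)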
Q: how to change the value of layout in MainActivity.java? I am using Android Studio. In x.xml, I set the layout_height=400dp <RelativeLayout android:id="@+id/relativeLayout2" android:layout_width="wrap_content" android:layout_height="400dp"> </RelativeLayout> In MainActivity.java, I use the following code to get the layout: relativelayout2=findViewById(R.id.relativeLayout2); I got the layout by its id in MainActivity.java, but how do I change its height to a different value? A: The layout height of RelativeLayout can be changed by using the setLayoutParams method and passing it a ViewGroup.LayoutParams object with the desired height. // Get the layout params for the RelativeLayout RelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams) relativelayout2.getLayoutParams(); // Set the height to the desired value layoutParams.height = 500; // set the height to 500 pixels // Apply the new layout params to the RelativeLayout relativelayout2.setLayoutParams(layoutParams); This will change the height of RelativeLayout to 500 pixels. Note that if you want to use density-independent pixels (dp) instead of pixels, use the TypedValue class to convert the dp value to pixels. // Get the layout params for the RelativeLayout RelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams) relativelayout2.getLayoutParams(); // Convert the dp value to pixels float heightInPixels = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 500, getResources().getDisplayMetrics()); // Set the height to the converted value (height is an int, so cast) layoutParams.height = (int) heightInPixels; // Apply the new layout params to the RelativeLayout relativelayout2.setLayoutParams(layoutParams); This code will convert the dp value of 500 to pixels and then set the height of RelativeLayout to that value.
how to change the value of layout in MainActivity.java?
I am using Android Studio. In x.xml, I set the layout_height=400dp <RelativeLayout android:id="@+id/relativeLayout2" android:layout_width="wrap_content" android:layout_height="400dp"> </RelativeLayout> In MainActivity.java, I use the following code to get the layout: relativelayout2=findViewById(R.id.relativeLayout2); I got the layout by its id in MainActivity.java, but how do I change its height to a different value?
[ "The layout height of RelativeLayout can be changed by using setLayoutParams method and passing it a ViewGroup.LayoutParams object with the desired height.\n// Get the layout params for the RelativeLayout\nRelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams) relativelayout2.getLayoutParams();\n\n// Set the height to the desired value\nlayoutParams.height = 500; // set the height to 500 pixels\n\n// Apply the new layout params to the RelativeLayout\nrelativelayout2.setLayoutParams(layoutParams);\n\nThis will change the height of RelativeLayout to 500 pixels.\n\nNote that if you want to use density-independent pixels (dp) instead of pixels, use the TypedValue class to convert the dp value to pixels.\n// Get the layout params for the RelativeLayout\nRelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams) relativelayout2.getLayoutParams();\n\n// Convert the dp value to pixels\nfloat heightInPixels = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 500, getResources().getDisplayMetrics());\n\n// Set the height to the converted value\nlayoutParams.height = heightInPixels;\n\n// Apply the new layout params to the RelativeLayout\nrelativelayout2.setLayoutParams(layoutParams);\n\nThis code will convert the dp value of 500 to pixels and then set the height of RelativeLayout to that value.\n" ]
[ 0 ]
[]
[]
[ "android", "density_independent_pixel", "java" ]
stackoverflow_0074673349_android_density_independent_pixel_java.txt
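A slightly safer variant of the answer's code, sketched in Java: using the generic ViewGroup.LayoutParams avoids a ClassCastException risk (the concrete LayoutParams type depends on the view's parent, not on the view itself), and the dp conversion sits in a small helper inside the Activity:

// Convert a dp value to pixels, rounding to the nearest int.
private int dpToPx(float dp) {
    return Math.round(TypedValue.applyDimension(
            TypedValue.COMPLEX_UNIT_DIP, dp, getResources().getDisplayMetrics()));
}

// Usage: setLayoutParams() triggers a relayout automatically.
ViewGroup.LayoutParams lp = relativelayout2.getLayoutParams();
lp.height = dpToPx(500);
relativelayout2.setLayoutParams(lp);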
Q: Why scrollable cards height are not increasing according to his content I designed the scrollable cards. The cards are only for mobile screens. The current issue is that more data gets encapsulated inside the scrollable wrapper as the content grows. No matter how long the content is, I want the div's height to increase. Is there a fix for this design that makes the card's height rise in proportion to its contents? The read more functionality is implemented, but I didn't add it to the snippet. By default, all the content will be the same. But on read more, the content can vary. So, I want the design to be fixed so read more content does not affect the card. By default: On clicking read more/content increases: .scrolling-wrapper { -webkit-overflow-scrolling: touch; height: 474px; width: 100%; padding-inline: 40px; position: relative; display: flex; flex-wrap: nowrap; overflow-x: auto; z-index: 0; padding-top: 150px; visibility: visible; } .scrolling-wrapper::-webkit-scrollbar { display: none; } .card { width: 100%; flex: 0 0 auto; background-color: green; border-radius: 20px; position: relative; margin-inline-end: 10px; } .our-member-owner-card-image { position: absolute; top: -66px; z-index: 10; left: 29%; } .card-content { position: absolute; padding-top: 38px; } .member-detail { padding-top: 55px; line-height: 1.7; } .member-detail h3 { text-align: center; color: #263244; font-weight: 700; font-family: "Lato"; } .member-detail p { text-align: center; color: #737c89; } .member-description { padding-inline: 20px; color: #263244; line-height: 1.6; padding-top: 9px; font-weight: 500; font-size: 16px; font-style: normal; font-weight: 500; } .member-description .read-more { color: #eb644c; text-decoration: underline; cursor: pointer; } <div class="scrolling-wrapper"> <div class="card"> <div class="our-member-owner-card-image"> <img width="140px" src="https://images.unsplash.com/photo-1579279219378-731a5c4f4d16?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8bXIlMjBiZWFufGVufDB8fDB8fA%3D%3D&auto=format&fit=crop&w=500&q=60" /> </div> <div class="card-content"> <div class="member-detail"> <h3 id="mobile-member-name">Mr bean</h3> <p id="mobile-member-designation">Actor</p> </div> <div class="member-description"> <span id="mobile-member-description"> Mr Bean has extensive work experience during his career of more than 25 years in the film industry. </span> <span id="mobile-more" >Some dummy text </span> <span id="mobile-member-description-readmore" class="readMoreLink read-more" >Read less</span> </div> </div> </div> <div class="card"> <div class="our-member-owner-card-image"> <img width="140px" src="https://images.unsplash.com/photo-1579279219378-731a5c4f4d16?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8bXIlMjBiZWFufGVufDB8fDB8fA%3D%3D&auto=format&fit=crop&w=500&q=60" /> </div> <div class="card-image-shadow"></div> <div class="card-content"> <div class="member-detail"> <h3 id="mobile-member2-name">Mr bean</h3> <p id="mobile-member2-designation">Actor</p> </div> <div class="member-description"> <span id="mobile-member2-description"> Mr Bean has extensive work experience during his career of more than 25 years in the film industry </span> <span id="mobile-more2" >Some dummy text </span> <span id="mobile-member2-description-readmore" class="readMoreLink read-more" " >Read less</span > </div> </div> </div> </div> A: As I understood your question, your issue is that the content is pushing out because you have defined an absolute height for the container. 
Let the content determine the height dynamically. Instead of using height and max-height, try using min-height. That way, if the content needs more space, it can grow. So removing this should make the cards grow based on the size of the content .scrolling-wrapper { height: 474px; } A: I think adding min-height: fit-content to .scrolling-wrapper will do what you want
Why scrollable cards height are not increasing according to his content
I designed the scrollable cards. The cards are only for mobile screens. The current issue is that more data gets encapsulated inside the scrollable wrapper as the content grows. No matter how long the content is, I want the div's height to increase. Is there a fix for this design that makes the card's height rise in proportion to its contents? The read more functionality is implemented, but I didn't add it to the snippet. By default, all the content will be the same. But on read more, the content can vary. So, I want the design to be fixed so read more content does not affect the card. By default: On clicking read more/content increases: .scrolling-wrapper { -webkit-overflow-scrolling: touch; height: 474px; width: 100%; padding-inline: 40px; position: relative; display: flex; flex-wrap: nowrap; overflow-x: auto; z-index: 0; padding-top: 150px; visibility: visible; } .scrolling-wrapper::-webkit-scrollbar { display: none; } .card { width: 100%; flex: 0 0 auto; background-color: green; border-radius: 20px; position: relative; margin-inline-end: 10px; } .our-member-owner-card-image { position: absolute; top: -66px; z-index: 10; left: 29%; } .card-content { position: absolute; padding-top: 38px; } .member-detail { padding-top: 55px; line-height: 1.7; } .member-detail h3 { text-align: center; color: #263244; font-weight: 700; font-family: "Lato"; } .member-detail p { text-align: center; color: #737c89; } .member-description { padding-inline: 20px; color: #263244; line-height: 1.6; padding-top: 9px; font-weight: 500; font-size: 16px; font-style: normal; font-weight: 500; } .member-description .read-more { color: #eb644c; text-decoration: underline; cursor: pointer; } <div class="scrolling-wrapper"> <div class="card"> <div class="our-member-owner-card-image"> <img width="140px" src="https://images.unsplash.com/photo-1579279219378-731a5c4f4d16?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8bXIlMjBiZWFufGVufDB8fDB8fA%3D%3D&auto=format&fit=crop&w=500&q=60" /> </div> <div class="card-content"> <div class="member-detail"> <h3 id="mobile-member-name">Mr bean</h3> <p id="mobile-member-designation">Actor</p> </div> <div class="member-description"> <span id="mobile-member-description"> Mr Bean has extensive work experience during his career of more than 25 years in the film industry. </span> <span id="mobile-more" >Some dummy text </span> <span id="mobile-member-description-readmore" class="readMoreLink read-more" >Read less</span> </div> </div> </div> <div class="card"> <div class="our-member-owner-card-image"> <img width="140px" src="https://images.unsplash.com/photo-1579279219378-731a5c4f4d16?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8bXIlMjBiZWFufGVufDB8fDB8fA%3D%3D&auto=format&fit=crop&w=500&q=60" /> </div> <div class="card-image-shadow"></div> <div class="card-content"> <div class="member-detail"> <h3 id="mobile-member2-name">Mr bean</h3> <p id="mobile-member2-designation">Actor</p> </div> <div class="member-description"> <span id="mobile-member2-description"> Mr Bean has extensive work experience during his career of more than 25 years in the film industry </span> <span id="mobile-more2" >Some dummy text </span> <span id="mobile-member2-description-readmore" class="readMoreLink read-more" " >Read less</span > </div> </div> </div> </div>
[ "As I understood your question, your issue is that the content is pushing out because you have defined an absolute height for the container. Let the content determine the height dynamically. Instead of using height and max-height, try using min-height. That way, if the content needs more space, it can grow.\nSo removing this should make the cards grow based on the size of the content\n.scrolling-wrapper {\n height: 474px;\n }\n\n", "I think adding\n\nmin-height: fit-content\n\nto .scrolling-wrapper will do what you want\n" ]
[ 0, 0 ]
[]
[]
[ "css", "flexbox", "html", "scrollable" ]
stackoverflow_0074670867_css_flexbox_html_scrollable.txt
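An untested sketch combining both answers above: drop the fixed wrapper height (or replace it with min-height) so the tallest card sets the height. Note that .card-content is position: absolute in the posted CSS, and absolutely positioned children never contribute to their parent's height, so that rule is likely also part of the problem:

.scrolling-wrapper {
  min-height: fit-content; /* grow with the tallest card */
  /* height: 474px;          removed: a fixed height clips growing content */
}

.card-content {
  position: relative; /* let the content contribute to the card's height */
  padding-top: 38px;
}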
Q: How do I get the height and width of the Android Navigation Bar programmatically? The black navigation bar on the bottom of the screen is not easily removable in Android. It has been part of Android since 3.0 as a replacement for hardware buttons. Here is a picture: How can I get the size of the width and the height of this UI element in pixels? A: Try below code: Resources resources = context.getResources(); int resourceId = resources.getIdentifier("navigation_bar_height", "dimen", "android"); if (resourceId > 0) { return resources.getDimensionPixelSize(resourceId); } return 0; A: I get navigation bar size by comparing app-usable screen size with real screen size. I assume that navigation bar is present when app-usable screen size is smaller than real screen size. Then I calculate navigation bar size. This method works with API 14 and up. public static Point getNavigationBarSize(Context context) { Point appUsableSize = getAppUsableScreenSize(context); Point realScreenSize = getRealScreenSize(context); // navigation bar on the side if (appUsableSize.x < realScreenSize.x) { return new Point(realScreenSize.x - appUsableSize.x, appUsableSize.y); } // navigation bar at the bottom if (appUsableSize.y < realScreenSize.y) { return new Point(appUsableSize.x, realScreenSize.y - appUsableSize.y); } // navigation bar is not present return new Point(); } public static Point getAppUsableScreenSize(Context context) { WindowManager windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE); Display display = windowManager.getDefaultDisplay(); Point size = new Point(); display.getSize(size); return size; } public static Point getRealScreenSize(Context context) { WindowManager windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE); Display display = windowManager.getDefaultDisplay(); Point size = new Point(); if (Build.VERSION.SDK_INT >= 17) { display.getRealSize(size); } else if (Build.VERSION.SDK_INT >= 14) { try { size.x = (Integer) Display.class.getMethod("getRawWidth").invoke(display); size.y = (Integer) Display.class.getMethod("getRawHeight").invoke(display); } catch (IllegalAccessException e) {} catch (InvocationTargetException e) {} catch (NoSuchMethodException e) {} } return size; } UPDATE For a solution that takes into account display cutouts please check John's answer. A: The NavigationBar height varies for some devices, but as well for some orientations. First you have to check if the device has a navbar, then if the device is a tablet or a not-tablet (phone) and finally you have to look at the orientation of the device in order to get the correct height. public int getNavBarHeight(Context c) { int result = 0; boolean hasMenuKey = ViewConfiguration.get(c).hasPermanentMenuKey(); boolean hasBackKey = KeyCharacterMap.deviceHasKey(KeyEvent.KEYCODE_BACK); if(!hasMenuKey && !hasBackKey) { //The device has a navigation bar Resources resources = c.getResources(); int orientation = resources.getConfiguration().orientation; int resourceId; if (isTablet(c)){ resourceId = resources.getIdentifier(orientation == Configuration.ORIENTATION_PORTRAIT ? "navigation_bar_height" : "navigation_bar_height_landscape", "dimen", "android"); } else { resourceId = resources.getIdentifier(orientation == Configuration.ORIENTATION_PORTRAIT ? 
"navigation_bar_height" : "navigation_bar_width", "dimen", "android"); } if (resourceId > 0) { return resources.getDimensionPixelSize(resourceId); } } return result; } private boolean isTablet(Context c) { return (c.getResources().getConfiguration().screenLayout & Configuration.SCREENLAYOUT_SIZE_MASK) >= Configuration.SCREENLAYOUT_SIZE_LARGE; } A: Actually the navigation bar on tablets (at least Nexus 7) has different size in portrait and landscape so this function should look like this: private int getNavigationBarHeight(Context context, int orientation) { Resources resources = context.getResources(); int id = resources.getIdentifier( orientation == Configuration.ORIENTATION_PORTRAIT ? "navigation_bar_height" : "navigation_bar_height_landscape", "dimen", "android"); if (id > 0) { return resources.getDimensionPixelSize(id); } return 0; } and in Kotlin: private fun getNavigationBarHeight(): Int { val resources: Resources = requireContext().resources val resName = if (resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT) { "navigation_bar_height" } else { "navigation_bar_height_landscape" } val id: Int = resources.getIdentifier(resName, "dimen", "android") return if (id > 0) { resources.getDimensionPixelSize(id) } else { 0 } } A: I think better answer is here because it allows you to get even cutout height too. Take your root view, and add setOnApplyWindowInsetsListener (or you can override onApplyWindowInsets from it), and take insets from it. In my camera activity, i add padding equal to the systemBars.bottom to my bottom layout. And finally, it fix cutout issue. with appcompat it is like this ViewCompat.setOnApplyWindowInsetsListener(binding.root) { v, insets -> val systemBars = insets.getInsets(WindowInsetsCompat.Type.systemBars()) binding.takePictureLayout.apply { setPaddingRelative(paddingStart, paddingTop, paddingEnd, systemBars.bottom) } return@setOnApplyWindowInsetsListener insets } without appcompat, this: mCameraSourcePreview.setOnApplyWindowInsetsListener((v, insets) -> { ... }) A: I hope this helps you public int getStatusBarHeight() { int result = 0; int resourceId = getResources().getIdentifier("status_bar_height", "dimen", "android"); if (resourceId > 0) { result = getResources().getDimensionPixelSize(resourceId); } return result; } public int getNavigationBarHeight() { boolean hasMenuKey = ViewConfiguration.get(context).hasPermanentMenuKey(); int resourceId = getResources().getIdentifier("navigation_bar_height", "dimen", "android"); if (resourceId > 0 && !hasMenuKey) { return getResources().getDimensionPixelSize(resourceId); } return 0; } A: This is my code to add paddingRight and paddingBottom to a View to dodge the Navigation Bar. I combined some of the answers here and made a special clause for landscape orientation together with isInMultiWindowMode. The key is to read navigation_bar_height, but also check config_showNavigationBar to make sure we should actually use the height. None of the previous solutions worked for me. As of Android 7.0 you have to take Multi Window Mode into consideration. This breaks the implementations comparing display.realSize with display.size since realSize gives you the dimensions of the whole screen (both split windows) and size only gives you the dimensions of your App window. Setting padding to this difference will leave your whole view being padding. /** Adds padding to a view to dodge the navigation bar. 
Unfortunately something like this needs to be done since there are no attr or dimens value available to get the navigation bar height (as of December 2016). */ public static void addNavigationBarPadding(Activity context, View v) { Resources resources = context.getResources(); if (hasNavigationBar(resources)) { int orientation = resources.getConfiguration().orientation; int size = getNavigationBarSize(resources); switch (orientation) { case Configuration.ORIENTATION_LANDSCAPE: if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N && context.isInMultiWindowMode()) { break; } v.setPadding(v.getPaddingLeft(), v.getPaddingTop(), v.getPaddingRight() + size, v.getPaddingBottom()); break; case Configuration.ORIENTATION_PORTRAIT: v.setPadding(v.getPaddingLeft(), v.getPaddingTop(), v.getPaddingRight(), v.getPaddingBottom() + size); break; } } } private static int getNavigationBarSize(Resources resources) { int resourceId = resources.getIdentifier("navigation_bar_height", "dimen", "android"); return resourceId > 0 ? resources.getDimensionPixelSize(resourceId) : 0; } private static boolean hasNavigationBar(Resources resources) { int hasNavBarId = resources.getIdentifier("config_showNavigationBar", "bool", "android"); return hasNavBarId > 0 && resources.getBoolean(hasNavBarId); } A: New answer in 2021 comes to the rescue insipred from Egis's answer: context.navigationBarHeight where the extension getter is val Context.navigationBarHeight: Int get() { val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager return if (Build.VERSION.SDK_INT >= 30) { windowManager .currentWindowMetrics .windowInsets .getInsets(WindowInsets.Type.navigationBars()) .bottom } else { val currentDisplay = try { display } catch (e: NoSuchMethodError) { windowManager.defaultDisplay } val appUsableSize = Point() val realScreenSize = Point() currentDisplay?.apply { getSize(appUsableSize) getRealSize(realScreenSize) } // navigation bar on the side if (appUsableSize.x < realScreenSize.x) { return realScreenSize.x - appUsableSize.x } // navigation bar at the bottom return if (appUsableSize.y < realScreenSize.y) { realScreenSize.y - appUsableSize.y } else 0 } } tested on: emulators with navigation bars pixel 3a (api 30) pixel 2 (api 28) pixel 3 (api 25) pixel 2 (api 21) Xiaomi Poco f2 pro with & without navigation bar(full display) A: The solution proposed by Egidijus and works perfectly for Build.VERSION.SDK_INT >= 17 But I got "NoSuchMethodException" during execution of the following statement with Build.VERSION.SDK_INT < 17 on my device: Display.class.getMethod("getRawHeight").invoke(display); I have modified the method getRealScreenSize() for such cases: else if(Build.VERSION.SDK_INT >= 14) { View decorView = getActivity().getWindow().getDecorView(); size.x = decorView.getWidth(); size.y = decorView.getHeight(); } A: I resolved this issue for all devices(including Nexus 5, Samsung Galaxy Nexus 6 edge+, Samsung S10, Samsung Note II etc.). I think this will help you to handle device dependant issues. 
Here I am adding two types of codes, Java Code(for Native Android): import android.content.Context; import android.content.res.Resources; import android.os.Build; import android.util.DisplayMetrics; import android.view.Display; import android.view.ViewConfiguration; import android.view.WindowManager; public class DeviceSpec { private int resourceID = -1; private Display display = null; private DisplayMetrics displayMetrics = null; private DisplayMetrics realDisplayMetrics = null; private Resources resources = null; private WindowManager windowManager = null; public double GetNavigationBarHeight(Context context) { try { windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE); display = windowManager.getDefaultDisplay(); displayMetrics = new DisplayMetrics(); if(Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH_MR1) { realDisplayMetrics = new DisplayMetrics(); display.getMetrics(displayMetrics); display.getRealMetrics(realDisplayMetrics); if(displayMetrics.heightPixels != realDisplayMetrics.heightPixels) { resources = context.getResources(); return GetNavigationBarSize(context); } } else { resources = context.getResources(); resourceID = resources.getIdentifier("config_showNavigationBar", "bool", "android"); if (resourceID > 0 && resources.getBoolean(resourceID)) return GetNavigationBarSize(context); } } catch (Exception e){ e.printStackTrace(); } return 0; } private double GetNavigationBarSize(Context context) { resourceID = resources.getIdentifier("navigation_bar_height", "dimen", "android"); if (resourceID > 0 && ViewConfiguration.get(context).hasPermanentMenuKey()) return (resources.getDimensionPixelSize(resourceID) / displayMetrics.density); return 0; } } And C# code(for Xamarin Forms/Android) int resourceId = -1; IWindowManager windowManager = null; Display defaultDisplay = null; DisplayMetrics displayMatrics = null; DisplayMetrics realMatrics = null; Resources resources = null; public double NavigationBarHeight { get { try { windowManager = Forms.Context.GetSystemService(Context.WindowService).JavaCast<IWindowManager>(); defaultDisplay = windowManager.DefaultDisplay; displayMatrics = new DisplayMetrics(); if (Build.VERSION.SdkInt >= BuildVersionCodes.JellyBeanMr2) { realMatrics = new DisplayMetrics(); defaultDisplay.GetMetrics(displayMatrics); defaultDisplay.GetRealMetrics(realMatrics); if (displayMatrics.HeightPixels != realMatrics.HeightPixels) { resources = Forms.Context.Resources; return GetHeightOfNivigationBar(); } } else { resources = Forms.Context.Resources; resourceId = resources.GetIdentifier("config_showNavigationBar", "bool", "android"); if (resourceId > 0 && resources.GetBoolean(resourceId)) return GetHeightOfNivigationBar(); } } catch (Exception e) { } return 0; } } private double GetHeightOfNivigationBar() { resourceId = resources.GetIdentifier("navigation_bar_height", "dimen", "android"); if (!ViewConfiguration.Get(Forms.Context).HasPermanentMenuKey && resourceId > 0) { return resources.GetDimensionPixelSize(resourceId) / displayMatrics.Density; } return 0; } A: Tested code for getting height of navigation bar (in pixels): public static int getNavBarHeight(Context c) { int resourceId = c.getResources() .getIdentifier("navigation_bar_height", "dimen", "android"); if (resourceId > 0) { return c.getResources().getDimensionPixelSize(resourceId); } return 0; } Tested code for getting height of status bar (in pixels): public static int getStatusBarHeight(Context c) { int resourceId = c.getResources() .getIdentifier("status_bar_height", 
"dimen", "android"); if (resourceId > 0) { return c.getResources().getDimensionPixelSize(resourceId); } return 0; } Converting pixels to dp: public static int pxToDp(int px) { return (int) (px / Resources.getSystem().getDisplayMetrics().density); } A: How to get the height of the navigation bar and status bar. This code works for me on some Huawei devices and Samsung devices. Egis's solution above is good, however, it is still incorrect on some devices. So, I improved it. This is code to get the height of status bar private fun getStatusBarHeight(resources: Resources): Int { var result = 0 val resourceId = resources.getIdentifier("status_bar_height", "dimen", "android") if (resourceId > 0) { result = resources.getDimensionPixelSize(resourceId) } return result } This method always returns the height of navigation bar even when the navigation bar is hidden. private fun getNavigationBarHeight(resources: Resources): Int { val resourceId = resources.getIdentifier("navigation_bar_height", "dimen", "android") return if (resourceId > 0) { resources.getDimensionPixelSize(resourceId) } else 0 } NOTE: on Samsung A70, this method returns the height of the status bar + height of the navigation bar. On other devices (Huawei), it only returns the height of the Navigation bar and returns 0 when the navigation bar is hidden. private fun getNavigationBarHeight(): Int { val display = activity?.windowManager?.defaultDisplay return if (display == null) { 0 } else { val realMetrics = DisplayMetrics() display.getRealMetrics(realMetrics) val metrics = DisplayMetrics() display.getMetrics(metrics) realMetrics.heightPixels - metrics.heightPixels } } This is code to get height of navigation bar and status bar val metrics = DisplayMetrics() activity?.windowManager?.defaultDisplay?.getRealMetrics(metrics) //resources is got from activity //NOTE: on SamSung A70, this height = height of status bar + height of Navigation bar //On other devices (Huawei), this height = height of Navigation bar val navigationBarHeightOrNavigationBarPlusStatusBarHeight = getNavigationBarHeight() val statusBarHeight = getStatusBarHeight(resources) //The method will always return the height of navigation bar even when the navigation bar was hidden. 
val realNavigationBarHeight = getNavigationBarHeight(resources)

val realHeightOfStatusBarAndNavigationBar =
    if (navigationBarHeightOrNavigationBarPlusStatusBarHeight == 0 || navigationBarHeightOrNavigationBarPlusStatusBarHeight < statusBarHeight) {
        //Huawei: navigation bar is hidden
        statusBarHeight
    } else if (navigationBarHeightOrNavigationBarPlusStatusBarHeight == realNavigationBarHeight) {
        //Huawei: navigation bar is visible
        statusBarHeight + realNavigationBarHeight
    } else if (navigationBarHeightOrNavigationBarPlusStatusBarHeight < realNavigationBarHeight) {
        //Samsung A70: navigation bar is still visible, but it only displays as an underline
        //navigationBarHeightOrNavigationBarPlusStatusBarHeight = navigationBarHeight (underline) + statusBarHeight
        navigationBarHeightOrNavigationBarPlusStatusBarHeight
    } else {
        //Samsung A70: navigation bar is visible
        //navigationBarHeightOrNavigationBarPlusStatusBarHeight == statusBarHeight + realNavigationBarHeight
        navigationBarHeightOrNavigationBarPlusStatusBarHeight
    }

A: I've done this, it works on every device I tested, and even on emulators:

// Return the NavigationBar height in pixels if it is present, otherwise return 0
public static int getNavigationBarHeight(Activity activity) {
    Rect rectangle = new Rect();
    DisplayMetrics displayMetrics = new DisplayMetrics();
    activity.getWindow().getDecorView().getWindowVisibleDisplayFrame(rectangle);
    activity.getWindowManager().getDefaultDisplay().getRealMetrics(displayMetrics);
    return displayMetrics.heightPixels - (rectangle.top + rectangle.height());
}

A: Combining the answer from @egis and others - this works well on a variety of devices, tested on Pixel EMU, Samsung S6, Sony Z3, Nexus 4. This code uses the display dimensions to test for the availability of a nav bar and then uses the actual system nav bar size if present.

/**
 * Calculates the system navigation bar size.
 */
public final class NavigationBarSize {

    private final int systemNavBarHeight;
    @NonNull
    private final Point navBarSize;

    public NavigationBarSize(@NonNull Context context) {
        Resources resources = context.getResources();
        int displayOrientation = resources.getConfiguration().orientation;
        final String name;
        switch (displayOrientation) {
            case Configuration.ORIENTATION_PORTRAIT:
                name = "navigation_bar_height";
                break;
            default:
                name = "navigation_bar_height_landscape";
        }
        int id = resources.getIdentifier(name, "dimen", "android");
        systemNavBarHeight = id > 0 ? resources.getDimensionPixelSize(id) : 0;
        navBarSize = getNavigationBarSize(context);
    }

    public void adjustBottomPadding(@NonNull View view, @DimenRes int defaultHeight) {
        int height = 0;
        if (navBarSize.y > 0) {
            // the device has a nav bar, get the correct size from the system
            height = systemNavBarHeight;
        }
        if (height == 0) {
            // fallback to default
            height = view.getContext().getResources().getDimensionPixelSize(defaultHeight);
        }
        view.setPadding(0, 0, 0, height);
    }

    @NonNull
    private static Point getNavigationBarSize(@NonNull Context context) {
        Point appUsableSize = new Point();
        Point realScreenSize = new Point();
        WindowManager windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        if (windowManager != null) {
            Display display = windowManager.getDefaultDisplay();
            display.getSize(appUsableSize);
            display.getRealSize(realScreenSize);
        }
        return new Point(realScreenSize.x - appUsableSize.x, realScreenSize.y - appUsableSize.y);
    }

}

A: Simple One-line Solution

As suggested in many of the above answers, for example

https://stackoverflow.com/a/29938139/9640177
https://stackoverflow.com/a/26118045/9640177
https://stackoverflow.com/a/50775459/9640177
https://stackoverflow.com/a/41057024/9640177

Simply getting the navigation bar height may not be enough. We need to consider whether 1. the navigation bar exists, 2. it is on the bottom, on the right, or on the left, and 3. the app is open in multi-window mode.
Fortunately, you can easily bypass all the long coding by simply setting android:fitsSystemWindows="true" in your root layout. The Android system will automatically take care of adding the necessary padding to the root layout to make sure that the child views don't get into the navigation bar or status bar regions.
There is a simple one-line solution

android:fitsSystemWindows="true"

or programmatically

findViewById(R.id.your_root_view).setFitsSystemWindows(true);

You may also get the root view by

findViewById(android.R.id.content).getRootView();
or
getWindow().getDecorView().findViewById(android.R.id.content)

For more details on getting the root view refer to https://stackoverflow.com/a/4488149/9640177

A: The height of the bottom navigation bar is 48dp (in both portrait and landscape mode) and is 42dp when the bar is placed vertically.

A: Here is how I solved this. I made a hideable bottom bar which needed padding depending on whether there was a navigation bar or not (capacitive, on-screen, or just pre-Lollipop).

View

setPadding(0, 0, 0, Utils.hasNavBar(getContext()) ? 30 : 0);

Utils.java

public static boolean hasNavBar(Context context) {
    // Kitkat and less shows container above nav bar
    if (android.os.Build.VERSION.SDK_INT <= Build.VERSION_CODES.KITKAT) {
        return false;
    }
    // Emulator
    if (Build.FINGERPRINT.startsWith("generic")) {
        return true;
    }
    boolean hasMenuKey = ViewConfiguration.get(context).hasPermanentMenuKey();
    boolean hasBackKey = KeyCharacterMap.deviceHasKey(KeyEvent.KEYCODE_BACK);
    boolean hasNoCapacitiveKeys = !hasMenuKey && !hasBackKey;
    Resources resources = context.getResources();
    int id = resources.getIdentifier("config_showNavigationBar", "bool", "android");
    boolean hasOnScreenNavBar = id > 0 && resources.getBoolean(id);
    return hasOnScreenNavBar || hasNoCapacitiveKeys || getNavigationBarHeight(context, true) > 0;
}

public static int getNavigationBarHeight(Context context, boolean skipRequirement) {
    int resourceId = context.getResources().getIdentifier("navigation_bar_height", "dimen", "android");
    if (resourceId > 0 && (skipRequirement || hasNavBar(context))) {
        return context.getResources().getDimensionPixelSize(resourceId);
    }
    return 0;
}

A: In my case where I wanted to have something like this:

I had to follow the same thing as suggested by @Mdlc, but probably slightly simpler (targeting only >= 21):

//kotlin
val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager
val realSize = Point()
windowManager.defaultDisplay.getRealSize(realSize);
val usableRect = Rect()
windowManager.defaultDisplay.getRectSize(usableRect)
Toast.makeText(this, "Usable Screen: " + usableRect + " real:" + realSize, Toast.LENGTH_LONG).show()

window.decorView.setPadding(usableRect.left, usableRect.top, realSize.x - usableRect.right, realSize.y - usableRect.bottom)

It works on landscape too:

Edit
The above solution does not work correctly in multi-window mode, where the usable rectangle is not smaller just due to the navigation bar but also because of the custom window size.
One thing that I noticed is that in multi-window mode the navigation bar is not hovering over the app, so even with no changes to DecorView padding we have the correct behaviour:

Note the difference between how the navigation bar is hovering over the bottom of the app in these two scenarios.
Fortunately, this is easy to fix. We can check if the app is in multi-window mode. The code below also includes the part to calculate and adjust the position of the toolbar (full solution: https://stackoverflow.com/a/14213035/477790)

// kotlin
// Let the window flow into where window decorations are
window.addFlags(WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN)
window.addFlags(WindowManager.LayoutParams.FLAG_LAYOUT_NO_LIMITS)

// calculate where the bottom of the page should end up, considering the navigation bar (back buttons, ...)
val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager
val realSize = Point()
windowManager.defaultDisplay.getRealSize(realSize);
val usableRect = Rect()
windowManager.defaultDisplay.getRectSize(usableRect)
Toast.makeText(this, "Usable Screen: " + usableRect + " real:" + realSize, Toast.LENGTH_LONG).show()

if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N || !isInMultiWindowMode) {
    window.decorView.setPadding(usableRect.left, usableRect.top, realSize.x - usableRect.right, realSize.y - usableRect.bottom)
    // move toolbar/appbar further down to where it should be and not to overlap with status bar
    val layoutParams = ConstraintLayout.LayoutParams(appBarLayout.layoutParams as ConstraintLayout.LayoutParams)
    layoutParams.topMargin = getSystemSize(Constants.statusBarHeightKey)
    appBarLayout.layoutParams = layoutParams
}

Result on Samsung popup mode:

A: In the case of the Samsung S8, none of the above provided methods were giving the proper height of the navigation bar, so I used the KeyboardHeightProvider (keyboard height provider android). It gave me the height in negative values, and for my layout positioning I adjusted that value in the calculations.
Here is KeyboardHeightProvider.java:

import android.app.Activity;
import android.content.res.Configuration;
import android.graphics.Point;
import android.graphics.Rect;
import android.graphics.drawable.ColorDrawable;
import android.view.Gravity;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewTreeObserver.OnGlobalLayoutListener;
import android.view.WindowManager.LayoutParams;
import android.widget.PopupWindow;


/**
 * The keyboard height provider, this class uses a PopupWindow
 * to calculate the window height when the floating keyboard is opened and closed.
 */
public class KeyboardHeightProvider extends PopupWindow {

    /** The tag for logging purposes */
    private final static String TAG = "sample_KeyboardHeightProvider";

    /** The keyboard height observer */
    private KeyboardHeightObserver observer;

    /** The cached landscape height of the keyboard */
    private int keyboardLandscapeHeight;

    /** The cached portrait height of the keyboard */
    private int keyboardPortraitHeight;

    /** The view that is used to calculate the keyboard height */
    private View popupView;

    /** The parent view */
    private View parentView;

    /** The root activity that uses this KeyboardHeightProvider */
    private Activity activity;

    /**
     * Construct a new KeyboardHeightProvider
     *
     * @param activity The parent activity
     */
    public KeyboardHeightProvider(Activity activity) {
        super(activity);
        this.activity = activity;

        LayoutInflater inflator = (LayoutInflater) activity.getSystemService(Activity.LAYOUT_INFLATER_SERVICE);
        this.popupView = inflator.inflate(R.layout.popupwindow, null, false);
        setContentView(popupView);

        setSoftInputMode(LayoutParams.SOFT_INPUT_ADJUST_RESIZE | LayoutParams.SOFT_INPUT_STATE_ALWAYS_VISIBLE);
        setInputMethodMode(PopupWindow.INPUT_METHOD_NEEDED);

        parentView = activity.findViewById(android.R.id.content);

        setWidth(0);
        setHeight(LayoutParams.MATCH_PARENT);

        popupView.getViewTreeObserver().addOnGlobalLayoutListener(new OnGlobalLayoutListener() {

            @Override
            public void onGlobalLayout() {
                if (popupView != null) {
                    handleOnGlobalLayout();
                }
            }
        });
    }

    /**
     * Start the KeyboardHeightProvider, this must be called after the onResume of the Activity.
     * PopupWindows are not allowed to be registered before the onResume has finished
     * of the Activity.
     */
    public void start() {

        if (!isShowing() && parentView.getWindowToken() != null) {
            setBackgroundDrawable(new ColorDrawable(0));
            showAtLocation(parentView, Gravity.NO_GRAVITY, 0, 0);
        }
    }

    /**
     * Close the keyboard height provider,
     * this provider will not be used anymore.
     */
    public void close() {
        this.observer = null;
        dismiss();
    }

    /**
     * Set the keyboard height observer to this provider. The
     * observer will be notified when the keyboard height has changed.
     * For example when the keyboard is opened or closed.
     *
     * @param observer The observer to be added to this provider.
     */
    public void setKeyboardHeightObserver(KeyboardHeightObserver observer) {
        this.observer = observer;
    }

    /**
     * Get the screen orientation
     *
     * @return the screen orientation
     */
    private int getScreenOrientation() {
        return activity.getResources().getConfiguration().orientation;
    }

    /**
     * Popup window itself is as big as the window of the Activity.
     * The keyboard can then be calculated by extracting the popup view bottom
     * from the activity window height.
     */
    private void handleOnGlobalLayout() {

        Point screenSize = new Point();
        activity.getWindowManager().getDefaultDisplay().getSize(screenSize);

        Rect rect = new Rect();
        popupView.getWindowVisibleDisplayFrame(rect);

        // REMIND, you may like to change this using the fullscreen size of the phone
        // and also using the status bar and navigation bar heights of the phone to calculate
        // the keyboard height. But this worked fine on a Nexus.
        int orientation = getScreenOrientation();
        int keyboardHeight = screenSize.y - rect.bottom;

        if (keyboardHeight == 0) {
            notifyKeyboardHeightChanged(0, orientation);
        }
        else if (orientation == Configuration.ORIENTATION_PORTRAIT) {
            this.keyboardPortraitHeight = keyboardHeight;
            notifyKeyboardHeightChanged(keyboardPortraitHeight, orientation);
        }
        else {
            this.keyboardLandscapeHeight = keyboardHeight;
            notifyKeyboardHeightChanged(keyboardLandscapeHeight, orientation);
        }
    }

    private void notifyKeyboardHeightChanged(int height, int orientation) {
        if (observer != null) {
            observer.onKeyboardHeightChanged(height, orientation);
        }
    }

    public interface KeyboardHeightObserver {
        void onKeyboardHeightChanged(int height, int orientation);
    }
}

popupwindow.xml:

<?xml version="1.0" encoding="utf-8"?>
<View
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/popuplayout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@android:color/transparent"
    android:orientation="horizontal"/>

Usage in MainActivity:

import android.os.Bundle
import android.support.v7.app.AppCompatActivity
import kotlinx.android.synthetic.main.activity_main.*

/**
 * Created by nileshdeokar on 22/02/2018.
 */
class MainActivity : AppCompatActivity(), KeyboardHeightProvider.KeyboardHeightObserver {

    private lateinit var keyboardHeightProvider : KeyboardHeightProvider


    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        keyboardHeightProvider = KeyboardHeightProvider(this)
        parentActivityView.post { keyboardHeightProvider?.start() }
    }

    override fun onKeyboardHeightChanged(height: Int, orientation: Int) {
        // In case of 18:9 - e.g. Samsung S8
        // here you get the height of the navigation bar as a negative value when the keyboard is closed,
        // and some positive integer when the keyboard is opened.
    }

    public override fun onPause() {
        super.onPause()
        keyboardHeightProvider?.setKeyboardHeightObserver(null)
    }

    public override fun onResume() {
        super.onResume()
        keyboardHeightProvider?.setKeyboardHeightObserver(this)
    }

    public override fun onDestroy() {
        super.onDestroy()
        keyboardHeightProvider?.close()
    }
}

For any further help, you can have a look at the advanced usage of this here.

A: My version to handle cutouts + navigation bar

fun View.getCutoutRect(): Rect {
    return when {
        isInEditMode -> {
            val cutout = context.dpToPx(16f).roundToInt()
            Rect(cutout, cutout, cutout, cutout)
        }
        Build.VERSION.SDK_INT >= Build.VERSION_CODES.M -> {
            val windowInsets = (context as? AppCompatActivity)?.window?.decorView?.rootWindowInsets ?: run {
                requestLayout()
                return Rect()
            }
            val cutout = WindowInsetsCompat.toWindowInsetsCompat(windowInsets).displayCutout
            val systemBars = WindowInsetsCompat.toWindowInsetsCompat(windowInsets).getInsets(WindowInsetsCompat.Type.systemBars())

            Rect(
                maxOf(cutout?.safeInsetLeft ?: 0, systemBars.left),
                maxOf(cutout?.safeInsetTop ?: 0, systemBars.top),
                maxOf(cutout?.safeInsetRight ?: 0, systemBars.right),
                maxOf(cutout?.safeInsetBottom ?: 0, systemBars.bottom),
            )
        }
        else -> {
            val savedRect = (this.getTag(R.id.view_insets_tag_id) as? Rect) ?: Rect()
            ViewCompat.setOnApplyWindowInsetsListener(this) { v, insets ->
                val cutout = insets.displayCutout
                val systemBars = insets.getInsets(WindowInsetsCompat.Type.systemBars())
                val rect = Rect(
                    maxOf(cutout?.safeInsetLeft ?: 0, systemBars.left),
                    maxOf(cutout?.safeInsetTop ?: 0, systemBars.top),
                    maxOf(cutout?.safeInsetRight ?: 0, systemBars.right),
                    maxOf(cutout?.safeInsetBottom ?: 0, systemBars.bottom),
                )
                this.setTag(R.id.view_insets_tag_id, rect)
                if (savedRect != rect) {
                    requestLayout()
                }
                return@setOnApplyWindowInsetsListener insets
            }
            this.requestApplyInsets()
            savedRect
        }
    }
}

A: I suggest using these two Context extensions for getting the status bar height and the bottom navigation bar height, both in dp:

Status bar height in dp

val Context.statusBarHeightInDp
    get() = run {
        val resourceId = this.resources.getIdentifier(
            "status_bar_height",
            "dimen",
            "android"
        )
        this.resources.getDimensionPixelSize(resourceId) / this.resources.displayMetrics.density
    }

Bottom nav bar height in dp

val Context.navBarHeightInDp
    get() = run {
        val resourceId = this.resources.getIdentifier(
            "navigation_bar_height",
            "dimen",
            "android"
        )
        this.resources.getDimensionPixelSize(resourceId) / this.resources.displayMetrics.density
    }

A: From Android R (SDK 30+), you can use this code to get the size of the status bar and navigation bar:

WindowInsets insets = activity.getWindowManager().getCurrentWindowMetrics().getWindowInsets();
int statusBarHeight = insets.getInsets(WindowInsetsCompat.Type.statusBars()).top; //in pixels
int navigationBarHeight = insets.getInsets(WindowInsetsCompat.Type.navigationBars()).bottom; //in pixels

A: To obtain the height in the layout XML itself (useful for the last element in a recycler view when clipToPadding is false), you can use the attribute actionBarSize:

android:paddingBottom="?attr/actionBarSize"
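As a closing note: most of the answers above either read the navigation_bar_height resource or diff the real and usable display sizes, and both have device-specific caveats discussed throughout this thread. Below is a minimal Kotlin sketch of the insets-based approach that the last few answers point toward; it is illustrative rather than canonical, the helper name is made up, and it assumes the androidx.core dependency (WindowInsetsCompat dispatches correctly on API 21+):

import android.view.View
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat

fun observeNavigationBarSize(root: View, onSize: (widthPx: Int, heightPx: Int) -> Unit) {
    ViewCompat.setOnApplyWindowInsetsListener(root) { _, insets ->
        val nav = insets.getInsets(WindowInsetsCompat.Type.navigationBars())
        // A bottom bar reports its height in `bottom`; a side bar
        // (landscape handsets) reports its width in `left` or `right`.
        onSize(maxOf(nav.left, nav.right), nav.bottom)
        insets
    }
    // Ask for a dispatch in case insets were already applied before we attached.
    ViewCompat.requestApplyInsets(root)
}

Called from onCreate, for example, as observeNavigationBarSize(findViewById(android.R.id.content)) { w, h -> ... }. Unlike the static resource lookup, this reports the inset actually in effect, including the smaller bar used by gesture navigation.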
How do I get the height and width of the Android Navigation Bar programmatically?
The black navigation bar on the bottom of the screen is not easily removable in Android. It has been part of Android since 3.0 as a replacement for hardware buttons. Here is a picture: How can I get the size of the width and the height of this UI element in pixels?
[ "Try below code:\nResources resources = context.getResources();\nint resourceId = resources.getIdentifier(\"navigation_bar_height\", \"dimen\", \"android\");\nif (resourceId > 0) {\n return resources.getDimensionPixelSize(resourceId);\n}\nreturn 0;\n\n", "I get navigation bar size by comparing app-usable screen size with real screen size. I assume that navigation bar is present when app-usable screen size is smaller than real screen size. Then I calculate navigation bar size. This method works with API 14 and up.\npublic static Point getNavigationBarSize(Context context) {\n Point appUsableSize = getAppUsableScreenSize(context);\n Point realScreenSize = getRealScreenSize(context);\n\n // navigation bar on the side\n if (appUsableSize.x < realScreenSize.x) {\n return new Point(realScreenSize.x - appUsableSize.x, appUsableSize.y);\n }\n\n // navigation bar at the bottom\n if (appUsableSize.y < realScreenSize.y) {\n return new Point(appUsableSize.x, realScreenSize.y - appUsableSize.y);\n }\n\n // navigation bar is not present\n return new Point();\n}\n\npublic static Point getAppUsableScreenSize(Context context) {\n WindowManager windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);\n Display display = windowManager.getDefaultDisplay();\n Point size = new Point();\n display.getSize(size);\n return size;\n}\n\npublic static Point getRealScreenSize(Context context) {\n WindowManager windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);\n Display display = windowManager.getDefaultDisplay();\n Point size = new Point();\n\n if (Build.VERSION.SDK_INT >= 17) {\n display.getRealSize(size);\n } else if (Build.VERSION.SDK_INT >= 14) {\n try {\n size.x = (Integer) Display.class.getMethod(\"getRawWidth\").invoke(display);\n size.y = (Integer) Display.class.getMethod(\"getRawHeight\").invoke(display);\n } catch (IllegalAccessException e) {} catch (InvocationTargetException e) {} catch (NoSuchMethodException e) {}\n }\n\n return size;\n}\n\nUPDATE\nFor a solution that takes into account display cutouts please check John's answer.\n", "The NavigationBar height varies for some devices, but as well for some orientations. First you have to check if the device has a navbar, then if the device is a tablet or a not-tablet (phone) and finally you have to look at the orientation of the device in order to get the correct height.\npublic int getNavBarHeight(Context c) {\n int result = 0;\n boolean hasMenuKey = ViewConfiguration.get(c).hasPermanentMenuKey();\n boolean hasBackKey = KeyCharacterMap.deviceHasKey(KeyEvent.KEYCODE_BACK);\n\n if(!hasMenuKey && !hasBackKey) {\n //The device has a navigation bar\n Resources resources = c.getResources();\n\n int orientation = resources.getConfiguration().orientation;\n int resourceId;\n if (isTablet(c)){\n resourceId = resources.getIdentifier(orientation == Configuration.ORIENTATION_PORTRAIT ? \"navigation_bar_height\" : \"navigation_bar_height_landscape\", \"dimen\", \"android\");\n } else {\n resourceId = resources.getIdentifier(orientation == Configuration.ORIENTATION_PORTRAIT ? 
\"navigation_bar_height\" : \"navigation_bar_width\", \"dimen\", \"android\"); \n }\n\n if (resourceId > 0) {\n return resources.getDimensionPixelSize(resourceId);\n }\n }\n return result;\n} \n\n\nprivate boolean isTablet(Context c) {\n return (c.getResources().getConfiguration().screenLayout\n & Configuration.SCREENLAYOUT_SIZE_MASK)\n >= Configuration.SCREENLAYOUT_SIZE_LARGE;\n}\n\n", "Actually the navigation bar on tablets (at least Nexus 7) has different size in portrait and landscape so this function should look like this:\nprivate int getNavigationBarHeight(Context context, int orientation) {\n Resources resources = context.getResources();\n\n int id = resources.getIdentifier(\n orientation == Configuration.ORIENTATION_PORTRAIT ? \"navigation_bar_height\" : \"navigation_bar_height_landscape\",\n \"dimen\", \"android\");\n if (id > 0) {\n return resources.getDimensionPixelSize(id);\n }\n return 0;\n}\n\nand in Kotlin:\nprivate fun getNavigationBarHeight(): Int {\n val resources: Resources = requireContext().resources\n\n val resName = if (resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT) {\n \"navigation_bar_height\"\n } else {\n \"navigation_bar_height_landscape\"\n }\n\n val id: Int = resources.getIdentifier(resName, \"dimen\", \"android\")\n\n return if (id > 0) {\n resources.getDimensionPixelSize(id)\n } else {\n 0\n }\n}\n\n", "I think better answer is here because it allows you to get even cutout height too.\nTake your root view, and add setOnApplyWindowInsetsListener (or you can override onApplyWindowInsets from it), and take insets from it.\nIn my camera activity, i add padding equal to the systemBars.bottom to my bottom layout. And finally, it fix cutout issue.\n\nwith appcompat it is like this\nViewCompat.setOnApplyWindowInsetsListener(binding.root) { v, insets ->\n val systemBars = insets.getInsets(WindowInsetsCompat.Type.systemBars())\n binding.takePictureLayout.apply {\n setPaddingRelative(paddingStart, paddingTop, paddingEnd, systemBars.bottom)\n }\n return@setOnApplyWindowInsetsListener insets\n}\n\nwithout appcompat, this:\nmCameraSourcePreview.setOnApplyWindowInsetsListener((v, insets) -> { ... })\n\n", "I hope this helps you\npublic int getStatusBarHeight() {\n int result = 0;\n int resourceId = getResources().getIdentifier(\"status_bar_height\", \"dimen\", \"android\");\n if (resourceId > 0) {\n result = getResources().getDimensionPixelSize(resourceId);\n }\n return result;\n}\n\npublic int getNavigationBarHeight()\n{\n boolean hasMenuKey = ViewConfiguration.get(context).hasPermanentMenuKey();\n int resourceId = getResources().getIdentifier(\"navigation_bar_height\", \"dimen\", \"android\");\n if (resourceId > 0 && !hasMenuKey)\n {\n return getResources().getDimensionPixelSize(resourceId);\n }\n return 0;\n}\n\n", "This is my code to add paddingRight and paddingBottom to a View to dodge the Navigation Bar. I combined some of the answers here and made a special clause for landscape orientation together with isInMultiWindowMode. The key is to read navigation_bar_height, but also check config_showNavigationBar to make sure we should actually use the height.\nNone of the previous solutions worked for me. As of Android 7.0 you have to take Multi Window Mode into consideration. This breaks the implementations comparing display.realSize with display.size since realSize gives you the dimensions of the whole screen (both split windows) and size only gives you the dimensions of your App window. 
Setting padding to this difference will leave your whole view being padding.\n/** Adds padding to a view to dodge the navigation bar.\n\n Unfortunately something like this needs to be done since there\n are no attr or dimens value available to get the navigation bar\n height (as of December 2016). */\npublic static void addNavigationBarPadding(Activity context, View v) {\n Resources resources = context.getResources();\n if (hasNavigationBar(resources)) {\n int orientation = resources.getConfiguration().orientation;\n int size = getNavigationBarSize(resources);\n switch (orientation) {\n case Configuration.ORIENTATION_LANDSCAPE:\n if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N &&\n context.isInMultiWindowMode()) { break; }\n v.setPadding(v.getPaddingLeft(), v.getPaddingTop(),\n v.getPaddingRight() + size, v.getPaddingBottom());\n break;\n case Configuration.ORIENTATION_PORTRAIT:\n v.setPadding(v.getPaddingLeft(), v.getPaddingTop(),\n v.getPaddingRight(), v.getPaddingBottom() + size);\n break;\n }\n }\n}\n\nprivate static int getNavigationBarSize(Resources resources) {\n int resourceId = resources.getIdentifier(\"navigation_bar_height\",\n \"dimen\", \"android\");\n return resourceId > 0 ? resources.getDimensionPixelSize(resourceId) : 0;\n}\n\nprivate static boolean hasNavigationBar(Resources resources) {\n int hasNavBarId = resources.getIdentifier(\"config_showNavigationBar\",\n \"bool\", \"android\");\n return hasNavBarId > 0 && resources.getBoolean(hasNavBarId);\n}\n\n", "New answer in 2021 comes to the rescue\n\ninsipred from Egis's answer:\ncontext.navigationBarHeight\n\nwhere the extension getter is\nval Context.navigationBarHeight: Int\nget() {\n val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager\n\n return if (Build.VERSION.SDK_INT >= 30) {\n windowManager\n .currentWindowMetrics\n .windowInsets\n .getInsets(WindowInsets.Type.navigationBars())\n .bottom\n\n } else {\n val currentDisplay = try {\n display\n } catch (e: NoSuchMethodError) {\n windowManager.defaultDisplay\n }\n\n val appUsableSize = Point()\n val realScreenSize = Point()\n currentDisplay?.apply {\n getSize(appUsableSize)\n getRealSize(realScreenSize)\n }\n\n // navigation bar on the side\n if (appUsableSize.x < realScreenSize.x) {\n return realScreenSize.x - appUsableSize.x\n }\n\n // navigation bar at the bottom\n return if (appUsableSize.y < realScreenSize.y) {\n realScreenSize.y - appUsableSize.y\n } else 0\n }\n}\n\ntested on:\n\nemulators with navigation bars\n\npixel 3a (api 30)\npixel 2 (api 28)\npixel 3 (api 25)\npixel 2 (api 21)\n\n\nXiaomi Poco f2 pro with & without navigation bar(full display)\n\n", "The solution proposed by Egidijus and works perfectly for Build.VERSION.SDK_INT >= 17\nBut I got \"NoSuchMethodException\" during execution of the following statement with Build.VERSION.SDK_INT < 17 on my device:\nDisplay.class.getMethod(\"getRawHeight\").invoke(display);\n\nI have modified the method getRealScreenSize() for such cases:\nelse if(Build.VERSION.SDK_INT >= 14) \n{\n View decorView = getActivity().getWindow().getDecorView();\n size.x = decorView.getWidth();\n size.y = decorView.getHeight();\n}\n\n", "I resolved this issue for all devices(including Nexus 5, Samsung Galaxy Nexus 6 edge+, Samsung S10, Samsung Note II etc.). 
I think this will help you to handle device dependant issues.\nHere I am adding two types of codes,\nJava Code(for Native Android):\nimport android.content.Context;\nimport android.content.res.Resources;\nimport android.os.Build;\nimport android.util.DisplayMetrics;\nimport android.view.Display;\nimport android.view.ViewConfiguration;\nimport android.view.WindowManager;\n\npublic class DeviceSpec {\n\n private int resourceID = -1;\n private Display display = null;\n private DisplayMetrics displayMetrics = null;\n private DisplayMetrics realDisplayMetrics = null;\n private Resources resources = null;\n private WindowManager windowManager = null;\n\n public double GetNavigationBarHeight(Context context) {\n try {\n windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);\n display = windowManager.getDefaultDisplay();\n displayMetrics = new DisplayMetrics();\n if(Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH_MR1) {\n realDisplayMetrics = new DisplayMetrics();\n display.getMetrics(displayMetrics);\n display.getRealMetrics(realDisplayMetrics);\n if(displayMetrics.heightPixels != realDisplayMetrics.heightPixels) {\n resources = context.getResources();\n return GetNavigationBarSize(context);\n }\n }\n else {\n resources = context.getResources();\n resourceID = resources.getIdentifier(\"config_showNavigationBar\", \"bool\", \"android\");\n if (resourceID > 0 && resources.getBoolean(resourceID))\n return GetNavigationBarSize(context);\n }\n }\n catch (Exception e){\n e.printStackTrace();\n }\n return 0;\n }\n\n private double GetNavigationBarSize(Context context) {\n resourceID = resources.getIdentifier(\"navigation_bar_height\", \"dimen\", \"android\");\n if (resourceID > 0 && ViewConfiguration.get(context).hasPermanentMenuKey())\n return (resources.getDimensionPixelSize(resourceID) / displayMetrics.density);\n return 0;\n }\n}\n\nAnd C# code(for Xamarin Forms/Android)\nint resourceId = -1;\n IWindowManager windowManager = null;\n Display defaultDisplay = null;\n DisplayMetrics displayMatrics = null;\n DisplayMetrics realMatrics = null;\n Resources resources = null;\n\n public double NavigationBarHeight\n {\n get\n {\n try\n {\n windowManager = Forms.Context.GetSystemService(Context.WindowService).JavaCast<IWindowManager>();\n defaultDisplay = windowManager.DefaultDisplay;\n displayMatrics = new DisplayMetrics();\n if (Build.VERSION.SdkInt >= BuildVersionCodes.JellyBeanMr2)\n {\n realMatrics = new DisplayMetrics();\n defaultDisplay.GetMetrics(displayMatrics);\n defaultDisplay.GetRealMetrics(realMatrics);\n if (displayMatrics.HeightPixels != realMatrics.HeightPixels)\n {\n resources = Forms.Context.Resources;\n return GetHeightOfNivigationBar();\n }\n }\n else {\n resources = Forms.Context.Resources;\n resourceId = resources.GetIdentifier(\"config_showNavigationBar\", \"bool\", \"android\");\n if (resourceId > 0 && resources.GetBoolean(resourceId))\n return GetHeightOfNivigationBar();\n }\n }\n catch (Exception e) { }\n return 0;\n }\n }\n\n private double GetHeightOfNivigationBar()\n {\n resourceId = resources.GetIdentifier(\"navigation_bar_height\", \"dimen\", \"android\");\n if (!ViewConfiguration.Get(Forms.Context).HasPermanentMenuKey && resourceId > 0)\n {\n return resources.GetDimensionPixelSize(resourceId) / displayMatrics.Density;\n }\n return 0;\n }\n\n", "Tested code for getting height of navigation bar (in pixels):\npublic static int getNavBarHeight(Context c) {\n int resourceId = c.getResources()\n .getIdentifier(\"navigation_bar_height\", 
\"dimen\", \"android\");\n if (resourceId > 0) {\n return c.getResources().getDimensionPixelSize(resourceId);\n }\n return 0;\n}\n\nTested code for getting height of status bar (in pixels):\npublic static int getStatusBarHeight(Context c) {\n int resourceId = c.getResources()\n .getIdentifier(\"status_bar_height\", \"dimen\", \"android\");\n if (resourceId > 0) {\n return c.getResources().getDimensionPixelSize(resourceId);\n }\n return 0;\n}\n\nConverting pixels to dp:\npublic static int pxToDp(int px) {\n return (int) (px / Resources.getSystem().getDisplayMetrics().density);\n}\n\n", "How to get the height of the navigation bar and status bar. This code works for me on some Huawei devices and Samsung devices.\nEgis's solution above is good, however, it is still incorrect on some devices. So, I improved it.\nThis is code to get the height of status bar\nprivate fun getStatusBarHeight(resources: Resources): Int {\n var result = 0\n val resourceId = resources.getIdentifier(\"status_bar_height\", \"dimen\", \"android\")\n if (resourceId > 0) {\n result = resources.getDimensionPixelSize(resourceId)\n }\n return result\n }\n\nThis method always returns the height of navigation bar even when the navigation bar is hidden.\nprivate fun getNavigationBarHeight(resources: Resources): Int {\n val resourceId = resources.getIdentifier(\"navigation_bar_height\", \"dimen\", \"android\")\n return if (resourceId > 0) {\n resources.getDimensionPixelSize(resourceId)\n } else 0\n}\n\nNOTE: on Samsung A70, this method returns the height of the status bar + height of the navigation bar.\nOn other devices (Huawei), it only returns the height of the Navigation bar and returns 0 when the navigation bar is hidden.\nprivate fun getNavigationBarHeight(): Int {\n val display = activity?.windowManager?.defaultDisplay\n return if (display == null) {\n 0\n } else {\n val realMetrics = DisplayMetrics()\n display.getRealMetrics(realMetrics)\n val metrics = DisplayMetrics()\n display.getMetrics(metrics)\n realMetrics.heightPixels - metrics.heightPixels\n }\n }\n\nThis is code to get height of navigation bar and status bar\nval metrics = DisplayMetrics()\n activity?.windowManager?.defaultDisplay?.getRealMetrics(metrics)\n\n //resources is got from activity\n\n //NOTE: on SamSung A70, this height = height of status bar + height of Navigation bar\n //On other devices (Huawei), this height = height of Navigation bar\n val navigationBarHeightOrNavigationBarPlusStatusBarHeight = getNavigationBarHeight()\n\n val statusBarHeight = getStatusBarHeight(resources)\n //The method will always return the height of navigation bar even when the navigation bar was hidden.\n val realNavigationBarHeight = getNavigationBarHeight(resources)\n\n val realHeightOfStatusBarAndNavigationBar =\n if (navigationBarHeightOrNavigationBarPlusStatusBarHeight == 0 || navigationBarHeightOrNavigationBarPlusStatusBarHeight < statusBarHeight) {\n //Huawei: navigation bar is hidden\n statusBarHeight\n } else if (navigationBarHeightOrNavigationBarPlusStatusBarHeight == realNavigationBarHeight) {\n //Huawei: navigation bar is visible\n statusBarHeight + realNavigationBarHeight\n } else if (navigationBarHeightOrNavigationBarPlusStatusBarHeight < realNavigationBarHeight) {\n //SamSung A70: navigation bar is still visible but it only displays as a under line\n //navigationBarHeightOrNavigationBarPlusStatusBarHeight = navigationBarHeight'(under line) + statusBarHeight\n navigationBarHeightOrNavigationBarPlusStatusBarHeight\n } else {\n //SamSung A70: navigation bar is 
visible\n //navigationBarHeightOrNavigationBarPlusStatusBarHeight == statusBarHeight + realNavigationBarHeight\n navigationBarHeightOrNavigationBarPlusStatusBarHeight\n }\n\n", "\nI've done this, it works on every device I tested, and even on emulators:\n// Return the NavigationBar height in pixels if it is present, otherwise return 0\npublic static int getNavigationBarHeight(Activity activity) {\n Rect rectangle = new Rect();\n DisplayMetrics displayMetrics = new DisplayMetrics();\n activity.getWindow().getDecorView().getWindowVisibleDisplayFrame(rectangle);\n activity.getWindowManager().getDefaultDisplay().getRealMetrics(displayMetrics);\n return displayMetrics.heightPixels - (rectangle.top + rectangle.height());\n}\n\n", "Combining the answer from @egis and others - this works well on a variety of devices, tested on Pixel EMU, Samsung S6, Sony Z3, Nexus 4. This code uses the display dimensions to test for availability of nav bar and then uses the actual system nav bar size if present.\n\n\n/**\r\n * Calculates the system navigation bar size.\r\n */\r\n\r\npublic final class NavigationBarSize {\r\n\r\n private final int systemNavBarHeight;\r\n @NonNull\r\n private final Point navBarSize;\r\n\r\n public NavigationBarSize(@NonNull Context context) {\r\n Resources resources = context.getResources();\r\n int displayOrientation = resources.getConfiguration().orientation;\r\n final String name;\r\n switch (displayOrientation) {\r\n case Configuration.ORIENTATION_PORTRAIT:\r\n name = \"navigation_bar_height\";\r\n break;\r\n default:\r\n name = \"navigation_bar_height_landscape\";\r\n }\r\n int id = resources.getIdentifier(name, \"dimen\", \"android\");\r\n systemNavBarHeight = id > 0 ? resources.getDimensionPixelSize(id) : 0;\r\n navBarSize = getNavigationBarSize(context);\r\n }\r\n\r\n public void adjustBottomPadding(@NonNull View view, @DimenRes int defaultHeight) {\r\n int height = 0;\r\n if (navBarSize.y > 0) {\r\n // the device has a nav bar, get the correct size from the system\r\n height = systemNavBarHeight;\r\n }\r\n if (height == 0) {\r\n // fallback to default\r\n height = view.getContext().getResources().getDimensionPixelSize(defaultHeight);\r\n }\r\n view.setPadding(0, 0, 0, height);\r\n }\r\n\r\n @NonNull\r\n private static Point getNavigationBarSize(@NonNull Context context) {\r\n Point appUsableSize = new Point();\r\n Point realScreenSize = new Point();\r\n WindowManager windowManager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);\r\n if (windowManager != null) {\r\n Display display = windowManager.getDefaultDisplay();\r\n display.getSize(appUsableSize);\r\n display.getRealSize(realScreenSize);\r\n }\r\n return new Point(realScreenSize.x - appUsableSize.x, realScreenSize.y - appUsableSize.y);\r\n }\r\n\r\n}\n\n\n\n", "\nSimple One-line Solution\n\nAs suggested in many of above answers, for example\n\nhttps://stackoverflow.com/a/29938139/9640177\nhttps://stackoverflow.com/a/26118045/9640177\nhttps://stackoverflow.com/a/50775459/9640177\nhttps://stackoverflow.com/a/41057024/9640177\n\nSimply getting navigation bar height may not be enough. We need to consider whether 1. navigation bar exists, 2. is it on the bottom, or right or left, 3. is app open in multi-window mode.\nFortunately you can easily bypass all the long coding by simply setting android:fitsSystemWindows=\"true\" in your root layout. 
Android system will automatically take care of adding necessary padding to the root layout to make sure that the child views don't get into the navigation bar or statusbar regions.\nThere is a simple one line solution\nandroid:fitsSystemWindows=\"true\"\n\nor programatically\nfindViewById(R.id.your_root_view).setFitsSystemWindows(true);\n\nyou may also get root view by \nfindViewById(android.R.id.content).getRootView();\nor\ngetWindow().getDecorView().findViewById(android.R.id.content)\n\nFor more details on getting root-view refer - https://stackoverflow.com/a/4488149/9640177\n", "The height of the bottom Navigation bar is 48dp (in both portrait and landscape mode) and is 42dp when the bar is placed vertically.\n", "Here is how I solved this. I made a hideable bottom bar which needed padding depending on if there was a navigation bar or not (capacitive, on-screen or just pre lollipop).\n\nView\nsetPadding(0, 0, 0, Utils.hasNavBar(getContext()) ? 30 : 0);\n\n\nUtils.java\npublic static boolean hasNavBar(Context context) {\n // Kitkat and less shows container above nav bar\n if (android.os.Build.VERSION.SDK_INT <= Build.VERSION_CODES.KITKAT) {\n return false;\n }\n // Emulator\n if (Build.FINGERPRINT.startsWith(\"generic\")) {\n return true;\n }\n boolean hasMenuKey = ViewConfiguration.get(context).hasPermanentMenuKey();\n boolean hasBackKey = KeyCharacterMap.deviceHasKey(KeyEvent.KEYCODE_BACK);\n boolean hasNoCapacitiveKeys = !hasMenuKey && !hasBackKey;\n Resources resources = context.getResources();\n int id = resources.getIdentifier(\"config_showNavigationBar\", \"bool\", \"android\");\n boolean hasOnScreenNavBar = id > 0 && resources.getBoolean(id);\n return hasOnScreenNavBar || hasNoCapacitiveKeys || getNavigationBarHeight(context, true) > 0;\n}\n\npublic static int getNavigationBarHeight(Context context, boolean skipRequirement) {\n int resourceId = context.getResources().getIdentifier(\"navigation_bar_height\", \"dimen\", \"android\");\n if (resourceId > 0 && (skipRequirement || hasNavBar(context))) {\n return context.getResources().getDimensionPixelSize(resourceId);\n }\n return 0;\n}\n\n", "In my case where I wanted to have something like this:\n\nI had to follow the same thing as suggested by @Mdlc but probably slightly simpler (targeting only >= 21):\n //kotlin\n val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager\n val realSize = Point()\n windowManager.defaultDisplay.getRealSize(realSize);\n val usableRect = Rect()\n windowManager.defaultDisplay.getRectSize(usableRect)\n Toast.makeText(this, \"Usable Screen: \" + usableRect + \" real:\"+realSize, Toast.LENGTH_LONG).show()\n\n window.decorView.setPadding(usableRect.left, usableRect.top, realSize.x - usableRect.right, realSize.y - usableRect.bottom)\n\nIt works on landscape too:\n\nEdit\nThe above solution does not work correctly in multi-window mode where the usable rectangle is not smaller just due to the navigation bar but also because of custom window size.\nOne thing that I noticed is that in multi-window the navigation bar is not hovering over the app so even with no changes to DecorView padding we have the correct behaviour:\n\n\nNote the difference between how navigation bar is hovering over the bottom of the app in these to scenarios.\nFortunately, this is easy to fix. We can check if app is multi window. 
The code below also includes the part to calculate and adjust the position of toolbar (full solution: https://stackoverflow.com/a/14213035/477790)\n // kotlin\n // Let the window flow into where window decorations are\n window.addFlags(WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN)\n window.addFlags(WindowManager.LayoutParams.FLAG_LAYOUT_NO_LIMITS)\n\n // calculate where the bottom of the page should end up, considering the navigation bar (back buttons, ...)\n val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager\n val realSize = Point()\n windowManager.defaultDisplay.getRealSize(realSize);\n val usableRect = Rect()\n windowManager.defaultDisplay.getRectSize(usableRect)\n Toast.makeText(this, \"Usable Screen: \" + usableRect + \" real:\" + realSize, Toast.LENGTH_LONG).show()\n\n if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N || !isInMultiWindowMode) {\n window.decorView.setPadding(usableRect.left, usableRect.top, realSize.x - usableRect.right, realSize.y - usableRect.bottom)\n // move toolbar/appbar further down to where it should be and not to overlap with status bar\n val layoutParams = ConstraintLayout.LayoutParams(appBarLayout.layoutParams as ConstraintLayout.LayoutParams)\n layoutParams.topMargin = getSystemSize(Constants.statusBarHeightKey)\n appBarLayout.layoutParams = layoutParams\n }\n\nResult on Samsung popup mode:\n\n", "In case of Samsung S8 none of the above provided methods were giving proper height of navigation bar so I used the KeyboardHeightProvider keyboard height provider android. And it gave me height in negative values and for my layout positioning I adjusted that value in calculations. \nHere is KeyboardHeightProvider.java :\nimport android.app.Activity;\nimport android.content.res.Configuration;\nimport android.graphics.Point;\nimport android.graphics.Rect;\nimport android.graphics.drawable.ColorDrawable;\nimport android.view.Gravity;\nimport android.view.LayoutInflater;\nimport android.view.View;\nimport android.view.ViewTreeObserver.OnGlobalLayoutListener;\nimport android.view.WindowManager.LayoutParams;\nimport android.widget.PopupWindow;\n\n\n/**\n * The keyboard height provider, this class uses a PopupWindow\n * to calculate the window height when the floating keyboard is opened and closed. 
\n */\npublic class KeyboardHeightProvider extends PopupWindow {\n\n /** The tag for logging purposes */\n private final static String TAG = \"sample_KeyboardHeightProvider\";\n\n /** The keyboard height observer */\n private KeyboardHeightObserver observer;\n\n /** The cached landscape height of the keyboard */\n private int keyboardLandscapeHeight;\n\n /** The cached portrait height of the keyboard */\n private int keyboardPortraitHeight;\n\n /** The view that is used to calculate the keyboard height */\n private View popupView;\n\n /** The parent view */\n private View parentView;\n\n /** The root activity that uses this KeyboardHeightProvider */\n private Activity activity;\n\n /** \n * Construct a new KeyboardHeightProvider\n * \n * @param activity The parent activity\n */\n public KeyboardHeightProvider(Activity activity) {\n super(activity);\n this.activity = activity;\n\n LayoutInflater inflator = (LayoutInflater) activity.getSystemService(Activity.LAYOUT_INFLATER_SERVICE);\n this.popupView = inflator.inflate(R.layout.popupwindow, null, false);\n setContentView(popupView);\n\n setSoftInputMode(LayoutParams.SOFT_INPUT_ADJUST_RESIZE | LayoutParams.SOFT_INPUT_STATE_ALWAYS_VISIBLE);\n setInputMethodMode(PopupWindow.INPUT_METHOD_NEEDED);\n\n parentView = activity.findViewById(android.R.id.content);\n\n setWidth(0);\n setHeight(LayoutParams.MATCH_PARENT);\n\n popupView.getViewTreeObserver().addOnGlobalLayoutListener(new OnGlobalLayoutListener() {\n\n @Override\n public void onGlobalLayout() {\n if (popupView != null) {\n handleOnGlobalLayout();\n }\n }\n });\n }\n\n /**\n * Start the KeyboardHeightProvider, this must be called after the onResume of the Activity.\n * PopupWindows are not allowed to be registered before the onResume has finished\n * of the Activity.\n */\n public void start() {\n\n if (!isShowing() && parentView.getWindowToken() != null) {\n setBackgroundDrawable(new ColorDrawable(0));\n showAtLocation(parentView, Gravity.NO_GRAVITY, 0, 0);\n }\n }\n\n /**\n * Close the keyboard height provider, \n * this provider will not be used anymore.\n */\n public void close() {\n this.observer = null;\n dismiss();\n }\n\n /** \n * Set the keyboard height observer to this provider. The \n * observer will be notified when the keyboard height has changed. \n * For example when the keyboard is opened or closed.\n * \n * @param observer The observer to be added to this provider.\n */\n public void setKeyboardHeightObserver(KeyboardHeightObserver observer) {\n this.observer = observer;\n }\n\n /**\n * Get the screen orientation\n *\n * @return the screen orientation\n */\n private int getScreenOrientation() {\n return activity.getResources().getConfiguration().orientation;\n }\n\n /**\n * Popup window itself is as big as the window of the Activity. \n * The keyboard can then be calculated by extracting the popup view bottom \n * from the activity window height. \n */\n private void handleOnGlobalLayout() {\n\n Point screenSize = new Point();\n activity.getWindowManager().getDefaultDisplay().getSize(screenSize);\n\n Rect rect = new Rect();\n popupView.getWindowVisibleDisplayFrame(rect);\n\n // REMIND, you may like to change this using the fullscreen size of the phone\n // and also using the status bar and navigation bar heights of the phone to calculate\n // the keyboard height. 
But this worked fine on a Nexus.\n int orientation = getScreenOrientation();\n int keyboardHeight = screenSize.y - rect.bottom;\n\n if (keyboardHeight == 0) {\n notifyKeyboardHeightChanged(0, orientation);\n }\n else if (orientation == Configuration.ORIENTATION_PORTRAIT) {\n this.keyboardPortraitHeight = keyboardHeight; \n notifyKeyboardHeightChanged(keyboardPortraitHeight, orientation);\n } \n else {\n this.keyboardLandscapeHeight = keyboardHeight; \n notifyKeyboardHeightChanged(keyboardLandscapeHeight, orientation);\n }\n }\n\n /**\n *\n */\n private void notifyKeyboardHeightChanged(int height, int orientation) {\n if (observer != null) {\n observer.onKeyboardHeightChanged(height, orientation);\n }\n }\n\n public interface KeyboardHeightObserver {\n void onKeyboardHeightChanged(int height, int orientation);\n }\n}\n\npopupwindow.xml :\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<View\n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n android:id=\"@+id/popuplayout\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:background=\"@android:color/transparent\"\n android:orientation=\"horizontal\"/>\n\nUsage in MainActivity \nimport android.os.Bundle\nimport android.support.v7.app.AppCompatActivity\nimport kotlinx.android.synthetic.main.activity_main.*\n\n/**\n * Created by nileshdeokar on 22/02/2018.\n */\nclass MainActivity : AppCompatActivity() , KeyboardHeightProvider.KeyboardHeightObserver {\n\n private lateinit var keyboardHeightProvider : KeyboardHeightProvider\n\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n\n keyboardHeightProvider = KeyboardHeightProvider(this)\n parentActivityView.post { keyboardHeightProvider?.start() }\n }\n\n override fun onKeyboardHeightChanged(height: Int, orientation: Int) {\n // In case of 18:9 - e.g. Samsung S8\n // here you get the height of the navigation bar as negative value when keyboard is closed.\n // and some positive integer when keyboard is opened.\n }\n\n public override fun onPause() {\n super.onPause()\n keyboardHeightProvider?.setKeyboardHeightObserver(null)\n }\n\n public override fun onResume() {\n super.onResume()\n keyboardHeightProvider?.setKeyboardHeightObserver(this)\n }\n\n public override fun onDestroy() {\n super.onDestroy()\n keyboardHeightProvider?.close()\n }\n}\n\nFor any further help you can have a look at advanced usage of this here.\n", "My version to handle cutouts + navigation bar\nfun View.getCutoutRect(): Rect {\n return when {\n isInEditMode -> {\n val cutout = context.dpToPx(16f).roundToInt()\n Rect(cutout, cutout, cutout, cutout)\n }\n Build.VERSION.SDK_INT >= Build.VERSION_CODES.M -> {\n val windowInsets = (context as? AppCompatActivity)?.window?.decorView?.rootWindowInsets ?: run {\n requestLayout()\n return Rect()\n }\n val cutout = WindowInsetsCompat.toWindowInsetsCompat(windowInsets).displayCutout\n val systemBars = WindowInsetsCompat.toWindowInsetsCompat(windowInsets).getInsets(WindowInsetsCompat.Type.systemBars())\n\n Rect(\n maxOf(cutout?.safeInsetLeft ?: 0, systemBars.left),\n maxOf(cutout?.safeInsetTop ?: 0, systemBars.top),\n maxOf(cutout?.safeInsetRight ?: 0, systemBars.right),\n maxOf(cutout?.safeInsetBottom ?: 0, systemBars.bottom),\n )\n }\n else -> {\n val savedRect = (this.getTag(R.id.view_insets_tag_id) as? 
Rect) ?: Rect()\n ViewCompat.setOnApplyWindowInsetsListener(this) { v, insets ->\n val cutout = insets.displayCutout\n val systemBars = insets.getInsets(WindowInsetsCompat.Type.systemBars())\n val rect = Rect(\n maxOf(cutout?.safeInsetLeft ?: 0, systemBars.left),\n maxOf(cutout?.safeInsetTop ?: 0, systemBars.top),\n maxOf(cutout?.safeInsetRight ?: 0, systemBars.right),\n maxOf(cutout?.safeInsetBottom ?: 0, systemBars.bottom),\n )\n this.setTag(R.id.view_insets_tag_id, rect)\n if (savedRect != rect) {\n requestLayout()\n }\n return@setOnApplyWindowInsetsListener insets\n }\n this.requestApplyInsets()\n savedRect\n }\n }\n}\n\n", "I suggest using the two Context extensions for getting status bar height in px and bottom navigation bar height in dp\nStatus bar height in dp\nval Context.statusBarHeightInDp\n get() = run {\n val resourceId = this.resources.getIdentifier(\n \"status_bar_height\",\n \"dimen\",\n \"android\"\n )\n this.resources.getDimensionPixelSize(resourceId) / this.resources.displayMetrics.density\n }\n\nBottom nav bar height in dp\nval Context.navBarHeightInDp\n get() = run {\n val resourceId = this.resources.getIdentifier(\n \"navigation_bar_height\",\n \"dimen\",\n \"android\"\n )\n this.resources.getDimensionPixelSize(resourceId) / this.resources.displayMetrics.density\n }\n\n", "From Android R (SDK 30+), you can use this code to get size of status bar and navigation bar\nWindowInsets insets = activity.getWindowManager().getCurrentWindowMetrics().getWindowInsets();\nint statusBarHeight = insets.getInsets(WindowInsetsCompat.Type.statusBars()).top; //in pixels\nint navigationBarHeight = insets.getInsets(WindowInsetsCompat.Type.navigationBars()).bottom; //in pixels\n\n", "To obtain the height in the layout XML itself (useful for the last element in a recycler view when clipToPadding is false) you can use the attribute actionBarSize:\nandroid:paddingBottom=\"?attr/actionBarSize\"\n\n" ]
[ 205, 108, 45, 32, 32, 15, 5, 3, 2, 2, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "android", "android_activity", "graphics", "user_interface" ]
stackoverflow_0020264268_android_android_activity_graphics_user_interface.txt
Q: Updating State in React Component causing it to get unmounted

I have a component wherein I need to fetch some data and render it. The component gets rendered initially. The problem I'm facing is that when the handler function switchDocumentType is called after clicking the button for a particular type, the whole component gets unmounted/un-rendered. While debugging on my own, I found this happens after setDocumentType is run inside the event handler function. What is wrong in the below code snippet that could possibly cause this issue? I can see the useEffect is not going into an infinite loop as well.
Code snippet:

import * as React from 'react';

const MyComponent = (props) => {
  const [documentType, setDocumentType] = React.useState('alpha');
  const [documentData, setDocumentData] = React.useState('');
  const types = ['alpha', 'beta', 'gamma'];

  React.useEffect(() => {
    myDataFetch('https://example.com/foo/?bar=123').then(async (response) => {
      const data = await response.json();
      setDocumentData(data.terms); // html string
      const myDiv = document.getElementById('spacial-div');
      myDiv.innerHTML = data; // need to render raw HTML inside a div
    });
  }, [documentType]);

  const switchDocumentType = (type) => {
    setDocumentType(type);
    // send some analytics events
  };

  const convertToPDF = () => {
    // uses documentData to generate PDF
  };

  return (
    <div className="container-div">
      {types.map((type) => {
        return (
          <button key={type} onClick={(type) => switchDocumentType(type)}>
            {type}
          </button>
        );
      })}
      <div id="special-div" />
    </div>
  );
};

export default MyComponent;

A: Do not use useEffect as a handler; use useEffect hooks for initialization.
Instead of using/setting innerHTML, let React do it for you.
I suppose you have myDataFetch defined somewhere, and I don't see your data fetch using the type.
Anyways, try to use the modified code below.

import * as React from 'react';

const MyComponent = (props) => {
  const [documentType, setDocumentType] = React.useState('alpha');
  const [documentData, setDocumentData] = React.useState('');
  const types = ['alpha', 'beta', 'gamma'];

  const fetchData = async () => {
    const response = await myDataFetch('https://example.com/foo/?bar=123')
    const data = await response.json();
    setDocumentData(data);
  }

  React.useEffect(() => {
    fetchData();
  }, []);

  const switchDocumentType = async (e, type) => {
    e.preventDefault();
    setDocumentType(type);
    await fetchData();
    // send some analytics events
  };

  return (
    <div className="container-div">
      {types.map((type) => {
        return (
          <button key={type} onClick={(e) => switchDocumentType(e, type)}>
            {type}
          </button>
        );
      })}
      <div id="special-div">{documentData}</div>
    </div>
  );
};

export default MyComponent;

A: You shouldn't edit the DOM directly. React has two DOMs, a virtual DOM and a real DOM. Rendering can be a bit finicky if you decide to edit the real DOM.
You can parse HTML safely by using html-react-parser. This is the best way to do it, because it becomes part of the React tree, whereas dangerouslySetInnerHTML will replace the entire HTML to flush changes to the DOM. With reconciliation, it can create exponential load times.
It will also sanitize your inputs, you know.. for safety. :)

import parse from 'html-react-parser';

const SpecialDiv = ({html}) => {
  const reactElement = parse(html);
  return reactElement
}

If you decide that you must use dangerouslySetInnerHTML, you can do it as so:

const [someHTML, setSomeHTML] = useState(null)

const someFunction = async() => {
  const response = await getData();
  const data = await response.json();

  setSomeHTML(data);
}

return(
  <div>
    {someHTML && <div dangerouslySetInnerHTML={{__html: someHTML}} id="special-div"/>}
  </div>
)

That being said, I would say that by allowing this, you open yourself up to the possibility of an XSS attack without properly parsing and purifying your inputs.

A: Not sure why, but returning a cleanup function inside useEffect solved the issue. Also, I refactored the code as suggested by @iaq and @sheepiiHD to follow React best practices.
Updated code:

import * as React from 'react';

const MyComponent = (props) => {
  const [documentType, setDocumentType] = React.useState('alpha');
  const [documentData, setDocumentData] = React.useState('');
  const types = ['alpha', 'beta', 'gamma'];

  const fetchData = async () => {
    const response = await myDataFetch('https://example.com/foo/?bar=123')
    const data = await response.json();
    setDocumentData(data);
  }

  React.useEffect(() => {
    fetchData();
    return () => {
      setDocumentType('');
      setDocumentData('');
    };
  }, []);

  const switchDocumentType = async (e, type) => {
    e.preventDefault();
    setDocumentType(type);
    await fetchData();
    // send some analytics events
  };

  return (
    <div className="container-div">
      {types.map((type) => {
        return (
          <button key={type} onClick={(e) => switchDocumentType(e, type)}>
            {type}
          </button>
        );
      })}
      <div id="special-div" dangerouslySetInnerHTML={{__html: documentData.terms}} />
    </div>
  );
};

export default MyComponent;
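For what it's worth, a common cause of this exact symptom is a setState call landing after the component has gone away, for example from a stale fetch; note too that the question's effect looks up 'spacial-div' while the rendered id is 'special-div', so myDiv is null there. Below is a minimal sketch of the effect with the request made cancellable. This is illustrative, not the poster's exact code: it reuses the question's URL and state setter and relies on the standard AbortController support in fetch.

React.useEffect(() => {
  const controller = new AbortController();
  fetch('https://example.com/foo/?bar=123', { signal: controller.signal })
    .then((response) => response.json())
    .then((data) => setDocumentData(data.terms)) // aborted requests never reach here
    .catch((err) => {
      if (err.name !== 'AbortError') throw err; // ignore cancellations, surface real errors
    });
  // cleanup runs before each re-run and on unmount, so no late setState can fire
  return () => controller.abort();
}, [documentType]);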
Updating State in React Component causing it to get unmounted
I have a component where-in I need to fetch some data and render it. The component gets rendered initially. The problem I'm facing is when the handler function switchDocumentType is called after clicking the button for a particular type, the whole component gets unmounted/un-rendered. While debugging on my own I found this happens after setDocumentType is run inside event handler function. What is wrong in the below code snippet that could possibly cause this issue? I can see the useEffect is not going in infinite-loop as well. Code snippet: import * as React from 'react'; const MyComponent = (props) => { const [documentType, setDocumentType] = React.useState('alpha'); const [documentData, setDocumentData] = React.useState(''); const types = ['alpha', 'beta', 'gamma']; React.useEffect(() => { myDataFetch('https://example.com/foo/?bar=123').then(async (response) => { const data = await response.json(); setDocumentData(data.terms); // html string const myDiv = document.getElementById('spacial-div'); myDiv.innerHTML = data; // need to render raw HTML inside a div }); }, [documentType]); const switchDocumentType = (type) => { setDocumentType(type); // send some analytics events }; const convertToPDF = () => { // uses documentData to generate PDF }; return ( <div className="container-div"> {types.map((type) => { return ( <button key={type} onClick={(type) => switchDocumentType(type)}> {type} </button> ); })} <div id="special-div" /> </div> ); }; export default MyComponent;
[ "Do not use useEffect as handler, use useEffect hooks for initializations.\nInstead of using/setting innerHtml, let react do it for you.\nI suppose you have myDataFetch defined somewhere and I don't see your data fetch using the type.\nAnyways, try to use the modified code below.\n import * as React from 'react';\n\nconst MyComponent = (props) => {\n const [documentType, setDocumentType] = React.useState('alpha');\n const [documentData, setDocumentData] = React.useState('');\n const types = ['alpha', 'beta', 'gamma'];\n\n const fetchData = async () => {\n const response = await myDataFetch('https://example.com/foo/?bar=123')\n const data = await response.json();\n setDocumentData(data);\n }\n\n React.useEffect(() => {\n fetchData();\n }, []);\n\n const switchDocumentType = async (e, type) => {\n e.preventDefault();\n setDocumentType(type);\n await fetchData();\n // send some analytics events\n };\n\n return (\n <div className=\"container-div\">\n {types.map((type) => {\n return (\n <button key={type} onClick={(e) => switchDocumentType(e, type)}>\n {type}\n </button>\n );\n })}\n <div id=\"special-div\">{documentData}</div>\n </div>\n );\n};\n\nexport default MyComponent;\n\n", "You shouldn't edit the DOM directly. React has two DOMs, a virtual DOM and a real DOM. Rendering can be a bit finicky if you decide to edit the real DOM.\nYou can parse html safely, by using html-react-parser. This is the best way to do it, because it becomes part of the react tree whereas dangerouslySetInnerHTML will replace the entire HTML to flush changes to the DOM. With reconciliation, it can create exponential load times.\nIt will also sanitize your inputs, you know.. for safety. :)\nimport parse from 'html-react-parser';\n\nconst SpecialDiv = ({html}) => {\n const reactElement = parse(html);\n return reactElement\n}\n\nIf you decide that you must use dangerouslySetInnerHTML you can do it as so:\nconst [someHTML, setSomeHTML] = useState(null)\n\nconst someFunction = async() => {\n const response = await getData();\n const data = await response.json();\n\n setSomeHTML(data);\n}\n\nreturn( \n <div>\n {someHTML && <div dangerouslySetInnerHTML={{__html: someHTML}} id=\"special-div\"/>}\n </div>\n)\n\n\nThat being said, I would say that by allowing this, you open yourself up to the possibility of a XSS attack, without properly parsing and purifying your inputs.\n", "Not sure why but returning a cleanup function inside useEffect solved the issue. 
Also, I refactored the code as suggested by @iaq and @sheepiiHD to follow React best practices.\nUpdated code:\nimport * as React from 'react';\n\nconst MyComponent = (props) => {\n const [documentType, setDocumentType] = React.useState('alpha');\n const [documentData, setDocumentData] = React.useState('');\n const types = ['alpha', 'beta', 'gamma'];\n\n const fetchData = async () => {\n const response = await myDataFetch('https://example.com/foo/?bar=123')\n const data = await response.json();\n setDocumentData(data);\n }\n\n React.useEffect(() => {\n fetchData();\n return () => {\n setDocumentType('');\n setDocumentData('');\n };\n }, []);\n\n const switchDocumentType = async (e, type) => {\n e.preventDefault();\n setDocumentType(type);\n await fetchData();\n // send some analytics events\n };\n\n return (\n <div className=\"container-div\">\n {types.map((type) => {\n return (\n <button key={type} onClick={(e) => switchDocumentType(e, type)}>\n {type}\n </button>\n );\n })}\n <div id=\"special-div\" dangerouslySetInnerHTML={{__html: documentData.terms}} />\n </div>\n );\n};\n\nexport default MyComponent;\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "javascript", "reactjs" ]
stackoverflow_0074668802_javascript_reactjs.txt
Q: Command does not work during discord bot creation I typed !mycommand2 [name] [role] because nothing happened when I typed the command !캐릭터생성 [name] [role], but it still does not work. Why? Also, what is the role of description (is it like an annotation — does it just explain to the developer what this command is, without any functional role)? And I also wonder about the hidden option of commands. char = I want to make an instance.....char.py has class char. import discord, asyncio import char from discord.ext import commands intents=discord.Intents.all() client = discord.Client(intents=intents) bot = commands.Bot(command_prefix='!',intents=intents,help_command=None) @client.event async def on_ready(): await client.change_presence(status=discord.Status.online, activity=discord.Game("언성듀엣!")) @bot.command(name="테스트", description="테스트용 함수", hidden=False) #set hidden to True to hide it in the help async def mycommand1(ctx, argument1, argument2): await ctx.channel.send ("{} | {}, Hello".format(ctx.author, ctx.author.mention)) await ctx.author.send ("{} | {}, User, Hello".format(ctx.author, ctx.author.mention)) char_num = 1 @bot.command(name="캐릭터생성", description="테스트용 함수", hidden=False) #set hidden to True to hide it in the help async def mycommand2(ctx, context1, context2): global char_num globals()['char_{}'.format(char_num)]=char(name=context1,Sffter=context2,username=ctx.author.name) char_num+=1 await ctx.message.channel.send ("done", context1,"!") client.run('-') A: Change the function name to the command name: async def urcommandname(ctx, arg1, arg2):
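A minimal sketch of how a named command is normally wired up in discord.py, assuming the goal is for !캐릭터생성 to respond regardless of the Python function name (with name= set, users type the name, and the function name is only the callback's identifier). Beyond the suggestion above, one likely culprit in the original snippet appears to be that client.run('-') starts the plain discord.Client while bot — the commands.Bot that actually owns the commands — is never run, so its commands can never fire. The token string below is a placeholder:

import discord
from discord.ext import commands

intents = discord.Intents.all()
bot = commands.Bot(command_prefix='!', intents=intents)

# Invoked as: !캐릭터생성 <name> <role> — the callback's name does not matter.
# hidden=True would hide the command from the default help output.
@bot.command(name="캐릭터생성", description="테스트용 함수", hidden=False)
async def create_character(ctx, name: str, role: str):
    await ctx.send("done {}!".format(name))

bot.run("YOUR_TOKEN")  # run the Bot itself, not a separate Client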
Command does not work during discord bot creation
I typed !mycommand2 [name] [role] because nothing happened when I typed the command !캐릭터생성 [name] [role], but it still does not work. Why? Also, what is the role of description (is it like an annotation — does it just explain to the developer what this command is, without any functional role)? And I also wonder about the hidden option of commands. char = I want to make an instance.....char.py has class char. import discord, asyncio import char from discord.ext import commands intents=discord.Intents.all() client = discord.Client(intents=intents) bot = commands.Bot(command_prefix='!',intents=intents,help_command=None) @client.event async def on_ready(): await client.change_presence(status=discord.Status.online, activity=discord.Game("언성듀엣!")) @bot.command(name="테스트", description="테스트용 함수", hidden=False) #set hidden to True to hide it in the help async def mycommand1(ctx, argument1, argument2): await ctx.channel.send ("{} | {}, Hello".format(ctx.author, ctx.author.mention)) await ctx.author.send ("{} | {}, User, Hello".format(ctx.author, ctx.author.mention)) char_num = 1 @bot.command(name="캐릭터생성", description="테스트용 함수", hidden=False) #set hidden to True to hide it in the help async def mycommand2(ctx, context1, context2): global char_num globals()['char_{}'.format(char_num)]=char(name=context1,Sffter=context2,username=ctx.author.name) char_num+=1 await ctx.message.channel.send ("done", context1,"!") client.run('-')
[ "Change the function name to the command name\nasync def urcommandname(ctx,arg1,arg2):\n\n" ]
[ 0 ]
[]
[]
[ "discord", "python" ]
stackoverflow_0074673267_discord_python.txt
Q: Terminal in Visual Studio (17.4) for Mac turns purple and code doesn't run on console I recently downloaded Visual Studio for my M2 MacBook Air, the program worked as expected for a few days, and my simple programs ran fine on the console, but then out of the blue, terminal turned purple, and doesn't allow me to run my program on it. The problem occurs when I press the button that usually runs the code. The program then opens up the terminal, but it doesn't run my code. I tried uninstalling and reinstalling Visual Studio, that helped for a day or two, but the problem came right back. I think the problem may have to do with trying to open .cs files just by double clicking them, and not opening them properly, as I may have done that when the problem first occurred, but I don't recall exactly what I did. It's not only with a specific file I try to use, but with all files. I'm completely stumped, and any help would be greatly appreciated! Thanks in advance, I'm attaching a screenshot of the issue below: screenshot of problem A: Remove the cache directory and restart Visual Studio % cd ~/Library/Caches/VisualStudio % ls 17.0 rm -rf 17.0 (From a French user) To remove this unwanted effect, try: close Visual Studio for Mac, delete the cache folder, restart Visual Studio
Terminal in Visual Studio (17.4) for Mac turns purple and code doesn't run on console
I recently downloaded Visual Studio for my M2 MacBook Air, the program worked as expected for a few days, and my simple programs ran fine on the console, but then out of the blue, terminal turned purple, and doesn't allow me to run my program on it. The problem occurs when I press the button that usually runs the code. The program then opens up the terminal, but it doesn't run my code. I tried uninstalling and reinstalling Visual Studio, that helped for a day or two, but the problem came right back. I think the problem may have to do with trying to open .cs files just by double clicking them, and not opening them properly, as I may have done that when the problem first occurred, but I don't recall exactly what I did. It's not only with a specific file I try to use, but with all files. I'm completely stumped, and any help would be greatly appreciated! Thanks in advance, I'm attaching a screenshot of the issue below: screenshot of problem
[ "Remove the cache directory and restart Visual Studio\n% cd ~/Library/Caches/VisualStudio\n% ls \n17.0\nrm -rf 17.0\n\n(French user)\nPour supprimer cet effet indésirable, essayez de :\n\nfermer Visual Studio pour Mac\nsupprimer le dossier de cache\nredémarrer Visual Studio\n\n" ]
[ 0 ]
[]
[]
[ "c#", "terminal", "visual_studio_mac_2022" ]
stackoverflow_0074649480_c#_terminal_visual_studio_mac_2022.txt
Q: Assembly: Label or instruction expected at start of line I am new to learning the assembly language. Wrote a program and got this error id.asm:3 error: label or instruction expected at start of line Can anyone please help me with this problem? org 0x0100 jmp start bl: dw 0 start: mov ax,2 mov bx,0 add ax,bx mov bx,0 add ax,bx mov [b1+0],ax mov ax,4 mov bx,0 add ax,bx mov bx,2 add ax,bx mov [b1+2],ax mov ax,4 mov bx,7 add ax,bx mov bx,4 add ax,bx mov [b1+4],ax mov bx,0 mov ax,0 mov ax, [b1+bx] peakvalue: cmp bx,6 je fin cmp ax, [b1+bx] jge peakloop mov ax, [b1+bx] peakloop: add bx, 2 jmp peakvalue fin: MOV AX,4C00h INT 21h The error says: id.asm:3 error: label or instruction expected at start of line
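A minimal sketch of the likely fix, assuming the assembler is NASM: bl is the name of an 8-bit register, so NASM rejects it as a label, which is what the error on line 3 (bl: dw 0) is complaining about. Note also that the rest of the listing already addresses the storage as b1 (with the digit one) and stores three words at b1+0, b1+2 and b1+4, so renaming and widening the label makes everything consistent:

org 0x0100
jmp start

b1: dw 0, 0, 0          ; was "bl:" — a register name cannot be used as a label
                        ; three words, since the code writes b1+0, b1+2 and b1+4
start:
mov ax,2                ; rest of the program unchanged, now consistent with b1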
Assembly: Label or instruction expected at start of line
I am new to learning the assembly language. Wrote a program and got this error id.asm:3 error: label or instruction expected at start of line Can anyone please help me with this problem? org 0x0100 jmp start bl: dw 0 start: mov ax,2 mov bx,0 add ax,bx mov bx,0 add ax,bx mov [b1+0],ax mov ax,4 mov bx,0 add ax,bx mov bx,2 add ax,bx mov [b1+2],ax mov ax,4 mov bx,7 add ax,bx mov bx,4 add ax,bx mov [b1+4],ax mov bx,0 mov ax,0 mov ax, [b1+bx] peakvalue: cmp bx,6 je fin cmp ax, [b1+bx] jge peakloop mov ax, [b1+bx] peakloop: add bx, 2 jmp peakvalue fin: MOV AX,4C00h INT 21h The error says: id.asm:3 error: label or instruction expected at start of line
[]
[]
[ "The error message you're seeing is telling you that on line 3 of your assembly program, something is expected that doesn't seem to be there. In this case, it looks like you may have accidentally omitted the mov instruction at the beginning of the line.\nHere is what the corrected code should look like:\norg 0x0100\n\njmp start\n\nbl: dw 0\n\nstart:\n\nmov ax,2\n\nmov bx,0\n\nadd ax,bx\n\nmov bx,0\n\nadd ax,bx\n\nmov [b1+0],ax\n\nmov ax,4\n\nmov bx,0\n\nadd ax,bx\n\nmov bx,2\n\nadd ax,bx\n\nmov [b1+2],ax\n\nmov ax,4\n\nmov bx,7\n\nadd ax,bx\n\nmov bx,4\n\nadd ax,bx\n\nmov [b1+4],ax\n\nmov bx,0\n\nmov ax,0\n\nmov ax, [b1+bx]\n\npeakvalue:\n\ncmp bx,6\n\nje fin\n\ncmp ax, [b1+bx]\n\njge peakloop\n\nmov ax, [b1+bx]\n\npeakloop:\n\nadd bx, 2\n\njmp peakvalue\n\nfin:\n\nMOV AX,4C00h\n\nINT 21h\n\n" ]
[ -3 ]
[ "assembly", "nasm" ]
stackoverflow_0074673795_assembly_nasm.txt
Q: How to focus on a newly added TabItem of a WPF TabControl? I'm facing an issue where I need to be able to actually focus a TabItem tab in a TabControl like I would press the TAB key. I know I could use SendKeys.SendWait("{TAB}"); in order to achieve that but I would prefer a more robust solution. The tab items are bound to an ItemSource of an ObservableCollection. That collection contains a list of view models later bound to the tab items title, icon, content and more. The collection is filled and emptied at runtime, I don't have a static collection. I've made a sample project to illustrate the issue (see code below). Current behavior after I added a tab item to the collection looks like this: After pressing the "Add Tab" button on the left a new tab is added to the ObservableCollection which is then reflected in a new tab item in the tab control. I try to focus that new tab with: Set the SelectedItem of the TabControl after a new item is added (see code below) Keyboard.Focus((UIElement)sender) in the tab item loaded event (see code below) by setting FocusManager.FocusedElement="{Binding ElementName=tabHeader}" in XAML (see code below) None of those attempts seems to work. Desired behavior would be that the new added tab item actually get focused (like I would press the TAB key) which would look like this: To create this image I pressed TAB after I added a new tab item. I couldn't find any working solution here or here and also Microsoft's documentation regarding Focus didn't shed enough light. Any hint is appreciated. To reproduce the screenshots above here is my sample project: MainWindow.xaml <Window x:Class="WpfAppTabControl.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:WpfAppTabControl" mc:Ignorable="d" d:DataContext="{d:DesignInstance local:MainViewViewModel}" Title="MainWindow" Height="450" Width="800"> <Grid> <Grid.RowDefinitions> <RowDefinition Height="*" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="350"/> <ColumnDefinition Width="12"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Button Grid.Column="0" Grid.Row="2" Margin="50,50,50,50" Command="{Binding AddTabCommand}">Add Tab</Button> <GridSplitter Grid.Column="1" Grid.Row="2" ResizeDirection="Columns" ResizeBehavior="PreviousAndNext" HorizontalAlignment="Stretch"/> <TabControl Grid.Column="2" Grid.Row="2" ItemsSource="{Binding ItemsToDisplay}" SelectedItem="{Binding ActiveTabItem}" SelectionChanged="OnSelectionChanged" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" FocusManager.FocusedElement="{Binding ElementName=tabHeader}" FocusManager.IsFocusScope="True"> <TabControl.ItemContainerStyle> <Style TargetType="TabItem"> <EventSetter Event="Loaded" Handler="OnTabItemLoaded"/> <Setter Property="IsSelected" Value="True"/> <Setter Property="Focusable" Value="True"/> <Setter Property="HeaderTemplate"> <Setter.Value> <DataTemplate> <DockPanel x:Name="tabHeader" Margin="-5,-1,-5,0" Focusable="True"> <Button x:Name="closeButton" Width="16" Height="16" Margin="20, 10, 10, 10" Command="{Binding CloseTabCommand}" CommandParameter="{Binding Title}" BorderBrush="Transparent" DockPanel.Dock="Right" BorderThickness="0"> <Image x:Name="tabIcon" Source="/close.png" Height="16" Margin="0,0,0,0" HorizontalAlignment="Center" 
VerticalAlignment="Center"/> </Button> <TextBlock Margin="5, 10, 10, 10" VerticalAlignment="Center" Text="{Binding Title}"/> </DockPanel> </DataTemplate> </Setter.Value> </Setter> <Setter Property="Content" Value="{Binding Content}" /> </Style> </TabControl.ItemContainerStyle> </TabControl> </Grid> </Window> MainWindow.xaml.cs using System.Windows; using System.Windows.Controls; using System.Windows.Input; namespace WpfAppTabControl { public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); DataContext = new MainViewViewModel(); } private void OnSelectionChanged(object sender, SelectionChangedEventArgs e) { var viewModel = (MainViewViewModel)DataContext; viewModel.NotifySelectedItemChanged(e.AddedItems); } private void OnTabItemLoaded(object sender, RoutedEventArgs e) { Keyboard.Focus((UIElement)sender); } } } ItemViewModel.cs using Prism.Mvvm; using System; using System.Windows.Input; namespace WpfAppTabControl { public class ItemViewModel : BindableBase, IDisposable { private string title; private string content; public ItemViewModel(ICommand closeTabCommand) { CloseTabCommand = closeTabCommand ?? throw new ArgumentNullException(nameof(closeTabCommand)); } public string Title { get => title; set { if (title != value) { title = value; RaisePropertyChanged(title); } } } public string Content { get => content; set { if (content != value) { content = value; RaisePropertyChanged(content); } } } public ICommand CloseTabCommand { get; private set; } public void Dispose() { // disposing stuff } } } MainViewViewModel.cs using Prism.Commands; using Prism.Mvvm; using System.Collections; using System.Collections.Generic; using System.Collections.ObjectModel; using System.Linq; using System.Windows.Input; namespace WpfAppTabControl { public class MainViewViewModel : BindableBase { private readonly ObservableCollection<ItemViewModel> itemsToDisplay; private ItemViewModel activeTabItem; public MainViewViewModel() { itemsToDisplay = new ObservableCollection<ItemViewModel>(); AddTabCommand = new DelegateCommand(AddTab); } public IEnumerable<ItemViewModel> ItemsToDisplay => itemsToDisplay; public ICommand AddTabCommand { get; private set; } public ItemViewModel ActiveTabItem { get => activeTabItem; set { if (activeTabItem != value) { activeTabItem = value; RaisePropertyChanged(); } } } private void AddTab() { var tabCountString = (itemsToDisplay.Count + 1).ToString(); var itemViewModel1 = new ItemViewModel(new DelegateCommand<string>(title => { CloseTab(title); })) { Title = "NewTab" + tabCountString, Content = "Content of " + "NewTab" + tabCountString, }; itemsToDisplay.Add(itemViewModel1); ActiveTabItem = itemViewModel1; } private void CloseTab(string title) { if (string.IsNullOrEmpty(title)) { return; } var tabToClose = itemsToDisplay.FirstOrDefault(viewModel => viewModel.Title == title); if (tabToClose != null) { itemsToDisplay.Remove(tabToClose); } } public void NotifySelectedItemChanged(IList addedItems) { if (addedItems.Count > 0) { // stuff to do } } } } A: Had the same issue. Add the new tabitem to tabcontrol: example TabItem newtabitem = new TabItem then use newtabitem.Focus(). This moves the focus to the added tab.
How to focus on a newly added TabItem of a WPF TabControl?
I'm facing an issue where I need to be able to actually focus a TabItem tab in a TabControl like I would press the TAB key. I know I could use SendKeys.SendWait("{TAB}"); in order to achieve that but I would prefer a more robust solution. The tab items are bound to an ItemSource of an ObservableCollection. That collection contains a list of view models later bound to the tab items title, icon, content and more. The collection is filled and emptied at runtime, I don't have a static collection. I've made a sample project to illustrate the issue (see code below). Current behavior after I added a tab item to the collection looks like this: After pressing the "Add Tab" button on the left a new tab is added to the ObservableCollection which is then reflected in a new tab item in the tab control. I try to focus that new tab with: Set the SelectedItem of the TabControl after a new item is added (see code below) Keyboard.Focus((UIElement)sender) in the tab item loaded event (see code below) by setting FocusManager.FocusedElement="{Binding ElementName=tabHeader}" in XAML (see code below) None of those attempts seems to work. Desired behavior would be that the new added tab item actually get focused (like I would press the TAB key) which would look like this: To create this image I pressed TAB after I added a new tab item. I couldn't find any working solution here or here and also Microsoft's documentation regarding Focus didn't shed enough light. Any hint is appreciated. To reproduce the screenshots above here is my sample project: MainWindow.xaml <Window x:Class="WpfAppTabControl.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:WpfAppTabControl" mc:Ignorable="d" d:DataContext="{d:DesignInstance local:MainViewViewModel}" Title="MainWindow" Height="450" Width="800"> <Grid> <Grid.RowDefinitions> <RowDefinition Height="*" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="350"/> <ColumnDefinition Width="12"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Button Grid.Column="0" Grid.Row="2" Margin="50,50,50,50" Command="{Binding AddTabCommand}">Add Tab</Button> <GridSplitter Grid.Column="1" Grid.Row="2" ResizeDirection="Columns" ResizeBehavior="PreviousAndNext" HorizontalAlignment="Stretch"/> <TabControl Grid.Column="2" Grid.Row="2" ItemsSource="{Binding ItemsToDisplay}" SelectedItem="{Binding ActiveTabItem}" SelectionChanged="OnSelectionChanged" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" FocusManager.FocusedElement="{Binding ElementName=tabHeader}" FocusManager.IsFocusScope="True"> <TabControl.ItemContainerStyle> <Style TargetType="TabItem"> <EventSetter Event="Loaded" Handler="OnTabItemLoaded"/> <Setter Property="IsSelected" Value="True"/> <Setter Property="Focusable" Value="True"/> <Setter Property="HeaderTemplate"> <Setter.Value> <DataTemplate> <DockPanel x:Name="tabHeader" Margin="-5,-1,-5,0" Focusable="True"> <Button x:Name="closeButton" Width="16" Height="16" Margin="20, 10, 10, 10" Command="{Binding CloseTabCommand}" CommandParameter="{Binding Title}" BorderBrush="Transparent" DockPanel.Dock="Right" BorderThickness="0"> <Image x:Name="tabIcon" Source="/close.png" Height="16" Margin="0,0,0,0" HorizontalAlignment="Center" VerticalAlignment="Center"/> </Button> <TextBlock Margin="5, 10, 10, 10" 
VerticalAlignment="Center" Text="{Binding Title}"/> </DockPanel> </DataTemplate> </Setter.Value> </Setter> <Setter Property="Content" Value="{Binding Content}" /> </Style> </TabControl.ItemContainerStyle> </TabControl> </Grid> </Window> MainWindow.xaml.cs using System.Windows; using System.Windows.Controls; using System.Windows.Input; namespace WpfAppTabControl { public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); DataContext = new MainViewViewModel(); } private void OnSelectionChanged(object sender, SelectionChangedEventArgs e) { var viewModel = (MainViewViewModel)DataContext; viewModel.NotifySelectedItemChanged(e.AddedItems); } private void OnTabItemLoaded(object sender, RoutedEventArgs e) { Keyboard.Focus((UIElement)sender); } } } ItemViewModel.cs using Prism.Mvvm; using System; using System.Windows.Input; namespace WpfAppTabControl { public class ItemViewModel : BindableBase, IDisposable { private string title; private string content; public ItemViewModel(ICommand closeTabCommand) { CloseTabCommand = closeTabCommand ?? throw new ArgumentNullException(nameof(closeTabCommand)); } public string Title { get => title; set { if (title != value) { title = value; RaisePropertyChanged(title); } } } public string Content { get => content; set { if (content != value) { content = value; RaisePropertyChanged(content); } } } public ICommand CloseTabCommand { get; private set; } public void Dispose() { // disposing stuff } } } MainViewViewModel.cs using Prism.Commands; using Prism.Mvvm; using System.Collections; using System.Collections.Generic; using System.Collections.ObjectModel; using System.Linq; using System.Windows.Input; namespace WpfAppTabControl { public class MainViewViewModel : BindableBase { private readonly ObservableCollection<ItemViewModel> itemsToDisplay; private ItemViewModel activeTabItem; public MainViewViewModel() { itemsToDisplay = new ObservableCollection<ItemViewModel>(); AddTabCommand = new DelegateCommand(AddTab); } public IEnumerable<ItemViewModel> ItemsToDisplay => itemsToDisplay; public ICommand AddTabCommand { get; private set; } public ItemViewModel ActiveTabItem { get => activeTabItem; set { if (activeTabItem != value) { activeTabItem = value; RaisePropertyChanged(); } } } private void AddTab() { var tabCountString = (itemsToDisplay.Count + 1).ToString(); var itemViewModel1 = new ItemViewModel(new DelegateCommand<string>(title => { CloseTab(title); })) { Title = "NewTab" + tabCountString, Content = "Content of " + "NewTab" + tabCountString, }; itemsToDisplay.Add(itemViewModel1); ActiveTabItem = itemViewModel1; } private void CloseTab(string title) { if (string.IsNullOrEmpty(title)) { return; } var tabToClose = itemsToDisplay.FirstOrDefault(viewModel => viewModel.Title == title); if (tabToClose != null) { itemsToDisplay.Remove(tabToClose); } } public void NotifySelectedItemChanged(IList addedItems) { if (addedItems.Count > 0) { // stuff to do } } } }
[ "Had the same issue. Add the new tabitem to tabcontrol: example TabItem newtabitem = new TabItem then use newtabitem.Focus(). This moves the focus to the added tab.\n" ]
[ 0 ]
[]
[]
[ "focus", "tabcontrol", "wpf" ]
stackoverflow_0074385939_focus_tabcontrol_wpf.txt
Q: Wpf: Warning CS0108: 'MainWindow.Close' hides inherited member 'Window.Close()'. Use the new keyword if hiding was intended I have a Wpf dotnet 7.0 app here, and it runs fine, but it gives me a warning. Warning CS0108 'MainWindow.Close' hides inherited member 'Window.Close()'. Use the new keyword if hiding was intended. WpfStockAnalyzerHttpClient C:\Trials\Ex\AsyncCSharp\src\apps\3040-WpfStockAnalyzerHttpClient\MainWindow.xaml I am not able to understand why. Can someone tell me how to fix it? I get the same error when I run the app using the following command. dotnet run --project ./WpfStockAnalyzerHttpClient.csproj C:\Trials\Ex\AsyncCSharp\src\apps\3040-WpfStockAnalyzerHttpClient\MainWindow.xaml(13,51): warning CS0108: 'MainWindow.Close' hides inherited member 'Window.Close()'. Use the new keyword if hiding was intended. [C:\Trials\Ex\AsyncCSharp\src\apps\3040-WpfStockAnalyzerHttpClient\WpfStockAnalyzerHttpClient_ekqqvgub_wpftmp.csproj] A: You have given the MenuItem the name "Close". When assigning a name to an element, Designer Studio's code generator creates a field with that name. You can see the file generated by the code generator if you move the cursor to "InitializeComponent" and press F12. On line 47 (this is the number I have, you may have a shift, but not much) you will see "internal System.Windows.Controls.MenuItem Close;". That is, in fact, you are trying to create a field with the same name as the "Close ()" method already present in the base type. The Studio warns you about this overlap. To fix it, change the name of the element: <MenuItem x:Name="miClose" FontSize="20" Header="_Close" Click="Close_OnClick"/> Keep in mind that the warning may not disappear immediately. The studio does not always correctly track changes made by the code generator. But when you re-open the Solution, this warning will definitely be reset.
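A stripped-down illustration of what the warning means, independent of the designer-generated file — a member declared in a derived type with the same name as an inherited one hides it:

class Base
{
    public void Close() { }
}

class Derived : Base
{
    // CS0108: 'Derived.Close' hides inherited member 'Base.Close()'.
    internal object Close;

    // Writing "internal new object Close;" instead would declare the hiding as
    // intentional and silence CS0108 — but renaming is the real fix here.
}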
Wpf: Warning CS0108: 'MainWindow.Close' hides inherited member 'Window.Close()'. Use the new keyword if hiding was intended
I have a Wpf dotnet 7.0 app here, and it runs fine, but it gives me a warning. Warning CS0108 'MainWindow.Close' hides inherited member 'Window.Close()'. Use the new keyword if hiding was intended. WpfStockAnalyzerHttpClient C:\Trials\Ex\AsyncCSharp\src\apps\3040-WpfStockAnalyzerHttpClient\MainWindow.xaml I am not able to understand why. Can someone tell me how to fix it? I get the same error when I run the app using the following command. dotnet run --project ./WpfStockAnalyzerHttpClient.csproj C:\Trials\Ex\AsyncCSharp\src\apps\3040-WpfStockAnalyzerHttpClient\MainWindow.xaml(13,51): warning CS0108: 'MainWindow.Close' hides inherited member 'Window.Close()'. Use the new keyword if hiding was intended. [C:\Trials\Ex\AsyncCSharp\src\apps\3040-WpfStockAnalyzerHttpClient\WpfStockAnalyzerHttpClient_ekqqvgub_wpftmp.csproj]
[ "You have given the MenuItem the name \"Close\". When assigning a name to an element, Designer Studio's code generator creates a field with that name.\nYou can see the file generated by the code generator if you move the cursor to \"InitializeComponent\" and press F12. On line 47 (this is the number I have, you may have a shift, but not much) you will see \"internal System.Windows.Controls.MenuItem Close;\".\nThat is, in fact, you are trying to create a field with the same name as the \"Close ()\" method already present in the base type.\nThe Studio warns you about this overlap.\nTo fix it, change the name of the element:\n <MenuItem x:Name=\"miClose\" FontSize=\"20\" Header=\"_Close\" Click=\"Close_OnClick\"/>\n\nKeep in mind that the warning may not disappear immediately. The studio does not always correctly track changes made by the code generator.\nBut when you re-open the Solution, this warning will definitely be reset.\n" ]
[ 1 ]
[]
[]
[ "wpf" ]
stackoverflow_0074673391_wpf.txt
Q: I have a type defined object and I want to declare a variable and then assign a value to the keys I have been working on typescript for the last 4 months only, I am a kinda newbie, and need help. Here is the type definition I am going to use as inferred type. export type InputFileId = { fileId: string; rate?: 'satisfied' | 'investigation' | 'wrong' | 'none'; }; export type UserData= { uId: string; }; export type UpdateResponseObject = { calculationName?: string; temperatureC?: number, isLoadChange?: boolean, loadN?: number; projectId?: string; ownersIds?: Array<UserData>; fileId?: Array<InputFileId>; }; Here is what I want to achieve. const responseData: UpdateResponseObject[] = []; Now I want to use it as follows let fileIdArray: any[] = []; I want to send a response of type UpdateResponseObject and to do so I want to populate the keys/data as defined in the type definition. responseData.fileId = fileIdArray; Maybe edit the question or change it or maybe I am stupid. A: To use the UpdateResponseObject type and create a new object that conforms to its shape, you can create a new variable and use object shorthand syntax to initialize it with the keys and values you want to include. Here's an example of how you might do that: const responseData: UpdateResponseObject = { fileId: fileIdArray }; In this example, responseData is declared as being of type UpdateResponseObject, which means that it must have all of the keys defined in that type. Since the fileId property is marked as optional in the type definition, you don't need to include it if you don't want to, but if you do include it, its value must be an array of InputFileId objects. You could also use the responseData array that you defined earlier, and add objects to it that conform to the UpdateResponseObject type. Here's an example of how you might do that: const responseData: UpdateResponseObject[] = []; // Add an object to the array that has a 'fileId' property responseData.push({ fileId: fileIdArray }); In this example, the responseData array is declared as an array of UpdateResponseObject objects, so each object that you add to the array must conform to the UpdateResponseObject type. You can add as many objects as you want to the array, and each one can have different values for the keys defined in the type.
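As a small follow-up sketch, the any[] from the question can also be tightened to the element type the definitions already provide, so the compiler checks every pushed value (the literal values below are made up for illustration):

let fileIdArray: InputFileId[] = [];
fileIdArray.push({ fileId: "file-001", rate: "satisfied" });
fileIdArray.push({ fileId: "file-002" }); // rate is optional

const responseData: UpdateResponseObject[] = [];
responseData.push({ fileId: fileIdArray });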
I have a type defined object and I want to declare a variable and then assign a value to the keys
I have been working on typescript for the last 4 months only, I am a kinda newbie, and need help. Here is the type definition I am going to use as inferred type. export type InputFileId = { fileId: string; rate?: 'satisfied' | 'investigation' | 'wrong' | 'none'; }; export type UserData= { uId: string; }; export type UpdateResponseObject = { calculationName?: string; temperatureC?: number, isLoadChange?: boolean, loadN?: number; projectId?: string; ownersIds?: Array<UserData>; fileId?: Array<InputFileId>; }; Here is what I want to achieve. const responseData: UpdateResponseObject[] = []; Now I want to use it as follows let fileIdArray: any[] = []; I want to send a response of type UpdateResponseObject and to do so I want to populate the keys/data as defined in the type definition. responseData.fileId = fileIdArray; Maybe edit the question or change it or maybe I am stupid.
[ "To use the UpdateResponseObject type and create a new object that conforms to its shape, you can create a new variable and use object shorthand syntax to initialize it with the keys and values you want to include. Here's an example of how you might do that:\nconst responseData: UpdateResponseObject = {\n fileId: fileIdArray\n};\n\nIn this example, responseData is declared as being of type UpdateResponseObject, which means that it must have all of the keys defined in that type. Since the fileId property is marked as optional in the type definition, you don't need to include it if you don't want to, but if you do include it, its value must be an array of InputFileId objects.\nYou could also use the responseData array that you defined earlier, and add objects to it that conform to the UpdateResponseObject type. Here's an example of how you might do that:\nconst responseData: UpdateResponseObject[] = [];\n\n// Add an object to the array that has a 'fileId' property\nresponseData.push({ fileId: fileIdArray });\n\nIn this example, the responseData array is declared as an array of UpdateResponseObject objects, so each object that you add to the array must conform to the UpdateResponseObject type. You can add as many objects as you want to the array, and each one can have different values for the keys defined in the type.\n" ]
[ 1 ]
[]
[]
[ "javascript", "javascript_objects", "typescript", "typescript_generics" ]
stackoverflow_0074673570_javascript_javascript_objects_typescript_typescript_generics.txt
Q: Exporting a spreadsheet to PDF - is it possible to keep the computations active? I've implemented a card game in a spreadsheet (I'm using Apache OpenOffice, but can get my stuff converted to "normal" MS Excel if need be). What my sheet does is in fact deal out four cards at a time, from a deck that only contains aces (counted as ones) and twos through tens - so basically it's four random number generators activated by a keystroke. The computer parts end here. The players are supposed to do stuff with the four numbers using their heads, paper and pencils. Now I'd like to convert the spreadsheet to PDF so I can share it with people without them having to use Apache OpenOffice, Excel or any other spreadsheet software. But when I export the content, the PDF file just shows the numbers last generated; I cannot make it "reshuffle" and deal again. Is it at all possible? A: A direct conversion is not possible; Excel macros and JavaScript are too far away from each other. You will have to recreate the logic using Acrobat JavaScript.
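As a rough sketch of what recreating the logic could look like, assuming the PDF is given four text fields named card0 through card3 and a button whose Mouse Up action runs the following Acrobat JavaScript (the field names are invented for illustration; the question describes the deal as four independent draws, so drawing with replacement matches it):

// Deal four cards, each a value from 1 (ace) to 10.
for (var i = 0; i < 4; i++) {
    this.getField("card" + i).value = 1 + Math.floor(Math.random() * 10);
}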
Exporting a spreadsheet to PDF - is it possible to keep the computations active?
I've implemented a card game in a spreadsheet (I'm using Apache OpenOffice, but can get my stuff converted to "normal" MS Excel if need be). What my sheet does is in fact deal out four cards at a time, from a deck that only contains aces (counted as ones) and twos through tens - so basically it's four random number generators activated by a keystroke. The computer parts end here. The players are supposed to do stuff with the four numbers using their heads, paper and pencils. Now I'd like to convert the spreadsheet to PDF so I can share it with people without them having to use Apache OpenOffice, Excel or any other spreadsheet software. But when I export the content, the PDF file just shows the numbers last generated; I cannot make it "reshuffle" and deal again. Is it at all possible?
[ "A direct conversion is not possible; Excel macros and JavaScript are too far away from each other.\nYou will have to recreate the logic using Acrobat JavaScript.\n" ]
[ 0 ]
[]
[]
[ "format", "pdf", "spreadsheet" ]
stackoverflow_0074667963_format_pdf_spreadsheet.txt
Q: Python(sympy) : How to graph smoothly in 2nd ODE solution with Sympy? I'm studying structural dynamic analysis. I solved a problem with 1 degree of freedom. The question is m*y'' + cy' + ky = 900 sin(5.3x) m=6938.78, c=5129.907, k=379259, y is the function of x I solved its response using Python and the SymPy library. I drew the response with pyplot, but its shape is not smooth, as shown in the screenshot. How can I draw the response smoothly? I tried to draw it smoothly by substituting each x into y with numpy, but could not insert x into sin(5.3x). from sympy import * import matplotlib.pyplot as plt x, y=symbols("x, y") f=symbols('f',cls=Function) y=f(x) eq=Eq( 6938.78*diff(y,x,2) + 5129.907*diff(y,x) + 379259*y-900*sin(5.3*x),0) eq_done=dsolve(eq,y, ics={ f(0):0, diff(y,x).subs(x,0):0 } ) plot(eq_done.rhs,(x,0,10)) A: To get a smoother line you can turn off the adaptive algorithm and set the number of points per line: plot(eq_done.rhs,(x,0,10), adaptive=False, nb_of_points=1000) Also, the help() function is your friend, as it allows you to quickly access the documentation of a particular function. Execute help(plot) to learn more about the plot command.
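An alternative sketch for the NumPy substitution the question attempted: lambdify compiles the symbolic solution — including the sin(5.3*x) term — into a NumPy-callable function, after which matplotlib can sample as many points as needed:

import numpy as np
import matplotlib.pyplot as plt
from sympy import lambdify

# eq_done.rhs is the symbolic solution returned by dsolve above,
# and x is the Symbol defined in the original snippet.
f = lambdify(x, eq_done.rhs, modules='numpy')

xs = np.linspace(0, 10, 2000)   # dense grid for a smooth curve
plt.plot(xs, f(xs))
plt.show()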
Python(sympy) : How to graph smoothly in 2nd ODE solution with Sympy?
I'm studying structural dynamic analysis. I solved a problem with 1 degree of freedom. The question is m*y'' + cy' + ky = 900 sin(5.3x) m=6938.78, c=5129.907, k=379259, y is the function of x I solved its response using Python and the SymPy library. I drew the response with pyplot, but its shape is not smooth, as shown in the screenshot. How can I draw the response smoothly? I tried to draw it smoothly by substituting each x into y with numpy, but could not insert x into sin(5.3x). from sympy import * import matplotlib.pyplot as plt x, y=symbols("x, y") f=symbols('f',cls=Function) y=f(x) eq=Eq( 6938.78*diff(y,x,2) + 5129.907*diff(y,x) + 379259*y-900*sin(5.3*x),0) eq_done=dsolve(eq,y, ics={ f(0):0, diff(y,x).subs(x,0):0 } ) plot(eq_done.rhs,(x,0,10))
[ "To get a smoother line you can turn off the adaptive algorithm and set the number of points per line:\nplot(eq_done.rhs,(x,0,10), adaptive=False, nb_of_points=1000)\n\nAlso, the help() function is your friend, as it allows to quickly access the documentation of a particular function. Execute help(plot) to learn more about the plot command.\n" ]
[ 0 ]
[]
[]
[ "graphing", "python", "sympy" ]
stackoverflow_0074664776_graphing_python_sympy.txt
Q: Input type range webkit-slider-thumb background color won't change in ios safari? I came across an issue when implementing an input of type range on my site. I tried to set the background color for the -webkit-slider-thumb to be transparent, but it is not working in Safari on iOS devices (iPhone and iPad): the Safari inspector still shows the user agent style instead of the style I already implemented in my CSS and inline HTML. Here are the styles from my CSS file and inline HTML: html file <input class="slider" list="steplist" max="100" name="range" type="range" value ="0" /> css file input[type="range"]::-webkit-slider-thumb, input[type="range"]::-webkit-slider-thumb:active{ background-color: transparent !important; } here is the screencap for the inspector element (I inspected it on iPadOS Safari): I noticed that the background-color value of input[type="range"]::-webkit-slider-thumb is still white (following the user-agent default style) and not following my CSS file, which sets it to transparent A: According to my analysis, adding a bit of JS should give you the desired output. I have managed to make -webkit-slider-thumb 's background: transparent; even though my approach comes along with a bit of JavaScript. This appears perfectly on iOS; in fact, what you see as preview will be what other devices get. IOS (Safari) Previews: iPhone 14 | iPhone 14 Plus | iPhone 13 | iPhone 12 Mini Other (Chrome) Previews: Samsung Galaxy S22 | Pixel 7 Pro This pen helped me to figure out the solution: https://codepen.io/tippingpointdev/pen/bGgLqLY //Assign input range's id into variable const range = document.querySelector('#r-input'); function rangeOnChange(e) { let t = e.target //Assign range input's properties into variables const min = t.min; const max = t.max; const val = t.value; /* Adjust range progress as the thumb moves while avoiding overflows */ t.style.backgroundSize = (val - min) * 89 / (max - min) + '% 100%'; } //Trigger function on thumb move range.addEventListener('input', rangeOnChange); /* Adjust range progress at start */ range.style.backgroundSize = (range.value - range.min) * 89 / (range.max - range.min) + '% 100%'; input[type="range"] { /* To hide ordinary range input */ -webkit-appearance: none; margin-right: 15px; height: 7px; background: lightgray; border-radius: 5px; /* Range progress background is set */ background-image: linear-gradient(gray,gray); background-size: 70% 100%; background-repeat: no-repeat; } /* Thumb styles */ input[type="range"]::-webkit-slider-thumb { /* To hide ordinary thumb */ -webkit-appearance: none; height: 15px; width: 15px; border-radius: 50%; /* Since range input is created manually, thumb background can be vary in color */ background: transparent; border: 1px solid gray; cursor: pointer; transition: background .3s ease-in-out; } <!-- Since some JS is used, an Id is added here --> <input id='r-input' type="range" value="70" min="0" max="100" /><p><small>Try moving transparent thumb to see range progress</small></p>
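A minimal sketch of the detail that usually unblocks this in Safari: the user-agent styling has to be switched off with -webkit-appearance: none on both the input and the thumb before a custom thumb background (including transparent) takes effect:

input[type="range"] {
  -webkit-appearance: none; /* drop the native track rendering */
}

input[type="range"]::-webkit-slider-thumb {
  -webkit-appearance: none; /* drop the native thumb rendering */
  height: 15px;
  width: 15px;
  background-color: transparent;
  border: 1px solid gray; /* keeps the thumb visible for this demo */
}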
Input type range webkit-slider-thumb background color won't change in ios safari?
I came across an issue when implementing an input of type range on my site. I tried to set the background color for the -webkit-slider-thumb to be transparent, but it is not working in Safari on iOS devices (iPhone and iPad): the Safari inspector still shows the user agent style instead of the style I already implemented in my CSS and inline HTML. Here are the styles from my CSS file and inline HTML: html file <input class="slider" list="steplist" max="100" name="range" type="range" value ="0" /> css file input[type="range"]::-webkit-slider-thumb, input[type="range"]::-webkit-slider-thumb:active{ background-color: transparent !important; } here is the screencap for the inspector element (I inspected it on iPadOS Safari): I noticed that the background-color value of input[type="range"]::-webkit-slider-thumb is still white (following the user-agent default style) and not following my CSS file, which sets it to transparent
[ "According to my analysis, adding a bit of JS should give you the desired output. I have been managed to make -webkit-slider-thumb 's background: transparent; even though my approach comes along with a bit of JavaScript. This appears perfectly on iOS; in fact, what you see as preview will be what other devices get.\nIOS (Safari) Previews:\niPhone 14 | iPhone 14 Plus | iPhone 13 | iPhone 12 Mini\nOther (Chrome) Previews: Samsung Galaxy S22 | Pixel 7 Pro\nThis pen helped me to figure out the solution: https://codepen.io/tippingpointdev/pen/bGgLqLY\n\n\n//Assign input range's id into variable\nconst range = document.querySelector('#r-input'); \n\nfunction rangeOnChange(e) {\n let t = e.target\n \n //Assign range input's properties into variables\n const min = t.min; \n const max = t.max; \n const val = t.value; \n \n /* Adjust range progress as the thumb moves while avoiding overflows */\n t.style.backgroundSize = (val - min) * 89 / (max - min) + '% 100%'; \n}\n\n//Trigger function on thumb move\nrange.addEventListener('input', rangeOnChange);\n\n/* Adjust range progress at start */\nrange.style.backgroundSize = (range.value - range.min) * 89 / (range.max - range.min) + '% 100%';\ninput[type=\"range\"] {\n \n /* To hide ordinary range input */\n -webkit-appearance: none;\n\n margin-right: 15px;\n height: 7px;\n background: lightgray;\n border-radius: 5px;\n \n /* Range progress background is set */\n background-image: linear-gradient(gray,gray);\n background-size: 70% 100%;\n background-repeat: no-repeat;\n}\n\n/* Thumb styles */\ninput[type=\"range\"]::-webkit-slider-thumb {\n\n /* To hide ordinary thumb */\n -webkit-appearance: none;\n \n height: 15px;\n width: 15px;\n border-radius: 50%;\n \n /* Since range input is created manually, thumb background can be vary in color */\n background: transparent; \n border: 1px solid gray;\n cursor: pointer;\n transition: background .3s ease-in-out;\n}\n<!-- Since some JS is used, an Id is added here -->\n<input id='r-input' type=\"range\" value=\"70\" min=\"0\" max=\"100\" /><p><small>Try moving transparent thumb to see range progress</small></p>\n\n\n\n" ]
[ 3 ]
[ "By the looks of it, using specific CSS rules may be a way, for example:\nbackground-image: url('the url to the image') - image\nbackground-size: contain; - make the image always inside the object\nbackground-position: center center; - make the image centered to the object.\nbackground-repeat: no-repeat; - to make the image not repeated.\nThis, when I have a problem with backgrounds, always seems to work on all browsers.\n" ]
[ -1 ]
[ "css", "html", "ios", "pseudo_element", "safari" ]
stackoverflow_0074554906_css_html_ios_pseudo_element_safari.txt
Q: How to use ChipGroup in Android In my application I have a BottomSheetDialogFragment, and inside this fragment I have a ChipGroup. I want to dynamically check a chip of this ChipGroup. I wrote the code below; the first time I open this dialog everything is okay and the checked chip shows. But when I close the dialog and open it again, the application crashes with a null error! My fragment codes: class MenuFragment : BottomSheetDialogFragment() { //Binding private var _binding: FragmentMenuBinding? = null private val binding get() = _binding!! override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?): View { _binding = FragmentMenuBinding.inflate(inflater, container, false) return binding.root } override fun onViewCreated(view: View, savedInstanceState: Bundle?) { super.onViewCreated(view, savedInstanceState) //InitView binding.apply { setupChip(fillChipData(), mealChipGroup) mealChipGroup.findViewById<Chip>(3).isChecked = true } private fun fillChipData(): MutableList<String> { return mutableListOf( "Item1", "Item2", "Item3", "Item4", "Item5" ) } private fun setupChip(list: MutableList<String>, view: ChipGroup) { list.forEach { val chip = Chip(requireContext()) chip.text = it view.addView(chip) } } override fun onDestroy() { super.onDestroy() _binding = null } Error message: java.lang.NullPointerException: Attempt to invoke virtual method 'void com.google.android.material.chip.Chip.setChecked(boolean)' on a null object reference at myapp.MenuFragment.onViewCreated(MenuFragment.kt:58) at androidx.fragment.app.Fragment.performViewCreated(Fragment.java:3128) at androidx.fragment.app.FragmentStateManager.createView(FragmentStateManager.java:552) at androidx.fragment.app.FragmentStateManager.moveToExpectedState(FragmentStateManager.java:261) at androidx.fragment.app.FragmentManager.executeOpsTogether(FragmentManager.java:1899) at androidx.fragment.app.FragmentManager.removeRedundantOperationsAndExecute(FragmentManager.java:1823) at androidx.fragment.app.FragmentManager.execPendingActions(FragmentManager.java:1760) at androidx.fragment.app.FragmentManager$5.run(FragmentManager.java:547) at android.os.Handler.handleCallback(Handler.java:751) at android.os.Handler.dispatchMessage(Handler.java:95) at android.os.Looper.loop(Looper.java:154) at android.app.ActivityThread.main(ActivityThread.java:6121) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:889) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:779) It shows me the error for this line: mealChipGroup.findViewById<Chip>(3).isChecked = true I know this error is a NullPointerException, but my question is why everything is OK the first time, yet when I close and reopen the dialog it shows this error?! How can I fix it? A: Posting as an Answer along with the comment I've added. You are dynamically creating Chips but without setting up an id, and you are trying to access a Chip via an id 3 which doesn't exist, hence the NullPointerException. Use setId on your created Chip and use that to access the view. Example: // 1. Create a variable with a default value of '1' so that we can move sequentially for each Chip's id. var chipId = 1 // 2. Set 'Id' to your programmatically created Chip. private fun setupChip(list: MutableList<String>, view: ChipGroup) { list.forEach { val chip = Chip(requireContext()) chip.text = it chip.id = chipId++ // <<<< this view.addView(chip) } } // 3. 
Now you can access the "third" Chip like before mealChipGroup.findViewById<Chip>(3).isChecked = true
How to use ChipGroup in Android
In my application I have a BottomSheetDialogFragment, and inside this fragment I have a ChipGroup. I want to dynamically check a chip of this ChipGroup. I wrote the code below; the first time I open this dialog everything is okay and the checked chip shows. But when I close the dialog and open it again, the application crashes with a null error! My fragment codes: class MenuFragment : BottomSheetDialogFragment() { //Binding private var _binding: FragmentMenuBinding? = null private val binding get() = _binding!! override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?): View { _binding = FragmentMenuBinding.inflate(inflater, container, false) return binding.root } override fun onViewCreated(view: View, savedInstanceState: Bundle?) { super.onViewCreated(view, savedInstanceState) //InitView binding.apply { setupChip(fillChipData(), mealChipGroup) mealChipGroup.findViewById<Chip>(3).isChecked = true } private fun fillChipData(): MutableList<String> { return mutableListOf( "Item1", "Item2", "Item3", "Item4", "Item5" ) } private fun setupChip(list: MutableList<String>, view: ChipGroup) { list.forEach { val chip = Chip(requireContext()) chip.text = it view.addView(chip) } } override fun onDestroy() { super.onDestroy() _binding = null } Error message: java.lang.NullPointerException: Attempt to invoke virtual method 'void com.google.android.material.chip.Chip.setChecked(boolean)' on a null object reference at myapp.MenuFragment.onViewCreated(MenuFragment.kt:58) at androidx.fragment.app.Fragment.performViewCreated(Fragment.java:3128) at androidx.fragment.app.FragmentStateManager.createView(FragmentStateManager.java:552) at androidx.fragment.app.FragmentStateManager.moveToExpectedState(FragmentStateManager.java:261) at androidx.fragment.app.FragmentManager.executeOpsTogether(FragmentManager.java:1899) at androidx.fragment.app.FragmentManager.removeRedundantOperationsAndExecute(FragmentManager.java:1823) at androidx.fragment.app.FragmentManager.execPendingActions(FragmentManager.java:1760) at androidx.fragment.app.FragmentManager$5.run(FragmentManager.java:547) at android.os.Handler.handleCallback(Handler.java:751) at android.os.Handler.dispatchMessage(Handler.java:95) at android.os.Looper.loop(Looper.java:154) at android.app.ActivityThread.main(ActivityThread.java:6121) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:889) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:779) It shows me the error for this line: mealChipGroup.findViewById<Chip>(3).isChecked = true I know this error is a NullPointerException, but my question is why everything is OK the first time, yet when I close and reopen the dialog it shows this error?! How can I fix it?
[ "Posting as an Answer along with the comment I've added.\nYou are dynamically creating Chips but without setting up an id, and you are trying to access a Chip via an id 3 which doesn't exist, hence the NullPointerException.\nUse setId on your created Chip and use that to access the view.\nExample:\n// 1. Create a variable with a default value of '1' so that we can move sequentially for each Chip's id.\nvar chipId = 1\n\n// 2. Set 'Id' to your programmatically created Chip.\nprivate fun setupChip(list: MutableList<String>, view: ChipGroup) {\n list.forEach {\n val chip = Chip(requireContext())\n chip.text = it\n chip.id = counter++ // <<<< this\n view.addView(chip)\n }\n}\n\n// 3. Now you can access the \"third\" Chip like before\nmealChipGroup.findViewById<Chip>(3).isChecked = true\n\n" ]
[ 1 ]
[]
[]
[ "android" ]
stackoverflow_0074667645_android.txt
Q: How to select multiple items in LazyColumn in JetpackCompose How to select multiple items in LazyColumn and finally add the selected items in a seperate list. GettingTags(tagsContent ={ productTags -> val flattenList = productTags.flatMap { it.tags_list } Log.i(TAG,"Getting the flattenList $flattenList") LazyColumn{ items(flattenList){ ListItem(text = {Text(it) }) if(selectedTagItem) { Icon( imageVector = Icons.Default.Check, contentDescription = "Selected", tint = Color.Green, modifier = Modifier.size(20.dp) ) } } } }) Mutable variable state var selectedTagItem by remember{ mutableStateOf(false) } A: First create a class with selected variable to toggle @Immutable data class MyItem(val text: String, val isSelected: Boolean = false) Then create a SnapshotStateList via mutableStateListOf that contains all of the items, and can trigger recomposition when we update any item with new instance, add or remove items also. I used a ViewModel but it's not mandatory. We can toggle items using index or get selected items by filtering isSelected flag class MyViewModel : ViewModel() { val myItems = mutableStateListOf<MyItem>() .apply { repeat(15) { add(MyItem(text = "Item$it")) } } fun getSelectedItems() = myItems.filter { it.isSelected } fun toggleSelection(index: Int) { val item = myItems[index] val isSelected = item.isSelected if (isSelected) { myItems[index] = item.copy(isSelected = false) } else { myItems[index] = item.copy(isSelected = true) } } } Create LazyColumn with key, key makes sure that only updated items are recomposed, as can be seen in performance document @Composable private fun SelectableLazyListSample(myViewModel: MyViewModel) { val selectedItems = myViewModel.getSelectedItems().map { it.text } Text(text = "Selected items: $selectedItems") LazyColumn( verticalArrangement = Arrangement.spacedBy(8.dp), contentPadding = PaddingValues(8.dp) ) { itemsIndexed( myViewModel.myItems, key = { _, item: MyItem -> item.hashCode() } ) { index, item -> Box( modifier = Modifier .fillMaxWidth() .background(Color.Red, RoundedCornerShape(8.dp)) .clickable { myViewModel.toggleSelection(index) } .padding(8.dp) ) { Text("Item $index", color = Color.White, fontSize = 20.sp) if (item.isSelected) { Icon( modifier = Modifier .align(Alignment.CenterEnd) .background(Color.White, CircleShape), imageVector = Icons.Default.Check, contentDescription = "Selected", tint = Color.Green, ) } } } } } Result
How to select multiple items in LazyColumn in JetpackCompose
How to select multiple items in LazyColumn and finally add the selected items in a seperate list. GettingTags(tagsContent ={ productTags -> val flattenList = productTags.flatMap { it.tags_list } Log.i(TAG,"Getting the flattenList $flattenList") LazyColumn{ items(flattenList){ ListItem(text = {Text(it) }) if(selectedTagItem) { Icon( imageVector = Icons.Default.Check, contentDescription = "Selected", tint = Color.Green, modifier = Modifier.size(20.dp) ) } } } }) Mutable variable state var selectedTagItem by remember{ mutableStateOf(false) }
[ "First create a class with selected variable to toggle\n@Immutable\ndata class MyItem(val text: String, val isSelected: Boolean = false)\n\nThen create a SnapshotStateList via mutableStateListOf that contains all of the items, and can trigger recomposition when we update any item with new instance, add or remove items also. I used a ViewModel but it's not mandatory. We can toggle items using index or get selected items by filtering isSelected flag\nclass MyViewModel : ViewModel() {\n\n val myItems = mutableStateListOf<MyItem>()\n .apply {\n repeat(15) {\n add(MyItem(text = \"Item$it\"))\n }\n }\n\n fun getSelectedItems() = myItems.filter { it.isSelected }\n\n fun toggleSelection(index: Int) {\n\n val item = myItems[index]\n val isSelected = item.isSelected\n\n if (isSelected) {\n myItems[index] = item.copy(isSelected = false)\n } else {\n myItems[index] = item.copy(isSelected = true)\n }\n }\n}\n\nCreate LazyColumn with key, key makes sure that only updated items are recomposed, as can be seen in performance document\n@Composable\nprivate fun SelectableLazyListSample(myViewModel: MyViewModel) {\n\n val selectedItems = myViewModel.getSelectedItems().map { it.text }\n Text(text = \"Selected items: $selectedItems\")\n LazyColumn(\n verticalArrangement = Arrangement.spacedBy(8.dp),\n contentPadding = PaddingValues(8.dp)\n ) {\n itemsIndexed(\n myViewModel.myItems,\n key = { _, item: MyItem ->\n item.hashCode()\n }\n ) { index, item ->\n Box(\n modifier = Modifier\n .fillMaxWidth()\n .background(Color.Red, RoundedCornerShape(8.dp))\n .clickable {\n myViewModel.toggleSelection(index)\n }\n .padding(8.dp)\n ) {\n Text(\"Item $index\", color = Color.White, fontSize = 20.sp)\n if (item.isSelected) {\n Icon(\n modifier = Modifier\n .align(Alignment.CenterEnd)\n .background(Color.White, CircleShape),\n imageVector = Icons.Default.Check,\n contentDescription = \"Selected\",\n tint = Color.Green,\n )\n }\n }\n }\n }\n}\n\nResult\n\n" ]
[ 1 ]
[]
[]
[ "android", "android_jetpack_compose", "android_jetpack_compose_lazy_column" ]
stackoverflow_0074673210_android_android_jetpack_compose_android_jetpack_compose_lazy_column.txt
Q: Visual Studio Code does not detect Virtual Environments Visual Studio Code does not detect virtual environments. I run vscode in the folder where the venv folder is located, when I try to select the kernel in vscode I can see the main environment and one located elsewhere on the disk. Jupyter running in vscode also doesn't see this environment. I have installed ipykernel in this environment. I tried to reinstall vscode and python extension. I tried to set the path in settings.json inside .vscode: { "python.pythonPath": ".\\venv\\Scripts\\python.exe" } Windows 10 Python 3.6.7 (64-bit) VSCode 1.54.3 A: In VSCode open your command palette — Ctrl+Shift+P by default Look for Python: Select Interpreter In Select Interpreter choose Enter interpreter path... and then Find... Navigate to your venv folder — eg, ~/pyenvs/myenv/ or \Users\Foo\Bar\PyEnvs\MyEnv\ In the virtual environment folder choose <your-venv-name>/bin/python or <your-venv-name>/bin/python3 The issue is that VSCode's Python extension by default uses the main python or python3 program while venv effectively creates a "new" python/python3 executable (that is kind of the point of venv) so the extension does not have access to anything (available modules, namespaces, etc) that you have installed through a venv since the venv specific installations are not available to the main Python interpreter (again, this is by design—like how applications installed in a VM are not available to the host OS) A: 1.In VSCode open your command palette — Ctrl+Shift+P by default 2.Look for Python: Select Interpreter 3.In Select Interpreter choose Enter interpreter path... and then Find... 4.Locate env folder, open Scripts folder , and choose python or python3 windows - venv A: OK, I found a solution. Firstly uninstall Visual Studio Code. Go to C:\Users\Your_profile and delete the folders related to Visual Studio Code that start with a period. Then turn on showing hidden folders and go to C:\Users\Your_profile\AppData. Type vscode in the file finder and remove all folders and files related to Visual Studio Code. Finally, install Visual Studio Code and enjoy the virtual environments. :) A: VS Code: Python Interpreter can't find my venv The only solution I found was to delete the venv and recreate it. I followed these steps but I'll provide a brief summary for Windows: Activate your virtualenv. Go to the parent folder where your Virtual Environment is located and run venv\scripts\activate. Keep in mind that the first name "venv" can vary. Create a requirements.txt file. pip freeze > requirements.txt deactivate to exit the venv rm -rf venv to delete the venv py -m venv venv to create a new one pip install -r requirements.txt to install the requirements. This worked for me, I didn't delete the old, but created a new python -m venv /path/newVenv in the ~/Envs folder, C:\Users\Admin\Envs. Maybe VS Code is searching in the ~/Envs folder, or it needs to be added to the python.path in the View -> Command Palette -> >Preferences: Open User Settings. A: None of the suggestions on this thread worked for me. That said, I don't think the issue lies with VS Code, it's venv. I wound up installing PyCharm to fix this. After you’ve downloaded: PyCharm > Preferences > search “interpreter” > Project: Python Interpreter > Click ‘+’ > in Virtualenv Environment > New environment (should automatically populate everything for a new env). Select OK, OK, OK. In the bottom left, you’ll see Git | TODO | Problems | Terminal…etc. Click “Terminal” and you should see your environment already activated. From there, pip3 install your dependencies. Close PyCharm. Go back to VS Code, open your project, and follow the suggestions above to select the Virtualenv (mine was 'venv': venv) as your interpreter. Finally resolved. A: If you're a Linux user, and you've used this or similar to create your virtual environment: python3 -m venv venv and you cannot get the debug to work, remove your venv and create it from the VS Code terminal (click Ctrl + back-tick to open). When you create it from the VS Code terminal, VS Code will ask if you want to use this new environment it amazingly detected for this workspace, say yes. A: Part of the confusion here may stem from UI behavior that is at odds with the VScode documentation. The docs state: When you create a new virtual environment, a prompt will be displayed to allow you to select it for the workspace. That didn't happen in my case (VScode 1.66.2 running on Windows 10 with Remote - WSL plugin version 0.66.2). I followed the steps outlined here; I did not see the pop-up described by the VScode docs but clicking on the Python interpreter version in the status bar showed that VScode had automatically selected the interpreter installed in the virtual environment. Furthermore, I did observe that VScode was sourcing .venv/bin/activate as described in the post linked above Run the code by clicking the play button, note the .venv and source “/Users/jemurray/Google Drive/scripts/personalPython/helloworld/.venv/bin/activate” in the terminal shows the script is activated and running in the virtual environment A: I was having the same error in my scripts with a virtual environment called "venv", so searching the Visual Studio documentation I found that the virtual environment should start with a dot ".", though they never mention this; I then created my virtual environment as ".venv" and that fixed the error: https://code.visualstudio.com/docs/python/environments#_create-a-virtual-environment A: In my own case, I was trying to activate the venv in Windows PowerShell while the venv was created in wsl. So, I had to recreate the venv with PowerShell, albeit with a different environment name, and reinstall the requirements. A: Here's the answer. Add this to your user and/or workspace settings.json file: "python.defaultInterpreterPath": "${env:VIRTUAL_ENV}". Then the first time you launch a workspace from an active virtual environment, vscode will set the interpreter correctly. Thereafter it will use whatever interpreter was set the last time the workspace was closed. As long as you don't manually change it, you're set. For existing workspaces, just manually set the interpreter and vscode will always use the interpreter from the prior session. It will never use anything in settings.json (or .env or .venv) except the first time a workspace is launched (and in that case, I think it only uses the settings.json name-value pair shown above). That will work as-is for virtualenvs managed by pyenv-virtualenv (or virtualenvwrapper). Should work for regular virtualenv too. For conda, replace VIRTUAL_ENV with whatever it uses, assuming it sets a similar variable. Just activate something and type env to see all the environment variables. This is the solution as long as you create a virtualenv, then launch a workspace for the first time, and the association between the workspace and virtualenv does not change. Unfortunately, it appears you have to set the interpreter manually if the association changes, but you only have to do it once. The official explanation is here, specifically where it says the interpreter is stored internally i.e. not in any configuration file exposed to the user: A: This issue in VS code was fixed for me by simply using Command Prompt in VS code instead of PowerShell as the Terminal A: "python.venvPath" is the command to provide the venv path. In VScode settings.json add "python.terminal.activateEnvironment": true, "python.venvPath": "Add_Venv_DirectoryPath_here", A: After some search I found the following property in the vs-code settings which fixed the problem for me: Python: Env File, where the default value is ${workspaceFolder}/.env. Usually I call my venv folder .venv so I fixed the settings to be ${workspaceFolder}/.venv. Now the venv python version appeared in the select interpreter option. vs code venv file property A: I had a similar problem, and found a very easy and simple solution. I am using a mac and this is how it works. I structured my development folder like this: "Users/my_user_name/Dev/venv" I created multiple virtual environments at the same level as the "venv". The problem is I filled out the "python.venvPath" with "Users/my_user_name/Dev/venv1" or one of the virtual environments. This prevents VS Code from detecting my other virtual environments. So the fix is very simple, just change the value of "python.venvPath" from "Users/my_user_name/Dev/venv1" to this "Users/my_user_name/Dev/" and voila, it detects all of my virtual environments. I hope this answer helps whoever is having a similar problem.
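A quick way to double-check which interpreter VS Code actually picked up (a diagnostic sketch of my own, not taken from any answer above) is to run a two-line Python snippet with the selected interpreter, for example in the integrated terminal:
import sys
print(sys.executable)  # if the venv was detected, this path points inside it, e.g. ...\venv\Scripts\python.exe on Windows or .../venv/bin/python elsewhere
If the printed path is the system Python rather than the venv, the interpreter selection did not take effect and the fixes above still apply.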
Visual Studio Code does not detect Virtual Environments
Visual Studio Code does not detect virtual environments. I run vscode in the folder where the venv folder is located, when I try to select the kernel in vscode I can see the main environment and one located elsewhere on the disk. Jupyter running in vscode also doesn't see this environment. I have installed ipykernel in this environment. I tried to reinstall vscode and python extension. I tried to set the path in settings.json inside .vscode: { "python.pythonPath": ".\\venv\\Scripts\\python.exe" } Windows 10 Python 3.6.7 (64-bit) VSCode 1.54.3
[ "\nIn VSCode open your command palette — Ctrl+Shift+P by default\n\nLook for Python: Select Interpreter\n\nIn Select Interpreter choose Enter interpreter path... and then Find...\n\nNavigate to your venv folder — eg, ~/pyenvs/myenv/ or \\Users\\Foo\\Bar\\PyEnvs\\MyEnv\\\n\nIn the virtual environment folder choose <your-venv-name>/bin/python or <your-venv-name>/bin/python3\n\n\n\nThe issue is that VSCode's Python extension by default uses the main python or python3 program while venv effectively creates a \"new\" python/python3 executable (that is kind of the point of venv) so the extension does not have access to anything (available modules, namespaces, etc) that you have installed through a venv since the venv specific installations are not available to the main Python interpreter (again, this is by design—like how applications installed in a VM are not available to the host OS)\n", "1.In VSCode open your command palette — Ctrl+Shift+P by default\n2.Look for Python: Select Interpreter\n3.In Select Interpreter choose Enter interpreter path... and then Find...\n4.Locate env folder, open Scripts folder , and choose python or python3\n\nwindows - venv\n\n", "OK, I found a solution.\nFirstly uninstall Visual Studio Code. Go to C:\\Users\\Your_profile and delete the folders related to Visual Studio Code that start with a period. Then turn on showing hidden folders and go to C:\\Users\\Your_profile\\AppData. Type vscode in the file finder and remove all foders and files related to Visual Studio Code. Finally, install Visual Studio Code and enjoy the virtual environments. :)\n", "VS Code: Python Interpreter can't find my venv\n\nThe only solution I found was to delete the venv and recreate it. I followed these steps but I'll provide a brief summary for Windows:\n\nActivate your virtualenv. Go to the parent folder where your Virtual Environment is located and run venv\\scripts\\activate. Keep in mind that the first name \"venv\" can vary.\nCreate a requirements.txt file. pip freeze requirements.txt\ndeactivate to exit the venv\nrm venv to delete the venv\npy -m venv venv to create a new one\npip install -r requirements.txt to install the requirements.\n\n\nThis worked for me, I didn't delete the old, but created a new python -m venv /path/newVenv in the ~/Envs folder, C:\\Users\\Admin\\Envs. Maybe VS Code is searching in the ~/Envs folder, or it needs to be added to the python.path in the View -> Command Pallete -> >Preferences: Open User Settings.\n", "None of the suggestions on this thread worked for me. That said, I don't think the issue lies with VS Code, it's venv. I wound up installing PyCharm to fix this. After you’ve downloaded:\nPyCharm > Preferences > search “interpreter” > Project: Python Interpreter > Click ‘+’ > in Virtualenv Environment > New environment (should automatically populate everything for a new env). Select OK, OK, OK.\nIn the bottom left, you’ll see Git | TODO | Problems | Terminal…etc. Click “Terminal” and you should see your environment already activated. From there, pip3 install your dependencies. 
Close PyCharm.\nGo back to VS Code, open your project, and follow the suggestions above to select the Virtualenv (mine was 'venv': venv) as your interpreter.\nFinally resolved.\n", "If you're a Linux user, and you've used this or similaar to create your virtual environment:\npython3 -m venv venv\n\nand you cannot get the debug to work, remove your venv and create it from the VS Code terminal (click Ctrl + back-tick to open).\nWhen you create it from the VS Code terminal, VS Code will ask if you want to use this new environment it amazingly detected for this workspace, say yes.\n", "Part of the confusion here may stem from UI behavior that is at odds with the VScode documentation. The docs state:\n\nWhen you create a new virtual environment, a prompt will be displayed\nto allow you to select it for the workspace.\n\nThat didn't happen in my case (VScode 1.66.2 running on Windows 10 with Remote - WSL plugin version 0.66.2). I followed the steps outlined here; I did not see the pop-up described by the VScode docs but clicking on the Python interpreter version in the status bar showed that VScode had automatically selected the interpreter installed in the virtual environment. Furthermore, I did observe that VScode was sourcing .venv/bin/activate as described in the post linked above\n\nRun the code by clicking the play button, note the .venv and source\n“/Users/jemurray/Google\nDrive/scripts/personalPython/helloworld/.venv/bin/activate” in the\nterminal shows the script is activated and running in the virtual\nenvironment\n\n", "I was having the same error in my scripts with a virtual environment called \"venv\", so searching the Visual Studio documentation I found that the virtual environment starts with a dot \".\" but they never mentioned this, then I created my virtual environment \".venv\" and that fixes the error:\nhttps://code.visualstudio.com/docs/python/environments#_create-a-virtual-environment\n", "In my own case, I was trying to activate the venv in Windows PowerShell while the venv was created in wsl. So, I had to recreate the venv with PowerShell albeit with different environment name and reinstall the requirements.\n", "Here's the answer. Add this to your user and/or workspace settings.json file:\n\"python.defaultInterpreterPath\": \"${env:VIRTUAL_ENV}\".\nThen the first time you launch a workspace from an active virtual environment, vscode will set the interpreter correctly. Thereafter it will use whatever interpreter was set the last time the workspace was closed. As long as you don't manually change it, you're set. For existing workspaces, just manually set the interpreter and vscode will always use the interpreter from the prior session. It will never use anything in settings.json (or .env or .venv) except the first time a workspace is launched (and in that case, I think it only uses the settings.json name-value pair shown above).\nThat will work as-is for virtualenvs managed by pyenv-virtualenv (or virtualenvwrapper). Should work for regular virtualenv too. For conda, replace VIRTUAL_ENV with whatever it uses, assuming it sets a similar variable. Just activate something and type env to see all the environment variables.\nThis is the solution as long as you create a virtualenv, then launch a workspace for the first time, and the association between the workspace and virtualenv does not change. 
Unfortunately, it appears you have to set the interpreter manually if the association changes, but you only have to do it once.\nThe official explanation is here, specifically where it says the interpreter is stored internally i.e. not in any configuration file exposed to the user:\n\n", "This issue in VS code was fixed for me my simply using Command Prompt in VS code instead of PowerShell as the Terminal\n\n", "\"python.venvPath\" is the command to provide the venv path.\nIn VScode settings.json add\n \"python.terminal.activateEnvironment\": true,\n\n \"python.venvPath\": \"Add_Venv_DirectoryPath_here\",\n\n", "After some search I found the next property in the vs-code settings which fix the problem for me: Python: Env File, where the default value is ${workspaceFolder}/.env.\nUsually I call my venv folder .venv so I fixed the settings to be\n${workspaceFolder}/.venv.\nNow the venv python version appeared in the select interpreter option.\nvs code venv file property\n", "I have similar problem, and found a very easy and simple solution. I am using a mac and this is how it works.\nI structured my development folder like this: \"Users/my_user_name/Dev/venv\"\nI created multiple virtual environments at the same level on the \"venv\". The problem is I fill out the \"python.venvPath\" with \"Users/my_user_name/Dev/venv1\" or one of the virtual environment. This prevent VS Code form detecting my other virtual environment. So the fix is very simple, just change the value of \"python.venvPath\" from \"Users/my_user_name/Dev/venv1\" to this \"Users/my_user_name/Dev/\" and voila, it detects all of my virtual environment.\nI hope this answer helps whoever having similar problem.\n" ]
[ 36, 5, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "jupyter", "python", "virtual_environment", "visual_studio_code" ]
stackoverflow_0066869413_jupyter_python_virtual_environment_visual_studio_code.txt
Q: Finding a pair from a 2D vector if its second element is not present in the first element of another pair Finding a pair from a 2D vector if its second element is not present in the first element of another pair. A: Please add more clarity to the problem. As I understand it: create a map of the first elements, indicating whether each first element is present, by iterating once. In a second iteration, look at each pair's second element and search for it in the map. If it is not present, then return that pair; else move to the next one.
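As an illustrative sketch of the approach described in the answer (written in Python for brevity, since the question itself concerns C++ vectors; the pairs value below is a hypothetical example, not from the question):
def find_pair(pairs):
    # First pass: collect every first element into a set (the "map" from the answer).
    firsts = {first for first, _ in pairs}
    # Second pass: return the first pair whose second element never appears as a first element.
    for pair in pairs:
        if pair[1] not in firsts:
            return pair
    return None

print(find_pair([(1, 2), (2, 3), (3, 5)]))  # prints (3, 5), because 5 is not the first element of any pair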
Finding a pair from a 2D vector if its second element is not present in the first element of another pair
Finding a pair from a 2D vector if its second element is not present in the first element of another pair.
[ "Please add more clarity to problem.\nAs I got:\ncreate a map for first elements indicating if first element is present or not by iterating once.\nIn second iteration look for second element and search if it is there in map as or not.\nIf not present then return it, else move to next one.\n" ]
[ 0 ]
[]
[]
[ "vector" ]
stackoverflow_0074673834_vector.txt
Q: Vendoring npm packages in deno How does one vendor an npm package in deno? import_map.json: { "imports": { "lume/": "https://deno.land/x/lume@v1.12.1/", } } Lume has some npm dependencies, like https://registry.npmjs.org/markdown-it/-/markdown-it-13.0.0.tgz. deno.jsonc: { "importMap": "import_map.json", } dev_deps.ts: export * as lume from "https://deno.land/x/lume@v1.12.1/mod.ts"; command: $ deno vendor --force --unstable dev_deps.ts # ... Download https://registry.npmjs.org/markdown-it-attrs/-/markdown-it-attrs-4.1.3.tgz # ... thread 'main' panicked at 'Could not find local path for npm:markdown-it-attrs@4.1.3', cli/tools/vendor/mappings.rs:138:11 I tried adding export * as ma from "npm:markdown-it-attrs"; to dev_deps.ts, but it did nothing. I found the following issue on github. Maybe this issue does have something to do with it. I didn't find anything about how to resolve the problem in the official deno documentation and the lume documentation. A: Unfortunately, you currently cannot use import_map in your Deno project if your goal is to publish a module that aims to be used in other applications, simply because you don't control the way the deno runtime will be started. From the application point of view, the deno run command cannot search every import_map configuration in your dependencies and handle them properly. The import_map feature should be used only at the end application level. The fallback is to use, by convention, a deps.ts source file to centralize all your dependencies.
Vendoring npm packages in deno
How does one vendor an npm package in deno? import_map.json: { "imports": { "lume/": "https://deno.land/x/lume@v1.12.1/", } } Lume has some npm dependencies, like https://registry.npmjs.org/markdown-it/-/markdown-it-13.0.0.tgz. deno.jsonc: { "importMap": "import_map.json", } dev_deps.ts: export * as lume from "https://deno.land/x/lume@v1.12.1/mod.ts"; command: $ deno vendor --force --unstable dev_deps.ts # ... Download https://registry.npmjs.org/markdown-it-attrs/-/markdown-it-attrs-4.1.3.tgz # ... thread 'main' panicked at 'Could not find local path for npm:markdown-it-attrs@4.1.3', cli/tools/vendor/mappings.rs:138:11 I tried adding export * as ma from "npm:markdown-it-attrs"; to dev_deps.ts, but it did nothing. I found the following issue on github. Maybe this issue does have something to do with it. I didn't find anything about how to resolve the problem in the official deno documentation and the lume documentation.
[ "Infortunately, currently you cannot use import_map in your Deno project if your goal is to publish a module that aims to be used in other applications, simply because you don't handle the way deno runtime will start.\nFrom the application point of view, the deno run command cannot search every import_map configurations in your dependencies and handle them properly.\nThe import_map feature should be used only at end application level.\nThe fallback is to use by onvention a deps.ts source file to centralize all your dependencies.\n" ]
[ 0 ]
[]
[]
[ "build", "deno", "dependencies", "lume", "package" ]
stackoverflow_0074401038_build_deno_dependencies_lume_package.txt
Q: I get error "unmatched '}'" when I scrape website(korter.az) I want to crawl all advertisements but output is "unmatched '}'". Is there any easy way to do it? I tried Beautifulsoup before but I think It's not correct way to do it or I'm using it wrong way. How can I scrape all '199 yeni tikili binalar' from the website. from ast import literal_eval from bs4 import BeautifulSoup as bs import requests import re import json import requests import pandas as pd from ast import literal_eval url = "https://korter.az/yasayis-kompleksleri-baku" html_doc = requests.get(url).text data = re.search(r'2804\.jpg"\}\}\}\],(".*")', html_doc).group(1) data = json.loads(literal_eval(data)) df = pd.DataFrame(data) df.to_excel('korter.xlsx', index=False) A: The site has an api which can be accessed by request Url of the API is : "https://korter.az/api/building/listing?mainGeoObjectId=1&page=1&lang=az-AZ&locale=az-AZ" Full Code import requests import math import pandas as pd def roundup(x): return int(math.ceil(x / 20.0)) * 20 # Gettig no of results url1 = f"https://korter.az/api/building/listing?mainGeoObjectId=1&page=1&lang=az-AZ&locale=az-AZ" r = requests.get(url1) no_of_outcomes = r.json()["totalBuildingsCount"] # since the data is 199 i am rounding up to 20 since i will divide no of outcomes by 20 as the api only provides with 20 results at a time no_of_outcomes = roundup(no_of_outcomes) # Getting Sub Url from each Page by looping. result_url = [] previous_subdata = [] for k in range(1, int(no_of_outcomes/20)+1): url = f"https://korter.az/api/building/listing?mainGeoObjectId=1&page={k}&lang=az-AZ&locale=az-AZ" r = requests.get(url) subdata = r.json()["buildings"] for i in subdata: suburl = "https://korter.az"+i["url"] result_url.append(suburl) print(len(result_url)) df = pd.DataFrame(result_url) print(df) Output 199 0 0 https://korter.az/toca-residence-baki 1 https://korter.az/malibu-residence-baki 2 https://korter.az/zirve-park-baki 3 https://korter.az/melissa-park-baki 4 https://korter.az/white-hotel-baki .. ... 194 https://korter.az/yasham-boulevard-baki 195 https://korter.az/koroglu-baki 196 https://korter.az/luxor-palace-baki 197 https://korter.az/shirvanshahlar-residence-baki 198 https://korter.az/baki-baglari-baki [199 rows x 1 columns] Hope this helps. Happy Coding :)
I get error "unmatched '}'" when I scrape website(korter.az)
I want to crawl all advertisements but the output is "unmatched '}'". Is there any easy way to do it? I tried BeautifulSoup before, but I think it's not the correct way to do it, or I'm using it the wrong way. How can I scrape all '199 yeni tikili binalar' from the website? from ast import literal_eval from bs4 import BeautifulSoup as bs import requests import re import json import requests import pandas as pd from ast import literal_eval url = "https://korter.az/yasayis-kompleksleri-baku" html_doc = requests.get(url).text data = re.search(r'2804\.jpg"\}\}\}\],(".*")', html_doc).group(1) data = json.loads(literal_eval(data)) df = pd.DataFrame(data) df.to_excel('korter.xlsx', index=False)
[ "The site has an api which can be accessed by request\nUrl of the API is : \"https://korter.az/api/building/listing?mainGeoObjectId=1&page=1&lang=az-AZ&locale=az-AZ\"\nFull Code\nimport requests\nimport math\nimport pandas as pd\n\n\ndef roundup(x):\n return int(math.ceil(x / 20.0)) * 20\n\n\n# Gettig no of results\nurl1 = f\"https://korter.az/api/building/listing?mainGeoObjectId=1&page=1&lang=az-AZ&locale=az-AZ\"\nr = requests.get(url1)\nno_of_outcomes = r.json()[\"totalBuildingsCount\"]\n# since the data is 199 i am rounding up to 20 since i will divide no of outcomes by 20 as the api only provides with 20 results at a time\nno_of_outcomes = roundup(no_of_outcomes)\n\n# Getting Sub Url from each Page by looping.\n\nresult_url = []\nprevious_subdata = []\n\nfor k in range(1, int(no_of_outcomes/20)+1):\n url = f\"https://korter.az/api/building/listing?mainGeoObjectId=1&page={k}&lang=az-AZ&locale=az-AZ\"\n r = requests.get(url)\n subdata = r.json()[\"buildings\"]\n for i in subdata:\n suburl = \"https://korter.az\"+i[\"url\"]\n result_url.append(suburl)\n\n\nprint(len(result_url))\ndf = pd.DataFrame(result_url)\nprint(df)\n\nOutput\n199\n 0\n0 https://korter.az/toca-residence-baki\n1 https://korter.az/malibu-residence-baki\n2 https://korter.az/zirve-park-baki\n3 https://korter.az/melissa-park-baki\n4 https://korter.az/white-hotel-baki\n.. ...\n194 https://korter.az/yasham-boulevard-baki\n195 https://korter.az/koroglu-baki\n196 https://korter.az/luxor-palace-baki\n197 https://korter.az/shirvanshahlar-residence-baki\n198 https://korter.az/baki-baglari-baki\n\n[199 rows x 1 columns]\n\nHope this helps. Happy Coding :)\n" ]
[ 0 ]
[]
[]
[ "python", "python_re", "web_scraping" ]
stackoverflow_0074673490_python_python_re_web_scraping.txt
Q: How to create a random list that satisfies a condition (in one try)? I have written the following code to generate a random list. I want the list to have elements between 0 and 500, but the summation of all elements does not exceed 1300. I don't know how to continue my code to do that. I have written other codes; for example, to create a list of random vectors and then pick among those that satisfy the condition. But here I want to create such a list in one try. nv = 5 bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)] var =[] for j in range(nv): var.append(random.uniform(bounds[j][0], bounds[j][1])) summ = sum(var) if summ > 1300: ???? A: Don't append until after you've validated the value. Use while len() < maxLen so that you can handle repeat attempts. You don't really need nv since len(bounds) dictates the final value of len(var). len(var) is also the next index of the var list that is unused so you can use that to keep track of where you are in bounds. A running sum is more efficient than using sum() on every check. (Though on small lists, it's not going to make a noticeable difference.) The * in the .uniform() call splits a list into individual arguments. (Asterisks in Python: what they are and how to use them seems like a good tutorial on the subject.) import random bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)] var = [] runningSum = 0 while len(var) < len(bounds): sample = random.uniform(*bounds[len(var)]) if runningSum + sample < 1300: runningSum += sample var.append(sample) print(repr(var)) A: Without the aid of numpy you could do this: from random import uniform def func1(): LIMIT = 1_300 bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)] while sum(result := [uniform(lo, hi) for lo, hi in bounds]) > LIMIT: pass return result
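Since the question is tagged numpy, here is a hedged sketch of the same rejection idea vectorised with numpy (my own variation: none of the answers use numpy). Note it still retries whole candidate vectors until the sum condition holds, so it is not strictly "one try" either:
import numpy as np

rng = np.random.default_rng()
bounds = np.array([(0, 500)] * 5)  # same bounds as in the question

while True:
    var = rng.uniform(bounds[:, 0], bounds[:, 1])  # one candidate vector per attempt
    if var.sum() <= 1300:
        break

print(var, var.sum())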
How to create a random list that satisfies a condition (in one try)?
I have written the following code to generate a random list. I want the list to have elements between 0 and 500, but the summation of all elements does not exceed 1300. I don't know how to continue my code to do that. I have written other codes; for example, to create a list of random vectors and then pick among those that satisfy the condition. But here I want to create such a list in one try. nv = 5 bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)] var =[] for j in range(nv): var.append(random.uniform(bounds[j][0], bounds[j][1])) summ = sum(var) if summ > 1300: ????
[ "Don't append until after you've validated the value.\nUse while len() < maxLen so that you can handle repeat attempts.\nYou don't really need nv since len(bounds) dictates the final value of len(var).\nlen(var) is also the next index of the var list that is unused so you can use that to keep track of where you are in bounds.\nA running sum is more efficient than using sum() on every check. (Though on small lists, it's not going to make a noticeable difference.)\nThe * in the .uniform() call splits a list into individual arguments. (Asterisks in Python: what they are and how to use them seems like a good tutorial on the subject.)\nimport random\n\nbounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)]\nvar = []\nrunningSum = 0\nwhile len(var) < len(bounds):\n sample = random.uniform(*bounds[len(var)])\n if runningSum + sample < 1300:\n runningSum += sample\n var.append(sample)\n\nprint(repr(var))\n\n", "Without the aid of numpy you could do this:\nfrom random import uniform\n\ndef func1():\n LIMIT = 1_300\n bounds = [(0, 500), (0, 500), (0, 500), (0, 500), (0, 500)]\n\n while sum(result := [uniform(lo, hi) for lo, hi in bounds]) > LIMIT:\n pass\n\n return result\n\n" ]
[ 1, 0 ]
[]
[]
[ "list", "numpy", "python", "random" ]
stackoverflow_0074673377_list_numpy_python_random.txt
Q: problem with kotlin update real database permanent loop First of all, thanks to everybody: this place is awesome and full of people willing to help ;) My question: I've created a function using Realtime Database to update, at the same time, three values from three different children in the same table. And it works perfectly well if I update just one of them. To launch the function the user can update from none to all three values together, but the problem is that when the user modifies more than one of the values, Firebase keeps looping endlessly, updating the values continuously My DB My function is here: private fun guardarTokens () { referenciaBD2 = FirebaseDatabase.getInstance().getReference("TipoUsuario") val tipoUsuarioDatos = HashMap<String, Any>() referenciaBD2.addValueEventListener(object : ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { if (snapshot.exists()) { for (snapshot in snapshot.children) { val tipoUsuarioInfo = snapshot.getValue(TipoUsuario::class.java) if (tipoUsuarioInfo!!.descripcionUsuario == "usuario") tipoUsuarioDatos["tuid"] = binding.etTokenAlumno.text.toString() if (tipoUsuarioInfo!!.descripcionUsuario == "profesor") tipoUsuarioDatos["tuid"] = binding.etTokenProfesor.text.toString() if (tipoUsuarioInfo!!.descripcionUsuario == "administrador") tipoUsuarioDatos["tuid"] = binding.etTokenAdmin.text.toString() snapshot.ref.updateChildren(tipoUsuarioDatos) } } } override fun onCancelled(error: DatabaseError) { } }) } A: That's not how you should update a specific element in the Realtime Database. What you're actually doing is downloading the entire TipoUsuario node on the client in order to perform a verification. That is considered a waste of resources and bandwidth. What you should do instead is to perform a query and get only the data you are interested in: referenciaBD2 = FirebaseDatabase.getInstance().getReference("TipoUsuario") val queryByUsuario = referenciaBD2.orderByChild("descripcionUsuario").equalTo("usuario") val tipoUsuarioDatos = HashMap<String, Any>() queryByUsuario.addListenerForSingleValueEvent(object : ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { if (snapshot.exists()) { for (snapshot in snapshot.children) { val tipoUsuarioInfo = snapshot.getValue(TipoUsuario::class.java) tipoUsuarioDatos["tuid"] = binding.etTokenAlumno.text.toString() snapshot.ref.updateChildren(tipoUsuarioDatos) } } } override fun onCancelled(error: DatabaseError) { Log.d("TAG", error.getMessage()) //Never ignore potential errors! } }) In this way, the query will only return the children where the descripcionUsuario field holds the value of usuario.
problem with kotlin update real database permanent loop
First of all, thanks to everybody: this place is awesome and full of people willing to help ;) My question: I've created a function using Realtime Database to update, at the same time, three values from three different children in the same table. And it works perfectly well if I update just one of them. To launch the function the user can update from none to all three values together, but the problem is that when the user modifies more than one of the values, Firebase keeps looping endlessly, updating the values continuously My DB My function is here: private fun guardarTokens () { referenciaBD2 = FirebaseDatabase.getInstance().getReference("TipoUsuario") val tipoUsuarioDatos = HashMap<String, Any>() referenciaBD2.addValueEventListener(object : ValueEventListener { override fun onDataChange(snapshot: DataSnapshot) { if (snapshot.exists()) { for (snapshot in snapshot.children) { val tipoUsuarioInfo = snapshot.getValue(TipoUsuario::class.java) if (tipoUsuarioInfo!!.descripcionUsuario == "usuario") tipoUsuarioDatos["tuid"] = binding.etTokenAlumno.text.toString() if (tipoUsuarioInfo!!.descripcionUsuario == "profesor") tipoUsuarioDatos["tuid"] = binding.etTokenProfesor.text.toString() if (tipoUsuarioInfo!!.descripcionUsuario == "administrador") tipoUsuarioDatos["tuid"] = binding.etTokenAdmin.text.toString() snapshot.ref.updateChildren(tipoUsuarioDatos) } } } override fun onCancelled(error: DatabaseError) { } }) }
[ "That's not how you should update a specific element in the Realtime Database. What you're actually doing, you're downloading the entire TipoUsuario node on the client in order to perform a verification. That is considered a waste of resources and bandwidth. What you should do instead, is to perform a query and get only the data you are interested in:\nreferenciaBD2 = FirebaseDatabase.getInstance().getReference(\"TipoUsuario\")\nval queryByUsuario = referenciaBD2.orderByChild(\"descripcionUsuario\").equalTo(\"usuario\")\nval tipoUsuarioDatos = HashMap<String, Any>()\nqueryByUsuario.addListenerForSingleValueEvent(object : ValueEventListener {\n override fun onDataChange(snapshot: DataSnapshot) {\n if (snapshot.exists()) {\n for (snapshot in snapshot.children) {\n val tipoUsuarioInfo = snapshot.getValue(TipoUsuario::class.java)\n tipoUsuarioDatos[\"tuid\"] = binding.etTokenAlumno.text.toString()\n snapshot.ref.updateChildren(tipoUsuarioDatos)\n }\n }\n }\n override fun onCancelled(error: DatabaseError) {\n Log.d(\"TAG\", error.getMessage()) //Never ignore potential errors!\n }\n})\n\nIn this way, the query will only return the children where the descripcionUsuario field holds the value of usuario.\n" ]
[ 0 ]
[]
[]
[ "android", "firebase", "firebase_realtime_database", "kotlin" ]
stackoverflow_0074666209_android_firebase_firebase_realtime_database_kotlin.txt
Q: Calling mean() Function Without Removing Non-Numeric Columns In Dataframe I have the following dataframe: import pandas as pd fertilityRates = pd.read_csv('fertility_rate.csv') fertilityRatesRowCount = len(fertilityRates.axes[0]) fertilityRates.head(fertilityRatesRowCount) I have found a way to find the mean for each row over columns 1960-1969, but would like to do so without removing the column called "Country". The following is what is outputted after I execute the following commands: Mean1960To1970 = fertilityRates.iloc[:, 1:11].mean(axis=1) Mean1960To1970 A: You can use pandas.DataFrame.loc to select a range of years (e.g "1960":"1968" means from 1960 to 1968). Try this : Mean1960To1968 = ( fertilityRates[["Country"]] .assign(Mean= fertilityRates.loc[:, "1960":"1968"].mean(axis=1)) ) # Output : print(Mean1960To1968) Country Mean 0 _World 5.004444 1 Afghanistan 7.450000 2 Albania 5.913333 3 Algeria 7.635556 4 Angola 7.030000 5 Antigua and Barbuda 4.223333 6 Arab World 7.023333 7 Argentina 3.073333 8 Armenia 4.133333 9 Aruba 4.044444 10 Australia 3.167778 11 Austria 2.715556
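A hedged alternative sketch (my own variation, reusing the fertilityRates frame from the question): if "Country" is the only non-numeric column, you can let pandas select the numeric columns itself, at the cost of averaging over every numeric column rather than just 1960-1969:
Mean1960To1970 = fertilityRates.assign(
    Mean=fertilityRates.select_dtypes("number").mean(axis=1)  # row-wise mean over all numeric columns
)
When you need exactly the 1960-1969 window, the loc-based slice shown in the answer remains the safer choice.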
Calling mean() Function Without Removing Non-Numeric Columns In Dataframe
I have the following dataframe: import pandas as pd fertilityRates = pd.read_csv('fertility_rate.csv') fertilityRatesRowCount = len(fertilityRates.axes[0]) fertilityRates.head(fertilityRatesRowCount) I have found a way to find the mean for each row over columns 1960-1969, but would like to do so without removing the column called "Country". The following is what is outputted after I execute the following commands: Mean1960To1970 = fertilityRates.iloc[:, 1:11].mean(axis=1) Mean1960To1970
[ "You can use pandas.DataFrame.loc to select a range of years (e.g \"1960\":\"1968\" means from 1960 to 1968).\nTry this :\nMean1960To1968 = (\n fertilityRates[[\"Country\"]]\n .assign(Mean= fertilityRates.loc[:, \"1960\":\"1968\"].mean(axis=1))\n )\n\n# Output :\nprint(Mean1960To1968)\n\n Country Mean\n0 _World 5.004444\n1 Afghanistan 7.450000\n2 Albania 5.913333\n3 Algeria 7.635556\n4 Angola 7.030000\n5 Antigua and Barbuda 4.223333\n6 Arab World 7.023333\n7 Argentina 3.073333\n8 Armenia 4.133333\n9 Aruba 4.044444\n10 Australia 3.167778\n11 Austria 2.715556\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074673594_dataframe_pandas_python.txt
Q: 'str' object has no attribute 'user_loader' I'm making a dbms project on a covid hospital system and I can't seem to figure out why I'm getting this error, here's my code: from flask import Flask,redirect,render_template,request from flask_sqlalchemy import SQLAlchemy from flask_login import UserMixin from flask_login import login_required,logout_user,login_user,login_manager,LoginManager,current_user #database connection local_server=True app=Flask(__name__) app.secretkey="adarshacharya" #unique access login_manager=LoginManager() login_manager.init_app(app) login_manager=login_view='login' app.config["SQLALCHEMY_DATABASE_URI"]='mysql://root:@localhost/covidata' db=SQLAlchemy(app) @login_manager.user_loader def load_user(user_id): return patient_details.query.get(int(user_id)) class patient_details(db.Model): pid=db.Column(db.Integer, primary_key=True) Email=db.Column(db.String(50),unique=True) Password=db.Column(db.String(50)) FirstName=db.Column(db.String(50)) LastName=db.Column(db.String(50)) Contact=db.Column(db.String(10),unique=True) Age=db.Column(db.Integer) @app.route("/") def home(): return render_template("index.html") @app.route("/patientregistration") def PatientRegistration(): return render_template('patientregistration.html') @app.route("/patientlogin") def PatientLogin(): return render_template('patientlogin.html') @app.route('/registration',methods=['POST','GET']) def registration(): if request.method=="POST": patientid=request.form.get('Pid') pemail=request.form.get('Pemail') ppassword=request.form.get('PPassword') pfirstname=request.form.get('PFirstName') plastname=request.form.get('PLastName') pcontact=request.form.get('PContact') page=request.form.get('PAge') print(patientid,pemail,ppassword,pfirstname,plastname,pcontact,page) return render_template("patientregistration.html") app.run(debug=True) And this is the error: @login_manager.user_loader AttributeError: 'str' object has no attribute 'user_loader' I've tried all the fixes I've come across but those don't seem to fix the problem :( Any help would be appreciated A: Your problem is probably at this line, login_manager=login_view='login' change it to, login_manager.login_view='login'
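For reference, a minimal sketch of the corrected setup (the root cause is the chained assignment that rebinds login_manager to the string 'login'):
login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = 'login'  # attribute assignment, not a chained '='
With login_manager still being a LoginManager instance, the @login_manager.user_loader decorator resolves normally. As an unrelated aside, app.secretkey should most likely be app.secret_key so Flask actually picks up the secret.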
'str' object has no attribute 'user_loader'
I'm making a dbms project on a covid hospital system and I can't seem to figure out why I'm getting this error, here's my code: from flask import Flask,redirect,render_template,request from flask_sqlalchemy import SQLAlchemy from flask_login import UserMixin from flask_login import login_required,logout_user,login_user,login_manager,LoginManager,current_user #database connection local_server=True app=Flask(__name__) app.secretkey="adarshacharya" #unique access login_manager=LoginManager() login_manager.init_app(app) login_manager=login_view='login' app.config["SQLALCHEMY_DATABASE_URI"]='mysql://root:@localhost/covidata' db=SQLAlchemy(app) @login_manager.user_loader def load_user(user_id): return patient_details.query.get(int(user_id)) class patient_details(db.Model): pid=db.Column(db.Integer, primary_key=True) Email=db.Column(db.String(50),unique=True) Password=db.Column(db.String(50)) FirstName=db.Column(db.String(50)) LastName=db.Column(db.String(50)) Contact=db.Column(db.String(10),unique=True) Age=db.Column(db.Integer) @app.route("/") def home(): return render_template("index.html") @app.route("/patientregistration") def PatientRegistration(): return render_template('patientregistration.html') @app.route("/patientlogin") def PatientLogin(): return render_template('patientlogin.html') @app.route('/registration',methods=['POST','GET']) def registration(): if request.method=="POST": patientid=request.form.get('Pid') pemail=request.form.get('Pemail') ppassword=request.form.get('PPassword') pfirstname=request.form.get('PFirstName') plastname=request.form.get('PLastName') pcontact=request.form.get('PContact') page=request.form.get('PAge') print(patientid,pemail,ppassword,pfirstname,plastname,pcontact,page) return render_template("patientregistration.html") app.run(debug=True) And this is the error: @login_manager.user_loader AttributeError: 'str' object has no attribute 'user_loader' I've tried all the fixes I've come across but those don't seem to fix the problem :( Any help would be appreciated
[ "Your problem is probably at this line,\nlogin_manager=login_view='login'\n\nchange it to,\nlogin_manager.login_view='login'\n\n" ]
[ 0 ]
[]
[]
[ "flask", "flask_login", "flask_sqlalchemy", "mysql" ]
stackoverflow_0074673429_flask_flask_login_flask_sqlalchemy_mysql.txt
Q: How to Check if the user is logged in on reactjs using JWT I'm trying to make a system that can check if the user is logged in or not. I'm using reactjs and JWT tokens that are stored in the cookies in the browser. This is my reactjs file code const ApproveRequest = (approveOption) => { if (approveOption === "approve"){ let request = 1; axios.put("http://localhost:3001/cash/approverequest",{ approved: request, id: id, header: { accessToken: cookies.getItem("accessToken") }, withCredentials: true, }).then((response) => { if(response.data.error) { console.log(response.data.error); }else{ setCashObject({ ...cashObject, request: request }); alert("Request Approve"); } }); } else { alert("Failed to update the request, please contact the dev"); } } from my server JWT.js file const validateToken = (req, res, next) => { const accessToken = req.header("accessToken"); if(!accessToken) { return res.json({error: "User not authenticated"}); } try{ const validToken = verify(accessToken, "bluedragon14S"); if(validToken){ req.authenticated = true; return next(); } }catch (err) { return res.json({error: err}); } } from server cash.js route router.put("/approverequest", validateToken,async (req, res) => { const { request = 1, id } = req.body; await Cash.update({request: request}, {where: {id: id} }); res.json(request); }); What I want is to check if the user is logged in so that he/she can update the request. Thank you in advance for your help. In addition: with that code I can store the cookies in the browser; I just don't know how to check if the user is logged in or not A: I think you can access user cookies in this way: req.cookies.accessToken (this assumes the Express server registers the cookie-parser middleware; otherwise req.cookies will be undefined) so change this: const accessToken = req.header("accessToken"); to this: const accessToken = req.cookies?.accessToken if (accessToken) ...
How to Check if the user is logged in on reactjs using JWT
I'm trying to make a system that can check if the user is logged in or not. I'm using reactjs and JWT tokens that are stored in the cookies in the browser. This is my reactjs file code const ApproveRequest = (approveOption) => { if (approveOption === "approve"){ let request = 1; axios.put("http://localhost:3001/cash/approverequest",{ approved: request, id: id, header: { accessToken: cookies.getItem("accessToken") }, withCredentials: true, }).then((response) => { if(response.data.error) { console.log(response.data.error); }else{ setCashObject({ ...cashObject, request: request }); alert("Request Approve"); } }); } else { alert("Failed to update the request, please contact the dev"); } } from my server JWT.js file const validateToken = (req, res, next) => { const accessToken = req.header("accessToken"); if(!accessToken) { return res.json({error: "User not authenticated"}); } try{ const validToken = verify(accessToken, "bluedragon14S"); if(validToken){ req.authenticated = true; return next(); } }catch (err) { return res.json({error: err}); } } from server cash.js route router.put("/approverequest", validateToken,async (req, res) => { const { request = 1, id } = req.body; await Cash.update({request: request}, {where: {id: id} }); res.json(request); }); What I want is to check if the user is logged in so that he/she can update the request. Thank you in advance for your help. In addition: with that code I can store the cookies in the browser; I just don't know how to check if the user is logged in or not
[ "I think you can access user cookies in this way :\nreq.cookies.accessToken\n\nso change this :\nconst accessToken = req.header(\"accessToken\");\n\nto this :\nconst accessToken = req.cookies?.accessToken\n\nif(accessToken )\n...\n\n" ]
[ 0 ]
[]
[]
[ "axios", "cookies", "jwt", "reactjs" ]
stackoverflow_0074673852_axios_cookies_jwt_reactjs.txt
Q: cscope for files which are symlinks I have a source directory with several files. Some of them are symlinks to other files. I created a cscope.files file. But when I execute cscope, it complains about the files that are symlinks: cscope: cannot find file /home/bla/source/file.cc I think it's not very good, but maybe the correct way to go is to change the "find" script, to just write the destination of the symlink instead? A: Currently I'm using: # Write only the files which are NOT symlinks find `pwd` \( \( -iname "*.c" -o -iname "*.cc" -o -iname "*.h" \) -and \( -not -type l \) \) -print > cscope.files # Add the target of the symlink for all files matching the right extension, and are symlinks find `pwd` \( \( -iname "*.c" -o -iname "*.cc" -o -iname "*.h" \) -and -type l \) -printf "%l\n" >> cscope.files But this seems like a terrible solution. Still looking for a better one A: I think you can use this command to find the real paths of all files in the folder that you searched find -L [your searched folder] -name [your searched pattern] -exec realpath {} \; >> cscope.files For example, if I would like to add a developed folder and the linux kernel headers to cscope.files, I will use these commands: find -L `pwd` -iname "*.c" -o -iname "*.h" > cscope.files find -L /usr/src/linux-headers-3.19.0-15-generic/ -iname '*.h' -exec realpath {} \; >> cscope.files I hope the answer can help you. A: For example if you want to give / as your path for cscope, and want cscope to search files with extensions .c/.h/.x/.s/.S you can give the find command as: find / -type f -name "*.[chxsS]" -print -exec readlink -f {} \;> cscope.files This will include regular files, including targets of symbolic links. A: I just do the following to avoid symbolic links, as well as get the absolute path in the cscope.files. With an absolute path you can search from any directory in your sandbox when cscope is integrated with the vim editor find /"path-to-your-sandbox" -path .git -prune -o -name "*.[ch]" -exec readlink -f {} \; > cscope.files Note: if you omit -print from the find it does not put the symbolic link path in your cscope.files, only the resolved path. A: Better in a bash script: #!/bin/bash # # find_cscope_files.sh extension_list=(c cpp cxx cc h hpp hxx hh) for x in "${extension_list[@]}"; do find . -name "*.$x" -print -exec readlink -f {} \; done A: For reference for others, this is what I'm currently using. find "$(pwd)" \( -name "*.[chCS]" -o -name "*.[ch][ci]" -o -name "*.[ch]pp" -o -name "*.[ch]++" -o -name "*.[ch]xx" \) -not \( -ipath "*unittest*" -or -ipath "*regress*" \) \( \( -type l -xtype f -exec readlink -f {} \; \) -o \( -type f -print \) \) >cscope.files cscope -q -R -b -i cscope.files
cscope for files which are symlinks
I have a source directory with several files. Some of them are symlinks to other files. I created a cscope.files file. But when I execute cscope, it complains about the files that are symlinks: cscope: cannot find file /home/bla/source/file.cc I think it's not very good, but maybe the correct way to go is to change the "find" script, to just write the destination of the symlink instead?
[ "Currently I'm using:\n# Write only the files which are NOT symlinks\nfind `pwd` \\( \\( -iname \"*.c\" -o -iname \"*.cc\" -o -iname \"*.h\" \\) -and \\( -not -type l \\) \\) -print > cscope.files\n# Add the target of the symlink for all files matching the right extension, and are symlinks\nfind `pwd` \\( \\( -iname \"*.c\" -o -iname \"*.cc\" -o -iname \"*.h\" \\) -and -type l \\) -printf \"%l\\n\" >> cscope.files\n\nBut this seems like a terrible solution. Still looking for a better one\n", "I think you can use the command to find all real paths in a folder that you searched \nfind -L [your searched folder] -name [your searched pattern] -exec realpath {} \\; >> cscope.files\n\nFor example, if I would like to add developed folder and linux kernel header to cscope.files, I will the these commands:\nfind -L `pwd` -iname \"*.c\" -o -iname \"*.h\" > cscope.files\nfind -L /usr/src/linux-headers-3.19.0-15-generic/ -iname '*.h' -exec realpath {} \\; >> cscope.files\n\nI hope the answer can help you.\n", "For example if you want to give / as your path for cscope, and want cscope to search files with extensions .c/.h/.x/.s/.S you can give the find command as:\nfind / -type f -name \"*.[chxsS]\" -print -exec readlink -f {} \\;> cscope.files\n\nThis will include regular files, including targets of symbolic links.\n", "I just do the following to avoid symbolic links, as well get the absolute path in the cscope.files. With absolute path you can search from any directory in your sandbox when cscope is integrated with the vim editor\nfind /\"path-to-your-sandbox\" -path .git -prune -o -name \"*.[ch]\" -exec readlink -f {} \\; > cscope.files\n\nNote: if you omit -print from the find it does not put the symbolic link path in your cscope.files only the resolved path.\n", "Better in a bash script:\n#!/bin/bash\n#\n# find_cscope_files.sh\n\nextension_list=(c cpp cxx cc h hpp hxx hh)\nfor x in \"${extension_list[@]}\"; do\n find . -name \"*.$x\" -print -exec readlink -f {} \\;\ndone\n\n", "For reference for others I'm currently using.\nfind \"$(pwd)\" \\( -name \"*.[chCS]\" -o -name \"*.[ch][ci]\" -o -name \"*.[ch]pp\" -o -name \"*.[ch]++\" -o -name \"*.[ch]xx\" ) -not \\( -ipath \"*unittest*\" -or -ipath \"*regress*\" \\) \\( \\( -type l -xtype f -exec readlink -f {} \\; \\) -o \\( -type f -print \\) \\) >cscope.files\ncscope -q -R -b -i cscope.files\n\n" ]
[ 6, 3, 3, 1, 0, 0 ]
[]
[]
[ "bash", "cscope", "linux", "symlink", "vim" ]
stackoverflow_0029518681_bash_cscope_linux_symlink_vim.txt
Q: Launching app through startActivity not working from Service I'm using a Service (BlockAppsService) that checks which app is in the foreground. When certain apps are in the foreground, I want to launch the home application on the device and show a Toast message to the user. Everything is working fine, the Toast is displayed, but launching the home app is not working. The relevant code from my BlockAppsService class: private boolean blockApp(AppCacheInfo appInfo, String packageInForeground) { String toastMessage = appInfo == null ? getString(R.string.this_app) : appInfo.getLabel() + " " + getString(R.string.toast_error_distracting_app); postToastOnMainThread(toastMessage); launchHome(); } private void postToastOnMainThread(String toastMessage) { // Post the toast on the UI thread. getMainHandler().post(() -> { Utils.showToast(getApplicationContext(), toastMessage, Toast.LENGTH_SHORT); }); } private void launchHome() { Intent startMain = new Intent(Intent.ACTION_MAIN); startMain.addCategory(Intent.CATEGORY_HOME); startMain.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); startActivity(startMain); } private Handler getMainHandler() { if (mainHandler == null) { mainHandler = new Handler(getMainLooper()); } return mainHandler; } Launching, for example, my own MainActivity instead of the home app also doesn't work. The interesting thing is that when I open the Android settings and I block it through these methods, it is working. It's just that with other apps, it doesn't: I don't get any exceptions, the home app just doesn't launch through startActivity(). Does anyone know what's going on here? Many thanks! A: I think your problem is that you do not have permission to appear on top. Try adding this to your main activity; it will open the settings window where the user can allow your app to appear on top if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) { if (!Settings.canDrawOverlays(this)) { Intent intent = new Intent(Settings.ACTION_MANAGE_OVERLAY_PERMISSION, Uri.parse("package:" + getPackageName())); startActivityForResult(intent, 1); } } Hopefully this works A: If you only want to launch an activity from your own app, you can use the technique detailed here: https://developer.android.com/develop/ui/views/notifications/navigation. The pending intent of the notification should look like this: Intent notifyIntent = new Intent(this, ResultActivity.class); // Set the Activity to start in a new, empty task notifyIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK); // Create the PendingIntent PendingIntent notifyPendingIntent = PendingIntent.getActivity( this, 0, notifyIntent, PendingIntent.FLAG_UPDATE_CURRENT | PendingIntent.FLAG_IMMUTABLE);
Launching app through startActivity not working from Service
I'm using a Service (BlockAppsService) that checks which app is in the foreground. When certain apps are in the foreground, I want to launch the home application on the device and show a Toast message to the user. Everything else works fine and the Toast is displayed, but launching the home app is not working. The relevant code from my BlockAppsService class:
private boolean blockApp(AppCacheInfo appInfo, String packageInForeground) {
    String toastMessage = appInfo == null ? getString(R.string.this_app) : appInfo.getLabel() + " " + getString(R.string.toast_error_distracting_app);

    postToastOnMainThread(toastMessage);

    launchHome();
}

private void postToastOnMainThread(String toastMessage) {
    // Post the toast on the UI thread.
    getMainHandler().post(() -> {
        Utils.showToast(getApplicationContext(), toastMessage, Toast.LENGTH_SHORT);
    });
}

private void launchHome() {
    Intent startMain = new Intent(Intent.ACTION_MAIN);
    startMain.addCategory(Intent.CATEGORY_HOME);
    startMain.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    startActivity(startMain);
}

private Handler getMainHandler() {
    if (mainHandler == null) {
        mainHandler = new Handler(getMainLooper());
    }
    return mainHandler;
}

Launching, for example, my own MainActivity instead of the home app also doesn't work. The interesting thing is that when I open the Android settings and block it through these methods, it works. It's just that with other apps it doesn't: I don't get any exceptions, the home app just doesn't launch through startActivity(). Does anyone know what's going on here? Many thanks!
[ "I think your problem is that you do not have permission to appear on top.\nTry adding this to your main activity, it will open the settings window the user needs to allow you to appear on top\nif (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {\n if (!Settings.canDrawOverlays(this)) {\n Intent intent = new Intent(Settings.ACTION_MANAGE_OVERLAY_PERMISSION, Uri.parse(\"package:\" + getPackageName()));\n startActivityForResult(intent, 1);\n }\n}\n\nHopefully this works\n", "if you only want to launch an activity from your own app you can use the technique details here: https://developer.android.com/develop/ui/views/notifications/navigation.\nThe pending intent of the notification should look like this:\nIntent notifyIntent = new Intent(this, ResultActivity.class);\n// Set the Activity to start in a new, empty task\nnotifyIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK);\n// Create the PendingIntent\nPendingIntent notifyPendingIntent = PendingIntent.getActivity(\n this, 0, notifyIntent,\n PendingIntent.FLAG_UPDATE_CURRENT | PendingIntent.FLAG_IMMUTABLE);\n\n" ]
[ 1, 0 ]
[]
[]
[ "android", "android_launcher", "android_service", "start_activity" ]
stackoverflow_0067122356_android_android_launcher_android_service_start_activity.txt
Q: npm install hangs This is my package.json: { "name": "my-example-app", "version": "0.1.0", "dependencies": { "request": "*", "nano": "3.3.x", "async": "~0.2" } } Now, when I open the cmd and run npm install, the install hangs. What am I doing wrong? A: I had the same problem. The reason - wrong proxy was configured and because of that npm was unable to download packages. So your best bet is to the see the output of $ npm install --verbose and identify the problem. If you have never configured proxy, then possible causes can be Very outdated npm version. Some problem with your internet connection. Permissions are not sufficient for npm to modify files. A: I was having the same problem. I tried a npm config set registry http://registry.npmjs.org/ to turn off https. I also tried npm set progress=false to turn off the progress bar (it has been reported to slow down downloads). The problem was with my network driver. I just needed to reboot and the lag went away. A: You can try deleting package-lock.json and running npm install afterwards. This worked for me. A: I had the same issue on macOS, after some time struggling and searching around, this answer actually solved the issue for me: npm config rm proxy npm config rm https-proxy npm config set registry http://registry.npmjs.org/ A: Updating npm helped me on Mac OS. Use the command: sudo npm install -g npm@latest A: I am behind a corporate proxy, so I usually use an intermediate proxy to enable NTLM authentication. I had hangs problem with npm install when using CNTLM proxy. With NTLM-APS (a similar proxy) the hangs were gone. A: On MacOS, I was able to solve this by networksetup -setv6off Wi-Fi After installing, you can revert to the original configuration with networksetup -setv6automatic Wi-Fi A: While your mileage may vary, running npm cache verify fixed the issue for me. A: npm cache clear --force has fixed this issue for me in the past. Furthermore, when running npm install on an air-gapped network (by the way, I provide a description about how to do this with Verdaccio), I had an issue where the install would hang at the very end. Turning off auditing (i.e. npm set audit false) on the machine on the air-gapped network resolved this issue. A: It was strange but I guess I was just being impatient ran -> npm install --verbose and saw there was progress but it was just really slow!!! All I needed was patience :D A: When your ssh key is password protected run ssh-add. npm probably hangs somewhere asking for your password. A: Remove node_modules & package-lock.json from previous npm install and install again rm -rf node_modules package-lock.json npm install or If npm install loader is stuck and then pops up with.. npm ERR! code UNABLE_TO_GET_ISSUER_CERT_LOCALLY npm ERR! errno UNABLE_TO_GET_ISSUER_CERT_LOCALLY npm ERR! request to https://registry.npmjs.org/jest failed, reason: unable to get local issuer certificate" then, npm config set strict-ssl false npm install or Follow to uninstall Node.js and install properly https://www.geeksforgeeks.org/how-to-completely-remove-node-js-from-windows/ https://coding-boot-camp.github.io/full-stack/nodejs/how-to-install-nodejs I personally had this issue and did all the steps I listed above. My issue was fixed with npm config set strict-ssl false A: The registry(https://registry.npmjs.org/cordova) was blocked by our firewall. Unblocking it fixed the issue. 
A: Incase its useful to others, the following is what worked for me: On my machine, although npm proxy was set correctly, npm install waits forever doing something like sill extract. Re-trying npm install waits forever on the same package again and again. After waiting for a long timeout, npm install printed an error message implying that git was trying to fetch something. The problem vanished after configuring git proxy using the below command: git config --global http.proxy https://proxy-server:port Note the https in the value of http.proxy without which the configuration did not take effect. Proxy server settings (http / https / port) might vary for users; hence its worth spending a bit of time experimenting with npm and git proxy server settings. A: With due respect to all the answers, I switched to a different network and it worked for me. A: This method is working for me when npm blocks in installation Package for IONIC installation and ReactNative and another package npm. You can change temporary: npm config set prefix C:\Users\[username]\AppData\Roaming\npm\node_modules2 Change the path in environment variables. Set: C:\Users[username]\AppData\Roaming\npm\node_modules2 Run the command to install your package. Open file explorer, copy the link: C:\Users[username]\AppData\Roaming\npm\node_modules ok file yourpackage.CMD created another folder Created "node_modules2" in node_modules and contain your package folder. Copy your package file CMD to parent folder "npm". Copy your package folder to parent folder "node_modules". Now run: npm config set prefix C:\Users\[username]\AppData\Roaming\npm Change the path in environment variables. Set: C:\Users[username]\AppData\Roaming\npm Now the package is working correctly with the command line. A: I'm not sure if your problem is being caused by the same reason that mine was, but I too was experiencing a hanging "npm install" and was able to fix it. In my case, I wanted to install typescript locally in the project: npm i typescript --save-dev For some reason this was conflicting with a global install of typescript that I had, and the shell was just hanging forever instead of finishing or erroring... I fixing it by first removing the globally installed typescript with the -g global flag: npm uninstall typescript -g After doing this the first command worked! A: I had npm hanging on installation of electronjs on Windows 10. I reinstalled and still it was hanging. But I noticed it got installed on another desktop in the same network. So finally I found the culprit. The issue was caused by Bitdefender free edition. There was no warning by the antivirus but it was blocking it silently. Even the console was not closing once the installation starts as it kept hanging. Disable antivirus/firewall if its on Windows and make sure network is open as npm does not seem to have a proper way of communicating network blocks and will keep proceeding indefinitely. A: I've hit this problem a couple times. When I was on VPN, I pressed Ctrl-C and disconnected from the VPN. Then npm install worked. When I wasn't on VPN, I pressed Ctrl-C and connected to the VPN. Then, again, npm install worked. 
A: For anyone on MacOS (I'm on Mojave 10.14), the following helped me out: https://github.com/reactioncommerce/reaction/issues/1938#issuecomment-284207213 You'd run these commands echo kern.maxfiles=65536 | sudo tee -a /etc/sysctl.conf echo kern.maxfilesperproc=65536 | sudo tee -a /etc/sysctl.conf sudo sysctl -w kern.maxfiles=65536 sudo sysctl -w kern.maxfilesperproc=65536 ulimit -n 65536 Then try npm install once more. A: check your environment variables for http and https The existing entries might be creating some issues. Try deleting those entries. Run "npm install" again. A: I just turn off my windows firewall and it worked for me. You can also try different versions of npm. A: Check your .npmrc file for a registry entry (which identifies a server acting as a package cache.) For me, npm install would hang partway through, and it was because of a old / non-responsive server listed in my .npmrc file. Remove the line or comment it out: >cat ~/.npmrc #registry=http://oldserver:4873 (And/or check with your IT / project lead as to why it's not working ;) A: install nvm (Node Version Manager) and downgrade node version from 14 to 12 solved the issue in my case A: Uninstalling and installing node and npm worked for me. I'm using Ubuntu 20.04.1 LTS A: In my case npm install was hanging because it was waiting for me to input a password to my ssh key while cloning from git repository. There was no prompt and I realized this might be the case when I typed random character and nothing was echoed back. In my case I had to look at package.json file and clone locally repositories listed there. Then I updated package.json and changed paths of those git repositories to my local paths. After doing this everything else was installed without further errors. A: On windows i suddenly had the same issue and tried all of the above, but the final solution for me was to switch off the ransomware protection which I had activated. It somehow doesn´t go well along with npm A: I was having this error because I was running npm in a (docker) container in WSL2, and docker in WSL2 was configuring the wrong nameservers in the containers, making the container unable to resolve hosts. To see if your container (or even your host) can resolve hosts, you can try running: curl https://github.com. In my case I received curl: (6) Could not resolve host: github.com. The error in the docker container doesn't happen if I don't use the default bridge, instead I used a custom bridge and defined the container with it, in which case the resolv.conf file ends up with the correct nameserver: $ cat /etc/resolv.conf nameserver 127.0.0.11 options ndots:0 The ip 127.0.0.11 corresponds to the docker DNS server, solving the problem in my case. If you aren't running npm in a container, your issue may still be related to some misconfigured resolv.conf file (if you are in a Linux machine, or in Windows with WSL/WSL2). A: In case anyone else encounters this, I left the npm install to run for long enough, and then the Jest extension crashed (v4.2.1), at which point the npm install completed successfully. The Jest configuration seems to show that a test auto-watch feature was enabled. I haven't changed any Jest settings as far as I'm aware, so this must be out-of-the-box functionality. A: I had same issue when installing legacy version of vue tools (4.1.5). Downgrading node to node 10 worked for me. 
A: Mine was hanging when I was trying to install latest version of react-router-dom, I just stopped server from running and then tried installing and it worked. A: In my case it was freezing while calling reify. I downgraded from node 16 to node 14 and everything worked perfectly. A: Surprisingly just restarting my computer and running npm install again worked for me
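A: Before trying fixes at random, a few read-only diagnostics narrow down which of the situations above applies; this is just a sketch of the usual triage order:
npm config get proxy
npm config get https-proxy
npm config get registry
npm install --verbose
npm cache verify

If the proxy values are stale or the registry points at a dead mirror, clear them and retry:
npm config delete proxy
npm config delete https-proxy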
npm install hangs
This is my package.json: { "name": "my-example-app", "version": "0.1.0", "dependencies": { "request": "*", "nano": "3.3.x", "async": "~0.2" } } Now, when I open the cmd and run npm install, the install hangs. What am I doing wrong?
[ "I had the same problem. The reason - wrong proxy was configured and because of that npm was unable to download packages.\nSo your best bet is to the see the output of\n$ npm install --verbose\n\nand identify the problem. If you have never configured proxy, then possible causes can be\n\nVery outdated npm version.\nSome problem with your internet connection.\nPermissions are not sufficient for npm to modify files.\n\n", "I was having the same problem. I tried a \nnpm config set registry http://registry.npmjs.org/\n\nto turn off https. I also tried\nnpm set progress=false \n\nto turn off the progress bar (it has been reported to slow down downloads).\nThe problem was with my network driver. I just needed to reboot and the lag went away.\n", "You can try deleting package-lock.json and running npm install afterwards.\nThis worked for me.\n", "I had the same issue on macOS, after some time struggling and searching around, this answer actually solved the issue for me:\nnpm config rm proxy\nnpm config rm https-proxy\nnpm config set registry http://registry.npmjs.org/\n\n", "Updating npm helped me on Mac OS. Use the command:\nsudo npm install -g npm@latest\n\n", "I am behind a corporate proxy, so I usually use an intermediate proxy to enable NTLM authentication.\nI had hangs problem with npm install when using CNTLM proxy. With NTLM-APS (a similar proxy) the hangs were gone.\n", "On MacOS, I was able to solve this by\nnetworksetup -setv6off Wi-Fi\n\nAfter installing, you can revert to the original configuration with\nnetworksetup -setv6automatic Wi-Fi\n\n", "While your mileage may vary, running npm cache verify fixed the issue for me.\n", "npm cache clear --force has fixed this issue for me in the past.\nFurthermore, when running npm install on an air-gapped network (by the way, I provide a description about how to do this with Verdaccio), I had an issue where the install would hang at the very end. Turning off auditing (i.e. npm set audit false) on the machine on the air-gapped network resolved this issue.\n", "It was strange but I guess I was just being impatient ran -> npm install --verbose and saw there was progress but it was just really slow!!! All I needed was patience :D\n", "When your ssh key is password protected run ssh-add. npm probably hangs somewhere asking for your password.\n", "Remove node_modules & package-lock.json from previous npm install and install again\nrm -rf node_modules package-lock.json\nnpm install\n\nor\nIf npm install loader is stuck and then pops up with..\n\nnpm ERR! code UNABLE_TO_GET_ISSUER_CERT_LOCALLY\nnpm ERR! errno UNABLE_TO_GET_ISSUER_CERT_LOCALLY\nnpm ERR! request to https://registry.npmjs.org/jest failed, reason: unable to get local issuer certificate\"\n\nthen,\nnpm config set strict-ssl false\nnpm install\n\nor\nFollow to uninstall Node.js and install properly\nhttps://www.geeksforgeeks.org/how-to-completely-remove-node-js-from-windows/\nhttps://coding-boot-camp.github.io/full-stack/nodejs/how-to-install-nodejs\nI personally had this issue and did all the steps I listed above. My issue was fixed with npm config set strict-ssl false\n", "The registry(https://registry.npmjs.org/cordova) was blocked by our firewall. Unblocking it fixed the issue.\n", "Incase its useful to others, the following is what worked for me:\nOn my machine, although npm proxy was set correctly, npm install waits forever doing something like sill extract. Re-trying npm install waits forever on the same package again and again. 
\nAfter waiting for a long timeout, npm install printed an error message implying that git was trying to fetch something. \nThe problem vanished after configuring git proxy using the below command:\ngit config --global http.proxy https://proxy-server:port\n\nNote the https in the value of http.proxy without which the configuration did not take effect. Proxy server settings (http / https / port) might vary for users; hence its worth spending a bit of time experimenting with npm and git proxy server settings.\n", "With due respect to all the answers, I switched to a different network and it worked for me.\n", "This method is working for me when npm blocks in installation Package for IONIC installation and ReactNative and another package npm.\nYou can change temporary:\nnpm config set prefix C:\\Users\\[username]\\AppData\\Roaming\\npm\\node_modules2 \n\nChange the path in environment variables. Set:\n\nC:\\Users[username]\\AppData\\Roaming\\npm\\node_modules2\n\nRun the command to install your package.\nOpen file explorer, copy the link:\n\nC:\\Users[username]\\AppData\\Roaming\\npm\\node_modules\n\nok file yourpackage.CMD created another folder Created \"node_modules2\" in node_modules and contain your package folder.\nCopy your package file CMD to parent folder \"npm\".\nCopy your package folder to parent folder \"node_modules\".\nNow run:\nnpm config set prefix C:\\Users\\[username]\\AppData\\Roaming\\npm\nChange the path in environment variables. Set:\n\nC:\\Users[username]\\AppData\\Roaming\\npm\n\n\nNow the package is working correctly with the command line.\n", "I'm not sure if your problem is being caused by the same reason that mine was, but I too was experiencing a hanging \"npm install\" and was able to fix it. \nIn my case, I wanted to install typescript locally in the project:\nnpm i typescript --save-dev\n\nFor some reason this was conflicting with a global install of typescript that I had, and the shell was just hanging forever instead of finishing or erroring...\nI fixing it by first removing the globally installed typescript with the -g global flag:\nnpm uninstall typescript -g\n\nAfter doing this the first command worked! \n", "I had npm hanging on installation of electronjs on Windows 10. I reinstalled and still it was hanging. But I noticed it got installed on another desktop in the same network. So finally I found the culprit. The issue was caused by Bitdefender free edition. There was no warning by the antivirus but it was blocking it silently. Even the console was not closing once the installation starts as it kept hanging. Disable antivirus/firewall if its on Windows and make sure network is open as npm does not seem to have a proper way of communicating network blocks and will keep proceeding indefinitely.\n", "I've hit this problem a couple times.\n\nWhen I was on VPN, I pressed Ctrl-C and disconnected from the VPN. Then npm install worked.\nWhen I wasn't on VPN, I pressed Ctrl-C and connected to the VPN. 
Then, again, npm install worked.\n\n", "For anyone on MacOS (I'm on Mojave 10.14), the following helped me out:\nhttps://github.com/reactioncommerce/reaction/issues/1938#issuecomment-284207213\nYou'd run these commands\necho kern.maxfiles=65536 | sudo tee -a /etc/sysctl.conf\necho kern.maxfilesperproc=65536 | sudo tee -a /etc/sysctl.conf\nsudo sysctl -w kern.maxfiles=65536\nsudo sysctl -w kern.maxfilesperproc=65536\nulimit -n 65536\n\nThen try npm install once more.\n", "check your environment variables for http and https\nThe existing entries might be creating some issues. Try deleting those entries.\nRun \"npm install\" again.\n", "I just turn off my windows firewall and it worked for me.\nYou can also try different versions of npm.\n", "Check your .npmrc file for a registry entry (which identifies a server acting as a package cache.)\nFor me, npm install would hang partway through, and it was because of a old / non-responsive server listed in my .npmrc file. Remove the line or comment it out:\n>cat ~/.npmrc\n#registry=http://oldserver:4873\n\n(And/or check with your IT / project lead as to why it's not working ;)\n", "install nvm (Node Version Manager) and downgrade node version from 14 to 12 solved the issue in my case\n", "Uninstalling and installing node and npm worked for me. I'm using Ubuntu 20.04.1 LTS\n", "In my case npm install was hanging because it was waiting for me to input a password to my ssh key while cloning from git repository. There was no prompt and I realized this might be the case when I typed random character and nothing was echoed back. In my case I had to look at package.json file and clone locally repositories listed there. Then I updated package.json and changed paths of those git repositories to my local paths. After doing this everything else was installed without further errors.\n", "On windows i suddenly had the same issue and tried all of the above, but the final solution for me was to switch off the ransomware protection which I had activated. It somehow doesn´t go well along with npm \n", "I was having this error because I was running npm in a (docker) container in WSL2, and docker in WSL2 was configuring the wrong nameservers in the containers, making the container unable to resolve hosts.\nTo see if your container (or even your host) can resolve hosts, you can try running: curl https://github.com. In my case I received curl: (6) Could not resolve host: github.com.\nThe error in the docker container doesn't happen if I don't use the default bridge, instead I used a custom bridge and defined the container with it, in which case the resolv.conf file ends up with the correct nameserver:\n$ cat /etc/resolv.conf \nnameserver 127.0.0.11\noptions ndots:0\n\nThe ip 127.0.0.11 corresponds to the docker DNS server, solving the problem in my case.\nIf you aren't running npm in a container, your issue may still be related to some misconfigured resolv.conf file (if you are in a Linux machine, or in Windows with WSL/WSL2).\n", "In case anyone else encounters this, I left the npm install to run for long enough, and then the Jest extension crashed (v4.2.1), at which point the npm install completed successfully.\nThe Jest configuration seems to show that a test auto-watch feature was enabled. 
I haven't changed any Jest settings as far as I'm aware, so this must be out-of-the-box functionality.\n", "I had same issue when installing legacy version of vue tools (4.1.5).\nDowngrading node to node 10 worked for me.\n", "Mine was hanging when I was trying to install latest version of react-router-dom, I just stopped server from running and then tried installing and it worked.\n", "In my case it was freezing while calling reify.\nI downgraded from node 16 to node 14 and everything worked perfectly.\n", "Surprisingly just restarting my computer and running npm install again worked for me\n" ]
[ 181, 51, 46, 13, 10, 9, 9, 7, 7, 6, 4, 4, 3, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "*Try doing sudo npm install.\n*If you're using github then it could be you don't have permission and need to generated a public SSH key and add it to your settings account: (https://help.github.com/articles/generating-ssh-keys/)\n" ]
[ -40 ]
[ "node.js", "npm" ]
stackoverflow_0016873973_node.js_npm.txt
Q: Cython Buffer types only allowed as function local variables I create a function that take x, y, batch size as input and yield mini batch as output with cython to sped up the process. import numpy as np cimport cython cimport numpy as np ctypedef np.float64_t DTYPE_t @cython.boundscheck(False) def create_mini_batches(np.ndarray[DTYPE_t, ndim=2] X, np.ndarray[DTYPE_t, ndim=2] y, int batch_size): cdef int m cdef double num_of_batch cdef np.ndarray[DTYPE_t, ndim=2] shuffle_X cdef np.ndarray[DTYPE_t, ndim=2] shuffle_y cdef int permutation X, y = X.T, y.T m = X.shape[0] num_of_batch = m // batch_size permutation = list(np.random.permutation(m)) shuffle_X = X[permutation, :] shuffle_y = y[permutation, :] for t in range(num_of_batch): mini_x = shuffle_X[t * batch_size: (t + 1) * batch_size, :] mini_y = shuffle_y[t * batch_size: (t + 1) * batch_size, :] yield (mini_x.T, mini_y.T) if m % batch_size != 0: mini_x = shuffle_X[m // batch_size * batch_size: , :] mini_y = shuffle_y[m // batch_size * batch_size: , :] yield (mini_x.T, mini_y.T) When I compile the program with this code python setup.py build_ext --inplace the following error showed up. @cython.boundscheck(False) def create_mini_batches(np.ndarray\[DTYPE_t, ndim=2\] X, np.ndarray\[DTYPE_t, ndim=2\] y, int batch_size): ^ test.pyx:8:24: Buffer types only allowed as function local variables Can someone help me how to solved the error and why it is a error? A: It's a sightly confusing error message in this case but you're getting it because it's a generator rather than a function. This means that Cython has to create an internal data structure to hold the generator state while it works. Typed Numpy array variables (e.g. np.ndarray[DTYPE_t, ndim=2]) were implemented in a way where it's very hard to handle their reference counting correctly. Therefore Cython can only handle them as variables in a regular function. It cannot store them in a class, and thus cannot use them in a generator. To solve it your either need to drop the typing, or you should switch to the more recent typed memoryviews which were designed better so don't have this limitation.
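A: To make the memoryview suggestion concrete, here is the same generator rewritten with typed memoryviews; it compiles because memoryviews, unlike np.ndarray buffer types, may live in the generator's state object. A sketch assuming C-contiguous float64 inputs; the shuffling goes through np.asarray because memoryviews support slicing but not indexing with a permutation array:
import numpy as np
cimport cython

@cython.boundscheck(False)
def create_mini_batches(double[:, ::1] X, double[:, ::1] y, int batch_size):
    cdef int m, t, num_of_batch
    # Fancy indexing below needs ndarray views of the memoryviews.
    X_t = np.asarray(X).T
    y_t = np.asarray(y).T
    m = X_t.shape[0]
    num_of_batch = m // batch_size
    permutation = np.random.permutation(m)
    shuffle_X = X_t[permutation, :]
    shuffle_y = y_t[permutation, :]
    for t in range(num_of_batch):
        yield (shuffle_X[t * batch_size:(t + 1) * batch_size, :].T,
               shuffle_y[t * batch_size:(t + 1) * batch_size, :].T)
    if m % batch_size != 0:
        yield (shuffle_X[num_of_batch * batch_size:, :].T,
               shuffle_y[num_of_batch * batch_size:, :].T)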
Cython Buffer types only allowed as function local variables
I created a function that takes x, y, and batch size as input and yields mini batches as output, using Cython to speed up the process.
import numpy as np
cimport cython
cimport numpy as np
ctypedef np.float64_t DTYPE_t

@cython.boundscheck(False)
def create_mini_batches(np.ndarray[DTYPE_t, ndim=2] X, np.ndarray[DTYPE_t, ndim=2] y, int batch_size):
    
    cdef int m
    cdef double num_of_batch
    cdef np.ndarray[DTYPE_t, ndim=2] shuffle_X
    cdef np.ndarray[DTYPE_t, ndim=2] shuffle_y
    cdef int permutation
    
    X, y = X.T, y.T
    m = X.shape[0] 
    num_of_batch = m // batch_size
    
    permutation = list(np.random.permutation(m))
    shuffle_X = X[permutation, :]
    shuffle_y = y[permutation, :]
    
    for t in range(num_of_batch):
        
        mini_x = shuffle_X[t * batch_size: (t + 1) * batch_size, :]
        mini_y = shuffle_y[t * batch_size: (t + 1) * batch_size, :]
        yield (mini_x.T, mini_y.T)
        
    if m % batch_size != 0:
        mini_x = shuffle_X[m // batch_size * batch_size: , :]
        mini_y = shuffle_y[m // batch_size * batch_size: , :]
        yield (mini_x.T, mini_y.T)

When I compile the program with python setup.py build_ext --inplace, the following error shows up:
@cython.boundscheck(False)
def create_mini_batches(np.ndarray[DTYPE_t, ndim=2] X, np.ndarray[DTYPE_t, ndim=2] y, int batch_size):
                       ^
test.pyx:8:24: Buffer types only allowed as function local variables

Can someone help me solve the error and explain why it is an error?
[ "It's a sightly confusing error message in this case but you're getting it because it's a generator rather than a function. This means that Cython has to create an internal data structure to hold the generator state while it works.\nTyped Numpy array variables (e.g. np.ndarray[DTYPE_t, ndim=2]) were implemented in a way where it's very hard to handle their reference counting correctly. Therefore Cython can only handle them as variables in a regular function. It cannot store them in a class, and thus cannot use them in a generator.\nTo solve it your either need to drop the typing, or you should switch to the more recent typed memoryviews which were designed better so don't have this limitation.\n" ]
[ 0 ]
[]
[]
[ "cython", "numpy", "numpy_ndarray", "python" ]
stackoverflow_0074673759_cython_numpy_numpy_ndarray_python.txt
Q: Sumologic sending alerts to SLACK I tried to send alerts from Sumologic to Slack, but when I test the connection, it always fails and returns an HTTP 400 code. I used the connection type Webhook. When I test the connection, it should pass. A: If you are using Webhook and test the connection, you must use a valid payload. If you don't provide a valid payload, the connection test will fail. You can use the connection type Slack instead of a plain Webhook; it still uses the webhook URL. This link shows, step by step, how to integrate Sumologic with Slack: https://www.youtube.com/watch?v=qEz8dcp9SgI
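A: For reference, the 400 usually means Slack rejected the body of the test request. With a generic Webhook connection you have to fill in the payload yourself, and Slack's incoming webhooks only accept JSON with a renderable field such as text; anything else comes back as 400 invalid_payload. A minimal payload sketch that should pass the test ({{SearchName}} is meant to be a Sumologic webhook variable; the exact variable names are an assumption here, so check the variable list in the connection dialog):
{
  "text": "Sumo Logic alert fired: {{SearchName}}"
}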
Sumologic sending alerts to SLACK
I tried to send alerts from Sumologic to Slack, but when I test the connection, it always fails and returns an HTTP 400 code. I used the connection type Webhook. When I test the connection, it should pass.
[ "If you are using WebHook and test the connection, you must use valid payload. If you don't provide valid payload, then connection test will be failed.\nYou can use the connection type as SLACK over WebHook. Still you are using webhook URL.\nThis links show the step by step, how to integrate Sumologic with Slack.\nhttps://www.youtube.com/watch?v=qEz8dcp9SgI\n" ]
[ 1 ]
[]
[]
[ "slack", "sumologic" ]
stackoverflow_0074673104_slack_sumologic.txt
Q: Angular migration 12 to 14 running nx serve failed After migrating the Angular version from 12 to 14 according to this guide, I can't run my app. After a successful build, I'm running this command: nx serve <my app name> and nothing happens. What am I missing? A: Choose the Nx version according to the Angular version you want to upgrade to, per this guide: https://nx.dev/angular-nx-version-matrix Run the command: nx migrate 14.4.3 (14.4.3 is the Nx version that upgrades Angular to v14), then run: nx migrate --run-migrations and then: nx serve <my app name> If you get the error "NX spawn ENAMETOOLONG" or "NX Cannot read properties of undefined (reading 'endsWith')": step 1: add .angular to .gitignore step 2: commit the changes That worked for me.
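A: The whole sequence as one sketch, assuming 14.4.3 really is the Nx release the matrix maps to Angular 14 for your workspace (substitute whatever the matrix gives you); the npm install between the two migrate steps matters, because nx migrate first only rewrites package.json, and the migrations then run against the freshly installed packages:
nx migrate 14.4.3
npm install
nx migrate --run-migrations
nx serve <my app name>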
Angular migration 12 to 14 running nx serve failed
After migrating the Angular version from 12 to 14 according to this guide, I can't run my app. After a successful build, I'm running this command: nx serve <my app name> and nothing happens. What am I missing?
[ "choose nx version according to angular version that you want to upgrade in this guide:\nhttps://nx.dev/angular-nx-version-matrix\nrun command:\nnx migrate 14.4.3\n(14.4.3 is nx version that upgrade to angular to v14)\nthan run:\nnx migrate --run-migrations\nand than:\nnx serve <my app name>\n\nif you have this err: \"NX spawn ENAMETOOLONG\" or \"NX Cannot read properties of undefined (reading 'endsWith')\"\n\nstep 1: add .angular to .gitignore\nstep 2: commit the changes\nthat work for me.\n" ]
[ 0 ]
[]
[]
[ "angular", "angular14", "migration", "nomachine_nx", "nrwl_nx" ]
stackoverflow_0074615722_angular_angular14_migration_nomachine_nx_nrwl_nx.txt
Q: I'm trying to use the meta tag maximum-scale so the user can't enlarge the size of the viewport I'm trying to use the meta tag maximum-scale so the user can't enlarge the size of the viewport. My code is down below. However, if I open this file with Chrome, I can still use the mouse wheel to enlarge the viewport up to 500%, or reduce it down to 25%. How should I fix this problem? code: <!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no">
    <style>
        * {
            margin: 0px;
            padding: 0px;
            box-sizing: border-box;
        }
        
        #container {
            width: 100vw;
            height: 100vh;
            background: #F7F7F7;
        }
        
        #header {
            width: 100%;
            height: 60px;
            background: #D1D1D1;
        }
    </style>
</head>
<body>
    <div id="container">
        <div id="header">
        </div>
    </div>
</body>
</html>

A: From what I have understood reading this, it works on devices that render pages in a virtual window or viewport, like mobile screens, and your code works fine. I tried it in Chrome dev tools, setting the device to "iPhone XR" and emulating zooming by holding Shift + mouse click and dragging across the viewport; I haven't been able to zoom in or out.
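A: Note that the viewport meta tag only constrains pinch-zoom on mobile browsers; desktop Chrome's Ctrl+wheel and menu zoom ignore it entirely, which is why the 25%-500% range is still reachable. If suppressing the desktop gestures is really required, the closest approximation is cancelling them in script (a sketch; zoom driven from the browser menu cannot be blocked by a page):
<script>
  window.addEventListener('wheel', (e) => {
    if (e.ctrlKey) e.preventDefault(); // Ctrl+wheel is the zoom gesture
  }, { passive: false });              // passive: false permits preventDefault
  window.addEventListener('keydown', (e) => {
    if (e.ctrlKey && ['+', '-', '=', '0'].includes(e.key)) e.preventDefault();
  });
</script>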
I'm trying to use the meta tag maximum-scale so the user can't enlarge the size of the viewport
I'm trying to use the meta tag maximum-scale so the user can't enlarge the size of the viewport. My code is down below. However, if I open this file with Chrome, I can still use the mouse wheel to enlarge the viewport up to 500%, or reduce it down to 25%. How should I fix this problem? code: <!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no">
    <style>
        * {
            margin: 0px;
            padding: 0px;
            box-sizing: border-box;
        }
        
        #container {
            width: 100vw;
            height: 100vh;
            background: #F7F7F7;
        }
        
        #header {
            width: 100%;
            height: 60px;
            background: #D1D1D1;
        }
    </style>
</head>
<body>
    <div id="container">
        <div id="header">
        </div>
    </div>
</body>
</html>
[ "from what I have understood reading this it works on devices that render pages in a virtual window or viewport like mobile screens, and your code works fine, I tried it in chrome dev tools setting the device to \"Iphne XR\" and emulating zooming by holding Shift + mouse click and drag across the viewport\nI haven't been able to zoom in or out.\n" ]
[ 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074673794_css_html.txt
Q: Ansible loop with array Could someone let me know how we can create a code as below? - name: TEST1 set_fact: list_a: "{{ list_a + [item.json.SearchResult.resources] }}" with_items: - "{{ source_list.results[0] }}" - "{{ source_list.results[1] }}" - "{{ source_list.results[x] }}" ... (unknown how many items in result from API) vars: list_a: [] source_list.results[x] comes from an API result. The reason why I need to create an array is that the number of API result is maximum 100. But there are over 500 items. A: Note: since we have no idea what you initial data structure looks like exactly, the below might not be 100% fitting your use case. For your next questions, please read How to ask and pay attention to the Minimal, complete and reproducible example section. Thanks You are taking this the wrong way. Simply extract the attribute you need from each result using the map(attribute=x) Jinja2 filter. For the below I inferred (see above note) that: you called your API with ansible.builtin.uri in a loop to get batches of 100 results which are returned as a list in the SearchResult.ressources field you want in the end a flattened list where all resources are at top level - name: Show my list of single attributes ansible.builtin.debug: var: "source_list.results | map(attribute='json.SearchResult.resources') | flatten" You actually don't need to set_fact: For a single use, just use the above expression directly in the relevant parameter (e.g. loop or a module param....) or eventually declare this in a var at task level. If you want to reuse this in different parts of your playbook, just declare a var at play level and expand it anywhere once you have called your API and populated the source_list var. In that case, just add a default value to prevent an error if API was not yet called. Example for the second case above in this pseudo playbook --- - hosts: localhost gather_facts: false vars: list_a: "{{ source_list.results | d([]) | map(attribute='json.SearchResult.resources') | flatten }}" tasks: - name: "This will return an empty list (i.e. []) as we did not populate source_list yet" ansible.builtin.debug: var: list_a - name: Call our API and register source_list ansible.builtin.uri: uri: https://my.api.com/api/v1/some/endpoint # [... more parameters here ... ] loop: "{{ my_list_of_ressources }}" register: source_list - name: "This will now return a populated list after calling the API and registering source_list" ansible.builtin.debug: var: list_a Now, to still give a direct answer to your initial question: you can construct that list iteratively inside a set_fact task. This is definitely not efficient as it involves a task running inside a loop (both unneeded as demonstrated above) and possibly on multiple hosts in your play. But for learning purpose, here it is: - name: very inefficient way to get the same result as above set_fact: list_a: "{{ list_a | d([]) + item.SearchResult.resources }}" loop: "{{ source_list.results }}"
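A: If the 100-item cap means the API has to be called once per page, the same map/flatten pattern still applies; you just register the loop over pages. A sketch, where the endpoint, the offset/limit query parameters, and the total of 500 items are assumptions for illustration:
- name: Fetch pages of at most 100 results each
  ansible.builtin.uri:
    url: "https://my.api.example/search?offset={{ item }}&limit=100"
    return_content: true
  register: source_list
  loop: "{{ range(0, 500, 100) | list }}"

- name: Flatten every page into a single list
  ansible.builtin.set_fact:
    list_a: "{{ source_list.results
      | map(attribute='json.SearchResult.resources') | flatten }}"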
Ansible loop with array
Could someone let me know how we can create code like the below? - name: TEST1 set_fact: list_a: "{{ list_a + [item.json.SearchResult.resources] }}" with_items: - "{{ source_list.results[0] }}" - "{{ source_list.results[1] }}" - "{{ source_list.results[x] }}" ... (unknown how many items in result from API) vars: list_a: [] source_list.results[x] comes from an API result. The reason why I need to create an array is that each API result contains a maximum of 100 items, but there are over 500 items in total.
[ "Note: since we have no idea what you initial data structure looks like exactly, the below might not be 100% fitting your use case. For your next questions, please read How to ask and pay attention to the Minimal, complete and reproducible example section. Thanks\n\nYou are taking this the wrong way. Simply extract the attribute you need from each result using the map(attribute=x) Jinja2 filter.\nFor the below I inferred (see above note) that:\n\nyou called your API with ansible.builtin.uri in a loop to get batches of 100 results which are returned as a list in the SearchResult.ressources field\nyou want in the end a flattened list where all resources are at top level\n\n- name: Show my list of single attributes\n ansible.builtin.debug:\n var: \"source_list.results\n | map(attribute='json.SearchResult.resources') | flatten\"\n\nYou actually don't need to set_fact:\n\nFor a single use, just use the above expression directly in the relevant parameter (e.g. loop or a module param....) or eventually declare this in a var at task level.\nIf you want to reuse this in different parts of your playbook, just declare a var at play level and expand it anywhere once you have called your API and populated the source_list var. In that case, just add a default value to prevent an error if API was not yet called.\n\nExample for the second case above in this pseudo playbook\n---\n- hosts: localhost\n gather_facts: false\n\n vars:\n list_a: \"{{ source_list.results | d([])\n | map(attribute='json.SearchResult.resources') | flatten }}\"\n\n tasks:\n - name: \"This will return an empty list (i.e. [])\n as we did not populate source_list yet\"\n ansible.builtin.debug:\n var: list_a\n\n - name: Call our API and register source_list\n ansible.builtin.uri:\n uri: https://my.api.com/api/v1/some/endpoint\n # [... more parameters here ... ]\n loop: \"{{ my_list_of_ressources }}\"\n register: source_list\n\n - name: \"This will now return a populated list\n after calling the API and registering source_list\"\n ansible.builtin.debug:\n var: list_a\n\n\nNow, to still give a direct answer to your initial question: you can construct that list iteratively inside a set_fact task. This is definitely not efficient as it involves a task running inside a loop (both unneeded as demonstrated above) and possibly on multiple hosts in your play. But for learning purpose, here it is:\n- name: very inefficient way to get the same result as above\n set_fact:\n list_a: \"{{ list_a | d([]) + item.SearchResult.resources }}\"\n loop: \"{{ source_list.results }}\"\n\n" ]
[ 0 ]
[]
[]
[ "ansible", "arrays", "json", "loops" ]
stackoverflow_0074671679_ansible_arrays_json_loops.txt
Q: Cannot get DropdownButton inside Dialog to update to new value on changed/selected Here is what I have. String selected = "ONE", final List<DropdownMenuItem<String>> types = [ DropdownMenuItem(value: "ONE", child: Text("ex1")), DropdownMenuItem(value: "TWO", child: Text("ex2")), ... ]; @override Widget build(context) => Scaffold( body: Column( ... TextButton( onPressed: () { context: context, builder: (context) => AlertDialog( content: SizedBox( ... DropdownButton( items: types, value: selected, onChanged: (String? value) { selected = value!; setState(() { selected; }); }) The widget is built as expected, but the dropdown menu does not update after a new value is selected. I know there are tons of similar questions out there, but the majority of solutions are make sure the selected equivalence is globally defined using setState() both of which I have tried, but can't seem to get it working. I can confirm that selected is being set equal to value, just not being reflected on UI. A: Use StatefulBuilder to update ui inside dialog. showDialog( context: context, builder: (context) => StatefulBuilder( builder: (BuildContext context, setStateSB) { return AlertDialog( content: DropdownButton( items: types, value: selected, onChanged: (String? value) { setStateSB(() { selected = value!; }); }), ); }, ), );
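A: The reason the plain setState attempts fail: showDialog pushes a new route, so the page's setState rebuilds the page but not the dialog's subtree; StatefulBuilder gives the dialog its own rebuild scope. A fuller sketch in context (the surrounding TextButton is assumed to live in the same State class that owns selected):
TextButton(
  child: const Text('Pick type'),
  onPressed: () => showDialog(
    context: context,
    builder: (context) => StatefulBuilder(
      // setStateSB rebuilds only the dialog's subtree.
      builder: (context, setStateSB) => AlertDialog(
        content: DropdownButton<String>(
          items: types,
          value: selected,
          onChanged: (String? value) {
            setStateSB(() => selected = value!);
            setState(() {}); // keep the page in sync as well
          },
        ),
      ),
    ),
  ),
)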
Cannot get DropdownButton inside Dialog to update to new value on changed/selected
Here is what I have.
String selected = "ONE";

final List<DropdownMenuItem<String>> types = [
  DropdownMenuItem(value: "ONE", child: Text("ex1")),
  DropdownMenuItem(value: "TWO", child: Text("ex2")),
  ...
];

@override
Widget build(context) => Scaffold(
  body: Column(
    ...
    TextButton(
      onPressed: () {
        showDialog(
          context: context,
          builder: (context) => AlertDialog(
            content: SizedBox(
              ...
              DropdownButton(
                items: types,
                value: selected,
                onChanged: (String? value) {
                  selected = value!;
                  setState(() {
                    selected;
                  });
                })

The widget is built as expected, but the dropdown menu does not update after a new value is selected. I know there are tons of similar questions out there, but the majority of solutions are either making sure selected is defined globally or calling setState(), both of which I have tried, but I can't seem to get it working. I can confirm that selected is being set equal to value, just not being reflected in the UI.
[ "Use StatefulBuilder to update ui inside dialog.\nshowDialog(\n context: context,\n builder: (context) => StatefulBuilder(\n builder: (BuildContext context, setStateSB) {\n return AlertDialog(\n content: DropdownButton(\n items: types,\n value: selected,\n onChanged: (String? value) {\n setStateSB(() {\n selected = value!;\n });\n }),\n );\n },\n ),\n);\n\n" ]
[ 0 ]
[]
[]
[ "dialog", "dropdownbutton", "flutter" ]
stackoverflow_0074673864_dialog_dropdownbutton_flutter.txt
Q: How to create 3D array with filled value along one dimension? It's easy to create a 2D array with filled values: import numpy as np np.full((5, 3), [1]) np.full((5, 3), [1, 2, 3]) Then, I want to create a 3D array with the same value repeated along the last two dimensions: import numpy as np np.full((2, 3, 1), [[1], [2]]) ''' # preferred result [[[1], [1], [1]] [[2], [2], [2]]] ''' However, I got this error: ValueError: could not broadcast input array from the shape (2,1) into shape (2,3,1) Does anyone know the correct way to use np.full() for a 3D array? A: In order to broadcast the value to the desired shape, the fill value needs shape (2, 1, 1) to match the output shape (2, 3, 1): np.full((2, 3, 1), [[[1]], [[2]]]) output: array([[[1], [1], [1]], [[2], [2], [2]]])
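A: Two equivalent spellings that avoid writing the nested brackets by hand; both give the per-row values the shape (2, 1, 1) so they broadcast against the requested (2, 3, 1):
import numpy as np

np.full((2, 3, 1), np.array([1, 2]).reshape(2, 1, 1))
np.broadcast_to(np.array([1, 2]).reshape(2, 1, 1), (2, 3, 1)).copy()

The .copy() on the second form matters because np.broadcast_to returns a read-only view.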
How to create 3D array with filled value along one dimension?
It's easy to create a 2D array with filled values:
import numpy as np
np.full((5, 3), [1])
np.full((5, 3), [1, 2, 3])

Then, I want to create a 3D array with the same value repeated along the last two dimensions:
import numpy as np
np.full((2, 3, 1), [[1], [2]])
'''
# preferred result
[[[1], [1], [1]]
 [[2], [2], [2]]]
'''

However, I got this error:
ValueError: could not broadcast input array from the shape (2,1) into shape (2,3,1)

Does anyone know the correct way to use np.full() for a 3D array?
[ "In order to boardcast the value to the desired shape, you require the value in shape (2, 1, 1) to match with the input shape (2, 3, 1)\nnp.full((2, 3, 1), [[[1]], [[2]]])\n\noutput:\narray([[[1],\n [1],\n [1]],\n\n [[2],\n [2],\n [2]]])\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "numpy", "numpy_ndarray", "python" ]
stackoverflow_0074673888_arrays_numpy_numpy_ndarray_python.txt
Q: Functions by file extension in Flutter I’m using image picker package. “https://pub.dev/packages/image_picker” // Get from gallery void ImgFromGallery() async { final pickedFile = await picker.pickImage(source: ImageSource.gallery); setState(() { if (pickedFile != null) { _proImage = File(pickedFile.path); List<int> imageBytes = _proImage!.readAsBytesSync(); image = base64Encode(imageBytes); print("_Proimage:$_proImage"); } else { print('No image selected.'); } }); } It works, but if the user chooses a .gif format from his gallery, I want to run a different function. Can i check extension for selected file? If yes how can i do that? I’m new on Flutter. A: File? _file; String _imagePath = ""; bool imageAccepted; takeImageFromGallery() async { XFile? image = await ImagePicker().pickImage(source: ImageSource.gallery); if (image!.path.endsWith("png")) { imageAccepted = true; } else if (image.path.endsWith("jpg")) { imageAccepted = true; } else if (image.path.endsWith("jpeg")) { imageAccepted = true; } else { imageAccepted = false; } if (imageAccepted) { if (image != null) { setState(() { _imagePath = image.path; _file = File(_imagePath); }); } } else { SnackBar(content: Text("This file extension is not allowed")); } } A: You can use Path package like this: import 'package:path/path.dart' as p; final path = '/some/path/to/file/file.dart'; final extension = p.extension(path); // '.dart'
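A: A compact variant of the extension check using the path package from the second answer (add path: to pubspec.yaml); the handleGif and handleStillImage helpers are placeholders for whatever the two code paths should do:
import 'package:path/path.dart' as p;

Future<void> imgFromGallery() async {
  final picked = await picker.pickImage(source: ImageSource.gallery);
  if (picked == null) return;
  // Normalize case so ".GIF" is treated like ".gif".
  switch (p.extension(picked.path).toLowerCase()) {
    case '.gif':
      handleGif(File(picked.path)); // assumed helper for the gif branch
      break;
    case '.jpg':
    case '.jpeg':
    case '.png':
      handleStillImage(File(picked.path)); // assumed helper
      break;
    default:
      print('Unsupported file type');
  }
}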
Functions by file extension in Flutter
I'm using the image picker package: https://pub.dev/packages/image_picker
// Get from gallery
void ImgFromGallery() async {
  final pickedFile = await picker.pickImage(source: ImageSource.gallery);

  setState(() {
    if (pickedFile != null) {
      _proImage = File(pickedFile.path);
      List<int> imageBytes = _proImage!.readAsBytesSync();
      image = base64Encode(imageBytes);
      print("_Proimage:$_proImage");
    } else {
      print('No image selected.');
    }
  });
}

It works, but if the user chooses a .gif file from their gallery, I want to run a different function. Can I check the extension of the selected file? If yes, how can I do that? I'm new to Flutter.
[ "File? _file;\nString _imagePath = \"\";\nbool imageAccepted;\n\ntakeImageFromGallery() async {\n XFile? image = await ImagePicker().pickImage(source: ImageSource.gallery);\n if (image!.path.endsWith(\"png\")) {\n imageAccepted = true;\n } else if (image.path.endsWith(\"jpg\")) {\n imageAccepted = true;\n } else if (image.path.endsWith(\"jpeg\")) {\n imageAccepted = true;\n } else {\n imageAccepted = false;\n }\n\n if (imageAccepted) {\n if (image != null) {\n setState(() {\n _imagePath = image.path;\n _file = File(_imagePath);\n });\n }\n } else {\n SnackBar(content: Text(\"This file extension is not allowed\"));\n }\n}\n\n", "You can use Path package like this:\nimport 'package:path/path.dart' as p;\n\nfinal path = '/some/path/to/file/file.dart';\n\nfinal extension = p.extension(path); // '.dart'\n\n" ]
[ 0, 0 ]
[]
[]
[ "flutter", "imagepicker" ]
stackoverflow_0070821858_flutter_imagepicker.txt
Q: Exception is lost while consuming a PLINQ query I observed a weird behavior while experimenting with a PLINQ query. Here is the scenario: There is a source IEnumerable<int> sequence that contains the two items 1 and 2. A Parallel LINQ Select operation is applied on this sequence, projecting each item to itself (x => x). The resulting ParallelQuery<int> query is consumed immediately with a foreach loop. The selector lambda of the Select projects successfully the item 1. The consuming foreach loop throws an exception for the item 1. The selector lambda throws an exception for the item 2, after a small delay. What happens next is that the consuming exception is lost! Apparently it is shadowed by the exception thrown afterwards in the Select. Here is a minimal demonstration of this behavior: ParallelQuery<int> query = Enumerable.Range(1, 2) .AsParallel() .Select(x => { if (x == 2) { Thread.Sleep(500); throw new Exception($"Oops!"); } return x; }); try { foreach (int item in query) { Console.WriteLine($"Consuming item #{item} started"); throw new Exception($"Consuming item #{item} failed"); } } catch (AggregateException aex) { Console.WriteLine($"AggregateException ({aex.InnerExceptions.Count})"); foreach (Exception ex in aex.InnerExceptions) Console.WriteLine($"- {ex.GetType().Name}: {ex.Message}"); } catch (Exception ex) { Console.WriteLine($"{ex.GetType().Name}: {ex.Message}"); } Output: Consuming item #1 started AggregateException (1) - Exception: Oops! Live demo. Chronologically the consuming exception happens first, and the PLINQ exception happens later. So my understanding is that the consuming exception is more important, and it should be propagated with priority. Nevertheless the only exception that is surfaced is the one that occurs inside the PLINQ code. My question is: why is the consuming exception lost, and is there any way that I can fix the query so that the consuming exception is propagated with priority? The desirable output is this: Consuming item #1 started Exception: Consuming item #1 failed A: I think what you are seeing is the result of the compiler translation of the foreach into a while (MoveNext()) with a try/finally to dispose of the enumerator. When the inner exception is thrown, it is caught by the finally and the Dispose() of the enumerator causes all the Select threads to finish, which causes an exception inside the finally block, which throws away the initial exception as discussed here. You need to use your own loop and a try/catch if you want to prevent this, though I think the Microsoft recommendation would be to use a try/catch in the Select to be closer to the source of the exception. Here is a modification of your existing code replacing the foreach with the compiler generated expansion of foreach using an enumerator. (I use LINQPad to see the C# 1.0 equivalent code / IL code from the compiler.) You can capture any exceptions during the Dispose of the enumerator and then bundle them up with the original exception into an AggregateException when you catch them. 
I wrapped the boilerplate into an extension method to replace the normal foreach: var b = true; var query = Enumerable.Range(1, 3) .AsParallel() .Select(x => { Thread.Sleep(50 * (x - 1)); Console.WriteLine($"Select({x})"); if (x >= 2) { throw new Exception($"Oops {x}!"); } return x; }); try { query.ForEachAggregatingExceptions(item => { Console.WriteLine($"Consuming item #{item} started"); if (b) { throw new Exception($"Consuming item #{item} failed"); } }); } catch (AggregateException aex) { Console.WriteLine($"AggregateException ({aex.InnerExceptions.Count})"); foreach (Exception ex in aex.InnerExceptions) Console.WriteLine($"- {ex.GetType().Name}: {ex.Message}"); } catch (Exception ex) { Console.WriteLine($"{ex.GetType().Name}: {ex.Message}"); } public static class ParallelQueryExt { public static void ForEachAggregatingExceptions<T>(this ParallelQuery<T> pq, Action<T> processFn) { Exception FirstException = null; var e = pq.GetEnumerator(); try { while (e.MoveNext()) processFn(e.Current); } catch (Exception ex) { FirstException = ex; } finally { if (e != null) { try { e.Dispose(); } catch (AggregateException aex) { // combine exceptions from Dispose with FirstException if any if (FirstException != null) { throw new AggregateException(aex.InnerExceptions.Prepend(FirstException)); } else throw; } catch (Exception ex) { // combine single exception from Dispose with FirstException if any throw new AggregateException(new[] { ex, FirstException }); } if (FirstException != null) // re-throw FirstException if no others occurred throw FirstException; } } } } PS The b variable and the if prevents the compiler from optimizing out the while loop into an if since it can figure out the throw will prevent the loop from executing more than once pass. A: NetMage's answer explains that the observed behavior is caused by the error thrown on Dispose of the PLINQ enumerator. My guess about why the PLINQ library violates the common wisdom about throwing exceptions on Dispose, which is to avoid throwing unless the error is critical, is because the library was introduced on .NET 4.0. In this .NET version an unobserved faulted Task resulted in the termination of the processes. The process was crashing when the faulted Task was garbage collected, after raising the TaskScheduler.UnobservedTaskException. So the PLINQ designers had to choose between throwing on Dispose, swallowing the exception completely, or crashing the process, and they choose what seemed like the lesser evil of the available options. That's my guess. Had the library been authored on .NET 4.5, they might had decided differently. In that .NET version, the process would no longer crash when an unobserved faulted Task was garbage collected. Reverting to the .NET 4.0 policy is still possible through a configuration setting, but I doubt that anyone ever used this setting to revert to the original irrational behavior. My approach for fixing PLINQ's error-losing behavior is a bit different that NetMage's approach. Instead of bundling all errors in an AggregateException, I prefer to suppress the exception that is thrown by PLINQ on Dispose, and propagate it through the TaskScheduler.UnobservedTaskException mechanism. This can be achieved easily by just creating a faulted task with the Task.FromException method, and leaving it unobserved: /// <summary> /// Suppresses the error that might be thrown by the enumerator on Dispose. /// The error triggers the TaskScheduler.UnobservedTaskException event. 
/// </summary> public static IEnumerable<TSource> SuppressDisposeException<TSource>( this IEnumerable<TSource> source) { ArgumentNullException.ThrowIfNull(source); IEnumerator<TSource> enumerator = source.GetEnumerator(); try { while (enumerator.MoveNext()) yield return enumerator.Current; try { enumerator.Dispose(); } finally { enumerator = null; } } finally { try { enumerator?.Dispose(); } catch (Exception ex) { _ = Task.FromException(ex); } } } I am also trying to dispose the enumerator as part of the enumeration, in which case an exception is propagated normally. I don't think it has any effect on PLINQ, since it is unlikely that the PLINQ enumerator will return false on MoveNext, and then will throw on Dispose. But it seems like a good behavior for a general-purpose LINQ operator. Usage example: IEnumerable<int> query = Enumerable.Range(1, 2) .AsParallel() .Select(x => /* ... */ x) .SuppressDisposeException(); In order to watch the TaskScheduler.UnobservedTaskException event being triggered, you might have to call GC.Collect as part of the test. My justification for suppressing the exception on Dispose from the synchronous execution flow, is because I consider the parallel nature of PLINQ as a form of speculative execution. The PLINQ engine might do more work than what the consumer of the query is interested to receive. So in case the consumer abandons the enumeration prematurely, either voluntarily by breaking the foreach loop, or unwillingly because it suffered an exception, the PLINQ should not bother the consumer with anything that might happen past the point that the consumer lost interest for the enumeration.
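A: To actually watch the exception that SuppressDisposeException routes through Task.FromException, subscribe to the event before running the query and force a collection afterwards; the GC calls are only there to make the observation deterministic in a demo:
TaskScheduler.UnobservedTaskException += (sender, e) =>
{
    Console.WriteLine($"Unobserved: {e.Exception.InnerException?.Message}");
    e.SetObserved(); // mark it handled
};

// ... enumerate the PLINQ query wrapped in SuppressDisposeException() here ...

GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();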
Exception is lost while consuming a PLINQ query
I observed a weird behavior while experimenting with a PLINQ query. Here is the scenario: There is a source IEnumerable<int> sequence that contains the two items 1 and 2. A Parallel LINQ Select operation is applied on this sequence, projecting each item to itself (x => x). The resulting ParallelQuery<int> query is consumed immediately with a foreach loop. The selector lambda of the Select projects successfully the item 1. The consuming foreach loop throws an exception for the item 1. The selector lambda throws an exception for the item 2, after a small delay. What happens next is that the consuming exception is lost! Apparently it is shadowed by the exception thrown afterwards in the Select. Here is a minimal demonstration of this behavior: ParallelQuery<int> query = Enumerable.Range(1, 2) .AsParallel() .Select(x => { if (x == 2) { Thread.Sleep(500); throw new Exception($"Oops!"); } return x; }); try { foreach (int item in query) { Console.WriteLine($"Consuming item #{item} started"); throw new Exception($"Consuming item #{item} failed"); } } catch (AggregateException aex) { Console.WriteLine($"AggregateException ({aex.InnerExceptions.Count})"); foreach (Exception ex in aex.InnerExceptions) Console.WriteLine($"- {ex.GetType().Name}: {ex.Message}"); } catch (Exception ex) { Console.WriteLine($"{ex.GetType().Name}: {ex.Message}"); } Output: Consuming item #1 started AggregateException (1) - Exception: Oops! Live demo. Chronologically the consuming exception happens first, and the PLINQ exception happens later. So my understanding is that the consuming exception is more important, and it should be propagated with priority. Nevertheless the only exception that is surfaced is the one that occurs inside the PLINQ code. My question is: why is the consuming exception lost, and is there any way that I can fix the query so that the consuming exception is propagated with priority? The desirable output is this: Consuming item #1 started Exception: Consuming item #1 failed
[ "I think what you are seeing is the result of the compiler translation of the foreach into a while (MoveNext()) with a try/finally to dispose of the enumerator. When the inner exception is thrown, it is caught by the finally and the Dispose() of the enumerator causes all the Select threads to finish, which causes an exception inside the finally block, which throws away the initial exception as discussed here. You need to use your own loop and a try/catch if you want to prevent this, though I think the Microsoft recommendation would be to use a try/catch in the Select to be closer to the source of the exception.\nHere is a modification of your existing code replacing the foreach with the compiler generated expansion of foreach using an enumerator. (I use LINQPad to see the C# 1.0 equivalent code / IL code from the compiler.)\nYou can capture any exceptions during the Dispose of the enumerator and then bundle them up with the original exception into an AggregateException when you catch them.\nI wrapped the boilerplate into an extension method to replace the normal foreach:\nvar b = true;\nvar query = Enumerable.Range(1, 3)\n .AsParallel()\n .Select(x => {\n Thread.Sleep(50 * (x - 1));\n Console.WriteLine($\"Select({x})\");\n if (x >= 2) {\n throw new Exception($\"Oops {x}!\");\n }\n return x;\n });\n\ntry {\n query.ForEachAggregatingExceptions(item => {\n Console.WriteLine($\"Consuming item #{item} started\");\n if (b) {\n throw new Exception($\"Consuming item #{item} failed\");\n }\n });\n}\ncatch (AggregateException aex) {\n Console.WriteLine($\"AggregateException ({aex.InnerExceptions.Count})\");\n foreach (Exception ex in aex.InnerExceptions)\n Console.WriteLine($\"- {ex.GetType().Name}: {ex.Message}\");\n}\ncatch (Exception ex) {\n Console.WriteLine($\"{ex.GetType().Name}: {ex.Message}\");\n}\n\npublic static class ParallelQueryExt {\n public static void ForEachAggregatingExceptions<T>(this ParallelQuery<T> pq, Action<T> processFn) {\n Exception FirstException = null;\n var e = pq.GetEnumerator();\n try {\n while (e.MoveNext())\n processFn(e.Current);\n }\n catch (Exception ex) {\n FirstException = ex;\n }\n finally {\n if (e != null) {\n try {\n e.Dispose();\n }\n catch (AggregateException aex) { // combine exceptions from Dispose with FirstException if any\n if (FirstException != null) {\n throw new AggregateException(aex.InnerExceptions.Prepend(FirstException));\n }\n else\n throw;\n }\n catch (Exception ex) { // combine single exception from Dispose with FirstException if any\n throw new AggregateException(new[] { ex, FirstException });\n }\n if (FirstException != null) // re-throw FirstException if no others occurred\n throw FirstException;\n }\n }\n }\n}\n\nPS The b variable and the if prevents the compiler from optimizing out the while loop into an if since it can figure out the throw will prevent the loop from executing more than once pass.\n", "NetMage's answer explains that the observed behavior is caused by the error thrown on Dispose of the PLINQ enumerator. My guess about why the PLINQ library violates the common wisdom about throwing exceptions on Dispose, which is to avoid throwing unless the error is critical, is because the library was introduced on .NET 4.0. In this .NET version an unobserved faulted Task resulted in the termination of the processes. The process was crashing when the faulted Task was garbage collected, after raising the TaskScheduler.UnobservedTaskException. 
So the PLINQ designers had to choose between throwing on Dispose, swallowing the exception completely, or crashing the process, and they choose what seemed like the lesser evil of the available options. That's my guess.\nHad the library been authored on .NET 4.5, they might had decided differently. In that .NET version, the process would no longer crash when an unobserved faulted Task was garbage collected. Reverting to the .NET 4.0 policy is still possible through a configuration setting, but I doubt that anyone ever used this setting to revert to the original irrational behavior.\nMy approach for fixing PLINQ's error-losing behavior is a bit different that NetMage's approach. Instead of bundling all errors in an AggregateException, I prefer to suppress the exception that is thrown by PLINQ on Dispose, and propagate it through the TaskScheduler.UnobservedTaskException mechanism. This can be achieved easily by just creating a faulted task with the Task.FromException method, and leaving it unobserved:\n/// <summary>\n/// Suppresses the error that might be thrown by the enumerator on Dispose.\n/// The error triggers the TaskScheduler.UnobservedTaskException event.\n/// </summary>\npublic static IEnumerable<TSource> SuppressDisposeException<TSource>(\n this IEnumerable<TSource> source)\n{\n ArgumentNullException.ThrowIfNull(source);\n IEnumerator<TSource> enumerator = source.GetEnumerator();\n try\n {\n while (enumerator.MoveNext()) yield return enumerator.Current;\n try { enumerator.Dispose(); } finally { enumerator = null; }\n }\n finally\n {\n try { enumerator?.Dispose(); }\n catch (Exception ex) { _ = Task.FromException(ex); }\n }\n}\n\nI am also trying to dispose the enumerator as part of the enumeration, in which case an exception is propagated normally. I don't think it has any effect on PLINQ, since it is unlikely that the PLINQ enumerator will return false on MoveNext, and then will throw on Dispose. But it seems like a good behavior for a general-purpose LINQ operator.\nUsage example:\nIEnumerable<int> query = Enumerable.Range(1, 2)\n .AsParallel()\n .Select(x => /* ... */ x)\n .SuppressDisposeException();\n\nIn order to watch the TaskScheduler.UnobservedTaskException event being triggered, you might have to call GC.Collect as part of the test.\nMy justification for suppressing the exception on Dispose from the synchronous execution flow, is because I consider the parallel nature of PLINQ as a form of speculative execution. The PLINQ engine might do more work than what the consumer of the query is interested to receive. So in case the consumer abandons the enumeration prematurely, either voluntarily by breaking the foreach loop, or unwillingly because it suffered an exception, the PLINQ should not bother the consumer with anything that might happen past the point that the consumer lost interest for the enumeration.\n" ]
[ 3, 0 ]
[]
[]
[ "c#", "exception", "linq", "multithreading", "plinq" ]
stackoverflow_0074623812_c#_exception_linq_multithreading_plinq.txt
Q: Why nested When().Then() is slower than Left Join in Rust Polars? In Rust Polars(might apply to python pandas as well) assigning values in a new column with a complex logic involving values of other columns can be achieved in two ways. The default way is using a nested WhenThen expression. Another way to achieve same thing is with LeftJoin. Naturally I would expect When Then to be much faster than Join, but it is not the case. In this example, When Then is 6 times slower than Join. Is that actually expected? Am I using When Then wrong? In this example the goal is to assign weights/multipliers column based on three other columns: country, city and bucket. use std::collections::HashMap; use polars::prelude::*; use rand::{distributions::Uniform, Rng}; // 0.6.5 pub fn bench() { // PREPARATION // This MAP is to be used for Left Join let mut weights = df![ "country"=>vec!["UK"; 5], "city"=>vec!["London"; 5], "bucket" => ["1","2","3","4","5"], "weights" => [0.1, 0.2, 0.3, 0.4, 0.5] ].unwrap().lazy(); weights = weights.with_column(concat_lst([col("weights")]).alias("weihts")); // This MAP to be used in When.Then let weight_map = bucket_weight_map(&[0.1, 0.2, 0.3, 0.4, 0.5], 1); // Generate the DataSet itself let mut rng = rand::thread_rng(); let range = Uniform::new(1, 5); let b: Vec<String> = (0..10_000_000).map(|_| rng.sample(&range).to_string()).collect(); let rc = vec!["UK"; 10_000_000]; let rf = vec!["London"; 10_000_000]; let val = vec![1; 10_000_000]; let frame = df!( "country" => rc, "city" => rf, "bucket" => b, "val" => val, ).unwrap().lazy(); // Test with Left Join use std::time::Instant; let now = Instant::now(); let r = frame.clone() .join(weights, [col("country"), col("city"), col("bucket")], [col("country"), col("city"), col("bucket")], JoinType::Left) .collect().unwrap(); let elapsed = now.elapsed(); println!("Left Join took: {:.2?}", elapsed); // Test with nested When Then let now = Instant::now(); let r1 = frame.clone().with_column( when(col("country").eq(lit("UK"))) .then( when(col("city").eq(lit("London"))) .then(rf_rw_map(col("bucket"),weight_map,NULL.lit())) .otherwise(NULL.lit()) ) .otherwise(NULL.lit()) ) .collect().unwrap(); let elapsed = now.elapsed(); println!("Chained When Then: {:.2?}", elapsed); // Check results are identical dbg!(r.tail(Some(10))); dbg!(r1.tail(Some(10))); } /// All this does is building a chained When().Then().Otherwise() fn rf_rw_map(col: Expr, map: HashMap<String, Expr>, other: Expr) -> Expr { // buf is a placeholder let mut it = map.into_iter(); let (k, v) = it.next().unwrap(); //The map will have at least one value let mut buf = when(lit::<bool>(false)) // buffer WhenThen .then(lit::<f64>(0.).list()) // buffer WhenThen, needed to "chain on to" .when(col.clone().eq(lit(k))) .then(v); for (k, v) in it { buf = buf .when(col.clone().eq(lit(k))) .then(v); } buf.otherwise(other) } fn bucket_weight_map(arr: &[f64], ntenors: u8) -> HashMap<String, Expr> { let mut bucket_weights: HashMap<String, Expr> = HashMap::default(); for (i, n) in arr.iter().enumerate() { let j = i + 1; bucket_weights.insert( format!["{j}"], Series::from_vec("weight", vec![*n; ntenors as usize]) .lit() .list(), ); } bucket_weights } The result is surprising to me: Left Join took: 561.26ms vs Chained When Then: 3.22s Thoughts? UPDATE This does not make much difference. 
Nested WhenThen is still over 3s // Test with nested When Then let now = Instant::now(); let r1 = frame.clone().with_column( when(col("country").eq(lit("UK")).and(col("city").eq(lit("London")))) .then(rf_rw_map(col("bucket"),weight_map,NULL.lit())) .otherwise(NULL.lit()) ) .collect().unwrap(); let elapsed = now.elapsed(); println!("Chained When Then: {:.2?}", elapsed); A: It's difficult to say for certain without more context, but the difference in performance between using a nested When().Then() expression and a LeftJoin in Rust Polars may be due to the implementation of each method. LeftJoin is likely more optimized for this kind of operation than a nested When().Then() expression, so it may be faster in general. Additionally, using LeftJoin may allow the program to take advantage of parallelization, which can improve performance. It's also possible that the specific inputs to the two methods in the example are causing the LeftJoin to be faster. A: The joins are one of the most optimized algorithms in polars. A left join will be executed fully in parallel and has many performance related fast paths. If you want to combine data based on equality, you should almost always choose a join.
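For readers coming from py-polars (see the python_polars tag below), the recommended join-based mapping translates directly; a minimal eager-API sketch, with illustrative sizes and values:

import polars as pl

weights = pl.DataFrame({
    "country": ["UK"] * 5,
    "city": ["London"] * 5,
    "bucket": ["1", "2", "3", "4", "5"],
    "weights": [0.1, 0.2, 0.3, 0.4, 0.5],
})

frame = pl.DataFrame({
    "country": ["UK"] * 1_000_000,
    "city": ["London"] * 1_000_000,
    "bucket": ["3"] * 1_000_000,
    "val": [1] * 1_000_000,
})

# The left join runs in parallel and is usually the fastest way to map
# values keyed on the equality of several columns.
out = frame.join(weights, on=["country", "city", "bucket"], how="left")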
Why nested When().Then() is slower than Left Join in Rust Polars?
In Rust Polars(might apply to python pandas as well) assigning values in a new column with a complex logic involving values of other columns can be achieved in two ways. The default way is using a nested WhenThen expression. Another way to achieve same thing is with LeftJoin. Naturally I would expect When Then to be much faster than Join, but it is not the case. In this example, When Then is 6 times slower than Join. Is that actually expected? Am I using When Then wrong? In this example the goal is to assign weights/multipliers column based on three other columns: country, city and bucket. use std::collections::HashMap; use polars::prelude::*; use rand::{distributions::Uniform, Rng}; // 0.6.5 pub fn bench() { // PREPARATION // This MAP is to be used for Left Join let mut weights = df![ "country"=>vec!["UK"; 5], "city"=>vec!["London"; 5], "bucket" => ["1","2","3","4","5"], "weights" => [0.1, 0.2, 0.3, 0.4, 0.5] ].unwrap().lazy(); weights = weights.with_column(concat_lst([col("weights")]).alias("weihts")); // This MAP to be used in When.Then let weight_map = bucket_weight_map(&[0.1, 0.2, 0.3, 0.4, 0.5], 1); // Generate the DataSet itself let mut rng = rand::thread_rng(); let range = Uniform::new(1, 5); let b: Vec<String> = (0..10_000_000).map(|_| rng.sample(&range).to_string()).collect(); let rc = vec!["UK"; 10_000_000]; let rf = vec!["London"; 10_000_000]; let val = vec![1; 10_000_000]; let frame = df!( "country" => rc, "city" => rf, "bucket" => b, "val" => val, ).unwrap().lazy(); // Test with Left Join use std::time::Instant; let now = Instant::now(); let r = frame.clone() .join(weights, [col("country"), col("city"), col("bucket")], [col("country"), col("city"), col("bucket")], JoinType::Left) .collect().unwrap(); let elapsed = now.elapsed(); println!("Left Join took: {:.2?}", elapsed); // Test with nested When Then let now = Instant::now(); let r1 = frame.clone().with_column( when(col("country").eq(lit("UK"))) .then( when(col("city").eq(lit("London"))) .then(rf_rw_map(col("bucket"),weight_map,NULL.lit())) .otherwise(NULL.lit()) ) .otherwise(NULL.lit()) ) .collect().unwrap(); let elapsed = now.elapsed(); println!("Chained When Then: {:.2?}", elapsed); // Check results are identical dbg!(r.tail(Some(10))); dbg!(r1.tail(Some(10))); } /// All this does is building a chained When().Then().Otherwise() fn rf_rw_map(col: Expr, map: HashMap<String, Expr>, other: Expr) -> Expr { // buf is a placeholder let mut it = map.into_iter(); let (k, v) = it.next().unwrap(); //The map will have at least one value let mut buf = when(lit::<bool>(false)) // buffer WhenThen .then(lit::<f64>(0.).list()) // buffer WhenThen, needed to "chain on to" .when(col.clone().eq(lit(k))) .then(v); for (k, v) in it { buf = buf .when(col.clone().eq(lit(k))) .then(v); } buf.otherwise(other) } fn bucket_weight_map(arr: &[f64], ntenors: u8) -> HashMap<String, Expr> { let mut bucket_weights: HashMap<String, Expr> = HashMap::default(); for (i, n) in arr.iter().enumerate() { let j = i + 1; bucket_weights.insert( format!["{j}"], Series::from_vec("weight", vec![*n; ntenors as usize]) .lit() .list(), ); } bucket_weights } The result is surprising to me: Left Join took: 561.26ms vs Chained When Then: 3.22s Thoughts? UPDATE This does not make much difference. 
Nested WhenThen is still over 3s // Test with nested When Then let now = Instant::now(); let r1 = frame.clone().with_column( when(col("country").eq(lit("UK")).and(col("city").eq(lit("London")))) .then(rf_rw_map(col("bucket"),weight_map,NULL.lit())) .otherwise(NULL.lit()) ) .collect().unwrap(); let elapsed = now.elapsed(); println!("Chained When Then: {:.2?}", elapsed);
[ "It's difficult to say for certain without more context, but the difference in performance between using a nested When().Then() expression and a LeftJoin in Rust Polars may be due to the implementation of each method. LeftJoin is likely more optimized for this kind of operation than a nested When().Then() expression, so it may be faster in general. Additionally, using LeftJoin may allow the program to take advantage of parallelization, which can improve performance. It's also possible that the specific inputs to the two methods in the example are causing the LeftJoin to be faster.\n", "The joins are one of the most optimized algorithms in polars. A left join will be executed fully in parallel and has many performance related fast paths. If you want to combine data based on equality, you should almost always choose a join.\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python", "python_polars", "rust" ]
stackoverflow_0074671361_dataframe_pandas_python_python_polars_rust.txt
Q: I cannot get over 50% accuracy on my test data in this simple CNN Tensorflow Keras model for image classification The code is as follows. I have a highly imbalanced dataset for chest x rays with heart enlargement. The images are separated into a training folder split into positive for cardiomegaly and negative for cardiomegaly subfolders (467 pos images and ~20,000 neg). (Then I have a testing folder with two subfolders (300 pos, 300 neg). Each time I test I keep getting a 50% accuracy with the eval method below. When I look at the predictions it is always that they are all one class (normally negative), however if I give the positive values a very high weight (1000+ compared to the negative values 1) the model will flip and say that they are all instead positive. This leads me to believe it is overfitting but all my attempts to resolve this have come up with issues. import pandas as pd import os import matplotlib.pyplot as plt import numpy as np import skimage as sk import skimage.io as skio import skimage.transform as sktr import skimage.filters as skfl import skimage.feature as skft import skimage.color as skcol import skimage.exposure as skexp import skimage.morphology as skmr import skimage.util as skut import skimage.measure as skme import sklearn.model_selection as le_ms import sklearn.decomposition as le_de import sklearn.discriminant_analysis as le_di import sklearn.preprocessing as le_pr import sklearn.linear_model as le_lm import sklearn.metrics as le_me import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.models import Sequential classNames = ["trainpos","trainneg"] testclassNames = ["testpos", "test"] train_ds = tf.keras.preprocessing.image_dataset_from_directory( './data/trainup/', labels='inferred', label_mode='categorical', class_names=classNames, color_mode='grayscale', batch_size=32, image_size=(256, 256), shuffle=True, seed=123, validation_split=0.2, subset="training", interpolation='gaussian', follow_links=False, ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( './data/trainup/', labels='inferred', label_mode='categorical', class_names=classNames, color_mode='grayscale', batch_size=32, image_size=(256, 256), shuffle=True, seed=23, validation_split=0.2, subset="validation", interpolation='gaussian', follow_links=False, ) test_ds = tf.keras.preprocessing.image_dataset_from_directory( './data/testup/', labels='inferred', label_mode='categorical', class_names=testclassNames, color_mode='grayscale', batch_size=32, image_size=(256, 256), shuffle=True, interpolation='gaussian', follow_links=False, ) AUTOTUNE = tf.data.experimental.AUTOTUNE train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) model = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.Rescaling(1./255, input_shape=(256, 256, 1)), tf.keras.layers.Conv2D(16, 4, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 4, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Dropout(0.2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(2) ]) opt = keras.optimizers.Adam(learning_rate=0.0001) model.compile(optimizer=opt, loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True), metrics=['accuracy']) class_weight = {0: 29, 1: 1} history = model.fit( train_ds, validation_data=val_ds, epochs=5, class_weight=class_weight ) test_loss, 
test_accuracy = model.evaluate(test_ds) print("Test Loss: ", test_loss) print("Test Accuracy: ", test_accuracy) 19/19 [==============================] - 7s 376ms/step - loss: 3.4121 - accuracy: 0.5000 Test Loss: 3.4121198654174805 Test Accuracy: 0.5 I have tried updating the learning rate to values between 0.1 and 0.00001, adding epochs, removing epochs, changing to SGP for the optimizer, attempting to unpack the test_ds after subscripting it gave me the error that it is a batchdataset and can't be subscripted. This then shows me that the test_ds is giving me ~19 tensors of 32 images each except the last one which has about 25. I then wanted to predict each of these images individually and get the results because it looked like it was grouping all 32 (or 25 for the last one) together and then predicting based on that but that led me down rabbitholes that I haven't come out of with results. Tried many other things I can't fully remember normally tweaking the model itself or adding data augmentation (I am using tensorflow 2.3 as this is for a class with a repeating assignment so the data augmentation cannot be done with the current docs (mostly just vertical and horizontal changes in this version from what I can tell) A: The best thing to do is to eliminate the imbalance to begin with. You have 467 positive images which is more than enough for a model to perform on. So randomly select only 467 negative images from the 20,000 available. This is called under sampling and it works well. Another method is to use both undersampling and image augmentation. Example code to do this is shown below where I limit the number of images in the negative class to 1000, then create 533 augment images and add them to the positive class directory. NOTE- CAUTION the code below will delete images from your negative class directory and add augmented images to the positive class directory so before you run the code you might wish to create backups of these two directories so your original data is recoverable. In the demo code I had 1263 images in the positive directory and 467 images in the positive class directory. I tested the code and it works as desired. Now if your running a notebook on Kagle the code below will not work because you can not change the data in the input directories. So in that case you have to copy the input directories to the kagle working directory first. Then set the paths to those directories. 
!pip install -U albumentations import tensorflow as tf from tensorflow import keras from tensorflow.keras.preprocessing.image import ImageDataGenerator import os import numpy as np import random import cv2 import albumentations as A from tqdm import tqdm def get_augmented_image(image): # this function returns an augmented version of the input img # see albumentations documentation at URL https://albumentations.ai/docs/getting_started/image_augmentation/ # for information on various type of augmentations available these are examples below width=int(image.shape[1]*.8) height=int(image.shape[0]*.8) transform= A.Compose([ A.HorizontalFlip(p=.5), A.RandomBrightnessContrast(p=.5), A.RandomGamma(p=.5), A.RandomCrop(width=width, height=height, p=.25) ]) return transform(image=image)['image'] negative_limit=1000 negative_dir_path=r'C:\Temp\data\trainup\negative'# path to directory holding the negative images positive_dir_path=r'C:\Temp\data\trainup\positive' # path to directory holding positive images negative_file_list=os.listdir(negative_dir_path) positive_file_list=os.listdir(positive_dir_path) sampled_negative_file_list=np.random.choice(negative_file_list, size=negative_limit, replace=False) for f in tqdm(negative_file_list, ncols=120, unit='files', colour='blue', desc='deleting excess neg files'): # this for loop leaves only 1000 images in the negative_image_directory if f not in sampled_negative_file_list: fpath=os.path.join(negative_dir_path,f) os.remove(fpath) # now create augmented images delta=negative_limit-len(os.listdir(positive_dir_path)) # this is the number of augmented images to create to balance the dataset sampled_positive_image_list=np.random.choice(positive_file_list, delta, replace=True) # replace=True because delta>number of positive images i=0 for f in tqdm(sampled_positive_image_list, ncols=120, unit='files', colour='blue',desc='creating augment images'): # this loop creates augmented images and stores them in the positive image directory fpath=os.path.join(positive_dir_path,f) img=cv2.imread(fpath) dest_file_name='aug' +str(i) + '-' + f # create the filename with a unique numeric prefix dest_path=os.path.join(positive_dir_path, dest_file_name) # store augmented images witha numeric prefix in the filename augmented_image=get_augmented_image(img) cv2.imwrite(dest_path, augmented_image) i +=1 # when these loops are done, the negative_image_directory will have 1000 images # and the positive_image_directory will also have 1000 images, 533 of which are augmented images```` In your code you have tf.keras.layers.Dense(2) change to tf.keras.layers.Dense(2, activation='softmax') In model.comple remove (from_logits=True)
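To make the last two suggestions concrete, here is a minimal sketch of the corrected head of the model and the compile call (the convolutional layers are elided; everything else is taken from the question):

model = tf.keras.Sequential([
    # ... rescaling, Conv2D/MaxPooling2D blocks and Dropout as in the question ...
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),  # softmax output instead of raw logits
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
    loss=tf.keras.losses.CategoricalCrossentropy(),  # from_logits=True removed
    metrics=['accuracy'],
)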
I cannot get over 50% accuracy on my test data in this simple CNN Tensorflow Keras model for image classification
The code is as follows. I have a highly imbalanced dataset for chest x rays with heart enlargement. The images are separated into a training folder split into positive for cardiomegaly and negative for cardiomegaly subfolders (467 pos images and ~20,000 neg). (Then I have a testing folder with two subfolders (300 pos, 300 neg). Each time I test I keep getting a 50% accuracy with the eval method below. When I look at the predictions it is always that they are all one class (normally negative), however if I give the positive values a very high weight (1000+ compared to the negative values 1) the model will flip and say that they are all instead positive. This leads me to believe it is overfitting but all my attempts to resolve this have come up with issues. import pandas as pd import os import matplotlib.pyplot as plt import numpy as np import skimage as sk import skimage.io as skio import skimage.transform as sktr import skimage.filters as skfl import skimage.feature as skft import skimage.color as skcol import skimage.exposure as skexp import skimage.morphology as skmr import skimage.util as skut import skimage.measure as skme import sklearn.model_selection as le_ms import sklearn.decomposition as le_de import sklearn.discriminant_analysis as le_di import sklearn.preprocessing as le_pr import sklearn.linear_model as le_lm import sklearn.metrics as le_me import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.models import Sequential classNames = ["trainpos","trainneg"] testclassNames = ["testpos", "test"] train_ds = tf.keras.preprocessing.image_dataset_from_directory( './data/trainup/', labels='inferred', label_mode='categorical', class_names=classNames, color_mode='grayscale', batch_size=32, image_size=(256, 256), shuffle=True, seed=123, validation_split=0.2, subset="training", interpolation='gaussian', follow_links=False, ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( './data/trainup/', labels='inferred', label_mode='categorical', class_names=classNames, color_mode='grayscale', batch_size=32, image_size=(256, 256), shuffle=True, seed=23, validation_split=0.2, subset="validation", interpolation='gaussian', follow_links=False, ) test_ds = tf.keras.preprocessing.image_dataset_from_directory( './data/testup/', labels='inferred', label_mode='categorical', class_names=testclassNames, color_mode='grayscale', batch_size=32, image_size=(256, 256), shuffle=True, interpolation='gaussian', follow_links=False, ) AUTOTUNE = tf.data.experimental.AUTOTUNE train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) model = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.Rescaling(1./255, input_shape=(256, 256, 1)), tf.keras.layers.Conv2D(16, 4, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 4, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Dropout(0.2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(2) ]) opt = keras.optimizers.Adam(learning_rate=0.0001) model.compile(optimizer=opt, loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True), metrics=['accuracy']) class_weight = {0: 29, 1: 1} history = model.fit( train_ds, validation_data=val_ds, epochs=5, class_weight=class_weight ) test_loss, test_accuracy = model.evaluate(test_ds) print("Test Loss: ", test_loss) print("Test Accuracy: ", test_accuracy) 19/19 
[==============================] - 7s 376ms/step - loss: 3.4121 - accuracy: 0.5000 Test Loss: 3.4121198654174805 Test Accuracy: 0.5 I have tried updating the learning rate to values between 0.1 and 0.00001, adding epochs, removing epochs, changing to SGP for the optimizer, attempting to unpack the test_ds after subscripting it gave me the error that it is a batchdataset and can't be subscripted. This then shows me that the test_ds is giving me ~19 tensors of 32 images each except the last one which has about 25. I then wanted to predict each of these images individually and get the results because it looked like it was grouping all 32 (or 25 for the last one) together and then predicting based on that but that led me down rabbitholes that I haven't come out of with results. Tried many other things I can't fully remember normally tweaking the model itself or adding data augmentation (I am using tensorflow 2.3 as this is for a class with a repeating assignment so the data augmentation cannot be done with the current docs (mostly just vertical and horizontal changes in this version from what I can tell)
[ "The best thing to do is to eliminate the imbalance to begin with. You have 467 positive images which is more than enough for a model to perform on. So randomly select only 467 negative images from the 20,000 available. This is called under sampling and it works well. Another method is to use both undersampling and image augmentation. Example code to do this is shown below where I limit the number of images in the negative class to 1000, then create 533 augment images and add them to the positive class directory. NOTE- CAUTION the code below will delete images from your negative class directory and add augmented images to the positive class directory so before you run the code you might wish to create backups of these two directories so your original data is recoverable. In the demo code I had 1263 images in the positive directory and 467 images in the positive class directory. I tested the code and it works as desired. Now if your running a notebook on Kagle the code below will not work because you can not change the data in the input directories. So in that case you have to copy the input directories to the kagle working directory first. Then set the paths to those directories.\n!pip install -U albumentations\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nimport os\nimport numpy as np\nimport random\nimport cv2\nimport albumentations as A\nfrom tqdm import tqdm\n\ndef get_augmented_image(image): # this function returns an augmented version of the input img\n # see albumentations documentation at URL https://albumentations.ai/docs/getting_started/image_augmentation/\n # for information on various type of augmentations available these are examples below\n width=int(image.shape[1]*.8)\n height=int(image.shape[0]*.8)\n transform= A.Compose([\n A.HorizontalFlip(p=.5),\n A.RandomBrightnessContrast(p=.5),\n A.RandomGamma(p=.5),\n A.RandomCrop(width=width, height=height, p=.25) ]) \n return transform(image=image)['image']\n\nnegative_limit=1000\nnegative_dir_path=r'C:\\Temp\\data\\trainup\\negative'# path to directory holding the negative images\npositive_dir_path=r'C:\\Temp\\data\\trainup\\positive' # path to directory holding positive images\nnegative_file_list=os.listdir(negative_dir_path)\npositive_file_list=os.listdir(positive_dir_path)\nsampled_negative_file_list=np.random.choice(negative_file_list, size=negative_limit, replace=False) \nfor f in tqdm(negative_file_list, ncols=120, unit='files', colour='blue', desc='deleting excess neg files'): # this for loop leaves only 1000 images in the negative_image_directory\n if f not in sampled_negative_file_list:\n fpath=os.path.join(negative_dir_path,f) \n os.remove(fpath)\n# now create augmented images\ndelta=negative_limit-len(os.listdir(positive_dir_path)) # this is the number of augmented images to create to balance the dataset\nsampled_positive_image_list=np.random.choice(positive_file_list, delta, replace=True) # replace=True because delta>number of positive images\ni=0\nfor f in tqdm(sampled_positive_image_list, ncols=120, unit='files', colour='blue',desc='creating augment images'): # this loop creates augmented images and stores them in the positive image directory\n fpath=os.path.join(positive_dir_path,f)\n img=cv2.imread(fpath)\n dest_file_name='aug' +str(i) + '-' + f # create the filename with a unique numeric prefix\n dest_path=os.path.join(positive_dir_path, dest_file_name) # store augmented images witha numeric prefix in the filename\n 
augmented_image=get_augmented_image(img)\n cv2.imwrite(dest_path, augmented_image)\n i +=1\n# when these loops are done, the negative_image_directory will have 1000 images\n# and the positive_image_directory will also have 1000 images, 533 of which are augmented images````\n\nIn your code you have\ntf.keras.layers.Dense(2)\n\nchange to\ntf.keras.layers.Dense(2, activation='softmax')\n\nIn model.comple remove (from_logits=True)\n" ]
[ 0 ]
[]
[]
[ "conv_neural_network", "image_classification", "overfitting_underfitting", "python", "tensorflow" ]
stackoverflow_0074672833_conv_neural_network_image_classification_overfitting_underfitting_python_tensorflow.txt
Q: What's a correct way to use APOC.when in that query? I need to select different nodes depending on the rel_type variable. So it'll be ideal for me if returning a node from APOC.when is possible. Alternatively, it is OK to return the ID of the matched node. How can I solve that task in one of these ways? Legal_Entity and Natural_Person are the classes of nodes we are interested in; hid_party is a property that each node has, used as a unique ID; rel_type may be 'LEGAL' or 'PHYSICAL'. Depending on this parameter, different nodes should be selected. Example: match (legal:Legal_Entity {hid_party : '157456674'}) with legal, '422741957' as second_hid, 'LEGAL' as rel_type CALL apoc.when( 'LEGAL' = 'LEGAL', 'match (second:Legal_Entity {hid_party : second_hid}) return second as second_node', 'match (second:Natural_Person {hid_party : second_hid}) return second as second_node', {second_hid:second_hid} ) YIELD value return value.second_node A: Your query should work, but maybe we're missing how you pass the rel type as a parameter. Simple example: Stub graph CREATE (:LegalEntity {id: 123}) CREATE (:NaturalPerson {id: 456}) Then set a dummy parameter in the browser :param relType => 'LEGAL' Verify the list of parameters available for a query :params // result { "relType": "LEGAL" } Then an example of using apoc.when depending on the parameter CALL apoc.when($relType = 'LEGAL', 'MATCH (n:LegalEntity) RETURN n', 'MATCH (n:NaturalPerson) RETURN n', {} ) YIELD value RETURN value.n AS n Returns the expected LegalEntity node ╒══════════╕ │"n" │ ╞══════════╡ │{"id":123}│ └──────────┘ Change the parameter to something else :param relType => 'OtherValue' Run the same query; the result is different ╒══════════╕ │"n" │ ╞══════════╡ │{"id":456}│ └──────────┘
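Applying that to the original query: if my reading of the APOC docs is right, values passed in the parameter map must be referenced with a $ prefix inside the quoted inner queries, which the question's version does not do. A hedged sketch of the corrected call (same labels and property names as above):

MATCH (legal:Legal_Entity {hid_party: '157456674'})
WITH legal, '422741957' AS second_hid, 'LEGAL' AS rel_type
CALL apoc.when(
  rel_type = 'LEGAL',
  // note the $ prefix: second_hid comes from the params map below
  'MATCH (second:Legal_Entity {hid_party: $second_hid}) RETURN second AS second_node',
  'MATCH (second:Natural_Person {hid_party: $second_hid}) RETURN second AS second_node',
  {second_hid: second_hid}
)
YIELD value
RETURN value.second_node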
What's a correct way to use APOC.when in that query?
I need to select different nodes depending on the rel_type variable. So it'll be ideal for me if returning a node from APOC.when is possible. Alternatively, it is OK to return the ID of the matched node. How can I solve that task in one of these ways? Legal_Entity and Natural_Person are the classes of nodes we are interested in; hid_party is a property that each node has, used as a unique ID; rel_type may be 'LEGAL' or 'PHYSICAL'. Depending on this parameter, different nodes should be selected. Example: match (legal:Legal_Entity {hid_party : '157456674'}) with legal, '422741957' as second_hid, 'LEGAL' as rel_type CALL apoc.when( 'LEGAL' = 'LEGAL', 'match (second:Legal_Entity {hid_party : second_hid}) return second as second_node', 'match (second:Natural_Person {hid_party : second_hid}) return second as second_node', {second_hid:second_hid} ) YIELD value return value.second_node
[ "Your query should work, but maybe we're missing how you pass the rel type as parameter.\nSimple example :\nStub graph\nCREATE (:LegalEntity {id: 123})\nCREATE (:NaturalPerson {id: 456})\n\nThen set a dummy parameter in the browser\n:param relType => 'LEGAL'\n\nVerify the list of parameters available for a query\n:params\n\n// result\n{\n \"relType\": \"LEGAL\"\n}\n\nThen example of using apoc.when depending on the parameter\nCALL apoc.when($relType = 'LEGAL', \n'MATCH (n:LegalEntity) RETURN n',\n'MATCH (n:NaturalPerson) RETURN n',\n{}\n)\nYIELD value\nRETURN value.n AS n\n\nReturns the expected LegalEntity node\n╒══════════╕\n│\"n\" │\n╞══════════╡\n│{\"id\":123}│\n└──────────┘\n\nChange the parameter to something else\n:param relType => 'OtherValue'\n\nRun the same query, result is different\n╒══════════╕\n│\"n\" │\n╞══════════╡\n│{\"id\":456}│\n└──────────┘\n\n" ]
[ 1 ]
[]
[]
[ "graph", "neo4j", "neo4j_apoc" ]
stackoverflow_0074666539_graph_neo4j_neo4j_apoc.txt
Q: Windows Visual Studio CTRL + . for Mac I am at the stage of learning Flutter over video. In the video, the Windows user performs an operation with the command "CTRL + .", but when I do this on my Mac with "command + .", there is no result. What is the Mac equivalent of this Windows command? A: It's called Refactor in VS Code. You can customize the shortcut from File > Preferences > Keyboard Shortcuts. By default the keyboard shortcut is: Ctrl+Shift+R (Windows), Command+Shift+R (Mac)
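If the goal is to reproduce the Windows "CTRL + ." action itself (Quick Fix) rather than Refactor, an entry like the following can be added via Keyboard Shortcuts (keybindings.json); the command id editor.action.quickFix is VS Code's quick-fix action, while the key and when clause shown are suggestions to adjust:

{
  "key": "cmd+.",
  "command": "editor.action.quickFix",
  "when": "editorTextFocus"
}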
Windows Visual Studio CTRL + . for Mac
I am at the stage of learning Flutter over video. In the video, the Windows user performs an operation with the command "CTRL + .", but when I do this on my Mac with "command + .", there is no result. What is the Mac equivalent of this Windows command?
[ "it called Refactor on VSCode\nyou can customize the shorcut from:\nFile>Preferences>Keyboard Shorrcuts.\nby default the keyboard shortcut is :\n\nCtrl+Shift+R (windows)\nCommand+Shift+R (mac)\n\n" ]
[ 2 ]
[]
[]
[ "command", "dart", "flutter", "macos", "windows" ]
stackoverflow_0074673891_command_dart_flutter_macos_windows.txt
Q: Two programs are same but one is showing error I have two programs. 1: #include <iostream> using namespace std; int main () { do { cout<<"Hello world"; char yn='y'; } while (yn=='y' || yn=='Y'); return 0; } 2: #include <iostream> using namespace std; int main () { char yn='y'; do { cout<<"Hello world"; } while (yn=='y' || yn=='Y'); return 0; } First program is showing error like this: comparison between pointer and integer ('double (*)(int, double)' and 'char') } while (yn=='y' || yn=='Y'); I think both programs are same but still first program is showing error. A: In the first program, the yn variable that you have declared is not in scope in the while loop test int main () { do { cout<<"Hello world"; char yn='y'; // scope of yn variable stops here } while (yn=='y' || yn=='Y'); // so yn variable not in scope here return 0; } The reason that you don't get an error that says undeclared variable (or something like that) is because very unfortunately for you there is a POSIX standard function called yn (see here) and the compiler thinks that is what you are referring to in while (yn=='y' || yn=='Y');. That explains the error message, the compiler is interpreting yn as a function pointer. In your first program try changing the name of the variable (say yn -> yesno) and see what difference that makes to the error message. A: Code 1 #include <iostream> using namespace std; int main () { do { cout<<"Hello world"; char yn='y'; } while (yn=='y' || yn=='Y'); return 0; } In this code, char yn='y' is inside the exclusive scope of the do-while loop. The object yn is destroyed when the compiler leaves the scope. Now, when the line while (yn=='y' || yn=='Y') is executed, the compiler doesn’t know what yn is, thus causing an error. Code 2 #include <iostream> using namespace std; int main () { char yn='y'; do { cout<<"Hello world"; } while (yn=='y' || yn=='Y'); return 0; } In this code, char yn='y' is outside the exclusive scope of the do-while loop. It is in the scope of the main() function. Thus yn is not destroyed till the main() function finishes executing. Therefore, this code executes properly (it infinitely prints “Hello World”. A: The code is not the same. When you declare a variable between {...} it is in scope only between the braces and in this case not in scope in the while condition. The error is confused by the fact that a different symbol yn (a function) happens to be in scope through indirect inclusion of <cmath> by <iostream> and the ill-advised use of using namespace std; moving the entire standard library into the global namespace. That is what namespaces are for and it is always a bad idea to defeat an entire namespace. Here either use scope resolution std::cout or just declare the symbols you actually use: using namespace std::cout ; In this case, with only one instance if a std:: symbol, the using directive has saved you nothing whilst causing a great deal of confusion. Get out if that habit is my advice. In the second instance you will still get an error, but it will make much more sense, telling you that yn is undefined at the while expression. All that said, yn() is not a standard library function, but a POSIX extension; it may therefore not be in the std namespace. You may need also to compile with an option that excludes such extensions such as -ansi.
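For completeness, here is a sketch of how the loop is usually structured so that yn is in scope for the condition and is also updated inside the loop (otherwise the loop can never terminate); the std::cin prompt is an addition for illustration, not part of the original programs:

#include <iostream>

int main() {
    char yn = 'y'; // declared outside the loop body, so it is in scope for the condition
    do {
        std::cout << "Hello world\n";
        std::cout << "Again? (y/n): ";
        std::cin >> yn; // update yn so the condition can eventually become false
    } while (yn == 'y' || yn == 'Y');
    return 0;
}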
Two programs are same but one is showing error
I have two programs. 1: #include <iostream> using namespace std; int main () { do { cout<<"Hello world"; char yn='y'; } while (yn=='y' || yn=='Y'); return 0; } 2: #include <iostream> using namespace std; int main () { char yn='y'; do { cout<<"Hello world"; } while (yn=='y' || yn=='Y'); return 0; } First program is showing error like this: comparison between pointer and integer ('double (*)(int, double)' and 'char') } while (yn=='y' || yn=='Y'); I think both programs are same but still first program is showing error.
[ "In the first program, the yn variable that you have declared is not in scope in the while loop test\nint main () {\n do\n {\n cout<<\"Hello world\";\n char yn='y';\n // scope of yn variable stops here\n }\n while (yn=='y' || yn=='Y'); // so yn variable not in scope here\n return 0;\n}\n\nThe reason that you don't get an error that says undeclared variable (or something like that) is because very unfortunately for you there is a POSIX standard function called yn (see here) and the compiler thinks that is what you are referring to in while (yn=='y' || yn=='Y');. That explains the error message, the compiler is interpreting yn as a function pointer.\nIn your first program try changing the name of the variable (say yn -> yesno) and see what difference that makes to the error message.\n", "Code 1\n#include <iostream>\nusing namespace std;\n\nint main () {\n\n do {\n\n cout<<\"Hello world\"; \n char yn='y';\n\n } while (yn=='y' || yn=='Y');\n return 0;\n}\n\nIn this code, char yn='y' is inside the exclusive scope of the do-while loop. The object yn is destroyed when the compiler leaves the scope. Now, when the line while (yn=='y' || yn=='Y') is executed, the compiler doesn’t know what yn is, thus causing an error.\nCode 2\n#include <iostream>\nusing namespace std;\n\nint main () {\n\n char yn='y';\n\n do {\n\n cout<<\"Hello world\"; \n\n } while (yn=='y' || yn=='Y');\n return 0;\n}\n\nIn this code, char yn='y' is outside the exclusive scope of the do-while loop. It is in the scope of the main() function. Thus yn is not destroyed till the main() function finishes executing. Therefore, this code executes properly (it infinitely prints “Hello World”.\n", "The code is not the same. When you declare a variable between {...} it is in scope only between the braces and in this case not in scope in the while condition.\nThe error is confused by the fact that a different symbol yn (a function) happens to be in scope through indirect inclusion of <cmath> by <iostream> and the ill-advised use of using namespace std; moving the entire standard library into the global namespace.\nThat is what namespaces are for and it is always a bad idea to defeat an entire namespace. Here either use scope resolution std::cout or just declare the symbols you actually use:\nusing namespace std::cout ;\n\nIn this case, with only one instance if a std:: symbol, the using directive has saved you nothing whilst causing a great deal of confusion. Get out if that habit is my advice.\nIn the second instance you will still get an error, but it will make much more sense, telling you that yn is undefined at the while expression.\nAll that said, yn() is not a standard library function, but a POSIX extension; it may therefore not be in the std namespace. You may need also to compile with an option that excludes such extensions such as -ansi.\n" ]
[ 3, 1, 1 ]
[]
[]
[ "c++", "pointers" ]
stackoverflow_0074673662_c++_pointers.txt
Q: Get the two types of sets from a model with a condition I want to get the credits and debits from the model, with a condition. I have tried a lot of methods but failed to reach the answer. The model I am working on is: class SupplierTrans(models.Model): supplierName = models.ForeignKey(Supplier, on_delete = models.CASCADE) paid = models.BooleanField(default=True) amount = models.IntegerField() remarks = models.CharField(max_length = 200) created = models.DateTimeField(auto_now_add=True) update = models.DateTimeField(auto_now=True) class Meta: ordering = ['-update', '-created'] def __str__(self): return str(self.supplierName) @property def paid_purchsased(self): return 'Paid' if self.paid == True else "Purchased" I tried the following approach: sup = SupplierTrans.objects.annotate( credit = Sum('amount', paid=True), debit= Sum('amount', paid=False)).order_by('supplierName__name') but it's not working: the output is the sum of all amounts in the table, without filtering on the boolean values. The required values can be obtained with the following two queries: credit_amt = SupplierTrans.objects.filter(paid=True).aggregate(Sum('amount')) debit_amt = SupplierTrans.objects.filter(paid=False).aggregate(Sum('amount')) I want to get both values under the above condition. Is there any approach, or should I change the table structure? A: I understand that you want to get totals for paid vs unpaid records. The following query should do the job: SupplierTrans.objects.values('paid').annotate(total_amount=Sum('amount')) Related django docs - https://docs.djangoproject.com/en/4.0/topics/db/aggregation/#values
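Another option, closer in spirit to the annotate attempt in the question, is conditional aggregation with the filter argument (available since Django 2.0); a sketch assuming the model above:

from django.db.models import Q, Sum

totals = SupplierTrans.objects.aggregate(
    credit=Sum('amount', filter=Q(paid=True)),
    debit=Sum('amount', filter=Q(paid=False)),
)
# totals is a dict, e.g. {'credit': <sum of paid amounts>, 'debit': <sum of unpaid amounts>}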
Get the two types of sets from a model with a condition
I want to get the credits and debits from the model, with a condition. I have tried a lot of methods but failed to reach the answer. The model I am working on is: class SupplierTrans(models.Model): supplierName = models.ForeignKey(Supplier, on_delete = models.CASCADE) paid = models.BooleanField(default=True) amount = models.IntegerField() remarks = models.CharField(max_length = 200) created = models.DateTimeField(auto_now_add=True) update = models.DateTimeField(auto_now=True) class Meta: ordering = ['-update', '-created'] def __str__(self): return str(self.supplierName) @property def paid_purchsased(self): return 'Paid' if self.paid == True else "Purchased" I tried the following approach: sup = SupplierTrans.objects.annotate( credit = Sum('amount', paid=True), debit= Sum('amount', paid=False)).order_by('supplierName__name') but it's not working: the output is the sum of all amounts in the table, without filtering on the boolean values. The required values can be obtained with the following two queries: credit_amt = SupplierTrans.objects.filter(paid=True).aggregate(Sum('amount')) debit_amt = SupplierTrans.objects.filter(paid=False).aggregate(Sum('amount')) I want to get both values under the above condition. Is there any approach, or should I change the table structure?
[ "I understand that you want to get totals for paid vs unpaid records. The following query should do the job:\nSupplierTrans.objects.values('paid').annotate(total_amount=Sum('amount'))\n\nRelated django docs - https://docs.djangoproject.com/en/4.0/topics/db/aggregation/#values\n" ]
[ 0 ]
[]
[]
[ "django", "django_models" ]
stackoverflow_0074672931_django_django_models.txt
Q: Reflections doesn't find object subtypes I am trying to get all the classes in a package by using Reflections. When I use the code with a concrete class (A in this example) it works and prints the subclasses' information (B extends A, so it prints B's information), but when I use it with the Object class it doesn't work. How can I fix it? This code works: Reflections reflections = new Reflections(REFLECTION_PACKAGE); Set<Class<? extends A>> allClasses = reflections.getSubTypesOf(A.class); System.out.println("numberOfLCasses: " + allClasses.size()); System.out.println("classes: " + allClasses.toString()); This code doesn't: Reflections reflections = new Reflections(REFLECTION_PACKAGE); Set<Class<? extends Object>> allClasses = reflections.getSubTypesOf(Object.class); System.out.println("numberOfLCasses: " + allClasses.size()); System.out.println("classes: " + allClasses.toString());
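A short usage sketch tying the two answers together, for the pre-0.10 API shown in the question (the package name is a placeholder):

Reflections reflections = new Reflections("com.my.package", new SubTypesScanner(false));
for (Class<?> clazz : reflections.getSubTypesOf(Object.class)) {
    System.out.println(clazz.getName()); // prints every class in the package, A and B included
}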
Reflections doesn't find object subtypes
I am trying to get all the classes in a package by using Reflections. When I use the code with a concrete class (A in this example) it works and prints the subclasses' information (B extends A, so it prints B's information), but when I use it with the Object class it doesn't work. How can I fix it? This code works: Reflections reflections = new Reflections(REFLECTION_PACKAGE); Set<Class<? extends A>> allClasses = reflections.getSubTypesOf(A.class); System.out.println("numberOfLCasses: " + allClasses.size()); System.out.println("classes: " + allClasses.toString()); This code doesn't: Reflections reflections = new Reflections(REFLECTION_PACKAGE); Set<Class<? extends Object>> allClasses = reflections.getSubTypesOf(Object.class); System.out.println("numberOfLCasses: " + allClasses.size()); System.out.println("classes: " + allClasses.toString());
[ "This is documented behavior.\n\npublic SubTypesScanner()\ncreated new SubTypesScanner. will exclude direct Object subtypes\npublic SubTypesScanner(boolean excludeObjectClass)\ncreated new SubTypesScanner.\nParameters:\n excludeObjectClass - if false, include direct Object subtypes in results.\n\nThe below should return subtypes of Object.class\nReflections reflections = new Reflections(REFLECTION_PACKAGE,new SubTypesScanner(false));\nSet<Class<? extends Object>> allClasses = reflections.getSubTypesOf(Object.class);\n\nSystem.out.println(\"numberOfLCasses: \" + allClasses.size());\nSystem.out.println(\"classes: \" + allClasses.toString());\n\n", "Using the new 0.10.x API\nReflections reflections = new Reflections( \"com.my.package\", SubTypes.filterResultsBy( s -> true));\nSet<Class<?>> subTypes = reflections.get( SubTypes.of( Object.class).asClass());\nsubTypes.forEach( t -> {\n log.info( \"Class: {}\", t.getName());\n});\n\n" ]
[ 1, 0 ]
[]
[]
[ "java", "reflection" ]
stackoverflow_0059378578_java_reflection.txt
Q: Arduino Nano BLE 33 Sense and DS18B20 not working My issue is related to Arduino Nano Sense BLE and DS18B20 sensor (waterproof version) are not working together. What I tried so far. I performed a test on UNO to isolate possible powering and sensor fault. Test looked as follows: Connection of DS18B20 Black > GDN Red > 3V Yellow > D2 Last two connected via 2k2 resistor (2k2 instead 4k7 as I use 3V). Then, to exclude possible coding mistakes I used ready example: // Include the libraries we need #include <OneWire.h> #include <DallasTemperature.h> // Data wire is plugged into port 2 on the Arduino #define ONE_WIRE_BUS 2 // Setup a oneWire instance to communicate with any OneWire devices (not just Maxim/Dallas temperature ICs) OneWire oneWire(ONE_WIRE_BUS); // Pass our oneWire reference to Dallas Temperature. DallasTemperature sensors(&oneWire); /* * The setup function. We only start the sensors here */ void setup(void) { // start serial port Serial.begin(9600); Serial.println("Dallas Temperature IC Control Library Demo"); // Start up the library sensors.begin(); } /* * Main function, get and show the temperature */ void loop(void) { // call sensors.requestTemperatures() to issue a global temperature // request to all devices on the bus Serial.print("Requesting temperatures..."); sensors.requestTemperatures(); // Send the command to get temperatures Serial.println("DONE"); // After we got the temperatures, we can print them here. // We use the function ByIndex, and as an example get the temperature from the first sensor only. Serial.print("Temperature for the device 1 (index 0) is: "); Serial.println(sensors.getTempCByIndex(0)); delay(1000); } Result? works perfectly fine. Then I switched to Nano Sense BLE board. without disconnecting the sensor I just switched connection on board's end and attached GDN, 3.3V and D2. Result, -127. When trying to find DS18B20 address the result is none. I suspect board's pins order issue or Dallas/OneWire lib issue. I also tried other libs to handle DS18B20, none works, and I tried like 3-4 of them. I noticed there is few topics around internet regarding nano series and none are resolved. I found also IoT and Every has the same problem. A: The issue of getting -127 from DS18B20 is because sensor might be disconnected or didn't get 3.3V or 300mA current properly from board. So make sure your pins connected correctly. you can test it with you DMM by checking short between pin and wires. Sometime it might be a library issue. you need to install the latest version of onewire and Dallas temperature library.
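One way to separate a wiring or pull-up problem from a library problem is a bare OneWire bus scan; if this sketch prints no address, the board is not seeing the sensor electrically at all. Pin 2 and 9600 baud are assumptions matching the question:

#include <OneWire.h>

OneWire oneWire(2); // data wire on D2

void setup() {
  Serial.begin(9600);
  byte addr[8];
  while (oneWire.search(addr)) { // iterate over every device found on the bus
    Serial.print("Found device:");
    for (byte i = 0; i < 8; i++) {
      Serial.print(' ');
      Serial.print(addr[i], HEX);
    }
    Serial.println();
  }
  oneWire.reset_search();
  Serial.println("Scan done");
}

void loop() {}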
Arduino Nano BLE 33 Sense and DS18B20 not working
My issue is related to Arduino Nano Sense BLE and DS18B20 sensor (waterproof version) are not working together. What I tried so far. I performed a test on UNO to isolate possible powering and sensor fault. Test looked as follows: Connection of DS18B20 Black > GDN Red > 3V Yellow > D2 Last two connected via 2k2 resistor (2k2 instead 4k7 as I use 3V). Then, to exclude possible coding mistakes I used ready example: // Include the libraries we need #include <OneWire.h> #include <DallasTemperature.h> // Data wire is plugged into port 2 on the Arduino #define ONE_WIRE_BUS 2 // Setup a oneWire instance to communicate with any OneWire devices (not just Maxim/Dallas temperature ICs) OneWire oneWire(ONE_WIRE_BUS); // Pass our oneWire reference to Dallas Temperature. DallasTemperature sensors(&oneWire); /* * The setup function. We only start the sensors here */ void setup(void) { // start serial port Serial.begin(9600); Serial.println("Dallas Temperature IC Control Library Demo"); // Start up the library sensors.begin(); } /* * Main function, get and show the temperature */ void loop(void) { // call sensors.requestTemperatures() to issue a global temperature // request to all devices on the bus Serial.print("Requesting temperatures..."); sensors.requestTemperatures(); // Send the command to get temperatures Serial.println("DONE"); // After we got the temperatures, we can print them here. // We use the function ByIndex, and as an example get the temperature from the first sensor only. Serial.print("Temperature for the device 1 (index 0) is: "); Serial.println(sensors.getTempCByIndex(0)); delay(1000); } Result? works perfectly fine. Then I switched to Nano Sense BLE board. without disconnecting the sensor I just switched connection on board's end and attached GDN, 3.3V and D2. Result, -127. When trying to find DS18B20 address the result is none. I suspect board's pins order issue or Dallas/OneWire lib issue. I also tried other libs to handle DS18B20, none works, and I tried like 3-4 of them. I noticed there is few topics around internet regarding nano series and none are resolved. I found also IoT and Every has the same problem.
[ "The issue of getting -127 from DS18B20 is because sensor might be disconnected or didn't get 3.3V or 300mA current properly from board. So make sure your pins connected correctly. you can test it with you DMM by checking short between pin and wires. Sometime it might be a library issue. you need to install the latest version of onewire and Dallas temperature library.\n" ]
[ 0 ]
[]
[]
[ "arduino", "c++", "nano" ]
stackoverflow_0060885829_arduino_c++_nano.txt
Q: Problems making Facebook API share link work properly in PHP WordPress I am trying to add a Facebook share button to a WordPress website. It works, but it does not work properly the way I would like. When sharing to Facebook it does not share the article: it says only "page not found", the link does not work, and the message does not come along when sharing. This is the code I am working with (JavaScript inside my PHP/WordPress template): const facebook = document.querySelector('.facebook'); const twitter = document.querySelector('.twitter'); const telegram = document.querySelector('.telegram'); const msg = ('Hey, pls share this article on...'); const link = location.href; const facebookApi = `https://www.facebook.com/sharer/sharer.php?u=${link}. ${msg}`; const twitterApi = `https://twitter.com/intent/tweet?text=${msg}. ${link}`; const telegramApi = `https://t.me/share/url?url=${msg}&text=${link}`; Twitter and Telegram work perfectly with this code, but not Facebook. Does anybody know what is wrong here with the share link to Facebook? A: You are missing part of the code for FB (and for Twitter); try adding the second URL parameter &quote: const facebookApi = `https://www.facebook.com/sharer/sharer.php?u=${link}&quote=${msg}`; const twitterApi = `https://twitter.com/intent/tweet?text=${msg}&url=${link}`;
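For context on why Facebook in particular answers "page not found": appending ". ${msg}" to the u parameter corrupts the shared URL, and none of the values are percent-encoded. A minimal sketch using the built-in encodeURIComponent (variable names as in the question; the Telegram url/text order is also swapped back so the link travels as the url):

const link = encodeURIComponent(location.href);
const msg = encodeURIComponent('Hey, pls share this article on...');

// Facebook only accepts the page URL in `u`; any extra text belongs in `quote`
const facebookApi = `https://www.facebook.com/sharer/sharer.php?u=${link}&quote=${msg}`;
const twitterApi = `https://twitter.com/intent/tweet?text=${msg}&url=${link}`;
const telegramApi = `https://t.me/share/url?url=${link}&text=${msg}`;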
Problems making Facebook API share link work properly in PHP WordPress
I am trying to add a Facebook share button to a WordPress website. It works, but it does not work properly the way I would like. When sharing to Facebook it does not share the article: it says only "page not found", the link does not work, and the message does not come along when sharing. This is the code I am working with (JavaScript inside my PHP/WordPress template): const facebook = document.querySelector('.facebook'); const twitter = document.querySelector('.twitter'); const telegram = document.querySelector('.telegram'); const msg = ('Hey, pls share this article on...'); const link = location.href; const facebookApi = `https://www.facebook.com/sharer/sharer.php?u=${link}. ${msg}`; const twitterApi = `https://twitter.com/intent/tweet?text=${msg}. ${link}`; const telegramApi = `https://t.me/share/url?url=${msg}&text=${link}`; Twitter and Telegram work perfectly with this code, but not Facebook. Does anybody know what is wrong here with the share link to Facebook?
[ "You are missing part of the code for FB (and for Twitter), try adding the second URL parameter &quote:\nconst facebookApi = `https://www.facebook.com/sharer/sharer.php?u=${link}&quote=${msg}`;\n\nconst twitterApi = `https://twitter.com/intent/tweet?text=${msg}&url=${link}`;\n\n" ]
[ 0 ]
[]
[]
[ "api", "facebook", "php", "share_button", "wordpress" ]
stackoverflow_0074668878_api_facebook_php_share_button_wordpress.txt
Q: Deleting all documents in Firestore collection I'm looking for a way to clear an entire collection. I saw that there is a batch update option, but that would require me to know all of the document IDs in the collection. I'm looking for a way to simply delete every document in the collection. Edit: Answer below is correct, I used the following: func delete(collection: CollectionReference, batchSize: Int = 100) { // Limit query to avoid out-of-memory errors on large collections. // When deleting a collection guaranteed to fit in memory, // batching can be avoided entirely. collection.limit(to: batchSize).getDocuments { (docset, error) in // An error occurred. let docset = docset let batch = collection.firestore.batch() docset?.documents.forEach { batch.deleteDocument($0.reference) } batch.commit {_ in self.delete(collection: collection, batchSize: batchSize) } } } A: The following JavaScript function will delete any collection: deleteCollection(path) { firebase.firestore().collection(path).listDocuments().then(val => { val.map((val) => { val.delete() }) }) } This works by iterating through every document and deleting each. Alternatively, you can make use of Firestore's batch commands and delete all at once using the following function: deleteCollection(path) { // Get a new write batch var batch = firebase.firestore().batch() firebase.firestore().collection(path).listDocuments().then(val => { val.map((val) => { batch.delete(val) }) batch.commit() }) } A: There is now an option in the firebase CLI to delete an entire Firestore database: firebase firestore:delete --all-collections A: There is no API to delete an entire collection (or its contents) in one go. From the Firestore documentation: To delete an entire collection or subcollection in Cloud Firestore, retrieve all the documents within the collection or subcollection and delete them. If you have larger collections, you may want to delete the documents in smaller batches to avoid out-of-memory errors. Repeat the process until you've deleted the entire collection or subcollection. There is even a Swift sample in that documentation, so I recommend you try it. The Firebase CLI allows you to delete an entire collection with a single command, but it just calls the API to delete all documents in that collection in batches. If this suits your needs, I recommend you check out the (sparse) documentation for the firestore:delete command. A: 2020 updated answer You can do it with Node.js (notice it uses process, a well-known Node object that is not available in web JavaScript). Look at this snippet on GitHub, hosted by Firebase. I always had that page pinned to my browser ;) // [START delete_collection] async function deleteCollection(db, collectionPath, batchSize) { const collectionRef = db.collection(collectionPath); const query = collectionRef.orderBy('__name__').limit(batchSize); return new Promise((resolve, reject) => { deleteQueryBatch(db, query, resolve).catch(reject); }); } async function deleteQueryBatch(db, query, resolve) { const snapshot = await query.get(); const batchSize = snapshot.size; if (batchSize === 0) { // When there are no documents left, we are done resolve(); return; } // Delete documents in a batch const batch = db.batch(); snapshot.docs.forEach((doc) => { batch.delete(doc.ref); }); await batch.commit(); // Recurse on the next process tick, to avoid // exploding the stack.
process.nextTick(() => { deleteQueryBatch(db, query, resolve); }); } // [END delete_collection] A: The cleanest way I have found to delete all documents. The only time I would use this function is when using the emulator, and you can simply paste the function into the console: // Paste this in: function clearCollection(path) { const ref = firestore.collection(path) ref.onSnapshot((snapshot) => { snapshot.docs.forEach((doc) => { ref.doc(doc.id).delete() }) }) } // Use it like this: clearCollection('layers') If you find yourself needing this code repeatedly, save it as a snippet in Chrome and then you can have easy access to it and won't have to keep pasting the code block into the console. You must run the snippet before it is accessible from the code block. Documentation A: Versions from v4.10.0 can now bulk delete using this method. await firestore.recursiveDelete(firestore.collection('foo')); It uses BulkWriter to perform the deletes. A: The answer by THEODORE above worked for me. db.collection("collectionName") .get() .then(res => { res.forEach(element => { element.ref.delete(); }); }); I don't have the reputation to reply directly to his comment, but in addition to his solution, if you need to delete a sub-collection using this method, just do this. db.collection(`collectionName/docID/subcollection`) //make sure to use backticks .get() .then(res => { res.forEach(element => { element.ref.delete(); }); }); If the docID is auto-generated you can use the method below, which is what I was using to delete notifications for a user when they click the clear-all button. db.collection(`collectionName/${variable}/subcollection`) .get() .then((res) => { res.forEach((element) => { element.ref.delete(); }); }); The variable can be whatever you're setting the docID with; in my instance it was the user.uid A: Tested in VueJS import db from '@/firebase/init' let ref = db.collection('YOUR_COLLECTION_NAME') ref.onSnapshot(snapshot => { snapshot.docs.forEach(doc => { ref.doc(doc.id).delete() .catch(error => { console.log(error) }) }) }) A: You have to get all the documents, then use a batch to delete them in bulk. P.S. I prefer try...catch syntax let deleteInBatch = async (query, size = 100) => { try{ let batch = firestore().batch(); //get documents let values = await query.get(); if(values.size>0){ values.forEach(value=> { batch.delete(value.ref); }) //Delete the documents in bulk batch.commit(); if(values.size>0){ //Recursively call the function again to finish //deleting the rest of documents deleteInBatch(query,size); } }else{ //exit function return; } }catch(err){ throw err; } } A: db.collection("collectionName") .get() .then(res => { res.forEach(element => { element.ref.delete(); }); }); A: This is the approach that I took. While it works fine, I'm not sure what other hidden issues it might have. function deleteCollection(collectionPath, batchSize=400){ let deletePromise = appFirestore.collection(collectionPath).listDocuments() .then( function(docs) { let batch = appFirestore.batch(); if(docs.length <= batchSize){ docs.map( (doc) => { batch.delete(doc); }); batch.commit(); return true; } else{ for (let i = 0; i < batchSize; i++){ batch.delete(docs[i]); } batch.commit(); return false; } }) .then( function(batchStatus) { return batchStatus ?
true : deleteCollection(collectionPath, batchSize); }) .catch( function(error) { console.error(`Error clearing collections (${error})`); return false; }); return deletePromise; } A: listDocuments works only in firebase-admin: async function deleteCollection(path: string): Promise<FirebaseFirestore.WriteResult[]> { const batch = firestore.batch(); const documentsInCollection = await firestore.collection(path).listDocuments(); documentsInCollection.map((doc) => batch.delete(doc)); return batch.commit(); }; A: There is not a simple way to do this through the API. To delete multiple documents at once efficiently: Perform a one-time read of the documents in the collection. You can use a where clause to limit which documents you retrieve. Create a write batch. Queue all of the retrieved documents up for deleting in the batch. Commit the batch to start deleting documents. Add appropriate error handlers to listen for errors with reading and deleting documents. Shown below is an example of how to do this with Android Java. public void deleteAllMyThings() { db.collection("userThings") .whereEqualTo("userId", userId) .get() .addOnSuccessListener((querySnapshot) -> { WriteBatch batch = db.batch(); for (QueryDocumentSnapshot doc : querySnapshot) { batch.delete(doc.getReference()); } batch .commit() .addOnSuccessListener((result) -> { Log.i(LOG_TAG, "All my things have been deleted."); }) .addOnFailureListener((error) -> { Log.e(LOG_TAG, "Failed to delete all my things.", error); }); }) .addOnFailureListener((error) -> { Log.e(LOG_TAG, "Failed to get all my things.", error); }); } A: We can do it by using a batch delete: async function deleteQueryBatch(db, query, resolve) { const snapshot = await query.get(); const batchSize = snapshot.size; if (batchSize === 0) { // When there are no documents left, we are done resolve(); return; } // Delete documents in a batch const batch = db.batch(); snapshot.docs.forEach((doc) => { batch.delete(doc.ref); }); await batch.commit(); // Recurse on the next process tick, to avoid // exploding the stack. process.nextTick(() => { deleteQueryBatch(db, query, resolve); }); } To delete an entire collection or subcollection in Cloud Firestore, retrieve all the documents within the collection or subcollection and delete them. A: If you don't have any large collections, this should work to delete all the collections: const deleteAllCollections = async () => { const db = admin.firestore(); const cols = await db.listCollections(); for (const col of cols) { const query = await db.collection(col.id).get(); for (const doc of query.docs) { console.log(`Deleting ${doc.id} from collection ${col.id}...`); await db.collection(col.id).doc(doc.id).delete(); } } }; Otherwise, definitely follow the other answers or the docs on: https://firebase.google.com/docs/firestore/manage-data/delete-data#collections https://firebase.google.com/docs/firestore/manage-data/delete-data#delete_data_with_the_firebase_cli A: const deleteCollection = async ( collectionRef: CollectionReference<DocumentData> ) => { const data = await getDocs(collectionRef); data.docs.map(async (document) => { await deleteDoc(doc(collectionRef, document.id)); }); };
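As a footnote to the last (v9 modular) answer: the per-document deleteDoc calls can be folded into a single batched write. This is a hedged sketch, not official guidance beyond the calls shown; clearSmallCollection is a made-up name, db is assumed to be your initialized Firestore instance, and a single batch is capped at 500 writes, so it only suits collections known to be small:

import { collection, getDocs, writeBatch } from "firebase/firestore";

async function clearSmallCollection(db, path) {
  // One read for the whole collection, then one round trip for all deletes.
  const snapshot = await getDocs(collection(db, path));
  const batch = writeBatch(db);
  snapshot.docs.forEach((d) => batch.delete(d.ref));
  await batch.commit();
}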
Deleting all documents in Firestore collection
I'm looking for a way to clear an entire collection. I saw that there is a batch update option, but that would require me to know all of the document IDs in the collection. I'm looking for a way to simply delete every document in the collection. Edit: Answer below is correct, I used the following: func delete(collection: CollectionReference, batchSize: Int = 100) { // Limit query to avoid out-of-memory errors on large collections. // When deleting a collection guaranteed to fit in memory, // batching can be avoided entirely. collection.limit(to: batchSize).getDocuments { (docset, error) in // An error occurred. let docset = docset let batch = collection.firestore.batch() docset?.documents.forEach { batch.deleteDocument($0.reference) } batch.commit {_ in self.delete(collection: collection, batchSize: batchSize) } } }
[ "The following javascript function will delete any collection:\ndeleteCollection(path) {\n firebase.firestore().collection(path).listDocuments().then(val => {\n val.map((val) => {\n val.delete()\n })\n })\n}\n\nThis works by iterating through every document and deleting each. \nAlternatively, you can make use of Firestore's batch commands and delete all at once using the following function:\ndeleteCollection(path) {\n // Get a new write batch\n var batch = firebase.firestore().batch()\n\n firebase.firestore().collection(path).listDocuments().then(val => {\n val.map((val) => {\n batch.delete(val)\n })\n\n batch.commit()\n })\n}\n\n", "There is now an option in the firebase CLI to delete an entire firestore database:\nfirebase firestore:delete --all-collections\n\n", "There is no API to delete an entire collection (or its contents) in one go.\nFrom the Firestore documentation:\n\nTo delete an entire collection or subcollection in Cloud Firestore, retrieve all the documents within the collection or subcollection and delete them. If you have larger collections, you may want to delete the documents in smaller batches to avoid out-of-memory errors. Repeat the process until you've deleted the entire collection or subcollection.\n\nThere is even a Swift sample in that documentation, so I recommend you try it.\nThe Firebase CLI allows you to delete an entire collection with a single command, but it just calls the API to delete all documents in that collection in batches. If this suits your needs, I recommend you check out the (sparse) documentation for the firestore:delete command.\n", "2020 updated answer\nYou can do it with Node JS - (notice they used process which is a famous object in node not available in Web javascript)\nLook at this snippet on Github hosted by firebase. I always had that page pinned to my browser ;)\n// [START delete_collection]\n\nasync function deleteCollection(db, collectionPath, batchSize) {\n const collectionRef = db.collection(collectionPath);\n const query = collectionRef.orderBy('__name__').limit(batchSize);\n\n return new Promise((resolve, reject) => {\n deleteQueryBatch(db, query, resolve).catch(reject);\n });\n}\n\nasync function deleteQueryBatch(db, query, resolve) {\n const snapshot = await query.get();\n\n const batchSize = snapshot.size;\n if (batchSize === 0) {\n // When there are no documents left, we are done\n resolve();\n return;\n }\n\n // Delete documents in a batch\n const batch = db.batch();\n snapshot.docs.forEach((doc) => {\n batch.delete(doc.ref);\n });\n await batch.commit();\n\n // Recurse on the next process tick, to avoid\n // exploding the stack.\n process.nextTick(() => {\n deleteQueryBatch(db, query, resolve);\n });\n}\n\n// [END delete_collection]\n\n", "The cleanest way I have found to delete all documents. The only time I would use this function is when using the emulator and you can simply paste the function into the console:\n// Paste this in:\nfunction clearCollection(path) {\n const ref = firestore.collection(path)\n ref.onSnapshot((snapshot) => {\n snapshot.docs.forEach((doc) => {\n ref.doc(doc.id).delete()\n })\n })\n}\n// Use it like this:\nclearCollection('layers')\n\nIf you find yourself needing this code repeatedly save it as a snippet in Chrome and then you can have easy access to it and won't have to keep pasting the code block into the console. You must run the snippet before it is accessible from the code block. 
Documentation\n", "versions from v4.10.0\ncan now bulk delete using this method.\nawait firestore.recursiveDelete(firestore.collection('foo'));\nIt uses BulkWriter to perform the deletes.\n", "this worked for me by THEODORE above.\ndb.collection(\"collectionName\")\n .get()\n .then(res => {\n res.forEach(element => {\n element.ref.delete();\n });\n });\n\ni dont have the reputaiton to reply directly to his comment. but in addition to his solution if you need to delete a sub-collection using this method just do this.\ndb.collection(`collectionName/docID/subcollection`) //make sure to use backtics\n .get()\n .then(res => {\n res.forEach(element => {\n element.ref.delete();\n });\n });\n\nif the docID is auto generated you can use this method below. which is what i was using it for to delete notificaitons for a user when they click the clear all button.\ndb.collection(`collectionName/${variable}/subcollection`) \n .get()\n .then((res) => {\n res.forEach((element) => {\n element.ref.delete();\n });\n });\n\nthe variable can be whatever you're setting the docID with. in my instance it was the user.uid\n", "Tested in VueJS\nimport db from '@/firebase/init' \n\nlet ref = db.collection('YOUR_COLLECTION_NAME')\n\ndb.collection(path).onSnapshot(snapshot => {\n snapshot.docs.forEach(doc => {\n ref.doc(doc.id).delete()\n .catch(error => {\n console.log(error)\n })\n })\n})\n\n\n", "You have to get all the documents then use batch to delete them in bulk\nP.S. i prefer try...catch syntax\n let deleteInBatch = async (query, size = 100) => {\n try{\n\n let batch = firestore().batch();\n\n //get documents\n let values = await query.get();\n if(values.size>0){\n values.foreach(value=> {\n batch.delete(value.ref);\n })\n\n //Delete the documents in bulk\n batch.commit();\n if(values.size>0){\n //Recusively call the function again to finish\n //deleting the rest of documents\n deleteInBatch(query,size);\n }\n }else{\n //exist function\n return;\n }\n }catch(err){\n throw err;\n }\n}\n\n", "db.collection(\"collectionName\")\n .get()\n .then(res => {\n res.forEach(element => {\n element.ref.delete();\n });\n });\n\n", "This is the approach that I took. While it works fine, I'm not sure what other hidden issues it might have.\nfunction deleteCollection(collectionPath, batchSize=400){\n \n let deletePromise = appFirestore.collection(collectionPath).listDocuments()\n .then( function(docs) {\n\n let batch = appFirestore.batch();\n\n if(docs.length <= batchSize){\n docs.map( (doc) => {\n batch.delete(doc);\n });\n batch.commit();\n return true;\n }\n else{\n for (let i = 0; i < batchSize; i++){\n batch.delete(docs[i]);\n }\n batch.commit();\n return false;\n }\n })\n .then( function(batchStatus) {\n return batchStatus ? 
true : deleteCollection(collectionPath, batchSize, debug);\n })\n .catch( function(error) {\n console.error(`Error clearing collections (${error})`);\n return false;\n });\n\n return deletePromise;\n}\n\n", "listDocuments works only in firebase-admin:\nasync function deleteCollection(path: string): Promise<FirebaseFirestore.WriteResult[]> {\n\nconst batch = firestore.batch();\nconst documentsInCollection = await firestore.collection(path).listDocuments();\ndocumentsInCollection.map((doc) => batch.delete(doc));\n\nreturn batch.commit();\n};\n\n", "There is not a simple way to do this through the API.\nTo delete multiple documents at once efficiently:\n\nPerform a one-time read of the documents in the collection.\nYou can use a where clause to limit which documents you retrieve.\nCreate a write batch.\nQueue all of the retrieved documents up for deleting in the batch.\nCommit the batch to start deleting documents.\nAdd appropriate error handlers to listen for errors with reading and deleting documents.\n\nShown below is an example of how to do this with Android Java.\npublic void deleteAllMyThings() {\n db.collection(\"userThings\")\n .whereEqualTo(\"userId\", userId)\n .get()\n .addOnSuccessListener((querySnapshot) -> {\n WriteBatch batch = db.batch();\n for (QueryDocumentSnapshot doc : querySnapshot) {\n batch.delete(doc.getReference());\n }\n\n batch\n .commit()\n .addOnSuccessListener((result) -> {\n Log.i(LOG_TAG, \"All my things have been deleted.\");\n })\n .addOnFailureListener((error) -> {\n Log.e(LOG_TAG, \"Failed to delete all my things.\", error);\n });\n })\n .addOnFailureListener((error) -> {\n Log.e(LOG_TAG, \"Failed to get all my things.\", error);\n });\n}\n\n", "we can be done it by using batch delete\n\n\nasync function deleteQueryBatch(db, query, resolve) {\n const snapshot = await query.get();\n\n const batchSize = snapshot.size;\n if (batchSize === 0) {\n // When there are no documents left, we are done\n resolve();\n return;\n }\n\n // Delete documents in a batch\n const batch = db.batch();\n snapshot.docs.forEach((doc) => {\n batch.delete(doc.ref);\n });\n await batch.commit();\n\n // Recurse on the next process tick, to avoid\n // exploding the stack.\n process.nextTick(() => {\n deleteQueryBatch(db, query, resolve);\n });\n}\n\n\n\nTo delete an entire collection or subcollection in Cloud Firestore, retrieve all the documents within the collection or subcollection and delete them.\n", "If you don't have any large collections, this should work to delete all the collections:\nconst deleteAllCollections = async () => {\n const db = admin.firestore();\n\n const cols = await db.listCollections();\n for (const col of cols) {\n const query = await db.collection(col.id).get();\n for (const doc of query.docs) {\n console.log(`Deleting ${doc.id} from collection ${col.id}...`);\n await db.collection(col.id).doc(doc.id).delete();\n }\n }\n\n};\n\nOtherwise, definitely follow the other answers or the docs on:\n\nhttps://firebase.google.com/docs/firestore/manage-data/delete-data#collections\nhttps://firebase.google.com/docs/firestore/manage-data/delete-data#delete_data_with_the_firebase_cli\n\n", "const deleteCollection = async (\n collectionRef: CollectionReference<DocumentData>\n) => {\n const data = await getDocs(collectionRef);\n\n data.docs.map(async (document) => {\n await deleteDoc(doc(collectionRef, document.id));\n });\n};\n\n" ]
[ 42, 41, 23, 14, 7, 4, 4, 2, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "google_cloud_firestore", "swift" ]
stackoverflow_0047860812_google_cloud_firestore_swift.txt
Q: Verification with a link in AWS Cognito to redirect using a confirmation URL (signup confirm, forgot password confirm) without using Lambda After signup or a forgot-password request the user should receive an email, and if the user clicks the link in it, it should redirect to the confirmation page of the website. In my ReactJS website I tried signupConfirm, but it sends an email with only a verification code. I expected a verification link inside the email that redirects and auto-populates the verification code, instead of entering it manually. A: Finally, AWS Cognito updated this feature. Earlier I was tired of going through all the answers on StackOverflow and other platforms where everyone was talking about Lambda-only solutions, but now you don't need to use Lambda. Here you can send a customized email with the verification link; AWS Cognito docs - https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-email-verification-message-customization.html
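The same user-pool setting can also be flipped without the console. This is a sketch using the AWS CLI; the pool id, subject, and message body are placeholders, link-based verification additionally requires a Cognito hosted domain on the pool, and the {##...##} token is where Cognito injects the clickable link:

aws cognito-idp update-user-pool \
  --user-pool-id <your-pool-id> \
  --auto-verified-attributes email \
  --verification-message-template '{"DefaultEmailOption":"CONFIRM_WITH_LINK","EmailSubjectByLink":"Verify your account","EmailMessageByLink":"Please click the link below to verify your email address. {##Verify Email##}"}'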
Verification with a link in AWS Cognito to redirect using a confirmation URL (signup confirm, forgot password confirm) without using Lambda
After signup or a forgot-password request the user should receive an email, and if the user clicks the link in it, it should redirect to the confirmation page of the website. In my ReactJS website I tried signupConfirm, but it sends an email with only a verification code. I expected a verification link inside the email that redirects and auto-populates the verification code, instead of entering it manually.
[ "Finally, aws-cognito Updated this feature\nEarlier I was tired of going through all the answers asked on StackOverflow and other platforms where everyone was talking about Lambda solutions only but now you don't need to use Lambda.\nhere you can send a customized email with the verification link AWS-Cognito docs - https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-email-verification-message-customization.html\n" ]
[ 0 ]
[]
[]
[ "amazon_cognito", "amazon_web_services", "reactjs" ]
stackoverflow_0074673914_amazon_cognito_amazon_web_services_reactjs.txt
Q: Can't assign `std::str::Chars` to a variable of type `I: Iterator` I'm trying to create a custom iterator over a string's characters, with the difference that it "splits" the string into two iterators. Which one is then used in its own next() depends on custom logic. struct WrappingIter<I> { iter_1: I, iter_2: I, // ... } impl<I> WrappingIter<I> where I: Iterator, { pub fn new(string: &str, start_idx: usize) -> Self { Self { iter_1: string[start_idx..].chars(), iter_2: string[..start_idx].chars(), // ... } } } That gives me this error (for both assignments): error[E0308]: mismatched types --> src/lib.rs:38:25 | 29 | impl<I> WrappingIter<I> | - this type parameter ... 38 | iter_1: string[start_idx..].chars(), | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected type parameter `I`, found struct `Chars` | = note: expected type parameter `I` found struct `Chars<'_>` I can't tell why. std::str::Chars implements the Iterator trait, so my logic was that I could directly assign it to WrappingIter's members. Need I perhaps perform some sort of cast? A: This line of code can be roughly translated to "For any type I that implements the Iterator trait, define these methods in the following impl block for WrappingIter<I>" in English. impl<I> WrappingIter<I> where I: Iterator { /* ... */ } Notice that the type I can be any type that implements Iterator, but in the new method, an iterator of type std::str::Chars (which is not always equal to the type I) is assigned to it, which is why the code snippet fails to compile. Here's a solution to make it compile by manually implementing the trait separately for each iterator type. For std::str::Chars, you have to annotate the lifetime explicitly, since the iterator is borrowed from a slice on the argument string. For an alternative solution without generics (which may be more practical), see @Finomnis's answer. struct WrappingIter<I> { iter_1: I, iter_2: I, } impl<'i> WrappingIter<std::str::Chars<'i>> { pub fn new(string: &'i str, start_idx: usize) -> Self { Self { iter_1: string[start_idx..].chars(), iter_2: string[..start_idx].chars(), } } } // for demo purpose, try with another Iterator type impl WrappingIter<std::ops::Range<i64>> { pub fn new(range: std::ops::Range<i64>) -> Self { Self { iter_1: range.clone(), iter_2: range.clone(), } } } A: In your use case, using a generic is the wrong approach. Generics are meant for the user of your library to specify a type. In case of a wrapping iterator, why should the user of this iterator have to specify a type? The iterator will always be Chars. What you do need, however, are lifetime annotations, because Chars borrows from your input string. use std::str::Chars; struct WrappingIter<'a> { iter_1: Chars<'a>, iter_2: Chars<'a>, } impl<'a> WrappingIter<'a> { pub fn new(string: &'a str, start_idx: usize) -> Self { Self { iter_1: string[start_idx..].chars(), iter_2: string[..start_idx].chars(), } } } impl Iterator for WrappingIter<'_> { type Item = char; fn next(&mut self) -> Option<Self::Item> { self.iter_1.next().or_else(|| self.iter_2.next()) } } fn main() { let s = "abcdefgh"; let s2 = WrappingIter::new(s, 3).collect::<String>(); println!("{}", s2); } defghabc However, if you want your WrappingIter to work for more than just Chars, then you do need the generic. Again, generics are meant for the user of your function to be specified, and in this case the user will specify the type of iterator he passes into this function.
use std::iter::{Skip, Take}; struct WrappingIter<T> { iter_1: Skip<T>, iter_2: Take<T>, } impl<T> WrappingIter<T> where T: Iterator + Clone, { pub fn new(iter_in: T, start_idx: usize) -> Self { Self { iter_1: iter_in.clone().skip(start_idx), iter_2: iter_in.take(start_idx), } } } impl<T> Iterator for WrappingIter<T> where T: Iterator, { type Item = T::Item; fn next(&mut self) -> Option<Self::Item> { self.iter_1.next().or_else(|| self.iter_2.next()) } } fn main() { let s = "abcdefgh"; let s2 = WrappingIter::new(s.chars(), 3).collect::<String>(); println!("{}", s2); } defghabc As a little excursion, you can then specify an iterator extension for all iterators: use std::iter::{Skip, Take}; struct WrappingIter<T> { iter_1: Skip<T>, iter_2: Take<T>, } impl<T> WrappingIter<T> where T: Iterator + Clone, { pub fn new(iter_in: T, start_idx: usize) -> Self { Self { iter_1: iter_in.clone().skip(start_idx), iter_2: iter_in.take(start_idx), } } } impl<T> Iterator for WrappingIter<T> where T: Iterator, { type Item = T::Item; fn next(&mut self) -> Option<Self::Item> { self.iter_1.next().or_else(|| self.iter_2.next()) } } trait IteratorWrapExt where Self: Sized, { fn wrap(self, start_idx: usize) -> WrappingIter<Self>; } impl<T> IteratorWrapExt for T where T: Iterator + Clone, { fn wrap(self, start_idx: usize) -> WrappingIter<Self> { WrappingIter::new(self, start_idx) } } fn main() { let s = "abcdefgh"; let s2 = s.chars().wrap(3).collect::<String>(); println!("{}", s2); let v = [1, 2, 3, 4, 5]; let v2 = v.iter().wrap(3).collect::<Vec<_>>(); println!("{:?}", v2); // It even works with ranges let r = (10..15).wrap(2).collect::<Vec<_>>(); println!("{:?}", r); } defghabc [4, 5, 1, 2, 3] [12, 13, 14, 10, 11] Another quick info: Wrapping is almost free for every iterator that can jump without cost (like ranges or array iters), but not for strings. Strings have variably-sized elements (a UTF-8 char can have 1 to 4 bytes), and so wrapping requires iterating over the wrapped elements twice.
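Side note to the answers above: when no extra per-call logic is needed inside next(), the same wrapping behavior is already expressible with the standard library's Iterator::chain, no custom struct required. A minimal sketch (ASCII input assumed, since the byte-index slicing must land on char boundaries):

fn main() {
    let s = "abcdefgh";
    let start_idx = 3;
    // tail first, then the wrapped-around head
    let wrapped: String = s[start_idx..].chars().chain(s[..start_idx].chars()).collect();
    println!("{}", wrapped); // defghabc
}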
Can't assign `std::str::Chars` to a variable of type `I: Iterator`
I'm trying to create a custom iterator over a string's characters, with the difference that it "splits" the string into two iterators. Which one is then used in its own next() depends on custom logic. struct WrappingIter<I> { iter_1: I, iter_2: I, // ... } impl<I> WrappingIter<I> where I: Iterator, { pub fn new(string: &str, start_idx: usize) -> Self { Self { iter_1: string[start_idx..].chars(), iter_2: string[..start_idx].chars(), // ... } } } That gives me this error (for both assignments): 1 error[E0308]: mismatched types --> src/lib.rs:38:25 | 29 | impl<I> WrappingIter<I> | - this type parameter ... 38 | iter_1: string[start_idx..].chars(), | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected type parameter `I`, found struct `Chars` | = note: expected type parameter `I` found struct `Chars<'_>` I can't tell why. std::str::Chars implements the Iterator trait, so my logic was that I could directly assign it to WrappingIter's members. Need I perhaps perform some sort of cast?
[ "This line of code can be roughly translated to \"For any type I that implements the Iterator trait, define these methods in the following impl block for WrappingIter<I>\" in English.\nimpl<I> WrappingIter<I> where I: Iterator { /* ... */ }\n\nNotice that the type I can be any type that implements Iterator, but in the new method, an iterator of type std::str::Chars (which is not always equal to the type I) is assigned to it, which is why the code snippet fails to compile.\n\nHere's a solution to make it compile by manually implementing the trait separately for each iterator type. For std::str::Chars, you have to annotate the lifetime explicitly, since the iterator is borrowed from a slice on the argument string. For an alternative solution without generics (and may be more practical), see @Finomnis's answer.\nstruct WrappingIter<I> {\n iter_1: I,\n iter_2: I,\n}\n\nimpl<'i> WrappingIter<std::str::Chars<'i>> {\n pub fn new(string: &'i str, start_idx: usize) -> Self {\n Self {\n iter_1: string[start_idx..].chars(),\n iter_2: string[start_idx..].chars(),\n }\n }\n}\n\n// for demo purpose, try with another Iterator type\nimpl WrappingIter<std::ops::Range<i64>> {\n pub fn new(range: std::ops::Range<i64>) -> Self {\n Self {\n iter_1: range.clone(),\n iter_2: range.clone(),\n }\n }\n}\n\n", "In your usecase, using a generic is the wrong approach.\nIterators are meant for the user of your library, to specify a type. In case of a wrapping iterator, why should the user of this iterator have to specify a type? The iterator will always be Chars.\nWhat you do need, however, are lifetime annotations, because Chars borrow from your input string.\nuse std::str::Chars;\n\nstruct WrappingIter<'a> {\n iter_1: Chars<'a>,\n iter_2: Chars<'a>,\n}\n\nimpl<'a> WrappingIter<'a> {\n pub fn new(string: &'a str, start_idx: usize) -> Self {\n Self {\n iter_1: string[start_idx..].chars(),\n iter_2: string[..start_idx].chars(),\n }\n }\n}\n\nimpl Iterator for WrappingIter<'_> {\n type Item = char;\n\n fn next(&mut self) -> Option<Self::Item> {\n self.iter_1.next().or_else(|| self.iter_2.next())\n }\n}\n\nfn main() {\n let s = \"abcdefgh\";\n\n let s2 = WrappingIter::new(s, 3).collect::<String>();\n println!(\"{}\", s2);\n}\n\ndefghabc\n\n\nHowever, if you want your WrappingIter to work for more than just Chars, then you do need the generic.\nAgain, generics are meant for the user of your function to be specified, and in this case the user will specify the type of iterator he passes into this function.\nuse std::iter::{Skip, Take};\n\nstruct WrappingIter<T> {\n iter_1: Skip<T>,\n iter_2: Take<T>,\n}\n\nimpl<T> WrappingIter<T>\nwhere\n T: Iterator + Clone,\n{\n pub fn new(iter_in: T, start_idx: usize) -> Self {\n Self {\n iter_1: iter_in.clone().skip(start_idx),\n iter_2: iter_in.take(start_idx),\n }\n }\n}\n\nimpl<T> Iterator for WrappingIter<T>\nwhere\n T: Iterator,\n{\n type Item = T::Item;\n\n fn next(&mut self) -> Option<Self::Item> {\n self.iter_1.next().or_else(|| self.iter_2.next())\n }\n}\n\nfn main() {\n let s = \"abcdefgh\";\n\n let s2 = WrappingIter::new(s.chars(), 3).collect::<String>();\n println!(\"{}\", s2);\n}\n\ndefghabc\n\n\nAs a little excursion, you can then specify an iterator extension for all iterators:\nuse std::iter::{Skip, Take};\n\nstruct WrappingIter<T> {\n iter_1: Skip<T>,\n iter_2: Take<T>,\n}\n\nimpl<T> WrappingIter<T>\nwhere\n T: Iterator + Clone,\n{\n pub fn new(iter_in: T, start_idx: usize) -> Self {\n Self {\n iter_1: iter_in.clone().skip(start_idx),\n iter_2: 
iter_in.take(start_idx),\n }\n }\n}\n\nimpl<T> Iterator for WrappingIter<T>\nwhere\n T: Iterator,\n{\n type Item = T::Item;\n\n fn next(&mut self) -> Option<Self::Item> {\n self.iter_1.next().or_else(|| self.iter_2.next())\n }\n}\n\ntrait IteratorWrapExt\nwhere\n Self: Sized,\n{\n fn wrap(self, start_idx: usize) -> WrappingIter<Self>;\n}\n\nimpl<T> IteratorWrapExt for T\nwhere\n T: Iterator + Clone,\n{\n fn wrap(self, start_idx: usize) -> WrappingIter<Self> {\n WrappingIter::new(self, start_idx)\n }\n}\n\nfn main() {\n let s = \"abcdefgh\";\n\n let s2 = s.chars().wrap(3).collect::<String>();\n println!(\"{}\", s2);\n\n let v = [1, 2, 3, 4, 5];\n let v2 = v.iter().wrap(3).collect::<Vec<_>>();\n println!(\"{:?}\", v2);\n\n // It even works with ranges\n let r = (10..15).wrap(2).collect::<Vec<_>>();\n println!(\"{:?}\", r);\n}\n\ndefghabc\n[4, 5, 1, 2, 3]\n[12, 13, 14, 10, 11]\n\n\nAnother quick info:\nWrapping is almost free for every iterator that can jump without cost (like ranges or array iters), but not for strings. Strings have variably-sized elements (a UTF-8 char can have 1 to 4 bytes), and so wrapping requires iterating over the wrapped elements twice.\n" ]
[ 2, 2 ]
[]
[]
[ "generics", "rust" ]
stackoverflow_0074673827_generics_rust.txt
Q: Mockito test with Spring Controller Null Pointer Exception My project is using Spring and I wanted to test with Mockito but I have a NullPointerException that I can't solve... Isn't @MockBean supposed to inject into my TeacherService? I have tried to put @Autowired in front of MockMvc but it's worse... I want to test my Controller by adding a new Teacher. This is my test: @ExtendWith(MockitoExtension.class) @ExtendWith(SpringExtension.class) @AutoConfigureMockMvc public class TeacherControllerMockTest { private MockMvc mvc; @InjectMocks private TeacherForm teacherForm; @MockBean private TeacherService teacherService; @Captor ArgumentCaptor<Teacher> teacherCaptor; @Test void addTeacherPostNonExistingTeacher() throws Exception { when(teacherService.findById(teacherForm.getId())).thenReturn(null); // there will be a teacherService.saveTeacher(t) call: but by default it won't happen this.mvc.perform(post("/addTeacher") .param("firstName", "Anne-Marie") .param("lastName", "Kermarrec") ) .andExpect(status().is3xxRedirection()) .andReturn(); //teacherController.addTeacher(teacherForm); verify(teacherService, atLeastOnce()).saveTeacher(teacherCaptor.capture()); Teacher capturedTeacher = teacherCaptor.getValue(); assertEquals("Kermarrec", capturedTeacher.getLastName()); } This is my Controller: @PostMapping(value = { "/addTeacher"}) public String addTeacher(@ModelAttribute("TeacherForm") TeacherForm teacherForm) { Teacher t; if(teacherService.findById(teacherForm.getId()).isPresent()){ // teacher already existing : update t = teacherService.findById(teacherForm.getId()).get(); t.setFirstName(teacherForm.getFirstName()); t.setLastName(teacherForm.getLastName()); } else { // teacher not existing : create t=new Teacher(teacherForm.getFirstName(), teacherForm.getLastName(), terManagerService.getTERManager()); } teacherService.saveTeacher(t); return "redirect:/listTeachers"; } And the error: java.lang.NullPointerException: Cannot invoke "org.springframework.test.web.servlet.MockMvc.perform(org.springframework.test.web.servlet.RequestBuilder)" because "this.mvc" is null at um.fds.agl.ter22.controllers.mockito.TeacherControllerMockTest.addTeacherPostNonExistingTeacher(TeacherControllerMockTest.java:59) at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) at java.base/java.lang.reflect.Method.invoke(Method.java:578) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86) at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53) at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71) at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38) at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54) EDIT 2: I have tried to initialize this.mvc without @Autowired but I got another error: @BeforeEach public void setUp() { this.mvc = MockMvcBuilders.standaloneSetup(new TeacherController()).build(); // I added this line because I got this.teacherService is null this.teacherService = new TeacherService(); } I got a problem with this.teacherRepository being null (teacherRepository is an interface); it is used when I'm calling teacherService.findById. In TeacherService: public Optional<Teacher> findById(long id) { return teacherRepository.findById(id); } java.lang.NullPointerException: Cannot invoke "um.fds.agl.ter22.repositories.TeacherRepository.findById(Object)" because "this.teacherRepository" is null at um.fds.agl.ter22.services.TeacherService.findById(TeacherService.java:41) at um.fds.agl.ter22.controllers.mockito.TeacherControllerMockTest.addTeacherPostNonExistingTeacher(TeacherControllerMockTest.java:64) at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) at java.base/java.lang.reflect.Method.invoke(Method.java:578) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86) at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53) at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71) at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38) at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54) A: You need to initialize the MockMvc instance in your test class, so please try this: @BeforeEach public void setup() { this.mvc = MockMvcBuilders.standaloneSetup(new TeacherController()).build(); } A: Your mockMvc field is never initialized; the exception message tells you this fact, plus the line number where this occurs: java.lang.NullPointerException: Cannot invoke 
"org.springframework.test.web.servlet.MockMvc.perform(org.springframework.test.web.servlet.RequestBuilder)" because "this.mvc" is null at um.fds.agl.ter22.controllers.mockito.TeacherControllerMockTest.addTeacherPostNonExistingTeacher(TeacherControllerMockTest.java:59) You run a Spring Boot test and autowire your field and that's it. Don't autowire and use MockMvcBuilders. You can only use either, but not both. @SpringBootTest @AutoConfigureMockMvc public class TeacherControllerMockTest { @Autowired private MockMvc mvc; // ...
Mockito test with Spring Controller Null Pointer Exception
My project is using Spring and I wanted to test with Mockito but I have a NullPointerException that I can't solve... Isn't MockBean supposed to inject in my TeacherService ? I have tried to put @Autowired in front of MockMvc but it's worse... I want to test my Controller by adding a new Teacher : This is my test : @ExtendWith(MockitoExtension.class) @ExtendWith(SpringExtension.class) @AutoConfigureMockMvc public class TeacherControllerMockTest { private MockMvc mvc; @InjectMocks private TeacherForm teacherForm; @MockBean private TeacherService teacherService; @Captor ArgumentCaptor<Teacher> teacherCaptor; @Test void addTeacherPostNonExistingTeacher() throws Exception { when(teacherService.findById(teacherForm.getId())).thenReturn(null); //il y aura un teacherService.saveTeacher(t) : mais par defaut ça ne le fera pas this.mvc.perform(post("/addTeacher") .param("firstName", "Anne-Marie") .param("lastName", "Kermarrec") ) .andExpect(status().is3xxRedirection()) .andReturn(); //teacherController.addTeacher(teacherForm); verify(teacherService, atLeastOnce()).saveTeacher(teacherCaptor.capture()); Teacher capturedTeacher = teacherCaptor.getValue(); assertEquals("Kermarrec", capturedTeacher.getLastName()); } This is my Controller : @PostMapping(value = { "/addTeacher"}) public String addTeacher(@ModelAttribute("TeacherForm") TeacherForm teacherForm) { Teacher t; if(teacherService.findById(teacherForm.getId()).isPresent()){ // teacher already existing : update t = teacherService.findById(teacherForm.getId()).get(); t.setFirstName(teacherForm.getFirstName()); t.setLastName(teacherForm.getLastName()); } else { // teacher not existing : create t=new Teacher(teacherForm.getFirstName(), teacherForm.getLastName(), terManagerService.getTERManager()); } teacherService.saveTeacher(t); return "redirect:/listTeachers"; } And the error : java.lang.NullPointerException: Cannot invoke "org.springframework.test.web.servlet.MockMvc.perform(org.springframework.test.web.servlet.RequestBuilder)" because "this.mvc" is null at um.fds.agl.ter22.controllers.mockito.TeacherControllerMockTest.addTeacherPostNonExistingTeacher(TeacherControllerMockTest.java:59) at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) at java.base/java.lang.reflect.Method.invoke(Method.java:578) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86) at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53) at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71) at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38) at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54) EDIT 2 : I have tried to initilize this.mvc without @Autowired but I got another error : @BeforeEach public void setUp() { this.mvc = MockMvcBuilders.standaloneSetup(new TeacherController()).build(); //I have add this line because I got thie.teacherService is null this.teacherService = new TeacherService(); } I got a problem with this.teacherRepository is null (teacherRepository is an interface), it is used when I'm calling teacherService.findById In TeacherService public Optional<Teacher> findById(long id) { return teacherRepository.findById(id); } java.lang.NullPointerException: Cannot invoke "um.fds.agl.ter22.repositories.TeacherRepository.findById(Object)" because "this.teacherRepository" is null at um.fds.agl.ter22.services.TeacherService.findById(TeacherService.java:41) at um.fds.agl.ter22.controllers.mockito.TeacherControllerMockTest.addTeacherPostNonExistingTeacher(TeacherControllerMockTest.java:64) at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) at java.base/java.lang.reflect.Method.invoke(Method.java:578) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86) at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53) at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71) at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38) at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
[ "You need to initialize the MockMvc instance in your test class, so please try this:\n@BeforeEach\npublic void setup() {\n this.mvc = MockMvcBuilders.standaloneSetup(new TeacherController()).build();\n}\n\n", "Your mockMvc field is never initialized; the exception message tells you this fact, plus the line number where this occurs:\n\njava.lang.NullPointerException: Cannot invoke \"org.springframework.test.web.servlet.MockMvc.perform(org.springframework.test.web.servlet.RequestBuilder)\" because \"this.mvc\" is null\n at um.fds.agl.ter22.controllers.mockito.TeacherControllerMockTest.addTeacherPostNonExistingTeacher(TeacherControllerMockTest.java:59)\n\n\nYou run a Spring Boot test and autowire your field and that's it. Don't autowire and use MockMvcBuilders. You can only use either, but not both.\n@SpringBootTest\n@AutoConfigureMockMvc\npublic class TeacherControllerMockTest {\n @Autowired\n private MockMvc mvc;\n\n // ...\n\n" ]
[ 1, 0 ]
[ "Use @Autowired on MockMvc.\n@Autowired\nprivate MockMvc mvc;\n\n" ]
[ -1 ]
[ "java", "mockito", "nullpointerexception", "spring" ]
stackoverflow_0074669313_java_mockito_nullpointerexception_spring.txt
Q: Adding a custom, non-model attribute to query set in Django? Newbie to DRF and have a model called posts. And another called user. The post object looks as follows: class Post(models.Model): """ Post model """ title = models.CharField(max_length=250) body = models.TextField() author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='forum_posts') parent_post = models.ForeignKey('self', on_delete=models.CASCADE, blank=True, null=True) time_stamp = models.DateTimeField(default=timezone.now) objects = models.Manager() The serializer for this model is: class PostSerializer(serializers.ModelSerializer): class Meta: model = models.Post fields = ('id', 'title', 'body', 'parent_post', 'author', 'time_stamp') extra_kwargs = {'id': {'read_only': True}, 'author': {'read_only': True}} When returning data for this model, I want to add an extra attribute to each object within the query set called "author_username". The username should be the username belonging to the post's author id. I also want to do this without modifying the model to add another attribute such as "author_username" since this'll be redundant (already have an FK for author). So, ideally, the json for an object would look like: 'post_id': 1 'post_title': 'Example post' 'post_body': 'Example post' 'author_id': 1 'parent_post_id': null 'time_stamp': '2022' 'author_username': 'testUser' How can I go about doing this? Here's my view: class PostList(generics.ListCreateAPIView): permission_classes = [IsAuthenticatedOrReadOnly] queryset = models.Post.objects.all() serializer_class = serializers.PostSerializer A: The source argument can be passed to a serializer field to access an attribute from a related model class PostSerializer(serializers.ModelSerializer): author_username = serializers.CharField(source="author.username", read_only=True) class Meta: model = models.Post ... You should add a select_related call to your view's queryset class PostList(generics.ListCreateAPIView): ... queryset = models.Post.objects.select_related('author') ...
Adding a custom, non-model attribute to query set in Django?
Newbie to DRF and have a model called posts. And another called user. The post object looks as follows: class Post(models.Model): """ Post model """ title = models.CharField(max_length=250) body = models.TextField() author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='forum_posts') parent_post = models.ForeignKey('self', on_delete=models.CASCADE, blank=True, null=True) time_stamp = models.DateTimeField(default=timezone.now) objects = models.Manager() The serializer for this model is: class PostSerializer(serializers.ModelSerializer): class Meta: model = models.Post fields = ('id', 'title', 'body', 'parent_post', 'author', 'time_stamp') extra_kwargs = {'id': {'read_only': True}, 'author': {'read_only': True}} When returning data for this model, I want to add an extra attribute to each object within the query set called "author_username". The username should be the username belonging to the post's author id. I also want to do this without modifying the model to add another attribute such as "author_username" since this'll be redundant (already have an FK for author). So, ideally, the json for an object would look like: 'post_id': 1 'post_title': 'Example post' 'post_body': 'Example post' 'author_id': 1 'parent_post_id': null 'time_stamp': '2022' 'author_username': 'testUser' How can I go about doing this? Here's my view: class PostList(generics.ListCreateAPIView): permission_classes = [IsAuthenticatedOrReadOnly] queryset = models.Post.objects.all() serializer_class = serializers.PostSerializer
[ "The source argument can be passed to a serializer field to access an attribute from a related model\nclass PostSerializer(serializers.ModelSerializer):\n\n author_username = serializers.CharField(source=\"author.username\", read_only=True)\n\n class Meta:\n model = models.Post\n ...\n\nYou should add a select_related call to your view's queryset\nclass PostList(generics.ListCreateAPIView):\n ...\n queryset = models.Post.objects.select_related('author')\n ...\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "django_views" ]
stackoverflow_0074673890_django_django_models_django_rest_framework_django_views.txt
Q: wiremock issue when upgrading to Spring Boot 3 When upgrading my Spring Boot 2.5 app to 3.0, I am facing some issues with WireMock, probably due to the move to the jakarta namespace. Even upgrading to the latest wiremock-jre8, i.e. 2.35.0 (as of December 2022), doesn't seem to help. I get this error: java.lang.NoClassDefFoundError: javax/servlet/DispatcherType at java.base/java.lang.Class.forName0(Native Method) at java.base/java.lang.Class.forName(Class.java:375) at com.github.tomakehurst.wiremock.jetty9.JettyHttpServerFactory.getServerConstructor(JettyHttpServerFactory.java:37) at com.github.tomakehurst.wiremock.jetty9.JettyHttpServerFactory.<clinit>(JettyHttpServerFactory.java:30) A: Looks like this is a known issue related to the jakarta namespace and Jetty 11 support that will take a while to get properly fixed: https://github.com/wiremock/wiremock/issues/1760 As indicated in the issue, using wiremock-jre8-standalone instead of wiremock-jre8 works around the issue until it gets properly fixed in WireMock 3.x.
wiremock issue when upgrading to Spring Boot 3
When upgrading my Spring Boot 2.5 app to 3.0, I am facing some issues with WireMock, probably due to the move to the jakarta namespace. Even upgrading to the latest wiremock-jre8, i.e. 2.35.0 (as of December 2022), doesn't seem to help. I get this error: java.lang.NoClassDefFoundError: javax/servlet/DispatcherType at java.base/java.lang.Class.forName0(Native Method) at java.base/java.lang.Class.forName(Class.java:375) at com.github.tomakehurst.wiremock.jetty9.JettyHttpServerFactory.getServerConstructor(JettyHttpServerFactory.java:37) at com.github.tomakehurst.wiremock.jetty9.JettyHttpServerFactory.<clinit>(JettyHttpServerFactory.java:30)
[ "Looks like this is a know issue related to jakarta namespace and Jetty 11 support, that will take a while to get properly fixed :\nhttps://github.com/wiremock/wiremock/issues/1760\nAs indicated in the issue, using wiremock-jre8-standalone instead of wiremock-jre8 helps working around the issue, until it gets properly fixed in Wiremock 3.x\n" ]
[ 0 ]
[]
[]
[ "spring_boot", "wiremock" ]
stackoverflow_0074673966_spring_boot_wiremock.txt
Q: auto built cli tool in to an object in python first sorry for my bad terminology, I am an electrical engineer, so maybe my coding terms are not so accurate or even far from that. we have a CLI in the company, accessed from the Linux terminal, you know usual stuff, `{command.exe} {plugin} {options}, and you get the output on the terminal screen. In order to unit test the product, we need it in a python class, which is returned as an object to the test environment, and eventually, prints that open a process that execute that command. to build the command, we have a dictionary of the plugin, the subplugin, and the option for each cmd: self.commands = { "plugin": ['subplugin', 'subsubplugin', '-a', 'flaga', '-b', 'flagb'],... and we built a function for every command we want, from the plugin list extracted from the dict above I am looking for a better approach that auto-built the tool entirely, sort of what the OS does for prediction. I am assuming that would include the "set_attr" method of classes and stuff like that. at the end of all this, I expect to access the plugin like this: cli.plugin.subplugin.subsubplugin(arg,arg,arg) and that would generate a command cli, or at least the list above so I could inject it into the existing infra. can anyone help, please? thx in advance I am more looking for guidence then say what I tried and fix it. A: I found my answer, this code worked for me yo achieve what I was looking for. thanks for the commenters. import re import subprocess PKG_NAME = "sudo mycli" PKG_PLUGIN_START = "The following are all installed plugin extensions:" # this is the message before the commands list in the cli help PKG_PLUGIN_END = f"See 'mycli <plugin> help' for more information on a plugin" # hit is the message after the commands list in the cli help PKG_CMD_START = "The following are all implemented sub-commands:" PKG_CMD_END = "See 'mycli help <command>' for more information on a specific command" PLUGIN_CMD_START = PKG_CMD_START PLUGIN_CMD_END = "See 'mycli <plugin> help <command>' for more information on a specific command" def get_help(s): s += " help" return subprocess.getoutput([s]) def get_plugin_list(s, start, end): s = '\n'.join(l.strip() for l in s.splitlines() if l) res = re.search(f'{start}([\s\S]*){end}', s) # regex that matches everything between both strings if not res: raise ValueError("Couldn't find plugin list in string") return [l.split(' ')[0] for l in res.group(1).strip().splitlines()] # remove the unnecessary text and return the plugins as a list class CMD(): def __init__(self, name, parent_plugin_name=None, *args): self.args = args self.pkg_name = PKG_NAME self.parent_plugin_name = parent_plugin_name self.name = name def __call__(self, *args, **kwargs): if self.parent_plugin_name: command = " ".join([self.pkg_name, self.parent_plugin_name]) else: command = self.pkg_name command = " ".join([command, self.name, *args, " "]) command += " ".join([f"-{each[0]}={each[1]}" for each in list(kwargs.items())]) return subprocess.getoutput(command) class Plugin(): def __init__(self, name, parent_pkg_name): self.name = name self.parent_pkg_name = PKG_NAME plugin_cmd_start = PLUGIN_CMD_START plugin_cmd_end = PLUGIN_CMD_END.replace("<plugin>", self.name) for cmd in get_plugin_list(get_help(f"{self.parent_pkg_name} {self.name}"), plugin_cmd_start, plugin_cmd_end): setattr(self, cmd, CMD(cmd, parent_plugin_name=self.name)) class Package(): def __init__(self, name, root=True): self.name = name if root: self.name = "sudo " + self.name self.command_string = 
f"{self.name}" for cmd in get_plugin_list(get_help(self.name), PKG_CMD_START, PKG_CMD_END): setattr(self, cmd, CMD(cmd)) for plugin in get_plugin_list(get_help(self.name), PKG_PLUGIN_START, PKG_PLUGIN_END): setattr(self, plugin, Plugin(plugin, parent_pkg_name=self.name)) if __name__ == "__main__": mycli_tool = Package("mycli") print() print(mycli_tool.cmd()) print() print(mycli_tool.system.get_disk_usage("-x0")) print() print(mycli_tool.system.get_disk_usage(x=0)) print() print(mycli_tool.system.get_disk_usage(json=1))
auto built cli tool into an object in python
First, sorry for my bad terminology; I am an electrical engineer, so my coding terms may not be accurate. We have a CLI in the company, accessed from the Linux terminal, the usual stuff: `{command.exe} {plugin} {options}`, and you get the output on the terminal screen. In order to unit test the product, we need it wrapped in a Python class that is returned as an object to the test environment and that ultimately opens a process executing that command. To build the command, we have a dictionary of the plugin, the subplugin, and the options for each cmd: self.commands = { "plugin": ['subplugin', 'subsubplugin', '-a', 'flaga', '-b', 'flagb'],... and we built a function for every command we want, from the plugin list extracted from the dict above. I am looking for a better approach that auto-builds the tool entirely, sort of like the shell's tab completion discovers commands. I am assuming that would involve setattr and similar class machinery. At the end of all this, I expect to access a plugin like this: cli.plugin.subplugin.subsubplugin(arg, arg, arg) and have that generate a CLI command, or at least the list above, so I could inject it into the existing infra. Can anyone help, please? Thanks in advance. I am looking more for guidance than for someone to fix what I tried.
[ "I found my answer, this code worked for me yo achieve what I was looking for.\nthanks for the commenters.\nimport re\nimport subprocess\n\nPKG_NAME = \"sudo mycli\"\nPKG_PLUGIN_START = \"The following are all installed plugin extensions:\" # this is the message before the commands list in the cli help\nPKG_PLUGIN_END = f\"See 'mycli <plugin> help' for more information on a plugin\" # hit is the message after the commands list in the cli help\nPKG_CMD_START = \"The following are all implemented sub-commands:\"\nPKG_CMD_END = \"See 'mycli help <command>' for more information on a specific command\"\nPLUGIN_CMD_START = PKG_CMD_START\nPLUGIN_CMD_END = \"See 'mycli <plugin> help <command>' for more information on a specific command\"\n\n\ndef get_help(s):\n s += \" help\"\n return subprocess.getoutput([s])\n\n\ndef get_plugin_list(s, start, end):\n s = '\\n'.join(l.strip() for l in s.splitlines() if l)\n res = re.search(f'{start}([\\s\\S]*){end}', s) # regex that matches everything between both strings\n\n if not res:\n raise ValueError(\"Couldn't find plugin list in string\")\n\n return [l.split(' ')[0] for l in res.group(1).strip().splitlines()] # remove the unnecessary text and return the plugins as a list\n\n\nclass CMD():\n def __init__(self, name, parent_plugin_name=None, *args):\n self.args = args\n self.pkg_name = PKG_NAME\n self.parent_plugin_name = parent_plugin_name\n self.name = name\n\n def __call__(self, *args, **kwargs):\n if self.parent_plugin_name:\n command = \" \".join([self.pkg_name, self.parent_plugin_name])\n else:\n command = self.pkg_name\n command = \" \".join([command, self.name, *args, \" \"])\n command += \" \".join([f\"-{each[0]}={each[1]}\" for each in list(kwargs.items())])\n return subprocess.getoutput(command)\n\n\nclass Plugin():\n\n def __init__(self, name, parent_pkg_name):\n self.name = name\n self.parent_pkg_name = PKG_NAME\n plugin_cmd_start = PLUGIN_CMD_START\n plugin_cmd_end = PLUGIN_CMD_END.replace(\"<plugin>\", self.name)\n for cmd in get_plugin_list(get_help(f\"{self.parent_pkg_name} {self.name}\"), plugin_cmd_start, plugin_cmd_end):\n setattr(self, cmd, CMD(cmd, parent_plugin_name=self.name))\n\n\nclass Package():\n def __init__(self, name, root=True):\n self.name = name\n if root:\n self.name = \"sudo \" + self.name\n self.command_string = f\"{self.name}\"\n for cmd in get_plugin_list(get_help(self.name), PKG_CMD_START, PKG_CMD_END):\n setattr(self, cmd, CMD(cmd))\n for plugin in get_plugin_list(get_help(self.name), PKG_PLUGIN_START, PKG_PLUGIN_END):\n setattr(self, plugin, Plugin(plugin, parent_pkg_name=self.name))\n\n\nif __name__ == \"__main__\":\n mycli_tool = Package(\"mycli\")\n print()\n print(mycli_tool.cmd())\n print()\n print(mycli_tool.system.get_disk_usage(\"-x0\"))\n print()\n print(mycli_tool.system.get_disk_usage(x=0))\n print()\n print(mycli_tool.system.get_disk_usage(json=1))\n\n" ]
[ 0 ]
[]
[]
[ "api", "auto_generate", "command_line_interface", "python", "python_3.x" ]
stackoverflow_0074612528_api_auto_generate_command_line_interface_python_python_3.x.txt
Q: API returns HTML response after API protection using custom guard and passport I hope you're doing well, I just need some help with my problem, been stuck at it for a while now and I cannot figure out a work around. I implemented another login for the admin in my project and I use a custom guard and custom middleware for it. This works properly without any problem. The problem started when I try to protect the API routes using passport. For the users which uses the auth:api as the API middleware, everything works fine. But in my custom guard, it returns an HTML response(console.log says it returns HTML but it does not output anything in the UI) instead of json. If I remove the route protection it would work again as intended. I hope you can help me with this one. Thank you! I am using Laravel Passport for the API protection. This is how it looks like without the API route protection(This is how it should be). This is how it looks like with the route protection This is what console.log returns with route protection. Without it, it returns the response from the first picture. Here's my code below AdminMiddleware <?php namespace App\Http\Middleware; use Closure; use Illuminate\Support\Facades\Auth; use Illuminate\Http\Request; class AdminMiddleware { public function handle($request, Closure $next, $guard = null) { if (Auth::guard('admin')->check()) { return $next($request); } else { return redirect()->route('admin.login'); } } Kernel.php protected $routeMiddleware = [ 'auth.admin' => \App\Http\Middleware\AdminMiddleware::class, 'auth' => \App\Http\Middleware\Authenticate::class, ]; config/auth.php 'guards' => [ 'web' => [ 'driver' => 'session', 'provider' => 'users', ], 'api' => [ 'driver' => 'passport', 'provider' => 'users', 'hash' => false, ], 'admin' => [ 'driver' => 'session', 'provider' => 'admins', ], 'adminApi' => [ 'driver' => 'passport', 'provider' => 'admins', 'hash' => false, ] ], routes/api.php Route::group(['middleware' => ['auth.admin:adminApi']], function(){ Route::get('/fetch-announcements', [AnnouncementController::class, 'showAnnouncement']); Route::post('/store-announcements',[AnnouncementController::class, 'storeAnnouncement']); }); Models/Admin.php <?php namespace App\Models; use Illuminate\Database\Eloquent\Factories\HasFactory; use Illuminate\Database\Eloquent\Model; use Illuminate\Foundation\Auth\User as Authenticatable; use Illuminate\Notifications\Notifiable; use Laravel\Sanctum\HasApiTokens; class Admin extends Authenticatable { use HasFactory, HasApiTokens; protected $fillable =[ 'email', ]; protected $hidden = [ 'password', 'remember_token', ]; protected $casts = [ 'email_verified_at' => 'datetime', ]; } Get Request await axios.get('/api/fetch-announcements', { headers: { 'Accept': 'application/json' } }) .then(response => { this.announcements = response.data console.log(this.announcements) }) .catch(err => console.error(err)) EDIT The API returns a 302 code A: Just send a header for the request as Accept: application/json. So you will get the same as JSON. A: The API return html response because the middleware that you created before, when the auth check is failing, it will redirect you to admin login page. First, you should check the API that you called from axios is include the token or not, if not, it will run the else statement. Next, your middleware will only return redirect page (302) when the auth guard check failed, either in web or api. 
If you want your API to return JSON, you can change your middleware as in the code below; don't forget to add the header 'Accept': 'application/json' to your request. public function handle($request, Closure $next, $guard = null) { if (Auth::guard('admin')->check()) { return $next($request); } else { if ($request->wantsJson()) { return response()->json([ "error" => true, "message" => "Unauthenticated" ], 403); } else { return redirect()->route("admin.login"); } } }
API returns HTML response after API protection using custom guard and passport
I hope you're doing well, I just need some help with my problem, been stuck at it for a while now and I cannot figure out a work around. I implemented another login for the admin in my project and I use a custom guard and custom middleware for it. This works properly without any problem. The problem started when I try to protect the API routes using passport. For the users which uses the auth:api as the API middleware, everything works fine. But in my custom guard, it returns an HTML response(console.log says it returns HTML but it does not output anything in the UI) instead of json. If I remove the route protection it would work again as intended. I hope you can help me with this one. Thank you! I am using Laravel Passport for the API protection. This is how it looks like without the API route protection(This is how it should be). This is how it looks like with the route protection This is what console.log returns with route protection. Without it, it returns the response from the first picture. Here's my code below AdminMiddleware <?php namespace App\Http\Middleware; use Closure; use Illuminate\Support\Facades\Auth; use Illuminate\Http\Request; class AdminMiddleware { public function handle($request, Closure $next, $guard = null) { if (Auth::guard('admin')->check()) { return $next($request); } else { return redirect()->route('admin.login'); } } Kernel.php protected $routeMiddleware = [ 'auth.admin' => \App\Http\Middleware\AdminMiddleware::class, 'auth' => \App\Http\Middleware\Authenticate::class, ]; config/auth.php 'guards' => [ 'web' => [ 'driver' => 'session', 'provider' => 'users', ], 'api' => [ 'driver' => 'passport', 'provider' => 'users', 'hash' => false, ], 'admin' => [ 'driver' => 'session', 'provider' => 'admins', ], 'adminApi' => [ 'driver' => 'passport', 'provider' => 'admins', 'hash' => false, ] ], routes/api.php Route::group(['middleware' => ['auth.admin:adminApi']], function(){ Route::get('/fetch-announcements', [AnnouncementController::class, 'showAnnouncement']); Route::post('/store-announcements',[AnnouncementController::class, 'storeAnnouncement']); }); Models/Admin.php <?php namespace App\Models; use Illuminate\Database\Eloquent\Factories\HasFactory; use Illuminate\Database\Eloquent\Model; use Illuminate\Foundation\Auth\User as Authenticatable; use Illuminate\Notifications\Notifiable; use Laravel\Sanctum\HasApiTokens; class Admin extends Authenticatable { use HasFactory, HasApiTokens; protected $fillable =[ 'email', ]; protected $hidden = [ 'password', 'remember_token', ]; protected $casts = [ 'email_verified_at' => 'datetime', ]; } Get Request await axios.get('/api/fetch-announcements', { headers: { 'Accept': 'application/json' } }) .then(response => { this.announcements = response.data console.log(this.announcements) }) .catch(err => console.error(err)) EDIT The API returns a 302 code
[ "Just send a header for the request as Accept: application/json.\nSo you will get the same as JSON.\n", "The API return html response because the middleware that you created before, when the auth check is failing, it will redirect you to admin login page.\nFirst, you should check the API that you called from axios is include the token or not, if not, it will run the else statement.\nNext, your middleware will only return redirect page (302) when the auth guard check failed, either in web or api. If you want your API return a json, you may change your middleware just like below code, dont forget add header 'Accept': 'application/json' on your request.\npublic function handle($request, Closure $next, $guard = null)\n{\n if (Auth::guard('admin')->check()) {\n return $next($request);\n } else {\n if ($request->wantsJson()) {\n return response()->json([\n \"error\" => true,\n \"message\" => \"Unauthenticated\"\n ], 403);\n }else{\n return redirect()->route(\"cart.index\");\n }\n }\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "api", "laravel", "laravel_middleware", "laravel_passport", "rest" ]
stackoverflow_0074673433_api_laravel_laravel_middleware_laravel_passport_rest.txt
Q: Snowflake + Power BI Direct Query with List Parameters Error - We cannot apply operator & to types Text and List Power BI direct query error: I have a list YEAR of type Number, with values (2020, 2021, 2022). I am using the list as a parameter in a Power BI Direct Query but getting an error - We cannot apply operator & to types Text and List. Question: How do I use the number list in a direct query? Query below = Value.NativeQuery(Snowflake.Databases("abcd.east-us-2.ADIC.snowflakecomputing.com","DATABRICKS"){[Name="SUPER"]} [Data], "select * from SUPER.SCHEMA1.INT_HIST WHERE YEAR IN (" & (YEAR ) & ")" , null, [EnableFolding=true]) A: Maybe try something like this: Value.NativeQuery(Snowflake.Databases("abcd.east-us-2.ADIC.snowflakecomputing.com","DATABRICKS"){[Name="SUPER"]} [Data], "select * from SUPER.SCHEMA1.INT_HIST WHERE YEAR IN (" & Text.Combine(List.Transform(YEAR,Text.From), ",") & ")" , null, [EnableFolding=true])
Snowflake + Power BI Direct Query with List Parameters Error - We cannot apply operator & to types Text and List
Power BI direct query error: I have a list YEAR of type Number, with values (2020, 2021, 2022). I am using the list as a parameter in a Power BI Direct Query but getting an error - We cannot apply operator & to types Text and List. Question: How do I use the number list in a direct query? Query below = Value.NativeQuery(Snowflake.Databases("abcd.east-us-2.ADIC.snowflakecomputing.com","DATABRICKS"){[Name="SUPER"]} [Data], "select * from SUPER.SCHEMA1.INT_HIST WHERE YEAR IN (" & (YEAR ) & ")" , null, [EnableFolding=true])
[ "Maybe try something like this:\n\nValue.NativeQuery(Snowflake.Databases(\"abcd.east-us-2.ADIC.snowflakecomputing.com\",\"DATABRICKS\"){[Name=\"SUPER\"]} [Data], \"select * from SUPER.SCHEMA1.INT_HIST WHERE YEAR IN (\" & Text.Combine(List.Transform(YEAR,Text.From), \",\") & \")\" , null, [EnableFolding=true])\n\n" ]
[ 0 ]
[]
[]
[ "powerbi", "powerquery", "snowflake_cloud_data_platform" ]
stackoverflow_0074667440_powerbi_powerquery_snowflake_cloud_data_platform.txt
Q: java.lang.RuntimeException: Unable to start activity ComponentInfo I know this error appeared on forum million of times, but please help me find what I missed. I'm trying to do simple tab orientated application,I don't have much (except errors) 1) my main activity is based on tablayout tutorial what I found public class MainTabPanel extends TabActivity { public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.mainlayout); Resources res = getResources(); TabHost tabHost = getTabHost(); TabHost.TabSpec spec; Intent intent; intent = new Intent().setClass(this, MyBookActivity.class); spec = tabHost.newTabSpec("main") .setIndicator("Main", res.getDrawable(R.drawable.ic_mybook)) .setContent(intent); tabHost.addTab(spec); tabHost.setCurrentTab(0); } } 2) mainlayout.xml <?xml version="1.0" encoding="utf-8"?> <TabHost xmlns:android="http://schemas.android.com/apk/res/android" android:id="@android:id/tabhost" android:layout_width="fill_parent" android:layout_height="fill_parent"> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="5dp"> <TabWidget android:id="@android:id/tabs" android:layout_width="fill_parent" android:layout_height="wrap_content" /> <FrameLayout android:id="@android:id/tabcontent" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="5dp" /> </LinearLayout></TabHost> 3) my second activity is basically almost empty, it;s just display current date and time, worked before I tried to add tab panel 4) my manifest file <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="org.th.mybook" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" > <activity android:name=".MainTabPanel" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name="MyBookActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.ALTERNATIVE" /> </intent-filter> </activity> </application> </manifest> 5 log cat error 02-10 21:04:45.203: E/AndroidRuntime(1107): FATAL EXCEPTION: main 02-10 21:04:45.203: E/AndroidRuntime(1107): java.lang.RuntimeException: Unable to start activity ComponentInfo{org.th.mybook/org.th.mybook.MainTabPanel}: java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{org.th.mybook/org.th.mybook.MyBookActivity}: java.lang.NullPointerException 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.access$2300(ActivityThread.java:125) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.os.Handler.dispatchMessage(Handler.java:99) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.os.Looper.loop(Looper.java:123) 02-10 21:04:45.203: E/AndroidRuntime(1107): at 
android.app.ActivityThread.main(ActivityThread.java:4627) 02-10 21:04:45.203: E/AndroidRuntime(1107): at java.lang.reflect.Method.invokeNative(Native Method) 02-10 21:04:45.203: E/AndroidRuntime(1107): at java.lang.reflect.Method.invoke(Method.java:521) 02-10 21:04:45.203: E/AndroidRuntime(1107): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 02-10 21:04:45.203: E/AndroidRuntime(1107): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 02-10 21:04:45.203: E/AndroidRuntime(1107): at dalvik.system.NativeStart.main(Native Method) 02-10 21:04:45.203: E/AndroidRuntime(1107): Caused by: java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{org.th.mybook/org.th.mybook.MyBookActivity}: java.lang.NullPointerException 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2585) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.startActivityNow(ActivityThread.java:2503) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.LocalActivityManager.moveToState(LocalActivityManager.java:127) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.LocalActivityManager.startActivity(LocalActivityManager.java:339) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.widget.TabHost$IntentContentStrategy.getContentView(TabHost.java:651) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.widget.TabHost.setCurrentTab(TabHost.java:323) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.widget.TabHost.addTab(TabHost.java:213) 02-10 21:04:45.203: E/AndroidRuntime(1107): at org.th.mybook.MainTabPanel.onCreate(MainTabPanel.java:30) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627) 02-10 21:04:45.203: E/AndroidRuntime(1107): ... 11 more 02-10 21:04:45.203: E/AndroidRuntime(1107): Caused by: java.lang.NullPointerException 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.content.ContextWrapper.getApplicationContext(ContextWrapper.java:100) 02-10 21:04:45.203: E/AndroidRuntime(1107): at org.th.mybook.MyBookActivity.<init>(MyBookActivity.java:16) 02-10 21:04:45.203: E/AndroidRuntime(1107): at java.lang.Class.newInstanceImpl(Native Method) 02-10 21:04:45.203: E/AndroidRuntime(1107): at java.lang.Class.newInstance(Class.java:1429) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.Instrumentation.newActivity(Instrumentation.java:1021) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2577) 02-10 21:04:45.203: E/AndroidRuntime(1107): ... 
20 more please help me, and tell me what i missed, im comparing this code with my old one and i can't find anything regards 6) my book activity public class MyBookActivity extends Activity { java.text.DateFormat dateFormat = android.text.format.DateFormat.getDateFormat(getApplicationContext()); @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); DigitalClock clock = (DigitalClock) findViewById(R.id.digitalClock1); final TextView date = (TextView) findViewById(R.id.textView1); date.setText(dateFormat.format(new Date())); TextWatcher watcher = new TextWatcher() { @Override public void afterTextChanged(Editable s) { } @Override public void beforeTextChanged(CharSequence s, int start, int count, int after) { } @Override public void onTextChanged(CharSequence s, int start, int before, int count) { if (s.toString().startsWith("00:00:00") || s.toString().startsWith("12:00:00")) { date.setText(dateFormat.format(new Date())); } } }; clock.addTextChangedListener(watcher); } } 7) main.xml layout -> for my book activity <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:gravity="right" android:orientation="horizontal" > <LinearLayout android:id="@+id/DatePanel1" android:layout_width="wrap_content" android:layout_height="wrap_content" > <TextView android:id="@+id/textView1" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginRight="@dimen/space" android:layout_weight="1" android:text="TextView" /> <DigitalClock android:id="@+id/digitalClock1" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:text="DigitalClock" /> </LinearLayout> </LinearLayout> A: <activity android:name="MyBookActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.ALTERNATIVE" /> </intent-filter> </activity> where is your dot before MyBookActivity? A: It was my own stupidity: java.text.DateFormat dateFormat = android.text.format.DateFormat.getDateFormat(getApplicationContext()); Putting this inside onCreate() method fixed my problem. A: I had the same issue, I cleaned and rebuilt the project and it worked. A: Your Manifest Must Change like this Activity name must Specified like ".YourActivityname" <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="org.th.mybook" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="8" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" > <activity android:name=".MainTabPanel" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".MyBookActivity" > </activity> </application> A: I encountered a similar error in one of my app recently, when I checked Android Vitals. So, I'm writing this. This is the error. 
java.lang.RuntimeException: Unable to start activity ComponentInfo{my.package.name/my.package.name.activity.MainActivity}: android.view.InflateException: Binary XML file line #30: Binary XML file line #30: Error inflating class fragment After checking my Binary XML file on line #30, I found out that this my navGraph was on line #30. app:navGraph="@navigation/mobile_navigation" Then, I realized that I was inflating the fragment wrongly. I came across Get started with the Navigation component which clearly states that: When creating the NavHostFragment using FragmentContainerView or if manually adding the NavHostFragment to your activity via a FragmentTransaction, attempting to retrieve the NavController in onCreate() of an Activity via Navigation.findNavController(Activity, @IdRes int) will fail. You should retrieve the NavController directly from the NavHostFragment instead. This is the correct way of Inflating the NavHostFragment: NavHostFragment navHostFragment = (NavHostFragment) supportFragmentManager.findFragmentById(R.id.nav_host_fragment); NavController navController = navHostFragment.getNavController(); Earlier, I was trying to inflate like this: NavController navController = Navigation.findNavController(this, R.id.nav_host_fragment); NavigationUI.setupActionBarWithNavController(this, navController, mAppBarConfiguration); NavigationUI.setupWithNavController(navigationView, navController); So, my new code becomes: NavHostFragment navHostFragment = (NavHostFragment) getSupportFragmentManager().findFragmentById(R.id.nav_host_fragment); NavController navController = null; if (navHostFragment != null) { navController = navHostFragment.getNavController(); } if (navController != null) { NavigationUI.setupActionBarWithNavController(MainActivity.this, navController, mAppBarConfiguration); NavigationUI.setupWithNavController(navigationView, navController); } A: Dear You have used two Intent launcher in your Manifest. Make only one Activity as launcher: Your manifest activity is : <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="org.th.mybook" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" > <activity android:name=".MainTabPanel" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name="MyBookActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.ALTERNATIVE" /> </intent-filter> </activity> </application> </manifest> now write code will be ( i have made your 'MyActivityBook' your default activity launcher. Copy and paste it on your manifest. 
<?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="org.th.mybook" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" > <activity android:name=".MainTabPanel" android:label="@string/app_name" > </activity> <activity android:name="MyBookActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> and Second error may be if you copy paste old code then please update com.example.packagename.FILE_NAME hope this will work ! A: My problem looked similar, after some hours I found that not all of my project was converted to androidx, so I changed: <android.support.constraint.ConstraintLayout into <androidx.constraintlayout.widget.ConstraintLayout then androidStudio still complaints, but the problem solver can add constraintlayout to your libraries or so. Now it starts again! It's funny that past 8 years many similar problems appear with a different solution. A: After trying few answers they are either not related to my project or , I have tried cleaning and rebuilding (https://stackoverflow.com/a/48760966/8463813). But it didn't work for me directly. I have compared it with older version of code, in which i observed some library files(jars and aars in External Libraries directory) are missing. Tried Invalidate Cache and Restart worked, which created all the libraries and working fine. A: If you are using ViewModel and passing any argument but not using the ViewModelFacoty to implement this... then this issue might occur Solution: Use ViewModelFactory to pass any argument in ViewModel class. A: I haven't found the solution but I have traced the error that is causing it in my project. In multithreading if you're trying to use the result of one of the concurrent threads, that haven't yielded the results yet, then you would get a java.lang.RuntimeException.
java.lang.RuntimeException: Unable to start activity ComponentInfo
I know this error appeared on forum million of times, but please help me find what I missed. I'm trying to do simple tab orientated application,I don't have much (except errors) 1) my main activity is based on tablayout tutorial what I found public class MainTabPanel extends TabActivity { public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.mainlayout); Resources res = getResources(); TabHost tabHost = getTabHost(); TabHost.TabSpec spec; Intent intent; intent = new Intent().setClass(this, MyBookActivity.class); spec = tabHost.newTabSpec("main") .setIndicator("Main", res.getDrawable(R.drawable.ic_mybook)) .setContent(intent); tabHost.addTab(spec); tabHost.setCurrentTab(0); } } 2) mainlayout.xml <?xml version="1.0" encoding="utf-8"?> <TabHost xmlns:android="http://schemas.android.com/apk/res/android" android:id="@android:id/tabhost" android:layout_width="fill_parent" android:layout_height="fill_parent"> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="5dp"> <TabWidget android:id="@android:id/tabs" android:layout_width="fill_parent" android:layout_height="wrap_content" /> <FrameLayout android:id="@android:id/tabcontent" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="5dp" /> </LinearLayout></TabHost> 3) my second activity is basically almost empty, it;s just display current date and time, worked before I tried to add tab panel 4) my manifest file <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="org.th.mybook" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" > <activity android:name=".MainTabPanel" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name="MyBookActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.ALTERNATIVE" /> </intent-filter> </activity> </application> </manifest> 5 log cat error 02-10 21:04:45.203: E/AndroidRuntime(1107): FATAL EXCEPTION: main 02-10 21:04:45.203: E/AndroidRuntime(1107): java.lang.RuntimeException: Unable to start activity ComponentInfo{org.th.mybook/org.th.mybook.MainTabPanel}: java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{org.th.mybook/org.th.mybook.MyBookActivity}: java.lang.NullPointerException 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.access$2300(ActivityThread.java:125) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.os.Handler.dispatchMessage(Handler.java:99) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.os.Looper.loop(Looper.java:123) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.main(ActivityThread.java:4627) 02-10 21:04:45.203: E/AndroidRuntime(1107): at 
java.lang.reflect.Method.invokeNative(Native Method) 02-10 21:04:45.203: E/AndroidRuntime(1107): at java.lang.reflect.Method.invoke(Method.java:521) 02-10 21:04:45.203: E/AndroidRuntime(1107): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 02-10 21:04:45.203: E/AndroidRuntime(1107): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 02-10 21:04:45.203: E/AndroidRuntime(1107): at dalvik.system.NativeStart.main(Native Method) 02-10 21:04:45.203: E/AndroidRuntime(1107): Caused by: java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{org.th.mybook/org.th.mybook.MyBookActivity}: java.lang.NullPointerException 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2585) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.startActivityNow(ActivityThread.java:2503) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.LocalActivityManager.moveToState(LocalActivityManager.java:127) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.LocalActivityManager.startActivity(LocalActivityManager.java:339) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.widget.TabHost$IntentContentStrategy.getContentView(TabHost.java:651) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.widget.TabHost.setCurrentTab(TabHost.java:323) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.widget.TabHost.addTab(TabHost.java:213) 02-10 21:04:45.203: E/AndroidRuntime(1107): at org.th.mybook.MainTabPanel.onCreate(MainTabPanel.java:30) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627) 02-10 21:04:45.203: E/AndroidRuntime(1107): ... 11 more 02-10 21:04:45.203: E/AndroidRuntime(1107): Caused by: java.lang.NullPointerException 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.content.ContextWrapper.getApplicationContext(ContextWrapper.java:100) 02-10 21:04:45.203: E/AndroidRuntime(1107): at org.th.mybook.MyBookActivity.<init>(MyBookActivity.java:16) 02-10 21:04:45.203: E/AndroidRuntime(1107): at java.lang.Class.newInstanceImpl(Native Method) 02-10 21:04:45.203: E/AndroidRuntime(1107): at java.lang.Class.newInstance(Class.java:1429) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.Instrumentation.newActivity(Instrumentation.java:1021) 02-10 21:04:45.203: E/AndroidRuntime(1107): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2577) 02-10 21:04:45.203: E/AndroidRuntime(1107): ... 
20 more please help me, and tell me what i missed, im comparing this code with my old one and i can't find anything regards 6) my book activity public class MyBookActivity extends Activity { java.text.DateFormat dateFormat = android.text.format.DateFormat.getDateFormat(getApplicationContext()); @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); DigitalClock clock = (DigitalClock) findViewById(R.id.digitalClock1); final TextView date = (TextView) findViewById(R.id.textView1); date.setText(dateFormat.format(new Date())); TextWatcher watcher = new TextWatcher() { @Override public void afterTextChanged(Editable s) { } @Override public void beforeTextChanged(CharSequence s, int start, int count, int after) { } @Override public void onTextChanged(CharSequence s, int start, int before, int count) { if (s.toString().startsWith("00:00:00") || s.toString().startsWith("12:00:00")) { date.setText(dateFormat.format(new Date())); } } }; clock.addTextChangedListener(watcher); } } 7) main.xml layout -> for my book activity <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:gravity="right" android:orientation="horizontal" > <LinearLayout android:id="@+id/DatePanel1" android:layout_width="wrap_content" android:layout_height="wrap_content" > <TextView android:id="@+id/textView1" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginRight="@dimen/space" android:layout_weight="1" android:text="TextView" /> <DigitalClock android:id="@+id/digitalClock1" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:text="DigitalClock" /> </LinearLayout> </LinearLayout>
[ " <activity\n android:name=\"MyBookActivity\"\n android:label=\"@string/app_name\" >\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.ALTERNATIVE\" />\n </intent-filter>\n </activity>\n\nwhere is your dot before MyBookActivity?\n", "It was my own stupidity:\njava.text.DateFormat dateFormat = android.text.format.DateFormat.getDateFormat(getApplicationContext());\n\nPutting this inside onCreate() method fixed my problem.\n", "I had the same issue, I cleaned and rebuilt the project and it worked.\n", "Your Manifest Must Change like this Activity name must Specified like \".YourActivityname\" \n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\npackage=\"org.th.mybook\"\nandroid:versionCode=\"1\"\nandroid:versionName=\"1.0\" >\n\n<uses-sdk\n android:minSdkVersion=\"8\" android:targetSdkVersion=\"8\" />\n\n<application\n android:icon=\"@drawable/ic_launcher\"\n android:label=\"@string/app_name\" >\n <activity\n android:name=\".MainTabPanel\"\n android:label=\"@string/app_name\" >\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n <activity\n android:name=\".MyBookActivity\" > \n </activity>\n</application>\n\n\n", "I encountered a similar error in one of my app recently, when I checked Android Vitals. So, I'm writing this.\nThis is the error.\njava.lang.RuntimeException: Unable to start activity ComponentInfo{my.package.name/my.package.name.activity.MainActivity}: android.view.InflateException: Binary XML file line #30: Binary XML file line #30: Error inflating class fragment\n\nAfter checking my Binary XML file on line #30, I found out that this my navGraph was on line #30. app:navGraph=\"@navigation/mobile_navigation\"\nThen, I realized that I was inflating the fragment wrongly.\nI came across Get started with the Navigation component which clearly states that:\n\nWhen creating the NavHostFragment using FragmentContainerView or if\nmanually adding the NavHostFragment to your activity via a\nFragmentTransaction, attempting to retrieve the NavController in\nonCreate() of an Activity via Navigation.findNavController(Activity,\n@IdRes int) will fail. You should retrieve the NavController directly\nfrom the NavHostFragment instead.\n\nThis is the correct way of Inflating the NavHostFragment:\nNavHostFragment navHostFragment =\n (NavHostFragment) supportFragmentManager.findFragmentById(R.id.nav_host_fragment);\nNavController navController = navHostFragment.getNavController();\n\nEarlier, I was trying to inflate like this:\nNavController navController = Navigation.findNavController(this, R.id.nav_host_fragment);\nNavigationUI.setupActionBarWithNavController(this, navController, mAppBarConfiguration);\nNavigationUI.setupWithNavController(navigationView, navController);\n\nSo, my new code becomes:\nNavHostFragment navHostFragment = (NavHostFragment) getSupportFragmentManager().findFragmentById(R.id.nav_host_fragment);\nNavController navController = null;\nif (navHostFragment != null) {\n navController = navHostFragment.getNavController();\n}\nif (navController != null) {\n NavigationUI.setupActionBarWithNavController(MainActivity.this, navController, mAppBarConfiguration);\n NavigationUI.setupWithNavController(navigationView, navController);\n}\n\n", "Dear You have used two Intent launcher in your Manifest. 
Make only one Activity as launcher:\nYour manifest activity is : \n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"org.th.mybook\"\n android:versionCode=\"1\"\n android:versionName=\"1.0\" >\n <uses-sdk android:minSdkVersion=\"8\" />\n <application\n android:icon=\"@drawable/ic_launcher\"\n android:label=\"@string/app_name\" >\n <activity\n android:name=\".MainTabPanel\"\n android:label=\"@string/app_name\" >\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n <activity\n android:name=\"MyBookActivity\"\n android:label=\"@string/app_name\" >\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.ALTERNATIVE\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>\n\nnow write code will be ( i have made your 'MyActivityBook' your default activity launcher. Copy and paste it on your manifest.\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n <manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"org.th.mybook\"\n android:versionCode=\"1\"\n android:versionName=\"1.0\" >\n <uses-sdk android:minSdkVersion=\"8\" />\n <application\n android:icon=\"@drawable/ic_launcher\"\n android:label=\"@string/app_name\" >\n <activity\n android:name=\".MainTabPanel\"\n android:label=\"@string/app_name\" >\n\n </activity>\n <activity\n android:name=\"MyBookActivity\"\n android:label=\"@string/app_name\" >\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n </manifest>\n\nand Second error may be if you copy paste old code then please update com.example.packagename.FILE_NAME\nhope this will work !\n", "My problem looked similar, after some hours I found that not all of my project was converted to androidx, so I changed:\n<android.support.constraint.ConstraintLayout\n\ninto\n<androidx.constraintlayout.widget.ConstraintLayout\n\nthen androidStudio still complaints, but the problem solver can add constraintlayout to your libraries or so.\nNow it starts again! It's funny that past 8 years many similar problems appear with a different solution.\n", "After trying few answers they are either not related to my project or , I have tried cleaning and rebuilding (https://stackoverflow.com/a/48760966/8463813). But it didn't work for me directly. I have compared it with older version of code, in which i observed some library files(jars and aars in External Libraries directory) are missing. Tried Invalidate Cache and Restart worked, which created all the libraries and working fine.\n", "If you are using ViewModel and passing any argument but not using the ViewModelFacoty to implement this... then this issue might occur\nSolution: Use ViewModelFactory to pass any argument in ViewModel class.\n", "I haven't found the solution but I have traced the error that is causing it in my project. In multithreading if you're trying to use the result of one of the concurrent threads, that haven't yielded the results yet, then you would get a java.lang.RuntimeException.\n" ]
[ 13, 10, 8, 5, 3, 1, 1, 0, 0, 0 ]
[]
[]
[ "android" ]
stackoverflow_0009898444_android.txt
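A minimal sketch of the accepted fix above (answer 2): the NullPointerException comes from calling getApplicationContext() in a field initializer, which runs during construction, before the Activity is attached to a Context. Deferring the DateFormat lookup to onCreate() resolves it. Class, layout, and view IDs are taken from the question; imports are omitted, as in the question's own code.
public class MyBookActivity extends Activity {
    // Do not initialize here: no Context is attached yet during construction
    private java.text.DateFormat dateFormat;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        // Safe now: the Activity has a Context by the time onCreate() runs
        dateFormat = android.text.format.DateFormat.getDateFormat(getApplicationContext());
        final TextView date = (TextView) findViewById(R.id.textView1);
        date.setText(dateFormat.format(new java.util.Date()));
    }
}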
Q: No safe area insets value available. Make sure you are rendering `<SafeAreaProvider>` at the top of your app. - how can I resolve this error? Code: export default class CalorieScreen extends Component { constructor(){ super(); this.state={text:''} } render() { const { calories } = this.state; return ( <View style={styles.container}> <Header backgroundColor={'#9c8210'} centerComponent={{ text: 'Monkey Chunky', style: { padding:100, color: '#fff', fontSize: 20 }, }} /> </View> ); } } I created a login screen that goes into my calorie screen. When I click the button it takes me to the screen, but this error appears. A: Make sure you are rendering <SafeAreaProvider> at the top of your app. Something like this. import { SafeAreaProvider } from 'react-native-safe-area-context'; ... return <SafeAreaProvider>...</SafeAreaProvider>; ... A: As you are using react-native-elements make sure you install version 2.2.1 A: Wrap all the components in SafeAreaProvider in App.js import { SafeAreaProvider } from 'react-native-safe-area-context'; ... return <SafeAreaProvider>...</SafeAreaProvider>;
No safe area insets value available. Make sure you are rendering `<SafeAreaProvider>` at the top of your app. - how can I resolve this error?
Code: export default class CalorieScreen extends Component { constructor(){ super(); this.state={text:''} } render() { const { calories } = this.state; return ( <View style={styles.container}> <Header backgroundColor={'#9c8210'} centerComponent={{ text: 'Monkey Chunky', style: { padding:100, color: '#fff', fontSize: 20 }, }} /> </View> ); } } I created a login screen that goes into my calorie screen. When I click the button it takes me to the screen, but this error appears.
[ "Make sure you are rendering <SafeAreaProvider> at the top of your app. Something like this.\nimport { SafeAreaProvider } from 'react-native-safe-area-context';\n\n...\n return <SafeAreaProvider>...</SafeAreaProvider>;\n...\n\n", "As you are using react-native-elements make sure you install version 2.2.1\n", "wrapp all the component in SafeAreaProvide in app.js\nimport { SafeAreaProvider } from 'react-native-safe-area-context';\n\n...\nreturn ...;\n" ]
[ 2, 0, 0 ]
[]
[]
[ "expo", "react_native", "reactjs" ]
stackoverflow_0072842902_expo_react_native_reactjs.txt
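Putting the first answer into a concrete entry point: a sketch assuming the root component lives in App.js, that react-native-safe-area-context is installed, and that the screen from the question is exported from ./CalorieScreen (a hypothetical module path).
// App.js
import React from 'react';
import { SafeAreaProvider } from 'react-native-safe-area-context';
import CalorieScreen from './CalorieScreen'; // hypothetical path to the screen from the question

export default function App() {
  // Anything that reads safe-area insets (e.g. the Header) must render inside this provider
  return (
    <SafeAreaProvider>
      <CalorieScreen />
    </SafeAreaProvider>
  );
}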
Q: How to deploy a website and webservice in AWS using same domain name We have to deploy Restful Webservice(API services) and static pages in the AWS environment. Currently, our Webservice is hosted in EC2 instance with one ELB and Route53. Also, the static pages are deployed in the S3 bucket. The Webservice and Website, both should be in the same domain. When the user calls "www.domain.com/" it should be routed to the S3 server. However the API calls (www.domain.com/api/**) should be routed to EC2 through ELB. Is there any way to route API calls to ELB and website access calls to S3 using Route53? or What is the best approach to resolve this? A: Yes, you can deploy both using the same domain name. APIs should be deployed using api.domain.com and websites can deploy using domain.com. For that, you need to purchase an SSL certificate with a domain name and subdomain (eg: https://example.com and https://api.example.com) support and do the following. Configure certificate in AWS ACM Deploy your website in the S3 bucket with CloudFront Deploy APIs in EC2 with the support of a Load balancer (ELB) Configure Route53 and define two routes. Ie, create Records with 'A record type' in Route53 with ELB address and CloudFront address. See sample deployment architecture
How to deploy a website and a webservice in AWS using the same domain name
We have to deploy a RESTful Webservice (API services) and static pages in the AWS environment. Currently, our Webservice is hosted in an EC2 instance with one ELB and Route53. Also, the static pages are deployed in an S3 bucket. The Webservice and Website should both be on the same domain. When the user calls "www.domain.com/" it should be routed to the S3 server. However, the API calls (www.domain.com/api/**) should be routed to EC2 through the ELB. Is there any way to route API calls to the ELB and website access calls to S3 using Route53? Or what is the best approach to resolve this?
[ "Yes, you can deploy both using the same domain name. APIs should be deployed using api.domain.com and websites can deploy using domain.com. For that, you need to purchase an SSL certificate with a domain name and subdomain (eg: https://example.com and https://api.example.com) support and do the following.\n\nConfigure certificate in AWS ACM\nDeploy your website in the S3 bucket with CloudFront\nDeploy APIs in EC2 with the support of a Load balancer (ELB)\nConfigure Route53 and define two routes. Ie, create Records with 'A record type' in Route53 with ELB address and CloudFront address.\nSee sample deployment architecture\n\n\n" ]
[ 1 ]
[]
[]
[ "amazon_cloudfront", "amazon_ec2", "amazon_route53", "amazon_s3", "amazon_web_services" ]
stackoverflow_0066255388_amazon_cloudfront_amazon_ec2_amazon_route53_amazon_s3_amazon_web_services.txt
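Step 4 of the answer, sketched as a Route53 change batch (applied with aws route53 change-resource-record-sets --hosted-zone-id <your-zone> --change-batch file://records.json). All record names, DNS names, and the ELB zone ID below are placeholders; only Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront aliases always use.
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZELB-REGION-ID-PLACEHOLDER",
          "DNSName": "my-api-elb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}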
Q: How to create an Angular 9 application? I wanted to create an Angular 9 application. What I did was: create an empty folder; install the Angular CLI version 9 (npm i @angular/cli@9) in the same path; create the application (ng new testApp). Inside the new app it still has Angular version 15, and outside the new app as well. How can I fix this? A: Try these commands: npm uninstall -g @angular/cli npm cache clean --force npm install -g @angular/cli@9
How to create an Angular 9 application?
I wanted to create an Angular 9 application. What I did was: create an empty folder; install the Angular CLI version 9 (npm i @angular/cli@9) in the same path; create the application (ng new testApp). Inside the new app it still has Angular version 15, and outside the new app as well. How can I fix this?
[ "Try these commands:\nnpm uninstall -g @angular/cli\n\nnpm cache clean --force\n\nnpm install -g @angular/cli@9\n\n" ]
[ 0 ]
[]
[]
[ "angular" ]
stackoverflow_0074673929_angular.txt
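If downgrading the global CLI keeps getting overridden, a sketch of an alternative that pins the CLI version per command with npx (available with npm 5.2+), so the global install never matters:
npm uninstall -g @angular/cli
npm cache clean --force
npx -p @angular/cli@9 ng new testApp   # runs CLI 9 just for this command
cd testApp
npx ng version                         # resolves to the project-local CLI, should report 9.x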
Q: Smalltalk anomaly - why are the variables always the same, but when computing booleans they are different? I chose to try out Smalltalk for AOC 2022 puzzle 4. I'm predicating on each line and increment the counter if the constraints are met. I'm trying to understand why the '2-8,3-7' line doesn't met the requirements. Therefore, I started printing out the values to check what's happening. Apparently, when printing out the values by sending displayNl message to the objects, the values firstMax, firstMin etc. are always the same through the loop, containing the info from '2-4,6-8', i.e. the first line. But still, what's even more weird, that the counter gets incremented once, even though the first line doesn't meet the constraints. Then, I figured out that it actually computes the boolean overlapFirst and overlapSecond values correctly, when checking the '6-6,4-6' line, hence ifTrue increments the counter! WHY!? EDIT: I solved it by putting this instead of first putting the substrings into a variable: firstAssignment := (line substrings: ',') first. secondAssignment := (line substrings: ',') last. Does it mean that you cannot reassign OrderedCollection? I'm running this with gnu-small talk, by running command: gst main.st Here's data.txt. 2-4,6-8 2-3,4-5 5-7,7-9 2-8,3-7 6-6,4-6 2-6,4-8 Here's main.st. file := FileStream open: 'data.txt' mode: FileStream read. count := 0. file linesDo: [ :line | assignments := line substrings: ','. firstAssignment := assignments first. secondAssignment := assignments last. first := firstAssignment substrings: '-'. second := secondAssignment substrings: '-'. firstMin := first first. firstMax := first last. secondMin := second first. secondMax := second last. overlapFirst := (firstMin <= secondMin) & (firstMax >= secondMax). overlapSecond := (secondMin <= firstMin) & (secondMax >= firstMax). overlap := overlapSecond | overlapFirst. line displayNl. overlapFirst displayNl. overlapSecond displayNl. firstMin displayNl. firstMax displayNl. secondMin displayNl. secondMax displayNl. overlap ifTrue: [ 'Incremented!' displayNl. count := count + 1. ]. ]. Transcript show: count asString. file close. A: This solved my issue... I also edited the post, I'll need to learn how to do things in stackoverflow. I changed lines 5 and 6. file := FileStream open: 'data.txt' mode: FileStream read. count := 0. file linesDo: [ :line | firstAssignment := (line substrings: ',') first. secondAssignment := (line substrings: ',') last. first := firstAssignment substrings: '-'. second := secondAssignment substrings: '-'. firstMin := first first asInteger. firstMax := first last asInteger. secondMin := second first asInteger. secondMax := second last asInteger. overlapFirst := (firstMin <= secondMin) & (firstMax >= secondMax). overlapSecond := (secondMin <= firstMin) & (secondMax >= firstMax). overlap := overlapSecond | overlapFirst. line displayNl. overlap ifTrue: [ 'Incremented!' displayNl. count := count + 1. ]. ]. Transcript show: count asString. file close.
Smalltalk anomaly - why are the variables always the same, but when computing booleans they are different?
I chose to try out Smalltalk for AOC 2022 puzzle 4. I'm predicating on each line and incrementing the counter if the constraints are met. I'm trying to understand why the '2-8,3-7' line doesn't meet the requirements. Therefore, I started printing out the values to check what's happening. Apparently, when printing out the values by sending the displayNl message to the objects, the values firstMax, firstMin etc. are always the same throughout the loop, containing the info from '2-4,6-8', i.e. the first line. But still, what's even weirder is that the counter gets incremented once, even though the first line doesn't meet the constraints. Then, I figured out that it actually computes the boolean overlapFirst and overlapSecond values correctly when checking the '6-6,4-6' line, hence ifTrue increments the counter! WHY!? EDIT: I solved it by putting this instead of first putting the substrings into a variable: firstAssignment := (line substrings: ',') first. secondAssignment := (line substrings: ',') last. Does it mean that you cannot reassign an OrderedCollection? I'm running this with GNU Smalltalk, with the command: gst main.st Here's data.txt. 2-4,6-8 2-3,4-5 5-7,7-9 2-8,3-7 6-6,4-6 2-6,4-8 Here's main.st. file := FileStream open: 'data.txt' mode: FileStream read. count := 0. file linesDo: [ :line | assignments := line substrings: ','. firstAssignment := assignments first. secondAssignment := assignments last. first := firstAssignment substrings: '-'. second := secondAssignment substrings: '-'. firstMin := first first. firstMax := first last. secondMin := second first. secondMax := second last. overlapFirst := (firstMin <= secondMin) & (firstMax >= secondMax). overlapSecond := (secondMin <= firstMin) & (secondMax >= firstMax). overlap := overlapSecond | overlapFirst. line displayNl. overlapFirst displayNl. overlapSecond displayNl. firstMin displayNl. firstMax displayNl. secondMin displayNl. secondMax displayNl. overlap ifTrue: [ 'Incremented!' displayNl. count := count + 1. ]. ]. Transcript show: count asString. file close.
[ "This solved my issue... I also edited the post, I'll need to learn how to do things in stackoverflow.\nI changed lines 5 and 6.\nfile := FileStream open: 'data.txt' mode: FileStream read.\ncount := 0.\nfile linesDo: [ \n :line | \n firstAssignment := (line substrings: ',') first. \n secondAssignment := (line substrings: ',') last.\n first := firstAssignment substrings: '-'.\n second := secondAssignment substrings: '-'.\n firstMin := first first asInteger.\n firstMax := first last asInteger.\n secondMin := second first asInteger.\n secondMax := second last asInteger.\n overlapFirst := (firstMin <= secondMin) & (firstMax >= secondMax).\n overlapSecond := (secondMin <= firstMin) & (secondMax >= firstMax).\n\n overlap := overlapSecond | overlapFirst.\n\n line displayNl.\n\n overlap ifTrue: [\n 'Incremented!' displayNl.\n count := count + 1.\n ].\n].\n\nTranscript show: count asString.\n\n\nfile close.\n\n\n" ]
[ 0 ]
[]
[]
[ "gnu_smalltalk", "smalltalk" ]
stackoverflow_0074673633_gnu_smalltalk_smalltalk.txt
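The asInteger calls in that fix are likely the decisive change: substrings: yields Strings, and <= on Strings compares lexicographically rather than numerically, which explains why some range checks came out wrong. A small GNU Smalltalk sketch of the difference:
('9' <= '10') displayNl.    "false - lexicographic: $9 sorts after $1"
(9 <= 10) displayNl.        "true - numeric comparison"
(('2-8' substrings: '-') first) class displayNl.    "String, so convert with asInteger before comparing"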
Q: How can I solve bundle install error so that I can install my github.io site? I'm trying to set up my personal website via github.io. But when I enter the "bundle install" command I get the following error. I couldn't find the exact cause of the problem and I even reinstalled them all. Gem::Ext::BuildError: ERROR: Failed to build gem native extension. current directory: C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/ext/libv8-node C:/Ruby31-x64/bin/ruby.exe -I C:/Ruby31-x64/lib/ruby/3.1.0 extconf.rb creating Makefile C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/ext/libv8-node/builder.rb:12:in `build_libv8!': failed to download node 16.10.0 (Libv8::Node::BuilderError) from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/ext/libv8-node/location.rb:30:in `install!' from extconf.rb:9:in `<main>' ==== in C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/ext/libv8-node ==== running C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/libexec/download-node extconf failed, exit code 1 Gem files will remain installed in C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0 for inspection. Results logged to C:/Ruby31-x64/lib/ruby/gems/3.1.0/extensions/x64-mingw-ucrt/3.1.0/libv8-node-16.10.0.0/gem_make.out C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:102:in `run' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/ext_conf_builder.rb:28:in `build' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:171:in `build_extension' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:205:in `block in build_extensions' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:202:in `each' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:202:in `build_extensions' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/installer.rb:843:in `build_extensions' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/rubygems_gem_installer.rb:72:in `build_extensions' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/rubygems_gem_installer.rb:28:in `install' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/source/rubygems.rb:207:in `install' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/installer/gem_installer.rb:54:in `install' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/installer/gem_installer.rb:16:in `install_from_spec' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/installer/parallel_installer.rb:186:in `do_install' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/installer/parallel_installer.rb:177:in `block in worker_pool' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:62:in `apply_func' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:57:in `block in process_queue' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:54:in `loop' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:54:in `process_queue' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:91:in `block (2 levels) in create_threads' An error occurred while installing libv8-node (16.10.0.0), and Bundler cannot continue. In Gemfile: mini_racer was resolved to 0.6.3, which depends on libv8-node I updated these (gem, ruby, etc) but still nothing changed. What do you suggest I do, thanks in advance. A: Have you checked for potential differences in the Gemfile? I recommend you check out this potential solution.
How can I solve this bundle install error so that I can set up my github.io site?
I'm trying to set up my personal website via github.io. But when I enter the "bundle install" command I get the following error. I couldn't find the exact cause of the problem and I even reinstalled them all. Gem::Ext::BuildError: ERROR: Failed to build gem native extension. current directory: C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/ext/libv8-node C:/Ruby31-x64/bin/ruby.exe -I C:/Ruby31-x64/lib/ruby/3.1.0 extconf.rb creating Makefile C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/ext/libv8-node/builder.rb:12:in `build_libv8!': failed to download node 16.10.0 (Libv8::Node::BuilderError) from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/ext/libv8-node/location.rb:30:in `install!' from extconf.rb:9:in `<main>' ==== in C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/ext/libv8-node ==== running C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0/libexec/download-node extconf failed, exit code 1 Gem files will remain installed in C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/libv8-node-16.10.0.0 for inspection. Results logged to C:/Ruby31-x64/lib/ruby/gems/3.1.0/extensions/x64-mingw-ucrt/3.1.0/libv8-node-16.10.0.0/gem_make.out C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:102:in `run' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/ext_conf_builder.rb:28:in `build' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:171:in `build_extension' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:205:in `block in build_extensions' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:202:in `each' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/ext/builder.rb:202:in `build_extensions' C:/Ruby31-x64/lib/ruby/3.1.0/rubygems/installer.rb:843:in `build_extensions' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/rubygems_gem_installer.rb:72:in `build_extensions' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/rubygems_gem_installer.rb:28:in `install' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/source/rubygems.rb:207:in `install' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/installer/gem_installer.rb:54:in `install' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/installer/gem_installer.rb:16:in `install_from_spec' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/installer/parallel_installer.rb:186:in `do_install' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/installer/parallel_installer.rb:177:in `block in worker_pool' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:62:in `apply_func' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:57:in `block in process_queue' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:54:in `loop' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:54:in `process_queue' C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/bundler-2.3.26/lib/bundler/worker.rb:91:in `block (2 levels) in create_threads' An error occurred while installing libv8-node (16.10.0.0), and Bundler cannot continue. In Gemfile: mini_racer was resolved to 0.6.3, which depends on libv8-node I updated these (gem, ruby, etc) but still nothing changed. What do you suggest I do, thanks in advance.
[ "Have you checked for potential differences in the Gemfile? I recommend you check out this potential solution.\n" ]
[ 0 ]
[]
[]
[ "bundle", "jekyll_theme", "ruby", "rubygems" ]
stackoverflow_0074671287_bundle_jekyll_theme_ruby_rubygems.txt
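The build failure comes from libv8-node trying to download a prebuilt Node binary during installation, pulled in via mini_racer. For a GitHub Pages site, mini_racer is rarely needed at all. A sketch of a minimal Gemfile, stated as an assumption about the setup rather than a confirmed fix:
source "https://rubygems.org"

gem "github-pages", group: :jekyll_plugins
# gem "mini_racer"   # remove or comment out: it depends on libv8-node, which fails to build here
gem "webrick"        # required for `bundle exec jekyll serve` on Ruby 3.x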
Q: Subprocess not opening files I am writing a program to open other programs for me. os.system() would always freeze my app, so I switched to subprocess. I did some research and this is how a tutorial told me to open a program. I have only replaced the path for my variable, which contains the path. After I run this, only a command prompt window opens and nothing else. How can I fix this? Code: from subprocess import Popen filename1 = "C:/Program Files/Google/Chrome/Application/chrome.exe" Popen(["cmd", "/c", "start", filename1]) A: You need to create a single string with double quotes around it. In Python terms, you basically want r'"c:\torture\thanks Microsoft"' where the single quotes and the r create a Python string, which contains the file name inside double quotes. from subprocess import Popen filename1 = "C:/Program Files/Google/Chrome/Application/chrome.exe" Popen(["cmd", "/c", "start", f'"{filename1}"']) Quoting with CMD is always bewildering; maybe think about ways you can avoid it (or Windows altogether, if you have a choice). A: import subprocess filename1 = "C:\Program Files\Google\Chrome\Application\chrome.exe" subprocess.Popen(filename1)
Subprocess not opening files
I am writing a program to open other programs for me. os.system() would always freeze my app, so I switched to subprocess. I did some research and this is how a tutorial told me to open a program. I have only replaced the path for my variable, which contains the path. After I run this, only a command prompt window opens and nothing else. How can I fix this? Code: from subprocess import Popen filename1 = "C:/Program Files/Google/Chrome/Application/chrome.exe" Popen(["cmd", "/c", "start", filename1])
[ "You need to create a single string with double quotes around it. In Python terms, you basically want r'\"c:\\torture\\thanks Microsoft\"' where the single quotes and the r create a Python string, which contains the file name inside double quotes.\nfrom subprocess import Popen\n\nfilename1 = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"\nPopen([\"cmd\", \"/c\", \"start\", f'\"{filename1}\"'])\n\nQuoting with CMD is always bewildering; maybe think about ways you can avoid it (or Windows altogether, if you have a choice).\n", "import subprocess\nfilename1 = \"C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe\"\nsubprocess.Popen(filename1)\n\n" ]
[ 0, 0 ]
[]
[]
[ "popen", "python", "python_3.x", "subprocess" ]
stackoverflow_0074181574_popen_python_python_3.x_subprocess.txt
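Since the target here is an executable rather than a document, the cmd/start quoting problem can be sidestepped entirely, a sketch:
from subprocess import Popen

filename1 = "C:/Program Files/Google/Chrome/Application/chrome.exe"
# A single list element is passed to the OS as one argument,
# so the spaces in the path need no extra quoting and no shell.
Popen([filename1])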
Q: How to remove comma if a element inside the array is negative? for (int i = 1; i <= size; i++) { printf("Enter element %d: ", i); scanf("%d", &array[i]); if (array [i] < 0) break; } printf("["); for (int i = 1; i <= size; i++) { if (array[i] < 0) break; printf("%d", array[i]); } printf("]"); The output of the code is this Enter size: 10 Enter element 1: 6 Enter element 2: 8 Enter element 3: 23 Enter element 4: -2 [6,8,23,] And the professor is expecting it to be this Enter size: 10 Enter element 1: 6 Enter element 2: 8 Enter element 3: 23 Enter element 4: -2 [6,8,23] A: printf("["); for (int i = 1; i <= size; i++) { if (i != 1 && array[i] >= 0) printf(","); if (array[i] < 0) break; printf("%d", array[i]); } printf("]"); A: The task is not to remove the comma. The task is to print the comma only when needed. Can be done in several ways. Here is one: printf("["); for (int i = 1; i <= size; i++) { if (array[i] < 0) break; if (i == 1) printf("%d", array[i]); // For index 1 don't print a comma else printf(",%d", array[i]); // For all others start with a comma } printf("]"); Here is another: printf("["); // Handle index one before the loop if (size >= 1 && array[1] >= 0) { printf("%d", array[1]); // No comma printed } for (int i = 2; i <= size && array[i] >= 0; i++) { printf(",%d", array[i]); // Print comma before element } printf("]"); BTW: Array indexing normally starts from zero instead of one.
How to remove the comma if an element inside the array is negative?
for (int i = 1; i <= size; i++) { printf("Enter element %d: ", i); scanf("%d", &array[i]); if (array [i] < 0) break; } printf("["); for (int i = 1; i <= size; i++) { if (array[i] < 0) break; printf("%d", array[i]); } printf("]"); The output of the code is this Enter size: 10 Enter element 1: 6 Enter element 2: 8 Enter element 3: 23 Enter element 4: -2 [6,8,23,] And the professor is expecting it to be this Enter size: 10 Enter element 1: 6 Enter element 2: 8 Enter element 3: 23 Enter element 4: -2 [6,8,23]
[ "printf(\"[\");\nfor (int i = 1; i <= size; i++)\n{\n if (i != 1 && array[i] >= 0)\n printf(\",\");\n if (array[i] < 0)\n break;\n printf(\"%d\", array[i]);\n}\nprintf(\"]\");\n\n", "The task is not to remove the comma. The task is to print the comma only when needed.\nCan be done in several ways. Here is one:\nprintf(\"[\");\nfor (int i = 1; i <= size; i++)\n{\n \n if (array[i] < 0)\n break;\n if (i == 1)\n printf(\"%d\", array[i]); // For index 1 don't print a comma\n else\n printf(\",%d\", array[i]); // For all others start with a comma\n \n}\nprintf(\"]\");\n\nHere is another:\nprintf(\"[\");\n// Handle index one before the loop\nif (size >= 1 && array[1] >= 0)\n{\n printf(\"%d\", array[1]); // No comma printed\n}\nfor (int i = 2; i <= size && array[i] >= 0; i++)\n{\n printf(\",%d\", array[i]); // Print comma before element\n}\nprintf(\"]\");\n\nBTW: Array indexing normally starts from zero instead of one.\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "c" ]
stackoverflow_0074673894_arrays_c.txt
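The separator-before-element idiom from the first answer, as a complete compilable sketch (using the conventional 0-based indexing the second answer mentions, with hard-coded input in place of scanf):
#include <stdio.h>

int main(void) {
    int array[] = {6, 8, 23, -2};
    int size = 4;

    printf("[");
    for (int i = 0; i < size && array[i] >= 0; i++) {
        if (i > 0)
            printf(",");   /* comma only before the 2nd element onward */
        printf("%d", array[i]);
    }
    printf("]\n");         /* output: [6,8,23] */
    return 0;
}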
Q: MSbuild Error: The builds tools for v140 (Platform Toolset = 'v140') cannot be found I have a solution which is consists of a large number of projects (C++ and C#). I upgraded the solution to VS2015, so the toolset version for most of them are now set to V140, but a small number of projects need to remain in V110 (third party libraries, etc). When I build the solution in Visual Studio 2015, it builds just fine, but when TeamFoundationServer tries to build it, it fails with the following error: C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\Microsoft.Cpp.Platform.targets (44): The builds tools for v140 (Platform Toolset = 'v140') cannot be found. To build using the v140 build tools, either click the Project menu or right-click the solution, and then select "Update VC++ Projects...". Install v140 to build using the v140 build tools. I tried to specify the VisualStudioVersion or the path to the right MSBuild version as build arguments, but it didn't work as the rest of the projects (the ones in V110) will be in trouble. Any help would be very appreciated. A: I had the same issue. Steps given in this Solution helped me solve my issue. Repeating the steps here for future reference. If you're attempting to build a Win32 "Desktop" application, the easiest way to get the v140 Platform Toolset is via the Visual Studio Installer (please see the image, below, for an illustration of steps '3.' and '4.'): Launch the "Visual Studio Installer" from your start menu. Select "Modify" for the instance of Visual Studio 2017 you have installed. Under the "Summary" pane of the workload selector, click the "Desktop development with C++" expander (if it is collapsed) Check the "VC++ 2015.3 v140 toolset (x86,x64)" optional feature. A: The builds tools for v140 that's the platform toolset for VS2015. If you are using TFS2015, you must make sure the build environment on your build machine be the same as your local developer machine. You should install VS2015 on your build machine. If you are using TFS2013 or TFS2012, most probably MSBuild 12.0 is called.You need to set the build templates to point to MS Build version 14.0. For the details, check: TFS 2013 building .NET 4.6 / C# 6.0 A: Jacob's answer worked for me but C++ build tools were under VS Build Tools 2017 while I had VS 2019 Installer on Windows 10 as at July, 2019. A: You're trying to build using a different version of the build toolset that is either not installed on your system or that the project can't use. To change it to something that you have installed on your system, right click on the project in your Solution Explorer. Go to Properties. Configuration Properties>General>Platform Toolset>(Change this to a toolset that is installed on your system). Make sure you do this for the Debug and Release builds A: For the folks who are trying to do the same with Visual Studio Build Tools 2022, you may find this under Optional when choosing Desktop development with C++ Workload. Also, I had to update below variables in Environment variables to point to the new location. Path: replace previous path for BuildTools with C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Current\Bin VCTargetsPath: C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170 VS140COMNTOOLS: C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\ PS: I didn't have to change the value. But this setting was needed for me to work. 
A: Jacob's answer worked for me, but I had to click on the "Individual components" tab at the top for my Step 3. A: If you are using Visual Studio 2022 Build Tools, then the following PowerShell script will fix it: $VS_BTOOLS_EXE="vs_buildtools.exe" $VS_BTOOLS_URI="https://aka.ms/vs/17/release/vs_buildtools.exe" Invoke-WebRequest -Uri $VS_BTOOLS_URI -OutFile $VS_BTOOLS_EXE Start-Process -FilePath ./vs_BuildTools.exe -ArgumentList ` "--add", "Microsoft.VisualStudio.Component.VC.140", ` "--quiet", "--norestart", "--force", "--wait" -Wait -PassThru Useful when silent installation is needed as well.
MSbuild Error: The builds tools for v140 (Platform Toolset = 'v140') cannot be found
I have a solution which is consists of a large number of projects (C++ and C#). I upgraded the solution to VS2015, so the toolset version for most of them are now set to V140, but a small number of projects need to remain in V110 (third party libraries, etc). When I build the solution in Visual Studio 2015, it builds just fine, but when TeamFoundationServer tries to build it, it fails with the following error: C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\Microsoft.Cpp.Platform.targets (44): The builds tools for v140 (Platform Toolset = 'v140') cannot be found. To build using the v140 build tools, either click the Project menu or right-click the solution, and then select "Update VC++ Projects...". Install v140 to build using the v140 build tools. I tried to specify the VisualStudioVersion or the path to the right MSBuild version as build arguments, but it didn't work as the rest of the projects (the ones in V110) will be in trouble. Any help would be very appreciated.
[ "I had the same issue. Steps given in this Solution helped me solve my issue. Repeating the steps here for future reference.\nIf you're attempting to build a Win32 \"Desktop\" application, the easiest way to get the v140 Platform Toolset is via the Visual Studio Installer (please see the image, below, for an illustration of steps '3.' and '4.'):\n\nLaunch the \"Visual Studio Installer\" from your start menu.\nSelect \"Modify\" for the instance of Visual Studio 2017 you have\ninstalled.\nUnder the \"Summary\" pane of the workload selector, click the\n\"Desktop development with C++\" expander (if it is collapsed)\nCheck the \"VC++ 2015.3 v140 toolset (x86,x64)\" optional feature.\n\n\n", "The builds tools for v140 that's the platform toolset for VS2015. \nIf you are using TFS2015, you must make sure the build environment on your build machine be the same as your local developer machine. You should install VS2015 on your build machine. \nIf you are using TFS2013 or TFS2012, most probably MSBuild 12.0 is called.You need to set the build templates to point to MS Build version 14.0. For the details, check: TFS 2013 building .NET 4.6 / C# 6.0\n", "Jacob's answer worked for me but C++ build tools were under VS Build Tools 2017 while I had VS 2019 Installer on Windows 10 as at July, 2019.\n\n", "You're trying to build using a different version of the build toolset that is either not installed on your system or that the project can't use. To change it to something that you have installed on your system, right click on the project in your Solution Explorer. \nGo to Properties. Configuration Properties>General>Platform Toolset>(Change this to a toolset that is installed on your system). \nMake sure you do this for the Debug and Release builds\n", "For the folks who are trying to do the same with Visual Studio Build Tools 2022, you may find this under Optional when choosing Desktop development with C++ Workload. Also, I had to update below variables in Environment variables to point to the new location.\n\nPath: replace previous path for BuildTools with C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\MSBuild\\Current\\Bin\nVCTargetsPath: C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\MSBuild\\Microsoft\\VC\\v170\nVS140COMNTOOLS: C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\Common7\\Tools\\ PS: I didn't have to change the value. But this setting was needed for me to work.\n\n\n", "Jacob's answer worked for me, but I had to click on the \"Individual components\" tab at the top for my Step 3.\nimage\n", "If you are using Visual Studio 2022 Build Tools, then the following PowerShell script will fix it:\n$VS_BTOOLS_EXE=\"vs_buildtools.exe\"\n$VS_BTOOLS_URI=\"https://aka.ms/vs/17/release/vs_buildtools.exe\"\nInvoke-WebRequest -Uri $VS_BTOOLS_URI -OutFile $VS_BTOOLS_EXE\nStart-Process -FilePath ./vs_BuildTools.exe -ArgumentList `\n \"--add\", \"Microsoft.VisualStudio.Component.VC.140\", `\n \"--quiet\", \"--norestart\", \"--force\", \"--wait\" -Wait -PassThru\n\nUseful when silent installation is needed as well.\n" ]
[ 57, 7, 4, 2, 1, 0, 0 ]
[ "This solution worked perfectly for me: https://social.msdn.microsoft.com/Forums/vstudio/en-US/e0b9c601-2ece-4dcc-bac3-23ed7dd6801a/the-builds-tools-for-v120-platform-toolset-v120-cannot-be-found?forum=vclanguage\n" ]
[ -2 ]
[ "c++", "cross_platform", "msbuild", "tfsbuild", "visual_studio_2015" ]
stackoverflow_0033154696_c++_cross_platform_msbuild_tfsbuild_visual_studio_2015.txt
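Since the toolset is a per-project setting, installing v140 (as the answers above describe) is enough for the mixed solution: each .vcxproj declares its own toolset, so the v110 third-party projects stay untouched. A sketch of the relevant fragment of a project file:
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
  <ConfigurationType>Application</ConfigurationType>
  <PlatformToolset>v140</PlatformToolset> <!-- third-party projects keep v110 here -->
</PropertyGroup>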
Q: How to merge results from two gql queries in to one array of results? We have two services exposing two sets of schemas, merged in a gateway using Graphql Tools Schema Stitching Is it possible to merge queries from two services in such a way that it returns combined results? Example case: Book service contains data for books interface Searchable { id: ID! } type Book implements Searchable { id: ID! name: String # other fields } type Query { _search( term: String ): [Searchable] } User Service has the data for authors interface Searchable { id: ID! } type Author implements Searchable { id: ID! name: String # other fields } type Query { _search( term: String ): [Searchable] } Gateway interface Searchable { id: ID! } type Book implements Searchable { id: ID! name: String # other fields } type Author implements Searchable { id: ID! name: String # other fields } type Query { search( term: String ): [Searchable] } A: I can recommend using GraphQL-Mesh - it uses tools under the hood, and enables you to easily merge multiple sources (GraphQL and many others), manipulate it and get one GraphQL endpoint / schema
How to merge results from two gql queries in to one array of results?
We have two services exposing two sets of schemas, merged in a gateway using Graphql Tools Schema Stitching Is it possible to merge queries from two services in such a way that it returns combined results? Example case: Book service contains data for books interface Searchable { id: ID! } type Book implements Searchable { id: ID! name: String # other fields } type Query { _search( term: String ): [Searchable] } User Service has the data for authors interface Searchable { id: ID! } type Author implements Searchable { id: ID! name: String # other fields } type Query { _search( term: String ): [Searchable] } Gateway interface Searchable { id: ID! } type Book implements Searchable { id: ID! name: String # other fields } type Author implements Searchable { id: ID! name: String # other fields } type Query { search( term: String ): [Searchable] }
[ "I can recommend using GraphQL-Mesh - it uses tools under the hood, and enables you to easily merge multiple sources (GraphQL and many others), manipulate it and get one GraphQL endpoint / schema\n" ]
[ 1 ]
[]
[]
[ "graphql", "graphql_tools", "typegraphql" ]
stackoverflow_0074659826_graphql_graphql_tools_typegraphql.txt
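A sketch of the gateway side with @graphql-tools/stitch, as an alternative to GraphQL-Mesh: stitch the two subschemas, then resolve the gateway's search field by delegating to each service's _search and concatenating the arrays. bookSchema and userSchema are placeholders for however the two remote schemas are loaded.
const { stitchSchemas } = require('@graphql-tools/stitch');
const { delegateToSchema } = require('@graphql-tools/delegate');

const gatewaySchema = stitchSchemas({
  subschemas: [{ schema: bookSchema }, { schema: userSchema }], // placeholders
  typeDefs: `
    extend type Query {
      search(term: String): [Searchable]
    }
  `,
  resolvers: {
    Query: {
      search: async (root, args, context, info) => {
        // Ask both services, then merge the two result arrays
        const [books, authors] = await Promise.all([
          delegateToSchema({ schema: bookSchema, operation: 'query', fieldName: '_search', args, context, info }),
          delegateToSchema({ schema: userSchema, operation: 'query', fieldName: '_search', args, context, info }),
        ]);
        return [...(books || []), ...(authors || [])];
      },
    },
  },
});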
Q: How to open jupyter notebook from Windows 10 task bar Through some wizardry I cannot recall, I managed to install and implement Jupyter Notebook with an icon that opens Jupyter directly in browser I am occasionally asked how I did this. However, and slightly emparisingly, I cannot remember how I did this and am unable to help. I cannot seem to recreate this Jupyter Icon in any other set up Also, in attempting to recreate this Icon, I somehow managed to implement two Anaconda Prompts, Anaconda PowerShell Prompt and Anaconda Prompt What is the difference between the two? Which one should I remove? A: I somehow managed to implement two Anaconda Prompts, Anaconda PowerShell Prompt and Anaconda Prompt That is standard. The first Anaconda Prompt, will open the legacy cmd configured for conda. The second will open a powershell configured for conda. SO just keep both and use the one you are more comfortable with. How to open jupyter notebook from Windows 10 task bar Simply search for jupyter in the start menu and select Pin To Taskbar Creating it manually In case the above does not work, then you can manually create a shortcut and pin it to the taskbar. For that, we will need two paths, which for me are these: pathBase=C:\Users\FlyingTeller\miniconda3 #main folder of miniconda (or anaconda) pathEnv=C:\Users\FlyingTeller\miniconda3\envs\py37 #Folder of the environment where jupyter notebook is installed Then you do the following steps: Right Click on Desktop->New->Shortcut, enter as target path: <pathBase>\python.exe <pathBase>\cwp.py <pathEnv> <pathEnv>\python.exe <pathEnv>\Scripts\jupyter-notebook-script.py "%USERPROFILE%/ replacing the paths with the ones from above. Save the shortcut and then do Right Click->Properties. Now you can change the Start In directory to wherever you want the notebook to start. Additionally, you can change the icon to the jupyter icon, which is in <pathEnv>\Menu Now you have A shortcut to start the notebook on your desktop The possibility to simply do Right Click-> Pin to Taskbar for that Shortcut A: Search for Anaconda Navigator in your computer then Right Click and Select "Open file location". windows search for anaconda In the folder that is opened you can find shortcuts of programs that are installed via anaconda. You can copy and paste them anywhere you want. shortcuts folder
How to open Jupyter Notebook from the Windows 10 taskbar
Through some wizardry I cannot recall, I managed to install and implement Jupyter Notebook with an icon that opens Jupyter directly in browser I am occasionally asked how I did this. However, and slightly emparisingly, I cannot remember how I did this and am unable to help. I cannot seem to recreate this Jupyter Icon in any other set up Also, in attempting to recreate this Icon, I somehow managed to implement two Anaconda Prompts, Anaconda PowerShell Prompt and Anaconda Prompt What is the difference between the two? Which one should I remove?
[ "\nI somehow managed to implement two Anaconda Prompts, Anaconda PowerShell Prompt and Anaconda Prompt\n\nThat is standard. The first Anaconda Prompt, will open the legacy cmd configured for conda. The second will open a powershell configured for conda. SO just keep both and use the one you are more comfortable with.\n\nHow to open jupyter notebook from Windows 10 task bar\n\nSimply search for jupyter in the start menu and select Pin To Taskbar\n\nCreating it manually\nIn case the above does not work, then you can manually create a shortcut and pin it to the taskbar. For that, we will need two paths, which for me are these:\npathBase=C:\\Users\\FlyingTeller\\miniconda3 #main folder of miniconda (or anaconda)\npathEnv=C:\\Users\\FlyingTeller\\miniconda3\\envs\\py37 #Folder of the environment where jupyter notebook is installed\n\nThen you do the following steps:\nRight Click on Desktop->New->Shortcut, enter as target path:\n<pathBase>\\python.exe <pathBase>\\cwp.py <pathEnv> <pathEnv>\\python.exe <pathEnv>\\Scripts\\jupyter-notebook-script.py \"%USERPROFILE%/\n\nreplacing the paths with the ones from above. Save the shortcut and then do Right Click->Properties.\nNow you can change the Start In directory to wherever you want the notebook to start. Additionally, you can change the icon to the jupyter icon, which is in\n<pathEnv>\\Menu\n\nNow you have\n\nA shortcut to start the notebook on your desktop\nThe possibility to simply do Right Click-> Pin to Taskbar for that Shortcut\n\n", "Search for Anaconda Navigator in your computer then Right Click and Select \"Open file location\".\nwindows search for anaconda\nIn the folder that is opened you can find shortcuts of programs that are installed via anaconda. You can copy and paste them anywhere you want.\nshortcuts folder\n" ]
[ 2, 0 ]
[]
[]
[ "anaconda", "jupyter_notebook", "miniconda", "powershell", "python" ]
stackoverflow_0068420377_anaconda_jupyter_notebook_miniconda_powershell_python.txt
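The shortcut described in the first answer can also be created in one step from PowerShell. A sketch: the two paths mirror pathBase and pathEnv from that answer and are assumptions about your install, as is the desktop location.
$base = "$env:USERPROFILE\miniconda3"         # pathBase (assumed)
$envp = "$base\envs\py37"                     # pathEnv (assumed)
$ws   = New-Object -ComObject WScript.Shell
$lnk  = $ws.CreateShortcut("$env:USERPROFILE\Desktop\Jupyter Notebook.lnk")
$lnk.TargetPath       = "$base\python.exe"
$lnk.Arguments        = "$base\cwp.py $envp $envp\python.exe $envp\Scripts\jupyter-notebook-script.py ""$env:USERPROFILE/"""
$lnk.WorkingDirectory = "$env:USERPROFILE"    # where the notebook starts
$lnk.Save()
# Then right-click the new shortcut and choose Pin to Taskbar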
Q: How to enable /std:c++17 in VS2017 with CMake I'm trying to add the /std:c++17 compiler flag to VS2017 with CMake. I'm using the "modern" cross-platform way so far: set(CMAKE_CXX_STANDARD 14) set(CMAKE_CXX_STANDARD_REQUIRED ON) set(CMAKE_CXX_EXTENSIONS OFF) # -std=c++11 instead of -std=gnu++11 set(MY_CXX_COMPILE_FEATURES cxx_generic_lambdas cxx_range_for cxx_strong_enums) add_library(mylib INTERFACE) target_compile_features(mylib INTERFACE ${MY_CXX_COMPILE_FEATURES}) This adds /std:c++14 in VS2017 (which might be the default anyway?). However I'm having trouble switching this to C++17 (i.e. having it add /std:c++17). If I just add it manually, I get the not-so-nice warning because both flags are present: 1>cl : Command line warning D9025: overriding '/std:c++14' with '/std:c++17' I've tried set(CMAKE_CXX_STANDARD 17) but it has no effect, in fact the CMake documentation mentions that CMAKE_CXX_STANDARD has no effect on VS anyway. As for adding a C++17 feature to target_compile_features, it doesn't seem like there are any yet (even in CMake-3.9.0-rc5), and even if there were, I'm specifically only using std::optional from C++17, and there's no target_compile_features flags for library features like std::optional. So my question is, what's the best (or least ugly) way to do this with CMake? And in a way so it'll also work for gcc and clang? I'm happy to use a very recent CMake version (3.8 or 3.9). I prefer it to be "nice" and not manually looping through CXX_COMPILE_FLAGS and removing the string "/std:c++14" or some hack like that. (Edit: It can also be the VS/std:c++latest switch - whichever is possible. Both work for the purpose.) A: Turning my comment into an answer The CMake team is working on it for VS2017 (as for July 2017, for upcoming CMake version 3.10): CMake: MSVC standard version switches Those flags seem to be rather new switches (as related to the date of this question): VS 2017 15.3 preview now supports /std:c++17 So for Visual Studio you have to "manually" replace or append the compiler switches until CMake officially does support it. Here is a code snippet that I've tested for std:c++latest (which is already supported e.g. in my CMake 3.8.0 version): if (MSVC_VERSION GREATER_EQUAL "1900") include(CheckCXXCompilerFlag) CHECK_CXX_COMPILER_FLAG("/std:c++latest" _cpp_latest_flag_supported) if (_cpp_latest_flag_supported) add_compile_options("/std:c++latest") endif() endif() For CLang and GNU the support was merged into the main source code branch begin of 2017 and is part of CMake version 3.8 and above: CMake: Features: Add support for C++ 17 language standard A: CMake versions higher than 3.10 support MSVC C++ standard switches for MSVC versions newer than 19.0.24215. If either of the version requirements are not met, then they have no effect. The only portable approach, to ensuring your program is compiled with the correct C++ standard mode on Visual Studio, is to require at least CMake 3.10, set the target property CXX_STANDARD to your desired value and CXX_STANDARD_REQUIRED to ON. Example usage: set_property(TARGET my_target PROPERTY CXX_STANDARD 17) set_property(TARGET my_target PROPERTY CXX_STANDARD_REQUIRED ON) A: You can add the following line to your CmakeLists.txt file set(GCC_COMPILE_FLAGS "${GCC_COMPILE_FLAGS} /std:c++17)
How to enable /std:c++17 in VS2017 with CMake
I'm trying to add the /std:c++17 compiler flag to VS2017 with CMake. I'm using the "modern" cross-platform way so far: set(CMAKE_CXX_STANDARD 14) set(CMAKE_CXX_STANDARD_REQUIRED ON) set(CMAKE_CXX_EXTENSIONS OFF) # -std=c++11 instead of -std=gnu++11 set(MY_CXX_COMPILE_FEATURES cxx_generic_lambdas cxx_range_for cxx_strong_enums) add_library(mylib INTERFACE) target_compile_features(mylib INTERFACE ${MY_CXX_COMPILE_FEATURES}) This adds /std:c++14 in VS2017 (which might be the default anyway?). However I'm having trouble switching this to C++17 (i.e. having it add /std:c++17). If I just add it manually, I get the not-so-nice warning because both flags are present: 1>cl : Command line warning D9025: overriding '/std:c++14' with '/std:c++17' I've tried set(CMAKE_CXX_STANDARD 17) but it has no effect, in fact the CMake documentation mentions that CMAKE_CXX_STANDARD has no effect on VS anyway. As for adding a C++17 feature to target_compile_features, it doesn't seem like there are any yet (even in CMake-3.9.0-rc5), and even if there were, I'm specifically only using std::optional from C++17, and there's no target_compile_features flags for library features like std::optional. So my question is, what's the best (or least ugly) way to do this with CMake? And in a way so it'll also work for gcc and clang? I'm happy to use a very recent CMake version (3.8 or 3.9). I prefer it to be "nice" and not manually looping through CXX_COMPILE_FLAGS and removing the string "/std:c++14" or some hack like that. (Edit: It can also be the VS/std:c++latest switch - whichever is possible. Both work for the purpose.)
[ "Turning my comment into an answer\n\nThe CMake team is working on it for VS2017 (as for July 2017, for upcoming CMake version 3.10): \nCMake: MSVC standard version switches\nThose flags seem to be rather new switches (as related to the date of this question):\n\nVS 2017 15.3 preview now supports /std:c++17\n\nSo for Visual Studio you have to \"manually\" replace or append the compiler switches until CMake officially does support it.\nHere is a code snippet that I've tested for std:c++latest (which is already supported e.g. in my CMake 3.8.0 version):\nif (MSVC_VERSION GREATER_EQUAL \"1900\")\n include(CheckCXXCompilerFlag)\n CHECK_CXX_COMPILER_FLAG(\"/std:c++latest\" _cpp_latest_flag_supported)\n if (_cpp_latest_flag_supported)\n add_compile_options(\"/std:c++latest\")\n endif()\nendif()\n\nFor CLang and GNU the support was merged into the main source code branch begin of 2017 and is part of CMake version 3.8 and above:\nCMake: Features: Add support for C++ 17 language standard\n\n", "CMake versions higher than 3.10 support MSVC C++ standard switches for MSVC versions newer than 19.0.24215. If either of the version requirements are not met, then they have no effect.\nThe only portable approach, to ensuring your program is compiled with the correct C++ standard mode on Visual Studio, is to require at least CMake 3.10, set the target property CXX_STANDARD to your desired value and CXX_STANDARD_REQUIRED to ON.\nExample usage:\nset_property(TARGET my_target PROPERTY CXX_STANDARD 17)\nset_property(TARGET my_target PROPERTY CXX_STANDARD_REQUIRED ON)\n\n", "You can add the following line to your CmakeLists.txt file\nset(GCC_COMPILE_FLAGS \"${GCC_COMPILE_FLAGS} /std:c++17)\n\n" ]
[ 27, 22, 0 ]
[]
[]
[ "c++", "c++17", "cmake", "visual_studio_2017" ]
stackoverflow_0044960715_c++_c++17_cmake_visual_studio_2017.txt
Q: How to merge two JSON data sets by mapping I have two JSON data sets:
json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]
json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]
I want to merge both JSON lists; the needed output is
json_final = [{'purchasedPerson__id': 2, 'credit': 3000, 'debit': 0}, {'purchasedPerson__id': 4, 'credit': 5000, 'debit': 2000}, {'purchasedPerson__id': 1, 'credit': 0, 'debit': 8526}]
How can the above be done?

A: This is a case where pandas can be very convenient. By converting to dataframes and merging on "purchasedPerson__id", you will get the desired output:
import pandas as pd

json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]
json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]
df1 = pd.DataFrame(json_1)
df2 = pd.DataFrame(json_2)

df_out = pd.merge(df1, df2, on="purchasedPerson__id", how="outer").fillna(0)
df_out.to_dict(orient="records")

Output:
[{'purchasedPerson__id': 2, 'credit': 3000.0, 'debit': 0.0}, {'purchasedPerson__id': 4, 'credit': 5000.0, 'debit': 2000.0}, {'purchasedPerson__id': 1, 'credit': 0.0, 'debit': 8526.0}]

A: json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]
json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]

# create a dictionary for the merged data
data = {}

# loop through each JSON and add the data to the dictionary
for j in json_1:
    data[j['purchasedPerson__id']] = {'credit': j['credit'], 'debit': 0}

for j in json_2:
    if j['purchasedPerson__id'] in data:
        data[j['purchasedPerson__id']] = {'credit': data[j['purchasedPerson__id']]['credit'], 'debit': j['debit']}
    else:
        data[j['purchasedPerson__id']] = {'credit': 0, 'debit': j['debit']}

# convert the dictionary to a list
json_final = []
for key, value in data.items():
    json_final.append({'purchasedPerson__id': key, 'credit': value['credit'], 'debit': value['debit']})

print(json_final)

A: # Initialize the final JSON array
json_final = []

# Loop through the first JSON data set
for item in json_1:
    # Initialize the final JSON object for this item
    final_item = {'purchasedPerson__id': item['purchasedPerson__id'], 'credit': item['credit'], 'debit': 0}
    # Loop through the second JSON data set
    for item2 in json_2:
        # If the id matches, update the final item with the debit value
        if item['purchasedPerson__id'] == item2['purchasedPerson__id']:
            final_item['debit'] = item2['debit']
    # Add the final item to the final JSON array
    json_final.append(final_item)

# Loop through the second JSON data set
for item in json_2:
    # Initialize a flag to keep track of whether the item already exists in the final JSON array
    exists = False
    # Loop through the final JSON array
    for final_item in json_final:
        # If the id matches, set the exists flag to True
        if final_item['purchasedPerson__id'] == item['purchasedPerson__id']:
            exists = True
    # If the item does not exist in the final JSON array, add it with credit and debit values of 0
    if not exists:
        json_final.append({'purchasedPerson__id': item['purchasedPerson__id'], 'credit': 0, 'debit': item['debit']})
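A pandas-free variant of the same merge, keyed on purchasedPerson__id (the zero defaults mirror the desired output above):

json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]
json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]

merged = {}  # id -> record, so each id appears exactly once
for row in json_1 + json_2:
    rec = merged.setdefault(row['purchasedPerson__id'],
                            {'purchasedPerson__id': row['purchasedPerson__id'], 'credit': 0, 'debit': 0})
    rec.update(row)  # fills in credit and/or debit for this id

json_final = list(merged.values())
print(json_final)
# [{'purchasedPerson__id': 2, 'credit': 3000, 'debit': 0}, {'purchasedPerson__id': 4, 'credit': 5000, 'debit': 2000}, {'purchasedPerson__id': 1, 'credit': 0, 'debit': 8526}]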
How to merge two JSON data sets by mapping
I have two json datas as json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}] json_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}] i want to merge both the json and needed optput as json_final = [{'purchasedPerson__id': 2, 'credit': 3000 , 'debit'=0}, {'purchasedPerson__id': 4, 'credit': 5000 , 'debit'=2000}, {'purchasedPerson__id': 1, 'credit'=0, 'debit': 8526}] how the above method can be done
[ "This is a case where pandascan be very convenient. By converting to dataframes and merging on \"purchasedPerson__id\", you will get the desired output:\nimport pandas as pd\n\njson_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]\njson_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]\ndf1 = pd.DataFrame(json_1)\ndf2 = pd.DataFrame(json_2)\n\ndf_out = pd.merge(df1, df2, on=\"purchasedPerson__id\", how=\"outer\").fillna(0)\ndf_out.to_dict(orient=\"records\")\n\nOutput:\n[{'purchasedPerson__id': 2, 'credit': 3000.0, 'debit': 0.0}, {'purchasedPerson__id': 4, 'credit': 5000.0, 'debit': 2000.0}, {'purchasedPerson__id': 1, 'credit': 0.0, 'debit': 8526.0}]\n\n", "json_1 = [{'purchasedPerson__id': 2, 'credit': 3000}, {'purchasedPerson__id': 4, 'credit': 5000}]\njson_2 = [{'purchasedPerson__id': 1, 'debit': 8526}, {'purchasedPerson__id': 4, 'debit': 2000}]\n\n# create a dictionary for the merged data\ndata = {}\n\n# loop through each JSON and add the data to the dictionary\nfor j in json_1:\n data[j['purchasedPerson__id']] = {'credit': j['credit'], 'debit': 0}\n\nfor j in json_2:\n if j['purchasedPerson__id'] in data:\n data[j['purchasedPerson__id']] = {'credit': data[j['purchasedPerson__id']]['credit'], 'debit': j['debit']}\n else:\n data[j['purchasedPerson__id']] = {'credit': 0, 'debit': j['debit']}\n\n# convert the dictionary to a list\njson_final = []\nfor key, value in data.items():\n json_final.append({'purchasedPerson__id': key, 'credit': value['credit'], 'debit': value['debit']})\n\nprint(json_final)\n\n", "# Initialize the final JSON array\njson_final = []\n\n# Loop through the first JSON data set\nfor item in json_1:\n # Initialize the final JSON object for this item\n final_item = {'purchasedPerson__id': item['purchasedPerson__id'], 'credit': item['credit'], 'debit': 0}\n # Loop through the second JSON data set\n for item2 in json_2:\n # If the id matches, update the final item with the debit value\n if item['purchasedPerson__id'] == item2['purchasedPerson__id']:\n final_item['debit'] = item2['debit']\n # Add the final item to the final JSON array\n json_final.append(final_item)\n\n# Loop through the second JSON data set\nfor item in json_2:\n # Initialize a flag to keep track of whether the item already exists in the final JSON array\n exists = False\n # Loop through the final JSON array\n for final_item in json_final:\n # If the id matches, set the exists flag to True\n if final_item['purchasedPerson__id'] == item['purchasedPerson__id']:\n exists = True\n # If the item does not exist in the final JSON array, add it with credit and debit values of 0\n if not exists:\n json_final.append({'purchasedPerson__id': item['purchasedPerson__id'], 'credit': 0, 'debit': item['debit']})\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "json", "python", "python_jsons", "python_jsonschema" ]
stackoverflow_0074673859_json_python_python_jsons_python_jsonschema.txt
Q: Java - Dark Mode, Best Way to Do it? I want to create a dark mode for a Java Swing GUI. I want to find the best approach with respect to performance and best practices. But...
The first approach was to "get" all components in a Frame or JPanel and, for each one, exchange the foreground, background and other properties.
What is the best approach? And why?
Example:
JPanel jT = ...
Component[] components = jT.getComponents();
for (Component item : components) {
    if (item instanceof JComboBox) {
        deepBlueMode_JList((JComboBox) item);
    }
}

And then...
public static void deepBlueMode_JList(JComboBox jC){
    jC.setBackground(ColorsEnum.NAVY_BLACK.getColor());
    jC.setForeground(Color.WHITE);
    jC.setFont(new Font("Tahoma", Font.BOLD, 12));
}

But I read that in some cases it is better to create a custom component, like...
public ExComboBox(ColorUI colorUI){
    ExComboBox.colorUI = colorUI;
    borderButton = BorderFactory.createMatteBorder(0, 0, 2, 0, colorUI.getColorBorde());
    this.fuente = new Font("Segoe UI", Font.BOLD, 14);
    selectionForeground = colorUI.getColorFondo();
    selectionBackground = colorUI.getColorTerciario();
}

What should I do, and why?
Thanks

A: I have made a dark and light mode for one of my Swing applications, inspired by Android's theming system. Here's how I made it:
Step one: Create an interface which will have getters for the colors of your theme. Remember it should have all the colors that you are going to use.
Step two: Implement the interface and create two color classes for dark and light mode.
Step three: Create a singleton theme manager class which will be responsible for managing the current theme of the app. It should handle everything from returning the colors of the active theme to adding/removing listeners for theme change.
Step four: Create custom components or override components, add a listener for theme change to the theme manager, and listen to its callback. Change the colors of the component and repaint it when the theme type changes.
You can modify these steps as per your need. I also used the JSystemThemeDetector library to change the theme of my app at runtime when the OS theme changed.
Why do I think this is a better approach? Because listening for the theme change and responding to it in a custom component makes maintenance easy.
Hard-coding a custom theme is a very bad idea; your first approach in particular is going to be a nightmare when used on a big project.
Is it better than Swing's look and feel?
I don't know, I only used it on scrollbars. But my app needed dynamic theming, so the above method was easier to implement. The performance I got was solid too.
If you want, you can look at the source code of my project's theme manager to get an idea.
https://github.com/iProgram22/Frost/tree/master/src%2Fmain%2Fjava%2Fmaterial%2Ftheme
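A minimal sketch of the theme-manager pattern from the steps above; all class and method names here are illustrative, not from an existing library:

import java.awt.Color;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface Theme {                          // step one: the colors the UI needs
    Color background();
    Color foreground();
}

final class DarkTheme implements Theme {   // step two: one class per mode
    public Color background() { return new Color(0x12, 0x12, 0x12); }
    public Color foreground() { return Color.WHITE; }
}

final class ThemeManager {                 // step three: singleton plus listeners
    private static final ThemeManager INSTANCE = new ThemeManager();
    private final List<Runnable> listeners = new CopyOnWriteArrayList<>();
    private volatile Theme current = new DarkTheme();

    static ThemeManager get() { return INSTANCE; }
    Theme theme() { return current; }
    void addListener(Runnable l) { listeners.add(l); }
    void setTheme(Theme t) { current = t; listeners.forEach(Runnable::run); }
}

class ThemedPanel extends javax.swing.JPanel {   // step four: self-recoloring component
    ThemedPanel() {
        ThemeManager.get().addListener(this::apply);
        apply();
    }
    private void apply() {
        setBackground(ThemeManager.get().theme().background());
        setForeground(ThemeManager.get().theme().foreground());
        repaint();
    }
}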
Java - Dark Mode, Best Way to Do it?
I want to create a dark mode for Java Swing GUI. I want to find the best way related performance or best practices. But... The first approach was to "get" al components in a Frame or JPanel and for each one exchange the foreground and background and other stuffs for each component. What is the best approach? and why? Example: JPanel jT = ... Component[] components = jT.getComponents(); if(item instanceof JComboBox){ deepBlueMode_JList((JComboBox) item); } And then... public static void deepBlueMode_JList(JComboBox jC){ jC.setBackground(ColorsEnum.NAVY_BLACK.getColor()); jC.setForeground(Color.WHITE); jC.setFont(new Font("Tahoma", Font.BOLD, 12)); } But I read that in some cases is better to create a custom component. Like... public ExComboBox(ColorUI colorUI){ ExComboBox.colorUI = colorUI; borderButton = BorderFactory.createMatteBorder(0, 0, 2, 0, colorUI.getColorBorde()); this.fuente = new Font("Segoe UI", Font.BOLD, 14); selectionForeground = colorUI.getColorFondo(); selectionBackground = colorUI.getColorTerciario(); } What should I do? and Why? Thanks
[ "I have made a dark and light mode for one of my swing application, inspired by Android's theming system. Here's how I made it:\nStep one: Create an interface which will have getters for the color of your theme. Remeber it should have all the colors that you are going to use.\nStep two: Implement the interface and create two color classes for dark and light mode.\nStep three: Create a singleton theme manager class which will be responsible for managing the current theme of the app. It should handle everything from returning the colors of the active theme to adding/removing listeners for theme change.\nStep four: Create custom components or override components and add a listener for theme change to the theme manager and listen to its callback. Change the colors of the component and repaint it when the theme type changes.\nYou can modify these steps as per your need. I also used JSystemThemeDetector library to change the theme of my app at runtime when OS theme changed.\nWhy I think this is a better approach? Because listening to theme change and responding to it in custom component makes maintenance easy.\nHard coding custom theme is a very very bad idea, especially your first approach is going to be a nightmare when used on a big project.\nIs it better than swing's look and feel?\nI don't know, I only used it on scrollbars. But my app needed dynamic theming so the above method that I used was easier to implement. The performance I got was solid too.\nIf you want you can look at the source code of my project's theme manager to get an idea.\nhttps://github.com/iProgram22/Frost/tree/master/src%2Fmain%2Fjava%2Fmaterial%2Ftheme\n" ]
[ 0 ]
[]
[]
[ "java", "performance", "swing", "user_interface" ]
stackoverflow_0074493356_java_performance_swing_user_interface.txt
Q: How to do convolution integration (Duhamel integration) in Python? I'm studying structural dynamics. I want to write code for Duhamel integration, which is a kind of convolution integration. If the initial conditions are y(0)=0 and y'(0)=0, the Duhamel integral looks like this.
enter image description here
Using TI-Nspire
I solved this problem with my TI-Nspire software. The result looks like this.
enter image description here
Its response y at t=1 is -0.006238.
Using Python (SymPy)
I tried to solve this problem with Python (in a Jupyter Notebook), but I couldn't get it to work. I wrote the code like this:
from sympy import *

t, tau = symbols('t, tau')
m = 6938.78
k = 379259
wn = sqrt(k/m)
wd = wn*sqrt(1-0.05**2)
eq1 = (900*sin(5.3*tau))
eq2 = exp(-0.05*wn*(t-tau))
eq3 = sin(wd*(t-tau))
y0 = 1/(m*wd)*integrate(eq1*eq2*eq3, (tau, 0, t))
y0

But I couldn't get the result.
enter image description here
Is there another way to solve this problem?

A: Use the unevaluated Integral and then substitute in a value for t and use the doit method:
...
>>> y0=1/(m*wd)*Integral(eq1*eq2*eq3,(tau,0,t))
>>> y0.subs(t,1).doit()
-0.00623772329557205
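If a symbolic antiderivative is not required, a purely numerical evaluation is another option; a sketch using scipy.integrate.quad with the same constants as above:

from math import exp, sin, sqrt
from scipy.integrate import quad

m = 6938.78
k = 379259
wn = sqrt(k / m)
wd = wn * sqrt(1 - 0.05**2)

def duhamel(t):
    # integrand of the Duhamel integral for this load and damping
    f = lambda tau: 900*sin(5.3*tau) * exp(-0.05*wn*(t - tau)) * sin(wd*(t - tau))
    val, _err = quad(f, 0, t)
    return val / (m * wd)

print(duhamel(1.0))  # approx -0.006238, matching the TI-Nspire result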
How to do convolution integration (Duhamel integration) in Python?
Hi~ I'm studying about structural dynamics. I want to make a code about Duhamel Integration which is kind of Convoution Integration. If the initial conditions are y(0)=0 and y'(0)=0, Duhamel Integration is like this. enter image description here Using Ti Nspire I solved this problem with my Ti Npire softwere. The result is like that. enter image description here Its response(y) of t=1 is -0.006238 Using python(sympy) I tried to solve this problem using by Python(Jupyter Notebook). But I couldn't solve the problem. I wrote the code like this. from sympy import * t, tau=symbols('t, tau') m=6938.78 k=379259 wn=sqrt(k/m) wd=wn*sqrt(1-0.05**2) eq1=(900*sin(5.3*tau)) eq2=exp(-0.05*wn*(t-tau)) eq3=sin(wd*(t-tau)) y0=1/(m*wd)*integrate(eq1*eq2*eq3,(tau,0,t)) y0 But I couldn't get the result. enter image description here Is there other way to solve this problem?
[ "Use the unevaluated Integral and then substitute in a value for t and use the doit method:\n...\n>>> y0=1/(m*wd)*Integral(eq1*eq2*eq3,(tau,0,t))\n>>> y0.subs(t,1).doit()\n-0.00623772329557205\n\n" ]
[ 1 ]
[]
[]
[ "convolution", "integrate", "python", "response", "sympy" ]
stackoverflow_0074672385_convolution_integrate_python_response_sympy.txt
Q: Why do I see this weird symbol in place of characters in the char array in java? output I'm getting these weird symbols while trying to display this char array. Same problem in online compiler too. what to do? It happened once to me in C++ too. Either it shows nothing or this. It's making me crazy. package com.avishkar; import java.util.Arrays; public class Main { public static void main(String[] args) { String S = "aeroplane"; char[] arr = new char[S.length()]; for (int i = 0; i < S.length(); i++) { arr[i] = S.charAt(i); } Arrays.sort(arr); // System.out.println(Arrays.toString(arr)); int count1 = 0, count2 = 0; for (int i = 0; i < arr.length; i++) { char x = arr[i]; if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') { count2++; } else { count1++; } } char[] con = new char[count1]; char[] vow = new char[count2]; int k = 0, l = 0; for (int i = 0; i < count1; i++) { char x = arr[i]; if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') { vow[l] = x; l++; } else { con[k] = x; k++; } } System.out.println(Arrays.toString(con)); System.out.println(Arrays.toString(vow)); int x = 0, y = 0; char[] finArr = new char[count1 + count2]; for (int i = 0; i < finArr.length; i++) { if (count1 > count2) { if (i % 2 == 0) { finArr[i] = con[x]; x++; } else { finArr[i] = vow[y]; y++; } } else { if (i % 2 == 0) { finArr[i] = vow[y]; y++; } else { finArr[i] = con[x]; x++; } } } String ans = ""; for (int i = 0; i < finArr.length; i++) { ans += finArr[i]; } if (count1 - count2 > 1 || count2 - count1 > 1) { System.out.println("-1"); } System.out.println(ans); } } A: I modified your code to print out the hexadecimal value of the characters, rather than the characters themselves. The output looks like this: 0 0 0 0 61 61 65 65 0 61 0 61 0 65 0 65 0 0 Your "unprintable" characters are hexadecimal zero, which is unprintable. Here's the modified code. import java.util.Arrays; public class Main { public static void main(String[] args) { String S = "aeroplane"; char[] arr = new char[S.length()]; for (int i = 0; i < S.length(); i++) { arr[i] = S.charAt(i); } Arrays.sort(arr); // System.out.println(Arrays.toString(arr)); int count1 = 0, count2 = 0; for (int i = 0; i < arr.length; i++) { char x = arr[i]; if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') { count2++; } else { count1++; } } char[] con = new char[count1]; char[] vow = new char[count2]; int k = 0, l = 0; for (int i = 0; i < count1; i++) { char x = arr[i]; if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') { vow[l] = x; l++; } else { con[k] = x; k++; } } for (char c : con) { System.out.print(Integer.toHexString((int) c) + " "); } System.out.println(); // System.out.println(Arrays.toString(con)); for (char c : vow) { System.out.print(Integer.toHexString((int) c) + " "); } System.out.println(); // System.out.println(Arrays.toString(vow)); int x = 0, y = 0; char[] finArr = new char[count1 + count2]; for (int i = 0; i < finArr.length; i++) { if (count1 > count2) { if (i % 2 == 0) { finArr[i] = con[x]; x++; } else { finArr[i] = vow[y]; y++; } } else { if (i % 2 == 0) { finArr[i] = vow[y]; y++; } else { finArr[i] = con[x]; x++; } } } String ans = ""; for (int i = 0; i < finArr.length; i++) { ans += finArr[i]; } if (count1 - count2 > 1 || count2 - count1 > 1) { System.out.println("-1"); } for (char c : ans.toCharArray()) { System.out.print(Integer.toHexString((int) c) + " "); } System.out.println(); // System.out.println(ans); } }
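Given that diagnosis, the zeros most likely come from the splitting loop, which stops after count1 of the nine sorted characters, so vow and con are only partially filled; a sketch of the fix, with the rest of the program unchanged:

// iterate over the whole sorted array, not just the first count1 entries
for (int i = 0; i < arr.length; i++) {
    char x = arr[i];
    if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') {
        vow[l++] = x;
    } else {
        con[k++] = x;
    }
}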
Why do I see this weird symbol in place of characters in the char array in java?
output I'm getting these weird symbols while trying to display this char array. Same problem in online compiler too. what to do? It happened once to me in C++ too. Either it shows nothing or this. It's making me crazy. package com.avishkar; import java.util.Arrays; public class Main { public static void main(String[] args) { String S = "aeroplane"; char[] arr = new char[S.length()]; for (int i = 0; i < S.length(); i++) { arr[i] = S.charAt(i); } Arrays.sort(arr); // System.out.println(Arrays.toString(arr)); int count1 = 0, count2 = 0; for (int i = 0; i < arr.length; i++) { char x = arr[i]; if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') { count2++; } else { count1++; } } char[] con = new char[count1]; char[] vow = new char[count2]; int k = 0, l = 0; for (int i = 0; i < count1; i++) { char x = arr[i]; if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') { vow[l] = x; l++; } else { con[k] = x; k++; } } System.out.println(Arrays.toString(con)); System.out.println(Arrays.toString(vow)); int x = 0, y = 0; char[] finArr = new char[count1 + count2]; for (int i = 0; i < finArr.length; i++) { if (count1 > count2) { if (i % 2 == 0) { finArr[i] = con[x]; x++; } else { finArr[i] = vow[y]; y++; } } else { if (i % 2 == 0) { finArr[i] = vow[y]; y++; } else { finArr[i] = con[x]; x++; } } } String ans = ""; for (int i = 0; i < finArr.length; i++) { ans += finArr[i]; } if (count1 - count2 > 1 || count2 - count1 > 1) { System.out.println("-1"); } System.out.println(ans); } }
[ "I modified your code to print out the hexadecimal value of the characters, rather than the characters themselves.\nThe output looks like this:\n0 0 0 0 \n61 61 65 65 0 \n61 0 61 0 65 0 65 0 0 \n\nYour \"unprintable\" characters are hexadecimal zero, which is unprintable.\nHere's the modified code.\nimport java.util.Arrays;\n\npublic class Main {\n public static void main(String[] args) {\n String S = \"aeroplane\";\n char[] arr = new char[S.length()];\n for (int i = 0; i < S.length(); i++) {\n arr[i] = S.charAt(i);\n }\n Arrays.sort(arr);\n// System.out.println(Arrays.toString(arr));\n int count1 = 0, count2 = 0;\n for (int i = 0; i < arr.length; i++) {\n char x = arr[i];\n if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') {\n count2++;\n } else {\n count1++;\n }\n }\n\n char[] con = new char[count1];\n char[] vow = new char[count2];\n\n int k = 0, l = 0;\n\n for (int i = 0; i < count1; i++) {\n char x = arr[i];\n if (x == 'a' || x == 'e' || x == 'i' || x == 'o' || x == 'u') {\n vow[l] = x;\n l++;\n } else {\n con[k] = x;\n k++;\n }\n\n }\n for (char c : con) {\n System.out.print(Integer.toHexString((int) c) + \" \");\n }\n System.out.println();\n// System.out.println(Arrays.toString(con));\n for (char c : vow) {\n System.out.print(Integer.toHexString((int) c) + \" \");\n }\n System.out.println();\n// System.out.println(Arrays.toString(vow));\n int x = 0, y = 0;\n char[] finArr = new char[count1 + count2];\n for (int i = 0; i < finArr.length; i++) {\n if (count1 > count2) {\n if (i % 2 == 0) {\n finArr[i] = con[x];\n x++;\n } else {\n finArr[i] = vow[y];\n y++;\n }\n } else {\n if (i % 2 == 0) {\n finArr[i] = vow[y];\n y++;\n } else {\n finArr[i] = con[x];\n x++;\n }\n }\n }\n\n String ans = \"\";\n\n for (int i = 0; i < finArr.length; i++) {\n ans += finArr[i];\n }\n\n if (count1 - count2 > 1 || count2 - count1 > 1) {\n System.out.println(\"-1\");\n }\n\n for (char c : ans.toCharArray()) {\n System.out.print(Integer.toHexString((int) c) + \" \");\n }\n System.out.println();\n// System.out.println(ans);\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "char", "java", "symbols" ]
stackoverflow_0074673818_arrays_char_java_symbols.txt
Q: Mongodb 5 install fails on Ubuntu 20.04 In spite of following the official documentation (https://www.mongodb.com/docs/v5.0/tutorial/install-mongodb-on-ubuntu/), I fail to install MongoDB 5 on Ubuntu Server 20.04. Here is the error message that I get:
Process: 125143 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=killed, signal=ILL)
Main PID: 125143 (code=killed, signal=ILL)

When I search the Internet for a fix, almost all results suggest that I use version 4 of MongoDB instead of 5, which is exactly my concern. I am following the official instructions here - so how can the install still fail?

A: There are a few potential causes for this error message. Here are some possible solutions you can try:

Make sure you are on a 64-bit platform.
Check the permissions on the /etc/mongod.conf file. The user account that is running the mongod process must have read access to this file.
Try running the mongod process with increased verbosity (e.g. the -v flag) to enable more detailed logging. This may help you identify the cause of the crash.
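One further check worth adding: signal=ILL means the process died on an illegal CPU instruction, and the stock MongoDB 5.0 binaries require a CPU with AVX support, which older machines and some VMs lack. A quick way to verify before reinstalling (if the first command prints nothing, stay on 4.4 or move to AVX-capable hardware):

# Empty output means no AVX, and mongod 5.0 will crash with SIGILL.
grep -o avx /proc/cpuinfo | sort -u
# Inspect what the service actually logged:
journalctl -u mongod --no-pager | tail -n 20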
Mongodb 5 install fails on Ubuntu 20.04
In spite of following the official documentation (https://www.mongodb.com/docs/v5.0/tutorial/install-mongodb-on-ubuntu/), I fail to install MongoDB 5 on Ubuntu Server 20.04. Here is the error message that I get: Process: 125143 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=killed, signal=ILL) Main PID: 125143 (code=killed, signal=ILL) When I search the Internet for the fix, almost all results suggest that I use version 4 of MongoDB instead of 5. That is my concern and worry indeed. I am following the official version here- so how it not install the database?
[ "There are a few potential causes for this error message. Here are some possible solutions you can try:\n\nMake sure you have 64 bit platform\nCheck the permissions on the /etc/mongod.conf file. The user account\nthat is running the mongod process must have read access to this file\nTry running the mongod process with the --debug option to enable more\ndetailed logging. This may help you identify the cause of the\ncrash.\n\n" ]
[ 0 ]
[]
[]
[ "mongodb", "ubuntu_20.04" ]
stackoverflow_0074673953_mongodb_ubuntu_20.04.txt
Q: Laravel: how to call custom Validator inside the rules() function of a custom FormRequest class? Looking at the documentation https://laravel.com/docs/9.x/validation I kinda understand how you can make a custom Validator and call it as an object within a controller function. I am familiar with using custom FormRequests, but it's not clear to me how to call a custom validation within the rules() function of a custom FormRequest class instead.
For example, if I want to use the following example from the documentation:
use Illuminate\Support\Facades\Validator;
use Illuminate\Validation\Rule;
 
Validator::make($data, [
    'zones' => [
        'required',
        Rule::in(['first-zone', 'second-zone']),
    ],
]);

Where do I have to define the above, and how can I then get to the point where I can call it like this:
<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class TestRequest extends FormRequest
{
    /**
     * Determine if the user is authorized to make this request.
     *
     * @return bool
     */
    public function authorize()
    {
        return true;
    }

    /**
     * Get the validation rules that apply to the request.
     *
     * @return array<string, mixed>
     */
    public function rules()
    {
        return [
            'zone' => 'zones', // Using the custom Validator
        ];
    }
}

There is this example that sort of explains a little bit, Laravel - Use validation rule inside Custom Validation, but it doesn't actually say how to then use it within the rules() function.
There is also documentation on how to create a custom rule (which I'm supposing is a required step, unless the custom rule can be defined within the FormRequest).
<?php
 
namespace App\Rules;
 
use Illuminate\Contracts\Validation\InvokableRule;
 
class Uppercase implements InvokableRule
{
    /**
     * Run the validation rule.
     *
     * @param  string  $attribute
     * @param  mixed  $value
     * @param  \Closure  $fail
     * @return void
     */
    public function __invoke($attribute, $value, $fail)
    {
        if (strtoupper($value) !== $value) {
            $fail('The :attribute must be uppercase.');
        }
    }
}

But the only example available on how to call it is as a validation object, not as a rule inside a FormRequest (unless I misunderstand), so that isn't helpful for my intentions.
use App\Rules\Uppercase;
 
$request->validate([
    'name' => ['required', 'string', new Uppercase],
]);

A: You have already found the way to use your custom validator in your FormRequest. In the rules method, just use it like new Uppercase in the array. The example will look like below (here, zone is required and must be uppercase):
public function rules()
{
    return [
        'zone' => ['required', new Uppercase]
    ];
}

Don't forget to import your custom validator class.
Laravel: how to call custom Validator inside the rules() function of a custom FormRequest class?
Looking at the documentation https://laravel.com/docs/9.x/validation I kinda understand how you can make a custom Validator and call it as an object within a controller function. I am familiar with using custom FormRequests, but it's not clear to me how to call a custom Validation within the rules() function of a custom FormRequest class instead. For example if I want to use the following example from the documentation: use Illuminate\Support\Facades\Validator; use Illuminate\Validation\Rule;   Validator::make($data, [ 'zones' => [ 'required', Rule::in(['first-zone', 'second-zone']), ], ]); Where do I have to define the above and how can I then get to the point where I can call it like this: <?php namespace App\Http\Requests; use Illuminate\Foundation\Http\FormRequest; class TestRequest extends FormRequest { /** * Determine if the user is authorized to make this request. * * @return bool */ public function authorize() { return true; } /** * Get the validation rules that apply to the request. * * @return array<string, mixed> */ public function rules() { return [ 'zone' => 'zones', // Using the custom Validator ]; } } There is this example that sort of explains a little bit Laravel - Use validation rule inside Custom Validation but it doesn't actually say how to then use it withing the rules() function There is also documentation on how to create a custom Rule (which I'm supposing is a required step unless the custom role can be defined within the FormRequest). <?php   namespace App\Rules;   use Illuminate\Contracts\Validation\InvokableRule;   class Uppercase implements InvokableRule { /** * Run the validation rule. * * @param string $attribute * @param mixed $value * @param \Closure $fail * @return void */ public function __invoke($attribute, $value, $fail) { if (strtoupper($value) !== $value) { $fail('The :attribute must be uppercase.'); } } } But the only example available on how to call it is as a validation object, not as a rule inside a FormRequest (unless I misunderstand), so that isn't helpful for my intentions. use App\Rules\Uppercase;   $request->validate([ 'name' => ['required', 'string', new Uppercase], ]);
[ "You already found the way how to use the your custom validator in your FormRequest. At method rules, just use it like 'New Uppercase' in array. So the example will be like below (* I add example zone is required and Uppercase):\npublic function rules()\n{\n return [\n 'zone' => ['required', new Uppercase]\n ];\n}\n\nDont forget to import your custom validator class.\n" ]
[ 0 ]
[]
[]
[ "laravel", "validation" ]
stackoverflow_0074673909_laravel_validation.txt
Q: find all java classes on classpath whose simple name matches some regex I would like to search the runtime classpath to find all classes with a simple name that matches some regex.
Example: Say I have Some.jar on my classpath and that jar contains a class called MyCoolClass.class. I would like some way of finding all classes that contain the word "Cool", like the one above and perhaps several others.
Also, I would like to get either the Class or the fully qualified class name (such that I can call Class.forName).

A: I managed to solve this using the ClassGraph library:
// List<URL> myClassPath = ... // get classpath. I got mine from the Maven context since I was developing a Mojo
List<String> foundTypes;
try (ScanResult scanResult = new ClassGraph().enableClassInfo().overrideClasspath(myClassPath).scan()) {
    foundTypes = scanResult.getAllClasses().getNames().stream().filter(e -> e.contains("Cool")).collect(Collectors.toList());
}

foundTypes.forEach(e -> System.out.println(e));

Outputs the fully qualified class names.
Alternative libraries that may (or may not) also be able to achieve this:

Reflections
Scannotations
Spring
See e.g. Scanning Java annotations at runtime, although that Q&A is about a different use case.

A: Using Reflections
Reflections reflections = new Reflections( "com.my.package", SubTypes.filterResultsBy( s -> true));
Set<Class<?>> subTypes = reflections.get( SubTypes.of( Object.class).asClass());
subTypes.stream()
        .filter( t -> t.getName().contains( "Cool"))
        .forEach( t -> { 
            log.info( "Class: {}", t.getName());
        });
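To match on the simple name with an actual regex (rather than contains), ClassGraph's ClassInfo exposes getSimpleName(); a sketch, with the ".*Cool.*" pattern and scan scope as illustrative placeholders:

import io.github.classgraph.ClassGraph;
import io.github.classgraph.ScanResult;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class FindBySimpleName {
    public static void main(String[] args) {
        Pattern p = Pattern.compile(".*Cool.*");
        try (ScanResult scan = new ClassGraph().enableClassInfo().scan()) {
            List<String> fqns = scan.getAllClasses().stream()
                    .filter(ci -> p.matcher(ci.getSimpleName()).matches())
                    .map(ci -> ci.getName())   // fully qualified, Class.forName-ready
                    .collect(Collectors.toList());
            fqns.forEach(System.out::println);
        }
    }
}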
find all java classes on classpath whose simple name matches some regex
I would like to search the runtime classpath to find all classes with a simple name that matches some regex. Example: Say I have Some.jar on my classpath and that jar contains a class called MyCoolClass.class. I would like some way of finding all classes that contains the word "Cool" like the one above and perhaps several others. Also, I would like to get either the Class or the fully qualified class name (such that I can call Class.forName).
[ "I managed to solve this using the ClassGraph library:\n// List<URL> myClassPath = ... //get classpath. I got mine from the Maven context since I was developing a Mojo\nList<String> foundTypes;\n try (ScanResult scanResult = new ClassGraph().enableClassInfo().overrideClasspath(myClassPath).scan()) {\n foundTypes = scanResult.getAllClasses().getNames().stream().filter(e -> e.contains(\"Cool\")).collect(Collectors.toList());\n }\n\n foundTypes.forEach(e -> System.out.println(e));\n\nOutputs the fully qualified class names.\nAlternative libraries thay may (or may not) also be able to achieve this:\n\nReflections\nScannotations\nSpring\nSee fx Scanning Java annotations at runtime although that Q&A is about a different use case.\n\n", "Using Reflextions\nReflections reflections = new Reflections( \"com.my.package\", SubTypes.filterResultsBy( s -> true));\nSet<Class<?>> subTypes = reflections.get( SubTypes.of( Object.class).asClass());\n subTypes.stream()\n .filter( t -> t.getName().contains( \"Cool\"))\n .forEach( t -> { \n log.info( \"Class: {}\", t.getName());\n });\n\n" ]
[ 0, 0 ]
[]
[]
[ "classpath", "java", "reflection" ]
stackoverflow_0073417134_classpath_java_reflection.txt
Q: My Django Admin input doesn't allow me to add more than one image I'm trying to make a Django model with Django REST Framework. I want this to allow me to load one or more images in the same input.
MODELS:
from django.db import models
from datetime import datetime
from apps.category.models import Category
from django.conf import settings

class Product(models.Model):
    code = models.CharField(max_length=255, null=True)
    name = models.CharField(max_length=255)
    image = models.ImageField(upload_to='photos/%Y/%m/', blank = True, null=True, default='')
    description = models.TextField()
    caracteristicas = models.JSONField(default=dict)
    price = models.DecimalField(max_digits=6, decimal_places=2)
    compare_price = models.DecimalField(max_digits=6, decimal_places=2)
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
    quantity = models.IntegerField(default=0)
    sold = models.IntegerField(default=0)
    date_created = models.DateTimeField(default=datetime.now)

    def __str__(self):
        return self.name

class ProductImage(models.Model):
    product = models.ForeignKey(Product, on_delete=models.CASCADE, related_name = 'images')
    image = models.ImageField(upload_to='photos/%Y/%m/', default="", null=True, blank=True)

SERIALIZER:
from rest_framework import serializers
from .models import Product, ProductImage

class ProductImageSerializer(serializers.ModelSerializer):
    class Meta:
        model = ProductImage
        fields = ["id", "product", "image"]

class ProductSerializer(serializers.ModelSerializer):
    images = ProductImageSerializer(many=True, read_only=True)
    uploaded_images = serializers.ListField(
        child = serializers.ImageField(max_length = 1000000, allow_empty_file = False, use_url = False),
        write_only=True
    )

    class Meta:
        model = Product
        fields = [
            'id', 'code', 'name', 'description', 'caracteristicas', 'price', 'compare_price',
            'category', 'quantity', 'sold', 'date_created', 'images', 'uploaded_images'
        ]

    def create(self, validated_data):
        uploaded_images = validated_data.pop("uploaded_images")
        product = Product.objects.create(**validated_data)
        for image in uploaded_images:
            newproduct_image = ProductImage.objects.create(product=product, image=image)
        return product

I would simply like to know how to make the following input field allow me to load more than one image: reference image of the input
Thank you very much

A: You didn't post your admin.py, but my guess is that you also need to register your ProductImage model as an inline, since you already use a one-to-many relationship between Product and ProductImage:
In your admin.py:
class ProductImageAdmin(admin.StackedInline):
    model = ProductImage

class ProductAdmin(admin.ModelAdmin):
    inlines = [ProductImageAdmin]

    class Meta:
        model = Product


admin.site.register(ProductImage)
admin.site.register(Product, ProductAdmin)

You can also check this SO answer out for more details.
Hope that helps :)
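If you also want several empty upload rows per product shown at once in the admin form, a TabularInline with extra works; a sketch building on the inline registration in the answer above (the count of 3 is arbitrary):

from django.contrib import admin
from .models import Product, ProductImage

class ProductImageInline(admin.TabularInline):   # compact rows instead of stacked blocks
    model = ProductImage
    extra = 3        # show three empty image slots per product; tune to taste

@admin.register(Product)
class ProductAdmin(admin.ModelAdmin):
    inlines = [ProductImageInline]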
My Django Admin input doesn't allow me to add more than one image
I'm trying to make a Django model, with Django Rest Framework. I want this to allow me to load one or more images in the same input. MODELS: from django.db import models from datetime import datetime from apps.category.models import Category from django.conf import settings class Product(models.Model): code = models.CharField(max_length=255, null=True) name = models.CharField(max_length=255) image = models.ImageField(upload_to='photos/%Y/%m/', blank = True, null=True, default='') description = models.TextField() caracteristicas = models.JSONField(default=dict) price = models.DecimalField(max_digits=6, decimal_places=2) compare_price = models.DecimalField(max_digits=6, decimal_places=2) category = models.ForeignKey(Category, on_delete=models.CASCADE) quantity = models.IntegerField(default=0) sold = models.IntegerField(default=0) date_created = models.DateTimeField(default=datetime.now) def __str__(self): return self.name class ProductImage(models.Model): product = models.ForeignKey(Product, on_delete=models.CASCADE, related_name = 'images') image = models.ImageField(upload_to='photos/%Y/%m/', default="", null=True, blank=True) SERIALIZER: from rest_framework import serializers from .models import Product, ProductImage class ProductImageSerializer(serializers.ModelSerializer): class Meta: model = ProductImage fields = ["id", "product", "image"] class ProductSerializer(serializers.ModelSerializer): images = ProductImageSerializer(many=True, read_only=True) uploaded_images = serializers.ListField( child = serializers.ImageField(max_length = 1000000, allow_empty_file = False, use_url = False), write_only=True ) class Meta: model = Product fields = [ 'id', 'code', 'name', 'description', 'caracteristicas', 'price', 'compare_price', 'category', 'quantity', 'sold', 'date_created', 'images', 'uploaded_images' ] def create(self, validated_data): uploaded_images = validated_data.pop("uploaded_images") product = Product.objects.create(**validated_data) for image in uploaded_images: newproduct_image = ProductImage.objects.create(product=product, image=image) return product I would simply like how to make the following input field allow me to load more than one image: Imagen de referencia input thank you very much
[ "You didn't post your admin.py but my guess is that you also need to register your ProductImage model as an inlines since you already use a One2Many relationship between Product and ProductImage:\nIn your admin.py:\nclass ProductImageAdmin(admin.StackedInline):\n model = ProductImage\n\nclass ProductAdmin(admin.ModelAdmin):\n inlines = [ProductImageAdmin]\n\n class Meta:\n model = Product\n\n\nadmin.site.register(ProductImage)\nadmin.site.register(Product, ProductAdmin)\n\nYou can also check this SO answer out for more details.\nHope that helps :)\n" ]
[ 0 ]
[]
[]
[ "backend", "django", "django_admin", "django_rest_framework", "python" ]
stackoverflow_0074672857_backend_django_django_admin_django_rest_framework_python.txt
Q: Automate the usage of vivado gui by using tcl scripts I am using Vivado to load firmware into a board and do some tests. This is a recursive process and I would like to automate it. Here are the steps that I follow:

Open the Vivado GUI
Open the hardware manager
Connect to the hardware server
Program the board with the bitfile

I know Vivado has a Tcl command line. Is there any way to create a Tcl script so that I can do these things without opening the Vivado GUI?

A: The ugXXX papers are a great way to start. Personally, UG835 is the bible for writing Vivado automation.
vivado -mode tcl -source YOURTCLSCRIPT.tcl will run your script and end with an open Tcl session in your shell. 
vivado -mode batch -source YOURTCLSCRIPT.tcl will run your script and return to the native shell when done. 
You can also use -mode gui to launch GUI mode; as this is the default mode it is not as useful, although it can be great in make scripts or aliases to be more descriptive. 

A: For the benefit of others, your Tcl file program_device.tcl:
open_hw_manager
connect_hw_server
open_hw_target
set_property PROBES.FILE {<path>.ltx} [get_hw_devices <your device name>]
set_property FULL_PROBES.FILE {/mnt/prjswrkspc/nsitexe_nosync/drx100/FPGA/htg-930/images/2030_v083/2030.ltx} [get_hw_devices <your device name>]
set_property PROGRAM.FILE {<path>.bit} [get_hw_devices <your device name>]
program_hw_devices [get_hw_devices <your device name>]
refresh_hw_device [lindex [get_hw_devices <your device name>] 0]
close_hw_target
close_hw_manager

From the terminal: vivado -mode batch -source program_device.tcl
Automate the usage of vivado gui by using tcl scripts
I am using vivado to load firmware into a board and do some tests. This is a recursive process and I would like to automate it. Here are the steps that I follow: Open vivado gui open hardware manager connect to hardware server Program the board with the bitfile I know vivado has a tcl command line. Is there any way to create a tcl script so that I can do these things without opening vivado GUI?
[ "ugXXX papers are a great way to start. Personaly ug835 is the bible for writing Vivado automation\nvivado -mode tcl -source YOURTCLSCRIPT.tcl will run your script and end with a open tcl session in your shell. \nvivado -mode batch -source YOURTCLSCRIPT.tcl will run your script and return to native shell when done. \nyou can allso use the -mode gui to launch gui mode, as this is the default mode it is not as useful. All though it can be great in make scripts or alias to be more descriptive. \n", "for the benefit of others\nyour tcl file program_device.tcl\nopen_hw_manager\nconnect_hw_server\nopen_hw_target\nset_property PROBES.FILE {<path>.ltx} [get_hw_devices <your device name>]\nset_property FULL_PROBES.FILE {/mnt/prjswrkspc/nsitexe_nosync/drx100/FPGA/htg-930/images/2030_v083/2030.ltx} [get_hw_devices <your device name>]\nset_property PROGRAM.FILE {<path>.bit} [get_hw_devices <your device name>]\nprogram_hw_devices [get_hw_devices <your device name>]\nrefresh_hw_device [lindex [get_hw_devices <your device name>] 0]\nclose_hw_target\nclose_hw_manager\n\nFrom terminal vivado -mode batch -source program_device.tcl\n" ]
[ 0, 0 ]
[]
[]
[ "automation", "tcl", "vivado" ]
stackoverflow_0055495977_automation_tcl_vivado.txt
Q: How to create a TS type which has enum value as key and corresponding React functional component as value Please see the code block below. import Button from './Button'; // React component which has type React.FC<ButtonType> import Select from './Select'; // React component which has type React.FC<SelectType> import Checkbox from './Checkbox'; // React component which has type React.FC<CheckboxType> enum ComponentType { button = 'bla_button', select = 'bla_select', checkbox = 'bla_checkbox', } type ComponentMap = { [key in ComponentType]: React.FC<any>; }; const componentMap: ComponentMap = { [ComponentType.button]: Button, [ComponentType.select]: Select, [ComponentType.checkbox]: Checkbox, }; I want to have a better type for ComponentMap instead of using React.FC<any> so it can infer when the key is 'bla_button', the value must be Button and so on. A: create an interface to define the corresponding type interface ComponentInterface { [ComponentType.button] : React.FC<ButtonType>; [ComponentType.select]: React.FC<SelectType>, [ComponentType.checkbox]: React.FC<CheckboxType>, } Then use it here const componentMap: ComponentInterface = { [ComponentType.button]: Button, [ComponentType.select]: Select, [ComponentType.checkbox]: Checkbox, }; This way you can't accidentally assign button type to select
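If you want the compiler to tie each key to its own component type automatically, a mapped type over the enum also works; a sketch with stand-in prop types, since ButtonType, SelectType and CheckboxType are not shown:

import React from 'react';

enum ComponentType {
  button = 'bla_button',
  select = 'bla_select',
  checkbox = 'bla_checkbox',
}

// Stand-ins for the real ButtonType / SelectType / CheckboxType props.
interface PropsByType {
  [ComponentType.button]: { label: string };
  [ComponentType.select]: { options: string[] };
  [ComponentType.checkbox]: { checked: boolean };
}

// Each key maps to a React.FC of exactly its own props,
// so assigning the wrong component to a key is a compile error.
type ComponentMap = { [K in ComponentType]: React.FC<PropsByType[K]> };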
How to create a TS type which has enum value as key and corresponding React functional component as value
Please see the code block below. import Button from './Button'; // React component which has type React.FC<ButtonType> import Select from './Select'; // React component which has type React.FC<SelectType> import Checkbox from './Checkbox'; // React component which has type React.FC<CheckboxType> enum ComponentType { button = 'bla_button', select = 'bla_select', checkbox = 'bla_checkbox', } type ComponentMap = { [key in ComponentType]: React.FC<any>; }; const componentMap: ComponentMap = { [ComponentType.button]: Button, [ComponentType.select]: Select, [ComponentType.checkbox]: Checkbox, }; I want to have a better type for ComponentMap instead of using React.FC<any> so it can infer when the key is 'bla_button', the value must be Button and so on.
[ "create an interface to define the corresponding type\ninterface ComponentInterface {\n [ComponentType.button] : React.FC<ButtonType>;\n [ComponentType.select]: React.FC<SelectType>,\n [ComponentType.checkbox]: React.FC<CheckboxType>,\n}\n\nThen use it here\nconst componentMap: ComponentInterface = {\n [ComponentType.button]: Button,\n [ComponentType.select]: Select,\n [ComponentType.checkbox]: Checkbox,\n\n};\n\nThis way you can't accidentally assign button type to select\n" ]
[ 0 ]
[]
[]
[ "reactjs", "typescript", "typescript_generics", "typescript_typings" ]
stackoverflow_0074673973_reactjs_typescript_typescript_generics_typescript_typings.txt
Q: import moviepy error issue with python3.9+ and matplotlib on latest MacOS M1 For Python 3.9+ there seems to be an error when I import moviepy after a pip install of moviepy with the correct command as per the docs. I am trying an alternative to save animated plots from Matplotlib in .mp4 format instead of .gif, but Matplotlib on macOS (M1 chip) supports only .gif due to a lack of the "FFMpeg" writer (which stays unresolved after pip installs as well). Any clue what to do here?

A: I'm still researching this, but I had the same problem: MoviePy fails to install ffmpeg.
I saw a comment (looking for it again) that said the install of ffmpeg via MoviePy failed because there was no "wheel" for the ARM version of ffmpeg. This may be because ffmpeg does not provide static builds for Apple silicon.
https://ffmpeg.org/download.html#build-mac => https://evermeet.cx/ffmpeg/#remarks:
"I do not plan to provide native ffmpeg binaries for Apple Silicon ARM."
I think you will need to install ffmpeg manually. You can build from source, or there are a number of places you can get a static version of ffmpeg for Apple silicon M1/M2. One example: https://www.osxexperts.net/
I built it myself, and set this environment variable in the run configuration: FFMPEG_BINARY=/tmp/ff/bin/ffmpeg
And it seems to be working for me.
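Since the underlying goal is saving Matplotlib animations as .mp4, another route is to point Matplotlib at a manually installed ffmpeg and skip MoviePy entirely; a sketch, assuming ffmpeg was installed with Homebrew at the usual Apple-silicon path:

import matplotlib
matplotlib.rcParams['animation.ffmpeg_path'] = '/opt/homebrew/bin/ffmpeg'  # adjust if yours lives elsewhere

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, FFMpegWriter

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, 2); ax.set_ylim(-1, 1)

def update(frame):
    # trivial placeholder animation: a horizontal line sweeping upward
    line.set_data([0, 2], [frame / 50 - 1] * 2)
    return line,

anim = FuncAnimation(fig, update, frames=100)
anim.save('demo.mp4', writer=FFMpegWriter(fps=30))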
import moviepy error issue with python3.9+ and matplotlib on latest MacOS M1
For python 3.9+ there seems to be an error when I import moviepy after a pip install for moviepy with the correct command as per docs. I am trying an alternative to save animated plots from matplotlib from .gif format to .mp4 format, but matplotlib on MacOS (M1 chip) supports only .gif due to a lack of the "FFMpeg" process (which stays unresolved after pip installs as well). Any clue what to do here? Repeat: For python 3.9+ there seems to be an error when I import moviepy after a pip install for moviepy with the correct command as per docs. I am trying an alternative to save animated plots from matplotlib from .gif format to .mp4 format, but matplotlib on MacOS (M1 chip) supports only .gif due to a lack of the "FFMpeg" process (which stays unresolved after pip installs as well). Any clue what to do here?
[ "I'm still researching this, but i had the same problem: MoviePy fails to install ffmpeg.\nI saw a comment (looking for it again) that said the install of ffmpeg via moviepy failed, because there was no \"wheel\" for the ARM version of ffmpeg. This may be because ffmpeg does not provide static builds for apple silicon.\nhttps://ffmpeg.org/download.html#build-mac => https://evermeet.cx/ffmpeg/#remarks:\n\"I do not plan to provide native ffmpeg binaries for Apple Silicon ARM.\"\n\nI think you will need to install ffmpeg manually. You can build from source, or there are a number of places you can get a static version of ffmpeg for apple silicon m1/m2. One example: https://www.osxexperts.net/\nI built it myself, and set this environment variable in the run configuration: FFMPEG_BINARY=/tmp/ff/bin/ffmpeg\nAnd it seems to be working for me.\n" ]
[ 0 ]
[]
[]
[ "ffmpeg", "matplotlib", "moviepy", "python_3.9" ]
stackoverflow_0074539703_ffmpeg_matplotlib_moviepy_python_3.9.txt
Q: PowerShell EWS API | How to download attachments? In the code below, I am able to retrieve the subject of the email but unable to download the attachment. I am not even able to output the name of the attachment. The attachment is a WAV sound file.
Import-Module "C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll"

$folderid = new-object Microsoft.Exchange.WebServices.Data.FolderId($v_FolderID)
$fiItems = $null
$iv = new-object Microsoft.Exchange.WebServices.Data.ItemView(1000)
$fiItems = $service.FindItems($folderid, $args[1], $iv)

foreach ($Item in $fiItems.Items[0]) {
    $v_Subject = $Item.Subject
    foreach ($attachment in $Item.Attachments) {
        $attachment.Load()
        $attachmentname = $attachment.Name.ToString()
        $attachmentname
        $file = New-Object System.IO.FileStream("C:\", [System.IO.FileMode]::Create)
        $file.Write($attachment.Content, 0, $attachment.Content.Length)
        $file.Close()
    }
}
$iv.offset += $fiItems.Items.Count
Out-File -FilePath "C:\EWSSubject.txt" -InputObject $v_Subject

A: You first need to create a PropertySet object because the attachment information is not loaded automatically.
## Target Path Folder
$TargetPath = "c:\temp\attachments"

## Create a PropertySet with the Attachments metadata
$ItemPropetySet = [Microsoft.Exchange.WebServices.Data.PropertySet]::new(
[Microsoft.Exchange.Webservices.Data.BasePropertySet]::IdOnly,
[Microsoft.Exchange.WebServices.Data.ItemSchema]::Attachments,
[Microsoft.Exchange.WebServices.Data.ItemSchema]::HasAttachments
)

Then:
## Iterate the items and find messages with attachments
foreach ($item in $fiItems.Items)
{
    ## Load the Message attachment metadata using the PropertySet Created
    $message = [Microsoft.Exchange.WebServices.Data.EmailMessage]::Bind(
        $service, $item.Id, $ItemPropetySet)

    if ($message.HasAttachments)
    {
        foreach ($attachment in $message.Attachments)
        {
            if ($attachment -is [Microsoft.Exchange.WebServices.Data.FileAttachment])
            {
                $FilePath = Join-Path $TargetPath $attachment.Name
                $attachment.Load($FilePath)
            }
        }
    }
}
PowerShell EWS API | How to download attachments?
In the code below, I am able to retrieve the subject of the email but unable to download the attachment. I am not even able to output the name of the attachment. The attachment is a WAV sound file. Import-Module "C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll" $folderid = new-object Microsoft.Exchange.WebServices.Data.FolderId($v_FolderID) $fiItems = $null $iv = new-object Microsoft.Exchange.WebServices.Data.ItemView(1000) $fiItems = $service.FindItems($folderid, $args[1], $iv) foreach ($Item in $fiItems.Items[0]) { $v_Subject = $Item.Subject foreach($attachment in $Item.Attachments) { $attachment.Load() $attachmentname = $attachment.Name.ToString() $attachmentname $file = New-Object System.IO.FileStream("C:\", [System.IO.FileMode]::Create) $file.Write($attachment.Content, 0, $attachment.Content.Length) $file.Close() } } $iv.offset += $fiItems.Items.Count Out-File -FilePath "C:\EWSSubject.txt" -InputObject $v_Subject```
[ "You should first need to create a PropertySet object because the attachment information is not loaded automatically.\n## Target Path Folder\n$TargetPath = \"c:\\temp\\attachments\"\n\n## Create a PropertySet with the Attachments metadata\n$ItemPropetySet = [Microsoft.Exchange.WebServices.Data.PropertySet]::new(\n[Microsoft.Exchange.Webservices.Data.BasePropertySet]::IdOnly,\n[Microsoft.Exchange.WebServices.Data.ItemSchema]::Attachments,\n[Microsoft.Exchange.WebServices.Data.ItemSchema]::HasAttachments\n)\n\nThen:\n## Iterate the items and find messages with attachments\nforeach ($item in $fiItems.Items)\n{\n ## Load the Message attachment metadata using the PropertySet Created\n $message = [Microsoft.Exchange.WebServices.Data.EmailMessage]::Bind(\n $service, $item.Id, $ItemPropetySet)\n\n if ($message.HasAttachments)\n {\n foreach ($attachment in $message.Attachments)\n {\n if ($attachment -is [Microsoft.Exchange.WebServices.Data.FileAttachment])\n {\n $FilePath = Join-Path $TargetPath $attachment.Name\n $attachment.Load($FilePath)\n }\n }\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "email", "email_attachments", "exchange_server", "exchangewebservices", "powershell" ]
stackoverflow_0074665413_email_email_attachments_exchange_server_exchangewebservices_powershell.txt
Q: How do I stop visual studio's window jumping too far left or right when trying to select something in the middle of a long line? When I scroll horizontally along a long line of code in visual studio (one that goes off the end of the screen), and select something in the middle with the mouse it has a nasty habit of jumping to the furthest extreme of the line and leaving the selection behind. I think this happens when your cursor touches the edge of the screen. Is there any way to fix this? Maybe reduce the sensitivity of whatever controls this? I do know that you can just spread a line of code over more lines, but I don't like to do that as it makes the code fill more space vertically - which makes it harder to find things. I also know I can use the keyboard, but I find that fiddly too. Is there a way to just reduce the amount it jumps by to something less than the end of the line? I am aware this has been asked before, but there was never a real answer that fixed the problem. A: I do know that you can just spread a line of code over more lines, but I don't like to do that as it makes the code fill more space vertically - which makes it harder to find things. Apparently you are finding it harder to find things using more horizontal space. You really should be trying to use more vertical space because the mouse is optimized to allow for easier vertical scrolling (e.g. using the mouse wheel). Your keyboard allows for easier scrolling as well, using Page Up and Page Down. There is a widespread consensus that it is preferable to use vertical space, including in this website. Imagine if these paragraphs all fit into one line. How harder would it be to read? With every paragraph you finish reading, you'll need to scroll all the way to the left to start reading the next one.
How do I stop visual studio's window jumping too far left or right when trying to select something in the middle of a long line?
When I scroll horizontally along a long line of code in visual studio (one that goes off the end of the screen), and select something in the middle with the mouse it has a nasty habit of jumping to the furthest extreme of the line and leaving the selection behind. I think this happens when your cursor touches the edge of the screen. Is there any way to fix this? Maybe reduce the sensitivity of whatever controls this? I do know that you can just spread a line of code over more lines, but I don't like to do that as it makes the code fill more space vertically - which makes it harder to find things. I also know I can use the keyboard, but I find that fiddly too. Is there a way to just reduce the amount it jumps by to something less than the end of the line? I am aware this has been asked before, but there was never a real answer that fixed the problem.
[ "\nI do know that you can just spread a line of code over more lines, but I don't like to do that as it makes the code fill more space vertically - which makes it harder to find things.\n\nApparently you are finding it harder to find things using more horizontal space.\nYou really should be trying to use more vertical space because the mouse is optimized to allow for easier vertical scrolling (e.g. using the mouse wheel). Your keyboard allows for easier scrolling as well, using Page Up and Page Down.\nThere is a widespread consensus that it is preferable to use vertical space, including in this website. Imagine if these paragraphs all fit into one line. How harder would it be to read? With every paragraph you finish reading, you'll need to scroll all the way to the left to start reading the next one.\n" ]
[ 0 ]
[]
[]
[ "c#", "visual_studio" ]
stackoverflow_0074674009_c#_visual_studio.txt
Q: Java initialize null object with default I am trying to initialize an object with a default value if it is not initialized by the user. I want the user to be able to omit params in the request, and I will still have it with the default value. I'm not sure what I am missing, but I get a NullPointerException. Having this object @Builder @Data @Valid @RequiredArgsConstructor @AllArgsConstructor @NoArgsConstructor(force = true, access = AccessLevel.PRIVATE) public class Request { private final List<Author> authors; private final Config config; private Params params; } @Builder @AllArgsConstructor @NoArgsConstructor(force = true, access = AccessLevel.PRIVATE) @Data @Setter(AccessLevel.NONE) public class Params { @Builder.Default private boolean adultOnly = false; } fun main(args: Array<String>) { val request = Request( authors, config( key, level ) ) Also, using the builder throws a null exception: val request = QueriesRequest.builder() .userQueries(userQueries) .configurationKey( QueryConfigurationKey(configurationKey, granularityLevel)) .site(site).build() request.params.adultOnly // throws NPE; I expected the default false. } What am I missing? A: The params field in the Request class is never initialized and remains null. You could set a default value for it: @Builder @Data @Valid @RequiredArgsConstructor @AllArgsConstructor @NoArgsConstructor(force = true, access = AccessLevel.PRIVATE) public class Request { private final List<Author> authors; private final Config config; @Builder.Default private Params params = Params.builder().build(); } I'm using the Params' builder itself to initialize the default value, so that you will get the default values as defined by that builder as well.
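A note that may save the next reader some time: Lombok applies a @Builder.Default initializer only on the builder path. Depending on your Lombok version, the constructors generated by @RequiredArgsConstructor / @AllArgsConstructor bypass the initializer and leave the field null. A minimal sketch of the difference (authors and config are placeholders, not from the original post):

// With the accepted fix applied to Request:
Request viaBuilder = Request.builder()
        .authors(authors)
        .config(config)
        .build();
viaBuilder.getParams().isAdultOnly();   // safe: the builder applied the default -> false

Request viaCtor = new Request(authors, config);  // @RequiredArgsConstructor path
// viaCtor.getParams() may still be null here, because generated
// constructors do not run the @Builder.Default initializer.

So if callers may construct Request without the builder, it is worth guarding against a null params anyway.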
Java initialize null object with default
I am trying to initialize an object with a default value if it is not initialized by the user. I want the user to be able to omit params in the request, and I will still have it with the default value. I'm not sure what I am missing, but I get a NullPointerException. Having this object @Builder @Data @Valid @RequiredArgsConstructor @AllArgsConstructor @NoArgsConstructor(force = true, access = AccessLevel.PRIVATE) public class Request { private final List<Author> authors; private final Config config; private Params params; } @Builder @AllArgsConstructor @NoArgsConstructor(force = true, access = AccessLevel.PRIVATE) @Data @Setter(AccessLevel.NONE) public class Params { @Builder.Default private boolean adultOnly = false; } fun main(args: Array<String>) { val request = Request( authors, config( key, level ) ) Also, using the builder throws a null exception: val request = QueriesRequest.builder() .userQueries(userQueries) .configurationKey( QueryConfigurationKey(configurationKey, granularityLevel)) .site(site).build() request.params.adultOnly // throws NPE; I expected the default false. } What am I missing?
[ "The params field in the Request class is never initialized and remains null. You could set a default value for it:\n@Builder\n@Data\n@Valid\n@RequiredArgsConstructor\n@AllArgsConstructor\n@NoArgsConstructor(force = true, access = AccessLevel.PRIVATE)\npublic class Request {\n private final List<Author> authors;\n private final Config config;\n\n @Builder.Default\n private Params params = Params.builder().build();\n}\n\nI'm using the Params' builder itself to initialize the default value, so that you will get the default values as defined by that builder as well.\n" ]
[ 0 ]
[]
[]
[ "builder", "default", "null", "object" ]
stackoverflow_0074647988_builder_default_null_object.txt
Q: Event Hub Lease Management Does anyone know where I can find details about how event hub lease management works? Specifically I'm trying to find how do I know where in the event hub the EventProcess picks up processing (after a reboot, shutdown, lease lost)? What is the best way to set the index to the beginning during development? Thanks A: This article does a good job of explaining lease management in EventHub under distributed consuming applications : Lease management Registering an event processor class with an instance of EventProcessorHost starts event processing. The host instance obtains leases on some partitions of the Event Hub, possibly grabbing some from other host instances, in a way that converges on an even distribution of partitions across all host instances. For each leased partition, the host instance creates an instance of the provided event processor class, then receives events from that partition, and passes them to the event processor instance. As more instances get added and more leases are grabbed, EventProcessorHost eventually balances the load among all consumers. As explained previously, the tracking table greatly simplifies the autoscale nature of EventProcessorHost.UnregisterEventProcessorAsync. As an instance of EventProcessorHost starts, it acquires as many leases as possible, and begins reading events. As the leases near expiration, EventProcessorHost attempts to renew them by placing a reservation. If the lease is available for renewal, the processor continues reading, but if it is not, the reader is closed and CloseAsync is called. CloseAsync is a good time to perform any final cleanup for that partition. EventProcessorHost includes a PartitionManagerOptions property. This property enables control over lease management. Set these options before registering your IEventProcessor implementation.
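To make the answer above concrete: the classic EventProcessorHost checkpoints its position to the same Azure Storage container that holds the leases, so after a reboot, shutdown, or lost lease, the processor resumes from the last checkpoint written for each partition. A rough C# sketch of wiring this up, based on the older Microsoft.Azure.EventHubs.Processor API (hub, container, and class names are placeholders):

var host = new EventProcessorHost(
    "my-event-hub",                        // placeholder Event Hub path
    PartitionReceiver.DefaultConsumerGroupName,
    eventHubConnectionString,
    storageConnectionString,               // leases + checkpoints live here
    "lease-container");                    // placeholder blob container

// Tune lease behaviour before registering the processor.
host.PartitionManagerOptions = new PartitionManagerOptions
{
    LeaseDuration = TimeSpan.FromSeconds(30),
    RenewInterval = TimeSpan.FromSeconds(10)
};

var options = new EventProcessorOptions
{
    // For development: start at the beginning of each partition
    // whenever no checkpoint exists yet.
    InitialOffsetProvider = partitionId => EventPosition.FromStart()
};

await host.RegisterEventProcessorAsync<MyEventProcessor>(options);

To truly reset to the beginning during development, delete the lease/checkpoint blobs in the storage container (or use a fresh container name), since an existing checkpoint takes precedence over InitialOffsetProvider.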
Event Hub Lease Management
Does anyone know where I can find details about how event hub lease management works? Specifically I'm trying to find how do I know where in the event hub the EventProcess picks up processing (after a reboot, shutdown, lease lost)? What is the best way to set the index to the beginning during development? Thanks
[ "This article does a good job of explaining lease management in EventHub under distributed consuming applications :\n\nLease management\nRegistering an event processor class with an instance of\nEventProcessorHost starts event processing. The host instance obtains\nleases on some partitions of the Event Hub, possibly grabbing some\nfrom other host instances, in a way that converges on an even\ndistribution of partitions across all host instances. For each leased\npartition, the host instance creates an instance of the provided event\nprocessor class, then receives events from that partition, and passes\nthem to the event processor instance. As more instances get added and\nmore leases are grabbed, EventProcessorHost eventually balances the\nload among all consumers.\nAs explained previously, the tracking table greatly simplifies the\nautoscale nature of EventProcessorHost.UnregisterEventProcessorAsync.\nAs an instance of EventProcessorHost starts, it acquires as many\nleases as possible, and begins reading events. As the leases near\nexpiration, EventProcessorHost attempts to renew them by placing a\nreservation. If the lease is available for renewal, the processor\ncontinues reading, but if it is not, the reader is closed and\nCloseAsync is called. CloseAsync is a good time to perform any final\ncleanup for that partition.\nEventProcessorHost includes a PartitionManagerOptions property. This\nproperty enables control over lease management. Set these options\nbefore registering your IEventProcessor implementation.\n\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_eventhub", "cortana_intelligence" ]
stackoverflow_0032105042_azure_azure_eventhub_cortana_intelligence.txt
Q: The name does not exist in the namespace error in XAML Using VS2012 working on a VB.NET WPF application. I have a simple MusicPlayer tutorial app I am using to learn WPF. I am converting a C# version of the tutorial to VB.NET step by step. It has 2 classes in the app that are both under the same namespace. I am able to reference the namespace in the XAML but when I try to reference the class object in XAML I get an error and I am not able to compile. Strange thing is that the IntelliSense works fine with both referencing the namespace via the xmlns:c= tag and also when typing the class object using <c: But the object is underlined and errors are generated trying to build or work in the designer. The .vb class files are in a folder called \Controls. The Main project Root Namespace is intentionaly left blank. The class is coded like this... Namespace MusicPlayer.Controls Public Class UpdatingMediaElement .... code here End Public End Namespace The xaml looks like this (namespace defined in the <Window > tag xmlns:c="clr-namespace:MusicPlayer.Controls" (object defined in a <Grid> ) <c:UpdatingMediaElement Name="MyMediaElement" /> (error displayed) The name "UpdatingMediaElement" does not exist in the namespace "clr-namespace:MusicPlayer.Controls". Not sure what is wrong or how to fix it? A: When you are writing your wpf code and VS tell that "The name ABCDE does not exist in the namespace clr-namespace:ABC". But you can totally build your project successfully, there is only a small inconvenience because you can not see the UI designing (or just want to clean the code). Try to do these: In VS, right click on your Solution -> Properties -> Configuration Properties A new dialog is opened, try to change the project configurations from Debug to Release or vice versa. After that, re-build your solution. It can solve your problem. A: If the assembly is different from the namespace in which your class is contained, you have to specfiy it explicitly. ex:- xmlns:Local="clr-namespace:MusicPlayer.Controls;assembly=MusicPlayer" A: In my case it was because of other compile errors. When other errors have been solved this seemingly related error was also removed from the list. Specially the errors at the bottom of the errors list and on pages you have recently changed. So do not pay attention to this error directly and focus on other errors at first. A: I've seen this issue go away by clearing the Xaml Design Shadow Cache. I had the issue with Visual Studio 2015 Update 1. In Visual Studio 2015 the Cache is located here: %localappdata%\Microsoft\VisualStudio\14.0\Designer\ShadowCache Process: Right-Click on the solution in the Solution Explorer and Choose "Clean Solution" Shutdown Visual Studio Delete the ShadowCache folder Reopened the Visual Studio project Rebuild the solution And voila no more namespace errors. A: Try changing the build target platform to x86 and building the project. I noticed via Subversion that I apparently changed the project build Platform target to x64. This was the only change I had made. After making that change, the code was working for a short while before it started showing the same error you experienced. I changed the platform target to x86 to test and suddenly my designer was working again. Subsequently, I changed it back to x64, and the problem has disappeared completely. I suspect that the designer builds some kind of cached code in x32 and changing the x64 build platform breaks it when you make code changes. 
A: Maybe another solution for when the project compiles but the XAML error is showing : In solution explore, on the project node that contains the xaml Right-click on the project and choose 'Unload Project' Right-click on the project and choose 'Reload Project' Make sure that your project is still choosen as "startup project". If not : Right-click on the project and choose 'Set as startup project' No need to rebuild, or close visual studio. A: Jesus... This is still a problem five years later in Visual Studio 2017. Since I'm new to WPF, I was sure the problem was somehow me, but no, everything compiled and ran correctly. I tried rebuilding, cleaning and rebuilding, switching between x86/x64 output, rebooting Windows, cleaning the ShadowCache folder, adding ";assembly={my main assembly name}" to the XML namespace declaration, nothing worked! The single thing that did: Put my static class of Commands (in my case the deal was about making the design discover my WPF Commands) in its separate assembly and changing the assembly name to that one's instead. A: Dunno if this will help anyone else I'm new to WPF and still a novice with VB.net - so I was assuming that getting this error was being caused by me doing summit silly........ suppose I was really! I've managed to get rid of it by moving my project from a shared drive to one of my local drives. Error's disappeared, project compiles perfectly no further issues - yet. Looks like VS2015 still has problems with projects held on a shared drive. A: I had this problem recently using VS 2015 Update 3 for my WPF project in .NET 4.6.2. The copy of my project was in a network folder, I moved it locally and that solved the problem. This may solve other sort of problems, as it looks like VS 2015 doesn't like network paths. Another issue that is a big problem for them is syncing git repositories if my project is in a network path, also solved by moving it locally. A: I went through all the answers and none helped me. Finally was able to solve it by myself, so presenting the answer as it might help others. In my case, the solution had two projects, one containing the models (say the project and assembly name was Models) and another containing the views and view models (as per our convention: project, assembly name and default namespace were Models.Monitor). The Models.Monitor referred Models project. In the Models.Monitor project, in one of the xaml I included the following namespace: xmlns:monitor="clr-namespace:Models.Monitor" I suspect that MsBuild and Visual Studio then were erroring out as they were trying to find a 'Monitor' type in the assembly 'Models'. To resolve I tried the following: xmlns:monitor="clr-namespace:Models.Monitor;assembly=" - which is valid if the namespace is in same assembly as per https://msdn.microsoft.com/en-us/library/ms747086(v=vs.110).aspx also tried the explicit namespace declaration: xmlns:monitor="clr-namespace:Models.Monitor;assembly=Models.Monitor" Neither of the above worked. Finally I gave up, and as a work around moved the UserControl I was trying to use to another namespace: 'ModelsMonitor'. I was able to compile fine after that. A: I had the same problem , and in my case the the Markup Design View asked me to rebuild the solution and did not show me the form layout with this message: Design view is unavailable for x64 and ARM target platforms, or Build the Project to update Design view. 
It does not get solved by rebuilding the solution (neither the design view nor the "The name does not exist in the namespace" error) I think it was because I had played with the settings on Solution -> Properties > Configuration Properties I finally resolved the problem with 2 jobs: Checking all check boxes on Build Column of the page: Solution -> Properties -> Configuration Properties Changing the solution configurations from Debug to Release or vice versa. I think it's a bug in Visual Studio2012 Update 2. A: The same problem plagues Visual Studios 2013, Service Pack 4. I also tried it with Visual Studios 2015 Preview with the same results. It's just a limitation of the WPF visualizer which the Visual Studios team hasn't fixed. As proof, building in x86 mode enables the visualizer and building in x64 mode disables it. Strangely enough intellisense works for Visual Studios 2013, Service Pack 4. A: I'm also having a lot of trouble with this one! Intellisense helps me complete the namespace and everything, but the compiler cries. I've tried everything I found in this and other threads. However in my case what helped in the end was writing something like this: xmlns:util="clr-namespace:LiveSpielTool.Utils;assembly=" Leaving the assembly name empty. No idea why. But it was mentioned here. I must add I am developing an assembly, so the assembly attribute might make sense. But entering the assembly name did not work. So weird. A: In my case the problem was due to some phantom files under the project's obj directory. The following fixed the issue for me: Clean project Exit VS rm -rf /obj/* Invoke VS and rebuild A: Try verifying your assembly references. If you have a yellow exclamation mark on the project references there's a problem there and you'll get all kinds of errors. If you know the project reference is correct, check the Target framework. For instance, having a project using the 4.5 framework reference a project with 4.5.2 framework is not a good combination. A: Looks like this problem may be solved through a variety of "tricks." In my case, I had been building/rebuilding/cleaning the entire solution, instead of just the project that I was working on within the solution. Once I clicked "Build [my project]," the error message went away. A: The solution for me was to unblock the assembly DLLs. The error messages you get don't indicate this, but the XAML designer refuses to load what it calls "sandboxed" assemblies. You can see this in the output window when you build. DLLs are blocked if they are downloaded from the internet. To unblock your 3rd-party assembly DLLs: Right click on the DLL file in Windows Explorer and select Properties. At the bottom of the General tab click the "Unblock" button or checkbox. Note: Only unblock DLLs if you are sure they are safe. A: In my case, the user control was added to the main project. I tried various solutions above to no avail. Either I would get Invalid Markup but the solution would compile and work, or I would add the xmlns:c="clr-namespace:MyProject;assembly=MyProject" and then the markup would show, but I would get a compile error that the tag does not exist in the XML namespace. Finally, I added a new WPF User Control Library project to the solution and moved my user control from the main project into that one. Added the reference and changed the assembly to point to the new library and finally the markup worked and the project compiled without error. 
A: In my case I had a namespace and class spelled exactly the same, so for example, one of my namespaces was firstDepth.secondDepth.Fubar which contains its own classes (e.g. firstDepth.secondDepth.Fubar.someclass) but I also had a 'Fubar' class in the namespace firstDepth.secondDepth which textually resolves to the same as the Fubar namespace above. Don't do this A: This problem can also be caused if the assembly that you're referencing isn't actually built. For example, if your xaml is in Assembly1 and you're referencing a class also in Assembly1, but that assembly has errors and isn't building, this error will be shown. I feel silly about it, but in my case I was tearing asunder a user control and had all sorts of errors in the related classes as a result. As I was attempting to fix them all I started with the errors in question, not realising that xaml relies on built assemblies to find these references (unlike c#/vb code which can work it out even before you build). A: I get this problem all the time. My views are in a WPF Custom Control Library project (a variant on Class Library). I can reference pre-built assemblies, but cannot reference any code in another project of the same solution. As soon as I move the code to the same project as the xaml it's recognized. A: This happened to me already twice in a complex WPF app, in it there are 4 multi platform projects, 1 shared project, 2 support libraries, and 1 test project.. This very specific XAML namespace error happened twice on very recently modified files on the Shared project. In both of my cases, it was a new c# file added with a repeating namespace entry; Like namespace MyProgram.MyFolder.MyProgram.MyFolder I double pasted it once by mistake, and once it was due to JetBrains Rider double pasting the namespace. (If you ever rename a project in Rider, it time to time starts double pasting namespaces on new file creations, especially on Shared projects..). These c# files with repeating namespaces were then called in the ViewModels where XAML files were referencing to. Well you then get these unrelated and misleading errors, you can have a problem with one file, all your Xaml files will start erroring out eventually. Anyways, if you get these kind of errors, it's most of the time an issue on a very newly added file or code change. My suggestions would be to look at your very recent changes. A: If non of the answers worked For me was .Net Framework version compatibility issue of the one i'm using was older then what is referencing From properties => Application then target framework A: In my case, it was just a weird bug. I had the class I was trying to use in my namespace however Visual Studio kept throwing an error saying the class did not exist in the given namespace. What I did to fix it was really silly but worked like a charm. I commented out all the lines of code where I was trying to use the class, cleaned the build, rebuilt and the project was up and running. Then I just uncommented the lines of code I had commented earlier and well, Visual Studio was no longer throwing me any errors. Rebuild again and you are ready to go. A: VB.NET does not automatically add the Namespace information based on the folder structure as it does in C#. I think I am going through the same tutorial as you (Teach Yourself WPF in 24 Hours), and doing the same conversion to VB. I found you have to manually add the Namespace information to Both the XAML Class and the XAML.VB code behind to be able to use the Namespaces as described in the book. 
Even then, VB doesn't automatically Assign the Namespace to the Assembly as it does in VB. There is another article here that shows how to include this in your project templates so it does build the Namespace information automatically - Automatically add namespace when adding new item A: In the solution property page, check the platform of the assembly that contains "UpdatingMediaElement" and the assmeblies that contain any of the superclasses and interfaces from which "UpdatingMediaElement" subclasses or implements. It appears that the platform of all these assemblies must be "AnyCPU". A: Another possible cause: A post-build event is removing the project DLL from the build folder. To clarify: WPF designer may report "The name XXX does not exist in the namespace...", even when the name does exist in the namespace and the project builds and runs just fine if a post-build event removes the project DLL from the build folder (bin\Debug, bin\Release, etc.). I have personal experience with this in Visual Studio 2015. A: Ok, so none of these tips worked for me, unfortunately. I was able to eventually solve the issue. It seems that Visual Studio does not play nicely with network drives. I solved this issue by moving the project from the shared drive to my local and recompiled. No more errors. A: Adding to the pile. Mine was the assembly name of the WPF application was the same assembly name as a referenced dll. So make sure you don't have duplicate assembly names in any of your projects. A: I had the solution stored on a network share and every time I opened it I would get the warning about untrusted sources. I moved it to a local drive and the "namespace does not exist" error went away as well. A: Also try to right click on your project->properties and change Platform target to Any CPU and rebuild, it will then work. This worked for me A: I had the added the assembly as a project - first deleted the ddl that was added specifically to the references to the dll - that did it. A: In my case, this problem will happen when the wpf program's architechture is not exactly same with dependency. Suppose you have one dependency that is x64, and another one is AnyCPU. Then if you choose x64, the type in AnyCPU dll will "does not exist", otherwise the type in x64 dll will "does not exist". You just cannot emilate both of them. A: A combination of two ideas in this thread worked for me, so I'll post what I did in the hopes that it helps someone else over the next 5 years that this problem continues. I'm using VS2017 Community) Delete reference to dll Clean, Rebuild, Build Close VS, Unblock the dll (see note below), Delete shadow cache Open VS, Clean, Rebuild, Build Restore reference to dll Clean, Rebuild, Build I may not have the order exactly right in steps 2, 4, and 6 but I was grasping at straws after spending nearly 2 hours with this problem. I think the key for me was the combination of removing the reference, unblocking the dll and deleting the shadow cache. (Note for step 3 - The dll I'm using was written by a coworker/mentor of mine, so I know it's safe. Careful with this step if you don't know the source of your dll) I'll be bookmarking this thread for posterity, since it appears that MS has no desire to clean this stuff up. WPF is hard enough to learn on it's own, and having to hack through stuff like this when you've done everything right is infuriating. A: As another person posted this can be caused by saving the project on a network share. 
I found that if I switched from using a network path to a mapped network drive everything worked fine. from: "\\SERVER\Programming\SolutionFolder" to: "Z:\Programming\SolutionFolder" (exact mapping optional) A: Try checking the References section, and see if there is a warning icon over the library reference you included: If you see it then go to the Project -> Properties -> Application and make sure that both libraries are targeting the same version of the .NET framework. P.S. When this issue happens it can also be noticed from the Warnings section: A: In Visual Studio 2019 I was able to fix it by changing the dropdown to Release as recommended in other answers. But when I changed back to Debug mode the error appeared again. What fixed it for me in Debug mode: Switch to Release mode Click on "Disable project code" in the XAML Designer Switch back to Debug mode => the error is gone A: One more twist, in the hope that someone else may find it helpful. I had the same issue as everyone else here, and I tried all the suggestions--verified references, Debug/Release switch, restarted VS, checked build config level, rebuilt numerous times--and NOTHING HELPED. Finally, I tried the suggestion where I created a new Project and moved the one single object I was trying to resolve to that project, and THAT solved the reference issue. However--and this is the reason I'm adding yet another post, here--eventually I figured out that the actual problem was that the original Project included one object referencing a SQLite database. It turned out that the installed NuGet SQLite package was actually causing the issue. When I moved the DB-accessing code and the NuGet SQLite reference to its own project, then I was able to move the original object back into the original project with all the others, and the referencing issue did not reappear. Evidently there's some setting in the NuGet SQLite package that was confusing the system. A: I've stumbled accross the same problem too. In my case, I deleted the x:class property from my XAML file by mistake and it didn't work anymore. A: FWIW... I was having this exact issue today and come to find out, it was due to opening my Solution/Project from a UNC Network Path instead of a mapped drive. As soon as a mapped a drive to my repo and opened the project, it worked great. TLDR: Try opening project from a mapped drive A: Removing the sealed keyword from a class also takes away the error just in case one's classes are with that keyword. It worked for me! A: For me, I created a custom control and a second Generic.xaml because I didn't notice that a new folder that contains the associated Generic.xaml was created. So I just removed the duplicated Generic.xaml that I created and modified the other one.
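Given how many of the fixes above circle back to the xmlns declaration itself, a compact reference may help; it reuses the MusicPlayer names from the question and assumes the default WPF project layout:

<!-- type lives in the SAME assembly as the XAML: -->
xmlns:c="clr-namespace:MusicPlayer.Controls"

<!-- type lives in ANOTHER referenced assembly: -->
xmlns:c="clr-namespace:MusicPlayer.Controls;assembly=MusicPlayer"

<c:UpdatingMediaElement x:Name="MyMediaElement" />

Remember that in VB projects the Root Namespace from the project properties is prepended to Namespace blocks in code, so the clr-namespace value must be the fully qualified result.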
The name does not exist in the namespace error in XAML
Using VS2012 working on a VB.NET WPF application. I have a simple MusicPlayer tutorial app I am using to learn WPF. I am converting a C# version of the tutorial to VB.NET step by step. It has 2 classes in the app that are both under the same namespace. I am able to reference the namespace in the XAML but when I try to reference the class object in XAML I get an error and I am not able to compile. Strange thing is that the IntelliSense works fine with both referencing the namespace via the xmlns:c= tag and also when typing the class object using <c: But the object is underlined and errors are generated trying to build or work in the designer. The .vb class files are in a folder called \Controls. The Main project Root Namespace is intentionaly left blank. The class is coded like this... Namespace MusicPlayer.Controls Public Class UpdatingMediaElement .... code here End Public End Namespace The xaml looks like this (namespace defined in the <Window > tag xmlns:c="clr-namespace:MusicPlayer.Controls" (object defined in a <Grid> ) <c:UpdatingMediaElement Name="MyMediaElement" /> (error displayed) The name "UpdatingMediaElement" does not exist in the namespace "clr-namespace:MusicPlayer.Controls". Not sure what is wrong or how to fix it?
[ "When you are writing your wpf code and VS tell that \"The name ABCDE does not exist in the namespace clr-namespace:ABC\". But you can totally build your project successfully, there is only a small inconvenience because you can not see the UI designing (or just want to clean the code). \nTry to do these:\n\nIn VS, right click on your Solution -> Properties -> Configuration Properties\nA new dialog is opened, try to change the project configurations from Debug to Release or vice versa. \n\nAfter that, re-build your solution. It can solve your problem.\n", "If the assembly is different from the namespace in which your class is contained, you have to specfiy it explicitly.\nex:-\nxmlns:Local=\"clr-namespace:MusicPlayer.Controls;assembly=MusicPlayer\"\n\n", "In my case it was because of other compile errors. When other errors have been solved this seemingly related error was also removed from the list. Specially the errors at the bottom of the errors list and on pages you have recently changed.\n\nSo do not pay attention to this error directly and focus on other errors at first.\n\n", "I've seen this issue go away by clearing the Xaml Design Shadow Cache. I had the issue with Visual Studio 2015 Update 1.\nIn Visual Studio 2015 the Cache is located here: \n%localappdata%\\Microsoft\\VisualStudio\\14.0\\Designer\\ShadowCache\n\nProcess:\n\nRight-Click on the solution in the Solution Explorer and Choose \"Clean Solution\"\nShutdown Visual Studio\nDelete the ShadowCache folder\nReopened the Visual Studio project\nRebuild the solution\n\nAnd voila no more namespace errors. \n", "Try changing the build target platform to x86 and building the project.\nI noticed via Subversion that I apparently changed the project build Platform target to x64. This was the only change I had made. After making that change, the code was working for a short while before it started showing the same error you experienced. I changed the platform target to x86 to test and suddenly my designer was working again. Subsequently, I changed it back to x64, and the problem has disappeared completely. I suspect that the designer builds some kind of cached code in x32 and changing the x64 build platform breaks it when you make code changes.\n", "Maybe another solution for when the project compiles but the XAML error is showing : \n\nIn solution explore, on the project node that contains the xaml \nRight-click on the project and choose 'Unload Project'\nRight-click on the project and choose 'Reload Project'\nMake sure that your project is still choosen as \"startup project\". If not :\nRight-click on the project and choose 'Set as startup project'\n\nNo need to rebuild, or close visual studio.\n", "Jesus... This is still a problem five years later in Visual Studio 2017. Since I'm new to WPF, I was sure the problem was somehow me, but no, everything compiled and ran correctly.\nI tried rebuilding, cleaning and rebuilding, switching between x86/x64 output, rebooting Windows, cleaning the ShadowCache folder, adding \";assembly={my main assembly name}\" to the XML namespace declaration, nothing worked! The single thing that did:\nPut my static class of Commands (in my case the deal was about making the design discover my WPF Commands) in its separate assembly and changing the assembly name to that one's instead.\n", "Dunno if this will help anyone else\nI'm new to WPF and still a novice with VB.net - so I was assuming that getting this error was being caused by me doing summit silly........ suppose I was really! 
I've managed to get rid of it by moving my project from a shared drive to one of my local drives. \nError's disappeared, project compiles perfectly no further issues - yet. Looks like VS2015 still has problems with projects held on a shared drive.\n", "I had this problem recently using VS 2015 Update 3 for my WPF project in .NET 4.6.2. The copy of my project was in a network folder, I moved it locally and that solved the problem.\nThis may solve other sort of problems, as it looks like VS 2015 doesn't like network paths. Another issue that is a big problem for them is syncing git repositories if my project is in a network path, also solved by moving it locally.\n", "I went through all the answers and none helped me. Finally was able to solve it by myself, so presenting the answer as it might help others.\nIn my case, the solution had two projects, one containing the models (say the project and assembly name was Models) and another containing the views and view models (as per our convention: project, assembly name and default namespace were Models.Monitor). The Models.Monitor referred Models project. \nIn the Models.Monitor project, in one of the xaml I included the following namespace:\nxmlns:monitor=\"clr-namespace:Models.Monitor\"\nI suspect that MsBuild and Visual Studio then were erroring out as they were trying to find a 'Monitor' type in the assembly 'Models'. To resolve I tried the following:\n\nxmlns:monitor=\"clr-namespace:Models.Monitor;assembly=\" - which is valid if the namespace is in same assembly as per https://msdn.microsoft.com/en-us/library/ms747086(v=vs.110).aspx\nalso tried the explicit namespace declaration:\nxmlns:monitor=\"clr-namespace:Models.Monitor;assembly=Models.Monitor\"\n\nNeither of the above worked.\nFinally I gave up, and as a work around moved the UserControl I was trying to use to another namespace: 'ModelsMonitor'. I was able to compile fine after that.\n", "I had the same problem , and in my case the the Markup Design View asked me to rebuild the solution and did not show me the form layout with this message:\nDesign view is unavailable for x64 and ARM target platforms, or Build the Project to update Design view.\nIt does not get solved by rebuilding the solution (neither the design view nor the \"The name does not exist in the namespace\" error)\nI think it was because I had played with the settings on Solution -> Properties > Configuration Properties\nI finally resolved the problem with 2 jobs:\n\nChecking all check boxes on Build Column of the page: Solution -> Properties -> Configuration Properties\nChanging the solution configurations from Debug to Release or vice versa.\n\nI think it's a bug in Visual Studio2012 Update 2.\n", "The same problem plagues Visual Studios 2013, Service Pack 4.\nI also tried it with Visual Studios 2015 Preview with the same results.\nIt's just a limitation of the WPF visualizer which the Visual Studios team hasn't fixed.\nAs proof, building in x86 mode enables the visualizer and building in x64 mode disables it.\nStrangely enough intellisense works for Visual Studios 2013, Service Pack 4.\n", "I'm also having a lot of trouble with this one! Intellisense helps me complete the namespace and everything, but the compiler cries. I've tried everything I found in this and other threads. However in my case what helped in the end was writing something like this:\nxmlns:util=\"clr-namespace:LiveSpielTool.Utils;assembly=\"\n\nLeaving the assembly name empty. No idea why. But it was mentioned here. 
I must add I am developing an assembly, so the assembly attribute might make sense. But entering the assembly name did not work. So weird.\n", "In my case the problem was due to some phantom files under the project's obj directory. The following fixed the issue for me:\n\nClean project\nExit VS\nrm -rf /obj/*\nInvoke VS and rebuild\n\n", "Try verifying your assembly references. If you have a yellow exclamation mark on the project references there's a problem there and you'll get all kinds of errors.\nIf you know the project reference is correct, check the Target framework. For instance, having a project using the 4.5 framework reference a project with 4.5.2 framework is not a good combination.\n", "Looks like this problem may be solved through a variety of \"tricks.\"\nIn my case, I had been building/rebuilding/cleaning the entire solution, instead of just the project that I was working on within the solution. Once I clicked \"Build [my project],\" the error message went away.\n", "The solution for me was to unblock the assembly DLLs. The error messages you get don't indicate this, but the XAML designer refuses to load what it calls \"sandboxed\" assemblies. You can see this in the output window when you build. DLLs are blocked if they are downloaded from the internet. To unblock your 3rd-party assembly DLLs:\n\nRight click on the DLL file in Windows Explorer and select Properties.\nAt the bottom of the General tab click the \"Unblock\" button or checkbox.\n\nNote: Only unblock DLLs if you are sure they are safe.\n", "In my case, the user control was added to the main project. I tried various solutions above to no avail. Either I would get Invalid Markup but the solution would compile and work, or I would add the \nxmlns:c=\"clr-namespace:MyProject;assembly=MyProject\" and then the markup would show, but I would get a compile error that the tag does not exist in the XML namespace.\nFinally, I added a new WPF User Control Library project to the solution and moved my user control from the main project into that one. Added the reference and changed the assembly to point to the new library and finally the markup worked and the project compiled without error.\n", "In my case I had a namespace and class spelled exactly the same, so for example, one of my namespaces was\nfirstDepth.secondDepth.Fubar\n\nwhich contains its own classes (e.g. firstDepth.secondDepth.Fubar.someclass)\nbut I also had a 'Fubar' class in the namespace\nfirstDepth.secondDepth\n\nwhich textually resolves to the same as the Fubar namespace above.\nDon't do this\n", "This problem can also be caused if the assembly that you're referencing isn't actually built. For example, if your xaml is in Assembly1 and you're referencing a class also in Assembly1, but that assembly has errors and isn't building, this error will be shown.\nI feel silly about it, but in my case I was tearing asunder a user control and had all sorts of errors in the related classes as a result. As I was attempting to fix them all I started with the errors in question, not realising that xaml relies on built assemblies to find these references (unlike c#/vb code which can work it out even before you build).\n", "I get this problem all the time. My views are in a WPF Custom Control Library project (a variant on Class Library). I can reference pre-built assemblies, but cannot reference any code in another project of the same solution. 
As soon as I move the code to the same project as the xaml it's recognized.\n", "This happened to me already twice in a complex WPF app, in it there are 4 multi platform projects, 1 shared project, 2 support libraries, and 1 test project.. \nThis very specific XAML namespace error happened twice on very recently modified files on the Shared project. In both of my cases, it was a new c# file added with a repeating namespace entry; \nLike namespace MyProgram.MyFolder.MyProgram.MyFolder\nI double pasted it once by mistake, and once it was due to JetBrains Rider double pasting the namespace. (If you ever rename a project in Rider, it time to time starts double pasting namespaces on new file creations, especially on Shared projects..). These c# files with repeating namespaces were then called in the ViewModels where XAML files were referencing to. Well you then get these unrelated and misleading errors, you can have a problem with one file, all your Xaml files will start erroring out eventually.\nAnyways, if you get these kind of errors, it's most of the time an issue on a very newly added file or code change. My suggestions would be to look at your very recent changes. \n", "If non of the answers worked\nFor me was .Net Framework version compatibility issue of the one i'm using was older then what is referencing\n\nFrom properties => Application then target framework\n\n", "In my case, it was just a weird bug.\nI had the class I was trying to use in my namespace however Visual Studio kept throwing an error saying the class did not exist in the given namespace.\nWhat I did to fix it was really silly but worked like a charm.\nI commented out all the lines of code where I was trying to use the class, cleaned the build, rebuilt and the project was up and running.\nThen I just uncommented the lines of code I had commented earlier and well, Visual Studio was no longer throwing me any errors.\nRebuild again and you are ready to go.\n", "VB.NET does not automatically add the Namespace information based on the folder structure as it does in C#. I think I am going through the same tutorial as you (Teach Yourself WPF in 24 Hours), and doing the same conversion to VB.\nI found you have to manually add the Namespace information to Both the XAML Class and the XAML.VB code behind to be able to use the Namespaces as described in the book. Even then, VB doesn't automatically Assign the Namespace to the Assembly as it does in VB.\nThere is another article here that shows how to include this in your project templates so it does build the Namespace information automatically - Automatically add namespace when adding new item\n", "In the solution property page, check the platform of the assembly that contains \"UpdatingMediaElement\" and the assmeblies that contain any of the superclasses and interfaces from which \"UpdatingMediaElement\" subclasses or implements. It appears that the platform of all these assemblies must be \"AnyCPU\".\n", "Another possible cause: A post-build event is removing the project DLL from the build folder.\nTo clarify: WPF designer may report \"The name XXX does not exist in the namespace...\", even when the name does exist in the namespace and the project builds and runs just fine if a post-build event removes the project DLL from the build folder (bin\\Debug, bin\\Release, etc.). I have personal experience with this in Visual Studio 2015.\n", "Ok, so none of these tips worked for me, unfortunately. I was able to eventually solve the issue. 
It seems that Visual Studio does not play nicely with network drives. I solved this issue by moving the project from the shared drive to my local and recompiled. No more errors.\n", "Adding to the pile.\nMine was the assembly name of the WPF application was the same assembly name as a referenced dll. So make sure you don't have duplicate assembly names in any of your projects. \n", "I had the solution stored on a network share and every time I opened it I would get the warning about untrusted sources. I moved it to a local drive and the \"namespace does not exist\" error went away as well.\n", "Also try to right click on your project->properties and change Platform target to Any CPU and rebuild, it will then work. This worked for me\n", "I had the added the assembly as a project - first deleted the ddl that was added specifically to the references to the dll - that did it.\n", "In my case, this problem will happen when the wpf program's architechture is not exactly same with dependency.\nSuppose you have one dependency that is x64, and another one is AnyCPU. Then if you choose x64, the type in AnyCPU dll will \"does not exist\", otherwise the type in x64 dll will \"does not exist\". You just cannot emilate both of them.\n", "A combination of two ideas in this thread worked for me, so I'll post what I did in the hopes that it helps someone else over the next 5 years that this problem continues. I'm using VS2017 Community)\n\nDelete reference to dll\nClean, Rebuild, Build\nClose VS, Unblock the dll (see note below), Delete shadow cache\nOpen VS, Clean, Rebuild, Build\nRestore reference to dll\nClean, Rebuild, Build\n\nI may not have the order exactly right in steps 2, 4, and 6 but I was grasping at straws after spending nearly 2 hours with this problem. I think the key for me was the combination of removing the reference, unblocking the dll and deleting the shadow cache.\n(Note for step 3 - The dll I'm using was written by a coworker/mentor of mine, so I know it's safe. Careful with this step if you don't know the source of your dll)\n\nI'll be bookmarking this thread for posterity, since it appears that MS has no desire to clean this stuff up. WPF is hard enough to learn on it's own, and having to hack through stuff like this when you've done everything right is infuriating. \n\n", "As another person posted this can be caused by saving the project on a network share. I found that if I switched from using a network path to a mapped network drive everything worked fine.\nfrom:\n\"\\\\SERVER\\Programming\\SolutionFolder\"\nto: \n\"Z:\\Programming\\SolutionFolder\"\n(exact mapping optional)\n", "Try checking the References section, and see if there is a warning icon over the library reference you included:\n\nIf you see it then go to the Project -> Properties -> Application and make sure that both libraries are targeting the same version of the .NET framework.\nP.S. When this issue happens it can also be noticed from the Warnings section:\n\n", "In Visual Studio 2019 I was able to fix it by changing the dropdown to Release as recommended in other answers. But when I changed back to Debug mode the error appeared again. \nWhat fixed it for me in Debug mode:\n\nSwitch to Release mode\nClick on \"Disable project code\" in the XAML Designer\n\n\n\nSwitch back to Debug mode => the error is gone\n\n", "One more twist, in the hope that someone else may find it helpful. 
I had the same issue as everyone else here, and I tried all the suggestions--verified references, Debug/Release switch, restarted VS, checked build config level, rebuilt numerous times--and NOTHING HELPED. Finally, I tried the suggestion where I created a new Project and moved the one single object I was trying to resolve to that project, and THAT solved the reference issue.\n\nHowever--and this is the reason I'm adding yet another post, here--eventually I figured out that the actual problem was that the original Project included one object referencing a SQLite database. It turned out that the installed NuGet SQLite package was actually causing the issue. When I moved the DB-accessing code and the NuGet SQLite reference to its own project, then I was able to move the original object back into the original project with all the others, and the referencing issue did not reappear. Evidently there's some setting in the NuGet SQLite package that was confusing the system.\n", "I've stumbled accross the same problem too.\nIn my case, I deleted the x:class property from my XAML file by mistake and it didn't work anymore.\n", "FWIW... I was having this exact issue today and come to find out, it was due to opening my Solution/Project from a UNC Network Path instead of a mapped drive.\nAs soon as a mapped a drive to my repo and opened the project, it worked great.\nTLDR: Try opening project from a mapped drive\n", "Removing the sealed keyword from a class also takes away the error just in case one's classes are with that keyword. It worked for me!\n", "For me, I created a custom control and a second Generic.xaml because I didn't notice that a new folder that contains the associated Generic.xaml was created. So I just removed the duplicated Generic.xaml that I created and modified the other one.\n" ]
[ 274, 55, 40, 33, 24, 8, 8, 7, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Add an empty constructor for your view model and rebuild solution.\n", "\nIn my case, the problem was a bug in Visual Studio - because the error didn't make any sense and when i Rebuilt - everything magically worked! After 45 minutes of frustration, this solution saved both my head and monitor from serious injury. Perhaps it may save you as well - it's certainly worth a shot?\n\nSolution: Try Re-building the solution\n\nShort cut to re-build: CTRL + SHIFT + B\n\n\nSolution 2: Try Restarting Visual Studio\n", "I had the same symptoms \"The name does not exist in the namespace error\", but the cause turned out to be different. I had a C: drive crash and had to reinstall Visual Studio 2017. I restored my source code files and opened the solution. Built it. No dice. As well as the \"Name does not exist in the namespace\" errors I noticed my sub-projects complaining that they couldn't find a MyProject.cs file ('MyProject' is not the actual project name, just used here as an example). I had a hunt for where MyProject.cs had gone, then remembered that there was never any such file! I looked in the Properties folders of each sub-project and found that Visual Studio had off its own back added bogus references to MyProject.cs!! I removed these references and now the solution builds fine like it used to.\n" ]
[ -1, -1, -1 ]
[ "vb.net", "visual_studio_2012", "wpf", "xaml" ]
stackoverflow_0014665713_vb.net_visual_studio_2012_wpf_xaml.txt
Q: Difficulty importing ThemedTK from ttkthemes I'm trying to import ThemedTK from ttkthemes in Python3 but am getting the following error message: line 4, in from ttkthemes import Themed_TK ImportError: cannot import name 'Themed_TK' from 'ttkthemes' Any ideas? from tkinter import filedialog from tkinter import ttk from ttkthemes import ThemedTK from reportlab.lib.units import mm from draw import bellGen root = ThemedTK() A: Apparently it's ThemedTk. With lowercase "k".
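For completeness, the corrected import and window creation look like this (the theme name is only an example; any theme shipped with ttkthemes works):

from tkinter import filedialog
from tkinter import ttk
from ttkthemes import ThemedTk  # lowercase "k"

root = ThemedTk(theme="arc")
root.mainloop()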
Difficulty importing ThemedTK from ttkthemes
I'm trying to import ThemedTK from ttkthemes in Python3 but am getting the following error message: line 4, in from ttkthemes import Themed_TK ImportError: cannot import name 'Themed_TK' from 'ttkthemes' Any ideas? from tkinter import filedialog from tkinter import ttk from ttkthemes import ThemedTK from reportlab.lib.units import mm from draw import bellGen root = ThemedTK()
[ "Apparently it's ThemedTk. With lowercase \"k\".\n" ]
[ 0 ]
[]
[]
[ "python", "ttk" ]
stackoverflow_0068376097_python_ttk.txt
Q: How to make ng-select width to fit placeholder and items? I have the <nb-select> with the following placeholder: <nb-select id="weather" placeholder="Expected Weather" fullWidth> <nb-option [value]="1">Dry</nb-option> <nb-option [value]="2">Wet</nb-option> </nb-select> I'm trying to set the width so that it auto-fits the content, showing not only the items but the placeholder too. I tried to use 'fullWidth', but it works only for the options content. I checked a few solutions from the following posts: How to always display a placeholder of ng-select? Make ng-select width adjust to selected items / available options? Unfortunately they didn't work. Thanks A: The solution I found is to set min-width to 100% instead of using fullWidth: <nb-select placeholder="Expected Weather" [ngStyle]="{'min-width': '100%'}" id="weather"> <nb-option [value]="1">Dry</nb-option> <nb-option [value]="2">Wet</nb-option> </nb-select>
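Since the value is static, the same effect should also be achievable from the component stylesheet instead of [ngStyle] (assuming Nebular applies the rule to the same host element the directive targets):

#weather {
  min-width: 100%;
}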
How to make ng-select width to fit placeholder and items?
I have the <nb-select> with the following placeholder: <nb-select id="weather" placeholder="Expected Weather" fullWidth> <nb-option [value]="1">Dry</nb-option> <nb-option [value]="2">Wet</nb-option> </nb-select> I'm trying to set the width so that it auto-fits the content, showing not only the items but the placeholder too. I tried to use 'fullWidth', but it works only for the options content. I checked a few solutions from the following posts: How to always display a placeholder of ng-select? Make ng-select width adjust to selected items / available options? Unfortunately they didn't work. Thanks
[ "The solution I found is to set min-width to 100% instead of using the fullWidth:\n<nb-select placeholder=\"Expected Weather\" [ngStyle]=\"{'min-width': '100%'}\" id=\"weather\">\n <nb-option [value]=\"1\">Dry</nb-option>\n <nb-option [value]=\"2\">Wet</nb-option>\n</nb-select>\n\n\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_ngselect", "html" ]
stackoverflow_0074659884_angular_angular_ngselect_html.txt
Q: How to implement recursive method using Mutiny UNI Quarkus reactive sql Java I have a Person table which contains the information below. Person table type personId INT Name VarChar fatherId INT --refers to personId in same table(Person) MotherId INT --refers to personId in same table(Person) More columns other details I have to implement a method similar to the older implementation below, using async programming, that returns a family tree. Older Implementation My POJO class public class FamilyTree { Person person; FamilyTree fatherFamily; FamilyTree motherFamily; public FamilyTree (Person person, FamilyTree father, FamilyTree mother){ this.person = person; this.fatherFamily = father; this.motherFamily = mother; } public static FamilyTree buildFamilyTree(int personId){ Person person = PersonRepository.GetPersonById(personId); FamilyTree fatherTree = (person.getFatherId() == null)? null : buildFamilyTree(person.getFatherId()); FamilyTree motherTree = (person.getMotherId() == null)? null : buildFamilyTree(person.getMotherId()); return new FamilyTree(person, fatherTree, motherTree); } } How do I implement this with Mutiny and Quarkus reactive SQL without causing blocking I/O exceptions? The new implementation class I need is: @ApplicationScoped public class FamilyTreeRepository{ @Inject OraclePool client; public Uni<Person> getPersonById(int personId){ String sql = "select * from person where personId=?"; Tuple tuple = Tuple.of(personId); return client.preparedQuery(sql).execute(tuple).onItem().transform(RowSet::iterator) .onItem().transform(iterator-> iterator.hasNext()?Person.convertPerson(iterator.next()):null); } public Uni<FamilyTree> getFamilyTree(int personId){ Uni<Person> person = getPersonById(personId); //help with this implementation is needed. return familyTree ; } } A: The implementation of the getFamilyTree() method using Mutiny and Quarkus reactive SQL could look like this: public Uni<FamilyTree> getFamilyTree(int personId){ return getPersonById(personId).flatMap(person -> { Uni<FamilyTree> fatherTree = person.getFatherId() == null ? Uni.createFrom().nullItem() : getFamilyTree(person.getFatherId()); Uni<FamilyTree> motherTree = person.getMotherId() == null ? Uni.createFrom().nullItem() : getFamilyTree(person.getMotherId()); return Uni.combine().all().unis(fatherTree, motherTree) .asTuple() .map(tuple -> new FamilyTree(person, tuple.getItem1(), tuple.getItem2())); }); } The recursion must stop when a parent id is null, otherwise the parent lookup fails; Uni.createFrom().nullItem() supplies the missing branch.
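For context, the repository composes naturally with a reactive endpoint: Quarkus subscribes to the returned Uni itself, so nothing blocks the I/O thread. A small illustrative resource (path and class names are mine, not from the post; the jakarta imports assume Quarkus 3, older versions use javax):

import io.smallrye.mutiny.Uni;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;

@Path("/family-tree")
public class FamilyTreeResource {

    @Inject
    FamilyTreeRepository repository;

    @GET
    @Path("/{personId}")
    public Uni<FamilyTree> get(@PathParam("personId") int personId) {
        // Returning the Uni hands the subscription to Quarkus; no .await() needed.
        return repository.getFamilyTree(personId);
    }
}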
How to implement recursive method using Mutiny UNI Quarkus reactive sql Java
I have a Person table which contains the information below. Person table type personId INT Name VarChar fatherId INT --refers to personId in same table(Person) MotherId INT --refers to personId in same table(Person) More columns other details I have to implement a method similar to the older implementation below, using async programming, that returns a family tree. Older Implementation My POJO class public class FamilyTree { Person person; FamilyTree fatherFamily; FamilyTree motherFamily; public FamilyTree (Person person, FamilyTree father, FamilyTree mother){ this.person = person; this.fatherFamily = father; this.motherFamily = mother; } public static FamilyTree buildFamilyTree(int personId){ Person person = PersonRepository.GetPersonById(personId); FamilyTree fatherTree = (person.getFatherId() == null)? null : buildFamilyTree(person.getFatherId()); FamilyTree motherTree = (person.getMotherId() == null)? null : buildFamilyTree(person.getMotherId()); return new FamilyTree(person, fatherTree, motherTree); } } How do I implement this with Mutiny and Quarkus reactive SQL without causing blocking I/O exceptions? The new implementation class I need is: @ApplicationScoped public class FamilyTreeRepository{ @Inject OraclePool client; public Uni<Person> getPersonById(int personId){ String sql = "select * from person where personId=?"; Tuple tuple = Tuple.of(personId); return client.preparedQuery(sql).execute(tuple).onItem().transform(RowSet::iterator) .onItem().transform(iterator-> iterator.hasNext()?Person.convertPerson(iterator.next()):null); } public Uni<FamilyTree> getFamilyTree(int personId){ Uni<Person> person = getPersonById(personId); //help with this implementation is needed. return familyTree ; } }
[ "The implementation of the getFamilyTree() method using Mutiny and Quarkus reactive SQL could look like this:\npublic Uni<FamilyTree> getFamilyTree(int personId){\n Uni<Person> person = getPersonById(personId);\n return person.flatMap(p -> Uni.combine().all(\n getFamilyTree(p.getFatherId()),\n getFamilyTree(p.getMotherId())\n ).map(tuple -> new FamilyTree(p, tuple.getValue1(), tuple.getValue2()))); \n}\n\n" ]
[ 0 ]
[]
[]
[ "java", "mutiny", "quarkus", "quarkus_reactive", "reactive_programming" ]
stackoverflow_0074574436_java_mutiny_quarkus_quarkus_reactive_reactive_programming.txt
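A hedged usage sketch for the corrected answer above: exposing the repository through a reactive JAX-RS resource. The class name, path and import style are assumptions (Quarkus versions before 3.0 use javax.* instead of jakarta.*). Returning the Uni lets Quarkus subscribe on the event loop, so no blocking I/O happens on the request thread.

import io.smallrye.mutiny.Uni;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;

@Path("/family-tree")
public class FamilyTreeResource {

    @Inject
    FamilyTreeRepository repository;   // the repository from the answer above

    @GET
    @Path("/{personId}")
    public Uni<FamilyTree> get(@PathParam("personId") int personId) {
        // Quarkus subscribes to the returned Uni and completes the HTTP
        // response once the recursive lookups finish.
        return repository.getFamilyTree(personId);
    }
}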
Q: gcloud: how to download the app via cli I deployed an app with gcloud preview app deploy. Is there a way to download it to another local machine? How can I get the files? I tried it via ssh with no success (can't access the docker dir).
UPDATE: I found this:
gcloud preview app modules download default --version 1 --output-dir=my_dir

but it's not loading files.
Log
Downloading module [default] to [my_dir/default]
Fetching file list from server...
|- Downloading [0] files... -|

A: I am coming to Google App Engine after two years; I see that they have made lots of improvements and added tons of features. But sadly, their documentation sometimes leaves much to be desired.
I used to download the code of the uploaded version with appcfg.py, using the following command.
appcfg.py download_app -A <app_id> -V <version> <output-dir>
But of course now they have culminated everything in the gcloud shell, where appcfg.py is not accessible.
However, the following method helped me to download the deployed code:

Go to the console and in to Google App Engine.
Select the project you want to work with.
Once the project's dashboard opens, click on the top right to open the built-in console window.

This should load the Cloud Shell at the bottom; if you check, appcfg.py is available to use in this VM.

Hence, use appcfg.py download_app -A <app_id> -V <version> <output-dir> to download the code.
Now once you have the code in the desired folder, in order to download it to your local machine you can open the Docker code editor.

Here I assumed that if I right-clicked and exported the desired folder it would work,

but instead it gave me the following error message.
{"Error":"'concurrency' must be a number but it is [object Undefined]","Message":"'concurrency' must be a number but it is [object Undefined]"}

So, I thought maybe it would play along nicely if the folder was an archive. Go back to the Cloud Shell and, using whatever utility you fancy, make an archive of the folder:
zip -r mycode.zip mycode

Go to the Docker code editor, export and download.

Now, of course there might be many more ways to do it (hopefully), but this is what made sense to me after returning to Google App Engine after 2 years.

A: Currently, the best way to do this is to pull the files out of Docker.
Put the instance into self-managed mode, so that you can ssh into it:
$ gcloud preview app modules set-managed-by default --version 1 --self

Find the name of the instance:
$ gcloud compute instances list | grep gae-default-1

Copy it out of the Docker container, change the permissions, and copy it back to your local machine:
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 'sudo docker cp gaeapp:/app /tmp'
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 "chown -R $USER /tmp/app"
$ gcloud compute copy-files --zone=us-central1-f gae-default-1-1234:/tmp/app /tmp/
$ ls /tmp/app
Dockerfile
[...]

A: IMHO, the best option today (Aug 2018) is:
Under the main menu, under Products, go to Tools -> Cloud Build -> Build history.
There, click the ID of the build you want.
Then, in the opened window (Build details), click the source link; the download of your compressed code begins.
As simple as that.
HTH.

A: As of Feb 2021, you can install the appengine-sdk using pip:
pip install appengine-sdk
Once installed, appcfg can be used to download the app code.
python -m appcfg download_app -A app_id [ -V version ] out-dir

A: Nothing works. Finally I found the source code this way. Simply go to Google Cloud Storage, choose the bucket starting with us.artifacts...., select containers > images and download the latest one (look by created date). Unzip the downloaded file; it will have all the deployed source code of the App Engine app.
gcloud: how to download the app via cli
I deployed an app with gcloud preview app deploy. Is there a way to download it to another local machine? How can I get the files? I tried it via ssh with no success (can't access the docker dir).
UPDATE: I found this:
gcloud preview app modules download default --version 1 --output-dir=my_dir

but it's not loading files.
Log
Downloading module [default] to [my_dir/default]
Fetching file list from server...
|- Downloading [0] files... -|
[ "I am coming to Google App Engine after two years, I see that they have made lots of improvements and added tons of features. But sadly, their documentation sometimes leaves much to be desired. \nI used to download my code of the uploaded version with the appcfg.pyusing the following command. \nappcfg.py download_app -A <app_id> -V <version> <output-dir>\nBut of course now that they have culminated everything in the gcloud shell where appcfg.py is not accessible. \nHowever, the following method helped me to download the deployed code:\n\nGo the console and in to the Google App Engine.\nSelect the project you want to work with.\nOnce the project's dashboard opens, Click on the top right to\nopen the built in console window.\n\nWhich should load the cloud shell at the bottom, now if you check appcfg.py is available to you to use in this VM. \n\nHence, use appcfg.py download_app -A <app_id> -V <version> <output-dir> to download the code.\nNow once you have the code in the desired folder, in order to download it on your local machine - You can open the docker code editor\n\nNow here I assumed if I rightclicked and exported the desired\nfolder it would work, \n\nbut instead it gave me the following error message.\n{\"Error\":\"'concurrency' must be a number but it is [object Undefined]\",\"Message\":\"'concurrency' must be a number but it is [object Undefined]\"}\n\nSo, I thought maybe it would play along nicely if the the folder\nwas an archive. Go back to the cloud shell and using whatever\nutility you fancy make an archive of the folder\nzip -r mycode.zip mycode\n\nGo to the docker code editor, export and download. \n\n\nNow. Of course there might many more ways do it (hopefully) but this is what made sense to me after returning to Google App Engine after 2 years.\n", "Currently, the best way to do this is to pull the files out of Docker.\nPut instance into self-managed mode, so that you can ssh into it:\n$ gcloud preview app modules set-managed-by default --version 1 --self\n\nFind the name of the instance:\n$ gcloud compute instances list | grep gae-default-1\n\nCopy it out of the Docker container, change the permissions, and copy it back to your local machine:\n$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 'sudo docker cp gaeapp:/app /tmp'\n$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 \"chown -R $USER /tmp/app\"\n$ gcloud compute copy-files --zone=us-central1-f gae-default-1-1234:/tmp/app /tmp/\n$ ls /tmp/app\nDockerfile\n[...]\n\n", "IMHO, the best option today (Aug 2018) is:\nUnder the main menu, under Products, go to Tools -> Cloud Build -> Build history.\nThere, click the ID of the build you want.\nThen, in the opened window (Build details), click the source link, the download of your compressed code begins.\nAs simple as that.\nHTH.\n", "As of Feb 2021, you can install appengine-sdk using pip\npip install appengine-sdk\nOnce installed, appcfg can be used to download the app code.\npython -m appcfg download_app -A app_id [ -V version ] out-dir\n", "Nothing works. Finally I found the source code this way. Simply go to google cloud storage. choose buckets starting with us.artifacts...., select containers > images > download the latest one (look by created date). unzip after downloaded file. it will have all the deployed source code of app engine.\n" ]
[ 17, 3, 2, 0, 0 ]
[]
[]
[ "gcloud", "google_app_engine", "google_cloud_platform" ]
stackoverflow_0032487781_gcloud_google_app_engine_google_cloud_platform.txt
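The last answer's console steps can also be scripted. A hedged sketch with gsutil: the artifacts bucket name is project- and region-specific, so YOUR_PROJECT and the us.artifacts prefix are assumptions to adapt, not part of the original answer.

# List buckets to find the artifacts bucket for your project.
gsutil ls

# Inspect the stored container image layers for the App Engine builds.
gsutil ls gs://us.artifacts.YOUR_PROJECT.appspot.com/containers/images/

# Copy everything locally (-m parallelizes, -r recurses), then unpack
# the newest archive as described above.
gsutil -m cp -r gs://us.artifacts.YOUR_PROJECT.appspot.com/containers/images/ ./appengine-artifacts/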
Q: Binary tree add missing child The function addMissingChild accepts two arguments, node and x: node is a pointer to the root node of a binary tree, and x is an integer value. The function addMissingChild must find the nodes which have only one child in the given binary tree; for those nodes, the function must add the missing child with the value x. Your task is to implement the function addMissingChild so that the program runs successfully. Use only C.
Example input/output
Input: 9 10 10 20 L 10 30 R 20 40 L 30 50 L 30 60 R 50 80 R 60 80 R 100
Output
10 20 40 100 50 70 80 60 100 90

A: Hope this helps (in C; the original used C++'s new. This assumes a Node struct with an int data field plus left and right pointers, and needs <stdlib.h>):
Node* createNode(int x)
{
    Node* n = (Node*)malloc(sizeof(Node));
    n->data = x;
    n->left = NULL;
    n->right = NULL;
    return n;
}

void addMissingChild(Node* node, int x)
{
    // Check if the current node has only one child
    if (node->left != NULL && node->right == NULL)
    {
        // Add the missing right child with value x
        node->right = createNode(x);
    }
    else if (node->left == NULL && node->right != NULL)
    {
        // Add the missing left child with value x
        node->left = createNode(x);
    }

    // Recursively call the function on the left and right children
    if (node->left != NULL)
    {
        addMissingChild(node->left, x);
    }
    if (node->right != NULL)
    {
        addMissingChild(node->right, x);
    }
}
Binary tree add missing child
The function addMissingChild accepts two arguments, node and x: node is a pointer to the root node of a binary tree, and x is an integer value. The function addMissingChild must find the nodes which have only one child in the given binary tree; for those nodes, the function must add the missing child with the value x. Your task is to implement the function addMissingChild so that the program runs successfully. Use only C.
Example input/output
Input: 9 10 10 20 L 10 30 R 20 40 L 30 50 L 30 60 R 50 80 R 60 80 R 100
Output
10 20 40 100 50 70 80 60 100 90
[ "Hope this helps:\nvoid addMissingChild(Node* node, int x)\n{\n // Check if the current node has only one child\n if (node->left != NULL && node->right == NULL)\n {\n // Add the missing child with value x\n node->right = new Node(x);\n }\n else if (node->left == NULL && node->right != NULL)\n {\n // Add the missing child with value x\n node->left = new Node(x);\n }\n\n // Recursively call the function on the left and right children\n if (node->left != NULL)\n {\n addMissingChild(node->left, x);\n }\n if (node->right != NULL)\n {\n addMissingChild(node->right, x);\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "c" ]
stackoverflow_0074674060_c.txt
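A small hedged driver for the answer above, assuming the same Node layout and the createNode() and addMissingChild() definitions given in the answer:

#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *left, *right;
} Node;

/* createNode() and addMissingChild() exactly as in the answer above. */

int main(void)
{
    /* 10 has only a left child, so addMissingChild should give it a
       right child holding 100. */
    Node *root = createNode(10);
    root->left = createNode(20);

    addMissingChild(root, 100);

    printf("%d %d %d\n", root->data, root->left->data, root->right->data);
    /* prints: 10 20 100 */
    return 0;
}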
Q: Dynamically create matrix from vectors in numpy I'm trying to create a matrix of shape Nx3 where N is not known at first. This is what I'm basically trying to do:
F = np.array([[],[],[]])

for contact in contacts:
    xp, yp, theta = contact

    # Create vectors for points and normal
    P = [xp, yp, 0]
    N = [np.cos(theta), np.sin(theta), 0]

    # Calculate vector product
    cross_PN = np.cross(P, N)

    # f = [mz, fx, fy]
    mz = cross_PN[2]
    fx = N[0]
    fy = N[1]

    f = np.array([mz, fx, fy])

    F = np.vstack([F, f])

But this code doesn't work. I can do a similar thing in Matlab very easily, but that is not the case in Python using numpy. Any help is greatly appreciated. Thank you.
I would like to create a matrix by adding new rows, but in the beginning the matrix is empty. That is why I receive the error: "along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3"

A: The error is caused by the shape of the initial array. np.array([[],[],[]]) has shape (3, 0): 3 rows and 0 columns. np.vstack() requires every stacked array to have the same number of columns, and each new row f has 3 columns, so stacking the (3, 0) array with a 3-column row fails with "along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3".
To fix this issue, initialize the F array with 0 rows and 3 columns before you start the loop:
F = np.empty((0, 3))
Dynamically create matrix from vectors in numpy
I'm trying to create a matrix of shape Nx3 where N is not known at first. This is what I'm basically trying to do:
F = np.array([[],[],[]])

for contact in contacts:
    xp, yp, theta = contact

    # Create vectors for points and normal
    P = [xp, yp, 0]
    N = [np.cos(theta), np.sin(theta), 0]

    # Calculate vector product
    cross_PN = np.cross(P, N)

    # f = [mz, fx, fy]
    mz = cross_PN[2]
    fx = N[0]
    fy = N[1]

    f = np.array([mz, fx, fy])

    F = np.vstack([F, f])

But this code doesn't work. I can do a similar thing in Matlab very easily, but that is not the case in Python using numpy. Any help is greatly appreciated. Thank you.
I would like to create a matrix by adding new rows, but in the beginning the matrix is empty. That is why I receive the error: "along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3"
[ "The error you are seeing is caused by trying to stack empty arrays together using np.vstack(). When you create an empty array with np.array([[],[],[]]), the resulting array has shape (3, 0), which means that it has 3 rows but no columns. When you try to stack this empty array with another array using np.vstack(), the resulting array has shape (3, 0), which means that it still has 3 rows but no columns, and this is why you are seeing the error \"along dimension 1, the array at index 0 has size 0 and the array at index 1 has size 3\".\nTo fix this issue, you can initialize the F array with the correct number of rows and columns before you start the loop. For example, you can create an empty array with shape (0, 3) like this:\nF = np.empty((0, 3))\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074673656_numpy_python.txt
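A complementary, more idiomatic pattern: since every np.vstack copies the whole matrix (O(n²) over the loop), it is usually better to collect rows in a Python list and convert once at the end. A minimal sketch with made-up contact data:

import numpy as np

contacts = [(1.0, 2.0, 0.3), (0.5, -1.0, 1.2)]  # made-up (xp, yp, theta) triples

rows = []
for xp, yp, theta in contacts:
    P = [xp, yp, 0.0]
    N = [np.cos(theta), np.sin(theta), 0.0]
    mz = np.cross(P, N)[2]          # z component of P x N
    rows.append([mz, N[0], N[1]])   # one row per contact: [mz, fx, fy]

F = np.array(rows)                  # single allocation, shape (len(contacts), 3)
print(F.shape)                      # (2, 3)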
Q: How to send a DM to a user using just their user id I would like to send a dm to a user just by using their user id that I copied from their profile. This is the code that I made, but it didn't work.
@client.command()
async def dm(userID, *, message):
    user = client.get_user(userID)
    await user.send(message)

This is the error that appeared:
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: AttributeError: 'NoneType' object has no attribute 'send'

A: All you have to do is change the userID argument to user: discord.User. That argument will accept user mentions (@user), usernames (user), and ids (904360748455698502). Note that command callbacks also receive the Context as their first parameter. The full code would now be:
@client.command()
async def dm(ctx, user: discord.User, *, message):
    channel = await user.create_dm()
    await channel.send(message)

A: Your code is partially correct. The AttributeError means that client.get_user(userID) returned None, so there was no user to send to: get_user only looks in the bot's cache, and the raw argument stays a string unless you annotate it. Annotate the parameter, then explicitly create a DMChannel with the user and send the message into it. Note that create_dm() is a coroutine and must be awaited.
Here is the working code:
@client.command()
async def dm(ctx, userID: int, *, message):
    user = client.get_user(userID)
    dmChannel = await user.create_dm()
    await dmChannel.send(message)

A: You can convert the user id to a user object, then create a DM and send the message as follows:
@client.command()
async def dm(ctx, user: discord.User, *, message = None):
    if message is None:
        await ctx.send("Enter the message to be sent")
        return
    try:
        channel = await user.create_dm()
        await channel.send(message)
    except discord.Forbidden:
        await ctx.send("Could not send the message")

And you must use a try block because some users do not allow DMs, like me :)
As the docs say, it raises Forbidden when the bot doesn't have permission.
How to send a DM to a user using just their user id
I would like to send a dm to a user just by using their user id that I copied from their profile. This is the code that I made, but it didn't work. @client.command() async def dm(userID, *, message): user = client.get_user(userID) await user.send(message) This is the error that appeared: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: AttributeError: 'NoneType' object has no attribute 'send'
[ "All you have to do is change the userID argument to user: discord.User. That argument will accept user mentions (@user), usernames (user), and ids (904360748455698502). The full code would now be:\n@client.command()\nasync def dm(user: discord.User, *, message):\n channel = await user.create_dm()\n await channel.send(message)\n\n", "Your code is partially correct. However, from the discord.py API reference, a User object is not messageable, i.e. you cannot use the send() function directly on the User itself.\nTo solve this problem, we need to first create a DMChannel with the user, and then send a message into the DMChannel.\nHere is the working code:\n@client.command()\nasync def dm(userID: int, *, message):\n user = client.get_user(userID)\n dmChannel = user.create_dm()\n await dmchannel.send(message)\n\n", "You can convert the user id to user object then create dm and sent the message as following:\n@client.command()\nasync def dm(ctx,user:discord.User, *, message = None):\n if message is None:\n await ctx.send(\"Enter the message to be sent\")\n try:\n channel = await user.create_dm()\n await channel.send(message)\n except discord.Forbidden:\n await ctx.send(\"could not send the message\")\n\nAnd you must use try block cause there are some users that are not allowing dm like me :)\nas the docs say It raises Forbidden when the bot don't have the perms\n" ]
[ 0, 0, 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074636410_discord_discord.py_python.txt
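One point worth adding, since the original traceback comes from client.get_user() returning None: get_user() only checks the bot's local cache. A hedged sketch that falls back to fetching from the API (fetch_user is a coroutine in discord.py) and guards against users who block DMs; the prefix and token are placeholders:

import discord
from discord.ext import commands

intents = discord.Intents.default()
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
async def dm(ctx, user_id: int, *, message: str):
    # get_user only checks the cache; fetch_user queries the API directly.
    user = bot.get_user(user_id) or await bot.fetch_user(user_id)
    try:
        await user.send(message)
    except discord.Forbidden:
        await ctx.send("That user does not accept DMs from this bot.")

bot.run("YOUR_BOT_TOKEN")  # placeholder token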
Q: Postgresql: how to get results count with min of a field and unique of b field? I have a PostgreSQL table as below:
id  session_id  result
1   111         success
2   111         fail
3   111         fail
4   222         fail
5   222         fail
6   222         success
7   333         success

There are three sessions in this table, with session ids 111, 222, and 333. Each session has multiple records, but the session_id is the same; the result of the record with the smallest id determines whether that session is successful or failed. The records with id 1, id 4 and id 7 in the above sample table determine whether a session is successful or unsuccessful.
Now I want to get the total of successful sessions and failed sessions. How do I write the SQL? I've tried the below:
SELECT COUNT(DISTINCT(session_id)) min(id) FROM logs WHERE result = success;
SELECT COUNT(DISTINCT(session_id)) min(id) FROM logs WHERE result = fail;

I expected the number of successful sessions to be two and the number of failed sessions to be one, but I got an error. How can I get the number of successful and unsuccessful sessions? Thanks

A: You may use distinct on with custom order by and conditional aggregation with filter clause.
with t as 
(
  select distinct on (session_id) result
  from logs
  order by session_id, id -- pick the smallest id for the session
)
select count(*) filter (where result = 'success') as success_cnt,
       count(*) filter (where result = 'fail') as fail_cnt
from t;

See demo
Postgresql: how to get results count with min of a field and unique of b field?
I have a PostgreSQL table as below:
id  session_id  result
1   111         success
2   111         fail
3   111         fail
4   222         fail
5   222         fail
6   222         success
7   333         success

There are three sessions in this table, with session ids 111, 222, and 333. Each session has multiple records, but the session_id is the same; the result of the record with the smallest id determines whether that session is successful or failed. The records with id 1, id 4 and id 7 in the above sample table determine whether a session is successful or unsuccessful.
Now I want to get the total of successful sessions and failed sessions. How do I write the SQL? I've tried the below:
SELECT COUNT(DISTINCT(session_id)) min(id) FROM logs WHERE result = success;
SELECT COUNT(DISTINCT(session_id)) min(id) FROM logs WHERE result = fail;

I expected the number of successful sessions to be two and the number of failed sessions to be one, but I got an error. How can I get the number of successful and unsuccessful sessions? Thanks
[ "You may use distinct on with custom order by and conditional aggregation with filter clause.\nwith t as \n(\n select distinct on (session_id) result\n from logs\n order by session_id, id -- pick the smallest id for the session\n)\nselect count(*) filter (where result = 'success') as success_cnt,\n count(*) filter (where result = 'fail') as fail_cnt\nfrom t;\n\nSee demo\n" ]
[ 2 ]
[]
[]
[ "distinct", "min", "postgresql", "sql", "unique" ]
stackoverflow_0074674019_distinct_min_postgresql_sql_unique.txt
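For comparison, the same result without PostgreSQL's distinct on, using a window function to pick each session's first row (same logs table and columns as the question):

SELECT count(*) FILTER (WHERE result = 'success') AS success_cnt,
       count(*) FILTER (WHERE result = 'fail')    AS fail_cnt
FROM (
    SELECT result,
           row_number() OVER (PARTITION BY session_id ORDER BY id) AS rn
    FROM logs
) first_rows
WHERE rn = 1;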
Q: How to add new key value to yaml without overwriting it in python? I have a small python script which is responsible for updating my yaml file by adding new records:
data = yaml.load(file)
data['WIN']['Machine'] = dict(node_labels='+> tfs vs2022')
data['WIN']['Machine'] = dict(vs='vs2022')
yaml.dump(data, file)

Every time I run the above script I get an updated yaml file like below:
WIN:
  Machine:
    vs: vs2022

My desired output is to have both of my key: value pairs:
WIN:
  Machine:
    node_labels: +> tfs vs2022
    vs: vs2022

I'm wondering why the line data['WIN']['Machine'] = dict(node_labels='+> tfs vs2022') is overwritten by the next line. How can I add several key: value pairs for the Machine section?

A: This is not a YAML related problem, but a conceptual problem in your non-yaml related Python code.
By assigning a dict as value to the key Machine, you set that value. By assigning
another dict to the key, you overwrite that value completely, erasing the previous key-value pair.
If you simplify your code:
data = dict(Machine=None)
data['Machine'] = dict(node_labels='+> tfs vs2022')
print('data 1', data)
data['Machine'] = dict(vs='vs2022')
print('data 2', data)

As you can see after the second assignment, the key node_labels is no longer available.
data 1 {'Machine': {'node_labels': '+> tfs vs2022'}}
data 2 {'Machine': {'vs': 'vs2022'}}

There are several ways to solve this. You can either assign a value to a key in the first dict:
data = dict(Machine=None)
data['Machine'] = added_dict = dict(node_labels='+> tfs vs2022')
print('data 1', data)
added_dict['vs'] ='vs2022'
print('data 2', data)

Now you have both keys in the second output:
data 1 {'Machine': {'node_labels': '+> tfs vs2022'}}
data 2 {'Machine': {'node_labels': '+> tfs vs2022', 'vs': 'vs2022'}}

If you don't already know there is a dict where you can add a key to, you might want to use .setdefault,
either using key-value assignment, and/or by using .update (useful for updating multiple keys in one go):
data = dict()
data.setdefault('Machine', {})['node_labels'] = '+> tfs vs2022'
print('data 1', data)
data.setdefault('Machine', {}).update(dict(vs='vs2022'))
print('data 2', data)

data 1 {'Machine': {'node_labels': '+> tfs vs2022'}}
data 2 {'Machine': {'node_labels': '+> tfs vs2022', 'vs': 'vs2022'}}

Of course you can put node_labels and vs in one dict and assign, but that would overwrite any existing key-values loaded
from YAML. So the use of .update is IMO better:
import sys
from pathlib import Path
import ruamel.yaml

file_in = Path('input.yaml')
# key in YAML mapping with null value
file_in.write_text("""\
WIN:
""")
 
yaml = ruamel.yaml.YAML()
data = yaml.load(file_in)
if data['WIN'] is None:
    data['WIN'] = {}
data['WIN'].setdefault('Machine', {}).update(dict(node_labels='+> tfs vs2022'))
data['WIN'].setdefault('Machine', {}).update(dict(vs='vs2022'))
yaml.dump(data, sys.stdout)

which gives your expected result:
WIN:
  Machine:
    node_labels: +> tfs vs2022
    vs: vs2022
How to add new key value to yaml without overwriting it in python?
I have a small python script which is responsible for updating my yaml file by adding new records:
data = yaml.load(file)
data['WIN']['Machine'] = dict(node_labels='+> tfs vs2022')
data['WIN']['Machine'] = dict(vs='vs2022')
yaml.dump(data, file)

Every time I run the above script I get an updated yaml file like below:
WIN:
  Machine:
    vs: vs2022

My desired output is to have both of my key: value pairs:
WIN:
  Machine:
    node_labels: +> tfs vs2022
    vs: vs2022

I'm wondering why the line data['WIN']['Machine'] = dict(node_labels='+> tfs vs2022') is overwritten by the next line. How can I add several key: value pairs for the Machine section?
[ "This is not a YAML related problem, but a conceptual problem in your non-yaml related Python code.\nBy assigning a dict as value to the key Machine, you set that value. By assigning\nanother dict to the key, you overwrite that value completely, erasing the previous key-value pair.\nIf you simplify your code:\ndata = dict(Machine=None)\ndata['Machine'] = dict(node_labels='+> tfs vs2022')\nprint('data 1', data)\ndata['Machine'] = dict(vs='vs2022')\nprint('data 2', data)\n\nAs you can see after the second assignment, the key node_labels is no longer available.\ndata 1 {'Machine': {'node_labels': '+> tfs vs2022'}}\ndata 2 {'Machine': {'vs': 'vs2022'}}\n\nThere are several ways to solve this. You can either assign a value to a key in the first dict:\ndata = dict(Machine=None)\ndata['Machine'] = added_dict = dict(node_labels='+> tfs vs2022')\nprint('data 1', data)\nadded_dict['vs'] ='vs2022'\nprint('data 2', data)\n\nNow you have both keys in the second output:\ndata 1 {'Machine': {'node_labels': '+> tfs vs2022'}}\ndata 2 {'Machine': {'node_labels': '+> tfs vs2022', 'vs': 'vs2022'}}\n\nIf you don't already know there is a dict where you can add a key to, you might to use .setdefault,\neither using key-value assigment, and/or by using .update (useful for updating multiple keys in one go):\ndata = dict()\ndata.setdefault('Machine', {})['node_labels'] = '+> tfs vs2022'\nprint('data 1', data)\ndata.setdefault('Machine', {}).update(dict(vs='vs2022'))\nprint('data 2', data)\n\ndata 1 {'Machine': {'node_labels': '+> tfs vs2022'}}\ndata 2 {'Machine': {'node_labels': '+> tfs vs2022', 'vs': 'vs2022'}}\n\nOf course you can put node_labels and vs in one dict and assign, but that would overwrite any existing key-values loaded\nfrom YAML. So the use of .update is IMO better:\nimport sys\nfrom pathlib import Path\nimport ruamel.yaml\n\nfile_in = Path('input.yaml')\n# key in YAML mapping with null value\nfile_in.write_text(\"\"\"\\\nWIN:\n\"\"\")\n \nyaml = ruamel.yaml.YAML()\ndata = yaml.load(file_in)\nif data['WIN'] is None:\n data['WIN'] = {}\ndata['WIN'].setdefault('Machine', {}).update(dict(node_labels='+> tfs vs2022'))\ndata['WIN'].setdefault('Machine', {}).update(dict(vs='vs2022'))\nyaml.dump(data, sys.stdout)\n\nwhich gives your expected result:\nWIN:\n Machine:\n node_labels: +> tfs vs2022\n vs: vs2022\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "yaml" ]
stackoverflow_0074669180_python_python_3.x_yaml.txt
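The question's own snippet has a second pitfall worth noting: it loads and dumps through the same open file handle, so depending on the mode the dump either fails (read-only) or is appended after the existing content (r+). A hedged sketch of the whole round trip with plain PyYAML, reusing the .setdefault pattern from the answer; the file name is a placeholder:

import yaml

with open("config.yaml") as fh:
    data = yaml.safe_load(fh) or {}

if data.get("WIN") is None:        # key may exist with a null value
    data["WIN"] = {}
machine = data["WIN"].setdefault("Machine", {})
machine["node_labels"] = "+> tfs vs2022"
machine["vs"] = "vs2022"

with open("config.yaml", "w") as fh:   # reopen for writing, don't reuse the handle
    yaml.safe_dump(data, fh, default_flow_style=False, sort_keys=False)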
Q: I do not want to show an alert message when I prevent users pressing Ctrl and P How can I hide JavaScript's behavior from the user when I prohibit printing?
I was able to prevent the print shortcut from being used on that web page by writing the following inside the head tag of the html file:
<script type="text/javascript">
document.onkeydown = keys;
function keys() {
    switch (event.keyCode) {
        case 82: // Ctrl + R
            if (event.ctrlKey) {
                event.keyCode = 0;
                return false;
            }
        case 80: // Ctrl + P
            if (event.ctrlKey) {
                event.keyCode = 0;
                alert('Hello.'); // If this line were removed, Ctrl and P would go through
                return false;
            }
            break;
    }
}
</script>

This Ctrl+P is strange. If I do not put something such as alert() before return false;, the print screen will appear. However, like
case 82: // Ctrl + R
    if (event.ctrlKey) {
        event.keyCode = 0;
        return false; // without an alert, this code works
    }

the script can be executed without any alert code. That is a strange thing to me.
If an alert message is shown, the user will easily see that JavaScript or something is running, so if possible, I want something invisible to the user side. What should I do? Do I have to show an alert message to the user, like:
alert('Please do not print.');

I want some help. Thanks.

A: The simplest and most standard way to implement this is with Event#preventDefault:
window.addEventListener('keydown', (e) => {
  if (e.key === 'p' && e.ctrlKey) {
    e.preventDefault()
  }
})
I do not want to show an alert message when I prevent users pressing Ctrl and P
How can I hide JavaScript's behavior from the user when I prohibit printing?
I was able to prevent the print shortcut from being used on that web page by writing the following inside the head tag of the html file:
<script type="text/javascript">
document.onkeydown = keys;
function keys() {
    switch (event.keyCode) {
        case 82: // Ctrl + R
            if (event.ctrlKey) {
                event.keyCode = 0;
                return false;
            }
        case 80: // Ctrl + P
            if (event.ctrlKey) {
                event.keyCode = 0;
                alert('Hello.'); // If this line were removed, Ctrl and P would go through
                return false;
            }
            break;
    }
}
</script>

This Ctrl+P is strange. If I do not put something such as alert() before return false;, the print screen will appear. However, like
case 82: // Ctrl + R
    if (event.ctrlKey) {
        event.keyCode = 0;
        return false; // without an alert, this code works
    }

the script can be executed without any alert code. That is a strange thing to me.
If an alert message is shown, the user will easily see that JavaScript or something is running, so if possible, I want something invisible to the user side. What should I do? Do I have to show an alert message to the user, like:
alert('Please do not print.');

I want some help. Thanks.
[ "The simplest and most standard way to implement this is with Event#preventDefault:\nwindow.addEventListener('keydown', (e) => {\n if (e.key === 'p' && e.ctrlKey) {\n e.preventDefault()\n }\n})\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "keyboard_shortcuts" ]
stackoverflow_0074673935_javascript_keyboard_shortcuts.txt
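Worth knowing before relying on the accepted approach: preventDefault() only disables the keyboard shortcut, so users can still print from the browser menu, and any client-side block can be bypassed. A hedged, equally invisible complement is to blank the page for print media; the snippet below injects that rule from JavaScript:

// Suppress Ctrl/Cmd+P quietly, without an alert.
window.addEventListener('keydown', (e) => {
  if (e.key === 'p' && (e.ctrlKey || e.metaKey)) {
    e.preventDefault()
  }
})

// Also blank the page if printing is triggered some other way
// (File -> Print, window.print(), etc.).
const style = document.createElement('style')
style.textContent = '@media print { body { visibility: hidden } }'
document.head.appendChild(style)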
Q: Start and end indices of AnnotatedString varies by language I used this (not so nice anymore) example to enable linkification on my Android Jetpack Compose Text composable (see the section "The ClickableText handles link on text"). That works easily and nicely so far for one language. As you can see in the AnnotatedString.Builder:
addStyle(
    style = SpanStyle(
        textDecoration = TextDecoration.Underline
    ),
    start = 8,
    end = 15
)
addStringAnnotation(
    tag = uriTag,
    annotation = "https://developer.android.com/jetpack/compose",
    start = 8,
    end = 15
)

I have to enter start and end indices to highlight the link with underlines. Imagine I have multiple string language resources and I only want to linkify website or Webseite:

"My website"
"Meine Webseite"

The upper English string would have start and end indices from 4 to 10. The lower German string would have 7 to 14. This is not very usable for multiple language resources.
How can I linkify my Text composable more easily, without calculating indices?
(Please note: I want to use only the official androidx.* and kotlin.* libraries. Other 3rd-party libraries will be ignored.)

A: You can use offset like below.
onClick = { offset ->
    annotatedText.getStringAnnotations(start = offset, end = offset)
        .firstOrNull()?.let {
            onLinkClick(it.item)
        }
}
Start and end indices of AnnotatedString varies by language
I used this (not so nice anymore) example to enable linkification on my Android Jetpack Compose Text composable (see the section "The ClickableText handles link on text"). That works easily and nicely so far for one language. As you can see in the AnnotatedString.Builder:
addStyle(
    style = SpanStyle(
        textDecoration = TextDecoration.Underline
    ),
    start = 8,
    end = 15
)
addStringAnnotation(
    tag = uriTag,
    annotation = "https://developer.android.com/jetpack/compose",
    start = 8,
    end = 15
)

I have to enter start and end indices to highlight the link with underlines. Imagine I have multiple string language resources and I only want to linkify website or Webseite:

"My website"
"Meine Webseite"

The upper English string would have start and end indices from 4 to 10. The lower German string would have 7 to 14. This is not very usable for multiple language resources.
How can I linkify my Text composable more easily, without calculating indices?
(Please note: I want to use only the official androidx.* and kotlin.* libraries. Other 3rd-party libraries will be ignored.)
[ "You can use offset like below.\nonClick = { offset ->\n annotatedText.getStringAnnotations(start = offset, end = offset)\n .firstOrNull()?.let {\n onLinkClick(it.item)\n }\n }\n \n\n" ]
[ 0 ]
[]
[]
[ "android", "android_jetpack_compose", "android_jetpack_compose_text", "androidx", "textview" ]
stackoverflow_0072609751_android_android_jetpack_compose_android_jetpack_compose_text_androidx_textview.txt
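A hedged sketch of one way to avoid hard-coded indices entirely: keep the link word in its own string resource per locale and find it with indexOf at runtime. The resource ids, tag value and composable name are assumptions, not part of the original post; only androidx.* and kotlin.* APIs are used:

import androidx.compose.foundation.text.ClickableText
import androidx.compose.runtime.Composable
import androidx.compose.ui.res.stringResource
import androidx.compose.ui.text.SpanStyle
import androidx.compose.ui.text.buildAnnotatedString
import androidx.compose.ui.text.style.TextDecoration

@Composable
fun LinkifiedLabel(onLinkClick: (String) -> Unit) {
    val fullText = stringResource(R.string.my_website)   // "My website" / "Meine Webseite"
    val linkWord = stringResource(R.string.website_word) // "website"    / "Webseite"
    val start = fullText.indexOf(linkWord)
    val end = start + linkWord.length

    val annotated = buildAnnotatedString {
        append(fullText)
        if (start >= 0) { // only annotate when the word is actually present
            addStyle(SpanStyle(textDecoration = TextDecoration.Underline), start, end)
            addStringAnnotation(
                tag = "URL",
                annotation = "https://developer.android.com/jetpack/compose",
                start = start,
                end = end
            )
        }
    }

    ClickableText(text = annotated) { offset ->
        annotated.getStringAnnotations("URL", offset, offset)
            .firstOrNull()?.let { onLinkClick(it.item) }
    }
}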