Dataset schema:

id: string, 40 characters
text: string, 29 to 2.03k characters
original_text: string, 3 to 154k characters
subdomain: string, 20 classes
metadata: dict
71a8b325665df0257bd1363d3e971f5034da44a6
Q: How do I get the raw folder path in Android for React Native? I have searched a lot but have been unable to find any solution for fetching an mp3 file from the raw folder. How do I get a local mp3 path for playing an mp3 from the raw folder in Android? I am using the "react-native-audio-streaming" module.

this.chalisa = './song/shree_hanuman_ji_ki_aarti.mp3'

No luck at all :(

A: If you want a real path, make a copy of the stream obtained from ContentResolver.openInputStream(). You can read about that here: https://commonsware.com/blog/2016/03/15/how-consume-content-uri.html

That said, I found this React Native plugin helpful for cases where you just want to grab the path: https://github.com/luisfuertes/react-native-file-picker

Hope that helps.
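For comparison, a minimal sketch of playing a bundled file with the react-native-sound library rather than react-native-audio-streaming; the placement of the mp3 under android/app/src/main/res/raw/ is an assumption, not taken from the question:

import Sound from 'react-native-sound';

// On Android, Sound.MAIN_BUNDLE resolves bare file names against res/raw.
const aarti = new Sound('shree_hanuman_ji_ki_aarti.mp3', Sound.MAIN_BUNDLE, (error) => {
  if (error) {
    console.log('Failed to load the sound', error);
    return;
  }
  aarti.play(); // start playback once loading has succeeded
});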
stackoverflow
{ "language": "en", "length": 111, "provenance": "stackexchange_0000F.jsonl.gz:870719", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560118" }
99dd4f5faa31f037bd953398356f6e75b0374314
Q: How do I set a user and password on a XAMPP server, just like a router asks for at the beginning? Like this? https://s15.postimg.org/s1g5h3rgr/dsfdsf.png I want to understand how this protocol works: how does the browser understand that the server needs a user and password? I got as far as

wget http://admin:admin@192.168.1.1

How do I create this authentication protocol?

A: The authentication you want is achieved by the server, in your case the Apache server in your WAMP setup. Follow the steps below to set up basic authentication:

1. Create a file named .htaccess in your web root directory.
2. Add the following contents to it:

AuthType Basic
AuthName "Password Protected Area"
AuthUserFile .htpasswd
Require valid-user

3. Create a file named .htpasswd in the same directory and add the following contents:

admin:$apr1$lrXGxv30$XbZ2kQ3pDMDSQJ4XJV2Rv1

Here admin is the username and $apr1$lrXGxv30$XbZ2kQ3pDMDSQJ4XJV2Rv1 is the encrypted password for admin123. You can generate the .htpasswd file at http://www.htaccesstools.com/htpasswd-generator/

WARNING: It is recommended that you place the .htpasswd file in another directory which is not your web root directory, and then give the path to it in the .htaccess, like: /path/to/.htpasswd

A wonderful tutorial to help you out: https://www.web2generators.com/apache-tools/htpasswd-generator
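For background on the protocol the question asks about: this is HTTP Basic authentication (RFC 7617). The server answers an unauthenticated request with a 401 status and a WWW-Authenticate header; the browser then prompts for credentials and retries with an Authorization header carrying base64(user:password). A rough exchange:

GET / HTTP/1.1
Host: 192.168.1.1

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Password Protected Area"

GET / HTTP/1.1
Host: 192.168.1.1
Authorization: Basic YWRtaW46YWRtaW4=

Here YWRtaW46YWRtaW4= is base64 of admin:admin, which is also exactly what the wget http://admin:admin@192.168.1.1 form sends.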
stackoverflow
{ "language": "en", "length": 187, "provenance": "stackexchange_0000F.jsonl.gz:870723", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560133" }
a3d39ad0139490a9aea6371e4f11e914b572f9ab
Q: Unable to filter a DataFrame using a window function in Spark. I am trying to use a logical expression based on a window function to detect duplicate records:

df
  .where(count("*").over(Window.partitionBy($"col1", $"col2")) > lit(1))
  .show

In Spark 2.1.1 this gives:

java.lang.ClassCastException: org.apache.spark.sql.catalyst.plans.logical.Project cannot be cast to org.apache.spark.sql.catalyst.plans.logical.Aggregate

On the other hand, it works if I assign the result of the window function to a new column and then filter that column:

df
  .withColumn("count", count("*").over(Window.partitionBy($"col1", $"col2")))
  .where($"count" > lit(1)).drop($"count")
  .show

I wonder how I can write this without using a temporary column?

A: I guess window functions cannot be used within the filter (in SQL, window functions are evaluated after the WHERE stage, so they cannot appear in a filter directly). You have to create an additional column and filter that one. What you could do is pull the window function into the select:

df.select(col("1"), col("2"), lag(col("2"), 1).over(window).alias("2_lag")).filter(col("2_lag") == col("2"))

Then you have it in one statement.
stackoverflow
{ "language": "en", "length": 126, "provenance": "stackexchange_0000F.jsonl.gz:870733", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560166" }
87774c74b0b2cdec934b183a73056491bc248e28
Q: Why does this `else` block work even though it is not at the same level as the `if` case? This code runs fine and generates the wanted list of prime numbers. But the else block that prints our prime numbers is outside the if block, yet it works anyway. Can someone explain it to me?

for num in range(0, 100 + 1):
    # prime numbers are greater than 1
    if num > 1:
        for i in range(2, num):
            if (num % i) == 0:
                break
        else:
            print(num)

Reference: programiz.com

A: Python has a neat for-else construct: for loops also have an else clause which most of us are unfamiliar with. The else clause executes when the loop completes normally. This means that the loop did not encounter any break.

A: In fact, the for block also has the keyword else. See the for-else documentation.

A: A common use case for the else clause in loops is to implement search loops; say you're performing a search for an item that meets a particular condition, and need to perform additional processing or raise an error if no acceptable value is found. Refer to https://shahriar.svbtle.com/pythons-else-clause-in-loops
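A minimal, runnable illustration of the search-loop pattern from the last answer (the data here is made up for the example):

users = ['alice', 'bob', 'carol']
target = 'dave'
for user in users:
    if user == target:
        print('found', user)
        break
else:
    # runs only if the loop finished without hitting break
    print('no matching user')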
stackoverflow
{ "language": "en", "length": 188, "provenance": "stackexchange_0000F.jsonl.gz:870737", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560181" }
060f462c67ad2843b1e86179243a167ce5eca16e
Q: Two SurfaceViews to accept a camera stream simultaneously. I'm working on a VR project for Android, and one of the tasks is to create two SurfaceViews which will handle a simultaneous stream from the camera. After several attempts I found that only one SurfaceView can handle the camera stream, while the other idles. Is there any way I can duplicate the camera stream and show it simultaneously in two separate SurfaceViews (one per eye)? Thanks!
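One direction worth exploring, assuming the newer camera2 API is available: a single camera can feed several output surfaces by listing them all in one capture session and request. A rough Java sketch; the view names are placeholders, and both surfaces must be sized to resolutions the camera supports:

// Imports from android.hardware.camera2; assumes an opened CameraDevice `camera`
// and two SurfaceViews leftView/rightView with valid surfaces.
List<Surface> targets = Arrays.asList(
        leftView.getHolder().getSurface(),
        rightView.getHolder().getSurface());

camera.createCaptureSession(targets, new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession session) {
        try {
            CaptureRequest.Builder builder =
                    camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            // Every surface added here receives the same preview frames.
            builder.addTarget(targets.get(0));
            builder.addTarget(targets.get(1));
            session.setRepeatingRequest(builder.build(), null, null);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession session) {
        // handle configuration failure
    }
}, null);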
stackoverflow
{ "language": "en", "length": 73, "provenance": "stackexchange_0000F.jsonl.gz:870738", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560188" }
baa01ca5829db3369f0608c8ea2aff3d607181aa
Q: How to extract plain text from a PDF in Go. I want to extract text from a PDF file using Go. I tried the ledongthuc/pdf Go package, which implements the method GetPlainText() to get plain text content without formatting. But I don't get the plain text. I get this as a result:

W S D V Y R O R Q W D L U H P H Q W ......

Go code:

package main

import (
	"bytes"
	"fmt"

	"github.com/ledongthuc/pdf"
)

func main() {
	content, err := readPdf("test.pdf")
	if err != nil {
		panic(err)
	}
	fmt.Println(content)
}

func readPdf(path string) (string, error) {
	r, err := pdf.Open(path)
	if err != nil {
		return "", err
	}
	totalPage := r.NumPage()
	var textBuilder bytes.Buffer
	for pageIndex := 1; pageIndex <= totalPage; pageIndex++ {
		p := r.Page(pageIndex)
		if p.V.IsNull() {
			continue
		}
		textBuilder.WriteString(p.GetPlainText("\n"))
	}
	return textBuilder.String(), nil
}

A: You can end up with a message such as

Ex a m pl e of a pd f doc u m e nt .

instead of "Example of a pdf document." What you need to do is change textBuilder.WriteString(p.GetPlainText("\n")) to textBuilder.WriteString(p.GetPlainText("")). I hope this helps.
stackoverflow
{ "language": "en", "length": 189, "provenance": "stackexchange_0000F.jsonl.gz:870762", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560265" }
d7977531d54627a872d0ed55975bf8d97e2bdd5c
Q: C# - Download a PDF from a URL and convert it into base64 without saving it on a server. So basically what I want to do is download a PDF file inside a Web API, from a URL I get as a parameter from the frontend, and directly convert said file into a base64 string without saving the file on a file system. I have already found WebClient.DownloadFile(URL, File), but that means that I have to save the file. So does anyone know any other solution that could work for me?

A: You can use the code below to download a PDF from a URL into a base64 string:

string pdfUrl = "URL_TO_PDF";
using (WebClient client = new WebClient())
{
    var bytes = client.DownloadData(pdfUrl);
    string base64String = Convert.ToBase64String(bytes);
}

A: As said in the first answer of "Downloading pdf file using WebRequests":

var fileName = "output/" + date.ToString("yyyy-MM-dd") + ".pdf";
using (var stream = File.Create(fileName))
    resp.GetResponseStream().CopyTo(stream);

(resp is HttpWebResponse resp = (HttpWebResponse)request.GetResponse();) Then read the bytes back and convert to base64:

string file = Convert.ToBase64String(File.ReadAllBytes(fileName));
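For reference, the same in-memory download with HttpClient, which superseded WebClient in later .NET versions; the URL is a placeholder:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Download straight into memory; nothing touches the file system.
        byte[] bytes = await client.GetByteArrayAsync("https://example.com/file.pdf");
        string base64 = Convert.ToBase64String(bytes);
        Console.WriteLine(base64.Length);
    }
}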
stackoverflow
{ "language": "en", "length": 167, "provenance": "stackexchange_0000F.jsonl.gz:870770", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560303" }
dcd87fe2e1853fefcd514f9395af77a03dbf1af1
Q: Firebase configuration. I'm developing an iOS application and my Podfile looks like:

platform :ios, '8.0'
use_frameworks!

target 'MYAPP' do
    pod 'Google-Mobile-Ads-SDK'
    pod 'Google/SignIn'
    pod 'Alamofire', '~> 4.0'
    pod 'AlamofireImage', '~> 3.1'
end

target 'MYAPPTests' do
    pod 'Google-Mobile-Ads-SDK'
    pod 'Google/SignIn'
    pod 'Alamofire', '~> 4.0'
    pod 'AlamofireImage', '~> 3.1'
end

target 'MYAPPUITests' do
    pod 'Google-Mobile-Ads-SDK'
    pod 'Google/SignIn'
    pod 'Alamofire', '~> 4.0'
    pod 'AlamofireImage', '~> 3.1'
end

When I run the application, I keep finding the following warning in the console:

[Firebase/Core][I-COR000003] The default Firebase app has not yet been configured. Add [FIRApp configure] to your application initialization. Read more: ".

So in my app delegate I added FIRApp.configure(), which results in a crash:

[Firebase/Messaging][I-IID001000] Firebase is not set up correctly. Sender ID is nil or empty.
2017-06-14 18:25:37.044 MYAPP[10520:128089] *** Terminating app due to uncaught exception 'com.firebase.instanceid', reason: 'Could not configure Firebase InstanceID. Google Sender ID must not be nil or empty.'

Is it the Google-Mobile-Ads-SDK which requires the Firebase config? What am I missing here?

A: Check this out: https://firebase.google.com/docs/ios/setup You need to add to the Podfile the pods for the Firebase products you actually use.

A: If you don't want to install other Google packages, then use pod 'GoogleSignIn'; it will only install the package required for Google Sign-In.

A: Try adding this code in AppDelegate.swift:

override init() {
    FirebaseApp.configure()
}

instead of configuring it in:

func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool
stackoverflow
{ "language": "en", "length": 239, "provenance": "stackexchange_0000F.jsonl.gz:870774", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560319" }
dfb27336df4f8d19e101e750a31a51a970ea3e52
Q: Simulate a DaemonSet in Kubernetes using a Deployment. I'm trying to simulate a DaemonSet in Kubernetes using a Deployment/RC/ReplicaSet. What I want to achieve: just as a DaemonSet deploys one pod on each node, I want to deploy a pod on each node, but without kind DaemonSet. Is there any way to do it? I can't find a proper way to do that.

A: You can do that by using a Deployment/ReplicaSet in Kubernetes with hostPort. Assuming you have 4 nodes in the Kubernetes cluster, you can create a deployment or replicaset with hostPort and replicas equal to the number of nodes in the cluster. For example, if you want to run an nginx pod on every node with a cluster size of 4, then you mention the hostPort along with the container port in the deployment/replicaset definition. The Kubernetes scheduler will be unable to schedule more than 1 such pod on the same host, and in this way all nodes have at least one pod scheduled.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-hello
  labels:
    tier: frontend
    app: nginx-hello
spec:
  replicas: 4
  template:
    metadata:
      labels:
        tier: frontend
        app: nginx-hello
    spec:
      containers:
      - name: nginx-hello
        image: nginxdemos/hello
        ports:
        - containerPort: 80
          hostPort: 8088

A: You can see more use cases here: http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/ Two containers using the same hostPort cannot be scheduled on the same node, and usage of the hostPort is considered privileged. Hence it has some limitations:

1) The number of replicas should not be more than the number of nodes, which would exhaust host ports.
2) All hosts must be in a healthy state so that the scheduler can schedule pods on each of them.

Hope it helps you.
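A related approach, sketched here as an assumption against the newer apps/v1 API rather than taken from the answers: a required pod anti-affinity on the hostname topology key also prevents two replicas from sharing a node, without claiming a host port:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello
spec:
  replicas: 4   # keep equal to the node count
  selector:
    matchLabels:
      app: nginx-hello
  template:
    metadata:
      labels:
        app: nginx-hello
    spec:
      affinity:
        podAntiAffinity:
          # Forbids two pods with this label from sharing a node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx-hello
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx-hello
        image: nginxdemos/hello
        ports:
        - containerPort: 80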
stackoverflow
{ "language": "en", "length": 265, "provenance": "stackexchange_0000F.jsonl.gz:870776", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560332" }
944dffb715789c3c0be16040b82de9705ae93b85
Q: Generate a PDF from a view to send by email, attaching the PDF file without saving it to disk (Laravel 5). I am working on a send-email function. First, I want to generate a .pdf file from my view. Then I want to attach the generated .pdf file to an email without saving it to disk. I use the following in my controller:

$pdf = PDF::loadView('getpdf', $data);
Mail::to($to_email)->send(new Mysendmail($post_title, $full_name))
    ->attachData($pdf->output(), "newfilename.pdf");

And I get this error: "Call to a member function attachData() on null". If I use the following without the attachment, it works well:

$pdf = PDF::loadView('getpdf', $data);
Mail::to($to_email)->send(new Mysendmail($post_title, $full_name));

Please advise.

A: I think you need to attach it to the message, not to the mailer:

$pdf = PDF::loadView('getpdf', $data);
$message = new Mysendmail($post_title, $full_name);
$message->attachData($pdf->output(), "newfilename.pdf");
Mail::to($to_email)->send($message);

A: Just pass 'mime' => 'application/pdf' as an option at the end of the attachData() call. Simple!

$pdf = PDF::loadView('getpdf', $data);
$message = new Mysendmail($post_title, $full_name);
$message->attachData($pdf->output(), "newfilename.pdf", ['mime' => 'application/pdf']);
Mail::to($to_email)->send($message);
stackoverflow
{ "language": "en", "length": 179, "provenance": "stackexchange_0000F.jsonl.gz:870798", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560407" }
841005878f196394f66677192a7d08718e9f601c
Q: Pickle a dynamically imported class. I have a bunch of objects created from classes imported with module = imp.load_source(packageName, packagePath) that I need to pickle. It all works perfectly as long as packagePath is directly on the Python path or in the working dir. But as soon as I move it somewhere else, I get the dreaded:

ImportError: No module named test_package

I tried adding a __reduce__ method that returns the class as its first value. I tried using dill, which is supposedly able to serialize full classes, and not a simple reference to the class (and I tried combining it with __reduce__). The way it works currently is by double-pickling the objects along with the package path, in an object that takes care of importing the package:

class Container(object):
    def __init__(self, packagePath, packageName, objectsDump=None):
        self.package = imp.load_source(packageName, packagePath)
        self.packagePath = packagePath
        self.packageName = packageName
        if objectsDump is not None:
            self.objects = dill.loads(objectsDump)

    def __reduce__(self):
        return (self.__class__,
                (self.packagePath, self.packageName, dill.dumps(self.objects)))

I find this way really convoluted, and I would like to know: is there a more pythonic way to achieve this? Note: all of this happens in Python 2.7.10, dill 0.2.6. All objects to serialize are new-style objects (inherit from object).
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:870799", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560416" }
e6ea94f902e6d0c975c7961909e37651c88d3fe1
Q: Write to an AWS SQS queue using Spark. Is there any way to stream or write data to an Amazon SQS queue from Spark using a library? There is nothing listed on Spark Packages. What can I try?

A: One idea is to use Alpakka's SQS connector, which is built on Akka Streams.

A: I wrote a small library to write a DataFrame to SQS: https://github.com/fabiogouw/spark-aws-messaging
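If a plain AWS client is acceptable, here is a hedged PySpark sketch (not from the answers) that sends rows from the executors with boto3; df is assumed to be an existing DataFrame, and the queue URL is a placeholder:

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def send_partition(rows):
    # Create the client on the executor; boto3 clients are not serializable,
    # so they cannot be built on the driver and shipped over.
    sqs = boto3.client("sqs")
    for row in rows:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=str(row))

df.rdd.foreachPartition(send_partition)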
stackoverflow
{ "language": "en", "length": 67, "provenance": "stackexchange_0000F.jsonl.gz:870823", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560490" }
a9bca4c402811e82ca38ae7676aa96c49d3c5eec
Q: How to create a new database in Neo4j Community Edition. I have installed the Neo4j (Community Edition) database on Ubuntu, but I am missing the /var/lib/neo4j/conf folder. Is that folder restricted in the Community Edition of Neo4j? I am not able to find the neo4j-server.properties file, so I cannot create a new graph.db and point Neo4j to it as mentioned below: https://groups.google.com/forum/#!topic/neo4j/VQpVMCKm5Y4 Thanks all.
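A possibly relevant note, assuming a Neo4j 3.x package (where neo4j-server.properties was replaced by neo4j.conf, typically under /etc/neo4j/): the active database is chosen by a config key, for example:

# /etc/neo4j/neo4j.conf (Neo4j 3.x)
# Point the server at a different database directory name:
dbms.active_database=graph2.db
# Optionally relocate where databases live:
#dbms.directories.data=/var/lib/neo4j/data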
stackoverflow
{ "language": "en", "length": 61, "provenance": "stackexchange_0000F.jsonl.gz:870857", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560618" }
40204df82218e6916dd687bb378c132b72bdb405
Q: Ansible templates with undefined variables. I have a file with variables that I use in my playbook:

net_interfaces:
  ...
  - name: "eth0"
    ip: "192.168.1.100"
    mask: "255.255.255.0"
    gateway: "192.168.1.1"
  ...

and I want to deploy some configs with these variables, for example ifcfg-eth0:

DEVICE={{ item.name }}
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR={{ item.ip }}
NETMASK={{ item.netmask }}
GATEWAY={{ item.gateway }}

but sometimes there is no gateway variable for an item, and in this case I want to remove the line GATEWAY={{ item.gateway }} from the config file on the target machine. How can I achieve this without creating another task for certain hosts?

A: Add a condition:

{% if item.gateway is defined %}
GATEWAY={{ item.gateway }}
{% endif %}

A: Another (and better) way is to use the 'default' filter, because in this case we can check whether a variable was defined and set a default value if it wasn't. Example:

{{ my_string_value | default("awesome") }}
stackoverflow
{ "language": "en", "length": 151, "provenance": "stackexchange_0000F.jsonl.gz:870881", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560701" }
8ed24337e6172f77c964a8b77d0366e4e10b06d7
Q: Custom keyboard inputAccessoryView not visible in iOS 11. I have implemented a custom input accessory view. It was working fine up to iOS 10.3.1, but it's not visible in the iOS 11 beta. Has anyone experienced this issue?

A: PSA: if you use a UIToolbar as your custom view, it's currently broken in the iOS 11 GM. Instead of losing your hair over how to fix it, just change it to a UIView. You'll lose the blur effect, but it will work.

A: Beta 3 has just come out and some people said it solved the problem, but for me it didn't. However, I tried setting the accessory view to something stupid (100 pixels high) and spotted that the Undo/Redo/Paste bar on iPads was incorrectly sitting over the top of my accessory bar. So I added the following code to get rid of Apple's bar (it was pointless for my custom picker anyway) and the problem went away. Hope this helps somebody.

- (void)textFieldDidBeginEditing:(UITextField *)textField {
    UITextInputAssistantItem *item = [textField inputAssistantItem];
    item.leadingBarButtonGroups = @[];
    item.trailingBarButtonGroups = @[];
}

A: To avoid the inputAccessoryView issue in iOS 11 for UITextField and UITextView, just use the following code:

UIView *inputView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, 150)];
self.monthPickerView = [[UIPickerView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, 150)];
self.monthPickerView.backgroundColor = [UIColor whiteColor];
self.monthPickerView.delegate = self;
self.monthPickerView.dataSource = self;
[inputView addSubview:self.monthPickerView];
cell.monthTextField.inputView = inputView;
self.monthTextField.inputAccessoryView = [self doneButtonAccessoryView];

// doneButtonAccessoryView method
- (UIToolbar *)doneButtonAccessoryView {
    UIToolbar *kbToolbar = [[UIToolbar alloc] init];
    [kbToolbar sizeToFit];
    [kbToolbar setBarTintColor:[UIColor whiteColor]];
    UIBarButtonItem *doneButton = [[UIBarButtonItem alloc] initWithTitle:@"Done" style:UIBarButtonItemStyleDone target:self action:@selector(doneClicked)];
    UIBarButtonItem *cancelButton = [[UIBarButtonItem alloc] initWithTitle:@"Cancel" style:UIBarButtonItemStyleDone target:self action:@selector(cancelClicked)];
    NSDictionary *attrDict = [NSDictionary dictionaryWithObjectsAndKeys:[UIFont fontWithName:@"Helvetica-Bold" size:16.0], NSFontAttributeName, nil];
    [doneButton setTitleTextAttributes:attrDict forState:UIControlStateNormal];
    UIBarButtonItem *flexWidth = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFlexibleSpace target:self action:nil];
    [kbToolbar setItems:[NSArray arrayWithObjects:cancelButton, flexWidth, doneButton, nil]];
    return kbToolbar;
}

A: The question does not have much detail, but I had the same problem when using an inputAccessoryView and a custom inputView for the text field, and resolved it on iOS 11 by setting the custom inputView's autoresizingMask to .flexibleHeight:

yourCustomInputView.autoresizingMask = .flexibleHeight

Hope this resolves the issue; if not, maybe provide some more information. Here is how I add the input accessory, in case this is of more help (as an extension of UITextField):

public extension UITextField {
    public func addToolbarInputAccessoryView(barButtonItems: [UIBarButtonItem],
                                             textColour: UIColor,
                                             toolbarHeight: CGFloat = 44,
                                             backgroundColour: UIColor = .white) {
        let toolbar = UIToolbar()
        toolbar.frame = CGRect(x: 0, y: 0, width: bounds.width, height: toolbarHeight)
        toolbar.items = barButtonItems
        toolbar.isTranslucent = false
        toolbar.barTintColor = backgroundColour
        toolbar.tintColor = textColour
        inputAccessoryView = toolbar
    }
}

And then on the inputView (not the inputAccessoryView) - I was using a date picker, for example - just make sure that the date picker's autoresizing mask is set to flexible height.

A: UIToolbar is broken in iOS 11, but you can get the same thing done using a UIView as the inputAccessoryView. Sample code snippet here:

CGFloat width = [[UIScreen mainScreen] bounds].size.width;
UIView *toolBar = [[UIView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, width, 44.0f)];
toolBar.backgroundColor = [UIColor colorWithRed:0.97f green:0.97f blue:0.97f alpha:1.0f];
UILabel *titleLabel = [[UILabel alloc] initWithFrame:CGRectMake(20.0, 0.0f, width, 44.0f)];
[titleLabel setFont:[UIFont fontWithName:@"Helvetica" size:13]];
[titleLabel setBackgroundColor:[UIColor clearColor]];
[titleLabel setTextColor:[UIColor redColor]];
[titleLabel setText:@"Title"];
[titleLabel setTextAlignment:NSTextAlignmentLeft];
[toolBar addSubview:titleLabel];
UIButton *doneBtn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
[doneBtn setTitle:@"Done" forState:UIControlStateNormal];
doneBtn.tintColor = [UIColor colorWithRed:(float)179/255 green:(float)27/255 blue:(float)163/255 alpha:1];
[doneBtn.titleLabel setFont:[UIFont fontWithName:@"Helvetica" size:16]];
[doneBtn addTarget:self action:@selector(btnTxtDoneAction) forControlEvents:UIControlEventTouchUpInside];
[doneBtn setFrame:CGRectMake(width - 70, 6, 50, 32)];
[toolBar addSubview:doneBtn];
[toolBar sizeToFit];
txtMessageView.inputAccessoryView = toolBar;

Hope this helps. :)

A: I've had the same issue, and I found that removing all of the bottom, top, leading, trailing, left, and right constraints for the view that is assigned as the accessory view solved it.

A: Swift 4 solution:

let toolBarRect = CGRect(x: 0, y: 0, width: self.view.frame.width, height: 44)
let toolBar = UIView(frame: toolBarRect)
toolBar.backgroundColor = .lightGray

let nextButton = UIButton()
nextButton.setTitleColor(.black, for: .normal)
nextButton.setTitle("Next", for: .normal)
nextButton.addTarget(self, action: #selector(self.onNextButtonTouch), for: .touchUpInside)
nextButton.translatesAutoresizingMaskIntoConstraints = false
toolBar.addSubview(nextButton)

NSLayoutConstraint.activate([
    nextButton.heightAnchor.constraint(equalToConstant: Constants.keyboardToolBarHeight),
    nextButton.trailingAnchor.constraint(equalTo: toolBar.trailingAnchor, constant: -16),
    nextButton.centerYAnchor.constraint(equalTo: toolBar.centerYAnchor, constant: 0)
])

self.yourTextField.inputAccessoryView = toolBar

A: Just in case someone might still need the solution, here's what I did (iOS 12.1):

private func initSearchBox() {
    // Add a Done button on the keyboard
    txtSearch.delegate = self
    let tbrDone = UIToolbar()
    let btnDone = UIBarButtonItem(title: "Done", style: .plain, target: self, action: #selector(btnDone_tapped))
    tbrDone.items = [btnDone]
    tbrDone.sizeToFit()
    self.txtSearch.inputAccessoryView = tbrDone
}

@objc func btnDone_tapped() {
    view.endEditing(true)
}
stackoverflow
{ "language": "en", "length": 701, "provenance": "stackexchange_0000F.jsonl.gz:870892", "question_score": "19", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560734" }
2abfc44a6a00ffdd93a9d83ede6ffebe090d6e48
Q: Initialization error in com.cucumber.listener.ExtentCucumberFormatter. I am running scripts using Cucumber in a BDD framework, and I am using the Extent Reports plugin to create the execution report. I've created the test runner class as below:

package com.ctl.it.qa;

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(features = {"src/test/resources/Feature/ABC.feature"},
        plugin = {"com.cucumber.listener.ExtentCucumberFormatter:BDDControlCenterTools/target/Reports/cucumber-report.html"})
public class RunCukes {
}

I have included the below dependency for the Extent report in the pom.xml file:

<dependency>
    <groupId>com.relevantcodes</groupId>
    <artifactId>extentreports</artifactId>
    <version>2.41.2</version>
</dependency>

I am running the script with JUnit and have the Cucumber dependency for JUnit too. But when I execute the above runner class, it shows an initialization error:

cucumber.runtime.CucumberException: Couldn't load plugin class: com.cucumber.listener.ExtentCucumberFormatter

Can anyone please help with this error?

A: You need to also add the Maven dependency for this formatter. Refer to the https://github.com/email2vimalraj/CucumberExtentReporter documents.

<dependency>
    <groupId>com.vimalselvam</groupId>
    <artifactId>cucumber-extentsreport</artifactId>
    <version>2.0.5</version>
</dependency>

But I think this only works with ExtentReports version 3 and above.

A: I was having the com.cucumber.listener.ExtentCucumberFormatter initialization error, but after a few tweaks I can generate the report now. I added these two to my POM file. The version can be tricky: I used 3.1.1 for cucumber-extentsreport but it didn't work for me; after trying a few, 3.0.2 worked.

<dependency>
    <groupId>com.vimalselvam</groupId>
    <artifactId>cucumber-extentsreport</artifactId>
    <version>3.0.2</version>
</dependency>
<dependency>
    <groupId>com.aventstack</groupId>
    <artifactId>extentreports</artifactId>
    <version>3.1.1</version>
</dependency>

My runner class looked like this:

package cucumber;

import java.io.File;
import org.junit.AfterClass;
import org.junit.runner.RunWith;
import com.cucumber.listener.Reporter;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = {"src/test/resources/features"},
        glue = {"stepDefinitions"},
        plugin = {"com.cucumber.listener.ExtentCucumberFormatter:target/cucumber-reports/report.html"},
        monochrome = true
)
public class CucumberRunner {
    @AfterClass
    public static void writeExtentReport() {
        Reporter.loadXMLConfig(new File("config/report.xml"));
    }
}

I hope this helps.

A: Try using a different version of cucumber-extentsreport. For me, the latest version (3.1.1) did not work, but 3.0.2 did.

A: To resolve this, remove "com.cucumber.listener.ExtentCucumberFormatter:target/report.html" from the runner class and then run the runner class; it will run successfully. Then put it back into the runner class and execute; it will work.
stackoverflow
{ "language": "en", "length": 324, "provenance": "stackexchange_0000F.jsonl.gz:870919", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560804" }
c239cc424a57f60fc07f8803e1747f57ad90bc2a
Q: Is it OK to have your own property in a React component?

class VideoList extends Component {
    constructor(props) {
        super(props);
        this.videoList = props.videos.map((video) => {
            return <VideoListItem video={video} />
        })
    }

    render() {
        return (
            <ul className="collection">
                {this.videoList}
            </ul>
        )
    }
}

I'm just wondering if it's allowed to have my own property in a React component.

A: You can have such a property, but you need to keep in mind that when you store some value in such a property, React will not re-render the component - so if you are using that value in render, you might not see the updated value. With setState that's not the case: if you have something in state and then update the state, React will re-render the component. There was a guideline on what to put in state (from Dan Abramov); a short summary:

* If you can calculate something from props, no need to put that data in state.
* If you aren't using something in the render method, no need to put that in state.
* In other cases, you can store that data in state.

A: Well, it is OK to have your own property in your React component; no one will blame you. But don't forget to ship it with propTypes - it will save you a lot of time (catch bugs with type checking). Reference: https://facebook.github.io/react/docs/typechecking-with-proptypes.html

A: I think you're referring to having the videoList stored on the component instance? You could store the list of videos on state, but it seems unnecessary to do this, and I would simplify VideoList to be a stateless functional component that renders the list of videos passed in as a prop:

const VideoList = ({ videos }) => (
    <ul className="collection">
        {videos.map(video => <VideoListItem video={video} />)}
    </ul>
);

The official docs don't actually explain the syntax above, but it is basically syntactic sugar for a React component with no state that just accepts props. The ({ videos }) syntax is ES6 destructuring in action: the VideoList component receives props, and this syntax extracts props.videos as a variable in the component. Additionally, as you're rendering a list of items, you should provide some kind of unique key for each VideoListItem as you render it, e.g. <VideoListItem key={video.id} video={video} />
stackoverflow
{ "language": "en", "length": 372, "provenance": "stackexchange_0000F.jsonl.gz:870936", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560847" }
bafc6dc217c4d1b4fd3a4503e12074f4895fe965
Q: Purpose of stubAllExternalIntents() in Espresso intents testing. Looking at the following method in the Google sample for intents:

@Before
public void stubAllExternalIntents() {
    // By default Espresso Intents does not stub any Intents. Stubbing needs to be setup before
    // every test run. In this case all external Intents will be blocked.
    intending(not(isInternal())).respondWith(new ActivityResult(Activity.RESULT_OK, null));
}

I see that all external intents will be blocked, but I was wondering what purpose this method serves?

A: It does not block those intents, but sets them up to be recorded and not passed to the Android intent framework. Later you can check which intents were recorded using the intended() method. It can be used for internal intents as well.

A: You want to perform hermetic testing, meaning that you are not interested in system intents, which may cause test flakiness depending on your assertions. That's why you are prohibiting intents that are not from your app (not(isInternal())).
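A small sketch of the verification side mentioned in the first answer, using standard Espresso-Intents matchers; the action and URL under test are made up for the example:

// Static imports assumed:
//   androidx.test.espresso.intent.Intents.intended
//   androidx.test.espresso.intent.matcher.IntentMatchers.hasAction, hasData
//   org.hamcrest.Matchers.allOf
@Test
public void firesViewIntent() {
    // ...exercise the UI that should launch the external intent...
    intended(allOf(
            hasAction(Intent.ACTION_VIEW),
            hasData(Uri.parse("https://example.com"))));
}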
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:870986", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44560981" }
f9ecc25058775fc66185576d520e97a103ebf5ec
Q: SYSDATE in an Oracle virtual column. I'm trying to create a virtual column in Oracle that uses a CASE statement, but if I call the SYSDATE function it gives me this error:

ORA-54002: only pure functions can be specified in a virtual column expression

This is the query:

alter table t_requirements ADD REQUIREMENT_STATE varchar2(30) generated always as (
    CASE
        WHEN t_requirements.activation_date - SYSDATE - 5 <= 0 AND t_requirements.activation_date - SYSDATE > 0 THEN 'Exist'
        WHEN t_requirements.activation_date - SYSDATE <= 0 THEN 'Active'
    END) virtual;

A: You are using SYSDATE to build your virtual column. This is not allowed, because SYSDATE is not deterministic, i.e. it doesn't always return the same value. Imagine you built an index on this column: a second later the index would already be invalid! It seems you should rather write a view containing this ad hoc computed column.

A: SYSDATE is not deterministic; its value can vary each time we run it. Virtual columns must be deterministic; otherwise there would be the bizarre situation in which querying a record changes its value. Quite rightly, Oracle doesn't allow that. This is a scenario where we still have to use a query (perhaps as a view over the table) to display the derived value:

select r.*,
       cast(CASE
                WHEN r.activation_date - SYSDATE - 5 <= 0 AND r.activation_date - SYSDATE > 0 THEN 'Exist'
                WHEN r.activation_date - SYSDATE <= 0 THEN 'Active'
                ELSE 'Inactive'
            END as varchar2(30)) as REQUIREMENT_STATE
from requirements r;

Incidentally, does your CASE statement need an ELSE, to display something other than a blank when activation_date is greater than SYSDATE + 5?

A: You can use SYSDATE to support a virtual column; the virtual column is computed at query time. If you create a function ISACTIVE_FROM_TO such as this:

CREATE OR REPLACE FUNCTION ISACTIVE_FROM_TO (p_active_from IN DATE, p_active_to IN DATE)
    RETURN number DETERMINISTIC
IS
BEGIN
    RETURN (CASE
                WHEN SYSDATE BETWEEN NVL(p_active_from, SYSDATE) AND NVL(p_active_to, SYSDATE) THEN 1
                ELSE 0
            END);
EXCEPTION
    WHEN OTHERS THEN
        RAISE VALUE_ERROR;
END;
/

and define your virtual column on the table that has the ACTIVE_FROM and ACTIVE_TO date columns:

alter table xyz add ISACTIVE NUMBER(1) GENERATED ALWAYS AS (ISACTIVE_FROM_TO(ACTIVE_FROM, ACTIVE_TO));

the table will dynamically calculate the value based on SYSDATE. The function is deterministic: given the same from/to dates and SYSDATE, it will always return the same value. The function needs to have the DETERMINISTIC keyword. As SYSDATE approaches the ACTIVE_TO date/time, successive queries of the table will cause the ISACTIVE value to go from 1 to 0, since the function calculates the value based on the moving SYSDATE. This operates very much the same as a separate view would.
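A minimal sketch of the view that both answers point to, using the table and CASE logic from the question; the view name and the 'Inactive' fallback are assumptions:

CREATE OR REPLACE VIEW v_requirements AS
SELECT t.*,
       CASE
           WHEN t.activation_date - SYSDATE - 5 <= 0 AND t.activation_date - SYSDATE > 0 THEN 'Exist'
           WHEN t.activation_date - SYSDATE <= 0 THEN 'Active'
           ELSE 'Inactive'
       END AS requirement_state
FROM t_requirements t;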
stackoverflow
{ "language": "en", "length": 440, "provenance": "stackexchange_0000F.jsonl.gz:870993", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561003" }
9863ed477264d4b892709e28f5f6a9b9a6c39413
Q: Loop in the return statement of a component in React JS. I am trying to show a year select box in a component file. I tried a simple for loop, but it gives a syntax error. Here is my code:

render() {
    return (
        <div>
            <select value={this.state.exp_year} name="exp_year" className="form-control" onChange={this.handleInputChange}>
                <option value="">===Select Expiry Year===</option>
                {
                    for (var i = 2017; i <= 2050; i++) {
                        <option value={i}>{i}</option>
                    }
                }
            </select>
        </div>
    );
}

Please let me know what I am doing wrong.

A: Build the options into an array first, then use the array in the JSX code:

class YearComponent {
    render() {
        const options = [];
        for (var i = 2017; i <= 2050; i++) {
            options.push(<option value={i} key={i}>{i}</option>);
        }

        return (
            <div>
                <select value={this.state.exp_year} name="exp_year" className="form-control" onChange={this.handleInputChange}>
                    <option value="">===Select Expiry Year===</option>
                    {options}
                </select>
            </div>
        );
    }
}

A: Basically, you can't perform straight loops in JSX because it's kind of like asking for a function parameter. What you can do, however, is place an IIFE there which returns an array of options, like:

render() {
    return (
        <div>
            <select value={this.state.exp_year} name="exp_year" className="form-control" onChange={this.handleInputChange}>
                <option value="">===Select Expiry Year===</option>
                {(() => {
                    const options = [];
                    for (let i = 2017; i <= 2050; i++) {
                        options.push(<option value={i}>{i}</option>);
                    }
                    return options;
                })()}
            </select>
        </div>
    );
}

But that honestly looks messy, so you should probably move the loop outside, just before returning:

render() {
    const options = [];
    for (let i = 2017; i <= 2050; i++) {
        options.push(<option value={i}>{i}</option>);
    }

    return (
        <div>
            <select value={this.state.exp_year} name="exp_year" className="form-control" onChange={this.handleInputChange}>
                <option value="">===Select Expiry Year===</option>
                {options}
            </select>
        </div>
    );
}
stackoverflow
{ "language": "en", "length": 262, "provenance": "stackexchange_0000F.jsonl.gz:871007", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561037" }
b0a42e1f7d31b1e792aa65b3d38163783322a8fd
Q: how to specify the pom.xml path in jenkins pipeline script My maven project looks like below (screenshots of the working directory and of the Jenkins error were attached). This is how my script looks: node { stage ('Build') { git url: 'https://github.com/rakshitha2/test_proj.git' def mvnHome = tool 'M3' bat "${mvnHome}\\bin\\mvn -B install" } } I have to go into the parent directory and execute the maven command in the Jenkins pipeline script. I tried specifying the POM path in the mvn command, but it gives me an error saying "path is unexpected at this time", although the same works on my local machine. I'm new to Jenkins and Groovy; kindly help me with this. A: pipeline { agent any tools { maven 'mavenHome' jdk 'JavaHome' } stages { stage('Build') { steps { echo 'maven clean' //ABC indicates the folder name where the pom.xml file resides bat ' mvn -f ABC/pom.xml clean install' } post { success { echo 'Now Archiving' } } } } } A: It is basically the normal Maven mechanism: sh 'mvn -f otherdirectory/pom.xml clean install' A: You can try to enter the directory and execute the command this way: sh ''' cd otherdirectory mvn clean install '''
stackoverflow
{ "language": "en", "length": 192, "provenance": "stackexchange_0000F.jsonl.gz:871020", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561087" }
285910fe2a6751111eeac5951339165d6f0be191
Q: Testing PUT in symfony 'php://input' empty In a symfony project I have a PUT method, and I try to read the data like this: $data = file_get_contents('php://input'); When I use Postman it works; the request is in form-data: key: data value: {"es_title":"edit","es_text":"text edit"} But when I try with WebTestCase in the project it does not work: $data in the PUT method is empty. I tried like this in the test: $data = array( "data" => '{"es_title":"edit","es_text":"edit"}'); $this->client->request('PUT', $url, $data, array(), array('HTTP_apikey' => $apikey)); I also tried $data = array( 'data' => json_encode(array( 'es_title' => 'edit', 'es_text' => 'edit' )) ); $this->client->request('PUT', $url, $data, array(), array('HTTP_apikey' => $apikey)); What can I do to make the test pass? A: To get data from a PUT I use this inside the controller: $putData = json_decode($request->getContent(), true); To make the request from a test case I use this: $params = [ 'es_title' => 'edit', 'es_text' => 'edit', ]; $this->client->request( 'PUT', $url, [], [], [ 'CONTENT_TYPE' => 'application/json', 'HTTP_X-Requested-With' => 'XMLHttpRequest' ], json_encode($params) );
stackoverflow
{ "language": "en", "length": 161, "provenance": "stackexchange_0000F.jsonl.gz:871033", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561107" }
2dba378d0d3057970374e994d1cc5ffce6d114a4
Q: Nodejs os.networkInterfaces returning empty object I am trying to get the MAC address of the client's machine. I did some searching and found some npm packages, but many seem to use the included os module, calling os.networkInterfaces() and going from there. But when I try to get all the interfaces using the os module, it returns an empty object. I read in the documentation that it returns only interfaces that have been assigned a network address. So I am a little confused as to what I am doing wrong, since in my opinion there is no reason for it not to work. Could anyone help me get to the solution of finding the client's MAC address? All help would be appreciated.
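No answer was recorded for this one, so a note and a minimal sketch (not from the original thread): os.networkInterfaces() runs on the machine executing Node, i.e. the server, so it can never report a remote client's MAC address; and interfaces without an assigned address are omitted, which is one way to end up with an empty object. Enumerating what the server itself can see looks like this on a modern Node version:

    const os = require('os');

    // Runs on the machine executing Node (the server), never on the client.
    const interfaces = os.networkInterfaces();

    // Each key is an interface name; each value is a list of address records.
    for (const [name, addresses] of Object.entries(interfaces)) {
      for (const addr of addresses) {
        console.log(`${name}: ${addr.address} (mac: ${addr.mac}, family: ${addr.family})`);
      }
    }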
stackoverflow
{ "language": "en", "length": 123, "provenance": "stackexchange_0000F.jsonl.gz:871062", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561185" }
d8d138925d3cfe52d400184aeaa91d92487b41f8
Q: Scope of rule "String literals should not be duplicated" (squid:S1192) String literals should not be duplicated (squid:S1192): duplicated string literals make the process of refactoring error-prone, since you must be sure to update all occurrences. The rule currently allows a few exceptions: duplicate literals in annotations, and strings with fewer than 5 characters. There is another situation where I don't think the rule should apply: logging statements. I argue that logging statements should not affect business logic and are therefore out of scope for this rule. Am I wrong?
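To make the argument concrete, here is a small illustrative Java sketch (the class and messages are invented for this example): the rule flags a literal repeated across statements and is satisfied by extracting a constant, while the question argues that occurrences inside logging calls should not count.

    import java.util.logging.Logger;

    public class OrderService {
        private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

        // The conventional fix for S1192: extract the repeated literal into a constant.
        private static final String ORDER_NOT_FOUND = "Order not found";

        void cancel(Object order) {
            if (order == null) {
                // The question's point: this literal never feeds business logic,
                // so duplicating it here arguably should not trigger the rule.
                LOG.warning("Order not found");
                throw new IllegalArgumentException(ORDER_NOT_FOUND);
            }
        }
    }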
stackoverflow
{ "language": "en", "length": 91, "provenance": "stackexchange_0000F.jsonl.gz:871077", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561234" }
a8f84fba5a170296d7cfbf1ec27ed62ef37d56bb
Q: How often does GCP update the billing charges shown on GCP console? Is it real time? How often does Google Cloud Platform update (refresh) the billing charges shown on the GCP console? Is there a fixed delay, or is it real-time? A: The billed charges are updated daily and the invoice is generated monthly. One can also check usage on demand programmatically; follow the instructions at https://cloudplatform.googleblog.com/2017/11/monitor-and-manage-your-costs-with.html
stackoverflow
{ "language": "en", "length": 67, "provenance": "stackexchange_0000F.jsonl.gz:871084", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561248" }
64959860731f34ad4ceae5d009cd11d9a4c66394
Q: Dataframe with "Sparse" Vector groupBy aggregation, not dense Vector, in Spark using Scala I have a Spark dataframe that looks as follows; it is filled with sparse Vectors, not dense Vectors: +---+--------+-----+-------------+ |id |catagery|index|vec | +---+--------+-----+-------------+ |a |ii |3.0 |(5,[3],[1.0])| |a |ll |0.0 |(5,[0],[1.0])| |b |dd |4.0 |(5,[4],[1.0])| |b |kk |2.0 |(5,[2],[1.0])| |b |gg |5.0 |(5,[],[]) | |e |hh |1.0 |(5,[1],[1.0])| +---+--------+-----+-------------+ As we all know, if I try this: val rr=result.groupBy("id").agg(sum("index")) scala> rr.show(false) +---+----------+ |id |sum(index)| +---+----------+ |e |1.0 | |b |11.0 | |a |3.0 | +---+----------+ But how can I use "groupBy" and "agg" to sum sparse Vectors? I want the final DataFrame to look like this: +---+-------------------------+ |id | vecResult | +---+-------------------------+ |a |(5,[0,3],[1.0,1.0]) | |b |(5,[2,4,5],[1.0,1.0,1.0])| |e |(5,[1],[1.0]) | +---+-------------------------+ I think VectorAssembler() may solve this, but I don't know how to write the code. Should I use a udf?
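No answer was posted in the thread. One possible approach, an untested sketch rather than anything from the original post, is to drop to the RDD API and sum the vectors per key via Breeze; toBreeze, fromBreeze, and addSparse are helper names invented here, and it assumes a SparkSession named spark and a column holding org.apache.spark.ml.linalg sparse vectors:

    import org.apache.spark.ml.linalg.{Vector, SparseVector, Vectors}
    import breeze.linalg.{SparseVector => BSV}

    // Hypothetical helpers for converting between Spark and Breeze sparse vectors.
    def toBreeze(v: SparseVector): BSV[Double] =
      new BSV[Double](v.indices, v.values, v.size)

    def fromBreeze(bv: BSV[Double]): Vector =
      Vectors.sparse(bv.length, bv.activeIterator.toSeq)

    // Element-wise sum of two vectors (assumes every row really is sparse).
    def addSparse(a: Vector, b: Vector): Vector =
      fromBreeze(toBreeze(a.asInstanceOf[SparseVector]) + toBreeze(b.asInstanceOf[SparseVector]))

    import spark.implicits._

    val vecResult = result.rdd
      .map(r => (r.getAs[String]("id"), r.getAs[Vector]("vec")))
      .reduceByKey(addSparse)
      .toDF("id", "vecResult")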
stackoverflow
{ "language": "en", "length": 139, "provenance": "stackexchange_0000F.jsonl.gz:871085", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561255" }
0ce4cb35fb52a946969d554221672673e93ab089
Q: Is there any need to use onSaveInstanceState and onRestoreInstanceState when using Android Architecture Components LiveData & ViewModel? Android Architecture Components provide the LiveData and ViewModel classes, which are more lifecycle-friendly and designed for a leaner Activity/Fragment. These classes handle storing data across configuration changes, but I'm confused about their use compared to the Activity framework APIs. Are onSaveInstanceState(Bundle) and onRestoreInstanceState(Bundle) still necessary or useful for preserving activity state? A: onSaveInstanceState & onRestoreInstanceState are still useful. A ViewModel holds data only while the process is alive, but onSaveInstanceState & onRestoreInstanceState can hold data even if the process is killed. A ViewModel is easy to use and useful for preserving large data when the screen orientation changes; onSaveInstanceState & onRestoreInstanceState can preserve data while the process is in the background (in the background, the app process can be killed by the system at any time). A: Assume a scenario: the user is in activity A, then navigates to activity B, but because of low memory the Android OS destroys activity A; therefore the ViewModel connected to it is also destroyed. (You can emulate this by checking Don't keep activities in Developer options.) Now the user navigates back to activity A, and the Android OS tries to create new Activity and ViewModel objects; therefore you have lost the data in the ViewModel. But the values in savedInstanceState are still there. A: As well as the other answers, which talk about the ViewModel's persistence beyond simply configuration changes, I think there are a couple more use cases: Performance reasons. Sometimes you don't want to store all of the latest values of view attributes in the ViewModel for performance reasons; you may have a greater need to save them when the view is being re-created. For example, the user's scroll position on a view within your activity/fragment: you probably don't want to save the scroll position every time the user scrolls, but you might want to save it in onSaveInstanceState so you can restore it when the view is recreated (onRestoreInstanceState). Initialization to perform after restore. Some views may require initialization specifically because of the restore, since the complex design of those views prevents saving everything. For example, I had a WebView, and if the user was in the middle of loading a page during a configuration change, I wanted the WebView to try to load the new page (rather than the old one). After restoring the state, the observers of LiveData will get the latest values, but this doesn't help much with something like this (I only want the view to load a page from the ViewModel at the point of restore, not at other times), so we just do that initialization via the restore state. Final word: with all this, I would advocate keeping your onSaveInstanceState and onRestoreInstanceState as simple as possible, ideally just calling a method on the ViewModel and nothing more. Then we can extract all of the logic from the view into the ViewModel, and the view is just left with boilerplate code.
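A minimal sketch of the division of labor described above; the class names are invented, and it assumes the 2017-era android.arch and support-library artifacts. The ViewModel carries data across rotation, while the Bundle carries the cheap bits across process death:

    import android.arch.lifecycle.ViewModelProviders;
    import android.os.Bundle;
    import android.support.annotation.Nullable;
    import android.support.v7.app.AppCompatActivity;

    public class ScrollActivity extends AppCompatActivity {
        private static final String KEY_SCROLL_Y = "scroll_y";
        private int scrollY; // cheap view state we deliberately keep out of the ViewModel

        @Override
        protected void onCreate(@Nullable Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Survives rotation; after process death a fresh instance is created,
            // so it must be able to reload its data (e.g. from a repository).
            // MyViewModel is a hypothetical ViewModel subclass.
            MyViewModel model = ViewModelProviders.of(this).get(MyViewModel.class);
            if (savedInstanceState != null) {
                // Survives process death: restore the lightweight bits here.
                scrollY = savedInstanceState.getInt(KEY_SCROLL_Y, 0);
            }
        }

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            outState.putInt(KEY_SCROLL_Y, scrollY); // written whenever the activity may be killed
        }
    }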
stackoverflow
{ "language": "en", "length": 487, "provenance": "stackexchange_0000F.jsonl.gz:871121", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561364" }
ed11330d9ac7f49e11beae133c94acc240afb632
Q: Escape password containing @ when using postgres pg_dump command I'm using postgres. I created a db, user, and password; this is the password: password = name&text@sob I'm using the following command to dump the database, and it works on my other databases: pg_dump -Fc --no-acl --no-owner --dbname=postgresql://db_user:password@127.0.0.1:5432/db_name But it doesn't work when using the DB with the password containing & and @: pg_dump -Fc --no-acl --no-owner --dbname=postgresql://db_user:name&text@sob@127.0.0.1:5432/db_name doesn't work because of the & and @ in the password. So I escaped the & with \ but it didn't work for @. Any suggestions? Thanks A: You can encode the signs directly in the connection string; no need to bother with the extra URL parameter. --dbname=postgresql://db_user:name%26text%40sob@127.0.0.1:5432/db_name A: Adding ?password=name%26text%40sob as a URI parameter should do it: pg_dump -Fc --no-acl --no-owner --dbname=postgresql://db_user:password@127.0.0.1:5432/db_name?password=name%26text%40sob as per https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING "Components of the hierarchical part of the URI can also be given as parameters." Update: as Roko noticed, the URL has to be encoded.
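Rather than hand-writing the percent-encoding, it can be generated; a small sketch, assuming Python 3 is available on the machine (the password is the one from the question):

    # Percent-encode the password once, then splice it into the connection URI.
    ENCODED=$(python3 -c "import urllib.parse; print(urllib.parse.quote('name&text@sob', safe=''))")
    # -> name%26text%40sob

    pg_dump -Fc --no-acl --no-owner \
      --dbname="postgresql://db_user:${ENCODED}@127.0.0.1:5432/db_name"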
stackoverflow
{ "language": "en", "length": 156, "provenance": "stackexchange_0000F.jsonl.gz:871123", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561378" }
12afb22aa24dac5fc0ea45a02dfdfca784f1a39c
Q: Is it possible to get the RequestMethod-verb in a custom PreAuthorize method? I'm using a custom access checker with @PreAuthorize: @RestController @RequestMapping("/users") public class Users { @PreAuthorize("@customAccessChecker.hasAccessToMethod('USERS', 'GET')") @RequestMapping(method = RequestMethod.GET) User getUsers() { ... } @PreAuthorize("@customAccessChecker.hasAccessToMethod('USERS', 'POST')") @RequestMapping(method = RequestMethod.POST) User addUser() { ... } } I would like to get rid of the strings 'GET' and 'POST' in the @PreAuthorize annotation. Is it possible to get the RequestMethod used in the @RequestMapping as a variable input to hasAccessToMethod somehow? A: I cannot remember an SpEL expression to get data from an annotation, but you can use SpEL to get the value from a parameter of your method with the # character. Inject the HttpServletRequest, it has a getMethod method that contains what you want. @PreAuthorize("@customAccessChecker.hasAccessToMethod('USERS', #request.method)") @RequestMapping(method = RequestMethod.POST) User addUser(HttpServletRequest request) { // ... }
stackoverflow
{ "language": "en", "length": 139, "provenance": "stackexchange_0000F.jsonl.gz:871170", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561544" }
e444cb8b4996bc4db002e856b3260144dcde35a8
Q: how to drop duplicated columns data based on column name in pandas Assume I have a table like below A B C B 0 0 1 2 3 1 4 5 6 7 I'd like to drop the duplicated column B. I tried to use drop_duplicates, but it seems that it only works based on duplicated data, not headers. I hope someone knows how to do this. A: You can use groupby. We use the axis=1 and level=0 parameters to specify that we are grouping by columns. Then use the first method to grab the first column within each group defined by unique column names. df.groupby(level=0, axis=1).first() A B C 0 0 1 2 1 4 5 6 We could have also used last df.groupby(level=0, axis=1).last() A B C 0 0 3 2 1 4 7 6 Or mean df.groupby(level=0, axis=1).mean() A B C 0 0 2 2 1 4 6 6 A: Use Index.duplicated with loc or iloc and boolean indexing: print (~df.columns.duplicated()) [ True True True False] df = df.loc[:, ~df.columns.duplicated()] print (df) A B C 0 0 1 2 1 4 5 6 df = df.iloc[:, ~df.columns.duplicated()] print (df) A B C 0 0 1 2 1 4 5 6 Timings: np.random.seed(123) cols = ['A','B','C','B'] #[1000 rows x 30 columns] df = pd.DataFrame(np.random.randint(10, size=(1000,30)),columns = np.random.choice(cols, 30)) print (df) In [115]: %timeit (df.groupby(level=0, axis=1).first()) 1000 loops, best of 3: 1.48 ms per loop In [116]: %timeit (df.groupby(level=0, axis=1).mean()) 1000 loops, best of 3: 1.58 ms per loop In [117]: %timeit (df.iloc[:, ~df.columns.duplicated()]) 1000 loops, best of 3: 338 µs per loop In [118]: %timeit (df.loc[:, ~df.columns.duplicated()]) 1000 loops, best of 3: 346 µs per loop
stackoverflow
{ "language": "en", "length": 275, "provenance": "stackexchange_0000F.jsonl.gz:871204", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561649" }
1c41a91b8028d614e1a9acccc18f7159d4079109
Q: Page content appearing underneath sidebar I am creating an HTML layout with a sidebar, but my header and content are appearing underneath my sidebar instead of next to it. .container { position:relative; padding:10px; top:0px; right: 0; left: 0; height: 1200px;} #sidebar { position:relative; top:0; bottom:0; left:0; width:200px; height: 1000px; background: gray; } #header { border:1px solid #000; height:300px; padding:10px; margin-left: 200px; } #content { border:1px solid #000; height:700px; margin-left: 200px; padding:10px; } <div class="container"> <div id="sidebar"> <a href="#"> Link1 </a> </div> <div id="header"> <h2 class="title">Title</h2> <h3>Header content</h3> </div> <div id="content"> <center> <p>Hello</p> </center> </div> </div> Thanks A: Add "display: inline-block;" to the elements that you want to display next to each other. A: Just add #sidebar { float:left; } .container { position:relative; padding:10px; top:0px; right: 0; left: 0; height: 1200px;} #sidebar { position:relative; top:0; bottom:0; left:0; width:200px; height: 1000px; background: gray; float:left; } #header { border:1px solid #000; height:300px; padding:10px; margin-left: 200px; } #content { border:1px solid #000; height:700px; margin-left: 200px; padding:10px; } <div class="container"> <div id="sidebar"> <a href="#"> Link1 </a> </div> <div id="header"> <h2 class="title">Title</h2> <h3>Header content</h3> </div> <div id="content"> <center> <p>Hello</p> </center> </div> </div> A: I have introduced .inner-container and defined two flexboxes. The CSS is simplified. * { box-sizing: border-box; } .container { display: flex; } .inner-container { display: flex; flex-flow: column; width: 80%; } #sidebar { width: 20%; background: gray; } #header { border: 1px solid black; height: 300px; padding: 10px; } #content { border: 1px solid black; padding: 10px; } <div class="container"> <div id="sidebar"> <a href="#"> Link1 </a> </div> <div class="inner-container"> <div id="header"> <h2 class="title">Title</h2> <h3>Header content</h3> </div> <div id="content"> <center> <p>Hello</p> </center> </div> </div> </div> A: You should just try editing the position to fixed; this will solve your problem. A: You should try to change your position: relative; to position: absolute;. You can then adjust the position of your divs using a margin. .container { position:relative; padding:10px; top:0px; right: 0; left: 0; height: 1200px; } #sidebar { position:absolute; top:0; bottom:0; left:0; width:200px; height: 1000px; background: gray; } #header { border:1px solid #000; height:300px; padding:10px; margin-left: 200px; margin-top:-10px; } #content { border:1px solid #000; height:700px; margin-left: 200px; padding:10px; } <div class="container"> <div id="sidebar"> <a href="#"> Link1 </a> </div> <div id="header"> <h2 class="title">Title</h2> <h3>Header content</h3> </div> <div id="content"> <center> <p>Hello</p> </center> </div> </div> Working fiddle: https://jsfiddle.net/khs8j3gu/2/ Good luck!
stackoverflow
{ "language": "en", "length": 382, "provenance": "stackexchange_0000F.jsonl.gz:871241", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561746" }
de5c1b8f3874d225048e1229ae5c7e6b1aa405c8
Q: SilverStripe sort by has_one relation field "title" I have two objects Schedule and LocationPage. Object Schedule has a $has_one relation to LocationPage: class Schedule extends DataObject { private static $db = array( 'Date' => 'Date', ); private static $has_one = array( 'Location' => 'LocationPage', ); } and class LocationPage extends Page { private static $db = [ 'Heading' => 'HTMLVarchar(250)', 'SubHeading' => 'Varchar(250)' ]; } When I try to sort by the relation field Title it gives me an error. Here is the sort code: Schedule::get()->sort(['Location.Title' => 'ASC']); Here is the sort error that I get when calling the above code: [User Error] Uncaught SS_DatabaseException: Couldn't run query: SELECT DISTINCT "Schedule"."ClassName", "Schedule"."LastEdited", "Schedule"."Created", "Schedule"."Date", "Schedule"."LocationID", "Schedule"."ID", CASE WHEN "Schedule"."ClassName" IS NOT NULL THEN "Schedule"."ClassName" ELSE 'Schedule' END AS "RecordClassName", "LocationPage"."Title" AS "_SortColumn0" FROM "Schedule" LEFT JOIN "LocationPage" ON "LocationPage"."ID" = "Schedule"."LocationID" INNER JOIN "Page" ON "LocationPage"."ID" = "Page"."ID" INNER JOIN "SiteTree" ON "LocationPage"."ID" = "SiteTree"."ID" ORDER BY "_SortColumn0" ASC Unknown column 'LocationPage.Title' in 'field list' What is causing this problem? A: A workaround for this issue is to make the has_one relationship point to SiteTree instead of LocationPage.
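In code, the workaround looks roughly like this (a sketch: Title is stored on the SiteTree table, so pointing the relation there lets Location.Title resolve; LocationPage-specific fields would then be fetched through the related record as needed):

    class Schedule extends DataObject
    {
        private static $db = array(
            'Date' => 'Date',
        );

        // Workaround: relate to SiteTree, the table that actually holds Title,
        // so that sorting on Location.Title produces a valid join.
        private static $has_one = array(
            'Location' => 'SiteTree',
        );
    }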
stackoverflow
{ "language": "en", "length": 189, "provenance": "stackexchange_0000F.jsonl.gz:871289", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561882" }
72608f93a27f7921fd3909783c4ac0fb750e6a47
Q: Android, Custom permission granted before creation I'm trying to use a custom permission created by another application (Bazaar) in my app (it's a permission to use a market: com.farsitel.bazaar.permission.PAY_THROUGH_BAZAAR). Normally it works fine, but if Bazaar is installed after my application, my app won't be granted the custom permissions (which are created by Bazaar) and throws an exception. I want to know if anybody else has faced a similar problem and what solutions there are for it. A: This is the desired behaviour. The textbook Android Security Internals: An In-Depth Guide to Android's Security Architecture by Nikolay Elenkov (2014) says: The system can only grant a permission that it knows about, which means that applications that define custom permissions need to be installed before the applications that make use of those permissions are installed. If an application requests a permission unknown to the system, it is ignored and not granted. BTW, permissions are assigned to each application at install time by the PackageManager; it maintains information about installed packages, such as package name, version and permissions, and this info is stored in /data/system/packages.xml. If you want to query all permissions on the device, you can try pm list permissions.
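For reference, the pm command mentioned above can be run over adb from a Unix-like host to inspect what the device currently knows; for instance:

    # List every permission the device knows about, grouped:
    adb shell pm list permissions -g

    # Check whether Bazaar's custom permission has been registered yet:
    adb shell pm list permissions | grep PAY_THROUGH_BAZAAR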
stackoverflow
{ "language": "en", "length": 198, "provenance": "stackexchange_0000F.jsonl.gz:871303", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561911" }
442e1356d6d63a1ebcb6f1a787febe1598a1d668
Q: Error when using std::min "no matching function for call to ‘min()’" Following https://stackoverflow.com/a/9424211/3368959 I am trying to compare three numbers: #include <iostream> int main() { std::cout << std::min({2,5,1}) << std::endl; return 0; } But the compiler gives me the error: error: no matching function for call to ‘min(<brace-enclosed initializer list>)’ However, the code compiles just fine when using std::min(std::min(2,5),1) But the first way should work with the C++11 standard. What could I be doing wrong? A: As @BoBTFish suggested: in order to use template <class T> T min (initializer_list<T> il) one needs to include <algorithm>, as is mentioned here.
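For reference, the original program with the missing header added compiles and prints 1:

    #include <algorithm>  // provides the initializer_list overload of std::min
    #include <iostream>

    int main() {
        std::cout << std::min({2, 5, 1}) << std::endl;  // prints 1
        return 0;
    }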
stackoverflow
{ "language": "en", "length": 100, "provenance": "stackexchange_0000F.jsonl.gz:871306", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561919" }
758e48c8ed8ed704d0f7f7ca4bf8b1f0fc9ae5fa
Q: grid.arrange align all tables to the top in Rmarkdown How do I align all tables in grid.arrange to top? The tables seem to mid-align by default. df1 = data.frame(x=c(1, 2, 3), y=c('a', 'b', 'c')) df2 = data.frame(x=rep(1, 10), y=rep('a', 10)) grid.arrange(tableGrob(df1), tableGrob(df2), nrow=1, ncol=2) A: g1 <- tableGrob(df1) g2 <- tableGrob(df2) grid.draw(combine(g1, g2, along=1))
stackoverflow
{ "language": "en", "length": 55, "provenance": "stackexchange_0000F.jsonl.gz:871313", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561942" }
3621e1728f8db2dedad636293201db96138594e1
Q: Using splat operator with when Case statement: case x when 1 "one" when 2 "two" when 3 "three" else "many" end is evaluated using the === operator. This operator is invoked on the value of the when expression with the value of the case expression as the argument. The case statement above is equivalent to the following: if 1 === x "one" elsif 2 === x "two" elsif 3 === x "three" else "many" end In this case: A = 1 B = [2, 3, 4] case reason when A puts "busy" when *B puts "offline" end the when *B part cannot be rewritten as *B === 2. Is this about the splat operator? The splat operator is about assignment, not comparison. How does the case statement handle when *B? A: But the splat operator is about assignment, not comparison. In this case, * converts an array into an argument list: when *[2, 3, 4] is equivalent to: when 2, 3, 4 Just like in a method call: foo(*[2, 3, 4]) is equivalent to: foo(2, 3, 4)
stackoverflow
{ "language": "en", "length": 177, "provenance": "stackexchange_0000F.jsonl.gz:871331", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44561995" }
debc22ece1cd10f06db65fd1193e28bdee951367
Q: Mongoose Aggregate with Lookup I have two simple collections, like below: assignments: [ { "_id": "593eff62630a1c35781fa325", "topic_id": 301, "user_id": "59385ef6d2d80c00d9bdef97" }, { "_id": "593eff62630a1c35781fa326", "topic_id": 301, "user_id": "59385ef6d2d80c00d9bdef97" } ] and the users collection: [ { "_id": "59385ef6d2d80c00d9bdef97", "name": "XX" }, { "_id": "59385b547e8918009444a3ac", "name": "YY" } ] My intent is an aggregate query grouped by user_id on the assignments collection, and I would also like to include user.name in the grouped result. I tried the following: Assignment.aggregate([{ $match: { "topic_id": "301" } }, { $group: { _id: "$user_id", count: { $sum: 1 } } }, { $lookup: { "from": "kullanicilar", "localField": "user_id", "foreignField": "_id", "as": "user" } }, { $project: { "user": "$user", "count": "$count", "_id": "$_id" } }, But the problem is that the user array is always blank: [ { _id: '59385ef6d2d80c00d9bdef97', count: 1000, user: [] } ] I want something like: [ { _id: '59385ef6d2d80c00d9bdef97', count: 1000, user: [_id:"59385ef6d2d80c00d9bdef97",name:"XX"] } ]
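No answer was recorded in the thread. One likely culprit, offered here as an educated guess rather than from the original post: after the $group stage the user id lives in _id, not user_id, so the $lookup's localField never matches anything; from must also name the real MongoDB collection, and the stored types must line up (a string user_id will not match an ObjectId _id without conversion). A hedged sketch:

    Assignment.aggregate([
      { $match: { topic_id: 301 } }, // match the stored type (a number here, not the string "301")
      { $group: { _id: "$user_id", count: { $sum: 1 } } },
      {
        $lookup: {
          from: "users",       // the actual collection name in MongoDB
          localField: "_id",   // after $group, the user id is in _id, not user_id
          foreignField: "_id",
          as: "user"
        }
      }
    ]);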
stackoverflow
{ "language": "en", "length": 155, "provenance": "stackexchange_0000F.jsonl.gz:871352", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562049" }
22849a014fba3e8b1c2f5151cc0f0aa63048539e
Q: Is it possible to embed a React-Native view as a subview in a native android project? I'm integrating react-native into my pure Java Android project. I just want some items of my native ListView to show a react-native view; I don't want to change the whole ListView to react-native. Is there any way to do this? I've tried to add a ReactRootView as a subview of some other view, but it's blank. A: It turns out the problem was in my Activity. RN requires support lib 23.0.1, but ConstraintLayout is not compatible with support lib 23.0.1. Just use any other Layout for the root view of the Activity.
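For reference, mounting a ReactRootView as an ordinary child view looks roughly like this (a sketch: the container id, component name, and instance manager are assumptions, not from the original post):

    // Inside an Activity whose layout has, e.g., a FrameLayout with id rn_container:
    ReactRootView reactRootView = new ReactRootView(this);
    reactRootView.startReactApplication(
            reactInstanceManager,    // a ReactInstanceManager built elsewhere in the app
            "MyEmbeddedComponent",   // the name registered via AppRegistry on the JS side
            null);                   // initial props bundle

    FrameLayout container = (FrameLayout) findViewById(R.id.rn_container);
    container.addView(reactRootView);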
stackoverflow
{ "language": "en", "length": 104, "provenance": "stackexchange_0000F.jsonl.gz:871367", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562087" }
ebcabb912fa2961a6581fc89902018a7d3591f27
Q: Flask : changing location of 'migrations' folder I have my Flask project hierarchy a project ├── controllers └── models └── schema.py When I run python schema.py db init, a migrations folder is added under project instead of under models. I have a __init__.py under all 3 folders (not showing here for brevity). I want the migrations folder generated under models. How do I do it? A: You need to pass the directory option to the init command. This can be the path to the migrations directory; it is set to migrations by default. python schema.py db init --directory models/migrations Reference: the API Reference section at https://flask-migrate.readthedocs.io/en/latest/ A: Or you can update the scripts location in alembic.ini # path to migration scripts script_location = alembic A: Well, like Oluwafemi said, you can pass the -d (--directory) flag to your manager script in the CLI command python schema.py db init --directory models/migrations The problem with this solution is that you will have to specify the migration path each time you enter a db command; otherwise you will get the following error: alembic.util.exc.CommandError: Path doesn't exist: 'migrations'. Please use the 'init' command to create a new scripts folder. A better way to configure your migration path is by passing the directory argument to the Migrate object in your manage.py (or, in your case, 'schema.py'). For example: import os from flask_script import Manager from flask_migrate import Migrate, MigrateCommand from .application import app, db MIGRATION_DIR = os.path.join('models', 'migrations') migrate = Migrate(app, db, directory=MIGRATION_DIR) manager = Manager(app) manager.add_command('db', MigrateCommand) if __name__ == '__main__': manager.run() Besides, I suggest you move all the command-line interface functionality from 'schema.py' to a different python script in your application root path (like 'manager.py'), and create the migrations folder next to your models, not inside them. The models folder should not contain anything other than the data model objects! Hope it was helpful.
stackoverflow
{ "language": "en", "length": 316, "provenance": "stackexchange_0000F.jsonl.gz:871399", "question_score": "17", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562172" }
5533947054e9732ce13261f60d9348e1156833bd
Q: Image does not load as grayscale (skimage) I'm trying to load an image as grayscale as follows: from skimage import data from skimage.viewer import ImageViewer img = data.imread('my_image.png', as_gray=True) However, if I check for its shape using img.shape it turns out to be a three-dimensional, and not two-dimensional, array. What am I doing wrong? A: From scikit-image documentation, the signature of data.imread is as follows: skimage.data.imread(fname, as_grey=False, plugin=None, flatten=None, **plugin_args) Your code does not work properly because the keyword argument as_grey is misspelled (you put as_gray). Sample run In [4]: from skimage import data In [5]: img_3d = data.imread('my_image.png', as_grey=False) In [6]: img_3d.dtype Out[6]: dtype('uint8') In [7]: img_3d.shape Out[7]: (256L, 640L, 3L) In [8]: img_2d = data.imread('my_image.png', as_grey=True) In [9]: img_2d.dtype Out[9]: dtype('float64') In [10]: img_2d.shape Out[10]: (256L, 640L)
stackoverflow
{ "language": "en", "length": 130, "provenance": "stackexchange_0000F.jsonl.gz:871410", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562204" }
4e7fedfb0e234434b9ef0494710b9d119d5394cf
Q: How to remove LARAVEL from branding and use own branding in Laravel 5 In Laravel 5, how could I remove "Laravel" from the forgot-password email template and use my own branding? Please kindly advise how to achieve this. A: If you want to change only the branding, then you can set it in the .env file: APP_NAME=your_app_name But if you want to change more, for example the header or footer, then you need to do this: run these commands php artisan vendor:publish --tag=laravel-notifications php artisan vendor:publish --tag=laravel-mail and then in /resources/views/vendor/mail/html/ you can edit all the components and customize anything you want. For example, I have edited the sentence All rights reserved. to All test reserved in /resources/views/vendor/mail/html/message.blade.php, and this is what I got (screenshot in the original post). A: That actually comes from the configuration setting in app.php, called name. There's also an environment variable called APP_NAME. https://github.com/laravel/laravel/blob/master/config/app.php#L15 - Config value https://github.com/laravel/laravel/blob/master/.env.example#L1 - Environment variable A: If you want to change the name to your own, you just have to update your variable in the .env file: APP_NAME=*the-name-of-application*
stackoverflow
{ "language": "en", "length": 175, "provenance": "stackexchange_0000F.jsonl.gz:871432", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562268" }
7754285e40ff30e83fc30ce7a96961eabe7973a6
Q: Cannot connect to the mongodb at localhost:27017 Error:Network is unreachable I am trying to connect to a MongoDB database (on Ubuntu 16.04). I have already created a database, but I get: Cannot connect to the mongodb at localhost:27017 Error: Network is unreachable. A: To start a MongoDB service that is in the stopped state, follow these steps: press Windows key + R to open the Run window, then type "services.msc" to open the Services window, then select the MongoDB server, right-click on it, and finally click on the start menu entry. A: First, you must run mongod on the command line. A: First, you must run mongod (on Windows) or sudo mongod (on Linux or Mac) on the command line. A: 1. You should first copy your mongodb files into this route: "Users/YOURDIRECTORYNAME/" 2. After that you should create a folder named "mongodb-data" 3. With the terminal, navigate to this route: "Users/YOURDIRECTORYNAME/mongodb/bin" 4. Run this command: mongod.exe --dbpath=/Users/YOURDIRECTORYNAME/mongodb-data Now you can connect with your connection A: Try replacing localhost: with 127.0.0.1: Instead of localhost:27017, write 127.0.0.1:27017 A: If you are using macOS, just make sure to run mongod in your terminal. A: I was also facing this issue; install mongodb and try again after reconnecting to localhost:27017.
stackoverflow
{ "language": "en", "length": 200, "provenance": "stackexchange_0000F.jsonl.gz:871451", "question_score": "15", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562325" }
bc781b3bea2197c9424fe82478f404740eeef086
Q: setState of an object - React Native I have, for example, this: this.state = { lang1: { name: 'Anglais', code: "en" }, lang2: { name: 'Français', code: "fr" } }; How can I setState lang1.name? It doesn't work when I do: this.setState({ lang1.name: "myExample" }); I'm new to React Native and I don't understand this clearly. A: this.setState({ lang1: { name: "myExample", code: this.state.lang1.code } }); A: You can do it like this: this.setState((previousState) => { const lang1 = previousState.lang1 return { lang1: {...lang1, name: 'myExample'} } }) A: You can just do this; ... simply spreads the existing keys and then adds/overwrites new ones: this.setState({ lang1:{ ...this.state.lang1, name: "myExample" } }) A: Value is a parameter... let data = this.state.objectState data[prop] = value this.setState({objectState: data})
stackoverflow
{ "language": "en", "length": 125, "provenance": "stackexchange_0000F.jsonl.gz:871458", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562338" }
ac40fa5e458ea063c5f5c8b8d5118b91e8c24e18
Q: How can I force template parameter type to be signed? I'll use the following example to illustrate my question: template<typename T> T diff(T a, T b) { return a-b; } I expect this template function works only when the type T is signed. The only solution I can figure out is to use delete keyword for all the unsigned types: template<> unsigned char diff(unsigned char,unsigned char) == delete; template<> unsigned char diff(unsigned char,unsigned char) == delete; Are there other solutions? A: As another option, you might probably add static_assert with std::is_signed type trait: template<typename T> auto diff(T x, T y) { static_assert(std::is_signed<T>::value, "Does not work for unsigned"); return x - y; } So that: auto x = diff(4, 2); // works auto x = diff(4U, 2U); // does not work A: You can use std::is_signed together with std::enable_if: template<typename T> T diff(T a, T b); template<typename T> std::enable_if_t<std::is_signed<T>::value, T> diff(T a, T b) { return a - b; } Here std::is_signed<T>::value is true if and only if T is signed (BTW, it is also true for floating-point types, if you don't need it, consider combining with std::is_integral). std::enable_if_t<Test, Type> is the same as std::enable_if<Test, Type>::type. std::enable_if<Test, Type> is defined as an empty struct in case Test is false and as a struct with an only typedef type equal to template parameter Type otherwise. So, for signed types, std::enable_if_t<std::is_signed<T>::value, T> is equal to T, while for unsigned it's not defined and compiler uses SFINAE rule, so, if you need to specify an implementation for a particular non-signed type, you can easily do that: template<> unsigned diff(unsigned, unsigned) { return 0u; } Some relevant links: enable_if, is_signed. A: How about static assert with std::is_signed ? template<typename T> T diff(T a, T b) { static_assert(std::is_signed<T>::value, "signed values only"); return a-b; } See it live there : http://ideone.com/l8nWYQ A: So there are a few issues I have with your function. First, your function requires all 3 types to match -- the left, right and result types. So signed char a; int b; diff(a-b); won't work for no good reason. template<class L, class R> auto diff( L l, R r ) -> typename std::enable_if< std::is_signed<L>::value && std::is_signed<R>::value, typename std::decay<decltype( l-r )>::type >::type { return l-r; } the second thing I'd want to do is make a diff object; you cannot easily pass your diff function around, and higher order functions are awesome. struct diff_t { template<class L, class R> auto operator()(L l, R r)const -> decltype( diff(l,r) ) { return diff(l,r); } }; Now we can pass diff_t{} to an algorithm, as it holds the "overload set" of diff in one (trivial) C++ object. Now this is serious overkill. A simple static_assert can also work. The static_assert will generate better error messages, but won't support other code using SFINAE to see if diff can be called. It will simply generate a hard error. A: What does your program expect as a result? As it stands, you return an unsigned as a result of a difference. IMHO, this is a bug waiting to happen. 
#include <type_traits> template<typename T> auto diff(T&& a, T&& b) { static_assert (std::is_unsigned<T>::value); return typename std::make_signed<T>::type(a - b); } A more modern way to write this: inline auto diff(const auto a, const auto b) { static_assert ( std::is_unsigned<decltype(a)>::value && std::is_unsigned<decltype(b)>::value ); return typename std::make_signed<decltype(a - b)>::type(a - b); } [edit] I feel the need to add this comment: using unsigned integral types in math equations is always tricky. The example above would be a very useful add-on to any math package; in real-life situations, you often have to resort to casting to make the result of differences signed, or the math doesn't work. A: I would use static_assert with a nice error message. enable_if will only get your IDE in trouble and fail to compile with a message like identifier diff not found, which doesn't help much. So why not like this: #include <type_traits> template <typename T> T diff(T a, T b) { static_assert(std::is_signed< T >::value, "T should be signed"); return a - b; } That way, when you invoke diff with something other than a signed type, you will get the compiler to write this kind of message: error: T should be signed with the location and arguments of the call to diff, and that's exactly what you're looking for. A: I am surprised nobody answered this, which is pretty robust and reliable and IMO canonical since C++11. template<typename T, class = typename std::enable_if<std::is_signed<T>::value>::type> T diff(T a, T b) { return a-b; } in C++14 you can use: template<typename T, class = std::enable_if_t<std::is_signed<T>::value>> T diff(T a, T b) { return a-b; } in C++17 you can use: template<typename T, class = std::enable_if_t<std::is_signed_v<T>>> T diff(T a, T b) { return a-b; } This alternative could help with multiple "overloads" if necessary (I will leave it at that): template<typename T, std::enable_if_t<std::is_signed_v<T>,int> =0> T diff(T a, T b) { return a-b; } ...and in C++20 you can use Concepts (not shown). A: There are many good answers here, however this question comes up as a result of the search "c++ template argument signed". We are in 2022 now, so I think it makes sense to add an answer, using concepts in C++20: #include <concepts> #include <iostream> template <std::signed_integral T> T diff(T a, T b) { return a - b; } int main() { std::cout << diff(1U, 2U) << std::endl; } The compiler fails to compile the program above with the message (among others): error: use of function ‘T diff(T, T) [with T = unsigned int]’ with unsatisfied constraints The core language concept std::signed_integral, used here, is described on this page. A: #include <type_traits> template<typename T> std::enable_if_t<(0>-T(1)),T> diff(T a, T b) { return a-b; } I use (0 > -T(1)): for a signed T, -T(1) is -1, which is less than 0, while an unsigned value can never be less than 0.
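To make the SFINAE point above concrete, here is a small self-contained sketch (C++17; the has_diff name is purely illustrative): with the enable_if version, other code can detect whether diff(T, T) is callable, whereas a static_assert version would produce a hard error instead of a substitution failure.

    #include <type_traits>
    #include <utility>

    template<typename T>
    std::enable_if_t<std::is_signed<T>::value, T> diff(T a, T b) { return a - b; }

    // Detection idiom: has_diff<T> is true when diff(T, T) is well-formed.
    template<typename T, typename = void>
    struct has_diff : std::false_type {};

    template<typename T>
    struct has_diff<T, std::void_t<decltype(diff(std::declval<T>(), std::declval<T>()))>>
        : std::true_type {};

    static_assert(has_diff<int>::value, "int is signed, so diff is callable");
    static_assert(!has_diff<unsigned>::value, "unsigned is rejected via SFINAE");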
stackoverflow
{ "language": "en", "length": 972, "provenance": "stackexchange_0000F.jsonl.gz:871473", "question_score": "34", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562383" }
db51546f59f8b8b2f569db8467ef906506968c0b
Stackoverflow Stackexchange Q: Can I throw error in an async function? I am using async/await in my Node.js project. And in some places I need to return an error from an async function. If I'd use Promises, I could've accomplished it this way: function promiseFunc() { return new Promise((res, rej) => { return rej(new Error('some error')) }) } But I'm using an async function, so the res and rej callbacks aren't there. So, the question: can I throw errors in async functions? Or is it considered a good/bad practice? An example of what I want to do: async function asyncFunc() { throw new Error('some another error') } I can also rewrite it this way: async function anotherAsyncFunc() { return Promise.reject(new Error('we need more errors!')) } but the first one looks clearer to me, and I'm not sure which one I should use. A: I would do: async function asyncFunc() { try { await somePromise(); } catch (error) { throw error; } } But I think it comes down to personal preference, I guess? You could always return Promise.reject(new Error(error));.
Q: Can I throw error in an async function? I am using async/await in my Node.js project. And in some places I need to return an error from an async function. If I'd use Promises, I could've accomplished it this way: function promiseFunc() { return new Promise((res, rej) => { return rej(new Error('some error')) }) } But I'm using an async function, so the res and rej callbacks aren't there. So, the question: can I throw errors in async functions? Or is it considered a good/bad practice? An example of what I want to do: async function asyncFunc() { throw new Error('some another error') } I can also rewrite it this way: async function anotherAsyncFunc() { return Promise.reject(new Error('we need more errors!')) } but the first one looks clearer to me, and I'm not sure which one I should use. A: I would do: async function asyncFunc() { try { await somePromise(); } catch (error) { throw error; } } But I think it comes down to personal preference, I guess? You could always return Promise.reject(new Error(error));.
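As a quick, self-contained check of the point above (plain Node.js; asyncFunc is the question's own example), a value thrown inside an async function surfaces to the caller exactly like a rejected promise, so both handling styles below print the same message.

    async function asyncFunc() {
      throw new Error('some another error');
    }

    // Style 1: promise chain
    asyncFunc().catch(err => console.log(err.message)); // "some another error"

    // Style 2: try/catch around await
    (async () => {
      try {
        await asyncFunc();
      } catch (err) {
        console.log(err.message); // "some another error"
      }
    })();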
stackoverflow
{ "language": "en", "length": 175, "provenance": "stackexchange_0000F.jsonl.gz:871490", "question_score": "35", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562426" }
fdaf372733b6f2e214343cf1f23d1b34bf78485b
Stackoverflow Stackexchange Q: Email templates using bamboo without phoenix I am working on an app in Elixir. It sends email to clients. I am using the Bamboo library for sending emails. So far, emails are working fine. But now, I am trying to send emails using templates. Everywhere I look in the Bamboo documentation, bamboo.phoenix is used. I am not using Phoenix for handling requests; I am using a library called Plug. Is there a way to send templates in email without Phoenix? A: Adding an answer to this post with the help of @JustMichael's comment. Directory structure - /priv /static /test.html.eex Function used: new_email |> to("vivek29vivek@gmail.com") |> from(@from_email) |> subject("test") |> html_body(EEx.eval_file("priv/static/test.html.eex", [foo: "bar"])) # this will render the template; variables can also be passed as bindings test.html.eex <h3>Foo: <%= foo %></h3> But we cannot add CSS just by adding <link rel="stylesheet" href="styles.css">; I guess a static server is needed for that. Do comment if there is another way to add CSS apart from inline CSS.
Q: Email templates using bamboo without phoenix I am working on an app in Elixir. It sends email to clients. I am using the Bamboo library for sending emails. So far, emails are working fine. But now, I am trying to send emails using templates. Everywhere I look in the Bamboo documentation, bamboo.phoenix is used. I am not using Phoenix for handling requests; I am using a library called Plug. Is there a way to send templates in email without Phoenix? A: Adding an answer to this post with the help of @JustMichael's comment. Directory structure - /priv /static /test.html.eex Function used: new_email |> to("vivek29vivek@gmail.com") |> from(@from_email) |> subject("test") |> html_body(EEx.eval_file("priv/static/test.html.eex", [foo: "bar"])) # this will render the template; variables can also be passed as bindings test.html.eex <h3>Foo: <%= foo %></h3> But we cannot add CSS just by adding <link rel="stylesheet" href="styles.css">; I guess a static server is needed for that. Do comment if there is another way to add CSS apart from inline CSS.
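A small sketch of wrapping the EEx call in a reusable helper; the module name and the :my_app OTP app name are placeholders, and Application.app_dir/2 is used so the template path also resolves inside a release:

    defmodule MyApp.Mailer.Templates do
      # Renders priv/static/<name>.html.eex with the given bindings.
      def render(name, bindings \\ []) do
        :my_app
        |> Application.app_dir("priv/static/#{name}.html.eex")
        |> EEx.eval_file(bindings)
      end
    end

    # Usage with Bamboo:
    #   new_email() |> html_body(MyApp.Mailer.Templates.render("test", foo: "bar"))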
stackoverflow
{ "language": "en", "length": 160, "provenance": "stackexchange_0000F.jsonl.gz:871511", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562513" }
d5fd9f2cbc23420030910ed3319a9f4763e57164
Stackoverflow Stackexchange Q: Merge HTML content from two paragraphs with lxml I would like to merge all the content from two paragraphs into one single paragraph, with a space between them. How could I do this using lxml? Example: <p>He is <b>bold</b>!</p> <p>Is he <u>here</u>?</p> Would be merged into: <p>He is <b>bold</b>! Is he <u>here</u>?</p> A: If your structure is simple, this might do the trick: from lxml import etree root = etree.fromstring("<root></root>") first = etree.fromstring("<p>He is <b>bold</b>!</p>") second = etree.fromstring("<p>Is he <u>here</u>?</p>") try: first.getchildren()[-1].tail += ' ' + second.text except IndexError: first.text += ' ' + second.text root.append(first) for child in second.getchildren(): first.append(child) etree.tostring(root)
Q: Merge HTML content from two paragraphs with lxml I would like to merge all the content from two paragraphs into one single paragraph, with a space between them. How could I do this using lxml? Example: <p>He is <b>bold</b>!</p> <p>Is he <u>here</u>?</p> Would be merged into: <p>He is <b>bold</b>! Is he <u>here</u>?</p> A: If your structure is simple, this might do the trick: from lxml import etree root = etree.fromstring("<root></root>") first = etree.fromstring("<p>He is <b>bold</b>!</p>") second = etree.fromstring("<p>Is he <u>here</u>?</p>") try: first.getchildren()[-1].tail += ' ' + second.text except IndexError: first.text += ' ' + second.text root.append(first) for child in second.getchildren(): first.append(child) etree.tostring(root)
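A slightly more defensive variant of the same idea, offered as a sketch (not from the original answer): it guards against None text/tails and moves the second paragraph's children, tails included, directly into the first paragraph.

    from lxml import etree

    def merge_paragraphs(first, second, sep=' '):
        """Append all content of paragraph `second` onto `first`, separated by `sep`."""
        extra = second.text or ''
        if len(first):                  # first has child elements: extend the last tail
            last = first[-1]
            last.tail = (last.tail or '') + sep + extra
        else:                           # text-only paragraph
            first.text = (first.text or '') + sep + extra
        for child in second:            # <b>, <u>, ... move over; their tails travel with them
            first.append(child)
        return first

    first = etree.fromstring("<p>He is <b>bold</b>!</p>")
    second = etree.fromstring("<p>Is he <u>here</u>?</p>")
    print(etree.tostring(merge_paragraphs(first, second)).decode())
    # <p>He is <b>bold</b>! Is he <u>here</u>?</p>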
stackoverflow
{ "language": "en", "length": 105, "provenance": "stackexchange_0000F.jsonl.gz:871525", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562565" }
0d603e73503c925476ae61a54bad265eb41901a8
Stackoverflow Stackexchange Q: One line iteration through dictionaries in dictionaries G2 = {'a': {'c': 1, 'b': 1}, 'b': {'a': 1, 'c': 1}} b = G2.values() result = [] for i in b: for key, value in i.items(): result.append(key) # result: ['c', 'b', 'a', 'c'] Can I get the same result but using a list generator? I tried it like this: list2 = [key for key, value in i.items() for i in b] # but I get: ['a', 'a', 'c', 'c'] A: just chain the dictionary values (each value is an inner dict, and iterating a dict yields its keys) using itertools.chain.from_iterable, and convert to a list to print the result: import itertools G2 = {'a': {'c': 1, 'b': 1}, 'b': {'a': 1, 'c': 1}} # ['c', 'b', 'a', 'c'] result = list(itertools.chain.from_iterable(G2.values())) print(result) result: ['c', 'b', 'c', 'a'] Note that the order is not guaranteed, as you're iterating on dictionary keys. A variant without itertools, with a flattening double loop inside the comprehension (which is probably closer to your attempt): result = [x for values in G2.values() for x in values]
Q: One line iteration through dictionaries in dictionaries G2 = {'a': {'c': 1, 'b': 1}, 'b': {'a': 1, 'c': 1}} b = G2.values() result = [] for i in b: for key, value in i.items(): result.append(key) # result: ['c', 'b', 'a', 'c'] Can I get the same result but using a list generator? I tried it like this: list2 = [key for key, value in i.items() for i in b] # but I get: ['a', 'a', 'c', 'c'] A: just chain the dictionary values (each value is an inner dict, and iterating a dict yields its keys) using itertools.chain.from_iterable, and convert to a list to print the result: import itertools G2 = {'a': {'c': 1, 'b': 1}, 'b': {'a': 1, 'c': 1}} # ['c', 'b', 'a', 'c'] result = list(itertools.chain.from_iterable(G2.values())) print(result) result: ['c', 'b', 'c', 'a'] Note that the order is not guaranteed, as you're iterating on dictionary keys. A variant without itertools, with a flattening double loop inside the comprehension (which is probably closer to your attempt): result = [x for values in G2.values() for x in values]
stackoverflow
{ "language": "en", "length": 159, "provenance": "stackexchange_0000F.jsonl.gz:871529", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562578" }
b65d2a90e20d4aebc1ff15e45bef798c6b25c57a
Stackoverflow Stackexchange Q: regular expression add double quotes around values and keys in javascript I need a valid JSON format to request ES. I have a string like { time: { from:now-60d, mode:quick, to:now } } but when I try to use JSON.parse I get an error, because my string should be like { time: { "from":"now-60d", "mode":"quick", "to":"now" } } So my question: is there any solution to add double quotes around the keys and values of my string? Thanks A: This function will add quotes and remove any extra commas at the end of objects function normalizeJson(str){return str.replace(/"?([\w_\- ]+)"?\s*?:\s*?"?(.*?)"?\s*?([,}\]])/gsi, (str, index, item, end) => '"'+index.replace(/"/gsi, '').trim()+'":"'+item.replace(/"/gsi, '').trim()+'"'+end).replace(/,\s*?([}\]])/gsi, '$1');} Edit: This other function supports json arrays. It also converts single quotes to double quotes, and it keeps quotes off of numbers and booleans. function normalizeJson(str){ return str.replace(/[\s\n\r\t]/gs, '').replace(/,([}\]])/gs, '$1') .replace(/([,{\[]|)(?:("|'|)([\w_\- ]+)\2:|)("|'|)(.*?)\4([,}\]])/gs, (str, start, q1, index, q2, item, end) => { item = item.replace(/"/gsi, '').trim(); if(index){index = '"'+index.replace(/"/gsi, '').trim()+'"';} if(!item.match(/^[0-9]+(\.[0-9]+|)$/) && !['true','false'].includes(item)){item = '"'+item+'"';} if(index){return start+index+':'+item+end;} return start+item+end; }); } I also tested the regex with the safe-regex npm module
Q: regular expression add double quotes around values and keys in javascript I need a valid JSON format to request ES. I have a string like { time: { from:now-60d, mode:quick, to:now } } but when I try to use JSON.parse I get an error, because my string should be like { time: { "from":"now-60d", "mode":"quick", "to":"now" } } So my question: is there any solution to add double quotes around the keys and values of my string? Thanks A: This function will add quotes and remove any extra commas at the end of objects function normalizeJson(str){return str.replace(/"?([\w_\- ]+)"?\s*?:\s*?"?(.*?)"?\s*?([,}\]])/gsi, (str, index, item, end) => '"'+index.replace(/"/gsi, '').trim()+'":"'+item.replace(/"/gsi, '').trim()+'"'+end).replace(/,\s*?([}\]])/gsi, '$1');} Edit: This other function supports json arrays. It also converts single quotes to double quotes, and it keeps quotes off of numbers and booleans. function normalizeJson(str){ return str.replace(/[\s\n\r\t]/gs, '').replace(/,([}\]])/gs, '$1') .replace(/([,{\[]|)(?:("|'|)([\w_\- ]+)\2:|)("|'|)(.*?)\4([,}\]])/gs, (str, start, q1, index, q2, item, end) => { item = item.replace(/"/gsi, '').trim(); if(index){index = '"'+index.replace(/"/gsi, '').trim()+'"';} if(!item.match(/^[0-9]+(\.[0-9]+|)$/) && !['true','false'].includes(item)){item = '"'+item+'"';} if(index){return start+index+':'+item+end;} return start+item+end; }); } I also tested the regex with the safe-regex npm module A: Unquoted JSON is not really valid JSON. It is just JavaScript. If you trust the source of this string: var obj = eval("({ time: { from:now-60d, mode:quick, to:now } })"); This is NOT recommended for strings from untrusted sources, as it could be a security risk. Given that you are getting the data from Kibana, which may be trusted, it should be ok to eval the string. The other option is to use a regex, as elaborated by other answers. Alternatively, you may want to fix your Kibana export to give a proper/valid JSON string. A: Maybe you can use: str.replace(/([a-zA-Z0-9-]+):([a-zA-Z0-9-]+)/g, "\"$1\":\"$2\""); Here is a regex demo. Note: in the character group [a-zA-Z0-9-] I use letters, digits and a -; you may need others, so you can adjust it. A: Good day Idriss. If you want to place quotes around all the valid key names and values, then maybe look at this expression. YCF_L's answer is perfect for what you wanted, but here it is nonetheless. {(?=[a-z])|[a-z](?=:)|:(?=[a-z])|[a-z](?=,)|,(?=[a-z])|[a-z](?=}) str.replace(/{(?=[a-z])|[a-z](?=:)|:(?=[a-z])|[a-z](?=,)|,(?=[a-z])|[a-z](?=})/igm, '$&"');
stackoverflow
{ "language": "en", "length": 354, "provenance": "stackexchange_0000F.jsonl.gz:871546", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562635" }
d8caaf4ceb255ba7139357d059e355848b38f783
Stackoverflow Stackexchange Q: how to implement a java SAM interface in Kotlin? In Java it is possible to write code like this: model.getObservableProduct().observe(this, new Observer<ProductEntity>() { @Override public void onChanged(@Nullable ProductEntity productEntity) { model.setProduct(productEntity); } }); However, trying to override the local function in Kotlin results in an error. Question: is it possible to override a local function in Kotlin? A: Try using an object expression instead. // the parentheses must be removed if Observer is an interface ---V model.getObservableProduct().observe(this, object:Observer<ProductEntity>(){ override fun onChanged(productEntity:ProductEntity?) { model.setProduct(productEntity); } }); If Observer is a Java SAM interface (Kotlin SAM interfaces aren't currently supported), then you can use a lambda expression instead, as follows: model.getObservableProduct().observe(this, Observer<ProductEntity>{ model.setProduct(it); }); Or use a trailing lambda, for example: // specify the lambda parameter type ---v model.getObservableProduct().observe<ProductEntity>(this) { model.setProduct(it); };
Q: how to implement a java SAM interface in Kotlin? In Java it is possible to write code like this: model.getObservableProduct().observe(this, new Observer<ProductEntity>() { @Override public void onChanged(@Nullable ProductEntity productEntity) { model.setProduct(productEntity); } }); However, trying to override the local function in Kotlin results in an error. Question: is it possible to override a local function in Kotlin? A: Try using an object expression instead. // the parentheses must be removed if Observer is an interface ---V model.getObservableProduct().observe(this, object:Observer<ProductEntity>(){ override fun onChanged(productEntity:ProductEntity?) { model.setProduct(productEntity); } }); If Observer is a Java SAM interface (Kotlin SAM interfaces aren't currently supported), then you can use a lambda expression instead, as follows: model.getObservableProduct().observe(this, Observer<ProductEntity>{ model.setProduct(it); }); Or use a trailing lambda, for example: // specify the lambda parameter type ---v model.getObservableProduct().observe<ProductEntity>(this) { model.setProduct(it); };
stackoverflow
{ "language": "en", "length": 127, "provenance": "stackexchange_0000F.jsonl.gz:871582", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562745" }
4be258fad3e86589cb45a9c33009d3f65dcbabde
Stackoverflow Stackexchange Q: React native check if tablet or screen in inches I've established a different render logic for tablets and mobile devices. I was wondering if there is a way to get the screen size in inches, or maybe even any module to automatically detect if the device is a tablet or not. The reason I am not using the Dimensions API directly to get the screen resolution is that there are many Android tablets with lower resolution than many of their mobile counterparts. Thanks. A: If you don't want to use the react-native-device-info library, you can use the code below (PixelRatio comes from react-native, and screenWidth/screenHeight are taken from Dimensions.get('window')); not sure it works perfectly, but it may help: export const isTablet = () => { let pixelDensity = PixelRatio.get(); const adjustedWidth = screenWidth * pixelDensity; const adjustedHeight = screenHeight * pixelDensity; if (pixelDensity < 2 && (adjustedWidth >= 1000 || adjustedHeight >= 1000)) { return true; } else return ( pixelDensity === 2 && (adjustedWidth >= 1920 || adjustedHeight >= 1920) ); };
Q: React native check if tablet or screen in inches I've established a different render logic for tablets and mobile devices. I was wondering if there is a way to get the screen size in inches, or maybe even any module to automatically detect if the device is a tablet or not. The reason I am not using the Dimensions API directly to get the screen resolution is that there are many Android tablets with lower resolution than many of their mobile counterparts. Thanks. A: If you don't want to use the react-native-device-info library, you can use the code below (PixelRatio comes from react-native, and screenWidth/screenHeight are taken from Dimensions.get('window')); not sure it works perfectly, but it may help: export const isTablet = () => { let pixelDensity = PixelRatio.get(); const adjustedWidth = screenWidth * pixelDensity; const adjustedHeight = screenHeight * pixelDensity; if (pixelDensity < 2 && (adjustedWidth >= 1000 || adjustedHeight >= 1000)) { return true; } else return ( pixelDensity === 2 && (adjustedWidth >= 1920 || adjustedHeight >= 1920) ); }; A: Based on @martinarroyo's answer, a way to go about it is to use the react-native-device-info package. However, the Android implementation is based on screen resolution. That can be a problem, as there are many tablet devices with a lower resolution than many mobile devices, and this can cause problems. The solution I will be using, and am suggesting, is to use react-native-device-info for Apple devices, and for Android devices to go with simple ratio logic of the type: function isTabletBasedOnRatio(ratio){ if(ratio > 1.6){ return false; }else{ return true; } } This is not a perfect solution, but there are many small tablets with phone-like ratios, and even phablets (the Android landscape is blurry), and this solution is inclusive towards those as well. A: You can use the react-native-device-info package along with the Dimensions API. Check the isTablet() method and apply different styles according to the result. A: react-native-device-detection: if(Device.isTablet) { Object.assign(styles, { ... }); } Based on PixelRatio and the screen's height and width. A: Use npm install --save react-native-device-info, then: import Device from 'react-native-device-info'; const isTablet = Device.isTablet();
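A sketch combining the two approaches above: prefer the library check where available and fall back to the pixel-density heuristic from the first answer (assumes react-native-device-info is installed; the fallback thresholds are the ones quoted above, not an official API).

    import { PixelRatio, Dimensions } from 'react-native';
    import DeviceInfo from 'react-native-device-info';

    const { width, height } = Dimensions.get('window');

    function isTabletHeuristic() {
      const density = PixelRatio.get();
      const w = width * density;
      const h = height * density;
      return (density < 2 && (w >= 1000 || h >= 1000)) ||
             (density === 2 && (w >= 1920 || h >= 1920));
    }

    // Library check first, heuristic as a fallback.
    export const isTablet =
      typeof DeviceInfo.isTablet === 'function' ? DeviceInfo.isTablet() : isTabletHeuristic();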
stackoverflow
{ "language": "en", "length": 338, "provenance": "stackexchange_0000F.jsonl.gz:871586", "question_score": "11", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562769" }
edac01e257d446a34650d5031bd4e339740c5a8c
Stackoverflow Stackexchange Q: Color in image gets dull after saving it in OpenCV I am using the opencv module to read and write the image. Here is the code; below, the first image is the one I am reading, and the second image is the result after saving it to disk with cv2.imwrite(). import cv2 img = cv2.imread('originalImage.jpg') cv2.imwrite('test.jpg',img) It is clearly visible that the colors are dull in the second image. Is there any workaround to this problem, or am I missing some sort of setting parameter? A: I have done a bit of research on the point @Mark raised about the ICC profile. I have figured out a way to handle this with the Python PIL module. Here is the code that worked for me. I have also learned to use the PNG file format rather than JPEG to do lossless conversion. from PIL import Image img = Image.open('originalImage.jpg') img.save('test.jpg',icc_profile=img.info.get('icc_profile')) I hope this will help others as well.
Q: Color in image gets dull after saving it in OpenCV I am using the opencv module to read and write the image. Here is the code; below, the first image is the one I am reading, and the second image is the result after saving it to disk with cv2.imwrite(). import cv2 img = cv2.imread('originalImage.jpg') cv2.imwrite('test.jpg',img) It is clearly visible that the colors are dull in the second image. Is there any workaround to this problem, or am I missing some sort of setting parameter? A: I have done a bit of research on the point @Mark raised about the ICC profile. I have figured out a way to handle this with the Python PIL module. Here is the code that worked for me. I have also learned to use the PNG file format rather than JPEG to do lossless conversion. from PIL import Image img = Image.open('originalImage.jpg') img.save('test.jpg',icc_profile=img.info.get('icc_profile')) I hope this will help others as well. A: The difference is that the initial image (on the left in the diagram) has an attached ICC profile whereas the second one (on the right) does not. I obtained the above image by running the ImageMagick utility called identify like this: identify -verbose first.jpg > 1.txt identify -verbose second.jpg > 2.txt Then I ran the brilliant opendiff tool (which is part of macOS) like this: opendiff [12].txt You can extract the ICC profile from the first image also with ImageMagick like this: convert first.jpg profile.icc A: Your first input image has an ICC profile associated with its metadata, which is an optional attribute, and most devices may not inject it in the first place. The ICC profile basically performs a sort of color correction, and the correction coefficients are calculated for each unique device during calibration. Modern web browsers and image-viewing utilities take this ICC profile information into account before rendering the image onto the screen; that is the reason why the two images differ. But unfortunately OpenCV doesn't read the ICC profile from the metadata of the image to perform any color correction.
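If the goal is a file that looks right even in viewers that ignore ICC profiles, one option is to convert the pixels themselves to sRGB using the embedded profile via Pillow's ImageCms module; a minimal sketch, assuming the input may or may not carry a profile:

    from io import BytesIO
    from PIL import Image, ImageCms

    img = Image.open('originalImage.jpg')
    icc = img.info.get('icc_profile')
    if icc:
        # Convert pixel data from the embedded profile to standard sRGB.
        src = ImageCms.ImageCmsProfile(BytesIO(icc))
        dst = ImageCms.createProfile('sRGB')
        img = ImageCms.profileToProfile(img, src, dst)
    img.save('test.jpg')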
stackoverflow
{ "language": "en", "length": 335, "provenance": "stackexchange_0000F.jsonl.gz:871589", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562776" }
4ad57809a74249cb19eb88ea9d60b1df7b89daff
Stackoverflow Stackexchange Q: hide empty row in gridview I want to hide rows that are empty in one particular column. I tried, but without success. Below is my code: protected void gvDb_DataBound(object sender, EventArgs e) { foreach (GridViewRow rw in gvDb.Rows) { if (string.IsNullOrEmpty(rw.Cells[1].Text)) { rw.Visible = false; } } } A: for (int i = 0; i < gvDb.RowCount - 1; i++) { var row = gvDb.Rows[i]; if (string.IsNullOrEmpty(Convert.ToString(row.Cells[1].Value))) { row.Visible = false; } } This will work; use for instead of foreach to iterate over all the rows except the last row, which is empty.
Q: hide empty row in gridview I want to hide rows that are empty in one particular column. I tried, but without success. Below is my code: protected void gvDb_DataBound(object sender, EventArgs e) { foreach (GridViewRow rw in gvDb.Rows) { if (string.IsNullOrEmpty(rw.Cells[1].Text)) { rw.Visible = false; } } } A: for (int i = 0; i < gvDb.RowCount - 1; i++) { var row = gvDb.Rows[i]; if (string.IsNullOrEmpty(Convert.ToString(row.Cells[1].Value))) { row.Visible = false; } } This will work; use for instead of foreach to iterate over all the rows except the last row, which is empty.
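Note that the question's code uses the ASP.NET WebForms GridView, while the answer's RowCount/Value members belong to the WinForms DataGridView. For the WebForms case, a hedged sketch of the same idea done per row in RowDataBound (empty GridView cells are often rendered as &nbsp;):

    protected void gvDb_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        if (e.Row.RowType != DataControlRowType.DataRow) return;

        // Empty cells frequently come through as "&nbsp;" in a GridView.
        string text = e.Row.Cells[1].Text.Replace("&nbsp;", "").Trim();
        if (string.IsNullOrEmpty(text))
            e.Row.Visible = false;
    }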
stackoverflow
{ "language": "en", "length": 95, "provenance": "stackexchange_0000F.jsonl.gz:871593", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562794" }
003c61f5d9d72f9bda3b48eae56fbb9f5f06e551
Stackoverflow Stackexchange Q: Android emulator slow after restarting computer After restarting my computer the Android emulator is very slow in Visual Studio 2015, but after a few hours of running it's back at normal speed. While the emulator is slow I constantly get messages like "process system isn't responding", "settings isn't responding", "launcher3 isn't responding". Emulator specs: Android 6.0 - API Level 23 CPU : Intel Atom(x86) RAM: 1835 VM Heap: 128 Internal Storage: 800 SD Card: 100 Use Host GPU enabled Does anyone know the cause of this? A: Finally fixed it. In my case, I have a laptop with an extra graphics card. After changing settings in the NVIDIA panel (Control 3D settings -> Prefer NVIDIA processor) and then right-clicking AVD Manager.exe -> run with graphics processor, that fixed it for me.
Q: Android emulator slow after restarting computer After restarting my computer the Android emulator is very slow in Visual Studio 2015, but after a few hours of running it's back at normal speed. While the emulator is slow I constantly get messages like "process system isn't responding", "settings isn't responding", "launcher3 isn't responding". Emulator specs: Android 6.0 - API Level 23 CPU : Intel Atom(x86) RAM: 1835 VM Heap: 128 Internal Storage: 800 SD Card: 100 Use Host GPU enabled Does anyone know the cause of this? A: Finally fixed it. In my case, I have a laptop with an extra graphics card. After changing settings in the NVIDIA panel (Control 3D settings -> Prefer NVIDIA processor) and then right-clicking AVD Manager.exe -> run with graphics processor, that fixed it for me.
stackoverflow
{ "language": "en", "length": 132, "provenance": "stackexchange_0000F.jsonl.gz:871597", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562813" }
31352d41f5eb10561c620fc0d2269f700a047609
Stackoverflow Stackexchange Q: Can't enable multiple CPU on VirtualBox I would like to use more than one CPU to run Ubuntu 14.04 (Trusty Tahr) 32-bit in VirtualBox, but when I stop the machine and go into Settings → System → Processor, the processor(s) slider is grayed out, as you can see in the screenshot image. How can I enable this feature? Host OS: Windows 10 Pro 64-bit Guest OS: Ubuntu 14.04 32-bit VirtualBox: Version 5.1.22 r115126 Processor: Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz (8 CPUs), ~3.4 GHz PS: My problem in the first instance is that the Ubuntu virtual machine is extremely slow and I would like to improve the performance, so any suggestion for that would also be welcome. A: For me, I just had to discard ("forget") the saved state with a right-click on the virtual machine.
Q: Can't enable multiple CPU on VirtualBox I would like to use more than one CPU to run Ubuntu 14.04 (Trusty Tahr) 32-bit in VirtualBox, but when I stop the machine and go into Settings → System → Processor, the processor(s) slider is grayed out, as you can see in the screenshot image. How can I enable this feature? Host OS: Windows 10 Pro 64-bit Guest OS: Ubuntu 14.04 32-bit VirtualBox: Version 5.1.22 r115126 Processor: Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz (8 CPUs), ~3.4 GHz PS: My problem in the first instance is that the Ubuntu virtual machine is extremely slow and I would like to improve the performance, so any suggestion for that would also be welcome. A: For me, I just had to discard ("forget") the saved state with a right-click on the virtual machine. A: Please make sure that you disable Hyper-V. Go to Control Panel → Turn Windows features on or off → Uncheck Hyper-V → Restart your computer. A: My problem is just a little bit different, but it fits the question. I cannot make the guest use more than one processor. The slider is not grayed out, and I can set it to 1..4 (it is a dual-core host). But setting it to 2 and booting the Windows guest, it only sees one. I also tried all possible values for the slider; the Windows guest always sees one, not more. If, on the Windows guest, I type set in a console, I always get this line, no matter the position of the slider: NUMBER_OF_PROCESSORS=1 I cannot enable multiple CPUs on VirtualBox for that guest. The weird thing is that if I put a live CD Linux distribution ISO in that guest's virtual CD unit, it can see all the processors I set on the slider... it is just the Windows guest that ignores the slider... I was getting mad... and was out of ideas. The problem occurs because when Windows was installed it was configured with only one processor, so it installed in a non-multiprocessor mode, and there is no way for it to see more than one, except re-installing Windows, this time with two or more on the slider, so that it installs in SMP mode. So, for anyone having the problem: * *I cannot use more than one processor on the Windows guest *I can move the slider The answer is not in the BIOS. The answer is: * *Please install Windows with the slider at 2 or more, not at just 1. I remember I had a similar problem with an old Windows XP guest. At that time I tried a patch to change Windows to SMP mode and a reboot, but it was so unstable that I opted to reinstall it directly with 2 on the slider. A: To increase performance you need to increase RAM, and to use more than one CPU you need to enable "Virtualization Technology" in the BIOS. Go to your BIOS options and search for "Virtualization Technology" under "System settings" or similar. On Intel processors this is called VT-x (often listed alongside VT-d); on an AMD processor the equivalent is called AMD-V. If you still have a problem with VirtualBox not giving you the option to use multiple CPUs, then you will most probably have to check whether Hyper-V is installed under "Turn Windows features on or off", which you might need to disable in order to use multiple CPU cores. Some computers might still have no virtualization support for multiple cores; that can be old computer models or, more specifically, laptop computers. In some cases, you might need to check whether a new BIOS version is available for your computer. A: Looks like you have to power off the virtual machine (do not save state) and edit those settings. A: * *Open VirtualBox.
*Click the name of the virtual machine that you want to make your CPU available to, then click the "Settings" button at the top of the window. *Click the "System" heading on the left side of the Settings window. *Click the "Processor" tab at the top of the window. *Drag the slider next to "Processor(s)" to the right until the value matches the number of processors or processor cores installed in your computer. *Drag the slider next to "Execution Cap" to the right until the value reads "100." This allows VirtualBox to use all of your processor's resources. *Click "OK," then double-click the virtual machine to turn it on. A: Please stop and power off your VM and then drag. It will work.
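For completeness, the same change can be made from the command line with VBoxManage while the VM is fully powered off; the VM name below is just a placeholder:

    VBoxManage modifyvm "Ubuntu-14.04" --cpus 4
    VBoxManage showvminfo "Ubuntu-14.04"   # check the "Number of CPUs" line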
stackoverflow
{ "language": "en", "length": 772, "provenance": "stackexchange_0000F.jsonl.gz:871630", "question_score": "18", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562907" }
41be7ee3e95a4a012bc81e338470e5f76928ab23
Stackoverflow Stackexchange Q: How can I get an image from a different project in Laravel? I have two projects (URLs). The first URL is like this: http://myshop.dev/ The second URL is like this: http://backend.myshop.dev/ In the second project, I run this: <img src="{{ asset("img/$photo") }}"/> It will call the URL: http://backend.myshop.dev/img/image1.jpg That does not suit my needs; I want to take the image from the first URL (http://myshop.dev/img/image1.jpg). Both projects use the same database. How can I do it? A: Add this to your .env BACKEND_URL=http://backend.myshop.dev/public/ Call <img src="{{ env('BACKEND_URL') . "img/$photo" }}"/>
Q: How can I get an image from a different project in Laravel? I have two projects (URLs). The first URL is like this: http://myshop.dev/ The second URL is like this: http://backend.myshop.dev/ In the second project, I run this: <img src="{{ asset("img/$photo") }}"/> It will call the URL: http://backend.myshop.dev/img/image1.jpg That does not suit my needs; I want to take the image from the first URL (http://myshop.dev/img/image1.jpg). Both projects use the same database. How can I do it? A: Add this to your .env BACKEND_URL=http://backend.myshop.dev/public/ Call <img src="{{ env('BACKEND_URL') . "img/$photo" }}"/> A: You could store the image on an image-hosting service; there are many, Google it. If you are using different servers, then you could use rsync to automatically copy all uploaded images to the second server. Check out how to use rsync. If you are using a single server to host both URLs, then you could just use a common folder and fetch the image from that common folder. A: You could write a helper for assets like this: So instead of: <img src="{{ asset("img/$photo") }}"/> you should write <img src="{{ myshop_asset("img/$photo") }}"/> in your helper.php: if (!function_exists('myshop_asset')) { function myshop_asset($path) { $myshop_path = 'http://myshop.dev'; // or read a config variable return rtrim($myshop_path, '/') .'/'. ltrim($path, '/'); } }
stackoverflow
{ "language": "en", "length": 205, "provenance": "stackexchange_0000F.jsonl.gz:871655", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44562990" }
257c2632cc9d9a3296a13f4dcc109a1c1e4624cd
Stackoverflow Stackexchange Q: http.Request: get file name from url How do I get only the file name one.json from the following request: http://localhost/slow/one.json? I just need to serve this file and others from the URL. This is a test server that I need to respond very slowly. http.HandleFunc("/slow/", func(w http.ResponseWriter, r *http.Request) { log.Println("Slow...") log.Println(r.URL.Path[1:]) time.Sleep(100 * time.Millisecond) http.ServeFile(w, r, r.URL.Path[1:]) }) A: I believe you are looking for path.Base: "Base returns the last element of path." r, _ := http.NewRequest("GET", "http://localhost/slow/one.json", nil) fmt.Println(path.Base(r.URL.Path)) // one.json Playground link
Q: http.Request: get file name from url How do I get only the file name one.json from the following request: http://localhost/slow/one.json? I just need to serve this file and others from the URL. This is a test server that I need to respond very slowly. http.HandleFunc("/slow/", func(w http.ResponseWriter, r *http.Request) { log.Println("Slow...") log.Println(r.URL.Path[1:]) time.Sleep(100 * time.Millisecond) http.ServeFile(w, r, r.URL.Path[1:]) }) A: I believe you are looking for path.Base: "Base returns the last element of path." r, _ := http.NewRequest("GET", "http://localhost/slow/one.json", nil) fmt.Println(path.Base(r.URL.Path)) // one.json Playground link A: Created two folders slow and fast and then I ended up using the following: package main import ( "log" "net/http" "time" "fmt" ) func main() { h := http.NewServeMux() h.HandleFunc("/fast/", func(w http.ResponseWriter, r *http.Request) { fmt.Println(r.URL.Path[1:]) time.Sleep(100 * time.Millisecond) http.ServeFile(w, r, r.URL.Path[1:]) }) h.HandleFunc("/slow/", func(w http.ResponseWriter, r *http.Request) { fmt.Println(r.URL.Path[1:]) time.Sleep(6000 * time.Millisecond) http.ServeFile(w, r, r.URL.Path[1:]) }) h.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(404) }) err := http.ListenAndServe(":8080", h) log.Fatal(err) }
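Putting the two answers together, a sketch of a slow handler that serves only the base name from a fixed directory (the ./files directory is hypothetical); joining with a fixed root keeps a raw URL path from escaping it:

    package main

    import (
    	"log"
    	"net/http"
    	"path"
    	"path/filepath"
    	"time"
    )

    func main() {
    	http.HandleFunc("/slow/", func(w http.ResponseWriter, r *http.Request) {
    		name := path.Base(r.URL.Path) // "one.json" for /slow/one.json
    		time.Sleep(6 * time.Second)
    		http.ServeFile(w, r, filepath.Join("files", name))
    	})
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }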
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:871681", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563088" }
9fda3be6d7945789a1435c03063d5acdfbfddbe3
Stackoverflow Stackexchange Q: Antd table how to put text into cell in several lines In Antd, is there a way to show the text in a table cell on several lines? I tried to put </br>, \n, \r into the text. Is there someone who has already found a way to do that? A: In the hopes that it's never too late to answer a question here :D Use the renderer of the table cell, like below, where you can use HTML tags (I'm using React, so it's formatted as a ReactNode with <> </> parent elements): const columns = [ { title: "Start", dataIndex: "start", key: "start", render: (text) => <> <p>{text.split("@")[0]}</p> <p>{text.split("@")[1]}</p> </> }, { title: "Name", dataIndex: "name", key: "name", render: (text) => text }]
Q: Antd table how to put text into cell in several lines In Antd, is there a way to show the text in a table cell on several lines? I tried to put </br>, \n, \r into the text. Is there someone who has already found a way to do that? A: In the hopes that it's never too late to answer a question here :D Use the renderer of the table cell, like below, where you can use HTML tags (I'm using React, so it's formatted as a ReactNode with <> </> parent elements): const columns = [ { title: "Start", dataIndex: "start", key: "start", render: (text) => <> <p>{text.split("@")[0]}</p> <p>{text.split("@")[1]}</p> </> }, { title: "Name", dataIndex: "name", key: "name", render: (text) => text }] A: Finally, here is my solution. The text for each column contains a \n wherever a new line is necessary. Then, in the table definition, I put the style whiteSpace: 'pre': <Table style={{ whiteSpace: 'pre'}} columns={columns} dataSource={data} title={title} .../> That seems to work as expected.
stackoverflow
{ "language": "en", "length": 172, "provenance": "stackexchange_0000F.jsonl.gz:871687", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563104" }
56b51d80326e0d41ddb83afaea5533de621acd52
Stackoverflow Stackexchange Q: how to use jq to select items not in a list? In jq, I can select an item in a list fairly easily: $ echo '["a","b","c","d","e"]' | jq '.[] | select(. == ("a","c"))' Or if you prefer to get it as an array: $ echo '["a","b","c","d","e"]' | jq 'map(select(. == ("a","c")))' But how do I select all of the items that are not in the list? Certainly . != ("a","c") does not work: $ echo '["a","b","c","d","e"]' | jq 'map(select(. != ("a","c")))' [ "a", "b", "b", "c", "d", "d", "e", "e" ] The above gives every item twice, except for "a" and "c". Same for: $ echo '["a","b","c","d","e"]' | jq '.[] | select(. != ("a","c"))' "a" "b" "b" "c" "d" "d" "e" "e" How do I filter out the matching items? A: I'm sure it is not the simplest solution, but it works :) $ echo '["a","b","c","d","e"]' | jq '.[] | select(test("[^ac]"))' Edit: one more solution - this is even worse :) $ echo '["a","b","c","d","e"]' | jq '.[] | select(. != ("a") and . != ("c"))'
Q: how to use jq to select items not in a list? In jq, I can select an item in a list fairly easily: $ echo '["a","b","c","d","e"]' | jq '.[] | select(. == ("a","c"))' Or if you prefer to get it as an array: $ echo '["a","b","c","d","e"]' | jq 'map(select(. == ("a","c")))' But how do I select all of the items that are not in the list? Certainly . != ("a","c") does not work: $ echo '["a","b","c","d","e"]' | jq 'map(select(. != ("a","c")))' [ "a", "b", "b", "c", "d", "d", "e", "e" ] The above gives every item twice, except for "a" and "c". Same for: $ echo '["a","b","c","d","e"]' | jq '.[] | select(. != ("a","c"))' "a" "b" "b" "c" "d" "d" "e" "e" How do I filter out the matching items? A: I'm sure it is not the simplest solution, but it works :) $ echo '["a","b","c","d","e"]' | jq '.[] | select(test("[^ac]"))' Edit: one more solution - this is even worse :) $ echo '["a","b","c","d","e"]' | jq '.[] | select(. != ("a") and . != ("c"))' A: The simplest and most robust (w.r.t. jq versions) approach would be to use the builtin -: $ echo '["a","b","c","d","e"]' | jq -c '. - ["a","c"]' ["b","d","e"] If the blacklist is very long and riddled with duplicates, then it might be appropriate to remove them (e.g. with unique). Variations The problem can also be solved (in jq 1.4 and up) using index and not, e.g. ["a","c"] as $blacklist | .[] | select( . as $in | $blacklist | index($in) | not) Or, with a variable passed in from the command line (jq --argjson blacklist ...): .[] | select( . as $in | $blacklist | index($in) | not) To preserve the list structure, one can use map( select( ...) ). With jq 1.5 or later, you could also use any or all, e.g. def except(blacklist): map( select( . as $in | blacklist | all(. != $in) ) ); Special case: strings See e.g. Select entries based on multiple values in jq
stackoverflow
{ "language": "en", "length": 336, "provenance": "stackexchange_0000F.jsonl.gz:871690", "question_score": "18", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563115" }
3145efbf0420039adfef0be3cd07810e5ad4a8a7
Stackoverflow Stackexchange Q: Removing contributor from github.com? How do I remove a contributor from showing on the main page of a project? Link https://help.github.com/articles/removing-a-collaborator-from-a-personal-repository/ says it is possible in settings, but I don't see any collaborator in there. A: You cannot remove it, but you can change their name (to yours). Yet, I would strongly advise not to, because this would affect all other collaborators and contributors (see below). This is described in detail here. In short, you have to use filter-branch, e.g. through the following script: git filter-branch --env-filter ' if [ "$GIT_AUTHOR_NAME" = "OLD NAME" ]; then \ export GIT_AUTHOR_NAME="NEW NAME" GIT_AUTHOR_EMAIL="new.name@mail.com"; \ fi ' The reason why it is better not to: it comes with some serious side effects, such as invalidating all subsequent commit hashes, as also mentioned by Peter Reid.
Q: Removing contributor from github.com? How do I remove a contributor from showing on the main page of a project? Link https://help.github.com/articles/removing-a-collaborator-from-a-personal-repository/ says it is possible in settings, but I don't see any collaborator in there. A: You cannot remove it, but you can change their name (to yours). Yet, I would strongly advise not to, because this would affect all other collaborators and contributors (see below). This is described in detail here. In short, you have to use filter-branch, e.g. through the following script: git filter-branch --env-filter ' if [ "$GIT_AUTHOR_NAME" = "OLD NAME" ]; then \ export GIT_AUTHOR_NAME="NEW NAME" GIT_AUTHOR_EMAIL="new.name@mail.com"; \ fi ' The reason why it is better not to: it comes with some serious side effects, such as invalidating all subsequent commit hashes, as also mentioned by Peter Reid. A: The below method works in my case at least. * *On the GitHub web page, change a branch name (main --> main1, for example). It updates the contributor list on my GitHub repository dashboard. *Then change it back (main1 --> main). I have multiple GitHub accounts for different projects, each for a different community. But accidentally, I pushed a commit using the wrong account. I changed the author of the commit, but the wrong account was still on the contributor list on the GitHub dashboard. My method keeps commit history as well as GitHub Actions settings and issue history. But I did not check if pull requests are kept. A: Assuming it is being done with all the right intentions and you are the owner of the repository, you can use the rename feature on the repository. Essentially, create a replica of the repository and swap the repository names like you swap variables, with the steps below. * *Create a new replica repository *Cherry-pick from the original repository only the commits with the intended authors. *Rename the original repository to to_be_deleted and the replica to original. Commits from the original repository can be picked with the following steps. * *git remote add repo2 https://github.com/mygit/original.git *git pull repo2 *git cherry-pick <commit> *git push Contributors are essentially the authors of any commit in the repository. I once accidentally put a wrong email in the author field of a commit in my repository, and GitHub started showing a new contributor in the repository. I tried reverting the commit, but it didn't help. Finally I had to create/rename/delete the original repository. A: It is possible but might be challenging. You need to rewrite history (which is usually not recommended). How to do it? * *You should rewrite all the history commits of the contributor and change the commits to a different author. There are some ways to do it; I found the simplest is to amend each existing commit, changing the author. E.g., below a commit pick you should write: exec git commit --amend --author="{NewAuthorName} <{NewAuthorEmail}>" -C HEAD Watch this explanation https://www.youtube.com/watch?v=7RZgtT4cbw0. *After the contributor doesn't have any commits in the history, update any GitHub setting to refresh the list and then wait for a couple of minutes. For example, you may update the default branch name under your repository => Settings => Code and automation => Branches => Default branch. And then return the original branch name. After both updates, wait a couple of minutes and it should remove the contributor from the list. A: You cannot (at least without rewriting history, which is highly discouraged).
Those users have commits in your repository history, and therefore lines of code have been added by them. Even if you remove all their lines of code, they will still show as a contributor. Contributors are not collaborators. Collaborators are contributors authorized by the repository owner to have direct (usually write) access to the repository, meaning they don't need to fork the repository and they can be assigned to issues, among other things. A: I accidentally pushed a commit from an old account. The old account remained on the contributors' list even after I had removed the commit. I had to remove the old account from GitHub to make it disappear from the list. A: None of the answers here worked for me. What I did that worked: * *Changed all commit emails (two) that pointed to the other account, using the git-filter-repo tool. *Followed the suggestion of renaming the branches, main -> main1 -> main (didn't work, at least not instantaneously). *Returned to the repo about two or three hours later and it was correct. Perhaps the second point is not necessary. A: Change the repo visibility in the GitHub repository settings from public to private and then from private back to public. Note: in order to make this work, you must not have anything related to that user linked with the repo, like commits, releases, commits in other branches, or tags. Caution: you will lose all of your repo stars and watchers. Try other, less risky methods first if you care about stars. A: Fix commits showing a user who copied/cloned a repo: I came to this question because I thoughtlessly worked on a copy of my repo that a colleague had downloaded, instead of my local copy. I hadn't changed the Git email for the copy of my repo he'd downloaded, as I'd (dumbly) copied the repo from his download, so my pushes to GitHub were all attributed to his account, from his email address set (unwittingly) by him (in the '.git' folder) when he logged into GitHub and downloaded my repo. Once I'd checked and changed the email for that repo to mine (see the two commands below, executed while in the repo's directory; the first shows the current value, the second sets it), my pushes were correctly attributed: git config user.email git config user.email myemail@mydomain.com Unfortunately, he can't be removed as a contributor to some of these commits without a lot of fiddling, but he's a trusted colleague and friend, so I can live with that. A: Workaround tested 2022 (please create appropriate backups before doing any of this), assuming that there are not many commits after the commit made by the contributor you want to remove. * *Download GitHub Desktop *Create a dummy folder and point GitHub Desktop to the dummy folder when it asks for a location to clone the repo (do not use your working directory) *Once you clone the repo, you should see the history tab; select the commit (assuming the latest one is from the contributor you want to remove) and select the option 'revert this commit' *Force-push this change to GitHub *Now there are no commits from the user in the main branch *Create a new branch from the default branch and navigate to the Settings of the repo on github.com *On the settings page, change the default branch to the new branch you created *Since this new branch does not have commits from the contributor to be removed, you will force github.com to refresh the contributor list automatically and the contributor will be removed. *Now you can change the default back to the old branch (usually master/main) and you will find the contributor removed from the refreshed list.
TL;DR: Revert changes to the stage where there are no contributions by the user to be removed ---> Create a new branch and make it the default; this forces github.com to refresh the contributor list ---> Change the setting back to the old branch as default. You can delete the temporary new branch you created. A: 3 steps * *Remove all branches that the user might have contributed to *Do any number of things to remove all commits related to that author from your remaining branches. Some of your choices: * *rebase or interactive rebase and force push *filter-branch and force push *cherry-pick and force push *squash and force push *Turn your repo private, then turn it public again
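For reference, the modern counterpart of the filter-branch snippet above is git filter-repo with a mailmap; a sketch (the names, emails, file name and branch are placeholders, and this rewrites history, so the same force-push caveats apply):

    # my-mailmap file, mapping the old identity to the new one:
    #   NEW NAME <new.name@mail.com> OLD NAME <old.name@mail.com>
    git filter-repo --mailmap my-mailmap
    git push --force origin main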
stackoverflow
{ "language": "en", "length": 1282, "provenance": "stackexchange_0000F.jsonl.gz:871698", "question_score": "37", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563131" }
82b5399fc009f791672bbf696fba921b5792696a
Stackoverflow Stackexchange Q: app name is appended to the table name in django When I try to fetch data from the table, the app name is appended to the table name and an error is displayed. Following is my code: from models import open_cart class test(APIView): def get(self,request,format=None): values = open_cart.objects.get() The app name that I have defined in INSTALLED_APPS is 'MyApp'. My table name is 'open_cart'. The table name in the query goes as MyApp_open_cart instead of open_cart. The error message that I get is: relation "untitled_open_cart" does not exist A: Appending the app name to the table name is default behavior in Django. If you want to use a custom table name, add it in the Meta class: class MyModel(models.Model): class Meta: db_table = 'student_info' Refer to the official Django Model Meta options documentation for more info.
Q: app name is appended to the table name in django When I try to fetch data from the table, the app name is appended to the table name and an error is displayed. Following is my code: from models import open_cart class test(APIView): def get(self,request,format=None): values = open_cart.objects.get() The app name that I have defined in INSTALLED_APPS is 'MyApp'. My table name is 'open_cart'. The table name in the query goes as MyApp_open_cart instead of open_cart. The error message that I get is: relation "untitled_open_cart" does not exist A: Appending the app name to the table name is default behavior in Django. If you want to use a custom table name, add it in the Meta class: class MyModel(models.Model): class Meta: db_table = 'student_info' Refer to the official Django Model Meta options documentation for more info. A: Django appends the app name before the model name by default. If we want to use a custom table name, it has to be mentioned inside the Meta class: class Open_cart(models.Model): class Meta: db_table = 'open_cart'
stackoverflow
{ "language": "en", "length": 172, "provenance": "stackexchange_0000F.jsonl.gz:871705", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563155" }
08bb01b420afaffb5d0a363ef56a8d67c5911bbe
Stackoverflow Stackexchange Q: No of reducers in mapreduce hadoop I have only one key emitted from mapper to reducer and I have set the number of reducers to 10. So one reducer will run on that key; what will the other 9 reducers do? A: The other 9 reducers will run through their lifecycle as normal; they just won't have any key/values to process once they run, so they will stop quickly. Thus you will waste resources while they needlessly run. You'll generally (most output formats do this) also find you end up with a part file for a reduce that ran but didn't write anything. The part file won't contain any actual data, just file metadata, for example gzip headers.
Q: No of reducers in mapreduce hadoop I have only one key emitted from mapper to reducer and I have set the number of reducers to 10. So one reducer will run on that key; what will the other 9 reducers do? A: The other 9 reducers will run through their lifecycle as normal; they just won't have any key/values to process once they run, so they will stop quickly. Thus you will waste resources while they needlessly run. You'll generally (most output formats do this) also find you end up with a part file for a reduce that ran but didn't write anything. The part file won't contain any actual data, just file metadata, for example gzip headers. A: The other 9 reducers will run until their slot time ends. They will not have any K/V pairs to process and will stop quickly. You can use custom partitioners to distribute the map outputs to all reducers evenly, at least for the first level, and finally combine through one reducer at the very last phase, thus reducing the computing load in most of the reduce phase.
stackoverflow
{ "language": "en", "length": 182, "provenance": "stackexchange_0000F.jsonl.gz:871718", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563200" }
8e920c89437183042b2a8b488b343d5126459f39
Stackoverflow Stackexchange Q: How to deploy react js in existing Codeigniter project I am using the CodeIgniter 2.0 framework. I need to use React with TypeScript to load views. I am using an Nginx server. How do I deploy React JS alongside the existing project, and on which port?
Q: How to deploy react js in existing Codeigniter project I am using the CodeIgniter 2.0 framework. I need to use React with TypeScript to load views. I am using an Nginx server. How do I deploy React JS alongside the existing project, and how should the port be configured?
stackoverflow
{ "language": "en", "length": 42, "provenance": "stackexchange_0000F.jsonl.gz:871720", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563207" }
64b997ef5f9571933711b18063b6edb936129d44
Stackoverflow Stackexchange Q: Can I fetch PHAssets in a background thread? I want to do some work on a user's photo library. Since the library can be huge, I want to do it in the background. I'm wondering whether it is safe to perform asset fetches (like PHAsset.fetchAssets) and work on them in the background? I only need the asset metadata for now. Would something like this be safe: class ViewController: UIViewController { var cachedResult = [Any]() func doBackgroundCalculationsOnPhotos(completionHandler: @escaping ([Any]) -> ()) { DispatchQueue.global(qos: .userInitiated).async { let photos = PHAsset.fetchAssets(with: .image, options: nil) var result = [Any]() photos.enumerateObjects({ asset, _, _ in result.append(calculateSomething(asset)) }) DispatchQueue.main.async { self.cachedResult = result completionHandler(result) } } } } Are there any references to documentation where I could learn about the Photos framework and background access? A: Yes, it can take some time to fetch, so it might be a good idea to do this in the background, as the fetchAssets(with:options:) method is synchronous.
Q: Can I fetch PHAssets in a background thread? I want to do some work on a user's photo library. Since the library can be huge, I want to do it in the background. I'm wondering whether it is safe to perform asset fetches (like PHAsset.fetchAssets) and work on them in the background? I only need the asset metadata for now. Would something like this be safe: class ViewController: UIViewController { var cachedResult = [Any]() func doBackgroundCalculationsOnPhotos(completionHandler: @escaping ([Any]) -> ()) { DispatchQueue.global(qos: .userInitiated).async { let photos = PHAsset.fetchAssets(with: .image, options: nil) var result = [Any]() photos.enumerateObjects({ asset, _, _ in result.append(calculateSomething(asset)) }) DispatchQueue.main.async { self.cachedResult = result completionHandler(result) } } } } Are there any references to documentation where I could learn about the Photos framework and background access? A: Yes, it can take some time to fetch, so it might be a good idea to do this in the background, as the fetchAssets(with:options:) method is synchronous.
stackoverflow
{ "language": "en", "length": 156, "provenance": "stackexchange_0000F.jsonl.gz:871726", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563228" }
cca6c05fd02d50b1ac08cfd8fdd391ec5c4c144a
Stackoverflow Stackexchange Q: What does "attach to standard streams" mean in docker? I am reading the spec of the run command and see the following: -i : Keep STDIN open even if not attached and -a=[] : Attach to STDIN, STDOUT and/or STDERR and You can specify to which of the three standard streams (STDIN, STDOUT, STDERR) you’d like to connect But I lack understanding: what does it mean to "connect a container to a standard stream?" Can somebody explain? What does, for example, the -i parameter do? A: Well, STDIN (Standard Input), STDOUT (Standard Output) and STDERR (Standard Error) are the three standard streams. Normally, STDIN means the keyboard, while STDOUT and STDERR mean direct screen display. So, if you want to give your container some input from the keyboard, you need to connect it to STDIN. And if you want your container to print its result on the screen, you may need to connect it to STDOUT and STDERR. Otherwise, your container can run in the background. Input may come from, say, the network, and output may be stored in a log file.
Q: What does "attach to standard streams" mean in docker? I am reading the spec of the run command and see the following: -i : Keep STDIN open even if not attached and -a=[] : Attach to STDIN, STDOUT and/or STDERR and You can specify to which of the three standard streams (STDIN, STDOUT, STDERR) you’d like to connect But I lack understanding: what does it mean to "connect a container to a standard stream?" Can somebody explain? What does, for example, the -i parameter do? A: Well, STDIN (Standard Input), STDOUT (Standard Output) and STDERR (Standard Error) are the three standard streams. Normally, STDIN means the keyboard, while STDOUT and STDERR mean direct screen display. So, if you want to give your container some input from the keyboard, you need to connect it to STDIN. And if you want your container to print its result on the screen, you may need to connect it to STDOUT and STDERR. Otherwise, your container can run in the background. Input may come from, say, the network, and output may be stored in a log file. A: If you want to execute a command on a running container's TTY, you need to attach a standard stream for input, output or error (STDIN, STDOUT, STDERR). You can keep the TTY interactive using the -i flag even if no stream is attached, or attach directly by executing docker exec -it {running container id} /bin/bash
stackoverflow
{ "language": "en", "length": 233, "provenance": "stackexchange_0000F.jsonl.gz:871734", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563249" }
f274c28d71cfa7eed2f2c13d2891fa3f9f576ff0
Stackoverflow Stackexchange Q: Partial import of antd package not working I am importing the antd package using the babel-plugin-import plugin. However, I am getting the warning that the whole bundle is imported. You are using a whole package of antd, please use https://www.npmjs.com/package/babel-plugin-import to reduce app bundle size. My webpack config for jsx is as follows: { test: /\.jsx$/, loader: 'babel-loader', exclude: [nodeModulesDir], options: { cacheDirectory: true, plugins: [ 'transform-decorators-legacy', 'add-module-exports', ["import", { "libraryName": "antd", "style": true }], ["react-transform", { transforms: [ { transform: 'react-transform-hmr', imports: ['react'], locals: ['module'] } ] }] ], presets: ['es2015', 'stage-0', 'react'] } }, For some reason, the entire antd bundle is being imported. A: I figured out the problem. I had created a package, searchtabular-antd. The package used the Babel transpiler to output JavaScript. The line below in the package caused the problem: import { DatePicker, Checkbox, Input, InputNumber } from 'antd'; The components should be manually imported from lib as follows: import DatePicker from 'antd/lib/date-picker'; This fixed the antd bundle size in the main app that used searchtabular-antd.
Q: Partial import of antd package not working I am importing the antd package using the babel-plugin-import plugin. However, I am getting the warning that the whole bundle is imported. You are using a whole package of antd, please use https://www.npmjs.com/package/babel-plugin-import to reduce app bundle size. My webpack config for jsx is as follows: { test: /\.jsx$/, loader: 'babel-loader', exclude: [nodeModulesDir], options: { cacheDirectory: true, plugins: [ 'transform-decorators-legacy', 'add-module-exports', ["import", { "libraryName": "antd", "style": true }], ["react-transform", { transforms: [ { transform: 'react-transform-hmr', imports: ['react'], locals: ['module'] } ] }] ], presets: ['es2015', 'stage-0', 'react'] } }, For some reason, the entire antd bundle is being imported. A: I figured out the problem. I had created a package, searchtabular-antd. The package used the Babel transpiler to output JavaScript. The line below in the package caused the problem: import { DatePicker, Checkbox, Input, InputNumber } from 'antd'; The components should be manually imported from lib as follows: import DatePicker from 'antd/lib/date-picker'; This fixed the antd bundle size in the main app that used searchtabular-antd.
stackoverflow
{ "language": "en", "length": 169, "provenance": "stackexchange_0000F.jsonl.gz:871773", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563394" }
a47eee805fb42bcb758bfaec38479f7cc0702870
Stackoverflow Stackexchange Q: Remove leading whitespace from xml using php I need to remove the leading whitespace in the XML below, 1 2 <?xml version="1.0" encoding="UTF-8"?> 3 <VAST version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="vast.xsd"> 4 </VAST> I get the above XML in a variable like $html and use the trim() function to remove the leading whitespace, e.g. echo trim($html), but it does not remove the space. Any help appreciated. A: trim only removes specific whitespace characters from a string. However, if your string contains other non-rendered characters it will not work. A candidate for this is the BOM (Byte Order Mark). It is interpreted but not rendered by the browser. So it will not be visible. The bytes themselves are not part of the characters that trim() or ltrim() would remove. Here is an example: $string = "\xEF\xBB\xBF\r\n<foo/>"; var_dump(trim($string)); Output: string(11) " <foo/>" If you know that the string is XML you can use string functions to remove anything before the first <. $string = substr($string, strpos($string, '<')); var_dump($string); Output: string(6) "<foo/>"
Q: Remove leading whitespace from xml using php I need to remove the leading whitespace in the XML below, 1 2 <?xml version="1.0" encoding="UTF-8"?> 3 <VAST version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="vast.xsd"> 4 </VAST> I get the above XML in a variable like $html and use the trim() function to remove the leading whitespace, e.g. echo trim($html), but it does not remove the space. Any help appreciated. A: trim only removes specific whitespace characters from a string. However, if your string contains other non-rendered characters it will not work. A candidate for this is the BOM (Byte Order Mark). It is interpreted but not rendered by the browser. So it will not be visible. The bytes themselves are not part of the characters that trim() or ltrim() would remove. Here is an example: $string = "\xEF\xBB\xBF\r\n<foo/>"; var_dump(trim($string)); Output: string(11) " <foo/>" If you know that the string is XML you can use string functions to remove anything before the first <. $string = substr($string, strpos($string, '<')); var_dump($string); Output: string(6) "<foo/>" A: It seems you want to remove the first line, which is empty, rather than whitespace. To remove the line breaks in PHP: $html = str_replace(array("\r", "\n"), '', $html); A: If you get the whitespace at the beginning each time, you could use ltrim($html) A: I had the exact same problem in a WP theme and I fixed it by doing the following: * *Create an .html file - e.g. rsshead.html with the content: <?xml version="1.0" encoding="UTF-8"?> - starting at the very beginning of the file at line 1; the encoding can be changed to whatever suits you best *Include the HTML file in my PHP file: include('rsshead.html'); instead of echo '<?xml version="1.0" encoding="UTF-8"?>' A: I had the same issue. It turned out that one of the files included before the XML-generating code contained a space just before the <?php declaration: " <?php" instead of "<?php"
stackoverflow
{ "language": "en", "length": 305, "provenance": "stackexchange_0000F.jsonl.gz:871774", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563395" }
0e41fc485e898bf7c988fb6f3e4dbbc82d8e3638
Stackoverflow Stackexchange Q: Button.isEnabled() returns true even though the button is disabled by default While testing I hit a roadblock: I have a button in a web page which is disabled by default. I am using Selenium WebDriver to test whether the button is disabled by default, but the boolean always returns true. Boolean buttonStatus = (button XPath).isEnabled It would be great if someone could help me. HTML Information: <div class="commandbutton commandbutton--theme-disabled commandbutton--recommended"> <button class="commandbutton-button commandbutton-button--disabled" type="button" tabindex="-1"> A: From the isEnabled docs: This will generally return true for everything but disabled input elements. But it will work on buttons as well. However, isEnabled() checks for the disabled attribute. If the button is disabled by JavaScript or any other means, isEnabled() won't detect it. My guess is that the button's classes change when it is enabled or disabled. For example, when enabled it probably won't have the commandbutton-button--disabled class. You can check for it: WebElement button = driver.findElement(By.xpath("button XPath")); String classes = button.getAttribute("class"); boolean isDisabled = classes.contains("commandbutton-button--disabled");
Q: Button.isEnabled() returns true even though the button is disabled by default While testing I hit a roadblock: I have a button in a web page which is disabled by default. I am using Selenium WebDriver to test whether the button is disabled by default, but the boolean always returns true. Boolean buttonStatus = (button XPath).isEnabled It would be great if someone could help me. HTML Information: <div class="commandbutton commandbutton--theme-disabled commandbutton--recommended"> <button class="commandbutton-button commandbutton-button--disabled" type="button" tabindex="-1"> A: From the isEnabled docs: This will generally return true for everything but disabled input elements. But it will work on buttons as well. However, isEnabled() checks for the disabled attribute. If the button is disabled by JavaScript or any other means, isEnabled() won't detect it. My guess is that the button's classes change when it is enabled or disabled. For example, when enabled it probably won't have the commandbutton-button--disabled class. You can check for it: WebElement button = driver.findElement(By.xpath("button XPath")); String classes = button.getAttribute("class"); boolean isDisabled = classes.contains("commandbutton-button--disabled"); A: I had the same problem. But the elements on my page were very strange: some of them Selenium could click although they were not clickable, and some of them Selenium couldn't click but could send keys to. After a few hours of thinking, I wrote a universal method that checks whether elements are enabled or not. After talking with a programmer, I learned that this page uses a special Select, rendered as a Div with an Input in it. He said I can detect its disabled state by checking the Div's class attribute: if it contains 'select2-container-disabled', then the Input is disabled. So I changed my method. Now it looks like this: public boolean isNotClickable(WebElement... elements) { List<WebElement> elementsChecked = new ArrayList<>(); List<WebElement> elementsToCheckByClass = new ArrayList<>(); List<WebElement> elementsToCheckByClick = new ArrayList<>(); List<WebElement> elementsToCheckBySendKeys = new ArrayList<>(); for (WebElement checkedElement : elements) { log.info("Checking, that element [" + getLocator(checkedElement) + "] is not clickable by isEnabled()"); if (checkedElement.isEnabled()) { elementsToCheckByClass.add(checkedElement); } else { elementsChecked.add(checkedElement); } } if (!elementsToCheckByClass.isEmpty()) { for (WebElement checkedByClassElement : elementsToCheckByClass) { log.info("Checking, that element [" + getLocator(checkedByClassElement) + "] is not clickable by class"); String classOfElement = checkedByClassElement.getAttribute("class"); List<String> classes = new ArrayList<>(Arrays.asList(classOfElement.split(" "))); if (!classes.contains("select2-container-disabled")) { elementsToCheckByClick.add(checkedByClassElement); } else { elementsChecked.add(checkedByClassElement); } } } if (!elementsToCheckByClick.isEmpty()) { WebDriverWait wait = new WebDriverWait(driverUtils.getDriver(), 1); for (WebElement checkedByClickElement : elementsToCheckByClick) { log.info("Checking, that element [" + getLocator(checkedByClickElement) + "] is not clickable by clicking it"); try { wait.until(elementToBeClickable(checkedByClickElement)); elementsToCheckBySendKeys.add(checkedByClickElement); } catch (Exception e) { elementsChecked.add(checkedByClickElement); } } } if (!elementsToCheckBySendKeys.isEmpty()) { for (WebElement checkedBySendKeysElement : elementsToCheckBySendKeys) { log.info("Checking, that element [" + getLocator(checkedBySendKeysElement) + "] is not clickable by sending keys"); try { checkedBySendKeysElement.sendKeys("checking"); return false; } catch (Exception e) { elementsChecked.add(checkedBySendKeysElement); } } } return elementsChecked.size() == elements.length; } A: isEnabled can only tell you that the disabled attribute is not set; you also need to check the class attribute to determine whether the button is actually enabled.
stackoverflow
{ "language": "en", "length": 485, "provenance": "stackexchange_0000F.jsonl.gz:871779", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563408" }
1cb22a77556dbc0cd6bca635aac0b8096371527a
Stackoverflow Stackexchange Q: TypeScript Set<> collection getting value via index Hello everyone, I'm new to TypeScript and maybe I'm missing something obvious, but here is my question: const someSet = new Set(); someSet.add(1) console.log(someSet[0]) gives me undefined. Can someone explain why we can't get a value via index? And how can I do that? A: You need to use an iterator: var iterator = someSet.values(); console.log(iterator.next().value); You can learn more here. TypeScript doesn't throw when you do someSet[0] because you can also access the properties using the index syntax: someSet["values"]
Q: TypeScript Set<> collection getting value via index Hello everyone, I'm new to TypeScript and maybe I'm missing something obvious, but here is my question: const someSet = new Set(); someSet.add(1) console.log(someSet[0]) gives me undefined. Can someone explain why we can't get a value via index? And how can I do that? A: You need to use an iterator: var iterator = someSet.values(); console.log(iterator.next().value); You can learn more here. TypeScript doesn't throw when you do someSet[0] because you can also access the properties using the index syntax: someSet["values"] A: Set is part of ES6, not TypeScript. Its nature is to be an iterable collection of unique values, though not one with a fixed index. Do not rely on indices to access these items, since a Set is not indexable (use an Array instead). If you would like to iterate over a set you can always use someSet.forEach(). A: Probably because Set does not have an indexer. You can get a value by doing so: let val = [...mySet][0];
stackoverflow
{ "language": "en", "length": 164, "provenance": "stackexchange_0000F.jsonl.gz:871815", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563529" }
81b689c27f0da51e675fe2e765a4ecbac7a7fa57
Stackoverflow Stackexchange Q: JPA Unary Operators I want to know if it is possible to set a boolean value in JPA with a unary operator. I mean something like this: @Modifying @Query("update Computer com set com.enabled=!com.enabled where com.id = ?1") The enabled field is mapped like this in the POJO: private Boolean enabled; public Boolean getEnabled() { return enabled; } public void setEnabled(Boolean enabled) { this.enabled = enabled; } In the DB it is stored as boolean(1). This is the result: Caused by: java.lang.IllegalArgumentException: org.hibernate.QueryException: expecting '=', found 'c' [update com.nicinc.Computer com set com.enabled=!com.enabled where com.id = ?1] and here are the Spring Data JPA properties: spring.datasource.url=jdbc:h2:mem:testdb;MODE=MySQL;DB_CLOSE_ON_EXIT=FALSE spring.datasource.username=sa spring.datasource.password= spring.jpa.show-sql=false spring.jpa.properties.hibernate.format_sql=true hibernate.dialect=org.hibernate.dialect.H2Dialect A: Different databases have different support for handling boolean expressions, so there is little room for JPA to provide a generalized approach; thus you have to be explicit: update Computer set enabled = case when enabled = true then false else true end where id = ?1
Q: JPA Unary Operators I want to know if it is possible to set a boolean value in JPA with a unary operator. I mean something like this: @Modifying @Query("update Computer com set com.enabled=!com.enabled where com.id = ?1") The enabled field is mapped like this in the POJO: private Boolean enabled; public Boolean getEnabled() { return enabled; } public void setEnabled(Boolean enabled) { this.enabled = enabled; } In the DB it is stored as boolean(1). This is the result: Caused by: java.lang.IllegalArgumentException: org.hibernate.QueryException: expecting '=', found 'c' [update com.nicinc.Computer com set com.enabled=!com.enabled where com.id = ?1] and here are the Spring Data JPA properties: spring.datasource.url=jdbc:h2:mem:testdb;MODE=MySQL;DB_CLOSE_ON_EXIT=FALSE spring.datasource.username=sa spring.datasource.password= spring.jpa.show-sql=false spring.jpa.properties.hibernate.format_sql=true hibernate.dialect=org.hibernate.dialect.H2Dialect A: Different databases have different support for handling boolean expressions, so there is little room for JPA to provide a generalized approach; thus you have to be explicit: update Computer set enabled = case when enabled = true then false else true end where id = ?1 A: Probably you are using tinyint on the MySQL side for that column and the persistence provider cannot map 0 and 1 to false and true. If you are using JPA 2.1 then I would suggest creating a global converter for boolean conversion: @Converter(autoApply=true) public class GlobalBooleanConverter implements AttributeConverter<Boolean, Integer>{ @Override public Integer convertToDatabaseColumn(Boolean value) { if (Boolean.TRUE.equals(value)) { return Integer.valueOf(1); } else { return Integer.valueOf(0); } } @Override public Boolean convertToEntityAttribute(Integer value) { return Integer.valueOf(1).equals(value); } } A slightly out-of-the-box alternative would be to change the POJO field to Integer and change the query to the following: @Modifying @Query("update Computer com set com.enabled=((-com.enabled)+1) where com.id = ?1")
stackoverflow
{ "language": "en", "length": 268, "provenance": "stackexchange_0000F.jsonl.gz:871826", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563550" }
cc15d8599f4ffbd76e405257f9ecf292dab7e7c4
Stackoverflow Stackexchange Q: Are Class variables mutable? If I define a simple class class someClass(): var = 1 x = someClass() someClass.var = 2 This will make x.var equal 2. This is confusing to me because normally something akin to this: a = 1 b = a a = 2 will leave b intact, as b==1. So why is this not the same with class variables? Where is the difference? Can we call all class variables mutable? In a way the class variables work more like assigning a list to a=[1] and doing a[0]=2. Basically the question is how x.var accesses someClass.var; it must be something different than what is used when two variables are set equal in Python. What is happening? A: var is a static class variable of someClass. When you reach out to get x.var, y.var or some_other_instance.var, you are accessing the same variable, not an instance-derived one (as long as you didn't specifically assign it to the instance as an attribute).
Q: Are Class variables mutable? If I define a simple class class someClass(): var = 1 x = someClass() someClass.var = 2 This will make x.var equal 2. This is confusing to me because normally something akin to this: a = 1 b = a a = 2 will leave b intact, as b==1. So why is this not the same with class variables? Where is the difference? Can we call all class variables mutable? In a way the class variables work more like assigning a list to a=[1] and doing a[0]=2. Basically the question is how x.var accesses someClass.var; it must be something different than what is used when two variables are set equal in Python. What is happening? A: var is a static class variable of someClass. When you reach out to get x.var, y.var or some_other_instance.var, you are accessing the same variable, not an instance-derived one (as long as you didn't specifically assign it to the instance as an attribute).
stackoverflow
{ "language": "en", "length": 164, "provenance": "stackexchange_0000F.jsonl.gz:871830", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563579" }
095314e32eb05037e8bf5aa45b4ca10d35afca44
Stackoverflow Stackexchange Q: How to atomically reset a shared_ptr? We have atomic access to shared_ptrs but I cannot see how to atomically reset them: what am I missing? A: You can just use atomic_exchange with a default constructed shared_ptr: atomic_exchange(&ptr, {});
Q: How to atomically reset a shared_ptr? We have atomic access to shared_ptrs but I cannot see how to atomically reset them: what am I missing? A: You can just use atomic_exchange with a default constructed shared_ptr: atomic_exchange(&ptr, {});
stackoverflow
{ "language": "en", "length": 39, "provenance": "stackexchange_0000F.jsonl.gz:871854", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563645" }
1bf632e1fa745736dbe4ddcf47a07a55e56346e4
Stackoverflow Stackexchange Q: How to effectively use tf.bucket_by_sequence_length in Tensorflow? So I'm trying to use tf.bucket_by_sequence_length() from TensorFlow, but cannot quite figure out how to make it work. Basically, it should take sequences (of different lengths) as input and have buckets of sequences as output, but it does not seem to work this way. From this discussion: https://github.com/tensorflow/tensorflow/issues/5609 I have the impression that it needs a queue in order to feed this function, sequence by sequence. It's not clear though. The function's documentation can be found here: https://www.tensorflow.org/versions/r0.12/api_docs/python/contrib.training/bucketing#bucket_by_sequence_length A: Indeed, the input tensor needs to come from a queue, which can be e.g. a tf.FIFOQueue().deque(), or a tf.TensorArray().read(tf.train.range_input_producer()). This notebook explains it quite well: https://github.com/wcarvalho/jupyter_notebooks/blob/ebe762436e2eea1dff34bbd034898b64e4465fe4/tf.bucket_by_sequence_length/bucketing%20practice.ipynb
Q: How to effectively use tf.bucket_by_sequence_length in Tensorflow? So I'm trying to use tf.bucket_by_sequence_length() from TensorFlow, but cannot quite figure out how to make it work. Basically, it should take sequences (of different lengths) as input and have buckets of sequences as output, but it does not seem to work this way. From this discussion: https://github.com/tensorflow/tensorflow/issues/5609 I have the impression that it needs a queue in order to feed this function, sequence by sequence. It's not clear though. The function's documentation can be found here: https://www.tensorflow.org/versions/r0.12/api_docs/python/contrib.training/bucketing#bucket_by_sequence_length A: Indeed, the input tensor needs to come from a queue, which can be e.g. a tf.FIFOQueue().deque(), or a tf.TensorArray().read(tf.train.range_input_producer()). This notebook explains it quite well: https://github.com/wcarvalho/jupyter_notebooks/blob/ebe762436e2eea1dff34bbd034898b64e4465fe4/tf.bucket_by_sequence_length/bucketing%20practice.ipynb A: The following answer is based on TensorFlow 2.0. I can see that you might be using an older version of TensorFlow. But if you happen to use the new version, you can effectively use the bucket_by_sequence_length API in the following manner. # This will be used by bucket_by_sequence_length to batch elements according to their length. def _element_length_fn(x, y=None): return tf.shape(x)[0] # These are the upper length boundaries for the buckets. # Based on these boundaries, the sentences will be shifted to different buckets. boundaries = [upper_boundary_for_batch] # Here you will have to define the upper boundaries for the different buckets. You can have as many boundaries as you want. But make sure that the upper boundary covers the maximum sentence length in your dataset. # These define the batch sizes for the different buckets. # I am keeping the batch_size for each bucket the same, but this can be changed based on more analysis. # As per the documentation - batch size per bucket. Length should be len(bucket_boundaries) + 1. # https://www.tensorflow.org/api_docs/python/tf/data/experimental/bucket_by_sequence_length batch_sizes = [batch_size] * (len(boundaries) + 1) # bucket_by_sequence_length returns a dataset transformation function that has to be applied using dataset.apply. # Here the important parameter is pad_to_bucket_boundary. If this is set to True, the sentences will be padded to # the bucket boundaries provided. If set to False, it will pad the sentences to the maximum length found in the batch. # The default value for padding is 0, so we do not need to supply anything extra here. dataset = dataset.apply(tf.data.experimental.bucket_by_sequence_length(_element_length_fn, boundaries, batch_sizes, drop_remainder=True, pad_to_bucket_boundary=True))
stackoverflow
{ "language": "en", "length": 373, "provenance": "stackexchange_0000F.jsonl.gz:871855", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563648" }
e3ac0c2dc1bc912c828dcb1deeb0beadc9d262b4
Stackoverflow Stackexchange Q: How to create pandas DataFrame with index from the list of tuples What would be the best way to create a pandas DataFrame with an index from records? Here is my sample: sales = [('Jones LLC', 150, 200, 50), ('Alpha Co', 200, 210, 90), ('Blue Inc', 140, 215, 95)] labels = ['account', 'Jan', 'Feb', 'Mar'] df = pd.DataFrame.from_records(sales, columns=labels) I need 'account' to be the index here (not a column). Thanks A: Simplest is set_index: df = pd.DataFrame.from_records(sales, columns=labels).set_index('account') print (df) Jan Feb Mar account Jones LLC 150 200 50 Alpha Co 200 210 90 Blue Inc 140 215 95 Or select by list comprehensions: labels = [ 'Jan', 'Feb', 'Mar'] idx = [x[0] for x in sales] data = [x[1:] for x in sales] df = pd.DataFrame.from_records(data, columns=labels, index=idx) print (df) Jan Feb Mar Jones LLC 150 200 50 Alpha Co 200 210 90 Blue Inc 140 215 95
Q: How to create pandas DataFrame with index from the list of tuples What would be the best way to create a pandas DataFrame with an index from records? Here is my sample: sales = [('Jones LLC', 150, 200, 50), ('Alpha Co', 200, 210, 90), ('Blue Inc', 140, 215, 95)] labels = ['account', 'Jan', 'Feb', 'Mar'] df = pd.DataFrame.from_records(sales, columns=labels) I need 'account' to be the index here (not a column). Thanks A: Simplest is set_index: df = pd.DataFrame.from_records(sales, columns=labels).set_index('account') print (df) Jan Feb Mar account Jones LLC 150 200 50 Alpha Co 200 210 90 Blue Inc 140 215 95 Or select by list comprehensions: labels = [ 'Jan', 'Feb', 'Mar'] idx = [x[0] for x in sales] data = [x[1:] for x in sales] df = pd.DataFrame.from_records(data, columns=labels, index=idx) print (df) Jan Feb Mar Jones LLC 150 200 50 Alpha Co 200 210 90 Blue Inc 140 215 95
stackoverflow
{ "language": "en", "length": 149, "provenance": "stackexchange_0000F.jsonl.gz:871876", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563707" }
cd0dd174b7b9aac8e8ca11a99fd056ec67c5995b
Stackoverflow Stackexchange Q: How to specify common string for version in build.gradle? I have a build.gradle that has the following content: compile 'com.android.support:appcompat-v7:25.3.1' compile 'com.android.support:design:25.3.1' compile 'com.android.support:cardview-v7:25.3.1' compile 'com.android.support:recyclerview-v7:25.3.1' How can I specify the version number (here 25.3.1) in a common place and reuse it everywhere, so that whenever I need to change it, I only have to change it in one place? A: You can use Gradle features to achieve this. ext { supportVersion = "25.3.1" } dependencies { compile "com.android.support:appcompat-v7:$supportVersion" compile "com.android.support:design:$supportVersion" compile "com.android.support:cardview-v7:$supportVersion" compile "com.android.support:recyclerview-v7:$supportVersion" } See also: * *https://docs.gradle.org/3.3/userguide/writing_build_scripts.html
Q: How to specify common string for version in build.gradle? I have a build.gradle that has the following content: compile 'com.android.support:appcompat-v7:25.3.1' compile 'com.android.support:design:25.3.1' compile 'com.android.support:cardview-v7:25.3.1' compile 'com.android.support:recyclerview-v7:25.3.1' How can I specify the version number (here 25.3.1) in a common place and reuse it everywhere, so that whenever I need to change it, I only have to change it in one place? A: You can use Gradle features to achieve this. ext { supportVersion = "25.3.1" } dependencies { compile "com.android.support:appcompat-v7:$supportVersion" compile "com.android.support:design:$supportVersion" compile "com.android.support:cardview-v7:$supportVersion" compile "com.android.support:recyclerview-v7:$supportVersion" } See also: * *https://docs.gradle.org/3.3/userguide/writing_build_scripts.html A: In the project-level Gradle file, you specify the versions. If we specify them at the project level, we can use them in all modules. For example, if a project has three modules, versions specified at the project level can easily be used in each module-level Gradle file. ext { appcompatVersion = '25.3.1' supportDesignVersion = '25.3.1' cardviewVersion = '25.3.1' recyclerviewVersion = '25.3.1' } In the module-level Gradle file, you use it like below (note the double quotes, which Groovy requires for $-variable interpolation): dependencies { compile "com.android.support:appcompat-v7:$appcompatVersion" compile "com.android.support:design:$supportDesignVersion" compile "com.android.support:cardview-v7:$cardviewVersion" compile "com.android.support:recyclerview-v7:$recyclerviewVersion" } A: In the project gradle file add the variable: buildscript { ext.supportLibraryVersion = "25.3.1" dependencies { classpath 'com.android.tools.build:gradle:2.3.3' ........ And in the app gradle file refer to the variable created: dependencies { compile "com.android.support:appcompat-v7:$supportLibraryVersion" compile "com.android.support:design:$supportLibraryVersion" For more reference, follow this link
stackoverflow
{ "language": "en", "length": 221, "provenance": "stackexchange_0000F.jsonl.gz:871879", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563719" }
0d379e430b67424ba82b1799a20ad22b24069770
Stackoverflow Stackexchange Q: What does sizeof(int(123)) mean? I'm surprised that the following code compiles: #include <stdio.h> int main(){ printf("%lu",sizeof(int(123))); return 0; } The output is 4. What is the meaning of (123) here? I also found that this line of code compiles with g++, but not gcc; what is the reason? A: This is C++; the int(123) is a function-style cast to int. It's of course pointless, since 123 is an int-typed literal anyway. Function-style casts are not part of C, which is why it won't build with a C compiler. To answer more of the question, what happens is that the operator sizeof is compile-time evaluated to the size (in chars) of its argument. The argument is of type int, so you output the size of int on your platform, which is 4. You could also have used just a plain sizeof 123, which would build in C, or sizeof (int) to be explicit about the type instead of deriving it from a value. Note that the parentheses are part of the argument (the type name is written as a C-style cast); sizeof is not a function.
Q: What does sizeof(int(123)) mean? I'm surprised that the following code compiles: #include <stdio.h> int main(){ printf("%lu",sizeof(int(123))); return 0; } The output is 4. What is the meaning of (123) here? I also found that this line of code compiles with g++, but not gcc; what is the reason? A: This is C++; the int(123) is a function-style cast to int. It's of course pointless, since 123 is an int-typed literal anyway. Function-style casts are not part of C, which is why it won't build with a C compiler. To answer more of the question, what happens is that the operator sizeof is compile-time evaluated to the size (in chars) of its argument. The argument is of type int, so you output the size of int on your platform, which is 4. You could also have used just a plain sizeof 123, which would build in C, or sizeof (int) to be explicit about the type instead of deriving it from a value. Note that the parentheses are part of the argument (the type name is written as a C-style cast); sizeof is not a function. A: sizeof is a keyword, but it is a compile-time operator that determines the size, in bytes, of a variable or data type. The sizeof operator can be used to get the size of classes, structures, unions and any other user-defined data type. The syntax of using sizeof is as follows: sizeof (data type) Where data type is the desired data type, including classes, structures, unions and any other user-defined data type. Try the following example to see the sizeof operator applied to the built-in types in C++. Copy and paste the following C++ program into a test.cpp file, then compile and run it. #include <iostream> using namespace std; int main() { cout << "Size of char : " << sizeof(char) << endl; cout << "Size of int : " << sizeof(int) << endl; cout << "Size of short int : " << sizeof(short int) << endl; cout << "Size of long int : " << sizeof(long int) << endl; cout << "Size of float : " << sizeof(float) << endl; cout << "Size of double : " << sizeof(double) << endl; cout << "Size of wchar_t : " << sizeof(wchar_t) << endl; return 0; } When the above code is compiled and executed, it produces the following result, which can vary from machine to machine: Size of char : 1 Size of int : 4 Size of short int : 2 Size of long int : 4 Size of float : 4 Size of double : 8 Size of wchar_t : 4 A: int(123) is an expression using explicit type conversion. From the C++ Standard (5.2.3 Explicit type conversion (functional notation)): 1 A simple-type-specifier (7.1.6.2) or typename-specifier (14.6) followed by a parenthesized expression-list constructs a value of the specified type given the expression list. If the expression list is a single expression, the type conversion expression is equivalent (in definedness, and if defined in meaning) to the corresponding cast expression (5.4)... As for the sizeof operator (C++ Standard, 5.3.3 Sizeof): 1 The sizeof operator yields the number of bytes in the object representation of its operand. The operand is either an expression, which is an unevaluated operand (Clause 5), or a parenthesized type-id... Thus, in the expression sizeof(int(123)), an integer literal of type int is explicitly converted to the type int (which does not make much sense) and the sizeof operator is applied to the expression, yielding a result of type size_t. In fact this expression is equivalent to sizeof(int), or in this particular case to sizeof(123), because the integer literal 123 has the type int. The functional-notation form of explicit conversion is valid only in C++; in C such a notation is absent.
stackoverflow
{ "language": "en", "length": 644, "provenance": "stackexchange_0000F.jsonl.gz:871889", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563748" }
3ba507a5ee939871e5cdef32e06d497bf5b8e5f8
Stackoverflow Stackexchange Q: Why is JPA throwing SQLCODE=-302, SQLSTATE=22001 for select query I'm trying to frame a select query using a JPA repository: select * from Foo where name = 'testapp'. The "NAME" column is defined as varchar(4). My program throws SQLCODE=-302, SQLSTATE=22001 for the select operation, whereas the same query works in the SQL Developer IDE.
Q: Why is JPA throwing SQLCODE=-302, SQLSTATE=22001 for select query I'm trying to frame a select query using a JPA repository: select * from Foo where name = 'testapp'. The "NAME" column is defined as varchar(4). My program throws SQLCODE=-302, SQLSTATE=22001 for the select operation, whereas the same query works in the SQL Developer IDE.
stackoverflow
{ "language": "en", "length": 56, "provenance": "stackexchange_0000F.jsonl.gz:871890", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563752" }
c6a5b2f0db9f1d27e193a2a6fe023274e19bb72c
Stackoverflow Stackexchange Q: How to correctly include other ReST-files in a sphinx-project? My hand-written documentation/user guide (written in ReStructuredText with Sphinx) has become quite big, so I started organizing my .rst files in sub-directories. In the index.rst I'm including a subindex.rst of each sub-directory, which itself includes other .rst files for further sub-directories. index.rst: .. include:: subdir1/subindex.rst .. include:: subdir2/subindex.rst subdir1/subindex.rst: .. include:: file1.rst .. include:: file2.rst In principle this works well, except that Sphinx recursively looks for .rst files to parse without changing the current working directory. So it fails when it sees include:: file1.rst inside subdir1. I'm working around this issue by setting exclude_patterns to ignore my subdirs. This does not seem right. What would be the right way to include a .rst file from a subdir? A: The toctree directive should do what you want. .. toctree:: :glob: subdir1/* subdir2/* The glob * will alphabetically sort files within subdirs. To avoid sorting, you could specify the order without globbing. .. toctree:: :maxdepth: 2 subdir1/file2 subdir1/file1 subdir2/file1 subdir2/file2 If you don't want individual pages but one huge page, you can invoke make singlehtml.
Q: How to correctly include other ReST-files in a sphinx-project? My hand-written documentation/user guide (written in ReStructuredText with Sphinx) has become quite big, so I started organizing my .rst files in sub-directories. In the index.rst I'm including a subindex.rst of each sub-directory, which itself includes other .rst files for further sub-directories. index.rst: .. include:: subdir1/subindex.rst .. include:: subdir2/subindex.rst subdir1/subindex.rst: .. include:: file1.rst .. include:: file2.rst In principle this works well, except that Sphinx recursively looks for .rst files to parse without changing the current working directory. So it fails when it sees include:: file1.rst inside subdir1. I'm working around this issue by setting exclude_patterns to ignore my subdirs. This does not seem right. What would be the right way to include a .rst file from a subdir? A: The toctree directive should do what you want. .. toctree:: :glob: subdir1/* subdir2/* The glob * will alphabetically sort files within subdirs. To avoid sorting, you could specify the order without globbing. .. toctree:: :maxdepth: 2 subdir1/file2 subdir1/file1 subdir2/file1 subdir2/file2 If you don't want individual pages but one huge page, you can invoke make singlehtml.
stackoverflow
{ "language": "en", "length": 179, "provenance": "stackexchange_0000F.jsonl.gz:871900", "question_score": "21", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563794" }
962bb06d30f43f430424b2ddbd576586ad3a8e5a
Stackoverflow Stackexchange Q: Check if FormGroup has been submitted In a template-driven form, we can check that with the submitted property of NgForm. But how can we achieve that in a model-driven form? A: Create a variable isSubmitted = false in the component. Once the submit button is clicked, set it to true in the onSubmit() function. For example: onSubmit(): void{ this.isSubmitted = true; // code.... } Set it back to false after the response.
Q: Check if FormGroup has been submitted In a template-driven form, we can check that with the submitted property of NgForm. But how can we achieve that in a model-driven form? A: Create a variable isSubmitted = false in the component. Once the submit button is clicked, set it to true in the onSubmit() function. For example: onSubmit(): void{ this.isSubmitted = true; // code.... } Set it back to false after the response. A: I just found out you can use ngForm together with formGroup: <form [formGroup]='form' #ngForm="ngForm" (ngSubmit)='validation(ngForm)' [ngClass]="{ 'form-unsubmitted': !ngForm.submitted}">
stackoverflow
{ "language": "en", "length": 86, "provenance": "stackexchange_0000F.jsonl.gz:871915", "question_score": "28", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563856" }
25c2dd07cba2fe50e42d9ae188131c1516d793aa
Stackoverflow Stackexchange Q: How to build docker images on AWS EC2 Windows Server instance? We use TeamCity to build C# applications on a Windows server in AWS EC2. Now there is a requirement to build Docker containers using the same system. The build steps have been tested locally and are able to produce a Docker image. Docker is not installing correctly on the server, which leads to the builds failing. Docker Edge supports Windows Server but fails on EC2 due to Hyper-V not functioning correctly. Docker Toolbox also fails because VT-x/AMD-V are not enabled. Is there any way to build Docker images on an AWS EC2 Windows Server instance?
Q: How to build docker images on AWS EC2 Windows Server instance? We use TeamCity to build C# applications on a Windows server in AWS EC2. Now there is a requirement to build Docker containers using the same system. The build steps have been tested locally and are able to produce a Docker image. Docker is not installing correctly on the server, which leads to the builds failing. Docker Edge supports Windows Server but fails on EC2 due to Hyper-V not functioning correctly. Docker Toolbox also fails because VT-x/AMD-V are not enabled. Is there any way to build Docker images on an AWS EC2 Windows Server instance?
stackoverflow
{ "language": "en", "length": 108, "provenance": "stackexchange_0000F.jsonl.gz:871916", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563859" }
94ac2b2bcd7339188b4364d2a887feb985e4a0c2
Stackoverflow Stackexchange Q: Is it necessary to add Spring Web when using Spring Actuator? I'm trying to incorporate Spring Actuator into my application. I have added the dependency in my pom.xml: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> <version>1.4.2.RELEASE</version> </dependency> But I get a 404 when trying to access the /health endpoint. After looking online, I've read that I also need the spring-boot-starter-web dependency in my POM. I was under the assumption that I only needed the actuator dependency to get it working. A: Yes, web is needed if you want to access the endpoints via HTTP (otherwise only JMX is available). The documentation for actuator states "Click Dependencies and select Spring Web and Spring Boot Actuator."
Q: Is it necessary to add Spring Web when using Spring Actuator? I'm trying to incorporate Spring Actuator into my application. I have added the dependency in my pom.xml: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> <version>1.4.2.RELEASE</version> </dependency> But I get a 404 when trying to access the /health endpoint. After looking online, I've read that I also need the spring-boot-starter-web dependency in my POM. I was under the assumption that I only needed the actuator dependency to get it working. A: Yes, web is needed if you want to access the endpoints via HTTP (otherwise only JMX is available). The documentation for actuator states "Click Dependencies and select Spring Web and Spring Boot Actuator."
stackoverflow
{ "language": "en", "length": 114, "provenance": "stackexchange_0000F.jsonl.gz:871917", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563862" }
abfad8fc7571049e9bbd409ad0798dcf8438b71f
Stackoverflow Stackexchange Q: Android WebView lifecycle I'm currently somewhat confused as to how (or if?) I'm supposed to manage the lifecycle of the WebViews in my Android application. The app seems to have a much bigger impact on the device's battery than I think it should, and I suspect the cause might be mismanagement of the WebViews on my part. The answers I found only ever seem to address part of the problem, and I couldn't find a tutorial or a more general answer on this so far. When I started developing my application, I thought that WebViews were supposed to follow the lifecycle of their respective Activity; then I stumbled across the methods onPause, onResume, pauseTimers, resumeTimers, saveState and restoreState. But I don't really understand what implications each one of these has for the lifecycle of its WebView, and what it means for battery/memory/CPU management to use or not use any of these. This answer mentioned it'd be "cheaper to destroy the webviews and recreate them again", but didn't go into further detail, and the posted link is dead. Could anyone please give a brief explanation and introduction on what is best practice for managing a WebView's lifecycle?
Q: Android WebView lifecycle I'm currently somewhat confused as to how (or if?) I'm supposed to manage the lifecycle of the WebViews in my Android application. The app seems to have a much bigger impact on the device's battery than I think it should, and I suspect the cause might be mismanagement of the WebViews on my part. The answers I found only ever seem to address part of the problem, and I couldn't find a tutorial or a more general answer on this so far. When I started developing my application, I thought that WebViews were supposed to follow the lifecycle of their respective Activity; then I stumbled across the methods onPause, onResume, pauseTimers, resumeTimers, saveState and restoreState. But I don't really understand what implications each one of these has for the lifecycle of its WebView, and what it means for battery/memory/CPU management to use or not use any of these. This answer mentioned it'd be "cheaper to destroy the webviews and recreate them again", but didn't go into further detail, and the posted link is dead. Could anyone please give a brief explanation and introduction on what is best practice for managing a WebView's lifecycle?
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:871945", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44563946" }
f2323ae0842213df8cc6c6234b4e5d2aa52e390b
Stackoverflow Stackexchange Q: how to use Highcharts on Ionic? I don't understand how to use Highcharts with Ionic. * *Ionic version 3.4.0 *Highcharts version 5.0.12 Following the Highcharts installation guide, I include this in my file.ts: import highchart from 'highcharts/highcharts.js'; var Highcharts = require('highcharts'); require('highcharts/modules/exporting')(Highcharts); Then the Ionic server gives me the error Cannot find name 'require' A: You need to add this at the top: declare var require: any; In general, I suggest you install the highcharts module instead of using npm install angular2-highcharts: $ npm install highcharts --save Then you can declare Highcharts like this: declare var require: any; let hcharts = require('highcharts'); require('highcharts/modules/exporting')(hcharts); Here's a full example: import { Component, ElementRef, ViewChild } from '@angular/core'; import { NavController } from 'ionic-angular'; declare var require: any; let hcharts = require('highcharts'); require('highcharts/modules/exporting')(hcharts); @Component({ selector: 'page-about', template: `<div #myChart></div>`, }) export class AboutPage { @ViewChild('myChart') canvas: ElementRef; constructor(public navCtrl: NavController) {} ionViewDidLoad() { let chart = hcharts.chart(this.canvas.nativeElement, { chart: { zoomType: 'x', events: { load: function() { let self = this; setTimeout(function(){ self.reflow(); }, 100); } } }, series: [{ data: [1, 3, 2, 4] }], }); } }
Q: how to use Highcharts on Ionic? I don't understand how to use Highcharts with Ionic. * *Ionic version 3.4.0 *Highcharts version 5.0.12 Following the Highcharts installation guide, I include this in my file.ts: import highchart from 'highcharts/highcharts.js'; var Highcharts = require('highcharts'); require('highcharts/modules/exporting')(Highcharts); Then the Ionic server gives me the error Cannot find name 'require' A: You need to add this at the top: declare var require: any; In general, I suggest you install the highcharts module instead of using npm install angular2-highcharts: $ npm install highcharts --save Then you can declare Highcharts like this: declare var require: any; let hcharts = require('highcharts'); require('highcharts/modules/exporting')(hcharts); Here's a full example: import { Component, ElementRef, ViewChild } from '@angular/core'; import { NavController } from 'ionic-angular'; declare var require: any; let hcharts = require('highcharts'); require('highcharts/modules/exporting')(hcharts); @Component({ selector: 'page-about', template: `<div #myChart></div>`, }) export class AboutPage { @ViewChild('myChart') canvas: ElementRef; constructor(public navCtrl: NavController) {} ionViewDidLoad() { let chart = hcharts.chart(this.canvas.nativeElement, { chart: { zoomType: 'x', events: { load: function() { let self = this; setTimeout(function(){ self.reflow(); }, 100); } } }, series: [{ data: [1, 3, 2, 4] }], }); } }
stackoverflow
{ "language": "en", "length": 183, "provenance": "stackexchange_0000F.jsonl.gz:871960", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564009" }
6b7e5643dc5f49da5bef8ac957bff30d749d9929
Stackoverflow Stackexchange Q: new field false in Package.json After upgrading to npm 5.* I have noticed a new field in the package.json which is really obscure and unintelligible. What does false: {} mean? { "name": "test", "devDependencies": {}, "dependencies": {}, // What does this mean? What's the goal? "false": {} } A: This was bug #17141 in npm. It was fixed in commit c3b586a on June 30th and that was released in version 5.1.0 on July 5th. The fix for anyone experiencing this is to simply update npm. You can update by running: npm install -g npm
Q: new field false in Package.json After upgrading to npm 5.* I have noticed a new field in the package.json which is really obscure and unintelligible. What does false: {} mean? { "name": "test", "devDependencies": {}, "dependencies": {}, // What does this mean? What's the goal? "false": {} } A: This was bug #17141 in npm. It was fixed in commit c3b586a on June 30th and that was released in version 5.1.0 on July 5th. The fix for anyone experiencing this is to simply update npm. You can update by running: npm install -g npm
stackoverflow
{ "language": "en", "length": 93, "provenance": "stackexchange_0000F.jsonl.gz:871966", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564024" }
71de4d05a2a942abc1941fe5e88ae5aebb9eaf23
Stackoverflow Stackexchange Q: How to make x-axis more detailed when working with date_range I'm trying to plot a diagram of monthly data I received. When plotting the data, the plot only shows the year on the x axis, but not the month. How can I make it also show the month on the x tick labels? import pandas as pd import numpy as np new_index = pd.date_range(start = "2012-07-01", end = "2017-07-01", freq = "MS") columns = ['0'] df = pd.DataFrame(index=new_index, columns=columns) for index, row in df.iterrows(): row[0] = np.random.randint(0,100) %matplotlib inline df.loc['2015-09-01'] = np.nan df.plot(kind="line",title="Data per month", figsize = (40,10), grid=True, fontsize=20) A: You may use FixedFormatter from matplotlib.ticker to define your own formatter for custom ticks, like this: ... ticklabels = [item.strftime('%b %Y') for item in df.index[::6]] # set ticks format: month name and year for every 6 elements of index plt.gca().xaxis.set_ticks(df.index[::6]) # set new ticks for current x axis plt.gca().xaxis.set_major_formatter(ticker.FixedFormatter(ticklabels)) # apply new tick format ... Or show the dates on two lines by using the %b\n%Y format.
Q: How to make x-axis more detailed when working with date_range I'm trying to plot a diagram of monthly data I received. When plotting the data, the plot only shows the year on the x axis, but not the month. How can I make it also show the month on the x tick labels? import pandas as pd import numpy as np new_index = pd.date_range(start = "2012-07-01", end = "2017-07-01", freq = "MS") columns = ['0'] df = pd.DataFrame(index=new_index, columns=columns) for index, row in df.iterrows(): row[0] = np.random.randint(0,100) %matplotlib inline df.loc['2015-09-01'] = np.nan df.plot(kind="line",title="Data per month", figsize = (40,10), grid=True, fontsize=20) A: You may use FixedFormatter from matplotlib.ticker to define your own formatter for custom ticks, like here (this assumes import matplotlib.pyplot as plt and import matplotlib.ticker as ticker): ... ticklabels = [item.strftime('%b %Y') for item in df.index[::6]] # set ticks format: month name and year for every 6th element of the index plt.gca().xaxis.set_ticks(df.index[::6]) # set new ticks for the current x axis plt.gca().xaxis.set_major_formatter(ticker.FixedFormatter(ticklabels)) # apply the new tick format ... Or put the dates on two lines if you use the %b\n%Y format:
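A minimal runnable version of this approach, assembled from the snippet above (a sketch: the 6-month tick step and figure size are arbitrary choices, and depending on your pandas version you may need x_compat=True in df.plot so that matplotlib's own date handling is used):

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

# Monthly random data, as in the question
new_index = pd.date_range(start="2012-07-01", end="2017-07-01", freq="MS")
df = pd.DataFrame({'0': np.random.randint(0, 100, size=len(new_index))}, index=new_index)

df.plot(kind="line", title="Data per month", figsize=(40, 10), grid=True, fontsize=20)

# Show "Jul 2012", "Jan 2013", ... on every 6th month instead of just the year
ticklabels = [item.strftime('%b %Y') for item in df.index[::6]]
plt.gca().xaxis.set_ticks(df.index[::6])
plt.gca().xaxis.set_major_formatter(ticker.FixedFormatter(ticklabels))
plt.show()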
stackoverflow
{ "language": "en", "length": 165, "provenance": "stackexchange_0000F.jsonl.gz:871977", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564057" }
32bb874fc298792d9c8b26cf65f0b1c66aa06013
Stackoverflow Stackexchange Q: React router v4 get user confirmation when leaving page In older versions I could use setRouteLeaveHook within my component. For example (SO): Detecting user leaving page With react router v4 the logic has changed away from injecting the router itself into the components, and I only found the following function on router v4: BrowserRouter.getUserConfirmation I am a little bit confused as to why I should link the confirm behavior with the Router itself and not with a specific component. How can I show a confirm window when leaving my component (linked to my current route) while it is in a certain state? This does not seem to be supported by the function above. A: react-router-navigation-prompt also does what you want: it is a more powerful <Prompt />.
Q: React router v4 get user confirmation when leaving page In older versions I could use setRouteLeaveHook within my component. For example (SO): Detecting user leaving page With react router v4 the logic has changed away from injecting the router itself into the components, and I only found the following function on router v4: BrowserRouter.getUserConfirmation I am a little bit confused as to why I should link the confirm behavior with the Router itself and not with a specific component. How can I show a confirm window when leaving my component (linked to my current route) while it is in a certain state? This does not seem to be supported by the function above. A: react-router-navigation-prompt also does what you want: it is a more powerful <Prompt />. A: I think the Prompt component is what you're looking for. Just render it in the component you want to confirm navigation from, i.e. the same component you render in your <Route>.
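For illustration, a minimal sketch of the <Prompt /> approach from react-router-dom v4 (the isDirty flag is a hypothetical name for whatever condition your component tracks):

import React from 'react';
import { Prompt } from 'react-router-dom';

class EditForm extends React.Component {
  state = { isDirty: false };

  render() {
    return (
      <div>
        {/* Blocks in-app navigation with a confirm dialog while isDirty is true */}
        <Prompt
          when={this.state.isDirty}
          message="You have unsaved changes. Are you sure you want to leave?"
        />
        <input onChange={() => this.setState({ isDirty: true })} />
      </div>
    );
  }
}

export default EditForm;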
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:871990", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564109" }
2d507af1e4889950c9edaee531337d85742d4f4b
Stackoverflow Stackexchange Q: Load spring-boot properties from json file Is it possible to load spring-boot config from a .json file as opposed to .yaml or .properties? From looking at the documentation, this isn't supported out of the box - I'm wondering if it's possible and if so how one would go about doing it? A: The spring boot way: @EnableAutoConfiguration @Configuration @PropertySource(value = { "classpath:/properties/config.default.json" }, factory=SpringBootTest.JsonLoader.class ) public class SpringBootTest extends SpringBootServletInitializer { @Bean public Object test(Environment e) { System.out.println(e.getProperty("test")); return new Object(); } public static void main(String[] args) { SpringApplication.run(SpringBootTest.class); } public static class JsonLoader implements PropertySourceFactory { @Override public org.springframework.core.env.PropertySource<?> createPropertySource(String name, EncodedResource resource) throws IOException { Map readValue = new ObjectMapper().readValue(resource.getInputStream(), Map.class); return new MapPropertySource("json-source", readValue); } } } Define your own PropertySourceFactory and hook it in via the @PropertySource annotation. Read the resource, set the properties, use them anywhere. The only thing left is how to translate nested properties. The Spring way to do that (by the way, you can also define JSON as a variable for properties; see: https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html) is to translate nested properties as such: {"test": { "test2" : "x" } } Becomes: test.test2.x Hope that helps, Artur
Q: Load spring-boot properties from json file Is it possible to load spring-boot config from a .json file as opposed to .yaml or .properties? From looking at the documentation, this isn't supported out of the box - I'm wondering if it's possible and if so how one would go about doing it? A: The spring boot way: @EnableAutoConfiguration @Configuration @PropertySource(value = { "classpath:/properties/config.default.json" }, factory=SpringBootTest.JsonLoader.class ) public class SpringBootTest extends SpringBootServletInitializer { @Bean public Object test(Environment e) { System.out.println(e.getProperty("test")); return new Object(); } public static void main(String[] args) { SpringApplication.run(SpringBootTest.class); } public static class JsonLoader implements PropertySourceFactory { @Override public org.springframework.core.env.PropertySource<?> createPropertySource(String name, EncodedResource resource) throws IOException { Map readValue = new ObjectMapper().readValue(resource.getInputStream(), Map.class); return new MapPropertySource("json-source", readValue); } } } Define your own PropertySourceFactory and hook it in via the @PropertySource annotation. Read the resource, set the properties, use them anywhere. The only thing left is how to translate nested properties. The Spring way to do that (by the way, you can also define JSON as a variable for properties; see: https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html) is to translate nested properties as such: {"test": { "test2" : "x" } } Becomes: test.test2.x Hope that helps, Artur A: The SPRING_APPLICATION_JSON properties can be supplied on the command line with an environment variable. For example, you could use the following line in a UN*X shell: $ SPRING_APPLICATION_JSON='{"acme":{"name":"test"}}' java -jar myapp.jar In the preceding example, you end up with acme.name=test in the Spring Environment. You can also supply the JSON as spring.application.json in a System property, as shown in the following example: $ java -Dspring.application.json='{"name":"test"}' -jar myapp.jar You can also supply the JSON by using a command line argument, as shown in the following example: $ java -jar myapp.jar --spring.application.json='{"name":"test"}' You can also supply the JSON as a JNDI variable, as follows: java:comp/env/spring.application.json. Reference documentation: https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html A: As noted in the docs and on GitHub, YAML is a superset of JSON, so you can just create the following class in your Spring Boot project: public class JsonPropertySourceLoader extends YamlPropertySourceLoader { @Override public String[] getFileExtensions() { return new String[]{"json"}; } } Then create a file: /src/main/resources/META-INF/spring.factories with the following content: org.springframework.boot.env.PropertySourceLoader=\ io.myapp.JsonPropertySourceLoader And your Spring application is ready to load JSON configurations from application.json. The priority will be: .properties -> .yaml -> .json If you have multiple apps, you can create a jar with the shared PropertySourceLoader and spring.factories file in order to include it in any project you need.
A: 2 steps: public String asYaml(String jsonString) throws JsonProcessingException, IOException { // parse JSON JsonNode jsonNodeTree = new ObjectMapper().readTree(jsonString); // save it as YAML String jsonAsYaml = new YAMLMapper().writeValueAsString(jsonNodeTree); return jsonAsYaml; } (taken from this post) and public class YamlFileApplicationContextInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> { @Override public void initialize(ConfigurableApplicationContext applicationContext) { try { Resource resource = applicationContext.getResource("classpath:file.yml"); YamlPropertySourceLoader sourceLoader = new YamlPropertySourceLoader(); PropertySource<?> yamlTestProperties = sourceLoader.load("yamlTestProperties", resource, null); applicationContext.getEnvironment().getPropertySources().addFirst(yamlTestProperties); } catch (IOException e) { throw new RuntimeException(e); } } } (taken from this post). So you can combine both: load your JSON as a resource, convert it to YAML, and then add all the found properties to the Environment, as sketched below.
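A sketch of the combined initializer (the class and file names are hypothetical; the three-argument load signature matches the Spring Boot 1.x API used above and changed in 2.x):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLMapper;
import org.springframework.boot.env.YamlPropertySourceLoader;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.env.PropertySource;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.core.io.Resource;
import org.springframework.util.StreamUtils;

public class JsonFileApplicationContextInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        try {
            // step 1: load the JSON resource and convert it to YAML
            Resource json = applicationContext.getResource("classpath:config.json");
            String jsonString = StreamUtils.copyToString(json.getInputStream(), StandardCharsets.UTF_8);
            JsonNode tree = new ObjectMapper().readTree(jsonString);
            String yaml = new YAMLMapper().writeValueAsString(tree);

            // step 2: feed the YAML to Spring's loader and register the properties
            YamlPropertySourceLoader loader = new YamlPropertySourceLoader();
            PropertySource<?> properties = loader.load("jsonProperties",
                    new ByteArrayResource(yaml.getBytes(StandardCharsets.UTF_8)), null);
            applicationContext.getEnvironment().getPropertySources().addFirst(properties);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}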
stackoverflow
{ "language": "en", "length": 508, "provenance": "stackexchange_0000F.jsonl.gz:872006", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564166" }
a92f82fb168ceadd5b5e23717ddbdfcb2e590ad7
Stackoverflow Stackexchange Q: How can I use `return` to get back multiple values from a loop? Can I put them in a list? I have some code that prints data from a global dictionary named cal: def show_todo(): for key, value in cal.items(): print(value[0], key) However, I want to use this code as part of a Discord bot. In order for the bot to work properly, I need to return the data to another function that will actually send the message to the Discord chat. Using print like above means that the message is displayed in my local console window, and the chat just sees None. I tried to fix it by using return instead: def show_todo(): for key, value in cal.items(): return(value[0], key) but this way, the for loop does not work properly. I only get at most one key-value pair from the dictionary. How can I fix this so that all of the data is returned? A: Use generator syntax (excellent explanation on SO here): def show_todo(): for key, value in cal.items(): yield value[0], key for value, key in show_todo(): print(value, key)
Q: How can I use `return` to get back multiple values from a loop? Can I put them in a list? I have some code that prints data from a global dictionary named cal: def show_todo(): for key, value in cal.items(): print(value[0], key) However, I want to use this code as part of a Discord bot. In order for the bot to work properly, I need to return the data to another function that will actually send the message to the Discord chat. Using print like above means that the message is displayed in my local console window, and the chat just sees None. I tried to fix it by using return instead: def show_todo(): for key, value in cal.items(): return(value[0], key) but this way, the for loop does not work properly. I only get at most one key-value pair from the dictionary. How can I fix this so that all of the data is returned? A: Use a generator syntax (excellent explanation on SO here): def show_todo(): for key, value in cal.items(): yield value[0], key for value, key in show_todo(): print(value, key) A: Using a return inside of a loop will break it and exit the function even if the iteration is still not finished. For example: def num(): # Here there will be only one iteration # For number == 1 => 1 % 2 = 1 # So, break the loop and return the number for number in range(1, 10): if number % 2: return number >>> num() 1 In some cases we need to break the loop if some conditions are met. However, in your current code, breaking the loop before finishing it is unintentional. Instead of that, you can use a different approach: Yielding your data def show_todo(): # Create a generator for key, value in cal.items(): yield value[0], key You can call it like: a = list(show_todo()) # or tuple(show_todo()) or you can iterate through it: for v, k in show_todo(): ... Putting your data into a list or other container Append your data to a list, then return it after the end of your loop: def show_todo(): my_list = [] for key, value in cal.items(): my_list.append((value[0], key)) return my_list Or use a list comprehension: def show_todo(): return [(value[0], key) for key, value in cal.items()]
stackoverflow
{ "language": "en", "length": 381, "provenance": "stackexchange_0000F.jsonl.gz:872101", "question_score": "30", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564414" }
60df3e05e0c5bb35690c213d53075cd918d70035
Stackoverflow Stackexchange Q: Request Timed out with Code Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={NSUnderlyingError=0x608000244a70 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, NSErrorFailingURLStringKey=http://www.dfdd, NSErrorFailingURLKey=http://www.dfdd.com, _kCFStreamErrorDomainKey=4, _kCFStreamErrorCodeKey=-2102, NSLocalizedDescription=The request timed out.} I am getting this response when calling my API. I am using Alamofire to call the API. Is the problem in Alamofire or in my local API? A: You have the following solutions. * *Connect with a fast internet connection, because your request carries heavy data. *Set a request timeout on the session manager: manager.session.configuration.timeoutIntervalForRequest = 120
Q: Request Timed out with Code Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={NSUnderlyingError=0x608000244a70 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, NSErrorFailingURLStringKey=http://www.dfdd, NSErrorFailingURLKey=http://www.dfdd.com, _kCFStreamErrorDomainKey=4, _kCFStreamErrorCodeKey=-2102, NSLocalizedDescription=The request timed out.} I am getting this response when calling my API. I am using Alamofire to call the API. Is the problem in Alamofire or in my local API? A: You have the following solutions. * *Connect with a fast internet connection, because your request carries heavy data. *Set a request timeout on the session manager: manager.session.configuration.timeoutIntervalForRequest = 120 A: The main causes of this issue are either: * *The server is under heavy load, or does not have the resources to be able to respond in a timely fashion *The user's network connection is slow and unable to download the response quickly enough. You should check your API & server logs to look for any potential issues there, and ensure there are no errors and that the API is capable of handling your requests as the app scales. Also, you should add some additional error handling in your application so that if this issue does occur, not only do you handle the case properly and show the user that an error occurred (or retry), but you also log/report the error so that you can respond to it and investigate. Sometimes users' connections will drop due to loss of mobile signal or other reasons, so you need to handle this gracefully.
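As a sketch of the second suggestion with Alamofire 4 in Swift (the URL and the 120-second value are placeholders), the timeout is set on a custom SessionManager that you keep alive and reuse:

import Alamofire

// Keep a strong reference to the manager; if it is deallocated, its requests are cancelled
let sessionManager: SessionManager = {
    let configuration = URLSessionConfiguration.default
    configuration.timeoutIntervalForRequest = 120  // seconds
    return SessionManager(configuration: configuration)
}()

sessionManager.request("https://example.com/api/endpoint").responseJSON { response in
    switch response.result {
    case .success(let value):
        print(value)
    case .failure(let error):
        print(error)  // a timeout surfaces here as NSURLErrorDomain code -1001
    }
}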
stackoverflow
{ "language": "en", "length": 227, "provenance": "stackexchange_0000F.jsonl.gz:872104", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564422" }
41d82636e2bd65887df67caa149be03b81091ee2
Stackoverflow Stackexchange Q: Styling an SVG tile layer with CSS in Leaflet I use Leaflet to display a vector tile layer, like this: var tiles = L.tileLayer('mytiles/{z}/{x}/{y}.svg', { renderer: L.svg(), continuousWorld: true, noWrap: true, minZoom: 0, maxZoom: 10, }) The elements of my tiles have CSS classes, such as <rect class="country-ES" ...></rect>, so I would like to style them in my CSS: .country-ES { fill: red !important; } However, the tiles do not seem to be affected by these CSS instructions. And I do not know how to debug this, as the tiles cannot be inspected by the web developer tools of Chrome or Firefox. Any idea how that can be achieved? A: Setting the renderer: L.svg() in the tiles has no effect (this is meant for the overlay elements in the map). I had to force Leaflet to display the tiles as embedded SVG, like this: tiles.createTile = function (coords, done) { var tile = document.createElement('div'); tile.setAttribute('role', 'presentation'); $.get(this.getTileUrl(coords), function(data) { tile.appendChild(data.firstChild); done(null, tile); }).fail(function(error) { done(error, tile); }); return tile; }; And then it worked!
Q: Styling an SVG tile layer with CSS in Leaflet I use Leaflet to display a vector tile layer, like this: var tiles = L.tileLayer('mytiles/{z}/{x}/{y}.svg', { renderer: L.svg(), continuousWorld: true, noWrap: true, minZoom: 0, maxZoom: 10, }) The elements of my tiles have CSS classes, such as <rect class="country-ES" ...></rect>, so I would like to style them in my CSS: .country-ES { fill: red !important; } However, the tiles do not seem to be affected by these CSS instructions. And I do not know how to debug this, as the tiles cannot be inspected by the web developer tools of Chrome or Firefox. Any idea how that can be achieved? A: Setting the renderer: L.svg() in the tiles has no effect (this is meant for the overlay elements in the map). I had to force Leaflet to display the tiles as embedded SVG, like this: tiles.createTile = function (coords, done) { var tile = document.createElement('div'); tile.setAttribute('role', 'presentation'); $.get(this.getTileUrl(coords), function(data) { tile.appendChild(data.firstChild); done(null, tile); }).fail(function(error) { done(error, tile); }); return tile; }; And then it worked!
stackoverflow
{ "language": "en", "length": 175, "provenance": "stackexchange_0000F.jsonl.gz:872126", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564490" }
9864153667ad6ef95ec72129174d968aef35b11e
Stackoverflow Stackexchange Q: How to save ImageCache to the disk in flutter? I load lots of images from the internet with the Image.network widget, and I know the ImageCache only keeps them while the app is running. But I want to save the cache, because I don't want to download the images again every time the app starts. So: is saving the cache to disk possible in Flutter? A: Flutter's image cache is for decoded images. It sounds like what you want is a cache of encoded image files. You could build this yourself by downloading the files to device storage and using Image.file, but you'd probably want to implement some kind of eviction logic to make sure you don't consume too much space on the device. You could use Image.asset for static images that you want to bundle with your app. Consider preloading your images before the user gets to the point where they are displayed. This will create the illusion that they load instantly.
Q: How to save ImageCache to the disk in flutter? I load lots of images from the internet with the Image.network widget, and I know the ImageCache only keeps them while the app is running. But I want to save the cache, because I don't want to download the images again every time the app starts. So: is saving the cache to disk possible in Flutter? A: Flutter's image cache is for decoded images. It sounds like what you want is a cache of encoded image files. You could build this yourself by downloading the files to device storage and using Image.file, but you'd probably want to implement some kind of eviction logic to make sure you don't consume too much space on the device. You could use Image.asset for static images that you want to bundle with your app. Consider preloading your images before the user gets to the point where they are displayed. This will create the illusion that they load instantly.
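A rough sketch of that file-cache idea in Dart, using the http and path_provider packages (the URL, the file-naming scheme and the widget wiring are placeholder choices, and there is no eviction logic):

import 'dart:io';
import 'package:http/http.dart' as http;
import 'package:path_provider/path_provider.dart';

/// Returns a local file for [url], downloading it only if it is not cached yet.
Future<File> getCachedImage(String url) async {
  final dir = await getTemporaryDirectory();
  final file = File('${dir.path}/${url.hashCode}.img');
  if (!await file.exists()) {
    final response = await http.get(Uri.parse(url));
    await file.writeAsBytes(response.bodyBytes);
  }
  return file;
}

// Usage inside a widget tree, e.g. with a FutureBuilder:
// FutureBuilder<File>(
//   future: getCachedImage('https://example.com/picture.jpg'),
//   builder: (context, snapshot) => snapshot.hasData
//       ? Image.file(snapshot.data)
//       : CircularProgressIndicator(),
// )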
stackoverflow
{ "language": "en", "length": 158, "provenance": "stackexchange_0000F.jsonl.gz:872131", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564501" }
61aa9e148256745e18f383b04c9b30f1c1df11f4
Stackoverflow Stackexchange Q: angular 2 HostListener - keydown space event doesn't work in IE I have a problem with @HostListener in IE11. I need to catch the keydown event for the space key. My code is very simple and it works fine in Chrome and Firefox, but it doesn't work in IE. import {Component, HostListener} from '@angular/core'; @Component({ selector: 'home', styleUrls: ['./home.component.css'], templateUrl: './home.component.html' }) export class HomeComponent { @HostListener('window:keydown.control.space', ['$event']) @HostListener('window:keydown.space', ['$event']) spaceEvent(event: any) { alert('space key!'); } } In the IE developer tools I don't see any errors or warnings, and I don't know how to debug this. Any suggestions on how to resolve this problem? A: I found a solution. IE does not yet handle many of these key-combination events correctly, so you need to use @HostListener('window:keydown', ['$event']) and then check the keyCode yourself. Example: import {Component, HostListener} from '@angular/core'; @Component({ selector: 'home', styleUrls: ['./home.component.css'], templateUrl: './home.component.html' }) export class HomeComponent { @HostListener('window:keydown', ['$event']) spaceEvent(event: any) { if(event.ctrlKey && event.keyCode == 32) console.log('ctrl + space'); else if(event.keyCode == 32) console.log('space'); } }
Q: angular 2 HostListener - keydown space event doesn't work in IE I have a problem with @HostListener in IE11. I need to catch the keydown event for the space key. My code is very simple and it works fine in Chrome and Firefox, but it doesn't work in IE. import {Component, HostListener} from '@angular/core'; @Component({ selector: 'home', styleUrls: ['./home.component.css'], templateUrl: './home.component.html' }) export class HomeComponent { @HostListener('window:keydown.control.space', ['$event']) @HostListener('window:keydown.space', ['$event']) spaceEvent(event: any) { alert('space key!'); } } In the IE developer tools I don't see any errors or warnings, and I don't know how to debug this. Any suggestions on how to resolve this problem? A: I found a solution. IE does not yet handle many of these key-combination events correctly, so you need to use @HostListener('window:keydown', ['$event']) and then check the keyCode yourself. Example: import {Component, HostListener} from '@angular/core'; @Component({ selector: 'home', styleUrls: ['./home.component.css'], templateUrl: './home.component.html' }) export class HomeComponent { @HostListener('window:keydown', ['$event']) spaceEvent(event: any) { if(event.ctrlKey && event.keyCode == 32) console.log('ctrl + space'); else if(event.keyCode == 32) console.log('space'); } }
stackoverflow
{ "language": "en", "length": 167, "provenance": "stackexchange_0000F.jsonl.gz:872138", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564525" }
469d5ec0c8e5cc7e394b8a307b75d5d9d161a8e8
Stackoverflow Stackexchange Q: Getting error "The package appears to be corrupt" in Fabric Beta on Android 6 All my beta testers with Android 6 get this error when installing my app from Beta: App not installed. The package appears to be corrupt No problems for users with Android 7+. The APK can be directly installed on all devices, including those that get the error in Beta. The problem appeared a few days ago; the project configuration did not change. All my users use the latest 1.7.0 Beta app. Project dependencies: dependencies { classpath 'io.fabric.tools:gradle:1.+' } compile('com.crashlytics.sdk.android:crashlytics:2.6.8@aar') { transitive = true } compile('com.crashlytics.sdk.android:crashlytics-ndk:1.1.6@aar') { transitive = true } Any help? UPD. I removed android:extractNativeLibs="false" from AndroidManifest.xml and now it works. A: For me the solution was to downgrade the Android Gradle plugin from version 3.0.0 (introduced with Android Studio 3) to 2.3.3 (the previous version). I did this by replacing this line in the project build.gradle file: buildscript { repositories { ... } dependencies { classpath 'com.android.tools.build:gradle:3.0.0' ... } } With: buildscript { repositories { ... } dependencies { classpath 'com.android.tools.build:gradle:2.3.3' ... } } After a clean and build I was able to upload my app to Beta and install it with no problem.
Q: Getting error "The package appears to be corrupt" in Fabric Beta on Android 6 All my beta testers with Android 6 get this error when installing my app from Beta: App not installed. The package appears to be corrupt No problems for users with Android 7+. The APK can be directly installed on all devices, including those that get the error in Beta. The problem appeared a few days ago; the project configuration did not change. All my users use the latest 1.7.0 Beta app. Project dependencies: dependencies { classpath 'io.fabric.tools:gradle:1.+' } compile('com.crashlytics.sdk.android:crashlytics:2.6.8@aar') { transitive = true } compile('com.crashlytics.sdk.android:crashlytics-ndk:1.1.6@aar') { transitive = true } Any help? UPD. I removed android:extractNativeLibs="false" from AndroidManifest.xml and now it works. A: For me the solution was to downgrade the Android Gradle plugin from version 3.0.0 (introduced with Android Studio 3) to 2.3.3 (the previous version). I did this by replacing this line in the project build.gradle file: buildscript { repositories { ... } dependencies { classpath 'com.android.tools.build:gradle:3.0.0' ... } } With: buildscript { repositories { ... } dependencies { classpath 'com.android.tools.build:gradle:2.3.3' ... } } After a clean and build I was able to upload my app to Beta and install it with no problem. A: You need to build the APK first via Build > Build APK(s). This is because of a security measure: previously any non-developer could easily grab your APK from the build folder; now that no longer happens, and only a developer is able to create a shareable APK. A: I answered a similar question here. Since Android Studio 3.0, I have the exact same problem if I try to upload an APK via the user interface. For now, you will have to use the command line in order to upload an APK, as the documentation says: ./gradlew assembleDebug crashlyticsUploadDistributionDebug I hope this will help! A: I solved the problem by removing android:extractNativeLibs="false" from the AndroidManifest, until Fabric fixes the issue. A: After searching a lot I found a solution: go to Build -> Build APK(s). After the APK is created you will see a dialog. Click Locate and install the APK on your phone. Enjoy
stackoverflow
{ "language": "en", "length": 348, "provenance": "stackexchange_0000F.jsonl.gz:872146", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564545" }
750bb56642bcbc74f962f623a205f5db87f4ceb3
Stackoverflow Stackexchange Q: How can I remove Kafka topics marked for deletion I have several Kafka topics I used as tests. Now I want to get rid of them all by cleaning up my Kafka topic list. I set the variable delete.topic.enable=true and stopped and restarted the ZooKeeper and Kafka servers, but nothing helped. The topics are still there, 'marked for deletion'. I read this question, but didn't find any answer. Elsewhere it is suggested to manually remove any topic. But how do I do that? At the end of the story, manually or by command line, how do I remove Kafka topics for good? A: Solved it: it is enough to manually delete the folders which contain all the logs of the ZooKeeper and Kafka servers.
Q: How can I remove Kafka topics marked for deletion I have several Kafka topics I used as tests. Now I want to get rid of them all by cleaning up my Kafka topic list. I set the variable delete.topic.enable=true and stopped and restarted the ZooKeeper and Kafka servers, but nothing helped. The topics are still there, 'marked for deletion'. I read this question, but didn't find any answer. Elsewhere it is suggested to manually remove any topic. But how do I do that? At the end of the story, manually or by command line, how do I remove Kafka topics for good? A: Solved it: it is enough to manually delete the folders which contain all the logs of the ZooKeeper and Kafka servers.
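For reference, the manual route looks roughly like this (a sketch: the topic name, log.dirs path and ZooKeeper address are examples that depend on your setup, and the brokers must be stopped before deleting any folders):

# 1. (With delete.topic.enable=true) mark the topic for deletion
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-test-topic

# 2. If it stays "marked for deletion": stop all brokers, then remove the
#    topic's partition folders from every broker's log.dirs location
rm -rf /tmp/kafka-logs/my-test-topic-*

# 3. Remove the topic's metadata from ZooKeeper
bin/zookeeper-shell.sh localhost:2181 rmr /brokers/topics/my-test-topic
bin/zookeeper-shell.sh localhost:2181 rmr /admin/delete_topics/my-test-topic

# 4. Restart the brokers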
stackoverflow
{ "language": "en", "length": 126, "provenance": "stackexchange_0000F.jsonl.gz:872166", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564606" }
d44d21f84882e8ccba7ec6add5fb7804ae5c056b
Stackoverflow Stackexchange Q: Where can I get the full key list in jaxws properties map? I have a jaxws:client configuration like below. I would like to know where I can get the full list of keys I can pass into the jaxws:properties map. Example: schema-validation-enabled mtom-enabled set-jaxb-validation-event-handler .... xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws" <jaxws:client> <jaxws:properties> <entry key="schema-validation-enabled" value="false"></entry> <entry key="mtom-enabled" value="false" /> <entry key="set-jaxb-validation-event-handler" value="false"></entry> </jaxws:properties> </jaxws:client>
Q: Where can I get the full key list in jaxws properties map? I have a jaxws:client configuration like below. I would like to know where I can get the full list of keys I can pass into the jaxws:properties map. Example: schema-validation-enabled mtom-enabled set-jaxb-validation-event-handler .... xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws" <jaxws:client> <jaxws:properties> <entry key="schema-validation-enabled" value="false"></entry> <entry key="mtom-enabled" value="false" /> <entry key="set-jaxb-validation-event-handler" value="false"></entry> </jaxws:properties> </jaxws:client>
stackoverflow
{ "language": "en", "length": 60, "provenance": "stackexchange_0000F.jsonl.gz:872172", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564628" }
126cb063684f68be59863d114a341a365481f0d0
Stackoverflow Stackexchange Q: Bulk unload from redshift async I want to fire an unload query at Redshift. But is using a JDBC connection the best way to go about this? As far as I have seen in my POC, the call is blocking. Now many things can go wrong here: maybe the query dumps too many results and the JDBC connection may time out. So is there any way we can submit a query to Redshift asynchronously and then poll an API to see the result of the query? PS: Using JDBC is not a hard requirement, but Redshift has to be connected to from Java code. EDIT: If someone has to fire a long-running unload query, what is the best way to go about it? A: There is not currently a "submit and poll for results" feature in Redshift. I'd recommend using a workflow server: something that can run the jobs and track success and failure. Look at AWS Data Pipeline, Apache Airflow, or Azkaban
Q: Bulk unload from redshift async I want to fire an unload query at Redshift. But is using a JDBC connection the best way to go about this? As far as I have seen in my POC, the call is blocking. Now many things can go wrong here: maybe the query dumps too many results and the JDBC connection may time out. So is there any way we can submit a query to Redshift asynchronously and then poll an API to see the result of the query? PS: Using JDBC is not a hard requirement, but Redshift has to be connected to from Java code. EDIT: If someone has to fire a long-running unload query, what is the best way to go about it? A: There is not currently a "submit and poll for results" feature in Redshift. I'd recommend using a workflow server: something that can run the jobs and track success and failure. Look at AWS Data Pipeline, Apache Airflow, or Azkaban
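If the long-running UNLOAD simply must be fired from Java without blocking the caller, a minimal sketch is to move the blocking JDBC call to a worker thread and poll its Future (table, bucket, IAM role and connection details are placeholders; this is in-process only and gives none of the durability of a real workflow server):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class UnloadRunner {
    private static final ExecutorService pool = Executors.newSingleThreadExecutor();

    static Future<Boolean> unloadAsync(String jdbcUrl, String user, String password) {
        return pool.submit(() -> {
            String unload = "UNLOAD ('SELECT * FROM my_table') "
                    + "TO 's3://my-bucket/my-prefix/' "
                    + "IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'";
            try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
                 Statement stmt = conn.createStatement()) {
                stmt.execute(unload); // blocks in this worker thread only
                return true;
            }
        });
    }

    public static void main(String[] args) throws Exception {
        Future<Boolean> result = unloadAsync(
                "jdbc:redshift://cluster.example.com:5439/db", "user", "secret");
        while (!result.isDone()) { // poll for completion
            System.out.println("still unloading...");
            Thread.sleep(10_000);
        }
        System.out.println("done: " + result.get());
    }
}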
stackoverflow
{ "language": "en", "length": 171, "provenance": "stackexchange_0000F.jsonl.gz:872210", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564769" }
86362cc81549b560552a0e7685d89c944f2ec4d4
Stackoverflow Stackexchange Q: How do I fix Clippy's needless_range_loop for loops that copy between slices with an offset? When running cargo clippy, it complains about code like this: pub fn from_bytes(data: [u8; 72]) -> Stuff { let mut ts = [0u8; 8]; let mut cs = [0u8; 64]; for b in 0..8 { ts[b] = data[b]; } for bb in 0..64 { cs[bb] = data[bb + 8]; } } with warning: the loop variable `bb` is used to index `cs` --> src/main.rs:9:5 | 9 | / for bb in 0..64 { 10 | | cs[bb] = data[bb + 8]; 11 | | } | |_____^ | = note: #[warn(needless_range_loop)] on by default = help: for further information visit https://github.com/Manishearth/rust-clippy/wiki#needless_range_loop help: consider using an iterator | for (bb, <item>) in cs.iter().enumerate().take(64) { I can not wrap my head around this information. How can I change to the suggested method? I don't get how something like for (bb, <item>) in cs.iter().enumerate().take(64) can be applied to my use case. A: Use clone_from_slice ts.clone_from_slice(&data[..8]); cs.clone_from_slice(&data[8..]);
Q: How do I fix Clippy's needless_range_loop for loops that copy between slices with an offset? When running cargo clippy, it complains about code like this: pub fn from_bytes(data: [u8; 72]) -> Stuff { let mut ts = [0u8; 8]; let mut cs = [0u8; 64]; for b in 0..8 { ts[b] = data[b]; } for bb in 0..64 { cs[bb] = data[bb + 8]; } } with warning: the loop variable `bb` is used to index `cs` --> src/main.rs:9:5 | 9 | / for bb in 0..64 { 10 | | cs[bb] = data[bb + 8]; 11 | | } | |_____^ | = note: #[warn(needless_range_loop)] on by default = help: for further information visit https://github.com/Manishearth/rust-clippy/wiki#needless_range_loop help: consider using an iterator | for (bb, <item>) in cs.iter().enumerate().take(64) { I can not wrap my head around this information. How can I change to the suggested method? I don't get how something like for (bb, <item>) in cs.iter().enumerate().take(64) can be applied to my use case. A: Use clone_from_slice ts.clone_from_slice(&data[..8]); cs.clone_from_slice(&data[8..]);
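Putting the answer together, a complete sketch of the question's function (Stuff is a hypothetical struct invented for the example; for Copy types like u8 you can equally use copy_from_slice, and both satisfy the lint):

pub struct Stuff {
    ts: [u8; 8],
    cs: [u8; 64],
}

pub fn from_bytes(data: [u8; 72]) -> Stuff {
    let mut ts = [0u8; 8];
    let mut cs = [0u8; 64];
    // Source and destination slices have matching lengths, so this cannot panic
    ts.copy_from_slice(&data[..8]);
    cs.copy_from_slice(&data[8..]);
    Stuff { ts, cs }
}

fn main() {
    let stuff = from_bytes([0u8; 72]);
    assert_eq!(stuff.ts.len(), 8);
    assert_eq!(stuff.cs.len(), 64);
}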
stackoverflow
{ "language": "en", "length": 169, "provenance": "stackexchange_0000F.jsonl.gz:872211", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564772" }
9f9fc2b7b2a6c07c4b7b8613f10ca0107f3197a3
Stackoverflow Stackexchange Q: How to send 'Application Version' with client side application insights using javascript? We can send an 'application version' property with every insight in C#, as in this tutorial, by adding an initializer. class AppVersionTelemetryInitializer : Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer { public void Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry telemetry) { telemetry.Context.Component.Version = ApplicationInsightsHelper.ApplicationVersion; } } https://blogs.msdn.microsoft.com/visualstudioalm/2015/01/07/application-insights-support-for-multiple-environments-stamps-and-app-versions/ How can I do this with JavaScript? A: If you are using the @microsoft/applicationinsights-web SDK (for client-side JavaScript), you can set the application version in this way: const appInsights = new ApplicationInsights(...); appInsights.loadAppInsights(); // important, otherwise the `application` object is missing appInsights.context.application.ver = "YOUR_VERSION_HERE"; This way, you'll be able to drill down into metrics by application version in the dashboards.
Q: How to send 'Application Version' with client side application insights using javascript? We can send an 'application version' property with every insight in C#, as in this tutorial, by adding an initializer. class AppVersionTelemetryInitializer : Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer { public void Initialize(Microsoft.ApplicationInsights.Channel.ITelemetry telemetry) { telemetry.Context.Component.Version = ApplicationInsightsHelper.ApplicationVersion; } } https://blogs.msdn.microsoft.com/visualstudioalm/2015/01/07/application-insights-support-for-multiple-environments-stamps-and-app-versions/ How can I do this with JavaScript? A: If you are using the @microsoft/applicationinsights-web SDK (for client-side JavaScript), you can set the application version in this way: const appInsights = new ApplicationInsights(...); appInsights.loadAppInsights(); // important, otherwise the `application` object is missing appInsights.context.application.ver = "YOUR_VERSION_HERE"; This way, you'll be able to drill down into metrics by application version in the dashboards. A: You can formulate the app version/tags and send them as a custom property or metric via trackPageView. A config file is not possible, but an enum or some key/value pair can be maintained for each release in your web pages, and you can then slice by the custom parameter in the Azure portal AI blade or via API calls.
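For the older applicationinsights-js browser snippet, a telemetry-initializer sketch would look like this (the version string is a placeholder you would inject at build time; verify the API against the SDK version you actually use):

// Run after the Application Insights snippet has loaded
window.appInsights.queue.push(function () {
    window.appInsights.context.addTelemetryInitializer(function (envelope) {
        // Stamp every telemetry item with the application version
        envelope.tags['ai.application.ver'] = '1.2.3';
    });
});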
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:872236", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564844" }
f7059898738eebf19d5b2ef6b1dace91325a9cff
Stackoverflow Stackexchange Q: STRING to DATE in BIGQUERY I am struggling to do this with Google BigQuery: I have a column with dates in the following STRING format: 6/9/2017 (M/D/YYYY) I am wondering how I can deal with this, trying to use the DATE clause in order to get this format: YYYY-MM-DD. A: Easy one, with standard SQL: #standardSQL SELECT PARSE_DATE('%m/%d/%Y', '6/22/2017') 2017-06-22 https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#supported-format-elements-for-date
Q: STRING to DATE in BIGQUERY I am struggling to do this with Google BigQuery: I have a column with dates in the following STRING format: 6/9/2017 (M/D/YYYY) I am wondering how I can deal with this, trying to use the DATE clause in order to get this format: YYYY-MM-DD. A: Easy one, with standard SQL: #standardSQL SELECT PARSE_DATE('%m/%d/%Y', '6/22/2017') 2017-06-22 https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#supported-format-elements-for-date A: This solution can work SELECT CAST( CONCAT( SUBSTR(DT_DOCUMENTO, 0 , 4), '-' , SUBSTR(DT_DOCUMENTO, 5 , 2), '-' , SUBSTR(DT_DOCUMENTO, 7 , 2) ) AS DATE ) AS FORMAT_DATE A: cast('06-09-2017' AS DATE FORMAT 'DD-MM-YYYY') or you can try this cast('2017/06/09' AS DATE FORMAT 'YYYY/MM/DD') You can read more about string to date conversion with BigQuery in this link https://cloud.google.com/bigquery/docs/reference/standard-sql/date_functions A: If you are lazy you can do date parsing with automatic format detection select bigfunctions.eu.parse_date('1/20/21') as cleaned_date will give +--------------------+ | cleaned_date | +--------------------+ | date('2021-01-20') | +--------------------+ as well as select bigfunctions.eu.parse_date('Wed Jan 20 21:47:00 2021') as cleaned_date https://unytics.io/bigfunctions/reference/#parse_date
stackoverflow
{ "language": "en", "length": 168, "provenance": "stackexchange_0000F.jsonl.gz:872254", "question_score": "48", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564887" }
ee120b0470535cf4972a93f00f711458741d54fc
Stackoverflow Stackexchange Q: What is Over-Fetching or Under-fetching? I've been playing with GraphQL for a while. Before GraphQL, we normally used REST APIs. Many developers say that GraphQL fixes some problems of REST (e.g. over-fetching & under-fetching). I am confused by these terms. Can somebody explain what over- and under-fetching are in this context? Thanks. A: Over-fetching means you are fetching irrelevant data that is useless at this point. Under-fetching means you are fetching less data than is required at this point.
Q: What is Over-Fetching or Under-fetching? I've been playing with GraphQL for a while. Before GraphQL, we normally used REST APIs. Many developers say that GraphQL fixes some problems of REST (e.g. over-fetching & under-fetching). I am confused by these terms. Can somebody explain what over- and under-fetching are in this context? Thanks. A: Over-fetching means you are fetching irrelevant data that is useless at this point. Under-fetching means you are fetching less data than is required at this point. A: Over-fetching is fetching too much data, meaning there is data in the response you don't use. Under-fetching is not having enough data with a call to an endpoint, forcing you to call a second endpoint. In both cases, they are performance issues: you either use more bandwidth than ideal, or you are making more HTTP requests than ideal. In a perfect world, these problems would never arise; you would have exactly the right endpoints to give exactly the right data to your products. These problems often appear when you scale and iterate on your products. The data you use on your pages often changes, and the cost to maintain a separate endpoint with exactly the right data for each component becomes too much. So, you end up with a compromise between not having too many endpoints, and having the endpoints fit each component's needs best. This will lead to over-fetching in some cases (the endpoint will provide more data than needed for one specific component), and under-fetching in some others (you will need to call a second endpoint). GraphQL fixes this problem because it allows you to request which data you want from the server. You specify what you need and will get this data, and only this data, in one trip to the server. A: Over-fetching and under-fetching In a dynamic language like Ruby, over- and under-fetching are two common pitfalls. Over-fetching Over-fetching occurs when additional fields are declared in a fragment but are not actually used in the template. This will likely happen when template code is modified to remove usage of a certain field. If the fragment is not updated along with this change, the property will still be fetched when we no longer need it. A simple title field may not be a big deal in practice, but this property could have been a more expensive nested data tree. Under-fetching Under-fetching occurs when fields are not declared in a fragment but are used in the template. This missing data will likely surface as a NoFieldError or nil value. Worse, there may be a latent under-fetch bug when a template does not declare a data dependency but appears to be working because its caller happens to fetch the correct data upstream. But when this same template is rendered from a different path, it errors on missing data.
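To make the contrast concrete, consider a hypothetical schema: a REST endpoint like /users/1 might return the user's id, name, email, address and last login even when a component only renders the name and the user's post titles (over-fetching the unused fields, and under-fetching the posts, which need a second call to /users/1/posts). A single GraphQL query asks for exactly those fields in one round trip:

query {
  user(id: 1) {
    name          # only the fields the component actually renders...
    posts {
      title       # ...including nested data, without a second request
    }
  }
}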
stackoverflow
{ "language": "en", "length": 473, "provenance": "stackexchange_0000F.jsonl.gz:872259", "question_score": "55", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44564905" }