Q: Is new() required for a nullable non-reference type variable?

If I have made a variable of a non-reference type, say int, nullable, i.e. int?, does this mean I need to use a constructor before assigning a value? Normally, to initialise a non-reference type variable I simply do

    int foo = 5;

But if I have a nullable non-reference data type variable, is initialisation necessary, as below, or can I still use the simple initialisation above?

    int? foo = new int();
    foo = 5;

A: No, you don't need to create an instance before assignment. int? is a struct which is created on assignment. Your assignment

    foo = 5;

is actually

    foo = new Nullable<int>(5);

This is all done by the compiler; there is no need to do it yourself.

A: int? is syntactic sugar for Nullable<int>. If we look at the implementation of Nullable<T> (https://referencesource.microsoft.com/#mscorlib/system/nullable.cs,ffebe438fd9cbf0e) we find an implicit operator declaration:

    public struct Nullable<T> where T : struct
    {
        ...
        [System.Runtime.Versioning.NonVersionable]
        public static implicit operator Nullable<T>(T value)
        {
            return new Nullable<T>(value);
        }
        ...
    }

So for any struct T, instead of the explicit constructor call

    T value = ...
    T? test = new Nullable<T>(value);

we can use the implicit operator:

    T? test = value; // implicit operator in action

In your particular case T is int, and we have

    int? foo = 5;
Source: https://stackoverflow.com/questions/44587065
Q: OSError: Unable to locate Ghostscript on paths

I tried to open an EPS image with Pyzo. I have installed PIL and Ghostscript (as I saw on some other website topics that it is necessary). My code is:

    from PIL import Image
    im = Image.open('''myimage.eps''')
    im.show()

but when I run the code, Pyzo returns:

    OSError: Unable to locate Ghostscript on paths

I tried to look into it on several websites but it seems pretty complicated for a novice coding student.

A: You need Ghostscript.

* Download it: https://www.ghostscript.com/download/gsdnld.html
* Tell the variable EpsImagePlugin.gs_windows_binary the path of the EXE (gswin64c, gswin32c or gs), if you don't want to change the system path:

    from PIL import EpsImagePlugin
    EpsImagePlugin.gs_windows_binary = r'X:\...\gs\gs9.52\bin\gswin64c'

    im = Image.open('myimage.eps')
    im.save('myimage.png')

You can see why in PIL/EpsImagePlugin.py:

    # EpsImagePlugin.py
    __version__ = "0.5"
    ...
    gs_windows_binary = None
    ...
    def Ghostscript(tile, size, fp, scale=1):
        """Render an image using Ghostscript"""
        ...
        if gs_windows_binary is not None:
            if not gs_windows_binary:
                raise WindowsError("Unable to locate Ghostscript on paths")
            command[0] = gs_windows_binary

That is why setting gs_windows_binary works.

A: In case someone else encounters this issue: it seems that Ghostscript has not been added to the PATH properly. For those running Windows 7, here is a fix:

Go to Control Panel -> System -> Advanced system settings -> Environment Variables... Find the variable "PATH" -> Edit... -> and add the path to your Ghostscript binary folder, e.g.

    C:\Program Files\gs\gs9.22\bin\;

to the end of the variable. It should be separated from the previous entry by a semicolon. I had to restart for the changes to take effect.
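As a quick sanity check before editing PIL internals or environment variables, you can ask Python whether any of the usual Ghostscript executable names is reachable on PATH. This is a minimal sketch; the executable names tried here (gswin64c, gswin32c, gs) are the common Windows and Unix binary names mentioned above, and find_ghostscript is a made-up helper name:

```python
import shutil

def find_ghostscript():
    """Return the full path of the first Ghostscript executable found on PATH, or None."""
    for name in ("gswin64c", "gswin32c", "gs"):
        path = shutil.which(name)
        if path is not None:
            return path
    return None

# None here means PIL will likely also fail to locate Ghostscript on its own.
print(find_ghostscript())
```

If this prints None, either add the binary folder to PATH or point EpsImagePlugin.gs_windows_binary at the executable directly, as in the answer above.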
Source: https://stackoverflow.com/questions/44587376
Q: initWithBase64EncodedString returns nil

My resultString is 'PHNhbWxwOlJlc3BvbnNlIH...c3BvbnNlPgoK' and when I decode it, decodedData is nil:

    NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:resultString options:0];

I also tried this string with https://www.base64decode.org/ and it successfully shows results. What is wrong here in the decoding?

A: Probably you have some invalid characters in your string, like padding newlines. Try passing the NSDataBase64DecodingIgnoreUnknownCharacters option instead of 0:

    NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:resultString options:NSDataBase64DecodingIgnoreUnknownCharacters];

A: Almost certainly your string is not valid Base64, but it is "close enough" that base64decode.org accepts it. The most likely cause is that you've dropped a trailing =. base64decode.org is tolerant of that, and just quietly throws away what it can't decode (the last byte in that case). NSData is not tolerant of that, because it's not valid Base64. base64decode.org is also tolerant of random non-Base64 characters in the string and just throws them away. NSData is not (again, since it's invalid).

A: Try this simple solution (it needs Foundation.framework). By default, initWithBase64EncodedString returns nil when the input is not recognized as valid Base64, so please check that your string really is valid Base64:

    NSData *decodedData = [[NSData alloc] initWithBase64EncodedString:@"eyJuYW1lIjoidmlnbmVzaCJ9" options:0];
    NSError *dataError;
    NSDictionary *responseObject = [NSJSONSerialization JSONObjectWithData:decodedData options:kNilOptions error:&dataError];
    if (dataError == nil) {
        NSLog(@"Result %@", responseObject);
    }
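The strict-vs-lenient distinction described above is not specific to Foundation. As an illustrative Python sketch (not the asker's code), the stdlib base64 module behaves like the lenient web decoder by default and like NSData's options:0 when validation is requested:

```python
import base64
import binascii

clean = "eyJuYW1lIjoidmlnbmVzaCJ9"      # valid Base64 (a small JSON payload)
noisy = "eyJuYW1l\nIjoidmlnbmVzaCJ9\n"  # same data with embedded newlines

# Lenient mode (default): non-alphabet characters are silently discarded,
# much like NSDataBase64DecodingIgnoreUnknownCharacters.
print(base64.b64decode(clean))  # b'{"name":"vignesh"}'
print(base64.b64decode(noisy))  # b'{"name":"vignesh"}'

# Strict mode: any non-alphabet character is an error, much like options:0.
try:
    base64.b64decode(noisy, validate=True)
except binascii.Error as e:
    print("rejected:", e)
```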
Source: https://stackoverflow.com/questions/44587400
Q: Should I manage my product catalog with BlueSnap?

I understand that the difference between the merchant integrations is connected with building a product catalog. Why should I not build one?

A: It depends on your business model. If you're selling the same products at the same pricing to a large number of shoppers, then it's a good idea to work with a catalog. This allows you to apply setup changes, like setting a new coupon or a price change to a contract, affecting all shoppers. On the other hand, if you're going to sell tailored products with different pricing plans per shopper, it may be a better idea to use BlueSnap without building a catalog. Remember that if you're already using Magento, PrestaShop or WooCommerce, you can simply integrate your cart with BlueSnap and keep your existing catalog in the cart.
Source: https://stackoverflow.com/questions/44587444
Q: Service account roles to deploy a Google Cloud Function

I'm trying to use gcloud beta functions deploy from CI using a service account, but get an error:

    (gcloud.beta.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[The caller does not have permission]

I can't find any roles in the IAM web console that look appropriate. Which one do I use?

A: Check your current config:

    gcloud config list

View the result and check that project = is exactly the same as your target project's PROJECT_ID. You can set the project with:

    gcloud config set project PROJECT_ID

Important: project = must be the PROJECT_ID, not the project NAME.

A: These are the minimum roles required for my service account (not the default Cloud Functions service account) to successfully deploy a Cloud Function from CI:

* Cloud Functions Developer
* Service Account User

From the docs:

"In order to assign a user the Cloud Functions Developer role (roles/cloudfunctions.developer) or a custom role that can deploy functions, you must also assign the user the IAM Service Account User role (roles/iam.serviceAccountUser) on the Cloud Functions Runtime service account."

Reference: https://cloud.google.com/functions/docs/reference/iam/roles

One thing I don't understand is the mention of the Runtime service account. You don't have to assign the Service Account User role to the Runtime service account; rather, it should be granted to the service account you are using to deploy. (Not sure whether I understand the doc correctly.)

A: To deploy a function, the user should have the role roles/cloudfunctions.developer. I found this by changing the role in the UI; I couldn't find any official Google documentation. This role is also mentioned in this article: https://medium.com/google-cloud/triggering-cloud-functions-deployments-97691f9b5416

A: You can always begin from these two options; at least they must work. And make sure that you set up all required environment variables to make Application Default Credentials work.
Source: https://stackoverflow.com/questions/44587490
Q: Set font size of Angular Material tooltip

I am very new to web development, and I cannot figure out how to solve the following issue, although it may be very easy. I am using Angular 4 and Angular Material to implement tooltips like this:

    <div mdTooltip="tooltip text" mdTooltipPosition="above">
        <span>Show tooltip</span>
    </div>

I would like to make the font size of the tooltip text bigger. However, I did not manage to find how to do this in the Angular Material documentation, nor by searching the web. Does anyone have any idea how to do this? Thanks.

A: You can fix this by adding a .mat-tooltip CSS declaration in your main styles file and changing the font size there. You need to set !important on the font size, otherwise it won't show up.

A: Per the documentation (https://material.angular.io/components/tooltip/api) and the spec (https://github.com/angular/material2/blob/master/src/lib/tooltip/tooltip.spec.ts), you can set the property matTooltipClass as follows:

    <div matTooltip="tooltip text" matTooltipPosition="above" matTooltipClass="tooltip">
        <span>Show tooltip</span>
    </div>

Then in your CSS (global, not for the component):

    .mat-tooltip.tooltip {
        background-color: darkblue;
        font-size: 12px;
    }

Also see their demo here: https://github.com/angular/material2/tree/master/src/demo-app/tooltip

Also keep in mind, if you are using SASS, that the container for the tooltip is at the bottom of the page and nowhere near where you place it in your component's HTML, so do not nest the rule inside that component. Make sure it is standalone, otherwise it will not work. This note applies as well to the answer above if you just choose to override .mat-tooltip.

To see the changes, in developer tools, find the div at the bottom with the class "cdk-overlay-container", then hover over the element. You can use your arrow keys to navigate into the element while you are hovering, to confirm whether your class is being added.

A: My problem was that using a globally defined CSS class name such as .customname-tooltip for matTooltipClass was NOT working. My solution is below, and the !important was needed; set it in the global styles.css file:

    .mat-tooltip {
        font-size: 16px !important;
    }

A: Add the following code to your styles.css to increase the font size, e.g. to 14px:

    .mat-tooltip {
        font-size: 14px !important;
    }

and use matTooltip in your tags as:

    <p matTooltip="My Tooltip">...</p>

A: Try this way. It should work.

test.component.html:

    <div mdTooltip="tooltip text" mdTooltipPosition="above" matTooltipClass="myTest-tooltip">
        <span>Show tooltip</span>
    </div>

test.component.ts:

    @Component({
        selector: 'test',
        templateUrl: './test.component.html',
        styleUrls: ['./test.component.scss'],
        encapsulation: ViewEncapsulation.None,
        /* styles: [`
            .myTest-tooltip {
                min-width: 300px;
                background-color: #FC5558;
                font-size: 16px;
            }
        `] */
    })

test.component.scss:

    .myTest-tooltip {
        min-width: 300px;
        background-color: #FC5558;
        font-size: 16px;
    }

A: In v15, you can change the CSS variables:

    body {
        .mat-mdc-tooltip {
            --mdc-plain-tooltip-container-color: #616161;
            --mdc-plain-tooltip-supporting-text-color: white;
            --mdc-plain-tooltip-supporting-text-font: Roboto, sans-serif;
            --mdc-plain-tooltip-supporting-text-size: 12px;
            --mdc-plain-tooltip-supporting-text-weight: 400;
            --mdc-plain-tooltip-supporting-text-tracking: 0.0333333333em;
            line-height: 12px;
        }
    }

A: You can use the CSS /deep/ selector. For example:

    /deep/ .mat-tooltip {
        font-size: 14px;
    }

Then you do not have to use !important.

A: Add ::ng-deep before the class name. Try this:

    ::ng-deep .mat-tooltip {
        background: red !important;
    }

A: Use matTooltipClass to apply your custom class on tooltips:

    <button mat-raised-button matTooltip="Adding a class to the tooltip container" matTooltipClass="custom-tooltip">
        Custom tooltip
    </button>

Add your style in your component's style.scss file:

    .custom-tooltip {
        font-size: 20px !important;
    }

A: You can set a custom style only for your component by adding a custom class and using /deep/, which applies the CSS changes only for your custom class and not globally. For example, adding a custom tooltip for an image tag:

    <img matTooltip="text" matTooltipClass="my-custom-class" src=""/>

and in the CSS file:

    /deep/ .mat-tooltip.my-custom-class {
        background: #FFFFFF;
    }

A: I don't have experience with Angular, but you may add a class or id to the div, then control it with that class or id in a CSS file:

    <div class="sth" mdTooltip="tooltip text" mdTooltipPosition="above">
        <span>Show tooltip</span>
    </div>

and in the CSS file:

    .sth {
        font-size: 20px;
    }

A: Put this in your component CSS (or the home component CSS if you want to apply it globally; note that putting this in your global CSS file won't work, and you have to put it in the home component CSS to apply it globally):

    ::ng-deep .mat-tooltip {
        font-size: 16px;
    }
Source: https://stackoverflow.com/questions/44587532
Q: Persistently save Chrome DevTools settings

My goal: persistently save the settings of the Chrome DevTools console, specifically the "User messages only" option.

Problem: if I check that option, then close and open DevTools again, the option is unchecked again. The changes I make are not persisted.

Question: is that behavior normal? Thanks for your time :-)

A: It's a bug. Star this issue to give it more attention: https://crbug.com/734088

Update: it's not a bug per se; the team intended for it to work like this. But I think it makes more sense for the setting to persist.
Source: https://stackoverflow.com/questions/44587555
Q: How does the child component's property change without any communication between child and parent component?

As if by magic, the child component's property is manipulated without parent-to-child communication. Can someone explain how this happens? Please have a look at the code here: https://plnkr.co/edit/ucQ47vTBSOyBv8jr1qEp?p=preview

I update the form controls' values with:

    SetFormByPatchValue() {
        this.formGroup.patchValue();
    }

The above code manipulates the properties of the child component.

Example 2: Here is a link to another plunker. You can see that I have added another date component, whose value does not change on edit.
Source: https://stackoverflow.com/questions/44587601
Q: Android: adding a watermark to video

I used the grafika project to make a video recording app. Now I want to add a watermark to the video (I have the file path for the video). Can someone please help with a code snippet or suggestion? I don't want to use a heavy library like FFmpeg unless no other solution is left. Similar questions exist but I didn't find any solution in them. A solution is posted here, but it's not clear exactly how to proceed: https://stackoverflow.com/a/43231245/7026525

Any help is appreciated.
Source: https://stackoverflow.com/questions/44587606
Q: How to save a binary image (with dtype=bool) using cv2?

I am using OpenCV in Python and want to save a binary image (dtype=bool). If I simply use cv2.imwrite, I get the following error:

    TypeError: image data type = 0 is not supported

Can someone help me with this? The image is basically supposed to work as a mask later.

A: Convert the binary image to the 'uint8' data type. Try this:

    >>> binary_image.dtype = 'uint8'
    >>> cv2.imwrite('image.png', binary_image)

A: You can use this:

    cv2.imwrite('mask.png', maskimg * 255)

This converts it implicitly to integer, which gives 0 for False and 1 for True, and multiplies it by 255 to make a (bit-)mask before writing it. OpenCV is quite tolerant and writes int64 images with 8-bit depth (but e.g. uint16 images with 16-bit depth). The operation is not done in place, so you can still use maskimg for indexing etc.

A: No, OpenCV does not expect the binary image in the format of a boolean ndarray. OpenCV supports only np.uint8, np.float32 and np.float64. Since OpenCV is more of an image manipulation library, an image with boolean values makes no sense when you think of RGB or grayscale formats. The most compact data type to store a binary matrix is uchar, i.e. dtype=np.uint8, so you need to use this data type instead of np.bool.

A: ndarray.astype('bool'). This page may help: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.astype.html

A: If you are using OpenCV, you should consider using the HSV format for thresholding the image. Convert the BGR image to HSV using cv2.cvtColor() and then threshold your image using cv2.inRange(). You will need values for the upper and lower limits of Hue (h), Saturation (s) and Value (v). For this you may use this script or create your own using it as a reference. The script is meant to return HSV lower and upper limit values for live video stream input, but with minor adjustments you can do the same with image inputs as well.

Save the obtained binary (kind of) image using cv2.imwrite(), and there you have it. You may use this binary image for masking too. If you are still left with any doubts, you may refer to this script and it should clear most of them.
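The bool-to-uint8 conversions suggested in the answers above can be checked without OpenCV or any file I/O. A minimal NumPy sketch (the 2x3 mask here is made up for illustration):

```python
import numpy as np

# A made-up 2x3 boolean mask, standing in for a thresholding result.
mask = np.array([[True, False, True],
                 [False, True, False]])

# Multiplying promotes bool to an integer dtype: False -> 0, True -> 255.
as_bytes = (mask * 255).astype(np.uint8)
print(as_bytes)

# astype is equivalent here, and neither approach modifies the original mask.
also_bytes = mask.astype(np.uint8) * 255
print(np.array_equal(as_bytes, also_bytes))  # True
print(mask.dtype)                            # still bool
```

Either as_bytes array is safe to pass to cv2.imwrite, since it is a plain uint8 image with values 0 and 255.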
Source: https://stackoverflow.com/questions/44587613
Q: Length of each string in a NumPy array

Is there any builtin operation in NumPy that returns the length of each string in an array? I don't think any of the NumPy string operations does that; is this correct? I can do it with a for loop, but maybe there's something more efficient?

    import numpy as np
    arr = np.array(['Hello', 'foo', 'and', 'whatsoever'], dtype='S256')
    sizes = []
    for i in arr:
        sizes.append(len(i))
    print(sizes)
    # [5, 3, 3, 10]

A: For me this would be the way to go:

    sizes = [len(i) for i in arr]

A: You can use np.vectorize. It is much faster:

    mylen = np.vectorize(len)
    print(mylen(arr))

A: UPDATE 06/20: cater for the u+0000 character and non-contiguous inputs; thanks @M1L0U.

Here is a comparison of a couple of methods. Observations:

* For input size >1000 lines, viewcasting + argmax is consistently, and by a large margin, the fastest.
* Python solutions profit from converting the array to a list first.
* map beats list comprehension.
* np.frompyfunc and, to a lesser degree, np.vectorize fare better than their reputation.

contiguous:

    method ↓↓ size →→                   |     10|    100|   1000|  10000| 100000|1000000
    ------------------------------------+-------+-------+-------+-------+-------+-------
    np.char.str_len                     |  0.006|  0.037|  0.350|  3.566| 34.781|345.803
    list comprehension                  |  0.005|  0.036|  0.312|  2.970| 28.783|293.715
    list comprehension after .tolist()  |  0.002|  0.011|  0.117|  1.119| 12.863|133.886
    map                                 |  0.002|  0.008|  0.080|  0.745|  9.374|103.749
    np.frompyfunc                       |  0.004|  0.011|  0.089|  0.861|  8.824| 88.739
    np.vectorize                        |  0.025|  0.032|  0.132|  1.046| 12.112|133.863
    safe argmax                         |  0.026|  0.026|  0.056|  0.290|  2.827| 32.583

non-contiguous:

    method ↓↓ size →→                   |     10|    100|   1000|  10000| 100000|1000000
    ------------------------------------+-------+-------+-------+-------+-------+-------
    np.char.str_len                     |  0.006|  0.037|  0.349|  3.575| 34.525|344.859
    list comprehension                  |  0.005|  0.032|  0.306|  2.963| 29.445|292.527
    list comprehension after .tolist()  |  0.002|  0.011|  0.117|  1.043| 11.081|130.644
    map                                 |  0.002|  0.008|  0.081|  0.731|  7.967| 99.848
    np.frompyfunc                       |  0.005|  0.012|  0.099|  0.885|  9.221| 92.700
    np.vectorize                        |  0.025|  0.033|  0.146|  1.063| 11.844|134.505
    safe argmax                         |  0.026|  0.026|  0.057|  0.291|  2.997| 31.161

Code:

    import numpy as np

    flist = []

    def timeme(name):
        def wrap_gen(f):
            flist.append((name, f))
            return f
        return wrap_gen

    @timeme("np.char.str_len")
    def np_char():
        return np.char.str_len(A)

    @timeme("list comprehension")
    def lst_cmp():
        return [len(a) for a in A]

    @timeme("list comprehension after .tolist()")
    def lst_cmp_opt():
        return [len(a) for a in A.tolist()]

    @timeme("map")
    def map_():
        return list(map(len, A.tolist()))

    @timeme("np.frompyfunc")
    def np_fpf():
        return np.frompyfunc(len, 1, 1)(A)

    @timeme("np.vectorize")
    def np_vect():
        return np.vectorize(len)(A)

    @timeme("safe argmax")
    def np_safe():
        assert A.dtype.kind == "U"
        # work around numpy's refusal to viewcast non contiguous arrays
        v = np.lib.stride_tricks.as_strided(
            A[0, None].view("u4"), (A.size, A.itemsize >> 2), (A.strides[0], 4))
        v = v[:, ::-1].astype(bool)
        l = v.argmax(1)
        empty = (~(v[:, 0] | l.astype(bool))).nonzero()
        l = v.shape[1] - l
        l[empty] = 0
        return l

    A = np.random.choice(
        "Blind\x00text do not use the quick brown fox jumps over the lazy dog "
        .split(" "), 1000000)[::2]

    for _, f in flist[:-1]:
        assert (f() == flist[-1][1]()).all()

    from timeit import timeit

    for j, tag in [(1, "contiguous"), (2, "non-contiguous")]:
        print('\n', tag)
        L = ['|+' + len(flist)*'|',
             [f"{'method ↓↓ size →→':36s}", 36*'-'] + [f"{name:36s}" for name, f in flist]]
        for N in (10, 100, 1000, 10000, 100000, 1000000):
            A = np.random.choice("Blind\x00text do not use the quick brown fox"
                                 " jumps over the lazy dog ".split(" "), j*N)[::j]
            L.append([f"{N:>7d}", 7*'-'] +
                     [f"{timeit(f, number=10)*100:7.3f}" for name, f in flist])
        for sep, *line in zip(*L):
            print(*line, sep=sep)

A: Using str_len from NumPy:

    sizes = np.char.str_len(arr)

str_len documentation: https://numpy.org/devdocs/reference/generated/numpy.char.str_len.html
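For a quick side-by-side check of the two simplest options discussed above (a small made-up array, default Unicode dtype rather than 'S256'):

```python
import numpy as np

arr = np.array(['Hello', 'foo', 'and', 'whatsoever'])

# Vectorized: NumPy's own per-element string length routine.
print(np.char.str_len(arr))          # [ 5  3  3 10]

# Plain Python: convert to a list first, then map len over it.
print(list(map(len, arr.tolist())))  # [5, 3, 3, 10]
```

Both give the same lengths; which is faster depends on array size, as the benchmark above shows.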
Source: https://stackoverflow.com/questions/44587746
stackoverflow
{ "language": "en", "length": 195, "provenance": "stackexchange_0000F.jsonl.gz:879567", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44587780" }
37586eb44e4e696f259cfb921125dd880daa07a1
Stackoverflow Stackexchange Q: Plotly chart not rendering correctly in Shiny Dashboard If I change tab in my shiny dashboard in the middle of a ggplotly (plotly) chart rendering/loading, then come back to that tab after it has finished loading, the chart would be created but would be compressed. The only way to get this to correct itself is to make sure not to change tabs while the plot is loading. This will be a problem as users of the app may keep switching tabs and end up creating my charts in this compressed format and having to reload the app. Any help or explanations to why shiny dashboard and ggplotly have this interaction would be great. Thanks

A: Hard to help without a reproducible example but perhaps:

output$thechart <- renderPlotly({
    ......
})
outputOptions(output, "thechart", suspendWhenHidden = FALSE)
A: I'm the one who put up the bounty for this question. Unfortunately I can't directly comment under your answer, Stéphane Laurent, due to my low rep. Here's a reproducible example:

# This is a Shiny web application. You can run the application by clicking
# the 'Run App' button above.
#
# Find out more about building applications with Shiny here:
#
#    http://shiny.rstudio.com/
#

library(shiny)
library(ggplot2)
library(plotly)

# Define UI for application that draws a histogram
ui <- fluidPage(

    # Application title
    titlePanel("Squished Graph Reproducible Example"),

    # Sidebar with a slider input for number of bins
    sidebarLayout(
        # Show a plot of the generated distribution
        sidebarPanel(),
        mainPanel(
            tabsetPanel(
                tabPanel('Tab1', plotlyOutput('plot1')),
                tabPanel('Tab2', plotlyOutput('plot2')),
                tabPanel('Tab3', plotlyOutput('plot3'))
            )
        )
    )
)

# Define server logic required to draw a histogram
server <- function(input, output) {

    output$plot1 <- renderPlotly({
        Sys.sleep(1) # represents time for other calculations
        p <- ggplot(mtcars, aes(x=wt, y=drat, color=cyl)) + geom_line() + theme(legend.position = 'none')
        ggplotly(p)
    })

    output$plot2 <- renderPlotly({
        Sys.sleep(1) # represents time for other calculations
        p <- ggplot(mtcars, aes(x=disp, y=drat, color=cyl)) + geom_line() + theme(legend.position = 'none')
        ggplotly(p)
    })

    output$plot3 <- renderPlotly({
        Sys.sleep(1) # represents time for other calculations
        p <- ggplot(mtcars, aes(x=qsec, y=drat, color=cyl)) + geom_line() + theme(legend.position = 'none')
        ggplotly(p)
    })
}

# Run the application
shinyApp(ui = ui, server = server)

You start at Tab1. Click Tab2 then Tab3 before the graph in Tab2 can load. Go back to Tab2, and it should be squished. Here's a video showing it: https://www.loom.com/share/9416d225d481490da69a009e05b9f51e

Stéphane Laurent's answer works! Adding the following code to the server part fixes it:

outputOptions(output, "plot1", suspendWhenHidden = FALSE)
outputOptions(output, "plot2", suspendWhenHidden = FALSE)
outputOptions(output, "plot3", suspendWhenHidden = FALSE)

Stéphane Laurent, could you give more info regarding why this works? Thanks!
stackoverflow
{ "language": "en", "length": 423, "provenance": "stackexchange_0000F.jsonl.gz:879581", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44587825" }
5bbb00c0fc0b89157db1be8f333df1010230911e
Stackoverflow Stackexchange Q: Find common factor to convert list of floats to list of integers I have a list of floats which comes from some other function. What I know is that in an ideal world there exists a common factor which can be used to multiply each term to obtain a list of integers. There could be some small numerical noise (~1e-14). So for example

[2.3333333333333335, 4.666666666666667, 1.0, 1.6666666666666667]

here each term can be multiplied by 3 to obtain

[7.0, 14.0, 3.0, 5.0]

How can I find this term? We can assume an integer solution exists. Any helpful comments will be appreciated

A: Python's Fraction type can convert floating points to rationals with denominators under 1000000, and then you can find the lowest common denominator.

>>> from fractions import Fraction
>>> a = [2.3333333333333335, 4.666666666666667, 1.0, 1.6666666666666667]
>>> [Fraction(x).limit_denominator() for x in a]
[Fraction(7, 3), Fraction(14, 3), Fraction(1, 1), Fraction(5, 3)]

A straightforward way to find the least common multiple using the math.gcd function:

>>> denoms = [3,3,1,2]
>>> functools.reduce(lambda a,b: a*b//math.gcd(a,b), denoms)
6
A: The brute force solution. Still looking for something more universal...

def find_int(arr):
    test = False
    epsilon = 1e-15
    maxint = 1000
    for i in range(2, maxint, 1):
        for item in arr:
            if abs(i*item-round(i*item)) < epsilon:
                test = True
            else:
                test = False
                break
        if test:
            print i
            return [int(round(i*item)) for item in arr]
    print "Could not find one"
    return arr
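The Fraction-based answer above can be combined into a single stdlib-only sketch; the helper name is illustrative, not from the question:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def integer_multiplier(values, max_denominator=10**6):
    """Smallest multiplier turning every float into a near-integer:
    the LCM of the denominators of the rationalized values."""
    fracs = [Fraction(v).limit_denominator(max_denominator) for v in values]
    return reduce(lambda a, b: a * b // gcd(a, b),
                  (f.denominator for f in fracs))

vals = [2.3333333333333335, 4.666666666666667, 1.0, 1.6666666666666667]
m = integer_multiplier(vals)
print(m, [round(v * m) for v in vals])  # 3 [7, 14, 3, 5]
```

Rounding after multiplication absorbs the ~1e-14 noise mentioned in the question.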
stackoverflow
{ "language": "en", "length": 231, "provenance": "stackexchange_0000F.jsonl.gz:879602", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44587875" }
b5b2e07f779f5304ad1382afec3b6b06ea716a41
Stackoverflow Stackexchange Q: Flask - Using external blueprints I am working on a big flask project, which supports flask-plugins. I need to add a feature which allows the user to upload files from his own PC. As I did not want to edit the core code of the server, I thought of adding this functionality by creating a plugin. There is a blueprint already present in the core code by the name of blueprint, as follows:

pybossa/views/projects.py

blueprint = Blueprint('projects', __name__)

And it is registered and has a url_prefix set in core.py:

pybossa/core.py

from pybossa.views.projects.py import blueprint as projects
app.register_blueprint(projects, '/projects')

Now I have a plugin called testUploader, and I am importing the blueprint 'projects' as follows:

pybossa/plugins/testUploader/views.py

from pybossa.view.projects import blueprint

@blueprint.route('/test')
def testUpload():
    return("Hello World")

As you can see, I have created a new blueprint route /test

But when I go to localhost:5000/projects/test, I get 404 page not found. Why is the route not working?
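No answer is recorded above. For illustration, here is a minimal self-contained sketch of the most common cause of this kind of 404 (assuming standard Flask blueprint behavior, independent of PyBossa): routes only take effect if they are attached to the blueprint before the blueprint is registered on the app.

```python
from flask import Flask, Blueprint

app = Flask(__name__)
bp = Blueprint('projects', __name__)

@bp.route('/test')            # route attached BEFORE registration
def test_upload():
    return "Hello World"

app.register_blueprint(bp, url_prefix='/projects')

# A route decorated with @bp.route() AFTER register_blueprint() is not
# wired into the app's URL map (recent Flask versions even raise an
# error for it), which produces exactly this kind of 404.

client = app.test_client()
print(client.get('/projects/test').status_code)  # 200
```

A plugin that imports an already-registered blueprint and adds routes to it, as in the question, hits this ordering problem.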
stackoverflow
{ "language": "en", "length": 158, "provenance": "stackexchange_0000F.jsonl.gz:879623", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44587947" }
93e7e98f47030be7a0c952dab791f96868838b96
Stackoverflow Stackexchange Q: PHP iterable to array or Traversable I'm quite happy that PHP 7.1 introduced the iterable pseudo-type. Now while this is great when just looping over a parameter of this type, it is unclear to me what to do when you need to pass it to PHP functions that accept just an array or just a Traversable. For instance, if you want to do an array_diff, and your iterable is a Traversable, you will get an array. Conversely, if you call a function that takes an Iterator, you will get an error if the iterable is an array. Is there something like iterable_to_array (NOT: iterator_to_array) and iterable_to_traversable? I'm looking for a solution that avoids conditionals in my functions just to take care of this difference, and that does not depend on me defining my own global functions. Using PHP 7.1 A: For php >= 7.4 this works pretty well out of the box: $array = [...$iterable]; See https://3v4l.org/L3JNH Edit: Works only as long the iterable doesn't contain string keys
A: Can be done like this:

$array = $iterable instanceof \Traversable ? iterator_to_array($iterable) : (array)$iterable;

A: Is there something like iterable_to_array and iterable_to_traversable

Just add these to your project somewhere, they don't take up a lot of space and give you the exact APIs you asked for.
function iterable_to_array(iterable $it): array {
    if (is_array($it)) return $it;
    $ret = [];
    array_push($ret, ...$it);
    return $ret;
}

function iterable_to_traversable(iterable $it): Traversable {
    yield from $it;
}

A: Terms are easy to mix

* Traversable
* Iterator (I see this as a concrete type, like user-defined class A)
* IteratorAggregate
* iterable (this is a pseudo-type, array or traversable are accepted)
* array (This is a concrete type, and it's not exchangeable with Iterator in context of that a Iterator type is required)
* arrayIterator (can be used to convert array to iterator)

So, that's why if function A(iterable $a){}, then it accepts parameter of either array or an instanceof traversable (Iterator, IteratorAggregate are both accepted because it's obvious these two classes implement Traversable. In my test, passing ArrayIterator also works). In case Iterator type is specified for parameter, passing in an array will cause TypeError.

A: You can use iterator_to_array converting your variable to Traversable first:

$array = iterator_to_array((function() use ($iterable) {yield from $iterable;})());

Conversion method is taken from the comment under this question. Here is working demo.

A: For the "iterable to array" case it seems there is no single function call you can make and that you'll either need to use a conditional in your code or define your own function like this one:

function iterable_to_array( iterable $iterable ): array {
    if ( is_array( $iterable ) ) {
        return $iterable;
    }
    return iterator_to_array( $iterable );
}

For the "iterable to Iterator" case things are much more complicated. Arrays can be easily translated into a Traversable using ArrayIterator. Iterator instances can just be returned as they are. That leaves Traversable instances that are not Iterator. On first glance it looks like you can use IteratorIterator, which takes a Traversable.
However that class is bugged and does not work properly when giving it an IteratorAggregate that returns a Generator. The solution to this problem is too long to post here though I have created a mini-library that contains both conversion functions:

* function iterable_to_iterator( iterable $iterable ): Iterator
* function iterable_to_array( iterable $iterable ): array

See https://github.com/wmde/iterable-functions

A: Not sure this is what are you searching for but this is the shortest way to do it.

$array = [];
array_push ($array, ...$iterable);

I'm not very sure why it works. Just I found your question interesting and I start fiddling with PHP

Full example:

<?php

function some_array(): iterable {
    return [1, 2, 3];
}

function some_generator(): iterable {
    yield 1;
    yield 2;
    yield 3;
}

function foo(iterable $iterable) {
    $array = [];
    array_push ($array, ...$iterable);
    var_dump($array);
}

foo(some_array());
foo(some_generator());

It would be nice if works with function array(), but because it is a language construct is a bit special. It also doesn't preserve keys in assoc arrays.
stackoverflow
{ "language": "en", "length": 739, "provenance": "stackexchange_0000F.jsonl.gz:879629", "question_score": "23", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44587973" }
598a84fef040bf9d687679f56a1add4122003e14
Stackoverflow Stackexchange Q: what is the difference between Callable statement and Prepared Statement in Sql? Can anyone please explain the difference between Callable and Prepared Statement in Sql with any example? A: At the top level you can go by this thought Prepared Statement Instances of PreparedStatement contain an SQL statement that has already been compiled. This is what makes a statement "prepared" Because PreparedStatement objects are precompiled, their execution can be faster than that of Statement objects. The prepared statement is used to execute sql queries Callable Statement A CallableStatement object provides a way to call stored procedures in a standard way for all RDBMSs. A stored procedure is stored in a database; the call to the stored procedure is what a CallableStatement object contains.
stackoverflow
{ "language": "en", "length": 124, "provenance": "stackexchange_0000F.jsonl.gz:879655", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588057" }
f641a0f567f81b77559da20d1797a6ba55875a2a
Stackoverflow Stackexchange Q: .NET Core Api slow response Yesterday I figured that the WebAPI I published is not very fast. Usually every request takes round about 300 ms in a local environment (Release mode). To make sure my own code was not the problem I created a new Web Application in Visual Studio 2017 and picked the "WebAPI" project. When I start the fresh project in release mode and request the values controller, that only returns ["value1","value2"], I get response times fluctuating between 50 and 100 ms. If I create a new WebAPI .NET Core Project I get a worse result: values between 100 and 200 ms. For me those are very bad results for a local environment and with getting this small amount of data.

If you create a fresh project, are response times:

* Faster?
* More stable (not fluctuating between 100 and 200 ms)?

And are there any proposals what I could try to speed up my API. I can not believe that it sometimes takes 200 ms to request literally less than a kilobyte. I mean: even this online API is a lot faster and returns more data: https://jsonplaceholder.typicode.com/posts (ca. 20 ms)
stackoverflow
{ "language": "en", "length": 191, "provenance": "stackexchange_0000F.jsonl.gz:879659", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588067" }
f269e32bb555bfde7a85cb27dccf0a2581d0ee2f
Stackoverflow Stackexchange Q: json-c string with "/" character When my program saves something in json like this:

json_object_object_add(jObj_my, "cats/dogs", json_object_new_double(cats/dogs));

the result in the .json file is:

"cats\/dogs" : some_double_number

How can I avoid it printing "\/" instead of "/"?

A: The json-c library's code in its GitHub repository has a flag to make escaping of / optional. If you do not want the generated string to escape this, use the JSON_C_TO_STRING_NOSLASHESCAPE flag, like this:

#include <stdio.h>
#include <json.h>

int main(int argc, char **argv)
{
    json_object *my_string;

    my_string = json_object_new_string("/foo/bar/baz");
    printf("my_string=%s\n", json_object_get_string(my_string));
    printf("my_string.to_string()=%s\n", json_object_to_json_string(my_string));
    printf("my_string.to_string(NOSLASHESCAPE)=%s\n",
           json_object_to_json_string_ext(my_string, JSON_C_TO_STRING_NOSLASHESCAPE));

    json_object_put(my_string);
    return 0;
}

example adapted from https://github.com/json-c/json-c/blob/master/tests/test1.c#L155

Saving this in slashtest.c, compiling it, and running it produces:

$ gcc -Wall slashtest.c -L/usr/local/lib -l:libjson-c.a -I/usr/local/include/json-c
$ ./a.out
my_string=/foo/bar/baz
my_string.to_string()="\/foo\/bar\/baz"
my_string.to_string(NOSLASHESCAPE)="/foo/bar/baz"

Escaping / in JSON is legal and arguably may be useful, see this post about it: JSON: why are forward slashes escaped?

Note that this flag was added to the library's code in 2015, but somehow the change didn't make it into the latest current json-c-0.12.1 release made on Jun 7, 2016. I am unsure why. So to use it, you will have to get the code from GitHub and compile it.
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:879742", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588310" }
ef58b56920638d42c7b8a38d4fc7fd140f96ec89
Stackoverflow Stackexchange Q: How to find total memory/RAM used by the PostgreSQL database in a Linux machine? When I execute top -u postgres or ps -C postgres -o %cpu,%mem,cmd in a Linux machine I get the list of postgres processes in my database machine. I need a consolidated %cpu and %ram usage by postgres. A: Some details for caching in PostgreSQL and memory usage including disc caching.
stackoverflow
{ "language": "en", "length": 90, "provenance": "stackexchange_0000F.jsonl.gz:879743", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588313" }
80598862dc35143b8a7e220ad9611da574b923bd
Stackoverflow Stackexchange Q: Android Studio 3 - Constraint layout editor broken I'm using Android Studio 3.0 (updated to canary 4 today) on macOS for a side project and recently (not sure really when) the constraint layout editor stopped working properly. Now it just shows a grey window and the blueprint view isn't working at all. Even the properties editor on the right doesn't show the constraints anymore. Here how it looks for a simple layout with just 1 button: I'm using constraint layout 1.0.2 but it fails the same way on 1.1.0-beta1. Any idea what could go wrong? No error are shown in the IDE or in the idea.log Thanks in advance for any help provided :) A: Which gradle version are you using? Also, support lib 26.0.0-beta2 has an issue with studio, if you are using it you should downgrade to beta1 to use the editor.
A: Just change the "Apptheme" to "AppCombat.NoActionBar"

A: downgrading it to beta1 solved my problem

A: Go in build.gradle and change the dependencies to 26.0.0-beta1: After Sync the project... Its works to me!!!!

A: What worked for me was to UPGRADE all my dependencies to the latest version (currently 27.0.0) instead of doing a downgrade. It brings some additional effort to replace the "compile" dependencies (which are deprecated) by "implementation" or "api" and upgrade also some of the libraries. But after all the upgrades the tool worked again perfectly.

A: Adding implementation 'com.android.support.constraint:constraint-layout:1.1.2' to the dependencies and reloading project solved my issue
stackoverflow
{ "language": "en", "length": 247, "provenance": "stackexchange_0000F.jsonl.gz:879756", "question_score": "21", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588343" }
d2d2d0b05fd956ddff4b61714244b45a80bffab4
Stackoverflow Stackexchange Q: (redux / ngrx) Would you recommend to store UI related states? As far as I understand the principle of redux architecture, it's to ease the complexity when application state changes (business logic?). But should one also handle presentation related states the redux way? E.g. whether a sidebar is currently open, or a certain block of information is currently unfolded, etc.? It's done that way in the ngrx/store example page. But since the application logic isn't depending in any way on those presentation states, I can't really see the benefit of it. A: Every developer should decide that according to the requirements of their application. In the example app the state of the sidenav isn't truly relevant, as you correctly pointed out, so it is stored mainly for showcase purposes.
A: I had to do it for several reasons (this is more specific to HTTP request loading status, error status, etc.):

* No way to figure out http status, success or error. When we dispatch a post or load request, it is just a one way flow. It won't tell us if the request succeeded or gave an error. Imagine a post request gave a validation error; we need to store this
* Less code in the component, easy to write unit tests, easy to reuse code, easy to automate code. Services are always easy to test compared to components.
* Update in multiple components: Imagine we are showing a summary on the dashboard and the actual data in the component. We need to show on the dashboard that data is getting updated; not possible if status is stored in the component
* Store application state in localstorage: One scenario where I had to store the status of the UI in local storage. Users' preference of side bar maximized or shortened, text size, etc. Even in these cases I had to use the store.
A: I agree it really depends on your requirements, but we can however consider some generic cases, like managing notifications (in order to display one notification at a time), user connection (a lot of functionality might have a specific UI depending on that), etc.
stackoverflow
{ "language": "en", "length": 359, "provenance": "stackexchange_0000F.jsonl.gz:879777", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588397" }
6ca4ec2d36301db6b5bd5777cd95391e0f5c0789
Stackoverflow Stackexchange Q: How can I read an image from Azure blob storage directly using OpenCV without downloading it to a local file? I want to read an image from Azure blob storage by using OpenCV 3 in Python 2.7. How can I do this without downloading the blob to a local file? A: Per my experience, you can try to use the get_blob_to_bytes method to download the blob as a byte array and convert it to an OpenCV image, as in my sample code below. from azure.storage.blob import BlockBlobService account_name = '<your-storage-account>' account_key = '<your account key>' block_blob_service = BlockBlobService(account_name, account_key) container_name = 'mycontainer' blob_name = 'test.jpg' blob = block_blob_service.get_blob_to_bytes(container_name, blob_name) import numpy as np import cv2 # use numpy to construct an array from the bytes x = np.fromstring(blob.content, dtype='uint8') # decode the array into an image img = cv2.imdecode(x, cv2.IMREAD_UNCHANGED) print img.shape # show it cv2.imshow("Image Window", img) cv2.waitKey(0) Hope it helps.
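The essential move in the answer is that cv2.imdecode reads from an in-memory buffer, so no temp file is ever involved (note that in newer NumPy, np.frombuffer replaces the deprecated np.fromstring). A stdlib-only sketch of a sanity check you might run on blob.content before handing it to the decoder (hypothetical helper, Python 3 syntax):

```python
def looks_like_jpeg(data: bytes) -> bool:
    # JPEG streams start with the SOI marker FF D8 and end with EOI FF D9
    return len(data) >= 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

# In the answer's flow, blob.content would be checked here and then passed
# to np.frombuffer(...) / cv2.imdecode(...) entirely in memory.
```

A failed check usually means the blob is not an image at all (e.g. an error page saved under a .jpg name), which cv2.imdecode would otherwise report only as a None return value.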
stackoverflow
{ "language": "en", "length": 150, "provenance": "stackexchange_0000F.jsonl.gz:879778", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588402" }
1f632b898a305f83433a52706e359e6ff821b8f8
Q: VSCode. Change tab size for markdown code preview It is possible to change tab size for markdown code preview ? Currently 1 tab = 8 spaces. A: If you are referring to the markdown preview window you can apply custom css to manipulate the way it is displayed using the markdown.styles setting. "markdown.styles": [ "Style.css" ] If you are referring to the editor window you can set the tab size for markdown files by editing your settings file: "[markdown]": { "editor.tabSize": 4 } To access the settings file click File -> Preferences -> Settings A: If you want to use that file "https://github.com/SepCode/vscode-markdown-style/blob/master/preview/github.css", we know that "https://raw.githubusercontent.com/SepCode/vscode-markdown-style/master/preview/github.css", the URL is not working. I have a good idea, we can use Github Pages. Add a submodule in your repository, like this "git submodule add https://github.com/SepCode/vscode-markdown-style.git". And now we can use the URL "https://sepcode.github.io/vscode-markdown-style/preview/github.css" set markdown.styles. Step: * *clone your GitHub pages "git clone https://github.com/SepCode/SepCode.github.io.git" *cd SepCode.github.io *git submodule add https://github.com/SepCode/vscode-markdown-style.git *git commit -am 'added vscode-markdown-style module' *git push *setting vscode setting.json { "markdown.styles":["https://sepcode.github.io/vscode-markdown-style/preview/github.css"] } the vscode-markdown-style repository is just an example, we should use ourself's CSS file. This way is more convenient and controllable. A: Maybe you should try! \ $~~~~~~~~~~~$ = 10 spaces
stackoverflow
{ "language": "en", "length": 204, "provenance": "stackexchange_0000F.jsonl.gz:879784", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588421" }
ff8761062ef91a6755073ba7222fd222f796472c
Q: ViewModel is created again for the fragment I create viewmodel in MainFragment: @Override public void onActivityCreated(@Nullable Bundle savedInstanceState) { super.onActivityCreated(savedInstanceState); ... MainViewModel mainViewModel = ViewModelProviders.of(this).get(MainViewModel.class); ... } When user select item then navigate to Details fragment, this transaction is added to backstack. getFragmentManager() .beginTransaction() .replace(R.id.root, Details.newInstance()) .addToBackStack(null) .commit(); When user press back in Details fragment, everything is ok, but if user rotate device in Details fragment and press back then: * *new instance of ViewModel is created for MainFragment *old instance is still alive ( method onCleared not called) Is this a bug in ViewModelProviders? How to fix this? In my opinion ViewModel should be restored. A: This is not really obvious, but when you call addToBackStack, the fragment manager will not destroy your fragment, just stops it, when new replace transaction comes. You basically have two items on the backstack now, both being instances of your Details. Since onDestroy was never called for the first one, its ViewModel's onCleared was never called either. In your case, simply checking if your fragment is currently in the container (e.g. via FragmentManager.findFragment() and NOT replacing it in such situation, should be enough. A: You are use link to fragment but need to Activity use: MainViewModel mainViewModel = ViewModelProviders.of(getActivity()).get(MainViewModel.class); A: This is a confirmed issue. The fix is available in the AndroidX 1.0.0-alpha2 release. https://issuetracker.google.com/issues/73644080
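The accepted answer's key point (addToBackStack stops the fragment, it does not destroy it, so onCleared never fires) can be sketched as a toy simulation. The class and method names below are hypothetical stand-ins for FragmentManager behaviour, not real Android APIs:

```python
class Fragment:
    def __init__(self, name):
        self.name = name
        self.state = "resumed"
        self.viewmodel_cleared = False

    def stop(self):
        # replace() + addToBackStack(): the old fragment is only stopped
        self.state = "stopped"

    def destroy(self):
        # onCleared() of the ViewModel fires only on real destruction
        self.state = "destroyed"
        self.viewmodel_cleared = True


class BackStackManager:
    def __init__(self):
        self.stack = []

    def replace(self, fragment, add_to_back_stack):
        if self.stack:
            top = self.stack[-1]
            if add_to_back_stack:
                top.stop()            # kept alive on the back stack
            else:
                self.stack.pop().destroy()
        self.stack.append(fragment)


fm = BackStackManager()
main = Fragment("Main")
fm.replace(main, add_to_back_stack=False)
fm.replace(Fragment("Details"), add_to_back_stack=True)
```

After the second replace, Main is stopped but never destroyed, which is exactly why its ViewModel lingers; checking for an existing fragment before replacing (as the answer suggests) avoids stacking a second instance.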
stackoverflow
{ "language": "en", "length": 223, "provenance": "stackexchange_0000F.jsonl.gz:879786", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588430" }
fdb1d22ba36c70670362c2ff291143e7c4ac35eb
Stackoverflow Stackexchange Q: What can I do to speed up IntelliJ IDEA when handling akka-http routes? Update of syntax highlighting is incredibly slow, when editing even a simple route definition. Updates take seconds, which really breaks the edit/will-this-compile flow. I didn't have this before. Maybe something's changed. Are others experiencing it? IntelliJ IDEA 2017.1.4 Build #IC-171.4694.23, built on June 6, 2017 JRE: 1.8.0_112-release-736-b21 x86_64 JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o Mac OS X 10.12.5 Scala 2.12.2 akka-http 10.0.7 A: This is a big nuisance for me too. You can disable type-aware highlighting which may help. Click the [T] icon in the lower-right corner.
stackoverflow
{ "language": "en", "length": 103, "provenance": "stackexchange_0000F.jsonl.gz:879790", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588440" }
8b73fe3c54248d0a08b907b3cf259f450e16dcb7
Stackoverflow Stackexchange Q: How to draw multiple curved poly line between two locations in leaflet I want to create multiple curved polylines between two locations in leaflet. I have tried arc.js, but unable to get the result. Please suggest any option. I have added the below code for creating curved lines polyline.on('click', function (e) { for (var i = 1; i <= paths.length; i++) { var curve = L.Polyline.Arc([polylinePoints[0].lat, polylinePoints[0].lng], [polylinePoints[1].lat, polylinePoints[1].lng], { color: 'blue', vertices: i*350 }) pathMap.addLayer(curve); } }) But I am not getting curved lines in the map,I am getting the below result
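arc.js aside, a common way to get several distinct curves between the same two endpoints is to sample quadratic Bezier curves whose control points are offset by different amounts. A language-neutral sketch (plain Python; the perpendicular-offset handling is simplified and the spread value is arbitrary, not a Leaflet API):

```python
def quadratic_bezier(p0, p1, control, steps=32):
    """Sample a quadratic Bezier curve from p0 to p1 bending toward `control`."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * control[0] + t ** 2 * p1[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * control[1] + t ** 2 * p1[1]
        pts.append((x, y))
    return pts

def curves_between(p0, p1, n_curves, spread=0.5):
    """n distinct curves: the midpoint is nudged perpendicular by varying amounts."""
    mx, my = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    out = []
    for k in range(1, n_curves + 1):
        offset = spread * k
        # perpendicular direction is (-dy, dx); not normalised -- fine for a sketch
        out.append(quadratic_bezier(p0, p1, (mx - dy * offset, my + dx * offset)))
    return out
```

Each returned list of (lat, lng) pairs can then be passed to a plain L.polyline(...) call to draw one curve; increasing k pushes each successive curve further out, which is the visual effect the question is after.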
stackoverflow
{ "language": "en", "length": 94, "provenance": "stackexchange_0000F.jsonl.gz:879802", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588488" }
a5e1d027dfa4413ac45537ccca4badb4fedd9301
Q: Why do python modules work in shell but not in the script? I am trying to make two programs. I want one to print the current weather of my city of residence and I want the other one to take data from an online account and return it. For those scripts I import the yweather module and the requests module. When I import them in the shell there are no problems but when I run the script it says "ImportError: No module named yweather". What am I doing wrong? Shell: >>> import requests >>> Script: Traceback (most recent call last): File "/Users/tim/Desktop/login.py", line 1, in <module> import requests ImportError: No module named requests This also happens for the yweather module Thank you A: I have a same problem as you, but with package 'sklearn'. With scikit-learn and sklearn installed, I run import sklearn in a .py file and it returns "ModuleNotFoundError: No module named 'sklearn.ensemble'; 'sklearn' is not a package". It turns out I made a funny mistake. I named the file 'sklearn.py'. So when I import sklearn, it probably just trys to import itself. I shouldn't have named that file 'sklearn.py'. A: Are you sure it is the same version of Python? Try running the following in both the shell and in a script, compare the results. import sys sys.version If they are not identical, you have two versions installed. A: Maybe you are using a virtual environment while in script, and modules are not installed there. A: If you are on Windows , you probably have install Python twice . Did you install Python with Anaconda and install Python independently? When you type Python in the command prompt do you get the same Python version that the one in your interpreter ? If yes then go in your system panel and delete the program for the Python version running on command prompt . 
Otherwise you can check which path you are using in the command prompt: echo %PATH% If it differs, one way of fixing it from the Python interpreter: import sys sys.path.append('your certain directory') Or you can also set a new path in the command prompt with: setx PATH "%PYTHONPATH%;C:\python27" Hope this helps A: I just encountered a similar situation as yours. The module being imported is opensbli. Here is the install guide: https://github.com/opensbli/opensbli/blob/version2.0/docs/installation_guide.pdf The mistake I made is that I first cloned it to /opt/ with sudo, then moved it to a directory owned by my user. Since I cloned it with sudo, the owner of the opensbli directory is root instead of my user, and this seems to be why I can't import it in a script even with sudo. The fix is simple: remove the clone and clone it again without sudo.
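A quick diagnostic for the "works in the shell, fails in the script" symptom is to print the interpreter's identity from both environments and compare, as the first answer suggests. A small sketch:

```python
import sys

def interpreter_report():
    """Return the facts that explain most 'works in shell, not in script' cases."""
    return {
        "executable": sys.executable,        # which python binary is running
        "version": sys.version_info[:3],
        "search_path": list(sys.path),       # where imports are looked up
    }

report = interpreter_report()
```

Paste this into both the interactive shell and the failing script: different `executable` values mean two Python installations are in play, and a site-packages directory present in one `search_path` but not the other explains the missing module.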
stackoverflow
{ "language": "en", "length": 469, "provenance": "stackexchange_0000F.jsonl.gz:879824", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588533" }
be91c7f789dab2d48b92d750a1135406bfa2f834
Q: Is it possible to return the same type as the type parameter in when statement For example: fun <T> f(a: T): T = when (a) { a is Int -> 0 // if T is Int, then return Int a is String -> "" // if T is String, then return String else -> throw RuntimeException() // Otherwise, throw an exception so that the return type does not matter. } It gives compile errors: Error:(3, 20) The integer literal does not conform to the expected type T Error:(4, 23) Type mismatch: inferred type is String but T was expected A: You can cast the result to T afterwards. You won't get any compiler assistance with that and you will get warnings, but at least it does compile: fun <T> f(a: T): T = when { a is Int -> 0 // if T is Int, then return Int a is String -> "" // if T is String, then return String else -> throw RuntimeException() // Otherwise, throw an exception so that the return type does not matter. } as T Note that when (a) here is unnecessary, just when { is enough. A: Currently, when the Kotlin compiler analyzes a function, it does not assume certain particular cases of a type parameter for the parts of the body. Instead, the code that works with a type parameter T is supposed to be correct with any T. Returning an Int where T is expected is not considered safe just because it's not analyzed deep enough to proof that T is always a supertype of Int if the function reaches that branch. One option is just to make an unchecked cast to T, as in @nhaarman's answer, thus expressing that you are sure that the types are correct. 
Another solution is to make several overloads of your function that work with different types: fun f(a: Int) = 1 fun f(a: String) = "" fun f(a: Any): Nothing = throw RuntimeException() In this case, the compiler will choose the function overload based on the argument you pass, in contrast with specializing a single generic function to a certain type argument, and this is a simpler task for the compiler, because it does not involve any type analysis inside a function body. Also, similar question: * *Kotlin reified type parameter doesn't smart cast *Why doesn't smart-cast handle this situation?
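The overload-based answer translates to other languages as well. For instance, Python's functools.singledispatch gives the same per-type dispatch at runtime; a sketch mirroring the Int/String/throw branches:

```python
from functools import singledispatch

@singledispatch
def f(a):
    # fallback branch, analogous to `else -> throw RuntimeException()`
    raise RuntimeError("unsupported type: %r" % type(a))

@f.register(int)
def _(a):
    return 0    # the Int branch

@f.register(str)
def _(a):
    return ""   # the String branch
```

As with the Kotlin overloads, each branch has a concrete return type of its own, so no unchecked cast is needed; the dispatch happens on the runtime type of the first argument.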
stackoverflow
{ "language": "en", "length": 397, "provenance": "stackexchange_0000F.jsonl.gz:879857", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588620" }
dac806bdb918a63ca1a376da44859dddee5bd3ba
Q: Cannot select options on slack dynamic message menu Working on a Slack command I want to have some message menus as attachment. Those need to be dynamically populated, so I registered an options load URL and added the following attachments to my message: [{ "text": "Request's attributes", "fallback": "Upgrade your Slack client to use message like these.", "color": "#3AA3E3", "attachment_type": "default", "callback_id":"some ID", "actions": [{ "name": "priority_list", "text": "Select a priority", "type": "select", "data_source": "external", }, { "name": "status_list", "text": "Select a status", "type": "select", "data_source": "external", }] }] My options load URL is properly called by slack and here is what my server responds: { options: [{ text: 'Low', value: 'low' }, { text: 'Medium', value: 'medium' }, { text: 'High', value: 'high' } ], selected_options: [{ text: 'High', value: 'high' }]} Looking in Slack I can see the options are dynamically populated. However, none of them are selected. I am missing something when describing the selected_options ? A: Probably figured it out by now but you need to pass a header in the JSON with Content-Type as application/json in order for it to populate. A: I think selected_options can only be included in the original interactive message request. This doesn't really make sense to me because since you're generating the options dynamically, you wouldn't necessarily know which ones are coming back beforehand, but at the moment it's the only way I've gotten it to work.
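Independent of the web framework serving the options-load URL, the fix in the first answer boils down to returning the JSON body with a Content-Type: application/json header. A hypothetical handler sketch (the tuple-based return shape is an assumption for illustration, not a Slack API requirement):

```python
import json

def options_response(options, selected=None):
    """Build (status, headers, body) for a Slack options-load request."""
    payload = {"options": [{"text": t, "value": v} for t, v in options]}
    if selected:
        payload["selected_options"] = [{"text": t, "value": v} for t, v in selected]
    body = json.dumps(payload)
    # without this header, Slack may ignore the body entirely
    headers = {"Content-Type": "application/json"}
    return 200, headers, body

status, headers, body = options_response(
    [("Low", "low"), ("Medium", "medium"), ("High", "high")],
    selected=[("High", "high")],
)
```

The same three-level payload as in the question is produced, but the explicit header is what makes the menu populate.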
stackoverflow
{ "language": "en", "length": 238, "provenance": "stackexchange_0000F.jsonl.gz:879891", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588752" }
9574a3175214bc85f97a7caf62476ea237edc72f
Q: MySQL insert multiple rows with some values missing Given INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9); is a standard form for insert, how to insert multiple rows, if some data is missing? e.g. INSERT INTO tbl_name (a,b,c) VALUES(1,'missing','missing'),(4,'missing',6),(7,8,9); Note, table may contain data instead for "missing" values, and it should not be overwritten with "null" or else. A: If you want the same behavior as if the column was omitted in the insert use default: INSERT INTO tbl VALUES (1,default,default),(4,default,6)... If you want empty values just use null: INSERT INTO tbl VALUES (1,null,null),(4,null,6)... A: In that case you can insert null instead of missing : INSERT INTO tbl_name (a,b,c) VALUES(1,null,null),(4,null,6),(7,8,9); This solution will work only if the target fields are nullable or it will throw error A: First, about what you specified: Note, table may contain data instead for "missing" values, and it should not be overwritten with "null" or else. In this regard, if you use only 'INSERT` statements, then it doesn't matter if some data exists already in some form, because an 'INSERT' statement inserts (!) new records, doesn't update records. More of it: let's say you have an id column as a primary key column. If you try to INSERT a record with an 'id' identical with an existing one, then an error will arise: "Duplicate key...". Now, let's say id is not a primary key value. Then, when you try to INSERT a new record with an 'id' identical with an existing one, then the new record will be inserted as duplicate. That said, you can use UPDATE in order to update existing records. In order to not overwrite existing values, you can just omit them in UPDATE statement. Example: table users with columns id, fname, lname: id fname lname 1 John Smith 2 Sam Stevenson UPDATE statement: UPDATE users SET fname='Helen' WHERE id = 2; Results: id fname lname 1 John Smith 2 Helen Stevenson
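The column-omission behaviour described above is easy to verify with the stdlib sqlite3 module. (SQLite accepts NULL values and column omission just like MySQL; unlike MySQL, it does not accept the inline DEFAULT keyword inside a VALUES list, so this sketch uses omission for the default case.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_name (a INTEGER, b INTEGER DEFAULT 42, c INTEGER)")

# Multi-row insert: NULL where the value is genuinely absent
conn.execute("INSERT INTO tbl_name (a, b, c) VALUES (1, NULL, NULL), (7, 8, 9)")

# Omitting a column entirely applies its DEFAULT instead
conn.execute("INSERT INTO tbl_name (a, c) VALUES (4, 6)")

rows = sorted(conn.execute("SELECT a, b, c FROM tbl_name"))
```

The result shows the two behaviours side by side: explicit NULL stores NULL, while an omitted column falls back to its declared default.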
stackoverflow
{ "language": "en", "length": 319, "provenance": "stackexchange_0000F.jsonl.gz:879892", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588762" }
702a6622e20aa26a7c4496c0d50ec1619e1599d1
Stackoverflow Stackexchange Q: java.net.ProtocolException: Too many follow-up requests (FAULTY COOKIE) i'm having this Too many follow-up problem, when i'm trying to make an okhttp3 Request, i found some similar problem's in stackoverflow & GitHub and most of them was pointing to Authentication errors. but in my case it's different here's what i'm doing : Login into a website --> obtain cookie --> make okhttp3 request when i'm doing the above from a desktop it's working fine but i'm getting error on doing the same from my android device i tried manually changing the userAgent of my webView, didn't worked. then i tried to make the same call(From my android device) by using the Cookie which was obtained by Desktop and it worked, so i have this conclusion that Android's Cookie is causing the problem So how can i solve this problem? here's my code: ////LOGIN Completed/// final RequestBody requestBody = new MultipartBody.Builder() .setType(MultipartBody.FORM) .addFormDataPart("key1","value1") .addFormDataPart("key2","value2") .build(); final Request request = new Request.Builder() .addHeader("Cookie",""+mCookie) .url("example.com/request.php") .method("POST", RequestBody.create(null, new byte[0])) .post(requestBody) .build(); final Response response = client.newCall(request).execute();
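"Too many follow-up requests" is OkHttp giving up after a long chain of redirects (its follow-up limit is 20), which is what happens when a server rejects a stale or malformed cookie and bounces every request back to a login page. The header mechanics can be reproduced with any HTTP client; a stdlib Python sketch of attaching a well-formed cookie header (cookie names and values are hypothetical):

```python
from urllib.request import Request

def build_request(url, cookie_pairs):
    """Attach cookies as a single 'name=value; name2=value2' header."""
    cookie_value = "; ".join("%s=%s" % (k, v) for k, v in cookie_pairs)
    req = Request(url, method="POST", data=b"")
    req.add_header("Cookie", cookie_value)
    return req

req = build_request(
    "https://example.com/request.php",
    [("PHPSESSID", "abc123"), ("token", "xyz")],
)
```

Logging the exact Cookie value the Android WebView produced and comparing it character-for-character against the working desktop value (separators, encoding, missing session fields) is usually how this class of problem is pinned down.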
stackoverflow
{ "language": "en", "length": 173, "provenance": "stackexchange_0000F.jsonl.gz:879905", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588796" }
6d1fcf46cfc3feda99b7e3e8cc0b4dab800d5e45
Stackoverflow Stackexchange Q: Assets loading issue on Rails 5 app with Heroku I am facing asset loading issue in Rails 5 application deployed on Heroku. App Configuration is, ruby => ‘2.3.1’ rails => '~> 5.0.1' When image is stored on path, app/assets/home/image1.jpg I am accessing it in view as, = image_tag('/assets/home/image1.jpg’) which is working properly in Development ENV, but not in Production ENV. As per Heroku log, ActionController::RoutingError (No route matches [GET] "/assets/home/image1.jpg") If I am moving image directly to app/assets/image1.jpg then its working on Production ENV. Please guide about it. Thanks A: It looks like you assets are not compile on heroku. Follow below code: config/environments/production.rb config.assets.compile = true then run commands: RAILS_ENV=production rake assets:precompile then push all compiled files with menifest file to heroku.
stackoverflow
{ "language": "en", "length": 124, "provenance": "stackexchange_0000F.jsonl.gz:879933", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588900" }
a11414df032d7e07a1dc3283f7031a691362686d
Q: Firebase Remote Config error 8003, Unity, iOS We have a game in Unity 5.6.1f that uses Firebase Remote Config. Everything works fine on our devices, but after release we are noticing a lot of errors sent from player devices to our error reporting system. Problem occurs only on iOS. Remote Config: Fetch encountered an error: The operation couldn’t be completed. (com.google.remoteconfig.ErrorDomain error 8003.) I can't find a solution anywhere. Thanks! A: According to the documentation the error 8003 is defined as FIRRemoteConfigErrorInternalError = 8003 So the issue is probably not on your side. Added this so it might help someone in the future. A: Like @jd291 said, 8003 documentation seems to point to backend problem: /// Internal error that covers all internal HTTP errors. FIRRemoteConfigErrorInternalError = 8003, A: This problem also occurs if you haven't published your configs on firebase yet.
stackoverflow
{ "language": "en", "length": 142, "provenance": "stackexchange_0000F.jsonl.gz:879947", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44588947" }
22052ffa4a28e3b73346be0a8834bfc73e00f2f2
Q: Method for identifying .onion links in text? How can I identify .onion links in a text, bearing in mind they can come in a variety of ways: hfajlhfjkdsflkdsja.onion http://hfajlhfjkdsflkdsja.onion http://www.hfajlhfjkdsflkdsja.onion I'm thinking of regex, but (.*?.onion) would return the whole paragraph the URL is buried in. A: This will do it: (?:https?://)?(?:www)?(\S*?\.onion)\b (Added non-capturing groups - credit: @WiktorStribiżew) Demo: s = '''hfajlhfjkdsflkdsja.onion https://hfajlhfjkdsflkdsja.onion http://www.hfajlhfjkdsflkdsja.onion https://www.google.com https://stackoverflow.com''' for m in re.finditer(r'(?:https?://)?(?:www)?(\S*?\.onion)\b', s, re.M | re.IGNORECASE): print(m.group(0)) Output hfajlhfjkdsflkdsja.onion https://hfajlhfjkdsflkdsja.onion http://www.hfajlhfjkdsflkdsja.onion A: Quick and easy: ([^\s]+\.onion) Matches all characters starting from the first space till ".onion". A: An approach without regex: url = 'http://hfajlhfjkdsflkdsja.onion' split = url.split('.onion') if len(split) == 2 and len(split[1]) == 0: # do something
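As a hedged aside not in the original answers: the accepted pattern can be wrapped in a small runnable helper. The `(?:www\.)?` group here uses an explicit dot (a slight tightening of the answer's `(?:www)?`), and the function name is illustrative.

```python
import re

# Variant of the answer's pattern, with an explicit dot after the optional
# "www" group so that group only consumes a real "www." prefix.
ONION_RE = re.compile(r'(?:https?://)?(?:www\.)?(\S+?\.onion)\b', re.IGNORECASE)

def find_onion_links(text):
    """Return each .onion link found in `text`, as matched (scheme included)."""
    return [m.group(0) for m in ONION_RE.finditer(text)]
```

Run over the question's samples, this returns the .onion forms (with or without scheme and www) and skips ordinary URLs such as google.com.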
stackoverflow
{ "language": "en", "length": 114, "provenance": "stackexchange_0000F.jsonl.gz:879974", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589000" }
bc121ac72a5f00e3e94cb59a90454031cfd9218b
Q: How to set singular name for a table in gorm type user struct { ID int Username string `gorm:"size:255"` Name string `gorm:"size:255"` } I want to create a table 'user' using this model, but the table name is automatically set to 'users'. I know it is gorm's default behavior, but I want the table name to be 'user'. A: Set the method TableName for your struct. func (user) TableName() string { return "user" } Link: https://gorm.io/docs/models.html#conventions A: Gorm has an in-built method for that which is set at the global level, so all tables will be singular. For gorm v1, you could do: db.SingularTable(true) For v2, it's a little more verbose: db, err := gorm.Open(postgres.Open(connStr), &gorm.Config{ NamingStrategy: schema.NamingStrategy{ SingularTable: true, }, }) A: To explicitly set a table name, you would have to create an interface Tabler with a TableName method, and then create a receiver method (defined in the interface) for the struct: type user struct { ID int Username string `gorm:"size:255"` Name string `gorm:"size:255"` } type Tabler interface { TableName() string } // TableName overrides the table name used by User to `profiles` func (user) TableName() string { return "user" }
stackoverflow
{ "language": "en", "length": 192, "provenance": "stackexchange_0000F.jsonl.gz:879996", "question_score": "36", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589060" }
3cfb5a7f21908b3d30e5115d218000e8a0c32039
Q: How to migrate deployed laravel project on an Amazon web server I deployed a laravel project to an Amazon web server. I used my git repository to deploy it, and I updated composer on the server via a sync.sh file. Now I need to migrate using an artisan command. Here is my sync.sh file #!/bin/bash sudo chmod -R a+w /var/www/****serverName***/public_html/*projectName* sudo php /usr/bin/composer --working-dir=/var/www/*serverName*/public_html/*projectName*/ update A: You can add the following line to your sync.sh file. sudo php /var/www/****serverName***/public_html/projectName/artisan migrate
stackoverflow
{ "language": "en", "length": 76, "provenance": "stackexchange_0000F.jsonl.gz:880001", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589074" }
06bad58c2d6aa22efb5ff8be98e6c247f7e83b75
Q: When should I use the "superproject" pattern? Some libraries, such as LLVM, use a "superproject" pattern, where consumers of the library, such as libcxx, should live inside of the library's folder structure. In the case of LLVM, this is llvm/projects. This seems quite limiting, as it makes it harder to use the library when there are other folder-structure constraints. Why was this decision made, and what are some reasons to use such a layout?
stackoverflow
{ "language": "en", "length": 73, "provenance": "stackexchange_0000F.jsonl.gz:880018", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589127" }
72053346d8b44b6525b81a7207a1af20795c5c2e
Q: Elasticsearch 5.4 Pagination with NEST for .NET I know I'm probably being stupid and just can't see how, but I'm new to elasticsearch. I want to know how I can do simple pagination, like: 1 To 10 Of 123,456 entries I need to know the total hits for the query; that makes sense, right? public long GetCount(SearchModel model) { return _elasticClient.Search<Document>(s => s .Query(q => GetWhere(q, model)) ).Total; } I'm not using From/Size because I want to get the total records for the query (that simple). I tried ISearchResponse.Total, but that ignores the query filters. Any advice will be very appreciated, thanks. A: When you do a simple document search you should probably just use From and Size for paging. The returned result should have a hits.total representation, which is the total number of documents matching your query. The hits collection, though, will only have the 10 documents or whatever you define in (size). Example for From/Size: var response = client.Search<Tweet>(s => s .From(0) .Size(10) .Query(q => q.Term(t => t.User, "kimchy") || q.Match(mq => mq.Field(f => f.User).Query("nest")) ) ); response.HitsMetaData.Total should have the total number of docs found.
stackoverflow
{ "language": "en", "length": 188, "provenance": "stackexchange_0000F.jsonl.gz:880032", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589174" }
61e130e2fbe862f98a19bb08316c482db8fe11b8
Q: Node.js server becomes unresponsive after a certain time period I've recently been having problems with my server, which becomes unresponsive after a certain period of time. Basically, after a certain amount of usage & time my node.js app stops responding to requests. I don't even see routes being fired on my console, and the HTTP calls from my client (Android app) don't reach the server anymore. But after restarting my node.js app server everything starts working again, until things inevitably stop again. The app never crashes, it just stops responding to requests. I'm not getting any errors, and I've made sure to handle and log all DB connection errors, so I'm not sure where to start. Any clue as to what might be happening and how I can solve this problem? Here's my stack: Node.js on Digital Ocean server with Ubuntu 14.04 and Nginx (using Express 4.15.2 + PM2 2.4.6) Database running MySQL (using node-mysql)
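A hedged aside, not from the question or any answer: one common cause of a Node server that stops responding without crashing is a blocked event loop; another is an exhausted DB connection pool whose callbacks never fire. A minimal, dependency-free sketch for detecting the first case — the function names are illustrative, not from any library:

```javascript
// If the event loop is blocked, the setImmediate callback fires late and
// `lag` grows; logging it periodically helps tell "event loop blocked"
// apart from "DB pool exhausted" (where the loop stays responsive).
function measureEventLoopLag(callback) {
  const start = Date.now();
  setImmediate(() => callback(Date.now() - start));
}

// Log the lag every 5 s; unref() so the timer never keeps the process alive.
function startLagMonitor(intervalMs = 5000, thresholdMs = 100) {
  const timer = setInterval(() => {
    measureEventLoopLag((lag) => {
      if (lag > thresholdMs) console.warn(`event loop lag: ${lag} ms`);
    });
  }, intervalMs);
  timer.unref();
  return timer;
}
```

If the monitor stays quiet while requests still vanish, the connection pool (or a proxy timeout in Nginx) is the more likely suspect than the loop itself.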
stackoverflow
{ "language": "en", "length": 154, "provenance": "stackexchange_0000F.jsonl.gz:880056", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589222" }
4cbef0fd19eb4ac4ad7406a0d7319043def3fac5
Q: Eclipse Plugin: Find location of selected project I have been attempting to find the location of a selected project in Eclipse via code from my editor plugin and I have had some success. I have not been able to locate the project in one scenario. When a file using the editor was left open from the last session in Eclipse and Eclipse is reopened, I cannot find the location of the current project without opening and closing the file because this method: Eclipse Plugin: how to get the path to the currently selected project will not work. Any suggestions? Thanks in advance. A: Eclipse doesn't really have a notion of a 'current project'. There is a current selection in each view but most views don't save the selection between sessions. In an editor you probably want the project that the file you are current editing belongs to. In the editor you can use something like: IEditorInput editorInput = getEditorInput(); IFile file = (IFile)editorInput.getAdapter(IFile.class); IProject project = file.getProject(); Note: file might be null if you are editing a file which is not in the workspace. You don't need the (IFile) cast on the most recent versions of Eclipse.
stackoverflow
{ "language": "en", "length": 198, "provenance": "stackexchange_0000F.jsonl.gz:880065", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589251" }
485a08cb319c6eddd8c219bc8b78eb0cd89c9fd5
Q: Javascript - Format date/time with moment.js Here I have used Laravel:5.4 with vue.js:2.x and moment.js. I have used moment to properly display date & time in my vue templates. For example, I have a 2017-06-18 date in my database, so I used: {{ moment(bookingDetail.date).format('MMMM Do YYYY') }}, {{ moment(bookingDetail.date).format('dddd') }} and this gave me a result like: June 18th 2017, Sunday. This works fine. Now, I also want to display time in that vue template, like: 01:00 pm. I have a 13:00:00 time in my database. I tried: {{ moment(bookingDetail.time).format("h:mm a") }} but this did not work for me! Can anyone help me? Any help will be appreciated. Thanks! A: You have to use the moment(String, String) parsing function. In the first case your input is in ISO 8601 format, so it is recognized by moment(String), while in the second case you have to specify the format. In your case, you can use the following code: {{ moment(bookingDetail.time, "HH:mm:ss").format("h:mm a") }} A: I have solved this issue myself using: {{ moment(bookingDetail.time, "HH:mm:ss").format("h:mm a") }} moment gives us the full date & time if we have a proper timestamp in our database: if we have 2017-06-16 15:07:00 in our DB, then it gives us June 16th 2017, Sunday @ 03:07 pm. But if we want to get 03:07 pm from 15:07:00 in the DB, then we need to do: {{ moment(bookingDetail.time, "HH:mm:ss").format("h:mm a") }} Thanks!
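As a hedged footnote not in the original answers: if moment isn't available, a bare "HH:mm:ss" string from the database can be formatted to the same "h:mm a" shape with plain JavaScript; the function name is illustrative.

```javascript
// Format a database "HH:mm:ss" time string as "h:mm am/pm" without moment.
function formatTime12h(hms) {
  const [h, m] = hms.split(":").map(Number);
  const suffix = h >= 12 ? "pm" : "am";
  const h12 = h % 12 === 0 ? 12 : h % 12; // hours 0 and 12 both display as 12
  return `${h12}:${String(m).padStart(2, "0")} ${suffix}`;
}
```

For example, formatTime12h("13:00:00") gives "1:00 pm", matching moment's "h:mm a" token output.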
stackoverflow
{ "language": "en", "length": 241, "provenance": "stackexchange_0000F.jsonl.gz:880074", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589275" }
c58afcdc3b8fe4a43312f39ea30dc1ae268625fe
Q: Install4j : Set Path system variable when the media installer is finished I want to run a script or an action that can set the Path system variable of Windows when my media installer generated by install4j is finished, but I can't find more information on how to do it in the install4j official documentation. So how can I do that using install4j? A: You can use a "Modify an environment variable on Windows" action. Set its "Modification type" property to "append" and the "Variable name" property to "Path".
stackoverflow
{ "language": "en", "length": 88, "provenance": "stackexchange_0000F.jsonl.gz:880084", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589317" }
b0a0b260a6329e6433725ed4860a5741d59109b2
Q: How to convert JSON string into List of Java object? This is my JSON Array: [ { "firstName" : "abc", "lastName" : "xyz" }, { "firstName" : "pqr", "lastName" : "str" } ] I have this in my String object. Now I want to convert it into Java objects and store them in a List of Java objects, e.g. in a Student object. I am using the below code to convert it into a List of Java objects: ObjectMapper mapper = new ObjectMapper(); StudentList studentList = mapper.readValue(jsonString, StudentList.class); My List class is: public class StudentList { private List<Student> participantList = new ArrayList<Student>(); //getters and setters } My Student object is: class Student { String firstName; String lastName; //getters and setters } Am I missing something here? I am getting the below exception: Exception : com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize instance of com.aa.Student out of START_ARRAY token A: You can also use Gson for this scenario. Gson gson = new Gson(); NameList nameList = gson.fromJson(data, NameList.class); List<Name> list = nameList.getList(); Your NameList class could look like: class NameList{ List<Name> list; //getter and setter } A: You can use the class below to read a list of objects. It contains a static method to read a list with some specific object type. It includes the Jdk8Module registration, which provides support for the new time classes too. It is a clean and generic class.
List<Student> students = JsonMapper.readList(jsonString, Student.class); Generic JsonMapper class: import com.fasterxml.jackson.databind.DeserializationFeature; import com.fasterxml.jackson.databind.ObjectMapper; import com.fasterxml.jackson.datatype.jdk8.Jdk8Module; import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule; import java.io.IOException; import java.util.*; import java.util.Collection; public class JsonMapper { public static <T> List<T> readList(String str, Class<T> type) { return readList(str, ArrayList.class, type); } public static <T> List<T> readList(String str, Class<? extends Collection> type, Class<T> elementType) { final ObjectMapper mapper = newMapper(); try { return mapper.readValue(str, mapper.getTypeFactory().constructCollectionType(type, elementType)); } catch (IOException e) { throw new RuntimeException(e); } } private static ObjectMapper newMapper() { final ObjectMapper mapper = new ObjectMapper(); mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false); mapper.registerModule(new JavaTimeModule()); mapper.registerModule(new Jdk8Module()); return mapper; } } A: Use the simple code below; no need to use any other library besides Gson. String list = "your_json_string"; Gson gson = new Gson(); Type listType = new TypeToken<ArrayList<YourClassObject>>() {}.getType(); ArrayList<YourClassObject> users = gson.fromJson(list, listType); A: You are asking Jackson to parse a StudentList. Tell it to parse a List (of students) instead.
Since List is generic, you will typically use a TypeReference: List<Student> participantJsonList = mapper.readValue(jsonString, new TypeReference<List<Student>>(){}); A: For anyone still looking for an answer: 1. Add the jackson-databind library to your build tool, like Gradle or Maven. 2. In your code: ObjectMapper mapper = new ObjectMapper(); List<Student> studentList = new ArrayList<>(); studentList = Arrays.asList(mapper.readValue(jsonStringArray, Student[].class)); A: StudentList studentList = mapper.readValue(jsonString, StudentList.class); Change this to: List<Student> studentList = mapper.readValue(jsonString, new TypeReference<List<Student>>(){}); A: I made a method to do this below, called jsonArrayToObjectList. It's a handy static method that takes a filename, where the file contains an array in JSON form. List<Items> items = jsonArrayToObjectList( "domain/ItemsArray.json", Item.class); public static <T> List<T> jsonArrayToObjectList(String jsonFileName, Class<T> tClass) throws IOException { ObjectMapper mapper = new ObjectMapper(); final File file = ResourceUtils.getFile("classpath:" + jsonFileName); CollectionType listType = mapper.getTypeFactory() .constructCollectionType(ArrayList.class, tClass); List<T> ts = mapper.readValue(file, listType); return ts; } A: I have resolved this by creating the POJO class (Student.class) for the JSON; the main class reads the values from the JSON in the problem.
**Main Class** public static void main(String[] args) throws JsonParseException, JsonMappingException, IOException { String jsonStr = "[ \r\n" + " {\r\n" + " \"firstName\" : \"abc\",\r\n" + " \"lastName\" : \"xyz\"\r\n" + " }, \r\n" + " {\r\n" + " \"firstName\" : \"pqr\",\r\n" + " \"lastName\" : \"str\"\r\n" + " } \r\n" + "]"; ObjectMapper mapper = new ObjectMapper(); List<Student> details = mapper.readValue(jsonStr, new TypeReference<List<Student>>() { }); for (Student itr : details) { System.out.println("Value for getFirstName is: " + itr.getFirstName()); System.out.println("Value for getLastName is: " + itr.getLastName()); } } **RESULT:** Value for getFirstName is: abc Value for getLastName is: xyz Value for getFirstName is: pqr Value for getLastName is: str **Student.class:** public class Student { private String lastName; private String firstName; public String getLastName() { return lastName; } public String getFirstName() { return firstName; } } A: Gson-only solution: the safest way is to iterate over the JSON array via JsonParser.parseString(jsonString).getAsJsonArray() and parse its elements one by one, checking jsonObject.has("key"). import com.google.gson.JsonArray; import com.google.gson.JsonObject; import com.google.gson.JsonParser; import lombok.Data; @Data class Foo { String bar; Double tar; } JsonArray jsonArray = JsonParser.parseString(jsonString).getAsJsonArray(); List<Foo> objects = new ArrayList<>(); jsonArray.forEach(jsonElement -> { objects.add(parseJsonToFoo(jsonElement.getAsJsonObject())); }); Foo parseJsonToFoo(JsonObject jsonObject) { Foo foo = new Foo(); if (jsonObject.has("bar")) { String data = jsonObject.get("bar").getAsString(); foo.setBar(data); } if (jsonObject.has("tar")) { Double data = jsonObject.get("tar").getAsDouble(); foo.setTar(data); } return foo; } A: Try this. It works for me. Hope it works for you too!
List<YOUR_OBJECT> testList = new ArrayList<>(); testList.add(test1); Gson gson = new Gson(); String json = gson.toJson(testList); Type type = new TypeToken<ArrayList<YOUR_OBJECT>>(){}.getType(); ArrayList<YOUR_OBJECT> array = gson.fromJson(json, type);
stackoverflow
{ "language": "en", "length": 803, "provenance": "stackexchange_0000F.jsonl.gz:880102", "question_score": "79", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589381" }
8d3b85f52a2583343dabbf958ef9897281150ed5
Q: Can't access FormControl instance directly. Cannot read property 'invalid' of undefined I cannot access it the same way as in the Angular docs, so I must grab the FormGroup instance first and find the FormControl instance in there. I wonder why? This example works: <form [formGroup]="myForm" (ngSubmit)="onSubmit()"> <div class="form-group"> <label for="username">Username</label> <input type="text" name="username" class="form-control" formControlName="username" > <div *ngIf="myForm.controls.username.invalid" class="alert alert-danger"> username is required </div> </div> While this throws an error (the only difference between these is in the *ngIf statement): <form [formGroup]="myForm" (ngSubmit)="onSubmit()"> <div class="form-group"> <label for="username">Username</label> <input type="text" name="username" class="form-control" formControlName="username" > <div *ngIf="username.invalid" class="alert alert-danger"> username is required </div> </div> Cannot read property 'invalid' of undefined form.component: import {Component} from '@angular/core'; import {FormGroup, FormControl, Validators} from '@angular/forms'; @Component({ selector: 'sign-up', templateUrl: 'app/sign-up.component.html' }) export class SignUpComponent { myForm = new FormGroup({ username: new FormControl('username', Validators.required), password: new FormControl('', Validators.required), }); } A: This works for me: form.component: getFormControl(name) { return this.Form.get(name); } template: <input type="text" name="username" class="form-control" formControlName="username" > <div *ngIf="getFormControl('username').invalid" class="alert alert-danger"> username is required </div> A: It throws an error because you don't have a variable called username or password.
In order to solve this, you could either: * *Store the control in a component variable: TS: @Component({ changeDetection: ChangeDetectionStrategy.OnPush, selector: 'my-app', templateUrl: './app.component.html', styleUrls: [ './app.component.css' ] }) export class AppComponent { readonly usernameCtrl = this.formBuilder.control('username', Validators.required); readonly passwordCtrl = this.formBuilder.control('', Validators.required); readonly formGroup = this.formBuilder.group({ username: this.usernameCtrl, password: this.passwordCtrl }); HTML: <div *ngIf="usernameCtrl.invalid" class="alert alert-danger" > username is required </div> *Use AbstractControl#get to grab the control: HTML: <div *ngIf="formGroup.get('username').invalid" class="alert alert-danger" > username is required </div> *Use AbstractControl#hasError so you'll be able to specify different messages for each existing validation: HTML: <div *ngIf="formGroup.hasError('required', 'username')" class="alert alert-danger" > username is required </div> DEMO A: You can solve this issue using a Form Group and defining the corresponding getters in your controller. In order to achieve this goal: In the controller: 1) Remove the form control variables definition and initialization usernameCtrl: FormControl; passwordCtrl: FormControl; ... this.usernameCtrl = this.formBuilder.control('username',Validators.required); this.passwordCtrl = this.formBuilder.control('', Validators.required); 2) Change the form group initialization to this: ngOnInit() { this.myForm = this.formBuilder.group({ username: ['username', Validators.required], password: ['', Validators.required] }); } 3) Add the getters: get username() { return this.myForm.get('username'); } get password() { return this.myForm.get('password'); } In the template: 1) add a parent div with [formGroup]="myForm" <div [formGroup]="myForm"> ...
</div> 2) change [formControl]="usernameCtrl" to formControlName="username" and *ngIf="usernameCtrl.invalid" to *ngIf="username.invalid" <input type="text" name="username" class="form-control" formControlName="username"> <div *ngIf="username.invalid" class="alert alert-danger"> username is required </div> 3) change [formControl]="passwordCtrl" to formControlName="password" and *ngIf="passwordCtrl.invalid" to *ngIf="password.invalid" <input type="text" name="password" class="form-control" formControlName="password"> <div *ngIf="password.invalid" class="alert alert-danger"> password is required </div> Plunker A: I had the same problem; I added 'this' to "myForm.controls....". It helped me. Instead of: <div *ngIf="myForm.controls.username.invalid" class="alert alert-danger"> username is required </div> Do: <div *ngIf="this.myForm.controls.username.invalid" class="alert alert-danger"> username is required </div> Hope this helped you. A: In the ts file add: get username() { return this.myForm.get('username'); } get password() { return this.myForm.get('password'); } A: Actually, I'm just new to Angular, I've been using it for only a month and am also searching for some answers, hehe, but in your Component add a getter like this: export class SignUpComponent { myForm = new FormGroup({ username: new FormControl('', Validators.required), password: new FormControl('', Validators.required), }); get username(){ return this.myForm.controls['username']; } } A: Another option is to check if username is defined, replacing the following HTML: <div *ngIf="username.invalid" class="alert alert-danger"> with the following, which should also work HTML: <div *ngIf="username !== undefined && username.invalid" class="alert alert-danger">
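A hedged, framework-free illustration of the getter pattern the answers rely on. The stand-in classes below are NOT Angular's real FormControl/FormGroup — they only mimic the name-to-control lookup, to show why the template's username.invalid needs a component getter (or field) to resolve against.

```typescript
// Minimal stand-ins: a control with an `invalid` flag and a group that maps
// names to controls, mirroring the shape the answers use.
class FakeControl {
  constructor(public value: string, public invalid: boolean) {}
}

class FakeGroup {
  private controls = new Map<string, FakeControl>();
  set(name: string, ctrl: FakeControl): void { this.controls.set(name, ctrl); }
  get(name: string): FakeControl | undefined { return this.controls.get(name); }
}

class SignUpComponent {
  myForm = new FakeGroup();
  constructor() {
    // empty value => required-style validation fails => invalid
    this.myForm.set("username", new FakeControl("", true));
  }
  // Without this getter, `username` is undefined on the component, which is
  // exactly the "Cannot read property 'invalid' of undefined" error.
  get username(): FakeControl | undefined {
    return this.myForm.get("username");
  }
}
```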
stackoverflow
{ "language": "en", "length": 567, "provenance": "stackexchange_0000F.jsonl.gz:880114", "question_score": "17", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589415" }
03a7062bb94992ba864c38e7cc2859599ab94163
Stackoverflow Stackexchange Q: Unable to read from s3 bucket using spark val spark = SparkSession .builder() .appName("try1") .master("local") .getOrCreate() val df = spark.read .json("s3n://BUCKET-NAME/FOLDER/FILE.json") .select($"uid").show(5) I have given the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY as environment variables. I face the below error while trying to read from S3. Exception in thread "main" org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: S3 HEAD request failed for '/FOLDER%2FFILE.json' - ResponseCode=400, ResponseMessage=Bad Request I suspect the error is caused due to "/" being converted to "%2F" by some internal function as the error shows '/FOLDER%2FFILE.json' instead of '/FOLDER/FILE.json' A: Your spark (jvm) application cannot read environment variables unless you tell it to, so a quick workaround: spark.sparkContext .hadoopConfiguration.set("fs.s3n.awsAccessKeyId", awsAccessKeyId) spark.sparkContext .hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", awsSecretAccessKey) You'll also need to specify the s3 endpoint: spark.sparkContext .hadoopConfiguration.set("fs.s3a.endpoint", "<<ENDPOINT>>"); To know more about what the AWS S3 Endpoint is, refer to the following documentation: * *AWS Regions and Endpoints. *Working with Amazon S3 Buckets.
Q: Unable to read from s3 bucket using spark val spark = SparkSession .builder() .appName("try1") .master("local") .getOrCreate() val df = spark.read .json("s3n://BUCKET-NAME/FOLDER/FILE.json") .select($"uid").show(5) I have given the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY as environment variables. I face below error while trying to read from S3. Exception in thread "main" org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: S3 HEAD request failed for '/FOLDER%2FFILE.json' - ResponseCode=400, ResponseMessage=Bad Request I suspect the error is caused due to "/" being converted to "%2F" by some internal function as the error shows '/FOLDER%2FFILE.json' instead of '/FOLDER/FILE.json' A: Your spark (jvm) application cannot read environment variable if you don't tell it to, so a quick work around : spark.sparkContext .hadoopConfiguration.set("fs.s3n.awsAccessKeyId", awsAccessKeyId) spark.sparkContext .hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", awsSecretAccessKey) You'll also need to precise the s3 endpoint : spark.sparkContext .hadoopConfiguration.set("fs.s3a.endpoint", "<<ENDPOINT>>"); To know more about what is AWS S3 Endpoint, refer to the following documentation : * *AWS Regions and Endpoints. *Working with Amazon S3 Buckets.
stackoverflow
{ "language": "en", "length": 147, "provenance": "stackexchange_0000F.jsonl.gz:880160", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589563" }
b8d0362b71a1957b61e9b16b50e9e9ab44595e73
Stackoverflow Stackexchange Q: Azure Functions Service Bus Trigger: getting Serialization exception when trying to bind to custom class I'm creating an Azure Function with Service Bus trigger and trying to bind the incoming message to a custom class of mine: public class InputMessage { public string EntityId { get; set; } } public static void Run(InputMessage message, TraceWriter log) { log.Info($"C# ServiceBus trigger function processed message: {message}"); } My message is JSON, e.g. { "EntityId": "1234" } Unfortunately, the binding fails at runtime with the following message: Exception while executing function: Functions.ServiceBusTriggerCSharp1. Microsoft.Azure.WebJobs.Host: One or more errors occurred. Exception binding parameter 'message'. System.Runtime.Serialization: Expecting element 'Submission_x0023_0.InputMessage' from namespace 'http://schemas.datacontract.org/2004/07/'.. Encountered 'Element' with name 'string', namespace 'http://schemas.microsoft.com/2003/10/Serialization/'. . It looks like the runtime tries to deserialize the message with DataContractSerializer. How do I switch the deserialization to JSON? A: The BrokeredMessage which comes to the function must have its ContentType property explicitly set to application/json. If it's not specified, the default DataContractSerializer will be assumed. So, do this when sending the message: var message = new BrokeredMessage(body) { ContentType = "application/json" }; See ServiceBus Serialization Scenarios for details.
Q: Azure Functions Service Bus Trigger: getting Serialization exception when trying to bind to custom class I'm creating an Azure Function with Service Bus trigger and trying to bind the incoming message to a custom class of mine: public class InputMessage { public string EntityId { get; set; } } public static string Run(InputMessage message, TraceWriter log) { log.Info($"C# ServiceBus trigger function processed message: {message}"); } My message is JSON, e.g. { "EntityId": "1234" } Unfortunately, the binding fails at runtime with the following message: Exception while executing function: Functions.ServiceBusTriggerCSharp1. Microsoft.Azure.WebJobs.Host: One or more errors occurred. Exception binding parameter 'message'. System.Runtime.Serialization: Expecting element 'Submission_x0023_0.InputMessage' from namespace 'http://schemas.datacontract.org/2004/07/'.. Encountered 'Element' with name 'string', namespace 'http://schemas.microsoft.com/2003/10/Serialization/'. . It looks like the runtime tries to deserialize the message with DataContractSerializer. How do I switch the deserialization to JSON? A: BrokeredMessage which comes to the function must have ContentType property explicitly set to application/json. If it's not specified, the default DataContractSerializer will be assumed. So, do this when sending the message: var message = new BrokeredMessage(body) { ContentType = "application/json" }; See ServiceBus Serialization Scenarios for details.
stackoverflow
{ "language": "en", "length": 183, "provenance": "stackexchange_0000F.jsonl.gz:880190", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589662" }
4caa76c54b9f185ae0b8179620e0659858ca8d14
Stackoverflow Stackexchange Q: Install new php version in wamp for magento I had faced two problems with Wamp. 1) I cannot upgrade the php version. I tried a recommended way https://john-dugan.com/upgrade-php-wamp/ , but it didn't work. 2) I cannot change the php version from 5.6.16 to 7.0.0 (the default versions of wamp) These problems showed up when I added a Magento framework in my wamp server. The only versions accepted for Magento setup are 5.6.5, 7.0.2, 7.0.4 and 7.0.6. What can I do? A: Via This answer * *Download the binaries on php.net (a ZIP package matching your PC's architecture; that package should contain a php.exe file) *Extract all files into a new folder: C:/wamp/bin/php/php(7.0.0)/ *Copy the wampserver.conf from another php folder (like php/php5.6.16/) to the new folder *Rename the php.ini-development file to phpForApache.ini *Restart WampServer
Q: Install new php version in wamp for magento I had faced two problems with Wamp. 1) I cannot upgrade the php version. I tried a recommended way https://john-dugan.com/upgrade-php-wamp/ , but it didn't work. 2) I cannot change the php version from 5.6.16 to 7.0.0 (the default versions of wamp) These problems showed up when I added a Magento framework in my wamp server. The only version which accepted for Magento setup are 5.6.5, 7.0.2, 7.0.4, 7.0.6 What Can I do? A: Via This answer * *Download binaries on php.net (ZIP package based on your PC bit That package should php.exe file) *Extract all files in a new folder :C:/wamp/bin/php/php(7.0.0)/ *Copy the wampserver.conf from another php folder (like php/php5.6.16/) to the new folder *Rename php.ini-development file to phpForApache.ini *Restart WampServer A: If you are using a version of WAMPServer > 3.0 then there are lots of simple php ADDON installs that you can pick from See SourceForge repo for all the available PHP ADDONS Or the backup Repo which can be easier to navigate than SourceForge. These are simply download and run the installs. It will just add another PHP folder in the usual place in the wamp folder structure. You can then Switch to the new release and back again to older releases using the wampmanager menus. Note: There are also many versions of Apache and MySQL as well. And if you upgrade to WAMPServer 3.0.8 (again a simple download and click the install), you can also add MariaDB installs as well.
stackoverflow
{ "language": "en", "length": 253, "provenance": "stackexchange_0000F.jsonl.gz:880238", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589814" }
0b8863c2759dedaacade8a9913c2c7b962302e58
Stackoverflow Stackexchange Q: Is it good to use jQuery with angular 2+ While searching for "showing bootstrap modal in angular 2" I encountered the following answer: https://stackoverflow.com/a/38271918/1291122 It simply declares jQuery as this: declare var jQuery:any; And uses it to show/hide the modal like this: jQuery("#myModal").modal("hide"); This was the shortest way to achieve what I need in angular 2 (all other answers seemed to make it fairly complex, equivalent to rocket science!) While this is the shortest way, is it the recommended way to do it? And in general is it a good idea to use jQuery with angular 2+? EDIT: My question is different from How to use jQuery with Angular2? because I am asking WHETHER (or not) to use jQuery with angular 2, while that question is about HOW to use jquery with angular 2. I already know and have mentioned How to do it. A: Yes you can use it without any problem. This is the fastest solution, but you can have an even better one in this topic. Basically, it gives a way to have all the methods with the IDE auto complete.
Q: Is it good to use jQuery with angular 2+ While searching for "showing bootsrap modal in angular 2" I encountered following answer: https://stackoverflow.com/a/38271918/1291122 It simply declares jQuery as this: declare var jQuery:any; And uses it to show/hide modal like this: jQuery("#myModal").modal("hide"); This was the shortest way to achieve what I need in angular 2(other all answers seemed to make it fairly complex equivalent to rocket science!) While this is the shortest way, is it the recommended way to do it? And in general is it a good idea to use jQuery with angular 2+? EDIT: My question is different from How to use jQuery with Angular2? Because I am asking WHETHER(or not) to use jQuery with angular 2, while that question is about HOW to use jquery with angular 2. I already know and have mentioned How to do it. A: Yes you can use it without any problem. This is the fastest solution, but you can have an even better one in this topic. Basically, it gives a way to have all the methods with the IDE auto complete. A: I don't see a downside. Especially in cases where you use external libraries. However, just don't revert to Jquery trying to solve angular problems like templating etc.
stackoverflow
{ "language": "en", "length": 209, "provenance": "stackexchange_0000F.jsonl.gz:880301", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44589987" }
1010ce91bb4d2c5ce5d022e4fbe5dae342d2c9b1
Stackoverflow Stackexchange Q: Add new fields to Devise model. Rails 5 So I wanna add some new fields to my Devise model User. When I was using Rails 3 I just added new fields to the model and added those fields to Model.rb attr_accessible :name, :etc and then I changed the Registration view. Now I've done the same, but I don't have a Devise/User controller so I can't do something like this def user_params params.require(:users).permit(:name) end Or attr_accessible :name, :etc A: There are three parts to this: * *Generate the migration to add the fields to your database schema. rails generate migration add_name_to_users name:string *You need to add the ability to add/edit the name in the registration/edit forms for your user (which it seems like you've already done in your first code sample). *You need to add the strong params to your controller (which you've already done in your second code sample). Basically it seems like you haven't generated the migration. Are you getting any error messages?
Q: Add new fields to Devise model. Rails 5 So I wanna add some new fields to my Devise model User. When I was using Rails 3 I just added new fields to model and added those fields to Model.rb attr_accessible :name, :etc and then I changed Registration view. Now I've done the same, but I haven't Devise/User controller so I can't do something like this def user_params params.require(:users).permit(:name) end Or attr_accessible :name, :etc A: There are three parts to this: * *Generate the migration to the table to add the fields to your database schema. rails generate migration add_name_to_users name:string *You need to add the ability to add/edit the name in the registration/edit forms for your user. (which it seems like you've already done in your first code sample) *You need to add the strong params that you added to your controller (which you've already done in your second code sample. Basically it seems like you haven't generated the migration. Are you getting any error messages? A: Since Devise 4, the Parameter Sanitaizer API has changed: class ApplicationController < ActionController::Base before_action :configure_permitted_parameters, if: :devise_controller? protected def configure_permitted_parameters devise_parameter_sanitizer.permit(:sign_up, keys: [:username]) end end
stackoverflow
{ "language": "en", "length": 193, "provenance": "stackexchange_0000F.jsonl.gz:880324", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590059" }
f35b610e3e4af04fdbce6dcabbbc30207299499a
Stackoverflow Stackexchange Q: Running a Docker file stored locally I have a docker file that includes a python script. So far I pushed it to my github and from there to my docker hub. Afterwards I used the command docker pull name/repo I would like to store the docker file I created with a text editor in a folder together with the python script and execute it with a command like docker run c:/pathtodockerfile Is that possible? A: The process to run a Dockerfile is: docker build . -t [tag] -f /path/to/Dockerfile And then: docker run -d tag
Q: Running a Docker file stored locally I have a docker file that includes a python script. So far I pushed it to my github and from there to my docker hub. Afterwards I used the command docker pull name/repo I would like to store the docker file I created with a text editor in a folder together with the python scrip and execute it with a command like docker run c:/pathtodockerfile Is that possible? A: The process to run Dockerfile is: docker build . -t [tag] -f /path/to/Dockerfile And then: docker run -d tag A: This works well in macOS Monterey docker build . --tag "cms" --file /Users/anjum/cms/Dockerfile [+] Building 19.5s (10/13) => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 1.15kB 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 367B 0.0s => [internal] load metadata for docker.io/library/node:12-slim 8.4s => [auth] library/node:pull token for registry-1.docker.io 0.0s => [1/8] FROM docker.io/library/node:12-slim@sha256:f23d5785b19e65224f2cd35f3cc6207d4de147d12d75b52d0dc8af2d507c7f51 9.2s => => resolve docker.io/library/node:12-slim@sha256:f23d5785b19e65224f2cd35f3cc6207d4de147d12d75b52d0dc8af2d507c7f51 0.0s => => sha256:b62f8e7e96f8d4168d3b9c1d1262ef7d8f78910f1e19cfedd1b7ce3ab5b7dffd 1.37kB / 1.37kB 0.0s => => sha256:0eae9e0410e240f60b70b757f3c298786d7e334af266798171c11e4dbc1cc4d8 6.89kB / 6.89kB 0.0s => => sha256:eec53b8a5053c739b5b685cb372b38eea3286ab6626532bad963291f76357c5f 22.53MB / 22.53MB 3.9s => => sha256:d72ba3acf6e599d15655c136900ccf28e9e3810f1f483753ba1109351ff4e64f 4.17kB / 4.17kB 0.8s => => sha256:5f97dde1af90835dff237728a22841d5967334a532b71898a151f2eb4ea51fb8 24.22MB / 24.22MB 3.9s => => sha256:f23d5785b19e65224f2cd35f3cc6207d4de147d12d75b52d0dc8af2d507c7f51 776B / 776B 0.0s => => sha256:9a16f71a1d56dbab6c2b8ef8fd1b530f808e847ddcc4e12f411abddcdf9d4b3d 2.78MB / 2.78MB 2.6s => => sha256:053d0f2346070637fbb7ee095998009bf08a45549649603a2dbb2ccb40c73d70 461B / 461B 3.2s => => extracting 
sha256:eec53b8a5053c739b5b685cb372b38eea3286ab6626532bad963291f76357c5f 1.9s => => extracting sha256:d72ba3acf6e599d15655c136900ccf28e9e3810f1f483753ba1109351ff4e64f 0.1s => => extracting sha256:5f97dde1af90835dff237728a22841d5967334a532b71898a151f2eb4ea51fb8 1.9s => => extracting sha256:9a16f71a1d56dbab6c2b8ef8fd1b530f808e847ddcc4e12f411abddcdf9d4b3d 0.2s => => extracting sha256:053d0f2346070637fbb7ee095998009bf08a45549649603a2dbb2ccb40c73d70 0.0s => [internal] load build context 4.7s => => transferring context: 77.76MB 4.6s => [2/8] WORKDIR /usr/src/app 0.3s => [3/8] COPY package.json ./ 0.0s => [4/8] RUN npm set unsafe-perm true
stackoverflow
{ "language": "en", "length": 273, "provenance": "stackexchange_0000F.jsonl.gz:880332", "question_score": "25", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590084" }
b0338d4ffec75547ca2378cce9078dfed6d53851
Stackoverflow Stackexchange Q: Deprecate normal argument name change in function signature? I want to change the normal argument name in a function. For the function func, initially with signature: def func(vect): # code goes here And the new signature looks like: def func(vector): # code goes here Do I need to raise a deprecation warning if I make changes like a simple change in variable names in the function signature, which is just a normal arg, neither *args nor **kwargs? The main point being: do I expect the user to be using something like c = func(vect="lol") in this code?
Q: Deprecate normal argument name change in function signature? I want to change the normal argument name in a function. For the function func, initially with signature: def func(vect): /* code goes here */ And the new signature looks like: def func(vector): /* code goes here */ Do I need to raise a deprecation warning if I make changes like just simple change in variable names in function signature which is just a normal arg, neither *args nor **kwargs. The main point being do I expect the user to be using something like c = func(vect="lol") in this code?
stackoverflow
{ "language": "en", "length": 99, "provenance": "stackexchange_0000F.jsonl.gz:880346", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590140" }
3fc25d1344257e82f63db6ecab06d3fbaf164826
Stackoverflow Stackexchange Q: Adding attribute directives dynamically in Angular 2 I want to add attribute directives dynamically to an element. How to do this? Let's say I have created an element and want to add attribute directives "dir1", "dir2", "dir3" to that element. <div dir1 dir2 dir3></div>
Q: Adding attribute directives dynamically in Angular 2 I want to add attribute directives dynamically to a element. How to do this ? let say I have created a element and wants to add attribute directives "dir1", "dir2", "dir3" to that element. <div dir1 dir2 dir3></div>
stackoverflow
{ "language": "en", "length": 46, "provenance": "stackexchange_0000F.jsonl.gz:880400", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590289" }
d5211f2aac857a2092bd484251b4c130649fca92
Stackoverflow Stackexchange Q: Webpack compile all files in a folder So I'm using Laravel 5.4 and I use webpack to compile multiple .js files into 1 big js file. const { mix } = require('laravel-mix'); // Compile all CSS files from the theme mix.styles([ 'resources/assets/theme/css/bootstrap.min.css', 'resources/assets/theme/css/main.css', 'resources/assets/theme/css/plugins.css', 'resources/assets/theme/css/themes.css', 'resources/assets/theme/css/themes/emerald.css', 'resources/assets/theme/css/font-awesome.min.css', ], 'public/css/theme.css'); // Compile all JS files from the theme mix.scripts([ 'resources/assets/theme/js/bootstrap.min.js', 'resources/assets/theme/js/app.js', 'resources/assets/theme/js/modernizr.js', 'resources/assets/theme/js/plugins.js', ], 'public/js/theme.js'); This is my webpack.mix.js to do it (same for css). But I want to get something like: resources/assets/theme/js/* to get all files from a folder. So when I make a new js file in the folder, webpack automatically finds it and compiles it when I run the command. Does someone know how to do this? Thanks for helping. A: Wildcards are actually allowed using the mix.scripts() method, as confirmed by the creator in this issue. So your call should look like this: mix.scripts( 'resources/assets/theme/js/*.js', 'public/js/theme.js'); I presume it works the same for styles, since they use the same method to combine the files. Hope this helps you.
Q: Webpack compile all files in a folder So I'm using Laravel 5.4 and I use webpack to compile multiple .js files in 1 big js file. const { mix } = require('laravel-mix'); // Compile all CSS file from the theme mix.styles([ 'resources/assets/theme/css/bootstrap.min.css', 'resources/assets/theme/css/main.css', 'resources/assets/theme/css/plugins.css', 'resources/assets/theme/css/themes.css', 'resources/assets/theme/css/themes/emerald.css', 'resources/assets/theme/css/font-awesome.min.css', ], 'public/css/theme.css'); // Compile all JS file from the theme mix.scripts([ 'resources/assets/theme/js/bootstrap.min.js', 'resources/assets/theme/js/app.js', 'resources/assets/theme/js/modernizr.js', 'resources/assets/theme/js/plugins.js', ], 'public/js/theme.js'); This is my webpack.mix.js to do it (same for css). But I want to get something like: resources/assets/theme/js/* to get all files from a folder. So when I make a new js file in the folder that webpack automatically finds it, and compile it when I run the command. Does someone know how to this? Thanks for helping. A: Wildcards are actually allowed using the mix.scripts() method, as confirmed by the creator in this issue. So your call should look like this: mix.scripts( 'resources/assets/theme/js/*.js', 'public/js/theme.js'); I presume it works the same for styles, since they use the same method to combine the files. Hope this helps you. A: If anyone wants the code to compile all sass/less/js files in a directory to a different directory with the same filename you can use this: // webpack.mix.js let fs = require('fs'); let getFiles = function (dir) { // get all 'files' in this directory // filter directories return fs.readdirSync(dir).filter(file => { return fs.statSync(`${dir}/${file}`).isFile(); }); }; getFiles('directory').forEach(function (filepath) { mix.js('directory/' + filepath, 'js'); });
stackoverflow
{ "language": "en", "length": 236, "provenance": "stackexchange_0000F.jsonl.gz:880419", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590340" }
af608f9383d520e8a4cdaaca9e5b69911058bacf
Stackoverflow Stackexchange Q: one route 2 controllers - Laravel I have a route, and based on the term I need to call the appropriate controller, e.g. Route::get('/{term}','UserController') Route::get('/{term}','BrandController') I want to achieve something like this. What the term holds is a name (string); based on this string it belongs either to the User table or the brand table. How can I achieve something like this using the Service Container, so that before deciding on which route to take, based on the term, if it belongs to the USER class the UserController should be called, or if it belongs to the brand class the BrandController route should be taken? Any help will be appreciated. Thanks A: Create middleware IsBrand, & check if the brand exists. Route::group(['middleware' => 'IsBrand'], function () { Route::get('{term}', 'BrandController'); }); Same goes for IsUser. Route::group(['middleware' => 'IsUser'], function () { Route::get('{term}', 'UserController'); }); Use php artisan make:middleware IsBrand to create the middleware. This command will place a new IsBrand class within your app/Http/Middleware directory. <?php namespace App\Http\Middleware; use Closure; class IsBrand { /** * Handle an incoming request. * * @param \Illuminate\Http\Request $request * @param \Closure $next * @return mixed */ public function handle($request, Closure $next) { if (App\Brand::where('brand_name', $request->route('term'))->count()) { return $next($request); } } }
Q: one route 2 controllers - Laravel I have a route, based on the term i need to call the appropriate controller. for e.g Route::get('/{term}','Usercontroller') Route::get('/{term}','brandcontroller') i want to achieve some thing like this. what the term holds is a name(string), based on this string it either belongs to User table or brand table. how can i achieve something like this using Service Container. that before deciding on which route to take, based on the term if it belongs to USER class, the usercontroller should be called, or if it belongs to brand class , brandcontroller route should be taken. Any help will be appreciated. Thanks A: Create middleware IsBrand, & check if brand exists? Route::group(['middleware' => 'IsBrand'], function () { Route::get('{term}', 'BrandController'); }); Same goes for IsUser. Route::group(['middleware' => 'IsUser'], function () { Route::get('{term}', 'UserController'); }); Use php artisan make:middleware IsBrand to create middleware. This command will place a new IsBrand class within your app/Http/Middleware directory. <?php namespace App\Http\Middleware; use Closure; class IsBrand { /** * Handle an incoming request. * * @param \Illuminate\Http\Request $request * @param \Closure $next * @return mixed */ public function handle($request, Closure $next) { if (App\Brand::where('brand_name', $term)->count())) { return $next($request); } } }
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:880447", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590420" }
10477b7c3631ceffd0fe16d6efab88de810ab293
Stackoverflow Stackexchange Q: Overlapping Elements in a List Xamarin Forms I am trying to get the elements of the listview to overlap slightly. Is this possible in the standard listview? Here is my Xaml: <StackLayout Padding="10, 10, 10, 10"> <StackLayout> <ListView HasUnevenRows="True" SeparatorColor="Blue" x:Name="cardsList"> </ListView> </StackLayout> </StackLayout> and this is my code behind: var cards = new List<string> {"card 1","card 2","card 3","card 4" }; // stacklayout.Children.Add(listView); cardsList.ItemsSource = cards; cardsList.RowHeight = 150; The code is very minimal and this is only a proof of concept. This gives the standard list with some height, but not quite what I'm looking for. This is what I'm trying to achieve: I realise that I'm not going to be able to get exactly that, or probably even close to it, but this kind of effect. It doesn't need to be a listview; it can be a grid or even a nuget package that does it. Any ideas?
Q: Overlapping Elements in a List Xamarin Forms I am trying to get the elements of the listview to overlap slightly. Is this possible in the standard listview? Here is my Xaml: <StackLayout Padding="10, 10, 10, 10"> <StackLayout> <ListView HasUnevenRows="True" SeparatorColor="Blue" x:Name="cardsList"> </ListView> </StackLayout> </StackLayout> and this is my code behind: var cards = new List<string> {"card 1","card 2","card 3","card 4" }; // stacklayout.Children.Add(listView); cardsList.ItemsSource = cards; cardsList.RowHeight = 150; the code is very minimal and this is only a proof of concept. This gives the standard list with some height but not quite what im looking for. This is what im trying to achieve: I realise that I'm not going to be able to get that exactly that or probably close to it. But this kind of effect. It doesnt need to be a listview, it can be a grid or even a nuget package that does it. Any ideas?
stackoverflow
{ "language": "en", "length": 151, "provenance": "stackexchange_0000F.jsonl.gz:880453", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590439" }
e8e05b654960344953f57d4f8aa4c7105f899b92
Stackoverflow Stackexchange Q: Connecting Bing Ads API using node-soap I am trying to connect to the bing ads soap api using node-soap. I have created the request as suggested in the bing documentation, but each time I try to connect the response states Invalid credentials (Error code - 105) Message - Authentication failed. Either supplied credentials are invalid or the account is inactive. I was able to authenticate the API using the sample C# code provided by bing. So, it's clear that the credentials/token are working perfectly fine. Is there a way to identify the issue with my approach or in my node code? soap.createClient(url, function (err, client) { if (err) { console.log("err", err); } else { client.addSoapHeader({ 'AuthenticationToken': '<AuthenticationToken>', 'DeveloperToken': '<DeveloperToken>', 'CustomerId': '<CustomerId>', 'CustomerAccountId': '<CustomerAccountId>', }); client.SubmitGenerateReport(args, function (err, result) { if (err) { console.log("err", err.body); } else { console.log(result); } }); } }); PS: Bing Documentation Sucks. Hail Stackoverflow! A: You need to prefix each key in your headers with tns, e.g: tns:AuthenticationToken
Q: Connecting Bing Ads API using node-soap I am trying to connect to bing ads soap api using node-soap. I have created the request as suggested in bing documentation. But each time I try to connect the response states the Invalid credentials (Error code - 105) Message - Authentication failed. Either supplied credentials are invalid or the account is inactive. I was able to authenticate the API using sample C# code provided by bing. So, its clear that the credentials/token are working perfectly fine. Is there a way to identify the issue with my approach or in my node code. soap.createClient(url, function (err, client) { if (err) { console.log("err", err); } else { client.addSoapHeader({ 'AuthenticationToken': '<AuthenticationToken>', 'DeveloperToken': '<DeveloperToken>', 'CustomerId': '<CustomerId>', 'CustomerAccountId': '<CustomerAccountId>', }); client.SubmitGenerateReport(args, function (err, result) { if (err) { console.log("err", err.body); } else { console.log(result); } }); } }); PS: Bing Documentation Sucks. Hail Stackoverflow! A: You need to prefix each key in your headers with tns, e.g: tns:AuthenticationToken
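The accepted fix boils down to renaming each header key with a `tns:` prefix before passing it to `addSoapHeader`. A small helper sketching that rewrite (the `withTnsPrefix` function is hypothetical, not part of node-soap):

```javascript
// Rewrite a plain header object into the "tns:"-prefixed form the
// Bing Ads SOAP endpoint expects in the header element names.
function withTnsPrefix(headers) {
  const out = {};
  for (const key of Object.keys(headers)) {
    out['tns:' + key] = headers[key];
  }
  return out;
}

const prefixed = withTnsPrefix({
  AuthenticationToken: '<AuthenticationToken>',
  DeveloperToken: '<DeveloperToken>',
  CustomerId: '<CustomerId>',
  CustomerAccountId: '<CustomerAccountId>',
});

console.log(Object.keys(prefixed));
// [ 'tns:AuthenticationToken', 'tns:DeveloperToken',
//   'tns:CustomerId', 'tns:CustomerAccountId' ]
```

With that in place, the call in the question becomes `client.addSoapHeader(withTnsPrefix({...}))`, so the credentials land in the element names the service validates.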
stackoverflow
{ "language": "en", "length": 161, "provenance": "stackexchange_0000F.jsonl.gz:880463", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590474" }
c328c1b7d8d9cf012f557ef450085a074dbbe1f8
Stackoverflow Stackexchange Q: How do we slice individual items in a javascript list Input : dates = [201701, 201702, 201703] I want the output as [2017-01, 2017-02, 2017-03] I tried using the slice method in javascript, but it fails for (var i in dates) { dates[i].slice(0, 4) + "-" + dates[i].slice(4); } It fails. A: You just forgot toString(): var dates = [201701, 201702, 201703]; for (var i = 0; i < dates.length; i++) { console.log(dates[i].toString().slice(0, 4) + "-" + dates[i].toString().slice(4)); }
Q: How do we slice individual items in a javascript list Input : dates = [201701, 201702, 201703] I want the output as [2017-01, 2017-02, 2017-03] I tried using the slice method in javascript, but it fails for (var i in dates) { dates[i].slice(0, 4) + "-" + dates[i].slice(4); } It fails. A: You just forgot toString(): var dates = [201701, 201702, 201703]; for (var i = 0; i < dates.length; i++) { console.log(dates[i].toString().slice(0, 4) + "-" + dates[i].toString().slice(4)); } A: You could use Number#toString and String#replace for the wanted dates. var dates = [201701, 201702, 201703], result = dates.map(a => a.toString().replace(/(?=..$)/, '-')); console.log(result); Or use String#split. var dates = [201701, 201702, 201703], result = dates.map(a => a.toString().split(/(?=..$)/).join('-')); console.log(result); Both examples with ES5 var dates = [201701, 201702, 201703]; console.log(dates.map(function (a) { return a.toString().replace(/(?=..$)/, '-'); })); console.log(dates.map(function (a) { return a.toString().split(/(?=..$)/).join('-'); }));
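The failing loop in the question reduces to a single `map` once each number is converted to a string, combining the `toString`/`slice` fix from the answers above:

```javascript
// Turn a yyyymm number (e.g. 201701) into a "yyyy-mm" string.
// slice works on strings, not numbers, hence the String() conversion first.
function formatYearMonth(yyyymm) {
  const s = String(yyyymm);
  return s.slice(0, 4) + '-' + s.slice(4);
}

const dates = [201701, 201702, 201703];
console.log(dates.map(formatYearMonth)); // [ '2017-01', '2017-02', '2017-03' ]
```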
stackoverflow
{ "language": "en", "length": 142, "provenance": "stackexchange_0000F.jsonl.gz:880520", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590657" }
2593073c5690ec271db100808d30ab05c28f425f
Q: react-native get current Day I'm trying to get the current day/month/year in an Android app in react-native. This is what I've done: currentDay: new Date(), ... console.log('Date:'+this.state.currentDay ,); console.log('Day: '+this.state.currentDay.getDay() , 'Month: ' + this.state.currentDay.getMonth(), 'Year :'+ this.state.currentDay.getYear()); And in the console I have: Date:Fri Jun 16 2017 11:27:36 GMT+0000 (GMT) 'Day: 5', 'Month: 5', 'Year :117' As you see, getDay(), getMonth() and getYear() don't return what I want... A: You can use this : var today = new Date(); date=today.getDate() + "/"+ parseInt(today.getMonth()+1) +"/"+ today.getFullYear(); console.log(date);
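The values in the question's log are the documented behaviour of those getters, which a fixed date makes visible. A small sketch using the date from the question's output (June 16 2017, a Friday):

```javascript
// June 16 2017 -- note the month argument of the Date
// constructor is 0-based, so June is 5
const d = new Date(2017, 5, 16);

console.log(d.getDay());      // 5    -> day of the WEEK (0 = Sunday), Friday here
console.log(d.getMonth());    // 5    -> 0-based month, so June
console.log(d.getYear());     // 117  -> years since 1900 (deprecated getter)
console.log(d.getDate());     // 16   -> day of the MONTH, what the question wanted
console.log(d.getFullYear()); // 2017
```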
stackoverflow
{ "language": "en", "length": 88, "provenance": "stackexchange_0000F.jsonl.gz:880532", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590692" }
ab0cef7406dd31104c358ef6ba5617ec81cfbe00
Q: Java memory profiling in AWS Lambda Is it possible to profile memory usage of Java projects in AWS Lambda with classes or packages of all objects on the heap? Heap memory profiling: A: Since August 2017, AWS has provided SAM Local, which allows Lambda functions to be run locally. https://docs.aws.amazon.com/lambda/latest/dg/sam-cli-requirements.html The AWS SAM CLI is a command line tool that operates on an AWS SAM template and application code. With the AWS SAM CLI, you can invoke Lambda functions locally, create a deployment package for your serverless application, deploy your serverless application to the AWS Cloud, and so on. A: You need Docker to start your Lambda locally. If you are in the admin/docker group, you can connect to the process with your profiler.
stackoverflow
{ "language": "en", "length": 127, "provenance": "stackexchange_0000F.jsonl.gz:880539", "question_score": "12", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590713" }
189cbf8dd513addd59a6ea90dc317552a262d1cb
Q: Convert String in React Components I have a string which is actually a nested React component's name with its props. I want to render that string inside another React component. For example var String = "<parentComponent title='Parent'><childComponent age='23'></childComponent></parentComponent>" React Code: - class MainComponent extends Component{ render(){ let stringComponents = "<parentComponent title='Parent'><childComponent age='23'></childComponent></parentComponent>"; return( {stringComponents} ) } } Parent Component JSX : - class ParentComponent extends Component{ render(){ return( <div> {this.props.title}</div> ) } } Child Component JSX : - class ChildComponent extends Component{ render(){ return( <div> {this.props.age}</div> ) } } Please help.. A: Thanks a lot to everyone for helping me. The library 'react-jsx-parser' solved my problem. A: That string has JSX content, so you can directly render JSX contents; also the component names must begin with an uppercase character. See this answer: React - Adding component after AJAX to view class MainComponent extends Component{ render(){ let stringComponents = <ParentComponent title='Parent'><ChildComponent age='23'></ChildComponent></ParentComponent>; return( <div>{stringComponents}</div> ) } } In case you cannot use a direct JSX element, you can transform your string to JSX using Babel. However it's not a good idea; you should rather modify your logic. You can do it as follows import babel from 'babel-core'; var Component = eval(babel.transform("<ParentComponent title='Parent'><ChildComponent age='23'></ChildComponent></ParentComponent>").code);
stackoverflow
{ "language": "en", "length": 201, "provenance": "stackexchange_0000F.jsonl.gz:880545", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590731" }
0f27bb3ce9c2e323a81222fe0bab4e9b603020f6
Q: Is it good practice to use exec with PHP? I was looking for a PHP library to convert Docx files to PDF files and didn't find anything free and stable, but I came across some good bash scripts that ran pretty well. So I was wondering if it is considered okay to use exec() or shell_exec() to run a shell script that would accomplish a task instead of coding it in PHP? If it's not, what are the cons of this method? A: Yes, you can use exec() or shell_exec(). The problem is not in using these commands. The problem arises when you take input from a user and use it directly in the command without verifying and escaping it (e.g. with escapeshellarg() or escapeshellcmd()).
stackoverflow
{ "language": "en", "length": 123, "provenance": "stackexchange_0000F.jsonl.gz:880559", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590763" }
057ae3d4bacba415b3b3e454b935b9575624457e
Q: MySQL in docker-compose -- access denied I try to start a MySQL server with docker-compose. Here is the docker-compose.yaml part: mysql: restart: always image: mysql:latest ports: - "3306:3306" volumes: - /Users/user/Documents/.docker/mysql/config:/etc/mysql/ - /Users/user/Documents/.docker/mysql/data:/var/lib/mysql environment: - MYSQL_ROOT_PASSWORD='123' - MYSQL_ROOT_HOST='172.18.0.1' You see I've specified the root password and host as it is said here. Then I try to connect to the db (using IntelliJ IDEA if that matters): jdbc:mysql://localhost:3306/?user=root&password=123&ssl=false But it doesn't accept the credentials and writes to the log: Access denied for user 'root'@'172.18.0.1' (using password: YES) Please advise on how to fix it. Thanks. A: Most likely you have initialized the mysql data directory when these were different: environment: - MYSQL_ROOT_PASSWORD='123' - MYSQL_ROOT_HOST='172.18.0.1' The MySQL image only honors those vars when the /var/lib/mysql directory is first created. So if you don't care about the data, empty your volume /Users/user/Documents/.docker/mysql/data, or change the credentials manually from the mysql terminal. A: If not in production, you can also use the below with docker run: -e MYSQL_ROOT_HOST=% Also, it is better to create your own Docker network. A: In my case dataSource.setCatalog(...) helped.
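Note also that with the list form of environment, docker-compose passes everything after the = through literally, so MYSQL_ROOT_PASSWORD='123' sets the password to '123' including the quote characters. A corrected fragment (assuming the intended password really is 123):

```yaml
environment:
  - MYSQL_ROOT_PASSWORD=123
  - MYSQL_ROOT_HOST=172.18.0.1
```

As with the env vars themselves, this only takes effect when the data directory is initialized from scratch.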
stackoverflow
{ "language": "en", "length": 174, "provenance": "stackexchange_0000F.jsonl.gz:880563", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590775" }
2b4f98bf941626d2408233dc4c2cd7a8d094b098
Q: AWS Cognito delete-custom-attributes? There is an add-custom-attributes command in cognito-idp but no delete-custom-attributes? How do I delete them? A: Never mind. This is not doable at the moment. Under Custom Attributes it is mentioned that: Cannot be removed or changed once added to the user pool. A: I know this question is over 4 years old, but for me it appears at the top of a Google search. As far as I know there is currently no option in the Cognito UI to delete attributes, but you can use the AWS CLI to delete the attributes. Here is the official description: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cognito-idp/admin-delete-user-attributes.html?highlight=admin%20delete%20user%20attributes I know that this only works for specific users, but one could easily write a batch for this. Don't get me wrong ... it should be a feature of the CLI or the UI, but a batch is at least a workaround.
stackoverflow
{ "language": "en", "length": 144, "provenance": "stackexchange_0000F.jsonl.gz:880569", "question_score": "35", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590800" }
4248955335d50b183105ff7d1bfefac229d630e7
Q: ARKit Demo Crashing on iPhone 6/iPhone 6 Plus I'm working with the ARKit feature, with the recent major iOS release, but I'm getting a crash with the error failed assertion MTLRenderPassDescriptor: MTLStoreActionMultisampleResolve store action for the depth attachment is not supported by device I already have the iOS 11 beta installed on my iPhone device. A: As all the answers above say, this is a hardware constraint: ARKit requires an A9 chip or newer. Anyway, it is good practice to add ARKit to UIRequiredDeviceCapabilities in Info.plist; this will give you better feedback when running apps whose primary function is ARKit based. A: To be able to run ARKit your device should be able to support it. Using the latest iOS alone will not help. As Apple mentioned in the WWDC 2017 keynote, they support A9 chips and above, which means iPhone 6s and above will be able to run and test ARKit.
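For reference, the UIRequiredDeviceCapabilities entry mentioned above looks like this inside Info.plist (fragment only; it goes in the file's existing top-level dict):

```xml
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arkit</string>
</array>
```

With the arkit capability declared, the App Store will only offer the app to devices that actually support ARKit.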
stackoverflow
{ "language": "en", "length": 143, "provenance": "stackexchange_0000F.jsonl.gz:880577", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590817" }
f8fa68bc668f5353bcf0a2c6ae157d53698b5472
Q: OPTIONAL MATCH and WHERE in Cypher I'm struggling to write a Cypher query. Graph The picture below shows the complete graph. Some movies do not have a stuntman (the graph is fictional). Question I want to get ALL ACTORS (and THEIR MOVIES) who never played in a movie with a stuntman. In this case it would be "Johnny Depp" A: This should work : MATCH (n:Actor)-->(m:Movie) WHERE NOT (n)-->()<--(:Stuntman) RETURN n AS actor, collect(m) AS movies Cheers PS: there is another solution, but less performant I think : MATCH (n:Actor)-->(m:Movie) WITH n AS actor, collect(m) AS movies WHERE all(m IN movies WHERE not (m)<--(:Stuntman)) RETURN actor, movies A: I think this will get you going // Find the actors and their movies MATCH (a:Actor)--(m:Movie) // where the actor was never in a movie with a stuntman WHERE NOT (a)-[:ACTS_IN]-(:Movie)-[:ACTS_IN]-(:Stuntman) RETURN a,m
stackoverflow
{ "language": "en", "length": 141, "provenance": "stackexchange_0000F.jsonl.gz:880603", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590926" }
f9a72b04a49be3e37b932fa4d59be0a353d9e6f0
Q: Force datasource to reload data in Angular 2 grid I am using Angular 2 and have a grid (ag-grid-angular) with enableServerSideSorting=true and it's working fine only the first time: the data is loaded and placed in the grid view as expected. When I click on a column to sort that column (which happens automatically) the data is fetched again by calling getRows(params). The problem is that I have several other input fields: * *Tabs: to change the type of elements displayed (for example persons or cars). *Text field: which is used to enter search criteria. When I change these input fields, how can I force the datasource to collect data again from the server? If I change the tab or search field, then the correct data is only fetched again after I press on one of the columns (forcing it to sort again). I tried this.agGrid.api.refreshView() in the method that is triggered by changing the tab, but this of course does not work.
stackoverflow
{ "language": "en", "length": 162, "provenance": "stackexchange_0000F.jsonl.gz:880607", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590937" }
191e0987cf41de80523d57b1536c13194aa93bc4
Q: Creating Spring beans dynamically at runtime using a method I have to use the company's custom-made libraries with Spring Boot and am wondering if I'm able to create a bean like this at runtime and add it to the Spring application context. @Bean(name = {"customConnectionFactory"}) public ConnFactory connector() { return new SimpleConnFactory(configuration(), "prefix"); } So this worked fine when I was allowed to wire beans normally when starting the application. Now requirements have changed and I should be able to do this dynamically at runtime. I've done some research and it seems that it's possible to add a class to the Spring context at runtime, but how about running a method which returns a new object? A: Could be something like this DefaultListableBeanFactory beanFactory = //get and store the factory somewhere MyBean newBean = new MyBean(); beanFactory.initializeBean(newBean,"TheBeanName"); //could be class' canonical name beanFactory.autowireBeanProperties(newBean, AutowireCapableBeanFactory.AUTOWIRE_BY_TYPE, true); beanFactory.registerSingleton("TheBeanName", newBean);
stackoverflow
{ "language": "en", "length": 137, "provenance": "stackexchange_0000F.jsonl.gz:880609", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44590942" }
a129cf6e5f92141c26e84ab0004b4277783cb8c8
Q: socket.gaierror: [Errno -2] Name or service not known with Python3 I am trying to use a port scanner program. import socket import subprocess import sys from datetime import datetime subprocess.call('clear', shell=True) remoteServer = input("Enter a remote host to scan: ") remoteServerIP = socket.gethostbyname(remoteServer) print( "-" * 60) print( "Please wait, scanning remote host", remoteServerIP) print( "-" * 60) t1 = datetime.now() try: for port in range(1,1025): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) result = sock.connect_ex((remoteServerIP, port)) if result == 0: print( "Port {}: Open".format(port)) sock.close() except KeyboardInterrupt: print( "You pressed Ctrl+C") sys.exit() except socket.gaierror: print( 'Hostname could not be resolved. Exiting') sys.exit() except socket.error: print( "Couldn't connect to server") sys.exit() t2 = datetime.now() total = t2 - t1 print( 'Scanning Completed in: ', total) But it is not working. Enter a remote host to scan: http://www.osjajinci.com/ Traceback (most recent call last): File "portscanner.py", line 12, in <module> remoteServerIP = socket.gethostbyname(remoteServer) socket.gaierror: [Errno -2] Name or service not known I am trying to learn more about sockets; I am a beginner. I have double-checked the Python3 code and could not find any mistakes. A: socket.gethostbyname expects a host name and not a URL. You must give www.osjajinci.com instead of http://www.osjajinci.com/
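Since the user may type a full URL, the hostname can be stripped out before resolving. A small sketch using urllib.parse (the URL is the one from the question):

```python
from urllib.parse import urlparse

def to_hostname(target: str) -> str:
    """Return a bare hostname suitable for socket.gethostbyname().

    Accepts either a plain host ("www.example.com") or a full URL
    ("http://www.example.com/path"); the scheme and path are stripped.
    """
    parsed = urlparse(target)
    # urlparse only fills .hostname when a scheme ("http://") is present;
    # for a bare host the whole string ends up in .path instead.
    return parsed.hostname if parsed.hostname else target

print(to_hostname("http://www.osjajinci.com/"))  # www.osjajinci.com
print(to_hostname("www.osjajinci.com"))          # www.osjajinci.com
```

In the question's script, `remoteServerIP = socket.gethostbyname(to_hostname(remoteServer))` would then accept both forms of input.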
stackoverflow
{ "language": "en", "length": 196, "provenance": "stackexchange_0000F.jsonl.gz:880638", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591027" }
32e2b5c8e8a7cbf77d5f915c0994cc321e6b183e
Q: How to use class constants in @Security annotation using the Symfony Expression Language? I am using Symfony 3 and I've created a custom Voter class. I want to access it using the SensioFrameworkExtraBundle @Security tag. It kind of works. If I do the following it works perfectly: /** * @Rest\Get("organisation/{id}") * @Security("is_granted('OrgAdmin', id)") * @param int $id * @param Request $request * * @return View */ public function getOrganisationAction($id, Request $request) { But I don't like the idea of using magic strings in the application and I would much rather use a class constant for the check. Something like this: /** * @Rest\Get("organisation/{id}") * @Security("is_granted(AppBundle\OrgRoles::ROLE_ADMIN, id)") * @param int $id * @param Request $request * * @return View */ public function getOrganisationAction($id, Request $request) { But when I try that I get the following error message: Unexpected character \"\\\" around position 20 for expression `is_granted(AppBundle\\OrgRoles::ROLE_ADMIN, id)`. Which when unescaped, is the following: Unexpected character "\" around position 20 for expression `is_granted(AppBundle\OrgRoles::ROLE_ADMIN, id)`. So I'm stumped on this. Can it be done? Any suggestions on a better way to do this? A: You can use the constant() function available in the Expression Language Component: @Security("is_granted(constant('\\Full\\Namespace\\To\\OrgRoles::ROLE_ADMIN'), id)") A: Doctrine annotation reader has made this even easier for constants in PHP code: use MyCompany\Annotations\Bar; use MyCompany\Entity\SomeClass; /** * @Foo(PHP_EOL) * @Bar(Bar::FOO) */ This also works just as expected for @Security / @IsGranted. https://www.doctrine-project.org/projects/doctrine-annotations/en/latest/custom.html#constants
stackoverflow
{ "language": "en", "length": 231, "provenance": "stackexchange_0000F.jsonl.gz:880639", "question_score": "19", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591028" }
1a7725de3d1771c9ebaa7f4fe70c0228686e90fc
Q: Is there a way to sort files by svn status in Windows 10 explorer? Is there a way to sort files by svn status in Windows 10 Explorer? I'm using TortoiseSVN. A: No. Microsoft removed custom columns from Explorer a couple of releases ago, and that's the only way you'd be able to do this definitively. Like @magicandre1981 said, you can use TSVN's Check for Modifications dialog, or sort by Date Modified in Explorer (as the most recently edited ones will likely be the ones you've edited locally and haven't committed yet). A: A hack you could do is update the last modified date of all files not under version control to now; then sorting by Date modified would list all files not under version control first. Do not do that if losing the actual last modified date is a problem. Unix command to do that (works in Git for Windows): svn status|sed 's:\\:/:g'|grep '^?'|awk '{print $2}'|xargs -n1 touch This command only allows you to split between versioned and unversioned files; to actually sort by status you could try to replace grep '^?' by sort. Warning: all files will have their last modified date changed. That may not work if the last modified date precision is not enough, in which case you would have to add a millisecond delay in the xargs command.
stackoverflow
{ "language": "en", "length": 224, "provenance": "stackexchange_0000F.jsonl.gz:880648", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591047" }
64b91b2db1e3ac9e0eb167072066ae13e53239cd
Q: Symmetric y-axis limits for barchart in ggplot2 I would like to make the y-axis of a bar chart symmetric, so that it's easier to see if positive or negative changes are bigger, since otherwise this is a bit distorted. I do have working code, although it's a bit clumsy, and I thought it would be great if I could do this directly in the first ggplot() call, so that ylim itself is symmetric. set.seed(123) my.plot <- ggplot( data = data.table(x = 1:10, y = rnorm(10,0, 2)), aes(x=x, y=y)) + geom_bar(stat="identity") rangepull <- layer_scales(my.plot)$y newrange <- max(abs(rangepull$range$range)) my.plot + ylim(newrange*-1, newrange) A: What about this : library(ggplot2) library(data.table) set.seed(123) my.data = data.table(x = 1:10, y = rnorm(10,0, 2)) my.plot <- ggplot(data = my.data)+aes(x=x, y=y) + geom_bar(stat="identity")+ylim(-max(abs(my.data$y)), max(abs(my.data$y))) my.plot A: You may want to consider using ceiling: set.seed(123) library(ggplot2) library(data.table) dT <- data.table(x = 1:10, y = rnorm(10,0, 2)) my.plot <- ggplot(dT, aes(x=x, y=y)) + geom_bar(stat="identity") + ylim(-ceiling(max(abs(dT$y))), ceiling(max(abs(dT$y)))) This will give you: > my.plot
stackoverflow
{ "language": "en", "length": 166, "provenance": "stackexchange_0000F.jsonl.gz:880662", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591108" }
d3f80ec337c10137d0244b463dafc4cc81622865
Q: Run AspNet Core app in docker using GMSA I'm trying to use GMSA for SQL connection from an AspNet Core application. All the prep steps are done, but it appears it does not work. I guess the reason is that the application is started with "dotnet.exe myapp.dll" and it is not using LocalSystem or Network accounts, which are the only ones which are "proxied" through the GMSA account. Is there a way to start the app in the container under LocalSystem without resorting to a Windows service or using IIS? I already tried to utilize Sysinternals' "psexec64 -s" as an entry point, but it fails with: Connecting to local system...Starting PSEXESVC service on local system...Connecting with PsExec service on 1B2887C13C61...Error establishing communication with PsExec service on 1B2887C13C61: The handle is invalid.
stackoverflow
{ "language": "en", "length": 130, "provenance": "stackexchange_0000F.jsonl.gz:880702", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591253" }
7111e60c571489b8d7d53e6fef3af5388d75b7c5
Stackoverflow Stackexchange Q: Do not mutate state directly. Use setState() react/no-direct-mutation-state I have this code: constructor(props) { super(props) this.state = { loginButton: '', benchmarkList: '' } if (props.username == null) { this.state.loginButton = <GoogleButton></GoogleButton> } else { } } It is giving me an ESLint warning: Do not mutate state directly. Use setState() react/no-direct-mutation-state. Now what am I supposed to do, as I can't use setState inside the constructor directly (it creates an error) and updating like this gives me an error. A: First of all, we should not store UI components inside a state variable; state should contain only data. All the UI should be inside the render method. If you want to render some component on the basis of data, use conditional rendering. Check the value of this.state.loginButton and if it is null then render that button. Like this: constructor(props) { super(props) this.state = { loginButton: props.username, benchmarkList: '' } } render(){ return( <div> {!this.state.loginButton ? <GoogleButton></GoogleButton> : null} </div> ) } Ideally we should not store the props value in state either, so directly use this.props.username; I did that because I don't know the complete code.
Q: Do not mutate state directly. Use setState() react/no-direct-mutation-state I have this code: constructor(props) { super(props) this.state = { loginButton: '', benchmarkList: '' } if (props.username == null) { this.state.loginButton = <GoogleButton></GoogleButton> } else { } } It is giving me an ESLint warning: Do not mutate state directly. Use setState() react/no-direct-mutation-state. Now what am I supposed to do, as I can't use setState inside the constructor directly (it creates an error) and updating like this gives me an error. A: First of all, we should not store UI components inside a state variable; state should contain only data. All the UI should be inside the render method. If you want to render some component on the basis of data, use conditional rendering. Check the value of this.state.loginButton and if it is null then render that button. Like this: constructor(props) { super(props) this.state = { loginButton: props.username, benchmarkList: '' } } render(){ return( <div> {!this.state.loginButton ? <GoogleButton></GoogleButton> : null} </div> ) } Ideally we should not store the props value in state either, so directly use this.props.username; I did that because I don't know the complete code. A: constructor(props) { super(props) this.state = { loginButton: props.username == null? <GoogleButton></GoogleButton>: '', benchmarkList: '' } } Or you can use setState in componentWillMount(): componentWillMount(){ let loginButton = props.username == null? <GoogleButton></GoogleButton>: ''; this.setState({loginButton: loginButton}); } A: How to update state inside constructor in ReactJS? 
Create the data struct, modify it as needed, and assign to state at the end when all is done: constructor(props) { super(props) let state = { loginButton: '', benchmarkList: '' } if (props.username == null) { state.loginButton = true } else { state.loginButton = false } this.state = state } A: Just add setState: if (props.username == null) { this.setState({ loginButton: <GoogleButton></GoogleButton> }) } else {
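Stripped of JSX, the advice in the first answer reduces to: keep only data in state and derive the UI from it at render time. A plain-JavaScript sketch of that idea (the component names and string markup are placeholders, not real React code):

```javascript
// Conditional rendering boiled down to plain data -> markup.
// State holds data (the username), never component instances;
// the UI is derived from that data on every render.
function render(state) {
  return state.username == null
    ? '<GoogleButton/>'   // shown only when there is no logged-in user
    : '<Profile/>';       // hypothetical logged-in view
}

console.log(render({ username: null }));  // <GoogleButton/>
console.log(render({ username: 'ada' })); // <Profile/>
```

Because the markup is recomputed from data, there is never a need to mutate state to swap components, which is what triggered the ESLint warning in the first place.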
stackoverflow
{ "language": "en", "length": 298, "provenance": "stackexchange_0000F.jsonl.gz:880724", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591315" }
350e2dcf6ec51af5a0b184fa2ba8cf9f13fa3e86
Stackoverflow Stackexchange Q: Is `terraform init` compulsory before every `terraform plan`? Just wondering if terraform init is compulsory before every terraform plan? i.e. if I've already done a terraform init but am about to do a second terraform plan based on some changed Terraform code, would I need to do a second terraform init? A: It depends. Depending on exactly what Terraform code you've changed, you may need to re-run init. For instance, if you've made changes to a configured backend, you'll need to rerun terraform init to re-initialise with those changes. If the changes are to non-terraform configuration parts of your script, terraform plan and terraform apply should be fine to use by themselves. One further note is that if you're using modules, and you make a change in a module, you will need to re-run terraform get -update to get those changes before running plan or apply.
Q: Is `terraform init` compulsory before every `terraform plan`? Just wondering if terraform init is compulsory before every terraform plan? i.e. if I've already done a terraform init but am about to do a second terraform plan based on some changed Terraform code, would I need to do a second terraform init? A: It depends. Depending on exactly what Terraform code you've changed, you may need to re-run init. For instance, if you've made changes to a configured backend, you'll need to rerun terraform init to re-initialise with those changes. If the changes are to non-terraform configuration parts of your script, terraform plan and terraform apply should be fine to use by themselves. One further note is that if you're using modules, and you make a change in a module, you will need to re-run terraform get -update to get those changes before running plan or apply. A: Agreed with all the above answers, but something to add here: it is safe to run terraform init many times even if nothing changed; it's not going to affect anything. A: Whether terraform init needs to run depends on what has changed, for example changes to plugins or your backend configs. This command is always safe to run multiple times, to bring the working directory up to date with changes in the configuration. Though subsequent runs may give errors, this command will never delete your existing configuration or state. So you can run init every time you run terraform plan to keep things up to date. If there are no changes, you can skip it. However, if multiple people are working on the project and you are storing state somewhere, then always run terraform init before running terraform plan. A: It depends on the situation. The terraform init command is used to initialize a working directory containing Terraform configuration files. If you haven't changed the Terraform configuration, you don't need to issue a terraform init. 
Instead you can run terraform plan and terraform apply. Usually Terraform will tell you via a message if it needs to be initialized.
stackoverflow
{ "language": "en", "length": 350, "provenance": "stackexchange_0000F.jsonl.gz:880735", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591366" }
5bfa51a0f414ef80207e632cb4707ddccd82c3e0
Stackoverflow Stackexchange Q: Passing command line arguments to the benchmark program using stack I'm using stack as a build tool, and criterion as the benchmarking library. To run the benchmarks I execute the following command: stack bench Criterion accepts command line arguments to specify where the output should be written to. I would like to pass these arguments to the executable built and run by stack. Is there a way to achieve this? A: stack bench --benchmark-arguments "--arguments --for --criterion" (It's among the options listed under stack bench --help.)
Q: Passing command line arguments to the benchmark program using stack I'm using stack as a build tool, and criterion as the benchmarking library. To run the benchmarks I execute the following command: stack bench Criterion accepts command line arguments to specify where the output should be written to. I would like to pass these arguments to the executable built and run by stack. Is there a way to achieve this? A: stack bench --benchmark-arguments "--arguments --for --criterion" (It's among the options listed under stack bench --help.)
stackoverflow
{ "language": "en", "length": 85, "provenance": "stackexchange_0000F.jsonl.gz:880739", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591382" }
5a7ec66bc63f8b0903780f03926ce247a6fec4f1
Stackoverflow Stackexchange Q: How can you set a limit to S3 bucket size in AEM 6.2 Is there a configuration to set the limit on S3 bucket size in AEM 6.2? I am aware of the S3 cache size that can be configured using the S3 data store configuration file. My issue is that the S3 bucket can grow exponentially and, although there is no limit to its size, there is a constraint on budget. For example, my bucket size is 250GB and it more or less stays the same after every compaction. I don't ever want it to cross 1TB. I am aware that S3 can limit this but I want to do it via AEM so that operations don't fail and the data store is never corrupted. Any hints? A: There is no configuration available that will limit the size of Amazon S3 buckets. You can, however, obtain Amazon S3 metrics in Amazon CloudWatch. You could create an alarm on a bucket to send a notification when the amount of data stored in an Amazon S3 bucket exceeds a certain threshold.
Q: How can you set a limit to S3 bucket size in AEM 6.2 Is there a configuration to set the limit on S3 bucket size in AEM 6.2? I am aware of the S3 cache size that can be configured using the S3 data store configuration file. My issue is that the S3 bucket can grow exponentially and, although there is no limit to its size, there is a constraint on budget. For example, my bucket size is 250GB and it more or less stays the same after every compaction. I don't ever want it to cross 1TB. I am aware that S3 can limit this but I want to do it via AEM so that operations don't fail and the data store is never corrupted. Any hints? A: There is no configuration available that will limit the size of Amazon S3 buckets. You can, however, obtain Amazon S3 metrics in Amazon CloudWatch. You could create an alarm on a bucket to send a notification when the amount of data stored in an Amazon S3 bucket exceeds a certain threshold.
stackoverflow
{ "language": "en", "length": 180, "provenance": "stackexchange_0000F.jsonl.gz:880777", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591509" }
9887e8520699cc62ce8508b3af928fade5fb735b
Stackoverflow Stackexchange Q: Java - SonarQube, issue on 'Utility classes should not have public constructors' (squid:S1118) in singleton I am performing static code analysis on old code using a SonarLint analysis. I cannot paste the code here but it is similar to: @SuppressWarnings("static-access") public class SuperClass { private SuperClass() { } public static SuperClass getInstance() { return InstanceHolder.instance; } private static class InstanceHolder { public final static SuperClass instance = new SuperClass(); } public void doSomething() { //do something } } SonarQube (sonar-java: 4.2.1.6971) reports an issue on S1118. Adding a private constructor to InstanceHolder does not really change anything here, since SuperClass is the only class that can create an instance of it due to its private modifier. SuperClass can still create an instance, even with InstanceHolder having a private constructor. BTW: adding the constructor removes the sonar-issue, so I think the analyzer marked this as a rule violation because of the internal 'UtilityClass' without further investigation. Is this a bug? Instead of a design flaw, this is an example of a thread-safe singleton. A: Make your class final so that instance creation can be avoided. @SuppressWarnings("static-access") public final class SuperClass { private SuperClass() { } }
Q: Java - SonarQube, issue on 'Utility classes should not have public constructors' (squid:S1118) in singleton I am performing static code analysis on old code using a SonarLint analysis. I cannot paste the code here but it is similar to: @SuppressWarnings("static-access") public class SuperClass { private SuperClass() { } public static SuperClass getInstance() { return InstanceHolder.instance; } private static class InstanceHolder { public final static SuperClass instance = new SuperClass(); } public void doSomething() { //do something } } SonarQube (sonar-java: 4.2.1.6971) reports an issue on S1118. Adding a private constructor to InstanceHolder does not really change anything here, since SuperClass is the only class that can create an instance of it due to its private modifier. SuperClass can still create an instance, even with InstanceHolder having a private constructor. BTW: adding the constructor removes the sonar-issue, so I think the analyzer marked this as a rule violation because of the internal 'UtilityClass' without further investigation. Is this a bug? Instead of a design flaw, this is an example of a thread-safe singleton. A: Make your class final so that instance creation can be avoided. @SuppressWarnings("static-access") public final class SuperClass { private SuperClass() { } }
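For reference, a sketch of the holder idiom with private constructors on both classes, which is the variant the question says silences the S1118 issue (a minimal standalone version, not the asker's exact code):

```java
// Initialization-on-demand holder singleton. The private constructor on the
// inner holder is functionally redundant (only the enclosing class can touch
// it anyway) but it is what stops squid:S1118 from firing on the holder.
public class SuperClass {
    private SuperClass() { }

    private static class InstanceHolder {
        private InstanceHolder() { }                         // silences S1118
        static final SuperClass INSTANCE = new SuperClass(); // lazy, thread-safe
    }

    public static SuperClass getInstance() {
        return InstanceHolder.INSTANCE;
    }

    public static void main(String[] args) {
        // same instance on every call
        if (SuperClass.getInstance() != SuperClass.getInstance())
            throw new AssertionError("not a singleton");
        System.out.println("singleton ok");
    }
}
```

Thread safety comes for free here: the JVM guarantees that InstanceHolder's static initializer runs exactly once, on first access.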
stackoverflow
{ "language": "en", "length": 194, "provenance": "stackexchange_0000F.jsonl.gz:880820", "question_score": "11", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591666" }
2cc93e2e245296e9485789ccfd53d1cb2dccc35d
Stackoverflow Stackexchange Q: How to set default value using data annotation I am learning ASP.Net MVC 5 and I want to set a default value using a data annotation for a boolean property. Also I don't want to use the constructor to set the default value. Is it possible? public class BalanceDetailMV { public BalanceDetailMV() { this.isUnitNoEmptyInAllRow = true; // I do not want this } public bool isUnitNoEmptyInAllRow { get; set; } } My attempt: [DefaultValue("true")] public bool isUnitNoEmptyInAllRow { get; set; } But the above does not work. Please guide me. A: If you're using C# 5 or earlier, you have to do it through the constructor, but since C# 6, you can do it like this: public class BalanceDetailMV { public bool isUnitNoEmptyInAllRow { get; set; } = true; }
Q: How to set default value using data annotation I am learning ASP.Net MVC 5 and I want to set a default value using a data annotation for a boolean property. Also I don't want to use the constructor to set the default value. Is it possible? public class BalanceDetailMV { public BalanceDetailMV() { this.isUnitNoEmptyInAllRow = true; // I do not want this } public bool isUnitNoEmptyInAllRow { get; set; } } My attempt: [DefaultValue("true")] public bool isUnitNoEmptyInAllRow { get; set; } But the above does not work. Please guide me. A: If you're using C# 5 or earlier, you have to do it through the constructor, but since C# 6, you can do it like this: public class BalanceDetailMV { public bool isUnitNoEmptyInAllRow { get; set; } = true; } A: You might be getting an error if you forgot to add using System.ComponentModel; at the top of the file where you use the DefaultValue annotation. For bool use [DefaultValue(true)] public bool IsUnitNoEmptyInAllRow { get; set; }
stackoverflow
{ "language": "en", "length": 161, "provenance": "stackexchange_0000F.jsonl.gz:880823", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591674" }
649cff1d6fe140ae5263ebd68ee70190701ac2bd
Stackoverflow Stackexchange Q: How to get mime type from file contents in a pure dotnet core? So far in .net I have been using urlmon.dll FindMimeFromData function which works really well and does not even require having full file contents present in a buffer (practically 1024 bytes is sufficient for most file types including XML). Now in dotnet core although it's possible to use this native library I would like to avoid such dependencies and use a reliable native C# method to determine the file type (images in my case). Is there any method of doing it without using native dependencies (Mime nuget project) or guessing just by filename (MimeSharp nuget project)?
Q: How to get mime type from file contents in a pure dotnet core? So far in .net I have been using urlmon.dll FindMimeFromData function which works really well and does not even require having full file contents present in a buffer (practically 1024 bytes is sufficient for most file types including XML). Now in dotnet core although it's possible to use this native library I would like to avoid such dependencies and use a reliable native C# method to determine the file type (images in my case). Is there any method of doing it without using native dependencies (Mime nuget project) or guessing just by filename (MimeSharp nuget project)?
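Absent urlmon.dll, the usual managed approach is to reimplement the same idea FindMimeFromData uses: content sniffing, i.e. comparing the file's leading bytes against known magic numbers. A minimal Python sketch of the technique (a real implementation would carry a much larger signature table; the handful of signatures below are well-known file headers):

```python
# Magic-number sniffing: identify a MIME type from the first bytes of a file.
# Like FindMimeFromData, only a small prefix of the contents is needed.
MAGIC = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"\xff\xd8\xff",      "image/jpeg"),
    (b"GIF87a",            "image/gif"),
    (b"GIF89a",            "image/gif"),
    (b"BM",                "image/bmp"),
    (b"%PDF",              "application/pdf"),
]

def sniff_mime(buf: bytes, default: str = "application/octet-stream") -> str:
    """Guess a MIME type from the leading bytes of a file's contents."""
    for signature, mime in MAGIC:
        if buf.startswith(signature):
            return mime
    return default

print(sniff_mime(b"\x89PNG\r\n\x1a\n" + b"\x00" * 16))  # image/png
print(sniff_mime(b"hello"))                             # application/octet-stream
```

Porting this loop to C# is straightforward (read the first kilobyte of the stream, compare byte prefixes), which is essentially what the pure-managed MIME-detection packages do internally.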
stackoverflow
{ "language": "en", "length": 110, "provenance": "stackexchange_0000F.jsonl.gz:880829", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591702" }
4c8831d37d9b376a65e6ecb879c7d9c6977641b0
Stackoverflow Stackexchange Q: Angular - ngClass is not working I have an ngClass condition like this: <div [ngClass]="{'alert alert-danger': alert.type == 0,'alert alert-success': alert.type == 1}" If alert.type == 1, my class is alert alert-success, but if alert.type == 0 my class is alert-danger. Why is the alert class not there? A: Yes, it seems to be an issue, but it isn't; read a comment for the explanation. But you can easily work around it by taking the common class out into the class attribute itself. <div class="alert" [ngClass]="{'alert-danger': alert.type == 0, 'alert-success': alert.type == 1}" Demo Plunker
Q: Angular - ngClass is not working I have an ngClass condition like this: <div [ngClass]="{'alert alert-danger': alert.type == 0,'alert alert-success': alert.type == 1}" If alert.type == 1, my class is alert alert-success, but if alert.type == 0 my class is alert-danger. Why is the alert class not there? A: Yes, it seems to be an issue, but it isn't; read a comment for the explanation. But you can easily work around it by taking the common class out into the class attribute itself. <div class="alert" [ngClass]="{'alert-danger': alert.type == 0, 'alert-success': alert.type == 1}" Demo Plunker
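What [ngClass] receives is just an object whose truthy keys become classes, so the workaround can be checked by evaluating that object by hand. A plain-JavaScript sketch (the flattening function is a stand-in for what Angular does internally, not Angular code):

```javascript
// The object [ngClass] evaluates: truthy keys become CSS classes.
// Keeping the shared 'alert' class in the map (or in class="alert")
// guarantees it is always applied, whatever alert.type is.
function alertClasses(alert) {
  return {
    'alert': true,                    // always on
    'alert-danger': alert.type === 0,
    'alert-success': alert.type === 1,
  };
}

// Flatten the map to a class string, roughly as Angular would.
function toClassString(map) {
  return Object.keys(map).filter(k => map[k]).join(' ');
}

console.log(toClassString(alertClasses({ type: 0 }))); // alert alert-danger
console.log(toClassString(alertClasses({ type: 1 }))); // alert alert-success
```

With single-class keys like this there is also no risk of the multi-word keys ('alert alert-danger') in the question's original map being toggled as a unit.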
stackoverflow
{ "language": "en", "length": 93, "provenance": "stackexchange_0000F.jsonl.gz:880923", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44591956" }
ac6937e89f98e1c8b0daf8f6b0a9dc1a8ae81447
Stackoverflow Stackexchange Q: Configuring an MDX query on SSIS Hi, I am having trouble configuring the SSIS task to run an MDX query. The parse works fine but it doesn't allow me to display the different columns of the query to map them. Here is the query I used: SELECT [Measures].[# Consumers] ON 0, [Company].[Company Country Code].[Company Country Code].MEMBERS ON 1 FROM _CDM The error thrown is: No Column information was returned by the SQL Command Error snapshot A: You can use an MDX SELECT as a Source in a Data Transformation Task. Two important notes: * *Use MS OLE DB Provider for Analysis Services, configure it for your SSAS DB *In OLE DB Provider for AS, go to the All Properties tab, select the Advanced section and type Format=Tabular for Extended Properties. In this case, at the OLE DB Source Editor you can input your MDX query. Important: the Preview button might not work; you should check the query metadata by switching to the Columns tab.
Q: Configuring an MDX query on SSIS Hi, I am having trouble configuring the SSIS task to run an MDX query. The parse works fine but it doesn't allow me to display the different columns of the query to map them. Here is the query I used: SELECT [Measures].[# Consumers] ON 0, [Company].[Company Country Code].[Company Country Code].MEMBERS ON 1 FROM _CDM The error thrown is: No Column information was returned by the SQL Command Error snapshot A: You can use an MDX SELECT as a Source in a Data Transformation Task. Two important notes: * *Use MS OLE DB Provider for Analysis Services, configure it for your SSAS DB *In OLE DB Provider for AS, go to the All Properties tab, select the Advanced section and type Format=Tabular for Extended Properties. In this case, at the OLE DB Source Editor you can input your MDX query. Important: the Preview button might not work; you should check the query metadata by switching to the Columns tab.
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:880953", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592041" }
ae90e259f252e7b2bef047844682b35f4659eb1a
Stackoverflow Stackexchange Q: TSQL to count fields and summarize No experience with this kind of consolidation, but I expect it's routine (hope so). It's the counting of columns that throws me. Actual data is ~20k rows: Data format: State Owner Job1 Job2 Job3 Job4 TN Joe 123 456 234 TN Frank 456 789 FL Joe 123 456 FL Frank 123 Results needed: State Owner JobCount TN Joe 3 TN Frank 2 FL Joe 2 FL Frank 1 And rolled up to Owner Owner JobCount Joe 5 Frank 3 A: I guess UNPIVOT suits best, since the number of jobs might increase: ;WITH cte AS (SELECT [State] ,[Owner] ,[Job] ,[JobN] FROM ( SELECT [State] ,[Owner] ,Job1 ,Job2 ,Job3 ,Job4 FROM #state ) AS p UNPIVOT (JobN FOR [Job] IN (Job1,Job2,Job3,Job4) ) AS unpvt) --SELECT [State], [Owner], COUNT(1) AS JobCount --FROM cte --GROUP BY [State], [Owner] SELECT [Owner], COUNT(1) AS JobCount FROM cte GROUP BY [Owner] Commented rows are the first query you requested. First, I've created a temp table #state like this: CREATE TABLE #state ( [State] VARCHAR(2) ,[Owner] VARCHAR(20) ,[Job1] INT ,[Job2] INT ,[Job3] INT ,[Job4] INT )
Q: TSQL to count fields and summarize No experience with this kind of consolidation, but I expect it's routine (hope so). It's the counting of columns that throws me. Actual data is ~20k rows: Data format: State Owner Job1 Job2 Job3 Job4 TN Joe 123 456 234 TN Frank 456 789 FL Joe 123 456 FL Frank 123 Results needed: State Owner JobCount TN Joe 3 TN Frank 2 FL Joe 2 FL Frank 1 And rolled up to Owner Owner JobCount Joe 5 Frank 3 A: I guess UNPIVOT suits best, since the number of jobs might increase: ;WITH cte AS (SELECT [State] ,[Owner] ,[Job] ,[JobN] FROM ( SELECT [State] ,[Owner] ,Job1 ,Job2 ,Job3 ,Job4 FROM #state ) AS p UNPIVOT (JobN FOR [Job] IN (Job1,Job2,Job3,Job4) ) AS unpvt) --SELECT [State], [Owner], COUNT(1) AS JobCount --FROM cte --GROUP BY [State], [Owner] SELECT [Owner], COUNT(1) AS JobCount FROM cte GROUP BY [Owner] Commented rows are the first query you requested. First, I've created a temp table #state like this: CREATE TABLE #state ( [State] VARCHAR(2) ,[Owner] VARCHAR(20) ,[Job1] INT ,[Job2] INT ,[Job3] INT ,[Job4] INT ) A: For State/Owner select State, Owner, count(cs.Jobs) as JobCount from yourtable cross apply (values (Job1),(Job2),(Job3),(Job4)) cs (Jobs) Group By State, Owner Rolled up to Owner select Owner, count(cs.Jobs) as JobCount from yourtable cross apply (values (Job1),(Job2),(Job3),(Job4)) cs (Jobs) Group by Owner Note: this treats the empties in the sample data as NULL values in the table A: Here is your TSQL for result 1: SELECT State ,Owner ,Sum ( ( CASE WHEN Job1 IS NULL THEN 0 ELSE 1 END)+ (CASE WHEN Job2 IS NULL THEN 0 ELSE 1 END) + (CASE WHEN Job3 IS NULL THEN 0 ELSE 1 END)+ (CASE WHEN Job4 IS NULL THEN 0 ELSE 1 END)) FROM table GROUP BY State, OWNER A: One more option... 
just for fun: GROUPING SETS. You'll get the Owner/State level AND the Owner level in one shot Select [Owner] ,[State] ,JobCount = sum(isnull(sign(Job1),0)+isnull(sign(Job2),0)+isnull(sign(Job3),0)+isnull(sign(Job4),0)) From YourTable Group By Grouping Sets ([State],[Owner]),([Owner]) Order By case when [State] is null then 1 else 0 end Returns Owner State JobCount Frank FL 1 Frank TN 2 Joe FL 2 Joe TN 3 Joe NULL 5 Frank NULL 3
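The unpivot-and-count logic in these answers can be sanity-checked outside SQL. A quick Python sketch using the question's sample data (empty cells modelled as None):

```python
# Count non-empty Job columns per (State, Owner), then roll up to Owner,
# mirroring UNPIVOT/CROSS APPLY + COUNT on the question's sample rows.
from collections import Counter

rows = [
    ("TN", "Joe",   [123, 456, 234, None]),
    ("TN", "Frank", [456, 789, None, None]),
    ("FL", "Joe",   [123, 456, None, None]),
    ("FL", "Frank", [123, None, None, None]),
]

# GROUP BY State, Owner
by_state_owner = {(s, o): sum(j is not None for j in jobs) for s, o, jobs in rows}

# Roll up to Owner
by_owner = Counter()
for (s, o), n in by_state_owner.items():
    by_owner[o] += n

print(by_state_owner)
print(dict(by_owner))  # {'Joe': 5, 'Frank': 3}
```

The totals match the "Results needed" tables in the question (TN/Joe 3, TN/Frank 2, FL/Joe 2, FL/Frank 1; Joe 5, Frank 3), confirming the SQL answers compute the right thing.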
stackoverflow
{ "language": "en", "length": 367, "provenance": "stackexchange_0000F.jsonl.gz:880962", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592069" }
ff5a5367a3021ef9fc382c62e56bb84fd8445ecf
Stackoverflow Stackexchange Q: C# VS2013 How to use App.config in a console application after publishing it I have a console application with an App.Config with some keys. The user that will use this app needs to change the values of some of the keys before running it. If I publish my application I don't see the App.Config after installing it. How can I add this functionality? Thanks. A: When you publish the application the app.config is transformed into an exe.config. Open up that file and make your edits. If you installed this program with a ClickOnce installer the easiest way to find the file is to just run the app, open the task manager (CTRL-SHIFT-ESC), select the app and right-click|Open file location. You should then find the *.exe.config file in the same folder.
Q: C# VS2013 How to use App.config in a console application after publishing it I have a console application with an App.Config with some keys. The user that will use this app needs to change the values of some of the keys before running it. If I publish my application I don't see the App.Config after installing it. How can I add this functionality? Thanks. A: When you publish the application the app.config is transformed into an exe.config. Open up that file and make your edits. If you installed this program with a ClickOnce installer the easiest way to find the file is to just run the app, open the task manager (CTRL-SHIFT-ESC), select the app and right-click|Open file location. You should then find the *.exe.config file in the same folder. A: This should help for your understanding: https://msdn.microsoft.com/en-us/library/ms228995.aspx specifically: "In a Windows Forms application (interchangeable with console in this instance) not deployed using ClickOnce, an application's app.exe.config file is stored in the application directory, while the user.config file is stored in the user's Documents and Settings folder. In a ClickOnce application, app.exe.config lives in the application directory inside of the ClickOnce application cache, and user.config lives in the ClickOnce data directory for that application." Short version to help: look into one of these subfolders post-deploy if you are using a ClickOnce publish option: C:\Users\UsersNameGosHere\AppData\Local\Apps\2.0 Outside of getting you up and running for the here and now, if you have specific user-designated values that need mods after deployment then you should really look into putting them into user settings rather than app settings.
stackoverflow
{ "language": "en", "length": 266, "provenance": "stackexchange_0000F.jsonl.gz:880976", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592132" }
f3bfd5374113eee74ef788829afcd8bfab89ff3d
Stackoverflow Stackexchange Q: Make tag helper work "button" like "a" In ASP.NET Core I can set an action to a link item a, however if I change it to a button it doesn't work anymore. What is the correct way to bind an action/controller to a button click? <a asp-action="Delete" asp-route-id="@item.Id">remove</a> @*work*@ vs <button asp-action="Delete" asp-route-id="@item.Id">remove</button> @*does not work*@ A: It's not that it does not work, they are different Tag Helpers with different usage: If you hover over the <a> you will see it implements AnchorTagHelper, while the <button> implements the FormActionTagHelper, because it is supposed to be used in <form>s. So, in order to get the same behavior, you would do this: <a asp-action="Index">aaaa</a> <form> <button asp-action="Index">bbbbb</button> </form> Note, though, that the button is rendered as formaction="/", not as href="/", and this is why you need to wrap it inside a form. The second could also be written like this: <form asp-action="Index"> <button type="submit">bbbbb</button> </form>
Q: Make tag helper work "button" like "a" In ASP.NET Core I can set an action to a link item a, however if I change it to a button it doesn't work anymore. What is the correct way to bind an action/controller to a button click? <a asp-action="Delete" asp-route-id="@item.Id">remove</a> @*work*@ vs <button asp-action="Delete" asp-route-id="@item.Id">remove</button> @*does not work*@ A: It's not that it does not work, they are different Tag Helpers with different usage: If you hover over the <a> you will see it implements AnchorTagHelper, while the <button> implements the FormActionTagHelper, because it is supposed to be used in <form>s. So, in order to get the same behavior, you would do this: <a asp-action="Index">aaaa</a> <form> <button asp-action="Index">bbbbb</button> </form> Note, though, that the button is rendered as formaction="/", not as href="/", and this is why you need to wrap it inside a form. The second could also be written like this: <form asp-action="Index"> <button type="submit">bbbbb</button> </form>
stackoverflow
{ "language": "en", "length": 155, "provenance": "stackexchange_0000F.jsonl.gz:881024", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592268" }
3de5fa7360005cc354e5aa84f26af51fda26ba2a
Stackoverflow Stackexchange Q: What is the Angular2 equivalent to an AngularJS $routeChangeStart? In AngularJS we were able to observe changes in the route object using the $routeChangeStart/End events of the $rootScope. What is the equivalent of the route change event in Angular2? How can we get the exact functionality of the code below in Angular2? $scope.$on('$routeChangeStart', function (scope, next, current) { //do what you want }); I found some discussions here, but they don't have more details, so I asked a new question: angular2 $routeChangeStart , $routeChangeSuccess ,$routeChangeError A: You can listen to the router's events by doing the following: import { Router, ActivatedRoute, NavigationEnd, NavigationStart, NavigationError, NavigationCancel, } from '@angular/router'; // constructor method of some angular element constructor( private _router: Router, ) { this._router.events .filter(event => event instanceof NavigationStart) .subscribe(event => { console.log("New route"); }); } EDIT: I'm not completely sure that is actually what you need; after taking a closer look at the AngularJS docs, it seems those events are more related to the resolution/result of a guard in Angular2
Q: What is the Angular2 equivalent to an AngularJS $routeChangeStart? In AngularJS we were able to observe changes in the route object using the $routeChangeStart/End events of the $rootScope. What is the equivalent of the route change event in Angular2? How can we get the exact functionality of the code below in Angular2? $scope.$on('$routeChangeStart', function (scope, next, current) { //do what you want }); I found some discussions here, but they don't have more details, so I asked a new question: angular2 $routeChangeStart , $routeChangeSuccess ,$routeChangeError A: You can listen to the router's events by doing the following: import { Router, ActivatedRoute, NavigationEnd, NavigationStart, NavigationError, NavigationCancel, } from '@angular/router'; // constructor method of some angular element constructor( private _router: Router, ) { this._router.events .filter(event => event instanceof NavigationStart) .subscribe(event => { console.log("New route"); }); } EDIT: I'm not completely sure that is actually what you need; after taking a closer look at the AngularJS docs, it seems those events are more related to the resolution/result of a guard in Angular2
stackoverflow
{ "language": "en", "length": 175, "provenance": "stackexchange_0000F.jsonl.gz:881028", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592279" }
da42db8a1d04ee6906918b4b8356a5f3113ce3ba
Q: Intellij plugin Fortran I have installed IntelliJ IDEA Version 2017.1.4 and installed the Fortran plugin. However, I don't see an option for starting a Fortran project even after restarting IntelliJ. I see the plugin has been successfully installed, though. Is there a simple hello-world Fortran example with IntelliJ? Thank you A: No, you can't create a Fortran project in IntelliJ IDEA. You can use CMake to build your project and import such a project into CLion. Here you can find some information about compiling a Fortran project with CMake. Several example projects can also easily be found on the Internet. The general idea behind this is that an IDE is not a build tool, so you build your project with build tools, and we do our best to support the build tool that you're using. For now, of all JetBrains IDEs, only CLion supports a build tool capable of compiling a Fortran project (CMake). In the future CLion will support other build tools capable of doing this (make, for example). A: JetBrains just released a new Fortran plugin you may be interested in. I tried it in IntelliJ 15 and it did not allow me to create a new Fortran project. I have NOT tried it in CLion yet.
stackoverflow
{ "language": "en", "length": 207, "provenance": "stackexchange_0000F.jsonl.gz:881058", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592395" }
f9c2ced649b32f36583b431a9ca918c74e7bd630
Stackoverflow Stackexchange Q: Office JS - refresh add-in version I'm writing new features for a Word add-in that is already published in the Store. When I publish a new add-in version, it is visible only to users who remove the previous version and reinstall the add-in. (Simply restarting Word didn't do the trick.) Is there a way to have the add-in update automatically? A: For the add-in itself (the website), changes happen immediately upon publishing to your server just as they would for any other web application. For the manifest, once you publish an update to the Store it will automatically update the manifest on the client when it sees a bump in the version number.
stackoverflow
{ "language": "en", "length": 114, "provenance": "stackexchange_0000F.jsonl.gz:881130", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592637" }
68314c85aee99e33c54d36683bf9241f2c76acfe
Stackoverflow Stackexchange Q: How to pass array of object to select options in Vue 2.0 I have a select which I want to populate from an array of objects like [{"id": 1, "name": "Some name"}, ... {"id": 5, "name": "Another name"}] which is stored in items in the data property of the Vue instance var app = new Vue({ el: "#app", data: { items: [], .... } }) I'm trying to do it with v-for and v-model like this: <select id="categories" v-model="items"> <option v-for="item in items" :value="item.id">{{ item.name }}</option> </select> It doesn't work; however, if I try the same code with an int array everything is fine. Can't wrap my head around it. A: v-model on a select is for the selected value. <select id="categories" v-model="selectedValue"> <option v-for="item in items" :value="item.id">{{ item.name }}</option> </select> Add selectedValue to your data. var app = new Vue({ el: "#app", data: { items: [], selectedValue: null } }) const items = [{"id": 1, "name": "Some name"}, {"id": 5, "name": "Another name"}] var app = new Vue({ el: "#app", data: { items, selectedValue: null } }) <script src="https://unpkg.com/vue@2.2.6/dist/vue.js"></script> <div id="app"> <select id="categories" v-model="selectedValue"> <option v-for="item in items" :value="item.id">{{ item.name }}</option> </select> Selected Value: {{selectedValue}} </div>
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:881138", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592659" }
7701b19672205ee12b68d7293a52553e3bad92f1
Q: vim doesn't recognize columns beyond 72 with fortran90 code I am editing Fortran 90 code with vim. Note that I'm working with a *.f90 file, not *.f. vim doesn't recognize as legitimate code anything beyond column 72. This is an annoying problem because if a quote is opened at, say, column 50 but not closed until column 80, then vim colors all the following lines as part of the same quote. This would make sense if I were working with an old Fortran 77 file, but I'm clearly not. Is there any way to convince vim to recognize code beyond column 72? A: If I create a new .f90 file, syntax is highlighted as if it were Fortran 77 (fixed line length, comments in the first column, code at the 6th column, etc.) rather than modern free-form Fortran. :let b:fortran_fixed_source=0 :set syntax=fortran does the trick to get vim highlighting it correctly. A: This could be related to the 'synmaxcol' variable. If you run :set synmaxcol? in vim's command line, what do you get back? Setting this option higher might fix your issue. It's generally set low, as vim can get laggy when syntax-highlighting very long lines, such as those found in XML.
stackoverflow
{ "language": "en", "length": 203, "provenance": "stackexchange_0000F.jsonl.gz:881143", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592668" }
d0f262e40df09b3ce3c730f10422a48baaf11dd5
Stackoverflow Stackexchange Q: const array of const, to use its elements on array length definitions or give template parameters value I need a constant array of constants whose elements can be used where only a compile-time constant can be used, like array length definitions. E.g: int a[ my_const_array_of_const[0] ]; int b[ my_const_array_of_const[1] ]; template<int p> foo() { ... }; foo< my_const_array_of_const[2] >(); I have tried solutions from other answers, but they were not "constant" enough for the compiler to accept them in the situations above. How can I create the "my_const_array_of_const" constant so that it compiles in such situations? I need it to configure a High-Level Synthesis (HLS) design. For HLS, C++ syntax is restricted. No dynamic memory is allowed, hence I need to use static arrays. Besides, all compile-time constants may be used to optimize the hardware accelerator (that is the reason to use template parameters instead of variables). A: You could use constexpr (since C++11), which guarantees that the value of each element of the array can be evaluated at compile time, e.g. constexpr int my_const_array_of_const[2] {1, 2}; LIVE
stackoverflow
{ "language": "en", "length": 191, "provenance": "stackexchange_0000F.jsonl.gz:881252", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592986" }
0b4fc75b923a7f4cf29ee3dc42e6ce2c753e4489
Q: WITH (NOLOCK) syntax for subquery I am trying to add WITH (NOLOCK) in a report query that, when run, locks the full DB, making it impossible for other users to use the DB. I can't figure out how to use it in this case: -- this is just an example: SELECT FIELDS FROM (SELECT * FROM ATABLE) AS SUB This gives a syntax error: SELECT FIELDS FROM (SELECT * FROM ATABLE) WITH (NOLOCK) AS SUB Where should WITH (NOLOCK) be put? I am not saying this is a solution to all problems, it is just a test I want to run. Thanks! A: If there are more tables involved and more than one query involved, and you don't care about dirty reads, then set the isolation level of your transaction to read uncommitted instead of writing NOLOCK everywhere: SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED A: I would put it here, but the thing to note is you are using a view, so really it should go on the tables in the view: SELECT FIELDS FROM (SELECT * FROM MYVIEW WITH (NOLOCK)) AS SUB A: If you care about accuracy you shouldn't put it anywhere on your report. That hint does some very interesting things that many people don't fully understand. http://blogs.sqlsentry.com/aaronbertrand/bad-habits-nolock-everywhere/ But if you are dead set on continuing, table hints belong next to the table. Of course, since this is a view it isn't going to help much. SELECT FIELDS FROM (SELECT * FROM MYVIEW WITH (NOLOCK)) AS SUB
stackoverflow
{ "language": "en", "length": 248, "provenance": "stackexchange_0000F.jsonl.gz:881253", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44592992" }
0eefd7c2d670d8bbc957e9a85fa4a498ec91af32
Q: Stacked TouchableOpacity inside another TouchableOpacity is not clickable Even though this document (https://facebook.github.io/react-native/docs/gesture-responder-system.html) states that touch events are passed down to the children and are only consumed by a parent if the child doesn't react to the event, I face the issue that a TouchableOpacity nested inside another TouchableOpacity doesn't react properly to touches. My structure is as follows <ScrollView> <TouchableOpacity onPress={() => console.log('This is printed always')}> <View> <Text>I can click here</Text> <TouchableOpacity onPress={() => console.log('This is printed never')}> <Text>I can click here but the outer onPress is called instead of the inner one</text> </TouchableOpacity> </View> </TouchableOpacity> </ScrollView> The same happens for Buttons inside TouchableOpacitys: clicking the Buttons calls the onPress method of the parent TouchableOpacity. Am I overlooking something? A: You could just use TouchableWithoutFeedback to wrap the inner TouchableOpacity. Something like: <TouchableOpacity onPress={() => console.log('This is printed always')}> <View> <Text>I can click here</Text> <TouchableWithoutFeedback> <TouchableOpacity onPress={() => console.log('This is printed never')}> <Text>I can click here but the outer onPress is called instead of the inner one</text> </TouchableOpacity> </TouchableWithoutFeedback> </View> </TouchableOpacity> A: Writing here to make it a little more prominent. It could be that the nested TouchableOpacity is being imported from something different from the parent one, as mentioned by @EliezerSteinbock. This is quite possible, as many of us use auto-imports in VS Code or other IDEs. Sadly I missed her comment the first time round, so hopefully this will help someone else A: Change the import of TouchableOpacity from: import { TouchableOpacity } from 'react-native-gesture-handler'; to the following, and it will now all be fine: import { TouchableOpacity } from 'react-native'; A: I had the same problem (non-clickable nested TouchableOpacity) but with iOS only, and strangely it was fixed simply by rearranging the order of JSX elements within the parent container TouchableOpacity. I had a nested absolute-position TouchableOpacity declared first inside a container TouchableOpacity. Then, I had a flex-row view declared afterwards (also inside the parent container). I moved the JSX for the nested TouchableOpacity below the JSX for the flex-row view and it worked! I had no idea that the ordering in JSX would matter.
stackoverflow
{ "language": "en", "length": 355, "provenance": "stackexchange_0000F.jsonl.gz:881262", "question_score": "24", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44593024" }
f7162f7edab90d33574a4c908925d86201b17833
Q: Spring Webflux : Webclient : Get body on error I am using the WebClient from Spring WebFlux, like this: WebClient.create() .post() .uri(url) .syncBody(body) .accept(MediaType.APPLICATION_JSON) .headers(headers) .exchange() .flatMap(clientResponse -> clientResponse.bodyToMono(tClass)); It is working well. I now want to handle the error from the webservice I am calling (e.g. 500 internal error). Normally I would add a doOnError on the "stream" and use the Throwable to test the status code, but my issue is that I want to get the body provided by the webservice, because it is providing me a message that I would like to use. I am looking to do the flatMap whatever happens and test the status code myself to decide whether or not to deserialize the body. A: I do something like this: Mono<ClientResponse> responseMono = requestSpec.exchange() .doOnNext(response -> { HttpStatus httpStatus = response.statusCode(); if (httpStatus.is4xxClientError() || httpStatus.is5xxServerError()) { throw new WebClientException( "ClientResponse has erroneous status code: " + httpStatus.value() + " " + httpStatus.getReasonPhrase()); } }); and then: responseMono.subscribe(v -> { }, ex -> processError(ex)); A: Note that as of writing this, 5xx errors no longer result in an exception from the underlying Netty layer. See https://github.com/spring-projects/spring-framework/commit/b0ab84657b712aac59951420f4e9d696c3d84ba2 A: I had just faced a similar situation, and I found out WebClient does not throw any exception even when it gets 4xx/5xx responses. In my case, I use WebClient to first make a call to get a response, and if it returns a 2xx response I extract the data from the response and use it for making a second call. If the first call gets a non-2xx response, I throw an exception. Because it does not throw an exception, when the first call fails the second is still carried out.
So what I did is: return webClient.post().uri("URI") .header(HttpHeaders.CONTENT_TYPE, "XXXX") .header(HttpHeaders.ACCEPT, "XXXX") .header(HttpHeaders.AUTHORIZATION, "XXXX") .body(BodyInserters.fromObject(BODY)) .exchange() .doOnSuccess(response -> { HttpStatus statusCode = response.statusCode(); if (statusCode.is4xxClientError()) { throw new Exception(statusCode.toString()); } if (statusCode.is5xxServerError()) { throw new Exception(statusCode.toString()); } }) .flatMap(response -> response.bodyToMono(ANY.class)) .map(response -> response.getSomething()) .flatMap(something -> callsSecondEndpoint(something)); } A: Using what I learned from this fantastic SO answer regarding the "Correct way of throwing exceptions with Reactor", I was able to put this answer together. It uses .onStatus, .bodyToMono, and .handle to map the error response body to an exception. // create a chicken webClient .post() .uri(urlService.getUrl(customer) + "/chickens") .contentType(MediaType.APPLICATION_JSON) .body(Mono.just(chickenCreateDto), ChickenCreateDto.class) // outbound request body .retrieve() .onStatus(HttpStatus::isError, clientResponse -> clientResponse.bodyToMono(ChickenCreateErrorDto.class) .handle((error, sink) -> sink.error(new ChickenException(error)) ) ) .bodyToMono(ChickenResponse.class) .subscribe( this::recordSuccessfulCreationOfChicken, // accepts ChickenResponse this::recordUnsuccessfulCreationOfChicken // accepts throwable (ChickenException) ); A: I prefer to use the methods provided by the ClientResponse to handle HTTP errors and throw exceptions: WebClient.create() .post() .uri( url ) .body( bodyObject == null ?
null : BodyInserters.fromValue( bodyObject ) ) .accept( MediaType.APPLICATION_JSON ) .headers( headers ) .exchange() .flatMap( clientResponse -> { //Error handling if ( clientResponse.statusCode().isError() ) { // or clientResponse.statusCode().value() >= 400 return clientResponse.createException().flatMap( Mono::error ); } return clientResponse.bodyToMono( clazz ); } ) //You can do your checks: doOnError (..), onErrorReturn (..) ... ... In fact, it's the same logic used in the DefaultResponseSpec of DefaultWebClient to handle errors. The DefaultResponseSpec is an implementation of ResponseSpec that we would have if we made a retrieve() instead of exchange(). A: Don't we have onStatus()? public Mono<Void> cancel(SomeDTO requestDto) { return webClient.post().uri(SOME_URL) .body(fromObject(requestDto)) .header("API_KEY", properties.getApiKey()) .retrieve() .onStatus(HttpStatus::isError, response -> { logTraceResponse(log, response); return Mono.error(new IllegalStateException( String.format("Failed! %s", requestDto.getCartId()) )); }) .bodyToMono(Void.class) .timeout(timeout); } And: public static void logTraceResponse(Logger log, ClientResponse response) { if (log.isTraceEnabled()) { log.trace("Response status: {}", response.statusCode()); log.trace("Response headers: {}", response.headers().asHttpHeaders()); response.bodyToMono(String.class) .publishOn(Schedulers.elastic()) .subscribe(body -> log.trace("Response body: {}", body)); } } A: We have finally understood what is happening: by default, Netty's httpclient (HttpClientRequest) is configured to fail on server errors (5XX responses) and not on client errors (4XX), which is why it was always emitting an exception.
What we have done is extend AbstractClientHttpRequest and ClientHttpConnector to configure the httpclient to behave the way we want, and when we invoke the WebClient we use our custom ClientHttpConnector: WebClient.builder().clientConnector(new CommonsReactorClientHttpConnector()).build(); A: I got the error body by doing this: webClient ... .retrieve() .onStatus(HttpStatus::isError, response -> response.bodyToMono(String.class) // error body as String or other class .flatMap(error -> Mono.error(new RuntimeException(error)))) // throw a functional exception .bodyToMono(MyResponseType.class) .block(); A: The retrieve() method in WebClient throws a WebClientResponseException whenever a response with status code 4xx or 5xx is received. You can handle the exception by checking the response status code. Mono<Object> result = webClient.get().uri(URL).exchange().log().flatMap(entity -> { HttpStatus statusCode = entity.statusCode(); if (statusCode.is4xxClientError() || statusCode.is5xxServerError()) { return Mono.error(new Exception(statusCode.toString())); } return Mono.just(entity); }).flatMap(clientResponse -> clientResponse.bodyToMono(JSONObject.class)) Reference: https://www.callicoder.com/spring-5-reactive-webclient-webtestclient-examples/ A: You could also do this return webClient.getWebClient() .post() .uri("/api/Card") .body(BodyInserters.fromObject(cardObject)) .exchange() .flatMap(clientResponse -> { if (clientResponse.statusCode().is5xxServerError()) { clientResponse.body((clientHttpResponse, context) -> { return clientHttpResponse.getBody(); }); return clientResponse.bodyToMono(String.class); } else return clientResponse.bodyToMono(String.class); }); Read this article for more examples link, I found it to be helpful when I experienced a similar problem with error handling A: I stumbled across this, so I figured I might as well post my code.
What I did was create a global handler that takes care of request and response errors coming out of the WebClient. This is in Kotlin but can be easily converted to Java, of course. This extends the default behavior so you can be sure to get all of the automatic configuration on top of your custom handling. As you can see, this doesn't really do anything custom; it just translates the WebClient errors into relevant responses. For response errors the status code and response body are simply passed through to the client. For request errors it currently just handles connection troubles, because that's all I care about (at the moment), but as you can see it can be easily extended. @Configuration class WebExceptionConfig(private val serverProperties: ServerProperties) { @Bean @Order(-2) fun errorWebExceptionHandler( errorAttributes: ErrorAttributes, resourceProperties: ResourceProperties, webProperties: WebProperties, viewResolvers: ObjectProvider<ViewResolver>, serverCodecConfigurer: ServerCodecConfigurer, applicationContext: ApplicationContext ): ErrorWebExceptionHandler?
{ val exceptionHandler = CustomErrorWebExceptionHandler( errorAttributes, (if (resourceProperties.hasBeenCustomized()) resourceProperties else webProperties.resources) as WebProperties.Resources, serverProperties.error, applicationContext ) exceptionHandler.setViewResolvers(viewResolvers.orderedStream().collect(Collectors.toList())) exceptionHandler.setMessageWriters(serverCodecConfigurer.writers) exceptionHandler.setMessageReaders(serverCodecConfigurer.readers) return exceptionHandler } } class CustomErrorWebExceptionHandler( errorAttributes: ErrorAttributes, resources: WebProperties.Resources, errorProperties: ErrorProperties, applicationContext: ApplicationContext ) : DefaultErrorWebExceptionHandler(errorAttributes, resources, errorProperties, applicationContext) { override fun handle(exchange: ServerWebExchange, throwable: Throwable): Mono<Void> = when (throwable) { is WebClientRequestException -> handleWebClientRequestException(exchange, throwable) is WebClientResponseException -> handleWebClientResponseException(exchange, throwable) else -> super.handle(exchange, throwable) } private fun handleWebClientResponseException(exchange: ServerWebExchange, throwable: WebClientResponseException): Mono<Void> { exchange.response.headers.add("Content-Type", "application/json") exchange.response.statusCode = throwable.statusCode val responseBodyBuffer = exchange .response .bufferFactory() .wrap(throwable.responseBodyAsByteArray) return exchange.response.writeWith(Mono.just(responseBodyBuffer)) } private fun handleWebClientRequestException(exchange: ServerWebExchange, throwable: WebClientRequestException): Mono<Void> { if (throwable.rootCause is ConnectException) { exchange.response.headers.add("Content-Type", "application/json") exchange.response.statusCode = HttpStatus.BAD_GATEWAY val responseBodyBuffer = exchange .response .bufferFactory() .wrap(ObjectMapper().writeValueAsBytes(customErrorWebException(exchange, HttpStatus.BAD_GATEWAY, throwable.message))) return
exchange.response.writeWith(Mono.just(responseBodyBuffer)) } else { return super.handle(exchange, throwable) } } private fun customErrorWebException(exchange: ServerWebExchange, status: HttpStatus, message: Any?) = CustomErrorWebException( Instant.now().toString(), exchange.request.path.value(), status.value(), status.reasonPhrase, message, exchange.request.id ) } data class CustomErrorWebException( val timestamp: String, val path: String, val status: Int, val error: String, val message: Any?, val requestId: String, ) A: Actually, you can log the body easily in the onError call: .doOnError { logger.warn { body(it) } } and: private fun body(it: Throwable) = if (it is WebClientResponseException) { ", body: ${it.responseBodyAsString}" } else { "" } A: For those that wish to see the details of a WebClient request that triggered a 500 Internal Server error, override the DefaultErrorWebExceptionHandler as follows. The Spring default is to tell you the client had an error, but it does not provide the body of the WebClient call, which can be invaluable in debugging. /** * Extends the DefaultErrorWebExceptionHandler to log the response body from a failed WebClient * response that results in a 500 Internal Server error. */ @Component @Order(-2) public class ExtendedErrorWebExceptionHandler extends DefaultErrorWebExceptionHandler { private static final Log logger = HttpLogging.forLogName(ExtendedErrorWebExceptionHandler.class); public ExtendedErrorWebExceptionHandler( ErrorAttributes errorAttributes, Resources resources, ServerProperties serverProperties, ApplicationContext applicationContext, ServerCodecConfigurer serverCodecConfigurer) { super(errorAttributes, resources, serverProperties.getError(), applicationContext); super.setMessageWriters(serverCodecConfigurer.getWriters()); super.setMessageReaders(serverCodecConfigurer.getReaders()); } /** * Override the default error log behavior to provide details for WebClientResponseException.
This * is so that administrators can better debug WebClient errors. * * @param request The request to the foundation service * @param response The response to the foundation service * @param throwable The error that occurred during processing the request */ @Override protected void logError(ServerRequest request, ServerResponse response, Throwable throwable) { // When the throwable is a WebClientResponseException, also log the body if (HttpStatus.resolve(response.rawStatusCode()) != null && response.statusCode().equals(HttpStatus.INTERNAL_SERVER_ERROR) && throwable instanceof WebClientResponseException) { logger.error( LogMessage.of( () -> String.format( "%s 500 Server Error for %s\n%s", request.exchange().getLogPrefix(), formatRequest(request), formatResponseError((WebClientResponseException) throwable))), throwable); } else { super.logError(request, response, throwable); } } private String formatRequest(ServerRequest request) { String rawQuery = request.uri().getRawQuery(); String query = StringUtils.hasText(rawQuery) ? "?" 
+ rawQuery : ""; return "HTTP " + request.methodName() + " \"" + request.path() + query + "\""; } private String formatResponseError(WebClientResponseException exception) { return String.format( "%-15s %s\n%-15s %s\n%-15s %d\n%-15s %s\n%-15s '%s'", " Message:", exception.getMessage(), " Status:", exception.getStatusText(), " Status Code:", exception.getRawStatusCode(), " Headers:", exception.getHeaders(), " Body:", exception.getResponseBodyAsString()); } } A: You have to cast the "Throwable e" parameter to WebClientResponseException, then you can call getResponseBodyAsString() : WebClient webClient = WebClient.create("https://httpstat.us/404"); Mono<Object> monoObject = webClient.get().retrieve().bodyToMono(Object.class); monoObject.doOnError(e -> { if( e instanceof WebClientResponseException ){ System.out.println( "ResponseBody = " + ((WebClientResponseException) e).getResponseBodyAsString() ); } }).subscribe(); // Display : ResponseBody = 404 Not Found
stackoverflow
{ "language": "en", "length": 1512, "provenance": "stackexchange_0000F.jsonl.gz:881279", "question_score": "46", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44593066" }
a64ddebfb53ed2a36bb46630f28e061c8df9fb0b
Stackoverflow Stackexchange Q: removing the central white circle? I have a circular plot and I would like to find a way to remove the little white circle in the middle. Here is my code:

ggplot(d5) + geom_tile(aes(x=x, y=y, fill=xyz)) +
  scale_y_continuous(expand=c(0,0), breaks=NULL, limits=c(0,3.6)) +
  scale_fill_continuous(low="darkgreen", high="white") +
  coord_polar(start=-1*pi/2, direction=1) +
  theme_bw() +
  theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())

many thanks.

A: I made a dummy example here:

require(dplyr)
expand.grid(x = 1:20, y = 1:2) %>%
  mutate(z = rnorm(length(x))) %>%
  ggplot() + geom_tile(aes(x=x, y=y, fill=z)) +
  scale_y_continuous(expand=c(0,0), breaks=NULL, limits=c(0,3.6)) +
  scale_fill_continuous(low="darkgreen", high="white") +
  coord_polar(start=-1*pi/2, direction=1) +
  theme_bw() +
  theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())

You're on the right track with the limits and expand arguments of scale_y, you just need to figure out where the actual lower bound is. To do that, let's plot the same set without coord_polar and without your scale_y. So in my example, the minimum edge of the tile is at y=0.5. So you have to figure out what your smallest y value is, and then subtract half of the default height for geom_tile (which is 1). Use that value for the lower y limit, and the hole in your pie will disappear.
Q: removing the central white circle? I have a circular plot and I would like to find a way to remove the little white circle in the middle. Here is my code:

ggplot(d5) + geom_tile(aes(x=x, y=y, fill=xyz)) +
  scale_y_continuous(expand=c(0,0), breaks=NULL, limits=c(0,3.6)) +
  scale_fill_continuous(low="darkgreen", high="white") +
  coord_polar(start=-1*pi/2, direction=1) +
  theme_bw() +
  theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())

many thanks.

A: I made a dummy example here:

require(dplyr)
expand.grid(x = 1:20, y = 1:2) %>%
  mutate(z = rnorm(length(x))) %>%
  ggplot() + geom_tile(aes(x=x, y=y, fill=z)) +
  scale_y_continuous(expand=c(0,0), breaks=NULL, limits=c(0,3.6)) +
  scale_fill_continuous(low="darkgreen", high="white") +
  coord_polar(start=-1*pi/2, direction=1) +
  theme_bw() +
  theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())

You're on the right track with the limits and expand arguments of scale_y, you just need to figure out where the actual lower bound is. To do that, let's plot the same set without coord_polar and without your scale_y. So in my example, the minimum edge of the tile is at y=0.5. So you have to figure out what your smallest y value is, and then subtract half of the default height for geom_tile (which is 1). Use that value for the lower y limit, and the hole in your pie will disappear.

A: Just an addition to the answer given by @Brian.
The correct limits of the y-axis that eliminate the little white circle in the middle can be calculated as follows:

library(dplyr)
library(ggplot2)

set.seed(4321)
d5 <- expand.grid(x = 1:20, y = 1:2) %>% mutate(z = rnorm(length(x)))

yval <- sort(unique(d5$y))
h <- (yval[2] - yval[1])/2
ylim_lo <- yval[1] - h
ylim_up <- yval[2] + h

ggplot(d5) + geom_tile(aes(x=x, y=y, fill=z)) +
  scale_y_continuous(expand=c(0,0), breaks=NULL, limits=c(ylim_lo, ylim_up)) +
  scale_fill_continuous(low="darkgreen", high="white") +
  coord_polar(start=-1*pi/2, direction=1) +
  theme_bw() +
  theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
stackoverflow
{ "language": "en", "length": 257, "provenance": "stackexchange_0000F.jsonl.gz:881290", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44593112" }
6dde13c8ae43bf89c1fe9e81a5f550fffe1ee344
Stackoverflow Stackexchange Q: Stacking copies of an array/ a torch tensor efficiently? I'm a Python/Pytorch user. First, in numpy, let's say I have an array M of size LxL, and i want to have the following array: A=(M,...,M) of size, say, NxLxL, is there a more elegant/memory efficient way of doing it than : A=np.array([M]*N) ? Same question with torch tensor ! Cause, Now, if M is a Variable(torch.tensor), i have to do: A=torch.autograd.Variable(torch.tensor(np.array([M]*N))) which is ugly ! A: If you don't mind creating new memory: * *In numpy, you can use np.repeat() or np.tile(). With efficiency in mind, you should choose the one which organises the memory for your purposes, rather than re-arranging after the fact: * *np.repeat([1, 2], 2) == [1, 1, 2, 2] * np.tile([1, 2], 2) == [1, 2, 1, 2] *In pytorch, you can use tensor.repeat(). Note: This matches np.tile, not np.repeat. If you don't want to create new memory: * *In numpy, you can use np.broadcast_to(). This creates a readonly view of the memory. *In pytorch, you can use tensor.expand(). This creates an editable view of the memory, so operations like += will have weird effects.
Q: Stacking copies of an array/ a torch tensor efficiently? I'm a Python/Pytorch user. First, in numpy, let's say I have an array M of size LxL, and i want to have the following array: A=(M,...,M) of size, say, NxLxL, is there a more elegant/memory efficient way of doing it than : A=np.array([M]*N) ? Same question with torch tensor ! Cause, Now, if M is a Variable(torch.tensor), i have to do: A=torch.autograd.Variable(torch.tensor(np.array([M]*N))) which is ugly ! A: If you don't mind creating new memory: * *In numpy, you can use np.repeat() or np.tile(). With efficiency in mind, you should choose the one which organises the memory for your purposes, rather than re-arranging after the fact: * *np.repeat([1, 2], 2) == [1, 1, 2, 2] * np.tile([1, 2], 2) == [1, 2, 1, 2] *In pytorch, you can use tensor.repeat(). Note: This matches np.tile, not np.repeat. If you don't want to create new memory: * *In numpy, you can use np.broadcast_to(). This creates a readonly view of the memory. *In pytorch, you can use tensor.expand(). This creates an editable view of the memory, so operations like += will have weird effects. A: Note, that you need to decide whether you would like to allocate new memory for your expanded array or whether you simply require a new view of the existing memory of the original array. In PyTorch, this distinction gives rise to the two methods expand() and repeat(). The former only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. In contrast, the latter copies the original data and allocates new memory. 
In PyTorch, you can use expand() and repeat() as follows for your purposes:

import torch

L = 10
N = 20
A = torch.randn(L, L)
A.expand(N, L, L)  # specifies new size
A.repeat(N, 1, 1)  # specifies number of copies

In Numpy, there are a multitude of ways to achieve what you did above in a more elegant and efficient manner. For your particular purpose, I would recommend np.tile() over np.repeat(), since np.repeat() is designed to operate on the particular elements of an array, while np.tile() is designed to operate on the entire array. Hence,

import numpy as np

L = 10
N = 20
A = np.random.rand(L, L)
np.tile(A, (N, 1, 1))

A: In numpy repeat is faster:

np.repeat(M[None, ...], N, 0)

I expand the dimensions of the M, and then repeat along that new dimension.
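A quick way to see the memory trade-off from the answers above is to compare np.broadcast_to() (a view, no allocation, read-only) with np.tile() (a full copy). This is a minimal sketch; the shapes L and N are just illustrative:

```python
import numpy as np

L, N = 3, 4
M = np.arange(L * L).reshape(L, L)

# View-based: no new memory is allocated, but the result is read-only.
A_view = np.broadcast_to(M, (N, L, L))

# Copy-based: allocates N*L*L elements; safe to modify independently.
A_copy = np.tile(M, (N, 1, 1))

print(A_view.shape, A_view.flags.writeable)        # (4, 3, 3) False
print(A_copy.shape, A_copy.flags.writeable)        # (4, 3, 3) True
print(np.array_equal(A_copy, np.array([M] * N)))   # True
```

Both produce the same values as the original A = np.array([M]*N); the view is the right choice when A is only read, the copy when its slices must be mutated separately.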
stackoverflow
{ "language": "en", "length": 425, "provenance": "stackexchange_0000F.jsonl.gz:881302", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44593141" }
3dbc365a17e856eb89013174d5be4dbd8048d695
Stackoverflow Stackexchange Q: Invoke-RestMethod InFile from Blob Storage I need to upload a file to a REST endpoint but my file resides in Blob Storage. My end goal is to run this from Azure Automation so I can have a consistent Runbook to add files to new Azure Web Apps. The challenge I am running into is that InFile appears to look for local storage. I tried using New-PSDrive from Azure Automation (along with New-PSSession), but it won't add my local drive as a session resource within Azure Automation. Is there anyway to upload to the Rest Endpoint from Blob Storage? I am trying to hit the Kudu Zip Upload API: https://github.com/projectkudu/kudu/wiki/REST-API. I am guessing that some sort of filestream from the Azure Blob to the Azure Web App FTP may be a better way forward.

$username = "`$mywebappusername"
$password = "abcdefghijklmnopqrstuvwxyz"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username,$password)))
$userAgent = "powershell/1.0"
$apiUrl = "https://mywebapp.scm.azurewebsites.net/api/zip/site/wwwroot"
$filePath = "https://myblob.blob.core.windows.net/configs/testconfig.zip"

Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -UserAgent $userAgent -Method PUT -InFile $filePath -ContentType "multipart/form-data"
Q: Invoke-RestMethod InFile from Blob Storage I need to upload a file to a REST endpoint but my file resides in Blob Storage. My end goal is to run this from Azure Automation so I can have a consistent Runbook to add files to new Azure Web Apps. The challenge I am running into is that InFile appears to look for local storage. I tried using New-PSDrive from Azure Automation (along with New-PSSession), but it won't add my local drive as a session resource within Azure Automation. Is there any way to upload to the Rest Endpoint from Blob Storage? I am trying to hit the Kudu Zip Upload API: https://github.com/projectkudu/kudu/wiki/REST-API. I am guessing that some sort of filestream from the Azure Blob to the Azure Web App FTP may be a better way forward.

$username = "`$mywebappusername"
$password = "abcdefghijklmnopqrstuvwxyz"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username,$password)))
$userAgent = "powershell/1.0"
$apiUrl = "https://mywebapp.scm.azurewebsites.net/api/zip/site/wwwroot"
$filePath = "https://myblob.blob.core.windows.net/configs/testconfig.zip"

Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -UserAgent $userAgent -Method PUT -InFile $filePath -ContentType "multipart/form-data"
stackoverflow
{ "language": "en", "length": 170, "provenance": "stackexchange_0000F.jsonl.gz:881312", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44593163" }
c2e78bcdeb9ce8015cfb8a1d0b2e574286b01a99
Stackoverflow Stackexchange Q: Comparison Tool in GitHub (compare releases, or shas if necessary) Can someone recommend a comparison tool/add-on in GitHub? I'd like to see the code changes between two particular releases. It would be nice if it showed all the files that have changed and I could then drill down into each file of interest. This link: https://github.com/blog/612-introducing-github-compare-view said there already is one in GitHub, but I can't see the "compare" button they refer to. The post is from 2010 so perhaps the feature was removed. If there is nothing in GitHub, perhaps some direction on just using Git to list all the files that have changed between the two releases. Then I'd need a way to see what those changes are for each file. Frankly though, this seems like something that would be a pain on the command line! Thanks, Dave

A: Starting January 2020, you now have "a shortcut to compare across two releases": You can now compare tags between two releases – in order to determine what changes have been made – by clicking on the Compare ▾ button for a given release. That gives you a URL like https://github.com/go-gitea/gitea/compare/v1.11.0-rc1...release/v1.11
Q: Comparison Tool in GitHub (compare releases, or shas if necessary) Can someone recommend a comparison tool/add-on in GitHub? I'd like to see the code changes between two particular releases. It would be nice if it showed all the files that have changed and I could then drill down into each file of interest. This link: https://github.com/blog/612-introducing-github-compare-view said there already is one in GitHub, but I can't see the "compare" button they refer to. The post is from 2010 so perhaps the feature was removed. If there is nothing in GitHub, perhaps some direction on just using Git to list all the files that have changed between the two releases. Then I'd need a way to see what those changes are for each file. Frankly though, this seems like something that would be a pain on the command line! Thanks, Dave

A: Starting January 2020, you now have "a shortcut to compare across two releases": You can now compare tags between two releases – in order to determine what changes have been made – by clicking on the Compare ▾ button for a given release. That gives you a URL like https://github.com/go-gitea/gitea/compare/v1.11.0-rc1...release/v1.11

A: Github supports the split view in compare pages. Just add ?diff=split to the url and you'll be fine. E.g. https://github.com/rails/rails/compare/v5.0.2.rc1...v5.0.2?diff=split. Github remembers your preferred comparison view. To reset it, write ?diff=unified instead.

A: Can someone recommend a comparison tool/add-on in GitHub. I can't see the "compare" button they refer to. Append /compare to your repository's path to enter the compare view.
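For the command-line route the question mentions, plain git can already list the changed files between two releases and then drill into each one. A sketch (the tag names and the throwaway demo repo below are only for illustration):

```shell
# Build a throwaway repo with two tagged "releases" to demonstrate against
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "release 1"
git tag v1.0
echo "new feature" > feature.txt
git add feature.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "release 2"
git tag v2.0

# List every file that changed between the two releases (status letter + path)
git diff --name-status v1.0 v2.0

# Per-file change counts
git diff --stat v1.0 v2.0

# Drill down into a single file of interest
git diff v1.0 v2.0 -- feature.txt
```

The same three git diff invocations work against any two tags, branches, or shas in a real repository.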
stackoverflow
{ "language": "en", "length": 255, "provenance": "stackexchange_0000F.jsonl.gz:881323", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44593189" }
f0ed01adeb024c8ffb8402bb0c902ce929534090
Stackoverflow Stackexchange Q: Eclipse: how to set XML indentation type for a specific project only? Usually I use 4 spaces per indentation level. Some projects however, use something different, like 1 tab. How do I configure this setting on a per project basis to override the workspace default? The code style/format settings for Java allow for the creation of profiles. There seems to be no such thing for XML.
Q: Eclipse: how to set XML indentation type for a specific project only? Usually I use 4 spaces per indentation level. Some projects however, use something different, like 1 tab. How do I configure this setting on a per project basis to override the workspace default? The code style/format settings for Java allow for the creation of profiles. There seems to be no such thing for XML.
stackoverflow
{ "language": "en", "length": 67, "provenance": "stackexchange_0000F.jsonl.gz:881331", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44593221" }
281f3d01aede5833145e31bc2b109bcacc6c3c6b
Stackoverflow Stackexchange Q: Elevate md-card in angular material According to the Material Design spec: On desktop, cards can have a resting elevation of 0dp and gain an elevation of 8dp on hover. How can I create this animated effect using Angular Material 2? I have considered doing this with (hover)= and with animations. I don't really care for the approach, I would prefer for it to elevate on hover. The reason for this, I'm using cards as buttons in my UI.

A: To change elevation of md-card, create a class like the following:

.z-depth:hover {
  box-shadow: 0 8px 8px 8px rgba(0,0,0,.2), 0 8px 8px 0 rgba(0,0,0,.14), 0 8px 8px 0 rgba(0,0,0,.12) !important;
  transform: translate3d(0,0,0);
  transition: background .4s cubic-bezier(.25,.8,.25,1), box-shadow 280ms cubic-bezier(.4,0,.2,1);
}

You can change the box-shadow numbers to find the exact elevation you are looking for. Plnkr demo.
Q: Elevate md-card in angular material According to the Material Design spec: On desktop, cards can have a resting elevation of 0dp and gain an elevation of 8dp on hover. How can I create this animated effect using Angular Material 2? I have considered doing this with (hover)= and with animations. I don't really care for the approach, I would prefer for it to elevate on hover. The reason for this, I'm using cards as buttons in my UI.

A: To change elevation of md-card, create a class like the following:

.z-depth:hover {
  box-shadow: 0 8px 8px 8px rgba(0,0,0,.2), 0 8px 8px 0 rgba(0,0,0,.14), 0 8px 8px 0 rgba(0,0,0,.12) !important;
  transform: translate3d(0,0,0);
  transition: background .4s cubic-bezier(.25,.8,.25,1), box-shadow 280ms cubic-bezier(.4,0,.2,1);
}

You can change the box-shadow numbers to find the exact elevation you are looking for. Plnkr demo.

A: As for me it would be better to use predefined css classes for it, and toggle this class when the user hovers over md-card. To change the elevation use mat-elevation-z{{elevationValue}}

A: another way of doing this is that you get material elevation classes in your style file and use it there. for example in my scss file i have:

@use '~@angular/material' as mat;

.my-card {
  // ...some-custom-styles
  &:hover {
    @include mat.elevation(12);
  }
}

A: A directive is re-usable and configurable, and can be applied to any number of elements. Create the directive, and reference it in your module's declarations. This directive adds and removes the elevation class when the user's mouse enters or leaves the element.
import { Directive, ElementRef, HostListener, Input, Renderer2, OnChanges, SimpleChanges } from '@angular/core';

@Directive({
  selector: '[appMaterialElevation]'
})
export class MaterialElevationDirective implements OnChanges {

  @Input() defaultElevation = 2;
  @Input() raisedElevation = 8;

  constructor(
    private element: ElementRef,
    private renderer: Renderer2
  ) {
    this.setElevation(this.defaultElevation);
  }

  ngOnChanges(_changes: SimpleChanges) {
    this.setElevation(this.defaultElevation);
  }

  @HostListener('mouseenter')
  onMouseEnter() {
    this.setElevation(this.raisedElevation);
  }

  @HostListener('mouseleave')
  onMouseLeave() {
    this.setElevation(this.defaultElevation);
  }

  setElevation(amount: number) {
    const elevationPrefix = 'mat-elevation-z';
    // remove all elevation classes
    const classesToRemove = Array.from((<HTMLElement>this.element.nativeElement).classList)
      .filter(c => c.startsWith(elevationPrefix));
    classesToRemove.forEach((c) => {
      this.renderer.removeClass(this.element.nativeElement, c);
    });
    // add the given elevation class
    const newClass = `${elevationPrefix}${amount}`;
    this.renderer.addClass(this.element.nativeElement, newClass);
  }
}

Then the directive can be applied to an element, with optional input properties.

<mat-card appMaterialElevation [defaultElevation]="variableHeight" raisedElevation="16">
  <mat-card-header>
    <mat-card-title>Card Title</mat-card-title>
  </mat-card-header>
  <mat-card-content>
    <p>
      This card changes elevation when you hover over it!
    </p>
  </mat-card-content>
</mat-card>

See this demo StackBlitz.
stackoverflow
{ "language": "en", "length": 385, "provenance": "stackexchange_0000F.jsonl.gz:881338", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44593237" }