72083494143792c1ec1e03d3ca29951f6d0074cf | Stackoverflow Stackexchange
Q: Detect case class in Scala macro Within a method being called as a Scala (2.11) macro, is there a way to programmatically determine whether a Type is a case class or not?
The API for the method I'm working through boils down to this:
def typeIsCaseClass(c: Context)(targetType: c.universe.Type): Boolean = {
  // targetType "is case class?"
}
I'm open to altering the API if need be.
A: The symbols usually contain all the interesting information:
def typeIsCaseClass(c: Context)(targetType: c.universe.Type): Boolean = {
  val sym = targetType.typeSymbol
  sym.isClass && sym.asClass.isCaseClass
}
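For completeness, a hedged sketch of how this check might sit inside a full def-macro; the object name, front-end method, and usage comments are illustrative, not part of the original answer:
import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

object CaseClassCheck {
  // hypothetical front-end: expands at compile time to a Boolean literal
  def isCaseClass[T]: Boolean = macro impl[T]

  def impl[T: c.WeakTypeTag](c: Context): c.Expr[Boolean] = {
    import c.universe._
    val sym = weakTypeOf[T].typeSymbol
    c.Expr[Boolean](Literal(Constant(sym.isClass && sym.asClass.isCaseClass)))
  }
}

// In a separate compilation unit:
// case class Foo(x: Int)
// CaseClassCheck.isCaseClass[Foo]    // true
// CaseClassCheck.isCaseClass[String] // false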
| stackoverflow | {
"language": "en",
"length": 91,
"provenance": "stackexchange_0000F.jsonl.gz:894001",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633267"
} |
3062fa808afdf13cd9e2ec011b176dbf9240223b | Stackoverflow Stackexchange
Q: remote prod not found in git remotes So, git somehow sees remotes of all my apps, but fails to use them when asked to:
❯ git remote -v
...
prod https://git.heroku.com/my-app.git (fetch)
prod https://git.heroku.com/my-app.git (push)
...
❯ heroku run rails c -r prod --verbose
▸ remote prod not found in git remotes
At the same time, --application works fine
❯ heroku run rails c -a my-app
Running rails c on ⬢ my-app... ⣷ connecting, run.4544 (Standard-1X)
A: So, I don't know what broke it, but re-running git:remote fixed it
heroku git:remote -r prod -a my-app-prod
| stackoverflow | {
"language": "en",
"length": 96,
"provenance": "stackexchange_0000F.jsonl.gz:894012",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633291"
} |
1cd8ea60fbcce820ba56ac80d59fcb7e22a750f1 | Stackoverflow Stackexchange
Q: AttributeError: module 'cv2.cv2' has no attribute 'createLBPHFaceRecognizer' I am facing an attribute error while running my face recognition code. My face detection code runs perfectly, but when I try to run the face recognition code it shows an attribute error. I googled and tried to follow all the steps, but it still shows the same error. Here is my code:
[image: face recognition]
and I get the following error:
C:\Users\MAN\AppData\Local\Programs\Python\Python36\python.exe C:/Users/MAN/PycharmProjects/facerecognition/Recognise/recognize1.py
Traceback (most recent call last):
File "C:/Users/MAN/PycharmProjects/facerecognition/Recognise/recognize1.py", line 4, in <module>
recognizer = cv2.createLBPHFaceRecognizer()
AttributeError: module 'cv2.cv2' has no attribute 'createLBPHFaceRecognizer'
Process finished with exit code 1.
I am using the Windows platform with Python 3.6. Thanks in advance.
A: You might be running Python 3, and therefore you are supposed to use pip3 to install the opencv-contrib package:
pip3 install opencv-contrib-python
This worked for me.
A: Use the following:
recognizer = cv2.face.LBPHFaceRecognizer_create()
After you install:
pip install opencv-contrib-python
If using anaconda then in anaconda prompt:
conda install pip
then
pip install opencv-contrib-python
A: opencv has changed some functions and moved them to their opencv_contrib repo so you have to call the mentioned method with:
recognizer = cv2.face.createLBPHFaceRecognizer()
Note: You can see this issue about missing docs. Try using help function help(cv2.face.createLBPHFaceRecognizer) for more details.
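To make the fix concrete, here is a minimal hedged sketch of training and using the recognizer once the contrib package is installed; the random images and labels are placeholders for real face crops:
import cv2
import numpy as np

# placeholder data: in practice these come from a face-detection step
faces = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1])  # one integer id per person

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)  # expects grayscale images and integer labels

label, confidence = recognizer.predict(faces[0])  # lower confidence means a closer match
print(label, confidence)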
A: I had a problem while executing:
import cv2 as cv
face_recognizer = cv.face.LBPHFaceRecognizer_create()
which generated the error: cv2.cv2 has no face attribute.
If I try to install it with:
sudo pip install opencv-contrib-python
it takes hours to compile and in the end nothing works!
But on the site https://www.piwheels.org/project/opencv-contrib-python/#install only version 4.4.0.46 has files!
So I tried this:
sudo pip3 install opencv-contrib-python==4.4.0.46
and the installation is instantaneous!
I also needed to install some other libraries:
sudo apt install libaec0 libaom0 libatk-bridge2.0-0 libatk1.0-0 libatlas3-base libatspi2.0-0 libavcodec58 libavformat58 libavutil56 libbluray2 libcairo-gobject2 libcairo2 libchromaprint1 libcodec2-0.8.1 libcroco3 libdatrie1 libdrm2 libepoxy0 libfontconfig1 libgdk-pixbuf2.0-0 libgfortran5 libgme0 libgraphite2-3 libgsm1 libgtk-3-0 libharfbuzz0b libhdf5-103 libilmbase23 libjbig0 libmp3lame0 libmpg123-0 libogg0 libopenexr23 libopenjp2-7 libopenmpt0 libopus0 libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0 libpixman-1-0 librsvg2-2 libshine3 libsnappy1v5 libsoxr0 libspeex1 libssh-gcrypt-4 libswresample3 libswscale5 libsz2 libthai0 libtheora0 libtiff5 libtwolame0 libva-drm2 libva-x11-2 libva2 libvdpau1 libvorbis0a libvorbisenc2 libvorbisfile3 libvpx5 libwavpack1 libwayland-client0 libwayland-cursor0 libwayland-egl1 libwebp6 libwebpmux3 libx264-155 libx265-165 libxcb-render0 libxcb-shm0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxi6 libxinerama1 libxkbcommon0 libxrandr2 libxrender1 libxvidcore4 libzvbi0
Since then, importing cv2 and using cv2.face works well!
pip3 freeze now shows:
opencv-contrib-python==4.4.0.46
opencv-python==4.5.1.48
Hope this will be useful!
A: For me changing createLBPHFaceRecognizer() to
recognizer = cv2.face.LBPHFaceRecognizer_create()
fixed the problem
A: I got openCV installed smoothly in my mac by:
$ brew install opencv
$ brew link --overwrite --dry-run opencv // to force linking
$ pip3 install opencv-contrib-python
I got it at windows 10 using:
c:\> pip3 install opencv-python
c:\> pip3 install opencv-contrib-python
Then I got it tested by
$ python3
Python 3.7.3 (default, Mar 27 2019, 09:23:15)
[Clang 10.0.1 (clang-1001.0.46.3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'4.1.0'
>>> exit()
A: python -m pip install --user opencv-contrib-python
After doing this just Restart your system and then if you are on Opencv >= 4.* use :
recognizer = cv2.face.LBPHFaceRecognizer_create()
This should solve 90% of the problem.
A: I'm using PyCharm and installing opencv-contrib-python-headless solved it for me. I tried all the other solutions on this thread initially and none of them seemed to work for me.
A: If you are using Python 3.x and opencv==4.1.0, then use the following commands.
First of all:
python -m pip install --user opencv-contrib-python
then use this in the Python script:
cv2.face.LBPHFaceRecognizer_create()
A: You need to install opencv-contrib
pip install opencv-contrib-python
It should work after that.
A: I had a similar problem:
module cv2 has no attribute "cv2.TrackerCSRT_create"
My Python version is 3.8.0 under Windows 10.
The problem was the opencv version installation.
So I fixed it this way (cmd prompt with administrator privileges):
* Uninstalled opencv-python: pip uninstall opencv-python
* Installed only opencv-contrib-python: pip install opencv-contrib-python
Anyway you can read the following guide:
https://github.com/skvark/opencv-python
A: RESTART YOUR IDE
I tried all of the different things but nothing seemed to work; then I just restarted my IDE and it worked like a charm.
Still, if it does not work then try restarting your system.
FYI, I am working on the following versions
opencv-contrib-python==4.4.0.46
opencv-python==4.1.2.30
A: For me, I had to have OpenCV (3.4.2), Py-OpenCV (3.4.2), LibOpenCV (3.4.2).
My Python was version 3.5.6 with Anaconda in Windows OS 10.
A: Check your OpenCV version
import cv2
cv2.__version__
If you are running Python v3.x and OpenCV v4 and above:
pip install opencv-contrib-python
Try running your program again. This should now work.
A: Go to Python Interpreter Settings in Pycharm / your IDE environment and change the package versions or install them in these versions.
opencv-contrib-python == 4.4.0.46
opencv-python == 4.1.2.30
Check Screenshot for clarity.
| stackoverflow | {
"language": "en",
"length": 787,
"provenance": "stackexchange_0000F.jsonl.gz:894037",
"question_score": "58",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633378"
} |
f064ba529528cf635180bb7072d7410649941208 | Stackoverflow Stackexchange
Q: How can I get the feature names from sklearn TruncatedSVD object? I have the following code:
import pandas as pd
import numpy as np
from sklearn.decomposition import TruncatedSVD
df = pd.DataFrame(np.random.randn(1000, 25), index=dates, columns=list('ABCDEFGHIJKLMOPQRSTUVWXYZ'))
def reduce(dim):
    svd = TruncatedSVD(n_components=dim, n_iter=7, random_state=42)
    return svd.fit(df)
fitted = reduce(5)
how do i get the column names from fitted?
A:
In continuation of Mikhail's post.
Assume that you already have feature_names from vectorizer.get_feature_names() and that you have called svd.fit(X).
Now you can also extract the sorted best feature names using the following code:
best_features = [feature_names[i] for i in svd.components_[0].argsort()[::-1]]
The code above takes the indices of a descending sort of svd.components_[0], looks them up in feature_names (all of the features), and constructs the best_features array.
Then you can see for example the 10 best features:
In[21]: best_features[:10]
Out[21]:
['manag',
'develop',
'busi',
'solut',
'initi',
'enterprise',
'project',
'program',
'process',
'plan']
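As a hedged, self-contained illustration of the vectorizer-plus-SVD combination described above (the toy corpus is made up):
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "manage development projects",
    "business solutions and planning",
    "enterprise program process",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)

svd = TruncatedSVD(n_components=2, n_iter=7, random_state=42)
svd.fit(X)

# get_feature_names_out() on recent scikit-learn; get_feature_names() on older versions
feature_names = vectorizer.get_feature_names_out()
best_features = [feature_names[i] for i in svd.components_[0].argsort()[::-1]]
print(best_features[:5])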
A: fitted column names would be SVD dimensions.
Each dimension is a linear combination of input features. To understand what a particular dimension mean take a look at svd.components_ array - it contains a matrix of coefficients input features are multiplied by.
Your original example, slightly changed:
import pandas as pd
import numpy as np
from sklearn.decomposition import TruncatedSVD
feature_names = list('ABCDEF')
df = pd.DataFrame(
np.random.randn(1000, len(feature_names)),
columns=feature_names
)
def reduce(dim):
    svd = TruncatedSVD(n_components=dim, n_iter=7, random_state=42)
    return svd.fit(df)
svd = reduce(3)
Then you can do something like that to get a more readable SVD dimension name - let's compute it for 0th dimension:
" ".join([
"%+0.3f*%s" % (coef, feat)
for coef, feat in zip(svd.components_[0], feature_names)
])
It shows +0.170*A -0.564*B -0.118*C +0.367*D +0.528*E +0.475*F - this is a "feature name" you can use for a 0th SVD dimension in this case (of course, coefficients depend on data, so feature name also depends on data).
If you have many input dimensions you may trade some "precision" with inspectability, e.g. sort coefficients and use only a few top of them. A more elaborate example can be found in https://github.com/TeamHG-Memex/eli5/pull/208 (disclaimer: I'm one of eli5 maintainers; pull request is not by me).
| stackoverflow | {
"language": "en",
"length": 350,
"provenance": "stackexchange_0000F.jsonl.gz:894103",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633571"
} |
c258db3a4871263765eee43a8e344b18526b0bb9 | Stackoverflow Stackexchange
Q: gradle java: pre-process resources before building jar Gradle java plugin:
src/main/java
resources/foo-config.xml
The foo-config.xml has some variables to replace, for example, @VERSION_NUMBER@.
How can I process it before generating the jar?
The foo-config.xml should be copied to the build dir for processing to avoid any changes under src directory.
A: Configure the processResources task, which is a copy task. You can add some filtering there. Make sure to set the right encoding for the filtering so as not to corrupt special characters if you have some, e.g. like
import org.apache.tools.ant.filters.ReplaceTokens

processResources {
    filteringCharset 'UTF-8'
    filter(ReplaceTokens, tokens: [VERSION_NUMBER: version])
}
| stackoverflow | {
"language": "en",
"length": 95,
"provenance": "stackexchange_0000F.jsonl.gz:894109",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633593"
} |
baf31ccb57e687f71e6c4a974e67f6c539e2706d | Stackoverflow Stackexchange
Q: What is the return type of document.getElementById() What is the type of the variable "element" in this snippet?
I thought it is a number (an ID or something), but now, I have no idea.
The code works, but I don't understand, why the var element can be used in a for cycle, like an array. Is there any explanation about this?
<script type="text/javascript">
function showAttributes() {
    var element = document.getElementById("videos");
    var listAttributes = "";
    for (var attribute in element) {
        var valueOfAttrib = element.getAttribute(attribute);
        listAttributes = listAttributes + attribute + ": " + valueOfAttrib + "\n";
    }
    alert(listAttributes);
}
</script>
A:
The getElementById() method returns the element that has the ID
attribute with the specified value.
[....]
Returns null if no elements with the specified ID exists.
So it returns an HTMLElement Object
source
A:
What is the return type of document.getElementById()
Element. It returns a reference to the actual object for the element in the DOM (or null if none was found with that id). Details:
* Spec for Element
* Spec for getElementById
* Element on MDN
* getElementById on MDN
I thought it is a number (an ID or something)
No, that's "video" (the string you used to look it up). It's also accessible from the id property of the Element object.
The code works, but I don't understand, why the var element can be used in a for cycle, like an array.
for-in isn't primarily for use on arrays, it's for use on objects. The only reason it works on arrays is that arrays are objects. (See this question's answers and this page on MDN for more on that.) DOM elements are objects, so you can loop through their enumerable properties via for-in.
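A quick hedged sketch that verifies both points in the browser console (assuming the page contains an element with id "videos"):
var element = document.getElementById("videos");

console.log(typeof element);                  // "object"
console.log(element instanceof HTMLElement);  // true when the element exists
console.log(element && element.id);           // "videos" - the id is just a string property

console.log(document.getElementById("nope")); // null when no element has that id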
A: The return type of document.getElementById() is an Element object or null. Please refer to the following link from MDN:
A: It looks like you are really questioning why the for loop works, not what kind of object getElementById returns. Read this article:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in
The for (var ... in ...) syntax causes JavaScript to iterate over the properties of the specified object.
A: The return type can be anything that the programmer of a Web Browser defines to the JS VM library used, to create a specific implementation of Javascript. For instance, the webcwebbrowser which uses SpiderMonkey returns a JSObject of HTMLElement JSClass which it gets by calling CreateJSObject on the underlying internal HTMLElement object. The JSObject is the internal VM library representation of objects visible to JS scripts, such as a HTMLElement. A HTMLElement in a script is actually accessing a JSObject logically instantiated from the HTMLElement JSClass, where JSObject and JSClasses are C++ classes. The HTMLElement JSObject also has a corresponding C++ native marshalled object of class HTMLElement.
| stackoverflow | {
"language": "en",
"length": 453,
"provenance": "stackexchange_0000F.jsonl.gz:894116",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633621"
} |
684915af19a8467a31c8a5f8b9f7b4d9d617e845 | Stackoverflow Stackexchange
Q: New approach to configure kube-proxy's proxymode in GKE/Kubernetes? I see a recent pull request was merged to remove the net.experimental.kubernetes.io/proxy-mode and net.beta.kubernetes.io/proxy-mode annotations.
My application's reverse proxy servers currently work much better when the proxymode is set to userspace, and I would like to keep using this setting after upgrading.
Since Google Container Engine hosts the Kubernetes master, and I cannot directly access that VM, how can I configure the kube-proxy proxymode without using annotations? Ideally I could change a cluster-wide setting so that new nodes (after autoscaling) also use the userspace proxy mode.
| stackoverflow | {
"language": "en",
"length": 95,
"provenance": "stackexchange_0000F.jsonl.gz:894144",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633700"
} |
9a9961bd2fee0570b0679947886f79083f42aaa8 | Stackoverflow Stackexchange
Q: Stop a DispatchQueue that is running on the main thread I have this block of code:
DispatchQueue.main.asyncAfter(deadline: .now() + (delay * Double(isDelayAccounted.hashValue)) + extraDelay) {
self.isShootingOnHold = false
self.shoot()
self.shootingEngine = Timer.scheduledTimer(timeInterval: (Double(60)/Double(self.ratePerMinute)), target: self, selector: #selector(ShootingEnemy.shoot), userInfo: nil, repeats: true)
}
Now, I want to be able to stop this thread from executing. How can I stop it from being executed? For instance, after 3 seconds, I decide I don't want that to execute anymore so I want to stop it.
A: You can use DispatchWorkItems. They can be scheduled on DispatchQueues and cancelled before their execution.
let work = DispatchWorkItem(block: {
self.isShootingOnHold = false
self.shoot()
self.shootingEngine = Timer.scheduledTimer(timeInterval: (Double(60)/Double(self.ratePerMinute)), target: self, selector: #selector(ShootingEnemy.shoot), userInfo: nil, repeats: true)
})
DispatchQueue.main.asyncAfter(deadline: .now() + (delay * Double(isDelayAccounted.hashValue)) + extraDelay, execute: work)
work.cancel()
A: You could use an one-shot DispatchSourceTimer rather than asyncAfter
var oneShot : DispatchSourceTimer!
oneShot = DispatchSource.makeTimerSource(queue: DispatchQueue.main)
oneShot.scheduleOneshot(deadline: .now() + (delay * Double(isDelayAccounted.hashValue)) + extraDelay)
oneShot.setEventHandler {
self.isShootingOnHold = false
self.shoot()
self.shootingEngine = Timer.scheduledTimer(timeInterval: (Double(60)/Double(self.ratePerMinute)), target: self, selector: #selector(ShootingEnemy.shoot), userInfo: nil, repeats: true)
}
oneShot.setCancelHandler {
// do something after cancellation
}
oneShot.resume()
and cancel the execution with
oneShot?.cancel()
oneShot = nil
| stackoverflow | {
"language": "en",
"length": 195,
"provenance": "stackexchange_0000F.jsonl.gz:894155",
"question_score": "16",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633729"
} |
a4c209c9d0c0ad9787c648a03f074941b0dcb17d | Stackoverflow Stackexchange
Q: Transition table with VueJS I tried to continue with the example on the VueJS website. I tried to add images and a transition state when I sort data.
However it doesn't work. I have tried to add the following line to make it work, but it doesn't:
<tbody name="table-row" is="transition-group">
Do you have some ideas for me?
https://codepen.io/wooza/pen/wezqXP
A: https://codepen.io/anon/pen/gRmxwJ
https://v2.vuejs.org/v2/guide/transitions.html#List-Transitions
Unlike <transition>, it renders an actual element: a <span> by
default. You can change the element that’s rendered with the tag
attribute.
<transition-group tag="tbody" name="table-row">
<tr v-for="entry in filteredData" :key="entry.name">
//...
</tr>
</transition-group>
| stackoverflow | {
"language": "en",
"length": 95,
"provenance": "stackexchange_0000F.jsonl.gz:894171",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633778"
} |
d61a154b24d773148eda0a0ab071c0ec0f5edf6f | Stackoverflow Stackexchange
Q: alpine package py-pip missing I'm trying to install python pip in my Alpine image using a Docker Compose file but get the following error:
ERROR: unsatisfiable constraints:
py-pip (missing):
required by: world[py-pip]
ERROR: Service 'web' failed to build: The command '/bin/sh -c apk add py-pip' returned a non-zero code: 1
A: For me --no-cache option worked.
apk add --no-cache py-pip
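A hedged Dockerfile sketch of the same idea; the Alpine tag and the package installed with pip are arbitrary examples:
# Alpine 3.12+ ships pip for Python 3 as py3-pip
FROM alpine:3.14
RUN apk add --no-cache py3-pip
RUN pip3 install --no-cache-dir requests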
A: I've found the following:
$ apk add --update py3-pip
A: For python3 on alpine edge:
apk add py3-setuptools
A: You have to use appropriate pip version depending on Alpine branch:
* Alpine v3.12 or newer, use apk add --update py3-pip
* Alpine v3.5 - v3.11, use apk add --update py2-pip
* Alpine v3.3 - v3.4, use apk add --update py-pip
A: Do update first:
apk add --update py-pip
Or:
apk update
apk add py-pip
A: Alpine WSL 3.14.0 | The last commands are the solution; info from ircs://irc.oftc.net/alpine-linux
apk update
apk upgrade
apk add python2
python -m ensurepip --upgrade
Example
pip install -r requirements.txt
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
...
Successfully installed certifi-2021.10.8 chardet-4.0.0 idna-2.10 requests-2.26.0 urllib3-1.26.7
WARNING: You are using pip version 19.2.3, however version 20.3.4 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
A: Following command in the console should work with any linux distro:
python -m ensurepip --upgrade
Tested succesfully with Alpine v3.17 the VM edition. More details here.
A: This worked for me:
curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py && python get-pip.py
A: You need to modify your repository:
Modify the file /etc/apk/repositories
Add the repository community
e.g.:
/media/mmcblk0p1/apks
http://alpine.42.fr/v3.14/main
http://alpine.42.fr/v3.14/community
For me, the server used is http://alpine.42.fr, but you can use another server
Don't forget to commit your change if you want to have this configuration permanently
lbu commit -d
| stackoverflow | {
"language": "en",
"length": 348,
"provenance": "stackexchange_0000F.jsonl.gz:894208",
"question_score": "71",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633903"
} |
1bf2d52e22316af8a34ae43688f7f3f75e85cbc2 | Stackoverflow Stackexchange
Q: Add CMake project as subproject in CMake project in Qt Creator I have a directory layout like the following
projectA/
|-- CMakeLists.txt
|-- src/
|-- main.cpp
projectB/
|-- CMakeLists.txt
|-- src/
|-- file1.cpp
|-- file1.hpp
|-- file2.hpp
|-- main.cpp
|-- third_party/
|-- include
|-- lib1
I opened both projects successfully in Qt Creator (using Ctrl+O and opening the CMakeLists.txt file) and they are able to build and run independently.
I need to gain access to file1.Xpp and file2.hpp from projectA. Is there a way in Qt Creator to add projectB as a subproject in projectA? One might keep in mind that file1.Xpp and file2.hpp might depend on the third party library.
Using Ctrl+N -> Other Project -> Subdirs Project I can add a subproject, but only an empty one, if I'm not mistaken.
A: No, you can't add a CMake project as a subproject.
Subdirs projects are based on qmake.
| stackoverflow | {
"language": "en",
"length": 153,
"provenance": "stackexchange_0000F.jsonl.gz:894218",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633931"
} |
d74c6aa3bc5c2034251c78e87c41d32ab421fe64 | Stackoverflow Stackexchange
Q: Setting file permissions for created files in R I'm on Linux and want to share R output with co-workers while also allowing them to overwrite my files. However, when I write a file the permissions are set to read-only for the group, for example:
> write.csv(data.frame(a = 1:3), file = "/tmp/test.csv")
> file.mode("/tmp/test.csv")
[1] "644"
creates a file that is only writeable by myself. Is there some option I can set so that the files I write have permission 660 set automatically for all ways of writing files (write.csv, data.table, etc)?
A: The solution is to set the umask using Sys.umask as follows.
# Before setting umask as in the question:
> write.csv(data.frame(a = 1:3), file = "/tmp/test.csv")
> file.mode("/tmp/test.csv")
[1] "644"
# Setting the umask results in success:
Sys.umask("006")
> write.csv(data.frame(a = 1:3), file = "/tmp/test2.csv")
> file.mode("/tmp/test2.csv")
[1] "660"
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:894220",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633935"
} |
aab4bcb0a6e3eb7507901ff373bd0ba73f3635ae | Stackoverflow Stackexchange
Q: How to get client IP address in Azure Functions node.js? I have an Azure function written in node.js. How can I retrieve the IP address of a client that called the function?
What I've found so far:
* An answer to the same question, but using C#.
* It is possible to read it from headers:
module.exports = function (context, req) {
var ip = req.headers['x-forwarded-for']
}
Is it reliable to get the ip this way, since it can be easily changed on the way to the function?
A: Yes it is reliable, because Azure web server will overwrite x-forwarded-for as it knows it is forwarding from load balancer.
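Note that x-forwarded-for can carry a comma-separated hop list and a port suffix, so some defensive parsing helps. A hedged sketch (the parsing rules are my assumption, not an official Azure API, and the port-stripping is IPv4-only):
module.exports = function (context, req) {
    // e.g. "203.0.113.7:49152, 10.0.0.1" -> take the first hop, drop the port
    var forwarded = req.headers['x-forwarded-for'] || '';
    var clientIp = forwarded.split(',')[0].trim().split(':')[0];

    context.res = { body: { ip: clientIp } };
    context.done();
};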
A: If you're using Express, consider setting app.set('trust proxy', true);
For more information, check the Express manual page on 'Express behind proxies'.
| stackoverflow | {
"language": "en",
"length": 130,
"provenance": "stackexchange_0000F.jsonl.gz:894231",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44633970"
} |
65f878ca6e5fa5b3ca8cc906f0f501ea896e3033 | Stackoverflow Stackexchange
Q: Why does == and equals produce different results? Executing the following code:
inline fun <reified R> foobar() {
println(R::class == Double::class)
println(R::class.equals(Double::class))
}
fun main(args: Array<String>) {
foobar<Double>()
}
Produces the following output:
false
true
Why is there a difference between == and equals in this case? IntelliJ itself is suggesting that I replace the equals call with ==. Also, I could have sworn this code using == was working in the past.
Using kotlin version 1.1.0-rc91
A: This behavior is a known issue in code generation for class tokens of reified type parameters, it's tracked here: KT-17748.
| stackoverflow | {
"language": "en",
"length": 99,
"provenance": "stackexchange_0000F.jsonl.gz:894252",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634016"
} |
f21cc647626ff35bf94585549c78c7d357e2f13c | Stackoverflow Stackexchange
Q: How to specify Dockerrun.aws.json for AWS using Terraform I am attempting to host a Docker application with AWS via Elastic Beanstalk. When going through manual creation of an environment I am given the option to run a sample application in the environment, upload my own, or pull an application off of s3. By uploading a Dockerrun.aws.json file with all the necessary configuration the environment is able to pull and run my Docker image.
Now I am using Terraform to programmatically create and configure these environments. However, upon creation they all run the sample application, which in turn causes problems when I attempt to manually upload the Dockerrun file to the environment.
What is the proper way to include the Dockerrun information in the Terraform configuration so my application can deploy without a hitch?
A: You should use an S3 bucket to store the Dockerrun.aws.json and set up a Beanstalk application version.
Something like:
resource "aws_elastic_beanstalk_application_version" "latest" {
name = "latest"
application = "your_app"
bucket = "your_bucket"
key = "Dockerrun.aws.json"
}
Then add to your Beanstalk environment:
version_label = "${aws_elastic_beanstalk_application_version.latest.name}"
Of course, it is better to use references instead of hardcoding names.
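A hedged sketch of the full wiring, in the interpolation style of the answer above; the bucket, application, environment, and solution stack names are placeholders:
resource "aws_s3_bucket_object" "dockerrun" {
  bucket = "your_bucket"
  key    = "Dockerrun.aws.json"
  source = "Dockerrun.aws.json" # local file uploaded on apply
}

resource "aws_elastic_beanstalk_application_version" "latest" {
  name        = "latest"
  application = "your_app"
  bucket      = "your_bucket"
  key         = "${aws_s3_bucket_object.dockerrun.id}"
}

resource "aws_elastic_beanstalk_environment" "env" {
  name                = "your-env"
  application         = "your_app"
  solution_stack_name = "64bit Amazon Linux 2017.03 v2.7.0 running Docker 17.03.1-ce" # placeholder stack name
  version_label       = "${aws_elastic_beanstalk_application_version.latest.name}"
}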
| stackoverflow | {
"language": "en",
"length": 191,
"provenance": "stackexchange_0000F.jsonl.gz:894272",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634079"
} |
91b8cd91d960d96bf52dca3736a9f06e5dfe34f6 | Stackoverflow Stackexchange
Q: Lost control of PWA due to aggressive service worker I was very cautious about adding a service worker to my PWA that would cache all my files. I tried to implement a system that would always call the server to get a "version" file so that when that "version" file updated, the cache would be cleared.
However, something didn't work correctly, and now the clients no longer call the server at all, since they have the files they need. This is perfect for offline use! But those clients will never call the server again so when I update the site to fix the problem (which I have done), they do not get the update!
Any suggestions on how I can connect with those clients again?
A: The easiest thing for you to do is deploy a change to your service worker code. In that version clear your cache and remove the buggy code.
Don't worry this happens a lot when you start working with service worker caching. :)
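A hedged sketch of what that "cache-clearing" replacement worker might look like; any byte-level change to the service worker file is enough to trigger the browser's update check:
// new service-worker.js deployed to the same URL as the buggy one
self.addEventListener('install', function (event) {
  self.skipWaiting(); // activate without waiting for old tabs to close
});

self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys()
      .then(function (keys) {
        return Promise.all(keys.map(function (key) { return caches.delete(key); }));
      })
      .then(function () { return self.clients.claim(); }) // take control of open pages immediately
  );
});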
| stackoverflow | {
"language": "en",
"length": 169,
"provenance": "stackexchange_0000F.jsonl.gz:894328",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634239"
} |
d7c673cdc193bedd6438dfb89d5bbfd3bcc4670e | Stackoverflow Stackexchange
Q: Failed to write all bytes for _bisect.so I was restoring a MongoDB environment, and it failed because there was no space left on disk.
After that I cannot execute any docker-compose command; on each attempt this error message is displayed:
Failed to write all bytes for _bisect.so
I found some references about freeing space in /tmp, but I want to be sure that is the best solution.
A: Remove the docker images:
docker rmi $(docker images -f dangling=true -q)
UPDATE:
you can now use prune
docker system prune -af
https://docs.docker.com/engine/reference/commandline/system_prune/
A: Check df.
Normally you will find 100% usage for /var/lib/docker and 100% for /.
Try to free some space; maybe stop the syslog service.
Then remove and restart your containers.
Recheck df: now /var/lib/docker should be around 15%.
A: During a docker-compose command, I got a similar error ("Failed to write all bytes for _ctypes.pyd") because my drive had no space left on it.
| stackoverflow | {
"language": "en",
"length": 154,
"provenance": "stackexchange_0000F.jsonl.gz:894363",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634346"
} |
17e1a74721a0a3740ab3c6a2d713f659214160cc | Stackoverflow Stackexchange
Q: Secure way to realloc I'm writing a C library which needs to often move around various sensitive data. I want to have the benefits of realloc (extending the allocated block instead of copying when memory is available) while having some way to erase the content of the old block if copying is necessary.
Is there some lightweight implementation of malloc/realloc/free which could be used for mingw-gcc or some other trick to it, or I must overallocate and just allocate-and-copy without relying on realloc?
A: On Linux, mmap the block, mlock it, and then do mremap instead of using realloc.
Protecting against hidden copies isn't enough. You also need to make sure the memory never ever gets swapped to disk before you get a chance to zero it.
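A minimal hedged C sketch of that approach (Linux-only; error handling is reduced to asserts, and the final zero-then-munmap teardown step is left out):
#define _GNU_SOURCE
#include <assert.h>
#include <sys/mman.h>

static void *secure_alloc(size_t len) {
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(p != MAP_FAILED);
    assert(mlock(p, len) == 0); /* keep the pages out of swap */
    return p;
}

static void *secure_grow(void *p, size_t old_len, size_t new_len) {
    /* MREMAP_MAYMOVE lets the kernel move the mapping by remapping page
       tables, so no stale user-space copy of the data is left behind */
    void *q = mremap(p, old_len, new_len, MREMAP_MAYMOVE);
    assert(q != MAP_FAILED);
    assert(mlock(q, new_len) == 0);
    return q;
}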
| stackoverflow | {
"language": "en",
"length": 123,
"provenance": "stackexchange_0000F.jsonl.gz:894373",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634380"
} |
5cc45dbbcbaf9a98ce730f223ecbee6f2ebbf986 | Stackoverflow Stackexchange
Q: Base64 encoding from a website and terminal give two different results I used the following command in the terminal:
echo admin:admin | base64
It gives me the following output:
YWRtaW46YWRtaW4K
But when I used https://www.base64encode.org/ for the same string admin:admin, it gives me the following:
YWRtaW46YWRtaW4=
Any reason for this?
A: The reason for this behaviour is the newline added by the echo command. echo normally adds a newline at the end, which leads to a different encoding. If you use echo -n admin:admin | base64 instead, the difference won't occur.
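The two behaviours side by side; printf is another common way to avoid the trailing newline:
$ echo admin:admin | base64        # trailing newline included in the input
YWRtaW46YWRtaW4K
$ echo -n admin:admin | base64     # -n suppresses the newline
YWRtaW46YWRtaW4=
$ printf 'admin:admin' | base64    # printf adds no newline by default
YWRtaW46YWRtaW4=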
| stackoverflow | {
"language": "en",
"length": 93,
"provenance": "stackexchange_0000F.jsonl.gz:894374",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634382"
} |
e1a9ca47aa7508f377a89a9ab1ad624e7877158f | Stackoverflow Stackexchange
Q: Is there a way to tell Blue Ocean that build descriptions are html? We're generally switching from using Freestyle to pipeline projects, and as part of that are using Blue Ocean on a regular basis. For the classic view, we generate html job descriptions that point back, for example, to merge requests in the gitlab server we use. The same build descriptions on Blue Ocean are treated as text, and are virtually useless.
Is there some way of telling Blue Ocean to treat the build descriptions as html or similar?
A: It does not seem to be possible at the moment.
See JENKINS-45719
| stackoverflow | {
"language": "en",
"length": 104,
"provenance": "stackexchange_0000F.jsonl.gz:894438",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634619"
} |
4fdce76279c4ea90c0b73687d8c2590566b6cd12 | Stackoverflow Stackexchange
Q: Android Studio: can no longer see dependencies javadoc Android Studio v3.0 Canary 4,
but happens for me on Android Studio v2.3.3 too
I'm not sure since when this started to happen, what version or configuration I've made (if any), but I can no longer see libraries/dependencies javadoc:
for instance, RxJava:
Even though, Android SDK does show the javadocs correctly:
This is true for all the libs I have. I looked everywhere on the net for this issue, and it seems like I'm the only one.
A: As a workaround, you can manually specify JavaDoc.
Navigate to jar file, select "Library Properties", click "Specify Documentation URL".
For RxJava 2.x: http://reactivex.io/RxJava/2.x/javadoc/
For RxJava 1.x: http://reactivex.io/RxJava/1.x/javadoc/
Detailed how-to can be found here
A: It's a bug. Allegedly it was fixed last month, but from what I've seen some extra monkeying is needed to get the correct behavior back - I downloaded the latest Android O artifacts from the SDK Manager, then temporarily changed all build.gradle settings to support Android O, then switched back, and now I see the javadoc again even without supporting O. Here's the bug report: https://issuetracker.google.com/issues/37140074
A: What you can do as a workaround is to inspect a class in the dependency, and it'll give you the decompiled byte code with an option to "Download...". Click that, source is downloaded, and now you have javadocs. You have to do this for each dependency, but it persists through cleans and resyncs.
| stackoverflow | {
"language": "en",
"length": 240,
"provenance": "stackexchange_0000F.jsonl.gz:894483",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634764"
} |
4d3ff247312f600d0f7b9d35379893793b3f045a | Stackoverflow Stackexchange
Q: Get substring before last occurrence of a word in shell script I have a string /abc/xyz/def/xyz/1234/lmn/xyz/7890/uvw in Linux.
I want to extract the substring before the last occurrence of the string xyz using shell script.
Example:
Input:
/abc/xyz/def/xyz/1234/lmn/xyz/7890/uvw
Output:
/abc/xyz/def/xyz/1234/lmn
I searched online and there are solutions with single character separator, but I couldn't figure out how to get it working with a string separator like xyz.
A: How about this:
$ x=/abc/xyz/def/xyz/1234/lmn/xyz/7890/uvw
$ echo ${x%xyz*}
/abc/xyz/def/xyz/1234/lmn/
$ echo ${x%/xyz*}
/abc/xyz/def/xyz/1234/lmn
If you really don't want the / before the last xyz, then the second echo should be what you're looking for; if leaving the trailing / is acceptable, the first does that.
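Related: a single % removes the shortest matching suffix, while %% removes the longest, which here strips everything from the first xyz onwards:
$ echo ${x%%/xyz*}
/abc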
| stackoverflow | {
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:894502",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634814"
} |
af22e6488c94c76e2cd6e991df0a49e31139dc3a | Stackoverflow Stackexchange
Q: How can you read what type of Storage is 'this' in JavaScript? I want to know how you can read what type of the Storage Object is "this"?
Let's say you got this function:
Storage.prototype.typeOf=function(){return this;}
Now you will see the data in sessionStorage or localStorage. But how do you get this information in the JS code? I tried:
Storage.prototype.typeOf=function(){
var x=this;
alert(this)
}
It returns just [object Storage], but this obviously isn't what I was searching for.
I looked at the available methods of the Storage types but none returned the real type. Is there a method for getting this information?
A: Since there's only two types of Storage objects, you could just check for them explicitly.
Storage.prototype.typeOf = function() {
if (this === window.localStorage) {
return 'localStorage';
}
return 'sessionStorage';
};
console.log(localStorage.typeOf()); // 'localStorage'
console.log(sessionStorage.typeOf()); // 'sessionStorage'
Since each of these are just special instances of the Storage object, there's not a general way of determining what variable each instance has been assigned to.
A: Unfortunately, Storage objects do not expose any properties that can be used to distinguish whether they provide local or session storage. I just read through the HTML storage specification and much of the source code used to implement it in Google Chrome to confirm this.
Your only option is to compare the identity of the Storage objects with their global definitions. You may want to just do this directly, and not bother wrapping it in a method.
if (someStorage === window.localStorage) {
// ...
} else if (someStorage === window.sessionStorage) {
// ...
}
| stackoverflow | {
"language": "en",
"length": 260,
"provenance": "stackexchange_0000F.jsonl.gz:894531",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634913"
} |
23913c5bca33c78eb886fccf075aeebf9d50db0a | Stackoverflow Stackexchange
Q: Debounce @HostListener event I'm implementing a simple infinite-scroll directive in Angular2.
I'm using @HostListener('window:scroll') to get the scroll event and parsing the data from the $target.
The problem is that for every scroll event, everything will be checked once again needlessly.
I checked the ionic infinite-scroll directive for inspiration but they don't use @HostListener, they need a more granular control, I guess.
I ended up on this issue while searching https://github.com/angular/angular/issues/13248 but couldn't find any way to do what I want.
I think if I create an Observable, subscribe to it with debounce, and push (next) items to it, I will reach the behaviour I want, but I haven't been able to do that.
A: I would leverage a debounce method decorator like:
export function debounce(delay: number = 300): MethodDecorator {
return function (target: any, propertyKey: string | symbol, descriptor: PropertyDescriptor) {
const timeoutKey = Symbol();
const original = descriptor.value;
descriptor.value = function (...args) {
clearTimeout(this[timeoutKey]);
this[timeoutKey] = setTimeout(() => original.apply(this, args), delay);
};
return descriptor;
};
}
and use it as follows:
@HostListener('window:scroll', ['$event'])
@debounce()
scroll(event) {
...
}
Ng-run Example
| Q: Debounce @HostListener event I'm implementing a simple infinite-scroll directive in Angular2.
I'm using @HostListener('window:scroll') to get the scroll event and parsing the data from the $target.
The problem is that for every scroll event, everything will be checked once again needlessly.
I checked the ionic infinite-scroll directive for inspiration but they don't use @HostListener, they need a more granular control, I guess.
I ended up on this issue while searching https://github.com/angular/angular/issues/13248 but couldn't find any way to do what I want.
I think if I create an Observable, subscribe to it with debounce, and push (next) items to it, I will reach the behaviour I want, but I haven't been able to do that.
A: I would leverage a debounce method decorator like:
export function debounce(delay: number = 300): MethodDecorator {
return function (target: any, propertyKey: string | symbol, descriptor: PropertyDescriptor) {
const timeoutKey = Symbol();
const original = descriptor.value;
descriptor.value = function (...args) {
clearTimeout(this[timeoutKey]);
this[timeoutKey] = setTimeout(() => original.apply(this, args), delay);
};
return descriptor;
};
}
and use it as follows:
@HostListener('window:scroll', ['$event'])
@debounce()
scroll(event) {
...
}
Ng-run Example
A: An RXJS way of doing this can be achieved using fromEvent together with the throttleTime operator.
Instead of decorating your event handler with @HostListener, you create an observable from the event using fromEvent (e.g., in the ngOnInit method) and then throttling the emission of events using throttleTime.
...
import {fromEvent, Subscription} from 'rxjs';
import {tap, throttleTime} from 'rxjs/operators';
export class MyComponent implements OnInit, OnDestroy {
private eventSub: Subscription;
ngOnInit() {
this.eventSub = fromEvent(window, 'scroll').pipe(
throttleTime(300), // emits once, then ignores subsequent emissions for 300ms, repeat...
tap(event => this.scroll(event))
).subscribe();
}
scroll(event) {
...
}
ngOnDestroy() {
this.eventSub.unsubscribe(); // don't forget to unsubscribe
}
}
One advantage of using RXJS is that you can pass in custom schedulers to the throttleTime operator to achieve different behaviours. For example, you can throttle event emission by the animation frame rate (e.g., to throttle the emission of touch events).
import {animationFrameScheduler, ...} from 'rxjs';
...
this.eventSub = fromEvent(window, 'touchmove').pipe(
throttleTime(0, animationFrameScheduler),
tap(event => ...)
).subscribe();
A: I really like @yurzui's solution and I updated a lot of code to use it. However, I think it contains a bug. In the original code, there is only one timeout per class but in practice one is needed per instance.
In Angular terms, this means that if the component in which @debounce() is used is instantiated multiple times in a container, every instantiation will clear the previous instantiation's pending timeout and only the last will fire.
I propose this slight variant to eliminate this trouble:
export function debounce(delay: number = 300): MethodDecorator {
return function (target: any, propertyKey: string, descriptor: PropertyDescriptor) {
const original = descriptor.value;
const key = `__timeout__${propertyKey}`;
descriptor.value = function (...args) {
clearTimeout(this[key]);
this[key] = setTimeout(() => original.apply(this, args), delay);
};
return descriptor;
};
}
Of course, it is possible to be more sophisticated about disambiguating the synthetic __timeout__ property.
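For instance, one such variant (a sketch, assuming a browser/Node timer API) keeps the pending timer in a WeakMap keyed by instance, so no synthetic property is written to the object at all:
export function debounce(delay: number = 300): MethodDecorator {
    return function (target: any, propertyKey: string, descriptor: PropertyDescriptor) {
        const original = descriptor.value;
        // One pending timer per instance, stored outside the instance itself
        const timers = new WeakMap<object, ReturnType<typeof setTimeout>>();
        descriptor.value = function (...args: any[]) {
            clearTimeout(timers.get(this));
            timers.set(this, setTimeout(() => original.apply(this, args), delay));
        };
        return descriptor;
    };
}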
| stackoverflow | {
"language": "en",
"length": 488,
"provenance": "stackexchange_0000F.jsonl.gz:894557",
"question_score": "38",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44634992"
} |
f2d42f9925ba462127161aef46a070420be93211 | Stackoverflow Stackexchange
Q: How to revert changes in Pycharm I know that Pycharm autosaves changes.
I want to know if it's possible to revert changes back to an old version of the file given a point in time. So is it possible to revert to, say, the 8:00 AM file?
A: You can use local history for this.
Right click on the file you want to revert, click Local History, then Show History. It's going to open a window with your current code versus a previous version of your code, and a side panel with the stored records.
| Q: How to revert changes in Pycharm I know that Pycharm autosaves changes.
I want to know if it's possible to revert changes back to an old version of the file given a point in time. So is it possible to revert to, say, the 8:00 AM file?
A: You can use local history for this.
Right click on the file you want to revert, click Local History, then Show History. It's going to open a window with your current code versus a previous version of your code, and a side panel with the stored records.
| stackoverflow | {
"language": "en",
"length": 91,
"provenance": "stackexchange_0000F.jsonl.gz:894585",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635062"
} |
4706cfc0395ee7c06eb80456063edaec5c2e645f | Stackoverflow Stackexchange
Q: Is there a specification for the PCD file format? Is there an official specification for the point cloud data (PCD) format? Or is it rather only intended for PCL-internal use? The only information I found about it is this which kind of looks like a specification, but it doesn't contain all the information needed.
I might want to write a PCD loader library (independent of PCL), but maybe this is discouraged?
A: Naturally, PCD files are mostly used in PCL-enabled applications written in C++. However, there are loaders implemented in other languages, for example Python and JavaScript, so it is definitely not just a PCL-internal format.
To my knowledge, there is no official specification besides the one you already linked. And indeed, it is incomplete, for instance the binary_compressed data storage format is not mentioned at all. I would suggest to use the PCL implementation (which is fairly stable) as a reference and resolve any ambiguities in the linked document by checking how the code works.
| Q: Is there a specification for the PCD file format? Is there an official specification for the point cloud data (PCD) format? Or is it rather only intended for PCL-internal use? The only information I found about it is this which kind of looks like a specification, but it doesn't contain all the information needed.
I might want to write a PCD loader library (independent of PCL), but maybe this is discouraged?
A: Naturally, PCD files are mostly used in PCL-enabled applications written in C++. However, there are loaders implemented in other languages, for example Python and JavaScript, so it is definitely not just a PCL-internal format.
To my knowledge, there is no official specification besides the one you already linked. And indeed, it is incomplete, for instance the binary_compressed data storage format is not mentioned at all. I would suggest to use the PCL implementation (which is fairly stable) as a reference and resolve any ambiguities in the linked document by checking how the code works.
| stackoverflow | {
"language": "en",
"length": 167,
"provenance": "stackexchange_0000F.jsonl.gz:894612",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635130"
} |
5f4962792db128cb4e3d844c20a3eb30c0b97480 | Stackoverflow Stackexchange
Q: ASP.NET Core pass antiforgerytoken in json post request body Well, I have a form rendered via the form tagHelper, so it includes a special hidden input for the anti-forgery token.
and I'm trying to send following ajax request:
var data = JSON.stringify(feedbackForm.serializeArray().reduce((res, item) => {
res[item.name] = item.value;
return res; }, {}));
// data example: '{"Description":"some description", "__RequestVerificationToken":"CfDJ8F9f8kTKlVNEsnTxejQIJ__pRCl2CuZTQDVAY2216J7GgHWGDC0XUMPc0FKHpr_K5uhz8Kx0VeHDkIPdQ3V0Xur9oLE2u_bpfXuVss6AWX3BVh0WbwfQriaibOrf_yvEuIYZV-jHU_G-AHPD91cKz_QE7MVmeLVgTum80yTb8biGctMtJcU67Wp7ZgN86yMuew"}'`
$.ajax({
type: "POST",
url: '@Url.Action("Feedback", "Profile", new {Area = ""})',
contentType: "application/json; charset=utf-8",
data: data,
dataType: "json"
});
to controller action which looks like that:
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Feedback([FromBody]FeedbackViewModel vm)
{
...
}
So the post data includes the antiforgery token key; however, the request still fails antiforgery validation with an error. If I remove the antiforgery validation attribute from the controller, it works perfectly.
Why doesn't it check the token inside the request body? Is that by design, or is it some kind of issue?
A: Can you try implementing it like the following?
// build the payload object first, add the token, then stringify it
var payload = feedbackForm.serializeArray().reduce((res, item) => {
    res[item.name] = item.value;
    return res; }, {});
payload["__RequestVerificationToken"] = $('[name=__RequestVerificationToken]').val();
var data = JSON.stringify(payload);
$.ajax({
url: '@Url.Action("Feedback", "Profile", new {Area = ""})',
contentType: "application/json",
type: 'POST',
context: document.body,
data: data,
success: function() { refresh(); }
});
| Q: ASP.NET Core pass antiforgerytoken in json post request body Well, I have a form rendered via the form tagHelper, so it includes a special hidden input for the anti-forgery token.
and I'm trying to send following ajax request:
var data = JSON.stringify(feedbackForm.serializeArray().reduce((res, item) => {
res[item.name] = item.value;
return res; }, {}));
// data example: '{"Description":"some description", "__RequestVerificationToken":"CfDJ8F9f8kTKlVNEsnTxejQIJ__pRCl2CuZTQDVAY2216J7GgHWGDC0XUMPc0FKHpr_K5uhz8Kx0VeHDkIPdQ3V0Xur9oLE2u_bpfXuVss6AWX3BVh0WbwfQriaibOrf_yvEuIYZV-jHU_G-AHPD91cKz_QE7MVmeLVgTum80yTb8biGctMtJcU67Wp7ZgN86yMuew"}'`
$.ajax({
type: "POST",
url: '@Url.Action("Feedback", "Profile", new {Area = ""})',
contentType: "application/json; charset=utf-8",
data: data,
dataType: "json"
});
to controller action which looks like that:
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Feedback([FromBody]FeedbackViewModel vm)
{
...
}
So the post data includes the antiforgery token key; however, the request still fails antiforgery validation with an error. If I remove the antiforgery validation attribute from the controller, it works perfectly.
Why doesn't it check the token inside the request body? Is that by design, or is it some kind of issue?
A: Can you try implementing it like the following?
// build the payload object first, add the token, then stringify it
var payload = feedbackForm.serializeArray().reduce((res, item) => {
    res[item.name] = item.value;
    return res; }, {});
payload["__RequestVerificationToken"] = $('[name=__RequestVerificationToken]').val();
var data = JSON.stringify(payload);
$.ajax({
url: '@Url.Action("Feedback", "Profile", new {Area = ""})',
contentType: "application/json",
type: 'POST',
context: document.body,
data: data,
success: function() { refresh(); }
});
A: You can pass "headers" like below.
var data = JSON.stringify(feedbackForm.serializeArray().reduce((res, item) => {res[item.name] = item.value;return res; }, {}));
$.ajax({
url: '@Url.Action("Feedback", "Profile", new {Area = ""})',
type: "POST",
dataType: "json",
headers: {"__RequestVerificationToken":$('[name=__RequestVerificationToken]').val()},
contentType: "application/json; charset=utf-8",
data: data});
Refer: https://api.jquery.com/jQuery.ajax/
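Note that header-based validation only succeeds if the antiforgery service is configured to read that header name. A minimal server-side sketch (the header name here is an assumption and must mirror whatever the ajax call sends; by default ASP.NET Core looks for a differently named header):
services.AddAntiforgery(options =>
{
    // must match the header name used in the $.ajax call above
    options.HeaderName = "__RequestVerificationToken";
});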
| stackoverflow | {
"language": "en",
"length": 227,
"provenance": "stackexchange_0000F.jsonl.gz:894643",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635207"
} |
8b07ce84b95ab6e079949ed56cba174a5ee70a31 | Stackoverflow Stackexchange
Q: Microsoft.AspNetCore.NodeServices: Failed to start node process I'm using Microsoft.AspNetCore.NodeServices 1.1.1 in my ASP.Net Core application. Everything has been working fine, but now I'm on a new computer and I get the following error:
System.InvalidOperationException:
Failed to start Node process. To resolve this:.
[1] Ensure that Node.js is installed and can be found in one of the PATH directories.
Current PATH environment variable is: ....
Make sure the Node executable is in one of those directories, or update your PATH.
[2] See the InnerException for further details of the cause.
I have removed the path variables from this question, but the directory where Node is installed is listed in there.
node -v in a terminal gives me v6.11.0 so it is added to the path.
Nothing in the code has changed since it last worked, only my computer. Does anyone know what could be wrong?
A: After debugging I found out that it was due to a missing folder.
This is how NodeServices was configured in Startup.cs:
services.AddNodeServices(options =>
{
    options.ProjectPath = @"Path\That\Doesnt\Exist"; // verbatim string so the backslashes aren't treated as escape sequences
});
Once I added that path, everything runs okay.
| Q: Microsoft.AspNetCore.NodeServices: Failed to start node process I'm using Microsoft.AspNetCore.NodeServices 1.1.1 in my ASP.Net Core application. Everything has been working fine, but now I'm on a new computer and I get the following error:
System.InvalidOperationException:
Failed to start Node process. To resolve this:.
[1] Ensure that Node.js is installed and can be found in one of the PATH directories.
Current PATH environment variable is: ....
Make sure the Node executable is in one of those directories, or update your PATH.
[2] See the InnerException for further details of the cause.
I have removed the path variables from this question, but the directory where Node is installed is listed in there.
node -v in a terminal gives me v6.11.0 so it is added to the path.
Nothing in the code has changed since it last worked, only my computer. Does anyone know what could be wrong?
A: After debugging I found out that it was due to a missing folder.
This is how NodeServices was configured in Startup.cs:
services.AddNodeServices(options =>
{
    options.ProjectPath = @"Path\That\Doesnt\Exist"; // verbatim string so the backslashes aren't treated as escape sequences
});
Once I added that path, everything runs okay.
A: You can use this code snippet to get a client project
services.AddNodeServices(options =>
{
    options.ProjectPath = Path.Combine(Directory.GetCurrentDirectory(), "ClientApp");
});
A: For me, the error appeared after upgrading my website from .NET Core 2.2 to 3.0.
The upgrade altered my web.config file, in particular this part:
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath=".\MyWebsite.exe" arguments="" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
became this:
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" hostingModel="inprocess">
<environmentVariables>
<environmentVariable name="COMPLUS_ForceENC" value="1" />
<environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Development" />
</environmentVariables>
</aspNetCore>
I fixed the issue by setting processPath and arguments back their previous values, and completely removed the <environmentVariables> section.
| stackoverflow | {
"language": "en",
"length": 289,
"provenance": "stackexchange_0000F.jsonl.gz:894669",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635280"
} |
f32f59e9ca51c35c40c0c1f642d14a953d73db09 | Stackoverflow Stackexchange
Q: Is there an easy way to rename an AWS target group for ALB? I need to rename a target group that my ALB uses. I tried to go to the website to do it but it does not give me the option. I was hoping there is a way to do it from the command line. I googled but did not find a solution.
A: I was also unable to find a command to rename an Application Load Balancer Target Group. The closest was modify-target-group-attributes, but Name is not an attribute of a Target Group.
| Q: Is there an easy way to rename an AWS target group for ALB? I need to rename a target group that my ALB uses. I tried to go to the website to do it but it does not give me the option. I was hoping maybe there is away to do it by command line. I googled but did not find a solution.
A: I was also unable to find a command to rename an Application Load Balancer Target Group. The closest was modify-target-group-attributes, but Name is not an attribute of a Target Group.
| stackoverflow | {
"language": "en",
"length": 95,
"provenance": "stackexchange_0000F.jsonl.gz:894756",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635547"
} |
f58e058581f8d9e9f9cb449f3ac579010ff82709 | Stackoverflow Stackexchange
Q: React Native: How to use Java code to fetch data via React Native async storage
*
*Is it possible to fetch data from React Native Async-Storage through the Android-Java code?
*If it is possible, then how can I use Android-Java code to fetch the Async-Storage data (id, etc.)?
My data structure for RN Async Storage is: (id, fname, lname, isactive)
What I want to do:
I want to use Java code on Android to connect to RN Async Storage and get the values of the data, e.g. id, fname, lname, etc.
| Q: React Native: How to use Java code to fetch data via React Native async storage
*
*Is it possible to fetch data from React Native Async-Storage through the Android-Java code?
*If it is possible, then how can I use Android-Java code to fetch the Async-Storage data (id, etc.)?
My data structure for RN Async Storage is: (id, fname, lname, isactive)
What I want to do:
I want to use Java code on Android to connect to RN Async Storage and get the values of the data, e.g. id, fname, lname, etc.
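A minimal Java sketch, assuming the common implementation detail that AsyncStorage on Android is backed by a SQLite database file named "RKStorage" with a table "catalystLocalStorage" of (key, value) rows, where value is the stored JSON string; verify these names against your React Native version, and the "user"/"fname" names below are hypothetical:
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import org.json.JSONException;
import org.json.JSONObject;

public final class AsyncStorageReader {
    public static String readField(Context context, String key, String field) throws JSONException {
        SQLiteDatabase db = SQLiteDatabase.openDatabase(
                context.getDatabasePath("RKStorage").getPath(),
                null, SQLiteDatabase.OPEN_READONLY);
        try {
            Cursor cursor = db.rawQuery(
                    "SELECT value FROM catalystLocalStorage WHERE key = ?",
                    new String[]{key});
            try {
                if (cursor.moveToFirst()) {
                    // value holds the JSON string that AsyncStorage.setItem stored
                    return new JSONObject(cursor.getString(0)).optString(field);
                }
                return null;
            } finally {
                cursor.close();
            }
        } finally {
            db.close();
        }
    }
}
// Usage (hypothetical key/field names): AsyncStorageReader.readField(ctx, "user", "fname");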
| stackoverflow | {
"language": "en",
"length": 93,
"provenance": "stackexchange_0000F.jsonl.gz:894790",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635641"
} |
ce97aad03b451d6997bf6a47c613f39c9fa24ebf | Stackoverflow Stackexchange
Q: Change font and font size of CodeLens in Visual Studio Code I have enabled CodeLens for Visual Studio Code for TypeScript. The question is how I can change font and font size of it?
A: The commit has finally landed, see https://github.com/microsoft/vscode/issues/16038#issuecomment-724274684
The settings will be
editor.codeLensFontSize
editor.codeLensFontFamily
In v1.52. Or just search for codelens in the Settings UI.
| Q: Change font and font size of CodeLens in Visual Studio Code I have enabled CodeLens for Visual Studio Code for TypeScript. The question is how I can change font and font size of it?
A: The commit has finally landed, see https://github.com/microsoft/vscode/issues/16038#issuecomment-724274684
The settings will be
editor.codeLensFontSize
editor.codeLensFontFamily
In v1.52. Or just search for codelens in the Settings UI.
A: This is not possible as of VSCode 1.14
We are tracking a feature request for this here: https://github.com/Microsoft/vscode/issues/16038
| stackoverflow | {
"language": "en",
"length": 79,
"provenance": "stackexchange_0000F.jsonl.gz:894800",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635670"
} |
d5ab452b13ce85f92f4184b5cfddba31a3243784 | Stackoverflow Stackexchange
Q: Qt could not create directory I am new to the Qt platform. I am trying to run and build a project in Qt but I have stumbled upon a bunch of errors. I have found solutions to some of them, and others I did not, which leads me to ask you guys this question.
When I build/run my project it is giving me this error:
Could not create directory "C:\Users\name\Documents\Error in "
Util.asciify("build-untitled9-Android_for_armeabi_v7a_GCC_4_9_Qt_5_6_2-Debug")":
TypeError: Property 'asciify' of object
Core::Internal::UtilsJsExtension(0x34d22c8) is not a function" Error
while building/deploying project untitled9 (kit: Android for
armeabi-v7a (GCC 4.9, Qt 5.6.2)) When executing step "qmake"
Does anyone know how to solve the problem? Thanks
A: I met the error too!
You can try this method:
Tools -> Options -> Build&Run -> General
Then reset the "Default build directory".
My "Default build directory" is "../build-%{CurrentProject:Name}-%{CurrentKit:FileSystemName}-%{CurrentBuild:Name}"
Finally
Restart Qt Creator and create a new project so the setting takes effect.
| Q: Qt could not create directory I am new to the Qt platform. I am trying to run and build a project in Qt but I have stumbled upon a bunch of errors. I have found solutions to some of them, and others I did not, which leads me to ask you guys this question.
When I build/run my project it is giving me this error:
Could not create directory "C:\Users\name\Documents\Error in "
Util.asciify("build-untitled9-Android_for_armeabi_v7a_GCC_4_9_Qt_5_6_2-Debug")":
TypeError: Property 'asciify' of object
Core::Internal::UtilsJsExtension(0x34d22c8) is not a function" Error
while building/deploying project untitled9 (kit: Android for
armeabi-v7a (GCC 4.9, Qt 5.6.2)) When executing step "qmake"
Does anyone know how to solve the problem? Thanks
A: I met the error too!
You can try this method:
Tools -> Options -> Build&Run -> General
Then reset the "Default build directory".
My "Default build directory" is "../build-%{CurrentProject:Name}-%{CurrentKit:FileSystemName}-%{CurrentBuild:Name}"
Finally
Restart Qt Creator and create a new project so the setting takes effect.
A: I had the same problem and found the solution here; it solved my problem.
Please go to Tools > Options > Build & Run > General and click "Reset" next to "Default build directory".
| stackoverflow | {
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:894808",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635685"
} |
0f058d101da03d8ed6a77b5e1e137bdb44eb85c4 | Stackoverflow Stackexchange
Q: Spring 4 Join point to get method argument names and values I am using Spring 4.3. Is it possible to get method parameter names and values passed to it? I believe this can be done using AOP (before advice) if possible could you please give me a source code.
A: The following works as expected (Java 8 + Spring 5.0.4 + AspectJ 1.8.13):
@Aspect
@Component
public class SomeAspect {
@Around("@annotation(SomeAnnotation)")
public Object aroundAdvice(ProceedingJoinPoint joinPoint) throws Throwable {
CodeSignature codeSignature = (CodeSignature) joinPoint.getSignature();
System.out.println("First parameter's name: " + codeSignature.getParameterNames()[0]);
System.out.println("First argument's value: " + joinPoint.getArgs()[0]);
return joinPoint.proceed();
}
}
| Q: Spring 4 Join point to get method argument names and values I am using Spring 4.3. Is it possible to get method parameter names and values passed to it? I believe this can be done using AOP (before advice) if possible could you please give me a source code.
A: The following works as expected (Java 8 + Spring 5.0.4 + AspectJ 1.8.13):
@Aspect
@Component
public class SomeAspect {
@Around("@annotation(SomeAnnotation)")
public Object aroundAdvice(ProceedingJoinPoint joinPoint) throws Throwable {
CodeSignature codeSignature = (CodeSignature) joinPoint.getSignature();
System.out.println("First parameter's name: " + codeSignature.getParameterNames()[0]);
System.out.println("First argument's value: " + joinPoint.getArgs()[0]);
return joinPoint.proceed();
}
}
A: CodeSignature methodSignature = (CodeSignature) joinPoint.getSignature();
String[] sigParamNames = methodSignature.getParameterNames();
You can get method signature arguments names.
A: Unfortunately, you can't do this. It is a well-known limitation of bytecode - argument names can't be obtained using reflection, as they are not always stored in bytecode.
As workaround, you can add additional annotations like @ParamName(name = "paramName").
So that, you can get params names in the following way:
MethodSignature.getMethod().getParameterAnnotations()
UPDATE
Since Java 8 you can do this
You can obtain the names of the formal parameters of any method or constructor with the method java.lang.reflect.Executable.getParameters. (The classes Method and Constructor extend the class Executable and therefore inherit the method Executable.getParameters.) However, .class files do not store formal parameter names by default. This is because many tools that produce and consume class files may not expect the larger static and dynamic footprint of .class files that contain parameter names. In particular, these tools would have to handle larger .class files, and the Java Virtual Machine (JVM) would use more memory. In addition, some parameter names, such as secret or password, may expose information about security-sensitive methods.
To store formal parameter names in a particular .class file, and thus
enable the Reflection API to retrieve formal parameter names, compile
the source file with the -parameters option to the javac compiler.
https://docs.oracle.com/javase/tutorial/reflect/member/methodparameterreflection.html
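A minimal sketch of that reflection API (someClass/someMethod are placeholders; real names like "userId" only appear when the class was compiled with -parameters, otherwise you get synthesized arg0, arg1, ...):
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

Method method = someClass.getMethod("someMethod", String.class);
for (Parameter p : method.getParameters()) {
    // isNamePresent() tells you whether the real name survived compilation
    System.out.println(p.getName() + (p.isNamePresent() ? "" : " (synthesized)"));
}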
A: In your AOP advice you can use methods of the JoinPoint to get access to methods and their parameters. There are multiple examples online and at stackoverflow.
Get method arguments using spring aop?
For getting arguments: https://docs.jboss.org/jbossaop/docs/2.0.0.GA/docs/aspect-framework/apidocs/org/jboss/aop/joinpoint/MethodInvocation.html#getArguments()
For getting method details: https://docs.jboss.org/jbossaop/docs/2.0.0.GA/docs/aspect-framework/apidocs/org/jboss/aop/joinpoint/MethodInvocation.html#getMethod%28%29
| stackoverflow | {
"language": "en",
"length": 361,
"provenance": "stackexchange_0000F.jsonl.gz:894831",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635757"
} |
1b2db6610a3b9c8af468f2c05cd685800e8a7668 | Stackoverflow Stackexchange
Q: Including a date or other variable in the git commit message I need to include the date and time in all commit messages (this is for syncing with a project management tool). I currently have the alias:
alias commitDate = "date +%Y-%m-%d-%H-%M"
Is there anyway to include this or any other variables in the commit message?
A: You can specify the commit message on the command line directly via -m option. So if you want to commit new changes you could type:
git commit -m "Your message" -m "$(date +%Y-%m-%d-%H-%M)"
which would lead to the following commit message, as your shell substitutes the command's output:
Your message
*current date*
| Q: Including a date or other variable in the git commit message I need to include the date and time in all commit messages (this is for syncing with a project management tool). I currently have the alias:
alias commitDate = "date +%Y-%m-%d-%H-%M"
Is there anyway to include this or any other variables in the commit message?
A: You can specify the commit message on the command line directly via -m option. So if you want to commit new changes you could type:
git commit -m "Your message" -m "$(date +%Y-%m-%d-%H-%M)"
which would lead to the following commit message, as your shell substitutes the command's output:
Your message
*current date*
A: Using an environment variable:
$ export COMMIT_TITLE=$(date)
$ git add commitfile.txt
$ git commit -m "$COMMIT_TITLE"
[master b5a9354] Sun, May 10, 2020 9:38:12 AM
1 file changed, 1 insertion(+)
create mode 100644 commitfile.txt
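As a variation on the above, a small shell-function sketch (the function name gcommit is illustrative) that appends the timestamp automatically:
# commits with the given message plus a timestamp as a second message paragraph
gcommit() {
  git commit -m "$1" -m "$(date +%Y-%m-%d-%H-%M)"
}
gcommit "Your message"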
| stackoverflow | {
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:894860",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635861"
} |
c12dd518ce08e3734c3106bff8c29f93087da0ae | Stackoverflow Stackexchange
Q: KVM is required to run this AVD. Unknown Error! Please file a bug against Android Studio Operating System : CentOS Linux 7
Android Studios version : 2.3.3
Result of the command: lsmod | grep kvm
My computer supports virtualization, but when I try to start the emulator I get this error:
2017-06-19 19:11:58,120 [ 98282] INFO - figurations.GeneralCommandLine - Cannot run program "/home/folder/Android/Sdk/emulator/emulator-check": error=13, Permission denied
java.io.IOException: Cannot run program "/home/folder/Android/Sdk/emulator/emulator-check": error=13, Permission denied
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at com.intellij.execution.configurations.GeneralCommandLine.startProcess(GeneralCommandLine.java:368)
... more
2017-06-19 19:15:28,593 [ 308755] INFO - figurations.GeneralCommandLine - Cannot run program "/home/folder/Android/Sdk/emulator/emulator": error=13, Permission denied
java.io.IOException: Cannot run program "/home/folder/Android/Sdk/emulator/emulator": error=13, Permission denied
at java.awt.EventQueue.dispatchEvent(EventQueue.java:728)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: java.io.IOException: error=13, Permission denied
at java.lang.UNIXProcess.forkAndExec(Native Method)
... more
A: Changed permissions in the /home/folder/Android/Sdk/emulator/ folder
chmod 777 -R /home/folder/Android/Sdk/emulator/
| Q: KVM is required to run this AVD. Unknown Error! Please file a bug against Android Studio Operating System : CentOS Linux 7
Android Studios version : 2.3.3
Result of the command: lsmod | grep kvm
My computer supports virtualization, but when I try to start the emulator I get this error:
2017-06-19 19:11:58,120 [ 98282] INFO - figurations.GeneralCommandLine - Cannot run program "/home/folder/Android/Sdk/emulator/emulator-check": error=13, Permission denied
java.io.IOException: Cannot run program "/home/folder/Android/Sdk/emulator/emulator-check": error=13, Permission denied
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at com.intellij.execution.configurations.GeneralCommandLine.startProcess(GeneralCommandLine.java:368)
... more
2017-06-19 19:15:28,593 [ 308755] INFO - figurations.GeneralCommandLine - Cannot run program "/home/folder/Android/Sdk/emulator/emulator": error=13, Permission denied
java.io.IOException: Cannot run program "/home/folder/Android/Sdk/emulator/emulator": error=13, Permission denied
at java.awt.EventQueue.dispatchEvent(EventQueue.java:728)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: java.io.IOException: error=13, Permission denied
at java.lang.UNIXProcess.forkAndExec(Native Method)
... more
A: Changed permissions in the /home/folder/Android/Sdk/emulator/ folder
chmod 777 -R /home/folder/Android/Sdk/emulator/
A: I also had this problem, and was able to fix it by using:
sudo chmod 777 /dev/kvm
A: In my case the error was because the user did not have enough permission on /dev/kvm.
So the solution is to give permission, i.e.,
chmod 777 /dev/kvm
A: You need to be part of the kvm group
Try this:
sudo adduser $USER kvm
Then relaunch Android Studio, or log out and log back in.
| stackoverflow | {
"language": "en",
"length": 198,
"provenance": "stackexchange_0000F.jsonl.gz:894867",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635879"
} |
673c9dd990862648eee77edcff89f8aacff72d8c | Stackoverflow Stackexchange
Q: Docker : Pull images from Local git repo / hard drive I have a base docker image (Ubuntu) in a local Git repo. Now I want to build a docker image (with an application jar) by pulling the base image from Git.
As I understand "FROM ubuntu:latest" pulls the ubuntu image from Docker Hub.
However, I am behind firewall and could not access Docker hub.
Is there an option to pull the base image from the Local Git/hard drive and build my own docker image without reaching Docker hub.
Any sample script, material will be of great help. Thanks.
A: You need to get the resulting file onto the target machine somehow. This example is for the ubuntu:latest image. Use docker save:
docker save ubuntu:latest > /somewhere/ubuntu.latest.tar
But you can gzip it to reduce its size:
docker save ubuntu:latest | gzip > ubuntu.latest.tar.gz
Then, having that file, with docker load you can:
▶ docker load < /somewhere/ubuntu.latest.tar.gz
Loaded image: ubuntu:latest
| Q: Docker : Pull images from Local git repo / hard drive I have a base docker image (Ubuntu) in a local Git repo. Now I want to build a docker image (with an application jar) by pulling the base image from Git.
As I understand "FROM ubuntu:latest" pulls the ubuntu image from Docker Hub.
However, I am behind firewall and could not access Docker hub.
Is there an option to pull the base image from the Local Git/hard drive and build my own docker image without reaching Docker hub.
Any sample script, material will be of great help. Thanks.
A: You need to get the resulting file onto the target machine somehow. This example is for the ubuntu:latest image. Use docker save:
docker save ubuntu:latest > /somewhere/ubuntu.latest.tar
But you can gzip it to reduce its size:
docker save ubuntu:latest | gzip > ubuntu.latest.tar.gz
Then, having that file, with docker load you can:
▶ docker load < /somewhere/ubuntu.latest.tar.gz
Loaded image: ubuntu:latest
| stackoverflow | {
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:894883",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44635954"
} |
1a41a972faac347da1589fe56849056f58fe0fd3 | Stackoverflow Stackexchange
Q: Does Hibernate (HQL) Support Common Table Expression I have a query that looks like:
WITH SubQ AS
(SELECT elh.encntr_id, elh.location_cd
FROM encntr_loc_his elh
WHERE ...)
SELECT e.encntr_id
FROM encounter e
WHERE e.location_cd IN
(SELECT SubQ.location_cd
FROM...)
...
There are some other details in this query, and SubQ is used a lot. My question is, is it possible to put this query in HQL as a named query? When I try to do that and compile, it throws an error complaining about the token WITH:
Jun 19, 2017 10:38:58 AM org.hibernate.hql.internal.ast.ErrorCounter reportError
ERROR: line 1:1: unexpected token: WITH
Jun 19, 2017 10:38:58 AM org.hibernate.hql.internal.ast.ErrorCounter reportError
ERROR: line 1:1: unexpected token: WITH
line 1:1: unexpected token: WITH
A: Hibernate doesn't support common table expressions, but if you want to be able to reference your SubQ query so you don't have to repeat it, you could define it as a view on the database and then map a Hibernate entity to that view.
| Q: Does Hibernate (HQL) Support Common Table Expression I have a query that looks like:
WITH SubQ AS
(SELECT elh.encntr_id, elh.location_cd
FROM encntr_loc_his elh
WHERE ...)
SELECT e.encntr_id
FROM encounter e
WHERE e.location_cd IN
(SELECT SubQ.location_cd
FROM...)
...
There are some other details in this query, and SubQ is used a lot. My question is, is it possible to put this query in HQL as a named query? When I try to do that and compile, it throws an error complaining about the token WITH:
Jun 19, 2017 10:38:58 AM org.hibernate.hql.internal.ast.ErrorCounter reportError
ERROR: line 1:1: unexpected token: WITH
Jun 19, 2017 10:38:58 AM org.hibernate.hql.internal.ast.ErrorCounter reportError
ERROR: line 1:1: unexpected token: WITH
line 1:1: unexpected token: WITH
A: Hibernate doesn't support common table expressions, but if you want to be able to reference your SubQ query so you don't have to repeat it, you could define it as a view on the database and then map a Hibernate entity to that view.
A: There is no direct support, but I was able to run CTE using createNativeQuery API with MySQL 8.014 and Hibernate 5.2.16
EntityManager entityManager = _entityManagerFactory.createEntityManager();
Query q = entityManager.createNativeQuery(query, YourReturnTypePojo.class);
List<YourReturnTypePojo> a = q.getResultList(); // mapped to the result class, not Object[]
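A sketch of the view-mapping approach from the first answer; the view name, columns, and the choice of encntr_id as @Id are assumptions based on the query above:
// Assumes a database view created roughly as:
//   CREATE VIEW sub_q AS
//   SELECT elh.encntr_id, elh.location_cd FROM encntr_loc_his elh WHERE ...
@Entity
@Table(name = "sub_q")
public class SubQ {
    @Id
    @Column(name = "encntr_id")
    private Long encntrId;

    @Column(name = "location_cd")
    private String locationCd;

    // getters and setters omitted
}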
| stackoverflow | {
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:894925",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636070"
} |
b9f671be4d15a3c3d7836b0c6d1ac40dc438aee2 | Stackoverflow Stackexchange
Q: How to modify EXIF data in python I am trying to edit/modify existing metadata within Python 2.7. More specifically, I have GPS coordinates in my metadata; however, the altitude field is incorrect. Is there a way of changing this?
I have had a look at PIL piexif pyexif, but I cannot seem to find a way to modify existing fields.
Has anyone managed to do this? It sounds like it would be very simple, but I can't seem to work it out.
A: Late answer, but you can use GPSPhoto, i.e.:
from GPSPhoto import gpsphoto
photo = gpsphoto.GPSPhoto("photo.jpg")
# Create GPSInfo Data Object
# info = gpsphoto.GPSInfo((38.71615498471598, -9.148730635643007))
# info = gpsphoto.GPSInfo((38.71615498471598, -9.148730635643007), timeStamp='2018:12:25 01:59:05')'''
info = gpsphoto.GPSInfo((38.71615498471598, -9.148730635643007), alt=83, timeStamp='2018:12:25 01:59:05')
# Modify GPS Data
photo.modGPSData(info, 'new_photo.jpg')
Installation:
pip install GPSPhoto
| Q: How to modify EXIF data in python I am trying to edit/modify existing metadata within Python 2.7. More specifically, I have GPS coordinates in my metadata; however, the altitude field is incorrect. Is there a way of changing this?
I have had a look at PIL piexif pyexif, but I cannot seem to find a way to modify existing fields.
Has anyone managed to do this? It sounds like it would be very simple, but I can't seem to work it out.
A: Late answer, but you can use GPSPhoto, i.e.:
from GPSPhoto import gpsphoto
photo = gpsphoto.GPSPhoto("photo.jpg")
# Create GPSInfo Data Object
# info = gpsphoto.GPSInfo((38.71615498471598, -9.148730635643007))
# info = gpsphoto.GPSInfo((38.71615498471598, -9.148730635643007), timeStamp='2018:12:25 01:59:05')'''
info = gpsphoto.GPSInfo((38.71615498471598, -9.148730635643007), alt=83, timeStamp='2018:12:25 01:59:05')
# Modify GPS Data
photo.modGPSData(info, 'new_photo.jpg')
Installation:
pip install GPSPhoto
A: import piexif
from PIL import Image
img = Image.open(fname)
exif_dict = piexif.load(img.info['exif'])
altitude = exif_dict['GPS'][piexif.GPSIFD.GPSAltitude]
print(altitude)
(550, 1)  # some values are saved in a fractional (rational) format: this means 550 m; (51, 2) would be 25.5 m.
exif_dict['GPS'][piexif.GPSIFD.GPSAltitude] = (140, 1)
This sets the altitude to 140m
exif_bytes = piexif.dump(exif_dict)
img.save('_%s' % fname, "jpeg", exif=exif_bytes)
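Since altitude is stored as a (numerator, denominator) rational, a tiny helper sketch for building that tuple from metres:
from fractions import Fraction

def metres_to_rational(metres, max_denominator=100):
    # e.g. 25.5 -> (51, 2), 140 -> (140, 1)
    f = Fraction(metres).limit_denominator(max_denominator)
    return (f.numerator, f.denominator)

exif_dict['GPS'][piexif.GPSIFD.GPSAltitude] = metres_to_rational(140)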
| stackoverflow | {
"language": "en",
"length": 188,
"provenance": "stackexchange_0000F.jsonl.gz:894950",
"question_score": "27",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636152"
} |
5b09dd54419906464ccb016a498b55563c4622f9 | Stackoverflow Stackexchange
Q: React native link creates duplicated entries I don't know why react-native link is creating duplicated entries in MainApplication.java (at imports and in getPackages function) and in app\build.gradle the compile project entry is not being added, but if I run again the command, I receive the same message instead of that the module is already linked.
When I run react-native link, I receive the messages that the module has been linked successfully on Android (duplicated) and in iOS it was already linked.
A: Many users are encountering the issue on Android (me included). It is due to a difference between iOS and Android code-signing.
There is an opened (and recent) PR for this on the react-native project https://github.com/facebook/react-native/pull/18131 - hopefully it will be merged soon!
| Q: React native link creates duplicated entries I don't know why react-native link is creating duplicated entries in MainApplication.java (at imports and in getPackages function) and in app\build.gradle the compile project entry is not being added, but if I run again the command, I receive the same message instead of that the module is already linked.
When I run react-native link, I receive the messages that the module has been linked successfully on Android (duplicated) and in iOS it was already linked.
A: Many users are encountering the issue on Android (me included). It is due to a difference between iOS and Android code-signing.
There is an opened (and recent) PR for this on the react-native project https://github.com/facebook/react-native/pull/18131 - hopefully it will be merged soon!
A: Faced a similar issue when I tried to link a library to my code. On running the react-native link command, it displayed that the library has been linked successfully though the entries were never created. What I did was to manually modify these 3 files:
*
*android/settings.gradle: Add the module using include() and specify the path of your project directory.
include ':your_package_name'
project(':your_package_name').projectDir = new File(rootProject.projectDir,'../node_modules/your_package_name')
*android/app/build.gradle: Add the compile statement
compile project(':your_package_name')
*android/app/src/main/java/[..project_name..]/MainApplication.java: Import the package and make sure that the getPackages() is returning your package along with the previous ones.
Would have answered you better if you specified the name of the required library
A: In my case, It was related with RN version and local libraries.
I solved it by removing the duplicated libraries on Xcode.
Here is the link
Hope it give you some help.
| stackoverflow | {
"language": "en",
"length": 263,
"provenance": "stackexchange_0000F.jsonl.gz:894994",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636287"
} |
cdb760b713f9c1aaac3162acf54c0cce34529754 | Stackoverflow Stackexchange
Q: Python, invoke method returned by __getattribute__ If this question has a duplicate, sorry, I didn't find it; I will erase the question if someone does.
I have this simple python class:
class NothingSpecial:
@classmethod
def meth(cls):
print("hi!")
And trying to get the method in different ways, I did:
a = (object.__getattribute__(NothingSpecial, 'meth'))
b = (getattr(NothingSpecial, 'meth'))
The question is, if I do:
b()
$hi!
is returned, but when I do:
a()
TypeError: 'classmethod' object is not callable
How can I execute the a method?
A: You are bypassing the descriptor protocol, and you have an unbound class method.
The solution is to invoke the protocol, if there is a __get__ method present:
if hasattr(a, '__get__'):
a = a.__get__(None, NothingSpecial)
a()
Now the classmethod is bound to the class and it works again:
>>> a.__get__(None, NothingSpecial)
<bound method NothingSpecial.meth of <class '__main__.NothingSpecial'>>
>>> a.__get__(None, NothingSpecial)()
hi!
Alternatively, use the correct __getattribute__, one that actually knows how to apply the descriptor protocol to class attributes; classes do not use object.__getattribute__, but type.__getattribute__:
>>> type.__getattribute__(NothingSpecial, 'meth')
<bound method NothingSpecial.meth of <class '__main__.NothingSpecial'>>
You'd actually want to access type(NothingSpecial).__getattribute__ to allow metaclasses to override the implementation of __getattribute__ here.
| Q: Python, invoke method returned by __getattribute__ If this question has a duplicate, sorry, I didn't find it; I will erase the question if someone does.
I have this simple python class:
class NothingSpecial:
@classmethod
def meth(cls):
print("hi!")
And trying to get the method in different ways, I did:
a = (object.__getattribute__(NothingSpecial, 'meth'))
b = (getattr(NothingSpecial, 'meth'))
The question is, if I do:
b()
$hi!
is returned, but when I do:
a()
TypeError: 'classmethod' object is not callable
How can I execute the a method?
A: You are bypassing the descriptor protocol, and you have an unbound class method.
The solution is to invoke the protocol, if there is a __get__ method present:
if hasattr(a, '__get__'):
a = a.__get__(None, NothingSpecial)
a()
Now the classmethod is bound to the class and it works again:
>>> a.__get__(None, NothingSpecial)
<bound method NothingSpecial.meth of <class '__main__.NothingSpecial'>>
>>> a.__get__(None, NothingSpecial)()
hi!
Alternatively, use the correct __getattribute__, one that actually knows how to apply the descriptor protocol to class attributes; classes do not use object.__getattribute__, but type.__getattribute__:
>>> type.__getattribute__(NothingSpecial, 'meth')
<bound method NothingSpecial.meth of <class '__main__.NothingSpecial'>>
You'd actually want to access type(NothingSpecial).__getattribute__ to allow metaclasses to override the implementation of __getattribute__ here.
| stackoverflow | {
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:895014",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636368"
} |
4692ab8060fb77795683ef35daabc2d1fa94af36 | Stackoverflow Stackexchange
Q: Why is there no support for concatenating std::string and std::string_view? Since C++17, we have std::string_view, a light-weight view into a contiguous sequence of characters that avoids unnecessary copying of data. Instead of having a const std::string& parameter, it is now often recommended to use std::string_view.
However, one quickly finds out that switching from const std::string& to std::string_view breaks code that uses string concatenation as there is no support for concatenating std::string and std::string_view:
std::string{"abc"} + std::string_view{"def"}; // ill-formed (fails to compile)
std::string_view{"abc"} + std::string{"def"}; // ill-formed (fails to compile)
Why is there no support for concatenating std::string and std::string_view in the standard?
A: The reason for this is given in n3512 string_ref: a non-owning reference to a string, revision 2 by Jeffrey Yasskin:
I also omitted operator+(basic_string, basic_string_ref) because LLVM returns a lightweight object from this overload and only performs the concatenation lazily. If we define this overload, we'll have a hard time introducing that lightweight concatenation later.
It has been later suggested on the std-proposals mailing list to add these operator overloads to the standard.
| Q: Why is there no support for concatenating std::string and std::string_view? Since C++17, we have std::string_view, a light-weight view into a contiguous sequence of characters that avoids unnecessary copying of data. Instead of having a const std::string& parameter, it is now often recommended to use std::string_view.
However, one quickly finds out that switching from const std::string& to std::string_view breaks code that uses string concatenation as there is no support for concatenating std::string and std::string_view:
std::string{"abc"} + std::string_view{"def"}; // ill-formed (fails to compile)
std::string_view{"abc"} + std::string{"def"}; // ill-formed (fails to compile)
Why is there no support for concatenating std::string and std::string_view in the standard?
A: The reason for this is given in n3512 string_ref: a non-owning reference to a string, revision 2 by Jeffrey Yasskin:
I also omitted operator+(basic_string, basic_string_ref) because LLVM returns a lightweight object from this overload and only performs the concatenation lazily. If we define this overload, we'll have a hard time introducing that lightweight concatenation later.
It has been later suggested on the std-proposals mailing list to add these operator overloads to the standard.
A: I've submitted P2591: Concatenation of strings and string views, linking to this SO question. The paper at this point is targeted at C++26 minimum.
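Until such overloads are standardized, a small C++17 sketch of the usual workarounds (the member append and operator+= do accept anything convertible to string_view):
#include <string>
#include <string_view>

int main() {
    std::string s{"abc"};
    std::string_view sv{"def"};

    s += sv;                                        // member operator+= works
    s.append(sv);                                   // so does append
    std::string t = std::string{"abc"}.append(sv);  // one-expression concatenation
}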
| stackoverflow | {
"language": "en",
"length": 202,
"provenance": "stackexchange_0000F.jsonl.gz:895079",
"question_score": "121",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636549"
} |
75a11fcfef4e20f8520506653ca96a5213ef020d | Stackoverflow Stackexchange
Q: How to offset the center point in MapView How to offset the center point in MapView.
All these methods animate the map to the center position:
animateToRegion
animateToCoordinate
fitToElements
fitToSuppliedMarkers
Only this one fitToCoordinates allows to manipulate with offset position, but it doesn't work correctly with one coordinate.
How can I play with the offset using animateToRegion or another method?
Thx
A: The official docs recommend using mapPadding for that. It adds custom padding to each side of the map. Useful when map elements/markers are obscured.
For example:
mapPadding={{top: 100, left: 0, right: 0, bottom: 0}}
See: MapView Component API
| Q: How to offset the center point in MapView How to offset the center point in MapView.
All these methods animate the map to the center position:
animateToRegion
animateToCoordinate
fitToElements
fitToSuppliedMarkers
Only this one fitToCoordinates allows to manipulate with offset position, but it doesn't work correctly with one coordinate.
How can I play with the offset using animateToRegion or another method?
Thx
A: The official docs recommend using mapPadding for that. It adds custom padding to each side of the map. Useful when map elements/markers are obscured.
For example:
mapPadding={{top: 100, left: 0, right: 0, bottom: 0}}
See: MapView Component API
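A minimal JSX usage sketch (the region values are placeholders):
<MapView
  style={{ flex: 1 }}
  mapPadding={{ top: 100, left: 0, right: 0, bottom: 0 }}
  region={{ latitude: 37.78, longitude: -122.43, latitudeDelta: 0.05, longitudeDelta: 0.05 }}
/>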
| stackoverflow | {
"language": "en",
"length": 101,
"provenance": "stackexchange_0000F.jsonl.gz:895092",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636581"
} |
5e8a463e72f287a93874f38fe2b55907f9650179 | Stackoverflow Stackexchange
Q: Dagger @ContributesAndroidInjector ComponentProcessor was unable to process this interface I was testing a new feature of Dagger: the Android module. And I am not able to compile the code when I use @ContributesAndroidInjector.
I always get the following error:
Error:(12, 8) error: dagger.internal.codegen.ComponentProcessor was unable to process this interface because not all of its dependencies could be resolved. Check for compilation errors or a circular dependency with generated code.
I tried to implement my components like here, but still I got the error.
Here is the smallest example:
@PerApplication
@Component(modules = {AndroidInjectionModule.class, LoginBindingModule.class})
public interface ApplicationComponent {
void inject(ExampleApplication application);
}
@Module
public abstract class LoginBindingModule {
@ContributesAndroidInjector
abstract LoginActivity contributeYourActivityInjector();
}
public class LoginActivity extends Activity {
@Inject
LoginPresenter loginPresenter;
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
AndroidInjection.inject(this);
super.onCreate(savedInstanceState);
}
}
public class LoginPresenter {
@Inject
public LoginPresenter() {
}
}
If I remove LoginBindingModule from ApplicationComponent, the app builds, but fails with a runtime exception:
java.lang.IllegalArgumentException: No injector factory bound for Class
project setup:
gradle 3.3
buildToolsVersion "25.0.2"
dagger 2.11
A: For Kotlin, instead of
annotationProcessor com.google.dagger:dagger-android-processor:2.11
use
kapt com.google.dagger:dagger-android-processor:2.11
| Q: Dagger @ContributesAndroidInjector ComponentProcessor was unable to process this interface I was testing a new feature of Dagger: the Android module. And I am not able to compile the code when I use @ContributesAndroidInjector.
I always get the following error:
Error:(12, 8) error: dagger.internal.codegen.ComponentProcessor was unable to process this interface because not all of its dependencies could be resolved. Check for compilation errors or a circular dependency with generated code.
I tried to implement my components like here, but still I got the error.
Here is the smallest example:
@PerApplication
@Component(modules = {AndroidInjectionModule.class, LoginBindingModule.class})
public interface ApplicationComponent {
void inject(ExampleApplication application);
}
@Module
public abstract class LoginBindingModule {
@ContributesAndroidInjector
abstract LoginActivity contributeYourActivityInjector();
}
public class LoginActivity extends Activity {
@Inject
LoginPresenter loginPresenter;
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
AndroidInjection.inject(this);
super.onCreate(savedInstanceState);
}
}
public class LoginPresenter {
@Inject
public LoginPresenter() {
}
}
If I remove LoginBindingModule from ApplicationComponent, the app builds, but fails with a runtime exception:
java.lang.IllegalArgumentException: No injector factory bound for Class
project setup:
gradle 3.3
buildToolsVersion "25.0.2"
dagger 2.11
A: For Kotlin, instead of
annotationProcessor com.google.dagger:dagger-android-processor:2.11
use
kapt com.google.dagger:dagger-android-processor:2.11
A: In my case, the SomeModule class contained these unnecessary lines:
@ContributesAndroidInjector
internal abstract fun fragmentInjector(): SomeFragment
A: My problem was duplicate packages and files like (ViewModel 2).
Just delete them, then clean and rebuild the project.
A: Adding annotationProcessor "com.google.dagger:dagger-android-processor:2.11" to your gradle file will resolve your problem.
A: Check that all of your files declare the package -> "package com.something.blahblah...."
A: If none of the suggested solutions work, check whether you forgot to add @Provides annotations to any of the dependencies; this was the issue in my case.
A: I had the same error but the problem was with the module (project) where I declared the Dagger module.
Make sure you add the kotlin-kapt plugin, otherwise Dagger won't be able to generate any classes.
// declare it at the top of your build.gradle file
apply plugin: 'kotlin-kapt'
A: I've had a very weird error when converting a Module file to Kotlin. It might be rare, but maybe someone else stumbles across the same problem:
My Dagger module is part of a Gradle module. It uses dependencies which only have an api Gradle configuration. Dagger generates stub (Java) files for every Kotlin class involved. Without those stubs everything worked. With those stubs it gave the above error. Adding all missing dependencies to the Gradle module was the solution for me.
A: I had the same issue and the accepted answer did not work for me. After a lot of analysis, I found the issue pointed to some other library, in my case ButterKnife. I had a layout variable in my Dagger-enabled activity called editLenearLayout, as below.
@BindView(R.id.ll_edit11)
LinearLayout editLenearLayout;
When I removed these two lines of code, surprisingly, it worked :)
The problem was that ButterKnife was unable to bind the id inside the <include> layout. But the misleading part is that Studio showed the following error.
error: [ComponentProcessor:MiscError] dagger.internal.codegen.ComponentProcessor was unable to process this interface because not all of its dependencies could be resolved. Check for compilation errors or a circular dependency with generated code.
My LinearLayout was inside an <include> layout, which pointed to this problem.
May save someone's day.
| stackoverflow | {
"language": "en",
"length": 531,
"provenance": "stackexchange_0000F.jsonl.gz:895104",
"question_score": "20",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636626"
} |
561dc3a6975bf957a4e1ce050adf88e75ccd9ec6 | Stackoverflow Stackexchange
Q: responsive = false not working I'm trying to display a couple of charts but am having two main issues that I could use some help with. The first is that responsive = false doesn't seem to be working, because whenever I load the chart it makes the charts bigger, and the second is that my hover tool-tips don't seem to work.
Here is what my chart code looks like:
var ctx = document.getElementById("myChart").getContext("2d"); //this.$refs.canvas
var myChart = new Chart(ctx, {
type: 'pie',
data: {
labels: ["Checked", "Un-Checked"],
datasets: [{
label: '# of Hits',
data: [1000, 500],
backgroundColor: [
'rgba(54, 162, 235, 0.2)',
'rgba(255, 206, 86, 0.2)'
],
borderColor: [
'rgba(54, 162, 235, 1)',
'rgba(255, 206, 86, 1)'
],
borderWidth: 1
}]
},
options: {
responsive: false
}
});
A: Give myChart a fixed width, like:
<div id="myChart" style="width:200px;"></div>
It works for me.
| Q: responsive = false not working I'm trying to display a couple of charts but am having two main issues that I could use some help with. The first is that responsive = false doesn't seem to be working, because whenever I load the chart it makes the charts bigger, and the second is that my hover tool-tips don't seem to work.
Here is what my chart code looks like:
var ctx = document.getElementById("myChart").getContext("2d"); //this.$refs.canvas
var myChart = new Chart(ctx, {
type: 'pie',
data: {
labels: ["Checked", "Un-Checked"],
datasets: [{
label: '# of Hits',
data: [1000, 500],
backgroundColor: [
'rgba(54, 162, 235, 0.2)',
'rgba(255, 206, 86, 0.2)'
],
borderColor: [
'rgba(54, 162, 235, 1)',
'rgba(255, 206, 86, 1)'
],
borderWidth: 1
}]
},
options: {
responsive: false
}
});
A: Give myChart a fixed width, like:
<div id="myChart" style="width:200px;"></div>
It works for me.
A: Detecting when the canvas size changes can not be done directly from the CANVAS element. Chart.js uses its parent container to update the canvas render and display sizes. However, this method requires the container to be relatively positioned and dedicated to the chart canvas only. Responsiveness can then be achieved by setting relative values for the container size
Source :http://www.chartjs.org/docs/latest/general/responsive.html
| stackoverflow | {
"language": "en",
"length": 201,
"provenance": "stackexchange_0000F.jsonl.gz:895132",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636715"
} |
de3100ef54848457627c71e49f60f3e0fdf779cb | Stackoverflow Stackexchange
Q: Trouble parsing the "First name" from a webpage How can I get the "First Name" from the target page in my script. I've tried like below but it throws the following error:
"selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: The result of the xpath expression "//div[@class="div_input_place"]/input[@id="txt_name"]/@value" is: [object Attr]. It should be an element."
However, here is the element that contains the "First Name" I'm after:
<div class="div_input_place">
<input name="txt_name" type="text" value="CLINTO KUNJACHAN" maxlength="20" id="txt_name" disabled="disabled" tabindex="2" class="aspNetDisabled textboxDefault_de_active_student">
</div>
The script I've tried with so far:
from selenium import webdriver
import time
driver = webdriver.Chrome()
driver.get("https://www.icaionlineregistration.org/StudentRegistrationForCaNo.aspx")
driver.find_element_by_id('txtRegistNo').send_keys('SRO0394294')
driver.find_element_by_id('btnProceed').click()
time.sleep(5)
name = driver.find_element_by_xpath('//div[@class="div_input_place"]/input[@id="txt_name"]/@value')
print(name.text)
driver.quit()
A: Selenium doesn't support this syntax. Your XPath expression should return a WebElement only, not an attribute value or text. Try to use the code below instead:
name = driver.find_element_by_xpath('//div[@class="div_input_place"]/input[@id="txt_name"]').get_attribute('value')
print(name)
| Q: Trouble parsing the "First name" from a webpage How can I get the "First Name" from the target page in my script. I've tried like below but it throws the following error:
"selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: The result of the xpath expression "//div[@class="div_input_place"]/input[@id="txt_name"]/@value" is: [object Attr]. It should be an element."
However, here is the element that contains the "First Name" I'm after:
<div class="div_input_place">
<input name="txt_name" type="text" value="CLINTO KUNJACHAN" maxlength="20" id="txt_name" disabled="disabled" tabindex="2" class="aspNetDisabled textboxDefault_de_active_student">
</div>
The script I've tried with so far:
from selenium import webdriver
import time
driver = webdriver.Chrome()
driver.get("https://www.icaionlineregistration.org/StudentRegistrationForCaNo.aspx")
driver.find_element_by_id('txtRegistNo').send_keys('SRO0394294')
driver.find_element_by_id('btnProceed').click()
time.sleep(5)
name = driver.find_element_by_xpath('//div[@class="div_input_place"]/input[@id="txt_name"]/@value')
print(name.text)
driver.quit()
A: Selenium doesn't support this syntax. Your XPath expression should return a WebElement only, not an attribute value or text. Try to use the code below instead:
name = driver.find_element_by_xpath('//div[@class="div_input_place"]/input[@id="txt_name"]').get_attribute('value')
print(name)
A: You cannot target the attributes with XPaths in Selenium - the expressions have to always match the actual elements:
name_element = driver.find_element_by_xpath('//div[@class="div_input_place"]/input[@id="txt_name"]')
name_attribute = name_element.get_attribute("value")
print(name_attribute)
Note that I'd also switch to a more concise and readable CSS selector:
driver.find_element_by_css_selector('.div_input_place input#txt_name')
Or, even go with "find by id" if your id is unique:
driver.find_element_by_id("txt_name")
| stackoverflow | {
"language": "en",
"length": 185,
"provenance": "stackexchange_0000F.jsonl.gz:895174",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636853"
} |
4682c8e44edfaa520d03ef774b7ae5e6b97a5b71 | Stackoverflow Stackexchange
Q: Training and validating on images with different resolution in Keras I'm using Keras to build a convolutional neural net to perform regression from microscopic images to 2D label data (for counting). I'm looking into training the network on smaller patches of the microscopic data (where the patches are the size of the receptive field). The problem is, the fit() method requires validation data to be of the same size as the input. Instead, I'm hoping to be able to validate on entire images (not patches) so that I can validate on my entire validation set and compare the results to other methods I've used so far.
One solution I found was to alternate between fit() and evaluate() each epoch. However, I was hoping to be able to observe these results using Tensorboard. Since evaluate() doesn't take in callbacks, this solution isn't ideal. Does anybody have a good way validating on full-resolution images while training on patches?
A: You could use fit generator instead of fit and provide a different generator for validation set. As long as the rest of your network is agnostic to the image size, (e.g, fully convolutional layers), you should be fine.
| Q: Training and validating on images with different resolution in Keras I'm using Keras to build a convolutional neural net to perform regression from microscopic images to 2D label data (for counting). I'm looking into training the network on smaller patches of the microscopic data (where the patches are the size of the receptive field). The problem is, the fit() method requires validation data to be of the same size as the input. Instead, I'm hoping to be able to validate on entire images (not patches) so that I can validate on my entire validation set and compare the results to other methods I've used so far.
One solution I found was to alternate between fit() and evaluate() each epoch. However, I was hoping to be able to observe these results using Tensorboard. Since evaluate() doesn't take in callbacks, this solution isn't ideal. Does anybody have a good way validating on full-resolution images while training on patches?
A: You could use fit generator instead of fit and provide a different generator for validation set. As long as the rest of your network is agnostic to the image size, (e.g, fully convolutional layers), you should be fine.
A: You need to make sure that your network input is of shape (None,None,3), which means your network accepts an input color image of arbitrary size.
| stackoverflow | {
"language": "en",
"length": 222,
"provenance": "stackexchange_0000F.jsonl.gz:895186",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44636877"
} |
d876d8880d732affad12cffcf92631a5481cf7c0 | Stackoverflow Stackexchange
Q: navigator is deprecated and has been removed even i don't use navigator I don't use Navigator in my code. But I am getting this error.
"dependencies": {
"react": "16.0.0-alpha.12",
"react-native": "0.45.1",
"react-native-deprecated-custom-components": "^0.1.0",
"react-native-router-redux": "^0.2.2",
"react-redux": "^5.0.5",
"redux": "^3.7.0"
},
"devDependencies": {
"babel-jest": "20.0.3",
"babel-preset-react-native": "2.0.0",
"jest": "20.0.4",
"react-test-renderer": "16.0.0-alpha.12"
}
Can anyone help me?
A: This happened to me when my IDE automatically added the following lines:
import * as AsyncStorage from "react-native";
which imports everything from that module, which won't work. Solved it by changing to this:
import { AsyncStorage } from "react-native";
| Q: navigator is deprecated and has been removed even i don't use navigator I don't use Navigator in my code. But I am getting this error.
"dependencies": {
"react": "16.0.0-alpha.12",
"react-native": "0.45.1",
"react-native-deprecated-custom-components": "^0.1.0",
"react-native-router-redux": "^0.2.2",
"react-redux": "^5.0.5",
"redux": "^3.7.0"
},
"devDependencies": {
"babel-jest": "20.0.3",
"babel-preset-react-native": "2.0.0",
"jest": "20.0.4",
"react-test-renderer": "16.0.0-alpha.12"
}
Can anyone help me?
A: This happened to me when my IDE automatically added the following lines:
import * as AsyncStorage from "react-native";
which imports everything from that module, which won't work. Solved it by changing to this:
import { AsyncStorage } from "react-native";
A: Navigator is no longer supported in react native so this is no longer allowed:
import { Navigator } from 'react-native'
Remove the Navigator from 'react-native' imports and substitute it with this:
import { Navigator } from 'react-native-deprecated-custom-components';
A: NavigationExperimental (previously Navigator) has been removed from React Native and moved to a separate package, react-native-deprecated-custom-components. It is deprecated and not recommended. To fix your old code you can perform the following steps:
*
*Install package react-native-deprecated-custom-components
npm install react-native-deprecated-custom-components --save
*Import NavigationExperimental(Previously Navigator)
import NavigationExperimental from 'react-native-deprecated-custom-components';
*Replace Navigator with NavigationExperimental.Navigator
| stackoverflow | {
"language": "en",
"length": 185,
"provenance": "stackexchange_0000F.jsonl.gz:895217",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637017"
} |
8b6bede84afa29611b3e7e2d754cbf6a457e64b0 | Stackoverflow Stackexchange
Q: Can memoization be used with an iterative solution in dynamic programming? For example, the Fibonacci sequence can be solved with memoization when using recursion. But can solving Fibonacci iteratively (stack + while loop) also take advantage of memoization?
A: Of course ... start at the base cases F(0) and F(1), and compute values. Keep them all in an array, indexed by the functional subscript. When you get an input argument greater than your current array extent, compute more values. When you get one within the current bounds, simply return that value from the array.
| Q: Can memoization be used with an iterative solution in dynamic programming? For example, the Fibonacci sequence can be solved with memoization when using recursion. But can solving Fibonacci iteratively (stack + while loop) also take advantage of memoization?
A: Of course ... start at the base cases F(0) and F(1), and compute values. Keep them all in an array, indexed by the functional subscript. When you get an input argument greater than your current array extent, compute more values. When you get one within the current bounds, simply return that value from the array.
| stackoverflow | {
"language": "en",
"length": 95,
"provenance": "stackexchange_0000F.jsonl.gz:895241",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637081"
} |
03ff9bffcf713b933fcdffa0c6e87f6da8e2040b | Stackoverflow Stackexchange
Q: Is there a way to create executable files in Python without os.chmod for *nix systems? By default open writes files with 666 octal permission: -rw-rw-rw-. I wonder if there's a way to make open create files with the execution bit set. For instance, assuming my system's umask value is 0000, any file written with open will be created with the permission -rw-rw-rw-:
$ umask
0000
>>> open("aaa", "w")
$ ls -l aaa
-rw-rw-rw- 1 Kuser Kuser 0 Jun 19 08:44 aaa
I'm looking for a way to set the default permission value of open to 777 octal so I can write executable files directly without os.chmod. Or, generally, is there a way to achieve this in Python? Probably using lower-level file processing tools from the os module? touch and most editors use the 666 octal permission mode by default.
I wasn't able to get the execution bit set on files created by the touch command; touch uses 666 by default.
Note: this is just an artificial question.
A: open accepts an opener argument that returns a file descriptor; os.open accepts a mode, which defaults to 0o777.
import os
with open("aaa", "w", opener=os.open) as f:
⋮
| Q: Is there a way to create executable files in Python without os.chmod for *nix systems? By default open writes files with 666 octal permission: -rw-rw-rw-. I wonder if there's a way to make open create files with the execution bit set. For instance, assuming my system's umask value is 0000, any file written with open will be created with the permission -rw-rw-rw-:
$ umask
0000
>>> open("aaa", "w")
$ ls -l aaa
-rw-rw-rw- 1 Kuser Kuser 0 Jun 19 08:44 aaa
I'm looking for a way to set the default permission value of open to 777 octal so I can write executable files directly without os.chmod. Or, generally, is there a way to achieve this in Python? Probably using lower-level file processing tools from the os module? touch and most editors use the 666 octal permission mode by default.
I wasn't able to get the execution bit set on files created by the touch command; touch uses 666 by default.
Note: this is just an artificial question.
A: open accepts an opener argument that returns a file descriptor; os.open accepts a mode, which defaults to 0o777.
import os
with open("aaa", "w", opener=os.open) as f:
⋮
| stackoverflow | {
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:895245",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637094"
} |
624708a0af8de2dd3b4f5279292cbb4e93b01360 | Stackoverflow Stackexchange
Q: How to do exactly same thing as array_to_json(array_agg(tags.*)) for N columns I am currently using PostgreSQL JSON capabilities to create JSON objects out of my query so I can easily use it on my application or pass it to the frontend.
array_to_json(array_agg(tags.*)) does exactly what I need (creates JSON objects with columns as keys and converts them into an array); however, I haven't found any way to do the same if I need only one or two columns from tags. I played with various JSON and array functions but never achieved the same result. Thanks for the help
Whole query
SELECT
tags_components.component_id,
array_to_json(array_agg(tags.*)) as tags
FROM tags_components
LEFT JOIN tags ON tags.id = tags_components.tag_id
AND tags_components.component_name = 'company'
GROUP BY tags_components.component_id
A: Use a derived table, e.g.:
SELECT
tags_components.component_id,
array_to_json(array_agg(tags.*)) as tags
FROM tags_components
LEFT JOIN (
SELECT id, name -- only two columns
FROM tags
) tags
ON tags.id = tags_components.tag_id
AND tags_components.component_name = 'company'
GROUP BY tags_components.component_id
| Q: How to do exactly same thing as array_to_json(array_agg(tags.*)) for N columns I am currently using PostgreSQL JSON capabilities to create JSON objects out of my query so I can easily use it on my application or pass it to the frontend.
array_to_json(array_agg(tags.*)) does exactly what I need (creates JSON objects with columns as keys and converts them into an array); however, I haven't found any way to do the same if I need only one or two columns from tags. I played with various JSON and array functions but never achieved the same result. Thanks for the help
Whole query
SELECT
tags_components.component_id,
array_to_json(array_agg(tags.*)) as tags
FROM tags_components
LEFT JOIN tags ON tags.id = tags_components.tag_id
AND tags_components.component_name = 'company'
GROUP BY tags_components.component_id
A: Use a derived table, e.g.:
SELECT
tags_components.component_id,
array_to_json(array_agg(tags.*)) as tags
FROM tags_components
LEFT JOIN (
SELECT id, name -- only two columns
FROM tags
) tags
ON tags.id = tags_components.tag_id
AND tags_components.component_name = 'company'
GROUP BY tags_components.component_id
| stackoverflow | {
"language": "en",
"length": 165,
"provenance": "stackexchange_0000F.jsonl.gz:895266",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637150"
} |
2eef080babe32afc2078416989c7affbe5875309 | Stackoverflow Stackexchange
Q: node.js electron bootstrap-tour popover is not a function Hi,
I'm making a node.js application and I have a problem with bootstrap-tour.
I have two files index.js and tour.js.
In index.js i have something like this :
window.$ = window.jQuery = require('jquery');
require('bootstrap');
require('./tour.js');
and in tour.js something like this :
const Tour = require('bootstrap-tour');
var tour = new Tour({
steps: [
{
element: "#account",
title: 'Your account',
content: 'Lorem ipsum dolor sit amet.'
}]
});
tour.init();
tour.start();
But i have this error in electron's console :
Uncaught TypeError: $element.popover is not a function
at Tour._showPopover
I tried to add require('bootstrap'); just before const Tour = require('bootstrap-tour'); but the same error still appears.
Can someone help me?
PS:
Is that a good way to do it? Or is it better to put all the JavaScript in <script> tags and put all the code in document.ready functions?
PPS:
Here my package.json content :
{
"name": "testelectron",
"version": "0.0.0",
"description": "TestElectron",
"main": "app.js",
"author": {
"name": "OOM"
},
"devDependencies": {
"electron": "^1.6.6"
},
"dependencies": {
"bootstrap": "^3.3.7",
"bootstrap-toggle": "^2.2.2",
"bootstrap-tour": "^0.11.0",
"jquery": "^3.2.1"
}
}
Thank you.
| Q: node.js electron bootstrap-tour popover is not a function Hi,
I'm making a node.js application and I have a problem with bootstrap-tour.
I have two files index.js and tour.js.
In index.js i have something like this :
window.$ = window.jQuery = require('jquery');
require('bootstrap');
require('./tour.js');
and in tour.js something like this :
const Tour = require('bootstrap-tour');
var tour = new Tour({
steps: [
{
element: "#account",
title: 'Your account',
content: 'Lorem ipsum dolor sit amet.'
}]
});
tour.init();
tour.start();
But i have this error in electron's console :
Uncaught TypeError: $element.popover is not a function
at Tour._showPopover
I tried to add require('bootstrap'); just before const Tour = require('bootstrap-tour'); but the same error still appears.
Can someone help me?
PS:
Is that a good way to do it? Or is it better to put all the JavaScript in <script> tags and put all the code in document.ready functions?
PPS:
Here my package.json content :
{
"name": "testelectron",
"version": "0.0.0",
"description": "TestElectron",
"main": "app.js",
"author": {
"name": "OOM"
},
"devDependencies": {
"electron": "^1.6.6"
},
"dependencies": {
"bootstrap": "^3.3.7",
"bootstrap-toggle": "^2.2.2",
"bootstrap-tour": "^0.11.0",
"jquery": "^3.2.1"
}
}
Thank you.
| stackoverflow | {
"language": "en",
"length": 187,
"provenance": "stackexchange_0000F.jsonl.gz:895272",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637160"
} |
99ea167a5161734f0c0a8b2bde5832b02ae62c9e | Stackoverflow Stackexchange
Q: How to change instance type in AWS ECS cluster? I have a cluster in AWS EC2 Container Service. When I've set it up, I used t2.micro instances because those were sufficient for development. Now I'd like to use more powerful instances, like m4.large.
I would like to know whether it is possible to change the instance types only, so I don't need to recreate the whole cluster. I could not find how to do this.
A: Yes, you can achieve this in CloudFormation.
*
*Click on the Stack corresponding to your ECS-Cluster.
*Click Update Stack
*Select the 'Use current template' radio option, then Next
*change EcsInstanceType
*Next, Next, Update
*Upscale your cluster to 2*n instances
*Wait for the n new instances of the new type to be created
*Downscale your cluster to n
*Or you could just drain and terminate the instances 1 by 1
| Q: How to change instance type in AWS ECS cluster? I have a cluster in AWS EC2 Container Service. When I've set it up, I used t2.micro instances because those were sufficient for development. Now I'd like to use more powerful instances, like m4.large.
I would like to know whether it is possible to change the instance types only, so I don't need to recreate the whole cluster. I could not find how to do this.
A: Yes, you can achieve this in CloudFormation.
*
*Click on the Stack corresponding to your ECS-Cluster.
*Click Update Stack
*Select the 'Use current template' radio option, then Next
*change EcsInstanceType
*Next, Next, Update
*Upscale your cluster to 2*n instances
*Wait for the n new instances of the new type to be created
*Downscale your cluster to n
*Or you could just drain and terminate the instances 1 by 1
A: Here are the exact steps I took to update the instance type on my cluster:
*
*Go to the cluster service, update Number of tasks to 0
*Go to EC2 -> Launch Configurations -> Actions dropdown -> Copy launch configuration and set the new instance type
*Go to EC2 -> Auto Scaling Groups -> Edit -> set Launch Configuration to newly created launch configuration
*Go to EC2 -> Auto Scaling Groups -> Instances -> Detach instance
*Go to EC2 -> Launch Configurations -> Delete old launch configuration
*Go to the cluster service, update Number of tasks to your desired count.
Now when tasks start, it'll be running on the updated EC2 instance type.
A: Yes, this is possible.
The instance types in your cluster are determined by the 'Instance Type' setting within your Launch Configuration. To update the instance type without having to recreate the cluster:
*
*Make a copy of the cluster Launch Configuration and update the 'Instance Type'.
*Adjust the cluster Auto Scaling Group to point to your new Launch Configuration.
*Wait for your new instances to register in your cluster and your services to start.
You can also add multiple instances types to a single cluster by creating multiple Auto Scaling Groups linked to different Launch Configurations. Note however that you can't copy Auto Scaling Groups easily within the console.
A: This can be achieved by modifying EcsInstanceType in the CloudFormation stack for the ECS instance. Any change to the autoscaling group by hand will be overwritten by the next "Scale ECS Instances" operation.
A: Yes, you can change the instance type in an ECS cluster. I believe you created the ECS cluster manually from the AWS GUI. Behind the scenes, it creates an AWS CloudFormation template based on your inputs from the AWS console (ECS), like VPC, instance type, size, etc. Please follow the steps below.
*
*Find the cloud formation template with the name "EC2ContainerService-{your-ecs-cluster-name}".
*Check the existing setting in the Parameters tab(you can check your instance type here).
*Now you need to update the cloud formation. Click on-> Update ->use current template ->next->update the EcsInstanceType variable ->next->next->update stack.
*Now your CloudFormation stack is updated. You can check in the EC2 console that there is a new spot fleet with the new instance type.
A: Definitely, there are multiple ways to change the instance type, as suggested above, using launch configurations.
But beware: it is a challenge to attach multiple launch configurations to an ECS cluster that has container-instance scaling policies.
For example, if one runs a cluster of t2.medium instances using a launch configuration and has an auto scaling policy attached to the ECS cluster, then that policy can signal only one Auto Scaling group.
A: To do it without any downtime:
*
*Create a copy of the Launch Configuration used by your Auto Scaling
Group, including any changes you want to make.
*Edit the Auto Scaling Group to:
*
*Use the new Launch Configuration
*Desired Capacity = Desired Capacity * 2
*Min = Desired Capacity
*Wait for all new instances to become 'ACTIVE' in the ECS Instances tab of the ECS Cluster
*Select the old instances and click Actions -> Drain Instances (a CLI equivalent is sketched after this list)
*Wait until all the old instances are running 0 tasks
*Edit the Auto Scaling Group and change Min and Desired back to their original values
A: The AWS docs have a complete step-by-step guide covering both a CloudFormation stack and an ECS cluster launched manually.
How do I change my container instance type in Amazon ECS?
From the guide:
To change your container instance type, complete the steps in one of
the following sections:
*
*Update container instances launched in an ECS cluster through the AWS CloudFormation stack
*Update container instances launched manually in an ECS cluster
| stackoverflow | {
"language": "en",
"length": 771,
"provenance": "stackexchange_0000F.jsonl.gz:895296",
"question_score": "49",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637225"
} |
7626dea0b3ea96b8d06813c07df5f14bb9f76d3e | Stackoverflow Stackexchange
Q: External fingerprint scanner with Android Emulator How can I use external fingerprint scanner with AVD in android studio?
A: You cannot use an external fingerprint scanner with AVD, sadly. Your best option would be to deploy the app to your existing Android device that has a fingerprint scanner.
| Q: External fingerprint scanner with Android Emulator How can I use external fingerprint scanner with AVD in android studio?
A: You cannot use an external fingerprint scanner with AVD, sadly. Your best option would be to deploy the app to your existing Android device that has a fingerprint scanner.
| stackoverflow | {
"language": "en",
"length": 49,
"provenance": "stackexchange_0000F.jsonl.gz:895307",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637253"
} |
7dcc7978c82523c6419c391c734c59a2b6eb3553 | Stackoverflow Stackexchange
Q: how to hot deploy jsp file using tomcat7-maven-plugin? I use tomcat7 with the tomcat-maven plugin. I am able to make it hotswap my jsp but it only works if I modify it directly in the target. How can I make tomcat also look for changes in my sources directory?
pom.xml
<plugin>
<groupId>org.apache.tomcat.maven</groupId>
<artifactId>tomcat7-maven-plugin</artifactId>
<version>2.2</version>
<configuration>
<serverXml>${project.build.directory}/config/tomcat-config/${usingDb}/server.xml</serverXml>
<tomcatUsers>${project.build.directory}/config/tomcat-config/tomcat-users.xml</tomcatUsers>
<configurationDir>${project.build.directory}/config/tomcat-config</configurationDir>
<additionalClassesDirs>
<classesDir>${project.basedir}/src/main/webapp</classesDir>
</additionalClassesDirs>
<contextReloadable>true</contextReloadable>
<port>${tomcat.http.local.port}</port>
<path>/${url.contextPath}</path>
</configuration>
</plugin>
A: This depends on how you use/start the maven plugin.
Starting it with
mvn tomcat7:run
should do the trick (in comparison to run-war or any other goal). See details at http://tomcat.apache.org/maven-plugin-2.2/tomcat7-maven-plugin/plugin-info.html
This will actually reload the context in your tomcat. I'm not sure actual "Hot replacement" without reloading the context is possible without third party libraries/plugins like jrebel or similar.
| Q: how to hot deploy jsp file using tomcat7-maven-plugin? I use tomcat7 with the tomcat-maven plugin. I am able to make it hotswap my jsp but it only works if I modify it directly in the target. How can I make tomcat also look for changes in my sources directory?
pom.xml
<plugin>
<groupId>org.apache.tomcat.maven</groupId>
<artifactId>tomcat7-maven-plugin</artifactId>
<version>2.2</version>
<configuration>
<serverXml>${project.build.directory}/config/tomcat-config/${usingDb}/server.xml</serverXml>
<tomcatUsers>${project.build.directory}/config/tomcat-config/tomcat-users.xml</tomcatUsers>
<configurationDir>${project.build.directory}/config/tomcat-config</configurationDir>
<additionalClassesDirs>
<classesDir>${project.basedir}/src/main/webapp</classesDir>
</additionalClassesDirs>
<contextReloadable>true</contextReloadable>
<port>${tomcat.http.local.port}</port>
<path>/${url.contextPath}</path>
</configuration>
</plugin>
A: This depends on how you use/start the maven plugin.
Starting it with
mvn tomcat7:run
should do the trick (in comparison to run-war or any other goal). See details at http://tomcat.apache.org/maven-plugin-2.2/tomcat7-maven-plugin/plugin-info.html
This will actually reload the context in your tomcat. I'm not sure actual "Hot replacement" without reloading the context is possible without third party libraries/plugins like jrebel or similar.
A: You should be able to run the war:exploded maven goal to get your changes copied from your sources directory to the target directory.
A: Change your workspace in Eclipse to \tomcat\webapps. Since it is just for your own work, this should be fine. Whatever changes you make in Eclipse are then in the same directory Tomcat looks in for applications to deploy.
| stackoverflow | {
"language": "en",
"length": 187,
"provenance": "stackexchange_0000F.jsonl.gz:895324",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637303"
} |
f4eb4b0a458fd3f7a8a9b6b4db6f869bcdf93c22 | Stackoverflow Stackexchange
Q: How to override wagtail authentication? When I attempt to access my wagtail back-office at /cms/, I get redirected to wagtail's login page, /cms/login/.
However, I would like to use my own custom login, which is default for the rest of the site, and sits at /auth/.
My LOGIN_URL is already set to /auth/ in django settings.
EDIT: it's been suggested that this is a generic question of how to override namespaced URL patterns, but this is not the case. The URLs are not namespaced, and I was looking for Wagtail functionality that addressed this specific issue. Fortunately, that functionality does exist.
A: WAGTAIL_FRONTEND_LOGIN_URL suggested above is specifically intended just for front end users and there is not an equivalent setting for admin users. You could use redirect_to_login like so:
from django.contrib.auth.views import redirect_to_login
from django.urls import reverse
from wagtail.admin import urls as wagtailadmin_urls
def redirect_to_my_auth(request):
return redirect_to_login(reverse('wagtailadmin_home'), login_url='myauth:login')
urlpatterns = [
url(r'^cms/login', redirect_to_my_auth, name='wagtailadmin_login'),
url(r'^cms/', include(wagtailadmin_urls)),
]
| Q: How to override wagtail authentication? When I attempt to access my wagtail back-office at /cms/, I get redirected to wagtail's login page, /cms/login/.
However, I would like to use my own custom login, which is default for the rest of the site, and sits at /auth/.
My LOGIN_URL is already set to /auth/ in django settings.
EDIT: it's been suggested that this is a generic question of how to override namespaced URL patterns, but this is not the case. The URLs are not namespaced, and I was looking for Wagtail functionality that addressed this specific issue. Fortunately, that functionality does exist.
A: WAGTAIL_FRONTEND_LOGIN_URL suggested above is specifically intended just for front end users and there is not an equivalent setting for admin users. You could use redirect_to_login like so:
from django.contrib.auth.views import redirect_to_login
from django.urls import reverse
from wagtail.admin import urls as wagtailadmin_urls
def redirect_to_my_auth(request):
return redirect_to_login(reverse('wagtailadmin_home'), login_url='myauth:login')
urlpatterns = [
url(r'^cms/login', redirect_to_my_auth, name='wagtailadmin_login'),
url(r'^cms/', include(wagtailadmin_urls)),
]
A: The Wagtail setting WAGTAIL_FRONTEND_LOGIN_URL allows you to configure how users login to the Wagtail admin.
From http://docs.wagtail.io/en/v1.10.1/advanced_topics/privacy.html#setting-up-a-login-page:
If the stock Django login view is not suitable - for example, you wish to use an external authentication system, or you are integrating Wagtail into an existing Django site that already has a working login view - you can specify the URL of the login view via the WAGTAIL_FRONTEND_LOGIN_URL setting
A: To elaborate on Erick M's answer, since this is the working answer:
You do need to set the correct permission (wagtailadmin.access_admin) or set the is_superuser flag in Django's auth_user database table to be able to access the CMS, otherwise you still get a "permission denied" error.
I thought this had to do with my implementation, but it was already working; it failed because of the above reason.
| stackoverflow | {
"language": "en",
"length": 297,
"provenance": "stackexchange_0000F.jsonl.gz:895328",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637308"
} |
36e3672342a2f8463da46bcfb4926a8e2230b3f8 | Stackoverflow Stackexchange
Q: angular 2 ionic 2 handling events on keypress I have a bunch of ionic 2 cards which I want to flip on the press of a key (any key, it doesn't matter). The code looks like
<ion-content padding>
<ion-card (click)="setTime(7)" *ngIf="status == 'morning'" (keypress)="eventHandler($event)" style="width:80%">
<img src="https://greatist.com/sites/default/files/Sleeping-Positions-feature.jpg"/>
</ion-card>
</ion-content>
the .ts code
eventHandler(keyCode){
alert('hey vikj');
}
On pressing any key, my event handler is not fired.
A: You can use this function on an input field:
(keypress)="onChange($event.keyCode)"
| Q: angular 2 ionic 2 handling events on keypress I have a bunch of ionic 2 cards which I want to flip on the press of a key (any key, it doesn't matter). The code looks like
<ion-content padding>
<ion-card (click)="setTime(7)" *ngIf="status == 'morning'" (keypress)="eventHandler($event)" style="width:80%">
<img src="https://greatist.com/sites/default/files/Sleeping-Positions-feature.jpg"/>
</ion-card>
</ion-content>
the .ts code
eventHandler(keyCode){
alert('hey vikj');
}
On pressing any key, my event handler is not fired.
A: You can use this function on an input field:
(keypress)="onChange($event.keyCode)"
A: It's set up correctly, but the focus needs to be on the ion-card before it starts to listen. Click on the card and then press a key and it should work. If you want the focus to be on the entire page, check out this question:
Angular 2 | listen for keypress event on whole page
| stackoverflow | {
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:895350",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637384"
} |
0b5ac77b282a21bc185f5bd9cd7d93d3c34ececa | Stackoverflow Stackexchange
Q: How to conditionally set required attribute when element is not hidden Whenever I place a $(".Othertext").attr('required', ''); before the show call for the element it shows the textbox regardless of the button condition. Is there any way to make it so that the textbox is required and shown when the Other button is clicked?
<label class="mdl-radio mdl-js-radio mdl-js-ripple-effect" for="DF4">
<input type="radio" id="DF4" class="mdl-radio__button" name="DF" value="4">
<span class="mdl-radio__label">Other - please describe in detail</span>
</label>
<div class="Othertext">
<div class="mdl-textfield mdl-js-textfield mdl-textfield--floating-label">
<input class="mdl-textfield__input" type="text" id="othertext">
<span class="mdl-textfield__label">Describe...</span>
</div>
</div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$(".Othertext").hide();
$('input[type=radio][name=DF]').change(function() {
if($(this).val() == 4)
$(".Othertext").show();
else
$(".Othertext").hide();
});
});
</script>
A: This works fine for me:
$(document).ready(function() {
$("#othertext").hide();
$('input[type=radio][name=DF]').change(function() {
if($(this).val() == 4) {
$("#othertext").show();
$("#othertext").attr('required', '');
}
else {
$("#othertext").hide();
$("#othertext").removeAttr('required', '');
}
});
});
However, remember to use brackets when you have more than one line in the if statement.
| Q: How to conditionally set required attribute when element is not hidden Whenever I place a $(".Othertext").attr('required', ''); before the show call for the element it shows the textbox regardless of the button condition. Is there any way to make it so that the textbox is required and shown when the Other button is clicked?
<label class="mdl-radio mdl-js-radio mdl-js-ripple-effect" for="DF4">
<input type="radio" id="DF4" class="mdl-radio__button" name="DF" value="4">
<span class="mdl-radio__label">Other - please describe in detail</span>
</label>
<div class="Othertext">
<div class="mdl-textfield mdl-js-textfield mdl-textfield--floating-label">
<input class="mdl-textfield__input" type="text" id="othertext">
<span class="mdl-textfield__label">Describe...</span>
</div>
</div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$(".Othertext").hide();
$('input[type=radio][name=DF]').change(function() {
if($(this).val() == 4)
$(".Othertext").show();
else
$(".Othertext").hide();
});
});
</script>
A: This works fine for me:
$(document).ready(function() {
$("#othertext").hide();
$('input[type=radio][name=DF]').change(function() {
if($(this).val() == 4) {
$("#othertext").show();
$("#othertext").attr('required', '');
}
else {
$("#othertext").hide();
$("#othertext").removeAttr('required', '');
}
});
});
However, remember to use brackets when you have more than one line in the if statement.
| stackoverflow | {
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:895364",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637429"
} |
c6e70b99a99e9d494c6034b1a2d2146c13103bcb | Stackoverflow Stackexchange
Q: BigQuery Query Latest Table Efficiently I'm trying to efficiently query from the latest table in a dataset that consists of tables of the form project_id:dataset:dataset_20160101, project_id:dataset:dataset_20160102 etc.
This query seems to be the recommended solution:
SELECT *
FROM `project_id.dataset.*`
WHERE _TABLE_SUFFIX=(SELECT MAX(table_id) FROM `project_id.dataset.__TABLES_SUMMARY__`)
However, this query bills me for accessing all tables in the dataset, not just the latest one. Why is that?
A: We can only prune the tables before the query runs when the WHERE clause uses a constant expression on the pseudo column, e.g., _TABLE_SUFFIX = 'dataset_20160102'. For your query, as the WHERE clause includes a sub-query which doesn't parse to a constant, we cannot prune the tables before the query runs. Instead, data is read from all tables and the sub-query is executed. Then data is joined with the sub-query results and filtered.
It's possible to prune the tables during the query execution. Start the query, execute the sub-query, prune the tables, and read data. But there's no ETA for it yet.
| Q: BigQuery Query Latest Table Efficiently I'm trying to efficiently query from the latest table in a dataset that consists of tables of the form project_id:dataset:dataset_20160101, project_id:dataset:dataset_20160102 etc.
This query seems to be the recommended solution:
SELECT *
FROM `project_id.dataset.*`
WHERE _TABLE_SUFFIX=(SELECT MAX(table_id) FROM `project_id.dataset.__TABLES_SUMMARY__`)
However, this query bills me for accessing all tables in the dataset, not just the latest one. Why is that?
A: We can only prune the tables before the query runs when the WHERE clause uses a constant expression on the pseudo column, e.g., _TABLE_SUFFIX = 'dataset_20160102'. For your query, as the WHERE clause includes a sub-query which doesn't parse to a constant, we cannot prune the tables before the query runs. Instead, data is read from all tables and the sub-query is executed. Then data is joined with the sub-query results and filtered.
It's possible to prune the tables during the query execution. Start the query, execute the sub-query, prune the tables, and read data. But there's no ETA for it yet.
A: For now, if you have a pipeline which inserts tables, then at the end of the pipeline, you could
(1) get the MAX(table_id)
(2) create a view with the following query:
SELECT
*,
DATE max_table_id AS _LATEST_DATE,
PARSE_DATE('%E4Y%m%d', _TABLE_SUFFIX) AS _DATA_DATE
FROM `project_id.dataset.*`
Then whenever you want to query the data, you could just query against the view, and specify _DATA_DATE=_LATEST_DATE as the filter. In this case, _LATEST_DATE is pre-calculated, so it's constant. As Hua explained, you will be billed for processing only the latest table.
| stackoverflow | {
"language": "en",
"length": 254,
"provenance": "stackexchange_0000F.jsonl.gz:895368",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637448"
} |
470aa805a7cd58e0a6ff334f788c3454ee4c73df | Stackoverflow Stackexchange
Q: Is there a way to affect the range counter in Python? I'm trying to run a python program with a for loop which has a variable i increased by 1 every time from 1 to the length of my list. In java, my code that I'm going for might look something like this:
for (int i = 0; i < array.length; i++) {
//code goes here
i += //the number i want it to go up by
}
This actually affects my counter the way intended and allows me to effectively skip numbers in my for loop and I want to try to run a similar program but in python. Is there any way to do this with python's built in functionality or do I have to just use a while loop and a counter to simulate this myself if I want python to work this way?
A: You'll need a while loop for this:
i = 0
while i < len(myArray):
# do stuff
if special_case: i+= 1
i += 1
| Q: Is there a way to affect the range counter in Python? I'm trying to run a python program with a for loop which has a variable i increased by 1 every time from 1 to the length of my list. In java, my code that I'm going for might look something like this:
for (int i = 0; i < array.length; i++) {
//code goes here
i += //the number i want it to go up by
}
This actually affects my counter the way intended and allows me to effectively skip numbers in my for loop and I want to try to run a similar program but in python. Is there any way to do this with python's built in functionality or do I have to just use a while loop and a counter to simulate this myself if I want python to work this way?
A: You'll need a while loop for this:
i = 0
while i < len(myArray):
# do stuff
if special_case: i+= 1
i += 1
A: In Python it is very similar to Java. You can dynamically increase your counter based on different conditions, as follows:
x = 1
while x < 100:
if condition1:
x += 1
elif condition2:
x += 2
else:
x += 3
A: (Disclaimer: never use this code for any remotely serious purpose)
The problem with modifying the value of i in your code is this: normally, assignments (including augmented assignment, +=) made to local immutable values are only visible in the local scope. The inside of range is not in the local scope. When you reassign i, the range implementation has no way of knowing this.
Normally.
But Python has a built-in module named inspect that exposes all sorts of information about your program that you normally wouldn't be privy to at run-time. This includes the values of variables in frames which would otherwise be completely inaccessible.
In violation of good programming principles and the laws of nature, we can write a range-like function which pierces the veil of ignorance, and steals the value of i from the calling context, much like how Prometheus stole fire from the peak of Mount Olympus. (Note: recall what happens to Prometheus at the end of that story.)
import inspect
import re
def mutable_range(max):
x = 0
while x < max:
yield x
record = inspect.stack()[1]
frame = record[0]
source_lines = record[4]
iterator_name = re.match(r"\s*for (\w+) in mutable_range", source_lines[0]).group(1)
peek = frame.f_locals[iterator_name]
if peek != x:
x = peek
else:
x += 1
for i in mutable_range(10):
print(i)
if i == 3:
i = -10
if i == -8:
i = 6
Result:
0
1
2
3
-10
-9
-8
6
7
8
9
(Disclaimer: author is not responsible for use of code and subsequent punishment of your hubris by eagles feeding on your liver for all eternity)
A: You can't modify the step mid count, but if the stepping through is constant, you can specify it at the start:
# the default
>>> range(1, 10)
[1, 2, 3, 4, 5, 6, 7, 8, 9]
# step 2
>>> range(1, 10, 2)
[1, 3, 5, 7, 9]
You can also step backwards:
>>> range(10, 0, -1)
[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
If your case is, sadly, not one where the step is constant throughout iterations, you'll certainly need a while loop as you rightly surmised.
A: In Python, you can create one-way iterators with the built-in iter function. With that, you can call next to effectively skip a step.
To do this with multiple steps, the itertools recipies defines a consume function:
def consume(iterator, n):
"Advance the iterator n-steps ahead. If n is none, consume entirely."
# Use functions that consume iterators at C speed.
if n is None:
# feed the entire iterator into a zero-length deque
collections.deque(iterator, maxlen=0)
else:
# advance to the empty slice starting at position n
next(islice(iterator, n, n), None)
In this case, we can do:
import itertools
def skip(iterator, n):
next(itertools.islice(iterator, n, n), None)
range_iter = iter(range(len(ls)))
for i in range_iter:
# ...
if custom_condition:
skip(range_iter, 2) # Or any number.
This also works directly iterating over lists:
ls_iter = iter(ls)
for i in ls_iter:
# ...
if custom_condition:
skip(ls_iter, 3)
These are super efficient as they use built-in types and functions.
A: You'll be happier with the while loop.
You can do something like
l = list(range(min_count,max_count))
for i in l:
and modify l during the loop. But getting that right is hard.
You could also create an iteration object with a skip method, and call that during the loop.
class SkipRange:
def __init__(self, minc, maxc, step):
self.count = minc
self.maxc = maxc
self.step = step
def __iter__(self): return self
def __next__(self):
if self.count >= self.maxc: raise StopIteration  # exclusive upper bound, matching range()
c = self.count
self.count += self.step
return c
def skip(self, num = 1): self.count += num
Untested and entirely off the top of my head; debugging is left as an exercise to anyone annoyed at while loops enough to go this route. I think it's more illustrative of what's going on under the covers.
s = SkipRange(min_count,max_count)
for i in s:
# do stuff
s.skip(3) #skip next 3 items
But the while loop is more readable and in most cases easier.
| stackoverflow | {
"language": "en",
"length": 881,
"provenance": "stackexchange_0000F.jsonl.gz:895370",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637457"
} |
566b648e81f5a39b8481e2620de4cb6fc2286c0a | Stackoverflow Stackexchange
Q: Chart.js: Widen hover distance for points Is there a Chart.js option for increasing the distance from a point at which its tooltip becomes active?
By default, a point is "active" and tooltips are displayed when the mouse is hovering directly over a point. I'd like to give the user a little more area around points to make them "active."
Thanks!
A: You can achieve this, by setting pointHitRadius property to a value of hit distance, for your dataset, like so ...
...
datasets: [{
pointHitRadius: 20,
...
}]
...
Working example
var ctx = document.getElementById("myChart").getContext('2d');
var myChart = new Chart(ctx, {
type: 'line',
data: {
labels: ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun'],
datasets: [{
label: 'Rating',
data: [1, 2, 3, 4, 5, 6],
backgroundColor: 'rgba(209, 230, 245, 0.5)',
borderColor: 'rgba(56, 163, 236, 1)',
borderWidth: 1,
pointHitRadius: 20 //set as you wish
}]
},
options: {
responsive: false,
scales: {
yAxes: [{
ticks: {
beginAtZero: true
}
}]
}
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.min.js"></script>
<canvas id="myChart" height="200"></canvas>
| Q: Chart.js: Widen hover distance for points Is there a Chart.js option for increasing the distance from a point at which its tooltip becomes active?
By default, a point is "active" and tooltips are displayed when the mouse is hovering directly over a point. I'd like to give the user a little more area around points to make them "active."
Thanks!
A: You can achieve this, by setting pointHitRadius property to a value of hit distance, for your dataset, like so ...
...
datasets: [{
pointHitRadius: 20,
...
}]
...
Working example
var ctx = document.getElementById("myChart").getContext('2d');
var myChart = new Chart(ctx, {
type: 'line',
data: {
labels: ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun'],
datasets: [{
label: 'Rating',
data: [1, 2, 3, 4, 5, 6],
backgroundColor: 'rgba(209, 230, 245, 0.5)',
borderColor: 'rgba(56, 163, 236, 1)',
borderWidth: 1,
pointHitRadius: 20 //set as you wish
}]
},
options: {
responsive: false,
scales: {
yAxes: [{
ticks: {
beginAtZero: true
}
}]
}
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.min.js"></script>
<canvas id="myChart" height="200"></canvas>
| stackoverflow | {
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:895379",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637477"
} |
a415401594b825afe57ea6bc81989ca482f44021 | Stackoverflow Stackexchange
Q: Automatically trust enterprise app developer I understand that Enterprise apps need to be "trusted" before users can use them. "Untrusted App Developer" message when installing enterprise iOS Application; this answer details what I mean.
However, it's quite a horrible experience having users to go to Settings > General > Device Management blah blah to trust an Enterprise profile. Is there a programmatic way (URL scheme maybe?) to automatically launch to the said menu so all the user has to do is tap "trust"?
Thanks!
| Q: Automatically trust enterprise app developer I understand that Enterprise apps need to be "trusted" before users can use them. "Untrusted App Developer" message when installing enterprise iOS Application; this answer details what I mean.
However, it's quite a horrible experience having users to go to Settings > General > Device Management blah blah to trust an Enterprise profile. Is there a programmatic way (URL scheme maybe?) to automatically launch to the said menu so all the user has to do is tap "trust"?
Thanks!
| stackoverflow | {
"language": "en",
"length": 85,
"provenance": "stackexchange_0000F.jsonl.gz:895382",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637482"
} |
289d3db4530e0e37a8b0cbe8f3dd8cc6488976fb | Stackoverflow Stackexchange
Q: RTL - Does android:autoMirrored work with png images? I read that since 4.4, android supports autoMirroring:
On previous versions of Android, if your app includes images that should reverse their horizontal orientation for right-to-left layouts, you must include the mirrored image in a drawables-ldrtl/ resource directory. Now, the system can automatically mirror images for you by enabling the autoMirrored attribute on a drawable resource or by calling setAutoMirrored(). When enabled, the Drawable is automatically mirrored when the layout direction is right-to-left.
Link:
https://developer.android.com/about/versions/android-4.4.html
Does this only work for vector graphics, or can it also be used with bitmaps like png files?
Attribute android:autoMirrored:
https://developer.android.com/reference/android/graphics/drawable/VectorDrawable.html
My Question is, if I embedded left-arrow.png as a resource in my app, could I somehow define this autoMirrior property for my image so that when the users device is set to an rtl language android will invert it dynamically. Is this possible? If so, how do I configure the property of a png image?
A: You can wrap your drawable in a bitmap resource
<bitmap xmlns:android="http://schemas.android.com/apk/res/android"
android:src="@drawable/left-arrow"
android:autoMirrored="true">
</bitmap>
| Q: RTL - Does android:autoMirrored work with png images? I read that since 4.4, android supports autoMirroring:
On previous versions of Android, if your app includes images that should reverse their horizontal orientation for right-to-left layouts, you must include the mirrored image in a drawables-ldrtl/ resource directory. Now, the system can automatically mirror images for you by enabling the autoMirrored attribute on a drawable resource or by calling setAutoMirrored(). When enabled, the Drawable is automatically mirrored when the layout direction is right-to-left.
Link:
https://developer.android.com/about/versions/android-4.4.html
Does this only work for vector graphics, or can it also be used with bitmaps like png files?
Attribute android:autoMirrored:
https://developer.android.com/reference/android/graphics/drawable/VectorDrawable.html
My question is, if I embedded left-arrow.png as a resource in my app, could I somehow define this autoMirrored property for my image so that when the user's device is set to an RTL language Android will invert it dynamically? Is this possible? If so, how do I configure the property of a PNG image?
A: You can wrap your drawable in a bitmap resource
<bitmap xmlns:android="http://schemas.android.com/apk/res/android"
android:src="@drawable/left-arrow"
android:autoMirrored="true">
</bitmap>
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:895397",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637539"
} |
bad0b0523d8008275168c564481bab83315b334f | Stackoverflow Stackexchange
Q: Splunkbase modular inputs customized UI I've seen a number of resources that describe how to create a customized UI for modular inputs, but this customization is limited to configuration of the Manager XML file (http://docs.splunk.com/Documentation/Splunk/6.6.1/AdvancedDev/ModInputsCustomizeUI). In this configuration the customer can specify only some static configs.
We'd like to build a highly customized UI for "modular inputs"; hence we need to call external REST services to help customers specify correct values for "modular inputs", e.g. presenting a drop-down with options and other widgets.
Are there any ways to create such a customized UI for modular inputs? Any good references to follow?
| Q: Splunkbase modular inputs customized UI I've seen a number of resources that describe how to create a customized UI for modular inputs, but this customization is limited to configuration of the Manager XML file (http://docs.splunk.com/Documentation/Splunk/6.6.1/AdvancedDev/ModInputsCustomizeUI). In this configuration the customer can specify only some static configs.
We'd like to build highly customised UI for "modular inputs" hence we should call external REST services to help customers to specify correct values for "modular inputs". Like present drop down with options and other stuff.
Is there're any ways to create such customized UI for modular inputs ? Any good references to follow ?
| stackoverflow | {
"language": "en",
"length": 101,
"provenance": "stackexchange_0000F.jsonl.gz:895420",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637606"
} |
d1b4678145125893957dee9b67672b92e035c5a2 | Stackoverflow Stackexchange
Q: EntityFramework.Core 2.0 preview - OnConfiguring method not called I'm testing EF core 2.0 preview. But when I try to get a new instance of DbContext as I do usually with EF core 1.1.2, OnConfiguring method is not called:
public class DatabaseContext : DbContext
{
DatabasesType database_type;
string URI;
public DbSet<User> user;
public DatabaseContext(DatabasesType database_type, string URI) : base()
{
this.database_type = database_type;
this.URI = URI;
}
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
switch (database_type)
{
case DatabasesType.MySQL:
optionsBuilder.UseMySql(URI); break;
case DatabasesType.SQLite:
optionsBuilder.UseSqlite(URI); break;
}
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// ...
}
}
What is happening? Has anyone encountered a similar error?
Thank you!
| Q: EntityFramework.Core 2.0 preview - OnConfiguring method not called I'm testing EF core 2.0 preview. But when I try to get a new instance of DbContext as I do usually with EF core 1.1.2, OnConfiguring method is not called:
public class DatabaseContext : DbContext
{
DatabasesType database_type;
string URI;
public DbSet<User> user;
public DatabaseContext(DatabasesType database_type, string URI) : base()
{
this.database_type = database_type;
this.URI = URI;
}
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
switch (database_type)
{
case DatabasesType.MySQL:
optionsBuilder.UseMySql(URI); break;
case DatabasesType.SQLite:
optionsBuilder.UseSqlite(URI); break;
}
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// ...
}
}
What is happening? Has anyone encountered a similar error?
Thank you!
| stackoverflow | {
"language": "en",
"length": 105,
"provenance": "stackexchange_0000F.jsonl.gz:895423",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637612"
} |
b39fdb1b19a0cd712a3d3cf677049c54d4a56e93 | Stackoverflow Stackexchange
Q: How to install mitmproxy certificates in fedora 25 system? I have fedora 25.
I read this article - http://docs.mitmproxy.org/en/stable/certinstall.html#certinstall - but there is no information for Fedora.
How to install mitmproxy certificates for fedora 25?
A: According to Fedora's Documentation:
To add a certificate in the simple PEM or DER file formats to the list of CAs trusted on the system, copy the certificate file to the /etc/pki/ca-trust/source/anchors/ directory, for example:
# cp certificate.pem /etc/pki/ca-trust/source/anchors/
# update-ca-trust
Not sure if it will work for Fedora 25, but that has reached EOL anyways.
| Q: How to install mitmproxy certificates in fedora 25 system? I have fedora 25.
I read this article - http://docs.mitmproxy.org/en/stable/certinstall.html#certinstall - but there is no information for Fedora.
How to install mitmproxy certificates for fedora 25?
A: According to Fedora's Documentation:
To add a certificate in the simple PEM or DER file formats to the list of CAs trusted on the system, copy the certificate file to the /etc/pki/ca-trust/source/anchors/ directory, for example:
# cp certificate.pem /etc/pki/ca-trust/source/anchors/
# update-ca-trust
Not sure if it will work for Fedora 25, but that has reached EOL anyways.
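If it does, the commands for the mitmproxy CA would look roughly like this (mitmproxy writes its CA certificate to ~/.mitmproxy by default; verify the path on your machine):
# cp ~/.mitmproxy/mitmproxy-ca-cert.pem /etc/pki/ca-trust/source/anchors/
# update-ca-trust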
| stackoverflow | {
"language": "en",
"length": 92,
"provenance": "stackexchange_0000F.jsonl.gz:895442",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637664"
} |
01f07d4cf6adeab578e6e57bb9ee640549dfdd13 | Stackoverflow Stackexchange
Q: How do I index a numpy array of zeroes with a boolean datatype to True? So I'm recreating a Matlab project they made last year, part of which involves creating a mask that pulls out the RGB bands.
They did this by an array of logical zeroes.
GMask_Whole = false(ROWS,COLS);
which I reconstructed as a numpy array.
self.green_mask_whole=np.zeros((self.rows, self.columns), dtype=bool)
The next part I can't for the life of me figure out how to do with numpy:
GMask_Whole(1:2:end,2:2:end) = true;
I've yet to find a numpy equivalent action. any Ideas?
By the way, if you're curious about what this is doing:
https://en.wikipedia.org/wiki/Bayer_filter
edit:
things I've tried:
wut(1:3:end, 1:2:end) = true
wut([1:3:end], [1:2:end]) = true
wut([1:3], [1:2]) = true
wut([1:3], [1:2]) = True
wut(slice(1:3), slice(1:2)) = True
A: You can translate Matlab's
GMask_Whole(1:2:end,2:2:end) = true;
to python by
green_mask_whole[::2,1::2] = True
(assuming green_mask_whole is a numpy array)
| Q: How do I index a numpy array of zeroes with a boolean datatype to True? So I'm recreating a Matlab project they made last year, part of which involves creating a mask that pulls out the RGB bands.
They did this by an array of logical zeroes.
GMask_Whole = false(ROWS,COLS);
which I reconstructed as a numpy array.
self.green_mask_whole=np.zeros((self.rows, self.columns), dtype=bool)
The next part I can't for the life of me figure out how to do with numpy:
GMask_Whole(1:2:end,2:2:end) = true;
I've yet to find a numpy equivalent action. any Ideas?
By the way, if you're curious about what this is doing:
https://en.wikipedia.org/wiki/Bayer_filter
edit:
things I've tried:
wut(1:3:end, 1:2:end) = true
wut([1:3:end], [1:2:end]) = true
wut([1:3], [1:2]) = true
wut([1:3], [1:2]) = True
wut(slice(1:3), slice(1:2)) = True
A: You can translate Matlab's
GMask_Whole(1:2:end,2:2:end) = true;
to python by
green_mask_whole[::2,1::2] = True
(assuming green_mask_whole is a numpy array)
A: numpy can do slicing more or less as in Matlab, but the syntax is a little bit different. In numpy, the order is [begin:end:step] and it is possible to leave any of begin, end and step empty, which gives them their default values: first element, last element and step size 1, respectively.
Further, numpy has a nice system of 'broadcasting' which allows a single value (or row/column) to be repeated to match the shape of another array. This makes it possible to assign a single value to a whole array.
Thus, in the current case, it is possible to do
self.green_mask_whole=np.zeros((self.rows, self.columns), dtype=bool)
self.green_mask_whole[::2,1::2] = True
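A quick sketch of the result on a small array (printed as 0/1 for readability):
import numpy as np

green_mask_whole = np.zeros((4, 4), dtype=bool)
green_mask_whole[::2, 1::2] = True  # rows 0, 2; columns 1, 3
print(green_mask_whole.astype(int))
# [[0 1 0 1]
#  [0 0 0 0]
#  [0 1 0 1]
#  [0 0 0 0]]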
| stackoverflow | {
"language": "en",
"length": 255,
"provenance": "stackexchange_0000F.jsonl.gz:895449",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637679"
} |
173e57da6a4001cab74db64043d50650e4cab700 | Stackoverflow Stackexchange
Q: How to know Laravel version and where is it defined? How to know Laravel version and where is it defined?
Is the Laravel version defined inside my application directory or somewhere in a global server-side directory?
UPDATE
Sorry, the main question is where the version is defined. Where does
php artisan --version
take its answer from?
UPDATE 2
The goal is to investigate who (of us) changed the Laravel version on our site. Could it have been changed through the GitHub repository alone, or was server write access also required?
A: Step 1:
go to: /vendor/laravel/framework/src/Illuminate/Foundation:
Step 2:
Open the Application.php file
Step 3:
Search for "version". The below indicates the version.
| Q: How to know Laravel version and where is it defined? How to know Laravel version and where is it defined?
Is the Laravel version defined inside my application directory or somewhere in a global server-side directory?
UPDATE
Sorry, the main question is where the version is defined. Where does
php artisan --version
take its answer from?
UPDATE 2
The goal is to investigate who (of us) changed the Laravel version on our site. Could it have been changed through the GitHub repository alone, or was server write access also required?
A: Step 1:
go to: /vendor/laravel/framework/src/Illuminate/Foundation:
Step 2:
Open the Application.php file
Step 3:
Search for "version". The below indicates the version.
A: Run this command in your project folder location in cmd
php artisan --version
A: 1) php artisan -V
2) php artisan --version
It is also defined in the composer.json file:
"require": {
...........
"laravel/framework": "^6.2",
...........
},
A: Yet another way is to read the composer.json file, though the version there can end with the wildcard character *
A: If you want to know the specific version then you need to check the composer.lock file and search for
"name": "laravel/framework",
you will find your version in next line
"version": "v5.7.9",
A: If you want to know the used version in your code, then you can get it using the app() helper function
app()->version();
It is defined in this file ../src/Illuminate/Foundation/Application.php
Hope it will help :)
A: In your Laravel deployment it would be
/vendor/laravel/framework/src/Illuminate/Foundation/Application.php
To see who changed your Laravel version, look at what's defined in composer.json. If you have "laravel/framework": "5.4.*", then it will update to the latest matching release when composer update is run. composer.lock is the file that results from running composer update, so really check who the last person to modify the composer.json file was (hopefully you have it in version control). You can read more about it here https://getcomposer.org/doc/01-basic-usage.md
A: There are multiple ways to find out the Laravel version, such as:
Using Command
php artisan --version
or
php artisan -v
From Composer.json
From Vendor Directory
/vendor/laravel/framework/src/Illuminate/Foundation/Application.php
A: You can also check with composer:
composer show laravel/framework
A: If you're like me and want to show the Laravel version and app version in the footer, you can create a Blade directive in the AppServiceProvider. Blade directives are placed in the boot method of the AppServiceProvider, and example code may look something like
Blade::directive('laravelVersion', function () {
return "<?php echo app()->version(); ?>";
});
then in the Blade template you call it like @laravelVersion and it will show the current Laravel version.
If you want, you can read more about Blade directives here
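For instance, a footer partial could then read (markup is illustrative):
<footer>
    Laravel @laravelVersion
</footer>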
A: You can find this on Composer.json file -> root directory
A: run php artisan --version from your console.
The version string is defined here:
https://github.com/laravel/framework/blob/master/src/Illuminate/Foundation/Application.php
/**
* The Laravel framework version.
*
* @var string
*/
const VERSION = '5.5-dev';
A: CASE - 1
Run this command in your project..
php artisan --version
You will get the version of Laravel installed on your system.
CASE - 2
You can also check the Laravel version in the composer.json file in the root directory.
A: You can view the result of dd(\Illuminate\Foundation\Application::VERSION)
| stackoverflow | {
"language": "en",
"length": 515,
"provenance": "stackexchange_0000F.jsonl.gz:895493",
"question_score": "195",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637811"
} |
827dfe189d4860d9cc924f8f80509c0c988ea8fa | Stackoverflow Stackexchange
Q: AEM 6.2 (Drag Component Here) Parsys height 0px I am using AEM 6.2 and trying to create a parsys component in crx, using the code below
However, the height of this parsys, in edit mode, comes as 0px.
Attached are the screenshots.
When I manually change the height to some values eg. 40px, it looks fine.
Note: I am not using any client library for the above page. (no css and js)
Further, all sample sites like Geometrixx etc. have the parsys showing correctly.
Could anyone guide me with what I am doing wrong?
A: I think that the problem is outside the component or any of the code shown here.
I think what's happening is that the css style for the div that gives the droptarget placeholder its dimensions is not loading.
That's loaded as part of the AEM authoring client libraries which you should be inheriting from the foundation page component.
Examine your page component's sling:resourceSuperType property. It should point to either foundation/components/page or wcm/foundation/components/page, or inherit from a component that does.
If that is set then you have may have blocked one of the scripts within it, quite possibly head.html.
| Q: AEM 6.2 (Drag Component Here) Parsys height 0px I am using AEM 6.2 and trying to create a parsys component in crx, using the code below
However, the height of this parsys, in edit mode, comes as 0px.
Attached are the screenshots.
When I manually change the height to some values eg. 40px, it looks fine.
Note: I am not using any client library for the above page. (no css and js)
Further, all sample sites like Geometrixx etc. have the parsys showing correctly.
Could anyone guide me with what I am doing wrong?
A: I think that the problem is outside the component or any of the code shown here.
I think what's happening is that the css style for the div that gives the droptarget placeholder its dimensions is not loading.
That's loaded as part of the AEM authoring client libraries which you should be inheriting from the foundation page component.
Examine your page component's sling:resourceSuperType property. It should point to either foundation/components/page or wcm/foundation/components/page, or inherit from a component that does.
If that is set then you have may have blocked one of the scripts within it, quite possibly head.html.
A: Include following code in the head section of the page component's rendering script.
<!--/* Include Adobe Dynamic Tag Management libraries for the header
<sly data-sly-include="/libs/cq/cloudserviceconfigs/components/servicelibs/servicelibs.jsp" data-sly-unwrap/>
*/-->
<!--/* Initializes the Experience Manager authoring UI */-->
<sly data-sly-include="/libs/wcm/core/components/init/init.jsp" data-sly-unwrap/>
A: For resolving your issue, you need to include init.jsp in the first before writing down the parsys code. I mean write like this.
<head>
<sly data-sly-include='/libs/wcm/core/components/init/init.jsp' />
</head>
<body>
<sly data-sly-resource="${'par' @resourceType='foundation/components/parsys'}" />
</body>
A: I think @l-klement pointed it out correctly that the problem is outside the component. When I rename the landingpage.html file to body.html it starts working fine. I think this may be because of files like head.html etc. present at wcm/foundation/components/page, which are required to provide proper styling and to load certain required client libraries that give the parsys its proper styling.
If the above is true, my next question would be: how can I have my own head.html, body.html, header.html, footer.html etc. files without compromising the parsys styling?
| stackoverflow | {
"language": "en",
"length": 355,
"provenance": "stackexchange_0000F.jsonl.gz:895497",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637825"
} |
6018aa7817ce130baa8d33c205ab881d9ea26c19 | Stackoverflow Stackexchange
Q: What is the equivalent of a Java HashMap in Swift I have an example written in Java that I would like to convert into Swift. Below is a section of the code. I would really appreciate if you can help.
Map<String, Integer> someProtocol = new HashMap<>();
someProtocol.put("one", Integer.valueOf(1));
someProtocol.put("two", Integer.valueOf(2));
for (Map.Entry<String, Integer> e : someProtocol.entrySet()) {
int index = e.getValue();
...
}
NOTE: entrySet() is a method of the java.util.Map interface whereas getValue() is a method of the java.util.Map.Entry interface.
A: I believe you can use a dictionary. Here are two ways to do the dictionary part.
var someProtocol = [String : Int]()
someProtocol["one"] = 1
someProtocol["two"] = 2
or try this which uses type inference
var someProtocol = [
"one" : 1,
"two" : 2
]
as for the for loop
var index: Int
for (_, value) in someProtocol {
index = value
}
| Q: What is the equivalent of a Java HashMap in Swift I have an example written in Java that I would like to convert into Swift. Below is a section of the code. I would really appreciate if you can help.
Map<String, Integer> someProtocol = new HashMap<>();
someProtocol.put("one", Integer.valueOf(1));
someProtocol.put("two", Integer.valueOf(2));
for (Map.Entry<String, Integer> e : someProtocol.entrySet()) {
int index = e.getValue();
...
}
NOTE: entrySet() is a method of the java.util.Map interface whereas getValue() is a method of the java.util.Map.Entry interface.
A: I believe you can use a dictionary. Here are two ways to do the dictionary part.
var someProtocol = [String : Int]()
someProtocol["one"] = 1
someProtocol["two"] = 2
or try this which uses type inference
var someProtocol = [
"one" : 1,
"two" : 2
]
as for the for loop
var index: Int
for (_, value) in someProtocol {
index = value
}
A: let stringIntMapping = [
"one": 1,
"two": 2,
]
for (word, integer) in stringIntMapping {
//...
print(word, integer)
}
A: I guess it will be something like that:
let someProtocol = [
"one" : 1,
"two" : 2
]
for (key, value) in someProtocol {
var index = value
}
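For completeness, a sketch that mirrors the Java original a little more literally (updateValue is the closest analogue of Map.put):
var someProtocol = [String: Int]()
someProtocol.updateValue(1, forKey: "one")
someProtocol["two"] = 2 // the idiomatic subscript form

for (key, value) in someProtocol {
    let index = value
    print(key, index)
}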
| stackoverflow | {
"language": "en",
"length": 198,
"provenance": "stackexchange_0000F.jsonl.gz:895503",
"question_score": "37",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637836"
} |
d1a46248a0302e50babe89fc7c88bf04b2e61eb1 | Stackoverflow Stackexchange
Q: Asp.net core + EF Code first, migration files in different project In my solution I want to use Asp.net core + EF Code first
I have 2 projects:
*
*CC.API
*CC.Infrastructure
In CC.API I have startup class and there is:
services.AddDbContext<DataContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"), b => b.MigrationsAssembly("CC.Infrastructure")));
(Connection string is in appsettings.json)
As you can see I'm trying to keep migration files in different project - CC.Infrastructure.
Unfortunately whilst Add-Migration Init I receives an error:
Your target project 'PK.API' doesn't match your migrations assembly 'PK.Infrastructure'. Either change your target project or change your migrations assembly
If I change it in startup to b => b.MigrationsAssembly("CC.API") then everything works fine, but the migration files will be in CC.API :/
A: As said by IvanZazz
*
*Set the UI/Web project as the startup project
*In the Package Manager Console, set the default-project drop-down to CC.Infrastructure (or whichever project contains the DbContext class)
*Now run this command in the Package Manager Console:
add-migration InitialIdentityModel
| Q: Asp.net core + EF Code first, migration files in different project In my solution I want to use Asp.net core + EF Code first
I have 2 projects:
*
*CC.API
*CC.Infrastructure
In CC.API I have startup class and there is:
services.AddDbContext<DataContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"), b => b.MigrationsAssembly("CC.Infrastructure")));
(Connection string is in appsettings.json)
As you can see I'm trying to keep migration files in different project - CC.Infrastructure.
Unfortunately whilst Add-Migration Init I receives an error:
Your target project 'PK.API' doesn't match your migrations assembly 'PK.Infrastructure'. Either change your target project or change your migrations assembly
If I change it in startup to b => b.MigrationsAssembly("CC.API") then everything works fine, but the migration files will be in CC.API :/
A: As said by IvanZazz
*
*Set the UI/Web project as the startup project
*In the Package Manager Console, set the default-project drop-down to CC.Infrastructure (or whichever project contains the DbContext class)
*Now run this command in the Package Manager Console:
add-migration InitialIdentityModel
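Alternatively, you can select both projects explicitly in one Package Manager Console command instead of relying on the drop-downs; a sketch using the project names from this question:
Add-Migration InitialIdentityModel -Project CC.Infrastructure -StartupProject CC.API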
A: This is/was a longstanding issue with EF Core. The solution used to be making your class library an executable (temporarily) and then running all EF operations against it.
With current tooling, you can just run Add-Migration while in the library folder; the only caveat is you need to set the startup-project flag to the actual executable's project.
So the command ends up being something like:
C:\CC.Infrastructure>dotnet ef migrations add NewMigration --startup-project ../CC.API/CC.API.csproj
A: You have to add Microsoft.EntityFrameworkCore.Tools.DotNet to your CC.Infrastructure project. Right click the project and select Edit *.csproj. Then, add the following:
<ItemGroup>
<DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0-preview2-final" />
</ItemGroup>
You can't add this from the Nuget package manager. It has to be added directly to the project.
Once you do that, you can run the command with the startup project set as CC.API. Go to the folder for your class library; the easiest way is to right-click the project and choose Open Folder in File Explorer. Then, type cmd in the address bar of the File Explorer to open a command prompt in that folder.
Now use the following command to create the migration:
dotnet ef migrations add InitialCreate -c DataContext --startup-project ../CC.API/CC.API.csproj
| stackoverflow | {
"language": "en",
"length": 364,
"provenance": "stackexchange_0000F.jsonl.gz:895518",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44637881"
} |
1383a2a0875262e8613426c6bbe5aef1c5925ef4 | Stackoverflow Stackexchange
Q: npm login - Registry returned 401 for PUT I am trying to login to npm by doing npm login and entering username, password, and email but I am receiving the following response:
Registry returned 401 for PUT
npm is saying that I have the incorrect username or password, but I've used the same credentials to login to npmjs.org.
version of node is:
node -v v6.2.2
A: Solutions with .npmrc and/or npm config were not working for me.
Eventually found the error was for an older npm version with 2FA enabled (see this thread).
So the following should work
npm update npm -g
npm login
-- update
On a different machine this didn't work until I updated NodeJS and did npm i npm -g.
| Q: npm login - Registry returned 401 for PUT I am trying to login to npm by doing npm login and entering username, password, and email but I am receiving the following response:
Registry returned 401 for PUT
npm is saying that I have the incorrect username or password, but I've used the same credentials to login to npmjs.org.
version of node is:
node -v v6.2.2
A: Solutions with .npmrc and/or npm config were not working for me.
Eventually found the error was for an older npm version with 2FA enabled (see this thread).
So the following should work
npm update npm -g
npm login
-- update
On a different machine this didn't work until I updated NodeJS and did npm i npm -g.
A: I guess that you have overridden the registry.
For checking, please, run npm config get registry. You should see
▶ npm config get registry
https://registry.npmjs.org/
If you don't see this, set it with npm config set registry https://registry.npmjs.org/
A: I had an npmrc located at ~/.npmrc and removed it with rm ~/.npmrc and it seemed to fix the issue.
The file contained an authToken for the registry, so I suppose it was conflicting with the login?
I'm not sure...
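For reference, the conflicting line in ~/.npmrc typically looks like this (token value is illustrative):
//registry.npmjs.org/:_authToken=00000000-0000-0000-0000-000000000000
Deleting just that line before running npm login again should also work, without removing the rest of the file.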
| stackoverflow | {
"language": "en",
"length": 203,
"provenance": "stackexchange_0000F.jsonl.gz:895564",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638016"
} |
f17f7b80dfdef701fb38a4756aa83acec28fba8b | Stackoverflow Stackexchange
Q: Unable to cat a file in a subdirectory of /data/user/0 of Android device In Android adb shell, I'm unable to cat a file in a subdirectory of /data/user/0. The error I get is Permission denied.
The ls command on /data/user/0 also returns Permission denied.
Is there any way around this, so I can see the content of the file?
A: Further digging reveals that there's a way to achieve this without having root permission if the file belongs to an app that's installed as a debug build. Details here.
In a nutshell:
First, run-as com.foo.app.
Current directory will switch to /data/data/com.foo.app.
Now you can perform permission restricted commands on subdirectories and files, such as cat and ls.
| Q: Unable to cat a file in a subdirectory of /data/user/0 of Android device In Android adb shell, I'm unable to cat a file in a subdirectory of /data/user/0. The error I get is Permission denied.
The ls command on /data/user/0 also returns Permission denied.
Is there any way around this, so I can see the content of the file?
A: Further digging reveals that there's a way to achieve this without having root permission if the file belongs to an app that's installed as a debug build. Details here.
In a nutshell:
First, run-as com.foo.app.
Current directory will switch to /data/data/com.foo.app.
Now you can perform permission restricted commands on subdirectories and files, such as cat and ls.
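A rough transcript of such a session (package name and file path are illustrative):
$ adb shell
shell@device:/ $ run-as com.foo.app
shell@device:/data/data/com.foo.app $ ls
cache databases shared_prefs
shell@device:/data/data/com.foo.app $ cat shared_prefs/com.foo.app_preferences.xml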
A: If you want to browse everything like this on your device, you need root access on the phone to browse the data folders, and you need to run adb root instead (root mode).
| stackoverflow | {
"language": "en",
"length": 155,
"provenance": "stackexchange_0000F.jsonl.gz:895585",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638083"
} |
7751510520d990513b469c1bd2eed50095b13d55 | Stackoverflow Stackexchange
Q: Yahoo Finance charts as image file A while ago Yahoo Finance changed its API and since then the download of .csv data hasn't been working anymore via the old method. This already has been discussed in several other questions.
However, the old version also allowed downloading charts of a certain symbol via https://chart.finance.yahoo.com/z?s=<<TICKER>> in the form of a .png file, which now also doesn't work anymore. The new chart viewer only seems to display data by painting on a canvas from a JS script, and there doesn't seem to be a "download as image" feature either.
So is there any way on the new website to get charts for a ticker symbol in the form of a .png or .svg file through a GET/POST request, if possible with the option to define parameters as in the old version?
A: I was looking for the same; not sure what happened to the old way to get stock images. If I find something out I'll post it here; could you please let me know as well if you do?
| Q: Yahoo Finance charts as image file A while ago Yahoo Finance changed its API and since then the download of .csv data hasn't been working anymore via the old method. This already has been discussed in several other questions.
However, the old version also allowed downloading charts of a certain symbol via https://chart.finance.yahoo.com/z?s=<<TICKER>> in the form of a .png file, which now also doesn't work anymore. The new chart viewer only seems to display data by painting on a canvas from a JS script, and there doesn't seem to be a "download as image" feature either.
So is there any way on the new website to get charts for a ticker symbol in the form of a .png or .svg file through a GET/POST request, if possible with the option to define parameters as in the old version?
A: I was looking for the same; not sure what happened to the old way to get stock images. If I find something out I'll post it here; could you please let me know as well if you do?
A: This can be achieved by using Selenium. There is a blog post here: How To Capture Element Screenshot Using Selenium WebDriver. Selenium has bindings to several languages including Java and Python. It is very powerful for testing webpages, and it can also be used to scrape information from sites with complex, not very scraper-friendly implementations.
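A minimal Python sketch of that approach (the chart URL and the canvas selector are illustrative and may change as Yahoo updates its page):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://finance.yahoo.com/chart/AAPL")
# grab the chart canvas and save it as an image
chart = driver.find_element(By.CSS_SELECTOR, "canvas")
chart.screenshot("aapl_chart.png")
driver.quit()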
| stackoverflow | {
"language": "en",
"length": 233,
"provenance": "stackexchange_0000F.jsonl.gz:895588",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638093"
} |
fdc35e613cdb4befe589fb974eb810e8c768a8dc | Stackoverflow Stackexchange
Q: How can you find the source for an "onevent" handler in HTML, without the event listeners panel? I'm looking at an element that has several event handlers added to it the old-fashioned way--
<input onblur="doSomething()" onkeyup="doSomethingElse()">
When I check the event listeners panel in the inspector, it is entirely empty.
Is there a way to find the code for these in the page's source besides manually ctrl+f'ing for the function names?
A: You could use the toString method in your console:
doSomething.toString()
Or you could find it via the debugger:
function findMyCode(element){
debugger
element.onblur.call(element);
}
findMyCode(document.getElementById('idOfYourInput'));
Then step into the function call.
| Q: How can you find the source for an "onevent" handler in HTML, without the event listeners panel? I'm looking at an element that has several event handlers added to it the old-fashioned way--
<input onblur="doSomething()" onkeyup="doSomethingElse()">
When I check the event listeners panel in the inspector, it is entirely empty.
Is there a way to find the code for these in the page's source besides manually ctrl+f'ing for the function names?
A: You could use the toString method in your console:
doSomething.toString()
Or you could find it via the debugger:
function findMyCode(element){
debugger
element.onblur.call(element);
}
findMyCode(document.getElementById('idOfYourInput'));
Then step into the function call.
A: This is fixed in Chrome 70. Here's a screenshot of Chrome DevTools showing the registered event handlers for the selected input element.
And to find the source code for those functions, just copy-paste the function name in the console and press Enter - you'll get the source code for that function.
Or, you can do a quick search by pressing Ctrl+Shift+F, which will open up the search panel. Now, check the regular expression box and type "function\s*doSomething\s*\(" and press Enter. This will take you directly to the function definition.
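Another quick option: select the element in the Elements panel, then read the handler straight off the element in the Console ($0 refers to the currently selected element; the output shape is illustrative):
> $0.onblur
ƒ onblur(event) { doSomething() }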
| stackoverflow | {
"language": "en",
"length": 193,
"provenance": "stackexchange_0000F.jsonl.gz:895612",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638156"
} |
a1d758cbebdc5db38a96068853f0b6399011ee44 | Stackoverflow Stackexchange
Q: Fluent Validation with Swagger in Asp.net Core I am currently using Fluent Validation instead of Data Annotations for my Web API, and Swagger for API documentation. Fluent Validation rules are not reflected in the Swagger model, as I am unable to configure Fluent Validation rules with a Swagger schema filter.
This blog has a good explanation for using it with ASP.NET MVC, but I am unable to configure it for use in ASP.NET Core.
So far I have tried the following code, but I am unable to get the validator type.
services.AddSwaggerGen(options => options.SchemaFilter<AddFluentValidationRules>());
public class AddFluentValidationRules : ISchemaFilter
{
public void Apply(Schema model, SchemaFilterContext context)
{
model.Required = new List<string>();
var validator = GetValidator(type); // How?
var validatorDescriptor = validator.CreateDescriptor();
foreach (var key in model.Properties.Keys)
{
foreach (var propertyValidator in validatorDescriptor.GetValidatorsForMember(key))
{
// Add to model properties as in blog
}
}
}
}
A: *
*Install Nuget package: MicroElements.Swashbuckle.FluentValidation
*Add to ConfigureServices:
services.AddFluentValidationRulesToSwagger();
| Q: Fluent Validation with Swagger in Asp.net Core I am currently using Fluent Validation instead of Data Annotations for my Web API, and Swagger for API documentation. Fluent Validation rules are not reflected in the Swagger model, as I am unable to configure Fluent Validation rules with a Swagger schema filter.
This blog has a good explanation for using it with ASP.NET MVC, but I am unable to configure it for use in ASP.NET Core.
So far I have tried the following code, but I am unable to get the validator type.
services.AddSwaggerGen(options => options.SchemaFilter<AddFluentValidationRules>());
public class AddFluentValidationRules : ISchemaFilter
{
public void Apply(Schema model, SchemaFilterContext context)
{
model.Required = new List<string>();
var validator = GetValidator(type); // How?
var validatorDescriptor = validator.CreateDescriptor();
foreach (var key in model.Properties.Keys)
{
foreach (var propertyValidator in validatorDescriptor.GetValidatorsForMember(key))
{
// Add to model properties as in blog
}
}
}
}
A: *
*Install Nuget package: MicroElements.Swashbuckle.FluentValidation
*Add to ConfigureServices:
services.AddFluentValidationRulesToSwagger();
A: I've created a GitHub project and NuGet package based on Mujahid Daud Khan's answer. I redesigned it to support extensibility and added support for other validators.
github: https://github.com/micro-elements/MicroElements.Swashbuckle.FluentValidation
nuget: https://www.nuget.org/packages/MicroElements.Swashbuckle.FluentValidation
Note: For WebApi see: https://github.com/micro-elements/MicroElements.Swashbuckle.FluentValidation.WebApi
Supported validators
*
*INotNullValidator (NotNull)
*INotEmptyValidator (NotEmpty)
*ILengthValidator (Length, MinimumLength, MaximumLength, ExactLength)
*IRegularExpressionValidator (Email, Matches)
*IComparisonValidator (GreaterThan, GreaterThanOrEqual, LessThan, LessThanOrEqual)
*IBetweenValidator (InclusiveBetween, ExclusiveBetween)
Usage
1. Reference packages in your web project:
<PackageReference Include="FluentValidation.AspNetCore" Version="7.5.2" />
<PackageReference Include="MicroElements.Swashbuckle.FluentValidation" Version="0.4.0" />
<PackageReference Include="Swashbuckle.AspNetCore" Version="2.3.0" />
2. Change Startup.cs
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services
.AddMvc()
// Adds fluent validators to Asp.net
.AddFluentValidation(fv => fv.RegisterValidatorsFromAssemblyContaining<CustomerValidator>());
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
// Adds fluent validation rules to swagger
c.AddFluentValidationRules();
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app
.UseMvc()
// Adds swagger
.UseSwagger();
// Adds swagger UI
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
});
}
Swagger Sample model and validator
public class Sample
{
public string PropertyWithNoRules { get; set; }
public string NotNull { get; set; }
public string NotEmpty { get; set; }
public string EmailAddress { get; set; }
public string RegexField { get; set; }
public int ValueInRange { get; set; }
public int ValueInRangeExclusive { get; set; }
}
public class SampleValidator : AbstractValidator<Sample>
{
public SampleValidator()
{
RuleFor(sample => sample.NotNull).NotNull();
RuleFor(sample => sample.NotEmpty).NotEmpty();
RuleFor(sample => sample.EmailAddress).EmailAddress();
RuleFor(sample => sample.RegexField).Matches(@"(\d{4})-(\d{2})-(\d{2})");
RuleFor(sample => sample.ValueInRange).GreaterThanOrEqualTo(5).LessThanOrEqualTo(10);
RuleFor(sample => sample.ValueInRangeExclusive).GreaterThan(5).LessThan(10);
}
}
Feel free to add issues!
A: After searching I finally figured out that I needed IValidatorFactory to get the validator instance.
public class AddFluentValidationRules : ISchemaFilter
{
private readonly IValidatorFactory _factory;
/// <summary>
/// Default constructor with DI
/// </summary>
/// <param name="factory"></param>
public AddFluentValidationRules(IValidatorFactory factory)
{
_factory = factory;
}
/// <summary>
/// </summary>
/// <param name="model"></param>
/// <param name="context"></param>
public void Apply(Schema model, SchemaFilterContext context)
{
// use IoC or FluentValidatorFactory to get AbstractValidator<T> instance
var validator = _factory.GetValidator(context.SystemType);
if (validator == null) return;
if (model.Required == null)
model.Required = new List<string>();
var validatorDescriptor = validator.CreateDescriptor();
foreach (var key in model.Properties.Keys)
{
foreach (var propertyValidator in validatorDescriptor
.GetValidatorsForMember(ToPascalCase(key)))
{
if (propertyValidator is NotNullValidator
|| propertyValidator is NotEmptyValidator)
model.Required.Add(key);
if (propertyValidator is LengthValidator lengthValidator)
{
if (lengthValidator.Max > 0)
model.Properties[key].MaxLength = lengthValidator.Max;
model.Properties[key].MinLength = lengthValidator.Min;
}
if (propertyValidator is RegularExpressionValidator expressionValidator)
model.Properties[key].Pattern = expressionValidator.Expression;
// Add more validation properties here;
}
}
}
/// <summary>
/// To convert case as swagger may be using lower camel case
/// </summary>
/// <param name="inputString"></param>
/// <returns></returns>
private static string ToPascalCase(string inputString)
{
// If there are fewer than 2 characters, just return the upper-cased string.
if (inputString == null) return null;
if (inputString.Length < 2) return inputString.ToUpper();
return inputString.Substring(0, 1).ToUpper() + inputString.Substring(1);
}
}
and add this class to swaggerGen options
options.SchemaFilter<AddFluentValidationRules>();
| stackoverflow | {
"language": "en",
"length": 643,
"provenance": "stackexchange_0000F.jsonl.gz:895626",
"question_score": "21",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638195"
} |
5b58d8ef077d90e1a5bda03906a34439119bad53 | Stackoverflow Stackexchange
Q: PEP8 - Contradiction between E129 and E127/E128 According to the PEP standards, line breaks should come before binary operators. Furthermore, multiline conditions should be enclosed within parentheses to avoid using backslashes before newlines. These two conventions lead to the following situation:
if (long_condition_1
or long_condition_2):
do_some_function()
This code in turn triggers E129 ("visually indented line with same indent as next logical line") in PEP8 checkers. However, the second line must be indented exactly four spaces, as otherwise it breaks E128 or E127 for under-indented or over-indented lines.
How should one format the above so that it conforms to the PEP8 standards?
A: The answer to this question has changed over time. Due to a change in stance from PEP8, W503 is now widely regarded to go against PEP8.
PEP8 now says it's fine to break before OR after, but to keep it consistent locally.
For newer code, Knuth-style is preferred (which I think refers to breaking before the operator).
if (
long_condition_1
or long_condition_2
or (
long_condition_3
and long_condition4
)
):
do_some_function()
| Q: PEP8 - Contradiction between E129 and E127/E128 According to the PEP standards, line breaks should come before binary operators. Furthermore, multiline conditions should be enclosed within parentheses to avoid using backslashes before newlines. These two conventions lead to the following situation:
if (long_condition_1
or long_condition_2):
do_some_function()
This code in turn triggers E129 ("visually indented line with same indent as next logical line") in PEP8 checkers. However, the second line must be indented exactly four spaces, as otherwise it breaks E128 or E127 for under-indented or over-indented lines.
How should one format the above so that it conforms to the PEP8 standards?
A: The answer to this question has changed over time. Due to a change in stance from PEP8, W503 is now widely regarded to go against PEP8.
PEP8 now says it's fine to break before OR after, but to keep it consistent locally.
For newer code, Knuth-style is preferred (which I think refers to breaking before the operator).
if (
long_condition_1
or long_condition_2
or (
long_condition_3
and long_condition4
)
):
do_some_function()
A: This should work properly
if (long_condition_1 or
long_condition_2):
do_some_function()
A: if any((long_condition_1,
long_condition_2)):
do_some_function()
it's easier to read when both conditions are aligned, too ...
| stackoverflow | {
"language": "en",
"length": 195,
"provenance": "stackexchange_0000F.jsonl.gz:895634",
"question_score": "16",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638211"
} |
2e854cf15758bd9b10792f77be3d3a171d59d214 | Stackoverflow Stackexchange
Q: How do you make the ListHeaderComponent of a React Native FlatList sticky? I have a FlatList that is purposefully wider than the screen width.
The list scrolls vertically to view each row and sits in a horizontal ScrollView to access off-screen items.
The ListHeaderComponent prop is basically an x-axis label I'd like to behave as a "frozen header"; like in a spreadsheet.
How do I make the ListHeaderComponent sticky?
A: You need to set prop to Flatlist as stickyHeaderIndices={[0]}
ListHeaderComponent: This prop would set the header view at the top of FlatList.
stickyHeaderIndices={[0]}: This prop would make the item at index 0 of the FlatList a sticky header. Since we have already added the header to the FlatList, the header is now at index 0, so this makes the header sticky by default.
<FlatList
data={ this.state.FlatListItems }
ItemSeparatorComponent={ this.FlatListItemSeparator}
renderItem={ ({item}) => (
<Text
style={styles.FlatList_Item}
onPress={this.GetItem.bind(this, item.key)}> {item.key}
</Text>
)}
ListHeaderComponent={this.Render_FlatList_Sticky_header}
stickyHeaderIndices={[0]}
/>
| Q: How do you make the ListHeaderComponent of a React Native FlatList sticky? I have a FlatList that is purposefully wider than the screen width.
The list scrolls vertically to view each row and sits in a horizontal ScrollView to access off-screen items.
The ListHeaderComponent prop is basically an x-axis label I'd like to behave as a "frozen header"; like in a spreadsheet.
How do I make the ListHeaderComponent sticky?
A: You need to set prop to Flatlist as stickyHeaderIndices={[0]}
ListHeaderComponent: This prop would set the header view at the top of FlatList.
stickyHeaderIndices={[0]}: This prop would make the item at index 0 of the FlatList a sticky header. Since we have already added the header to the FlatList, the header is now at index 0, so this makes the header sticky by default.
<FlatList
data={ this.state.FlatListItems }
ItemSeparatorComponent={ this.FlatListItemSeparator}
renderItem={ ({item}) => (
<Text
style={styles.FlatList_Item}
onPress={this.GetItem.bind(this, item.key)}> {item.key}
</Text>
)}
ListHeaderComponent={this.Render_FlatList_Sticky_header}
stickyHeaderIndices={[0]}
/>
A: stickyHeaderIndices={[0]} would solve your problem.
<FlatList
data={this.state.data}
renderItem={this.renderItem}
keyExtractor={item => item.id}
stickyHeaderIndices={[0]}
/>
Besides, stickyHeaderIndices can also be an array of the indices we want to stick. You can even set a lot of indices like this:
FlatList Sticky Header Example
<FlatList
data={this.state.data}
renderItem={this.renderItem}
keyExtractor={item => item.name}
stickyHeaderIndices={[0, 6, 13]}
/>
When you keep scrolling the FlatList, item0 will be sticky, and then be replaced by item6, item13.
(source: nativebase.io)
You can't find stickyHeaderIndices in the RN FlatList documentation. But you can find it in the source code. VirtualizedList supports it.
VirtualizedList.js#L964
| stackoverflow | {
"language": "en",
"length": 252,
"provenance": "stackexchange_0000F.jsonl.gz:895658",
"question_score": "66",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638286"
} |
79160c680051204298df5a613be411e0e7e57a8b | Stackoverflow Stackexchange
Q: Python Scrapy Function Call I try to call the getNext() function from the main parse function that Scrapy calls, but it never gets called.
class BlogSpider(scrapy.Spider):
# User agent.
name = 'Mozilla/5.0 (Linux; Android 4.0.4; Galaxy Nexus Build/IMM76B) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.133 Mobile Safari/535.19'
start_urls = ['http://www.tricksforums.org/best-free-movie-streaming-sites-to/']
def getNext(self):
print("Getting next ... ")
# Check if next link in DB is valid and crawl.
try:
nextUrl = myDb.getNextUrl()
urllib.urlopen(nextUrl).getcode()
yield scrapy.Request(nextUrl['link'])
except IOError as e:
print("Server can't be reached", e.code)
yield self.getNext()
def parse(self, response):
print("Parsing link: ", response.url)
# Get all urls for further crawling.
all_links = hxs.xpath('*//a/@href').extract()
for link in all_links:
if validators.url(link) and not myDb.existUrl(link) and not myDb.visited(link):
myDb.addUrl(link)
print("Getting next?")
yield self.getNext()
I tried with and without yield before it... what's the issue? And what is this yield supposed to do? :)
A: You are trying to yield a generator, but meant to yield from a generator.
If you are on Python 3.3+, you can use yield from:
yield from self.getNext()
Or, simply do the return self.getNext().
| Q: Python Scrapy Function Call I try to call the getNext() function from the main parse function that Scrapy calls, but it never gets called.
class BlogSpider(scrapy.Spider):
# User agent.
name = 'Mozilla/5.0 (Linux; Android 4.0.4; Galaxy Nexus Build/IMM76B) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.133 Mobile Safari/535.19'
start_urls = ['http://www.tricksforums.org/best-free-movie-streaming-sites-to/']
def getNext(self):
print("Getting next ... ")
# Check if next link in DB is valid and crawl.
try:
nextUrl = myDb.getNextUrl()
urllib.urlopen(nextUrl).getcode()
yield scrapy.Request(nextUrl['link'])
except IOError as e:
print("Server can't be reached", e.code)
yield self.getNext()
def parse(self, response):
print("Parsing link: ", response.url)
# Get all urls for further crawling.
all_links = hxs.xpath('*//a/@href').extract()
for link in all_links:
if validators.url(link) and not myDb.existUrl(link) and not myDb.visited(link):
myDb.addUrl(link)
print("Getting next?")
yield self.getNext()
I tried with and without yield before it... what's the issue? And what is this yield supposed to do? :)
A: You are trying to yield a generator, but meant to yield from a generator.
If you are on Python 3.3+, you can use yield from:
yield from self.getNext()
Or, simply do the return self.getNext().
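Applied to the question's parse method, the end of the loop body would become one of these (a sketch):
# Python 3.3+: re-yield everything the generator produces
yield from self.getNext()

# pre-3.3 alternative: iterate and yield each request explicitly
for request in self.getNext():
    yield request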
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:895659",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638287"
} |
d12f3dd3a2738f0731f7d9cf1eea06c89ea09bda | Stackoverflow Stackexchange
Q: VS Code - size of description popup I am creating a snippet for VS Code with long descriptions.
I have noticed that the description popup window in VS Code has a scrollbar, but it would be great if I could enlarge the window. Is that possible?
A: No, that's not possible currently. Sometimes text even wraps in such a popup window, which makes it difficult to read. Certainly something that needs improvement.
Here's an example:
| Q: VS Code - size of description popup I am creating a snippet for VS Code with long descriptions.
I have noticed that the description popup window in VS Code has a scrollbar, but it would be great if I could enlarge the window. Is that possible?
A: No, that's not possible currently. Sometimes text even wraps in such a popup window, which makes it difficult to read. Certainly something that needs improvement.
Here's an example:
A: VSCode 1.51 (Oct. 2020) should add (part of) that feature with:
Resizable suggestions
This milestone, we've made several improvements to the suggestions UI. First and foremost: it can now be resized! Drag the sides or corners to resize the control.
Theme: GitHub Light, Font: FiraCode
The size of the suggestions list will be saved and restored across sessions.
The size of the details pane is only saved per session, since the size tends to be more variable.
Also, the editor.suggest.maxVisibleSuggestions setting has become obsolete.
As noted by Jan M. in the comments, this only allows resizing the suggestion window, not the popup window.
Feature allowing to resize the popup window is not yet implemented:
microsoft/vscode issue 14165: "Feature request: configure tooltip max width".
A: This is possible now with the Custom CSS and JS Loader extension.
1. Install extension
Custom CSS and JS Loader extension
2. Set permissions
*
*macOS
*
*VS Code: sudo chown -R $(whoami) /Applications/Visual Studio Code.app/Contents/MacOS/Electron
*VS Code Insiders: sudo chown -R $(whoami) /Applications/Visual Studio Code - Insiders.app/Contents/MacOS/Electron
*Linux: sudo chown -R $(whoami) /usr/share/code
3. Create CSS override file
touch ~/.vscode-custom.css:
/* suggest-widget size */
.monaco-editor .suggest-widget.docs-side {
width: 1000px;
}
.monaco-editor .suggest-widget.docs-side > .details {
width: 60%;
max-height: 800px !important;
}
.monaco-editor .suggest-widget.docs-side > .tree {
width: 30%;
float: left;
}
/* parameter-hints-widget */
.editor-widget.parameter-hints-widget.visible {
max-height: 800px !important;
}
.monaco-editor .parameter-hints-widget > .wrapper {
max-width: 1000px;
}
/* editor-hover */
.monaco-editor-hover .monaco-editor-hover-content {
max-width: 1000px;
}
Apply CSS file path to settings.json
{
"vscode_custom_css.imports": ["file:///Users/yourusername/.vscode-custom.css"],
"vscode_custom_css.policy": true
}
4. Restart VSCode
*
*Restart VSCode
*Ignore "VSCode is corrupt errors_
*
*You can choose to suppress these forever
*Run command "Reload Custom CSS and JS"
A: That's definitely possible with Customize UI + Monkey Patch extensions.
Install them and then add the following to your settings.json:
"customizeUI.stylesheet": {".monaco-hover-content, .hover-contents span, .parameter-hints-widget div.code, .parameter-hints-widget div.docs": "font-size: 12px !important"}
Don't forget to reload the window to apply the changes!
Works like a charm for me with VS Code 1.63.2
| stackoverflow | {
"language": "en",
"length": 409,
"provenance": "stackexchange_0000F.jsonl.gz:895671",
"question_score": "22",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638328"
} |
36ded25211c92502944082207158f709e34f2010 | Stackoverflow Stackexchange
Q: How to change font and point size in bookdown pdf? I am writing a document with a strict requirement to use arial 12 point. I have modified my output yml in bookdown like this:
site: bookdown::bookdown_site
fontsize: 12pt
fontfamily: arial
documentclass: book
output:
bookdown::pdf_book:
includes:
in_header: preamble.tex
keep_tex: yes
toc_depth: 3
toc_appendix: yes
clean: [packages.bib, bookdown.bbl]
but it has no effect on the output, other than that I was forced to install some extra font packages in the MiKTeX package manager; even after this was done, there was no change to the actual document output. Yet the top of _main.tex looks like this:
\documentclass[12pt,]{book}
\usepackage[]{arial}
\usepackage{amssymb,amsmath}
Why doesn't it honour my choice of font? I also tried Lato, a similar font, but the document always comes back with the default serif-family font.
A: Try adding the following lines to your preamble.tex:
\usepackage{fontspec}
\setmainfont{Arial}
| Q: How to change font and point size in bookdown pdf? I am writing a document with a strict requirement to use arial 12 point. I have modified my output yml in bookdown like this:
site: bookdown::bookdown_site
fontsize: 12pt
fontfamily: arial
documentclass: book
output:
bookdown::pdf_book:
includes:
in_header: preamble.tex
keep_tex: yes
toc_depth: 3
toc_appendix: yes
clean: [packages.bib, bookdown.bbl]
but it has no effect on the output, other than that I was forced to install some extra font packages in the MiKTeX package manager; even after this was done, there was no change to the actual document output. Yet the top of _main.tex looks like this:
\documentclass[12pt,]{book}
\usepackage[]{arial}
\usepackage{amssymb,amsmath}
Why doesn't it honour my choice of font? I also tried Lato, a similar font, but the document always comes back with the default serif-family font.
A: Try adding the following lines to your preamble.tex:
\usepackage{fontspec}
\setmainfont{Arial}
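Note that fontspec only works with XeLaTeX or LuaLaTeX, so the output section of the YAML needs the engine set as well; a sketch based on the question's configuration:
output:
  bookdown::pdf_book:
    latex_engine: xelatex
    includes:
      in_header: preamble.tex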
A: The default article document class (and also the book one you selected) only supports the 10pt, 11pt and 12pt font sizes. If you want more choice you should use the KOMA-Script equivalents.
| stackoverflow | {
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:895675",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638337"
} |
47dc2545a14530cf7edb0b3a9a2bf7d2d6a16146 | Stackoverflow Stackexchange
Q: mypy error: List or tuple literal expected as the second argument to namedtuple() I wrote this code in Python 3.5:
from collections import namedtuple
attributes = ('content', 'status')
Response = namedtuple('Response', attributes)
When I run the Mypy type checker to analyze this code it raised the error:
error: List or tuple literal expected as the second argument to namedtuple()
I tried to add a type annotation to the attributes variable:
from typing import Tuple
attributes = ('content', 'status') # type: Tuple[str, str]
But it didn't fix the raised error.
A: According to issue 848 on mypy's issue tracker, this will just never be implemented (see last message by GvR).
Though # type: ignore does actually silence this warning, it then creates other issues, so if you can, don't depend on dynamically specifying the field names of the namedtuple (i.e., provide the literal in the ways Michael's answer shows).
| Q: mypy error: List or tuple literal expected as the second argument to namedtuple() I wrote this code in Python 3.5:
from collections import namedtuple
attributes = ('content', 'status')
Response = namedtuple('Response', attributes)
When I run the Mypy type checker to analyze this code it raised the error:
error: List or tuple literal expected as the second argument to namedtuple()
I tried to add a type annotation to the attributes variable:
from typing import Tuple
attributes = ('content', 'status') # type: Tuple[str, str]
But it didn't fix the raised error.
A: According to issue 848 on mypy's issue tracker, this will just never be implemented (see last message by GvR).
Though # type: ignore does actually silence this warning, it then creates other issues, so if you can, don't depend on dynamically specifying the field names of the namedtuple (i.e., provide the literal in the ways Michael's answer shows).
A: If you want mypy to understand what your namedtuples look like, you should import NamedTuple from the typing module, like so:
from typing import NamedTuple
Response = NamedTuple('Response', [('content', str), ('status', str)])
Then, you can use Response just like any other namedtuple, except that mypy now understands the types of each individual field. If you're using Python 3.6, you can also use the alternative class-based syntax:
from typing import NamedTuple
class Response(NamedTuple):
content: str
status: str
If you were hoping to dynamically vary the fields and write something that can "build" different namedtuples at runtime, that's unfortunately not possible within Python's type ecosystem. PEP 484 currently doesn't have any provisions for propagating or extracting the actual values of any given variable during the type-checking phase.
It's actually pretty challenging to implement this in a fully general way, so it's unlikely this feature will be added any time soon (and if it is, it'll likely be in a much more limited form).
| stackoverflow | {
"language": "en",
"length": 310,
"provenance": "stackexchange_0000F.jsonl.gz:895680",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638360"
} |
72d76b85d2a6f2e4de967424cc4c6a90d5ac99c1 | Stackoverflow Stackexchange
| Q: Error: Cannot pass NA to dbQuoteIdentifier() in sqldf package in R Error: Cannot pass NA to dbQuoteIdentifier()
In addition: Warning message:
In field_types[] <- field_types[names(data)] :
number of items to replace is not a multiple of replacement length
This is the error message I am getting when trying to run anything with the sqldf package today. The same queries that ran yesterday don't run today; what am I doing wrong?
A: I had the same problem:
Error: Cannot pass NA to dbQuoteIdentifier()
In addition: Warning message:
In field_types[] <- field_types[names(data)] :
number of items to replace is not a multiple of replacement length
after some research, I noticed I selected the same column twice in one table:
table1<- sqldf("select columnA,
columnA,
keyA
from tableA")
table2<- sqldf("select columnB,
keyB
from tableB")
problematicMerge<- sqldf("select a.*,
b.*
from tableA a join
                       tableB b
on a.keyA = b.keyB")
this was solved by altering table1 to remove the duplicate column (see below; I suspect aliasing one of the columns to have a different name would also do the trick):
table1<-sqldf("select columnA,
keyA
from tableA")
Hope this helps
A: I had the same problem yesterday when I was suddenly unable to upload a table from R to an SQLite db on my remote desktop.
lghdb <- dbConnect(SQLite(), 'lgh.db')
dbWriteTable(lghdb, 'SrtrRisks', SrtrRisks)
Error: Cannot pass NA to dbQuoteIdentifier()...
After muddling around for a while, I realized that this error was due to the addressed SQLite database being "locked" due to an uncompleted (not committed) transaction, related to my simultaneous work using the SQLite Browser. The problem disappeared once I committed the pending transaction.
I guess that you must have figured this out, too, since there has been no follow-up to your post. It might be nice for the RSQLite folks to see whether they can return a more helpful error message under these circumstances.
Larry Hunsicker
A: I too encountered the same error:
## step1: encountered the error as below while joining two tables
screens_temp_2 = sqldf("SELECT a.* , b.ue as 'sp_used_ue' , b.te as
'sp_used_te' from screens_temp a left outer join sp_temp b on
a.screen_name = b.screen_name ")
Error: Cannot pass NA to dbQuoteIdentifier()
In addition: Warning message:
In field_types[] <- field_types[names(data)] :
number of items to replace is not a multiple of replacement length
## step2: while checking the column names , this is what i found
colnames(screens_temp)
[1] "screen_name" "usv" "tsv" "20_ue" "20_te"
[6] "40_ue" "40_te" "60_ue" "60_te" "80_ue"
[11] "80_te" "100_ue" "100_te" "sp_load_ue" "sp_load_te"
[16] "sp_load_ue" "sp_load_te"
The above result shows that sp_load_ue and sp_load_te are repeated.
## below i corrected the column names:
colnames(screens_temp) <- c("screen_name", "usv", "tsv", "20_ue", "20_te", "40_ue" , "40_te" , "60_ue" , "60_te" , "80_ue" , "80_te" ,"100_ue" , "100_te" , "sp_load_ue" , "sp_load_te" , "sp_used_ue" , "sp_used_te" )
write.table(screens_temp, "screens_temp_corrected.csv", row.names = FALSE ,col.names = TRUE, sep = ",")
## again i ran step 1, it worked fine.
Note: I think there is a bug in sqldf due to which it allows column names to be repeated while assigning output to a dataframe. It should throw an error/warning while assigning the output to a dataframe so that the user can rename the columns appropriately.
A: Had same issue with sqldf inside a loop. Solved it by putting it inside data.frame call: data.frame(sqldf(..)).
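The same duplicate-column failure mode also exists outside R, for example when pushing a pandas DataFrame into SQLite from Python; a quick illustrative guard (not from the original answers; the table and column names are made up):
import sqlite3
import pandas as pd

df = pd.DataFrame([[1, 1, 'a']], columns=['columnA', 'columnA', 'keyA'])

# Detect repeated column names before handing the frame to the SQL layer.
dupes = df.columns[df.columns.duplicated()].tolist()
if dupes:
    raise ValueError('duplicate column names would break the SQL layer: %s' % dupes)

with sqlite3.connect(':memory:') as conn:
    df.to_sql('tableA', conn, index=False)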
| stackoverflow | {
"language": "en",
"length": 543,
"provenance": "stackexchange_0000F.jsonl.gz:895695",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638398"
} |
c2f8a1b7e591f07aef1b6bdfd6c910a5edf048ee | Stackoverflow Stackexchange
| Q: Android Maps - Is there a way to check if there are tolls in a route? I am building an app that I use matrix google api to get distance, time and I see that I can use it to avoid tolls. But I would like to know if there is a way to check if there are tolls or not in the route I have traced between two coordinates points on the google maps.
Thanks in advance
A: As of now it is not possible with Google Maps, but I was able to do it with https://www.tollsmart.com/; you need to choose a plan according to your demand.
| stackoverflow | {
"language": "en",
"length": 108,
"provenance": "stackexchange_0000F.jsonl.gz:895725",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638483"
} |
7d8d38d5bdcfddecde51c515f5dbebb560f42f7b | Stackoverflow Stackexchange
| Q: Typescript ///: why doesn't it work for me? I wrote the following files:
main.ts:
///<reference path="./external.ts"/>
hello();
external.ts
var hello = function() {
console.log("hello");
}
I compiled both files to javascript and ran them by the command:
$ node main.js
I expected that function 'hello' will be invoked. But, no, I got an error:
ReferenceError: hello is not defined
The tutorial about triple-slash directive (https://www.typescriptlang.org/docs/handbook/triple-slash-directives.html) says that:
The compiler performs a preprocessing pass on input files to resolve
all triple-slash reference directives. During this process, additional
files are added to the compilation.
so I don't understand why function from external.ts file cannot be read.
A: That approach only works in the browser. When using node you need to import (require) the file in order to use it.
You'll need to do this:
// external.ts
export var hello = function() {
console.log("hello");
}
And use it like this:
// main.ts
import { hello } from "./external";
hello();
Also, when compiling you'll need to compile it for node:
tsc -m commonjs ./main.ts
A: The purpose of a reference file is to tell the compiler what kinds of functions, types, or interfaces are available in the following program.
It is about declaration rather than implementation.
An easier example will be:
If in main.ts you have:
console.log('hi')
Without @types/node, compilation will fail, because the compiler has no idea what console is. That's the reason we include reference files, so the compiler can pick up:
– Oh, there is a console object defined with a log method.
In your example, you can make declare in hello.d.ts:
declare function hello(): void;
then in hello.ts do
/// <reference path="./hello.d.ts" />
hello();
now you will see the compilation succeed:
tsc hello.ts
This means the compiler is happy, it knows hello is a function and can be called like that.
However, if you run with
node hello.js
ReferenceError: hello is not defined
You will get a ReferenceError, because at runtime the Node engine has no implementation of the hello() function.
Trying
console.log('hello')
which is implemented by the engine, will help in understanding the difference.
| stackoverflow | {
"language": "en",
"length": 337,
"provenance": "stackexchange_0000F.jsonl.gz:895762",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638612"
} |
3ffb3811b53fcbd2eac820bb5796c80b74cde539 | Stackoverflow Stackexchange
| Q: Android get serial number for the second sim in a dual sim phone? TelephonyManager Class works only for the first sim. it gives details about Sim 1 (Primary Sim)
| stackoverflow | {
"language": "en",
"length": 30,
"provenance": "stackexchange_0000F.jsonl.gz:895773",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638639"
} |
51d273ad27f7c44b704be596da24b49f82423a10 | Stackoverflow Stackexchange
| Q: Netbeans 8.2 C++ debugging on Mac OSX I have been trying to enable debugging on NetBeans 8.2 C++ (OS X - Sierra - 10.12.5) without success. I have tried the following:
*
*Using Mac Ports version of gdb with code-signing: result: gdb gives a standard error:
Unable to find Mach task port for process-id 3016: (os/kern) failure (0x5).
(please check gdb is codesigned - see taskgated(8))
*Using Mac's /usr/bin/llvm: result:
GDB has unexpectedly stopped with return 0
Anyone have any success getting this to work?
| stackoverflow | {
"language": "en",
"length": 86,
"provenance": "stackexchange_0000F.jsonl.gz:895815",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638755"
} |
49d672098d60e20248e37b4da58ba00b1302dec2 | Stackoverflow Stackexchange
| Q: VBA + Excel + Try Catch In VBA, I'm doing a simple script that records a version of a spreadsheet being used.
Private Sub Workbook_Open()
version = "1.0"
Set objHTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
URL = "<WEB SERVICE>"
objHTTP.Open "POST", URL, False
objHTTP.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
objHTTP.setRequestHeader "Content-type", "application/x-www-form-urlencoded"
objHTTP.send ("version=" + version)
End Sub
The process works fine, but...
I'm trying to do a try catch so if the web host is offline, instead of showing a run time error I catch it and suppress.
What is the best way to try catch in VBA so there is no error message shown?
A: Something like this:
Try
...
Catch (Exception e)
...
End Try
Might look like this in VBA:
' The "Try" part
On Error Resume Next
...
On Error GoTo 0
' The "Catch" part
If Err.Number <> 0 Then
...
End If
However, this form may not be following best practices.
A: Private Sub Workbook_Open()
on error goto Oops
version = "1.0"
Set objHTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
URL = "<WEB SERVICE>"
objHTTP.Open "POST", URL, False
objHTTP.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
objHTTP.setRequestHeader "Content-type", "application/x-www-form-urlencoded"
objHTTP.send ("version=" + version)
exit sub
Oops:
'handle error here
End Sub
If you wanted to, for example, change the URL because of the error, you can do this
Private Sub Workbook_Open()
on error goto Oops
version = "1.0"
Set objHTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
URL = "<WEB SERVICE>"
Send:
objHTTP.Open "POST", URL, False
objHTTP.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
objHTTP.setRequestHeader "Content-type", "application/x-www-form-urlencoded"
objHTTP.send ("version=" + version)
exit sub
Oops:
'handle error here
URL="new URL"
resume Send 'risk of endless loop if the new URL is also bad
End Sub
Also, if you're feeling really try/catch-y, you can emulate that like this.
Private Sub Workbook_Open()
version = "1.0"
Set objHTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
URL = "<WEB SERVICE>"
on error resume next 'be very careful with this, it ignores all errors
objHTTP.Open "POST", URL, False
objHTTP.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
objHTTP.setRequestHeader "Content-type", "application/x-www-form-urlencoded"
objHTTP.send ("version=" + version)
if err <> 0 then
'not 0 means it errored, handle it here
err.clear 'keep in mind this doesn't reset the error handler, any code after this will still ignore errors
end if
End Sub
So extending this to be really hard core...
Private Sub Workbook_Open()
version = "1.0"
on error resume next
Set objHTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
if err <> 0 then
'unable to create object, give up
err.clear
exit sub
end if
URL = "<WEB SERVICE>"
objHTTP.Open "POST", URL, False
if err <> 0 then
'unable to open request, give up
err.clear
exit sub
end if
objHTTP.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
objHTTP.setRequestHeader "Content-type", "application/x-www-form-urlencoded"
objHTTP.send ("version=" + version)
if err <> 0 then
'unable to send request, give up
err.clear
exit sub
end if
End Sub
Also worth noting that any error raised inside an On Error GoTo handler will not itself be handled, so if you did this
private sub MakeError()
dim iTemp as integer
on error goto Oops
iTemp = 5 / 0 'divide by 0 error
exit sub
Oops:
itemp = 4 / 0 'unhandled exception, divide by 0 error
end sub
Will cause an unhandled exception, however
private sub MakeError()
dim iTemp as integer
on error resume next
iTemp = 5 / 0 'divide by 0 error
if err <> 0 then
err.clear
iTemp = 4 / 0 'divide by 0 error, but still ignored
if err <> 0 then
'another error
end if
end if
end sub
Will not cause any exceptions, since VBA ignored them all.
| stackoverflow | {
"language": "en",
"length": 601,
"provenance": "stackexchange_0000F.jsonl.gz:895854",
"question_score": "37",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638867"
} |
bd3744e914f081ac3603700cce90237bee6ff931 | Stackoverflow Stackexchange
| Q: run docker exec from swarm manager I have two worker nodes: worker1 and worker2 and one swarm manager. I'm running all the services in the worker nodes only. I need to run from the manager docker exec to access some of the containers created in the worker nodes but I keep getting that the service is not recognized. I know I can run docker exec in any of the worker nodes and it works fine but I dont want to have to find on which node the service is running and then ssh to the designated node to run docker exec command. Is there a way to do so in swarm or not?
A: If this helps, nowadays you can create the overlay network with --attachable flag to enable any container to join the network. This is great feature as it allows a lot of flexibility.
E.g.
$ docker network create --attachable --driver overlay my-network
$ docker service create --network my-network --name web --publish 80:80 nginx
$ docker run --network=my-network -ti alpine sh
$ wget -qO- web
<!DOCTYPE html>
<html>
<head>
....
A: Swarm mode does not currently have a way to run an exec on a running task. You need to find the container and run the exec on the host. You can configure the workers to have a TLS protected port they listen on, which would give you remote access (see docker's guide). And you can lookup the node for each task in a service by checking the output of a docker service ps $service_name.
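A possible automation of that lookup, as a rough sketch (not from the original answers; it assumes passwordless SSH from the manager to the worker nodes and that docker is on the remote PATH):
import subprocess

def exec_in_service(service, command):
    # Ask the manager where the service's running task lives; swarm names
    # the container <task-name>.<task-id>.
    out = subprocess.check_output(
        ['docker', 'service', 'ps', '--filter', 'desired-state=running',
         '--format', '{{.Node}} {{.Name}}.{{.ID}}', service],
        text=True,
    )
    node, container = out.splitlines()[0].split()
    # SSH to that worker and run the exec there.
    subprocess.run(['ssh', node, 'docker exec %s %s' % (container, command)],
                   check=True)

exec_in_service('web', 'ls /')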
| stackoverflow | {
"language": "en",
"length": 258,
"provenance": "stackexchange_0000F.jsonl.gz:895856",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44638869"
} |
c1dec71dcac8db149057f4a95cccd8e2103bb7a6 | Stackoverflow Stackexchange
| Q: R's toTitleCase() not working on word "all" For some reason R's toTitleCase() function isn't working on the word "all". Any ideas why?
library(tools)
toTitleCase("all") # gives "all"
toTitleCase("alt") # gives "Alt"
A: The Details section in the help page ?toTitleCase notes that
Generally words in all capitals are left alone: this implementation knows about conventional mixed-case words such as ‘LaTeX’ and ‘OpenBUGS’ and a few technical terms which are not usually capitalized such as ‘jar’ and ‘xls’.
Type toTitleCase without parentheses into your console. You will see the sets of excepted words along with a lengthy regex for connector words. Among these is
either <- c("all", "above", "after", "along", "also", "among",
"any", "both", "can", "few", "it", "less", "log", "many",
"may", "more", "over", "some", "their", "then", "this",
"under", "until", "using", "von", "when", "where", "which",
"will", "without", "yet", "you", "your")
which contains "all".
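For comparison, a tiny Python sketch of the same idea: an exception set of connector words left in lower case, mirroring R's either vector (the word list here is abbreviated and illustrative):
EITHER = {'all', 'above', 'after', 'along', 'also', 'among', 'any', 'the', 'of'}

def to_title_case(text):
    # Words in the exception set are left alone; everything else is capitalized.
    return ' '.join(w if w in EITHER else w[:1].upper() + w[1:]
                    for w in text.split())

print(to_title_case('all'))  # -> all, mirroring R's behaviour
print(to_title_case('alt'))  # -> Alt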
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:895934",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639101"
} |
57916a4b87eb2273287e0f987d36fe52727fa5ab | Stackoverflow Stackexchange
| Q: Xcode iOS Simulator: can I use the mouse as pretend Apple Pencil input (on iPad Pro), for testing? Is it possible to do this (with a menu, a shortcut, or a modifier key + mouse)?
For example, you can use the mouse to test simple touch gestures in the simulator, like left mouse acts as single finger, and shift / option allow for different two finger gestures.
I have been unable to find any documentation one way or the other about whether this is possible, despite this developer.apple.com page where the simple, easy-to-understand API changes for supporting Apple Pencil hardware are documented.
Do I need a physical iPad Pro + Pencil hardware to test my Pencil support?
(My app is not a drawing app, just an app where touch input should work with large touch targets and Pencil should allow finer distinctions.)
A: The Simulator does not currently support simulating Apple Pencil input. We are aware people would like to do this.
A: iOS 14
#if targetEnvironment(simulator)
canvasView.drawingPolicy = .anyInput
#else
canvasView.drawingPolicy = .pencilOnly
#endif
Also in the Settings app there is a global setting called "Only Draw with Apple Pencil". This can be read from UIPencilInteraction.prefersPencilOnlyDrawing in PencilKit.
iOS 13 (only)
#if targetEnvironment(simulator)
canvasView.allowsFingerDrawing = true
#else
canvasView.allowsFingerDrawing = false
#endif
Courtesy of https://stackoverflow.com/a/62567169/2667933
| stackoverflow | {
"language": "en",
"length": 216,
"provenance": "stackexchange_0000F.jsonl.gz:895946",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639153"
} |
f55803fdbd589f54e5358e0d4fa7be59675c0b68 | Stackoverflow Stackexchange
| Q: timestampTz fields in Laravel Laravel 5.4 supports the Postgres TIMESTAMP WITH TIME ZONE field type in migrations:
$table->timestampTz('scheduled_for');
Laravel can be set up to convert date fields (DATE, DATETIME, TIMESTAMP) into Carbon objects (and does so by default for the created_at and updated_at TIMESTAMP fields), but putting scheduled_for into the $dates field causes an error with the timezone-aware version:
InvalidArgumentException with message 'Trailing data'
Looking in the database and tinker, the field's value appears to be something like 2017-06-19 19:19:19-04. Is there a native way to get a Carbon object out of one of these field types? Or am I stuck using an accessor?
A: Resurrecting this question, hopefully with a helpful answer that gets accepted.
Laravel assumes a Y-m-d H:i:s database timestamp format. If you're using a Postgres timestampz column, that's obviously different. You need to tell Eloquent how to get Carbon to parse that format.
Simply define the $dateFormat property on your model like so:
Class MyModel extends Eloquent {
protected $dateFormat = 'Y-m-d H:i:sO';
}
Credit where credit is due: I found this solution in a GitHub issue
A: Put this inside your model
protected $casts = [
'scheduled_for' => 'datetime' // date | datetime | timestamp
];
Using $dates is most likely obsolete, as $casts does the same thing (except perhaps the $dateFormat attribute, which IIRC only works with $dates fields, though I have seen complaints about it)
Edit
I was testing Carbon once on Laravel 5.4 and I created a trait for it
this is not production-level code yet, so include it in your model at your own risk
<?php namespace App\Traits;
use Carbon\Carbon;
trait castTrait
{
protected function castAttribute($key, $value)
{
$database_format = 'Y-m-d H:i:se'; // Store this somewhere in config files
$output_format_date = 'd/m/Y'; // Store this somewhere in config files
$output_format_datetime = 'd/m/Y H:i:s'; // Store this somewhere in config files
if (is_null($value)) {
return $value;
}
switch ($this->getCastType($key)) {
case 'int':
case 'integer':
return (int) $value;
case 'real':
case 'float':
case 'double':
return (float) $value;
case 'string':
return (string) $value;
case 'bool':
case 'boolean':
return (bool) $value;
case 'object':
return $this->fromJson($value, true);
case 'array':
case 'json':
return $this->fromJson($value);
case 'collection':
return new BaseCollection($this->fromJson($value));
case 'date':
Carbon::setToStringFormat($output_format_date);
$date = (string)$this->asDate($value);
Carbon::resetToStringFormat(); // Just for sure
return $date;
case 'datetime':
Carbon::setToStringFormat($output_format_datetime);
$datetime = (string)$this->asDateTime($value);
Carbon::resetToStringFormat();
return $datetime;
case 'timestamp':
return $this->asTimestamp($value);
default:
return $value;
}
}
/**
* Return a timestamp as DateTime object with time set to 00:00:00.
*
* @param mixed $value
* @return \Carbon\Carbon
*/
protected function asDate($value)
{
return $this->asDateTime($value)->startOfDay();
}
/**
* Return a timestamp as DateTime object.
*
* @param mixed $value
* @return \Carbon\Carbon
*/
protected function asDateTime($value)
{
$carbon = null;
$database_format = [ // This variable should also be in config file
'datetime' => 'Y-m-d H:i:se', // e -timezone
'date' => 'Y-m-d'
];
if(empty($value)) {
return null;
}
// If this value is already a Carbon instance, we shall just return it as is.
// This prevents us having to re-instantiate a Carbon instance when we know
// it already is one, which wouldn't be fulfilled by the DateTime check.
if ($value instanceof Carbon) {
$carbon = $value;
}
// If the value is already a DateTime instance, we will just skip the rest of
// these checks since they will be a waste of time, and hinder performance
// when checking the field. We will just return the DateTime right away.
if ($value instanceof DateTimeInterface) {
$carbon = new Carbon(
$value->format($database_format['datetime'], $value->getTimezone())
);
}
// If this value is an integer, we will assume it is a UNIX timestamp's value
// and format a Carbon object from this timestamp. This allows flexibility
// when defining your date fields as they might be UNIX timestamps here.
if (is_numeric($value)) {
$carbon = Carbon::createFromTimestamp($value);
}
// If the value is in simply year, month, day format, we will instantiate the
// Carbon instances from that format. Again, this provides for simple date
// fields on the database, while still supporting Carbonized conversion.
if ($this->isStandardDateFormat($value)) {
$carbon = Carbon::createFromFormat($database_format['date'], $value)->startOfDay();
}
// Finally, we will just assume this date is in the format used by default on
// the database connection and use that format to create the Carbon object
// that is returned back out to the developers after we convert it here.
$carbon = Carbon::createFromFormat(
$database_format['datetime'], $value
);
return $carbon;
}
}
| stackoverflow | {
"language": "en",
"length": 729,
"provenance": "stackexchange_0000F.jsonl.gz:895974",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639221"
} |
406a41882219cd4d403e5e44de1599b056274539 | Stackoverflow Stackexchange
| Q: Neo4j: How can I write this query better? I am new in Neo4j. I tried to do this query:
"Mati Gol would like to watch a new movie. therefore would like to get the following list of movies: Write a query that returns all movies that were LIKED by any person who is a FRIEND of the person with the name Mati Gol or is a FRIEND of a FRIEND of Mati Gol, excluding all movies WATCHED by Mati Gol."
My query is:
MATCH (a:person {name:"Moti Gol"})-[:WATCHED]->(b)
WITH collect(b) AS Already_Watched
MATCH (a:person {name:"Moti Gol"})-[:FRIEND*1..2]->(b)-[:LIKED]->(c)
WITH collect(c) AS Friend_Liked
(movie:Friend_Liked) WHERE NOT (movie.name) IN Already_Watched
RETURN movie.name
Is this query OK? Can someone offer me better writing of this?
A: Your query has some errors... Firstly, the line starting with (movie:Friend_Liked) has no MATCH statement. You are also MATCHing (a:person {name:"Moti Gol"}) two times and redeclaring the a variable.
A more simple and intuitive way to do the same query:
// get all the movies liked by friends or friends of friends of "Moti Gol"...
MATCH (a:person {name:"Moti Gol"})-[:FRIEND*1..2]->(b:person)-[:LIKED]->(c:movie)
// excluding all movies WATCHED by Mati Gol
WHERE NOT (a)-[:WATCHED]->(c)
// return the movies
RETURN c.name
A: Here is a solution which I think is what you were after from the start but didn't quite get right.
// find the person and the movies they have already watched
MATCH (a:Person {name:"Mati Gol"})-[:WATCHED]->(movie:Movie)
WITH a, collect(movie) as my_movie_list
// find the person's friends and the movies that they like
MATCH (a)-[:FRIEND*1..2]->(:Person)-[:LIKED]->(movie:Movie)
WITH a, my_movie_list, collect(DISTINCT movie) as friend_movie_list
// return the friend like movies that are not already watched
RETURN [m IN friend_movie_list WHERE NOT m in my_movie_list] as movies_to_watch
I think this solution gives you a little more cost certainty as it should only traverse the movie nodes once each. If there is a lot of duplication in movies LIKED by friends and friends of friends (which I expect is a reasonably likely scenario) then reducing the list of LIKED movies to the distinct list first and then filtering it against the movies watched afterwards could save on database comparisons.
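If you need to run such a query from application code, here is a sketch using the official Neo4j Python driver, lightly adapted from the first answer (the URI and credentials are placeholders):
from neo4j import GraphDatabase

driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'password'))

query = '''
MATCH (a:person {name: "Moti Gol"})-[:FRIEND*1..2]->(:person)-[:LIKED]->(c:movie)
WHERE NOT (a)-[:WATCHED]->(c)
RETURN DISTINCT c.name AS name
'''

with driver.session() as session:
    for record in session.run(query):
        print(record['name'])

driver.close()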
| stackoverflow | {
"language": "en",
"length": 348,
"provenance": "stackexchange_0000F.jsonl.gz:895976",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639224"
} |
ac0099e727e197df76df6d740dea948ce5367e76 | Stackoverflow Stackexchange
| Q: JavaScript: Difference between Reflect.get() and obj['foo'] Can't understand why I should use Reflect.get(obj, 'foo') instead of obj['foo'], or why the first one is useful as we can do the same thing using the good and old object bracket notation. Can someone please elaborate?
var obj = {foo: 'bar'};
obj['foo'];
Reflect.get(obj, 'foo');
A: Well, a pedantic answer to your question would be that they are entirely different: a property accessor returns a reference to a property, while Reflect.get returns its value.
From the practical standpoint that doesn't make any difference since property references are always dereferenced on the right side.
One practical usage of Reflect.get would be with its third argument, which, when combined with a Proxy, can be used to create different "views" of the same data.
let numbersView = obj => new Proxy(obj, {
get(target, key, receiver) {
return receiver(target[key])
}
});
let decimal = x => String(x);
let english = x => {
if (x === 1) return 'one';
if (x === 2) return 'two';
};
let v = numbersView({
a: 1,
b: 2
});
console.log(Reflect.get(v, 'a', decimal))
console.log(Reflect.get(v, 'a', english))
This example is a bit made-up, but you got the idea.
A: return Reflect.get(...arguments);
Reflect.get invokes the getter, if any. The foo getter receives the proxied object, receiver (receiver === case1), as its this. This means the get trap is called for bar as well.
const case1 = new Proxy({
get foo() {
console.log("The foo getter", this);
return this.bar;
},
bar: 3
}, {
get(target, property, receiver) {
console.log("The Proxy get trap", ...arguments);
return Reflect.get(...arguments);
}
});
console.log(case1.foo);
> case1.foo
▶The Proxy get trap ▶{bar: 3} ▶foo ▶Proxy {bar: 3}
▶The foo getter ▶Proxy {bar: 3}
▶The Proxy get trap ▶{bar: 3} ▶bar ▶Proxy {bar: 3}
▶3
return target[property];
Using the unproxied object, target. This also triggers the foo getter, but notice: this for the foo getter is the unproxied object, target. The get trap for the bar is not called.
const case2 = new Proxy({
get foo() {
console.log("The foo getter", this);
return this.bar;
},
bar: 3
}, {
get(target, property, receiver) {
console.log("The Proxy get trap", ...arguments);
return target[property];
}
});
console.log(case2.foo);
> case2.foo
▶The Proxy get trap ▶{bar: 3} ▶foo ▶Proxy {bar: 3}
▶The foo getter ▶{bar: 3}
▶3
return receiver[property];
Using the proxied object, receiver (receiver === case3). receiver[property] refers to the get trap, not the getter, causing an infinite loop.
const case3 = new Proxy({
get foo() {
console.log("The foo getter", this);
return this.bar;
},
bar: 3
}, {
get(target, property, receiver) {
console.log("The Proxy get trap", ...arguments);
return receiver[property];
}
});
console.log(case3.foo);
> case3.foo
▶The Proxy get trap ▶{bar: 3} ▶foo ▶Proxy {bar: 3}
▶The Proxy get trap ▶{bar: 3} ▶foo ▶Proxy {bar: 3}
▶The Proxy get trap ▶{bar: 3} ▶foo ▶Proxy {bar: 3}
▶The Proxy get trap ▶{bar: 3} ▶foo ▶Proxy {bar: 3}
……
Uncaught RangeError: Maximum call stack size exceeded
Now you get it.
Which to use
Can't understand why I should use Reflect.get(obj, 'foo') instead of obj['foo']
While using the Reflect verbs is idiomatic for Proxy trap implementation, there's actually no should. It depends on your use case. If your target ("unproxied") object does not have getters or you're not interested in what properties its getters are accessing ("secondary property accesses"), you might not need the fancy-looking Reflect. On the other hand, if you'd like to trigger the trap for all kinds of property accesses, primary or secondary, you would need Reflect.
For me, I always stick to return Reflect.get(...arguments);.
| stackoverflow | {
"language": "en",
"length": 824,
"provenance": "stackexchange_0000F.jsonl.gz:896009",
"question_score": "24",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639309"
} |
2323121dde05e27742f622f678f5023a671c8f37 | Stackoverflow Stackexchange
| Q: Can vowpal wabbit use all my CPU cores? I've tried to train --oaa vowpal wabbit classifier on 10M+ train data and found that it uses only one core. Is any ways to make it use all 12 cores?
A: VW uses two threads: one for loading&parsing the input data and one for the machine learning.
VW comes with a spanning_tree tool for parallel execution (AllReduce) of several VW instances on a cluster (e.g. Hadoop) or on a single machine (--span_server localhost).
That said, I think 12 cores are not enough for AllReduce to pay off. For the best results, you need to do hyper-parameter search anyway, so you can do it in parallel using the 12 cores.
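A rough sketch of that parallel hyper-parameter search driven from Python (illustrative only: the training file name, the class count passed to --oaa, and the grid values are made up):
import itertools
import subprocess
from multiprocessing import Pool

def train(params):
    rate, bits = params
    model = 'model_l%s_b%s.vw' % (rate, bits)
    # One vw process per pool worker, so each search point gets its own core.
    subprocess.run(['vw', '-d', 'train.vw', '--oaa', '10',
                    '-l', str(rate), '-b', str(bits), '-f', model],
                   check=True)
    return model

if __name__ == '__main__':
    grid = list(itertools.product([0.1, 0.5, 1.0], [24, 26, 28]))
    with Pool(processes=12) as pool:
        models = pool.map(train, grid)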
| stackoverflow | {
"language": "en",
"length": 118,
"provenance": "stackexchange_0000F.jsonl.gz:896016",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639329"
} |
5ac443e466af32f0f3f8a6589bbd50551bb4c715 | Stackoverflow Stackexchange
| Q: Angular 2 (cli) protractor jasmine expect is not resolving promise I am writing E2E tests with Protractor and Angular 2 using Jasmine.
I am trying to do a simple expectation on the getText() of an element returned by protractor.
it('should display correct hero title', () => {
expect(element(by.css('Hero-title')).getText()).toEqual('Foobar');
});
This results in a type error:
Argument of type '"Foobar"' is not assignable to parameter of type 'Expected<Promise<string>>'. [2345]
I know I could use .then but I don't want to do that as I will have loads of these types of expectations.
Using a fresh Angular CLI project this works as expected. I have gone through all the configs but cannot find any differences.
A: This relates to jasmine, starting with 2.5.46, enforcing correct typings, here is a related open issue in the Protractor issue tracker:
*
*Typings issue since @types/jasmine update
As a workaround, you can pin your "jasmine types" version to 2.5.45 until the issue is fixed:
"@types/jasmine": "2.5.45"
| stackoverflow | {
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:896022",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639345"
} |
b5e863e4cb77906f1b1ea39de0bfa7e87d2dd4c9 | Stackoverflow Stackexchange
| Q: Print python list without quotation marks or space after commas So right now I have a list of file names. I want to print them without the space after the comma and without quotation marks.
So basically I have a file that has the following output:
['1', '2', '3']
And I want the output to be
1,2,3
A: Use the join method.
>>> your_list = ['1', '2', '3']
>>> print(','.join(your_list))
1,2,3
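If the list holds non-string items (integers, say), join needs them converted first; a small illustrative extension:
>>> numbers = [1, 2, 3]
>>> print(','.join(map(str, numbers)))
1,2,3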
| stackoverflow | {
"language": "en",
"length": 75,
"provenance": "stackexchange_0000F.jsonl.gz:896025",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639357"
} |
fe3b1016ea6e666fa2ac8c4797c13011bf3f44a8 | Stackoverflow Stackexchange
| Q: Mongoose field not required but unique I have an email field set as unique, but it is not required.
The problem is that if the user does not enter anything Mongoose puts "null" in it. This causes duplicates because every user that does not enter the email field will have "null" assigned to it.
What is the standard practice to avoid this?
Thanks
A: Use a sparse unique index
If a document does not have a value for a field, the index entry for
that item will be null in any index that includes it. Thus, in many
situations you will want to combine the unique constraint with the
sparse option. Sparse indexes skip over any document that is missing
the indexed field, rather than storing null for the index entry.
db.collection.createIndex( { a: 1, b: 1 }, { unique: true, sparse: true } )
More information: https://docs.mongodb.com/v3.0/tutorial/create-a-unique-index/
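For comparison, the same index expressed from Python with pymongo (a sketch; the database, collection, and field names are placeholders):
from pymongo import ASCENDING, MongoClient

client = MongoClient()  # assumes a local mongod
users = client['app']['users']

# unique + sparse: documents missing "email" are skipped by the index,
# so users without an email no longer collide on null.
users.create_index([('email', ASCENDING)], unique=True, sparse=True)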
| stackoverflow | {
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:896032",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639377"
} |
412dd8eaa4530f56d9681ba9786353b992b2c5dc | Stackoverflow Stackexchange
| Q: Download Android APK file from Fabric Beta Is it possible to download the Android APK file from Fabric Beta? We have multiple releases uploaded.
A: Mike from Fabric here. We currently don't provide a way to download the .APK, they are only provided via the Beta by Crashlytics apps.
A: Late answer but someone may need this. You can download it in a hacky way from devices that apps install by Beta or any way:
Connect the device to your computer and run the following command, ensure that you have configured the adb correctly:
adb shell pm list packages | grep xyz # get the package name of the app
adb shell pm path app.xyz.stg # get the path of the app
adb pull /data/app/app.xyz.stg/base.apk . # pull the app to PWD
the pulled file is named base.apk; rename it to xyz.apk if you like. This only works for apps already installed on the connected device.
A: Mesut's answer is correct. Just to make it more clear.
*
*adb shell pm path ${package_name}
*adb pull /data/app/${package_name_2}/base.apk
In the second command, the value ${package_name_2}/base.apk is from the first command. Sometimes it's not exactly the package name.
In my case, it's ${package_name}-1/base.apk
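A small script tying the two adb steps together (a sketch, not from the answers above; it assumes adb is on PATH and that the package resolves to a single path):
import subprocess

def pull_apk(package, dest='pulled.apk'):
    # "pm path" prints lines like "package:/data/app/<pkg>-1/base.apk".
    out = subprocess.check_output(['adb', 'shell', 'pm', 'path', package],
                                  text=True)
    remote = out.strip().splitlines()[0].split('package:', 1)[1]
    subprocess.run(['adb', 'pull', remote, dest], check=True)

pull_apk('app.xyz.stg')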
A: If you just want to download a specific build, say "1.0(143)" then you can choose that build in the beta app and download it.
If you need to upload multiple APKs from the same build (say, an APK for each deployment environment such as prevalidation, validation, production), then you need to set up your Gradle config to define productFlavors for each deployment environment like this:
android {
...
flavorDimensions "deploymentEnvironment"
productFlavors {
prevalidation {
dimension "deploymentEnvironment"
}
validation {
dimension "deploymentEnvironment"
}
production {
dimension "deploymentEnvironment"
}
}
...
}
Then you publish multiple APKs from the same build (one for each target deployment environment) to the same Fabric project using following gradle tasks as illustrative examples. Actual tasks depend on the variants defined for your project:
./gradlew -s assemblePrevalidationRelease assembleValidationRelease
./gradlew -s crashlyticsUploadDistributionPrevalidationRelease crashlyticsUploadDistributionValidationRelease
The Fabric console beta page does show both apks and you can choose to download and install one or the other. The only missing part is that both variants are listed as exactly the same (since they have the same versionName and versionCode). This could easily be solved if Fabric console shows the actual apk name in addition to the version / build info. I would love for the awesome Fabric team to address this small feature request sometime soon.
Until then a workaround I use is to identify the build based on order in Fabric beta console (risky but works) and put the target deployment info in the release notes for each apk in Fabric for a given build.
| stackoverflow | {
"language": "en",
"length": 450,
"provenance": "stackexchange_0000F.jsonl.gz:896049",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639429"
} |
73f967eb39e7e095a5a28453eaa0554c784521c3 | Stackoverflow Stackexchange
| Q: Ignore string columns while doing I am using the following code to normalize a pandas DataFrame:
df_norm = (df - df.mean()) / (df.max() - df.min())
This works fine when all columns are numeric. However, now I have some string columns in df and the above normalization got errors. Is there a way to perform such normalization only on numeric columns of a data frame (keeping string column unchanged)?
A: You can use select_dtypes to calculate value for the desired columns:
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c'], 'c': [4, 5, 6]})
df
a b c
0 1 a 4
1 2 b 5
2 3 c 6
df_num = df.select_dtypes(include='number')
df_num
a c
0 1 4
1 2 5
2 3 6
And then you can assign them back to the original df:
df_norm = (df_num - df_num.mean()) / (df_num.max() - df_num.min())
df[df_norm.columns] = df_norm
df
a b c
0 -0.5 a -0.5
1 0.0 b 0.0
2 0.5 c 0.5
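An equivalent in-place variant (a sketch restating the answer above rather than adding new behaviour), using the numeric column index directly:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c'], 'c': [4, 5, 6]})

num = df.select_dtypes(include='number').columns
df[num] = (df[num] - df[num].mean()) / (df[num].max() - df[num].min())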
| stackoverflow | {
"language": "en",
"length": 165,
"provenance": "stackexchange_0000F.jsonl.gz:896053",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639442"
} |
f7ce1c8ee698e069b2e9efad8dc16e8caa7a5079 | Stackoverflow Stackexchange
| Q: Update all rows in the PhpMyAdmin I want to change in the PhpMyAdmin the text "pelecard" for all fields that contain this text. the table name is sales_flat_order_payment. How I can do this?
thank you
A: What about plain SQL?
UPDATE sales_flat_order_payment SET method="123"
This changes the field to a new value in every row; without a WHERE clause you change all entries.
A: Can also be done directly in phpMyAdmin. Open the table, choose "Search" then "Find and replace".
| stackoverflow | {
"language": "en",
"length": 79,
"provenance": "stackexchange_0000F.jsonl.gz:896096",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44639600"
} |