id (string, 40 chars) | text (string, 29 to 2.03k chars) | original_text (string, 3 to 154k chars) | subdomain (20 classes) | metadata (dict)
---|---|---|---|---
2224b7810d64dacffde3c07d8896278b72a3ca2b | Stackoverflow Stackexchange
Q: How to fork your own repo on BitBucket? How to fork your own repo on BitBucket?
I know how to fork another user's repo from the web interface, and I know how to clone my repo.
But how to fork your own repo on BitBucket and ease a future pull request workflow?
A: Go to your repository, and then go to Actions -> Fork.
If you have the new navigation enabled, then go to your repository, click on the + on the left navigation bar and then Get to work -> Fork this repository.
Also, make sure that forking is enabled in repository settings (for the existing repository).
| Q: How to fork your own repo on BitBucket? How to fork your own repo on BitBucket?
I know how to fork another user's repo from the web interface, and I know how to clone my repo.
But how to fork your own repo on BitBucket and ease a future pull request workflow?
A: Go to your repository, and then go to Actions -> Fork.
If you have the new navigation enabled, then go to your repository, click on the + on the left navigation bar and then Get to work -> Fork this repository.
Also, make sure that forking is enabled in repository settings (for the existing repository).
A: First, create a new repository 'bar'.
Next, clone the existing project 'foo':
$ git clone git@bitbucket.org:YOURNAME/foo.git bar
Next, edit your Git config file and replace the origin URL with your new URL:
$ cd bar
$ vim .git/config
[remote "origin"]
fetch = +refs/heads/*:refs/remotes/origin/*
url = git@bitbucket.org:YOURNAME/bar.git #replace foo with bar
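Equivalently (a sketch, not part of the original answer), the remote URL can be changed without editing .git/config by hand:
$ git remote set-url origin git@bitbucket.org:YOURNAME/bar.git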
Optionally add your original repo as an upstream source:
$ git remote add upstream git@bitbucket.org:YOURNAME/foo.git
Finally, push your new repository up to Bitbucket:
$ git push -u origin master
Now you can push/pull from your new repo (bar) as expected. You should also be able to merge upstream changes using the following command:
$ git fetch upstream
$ git merge upstream/master
credit: bitdrift
A: On Bitbucket Server, there does not seem to be an option to fork from your own personal repository. I ran into this trying to move a personal repository to a public location. Possible options:
Get someone else to fork it for you or use a different login if available.
Or
From the project settings, "Move" the repository to the public location, then fork the moved repository back to the original location.
| stackoverflow | {
"language": "en",
"length": 295,
"provenance": "stackexchange_0000F.jsonl.gz:876091",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44576721"
} |
e0a0712bfa92b51209951559816e4c818d3e133c | Stackoverflow Stackexchange
Q: Angular 2 Material - Why is my input deselected? I have a weird problem.
I am using Angular CLI and I have added Material 2 to it. I created some inputs with ngfor and bound them to ngmodel. Everything is working fine.
But whenever I type something, the input becomes deselected.
This is the HTML code of the component:
<md-input-container class="treatments" *ngFor="let t of treatment; let i = index">
<input mdInput placeholder="treatment {{ i + 1 }}"
value="{{ t[i] }}" name="treatment_{{ i + 1 }}" [(ngModel)]="treatment[i]">
</md-input-container>
When I remove the ngmodel, it does work 100%.
A: You are iterating over an array of a primitive type, and thus the items are compared by value. When you change a treatment (i.e. t), all of them are destroyed and new ones created, which causes the field to lose focus. This can be solved by using trackBy, which tracks the treatments by their index, so the input you are editing is reused instead of being destroyed and recreated.
So add trackBy:
<md-input-container class="treatments" *ngFor="let t of treatment;
let i = index; trackBy:trackByFn">
and in TS add the function:
trackByFn(index, treatment) {
return index;
}
| Q: Angular 2 Material - Why is my input deselected? I have a weird problem.
I am using Angular CLI and I have added Material 2 to it. I created some inputs with ngfor and bound them to ngmodel. Everything is working fine.
But whenever I type something, the input becomes deselected.
This is the HTML code of the component:
<md-input-container class="treatments" *ngFor="let t of treatment; let i = index">
<input mdInput placeholder="treatment {{ i + 1 }}"
value="{{ t[i] }}" name="treatment_{{ i + 1 }}" [(ngModel)]="treatment[i]">
</md-input-container>
When I remove the ngmodel, it does work 100%.
A: You are iterating over an array of a primitive type, and thus the items are compared by value. When you change a treatment (i.e. t), all of them are destroyed and new ones created, which causes the field to lose focus. This can be solved by using trackBy, which tracks the treatments by their index, so the input you are editing is reused instead of being destroyed and recreated.
So add trackBy:
<md-input-container class="treatments" *ngFor="let t of treatment;
let i = index; trackBy:trackByFn">
and in TS add the function:
trackByFn(index, treatment) {
return index;
}
A: The problem is you are trying to bind ngModel to primitive values. You can solve the deselect problem by adding @Input() before the treatment declaration.
@Input() treatment: string[] = [ "treatment 1", "treatment 2"];
But it will not update the values in the array. If you want the values to update, ngModel needs to bind to an object property. I have added both examples in this Plnkr.
For more info, refer to this SO question
Angular2 ngModel inside of ngFor
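A minimal sketch of the object-property binding described above; the wrapper property name value is an assumption, not taken from the original answers:
// component: wrap each entry in an object so the array elements keep a stable reference
treatment = [{ value: 'treatment 1' }, { value: 'treatment 2' }];
<!-- template: bind ngModel to the object's property instead of the primitive array element -->
<md-input-container class="treatments" *ngFor="let t of treatment; let i = index">
  <input mdInput placeholder="treatment {{ i + 1 }}" name="treatment_{{ i + 1 }}" [(ngModel)]="t.value">
</md-input-container>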
| stackoverflow | {
"language": "en",
"length": 265,
"provenance": "stackexchange_0000F.jsonl.gz:876107",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44576796"
} |
d8f77d8673e559372d01ec8657781bcbea5f0ec5 | Stackoverflow Stackexchange
Q: How to use `pkg-config gtkmm-3.0 --cflags --libs` in Visual Studio Code How can I configure the tasks.json file so that when I press Ctrl + Shift + B the compiler will use pkg-config gtkmm-3.0 --cflags --libs.
My file looks like this:
"version": "0.1.0",
"command": "g++ `pkg-config gtkmm-3.0 --cflags --libs`",
"isShellCommand": true,
"args": ["main.cpp"]
But it returns this message:
Failed to launch external program g++ pkg-config gtkmm-3.0 --cflags --libs main.cpp. spawn g++ pkg-config gtkmm-3.0 --cflags --libs ENOENT
If I put it as an argument, like this:
"version": "0.1.0",
"command": "g++",
"isShellCommand": true,
"args": ["main.cpp", "pkg-config gtkmm-3.0 --cflags --libs`"]
Returns this message:
g++: error: pkg-config gtkmm-3.0 --cflags --libs`:
File or directory not found
A: you can try this!
"version": "0.1.0"
"command": "g++",
"isShellCommand": true,
"args": [
"main.cpp",
"`pkg-config", "--libs", "--cflags", "gtkmm-3.0`",
]
You need to be careful about how the arguments are quoted.
more information: Task in Visual Studio Code
| Q: How to use `pkg-config gtkmm-3.0 --cflags --libs` in Visual Studio Code How can I configure the tasks.json file so that when I press Ctrl + Shift + B the compiler will use pkg-config gtkmm-3.0 --cflags --libs.
My file looks like this:
"version": "0.1.0",
"command": "g++ `pkg-config gtkmm-3.0 --cflags --libs`",
"isShellCommand": true,
"args": ["main.cpp"]
But it returns this message:
Failed to launch external program g++ pkg-config gtkmm-3.0 --cflags --libs main.cpp. spawn g++ pkg-config gtkmm-3.0 --cflags --libs ENOENT
If I put it as an argument, like this:
"version": "0.1.0",
"command": "g++",
"isShellCommand": true,
"args": ["main.cpp", "pkg-config gtkmm-3.0 --cflags --libs`"]
Returns this message:
g++: error: pkg-config gtkmm-3.0 --cflags --libs`:
File or directory not found
A: you can try this!
"version": "0.1.0"
"command": "g++",
"isShellCommand": true,
"args": [
"main.cpp",
"`pkg-config", "--libs", "--cflags", "gtkmm-3.0`",
]
You need to be careful about how the arguments are quoted.
more information: Task in Visual Studio Code
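With the newer task schema, a "shell" type task lets the backticks be expanded by the shell. A sketch (the task label, the "2.0.0" schema choice and the output name main are assumptions, not from the original answer):
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build gtkmm app",
            "type": "shell",
            "command": "g++ main.cpp `pkg-config gtkmm-3.0 --cflags --libs` -o main",
            "group": "build"
        }
    ]
}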
| stackoverflow | {
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:876108",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44576801"
} |
8d7fee0fb6d4beffe54ba30afffc97f6a2ca8aec | Stackoverflow Stackexchange
Q: How to obtain object from navigator.getUserMedia() in mobile web? Trying to get a JavaScript reference/promise to a MediaStream returned from MediaDevices in HTML5 and WebRTC.
JavaScript here is run on Chrome and Safari on iOS and Android.
navigator.mediaDevices.getUserMedia(session, initRec , onErr);
This throws TypeError: undefined is not an object (evaluating 'navigator.mediaDevices.getUserMedia')
Fine, let's try it the slightly older way:
navigator.getUserMedia(parms,initfunc,errfunc);
This throws TypeError: navigator.getuserMedia is not a function. (In 'navigator.getUserMedia(parms,initfunc,errfunc)', 'navigator.getUserMedia' is undefined)
Note: Both of those calls work on desktop versions. Did mobile web EVER work for audio recording?
A: Try to serve the web page over HTTPS. I've read that iOS Safari only allows access to the microphone (and camera) on secure connections.
I had the same problem while implementing ng-audio-recorder in my Ionic Angular app. While working flawlessly on desktop, mobile safari threw this error. Switching to HTTPS solved the problem.
| Q: How to obtain object from navigator.getUserMedia() in mobile web? Trying to get a JavaScript reference/promise to a MediaStream returned from MediaDevices in HTML5 and WebRTC.
JavaScript here is run on Chrome and Safari on iOS and Android.
navigator.mediaDevices.getUserMedia(session, initRec , onErr);
This throws TypeError: undefined is not an object (evaluating 'navigator.mediaDevices.getUserMedia')
Fine, let's try it the slightly older way:
navigator.getUserMedia(parms,initfunc,errfunc);
This throws TypeError: navigator.getuserMedia is not a function. (In 'navigator.getUserMedia(parms,initfunc,errfunc)', 'navigator.getUserMedia' is undefined)
Note: Both of those calls work on desktop versions. Did mobile web EVER work for audio recording?
A: Try to serve the web page over HTTPS. I've read that iOS Safari only allows access to the microphone (and camera) on secure connections.
I had the same problem while implementing ng-audio-recorder in my Ionic Angular app. While working flawlessly on desktop, mobile safari threw this error. Switching to HTTPS solved the problem.
A: Safari (or any browser on iOS) does not support getUserMedia and WebRTC until the upcoming Safari 11. See the webkit blog post for details and note that it does not mention the MediaRecorder API.
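As a minimal sketch (not from the original answers) of how the promise-based API is usually called once a secure context and a supporting browser are in place; initRec stands in for the question's own callback:
// Feature-detect first; mediaDevices is undefined on insecure origins and in older browsers.
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(function (stream) {
            initRec(stream); // hand the MediaStream to the recorder setup
        })
        .catch(function (err) {
            console.error('getUserMedia failed:', err);
        });
} else {
    console.warn('getUserMedia is not available in this browser/context');
}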
| stackoverflow | {
"language": "en",
"length": 180,
"provenance": "stackexchange_0000F.jsonl.gz:876121",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44576832"
} |
e6e18fac7fef319918af4f07e58a0d0bf316742e | Stackoverflow Stackexchange
Q: echo to stdout and append to file I have this:
echo "all done creating tables" >> ${SUMAN_DEBUG_LOG_PATH}
but that should only append to the file, not write to stdout.
How can I write to stdout and append to a file in the same bash line?
A: Use the tee command
$ echo hi | tee -a foo.txt
hi
$ cat foo.txt
hi
| Q: echo to stdout and append to file I have this:
echo "all done creating tables" >> ${SUMAN_DEBUG_LOG_PATH}
but that should only append to the file, not write to stdout.
How can I write to stdout and append to a file in the same bash line?
A: Use the tee command
$ echo hi | tee -a foo.txt
hi
$ cat foo.txt
hi
A: Something like this?
echo "all done creating tables" | tee -a "${SUMAN_DEBUG_LOG_PATH}"
A: Normally tee is used, however a version using just bash:
#!/bin/bash
function mytee (){
fn=$1
shift
IFS= read -r LINE
printf '%s\n' "$LINE"
printf '%s\n' "$LINE" >> "$fn"
}
SUMAN_DEBUG_LOG_PATH=/tmp/abc
echo "all done creating tables" | mytee "${SUMAN_DEBUG_LOG_PATH}"
| stackoverflow | {
"language": "en",
"length": 115,
"provenance": "stackexchange_0000F.jsonl.gz:876156",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44576935"
} |
71e38db97c0d26ff6153341d6fd1cb813bafacfa | Stackoverflow Stackexchange
Q: Python Pandas: Passing arguments to a function in agg() I am trying to reduce data in a pandas dataframe by using different kinds of functions and argument values. However, I did not manage to change the default arguments in the aggregation functions. Here is an example:
>>> df = pd.DataFrame({'x': [1,np.nan,2,1],
... 'y': ['a','a','b','b']})
>>> df
x y
0 1.0 a
1 NaN a
2 2.0 b
3 1.0 b
Here is an aggregation function, for which I would like to test different values of b:
>>> def translate_mean(x, b=10):
... y = [elem + b for elem in x]
... return np.mean(y)
In the following code, I can use this function with the default b value, but I would like to pass other values:
>>> df.groupby('y').agg(translate_mean)
x
y
a NaN
b 11.5
Any ideas?
A: Maybe you can try using apply in this case:
df.groupby('y').apply(lambda x: translate_mean(x['x'], 20))
Now the result is:
y
a NaN
b 21.5
| Q: Python Pandas: Passing arguments to a function in agg() I am trying to reduce data in a pandas dataframe by using different kinds of functions and argument values. However, I did not manage to change the default arguments in the aggregation functions. Here is an example:
>>> df = pd.DataFrame({'x': [1,np.nan,2,1],
... 'y': ['a','a','b','b']})
>>> df
x y
0 1.0 a
1 NaN a
2 2.0 b
3 1.0 b
Here is an aggregation function, for which I would like to test different values of b:
>>> def translate_mean(x, b=10):
... y = [elem + b for elem in x]
... return np.mean(y)
In the following code, I can use this function with the default b value, but I would like to pass other values:
>>> df.groupby('y').agg(translate_mean)
x
y
a NaN
b 11.5
Any ideas?
A: Maybe you can try using apply in this case:
df.groupby('y').apply(lambda x: translate_mean(x['x'], 20))
Now the result is:
y
a NaN
b 21.5
A: Just in case you have multiple columns, and you want to apply different functions and different parameters for each column, you can use lambda function with agg function.
For example:
>>> df = pd.DataFrame({'x': [1,np.nan,2,1],
...                    'y': ['a','a','b','b'],
...                    'z': [0.1,0.2,0.3,0.4]})
>>> df
x y z
0 1.0 a 0.1
1 NaN a 0.2
2 2.0 b 0.3
3 1.0 b 0.4
>>> def translate_mean(x, b=10):
... y = [elem + b for elem in x]
... return np.mean(y)
To group by column 'y' and apply the function translate_mean with b=10 for column 'x' and b=25 for column 'z', you can try this:
df_res = df.groupby(by='y').agg({
'x': lambda x: translate_mean(x, 10),
'z': lambda x: translate_mean(x, 25)})
Hopefully, it helps.
A: Just pass as arguments to agg (this works with apply, too).
df.groupby('y').agg(translate_mean, b=4)
Out:
x
y
a NaN
b 5.5
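Equivalently (a sketch, not from the original answers), the extra argument can be bound up front with functools.partial, which also works when agg is given a dict of functions:
import functools
# binds b=4 up front; same result as df.groupby('y').agg(translate_mean, b=4)
df.groupby('y').agg(functools.partial(translate_mean, b=4))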
| stackoverflow | {
"language": "en",
"length": 295,
"provenance": "stackexchange_0000F.jsonl.gz:876179",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577019"
} |
e8924b5d5eab1d57e4782a7fd52fb2503f7dc618 | Stackoverflow Stackexchange
Q: Form action redirect to another controller Symfony2.8 I have a weird problem. I want to setAction on a form to redirect to another controller.
I have 2 controllers, for user and address. On route /{id}/modify we are in the user controller's Twig template and there is this generated form:
$add=new Address();
$formAddress=$this->createFormBuilder($add)
->setAction($this->redirectToRoute("/{id}/addAddress",array('id'=>$id)))
->add("city","text")
->add("street","text")
->add("housenumber","text")
->add("flatnumber","text")
->add("send","submit")
->getForm();
After submitting I want to be redirected to address controller where form will be handled, route of address controller is /{id}/addAddress.
Thanks in advance for answers! Cheers!
A: Your action is not correct. If you use the function redirectToRoute it expects a route name.
// redirect to a route with parameters
return $this->redirectToRoute('blog_show', array('slug' => 'my-page'));
The first parameter (string) is the name of the route you want to send your form to. Otherwise you have to use redirect together with generateUrl to get the URL, which does the same thing, but redirectToRoute is newer and shorter.
https://symfony.com/doc/current/controller.html
| Q: Form action redirect to another controller Symfony2.8 I have a weird problem. I want to setAction on a form to redirect to another controller.
I have 2 controllers, for user and address. On route /{id}/modify we are in the user controller's Twig template and there is this generated form:
$add=new Address();
$formAddress=$this->createFormBuilder($add)
->setAction($this->redirectToRoute("/{id}/addAddress",array('id'=>$id)))
->add("city","text")
->add("street","text")
->add("housenumber","text")
->add("flatnumber","text")
->add("send","submit")
->getForm();
After submitting I want to be redirected to address controller where form will be handled, route of address controller is /{id}/addAddress.
Thanks in advance for answers! Cheers!
A: Your action is not correct. If you use the function redirectToRoute it expects a route name.
// redirect to a route with parameters
return $this->redirectToRoute('blog_show', array('slug' => 'my-page'));
The first parameter (string) is the name of the route you want to send your form to. Otherwise you have to use redirect together with generateUrl to get the URL, which does the same thing, but redirectToRoute is newer and shorter.
https://symfony.com/doc/current/controller.html
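For the form action itself, a sketch (assuming the /{id}/addAddress action is exposed under a route named address_add; that name is not from the original question):
$formAddress = $this->createFormBuilder($add)
    // generateUrl() returns the URL string the form should submit to
    ->setAction($this->generateUrl('address_add', array('id' => $id)))
    ->add("city", "text")
    ->add("send", "submit")
    ->getForm();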
| stackoverflow | {
"language": "en",
"length": 154,
"provenance": "stackexchange_0000F.jsonl.gz:876184",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577049"
} |
4ba9b5fa9741e59a424c3e7b783b2b15f50a6a76 | Stackoverflow Stackexchange
Q: If someone knows your SHA-1 certificate fingerprint:, how "dangerous" is that? I'm new to android, I have seen many people hiding their SHA-1 certificate fingerprint. I have developed an app using google play services and shared it with someone. It has my SHA-1 certificate fingerprint in it. Can a hacker do any damage knowing my SHA-1 Certificate.
Thanks.
A: The certificate fingerprint is calculated from the certificate. The certificate itself is public information and transferred in clear during the SSL/TLS handshake. Which makes the fingerprint public information too, i.e. there is usually no danger in having it known by others.
But one could probably construct a situation where this might be dangerous. For example if your application uses the fingerprint to verify that it connects to the correct site and this site is an illegal site and you know this. In this case one could probably try to associate you with illegal activities from the fact that you've included the fingerprint of this certificate in your application.
| Q: If someone knows your SHA-1 certificate fingerprint:, how "dangerous" is that? I'm new to android, I have seen many people hiding their SHA-1 certificate fingerprint. I have developed an app using google play services and shared it with someone. It has my SHA-1 certificate fingerprint in it. Can a hacker do any damage knowing my SHA-1 Certificate.
Thanks.
A: The certificate fingerprint is calculated from the certificate. The certificate itself is public information and transferred in clear during the SSL/TLS handshake. Which makes the fingerprint public information too, i.e. there is usually no danger in having it known by others.
But one could probably construct a situation where this might be dangerous. For example if your application uses the fingerprint to verify that it connects to the correct site and this site is an illegal site and you know this. In this case one could probably try to associate you with illegal activities from the fact that you've included the fingerprint of this certificate in your application.
A: No, they can't do any harm to you by knowing this information.
In public key cryptography there are two pieces of information used for encryption and decryption of data: the private key, which you keep and do not share with anyone, and the public key. You can share the public key with others, so they can send you encrypted data.
The SHA1 fingerprint is just a fingerprint of that public key, so anyone who has your public key can verify that it is really your key and not someone else's.
| stackoverflow | {
"language": "en",
"length": 255,
"provenance": "stackexchange_0000F.jsonl.gz:876218",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577162"
} |
0e56676123a445717912afa8f3f4bd7dc027f6d5 | Stackoverflow Stackexchange
Q: react-bootstrap NavBar padding I am using the react-bootstrap library to construct a nav bar at the top of my page using Navbar import. However, the bar seems to have some padding at the bottom which I do not want and can't seem to remove, as seen here (Yellow on bottom. Also navbar component does not seem to span the entire page [as seen by white space on either side of bar]; not sure of why this is either):
I would like the bar to span the page and have no padding on the bottom.
My render method is as follows:
render() {
if(Auth.isUserAuthenticated() && this.props.location.pathname === '/') {
return <div/>;
}
return (
<span className="nav-bar">
<Navbar inverse className="fixed-top collapseOnSelect nav-bar">
<Navbar.Collapse>
<Navbar.Header className="locl-link">
<Navbar.Brand>
<LinkContainer to="/">
<a>locl</a>
</LinkContainer>
</Navbar.Brand>
<Navbar.Toggle />
</Navbar.Header>
<BootstrapNav>
<LinkContainer to="/about">
<NavItem active={this.linkActive("about")}>About</NavItem>
</LinkContainer>
</BootstrapNav>
<BootstrapNav pullRight>
{this.logInOut()}
</BootstrapNav>
</Navbar.Collapse>
</Navbar>
</span>
);
}
Any help would be greatly appreciated.
A: I didn't realise the body tag has a default margin; whoops
| Q: react-bootstrap NavBar padding I am using the react-bootstrap library to construct a nav bar at the top of my page using Navbar import. However, the bar seems to have some padding at the bottom which I do not want and can't seem to remove, as seen here (Yellow on bottom. Also navbar component does not seem to span the entire page [as seen by white space on either side of bar]; not sure of why this is either):
I would like the bar to span the page and have no padding on the bottom.
My render method is as follows:
render() {
if(Auth.isUserAuthenticated() && this.props.location.pathname === '/') {
return <div/>;
}
return (
<span className="nav-bar">
<Navbar inverse className="fixed-top collapseOnSelect nav-bar">
<Navbar.Collapse>
<Navbar.Header className="locl-link">
<Navbar.Brand>
<LinkContainer to="/">
<a>locl</a>
</LinkContainer>
</Navbar.Brand>
<Navbar.Toggle />
</Navbar.Header>
<BootstrapNav>
<LinkContainer to="/about">
<NavItem active={this.linkActive("about")}>About</NavItem>
</LinkContainer>
</BootstrapNav>
<BootstrapNav pullRight>
{this.logInOut()}
</BootstrapNav>
</Navbar.Collapse>
</Navbar>
</span>
);
}
Any help would be greatly appreciated.
A: I didn't realise the body tag has a default margin; whoops
A: Just like @Zero wrote, the .navbar class has a margin-bottom: 20px; property. You will need to override it, or set margin-top: -20px; on the element below if you want to keep the margin for some other view.
As for the padding on the right side - it's reserved for a vertical scroll bar. I faced the same issue when I was using react-sidebar.
A: The space below navbar is coming from .navbar class in bootstrap.css. You can remove the margin-bottom: 20px; in bootstrap.css.
If you’re using bootstrap.css via CDN you can add style to your navbar, like so:
<Navbar style={{marginBottom: "0"}} inverse className="fixed-top collapseOnSelect nav-bar">
Regarding your problem with white space on either side of the navbar: I think the problem is with how you render the element into the DOM. You should check the root DOM node in your HTML, maybe it has padding or margin. It could also come from the .nav-bar class on your span, or some other class.
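As a concrete override (a sketch, not from the original answers), assuming a global stylesheet that is loaded after bootstrap.css:
/* Remove Bootstrap's default space below the navbar */
.navbar {
    margin-bottom: 0;
}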
| stackoverflow | {
"language": "en",
"length": 325,
"provenance": "stackexchange_0000F.jsonl.gz:876234",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577222"
} |
f73c8fc3f3ae4ca2893cdaa56a88e245c52e454e | Stackoverflow Stackexchange
Q: Google Places API Place Description/Summary Using the Google Places API I cannot seem to get the description of a place whether through a nearby_search or a details_search. Please look at the attached picture for what I am wanting to pull from the JSON. This information must be coming from somewhere, it's just a question of where.
Example Picture:
This has been asked here: Displaying a Place Description on Google Places API for iOS but the answers are not adequate and link to documentation that I've read many times over and can't seem to find it.
A: Unfortunately, the mentioned description/summary is not available via the Places API at the moment.
There is a corresponding feature request in the public issue tracker:
https://issuetracker.google.com/issues/35827225
Please star the feature request to express your interest and subscribe to further notifications from Google.
| Q: Google Places API Place Description/Summary Using the Google Places API I cannot seem to get the description of a place whether through a nearby_search or a details_search. Please look at the attached picture for what I am wanting to pull from the JSON. This information must be coming from somewhere, it's just a question of where.
Example Picture:
This has been asked here: Displaying a Place Description on Google Places API for iOS but the answers are not adequate and link to documentation that I've read many times over and can't seem to find it.
A: Unfortunately, the mentioned description/summary is not available via the Places API at the moment.
There is a corresponding feature request in the public issue tracker:
https://issuetracker.google.com/issues/35827225
Please star the feature request to express your interest and subscribe to further notifications from Google.
A: I believe what you're looking for is the review_summary field, part of the extensions parameter which was unfortunately deprecated a little over a month ago.
You could try HTML parsing the section-editorial div class.
A: This was added to the places API as PlaceEditorialSummary just last month.
It is not yet in any of the SDKs, so you need to do a raw HTTP request to get the info. Examples are here.
I couldn't find any issues tracking adding that field to the SDKs, so I created one.
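A sketch of such a raw request against the Place Details web service (the field name editorial_summary and the placeholder values are assumptions, not confirmed by the answers above):
curl "https://maps.googleapis.com/maps/api/place/details/json?place_id=PLACE_ID&fields=editorial_summary&key=YOUR_API_KEY"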
| stackoverflow | {
"language": "en",
"length": 228,
"provenance": "stackexchange_0000F.jsonl.gz:876243",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577246"
} |
bac2567d8e9133f552a6d9783dba1ab04d60ae18 | Stackoverflow Stackexchange
Q: Replacement for Jenkins Scriptler plugin? It looks like the Jenkins Scriptler plugin is no longer available, due to security reasons: https://wiki.jenkins-ci.org/display/JENKINS/Scriptler+Plugin
"Distribution of This Plugin Has Been Suspended"
Is there a similar plugin that I could use to run saved Groovy scripts?
A: Hi, you can store your Groovy scripts in Managed Files and pass parameters to the Groovy script through the Extended Choice Parameter plugin.
Alternatively you can download the Scriptler plugin source code, add it to your /var/lib/jenkins/plugins folder and start the Jenkins server. It will work fine.
| Q: Replacement for Jenkins Scriptler plugin? It looks like the Jenkins Scriptler plugin is no longer available, due to security reasons: https://wiki.jenkins-ci.org/display/JENKINS/Scriptler+Plugin
"Distribution of This Plugin Has Been Suspended"
Is there a similar plugin that I could use to run saved Groovy scripts?
A: Hi, you can store your Groovy scripts in Managed Files and pass parameters to the Groovy script through the Extended Choice Parameter plugin.
Alternatively you can download the Scriptler plugin source code, add it to your /var/lib/jenkins/plugins folder and start the Jenkins server. It will work fine.
A: For removed plugins: You can find the .hpi file on archive sites on the internet, and then in Jenkins in Manage Plugins, use the "Advanced" tab > "Upload Plugin" to install it.
| stackoverflow | {
"language": "en",
"length": 123,
"provenance": "stackexchange_0000F.jsonl.gz:876254",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577281"
} |
f8602f881ef63fa8306a9ae684d726cff8b47ad0 | Stackoverflow Stackexchange
Q: how can I center an image in ionic 2 app hello, I have some pages in an Ionic 2 app that have an <ion-img> inside an <ion-content>, like this
<ion-content padding>
<p>Some text here....</p>
<p>Some other text here...</p>
<ion-img width="180" height="180" src="assets/images/goal.jpg"></ion-img>
<p>bottom text here...</p>
</ion-content>
I need to see the image centered horizontally.. I have tested some css but without luck.. how can I achieve that?
A: You can use ionic CSS utilities to align center by applying the attribute text-center to the parent element of the one you want to center horizontally.
Here is an example:
<ion-content text-center>
<img src="assets/imgs/logo.png" width="128" />
</ion-content>
In your case I would wrap the <img> in a <div> so that it affects only the image and not the <p> elements.
Like this:
<ion-content padding>
<p>Some text here....</p>
<p>Some other text here...</p>
<div text-center>
<ion-img width="180" height="180" src="assets/images/goal.jpg"></ion-img>
</div>
<p>bottom text here...</p>
</ion-content>
| Q: how can I center an image in ionic 2 app hello, I have some pages in an Ionic 2 app that have an <ion-img> inside an <ion-content>, like this
<ion-content padding>
<p>Some text here....</p>
<p>Some other text here...</p>
<ion-img width="180" height="180" src="assets/images/goal.jpg"></ion-img>
<p>bottom text here...</p>
</ion-content>
I need to see the image centered horizontally.. I have tested some css but without luck.. how can I achieve that?
A: You can use ionic CSS utilities to align center by applying the attribute text-center to the parent element of the one you want to center horizontally.
Here is an example:
<ion-content text-center>
<img src="assets/imgs/logo.png" width="128" />
</ion-content>
In your case I would wrap the <img> in a <div> so that it affects only the image and not the <p> elements.
Like this:
<ion-content padding>
<p>Some text here....</p>
<p>Some other text here...</p>
<div text-center>
<ion-img width="180" height="180" src="assets/images/goal.jpg"></ion-img>
</div>
<p>bottom text here...</p>
</ion-content>
A: <ion-content text-center>
<p align="center"><ion-img src="assets/imgs/logo.png" width="128"></ion-img></p>
</ion-content>
| stackoverflow | {
"language": "en",
"length": 158,
"provenance": "stackexchange_0000F.jsonl.gz:876276",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577334"
} |
a001875db2841b1498c49e189029cd64561f3466 | Stackoverflow Stackexchange
Q: How do I automatically tail (delete) older logs using Serilog in a .Net WPF application? I'm using Serilog in a .Net WPF application.
Is there a way that I can "tail" (delete) the log files automatically when they are over N days old?
A: https://github.com/serilog/serilog-sinks-rollingfile/blob/dev/README.md
Look there. You can configure autocreation of a new log file every day and also you can set how many of them you want to be kept
| Q: How do I automatically tail (delete) older logs using Serilog in a .Net WPF application? I'm using Serilog in a .Net WPF application.
Is there a way that I can "tail" (delete) the log files automatically when they are over N days old?
A: https://github.com/serilog/serilog-sinks-rollingfile/blob/dev/README.md
Look there. You can configure autocreation of a new log file every day and also you can set how many of them you want to be kept
A: Now you can also specify a property retainedFileTimeLimit:
https://github.com/serilog/serilog-sinks-file/pull/90
By the way, don't forget to specify retainedFileCountLimit: null if you want the limit to be based only on the date. With the current implementation the default value of retainedFileCountLimit is 31, so if you leave the parameter out, that filter will also be applied.
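A minimal sketch of combining both settings in code, assuming a Serilog.Sinks.File version that includes the retainedFileTimeLimit parameter from the pull request linked above:
var log = new LoggerConfiguration()
    .WriteTo.File("log.txt",
        rollingInterval: RollingInterval.Day,        // start a new file each day
        retainedFileCountLimit: null,                // no limit on the number of files
        retainedFileTimeLimit: TimeSpan.FromDays(7)) // delete files older than 7 days
    .CreateLogger();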
A: According to the documentation, the default value of retainedFileCountLimit is 31 so only the most recent 31 files are kept by default.
To change the amount of files kept in code:
var log = new LoggerConfiguration()
.WriteTo.File("log.txt", retainedFileCountLimit: 42)
.CreateLogger();
pass null to remove the limit.
In XML <appSettings> configuration:
<appSettings>
<add key="serilog:using:File" value="Serilog.Sinks.File" />
<add key="serilog:write-to:File.path" value="log.txt" />
<add key="serilog:write-to:File.retainedFileCountLimit" value="42"/>
</appSettings>
and pass an empty string to remove the limit.
In JSON appsettings.json configuration
{
"Serilog": {
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "log.txt",
"retainedFileCountLimit": "42"
}
}
]
}
}
and pass an empty string to remove the limit.
Note that I have not tested the JSON configuration.
| stackoverflow | {
"language": "en",
"length": 237,
"provenance": "stackexchange_0000F.jsonl.gz:876278",
"question_score": "25",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577336"
} |
f7e246b9d1b7e6716340401b1a5ae3650ac0af45 | Stackoverflow Stackexchange
Q: Piping docker run container ID to docker exec In my development, I find myself issuing a docker run command followed by a docker exec command on the resulting container ID quite frequently. It's a little annoying to have to copy/paste the container ID between commands, so I was trying to pipe the container ID into my docker exec command.
Here's my example command.
docker run -itd image | xargs -i docker exec -it {} bash
This starts the container, but then I get the following error.
the input device is not a TTY
Does anyone have any idea how to get around this?
Edit: I also forgot to mention I have an ENTRYPOINT defined and cannot override that.
A: Do this instead:
ID=$(docker run -itd image) && docker exec -it $ID bash
Because xargs executes its arguments without allocating a new tty.
| Q: Piping docker run container ID to docker exec In my development, I find myself issuing a docker run command followed by a docker exec command on the resulting container ID quite frequently. It's a little annoying to have to copy/paste the container ID between commands, so I was trying to pipe the container ID into my docker exec command.
Here's my example command.
docker run -itd image | xargs -i docker exec -it {} bash
This starts the container, but then I get the following error.
the input device is not a TTY
Does anyone have any idea how to get around this?
Edit: I also forgot to mention I have an ENTRYPOINT defined and cannot override that.
A: Do this instead:
ID=$(docker run -itd image) && docker exec -it $ID bash
Because xargs executes its arguments without allocating a new tty.
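Alternatively (a sketch, not from the original answers), giving the container a name avoids handling the ID at all; the name mycontainer is an arbitrary example:
docker run -itd --name mycontainer image && docker exec -it mycontainer bash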
A: If you just want to "bash"-into the container you do not have to pass the container-id around. You can simply run
docker run -it --rm <image> /bin/bash
For example, if we take the ubuntu base image
docker run -it --rm ubuntu /bin/bash
root@f80f83eec0d4:/#
from the documentation
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
--rm : Automatically remove the container when it exits
The command /bin/bash overwrites the default command that is specified with the CMD instruction in the Dockerfile.
| stackoverflow | {
"language": "en",
"length": 230,
"provenance": "stackexchange_0000F.jsonl.gz:876280",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577344"
} |
069e0e6303c5f32bdadc0f43cd284cf8ce53491d | Stackoverflow Stackexchange
Q: How to upload files in Laravel directly into public folder? The server on which I'm hosting my website does not support symlinks, so I cannot run php artisan storage:link to link my storage directory into the public directory. I tried to remake the disk configuration in filesystems.php to directly reference the public folder, but it didn't seem to work either. Is there a way to upload a file using Laravel libraries directly into the public folder, or will I have to use a plain PHP method?
A: You can create a new storage disc in config/filesystems.php:
'public_uploads' => [
'driver' => 'local',
'root' => public_path() . '/uploads',
],
And store files like this:
if(!Storage::disk('public_uploads')->put($path, $file_content)) {
return false;
}
| Q: How to upload files in Laravel directly into public folder? The server on which I'm hosting my website does not support symlinks, so I cannot run php artisan storage:link to link my storage directory into the public directory. I tried to remake the disk configuration in filesystems.php to directly reference the public folder, but it didn't seem to work either. Is there a way to upload a file using Laravel libraries directly into the public folder, or will I have to use a plain PHP method?
A: You can create a new storage disc in config/filesystems.php:
'public_uploads' => [
'driver' => 'local',
'root' => public_path() . '/uploads',
],
And store files like this:
if(!Storage::disk('public_uploads')->put($path, $file_content)) {
return false;
}
A: inside config/filesystem.php, add this :
'public_uploads' => [
'driver' => 'local',
'root' => public_path(),
],
and in the controller
$file = $request->file("contract");
$ext = $file->extension();
$filename = $file->storeAs('/contracts/', $contract->title.'.' . $ext,['disk' => 'public_uploads']);
A: You can pass disk to method of \Illuminate\Http\UploadedFile class:
$file = request()->file('uploadFile');
$file->store('toPath', ['disk' => 'public']);
or you can create new Filesystem disk and you can save it to that disk.
You can create a new storage disk in config/filesystems.php:
'my_files' => [
'driver' => 'local',
'root' => public_path() . '/myfiles',
],
in controller:
$file = request()->file('uploadFile');
$file->store('toPath', ['disk' => 'my_files']);
A: I figured out the fix to this issue: in config/filesystems.php set 'default' => 'public'. In my case that fixed it for me.
A: You should try this, hoping you have added method="post" enctype="multipart/form-data" to your form. Note that the public path (uploadedimages) the file will be moved to is inside the public folder of your project directory, and that's where the uploaded images will be.
public function store (Request $request) {
$imageName = time().'.'.$request->image->getClientOriginalExtension();
$request->image->move(public_path('/uploadedimages'), $imageName);
// then you can save $imageName to the database
}
A: The most flexible option is to edit the .env file:
FILESYSTEM_DRIVER=public
A: In addition to the other answers here, I would suggest adding another protection layer, because it's a public folder.
For every file in your public folder that requires protection, create a route that verifies access to it, like
/files/images/cat.jpg
with a route definition that looks like /files/images/{image_name}, so you can verify the user against the given file.
After a successful validation you just do
return response($full_server_filepath, 200)->header('Content-Type', 'image/jpeg');
It makes the work a little harder, but much safer.
A: The recommended way is to symlink the storage from the public directory.
To make these files accessible from the web, you should create a
symbolic link from public/storage to storage/app/public. Utilizing
this folder convention will keep your publicly accessible files in one
directory that can be easily shared across deployments when using zero
down-time deployment systems like Envoyer.
To create the symbolic link, you may use the storage:link Artisan
command:
A: Step 1: change setting in Filesystems.php
'disks' => [
'local' => [
'driver' => 'local',
'root' => storage_path('app'),
],
'public' => [
'driver' => 'local',
'root' => public_path('app/public'),
'url' => env('APP_URL').'/storage',
'visibility' => 'public',
],
Step 2: in Controller: Store file like this:
'indoc_file' => $request->indoc_file->store('app/file'),
Step 3: Go to the .env file and change the config like this:
FILESYSTEM_DRIVER=public
After that your file location will be saved in the project like this:
Project\public\app\public\app\file
If you want to make the file downloadable, do this in your view page:
<a href="{{url('/app/public/'.$item->indoc_file)}}" class="btn btn-info" title="Download"><i class="fa fa-download"></i> </a>
A: As an alternative,
let Laravel store the file in the /storage/app folder as usual.
If we want to read the file, just use something like $publicPath = '../storage/app/'.$path;
| stackoverflow | {
"language": "en",
"length": 584,
"provenance": "stackexchange_0000F.jsonl.gz:876294",
"question_score": "42",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577380"
} |
1e9c172f0e9befffc9fdb3c874583fe0f6c6e5f0 | Stackoverflow Stackexchange
Q: Why does npm flash "verb" and "sill" while installing things? I'd like to understand what is intended to be communicated by the words "verb" and "sill" when installing node modules via npm:
⋊> ~/t/quill on develop ◦ npm install 15:35:02
⸨ ░░░░░░░░░░░░⸩ ⠙ fetchMetadata: sill mapToRegistry uri https://registry.npmjs.org/big.js
The sill right before mapToRegistry also changes to verb and back again. What do these mean?
A: I believe this is referring to the silly (sill) and verbose (verb) log levels for npm install. See changelog here.
I am not quite sure how it ascertains which to use, but it is for the npm log files to enable easier debugging for developers.
| Q: Why does npm flash "verb" and "sill" while installing things? I'd like to understand what is intended to be communicated by the words "verb" and "sill" when installing node modules via npm:
⋊> ~/t/quill on develop ◦ npm install 15:35:02
⸨ ░░░░░░░░░░░░⸩ ⠙ fetchMetadata: sill mapToRegistry uri https://registry.npmjs.org/big.js
The sill right before mapToRegistry also changes to verb and back again. What do these mean?
A: I believe this is referring to the silly (sill) and verbose (verb) log levels for npm install. See changelog here.
I am not quite sure how it ascertains which to use, but it is for the npm log files to enable easier debugging for developers.
A: sill indicates the silly log level, in which almost anything is logged. verb indicates the verbose log level, which shows a little less than silly but more than info. According to the official npm documentation, it has the following log levels, sorted from the fewest log messages to the most:
* silent
* error
* warn
* notice
* http
* timing
* info
* verbose
* silly
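If you want to pick the level yourself, the --loglevel flag controls it (a quick illustration, not from the original answers):
npm install --loglevel silly   # show everything, including the sill lines
npm install --loglevel warn    # much quieter output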
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:876308",
"question_score": "85",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577416"
} |
fd592475411deb9cae3fb81d625981a3639d5d78 | Stackoverflow Stackexchange
Q: RabbitMQ EventingBasicConsumer stops receiving messages I'm using RabbitClient as a queue reader singleton in my .NET Core application (publishing to the queue goes through another application). And I'm facing a problem where, after some time, my application stops reading from the queue. I see the queue full of messages, I push more and more, but the reader app does not read until I restart it. Here is the code of RabbitClient:
private readonly IConnection _connection;
public RabbitClient(IMessagingConfiguration configuration)
{
IConnectionFactory connectionFactory = new ConnectionFactory { Uri = _configuration.ConnectionString };
_connection = connectionFactory.CreateConnection();
}
public void OnMessageAsync<T>(Func<T, Task> callback) where T : class
{
T result;
var queueName = _configuration.QueueName;
var channel = _connection.CreateModel();
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, args) =>
{
var json = Encoding.UTF8.GetString(args.Body);
result = JsonConvert.DeserializeObject<T>(json);
callback(result).GetAwaiter().GetResult();
channel.BasicAck(args.DeliveryTag, false);
};
channel.BasicConsume(queue: queueName, noAck: false, consumer: consumer);
}
How can I force reader to keep reading without such stops?
| Q: RabbitMQ EventingBasicConsumer stops receiving messages I'm using RabbitClient as a queue reader singleton in my .NET Core application (publishing to the queue goes through another application). And I'm facing a problem where, after some time, my application stops reading from the queue. I see the queue full of messages, I push more and more, but the reader app does not read until I restart it. Here is the code of RabbitClient:
private readonly IConnection _connection;
public RabbitClient(IMessagingConfiguration configuration)
{
IConnectionFactory connectionFactory = new ConnectionFactory { Uri = _configuration.ConnectionString };
_connection = connectionFactory.CreateConnection();
}
public void OnMessageAsync<T>(Func<T, Task> callback) where T : class
{
T result;
var queueName = _configuration.QueueName;
var channel = _connection.CreateModel();
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (model, args) =>
{
var json = Encoding.UTF8.GetString(args.Body);
result = JsonConvert.DeserializeObject<T>(json);
callback(result).GetAwaiter().GetResult();
channel.BasicAck(args.DeliveryTag, false);
};
channel.BasicConsume(queue: queueName, noAck: false, consumer: consumer);
}
How can I force reader to keep reading without such stops?
| stackoverflow | {
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:876312",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577429"
} |
afffd76378efe3262e4cd14ffc6bc5832d53be9b | Stackoverflow Stackexchange
Q: extract text between single quotes in multi-line variable with sed I am trying to write a script. I use a remote procedure call with xmlrpc to get a url. I assign the output as a variable all in one command like url=$(xmlrpc 192.168.1.1 command...) for example. This is the output of the procedure call when output to a file:
Result:
String: 'http://example.url'
It all shows up on one line with echo $url. When I try to extract the URL between the single quotes with sed s/^.*'\(.*\)'.*$/\1/ $url I get the following:
sed: can't read Result:: no such file or directory
sed: can't read String:: no such file or directory
sed: can't read http://example.url: no such file or directory
Maybe the multiple lines are the problem. I get a similar error with grep -oP "(?<=').*?(?=')" $url. Any ideas? I just want to extract the URL.
A: With bash regular expresssions:
[[ "$url" =~ \'(http:[^\']*)\' ]] && echo ${BASH_REMATCH[1]}
http://example.url
| Q: extract text between single quotes in multi-line variable with sed I am trying to write a script. I use a remote procedure call with xmlrpc to get a url. I assign the output as a variable all in one command like url=$(xmlrpc 192.168.1.1 command...) for example. This is the output of the procedure call when output to a file:
Result:
String: 'http://example.url'
It all shows up on one line with echo $url. When I try to extract the URL between the single quotes with sed s/^.*'\(.*\)'.*$/\1/ $url I get the following:
sed: can't read Result:: no such file or directory
sed: can't read String:: no such file or directory
sed: can't read http://example.url: no such file or directory
Maybe the multiple lines are the problem. I get a similar error with grep -oP "(?<=').*?(?=')" $url. Any ideas? I just want to extract the URL.
A: With bash regular expresssions:
[[ "$url" =~ \'(http:[^\']*)\' ]] && echo ${BASH_REMATCH[1]}
http://example.url
A: sed -n "s/^.*'\(.*\)'.*$/\1/ p" <<< ${url}
A: I think you should give native bash regex and replacement a try. If you really want to extract the URL from your multiline variable:
$ echo "$url"
Results:
String: 'http://example.url'
with sed, then you can use something like this:
$ url=$(sed -n "s/^.*'\(.*\)'.*$/\1/p" <<< $var)
$ echo "$url"
http://example.url
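As a side note (a sketch, not from the original answers): the grep attempt in the question failed because $url was passed as a filename; feeding the variable to grep on stdin via a here-string makes the same pattern work:
$ grep -oP "(?<=').*?(?=')" <<< "$url"
http://example.url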
| stackoverflow | {
"language": "en",
"length": 217,
"provenance": "stackexchange_0000F.jsonl.gz:876313",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577440"
} |
bdc4f7050f59d222883857f29cb1596fee7cf68a | Stackoverflow Stackexchange
Q: Gremlin query to sum 2 or more values I need to write a gremlin query that can sum 2 or more properties on a set of vertices and return them as separate values.
Statement 1:
g.V().has('label1').values('p1').sum()
This will return the sum of the p1 values of all the 'label1' vertices - let's say that's 100
Statement 2:
g.V().has('label1').values('p2').sum()
This will return the sum of the p2 values of all the 'label1' vertices - let's say that's 200
Statement 3:
g.V().has('label1').values('p1','p2').sum()
Tried the above statement, but as expected that doesn't work, it will return 300, the sum of all p1, and p2 properties of 'label1'
I need a query that will return the sum of p1, and the sum p2 in one result. Maybe there's a way I could "collapse" or fold a set of vertices into one vertex and aggregate properties by certain rules... in my case for sum.
A: You can group all values by their key:
g.V().has("label1").properties("p1","p2").
group().by(key).by(value().sum())
EDIT
Proof that it works:
gremlin> g = TinkerGraph.open().traversal()
==>graphtraversalsource[tinkergraph[vertices:0 edges:0], standard]
gremlin> g.addV().property("p1", 1).property("p2", 10).iterate()
gremlin> g.addV().property("p1", 2).property("p2", 20).iterate()
gremlin> g.V().properties("p1", "p2").group().by(key).by(value().sum())
==>[p1:3,p2:30]
| Q: Gremlin query to sum 2 or more values I need to write a gremlin query that can sum 2 or more properties on a set of vertices and return them as separate values.
Statement 1:
g.V().has('label1').values('p1').sum()
This will return the sum of the p1 values of all the 'label1' vertices - let's say that's 100
Statement 2:
g.V().has('label1').values('p2').sum()
This will return the sum of the p2 values of all the 'label1' vertices - let's say that's 200
Statement 3:
g.V().has('label1').values('p1','p2').sum()
Tried the above statement, but as expected that doesn't work, it will return 300, the sum of all p1, and p2 properties of 'label1'
I need a query that will return the sum of p1, and the sum p2 in one result. Maybe there's a way I could "collapse" or fold a set of vertices into one vertex and aggregate properties by certain rules... in my case for sum.
A: You can group all values by their key:
g.V().has("label1").properties("p1","p2").
group().by(key).by(value().sum())
EDIT
Proof that it works:
gremlin> g = TinkerGraph.open().traversal()
==>graphtraversalsource[tinkergraph[vertices:0 edges:0], standard]
gremlin> g.addV().property("p1", 1).property("p2", 10).iterate()
gremlin> g.addV().property("p1", 2).property("p2", 20).iterate()
gremlin> g.V().properties("p1", "p2").group().by(key).by(value().sum())
==>[p1:3,p2:30]
A: Based on Daniel's original answer, this is the query that solved it for me:
g.V().has('label1').union(values('p1').sum(), values('p2').sum())
| stackoverflow | {
"language": "en",
"length": 202,
"provenance": "stackexchange_0000F.jsonl.gz:876322",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577483"
} |
1eb8a6753e7bcc26c1171a6134036da246ca1825 | Stackoverflow Stackexchange
Q: React-Native: Facebook and Google Login I am currently trying to implement Facebook and google login for a react-native app for ios and android. I must say, it is much less user-friendly than ionic for example. I have seen some libraries trying to implement this, but they all seem not to be maintained anymore.
Is there any common, reliable and stable solution that is easy to implement (if not easy to implement, really any solution that will work), to implement Facebook and/or Google login for react-native apps?
A: The fbsdk is the best option for Facebook obviously.
For Google: I'm already using react-native-google-signin. It does (at least for Android) work as expected. It is a bit tricky to install, but there is a good FAQ section provided by the authors.
| Q: React-Native: Facebook and Google Login I am currently trying to implement Facebook and google login for a react-native app for ios and android. I must say, it is much less user-friendly than ionic for example. I have seen some libraries trying to implement this, but they all seem not to be maintained anymore.
Is there any common, reliable and stable solution that is easy to implement (if not easy to implement, really any solution that will work), to implement Facebook and/or Google login for react-native apps?
A: The fbsdk is the best option for Facebook obviously.
For Google: I'm already using react-native-google-signin. It does (at least for Android) work as expected. It is a bit tricky to install, but there is a good FAQ section provided by the authors.
A: https://github.com/react-native-community/react-native-google-signin seems to be maintained well nowadays, and last week only I implemented it in a production react native app.
So would recommend that for Google authentication.
A: For Google Login:
I tried both https://github.com/devfd/react-native-google-signin and https://github.com/joonhocho/react-native-google-sign-in. And neither of them works properly! I doubt they're maintained anymore.
The final correct solution is https://github.com/fullstackreact/react-native-oauth. It has a very good installation guide and worked very well for my project. It also supports auth with other providers like Facebook, Twitter, Slack, ...
Btw, for Facebook Login, https://github.com/facebook/react-native-fbsdk also works nicely, despite its complex installation.
A: Have you had an answer? I'm also looking for a library to implement Google auth in React Native and did not find a suitable one, but for Facebook login you can use this.
Because it's made by Facebook, I think it will be well maintained.
A: I tried using react-native-oauth. Maybe it was once a great option, but now the documentation on github is outdated. The documentation says to use Identity Toolkit API, which has now shifted to Firebase, which already creates problems. From the api home page:
The newest version of Google Identity Toolkit has been released as
Firebase Authentication.
New projects should use Firebase Authentication. To migrate an
existing project from Identity Toolkit to Firebase Authentication, see
the migration guide.
So the next I found was react-native-google-signin. It has a hefty procedure but this medium article was a great help to implement it within minutes if you don't want to get into much detail.
For facebook, fbsdk is the best one to use.
So the best options would be:
Google: react-native-google-signin
Facebook: fbsdk
A: You can consider using react-social-login. It supports Amazon, Facebook, GitHub, Google, Instagram and LinkedIn as providers. Please refer below link for more details.
https://www.npmjs.com/package/react-social-login
| stackoverflow | {
"language": "en",
"length": 425,
"provenance": "stackexchange_0000F.jsonl.gz:876327",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577495"
} |
a8f9405774e2b88faff2d1a3e3754b74712be9c6 | Stackoverflow Stackexchange
Q: Py2exe - Can't run a .exe created on Windows 10 with a Windows 7 computer I created a .exe file using Py2exe on Windows 10 but when I try to run it on a Windows 7 computer it says that the os version is wrong.
Can anyone tell me how to fix this? (like using another Python or Py2exe version or setting a specific configuration inside setup.py)
A: I solved the problem myself and I'm going to share the answer if someone ever has the same mistake. I just had to download a 32-bit version of Canopy (with Python 2.7) and py2exe in order for them to work on Windows 7.
| Q: Py2exe - Can't run a .exe created on Windows 10 with a Windows 7 computer I created a .exe file using Py2exe on Windows 10 but when I try to run it on a Windows 7 computer it says that the os version is wrong.
Can anyone tell me how to fix this? (like using another Python or Py2exe version or setting a specific configuration inside setup.py)
A: I solved the problem myself and I'm going to share the answer in case someone ever runs into the same problem. I just had to download a 32-bit version of Canopy (with Python 2.7) and py2exe in order for them to work on Windows 7.
| stackoverflow | {
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:876348",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577583"
} |
cce842263b7a801be0ccaf1a6e545c49a4413be0 | Stackoverflow Stackexchange
Q: Lambda data frame reference a value in another column How do I correctly reference another column value when using a Lambda in a pandas dataframe.
dfresult_tmp2['Retention_Rolling_temp'] = dfresult_tmp2['Retention_tmp'].apply(lambda x: x if x['Count Billings']/4 < 0.20 else '')
The above code gives me this error.
TypeError: 'float' object is not subscriptable
A:
dfresult_tmp2['Retention_tmp'].apply(
lambda x: x if x['Count Billings'] / 4 < 0.20 else ''
)
You are using pd.Series.apply which is different than pd.DataFrame.apply. In this case, you are iteratively passing a scalar value to the lambda. So some_scalar_x['Count Billings'] makes no sense.
Instead of telling you how to shoehorn your logic into an apply, I'll show you the vectorized versions instead
Option 1
pd.Series.where
dfresult_tmp2['Retention_tmp'] = \
dfresult_tmp2['Retention_tmp'].where(
dfresult_tmp2['Count Billings'] / 4 < .2, '')
Option 2
np.where
r = dfresult_tmp2['Retention_tmp'].values
b = dfresult_tmp2['Count Billings'].values
dfresult_tmp2['Retention_tmp'] = np.where(b / 4 < .2, r, '')
Option 3
apply
What you asked for but not what I'd recommend.
dfresult_tmp2['Retention_tmp'] = dfresult_tmp2.apply(
lambda x: x['Retention_tmp'] if x['Count Billings'] / 4 < .2 else '',
axis=1
)
| Q: Lambda data frame reference a value in another column How do I correctly reference another column value when using a Lambda in a pandas dataframe.
dfresult_tmp2['Retention_Rolling_temp'] = dfresult_tmp2['Retention_tmp'].apply(lambda x: x if x['Count Billings']/4 < 0.20 else '')
The above code gives me this error.
TypeError: 'float' object is not subscriptable
A:
dfresult_tmp2['Retention_tmp'].apply(
lambda x: x if x['Count Billings'] / 4 < 0.20 else ''
)
You are using pd.Series.apply which is different than pd.DataFrame.apply. In this case, you are iteratively passing a scalar value to the lambda. So some_scalar_x['Count Billings'] makes no sense.
Instead of telling you how to shoehorn your logic into an apply, I'll show you the vectorized versions instead
Option 1
pd.Series.where
dfresult_tmp2['Retention_tmp'] = \
dfresult_tmp2['Retention_tmp'].where(
dfresult_tmp2['Count Billings'] / 4 < .2, '')
Option 2
np.where
r = dfresult_tmp2['Retention_tmp'].values
b = dfresult_tmp2['Count Billings'].values
dfresult_tmp2['Retention_tmp'] = np.where(b / 4 < .2, r, '')
Option 3
apply
What you asked for but not what I'd recommend.
dfresult_tmp2['Retention_tmp'] = dfresult_tmp2.apply(
lambda x: x['Retention_tmp'] if x['Count Billings'] / 4 < .2 else '',
axis=1
)
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:876361",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577622"
} |
686cafd6d39ac49c1f964f5d81a4b81730a34c9d | Stackoverflow Stackexchange
Q: How to get method name in spring filter I want to retrieve the method name or class name from the request in a filter method.
Please let me know any possible ways.
Thanks in advance.
A: I am able to achieve this using the snippet below. Thanks for your time.
RequestMappingHandlerMapping mappings1 = (RequestMappingHandlerMapping) ApplicationContextHolder.getBean("requestMappingHandlerMapping");
Map<RequestMappingInfo, HandlerMethod> handlerMethods = mappings1.getHandlerMethods();
HandlerExecutionChain handler = mappings1.getHandler(httpServletRequest);
if (Objects.nonNull(handler)) {
    HandlerMethod handlerMethod = (HandlerMethod) handler.getHandler();
    // the controller method and class that will handle this request:
    String methodName = handlerMethod.getMethod().getName();
    String className = handlerMethod.getBeanType().getName();
}
| Q: How to get method name in spring filter I want to retrieve the method name or class name from the request in a filter method.
Please let me know any possible ways.
Thanks in advance.
A: I am able to achieve this using the snippet below. Thanks for your time.
RequestMappingHandlerMapping mappings1 = (RequestMappingHandlerMapping) ApplicationContextHolder.getBean("requestMappingHandlerMapping");
Map<RequestMappingInfo, HandlerMethod> handlerMethods = mappings1.getHandlerMethods();
HandlerExecutionChain handler = mappings1.getHandler(httpServletRequest);
if (Objects.nonNull(handler)) {
    HandlerMethod handlerMethod = (HandlerMethod) handler.getHandler();
    // the controller method and class that will handle this request:
    String methodName = handlerMethod.getMethod().getName();
    String className = handlerMethod.getBeanType().getName();
}
A: From what I could understand from the question, you are looking to retrieve some information from Web filters (like OncePerRequestFilter of Spring or so)
retrieve the method name or class name from request in filter method
At this point there is no class or method yet; it's just an HTTP request input stream. All you can do is byte-level operations, read it as a string and do String operations, or deserialize it into something more structured (for example, using Jackson to deserialize a JSON string into Java objects, or JAXB to deserialize XML into Java objects).
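For completeness, a sketch of an alternative that is not taken from the answers above: a Spring HandlerInterceptor receives the resolved handler directly, so no lookup through RequestMappingHandlerMapping is needed.
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.HandlerInterceptor;

public class MethodNameInterceptor implements HandlerInterceptor {
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        if (handler instanceof HandlerMethod) {
            HandlerMethod handlerMethod = (HandlerMethod) handler;
            String className = handlerMethod.getBeanType().getName(); // controller class
            String methodName = handlerMethod.getMethod().getName();  // controller method name
            // use className / methodName as needed (logging, MDC, ...)
        }
        return true; // let the request continue
    }
}
The interceptor still has to be registered (for example via a WebMvcConfigurer's addInterceptors), and unlike a plain Filter it runs after handler mapping, which is why the handler is already resolved at that point.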
| stackoverflow | {
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:876364",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577628"
} |
7d6d9f3da80db18f4314ddf658d3be5c71829a7f | Stackoverflow Stackexchange
Q: .NET Get embedded Resource File I have an embedded Resource File:
I need to open it as a Stream.
What I've tried (did not work, stream is null):
var assembly = Assembly.GetExecutingAssembly();
using (var stream = assembly.GetManifestResourceStream("client_secret.json"))
Any ideas or suggestions?
Edit: What I'm doing with it:
using (var stream = assembly.GetManifestResourceStream("client_secret.json"))
{
credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
GoogleClientSecrets.Load(stream).Secrets,
// This OAuth 2.0 access scope allows an application to upload files to the
// authenticated user's YouTube channel, but doesn't allow other types of access.
new[] { YouTubeService.Scope.YoutubeUpload },
"user",
CancellationToken.None
);
}
A: Right-Click your Project in Solution Explorer -> Add -> New Item -> Resources File
Then double-click the created file (e.g. Resource1.resx) and add your client_secret.json to it. Now you can access the client_secret.json content with the code below. Note that if you put a JSON file into the resources, you get a byte[] when reading client_secret and must convert it to a string:
var foo= Encoding.UTF8.GetString(Resource1.client_secret);
But if you add a .txt file instead, you can access it with:
var foo= Resource1.txtfile;
| Q: .NET Get embedded Resource File I have an embedded Resource File:
I need to open it as a Stream.
What I've tried (did not work, stream is null):
var assembly = Assembly.GetExecutingAssembly();
using (var stream = assembly.GetManifestResourceStream("client_secret.json"))
Any ideas or suggestions?
Edit: What I'm doing with it:
using (var stream = assembly.GetManifestResourceStream("client_secret.json"))
{
credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
GoogleClientSecrets.Load(stream).Secrets,
// This OAuth 2.0 access scope allows an application to upload files to the
// authenticated user's YouTube channel, but doesn't allow other types of access.
new[] { YouTubeService.Scope.YoutubeUpload },
"user",
CancellationToken.None
);
}
A: Right-Click your Project in Solution Explorer -> Add -> New Item -> Resources File
Then double-click the created file (e.g. Resource1.resx) and add your client_secret.json to it. Now you can access the client_secret.json content with the code below. Note that if you put a JSON file into the resources, you get a byte[] when reading client_secret and must convert it to a string:
var foo= Encoding.UTF8.GetString(Resource1.client_secret);
But if you add a .txt file instead, you can access it with:
var foo= Resource1.txtfile;
A: I solved this, I had to provide the full Namespace of the Resource:
using (var stream = assembly.GetManifestResourceStream("Project.Resources.client_secret.json"))
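If the exact resource name is unclear, a small diagnostic sketch (plain reflection, nothing project specific) prints every embedded resource name so the right one can be copied verbatim:
var assembly = Assembly.GetExecutingAssembly();

// Embedded resource names follow the pattern <DefaultNamespace>.<Folder>.<FileName>
foreach (var name in assembly.GetManifestResourceNames())
{
    Console.WriteLine(name); // e.g. "Project.Resources.client_secret.json"
}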
| stackoverflow | {
"language": "en",
"length": 195,
"provenance": "stackexchange_0000F.jsonl.gz:876392",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577716"
} |
72aef57fa26d231b490300379a43b0ed6e4dea45 | Stackoverflow Stackexchange
Q: File extension for serialized protobuf output Seems odd that I can't find the answer to this, but what file extension are you supposed to use when storing serialized protobuf output in a file? Just .protobuf? The json equivalent of what I am talking about would be a .json file.
A: I just use .bin, but there's no actual standard here AFAIK. If protoc -o (which emits a .proto schema in protobuf binary format as a FileDescriptorSet) had taken a directory like all the other output options do, we could have used that as a de-facto answer, but protoc -o is unusual in that it takes a file instead. In an old post on the protobuf group, Kenton Varda (one of the original authors) suggests that the file extension should be implementation specific (meaning: you decide) rather than simply referring to the format: https://groups.google.com/forum/#!topic/protobuf/JWZx9n8CUvw
| Q: File extension for serialized protobuf output Seems odd that I can't find the answer to this, but what file extension are you supposed to use when storing serialized protobuf output in a file? Just .protobuf? The json equivalent of what I am talking about would be a .json file.
A: I just use .bin, but there's no actual standard here AFAIK. If protoc -o (which emits a .proto schema in protobuf binary format as a FileDescriptorSet) had taken a directory like all the other output options do, we could have used that as a de-facto answer, but protoc -o is unusual in that it takes a file instead. In an old post on the protobuf group, Kenton Varda (one of the original authors) suggests that the file extension should be implementation specific (meaning: you decide) rather than simply referring to the format: https://groups.google.com/forum/#!topic/protobuf/JWZx9n8CUvw
| stackoverflow | {
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:876396",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577730"
} |
fddded56ae3395f20a96b5880816e2ba4a3bfd2c | Stackoverflow Stackexchange
Q: JSF 2.3 schemas http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_3.xsd seems not to exist; 2.2 works fine.
<faces-config
xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_3.xsd"
version="2.3">
Any thoughts?
Application works fine, but IntelliJ shows everything in Red since cannot validate schema.
A: On the newest IntelliJ version (2017.2, though I think it will work with older versions as well), set the cursor inside "http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_3.xsd", hit ALT+ENTER (on a Mac) to open the quick-fix popup, and select "Fetch external resource". After that, everything inside the faces-config is recognized correctly and you can set the version to "2.3".
| Q: JSF 2.3 schemas http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_3.xsd seems not to exist; 2.2 works fine.
<faces-config
xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_3.xsd"
version="2.3">
Any thoughts?
Application works fine, but IntelliJ shows everything in Red since cannot validate schema.
A: On the newest IntelliJ version (2017.2, though I think it will work with older versions as well), set the cursor inside "http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_3.xsd", hit ALT+ENTER (on a Mac) to open the quick-fix popup, and select "Fetch external resource". After that, everything inside the faces-config is recognized correctly and you can set the version to "2.3".
A: I think it is related to your IntelliJ IDEA version. JSF 2.3 is part of Java EE 8. IntelliJ IDEA adds support for Java EE 8 in 2017.3 version which is currently in Early Access Program. More info and release notes of new versions available:
https://www.jetbrains.com/idea/nextversion/
| stackoverflow | {
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:876406",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577758"
} |
3416c92f9477086d44de2fab8dbe69a899f8201d | Stackoverflow Stackexchange
Q: Ruby: alias_method for module static method Given this module
module Test
def self.foo(v)
puts "Test.foo with #{v}"
end
end
The following doesn't work
module Test
alias_method :bar, :foo
# ...
end
although it works for instance methods. I get following error
NameError: undefined method `foo' for module `Test'
My goal is to override self.foo as following
def self.foo(v)
self.bar(v + " monkey patched")
end
Is there a way to alias a static method?
A: Test.singleton_class.send(:alias_method, :bar, :foo)
Test.bar("cat")
#=> "Test Foo with cat"
| Q: Ruby: alias_method for module static method Given this module
module Test
def self.foo(v)
puts "Test.foo with #{v}"
end
end
The following doesn't work
module Test
alias_method :bar, :foo
# ...
end
although it works for instance methods. I get following error
NameError: undefined method `foo' for module `Test'
My goal is to override self.foo as following
def self.foo(v)
self.bar(v + " monkey patched")
end
Is there a way to alias a static method?
A: Test.singleton_class.send(:alias_method, :bar, :foo)
Test.bar("cat")
#=> "Test Foo with cat"
| stackoverflow | {
"language": "en",
"length": 83,
"provenance": "stackexchange_0000F.jsonl.gz:876436",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577888"
} |
7744f2c905826803e8ec1b81f5e5e114ac82eb0f | Stackoverflow Stackexchange
Q: Pandas: create rows to fill numeric gaps From a DataFrame like this one:
ref from to
abcd 1 2
efgh 2 4
ijkl 1 3
mnop 3 4
qrst 4 4
uvwx 4 6
The idea would be to "fill gaps" between columns from and to so as to obtain:
ref value
abcd 1
abcd 2
efgh 2
efgh 3
efgh 4
ijkl 1
ijkl 2
ijkl 3
mnop 3
mnop 4
qrst 4
uvwx 4
uvwx 5
uvwx 6
A: A numpy approach
r = df['ref'].values
f = df['from'].values
t = df['to'].values
pd.DataFrame(dict(
ref=r.repeat(t - f + 1),
value=np.concatenate([np.arange(f, t + 1) for f, t in zip(f, t)])
))
ref value
0 abcd 1
1 abcd 2
2 efgh 2
3 efgh 3
4 efgh 4
5 ijkl 1
6 ijkl 2
7 ijkl 3
8 mnop 3
9 mnop 4
10 qrst 4
11 uvwx 4
12 uvwx 5
13 uvwx 6
Timing (the benchmark plot from the original answer is not reproduced here)
| Q: Pandas: create rows to fill numeric gaps From a DataFrame like this one:
ref from to
abcd 1 2
efgh 2 4
ijkl 1 3
mnop 3 4
qrst 4 4
uvwx 4 6
The idea would be to "fill gaps" between columns from and to so as to obtain:
ref value
abcd 1
abcd 2
efgh 2
efgh 3
efgh 4
ijkl 1
ijkl 2
ijkl 3
mnop 3
mnop 4
qrst 4
uvwx 4
uvwx 5
uvwx 6
A: A numpy approach
r = df['ref'].values
f = df['from'].values
t = df['to'].values
pd.DataFrame(dict(
ref=r.repeat(t - f + 1),
value=np.concatenate([np.arange(f, t + 1) for f, t in zip(f, t)])
))
ref value
0 abcd 1
1 abcd 2
2 efgh 2
3 efgh 3
4 efgh 4
5 ijkl 1
6 ijkl 2
7 ijkl 3
8 mnop 3
9 mnop 4
10 qrst 4
11 uvwx 4
12 uvwx 5
13 uvwx 6
Timing (the benchmark plot from the original answer is not reproduced here)
A: You can use groupby ref first, create a Series to fill the gaps and then transform it to a Dataframe and rename the column in the end.
df.groupby('ref').apply(lambda x: pd.Series(range(x['from'],x['to']+1)))\
.reset_index(level=1,drop=True)\
.reset_index()\
.rename(columns={0:'value'})
Out[22]:
ref value
0 abcd 1
1 abcd 2
2 efgh 2
3 efgh 3
4 efgh 4
5 ijkl 1
6 ijkl 2
7 ijkl 3
8 mnop 3
9 mnop 4
10 qrst 4
11 uvwx 4
12 uvwx 5
13 uvwx 6
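For reference, newer pandas versions (0.25+, which postdate this question) make the same expansion quite short with DataFrame.explode; a sketch:
out = (df.assign(value=[list(range(f, t + 1)) for f, t in zip(df['from'], df['to'])])
         .explode('value')[['ref', 'value']]
         .reset_index(drop=True))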
| stackoverflow | {
"language": "en",
"length": 235,
"provenance": "stackexchange_0000F.jsonl.gz:876468",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44577976"
} |
622ce2d16f029eb18ddec0406b31a96721fbb707 | Stackoverflow Stackexchange
Q: Quickest way to select rows from pandas dataframe? I have a pandas dataframe df with millions of rows, and columns A1,..., AN
What is the quickest way to select rows such that df['A1']==30?
Edit: there are at least three methods:
*
*Method 1. df[(df['A1']==30)]
*Method 2. df.query('A1==30')
*Method 3. Do df = df.set_index(A1) once; then df.loc[30] (or df.loc[x] for all x values we try to locate in the column A1)
What are the pros and cons?
A: 50 Million rows and 52 columns
from string import ascii_letters
df = pd.DataFrame(np.random.randint(50, size=(50000000, 52)), columns=list(ascii_letters))
Variety of methods
%timeit df[df.B == 30]
%timeit df[df.B.values == 30]
%timeit df.query('B == 30')
1 loop, best of 3: 31.4 s per loop
1 loop, best of 3: 31.6 s per loop
1 loop, best of 3: 27.1 s per loop
Use numexpr
import numexpr as ne
%%timeit
B = df.B.values
df[ne.evaluate('B == 30')]
1 loop, best of 3: 22.8 s per loop
Or reconstruct the whole thing with numpy slicing in addition to numexpr
%%timeit
B = df.B.values
mask = ne.evaluate('B == 30')
pd.DataFrame(df.values[mask], df.index[mask], df.columns)
1 loop, best of 3: 21.4 s per loop
| Q: Quickest way to select rows from pandas dataframe? I have a pandas dataframe df with millions of rows, and columns A1,..., AN
What is the quickest way to select rows such that df['A1']==30?
Edit: there are at least three methods:
*
*Method 1. df[(df['A1']==30)]
*Method 2. df.query('A1==30')
*Method 3. Do df = df.set_index(A1) once; then df.loc[30] (or df.loc[x] for all x values we try to locate in the column A1)
What are the pros and cons?
A: 50 Million rows and 52 columns
from string import ascii_letters
df = pd.DataFrame(np.random.randint(50, size=(50000000, 52)), columns=list(ascii_letters))
Variety of methods
%timeit df[df.B == 30]
%timeit df[df.B.values == 30]
%timeit df.query('B == 30')
1 loop, best of 3: 31.4 s per loop
1 loop, best of 3: 31.6 s per loop
1 loop, best of 3: 27.1 s per loop
Use numexpr
import numexpr as ne
%%timeit
B = df.B.values
df[ne.evaluate('B == 30')]
1 loop, best of 3: 22.8 s per loop
Or reconstruct the whole thing with numpy slicing in addition to numexpr
%%timeit
B = df.B.values
mask = ne.evaluate('B == 30')
pd.DataFrame(df.values[mask], df.index[mask], df.columns)
1 loop, best of 3: 21.4 s per loop
A: Have you had a look at Enhancing Performance? From there you will see that you get significant speed-ups from:
df.query('A1==30')
There is more information in that link but I am sure this is the easiest to implement.
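For completeness, Method 3 from the question (index once, then look up) would look like the sketch below; the one-time sort/index cost only pays off when the same column is queried many times:
indexed = df.set_index('B').sort_index()  # one-time cost
subset = indexed.loc[30]                  # all rows where B == 30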
| stackoverflow | {
"language": "en",
"length": 230,
"provenance": "stackexchange_0000F.jsonl.gz:876477",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578004"
} |
dc3c1e4f2f52b5abedf33565aea33d0bc4aab88b | Stackoverflow Stackexchange
Q: std::complex in struct that makes compile slow I noticed that the following code is extremely slow to compile: (It can't even finish on my computer)
#include <complex>
struct some_big_struct {
std::complex <double> a[1000000][2];
};
some_big_struct a;
int main () {
return 0;
}
Out of curiosity, I've also tried other alternatives of the code. However, these codes seem to compile just fine on my computer :
#include <complex>
struct some_big_struct {
double a[1000000][2];
};
some_big_struct a;
int main () {
return 0;
}
and
#include <complex>
std::complex <double> a[1000000][2];
int main () {
return 0;
}
I wonder if anyone can share some insight on why such is the case. Thanks!
A: The compiler is probably running the default std::complex constructor while compiling, so that it can put the initialized values of all the array members into the executable, rather than generate code that performs this loop when the program starts. So it's calling the constructor 2 million times while compiling.
| Q: std::complex in struct that makes compile slow I noticed that the following code is extremely slow to compile: (It can't even finish on my computer)
#include <complex>
struct some_big_struct {
std::complex <double> a[1000000][2];
};
some_big_struct a;
int main () {
return 0;
}
Out of curiosity, I've also tried other alternatives of the code. However, these codes seem to compile just fine on my computer :
#include <complex>
struct some_big_struct {
double a[1000000][2];
};
some_big_struct a;
int main () {
return 0;
}
and
#include <complex>
std::complex <double> a[1000000][2];
int main () {
return 0;
}
I wonder if anyone can share some insight on why such is the case. Thanks!
A: The compiler is probably running the default std::complex constructor while compiling, so that it can put the initialized values of all the array members into the executable, rather than generate code that performs this loop when the program starts. So it's calling the constructor 2 million times while compiling.
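If the goal is simply to avoid that compile-time cost, one possible workaround (a sketch, assuming run-time initialization is acceptable) is to move the storage to the heap so the two million constructors run when the program starts instead of while the binary is being built:
#include <array>
#include <complex>
#include <vector>

struct some_big_struct {
    // constructed at run time; the compiler no longer has to evaluate
    // 2,000,000 std::complex constructors during compilation
    std::vector<std::array<std::complex<double>, 2>> a;
    some_big_struct() : a(1000000) {}
};

some_big_struct a;

int main () {
    return 0;
}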
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:876491",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578037"
} |
d75faadc921d127fa1b54841042338265179a362 | Stackoverflow Stackexchange
Q: How to change the language at runtime in python/gettext? This is my first use of gettext in Python (and gettext generally). I understood many things and my test application works well: one command line parameter changes the language of output messages.
Now I'd like to let the user change the language at startup.
I know I can load many translations and install one of them at run-time, however many strings are already translated with the old language and will not change again.
Any simple solution?
import gettext
language = "it"
t_en = gettext.translation("messages", localedir="locale", languages=["en"], fallback=True)
t_it = gettext.translation("messages", localedir="locale", languages=["it"], fallback=True)
def language_install():
if language == "it":
t_it.install()
else:
t_en.install()
language_install()
main_menu = [_("First item"), _("Second item"), _("Switch language"), _("Exit")]
while True:
print("MAIN MENU")
print("---------")
for (n, item) in enumerate(main_menu):
print("{:d}: ".format(n + 1) + item)
print("")
ans = input(_("Select an item") + ": ")
if ans == "4":
break
elif ans == "3":
if language == "en":
language = "it"
else:
language = "en"
language_install()
else:
print(_("You have selected item") + " " + ans)
| Q: How to change the language at runtime in python/gettext? This is my first use of gettext in Python (and gettext generally). I understood many things and my test application works well: one command line parameter changes the language of output messages.
Now I'd like to let the user change the language at startup.
I know I can load many translations and install one of them at run-time, however many strings are already translated with the old language and will not change again.
Any simple solution?
import gettext
language = "it"
t_en = gettext.translation("messages", localedir="locale", languages=["en"], fallback=True)
t_it = gettext.translation("messages", localedir="locale", languages=["it"], fallback=True)
def language_install():
if language == "it":
t_it.install()
else:
t_en.install()
language_install()
main_menu = [_("First item"), _("Second item"), _("Switch language"), _("Exit")]
while True:
print("MAIN MENU")
print("---------")
for (n, item) in enumerate(main_menu):
print("{:d}: ".format(n + 1) + item)
print("")
ans = input(_("Select an item") + ": ")
if ans == "4":
break
elif ans == "3":
if language == "en":
language = "it"
else:
language = "en"
language_install()
else:
print(_("You have selected item") + " " + ans)
| stackoverflow | {
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:876508",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578092"
} |
49d6927c39f03b5bc59191dd6f715c5426803379 | Stackoverflow Stackexchange
Q: Scrapy is filtering unique urls as duplicate urls The urls:
*
*http://www.extrastores.com/en-sa/products/mobiles/smartphones-99500240157?page=1
*http://www.extrastores.com/en-sa/products/mobiles/smartphones-99500240157?page=2 are unique but scrapy is filtering these urls as duplicates and not scraping them.
I am using CrawlSpider with these rules:
rules = (
Rule(LinkExtractor(restrict_css=('.resultspagenum'))),
Rule(LinkExtractor(allow=('\/mobiles\/smartphones\/[a-zA-Z0-9_.-]*',), ), callback='parse_product'),
)`
I do not understand this behavior, can somebody explain please? The same code was working last week.
Using Scrapy version 1.3.0
A: Following the suggestion of @paul trmbrth I rechecked the code and the website being scraped. Scrapy downloads the links and then filters them because they were downloaded before. The issue was that the href attribute of the 'a' tag in the HTML had changed from a static link to a JavaScript function call:
<a href='javascript:gtm.traceProductClick("/en-sa/mobiles/smartphones/samsung-galaxy-s7-32gb-dual-sim-lte-gold-188024")'>
Correspondingly I changed my spider code as:
def _process_value(value):
m = re.search('javascript:gtm.traceProductClick\("(.*?)"', value)
if m:
return m.group(1)
rules = (
Rule(LinkExtractor(restrict_css=('.resultspagenum'))),
Rule(LinkExtractor(
allow=('\/mobiles\/smartphones\/[a-zA-Z0-9_.-]*',),
process_value=_process_value
), callback='parse_product'),
)
This was not an issue of Scrapy filtering non-unique URLs; it was about extracting the link from the 'href' attribute of the 'a' tag, because that link had changed recently and my code was broken.
Thanks again @paul trmbrth
| Q: Scrapy is filtering unique urls as duplicate urls The urls:
*
*http://www.extrastores.com/en-sa/products/mobiles/smartphones-99500240157?page=1
*http://www.extrastores.com/en-sa/products/mobiles/smartphones-99500240157?page=2 are unique but scrapy is filtering these urls as duplicates and not scraping them.
I am using CrawlSpider with these rules:
rules = (
Rule(LinkExtractor(restrict_css=('.resultspagenum'))),
Rule(LinkExtractor(allow=('\/mobiles\/smartphones\/[a-zA-Z0-9_.-]*',), ), callback='parse_product'),
)`
I do not understand this behavior, can somebody explain please? The same code was working last week.
Using Scrapy version 1.3.0
A: Following the suggestion of @paul trmbrth I rechecked the code and the website being scraped. Scrapy downloads the links and then filters them because they were downloaded before. The issue was that the href attribute of the 'a' tag in the HTML had changed from a static link to a JavaScript function call:
<a href='javascript:gtm.traceProductClick("/en-sa/mobiles/smartphones/samsung-galaxy-s7-32gb-dual-sim-lte-gold-188024")'>
Correspondingly I changed my spider code as:
def _process_value(value):
m = re.search('javascript:gtm.traceProductClick\("(.*?)"', value)
if m:
return m.group(1)
rules = (
Rule(LinkExtractor(restrict_css=('.resultspagenum'))),
Rule(LinkExtractor(
allow=('\/mobiles\/smartphones\/[a-zA-Z0-9_.-]*',),
process_value=_process_value
), callback='parse_product'),
)
This was not an issue of Scrapy filtering non-unique URLs; it was about extracting the link from the 'href' attribute of the 'a' tag, because that link had changed recently and my code was broken.
Thanks again @paul trmbrth
| stackoverflow | {
"language": "en",
"length": 184,
"provenance": "stackexchange_0000F.jsonl.gz:876570",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578296"
} |
683a45e6db9a849b88d625f7e95669ce58882c9a | Stackoverflow Stackexchange
Q: Splitting all output directories in Gradle Gradle 4.0 came out yesterday and I updated my project for it.
Now I am getting the following warning:
Gradle now uses separate output directories for each JVM language, but
this build assumes a single directory for all classes from a source
set. This behaviour has been deprecated and is scheduled to be removed
in Gradle 5.0
I would like to use separate output directories for each language. What do I need to change to make that happen?
Things I tried:
*
*gradle clean followed by gradle build
*deleting the build directory then running gradle build.
*deleting the gradle and build directory then running gradle
Related GitHub issue
Gradle Plugins:
*
*java
*eclipse
*idea
*org.springframework.boot
A: This is due to the change introduced in Gradle 4.0: it now uses separate output directories if there are multiple language sources.
To return to the old behaviour and get rid of the warning, insert this into your build.gradle:
// Change the output directory for the main source set back to the old path
sourceSets.main.output.classesDir = new File(buildDir, "classes/main")
Reference: https://docs.gradle.org/4.0/release-notes.html#multiple-class-directories-for-a-single-source-set
| Q: Splitting all output directories in Gradle Gradle 4.0 came out yesterday and I updated my project for it.
Now I am getting the following warning:
Gradle now uses separate output directories for each JVM language, but
this build assumes a single directory for all classes from a source
set. This behaviour has been deprecated and is scheduled to be removed
in Gradle 5.0
I would like to use separate output directories for each language. What do I need to change to make that happen?
Things I tried:
*
*gradle clean followed by gradle build
*deleting the build directory then running gradle build.
*deleting the gradle and build directory then running gradle
Related GitHub issue
Gradle Plugins:
*
*java
*eclipse
*idea
*org.springframework.boot
A: This is due to the change introduced in Gradle 4.0: it now uses separate output directories if there are multiple language sources.
To return to the old behaviour and get rid of the warning, insert this into your build.gradle:
// Change the output directory for the main source set back to the old path
sourceSets.main.output.classesDir = new File(buildDir, "classes/main")
Reference: https://docs.gradle.org/4.0/release-notes.html#multiple-class-directories-for-a-single-source-set
A: Gradle 4.0 introduces multiple sourceSets per JVM language in order to enable remote build caching. With the java plugin your build/classes/main should become build/classes/java/main and build/classes/test should become build/classes/java/test, etc.
The warning you're seeing is defined in DefaultSourceSets.java
Therefore, if any plugin within your project or your build.gradle calls DefaultSourceSetOutput.getClassesDir() (or access classesDir) you get this warning.
Solution 1
Use
sourceSets.main.output.classesDir = new File(buildDir, "classes/main")
which corresponds to:
@Override
public boolean isLegacyLayout() {
return classesDir!=null;
}
@Override
public void setClassesDir(File classesDir) {
setClassesDir((Object)classesDir);
}
@Override
public void setClassesDir(Object classesDir) {
this.classesDir = classesDir;
this.classesDirs.setFrom(classesDir);
}
Note that SourceSetOutput.java marks getClassesDir() as deprecated.
So until all plugins in your project get support for Gradle 4.0 you should stick to the workaround and ignore the deprecation warnings.
Another issue is test files. If you don't want to have the new layout (build/classes/main and build/classes/java/test) you should adjust test path too:
sourceSets.main.output.classesDir = new File(buildDir, "classes/main")
sourceSets.test.output.classesDir = new File(buildDir, "classes/test")
UPDATE
Users of IDEA may notice that IDE starts using separate out directories for build if Gradle 4.x is detected. That makes impossible hot app reloading if you run app outside of IDEA. To fix that add and reimport:
subprojects {
apply plugin: 'idea'
// Due to Gradle 4.x changes (separate output directories per JVM language)
// Idea developers refuse to reuse Gradle classpath and use own 'out/' directory.
// Revert to old behavior to allow Spring Devtool to work with using fast Idea compiler.
// https://youtrack.jetbrains.com/issue/IDEA-175172
// Alternatively use native Gradle builds or bootRun.addResources = true
// To use this feature push Ctrl+Shift+F9 to recompile!
// Be aware that Idea put resources into classes/ directory!!
idea.module.inheritOutputDirs = false
idea.module.outputDir = sourceSets.main.output.classesDir
idea.module.testOutputDir = sourceSets.test.output.classesDir
}
Please note that IDEA puts resources into the same directory as .class files so your Gradle classpath could be corrupted. Just do gradle clean for modules on which you use IDEA built-in build commands (Ctrl+Shift+F10, etc).
A: My case was a bit specific because the output classes directories were used to construct a classpath entry for command-line execution. But perhaps this will help someone. I decided to concatenate all output directories. The change I made was from
I decided to concatenate all output directories. The change I made was form
sourceSets.integrationTest.output.classesDir
to
ext {
classpathSeparator = System.properties['os.name'].toLowerCase().contains('windows')?";":":"
}
...
sourceSets.integrationTest.output.classesDirs.join(classpathSeparator)
A: For example, if you mix Java, Kotlin and Groovy, the project structure should look like the following:
root/
src/
main/
java/
kotlin/
groovy/
test/
java/
kotlin/
groovy/
In your build.gradle you have to specify the plugins required for each specific language.
apply plugin: 'java'
apply plugin: 'groovy'
apply plugin: 'kotlin'
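If something else in the build needs the compiled-class locations under Gradle 4.x, the plural classesDirs property (a FileCollection) is the non-deprecated way to get them; a small sketch:
task printClassDirs {
    doLast {
        // one directory per JVM language, e.g. build/classes/java/main, build/classes/kotlin/main
        sourceSets.main.output.classesDirs.each { println it }
    }
}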
| stackoverflow | {
"language": "en",
"length": 604,
"provenance": "stackexchange_0000F.jsonl.gz:876573",
"question_score": "32",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578302"
} |
9b42f2757ab95465a01639bbbe735bba239ff4a6 | Stackoverflow Stackexchange
Q: Flattening a list of elements in Java 8 Optional pipeline I have a id value which can be null. Then I need to call some service with this id to get a list of trades and fetch the first not null trade from the list.
Currently I have this working code
Optional.ofNullable(id)
.map(id -> service.findTrades(id))
.flatMap(t -> t.stream().filter(Objects::nonNull).findFirst())
.orElse(... default value...);
Is it possible to implement a line with a flatMap call more elegantly? I don't want to put much logic in one pipeline step.
Initially I expected to implement the logic this way
Optional.ofNullable(id)
.flatMap(id -> service.findTrades(id))
.filter(Objects::nonNull)
.findFirst()
.orElse(... default value...);
But Optional.flatMap doesn't allow to flatten a list into a set of it's elements.
A: There is a better way to do it by StreamEx
StreamEx.ofNullable(id)
.flatCollection(service::findTrades) // findTrades returns a List, so flatCollection (rather than flatMap, which expects a Stream) flattens it
.filter(Objects::nonNull)
.findFirst()
.orElse(... default value...);
I just saw: "As Stuart Marks says it, Rule #4: It's generally a bad idea to create an Optional for the specific purpose of chaining methods from it to get a value.." in the comments under another question:
| Q: Flattening a list of elements in Java 8 Optional pipeline I have a id value which can be null. Then I need to call some service with this id to get a list of trades and fetch the first not null trade from the list.
Currently I have this working code
Optional.ofNullable(id)
.map(id -> service.findTrades(id))
.flatMap(t -> t.stream().filter(Objects::nonNull).findFirst())
.orElse(... default value...);
Is it possible to implement a line with a flatMap call more elegantly? I don't want to put much logic in one pipeline step.
Initially I expected to implement the logic this way
Optional.ofNullable(id)
.flatMap(id -> service.findTrades(id))
.filter(Objects::nonNull)
.findFirst()
.orElse(... default value...);
But Optional.flatMap doesn't allow to flatten a list into a set of it's elements.
A: There is a better way to do it by StreamEx
StreamEx.ofNullable(id)
.flatCollection(service::findTrades) // findTrades returns a List, so flatCollection (rather than flatMap, which expects a Stream) flattens it
.filter(Objects::nonNull)
.findFirst()
.orElse(... default value...);
I just saw: "As Stuart Marks says it, Rule #4: It's generally a bad idea to create an Optional for the specific purpose of chaining methods from it to get a value.." in the comments under another question:
A: I don't know if this is elegant or not, but here's a way to transform the optional in a stream before initiating the stream pipeline:
Trade trade = Optional.ofNullable(id)
.map(service::findTrades)
.map(Collection::stream)
.orElse(Stream.empty()) // or orElseGet(Stream::empty)
.filter(Objects::nonNull)
.findFirst()
.orElse(... default value...);
In Java 9, Optional will have a .stream() method, so you will be able to directly convert the optional into a stream:
Trade trade = Optional.ofNullable(id)
.stream() // <-- Stream either empty or with the id
.map(service::findTrades) // <-- Now we are at the stream pipeline
.flatMap(Collection::stream) // We need to flatmap, so that we
.filter(Objects::nonNull) // stream the elements of the collection
.findFirst()
.orElse(... default value...);
| stackoverflow | {
"language": "en",
"length": 284,
"provenance": "stackexchange_0000F.jsonl.gz:876625",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578449"
} |
66bbdcc0a2d664f171cadcebfd0dc5fde576fbe9 | Stackoverflow Stackexchange
Q: Intersect two boolean arrays for True Having the numpy arrays
a = np.array([ True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
how can I make the intersection of the two so that only the True values match? I can do something like:
a == b
array([False, False, False, True, True], dtype=bool)
but the last item is True (understandably because both are False), whereas I would like the result array to be True only in the 4th element, something like:
array([False, False, False, True, False], dtype=bool)
A: Numpy provides logical_and() for that purpose:
a = np.array([ True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
c = np.logical_and(a, b)
# array([False, False, False, True, False], dtype=bool)
More at Numpy Logical operations.
| Q: Intersect two boolean arrays for True Having the numpy arrays
a = np.array([ True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
how can I make the intersection of the two so that only the True values match? I can do something like:
a == b
array([False, False, False, True, True], dtype=bool)
but the last item is True (understandably because both are False), whereas I would like the result array to be True only in the 4th element, something like:
array([False, False, False, True, False], dtype=bool)
A: Numpy provides logical_and() for that purpose:
a = np.array([ True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
c = np.logical_and(a, b)
# array([False, False, False, True, False], dtype=bool)
More at Numpy Logical operations.
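For boolean arrays the & operator gives the same element-wise result, which some find more readable:
c = a & b
# array([False, False, False,  True, False], dtype=bool)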
| stackoverflow | {
"language": "en",
"length": 132,
"provenance": "stackexchange_0000F.jsonl.gz:876665",
"question_score": "28",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578571"
} |
33197b34d091c6977987c6d241f0f554cee834b5 | Stackoverflow Stackexchange
Q: Reverse the order of words in a String in Kotlin I am looking for way to reverse the order of words in a string in Kotlin.
For example, the input string would be:
What is up, Pal!
And the output string would be:
Pal! up, is What
I know I need to use the reversed module, but I am not sure how.
A: You could try this:
fun reverse(str:String) = str.split(" ").reduce{acc, x -> x + " " + acc}
| Q: Reverse the order of words in a String in Kotlin I am looking for way to reverse the order of words in a string in Kotlin.
For example, the input string would be:
What is up, Pal!
And the output string would be:
Pal! up, is What
I know I need to use the reversed module, but I am not sure how.
A: You could try this:
fun reverse(str:String) = str.split(" ").reduce{acc, x -> x + " " + acc}
A: You are correct in assuming that the reversed module would be helpful in this task.
However to reverse the order of the words you would also need to use things like split and joinToString (or implement them yourself):
fun reverseOrderOfWords(s: String) = s.split(" ").reversed().joinToString(" ")
val s = "What is up, Pal!"
println(reverseOrderOfWords(s))
Output:
Pal! up, is What
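If the words may be separated by tabs or repeated spaces (an assumption beyond the example input), splitting on a whitespace regex is slightly more robust:
fun reverseOrderOfWords(s: String) = s.trim().split(Regex("\\s+")).reversed().joinToString(" ")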
| stackoverflow | {
"language": "en",
"length": 140,
"provenance": "stackexchange_0000F.jsonl.gz:876672",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578583"
} |
965205faef9b9ecdae0de02f3398f22f37e984d7 | Stackoverflow Stackexchange
Q: Support Library for Chronometer in Android If there any support Library function that could make the setCountDown function of the chronometer support by APIs lower than 24?
A: You can try RxJava to handle threads, timers, and many other things simply.
The documentation for the timer operator says this:
Create an Observable that emits a particular item after a given delay
Thus the behavior you are observing is expected- timer() emits just a single item after a delay.
The interval operator, on the other hand, will emit items spaced out with a given interval.
For example, this Observable will emit an item every second:
Observable.interval(1, TimeUnit.SECONDS);
And a complete example:
Observable.interval(1,TimeUnit.SECONDS, Schedulers.io())
.take(300) // take 300 second
.map(v -> 300 - v)
.subscribe(
onNext -> {
//on every second pass trigger
},
onError -> {
//do on error
},
() -> {
//do on complete
},
onSubscribe -> {
//do once on subscription
});
| Q: Support Library for Chronometer in Android If there any support Library function that could make the setCountDown function of the chronometer support by APIs lower than 24?
A: You can try RxJava to handle threads, timers, and many other things simply.
The documentation for the timer operator says this:
Create an Observable that emits a particular item after a given delay
Thus the behavior you are observing is expected- timer() emits just a single item after a delay.
The interval operator, on the other hand, will emit items spaced out with a given interval.
For example, this Observable will emit an item every second:
Observable.interval(1, TimeUnit.SECONDS);
And a complete example:
Observable.interval(1,TimeUnit.SECONDS, Schedulers.io())
.take(300) // take 300 second
.map(v -> 300 - v)
.subscribe(
onNext -> {
//on every second pass trigger
},
onError -> {
//do on error
},
() -> {
//do on complete
},
onSubscribe -> {
//do once on subscription
});
A: According to the Android documentation, it's not possible below API level 24; there the Chronometer widget only counts up. If you need a countdown, go with CountDownTimer.
For Example:
final int oneSecond = 1000; // in milliSeconds i.e. 1 second
final int totalMillis = 100000; // 100 seconds
CountDownTimer cTimer = new CountDownTimer(totalMillis, oneSecond) {
public void onTick(long millisUntilFinished) {
int totalTime = 60000; // in milliseconds i.e. 60 seconds
String v = String.format("%02d", millisUntilFinished/totalTime);
int va = (int)( (millisUntilFinished%totalTime)/oneSecond);
textView.setText("remaining seconds: " +v+":"+String.format("%02d",va));
}
public void onFinish() {
textView.setText("done!");
}
};
cTimer.start();
EDIT:
If you want to go with GitHub Solution then have a look at this and this
| stackoverflow | {
"language": "en",
"length": 267,
"provenance": "stackexchange_0000F.jsonl.gz:876705",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578653"
} |
301beb67ce8dfa13b2e0ea52b4bd31652ca24809 | Stackoverflow Stackexchange
Q: Clone a marc4j record Class I am trying to clone a Record class from Java marc4j
with
private Record copyRecord(Record record) {
Record f = new Record();
f.set(record.get());
return f;
}
However I get a java.lang.RuntimeException: with error: org.marc4j.marc.Record is abstract; cannot be instantiated
on line
f.set(record.get());
Please keep in mind that my java skills are limited,
so the question is
How can I get a clone of the original class that I can manipulate and change its values without messing around with the original class data and methods??
Thanks in advance
Regards,
Harry
A: You cannot instantiate Record because it is an interface.
If you are wanting to copy the Record you can use a MarcStreamWriter and a MarcStreamReader like so - However this does seem over the top:
public Record copyRecord(Record r)
{
ByteArrayOutputStream boas = new ByteArrayOutputStream();
MarcStreamWriter mw = new MarcStreamWriter(boas);
mw.write(r);
byte[] ba = boas.toByteArray();
ByteArrayInputStream bois = new ByteArrayInputStream(ba);
MarcStreamReader mr = new MarcStreamReader(bois);
Record r2 = null;
if(mr.hasNext())
r2 = mr.next();
return r2;
}
You might have to take care of some exceptions here and also close the writers/readers.
| Q: Clone a marc4j record Class I am trying to clone a Record class from Java marc4j
with
private Record copyRecord(Record record) {
Record f = new Record();
f.set(record.get());
return f;
}
However I get a java.lang.RuntimeException: with error: org.marc4j.marc.Record is abstract; cannot be instantiated
on line
f.set(record.get());
Please keep in mind that my java skills are limited,
so the question is
How can I get a clone of the original class that I can manipulate and change its values without messing around with the original class data and methods??
Thanks in advance
Regards,
Harry
A: You cannot instantiate Record because it is an interface.
If you are wanting to copy the Record you can use a MarcStreamWriter and a MarcStreamReader like so - However this does seem over the top:
public Record copyRecord(Record r)
{
ByteArrayOutputStream boas = new ByteArrayOutputStream();
MarcStreamWriter mw = new MarcStreamWriter(boas);
mw.write(r);
byte[] ba = boas.toByteArray();
ByteArrayInputStream bois = new ByteArrayInputStream(ba);
MarcStreamReader mr = new MarcStreamReader(bois);
Record r2 = null;
if(mr.hasNext())
r2 = mr.next();
return r2;
}
You might have to take care of some exceptions here and also close the writers/readers.
A: I think you should use MarcFactory. Add the Leader, the Errors, and the VariableFields. If I am correct, that should be complete.
private static Record cloneRecord(Record inputRecord) {
Record outputRecord = MarcFactory.newInstance().newRecord();
outputRecord.setLeader(MarcFactory.newInstance().newLeader(inputRecord.getLeader().marshal()));
if (inputRecord.hasErrors()) {
inputRecord.getErrors().forEach(inputError -> {
outputRecord.getErrors().add(new MarcError(inputError.curField, inputError.curSubfield, inputError.severity, inputError.message));
});
}
for (ControlField inputControlField: inputRecord.getControlFields()) {
outputRecord.addVariableField(MarcFactory.newInstance().newControlField(inputControlField.getTag(), inputControlField.getData()));
}
for (DataField inputDataField: inputRecord.getDataFields()) {
DataField outputDataField = MarcFactory.newInstance().newDataField(inputDataField.getTag(), inputDataField.getIndicator1(), inputDataField.getIndicator2());
inputDataField.getSubfields().forEach(inputSubfield -> {
outputDataField.addSubfield(MarcFactory.newInstance().newSubfield(inputSubfield.getCode(), inputSubfield.getData()));
});
outputRecord.addVariableField(outputDataField);
}
return outputRecord;
}
| stackoverflow | {
"language": "en",
"length": 266,
"provenance": "stackexchange_0000F.jsonl.gz:876736",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578750"
} |
7be6044f8d1d9fe65dab3e4cf18a1804f2e025d7 | Stackoverflow Stackexchange
Q: PostgreSQL: How do I split/export table records into year wise slices (CSV)? I have a table in my PostgreSQL 9.5 database containing two columns i.e., start_time (timestamp without time zone) and values (records) like this.
Start_time Values
2003-06-07 00:00:00 12
2004-02-03 00:00:00 16
2005-07-09 00:00:00 14
2003-07-07 00:00:00 17
2004-01-31 00:00:00 11
2005-05-02 00:00:00 10
Based on start_time, I need to export the my_table records so that a CSV file is generated for each year slice, like this (separating the records for each year into a separate CSV file).
Expected output:
results_2003.csv
results_2004.csv
results_2005.csv
and so on...
How to do this?
A: Use copy command in a dynamic execute format inside a plpgsql DO block, e.g.:
do $$
declare
y int;
begin
for y in
select distinct extract(year from start_time)
from my_table
loop
execute format($ex$
copy (
select *
from my_table
where extract(year from start_time) = %1$s
)
to '\data\%1$s.csv'
$ex$, y);
end loop;
end $$;
| Q: PostgreSQL: How do I split/export table records into year wise slices (CSV)? I have a table in my PostgreSQL 9.5 database containing two columns i.e., start_time (timestamp without time zone) and values (records) like this.
Start_time Values
2003-06-07 00:00:00 12
2004-02-03 00:00:00 16
2005-07-09 00:00:00 14
2003-07-07 00:00:00 17
2004-01-31 00:00:00 11
2005-05-02 00:00:00 10
Based on start_time, I need to export the my_table records so that a CSV file is generated for each year slice, like this (separating the records for each year into a separate CSV file).
Expected output:
results_2003.csv
results_2004.csv
results_2005.csv
and so on...
How to do this?
A: Use copy command in a dynamic execute format inside a plpgsql DO block, e.g.:
do $$
declare
y int;
begin
for y in
select distinct extract(year from start_time)
from my_table
loop
execute format($ex$
copy (
select *
from my_table
where extract(year from start_time) = %1$s
)
to '\data\%1$s.csv'
$ex$, y);
end loop;
end $$;
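If writing files on the database server is not an option (COPY ... TO writes server-side and needs the corresponding privileges), a client-side variant driven by psql's \copy is possible; a sketch with assumed connection settings and output paths:
for y in $(psql -At -c "SELECT DISTINCT extract(year FROM start_time)::int FROM my_table"); do
  psql -c "\copy (SELECT * FROM my_table WHERE extract(year FROM start_time) = $y) TO 'results_${y}.csv' CSV HEADER"
done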
A: Of several possible alternative ways to do this, I would use execsql.py (https://pypi.python.org/pypi/execsql/ -- disclaimer: I wrote it) and this script:
select distinct
extract(year from start_time) as start_year,
False as exported
into temporary table tt_years
from interval_table;
create temporary view unexported as
select * from tt_years
where exported = False
limit 1;
-- !x! begin script export_year
-- !x! select_sub unexported
-- !x! if(sub_defined(@start_year))
create temporary view export_data as
select * from interval_table
where extract(year from start_time) = !!@start_year!!;
-- !x! export export_data to results_!!@start_year!!.csv as csv
update tt_years
set exported = True
where start_year = !!@start_year!!;
-- !x! execute script export_year
-- !x! endif
-- !x! end script
-- !x! execute script export_year
The !x! tokens identify metacommands to execsql, which allows looping (through end recursion) and exporting to CSV.
| stackoverflow | {
"language": "en",
"length": 286,
"provenance": "stackexchange_0000F.jsonl.gz:876741",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578758"
} |
1c791673ea258fce3e933c1b80e434a1e14429b2 | Stackoverflow Stackexchange
Q: Can I have null values in clustering columns of a primary key? So I have a table and I want to make a composite primary key: one partition key and several clustering columns. However these columns are not strictly speaking always populated, so some rows may have null values. Is this allowed in Cassandra? To have clustering columns with null values?
A: Cassandra does not allow null clustering key values.
If you really need "no value" for some reason, then use an empty string OR some other special literal value like 'UNDEFINED' to cluster those together.
A similar question is here:
How can I have null column value for a composite key column in CQL3
| Q: Can I have null values in clustering columns of a primary key? So I have a table and I want to make a composite primary key: one partition key and several clustering columns. However these columns are not strictly speaking always populated, so some rows may have null values. Is this allowed in Cassandra? To have clustering columns with null values?
A: Cassandra does not allow null clustering key values.
If you really need "no value" for some reason, then use an empty string OR some other special literal value like 'UNDEFINED' to cluster those together.
A similar question is here:
How can I have null column value for a composite key column in CQL3
A: It is possible to have null values in clustering keys but only in tables created WITH COMPACT STORAGE and only for trailing columns.
For example:
cqlsh:test> CREATE TABLE cf (p int, c1 int, c2 int, v int, primary key (p, c1, c2)) WITH COMPRESSION = {'sstable_compression': ''} AND COMPACT STORAGE;
cqlsh:test> INSERT INTO cf (p, c1, v) VALUES (1, 1, 1);
cqlsh:test> SELECT * FROM cf;
p | c1 | c2 | v
---+----+------+---
1 | 1 | null | 1
(1 rows)
In regular (non-compact) tables, clustering keys cannot have missing columns.
A: You can skip the clustering columns on INSERT, when you pass only STATIC field values. And this makes sense, because static fields relate to the partition itself, not to the partition rows (identified by the clustering keys).
This helps implement one-to-many parent-child relationships within a partition, as in the following example (tested on Amazon Keyspaces and DSE DB 4 Astra):
CREATE TABLE countries (
country text,
country_pop int STATIC,
state text,
state_pop int,
PRIMARY KEY (country, state)
);
INSERT INTO countries (country, country_pop) VALUES ('USA', 328000000);
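Continuing that example, the child rows are then inserted with the clustering key, and the static column value is shared by every row of the partition:
INSERT INTO countries (country, state, state_pop) VALUES ('USA', 'California', 39500000);
INSERT INTO countries (country, state, state_pop) VALUES ('USA', 'Texas', 29000000);

SELECT country, country_pop, state, state_pop FROM countries WHERE country = 'USA';
-- country_pop (the static column) is repeated on each returned row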
A: This example works for me as well. I used Katacoda to run it. (The query shown in the original answer is not reproduced here.)
| stackoverflow | {
"language": "en",
"length": 316,
"provenance": "stackexchange_0000F.jsonl.gz:876750",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578787"
} |
42ad6efe856a5dcc96fc19ade15fe4856e549135 | Stackoverflow Stackexchange
Q: Java.lang.ClassNotFoundException: Didn't find class Kotlin
Every time I change my code and run it, this appears; the second time I run the code, it doesn't appear. What is the cause of this bug?
A: Check the project package name and add the Kotlin plugin and dependencies:
apply plugin: 'kotlin-android'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
| Q: Java.lang.ClassNotFoundException: Didn't find class Kotlin
Every time I change my code and run it, this appears; the second time I run the code, it doesn't appear. What is the cause of this bug?
A: Check the project package name and add the Kotlin plugin and dependencies:
apply plugin: 'kotlin-android'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
| stackoverflow | {
"language": "en",
"length": 52,
"provenance": "stackexchange_0000F.jsonl.gz:876753",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578791"
} |
192ca2ffc0290823da64ff51dedb00a18781e967 | Stackoverflow Stackexchange
Q: Loading second component causes "The template specified for component SidebarComponent is not a string" I have just started learning Angular and I tried to create a simple dashboard.
I've created 2 components, DashboardComponent and SidebarComponent.
Dashboard loads fine, but when I load SidebarComponent I'm getting an error in the browser: "The template specified for component SidebarComponent is not a string"
SidebarComponent:
import { Component } from '@angular/core';
@Component({
selector: 'sidebar-component',
templateUrl: './sidebar.component.ts',
styleUrls: ['./sidebar.component.scss']
})
export class SidebarComponent {}
App.module
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { DashboardComponent } from './dashboard/dashboard.component';
import { SidebarComponent } from './sidebar/sidebar.component';
@NgModule({
declarations: [
AppComponent,
DashboardComponent,
SidebarComponent
],
imports: [
BrowserModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
Both of them are also loaded in app.component
<sidebar-component></sidebar-component>
<dashboard></dashboard>
A: The error speaks for itself...
You're referring to a .ts file instead of .html.
Change this line:
templateUrl: './sidebar.component.ts'
to:
templateUrl: './sidebar.component.html'
| Q: Loading second component causes "The template specified for component SidebarComponent is not a string" I have just started learning Angular and I tried to create a simple dashboard.
I've created 2 components, DashboardComponent and SidebarComponent.
Dashboard loads fine, but when I load SidebarComponent I'm getting an error in the browser: "The template specified for component SidebarComponent is not a string"
SidebarComponent:
import { Component } from '@angular/core';
@Component({
selector: 'sidebar-component',
templateUrl: './sidebar.component.ts',
styleUrls: ['./sidebar.component.scss']
})
export class SidebarComponent {}
App.module
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { DashboardComponent } from './dashboard/dashboard.component';
import { SidebarComponent } from './sidebar/sidebar.component';
@NgModule({
declarations: [
AppComponent,
DashboardComponent,
SidebarComponent
],
imports: [
BrowserModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
Both of them are also loaded in app.component
<sidebar-component></sidebar-component>
<dashboard></dashboard>
A: The error speaks for itself...
You're referring to a .ts file instead of .html.
Change this line:
templateUrl: './sidebar.component.ts'
to:
templateUrl: './sidebar.component.html'
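For completeness, the corrected component would then look like this (a sketch assuming sidebar.component.html exists next to the .ts file):
import { Component } from '@angular/core';

@Component({
  selector: 'sidebar-component',
  templateUrl: './sidebar.component.html', // point at the HTML template, not the TypeScript file
  styleUrls: ['./sidebar.component.scss']
})
export class SidebarComponent {}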
| stackoverflow | {
"language": "en",
"length": 164,
"provenance": "stackexchange_0000F.jsonl.gz:876819",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578976"
} |
5b9a45246640ac8ec799b54dd17997fef8a9561f | Stackoverflow Stackexchange
Q: Slow page load times with paypal express checkout integration I'm looking into a website which has slow loading times.
Sometimes it takes 15 seconds or more before the page starts to paint to the screen.
When I analyze the website using the Chrome dev tools I see what's shown in the following image:
This appears before the waterfall starts.
Furthermore when I pause the network recording in dev tools and hover my mouse over the logger name it shows a url to https://www.paypal.com/webapps/hermes/api/logger
Does anyone know about this issue at all?
Waiting 15 or more seconds for the page to start rendering is way too long.
I have done some research but can't find anything about this problem.
I didn't build the website either so I'm still trying to gather more information about how paypal has been integrated.
I know it's using express checkout.
As far as I can tell, the logger that shows in dev tools is some kind of ajax request which is the first thing that happens when the browser refresh button is pressed.
Cheers
| Q: Slow page load times with paypal express checkout integration I'm looking into a website which has slow loading times.
Sometimes it takes 15 seconds or more before the page starts to paint to the screen.
When I analyze the website using the Chrome dev tools I see what's shown in the following image:
This appears before the waterfall starts.
Furthermore when I pause the network recording in dev tools and hover my mouse over the logger name it shows a url to https://www.paypal.com/webapps/hermes/api/logger
Does anyone know about this issue at all?
Waiting 15 or more seconds for the page to start rendering is way too long.
I have done some research but can't find anything about this problem.
I didn't build the website either so I'm still trying to gather more information about how paypal has been integrated.
I know it's using express checkout.
As far as I can tell, the logger that shows in dev tools is some kind of ajax request which is the first thing that happens when the browser refresh button is pressed.
Cheers
| stackoverflow | {
"language": "en",
"length": 179,
"provenance": "stackexchange_0000F.jsonl.gz:876823",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44578990"
} |
14335dfee30b69c3cdf13f8b3df0fe48564e38a5 | Stackoverflow Stackexchange
Q: UWP INotifyDataErrorInfo Do controls on the UWP platform automatically support the INotifyDataErrorInfo interface through binding?
On Silverlight and WPF, if we implement the INotifyDataErrorInfo interface, most controls will automatically glow red and display an error message when the field is in error. This is great functionality as it means that you can place errors at the model level instead of at the control level.
Is this supported in UWP? Are there any samples anywhere?
Edit: It seems as though the answer to this question might be that controls in UWP don't handle INotifyDataErrorInfo at all. So, the question now is, if the functionality is not being used, does the Microsoft team plan to implement the functionality in future? Is there an announcement from Microsoft anywhere on this?
A: Not supported today. Here is a related UserVoice link for you to comment and vote:
| Q: UWP INotifyDataErrorInfo Do controls on the UWP platform automatically support the INotifyDataErrorInfo interface through binding?
On Silverlight and WPF, if we implement the INotifyDataErrorInfo interface, most controls will automatically glow red and display an error message when the field is in error. This is great functionality as it means that you can place errors at the model level instead of at the control level.
Is this supported in UWP? Are there any samples anywhere?
Edit: It seems as though the answer to this question might be that controls in UWP don't handle INotifyDataErrorInfo at all. So, the question now is, if the functionality is not being used, does the Microsoft team plan to implement the functionality in future? Is there an announcement from Microsoft anywhere on this?
A: Not supported today. Here is a related UserVoice link for you to comment and vote:
A: The answer is in this channel9 video: https://channel9.msdn.com/events/Build/2018/BRK3502?term=lob%20uwp&lang-en=true
There will be System.ComponentModel.INotifyDataErrorInfo to re-use existing .NET code, and also Windows.UI.Xaml.Data.INotifyDataErrorInfo to make the functionality available to C++ developers as well.
In the future controls will support these interfaces.
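For reference, a minimal sketch of a view model implementing the interface (the property name and validation rule are illustrative; as discussed above, UWP controls did not react to it automatically at the time of the question):
using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;

public class PersonViewModel : INotifyDataErrorInfo
{
    private readonly Dictionary<string, List<string>> _errors = new Dictionary<string, List<string>>();
    private string _name;

    public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    public bool HasErrors => _errors.Count > 0;

    public string Name
    {
        get { return _name; }
        set { _name = value; Validate(); }
    }

    // Returns the validation errors for a single property (or null when there are none)
    public IEnumerable GetErrors(string propertyName)
    {
        List<string> errors;
        return propertyName != null && _errors.TryGetValue(propertyName, out errors) ? errors : null;
    }

    private void Validate()
    {
        _errors.Remove(nameof(Name));
        if (string.IsNullOrWhiteSpace(Name))
            _errors[nameof(Name)] = new List<string> { "Name is required." };
        ErrorsChanged?.Invoke(this, new DataErrorsChangedEventArgs(nameof(Name)));
    }
}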
| stackoverflow | {
"language": "en",
"length": 182,
"provenance": "stackexchange_0000F.jsonl.gz:876877",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579154"
} |
3aff619c44adfe52942b0faab26dcaca19df3dc9 | Stackoverflow Stackexchange
Q: How do I see a list of yarn installed dependencies? When I install dependencies from package.json using npm install, I can see a list of all installed dependencies under the node_modules directory. However, the same is not true for yarn. When I do a yarn/yarn install, I see a .yarn-integrity file. How can I see a list of installed dependencies?
A: For Yarn 2, run:
yarn info --name-only
| Q: How do I see a list of yarn installed dependencies? When I install dependencies from package.json using npm install, I can see a list of all installed dependencies under the node_modules directory. However, the same is not true for yarn. When I do a yarn/yarn install, I see a .yarn-integrity file. How can I see a list of installed dependencies?
A: For Yarn 2, run:
yarn info --name-only
A: The yarn list command is what you need. You can find the full documentation of this command here: https://yarnpkg.com/en/docs/cli/list
'yarn help' can also show you what commands are available plus some additional info.
A: For Yarn 2 and onwards you can use yarn info --name-only.
For Yarn 1 you can use yarn list.
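As a small usage note for Yarn 1, the output can be limited to top-level dependencies (the package names in the sample output are purely illustrative):
yarn list --depth=0
# ├─ lodash@4.17.21
# └─ react@16.14.0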
| stackoverflow | {
"language": "en",
"length": 123,
"provenance": "stackexchange_0000F.jsonl.gz:876901",
"question_score": "21",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579221"
} |
89711d1d280fcd078047676eb9ddad9a5335de1a | Stackoverflow Stackexchange
Q: Where is the Docker JSON file logging driver writing files to? I have Dockerized a sample app and have configured it to log to STDOUT. I then run the container without specifying a logging-driver. According to the Docker docs, STDOUT should be collected out of the container and into a JSON file...
But nowhere in the logging docs do they tell you where you can find this JSON file!!!
Any ideas how I could find this file on my host and inspect its contents?
A: docker inspect is your friend to figure out details regarding a container. With this you can get the log path by running following command:
$ docker inspect --format='{{.LogPath}}' NAME|ID
For example:
$ docker inspect --format='{{.LogPath}}' 2de7566c47eb
Hope it helps.
| Q: Where is the Docker JSON file logging driver writing files to? I have Dockerized a sample app and have configured it to log to STDOUT. I then run the container without specifying a logging-driver. According to the Docker docs, STDOUT should be collected out of the container and into a JSON file...
But nowhere in the logging docs do they tell you where you can find this JSON file!!!
Any ideas how I could find this file on my host and inspect its contents?
A: docker inspect is your friend to figure out details regarding a container. With this you can get the log path by running following command:
$ docker inspect --format='{{.LogPath}}' NAME|ID
For example:
$ docker inspect --format='{{.LogPath}}' 2de7566c47eb
Hope it helps.
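As a follow-up sketch (the container name and the path shown are typical Linux defaults and may differ on your setup):
# print the host path of the JSON log file
docker inspect --format='{{.LogPath}}' my_container
# typically something like /var/lib/docker/containers/<container-id>/<container-id>-json.log

# inspect the raw file directly (often requires root)
sudo cat $(docker inspect --format='{{.LogPath}}' my_container)

# or simply let Docker read it for you
docker logs my_container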
| stackoverflow | {
"language": "en",
"length": 125,
"provenance": "stackexchange_0000F.jsonl.gz:876903",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579227"
} |
0d9f0364f4d8bbd8444cebc7501b1b38cb70ff9a | Stackoverflow Stackexchange
Q: numpy covariance compared to excel covariance difference I am calculating the covariance of a dataset using the numpy.cov() function for the sample dataset below, and I am getting a discrepancy compared to calculating the covariance using Excel's covariance function.
I am unable to find the root cause of the discrepancy, so any help will be appreciated.
Data
CL1 CL2 CL3
11.61815873 27.01813137 25.284136
11.77125896 27.15483024 25.77108973
11.70410973 26.89751471 25.59316433
11.81557745 26.96184359 25.76172524
11.83437923 27.21915913 25.76172524
Excel Covariance Values
CL1 CL2 CL3
CL1 0.006270349
CL2 0.004384429 0.014328536
CL3 0.014014102 0.008418645 0.035098552
However when I run numpy_array.cov() function I get the following matrix
Numpy Cov() function
CL1 CL2 CL3
CL1 0.007838 0.005481 0.017518
CL2 0.005481 0.017911 0.010523
CL3 0.017518 0.010523 0.043873
I will appreciate any help in this regard.
| Q: numpy covariance compared to excel covariance difference I am calculating the covariance of a dataset using the numpy.cov() function for the sample dataset below, and I am getting a discrepancy compared to calculating the covariance using Excel's covariance function.
I am unable to find the root cause of the discrepancy, so any help will be appreciated.
Data
CL1 CL2 CL3
11.61815873 27.01813137 25.284136
11.77125896 27.15483024 25.77108973
11.70410973 26.89751471 25.59316433
11.81557745 26.96184359 25.76172524
11.83437923 27.21915913 25.76172524
Excel Covariance Values
CL1 CL2 CL3
CL1 0.006270349
CL2 0.004384429 0.014328536
CL3 0.014014102 0.008418645 0.035098552
However when I run numpy_array.cov() function I get the following matrix
Numpy Cov() function
CL1 CL2 CL3
CL1 0.007838 0.005481 0.017518
CL2 0.005481 0.017911 0.010523
CL3 0.017518 0.010523 0.043873
I will appreciate any help in this regard.
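For reference, one common source of exactly this kind of discrepancy is the normalization denominator: numpy.cov defaults to the sample covariance (divided by N-1), while Excel's COVAR/COVARIANCE.P computes the population covariance (divided by N). A minimal sketch to check this against the data above:
import numpy as np

data = np.array([
    [11.61815873, 27.01813137, 25.284136],
    [11.77125896, 27.15483024, 25.77108973],
    [11.70410973, 26.89751471, 25.59316433],
    [11.81557745, 26.96184359, 25.76172524],
    [11.83437923, 27.21915913, 25.76172524],
])

# default: sample covariance, normalized by N-1 (columns treated as variables)
print(np.cov(data, rowvar=False))

# population covariance, normalized by N (should match the Excel values)
print(np.cov(data, rowvar=False, bias=True))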
| stackoverflow | {
"language": "en",
"length": 122,
"provenance": "stackexchange_0000F.jsonl.gz:876919",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579291"
} |
dbbfb013f7700a32bb7f922514dbcc6ba5ca9a87 | Stackoverflow Stackexchange
Q: How to ignore empty cell values for getRange().getValues() I am able to get the range values using getValues() and put it into a string by declaring the following variables in Google App Script
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName("Test");
var range_input = ss.getRange("A1:A").getValues();
However, I realize I am getting a lot of commas in my string, probably from all the empty cells.
For example, if values are following
================
Spreadsheet("Test") Values
A1=abc
A2=def
A3=
A4=
A5=
A6=uvw
A7=xyz
================
If I do msgBox, it gets something like below.
Browser.msgBox(range_input) // results = abc,def,,,,uvw,xyz,,,,,,,,,,,
Is there a way to remove the trailing commas so I get something like below?
(i.e. ignore the empty cells)
Browser.msgBox(range_input) // results = abc,def,uvw,xyz
A: You can also use filter().
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName("Test");
var range_input = ss.getRange("A1:A").getValues();
var filtered_input = range_input.filter(String);
| Q: How to ignore empty cell values for getRange().getValues() I am able to get the range values using getValues() and put it into a string by declaring the following variables in Google App Script
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName("Test");
var range_input = ss.getRange("A1:A").getValues();
However, I realize I am getting a lot of commas in my string, probably from all the empty cells.
For example, if values are following
================
Spreadsheet("Test") Values
A1=abc
A2=def
A3=
A4=
A5=
A6=uvw
A7=xyz
================
If I do msgBox, it gets something like below.
Browser.msgBox(range_input) // results = abc,def,,,,uvw,xyz,,,,,,,,,,,
Is there a way to remove the trailing commas so I get something like below?
(i.e. ignore the empty cells)
Browser.msgBox(range_input) // results = abc,def,uvw,xyz
A: You can also use filter().
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName("Test");
var range_input = ss.getRange("A1:A").getValues();
var filtered_input = range_input.filter(String);
A: *
*You want to achieve the following result.
*
*Input
A1=abc
A2=def
A3=
A4=
A5=
A6=uvw
A7=xyz
*Output
Browser.msgBox(range_input) // results = abc,def,uvw,xyz
In the current stage, I think that although the comprehension var result = [i for each (i in range_input) if (isNaN(i))] can still be used, it is not suitable for this situation, as noted in tehhowch's comment. Also, I think that filter() is suitable for this situation. In this update, I would like to propose other solutions. If this was useful, I'm glad.
Pattern 1:
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName("Test");
var range_input = ss.getRange("A1:A").getValues();
var result = range_input.reduce(function(ar, e) {
if (e[0]) ar.push(e[0])
return ar;
}, []);
Logger.log(result) // ["abc","def","uvw","xyz"]
Browser.msgBox(result)
*
*In this pattern, the empty rows are removed by reduce().
Pattern 2:
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName("Test");
var range_input = ss.getRange("A1:A").getValues();
var result = [].concat.apply([], range_input).filter(String); // or range_input.filter(String).map(String)
Logger.log(result) // ["abc","def","uvw","xyz"]
Browser.msgBox(result)
*
*In this pattern, the empty rows are removed by filter() and when filter() is used, the 2 dimensional array is returned. In order to return 1 dimensional array, the array is flatten.
Pattern 3:
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getSheetByName("Test");
var range_input = ss.getRange("A1:A").getValues();
var criteria = SpreadsheetApp.newFilterCriteria().whenCellNotEmpty().build();
var f = ss.getRange("A1:A").createFilter().setColumnFilterCriteria(1, criteria);
var url = "https://docs.google.com/spreadsheets/d/" + ss.getId() + "/gviz/tq?tqx=out:csv&gid=" + sheet.getSheetId() + "&access_token=" + ScriptApp.getOAuthToken();
var res = UrlFetchApp.fetch(url);
f.remove();
var result = Utilities.parseCsv(res.getContentText()).map(function(e) {return e[0]});
Logger.log(result) // ["abc","def","uvw","xyz"]
Browser.msgBox(result)
*
*In this pattern, the empty rows are removed by the filter, then the filtered values are retrieved.
Result:
References:
*
*reduce()
*filter()
*map()
*Class Filter
A: Not tested for efficiency, but for a simple removal of multiple ,s, you can use regex:
const a=[['abc'],['def'],[''],[''],[''],['xyz'],['']];//simulate getValues()
const out = a.join(',').replace(/,+(?=,)|,*$/g,'') //'abc,def,xyz'
const out2 = a.join('«').replace(/«+$/,'').split(/«+/) //flattened
console.log({out, out2})
A: In case that you want to get multiple columns with the getValues(), the filter(String) won't work, instead you have to create a custom filter function like:
dataSheet.getRange("F3:H").getValues().filter(function(row) {
return !row.some(cell => cell === '' || cell === null)
})
| stackoverflow | {
"language": "en",
"length": 482,
"provenance": "stackexchange_0000F.jsonl.gz:876921",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579300"
} |
3f57945e0c531457bd0a90731f055ce455858597 | Stackoverflow Stackexchange
Q: Why does this CMake script find "alloca" and still fail? I'm using the alloca function in one of my projects and decided to use CMake to make sure it's available. So I added this bit to my CMakeLists.txt file:
include(CheckSymbolExists)
check_symbol_exists(alloca stdlib.h;cstdlib ALLOCA_EXISTS)
if (NOT ALLOCA_EXISTS)
message(FATAL_ERROR "Platform does not support alloca")
endif ()
When I run CMake, this is the (relevant part of the) output:
-- Looking for alloca
-- Looking for alloca - found
CMake Error at CMakeLists.txt:11 (message):
Platform does not support alloca
-- Configuring incomplete, errors occurred!
So how come the shown code finds the function but doesn't set the variable? Or is it something else?
A: You must add quotes when you specify the headers:
check_symbol_exists(alloca "stdlib.h;cstdlib" ALLOCA_EXISTS)
Otherwise, ALLOCA_EXISTS is ignored and a variable cstdlib is created with value TRUE.
| Q: Why does this CMake script find "alloca" and still fail? I'm using the alloca function in one of my projects and decided to use CMake to make sure it's available. So I added this bit to my CMakeLists.txt file:
include(CheckSymbolExists)
check_symbol_exists(alloca stdlib.h;cstdlib ALLOCA_EXISTS)
if (NOT ALLOCA_EXISTS)
message(FATAL_ERROR "Platform does not support alloca")
endif ()
When I run CMake, this is the (relevant part of the) output:
-- Looking for alloca
-- Looking for alloca - found
CMake Error at CMakeLists.txt:11 (message):
Platform does not support alloca
-- Configuring incomplete, errors occurred!
So how come the shown code finds the function but doesn't set the variable? Or is it something else?
A: You must add quotes when you specify the headers:
check_symbol_exists(alloca "stdlib.h;cstdlib" ALLOCA_EXISTS)
Otherwise, ALLOCA_EXISTS is ignored and a variable cstdlib is created with value TRUE.
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:876943",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579381"
} |
31d717c3b9b196bc08018971931067e450e3bc5e | Stackoverflow Stackexchange
Q: Different NLTK, wordnet hypernym output needed When I try to print the hypernym, I just want the word rather than all the information about the word.
pp = wn.synset('grow.v.01')
pp1= pp.hypernyms()
print pp1
My output is [Synset('change.v.02')]. I just want "change". What change do I need to make? Sorry, I am new to WordNet.
A: You can use the lemma_names function of the Synset object.
Bear in mind it returns list of names, you can pick the one you are happy with (in this case its only 1 result 'change').
>> print(pp1[0].lemma_names())
['change']
Also calling hypernyms() also returns you a list, thus I used pp1[0]. For example querying for 'dog' returns [dog, frump, cad...] etc.. If you want to get all lemma_names for all hypernyms, you can use a list comprehension.
>> [s.lemma_names() for s in wn.synsets('dog')]
[['dog', 'domestic_dog', 'Canis_familiaris'],
['frump', 'dog'],
['dog'],
...
['chase', 'chase_after', 'trail', 'tail', 'tag', 'give_chase', 'dog', 'go_after', 'track']]
| Q: Different NLTK, wordnet hypernym output needed When I try to print the hypernym, I just want the word rather than all the information about the word.
pp = wn.synset('grow.v.01')
pp1= pp.hypernyms()
print pp1
My output is [Synset('change.v.02')]. I just want "change". What change do I need to make? Sorry, I am new to WordNet.
A: You can use the lemma_names function of the Synset object.
Bear in mind it returns list of names, you can pick the one you are happy with (in this case its only 1 result 'change').
>> print(pp1[0].lemma_names())
['change']
Also calling hypernyms() also returns you a list, thus I used pp1[0]. For example querying for 'dog' returns [dog, frump, cad...] etc.. If you want to get all lemma_names for all hypernyms, you can use a list comprehension.
>> [s.lemma_names() for s in wn.synsets('dog')]
[['dog', 'domestic_dog', 'Canis_familiaris'],
['frump', 'dog'],
['dog'],
...
['chase', 'chase_after', 'trail', 'tail', 'tag', 'give_chase', 'dog', 'go_after', 'track']]
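Another minimal option, if you only want the first lemma name of the first hypernym (a sketch using the same pp1 as above):
print(pp1[0].lemmas()[0].name())   # 'change'
Synset.lemmas() returns Lemma objects, and each Lemma exposes its plain string via name().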
| stackoverflow | {
"language": "en",
"length": 154,
"provenance": "stackexchange_0000F.jsonl.gz:876965",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579428"
} |
ab0db72d505e0d141dd88d91d3f75ede24ab07b5 | Stackoverflow Stackexchange
Q: When will the Microsoft Bot Framework support Facebook Chat Extensions Facebook recently announced Chat Extensions which will allow group-based interaction with bots. More about how that works is listed here: https://developers.facebook.com/docs/messenger-platform/design/guides/chat-extensions
Has any announcement been made about when the Microsoft Bot Framework will support this feature of FB?
A: I suggest you make a new issue asking for this enhancement on GitHub
| Q: When will the Microsoft Bot Framework support Facebook Chat Extensions Facebook recently announced Chat Extensions which will allow group-based interaction with bots. More about how that works is listed here: https://developers.facebook.com/docs/messenger-platform/design/guides/chat-extensions
Has any announcement been made about when the Microsoft Bot Framework will support this feature of FB?
A: I suggest you make a new issue asking for this enhancement on GitHub
| stackoverflow | {
"language": "en",
"length": 63,
"provenance": "stackexchange_0000F.jsonl.gz:876984",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579485"
} |
d84f5925175bfd23d168a90a90a91a64bb9d8048 | Stackoverflow Stackexchange
Q: Have text shrink when needed to fit in div in CSS so I've got a bit of an issue. I have my header that looks like this:
and it looks really poor on mobile as shown here:
Would there be a good way to downscale text as needed to fit in the header? Preferably a CSS only solution.
I have a premade JSFiddle here with just the header for experimenting: https://jsfiddle.net/wgy1ohc3/1/
<div class="parallax-container header-parallax">
<div class="container container-wide readable-text">
<h1 class="white-text">Account Security</h1>
<h4 class="white-text">Control active sessions and 2-Factor Authentication.</h4>
</div>
<div class="parallax"><img src="https://i.imgur.com/k45V80z.jpg?2"></div>
</div>
Any help whatsoever would be appreciated!
A: The vw or vh CSS measurement
For a fluid responsive text size adjustment, we can use the vw (viewer width) and vh (viewer height) CSS measurements.
They are widely supported and very useful.
Adding:
h1 {
font-size: 10vw;
}
h4 {
font-size: 4vw;
}
to your fiddle will give you a result close to what I believe you are seeking.
| Q: Have text shrink when needed to fit in div in CSS so I've got a bit of an issue. I have my header that looks like this:
and it looks really poor on mobile as shown here:
Would there be a good way to downscale text as needed to fit in the header? Preferably a CSS only solution.
I have a premade JSFiddle here with just the header for experimenting: https://jsfiddle.net/wgy1ohc3/1/
<div class="parallax-container header-parallax">
<div class="container container-wide readable-text">
<h1 class="white-text">Account Security</h1>
<h4 class="white-text">Control active sessions and 2-Factor Authentication.</h4>
</div>
<div class="parallax"><img src="https://i.imgur.com/k45V80z.jpg?2"></div>
</div>
Any help whatsoever would be appreciated!
A: The vw or vh CSS measurement
For a fluid responsive text size adjustment, we can use the vw (viewer width) and vh (viewer height) CSS measurements.
They are widely supported and very useful.
Adding:
h1 {
font-size: 10vw;
}
h4 {
font-size: 4vw;
}
to your fiddle will give you a result close to what I believe you are seeking.
A: You could use viewport, combine with calc(1.5vw + 25px) give a base fontsize + scale when the screen get bigger (4vw = 4% of current screen width)
If you want the font to scale more/less you could change 1.5vw, change 25px base size to set the minimal font-size
(ALSO you should use media query if you care a lot for mobile responsive, that way define font-size for each screen size)
Using the viewport meta tag to control layout on mobile browsers
REF: https://developer.mozilla.org/en/docs/Mozilla/Mobile/Viewport_meta_tag
$(document).ready(function() {
$('.parallax').parallax();
});
.header-parallax {
height: 17em;
}
.container-wide {
width: 95%;
max-width: none;
}
.readable-text {
color: white;
color: white;
text-shadow: black 0.1em 0.1em 0.2em;
}
.myh1 {
font-size: calc(1.5vw + 25px);
}
.myh4 {
font-size: calc(1vw + 15px);
}
<link href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.98.2/css/materialize.min.css" rel="stylesheet"/>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.98.2/js/materialize.min.js"></script>
<div class="parallax-container header-parallax">
<div class="container container-wide readable-text">
<h1 class="white-text myh1">Account Security</h1>
<h4 class="white-text myh4">Control active sessions and 2-Factor Authentication.</h4>
</div>
<div class="parallax"><img src="https://i.imgur.com/k45V80z.jpg?2"></div>
</div>
A: You can use those two JavaScript functions to check the width and height of the screen of the user :
function takeHeight(){
var myHeight = 0;
if( typeof( window.innerHeight ) == 'number' ) {
//Non-IE
myHeight = window.innerHeight;
} else if( document.documentElement &&
(document.documentElement.clientHeight) ) {
//IE 6+ in 'standards compliant mode'
myHeight = document.documentElement.clientHeight;
} else if( document.body && (document.body.clientHeight ) ) {
//IE 4 compatible
myHeight = document.body.clientHeight;
}
return myHeight;
}
function takeWidth(){
var myWidth = 0;
if( typeof( window.innerWidth ) == 'number' ) {
//Non-IE
myWidth = window.innerWidth;
} else if( document.documentElement &&
(document.documentElement.clientWidth) ) {
//IE 6+ in 'standards compliant mode'
myWidth = document.documentElement.clientWidth;
} else if( document.body && (document.body.clientWidth ) ) {
//IE 4 compatible
myWidth = document.body.clientWidth;
}
//console.log(myWidth);
return myWidth;
}
and resize your font accordingly with something like this:
if (takeHeight() <= SomeValue || takeWidth() <= SomeOtherValue) {
    // note: the font size is controlled by style.fontSize, not style.size
    document.getElementById("Your_Object's_Id_Here").style.fontSize = (new size here) + 'px';
} else {
    document.getElementById("Your_Object's_Id_Here").style.fontSize = (another new size here) + 'px';
}
| stackoverflow | {
"language": "en",
"length": 481,
"provenance": "stackexchange_0000F.jsonl.gz:877025",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579655"
} |
f48e27e040baf980d913b03213fcc8e99344ea3d | Stackoverflow Stackexchange
Q: SFML Library not loaded error, image not found I am trying to get started with SFML on my OS Mac without using Xcode and have gone through the non-IDE installation. I am following this tutorial: https://www.sfml-dev.org/tutorials/2.0/start-linux.php It is the linux install page but it seems suited for someone trying to do it on a mac OS terminal.
I have a CPP directory where I keep my example.cpp file and in that directory I have a "Resources" folder where I keep the SFML stuff. However I am getting a "Library not loaded" error on my terminal and I have searched a bit across the web and am still having quite some trouble. I have since added "freetype" on homebrew but that doesn't seem to work. I also made sure to tell the dynamic linker where to find the SMFL libraries.
el-nino:CPP Home$ g++ -std=c++11 -IResources/SFMLR/include -c example.cpp
el-nino:CPP Home$ g++ example.o -o sfml-app -LResources/SFMLR/lib -lsfml-graphics -lsfml-window -lsfml-system
el-nino:CPP Home$ ./sfml-app
dyld: Library not loaded: @rpath/../Frameworks/freetype.framework/Versions/A/freetype
Referenced from: /Users/Home/Desktop/Junk_Code/CPP/Resources/SFMLR/lib/libsfml-graphics.2.4.2.dylib
Reason: image not found
Abort trap: 6
A: You need to copy the content of extlibs to /Library/Frameworks
then try to build and run your app :)
| Q: SFML Library not loaded error, image not found I am trying to get started with SFML on my OS Mac without using Xcode and have gone through the non-IDE installation. I am following this tutorial: https://www.sfml-dev.org/tutorials/2.0/start-linux.php It is the linux install page but it seems suited for someone trying to do it on a mac OS terminal.
I have a CPP directory where I keep my example.cpp file and in that directory I have a "Resources" folder where I keep the SFML stuff. However I am getting a "Library not loaded" error on my terminal and I have searched a bit across the web and am still having quite some trouble. I have since added "freetype" on homebrew but that doesn't seem to work. I also made sure to tell the dynamic linker where to find the SMFL libraries.
el-nino:CPP Home$ g++ -std=c++11 -IResources/SFMLR/include -c example.cpp
el-nino:CPP Home$ g++ example.o -o sfml-app -LResources/SFMLR/lib -lsfml-graphics -lsfml-window -lsfml-system
el-nino:CPP Home$ ./sfml-app
dyld: Library not loaded: @rpath/../Frameworks/freetype.framework/Versions/A/freetype
Referenced from: /Users/Home/Desktop/Junk_Code/CPP/Resources/SFMLR/lib/libsfml-graphics.2.4.2.dylib
Reason: image not found
Abort trap: 6
A: You need to copy the content of extlibs to /Library/Frameworks
then try to build and run your app :)
A: For those that were interested in the SFML documentation:
Installing SFML for macOSx
Header files and libraries
SFML is available either as dylibs or as frameworks. Only one type of binary is required although both can be installed simultaneously on the same system. We recommend using the frameworks
frameworks: Copy the content of Frameworks to /Library/Frameworks.
dylib: Copy the content of lib to /usr/local/lib and copy the content of include to /usr/local/include.
SFML dependencies
SFML depends on a few external libraries on macOS. Copy the content of extlibs to /Library/Frameworks.
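A minimal sketch of that copy step in the terminal (the source path is an assumption; adjust it to wherever the SFML package was unpacked):
# copy SFML's bundled dependency frameworks (freetype, etc.) to the system framework folder
sudo cp -R ~/Downloads/SFML-2.4.2-osx-clang/extlibs/* /Library/Frameworks/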
| stackoverflow | {
"language": "en",
"length": 286,
"provenance": "stackexchange_0000F.jsonl.gz:877026",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579656"
} |
61e33692be1fa559a658503624200fd59c599f4f | Stackoverflow Stackexchange
Q: How to generate switch statement labels on Visual Studio Code? I want to generate switch statement labels in Visual Studio Code. I searched Google, the extensions marketplace, and the command pallete, but I didn't find anything. Is this action available?
A: This works with the OmniSharp extension:
More info here:
https://github.com/OmniSharp/omnisharp-vscode/issues/1752
| Q: How to generate switch statement labels on Visual Studio Code? I want to generate switch statement labels in Visual Studio Code. I searched Google, the extensions marketplace, and the command pallete, but I didn't find anything. Is this action available?
A: This works with the OmniSharp extension:
More info here:
https://github.com/OmniSharp/omnisharp-vscode/issues/1752
A: No, it is not available.
Visual Studio Code is a simple editor; it doesn't have the capabilities of the Visual Studio IDE and can't do this.
In the Visual Studio 2015 IDE you can generate switch statement labels for an enumeration as follows:
1) type "switch"
2) press TAB twice
for details, read Switch enum auto-fill
Also Resharper tool (integerated with Visual studio IDE) can do the same.
For details read: https://blog.jetbrains.com/dotnet/2006/06/14/quick-fixes-help-generate-switch-blocks/
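For context, the snippet expansion described above produces roughly the following for an enum (an illustrative C# sketch, not captured tool output):
enum Color { Red, Green, Blue }

// after typing "switch" and pressing TAB twice on a Color variable:
switch (color)
{
    case Color.Red:
        break;
    case Color.Green:
        break;
    case Color.Blue:
        break;
    default:
        break;
}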
| stackoverflow | {
"language": "en",
"length": 120,
"provenance": "stackexchange_0000F.jsonl.gz:877058",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579770"
} |
698fb1bdc91d180f98aac027fd4866bee3b9d9ad | Stackoverflow Stackexchange
Q: Add TURN server to android webRtc native I'm working on a WebRTC native Android application. I'm also compiling the io.pristine lib. I'm able to establish calls between two devices only if both of them are connected to Wi-Fi. When one of the devices is connected to the cellular network, I'm not able to establish a call. I read every possible forum out there and it looks like I need a TURN server. I already run my own TURN server but I don't know how I can force the app to use this server. Any help is welcome. Thank you!!
A: WebRTC deprecated old API to create ICE servers. (Answer which uses old API)
To create ICE server you need to use IceServer builder pattern.
PeerConnection.IceServer stun = PeerConnection.IceServer.builder("stun:stun.l.google.com:19302").createIceServer();
PeerConnection.IceServer turn = PeerConnection.IceServer.builder("turn:numb.viagenie.ca").setUsername("webrtc@live.com").setPassword("muazkh").createIceServer();
| Q: Add TURN server to android webRtc native I'm working on a WebRTC native Android application. I'm also compiling the io.pristine lib. I'm able to establish calls between two devices only if both of them are connected to Wi-Fi. When one of the devices is connected to the cellular network, I'm not able to establish a call. I read every possible forum out there and it looks like I need a TURN server. I already run my own TURN server but I don't know how I can force the app to use this server. Any help is welcome. Thank you!!
A: WebRTC deprecated old API to create ICE servers. (Answer which uses old API)
To create ICE server you need to use IceServer builder pattern.
PeerConnection.IceServer stun = PeerConnection.IceServer.builder("stun:stun.l.google.com:19302").createIceServer();
PeerConnection.IceServer turn = PeerConnection.IceServer.builder("turn:numb.viagenie.ca").setUsername("webrtc@live.com").setPassword("muazkh").createIceServer();
A: You need to set the TURN server when creating the PeerConnection.
It will go something like this:
// Set ICE servers
List<PeerConnection.IceServer> iceServers = new ArrayList<>();
iceServers.add(new org.webrtc.PeerConnection.IceServer("stun:xxx.xxx.xxx.xxx"));
iceServers.add(new org.webrtc.PeerConnection.IceServer("turn:xxx.xxx.xxx.xxx:3478", "username", "credential"));
// Create peer connection
final PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
PeerConnectionFactory factory = new PeerConnectionFactory(new PeerConnectionFactory.Options());
MediaConstraints constraints = new MediaConstraints();
PeerConnection peerConnection = factory.createPeerConnection(iceServers, constraints, new YourPeerConnectionObserver());
I have not run this code, but you should get the idea.
| stackoverflow | {
"language": "en",
"length": 203,
"provenance": "stackexchange_0000F.jsonl.gz:877088",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579855"
} |
a1d66e9aeb378270a951d6f4f4b7ec70989fcc8f | Stackoverflow Stackexchange
Q: How do you get total contributions with Githubs API v4 I have been looking through the Github V4 API docs and I cannot seem to find a way to query total contributions for the year (as displayed on your github profile). Has anyone managed to use the new API to grab some statistics from your personal profile?
I am using graphQL and a Personal Access Token on github, and managed to get minimal user profile data; username, profile name etc.
A: The ContributionsCollection object provides total contributions for each contribution type between two dates.
Note: from and to can be a maximum of one year apart, for a longer timeframe make multiple requests.
query ContributionsView($username: String!, $from: DateTime!, $to: DateTime!) {
user(login: $username) {
contributionsCollection(from: $from, to: $to) {
totalCommitContributions
totalIssueContributions
totalPullRequestContributions
totalPullRequestReviewContributions
}
}
}
| Q: How do you get total contributions with Githubs API v4 I have been looking through the Github V4 API docs and I cannot seem to find a way to query total contributions for the year (as displayed on your github profile). Has anyone managed to use the new API to grab some statistics from your personal profile?
I am using graphQL and a Personal Access Token on github, and managed to get minimal user profile data; username, profile name etc.
A: The ContributionsCollection object provides total contributions for each contribution type between two dates.
Note: from and to can be a maximum of one year apart, for a longer timeframe make multiple requests.
query ContributionsView($username: String!, $from: DateTime!, $to: DateTime!) {
user(login: $username) {
contributionsCollection(from: $from, to: $to) {
totalCommitContributions
totalIssueContributions
totalPullRequestContributions
totalPullRequestReviewContributions
}
}
}
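A minimal set of variables for the query above (the username and date range are illustrative; remember the range may span at most one year):
{
  "username": "octocat",
  "from": "2022-01-01T00:00:00Z",
  "to": "2022-12-31T23:59:59Z"
}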
A: There is no API for this as such, so there are two ways to go about it: simply scraping the user URL, or looping through each repo the user has forked and counting the contributions. The latter one will be more time consuming. The first one is much more reliable as it is cached by GitHub. Below is a Python approach to fetch the same.
import json
import requests
from bs4 import BeautifulSoup
GITHUB_URL = 'https://github.com/'
def get_contributions(usernames):
"""
Get a github user's public contributions.
:param usernames: A string or sequence of github usernames.
"""
contributions = {'users': [], 'total': 0}
if isinstance(usernames, str) or isinstance(usernames, unicode):
usernames = [usernames]
for username in usernames:
response = requests.get('{0}{1}'.format(GITHUB_URL, username))
if not response.ok:
contributions['users'].append({username: dict(total=0)})
continue
bs = BeautifulSoup(response.content, "html.parser")
total = bs.find('div', {'class': 'js-yearly-contributions'}).findNext('h2')
contributions['users'].append({username: dict(total=int(total.text.split()[0].replace(',', '')))})
contributions['total'] += int(total.text.split()[0].replace(',', ''))
return json.dumps(contributions, indent=4)
PS: Taken from https://github.com/garnertb/github-contributions
For the latter approach there is an npm package
https://www.npmjs.com/package/github-user-contributions
But I would recommend using the scraping approach only
| stackoverflow | {
"language": "en",
"length": 304,
"provenance": "stackexchange_0000F.jsonl.gz:877094",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579877"
} |
da9324aab4de2751d5ff0929e0cb77c5bf7704ce | Stackoverflow Stackexchange
Q: How to debug class library projects in Rider There is an external application that executes C# libraries(plugins - my class library).
Is it possible to attach the debugger to my class library project in Rider?
In Visual Studio, this is done very easily, for example as described in this article. But how do I do it in Rider?
Thank you
A: Now you can use .NET Executable for your task. Put your library as command-line arguments into a run configuration. In the future, we want to add the macro for OutputPath.
| Q: How to debug class library projects in Rider There is an external application that executes C# libraries(plugins - my class library).
Is it possible to attach the debugger to my class library project in Rider?
In Visual Studio, this is done very easily, for example as described in this article. But how do I do it in Rider?
Thank you
A: Now you can use .NET Executable for your task. Put your library as command-line arguments into a run configuration. In the future, we want to add the macro for OutputPath.
| stackoverflow | {
"language": "en",
"length": 92,
"provenance": "stackexchange_0000F.jsonl.gz:877128",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579950"
} |
686f7f8c51f0e18be70b80bd1249c4ed63252878 | Stackoverflow Stackexchange
Q: Is it possible to view one file in a new window in intellij-idea? What I actually want to do is to view different files in a project on different screens.
If they are in different windows, I can easily drag one window to another screen.
It's fine if there are other ways can do this. I'm using windows7.
A: You can drag the editor tab to another screen and it will open in a separate window. See the Detaching Editor Tabs help section for details.
Shift+F4 does the same:
The shortcut can be changed here:
| Q: Is it possible to view one file in a new window in intellij-idea? What I actually want to do is to view different files in a project on different screens.
If they are in different windows, I can easily drag one window to another screen.
It's fine if there are other ways can do this. I'm using windows7.
A: You can drag the editor tab to another screen and it will open in a separate window. See the Detaching Editor Tabs help section for details.
Shift+F4 does the same:
The shortcut can be changed here:
A: You have to add a new keyboard shortcut in your keymap.
The action is called Open In New Editor Window
Then when searching for a class using Ctrl + n (go to Class...) or Ctrl + e (recent files)
instead of opening the Class in the same window by pressing Enter
you can open it in a new window using your own keyboard shortcut (Shift + Enter in my case).
A: *
*Right-click on tab.
*Select Move Tab to New Window option.
| stackoverflow | {
"language": "en",
"length": 180,
"provenance": "stackexchange_0000F.jsonl.gz:877142",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44579998"
} |
4189571c077d0ac69c534ae0b04b7fa0cddedc6f | Stackoverflow Stackexchange
Q: PHP how to include composer autoload in Class file I'm having difficulty including the composer autoload in a class file; it's not working with require_once('../vendor/autoload.php');
require_once('phpmailer/PHPMailerAutoload.php');
require_once('../vendor/autoload.php');
class Test {
function X()
{ ... }
}
What is the proper way to load multiple include files in a class?
A: If you (correctly) use composer, you only need to add the vendor autoload file. Then add the other dependencies via composer vendor libraries or add a custom path (composer does the rest for you).
As example, more simply:
*
*start in an empty directory
*launch the command:
php composer.phar init
*Add the dependency of the library in the composer.json files (if you don't add it in the init process) with the command (suggested by the packagist site)
composer require phpmailer/phpmailer
*Then your class should be like:
require_once('../vendor/autoload.php');
class Test {
function X()
{ ... }
}
Hope this helps
| Q: PHP how to include composer autoload in Class file I'm having difficulty including the composer autoload in a class file; it's not working with require_once('../vendor/autoload.php');
require_once('phpmailer/PHPMailerAutoload.php');
require_once('../vendor/autoload.php');
class Test {
function X()
{ ... }
}
What is the proper way to load multiple include files in a class?
A: If you (correctly) use composer, you only need to add the vendor autoload file. Then add the other dependencies via composer vendor libraries or add a custom path (composer does the rest for you).
As example, more simply:
*
*start in an empty directory
*launch the command:
php composer.phar init
*Add the dependency of the library in the composer.json files (if you don't add it in the init process) with the command (suggested by the packagist site)
composer require phpmailer/phpmailer
*Then your class should be like:
require_once('../vendor/autoload.php');
class Test {
function X()
{ ... }
}
Hope this helps
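With the composer autoloader in place, PHPMailer can then be used through its namespaced class (a minimal sketch assuming PHPMailer 6.x was installed via composer; the addresses and message are placeholders):
require_once('../vendor/autoload.php');

$mail = new PHPMailer\PHPMailer\PHPMailer(true); // true enables exceptions on failure
$mail->setFrom('from@example.com', 'Sender');
$mail->addAddress('to@example.com', 'Recipient');
$mail->Subject = 'Test';
$mail->Body = 'Hello from PHPMailer';
$mail->send();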
A: I think you want something like this
class Loader
{
public function __construct()
{
require_once('phpmailer/PHPMailerAutoload.php');
require_once('../vendor/autoload.php');
}
}
$loader = new Loader();
just add whatever functions you want
tell me if this helps you ... good luck
| stackoverflow | {
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:877152",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580024"
} |
4981d72eee3c5f1944942e68f8501acddb5556c0 | Stackoverflow Stackexchange
Q: Implement IComparable with strings I have class Employee and I need to implement IComparable and use CompareTo method to sort employees by name. From what I've seen, I have to return 1, -1, and 0, but how do I use the strings?
Here's what I have.
class Employee : IComparable<Employee>
{
string name;
string address;
public int CompareTo(Employee obj)
{
Employee person = obj;
}
}
A: The easiest thing to do is to just pass it through to an already implemented comparison method. In this case, since you just need to compare two strings, you could just call String.Compare:
class Employee : IComparable<Employee>
{
string name;
string address;
public int CompareTo(Employee obj)
=> string.Compare(name, obj.name);
}
You could use name.CompareTo(obj.name) too, but then you'd need to worry whether name might be null. According to the MSDN article on String.Compare:
One or both comparands can be null. By definition, any string, including the empty string (""), compares greater than a null reference; and two null references compare equal to each other.
| Q: Implement IComparable with strings I have class Employee and I need to implement IComparable and use CompareTo method to sort employees by name. From what I've seen, I have to return 1, -1, and 0, but how do I use the strings?
Here's what I have.
class Employee : IComparable<Employee>
{
string name;
string address;
public int CompareTo(Employee obj)
{
Employee person = obj;
}
}
A: The easiest thing to do is to just pass it through to an already implemented comparison method. In this case, since you just need to compare two strings, you could just call String.Compare:
class Employee : IComparable<Employee>
{
string name;
string address;
public int CompareTo(Employee obj)
=> string.Compare(name, obj.name);
}
You could use name.CompareTo(obj.name) too, but then you'd need to worry whether name might be null. According to the MSDN article on String.Compare:
One or both comparands can be null. By definition, any string, including the empty string (""), compares greater than a null reference; and two null references compare equal to each other.
A: Simple :
public int CompareTo(Employee obj)
{
if (name == obj.name)
{
return address.CompareTo(obj.address);
}
return name.CompareTo(obj.name);
}
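Once CompareTo is implemented, the usual sorting APIs pick it up automatically (a sketch; the constructor and sample values are assumptions, since the original class does not define a constructor):
// requires using System.Collections.Generic; and using System.Linq;
var employees = new List<Employee>
{
    new Employee("Smith", "1 Main St"),
    new Employee("Jones", "2 Oak Ave"),
};
employees.Sort();                          // uses Employee.CompareTo
var ordered = employees.OrderBy(e => e);   // LINQ also honours IComparable<Employee>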
| stackoverflow | {
"language": "en",
"length": 191,
"provenance": "stackexchange_0000F.jsonl.gz:877154",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580026"
} |
3de6798653cfb27e7bc3d5d29098e30d4702b79c | Stackoverflow Stackexchange
Q: How to add an icon inside an oval-shaped button I have created a drawable button_oval.xml
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:dither="true"
android:shape="rectangle">
<corners android:radius="120dp" />
<solid android:color="#eceef5" />
<stroke
android:width="3dp"
android:color="#29395e" />
<size
android:width="300dp"
android:height="120dp" />
</shape>
and then i use it on my layout like this:
// I want to add a icon inside this button
<Button
android:id="@+id/goToPersonalPage"
android:layout_width="110dp"
android:layout_height="50dp"
android:layout_marginLeft="30dp"
android:background="@drawable/button_oval"
android:text="@string/memberButton"
android:textColor="#29395e"
android:textSize="18dp" />
I want to add an icon inside the Button; is it possible to accomplish this by changing my button_oval.xml?
The final output should look just like the image below:
Any help would be greatly appreciated.
A: Add this code in your Button element
android:drawableLeft="@drawable/yourIcon"
| Q: How to add an icon inside an oval-shaped button I have created a drawable button_oval.xml
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:dither="true"
android:shape="rectangle">
<corners android:radius="120dp" />
<solid android:color="#eceef5" />
<stroke
android:width="3dp"
android:color="#29395e" />
<size
android:width="300dp"
android:height="120dp" />
</shape>
and then i use it on my layout like this:
// I want to add a icon inside this button
<Button
android:id="@+id/goToPersonalPage"
android:layout_width="110dp"
android:layout_height="50dp"
android:layout_marginLeft="30dp"
android:background="@drawable/button_oval"
android:text="@string/memberButton"
android:textColor="#29395e"
android:textSize="18dp" />
I want to add an icon inside the Button; is it possible to accomplish this by changing my button_oval.xml?
The final output should look just like the image below:
Any help would be greatly appreciated.
A: Add this code in your Button element
android:drawableLeft="@drawable/yourIcon"
A: You have to add these lines in your Button element-
android:drawableLeft="@mipmap/ic_launcher_round" // to set an icon
android:drawablePadding="10dp" // to set the padding of icon from text
android:paddingLeft="20dp" // to set the padding of the icon and text
adjust the values according to your need.
It will look like-
A: in XML
android:drawableLeft="@drawable/button_icon"
android:drawablePadding="2dip"
or in Acitvity class
yourButton.setCompoundDrawablesWithIntrinsicBounds(R.drawable.icon, 0, 0, 0);
yourButton.setCompoundDrawablePadding(padding_value);
| stackoverflow | {
"language": "en",
"length": 177,
"provenance": "stackexchange_0000F.jsonl.gz:877179",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580113"
} |
20fc61919ae2ccfa33b6633fbbafe2b68af329dd | Stackoverflow Stackexchange
Q: TeamCity Unmet requirements: MSBuildTools14.0_x86_Path exists I have a solution built in VS 2015 and need to set up TeamCity to run it.
I have installed a build agent on a Virtual Machine, but TeamCity marks all the build profiles for this build agent as incompatible and gives the following error:
Unmet requirements:
MSBuildTools14.0_x86_Path exists
I have installed MSBuild Tools 2013. Please advise what to do. Thank you.
A: You need to download Microsoft Build Tools 2015 from this link: https://www.microsoft.com/en-us/download/details.aspx?id=48159 and make sure the path is added to the system PATH environment variable
| Q: TeamCity Unmet requirements: MSBuildTools14.0_x86_Path exists I have a solution built in VS 2015 and need to set up TeamCity to run it.
I have installed a build agent on a Virtual Machine, but TeamCity marks all the build profiles for this build agent as incompatible and gives the following error:
Unmet requirements:
MSBuildTools14.0_x86_Path exists
I have installed MSBuild Tools 2013. Please advise what to do. Thank you.
A: You need to download Microsoft Build Tools 2015 from this link: https://www.microsoft.com/en-us/download/details.aspx?id=48159 and make sure the path is added to the system PATH environment variable
A: Thank you. I used the Chocolatey tool to install the MSBuild Tools packages. It installed the latest version, for VS 2017, but my solution was built in VS 2015. So I uninstalled the package and then installed the correct version again. That resolved the problem.
| stackoverflow | {
"language": "en",
"length": 131,
"provenance": "stackexchange_0000F.jsonl.gz:877218",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580236"
} |
2bd1eb905ccaa5d79f1bbcab59d23da2b6ad4467 | Stackoverflow Stackexchange
Q: How to change material-icon on click event with angular2/4 material? I have the following icon within md-tab-group:
<md-tab-group>
<md-tab *ngFor="let tab of arrayOfTabs">
<ng-template md-tab-label>
<md-icon (click)="changetab()">close</md-icon>
</ng-template>
My Tab Content
</md-tab>
</md-tab-group>
I want to make it so that the "close" material icon changes to a "star" icon. How can I accomplish that through a click event on the icon for that specific tab?
A: In component :
public icon = 'close';
public changeIcon(newIcon: string ){
this.icon = newIcon ;
}
In HTML
<md-icon (click)="changeIcon('star')">{{icon}}</md-icon>
| Q: How to change material-icon on click event with angular2/4 material? I have the following icon within md-tab-group:
<md-tab-group>
<md-tab *ngFor="let tab of arrayOfTabs">
<ng-template md-tab-label>
<md-icon (click)="changetab()">close</md-icon>
</ng-template>
My Tab Content
</md-tab>
</md-tab-group>
I want to make it so that the "close" material icon changes to a "star" icon. How can I accomplish that through a click event on the icon for that specific tab?
A: In component :
public icon = 'close';
public changeIcon(newIcon: string ){
this.icon = newIcon ;
}
In HTML
<md-icon (click)="changeIcon('star')">{{icon}}</md-icon>
| stackoverflow | {
"language": "en",
"length": 90,
"provenance": "stackexchange_0000F.jsonl.gz:877271",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580387"
} |
32bb320d29e056671566003d51641b0a9c7603d4 | Stackoverflow Stackexchange
Q: CUDA vs. DataParallel: Why the difference? I have a simple neural network model and I apply either cuda() or DataParallel() on the model like following.
model = torch.nn.DataParallel(model).cuda()
OR,
model = model.cuda()
When I don't use DataParallel, rather simply transform my model to cuda(), I need to explicitly convert the batch inputs to cuda() and then give it to the model, otherwise it returns the following error.
torch.index_select received an invalid combination of arguments - got (torch.cuda.FloatTensor, int, torch.LongTensor)
But with DataParallel, the code works fine. Rest of the other things are same. Why this happens? Why when I use DataParallel, I don't need to transform the batch inputs explicitly to cuda()?
A: Because DataParallel allows CPU inputs, as its first step is to transfer the inputs to the appropriate GPUs.
Info source: https://discuss.pytorch.org/t/cuda-vs-dataparallel-why-the-difference/4062/3
| Q: CUDA vs. DataParallel: Why the difference? I have a simple neural network model and I apply either cuda() or DataParallel() on the model like following.
model = torch.nn.DataParallel(model).cuda()
OR,
model = model.cuda()
When I don't use DataParallel, rather simply transform my model to cuda(), I need to explicitly convert the batch inputs to cuda() and then give it to the model, otherwise it returns the following error.
torch.index_select received an invalid combination of arguments - got (torch.cuda.FloatTensor, int, torch.LongTensor)
But with DataParallel, the code works fine. Rest of the other things are same. Why this happens? Why when I use DataParallel, I don't need to transform the batch inputs explicitly to cuda()?
A: Because DataParallel allows CPU inputs, as its first step is to transfer the inputs to the appropriate GPUs.
Info source: https://discuss.pytorch.org/t/cuda-vs-dataparallel-why-the-difference/4062/3
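In other words, without DataParallel the batch has to be moved to the GPU manually before the forward pass (a minimal sketch; the variable names are illustrative):
# model = model.cuda()
inputs = inputs.cuda()
labels = labels.cuda()
outputs = model(inputs)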
| stackoverflow | {
"language": "en",
"length": 133,
"provenance": "stackexchange_0000F.jsonl.gz:877291",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580450"
} |
d1cb4a97dce302458ab538743fe6da59e1bd5f07 | Stackoverflow Stackexchange
Q: How to read a local json file and display Newbie here, I could not find any example of Xamarin Forms reading a local JSON file and displaying it. I need to do local testing to read the local JSON file.
1) Where do I save the json file for reading? in Android and iOS Projects or just in PCL project?
2) How to read the file?
Here is the code, but it is not complete as I don't know how to read the file.
using (var reader = new System.IO.StreamReader(stream))
{
var json = reader.ReadToEnd();
var rootobject = JsonConvert.DeserializeObject<Rootobject>(json);
whateverArray = rootobject.Whatever;
}
The code is missing the path and the other required parts.
A: You can directly add your JSON file in PCL. Then change build action to Embedded Resource
Now you can read Json data by:
var assembly = typeof(YourContentPage).GetTypeInfo().Assembly; // pass a type from your PCL, not a string
Stream stream = assembly.GetManifestResourceStream("Your_File.json"); // the resource name is usually namespace-qualified, e.g. "YourProject.Your_File.json"
using (var reader = new System.IO.StreamReader(stream))
{
var json = reader.ReadToEnd();
var data= JsonConvert.DeserializeObject<Model>(json);
}
 | Q: How to read a local json file and display Newbie here, I could not find any example of Xamarin Forms reading a local JSON file and displaying it. I need to do some local testing that reads the local JSON file.
1) Where do I save the json file for reading? in Android and iOS Projects or just in PCL project?
2) How to read the file?
Here is the code, but it is not complete as I don't know how to read the file.
using (var reader = new System.IO.StreamReader(stream))
{
var json = reader.ReadToEnd();
var rootobject = JsonConvert.DeserializeObject<Rootobject>(json);
whateverArray = rootobject.Whatever;
}
The code is missing the path and the other required parts.
A: You can directly add your JSON file in PCL. Then change build action to Embedded Resource
Now you can read Json data by:
var assembly = typeof(YourContentPage).GetTypeInfo().Assembly; // pass a type from your PCL, not a string
Stream stream = assembly.GetManifestResourceStream("Your_File.json"); // the resource name is usually namespace-qualified, e.g. "YourProject.Your_File.json"
using (var reader = new System.IO.StreamReader(stream))
{
var json = reader.ReadToEnd();
var data= JsonConvert.DeserializeObject<Model>(json);
}
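For completeness, a minimal sketch of the Model class used above (the property names are placeholders and must match the keys in Your_File.json):
// matches JSON such as: { "Name": "test", "Value": 1 }
public class Model
{
    public string Name { get; set; }
    public int Value { get; set; }
}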
| stackoverflow | {
"language": "en",
"length": 158,
"provenance": "stackexchange_0000F.jsonl.gz:877297",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580467"
} |
7183d37aab1390cefde49d0a49aaf7cdf744e603 | Stackoverflow Stackexchange
Q: In which position does Datomic lie in the CAP Triangle? Recently I heard that Datomic, as a modern database, can be excellent at data modeling and scalability. But I know little of it. Does the Datomic database follow the CAP Theorem?
If so, in which position does it lie in the CAP Triangle?
A: Datomic provides ACID semantics and is CP for writes. The unusual thing about Datomic is that it does not co-locate read services with
transactional services. You don’t go through the transactor
to read and you don’t need to read inside transactions to
see consistent data. This allows Datomic to scale reads horizontally, and lets you make independent decisions about read availability.
It is worth reading http://docs.datomic.com/acid.html, particularly
the Implications section.
 | Q: In which position does Datomic lie in the CAP Triangle? Recently I heard that Datomic, as a modern database, can be excellent at data modeling and scalability. But I know little of it. Does the Datomic database follow the CAP Theorem?
If so, in which position does it lie in the CAP Triangle?
A: Datomic provides ACID semantics and is CP for writes. The unusual thing about Datomic is that it does not co-locate read services with
transactional services. You don’t go through the transactor
to read and you don’t need to read inside transactions to
see consistent data. This allows Datomic to scale reads horizontally, and lets you make independent decisions about read availability.
It is worth reading http://docs.datomic.com/acid.html, particularly
the Implications section.
| stackoverflow | {
"language": "en",
"length": 125,
"provenance": "stackexchange_0000F.jsonl.gz:877344",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580596"
} |
58615f0013655398f3f9688e887da55c261cc7b6 | Stackoverflow Stackexchange
Q: How to make JSON response with URL endpoint from 3rd party site I have a problem syncing data from another site to my site via a JSON response. I got documentation from the other site for pushing data to my site. The documentation says I must create a URL endpoint and return a JSON response to receive the data. This is the JSON format I got from the documentation:
{
"id": INTEGER, // Confirmation Number
"type": STRING, // "new" or "update" or "cancel"
"data": OBJECT // data element
}
In this case my site is using the CodeIgniter framework. What I want to ask is how to make that JSON response on my site? I have already created a function in the Controller like this:
function json_response()
{
}
And I don't know what codes I must put in that Controller. Thanks.
A: If you are not sure what variable name they send the JSON data under, try to var_dump the $_POST variable into some table in your database, or write the $_POST values into a file. By doing that, you can inspect which variables you received in that request.
 | Q: How to make JSON response with URL endpoint from 3rd party site I have a problem syncing data from another site to my site via a JSON response. I got documentation from the other site for pushing data to my site. The documentation says I must create a URL endpoint and return a JSON response to receive the data. This is the JSON format I got from the documentation:
{
"id": INTEGER, // Confirmation Number
"type": STRING, // "new" or "update" or "cancel"
"data": OBJECT // data element
}
In this case my site is using the CodeIgniter framework. What I want to ask is how to make that JSON response on my site? I have already created a function in the Controller like this:
function json_response()
{
}
And I don't know what codes I must put in that Controller. Thanks.
A: If you are not sure what variable name they send the JSON data under, try to var_dump the $_POST variable into some table in your database, or write the $_POST values into a file. By doing that, you can inspect which variables you received in that request.
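A minimal sketch of such an endpoint, assuming CodeIgniter 3 and that the third party POSTs the JSON as the raw request body (the class name and stored fields are placeholders):
<?php
// application/controllers/Sync.php
class Sync extends CI_Controller
{
    public function json_response()
    {
        // third parties often POST raw JSON rather than form fields,
        // so read the request body directly and decode it
        $payload = json_decode($this->input->raw_input_stream, true);

        $id   = isset($payload['id'])   ? (int) $payload['id'] : null;
        $type = isset($payload['type']) ? $payload['type']     : null;
        $data = isset($payload['data']) ? $payload['data']     : null;

        // ... store or process $data depending on $type ("new", "update", "cancel") ...

        // reply with JSON so the sender knows the push was accepted
        $this->output
             ->set_content_type('application/json')
             ->set_output(json_encode(array('id' => $id, 'status' => 'ok')));
    }
}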
| stackoverflow | {
"language": "en",
"length": 195,
"provenance": "stackexchange_0000F.jsonl.gz:877346",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580598"
} |
cf4d2932ce023f1c6f88bd88daef70c2434fafad | Stackoverflow Stackexchange
Q: How can I leverage spring-data-jpa auditing (AuditorAware) in asynchronous tasks? Currently, My AuditorAware Implementation uses Spring's SecurityContextHolder to retrieve the current Auditor for saving creation/modification usernames:
@Service
public class AuditorAwareImpl implements AuditorAware<UserDetails> {
private final UserDetailsService userDetailsService;
@Autowired
public AuditorAwareImpl(UserDetailsService userDetailsService){
this.userDetailsService = userDetailsService;
}
@Override
public UserDetails getCurrentAuditor() {
Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
return userDetailsService.loadUserByUsername(authentication.getName());
}
}
This works fine for most operations except for asynchronous tasks executed by Spring batch's SimpleAsyncTaskExecutor.
By the time entities need saving, since the SecurityContextHolder is wiped after the request has been processed, and the jobLauncher.run(...) returns asynchronously, the AuditorAwareImpl.getCurrentAuditor() method throws a NullPointerException due to a null getAuthentication():
java.lang.NullPointerException: null
at com.example.services.AuditorAwareImpl.getCurrentAuditor(AuditorAwareImpl.java:31)
at com.example.services.AuditorAwareImpl.getCurrentAuditor(AuditorAwareImpl.java:18)
So far I have included the request-invoking user as a non-identifying parameter to the Job but don't know where to proceed from here.
What is a recommended way of leveraging spring's inbuilt auditing when the SecurityContextHolder is not appropriate for finding the invoking "auditor"?
A: You can wrap your AsyncTaskExecutor in a DelegatingSecurityContextAsyncTaskExecutor, which is specifically designed for propagating the Spring SecurityContext. You will also need to set MODE_INHERITABLETHREADLOCAL as the SecurityContextHolder strategy.
| Q: How can I leverage spring-data-jpa auditing (AuditorAware) in asynchronous tasks? Currently, My AuditorAware Implementation uses Spring's SecurityContextHolder to retrieve the current Auditor for saving creation/modification usernames:
@Service
public class AuditorAwareImpl implements AuditorAware<UserDetails> {
private final UserDetailsService userDetailsService;
@Autowired
public AuditorAwareImpl(UserDetailsService userDetailsService){
this.userDetailsService = userDetailsService;
}
@Override
public UserDetails getCurrentAuditor() {
Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
return userDetailsService.loadUserByUsername(authentication.getName());
}
}
This works fine for most operations except for asynchronous tasks executed by Spring batch's SimpleAsyncTaskExecutor.
By the time entities need saving, since the SecurityContextHolder is wiped after the request has been processed, and the jobLauncher.run(...) returns asynchronously, the AuditorAwareImpl.getCurrentAuditor() method throws a NullPointerException due to a null getAuthentication():
java.lang.NullPointerException: null
at com.example.services.AuditorAwareImpl.getCurrentAuditor(AuditorAwareImpl.java:31)
at com.example.services.AuditorAwareImpl.getCurrentAuditor(AuditorAwareImpl.java:18)
So far I have included the request-invoking user as a non-identifying parameter to the Job but don't know where to proceed from here.
What is a recommended way of leveraging spring's inbuilt auditing when the SecurityContextHolder is not appropriate for finding the invoking "auditor"?
A: You can wrap your AsyncTaskExecutor in a DelegatingSecurityContextAsyncTaskExecutor, which is specifically designed for propagating the Spring SecurityContext. You will also need to set MODE_INHERITABLETHREADLOCAL as the SecurityContextHolder strategy.
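A minimal configuration sketch of that wrapping (the bean and class names are illustrative, not from the original project):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.task.DelegatingSecurityContextAsyncTaskExecutor;

@Configuration
public class AsyncAuditConfig {

    public AsyncAuditConfig() {
        // child threads inherit the calling thread's SecurityContext
        SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);
    }

    @Bean
    public DelegatingSecurityContextAsyncTaskExecutor jobTaskExecutor() {
        // wrap the executor used by the JobLauncher so the auditor lookup
        // still sees the authenticated user inside batch threads
        return new DelegatingSecurityContextAsyncTaskExecutor(new SimpleAsyncTaskExecutor());
    }
}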
| stackoverflow | {
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:877351",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580608"
} |
8a3b73e1ec6622b826d07c6f5405a9cab75e9f80 | Stackoverflow Stackexchange
Q: Using SpecFlow in Jetbrains Rider Is there any way to use SpecFlow in Jetbrains Rider? I searched about it but I couldn't find any information about it.
A: As of March 2021 there is now a Rider Plugin for SpecFlow. You can find it at https://plugins.jetbrains.com/plugin/15957-specflow-for-rider
SpecFlow has two parts. The Visual Studio extension and the NuGet packages.
The Visual Studio extension includes the IntelliSense, syntax highlighting and item templates.
The NuGet package contains the runtime and the generators for the code behind files.
The generation of the code-behind files can be triggered by the Visual Studio extension (default behaviour) or at build time (http://specflow.org/documentation/Generate-Tests-from-MsBuild/). That is where the generated coded tests are located, and they are then discovered by the unit test runner.
So if you use the MSBuild integration and work without intellisense and syntax highlighting, you should be already able to work with SpecFlow in Jetbrains Rider.
| Q: Using SpecFlow in Jetbrains Rider Is there any way to use SpecFlow in Jetbrains Rider? I searched about it but I couldn't find any information about it.
A: As of March 2021 there is now a Rider Plugin for SpecFlow. You can find it at https://plugins.jetbrains.com/plugin/15957-specflow-for-rider
SpecFlow has two parts. The Visual Studio extension and the NuGet packages.
The Visual Studio extension includes the IntelliSense, syntax highlighting and item templates.
The NuGet package contains the runtime and the generators for the code behind files.
The generation of the code-behind files can be triggered by the Visual Studio extension (default behaviour) or at build time (http://specflow.org/documentation/Generate-Tests-from-MsBuild/). That is where the generated coded tests are located, and they are then discovered by the unit test runner.
So if you use the MSBuild integration and work without intellisense and syntax highlighting, you should be already able to work with SpecFlow in Jetbrains Rider.
| stackoverflow | {
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:877361",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580637"
} |
f2115bce1de71d7781f2f39cb119835390674d82 | Stackoverflow Stackexchange
Q: How to get confidence interval from RNN? I am working on time series prediction using RNN and tensorflow. I am not sure how to get confidence interval from the distribution which may or may not be defined in the internal memory state of the rnn_decoder.
This way I can plot the distribution like ARIMA:
or Gaussian Process
Here is the code I am working on:
https://github.com/guillaume-chevalier/seq2seq-signal-prediction
Here is the definition for tf.contrib.legacy_seq2seq.basic_rnn_seq2seq.
with variable_scope.variable_scope(scope or "basic_rnn_seq2seq"):
enc_cell = copy.deepcopy(cell)
_, enc_state = rnn.static_rnn(enc_cell, encoder_inputs, dtype=dtype)
return rnn_decoder(decoder_inputs, enc_state, cell)
And rnn_decoder
It should be similar to logits in the discrete case (namely softmax cross entropy). I have tried passing a custom loop function to the decoder but the code was not working at all. My second guess was to write a custom decoder, but I need someone to point me in the right direction.
| Q: How to get confidence interval from RNN? I am working on time series prediction using RNN and tensorflow. I am not sure how to get confidence interval from the distribution which may or may not be defined in the internal memory state of the rnn_decoder.
This way I can plot the distribution like ARIMA:
or Gaussian Process
Here is the code I am working on:
https://github.com/guillaume-chevalier/seq2seq-signal-prediction
Here is the definition for tf.contrib.legacy_seq2seq.basic_rnn_seq2seq.
with variable_scope.variable_scope(scope or "basic_rnn_seq2seq"):
enc_cell = copy.deepcopy(cell)
_, enc_state = rnn.static_rnn(enc_cell, encoder_inputs, dtype=dtype)
return rnn_decoder(decoder_inputs, enc_state, cell)
And rnn_decoder
It should be similar to logits in the discrete case (namely softmax cross entropy). I have tried passing a custom loop function to the decoder but the code was not working at all. My second guess was to write a custom decoder, but I need someone to point me in the right direction.
| stackoverflow | {
"language": "en",
"length": 141,
"provenance": "stackexchange_0000F.jsonl.gz:877364",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580640"
} |
b0494b8ff6c5f72e72919e4198dc5ca9b7e75ad3 | Stackoverflow Stackexchange
Q: Qt dependencies not found I am trying to compile the bitcoin-core on my Mac and I want to use QT to develop the project also. Here is the instruction on GitHub:
https://github.com/bitcoin/bitcoin/blob/0.14/doc/build-osx.md
And I have set up my QT:
$ qmake --version
QMake version 3.0
Using Qt version 5.5.1 in /usr/local/Cellar/qt@5.5/5.5.1_1/lib
then when i run
./configure --with-gui
it throws the error below:
checking for Qt5Core Qt5Gui Qt5Network Qt5Widgets... no
checking for QtCore QtGui QtNetwork... no
configure: error: Qt dependencies not found
and I can't run ./src/qt in Qt. It throws these errors:
make[1]: *** No rule to make target `bitcoin_qt'. Stop.
make: *** [all] Error 2
11:24:14: The process "/usr/bin/make" exited with code 2.
Error while building/deploying project bitocin (kit: Desktop)
When executing step "Make"
My English's bad, hope you could understand. Thank you for help!
| Q: Qt dependencies not found I am trying to compile the bitcoin-core on my Mac and I want to use QT to develop the project also. Here is the instruction on GitHub:
https://github.com/bitcoin/bitcoin/blob/0.14/doc/build-osx.md
And I have set up my QT:
$ qmake --version
QMake version 3.0
Using Qt version 5.5.1 in /usr/local/Cellar/qt@5.5/5.5.1_1/lib
then when i run
./configure --with-gui
it throws the error below:
checking for Qt5Core Qt5Gui Qt5Network Qt5Widgets... no
checking for QtCore QtGui QtNetwork... no
configure: error: Qt dependencies not found
and I can't run ./src/qt in Qt. It throws these errors:
make[1]: *** No rule to make target `bitcoin_qt'. Stop.
make: *** [all] Error 2
11:24:14: The process "/usr/bin/make" exited with code 2.
Error while building/deploying project bitocin (kit: Desktop)
When executing step "Make"
My English's bad, hope you could understand. Thank you for help!
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:877380",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580704"
} |
a9c101a4a305c17145feaaf5dcb9d752932b44a2 | Stackoverflow Stackexchange
Q: How to use ffmpeg to split a video and then merge it smoothly? The idea is to split a video into n segments and process them separated and when the process is done to merge the segments into a full video.
I tried using the following approach:
```
// spliting
ffmpeg -i video.mp4 -c:v copy -c:a copy -ss 0 -t 10 video_0_10.mp4
ffmpeg -i video.mp4 -c:v copy -c:a copy -ss 10 -t 20 video_10_20.mp4
vim video_list.txt (with all files)
// joining (merging them)
ffmpeg -f concat -safe 0 -i video_list.txt -c:v copy -c:a copy new_video.mp4
```
But when I tried to play new_video.mp4 (using VLC) it didn't play smoothly; it froze seemingly at the moment of the join.
What's the best way to split a bigger video into several smaller ones, work on them, and afterwards join the smaller ones into a new video?
A: Thanks to @Mulvya the answer is to use, properly, the segmenter muxer:
wget http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4
ffmpeg -fflags +genpts -i bbb_sunflower_1080p_60fps_normal.mp4 -map 0 -c copy -f segment -segment_format mp4 -segment_time 30 -segment_list video.ffcat -reset_timestamps 1 -v error chunk-%03d.mp4
ffmpeg -y -v error -i video.ffcat -map 0 -c copy output.mp4
| Q: How to use ffmpeg to split a video and then merge it smoothly? The idea is to split a video into n segments and process them separated and when the process is done to merge the segments into a full video.
I tried using the following approach:
```
// spliting
ffmpeg -i video.mp4 -c:v copy -c:a copy -ss 0 -t 10 video_0_10.mp4
ffmpeg -i video.mp4 -c:v copy -c:a copy -ss 10 -t 20 video_10_20.mp4
vim video_list.txt (with all files)
// joining (merging them)
ffmpeg -f concat -safe 0 -i video_list.txt -c:v copy -c:a copy new_video.mp4
```
But when I tried to play new_video.mp4 (using VLC) it didn't play smoothly; it froze seemingly at the moment of the join.
What's the best way to split a bigger video into several smaller ones, work on them, and afterwards join the smaller ones into a new video?
A: Thanks to @Mulvya the answer is to use, properly, the segmenter muxer:
wget http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4
ffmpeg -fflags +genpts -i bbb_sunflower_1080p_60fps_normal.mp4 -map 0 -c copy -f segment -segment_format mp4 -segment_time 30 -segment_list video.ffcat -reset_timestamps 1 -v error chunk-%03d.mp4
ffmpeg -y -v error -i video.ffcat -map 0 -c copy output.mp4
| stackoverflow | {
"language": "en",
"length": 191,
"provenance": "stackexchange_0000F.jsonl.gz:877418",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580808"
} |
a9204e6be639ba355a2b3c7e23ad048d7b00dbcd | Stackoverflow Stackexchange
Q: MongoDB not authorized on admin to execute command after reinstall I am new to MongoDB. I created a root user on the 'admin' database but I cannot connect to other databases via mongoose. So I dropped that user. Now there is no user, but I cannot even connect to the 'admin' database. Then I reinstalled MongoDB, and still I get 'not authorized on admin to execute command'. What should I do?
 | Q: MongoDB not authorized on admin to execute command after reinstall I am new to MongoDB. I created a root user on the 'admin' database but I cannot connect to other databases via mongoose. So I dropped that user. Now there is no user, but I cannot even connect to the 'admin' database. Then I reinstalled MongoDB, and still I get 'not authorized on admin to execute command'. What should I do?
| stackoverflow | {
"language": "en",
"length": 70,
"provenance": "stackexchange_0000F.jsonl.gz:877424",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580830"
} |
10054d6705688107e68d1edf1a3dc43234c3da8d | Stackoverflow Stackexchange
Q: How to handle LiveData items in onResume - onPause state only? Documentation says:
LifecycleOwner is considered as active, if its state is STARTED or RESUMED.
But what if I want it to be active if the state is RESUMED only? For example, show some fancy animation when user back on screen.
Is there a way to do this using only LiveData?
For now, I'm checking state when an event comes and if state is not RESUMED,
I'm caching it to proceed in onResume method. That doesn't feel right.
A: According to the documentation provided by Google, this is the only way to do that, at least for now (version alpha3 as I'm writing this answer). I think what you are doing here (distinguishing between the started and resumed state) is quite an edge case, and Android Architecture Components are designed to be a generic "fit all" library.
| Q: How to handle LiveData items in onResume - onPause state only? Documentation says:
LifecycleOwner is considered as active, if its state is STARTED or RESUMED.
But what if I want it to be active if the state is RESUMED only? For example, show some fancy animation when user back on screen.
Is there a way to do this using only LiveData?
For now, I'm checking state when an event comes and if state is not RESUMED,
I'm caching it to proceed in onResume method. That doesn't feel right.
A: According to the documentation provided by Google, this is the only way to do that, at least for now (version alpha3 as I'm writing this answer). I think what you are doing here (distinguishing between the started and resumed state) is quite an edge case, and Android Architecture Components are designed to be a generic "fit all" library.
A: You can also subclass LiveData or MutableLiveData to get the behavior you want, which will be easier if you want this behavior in more than one place.
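A minimal sketch of the caching approach described in the question (androidx imports are assumed here; playFancyAnimation and pendingValue are placeholder names): react only while RESUMED, otherwise keep the value and replay it in onResume().
import androidx.appcompat.app.AppCompatActivity;
import androidx.lifecycle.Lifecycle;
import androidx.lifecycle.LiveData;

public class FancyActivity extends AppCompatActivity {
    private LiveData<String> liveData;   // supplied by a ViewModel (placeholder)
    private String pendingValue;         // event cached while not RESUMED

    private void observeEvents() {
        liveData.observe(this, value -> {
            if (getLifecycle().getCurrentState().isAtLeast(Lifecycle.State.RESUMED)) {
                playFancyAnimation(value);
            } else {
                pendingValue = value;    // replay it in onResume()
            }
        });
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (pendingValue != null) {
            playFancyAnimation(pendingValue);
            pendingValue = null;
        }
    }

    private void playFancyAnimation(String value) { /* placeholder */ }
}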
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:877431",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580843"
} |
aa90a985aefc2b7fc917629e77cd42c7c25e52fd | Stackoverflow Stackexchange
Q: sending message to contacts in telethon python telegram How can I see all my contacts and send them messages?
I use Telethon (the Telegram API library for Python).
from telethon.tl.functions.contacts import ResolveUsernameRequest
from telethon.tl.types import InputChannelEmpty
from telethon import TelegramClient
from telethon.tl.types.messages import Messages
from telethon.tl.types.contacts import Contacts
api_id = 1****
api_hash = '5fbd2d************************'
client = TelegramClient('arta0', api_id, api_hash)
client.connect()
A: Just add these lines to your code:
from telethon.tl.functions.contacts import GetContactsRequest
contacts = client.invoke(GetContactsRequest(""))
print(contacts)
And you should see the contacts in the result.
To send messages to contacts, you can use the send_message function defined in telegram_client.py and has an example in InteractiveTelegramClient.py.
for u in contacts.users:
client.send_message(InputPeerUser(u.id, u.access_hash), "hi")
If you need more details comment below and I will try to reply.
| Q: sending message to contacts in telethon python telegram How can I see all my contacts and send them messages?
I use Telethon (the Telegram API library for Python).
from telethon.tl.functions.contacts import ResolveUsernameRequest
from telethon.tl.types import InputChannelEmpty
from telethon import TelegramClient
from telethon.tl.types.messages import Messages
from telethon.tl.types.contacts import Contacts
api_id = 1****
api_hash = '5fbd2d************************'
client = TelegramClient('arta0', api_id, api_hash)
client.connect()
A: Just add these lines to your code:
from telethon.tl.functions.contacts import GetContactsRequest
contacts = client.invoke(GetContactsRequest(""))
print(contacts)
And you should see the contacts in the result.
To send messages to contacts, you can use the send_message function defined in telegram_client.py and has an example in InteractiveTelegramClient.py.
for u in contacts.users:
client.send_message(InputPeerUser(u.id, u.access_hash), "hi")
If you need more details comment below and I will try to reply.
A: Sending an empty string didn't work for me:
contacts = client.invoke(GetContactsRequest(""))
*** struct.error: required argument is not an integer
So I think you should use '0' instead:
contacts = client.invoke(GetContactsRequest(0))
print(contacts)
A: I think in new update client.invoke() is not defined.
I used following code and it worked for me.
from telethon.tl.functions.contacts import GetContactsRequest
contacts = client(GetContactsRequest(0))
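Putting the pieces together for recent Telethon versions (api_id and api_hash are placeholders):
from telethon.sync import TelegramClient
from telethon.tl.functions.contacts import GetContactsRequest
from telethon.tl.types import InputPeerUser

api_id = 12345                      # placeholder
api_hash = '0123456789abcdef'       # placeholder

with TelegramClient('arta0', api_id, api_hash) as client:
    # hash=0 means "no cached hash", so the full contact list is returned
    contacts = client(GetContactsRequest(hash=0))
    for user in contacts.users:
        print(user.id, user.first_name)
        client.send_message(InputPeerUser(user.id, user.access_hash), 'hi')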
| stackoverflow | {
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:877434",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580847"
} |
87f2cb0ca4be5724778192bc31ba7613d289220c | Stackoverflow Stackexchange
Q: Get value of parameters in deep link url iOS everyone.
My question is: How can I get data from deep link URL?
I have two apps and I want to send data from app1 to app2 using the deep link.
I have a button on app1 to click and open app2 then app 2 will get data from app1 by deep link URL.
Here is my code of button send in app1:
@IBAction func btnSend_Clicked(_ sender: Any) {
let text = self.txtInput.text?.replacingOccurrences(of: " ", with: "%20")
UIApplication.shared.open(URL(string: "myapp://?code=\(text!)")!, options: [:], completionHandler: nil)
}
so, How can i get data from deeplink url (code parameter) in app2?
Really Thanks for your help !!!!
A: You implement this code in Appdelegate:
func application(_ app: UIApplication, open url: URL, options: [UIApplicationOpenURLOptionsKey : Any] = [:]) -> Bool {
    guard url.scheme == "myapp" else { return false }
    // e.g. myapp://?code=some%20text gives queryItems = [code: "some text"]
    let items = URLComponents(url: url, resolvingAgainstBaseURL: false)?.queryItems ?? []
    if let code = items.first(where: { $0.name == "code" })?.value {
        // hand the decoded value over to your view controller / app state here
        print("received code: \(code)")
    }
    return true
}
| Q: Get value of parameters in deep link url iOS everyone.
My question is: How can I get data from deep link URL?
I have two apps and I want to send data from app1 to app2 using the deep link.
I have a button on app1 to click and open app2 then app 2 will get data from app1 by deep link URL.
Here is my code of button send in app1:
@IBAction func btnSend_Clicked(_ sender: Any) {
let text = self.txtInput.text?.replacingOccurrences(of: " ", with: "%20")
UIApplication.shared.open(URL(string: "myapp://?code=\(text!)")!, options: [:], completionHandler: nil)
}
so, How can i get data from deeplink url (code parameter) in app2?
Really Thanks for your help !!!!
A: You implement this code in Appdelegate:
func application(_ app: UIApplication, open url: URL, options: [UIApplicationOpenURLOptionsKey : Any] = [:]) -> Bool {
    guard url.scheme == "myapp" else { return false }
    // e.g. myapp://?code=some%20text gives queryItems = [code: "some text"]
    let items = URLComponents(url: url, resolvingAgainstBaseURL: false)?.queryItems ?? []
    if let code = items.first(where: { $0.name == "code" })?.value {
        // hand the decoded value over to your view controller / app state here
        print("received code: \(code)")
    }
    return true
}
| stackoverflow | {
"language": "en",
"length": 180,
"provenance": "stackexchange_0000F.jsonl.gz:877441",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580871"
} |
f8f9d70a0410d3c7e6173b15f930cddc23285e02 | Stackoverflow Stackexchange
Q: Recycler View loading very slow for large data when inside NestedScrollView I have added RecyclerView inside my NestedScrollView. Basically I want RecyclerView to scroll with other Views. The problem that I am facing is that for a small set of data, it is working fine, but for a large set of data(200 entries) whenever I launch the activity, it freezes for about about 3-5 seconds and then loads. I removed the NestedScrollView and it is working flawlessly, but it doesn't provide me the behaviour I want.
(For extra info, I am loading the data from SQLite database. There is no problem in scrolling, as it is smooth. The only problem is the activity is freezing for a while)
<android.support.v4.widget.NestedScrollView
android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<... Some other Views ...>
<android.support.v7.widget.RecyclerView
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:orientation="vertical">
</android.support.v7.widget.RecyclerView>
</LinearLayout>
</android.support.v4.widget.NestedScrollView>
A: This is a consequence of putting a RecyclerView inside a NestedScrollView.
The RecyclerView calls onCreateViewHolder() a number of times equal to your data size.
If the data has 200 items, it freezes while onCreateViewHolder() is called 200 times.
| Q: Recycler View loading very slow for large data when inside NestedScrollView I have added RecyclerView inside my NestedScrollView. Basically I want RecyclerView to scroll with other Views. The problem that I am facing is that for a small set of data, it is working fine, but for a large set of data(200 entries) whenever I launch the activity, it freezes for about about 3-5 seconds and then loads. I removed the NestedScrollView and it is working flawlessly, but it doesn't provide me the behaviour I want.
(For extra info, I am loading the data from SQLite database. There is no problem in scrolling, as it is smooth. The only problem is the activity is freezing for a while)
<android.support.v4.widget.NestedScrollView
android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<... Some other Views ...>
<android.support.v7.widget.RecyclerView
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:orientation="vertical">
</android.support.v7.widget.RecyclerView>
</LinearLayout>
</android.support.v4.widget.NestedScrollView>
A: This is a consequence of putting a RecyclerView inside a NestedScrollView.
The RecyclerView calls onCreateViewHolder() a number of times equal to your data size.
If the data has 200 items, it freezes while onCreateViewHolder() is called 200 times.
A: The problem, as said above, is that a RecyclerView that is a child (or deeper descendant) of a NestedScrollView measures its height as infinite when you use WRAP_CONTENT or MATCH_PARENT for the RecyclerView's height.
One solution that solved this problem for me was setting the RecyclerView height to a fixed size. You could set the height to a dp value, or set it to a pixel value matching the device's height if your requirements need a vertically "infinite" RecyclerView.
here is a snippet for setting the recyclerView size in kotlin
val params = recyclerView.layoutParams
params.apply {
width = context.resources.displayMetrics.widthPixels
height = context.resources.displayMetrics.heightPixels
}
recyclerView.layoutParams = params
A: I faced the same problem!
The solution was to change NestedScrollView to SwipeRefreshLayout.
add this for enable/disable ToolBar Scrolling:
ViewCompat.setNestedScrollingEnabled(recyclerView, true);
A: As said by Nancy, recyclerView.setNestedScrollingEnabled(false) will solve the stuck-scroll issue. I faced this type of issue too and solved it by disabling nested scrolling.
| stackoverflow | {
"language": "en",
"length": 319,
"provenance": "stackexchange_0000F.jsonl.gz:877449",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580892"
} |
6b0b7c4932ed63421b1dad8c212704b688bdad29 | Stackoverflow Stackexchange
Q: Password protect a SPECIFIC Jupyter notebook The docs describe how to create a password to protect your jupyter notebooks. I would like to be able to create and share a particular notebook with a special password for just that notebook. Is this possible?
A: No, it is not possible. The password protects the whole Jupyter server. Once somebody has logged into the server, there's nothing that could stop them from accessing all of the notebooks stored in the file system.
| Q: Password protect a SPECIFIC Jupyter notebook The docs describe how to create a password to protect your jupyter notebooks. I would like to be able to create and share a particular notebook with a special password for just that notebook. Is this possible?
A: No, it is not possible. The password protects the whole Jupyter server. Once somebody has logged into the server, there's nothing that could stop them from accessing all of the notebooks stored in the file system.
A: I needed to share a password-protected notebook recently, and after some research I found 3 ways to do it (and even wrote an article on how to share a password-protected Jupyter notebook).
First two options are for static notebook (converted to HTML) and third option is for sharing interactive notebook.
1. Use hosting platforms
There are hosting platforms like Vercel or Netlify that can add password protection to a static website. You need to convert the notebook to an HTML file, rename it to index.html and upload it to such a hosting provider. It might require a paid plan.
2. Encrypt notebook
There is a staticrypt library that can encrypt HTML file with your password.
*
*Convert Jupyter Notebook to HTML file.
*Use staticrypt website to generate encrypted HTML.
*Download encrypted HTML file.
You can share encrypted HTML file on GitHub Pages or AWS S3. You will have one password for all users.
3. Mercury
There is an open-source framework, Mercury, that can convert a Jupyter Notebook into an interactive web application. It uses a YAML header to add widgets to the notebook. The YAML header has a share option that can be used to specify with whom the notebook is shared (you can read more in Mercury's documentation).
Example YAML header:
title: Some title
description: Some app description
share: private
params:
input: text
label: Please enter text
Authentication is a paid feature in Mercury, so you will need a commercial license for this.
| stackoverflow | {
"language": "en",
"length": 318,
"provenance": "stackexchange_0000F.jsonl.gz:877467",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580965"
} |
e9450376175d901f61320d76ef215e9eb0c4a3bf | Stackoverflow Stackexchange
Q: How do I get a placeholder image to load when my image is still loading from server I am building an application with Angular 4/ Ionic 3 that loads images that users upload from a server. I am currently using the code below but it is not working:
<img src="{{user.image}} || assets/images/profileimage.png" />
I am not sure what I am doing wrong. I thought what it would show the placeholder image until the main image had loaded from the server. Instead it is still just blank until the main image loads. Is there something specific to Angular that I should be using?
A: isImgLoaded: boolean = false;
<img *ngIf="!isImgLoaded" src="assets/images/profileimage.png" >
<img [hidden]="!isImgLoaded" [src]="user.image" (load)="isImgLoaded = true" >
| Q: How do I get a placeholder image to load when my image is still loading from server I am building an application with Angular 4/ Ionic 3 that loads images that users upload from a server. I am currently using the code below but it is not working:
<img src="{{user.image}} || assets/images/profileimage.png" />
I am not sure what I am doing wrong. I thought what it would show the placeholder image until the main image had loaded from the server. Instead it is still just blank until the main image loads. Is there something specific to Angular that I should be using?
A: isImgLoaded: boolean = false;
<img *ngIf="!isImgLoaded" src="assets/images/profileimage.png" >
<img [hidden]="!isImgLoaded" [src]="user.image" (load)="isImgLoaded = true" >
A: As far as I can tell, all modern browsers should support HTMLImageElement.complete, so you can just use
<img #userImage [src]="user.image">
<img *ngIf="!userImage.complete" src="assets/images/profileimage.png">
A: I found this YouTube video very helpful. Here is the detailed documentation for the Angular module which is used in the video.
Install LazyLoadImageModule
npm i ng-lazyload-image
Module file
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { LazyLoadImageModule } from 'ng-lazyload-image'; // <-- import it
import { AppComponent } from './app.component';
@NgModule({
declarations: [AppComponent],
imports: [BrowserModule, LazyLoadImageModule], // <-- and include it
bootstrap: [AppComponent],
})
export class MyAppModule {}
Component file
import { Component } from '@angular/core';
@Component({
selector: 'image',
template: ` <img [defaultImage]="defaultImage" [lazyLoad]="image" /> `,
})
class ImageComponent {
defaultImage = 'https://www.placecage.com/1000/1000';
image = 'https://images.unsplash.com/photo-1443890923422-7819ed4101c0?fm=jpg';
}
Where [defaultImage] is a placeholder and [lazyLoad] is the actual image.
A: You need to provide the relative path as a string literal here .
<img src="{{user.image}} || ./assets/images/profileimage.png" />
Or Try:
<img [src]="user.image || './assets/images/profileimage.png'" />
A: Better to use this lazy load plugin. It works awesome..
Install : npm install ng2-lazyload-image --save
Import lazy load in app.module.ts: import { LazyLoadImageModule } from 'ng2-lazyload-image';
imports: [ BrowserModule, LazyLoadImageModule ],
In your HTML, replace your img tag with this:
<img [defaultImage]="defaultImage" [lazyLoad]="image" [offset]="offset">
In your component, create these variables:
defaultImage = 'https://www.placecage.com/1000/1000';
image = 'https://images.unsplash.com/photo-1443890923422-7819ed4101c0?fm=jpg';
offset = 100;
For more info refer this https://www.npmjs.com/package/ng2-lazyload-image
If you need any help in this let me know..
Hope this helps..
| stackoverflow | {
"language": "en",
"length": 365,
"provenance": "stackexchange_0000F.jsonl.gz:877469",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44580968"
} |
d0042bdfd05f6d99b64834cfbdce05480574bef6 | Stackoverflow Stackexchange
Q: How to reduce memory to minimum with global.gc() in nodejs? I found the related problem here
But still don't got the real answer for this. :(
So, How to reduce to minimum memory with global.gc() in nodejs?
should I spam global.gc() function to reduce?
A: Instead of forcing the garbage collector to run, you should first identify if you actually have a memory leak by using various tools (e.g. node's built-in inspector, heapdump module, etc.) available for node that allow you to detect such leaks.
It's entirely possible that it appears there is a leak when there isn't, due to how V8's garbage collector works (it is generally lazy because GC is not exactly a cheap operation CPU usage-wise).
Also, you can limit the amount of memory used by V8 via the --max-old-space-size=xxxx command-line argument (where xxxx is the amount of memory in megabytes). This can also be helpful in more quickly determining whether you have a legitimate memory leak.
| Q: How to reduce memory to minimum with global.gc() in nodejs? I found the related problem here
But I still haven't got a real answer for this. :(
So, how do I reduce memory to a minimum with global.gc() in Node.js?
Should I spam the global.gc() function to reduce it?
A: Instead of forcing the garbage collector to run, you should first identify if you actually have a memory leak by using various tools (e.g. node's built-in inspector, heapdump module, etc.) available for node that allow you to detect such leaks.
It's entirely possible that it appears there is a leak when there isn't, due to how V8's garbage collector works (it is generally lazy because GC is not exactly a cheap operation CPU usage-wise).
Also, you can limit the amount of memory used by V8 via the --max-old-space-size=xxxx command-line argument (where xxxx is the amount of memory in megabytes). This can also be helpful in more quickly determining whether you have a legitimate memory leak.
| stackoverflow | {
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:877492",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581024"
} |
69d6e554b0cf5777bd793478a6b60fd45ac905df | Stackoverflow Stackexchange
Q: Reloading fragment, Edittext's text not cleared In my fragment there are a lot of spinners and edit texts; the submit button saves the data and the reset button resets all elements (EditTexts and Spinners). I have used the following code to reset all controls:
FragmentTransaction ft = getFragmentManager().beginTransaction();
ft.detach(this).attach(this).commit();
but it doesn't clear the EditTexts. All spinners are reset, but the EditTexts' text remains as it is.
A:
detach().detach() not working after support library update 25.1.0 (may be earlier). This solution works fine after update:
note:
use runOnUiThread() to use commitNowAllowingStateLoss
getSupportFragmentManager()
.beginTransaction()
.detach(oldFragment)
.commitNowAllowingStateLoss();
getSupportFragmentManager()
.beginTransaction()
.attach(oldFragment)
.commitAllowingStateLoss();
 | Q: Reloading fragment, Edittext's text not cleared In my fragment there are a lot of spinners and edit texts; the submit button saves the data and the reset button resets all elements (EditTexts and Spinners). I have used the following code to reset all controls:
FragmentTransaction ft = getFragmentManager().beginTransaction();
ft.detach(this).attach(this).commit();
but it doesn't clear the EditTexts. All spinners are reset, but the EditTexts' text remains as it is.
A:
Chaining detach() and attach() in a single transaction stopped working after support library update 25.1.0 (maybe earlier). This solution works fine after the update:
note:
use runOnUiThread() to use commitNowAllowingStateLoss
getSupportFragmentManager()
.beginTransaction()
.detach(oldFragment)
.commitNowAllowingStateLoss();
getSupportFragmentManager()
.beginTransaction()
.attach(oldFragment)
.commitAllowingStateLoss();
A: Try this one :
FragmentTransaction ft = getSupportFragmentManager().beginTransaction();
ft.remove(this).replace(R.id.container, YourFragment.newInstance());
ft.commit();
Performance note: if you are only replacing the fragment to reset the values, it is better to reset the values manually, because replacing the entire fragment involves a lot of extra overhead compared to manually resetting the values.
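A minimal sketch of that manual reset (it lives inside the fragment; the view classes are android.view.View/ViewGroup and android.widget.EditText/Spinner):
private void resetForm(ViewGroup root) {
    // walk the fragment's view tree and reset inputs in place,
    // avoiding a full detach()/attach() of the fragment
    for (int i = 0; i < root.getChildCount(); i++) {
        View child = root.getChildAt(i);
        if (child instanceof EditText) {
            ((EditText) child).setText("");
        } else if (child instanceof Spinner) {
            ((Spinner) child).setSelection(0);
        } else if (child instanceof ViewGroup) {
            resetForm((ViewGroup) child);
        }
    }
}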
| stackoverflow | {
"language": "en",
"length": 148,
"provenance": "stackexchange_0000F.jsonl.gz:877501",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581051"
} |
b6f5649ed5de9c920fac1bd21f83807847fecb41 | Stackoverflow Stackexchange
Q: How to change the colour of an Icon from FontAwesome I have a menuitem with an icon specified, like this:
{
xtype: 'menuitem',
text: 'Random Text',
iconCls: 'x-fa fa-briefcase',
}
How do I gain access to this icon in the css and change the colour of it?
A: If you want to change all icons, do as EvanTrimboli suggests. In SCSS, add
$menu-glyph-color: dynamic(#008000);
If you want to change only certain icons, you should make a special class for that:
iconCls: 'x-fa fa-briefcase greenIcon',
and then add the new color to the CSS:
.greenIcon {
color: green;
}
| Q: How to change the colour of an Icon from FontAwesome I have a menuitem with an icon specified, like this:
{
xtype: 'menuitem',
text: 'Random Text',
iconCls: 'x-fa fa-briefcase',
}
How do I gain access to this icon in the css and change the colour of it?
A: If you want to change all icons, do as EvanTrimboli suggests. In SCSS, add
$menu-glyph-color: dynamic(#008000);
If you want to change only certain icons, you should make a special class for that:
iconCls: 'x-fa fa-briefcase greenIcon',
and then add the new color to the CSS:
.greenIcon {
color: green;
}
A: Skip 'iconCls' (and 'glyph' for that matter) and declare the webfont icon class styled with inline CSS in the same field as the extjs component's text/header/title:
{
xtype: 'menuitem',
text: '<i class="x-fa fa-briefcase" style="color:green;"></i> Random Text',
}
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:877506",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581067"
} |
27689f52cda2216eafbf820006677569306aa8d0 | Stackoverflow Stackexchange
Q: How can i apply an interface to an anonymous callback function in typescript I have an interface that looks like this
interface CallBackHandler{
(err: Error, response: {statusCode: number, body: object}):void
}
and i want to apply that interface to the callback of this
request({
url: url,
method: "GET",
withCredentials: false,
json: true,
headers: headers
}, (err, response) => {
this.handleResponse(err, response, resolve, reject);
});
but I'm getting an error saying the function must return something if the return type is not void when I add the interface
(err, response): CallBackHandler => {
this.handleResponse(err, response, resolve, reject);
}
what is the correct way to apply this interface?
A: If request() already has the type signature:
function request(options, callback: CallBackHandler) {
...
}
Then you shouldn't need to do anything, as the callback you provide will be type-checked accordingly.
If that function doesn't already have that type signature, and you want to manually cast your callback to CallBackHandler, then you will need to wrap the callback function in parentheses and cast that expression, like so:
request({
...etc
}, ((err, response) => {
this.handleResponse(err, response, resolve, reject);
}) as CallBackHandler);
| Q: How can i apply an interface to an anonymous callback function in typescript I have an interface that looks like this
interface CallBackHandler{
(err: Error, response: {statusCode: number, body: object}):void
}
and i want to apply that interface to the callback of this
request({
url: url,
method: "GET",
withCredentials: false,
json: true,
headers: headers
}, (err, response) => {
this.handleResponse(err, response, resolve, reject);
});
but I'm getting an error saying the function must return something if the return type is not void when I add the interface
(err, response): CallBackHandler => {
this.handleResponse(err, response, resolve, reject);
}
what is the correct way to apply this interface?
A: If request() already has the type signature:
function request(options, callback: CallBackHandler) {
...
}
Then you shouldn't need to do anything, as the callback you provide will be type-checked accordingly.
If that function doesn't already have that type signature, and you want to manually cast your callback to CallBackHandler, then you will need to wrap the callback function in parentheses and cast that expression, like so:
request({
...etc
}, ((err, response) => {
this.handleResponse(err, response, resolve, reject);
}) as CallBackHandler);
A: You can have something like
request({
url: url,
method: "GET",
withCredentials: false,
json: true,
headers: headers
}, (err: Error, response: {statusCode: number, body: object}) => {
this.handleResponse(err, response, resolve, reject);
});
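Another option, not shown above, is to type a named callback variable with the interface so no cast is needed (a sketch reusing the names from the question):
const handler: CallBackHandler = (err, response) => {
    // err and response are inferred from the interface
    this.handleResponse(err, response, resolve, reject);
};

request({
    url: url,
    method: "GET",
    withCredentials: false,
    json: true,
    headers: headers
}, handler);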
| stackoverflow | {
"language": "en",
"length": 220,
"provenance": "stackexchange_0000F.jsonl.gz:877511",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581096"
} |
6d9cf1401ef66f370dfd0e9da098d7c1d1a30245 | Stackoverflow Stackexchange
Q: How to return from forEach lambda while filtering over a collection of iterable element I have a List of elements. Inside each list element, in my case Bean, I have another list. My requirement is: while iterating over the parent list, I have to check for a specific condition in the list obtained from the Bean class's getList() and return a boolean from there. Below is a demo of the code for what I want to achieve. How can I achieve this in Java 8 using a lambda?
public boolean test(List<Bean> parentList) {
//Bean is having another List of Bean1
// i want to do some thing like below
parentList.forEach(bean ->
bean.getList().stream().
filter(somePredicate).
findFirst().isPresent();
}
A: You should use Stream::flatMap and check your condition:
parentList.stream().flatMap(bean -> bean.getList().stream()).anyMatch(somePredicate);
 | Q: How to return from forEach lambda while filtering over a collection of iterable element I have a List of elements. Inside each list element, in my case Bean, I have another list. My requirement is: while iterating over the parent list, I have to check for a specific condition in the list obtained from the Bean class's getList() and return a boolean from there. Below is a demo of the code for what I want to achieve. How can I achieve this in Java 8 using a lambda?
public boolean test(List<Bean> parentList) {
//Bean is having another List of Bean1
// i want to do some thing like below
parentList.forEach(bean ->
bean.getList().stream().
filter(somePredicate).
findFirst().isPresent();
}
A: You should use Stream::flatMap and check your condition:
parentList.stream().flatMap(bean -> bean.getList().stream()).anyMatch(somePredicate);
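Wrapped back into the method signature from the question (Bean and somePredicate are as defined there):
public boolean test(List<Bean> parentList) {
    // flatten every bean's inner list into one stream and short-circuit
    // as soon as any element matches the predicate
    return parentList.stream()
            .flatMap(bean -> bean.getList().stream())
            .anyMatch(somePredicate);
}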
| stackoverflow | {
"language": "en",
"length": 132,
"provenance": "stackexchange_0000F.jsonl.gz:877514",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581107"
} |
8f40940e4673e183f40ff769bd2c991cb79c5679 | Stackoverflow Stackexchange
Q: malloc_usable_size() returns the wrong size I want to know the size allocated by malloc.
I have written the source code below.
test.c
#include <stdio.h>
#include <stdint.h>
#include <malloc.h>
void main(void)
{
uint8_t *test;
test = (uint8_t *)malloc(sizeof(uint8_t)*4);
printf("sizeof(test) = %d\n",malloc_usable_size(test));
free(test);
}
I expected size to be 4.
But the result is 12.
sizeof(test) = 12
Can you tell me what's wrong?
I hope that size 4 correctly comes out.
A: malloc_usable_size(test)
The value returned by the above function is not fixed to what you requested. It may be greater than the requested size of the allocation, depending on alignment and the allocator's internal bookkeeping. This depends entirely on the underlying implementation.
| Q: malloc_usable_size() returns the wrong size I want to know the size allocated by malloc.
I have written the source code below.
test.c
#include <stdio.h>
#include <stdint.h>
#include <malloc.h>
void main(void)
{
uint8_t *test;
test = (uint8_t *)malloc(sizeof(uint8_t)*4);
printf("sizeof(test) = %d\n",malloc_usable_size(test));
free(test);
}
I expected size to be 4.
But the result is 12.
sizeof(test) = 12
Can you tell me what's wrong?
I hope that size 4 correctly comes out.
A: malloc_usable_size(test)
The value returned by the above function is not fixed to what you requested. It may be greater than the requested size of the allocation, depending on alignment and the allocator's internal bookkeeping. This depends entirely on the underlying implementation.
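If the exact requested size matters later, the program has to remember it itself; a minimal sketch:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* The allocator only guarantees at least the requested size, so keep
 * track of the request yourself if you need it back later. */
int main(void)
{
    size_t requested = sizeof(uint8_t) * 4;
    uint8_t *test = malloc(requested);
    if (test == NULL)
        return 1;

    printf("requested = %zu bytes\n", requested);   /* always 4 here */
    free(test);
    return 0;
}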
| stackoverflow | {
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:877532",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581168"
} |
904890b9a28e6d95001a8e91870999d3a75fcf59 | Stackoverflow Stackexchange
Q: Samsung galaxy note 5 viewport size I have some trouble replicating a website responsive issue using Samsung Note 5. The display is messed up when using the said device. I tried using chrome dev tool and adjust sizes for a chance to replicate it but no luck. I also search the internet but I can't see note 5 viewport size.
I tried 360x640 but it displays with no issues so I think this is not the right viewport for the Note 5. Any help would be much appreciated.
A: From this site
Screen Size (X - Horizontally): 1440
Screen Size (Y - Vertically): 2560
Pixel Ratio: 4.0
Portrait
It is 360 x 640
The site also allow you to enter any URL and render it in a virtual galaxy note 5
Side Note
The material design project have a web directory in which they listed the display details of many well known mobile phones
Alternate Site
This site also allow you to render a url into a virtual galaxy note 5
Disclosure affiliation : I am in no way affiliated with any of the sites and also not an official representative of it/them
| Q: Samsung galaxy note 5 viewport size I have some trouble replicating a website responsive issue using Samsung Note 5. The display is messed up when using the said device. I tried using chrome dev tool and adjust sizes for a chance to replicate it but no luck. I also search the internet but I can't see note 5 viewport size.
I tried 360x640 but it displays with no issues so I think this is not the right viewport for the Note 5. Any help would be much appreciated.
A: From this site
Screen Size (X - Horizontally): 1440
Screen Size (Y - Vertically): 2560
Pixel Ratio: 4.0
Portrait
It is 360 x 640
The site also allow you to enter any URL and render it in a virtual galaxy note 5
Side Note
The material design project have a web directory in which they listed the display details of many well known mobile phones
Alternate Site
This site also allow you to render a url into a virtual galaxy note 5
Disclosure affiliation : I am in no way affiliated with any of the sites and also not an official representative of it/them
A: According to statistics
Pixel size: 1440 x 2560
ViewPort size: 480 x 853
Also you can get this information with javascript:
var w = Math.max(document.documentElement.clientWidth, window.innerWidth || 0);
var h = Math.max(document.documentElement.clientHeight, window.innerHeight || 0);
A: Samsung Galaxy Note 5
screen size: 1440 x 2560
viewport: 480 x 853
| stackoverflow | {
"language": "en",
"length": 244,
"provenance": "stackexchange_0000F.jsonl.gz:877563",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581305"
} |
c59991d1483706e669f04c64c36b2a5b01585ec7 | Stackoverflow Stackexchange
Q: What are the purpose and benefits of using persistent connection in mysql? According to php manual:
Persistent connections are not closed at the end of the script, but are cached and re-used when another script requests a connection using the same credentials. The persistent connection cache allows you to avoid the overhead of establishing a new connection every time a script needs to talk to a database, resulting in a faster web application.
One advantage is efficiency that we do not need to make connection each time script executes. What are other advantages of persistent connection? Like query cache etc?
A: It allows for resumption of transactions across multiple requests, as well as allowing connection settings such as the character set encoding to persist.
| Q: What are the purpose and benefits of using persistent connection in mysql? According to php manual:
Persistent connections are not closed at the end of the script, but are cached and re-used when another script requests a connection using the same credentials. The persistent connection cache allows you to avoid the overhead of establishing a new connection every time a script needs to talk to a database, resulting in a faster web application.
One advantage is efficiency that we do not need to make connection each time script executes. What are other advantages of persistent connection? Like query cache etc?
A: It allows for resumption of transactions across multiple requests, as well as allowing connection settings such as the character set encoding to persist.
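For reference, a persistent connection in PHP's mysqli is requested by prefixing the host with p: (the credentials here are placeholders):
<?php
// "p:" asks mysqli to reuse a cached connection opened with the same
// credentials instead of establishing a new one on every request
$mysqli = new mysqli('p:localhost', 'user', 'password', 'mydb');
$result = $mysqli->query('SELECT 1');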
| stackoverflow | {
"language": "en",
"length": 124,
"provenance": "stackexchange_0000F.jsonl.gz:877581",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581377"
} |
55345d6f6612959611e784b554f9b3f47840fdaf | Stackoverflow Stackexchange
Q: Filter AWS KMS Keys by Tag or by current role which has encrypt/decrypt permissions? I am writing an API to display a list of KMS keys to the user. Based on the user's selection I need to use that particular KMS key for encryption. Currently, I am displaying all the KMS keys, but I am facing issues while encrypting/decrypting because lambda_role does not have permissions on that KMS key.
How can I filter them on any of the below options
*
*Get all kms keys where (Tag) product = "product 1" - Planning to Tag the keys with product tag, and fetch by tag
*Get all kms keys where role = "lambda_role" has permission to encrypt/decrypt.
I could not find any AWS API to filter based on any of the options.
A: Unfortunately, the ListKeys API does not fulfill either of your requirements. The only way I can see to do what you want is client-side filtering, i.e. call ListKeys and then, for each key you care about, call ListResourceTags and ListKeyPolicies respectively.
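A rough boto3 sketch of that client-side approach, for the tag case (the 'product' tag key and value are taken from the question; error handling for keys you are not allowed to inspect is left out):
import boto3

kms = boto3.client('kms')

def keys_with_tag(tag_key, tag_value):
    # List every key, then inspect each key's tags client-side.
    matches = []
    for page in kms.get_paginator('list_keys').paginate():
        for key in page['Keys']:
            tags = kms.list_resource_tags(KeyId=key['KeyId']).get('Tags', [])
            if any(t['TagKey'] == tag_key and t['TagValue'] == tag_value for t in tags):
                matches.append(key['KeyId'])
    return matches

print(keys_with_tag('product', 'product 1'))
The permission case works the same way, but calling list_key_policies / get_key_policy per key instead of list_resource_tags.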
| Q: Filter AWS KMS Keys by Tag or by current role which has encrypt/decrypt permissions? I am writing an API to display a list of KMS keys to the user. Based on the user's selection I need to use that particular KMS key for encryption. Currently, I am displaying all the KMS keys, but I am facing issues while encrypting/decrypting because lambda_role does not have permissions on that KMS key.
How can I filter them on any of the below options
*
*Get all kms keys where (Tag) product = "product 1" - Planning to Tag the keys with product tag, and fetch by tag
*Get all kms keys where role = "lambda_role" has permission to encrypt/decrypt.
I could not find any AWS API to filter based on any of the options.
A: Unfortunately, the ListKeys API does not fulfill either of your requirements. The only way I can see to do what you want is client-side filtering, i.e. call ListKeys and then, for each key you care about, call ListResourceTags and ListKeyPolicies respectively.
| stackoverflow | {
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:877601",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581451"
} |
bebd973bb8ec116f261edffe92a5eafc006ee07b | Stackoverflow Stackexchange
Q: Is it possible to have Firebase host your static Web site and another provider host your blog under the same domain? I recently began hosting my static site on Firebase; however, I would like that site to also have a blog powered by WordPress.
I'm wondering if it's possible to configure Firebase to direct mysite.com/blog back to my domain/hosting provider, which supports PHP.
A: If you have pointed your domain to Firebase, it is possible to have a Wordpress blog hosted by a separate hosting provider who supports PHP. However, you will need to use a subdomain such as blog.mydomain.com rather than a subdirectory such as mydomain.com/blog.
At least with my domain registrar, it is not currently possible to configure the DNS to point a subdirectory to the secondary host instead of Firebase once the domain itself is pointed to Firebase.
| Q: Is it possible to have Firebase host your static Web site and another provider host your blog under the same domain? I recently began hosting my static site on Firebase; however, I would like that site to also have a blog powered by WordPress.
I'm wondering if it's possible to configure Firebase to direct mysite.com/blog back to my domain/hosting provider, which supports PHP.
A: If you have pointed your domain to Firebase, it is possible to have a Wordpress blog hosted by a separate hosting provider who supports PHP. However, you will need to use a subdomain such as blog.mydomain.com rather than a subdirectory such as mydomain.com/blog.
At least with my domain registrar, it is not currently possible to configure the DNS to point a subdirectory to the secondary host instead of Firebase once the domain itself is pointed to Firebase.
| stackoverflow | {
"language": "en",
"length": 143,
"provenance": "stackexchange_0000F.jsonl.gz:877604",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581458"
} |
f3462a7364b8cddc2b1237b717089c5f3fe15512 | Stackoverflow Stackexchange
Q: Inconsistent gap between number and text I have a strange spacing issue. A number and the word 'each' sit side by side, and the spacing between the digits 1, 4, 7 and the word 'each' differs. How can we fix this issue, or can it not be fixed? I have not used any letter-spacing or other extra CSS properties.
@import url('https://fonts.googleapis.com/css?family=Spectral');
@import url('https://fonts.googleapis.com/css?family=Open+Sans|Spectral');
.bigger {
font-size: 40px;
}
p {
font-family: 'Open Sans', sans-serif;
}
<p>
<span class="bigger">81</span>
<small>each</small>
</p> <br>
<p>
<span class="bigger">84</span>
<small>each</small>
</p> <br>
<p>
<span class="bigger">87</span>
<small>each</small>
</p> <br>
A: That is an issue with the letter spacing of the font. You should use a monospace font to achieve the same spacing for all characters.
Try the below snippet.
.bigger {
font-size: 40px;
}
p {
font-family: monospace;
}
<p>
<span class="bigger">81</span>
<small>each</small>
</p> <br>
<p>
<span class="bigger">84</span>
<small>each</small>
</p> <br>
<p>
<span class="bigger">87</span>
<small>each</small>
</p> <br>
| Q: Inconsistent gap between number and text I have a strange spacing issue. A number and the word 'each' sit side by side, and the spacing between the digits 1, 4, 7 and the word 'each' differs. How can we fix this issue, or can it not be fixed? I have not used any letter-spacing or other extra CSS properties.
@import url('https://fonts.googleapis.com/css?family=Spectral');
@import url('https://fonts.googleapis.com/css?family=Open+Sans|Spectral');
.bigger {
font-size: 40px;
}
p {
font-family: 'Open Sans', sans-serif;
}
<p>
<span class="bigger">81</span>
<small>each</small>
</p> <br>
<p>
<span class="bigger">84</span>
<small>each</small>
</p> <br>
<p>
<span class="bigger">87</span>
<small>each</small>
</p> <br>
A: That is an issue with the letter spacing of the font. You should use a monospace font to achieve the same spacing for all characters.
Try the below snippet.
.bigger {
font-size: 40px;
}
p {
font-family: monospace;
}
<p>
<span class="bigger">81</span>
<small>each</small>
</p> <br>
<p>
<span class="bigger">84</span>
<small>each</small>
</p> <br>
<p>
<span class="bigger">87</span>
<small>each</small>
</p> <br>
A: The character 1 (and 7 sometimes) would usually be spaced out in most fonts. If you want uniform spacing, you should consider using monospace fonts.
Another improvement that you can make to your code is removing the spaces between tags.
Please check the code below:
@import url('https://fonts.googleapis.com/css?family=Spectral');
@import url('https://fonts.googleapis.com/css?family=Open+Sans|Spectral');
.bigger {
font-size: 40px;
}
p {
font-family: 'Open Sans', sans-serif;
}
<p>
<span class="bigger">81</span><small>each</small>
</p> <br>
<p>
<span class="bigger">84</span><small>each</small>
</p> <br>
<p>
<span class="bigger">87</span><small>each</small>
</p> <br>
A: Although using a monospace font is a nice workaround, you could solve this with your original font if it has the correct OpenType features.
The difference in the space that a digit occupies is caused by the width of the digit (as opposed to kerning or letter spacing, as suggested in the other answers). The width is proportional — the 1 is narrower than the 4.
But a font can also offer tabular figures, where each digit is of equal width:
You can enable this in CSS with font-feature-settings: 'tnum';. Or to use other OpenType features and take care of browser inconsistencies, see Utility OpenType.
| stackoverflow | {
"language": "en",
"length": 325,
"provenance": "stackexchange_0000F.jsonl.gz:877618",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581499"
} |
a47ab1a81c1f7e8d8b9a659c85fa472cdc788007 | Stackoverflow Stackexchange
Q: How to filter appProperty where null in Google Drive API? I am trying, in a single query, to filter for files where the property is null and for files where the property has {key=1 and value=1}.
Here is my query string sent in the request:
'q'=>"'$FolderId' in parents
and (appProperties has null or appProperties has { key='1_id' and
value='1' })"
| Q: How to filter appProperty where null in Google Drive API? I am trying, in a single query, to filter for files where the property is null and for files where the property has {key=1 and value=1}.
Here is my query string sent in the request:
'q'=>"'$FolderId' in parents
and (appProperties has null or appProperties has { key='1_id' and
value='1' })"
| stackoverflow | {
"language": "en",
"length": 55,
"provenance": "stackexchange_0000F.jsonl.gz:877622",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581507"
} |
a8ebb2e31153a01d6fb9cdaaddea6afdb42b4c79 | Stackoverflow Stackexchange
Q: Error android studio canary 3 Hello everyone, I have a problem in Android Studio Canary 3: the layout doesn't work. What is the solution for this?
Screenshot of the message displayed: Gradle project sync failed
A: Try to replace this string
"org.gradle.jvmargs=-Xmx512m" with this
"org.gradle.jvmargs=-XX:MaxPermSize=512m"
| Q: Error android studio canary 3 Hello everyone, I have a problem in Android Studio Canary 3: the layout doesn't work. What is the solution for this?
Screenshot of the message displayed: Gradle project sync failed
A: Try to replace this string
"org.gradle.jvmargs=-Xmx512m" with this
"org.gradle.jvmargs=-XX:MaxPermSize=512m"
| stackoverflow | {
"language": "en",
"length": 44,
"provenance": "stackexchange_0000F.jsonl.gz:877623",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581509"
} |
f37b7d5b431925104c6cd3ecfbbeb4fffe2f13f4 | Stackoverflow Stackexchange
Q: Generate Number Then Play Music So for my website, I want to make it so if you load the page, a javascript block will generate a random number 1-5 and depending on the number generated, it will play an audio file. Such as something like this in Visual Basic but into JavaScript code:
Generate a random number between 1-5, so that 1, 2, 3, 4 and 5 are all possibilities. This number is intRandomNumber
If intRandomNumber = 1 Then
Play song1.mp3
Else
End if
If intRandomNumber = 2 Then
Play song2.mp3
Else
End if
If intRandomNumber = 3 Then
Play song3.mp3
Else
End if
If intRandomNumber = 4 Then
Play song4.mp3
Else
End if
If intRandomNumber = 5 Then
Play song5.mp3
Else
End if
Thanks for any help back. I do appreciate it. :)
A: (new Audio("somefolder/song" + (Math.floor(Math.random() * 5) + 1) + ".mp3")).play();
| Q: Generate Number Then Play Music So for my website, I want to make it so if you load the page, a javascript block will generate a random number 1-5 and depending on the number generated, it will play an audio file. Such as something like this in Visual Basic but into JavaScript code:
Generate a random number between 1-5, so that 1, 2, 3, 4 and 5 are all possibilities. This number is intRandomNumber
If intRandomNumber = 1 Then
Play song1.mp3
Else
End if
If intRandomNumber = 2 Then
Play song2.mp3
Else
End if
If intRandomNumber = 3 Then
Play song3.mp3
Else
End if
If intRandomNumber = 4 Then
Play song4.mp3
Else
End if
If intRandomNumber = 5 Then
Play song5.mp3
Else
End if
Thanks for any help back. I do appreciate it. :)
A: (new Audio("somefolder/song" + (Math.floor(Math.random() * 5) + 1) + ".mp3")).play();
A: var random_number = Math.floor(Math.random() * 5) + 1; // integer from 1 to 5
var audio = new Audio('song' + random_number + '.mp3');
audio.play();
| stackoverflow | {
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:877632",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581540"
} |
098cbb6e188fd862edc3477f8c4fdab40e56c335 | Stackoverflow Stackexchange
Q: AWS Lambda RDS Connection Pooling We are trying to add the AWS X-Ray JDBC interceptor to our Lambda functions, and in order to add the JDBC interceptor we have added a Tomcat JDBC datasource with max active and max idle connections set to 1. Connections are not getting reused and we are getting a lot of "connection already closed" errors.
Another pattern we observed is that Lambda takes almost 10 minutes to flush the connection from the Aurora DB.
Has anyone successfully implemented connection pooling with Lambda (Java 8) and RDS (Aurora)?
A: I think your cry for connection pooling in RDS has reached AWS just now...
Here you go!
RDS Proxy for Aurora/RDS was launched recently, at AWS re:Invent 2019.
| Q: AWS Lambda RDS Connection Pooling We are trying to add the AWS X-Ray JDBC interceptor to our Lambda functions, and in order to add the JDBC interceptor we have added a Tomcat JDBC datasource with max active and max idle connections set to 1. Connections are not getting reused and we are getting a lot of "connection already closed" errors.
Another pattern we observed is that Lambda takes almost 10 minutes to flush the connection from the Aurora DB.
Has anyone successfully implemented connection pooling with Lambda (Java 8) and RDS (Aurora)?
A: I think your cry for connection pooling in RDS has reached AWS just now...
Here you go!
RDS Proxy for Aurora/RDS was launched recently, at AWS re:Invent 2019.
A: I've had some recent success with the latest MariaDB Connector-J and aurora failover. I've had no issues with any queries as of yet with my jdbc url like jdbc:mariadb:aurora://host:port/db?...
See https://mariadb.com/kb/en/the-mariadb-library/failover-and-high-availability-with-mariadb-connector-j/#specifics-for-amazon-aurora
I'm still working on error-free connection pooling, but I'm running into the occasional DEBUG from HikariCP about TransientConnectionError or the MariaDB Connector-J with NullPointerException
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:877657",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581622"
} |
a0638ce06970071af148f27d01338a30bfbf2e20 | Stackoverflow Stackexchange
Q: Location Updates Not Regular I am developing a distance-based app which requires regular distance updates. The app works fine if the device has internet, but if the device does not have connectivity, then even with GPS on and the device moving, it fails to update the location in the onLocationChanged listener as quickly as required.
Registering The Location Listener
LocationListener[] mLocationListeners = new LocationListener[]{
new LocationListener(LocationManager.GPS_PROVIDER),
new LocationListener(LocationManager.NETWORK_PROVIDER)
};
mLocationManager.requestLocationUpdates(
LocationManager.NETWORK_PROVIDER, LOCATION_INTERVAL, LOCATION_DISTANCE,
mLocationListeners[1]);
LocationListener
private class LocationListener implements android.location.LocationListener {
Location mLastLocation;
public LocationListener(String provider) {
Log.e(TAG, "LocationListener " + provider);
mLastLocation = new Location(provider);
}
@Override
public void onLocationChanged(Location location) {
Log.e(TAG, "onLocationChanged: " + location);
mLastLocation.set(location);
mLocationController.updateLastKnownLocation(location);
}
@Override
public void onProviderDisabled(String provider) {
Log.e(TAG, "onProviderDisabled: " + provider);
}
@Override
public void onProviderEnabled(String provider) {
Log.e(TAG, "onProviderEnabled: " + provider);
}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
Log.e(TAG, "onStatusChanged: " + provider);
}
}
LocationListener[] mLocationListeners = new LocationListener[]{
new LocationListener(LocationManager.GPS_PROVIDER),
new LocationListener(LocationManager.NETWORK_PROVIDER)
};
I am using this Location Listener
| Q: Location Updates Not Regular I am developing a distance-based app which requires regular distance updates. The app works fine if the device has internet, but if the device does not have connectivity, then even with GPS on and the device moving, it fails to update the location in the onLocationChanged listener as quickly as required.
Registering The Location Listener
LocationListener[] mLocationListeners = new LocationListener[]{
new LocationListener(LocationManager.GPS_PROVIDER),
new LocationListener(LocationManager.NETWORK_PROVIDER)
};
mLocationManager.requestLocationUpdates(
LocationManager.NETWORK_PROVIDER, LOCATION_INTERVAL, LOCATION_DISTANCE,
mLocationListeners[1]);
LocationListener
private class LocationListener implements android.location.LocationListener {
Location mLastLocation;
public LocationListener(String provider) {
Log.e(TAG, "LocationListener " + provider);
mLastLocation = new Location(provider);
}
@Override
public void onLocationChanged(Location location) {
Log.e(TAG, "onLocationChanged: " + location);
mLastLocation.set(location);
mLocationController.updateLastKnownLocation(location);
}
@Override
public void onProviderDisabled(String provider) {
Log.e(TAG, "onProviderDisabled: " + provider);
}
@Override
public void onProviderEnabled(String provider) {
Log.e(TAG, "onProviderEnabled: " + provider);
}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
Log.e(TAG, "onStatusChanged: " + provider);
}
}
LocationListener[] mLocationListeners = new LocationListener[]{
new LocationListener(LocationManager.GPS_PROVIDER),
new LocationListener(LocationManager.NETWORK_PROVIDER)
};
I am using this Location Listener
| stackoverflow | {
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:877673",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581673"
} |
5a06955d4fb04c17f6e84e699b58d008f30c8c5d | Stackoverflow Stackexchange
Q: How to speed up simple Pandas for/if loop? I have a fairly simple loop that runs fine, but takes much longer than I think it should (~5 minutes).
for i in range(len(df)):
if pd.isnull(df['Date'][i]):
df['Date'][i] = df['Date'][i-1]
The purpose here is to fix dates and times in a data file I have that is structured so that the first row for each day has the text for the date, but all the others are blank. I'm simply checking whether the value is null, and if it is, setting it to the previous value.
Is there a more Pandas-y way to do this more efficiently?
Thanks,
Ben
A: Use forward filling ffill
df.Date.ffill(inplace=True)
Demo
df = pd.DataFrame(dict(
Date=['Wed', None, None, 'Thr', None, None],
Time=[1, 2, 3, 4, 5, 6]
))
df
Date Time
0 Wed 1
1 None 2
2 None 3
3 Thr 4
4 None 5
5 None 6
Then
df.Date.ffill(inplace=True)
df
Date Time
0 Wed 1
1 Wed 2
2 Wed 3
3 Thr 4
4 Thr 5
5 Thr 6
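For a sense of scale, here is a quick sketch of the same fill on a larger frame (the size and the 'Wed' label are made up for illustration); ffill replaces the Python-level row loop and chained indexing, which is almost certainly where the ~5 minutes were going:
import numpy as np
import pandas as pd

n = 100_000  # hypothetical size, just to illustrate
# date text on the first row of each 50-row block, None elsewhere
dates = np.where(np.arange(n) % 50 == 0, 'Wed', None)
df = pd.DataFrame({'Date': dates, 'Time': np.arange(n)})

df['Date'] = df['Date'].ffill()  # completes in milliseconds at this size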
| Q: How to speed up simple Pandas for/if loop? I have a fairly simple loop that runs fine, but takes much longer than I think it should (~5 minutes).
for i in range(len(df)):
if pd.isnull(df['Date'][i]):
df['Date'][i] = df['Date'][i-1]
The purpose here is to fix dates and times in a data file I have that is structured so that the first row for each day has the text for the date, but all the others are blank. I'm simply checking whether the value is null, and if it is, setting it to the previous value.
Is there a more Pandas-y way to do this more efficiently?
Thanks,
Ben
A: Use forward filling ffill
df.Date.ffill(inplace=True)
Demo
df = pd.DataFrame(dict(
Date=['Wed', None, None, 'Thr', None, None],
Time=[1, 2, 3, 4, 5, 6]
))
df
Date Time
0 Wed 1
1 None 2
2 None 3
3 Thr 4
4 None 5
5 None 6
Then
df.Date.ffill(inplace=True)
df
Date Time
0 Wed 1
1 Wed 2
2 Wed 3
3 Thr 4
4 Thr 5
5 Thr 6
| stackoverflow | {
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:877674",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581674"
} |
2795a35ebdd882f273f685b6dfb87e5853225013 | Stackoverflow Stackexchange
Q: How to get stock status from product collection in magento Here is my product collection; can anyone help me get the stock status from the product collection?
$collection = Mage::getModel('catalog/category')
->load($categoryId)
->getProductCollection()
->addAttributeToSelect('*')
->addAttributeToFilter('visibility', array(
Mage_Catalog_Model_Product_Visibility::VISIBILITY_BOTH,
Mage_Catalog_Model_Product_Visibility::VISIBILITY_IN_CATALOG));
return $collection;
A: Stock status (is_in_stock) is part of every product collection.
foreach ($collections as $product) {
echo $product->getStockItem()->getIsInStock();
}
Returns 1 if in stock, else null.
If you need other stock information like min_qty or backorders you can add this to your collection:
$collection->setFlag('require_stock_items', true);
More details: https://magento.stackexchange.com/questions/106455/get-product-stock-quantity-in-magento/209510#209510
| Q: How to get stock status from product collection in magento Here is my product collection; can anyone help me get the stock status from the product collection?
$collection = Mage::getModel('catalog/category')
->load($categoryId)
->getProductCollection()
->addAttributeToSelect('*')
->addAttributeToFilter('visibility', array(
Mage_Catalog_Model_Product_Visibility::VISIBILITY_BOTH,
Mage_Catalog_Model_Product_Visibility::VISIBILITY_IN_CATALOG));
return $collection;
A: Stock status (is_in_stock) is part of every product collection.
foreach ($collections as $product) {
echo $product->getStockItem()->getIsInStock();
}
Returns 1 if in stock, else null.
If you need other stock information like min_qty or backorders you can add this to your collection:
$collection->setFlag('require_stock_items', true);
More details: https://magento.stackexchange.com/questions/106455/get-product-stock-quantity-in-magento/209510#209510
A: $collection = Mage::getModel('catalog/category')
->load($categoryId)
->getProductCollection()
->addAttributeToSelect('*')
->joinField('qty','cataloginventory/stock_item','qty','product_id=entity_id','{{table}}.stock_id=1','left')
->addAttributeToFilter('visibility', array(
Mage_Catalog_Model_Product_Visibility::VISIBILITY_BOTH,
Mage_Catalog_Model_Product_Visibility::VISIBILITY_IN_CATALOG));
return $collection;
Here you can do it like this.
You have to add a join field for the stock items: ->joinField('qty','cataloginventory/stock_item','qty','product_id=entity_id','{{table}}.is_in_stock=1','left')
| stackoverflow | {
"language": "en",
"length": 117,
"provenance": "stackexchange_0000F.jsonl.gz:877678",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581682"
} |
9e9a33ddd96e0512b0fd5dd9b2a42d723a4c3f55 | Stackoverflow Stackexchange
Q: How to draw a straight line with the given length and angle? I have to draw a straight line starting at (0, 0) with a given length and angle (measured from the top of the view). I am currently able to create a line by giving starting and ending points, but instead of an ending point I have to use an angle and a length. Any help?
Here is the code:
let path = UIBezierPath()
path.move(to: CGPoint(x: 0, y: 0))
path.addLine(to: CGPoint(x: 0+10, y: 0+10))
let shapeLayer = CAShapeLayer()
shapeLayer.path = path.cgPath
shapeLayer.strokeColor = UIColor.blue.cgColor
shapeLayer.lineWidth = 3.0
A: There are many ways to do this. One way is to start with a unit-length line along the Y axis. Rotate the line to the desired angle and scale it to the desired length. Example:
let angleInRadians: CGFloat = ...
let length: CGFloat = ...
let path = UIBezierPath()
path.move(to: .zero)
path.addLine(to: CGPoint(x: 0, y: 1))
path.apply(.init(rotationAngle: angleInRadians))
path.apply(.init(scaleX: length, y: length))
Another way is to use trigonometric functions directly to compute the non-origin endpoint of the line:
let angleInRadians: CGFloat = ...
let length: CGFloat = ...
let path = UIBezierPath()
path.move(to: .zero)
path.addLine(to: CGPoint(x: -sin(angleInRadians) * length, y: cos(angleInRadians) * length))
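(Where the trig version comes from: rotating the unit vector (0, 1) by an angle θ gives (0·cos θ − 1·sin θ, 0·sin θ + 1·cos θ) = (−sin θ, cos θ), and scaling by the length gives the endpoint used above. This assumes the same angle convention as the transform-based version; since UIKit's y axis points down, flip the signs if you measure the angle the other way around.)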
| Q: How to draw a straight line with the given length and angle? I have to draw a straight line starting at (0, 0) with a given length and angle (measured from the top of the view). I am currently able to create a line by giving starting and ending points, but instead of an ending point I have to use an angle and a length. Any help?
Here is the code:
let path = UIBezierPath()
path.move(to: CGPoint(x: 0, y: 0))
path.addLine(to: CGPoint(x: 0+10, y: 0+10))
let shapeLayer = CAShapeLayer()
shapeLayer.path = path.cgPath
shapeLayer.strokeColor = UIColor.blue.cgColor
shapeLayer.lineWidth = 3.0
A: There are many ways to do this. One way is to start with a unit-length line along the Y axis. Rotate the line to the desired angle and scale it to the desired length. Example:
let angleInRadians: CGFloat = ...
let length: CGFloat = ...
let path = UIBezierPath()
path.move(to: .zero)
path.addLine(to: CGPoint(x: 0, y: 1))
path.apply(.init(rotationAngle: angleInRadians))
path.apply(.init(scaleX: length, y: length))
Another way is to use trigonometric functions directly to compute the non-origin endpoint of the line:
let angleInRadians: CGFloat = ...
let length: CGFloat = ...
let path = UIBezierPath()
path.move(to: .zero)
path.addLine(to: CGPoint(x: -sin(angleInRadians) * length, y: cos(angleInRadians) * length))
| stackoverflow | {
"language": "en",
"length": 192,
"provenance": "stackexchange_0000F.jsonl.gz:877692",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581730"
} |
ddcca8e4369b7f89adca9bda317681e4315cc44e | Stackoverflow Stackexchange
Q: Graphics library - React Native I am creating an app using React Native. I want to render some cool graphics (e.g. a screen displaying a man running, etc.) in my app screens. What should I learn to be able to do so?
A: Perhaps this might be what you're looking for
React-Canvas
The library lets users draw things on a canvas, as well as do multi-layer image transitions and more, using components.
| Q: Graphics library - React Native I am creating an app using React Native. I want to render some cool graphics (e.g. a screen displaying a man running, etc.) in my app screens. What should I learn to be able to do so?
A: Perhaps this might be what you're looking for
React-Canvas
The library lets users draw things on a canvas, as well as do multi-layer image transitions and more, using components.
A: You may be interested in Expo's GLView. It provides you an OpenGL ES render target.
A: You could look at awesome-react on GitHub. There are packages listed there that let you export things created using Adobe After Effects.
| stackoverflow | {
"language": "en",
"length": 113,
"provenance": "stackexchange_0000F.jsonl.gz:877694",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581735"
} |
487980577ddd23089f06b0f3716af25f695ceaff | Stackoverflow Stackexchange
Q: change visibility of method on the basis of sdk android Suppose I developed an Android library XYZ which has the methods
animateWithTransition(), which contains code related to the Transition API (i.e. requires minSdk = 21), and
animateSimply(), which uses a simple animation.
When a client uses the XYZ library, animateWithTransition() should show up as a suggestion (Ctrl+Space) only if their minSdk is 21 or higher;
if their minSdk is below 21, they should see only animateSimply().
How do I go about this?
A: You should try structuring your code in the following manner -
public void performAnimation() {
if(Build.VERSION.SDK_INT < 21 )
{
// write code for animateSimply function here
}
else
{
// write code for animateWithTransition function here
}
}
That way, you'll have a single function (which means less, cleaner code) and easier testing. Plus, your client has to call only one function, which makes your library easier to use.
| Q: change visibility of method on the basis of sdk android Suppose I developed an Android library XYZ which has the methods
animateWithTransition(), which contains code related to the Transition API (i.e. requires minSdk = 21), and
animateSimply(), which uses a simple animation.
When a client uses the XYZ library, animateWithTransition() should show up as a suggestion (Ctrl+Space) only if their minSdk is 21 or higher;
if their minSdk is below 21, they should see only animateSimply().
How do I go about this?
A: You should try structuring your code in the following manner -
public void performAnimation() {
if(Build.VERSION.SDK_INT < 21 )
{
// write code for animateSimply function here
}
else
{
// write code for animateWithTransition function here
}
}
That way, you'll have a single function (which means less, cleaner code) and easier testing. Plus, your client has to call only one function, which makes your library easier to use.
| stackoverflow | {
"language": "en",
"length": 145,
"provenance": "stackexchange_0000F.jsonl.gz:877697",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581740"
} |