id (string, lengths 40–40) | text (string, lengths 29–2.03k) | original_text (string, lengths 3–154k) | subdomain (string, 20 classes) | metadata (dict)
---|---|---|---|---|
caa57cfb40e48a17bcb20f5ad314380c2d7673d9 | Stackoverflow Stackexchange
Q: Which query is faster: TOP X or LIMIT X when using ORDER BY in Amazon Redshift? Three options, on a table of events inserted with a timestamp.
Which query is faster/better?
*
*Select a,b,c,d,e.. from tab1 order by timestamp desc limit 100
*Select top 100 a,b,c,d,e.. from tab1 order by timestamp desc
*Select top 100 a,b,c,d,e.. from tab1 order by timestamp desc limit 100
A: When you ask a question like that, the EXPLAIN syntax is helpful. Just add this keyword at the beginning of your query and you will see the query plan. In cases 1 and 2 the plans will be absolutely identical. These are variations of SQL syntax, but the internal SQL interpreter should produce the same query plan, according to which the requested operations will be performed physically.
More about EXPLAIN command here: EXPLAIN in Redshift
| Q: Which query is faster: TOP X or LIMIT X when using ORDER BY in Amazon Redshift? Three options, on a table of events inserted with a timestamp.
Which query is faster/better?
*
*Select a,b,c,d,e.. from tab1 order by timestamp desc limit 100
*Select top 100 a,b,c,d,e.. from tab1 order by timestamp desc
*Select top 100 a,b,c,d,e.. from tab1 order by timestamp desc limit 100
A: When you ask a question like that, the EXPLAIN syntax is helpful. Just add this keyword at the beginning of your query and you will see the query plan. In cases 1 and 2 the plans will be absolutely identical. These are variations of SQL syntax, but the internal SQL interpreter should produce the same query plan, according to which the requested operations will be performed physically.
More about EXPLAIN command here: EXPLAIN in Redshift
A: You can get the result by running these queries on a sample dataset. Here are my observations:
*
*Type 1: 5.54s, 2.42s, 1.77s, 1.76s, 1.76s, 1.75s
*Type 2: 5s, 1.77s, 1s, 1.75s, 2s, 1.75s
*Type 3: This is an invalid SQL statement, as it uses two row-limiting clauses (TOP and LIMIT)
As you can observe, the results are the same for both the queries as both undergo internal optimization by the query engine.
A: Apparently both TOP and LIMIT do a similar job, so you shouldn't be worrying about which one to use.
More important is the design of your underlying table, especially if you are using WHERE and JOIN clauses. In that case, you should choose your SORTKEY and DISTKEY carefully, as they will have much more impact on the performance of Amazon Redshift than a simple syntactic difference like TOP/LIMIT.
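To see this for yourself, a sketch of comparing the two plans (assuming the tab1 table from the question; column names are placeholders):

```sql
-- If cases 1 and 2 are truly equivalent, these should print identical plans.
EXPLAIN
SELECT a, b, c FROM tab1 ORDER BY timestamp DESC LIMIT 100;

EXPLAIN
SELECT TOP 100 a, b, c FROM tab1 ORDER BY timestamp DESC;
```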
| stackoverflow | {
"language": "en",
"length": 283,
"provenance": "stackexchange_0000F.jsonl.gz:877727",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581862"
} |
f4656909432835de93fc2d75559c72d79290f1e2 | Stackoverflow Stackexchange
Q: Realm String greaterThan Is there any way to find all (or just the next) RealmObjects with Strings lexicographically greater than the target?
Something like
MyEntry next = realm.where(MyEntry.class)
.greaterThan("name", current)
.findAllSorted("name")
.first();
which did not work, because greaterThan is not implemented for Strings.
A: As a non-db-workaround, you can use
List<MyEntry> l = realm.where(MyEntry.class)
.findAllSorted("name");
int pos = l.indexOf(entryWithName);
MyEntry next = l.get((pos+1)%l.size());
This does the searching outside of the db. Possibly not as well-performing, and not as readable, but it should work.
| Q: Realm String greaterThan Is there any way to find all (or just the next) RealmObjects with Strings lexicographically greater than the target?
Something like
MyEntry next = realm.where(MyEntry.class)
.greaterThan("name", current)
.findAllSorted("name")
.first();
which did not work, because greaterThan is not implemented for Strings.
A: As a non-db-workaround, you can use
List<MyEntry> l = realm.where(MyEntry.class)
.findAllSorted("name");
int pos = l.indexOf(entryWithName);
MyEntry next = l.get((pos+1)%l.size());
This does the searching outside of the db. Possibly not as well-performing, and not as readable, but it should work.
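The wrap-around logic of this workaround can be illustrated outside of Realm; a minimal Python sketch (Python only for brevity — the Java version above is the equivalent):

```python
def next_greater(names, current):
    # Sort lexicographically, mirroring findAllSorted("name").
    ordered = sorted(names)
    pos = ordered.index(current)
    # (pos + 1) % len wraps past the last entry back to the first,
    # like l.get((pos + 1) % l.size()) in the Java workaround.
    return ordered[(pos + 1) % len(ordered)]

print(next_greater(["carol", "alice", "bob"], "bob"))    # carol
print(next_greater(["carol", "alice", "bob"], "carol"))  # alice (wrapped)
```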
| stackoverflow | {
"language": "en",
"length": 84,
"provenance": "stackexchange_0000F.jsonl.gz:877733",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581878"
} |
e13299ec8f5a9a3e30b592182e116a6e634ef6e1 | Stackoverflow Stackexchange
Q: ElasticSearch: How to use the filter_path parameter in a POST body So I can successfully do a request like:
localhost:9200/filebeat-*/_count?filter_path=-_shards
{"query": {
"match_phrase" : {
"message" : "o hohoho"
}
}}
How can I move filter_path=-_shards into the request body to make it work?
A: According to the documentation, this is still not possible in Elasticsearch 6.2:
All REST APIs accept a filter_path parameter that can be used to
reduce the response returned by Elasticsearch
and it's impossible to include it in the request body; that's just not supported (to be honest, I'm not sure if it will ever be supported).
However, for some scenarios, you could limit the response returned by Elasticsearch by using source filtering (unfortunately, it applies only to the returned fields of documents)
| Q: ElasticSearch: How to use the filter_path parameter in a POST body So I can successfully do a request like:
localhost:9200/filebeat-*/_count?filter_path=-_shards
{"query": {
"match_phrase" : {
"message" : "o hohoho"
}
}}
How can I move filter_path=-_shards into the request body to make it work?
A: According to the documentation, this is still not possible in Elasticsearch 6.2:
All REST APIs accept a filter_path parameter that can be used to
reduce the response returned by Elasticsearch
and it's impossible to include it in the request body; that's just not supported (to be honest, I'm not sure if it will ever be supported).
However, for some scenarios, you could limit the response returned by Elasticsearch by using source filtering (unfortunately, it applies only to the returned fields of documents)
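Since the parameter has to stay in the query string, one practical approach is to build it into the URL programmatically; a minimal Python sketch (the host, index, and helper name are illustrative, not part of any Elasticsearch client API):

```python
from urllib.parse import urlencode

def count_url(base, index, filter_path=None):
    """Build a _count URL, attaching filter_path as a query-string parameter.

    The request body (the query JSON) is still sent via POST as before;
    only filter_path lives in the URL.
    """
    url = f"{base}/{index}/_count"
    if filter_path:
        url += "?" + urlencode({"filter_path": filter_path})
    return url

print(count_url("http://localhost:9200", "filebeat-*", "-_shards"))
# http://localhost:9200/filebeat-*/_count?filter_path=-_shards
```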
| stackoverflow | {
"language": "en",
"length": 122,
"provenance": "stackexchange_0000F.jsonl.gz:877740",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581899"
} |
c312b81dfcfb3c14e362e13e2b575d82baac61b0 | Stackoverflow Stackexchange
Q: Saving RecyclerView list State I have a RecyclerView containing a list of objects of a class. Currently, when I close my app, the list in the RecyclerView gets lost, and after restarting the app the RecyclerView shows nothing (no list).
What and how shall I use to retain the list even after my app is closed and destroyed?
public class Info {
public String pName;
public String pContact;
public Character pGender;
public int pID;
public String tDateTime; // today's date and time
}
I am storing objects of this class in an ArrayList to populate my RecyclerView adapter.
A: Save state:
protected void onSaveInstanceState(Bundle state) {
super.onSaveInstanceState(state);
// Save list state
mListState = mLayoutManager.onSaveInstanceState();
state.putParcelable(LIST_STATE_KEY, mListState);
}
Restore state :
protected void onRestoreInstanceState(Bundle state) {
super.onRestoreInstanceState(state);
// Retrieve list state and list/item positions
if(state != null)
mListState = state.getParcelable(LIST_STATE_KEY);
}
Then update the LayoutManager :
@Override
protected void onResume() {
super.onResume();
if (mListState != null) {
mLayoutManager.onRestoreInstanceState(mListState);
}
}
| Q: Saving RecyclerView list State I have a RecyclerView containing a list of objects of a class. Currently, when I close my app, the list in the RecyclerView gets lost, and after restarting the app the RecyclerView shows nothing (no list).
What and how shall I use to retain the list even after my app is closed and destroyed?
public class Info {
public String pName;
public String pContact;
public Character pGender;
public int pID;
public String tDateTime; // today's date and time
}
I am storing objects of this class in an ArrayList to populate my RecyclerView adapter.
A: Save state:
protected void onSaveInstanceState(Bundle state) {
super.onSaveInstanceState(state);
// Save list state
mListState = mLayoutManager.onSaveInstanceState();
state.putParcelable(LIST_STATE_KEY, mListState);
}
Restore state :
protected void onRestoreInstanceState(Bundle state) {
super.onRestoreInstanceState(state);
// Retrieve list state and list/item positions
if(state != null)
mListState = state.getParcelable(LIST_STATE_KEY);
}
Then update the LayoutManager :
@Override
protected void onResume() {
super.onResume();
if (mListState != null) {
mLayoutManager.onRestoreInstanceState(mListState);
}
}
A: Override onSaveInstanceState in the activity and save the state of your model, not the state of your layout manager. You surely have the data model somewhere if the view shows any data at all.
As a bare minimum, you need to remember just the number of items currently in the model. This works if the model is capable of fetching the content of the required items from somewhere. If it is not, or that takes too long, the state also needs to include the items being displayed. Something like
@Override
public void onSaveInstanceState(Bundle outState) {
super.onSaveInstanceState(outState);
outState.putSerializable("d.list.data", adapter.getState());
}
And, where state must be restored:
if (savedInstanceState != null) {
adapter.setState(savedInstanceState.getSerializable("d.list.data"));
}
Here is the code of the class that saves and applies state for the model used with the RecyclerView.
| stackoverflow | {
"language": "en",
"length": 286,
"provenance": "stackexchange_0000F.jsonl.gz:877746",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581911"
} |
e4f431aa7aefe177c25ecfe11f6ee74ce88ecd9a | Stackoverflow Stackexchange
Q: How can we generate all submatrices of a given 2D matrix in Java or C++? I am using this loop structure, but it fails to generate all the submatrices that are possible for any given 2D matrix with n rows and m columns.
for(i=0;i<n;i++)
{
for(j=0;j<m;j++)
{
System.out.println("sub-MATRIX:");
for(k=i;k<n;k++)
{
for(p=j;p<m;p++)
{
System.out.print(arr[k][p]+" ");
}
System.out.println();
}
}
}
Ex: Given matrix 3X3 : [[1 2 3],[4 5 6],[7 8 9]]
Then its submatrix will be:
for size 1:
[1],[2],[3],[4],[5],[6],[7],[8],[9]
for size 4:
[[1,2],[4,5]],[[2,3],[5,6]],[[4,5],[7,8]] and [[5,6],[8,9]]
and so on
A: You are missing a couple more loops to cover all the cases. PrintMatrix() should have 2 nested loops for printing the contents.
for (i = 1; i < n; ++i)
{
for (j = 1; j < m; ++j)
{
// we are at each sub matrix of size(i,j)
for (k = 0; k <= (n - i); ++k)
{
for (p = 0; p <= (m - j); ++p)
{
// we are at submatrix of size(i,j) starting at (k,p)
// assuming PrintMatrix(Matrix&, int rows, int cols, int r0, int c0);
PrintMatrix(arr, i, j, k, p);
}
}
}
}
| Q: How can we generate all submatrices of a given 2D matrix in Java or C++? I am using this loop structure, but it fails to generate all the submatrices that are possible for any given 2D matrix with n rows and m columns.
for(i=0;i<n;i++)
{
for(j=0;j<m;j++)
{
System.out.println("sub-MATRIX:");
for(k=i;k<n;k++)
{
for(p=j;p<m;p++)
{
System.out.print(arr[k][p]+" ");
}
System.out.println();
}
}
}
Ex: Given matrix 3X3 : [[1 2 3],[4 5 6],[7 8 9]]
Then its submatrix will be:
for size 1:
[1],[2],[3],[4],[5],[6],[7],[8],[9]
for size 4:
[[1,2],[4,5]],[[2,3],[5,6]],[[4,5],[7,8]] and [[5,6],[8,9]]
and so on
A: You are missing a couple more loops to cover all the cases. PrintMatrix() should have 2 nested loops for printing the contents.
for (i = 1; i < n; ++i)
{
for (j = 1; j < m; ++j)
{
// we are at each sub matrix of size(i,j)
for (k = 0; k <= (n - i); ++k)
{
for (p = 0; p <= (m - j); ++p)
{
// we are at submatrix of size(i,j) starting at (k,p)
// assuming PrintMatrix(Matrix&, int rows, int cols, int r0, int c0);
PrintMatrix(arr, i, j, k, p);
}
}
}
}
A: Suppose we have a matrix with dimensions M x N, and the submatrix we are looking for has dimensions K x L. If there is a more optimized solution, please share.
for (int i = 0; i < m-k+1; i++) {
for (int j = 0; j < n-l+1; j++) {
for (int p = 0; p < k; p++) {
for(int q = 0; q < l; q++) {
System.out.print(arr[i+p][j+q] + " ");
}
System.out.println();
}
System.out.println("*****************");
}
}
A: You should not use loops; you should use recursion.
Think about it this way: for each row or column, you either take it or throw it away. So you can select rows first and then columns, and based on the rows and columns selected, you construct the submatrix.
Some code,
bool rowTaken[N], columnTaken[M];
void constructSubMatrixRow(int i)
{
    if (i >= N) { constructSubMatrixCol(0); return; }  // all rows decided
    rowTaken[i] = true;
    constructSubMatrixRow(i+1);
    rowTaken[i] = false;
    constructSubMatrixRow(i+1);
}
void constructSubMatrixCol(int i)
{
    if (i >= M) { printSubMatrix(); return; }  // all columns decided
    columnTaken[i] = true;
    constructSubMatrixCol(i+1);
    columnTaken[i] = false;
    constructSubMatrixCol(i+1);
}
void printSubMatrix()
{
    for (unsigned i = 0; i < N; i++)
        if (rowTaken[i]) {
            for (unsigned j = 0; j < M; j++)
                if (columnTaken[j])
                    std::cout << matrix[i][j] << ' ';
            std::cout << '\n';
        }
}
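As a cross-check on the loop bounds in the first answer, a small Python sketch that enumerates every contiguous submatrix of an n x m matrix; for the 3x3 example this produces 36 submatrices (6 choices of row range times 6 choices of column range):

```python
def all_submatrices(matrix):
    """Enumerate every contiguous submatrix of a 2D list."""
    n, m = len(matrix), len(matrix[0])
    subs = []
    for i in range(1, n + 1):               # submatrix height
        for j in range(1, m + 1):           # submatrix width
            for k in range(n - i + 1):      # top row of the window
                for p in range(m - j + 1):  # left column of the window
                    subs.append([row[p:p + j] for row in matrix[k:k + i]])
    return subs

subs = all_submatrices([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(len(subs))  # 36
```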
| stackoverflow | {
"language": "en",
"length": 390,
"provenance": "stackexchange_0000F.jsonl.gz:877757",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581943"
} |
8b660aeb20f5170a45c53314086d5976d8b1c722 | Stackoverflow Stackexchange
Q: Concatenating two vectors in R I want to concatenate two vectors one after the other in R. I have written the following code to do it:
> a = head(tracks_listened_train)
> b = head(tracks_listened_test)
> a
[1] cc1a46ee0446538ecf6b65db01c30cd8 19acf9a5cbed34743ce0ee42ef3cae3e
[3] 9e7fdbf2045c9f814f6c0bed5da9bed7 3441b1031267fbb6009221bf47f9c5e8
[5] 206c8b79bd02beeea200879afc414879 1a7a95e3845a6815060628e847d14362
18585 Levels: 0001a423baf29add84af6ec58aeb5b90 ...
> b
[1] 7251a7694b79aa9a39f9a1a5f5c8a253 2f362377ef0e7bca112233fdda22a79c
[3] c1196625b1b733b62c43935334e1d190 58e41e462af4185b08231a41453c3faf
[5] 1cc2517fa9c037e02a14ce0950a28f67
10186 Levels: 0001a423baf29add84af6ec58aeb5b90 ...
> res = c(a,b)
> res
[1] 14898 1898 11556 3859 2408 1950 4473 1865 7674 3488 1130
However, I get an unexpected result in the resultant vector. What could the problem be?
A: We need to convert the factor class to character class
c(as.character(a), as.character(b))
The reason we get numbers instead of characters is the storage mode of a factor, i.e. integer. So when we do the concatenation, it coerces to integer mode.
| Q: Concatenating two vectors in R I want to concatenate two vectors one after the other in R. I have written the following code to do it:
> a = head(tracks_listened_train)
> b = head(tracks_listened_test)
> a
[1] cc1a46ee0446538ecf6b65db01c30cd8 19acf9a5cbed34743ce0ee42ef3cae3e
[3] 9e7fdbf2045c9f814f6c0bed5da9bed7 3441b1031267fbb6009221bf47f9c5e8
[5] 206c8b79bd02beeea200879afc414879 1a7a95e3845a6815060628e847d14362
18585 Levels: 0001a423baf29add84af6ec58aeb5b90 ...
> b
[1] 7251a7694b79aa9a39f9a1a5f5c8a253 2f362377ef0e7bca112233fdda22a79c
[3] c1196625b1b733b62c43935334e1d190 58e41e462af4185b08231a41453c3faf
[5] 1cc2517fa9c037e02a14ce0950a28f67
10186 Levels: 0001a423baf29add84af6ec58aeb5b90 ...
> res = c(a,b)
> res
[1] 14898 1898 11556 3859 2408 1950 4473 1865 7674 3488 1130
However, I get an unexpected result in the resultant vector. What could the problem be?
A: We need to convert the factor class to character class
c(as.character(a), as.character(b))
The reason we get numbers instead of characters is the storage mode of a factor, i.e. integer. So when we do the concatenation, it coerces to integer mode.
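A minimal sketch of the fix with toy data (note: since R 4.1.0, c() on factors combines them and keeps the labels, so the integer result above is what older versions of R produce):

```r
# Two factors whose labels we want to keep when concatenating
a <- factor(c("x1", "x2"))
b <- factor(c("y1", "x1"))

# Coerce to character before concatenating, so the labels survive:
res <- c(as.character(a), as.character(b))
res  # "x1" "x2" "y1" "x1"
```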
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:877759",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44581945"
} |
e6ed36f333f9ba1ee48771f763c567d6fb3cf7d1 | Stackoverflow Stackexchange
Q: cannot convert Java.IO.FileOutputStream to System.IO.Stream? I'm trying to save a picture, and on the line bitmapImg.Compress(Bitmap.CompressFormat.Png, 90, fos); an error is displayed: cannot convert Java.IO.FileOutputStream to System.IO.Stream. How do I solve it?
File file = new File(Android.OS.Environment.DirectoryPictures + File.Separator + "newProdict.png");
FileOutputStream fos = null;
try
{
fos = new FileOutputStream(file);
if (fos != null)
{
bitmapImg.Compress(Bitmap.CompressFormat.Png, 90, fos);
fos.Close();
}
}
catch (Exception ex) { }
A: The Compress function expects something derived from the .NET System.IO.Stream, while you are passing a class from the Java namespace; use a FileStream instead:
try
{
string path = Path.Combine(Environment.GetExternalStoragePublicDirectory(Environment.DirectoryPictures).AbsolutePath, "newProdict.png");
var fs = new FileStream(path, FileMode.Create);
if (fs != null)
{
bitmapImg.Compress(Bitmap.CompressFormat.Png, 90, fs);
fs.Close();
}
}
catch (Exception ex) { }
| Q: cannot convert Java.IO.FileOutputStream to System.IO.Stream? I'm trying to save a picture, and on the line bitmapImg.Compress(Bitmap.CompressFormat.Png, 90, fos); an error is displayed: cannot convert Java.IO.FileOutputStream to System.IO.Stream. How do I solve it?
File file = new File(Android.OS.Environment.DirectoryPictures + File.Separator + "newProdict.png");
FileOutputStream fos = null;
try
{
fos = new FileOutputStream(file);
if (fos != null)
{
bitmapImg.Compress(Bitmap.CompressFormat.Png, 90, fos);
fos.Close();
}
}
catch (Exception ex) { }
A: The Compress function expects something derived from the .NET System.IO.Stream, while you are passing a class from the Java namespace; use a FileStream instead:
try
{
string path = Path.Combine(Environment.GetExternalStoragePublicDirectory(Environment.DirectoryPictures).AbsolutePath, "newProdict.png");
var fs = new FileStream(path, FileMode.Create);
if (fs != null)
{
bitmapImg.Compress(Bitmap.CompressFormat.Png, 90, fs);
fs.Close();
}
}
catch (Exception ex) { }
| stackoverflow | {
"language": "en",
"length": 119,
"provenance": "stackexchange_0000F.jsonl.gz:877791",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582049"
} |
49b85a989dc68810051302a42c5adf2ecc1a237c | Stackoverflow Stackexchange
Q: How to judge the direction the UIScrollView is going to scroll? Note: My question is about the direction the scrollView will scroll, not is scrolling. That is to say, when the user scrolls the scrollView, can we get the direction it is going to scroll before it begins scrolling?
Any ideas? Thanks in advance.
A: By using the scroll view delegate, you can identify it.
Objective C :
- (void)scrollViewWillBeginDecelerating:(UIScrollView *)scrollView {
CGPoint point = [scrollView.panGestureRecognizer translationInView:scrollView.superview];
if (point.y > 0) {
// Dragging down
} else {
// Dragging up
}
}
Swift :
func scrollViewWillBeginDecelerating(_ scrollView: UIScrollView) {
let actualPosition = scrollView.panGestureRecognizer.translation(in: scrollView.superview)
if (actualPosition.y > 0){
// Dragging down
}else{
// Dragging up
}
}
| Q: How to judge the direction the UIScrollView is going to scroll? Note: My question is about the direction the scrollView will scroll, not is scrolling. That is to say, when the user scrolls the scrollView, can we get the direction it is going to scroll before it begins scrolling?
Any ideas? Thanks in advance.
A: By using the scroll view delegate, you can identify it.
Objective C :
- (void)scrollViewWillBeginDecelerating:(UIScrollView *)scrollView {
CGPoint point = [scrollView.panGestureRecognizer translationInView:scrollView.superview];
if (point.y > 0) {
// Dragging down
} else {
// Dragging up
}
}
Swift :
func scrollViewWillBeginDecelerating(_ scrollView: UIScrollView) {
let actualPosition = scrollView.panGestureRecognizer.translation(in: scrollView.superview)
if (actualPosition.y > 0){
// Dragging down
}else{
// Dragging up
}
}
| stackoverflow | {
"language": "en",
"length": 113,
"provenance": "stackexchange_0000F.jsonl.gz:877811",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582092"
} |
7ed5dfb35206e6b63f2f92a580ce78e0f6478415 | Stackoverflow Stackexchange
Q: How can I play haptic feedback while an AVCaptureSession is running? When my AVCaptureSession is running and ready to record a video, I cannot play any audio files, vibrate, or play any haptic feedback. I only found a solution for Objective-C but not Swift.
Objective C Solution : Using vibrate and AVCaptureSession at the same time
A: You can use setAllowHapticsAndSystemSoundsDuringRecording from iOS13 and later.
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)
| Q: How can I play haptic feedback while an AVCaptureSession is running? When my AVCaptureSession is running and ready to record a video, I cannot play any audio files, vibrate, or play any haptic feedback. I only found a solution for Objective-C but not Swift.
Objective C Solution : Using vibrate and AVCaptureSession at the same time
A: You can use setAllowHapticsAndSystemSoundsDuringRecording from iOS13 and later.
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)
A: As long as you are using the microphone in the AVCaptureSession, the haptic feedback will not play.
Apple tries to prevent the vibration from disrupting the microphone capture. This is noted in https://developer.apple.com/documentation/audiotoolbox/1405202-audioservicesplayalertsound
If you are not recording from the mic yet (as you said you are just in a "ready state"), then a solution is to delay adding the mic until you want to start recording.
Of course, while recording, you still may not play any haptic feedback.
| stackoverflow | {
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:877825",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582128"
} |
52fe8aaa53f9d452b6a9d39cd21650f1627ef84d | Stackoverflow Stackexchange
Q: PropTypes in functional stateless component Without using class, how do I use PropTypes in functional stateless component of react?
export const Header = (props) => (
<div>hi</div>
)
A: It isn't different from the stateful version. You can add it like this:
import PropTypes from 'prop-types';
Header.propTypes = {
title: PropTypes.string
}
Here is a link to prop-types npm
| Q: PropTypes in functional stateless component Without using class, how do I use PropTypes in functional stateless component of react?
export const Header = (props) => (
<div>hi</div>
)
A: It isn't different from the stateful version. You can add it like this:
import PropTypes from 'prop-types';
Header.propTypes = {
title: PropTypes.string
}
Here is a link to prop-types npm
A: It's done the same way you do it with class-based components:
import PropTypes from "prop-types";
const function_name = (props) => { ... }
function_name.propTypes = {
prop_name : PropTypes.number,
...
}
Hope this helps!
A: The official docs show how to do this with ES6 component classes, but the same applies for stateless functional components.
Firstly, npm install / yarn add the prop-types package if you haven't already.
Then, add your propTypes (and defaultProps too if required) after the stateless functional component has been defined, before you export it.
import React from "react";
import PropTypes from "prop-types";
const Header = ({ name }) => <div>hi {name}</div>;
Header.propTypes = {
name: PropTypes.string
};
// Same approach for defaultProps too
Header.defaultProps = {
name: "Alan"
};
export default Header;
A: Same way you do in class based components:
import PropTypes from 'prop-types';
const funcName = (props) => {
...
}
funcName.propTypes = {
propName: PropTypes.string.isRequired
}
Note: You might have to install prop-types via npm install prop-types or yarn add prop-types, depending on the React version you are using.
A: Since React 15, use propTypes to validate props and provide documentation, like this:
import React from 'react';
import PropTypes from 'prop-types';
export const Header = (props={}) => (
<div>{props}</div>
);
Header.propTypes = {
props: PropTypes.object
};
This code assumes a default value of props={} if no props are provided to the component.
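One reason the same pattern works for stateless components: a JavaScript function is itself an object, so Header.propTypes = {...} is ordinary property assignment that React can later read in development builds. A dependency-free sketch of just that mechanism (the propTypes value here is a placeholder string standing in for a real PropTypes validator):

```javascript
// A stateless component is just a function...
const Header = (props) => `hi ${props.name}`;

// ...so attaching propTypes is plain property assignment on the
// function object, which is exactly what React inspects.
Header.propTypes = { name: "placeholder for PropTypes.string" };

console.log(typeof Header);             // function
console.log(Header.propTypes.name);     // the attached metadata
console.log(Header({ name: "Alan" }));  // hi Alan
```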
| stackoverflow | {
"language": "en",
"length": 300,
"provenance": "stackexchange_0000F.jsonl.gz:877851",
"question_score": "121",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582209"
} |
699968602a8488f6bc187ae221de8309c859681c | Stackoverflow Stackexchange
Q: Docker Swarm discovery is still relevant? I'm learning about Docker Swarm and got confused about the swarm discovery option. I see that lots of tutorials on the internet use this option to create containers with docker-machine, but when I open the documentation in the Docker Swarm docs, it says:
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users should use integrated Swarm mode.
So, what are the use cases for the discovery options? All the tutorials use docker-machine to create a swarm; do I always need it, or can I just install Docker on the machines in my cluster, join them in a swarm, and use them normally?
I saw the names Docker Swarm and Docker Swarm Mode; is there any difference, or are these just different ways to refer to the same feature?
A: Q. Docker Swarm discovery is still relevant? | Q: Docker Swarm discovery is still relevant? I'm learning about Docker Swarm and got confused about the swarm discovery option. I see that lots of tutorials on the internet use this option to create containers with docker-machine, but when I open the documentation in the Docker Swarm docs, it says:
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users should use integrated Swarm mode.
So, what are the use cases for the discovery options? All the tutorials use docker-machine to create a swarm; do I always need it, or can I just install Docker on the machines in my cluster, join them in a swarm, and use them normally?
I saw the names Docker Swarm and Docker Swarm Mode; is there any difference, or are these just different ways to refer to the same feature?
A: Q. Docker Swarm discovery is still relevant?
A: No, if you use docker Swarm Mode and an overlay network (see below)
Q. Is there any difference between Docker Swarm and Docker Swarm Mode?
A: Yes. TL;DR: Docker Swarm is deprecated and should not be used anymore; Docker Swarm Mode (we should just say Swarm Mode) is the recommended way of clustering containers, providing reliability, load balancing, scaling, and rolling service upgrades.
Docker Swarm (official doc) :
*
*is the old fashioned way (<1.12) of clustering containers
*uses a dedicated container for building a Docker Swarm cluster
*needs a discovery service like Consul to reference containers in cluster
Swarm Mode (official doc):
*
*is the new and recommended way (>=1.12) of clustering containers on host nodes (called managers / workers)
*is built-in in Docker engine, you don't need an additional container
*has a built-in discovery service if you use an overlay network (DNS resolution is done within this network), you don't need an additional container
You can have a look to this SO thread on same topic.
Q. Do i always need docker-machine to create a swarm?
A: No, docker-machine is a helper to create virtual hosts in the cloud like amazon ec2, azure, digitalocean, google, openstack..., or your own network with virtual box.
To create a Swarm Mode, you need :
*
*a multiple hosts cluster with docker engine installed on each host (called node) (that is what docker-machine facilitates)
*run docker swarm init to switch to Swarm Mode on your first manager node
*run docker swarm join on worker nodes to add them in the cluster
There are some subtle adjustments to Swarm mode to increase high availability (recommended number of managers in the swarm, node placement in multiple availability zones in the cloud)
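The three steps above, as a command sketch (the address is a placeholder, and the worker join token is printed by the init command); this naturally needs real hosts, so it is shown for illustration only:

```shell
# On the first manager node:
docker swarm init --advertise-addr 192.168.99.100

# On each worker node, using the token printed by swarm init:
docker swarm join --token <worker-token> 192.168.99.100:2377

# Back on the manager, verify the cluster:
docker node ls
```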
Hope this helps!
| stackoverflow | {
"language": "en",
"length": 446,
"provenance": "stackexchange_0000F.jsonl.gz:877867",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582265"
} |
7852c9912cb48f0937c5e612f78696d30f5799aa | Stackoverflow Stackexchange
Q: Setup.py Exporting Under a Different Name I have a situation where I need to put all of my scripts inside of a subfolder
in the base directory
Current Structure
If I have the following document structure
superhero
setup.py
scripts.py
heros
superman.py
batman.py
modules
__init__.py
fly
fly.py
spy
spy.py
And I wanted to export the modules directory under the name superhero,
In the setup.py script, I tried doing:
setup(
...
packages=["superhero"],
package_dir={
'superhero': 'modules'
},
...)
and then running
pip install --editable .
In that directory. Instead of getting the module installed under the name
superhero, I get it installed under the name modules. So in python
> import superhero
ERROR:
> import modules
>
Question
How would I restructure my setup.py file so that I could run
import superhero to import all of my modules under a package called
superhero? I couldn't figure this out from the docs.
| Q: Setup.py Exporting Under a Different Name I have a situation where I need to put all of my scripts inside of a subfolder
in the base directory
Current Structure
If I have the following document structure
superhero
setup.py
scripts.py
heros
superman.py
batman.py
modules
__init__.py
fly
fly.py
spy
spy.py
And I wanted to export the modules directory under the name superhero,
In the setup.py script, I tried doing:
setup(
...
packages=["superhero"],
package_dir={
'superhero': 'modules'
},
...)
and then running
pip install --editable .
In that directory. Instead of getting the module installed under the name
superhero, I get it installed under the name modules. So in python
> import superhero
ERROR:
> import modules
>
Question
How would I restructure my setup.py file so that I could run
import superhero to import all of my modules under a package called
superhero? I couldn't figure this out from the docs.
| stackoverflow | {
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:877872",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582284"
} |
b456c9cf0c1db155318c9d4da7112a7f17d8a1d8 | Stackoverflow Stackexchange
Q: How do I make the Structured Indentation lines solid in Visual Studio? Instead of having dotted vertical lines underneath each {:
It used to to have solid lines:
I know you can turn these lines off and change their color. (Found these answers while searching.) But what happened to the line style?
A: You can disable the lines with the option Show guides for declaration level constructs under Text Editor > C# > Advanced > Block Structure Guides.
Since Visual Studio 2017 Update 2, the soft line has been replaced by the dotted one. As far as I can see there is no option to change the dotted line back to the softer one. You can suggest this feature using Connect.
| Q: How do I make the Structured Indentation lines solid in Visual Studio? Instead of having dotted vertical lines underneath each {:
It used to to have solid lines:
I know you can turn these lines off and change their color. (Found these answers while searching.) But what happened to the line style?
A: You can disable the lines with the option Show guides for declaration level constructs under Text Editor > C# > Advanced > Block Structure Guides.
Since Visual Studio 2017 Update 2, the soft line has been replaced by the dotted one. As far as I can see there is no option to change the dotted line back to the softer one. You can suggest this feature using Connect.
| stackoverflow | {
"language": "en",
"length": 122,
"provenance": "stackexchange_0000F.jsonl.gz:877900",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582366"
} |
ca9fb2cb37148e1c812011aa5acf2511bc325f26 | Stackoverflow Stackexchange
Q: Slow cheetah Add Transform does not appear I want to use Slow Cheetah to transform a .config file. Currently I am using Visual Studio 2017 and for this I have installed Slow Cheetah 2.5.48 from NuGet, but 'Add Transform' does not appear when I right-click on the config file.
A: As a work-around you could do this by hand.
Make a copy in the Solution Explorer of your Web.config and rename it to Web.Debug.config or whatever you like.
Unload the project and then Edit the project.
Find the ItemGroup element that contains your newly created config file and add the <DependentUpon>Web.config</DependentUpon> element there, like so:
After that, it will look like this in the Solution Explorer:
Now you can edit your newly added config file with the required transformations.
| Q: Slow cheetah Add Transform does not appear I want to use Slow Cheetah to transform a .config file. Currently I am using Visual Studio 2017 and for this I have installed Slow Cheetah 2.5.48 from NuGet, but 'Add Transform' does not appear when I right-click on the config file.
A: As a work-around you could do this by hand.
Make a copy in the Solution Explorer of your Web.config and rename it to Web.Debug.config or whatever you like.
Unload the project and then Edit the project.
Find the ItemGroup element that contains your newly created config file and add the <DependentUpon>Web.config</DependentUpon> element there, like so:
After that, it will look like this in the Solution Explorer:
Now you can edit your newly added config file with the required transformations.
A: I had the same issue. I installed the NuGet package 2.5.48 and then was able to download and install the vsix file from here and it worked:
https://github.com/sayedihashimi/slow-cheetah/releases
A: The issue is that adding the NuGet package alone does not suffice. You need to install SlowCheetah from the Visual Studio marketplace. Following are the steps:
*
*Install SlowCheetah from Tools > Extensions and Updates
*Restart VS, allowing for the VSIX installer to run
*Create new C# App (.NET Framework). (In my case, it started to work on existing app also.)
GitHub Reference
A: *
*Download it here
https://marketplace.visualstudio.com/items?itemName=WillBuikMSFT.SlowCheetah-XMLTransforms or search for "SlowCheetah" in Visualstudio marketplace,
*install it
*restart you VS and see.
Good luck
A: Correct as @Himanshu Singla and many others here are saying, in addition to installing nuget package, you still need to download the Extension from Tools->Extensions and Updates-> Search "Slow Cheetah", install and restart VS.
A: I had to do the following before it worked:
*
*Install the package from their site instead of from NuGet
*Restart VS
Then the Add Transform option started showing up in the drop-down menu.
| stackoverflow | {
"language": "en",
"length": 311,
"provenance": "stackexchange_0000F.jsonl.gz:877906",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582393"
} |
e059f111c04188171a9cb3e773b941259c1f845f | Stackoverflow Stackexchange
Q: How to pass variables in spark SQL, using python? I am writing spark code in python.
How do I pass a variable in a spark.sql query?
q25 = 500
Q1 = spark.sql("SELECT col1 from table where col2>500 limit $q25 , 1")
Currently, the above code does not work. How do I pass variables?
I have also tried,
Q1 = spark.sql("SELECT col1 from table where col2>500 limit q25='{}' , 1".format(q25))
A: Another option if you're doing this sort of thing often or want to make your code easier to re-use is to use a map of configuration variables and the format option:
configs = {"q25":10,
"TABLE_NAME":"my_table",
"SCHEMA":"my_schema"}
Q1 = spark.sql("""SELECT col1 from {SCHEMA}.{TABLE_NAME}
where col2>500
limit {q25}
""".format(**configs))
| Q: How to pass variables in spark SQL, using python? I am writing spark code in python.
How do I pass a variable in a spark.sql query?
q25 = 500
Q1 = spark.sql("SELECT col1 from table where col2>500 limit $q25 , 1")
Currently, the above code does not work. How do I pass variables?
I have also tried,
Q1 = spark.sql("SELECT col1 from table where col2>500 limit q25='{}' , 1".format(q25))
A: Another option if you're doing this sort of thing often or want to make your code easier to re-use is to use a map of configuration variables and the format option:
configs = {"q25":10,
"TABLE_NAME":"my_table",
"SCHEMA":"my_schema"}
Q1 = spark.sql("""SELECT col1 from {SCHEMA}.{TABLE_NAME}
where col2>500
limit {q25}
""".format(**configs))
A: Using f-Strings approach (PySpark):
table = 'my_schema.my_table'
df = spark.sql(f'select * from {table}')
A: You need to remove single quote and q25 in string formatting like this:
Q1 = spark.sql("SELECT col1 from table where col2>500 limit {}, 1".format(q25))
Update:
Based on your new queries:
spark.sql("SELECT col1 from table where col2>500 order by col1 desc limit {}, 1".format(q25))
Note that Spark SQL does not support OFFSET, so the query cannot work.
If you need add multiple variables you can try this way:
q25 = 500
var2 = 50
Q1 = spark.sql("SELECT col1 from table where col2>{0} limit {1}".format(var2,q25))
A: A really easy solution is to store the query as a string (using the usual python formatting), and then pass it to the spark.sql() function:
q25 = 500
query = "SELECT col1 from table where col2>500 limit {}".format(q25)
Q1 = spark.sql(query)
A: All you need to do is add s (String interpolator) to the string. This allows the usage of variable directly into the string.
val q25 = 10
Q1 = spark.sql(s"SELECT col1 from table where col2>500 limit $q25")
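The string-formatting answers above can be sanity-checked without a Spark cluster, since they only build query text; this sketch compares the .format() and f-string (Python 3.6+) forms:

```python
# Only the final spark.sql(query) call needs a SparkSession;
# building the query string itself is plain Python.
q25 = 500

query_format = "SELECT col1 FROM table WHERE col2 > 500 LIMIT {}".format(q25)
query_fstring = f"SELECT col1 FROM table WHERE col2 > 500 LIMIT {q25}"

print(query_format)  # SELECT col1 FROM table WHERE col2 > 500 LIMIT 500
```

Both forms produce identical SQL; note that plain string formatting does no escaping, so it is only safe for trusted values.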
| stackoverflow | {
"language": "en",
"length": 296,
"provenance": "stackexchange_0000F.jsonl.gz:877924",
"question_score": "25",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582450"
} |
cea1de0b5313944fd9da20a8414d480d91fa74e8 | Stackoverflow Stackexchange
Q: Ruby - slice all characters till underscore in a string I have a string like this solution_10 and I would like to remove the part solution_ from it, the number after the underscore will increase, it can be 100, 1000 and even larger. I can't seem to wrap my head around how to do this.
I have tried to use slice!(0, 9) but that gives me solution_, I then tried slice!(0, -2) but that gives me null,
I then tried using solution_10[1..9] this gives me ortable_1
So my question is how to get rid of all characters till underscore, all I want is the number after the underscore.
A: Use String#split method
'solution_10'.split('_').last #will return original string if no underscore present
#=> "10"
'solution_10'.split('_')[1] #will return nil if no underscore present
#=> "10"
| Q: Ruby - slice all characters till underscore in a string I have a string like this solution_10 and I would like to remove the part solution_ from it, the number after the underscore will increase, it can be 100, 1000 and even larger. I can't seem to wrap my head around how to do this.
I have tried to use slice!(0, 9) but that gives me solution_, I then tried slice!(0, -2) but that gives me null,
I then tried using solution_10[1..9] this gives me ortable_1
So my question is how to get rid of all characters till underscore, all I want is the number after the underscore.
A: Use String#split method
'solution_10'.split('_').last #will return original string if no underscore present
#=> "10"
'solution_10'.split('_')[1] #will return nil if no underscore present
#=> "10"
A: "solution_10"[/(?<=_).*/]
#⇒ "10"
or simply just get digits until the end of the line:
"solution_10"[/\d+\z/]
#⇒ "10"
A:
I can't seem to wrap my head around how to do this.
First of all, slice and its shortcut [] can be used in many ways. One way is by providing a start index and a length:
'hello'[2, 3] #=> "llo" # 3 characters, starting at index 2
# ^^^
You can use that variant if you know the length in advance. But since the number part in your string could be 10 or 100 or 1000, you don't.
Another way is to provide a range, denoting the start and end index:
'hello'[2..3] #=> "ll" # substring from index 2 to index 3
# ^^
In this variant, Ruby will determine the length for you. You can also provide negative indices to count from the end. -1 is the last character, -2 the second to last and so on.
So my question is how to get rid of all characters till underscore, all I want is the number after the underscore.
We have to get the index of the underscore:
s = "solution_10"
i = s.index('_') #=> 8
Now we can get the substring from that index to the last character via:
s[i..-1] #=> "_10"
Apparently, we're off by one, so let's add 1:
s[i+1..-1] #=> "10"
There you go.
Note that this approach will not necessarily return a number (or numeric string), it will simply return everything after the first underscore:
s = 'foo_bar'
i = s.index('_') #=> 3
s[i+1..-1] #=> "bar"
It will also fail miserably if the string does not contain an underscore, because i would be nil:
s = 'foo'
i = s.index('_') #=> nil
s[i+1..-1] #=> NoMethodError: undefined method `+' for nil:NilClass
For a more robust solution, you can pass a regular expression to slice / [] as already shown in the other answers. Here's a version that matches an underscored followed by a number at the end of the string. The number part is captured and returned:
"solution_10"[/_(\d+)\z/, 1] #=> "10"
# _ literal underscore
# ( ) capture group (the `1` argument refers to this)
# \d+ one or more digits
# \z end of string
A: Another way:
'solution_10'[/\d+/]
#=> "10"
A: Why don't just make use of regex
"solution_10".scan(/\d+/).last
#=> "10"
| stackoverflow | {
"language": "en",
"length": 525,
"provenance": "stackexchange_0000F.jsonl.gz:877928",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582476"
} |
d981b0d0dafa55bb92d71cbb1945f29bcb7b69d0 | Stackoverflow Stackexchange
Q: Drop all rows in python pandas dataframe except Beginner Pandas Question:
How do I drop all rows except where Ticker = NVID?
That is, return a dataframe like:
Sector Ticker Price
0 Future NVID 350
1 Future NVID NaN
Dataframe Code:
import numpy as np
import pandas as pd
raw_data = {'Sector': [ 'Gas', 'Future', 'Future', 'Gas', 'Beer', 'Future'],
'Ticker': ['EX', 'NVID', 'ATVI', 'EX', 'BUSCH', 'NVID'],
'Price': [100, 350, 250, 500, 50, np.NaN]}
df = pd.DataFrame(raw_data, columns = ['Sector', 'Ticker', 'Price'])
print(df)
So Far I'm playing around with have:
new_df =df[ ~(df[TICKER] == 'NVIDA'):, ] OR
dummy_df=df.loc[:, ~(df == 'NVIDA')]
A: You are really close.
Use boolean indexing or query:
print(df['Ticker'] == 'NVID')
0 False
1 True
2 False
3 False
4 False
5 True
Name: Ticker, dtype: bool
new_df = df[df['Ticker'] == 'NVID']
print (new_df)
Sector Ticker Price
1 Future NVID 350.0
5 Future NVID NaN
new_df = df.query("Ticker == 'NVID'")
print (new_df)
Sector Ticker Price
1 Future NVID 350.0
5 Future NVID NaN
| Q: Drop all rows in python pandas dataframe except Beginner Pandas Question:
How do I drop all rows except where Ticker = NVID?
That is, return a dataframe like:
Sector Ticker Price
0 Future NVID 350
1 Future NVID NaN
Dataframe Code:
import numpy as np
import pandas as pd
raw_data = {'Sector': [ 'Gas', 'Future', 'Future', 'Gas', 'Beer', 'Future'],
'Ticker': ['EX', 'NVID', 'ATVI', 'EX', 'BUSCH', 'NVID'],
'Price': [100, 350, 250, 500, 50, np.NaN]}
df = pd.DataFrame(raw_data, columns = ['Sector', 'Ticker', 'Price'])
print(df)
So Far I'm playing around with have:
new_df =df[ ~(df[TICKER] == 'NVIDA'):, ] OR
dummy_df=df.loc[:, ~(df == 'NVIDA')]
A: You are really close.
Use boolean indexing or query:
print(df['Ticker'] == 'NVID')
0 False
1 True
2 False
3 False
4 False
5 True
Name: Ticker, dtype: bool
new_df = df[df['Ticker'] == 'NVID']
print (new_df)
Sector Ticker Price
1 Future NVID 350.0
5 Future NVID NaN
new_df = df.query("Ticker == 'NVID'")
print (new_df)
Sector Ticker Price
1 Future NVID 350.0
5 Future NVID NaN
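For completeness, the same result phrased literally as a "drop" (a sketch using the question's own data; assumes pandas and numpy are importable):

```python
import numpy as np
import pandas as pd

raw_data = {'Sector': ['Gas', 'Future', 'Future', 'Gas', 'Beer', 'Future'],
            'Ticker': ['EX', 'NVID', 'ATVI', 'EX', 'BUSCH', 'NVID'],
            'Price': [100, 350, 250, 500, 50, np.nan]}
df = pd.DataFrame(raw_data, columns=['Sector', 'Ticker', 'Price'])

# Drop the index labels of every row whose Ticker is NOT 'NVID':
new_df = df.drop(df.index[df['Ticker'] != 'NVID'])
```

In practice the boolean-indexing form shown in the answer is the idiomatic choice; DataFrame.drop is handy when you already have the labels to remove.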
| stackoverflow | {
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:877934",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582503"
} |
3b919b9ac304d6a65d4fb228cf4211792f468408 | Stackoverflow Stackexchange
Q: Firebase-Admin, importing it to react application throws Module not found error I'm developing a simple React application which uses firebase-admin.
I have generated react application by using create react app.
Then I have installed firebase-admin by using this npm command:
npm install firebase-admin --save
In my index.js I have added this import:
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
import * as admin from 'firebase-admin'
ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();
When I launch with npm start command and open my page I get this error:
Module not found: Can't resolve 'dns' in 'D:\path\to\my\project\node_modules\firebase-admin\node_modules\isemail\lib'
Why this is happening? Did I miss something?
A: Admin SDKs cannot be used in client-side environments. That includes web browsers. Admin SDKs can and should only be used in privileged server environments owned or managed by the developers of a Firebase app. You should use the Firebase web SDK in your React app.
| Q: Firebase-Admin, importing it to react application throws Module not found error I'm developing a simple React application which uses firebase-admin.
I have generated react application by using create react app.
Then I have installed firebase-admin by using this npm command:
npm install firebase-admin --save
In my index.js I have added this import:
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
import * as admin from 'firebase-admin'
ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();
When I launch with npm start command and open my page I get this error:
Module not found: Can't resolve 'dns' in 'D:\path\to\my\project\node_modules\firebase-admin\node_modules\isemail\lib'
Why this is happening? Did I miss something?
A: Admin SDKs cannot be used in client-side environments. That includes web browsers. Admin SDKs can and should only be used in privileged server environments owned or managed by the developers of a Firebase app. You should use the Firebase web SDK in your React app.
| stackoverflow | {
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:877936",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582508"
} |
02d98687a44ae12763e51bc62a77e3959a4c79e0 | Stackoverflow Stackexchange
Q: Mapreduce implementation Input data is a json file and the structure of records is:
{id=x, h1=0.1, h2=0.3, h3=0.8, h4=0.7}.
The task is to implement a mapreduce execution to get "h" triples that contain a peak. In the previous example the output is x -> h2,h3,h4, because the h3 value is higher than its neighbors. My idea is to implement a map that creates records such as x->h1(0.1), x->h2(0.3), x->h3(0.8)... and then a reduce that extracts the peaks.
Is this the right way to proceed? The map step seems useless because the shuffle-and-sort step gives back more or less the initial structure. Does it introduce overhead? Or is that something tolerable once you decide to use the MR paradigm?
| Q: Mapreduce implementation Input data is a json file and the structure of records is:
{id=x, h1=0.1, h2=0.3, h3=0.8, h4=0.7}.
The task is to implement a mapreduce execution to get "h" triples that contain a peak. In the previous example the output is x -> h2,h3,h4, because the h3 value is higher than its neighbors. My idea is to implement a map that creates records such as x->h1(0.1), x->h2(0.3), x->h3(0.8)... and then a reduce that extracts the peaks.
Is this the right way to proceed? The map step seems useless because the shuffle-and-sort step gives back more or less the initial structure. Does it introduce overhead? Or is that something tolerable once you decide to use the MR paradigm?
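A pure-Python sketch of the proposed job (record shape and expected output are taken from the question; the grouped dict stands in for Hadoop's shuffle-and-sort phase):

```python
def map_record(record):
    """Emit (id, (position, value)) pairs, one per h-field."""
    for key, value in record.items():
        if key != "id":
            yield record["id"], (int(key[1:]), value)

def reduce_peaks(rid, pairs):
    """Keep each consecutive triple whose middle value is a strict peak."""
    pairs = sorted(pairs)  # order by h-position
    return [(rid, (f"h{i}", f"h{j}", f"h{k}"))
            for (i, a), (j, b), (k, c) in zip(pairs, pairs[1:], pairs[2:])
            if b > a and b > c]

record = {"id": "x", "h1": 0.1, "h2": 0.3, "h3": 0.8, "h4": 0.7}
grouped = {}  # stand-in for the shuffle-and-sort between map and reduce
for rid, pair in map_record(record):
    grouped.setdefault(rid, []).append(pair)

results = [t for rid, ps in grouped.items() for t in reduce_peaks(rid, ps)]
print(results)  # [('x', ('h2', 'h3', 'h4'))]
```

Since each record already carries all of its h-values, the peaks could equally be computed entirely map-side with no shuffle at all, which speaks directly to the overhead concern in the question.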
| stackoverflow | {
"language": "en",
"length": 116,
"provenance": "stackexchange_0000F.jsonl.gz:877945",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582542"
} |
893d025f0716ceba85caeb1b76464dc3f0bf8bd0 | Stackoverflow Stackexchange
Q: Are remote addresses pushed in Git? When I'm working on a Git project, I clone the repo from Github, then add a remote: git remote add bitbucket https://foo@bitbucket.org/foo/bar.git then I commit and push to both origin and bitbucket. Is the new remote also saved in Github? Will the other users see that I have added a new remote? Or is it just stored locally?
Thanks,
A: Which remotes are used for your copy of a repository is not part of the repository itself and thus not pushed.
Note that the remotes can also be local or on private networks.
| Q: Are remote addresses pushed in Git? When I'm working on a Git project, I clone the repo from Github, then add a remote: git remote add bitbucket https://foo@bitbucket.org/foo/bar.git then I commit and push to both origin and bitbucket. Is the new remote also saved in Github? Will the other users see that I have added a new remote? Or is it just stored locally?
Thanks,
A: Which remotes are used for your copy of a repository is not part of the repository itself and thus not pushed.
Note that the remotes can also be local or on private networks.
A: No, they are not. The git remotes are not a part of the git repository, and are only stored locally inside the .git folder.
A: No, the remotes settings are locally inside your .git folder and are not pushed back to any of the remotes you've set
| stackoverflow | {
"language": "en",
"length": 148,
"provenance": "stackexchange_0000F.jsonl.gz:877956",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582579"
} |
21822c3bf74dd8242bff6fc8183c46b336f41f49 | Stackoverflow Stackexchange
Q: React Native Open settings through Linking.openURL in IOS I want to open the iOS Settings app from my app. The settings destination is [ Settings => Notifications => myapp ], to turn push notifications on and off.
There are some documents about how to link to settings, but I don't know how to open the deep link (Notifications => myapp).
How can I do this?
A: Use Linking.openURL. For example, below is how to check and open Health app on iOS
import { Linking } from 'react-native'
async goToSettings() {
const healthAppUrl = 'x-apple-health://'
const canOpenHealthApp = await Linking.canOpenURL(healthAppUrl)
if (canOpenHealthApp) {
Linking.openURL(healthAppUrl)
} else {
Linking.openURL('app-settings:')
}
}
| Q: React Native Open settings through Linking.openURL in IOS I want to open the iOS Settings app from my app. The settings destination is [ Settings => Notifications => myapp ], to turn push notifications on and off.
There are some documents about how to link to settings, but I don't know how to open the deep link (Notifications => myapp).
How can I do this?
A: Use Linking.openURL. For example, below is how to check and open Health app on iOS
import { Linking } from 'react-native'
async goToSettings() {
const healthAppUrl = 'x-apple-health://'
const canOpenHealthApp = await Linking.canOpenURL(healthAppUrl)
if (canOpenHealthApp) {
Linking.openURL(healthAppUrl)
} else {
Linking.openURL('app-settings:')
}
}
A: You can deep-link referencing the settings's index like so:
Linking.openURL('app-settings:')
The above method is for iOS only.
A: Since React Native 0.60 to open App settings use:
import { Linking } from 'react-native';
Linking.openSettings();
Open the app’s custom settings, if it has any.
Works for Android and iOS
A: for iOS 14, this is how i open location service settings
Linking.openURL('App-Prefs:Privacy&path=LOCATION')
tested in react native 0.63.4
A: To access specific settings screens, try this:
Linking.openURL("App-Prefs:root=WIFI");
Linking to app-settings only opens the settings page for the app itself.
Reference: iOS Launching Settings -> Restrictions URL Scheme (note that prefs changed to App-Prefs in iOS 6)
A: Try this one to open a specific system URL - Linking.openURL('App-Prefs:{3}')
A: try this
Linking.openURL('app-settings://notification/myapp')
| stackoverflow | {
"language": "en",
"length": 224,
"provenance": "stackexchange_0000F.jsonl.gz:877992",
"question_score": "23",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582694"
} |
4788e71f8b98ee14a3b28bd31ca35acf20b74f39 | Stackoverflow Stackexchange
Q: Core Image filter "CIDepthBlurEffect" not working on iOS 11 / Xcode 9.0 I can't get the new CIDepthBlurEffect to work. Am I doing something wrong, or is this a known issue?
Below is my code in Objective-C:
NSDictionary *dict = [NSDictionary dictionaryWithObjects:[NSArray arrayWithObjects:[NSNumber numberWithBool:YES], [NSNumber numberWithBool:YES], nil] forKeys:[NSArray arrayWithObjects:kCIImageAuxiliaryDisparity, @"kCIImageApplyOrientationProperty", nil]];
CIImage *disparityImage = [CIImage imageWithData:imageData options:dict];
CIFilter *ciDepthBlurEffect = [CIFilter filterWithName:@"CIDepthBlurEffect"];
[ciDepthBlurEffect setDefaults];
[ciDepthBlurEffect setValue:disparityImage forKey:@"inputDisparityImage"];
[ciDepthBlurEffect setValue:originalImage forKey:@"inputImage"];
CIImage *outputImage = [ciDepthBlurEffect valueForKey:@"outputImage"];
EAGLContext *previewEaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *context = [CIContext contextWithEAGLContext:previewEaglContext options:@{kCIContextWorkingFormat :[NSNumber numberWithInt:kCIFormatRGBAh]} ];
CGImageRef cgimg = [context createCGImage:disparityImage fromRect:[disparityImage extent]];
image = [[UIImage alloc] initWithCGImage:cgimg];
CGImageRelease(cgimg);
A: This issue was resolved with the release of Xcode 9 beta 2 and iOS 11 beta 2.
| Q: Core Image filter "CIDepthBlurEffect" not working on iOS 11 / Xcode 9.0 I can't get the new CIDepthBlurEffect to work. Am I doing something wrong, or is this a known issue?
Below is my code in Objective-C:
NSDictionary *dict = [NSDictionary dictionaryWithObjects:[NSArray arrayWithObjects:[NSNumber numberWithBool:YES], [NSNumber numberWithBool:YES], nil] forKeys:[NSArray arrayWithObjects:kCIImageAuxiliaryDisparity, @"kCIImageApplyOrientationProperty", nil]];
CIImage *disparityImage = [CIImage imageWithData:imageData options:dict];
CIFilter *ciDepthBlurEffect = [CIFilter filterWithName:@"CIDepthBlurEffect"];
[ciDepthBlurEffect setDefaults];
[ciDepthBlurEffect setValue:disparityImage forKey:@"inputDisparityImage"];
[ciDepthBlurEffect setValue:originalImage forKey:@"inputImage"];
CIImage *outputImage = [ciDepthBlurEffect valueForKey:@"outputImage"];
EAGLContext *previewEaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *context = [CIContext contextWithEAGLContext:previewEaglContext options:@{kCIContextWorkingFormat :[NSNumber numberWithInt:kCIFormatRGBAh]} ];
CGImageRef cgimg = [context createCGImage:disparityImage fromRect:[disparityImage extent]];
image = [[UIImage alloc] initWithCGImage:cgimg];
CGImageRelease(cgimg);
A: This issue was resolved with the release of Xcode 9 beta 2 and iOS 11 beta 2.
| stackoverflow | {
"language": "en",
"length": 122,
"provenance": "stackexchange_0000F.jsonl.gz:878017",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582781"
} |
4239630888ef96cc118a7df08d44cbc8a800ccf5 | Stackoverflow Stackexchange
Q: why does rxjava Observable.subscribe(observer) return void? Normally, Observable.subscribe() returns a Disposable.
But Observable.subscribe(Observer) returns void.
So I can't dispose of the subscription created by Observable.subscribe(Observer).
According to introtorx.com, Observable.subscribe(Observer) returns a Disposable.
Why are Rx and RxJava different?
++++++++++++++
I use compile 'io.reactivex.rxjava2:rxandroid:2.0.1' in Android Studio.
github.com/ReactiveX/RxJava/blob/2.x/src/main/java/io/reactivex/Observable.java#L10831
public final void subscribe(Observer<? super T> observer) {
...
}
[1]: https://i.stack.imgur.com/0owg1.png
[2]: https://i.stack.imgur.com/7H4av.jpg
A: It's probably because of the Reactive Stream contract.
Reactive Stream README
public interface Publisher<T> {
public void subscribe(Subscriber<? super T> s);
}
The Publisher interface is defined to return void. RxJava's Flowable implements that interface, and RxJava's Observable follows the same contract.
So they provide subscribeWith() to return you a Disposable instead of void. Or you can use one of the overloads that give you back a Disposable too, e.g. subscribe(consumer<T>, consumer<Throwable>, action).
PS: the above is my guess. I'm not sure of it.
| Q: why does rxjava Observable.subscribe(observer) return void? Normally, Observable.subscribe() returns a Disposable.
But Observable.subscribe(Observer) returns void.
So I can't dispose of the subscription created by Observable.subscribe(Observer).
According to introtorx.com, Observable.subscribe(Observer) returns a Disposable.
Why are Rx and RxJava different?
++++++++++++++
I use compile 'io.reactivex.rxjava2:rxandroid:2.0.1' in Android Studio.
github.com/ReactiveX/RxJava/blob/2.x/src/main/java/io/reactivex/Observable.java#L10831
public final void subscribe(Observer<? super T> observer) {
...
}
[1]: https://i.stack.imgur.com/0owg1.png
[2]: https://i.stack.imgur.com/7H4av.jpg
A: It's probably because of the Reactive Stream contract.
Reactive Stream README
public interface Publisher<T> {
public void subscribe(Subscriber<? super T> s);
}
The Publisher interface is defined to return void. RxJava's Flowable implements that interface, and RxJava's Observable follows the same contract.
So they provide subscribeWith() to return you a Disposable instead of void. Or you can use one of the overloads that give you back a Disposable too, e.g. subscribe(consumer<T>, consumer<Throwable>, action).
PS: the above is my guess. I'm not sure of it.
A: In RxJava2, Disposable object is passed to Observer's onSubscribe call back method. You can get hold of Disposable object from onSubscribe call back method and use it to dispose the subscription at later point of time after subscribing the observer to observable.
A: Which version of RxJava do you use? With RxJava2 (io.reactivex.rxjava2):
public abstract class Observable<T> implements ObservableSource<T> {
...
public final Disposable subscribe() {...}
...
}
| stackoverflow | {
"language": "en",
"length": 211,
"provenance": "stackexchange_0000F.jsonl.gz:878036",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582830"
} |
272c0c2a4936c18255496fd1bd24f321b498d739 | Stackoverflow Stackexchange
Q: Convert String to html in label AngularJS I have an angular snippet in which I want to convert string to HTML object.
`<div class="row">
<label class="col-md-4 info_text">Remarks<span>:</span></label> <label
class="col-md-8 fieldValue">{{initialTableInfo.comments}}
</label>
</div>`
The initialTableInfo.comments has the value <b>someText</b>. It is getting printed as it is. I want "someText" to be printed as someText instead of <b>someText</b>.
A: You can use the $sce service in Angular.
module.controller('myctrl', ['$scope', '$http', '$sce',function($scope, $http, $sce) {
$scope.initialTableInfo.comments = $sce.trustAsHtml("<b>Some Text</b>");
}]);
And in your HTML use ng-bind-html
<label class="col-md-8 fieldValue" ng-bind-html="initialTableInfo.comments"> </label>
| Q: Convert String to html in label AngularJS I have an angular snippet in which I want to convert string to HTML object.
`<div class="row">
<label class="col-md-4 info_text">Remarks<span>:</span></label> <label
class="col-md-8 fieldValue">{{initialTableInfo.comments}}
</label>
</div>`
The initialTableInfo.comments has the value <b>someText</b>. It is getting printed as it is. I want "someText" to be printed as someText instead of <b>someText</b>.
A: You can use the $sce service in Angular.
module.controller('myctrl', ['$scope', '$http', '$sce',function($scope, $http, $sce) {
$scope.initialTableInfo.comments = $sce.trustAsHtml("<b>Some Text</b>");
}]);
And in your HTML use ng-bind-html
<label class="col-md-8 fieldValue" ng-bind-html="initialTableInfo.comments"> </label>
A: You can render string to html using $sce.trustAsHtml(html) and use ng-bind-html.
DEMO
angular.module("app",[])
.controller("ctrl",function($scope){
$scope.initialTableInfo ={};
$scope.initialTableInfo.comments = '<b>someText</b>';
})
.filter('trustHtml',function($sce){
return function(html){
return $sce.trustAsHtml(html)
}
})
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<div ng-app="app" ng-controller="ctrl">
<div class="background-white p20 reasons" >
<h6><b>About {{aboutlongs[0].name}}</b></h6>
<div class="reason-content" ng-bind-html="$scope.initialTableInfo.comments | trustHtml" >
</div>
</div>
</div>
A: You should check this link https://docs.angularjs.org/api/ng/directive/ngBindHtml.
<div ng-controller="ExampleController">
<p ng-bind-html="myHTML"></p>
</div>
As Alexi mentioned, be sure to have the correct syntax on the controller too.
| stackoverflow | {
"language": "en",
"length": 164,
"provenance": "stackexchange_0000F.jsonl.gz:878074",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44582966"
} |
5245bda109164a91f8eebef309d52983305e6b07 | Stackoverflow Stackexchange
Q: How to avoid rebuilding the graph in Tensorflow when doing hyper-parameter optimization I have a Tensorflow model that I am optimizing using Optunity.
The way I am doing things is that I have an objective function that creates a model and returns the best loss of my model. I pass this function to Optunity, which runs different tests with different parameters each time, i.e. it builds the graph each time.
In my code, I am using tf.reset_default_graph() before creating an instance of my model. Hence, it is rebuilding the model every time.
My issue is that having to build the graph every time I am using a new combination of hyper-parameters takes a lot of time. Is there a way to make things faster?
If I do not use tf.reset_default_graph() I get errors about conflicting tensors.
| Q: How to avoid rebuilding the graph in Tensorflow when doing hyper-parameter optimization I have a Tensorflow model that I am optimizing using Optunity.
The way I am doing things is that I have an objective function that creates a model and returns the best loss of my model. I pass this function to Optunity, which runs different tests with different parameters each time, i.e. it builds the graph each time.
In my code, I am using tf.reset_default_graph() before creating an instance of my model. Hence, it is rebuilding the model every time.
My issue is that having to build the graph every time I am using a new combination of hyper-parameters takes a lot of time. Is there a way to make things faster?
If I do not use tf.reset_default_graph() I get errors about conflicting tensors.
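A common workaround, for hyper-parameters that do not change the graph structure (learning rate, dropout rate, and so on), is to build the graph once and expose those values as runtime inputs (in TF1 terms, placeholders fed through feed_dict), so each trial only feeds new values instead of rebuilding. The sketch below shows the pattern in plain Python, without TensorFlow, since the actual model code is not given; `TrialRunner` and `run_trial` are made-up names:

```python
class TrialRunner:
    """Builds an expensive 'graph' once; later trials only feed new values."""

    def __init__(self):
        self.build_count = 0
        self._graph = None

    def _build(self):
        # Stand-in for the expensive graph-construction step.
        self.build_count += 1
        self._graph = lambda lr, dropout: (lr, dropout)  # placeholder-like inputs

    def run_trial(self, lr, dropout):
        if self._graph is None:
            self._build()  # happens once, not once per hyper-parameter set
        return self._graph(lr, dropout)

runner = TrialRunner()
results = [runner.run_trial(lr, 0.5) for lr in (0.1, 0.01, 0.001)]
print(runner.build_count)  # the graph was constructed a single time
```

Hyper-parameters that alter the graph itself (layer counts, unit sizes) still require a rebuild, so this only removes the construction cost for the feedable ones.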
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:878098",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583067"
} |
842952c4291b32bd56af2bec159a72d53645997b | Stackoverflow Stackexchange
Q: Monitor Java 8 ForkJoinPool Is there a way to monitor what a Java ForkJoinPool is doing over time, particularly how efficiently CountedCompleter performs?
*
*How much time did subtasks take to execute ?
*Was one of the subtasks longer than the others ?
*Did they execute in parallel ?
Applied to the example of triggers in CountedCompleter javadoc, the monitoring tool would tell me that PacketSender completion was limited by HeaderBuilder which started first but took 13s while BodyBuilder took 4s and completed before.
I am looking for such a tool, ideally as a Java agent so as not to pollute my production code with unnecessary statistics, but found nothing.
Thanks for any help
A: The best (and realistically only) way is to use AOP to "spy" on method calls and record start and end timestamps and context that can be collected for statistical analysis.
I have used AspectJ to good effect and can recommend it, but any such library should work.
Q: Monitor Java 8 ForkJoinPool Is there a way to monitor what a Java ForkJoinPool is doing over time, particularly how efficiently CountedCompleter performs?
*
*How much time did subtasks take to execute ?
*Was one of the subtasks longer than the others ?
*Did they execute in parallel ?
Applied to the example of triggers in CountedCompleter javadoc, the monitoring tool would tell me that PacketSender completion was limited by HeaderBuilder which started first but took 13s while BodyBuilder took 4s and completed before.
I am looking for such a tool, ideally as a Java agent so as not to pollute my production code with unnecessary statistics, but found nothing.
Thanks for any help
A: The best (and realistically only) way is to use AOP to "spy" on method calls and record start and end timestamps and context that can be collected for statistical analysis.
I have used AspectJ to good effect and can recommend it, but any such library should work.
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:878110",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583101"
} |
34f30e6baa0907356e7735edf76e7dc2eee49cdf | Stackoverflow Stackexchange
Q: Unable to return Account Linking button from Amazon Lex to FB Messenger I'm developing an FB chatbot using Lex. To integrate the user experience I'm trying to match the user in my DB (UID of user) to the user chatting on Lex (PSID). I have been suggested to use the Account Linking API to achieve this.
https://developers.facebook.com/docs/messenger-platform/account-linking/v2.9
In the API they have asked to request a button with type="account_link". However, the Lex bot doesn't allow placing a key "type" in the response card and throws an error when done.
*
*Firstly, am I on the right track?
*If yes, Is there a way to make lex accept the key-parameter "type" ?
I have come across the ID-Matching API as well, but I would like to know the expected way of solving the problem (not shortcuts which may break in the near future).
Q: Unable to return Account Linking button from Amazon Lex to FB Messenger I'm developing an FB chatbot using Lex. To integrate the user experience I'm trying to match the user in my DB (UID of user) to the user chatting on Lex (PSID). I have been suggested to use the Account Linking API to achieve this.
https://developers.facebook.com/docs/messenger-platform/account-linking/v2.9
In the API they have asked to request a button with type="account_link". However, the Lex bot doesn't allow placing a key "type" in the response card and throws an error when done.
*
*Firstly, am I on the right track?
*If yes, Is there a way to make lex accept the key-parameter "type" ?
I have come across the ID-Matching API as well, but I would like to know the expected way of solving the problem (not shortcuts which may break in the near future).
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:878129",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583189"
} |
df46f872f35080df7f48bbfc5d2b460025a5fd9d | Stackoverflow Stackexchange
Q: How to create component that has HTMLAttributes as props I am using Typescript and Preact. I want to create a component that exposes all the props that a <span/> can have:
import { h, Component } from "preact";
export class MySpan extends Component<any, void> {
render(props) {
return <span {...props}></span>;
}
}
However, the above example uses any, which is not really typesafe. Rather I want to expose the properties span has.
A: In react I would do it this way:
export class MySpan extends React.Component<React.HTMLProps<HTMLDivElement>, void>
{
public render()
{
return <span {...this.props}/>;
}
}
I do not have real experience with preact but taking into account their preact.d.ts, it should be something similar to:
import { h, Component } from "preact";
export class MySpan extends Component<JSX.HTMLAttributes, void> {
render(props) {
return <span {...props}></span>;
}
}
Note that this will not be specific properties for span element but rather generic ones.
| Q: How to create component that has HTMLAttributes as props I am using Typescript and Preact. I want to create a component that exposes all the props that a <span/> can have:
import { h, Component } from "preact";
export class MySpan extends Component<any, void> {
render(props) {
return <span {...props}></span>;
}
}
However, the above example uses any, which is not really typesafe. Rather I want to expose the properties span has.
A: In react I would do it this way:
export class MySpan extends React.Component<React.HTMLProps<HTMLDivElement>, void>
{
public render()
{
return <span {...this.props}/>;
}
}
I do not have real experience with preact but taking into account their preact.d.ts, it should be something similar to:
import { h, Component } from "preact";
export class MySpan extends Component<JSX.HTMLAttributes, void> {
render(props) {
return <span {...props}></span>;
}
}
Note that this will not be specific properties for span element but rather generic ones.
| stackoverflow | {
"language": "en",
"length": 153,
"provenance": "stackexchange_0000F.jsonl.gz:878139",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583214"
} |
6ee5effecb27ea7388e0418e9e8c7c4ef8ab5c63 | Stackoverflow Stackexchange
Q: ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4 I am trying for multi-class classification and here are the details of my training input and output:
train_input.shape= (1, 95000, 360) (95000 length input array with each
element being an array of 360 length)
train_output.shape = (1, 95000, 22) (22 Classes are there)
model = Sequential()
model.add(LSTM(22, input_shape=(1, 95000,360)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(train_input, train_output, epochs=2, batch_size=500)
The error is:
ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4
in line:
model.add(LSTM(22, input_shape=(1, 95000,360)))
Please help me out, I am not able to solve it through other answers.
A: In Artificial Neural Networks (ANN), the input is of shape (N, D), where N is the number of samples and D is the number of features.
In RNN, GRU and LSTM, the input is of shape (N, T, D), where N is the number of samples, T is the length of the time sequence and D is the number of features.
So, while adding layers
Input(shape = (D,)) for ANN and
Input(shape = (T,D)) for RNN, GRU and LSTMs
| Q: ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4 I am trying for multi-class classification and here are the details of my training input and output:
train_input.shape= (1, 95000, 360) (95000 length input array with each
element being an array of 360 length)
train_output.shape = (1, 95000, 22) (22 Classes are there)
model = Sequential()
model.add(LSTM(22, input_shape=(1, 95000,360)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(train_input, train_output, epochs=2, batch_size=500)
The error is:
ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4
in line:
model.add(LSTM(22, input_shape=(1, 95000,360)))
Please help me out, I am not able to solve it through other answers.
A: In Artificial Neural Networks (ANN), the input is of shape (N, D), where N is the number of samples and D is the number of features.
In RNN, GRU and LSTM, the input is of shape (N, T, D), where N is the number of samples, T is the length of the time sequence and D is the number of features.
So, while adding layers
Input(shape = (D,)) for ANN and
Input(shape = (T,D)) for RNN, GRU and LSTMs
A: I solved the problem by making
input size: (95000,360,1) and
output size: (95000,22)
and changed the input shape to (360,1) in the code where model is defined:
model = Sequential()
model.add(LSTM(22, input_shape=(360,1)))
model.add(Dense(22, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(ml2_train_input, ml2_train_output_enc, epochs=2, batch_size=500)
A: input_shape is supposed to be (timesteps, n_features). Remove the first dimension.
input_shape = (95000,360)
Same for the output.
A: Well, I think the main problem there is with the return_sequences parameter in the network. This hyper-parameter should be set to False for the last layer and True for the previous layers.
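The reshape described in the accepted fix can be illustrated without Keras: the LSTM wants a 3-D batch, so each (N, T) batch of windows needs a trailing feature axis, giving (N, T, D) with D = 1 to match input_shape=(360, 1). A toy pure-Python sketch with shrunken sizes (4 and 6 standing in for 95000 and 360):

```python
def add_feature_axis(batch):
    """Turn an (N, T) batch into (N, T, 1) nested lists, as an LSTM expects."""
    return [[[x] for x in window] for window in batch]

N, T = 4, 6  # stand-ins for 95000 samples and 360 timesteps
batch = [[float(t) for t in range(T)] for _ in range(N)]

shaped = add_feature_axis(batch)
shape = (len(shaped), len(shaped[0]), len(shaped[0][0]))
print(shape)  # (N, T, D) with D = 1
```

In real code the same axis is added with numpy, e.g. reshaping (95000, 360) to (95000, 360, 1).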
| stackoverflow | {
"language": "en",
"length": 276,
"provenance": "stackexchange_0000F.jsonl.gz:878152",
"question_score": "38",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583254"
} |
37fe8c73daf78689106d66c4ae29ae183fc3da9b | Stackoverflow Stackexchange
Q: Does Google OAuth work inside an iframe? I was wondering if Google's OAuth works from within an iframe? I tried looking this up but couldn't find exactly what I was looking for so any insight would be appreciated!
Background: When I use my web-app with Google OAuth inside a normal window it's fine but when I try it inside an iframe it just takes me to a blank page instead of letting me select which account I'd like to sign in with.
Thanks for your help!
| Q: Does Google OAuth work inside an iframe? I was wondering if Google's OAuth works from within an iframe? I tried looking this up but couldn't find exactly what I was looking for so any insight would be appreciated!
Background: When I use my web-app with Google OAuth inside a normal window it's fine but when I try it inside an iframe it just takes me to a blank page instead of letting me select which account I'd like to sign in with.
Thanks for your help!
| stackoverflow | {
"language": "en",
"length": 87,
"provenance": "stackexchange_0000F.jsonl.gz:878170",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583330"
} |
c807bb9769369dbcbe21a858a4a4b077b519a524 | Stackoverflow Stackexchange
Q: Android websocket stream video playback How do I play back a video stream from a WebSocket address like:
ws://somevideserver.net:8410/5b03a7aad01502281a39?
I tried using WebSocketClient and successfully connected to the port, but how do I retrieve the stream and send it to a player?
URI uri;
try {
uri = new URI(stremURL);
} catch (URISyntaxException e) {
e.printStackTrace();
return;
}
mWebSocketClient = new WebSocketClient(uri) {
@Override
public void onOpen(ServerHandshake serverHandshake) {
Log.d(TAG, "[getStreamURL] = "+"Opened");
// mWebSocketClient.send("Hello from " + Build.MANUFACTURER + " " + Build.MODEL);
startStreaming();
}
@Override
public void onMessage(String s) {
final String message = s;
runOnUiThread(new Runnable() {
@Override
public void run() {
Log.d(TAG, "[getStreamURL] message = "+message);
}
});
}
private void runOnUiThread(Runnable websocket) {
}
@Override
public void onClose(int i, String s, boolean b) {
Log.d(TAG, "[getStreamURL] = "+"Closed");
}
@Override
public void onError(Exception e) {
Log.d(TAG, "[getStreamURL] = "+"Error");
}
};
mWebSocketClient.connect();
Q: Android websocket stream video playback How do I play back a video stream from a WebSocket address like:
ws://somevideserver.net:8410/5b03a7aad01502281a39?
I tried using WebSocketClient and successfully connected to the port, but how do I retrieve the stream and send it to a player?
URI uri;
try {
uri = new URI(stremURL);
} catch (URISyntaxException e) {
e.printStackTrace();
return;
}
mWebSocketClient = new WebSocketClient(uri) {
@Override
public void onOpen(ServerHandshake serverHandshake) {
Log.d(TAG, "[getStreamURL] = "+"Opened");
// mWebSocketClient.send("Hello from " + Build.MANUFACTURER + " " + Build.MODEL);
startStreaming();
}
@Override
public void onMessage(String s) {
final String message = s;
runOnUiThread(new Runnable() {
@Override
public void run() {
Log.d(TAG, "[getStreamURL] message = "+message);
}
});
}
private void runOnUiThread(Runnable websocket) {
}
@Override
public void onClose(int i, String s, boolean b) {
Log.d(TAG, "[getStreamURL] = "+"Closed");
}
@Override
public void onError(Exception e) {
Log.d(TAG, "[getStreamURL] = "+"Error");
}
};
mWebSocketClient.connect();
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:878175",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583343"
} |
59163ed1b3b2faf8e51bfdd3fef5091c6dd56b2e | Stackoverflow Stackexchange
Q: What is the purpose of ApplicationRef in Angular 2+? I don't understand the ApplicationRef class and its uses. What does it mean to have a "reference to an Angular application running on a page"? When is this needed?
Please provide a small example using ApplicationRef.
A: https://angular.io/api/core/ApplicationRef
*
*allows you to invoke application-wide change detection by calling appRef.tick()
*allows you to add/remove views to be included in or excluded from change detection using attachView() and detachView()
*provides a list of registered components and component types using componentTypes and components
and some other change detection related information
| Q: What is the purpose of ApplicationRef in Angular 2+? I don't understand the ApplicationRef class and its uses. What does it mean to have a "reference to an Angular application running on a page"? When is this needed?
Please provide a small example using ApplicationRef.
A: https://angular.io/api/core/ApplicationRef
*
*allows you to invoke application-wide change detection by calling appRef.tick()
*allows you to add/remove views to be included in or excluded from change detection using attachView() and detachView()
*provides a list of registered components and component types using componentTypes and components
and some other change detection related information
A: ApplicationRef contains reference to the root view and can be used to manually run change detection using tick function
Invoke this method to explicitly process change detection and its
side-effects.
In development mode, tick() also performs a second change detection
cycle to ensure that no further changes are detected. If additional
changes are picked up during this second cycle, bindings in the app
have side-effects that cannot be resolved in a single change detection
pass. In this case, Angular throws an error, since an Angular
application can only have one change detection pass during which all
change detection must complete.
Here is an example:
@Component()
class C {
property = 3;
constructor(app: ApplicationRef, zone: NgZone) {
// this emulates any third party code that runs outside Angular zone
zone.runOutsideAngular(()=>{
setTimeout(()=>{
// this won't be reflected in the component view
this.property = 5;
// but if you run detection manually you will see the changes
app.tick();
})
})
Another application is to attach a dynamically created component view for change detection if it was created using a root node:
addDynamicComponent() {
let factory = this.resolver.resolveComponentFactory(SimpleComponent);
let newNode = document.createElement('div');
newNode.id = 'placeholder';
document.getElementById('container').appendChild(newNode);
const ref = factory.create(this.injector, [], newNode);
this.app.attachView(ref.hostView);
}
check this answer for more details.
| stackoverflow | {
"language": "en",
"length": 303,
"provenance": "stackexchange_0000F.jsonl.gz:878188",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583394"
} |
e7f3d4fc6e014495d5994309a8521eb41e3779e7 | Stackoverflow Stackexchange
Q: Bootstrap 4 navbar-toggler-icon does not appear Visit: https://jsfiddle.net/8tpm4z00/
<div class="container">
<button class="navbar-toggler navbar-toggler-right" type="button" data-toggle="collapse" data-target="#myNavigation" aria-controls="myNavigation" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<a href="#" class="navbar-brand">KP</a>
<div class="collapse navbar-collapse" id="myNavigation">
<div class="navbar-nav">
<a class="p-3 nav-item nav-link active " href="#">Home</a>
<a class="p-3 nav-item nav-link " href="#">About</a>
<a class="p-3 nav-item nav-link " href="#">Contact Me</a>
</div><!-- <div class="navbar-nav"> -->
</div><!-- <div class="collapse navbar-collapse"> -->
</div><!-- <div class="container"> -->
The .navbar-toggler-icon does not appear on the navbar-toggler button when the menu is collapsed for mobile responsiveness.
I have searched for this problem and also adjusted the jQuery and Bootstrap links, putting the jQuery link above the Bootstrap 4 links. But that does not seem to work. The external libraries are linked in my HTML in the same order as the jsfiddle.
A: If you don't use navbar-light or navbar-dark, set a background-image on .navbar-toggler-icon yourself, as follows:
.navbar-toggler-icon {
background-image: url(data:image/svg+xml,%3csvg viewBox='0 0 30 30' xmlns='http://www.w3.org/2000…p='round' stroke-miterlimit='10' d='M47h22M4 15h22M423h22'/%3e%3c/svg%3e);
}
or use fa-bars from Font Awesome
| Q: Bootstrap 4 navbar-toggler-icon does not appear Visit: https://jsfiddle.net/8tpm4z00/
<div class="container">
<button class="navbar-toggler navbar-toggler-right" type="button" data-toggle="collapse" data-target="#myNavigation" aria-controls="myNavigation" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<a href="#" class="navbar-brand">KP</a>
<div class="collapse navbar-collapse" id="myNavigation">
<div class="navbar-nav">
<a class="p-3 nav-item nav-link active " href="#">Home</a>
<a class="p-3 nav-item nav-link " href="#">About</a>
<a class="p-3 nav-item nav-link " href="#">Contact Me</a>
</div><!-- <div class="navbar-nav"> -->
</div><!-- <div class="collapse navbar-collapse"> -->
</div><!-- <div class="container"> -->
The .navbar-toggler-icon does not appear on the navbar-toggler button when the menu is collapsed for mobile responsiveness.
I have searched for this problem and also adjusted the jQuery and Bootstrap links, putting the jQuery link above the Bootstrap 4 links. But that does not seem to work. The external libraries are linked in my HTML in the same order as the jsfiddle.
A: If you don't use navbar-light or navbar-dark, set a background-image on .navbar-toggler-icon yourself, as follows:
.navbar-toggler-icon {
background-image: url(data:image/svg+xml,%3csvg viewBox='0 0 30 30' xmlns='http://www.w3.org/2000…p='round' stroke-miterlimit='10' d='M47h22M4 15h22M423h22'/%3e%3c/svg%3e);
}
or use fa-bars from Font Awesome
A: Use navbar-dark instead of navbar-inverse, and bg-dark instead of bg-inverse
<nav class="navbar navbar-dark navbar-expand-md bg-dark ">
A: Update:
navbar-inverse is no longer available in B4 version, you can use navbar-dark instead.
Use navbar-inverse bg-inverse instead of .navbar-default
<section role="navigation">
<nav class="navbar navbar-inverse bg-inverse navbar-toggleable-sm fixed-top"><!-- navbar-inverse -->
<div class="container">
<button class="navbar-toggler navbar-toggler-right" type="button" data-toggle="collapse" data-target="#myNavigation" aria-controls="myNavigation" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<a href="#" class="navbar-brand">KP</a>
<div class="collapse navbar-collapse" id="myNavigation">
<div class="navbar-nav">
<a class="p-3 nav-item nav-link active " href="#">Home</a>
<a class="p-3 nav-item nav-link " href="#">About</a>
<a class="p-3 nav-item nav-link " href="#">Contact Me</a>
</div><!-- <div class="navbar-nav"> -->
</div><!-- <div class="collapse navbar-collapse"> -->
</div><!-- <div class="container"> -->
</nav>
</section>
Updated fiddle
A: If you use bootstrap 4 beta:
You can simply add the navbar-dark class in the nav tag.
Result:
<nav class = "navbar navbar-expand-lg navbar-dark fixed-top navbar-default" role = "navigation">
<a class="navbar-brand" href="#">
<img class = "img-responsive logo" src = "~/Content/Images/ogo__160x36.png" alt = "" />
</a>
<button type = "button" class = "navbar-toggler" data-toggle = "collapse" data-target = "#menuPrincipal" aria-controls = "navbarNav" aria-expanded = "false" aria-label = "Toggle navigation ">
<span class = "navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" ...>...</div>
</nav>
| stackoverflow | {
"language": "en",
"length": 356,
"provenance": "stackexchange_0000F.jsonl.gz:878193",
"question_score": "29",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583416"
} |
8cdc81e54de1998a82065ad93b9c7a345eca2210 | Stackoverflow Stackexchange
Q: Sequelize.js min and max length validator are not working I am trying to apply min and max validations through model validations
last_name:{
type:Sequelize.STRING,
validate:{
notEmpty:{
args:true,
msg:"Last name required"
},
is:{
args:["^[a-z]+$",'i'],
msg:"Only letters allowed in last name"
},
max:{
args:32,
msg:"Maximum 32 characters allowed in last name"
},
min:{
args:4,
msg:"Minimum 4 characters required in last name"
}
}
}
But the min and max validators are never fired; all the other validators are working fine.
A: You need to pass args as an array
max:{
args:[32],
msg:"Maximum 32 characters allowed in last name"
},
min:{
args:[4],
msg:"Minimum 4 characters required in last name"
}
With use of len validator:
var Test = sequelize.define('test', {
name: {
type: Sequelize.STRING,
validate: {
notEmpty: {
args: true,
msg: "Required"
},
is: {
args: ["^[a-z]+$", 'i'],
msg: "Only letters allowed"
},
len: {
args: [4,32],
msg: "String length is not in this range"
}
}
},
id: {
type: Sequelize.INTEGER,
primaryKey: true,
autoIncrement: true
}
}, {
tableName: 'test'
});
Test.create({name: "ab"}, function(error, result) {});
Q: Sequelize.js min and max length validator are not working I am trying to apply min and max validations through model validations
last_name:{
type:Sequelize.STRING,
validate:{
notEmpty:{
args:true,
msg:"Last name required"
},
is:{
args:["^[a-z]+$",'i'],
msg:"Only letters allowed in last name"
},
max:{
args:32,
msg:"Maximum 32 characters allowed in last name"
},
min:{
args:4,
msg:"Minimum 4 characters required in last name"
}
}
}
But the min and max validators are never fired; all the other validators are working fine.
A: You need to pass args as an array
max:{
args:[32],
msg:"Maximum 32 characters allowed in last name"
},
min:{
args:[4],
msg:"Minimum 4 characters required in last name"
}
With use of len validator:
var Test = sequelize.define('test', {
name: {
type: Sequelize.STRING,
validate: {
notEmpty: {
args: true,
msg: "Required"
},
is: {
args: ["^[a-z]+$", 'i'],
msg: "Only letters allowed"
},
len: {
args: [4,32],
msg: "String length is not in this range"
}
}
},
id: {
type: Sequelize.INTEGER,
primaryKey: true,
autoIncrement: true
}
}, {
tableName: 'test'
});
Test.create({name: "ab"}, function(error, result) {});
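To see why the array shape matters, here is a minimal plain-JavaScript stand-in for a len-style validator. `makeLenValidator` is a made-up helper, not Sequelize API, and only illustrates the convention of destructuring a `[min, max]` args array together with a custom msg:

```javascript
// Minimal stand-in for a len validator: args is always an array.
function makeLenValidator({ args: [min, max], msg }) {
  return (value) => {
    if (value.length < min || value.length > max) {
      throw new Error(msg);
    }
    return true;
  };
}

const checkLastName = makeLenValidator({
  args: [4, 32],
  msg: "String length is not in this range",
});

console.log(checkLastName("Smith")); // passes: length 5 is within [4, 32]
```

Passing a bare number instead of an array would break the `[min, max]` destructuring here, which mirrors why the args-array form is the one Sequelize documents.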
| stackoverflow | {
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:878202",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583441"
} |
8e641a3934db0d6c1e6e7656d68c3826a2de8334 | Stackoverflow Stackexchange
Q: Why does word2vec only take one task for mapPartitionsWithIndex at Word2Vec.scala:323 I am running word2vec in Spark and when it comes to fit(), only one task is observed in the UI (screenshot omitted).
As per the configuration, num-executors = 1000, executor-cores = 2. And the RDD coalesces to 2000 partitions. It takes quite a long time for mapPartitionsWithIndex. Can it be distributed to multiple executors or tasks?
A: setNumPartitions(numPartitions: Int) solves my problem. I did not check the default value.
Sets number of partitions (default: 1).
Q: Why does word2vec only take one task for mapPartitionsWithIndex at Word2Vec.scala:323 I am running word2vec in Spark and when it comes to fit(), only one task is observed in the UI (screenshot omitted).
As per the configuration, num-executors = 1000, executor-cores = 2. And the RDD coalesces to 2000 partitions. It takes quite a long time for mapPartitionsWithIndex. Can it be distributed to multiple executors or tasks?
A: setNumPartitions(numPartitions: Int) solves my problem. I did not check the default value.
Sets number of partitions (default: 1).
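The effect of that default can be pictured without Spark: with one partition every record lands in a single task, while more partitions spread the work across tasks. A toy pure-Python sketch (`partition` is a made-up helper, not Spark API):

```python
def partition(records, num_partitions=1):
    """Round-robin records into num_partitions buckets (toy stand-in for Spark)."""
    buckets = [[] for _ in range(num_partitions)]
    for i, rec in enumerate(records):
        buckets[i % num_partitions].append(rec)
    return buckets

records = list(range(10))
print([len(b) for b in partition(records)])     # default: one bucket gets everything
print([len(b) for b in partition(records, 5)])  # five buckets share the records
```

With the default of 1, mapPartitionsWithIndex has only one partition to process, hence the single task in the UI.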
| stackoverflow | {
"language": "en",
"length": 88,
"provenance": "stackexchange_0000F.jsonl.gz:878228",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583529"
} |
6778030ad5ee671a2241c47a0766c617c1119885 | Stackoverflow Stackexchange
Q: Import xgboost in anaconda? git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; make -j4
I used the official documentation to install xgboost on Ubuntu. There were no errors, but when I start up my IPython notebook, which is in the Anaconda environment, import xgboost shows the error that there is no such module.
How to import xgboost in my anaconda python environment?
Do I need to modify some environment variables in Ubuntu?
A: Please use conda command:
conda install -c conda-forge xgboost
https://anaconda.org/conda-forge/xgboost
| Q: Import xgboost in anaconda? git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; make -j4
I used the official documentation to install xgboost on Ubuntu. There were no errors, but when I start up my IPython notebook, which is in the Anaconda environment, import xgboost shows the error that there is no such module.
How to import xgboost in my anaconda python environment?
Do I need to modify some environment variables in Ubuntu?
A: Please use conda command:
conda install -c conda-forge xgboost
https://anaconda.org/conda-forge/xgboost
A: The same thing is happening for me while using Spyder.
I am able to import it using terminal (with a deprecation warning) i.e:
pinaki@Excalibur:~$ python
Python 3.6.1 |Anaconda custom (64-bit)| (default, May 11 2017, 13:09:58)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from xgboost import XGBClassifier
/home/pinaki/anaconda3/lib/python3.6/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
>>>
But while trying to run the same code using Spyder got the following error:
from xgboost import XGBClassifier
Traceback (most recent call last):
File "<ipython-input-1-9b31cfdb821c>", line 1, in <module>
from xgboost import XGBClassifier
File "/media/pinaki/MyStuff/Work/Machine Learning A-Z Template Folder/Part 10 - Model Selection & Boosting/Section 49 - XGBoost/XGBoost/xgboost.py", line 30, in <module>
from xgboost import XGBClassifier
ImportError: cannot import name 'XGBClassifier'
and pip install xgboost returned the following output:
Requirement already satisfied: xgboost in /home/pinaki/xgboost/python-package
Requirement already satisfied: numpy in /home/pinaki/anaconda3/lib/python3.6/site-packages (from xgboost)
Requirement already satisfied: scipy in /home/pinaki/anaconda3/lib/python3.6/site-packages (from xgboost)
A: For me the problem got fixed by renaming the working file from xgboost.py to something else.
A: You have to go to python-package folder inside xgboost folder and run setup.py as well.
After
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; make -j4
Run
cd python-package; sudo python setup.py install
A: Use this conda command:
conda install -c conda-forge xgboost
Or pip:
pip install xgboost
| stackoverflow | {
"language": "en",
"length": 350,
"provenance": "stackexchange_0000F.jsonl.gz:878243",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583577"
} |
e48aef03b50f92c1d1be2674b0926bd6be7b397a | Stackoverflow Stackexchange
Q: PHP Short hand tags 7.x Just wondering, I have a CodeIgniter project that I have been asked to do some work on. I'm also trying to (where I can) prepare the project for upgrading to PHP 7.x (it is currently on 5).
One thing I'm confused about is the use of PHP short tags. I love them, but it seems they are being removed in 7?
So my question is, should I be removing them? The problem is, they make the code so much more readable; why are they removing them?
So for example:
<?php if ($product->price_discounted > 0)
{
echo "<p class='discounted price'>" . $product->price_discounted . "</p>"
}
?>
vs
<? if ($product->price_discounted > 0): ?>
<p class='discounted price'><?= $product->price_discounted ?></p>
<? endif; ?>
I would much rather use the second version, but whenever I run my project on PHP 7 these don't work. Am I missing something, or are they really removed?
FYI: I have short tags on in php.ini
A: Shorthand tags are still in PHP7, the tags being removed are:
*
*<% opening tag
*<%= opening tag with echo
*%> closing tag
*(<script\s+language\s*=\s*(php|"php"|'php')\s*>)i opening tag
*(</script>)i closing tag
https://wiki.php.net/rfc/remove_alternative_php_tags
Q: PHP Short hand tags 7.x Just wondering, I have a CodeIgniter project that I have been asked to do some work on. I'm also trying to (where I can) prepare the project for upgrading to PHP 7.x (it is currently on 5).
One thing I'm confused about is the use of PHP short tags. I love them, but it seems they are being removed in 7?
So my question is, should I be removing them? The problem is, they make the code so much more readable; why are they removing them?
So for example:
<?php if ($product->price_discounted > 0)
{
echo "<p class='discounted price'>" . $product->price_discounted . "</p>"
}
?>
vs
<? if ($product->price_discounted > 0): ?>
<p class='discounted price'><?= $product->price_discounted ?></p>
<? endif; ?>
I would much rather use the second version, but whenever I run my project on PHP 7 these don't work. Am I missing something, or are they really removed?
FYI: I have short tags on in php.ini
A: Shorthand tags are still in PHP7, the tags being removed are:
*
*<% opening tag
*<%= opening tag with echo
*%> closing tag
*(<script\s+language\s*=\s*(php|"php"|'php')\s*>)i opening tag
*(</script>)i closing tag
https://wiki.php.net/rfc/remove_alternative_php_tags
A: Important Update:
Short tags are not deprecated anymore.
Despite the RFC passing, some controversy ensued and the PHP internals group refused to implement it. Check out the migration guide for confirmation:
https://www.php.net/manual/en/migration74.php
Original Post:
Short open tags are deprecated in PHP 7.4, and will be removed in PHP 8.
https://wiki.php.net/rfc/deprecate_php_short_tags
Also, the short echo (<?=) is not part of short_open_tag as of 5.4. It is always available, and is not part of the deprecation.
https://wiki.php.net/rfc/shortags
A: Is the correct php.ini being loaded?
<?php phpinfo(); ?>
Check for Loaded Configuration File
| stackoverflow | {
"language": "en",
"length": 281,
"provenance": "stackexchange_0000F.jsonl.gz:878266",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583653"
} |
bb7e3887ec1bfd40f93e06394fa269f6e9bd22ca | Stackoverflow Stackexchange
Q: Google Play Developer API Error Responses Hi, I am using the Google Play Developer API to verify subscription purchases with my own server. Here is the official documentation for the purchases.subscriptions: get method:
https://developers.google.com/android-publisher/api-ref/purchases/subscriptions/get#response
The problem is that the documentation only shows the result in case of a successful execution. There is no listing of error situations. How am I supposed to handle these on my server if I don't know the responses?
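In the absence of a documented error list, one defensive approach is to treat the call like any other Google API request: check the HTTP status code and parse the standard `{"error": {"code": ..., "message": ...}}` body when present. The sketch below is an assumption about sensible handling, not an official contract; the key idea is to separate "purchase is invalid" from "my setup is broken" from "transient, retry".

```python
def classify_verify_response(status_code: int) -> str:
    """Map an HTTP status from a verification call to an action.

    The mapping is a defensive guess, not an official contract: treat
    anything unexpected as retryable rather than dropping the purchase.
    """
    if status_code == 200:
        return "valid"        # parse the subscription resource from the body
    if status_code in (400, 404, 410):
        return "invalid"      # malformed or unknown token: reject the purchase
    if status_code in (401, 403):
        return "config-error" # credentials/permissions: alert, do not retry blindly
    return "retry"            # 5xx, 429, anything else: retry with backoff


print(classify_verify_response(200))  # prints: valid
print(classify_verify_response(404))  # prints: invalid
```

The retry bucket deliberately catches unknown codes, so an undocumented response degrades to a delayed re-check instead of a lost purchase.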
| stackoverflow | {
"language": "en",
"length": 74,
"provenance": "stackexchange_0000F.jsonl.gz:878274",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583685"
} |
ba31886aa58b958a87ccef264e0d42e16a9277b3 | Stackoverflow Stackexchange
Q: Unable to Install GTK3.4 (libgtk-3.so.0) to run Firefox in AWS EC2 I am trying to run my Selenium script, developed with Selenium 2.53.0 and Firefox 46, on a headless AWS Linux server. AWS has GTK 2.0, while Firefox 46 is compatible with GTK 3.4 and above.
I tried to install GTK 3.4 from the links given below and it installed, but I am still getting this error:
XPCOMGlueLoad error for file /usr/local/firefox/libmozgtk.so:
libgtk-3.so.0: cannot open shared object file: No such file or directory
Couldn't load XPCOM.
I do not have libgtk-3.so.0 installed in my system, hence the error. If somebody can help me with how to upgrade/install GTK 3.4 along with the said libraries to run Firefox on my AWS EC2 server, it would be really helpful.
The links I have used to install GTK3: http://ftp.gnome.org/pub/gnome/sources/gtk+/3.4/gtk+-3.4.0.tar.xz
ftp://fr2.rpmfind.net/linux/fedora/linux/development/rawhide/Everything/x86_64/os/Packages/g/gtk3-3.22.15-2.fc27.x86_64.rpm
Installed Firefox 46 using the code from https://gist.github.com/joekiller/4144838.
Thanks.
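A hedged sketch of what usually fixes this class of error: "cannot open shared object file" means the dynamic loader cannot resolve the library, either because it was never installed system-wide or because a source build landed in /usr/local/lib, which is not on the loader path by default. Package names and paths below are assumptions; adjust for your Amazon Linux version and repos:

```shell
# Check whether the loader can currently resolve the library Firefox wants:
ldconfig -p | grep libgtk-3

# Easiest fix, if your repos carry it (package name is an assumption):
sudo yum install -y gtk3

# If you built GTK from source into /usr/local/lib, register that path
# with the dynamic loader instead:
echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/local.conf
sudo ldconfig
```

Re-running the first `ldconfig -p` check afterwards should show libgtk-3.so.0 before trying Firefox again.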
| stackoverflow | {
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:878279",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583699"
} |
404b33ccad17da65b84f56e97bb46279ed664c4e | Stackoverflow Stackexchange
Q: How can I plot a million markers on leaflet without browser becoming unresponsive? I am using Leaflet and Angular 2 to plot a million markers on the map, returned by an API; however, the map becomes unresponsive when zooming in. I have used the MarkerClusterer plugin, but the browser still hangs. Can someone please help?
A: Use vector tiles: see geojson-vt, Leaflet.VectorGrid, tippecanoe or Mapbox. The tiles can be built on the fly or hosted, and drawn using the canvas renderer.
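Whatever renderer is used, the browser also locks up because all markers are inserted in one synchronous loop. A renderer-agnostic sketch of the usual workaround: add points in small batches so the event loop can breathe between chunks. The batch size of 1000 is an arbitrary assumption, and `addBatch` stands in for the real "create markers and add them to the layer" work (Leaflet.markercluster exposes the same idea via its `chunkedLoading` option).

```javascript
// Split a large point array into fixed-size batches.
function makeBatches(points, batchSize) {
  const batches = [];
  for (let i = 0; i < points.length; i += batchSize) {
    batches.push(points.slice(i, i + batchSize));
  }
  return batches;
}

// Drain one batch per scheduled tick instead of one big synchronous pass.
// `schedule` defaults to setTimeout so the UI thread gets control back
// between chunks; in a browser, requestAnimationFrame works too.
function addInChunks(points, addBatch, schedule = (fn) => setTimeout(fn, 0)) {
  const batches = makeBatches(points, 1000);
  (function step() {
    const batch = batches.shift();
    if (!batch) return;     // all points added
    addBatch(batch);        // e.g. create L.marker()s and add to the cluster group
    schedule(step);         // yield before processing the next chunk
  })();
}
```

This keeps the page responsive during loading, though for a full million points vector tiles remain the more scalable answer.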
A: You can try Leaflet Marker Cluster to plot more than 4 lakh (400,000) markers (not a million markers :-))
#map {
width: 800px;
height: 600px;
border: 1px solid #ccc;
}
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.0.3/leaflet.css" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.0.3/leaflet.js"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="https://unpkg.com/leaflet.markercluster@1.0.3/dist/MarkerCluster.css" />
<link rel="stylesheet" href="https://unpkg.com/leaflet.markercluster@1.0.3/dist/MarkerCluster.Default.css" />
<script src="https://unpkg.com/leaflet.markercluster@1.0.3/dist/leaflet.markercluster-src.js"></script>
<script>
var addressPoints = [
[-37.8133062833, 175.2721598, "3"],
[-37.7650657333, 175.30196635, "6"],
[-37.7635318, 175.3017915, "32"],
[-37.76501495, 175.30214865, "8"],
[-37.7626951333, 175.30102495, "33"],
[-37.7628413, 175.3011039167, "31"],
[-37.7252768667, 175.2858272833, "39"],
[-37.7251813, 175.2860325833, "41"],
[-37.7253529667, 175.28606595, "40"],
[-37.72613405, 175.2825743, "4"],
[-37.7260856167, 175.2830322, "8"],
[-37.7258475833, 175.2823754667, "3"],
[-37.7258087333, 175.2855237167, "30"],
[-37.7259408167, 175.2841378667, "16"],
[-37.7252846667, 175.2853744167, "35"],
[-37.7252901, 175.2849533833, "31"],
[-37.7251719333, 175.2845442167, "27"],
[-37.7257350667, 175.2834726333, "15"],
[-37.72551855, 175.2829995667, "11"],
[-37.7258122333, 175.282854, "7"],
[-37.72609895, 175.2827986333, "6"],
[-37.7258737167, 175.28217635, "1"],
[-37.7258311333, 175.2825961667, "5"],
[-37.725612, 175.2828941667, "9"],
[-37.7260252333, 175.2834950167, "12"],
[-37.7258516333, 175.2843625333, "18"],
[-37.7252865167, 175.2856084833, "37"],
[-37.72552805, 175.28440285, "21"],
[-37.7260591667, 175.2832461833, "10"],
[-37.7257771167, 175.2831591833, "13"],
[-37.7256427, 175.2840918167, "19"],
[-37.7252874667, 175.28515965, "33"],
[-37.7253612667, 175.2847015333, "29"],
[-37.7253173333, 175.28437885, "23"],
[-37.7255606833, 175.285023, "24"],
[-37.7255772167, 175.2855245333, "32"],
[-37.7251607833, 175.28435025, "25"],
[-37.7256776333, 175.2858937667, "34"],
[-37.7255147667, 175.2860862, "38"],
[-37.7255604167, 175.2852782667, "26"],
[-37.7258475, 175.2853665167, "28"],
[-37.7256432167, 175.28479385, "22"],
[-37.7261499167, 175.2823003833, "2"],
[-37.7827152167, 175.2669377333, "8"],
[-37.7826653167, 175.2651315333, "19"],
[-37.7824514333, 175.2661956, "1/14-4/14"],
[-37.7826543333, 175.2665598, "10"],
[-37.7825191, 175.2655053833, "20"],
[-37.78283245, 175.2655798167, "15"],
[-37.7825732333, 175.2659369333, "16"],
[-37.7831668, 175.2677692, "1"],
[-37.7824864833, 175.2653481667, "22"],
[-37.7831375667, 175.2675576, "3"],
[-37.78311335, 175.2673731667, "5"],
[-37.7829425, 175.2661583667, "9"],
[-37.7824919333, 175.26519525, "24"],
[-37.7823427, 175.2655492667, "1/20-6/20"],
[-37.7826039667, 175.2661666667, "14"],
[-37.7823595, 175.2664568, "12"],
[-37.78260965, 175.2669722167, "8B"],
[-37.7829082167, 175.2659751167, "11"],
[-37.7826221833, 175.26637085, "12A"],
[-37.7828835, 175.2657931667, "13"],
[-37.7827837833, 175.2653184167, "17"],
[-37.7825404167, 175.2657024167, "18"],
[-37.7825830667, 175.266866, "8A"],
[-37.7826210667, 175.26709865, "8C"],
[-37.78309715, 175.26719625, "7"],
[-37.7824794833, 175.2666219, "1/10-8/10"],
[-37.7597504833, 175.2538294167, "1/4"],
[-37.7606522, 175.2539928833, "9A"],
[-37.7603274167, 175.2536633833, "11"],
[-37.7595836167, 175.25372635, "2/4"],
[-37.7596923167, 175.2539812167, "2"],
[-37.760061, 175.2541011333, "3"],
[-37.7601809333, 175.2539225667, "5"],
[-37.7598443, 175.2536890833, "6"],
[-37.7603764, 175.25413965, "7B"],
[-37.760479, 175.2540011833, "7"],
[-37.75995195, 175.2535132333, "8"],
[-37.7605351667, 175.2538995333, "9"],
[-37.8142647167, 175.29588845, "10A"],
[-37.8141523333, 175.2956623667, "10B"],
[-37.8145259, 175.2959641, "5"],
[-37.8144401167, 175.2958561667, "7"],
[-37.814215, 175.2961794833, "6"],
[-37.8144337833, 175.2962903333, "4"],
[-37.81415115, 175.2955115667, "11"],
[-37.8141950167, 175.2960154, "8"],
[-37.8143154333, 175.2956368333, "9"],
[-37.81463225, 175.2961358, "3"],
[-37.8145120667, 175.2964456333, "2"],
[-37.7267252, 175.24639565, "1"],
[-37.7267363, 175.24607095, "2"],
[-37.7272662667, 175.2464523833, "7"],
[-37.7268794167, 175.2464094833, "3"],
[-37.7270709667, 175.24641145, "5"],
[-37.7272869167, 175.2462229333, "9"],
[-37.7273377333, 175.2459922833, "11"],
[-37.7272791167, 175.2458228, "10"],
[-37.7272242333, 175.2456467, "8"],
[-37.7270979667, 175.2458302667, "6"],
[-37.7269099667, 175.2460811333, "4"],
[-37.7269516667, 175.26654195, "19"],
[-37.7272433667, 175.2688282167, "20"],
[-37.7272086, 175.26856935, "18"],
[-37.7270515167, 175.2663482, "17"],
[-37.7269029167, 175.2684720333, "37"],
[-37.7269129667, 175.2687034167, "39"],
[-37.7270226833, 175.269985, "49"],
[-37.7273227833, 175.27040065, "32"],
[-37.7269961667, 175.2697445667, "47"],
[-37.7277992333, 175.26607075, "10A"],
[-37.7267754833, 175.2668876, "23"],
[-37.7269725833, 175.2695108833, "45"],
[-37.7270770833, 175.2704350667, "53"],
[-37.7270976833, 175.2706593, "55"],
[-37.72769385, 175.26626005, "12B"],
[-37.7273659833, 175.2657939833, "11"],
[-37.7275722833, 175.2659764, "10"],
[-37.7278991667, 175.264895, "1"],
[-37.7270492, 175.2670066667, "16"],
[-37.7272566, 175.2659714833, "13"],
[-37.7271470333, 175.2661578333, "15"],
[-37.7268324667, 175.2677899167, "31"],
[-37.7268681167, 175.2680062167, "33"],
[-37.72689075, 175.2682406167, "35"],
[-37.7269306833, 175.2689642833, "41"],
[-37.72696875, 175.269263, "43"],
[-37.7280193667, 175.2652022667, "2"],
[-37.7278003, 175.2650767167, "3"],
[-37.7277980833, 175.2656216667, "6"],
[-37.7274714833, 175.2661344, "12A"],
[-37.7272278, 175.2690721167, "22"],
[-37.7279022167, 175.26542265, "4"],
[-37.7276824667, 175.2652469167, "5"],
[-37.7268506333, 175.266725, "21"],
[-37.72681425, 175.2673108, "27"],
[-37.7267847667, 175.26708975, "25"],
[-37.7268163333, 175.2675368833, "29"],
[-37.7272382167, 175.2692983333, "24"],
[-37.7270493, 175.27021985, "51"],
[-37.7275880333, 175.2654261, "7"],
[-37.7274762, 175.2656128333, "9"],
[-37.7276818833, 175.2657883, "8"],
[-37.7597651667, 175.2605711667, "6"],
[-37.76018805, 175.2609881, "2"],
[-37.7602787167, 175.2614594333, "2A"],
[-37.7601363333, 175.2614756833, "2B"],
[-37.7599083333, 175.26127835, "4B"],
[-37.7600629667, 175.2613225667, "4A"],
[-37.7591153333, 175.2608816, "9"],
[-37.7594763, 175.2604528333, "3"],
[-37.7595980167, 175.2607975, "10"],
[-37.7592820333, 175.2612282667, "14A"],
[-37.7594027667, 175.2613270333, "14B"],
[-37.7589391833, 175.2613034833, "13"],
[-37.7589293667, 175.2614704, "15"],
[-37.7594669333, 175.26142695, "16"],
[-37.7595715, 175.2603478, "1"],
[-37.7587638667, 175.2617398, "17"],
[-37.75928755, 175.2615212167, "18"],
[-37.7590696333, 175.26160655, "20"],
[-37.76004645, 175.2609580333, "4"],
[-37.7593509167, 175.2605798667, "5"],
[-37.7599710167, 175.2607496667, "6A"],
[-37.75922275, 175.2607182667, "7"],
[-37.7597137667, 175.2612152667, "8"],
[-37.7590570667, 175.26111555, "11"],
[-37.75938005, 175.2610309167, "12"],
[-37.8094496333, 175.29109025, "8A"],
[-37.8096383, 175.2908512833, "6A"],
[-37.8095118833, 175.2907225, "6"],
[-37.8095745, 175.2903304333, "2"],
[-37.8091726, 175.2911845333, "10A"],
[-37.8093263, 175.2908983333, "10"],
[-37.8093507333, 175.2901929667, "1"],
[-37.8093190333, 175.2903236333, "3"],
[-37.8095235833, 175.29050985, "4"],
[-37.8092396833, 175.2904833667, "5"],
[-37.80915545, 175.2906272167, "7"],
[-37.8094314667, 175.2908612, "8"],
[-37.8092101333, 175.2908132833, "9"],
[-37.7533304667, 175.2494180833, "43"],
[-37.7526352, 175.2483552333, "42"],
[-37.7545179333, 175.2496692833, "62"],
[-37.7543923667, 175.2495332667, "62A"],
[-37.7534358667, 175.2504690667, "9"],
[-37.75329495, 175.2497042167, "15A"],
[-37.7540200167, 175.2497675667, "53B"],
[-37.7535320667, 175.2498621167, "49"],
[-37.75358825, 175.2496730833, "49A"],
[-37.7532715667, 175.2495825167, "43A"],
[-37.7530439333, 175.24837735, "35A"],
[-37.7531751167, 175.2483719, "35B"],
[-37.7527424167, 175.24818575, "44"],
[-37.75283325, 175.2480310333, "46"],
[-37.7535719333, 175.2492009333, "45"],
[-37.7540046167, 175.2504339167, "70"],
[-37.75249145, 175.250112, "24A"],
[-37.7538706667, 175.2509267, "5A"],
[-37.7541943667, 175.2492624667, "58"],
[-37.7540283833, 175.2491471, "56"],
[-37.75237835, 175.2499499, "26"],
[-37.75370355, 175.2502355167, "59"],
[-37.7526963167, 175.24923805, "25C"],
[-37.7527793833, 175.2504055667, "18"],
[-37.75318385, 175.2502221167, "13"],
[-37.7543455667, 175.2493899667, "60"],
[-37.7531738833, 175.2486192, "35C"],
[-37.7530230667, 175.2486677, "35E"],
[-37.7530415667, 175.2476700333, "48B"],
[-37.7543446667, 175.2497958833, "64"],
[-37.7541102167, 175.2501747, "68"],
[-37.7542193167, 175.2499942667, "66"],
[-37.7524716, 175.2484220833, "40B"],
[-37.7525259167, 175.24859275, "40A"],
[-37.75304995, 175.24791705, "48A"],
[-37.7533106, 175.2503497833, "11"],
[-37.7535395667, 175.25122055, "10"],
[-37.7530557, 175.2505998333, "14"],
[-37.7534549833, 175.2510777667, "12"],
[-37.7532645, 175.2498790167, "15"],
[-37.7529169167, 175.2505167333, "16"],
[-37.7530122, 175.2501154167, "17"],
[-37.7528793667, 175.2500031, "19"],
[-37.7524861667, 175.250491, "20A"],
[-37.7525711833, 175.25061535, "20B"],
[-37.7527637, 175.2498751667, "21"],
[-37.7526454833, 175.2502624667, "22"],
[-37.7540655667, 175.25139255, "1"],
[-37.7528507167, 175.2495407833, "23"],
[-37.75266675, 175.2490152167, "29"],
[-37.7521746833, 175.24958125, "30"],
[-37.7527662667, 175.2488566833, "31"],
[-37.7520474667, 175.2491321333, "32A"],
[-37.7519347667, 175.2493122333, "32B"],
[-37.7519778, 175.24944285, "32C"],
[-37.7521468667, 175.249391, "32D"],
[-37.75304875, 175.2489884333, "33"],
[-37.7521923833, 175.2490508167, "34"],
[-37.7522956333, 175.24890365, "36"],
[-37.7532920167, 175.2486579167, "37"],
[-37.75239395, 175.2487506167, "38"],
[-37.75335085, 175.24882495, "39"],
[-37.7534400667, 175.24903275, "41"],
[-37.7539658833, 175.2512702167, "3"],
[-37.7536349, 175.2487029667, "50"],
[-37.7538621, 175.2494562, "51"],
[-37.7537511333, 175.2488972667, "52"],
[-37.7540289, 175.2495731, "53"],
[-37.7538807833, 175.2490229333, "54"],
[-37.7538779333, 175.24998435, "55A"],
[-37.7537814, 175.2497839333, "55B"],
[-37.7537928333, 175.2501063833, "57"],
[-37.7538317833, 175.25161815, "4"],
[-37.7539651, 175.2505948667, "72"],
[-37.7538675, 175.2511449833, "5"],
[-37.7535527833, 175.2515935333, "6"],
[-37.7529532333, 175.2496368167, "23B"],
[-37.7528883833, 175.2492822833, "27A"],
[-37.7529762333, 175.2493908, "27B"],
[-37.7524936667, 175.2493519833, "25B"],
[-37.7525876667, 175.2491184, "25A"],
[-37.7527940667, 175.2493274333, "25D"],
[-37.7529821167, 175.2476663833, "48C"],
[-37.7529520833, 175.2478886333, "48D"],
[-37.7526021333, 175.2496809833, "25"],
[-37.7524205833, 175.2502557167, "24"],
[-37.7537035833, 175.2493436, "47"],
[-37.7535693, 175.2506006667, "7"],
[-37.75362475, 175.2513740333, "8"],
[-37.7529255833, 175.2485607333, "35D"],
[-37.7406592, 175.2541540333, "20"],
[-37.73970635, 175.2542623167, "2"],
[-37.7408172333, 175.2542480667, "22"],
[-37.7401503, 175.25436535, "1"],
[-37.7413353, 175.2546195333, "17"],
[-37.7404642833, 175.2540473833, "18"],
[-37.7402663667, 175.25397305, "16"],
[-37.7399413667, 175.2535634167, "10"],
[-37.741079, 175.2547560667, "11"],
[-37.7400583667, 175.2533637167, "12"],
[-37.7412374833, 175.25481145, "13"],
[-37.7400478333, 175.2539777167, "14"],
[-37.7414328, 175.2548579333, "15"],
[-37.73943015, 175.2540313667, "4"],
[-37.7405255667, 175.2544956, "5"],
[-37.7398022833, 175.2540526167, "6"],
[-37.74070675, 175.2545902667, "7"],
[-37.7398755667, 175.2537977833, "8"],
[-37.7408881667, 175.254673, "9"],
[-37.7403441833, 175.2544100333, "3"],
[-37.7388455667, 175.2626767167, "62/3"],
[-37.7372424667, 175.2626006167, "24/3"],
[-37.7387254833, 175.2626415667, "61/3"],
[-37.73830135, 175.2624793833, "57/3"],
[-37.7374028667, 175.2620363833, "16/3"],
[-37.7370862667, 175.26191675, "13/3"],
[-37.7374895833, 175.2632328667, "32/3"],
[-37.7373917333, 175.2631711333, "31/3"],
[-37.7376033167, 175.2632756167, "33/3"],
[-37.7368605333, 175.2629430333, "3/3"],
[-37.7377045, 175.26332575, "34/3"],
[-37.7379133667, 175.2632624, "43/3"],
[-37.7379307833, 175.26316935, "44/3"],
[-37.7379521833, 175.2624367, "53/3"],
[-37.7380988833, 175.2623830167, "55/3"],
[-37.73761405, 175.2621603333, "18/3"],
[-37.73750525, 175.2629252167, "35/3"],
[-37.73791505, 175.2625837167, "54/3"],
[-37.7384692333, 175.2628984, "66/3"],
[-37.7376195667, 175.2624920833, "49/3"],
[-37.7378294667, 175.26282665, "42/3"],
[-37.73799185, 175.2629139, "46/3"],
[-37.737702, 175.262998, "37/3"],
[-37.7375974, 175.26296335, "36/3"],
[-37.7370049833, 175.2629984833, "2/3"],
[-37.7367591, 175.2629045833, "4/3"],
[-37.7371307833, 175.2630572333, "1/3"],
[-37.7379644167, 175.2630417333, "45/3"],
[-37.7373141, 175.2633483333, "1"],
[-37.73663385, 175.2626703333, "6/3"],
[-37.7375158167, 175.2627149167, "39/3"],
[-37.7373195667, 175.2623184, "28/3"],
[-37.7366752333, 175.2621685, "9/3"],
[-37.7366931333, 175.2620520667, "10/3"],
[-37.7368461167, 175.2619946833, "11/3"],
[-37.73694355, 175.2619276667, "12/3"],
[-37.73718485, 175.2619434, "14/3"],
[-37.7373047667, 175.26198635, "15/3"],
[-37.7375193667, 175.2620948167, "17/3"],
[-37.7370616333, 175.2622085, "30/3"],
[-37.7371940167, 175.26224795, "29/3"],
[-37.7374514, 175.2623851333, "27/3"],
[-37.73701605, 175.26246445, "26/3"],
[-37.7371355833, 175.2625268833, "25/3"],
[-37.73733795, 175.2626271667, "23/3"],
[-37.7366885667, 175.2624224833, "8/3"],
[-37.7368083167, 175.2624732333, "7/3"],
[-37.7376435, 175.26238365, "50/3"],
[-37.7377576, 175.2622586833, "51/3"],
[-37.7378621333, 175.2622972167, "52/3"],
[-37.7381938833, 175.26243295, "56/3"],
[-37.73840285, 175.2625139333, "58/3"],
[-37.7385259667, 175.2625530833, "59/3"],
[-37.7381541, 175.2627086, "63/3"],
[-37.7382732833, 175.2627866333, "64/3"],
[-37.73838385, 175.2628331667, "65/3"],
[-37.7386233667, 175.2629346667, "67/3"],
[-37.7387464, 175.2629777167, "68/3"],
[-37.7367412333, 175.2627206667, "5/3"],
[-37.7369866833, 175.2626858333, "22/3"],
[-37.7382063667, 175.2630262833, "48/3"],
[-37.7381730667, 175.2631383167, "47/3"],
[-37.73863995, 175.2631809667, "69/3"],
[-37.73876925, 175.2632203167, "70/3"],
[-37.7377186, 175.2627957, "41/3"],
[-37.7386056333, 175.2625947167, "60/3"],
[-37.73780955, 175.2630405333, "38/3"],
[-37.7375927167, 175.26274085, "40/3"],
[-37.7372908167, 175.26280895, "19/3"],
[-37.7371831167, 175.2627742, "20/3"],
[-37.73706645, 175.26272355, "21/3"],
[-37.7848857333, 175.2653156167, "3"],
[-37.7846897, 175.2649621833, "2"],
[-37.7848253333, 175.2649359833, "4"],
[-37.7849487333, 175.26489275, "6"],
[-37.78521955, 175.2648237167, "10"],
[-37.7854084333, 175.2648019167, "12"],
[-37.7847416667, 175.2653472667, "1"],
[-37.78507545, 175.2648510167, "8"],
[-37.7851414167, 175.2652587, "7-9"],
[-37.7810749167, 175.2260865833, "28"],
[-37.7812585833, 175.2260324333, "22"],
[-37.7816422167, 175.2264606, "14"],
[-37.7814536667, 175.2249716667, "19"],
[-37.7813703667, 175.2248887, "21"],
[-37.7806153833, 175.2258977167, "38A"],
[-37.7813339667, 175.2249035167, "23"],
[-37.7818254167, 175.2261700667, "10"],
[-37.75828075, 175.3046800333, "55"],
[-37.8177078167, 175.28656555, "36"],
[-37.7776995, 175.2232183667, "68"],
[-37.7739027667, 175.2264543333, "4"],
[-37.7777290667, 175.22495975, "52"],
[-37.7736832833, 175.2267423833, "1"],
[-37.7793876167, 175.2209870667, "99"],
[-37.7776858167, 175.2245548833, "56"],
[-37.7797406667, 175.2207979833, "103"],
[-37.77767585, 175.2243799833, "58"],
[-37.77990535, 175.2207323333, "105"],
[-37.7784736333, 175.2221205167, "80"],
[-37.7777384167, 175.2251824667, "50"],
[-37.7785325167, 175.2218720833, "82"],
[-37.7780789, 175.2226019167, "74"],
[-37.7786049333, 175.2225454667, "83"],
[-37.7790482167, 175.2205042167, "94"],
[-37.77877555, 175.2221031167, "87"],
[-37.7789347, 175.2209618, "90"],
[-37.77883645, 175.2218758333, "89"],
[-37.7792706333, 175.2206462, "96"],
[-37.7780605167, 175.2256446167, "47"],
[-37.7778262833, 175.2261931833, "41"],
[-37.7780589833, 175.2254904, "49"],
[-37.77953265, 175.2200974667, "102"],
[-37.7783669833, 175.2255023167, "51"],
[-37.77771785, 175.2247485833, "54"],
[-37.7780063667, 175.2257931167, "45"],
[-37.7777639667, 175.2230532, "70"],
[-37.7757195, 175.2275160667, "21"],
[-37.77911685, 175.2212834833, "95"],
[-37.7760025333, 175.2271657333, "22"],
[-37.7777111167, 175.2263709333, "39"],
[-37.7758996167, 175.2275354, "23"],
[-37.7788214833, 175.2211284667, "88"]
];
</script>
<div id="map"></div>
<span>Mouse over a cluster to see the bounds of its children and click a cluster to zoom to those bounds</span>
<script type="text/javascript">
var tiles = L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png', {
maxZoom: 12,
attribution: '© <a href="http://osm.org/copyright">OpenStreetMap</a> contributors, Points © 2012 LINZ'
}),
latlng = L.latLng(-37.82, 175.24);
var map = L.map('map', {center: latlng, zoom: 13, layers: [tiles]});
var markers = L.markerClusterGroup();
for (var i = 0; i < addressPoints.length; i++) {
var a = addressPoints[i];
var title = a[2];
var marker = L.marker(new L.LatLng(a[0], a[1]), { title: title });
marker.bindPopup(title);
markers.addLayer(marker);
}
map.addLayer(markers);
</script>
| stackoverflow | {
"language": "en",
"length": 1472,
"provenance": "stackexchange_0000F.jsonl.gz:878300",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583766"
} |
c444969b0d9bb579cf4872bf58d5c82a90fc399a | Stackoverflow Stackexchange
Q: Set static IP for outgoing requests I need to access a service outside of my GKE cluster. This service restricts access by IP, allowing just one IP. So, I have to set up a NAT or something like that, but I'm not really sure that setting an external gateway/NAT on my GKE cluster is the right solution. Can you help me, please?
A: You can achieve this by configuring a NAT Gateway.
Here's a guide: https://github.com/johnlabarge/gke-nat-example
The key steps to note are that you'll need to recreate your GKE cluster to apply a network tag to the nodes, and then use that tag in your GCP Route. (You cannot just apply the route to all nodes, as it would then be applied to your NAT Gateway instance(s) as well).
The other point to note (perhaps obviously) is that you cannot route all traffic through the NAT Gateway, unless you route all incoming traffic through it as well. I use it just for outbound traffic to a specific set of IPs which need a stable source.
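The linked guide walks through the full setup, but as a rough sketch of the route step described above (every name, zone, and the destination IP below is a placeholder, not a value from the guide):

```shell
# Route traffic for the restricted service's IP (203.0.113.10 is a placeholder)
# only from GKE nodes carrying the tag, through the NAT instance:
gcloud compute routes create gke-nat-route \
    --destination-range=203.0.113.10/32 \
    --next-hop-instance=nat-gateway \
    --next-hop-instance-zone=us-central1-a \
    --tags=gke-nat-nodes \
    --priority=800
```

Scoping --destination-range to the restricted service's single IP keeps the rest of the cluster's egress traffic on its normal path, matching the "outbound only, specific IPs" approach described above.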
A: You can use kubeip in order to assign IP addresses (see the blog post).
| stackoverflow | {
"language": "en",
"length": 191,
"provenance": "stackexchange_0000F.jsonl.gz:878304",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583776"
} |
272f6a782af9eb9d6849d008a524def88be16f96 | Stackoverflow Stackexchange
Q: Concatenating FirstName and LastName as Name and then omitting FirstName and LastName in SSIS I have a problem statement as below:
From a DB source I have a table with data about a Person. Three of its columns are Title, FirstName and LastName, and I want to replace them with a single Name column before actually writing to the destination.
I tried using a Derived Column, but it gives me Name (i.e. the concatenation of Title, FirstName, LastName) and the individual columns as well.
A: You can just ignore the original columns when mapping to destination or you can right click on the destination (or any transformation after the derive column) -> show advanced editor -> input columns and then uncheck the columns that you don't need anymore.
A: First of all, your solution is good. Even if the individual columns still appear, it is not necessary to map them to your destination; just ignore them.
Other method
If using an OLEDB Source select Source type as SQL Command and use the following command:
SELECT [Title] + ' ' + [FirstName] + ' ' + [LastName] AS Name, ...
FROM MyTable
If using Excel Source select Source type as SQL Command and use the following command:
SELECT [Title] + ' ' + [FirstName] + ' ' + [LastName] AS Name
FROM [Sheet1$]
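One caveat worth noting about the `+` concatenation above: in T-SQL, if any of the three columns is NULL, the whole Name expression becomes NULL. A hedged variant for the OLE DB case (CONCAT is available from SQL Server 2012 onward and treats NULL arguments as empty strings):

```sql
-- CONCAT (SQL Server 2012+) treats NULL as '', unlike the + operator
SELECT CONCAT([Title], ' ', [FirstName], ' ', [LastName]) AS Name
FROM MyTable
```

On older versions, wrapping each column in ISNULL(col, '') achieves the same effect.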
| stackoverflow | {
"language": "en",
"length": 223,
"provenance": "stackexchange_0000F.jsonl.gz:878312",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583796"
} |
e3b6692793e4d37e43e78e257a05ee3f587b1268 | Stackoverflow Stackexchange
Q: Date localization with Jest unit test The following unit test fails because the date is in English, i.e. "Friday, June 16, 2017" instead of the expected Dutch version, which would be "vrijdag 16 juni 2017".
test('returns dutch version of date when given ISO version', () => {
const dateToFormat = new Date('2017-06-16');
const options = {
weekday: 'long',
year: 'numeric',
month: 'long',
day: 'numeric',
};
const result = new Intl
.DateTimeFormat('nl-NL', options)
.format(dateToFormat);
expect(result).toBe('vrijdag 16 juni 2017');
});
Why is this unit test failing?
Environment:
* Node.js: 6.11.0
* npm: 3.10.10
* Jest: 18.1.0
* OS: Ubuntu 16.04
Log of failing unit test:
FAIL [path-to-file]
returns dutch version of date when given ISO version
expect(received).toBe(expected)
Expected value to be (using ===):
"vrijdag 16 juni 2017"
Received:
"Friday, June 16, 2017"
at Object.<anonymous> ([path-to-file]:15:20)
at emitTwo (events.js:106:13)
at process.emit (events.js:191:7)
at process.nextTick (internal/child_process.js:758:12)
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickCallback (internal/process/next_tick.js:104:9)
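A likely cause (an inference, not stated in the question): Node.js 6 binaries ship with "small-icu", which bundles locale data for English only, so `Intl.DateTimeFormat('nl-NL', ...)` silently falls back to the en-US formatting seen in the failure. A quick probe to confirm what the runtime actually supports; on a small-icu build the returned array is empty, on a full-icu build (or with the `full-icu` npm package and `NODE_ICU_DATA` pointing at its data) it contains the locale:

```javascript
// Ask the Intl implementation whether Dutch locale data is actually present.
const supported = Intl.DateTimeFormat.supportedLocalesOf(['nl-NL']);

if (supported.includes('nl-NL')) {
  console.log('full ICU: nl-NL data available, the test can pass');
} else {
  console.log('small ICU: nl-NL missing, Intl falls back to en-US');
}
```

Running this probe inside the Jest environment (rather than a plain node REPL) matters, since the test runner uses whatever ICU build its node process was started with.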
| stackoverflow | {
"language": "en",
"length": 146,
"provenance": "stackexchange_0000F.jsonl.gz:878340",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583875"
} |
bbcfe7d35b4fe26b1cf67ca0915c1fe098152a02 | Stackoverflow Stackexchange
Q: Positioning BootStrap daterangepicker calendar I am using the Bootstrap daterangepicker. I want to load/position the calendar at the top of the field. How can I do this?
Currently it is loading below the field, like this (screenshot omitted) -
A: To make the calendar appear above the input instead of below, change the "drops" setting to "up".
$(element).daterangepicker({
drops: 'up'
});
A: If anyone uses it with angular:
[bsConfig]="{ adaptivePosition: true }"
| stackoverflow | {
"language": "en",
"length": 67,
"provenance": "stackexchange_0000F.jsonl.gz:878354",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44583941"
} |
97a0672dee89900c5ee69a757e998774df3da37f | Stackoverflow Stackexchange
Q: FSCalendar - Change the title color of Particular Dates Using Swift Hello all, I'm working with WenchaoD's FSCalendar these days. I successfully loaded the calendar with many events. But now the question is how to change the date's title color for particular dates. Can anyone suggest how to do this?
A: 1) First of all implement FSCalendarDelegateAppearance
2) Let's assume you have an array of some dates; let's declare the array first.
var somedays : Array = [String]()
3) Now you will need formatter to change the string into date.
fileprivate let gregorian: Calendar = Calendar(identifier: .gregorian)
fileprivate lazy var dateFormatter1: DateFormatter = {
let formatter = DateFormatter()
formatter.dateFormat = "yyyy-MM-dd"
return formatter
}()
4) FSCalendarDelegateAppearance has one method named titleDefaultColorFor
5) Implement this method using the code below.
func calendar(_ calendar: FSCalendar, appearance: FSCalendarAppearance, titleDefaultColorFor date: Date) -> UIColor? {
somedays = ["2017-06-03",
"2017-06-06",
"2017-06-12",
"2017-06-25"]
let dateString : String = dateFormatter1.string(from:date)
if self.somedays.contains(dateString) {
return .green
} else {
return nil
}
}
6) Run this code. Happy coding.
| Q: FSCalendar - Change the title color of Particular Dates Using Swift Hello all, I'm working with WenchaoD's FSCalendar these days. I successfully loaded the calendar with many events. But now the question is how to change the date's title color for particular dates. Can anyone suggest how to do this?
A: 1) First of all implement FSCalendarDelegateAppearance
2) Let's assume you have an array of some dates; let's declare the array first.
var somedays : Array = [String]()
3) Now you will need formatter to change the string into date.
fileprivate let gregorian: Calendar = Calendar(identifier: .gregorian)
fileprivate lazy var dateFormatter1: DateFormatter = {
let formatter = DateFormatter()
formatter.dateFormat = "yyyy-MM-dd"
return formatter
}()
4) FSCalendarDelegateAppearance has one method named titleDefaultColorFor
5) Implement this method using the code below.
func calendar(_ calendar: FSCalendar, appearance: FSCalendarAppearance, titleDefaultColorFor date: Date) -> UIColor? {
somedays = ["2017-06-03",
"2017-06-06",
"2017-06-12",
"2017-06-25"]
let dateString : String = dateFormatter1.string(from:date)
if self.somedays.contains(dateString) {
return .green
} else {
return nil
}
}
6) Run this code. Happy coding.
| stackoverflow | {
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:878442",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584278"
} |
4e5b31d38efdd59c02d3299960f8f81ef5cec993 | Stackoverflow Stackexchange
Q: How to listen for 'props' changes In the VueJs 2.0 docs I can't find any hooks that would listen on props changes.
Does VueJs have such hooks like onPropsUpdated() or similar?
Update
As @wostex suggested, I tried to watch my property but nothing changed. Then I realized that I've got a special case:
<template>
<child :my-prop="myProp"></child>
</template>
<script>
export default {
props: ['myProp']
}
</script>
I am passing myProp that the parent component receives to the child component. Then the watch: {myProp: ...} is not working.
A: In my case I needed a solution where anytime any props would change, I needed to parse my data again. I was tired of making a separate watcher for each of my props, so I used this:
watch: {
$props: {
handler() {
this.parseData();
},
deep: true,
immediate: true,
},
},
Key point to take away from this example is to use deep: true so it not only watches $props but also its nested values, e.g. props.myProp
You can learn more about this extended watch options here: https://v2.vuejs.org/v2/api/#vm-watch
| Q: How to listen for 'props' changes In the VueJs 2.0 docs I can't find any hooks that would listen on props changes.
Does VueJs have such hooks like onPropsUpdated() or similar?
Update
As @wostex suggested, I tried to watch my property but nothing changed. Then I realized that I've got a special case:
<template>
<child :my-prop="myProp"></child>
</template>
<script>
export default {
props: ['myProp']
}
</script>
I am passing myProp that the parent component receives to the child component. Then the watch: {myProp: ...} is not working.
A: In my case I needed a solution where anytime any props would change, I needed to parse my data again. I was tired of making a separate watcher for each of my props, so I used this:
watch: {
$props: {
handler() {
this.parseData();
},
deep: true,
immediate: true,
},
},
Key point to take away from this example is to use deep: true so it not only watches $props but also its nested values, e.g. props.myProp
You can learn more about this extended watch options here: https://v2.vuejs.org/v2/api/#vm-watch
A: Not sure if you have resolved it (and if I understand correctly), but here's my idea:
If the parent receives myProp, and you want to pass it to the child and watch it in the child, then the parent has to pass a copy of myProp (not a reference).
Try this:
new Vue({
  el: '#app',
  data: {
    text: 'Hello'
  },
  components: {
    'parent': {
      props: ['myProp'],
      computed: {
        myInnerProp() { return this.myProp.slice(); } // shallow copy for an array; use a deep clone for nested objects
      }
    },
    'child': {
      props: ['myProp'],
      watch: {
        myProp(val, oldval) { /* now val will differ from oldval */ }
      }
    }
  }
})
and in html:
<child :my-prop="myInnerProp"></child>
Actually, you have to be very careful when working with complex collections in such situations (passed down multiple levels).
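A framework-free sketch of the point above (no Vue involved; names invented for illustration): when the parent passes the same object reference down, the "old" and "new" values a watcher compares are the very same object, so the change can never be detected by comparison, while a clone produces a fresh reference:

```javascript
// Just the reference semantics that make the clone necessary.
const parentState = { items: [1, 2] };

const passedByReference = parentState.items;     // what :my-prop="myProp" hands down
const passedAsClone = parentState.items.slice(); // what the cloning computed hands down

parentState.items.push(3); // parent mutates its data in place

// Still the same object: an equality check sees "no change".
console.log(passedByReference === parentState.items); // true
// The clone is a distinct object; a re-computed clone compares unequal.
console.log(passedAsClone === parentState.items);     // false
console.log(passedAsClone);                           // [ 1, 2 ] — unaffected by the push
```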
A: I work with a computed property like:
items:{
get(){
return this.resources;
},
set(v){
this.$emit("update:resources", v)
}
},
Resources is in this case a property:
props: [ 'resources' ]
A: You can watch props to execute some code upon props changes:
new Vue({
el: '#app',
data: {
text: 'Hello'
},
components: {
'child' : {
template: `<p>{{ myprop }}</p>`,
props: ['myprop'],
watch: {
myprop: function(newVal, oldVal) { // watch it
console.log('Prop changed: ', newVal, ' | was: ', oldVal)
}
}
}
}
});
<script src="https://unpkg.com/vue/dist/vue.js"></script>
<div id="app">
<child :myprop="text"></child>
<button @click="text = 'Another text'">Change text</button>
</div>
A: Props and v-model handling. How to pass values from parent to child and child to parent.
Watch is not required! Also mutating props in Vue is an anti-pattern, so you should never change the prop value in the child or component. Use $emit to change the value and Vue will work as expected always.
/* COMPONENT - CHILD */
Vue.component('props-change-component', {
props: ['value', 'atext', 'anumber'],
mounted() {
var _this = this
this.$emit("update:anumber", 6)
setTimeout(function () {
// Update the parent binded variable to 'atext'
_this.$emit("update:atext", "4s delay update from child!!")
}, 4000)
setTimeout(function () {
// Update the parent binded v-model value
_this.$emit("input", "6s delay update v-model value from child!!")
}, 6000)
},
template: '<div> \
v-model value: {{ value }} <br> \
atext: {{ atext }} <br> \
anumber: {{ anumber }} <br> \
</div>'
})
/* MAIN - PARENT */
const app = new Vue({
el: '#app',
data() {
return {
myvalue: 7,
mynumber: 99,
mytext: "My own text",
}
},
mounted() {
var _this = this
// Update our variable directly
setTimeout(function () {
_this.mytext = "2s delay update mytext from parent!!"
}, 2000)
},
})
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script>
<div id="app">
<props-change-component
v-model='myvalue'
:atext.sync='mytext'
:anumber.sync='mynumber'>
</props-change-component>
</div>
A: For me this is a clean solution for reacting to changes in one specific prop and building logic on top of it.
I would use the prop together with a computed property to create logic that runs after the change is received.
export default {
name: 'getObjectDetail',
filters: {},
components: {},
props: {
objectDetail: { // <--- we can access this value with this.objectDetail
type: Object,
required: true
}
},
  computed: {
    _objectDetail() {
      let value = false
      // ...
      // if || do || while -- whatever logic
      // insert validation logic with this.objectDetail (prop value)
      value = true
      // ...
      return value
    }
  }
}
So, we could use _objectDetail in the HTML render
<span>
{{ _objectDetail }}
</span>
or in some method:
literallySomeMethod: function() {
if (this._objectDetail) {
....
}
}
A: My answer below is applicable if someone is using Vue 2 with the Composition API.
So setup function will be
setup: (props: any) => {
watch(() => (props.myProp), (updatedProps: any) => {
// you will get the latest props into updatedProp
})
}
However, you will need to import the watch function from the composition API.
A: I think that in most cases Vue updates component's DOM on a prop change.
If this is your case then you can use beforeUpdate() or updated() hooks (docs) to watch props.
You can do it if you're only interested in newVal and don't need oldVal
new Vue({
el: '#app',
data: {
text: ''
},
components: {
'child': {
template: `<p>{{ myprop }}</p>`,
props: ['myprop'],
beforeUpdate() {
console.log(this.myprop)
},
updated() {
console.log(this.myprop)
}
}
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script>
<div id="app">
<child :myprop="text"></child>
<input v-model="text" placeholder="Type here to view prop changes" style="width:20em">
</div>
A: You need to understand the component hierarchy you have and how you are passing props; your case is definitely special and not usually encountered by devs.
Parent Component -myProp-> Child Component -myProp-> Grandchild
Component
If myProp is changed in parent component it will be reflected in the child component too.
And if myProp is changed in child component it will be reflected in grandchild component too.
So if myProp is changed in parent component then it will be reflected in grandchild component. (so far so good).
Therefore down the hierarchy you don't have to do anything props will be inherently reactive.
Now talking about going up in hierarchy
If myProp is changed in grandChild component it won't be reflected in the child component. You have to use .sync modifier in child and emit event from the grandChild component.
If myProp is changed in child component it won't be reflected in the parent component. You have to use .sync modifier in parent and emit event from the child component.
If myProp is changed in the grandChild component, it won't be reflected in the parent component (obviously). You have to use the .sync modifier in the child and emit an event from the grandchild component, then watch the prop in the child component and emit an event on change, which is listened to by the parent component using the .sync modifier.
Let's see some code to avoid confusion
Parent.vue
<template>
<div>
<child :myProp.sync="myProp"></child>
<input v-model="myProp"/>
<p>{{myProp}}</p>
</div>
</template>
<script>
import child from './Child.vue'
export default{
data(){
return{
myProp:"hello"
}
},
components:{
child
}
}
</script>
<style scoped>
</style>
Child.vue
<template>
<div> <grand-child :myProp.sync="myProp"></grand-child>
<p>{{myProp}}</p>
</div>
</template>
<script>
import grandChild from './Grandchild.vue'
export default{
components:{
grandChild
},
props:['myProp'],
watch:{
'myProp'(){
this.$emit('update:myProp',this.myProp)
}
}
}
</script>
<style>
</style>
Grandchild.vue
<template>
<div><p>{{myProp}}</p>
<input v-model="myProp" @input="changed"/>
</div>
</template>
<script>
export default{
props:['myProp'],
methods:{
changed(event){
this.$emit('update:myProp',this.myProp)
}
}
}
</script>
<style>
</style>
But after this you won't help but notice the screaming warnings from Vue saying
'Avoid mutating a prop directly since the value will be overwritten
whenever the parent component re-renders.'
Again, as I mentioned earlier, most devs don't encounter this issue because it's an anti-pattern; that's why you get this warning.
But in order to solve your issue (given your design), I believe you have to do the above workaround (a hack, to be honest). I still recommend you rethink your design and make it less prone to bugs.
I hope it helps.
A: Have you tried this?
watch: {
myProp: {
// the callback will be called immediately after the start of the observation
immediate: true,
handler (val, oldVal) {
// do your stuff
}
}
}
https://v2.vuejs.org/v2/api/#watch
A: For two-way binding you have to use the .sync modifier
<child :myprop.sync="text"></child>
more details...
and you have to use the watch property in the child component to listen for and handle any changes
props: ['myprop'],
watch: {
myprop: function(newVal, oldVal) { // watch it
console.log('Prop changed: ', newVal, ' | was: ', oldVal)
}
}
A: I use the prop with a computed property if I need to create logic that runs after receiving the changes
export default {
name: 'getObjectDetail',
filters: {},
components: {},
props: {
objectDetail: {
type: Object,
required: true
}
},
  computed: {
    _objectDetail() {
      let value = false
      ...
      if (someValidation)
      ...
    }
  }
}
A: Interesting observation for some use cases.
If you watch a data item from your store via a prop and you change the data item multiple times in the same store mutation it will not be watched.
However if you separate the data item changes into multiple calls of the same mutation it will be watched.
*
*This code will NOT trigger the watcher:
// Somewhere in the code:
this.$store.commit('changeWatchedDataItem');
// In the 'changeWatchedDataItem' mutation:
state.dataItem = false;
state.dataItem = true;
*This code WILL trigger the watcher at each mutation:
// Somewhere in the code:
this.$store.commit('changeWatchedDataItem', true);
this.$store.commit('changeWatchedDataItem', false);
// In the 'changeWatchedDataItem' mutation:
changeWatchedDataItem(state, newValue) {
state.dataItem = newValue;
}
A: By default, props in the component are reactive, and you can set up a watch on the props within the component, which will help you modify functionality according to your needs. Here is a simple code snippet to show how it works:
setup(props) {
  watch(
    () => props.propName,
    (newValue, oldValue) => {
      // Here you can add your functionality;
      // as the names describe, you get the new and old value of the watched property
    },
    { deep: true, immediate: true } // immediate runs the callback as soon as the watcher is created
  )
}
Hope this helps you to get the result you want out of this.
Have a great day.
A: You can use the watch mode to detect changes:
Do everything at an atomic level. So first check whether the watch method itself is getting called by logging something inside. Once it has been established that the watch is getting called, flesh it out with your business logic.
watch: {
myProp: function() {
console.log('Prop changed')
}
}
A: @JoeSchr has an answer. Here is another way to do if you don't want deep: true
mounted() {
this.yourMethod();
// re-render any time a prop changes
Object.keys(this.$options.props).forEach(key => {
this.$watch(key, this.yourMethod);
});
},
A: If your prop myProp has nested items, those nested items won't be reactive, so you'll need to use something like lodash's cloneDeep:
<child :myProp.sync="_.cloneDeep(myProp)"></child>
That's it, no need for watchers or anything else.
A: In my case I didn't get any information and couldn't retrieve it. Make sure to use try and catch in the body of watchers.
My case:
setup(props, { emit, attrs, slots, expose }) {
...
....
.....
}
.....
watch: {
isModalActive: function () {
console.log('action:: ', this.props.isModalActive) // here it causes an undefined error, with no error information in my inspect element — dunno why
}
},
So when I tried logging the other way:
watch: {
isModalActive: function () {
console.log('action:: ') // this work and printed
console.log('action:: ', this.props.isModalActive)
}
},
A: If myProp is an object, it may not be changed in the usual sense, so the watch will never be triggered. The reason myProp does not change is that in most cases you just set some keys of myProp; myProp itself is still the same object.
Try to watch properties of myProp, like "myProp.a"; it should work.
A: props will change if you add
<template>
<child :my-prop="myProp"/>
</template>
<script>
export default {
props: ['myProp']
}
</script>
A: The watch function should be placed in the child component, not the parent.
| stackoverflow | {
"language": "en",
"length": 1937,
"provenance": "stackexchange_0000F.jsonl.gz:878447",
"question_score": "476",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584292"
} |
0079e7afc9cdc821acf788ce0e1d8eec740c32ff | Stackoverflow Stackexchange
Q: Can't bind to 'matDatepicker' since it isn't a known property of 'input' - Angular I have just copied and pasted angular material code for datePicker and input, but I am getting this error for the datePicker.
app.module
import {MaterialModule} from '@angular/material';
@NgModule({
imports: [
...
MaterialModule
]
<md-input-container>
<input mdInput placeholder="Rechercher" [(ngModel)]="filterHistorique">
</md-input-container>
<md-input-container>
<input mdInput [mdDatepicker]="picker" placeholder="Choose a date">
<button mdSuffix [mdDatepickerToggle]="picker"></button>
</md-input-container>
<md-datepicker #picker></md-datepicker>
This is the error I am having in my browser:
Can't bind to 'mdDatepicker' since it isn't a known property of
'input'. 1. If 'md-datepicker' is an Angular component, then verify that
it is part of this module.
2. If 'md-datepicker' is a Web Component then add "CUSTOM_ELEMENTS_SCHEMA" to the '@NgModule.schemas' of this component
to suppress this message. (" [ERROR
->]
The error is for the datepicker, when I removed it, the errors disappears
A: You need to import FormsModule and ReactiveFormsModule if you use ngModel and formGroup, so your app.module should look like this:
import { MdDatepickerModule, MdNativeDateModule } from '@angular/material';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';

@NgModule({
  imports: [
    MdDatepickerModule,
    MdNativeDateModule,
    FormsModule,
    ReactiveFormsModule
  ]
})
Note: MaterialModule Removed. please use separate module instead. like MdDatepickerModule see here https://github.com/angular/material2/blob/master/CHANGELOG.md#200-beta11-carapace-parapet-2017-09-21
| Q: Can't bind to 'matDatepicker' since it isn't a known property of 'input' - Angular I have just copied and pasted angular material code for datePicker and input, but I am getting this error for the datePicker.
app.module
import {MaterialModule} from '@angular/material';
@NgModule({
imports: [
...
MaterialModule
]
<md-input-container>
<input mdInput placeholder="Rechercher" [(ngModel)]="filterHistorique">
</md-input-container>
<md-input-container>
<input mdInput [mdDatepicker]="picker" placeholder="Choose a date">
<button mdSuffix [mdDatepickerToggle]="picker"></button>
</md-input-container>
<md-datepicker #picker></md-datepicker>
This is the error I am having in my browser:
Can't bind to 'mdDatepicker' since it isn't a known property of
'input'. 1. If 'md-datepicker' is an Angular component, then verify that
it is part of this module.
2. If 'md-datepicker' is a Web Component then add "CUSTOM_ELEMENTS_SCHEMA" to the '@NgModule.schemas' of this component
to suppress this message. (" [ERROR
->]
The error is for the datepicker, when I removed it, the errors disappears
A: You need to import FormsModule and ReactiveFormsModule if you use ngModel and formGroup, so your app.module should look like this:
import { MdDatepickerModule, MdNativeDateModule } from '@angular/material';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';

@NgModule({
  imports: [
    MdDatepickerModule,
    MdNativeDateModule,
    FormsModule,
    ReactiveFormsModule
  ]
})
Note: MaterialModule Removed. please use separate module instead. like MdDatepickerModule see here https://github.com/angular/material2/blob/master/CHANGELOG.md#200-beta11-carapace-parapet-2017-09-21
A: To use MatDatePicker in application add the following lines in your app.module.ts (or the current module your component/page belongs to) file:
*
*import MatDatepickerModule, MatNativeDateModule in your app.module.ts.
import { MatDatepickerModule, MatNativeDateModule } from '@angular/material';
for angular 10.x import them independently
import { MatDatepickerModule } from '@angular/material/datepicker';
import { MatNativeDateModule } from '@angular/material/core';
*Add MatDatepickerModule, MatNativeDateModule under @NgModule in the imports and exports arrays:
@NgModule ({
imports: [
MatDatepickerModule,
MatNativeDateModule
],
exports: [
MatDatepickerModule,
MatNativeDateModule
]
})
A: While using mat-datepicker, you have to import MatDatepickerModule as well; importing MatNativeDateModule is recommended too. See docs here.
import { MaterialModule, MatDatepickerModule, MatNativeDateModule } from '@angular/material';
@NgModule({
imports: [
...
MaterialModule, // <----- this module will be deprecated in the future version.
MatDatepickerModule, // <----- import(must)
MatNativeDateModule, // <----- import for date formating(optional)
MatMomentDateModule // <----- import for date formating adapted to more locales(optional)
]
For optional module of date formating, see Module for DateAdapter from material team.
Mention: please avoid using MaterialModule for it'll be deprecated in the future.
A: You just need to import below module
import {MatDatepickerModule} from '@angular/material/datepicker';
@NgModule ({
imports: [
MatDatepickerModule
]
})
A: In the latest versions of Angular Material, you have to import MatDatepickerModule from @angular/material/datepicker in this case and MatNativeDateModule from @angular/material/core.
import { MatDatepickerModule } from '@angular/material/datepicker';
import { MatNativeDateModule } from '@angular/material/core';
@NgModule ({
imports: [
MatDatepickerModule,
MatNativeDateModule
]
})
A: The imports below work for me in my Angular 8 solution
@NgModule ({
imports: [
MatDatepickerModule,
MatNativeDateModule,
]
});
| stackoverflow | {
"language": "en",
"length": 438,
"provenance": "stackexchange_0000F.jsonl.gz:878459",
"question_score": "53",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584323"
} |
2f733a04af0a11a98bf22c61e2a4705623091325 | Stackoverflow Stackexchange
Q: How to build opencv in ios I have compiled openCV 3.1 with contrib modules using cmake gui following this link. The files have been generated, but how do I use it in my iOS project? Is there a way to create the opencv.framework file, or do I just import the whole built folder into my Xcode project?
A: Maybe it would be simplest to use command line instead of using CMakeGUI to build openCV with additional modules.
CMake must be installed, of course.
Create somewhere in a place suitable for you a new folder
>mkdir ~/your_open_cv_dir
>
>cd ~/your_open_cv_dir
>
>git clone https://github.com/opencv/opencv.git
If you need extra modules, clone their sources too
>git clone https://github.com/opencv/opencv_contrib.git
Your your_open_cv_dir now contains 2 folders,
opencv and opencv_contrib
Make a symbolic link for Xcode to let the OpenCV build scripts find the compiler, header files etc.
>cd /
>
>sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
Build OpenCV framework:
>cd ~/your_open_cv_dir
>
>python opencv/platforms/ios/build_framework.py ios
In case you need the extended version of OpenCV, build it with the extra modules
>python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib
| Q: How to build opencv in ios I have compiled openCV 3.1 with contrib modules using cmake gui following this link. The files have been generated, but how do I use it in my iOS project? Is there a way to create the opencv.framework file, or do I just import the whole built folder into my Xcode project?
A: Maybe it would be simplest to use command line instead of using CMakeGUI to build openCV with additional modules.
CMake must be installed, of course.
Create somewhere in a place suitable for you a new folder
>mkdir ~/your_open_cv_dir
>
>cd ~/your_open_cv_dir
>
>git clone https://github.com/opencv/opencv.git
If you need extra modules, clone their sources too
>git clone https://github.com/opencv/opencv_contrib.git
Your your_open_cv_dir now contains 2 folders,
opencv and opencv_contrib
Make a symbolic link for Xcode to let the OpenCV build scripts find the compiler, header files etc.
>cd /
>
>sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
Build OpenCV framework:
>cd ~/your_open_cv_dir
>
>python opencv/platforms/ios/build_framework.py ios
In case you need the extended version of OpenCV, build it with the extra modules
>python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib
A: *
*Look for opencv2.framework in opencv/platforms/ios/ios/opencv2.framework (if you followed the cmake instructions correctly, the framework should have been built there). If you built opencv_contrib separately, the framework will be under opencv/platforms/ios/ios_contrib/opencv2.framework.
*Drag opencv2.framework into your Xcode project. Make sure you check "copy items if needed".
*In "Build Phases", under "Link Binary with Libraries", add opencv2.framework. This might do the trick. You can check by importing an OpenCV header in one of your Objective-C++ files and seeing if Xcode can find it. If not, follow the next steps to specify a header search path.
*In your project's "Build Settings", add $(PROJECT_DIR) to "Framework Search Paths", "Header Search Paths", and "Library Search Paths".
*Now you can import OpenCV header files (.hpp files) in your Objective-C++ code.
| stackoverflow | {
"language": "en",
"length": 304,
"provenance": "stackexchange_0000F.jsonl.gz:878464",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584343"
} |
e407ffb026ba46b6d3a8a239f1c372b2c95a28b1 | Stackoverflow Stackexchange
Q: How to force download an image on click with django and aws s3 I have this view, which takes a user_id and image_id. When the user clicks the link, it checks if there is an image. If there is, then I would like the file to force download automatically.
template:
<a class="downloadBtn" :href="website + '/download-image/'+ user_id+'/'+ image_id +'/'">Download</a>
Before I was developing it in my local machine, and this code was working.
@api_view(['GET'])
@permission_classes([AllowAny])
def download_image(request, user_id=None, image_id=None):
    try:
        ui = UserImage.objects.get(user=user_id, image=image_id)
        content_type = mimetypes.guess_type(ui.image.url)[0]  # guess_type returns a (type, encoding) tuple
        wrapper = FileWrapper(open(str(ui.image.file), 'rb'))  # binary mode for image data
        response = HttpResponse(wrapper, content_type=content_type)
        response['Content-Disposition'] = 'attachment; filename="image.jpeg"'
        return response
    except UserImage.DoesNotExist:
        ...
But now I am using aws s3 for my static and media files. I am using django-storages and boto3. How can I force download the image in the browser?
@api_view(['GET'])
@permission_classes([AllowAny])
def download_image(request, user_id=None, image_id=None):
    try:
        ui = UserImage.objects.get(user=user_id, image=image_id)
        url = ui.image.url
        ...
        ... FORCE DOWNLOAD THE IMAGE
        ...
    except UserImage.DoesNotExist:
        ...
        ... ERROR, NO IMAGE AVAILABLE
        ...
A: You can just return an HttpResponse with the image itself.
return HttpResponse(instance.image, content_type="image/jpeg")
This will return the image's byte stream. The Content-Type header lets platforms like Postman display the image.
| Q: How to force download an image on click with django and aws s3 I have this view, which takes a user_id and image_id. When the user clicks the link, it checks if there is an image. If there is, then I would like the file to force download automatically.
template:
<a class="downloadBtn" :href="website + '/download-image/'+ user_id+'/'+ image_id +'/'">Download</a>
Before I was developing it in my local machine, and this code was working.
@api_view(['GET'])
@permission_classes([AllowAny])
def download_image(request, user_id=None, image_id=None):
    try:
        ui = UserImage.objects.get(user=user_id, image=image_id)
        content_type = mimetypes.guess_type(ui.image.url)[0]  # guess_type returns a (type, encoding) tuple
        wrapper = FileWrapper(open(str(ui.image.file), 'rb'))  # binary mode for image data
        response = HttpResponse(wrapper, content_type=content_type)
        response['Content-Disposition'] = 'attachment; filename="image.jpeg"'
        return response
    except UserImage.DoesNotExist:
        ...
But now I am using aws s3 for my static and media files. I am using django-storages and boto3. How can I force download the image in the browser?
@api_view(['GET'])
@permission_classes([AllowAny])
def download_image(request, user_id=None, image_id=None):
    try:
        ui = UserImage.objects.get(user=user_id, image=image_id)
        url = ui.image.url
        ...
        ... FORCE DOWNLOAD THE IMAGE
        ...
    except UserImage.DoesNotExist:
        ...
        ... ERROR, NO IMAGE AVAILABLE
        ...
A: You can just return an HttpResponse with the image itself.
return HttpResponse(instance.image, content_type="image/jpeg")
This will return the image's byte stream. The Content-Type header lets platforms like Postman display the image.
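Worth noting (an addition, not part of the answer above): whether the browser downloads or renders the bytes inline is decided purely by the Content-Disposition header, regardless of the framework or storage backend serving them. A framework-neutral sketch (the helper name is invented for illustration):

```javascript
// Hypothetical helper: build the two headers that turn any HTTP response
// into a forced download, whether Django, Express, or an S3 presigned URL
// ends up serving the bytes.
function forceDownloadHeaders(filename, contentType) {
  return {
    'Content-Type': contentType || 'application/octet-stream',
    // 'attachment' tells the browser to save the file instead of rendering it.
    'Content-Disposition': `attachment; filename="${filename}"`,
  };
}

console.log(forceDownloadHeaders('image.jpeg', 'image/jpeg'));
```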
| stackoverflow | {
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:878473",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584376"
} |
7825dc552091d2722b265e8648114d8a27f4fa92 | Stackoverflow Stackexchange
Q: Do we need to checkpoint both readStream and writeStream of Kafka in Spark Structured Streaming? Do we need to checkpoint both readStream and writeStream of Kafka in Spark Structured Streaming ? When do we need to checkpoint both of these streams or only one of these streams?
A: Checkpointing is needed to save information about the data processed by a stream so that, in case of failure, Spark can recover from the last saved progress point. Processed means it is read from the source, (transformed) and finally written to a sink.
Therefore, there is no need to set checkpointing for the reader and writer separately, since it makes no sense after recovery not to process data that was only read but not written to a sink. Moreover, the checkpointing location can be set only as an option on the DataStreamWriter (returned from dataset.writeStream()), and it must be set before starting the stream.
Here is an example of a simple structured stream with checkpointing:
session
.readStream()
.schema(RecordSchema.fromClass(TestRecord.class))
.csv("s3://test-bucket/input")
.as(Encoders.bean(TestRecord.class))
.writeStream()
.outputMode(OutputMode.Append())
.format("csv")
.option("path", "s3://test-bucket/output")
.option("checkpointLocation", "s3://test-bucket/checkpoint")
.queryName("test-query")
.start();
| Q: Do we need to checkpoint both readStream and writeStream of Kafka in Spark Structured Streaming? Do we need to checkpoint both readStream and writeStream of Kafka in Spark Structured Streaming ? When do we need to checkpoint both of these streams or only one of these streams?
A: Checkpointing is needed to save information about the data processed by a stream so that, in case of failure, Spark can recover from the last saved progress point. Processed means it is read from the source, (transformed) and finally written to a sink.
Therefore, there is no need to set checkpointing for the reader and writer separately, since it makes no sense after recovery not to process data that was only read but not written to a sink. Moreover, the checkpointing location can be set only as an option on the DataStreamWriter (returned from dataset.writeStream()), and it must be set before starting the stream.
Here is an example of a simple structured stream with checkpointing:
session
.readStream()
.schema(RecordSchema.fromClass(TestRecord.class))
.csv("s3://test-bucket/input")
.as(Encoders.bean(TestRecord.class))
.writeStream()
.outputMode(OutputMode.Append())
.format("csv")
.option("path", "s3://test-bucket/output")
.option("checkpointLocation", "s3://test-bucket/checkpoint")
.queryName("test-query")
.start();
| stackoverflow | {
"language": "en",
"length": 168,
"provenance": "stackexchange_0000F.jsonl.gz:878500",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584476"
} |
f6bd8089bdc30433a04a5e3637de03e97c7bcf3a | Stackoverflow Stackexchange
Q: Angular2 [selected] does not work to set up the default value? I'm trying to set up default value for my selection so I tried
[selected]= "selected_choice == 'one'"
something like this
but this didn't work.
People said [selected] no longer works, so I also tried [attr.selected], but that didn't work either.
this is my whole code for one select tag
<select (change)="passValue3()" formControlName="school" class="form-control" required [(ngModel)]="selected_student" class="selectionbox">
<option *ngIf="selected_student == undefined">학년 선택</option>
<option *ngFor="let gradetype of gradeTypes" [ngValue]="gradetype" [attr.selected] = "gradetype.gradeType === 'Middle'">{{gradetype.gradeName}}</option>
</select>
How can I set up the default option for the select?
A:
You need to do something like this:
In Markup:
<select placeholder="Sample select" [(ngModel)]="selectedItem">
<option [value]="'all'">View All</option>
<option [value]="'item-1'">Item-1</option>
<option [value]="'item-2'">Item-2</option>
</select>
In Component
selectedItem='all'
| Q: Angular2 [selected] does not work to set up the default value? I'm trying to set up default value for my selection so I tried
[selected]= "selected_choice == 'one'"
something like this
but this didn't work.
People said [selected] no longer works so I also tried [attr.selected] but didn't work as well..
this is my whole code for one select tag
<select (change)="passValue3()" formControlName="school" class="form-control" required [(ngModel)]="selected_student" class="selectionbox">
<option *ngIf="selected_student == undefined">학년 선택</option>
<option *ngFor="let gradetype of gradeTypes" [ngValue]="gradetype" [attr.selected] = "gradetype.gradeType === 'Middle'">{{gradetype.gradeName}}</option>
</select>
How can I set up the default option for the select?
A:
You need to do something like this:
In Markup:
<select placeholder="Sample select" [(ngModel)]="selectedItem">
<option [value]="'all'">View All</option>
<option [value]="'item-1'">Item-1</option>
<option [value]="'item-2'">Item-2</option>
</select>
In Component
selectedItem='all'
A: You can compare options for selection using the compareWith property. Note that this was added in Angular 4, so it may not work on Angular 2.
HTML File :
<select [compareWith]="byAnimal" [(ngModel)]="selectedAnimal">
<option *ngFor="let animal of animals" [ngValue]="animal">
{{animal.type}}
</option>
</select>
TS File
byAnimal(item1,item2){
return item1.type == item2.type;
}
One of the best solutions, from this link
A: Here is my solution:
The example is about time zones. From the backend I got the following object item:
activeItem = { "timezone": { "timeZoneHolder": "Europe", "region": "Europe/Paris (CEST)", "UTC": "UTC+1" }}
And the same item from my model looks a little bit different, as the source has changed:
{ "timeZoneHolder": "France", "region": "Europe/Paris", "UTC": "UTC +01:00" }
As you can see, it is a little bit different.
So here is my model:
timeZones = [{ "timeZoneHolder": "France", "region": "Europe/Paris", "UTC": "UTC +01:00" }, { "timeZoneHolder": "French Polynesia", "region": "Pacific/Gambier", "UTC": "UTC -09:00" }]
And here is the mark-up for the select, which works like a charm:
<select id="timezone" name="timezone" [(ngModel)]="activeItem.timezone">
<option [ngValue]="activeItem.timezone" [selected]="true" disabled hidden>{{activeItem.timezone.region}}</option>
<option *ngFor="let timeZone of timeZones"
[ngValue]="{timeZoneHolder: timeZone.countryName, region: timeZone.timeZone, UTC: timeZone.UTC}">
{{timeZone.timeZone}}
</option>
</select>
Enjoy :)
| stackoverflow | {
"language": "en",
"length": 303,
"provenance": "stackexchange_0000F.jsonl.gz:878526",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584560"
} |
b8589d501ef4bdb19b37c94d7b85b1adacf057b0 | Stackoverflow Stackexchange
Q: How to get Element Properties in React Native on a Click Event How should I access the properties of an element without using the 'this' keyword in React Native? I have a function with which the parent class itself is bound as 'this' but I want to access the properties of the element that is being clicked. Here's the code-
import {Circle} from 'react-native-svg';
export default class App extends Component {
constructor(props) {
super(props);
this.state = {activeX: null}
}
handleTouch(event) {
const x = event.target.cx; //How to access "cx" property here?
this.setState({ activeX: x });
}
render() {
return (
<Circle cx='10' cy='10' r='5' onPress={this.handleTouch.bind(this)}/>
<Circle cx='20' cy='20' r='5' onPress={this.handleTouch.bind(this)}/>
);
}
}
A: Try this
import {Circle} from 'react-native-svg';
export default class App extends Component {
constructor(props) {
super(props);
this.state = {
activeX: null,
cx: 10
}
}
handleTouch = () => {
const x = this.state.cx
this.setState({ activeX: x });
}
render() {
return (
<Circle cx={this.state.cx} cy='10' r='5' onPress={this.handleTouch}/>
);
}
}
| Q: How to get Element Properties in React Native on a Click Event How should I access the properties of an element without using the 'this' keyword in React Native? I have a function with which the parent class itself is bound as 'this' but I want to access the properties of the element that is being clicked. Here's the code-
import {Circle} from 'react-native-svg';
export default class App extends Component {
constructor(props) {
super(props);
this.state = {activeX: null}
}
handleTouch(event) {
const x = event.target.cx; //How to access "cx" property here?
this.setState({ activeX: x });
}
render() {
return (
<Circle cx='10' cy='10' r='5' onPress={this.handleTouch.bind(this)}/>
<Circle cx='20' cy='20' r='5' onPress={this.handleTouch.bind(this)}/>
);
}
}
A: Try this
import {Circle} from 'react-native-svg';
export default class App extends Component {
constructor(props) {
super(props);
this.state = {
activeX: null,
cx: 10
}
}
handleTouch = () => {
const x = this.state.cx
this.setState({ activeX: x });
}
render() {
return (
<Circle cx={this.state.cx} cy='10' r='5' onPress={this.handleTouch}/>
);
}
}
A: import ReactNativeComponentTree from 'react-native/Libraries/Renderer/src/renderers/native/ReactNativeComponentTree';
And access the properties as-
const x = ReactNativeComponentTree.getInstanceFromNode(event.currentTarget)._currentElement.props.cx;
A: Sorry for leaving an answer but I cannot leave a comment since <50 rep.
You should edit the improve part of your answer, with the following bit:
import ReactNativeComponentTree from 'react-native';
instead of what you have right now,
import ReactNativeComponentTree from 'react-native/Libraries/Renderer/src/renderers/native/ReactNativeComponentTree';
since is throwing an error (trying to import unknown module).
A: A better way of accessing the component properties in an event is actually by creating a component and passing it the needed data:
import { Circle } from 'react-native-svg';
class TouchableCircle extends React.PureComponent {
constructor(props) {
super(props);
this.circlePressed = this.circlePressed.bind(this);
}
circlePressed(){
this.props.onPress(this.props.cx);
}
render() {
return (
<Circle cx={this.props.cx} cy={this.props.cy} r={this.props.r} onPress={this.circlePressed}/>
);
}
}
export default class App extends Component {
constructor(props) {
super(props);
this.state = {activeX: null}
this.handleTouch = this.handleTouch.bind(this);
}
handleTouch(cx) {
this.setState({ activeX: cx });
}
render() {
return (
<TouchableCircle cx='10' cy='10' r='5' onPress={this.handleTouch}/>
<TouchableCircle cx='20' cy='20' r='5' onPress={this.handleTouch}/>
);
}
}
NB: Performance tip from Facebook for event handlers:
We generally recommend binding in the constructor or using the property initializer syntax, to avoid this sort of performance problem. (i.e. to avoid the creation of the callback everytime a component renders)
ref: React Handling Events
(credits to https://stackoverflow.com/a/42125039/1152843)
A: You can change your event handler to a curried function like so:
import {Circle} from 'react-native-svg';
export default class App extends Component {
constructor(props) {
super(props);
this.state = {activeX: null}
}
//Use ES6 arrow and avoid this.bind
//Curried function handleTouch accepts cx, cy as extra parameters
handleTouch = (cx, cy) => event => {
console.log(cx, cy) // This is how you access props passed to Circle here
console.log(event)
this.setState({ activeX: cx });
}
render() {
//You are actually invoking the handleTouch function here, whose return value is
//a function, which is set as the onPress event handler for the <Circle> component
return (
<Circle cx='10' cy='10' r='5' onPress={this.handleTouch(10, 10)}/>
<Circle cx='20' cy='20' r='5' onPress={this.handleTouch(20, 20)}/>
);
}
}
Checkout the working snack below:
https://snack.expo.io/@prashand/accessing-props-from-react-native-touch-event
| stackoverflow | {
"language": "en",
"length": 498,
"provenance": "stackexchange_0000F.jsonl.gz:878535",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584585"
} |
bcad760f20db356fc783a66c101cf4facb680bc6 | Stackoverflow Stackexchange
Q: Java Regular expression validation I want to validate a string which allows only alphanumeric values, only one dot character, and only one underscore character in Java.
String fileName = (String) request.getParameter("read");
I need to validate the fileName retrieved from the request, and it should satisfy the criteria below.
I tried "^[a-zA-Z0-9_'.']*$", but this allows more than one dot character.
I need to validate my string in the given scenarios:
1. Filename contains only alphanumeric values.
2. It allows only one dot character (.), for example: fileRead.pdf, fileWrite.txt etc.
3. It allows only one underscore character. All other symbols should be declined.
Can anyone help me with this?
A: You should use String.matches() method :
System.out.println("My_File_Name.txt".matches("\\w+\\.\\w+"));
You can also use java.util.regex package.
java.util.regex.Pattern pattern =
java.util.regex.Pattern.compile("\\w+\\.\\w+");
java.util.regex.Matcher matcher = pattern.matcher("My_File_Name.txt");
System.out.println(matcher.matches());
For more information about REGEX and JAVA, look at this page :
https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html
| Q: Java Regular expression validation I want to validate a string which allows only alphanumeric values, only one dot character, and only one underscore character in Java.
String fileName = (String) request.getParameter("read");
I need to validate the fileName retrieved from the request, and it should satisfy the criteria below.
I tried "^[a-zA-Z0-9_'.']*$", but this allows more than one dot character.
I need to validate my string in the given scenarios:
1. Filename contains only alphanumeric values.
2. It allows only one dot character (.), for example: fileRead.pdf, fileWrite.txt etc.
3. It allows only one underscore character. All other symbols should be declined.
Can anyone help me with this?
A: You should use String.matches() method :
System.out.println("My_File_Name.txt".matches("\\w+\\.\\w+"));
You can also use java.util.regex package.
java.util.regex.Pattern pattern =
java.util.regex.Pattern.compile("\\w+\\.\\w+");
java.util.regex.Matcher matcher = pattern.matcher("My_File_Name.txt");
System.out.println(matcher.matches());
For more information about REGEX and JAVA, look at this page :
https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html
A: You could use two negative lookaheads here:
^((?!.*\..*\.)(?!.*_.*_)[A-Za-z0-9_.])*$
Each lookahead asserts that either a dot or an underscore does not occur two times, implying that it can occur at most once.
It wasn't completely clear whether you require one dot and/or underscore. I assumed not, but my regex could be easily modified to this requirement.
Demo
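The lookahead pattern above can be sanity-checked quickly. The snippet below uses Python's re module purely as a convenient test bed (my illustration choice, not part of the original answer); java.util.regex supports the same negative-lookahead syntax, so the behaviour carries over:

```python
import re

# The lookahead-based pattern from the answer above, tried in Python.
pattern = re.compile(r'^((?!.*\..*\.)(?!.*_.*_)[A-Za-z0-9_.])*$')

print(bool(pattern.match('fileRead.pdf')))      # one dot, no underscore -> True
print(bool(pattern.match('my_file.txt')))       # one dot, one underscore -> True
print(bool(pattern.match('file..txt')))         # two dots -> False
print(bool(pattern.match('my_file_name.txt')))  # two underscores -> False
print(bool(pattern.match('bad-name.txt')))      # '-' not allowed -> False
```

Because the lookaheads sit inside the repeated group, they are re-checked at every position, which is what caps dots and underscores at one each.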
A: You can first check the special characters which have the number limits.
Here is the code:
int occurance = StringUtils.countOccurrencesOf("123123..32131.3", ".");
or
int count = StringUtils.countMatches("123123..32131.3", ".");
If it does not match your request you can discard it before regex check.
If there is no problem you can now put your String to alphanumeric value check.
| stackoverflow | {
"language": "en",
"length": 272,
"provenance": "stackexchange_0000F.jsonl.gz:878546",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584622"
} |
21ff012b4534043c9b3fc83cb2f3ac1b467b4114 | Stackoverflow Stackexchange
Q: AWS Cloudformation: Enable PostGIS Extension in RDS from Cloudformation New to cloudformation. I am spawning PostgreSQL RDS instance using a aws cloudformation script. Is there a way to enable PostGIS (and other extensions) from aws cloudFormation script?
A:
Working with PostGIS PostGIS is an extension to PostgreSQL for storing
and managing spatial information. If you are not familiar with
PostGIS, you can get a good general overview at PostGIS Introduction.
You need to perform a bit of setup before you can use the PostGIS
extension. The following list shows what you need to do; each step is
described in greater detail later in this section.
*
*Connect to the DB instance using the master user name used to create the DB instance.
*Load the PostGIS extensions.
*Transfer ownership of the extensions to the rds_superuser role.
*Transfer ownership of the objects to the rds_superuser role.
*Test the extensions.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
I'm not sure, but maybe you can create a Lambda function together with the RDS instance in your CloudFormation template and then invoke the Lambda to perform the steps above. You would need to try it.
best,
| Q: AWS Cloudformation: Enable PostGIS Extension in RDS from Cloudformation New to cloudformation. I am spawning PostgreSQL RDS instance using a aws cloudformation script. Is there a way to enable PostGIS (and other extensions) from aws cloudFormation script?
A:
Working with PostGIS PostGIS is an extension to PostgreSQL for storing
and managing spatial information. If you are not familiar with
PostGIS, you can get a good general overview at PostGIS Introduction.
You need to perform a bit of setup before you can use the PostGIS
extension. The following list shows what you need to do; each step is
described in greater detail later in this section.
*
*Connect to the DB instance using the master user name used to create the DB instance.
*Load the PostGIS extensions.
*Transfer ownership of the extensions to the rds_superuser role.
*Transfer ownership of the objects to the rds_superuser role.
*Test the extensions.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
I'm not sure, but maybe you can create a Lambda function together with the RDS instance in your CloudFormation template and then invoke the Lambda to perform the steps above. You would need to try it.
best,
A: I think this can be done with AWSUtility::CloudFormation::CommandRunner.
Basically, we can run bash commands with it (https://aws.amazon.com/blogs/mt/running-bash-commands-in-aws-cloudformation-templates/)
A: I don't think you will be able to achieve it by using CloudFormation. CloudFormation is a provisioning tool, not a configuration management tool.
| stackoverflow | {
"language": "en",
"length": 222,
"provenance": "stackexchange_0000F.jsonl.gz:878547",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584623"
} |
709689ecdff12365018f9a4b9198723be62396c4 | Stackoverflow Stackexchange
Q: Disable a specific USB port in Windows I need to disable a specific USB port programmatically on a Windows PC.
For example, let's say I have 2 removable disks plugged into my computer - one called F:\ and one called H:\ . I want to disable only F:\ programmatically.
I've already tried to use this CMD command to disable the device:
reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "4" /f
and this one to enable the device:
reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "3" /f
But it does not work at all.
Any suggestions?
A: reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "4" /f
To re-enable USB storage, change the value data back to 3.
| Q: Disable a specific USB port in Windows I need to disable a specific USB port programmatically on a Windows PC.
For example, let's say I have 2 removable disks plugged into my computer - one called F:\ and one called H:\ . I want to disable only F:\ programmatically.
I've already tried to use this CMD command to disable the device:
reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "4" /f
and this one to enable the device:
reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "3" /f
But it does not work at all.
Any suggestions?
A: reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "4" /f
To re-enable USB storage, change the value data back to 3.
| stackoverflow | {
"language": "en",
"length": 113,
"provenance": "stackexchange_0000F.jsonl.gz:878565",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584679"
} |
37753e2cc2b5c2d17df6e8e4263e8d681cd8a506 | Stackoverflow Stackexchange
Q: map box turn by turn navigation iOS I am actually trying MapBox navigation turn by turn SDK.
MapBox Navigation SDK
It shows here how to install it with Carthage. I have followed all the steps, but every time it throws an error that
@import MapboxCoreNavigation;
@import MapboxDirections;
@import MapboxNavigation;
This imported module not found.
Is there any repository or pod available for this?
please guide me through this.
here is issue that is open open issue of mapBox navigation
A: After a bit of research, I believed this article on embedded-frameworks has workaround for your issue. Which is to add the Map Box framework as embedded framework too.
| Q: map box turn by turn navigation iOS I am actually trying MapBox navigation turn by turn SDK.
MapBox Navigation SDK
It shows here how to install it with Carthage. I have followed all the steps, but every time it throws an error that
@import MapboxCoreNavigation;
@import MapboxDirections;
@import MapboxNavigation;
This imported module not found.
Is there any repository or pod available for this?
please guide me through this.
here is issue that is open open issue of mapBox navigation
A: After a bit of research, I believed this article on embedded-frameworks has workaround for your issue. Which is to add the Map Box framework as embedded framework too.
A: If it's Objective C as stated above, don't you need
#import "MapboxCoreNavigation.h"
(check the string)? @import is, I believe, only for system framework modules.
| stackoverflow | {
"language": "en",
"length": 130,
"provenance": "stackexchange_0000F.jsonl.gz:878614",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584817"
} |
3a54c04df9dc46e6c5ca8af9782eeac92cc20545 | Stackoverflow Stackexchange
Q: How to determine if file is opened in binary or text mode? Given a file object, how do I determine whether it is opened in bytes mode (read returns bytes) or in text mode (read returns str)? It should work with reading and writing.
In other words:
>>> with open('filename', 'rb') as f:
... is_binary(f)
...
True
>>> with open('filename', 'r') as f:
... is_binary(f)
...
False
(Another question which sounds related is not. That question is about guessing whether a file is binary or not from its contents.)
A: File objects have a .mode attribute:
def is_binary(f):
return 'b' in f.mode
This limits the test to files; in-memory file objects like TextIO and BytesIO do not have that attribute. You could also test for the appropriate abstract base classes:
import io
def is_binary(f):
return isinstance(f, (io.RawIOBase, io.BufferedIOBase))
or the inverse
def is_binary(f):
return not isinstance(f, io.TextIOBase)
| Q: How to determine if file is opened in binary or text mode? Given a file object, how do I determine whether it is opened in bytes mode (read returns bytes) or in text mode (read returns str)? It should work with reading and writing.
In other words:
>>> with open('filename', 'rb') as f:
... is_binary(f)
...
True
>>> with open('filename', 'r') as f:
... is_binary(f)
...
False
(Another question which sounds related is not. That question is about guessing whether a file is binary or not from its contents.)
A: File objects have a .mode attribute:
def is_binary(f):
return 'b' in f.mode
This limits the test to files; in-memory file objects like TextIO and BytesIO do not have that attribute. You could also test for the appropriate abstract base classes:
import io
def is_binary(f):
return isinstance(f, (io.RawIOBase, io.BufferedIOBase))
or the inverse
def is_binary(f):
return not isinstance(f, io.TextIOBase)
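As a quick check of the ABC-based test above, the snippet below (plain Python, using in-memory streams from the io module; not part of the original answer) shows that it also covers file-like objects that lack a .mode attribute:

```python
import io

def is_binary(f):
    # True for raw/buffered binary streams, False for text streams.
    return not isinstance(f, io.TextIOBase)

print(is_binary(io.BytesIO(b'abc')))   # -> True  (BytesIO is a BufferedIOBase)
print(is_binary(io.StringIO('abc')))   # -> False (StringIO is a TextIOBase)

# The .mode-based check only works for streams that actually have a mode:
print(hasattr(io.BytesIO(), 'mode'))   # -> False
```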
A: For streams opened as reading, perhaps the most reliable way to determine its mode is to actually read from it:
def is_binary(f):
return isinstance(f.read(0), bytes)
Though it does have a caveat that it won't work if the stream was already closed (which may raise IOError), it would reliably determine the binary-ness of any custom file-like object that neither extends the appropriate io ABCs nor provides the mode attribute.
If only Python 3 support is required, it is also possible to determine text/binary mode of writable streams given the clear distinction between bytes and text:
def is_binary(f):
read = getattr(f, 'read', None)
if read is not None:
try:
data = read(0)
except (TypeError, ValueError):
pass # ValueError is also a superclass of io.UnsupportedOperation
else:
return isinstance(data, bytes)
try:
# alternatively, replace with empty text literal
# and swap the following True and False.
f.write(b'')
except TypeError:
return False
return True
Unless you test whether a stream is in binary mode very frequently (which is unnecessary, since the binary-ness of a stream should not change over the lifetime of the object), I doubt the performance drawbacks of extensive use of exception catching would be an issue (you could certainly optimize for the likelier path, though).
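The read(0) probe from this answer can likewise be demonstrated on in-memory streams (my illustration, not part of the original answer); note that reading zero bytes neither fails nor consumes any data:

```python
import io

def is_binary(f):
    # Reading zero bytes/characters returns b'' or '' without consuming data.
    return isinstance(f.read(0), bytes)

print(is_binary(io.BytesIO(b'payload')))  # -> True
print(is_binary(io.StringIO('payload')))  # -> False

# The probe does not disturb the stream position:
f = io.BytesIO(b'payload')
is_binary(f)
print(f.read())                           # -> b'payload'
```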
A: There is a library called mimetypes whose guess_type function returns a tuple (type, encoding), where type is None if the type can't be guessed (missing or unknown suffix) or a string of the form 'type/subtype'. Note that it guesses from the file name (a path string), not from the open mode:
import mimetypes
mime_type, encoding = mimetypes.guess_type(filename)
| stackoverflow | {
"language": "en",
"length": 397,
"provenance": "stackexchange_0000F.jsonl.gz:878618",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584829"
} |
cd9db4a105544ee14fbc740318f574ab50499c86 | Stackoverflow Stackexchange
Q: cypher query - check for a relationship if not present check for another one I would like to check if a relationship exists from a node and, in case it's not found, then check for another relationship type from the same node.
Something like: if (a:Type)-[:relation1]-(b) exists, the query returns node b. If it does not exist, it will check for another relationship, like (a:Type)-[:relation2]-(b), and return node b.
I want to know how this can be written as a single cypher query. Any help would be appreciated. Thanks.
A: What about using a UNION ?
MATCH (a:Type)-[:relation1]-(b)
RETURN b
UNION
MATCH (a:Type)-[:relation2]-(b)
RETURN b
Hope it helps,
Tom
| Q: cypher query - check for a relationship if not present check for another one I would like to check if a relationship exist from a node, and in case its not found, then i want to check for another relationship type from the same node.
Something like, (a:Type)-[:relation1]-(b) if relation1 exist query returns node b. If not exist, then will check for another relationship like, (a:Type)-[:relation2]-(b) and returns node b.
I want to know how this can be writen as a single cypher query. Any help would be appreciated. Thanks.
A: What about using a UNION ?
MATCH (a:Type)-[:relation1]-(b)
RETURN b
UNION
MATCH (a:Type)-[:relation2]-(b)
RETURN b
Hope it helps,
Tom
A: You may be able to use COALESCE() to make a backup choice in case the node at the first relation is null.
// after you've already matched to a
OPTIONAL MATCH (a)-[:relation1]-(b)
OPTIONAL MATCH (a)-[:relation2]-(c)
WITH a, COALESCE(b, c) as b // will use node c if b is null
...
| stackoverflow | {
"language": "en",
"length": 163,
"provenance": "stackexchange_0000F.jsonl.gz:878619",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584833"
} |
a024514613e3106b8348200cad6719652c775843 | Stackoverflow Stackexchange
Q: How to bind this to a function for AngularIO's Observable::subscribe? There are a lot of examples using the Observable.subscribe() function from AngularIO. Anyhow, I was only able to see anonymous functions inside as in:
bar().subscribe(data => this.data = data, ...);
If I try to hand in a function of the same class like here:
updateData(myData : DataType[]) {
this.data = data;
}
...
bar().subscribe(this.updateData, ...);
Then the this object in line 2 doesn't refer to the current object anymore. This is probably some JavaScript logic that I don't understand. I know that you can bind an object to a function; is this what I have to do? Is it best practice? How would one usually resolve this issue? (I'd like to avoid having a big anonymous function inside the subscribe().)
A: You can wrap it inside an arrow function which will capture the correct this:
bar().subscribe((myData) => this.updateData(myData), ...);
Or use Function.bind which will also bind the correct context:
bar().subscribe(this.updateData.bind(this), ...);
But be aware that Function.bind returns any which will make you lose type checking in TypeScript. See https://github.com/Microsoft/TypeScript/issues/212
| Q: How to bind this to a function for AngularIO's Observable::subscribe? There are a lot of examples using the Observable.subscribe() function from AngularIO. Anyhow, I was only able to see anonymous functions inside as in:
bar().subscribe(data => this.data = data, ...);
If I try to hand in a function of the same class like here:
updateData(myData : DataType[]) {
this.data = data;
}
...
bar().subscribe(this.updateData, ...);
Then the this object in line 2 doesn't refer to the current object anymore. This is probably some JavaScript logic that I don't understand. I know that you can bind an object to a function; is this what I have to do? Is it best practice? How would one usually resolve this issue? (I'd like to avoid having a big anonymous function inside the subscribe().)
A: You can wrap it inside an arrow function which will capture the correct this:
bar().subscribe((myData) => this.updateData(myData), ...);
Or use Function.bind which will also bind the correct context:
bar().subscribe(this.updateData.bind(this), ...);
But be aware that Function.bind returns any which will make you lose type checking in TypeScript. See https://github.com/Microsoft/TypeScript/issues/212
A: This is related to the fat arrow behavior.
You can find more here (in the "this and fat arrow" topic, about halfway down the page)
| stackoverflow | {
"language": "en",
"length": 205,
"provenance": "stackexchange_0000F.jsonl.gz:878627",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584849"
} |
1e3c4de9b4d9d7c9d17c2ae2273a0a83af02cd06 | Stackoverflow Stackexchange
Q: How to add a comment in sqlite3 init file .sqliterc Is there a way to add a comment in the init file? The man page for sqlite3 doesn't mention any and I don't see any meta command that would suggest itself for such a purpose.
A: You can put any of sqlite3's dot commands or any SQL commands into the init file.
So you can just use SQL comments:
/* Hello, world! */
-- line comment
| Q: How to add a comment in sqlite3 init file .sqliterc Is there a way to add a comment in the init file? The man page for sqlite3 doesn't mention any and I don't see any meta command that would suggest itself for such a purpose.
A: You can put any of sqlite3's dot commands or any SQL commands into the init file.
So you can just use SQL comments:
/* Hello, world! */
-- line comment
A: -- single line comment
/* multiple
* line
* comment */
Feel free to add comments to your .sqliterc file, because it is a normal SQL query file.
| stackoverflow | {
"language": "en",
"length": 106,
"provenance": "stackexchange_0000F.jsonl.gz:878645",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44584922"
} |
d2d307174e7b158ed90f7def1033134f2815ee37 | Stackoverflow Stackexchange
Q: convert sketch line height into ios line 'height multiple' property
My designer sent me a Sketch file which says 'Line height: 22' for a label. How can I achieve this in the Xcode Interface Builder?
Is there any way to define this line height using code or UI builder.
A: I've found the following formula to work well for me.
It converts from Sketch line height to iOS line spacing:
lineSpacing = sketchLineHeight - sketchFontSize - (font.lineHeight - font.pointSize)
In code, for your case this would be:
let font = UIFont.systemFont(ofSize: 18) // or whatever font you use
textLabel.font = font
let attributedString = NSMutableAttributedString(string: "your text")
let paragraphStyle = NSMutableParagraphStyle()
paragraphStyle.lineSpacing = 22 - 18 - (font.lineHeight - font.pointSize)
attributedString.addAttribute(.paragraphStyle, value: paragraphStyle, range: NSMakeRange(0, attributedString.length))
textLabel.attributedText = attributedString
| Q: convert sketch line height into ios line 'height multiple' property
My designer sent me a Sketch file which says 'Line height: 22' for a label. How can I achieve this in the Xcode Interface Builder?
Is there any way to define this line height using code or UI builder.
A: I've found the following formula to work well for me.
It converts from Sketch line height to iOS line spacing:
lineSpacing = sketchLineHeight - sketchFontSize - (font.lineHeight - font.pointSize)
In code, for your case this would be:
let font = UIFont.systemFont(ofSize: 18) // or whatever font you use
textLabel.font = font
let attributedString = NSMutableAttributedString(string: "your text")
let paragraphStyle = NSMutableParagraphStyle()
paragraphStyle.lineSpacing = 22 - 18 - (font.lineHeight - font.pointSize)
attributedString.addAttribute(.paragraphStyle, value: paragraphStyle, range: NSMakeRange(0, attributedString.length))
textLabel.attributedText = attributedString
A: Line height comes from CSS, so your designer probably has a web design background. On mobile platforms, we do not specify line height, but line spacing.
In general NSMutableParagraphStyle offers capabilities to modify multiline labels for iOS.
NSMutableParagraphStyle has a property called maximumLineHeight, but this will only set the maximum line height to a certain value, if the containment of the label would exceed a certain value.
To set this up in IB, you need to add the label, and change the Text property to Attributed. Than click on paragraph style icon, and set the line spacing for the label. Looking at the design, it is around 2 points of line spacing, what you need. You can either ask your designer to provide you with line spacing attribute or try to find the right line spacing value by randomly trying out different values.
A: @bbjay did put me on the right track.
If you want to obtain the exact result of Sketch, the formula is:
paragraphStyle.lineSpacing = sketchLineHeight - font.lineHeight
Provided that the font was given sketchFontSize
A: In the storyboard, use the Attributed style of UILabel. Below is an example with 2.5 line height
| stackoverflow | {
"language": "en",
"length": 324,
"provenance": "stackexchange_0000F.jsonl.gz:878672",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585026"
} |
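The conversion formula from the answers above can be sketched in plain Python (an illustrative analog only; the font metrics below are hypothetical stand-ins for UIFont's lineHeight and pointSize values):

```python
def sketch_to_ios_line_spacing(sketch_line_height, sketch_font_size,
                               font_line_height, font_point_size):
    """Convert a Sketch 'line height' value to an iOS lineSpacing value.

    Mirrors the formula from the thread:
    lineSpacing = sketchLineHeight - sketchFontSize
                  - (font.lineHeight - font.pointSize)
    """
    return (sketch_line_height - sketch_font_size
            - (font_line_height - font_point_size))

# Hypothetical metrics: an 18pt font whose intrinsic lineHeight is 21.48pt.
spacing = sketch_to_ios_line_spacing(22, 18, 21.48, 18)
print(round(spacing, 2))  # 0.52
```

With a font whose lineHeight equals its pointSize, the result degenerates to `sketchLineHeight - sketchFontSize`, which matches the simpler mental model of "line height minus font size".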
3ec081faf286af4f42f5d6ac6ab5991d21c8ae9a | Stackoverflow Stackexchange
Q: Take a column of a matrix and make it a row in kdb Consider the matrix:
1 2 3
4 5 6
7 8 9
I'd like to take the middle column, assign it to a variable, and replace the middle row with it, giving me
1 2 3
2 5 8
7 8 9
I'm extracting the middle column using
a:m[;enlist 1]
which returns
2
5
8
How do I replace the middle row with a? Is a flip necessary?
Thanks.
A: If you want to update the matrix in place you can use
q)show m:(3;3)#1+til 10
1 2 3
4 5 6
7 8 9
q)a:m[;1]
q)m[1]:a
q)show m
1 2 3
2 5 8
7 8 9
q)
cutting out "a" all you need is:
m[1]:m[;1]
| Q: Take a column of a matrix and make it a row in kdb Consider the matrix:
1 2 3
4 5 6
7 8 9
I'd like to take the middle column, assign it to a variable, and replace the middle row with it, giving me
1 2 3
2 5 8
7 8 9
I'm extracting the middle column using
a:m[;enlist 1]
which returns
2
5
8
How do I replace the middle row with a? Is a flip necessary?
Thanks.
A: If you want to update the matrix in place you can use
q)show m:(3;3)#1+til 10
1 2 3
4 5 6
7 8 9
q)a:m[;1]
q)m[1]:a
q)show m
1 2 3
2 5 8
7 8 9
q)
cutting out "a" all you need is:
m[1]:m[;1]
A: You can use dot amend -
q)show m:(3;3)#1+til 10
1 2 3
4 5 6
7 8 9
q)show a:m[;1]
2 5 8
q).[m;(1;::);:;a]
1 2 3
2 5 8
7 8 9
Can see documentation here:
*
*http://code.kx.com/wiki/Reference/DotSymbol
*http://code.kx.com/wiki/JB:QforMortals2/functions#Functional_Forms_of_Amend
A: Making it slightly more generic where you can define the operation, row, and column
q)m:3 cut 1+til 9
1 2 3
4 5 6
7 8 9
Assigning the middle column to middle row :
q){[ m;o;i1;i2] .[m;enlist i1;o; flip[m] i2 ] }[m;:;1;1]
1 2 3
2 5 8
7 8 9
Adding the middle column to middle row by passing o as +
q){[ m;o;i1;i2] .[m;enlist i1;o; flip[m] i2 ] }[m;+;1;1]
1 2 3
6 10 14
7 8 9
| stackoverflow | {
"language": "en",
"length": 252,
"provenance": "stackexchange_0000F.jsonl.gz:878676",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585037"
} |
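The in-place amend shown in the answers above has a direct analog in any language with mutable arrays; a Python sketch of "replace the middle row with the middle column":

```python
m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# Extract the middle column (the equivalent of q's m[;1]) ...
a = [row[1] for row in m]

# ... and assign it to the middle row (the equivalent of m[1]:a).
m[1] = a

print(m)  # [[1, 2, 3], [2, 5, 8], [7, 8, 9]]
```

No flip/transpose is necessary for the assignment itself, only for reading the column, just as in the q answers.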
f363a861f07459e6e74db95d3c5a4cec1a5c17c9 | Stackoverflow Stackexchange
Q: lodash map skip iteration I can't seem to find a way to skip an iteration using the lodash _.map; consider this array: checkList = ['first','second','third','fourth','fifth','sixth']
I want to map through it with a step of 3, for example. I tried this code but it didn't work:
_.map(checkList, function(value, key){
console.log('checkList', firstValue);
console.log('checkList1', secondValue);
console.log('checkList2', thirdValue);
}, 3)
Expected output: checkList first, checkList second, checkList third, checkList fourth, checkList fifth, checkList sixth
but only with two iterations
something I can achieve using a for loop like this:
for(let i = 0; i< checkList.length; i+=3){
console.log('value', checkList[i]);
console.log('value1', checkList[i+1]);
console.log('value2', checkList[i+2]);
}
thank you
A: You're talking about two different operations - one to take each third item and one to transform them. .map transforms all items in the input and I'm not sure why you assume passing a 3 into it would do what you're describing.
To do what you're describing, filter the items first, then map them:
var checkList = ['first', 'second', 'third', 'fourth', 'fifth', 'sixth'];
var thirdItems = _.filter(checkList, function(v, i) { return i % 3 === 1; });
var result = _.map(thirdItems, function(value, key) { return 'checkList ' + value; });
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.4/lodash.min.js"></script>
Q: lodash map skip iteration I can't seem to find a way to skip an iteration using the lodash _.map; consider this array: checkList = ['first','second','third','fourth','fifth','sixth']
I want to map through it with a step of 3, for example. I tried this code but it didn't work:
_.map(checkList, function(value, key){
console.log('checkList', firstValue);
console.log('checkList1', secondValue);
console.log('checkList2', thirdValue);
}, 3)
Expected output: checkList first, checkList second, checkList third, checkList fourth, checkList fifth, checkList sixth
but only with two iterations
something I can achieve using a for loop like this:
for(let i = 0; i< checkList.length; i+=3){
console.log('value', checkList[i]);
console.log('value1', checkList[i+1]);
console.log('value2', checkList[i+2]);
}
thank you
A: You're talking about two different operations - one to take each third item and one to transform them. .map transforms all items in the input and I'm not sure why you assume passing a 3 into it would do what you're describing.
To do what you're describing, filter the items first, then map them:
var checkList = ['first', 'second', 'third', 'fourth', 'fifth', 'sixth'];
var thirdItems = _.filter(checkList, function(v, i) { return i % 3 === 1; });
var result = _.map(thirdItems, function(value, key) { return 'checkList ' + value; });
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.4/lodash.min.js"></script>
A: Map, filter, forEach, reduce, etc... iterate all items in the array, and you can't skip items. For this tasks, a for loop is quite fitting.
If you want to achieve that with lodash, use _.chunk() to split the array into sub arrays with the required size, and then iterate using _.forEach() (or map, filter, etc...).
var checklist = ['first', 'second', 'third', 'fourth', 'fifth', 'sixth'];
_(checklist)
.chunk(3)
.forEach(function(subArr) {
console.log('checkList', subArr[0]);
console.log('checkList1', subArr[1]);
console.log('checkList2', subArr[2]);
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.4/lodash.min.js"></script>
| stackoverflow | {
"language": "en",
"length": 274,
"provenance": "stackexchange_0000F.jsonl.gz:878687",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585062"
} |
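The `_.chunk(3)` approach from the second answer, like the plain `for` loop with `i += 3` from the question, can be sketched in Python:

```python
check_list = ['first', 'second', 'third', 'fourth', 'fifth', 'sixth']

# Equivalent of lodash's _.chunk(checkList, 3): split into sub-arrays of 3.
chunks = [check_list[i:i + 3] for i in range(0, len(check_list), 3)]
print(chunks)  # [['first', 'second', 'third'], ['fourth', 'fifth', 'sixth']]

# Iterating the chunks mirrors the forEach over sub-arrays.
for sub in chunks:
    first, second, third = sub  # one "step of 3" per iteration
```

The same idea applies to any step size: the list comprehension's `range` step controls how many items each iteration sees.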
1f55e3258a03d3019acec5fa22cdbcf12dc2ad72 | Stackoverflow Stackexchange
Q: How do I set min and max value for Input with type='datetime-local'?
SOLVED
Hi, I have an input field which should send a value to MySQL, and I want to set max and min values to restrict the date range the user can select from.
This input is a datetime-local field, because I want the user to select the time as well.
I know how to set it for date, but can't get it to work. Am I missing something?
Here is my fiddle
<label>This is datetime-local</label>
<input type='datetime-local' min='2017-06-14 00:00:00' max='2017-06-16 00:00:00'>
<label>This is date</label>
<input type='date' name="date" min="2017-06-14" max="2017-06-16">
A: In spite of what is declared here I made it work adding seconds too, like yyyy-MM-ddThh:mm:ss
<input type="datetime-local" id="start-date" min="2021-06-07T14:47:57" />
I needed min input to be "today" so with JS:
let dateInput = document.getElementById("start-date");
dateInput.min = new Date().toISOString().slice(0,new Date().toISOString().lastIndexOf(":"));
Then calendar won't let you pass that min date. You can apply same code for max attribute.
Q: How do I set min and max value for Input with type='datetime-local'?
SOLVED
Hi, I have an input field which should send a value to MySQL, and I want to set max and min values to restrict the date range the user can select from.
This input is a datetime-local field, because I want the user to select the time as well.
I know how to set it for date, but can't get it to work. Am I missing something?
Here is my fiddle
<label>This is datetime-local</label>
<input type='datetime-local' min='2017-06-14 00:00:00' max='2017-06-16 00:00:00'>
<label>This is date</label>
<input type='date' name="date" min="2017-06-14" max="2017-06-16">
A: In spite of what is declared here I made it work adding seconds too, like yyyy-MM-ddThh:mm:ss
<input type="datetime-local" id="start-date" min="2021-06-07T14:47:57" />
I needed min input to be "today" so with JS:
let dateInput = document.getElementById("start-date");
dateInput.min = new Date().toISOString().slice(0,new Date().toISOString().lastIndexOf(":"));
Then calendar won't let you pass that min date. You can apply same code for max attribute.
A: Example with min and max values
<input
type="datetime-local"
className="form-control mt-2"
name="start_date"
min="2011-02-20T20:20"
max="2031-02-20T20:20"
/>
A: I've had a hard time finding an answer about this using PHP. But I just discovered one and I want to share the code that I've come up with. I know this question is old. Note that datetime-local expects a 24-hour clock, so use H rather than h:
<input type="datetime-local" min="<?=date('Y-m-d\TH:i')?>">
| stackoverflow | {
"language": "en",
"length": 218,
"provenance": "stackexchange_0000F.jsonl.gz:878709",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585148"
} |
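The value format that `datetime-local` expects for min/max (`yyyy-MM-ddTHH:mm`, 24-hour clock) can be produced server-side in any language; a Python sketch of the "min is today" idea from the answers above:

```python
from datetime import datetime

def datetime_local_value(dt):
    """Format a datetime as a value usable in the min/max attributes of
    <input type="datetime-local">: yyyy-MM-ddTHH:mm, 24-hour clock."""
    return dt.strftime('%Y-%m-%dT%H:%M')

print(datetime_local_value(datetime(2017, 6, 14, 0, 0)))  # 2017-06-14T00:00

# "min is now": pass datetime.now() instead of a fixed date.
min_attr = datetime_local_value(datetime.now())
```

The string can then be interpolated into the `min` attribute exactly like the PHP and JavaScript snippets in the thread.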
e87c1493fee5d7c19a9b6e958f712b442e5da38b | Stackoverflow Stackexchange
Q: differences between JPA @EntityListeners , @PrePersist and Spring @RepositoryEventHandler, @HandleBeforeSave I did some research but was not able to find what is a differences between
JPA @EntityListeners , @PrePersist
and
Spring @RepositoryEventHandler, @HandleBeforeSave
Thanks.
A: @HandleBeforeSave only works when an Entity is saved through a Spring Data repository. @PrePersist will be triggered if you use EntityManager::persist() and also if you use JPARepository::save(), since it calls persist.
The nice thing with @RepositoryEventHandler + @HandleBeforeSave is that your @HandleBeforeSave method is defined inside a Spring bean, so you can interact with other Spring beans. @EntityListeners and @PrePersist can only access the state of the current Entity when operating in a J2SE environment. In J2EE you can @Inject beans into @EntityListeners because the JPA subsystem and CDI are both managed by the same container.
| Q: differences between JPA @EntityListeners , @PrePersist and Spring @RepositoryEventHandler, @HandleBeforeSave I did some research but was not able to find what is a differences between
JPA @EntityListeners , @PrePersist
and
Spring @RepositoryEventHandler, @HandleBeforeSave
Thanks.
A: @HandleBeforeSave only works when an Entity is saved through a Spring Data repository. @PrePersist will be triggered if you use EntityManager::persist() and also if you use JPARepository::save(), since it calls persist.
The nice thing with @RepositoryEventHandler + @HandleBeforeSave is that your @HandleBeforeSave method is defined inside a Spring bean, so you can interact with other Spring beans. @EntityListeners and @PrePersist can only access the state of the current Entity when operating in a J2SE environment. In J2EE you can @Inject beans into @EntityListeners because the JPA subsystem and CDI are both managed by the same container.
A: Actually after more searching I found this answer
stackoverflow.com/a/31155291/1780517
It seems that there is also one VERY BIG difference: @HandleBeforeSave is called on the controller POST method and not on repository save.
So @RepositoryEventHandler should be used only if you want to handle events from the controller (PUT, POST, GET with @HandleBeforeSave, @HandleBeforeCreate, ...) and @EntityListeners should be used for repository methods save, delete, update with @PreUpdate, @PreRemove and so on.
| stackoverflow | {
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:878714",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585161"
} |
738cbd6df2d65e4570a3cf928a79f1fa52176472 | Stackoverflow Stackexchange
Q: Why expect component to be truthy instead of defined This is a general question.
When scaffolding a component with the Angular-cli, it creates the first test itself.
It looks something like this:
it('should create', () => {
expect(component).toBeTruthy();
});
How Come it checks if it's Truthy and not Defined? And what is the difference?
Thanks in advance :)
A: The truthy source code:
getJasmineRequireObj().toBeTruthy = function() {
function toBeTruthy() {
return {
compare: function(actual) {
return {
pass: !!actual
};
}
};
}
return toBeTruthy;
};
The defined source code:
getJasmineRequireObj().toBeDefined = function() {
function toBeDefined() {
return {
compare: function(actual) {
return {
pass: (void 0 !== actual)
};
}
};
}
https://github.com/jasmine/jasmine/blob/4097718b6682f643833f5435b63e4f590f22919f/lib/jasmine-core/jasmine.js#L2908
So it's a comparison between !!actual and void 0 !== actual.
void 0 is the same as undefined AFAIK, and although they are practically the same, toBeDefined is a safer way to check for defined values in some edge cases.
For example:
expect(0).toBeTruthy() will evaluate to false/fail
expect(0).toBeDefined() will evaluate to true/success
There are more of these cases as @trichetriche mentioned in the comments.
However for your case, it won't make a difference.
Q: Why expect component to be truthy instead of defined This is a general question.
When scaffolding a component with the Angular-cli, it creates the first test itself.
It looks something like this:
it('should create', () => {
expect(component).toBeTruthy();
});
How Come it checks if it's Truthy and not Defined? And what is the difference?
Thanks in advance :)
A: The truthy source code:
getJasmineRequireObj().toBeTruthy = function() {
function toBeTruthy() {
return {
compare: function(actual) {
return {
pass: !!actual
};
}
};
}
return toBeTruthy;
};
The defined source code:
getJasmineRequireObj().toBeDefined = function() {
function toBeDefined() {
return {
compare: function(actual) {
return {
pass: (void 0 !== actual)
};
}
};
}
https://github.com/jasmine/jasmine/blob/4097718b6682f643833f5435b63e4f590f22919f/lib/jasmine-core/jasmine.js#L2908
So it's a comparison between !!actual and void 0 !== actual.
void 0 is the same as undefined AFAIK, and although they are practically the same, toBeDefined is a safer way to check for defined values in some edge cases.
For example:
expect(0).toBeTruthy() will evaluate to false/fail
expect(0).toBeDefined() will evaluate to true/success
There are more of these cases as @trichetriche mentioned in the comments.
However for your case, it won't make a difference.
| stackoverflow | {
"language": "en",
"length": 190,
"provenance": "stackexchange_0000F.jsonl.gz:878723",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585192"
} |
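The difference between Jasmine's `!!actual` (toBeTruthy) and `void 0 !== actual` (toBeDefined) has a close analog in Python, where `0` is falsy but still "defined"; this is only an analogy, using `None` as a stand-in for JavaScript's `undefined`:

```python
def to_be_truthy(actual):
    # Mirrors Jasmine's pass: !!actual
    return bool(actual)

def to_be_defined(actual, _undefined=None):
    # Mirrors Jasmine's pass: void 0 !== actual, with None standing in
    # for JavaScript's undefined.
    return actual is not _undefined

print(to_be_truthy(0))   # False -> expect(0).toBeTruthy() fails
print(to_be_defined(0))  # True  -> expect(0).toBeDefined() passes
```

For a component instance (a non-empty object), both checks pass, which is why the scaffolded `expect(component).toBeTruthy()` works in practice.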
30859785b14e877d67dd354bdfaa4eb0765a3221 | Stackoverflow Stackexchange
Q: Importing CSV in pandas with $$ as delimiter I am trying to import a csv in pandas with $$ as delimiter, I would expect the following command to work:
pd.read_csv('data.csv', delimiter="$$")
This however returns the following error:
Falling back to the 'python' engine because the 'c' engine does not
support regex separators (separators > 1 char and different from '\s+'
are interpreted as regex), but this causes 'error_bad_lines' to be
ignored as it is not supported by the 'python' engine.
Is this operation even possible in Pandas?
A: You can escape $ by \:
df = pd.read_csv('data.csv', sep="\$\$", engine='python')
Sample:
import pandas as pd
from pandas.compat import StringIO
temp=u"""a$$b
a$$1
s$$2
f$$3"""
#after testing replace 'StringIO(temp)' to 'filename.csv'
df = pd.read_csv(StringIO(temp), sep="\$\$", engine='python')
print (df)
a b
0 a 1
1 s 2
2 f 3
| Q: Importing CSV in pandas with $$ as delimiter I am trying to import a csv in pandas with $$ as delimiter, I would expect the following command to work:
pd.read_csv('data.csv', delimiter="$$")
This however returns the following error:
Falling back to the 'python' engine because the 'c' engine does not
support regex separators (separators > 1 char and different from '\s+'
are interpreted as regex), but this causes 'error_bad_lines' to be
ignored as it is not supported by the 'python' engine.
Is this operation even possible in Pandas?
A: You can escape $ by \:
df = pd.read_csv('data.csv', sep="\$\$", engine='python')
Sample:
import pandas as pd
from pandas.compat import StringIO
temp=u"""a$$b
a$$1
s$$2
f$$3"""
#after testing replace 'StringIO(temp)' to 'filename.csv'
df = pd.read_csv(StringIO(temp), sep="\$\$", engine='python')
print (df)
a b
0 a 1
1 s 2
2 f 3
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:878725",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585198"
} |
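The reason `$$` must be escaped is that pandas treats multi-character separators as regular expressions (as the error message in the question says), and `$` is a regex metacharacter (the end-of-string anchor). The escaping itself can be checked with Python's stdlib `re` module:

```python
import re

line = 'a$$1'

# '$' is a regex metacharacter, so it must be escaped to match a
# literal dollar sign; r'\$\$' matches the two-dollar delimiter:
print(re.split(r'\$\$', line))  # ['a', '1']

# re.escape builds the escaped pattern for you:
print(re.escape('$$'))          # \$\$
print(re.split(re.escape('$$'), 'x$$y$$z'))  # ['x', 'y', 'z']
```

The same escaped pattern is what gets passed as `sep="\$\$"` in the pandas answer above.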
fa21ff0f787bc12f6de3e96026dc3f19e761dac6 | Stackoverflow Stackexchange
Q: Delphi: Set ImageBase bigger than 32-bit (for 64-bit Windows application) I have been playing with the {$IMAGEBASE} directive in Delphi but I can see that I can only put a value lower than $FFFFFFFF (32-bit).
I'm compiling as x64 and I need to set an image base bigger than 32-bit but Delphi ignores the higher 32-bit DWORD in my 64-bit ImageBase.
Has anyone managed to set a value higher than $FFFFFFFF as ImageBase for Delphi?
I need it because I need to test my application in "high" ImageBase (due to some hook tests, etc)
Thanks!
A: The Delphi linker does not support large image base, although there are new PE optional headers that allow large image base values to be specified.
So I think that until Embarcadero introduce any such functionality, you would need to use a third party tool to rebase the executable file after it has been built. For instance EDITBIN with the /REBASE option from the MS toolchain.
I took a simple 64 bit VCL program, built with XE7, and rebased it like this:
editbin /rebase:base=0xffffff0000 Project1.exe
I confirmed using Process Hacker that the image base was indeed as specified.
| Q: Delphi: Set ImageBase bigger than 32-bit (for 64-bit Windows application) I have been playing with the {$IMAGEBASE} directive in Delphi but I can see that I can only put a value lower than $FFFFFFFF (32-bit).
I'm compiling as x64 and I need to set an image base bigger than 32-bit but Delphi ignores the higher 32-bit DWORD in my 64-bit ImageBase.
Has anyone managed to set a value higher than $FFFFFFFF as ImageBase for Delphi?
I need it because I need to test my application in "high" ImageBase (due to some hook tests, etc)
Thanks!
A: The Delphi linker does not support large image base, although there are new PE optional headers that allow large image base values to be specified.
So I think that until Embarcadero introduce any such functionality, you would need to use a third party tool to rebase the executable file after it has been built. For instance EDITBIN with the /REBASE option from the MS toolchain.
I took a simple 64 bit VCL program, built with XE7, and rebased it like this:
editbin /rebase:base=0xffffff0000 Project1.exe
I confirmed using Process Hacker that the image base was indeed as specified.
| stackoverflow | {
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:878737",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585237"
} |
21bd6f4d9e6e28ebf1d2665a00432c0b3e6ffaa2 | Stackoverflow Stackexchange
Q: What encryption mechanism is used in CouchDB? Does anyone know about what type of encryption is used to store data securely on CouchDB? How one can change/control this encryption mechanism for data security on CouchDB?
A: CouchDB does not encrypt data at rest (except passwords, by way of a PBKDF2 one-way hash).
It does allow the encryption of data in transit, by use of HTTPS, but for at-rest encryption, your options are:
*
*Device/filesystem-level encryption. This is handled by your OS, and is completely invisible to CouchDB (and all other apps).
*Application-level encryption. You can have your application encrypt data before marshaling it to JSON for storage in CouchDB. The crypto-pouch plugin is one example of this, which works for PouchDB (Note: I've never used it, so can't vouch for its usefulness).
| Q: What encryption mechanism is used in CouchDB? Does anyone know about what type of encryption is used to store data securely on CouchDB? How one can change/control this encryption mechanism for data security on CouchDB?
A: CouchDB does not encrypt data at rest (except passwords, by way of a PBKDF2 one-way hash).
It does allow the encryption of data in transit, by use of HTTPS, but for at-rest encryption, your options are:
*
*Device/filesystem-level encryption. This is handled by your OS, and is completely invisible to CouchDB (and all other apps).
*Application-level encryption. You can have your application encrypt data before marshaling it to JSON for storage in CouchDB. The crypto-pouch plugin is one example of this, which works for PouchDB (Note: I've never used it, so can't vouch for its usefulness).
| stackoverflow | {
"language": "en",
"length": 133,
"provenance": "stackexchange_0000F.jsonl.gz:878758",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585302"
} |
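The answer above notes that CouchDB stores passwords as PBKDF2 one-way hashes. The same primitive is available in Python's standard library; the salt and iteration count below are hypothetical illustrations, not CouchDB's actual configuration:

```python
import hashlib

# Derive a PBKDF2-HMAC-SHA1 hash of a password (illustrative parameters).
password = b'secret'
salt = b'0123456789abcdef'
iterations = 10_000

derived = hashlib.pbkdf2_hmac('sha1', password, salt, iterations)
print(derived.hex()[:8])  # hex digest prefix, deterministic for same inputs

# Same inputs always give the same hash, and there is no decrypt step,
# which is what "one-way" means in the answer above.
assert derived == hashlib.pbkdf2_hmac('sha1', password, salt, iterations)
```

This is password hashing, not at-rest encryption; for encrypting document contents you would still need filesystem-level or application-level encryption as the answer describes.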
d1691a8695e28d55fe9d630e978b5715ea951685 | Stackoverflow Stackexchange
Q: Angular 2(4) component with ControlValueAccessor testing I would like to test a component which implements the ControlValueAccessor interface to allow the use of [(ngModel)] in my custom component, but the issue is that the usual inputs come through correctly while ngModel is undefined. Here is a code example:
@Component({
template: `
<custom-component
[usualInput]="usualInput"
[(ngModel)]="modelValue"
></custom-component>`
})
class TestHostComponent {
usualInput: number = 1;
modelValue: number = 2;
}
describe('Component test', () => {
let component: TestHostComponent;
let fixture: ComponentFixture<TestHostComponent>;
let de: DebugElement;
let customComponent: DebugElement;
beforeEach(async(() => {
TestBed.configureTestingModule({
declarations: [
CustomComponent,
],
schemas: [NO_ERRORS_SCHEMA],
}).compileComponents();
}));
});
So, I expect the usualInput Input() value in my customComponent to equal 1 (it does), and the ngModel value to equal 2, but ngModel = undefined. After debugging I know that the ControlValueAccessor writeValue method isn't called in the test environment (but it works correctly in the browser). So how can I fix it?
A: Inside your ControlValueAccessor component you do not have access to ngModel unless you injected it and did some tricks to avoid circular dependency.
ControlValueAccessor has writeValue method which receives values from control when it is updated — if you need, you can store this value in your component and then test it.
Q: Angular 2(4) component with ControlValueAccessor testing I would like to test a component which implements the ControlValueAccessor interface to allow the use of [(ngModel)] in my custom component, but the issue is that the usual inputs come through correctly while ngModel is undefined. Here is a code example:
@Component({
template: `
<custom-component
[usualInput]="usualInput"
[(ngModel)]="modelValue"
></custom-component>`
})
class TestHostComponent {
usualInput: number = 1;
modelValue: number = 2;
}
describe('Component test', () => {
let component: TestHostComponent;
let fixture: ComponentFixture<TestHostComponent>;
let de: DebugElement;
let customComponent: DebugElement;
beforeEach(async(() => {
TestBed.configureTestingModule({
declarations: [
CustomComponent,
],
schemas: [NO_ERRORS_SCHEMA],
}).compileComponents();
}));
});
So, I expect the usualInput Input() value in my customComponent to equal 1 (it does), and the ngModel value to equal 2, but ngModel = undefined. After debugging I know that the ControlValueAccessor writeValue method isn't called in the test environment (but it works correctly in the browser). So how can I fix it?
A: Inside your ControlValueAccessor component you do not have access to ngModel unless you injected it and did some tricks to avoid circular dependency.
ControlValueAccessor has writeValue method which receives values from control when it is updated — if you need, you can store this value in your component and then test it.
A: You have to wrap your test with async, and wait for fixture.whenStable
it('should get ngModel', async(() => {
let customComponent = debugEl.query(By.directive(CustomComponent));
fixture.whenStable().then(() => {
fixture.detectChanges();
expect(customComponent.componentInstance.value).toEqual(2);
});
}));
| stackoverflow | {
"language": "en",
"length": 227,
"provenance": "stackexchange_0000F.jsonl.gz:878771",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585351"
} |
581056c76c1938665147143092390023ef22693c | Stackoverflow Stackexchange
Q: jenkins how can I trigger builds when there is a change on gitlab I am trying to get Jenkins to build my project when there is a change in my GitLab repository. I checked the option
Build when a change is pushed to GitLab. GitLab CI Service URL: http://172.172.10.21:8090/project/myproject/myproject_frontend
and the suboptions
Push Events
Merge Request Events
Rebuild open Merge Request
Comments
Comment for triggering a build
Also I checked
Check repository (SCM) ((without schedules))
It does not work like this.
what is missing??
thanks in advance
A: From the look of it, you are already using Gitlab Plugin for Jenkins, you just need to :
*
*go to your Gitlab repository, go to the "Integrations" section of the repo settings :
*
*Then, set the webhook URL as : https://<jenkins-host>:<port>/project/<your job> :
This way Gitlab will perform a POST request on your Jenkins Job each time the selected trigger will occur which will trigger your job task if you have configured the specified event to trigger the build.
Q: jenkins how can I trigger builds when there is a change on gitlab I am trying to get Jenkins to build my project when there is a change in my GitLab repository. I checked the option
Build when a change is pushed to GitLab. GitLab CI Service URL: http://172.172.10.21:8090/project/myproject/myproject_frontend
and the suboptions
Push Events
Merge Request Events
Rebuild open Merge Request
Comments
Comment for triggering a build
Also I checked
Check repository (SCM) ((without schedules))
It does not work like this.
what is missing??
thanks in advance
A: From the look of it, you are already using Gitlab Plugin for Jenkins, you just need to :
*
*go to your Gitlab repository, go to the "Integrations" section of the repo settings :
*
*Then, set the webhook URL as : https://<jenkins-host>:<port>/project/<your job> :
This way Gitlab will perform a POST request on your Jenkins Job each time the selected trigger will occur which will trigger your job task if you have configured the specified event to trigger the build.
| stackoverflow | {
"language": "en",
"length": 168,
"provenance": "stackexchange_0000F.jsonl.gz:878774",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585361"
} |
f3b4626c4d9a18dfceae05c7866811e2aeba07f1 | Stackoverflow Stackexchange
Q: Function visible even if not included in C I have defined function A in main.c file. I have created three libraries which use the function A without importing anything. The code works but I have only one warning: implicit declaration of function 'A' [-Wimplicit-function-declaration].
How is it possible that the function A works in a function B defined in a separate file without importing it?
How is it possible that I have only one warning when function A is called by other functions except function B?
A: Global non-static symbols (variables and functions) have by default external linkage meaning they can be accessed from other translation units.
| Q: Function visible even if not included in C I have defined function A in main.c file. I have created three libraries which use the function A without importing anything. The code works but I have only one warning: implicit declaration of function 'A' [-Wimplicit-function-declaration].
How is it possible that the function A works in a function B defined in a separate file without importing it?
How is it possible that I have only one warning when function A is called by other functions except function B?
A: Global non-static symbols (variables and functions) have by default external linkage meaning they can be accessed from other translation units.
A: In C, we don't "import" functions. We compile individual translation units to object files and then link all of them together to form the binary / executable.
In the linking phase, linker checks the object files for required symbols and references and links them together to produce the single executable (thus making the function call possible at runtime).
In your case, the compiler does not "see" the function declaration at the time of the call (so, it does not have any idea of the function signature, which can be a potential pitfall, that is why you have the "warning"), but in the linking phase, linker is able to find the reference to the function (assuming both the translation units are being linked together to form the binary) and creates the binary.
FWIW, implicit function declarations are non-standard as per the latest C standards. You must forward declare the function (provide a prototype) before you can actually use the function. Quoting C11, Foreword,
Major changes in the second edition included:
[....]
— remove implicit function declaration
A: Compiling:
*
*During compilation each file is compiled separately and at last a .o
file is generated from a .c file.
*For each function called in the file compiler expect's the function definition or at least the function's declaration.
*In case of missing the definition or declaration you get a warning from the compiler like implicit declaration of function 'A'
[-Wimplicit-function-declaration].
*In your case as the function definition is in another file you must at least include the function declaration in your include file.
Linking:
*
*Linking refers to the creation of a single executable file from
multiple object files. In this step, it is common that the linker
will complain about undefined functions.
*As the function A in main.c is globally defined it will be used by the library.
| stackoverflow | {
"language": "en",
"length": 415,
"provenance": "stackexchange_0000F.jsonl.gz:878856",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585596"
} |
c60cfe989fff09d10b3d0b62ac7250974e84433a | Stackoverflow Stackexchange
Q: Testcafe Vue Selectors can't grab Vue component I'm using Testcafe Vue Selectors to perform e2e testing on my Vue application but it looks like I can't grab any of my components:
1) An error occurred in getVue code:
TypeError: Cannot read property '__vue__' of undefined
This is a sample test I have created:
import VueSelector from "testcafe-vue-selectors";
import { Selector } from 'testcafe';
fixture `Getting Started`
.page `http://localhost:8081/`;
test('test totalValue format', async t => {
const totalValue = VueSelector("total-value");
await t
.click("#total-value-toggle-format")
.expect(totalValue.getVue(({ props }) => props.formatProperty)).eql(null)
});
The structure of my components tree is the following:
Root
|___App
|___Hello
|___TotalValue
And I import the component like this:
"total-value": TotalValue,
Why is this not working?
EDIT: this is the page where I test the component
<template>
<div class="hello">
<div class="component-wrapper">
<total-value
:value="totalValueValue"
:formatProperty="computedFormatNumber">
</total-value>
</div>
</div>
</template>
<script>
import TotalValue from "../../core/TotalValue";
export default {
name: "hello",
components: {
"total-value": TotalValue,
},
data() {
return {
totalValueValue: 1000000,
formatNumber: true,
formatFunction: Assets.formatNumber,
};
},
computed: {
computedFormatNumber() {
return this.formatNumber ? ["nl", "0,0 a"] : [];
},
},
};
A: Just a follow-up, we have fixed the issue described in this thread:
Support component loaded via vue-loader
| Q: Testcafe Vue Selectors can't grab Vue component I'm using Testcafe Vue Selectors to perform e2e testing on my Vue application but it looks like I can't grab any of my components:
1) An error occurred in getVue code:
TypeError: Cannot read property '__vue__' of undefined
This is a sample test I have created:
import VueSelector from "testcafe-vue-selectors";
import { Selector } from 'testcafe';
fixture `Getting Started`
.page `http://localhost:8081/`;
test('test totalValue format', async t => {
const totalValue = VueSelector("total-value");
await t
.click("#total-value-toggle-format")
.expect(totalValue.getVue(({ props }) => props.formatProperty)).eql(null)
});
The structure of my components tree is the following:
Root
|___App
|___Hello
|___TotalValue
And I import the component like this:
"total-value": TotalValue,
Why is this not working?
EDIT: this is the page where I test the component
<template>
<div class="hello">
<div class="component-wrapper">
<total-value
:value="totalValueValue"
:formatProperty="computedFormatNumber">
</total-value>
</div>
</div>
</template>
<script>
import TotalValue from "../../core/TotalValue";
export default {
name: "hello",
components: {
"total-value": TotalValue,
},
data() {
return {
totalValueValue: 1000000,
formatNumber: true,
formatFunction: Assets.formatNumber,
};
},
computed: {
computedFormatNumber() {
return this.formatNumber ? ["nl", "0,0 a"] : [];
},
},
};
A: Just a follow-up, we have fixed the issue described in this thread:
Support component loaded via vue-loader
| stackoverflow | {
"language": "en",
"length": 198,
"provenance": "stackexchange_0000F.jsonl.gz:878896",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585727"
} |
585ab426b9dd1ca57f59375c12d5cb1c4c442395 | Stackoverflow Stackexchange
Q: Batch read file contents into variable I'm trying to read the contents of a file into a batch script variable. The file only has a guid on the first line.
If I do type myfile.id then it prints out the guid. But if I try to set that value to a variable
set /p out=<myfile.id
or
for /f "delims=" %%x in (myfile.id) do set out=%%x
Then when I echo %out% I get
■a
A: You got an encoding problem.
for /f "delims=" %%x in ('type myfile.id') do set id=%%x
should work. (type "translates" Unicode files "on the fly")
| Q: Batch read file contents into variable I'm trying to read the contents of a file into a batch script variable. The file only has a guid on the first line.
If I do type myfile.id then it prints out the guid. But if I try to set that value to a variable
set /p out=<myfile.id
or
for /f "delims=" %%x in (myfile.id) do set out=%%x
Then when I echo %out% I get
■a
A: You got an encoding problem.
for /f "delims=" %%x in ('type myfile.id') do set id=%%x
should work. (type "translates" Unicode files "on the fly")
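The underlying encoding issue can be demonstrated outside cmd.exe with iconv (file names are illustrative): a file written as UTF-16, e.g. by PowerShell's default `>` redirection, has a NUL byte after every character, which is what `set /p out=<myfile.id` reads back as garbage:

```shell
printf 'abc123' > ascii.id                        # plain ANSI/ASCII file: set /p reads this fine
iconv -f UTF-8 -t UTF-16LE ascii.id > utf16.id    # roughly simulate a UTF-16 file as PowerShell writes it
od -An -tx1 utf16.id | head -n 1                  # note the 00 byte after every character
iconv -f UTF-16LE -t UTF-8 utf16.id               # what cmd's `type` effectively does for Unicode files
```

This is why piping the file through `type` inside the `for /f` loop works: the translation back to the console code page happens before the variable is set.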
| stackoverflow | {
"language": "en",
"length": 99,
"provenance": "stackexchange_0000F.jsonl.gz:878917",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585798"
} |
83cc8fa03e2d4c258d017493090f435216635f98 | Stackoverflow Stackexchange
Q: Can't compile latest firebase core library I was using implementation 'com.google.firebase:firebase-core:10.0.1' and tried to update my dependencies to implementation 'com.google.firebase:firebase-core:11.0.1' but no luck.
All I get is:
Failed to resolve: com.google.firebase:firebase-core:11.0.1'
I use google play services 3.1.0
A: Just update the Google Repository.
*
*Go to SDK Manager
*Select Android SDK under Appearance and Behaviour
*Select SDK Tools
*Expand Support Repository and update the Google Repository to version 54.
| Q: Can't compile latest firebase core library I was using implementation 'com.google.firebase:firebase-core:10.0.1' and tried to update my dependencies to implementation 'com.google.firebase:firebase-core:11.0.1' but no luck.
All I get is:
Failed to resolve: com.google.firebase:firebase-core:11.0.1'
I use google play services 3.1.0
A: Just update the Google Repository.
*
*Go to SDK Manager
*Select Android SDK under Appearance and Behaviour
*Select SDK Tools
*Expand Support Repository and update the Google Repository to version 54.
A:
I have solved this issue by following these steps:
Configure Gradle's
build.Gradle(Project:{ProjectName})
classpath 'com.android.tools.build:gradle:2.3.1'
gradle-wrapper.properties
distributionUrl=https\://services.gradle.org/distributions/gradle-3.3-all.zip
Configure for Dependencies Error
*
*Open SDK Manager
*Select SDK Tools Tab
*Expand Support Repository
*Select Google Repository
*Update it; make sure you have version 54 or above after the update.
| stackoverflow | {
"language": "en",
"length": 115,
"provenance": "stackexchange_0000F.jsonl.gz:878918",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585799"
} |
c729d67551d644373e95399108619b1ae7abef81 | Stackoverflow Stackexchange
Q: How do you create an instance of RsaSecurityKey from an SSL key file I have an RSA private key in the format stated in RFC 7468, and a library I'm using requires an instance of SecurityKey. None of the constructors seem to accept strings in this format, nor do any of the accepted arguments for its constructor.
A: Found a Library that handles PEM Keys in .Net, If you include both DerConverter and PemUtils you can simply read the file:
RsaSecurityKey key;
using (var stream = File.OpenRead(path))
using (var reader = new PemReader(stream))
{
key = new RsaSecurityKey(reader.ReadRsaKey());
// ...
}
| Q: How do you create an instance of RsaSecurityKey from an SSL key file I have an RSA private key in the format stated in RFC 7468, and a library I'm using requires an instance of SecurityKey. None of the constructors seem to accept strings in this format, nor do any of the accepted arguments for its constructor.
A: Found a Library that handles PEM Keys in .Net, If you include both DerConverter and PemUtils you can simply read the file:
RsaSecurityKey key;
using (var stream = File.OpenRead(path))
using (var reader = new PemReader(stream))
{
key = new RsaSecurityKey(reader.ReadRsaKey());
// ...
}
| stackoverflow | {
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:878947",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585859"
} |
acf23c16eab9c1fca75075adac700303d83bf275 | Stackoverflow Stackexchange
Q: Google Api Php client - Spreadsheets permission error I am using Google PHP client to access spreadsheet data.
I'm getting this fatal error:
Fatal error: Uncaught exception 'Google_Service_Exception' with message '{ "error": { "code": 403, "message": "The caller does not have permission", "errors": [ { "message": "The caller does not have permission", "domain": "global", "reason": "forbidden" } ], "status": "PERMISSION_DENIED" } }
My code:
$client = new Google_Client();
$client->setApplicationName("Google spreadsheets");
$client->setDeveloperKey("xxxxx");
$client->setScopes(array('https://www.googleapis.com/auth/drive',
'https://www.googleapis.com/auth/spreadsheets.readonly',
'https://www.googleapis.com/auth/drive.file'));
$service = new Google_Service_Sheets($client);
$range = 'Class Data!A2:E';
$response = $service->spreadsheets_values->get($sheetid, $range);
$values = $response->getValues();
if (count($values) == 0) {
print "No data found.\n";
} else {
print "Name, Major:\n";
foreach ($values as $row) {
// Print columns A and E, which correspond to indices 0 and 4.
printf("%s, %s\n", $row[0], $row[4]);
}
}
How to fix this?
A: Take the 'client_email' from the downloaded JSON file or from your 'service account' and share the spreadsheet with this email address, you will get access to the spreadsheet. This solution worked for me.
| Q: Google Api Php client - Spreadsheets permission error I am using Google PHP client to access spreadsheet data.
I'm getting this fatal error:
Fatal error: Uncaught exception 'Google_Service_Exception' with message '{ "error": { "code": 403, "message": "The caller does not have permission", "errors": [ { "message": "The caller does not have permission", "domain": "global", "reason": "forbidden" } ], "status": "PERMISSION_DENIED" } }
My code:
$client = new Google_Client();
$client->setApplicationName("Google spreadsheets");
$client->setDeveloperKey("xxxxx");
$client->setScopes(array('https://www.googleapis.com/auth/drive',
'https://www.googleapis.com/auth/spreadsheets.readonly',
'https://www.googleapis.com/auth/drive.file'));
$service = new Google_Service_Sheets($client);
$range = 'Class Data!A2:E';
$response = $service->spreadsheets_values->get($sheetid, $range);
$values = $response->getValues();
if (count($values) == 0) {
print "No data found.\n";
} else {
print "Name, Major:\n";
foreach ($values as $row) {
// Print columns A and E, which correspond to indices 0 and 4.
printf("%s, %s\n", $row[0], $row[4]);
}
}
How to fix this?
A: Take the 'client_email' from the downloaded JSON file or from your 'service account' and share the spreadsheet with this email address, you will get access to the spreadsheet. This solution worked for me.
A: Take the service account email address and share the sheet with it like you would any other user. It will then have access to the sheet.
A: The error means that you do not have access to that sheet. I suggest you follow the Google Sheets php quick start tutorial, this will show you how to get authentication working.
<?php
require_once __DIR__ . '/vendor/autoload.php';
define('APPLICATION_NAME', 'Google Sheets API PHP Quickstart');
define('CREDENTIALS_PATH', '~/.credentials/sheets.googleapis.com-php-quickstart.json');
define('CLIENT_SECRET_PATH', __DIR__ . '/client_secret.json');
// If modifying these scopes, delete your previously saved credentials
// at ~/.credentials/sheets.googleapis.com-php-quickstart.json
define('SCOPES', implode(' ', array(
Google_Service_Sheets::SPREADSHEETS_READONLY)
));
if (php_sapi_name() != 'cli') {
throw new Exception('This application must be run on the command line.');
}
/**
* Returns an authorized API client.
* @return Google_Client the authorized client object
*/
function getClient() {
$client = new Google_Client();
$client->setApplicationName(APPLICATION_NAME);
$client->setScopes(SCOPES);
$client->setAuthConfig(CLIENT_SECRET_PATH);
$client->setAccessType('offline');
// Load previously authorized credentials from a file.
$credentialsPath = expandHomeDirectory(CREDENTIALS_PATH);
if (file_exists($credentialsPath)) {
$accessToken = json_decode(file_get_contents($credentialsPath), true);
} else {
// Request authorization from the user.
$authUrl = $client->createAuthUrl();
printf("Open the following link in your browser:\n%s\n", $authUrl);
print 'Enter verification code: ';
$authCode = trim(fgets(STDIN));
// Exchange authorization code for an access token.
$accessToken = $client->fetchAccessTokenWithAuthCode($authCode);
// Store the credentials to disk.
if(!file_exists(dirname($credentialsPath))) {
mkdir(dirname($credentialsPath), 0700, true);
}
file_put_contents($credentialsPath, json_encode($accessToken));
printf("Credentials saved to %s\n", $credentialsPath);
}
$client->setAccessToken($accessToken);
// Refresh the token if it's expired.
if ($client->isAccessTokenExpired()) {
$client->fetchAccessTokenWithRefreshToken($client->getRefreshToken());
file_put_contents($credentialsPath, json_encode($client->getAccessToken()));
}
return $client;
}
/**
* Expands the home directory alias '~' to the full path.
* @param string $path the path to expand.
* @return string the expanded path.
*/
function expandHomeDirectory($path) {
$homeDirectory = getenv('HOME');
if (empty($homeDirectory)) {
$homeDirectory = getenv('HOMEDRIVE') . getenv('HOMEPATH');
}
return str_replace('~', realpath($homeDirectory), $path);
}
// Get the API client and construct the service object.
$client = getClient();
$service = new Google_Service_Sheets($client);
// Prints the names and majors of students in a sample spreadsheet:
// https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit
$spreadsheetId = '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms';
$range = 'Class Data!A2:E';
$response = $service->spreadsheets_values->get($spreadsheetId, $range);
$values = $response->getValues();
if (count($values) == 0) {
print "No data found.\n";
} else {
print "Name, Major:\n";
foreach ($values as $row) {
// Print columns A and E, which correspond to indices 0 and 4.
printf("%s, %s\n", $row[0], $row[4]);
}
}
A: As per this document, you'll not be able to access your own Spreadsheet via an API key unless you make your document publicly available:
Source: https://developers.google.com/sheets/api/guides/authorizing
This document says:
*
*If the request requires authorization (such as a request for an individual's private data), then the application must provide an OAuth
2.0 token with the request. The application may also provide the API key, but it doesn't have to.
*If the request doesn't require authorization (such as a request for public data), then the application must provide either the API key or
an OAuth 2.0 token, or both—whatever option is most convenient for
you.
Unfortunately it is not very clear what it means.
But combining explanations from several sources, it means that an API key allows you to "identify yourself" when accessing public information. If you want to fetch a publicly available resource, such as data from Google Maps, Google still wants to know "who is asking". An API key works here.
Instead, although the text in the previous link might suggest that OAuth is for accessing data "from other users", in fact any private data, even your own data, must be accessed by the OAuth method.
So for accessing private google spreadsheets that contain company data and have not been made publicly-available, then the OAuth keying system must be in place.
| stackoverflow | {
"language": "en",
"length": 753,
"provenance": "stackexchange_0000F.jsonl.gz:878960",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585903"
} |
af22a7dd65a1219b5d2cb5e7c4036c51df70af7e | Stackoverflow Stackexchange
Q: Hyperledger Java SDK working example I am currently digging into Hyperledger Fabric and I can't get stuff started with the Java SDK (talking about 1.0.0-beta here). Is there a working example starting from connecting to the Fabric node, doing queries, etc? All I found so far through extensive googling are "let's-write-some-chaincode" examples.
A: I find this Java example to be more helpful than the links provided. Out of the box it provides you with an end to end test without bloat. Shows you how to do everything without CLI, in plain Java.
https://github.com/venugopv/FabricJavaSDKSample
| Q: Hyperledger Java SDK working example I am currently digging into Hyperledger Fabric and I can't get stuff started with the Java SDK (talking about 1.0.0-beta here). Is there a working example starting from connecting to the Fabric node, doing queries, etc? All I found so far through extensive googling are "let's-write-some-chaincode" examples.
A: I find this Java example to be more helpful than the links provided. Out of the box it provides you with an end to end test without bloat. Shows you how to do everything without CLI, in plain Java.
https://github.com/venugopv/FabricJavaSDKSample
A: Here is an example, implementing some functionality from fabcar (query.js and invoke.js - only query by one car and change owner)
I used Java8 on Windows. If you use another OS please update paths accordingly.
I avoided any JSON library to keep dependencies minimal, which means the certs have to be handled manually (see below).
You will need the fabcar example up and running.
And (because there is no JSON handling):
*
*Put Private key (cd96d5260ad4757551ed4a5a991e62130f8008a0bf996e4e4b84cd097a747fec-priv from example) to c:\tmp\cert\PeerAdm.priv
*Put Certificate from PeerAdmin file (value of json's "certificate", with '\n' replaced by newlines) to c:\tmp\cert\PeerAdm.cert
The code (fabrictest/fabcar/Program.java):
package fabrictest.fabcar;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.GeneralSecurityException;
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Collection;
import java.util.Date;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import javax.xml.bind.DatatypeConverter;
import org.hyperledger.fabric.sdk.ChaincodeID;
import org.hyperledger.fabric.sdk.Channel;
import org.hyperledger.fabric.sdk.Enrollment;
import org.hyperledger.fabric.sdk.HFClient;
import org.hyperledger.fabric.sdk.ProposalResponse;
import org.hyperledger.fabric.sdk.QueryByChaincodeRequest;
import org.hyperledger.fabric.sdk.TransactionProposalRequest;
import org.hyperledger.fabric.sdk.User;
import org.hyperledger.fabric.sdk.security.CryptoSuite;
public class Program {
private static HFClient client = null;
public static void main(String[] args) throws Throwable {
/*
* wallet_path: path.join(__dirname, './creds'), user_id: 'PeerAdmin',
* channel_id: 'mychannel', chaincode_id: 'fabcar', network_url:
* 'grpc://192.168.99.100:7051', orderer: grpc://192.168.99.100:7050
*
*/
// just new objects, without any payload inside
client = HFClient.createNewInstance();
CryptoSuite cs = CryptoSuite.Factory.getCryptoSuite();
client.setCryptoSuite(cs);
// We implement User interface below in code
// folder c:\tmp\creds should contain PeerAdmin.cert (extracted from HF's fabcar
// example's PeerAdmin json file)
// and PeerAdmin.priv (copy from
// cd96d5260ad4757551ed4a5a991e62130f8008a0bf996e4e4b84cd097a747fec-priv)
User user = new SampleUser("c:\\tmp\\creds", "PeerAdmin");
// "Log in"
client.setUserContext(user);
// Instantiate channel
Channel channel = client.newChannel("mychannel");
channel.addPeer(client.newPeer("peer", "grpc://192.168.99.100:7051"));
// It always wants orderer, otherwise even query does not work
channel.addOrderer(client.newOrderer("orderer", "grpc://192.168.99.100:7050"));
channel.initialize();
// below is querying and setting new owner
String newOwner = "New Owner #" + new Random(new Date().getTime()).nextInt(999);
System.out.println("New owner is '" + newOwner + "'\n");
queryFabcar(channel, "CAR1");
updateCarOwner(channel, "CAR1", newOwner, false);
System.out.println("after request for transaction without commit");
queryFabcar(channel, "CAR1");
updateCarOwner(channel, "CAR1", newOwner, true);
System.out.println("after request for transaction WITH commit");
queryFabcar(channel, "CAR1");
System.out.println("Sleeping 5s");
Thread.sleep(5000); // 5secs
queryFabcar(channel, "CAR1");
System.out.println("all done");
}
private static void queryFabcar(Channel channel, String key) throws Exception {
QueryByChaincodeRequest req = client.newQueryProposalRequest();
ChaincodeID cid = ChaincodeID.newBuilder().setName("fabcar").build();
req.setChaincodeID(cid);
req.setFcn("queryCar");
req.setArgs(new String[] { key });
System.out.println("Querying for " + key);
Collection<ProposalResponse> resps = channel.queryByChaincode(req);
for (ProposalResponse resp : resps) {
String payload = new String(resp.getChaincodeActionResponsePayload());
System.out.println("response: " + payload);
}
}
private static void updateCarOwner(Channel channel, String key, String newOwner, Boolean doCommit)
throws Exception {
TransactionProposalRequest req = client.newTransactionProposalRequest();
ChaincodeID cid = ChaincodeID.newBuilder().setName("fabcar").build();
req.setChaincodeID(cid);
req.setFcn("changeCarOwner");
req.setArgs(new String[] { key, newOwner });
System.out.println("Executing for " + key);
Collection<ProposalResponse> resps = channel.sendTransactionProposal(req);
if (doCommit) {
channel.sendTransaction(resps);
}
}
}
/***
* Implementation of user. main business logic (as for fabcar example) is in
* getEnrollment - get user's private key and cert
*
*/
class SampleUser implements User {
private final String certFolder;
private final String userName;
public SampleUser(String certFolder, String userName) {
this.certFolder = certFolder;
this.userName = userName;
}
@Override
public String getName() {
return userName;
}
@Override
public Set<String> getRoles() {
return new HashSet<String>();
}
@Override
public String getAccount() {
return "";
}
@Override
public String getAffiliation() {
return "";
}
@Override
public Enrollment getEnrollment() {
return new Enrollment() {
@Override
public PrivateKey getKey() {
try {
return loadPrivateKey(Paths.get(certFolder, userName + ".priv"));
} catch (Exception e) {
return null;
}
}
@Override
public String getCert() {
try {
return new String(Files.readAllBytes(Paths.get(certFolder, userName + ".cert")));
} catch (Exception e) {
return "";
}
}
};
}
@Override
public String getMspId() {
return "Org1MSP";
}
/***
* loading private key from .pem-formatted file, ECDSA algorithm
* (from some example on StackOverflow, slightly changed)
* @param fileName - file with the key
* @return Private Key usable
* @throws IOException
* @throws GeneralSecurityException
*/
public static PrivateKey loadPrivateKey(Path fileName) throws IOException, GeneralSecurityException {
PrivateKey key = null;
InputStream is = null;
try {
is = new FileInputStream(fileName.toString());
BufferedReader br = new BufferedReader(new InputStreamReader(is));
StringBuilder builder = new StringBuilder();
boolean inKey = false;
for (String line = br.readLine(); line != null; line = br.readLine()) {
if (!inKey) {
if (line.startsWith("-----BEGIN ") && line.endsWith(" PRIVATE KEY-----")) {
inKey = true;
}
continue;
} else {
if (line.startsWith("-----END ") && line.endsWith(" PRIVATE KEY-----")) {
inKey = false;
break;
}
builder.append(line);
}
}
//
byte[] encoded = DatatypeConverter.parseBase64Binary(builder.toString());
PKCS8EncodedKeySpec keySpec = new PKCS8EncodedKeySpec(encoded);
KeyFactory kf = KeyFactory.getInstance("ECDSA");
key = kf.generatePrivate(keySpec);
} finally {
is.close();
}
return key;
}
}
A: You can take a look at the Java SDK for Hyperledger Fabric 2.2. In this, there are two files given in the folder "fabric-sdk-java/src/test/java/org/hyperledger/fabric/sdkintegration/" ==> End2endAndBackAgainIT.java, End2endIT.java. This can help.
*
*For a demonstration, refer to Youtube channel video: End to end Demo
*For a fabric network which has everything (network & crypto) setup for the E2E demo: E2E Cli Setup
Update on 2020-June-07
The link above, Java SDK for Hyperledger Fabric 2.2, is a low-level SDK for interacting with Hyperledger Fabric.
If your purpose is building Hyperledger Fabric blockchain client applications, then it's recommended to use the Hyperledger Fabric Gateway SDK for Java, a high-level API. It's very simple to use; just refer to the code snippet from [2.2] and to the linked guide on how to use it.
// code snippet from [2.2]
class Sample {
public static void main(String[] args) throws IOException
{
// Load an existing wallet holding identities used to access the network.
Path walletDirectory = Paths.get("wallet");
Wallet wallet = Wallets.newFileSystemWallet(walletDirectory);
// Path to a common connection profile describing the network.
Path networkConfigFile = Paths.get("connection.json");
// Configure the gateway connection used to access the network.
Gateway.Builder builder = Gateway.createBuilder() .identity(wallet, "user1").networkConfig(networkConfigFile);
// Create a gateway connection
try (Gateway gateway = builder.connect()){
// Obtain a smart contract deployed on the network.
Network network = gateway.getNetwork("mychannel");
Contract contract = network.getContract("fabcar");
// Submit transactions that store state to the ledger.
byte[] createCarResult = contract.createTransaction("createCar").submit("CAR10", "VW", "Polo", "Grey","Mary");
System.out.println(new String(createCarResult, StandardCharsets.UTF_8));
// Evaluate transactions that query state from the ledger.
byte[] queryAllCarsResult = contract.evaluateTransaction("queryAllCars");
System.out.println(new String(queryAllCarsResult, StandardCharsets.UTF_8));
}
catch (ContractException | TimeoutException | InterruptedException e) {
e.printStackTrace();
}
}
}
API documentation for both 1.4 and 2.2 are available.
| stackoverflow | {
"language": "en",
"length": 1102,
"provenance": "stackexchange_0000F.jsonl.gz:878964",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585909"
} |
b4a08530b93c4790f2dc98823819753da07ae32d | Stackoverflow Stackexchange
Q: Concatenate many Future[Seq] into one Future[Seq] Without Future, that's how I combine all smaller Seq into one big Seq with a flatmap
category.getCategoryUrlKey(id: Int):Seq[Meta] // main method
val appDomains: Seq[Int]
val categories:Seq[Meta] = appDomains.flatMap(category.getCategoryUrlKey(_))
Now the method getCategoryUrlKey could fail. I put a circuit breaker in front to avoid to call it for the next elements after an amount of maxFailures. Now the circuit breaker doesn't return a Seq but a Future[Seq]
lazy val breaker = new akka.pattern.CircuitBreaker(...)
private def getMeta(appDomainId: Int): Future[Seq[Meta]] = {
breaker.withCircuitBreaker {
category.getCategoryUrlKey(appDomainId)
}
}
How do I iterate through the list appDomains and combine the results into one single Future[Seq], possibly into a Seq?
If Functional Programming is applicable, is there a way to directly transform without temporary variables?
A: Squash seq of futures using Future.sequence
Future.sequence converts Seq[Future[T]] to Future[Seq[T]]
In your case T is Seq. After the sequence operation, you will end up with Seq[Seq[T]]. So Just flatten it after the sequence operation using flatten.
def squashFutures[T](list: Seq[Future[Seq[T]]]): Future[Seq[T]] =
Future.sequence(list).map(_.flatten)
Your code becomes
Future.sequence(appDomains.map(getMeta)).map(_.flatten)
| Q: Concatenate many Future[Seq] into one Future[Seq] Without Future, that's how I combine all smaller Seq into one big Seq with a flatmap
category.getCategoryUrlKey(id: Int):Seq[Meta] // main method
val appDomains: Seq[Int]
val categories:Seq[Meta] = appDomains.flatMap(category.getCategoryUrlKey(_))
Now the method getCategoryUrlKey could fail. I put a circuit breaker in front to avoid to call it for the next elements after an amount of maxFailures. Now the circuit breaker doesn't return a Seq but a Future[Seq]
lazy val breaker = new akka.pattern.CircuitBreaker(...)
private def getMeta(appDomainId: Int): Future[Seq[Meta]] = {
breaker.withCircuitBreaker {
category.getCategoryUrlKey(appDomainId)
}
}
How do I iterate through the list appDomains and combine the results into one single Future[Seq], possibly into a Seq?
If Functional Programming is applicable, is there a way to directly transform without temporary variables?
A: Squash seq of futures using Future.sequence
Future.sequence converts Seq[Future[T]] to Future[Seq[T]]
In your case T is Seq. After the sequence operation, you will end up with Seq[Seq[T]]. So Just flatten it after the sequence operation using flatten.
def squashFutures[T](list: Seq[Future[Seq[T]]]): Future[Seq[T]] =
Future.sequence(list).map(_.flatten)
Your code becomes
Future.sequence(appDomains.map(getMeta)).map(_.flatten)
A: From TraversableOnce[Future[A]] to Future[TraversableOnce[A]]
val categories = Future.successful(appDomains).flatMap(seq => {
val fs = seq.map(i => getMeta(i))
val sequenced = Future.sequence(fs)
sequenced.map(_.flatten)
})
*
*Future.successful(appDomains) lifts the appDomains into the context of Future
Hope this helps.
A: val metaSeqFutureSeq = appDomains.map(i => getMeta(i))
// Seq[Future[Seq[Meta]]]
val metaSeqSeqFuture = Future.sequence(metaSeqFutureSeq)
// Future[Seq[Seq[Meta]]]
// NOTE :: this future will fail if any of the futures in the sequence fails
val metaSeqFuture = metaSeqSeqFuture.map(seq => seq.flatten)
// Future[Seq[Meta]]
If you want to reject only the failed futures but keep the successful ones, then we will have to be a bit creative and build our future using a promise.
import java.util.concurrent.locks.ReentrantLock
import scala.collection.mutable.ArrayBuffer
import scala.concurrent.{Future, Promise}
import scala.util.{Failure, Success}
def futureSeqToOptionSeqFuture[T](futureSeq: Seq[Future[T]]): Future[Seq[Option[T]]] = {
val promise = Promise[Seq[Option[T]]]()
var remaining = futureSeq.length
val result = ArrayBuffer[Option[T]]()
    result ++= futureSeq.map(_ => None) // pre-fill with None so result(index) is valid below
val resultLock = new ReentrantLock()
def handleFutureResult(option: Option[T], index: Int): Unit = {
resultLock.lock()
result(index) = option
remaining = remaining - 1
if (remaining == 0) {
promise.success(result)
}
resultLock.unlock()
}
futureSeq.zipWithIndex.foreach({ case (future, index) => future.onComplete({
case Success(t) => handleFutureResult(Some(t), index)
case Failure(ex) => handleFutureResult(None, index)
}) })
promise.future
}
val metaSeqFutureSeq = appDomains.map(i => getMeta(i))
// Seq[Future[Seq[Meta]]]
val metaSeqOptionSeqFuture = futureSeqToOptionSeqFuture(metaSeqFutureSeq)
// Future[Seq[Option[Seq[Meta]]]]
val metaSeqFuture = metaSeqOptionSeqFuture.map(seq => seq.flatten.flatten)
// Future[Seq[Meta]]
| stackoverflow | {
"language": "en",
"length": 385,
"provenance": "stackexchange_0000F.jsonl.gz:878965",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585910"
} |
bf20d3fe8bd7be28357a9253ef966dec52962841 | Stackoverflow Stackexchange
Q: Self-signed SSL certificates not working with MAMP and Chrome SSL certificates created by MAMP are not working in Chrome. I'm getting a "Not secure" issue.
Is there a workaround for this?
A: I followed the answers. What worked for me was setting the port number to 443 in the general tab
| Q: Self-signed SSL certificates not working with MAMP and Chrome SSL certificates created by MAMP are not working in Chrome. I'm getting a "Not secure" issue.
Is there a workaround for this?
A: I followed the answers. What worked for me was setting the port number to 443 in the general tab
A: **NOTE: Since I posted this, Google have acquired the .dev top level domain, so it's not advised to use .dev hostnames for your local development. I use *.dv now. When reading this answer, please replace .dev with .test or something else when recreating the steps in your own project. Use of .local is not advised **
Chrome now requires SSL certificates to use the "Subject Alt Name" (SAN) rather than the old Common Name. This breaks self-signed certs previously generated by MAMP.
Fortunately, the workaround is pretty straightforward.
Here are all the steps from the very first moment of setting a host to be SSL in MAMP Pro. If you previously created SSL certificates in MAMP, then I've found that deleting them and starting again using this method works.
*
*Create your hostname, eg. test.dev and select your document root
*Click the SSL tab, and check the "SSL" box. Make sure you leave the other checkbox "Only allow connections using TLS protocols" unchecked.
*Click the "Create self signed certificate" button and fill in the popup form with the relevant details. Click "Generate" and save the certificate in /Applications/MAMP/Library/OpenSSL/certs/
*Save your changes in MAMP, and restart the servers.
*Click the round arrow button beside "Certificate file" in the MAMP SSL panel (Show in Finder). Double click the .crt file that is highlighted - it should be named like your host, eg. if your host is test.dev then your certificate file will be test.dev.crt. This should open Keychain Access and you should see the new certificate in there.
*Right click / Control click on the certificate, and choose "Get Info". Click the drop-down triangle beside "Trust"
*From the "When using this certificate" selector, choose "Always Trust" - every selector should change to show "Always Trust". Close that window. It will ask for your Mac OS system password to make that change. You should see that the certificate icon shows a little blue plus sign icon over it, marking it as trusted.
*Restart Chrome.
*Visit your new hostname, and enjoy the green https in the browser
bar.
A: If the solution above doesn't help, go to chrome://flags look for "Allow invalid certificates for resources loaded from localhost" and enable it, restart Chrome and you should be good to go.
A: For those that are still having issues, try using port 8890. The default MAMP ssl port is 8890 so visit https://test.dev:8890. Worked for me.
A: For me, it wasn't necessary to use MAMP Ports but instead they were kept at Apache defaults. I also didn't need to specify port 443. What did help once I created the self-signed cert was to install the certificate icon that shows in Chrome into my Mac Keychain by dragging the image to the desktop and double-clicking it. Once it's installed into the Mac Keychain, you can set it to trust the cert.
Refer to this illustrated answer:
https://www.accuweaver.com/2014/09/19/make-chrome-accept-a-self-signed-certificate-on-osx/
*
*MAMP Pro 4.5
*Chrome 71
| stackoverflow | {
"language": "en",
"length": 540,
"provenance": "stackexchange_0000F.jsonl.gz:878968",
"question_score": "27",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585919"
} |
92accaf8e6e05a33684c3642bacdff58cea902f3 | Stackoverflow Stackexchange
Q: com.apple.WebKit.WebContent drops 113 error: Could not find specified service I am using WKWebView for viewing custom HTML.
*
*Regardless of HTML content, when testing on real device, I receive the following error Could not signal service com.apple.WebKit.WebContent: 113: Could not find specified service in 29 sec after WKWebView content loaded, sometimes I even receive this error twice. Clearly, it is a configuration issue. I have checked cookies as proposed in Could not signal service com.apple.WebKit.WebContent, however it doesn't help
*Another question is whether there exist a list of all error codes that might pop up in WKWebView
A: I got this error loading a http:// URL where the server replied with a redirect to https. After changing the URL I pass to WKWebView to https://... it worked.
| Q: com.apple.WebKit.WebContent drops 113 error: Could not find specified service I am using WKWebView for viewing custom HTML.
*
*Regardless of HTML content, when testing on real device, I receive the following error Could not signal service com.apple.WebKit.WebContent: 113: Could not find specified service in 29 sec after WKWebView content loaded, sometimes I even receive this error twice. Clearly, it is a configuration issue. I have checked cookies as proposed in Could not signal service com.apple.WebKit.WebContent, however it doesn't help
*Another question is whether there exist a list of all error codes that might pop up in WKWebView
A: I got this error loading a http:// URL where the server replied with a redirect to https. After changing the URL I pass to WKWebView to https://... it worked.
A: I had this problem in iOS 12.4 when calling evaluateJavascript. I solved it by wrapping the call in DispatchQueue.main.async { }
A: Finally, solved the problem above. I was receiving errors
*
*Could not signal service com.apple.WebKit.WebContent: 113: Could not find specified service
The issue was that I had not added the WKWebView object to the view as a subview, and I tried to call -loadHTMLString:baseURL: on top of it; only after it was successfully loaded did I add it to the view's subviews, which was totally wrong. The correct solution for my problem is:
1. Add WKWebView object to view's subviews array
2. Call -loadHTMLString:baseURL: for recently added WKWebView
A: I too faced this problem when loading an 'http' URL in WKWebView in iOS 11; it works fine with https.
What worked for me was setting the App Transport Security setting in the Info.plist file to allow arbitrary loads.
<key>NSAppTransportSecurity</key>
<dict>
<!--Not a recommended way, there are better solutions available-->
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
A: Maybe it's an entirely different situation, but I always got WebView[43046:188825] Could not signal service com.apple.WebKit.WebContent: 113: Could not find specified service
when opening a webpage on the simulator while having the debugger attached to it. If I end the debugger and open the app again, the webpage opens just fine. This doesn't happen on the devices.
After spending an entire work-day trying to figure out what's wrong, I found out that if we have a framework named Preferences, UIWebView and WKWebView will not be able to open a webpage and will throw the error above.
To reproduce this error just make a simple app with WKWebView to show a webpage. Then create a new framework target and name it Preferences. Then import it to the main target and run the simulator again. WKWebView will fail to open a webpage.
So, it might be unlikely, but if you have a framework with the name Preferences, try deleting or renaming it.
Also, if anyone has an explanation for this please do share.
BTW, I was on Xcode 9.2.
A: SWIFT
Well, I did this in the following order and didn't get any error like Could not signal service com.apple.WebKit.WebContent: 113: Could not find specified service after that; the following code might help you too.
webView = WKWebView(frame: self.view.frame)
self.view.addSubview(webView)
webView.navigationDelegate = self
webView.loadHTMLString(htmlString, baseURL: nil)
Do it in that order.
Thanks
A: In my case I was launching a WKWebView and displaying a website. Then (within 25 seconds) I deallocated the WKWebView. But 25-60 seconds after launching the WKWebView I received this "113" error message. I assume the system was trying to signal something to the WKWebView and couldn't find it because it was deallocated.
The fix was simply to leave the WKWebView allocated.
A: On OS X, it's necessary to make sure Sandbox capabilities are set-up properly in order to use WKWebView.
This link made this clear to me:
https://forums.developer.apple.com/thread/92265
Sharing hoping that it will help someone.
Select the Project File in the Navigator, select Capabilities, then make sure that:
* App Sandbox is OFF,
OR
* App Sandbox is ON AND Outgoing Connections (Client) is checked.
A: Mine was different again. I was setting the user-agent like so:
NSString *jScript = @"var meta = document.createElement('meta'); meta.setAttribute('name', 'viewport'); meta.setAttribute('content', 'width=device-width'); document.getElementsByTagName('head')[0].appendChild(meta);";
WKUserScript *wkUScript = [[WKUserScript alloc] initWithSource:jScript injectionTime:WKUserScriptInjectionTimeAtDocumentEnd forMainFrameOnly:YES];
This was causing something on the web page to freak out and leak memory. Not sure why but removing this sorted the issue for me.
A: Perhaps the below method could be the cause if you've set it to
func webView(_ webView: WebView!,decidePolicyForNavigationAction actionInformation: [AnyHashable : Any]!, request: URLRequest!, frame: WebFrame!, decisionListener listener: WebPolicyDecisionListener!)
ends with
decisionHandler(.cancel)
for the default navigationAction.request.url
Hope it works!
A: Just for others reference, I seemed to have this issue too if I tried to load a URL that had whitespace at the end (was being pulled from user input).
A: Deleting/commenting
- (void)viewWillAppear:(BOOL)animated {[super viewWillAppear:YES];}
function solved the problem for me.
XCode (11.3.1)
A: I tried almost everything; my solution was simple: updating macOS to the latest version, and also Xcode to the latest version. That way the error was gone and the blank white screen no longer happened.
A: Just check the URL that you are passing to the load request. I was receiving the same error when I came to this page; later I checked and found I was getting a URL starting with "www".
After I added "https://", it worked for me.
A: For those who use Flutter, I get the same error on webview_flutter, flutter_inappwebview and flutter_webview_plugin, which I thought came from the package, so I tried different things. However, in my case I was trying to open a custom-scheme URL to use it to open the app, something like appname://code=xxx..., and WKWebView won't allow you to open it; on Android it will be opened, but you'll get some error message.
It was working fine on flutter_webview_plugin because it provides an onUrlChange listener, which will intercept the call before loading it and allow you to do what you want with it... for me, I closed the webview and used url_launcher.
To do the same thing on webview_flutter you should use the navigationDelegate option to allow opening the URL or not, as follows:
WebView(
javascriptMode: JavascriptMode.unrestricted,
initialUrl: url,
navigationDelegate: (x) {
if(x.url.toString().toLowerCase().startsWith('appname://')){
//close webview and do something
// prevent open the url
return NavigationDecision.prevent;
}
else return NavigationDecision.navigate;
},
For flutter_inappwebview there is an option they mention on the official doc
... I didn't try it because webview_flutter worked... but I suppose it does the same thing
resourceCustomSchemes: List of custom schemes that the WebView must handle. Use the WebView.onLoadResourceCustomScheme event to intercept resource requests with custom scheme.
| stackoverflow | {
"language": "en",
"length": 1073,
"provenance": "stackexchange_0000F.jsonl.gz:878991",
"question_score": "107",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44585980"
} |
7f881c1db7444fa57f035098ed657f8774e1ebe2 | Stackoverflow Stackexchange
Q: Ruby: How do you include a % sign in sprintf output? I have:
sprintf("%02X" % 13)
Which outputs:
=>"0D"
I want my output to be:
=>"%0D"
I've tried:
sprintf("\%%02X" % 13)
but I get an error warning: too many arguments for format string. The same goes for:
sprintf("%%02X" % 13)
Is it possible to add a leading % in sprintf alone?
A: sprintf('%%%02X', 13)
# => "%0D"
From the ruby docs:
Field: % | Other Format: A percent sign itself will be displayed. No argument taken.
i.e. you must escape the % character with a double %%; much like you must escape a single \ with \\ in regular strings.
| Q: Ruby: How do you include a % sign in sprintf output? I have:
sprintf("%02X" % 13)
Which outputs:
=>"0D"
I want my output to be:
=>"%0D"
I've tried:
sprintf("\%%02X" % 13)
but I get an error warning: too many arguments for format string. The same goes for:
sprintf("%%02X" % 13)
Is it possible to add a leading % in sprintf alone?
A: sprintf('%%%02X', 13)
# => "%0D"
From the ruby docs:
Field: % | Other Format: A percent sign itself will be displayed. No argument taken.
i.e. you must escape the % character with a double %%; much like you must escape a single \ with \\ in regular strings.
A: A literal % has to be escaped as %%:
sprintf('%%') #=> "%"
Furthermore, you should either use sprintf or %, not both:
sprintf('%%%02X', 13) #=> "%0D"
# ^
# comma here
'%%%02X' % 13 #=> "%0D"
# ^
# percent sign here
If these are too many percent signs, you can separate the string literal to make it more obvious:
sprintf('%%' '%02X', 13)
#=> "%0D"
In Ruby, 'foo' 'bar' is equivalent to 'foobar', i.e. adjacent string literals are automatically concatenated by the interpreter.
A: Another possibility is to use Integer#to_s :
"%" + 13.to_s(16).rjust(2, '0').upcase
#=> "%0D"
And since % has a higher precedence than +, you could also write :
"%" + "%02X" % 13
#=> "%0D"
which is equivalent to
"%" + ("%02X" % 13)
#=> "%0D"
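The percent-escaping rule is not Ruby-specific; other printf-style formatters behave the same way. As a quick cross-check in Python (illustrative only, not Ruby):

```python
# In Python's %-style formatting, "%%" likewise produces a literal percent sign.
formatted = "%%%02X" % 13  # "%%" -> "%", "%02X" -> zero-padded uppercase hex
print(formatted)  # -> %0D
```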
| stackoverflow | {
"language": "en",
"length": 241,
"provenance": "stackexchange_0000F.jsonl.gz:879020",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586100"
} |
4fc87cfa6b5f03b20fc1dbe453b5b981235e4639 | Stackoverflow Stackexchange
Q: android gradle hang by creating new cache I need to create a Gradle build server to auto-build our Android project. After I set up Java, the Android SDK, and Gradle,
I copied my MacBook's ".gradle" folder to this server (just to avoid re-downloading).
Now it hangs every time after executing "./gradlew assemble --info".
The hang messages in the log are the following:
Creating new cache for metadata-2.23/module-metadata, path /root/.gradle/caches/modules-2/metadata-2.23/module-metadata.bin, access org.gradle.cache.internal.DefaultCacheAccess@4ed31790
Creating new cache for metadata-2.23/artifact-at-repository, path /root/.gradle/caches/modules-2/metadata-2.23/artifact-at-repository.bin, access org.gradle.cache.internal.DefaultCacheAccess@4ed31790
Creating new cache for metadata-2.23/artifact-at-url, path /root/.gradle/caches/modules-2/metadata-2.23/artifact-at-url.bin, access org.gradle.cache.internal.DefaultCacheAccess@4ed31790
> Configuring > 0/4 projects > root project
Can someone help me? I'd appreciate any help :)
| Q: android gradle hang by creating new cache I need to create a Gradle build server to auto-build our Android project. After I set up Java, the Android SDK, and Gradle,
I copied my MacBook's ".gradle" folder to this server (just to avoid re-downloading).
Now it hangs every time after executing "./gradlew assemble --info".
The hang messages in the log are the following:
Creating new cache for metadata-2.23/module-metadata, path /root/.gradle/caches/modules-2/metadata-2.23/module-metadata.bin, access org.gradle.cache.internal.DefaultCacheAccess@4ed31790
Creating new cache for metadata-2.23/artifact-at-repository, path /root/.gradle/caches/modules-2/metadata-2.23/artifact-at-repository.bin, access org.gradle.cache.internal.DefaultCacheAccess@4ed31790
Creating new cache for metadata-2.23/artifact-at-url, path /root/.gradle/caches/modules-2/metadata-2.23/artifact-at-url.bin, access org.gradle.cache.internal.DefaultCacheAccess@4ed31790
> Configuring > 0/4 projects > root project
Can someone help me? I'd appreciate any help :)
| stackoverflow | {
"language": "en",
"length": 104,
"provenance": "stackexchange_0000F.jsonl.gz:879033",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586159"
} |
2f6ac5e3a633d816e9e026cd9e6e121662284e06 | Stackoverflow Stackexchange
Q: Sumif value is in year I have list of dates and values:
01.01.2016 100
01.02.2017 200
01.03.2017 300
What I want now is: if the year is 2017, sum the values.
Result: 200 + 300 = 500
This is a formula i tried:
=SUMIF($F5:$F,"<="&$A2,AF5:AF)
The issue is that Google Sheets cannot compare a date and a year. When I change the date from 01.02.2017 to 2017, it works.
If I try to get the year of the date, it doesn't work:
=SUMIF(YEAR($F5:$F),"="&$A2,AF5:AF)
How can I compare a date with a year? Or is there another way?
Thanks!
A: Try this formula:
=SUM(FILTER(B:B,YEAR(A:A)=2017))
And also please try this:
=QUERY(A:B,"select year(A), sum(B) where A is not null group by year(A)")
| Q: Sumif value is in year I have list of dates and values:
01.01.2016 100
01.02.2017 200
01.03.2017 300
What I want now is: if the year is 2017, sum the values.
Result: 200 + 300 = 500
This is a formula i tried:
=SUMIF($F5:$F,"<="&$A2,AF5:AF)
The issue is that Google Sheets cannot compare a date and a year. When I change the date from 01.02.2017 to 2017, it works.
If I try to get the year of the date, it doesn't work:
=SUMIF(YEAR($F5:$F),"="&$A2,AF5:AF)
How can I compare a date with a year? Or is there another way?
Thanks!
A: Try this formula:
=SUM(FILTER(B:B,YEAR(A:A)=2017))
And also please try this:
=QUERY(A:B,"select year(A), sum(B) where A is not null group by year(A)")
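For completeness, the same year-filtered sum is easy to reproduce outside of Sheets; a minimal Python sketch using the question's sample data:

```python
from datetime import date

# sample rows from the question: (date, value)
rows = [
    (date(2016, 1, 1), 100),
    (date(2017, 2, 1), 200),
    (date(2017, 3, 1), 300),
]

# sum only the values whose date falls in 2017
total_2017 = sum(value for d, value in rows if d.year == 2017)
print(total_2017)  # -> 500
```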
| stackoverflow | {
"language": "en",
"length": 121,
"provenance": "stackexchange_0000F.jsonl.gz:879037",
"question_score": "19",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586166"
} |
3bf073082e73ed57d66edba6d94682182c5ea094 | Stackoverflow Stackexchange
Q: C#: is using Take() inside a foreach statement alright? Can anyone suggest which of the two snippets below we should use?
foreach(var items in itemList.Take(20))
{
}
or
var itemList = itemList.Take(20);
foreach(var items in itemList)
{
}
Do the above two snippets differ in terms of optimised code? If yes, please let me know the reason.
A: No, the two pieces of code won't behave differently.
In fact, if you take a look at this SharpLab example you'll notice that the two pieces of code compile to the exact same IL.
So:
*
*The code looks different; if that is important to you, pick the one you feel most comfortable with
*The code will compile to the exact same output (in release builds), in which case the look of the code has (in this case) no bearing on the behavior or performance of the code
In short, pick the version you think looks best.
| Q: C#: is using Take() inside a foreach statement alright? Can anyone suggest which of the two snippets below we should use?
foreach(var items in itemList.Take(20))
{
}
or
var itemList = itemList.Take(20);
foreach(var items in itemList)
{
}
Do the above two snippets differ in terms of optimised code? If yes, please let me know the reason.
A: No, the two pieces of code won't behave differently.
In fact, if you take a look at this SharpLab example you'll notice that the two pieces of code compile to the exact same IL.
So:
*
*The code looks different; if that is important to you, pick the one you feel most comfortable with
*The code will compile to the exact same output (in release builds), in which case the look of the code has (in this case) no bearing on the behavior or performance of the code
In short, pick the version you think looks best.
A: foreach evaluates the method after in keyword only once. A very simple example as ConsoleApplication:
public class Test
{
public IEnumerable<int> ReturnAList()
{
Console.WriteLine("ReturnAList called");
return new List<int>()
{
1, 1, 2, 3, 5, 8, 13, 21, 34
};
}
}
Then:
var test = new Test();
foreach(var t in test.ReturnAList())
{
Console.WriteLine(t);
}
You will see that the output will be:
// ReturnAList called
// 1
// 1
// 2
// 3
// 5
// 8
// 13
// 21
// 34
A: Thanks for your insight.
However, I have written a console application, and the first example (iterating directly over Take()) consumes more time in execution than the latter.
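The "evaluated only once" point made earlier in this thread is not specific to C#: Python's for statement also evaluates the expression after in exactly once. A minimal sketch (Python, for illustration only; take_20 is a made-up stand-in for itemList.Take(20)):

```python
calls = []

def take_20():
    """Stand-in for itemList.Take(20); records how often it is evaluated."""
    calls.append(1)
    return [1, 1, 2, 3, 5, 8, 13, 21, 34][:20]

# The loop expression is evaluated once, then its result is iterated.
for item in take_20():
    pass

print(len(calls))  # -> 1
```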
| stackoverflow | {
"language": "en",
"length": 274,
"provenance": "stackexchange_0000F.jsonl.gz:879038",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586167"
} |
412e11697c01136861601c4eeab0ac4a609c8a99 | Stackoverflow Stackexchange
Q: How to change kernel when Jupyter notebook shows only one Python I need to change the kernel to point it to the miniconda version of Python, but Jupyter Notebook shows only one "Python 3" under Kernel -> Change Kernel.
Any idea how to get Jupyter notebook to show the additional one installed?
A: You can have a look at this and install the required kernel
https://ipython.readthedocs.io/en/latest/install/kernel_install.html
| Q: How to change kernel when Jupyter notebook shows only one Python I need to change the kernel to point it to the miniconda version of Python, but Jupyter Notebook shows only one "Python 3" under Kernel -> Change Kernel.
Any idea how to get Jupyter notebook to show the additional one installed?
A: You can have a look at this and install the required kernel
https://ipython.readthedocs.io/en/latest/install/kernel_install.html
A: If you want to manually configure (add) a Python 2.7 environment, try this:
conda create -n py27 python=2.7
conda activate py27
conda install notebook ipykernel
ipython kernel install --user
| stackoverflow | {
"language": "en",
"length": 94,
"provenance": "stackexchange_0000F.jsonl.gz:879057",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586234"
} |
445335d2ad3bf27c631e83b6b6e7bcb80fcb3137 | Stackoverflow Stackexchange
Q: android studio - zxing barcode scanner - custom layout I am opening the scanner like this
IntentIntegrator integrator = new IntentIntegrator(activity);
integrator.setDesiredBarcodeFormats(IntentIntegrator.ALL_CODE_TYPES);
integrator.setPrompt(message);
integrator.setCameraId(0);
integrator.setBeepEnabled(false);
integrator.setBarcodeImageEnabled(false);
integrator.initiateScan();
How can I change the layout?
I want to add a button to the scanner view and also increase the font of the prompt.
Thanks!
A: If you want to use this library with a custom layout, you'll have to build the layout yourself. The IntentIntegrator just launches a default activity that is part of the library; there aren't many customization options there. You can have a look at the documentation here in order to learn how to embed their component into your own layout.
Hope this helps!
| Q: android studio - zxing barcode scanner - custom layout I am opening the scanner like this
IntentIntegrator integrator = new IntentIntegrator(activity);
integrator.setDesiredBarcodeFormats(IntentIntegrator.ALL_CODE_TYPES);
integrator.setPrompt(message);
integrator.setCameraId(0);
integrator.setBeepEnabled(false);
integrator.setBarcodeImageEnabled(false);
integrator.initiateScan();
How can I change the layout?
I want to add a button to the scanner view and also increase the font of the prompt.
Thanks!
A: If you want to use this library with a custom layout, you'll have to build the layout yourself. The IntentIntegrator just launches a default activity that is part of the library; there aren't many customization options there. You can have a look at the documentation here in order to learn how to embed their component into your own layout.
Hope this helps!
| stackoverflow | {
"language": "en",
"length": 116,
"provenance": "stackexchange_0000F.jsonl.gz:879058",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586236"
} |
2d0be5dfe902a42aa066ba88464eae65de8a5385 | Stackoverflow Stackexchange
Q: multicast id in firebase cloud messaging service What is the multicast_id in Firebase cloud messaging service?
As mentioned in the documentation that is provided by Google, multicast id is the unique number that identifies the multicast message.
I read it, but I did not understand it clearly.
Can anyone explain it?
A: Per Firebase Cloud Messaging HTTP Protocol documentation:
The multicast_id is a required parameter in the FCM response payload, and is a unique ID that identifies the multicast message.
A multicast message is a notification that will be sent from a server and targeting multiple client applications.
To answer your question directly, this response parameter is just an identification ID of a multicast message sent to client applications.
I'm not sure yet, but looking at the Build App Server Send Requests documentation, you will only get this response parameter when you use Single Device Messaging targeting a single or array of registration tokens.
I hope this helps.
| Q: multicast id in firebase cloud messaging service What is the multicast_id in Firebase cloud messaging service?
As mentioned in the documentation that is provided by Google, multicast id is the unique number that identifies the multicast message.
I read it, but I did not understand it clearly.
Can anyone explain it?
A: Per Firebase Cloud Messaging HTTP Protocol documentation:
The multicast_id is a required parameter in the FCM response payload, and is a unique ID that identifies the multicast message.
A multicast message is a notification that will be sent from a server and targeting multiple client applications.
To answer your question directly, this response parameter is just an identification ID of a multicast message sent to client applications.
I'm not sure yet, but looking at the Build App Server Send Requests documentation, you will only get this response parameter when you use Single Device Messaging targeting a single or array of registration tokens.
I hope this helps.
A: On a simpler note, a multicast_id is similar (if not identical enough) to a message_id.
In addition to looptheloop88's answer, the usual purpose of both the message_id and multicast_id is for tracking/logging. As I mentioned here:
There is currently no available API to make use of the message_ids/multicast_ids to retrieve the details of the delivery status of the message sent, other than using the FCM Diagnostics Page.
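As a concrete illustration, the legacy FCM HTTP response carries multicast_id at the top level, next to the success/failure counts. A minimal Python sketch parsing a hypothetical response body (the values below are made up, shaped after the legacy protocol docs):

```python
import json

# Hypothetical response body, shaped like a legacy FCM HTTP response.
raw = (
    '{"multicast_id": 216, "success": 3, "failure": 0,'
    ' "canonical_ids": 0, "results": [{"message_id": "1:0408"}]}'
)

resp = json.loads(raw)
print(resp["multicast_id"])              # -> 216, identifies this multicast message
print(resp["success"], resp["failure"])  # -> 3 0
```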
| stackoverflow | {
"language": "en",
"length": 227,
"provenance": "stackexchange_0000F.jsonl.gz:879093",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586336"
} |
3e56f3c1a7398fe17c598d07b012eae42677ddc4 | Stackoverflow Stackexchange
Q: Airflow failed slack message How can I configure Airflow so that any failure in the DAG will (immediately) result in a slack message?
At this moment I manage it by creating a slack_failed_task:
slack_failed_task = SlackAPIPostOperator(
task_id='slack_failed',
channel="#datalabs",
trigger_rule='one_failed',
token="...",
text = ':red_circle: DAG Failed',
icon_url = 'http://airbnb.io/img/projects/airflow3.png',
dag=dag)
And set this task (one_failed) upstream from each other task in the DAG:
slack_failed_task << download_task_a
slack_failed_task << download_task_b
slack_failed_task << process_task_c
slack_failed_task << process_task_d
slack_failed_task << other_task_e
It works, but it's error prone since forgetting to add the task will skip the slack notifications and seems like a lot of work.
Is there perhaps a way to expand on the email_on_failure property in the DAG?
Bonus ;-) for including a way to pass the name of the failed task to the message.
A: Try the new SlackWebhookOperator, which is available in Airflow version >= 1.10.0
from airflow.contrib.operators.slack_webhook_operator import SlackWebhookOperator
slack_msg="Hi Wssup?"
slack_test = SlackWebhookOperator(
task_id='slack_test',
http_conn_id='slack_connection',
webhook_token='/1234/abcd',
message=slack_msg,
channel='#airflow_updates',
username='airflow_'+os.environ['ENVIRONMENT'],
icon_emoji=None,
link_names=False,
dag=dag)
Note: Make sure you have slack_connection added in your Airflow connections as
host=https://hooks.slack.com/services/
| Q: Airflow failed slack message How can I configure Airflow so that any failure in the DAG will (immediately) result in a slack message?
At this moment I manage it by creating a slack_failed_task:
slack_failed_task = SlackAPIPostOperator(
task_id='slack_failed',
channel="#datalabs",
trigger_rule='one_failed',
token="...",
text = ':red_circle: DAG Failed',
icon_url = 'http://airbnb.io/img/projects/airflow3.png',
dag=dag)
And set this task (one_failed) upstream from each other task in the DAG:
slack_failed_task << download_task_a
slack_failed_task << download_task_b
slack_failed_task << process_task_c
slack_failed_task << process_task_d
slack_failed_task << other_task_e
It works, but it's error prone since forgetting to add the task will skip the slack notifications and seems like a lot of work.
Is there perhaps a way to expand on the email_on_failure property in the DAG?
Bonus ;-) for including a way to pass the name of the failed task to the message.
A: Try the new SlackWebhookOperator, which is available in Airflow version >= 1.10.0
from airflow.contrib.operators.slack_webhook_operator import SlackWebhookOperator
slack_msg="Hi Wssup?"
slack_test = SlackWebhookOperator(
task_id='slack_test',
http_conn_id='slack_connection',
webhook_token='/1234/abcd',
message=slack_msg,
channel='#airflow_updates',
username='airflow_'+os.environ['ENVIRONMENT'],
icon_emoji=None,
link_names=False,
dag=dag)
Note: Make sure you have slack_connection added in your Airflow connections as
host=https://hooks.slack.com/services/
A:
How can I configure Airflow so that any failure in the DAG will
(immediately) result in a slack message?
Using airflow.providers.slack.hooks.slack_webhook.SlackWebhookHook you can achieve that, by passing an on_failure_callback function at the DAG level.
Bonus ;-) for including a way to pass the name of the failed task to
the message.
def fail():
raise Exception("Task failed intentionally for testing purpose")
def success():
print("success")
def task_fail_slack_alert(context):
tis_dagrun = context['ti'].get_dagrun().get_task_instances()
failed_tasks = []
for ti in tis_dagrun:
if ti.state == State.FAILED:
# Adding log url
failed_tasks.append(f"<{ti.log_url}|{ti.task_id}>")
dag=context.get('task_instance').dag_id
exec_date=context.get('execution_date')
blocks = [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":red_circle: Dag Failed."
}
},
{
"type": "section",
"block_id": f"section{uuid.uuid4()}",
"text": {
"type": "mrkdwn",
"text": f"*Dag*: {dag} \n *Execution Time*: {exec_date}"
},
"accessory": {
"type": "image",
"image_url": "https://raw.githubusercontent.com/apache/airflow/main/airflow/www/static/pin_100.png",
"alt_text": "Airflow"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"Failed Tasks: {', '.join(failed_tasks)}"
}
}
]
failed_alert = SlackWebhookHook(
http_conn_id='slack-airflow',
channel="#airflow-notifications",
blocks=blocks,
username='airflow'
)
failed_alert.execute()
return
default_args = {
'owner': 'airflow'
}
with DAG(
dag_id="slack-test",
default_args=default_args,
start_date=datetime(2021,8,19),
schedule_interval=None,
on_failure_callback=task_fail_slack_alert
) as dag:
task_1 = PythonOperator(
task_id="slack_notification_test",
python_callable=fail
)
task_2 = PythonOperator(
task_id="slack_notification_test2",
python_callable=success
)
A: The BaseOperator supports 'on_failure_callback' parameter:
on_failure_callback (callable) – a function to be called when a task instance of this task fails. a context dictionary is passed as a single parameter to this function. Context contains references to related objects to the task instance and is documented under the macros section of the API.
I have not tested this but you should be able to define a function which posts to slack on failure and pass it to each task definition. To get the name of the current task, you can use the {{ task_id }} template.
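A minimal sketch of such a callback, kept framework-free so the message building is easy to check; the webhook URL, channel, and posting mechanism are placeholders and not part of any specific Airflow API:

```python
from types import SimpleNamespace

def build_failure_message(context):
    """Build a Slack payload from an Airflow failure-callback context dict."""
    ti = context["task_instance"]
    return {"text": ":red_circle: Task {} in DAG {} failed".format(ti.task_id, ti.dag_id)}

# Quick check with a fake context (real Airflow passes the context dict itself):
fake_ctx = {"task_instance": SimpleNamespace(task_id="download_task_a", dag_id="etl")}
print(build_failure_message(fake_ctx)["text"])
# -> :red_circle: Task download_task_a in DAG etl failed

# Wiring it up (sketch), e.g. with the requests library:
#   def on_failure(context):
#       requests.post(SLACK_WEBHOOK_URL, json=build_failure_message(context))
#   then pass on_failure_callback=on_failure on the task or DAG.
```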
A: Maybe this example will be helpful:
def slack_failed_task(contextDictionary, **kwargs):
failed_alert = SlackAPIPostOperator(
task_id='slack_failed',
channel="#datalabs",
token="...",
text = ':red_circle: DAG Failed',
owner = '_owner',)
return failed_alert.execute(context=contextDictionary)
task_with_failed_slack_alerts = PythonOperator(
task_id='task0',
python_callable=<file to execute>,
on_failure_callback=slack_failed_task,
provide_context=True,
dag=dag)
A: I would prefer to add the callback to the DAG and to be inhered by all its tasks:
def on_failure_callback(context):
webhook_url = os.getenv('SLACK_WEBHOOK_TOKEN')
slack_data = {
'text': "@here DAG {} Failed".format(context['dag'].dag_id)
}
response = requests.post(
webhook_url, data=json.dumps(slack_data),
headers={'Content-Type': 'application/json'}
)
dag = DAG(
dag_id='dag_with_templated_dir',
start_date=datetime(2020, 1, 1),
on_failure_callback=on_failure_callback
)
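For reference, the payload-building part of such a callback can be exercised without Airflow installed. The sketch below is purely illustrative: `context` is a plain dict standing in for Airflow's task context, and the stand-in objects only expose the attributes the callback reads:

```python
from types import SimpleNamespace

def build_slack_payload(context):
    """Build the Slack message body used by an on_failure_callback.

    `context` mirrors the dict Airflow passes to the callback; only the
    keys actually read here need to exist.
    """
    dag_id = context["dag"].dag_id
    failed = [ti.task_id
              for ti in context["dag_run"].get_task_instances(state="failed")]
    return {
        "text": "@here DAG {} Failed".format(dag_id),
        "blocks": [
            {
                "type": "section",
                "text": {"type": "mrkdwn",
                         "text": "Failed Tasks: {}".format(", ".join(failed))},
            }
        ],
    }

# Stand-ins for the Airflow objects the callback reads.
dag = SimpleNamespace(dag_id="slack-test")
dag_run = SimpleNamespace(
    get_task_instances=lambda state: [SimpleNamespace(task_id="slack_notification_test")]
)
payload = build_slack_payload({"dag": dag, "dag_run": dag_run})
print(payload["text"])  # @here DAG slack-test Failed
```

The resulting dict is exactly what the webhook call above would serialize and post.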
| stackoverflow | {
"language": "en",
"length": 540,
"provenance": "stackexchange_0000F.jsonl.gz:879102",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586356"
} |
b0182b0c9a4716f14aad07c594a485bf9244ac68 | Stackoverflow Stackexchange
Q: How to hide and show a menu in BottomNavigationView? I have created an activity with BottomNavigationView and it is working fine.
But I am unable to hide a menu item.
I have tried this code.
bottomNavigationView.getMenu().findItem(R.id.tab_email).setVisible(false);
I even tried to call bottomNavigationView.invalidate();
All inputs are appreciated.
A: bottomNavigation.getMenu().removeItem(R.id.nav_user_download);
Call the removeItem(int menu_item_id) method.
I have tried the hide/show method from @Sachin Rao, but it did not work very well, so I finally found this approach; it works well for me.
A: This is working for me :
/**
* Hides specified item in BottomNavigation View
*
* @param id - id of the menu item - example R.id.profile
* @param view - instance of BottomNavigationView
*/
public void hideBottomNavigationItem(int id, BottomNavigationView view) {
BottomNavigationMenuView menuView = (BottomNavigationMenuView) view.getChildAt(0);
for (int i = 0; i < menuView.getChildCount(); i++) {
BottomNavigationItemView itemView = (BottomNavigationItemView) menuView.getChildAt(i);
if (itemView.getId() == id) {
itemView.setVisibility(View.GONE);
}
}
}
Similarly, if you set itemView.setVisibility(View.VISIBLE); instead of itemView.setVisibility(View.GONE); in the above code, that particular menu item will be visible.
Update: easily hide/show menu item in BottomNavigationMenu
public void hideBottomNavigationItem(int id, BottomNavigationView view) {
BottomNavigationMenuView menuView = (BottomNavigationMenuView) view.getChildAt(0);
menuView.findViewById(id).setVisibility(View.GONE);
}
public void showBottomNavigationItem(int id, BottomNavigationView view) {
BottomNavigationMenuView menuView = (BottomNavigationMenuView) view.getChildAt(0);
menuView.findViewById(id).setVisibility(View.VISIBLE);
}
A: You can use the following example:
BottomNavigationMenuView menuView = rootView.findViewById(R.id.my_bottom_nav_menu);
menuView.findViewById(R.id.action_menu_need_to_hide).setVisibility(View.GONE);
| stackoverflow | {
"language": "en",
"length": 213,
"provenance": "stackexchange_0000F.jsonl.gz:879128",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586435"
} |
672069a188080bd51e4a36351a392c29c3707723 | Stackoverflow Stackexchange
Q: Is it safe to store AES IV prepended to CipherText in a DB? I would like to store AES encrypted data in a DB field.
Is it safe to store the AES IV (unique for each row) prepended to the cipher text? e.g.
IV_CipherText
Both would be encoded in Base64.
The key used will not be stored in the database.
A: It is safe. The purpose of the initialization vector is to insert some randomness into the resulting cipher (so the same message won't produce the same ciphertext). It just must be unpredictable.
It can be stored publicly in the database. Often it is prepended as the first bytes of the message. As said, it exists to avoid statistical attacks.
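To make the layout concrete, here is a minimal round-trip sketch of the prepend/split format (Python, standard library only; no real AES is performed — os.urandom simply stands in for the IV and the cipher output, since only the storage layout is being illustrated):

```python
import base64
import os

IV_LEN = 16  # AES block size: the IV is always 16 bytes

def pack_for_db(iv: bytes, ciphertext: bytes) -> str:
    """Prepend the IV to the ciphertext and Base64-encode for a text column."""
    return base64.b64encode(iv + ciphertext).decode("ascii")

def unpack_from_db(stored: str) -> tuple:
    """Split the stored value back into (iv, ciphertext)."""
    raw = base64.b64decode(stored)
    return raw[:IV_LEN], raw[IV_LEN:]

iv = os.urandom(IV_LEN)       # unique per row
ciphertext = os.urandom(48)   # stand-in for real AES output
stored = pack_for_db(iv, ciphertext)
iv2, ct2 = unpack_from_db(stored)
assert (iv2, ct2) == (iv, ciphertext)
```

Because the IV has a fixed length, no separator is needed: slicing the first 16 bytes always recovers it.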
| stackoverflow | {
"language": "en",
"length": 120,
"provenance": "stackexchange_0000F.jsonl.gz:879132",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586440"
} |
a63113aef6fd1ab6a9e26f2b5c27bd8e5b23e69f | Stackoverflow Stackexchange
Q: How to use shiny actionButton to show & hide text output? I'm trying to write a simple shiny app with an actionButton. When the actionButton is pressed, some text output should be printed directly below. The code below takes me part way to a solution:
shinyApp(
ui = shinyUI( fluidPage(
actionButton("button", "don't press the button"),
verbatimTextOutput("text")
)
),
server = function(input, output, session){
observeEvent(input$button, {
output$text <- renderText({"ahh you pressed it"})
})
}
)
There are two things I'd like to change, but not sure how:
1) The code above shows an empty gray box before the button is pressed - I'd like there to be nothing there prior to the button being pressed. It looks like conditionalPanel might be the right approach, but not sure how to implement this.
2) Can the above code be adapted so that once the button is pressed a second time, the text output is hidden again?
A: You can try this with shinyjs, hidden and toggle
library(shiny)
library(shinyjs)
shinyApp(
ui = shinyUI(fluidPage(useShinyjs(),
actionButton("button", "don't press the button"),
hidden(
div(id='text_div',
verbatimTextOutput("text")
)
)
)
),
server = function(input, output, session){
observeEvent(input$button, {
toggle('text_div')
output$text <- renderText({"ahh you pressed it"})
})
}
)
| stackoverflow | {
"language": "en",
"length": 199,
"provenance": "stackexchange_0000F.jsonl.gz:879159",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586505"
} |
8d972cdc43ec5df1ff5f67fd16d9123cb715e406 | Stackoverflow Stackexchange
Q: can I bind an attribute on an element with ng-non-bindable? I am trying to get an html5 progress bar working in Angular.
Similarly to this user I have an old version of ui.bootstrap on the page which has it's own progress directive causing my <progress></progress> to be converted into <div class="progress ng-isolate-scope" ng-transclude=""></div>
I can fix this problem by using ng-non-bindable
<progress ng-non-bindable></progress>
However now that this progress element is non-bindable, I can't figure out how to dynamically set the value from my controller. Ideally I would like to do something like this:
<progress ng-non-bindable ng-bind-value="ctrl.currentProgress"></progress>
Is there any way I can use ng-non-bindable on the element, but somehow bind the value attribute so I can dynamically set it?
ng-bind-attr looked promising, but I wasn't able to get it working or find documentation for it.
I am using Angular 1.5.8 and ideally I wouldn't update bootstrap at this point (to remove the 'progress' name clash) unless I have to.
| stackoverflow | {
"language": "en",
"length": 155,
"provenance": "stackexchange_0000F.jsonl.gz:879171",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586531"
} |
ca6ce0e2a08633b4c85b8b608b94160a878fe9dc | Stackoverflow Stackexchange
Q: How to trigger focus to the Google Chrome address bar (OmniBox)? I am making a Google Chrome extension that overrides the New Tab (the page that appears when the user opens a new tab or window).
I want to trigger the focus on the Google's address bar on a button click. I've read the chrome.omnibox docs, but I haven't found a method that triggers its focus.
Is there a trick how I can do that?
A: The answer is: No!
I've tried a million workarounds and tricks, but nothing helped.
My conclusion is that this action is considered to be a security (and privacy) vulnerability, and therefore is not allowed.
| stackoverflow | {
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:879176",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586548"
} |
c08935c5fec20896275618e69c037d8309aae348 | Stackoverflow Stackexchange
Q: Angular 2 : Generating word and pdf from template Here I'm back again.
I have a word template with some fields to fill and I would like to use this template in my Angular 2 application to generate the same word with fields filled (with an object).
As I take the data from a database, I absolutely need to use an object coming from a service, and I need to use a particular template.
I saw that there are some APIs like office-js, jsPDF, etc., but I don't know which one to choose or how to use them.
What do you guys recommend?
Thank you in advance !
| stackoverflow | {
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:879185",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586568"
} |
aae7597c61c09516ceb7edb621b155ba86469788 | Stackoverflow Stackexchange
Q: How can we add a new line in C#? How can we write output on a second line in C#?
How can we write the given code on two lines?
MessageBox.Show("my name is " + LineBreak, "yazdan" ,
MessageBoxButton.OK, MessageBoxIcon.Information);
How can we start a new line after "my name is " and before "yazdan"?
A: You could try this:
"my name is " + Environment.NewLine + "yazdan"
A: MessageBox.Show("my name is " + Environment.NewLine + "yazdan", "yazdan", MessageBoxButton.OK, MessageBoxIcon.Information);
A: "my name is " + "\n" + "yazdan"
you can use either "\n" or Environment.NewLine
A: It depends on your environment: in Unix it is just "\n", but in Windows environments you need "\r\n". These are both written as strings.
\r = Carriage return
\n = Line feed
This is the equivalent to vbCrLf in VB.NET
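Since the CR/LF distinction is cross-language, it can be checked quickly outside C# as well — a small illustrative sketch (Python here only for brevity; os.linesep plays the role of Environment.NewLine):

```python
import os

unix_text = "my name is\nyazdan"       # LF only (Unix convention)
windows_text = "my name is\r\nyazdan"  # CR + LF (Windows convention)

# splitlines() understands both conventions, so both split identically.
assert unix_text.splitlines() == windows_text.splitlines() == ["my name is", "yazdan"]

# os.linesep is the platform's native sequence, like Environment.NewLine in C#.
print(repr(os.linesep))
```

Using the platform constant (Environment.NewLine / os.linesep) instead of a hard-coded "\n" keeps the output correct on both systems.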
| stackoverflow | {
"language": "en",
"length": 136,
"provenance": "stackexchange_0000F.jsonl.gz:879210",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586655"
} |
7c4046b19a999e021dc1953167aa49c0fe6cb764 | Stackoverflow Stackexchange
Q: datatable pagination in laravel I am using laravel 5.0
I am also using datatable jquery plugin to display grid.
Controller method
public function index() {
$jobs = \App\Job::orderBy('created_at', 'DESC')->limit(1000)->get();
return View::make('jobs.index', ['jobs' => $jobs]);
}
The issue:
Right now I have hard-coded ->limit(1000) to display 1000 jobs in the datatable grid, but I have more than 1000 records to display.
What I want?
I want to display 500 records in the grid, and then the next 500 records.
I am not sure if there is any callback function available in the datatable plugin.
I need a dynamic way to load the next 500.
NOTE:
I am not willing to use this scrolling solution:
https://datatables.net/extensions/scroller/examples/initialisation/server-side_processing.html
A: I think the above answer should be extended with a search feature.
Updating the answer:
$filter = $request->get('search');
$search = (isset($filter['value']))? $filter['value'] : false;
where('somecolumnonyourdb','like', '%'.$search.'%')
This works for me
A: You can use an ajax data source:
Please visit: https://datatables.net/examples/ajax/objects.html
Example PHP Script:
// function will process the ajax request
public function getMembers(Request $request) {
$draw = $request->get('draw');
$start = $request->get('start');
$length = $request->get('length');
$filter = $request->get('search');
$search = (isset($filter['value'])) ? $filter['value'] : false;
$total_members = 1000; // get your total no of data;
$members = $this->methodToGetMembers($start, $length); //supply start and length of the table data
$data = array(
'draw' => $draw,
'recordsTotal' => $total_members,
'recordsFiltered' => $total_members,
'data' => $members,
);
echo json_encode($data);
}
Example JavaScript :
$('#all-member-table').DataTable( {
"processing": true,
"serverSide": true,
"ajax": {
url: base_url+"ajax/members"
},
"columns": [
{ data: '1' },
{ data: '2' },
{ data: '3' },
{ data: '4' },
{ data: '5' },
]
} );
Example HTML:
<table id="all-member-table">
<thead>
<tr>
<th>Column1</th>
<th>Column2</th>
<th>Column3</th>
<th>Column4</th>
<th>Column5</th>
</tr>
</thead>
</table>
A: You can use standard pagination:
$jobs = \App\Job::latest()->paginate(500);
Or create it manually.
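The server-side contract DataTables expects (draw/start/length in; draw/recordsTotal/recordsFiltered/data out) is language-agnostic. Below is a small, hedged sketch of that slicing logic in Python (chosen only for brevity — the field names follow the DataTables protocol, and the job list is invented):

```python
def datatables_page(records, draw, start, length, search=""):
    """Return one DataTables server-side response page from a list of records."""
    # Apply the optional global search, then slice the requested window.
    filtered = [r for r in records if search.lower() in r.lower()] if search else records
    return {
        "draw": draw,
        "recordsTotal": len(records),
        "recordsFiltered": len(filtered),
        "data": filtered[start:start + length],
    }

jobs = ["job-{:04d}".format(i) for i in range(1200)]
page = datatables_page(jobs, draw=1, start=500, length=500)
assert page["recordsTotal"] == 1200
assert len(page["data"]) == 500 and page["data"][0] == "job-0500"
```

DataTables sends `start` and `length` on every page change, so the server never loads more than one window of rows at a time — the same idea the PHP controller above implements with its `$start`/`$length` parameters.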
| stackoverflow | {
"language": "en",
"length": 289,
"provenance": "stackexchange_0000F.jsonl.gz:879233",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586737"
} |
b77ea31652be12cd1e5e0720d30658b8d7f9fc07 | Stackoverflow Stackexchange
Q: Running sess.run (tensorflow) in parallel using python multiprocessing on windows 7 I have successfully trained a CNN and saved the model using tf.train.Saver() object.
In my production code, I need to load the model and run the predict op to get the prediction. I am able to do this with tf.train.Saver and its restore method.
Now I need to run this predict op in parallel. I am using Python joblib's Parallel method to get the work done. I am running on Windows 7.
The problem is that when I use threading as the backend it runs fine, but I want to use multiprocessing so as to assign CPU affinity.
I tried passing in the session object, but the sess.run command hangs forever.
So the other option is to create a new session in each worker process. If I create the graph in the worker process, then the subsequent creation of the graph changes the op names and I can't restore my weights into the graph.
Can anyone help?
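For the "new session per worker" option, a common pattern is to build the graph and session once per process in a pool initializer, so restore happens exactly once per worker rather than per call. The sketch below shows only the structure — `_init_worker` and `_predict` are stand-ins (no TensorFlow is imported), and the fork start method is assumed, as on Linux:

```python
import multiprocessing as mp

_model = None  # one per worker process, set by the initializer

def _init_worker():
    """Runs once in each worker: the place to build the graph and restore weights."""
    global _model
    _model = {"weights": 2}  # stand-in for tf.Session() + Saver.restore(...)

def _predict(x):
    # Stand-in for sess.run(predict_op, feed_dict={...})
    return x * _model["weights"]

def parallel_predict(inputs, workers=2):
    # fork start method assumed (Linux); under spawn, guard with __main__.
    ctx = mp.get_context("fork")
    with ctx.Pool(processes=workers, initializer=_init_worker) as pool:
        return pool.map(_predict, inputs)

print(parallel_predict([1, 2, 3]))  # [2, 4, 6]
```

Because each worker builds its own graph from scratch in the initializer, op names stay identical to the original graph and the saved checkpoint restores cleanly.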
| stackoverflow | {
"language": "en",
"length": 165,
"provenance": "stackexchange_0000F.jsonl.gz:879268",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586873"
} |
2c9e4c6bce896ceac09e8b585f11a3eeeec30fa0 | Stackoverflow Stackexchange
Q: Using readline(prompt = "") in rMarkdown I'm currently attempting to automate some statistical report generation; however, to do so I would like to collect a couple of pieces of information from the user before beginning, then create a markdown report from it.
When knitting the document, however, it hangs forever because it has no route to receive the user input. Does anyone know of one, or would it be a case of using a separate R script to gather the information and then calling the report generation from within it using rmarkdown::render?
A: You could embed a Shiny app or make use of parameterized reports in the R Markdown document. Without further detail (e.g. some code), it is hard to tell you more.
I hope that this helps, though.
| stackoverflow | {
"language": "en",
"length": 130,
"provenance": "stackexchange_0000F.jsonl.gz:879272",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586879"
} |
2cef122df1563a5b9911c57c1f2b3ea58f7de581 | Stackoverflow Stackexchange
Q: Router canActivate with more than 1 guard Does the Angular (v4.1.1) router canActivate take more than one function
{
path: '',
component: SomeComponent,
canActivate: [guard1, guard2, ...]
}
Should something like that work? If not, then why would it take a list if it's supposed to take just one guard?
Because I have something similar and even though guard1 returns false, guard2 will still be executed.
Thanks in advance
Angular 4.1.1
A: This should work but I believe the guards are executed in parallel not in a sequence. So the second one does not wait until the first one return a value. This should not really affect you if your guards are synchronous, but if they are asynchronous, you will run into this "issue".
If you need your guards to depend on each other, you could separate the common part of the check and all your guards could call that logic. But I think in most cases this should not even be necessary, because if only one of them fails, the route is not activated.
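To illustrate the point outside Angular, here is a hedged asyncio sketch (Python, purely illustrative — the guard functions are invented) showing that with concurrent evaluation every guard runs to completion even when one resolves to false, and activation requires all of them to pass:

```python
import asyncio

calls = []  # records which guards actually executed

async def guard1():
    calls.append("guard1")
    return False  # denies activation

async def guard2():
    calls.append("guard2")
    return True

async def can_activate(guards):
    # Like the router: run all guards concurrently, activate only if all pass.
    results = await asyncio.gather(*(g() for g in guards))
    return all(results)

allowed = asyncio.run(can_activate([guard1, guard2]))
print(allowed)        # False — route not activated
print(sorted(calls))  # ['guard1', 'guard2'] — guard2 still executed
```

Because the guards are launched together rather than chained, guard2 cannot observe guard1's result — which matches the behavior described in the question.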
| stackoverflow | {
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:879298",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44586947"
} |