id (string, 40 chars) | text (string, 29–2.03k chars) | original_text (string, 3–154k chars) | subdomain (20 classes) | metadata (dict)
---|---|---|---|---|
507d730fa89656edce63146c55ee669225a4af63 | Stackoverflow Stackexchange
Q: Laravel 5.4 : The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths This is a Laravel 5.4 setup of my web app.
One thing happens repeatedly on page load, and because of it I am not able to get data on my page.
Runtime exception: The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths.
I get this error repeatedly.
I would be thankful for any help.
A: Make sure your app config has key and cipher set. Also make sure your .env file does not have an empty APP_KEY entry. Finally run:
php artisan key:generate
| Q: Laravel 5.4 : The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths This is a Laravel 5.4 setup of my web app.
One thing happens repeatedly on page load, and because of it I am not able to get data on my page.
Runtime exception: The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths.
I get this error repeatedly.
I would be thankful for any help.
A: Make sure your app config has key and cipher set. Also make sure your .env file does not have an empty APP_KEY entry. Finally run:
php artisan key:generate
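For reference, a minimal sketch of the config entries that answer refers to (Laravel 5.4 defaults; the values shown are illustrative):
// config/app.php (illustrative excerpt)
'key' => env('APP_KEY'),    // read from .env; must not be empty
'cipher' => 'AES-256-CBC',  // expects a 32-byte key (AES-128-CBC expects 16 bytes)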
A: Make sure to set APP_KEY in the .env file and then run the following command in a terminal at your application root.
php artisan key:generate
A: I was getting the same error.
Run this command:
php artisan key:generate
and then this command:
php artisan config:clear
A: I was getting the same error.
Run this command:
php artisan key:generate
It updated the app key in .env and worked for me.
You can take further help from the link.
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:872265",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44564962"
} |
0591b12326acd3fdfd05c1608f6675064bc4952f | Stackoverflow Stackexchange
Q: how to bind HorizontalOptions property of label in xamarin.forms How to bind the HorizontalOptions attribute of a Label in Xamarin.Forms:
<Label TextColor="#01B6FF" Text="{Binding RecepientFullName}" FontSize="Small" HorizontalOptions="{Binding TextAlign}" />
A: <ContentPage.Resources>
<ResourceDictionary >
<local:ChatTextAlignmentConverter x:Key="ChatTextAlignmentConverter">
</local:ChatTextAlignmentConverter>
</ResourceDictionary>
</ContentPage.Resources>
<Frame Margin="10,0,10,0" Padding="10,5,10,5" HorizontalOptions="{Binding TextAlign, Converter={StaticResource ChatTextAlignmentConverter}}" BackgroundColor="{Binding BackgroundColor}"/>
public class ChatTextAlignmentConverter: IValueConverter
{
public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
if (value != null)
{
string valueAsString = value.ToString();
switch (valueAsString)
{
case ("EndAndExpand"):
{
return LayoutOptions.EndAndExpand;
}
case ("StartAndExpand"):
{
return LayoutOptions.StartAndExpand;
}
default:
{
return LayoutOptions.StartAndExpand;
}
}
}
else
{
return LayoutOptions.StartAndExpand;
}
}
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
{
return null;
}
}
| Q: how to bind HorizontalOptions property of label in xamarin.forms How to bind the HorizontalOptions attribute of a Label in Xamarin.Forms:
<Label TextColor="#01B6FF" Text="{Binding RecepientFullName}" FontSize="Small" HorizontalOptions="{Binding TextAlign}" />
A: <ContentPage.Resources>
<ResourceDictionary >
<local:ChatTextAlignmentConverter x:Key="ChatTextAlignmentConverter">
</local:ChatTextAlignmentConverter>
</ResourceDictionary>
</ContentPage.Resources>
<Frame Margin="10,0,10,0" Padding="10,5,10,5" HorizontalOptions="{Binding TextAlign, Converter={StaticResource ChatTextAlignmentConverter}}" BackgroundColor="{Binding BackgroundColor}"/>
public class ChatTextAlignmentConverter: IValueConverter
{
public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
if (value != null)
{
string valueAsString = value.ToString();
switch (valueAsString)
{
case ("EndAndExpand"):
{
return LayoutOptions.EndAndExpand;
}
case ("StartAndExpand"):
{
return LayoutOptions.StartAndExpand;
}
default:
{
return LayoutOptions.StartAndExpand;
}
}
}
else
{
return LayoutOptions.StartAndExpand;
}
}
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
{
return null;
}
}
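Note that the local: prefix used above assumes an xmlns declaration on the page root, for example (the CLR namespace and assembly names here are placeholders):
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:MyApp.Converters;assembly=MyApp">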
| stackoverflow | {
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:872344",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565205"
} |
4c626b21c272ee0d8a9f3d0853c18b9b3e7b6f37 | Stackoverflow Stackexchange
Q: Selecting a row in DataGrid by clicking Row I have a DataGrid in which I am trying to change the selected item by selecting the row rather than the cell. As you can see from the image below, when I click outside the cell the item doesn't update.
<DataGrid x:Name="customerListBox" SelectedItem="{Binding SelectedCustomer, UpdateSourceTrigger=PropertyChanged}" SelectionMode="Single" IsReadOnly="True" ItemsSource="{Binding Customers}" Margin="10,57,10,10" AlternationCount="2" BorderThickness="1" SnapsToDevicePixels="True" AutoGenerateColumns="False" BorderBrush="Black" Foreground="Black">
<DataGrid.Columns>
<DataGridTextColumn Binding="{Binding Id}" Header="Id"/>
<DataGridTextColumn Binding="{Binding Name}" Header="Name"/>
<DataGridTextColumn Binding="{Binding Phone}" Header="Phone"/>
<DataGridTextColumn Binding="{Binding Email}" Header="Email"/>
</DataGrid.Columns>
</DataGrid>
What I can do to get it working is set the last column's width to *. However, this makes the header centered and looks messy on a wide-screen monitor.
<DataGridTextColumn Binding="{Binding Email}" Width="*" Header="Email"/>
A: I have a workaround for this issue: just add a dummy empty column after the last column with Width="*".
| Q: Selecting a row in DataGrid by clicking Row I have a DataGrid in which I am trying to change the selected item by selecting the row rather than the cell. As you can see from the image below, when I click outside the cell the item doesn't update.
<DataGrid x:Name="customerListBox" SelectedItem="{Binding SelectedCustomer, UpdateSourceTrigger=PropertyChanged}" SelectionMode="Single" IsReadOnly="True" ItemsSource="{Binding Customers}" Margin="10,57,10,10" AlternationCount="2" BorderThickness="1" SnapsToDevicePixels="True" AutoGenerateColumns="False" BorderBrush="Black" Foreground="Black">
<DataGrid.Columns>
<DataGridTextColumn Binding="{Binding Id}" Header="Id"/>
<DataGridTextColumn Binding="{Binding Name}" Header="Name"/>
<DataGridTextColumn Binding="{Binding Phone}" Header="Phone"/>
<DataGridTextColumn Binding="{Binding Email}" Header="Email"/>
</DataGrid.Columns>
</DataGrid>
What I can do to get it working is set the last column's width to *. However, this makes the header centered and looks messy on a wide-screen monitor.
<DataGridTextColumn Binding="{Binding Email}" Width="*" Header="Email"/>
A: I have a workaround for this issue: just add a dummy empty column after the last column with Width="*".
A: Have you tried this?
customerListBox.SelectionUnit = DataGridSelectionUnit.FullRow;
You can also use
customerListBox.SelectionMode = DataGridSelectionMode.Extended;
to allow multiple selections if you need them.
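Equivalently, the same selection behaviour can be declared in the XAML rather than in code-behind (a sketch based on the markup above; remaining attributes omitted):
<DataGrid x:Name="customerListBox" SelectionUnit="FullRow" SelectionMode="Single" ItemsSource="{Binding Customers}" ...>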
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:872372",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565272"
} |
f485e5ea8c496cbc66096bfa8c020843ee0a5454 | Stackoverflow Stackexchange
Q: Comparing slices in python Inspecting the slice class in Python with dir(), I see that it has attributes __le__ and __lt__. Indeed I saw that the following code works:
slice(1, 2) < slice(3, 4)
# True
However, I cannot see which logic is implemented for this comparison, nor its usecase. Can anyone point me to that?
I am not asking about tuple comparison. Even if slice and tuple are compared the same way, I don't think this makes my question a duplicate. What's more, I also asked for a possible usecase of slice comparison, which the suggested duplicate does not give.
A: Looking at the source code for slice reveals that the comparison is implemented by first converting the two objects into (start, stop, step) tuples, and then comparing those tuples:
https://github.com/python/cpython/blob/6cca5c8459cc439cb050010ffa762a03859d3051/Objects/sliceobject.c#L598
As to the use cases, I am not sure of the authors' intent. I do note that there don't appear to be any comparison unit tests for anything other than equality:
https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/test/test_slice.py#L87
| Q: Comparing slices in python Inspecting the slice class in Python with dir(), I see that it has attributes __le__ and __lt__. Indeed I saw that the following code works:
slice(1, 2) < slice(3, 4)
# True
However, I cannot see which logic is implemented for this comparison, nor its usecase. Can anyone point me to that?
I am not asking about tuple comparison. Even if slice and tuple are compared the same way, I don't think this makes my question a duplicate. What's more, I also asked for a possible usecase of slice comparison, which the suggested duplicate does not give.
A: Looking at the source code for slice reveals that the comparison is implemented by first converting the two objects into (start, stop, step) tuples, and then comparing those tuples:
https://github.com/python/cpython/blob/6cca5c8459cc439cb050010ffa762a03859d3051/Objects/sliceobject.c#L598
As to the use cases, I am not sure of the authors' intent. I do note that there don't appear to be any comparison unit tests for anything other than equality:
https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/test/test_slice.py#L87
A: Comparing tuples: (1, 2) < (3, 4) returns True because (1, 2) comes before (3, 4).
However, (1, 2) < (0, 4) returns False because (1, 2) comes after (0, 4).
NB: here < and > don't so much mean "smaller than" and "greater than" as "comes before" and "comes after" in lexicographic order.
So, in other words, you're comparing which one comes before and which one comes after.
Some "odd" (or misleading) cases with < and >:
(1, 2) < (3, 4, 5) returns True because the comparison is already decided at the first element (1 < 3); when one tuple is a prefix of the other, the shorter tuple compares as smaller. Either way, (1, 2) comes before (3, 4, 5).
And:
(0, 1) < (1, 0) returns True because (0, 1) comes before (1, 0).
Another case:
(0, 1, 20000) < (0, 3, 1) returns True because (0, 1, 20000) comes before (0, 3, 1).
The same logic applies to slices, lists, and even strings.
For more information, visit this answer.
A: Python data model only mentions that slice objects have three read-only attributes and one method. It does not mention any other properties of slices.
As mentioned by @NPE, CPython implementation indeed provides a comparison for slice objects, which simply treats slice as a tuple of (start, end, step). I've checked it with a small Python program which confirmed that:
vals = []
for a in range(-5, 5):
for b in range(-5, 5):
for c in range(-5, 5):
vals.append((a, b, c))
for x in vals:
for y in vals:
assert (slice(*x) < slice(*y)) == (x < y)
However, that looks like a non-standard extension. For example, Jython also implements comparison for slices, but in a different way. Moreover, it looks like it implements comparison for all possible pairs of objects by comparing objects of the same type by their ids, which is propagated to slices.
So, order of slices in Jython is non-deterministic. For example, the following snippet prints True True on my system with Jython and True False with CPython:
print(slice(1, 2) < slice(1, 3))
print(slice(1, 3) < slice(1, 2))
Summarizing: __lt__ is implemented in CPython for some obscure reason, but it's not described anywhere in documentation and other implementations may behave not only differently, but "incorrectly" (in mathematical sense). So, one should not compare slices for inequality.
| stackoverflow | {
"language": "en",
"length": 555,
"provenance": "stackexchange_0000F.jsonl.gz:872375",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565297"
} |
ca740b9e20b423746863cdec385c3a27e2531733 | Stackoverflow Stackexchange
Q: MySql - Get Min, Max, and 3 random Rows I want a MySql query to get 5 rows which include min price row, max price row and other 3 random row.
Table:
ID Product Price
1 data 2
2 data 20
3 data 55
4 data 24
5 data 2
6 data 15
7 data 10
8 data 33
9 data 3
10 data 30
Expected Result(with 3 random rows)
ID Product Price
1 data 2
3 data 55
4 data 24
6 data 15
7 data 10
Any help would be greatly appreciated!
A: You can do this like ,
select * from table order by Price asc limit 0,1
union all
select * from table order by Price desc limit 0,1
union all
select * from table order by RAND() limit 0,3
| Q: MySql - Get Min, Max, and 3 random Rows I want a MySql query to get 5 rows which include min price row, max price row and other 3 random row.
Table:
ID Product Price
1 data 2
2 data 20
3 data 55
4 data 24
5 data 2
6 data 15
7 data 10
8 data 33
9 data 3
10 data 30
Expected Result(with 3 random rows)
ID Product Price
1 data 2
3 data 55
4 data 24
6 data 15
7 data 10
Any help would be greatly appreciated!
A: You can do this like ,
select * from table order by Price asc limit 0,1
union all
select * from table order by Price desc limit 0,1
union all
select * from table order by RAND() limit 0,3
A: you can get it with UNION and sub-query:
(SELECT * FROM table ORDER BY Price ASC LIMIT 0 , 1 )
UNION ALL
(SELECT * FROM table ORDER BY Price DESC limit 0,1 )
UNION ALL
(SELECT * FROM table WHERE Price NOT IN ( SELECT MIN( `Price` ) FROM table UNION SELECT MAX( `Price` ) FROM table ) ORDER BY RAND( ) LIMIT 0 , 3 )
A: SELECT table.*
FROM table
, ( SELECT @minPrice := ( SELECT min(Price) FROM table ) minPrice
, @minId := ( SELECT id FROM table WHERE Price = @minPrice ORDER BY rand() LIMIT 1 )
, @maxPrice := ( SELECT max(Price) FROM table ) maxPrice
, @maxId := ( SELECT id FROM table WHERE Price = @maxPrice ORDER BY rand() LIMIT 1 )
) tmp
WHERE table.id in (@minId,@maxId)
UNION
(SELECT *
FROM table
WHERE Price not in (@minPrice,@maxPrice)
ORDER BY rand()
LIMIT 3
)
A: So ... get the min, get the max, get all the other records that are not min and max, sort by rand and return the first 3 that are not min and max.
Here is the SQL fiddle
-- get the first occurence of any item matching the products and prices returned
select min(top_bottom_and_3_random.id) id, top_bottom_and_3_random.product, top_bottom_and_3_random.price from (
-- get the min and the max
select distinct product, price from top_bottom_and_3_random where price in (
select max( price) from top_bottom_and_3_random
union select min( price ) from top_bottom_and_3_random
) union
select product, price from (
-- get 3 random rows that are not max or min
select rand() rand, product, price from (
select product, price from top_bottom_and_3_random where price not in (
select max( price) from top_bottom_and_3_random
union select min( price ) from top_bottom_and_3_random
) group by product, price
) rand_product_price_group
order by rand
limit 3
) random_mix
) min_max_3_random
inner join top_bottom_and_3_random
on min_max_3_random.product = top_bottom_and_3_random.product
and min_max_3_random.price = top_bottom_and_3_random.price
group by top_bottom_and_3_random.product, top_bottom_and_3_random.price
order by id
-- example results
id product price
1 data 2
3 data 55
4 data 24
7 data 10
10 data 30
A: SELECT x.*
FROM my_table x
JOIN (SELECT MIN(price) a, MAX(price) b FROM my_table) y
ORDER
BY COALESCE(x.price NOT IN (a,b))
, RAND()
LIMIT 5;
To address Keith's concerns... so, if we should always have 3, and either 1 or 5...
SELECT x.id
, x.product
, x.price
FROM my_table x
JOIN (
(SELECT id FROM my_table ORDER BY price, RAND() LIMIT 1)
UNION
(SELECT id FROM my_table ORDER BY price DESC, RAND() LIMIT 1)
) y
GROUP
BY x.id
, x.product
, x.price
ORDER
BY MIN(COALESCE(x.id != y.id))
, RAND()
LIMIT 5;
...but this is starting to be a bit of a mouthful - it may be smarter to solve this in application code.
A: You can take help of MySQL sub-query to get the desired result
select * from table WHERE Price = (SELECT MIN(Price ) FROM table)
union all
select * from table WHERE Price = (SELECT MAX(Price ) FROM table)
union all
select * from table order by RAND() limit 0,3
A: (select * from table order by Price limit 1)
union
(select * from table order by Price desc limit 4)
| stackoverflow | {
"language": "en",
"length": 668,
"provenance": "stackexchange_0000F.jsonl.gz:872386",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565328"
} |
3ef6c0a3c9526529bc4b64c0854287db571af779 | Stackoverflow Stackexchange
Q: Pug templates with the HTML Webpack Plugin I'm currently trying to get Pug templates running with the HTML Webpack Plugin. I followed their instruction to be able to use a custom template engine like Handlebars or in my case Pug. When I execute it, I'm getting this error:
ERROR in ./~/html-webpack-plugin/lib/loader.js!./src/index.pug
Module build failed: Error: Cannot find module 'pug'
My current config looks like this:
const HtmlWebpackPlugin = require('html-webpack-plugin');
const webpack = require('webpack');
const path = require('path');
module.exports = {
entry: {
global: './src/assets/scripts/global.js',
index: './src/assets/scripts/index.js',
},
output: {
filename: '[name].bundle.js',
path: path.resolve(__dirname, 'dist/js'),
publicPath: '/assets/js/',
},
module: {
rules: [
{test: /\.pug$/, use: 'pug-loader'},
],
},
plugins: [
new webpack.optimize.UglifyJsPlugin(),
new HtmlWebpackPlugin({
template: 'src/index.pug',
filename: 'index.html',
chunks: ['global', 'index'],
}),
],
};
Any suggestions?
A: I had to manually install the pug package itself.
| Q: Pug templates with the HTML Webpack Plugin I'm currently trying to get Pug templates running with the HTML Webpack Plugin. I followed their instruction to be able to use a custom template engine like Handlebars or in my case Pug. When I execute it, I'm getting this error:
ERROR in ./~/html-webpack-plugin/lib/loader.js!./src/index.pug
Module build failed: Error: Cannot find module 'pug'
My current config looks like this:
const HtmlWebpackPlugin = require('html-webpack-plugin');
const webpack = require('webpack');
const path = require('path');
module.exports = {
entry: {
global: './src/assets/scripts/global.js',
index: './src/assets/scripts/index.js',
},
output: {
filename: '[name].bundle.js',
path: path.resolve(__dirname, 'dist/js'),
publicPath: '/assets/js/',
},
module: {
rules: [
{test: /\.pug$/, use: 'pug-loader'},
],
},
plugins: [
new webpack.optimize.UglifyJsPlugin(),
new HtmlWebpackPlugin({
template: 'src/index.pug',
filename: 'index.html',
chunks: ['global', 'index'],
}),
],
};
Any suggestions?
A: I had to manually install the pug package itself.
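For example, with npm that is something along the lines of (pug-loader expects pug itself to be installed alongside it):
npm install --save-dev pug pug-loader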
A: You have to do it like this. You can add chunks if you want.
new HtmlWebpackPlugin({
filename: 'index.html',
template: path.join(__dirname, './src/index.pug')
});
| stackoverflow | {
"language": "en",
"length": 159,
"provenance": "stackexchange_0000F.jsonl.gz:872398",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565379"
} |
ae93c16ff0aa94c19f88506c71a8a860ff3b6315 | Stackoverflow Stackexchange
Q: How to remove function name prefix from winston logger? When logging with winston, it adds the level prefix to every line:
const winston = require('winston');
const logger = new (winston.Logger)();
logger.info('line 1');
logger.debug('line 2');
/*
output:
info: line 1
debug: line 2
*/
Is there a way to use the level functions without them outputting the prefix?
| Q: How to remove function name prefix from winston logger? When logging with winston, it adds the level prefix to every line:
const winston = require('winston');
const logger = new (winston.Logger)();
logger.info('line 1');
logger.debug('line 2');
/*
output:
info: line 1
debug: line 2
*/
Is there a way to use the level functions without them outputting the prefix?
| stackoverflow | {
"language": "en",
"length": 58,
"provenance": "stackexchange_0000F.jsonl.gz:872403",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565395"
} |
a00b47fa333fcf4b4c5a5191551e0fe33d5765f4 | Stackoverflow Stackexchange
Q: X Frame options meta tag I am getting this error in the browser console when using the following meta tag:
X-Frame-Options may only be set via an HTTP header sent along with a document. It may not be set inside <meta>.
This is the meta tag I'm using:
<meta http-equiv="X-Frame-Options" content="deny" />
Can anybody tell me what the cause is, as this code was not written by me? How can I remove this error?
A: This error simply means that X-Frame-Options cannot be used in a meta tag. It only works when sent as a HTTP header. See https://developer.mozilla.org/fr/docs/Web/HTTP/Headers/X-Frame-Options
Disregard the error, it is safe. If you need this feature, then you need to send the header along the response.
By the way, the same can easily be achieved with Javascript.
| Q: X Frame options meta tag I am getting this error in the browser console when using the following meta tag:
X-Frame-Options may only be set via an HTTP header sent along with a document. It may not be set inside <meta>.
This is the meta tag I'm using:
<meta http-equiv="X-Frame-Options" content="deny" />
Can anybody tell me what the cause is, as this code was not written by me? How can I remove this error?
A: This error simply means that X-Frame-Options cannot be used in a meta tag. It only works when sent as a HTTP header. See https://developer.mozilla.org/fr/docs/Web/HTTP/Headers/X-Frame-Options
Disregard the error, it is safe. If you need this feature, then you need to send the header along the response.
By the way, the same can easily be achieved with Javascript.
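For example, if the site happens to be served by nginx, the header can be added in the server block (a sketch; Apache and most application frameworks have an equivalent directive):
add_header X-Frame-Options "DENY";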
A: It blocks clickjacking attacks. See X-Frame-Options.
| stackoverflow | {
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:872410",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565421"
} |
69f707563b4892063293a72f79a35c4a417bc853 | Stackoverflow Stackexchange
Q: how to add button to lightning list view Is it possible to add a new action alongside the edit/delete values in the dropdown next to every object (image below)?
I followed this topic.
The only solution I have come up with is like the one in this article (click here):
Since the Winter '17 release we can add a button with a Visualforce page action in Lightning, but only on the header of a list view.
1. Create your Visualforce page.
2. Create a custom button that references your Visualforce page.
3. Add the action to your list view.
It looks like this:
So, is it possible to add an action to the dropdown as in the first picture?
| Q: how to add button to lightning list view Is it possible to add a new action alongside the edit/delete values in the dropdown next to every object (image below)?
I followed this topic.
The only solution I have come up with is like the one in this article (click here):
Since the Winter '17 release we can add a button with a Visualforce page action in Lightning, but only on the header of a list view.
1. Create your Visualforce page.
2. Create a custom button that references your Visualforce page.
3. Add the action to your list view.
It looks like this:
So, is it possible to add an action to the dropdown as in the first picture?
| stackoverflow | {
"language": "en",
"length": 109,
"provenance": "stackexchange_0000F.jsonl.gz:872411",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565423"
} |
083871d18a7f5d41392c08ee9c81f80b7eae10f8 | Stackoverflow Stackexchange
Q: Get FragmentManager inside AndroidViewModel AndroidViewModel is used to access Application context. I'm trying to access Activity's FragmentManager without passing it explicitly:
class FooViewModel(app: Application) : AndroidViewModel(app) {
private val fm = (app.applicationContext as Activity).fragmentManager
..
}
Getting error, unable to cast Context to Activity.
Question: is there any way to get FragmentManager inside AndroidViewModel without passing it explicitly?
A: I think the short answer is "no, there is no way", because the Application context is not aware of any FragmentManager.
FragmentManager is an object that subclasses of FragmentActivity may have. Application is not a subclass of FragmentActivity.
Another question would be: why would you ever need a FragmentManager instance inside your ViewModel? Most likely you should delegate view-related work to a unit other than the ViewModel (e.g. the Activity or Fragment). Keep in mind that this ViewModel is retained over configuration changes, so if you keep a reference to the FragmentManager inside your ViewModel, you'd be leaking your Activity instance.
| Q: Get FragmentManager inside AndroidViewModel AndroidViewModel is used to access Application context. I'm trying to access Activity's FragmentManager without passing it explicitly:
class FooViewModel(app: Application) : AndroidViewModel(app) {
private val fm = (app.applicationContext as Activity).fragmentManager
..
}
Getting error, unable to cast Context to Activity.
Question: is there any way to get FragmentManager inside AndroidViewModel without passing it explicitly?
A: I think the short answer is "no, there is no way", because the Application context is not aware of any FragmentManager.
FragmentManager is an object that subclasses of FragmentActivity may have. Application is not a subclass of FragmentActivity.
Another question would be: why would you ever need a FragmentManager instance inside your ViewModel? Most likely you should delegate view-related work to a unit other than the ViewModel (e.g. the Activity or Fragment). Keep in mind that this ViewModel is retained over configuration changes, so if you keep a reference to the FragmentManager inside your ViewModel, you'd be leaking your Activity instance.
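One common way to do that delegation (a sketch, not part of the original answer; the names are illustrative) is to expose an event from the ViewModel and let the Fragment/Activity, which legitimately owns the FragmentManager, react to it:
class FooViewModel(app: Application) : AndroidViewModel(app) {
    val showDetails = MutableLiveData<Boolean>()      // event observed by the view layer
    fun onItemClicked() { showDetails.value = true }
}
// In the Activity/Fragment, which owns the FragmentManager:
viewModel.showDetails.observe(this, Observer { show ->
    if (show == true) supportFragmentManager.beginTransaction()
        .replace(R.id.container, DetailsFragment())   // container id and fragment are placeholders
        .commit()
})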
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:872416",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565431"
} |
6c038ba25c27c63246089b643c89f90a897476b2 | Stackoverflow Stackexchange
Q: Valet - phpMyAdmin throws 404 not found Before installing Laravel and Valet on my dev environment (Ubuntu), I had installed PHP 7, MySQL and phpMyAdmin and everything was working fine.
In order to install Valet I had to disable apache2 (Valet was complaining during the installation), add nginx, and follow these steps: https://github.com/cpriego/valet-linux/wiki/Requirements:%20Ubuntu
However, after the installation, when I try to access phpMyAdmin through the browser I get the default white 404 - not found page. How can I fix this?
A: You have parked your workspace directory using
valet park
Clone the phpmyadmin repository using
git clone https://github.com/phpmyadmin/phpmyadmin --depth=1
cd phpmyadmin
composer install
In the same directory just download phpmyadmin package & extract it. You will be able to access it from
http://phpmyadmin.test
| Q: Valet - phpMyAdmin throws 404 not found Before installing Laravel and Valet on my dev environment (Ubuntu), I had installed PHP 7, MySQL and phpMyAdmin and everything was working fine.
In order to install Valet I had to disable apache2 (Valet was complaining during the installation), add nginx, and follow these steps: https://github.com/cpriego/valet-linux/wiki/Requirements:%20Ubuntu
However, after the installation, when I try to access phpMyAdmin through the browser I get the default white 404 - not found page. How can I fix this?
A: You have parked your workspace directory using
valet park
Clone the phpmyadmin repository using
git clone https://github.com/phpmyadmin/phpmyadmin --depth=1
cd phpmyadmin
composer install
In the same directory just download phpmyadmin package & extract it. You will be able to access it from
http://phpmyadmin.test
A: If you have phpmyadmin already installed from your last setup, you don't have to download it & install it again.
I was facing the same thing migrating to valet from lamp, this is what I did:
1- Navigate to original phpmyadmin folder
cd /usr/share/phpmyadmin
2- Then add a link to valet
valet link
Hooray! you can now access it at: phpmyadmin.test
A: If you changed the port to 8080, you can try this:
localhost:8080/phpmyadmin/
A: I tried this and it failed to work for me, I found a workaround, check the link for my solution: https://stackoverflow.com/a/47211246/8768078
| stackoverflow | {
"language": "en",
"length": 224,
"provenance": "stackexchange_0000F.jsonl.gz:872452",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565541"
} |
c388d9b44985e7c4892d5ae0a72b5578470e20c1 | Stackoverflow Stackexchange
Q: Define typeof abstract class in typescript How do I make typeof BaseListComponent below assignable?
The code below throws the following error:
Type '({ title: string; component: typeof QuestionsListComponent;
} | { title: string; component: ...' is not assignable to type '{
title: string; component: typeof BaseListComponent; }[]'.
Excerpt:
abstract class BaseListComponent {
protected title
}
class WeatherListComponent extends BaseListComponent {}
class QuestionsListComponent extends BaseListComponent {}
let containerItems: Array<{title: string, component: typeof BaseListComponent}>;
containerItems = [
{title:'TypeScript', component: QuestionsListComponent},
{title:'Angular2', component: QuestionsListComponent},
{title:'Weather', component: WeatherListComponent}
]
P.S. This is a simplified excerpt from an Angular application, so there is logic behind this madness.
A: Using angular's Type interface was the solution. A better explanation of what Type is can be found here. Steps to solve:
*
*Import Type like so: import { Type } from "@angular/core";
*Replace typeof BaseListComponent with Type<BaseListComponent>
For people coming here who are not using angular, this thread might be helpful.
| Q: Define typeof abstract class in typescript How do I make typeof BaseListComponent below assignable?
The code below throws the following error:
Type '({ title: string; component: typeof QuestionsListComponent;
} | { title: string; component: ...' is not assignable to type '{
title: string; component: typeof BaseListComponent; }[]'.
Excerpt:
abstract class BaseListComponent {
protected title
}
class WeatherListComponent extends BaseListComponent {}
class QuestionsListComponent extends BaseListComponent {}
let containerItems: Array<{title: string, component: typeof BaseListComponent}>;
containerItems = [
{title:'TypeScript', component: QuestionsListComponent},
{title:'Angular2', component: QuestionsListComponent},
{title:'Weather', component: WeatherListComponent}
]
P.S. This is a simplified excerpt from an Angular application, so there is logic behind this madness.
A: Using angular's Type interface was the solution. A better explanation of what Type is can be found here. Steps to solve:
*
*Import Type like so: import { Type } from "@angular/core";
*Replace typeof BaseListComponent with Type<BaseListComponent>
For people coming here who are not using angular, this thread might be helpful.
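As a rough sketch of the same idea without Angular, a generic constructor type does the job (Type here is declared locally rather than imported):
type Type<T> = new (...args: any[]) => T;

let containerItems: Array<{title: string, component: Type<BaseListComponent>}>;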
A: I may be wrong, but isn't component: typeof BaseListComponent declaring component to be of type Type?
Shouldn't it just be component: BaseListComponent?
| stackoverflow | {
"language": "en",
"length": 179,
"provenance": "stackexchange_0000F.jsonl.gz:872453",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565544"
} |
288d4b1fed553befc77215c7d633663d16a9a7bd | Stackoverflow Stackexchange
Q: Vaadin Grid refresh issue I'm using Vaadin Framework 8.0.6
I have the following workflow :
*
*Display grid with DataProvider and function DataProvider.fromCallbacks for lazy loading
*Update one item of this grid via a form displayed in a window
*Save the updated item, close window and call dataProvider.refreshAll()
*Grid is now up to date and show the new data in the corresponding row
So far everything is OK, but when I select the row of the updated item (and only then), it displays the old data of the item.
To do some tests, I have created next to the grid a button to call dataProvider.refreshAll()
When I click on it, the data is refreshed and up to date again
but after, if I select the row of the updated item, it displays the old data again
Any idea ? is it a cache problem ?
A: FYI, after updating to Vaadin 8.1.0, this issue has disappeared.
| Q: Vaadin Grid refresh issue I'm using Vaadin Framework 8.0.6
I have the following workflow :
*
*Display grid with DataProvider and function DataProvider.fromCallbacks for lazy loading
*Update one item of this grid via a form displayed in a window
*Save the updated item, close window and call dataProvider.refreshAll()
*Grid is now up to date and show the new data in the corresponding row
So far everything is OK, but when I select the row of the updated item (and only then), it displays the old data of the item.
To do some tests, I have created next to the grid a button to call dataProvider.refreshAll()
When I click on it, the data is refreshed and up to date again
but after, if I select the row of the updated item, it displays the old data again
Any idea ? is it a cache problem ?
A: FYI, after updating to Vaadin 8.1.0, this issue has disappeared.
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:872463",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565570"
} |
206690dd06f7e3bfe06af4ad4318dce352bcc995 | Stackoverflow Stackexchange
Q: Ignore TS6133: "(import) is declared but never used"? While working on a TypeScript project, I commented out a line, and got the error:
Failed to compile
./src/App.tsx
(4,8): error TS6133: 'axios' is declared but never used.
This error occurred during the build time and cannot be dismissed.
The error is right, I am importing axios, but I wanted to temporarily comment out the call to axios.get. I appreciate that error as it keeps my imports clean, but during early development it is pretty disruptive.
Any way to disable or ignore that warning?
A: You probably have the noUnusedLocals compiler option turned on in your tsconfig.json. Just turn it off during development.
| Q: Ignore TS6133: "(import) is declared but never used"? While working on a TypeScript project, I commented out a line, and got the error:
Failed to compile
./src/App.tsx
(4,8): error TS6133: 'axios' is declared but never used.
This error occurred during the build time and cannot be dismissed.
The error is right, I am importing axios, but I wanted to temporarily comment out the call to axios.get. I appreciate that error as it keeps my imports clean, but during early development it is pretty disruptive.
Any way to disable or ignore that warning?
A: You probably have the noUnusedLocals compiler option turned on in your tsconfig.json. Just turn it off during development.
A: In tsconfig.json
{
"compilerOptions": {
...
"noUnusedLocals": false, // just set this attribute false
}
}
It will be done.
For more tips:
In xxx.ts file
//@ts-nocheck
when on the top of the file,it will not check the below.
//@ts-ignore
when use it,it will not check the next line
A: I had the same problem in my React app. Apart from changing the "noUnusedLocals": false property in tsconfig.json, you also need to adjust "noUnusedParameters": false. The former only applies to local variables; if you are passing unused parameters through functions, the latter needs to be set to false as well.
In summary, you'll have to do the following:
{
"compilerOptions": {
"noUnusedLocals": false,
"noUnusedParameters": false,
}
}
| stackoverflow | {
"language": "en",
"length": 234,
"provenance": "stackexchange_0000F.jsonl.gz:872499",
"question_score": "54",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565671"
} |
6f1f4a94a91a304c94a1e4f681612b657c317be7 | Stackoverflow Stackexchange
Q: Should a python abstract base class inherit from object The standard documentation about abc, and the other tutorials I read, all use examples that define an abstract base class without inheriting from object.
class Foo(object):
def __getitem__(self, index):
...
def __len__(self):
...
def get_iterator(self):
return iter(self)
class MyIterable:
__metaclass__ = ABCMeta
@abstractmethod
def __iter__(self):
while False:
yield None
In the past I always let my class inherit object to have new-style class. Should I do the same with ABC?
A: Declaring the metaclass of MyIterable to be ABCMeta ensures that instances of MyIterable (or more appropriately, subclasses of MyIterable since it is an Abstract Base Class) will be the "new" style. If you were to create an instance of a subclass of MyIterable as seen below, it would behave as a new style class.
class SubIterable(MyIterable):
def __iter__(self):
# your implementation here
...
>>> type(SubIterable())
<type '__main__'.MyIterable>
If MyIterable were indeed an "old" style class, type(SubIterable()) would return <type 'instance'>
| Q: Should a python abstract base class inherit from object The standard documentation about abc, and the other tutorials I read, all use examples that define an abstract base class without inheriting from object.
class Foo(object):
def __getitem__(self, index):
...
def __len__(self):
...
def get_iterator(self):
return iter(self)
class MyIterable:
__metaclass__ = ABCMeta
@abstractmethod
def __iter__(self):
while False:
yield None
In the past I always let my class inherit object to have new-style class. Should I do the same with ABC?
A: Declaring the metaclass of MyIterable to be ABCMeta ensures that instances of MyIterable (or more appropriately, subclasses of MyIterable since it is an Abstract Base Class) will be the "new" style. If you were to create an instance of a subclass of MyIterable as seen below, it would behave as a new style class.
class SubIterable(MyIterable):
def __iter__(self):
# your implementation here
...
>>> type(SubIterable())
<type '__main__'.MyIterable>
If MyIterable were indeed an "old" style class, type(SubIterable()) would return <type 'instance'>
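For completeness, in Python 3 every class is already new-style, so the question disappears; the idiomatic spelling there is simply (a sketch):
from abc import ABC, abstractmethod

class MyIterable(ABC):          # ABC already carries ABCMeta as its metaclass
    @abstractmethod
    def __iter__(self):
        while False:
            yield None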
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:872526",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565754"
} |
2b537bd962d7084f5b107c7ba541a067385773c3 | Stackoverflow Stackexchange
Q: What is the LIMIT clause alternative in JPQL? I'm working with PostgreSQL query implementing in JPQL.
This is a sample native psql query which works fine,
SELECT * FROM students ORDER BY id DESC LIMIT 1;
The same query in JPQL doesn't work:
@Query("SELECT s FROM Students s ORDER BY s.id DESC LIMIT 1")
Students getLastStudentDetails();
seems like LIMIT clause doesn't work in JPQL.
According to JPA documentation we can use setMaxResults/setFirstResult, Can anyone tell me how can I use that in my above query?
A: You can not use Limit in HQL because Limit is database vendor dependent so Hibernate doesn't allow it through HQL query.
A way you can implement is using a subquery:
@Query("FROM Students st WHERE st.id = (SELECT max(s.id) FROM Students s)")
Students getLastStudentDetails();
| Q: What is the LIMIT clause alternative in JPQL? I'm working with PostgreSQL query implementing in JPQL.
This is a sample native psql query which works fine,
SELECT * FROM students ORDER BY id DESC LIMIT 1;
The same query in JPQL doesn't work:
@Query("SELECT s FROM Students s ORDER BY s.id DESC LIMIT 1")
Students getLastStudentDetails();
seems like LIMIT clause doesn't work in JPQL.
According to JPA documentation we can use setMaxResults/setFirstResult, Can anyone tell me how can I use that in my above query?
A: You can not use Limit in HQL because Limit is database vendor dependent so Hibernate doesn't allow it through HQL query.
A way you can implement is using a subquery:
@Query("FROM Students st WHERE st.id = (SELECT max(s.id) FROM Students s)")
Students getLastStudentDetails();
A: The correct way is to write your JPA interface method like this
public interface MyRepository extends PagingAndSortingRepository<EntityClass, KeyClass> {
List<EntityClass> findTop100ByOrderByLastModifiedDesc();
}
In the method name, "100" denotes how many rows you want which you would have otherwise put in the limit clause. also "LastModified" is the column which you want to sort by.
PagingAndSortingRepository or CrudRepository, both will work for this.
For the sake of completeness, OP's interface method would be
List<Students> findTop1ByIdDesc();
A: JPQL does not allow you to add the LIMIT keyword to the query; you would get the following exception:
org.hibernate.hql.internal.ast.QuerySyntaxException: unexpected token:
LIMIT near line 1
But don't worry, there is an alternative way to get the effect of LIMIT by using the following steps.
Sort.by(sortBy).descending() // fetch the records in descending order
pageSize = 1 // fetch the first record from the descending order result set.
Refer the following service class
Service:
@Autowired
StudentRepository repository;
public List<Student> getLastStudentDetails(Integer pageNo, Integer pageSize, String sortBy)
{
Integer pageNo = 0;
Integer pageSize = 1;
String sortBy = "id";
Pageable paging = PageRequest.of(pageNo, pageSize, Sort.by(sortBy).descending());
Slice<Student> pagedResult = repository.findLastStudent(paging);
return pagedResult.getContent();
}
Your repository interface should implement the PagingAndSortingRepository
Repository:
public interface StudentRepository extends JpaRepository<Student,Long>, PagingAndSortingRepository<Student,Long>{
@Query("select student from Student student")
Slice<Student> findLastStudent(Pageable paging);
}
This will add the limit keyword to you query which you can see in the console. Hope this helps.
A: Hard-code the pagination (new PageRequest(0, 1)) to fetch only one record.
@QueryHints({ @QueryHint(name = "org.hibernate.cacheable", value = "true") })
@Query("select * from a_table order by a_table_column desc")
List<String> getStringValue(Pageable pageable);
You have to pass new PageRequest(0, 1) to fetch records, and then take the first record from the list.
A: As stated in the comments, JPQL does not support the LIMIT keyword.
You can achieve that using the setMaxResults but if what you want is just a single item, then use the getSingleResult - it throws an exception if no item is found.
So, your query would be something like:
TypedQuery<Student> query = entityManager.createQuery("SELECT s FROM Students s ORDER BY s.id DESC", Student.class);
query.setMaxResults(1);
If you want to set a specific start offset, use query.setFirstResult(initPosition); too
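To complete that sketch, the single entity can then be read from the query (this throws NoResultException if the table is empty):
Students last = query.getSingleResult();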
A: Here a Top Ten Service (it's a useful example)
REPOSITORY
(In the Query, I parse the score entity to ScoreTo ( DTO class) by a constructor)
@Repository
public interface ScoreRepository extends JpaRepository<Scores, UUID> {
@Query("SELECT new com.example.parameters.model.to.ScoreTo(u.scoreId , u.level, u.userEmail, u.scoreLearningPoints, u.scoreExperiencePoints, u.scoreCommunityPoints, u.scoreTeamworkPoints, u.scoreCommunicationPoints, u.scoreTotalPoints) FROM Scores u "+
"order by u.scoreTotalPoints desc")
List<ScoreTo> findTopScore(Pageable pageable);
}
SERVICE
@Service
public class ScoreService {
@Autowired
private ScoreRepository scoreRepository;
public List<ScoreTo> getTopScores(){
return scoreRepository.findTopScore(PageRequest.of(0,10));
}
}
A: Hello, to fetch a single row and use LIMIT, we can tell Spring Data that the query is a native query
( using nativeQuery = true ).
Below is the usage:
@Query("SELECT s FROM Students s ORDER BY s.id DESC LIMIT 1", nativeQuery=true)
Students getLastStudentDetails();
A: You are using JPQL which doesn't support limiting results like this. When using native JPQL you should use setMaxResults to limit the results.
However you are using Spring Data JPA which basically makes it pretty easy to do. See here in the reference guide on how to limit results based on a query. In your case the following, find method would do exactly what you want.
findFirstByOrderById();
You could also use a Pageable argument with your query instead of a LIMIT clause.
@Query("SELECT s FROM Students s ORDER BY s.id DESC")
List<Students> getLastStudentDetails(Pageable pageable);
Then in your calling code do something like this (as explained here in the reference guide).
getLastStudentDetails(PageRequest.of(0,1));
Both should yield the same result, without needing to resort to plain SQL.
A: You can use something like this:
@Repository
public interface ICustomerMasterRepository extends CrudRepository<CustomerMaster, String>
{
@Query(value = "SELECT max(c.customer_id) FROM CustomerMaster c ")
public String getMaxId();
}
A: As your query is simple, you can use the solution of the accepted answer, naming your query findFirstByOrderById();
But if your query is more complicated, I also found this way without need to use a native query:
@Query("SELECT MAX(s) FROM Students s ORDER BY s.id DESC")
Students getLastStudentDetails();
Here a practical example where the named query method cannot be used.
| stackoverflow | {
"language": "en",
"length": 824,
"provenance": "stackexchange_0000F.jsonl.gz:872543",
"question_score": "110",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565820"
} |
8fd0036735ed0ad1a740807c7a618f6267984ccf | Stackoverflow Stackexchange
Q: Behavior subject vs Observable error handling I'm using RxJS in Angular and have a set of BehaviorSubjects that are exposed as read-only Observables:
public _data = new BehaviorSubject<DataItem[]>([]);
public readonly data$ = this._data.asObservable();
now I have noticed that if I subscribe directly to the BehaviorSubject and there is an error it will throw the error to the console.
but if I subscribe to the Observable with same error I don't get any messages and the listener is then unsubscribed automatically.
I know this is the expected behavior but...
I would like to know what is the pattern to avoid code duplication on errors e.g.
this.myDataService.data$.subscribe(d=> throwSomeError(), e=> handleError(e));
//or use this:
this.myDataService.data$.subscribe(d=> throwSomeError()).catch(e=> handleError(e));
the handleError(e)
A: The second option will work better as BehaviorSubject will always kill the stream on an error.
Here's more info:
How do I throw an error on a behaviour subject and continue the stream?
| Q: Behavior subject vs Observable error handling I'm using RxJS in Angular and have a set of BehaviorSubjects that are exposed as read-only Observables:
public _data = new BehaviorSubject<DataItem[]>([]);
public readonly data$ = this._data.asObservable();
now I have noticed that if I subscribe directly to the BehaviorSubject and there is an error it will throw the error to the console.
but if I subscribe to the Observable with same error I don't get any messages and the listener is then unsubscribed automatically.
I know this is the expected behavior but...
I would like to know what is the pattern to avoid code duplication on errors e.g.
this.myDataService.data$.subscribe(d=> throwSomeError(), e=> handleError(e));
//or use this:
this.myDataService.data$.subscribe(d=> throwSomeError()).catch(e=> handleError(e));
the handleError(e)
A: The second option will work better as BehaviorSubject will always kill the stream on an error.
Here's more info:
How do I throw an error on a behaviour subject and continue the stream?
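One way to avoid repeating the error callback at every subscription (a sketch using the pipeable catchError operator from newer RxJS versions; handleError and the empty-array fallback are illustrative) is to attach the handling once where the observable is exposed:
import { catchError } from 'rxjs/operators';
import { of } from 'rxjs';

public readonly data$ = this._data.asObservable().pipe(
  catchError(e => { this.handleError(e); return of([] as DataItem[]); })
);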
| stackoverflow | {
"language": "en",
"length": 153,
"provenance": "stackexchange_0000F.jsonl.gz:872564",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565882"
} |
dc2238866d529d2c3b76d5294d672555856fc7cf | Stackoverflow Stackexchange
Q: Missing Marketing Icon When trying to submit my app, iTunes Connect says
Missing Marketing Icon. iOS Apps must include a 1024x1024px Marketing Icon in PNG format. Apps that do not include the Marketing Icon cannot be submitted for App Review or Beta App Review.
I do have a 1024x1024px PNG in my submission in iTunes Connect, under General App Information and App Icon. So I guess they want me to add it as an Asset to the bundle, in Xcode. But when I drag and drop my PNG to this Unassigned placeholder, nothing happens.
This error started appearing after WWDC 2017, when I installed Xcode 9 Beta. The issue occurs in version 8.3.1 (8E1000a) too, though.
A: According to the new guidelines for Xcode 9, we need to drag a new 1024pt icon onto the newly available icon slot named "App Store iOS 1024pt" under the AppIcon image set.
IMPORTANT: Make sure to use the icon without Alpha/Transparency
After doing this, the warning will be gone and you should be able to successfully submit the binary to Apple for review.
Reference link: https://help.apple.com/xcode/mac/current/#/dev4b0ebb1bb
| Q: Missing Marketing Icon When trying to submit my app, iTunes Connect says
Missing Marketing Icon. iOS Apps must include a 1024x1024px Marketing Icon in PNG format. Apps that do not include the Marketing Icon cannot be submitted for App Review or Beta App Review.
I do have a 1024x1024px PNG in my submission in iTunes Connect, under General App Information and App Icon. So I guess they want me to add it as an Asset to the bundle, in Xcode. But when I drag and drop my PNG to this Unassigned placeholder, nothing happens.
This error started appearing after WWDC 2017, when I installed Xcode 9 Beta. The issue occurs in version 8.3.1 (8E1000a) too, though.
A: According to the new guidelines for Xcode 9, we need to drag a new 1024pt icon onto the newly available icon slot named "App Store iOS 1024pt" under the AppIcon image set.
IMPORTANT: Make sure to use the icon without Alpha/Transparency
After doing this, the warning will be gone and you should be able to successfully submit the binary to Apple for review.
Reference link: https://help.apple.com/xcode/mac/current/#/dev4b0ebb1bb
A: If you are building an IOS app from Unity follow these steps:
*
*In the Xcode project go to Unity-iPhone > Images.xcassets > AppIcon
*Scroll to the bottom
*Drag in a 1024x1024 icon
*Build (CMD+B), archive, upload as usual
A: The App Icon guidelines have changed with the release of new iPhons X, iOS 11, and Xcode 9.
A new App Store icon should be added to the project in Xcode 9 of size:
1024px × 1024px (1024pt × 1024pt @1x)
Hope this helps.
Reference:
https://developer.apple.com/ios/human-interface-guidelines/icons-and-images/app-icon/
Note: As of today, Technical Q&A QA1686 - App Icons on iPhone, iPad and Apple Watch hasn't been updated with this requirement.
A: I'm using beta 3 and I'm only getting a warning after uploading. I uploaded a binary for Test Flight, not release.
Adding the marketing image in .xcassets , AppIcon fixed the warning.
A: The issue seems to be submitting a binary that was built using a beta version of Xcode. Use a released version of Xcode when submitting builds to the App Store.
A: How to add "1024 application icon" in sys cordova?
edit config file:
<icon src="res/icon/ios/icon-1024.png" width="1024" height="1024" />
command line:
cordova prepare ios
Don't forget to actually add the file res/icon/ios/icon-1024.png to the filesystem.
A: For people that still facing problems after adding the new app icon:
Make sure the 'Transparency' checkbox is unchecked when you're exporting the PNG image from Photoshop. Apparently this is an issue even if the image has no transparency.
This worked for me.
Thanks to Hammoud's answer at How to solve "Missing Marketing Icon. iOS Apps must include a 1024x1024px"
A: In Xcode 8:
Find your iconset directory, put a prepared file (for example 'Icon-Marketing.png') here and add the following to Contents.json
{
"size" : "1024x1024",
"idiom" : "ios-marketing",
"filename" : "Icon-Marketing.png",
"scale" : "1x"
}
A: Solved by adding iOS Marketing 1024pt icon in project
A: I was submitting an update of the iOS app to the Apple app store.
I found the below error :
Missing Marketing Icon. iOS Apps must include a 1024x1024px Marketing Icon in PNG format. Apps that do not include the Marketing Icon cannot be submitted for App Review or Beta App Review
According to new Apple guide line in xcode 9, we need to add support of "App Store iOS" icon
So I added to App icon of 1024x1024pt.
App is submitted to App Store.
A: The problem can also be in the other icons. I created the 1024 icon without alpha-channel but this didn't help. Then, I removed all icons and upload it again. This helped.
| stackoverflow | {
"language": "en",
"length": 616,
"provenance": "stackexchange_0000F.jsonl.gz:872565",
"question_score": "141",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565883"
} |
7710b188c4f6ccf34dfb5031ed91b3ca771c2f3f | Stackoverflow Stackexchange
Q: ValueTuple With WPF Binding Why does binding to ValueTuple members (like Item1, Item2, etc.) not work?
<TextBlock x:Name="txtTest" Text="{Binding Item1}" />
the code:
txtTest.DataContext = ("Item A", "Another Item..");
output window:
BindingExpression path error: 'Item1' property not found on 'object' ''ValueTuple`2'
However in Tuple It always worked.
A: As stated in the documentation, Item1 and Item2 of a ValueTuple are fields rather than properties and you can only bind to public properties in WPF.
So if you want to be able to bind to a tuple, you should use the Tuple class.
| Q: ValueTuple With WPF Binding Why does binding to ValueTuple members (like Item1, Item2, etc.) not work?
<TextBlock x:Name="txtTest" Text="{Binding Item1}" />
the code:
txtTest.DataContext = ("Item A", "Another Item..");
output window:
BindingExpression path error: 'Item1' property not found on 'object' ''ValueTuple`2'
However in Tuple It always worked.
A: As stated in the documentation, Item1 and Item2 of a ValueTuple are fields rather than properties and you can only bind to public properties in WPF.
So if you want to be able to bind to a tuple, you should use the Tuple class.
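A minimal sketch of that workaround: the reference-type Tuple exposes Item1/Item2 as properties, so the original binding resolves:
txtTest.DataContext = Tuple.Create("Item A", "Another Item..");
// <TextBlock Text="{Binding Item1}" /> now works, because Tuple<T1,T2>.Item1 is a property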
| stackoverflow | {
"language": "en",
"length": 93,
"provenance": "stackexchange_0000F.jsonl.gz:872575",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44565917"
} |
3582099578ff2a639bc08165b0e6825b7a1bf24d | Stackoverflow Stackexchange
Q: datatables ajax reload not updating parameters passed Using ajax.reload() isn't updating the value that is being passed to the controller.
The variables have the correct values each time when entering the function but I'm not sure how to get reload to also see/accept the updated variables.
Do I need to destroy and rebuild the table each time?
if (!$.fn.DataTable.isDataTable('.workorder-table')) {
$('.workorder-table').DataTable({
"initComplete": function () {
hidePleaseWait();
},
rowCallback: function (row, data, index) {
--row classes added here based on data
},
columns: [
{ "data": "Facility", "name": "Facility", "title": "Facility" },
{ "data": "ShortDescription", "name": "ShortDescription", "title": "Short Description" },
{ "data": "Created", "name": "Created", "title": "Created" },
{ "data": "Completed", "name": "Completed", "title": "Completed" },
{ "data": "Status", "name": "Status", "title": "Status" }
],
ajax: {
url: "/Facility/WorkOrderSearch",
type: "POST",
data: { status: $('#Status').val(), facilityID: $('#FacilityID').val(), quickView: $('#QuickView').val() }
},
-- options here
});
} else {
$('.workorder-table').DataTable().ajax.reload(hidePleaseWait);
}
A: If data is turned into a function that can be executed
data: function(data) {
data.status = $('#Status').val();
data.facilityID = $('#FacilityID').val();
data.quickView = $('#QuickView').val();
}
Then this function will be executed upon each request, i.e. when ajax.reload() is called.
| Q: datatables ajax reload not updating parameters passed Using ajax.reload() isn't updating the values that are being passed to the controller.
The variables have the correct values each time when entering the function but I'm not sure how to get reload to also see/accept the updated variables.
Do I need to destroy and rebuild the table each time?
if (!$.fn.DataTable.isDataTable('.workorder-table')) {
$('.workorder-table').DataTable({
"initComplete": function () {
hidePleaseWait();
},
rowCallback: function (row, data, index) {
--row classes added here based on data
},
columns: [
{ "data": "Facility", "name": "Facility", "title": "Facility" },
{ "data": "ShortDescription", "name": "ShortDescription", "title": "Short Description" },
{ "data": "Created", "name": "Created", "title": "Created" },
{ "data": "Completed", "name": "Completed", "title": "Completed" },
{ "data": "Status", "name": "Status", "title": "Status" }
],
ajax: {
url: "/Facility/WorkOrderSearch",
type: "POST",
data: { status: $('#Status').val(), facilityID: $('#FacilityID').val(), quickView: $('#QuickView').val() }
},
-- options here
});
} else {
$('.workorder-table').DataTable().ajax.reload(hidePleaseWait);
}
A: If data is turned into a function that can be executed
data: function(data) {
data.status = $('#Status').val();
data.facilityID = $('#FacilityID').val();
data.quickView = $('#QuickView').val();
}
Then this function will be executed upon each request, i.e. when ajax.reload() is called.
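For context, a sketch of how that looks inside the original DataTable configuration (same URL and selectors as in the question):
ajax: {
    url: "/Facility/WorkOrderSearch",
    type: "POST",
    data: function (d) {
        // re-evaluated on every request, including ajax.reload()
        d.status = $('#Status').val();
        d.facilityID = $('#FacilityID').val();
        d.quickView = $('#QuickView').val();
    }
}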
| stackoverflow | {
"language": "en",
"length": 190,
"provenance": "stackexchange_0000F.jsonl.gz:872618",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566061"
} |
14077b3ef0578c446d6abefb4609a17b17860c07 | Stackoverflow Stackexchange
Q: FLURL: mapping property names Is there any way to map non-matching property names when doing ReceiveJson()? For example 'user_name' in JSON should map to 'UserName' in C# object.
List<Person> people = await _settings.Url
.AppendPathSegment("people")
.GetAsync()
.ReceiveJson<List<Person>>();
A: Updated answer for Flurl.Http 4.0 and beyond:
Starting with 4.0 (in prerelease as of June 2022), Flurl.Http uses System.Text.Json for serialization, so any of its prescribed methods for customizing property names will work with Flurl:
using System.Text.Json.Serialization;
public class Person
{
[JsonPropertyName("user_name")]
public string UserName { get; set; }
}
A Json.NET serializer is available for 4.0 and beyond for those who prefer it, in which case use the approach below.
For Flurl.Http 3.x and earlier:
Prior to 4.0, Flurl.Http used Newtonsoft Json.NET, so using that library's serialization attributes, specifically JsonProperty, will work in those versions:
using Newtonsoft.Json;
public class Person
{
[JsonProperty("user_name")]
public string UserName { get; set; }
}
| Q: FLURL: mapping property names Is there any way to map non-matching property names when doing ReceiveJson()? For example 'user_name' in JSON should map to 'UserName' in C# object.
List<Person> people = await _settings.Url
.AppendPathSegment("people")
.GetAsync()
.ReceiveJson<List<Person>>();
A: Updated answer for Flurl.Http 4.0 and beyond:
Starting with 4.0 (in prerelease as of June 2022), Flurl.Http uses System.Text.Json for serialization, so any of its prescribed methods for customizing property names will work with Flurl:
using System.Text.Json.Serialization;
public class Person
{
[JsonPropertyName("user_name")]
public string UserName { get; set; }
}
A Json.NET serializer is available for 4.0 and beyond for those who prefer it, in which case use the approach below.
For Flurl.Http 3.x and earlier:
Prior to 4.0, Flurl.Http used Newtonsoft Json.NET, so using that library's serialization attributes, specifically JsonProperty, will work in those versions:
using Newtonsoft.Json;
public class Person
{
[JsonProperty("user_name")]
public string UserName { get; set; }
}
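If every property follows the same snake_case convention, a global naming strategy can replace per-property attributes. A sketch for Flurl.Http 3.x, assuming its Newtonsoft-based NewtonsoftJsonSerializer and the FlurlHttp.Configure entry point (verify against the version you use):
using Flurl.Http;
using Flurl.Http.Configuration;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

FlurlHttp.Configure(settings =>
{
    settings.JsonSerializer = new NewtonsoftJsonSerializer(new JsonSerializerSettings
    {
        ContractResolver = new DefaultContractResolver
        {
            NamingStrategy = new SnakeCaseNamingStrategy() // maps user_name <-> UserName
        }
    });
});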
| stackoverflow | {
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:872665",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566204"
} |
ef2fde6fdbc619da38610e570ffe03951a25d3b8 | Stackoverflow Stackexchange
Q: CSV to AVRO conversion in Azure I am trying to convert csv files stored in azure data lake store into avro files with created scheme. Is there any kind of example source code which has same purpose?
A: You can use Azure Data Lake Analytics for this. There is a sample Avro extractor at https://github.com/Azure/usql/blob/master/Examples/DataFormats/Microsoft.Analytics.Samples.Formats/Avro/AvroExtractor.cs. You can easily adapt the code into an outputter.
Another possibility is to fire up an HDInsight cluster on top of your data lake store and use Pig, Hive or Spark.
| Q: CSV to AVRO conversion in Azure I am trying to convert csv files stored in azure data lake store into avro files with created scheme. Is there any kind of example source code which has same purpose?
A: You can use Azure Data Lake Analytics for this. There is a sample Avro extractor at https://github.com/Azure/usql/blob/master/Examples/DataFormats/Microsoft.Analytics.Samples.Formats/Avro/AvroExtractor.cs. You can easily adapt the code into an outputter.
Another possibility is to fire up an HDInsight cluster on top of your data lake store and use Pig, Hive or Spark.
A: That's actually pretty straightforward to do with Azure Data Factory and Blob Storage. This should also be very cheap, because you pay per second when executing in ADF, so you only pay for the conversion time. No infra required.
If your CSV looks like this
ID,Name,Surname
1,Adam,Marczak
2,Tom,Kowalski
3,John,Johnson
Upload it to blob storage into input container
Add a linked service for blob storage in ADF and select your storage account.
Add a dataset of blob type and set it to CSV format for the input file.
Add another dataset of blob type and select the Avro format for the output.
Add a pipeline and drag-n-drop a Copy Data activity.
In the source select your CSV input dataset, and in the sink select your target Avro dataset.
Publish and trigger the pipeline.
In the output blob container you can inspect the resulting Avro file.
Full github code here https://github.com/MarczakIO/azure-datafactory-csv-to-avro
If you want to learn about data factory check out ADF introduction video https://youtu.be/EpDkxTHAhOs
And if you want to dynamically pass input and output paths to blob files, check out the video on parametrization of ADF https://youtu.be/pISBgwrdxPM
A: Python is always your best friend. Please use this sample code to convert csv to avro:
Install these dependencies:
pip install fastavro
pip install pandas
Execute the following python script.
from fastavro import writer, parse_schema
import pandas as pd
# Read CSV
df = pd.read_csv('sample.csv')
# Define AVRO schema
schema = {
'doc': 'Documentation',
'name': 'Subject',
'namespace': 'test',
'type': 'record',
'fields': [{'name': c, 'type': 'string'} for c in df.columns]
}
parsed_schema = parse_schema(schema)
# Writing AVRO file
with open('sample.avro', 'wb') as out:
writer(out, parsed_schema, df.to_dict('records'))
input: sample.csv
col1,col2,col3
a,b,c
d,e,f
g,h,i
output: sample.avro
Objavro.codecnullavro.schemaƒ{"type": "record", "name": "test.Subject", "fields": [{"name": "col1", "type": "string"}, {"name": "col2", "type": "string"}, {"name": "col3", "type": "string"}]}Y«Ÿ>[Ú Ÿÿ Æ?âQI$abcdefghiY«Ÿ>[Ú Ÿÿ Æ?âQI
| stackoverflow | {
"language": "en",
"length": 388,
"provenance": "stackexchange_0000F.jsonl.gz:872669",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566218"
} |
48d2f4c2108dee74085e2929ea9700c3274c0e44 | Stackoverflow Stackexchange
Q: Reflection - method call panics with "call of reflect.Value.Elem on struct Value" Here is a code snippet -
type Gateway struct {
Svc1 svc1.Interface
Svc2 svc2.Interface
}
func (g *Gateway) GetClient(service string) interface{} {
ps := reflect.ValueOf(g)
s := ps.Elem()
f := s.FieldByName(strings.Title(service))
return f.Interface()
}
func (g *Gateway) Invoke(service string, endpoint string, args ...interface{}) []reflect.Value {
log.Info("Gateway.Invoke " + service + "." + endpoint)
inputs := make([]reflect.Value, len(args))
for i, _ := range args {
inputs[i] = reflect.ValueOf(args[i])
}
client := g.GetClient(service)
return reflect.ValueOf(client).Elem().MethodByName(endpoint).Call(inputs)
}
GetClient("svc1") works fine.
However, when I call Invoke("svc1", "endpoint1", someArg), it panics saying -
reflect: call of reflect.Value.Elem on struct Value
reflect.ValueOf(client).MethodByName(endpoint).Call(inputs) panics saying Call on a zero value.
A: There are a couple issues:
*
*If svc1.Interface is not a pointer or an interface, reflect.Value.Elem() will panic (see https://golang.org/pkg/reflect/#Value.Elem)
*If the endpoint argument string of Invoke doesn't match the capitalization of the target method, it will panic due to zero value (invalid reflect.Value). Please note that the method you want to call must be exported.
| Q: Reflection - method call panics with "call of reflect.Value.Elem on struct Value" Here is a code snippet -
type Gateway struct {
Svc1 svc1.Interface
Svc2 svc2.Interface
}
func (g *Gateway) GetClient(service string) interface{} {
ps := reflect.ValueOf(g)
s := ps.Elem()
f := s.FieldByName(strings.Title(service))
return f.Interface()
}
func (g *Gateway) Invoke(service string, endpoint string, args ...interface{}) []reflect.Value {
log.Info("Gateway.Invoke " + service + "." + endpoint)
inputs := make([]reflect.Value, len(args))
for i, _ := range args {
inputs[i] = reflect.ValueOf(args[i])
}
client := g.GetClient(service)
return reflect.ValueOf(client).Elem().MethodByName(endpoint).Call(inputs)
}
GetClient("svc1") works fine.
However, when I call Invoke("svc1", "endpoint1", someArg), it panics saying -
reflect: call of reflect.Value.Elem on struct Value
reflect.ValueOf(client).MethodByName(endpoint).Call(inputs) panics saying Call on a zero value.
A: There are a couple issues:
*
*If svc1.Interface is not a pointer or an interface, reflect.Value.Elem() will panic (see https://golang.org/pkg/reflect/#Value.Elem)
*If the endpoint argument string of Invoke doesn't match the capitalization of the target method, it will panic due to zero value (invalid reflect.Value). Please note that the method you want to call must be exported; see the sketch below.
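A minimal sketch of an Invoke body that addresses both points (hypothetical, reusing the question's names):
client := g.GetClient(service)
v := reflect.ValueOf(client)
// The concrete value stored in the interface may not be a pointer, so don't call
// Elem() here; look the method up on the value as-is.
m := v.MethodByName(endpoint) // endpoint must exactly match an exported method name
if !m.IsValid() {
    return nil // avoids the "Call on zero Value" panic when the method isn't found
}
return m.Call(inputs)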
| stackoverflow | {
"language": "en",
"length": 173,
"provenance": "stackexchange_0000F.jsonl.gz:872688",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566293"
} |
8acc820f874b8929a67cd314b899f2b19f15a27e | Stackoverflow Stackexchange
Q: Visual Studio 2017 Encapsulate Field - how to get the old format back? So let's say I want to encapsulate the field with the good ol' Edit->Refactor->Encapsulate field, since it saves quite a bit of time:
private GameSettings gameSettings;
In Visual Studio 2015, I would get:
public GameSettings GameSettings
{
get
{
return gameSettings;
}
set
{
gameSettings = value;
}
}
But with Visual Studio 2017 I get:
internal GameSettings GameSettings { get => gameSettings; set => gameSettings = value; }
Is there any way I can make it generate the old style? It looks wrong to have half the properties in one style and half in another...
A: I know this thread is old, but the answer can help anyone else...
You can go to Options > Text Editor > C# > Code Style > General and change "Use expression body for accessors" to "Never". So you'll get the old style.
| Q: Visual Studio 2017 Encapsulate Field - how to get the old format back? So let's say I want to encapsulate the field with the good ol' Edit->Refactor->Encapsulate field, since it saves quite a bit of time:
private GameSettings gameSettings;
In Visual Studio 2015, I would get:
public GameSettings GameSettings
{
get
{
return gameSettings;
}
set
{
gameSettings = value;
}
}
But with Visual Studio 2017 I get:
internal GameSettings GameSettings { get => gameSettings; set => gameSettings = value; }
Is there any way I can make it generate the old style? It looks wrong to have half the properties in one style and half in another...
A: I know this thread is old, but the answer can help anyone else...
You can go to Options > Text Editor > C# > Code Style > General and change "Use expression body for accessors" to "Never". So you'll get the old style.
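If the project uses an .editorconfig, the same preference can be set there (a sketch; pick whatever severity you prefer):
[*.cs]
csharp_style_expression_bodied_accessors = false:suggestion
csharp_style_expression_bodied_properties = false:suggestion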
A: Try messing around with the snippets:
"C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC#\Snippets\1033\Refactoring\EncapsulateField.snippet"
| stackoverflow | {
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:872735",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566436"
} |
3d8337fc4f4d33af1acac37a6489bd2441bfd7f4 | Stackoverflow Stackexchange
Q: Why are instances of old style classes instances of `object`? In Python 2, why are instances of old style classes still instances of object even when they do not explicitly inherit from object?
class OldClass:
pass
>>> isinstance(OldClass(), object)
True
Before testing this, I would have concluded that isinstance(x, object) == True would imply that x is an instance of a subclass of object and thus an instance of a new style class, but it appears that all objects in Python 2 are instances of object (yes, I know how obvious that sounds).
Digging around further, I found some other seemingly odd behavior:
>>> issubclass(OldClass, object)
False
I was under the impression that isinstance(x, SomeClass) is virtually equivalent to issubclass(x.__class__, SomeClass), but apparently I'm missing something.
| Q: Why are instances of old style classes instances of `object`? In Python 2, why are instances of old style classes still instances of object even when they do not explicitly inherit from object?
class OldClass:
pass
>>> isinstance(OldClass(), object)
True
Before testing this, I would have concluded that isinstance(x, object) == True would imply that x is an instance of a subclass of object and thus an instance of a new style class, but it appears that all objects in Python 2 are instances of object (yes, I know how obvious that sounds).
Digging around further, I found some other seemingly odd behavior:
>>> issubclass(OldClass, object)
False
I was under the impression that isinstance(x, SomeClass) is virtually equivalent to issubclass(x.__class__, SomeClass), but apparently I'm missing something.
| stackoverflow | {
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:872764",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566500"
} |
9ce8ebdfcc735459a2c7240b4cc197d3b67cf49e | Stackoverflow Stackexchange
Q: Why .nextElementSibling doesn't return null? In the below code, the <div> doesn't have any siblings. previousElementSibling correctly returns null, but nextElementSibling does not.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>nextElementSibling</title>
</head>
<body>
<div>test</div>
</body>
<script>
var demo = document.getElementsByTagName('div')[0];
console.log(demo.previousElementSibling);
console.log(demo.nextElementSibling);
</script>
</html>
Console output:
Google Chrome Version 59.0.3071.86 (Official Build) (64-bit)
Why is this?
A: Because Chrome is being kind.
The <script> tag as positioned in your source code is not valid, so Chrome automatically moves it into the <body>, where it is valid, and where it becomes the nextElementSibling. If you inspect the loaded page, you can see that the <script> element now sits inside <body>, right after the <div>.
| Q: Why .nextElementSibling doesn't return null? In the below code, the <div> doesn't have any siblings. previousElementSibling correctly returns null, but nextElementSibling does not.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>nextElementSibling</title>
</head>
<body>
<div>test</div>
</body>
<script>
var demo = document.getElementsByTagName('div')[0];
console.log(demo.previousElementSibling);
console.log(demo.nextElementSibling);
</script>
</html>
Console output:
Google Chrome Version 59.0.3071.86 (Official Build) (64-bit)
Why is this?
A: Because Chrome is being kind.
The <script> tag as positioned in your source code is not valid, so Chrome automatically moves it into the <body>, where it is valid, and where it becomes the nextElementSibling. If you inspect the loaded page, you can see that the <script> element now sits inside <body>, right after the <div>.
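A sketch of the parsed DOM, roughly as the Elements panel shows it:
<body>
  <div>test</div>
  <script>
    var demo = document.getElementsByTagName('div')[0];
    console.log(demo.previousElementSibling);
    console.log(demo.nextElementSibling);
  </script>
</body>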
| stackoverflow | {
"language": "en",
"length": 103,
"provenance": "stackexchange_0000F.jsonl.gz:872783",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566553"
} |
10f5d03f5cc04f81365ed3bcc7a386f4082c310f | Stackoverflow Stackexchange
Q: Spring - Autowire Service into POJO I have a model in which I want to inject my service.
My Model
@Configurable
@Entity
@Table(name = "user")
public class User {
@Autowired
private UserService userService;
{
System.out.println("Trying Service : " + userService.getMyName());
}
}
Here I always get a NullPointerException on the 7th line.
In my spring-context.xml I have :
<context:spring-configured/>
<bean
class="org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor" />
<bean
class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor" />
EDIT
UserService
@Component
public class UserService {
public String getMyName() { return "it's Me!";}
}
A: Spring-managed components can be wired only into other beans managed by Spring.
However, there is a trick to add service to your POJO if you really need it:
*
*Add UserService as a static field to your POJO with a setter
*In UserService after spring initializes the bean, set itself as a field on the POJO (this can be done in @PostConstruct method)
| Q: Spring - Autowire Service into POJO I have a model in which I want to inject my service.
My Model
@Configurable
@Entity
@Table(name = "user")
public class User {
@Autowired
private UserService userService;
{
System.out.println("Trying Service : " + userService.getMyName());
}
}
Here I always get a NullPointerException on the 7th line.
In my spring-context.xml I have :
<context:spring-configured/>
<bean
class="org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor" />
<bean
class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor" />
EDIT
UserService
@Component
public class UserService {
public String getMyName() { return "it's Me!";}
}
A: Spring-managed components can be wired only into other beans managed by Spring.
However, there is a trick to add service to your POJO if you really need it:
*
*Add UserService as a static field to your POJO with a setter
*In UserService, after Spring initializes the bean, set itself as a field on the POJO (this can be done in a @PostConstruct method); see the sketch below
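A minimal sketch of that trick, reusing the question's class names (the static field and setter on User are hypothetical additions):
public class User {
    private static UserService userService;
    public static void setUserService(UserService svc) { userService = svc; }
    // instance code can now call userService.getMyName()
}

@Component
public class UserService {
    @PostConstruct
    void init() { User.setUserService(this); }
    public String getMyName() { return "it's Me!"; }
}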
A: Make a static instance of UserService available:
@Service
public class UserService {
private static UserService instance;
public static UserService getInstance() { return instance; }
@PostConstruct
void init() { instance = this; }
public String getMyName() { return "it's Me!";}
}
call with:
UserService.getInstance().getMyName()
| stackoverflow | {
"language": "en",
"length": 189,
"provenance": "stackexchange_0000F.jsonl.gz:872792",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566583"
} |
0f495d4139586b8b39e2aafc694e799cc41aa56b | Stackoverflow Stackexchange
Q: Visual Basic Memory Management Form.Show I have a question on memory management within Visual Studio using Visual Basic.
I have a form with a button. When the button is pressed a second form that I have created appears. I am using the code below to show the second form:
Form2.show()
My question is: When I press the button in the first form , is the second form loaded into memory, or has it been loaded from the start of the program, but just hidden?
Is there a way to unload the form after it has been loaded, so that it doesn't take up memory anymore?
A: Use Unload Form2 to unload the form from memory. The Show method simply shows a form that is already loaded in memory.
Load Form2 is used to load the form into memory, but Form2.Show is only used to show it to the user.
| Q: Visual Basic Memory Management Form.Show I have a question on memory management within Visual Studio using Visual Basic.
I have a form with a button. When the button is pressed a second form that I have created appears. I am using the code below to show the second form:
Form2.show()
My question is: When I press the button in the first form , is the second form loaded into memory, or has it been loaded from the start of the program, but just hidden?
Is there a way to unload the form after it has been loaded, so that it doesn't take up memory anymore?
A: Use Unload Form2 to unload the form from memory. The Show method simply shows a form that is already loaded in memory.
Load Form2 is used to load the form into memory, but Form2.Show is only used to show it to the user.
| stackoverflow | {
"language": "en",
"length": 146,
"provenance": "stackexchange_0000F.jsonl.gz:872798",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566619"
} |
fc441e5ef57a11eb151a03c5fa0340193b955d9d | Stackoverflow Stackexchange
Q: What is the complexity of dist()? I used the dist function in R and I am wondering the time complexity of it.
I know that hierarchical clustering has an N^2*logN time complexity, and hierarchical clustering is composed of two parts, as in the R code below.
> d <- dist(as.matrix(mtcars)) # find distance matrix
> hc <- hclust(d) # apply hierarchical clustering
> plot(hc) # plot the dendrogram
before applying hierarchical clustering, calculating the distance matrix is required. I think this takes N^2 complexity?
A: Precisely, if matrix X has N rows and P columns, the complexity of dist(X) is 3N(N-1)P/2. This is computed as N(N - 1)/2 * 3P.
Explanation:
*
*There are N(N - 1)/2 entries in the resulting distance matrix;
*Each entry is a Euclidean distance between two length-P vectors (a sum of squared differences plus a square root), involving P subtractions, P multiplications and P additions.
| Q: What is the complexity of dist()? I used the dist function in R and I am wondering the time complexity of it.
I know that hierarchical clustering has an N^2*logN time complexity, and hierarchical clustering is composed of two parts, as in the R code below.
> d <- dist(as.matrix(mtcars)) # find distance matrix
> hc <- hclust(d) # apply hierarchical clustering
> plot(hc) # plot the dendrogram
before applying hierarchical clustering, calculating the distance matrix is required. I think this takes N^2 complexity?
A: Precisely, if matrix X has N rows and P columns, the complexity of dist(X) is 3N(N-1)P/2. This is computed as N(N - 1)/2 * 3P.
Explanation:
*
*There are N(N - 1)/2 entries in the resulting distance matrix;
*Each entry is a Euclidean distance between two length-P vectors (a sum of squared differences plus a square root), involving P subtractions, P multiplications and P additions.
| stackoverflow | {
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:872801",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566631"
} |
8de7884f636b7e41c228a62f74ea21d8b2987504 | Stackoverflow Stackexchange
Q: How to deploy hyperledger fabric on a network and run hyperledger composer on it? I'm still a beginner and am working on a project. I have done the getting started from the fabric docs but I don't know how exactly I am supposed to deploy it to a network so that it runs on multiple peers.
After creating the fabric, I want to deploy a hyperledger composer model on it. I've completed the dev guide from the composer docs. So, I wanted to ask whether the process would be any different from deploying to a fabric with a single peer.
A: If you have followed the Composer Developer Tutorial:
https://hyperledger.github.io/composer/tutorials/developer-guide.html
Then you have installed a Fabric (currently at v0.8), created a Composer business network definition and deployed it to a channel on your Fabric development instance.
The process from a development perspective is identical, regardless of how many peers you have.
| Q: How to deploy hyperledger fabric on a network and run hyperledger composer on it? I'm still a beginner and am working on a project. I have done the getting started from the fabric docs but I don't know how exactly I am supposed to deploy it to a network so that it runs on multiple peers.
After creating the fabric, I want to deploy a hyperledger composer model on it. I've completed the dev guide from the composer docs. So, I wanted to ask whether the process would be any different from deploying to a fabric with a single peer.
A: If you have followed the Composer Developer Tutorial:
https://hyperledger.github.io/composer/tutorials/developer-guide.html
Then you have installed a Fabric (currently at v0.8), created a Composer business network definition and deployed it to a channel on your Fabric development instance.
The process from a development perspective is identical, regardless of how many peers you have.
| stackoverflow | {
"language": "en",
"length": 152,
"provenance": "stackexchange_0000F.jsonl.gz:872812",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44566685"
} |
938d0feb409b3b526d6e5e22fc28032b4da9b448 | Stackoverflow Stackexchange
Q: Snapshot testing on Angular 1.x with Jest I'd like to use Jest to snapshot-test my angular 1.x directives.
I've already got a working test environment with jest, but I'm not sure how (and if I can) snapshot test my directives/components.
I don't think I can use the renderer object used in this example (looks like a react-specific object) http://facebook.github.io/jest/docs/en/snapshot-testing.html#content and I'm not sure how to use the .toJSON() function in order to serialize my directive/components.
This is the only link I've found on Jest+Angular 1.x usage:
https://medium.com/aya-experience/testing-an-angularjs-app-with-jest-3029a613251 and I can't find any answer about snapshot testing.
Thanks in advance,
Federico
A: It works.
test.js
const angular = require('angular');
require('angular-mocks');
describe('Renderer', () => {
let element;
let scope;
beforeEach(
angular.mock.inject(($rootScope, $compile) => {
scope = $rootScope.$new();
element = $compile(
'<div><label ng-show="label.show">1</label><label ng-hide="label.show">2</label></div>'
)(scope);
scope.$digest();
})
);
it('should render the element', () => {
expect(element).toBeDefined();
expect(element[0]).toMatchSnapshot();
});
});
Snapshot
exports[`Renderer should render the element 1`] = `
<div
class="ng-scope"
>
<label
class="ng-hide"
ng-show="label.show"
>
1
</label>
<label
ng-hide="label.show"
>
2
</label>
</div>
`;
| Q: Snapshot testing on Angular 1.x with Jest I'd like to use Jest to snapshot-test my angular 1.x directives.
I've already got a working test environment with jest, but I'm not sure how (and if I can) snapshot test my directives/components.
I don't think I can use the renderer object used in this example (looks like a react-specific object) http://facebook.github.io/jest/docs/en/snapshot-testing.html#content and I'm not sure how to use the .toJSON() function in order to serialize my directive/components.
This is the only link I've found on Jest+Angular 1.x usage:
https://medium.com/aya-experience/testing-an-angularjs-app-with-jest-3029a613251 and I can't find any answer about snapshot testing.
Thanks in advance,
Federico
A: It works.
test.js
const angular = require('angular');
require('angular-mocks');
describe('Renderer', () => {
let element;
let scope;
beforeEach(
angular.mock.inject(($rootScope, $compile) => {
scope = $rootScope.$new();
element = $compile(
'<div><label ng-show="label.show">1</label><label ng-hide="label.show">2</label></div>'
)(scope);
scope.$digest();
})
);
it('should render the element', () => {
expect(element).toBeDefined();
expect(element[0]).toMatchSnapshot();
});
});
Snapshot
exports[`Renderer should render the element 1`] = `
<div
class="ng-scope"
>
<label
class="ng-hide"
ng-show="label.show"
>
1
</label>
<label
ng-hide="label.show"
>
2
</label>
</div>
`;
| stackoverflow | {
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:872920",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567061"
} |
387f0a53379dfaac78be6557d5ef5b40fb872a98 | Stackoverflow Stackexchange
Q: Can string concatenation be used for application yml value that includes SpEL? I'm trying to define a spring data source url like so:
spring:
datasource:
url: "jdbc:${vcap.services.compose-for-mysql.credentials.uri}?useSSL=true&requireSSL=true&verifyServerCertificate=true"
username: ${vcap.services.compose-for-mysql.credentials.username}
password: ${vcap.services.compose-for-mysql.credentials.password}
driver-class-name: com.mysql.jdbc.Driver
Where vcap.services.compose-for-mysql.credentials.uri is set to mysql://xxxx:xxxx@xxxx.1.dblayer.com:28018/compose.
I need the url to look like this:
jdbc:mysql://xxxx:xxxx@xxxx.1.dblayer.com:28018/compose?useSSL=true&requireSSL=true&verifyServerCertificate=true
However, Spring doesn't appear to be able to handle this:
Could not get JDBC Connection; nested exception is java.sql.SQLException: Driver:com.mysql.jdbc.Driver@6c6efbc8 returned null for URL:jdbc:${vcap.services.compose-for-mysql.credentials.uri}?useSSL=true&requireSSL=true&verifyServerCertificate=true
Is there a way that I can construct the url using a yaml file, or do I need to use another approach such as xml configuration?
Update
I've tried:
url: ${'jdbc:'}${vcap.services.compose-for-mysql.credentials.uri}{'?useSSL=true&requireSSL=true&verifyServerCertificate=true'}
But get the error:
java.lang.IllegalArgumentException: URL must start with 'jdbc'
Also tried:
url: jdbc:${vcap.services.compose-for-mysql.credentials.uri}?useSSL=true&requireSSL=true&verifyServerCertificate=true
But get the error:
Driver:com.mysql.jdbc.Driver@567443ab returned null for URL:jdbc:${vcap.services.compose-for-mysql.credentials.uri}?useSSL=true&requireSSL=true&verifyServerCertificate=true
| Q: Can string concatenation be used for application yml value that includes SpEL? I'm trying to define a spring data source url like so:
spring:
datasource:
url: "jdbc:${vcap.services.compose-for-mysql.credentials.uri}?useSSL=true&requireSSL=true&verifyServerCertificate=true"
username: ${vcap.services.compose-for-mysql.credentials.username}
password: ${vcap.services.compose-for-mysql.credentials.password}
driver-class-name: com.mysql.jdbc.Driver
Where vcap.services.compose-for-mysql.credentials.uri is set to mysql://xxxx:xxxx@xxxx.1.dblayer.com:28018/compose.
I need the url to look like this:
jdbc:mysql://xxxx:xxxx@xxxx.1.dblayer.com:28018/compose?useSSL=true&requireSSL=true&verifyServerCertificate=true
However, Spring doesn't appear to be able to handle this:
Could not get JDBC Connection; nested exception is java.sql.SQLException: Driver:com.mysql.jdbc.Driver@6c6efbc8 returned null for URL:jdbc:${vcap.services.compose-for-mysql.credentials.uri}?useSSL=true&requireSSL=true&verifyServerCertificate=true
Is there a way that I can construct the url using a yaml file, or do I need to use another approach such as xml configuration?
Update
I've tried:
url: ${'jdbc:'}${vcap.services.compose-for-mysql.credentials.uri}{'?useSSL=true&requireSSL=true&verifyServerCertificate=true'}
But get the error:
java.lang.IllegalArgumentException: URL must start with 'jdbc'
Also tried:
url: jdbc:${vcap.services.compose-for-mysql.credentials.uri}?useSSL=true&requireSSL=true&verifyServerCertificate=true
But get the error:
Driver:com.mysql.jdbc.Driver@567443ab returned null for URL:jdbc:${vcap.services.compose-for-mysql.credentials.uri}?useSSL=true&requireSSL=true&verifyServerCertificate=true
| stackoverflow | {
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:872930",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567099"
} |
d309e4a545c373b4f6205f48c010ac33b5ec185b | Stackoverflow Stackexchange
Q: Counting Calls by Half-Hour intervals I was trying to get the count of calls per half-hour interval.
Couldn't figure it out.
select
count(call_id) as '#Calls',
1/2 h(date_time) as 'Call_Interval'
from My_Table
A: One method to aggregate by various time intervals is with DATEADD and DATEDIFF:
SELECT
COUNT(*) as '#Calls',
DATEADD(minute, (DATEDIFF(minute, '', date_time) / 30) * 30, '') as Call_Interval
FROM dbo.My_Table
GROUP BY DATEADD(minute, (DATEDIFF(minute, '', date_time) / 30) * 30, '')
ORDER BY Call_Interval;
On a side note, the empty string constant above represents the default value for datetime. The default values for datetime and other temporal types are listed below, expressed in ISO 8601 string format:
Data Type        Default Value
date             1900-01-01
datetime         1900-01-01T00:00:00
datetime2        1900-01-01T00:00:00
datetimeoffset   1900-01-01T00:00:00+00:00
smalldatetime    1900-01-01T00:00:00
time             00:00:00
Time interval calculations with a datepart more granular than minute (i.e. second, millisecond, and microsecond) may require a more recent base datetime value than the default value (e.g. 2020-01-01T00:00:00) to avoid overflow.
| Q: Counting Calls by Half-Hour intervals I was trying to get the count of calls per half-hour interval.
Couldn't figure it out.
select
count(call_id) as '#Calls',
1/2 h(date_time) as 'Call_Interval'
from My_Table
A: One method to aggregate by various time intervals is with DATEADD and DATEDIFF:
SELECT
COUNT(*) as '#Calls',
DATEADD(minute, (DATEDIFF(minute, '', date_time) / 30) * 30, '') as Call_Interval
FROM dbo.My_Table
GROUP BY DATEADD(minute, (DATEDIFF(minute, '', date_time) / 30) * 30, '')
ORDER BY Call_Interval;
On a side note, the empty string constant above represents the default value for datetime. The default values for datetime and other temporal types are listed below, expressed in ISO 8601 string format:
Data Type        Default Value
date             1900-01-01
datetime         1900-01-01T00:00:00
datetime2        1900-01-01T00:00:00
datetimeoffset   1900-01-01T00:00:00+00:00
smalldatetime    1900-01-01T00:00:00
time             00:00:00
Time interval calculations with a datepart more granular than minute (i.e. second, millisecond, and microsecond) may require a more recent base datetime value than the default value (e.g. 2020-01-01T00:00:00) to avoid overflow.
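As a quick sanity check of the rounding expression with an arbitrary example timestamp:
SELECT DATEADD(minute, (DATEDIFF(minute, '', '2023-05-04T10:47:12') / 30) * 30, '');
-- returns 2023-05-04 10:30:00.000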
| stackoverflow | {
"language": "en",
"length": 158,
"provenance": "stackexchange_0000F.jsonl.gz:872952",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567163"
} |
6ef2676e418c587a58af88b6ad72869b2ad8e693 | Stackoverflow Stackexchange
Q: Wordpress contact form 7 send confirmation message to user I want to send a confirmation message to the user after the form is submitted.
The recorded information should be sent to the client, and a confirmation message sent to the user.
Is there any setting in Contact Form 7, or do we need to do it with custom code?
Does anyone know how to do it?
A: Yes, you can send a confirmation email to the user.
Just use the Mail (2) option, which is triggered only when the email to the client is sent successfully.
https://contactform7.com/faq/can-i-implement-autoresponder/
| Q: Wordpress contact form 7 send confirmation message to user I want to send a confirmation message to the user after the form is submitted.
The recorded information should be sent to the client, and a confirmation message sent to the user.
Is there any setting in Contact Form 7, or do we need to do it with custom code?
Does anyone know how to do it?
A: Yes, you can send a confirmation email to the user.
Just use the Mail (2) option, which is triggered only when the email to the client is sent successfully.
https://contactform7.com/faq/can-i-implement-autoresponder/
A: You can use Contact Form 7 for it, but you may then have to hardcode where it is going. So if you go onto the page the form is on, you will see
<input type="submit"/>
If you change this code to something along the lines of
<input type="submit" href="example@example.com"/>
When clicked, it should then send the required data to the correct e-mail address. Make sure that your form has the "POST" method so the data actually gets sent over.
Also, with Contact Form 7 there is an option to do e-mail forms; you can then set the e-mail to your own address for testing and to your client's e-mail too. Send some dummy data over and the job should be a good'n. Just look at the different options you have available.
The link below should help you:
https://contactform7.com/setting-up-mail/
| stackoverflow | {
"language": "en",
"length": 225,
"provenance": "stackexchange_0000F.jsonl.gz:872972",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567235"
} |
a361dfb6e5f0694f08bd26440c1e8af1eb2e084b | Stackoverflow Stackexchange
Q: TensorFlow: Why does tf.one_hot give the best performance on tf.uint8 dtypes? I am rather puzzled by how there is a huge variance (5% difference in accuracy) in the performance of the same model (keeping all other factors the same), when I simply place the conversion of my labels dtype (tf.uint8) after using tf.one_hot, meaning to say the tf.one_hot function processes uint8 integers instead.
For example
...
labels = tf.cast(labels, tf.int64)
labels = tf.one_hot(labels, num_classes=12)
In comparison to
...
labels = tf.one_hot(labels, num_classes=12)
labels = tf.cast(labels, tf.int64)
the latter has better performance.
Is there a preferred dtype when using tf.one_hot?
| Q: TensorFlow: Why does tf.one_hot give the best performance on tf.uint8 dtypes? I am rather puzzled by how there is a huge variance (5% difference in accuracy) in the performance of the same model (keeping all other factors the same), when I simply place the conversion of my labels dtype (tf.uint8) after using tf.one_hot, meaning to say the tf.one_hot function processes uint8 integers instead.
For example
...
labels = tf.cast(labels, tf.int64)
labels = tf.one_hot(labels, num_classes=12)
In comparison to
...
labels = tf.one_hot(labels, num_classes=12)
labels = tf.cast(labels, tf.int64)
the latter has better performance.
Is there a preferred dtype when using tf.one_hot?
| stackoverflow | {
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:872974",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567241"
} |
e23b7caeb14fab9377d79c733b7bd2ae4e212688 | Stackoverflow Stackexchange
Q: Where is MSBuild.exe installed in Windows when installed using BuildTools_Full.exe? I'm trying to set up a build server for .NET, but can't figure out where MSBuild.exe is installed.
I'm trying to install MSBuild using the Microsoft Build Tools 2013:
https://www.microsoft.com/en-us/download/details.aspx?id=40760
A: This worked for me (this searches msbuild.exe in c:\ - the default didn't work)
where /R c:\ msbuild.exe
| Q: Where is MSBuild.exe installed in Windows when installed using BuildTools_Full.exe? I'm trying to set up a build server for .NET, but can't figure out where MSBuild.exe is installed.
I'm trying to install MSBuild using the Microsoft Build Tools 2013:
https://www.microsoft.com/en-us/download/details.aspx?id=40760
A: This worked for me (this searches msbuild.exe in c:\ - the default didn't work)
where /R c:\ msbuild.exe
A: As per https://learn.microsoft.com/en-us/visualstudio/msbuild/what-s-new-in-msbuild-15-0
MSBuild is now installed in a folder under each version of Visual Studio. For example, C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild. You can also use the following PowerShell module to locate MSBuild: vssetup.powershell.
MSBuild is no longer installed in the Global Assembly Cache. To reference MSBuild programmatically, use NuGet packages.
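If the Build Tools or Visual Studio 2017+ are installed, the bundled vswhere.exe can also locate MSBuild directly (a sketch; the -find switch needs vswhere 2.6.2 or later):
"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -latest -requires Microsoft.Component.MSBuild -find MSBuild\**\Bin\MSBuild.exe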
A: MSBuild in the previous versions of .NET Framework was installed with it but, they decided to install it with Visual Studio or with the package BuildTools_Full.exe.
The path to MSBuild when installed with the .NET framework:
C:\Windows\Microsoft.NET\Framework[64 or empty]\[framework_version]
The path to MSBuild when installed with Visual Studio is:
C:\Program Files (x86)\MSBuild\[version]\Bin for x86
and
C:\Program Files (x86)\MSBuild\[version]\Bin\amd64 for x64.
The path when BuildTools_Full.exe is installed is the same as when MSBuild is installed with Visual Studio.
A: For MsBuild 17:
C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Current\Bin
For MsBuild 16:
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\MSBuild\Current\Bin
For MsBuild 15:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild (or replace 'Enterprise' with 'Professional' or 'Community')
A: Open the Microsoft command line. I'm using Visual Studio 2019, so my command line is "Developer Command Prompt for VS 2019".
Then run the command (the Where-Object Powershell command)
where msbuild
And the path will be echo'd.
Or try this (the where.exe program/executable)
where.exe /R C:\ msbuild
More here on the difference between:
*
*where Powershell alias /Where-Object Powershell command vs
*where.exe executable
A: You can find the VS2019 version here: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\MSBuild.exe
| stackoverflow | {
"language": "en",
"length": 299,
"provenance": "stackexchange_0000F.jsonl.gz:872984",
"question_score": "72",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567280"
} |
2c38482dc732cfd6f8b53dcbf297eea90546eec6 | Stackoverflow Stackexchange
Q: NetSuite - Returning a "header is not NLAuth scheme" error when I am using OAuth I am trying to call a Restlet from a User Event Script which returns Customer information, using Token Based Authentication.
But I receive a user error which says the header is not NLAuth scheme.
I know this means that I have to use the NLAuth scheme, but why is it giving this error?
Here is the authorization header I am using
var headers = { 'Authorization': 'Oauth realm="XXXXX", oauth_consumer_key="XXXXX" , oauth_token="XXXXX", oauth_nonce="XXX",oauth_timestamp="XXXX", oauth_signature_method="HMAC-SHA1", oauth_version="1.0",oauth_signature="XXXXXXXXXX="',
'content-type': 'application/json'
};
A: You need to capitalize the "a" in "Oauth"
Oauth -> OAuth
| Q: NetSuite - Returning a "header is not NLAuth scheme" error when I am using OAuth I am trying to call a Restlet from a User Event Script which returns Customer information, using Token Based Authentication.
But I receive a user error which says the header is not NLAuth scheme.
I know this means that I have to use the NLAuth scheme, but why is it giving this error?
Here is the authorization header I am using
var headers = { 'Authorization': 'Oauth realm="XXXXX", oauth_consumer_key="XXXXX" , oauth_token="XXXXX", oauth_nonce="XXX",oauth_timestamp="XXXX", oauth_signature_method="HMAC-SHA1", oauth_version="1.0",oauth_signature="XXXXXXXXXX="',
'content-type': 'application/json'
};
A: You need to capitalize the "a" in "Oauth"
Oauth -> OAuth
A: Make sure that you are using the account ID for the realm and that it is formatted correctly. 12345_SB1
The underscore must be used if that applies to your account, and the letters must be capitalized.
A: Can you try these headers? You should get a solution:
var headers = {
"Authorization": "NLAuth nlauth_account='your account number',nlauth_email='login email' ,nlauth_signature='password'",
"Content-Type":"application/json"
}
| stackoverflow | {
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:872992",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567313"
} |
3c2fce65eafaf38fc0467b9daa3e1addb84760f4 | Stackoverflow Stackexchange
Q: List files recursively in Kotlin To list files in a directory with Kotlin, I used the list() and listFiles() functions:
File("/tmp").list().forEach { println(it) }
File("/tmp").listFiles().forEach { println(it) }
But how can I list files recursively?
A: Use one of .walk(...), .walkBottomUp() or .walkTopDown() extensions for File, which differ only in the order in which the files appear and all produce a FileTreeWalk, that implements Sequence<File>:
File("/tmp").walkTopDown().forEach { println(it) }
| Q: List files recursively in Kotlin To list files in a directory with Kotlin, I used the list() and listFiles() functions:
File("/tmp").list().forEach { println(it) }
File("/tmp").listFiles().forEach { println(it) }
But how can I list files recursively?
A: Use one of .walk(...), .walkBottomUp() or .walkTopDown() extensions for File, which differ only in the order in which the files appear and all produce a FileTreeWalk, that implements Sequence<File>:
File("/tmp").walkTopDown().forEach { println(it) }
| stackoverflow | {
"language": "en",
"length": 69,
"provenance": "stackexchange_0000F.jsonl.gz:873028",
"question_score": "59",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567419"
} |
40d2d8989f772346bda804bec0fc1fdc4a9bc92d | Stackoverflow Stackexchange
Q: Install R latest version on Ubuntu 16.04 So I tried to install R (after repairing Ubuntu on my system) using the following commands:
sudo apt-get install r-base-core
sudo apt-get install r-recommended
It installs R 3.2, but the latest version of R currently available is R 3.4. Any idea why it is not installing R 3.4?
I later installed R 3.4 manually and it works fine. Just curious to know why it didn't install in the first place using the command.
A: Follow these steps:
*
*Add this entry deb https://cloud.r-project.org/bin/linux/ubuntu xenial/ to your /etc/apt/sources.list file.
*Run this command in shell: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E084DAB9.
*Update and install: sudo apt update; sudo apt install r-base.
I wrote a post that explains each step in detail (update: also covers installing R on Ubuntu 18.04); here's the link.
| Q: Install R latest version on Ubuntu 16.04 So I tried to install R (after repairing Ubuntu on my system) using the following commands:
sudo apt-get install r-base-core
sudo apt-get install r-recommended
It installs R 3.2, but the latest version of R currently available is R 3.4. Any idea why it is not installing R 3.4?
I later installed R 3.4 manually and it works fine. Just curious to know why it didn't install in the first place using the command.
A: Follow these steps:
*
*Add this entry deb https://cloud.r-project.org/bin/linux/ubuntu xenial/ to your /etc/apt/sources.list file.
*Run this command in shell: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E084DAB9.
*Update and install: sudo apt update; sudo apt install r-base.
I wrote a post that explains each step in detail (update: also covers installing R on Ubuntu 18.04); here's the link.
A: It installs 3.2 because that's the default in the Ubuntu 16.04 repository. If you want the most up to date version of R for Ubuntu it's best to follow the instructions at the cran page for R on Ubuntu.
A: The xenial-cran35/ version of the repo does NOT work if you have a "default release" set in apt, as is the case in some distros that work on top of Ubuntu, such as Mint. For my Mint distro, there exists a file /etc/apt/apt.conf.d/01ubuntu inside of which it declares the Default-Release "xenial"; what this means is that, since r-base exists in the ubuntu repo at version 3.2, with release "xenial", it'll never use the 3.6 branch from the other repo, because the release name for that repo is "xenial-cran35". You need to edit that file to change the default release to "xenial-cran35", or do something more pointed using apt preference files (https://wiki.debian.org/AptPreferences#A.2Fetc.2Fapt.2Fpreferences).
This is basically R's fault for having a poorly formatted repo. They should have had 2 repos, each of which had a "xenial" release folder, one url for their 3.2 branch work and one for the 3.5+ branch work. Instead they have one repo, and have bastardized the "release name" instead, which just sort of happens to work for base Ubuntu, but won't work if you have non-base configuration of apt in this way.
| stackoverflow | {
"language": "en",
"length": 367,
"provenance": "stackexchange_0000F.jsonl.gz:873053",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567499"
} |
9fb682d24e7dbb394b9fdea5f7c867c316aea506 | Stackoverflow Stackexchange
Q: Downloading canvas image using toBlob I am attempting to download a large canvas image (several thousand pixels height and width) on the click of a button using toBlob in the following code, which doesn't seem to work:
document.getElementById("download_button").onclick = function() {
var link = document.createElement("a");
link.download = "image.png";
canvas.toBlob(function(blob){
link.href = URL.createObjectURL(blob);
console.log(blob);
},'image/png');
console.log(link.href);
link.click();
}
console.log(blob) in the callback function returns: Blob {size: 64452, type: "image/png"}
But console.log(link.href) returns nothing.
Am I not using .createObjectURL correctly?
I used to work with toDataURL, but it stopped working above a certain canvas size. And this post canvas.toDataURL() download size limit suggested to try toBlob.
A: Your code is fine.. just use it at the right time :)
canvas.toBlob(function(blob){
link.href = URL.createObjectURL(blob);
console.log(blob);
console.log(link.href); // this line should be here
},'image/png');
| Q: Downloading canvas image using toBlob I am attempting to download a large canvas image (several thousand pixels height and width) on the click of a button using toBlob in the following code, which doesn't seem to work:
document.getElementById("download_button").onclick = function() {
var link = document.createElement("a");
link.download = "image.png";
canvas.toBlob(function(blob){
link.href = URL.createObjectURL(blob);
console.log(blob);
},'image/png');
console.log(link.href);
link.click();
}
console.log(blob) in the callback function returns: Blob {size: 64452, type: "image/png"}
But console.log(link.href) returns nothing.
Am I not using .createObjectURL correctly?
I used to work with toDataURL, but it stopped working above a certain canvas size. And this post canvas.toDataURL() download size limit suggested to try toBlob.
A: Your code is fine.. just use it at the right time :)
canvas.toBlob(function(blob){
link.href = URL.createObjectURL(blob);
console.log(blob);
console.log(link.href); // this line should be here
},'image/png');
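In other words, the click has to wait for the blob as well. A minimal sketch of the download handler with everything inside the callback (same element id and filename as in the question):
document.getElementById("download_button").onclick = function() {
    canvas.toBlob(function(blob){
        var link = document.createElement("a");
        link.download = "image.png";
        link.href = URL.createObjectURL(blob);
        link.click();
        // optionally free the object URL later with URL.revokeObjectURL(link.href)
    },'image/png');
};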
A: My solution to the problem:
async function getImage({
canvas,
width,
height,
mime = 'image/jpeg',
quality = 0.8,
}) {
return new Promise(resolve => {
const tmpCanvas = document.createElement('canvas');
tmpCanvas.width = width;
tmpCanvas.height = height;
const ctx = tmpCanvas.getContext('2d');
ctx.drawImage(
canvas,
0,
0,
canvas.width,
canvas.height,
0,
0,
width,
height,
);
tmpCanvas.toBlob(resolve, mime, quality);
});
}
const photo = await getImage({ canvas, width: 500, height: 500 });
| stackoverflow | {
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:873056",
"question_score": "21",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567511"
} |
aebc54e42cad49afaaf6ec1bd2f04ec5de78e854 | Stackoverflow Stackexchange
Q: Work-around for $@ unbound variable in Bash 4.0.0? Specifically in Bash version 4.0.0, is there any way to work around the use of an empty $@ raising an unbound variable error when set -u is enabled?
Consider the following:
#!/usr/bin/env bash-4.0.0-1
set -xvu
echo "$BASH_VERSION"
echo "${BASH_VERSINFO[@]}"
main () {
printf '%q\n' "${@:-}"
}
main "${@:-}"
Gives me the following output when I provide an empty set of arguments:
neech@nicolaw.uk:~ $ ./test.sh
echo "$BASH_VERSION"
+ echo '4.0.0(1)-release'
4.0.0(1)-release
echo "${BASH_VERSINFO[@]}"
+ echo 4 0 0 1 release x86_64-unknown-linux-gnu
4 0 0 1 release x86_64-unknown-linux-gnu
main () {
printf '%q\n' "${@:-}"
}
main "${@:-}"
./test.sh: line 12: $@: unbound variable
I only see this behaviour in Bash version 4.0.0.
I was hoping that using variable substitution ${@:-} would allow me to work around this, but it seems not.
Is there a way to work around this?
A: $@ and $* are special variables, so they should always be defined; it's a bug:
https://unix.stackexchange.com/questions/16560/bash-su-unbound-variable-with-set-u
a workaround, maybe:
set +u
args=("$@")
set -u
main "${args[@]}"
or maybe also
main "${@:+$@}"
| Q: Work-around for $@ unbound variable in Bash 4.0.0? Specifically in Bash version 4.0.0, is there any way to work around the use of an empty $@ raising an unbound variable error when set -u is enabled?
Consider the following:
#!/usr/bin/env bash-4.0.0-1
set -xvu
echo "$BASH_VERSION"
echo "${BASH_VERSINFO[@]}"
main () {
printf '%q\n' "${@:-}"
}
main "${@:-}"
Gives me the following output when I provide an empty set of arguments:
neech@nicolaw.uk:~ $ ./test.sh
echo "$BASH_VERSION"
+ echo '4.0.0(1)-release'
4.0.0(1)-release
echo "${BASH_VERSINFO[@]}"
+ echo 4 0 0 1 release x86_64-unknown-linux-gnu
4 0 0 1 release x86_64-unknown-linux-gnu
main () {
printf '%q\n' "${@:-}"
}
main "${@:-}"
./test.sh: line 12: $@: unbound variable
I only see this behaviour in Bash version 4.0.0.
I was hoping that using variable substitution ${@:-} would allow me to work around this, but it seems not.
Is there a way to work around this?
A: $@ and $* are special variables, so they should always be defined; it's a bug:
https://unix.stackexchange.com/questions/16560/bash-su-unbound-variable-with-set-u
a workaround, maybe:
set +u
args=("$@")
set -u
main "${args[@]}"
or maybe also
main "${@:+$@}"
A: Why not do error handling on your own? This way you can control exactly what happens when an exception is encountered, for instance return a custom exit code and message for that error, rather than be confined to some predefined behavior.
function log_error
{
[[ $# -ne 1 ]] && return 1
typeset msg="$1"
typeset timestamp=$(date "+%F %T")
echo "[${timestamp}] [ERROR] - $msg " >&2
}
if [[ -z "$BASH_VERSION" ]]
then
log_error "BASH VERSION is not set"
exit 1
fi
| stackoverflow | {
"language": "en",
"length": 258,
"provenance": "stackexchange_0000F.jsonl.gz:873067",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567534"
} |
422f7f10ab27264b4c7e11d6a1f6e6ad20527a9f | Stackoverflow Stackexchange
Q: Reactjs : Add component on click of button I am trying to add a component on click of button.
Following is the fiddle for that
https://jsfiddle.net/rzv6Lrjh/92/
render: function() {
return (
<div>
<IndividualTicketInput />
{this.state.tickets}
<CreateTicket createTicket={this.onClick} />
</div>
);
}
});
Here I am using an individual component IndividualTicketInput; is it possible to do it inside a single Tickets component?
A: You could store an array of tickets in state and generate a new ticket object each time you click the CreateTicket button. Store the new tickets in state and iterate over them, rendering each one to the dom. The component will rerender each time setState is called, updating the dom with your new <Ticket> component.
getInitialState: function() {
  return { tickets: [] };
},
render: function() {
  return (
    <div>
      <IndividualTicketInput />
      <CreateTicket createTicket={this.onClick} />
      {this.renderTickets()}
    </div>
  );
},
renderTickets: function() {
  return this.state.tickets.map(ticket => {
    return <Ticket key={ticket.id} ticket={ticket} />;
  });
},
onClick: function() {
  let newTicket = { ... }; // build the new ticket object here
  this.setState({ tickets: [newTicket, ...this.state.tickets] });
}
| Q: Reactjs : Add component on click of button I am trying to add a component on click of button.
Following is the fiddle for that
https://jsfiddle.net/rzv6Lrjh/92/
render: function() {
return (
<div>
<IndividualTicketInput />
{this.state.tickets}
<CreateTicket createTicket={this.onClick} />
</div>
);
}
});
Here I am using an individual component IndividualTicketInput; is it possible to do it inside a single Tickets component?
A: You could store an array of tickets in state and generate a new ticket object each time you click the CreateTicket button. Store the new tickets in state and iterate over them, rendering each one to the dom. The component will rerender each time setState is called, updating the dom with your new <Ticket> component.
state = { tickets: [] }
render: function() {
return (
<div>
<IndividualTicketInput />
{this.state.tickets}
<CreateTicket createTicket={this.onClick} />
{this.renderTickets()}
</div>
);
}
});
renderTickets() {
return this.state.tickets.map(ticket => {
return <Ticket key={ticket.id} ticket={ticket} />;
});
}
onClick = () => {
let newTicket = { ... };
let tickets = [newTicket, ...this.state.tickets]; // copy instead of mutating state; unshift returns the new length, not the array
this.setState({tickets});
}
A: Yes, you can.
You can implement a function that returns the HTML elements of the ticket UI inside the Tickets component. However, I don't think it's the best practice, because each UI item should be split out into its own React component.
https://jsfiddle.net/rwnvt8vs/
ticket: function(ticket = {name: '', quantity: '', price: null}){
return (
<ul>
<li>
<label>Ticket Name</label>
<input className="ticket-name" type="text" placeholder="E.g. General Admission" value={ticket.name} />
</li>
<li>
<label>Quantity Available</label>
<input className="quantity" type="number" placeholder="100" value={ticket.quantity} />
</li>
<li>
<label>Price</label>
<input className="price" type="number" placeholder="25.00" value={ticket.price} />
</li>
<li>
<button type="button" className="delete-ticket" onClick={this.deleteTicket}><i className="fa fa-trash-o delete-ticket"></i></button>
</li>
</ul>
);
},
onClick: function() {
var newTicket = this.ticket();
var tickets = this.state.tickets.concat(newTicket); // concat returns a new array; push would return the new length, not the array
this.setState({tickets: tickets});
},
| stackoverflow | {
"language": "en",
"length": 282,
"provenance": "stackexchange_0000F.jsonl.gz:873068",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567537"
} |
86f582683512f8f1c58835d814de691140063a58 | Stackoverflow Stackexchange
Q: ReCAPTCHA couldn't find user-provided function: myCallBack I'm trying to use ReCAPTCHA where I am getting following error.
ReCAPTCHA couldn't find user-provided function: myCallBack.
How can I solve this issue?
var verifyCallback3 = function(response) {
if(response!=null){
$("#rss").show();
}
};
var myCallBack = function() {
grecaptcha.render('html_element', {
'sitekey' : '6sssfffffffffAAPfEI_RkbAlUuw5FA4p-kiGy5Nea',
'callback' : verifyCallback3,
'theme' : 'light',
'type':'image'
});
};
A: Make sure your callback function is being defined in the global scope. For some reason, in production my function was not in this namespace.
In addition to:
function myCallback() { ... }
Make sure you directly assign it into the global space:
window.myCallback = myCallback;
You should be able to test whether this is your problem by typing the function name at the JavaScript console and seeing whether it's defined or not.
| Q: ReCAPTCHA couldn't find user-provided function: myCallBack I'm trying to use ReCAPTCHA where I am getting following error.
ReCAPTCHA couldn't find user-provided function: myCallBack.
How can I solve this issue?
var verifyCallback3 = function(response) {
if(response!=null){
$("#rss").show();
}
};
var myCallBack = function() {
grecaptcha.render('html_element', {
'sitekey' : '6sssfffffffffAAPfEI_RkbAlUuw5FA4p-kiGy5Nea',
'callback' : verifyCallback3,
'theme' : 'light',
'type':'image'
});
};
A: Make sure your callback function is being defined in the global scope. For some reason, in production my function was not in this namespace.
In addition to:
function myCallback() { ... }
Make sure you directly assign it into the global space:
window.myCallback = myCallback;
You should be able to test whether this is your problem by typing the function name at the JavaScript console and seeing whether it's defined or not.
A: In your recaptcha div, make sure not to use parenthesis in your data-callback.
Like so data-callback="yourCallback", rather than data-callback="yourCallback();"
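For reference, a minimal sketch of the widget markup with an explicit callback (the sitekey is a placeholder); the callback name appears without parentheses:
<div class="g-recaptcha"
     data-sitekey="your_site_key"
     data-callback="yourCallback">
</div>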
A: The same thing happened to me. I checked my code carefully and everything was fine, but the captcha was not shown and the console showed "ReCAPTCHA couldn't find user-provided function: myCallBack". Finally I found that my JavaScript code was inside a page-load function. I simply moved it out of the page-load function and it is working.
A: You have to put your script function:
<script> function registerFormCheck(){} </script>
before the Google script, something like this:
/* First */ <script> function registerFormCheck(){} </script>
/* Second */ <script src='https://www.google.com/recaptcha/api.js'></script>
This worked for me...
A: There is likely an error somewhere in the function, or somewhere else in your JavaScript, that prevents the function from being registered; for me it was a missing comma. From what you have, I'm guessing it's related to 'html_element' or a widgetID not being assigned. Try:
var myCallBack = function() {
var widgetID;
widgetID = grecaptcha.render(document.getElementById('html_element'), {
'sitekey' : '6sssfffffffffAAPfEI_RkbAlUuw5FA4p-kiGy5Nea',
'callback' : verifyCallback3,
'theme' : 'light',
'type':'image'
});
};
A: With reference to John Lehmann's answer: for React users, in order to make your callback function visible, make it global.
You can achieve this by using useEffect() or componentDidMount() lifecycle methods.
For example:
useEffect(() => {
window.verifyCaptcha = verifyCaptcha;
})
This way, when the component that contains the reCAPTCHA box loads, it will also make your callback function global.
A: For me working this solution:
<script src="https://www.google.com/recaptcha/api.js?&render=explicit" async defer></script>
A: In one of the code samples in Google's documentation, they include this script tag without a closing tag:
<script async src="https://www.google.com/recaptcha/api.js">
Adding the closing tag fixed the problem for me:
<script async src="https://www.google.com/recaptcha/api.js"></script>
| stackoverflow | {
"language": "en",
"length": 416,
"provenance": "stackexchange_0000F.jsonl.gz:873070",
"question_score": "38",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567543"
} |
f0b49acd90a8b370798e9351adaa45dbdafe07ed | Stackoverflow Stackexchange
Q: Why are my tables not listed in the Redshift pg_table_def system table? So I have created some tables:
But they are not listed when I query pg_table_def? Any ideas appreciated.
A: "For modifying the search path permanently on your cluster , please modify the search_path parameter in the parameter group that is currently associated with your cluster.
For step by step instructions please refer to the link below :-
http://docs.aws.amazon.com/redshift/latest/mgmt/managing-parameter-groups-console.html
"
https://forums.aws.amazon.com/thread.jspa?threadID=131150
| Q: Why are my tables not listed in the Redshift pg_table_def system table? So I have created some tables:
But they are not listed when I query pg_table_def? Any ideas appreciated.
A: "For modifying the search path permanently on your cluster , please modify the search_path parameter in the parameter group that is currently associated with your cluster.
For step by step instructions please refer to the link below :-
http://docs.aws.amazon.com/redshift/latest/mgmt/managing-parameter-groups-console.html
"
https://forums.aws.amazon.com/thread.jspa?threadID=131150
A: This is old, but in case there is anyone like me looking for the straight answer on the command line:
Set the search_path parameter to include the schemas you want.
For a single session, it's SET search_path TO my_schema, my_second_schema [...] for as many schemas as you like.
To set the search path permanently for one user, it's
ALTER USER my_user SET search_path TO my_schema, my_second_schema [...].
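As a quick sanity check (my_schema is just an example name):
-- confirm what Redshift is currently searching
SHOW search_path;
-- add the schema for this session, then the tables should appear
SET search_path TO '$user', public, my_schema;
SELECT DISTINCT schemaname, tablename FROM pg_table_def WHERE schemaname = 'my_schema';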
| stackoverflow | {
"language": "en",
"length": 141,
"provenance": "stackexchange_0000F.jsonl.gz:873107",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567667"
} |
bf578bc5784b8e03071f75601eaa6722a3a9545c | Stackoverflow Stackexchange
Q: wait for webview to load html before taking screenshot I'm trying to take a screenshot of a webview in Android. However the screenshot fires too quickly and as a result, I get a blank screenshot. I tried implementing a webviewclient and onPageFinished to listen for the webview to load before taking the shot, but it didn't work. How do I make sure the view loads before taking the screenshot?
public void onSaveClicked(View reLayout){
final WebView webview;
setContentView(R.layout.webview);
webview = (WebView) findViewById(R.id.webview);
WriteJsJson();
Activity context;
context = _activity.get();
Intent fire = new Intent(context, WebviewActivity.class);
switch (_reportType) {
case 1 :
fire.putExtra("target", "daily"); // Parameter to tell the webview activity to open the right report.
case 2 :
fire.putExtra("target", "week");
case 3 :
fire.putExtra("target", "month");
}
startActivity(fire);
webview.setWebViewClient(new WebViewClient() {
@Override
public void onPageFinished(WebView view, String url) {
grabScreen(); //method for taking screenshot and storing it...
}
});
A: You can add setWebChromeClient to see the process of webview.
webview.getSettings().setJavaScriptEnabled(true);
webview.setWebChromeClient(new WebChromeClient() {
public void onProgressChanged(WebView view, final int progress) {
progressBar.setProgress(progress);
if (progress == 100) {
grabScreen();
}
}
});
| Q: wait for webview to load html before taking screenshot I'm trying to take a screenshot of a webview in Android. However the screenshot fires too quickly and as a result, I get a blank screenshot. I tried implementing a webviewclient and onPageFinished to listen for the webview to load before taking the shot, but it didn't work. How do I make sure the view loads before taking the screenshot?
public void onSaveClicked(View reLayout){
final WebView webview;
setContentView(R.layout.webview);
webview = (WebView) findViewById(R.id.webview);
WriteJsJson();
Activity context;
context = _activity.get();
Intent fire = new Intent(context, WebviewActivity.class);
switch (_reportType) {
case 1 :
fire.putExtra("target", "daily"); // Parameter to tell the webview activity to open the right report.
case 2 :
fire.putExtra("target", "week");
case 3 :
fire.putExtra("target", "month");
}
startActivity(fire);
webview.setWebViewClient(new WebViewClient() {
@Override
public void onPageFinished(WebView view, String url) {
grabScreen(); //method for taking screenshot and storing it...
}
});
A: You can add setWebChromeClient to see the process of webview.
webview.getSettings().setJavaScriptEnabled(true);
webview.setWebChromeClient(new WebChromeClient() {
public void onProgressChanged(WebView view, final int progress) {
progressBar.setProgress(progress);
if (progress == 100) {
grabScreen();
}
}
});
A: onPageFinished notifies the host application that a page has finished loading. This method is called only for the main frame. When onPageFinished() is called, the rendered picture may not be updated yet. To get a notification for the new Picture, use onNewPicture(WebView, Picture).
Sample Code
mWebView.setPictureListener(new MyPictureListener());
//... and then later on....
class MyPictureListener implements PictureListener {
@Override
public void onNewPicture(WebView view, Picture arg1) {
// put code here that needs to run when the page has finished loading and
// a new "picture" is on the webview.
}
}
| stackoverflow | {
"language": "en",
"length": 270,
"provenance": "stackexchange_0000F.jsonl.gz:873125",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567731"
} |
9b1a62e8b0dea5e558a856269d872d28c9fbe319 | Stackoverflow Stackexchange
Q: NSStatusBar + Swift: title shows and immediately disappears I want to make a status bar item for macOS, but after I run the application the title shows and immediately disappears
func applicationDidFinishLaunching(_ aNotification: Notification) {
// Insert code here to initialize your application
let statusItem = NSStatusBar.system().statusItem(withLength: NSVariableStatusItemLength)
statusItem.title = "Hello"
}
I think something wrong with references, but don't know how to fix this problem.
A: Indeed you need a strong reference to the status item
var statusItem : NSStatusItem!
func applicationDidFinishLaunching(_ aNotification: Notification) {
// Insert code here to initialize your application
statusItem = NSStatusBar.system().statusItem(withLength: NSVariableStatusItemLength)
statusItem.title = "Hello"
}
However I recommend to use a closure to initialize the status item
let statusItem : NSStatusItem = {
let item = NSStatusBar.system().statusItem(withLength: NSVariableStatusItemLength)
item.title = "Hello"
return item
}()
| Q: NSStatusBar + Swift: title shows and immediately disappears I want to make a status bar item for macOS, but after I run the application the title shows and immediately disappears
func applicationDidFinishLaunching(_ aNotification: Notification) {
// Insert code here to initialize your application
let statusItem = NSStatusBar.system().statusItem(withLength: NSVariableStatusItemLength)
statusItem.title = "Hello"
}
I think something wrong with references, but don't know how to fix this problem.
A: Indeed you need a strong reference to the status item
var statusItem : NSStatusItem!
func applicationDidFinishLaunching(_ aNotification: Notification) {
// Insert code here to initialize your application
statusItem = NSStatusBar.system().statusItem(withLength: NSVariableStatusItemLength)
statusItem.title = "Hello"
}
However I recommend to use a closure to initialize the status item
let statusItem : NSStatusItem = {
let item = NSStatusBar.system().statusItem(withLength: NSVariableStatusItemLength)
item.title = "Hello"
return item
}()
| stackoverflow | {
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:873133",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567756"
} |
c0c61220125d1c1cbb3fd93eb4ebdbd8bfffbfae | Stackoverflow Stackexchange
Q: C++ compile error, mutex in std does not name a type in MinGW (GCC 6.3.0) I'm trying to compile Mongo C++11 driver with MinGW (G++ 6.3.0) on Windows 10 64bit. From GCC 6 release notes;
The default mode has been changed to -std=gnu++14.
My understanding is that C++11 is also supported by default.
Why then do I get these error message about mutex and thread?
from F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/exception/private/mongoc_error.hh:19,
from F:\Projects\Mongo\attempt_4_mingw64\mongo-cxx-driver-r3.1.1\src\mongocxx\bulk_write.cpp:20:
F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/test_util/mock.hh:183:10: error: 'mutex' in namespace 'std' does not name a type
std::mutex _active_instances_lock;
^~~~~
F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/test_util/mock.hh:184:24: error: 'thread' is not a member of 'std'
std::unordered_map<std::thread::id, instance*> _active_instances;
^~~
F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/test_util/mock.hh:184:24: error: 'thread' is not a member of 'std'
F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/test_util/mock.hh:184:50: error: wrong number of template arguments (1, should be at least 2)
std::unordered_map<std::thread::id, instance*> _active_instances;
^
A: mongocxx currently only supports MSVC on Windows, so building with MinGW might not be possible. That being said, if you're not already, I suggest passing -std=c++11 in your CMAKE_CXX_FLAGS to see if that works.
| Q: C++ compile error, mutex in std does not name a type in MinGW (GCC 6.3.0) I'm trying to compile Mongo C++11 driver with MinGW (G++ 6.3.0) on Windows 10 64bit. From GCC 6 release notes;
The default mode has been changed to -std=gnu++14.
My understanding is that C++11 is also supported by default.
Why then do I get these error message about mutex and thread?
from F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/exception/private/mongoc_error.hh:19,
from F:\Projects\Mongo\attempt_4_mingw64\mongo-cxx-driver-r3.1.1\src\mongocxx\bulk_write.cpp:20:
F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/test_util/mock.hh:183:10: error: 'mutex' in namespace 'std' does not name a type
std::mutex _active_instances_lock;
^~~~~
F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/test_util/mock.hh:184:24: error: 'thread' is not a member of 'std'
std::unordered_map<std::thread::id, instance*> _active_instances;
^~~
F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/test_util/mock.hh:184:24: error: 'thread' is not a member of 'std'
F:/Projects/Mongo/attempt_4_mingw64/mongo-cxx-driver-r3.1.1/src/mongocxx/test_util/mock.hh:184:50: error: wrong number of template arguments (1, should be at least 2)
std::unordered_map<std::thread::id, instance*> _active_instances;
^
A: mongocxx currently only supports MSVC on Windows, so building with MinGW might not be possible. That being said, if you're not already, I suggest passing -std=c++11 in your CMAKE_CXX_FLAGS to see if that works.
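If you do try that, a rough sketch of how the flag could be passed when configuring the build with CMake (the generator and install path are placeholders, not a verified MinGW recipe):
cmake -G "MinGW Makefiles" -DCMAKE_CXX_FLAGS="-std=c++11" -DCMAKE_INSTALL_PREFIX=C:\mongo-cxx-driver ..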
| stackoverflow | {
"language": "en",
"length": 158,
"provenance": "stackexchange_0000F.jsonl.gz:873140",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567784"
} |
3b2fc1b6e38793ac462d66a60b9ade8874bb3718 | Stackoverflow Stackexchange
Q: using a stream based on the content of Optional I get a map from a service not under my control that might be null and want to process it, let's say, filter, map and reduce to a single element I need.
Question: is there a "link" from Optional to Stream?
I tried (among other things):
return Optional.ofNullable(getMap())
.map(Map::entrySet) // gets the entryset
.map(Stream::of)
.orElseGet(Stream::empty)
// i would then like to continue with
.filter(e -> e.getKey().startsWith("f")
.map(Entry::getValue)
.findFirst();
but then I get not Stream<Entry> but Stream<Set<Entry>> ... is there a way to somehow flatMap a collection or map out of an Optional?
Note: I am interested in a fluent, pure stream/optional approach here. It works of course when I save the map to local var first and make sure it is not null.
A: Your mistake is in this line:
.map(Stream::of)
The of function takes a single parameter (or a vararg parameter), and returns a stream with only that element. You will therefore get a Stream<Set<Map.Entry>>. Instead, you should call the stream method on the entryset, like this:
.map(Set::stream)
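Putting it together, the corrected pipeline from the question would look roughly like this (the missing closing parenthesis on the filter line in the question is also fixed here):
return Optional.ofNullable(getMap())
        .map(Map::entrySet)   // Optional<Set<Map.Entry<K, V>>>
        .map(Set::stream)     // Optional<Stream<Map.Entry<K, V>>>
        .orElseGet(Stream::empty)
        .filter(e -> e.getKey().startsWith("f"))
        .map(Map.Entry::getValue)
        .findFirst();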
| Q: using a stream based on the content of Optional I get a map from a service not under my control that might be null and want to process it, let's say, filter, map and reduce to a single element I need.
Question: is there a "link" from Optional to Stream?
I tried (among other things):
return Optional.ofNullable(getMap())
.map(Map::entrySet) // gets the entryset
.map(Stream::of)
.orElseGet(Stream::empty)
// i would then like to continue with
.filter(e -> e.getKey().startsWith("f")
.map(Entry::getValue)
.findFirst();
but then I get not Stream<Entry> but Stream<Set<Entry>> ... is there a way to somehow flatMap a collection or map out of an Optional?
Note: I am interested in a fluent, pure stream/optional approach here. It works of course when I save the map to local var first and make sure it is not null.
A: Your mistake is in this line:
.map(Stream::of)
The of function takes a single parameter (or a vararg parameter), and returns a stream with only that element. You will therefore get a Stream<Set<Map.Entry>>. Instead, you should call the stream method on the entryset, like this:
.map(Set::stream)
A: I think I'm going to answer the question.
return Optional.ofNullable(getMap())
.map(Map::entrySet) // gets the entryset
.map(Stream::of)
.orElseGet(Stream::empty)
// i would then like to continue with
.filter(e -> e.getKey().startsWith("f")
.map(Entry::getValue)
.findFirst();
I'm sick, really sick, when I see code like the above. Is it really so important for you to write code in a fluent approach instead of writing simple code? First of all, as @Didier L mentioned in the comments, it's the wrong way to use Optional. Secondly, the code is so hard to read, isn't it? If you write it by defining a local variable:
Map<String, Integer> map = getMap();
return map == null ? Optional.<Integer> empty()
: map.entrySet().stream()
.filter(e -> e.getKey().startsWith("f")).map(Entry::getValue).findFirst();
Isn't it much clearer? Or you can do it with StreamEx if you can't get over not using a fluent approach:
StreamEx.ofNullable(getMap())
.flatMapToEntry(Function.identity())
.filterKeys(k -> k.startsWith("f")).values().findFirst();
Or my library abacus-common
EntryStream.of(getMap()).filterByKey(k -> k.startsWith("f")).values().first();
Always try to look for a better approach when things get stuck.
| stackoverflow | {
"language": "en",
"length": 333,
"provenance": "stackexchange_0000F.jsonl.gz:873147",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567806"
} |
62d96ba7714f2a8dcd894be50a03b63ef877eb78 | Stackoverflow Stackexchange
Q: Store a column as a string in kdb I have a table with numerous columns. I'm trying to take the data from one of the columns and return it as a string.
For instance, if I had:
A B C
1 2 3
4 5 6
7 8 9
I would like to take column B and store 258 as a string.
How would I do this?
A: Like this?
q)raze exec string B from ([] A:1 4 7;B:2 5 8;C:3 6 9)
"258"
Or are you trying to change the type of the column in the table?
q)update string B from ([] A:1 4 7;B:2 5 8;C:3 6 9)
A B C
--------
1 ,"2" 3
4 ,"5" 6
7 ,"8" 9
| Q: Store a column as a string in kdb I have a table with numerous columns. I'm trying to take the data from one of the columns and return it as a string.
For instance, if I had:
A B C
1 2 3
4 5 6
7 8 9
I would like to take column B and store 258 as a string.
How would I do this?
A: Like this?
q)raze exec string B from ([] A:1 4 7;B:2 5 8;C:3 6 9)
"258"
Or are you trying to change the type of the column in the table?
q)update string B from ([] A:1 4 7;B:2 5 8;C:3 6 9)
A B C
--------
1 ,"2" 3
4 ,"5" 6
7 ,"8" 9
A: If all your entries are single digit numbers, all you need to do is
.Q.n t.B
Taking your data as an example,
q)show t:([] A:1 4 7;B:2 5 8;C:3 6 9)
A B C
-----
1 2 3
4 5 6
7 8 9
q).Q.n t.B
"258"
Note that .Q.n is simply a string containing the 10 digits:
q).Q.n
"0123456789"
If you want to store the string back in the table, just use update:
q)update .Q.n B from `t
`t
q)t.B
"258"
| stackoverflow | {
"language": "en",
"length": 207,
"provenance": "stackexchange_0000F.jsonl.gz:873169",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567868"
} |
4b29577c56cbedb98569802f3f1fcdba487ca6fa | Stackoverflow Stackexchange
Q: Alamofire: unable to disable caching I cannot get Alamofire or iOS to stop caching:
Alamofire.SessionManager.default.session.configuration.requestCachePolicy = .reloadIgnoringLocalCacheData
or
URLCache.shared.removeAllCachedResponses()
I need to disable it for all requests?
Also tried:
let configuration = URLSessionConfiguration.default
configuration.urlCache = nil
let manager = Alamofire.SessionManager(configuration: configuration)
This give this error:
Auth request failed with error:
Error Domain=NSURLErrorDomain Code=-999 "cancelled" UserInfo={NSErrorFailingURLKey=http://localhost:8080/slow/file.json, NSLocalizedDescription=cancelled, NSErrorFailingURLStringKey=http://localhost:8080/slow/file.json}
A: This is working:
URLCache.shared = URLCache(memoryCapacity: 0, diskCapacity: 0, diskPath: nil)
And then just Alamofire.request
| Q: Alamofire: unable to disable caching I cannot get Alamofire or iOS to stop caching:
Alamofire.SessionManager.default.session.configuration.requestCachePolicy = .reloadIgnoringLocalCacheData
or
URLCache.shared.removeAllCachedResponses()
I need to disable it for all requests?
Also tried:
let configuration = URLSessionConfiguration.default
configuration.urlCache = nil
let manager = Alamofire.SessionManager(configuration: configuration)
This give this error:
Auth request failed with error:
Error Domain=NSURLErrorDomain Code=-999 "cancelled" UserInfo={NSErrorFailingURLKey=http://localhost:8080/slow/file.json, NSLocalizedDescription=cancelled, NSErrorFailingURLStringKey=http://localhost:8080/slow/file.json}
A: This is working:
URLCache.shared = URLCache(memoryCapacity: 0, diskCapacity: 0, diskPath: nil)
And then just Alamofire.request
A: To disable the urlCache you have to create a custom Alamofire Manager with a nil urlCache.
let configuration = URLSessionConfiguration.default
configuration.urlCache = nil
let manager = Manager(configuration: configuration)
More information you can find in Apple Documenation
To disable caching, set this property to nil.
A: In Alamofire 5.0 you should create an instance of ResponseCacher and set its caching behavior to .doNotCache and then inject it to a new Session and use only that session:
static let mySession = Session(cachedResponseHandler: ResponseCacher(behavior: .doNotCache))
A: I used URLCache.shared.removeAllCachedResponses() before each Alamofire request to stop caching
A: You cannot alter the properties of a URLSessionConfiguration that has already been used to initialize a URLSession, which is what your code sample is doing. Like k8mil said, you should create your own Alamofire SessionManager with the cache disabled if you want this behavior.
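A rough sketch combining those pieces for Alamofire 4 (the URL is the one from the question; keep a strong reference to the manager, since a deallocated manager typically cancels in-flight requests, which matches the -999 "cancelled" error shown above):
let configuration = URLSessionConfiguration.default
configuration.requestCachePolicy = .reloadIgnoringLocalCacheData
configuration.urlCache = nil
// store this manager in a property so it outlives the request
let manager = Alamofire.SessionManager(configuration: configuration)
manager.request("http://localhost:8080/slow/file.json").responseJSON { response in
    // handle the fresh, uncached response here
}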
| stackoverflow | {
"language": "en",
"length": 211,
"provenance": "stackexchange_0000F.jsonl.gz:873182",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567908"
} |
1bacbc6ab41db64eb6894ba2d86905677aa67c83 | Stackoverflow Stackexchange
Q: Testing ARKit without iPhone6s or newer I am before decision to download Xcode9. I want to play with new framework - ARKit. I know that to run app with ARKit I need a device with A9 chip or newer. Unfortunately I have an older one. My question is to people who already downloaded the new Xcode. There is a possibility to run ARKit app in my case? Any simulator for that or something else? Any ideas or will I have to buy new device?
A: ARKit is available on any iOS 11 device, but the world tracking features that enable high-quality AR experiences require a device with the A9 or later processor. It is necessary to update your device to the iOS 11 beta.
| Q: Testing ARKit without iPhone6s or newer I am before decision to download Xcode9. I want to play with new framework - ARKit. I know that to run app with ARKit I need a device with A9 chip or newer. Unfortunately I have an older one. My question is to people who already downloaded the new Xcode. There is a possibility to run ARKit app in my case? Any simulator for that or something else? Any ideas or will I have to buy new device?
A: ARKit is available on any iOS 11 device, but the world tracking features that enable high-quality AR experiences require a device with the A9 or later processor. It is necessary to update your device to the iOS 11 beta.
A: There is another problem due to iOS11 beta1 bug, iOS 11 Beta 1 Release Notes And Known Issues According To Apple
This means you need an iPhone 6S or better to use ARKit(ARSessionConfiguration) at the current time. Until the iOS11 beta2 release...
2017.07.13 update
My iPhone 6 was updated to iOS 11 beta 3, and it can run ARWorldTrackingSessionConfiguration. Amazing!
2017.09.07 update
The iPhone 6 cannot run ARWorldTrackingConfiguration in the recent iOS 11 betas... :(
| stackoverflow | {
"language": "en",
"length": 193,
"provenance": "stackexchange_0000F.jsonl.gz:873185",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567920"
} |
0ee852b1c8188865092f926b58f823ee8011a7ee | Stackoverflow Stackexchange
Q: node js express routing request get called twice I'm building an app with Nodejs and when a page load the get request get called twice. Here is the request for the index page.
Any idea why this is happening and how to fix it?
router.get("/",(req,res)=>{
res.render("index",{csrfToken: req.csrfToken()});
});
| Q: node js express routing request get called twice I'm building an app with Nodejs and when a page load the get request get called twice. Here is the request for the index page.
Any idea why this is happening and how to fix it?
router.get("/",(req,res)=>{
res.render("index",{csrfToken: req.csrfToken()});
});
| stackoverflow | {
"language": "en",
"length": 49,
"provenance": "stackexchange_0000F.jsonl.gz:873209",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44567997"
} |
9675ec356654b4fcd015c3bfca05160596d36771 | Stackoverflow Stackexchange
Q: How to disable push notification capability in xcode project? I use a free Apple developer account, so no push notification support. So when I get an existing xcode project and try to run it on my phone, I get "Your development team, "xxx", does not support the Push Notifications capability."
But when I go to "Capabilities" tab, I don't see it there to disable (It said "10 capabilities Unavailable"). So I guess it hides them? But the project still require the capabilities somewhere?
So how do I disable the push notification capability of the project, so I can run it?
A: UPDATE:
Thanks to KerimGökarslan for reminding me that some people can't see the push notifications capability.
If your developer account doesn't have push notifications capability, you must clear current provisioning profile and certificate. Then you can disable it in capabilities tab.
Select capabilities tab of your target and turn off what you want. Make sure configuration of every target is changed.
| Q: How to disable push notification capability in xcode project? I use a free Apple developer account, so no push notification support. So when I get an existing xcode project and try to run it on my phone, I get "Your development team, "xxx", does not support the Push Notifications capability."
But when I go to "Capabilities" tab, I don't see it there to disable (It said "10 capabilities Unavailable"). So I guess it hides them? But the project still require the capabilities somewhere?
So how do I disable the push notification capability of the project, so I can run it?
A: UPDATE:
Thanks to KerimGökarslan for reminding me that some people can't see the push notifications capability.
If your developer account doesn't have push notifications capability, you must clear current provisioning profile and certificate. Then you can disable it in capabilities tab.
Select capabilities tab of your target and turn off what you want. Make sure configuration of every target is changed.
A: Open YourAppName.entitlements and delete
<key>aps-environment</key>
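For context, the full entry usually looks like this in the .entitlements plist (the value may be development or production), and both lines should be removed:
<key>aps-environment</key>
<string>development</string>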
A: Following this picture will fix this issue
A: Just clear "Automatically manage signing" checkbox and select it again, you will find the "push notification capability" or other capability in Capabilities page.
| stackoverflow | {
"language": "en",
"length": 198,
"provenance": "stackexchange_0000F.jsonl.gz:873212",
"question_score": "18",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568009"
} |
9dde6e5663ada5d1b18d812ea12aadb15d0e03dc | Stackoverflow Stackexchange
Q: XSLT - remove duplicate namespace declarations I have the following xml:
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Id="xmldsig">
<ds:SignedInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
</ds:SignedInfo>
<ds:SignedInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfoData xmlns:ds="http://www.w3.org/2000/09/xmldsig#"/>
</ds:SignedInfo>
</ds:Signature>
The problem is that, while I need the first ds namespace declaration on <ds:Signature>, the following ones (on <ds:SignedInfo> and <ds:SignedInfoData>) are not required.
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Id="xmldsig">
<ds:SignedInfo>
</ds:SignedInfo>
<ds:SignedInfo>
<ds:SignedInfoData/>
</ds:SignedInfo>
</ds:Signature>
A: Eliminating the duplicated namespace declarations is something that happens by just copying the input, for instance with an identity transformation http://xsltransform.net/jxDigU1/1
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
</xsl:transform>
| Q: XSLT - remove duplicate namespace declarations I have the following xml:
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Id="xmldsig">
<ds:SignedInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
</ds:SignedInfo>
<ds:SignedInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfoData xmlns:ds="http://www.w3.org/2000/09/xmldsig#"/>
</ds:SignedInfo>
</ds:Signature>
The problem is that, while I need the first ds namespace declaration on <ds:Signature>, the following ones (on <ds:SignedInfo> and <ds:SignedInfoData>) are not required.
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Id="xmldsig">
<ds:SignedInfo>
</ds:SignedInfo>
<ds:SignedInfo>
<ds:SignedInfoData/>
</ds:SignedInfo>
</ds:Signature>
A: Eliminating the duplicated namespace declarations is something that happens by just copying the input, for instance with an identity transformation http://xsltransform.net/jxDigU1/1
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
</xsl:transform>
| stackoverflow | {
"language": "en",
"length": 105,
"provenance": "stackexchange_0000F.jsonl.gz:873217",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568027"
} |
39febad14bcab12d03cb05cd7fed2588144969cc | Stackoverflow Stackexchange
Q: F# sum of all other elements in list Lets say i have a function definition like this:
let sum (numbers: int list) =
// code here
What are possible ways to compute a sum of all other elements in numbers list? So if input is [3;5;3;7] result is [(3, 15); (5, 13); (3, 15); (7, 11)].
I'm actually interested in any solutions, especially those which use a functional approach.
Thanks
A: You could apply mapFold and use the state to see if the first occurrence has already been found.
let excludeAndSum (numbers: int list) i =
numbers
|> Seq.mapFold (fun c i' -> (i', c||i<>i'), c||i=i') false |> fst
|> Seq.filter snd
|> Seq.sumBy fst
let sum numbers =
List.map (fun i -> i , excludeAndSum numbers i) numbers
| Q: F# sum of all other elements in list Lets say i have a function definition like this:
let sum (numbers: int list) =
// code here
What are possible ways to compute a sum of all other elements in numbers list? So if input is [3;5;3;7] result is [(3, 15); (5, 13); (3, 15); (7, 11)].
I'm actually interested in any solutions, especially those which use a functional approach.
Thanks
A: You could apply mapFold and use the state to see if the first occurrence has already been found.
let excludeAndSum (numbers: int list) i =
numbers
|> Seq.mapFold (fun c i' -> (i', c||i<>i'), c||i=i') false |> fst
|> Seq.filter snd
|> Seq.sumBy fst
let sum numbers =
List.map (fun i -> i , excludeAndSum numbers i) numbers
A: You can do this pretty naively by summing the list and then returning a tuple of (x, sum - x) for each element x in the list:
let sum (numbers: int list) : (int * int) list =
let s = List.sum numbers
numbers |> List.map(fun x -> (x, s-x))
let nums = [3;5;3;7]
printfn "%A" (sum nums) // [(3, 15); (5, 13); (3, 15); (7, 11)]
| stackoverflow | {
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:873232",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568080"
} |
ce9eb32bf1c2a5040acd020c0a7956cc2f373461 | Stackoverflow Stackexchange
Q: Gitignore everything except a subfolder (with every content)? I want to ignore everything, except a specific subfolder (and all of its contents!). I tried solutions from possible duplicate questions without any success.
I'd need something simple like:
*
!That/Very/Folder/*
But this is not working
A:
I want to ignore everything
add the folder to the gitignore
except a specific subfolder (and all of its contents!).
force folder to be added to the repository
git add -f folder
EDIT:
I use this solution, for example, when I need to keep a log folder but not its content; generally, when I expect the content of the folder never to be added. Usually I just add a path/to/folder/.gitkeep file with the -f option.
| Q: Gitignore everything except a subfolder (with every content)? I want to ignore everything, except a specific subfolder (and all of its contents!). I tried solutions from possible duplicate questions without any success.
I'd need something simple like:
*
!That/Very/Folder/*
But this is not working
A:
I want to ignore everything
add the folder to the gitignore
except a specific subfolder (and all of its contents!).
force folder to be added to the repository
git add -f folder
EDIT:
I use this solution, for example, when I need to keep a log folder but not its content; generally, when I expect the content of the folder never to be added. Usually I just add a path/to/folder/.gitkeep file with the -f option.
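A minimal sketch of that .gitkeep workflow (the logs folder name is just an example):
mkdir -p logs
touch logs/.gitkeep
git add -f logs/.gitkeep
git commit -m "keep empty logs folder"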
A: Your .gitignore almost works but it doesn't for a simple reason: the first rule (*) tells Git to ignore every file and directory in the root of the repository. Git honors it and ignores everything, including the That directory and its content. The "unignore" rules that follow do not match anything inside the That subdirectory because the That directory is ignored together with its content, and so they have no effect.
In order to tell Git to not ignore files and directories in a deeply nested sub-directory you have to write ignore and unignore rules to let it reach the enclosing sub-directory first and then add the rules you want.
Your .gitignore file should look like this:
### Ignore everything ###
*
# But do not ignore "That" because we need something from its internals...
!That/
# ... but ignore (almost all) the content of "That"...
That/*
# ... however, do not ignore "That/Very" because we need to dig more into it
!That/Very/
# ... but we don't care about most of the content of "That/Very"
That/Very/*
# ... except for "That/Very/Folder" we care
!That/Very/Folder/
# ... and its content
!That/Very/Folder/*
A: *
!*/
!That/Very/Folder/**
!Also/This/Another/Folder/**
Ignore everything, allow subfolders (!), then allow specific folder contents (with unlimited subfolders within).
Credits to @Jepessen for the middle piece that makes it work.
| stackoverflow | {
"language": "en",
"length": 343,
"provenance": "stackexchange_0000F.jsonl.gz:873270",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568184"
} |
d73b762bd65ef3583f6eae6d8646ab57e07de6b9 | Stackoverflow Stackexchange
Q: Arrange path order of line plot in >4.x Plotly I need to plot a path that does not strictly go from left to right but crosses itself on the y-axis; however, since I upgraded to plotly 4.7 I can no longer do this. It was no problem in e.g. 3.6
Does anyone know, how to tell plotly how to order the path?
library(dplyr)
library(plotly) # > 4.x
data.frame(x = c(1:5,5:1),y = c(1:10)) %>%
arrange(y) %>%
plot_ly(x = ~x,y = ~y) %>% add_lines()
if you look at the data.frame it should follow the red path:
data.frame(x = c(1:5,5:1),y = c(1:10)) %>% arrange(y)
x y
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 5 6
7 4 7
8 3 8
9 2 9
10 1 10
A: You can set the mode in plotly:
data.frame(x = c(1:5,5:1),y = c(1:10)) %>%
arrange(y) %>%
plot_ly(x = ~x,y = ~y, mode = 'lines+markers')
Graph would be:
Or you can use the following base-R solution:
df <- data.frame(x = c(1:5,5:1),y = c(1:10))
with(df, plot(x,y))
with(df, lines(x,y))
This will give you following plot:
| Q: Arrange path order of line plot in >4.x Plotly I need to plot a path that does not strictly go from left to right but crosses itself on the y-axis; however, since I upgraded to plotly 4.7 I can no longer do this. It was no problem in e.g. 3.6
Does anyone know, how to tell plotly how to order the path?
library(dplyr)
library(plotly) # > 4.x
data.frame(x = c(1:5,5:1),y = c(1:10)) %>%
arrange(y) %>%
plot_ly(x = ~x,y = ~y) %>% add_lines()
if you look at the data.frame it should follow the red path:
data.frame(x = c(1:5,5:1),y = c(1:10)) %>% arrange(y)
x y
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 5 6
7 4 7
8 3 8
9 2 9
10 1 10
A: You can set the mode in plotly:
data.frame(x = c(1:5,5:1),y = c(1:10)) %>%
arrange(y) %>%
plot_ly(x = ~x,y = ~y, mode = 'lines+markers')
Graph would be:
Or you can use the following base-R solution:
df <- data.frame(x = c(1:5,5:1),y = c(1:10))
with(df, plot(x,y))
with(df, lines(x,y))
This will give you following plot:
| stackoverflow | {
"language": "en",
"length": 185,
"provenance": "stackexchange_0000F.jsonl.gz:873352",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568441"
} |
1ee28c130a5f50db2eb1ab7ab1c9ed946033f66a | Stackoverflow Stackexchange
Q: Upgrade existing custom angular Filter using upgrade module I am using Angular's upgrade module to create a hybrid app where AngularJS and Angular 2 can co-exist. I have a situation here
where I need an existing custom filter to be used for a component. Does the upgrade module support upgrading custom filters? If so, please advise how to do that.
A: Unfortunately upgrade module doesn't support upgrading filters to Pipes. But Pipes are very similar to filters and are really easy to upgrade manually.
If you need to have co-existing filter & Pipe I suggest to extract all logic & transforms to simple TypeScript / JavaScript:
export class PipeUtils {
static myFilterTransform(value, ...args) {
// return transformed value
}
}
AngularJS filter:
angular.module('app', [])
.filter('myFilter', () => PipeUtils.myFilterTransform)
Angular Pipe:
import { Pipe, PipeTransform } from '@angular/core'

@Pipe({ name: 'myFilter' })
export class MyPipe implements PipeTransform {
  transform(value, ...args) {
    return PipeUtils.myFilterTransform(value, ...args)
  }
}
| Q: Upgrade existing custom angular Filter using upgrade module I am using Angular's upgrade module to create a hybrid app where AngularJS and Angular 2 can co-exist. I have a situation here
where I need an existing custom filter to be used for a component. Does the upgrade module support upgrading custom filters? If so, please advise how to do that.
A: Unfortunately upgrade module doesn't support upgrading filters to Pipes. But Pipes are very similar to filters and are really easy to upgrade manually.
If you need to have co-existing filter & Pipe I suggest to extract all logic & transforms to simple TypeScript / JavaScript:
export class PipeUtils {
static myFilterTransform(value, ...args) {
// return transformed value
}
}
AngularJS filter:
angular.module('app', [])
.filter('myFilter', () => PipeUtils.myFilterTransform)
Angular Pipe:
import { Pipe, PipeTransform } from '@angular/core'

@Pipe({ name: 'myFilter' })
export class MyPipe implements PipeTransform {
  transform(value, ...args) {
    return PipeUtils.myFilterTransform(value, ...args)
  }
}
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:873356",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568452"
} |
24b2868d4c282d513be4f26a722837a914372b58 | Stackoverflow Stackexchange
Q: IntelliJ: Display .java extension in Packages View I am pretty new to IntelliJ and I can't find an option to display all file extension in the packages view (in my case .java).
As you can see on the sceenshot it just says "Main" or "Controller" on the left package explorer view.
Is there any option to make it display Main.Java and Controller.java (like in the editor view on the right side)?
Thanks a lot!
A: File extensions cannot be shown in the package view (or any other project view). Instead, you can try showing file extensions on the editor tabs. To do this go to File -> Settings -> Editor -> General -> Editor Tabs.
There, select any tab placement other than None and tick "Show file extension" (version IntelliJ IDEA 2020.1).
| Q: IntelliJ: Display .java extension in Packages View I am pretty new to IntelliJ and I can't find an option to display all file extension in the packages view (in my case .java).
As you can see on the sceenshot it just says "Main" or "Controller" on the left package explorer view.
Is there any option to make it display Main.Java and Controller.java (like in the editor view on the right side)?
Thanks a lot!
A: File extensions cannot be shown in the package view (or any other project view). Instead, you can try showing file extensions on the editor tabs. To do this go to File -> Settings -> Editor -> General -> Editor Tabs.
There, select any tab placement other than None and tick "Show file extension" (version IntelliJ IDEA 2020.1).
A: An interface is written in a file with a .java extension, with the name of the interface matching the name of the file.
https://www.tutorialspoint.com/java/java_interfaces.htm
A: Switching "Packages" view to "Project Files" will show extensions:
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:873366",
"question_score": "35",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568488"
} |
9dbc92245ad1e7ad1ea47ce21b8bcd7e258536cb | Stackoverflow Stackexchange
Q: Rotate text only in specific Excel row I'd like to rotate headers in an Excel file using Microsoft.Office.Interop. To achieve this, I'm using the following code:
worksheet.Range["A1:" + worksheet.UsedRange.Columns.Count + "1"].Style.Orientation
= Excel.XlOrientation.xlUpwards;
The result looks like this:
As you can see, every cell gets rotated although I'm only specifying the first row. However, I just want the headers to be rotated:
I even tried it with a for loop for every column:
for (int counter = 1; counter <= worksheet.UsedRange.Columns.Count; counter++)
worksheet.Range[GetExcelColumnName(counter) + "1"].Style.Orientation
= Excel.XlOrientation.xlUpwards;
But I get the same result. What should I do to only change the orientation of the headers?
(Method GetExcelColumnName)
A: Just convert the entire row 1.
worksheet.Range["1:1"].Style.Orientation = Excel.XlOrientation.xlUpwards;
worksheet.Rows["1"].Style.Orientation = Excel.XlOrientation.xlUpwards;
fwiw, in VBA this might be best handled with application.intersect of rows(1) and the .usedrange. From your code it looks like that would be,
Excel.Intersect(worksheet.Range["1:1"], worksheet.UsedRange).Style.Orientation = Excel.XlOrientation.xlUpwards;
/* just the cells, not the style */
Excel.Intersect(worksheet.Range["1:1"], worksheet.UsedRange).Cells.Orientation = Excel.XlOrientation.xlUpwards;
| Q: Rotate text only in specific Excel row I'd like to rotate headers in an Excel file using Microsoft.Office.Interop. To achieve this, I'm using the following code:
worksheet.Range["A1:" + worksheet.UsedRange.Columns.Count + "1"].Style.Orientation
= Excel.XlOrientation.xlUpwards;
The result looks like this:
As you can see, every cell gets rotated although I'm only specifying the first row. However, I just want the headers to be rotated:
I even tried it with a for loop for every column:
for (int counter = 1; counter <= worksheet.UsedRange.Columns.Count; counter++)
worksheet.Range[GetExcelColumnName(counter) + "1"].Style.Orientation
= Excel.XlOrientation.xlUpwards;
But I get the same result. What should I do to only change the orientation of the headers?
(Method GetExcelColumnName)
A: Just convert the entire row 1.
worksheet.Range["1:1"].Style.Orientation = Excel.XlOrientation.xlUpwards;
worksheet.Rows["1"].Style.Orientation = Excel.XlOrientation.xlUpwards;
fwiw, in VBA this might be best handled with application.intersect of rows(1) and the .usedrange. From your code it looks like that would be,
Excel.Intersect(worksheet.Range["1:1"], worksheet.UsedRange).Style.Orientation = Excel.XlOrientation.xlUpwards;
/* just the cells, not the style */
Excel.Intersect(worksheet.Range["1:1"], worksheet.UsedRange).Cells.Orientation = Excel.XlOrientation.xlUpwards;
A: What worked for me was:
ws.SelectedRange[1, 1, 1, 15].Style.TextRotation = 180;
TextRotation for vertical text: 90 or 180 (upwards, downwards)
| stackoverflow | {
"language": "en",
"length": 182,
"provenance": "stackexchange_0000F.jsonl.gz:873405",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568593"
} |
a36163ac0be481a93bc16e779e0e570d7ed9137c | Stackoverflow Stackexchange
Q: Call await in a callback inside an async function Here is some code (it's an over-simplified example, I know it is dumb):
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
async function test() {
[1, 2, 3].map(() => {
console.log('test');
await sleep(1000);
});
}
test();
The objective is to:
*
*display test then wait one second
*then display test then wait one second
*then display test then wait one second
But running this code results in a failure:
await is a reserved word
I know I can do fix it by using a for loop:
async function test() {
for(let i = 0; i < 3; i++) {
console.log('test');
await sleep(1000);
}
}
But is there a way to do it in a more "functional" way. I mean, can I avoid the for loop and await inside a map?
A: const result = await [1, 2, 3].reduce(async function(prom, v){
const result= await prom;
await sleep(1000);
result.push(v);
return result;
}, Promise.resolve([]));
You could use reduce to create a promise chain. However, in your simplified case:
(a=b=>(b==2||(console.log("test"),setTimeout(a,1000,b+1))))(0);
| Q: Call await in a callback inside an async function Here is some code (it's an over-simplified example, I know it is dumb):
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
async function test() {
[1, 2, 3].map(() => {
console.log('test');
await sleep(1000);
});
}
test();
The objective is to:
*
*display test then wait one second
*then display test then wait one second
*then display test then wait one second
But running this code results in a failure:
await is a reserved word
I know I can do fix it by using a for loop:
async function test() {
for(let i = 0; i < 3; i++) {
console.log('test');
await sleep(1000);
}
}
But is there a way to do it in a more "functional" way. I mean, can I avoid the for loop and await inside a map?
A: const result = await [1, 2, 3].reduce(async function(prom, v){
const result= await prom;
await sleep(1000);
result.push(v);
return result;
}, Promise.resolve([]));
You could use reduce to create a promise chain. However, in your simplified case:
(a=b=>(b==2||(console.log("test"),setTimeout(a,1000,b+1))))(0);
A: If a library like bluebird is an option then you could write:
'use strict'
const Promise = require('bluebird')
async function test() {
return Promise.mapSeries([1, 2, 3], async (idx) => {
console.log('test: ' + idx);
await Promise.delay(1000)
});
}
test();
| stackoverflow | {
"language": "en",
"length": 217,
"provenance": "stackexchange_0000F.jsonl.gz:873408",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568597"
} |
0e49511c6f32979b824a45a9152c5f6c1210bf92 | Stackoverflow Stackexchange
Q: Concatenate two fields into one field in graphql query Let say I have the following schema
type Human {
Title: String
Name: String
}
Now the query
{
Human {
Title
Name
}
}
Returns
{
"data" {
Title: "Mr.",
Name: "Nielsen"
}
}
How do I get combined/concatenated string "Mr. Nielsen" as a result?
A: You would add a new field to Human, something like FullName: String, and the resolver for that field would be along the lines of:
(parent) => `${parent.Title} ${parent.Name}`;
| Q: Concatenate two fields into one field in graphql query Let say I have the following schema
type Human {
Title: String
Name: String
}
Now the query
{
Human {
Title
Name
}
}
Returns
{
"data" {
Title: "Mr.",
Name: "Nielsen"
}
}
How do I get combined/concatenated string "Mr. Nielsen" as a result?
A: You would add a new field to Human, something like FullName: String, and the resolver for that field would be along the lines of:
(parent) => `${parent.Title} ${parent.Name}`;
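A minimal sketch of how this could look with a graphql-tools style schema and resolver map (the names simply mirror the question; adapt it to whatever server library you use):
type Human {
  Title: String
  Name: String
  FullName: String
}

const resolvers = {
  Human: {
    // computed field: concatenates the two stored fields
    FullName: (parent) => `${parent.Title} ${parent.Name}`,
  },
};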
| stackoverflow | {
"language": "en",
"length": 85,
"provenance": "stackexchange_0000F.jsonl.gz:873424",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568651"
} |
573f062f9118d9406b889fad65b9e8e5608c51da | Stackoverflow Stackexchange
Q: How to replace the second occurrence of a string in javascript I am trying to replace the second occurrence of a string in javascript. I'm using a regex to detect all the matches of the character that I'm looking for. The alert returns the same initial text.
text = 'BLABLA';
//var count = (texte.match(/B/g) || []).length;
var t=0;
texte.replace(/B/g, function (match) {
t++;
return (t === 2) ? "Z" : match;
});
alert(text);
https://js.do/code/157264
A: It's because you never use the result returned by the replace function.
Here's the corrected code:
const text = 'BLABLA'
let t = 0
const result = text.replace(/B/g, match => ++t === 2 ? 'Z' : match)
console.log(result)
| Q: How to replace the second occurrence of a string in javascript I am trying to replace the second occurrence of a string in javascript. I'm using a regex to detect all the matches of the character that I'm looking for. The alert returns the same initial text.
text = 'BLABLA';
//var count = (texte.match(/B/g) || []).length;
var t=0;
texte.replace(/B/g, function (match) {
t++;
return (t === 2) ? "Z" : match;
});
alert(text);
https://js.do/code/157264
A: It's because you never use the result returned by the replace function.
Here's the corrected code:
const text = 'BLABLA'
let t = 0
const result = text.replace(/B/g, match => ++t === 2 ? 'Z' : match)
console.log(result)
| stackoverflow | {
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:873426",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568657"
} |
8eff1e38f8a24cff50e8dc2f278cc362a161f2af | Stackoverflow Stackexchange
Q: nvd3: control guideline programmatically I'm working on a project where I want to represent different data with the same date range on x-axis using nvd3 (angular).
My idea is to synchronize the guideline among different charts: on mousemove, or using a slider, I want to programmatically show a guideline with a tooltip in each chart (e.g. like Heroku does).
Can anyone give a hint towards a solution, or say whether it's possible to control the interactive guideline programmatically?
| Q: nvd3: control guideline programmatically I'm working on a project where I want to represent different data with the same date range on x-axis using nvd3 (angular).
My idea is to synchronize the guideline among different charts: on mousemove, or using a slider, I want to programmatically show a guideline with a tooltip in each chart (e.g. like Heroku does).
Can anyone give a hint towards a solution, or say whether it's possible to control the interactive guideline programmatically?
| stackoverflow | {
"language": "en",
"length": 75,
"provenance": "stackexchange_0000F.jsonl.gz:873431",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568675"
} |
545be35ed02cf7ea71d5e6b0c15247e5e240a2d6 | Stackoverflow Stackexchange
Q: Environment variables set up in Windows for pyspark I have Spark installed in my laptop. And I am able to execute spark-shell command and open the scala shell as shown below:
C:\Spark1_6\spark-1.6.0-bin-hadoop2.6\bin>spark-shell
scala>
But when I am trying to execute pyspark command:
C:\Spark1_6\spark-1.6.0-bin-hadoop2.6\bin>pyspark
I am getting the below error message:
'python' is not recognized as an internal or external command
I did set up the environment User 'Path' variable manually.
By appending with
";C:\Python27"
I rebooted the laptop and still get the same error.
Can anyone please help me how to fix this ? Am I not correctly updating the environment variable?
Versions: Spark: 1.6.2 Windows: 8.1
A: The Spark documentation is available. Don't be afraid, read it.
http://spark.apache.org/docs/1.6.0/configuration.html#environment-variables
Certain Spark settings can be configured through environment variables, which are read from ... conf\spark-env.cmd on Windows
...
PYSPARK_PYTHON Python binary executable to use for
PySpark in both driver and workers (default is python2.7 if available, otherwise python).
PYSPARK_DRIVER_PYTHON Python binary executable to use
for PySpark in driver only (default is PYSPARK_PYTHON).
Try something like this:
set PYSPARK_PYTHON=C:\Python27\python.exe
pyspark
| Q: Environment variables set up in Windows for pyspark I have Spark installed in my laptop. And I am able to execute spark-shell command and open the scala shell as shown below:
C:\Spark1_6\spark-1.6.0-bin-hadoop2.6\bin>spark-shell
scala>
But when I am trying to execute pyspark command:
C:\Spark1_6\spark-1.6.0-bin-hadoop2.6\bin>pyspark
I am getting the below error message:
'python' is not recognized as an internal or external command
I did set up the environment User 'Path' variable manually.
By appending with
";C:\Python27"
I rebooted the laptop and still get the same error.
Can anyone please help me how to fix this ? Am I not correctly updating the environment variable?
Versions: Spark: 1.6.2 Windows: 8.1
A: The Spark documentation is available. Don't be afraid, read it.
http://spark.apache.org/docs/1.6.0/configuration.html#environment-variables
Certain Spark settings can be configured through environment variables, which are read from ... conf\spark-env.cmd on Windows
...
PYSPARK_PYTHON Python binary executable to use for
PySpark in both driver and workers (default is python2.7 if available, otherwise python).
PYSPARK_DRIVER_PYTHON Python binary executable to use
for PySpark in driver only (default is PYSPARK_PYTHON).
Try something like this:
set PYSPARK_PYTHON=C:\Python27\python.exe
pyspark
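To make the setting stick instead of typing it into every new console, a minimal sketch of what conf\spark-env.cmd could contain (the Python path is an assumption based on the question; adjust it to your install):
REM conf\spark-env.cmd under the Spark home directory
set PYSPARK_PYTHON=C:\Python27\python.exe
set PYSPARK_DRIVER_PYTHON=C:\Python27\python.exe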
| stackoverflow | {
"language": "en",
"length": 179,
"provenance": "stackexchange_0000F.jsonl.gz:873467",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568769"
} |
2edf3d89442042a0a2f18959329c91b05831497f | Stackoverflow Stackexchange
Q: REST API for delete by specific field of the entity I'm developing a DELETE API for my service.
I have read some docs and articles and didn't find a good description on how to design API for DELETE method if we need to delete an entity not by id, but by another field.
For example, I have an entity called "Person". It has "id" and "group".
I want to have two delete methods.
*
*Delete user by ID;
*Delete users by group;
It's not a problem to delete by ID:
@DeleteMapping("/persons/{personId}")
However, to delete via another property, the best solution that I found was using request params. e.g.
DELETE /persons/?group=groupValue
Using query parameters like this works, but I wonder if there is a more convenient way, maybe a clearer and more explicit one, to solve this task?
| Q: REST API for delete by specific field of the entity I'm developing a DELETE API for my service.
I have read some docs and articles and didn't find a good description on how to design API for DELETE method if we need to delete an entity not by id, but by another field.
For example, I have an entity called "Person". It has "id" and "group".
I want to have two delete methods.
*
*Delete user by ID;
*Delete users by group;
It's not a problem to delete by ID:
@DeleteMapping("/persons/{personId}")
However, to delete via another property, the best solution that I found was using request params. e.g.
DELETE /persons/?group=groupValue
Using query parameters like this works, but I wonder if there is a more convenient way, maybe a clearer and more explicit one, to solve this task?
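For illustration, a minimal sketch of how the query-parameter variant could be mapped in Spring (the repository method deleteByGroup is an assumption, not from the question):
@DeleteMapping("/persons")
public ResponseEntity<Void> deleteByGroup(@RequestParam("group") String group) {
    // delete every person belonging to the given group
    personRepository.deleteByGroup(group);
    return ResponseEntity.noContent().build();
}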
| stackoverflow | {
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:873471",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568788"
} |
7aefda2669b921dd6c0dc20701ef3707a140d532 | Stackoverflow Stackexchange
Q: Hide Ionic TabBar on specific subpages (IONIC 3) I want to hide my tabbar on multiple specific pages. My main focus is to hide it on my Login page, Register page, and a comment page. I have tried tabsHideOnSubPages: true, but when i do this my UserProfile page
(which is a subpage) hides the tabbar. The tabbar must also be visible on the UserProfile page, but again not on the previously mentioned subpages (login, register, etc.).
I am currently using Ionic Framework : ionic-angular 3.2.0
Does anyone know how i can fix this?
A: You can try this.
Just put tabsHideOnSubPages in your config like this:
@NgModule({
declarations: [ MyApp ],
imports: [
IonicModule.forRoot(MyApp, {
tabsHideOnSubPages: true,
}, {}
)],
bootstrap: [IonicApp],
entryComponents: [ MyApp ],
providers: []
})
| Q: Hide Ionic TabBar on specific subpages (IONIC 3) I want to hide my tabbar on multiple specific pages. My main focus is to hide it on my Login page, Register page, and a comment page. I have tried tabsHideOnSubPages: true, but when i do this my UserProfile page
(which is a subpage) hides the tabbar. The tabbar must also be visible on the UserProfile page, but again not on the previously mentioned subpages (login, register, etc.).
I am currently using Ionic Framework : ionic-angular 3.2.0
Does anyone know how i can fix this?
A: You can try this.
Just put tabsHideOnSubPages in your config like this:
@NgModule({
declarations: [ MyApp ],
imports: [
IonicModule.forRoot(MyApp, {
tabsHideOnSubPages: true,
}, {}
)],
bootstrap: [IonicApp],
entryComponents: [ MyApp ],
providers: []
})
A: I can give you a quick hotfix for that.
Copy this Code into your .ts page file.
The function will execute when the page is loaded.
If you want to hide the tabbar, use this line of code:
tabs[key].style.display = 'none';
If you want to show it, use the same code but simply change 'none' to 'flex'.
tabs[key].style.display = 'flex';
ngAfterViewInit() is an Angular lifecycle hook, which basically means it executes when the page is loaded.
ngAfterViewInit()
Full code:
ngAfterViewInit() {
let tabs = document.querySelectorAll('.show-tabbar');
if (tabs !== null) {
Object.keys(tabs).map((key) => {
tabs[key].style.display = 'none';
});
}
}
You can also use this code to show the tabbar again if you leave the page.
ionViewWillLeave() {
let tabs = document.querySelectorAll('.show-tabbar');
if (tabs !== null) {
Object.keys(tabs).map((key) => {
tabs[key].style.display = 'flex';
});
}
}
Hope this helped you.
| stackoverflow | {
"language": "en",
"length": 271,
"provenance": "stackexchange_0000F.jsonl.gz:873483",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568816"
} |
4284399000ea2a9ea5190166dd5f60da506d08f1 | Stackoverflow Stackexchange
Q: Graphics.MeasureString() returns size in Point or Pixel? I use Graphics.MeasureString to calculate size of a text.
E.g. 10 pixels = 7.5 points.
My question:
Is the size returned by Graphics.MeasureString a point value or a pixel value?
A: From the MSDN page:
This method returns a SizeF structure that represents the size, in the
units specified by the PageUnit property, of the string specified by
the text parameter as drawn with the font parameter.
The PageUnit is of type GraphicsUnit which is an enum with the following
values
Member name: Description
Display: Specifies the unit of measure of the display device. Typically pixels for video displays, and 1/100 inch for printers.
Document: Specifies the document unit (1/300 inch) as the unit of measure.
Inch: Specifies the inch as the unit of measure.
Millimeter: Specifies the millimeter as the unit of measure.
Pixel: Specifies a device pixel as the unit of measure.
Point: Specifies a printer's point (1/72 inch) as the unit of measure.
World: Specifies the world coordinate system unit as the unit of measure.
Apologies for the terrible formatting!
| Q: Graphics.MeasureString() returns size in Point or Pixel? I use Graphics.MeasureString to calculate size of a text.
E.g. 10 pixels = 7.5 points.
My question:
Is the size returned by Graphics.MeasureString a point value or a pixel value?
A: From the MSDN page:
This method returns a SizeF structure that represents the size, in the
units specified by the PageUnit property, of the string specified by
the text parameter as drawn with the font parameter.
The PageUnit is of type GraphicsUnit which is an enum with the following
values
Member name: Description
Display: Specifies the unit of measure of the display device. Typically pixels for video displays, and 1/100 inch for printers.
Document: Specifies the document unit (1/300 inch) as the unit of measure.
Inch: Specifies the inch as the unit of measure.
Millimeter: Specifies the millimeter as the unit of measure.
Pixel: Specifies a device pixel as the unit of measure.
Point: Specifies a printer's point (1/72 inch) as the unit of measure.
World: Specifies the world coordinate system unit as the unit of measure.
Apologies for the terrible formatting!
A: You can use Graphics.PageUnit in order to set the return type of measurement. So it can be either Pixel or Point, it is your choice
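A short sketch of that, assuming e.Graphics comes from a Paint event handler and font is an existing System.Drawing.Font:
// measure in pixels
e.Graphics.PageUnit = GraphicsUnit.Pixel;
SizeF sizeInPixels = e.Graphics.MeasureString("Hello", font);
// measure the same text in printer's points
e.Graphics.PageUnit = GraphicsUnit.Point;
SizeF sizeInPoints = e.Graphics.MeasureString("Hello", font);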
| stackoverflow | {
"language": "en",
"length": 205,
"provenance": "stackexchange_0000F.jsonl.gz:873497",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568853"
} |
4c425f3c491c1df895af5bdee00ef95beb9d1963 | Stackoverflow Stackexchange
Q: The config file (.\ionic.config.json) is not valid JSON format Every time I run this command: ionic cordova build android --release
I am using cordova -v
7.0.1
I get this error:
The config file (.\ionic.config.json) is not valid JSON format.
Please fix any JSON errors in the file.
Here is my file:
{
"app_id": "12345",
"name": "myApp",
"type": "ionic1"
}
I'm not sure what's going on here.
This command works fine: cordova build android --release
I am not sure what's going on with ionic.
A: Go to this path: C:/Users/{your_username}/.ionic
Find the file ionic.config and delete it.
It will be auto-regenerated by the CLI.
| Q: The config file (.\ionic.config.json) is not valid JSON format Every time I run this command: ionic cordova build android --release
I am using cordova -v
7.0.1
I get this error:
The config file (.\ionic.config.json) is not valid JSON format.
Please fix any JSON errors in the file.
Here is my file:
{
"app_id": "12345",
"name": "myApp",
"type": "ionic1"
}
I'm not sure what's going on here.
This command works fine: cordova build android --release
I am not sure what's going on with ionic.
A: Go to this path: C:/Users/{your_username}/.ionic
Find the file ionic.config and delete it.
It will be auto-regenerated by the CLI.
A: It wasn't my ionic config file after all. It was the package.json and bower.json files. They did not have the dependencies correctly in place, and the app name was in uppercase. After I fixed my dependencies, I got the command to work.
A: Run:
npm uninstall -g ionic
Remove .ionic from user home folder
rm -Rf ~/.ionic
Then, reinstall ionic:
npm i -g ionic
A: Just open the ionic.config.json file in your IDE, delete all the content, and run ionic serve
| stackoverflow | {
"language": "en",
"length": 183,
"provenance": "stackexchange_0000F.jsonl.gz:873537",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44568997"
} |
6a8d52ec3f34dea8715e1e042070b0acfe8770a7 | Stackoverflow Stackexchange
Q: Change color of flask.flash messages Is it possible to change the color of flask.flash messages? The message is currently written in black and in very small characters.
A: Flask's flash() takes an optional argument called category; use this to style the message in your template as you like.
flash('This is an error message', 'error')
And in your HTML, remember to add the with_categories option:
{% with messages = get_flashed_messages(with_categories=true) %}
{% for category, message in messages %}
<div class="{{ category }}">{{ message }}</div>
{% endfor %}
{% endwith %}
Also please include this in your stylesheet
.error {
color: red
}
| Q: Change color of flask.flash messages Is it possible to change the color of flask.flash messages? The message is currently written in black and in very small characters.
A: Flask's flash() takes an optional argument called category; use this to style the message in your template as you like.
flash('This is an error message', 'error')
And in your HTML, remember to add the with_categories option:
{% with messages = get_flashed_messages(with_categories=true) %}
{% for category, message in messages %}
<div class="{{ category }}">{{ message }}</div>
{% endfor %}
{% endwith %}
Also please include this in your stylesheet
.error {
color: red
}
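If you happen to use Bootstrap, a common variation (not required by Flask itself) is to map the category straight onto an alert class; Bootstrap calls the red style "danger", so the error category is remapped here:
{% with messages = get_flashed_messages(with_categories=true) %}
{% for category, message in messages %}
<div class="alert alert-{{ 'danger' if category == 'error' else category }}">{{ message }}</div>
{% endfor %}
{% endwith %}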
| stackoverflow | {
"language": "en",
"length": 97,
"provenance": "stackexchange_0000F.jsonl.gz:873551",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569040"
} |
34568b9f13226d0bdf866790c7727d177c028cf6 | Stackoverflow Stackexchange
Q: MSBuild error when building the solution from VSTS build I am getting the following error when I am building the solution using continuous integration
[error]Apps\App.Service\App.Service.csproj(203,11):
Error MSB4226: The imported project
"$(VSToolsPath)\Web\Microsoft.Web.Publishing.targets" was not found.
Also, tried to find
"$(VSToolsPath)\Web\Microsoft.Web.Publishing.targets" in the fallback
search path(s) for $(VSToolsPath) - "C:\Program Files
(x86)\MSBuild\Microsoft\VisualStudio\v15.0" . These search paths are
defined in "C:\Program Files (x86)\Microsoft Visual
Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe.Config". Confirm
that the path in the declaration is correct, and that the
file exists on disk in one of the search paths.
When I check my .csproj, I have the following at the top: <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" /> so can someone tell me what the issue is?
A: I had the same issue when trying to setup gitlab CI on my project.
This answer https://stackoverflow.com/a/20095260/5873231 solved my problem. I leave it here, maybe it's useful for someone.
| Q: MSBuild error when building the solution from VSTS build I am getting the following error when I am building the solution using continuous integration
[error]Apps\App.Service\App.Service.csproj(203,11):
Error MSB4226: The imported project
"$(VSToolsPath)\Web\Microsoft.Web.Publishing.targets" was not found.
Also, tried to find
"$(VSToolsPath)\Web\Microsoft.Web.Publishing.targets" in the fallback
search path(s) for $(VSToolsPath) - "C:\Program Files
(x86)\MSBuild\Microsoft\VisualStudio\v15.0" . These search paths are
defined in "C:\Program Files (x86)\Microsoft Visual
Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe.Config". Confirm
that the path in the declaration is correct, and that the
file exists on disk in one of the search paths.
When I check my .csproj, I have the following at the top: <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" /> so can someone tell me what the issue is?
A: I had the same issue when trying to setup gitlab CI on my project.
This answer https://stackoverflow.com/a/20095260/5873231 solved my problem. I leave it here, maybe it's useful for someone.
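For reference, answers in that family usually come down to telling MSBuild which Visual Studio tools version to resolve $(VSToolsPath) against, e.g. by adding an MSBuild argument to the build step (the exact version number depends on the tools installed on the build agent, so treat this as an assumption to adapt):
/p:VisualStudioVersion=15.0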
| stackoverflow | {
"language": "en",
"length": 140,
"provenance": "stackexchange_0000F.jsonl.gz:873558",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569064"
} |
a29936179de070a4cffb8ebde7a756aa518ffcf2 | Stackoverflow Stackexchange
Q: Java 8 – Create Instant from LocalDateTime with TimeZone I have a date stored in the DB in string format ddMMyyyy and hh:mm and the TimeZone.
I want to create an Instant based on that information, but I don't know how to do it.
something like
LocalDateTime dateTime = LocalDateTime.of(2017, Month.JUNE, 1, 13, 39);
Instant instant = dateTime.toInstant(TimeZone.getTimeZone("ECT"));
A: You can first create a ZonedDateTime with that time zone, and then call toInstant:
LocalDateTime dateTime = LocalDateTime.of(2017, Month.JUNE, 15, 13, 39);
Instant instant = dateTime.atZone(ZoneId.of("Europe/Paris")).toInstant();
System.out.println(instant); // 2017-06-15T11:39:00Z
I also switched to using the full time zone name (per Basil's advice), since it is less ambiguous.
| Q: Java 8 – Create Instant from LocalDateTime with TimeZone I have a date stored in the DB in string format ddMMyyyy and hh:mm and the TimeZone.
I want to create an Instant based on that information, but I don't know how to do it.
something like
LocalDateTime dateTime = LocalDateTime.of(2017, Month.JUNE, 1, 13, 39);
Instant instant = dateTime.toInstant(TimeZone.getTimeZone("ECT"));
A: You can first create a ZonedDateTime with that time zone, and then call toInstant:
LocalDateTime dateTime = LocalDateTime.of(2017, Month.JUNE, 15, 13, 39);
Instant instant = dateTime.atZone(ZoneId.of("Europe/Paris")).toInstant();
System.out.println(instant); // 2017-06-15T11:39:00Z
I also switched to using the full time zone name (per Basil's advice), since it is less ambiguous.
A: Forget the old TimeZone class. Use ZoneId, because it's properly thread-safe and you can just use a final static field to store the zone.
LocalDateTime dateTime = LocalDateTime.of(2017, Month.JUNE, 1, 13, 39);
ZonedDateTime.of(dateTime, ZoneId.of("ECT", ZoneId.SHORT_IDS)).toInstant();
A: I think the following code should work:
LocalDateTime time = LocalDateTime.of(2017, Month.JUNE, 15, 13, 39);
ZonedDateTime.of(time, TimeZone.getTimeZone("ZONE").toZoneId()).toInstant();
You just have to replace "ZONE" with the timezone you need.
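Since the question starts from strings in ddMMyyyy and hh:mm format, here is a sketch of the full path from those strings to an Instant; note the pattern uses HH (24-hour clock), and the zone id is an assumption, use whatever the DB actually stores:
DateTimeFormatter fmt = DateTimeFormatter.ofPattern("ddMMyyyy HH:mm");
LocalDateTime dateTime = LocalDateTime.parse("01062017 13:39", fmt);
Instant instant = dateTime.atZone(ZoneId.of("Europe/Paris")).toInstant();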
| stackoverflow | {
"language": "en",
"length": 171,
"provenance": "stackexchange_0000F.jsonl.gz:873601",
"question_score": "52",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569202"
} |
f071e49408418c33d2fc5e993851a86ea0fc5ee2 | Stackoverflow Stackexchange
Q: How to make orphan item in a wrapping flex grid not grow to fill the last row? I need a grid where I only specify the minimum width of the items.
Here is my attempt using flex-wrap and flex-basis: https://jsfiddle.net/2z9pgjfg/1/
HTML:
<div class="container">
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
</div>
CSS:
.container {
display: flex;
flex-wrap: wrap;
}
.item {
background-color: red;
border: 1px solid black;
flex: 1;
flex-basis: 150px;
}
.item:after {
content: "";
display: block;
padding-bottom: 75%;
}
I want any items in the last row to be the same size as all the others. Is there a way to achieve this without media queries?
A: Set flex grow to 0.
.container {
display: flex;
flex-wrap: wrap;
}
.item {
background-color: red;
border: 1px solid black;
flex: 0;
flex-basis: 150px;
}
.item:after {
content: "";
display: block;
padding-bottom: 75%;
}
<div class="container">
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
</div>
| Q: How to make orphan item in a wrapping flex grid not grow to fill the last row? I need a grid where I only specify the minimum width of the items.
Here is my attempt using flex-wrap and flex-basis: https://jsfiddle.net/2z9pgjfg/1/
HTML:
<div class="container">
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
</div>
CSS:
.container {
display: flex;
flex-wrap: wrap;
}
.item {
background-color: red;
border: 1px solid black;
flex: 1;
flex-basis: 150px;
}
.item:after {
content: "";
display: block;
padding-bottom: 75%;
}
I want any items in the last row to be the same size as all the others. Is there a way to achieve this without media queries?
A: Set flex grow to 0.
.container {
display: flex;
flex-wrap: wrap;
}
.item {
background-color: red;
border: 1px solid black;
flex: 0;
flex-basis: 150px;
}
.item:after {
content: "";
display: block;
padding-bottom: 75%;
}
<div class="container">
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
<div class="item"></div>
</div>
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:873656",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569363"
} |
f59f3f2f3df8210ddb3301bf00f72e1159c78f07 | Stackoverflow Stackexchange
Q: Get-Content Replace a full string So I have the following command:
(Get-Content $path).Replace($current, $new) | Set-Content $path
It works fine for changing a string, but let's say I have the following string in the file at $path : "This is a test. testing was done."
If i set $current to "test" and $new to "Blah". I'll get "This is a Blah. Blahing was done."
How do I make it so that it only changes "test" and not "testing" so my string would be: "This is a blah. testing was done."?
A: $path = "This is a test. testing was done."
$current = "\btest\b"
$new = "Blah"
$path -replace ($current, $new)
\b is a regular expression word boundary
| Q: Get-Content Replace a full string So I have the following command:
(Get-Content $path).Replace($current, $new) | Set-Content $path
It works fine for changing a string, but let's say I have the following string in the file at $path : "This is a test. testing was done."
If i set $current to "test" and $new to "Blah". I'll get "This is a Blah. Blahing was done."
How do I make it so that it only changes "test" and not "testing" so my string would be: "This is a blah. testing was done."?
A: $path = "This is a test. testing was done."
$current = "\btest\b"
$new = "Blah"
$path -replace ($current, $new)
\b is a regular expression word boundary
A: Instead of using the Replace() method, try using the -replace operator - this operator uses regular expressions as described in Get-Help about_Regular_Expressions (link is to MSDN).
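Putting the word-boundary pattern back into the original file-rewriting pipeline would look roughly like this (a sketch; the variables mirror the question):
$current = "\btest\b"
$new = "Blah"
(Get-Content $path) -replace $current, $new | Set-Content $path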
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:873660",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569373"
} |
8604bb3be14262d88d11b2b95c857a513d112877 | Stackoverflow Stackexchange
Q: Read a file from HDFS and assign the contents to string In Scala, how do I read a file in HDFS and assign the contents to a variable? I know how to read a file and I am able to print it. But if I try to assign the content to a string, it gives the output as Unit(). Below is the code I tried.
val dfs = org.apache.hadoop.fs.FileSystem.get(config);
val snapshot_file = "/path/to/file/test.txt"
val stream = dfs.open(new Path(snapshot_file))
def readLines = Stream.cons(stream.readLine, Stream.continually( stream.readLine))
readLines.takeWhile(_ != null).foreach(line => println(line))
The above code prints the output properly. But if I try to assign the output to a string, I am not getting the correct output.
val snapshot_id = readLines.takeWhile(_ != null).foreach(line => println(line))
snapshot_id: Unit = ()
what is the correct way to assign the contents to a variable ?
A: You need to use mkString. Since println returns Unit() which gets stored to your variable if you call println on you stream
val hdfs = org.apache.hadoop.fs.FileSystem.get(new java.net.URI("hdfs://namenode:port/"), new org.apache.hadoop.conf.Configuration())
val path = new org.apache.hadoop.fs.Path("/user/cloudera/file.txt")
val stream = hdfs.open(path)
def readLines = scala.io.Source.fromInputStream(stream)
val snapshot_id : String = readLines.getLines().mkString("\n")
| Q: Read a file from HDFS and assign the contents to string In Scala, how do I read a file in HDFS and assign the contents to a variable? I know how to read a file and I am able to print it. But if I try to assign the content to a string, it gives the output as Unit(). Below is the code I tried.
val dfs = org.apache.hadoop.fs.FileSystem.get(config);
val snapshot_file = "/path/to/file/test.txt"
val stream = dfs.open(new Path(snapshot_file))
def readLines = Stream.cons(stream.readLine, Stream.continually( stream.readLine))
readLines.takeWhile(_ != null).foreach(line => println(line))
The above code prints the output properly. But if I try to assign the output to a string, I am not getting the correct output.
val snapshot_id = readLines.takeWhile(_ != null).foreach(line => println(line))
snapshot_id: Unit = ()
what is the correct way to assign the contents to a variable ?
A: You need to use mkString. Since println returns Unit() which gets stored to your variable if you call println on you stream
val hdfs = org.apache.hadoop.fs.FileSystem.get(new java.net.URI("hdfs://namenode:port/"), new org.apache.hadoop.conf.Configuration())
val path = new org.apache.hadoop.fs.Path("/user/cloudera/file.txt")
val stream = hdfs.open(path)
def readLines = scala.io.Source.fromInputStream(stream)
val snapshot_id : String = readLines.getLines().mkString("\n")
A: I used org.apache.commons.io.IOUtils.toString to convert stream in to string
def getfileAsString( file: String): String = {
import org.apache.hadoop.fs.FileSystem
val config: Configuration = new Configuration();
config.set("fs.hdfs.impl", classOf[DistributedFileSystem].getName)
config.set("fs.file.impl", classOf[LocalFileSystem].getName)
val dfs = FileSystem.get(config)
val filePath: FSDataInputStream = dfs.open(new Path(file))
logInfo("file.available " + filePath.available)
val outputxmlAsString: String = org.apache.commons.io.IOUtils.toString(filePath, "UTF-8")
outputxmlAsString
}
| stackoverflow | {
"language": "en",
"length": 236,
"provenance": "stackexchange_0000F.jsonl.gz:873667",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569397"
} |
4b5041e97d41c3d609dbabadc9ce4e86e6faac52 | Stackoverflow Stackexchange
Q: Android requestSingleUpdate vs requestLocationUpdates battery consumption I have an application that logs the user location every minute. Till today I was using requestLocationUpdates():
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 60000, 0, gpsLocationListener)
and every minute (almost) I receive an update and log it in the onLocationChanged() method.
Today I found out that also requestSingleUpdate is available:
locationManager.requestSingleUpdate(LocationManager.GPS_PROVIDER, gpsLocationListener, null )
In this second case, once I receive the location in onLocationChanged(), I start a handler with postDelayed() and after one minute I run requestSingleUpdate() again. During this one-minute interval, when no location updates are requested, the GPS icon at the top right disappears.
My question is: can the second solution reduce the battery consumption? Thank you
| Q: Android requestSingleUpdate vs requestLocationUpdates battery consumption I have an application that logs the user location every minute. Till today I was using requestLocationUpdates():
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 60000, 0, gpsLocationListener)
and every minute (almost) I receive an update and log it in the onLocationChanged() method.
Today I found out that also requestSingleUpdate is available:
locationManager.requestSingleUpdate(LocationManager.GPS_PROVIDER, gpsLocationListener, null )
In this second case, once I receive the location in onLocationChanged(), I start a handler with postDelayed() and after one minute I run requestSingleUpdate() again. During this one-minute interval, when no location updates are requested, the GPS icon at the top right disappears.
My question is: can the second solution reduce the battery consumption? Thank you
| stackoverflow | {
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:873675",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569422"
} |
8625e14a5f8b07aeb234eff5b9298824795e60d4 | Stackoverflow Stackexchange
Q: Segmentation fault for git commit command (Windows) I have started to work with project, which I cloned from bitbucket.
I use as Git Bash, as SourceTree.
I changed one file and try to commit my change.
I can execute the command "add", but when I try to execute the command "commit" (git commit -m "for testing"), I get the following error:
Segmentation fault.
I got an error in SourceTree too.
But if I create my new folder & file, the commit happens successfully
How I can fix the problem situation?
Thanks in advance.
A: If you already have Git installed, you can get the latest development version via Git itself:
git clone https://github.com/git/git
but this didn't work for me. I uninstalled Git, downloaded it again, and then my problem was resolved.
| Q: Segmentation fault for git commit command (Windows) I have started to work with project, which I cloned from bitbucket.
I use as Git Bash, as SourceTree.
I changed one file and try to commit my change.
I can execute the command "add", but when I try to execute the command "commit" (git commit -m "for testing"), I get the following error:
Segmentation fault.
I got an error in SourceTree too.
But if I create my new folder & file, the commit happens successfully
How I can fix the problem situation?
Thanks in advance.
A: If you already have Git installed, you can get the latest development version via Git itself:
git clone https://github.com/git/git
but this didn't work for me. I uninstalled Git, downloaded it again, and then my problem was resolved.
A: I have resolved the problem.
The latest Git version (2.13.1) has the bug; it was released on 05.06.2017.
I installed previous version (2.12.2) and now all is OK.
A: Running git 2.15.1.windows.2 on Windows 10 x64 v1709
For me the problem was caused by a faulty index. To resolve I ran the following from the project directory:
rm .git/index
git reset
As you can see from the image in this link, I didn't lose any changes by performing a reset.
A: I had the same issue on some project; I could not check out a new or an existing branch.
I installed the latest version of git from the website and now my version is
> git --version
git version 2.13.1.windows.2
The issue seems fixed with this build. For now.
A: Use git reset SHA --hard
where SHA points to a valid commit before the error.
Changes made after this point are lost, but the repo is saved.
A:
For me the problem was caused by a faulty index.
With Git 2.36 (Q2 2022), you will see more details instead of a segfault.
Git now checks the return value from parse_tree_indirect() to turn segfaults into calls to die().
For clone/checkout, but can also be applied to commit.
See commit 8d2eaf6 (01 Mar 2022) by Glen Choo (chooglen).
(Merged by Junio C Hamano -- gitster -- in commit bde1e3e, 13 Mar 2022)
checkout, clone: die if tree cannot be parsed
Signed-off-by: Glen Choo
When a tree oid is invalid, parse_tree_indirect() can return NULL.
Check for NULL instead of proceeding as though it were a valid pointer and segfaulting.
| stackoverflow | {
"language": "en",
"length": 399,
"provenance": "stackexchange_0000F.jsonl.gz:873686",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569450"
} |
e52acc5802af51b296f7c0f0d17ff1c0f61c9a99 | Stackoverflow Stackexchange
Q: Test catch block logic with Junit and mockito I have class to test like below:
public class ReportWriter {
private FileWrter fw;
private static Logger logger = Logger.getLogger(ReportWriter.class);
public ReportWriter(FileWrter fw) {
this.fw = fw;
}
public void writeData(Data data) {
try{
fw.open();
fw.write(data);
fw.close();
} catch(DiskFullException e) {
//after catch we log and handle it without rethrowing the exception
logger.log(Level.WARN, "warning log here", e);
// some more logic here
sendEmailToSupport();
}
}
}
The question is how to test the logic in catch block?
A: If the sendEmailToSupport is at least a package level method, then you could go for something like:
public class ReportWriterClass{
@Spy
@InjectMocks
private ReportWriter reportWriterSpy;
@Mock
private FileWrter fwMock;
@Before
public void init(){
MockitoAnnotations.initMocks(this);
}
@Test
public void shouldSendEmail_whenDiskIsFull() throws Exception{
// Arrange
Data data = new Data();
doNothing().when(reportWriterSpy).sendEmailToSupport();
doThrow(new DiskFullException()).when(fwMock).write(data);
// Act
reportWriterSpy.writeData(data);
// Assert
verify(reportWriterSpy).sendEmailToSupport();
}
}
| Q: Test catch block logic with Junit and mockito I have class to test like below:
public class ReportWriter {
private FileWrter fw;
private static Logger logger = Logger.getLogger(ReportWriter.class);
public ReportWriter(FileWrter fw) {
this.fw = fw;
}
public void writeData(Data data) {
try{
fw.open();
fw.write(data);
fw.close();
} catch(DiskFullException e) {
//after catch we log and handle it without rethrowing the exception
logger.log(Level.WARN, "warning log here", e);
// some more logic here
sendEmailToSupport();
}
}
}
The question is how to test the logic in catch block?
A: If the sendEmailToSupport is at least a package level method, then you could go for something like:
public class ReportWriterClass{
@Spy
@InjectMocks
private ReportWriter reportWriterSpy;
@Mock
private FileWrter fwMock;
@Before
public void init(){
MockitoAnnotations.initMocks(this);
}
@Test
public void shouldSendEmail_whenDiskIsFull() throws Exception{
// Arrange
Data data = new Data();
doNothing().when(reportWriterSpy).sendEmailToSupport();
doThrow(new DiskFullException()).when(fwMock).write(data);
// Act
reportWriterSpy.writeData(data);
// Assert
verify(reportWriterSpy).sendEmailToSupport();
}
}
| stackoverflow | {
"language": "en",
"length": 145,
"provenance": "stackexchange_0000F.jsonl.gz:873738",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569595"
} |
abd125430bfdd86d8a64ab35033d6e8682cc3dc6 | Stackoverflow Stackexchange
Q: Why is getResources().getColor(int) deprecated? The document says:
This method was deprecated in API level 23. Use getColor(int, Theme)
instead.
And many posts point to the ContextCompat.getColor(Context, int) method as a replacement.
Also the document explains the Theme parameter:
theme Resources.Theme: The theme used to style the color attributes,
may be null.
Can you explain how can the theme affects the color?
A: Some complex colors like android.content.res.GradientColor (which are used inside a VectorDrawable) need a Theme in order to inflate the gradient, since you could have a definition like:
<gradient xmlns:android="http://schemas.android.com/apk/res/android"
android:startColor="?android:attr/colorPrimary"
android:endColor="?android:attr/colorControlActivated"
android:type="linear" />
| Q: Why is getResources().getColor(int) deprecated? The document says:
This method was deprecated in API level 23. Use getColor(int, Theme)
instead.
And many posts point to the ContextCompat.getColor(Context, int) method as a replacement.
Also the document explains the Theme parameter:
theme Resources.Theme: The theme used to style the color attributes,
may be null.
Can you explain how can the theme affects the color?
A: Some complex colors like android.content.res.GradientColor (which are used inside a VectorDrawable) need a Theme in order to inflate the gradient, since you could have a definition like:
<gradient xmlns:android="http://schemas.android.com/apk/res/android"
android:startColor="?android:attr/colorPrimary"
android:endColor="?android:attr/colorControlActivated"
android:type="linear" />
A: You can use the following as a workaround:
ContextCompat.getColor(this, R.color.yourcolor)
| stackoverflow | {
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:873741",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569615"
} |
31762f2d720ad50c8447d3f9370c269579fe1f6d | Stackoverflow Stackexchange
Q: React Native gradient shadow I'm adding a shadow to an image, and it works fine but I want the shadow to be gradient like in this picture (from dark to bright ) :
In the documentation shadow Props doesn't have such property, so is there any way to achieve it?
A: I'm not sure how Expo is doing it but we've used the react-native-linear-gradient for things like that
https://github.com/react-native-community/react-native-linear-gradient
| Q: React Native gradient shadow I'm adding a shadow to an image, and it works fine but I want the shadow to be gradient like in this picture (from dark to bright ) :
In the documentation shadow Props doesn't have such property, so is there any way to achieve it?
A: I'm not sure how Expo is doing it but we've used the react-native-linear-gradient for things like that
https://github.com/react-native-community/react-native-linear-gradient
| stackoverflow | {
"language": "en",
"length": 70,
"provenance": "stackexchange_0000F.jsonl.gz:873752",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569636"
} |
4038846fe29361b2027d555821cdf1e6bc0e71c1 | Stackoverflow Stackexchange
Q: Multiple custom serializers for same entity in Spring Boot I'm working on a Spring Boot application. I've created a custom serializer for one entity A and registered it using @JsonSerialize(using = CustomSerializer.class) annotation. Whenever i send A in ResponseEntity<> the custom serializer is called and everything is working fine till this point.
Now there is another API in which I need to send a Collection of A in the response. But I can't use the same serializer to construct the list of A's, as the response parameters are totally different. I need to write one more serializer for the same entity.
How can I configure 2 serializers for the same entity? They should be called based on the object type sent in the response, i.e. when I'm sending A, then serializer1 should be called, and when I'm sending a Collection, serializer2 should be called.
Please help!
A: A simple workaround would be to annotate the collection to use a specific serializer for content. E.g.
@JsonSerialize(using = CustomSerializer.class)
class A {
}
class AList {
@JsonSerialize(contentUsing = AnotherCustomSerializer.class)
private final List<A> list;
}
| Q: Multiple custom serializers for same entity in Spring Boot I'm working on a Spring Boot application. I've created a custom serializer for one entity A and registered it using @JsonSerialize(using = CustomSerializer.class) annotation. Whenever i send A in ResponseEntity<> the custom serializer is called and everything is working fine till this point.
Now there is another API in which I need to send a Collection of A in the response. But I can't use the same serializer to construct the list of A's, as the response parameters are totally different. I need to write one more serializer for the same entity.
How can I configure 2 serializers for the same entity? They should be called based on the object type sent in the response, i.e. when I'm sending A, then serializer1 should be called, and when I'm sending a Collection, serializer2 should be called.
Please help!
A: A simple workaround would be to annotate the collection to use a specific serializer for content. E.g.
@JsonSerialize(using = CustomSerializer.class)
class A {
}
class AList {
@JsonSerialize(contentUsing = AnotherCustomSerializer.class)
private final List<A> list;
}
| stackoverflow | {
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:873753",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569640"
} |
df83caf0ed71ea5cb1b1fcc59a6a23cc50d73d8b | Stackoverflow Stackexchange
Q: Concatenating Strings from a List of Objects I know that the pythonic way of concatenating a list of strings is to use
l =["a", "b", "c"]
"".join(l)
But how would I do this if I have a list of objects which contain a string (as an attribute), without reassigning the string?
I guess I could implement __str__(self). But that's a workaround that I would prefer not to use.
A: What about something like :
joined = "".join([object.string for object in lst_object])
| Q: Concatenating Strings from a List of Objects I know that the pythonic way of concatenating a list of strings is to use
l =["a", "b", "c"]
"".join(l)
But how would I do this if I have a list of objects which contain a string (as an attribute), without reassigning the string?
I guess I could implement __str__(self). But that's a workaround that I would prefer not to use.
A: What about something like :
joined = "".join([object.string for object in lst_object])
A: The performance difference between generator expression and list comprehension is easy to measure:
python --version && python -m timeit -s \
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \
"''.join(obj.a for obj in l)"
Python 2.7.12
10 loops, best of 3: 87.2 msec per loop
python --version && python -m timeit -s \
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \
"''.join([obj.a for obj in l])"
Python 2.7.12
10 loops, best of 3: 77.1 msec per loop
python3.4 --version && python3.4 -m timeit -s \
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \
"''.join(obj.a for obj in l)"
Python 3.4.5
10 loops, best of 3: 77.4 msec per loop
python3.4 --version && python3.4 -m timeit -s \
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \
"''.join([obj.a for obj in l])"
Python 3.4.5
10 loops, best of 3: 66 msec per loop
python3.5 --version && python3.5 -m timeit -s \
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \
"''.join(obj.a for obj in l)"
Python 3.5.2
10 loops, best of 3: 82.8 msec per loop
python3.5 --version && python3.5 -m timeit -s \
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \
"''.join([obj.a for obj in l])"
Python 3.5.2
10 loops, best of 3: 64.9 msec per loop
python3.6 --version && python3.6 -m timeit -s \
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \
"''.join(obj.a for obj in l)"
Python 3.6.0
10 loops, best of 3: 84.6 msec per loop
python3.6 --version && python3.6 -m timeit -s \
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \
"''.join([obj.a for obj in l])"
Python 3.6.0
10 loops, best of 3: 64.7 msec per loop
It turns out that list comprehension is consistently faster than generator expression:
*
*2.7: ~12% faster
*3.4: ~15% faster
*3.5: ~22% faster
*3.6: ~24% faster
But note that memory consumption for list comprehension is 2x.
Update
Dockerfile you can run on your hardware to get your results, like docker build -t test-so . && docker run --rm test-so.
FROM saaj/snake-tank
RUN echo '[tox] \n\
envlist = py27,py33,py34,py35,py36 \n\
skipsdist = True \n\
[testenv] \n\
commands = \n\
python --version \n\
python -m timeit -s \\\n\
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \\\n\
"str().join(obj.a for obj in l)" \n\
python -m timeit -s \\\n\
"import argparse; l = [argparse.Namespace(a=str(i)) for i in range(1000000)]" \\\n\
"str().join([obj.a for obj in l])"' > tox.ini
CMD tox
A: You can convert all your string attributes to list of strings:
string_list = [myobj.str for myobj in l]
The code above creates a list of strings using a list comprehension. Afterwards you would use the standard way to concatenate strings:
"".join(string_list)
A: I guess the most pythonic way to do this would be using generator expression / list comprehension.
If the string for example is an attribute of the object obj_instance.str_attr
then just run:
"".join(x.str_attr for x in l)
or
"".join([x.str_attr for x in l])
edited:
see discussion on the performance below (they claim that list comprehension - 2nd option is faster).
A: list comprehension may be helpful. for example, with a list of dictionaries,
# data
data = [
{'str': 'a', 'num': 1},
{'str': 'b', 'num': 2},
]
joined_string = ''.join([item['str'] for item in data])
A: From previous answers:
"".join([x.str_attr if hasattr(x, 'str_attr') else x for x in l])
If your data types are simple:
''.join([somefunction(x) for x in l])
Have a look at the itertools module too. Then you could check filtering on values.
A: Another possibility is to use functional programming:
class StrObj:
def __init__(self, str):
self.str = str
a = StrObj('a')
b = StrObj('b')
c = StrObj('c')
l = [a,b,c]
"".join(map(lambda x: x.str, l))
This will work with any way the string might be connected to the object (directly as an attribute or in a more complicated way). Only the lambda has to be adapted.
A: A self-explaining one-liner
"".join(str(d.attr) for d in l)
| stackoverflow | {
"language": "en",
"length": 741,
"provenance": "stackexchange_0000F.jsonl.gz:873755",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569646"
} |
d3ad4e2347a2a412d4dd1a66076502064dada7e4 | Stackoverflow Stackexchange
Q: How to re-build task on Jenkins until build succeed? I have a Jenkins job which requires several build attempts until it is built successfully. Is there a way to keep re-building the job automatically until the job succeeds?
A: If your build is expected to require more than one attempt to succeed, I would fix the build first.
To retry a Jenkins job, you can use the Naginator plugin.
Configuration
Simply install the plugin, and then check the Post-Build action "Retry build after failure" on your project's configuration page.
If the build fails, it will be rescheduled to run again after the time you specified. You can choose how many times to retry running the job. For each consecutive unsuccessful build, you can choose to extend the waiting period.
The following options are also available:
*
*Rerun build for unstable builds as well as failures
*Only rebuild the job if the build's log output contains a given regular expression
*Rerun build only for the failed parts of a matrix job
The plugin also adds a rerun button in the build section.
| Q: How to re-build task on Jenkins until build succeed? I have a Jenkins job which requires several build attempts until it is built successfully. Is there a way to keep re-building the job automatically until the job succeeds?
A: If your build is expected to require more than one attempt to succeed, I would fix the build first.
To retry a Jenkins job, you can use the Naginator plugin.
Configuration
Simply install the plugin, and then check the Post-Build action "Retry build after failure" on your project's configuration page.
If the build fails, it will be rescheduled to run again after the time you specified. You can choose how many times to retry running the job. For each consecutive unsuccessful build, you can choose to extend the waiting period.
The following options are also available:
*
*Rerun build for unstable builds as well as failures
*Only rebuild the job if the build's log output contains a given regular expression
*Rerun build only for the failed parts of a matrix job
The plugin also adds a rerun button in the build section.
A: I haven't tried it myself, but a quick Google turned up the Naginator Plugin, which appears to do what you're asking.
Obligatory side note: better to fix your build for real, though.
A: While fixing your build is always a good approach, sometimes success is determined by a third-party service. For example, an API endpoint goes down and the tests start failing. I want this build to rerun until success, then marked as such. I have created a second job, that is taking only the output of the last successful build and running tests periodically against it.
So what we get is a monitoring service against the last known good build AND an automatic retry system for the development branch build system.
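If the job is (or can become) a Pipeline job, the built-in retry step gives similar behaviour without a plugin. A minimal sketch, where the shell command is just a placeholder for your real build step:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // re-run the enclosed steps up to 3 times before failing the build
                retry(3) {
                    sh './build.sh'
                }
            }
        }
    }
}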
| stackoverflow | {
"language": "en",
"length": 308,
"provenance": "stackexchange_0000F.jsonl.gz:873765",
"question_score": "19",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569686"
} |
e91d44c758d7343886959cb2fdde82d27148a854 | Stackoverflow Stackexchange
Q: Implementing Service Workers with an appcache fallback for unsupported browsers I'm looking into techniques for building a Progressive Web App in Aurelia with offline functionality that works across the major browsers. Service Workers are seemingly the preferred option for asset caching, but with lack of support in Safari (and currently Edge). Is it possible to use service workers with a fallback to appcache if they're not supported? How will an application behave if there's an appcache manifest AND a service worker installed?
A: if a browser supports service workers then the service worker caching will be used instead of the appCache manifest. You can include the appCache manifest for legacy browsers like Safari and things will work the way they did in the past. Plus for modern browsers they will leverage service worker caching and act as if the appCache does not exist. Sort of like the way responsive images work.
| Q: Implementing Service Workers with an appcache fallback for unsupported browsers I'm looking into techniques for building a Progressive Web App in Aurelia with offline functionality that works across the major browsers. Service Workers are seemingly the preferred option for asset caching, but with lack of support in Safari (and currently Edge). Is it possible to use service workers with a fallback to appcache if they're not supported? How will an application behave if there's an appcache manifest AND a service worker installed?
A: if a browser supports service workers then the service worker caching will be used instead of the appCache manifest. You can include the appCache manifest for legacy browsers like Safari and things will work the way they did in the past. Plus for modern browsers they will leverage service worker caching and act as if the appCache does not exist. Sort of like the way responsive images work.
A: The check which technology the browser is supporting is easly done:
if(navigator.serviceWorker){
initServiceWorker()
}else if(window.applicationCache){
initApplicationCache();
}else{
console.log('no caching possible');
}
Dynamic loading a service worker should not be a problem since it is done in javascript anyway.
Dynamic loading applicationCache's mainfest seems not to be possible, but you can try an iframe hack, see:
Dynamically Trigger HTML5 Cache Manifest file?
A: It's 2019 and the iPhone still doesn't get service workers functioning in WebViews. So an application cache fallback is still useful.
It's not exactly true that the app cache will have no effect when a service worker is up. It still tries to update its cache, which is a silly thing to do. Turning it off isn't crucial but would be a good thing to do.
The trick I'm doing now to disable app cache when service worker is functioning, is by intercepting the html (navigation) request and just remove the manifest attribute from <html>.
Something like this in the service worker script:
self.addEventListener('fetch', (ev) => {
if (ev.request.mode === 'navigate' && ev.request.method === 'GET') {
ev.respondWith(
fetch(ev.request.url)
.then(r => r.text())
.then(html => new Response(html.replace('manifest=', 'xmanifest='), {
headers: {'Content-Type': 'text/html'}
}))
)
}
})
| stackoverflow | {
"language": "en",
"length": 346,
"provenance": "stackexchange_0000F.jsonl.gz:873792",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569769"
} |
46355aaac058f1f5f148722cba0dc418e8215a25 | Stackoverflow Stackexchange
Q: Use Connection pool with Jedis I am using Jedis to connect with a Redis server in a REST service.
When I am calling the web service I want to do operations like jedis.hmget, jedis.exists and jedis.hgetAll.
For example:
jedis.hmget("employee:data:" + emp_user_id, "employee_id").get(0);
The configuration that I am using for Redis is:
Jedis jedis;
JedisShardInfo shardInfo;
@PostConstruct
public void init() {
try {
shardInfo = new JedisShardInfo(Config.getRedisHost(), Config.getRedisPort());
shardInfo.setPassword(Config.getRedisPassword());
jedis = new Jedis(shardInfo);
jedis.select(2);
//jedis.se
} catch (Exception e) {
logger.error("Exception in init ------- > " + e);
}
}
I know that Jedis is NOT thread safe. When I use 1000 threads at once to call the service, I get an "Unexpected end of stream" exception. I want to know: is JedisPool thread safe? I was unable to find a specific solution for this.
Thanks. Any help would be appreciated.
A: JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost", portno, 10000,
"password");
See here: https://github.com/xetorthio/jedis/wiki/Getting-started
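JedisPool itself is safe to share between threads; each request should borrow a connection and return it when done. A sketch of the per-request usage, reusing the pool created above and the hash key from the question:
try (Jedis jedis = pool.getResource()) {
    jedis.select(2);
    String employeeId = jedis.hmget("employee:data:" + emp_user_id, "employee_id").get(0);
}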
| Q: Use Connection pool with Jedis I am using Jedis to connect with a Redis server in a REST service.
When I am calling the web service I want to do operations like jedis.hmget, jedis.exists and jedis.hgetAll.
For example:
jedis.hmget("employee:data:" + emp_user_id, "employee_id").get(0);
The configuration that I am using for Redis is:
Jedis jedis;
JedisShardInfo shardInfo;
@PostConstruct
public void init() {
try {
shardInfo = new JedisShardInfo(Config.getRedisHost(), Config.getRedisPort());
shardInfo.setPassword(Config.getRedisPassword());
jedis = new Jedis(shardInfo);
jedis.select(2);
//jedis.se
} catch (Exception e) {
logger.error("Exception in init ------- > " + e);
}
}
I know that Jedis is NOT thread safe. When I use 1000 threads at once to call the service, I get an "Unexpected end of stream" exception. I want to know: is JedisPool thread safe? I was unable to find a specific solution for this.
Thanks. Any help would be appreciated.
A: JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost", portno, 10000,
"password");
See here: https://github.com/xetorthio/jedis/wiki/Getting-started
A: Check out Spring-data-redis.
When you add a JedisConnectionFactory you get a connectionFactory which has connection pooling capability by default.
JedisConnectionFactory()
Constructs a new JedisConnectionFactory instance with default settings (default connection pooling, no shard information). See docs.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="jedisConnectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory" p:use-pool="true" p:host-name="server" p:port="6379"/>
</beans>
For further information, see the documentation.
| stackoverflow | {
"language": "en",
"length": 217,
"provenance": "stackexchange_0000F.jsonl.gz:873836",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569901"
} |
b8c05b899218c5159a5a334f1b4ab8508fdefc77 | Stackoverflow Stackexchange
Q: Time triggered azure function to trigger immediately after deploy I have a time-triggered Azure function triggered every 1 hour. My requirement is that it should get triggered every hour, but also once immediately after the deployment.
Is playing with cron expression my only way for this ?
A: There isn't something directly tied to the deployment. The runOnStartup setting, documented here, triggers your function when the runtime starts, but won't cause the runtime to start as a result of a deployment.
Your best option would likely be to customize your deployment, as documented here, and invoke your function (by issuing an HTTP request) once the deployment completes. You can share the code and have an HTTP triggered function that uses the same logic as the timer function that runs on a schedule.
| Q: Time triggered azure function to trigger immediately after deploy I have a time-triggered Azure function triggered every 1 hour. My requirement is that it should get triggered every hour, but also once immediately after the deployment.
Is playing with cron expression my only way for this ?
A: There isn't something directly tied to the deployment. The runOnStartup setting, documented here, triggers your function when the runtime starts, but won't cause the runtime to start as a result of a deployment.
Your best option would likely be to customize your deployment, as documented here, and invoke your function (by issuing an HTTP request) once the deployment completes. You can share the code and have an HTTP triggered function that uses the same logic as the timer function that runs on a schedule.
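For reference, a hedged sketch of a timer trigger's function.json with runOnStartup enabled (the hourly schedule matches the question; note that runOnStartup fires whenever the host starts, which is related to but not exactly the same as "after a deployment"):
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *",
      "runOnStartup": true
    }
  ]
}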
| stackoverflow | {
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:873843",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569926"
} |
96fbdc31cce5605066aa0c60d1aee13570e4169c | Stackoverflow Stackexchange
Q: Odoo `--test-enable` doesn't work I'm following Odoo 10 Development Essentials Chapter 2 to develop a simple todo addon. I'm using odoo's docker as my environment.
You can check all my source code (including dockers') at https://github.com/spacegoing/docker_odoo
The problem is that I set up my tests/ directory exactly the same as the book's example. However, the tests only ran once. After the first execution, the tests were never invoked again. There is not even a .pyc file in the tests directory.
Here are the commands I tried
odoo --db_host db --db_port 5432 -r odoo -w odoo -i todo_app --test-enable --xmlrpc-port=8070 --logfile=/var/log/odoo/odoo_inst1.log
odoo --db_host db --db_port 5432 -r odoo -w odoo -u todo_app --test-enable --xmlrpc-port=8070 --logfile=/var/log/odoo/odoo_inst1.log
Notes:
*
*odoo-bin is odoo in docker
*I've installed todo_app with another odoo instance running on port 8069
A: Finally I found the answer. This is the biggest gotcha I have run into since I was 5 years old.
Nowhere in the official documentation is it mentioned that tests will only run if the database was created with demo data.
I found this from this forum post:
https://www.odoo.com/forum/help-1/question/why-my-test-yaml-do-not-run-42123
So if you have tried every single command you can find and none of them works, this might be your answer.
| Q: Odoo `--test-enable` doesn't work I'm following Odoo 10 Development Essentials Chapter 2 to develop a simple todo addon. I'm using odoo's docker as my environment.
You can check all my source code (including dockers') at https://github.com/spacegoing/docker_odoo
The problem is that I set up my tests/ directory exactly the same as the book's example. However, the tests only ran once. After the first execution, the tests were never invoked again. There is not even a .pyc file in the tests directory.
Here are the commands I tried
odoo --db_host db --db_port 5432 -r odoo -w odoo -i todo_app --test-enable --xmlrpc-port=8070 --logfile=/var/log/odoo/odoo_inst1.log
odoo --db_host db --db_port 5432 -r odoo -w odoo -u todo_app --test-enable --xmlrpc-port=8070 --logfile=/var/log/odoo/odoo_inst1.log
Notes:
*
*odoo-bin is odoo in docker
*I've installed todo_app with another odoo instance running on port 8069
A: Finally I found the answer. This is the biggest gotcha I have run into since I was 5 years old.
Nowhere in the official documentation is it mentioned that tests will only run if the database was created with demo data.
I found this from this forum post:
https://www.odoo.com/forum/help-1/question/why-my-test-yaml-do-not-run-42123
So if you have tried every single command you can find and none of them works, this might be your answer.
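As a hedged illustration (the database name is made up), installing the module into a brand-new database so that the demo data is loaded, and then enabling tests, would look roughly like this:
# assumes "todo_test" does not exist yet, so demo data is installed along with the module
odoo --db_host db --db_port 5432 -r odoo -w odoo -d todo_test -i todo_app --test-enable --xmlrpc-port=8070 --logfile=/var/log/odoo/odoo_test.log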
| stackoverflow | {
"language": "en",
"length": 193,
"provenance": "stackexchange_0000F.jsonl.gz:873876",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44569998"
} |
1703d4e412fdb7a520dc189f818651b3cf0b0ad1 | Stackoverflow Stackexchange
Q: Push Notifications without Apple Push Notification Service? Can one use a 3rd party service to send Push Notifications without relying on the Apple Push Notification Service (APNS)?
If it is a requirement that one use the APNS service, is it simply a requirement for App Store approval or is it a technological limitation?
I have seen other questions, such as this one: Apple push notification without Apple Server, but it mainly deals with sending files and is several years old.
A: Apple requires you to use APNS to send push notifications to devices. This cannot be done without APNS; if you found a way around it, Apple would most likely reject the app.
Click here to read the documentation. When you register for push notifications you are actually getting the device token for your app on that specific device from APNS; therefore it is an APNS-specific token and you will need to use APNS to send the notification.
| Q: Push Notifications without Apple Push Notification Service? Can one use a 3rd party service to send Push Notifications without relying on the Apple Push Notification Service (APNS)?
If it is a requirement that one use the APNS service, is it simply a requirement for App Store approval or is it a technological limitation?
I have seen other questions, such as this one: Apple push notification without Apple Server, but it mainly deals with sending files and is several years old.
A: Apple requires you to use APNS to send push notifications to devices. This cannot be done without APNS; if you found a way around it, Apple would most likely reject the app.
Click here to read the documentation. When you register for push notifications you are actually getting the device token for your app on that specific device from APNS; therefore it is an APNS-specific token and you will need to use APNS to send the notification.
| stackoverflow | {
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:873898",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570070"
} |
5e6949b7bf39e1d12f5b73873ca3f76facb38cfc | Stackoverflow Stackexchange
Q: What does Java's BST ZoneId represent? I have stored in the DB this Time Frame: any day from 15:00 to 16:00 in LONDON (BST)
I need to execute a program if the event I receive arrives within this time frame.
I am running the test now in Paris (16:22), where it is 15:22 in London (so within the time frame stored in the DB).
SO this is my code
// create Local Date Time from what I have stored in the DB
LocalDateTime dateTime1 = LocalDateTime.of(2017, Month.JUNE, 15, 15, 00);
LocalDateTime dateTime2 = LocalDateTime.of(2017, Month.JUNE, 15, 16, 00);
Instant now = Instant.now();
System.out.println (now.isAfter (dateTime1.atZone(ZoneId.of("BST", ZoneId.SHORT_IDS)).toInstant()));
System.out.println (now.isBefore(dateTime2.atZone(ZoneId.of("BST", ZoneId.SHORT_IDS)).toInstant()));
Theoretically, now (16:22 in Paris / 15:22 in London) is after dateTime1 in London (15:00) and before dateTime2 in London (16:00),
but I get that now is not before dateTime2.
A: As indicated in the javadoc of ZoneId.SHORT_IDS, “BST” is not British Summer Time but Bangladesh Standard Time (Asia/Dhaka).
You can check the value with:
System.out.println(ZoneId.of("BST", ZoneId.SHORT_IDS));
So I suggest using full time zone names to avoid any confusion:
ZoneId london = ZoneId.of("Europe/London");
| Q: What does Java's BST ZoneId represent? I have stored in the DB this Time Frame: any day from 15:00 to 16:00 in LONDON (BST)
I need to execute a program if the event I receive arrives within this time frame.
I am running the test now in Paris (16:22), where it is 15:22 in London (so within the time frame stored in the DB).
SO this is my code
// create Local Date Time from what I have stored in the DB
LocalDateTime dateTime1 = LocalDateTime.of(2017, Month.JUNE, 15, 15, 00);
LocalDateTime dateTime2 = LocalDateTime.of(2017, Month.JUNE, 15, 16, 00);
Instant now = Instant.now();
System.out.println (now.isAfter (dateTime1.atZone(ZoneId.of("BST", ZoneId.SHORT_IDS)).toInstant()));
System.out.println (now.isBefore(dateTime2.atZone(ZoneId.of("BST", ZoneId.SHORT_IDS)).toInstant()));
Theoretically, now (16:22 in Paris / 15:22 in London) is after dateTime1 in London (15:00) and before dateTime2 in London (16:00),
but I get that now is not before dateTime2.
A: As indicated in the javadoc of ZoneId.SHORT_IDS, “BST” is not British Summer Time but Bangladesh Standard Time (Asia/Dhaka).
You can check the value with:
System.out.println(ZoneId.of("BST", ZoneId.SHORT_IDS));
So I suggest using full time zone names to avoid any confusion:
ZoneId london = ZoneId.of("Europe/London");
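A hedged sketch of the corrected check from the question, using the unambiguous zone name:
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.Month;
import java.time.ZoneId;

public class TimeFrameCheck {
    public static void main(String[] args) {
        ZoneId london = ZoneId.of("Europe/London");
        // the time frame stored in the DB, interpreted as London wall-clock time
        LocalDateTime dateTime1 = LocalDateTime.of(2017, Month.JUNE, 15, 15, 0);
        LocalDateTime dateTime2 = LocalDateTime.of(2017, Month.JUNE, 15, 16, 0);

        Instant now = Instant.now();
        boolean inWindow = now.isAfter(dateTime1.atZone(london).toInstant())
                && now.isBefore(dateTime2.atZone(london).toInstant());
        System.out.println(inWindow); // true between 15:00 and 16:00 London time
    }
}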
| stackoverflow | {
"language": "en",
"length": 185,
"provenance": "stackexchange_0000F.jsonl.gz:873932",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570166"
} |
470265a0791f887d8c578132f7a52898197b1d2e | Stackoverflow Stackexchange
Q: How does the Tomcat container call the service() method? How does the Tomcat container call the service(ServletRequest req, ServletResponse res) method?
Where can I look for the implementation of this call?
How can I see how the req and res objects (that are passed to the service() method) are created?
A: Although the Tomcat Architecture page is in TODO status, there is at least a link to the UML sequence diagram of the request processing. Combined with Tomcat's source code, it is a good starting point.
| Q: How does the Tomcat container call the service() method? How does the Tomcat container call the service(ServletRequest req, ServletResponse res) method?
Where can I look for the implementation of this call?
How can I see how the req and res objects (that are passed to the service() method) are created?
A: Although the Tomcat Architecture page is in TODO status, there is at least a link to the UML sequence diagram of the request processing. Combined with Tomcat's source code, it is a good starting point.
A: If you want to know it, first clone the Apache Tomcat source code:
git clone https://github.com/apache/tomcat.git
Then, inside the cloned repository, run this command to search for where the service method is invoked:
grep -H -n -r "\.service(" --include=*.java
You will find a short file list:
java/javax/servlet/jsp/PageContext.java:107: * in this PageContext until the return from the current Servlet.service()
java/org/apache/catalina/connector/Request.java:3128: // that set towards the start of CoyoyeAdapter.service()
java/org/apache/catalina/core/ApplicationFilterChain.java:231: servlet.service(request, response);
java/org/apache/catalina/servlets/DefaultServlet.java:411: super.service(req, resp);
java/org/apache/catalina/servlets/WebdavServlet.java:349: super.service(req, resp);
java/org/apache/coyote/ajp/AjpProcessor.java:403: getAdapter().service(request, response);
java/org/apache/coyote/AsyncStateMachine.java:41: * been called during a single Servlet.service() method. The
java/org/apache/coyote/AsyncStateMachine.java:58: * been called during a single Servlet.service() method. The
java/org/apache/coyote/http11/Http11Processor.java:498: getAdapter().service(request, response);
java/org/apache/coyote/http2/StreamProcessor.java:257: adapter.service(request, response);
java/org/apache/jasper/Constants.java:41: * HttpJspBase.service(). This is where most of the code generated
java/org/apache/jasper/servlet/JspServlet.java:385: wrapper.service(request, response, precompile);
java/org/apache/jasper/servlet/JspServletWrapper.java:440: servlet.service(request, response);
java/org/apache/jasper/servlet/JspServletWrapper.java:443: servlet.service(request, response);
The most interesting one is java/org/apache/catalina/core/ApplicationFilterChain.java. You will find more matches, but most of them are because there is another interface in the Tomcat source code, java/org/apache/coyote/Adapter.java, which has a very similar method; ignore it.
Once you open java/org/apache/catalina/core/ApplicationFilterChain.java, go to line 231 and see where the service method is called.
However, neither the req nor the res object is created there. Finding where they are created is a bit more involved and requires more time.
A: Servlet lifecycle is controlled by the underlying container. Once the servlet has been initialized and there is a request, Tomcat will call the servlet's service method to process the request.
The service method will delegate the request to your servlet class, where you can access the req and res objects in the doGet or doPost methods:
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse res) {
    // req and res are the objects the container created and passed through service()
}
Update:
1. Upon a request from the client, the container creates two objects: HttpServletRequest and HttpServletResponse.
2. Based on the request, the container finds the correct servlet (as per the URL mapping), creates a new thread for that particular request (one-to-one mapping: a new thread for each request) and calls the servlet's service method, passing in the created HttpServletRequest and HttpServletResponse objects as arguments.
3. Based on the request method (GET or POST), the service() method will call the doGet() or doPost() method in the servlet, again passing the same HttpServletRequest and HttpServletResponse objects as arguments.
That is the Servlet specification in a nutshell. How exactly Tomcat behaves is implementation specific; it is not controlled by the specification. If you need to know exactly how it is implemented in Tomcat, you might check its source code.
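As a hedged illustration of that dispatch (the class name is made up), you can override service() in your own servlet and let HttpServlet's default implementation route the call to doGet()/doPost():
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TracingServlet extends HttpServlet {
    @Override
    protected void service(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // the container invokes service() with the request/response objects it created
        System.out.println("service() called for " + req.getMethod() + " " + req.getRequestURI());
        super.service(req, res); // HttpServlet dispatches to doGet()/doPost() based on the HTTP method
    }
}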
| stackoverflow | {
"language": "en",
"length": 475,
"provenance": "stackexchange_0000F.jsonl.gz:873933",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570168"
} |
acc456cafcfd32c9777cb5a31df6404b383bed0a | Stackoverflow Stackexchange
Q: Using a variable in component URL in laravel markdown mail I have the following markdown mail in Laravel:
# Welcome to Offer Site
Thanks for listing your product, {{$user->name}}!
@component('mail::button', ['url' => 'https://www.example.com/product/view/{{$product->id}}', 'color'=> 'orange'])
View it here
@endcomponent
However, the rendered URL when the mail is sent is https://www.example.com/product/view/%3C?php%20echo%20e(%24product-%3Eid);%20?%3E
This is probably super simple but it's hard to word... how do I go about making sure the variable is properly inserted into the URL as the parameter? This is in the build method of the ProductAdded mailable:
return $this->markdown('emails.product-added-email');
And this is what I pass to ProductAdded Mail:
\Mail::to($user)->send(new \App\Mail\ProductAdded($user, $product));
The variables work fine.
Any ideas?
A: You are already inside a PHP string, so there is no need to use the Blade brackets. You can just concatenate the string like so:
@component('mail::button', ['url' => 'https://www.example.com/product/view/' . $product->id, 'color' => 'orange'])
| Q: Using a variable in component URL in laravel markdown mail I have the following markdown mail in Laravel:
# Welcome to Offer Site
Thanks for listing your product, {{$user->name}}!
@component('mail::button', ['url' => 'https://www.example.com/product/view/{{$product->id}}', 'color'=> 'orange'])
View it here
@endcomponent
However, the rendered URL when the mail is sent is https://www.example.com/product/view/%3C?php%20echo%20e(%24product-%3Eid);%20?%3E
This is probably super simple but it's hard to word... how do I go about making sure the variable is properly inserted into the URL as the parameter? This is in the build method of the ProductAdded mailable:
return $this->markdown('emails.product-added-email');
And this is what I pass to ProductAdded Mail:
\Mail::to($user)->send(new \App\Mail\ProductAdded($user, $product));
The variables work fine.
Any ideas?
A: You are already inside a PHP string, so there is no need to use the Blade brackets. You can just concatenate the string like so:
@component('mail::button', ['url' => 'https://www.example.com/product/view/' . $product->id, 'color' => 'orange'])
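Alternatively (an untested sketch), a double-quoted PHP string lets you interpolate the id directly:
@component('mail::button', ['url' => "https://www.example.com/product/view/{$product->id}", 'color' => 'orange'])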
| stackoverflow | {
"language": "en",
"length": 143,
"provenance": "stackexchange_0000F.jsonl.gz:873936",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570175"
} |
9f6b363bdafbee5551542b7fe89989642039566b | Stackoverflow Stackexchange
Q: =default constructor visibility I have such code
class A
{
A() = default;
};
class B
{
B();
};
B::B() = default;
int main()
{
A a{}; //1
A a1; //2
B b{}; //3
B b1; //4
}
Lines 2, 3 and 4 generate compilation errors as expected, but line 1 works! Please explain:
*
*The difference between lines 1 and 2? In my opinion both of them should use the default constructor and generate the same error.
*The difference between the constructors in classes A and B.
| Q: =default constructor visibility I have such code
class A
{
A() = default;
};
class B
{
B();
};
B::B() = default;
int main()
{
A a{}; //1
A a1; //2
B b{}; //3
B b1; //4
}
Lines 2, 3 and 4 generate compilation errors as expected, but line 1 works! Please explain:
*
*The difference between lines 1 and 2? In my opinion both of them should use the default constructor and generate the same error.
*The difference between the constructors in classes A and B.
| stackoverflow | {
"language": "en",
"length": 84,
"provenance": "stackexchange_0000F.jsonl.gz:873939",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570180"
} |
ab2b4c05bb5cc75b563b379a21c63e95941f168f | Stackoverflow Stackexchange
Q: Docker - how do I restart nginx to apply custom config? I am trying to configure a LEMP dev environment with Docker and am having trouble with nginx because I can't seem to restart nginx once it has its new configuration.
docker-compose.yml:
version: '3'
services:
nginx:
image: nginx
ports:
- '8080:80'
volumes:
- ./nginx/log:/var/log/nginx
- ./nginx/config/default:/etc/nginx/sites-available/default
- ../wordpress:/var/www/wordpress
php:
image: php:fpm
ports:
- 9000:9000
mysql:
image: mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: secret
volumes:
- ./mysql/data:/var/lib/mysql
I have a custom nginx config that replaces /etc/nginx/sites-available/default, and in a normal Ubuntu environment, I would run service nginx restart to pull in the new config.
However, if I try to do that in this Docker environment, the nginx container exits with code 1.
docker-compose exec nginx sh
service nginx restart
-exit with code 1-
How would I be able to use nginx with a custom /etc/nginx/sites-available/default file?
A: To reload nginx with docker-compose specifically (rather than restart the whole container, causing downtime):
docker-compose exec nginx nginx -s reload
| Q: Docker - how do I restart nginx to apply custom config? I am trying to configure a LEMP dev environment with Docker and am having trouble with nginx because I can't seem to restart nginx once it has its new configuration.
docker-compose.yml:
version: '3'
services:
nginx:
image: nginx
ports:
- '8080:80'
volumes:
- ./nginx/log:/var/log/nginx
- ./nginx/config/default:/etc/nginx/sites-available/default
- ../wordpress:/var/www/wordpress
php:
image: php:fpm
ports:
- 9000:9000
mysql:
image: mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: secret
volumes:
- ./mysql/data:/var/lib/mysql
I have a custom nginx config that replaces /etc/nginx/sites-available/default, and in a normal Ubuntu environment, I would run service nginx restart to pull in the new config.
However, if I try to do that in this Docker environment, the nginx container exits with code 1.
docker-compose exec nginx sh
service nginx restart
-exit with code 1-
How would I be able to use nginx with a custom /etc/nginx/sites-available/default file?
A: To reload nginx with docker-compose specifically (rather than restart the whole container, causing downtime):
docker-compose exec nginx nginx -s reload
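If the reload appears to do nothing, it may help to validate the mounted configuration first; nginx -t only tests the config and reports any syntax errors:
docker-compose exec nginx nginx -t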
A: Basically you can reload nginx configuration by invoking this command:
docker exec <nginx-container-name-or-id> nginx -s reload
A: Docker containers should run a single application in the foreground. When the process launched as PID 1 inside the container exits, so does the container (similar to how killing PID 1 on a Linux server will shut down that machine). This process isn't managed by the OS service command.
The normal way to reload a configuration in a container is to restart the container. Since you're using docker-compose, that would be docker-compose restart nginx. Note that if this config was part of your image, you would need to rebuild and redeploy a new container, but since you're using a volume, that isn't necessary.
| stackoverflow | {
"language": "en",
"length": 287,
"provenance": "stackexchange_0000F.jsonl.gz:873965",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570242"
} |
e7164f7f8d9811aadc179e975f7b92913d6f0f8b | Stackoverflow Stackexchange
Q: Push Notifications without Firebase Cloud Messaging Can one use a 3rd party service to send Push Notifications without relying on Google's Firebase Cloud Messaging (FCM)?
If the Firebase package is not included with the app by default, could creating or using a custom framework provide a similar feature set to Firebase? Or is Firebase integrated into the Android operating system in some way that is external to the app?
A: It depends on your requirements. If you just want to send normal notifications, then I can really recommend PushBots.
But if you want extra features like an invisible data payload, you probably have to use FCM.
| Q: Push Notifications without Firebase Cloud Messaging Can one use a 3rd party service to send Push Notifications without relying on Google's Firebase Cloud Messaging (FCM)?
If the Firebase package is not included with the app by default, could creating or using a custom framework provide a similar feature set to Firebase? Or is Firebase integrated into the Android operating system in some way that is external to the app?
A: It depends on your requirements. If you just want to send normal notifications, then I can really recommend PushBots.
But if you want extra features like an invisible data payload, you probably have to use FCM.
A: You can try out services like Baidu, a very well-known messaging service in China.
| stackoverflow | {
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:873967",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570247"
} |
46499d0ec3315b5cb1d903425a7bae18f6c626f9 | Stackoverflow Stackexchange
Q: Why do only some of the subprojects of a Maven project refresh when changing SVN branches in Eclipse? I have a Maven project with several subprojects. When I change branches, only some of the subprojects indicate that the code is pointing to the newly switched-to branch in the Navigator window. Has anyone else seen this? Why does it only happen some of the time? Is there a way I can get all of the subprojects to update correctly? (I've tried switching using "Working Copy" and "Fully Recursive", and I've also tried doing a refresh on all of the subprojects as well as team->refresh/clean up followed by a refresh.)
| Q: Why do only some of the subprojects of a Maven project refresh when changing SVN branches in Eclipse? I have a Maven project with several subprojects. When I change branches, only some of the subprojects indicate that the code is pointing to the newly switched-to branch in the Navigator window. Has anyone else seen this? Why does it only happen some of the time? Is there a way I can get all of the subprojects to update correctly? (I've tried switching using "Working Copy" and "Fully Recursive", and I've also tried doing a refresh on all of the subprojects as well as team->refresh/clean up followed by a refresh.)
| stackoverflow | {
"language": "en",
"length": 109,
"provenance": "stackexchange_0000F.jsonl.gz:874004",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570357"
} |
c3a582011bd8f88e5b007e8329bd1f3ff93c115b | Stackoverflow Stackexchange
Q: Does Android Things Support RenderScript Compute? I know that Android Things supports the NDK. I cannot find a reference that states, one way or another, whether Android Things supports RenderScript Compute. My assumption is no (on the grounds that not all Things platforms will have a suitable GPU), but I was hoping perhaps that people had tried it or otherwise know whether RenderScript Compute works on Android Things.
A:
My assumption is no (on the grounds that not all Things platforms will have a suitable GPU)
I'll be more specific when I know what a "suitable" GPU is, but if there are some of those (see below) among the supported hardware platforms, the assumption isn't correct.
but I was hoping perhaps that people had tried it or otherwise know whether RenderScript Compute works on Android Things.
I tried this sample on an RPi3 and it worked, although the question of whether the computation was parallelized across both the CPU and the GPU is left open.
| Q: Does Android Things Support RenderScript Compute? I know that Android Things supports the NDK. I cannot find a reference that states, one way or another, whether Android Things supports RenderScript Compute. My assumption is no (on the grounds that not all Things platforms will have a suitable GPU), but I was hoping perhaps that people had tried it or otherwise know whether RenderScript Compute works on Android Things.
A:
My assumption is no (on the grounds that not all Things platforms will have a suitable GPU)
I'll be more specific when I know what a "suitable" GPU is, but if there are some of those (see below) among the supported hardware platforms, the assumption isn't correct.
but I was hoping perhaps that people had tried it or otherwise know whether RenderScript Compute works on Android Things.
I tried this sample on an RPi3 and it worked, although the question of whether the computation was parallelized across both the CPU and the GPU is left open.
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:874006",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570359"
} |
106998ece3e4b0f65cbdbe1235e66173c7343e96 | Stackoverflow Stackexchange
Q: PreProcessor: Skipping only the current sampler execution if a condition is not met Here is my JMeter structure.
Thread Group
Sampler 1
Pre-Processor 1
Sampler 2
I am checking a condition in 'Pre-Processor 1'. If it fails, I want to skip the execution of 'Sampler 1' altogether along with any post-processor and assertions and proceed to the next Sampler. How can we do this?
I am aware that I can do this in a sampler before 'Sampler 1' and wrap 'Sampler 1' around an IF controller to check this. But I don't want that. I am looking for a solution similar to ctx.setRestartNextLoop(true); which will go to the next iteration. Instead of that, I want to skip just the current sampler.
A: In JMeter core, a Pre-Processor is not expected to be able to cancel the execution of a Sampler; file an enhancement request describing your particular need, as I don't understand the motivation.
Meanwhile, stick to your alternative approach based on the If Controller.
Use the "If Controller" element; it will run your Sampler 1 only if the condition is true:
*
*http://jmeter.apache.org/usermanual/component_reference.html#If_Controller
| Q: PreProcessor: Skipping only the current sampler execution if a condition is not met Here is my JMeter structure.
Thread Group
Sampler 1
Pre-Processor 1
Sampler 2
I am checking a condition in 'Pre-Processor 1'. If it fails, I want to skip the execution of 'Sampler 1' altogether along with any post-processor and assertions and proceed to the next Sampler. How can we do this?
I am aware that I can do this in a sampler before 'Sampler 1' and wrap 'Sampler 1' around an IF controller to check this. But I don't want that. I am looking for a solution similar to ctx.setRestartNextLoop(true); which will go to the next iteration. Instead of that, I want to skip just the current sampler.
A: In JMeter core, a Pre-Processor is not expected to be able to cancel the execution of a Sampler; file an enhancement request describing your particular need, as I don't understand the motivation.
Meanwhile, stick to your alternative approach based on the If Controller.
Use the "If Controller" element; it will run your Sampler 1 only if the condition is true:
*
*http://jmeter.apache.org/usermanual/component_reference.html#If_Controller
| stackoverflow | {
"language": "en",
"length": 177,
"provenance": "stackexchange_0000F.jsonl.gz:874010",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570364"
} |
74f8123abddf96259e851a816eabd234c23fe4bb | Stackoverflow Stackexchange
Q: How to view response from Spring 5 Reactive API in Postman? I have the following endpoint in my application:
@GetMapping(value = "/users")
public Mono<ServerResponse> users() {
Flux<User> flux = Flux.just(new User("id"));
return ServerResponse.ok()
.contentType(APPLICATION_JSON)
.body(flux, User.class)
.onErrorResume(CustomException.class, e -> ServerResponse.notFound().build());
}
Currently I can see the text "data:" as the body and Content-Type → text/event-stream in Postman. As I understand it, Mono<ServerResponse> always returns data with SSE (Server-Sent Events).
Is it possible to somehow view the response in the Postman client?
A: It seems you're mixing the annotation model and the functional model in WebFlux. The ServerResponse class is part of the functional model.
Here's how to write an annotated endpoint in WebFlux:
@RestController
public class HomeController {
@GetMapping("/test")
public ResponseEntity serverResponseMono() {
return ResponseEntity
.ok()
.contentType(MediaType.APPLICATION_JSON)
.body(Flux.just("test"));
}
}
Here's the functional way now:
@Component
public class UserHandler {
public Mono<ServerResponse> findUser(ServerRequest request) {
Flux<User> flux = Flux.just(new User("id"));
return ServerResponse.ok()
.contentType(MediaType.APPLICATION_JSON)
.body(flux, User.class)
.onErrorResume(CustomException.class, e -> ServerResponse.notFound().build());
}
}
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@Bean
public RouterFunction<ServerResponse> users(UserHandler userHandler) {
return route(GET("/test")
.and(accept(MediaType.APPLICATION_JSON)), userHandler::findUser);
}
}
| Q: How to view response from Spring 5 Reactive API in Postman? I have the following endpoint in my application:
@GetMapping(value = "/users")
public Mono<ServerResponse> users() {
Flux<User> flux = Flux.just(new User("id"));
return ServerResponse.ok()
.contentType(APPLICATION_JSON)
.body(flux, User.class)
.onErrorResume(CustomException.class, e -> ServerResponse.notFound().build());
}
Currently I can see the text "data:" as the body and Content-Type → text/event-stream in Postman. As I understand it, Mono<ServerResponse> always returns data with SSE (Server-Sent Events).
Is it possible to somehow view the response in the Postman client?
A: It seems you're mixing the annotation model and the functional model in WebFlux. The ServerResponse class is part of the functional model.
Here's how to write an annotated endpoint in WebFlux:
@RestController
public class HomeController {
@GetMapping("/test")
public ResponseEntity serverResponseMono() {
return ResponseEntity
.ok()
.contentType(MediaType.APPLICATION_JSON)
.body(Flux.just("test"));
}
}
Here's the functional way now:
@Component
public class UserHandler {
public Mono<ServerResponse> findUser(ServerRequest request) {
Flux<User> flux = Flux.just(new User("id"));
return ServerResponse.ok()
.contentType(MediaType.APPLICATION_JSON)
.body(flux, User.class)
.onErrorResume(CustomException.class, e -> ServerResponse.notFound().build());
}
}
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@Bean
public RouterFunction<ServerResponse> users(UserHandler userHandler) {
return route(GET("/test")
.and(accept(MediaType.APPLICATION_JSON)), userHandler::findUser);
}
}
| stackoverflow | {
"language": "en",
"length": 182,
"provenance": "stackexchange_0000F.jsonl.gz:874121",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570730"
} |
2b21c5eadeb421bda0c3e133c35e8210ea428fb2 | Stackoverflow Stackexchange
Q: Nginx/Django File Upload Permissions Today I noticed that whenever I upload a file through my Django site, the file is uploaded with file permissions 0600, meaning that whenever a non-root user (nginx) wants to view the file, a 403 is shown.
This only started happening today from what I can tell. I have checked both the file_upload_permissions and file_upload_directory_permissions in the Django settings file and they are both set to 0644.
I haven't done any Linux/Django updates recently, so that shouldn't be the cause. Any help would be greatly appreciated.
Thanks,
Sam
A: If you have recently switched to Python 3, please take a look here for a reference on octal literals in Python 3. Changing your settings as follows should fix it:
FILE_UPLOAD_PERMISSIONS = 0o644
This is also helpful in writing Python 2-3 compatible code.
| Q: Nginx/Django File Upload Permissions Today I noticed that whenever I upload a file through my Django site, the file is uploaded with file permissions 0600, meaning that whenever a non-root user (nginx) wants to view the file, a 403 is shown.
This only started happening today from what I can tell. I have checked both the file_upload_permissions and file_upload_directory_permissions in the Django settings file and they are both set to 0644.
I haven't done any Linux/Django updates recently, so that shouldn't be the cause. Any help would be greatly appreciated.
Thanks,
Sam
A: If you have recently switched to Python 3, please take a look here for a reference on octal literals in Python 3. Changing your settings as follows should fix it:
FILE_UPLOAD_PERMISSIONS = 0o644
This is also helpful in writing Python 2-3 compatible code.
| stackoverflow | {
"language": "en",
"length": 139,
"provenance": "stackexchange_0000F.jsonl.gz:874148",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570808"
} |
ec063a453e08df9cf3552f7303bdc5484969481f | Stackoverflow Stackexchange
Q: How do I undo a 'set arabic' command in Vim? I'm a vim noob who was doing a little exploring. In the terminal I ran :set arabic just out of curiosity.
I ended up with all my code right-justified and flipped backwards (i.e. import became tropmi).
I understand that exiting Vim and restarting will undo those changes. I am just interested in knowing what command would reverse those changes without me having to close and open Vim.
A: I believe the proper incantation is :set noarabic. See http://vimdoc.sourceforge.net/htmldoc/arabic.html
| Q: How do I undo a 'set arabic' command in Vim? I'm a vim noob who was doing a little exploring. In the terminal I ran :set arabic just out of curiosity.
I ended up with all my code right-justified and flipped backwards (i.e. import became tropmi).
I understand that exiting Vim and restarting will undo those changes. I am just interested in knowing what command would reverse those changes without me having to close and open Vim.
A: I believe the proper incantation is :set noarabic. See http://vimdoc.sourceforge.net/htmldoc/arabic.html
A: :set noarabic
All Boolean flags can be turned off by prefixing them with no.
Further, since you are new to Vim:
*
*You can get the status of a flag using ?: :set arabic?
*Toggle the flag using !: :set arabic!
| stackoverflow | {
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:874164",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44570854"
} |