id (string, 40 chars) | text (string, 29–2.03k chars) | original_text (string, 3–154k chars) | subdomain (20 classes) | metadata (dict)
---|---|---|---|---
b17b8c82eff2bdd0320ca79d0237a1dc2ad2298a | Stackoverflow Stackexchange
| Q: Caching responses in DRF I have a REST API and some of the endpoints take a significant time to generate the responses, so I want to add some response caching and ETag support. I have looked at the conditional response implementation in Django and at both response caching and conditional responses in the DRF extensions package. The problem I am having is that my data changes very frequently on one side, but is also heavily segregated, so if something changes in the response to user A calling endpoint X, nothing might change for users B, C and D calling the same endpoint. Since my data changes often, if I invalidate all responses on every change, I will never hit the cache. The endpoints in question all generate lists of JSON objects, so the question is: how can I invalidate only the cached responses that contain a changed object, rather than invalidating all of them?
| stackoverflow | {
"language": "en",
"length": 152,
"provenance": "stackexchange_0000F.jsonl.gz:866653",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547911"
} |
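A sketch of the per-user invalidation idea the question is reaching for — this is not DRF's or drf-extensions' actual API; the helper and the `updated_at` field are hypothetical illustrations. The ETag is derived from the user id plus the newest modification time in that user's slice of data, so a change for user A rolls only A's tag forward:

```python
import hashlib
from datetime import datetime

def etag_for_user(user_id, objects):
    """Compute an ETag that changes only when this user's slice of data changes.

    `objects` is the list the endpoint would serialize for this user; each item
    is assumed (hypothetically) to carry an `updated_at` timestamp. Because the
    tag folds in only this user's newest modification time, an edit in user A's
    data invalidates A's cached response while B, C and D keep hitting theirs.
    """
    latest = max((o["updated_at"] for o in objects), default=datetime.min)
    raw = f"{user_id}:{latest.isoformat()}"
    return hashlib.sha256(raw.encode()).hexdigest()
```

Any response cache keyed on this value needs no explicit invalidation at all: a stale entry is simply never looked up again once the tag moves.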
8e1210b73e9374bf6ebb4033e8e52db2bb42d51b | Stackoverflow Stackexchange
| Q: How to select all text up to the cursor? I occasionally look through logs or other data where I'm only interested in a small piece of a very large amount of data. I usually just drop it into Sublime Text and then cut out all the data I'm not interested in. How can I select all the text from the beginning up to my cursor? And all the text from my cursor to the end of the file? That would make reviewing data a little easier. I'd prefer an answer that works in Sublime Text 3, but other lightweight text editors would be useful too if it can't be done in Sublime Text 3.
A: Mac:
Select all text above your cursor: command+shift+up.
Select all text below your cursor: command+shift+down.
Windows:
Select all text above your cursor: ctrl+shift+home.
Select all text below your cursor: ctrl+shift+end.
*This works in all versions of Sublime Text
| stackoverflow | {
"language": "en",
"length": 154,
"provenance": "stackexchange_0000F.jsonl.gz:866673",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44547992"
} |
0064246b189f9bb333080bb0614c9fb3084083d5 | Stackoverflow Stackexchange
| Q: Scale for colour is already present, overlay plots in ggplot2 I'm attempting to overlay two plots with ggplot2. I don't have any problems doing it with scale_x_manual (as described in another question), but it crashes when I have to use scale_color_gradient...
My code is:
ggplot() +
geom_point(data = mtcars, aes(x = mpg, y = qsec, color = qsec)) +
scale_color_gradient(low = "gray", high = "blue") +
geom_point(data = mtcars, aes(x = drat, y = wt, color = wt)) +
scale_color_gradient(low = "gray", high = "red")
But it prompts this message and only keeps one scale:
Scale for 'colour' is already present.
Adding another scale for 'colour', which will replace the existing scale.
I've tried different options (scale_color_manual, scale_fill_gradient, scale_color_gradient2 or scale_color_gradientn...) but nothing seems to work. Please, any suggestions? Thanks for your help!
| stackoverflow | {
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:866683",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548023"
} |
7f39ca9dcf72fd0bac5fc5b4a9adb41503b7723b | Stackoverflow Stackexchange
| Q: How do I expose ports on Heroku with a Dockerfile? I am trying to deploy a Docker image on Heroku and am trying to understand how to expose multiple ports. Here is the Docker command that I am trying to run in the Heroku deploy:
docker run \
-p 2222:22 \
-p 33306:3306 \
-p 27017:27017 \
-p 28015:28015 \
-p 29015:29015 \
-p 8080:8080 \
test/db-migration
How do I do this in Heroku?
A: You can't - you should use the $PORT environment variable, which will be randomly assigned and then mapped to port 80 by the Heroku routers. Also, only HTTP requests are accepted. See https://devcenter.heroku.com/articles/container-registry-and-runtime#dockerfile-commands-and-runtime for more details.
A: You may want to look at Dockhero add-on. It's a good way to deploy supplementary resources alongside your Heroku app, and it supports docker-compose with multi-port mapping. The web app itself should still be running on Heroku dynos.
| stackoverflow | {
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:866697",
"question_score": "15",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548074"
} |
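A minimal Python illustration of the accepted answer's advice — the handler class and the 8080 local fallback are our own choices, not anything Heroku mandates; the only real rule is that the single server you run must bind to `$PORT`:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def resolve_port(environ=os.environ):
    """Return the port Heroku injected via $PORT; only that port is routed.

    The 8080 fallback is purely for local runs and is an assumption of this
    sketch, not a Heroku default.
    """
    return int(environ.get("PORT", "8080"))

class Ping(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to the router's forwarded HTTP request.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("", resolve_port()), Ping).serve_forever()
```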
0f917f09a71ac0b5da71c29ac51da5522242d99a | Stackoverflow Stackexchange
| Q: Create long running .NET Core service targeting Linux Docker container There appear to be two types of substantive .NET Core project: ASP.NET Core Web App and Console App. I would like to build something like a Windows Service in a Docker environment (Linux container), where the process starts, runs indefinitely, and stops only when told to. Neither project type seems suitable. Am I missing something?
A: Both types of applications make sense, it depends how you plan on communicating with this service.
If you want to communicate with it over standard HTTP on some TCP port, then using an ASP.Net Core Web Application will make things easier.
If you want to communicate over something a bit more "exotic" like RabbitMQ, Kafka, raw TCP sockets or something else, then a Console Application is what you want. The trick, as Gareth Luckett's answer points out, is to just make sure that your main function blocks. A running Docker container expects the main thread to block as long as the container should be running.
A: The term "console" might be a bit misleading here. Microsoft uses it to distinguish it from "GUI" apps (like WinForms, WPF, UWP, Xamarin etc.) or web applications that are brought through IIS. ASP.NET Core applications are just console apps with libraries to host a web server.
So for your app, a "console" is the project type you want. As has been mentioned by @mason, even Windows Services are just "console" applications - an .exe file that isn't a GUI application.
A: Unfortunately, as a console application requires stdin when running, through Docker it will exit right away. You can 'host' it using ASP.NET.
public class Program
{
    public static ManualResetEventSlim Done = new ManualResetEventSlim(false);

    public static void Main(string[] args)
    {
        //This is unbelievably complex because .NET Core Console.ReadLine() does not block in a docker container...!
        var host = new WebHostBuilder().UseStartup(typeof(Startup)).Build();
        using (CancellationTokenSource cts = new CancellationTokenSource())
        {
            Action shutdown = () =>
            {
                if (!cts.IsCancellationRequested)
                {
                    Console.WriteLine("Application is shutting down...");
                    cts.Cancel();
                }
                Done.Wait();
            };
            Console.CancelKeyPress += (sender, eventArgs) =>
            {
                shutdown();
                // Don't terminate the process immediately, wait for the Main thread to exit gracefully.
                eventArgs.Cancel = true;
            };
            host.Run(cts.Token);
            Done.Set();
        }
    }
}
The Startup class:
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<IServer, ConsoleAppRunner>();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
    }
}
The ConsoleAppRunner class:
public class ConsoleAppRunner : IServer
{
    /// <summary>A collection of HTTP features of the server.</summary>
    public IFeatureCollection Features { get; }

    public ConsoleAppRunner(ILoggerFactory loggerFactory)
    {
        Features = new FeatureCollection();
    }

    /// <summary>Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.</summary>
    public void Dispose()
    {
    }

    /// <summary>Start the server with an application.</summary>
    /// <param name="application">An instance of <see cref="T:Microsoft.AspNetCore.Hosting.Server.IHttpApplication`1" />.</param>
    /// <typeparam name="TContext">The context associated with the application.</typeparam>
    public void Start<TContext>(IHttpApplication<TContext> application)
    {
        //Actual program code starts here...
        Console.WriteLine("Demo app running...");
        Program.Done.Wait(); // <-- Keeps the program running - The Done property is a ManualResetEventSlim instance which gets set if someone terminates the program.
    }
}
Source: https://stackoverflow.com/a/40549512/2238275
| stackoverflow | {
"language": "en",
"length": 504,
"provenance": "stackexchange_0000F.jsonl.gz:866703",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548088"
} |
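The blocking-main-thread trick in the answers above is not C#-specific. As a point of comparison, here is the same pattern sketched in Python — the signal names are standard POSIX/Python, everything else is illustrative:

```python
import signal
import threading

# Analogue of the C# answer's ManualResetEventSlim: main blocks until it is set.
done = threading.Event()

def _handle_shutdown(signum, frame):
    # Docker sends SIGTERM on `docker stop`; releasing the event lets main exit.
    done.set()

def run_service():
    """Block the main thread until a shutdown signal arrives.

    The container keeps running exactly as long as this wait is held, which is
    the same guarantee the C# Program.Done.Wait() call provides.
    """
    signal.signal(signal.SIGTERM, _handle_shutdown)
    signal.signal(signal.SIGINT, _handle_shutdown)
    done.wait()
    return "shutting down"
```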
4277c94bd1ab1b5732410ef4f70a8ec35498685b | Stackoverflow Stackexchange
| Q: Prevent app language change after locale changes My app supports 3 languages and the user can choose between these options.
When I change the device locale to one of my supported languages, the UI language automatically changes. The problem is that I don't want it to change if the user has already chosen one of my 3 supported languages within the app.
How can I prevent this automatic language change?
A: Save the locale to SharedPreferences when the user selects it in the app, and provide a default supported locale in your code.
String userLocale = App.getSharedPreferences().getString(SELECTED_LOCALE,"your_default_string");
In your Application class's onCreate method, add this block:
Locale locale = new Locale(userLocale);
Resources resources = getResources();
Configuration configuration = resources.getConfiguration();
configuration.setLocale(locale);
A: What I did is update a flag on my preferences to indicate if the user changed the language and check its value on Activity's onResume() to restart it if it did.
On Activity's onCreate() I update the language to the stored preference.
| stackoverflow | {
"language": "en",
"length": 159,
"provenance": "stackexchange_0000F.jsonl.gz:866723",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548140"
} |
4664b3eaca4869957a9f50e18b69e6033dd91ae6 | Stackoverflow Stackexchange
| Q: "The Azure PowerShell session has not been properly initialized" error message in Octopus I am trying to run the Get-AzureRmEventHubNamespaceKey cmdlet in an Azure Powershell step within Octopus.
I am getting the following error:
Get-AzureRmEventHubNamespaceKey : The Azure PowerShell session has not been properly
initialized. Please import the module and try again
The module is installed in the following directory on the Octopus server:
C:\Program Files (x86)\Microsoft
SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager\AzureRM.EventHub
I have tried importing the module first as part of the same step:
Import-Module –Name "C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager\AzureRM.EventHub" -Verbose
And I can see in the output that it has been imported:
VERBOSE: Importing cmdlet 'Get-AzureRmEventHubNamespaceKey'.
But it is immediately followed by the above error. If I RDP to the octopus server and run directly from there it runs fine.
Any ideas on what might be causing this?
A: To use any Azure-related commands from your machine, you need to log in first.
Note that there are several Azure modules, and each has a different login cmdlet, but the link above is specific to the module you're using.
| stackoverflow | {
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:866734",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548168"
} |
e33c798e4cef9189151b98965d9b7c13962dbb49 | Stackoverflow Stackexchange
| Q: how to fix the low quality of tkinter render On Mac, all widgets and canvas items appear of high quality on Retina display. However, on Windows 4K display, Tkinter has poor quality, and renders unnecessarily badly (pixelated) as if from 2009.
How do I fix the quality of Tkinter on Windows 10?
I have tried using scaling, but this just makes all sorts of elements all sorts of different sizes.
A: What I usually do is import the ctypes module and call ctypes.windll.shcore.SetProcessDpiAwareness(True). This will make the window render at a higher quality.
Hope it works for you!
| stackoverflow | {
"language": "en",
"length": 101,
"provenance": "stackexchange_0000F.jsonl.gz:866739",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548176"
} |
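The one-liner from the answer, wrapped in a guard so the same script still runs on macOS/Linux, where `windll.shcore` does not exist — a defensive sketch, not official tkinter API:

```python
import ctypes
import sys

def enable_hidpi():
    """Ask Windows for DPI awareness so tkinter renders crisply on 4K displays.

    Returns True when the shcore call succeeded, and False on non-Windows
    platforms or old Windows builds that lack shcore. Call this before
    creating the tk.Tk() root window.
    """
    if sys.platform != "win32":
        return False
    try:
        ctypes.windll.shcore.SetProcessDpiAwareness(1)  # 1 = system DPI aware
        return True
    except (AttributeError, OSError):
        return False
```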
dc431d669b83ae80915fc2b52c36be6af31bd644 | Stackoverflow Stackexchange
| Q: Minimum K values of a tensor? I am aware of an implementation to compute the inverse (i.e. tf.nn.top_k computes the K largest values along a given axis), but I can't find an in-built method to find the K smallest values.
Aside from taking the inverse or playing around with sets, is it possible to do this using the tensorflow library, or will I have to devise something myself?
Cheers
A: No, there is no single function that you can use. There is no problem using tf.nn.top_k() for this purpose. Just negate the argument:
-tf.nn.top_k(-A) will do the same as tf.negative(tf.nn.top_k(tf.negative(A)))
| stackoverflow | {
"language": "en",
"length": 101,
"provenance": "stackexchange_0000F.jsonl.gz:866759",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548227"
} |
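The negation identity in the answer is easy to verify outside TensorFlow. Here it is in plain Python, with `sorted()` standing in for the values output of tf.nn.top_k:

```python
def top_k(values, k):
    """Largest k values in descending order -- a stand-in for tf.nn.top_k."""
    return sorted(values, reverse=True)[:k]

def bottom_k(values, k):
    """Smallest k values via the answer's trick: negate, take top-k, negate back."""
    return [-v for v in top_k([-v for v in values], k)]

print(bottom_k([5, 1, 4, 2, 3], 2))  # -> [1, 2]
```

Note that the negated result comes back in ascending order, which is usually what you want for "smallest k" anyway.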
5a32cf260bde1e0f5c527805bd6097cd38360958 | Stackoverflow Stackexchange
| Q: Enable warning for Unused Enum fields in Intellij When IDEA has the following code:
final public static String unused="";
It will show "unused" in grey with a squiggly underline and a tooltip that says "Field 'unused' is never used".
However this code:
enum MyEnum {
    UNUSED
}
does not show the squiggle. I can run Analyze|Inspect Code to get an "Unused declaration" message in the "Inspection Results".
Is there a way to make IDEA find the unused fields of an enum automatically when opening the code in the editor?
A: As said in here go to Settings|search for unused declaration and under Java click on that. On the right, there are all available things you can do with it.
A: There might be something else going on. Please check if somewhere in your code you call MyEnum.values()
According to this IDEA bug, it is by special request that, in that case, all enum members are considered used. That's a double-edged sword, as in some cases it's a smell not to have the enum constant referenced in the code.
| stackoverflow | {
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:866772",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548281"
} |
e832cf1227977b5b0d32057db26f410db7b7f053 | Stackoverflow Stackexchange
| Q: Setting up a private PyPI package? I am planning to create a Python package which can only be used by my team at my university. I can host it on my university server. Is there any reference, guide, or tutorial for doing this? I have developed pip packages previously, but they were pushed to the public index.
The idea is to put the code on GitHub (my university's enterprise instance) and point pip at the git repo.
A: Maybe just pointing the dependencies (requirements.txt/setup.py) of the packages that depend on this private package to that package's private GitHub repo URL is enough. Add a line like this to your requirements.txt:
-e git+ssh://git@github.com/example/example.git#egg=example
| stackoverflow | {
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:866773",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548282"
} |
d4fd31ef941d86e3e6f1f4ba6dc714b6a8cb0aec | Stackoverflow Stackexchange
| Q: How to get all Set-Cookie headers with Postman My app returns two Set-Cookie headers: JSESSIONID and AWSELB.
When I write test in Postman and use postman.getResponseHeader("Set-Cookie") it only returns AWSELB.
Any idea how can I get JSESSIONID?
EDIT:
The accepted answer solved it in one way; now I have the same issue but with sending two headers with the same key.
I should be able to send multiple 'Set-Cookie' headers, but when I do that it looks like only the last one is being sent; the first one is overridden.
A: It seems that getResponseHeader contains only the last header, so it is not really useful when dealing with cookies.
I would rather suggest you try
getResponseCookie
For example:
tests["Should contain JSESSIONID cookie"] = postman.getResponseCookie('JSESSIONID').value === 'abcdef';
Hope this helps!
A: Actually, Postman contains all headers under pm.response.headers.
Its type is HeaderList, but it stores the headers as an Array, and each Header has a key and a value.
So you can loop through pm.response.headers and filter out what you need by either value or key.
//filter by header key
pm.response.headers
.filter(header=>header.key.includes("whatever you are looking for"))
.map(f=>console.log( f.value))
//filter by header value
pm.response.headers
.filter(header=>header.value.includes("whatever you are looking for"))
.map(f=>console.log( f.value))
A: I used the method below to get cookies from the response headers:
const Cookie = require('postman-collection').Cookie;
const oResponseHeaders = pm.response.headers;
oResponseHeaders.filter(function(resHeader){
    //console.log("resHeader : ", resHeader);
    let bSetCookieExists = resHeader.key.includes("Set-Cookie");
    if (bSetCookieExists) {
        console.log('cookie : ', resHeader.key.includes("Set-Cookie"));
        let rawHeader = pm.response.headers.get("Set-Cookie");
        let myCookie = new Cookie(rawHeader);
        console.log("myCookie : ", myCookie.toJSON());
        console.log("myCookie name : ", myCookie.name);
        console.log("myCookie value : ", myCookie.value);
    }
})
| stackoverflow | {
"language": "en",
"length": 248,
"provenance": "stackexchange_0000F.jsonl.gz:866781",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548306"
} |
51c2958911cb8607cbf84b9a22a3177d1bd1d0aa | Stackoverflow Stackexchange
| Q: Cannot find module './$data' when building Angular2 application I am trying to compile an Angular2 application and getting this error when I issue ng serve command:
C:\Projects\All\MyAngularApp>ng serve
Cannot find module './$data'
Error: Cannot find module './$data'
at Function.Module._resolveFilename (module.js:469:15)
at Function.Module._load (module.js:417:25)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (C:\Projects\All\MyAngularApp\node_modules\ajv\lib\ajv.js:10:23)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (C:\Projects\All\MyAngularApp\node_modules\schema-utils\dist\validateOptions.js:15:12)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
Is there a good resource that explains how to debug these issues?
This is not the first time we are getting these build errors, but previously we were able to resolve them by running npm install command.
A: It happened to me because our build definition requires node_modules to be checked in. I got three errors because of this, from har-validator\node_modules\ajv\lib, schema-utils\node_modules\ajv\lib, and webpack\node_modules\ajv\lib. I manually deleted ajv\lib from source control (and of course locally), and the build then succeeded.
Hope this helps.
| stackoverflow | {
"language": "en",
"length": 167,
"provenance": "stackexchange_0000F.jsonl.gz:866797",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548359"
} |
3f6074bb2b2fca144d2dac8d8103358865c7cbe1 | Stackoverflow Stackexchange
| stackoverflow | {
"language": "en",
"length": 173,
"provenance": "stackexchange_0000F.jsonl.gz:866832",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548460"
} |
369eb89814e777907ed7ad2bd76baa7e5754e15c | Stackoverflow Stackexchange
Q: Using Methods inside Computed Properties in vueJs I'm trying to call a method inside of a computed property. My code is more complicated, but calling the method doesn't seem to even work in this simple example:
new Vue({
el: '#vue-instance',
data: {
x: 1
},
methods: {
augmented: function(variable) {
return (2 * variable);
},
},
computed: {
doubleX: function() {
return augmented(this.x);
}
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.16/vue.min.js"></script>
<div id="vue-instance">
<input type="number" v-model="x"> result: {{ doubleX }}
</div>
As you can see by running the snippet, the value of doubleX is not getting rendered.
A: You need to reference your component's methods via this:
var vm = new Vue({
el: '#vue-instance',
data: {
x: 1
},
methods: {
augmented: function(variable) {
return (2 * variable);
},
},
computed: {
doubleX: function() {
return this.augmented(this.x);
}
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.3.4/vue.min.js"></script>
<div id="vue-instance">
<input type="number" v-model="x"> result: {{ doubleX }}
</div>
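The underlying rule is plain JavaScript, not anything Vue-specific: the method lives on the instance, not in the enclosing lexical scope. A minimal sketch without Vue (the object shape is invented for illustration):

```javascript
// Minimal sketch without Vue: the method is a property of the object,
// not a free variable in scope, so a getter must reach it through `this`.
const vm = {
  x: 1,
  augmented(variable) {
    return 2 * variable;
  },
  get doubleX() {
    // Writing `augmented(this.x)` here would throw a ReferenceError,
    // because no variable named `augmented` exists in scope.
    return this.augmented(this.x);
  },
};

console.log(vm.doubleX); // 2
```

Vue behaves the same way: methods and data are proxied onto the component instance, which is why `this.augmented` works while the bare name does not.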
| stackoverflow | {
"language": "en",
"length": 150,
"provenance": "stackexchange_0000F.jsonl.gz:866874",
"question_score": "20",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548585"
} |
00127f789433df0c26c6e84e3378b4af4218a29e | Stackoverflow Stackexchange
Q: disable interactive mode in docker Before pushing/publishing/sharing a docker image, I would like to disable interactive mode or password-protect logging into the container. Is there an option to do so?
The use case is that one should be able to run the app via docker run or exec in detached mode only
docker exec -d ubuntu_bash touch /tmp/execWorks
but can not do
docker run -ti ubuntu bash
I could not find it in the docker docs so far.
A: One solution would be to completely remove shell from the image:
docker exec :id -it /bin/rm -R /bin/*
That gets rid of sh and any useful command in /bin on Linux. I do not know if it is possible to regain access at this point. Another aspect to keep in mind is that someone might still be able to use a memory debugger to get environment variables of the running container, but this makes it that much more difficult.
Last but not least if you would like to keep sensitive information from users and allow some kind of access check out:
https://docs.docker.com/engine/swarm/secrets/
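A less destructive alternative to deleting /bin at runtime is to build the image from a base that ships no shell at all, so `docker run -ti <image> bash` has nothing to attach to. A sketch (the base image tag, binary name, and paths are illustrative assumptions, not from the original answer):

```dockerfile
# Distroless-style base images contain no shell, so interactive
# `docker run -ti <image> bash` fails by construction.
FROM gcr.io/distroless/base-debian11
COPY myapp /app/myapp
ENTRYPOINT ["/app/myapp"]
```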
| stackoverflow | {
"language": "en",
"length": 177,
"provenance": "stackexchange_0000F.jsonl.gz:866883",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548607"
} |
ad98d7f69877964e66cb5d6b39cb340bdf5fc70a | Stackoverflow Stackexchange
Q: JSDoc ignores some functions I have functions in this style:
/**
* Here is my jsdoc comment
*/
Controller.add = function(req, res, next){}
My problem is that jsdoc ignores this comments. I just get a documentation for functions like this:
/**
* Here is my jsdoc comment (which works fine)
*/
function add(req, res, next){}
Am I missing a configuration? The documentation doesn't give me useful information.
Thanks
A: Just add in your comment a @alias
In your example
/**
* Here is my jsdoc comment
* @alias add
*/
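Putting the question's method style together with the @alias fix gives something like the following sketch (the Controller object and the @param tags are illustrative additions, not from the original post):

```javascript
const Controller = {};

/**
 * Here is my jsdoc comment
 * @alias add
 * @param {object} req - request
 * @param {object} res - response
 * @param {function} next - next middleware
 */
Controller.add = function (req, res, next) {
  // handler body omitted
};

console.log(typeof Controller.add); // "function"
```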
| stackoverflow | {
"language": "en",
"length": 92,
"provenance": "stackexchange_0000F.jsonl.gz:866912",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548686"
} |
628c66afcc98f26faa8c3dfea795fbaac7d15fe0 | Stackoverflow Stackexchange
Q: loadFixtures method in jest.js Is there an equivalent to the loadFixtures method in jest? I can't seem to find anything in the jest docs. I'd like to be able to load an HTML fixture into a jest test file.
Or is there another way of doing an HTML stub in jest that I'm missing?
Note I'm not using React as it's an old project with jQuery.
So instead of writing something like.
window.$ = require('jquery');
beforeEach(() => {
document.body.innerHTML =
'<div>' +
' <input id="exMonth" value="02" />' +
    '  <input id="exYear" value="2017" />' +
    '</div>';
});
test("exMonth should be 02", () =>{
  expect($('#exMonth').val()).toBe('02');
});
I'd like to abstract my html out to a html fixture file and require it to the
document.body.innerHTML = require(myHtmlFixture.html)
A: You could use node's fs for serverside tests.
var fs = require('fs');
var htmlFixture = fs.readFileSync('spec/fixtures/myFixture.html', 'utf8');
document.body.innerHTML = htmlFixture;
see: https://nodejs.dev/learn/reading-files-with-nodejs
| stackoverflow | {
"language": "en",
"length": 146,
"provenance": "stackexchange_0000F.jsonl.gz:866929",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548730"
} |
8e71b85b9ab1c47251a49effd481fd8ff5652fd0 | Stackoverflow Stackexchange
Q: When to use package-lock.json and shrinkwrap.json Should package-lock.json also be published?
What is the difference between npm-shrinkwrap.json and package-lock.json?
After reading the above one question remains. When to use what where.
When writing a node module which will be published to the npm registry (so others can npm install it), npm-shrinkwrap.json should be used, since it can be published.
When writing a node module which you will use in production for your company etc., which will not be published to the npm registry, package-lock.json should be used.
Tbh reading the other questions might give people insight into the mechanics, but for those who want a simple view of how to use them, I must ask this straightforward question.
| stackoverflow | {
"language": "en",
"length": 120,
"provenance": "stackexchange_0000F.jsonl.gz:866930",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548732"
} |
449e3fb95f62b74fda1196c6b123ab132f59eb09 | Stackoverflow Stackexchange
Q: The reference assemblies for framework ".NETFramework,Version=v4.6.2" were not found When trying to compile a solution, I get the following build error:
Error MSB3644 The reference assemblies for framework
".NETFramework,Version=v4.6.2" were not found. To resolve this,
install the SDK or Targeting Pack for this framework version or
retarget your application to a version of the framework for which you
have the SDK or Targeting Pack installed. Note that assemblies will be
resolved from the Global Assembly Cache (GAC) and will be used in
place of reference assemblies. Therefore your assembly may not be
correctly targeted for the framework you intend.
C:\RPR\Dev\Libraries\Common\Common.csproj C:\Program Files
(x86)\Microsoft Visual
Studio\2017\Community\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets 1111
I've tried installing the .NET Framework 4.6.2 SDK, as well as the 4.6 Targeting Pack, however both error that I already have it installed. I also tried installing Visual Studio 2017 but it still gives the same error.
Any ideas?
A: For 4.7.2 issue I have to go here: https://dotnet.microsoft.com/download/dotnet-framework/net472
Install the Download .NET Framework 4.7.2 Developer Pack as displayed in the image to fix the issue.
A: Starting May 2019 you can build your project targeting net20 up to net48 (including net461) on any machine with at least MSBuild or the .NET Core SDK installed, without the need for a Developer Pack.
If the .NET Core SDK is installed on your machine, add the NuGet package Microsoft.NETFramework.ReferenceAssemblies to your project:
<ItemGroup>
<PackageReference Include="Microsoft.NETFramework.ReferenceAssemblies" Version="1.0.2">
<IncludeAssets>runtime; build; native; contentfiles; analyzers</IncludeAssets>
<PrivateAssets>all</PrivateAssets>
</PackageReference>
</ItemGroup>
The package includes all reference assemblies from net20 up to net48.
These packages enable building .NET Framework projects on any machine with at least MSBuild or the .NET Core SDK installed plus other scenarios.
For more details:
https://github.com/Microsoft/dotnet/tree/master/releases/reference-assemblies
A: I was using Ubuntu and faced the same problem. Even after I had downloaded the latest VS Code and Mono for Ubuntu it was not working. Then I found this.
Basically, if you have installed Mono, go to settings and set
"omnisharp.useGlobalMono": "always".
Hope it fixes your issue.
A: Installing the 4.6.2 Developer Pack did not work for me.
I had to install .NET Framework 4.6 Targeting Pack
A: I was getting the exact same error when building except it was for ".NETFramework,Version=v4.7.1".
I downloaded the Developer pack for 4.7.1 from here: https://www.microsoft.com/en-us/download/confirmation.aspx?id=56119
The pack installed these programs on the target machine (my build server).
*
*Microsoft .NET Framework 4.7.1 SDK
*Microsoft .NET Framework 4.7.1 Targeting Pack
*Microsoft .NET Framework 4.7.1 Targeting Pack (ENU)
When I tried building again, I didn't get the error anymore and the build succeeded.
A: Windows -> Search -> Visual Studio Installer -> Modify -> Individual Components and check the right version
A: It turns out that I had installed the .NET Framework v4.6.2, not the Developer Pack for 4.6.2. Doh!
https://www.microsoft.com/en-us/download/details.aspx?id=53321
A: Check the installed .NET Framework on your development machine; it must be the same version the project file targets. Install the .NET Framework version which the project file targets, then try again; the errors and warnings will disappear.
A: You can find this OmniSharp setting inside the Visual Studio Code C# extension settings, near the bottom.
A: Download the required SDK package via the .NET Framework 4.6.2 Developer Pack download link and install it. Restart the server; the build should now succeed.
You can check your dotnet version with dotnet --info
A: In my case, (I'm embarrassed to admit) I had a website loaded as a project and forgot to set it to No Build.
| stackoverflow | {
"language": "en",
"length": 564,
"provenance": "stackexchange_0000F.jsonl.gz:866952",
"question_score": "152",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548780"
} |
c78654954510514e48c531bdbe81ef9fce769c26 | Stackoverflow Stackexchange
Q: Invalid Input when trying to configure lambda blueprint iot-button-email I'm trying to use the AWS Lambda Blueprint iot-button-email. Apparently it does not accept the serial number, even though I'm quite sure it is correct:
This prevents creating the lambda. What am I doing wrong?
workaround
By creating an empty lambda (i.e. pressing remove) and then creating a rule
from the "thing" in the registry, it is possible to achieve the desired result as well. Maybe the problem was that my thing was already registered?
A: As of today, that bug has been fixed by Amazon!
Configuration should now work fine.
There is a post in the amazon forum:
https://forums.aws.amazon.com/thread.jspa?threadID=257887
where it says it was a technical problem, which is now fixed.
| stackoverflow | {
"language": "en",
"length": 123,
"provenance": "stackexchange_0000F.jsonl.gz:866958",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548802"
} |
1c410bda66076b0a9bfdb17f19aa63cb679267cf | Stackoverflow Stackexchange
Q: Can I run JetBrains dotCover in a Linux Docker container? I'm building and running a .NET Core application inside a Linux-based Docker container. I'm having trouble figuring out if dotCover is cross-platform? I want to either...
*
*Install and run dotCover inside the Docker container (that is $ dotcover analyse ...).
*Or run some compatible instrumentation during the test step inside the Docker container and send a file back to the host, where I can run dotCover on the file.
Obviously, I'm already using the dotCover "server package". It doesn't indicate that it's cross-platform. So, maybe I'm stuck trying to find another way.
A: I don't mean to resurrect an old post, but I'm going to for anyone landing on this from the internet.
Currently no; none of the ReSharper products from JetBrains are cross-platform. Windows is a requirement under the system requirements page.
https://www.jetbrains.com/resharper/download/system_requirements.html
This means that it is not readily available to run in a linux-based container or linux environment.
A: Resurrecting this one more time for the next reader. Jetbrains released a cross-platform console runner.
https://blog.jetbrains.com/dotnet/2019/08/26/cross-platform-dotcover-console-runner-whats-new-dotcover-2019-2/
| stackoverflow | {
"language": "en",
"length": 182,
"provenance": "stackexchange_0000F.jsonl.gz:866991",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548890"
} |
7b0d9371c0ec755e19cd3daa1637012e312f8960 | Stackoverflow Stackexchange
Q: Plotting pandas dataframe with two groups I'm using Pandas and matplotlib to try to replicate this graph from tableau:
So far, I have this code:
group = df.groupby(["Region","Rep"]).sum()
total_price = group["Total Price"].groupby(level=0, group_keys=False)
total_price.nlargest(5).plot(kind="bar")
Which produces this graph:
It correctly groups the data, but is it possible to get it grouped similar to how Tableau shows it?
A: You can create some lines and labels using the respective matplotlib methods (ax.text and ax.axhline).
import pandas as pd
import numpy as np; np.random.seed(5)
import matplotlib.pyplot as plt
a = ["West"]*25+ ["Central"]*10+ ["East"]*10
b = ["Mattz","McDon","Jeffs","Warf","Utter"]*5 + ["Susanne","Lokomop"]*5 + ["Richie","Florence"]*5
c = np.random.randint(5,55, size=len(a))
df=pd.DataFrame({"Region":a, "Rep":b, "Total Price":c})
group = df.groupby(["Region","Rep"]).sum()
total_price = group["Total Price"].groupby(level=0, group_keys=False)
gtp = total_price.nlargest(5)
ax = gtp.plot(kind="bar")
#draw lines and titles
count = gtp.groupby("Region").count()
cum = np.cumsum(count)
for i in range(len(count)):
title = count.index.values[i]
ax.axvline(cum[i]-.5, lw=0.8, color="k")
ax.text(cum[i]-(count[i]+1)/2., 1.02, title, ha="center",
transform=ax.get_xaxis_transform())
# shorten xticklabels
ax.set_xticklabels([l.get_text().split(", ")[1][:-1] for l in ax.get_xticklabels()])
plt.show()
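It can also help to inspect the grouped series before plotting. A small self-contained sketch of just the nlargest-per-group step (the toy data is invented for illustration):

```python
import pandas as pd

# Toy data standing in for the real dataframe.
df = pd.DataFrame({
    "Region": ["West"] * 4 + ["East"] * 4,
    "Rep": list("ABCDEFGH"),
    "Total Price": [10, 40, 30, 20, 5, 25, 15, 35],
})

group = df.groupby(["Region", "Rep"]).sum()
# Top 2 reps per region, keeping the (Region, Rep) MultiIndex
# so the plotting/labeling code above can read the group names from it.
top2 = group["Total Price"].groupby(level=0, group_keys=False).nlargest(2)
print(top2)
```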
| stackoverflow | {
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:866999",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548913"
} |
c3a4bbef00917a59d7c35d2ea45fd39926530de6 | Stackoverflow Stackexchange
Q: Unable to connect to GitHub API We are getting an error "Unable to connect to GitHub API: org.kohsuke.github.HttpException: Server returned HTTP response code: -1, message: 'null' for URL: https://github.xxx.com/api/v3/user" when trying to use github pull request builder in jenkins
A: You may need to add your Certificate Authority cert to the java keytool.
If you look in your jenkins log and find something like this:
org.kohsuke.github.HttpException: Server returned HTTP response code: -1, message: 'null' for URL: https://github.xxx.com/api/v3/user
Scroll down and see if there is a line like this:
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
This error is saying that the SSL handshake failed with something about the PKIX path/certpath. Try adding your CA Cert to the keytool and restarting Jenkins to see if that helps.
Here's the post that helped me modify the java certs with the keytool. (the default keytool password is "changeit")
A: You can also try installing the skip certificate check plugin, in plugin manager.
A: It seems your Java cacerts does not have the correct certificate for your Git URL. You may try the following steps.
Step 1 : Get root certificate of https://www.google.com
*
*Open https://www.google.com in a chrome browser.
*Select Inspect from context menu(right clicking on page) and navigate to security tab
*Click on view certificates
*Click the topmost certificate in the hierarchy and confirm its name ends with a "Root CA" phrase.
*Drag and drop that certificate image onto your desktop.
That's it! You now have your root certificate!
Step 2 : install certificate to your java cacerts
Please verify you have the system variable JAVA_HOME declared; you will perform these steps on that JRE's cacerts only!
*
*Navigate to cacerts by JAVA_HOME/jre/lib/security/cacerts
*Download and install KeyStore Explorer; it is available for all platforms
*Open cacerts in that tool and import the certificate via the "Import Trusted Certificate" button.
*Save your changes (you may run into an issue on macOS if you do not have write access!)
Step 3 : Restart jenkins
You should not get the SSL handshake problem from now on.
| stackoverflow | {
"language": "en",
"length": 349,
"provenance": "stackexchange_0000F.jsonl.gz:867007",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548942"
} |
5d67666259389951cae36a5ac0f9dc5746495f66 | Stackoverflow Stackexchange
Q: Debugging Elixir phoenix.server What I am looking for is some ideas on how to debug mix phoenix.server
When I run the command there is no output and it hangs (doesn't finish and show the cmd prompt). I've tried:
iex -S mix phoenix.server
this opens up the elixir session but at that point I'm unsure what to do next. I was hoping to see something verbose that showed me where specifically the server start was stopping. I tried:
mix phoenix.server --verbose
and that didn't work, of course. At this moment I'm struggling to figure out what the right approach to this is.
A: In your code you need to require the IEx module and place an IEx.pry where you want to debug:
defmodule MyModule do
require IEx
def my_function do
IEx.pry
end
end
then run your phoenix server in an IEx context:
iex -S mix phx.server
A: Try modifying your dev.exs file and set your logger level to debug.
| stackoverflow | {
"language": "en",
"length": 159,
"provenance": "stackexchange_0000F.jsonl.gz:867018",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44548972"
} |
d4936b524b28831b1237f3fdb046f25a50cea525 | Stackoverflow Stackexchange
Q: skip characters/numbers in JAVA How do I skip a number or a character (and likewise a combination of two numbers or characters, together or apart) while those characters/numbers exist within a row of numbers or a word?
example (not lists):
I want to skip the number 3 (in any position) when I count from 2 to 14
the result would be
2,4,5,6,7,8,9,10,11,12,14
another would be skipping any number in which both 3 and 1 appear, in any positions, as long as both exist
these two examples would apply for the characters also.
What I was doing was
for(int i = startingNum; i <= endingNum; i++){
if(i "has a" 3){
skip number;
}
else{
counter++;
}
}
combination of numbers
for(int i = startingNum; i <= endingNum; i++){
if((i "has a" 3) AND (i "has a " 1)){
skip number;
}
else{
counter++;
}
}
at the character one I'm completely lost...
A: One way would be to convert the number to a string and check if it contains that number as a substring:
for(int i = startingNum; i <= endingNum; i++) {
if (!String.valueOf(i).contains("3")) { // Here
counter++;
}
}
| Q: skip characters/numbers in JAVA How do I skip a number or a character (and the same for two numbers or characters, together or apart) while those characters/numbers exist within a row of numbers or a word?
example (not lists):
I want to skip the number 3 (in any position) when I count from 2 to 14
the result would be
2,4,5,6,7,8,9,10,11,12,14
another would be skipping the number 31 in any combination where 3 and 1 come out as long as both exist
these two examples would apply for the characters also.
What I was doing was
for(int i = startingNum; i <= endingNum; i++){
if(i "has a" 3){
skip number;
}
else{
counter++;
}
}
combination of numbers
for(int i = startingNum; i <= endingNum; i++){
if((i "has a" 3) AND (i "has a " 1)){
skip number;
}
else{
counter++;
}
}
at the character one I'm completely lost...
A: One way would be to convert the number to a string and check if it contains that number as a substring:
for(int i = startingNum; i <= endingNum; i++) {
if (!String.valueOf(i).contains("3")) { // Here
counter++;
}
}
A: One of the approaches is to use the result of parsing, for example:
public static Integer isParsable(String text) {
try {
return Integer.parseInt(text);
} catch (NumberFormatException e) {
return null;
}
}
...
import java.io.IOException;
public class NumChecker {
static String[] str = new String[]{"2", "4", "a", "sd", "d5", "6", "7", "8", "a1", "3", "10", "11", "12", "14"};
static int startingNum = 2;
static int endingNum = 10;
static int counter = 0;
static int mark = 3;
public static Integer isParsable(String text) {
try {
return Integer.parseInt(text);
} catch (NumberFormatException e) {
return null;
}
}
public static void main(String[] args) throws IOException {
for (int i = startingNum; i <= endingNum; i++) {
Integer num = isParsable(str[i]);
if (num != null) {
if (num == mark) {
counter++;
}
}
}
System.out.println(counter);
}
}
OUTPUT:
1
| stackoverflow | {
"language": "en",
"length": 324,
"provenance": "stackexchange_0000F.jsonl.gz:867040",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549040"
} |
c1b9a8377199eb70e38920451ce420e01650cc88 | Stackoverflow Stackexchange
Q: Transforming Web.Config in a Windows Docker container I have a .NET app that uses Octopus to deploy to the server. In this process, Octopus updates some of the values in the Web.Config (API keys, database connection string etc). I'm moving this app into a container on the same server, and the image has been built before Octopus gets anywhere near it.
How do I update the Web.Config based on the environment in which the docker run command is being triggered?
I've found this blog post, which seems to necessitate a config transformation file.
I could also pass everything in as environment variables to the container, but then I'd have to change how the app accesses them, which I don't want to do because there are lots of other apps to be done.
| Q: Transforming Web.Config in a Windows Docker container I have a .NET app that uses Octopus to deploy to the server. In this process, Octopus updates some of the values in the Web.Config (API keys, database connection string etc). I'm moving this app into a container on the same server, and the image has been built before Octopus gets anywhere near it.
How do I update the Web.Config based on the environment in which the docker run command is being triggered?
I've found this blog post, which seems to necessitate a config transformation file.
I could also pass everything in as environment variables to the container, but then I'd have to change how the app accesses them, which I don't want to do because there are lots of other apps to be done.
| stackoverflow | {
"language": "en",
"length": 133,
"provenance": "stackexchange_0000F.jsonl.gz:867065",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549117"
} |
a3240b6f0352b6d5c1daff063bde13dec9841f18 | Stackoverflow Stackexchange
Q: Construct tuple from dictionary values Given a python a list of dictionary of key-value pairs i.e.
[{'color': 'red', 'value': 'high'}, {'color': 'yellow', 'value': 'low'}]
How to construct a list of tuples from the dictionary values only:
[('red', 'high'), ('yellow', 'low')]
A: As simple as it gets:
result = [(d['color'], d['value']) for d in dictionarylist]
| Q: Construct tuple from dictionary values Given a python a list of dictionary of key-value pairs i.e.
[{'color': 'red', 'value': 'high'}, {'color': 'yellow', 'value': 'low'}]
How to construct a list of tuples from the dictionary values only:
[('red', 'high'), ('yellow', 'low')]
A: As simple as it gets:
result = [(d['color'], d['value']) for d in dictionarylist]
A: If order is important then:
[tuple(d[k] for k in ['color', 'value']) for d in data]
Or:
[(d['color'], d['value']) for d in data]
Else without order guarantees or from an OrderedDict (or relying on Py3.6 dict):
[tuple(d.values()) for d in data]
A: For Dynamic List Of Dictionaries
This is what I would go with in this case, I hope I have been of help.
tuple_list = []
for li_item in list_dict:
for k, v in li_item.items():
tuple_list.append((k,v))
Of course there is a one liner option like this one below:
tupples = [
[(k,v) for k, v in li_item.items()][0:] for li_item in list_dict
]
A: To generalize for any class instance where self.__dict__ is defined, you can also use:
tuple([self.__dict__[_] for _,__ in self.__dict__.items()])
A: a = [{'color': 'red', 'value': 'high'}, {'color': 'yellow', 'value': 'low'}]
b = [tuple(sub.values()) for sub in a] # [('red', 'high'), ('yellow', 'low')]
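Another option, not shown in the answers above: operator.itemgetter builds the extraction callable once and keeps the key order explicit. This is a sketch added for illustration, not from the original thread:

```python
from operator import itemgetter

data = [{'color': 'red', 'value': 'high'}, {'color': 'yellow', 'value': 'low'}]

# itemgetter with several keys returns each dict's values as a tuple,
# in exactly the order the keys are listed.
get_pair = itemgetter('color', 'value')
pairs = [get_pair(d) for d in data]
print(pairs)  # [('red', 'high'), ('yellow', 'low')]
```

Unlike `tuple(d.values())`, this does not depend on the dicts' insertion order, so it behaves the same on any Python version.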
| stackoverflow | {
"language": "en",
"length": 201,
"provenance": "stackexchange_0000F.jsonl.gz:867082",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549167"
} |
c9cab275c60b8f9e4268ae021b1ca44fe84882a7 | Stackoverflow Stackexchange
Q: How do I use the custom URL in a Swift app to open another Swift app I have a custom URL set up for a Swift application. I would like to use this URL on another app's button action to deep link. I tried UIApplication.shared.open(NSURL(string: "redirectToApp://Testapp/startPage")! as URL, options: [:], completionHandler: nil) but, it isn't working. Any suggestions?
Update:
redirectToApp://Testapp/startPage opens the app from a Safari.
Thanks!
A: Firstly, you shouldn't use NSURL in Swift3, you should use the native Swift version, URL. On iOS9+ you also have to add LSApplicationQueriesSchemes entries to your Info.plist file in order to be able to open apps using deep links.
For example if you want to open the Uber app, you have to do:
UIApplication.shared.open(URL(string: "uber://")!) from code and add these lines to your Info.plist file:
<key>LSApplicationQueriesSchemes</key>
<array>
<string>uber</string>
</array>
| Q: How do I use the custom URL in a Swift app to open another Swift app I have a custom URL set up for a Swift application. I would like to use this URL on another app's button action to deep link. I tried UIApplication.shared.open(NSURL(string: "redirectToApp://Testapp/startPage")! as URL, options: [:], completionHandler: nil) but, it isn't working. Any suggestions?
Update:
redirectToApp://Testapp/startPage opens the app from a Safari.
Thanks!
A: Firstly, you shouldn't use NSURL in Swift3, you should use the native Swift version, URL. On iOS9+ you also have to add LSApplicationQueriesSchemes entries to your Info.plist file in order to be able to open apps using deep links.
For example if you want to open the Uber app, you have to do:
UIApplication.shared.open(URL(string: "uber://")!) from code and add these lines to your Info.plist file:
<key>LSApplicationQueriesSchemes</key>
<array>
<string>uber</string>
</array>
A: Make sure you write code with error checking / handling so you can figure out what's not working.
Try it like this:
if let url = URL(string: "redirectToApp://Testapp/startPage")
{
if UIApplication.shared.canOpenURL(url)
{
UIApplication.shared.open(url, options: [:], completionHandler: {
(success) in
if (success)
{
print("OPENED \(url): \(success)")
}
else
{
print("FAILED to open \(url)")
}
})
}
else
{
print("CANNOT open \(url)")
}
}
| stackoverflow | {
"language": "en",
"length": 201,
"provenance": "stackexchange_0000F.jsonl.gz:867084",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549174"
} |
ac6744b8d04042121936f8c8b81e90b8d469f9fa | Stackoverflow Stackexchange
Q: How to close menu onNewOptionClick? My Code
<Creatable
name="productType"=
options = {this.state.productOptions}
value = {this.state.productType}
onNewOptionClick = {this.createProductType}
onChange = {this.handleProductChange}
/>
createProductType(option) {
var options = this.state.productOptions;
var label = option.label.charAt(0).toUpperCase() + option.label.slice(1);
options.push({
label: label,
value: option.value
})
this.setState({
productOptions: options,
productType: option.value
})
}
Before I click new option:
After I click new option:
Desired UI state after clicking new option:
Did not know whether to post this as an issue on GitHub, as I am not sure of the exact way of using onNewOptionClick.
A: I was able to solve this by adding a ref
ref={input => this.productSelect = input }
and then calling it so
this.productSelect.select.closeMenu();
This (https://github.com/JedWatson/react-select/issues/1262) provided the final clue which helped me solve this. Thanks.
| Q: How to close menu onNewOptionClick? My Code
<Creatable
name="productType"=
options = {this.state.productOptions}
value = {this.state.productType}
onNewOptionClick = {this.createProductType}
onChange = {this.handleProductChange}
/>
createProductType(option) {
var options = this.state.productOptions;
var label = option.label.charAt(0).toUpperCase() + option.label.slice(1);
options.push({
label: label,
value: option.value
})
this.setState({
productOptions: options,
productType: option.value
})
}
Before I click new option:
After I click new option:
Desired UI state after clicking new option:
Did not know whether to post this as an issue on GitHub, as I am not sure of the exact way of using onNewOptionClick.
A: I was able to solve this by adding a ref
ref={input => this.productSelect = input }
and then calling it so
this.productSelect.select.closeMenu();
This (https://github.com/JedWatson/react-select/issues/1262) provided the final clue which helped me solve this. Thanks.
A: closeMenu() has been deprecated in v2 of React-Select. It has been replaced by blur(). The following worked for me:
// Assign the ref in your Select object
<Select ref={input => this.selectRef = input } ... />
// Later in your code when you are trying to close the menu
this.selectRef.select.blur();
A: Not sure if there have been breaking changes in the library since the first answer, but I had to do this:
ref={input => {
if (input) this.productSelectMenu = input._selectRef.select
}}
Then:
this.productSelectMenu.closeMenu();
| stackoverflow | {
"language": "en",
"length": 206,
"provenance": "stackexchange_0000F.jsonl.gz:867088",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549186"
} |
8f3894c0f0d528ff6c8a189a1a5550fdb1d4e9c3 | Stackoverflow Stackexchange
Q: How to validate dates without leading zero on day and month? I need to validate a precise date format in php.
In my case the optimum date format should be, e.g., 1/1/2017 (without leading zeros), and the code should reject every other date format.
The following code is what i wrote but with no result:
if(preg_match("/([1-9]|[1-2][0-9]|3[0-1]-([1-9]|1[0-2])-^[0-9]{4})$/", $date){
// insert into etc. etc....
}else{
// update etc. etc....
}
The problem is that the code doesn't properly validate the date; it accepts every kind of date format.
A: you can use Carbon:
$date->format('j')
| Q: How to validate dates without leading zero on day and month? I need to validate a precise date format in php.
In my case the optimum date format should be, e.g., 1/1/2017 (without leading zeros), and the code should reject every other date format.
The following code is what i wrote but with no result:
if(preg_match("/([1-9]|[1-2][0-9]|3[0-1]-([1-9]|1[0-2])-^[0-9]{4})$/", $date){
// insert into etc. etc....
}else{
// update etc. etc....
}
The problem is that the code doesn't properly validate the date; it accepts every kind of date format.
A: you can use Carbon:
$date->format('j')
A: Your date delimiter is / and not -, so add \/ to regex for /. And use ^ at the start of regex:
if(preg_match("/^([1-9]|[1-2][0-9]|3[0-1])\/([1-9]|1[0-2])\/([0-9]{4})$/", $date){
A: I noticed that the OP's submitted answer was not bulletproof, so I started to research some of the functions mentioned in the comments under the question and some of my own thoughts.
I agree with apokryfos in that using regex to validate a date expression is not best practice. A pure regex solution is going to be an increasingly verbose and decreasingly comprehensible pattern because it will have to factor leap years both in the past and future. For this reason, I've investigated a few date functions that php has on offer.
*
*date_parse_from_format() as suggested by Alex Howansky. Unfortunately, it will not be a stand-alone solution for this case because the format parameter obeys the syntax/rules of DateTime::createFromFormat():
See how d and j are grouped together, as are m and n. Consequently, this function will catch invalid dates, but not the unwanted leading zeros.
*strptime() as suggested by Casimir et Hippolyte. This function affords matching days without leading zeros using %e. This function does not have a character that matches non-zero-leading month numbers. Furthermore, there can be hiccups with this function based on the operating system.
*checkdate() seems like a logical choice, but again it will not flinch at zero-led numbers. Furthermore, this function requires a bit more data preparation than the others, because it requires the month, day, and year values to be individualized.
*if(!DateTime::createFromFormat('j/n/Y', $date)) suggested by deceze regarding a slightly different scenario will not work in this case. createFromFormat() will go to great lengths to construct a valid date -- even a string like 99/99/9999 will make it through.
In an attempt to create the most concise expression, here is what I can offer as a seemingly bulletproof solution using a DateTime class:
if (($d=DateTime::createFromFormat('n/j/Y',$date)) && $date==$d->format('n/j/Y')){
// valid
}else{
// invalid
}
strtotime() can be used with a single condition for this case because the OP's date format uses slashes as a delimiter and this function will correctly parse the date expression "American-style".
This seems to be the simplest way to check for a valid date without leading zeros: Demo Link
if(date('n/j/Y',strtotime($date))==$date){
// valid
}else{
// invalid
}
If you are dealing with datetime expressions and don't wish to check the time portion, you can call this line before conditional line:
$date = explode(' ', $date)[0]; // create an array of two elements: date and time, then only access date
Demo Link
or
$date = strstr($date, ' ', true); // extract substring prior to first space
A: After several hours finally i found the correct answer. The correct pattern for php is to use single quote (') and not the double quote (") e.g.:
preg_match('/^([1-9]|[1-2][0-9]|3[0-1])\/([1-9]|1[0-2])\/([0-9]{4})/',$date)
this works fine for me... Thanks everyone, especially @Mohammad Hamedani, 'cause the expression was correct but the f******g quote made me go crazy
| stackoverflow | {
"language": "en",
"length": 589,
"provenance": "stackexchange_0000F.jsonl.gz:867104",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549229"
} |
bd2495a7d2d58afb4d3a97ae663b2fb4d8138833 | Stackoverflow Stackexchange
Q: How to make an existing wxPython code run on Mac with conda? I have an existing wxPython code which runs perfectly on Linux and which I want to run on Mac. I have the installation through anaconda on both Linux and Mac.
For Mac, I am getting the error "This program needs access to the screen. Please run with a Framework build of python, and only when you are
logged in on the main display of your Mac."
I installed pythonw through anaconda and I am able to run wx.App() when running python through the location "/Users/vnigam200/anaconda/bin/pythonw".
I am not sure how to use this location for running my existing script. I tried shebang but it doesn't seem to work.
A: On Macs you need to run pythonw for wxPython scripts instead of the default, which is python. This is a known problem with Anaconda that they don't seem willing to fix:
*
*https://groups.google.com/a/continuum.io/forum/#!searchin/anaconda/wxpython$20osx/anaconda/-ZAynUQW5HQ/L8AeqfMWNWwJ
*https://groups.google.com/a/continuum.io/forum/#!searchin/anaconda/osx$20framework/anaconda/1rX3A1Noi9Q/68MNJWLxupYJ
So basically just do the following in Mac's terminal:
pythonw /path/to/your/script.py
Then it should work fine.
| Q: How to make an existing wxPython code run on Mac with conda? I have an existing wxPython code which runs perfectly on Linux and which I want to run on Mac. I have the installation through anaconda on both Linux and Mac.
For Mac, I am getting the error "This program needs access to the screen. Please run with a Framework build of python, and only when you are
logged in on the main display of your Mac."
I installed pythonw through anaconda and I am able to run wx.App() when running python through the location "/Users/vnigam200/anaconda/bin/pythonw".
I am not sure how to use this location for running my existing script. I tried shebang but it doesn't seem to work.
A: On Macs you need to run pythonw for wxPython scripts instead of the default, which is python. This is a known problem with Anaconda that they don't seem willing to fix:
*
*https://groups.google.com/a/continuum.io/forum/#!searchin/anaconda/wxpython$20osx/anaconda/-ZAynUQW5HQ/L8AeqfMWNWwJ
*https://groups.google.com/a/continuum.io/forum/#!searchin/anaconda/osx$20framework/anaconda/1rX3A1Noi9Q/68MNJWLxupYJ
So basically just do the following in Mac's terminal:
pythonw /path/to/your/script.py
Then it should work fine.
| stackoverflow | {
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:867126",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549306"
} |
87910d4c14226e088eea3e4e64719643d1f170d4 | Stackoverflow Stackexchange
Q: Find pickle version installed in my system I would like to know the pickle version installed in my system. Can someone please tell me what the procedure for this is.
A: Python 3
import pickle
print(pickle.format_version)
| Q: Find pickle version installed in my system I would like to know the pickle version installed in my system. Can someone please tell me what the procedure for this is.
A: Python 3
import pickle
print(pickle.format_version)
A: Python 2
import pickle
print pickle.__version__
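For reference, a short sketch (added for illustration) showing what these attributes report on Python 3. format_version is the version of the pickle data-stream format, which is distinct from the protocol number an individual dump uses:

```python
import pickle

# Version of the pickle data-stream format implemented by this module.
print(pickle.format_version)      # e.g. "4.0"

# Newest protocol number this interpreter can write with pickle.dumps.
print(pickle.HIGHEST_PROTOCOL)    # e.g. 5 on Python 3.8+

# The protocol of a given dump is chosen per call, not by the module version.
blob = pickle.dumps([1, 2, 3], protocol=2)
print(pickle.loads(blob))         # [1, 2, 3]
```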
| stackoverflow | {
"language": "en",
"length": 44,
"provenance": "stackexchange_0000F.jsonl.gz:867144",
"question_score": "25",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549353"
} |
439e4092c11a3357807565c0ca29c04dcd9a19a8 | Stackoverflow Stackexchange
Q: Proper obfuscation and minification of Ionic 3/Angular2 app that has lazy-loaded pages and components I have just finished building an Ionic 3 web app and I opted for lazy-loading due to the size of the app itself. I have however run into efficiency problems when it comes to initial app load. The main.js that gets built is 4.3 MB, which can take quite a while to load if the network is slow (I realize this would be a breeze on an actual device). I have resorted to showing a "loading" .gif while I wait for it to load, and I realize this is not actually solving the problem but actually adding to it.
I looked into the main.js and the file would definitely be a lot smaller if it were minified and obfuscated.
I do not have experience with either and the tutorials I find on the net do not cover lazy loaded modules...Most just broke my app.
Can anyone please guide me in the right direction? Or at least tell me if this is doable?
| Q: Proper obfuscation and minification of Ionic 3/Angular2 app that has lazy-loaded pages and components I have just finished building an Ionic 3 web app and I opted for lazy-loading due to the size of the app itself. I have however run into efficiency problems when it comes to initial app load. The main.js that gets built is 4.3 MB, which can take quite a while to load if the network is slow (I realize this would be a breeze on an actual device). I have resorted to showing a "loading" .gif while I wait for it to load, and I realize this is not actually solving the problem but actually adding to it.
I looked into the main.js and the file would definitely be a lot smaller if it were minified and obfuscated.
I do not have experience with either and the tutorials I find on the net do not cover lazy loaded modules...Most just broke my app.
Can anyone please guide me in the right direction? Or at least tell me if this is doable?
| stackoverflow | {
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:867153",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549375"
} |
4b6bb56982e5de3eb1373b218bd4fffd16305bdc | Stackoverflow Stackexchange
Q: Antd how to fix the size of the multi selection component? I am using the Ant design components for React.
I would like to fix the size of the multi selection input fields in order to keep the selected values on the same line, without wrapping to a new line as is the default behavior:
https://ant.design/components/select/#components-select-demo-multiple
I need to have the values arranged on the same line.
I can fix the size of the input fields by overriding the style
.ant-select-selection--multiple:before, .ant-select-selection--multiple:after {
    display: inline !important;
}
But when I select several values, they end up outside the input field.
A: Finally I found a solution by adding this css style options :
.ant-select-selection--multiple
{
white-space: nowrap;
height: 30px;
overflow: auto
}
Thus the div looks like an input text field, and when the content grows a scrollbar appears at the right side of the div field.
| Q: Antd how to fix the size of the multi selection component? I am using the Ant design components for React.
I would like to fix the size of the multi selection input fields in order to keep the selected values on the same line, without wrapping to a new line as is the default behavior:
https://ant.design/components/select/#components-select-demo-multiple
I need to have the values arranged on the same line.
I can fix the size of the input fields by overriding the style
.ant-select-selection--multiple:before, .ant-select-selection--multiple:after {
    display: inline !important;
}
But when I select several values, they end up outside the input field.
A: Finally I found a solution by adding this css style options :
.ant-select-selection--multiple
{
white-space: nowrap;
height: 30px;
overflow: auto
}
Thus the div looks like an input text field, and when the content grows a scrollbar appears at the right side of the div field.
A: You can specify maxTagCount
<Select
mode="multiple"
maxTagCount={1}
>
// here is rendering of the Opitons
</Select>
| stackoverflow | {
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:867172",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549440"
} |
ece8f56e33c1572ad9c4f4179ad9ffa12b01374a | Stackoverflow Stackexchange
Q: SSH.NET based colored terminal emulator I'm using SSH.NET to create my terminal application for UWP.
For now, I've been able to send/receive data with the library, but I would like to do something like the PuTTY application, that shows the text with different colors, or even being able to edit files with the Linux vi editor.
Is there a way to get color / position information with this library?
A: When implementing a terminal emulation, you primarily have to process ANSI escape codes sent by the server.
There's no support for that in SSH.NET or .NET Framework.
Implementing it on your own is a huge task. PuTTY's implementation of terminal emulation, terminal.c, has almost 8000 lines of code. And that's only the processing part; the drawing is separate.
Quick google search for "c# terminal emulation" results in:
https://github.com/munificent/malison-dotnet
(though I have no experience with this library)
The only part of this on SSH.NET side, is to request terminal emulation by using an overload of SshClient.CreateShell that takes terminalName argument (and its companions).
| Q: SSH.NET based colored terminal emulator I'm using SSH.NET to create my terminal application for UWP.
For now, I've been able to send/receive data with the library, but I would like to do something like the PuTTY application, that shows the text with different colors, or even being able to edit files with the Linux vi editor.
Is there a way to get color / position information with this library?
A: When implementing a terminal emulation, you primarily have to process ANSI escape codes sent by the server.
There's no support for that in SSH.NET or .NET Framework.
Implementing it on your own is a huge task. PuTTY's implementation of terminal emulation, terminal.c, has almost 8000 lines of code. And that's only the processing part; the drawing is separate.
Quick google search for "c# terminal emulation" results in:
https://github.com/munificent/malison-dotnet
(though I have no experience with this library)
The only part of this on SSH.NET side, is to request terminal emulation by using an overload of SshClient.CreateShell that takes terminalName argument (and its companions).
| stackoverflow | {
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:867173",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549443"
} |
f5a40c6d9758864b1cdccf9166c870317b4908ff | Stackoverflow Stackexchange
Q: How to run a background job every few seconds using django I would like to know how to execute a function or file every few seconds using django so that I can populate my database with data obtained from this function call.
I need a function to be executed every 5 seconds; this function will scrape a website and save its information into my database. This information will be used by one of my template views to generate a plotly graph.
I've read about Celery and async stuff but couldn't figure out a way to put this into practice.
If someone answers this, please tell me where should I put this job file in my django project or if I should just add a function to an existing file.
A: pip install django-crontab
add django_crontab to the installed apps in the settings file
create a file called cron.py in the project directory and write the function in this file
in settings.py add the time
CRONJOBS = [
('* * * * 5', 'cron.my_scheduled_job')
]
then from terminal
python manage.py crontab add
| Q: How to run a background job every few seconds using django I would like to know how to execute a function or file every few seconds using django so that I can populate my database with data obtained from this function call.
I need a function to be executed every 5 seconds; this function will scrape a website and save its information into my database. This information will be used by one of my template views to generate a plotly graph.
I've read about Celery and async stuff but couldn't figure out a way to put this into practice.
If someone answers this, please tell me where should I put this job file in my django project or if I should just add a function to an existing file.
A: pip install django-crontab
add django_crontab to the installed apps in the settings file
create a file called cron.py in the project directory and write the function in this file
in settings.py add the time
CRONJOBS = [
('* * * * 5', 'cron.my_scheduled_job')
]
then from terminal
python manage.py crontab add
A: Something this simple could be achieved as a daemon rather than using a cron or celery etc. Take a look at python-daemon or confusingly,
another package with the same name.
A: add the app at the top of the list
step 1.
INSTALLED_APPS = [
    # default django apps
    'django_crontab',
    # other apps
]
step 2.
save settings.py and run the server
step 3.
if you are using Docker then run docker-compose build before docker-compose up
| stackoverflow | {
"language": "en",
"length": 253,
"provenance": "stackexchange_0000F.jsonl.gz:867207",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549538"
} |
75d008e23a410bac2e95908ecb2533fc0431f431 | Stackoverflow Stackexchange
Q: How can I set up an URL rewrite filter BEFORE authentication in Spring 5? We have a working Spring 4 multi-tenant web application under development, which has the Tuckey UrlRewriter filter in the beginning of the filter chain (before authentication, session management, etc).
Our URLs look like this: http://example.com/mytenant/content
Which is rewritten to: http://example.com/content?_s=mytenant
Until now, all filters (authentication, session) were triggered for both rewritten and non-rewritten URLs (because in Spring 4 all filters were configured for both DispatcherType.REQUEST and DispatcherType.FORWARD by default)
Now we are trying out Spring 5, and (following the servlet specification) the filters are not being triggered after the rewrite event.
I guess reconfiguring all filters for DispatcherType.FORWARD too, would be a solution, but it seems ugly (I guess there is a good reason for the new defaults), so I am still searching for a better solution. What would be a recommended way for this in Spring 5? (maybe I use the UrlRewrite filter completely wrong)
Thank you for any advice!
| Q: How can I set up an URL rewrite filter BEFORE authentication in Spring 5? We have a working Spring 4 multi-tenant web application under development, which has the Tuckey UrlRewriter filter in the beginning of the filter chain (before authentication, session management, etc).
Our URLs look like this: http://example.com/mytenant/content
Which is rewritten to: http://example.com/content?_s=mytenant
Until now, all filters (authentication, session) were triggered for both rewritten and non-rewritten URLs (because in Spring 4 all filters were configured for both DispatcherType.REQUEST and DispatcherType.FORWARD by default)
Now we are trying out Spring 5, and (following the servlet specification) the filters are not being triggered after the rewrite event.
I guess reconfiguring all filters for DispatcherType.FORWARD too, would be a solution, but it seems ugly (I guess there is a good reason for the new defaults), so I am still searching for a better solution. What would be a recommended way for this in Spring 5? (maybe I use the UrlRewrite filter completely wrong)
Thank you for any advice!
| stackoverflow | {
"language": "en",
"length": 167,
"provenance": "stackexchange_0000F.jsonl.gz:867252",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549679"
} |
9db96e2289220d55261686ae5e2fec16e5833c20 | Stackoverflow Stackexchange
Q: Return last n elements of vector in Rust without mutating the vector I am struggling to find a way to take two values from the end of a vector, sum those values, and push the sum to the vector.
I have found that pop, truncate, and drain do not work because they remove the values from the original vector.
fn main() {
println!("Which Fibonacci number would you like to find?");
let mut fib_num = String::new();
io::stdin().read_line(&mut fib_num)
.expect("Failed to read line");
let fib_num: u32 = fib_num.trim().parse()
.expect("Please enter a number");
let mut stored_nums: Vec<u32> = vec![0, 1];
while fib_num > stored_nums.len() as u32 {
let mut limit = stored_nums.len();
let mut new_num1 = stored_nums.pop().unwrap();
let mut new_num2 = stored_nums.pop().unwrap_or(0);
stored_nums.push(new_num1 + new_num2);
}
}
A: To retrieve the last n elements, you could just convert it to a slice.
E.g. Get last 3 elements:
let v = vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
let last3 = v.as_slice()[v.len()-3..].to_vec();
println!("{:?}", last3); // [7, 8, 9]
| Q: Return last n elements of vector in Rust without mutating the vector I am struggling to find a way to take two values from the end of a vector, sum those values, and push the sum to the vector.
I have found that pop, truncate, and drain do not work because they remove the values from the original vector.
fn main() {
println!("Which Fibonacci number would you like to find?");
let mut fib_num = String::new();
io::stdin().read_line(&mut fib_num)
.expect("Failed to read line");
let fib_num: u32 = fib_num.trim().parse()
.expect("Please enter a number");
let mut stored_nums: Vec<u32> = vec![0, 1];
while fib_num > stored_nums.len() as u32 {
let mut limit = stored_nums.len();
let mut new_num1 = stored_nums.pop().unwrap();
let mut new_num2 = stored_nums.pop().unwrap_or(0);
stored_nums.push(new_num1 + new_num2);
}
}
A: To retrieve the last n elements, you could just convert it to a slice.
E.g. Get last 3 elements:
let v = vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
let last3 = v.as_slice()[v.len()-3..].to_vec();
println!("{:?}", last3); // [7, 8, 9]
A: You can use a reverse iterator on the Vec:
let (n1, n2) = {
let mut rev_iter = stored_nums.iter().rev();
(rev_iter.next().unwrap().clone(), rev_iter.next().unwrap().clone())
};
stored_nums.push(n1 + n2);
A: You need to consider the case where the vector doesn't have two items.
I'd use iterator adapters like Iterator::rev and Iterator::take and then finish with Iterator::sum:
let sum = stored_nums.iter().rev().take(2).sum();
stored_nums.push(sum);
This allows you to avoid explicit handling of cases where the vector / slice / iterator is too short but the code still deals with it implicitly.
You could also directly index into the slice:
let len = stored_nums.len();
let sum = stored_nums[len - 1] + stored_nums[len - 2];
stored_nums.push(sum);
This will panic if there are less than 2 elements, however.
You could attempt to deal with the vector being too short in this case, but it's a bit verbose:
fn add_last_two(nums: &[u32]) -> Option<u32> {
let len = nums.len();
let idx_a = len.checked_sub(1)?;
let idx_b = len.checked_sub(2)?;
let a = nums.get(idx_a)?;
let b = nums.get(idx_b)?;
Some(a + b)
}
fn main() {
let mut stored_nums: Vec<u32> = vec![0, 1];
let sum = add_last_two(&stored_nums).unwrap_or(0);
stored_nums.push(sum);
}
Note that it might be nicer to use a Fibonacci iterator and just collect that into a Vec.
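That iterator-based approach can be sketched as follows (a minimal example, not part of the original answer — the helper name fib is made up):

```rust
// Sketch of the Fibonacci-iterator idea: std::iter::successors generates
// (a, b) pairs, we keep the first element of each pair, and collect the
// first n numbers into a Vec. checked_add stops the sequence instead of
// overflowing u32.
fn fib(n: usize) -> Vec<u32> {
    std::iter::successors(Some((0u32, 1u32)), |&(a, b)| {
        a.checked_add(b).map(|next| (b, next))
    })
    .map(|(a, _)| a)
    .take(n)
    .collect()
}

fn main() {
    println!("{:?}", fib(7)); // [0, 1, 1, 2, 3, 5, 8]
}
```

This avoids touching the tail of an existing vector entirely: the whole sequence is produced up front.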
| stackoverflow | {
"language": "en",
"length": 370,
"provenance": "stackexchange_0000F.jsonl.gz:867276",
"question_score": "19",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549759"
} |
7b79dffda1d4e11116472ccf7739eea3801f1887 | Stackoverflow Stackexchange
Q: visual studio error : Collection was modified; enumeration operation may not execute I have a large solution with many C++ projects in Visual Studio 2015. When I load the solution, certain projects consistently fail to load. Typically they are plain old static C++ library projects.
In the output window of VS " Collection was modified; enumeration operation may not execute."
And in the solution window it shows the project is unloaded. If I reload just that project using right click context menu in the solution window the project just reloads without error.
How can I sort this out?
It is not clear what in the project file VS is having problems with. I have not been able to find better or more detailed logs. It only happens when I open the solution and only for certain projects.
A: So following a suggestion from the msdn forum, I deleted two directories that make up the local Visual Studio cache. That resolved the issue for me.
.vs in the solution directory
C:\Users\<My user name>\AppData\Local\Microsoft\VisualStudio\14.0\ComponentModelCache\
| Q: visual studio error : Collection was modified; enumeration operation may not execute I have a large solution with many C++ projects in Visual Studio 2015. When I load the solution, certain projects consistently fail to load. Typically they are plain old static C++ library projects.
In the output window of VS: "Collection was modified; enumeration operation may not execute."
And the solution window shows the project as unloaded. If I reload just that project using the right-click context menu in the solution window, it reloads without error.
How can I sort this out?
It is not clear what in the project file VS is having problems with. I have not been able to find better or more detailed logs. It only happens when I open the solution and only for certain projects.
A: So following a suggestion from the msdn forum, I deleted two directories that make up the local Visual Studio cache. That resolved the issue for me.
.vs in the solution directory
C:\Users\<My user name>\AppData\Local\Microsoft\VisualStudio\14.0\ComponentModelCache\
A: I would suggest checking the VS logs at %APPDATA%\Microsoft\VisualStudio\Version\ActivityLog.xml and if there is nothing in there, try turning on extra VisualStudio logging
Devenv /log Path\NameOfLogFile
| stackoverflow | {
"language": "en",
"length": 199,
"provenance": "stackexchange_0000F.jsonl.gz:867280",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549789"
} |
96169c0d23df1bbed0292c8ecdad0de0724cb3cb | Stackoverflow Stackexchange
Q: How to init firebase in the right directory? SEB:~ SEB$ cd /Users/SEB/Desktop/demo/polymer
SEB:polymer SEB$ firebase init
You're about to initialize a Firebase project in this directory:
/Users/SEB
I just don't understand why... How can I init Firebase in my "polymer" directory? Thank you. Sébastien
A: Check /Users/SEB for a firebase.json file and delete it if it exists. When firebase init runs it goes up the directory tree looking for a parent directory that's already initialized as a Firebase project.
| Q: How to init firebase in the right directory? SEB:~ SEB$ cd /Users/SEB/Desktop/demo/polymer
SEB:polymer SEB$ firebase init
You're about to initialize a Firebase project in this directory:
/Users/SEB
I just don't understand why... How can I init Firebase in my "polymer" directory? Thank you. Sébastien
A: Check /Users/SEB for a firebase.json file and delete it if it exists. When firebase init runs it goes up the directory tree looking for a parent directory that's already initialized as a Firebase project.
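For illustration, the upward search that firebase init performs can be emulated in a few lines of shell (a simplified sketch assuming the CLI just walks up the tree checking for firebase.json; the /tmp paths and the find_project_root name are made up):

```shell
# Walk up from the given directory until a firebase.json is found,
# mimicking how `firebase init` picks an already-initialized parent.
find_project_root() {
  dir="$1"
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/firebase.json" ]; then
      echo "$dir"
      return
    fi
    dir=$(dirname "$dir")
  done
  echo "none"
}

mkdir -p /tmp/fbdemo/home/Desktop/demo/polymer
touch /tmp/fbdemo/home/firebase.json
find_project_root /tmp/fbdemo/home/Desktop/demo/polymer   # prints /tmp/fbdemo/home
```

This is why init reported /Users/SEB: a stray firebase.json higher up the tree wins over the current directory.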
| stackoverflow | {
"language": "en",
"length": 80,
"provenance": "stackexchange_0000F.jsonl.gz:867312",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44549898"
} |
685274a82f20e3efbed0909aff6797f8173569ad | Stackoverflow Stackexchange
Q: How to rebuild java files in React Native? I made a change to the file ReactWebViewManager.java, which is a file inside the react-native package (the full path is node_modules/react-native/ReactAndroid/src/main/java/com/facebook/react/views/webview/ReactWebViewManager.java). However, the changes I make to the file do not seem to take effect. I deleted the entire file to test this theory and the app continued to work fine, so it seems I need to recompile/rebuild/clear the cache of the react-native package. Is there a way to do this?
A: You just need to run the following command every time you make changes to Java files: react-native run-android
| Q: How to rebuild java files in React Native? I made a change to the file ReactWebViewManager.java, which is a file inside the react-native package (the full path is node_modules/react-native/ReactAndroid/src/main/java/com/facebook/react/views/webview/ReactWebViewManager.java). However, the changes I make to the file do not seem to take effect. I deleted the entire file to test this theory and the app continued to work fine, so it seems I need to recompile/rebuild/clear the cache of the react-native package. Is there a way to do this?
A: You just need to run the following command every time you make changes to Java files: react-native run-android
| stackoverflow | {
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:867392",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550146"
} |
9785b6626abd20748388ae2afe7471141114ced5 | Stackoverflow Stackexchange
Q: How to create a Unique Index with a Where clause with Entity Framework I want to create a UNIQUE INDEX with a WHERE condition using Entity Framework.
public class Person {
public string Name { get; set; }
public bool Expired { get; set; }
}
The condition is that the Name is unique only when "Expired" is false.
The INDEX would look like this:
CREATE UNIQUE INDEX [IX_Person]
ON Person(Name)
WHERE [Expired] = 0;
Is there some way to write this unique index with code-first and not having to execute this SQL in a migration?
I can write this:
[Index("IX_Expired", IsUnique = true)]
public bool Expired { get; set; }
But I don't find a way to specify the "WHERE" part.
A:
Is there some way to write this unique index with code-first and not having to execute this SQL in a migration?
No. EF can create basic indexes only.
E.g. for SQL Server, filtered indexes, indexes with included columns, partitioned indexes, indexes on particular filegroups, indexes with row or page compression, columnstore indexes, heap tables, memory-optimized tables, etc. can only be created using T-SQL.
| Q: How to create a Unique Index with a Where clause with Entity Framework I want to create a UNIQUE INDEX with a WHERE condition using Entity Framework.
public class Person {
public string Name { get; set; }
public bool Expired { get; set; }
}
The condition is that the Name is unique only when "Expired" is false.
The INDEX would look like this:
CREATE UNIQUE INDEX [IX_Person]
ON Person(Name)
WHERE [Expired] = 0;
Is there some way to write this unique index with code-first and not having to execute this SQL in a migration?
I can write this:
[Index("IX_Expired", IsUnique = true)]
public bool Expired { get; set; }
But I don't find a way to specify the "WHERE" part.
A:
Is there some way to write this unique index with code-first and not having to execute this SQL in a migration?
No. EF can create basic indexes only.
E.g. for SQL Server, filtered indexes, indexes with included columns, partitioned indexes, indexes on particular filegroups, indexes with row or page compression, columnstore indexes, heap tables, memory-optimized tables, etc. can only be created using T-SQL.
A: In case anyone comes here looking to do this in Entity Framework core.... you can now do it!
e.g. in your MigrationContext:
builder.Entity<MyEntity>()
.HasIndex(x => x.MyField)
.HasFilter($"{nameof(MyEntity.MyField)} IS NOT NULL")
.IsUnique();
| stackoverflow | {
"language": "en",
"length": 219,
"provenance": "stackexchange_0000F.jsonl.gz:867399",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550161"
} |
ae08710b3e832b4283127acede0f40f948bf5371 | Stackoverflow Stackexchange
Q: I cannot find rtl support for ng-bootstrap (for right to left language) Is there RTL support for ng-bootstrap (for Arabic, Hebrew, etc.)?
I am writing a project in a right-to-left language in Angular 4 and would like to use ng-bootstrap.
Thanks
A: You can use this patch for full RTL compatibility for both Bootstrap 3.3.7 and 4.0.0-alpha.6
https://github.com/parsmizban/RTL-Bootstrap
If you have a pre-designed theme, you can use this patch to make your template RTL very easily.
| Q: I cannot find rtl support for ng-bootstrap (for right to left language) Is there RTL support for ng-bootstrap (for Arabic, Hebrew, etc.)?
I am writing a project in a right-to-left language in Angular 4 and would like to use ng-bootstrap.
Thanks
A: You can use this patch for full RTL compatibility for both Bootstrap 3.3.7 and 4.0.0-alpha.6
https://github.com/parsmizban/RTL-Bootstrap
If you have a pre-designed theme, you can use this patch to make your template RTL very easily.
A: You can fix the direction for some components like ngb-pagination:
direction: ltr;
It makes sense to set the direction to LTR everywhere for such common displays (e.g., number-based ones).
| stackoverflow | {
"language": "en",
"length": 110,
"provenance": "stackexchange_0000F.jsonl.gz:867418",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550223"
} |
f65d23ed7f0229f92f411cb282d5d0a121b26182 | Stackoverflow Stackexchange
Q: Using Composer scripts defined in composer.json, how to display phpunit colors? Context
I'm using Composer scripts defined in my composer.json file to run unit tests in my application.
My composer.json:
{
"name": "my company/my_application",
"description": "Description of my application",
"license": "my licence",
"require": {
"php": ">=5.6",
"ext-soap": "*",
"ext-ssh2": "*",
"ext-curl": "*",
"ext-zip": "*",
"ext-pdo_mysql": "*",
"ext-gettext": "*",
"dusank/knapsack": "^9.0"
},
"require-dev": {
"ext-mysqli": "*",
"phpunit/phpunit": "^5.7",
"squizlabs/php_codesniffer": "2.*"
},
"scripts": {
        "test-php": "vendor/bin/phpunit --colors --verbose --bootstrap tests/bootstrap.php --configuration tests/phpunit.xml"
}
}
My command is:
vendor/bin/phpunit --colors --verbose --bootstrap tests/bootstrap.php --configuration tests/phpunit.xml tests/
Problem
If I run vendor/bin/phpunit --colors --verbose --bootstrap tests/bootstrap.php --configuration tests/phpunit.xml tests/ inside a terminal, I can see colored output. But if I run it using Composer, I don't see any color.
Question
Using Composer scripts defined in composer.json, how to display phpunit colors?
A: To force phpunit to display colors in all situations, append =always to the --colors option.
With
vendor/bin/phpunit --colors=always --verbose --bootstrap tests/bootstrap.php --configuration tests/phpunit.xml tests/
in my composer.json file, I can see colors.
| Q: Using Composer scripts defined in composer.json, how to display phpunit colors? Context
I'm using Composer scripts defined in my composer.json file to run unit tests in my application.
My composer.json:
{
"name": "my company/my_application",
"description": "Description of my application",
"license": "my licence",
"require": {
"php": ">=5.6",
"ext-soap": "*",
"ext-ssh2": "*",
"ext-curl": "*",
"ext-zip": "*",
"ext-pdo_mysql": "*",
"ext-gettext": "*",
"dusank/knapsack": "^9.0"
},
"require-dev": {
"ext-mysqli": "*",
"phpunit/phpunit": "^5.7",
"squizlabs/php_codesniffer": "2.*"
},
"scripts": {
        "test-php": "vendor/bin/phpunit --colors --verbose --bootstrap tests/bootstrap.php --configuration tests/phpunit.xml"
}
}
My command is:
vendor/bin/phpunit --colors --verbose --bootstrap tests/bootstrap.php --configuration tests/phpunit.xml tests/
Problem
If I run vendor/bin/phpunit --colors --verbose --bootstrap tests/bootstrap.php --configuration tests/phpunit.xml tests/ inside a terminal, I can see colored output. But if I run it using Composer, I don't see any color.
Question
Using Composer scripts defined in composer.json, how to display phpunit colors?
A: To force phpunit to display colors in all situations, append =always to the --colors option.
With
vendor/bin/phpunit --colors=always --verbose --bootstrap tests/bootstrap.php --configuration tests/phpunit.xml tests/
in my composer.json file, I can see colors.
| stackoverflow | {
"language": "en",
"length": 171,
"provenance": "stackexchange_0000F.jsonl.gz:867431",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550254"
} |
1573920165cb450b9d180de448584f7eefdce556 | Stackoverflow Stackexchange
Q: Where is the ALT+UP/DOWN (move line) setting in Visual Studio 2017? I installed Resharper 2017 for VS2017, in the first run it asked about hotkey setup and I chose VS hotkeys. I tried Resharper and decided to uninstall it. Now ALT + UP / DOWN doesn't move lines! How can I fix it? What option should I check.
A: Ok. I found it in Tools > Options > Environment > Keyboard > Edit.MoveSelectedLinesUp(Down)
| Q: Where is the ALT+UP/DOWN (move line) setting in Visual Studio 2017? I installed Resharper 2017 for VS2017, in the first run it asked about hotkey setup and I chose VS hotkeys. I tried Resharper and decided to uninstall it. Now ALT + UP / DOWN doesn't move lines! How can I fix it? What option should I check.
A: Ok. I found it in Tools > Options > Environment > Keyboard > Edit.MoveSelectedLinesUp(Down)
| stackoverflow | {
"language": "en",
"length": 74,
"provenance": "stackexchange_0000F.jsonl.gz:867434",
"question_score": "16",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550260"
} |
7c712ce69822c8f67dc6364ff8e33cd0ed4f8574 | Stackoverflow Stackexchange
Q: Verbose build logs from Android Studio How do I get a verbose log (including the command-line arguments to compiler and linker) when building with Android Studio?
I have just transitioned from Ant / Android.mk builds to Android-Studio builds.
With the old system, I was able to see how the compiler was invoked by doing:
$ ndk-build V=1
What would be the equivalent setting in Android Studio to accomplish this?
I have a suspicion that my project is building against the wrong GLES version (gles2 instead of gles3) and want to make sure by seeing the command line arguments to the linker.
A: It turns out you can make the build verbose by changing the build.gradle file as follows:
externalNativeBuild {
cmake {
arguments "-DCMAKE_VERBOSE_MAKEFILE=1"
}
}
When using ndk-build instead of cmake, use this instead:
externalNativeBuild {
ndkBuild {
arguments "V=1"
}
}
| Q: Verbose build logs from Android Studio How do I get a verbose log (including the command-line arguments to compiler and linker) when building with Android Studio?
I have just transitioned from Ant / Android.mk builds to Android-Studio builds.
With the old system, I was able to see how the compiler was invoked by doing:
$ ndk-build V=1
What would be the equivalent setting in Android Studio to accomplish this?
I have a suspicion that my project is building against the wrong GLES version (gles2 instead of gles3) and want to make sure by seeing the command line arguments to the linker.
A: It turns out you can make the build verbose by changing the build.gradle file as follows:
externalNativeBuild {
cmake {
arguments "-DCMAKE_VERBOSE_MAKEFILE=1"
}
}
When using ndk-build instead of cmake, use this instead:
externalNativeBuild {
ndkBuild {
arguments "V=1"
}
}
A: According to https://developer.android.com/reference/tools/gradle-api/4.1/com/android/build/api/dsl/NdkBuild there is no way to pass arguments.
But you can pass the folder for outputs, which generates .json files:
externalNativeBuild {
ndkBuild {
// Tells Gradle to put outputs from external native
// builds in the path specified below.
buildStagingDirectory "./outputs/ndk-build"
path 'Android.mk'
}
}
So in my case in outputs/ndk-build/debug/json_generation_record.json the last "message" told me the error:
JSON generation completed with problem. Exception: Build command failed.
Error while executing process .... ndk-build.cmd ....
...
Android.mk:myLib-prebuilt: LOCAL_SRC_FILES points to a missing file
Android NDK: Check that ... exists or that its path is correct
...prebuilt-library.mk:45: *** Android NDK: Aborting . Stop.\n"
| stackoverflow | {
"language": "en",
"length": 250,
"provenance": "stackexchange_0000F.jsonl.gz:867472",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550342"
} |
ae41a0345d14ffd9763e8ce57e5ef49f610712f4 | Stackoverflow Stackexchange
Q: How to use RegEx to ignore the first period and match all subsequent periods? How to use RegEx to ignore the first period and match all subsequent periods?
For example:
*
*1.23 (no match)
*1.23.45 (matches the second period)
*1.23.45.56 (matches the second and third periods)
I am trying to limit users from entering invalid numbers. So I will be using this RegEx to replace matches with empty strings.
I currently have /[^.0-9]+/ but it is not enough to disallow . after an (optional) initial .
A: Constrain the number between the start ^ and end anchor $, then specify the number pattern you require. Such as:
/^\d+\.?\d+?$/
Which allows 1 or more numbers, followed by an optional period, then optional numbers.
| Q: How to use RegEx to ignore the first period and match all subsequent periods? How to use RegEx to ignore the first period and match all subsequent periods?
For example:
*
*1.23 (no match)
*1.23.45 (matches the second period)
*1.23.45.56 (matches the second and third periods)
I am trying to limit users from entering invalid numbers. So I will be using this RegEx to replace matches with empty strings.
I currently have /[^.0-9]+/ but it is not enough to disallow . after an (optional) initial .
A: Constrain the number between the start ^ and end anchor $, then specify the number pattern you require. Such as:
/^\d+\.?\d+?$/
Which allows 1 or more numbers, followed by an optional period, then optional numbers.
A: I suggest using a regex that will match 1+ digits, a period, and then any number of digits and periods capturing these 2 parts into separate groups. Then, inside a replace callback method, remove all periods with an additional replace:
var ss = ['1.23', '1.23.45', '1.23.45.56'];
var rx = /^(\d+\.)([\d.]*)$/;
for (var s of ss) {
var res = s.replace(rx, function($0,$1,$2) {
return $1+$2.replace(/\./g, '');
});
console.log(s, "=>", res);
}
Pattern details:
*
*^ - start of string
*(\d+\.) - Group 1 matching 1+ digits and a literal .
*([\d.]*) - Group 2 matching zero or more digits and literal dots
*$ - end of string.
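The same result can also be had without a regex callback (a sketch; the function name keepFirstPeriod is made up): keep everything up to and including the first period, then strip any remaining periods from the rest.

```javascript
// Split at the first period and remove all periods from the remainder,
// so only the first (optional) decimal point survives.
function keepFirstPeriod(s) {
  const i = s.indexOf(".");
  if (i === -1) return s; // no period at all: nothing to strip
  return s.slice(0, i + 1) + s.slice(i + 1).replace(/\./g, "");
}

console.log(keepFirstPeriod("1.23"));       // "1.23"
console.log(keepFirstPeriod("1.23.45.56")); // "1.234556"
```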
| stackoverflow | {
"language": "en",
"length": 231,
"provenance": "stackexchange_0000F.jsonl.gz:867506",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550442"
} |
70c7e51e5a75ee4ae87da9e83b76916364629fc7 | Stackoverflow Stackexchange
Q: R - failed installation of package with github dependency I want to install a package that has a dependency from github. In the DESCRIPTION file of the package it says
Imports: ggplot2, ggthemr
Remotes: cttobin/ggthemr@0b2e7da43d4d2844b08b039510b31078
However, when I try to install the package, automatic installation of the ggthemr package from github as a dependency fails without any informative error message. It looks as if it tried to install the dependency from CRAN instead of github.
In the PACKAGES file of the repository where I want to install the package from (a drat repo), the Imports are listed, the Remotes are not. Adding the Remotes line manually does not solve the problem.
Any help is appreciated.
| Q: R - failed installation of package with github dependency I want to install a package that has a dependency from github. In the DESCRIPTION file of the package it says
Imports: ggplot2, ggthemr
Remotes: cttobin/ggthemr@0b2e7da43d4d2844b08b039510b31078
However, when I try to install the package, automatic installation of the ggthemr package from github as a dependency fails without any informative error message. It looks as if it tried to install the dependency from CRAN instead of github.
In the PACKAGES file of the repository where I want to install the package from (a drat repo), the Imports are listed, the Remotes are not. Adding the Remotes line manually does not solve the problem.
Any help is appreciated.
| stackoverflow | {
"language": "en",
"length": 117,
"provenance": "stackexchange_0000F.jsonl.gz:867517",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550467"
} |
d7448be3aac5b09b668608c7df0c8ed6e6102ee2 | Stackoverflow Stackexchange
Q: Are Android gradle tasks open sourced? I use ./gradlew connectedAndroidTest to test my android app.
When the connectedAndroidTest task runs, I can see from the terminal that it ran many sub-tasks:
:assembleDebugAndroidTest UP-TO-DATE
:connectedDebugAndroidTest ...
but I don't understand the sub-tasks details.
I tried to find the Gradle source code but couldn't find anything about the connectedDebugAndroidTest task.
Are android tasks open source? Or where I can know more details?
Thanks.
A: If you would like to see Android Build Tools source code, there is open Google repository with it: android/platform/tools/build/master
Specifically Android Gradle Plugin: build/gradle.
If you would like to see manual for specific task, you could execute:
./gradlew help --task "${taskName}"
In your case it should be:
./gradlew help --task connectedAndroidTest
Output:
Detailed task information for connectedAndroidTest
Path
:app:connectedAndroidTest
Type
Task (org.gradle.api.Task)
Description
Installs and runs instrumentation tests for all flavors on connected devices.
Group
verification
| Q: Are Android gradle tasks open sourced? I use ./gradlew connectedAndroidTest to test my android app.
When the connectedAndroidTest task runs, I can see from the terminal that it ran many sub-tasks:
:assembleDebugAndroidTest UP-TO-DATE
:connectedDebugAndroidTest ...
but I don't understand the sub-tasks details.
I tried to find the Gradle source code but couldn't find anything about the connectedDebugAndroidTest task.
Are android tasks open source? Or where I can know more details?
Thanks.
A: If you would like to see Android Build Tools source code, there is open Google repository with it: android/platform/tools/build/master
Specifically Android Gradle Plugin: build/gradle.
If you would like to see manual for specific task, you could execute:
./gradlew help --task "${taskName}"
In your case it should be:
./gradlew help --task connectedAndroidTest
Output:
Detailed task information for connectedAndroidTest
Path
:app:connectedAndroidTest
Type
Task (org.gradle.api.Task)
Description
Installs and runs instrumentation tests for all flavors on connected devices.
Group
verification
| stackoverflow | {
"language": "en",
"length": 148,
"provenance": "stackexchange_0000F.jsonl.gz:867560",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550598"
} |
a9bf3af4de387954e661eed0df5a808d2fe10a30 | Stackoverflow Stackexchange
Q: How can I disable the connection successful message on H2O I'm making an R notebook with H2O and I don't want the H2O "Connection Successful" message and accompanying info (see below) to show.
Connection successful!
R is connected to the H2O cluster:
H2O cluster uptime:
H2O cluster version:
H2O cluster version age:
H2O cluster name:
H2O cluster total nodes:
H2O cluster total memory:
H2O cluster total cores:
H2O cluster allowed cores: 4
H2O cluster healthy:
H2O Connection ip:
H2O Connection port:
H2O Connection proxy:
H2O Internal Security:
R Version:
I would appreciate any help!
A: You can set include=FALSE in the chunk options. That should prevent any output from printing.
```{r include=FALSE}
h2o.init()
```
| Q: How can I disable the connection successful message on H2O I'm making an R notebook with H2O and I don't want the H2O "Connection Successful" message and accompanying info (see below) to show.
Connection successful!
R is connected to the H2O cluster:
H2O cluster uptime:
H2O cluster version:
H2O cluster version age:
H2O cluster name:
H2O cluster total nodes:
H2O cluster total memory:
H2O cluster total cores:
H2O cluster allowed cores: 4
H2O cluster healthy:
H2O Connection ip:
H2O Connection port:
H2O Connection proxy:
H2O Internal Security:
R Version:
I would appreciate any help!
A: You can set include=FALSE in the chunk options. That should prevent any output from printing.
```{r include=FALSE}
h2o.init()
```
A: If you want to suppress all R output, you can use the sink() function. Here is how that would look with h2o.init():
> library(h2o)
> sink("/dev/null")
> h2o.init()
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:867636",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550808"
} |
dd42e9ac50e129427bae0923aa02abe34b49419d | Stackoverflow Stackexchange
Q: How to check permission is granted in ViewModel? I need to ask for the contacts permission when the application starts, and in my ViewModel I need to call a method which requires that permission. I need to check whether the permission was granted by the user before making the call, but checking a permission requires access to an Activity, while my ViewModel has no reference to an Activity and I don't want it to have one. How can I overcome this problem?
A: I just ran into this problem, and I decided to make use of LiveData instead.
Core concept:
*
*ViewModel has a LiveData on what permission request needs to be made
*ViewModel has a method (essentially callback) that returns if permission is granted or not
SomeViewModel.kt:
class SomeViewModel : ViewModel() {
val permissionRequest = MutableLiveData<String>()
fun onPermissionResult(permission: String, granted: Boolean) {
TODO("whatever you need to do")
}
}
FragmentOrActivity.kt
class FragmentOrActivity : FragmentOrActivity() {
    private val viewModel: SomeViewModel by lazy {
ViewModelProviders.of(this).get(SomeViewModel::class.java)
}
override fun onCreate(savedInstanceState: Bundle?) {
......
viewModel.permissionRequest.observe(this, Observer { permission ->
            TODO("ask for permission, and then call viewModel.onPermissionResult afterwards")
})
......
}
}
| Q: How to check permission is granted in ViewModel? I need to ask for the contacts permission when the application starts, and in my ViewModel I need to call a method which requires that permission. I need to check whether the permission was granted by the user before making the call, but checking a permission requires access to an Activity, while my ViewModel has no reference to an Activity and I don't want it to have one. How can I overcome this problem?
A: I just ran into this problem, and I decided to make use of LiveData instead.
Core concept:
*
*ViewModel has a LiveData on what permission request needs to be made
*ViewModel has a method (essentially callback) that returns if permission is granted or not
SomeViewModel.kt:
class SomeViewModel : ViewModel() {
val permissionRequest = MutableLiveData<String>()
fun onPermissionResult(permission: String, granted: Boolean) {
TODO("whatever you need to do")
}
}
FragmentOrActivity.kt
class FragmentOrActivity : FragmentOrActivity() {
private viewModel: SomeViewModel by lazy {
ViewModelProviders.of(this).get(SomeViewModel::class.java)
}
override fun onCreate(savedInstanceState: Bundle?) {
......
viewModel.permissionRequest.observe(this, Observer { permission ->
TODO("ask for permission, and then call viewModel.onPermissionResult aftwewards")
})
......
}
}
A: I have reworked the solution. The PermissionRequester object is everything you need to request permissions from any point where you have at least an application context. It uses its helper PermissionRequestActivity to accomplish this job.
@Parcelize
class PermissionResult(val permission: String, val state: State) : Parcelable
enum class State { GRANTED, DENIED_TEMPORARILY, DENIED_PERMANENTLY }
typealias Cancellable = () -> Unit
private const val PERMISSIONS_ARGUMENT_KEY = "PERMISSIONS_ARGUMENT_KEY"
private const val REQUEST_CODE_ARGUMENT_KEY = "REQUEST_CODE_ARGUMENT_KEY"
object PermissionRequester {
private val callbackMap = ConcurrentHashMap<Int, (List<PermissionResult>) -> Unit>(1)
private var requestCode = 256
get() {
requestCode = field--
return if (field < 0) 255 else field
}
fun requestPermissions(context: Context, vararg permissions: String, callback: (List<PermissionResult>) -> Unit): Cancellable {
val intent = Intent(context, PermissionRequestActivity::class.java)
.putExtra(PERMISSIONS_ARGUMENT_KEY, permissions)
.putExtra(REQUEST_CODE_ARGUMENT_KEY, requestCode)
.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
context.startActivity(intent)
callbackMap[requestCode] = callback
return { callbackMap.remove(requestCode) }
}
internal fun onPermissionResult(responses: List<PermissionResult>, requestCode: Int) {
callbackMap[requestCode]?.invoke(responses)
callbackMap.remove(requestCode)
}
}
class PermissionRequestActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
if (savedInstanceState == null) {
requestPermissions()
}
}
private fun requestPermissions() {
val permissions = intent?.getStringArrayExtra(PERMISSIONS_ARGUMENT_KEY) ?: arrayOf()
val requestCode = intent?.getIntExtra(REQUEST_CODE_ARGUMENT_KEY, -1) ?: -1
when {
permissions.isNotEmpty() && requestCode != -1 -> ActivityCompat.requestPermissions(this, permissions, requestCode)
else -> finishWithResult()
}
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
val permissionResults = grantResults.zip(permissions).map { (grantResult, permission) ->
val state = when {
grantResult == PackageManager.PERMISSION_GRANTED -> State.GRANTED
ActivityCompat.shouldShowRequestPermissionRationale(this, permission) -> State.DENIED_TEMPORARILY
else -> State.DENIED_PERMANENTLY
}
PermissionResult(permission, state)
}
finishWithResult(permissionResults)
}
private fun finishWithResult(permissionResult: List<PermissionResult> = listOf()) {
val requestCode = intent?.getIntExtra(REQUEST_CODE_ARGUMENT_KEY, -1) ?: -1
PermissionRequester.onPermissionResult(permissionResult, requestCode)
finish()
}
}
Usage:
class MyViewModel(application: Application) : AndroidViewModel(application) {
private val cancelRequest: Cancellable = requestPermission()
private fun requestPermission(): Cancellable {
return PermissionRequester.requestPermissions(getApplication(), "android.permission.SEND_SMS") {
if (it.firstOrNull()?.state == State.GRANTED) {
Toast.makeText(getApplication(), "GRANTED", Toast.LENGTH_LONG).show()
} else {
Toast.makeText(getApplication(), "DENIED", Toast.LENGTH_LONG).show()
}
}
}
override fun onCleared() {
super.onCleared()
cancelRequest()
}
}
A: I did something like this:
create an abstract class that extends AndroidViewModel which gives you access to the application context:
abstract class BaseViewModel(application: Application) : AndroidViewModel(application), CoroutineScope {
private val job = Job()
override val coroutineContext: CoroutineContext
get() = job + Dispatchers.Main
override fun onCleared() {
super.onCleared()
job.cancel()
}
}
Now, create your view model by extending the BaseViewModel class and you will have access to the application context
class AdminViewModel(application: Application) : BaseViewModel(application) {
.....
}
Now you always have access to a Context that you can use to get access to resources.
| stackoverflow | {
"language": "en",
"length": 584,
"provenance": "stackexchange_0000F.jsonl.gz:867651",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550851"
} |
ba5b0d03bf366f49f37bb795cbe94b3595e337a2 | Stackoverflow Stackexchange
Q: NLTK Perceptron Tagger - What does it recognize as FW (foreign word)? Relatively new to NLP and working on tagging sentences that contain foreign words using NLTK's PerceptronTagger (in Python) - but it continues to tag the tokenized foreign word by position in the syntax rather than as a 'FW'.
Does the whole sentence have to be in the language (with the appropriate language pickle file loaded) for the 'FW' tag to work ala the NLTK documentation? Is there a way of sensing a foreign word within an English sentence?
On the flip side of that coin, do sentences containing foreign words that have been normalized into the English language tag as English? (ie: entrepreneur, siesta, zeitgeist, etc)
A: in Spacy it means "Foreign Word". Maybe it is the same in NLTK.
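It is: FW is the Penn Treebank tag for foreign words, and NLTK's default tagger uses that tagset. In practice, though, the pre-trained model was trained on English text where FW is rare, so a lone loanword inside an English sentence usually gets tagged by its syntactic position, and assimilated words like "entrepreneur" are simply treated as English. One pragmatic workaround is to post-process the tagger's output with your own vocabulary check. A minimal sketch in plain Python (the tiny vocabulary set and the hard-coded tagged sentence are stand-ins; in practice you would use nltk.pos_tag and a real word list such as nltk.corpus.words):

```python
# Stand-ins for a real setup: in practice, tagged = nltk.pos_tag(tokens)
# and the vocabulary would come from a word list such as nltk.corpus.words.
ENGLISH_VOCAB = {"the", "meeting", "had", "a", "certain", "about", "it"}

def retag_foreign(tagged, vocab=ENGLISH_VOCAB):
    """Re-tag alphabetic tokens missing from an English vocabulary as 'FW'."""
    return [(word, "FW") if word.isalpha() and word.lower() not in vocab
            else (word, tag)
            for word, tag in tagged]

tagged = [("The", "DT"), ("meeting", "NN"), ("had", "VBD"), ("a", "DT"),
          ("certain", "JJ"), ("gemutlichkeit", "NN"), ("about", "IN"), ("it", "PRP")]
print(retag_foreign(tagged))
```

This keeps the original tags for in-vocabulary words, and it is easy to tune, e.g. by whitelisting assimilated loanwords you want to count as English.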
| stackoverflow | {
"language": "en",
"length": 133,
"provenance": "stackexchange_0000F.jsonl.gz:867658",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550883"
} |
4c56b5e37eb08c1b64fd6e7f2bda9113fb214184 | Stackoverflow Stackexchange
Q: Shifting the pointer of the string in C++ In C we are able to do this :
char *str;
scanf("%s", str); // lets say that we enter "Hello"
str = str + 2;
printf("%s", str); // and the answer would be "llo"
Is there something like this in C++ with string datatype in O(1) time?
Edit : It is giving me this error ->
error: invalid operands to binary expression ('string' (aka 'basic_string<char,
char_traits<char>, allocator<char> >') and 'int')
str = str + 1;
Edit2 : I ran this code and got the above error ->
string str = "Hello";
str = str + 1;
cout << str << endl;
A: #include <iostream>
#include <string>
using namespace std;
int main() {
string s = "foobar";
cout << &s[s.size() - 2] << endl;
cout << &s[3] << endl;
}
| stackoverflow | {
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:867671",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550935"
} |
0b06d566b79bbfd29fb2f00ec2552d2ffc2f3b5d | Stackoverflow Stackexchange
Q: select option not working with ng-model in angular 4 I have a select box where I show elements from a list
Code Snippet:
export class CreateauctionComponent implements OnInit{
createAuctionForm: FormGroup;
test:any = ["cat","dog"];
constructor(private _formBuilder: FormBuilder,private _categories: UserCategoriesForAuctionService){
//
}
}
In HTML rendered as:
<div class="form-col">
<h5><span style="color:red;">*</span>{{'createAuction.category' | translate}}:</h5>
<select class="form_cbx long-select" name="" ng-model="test">
<option ng-options="t in test">{{t}}</option>
</select>
</div>
I am not able to see any values. Only one blank field in the list. Can anyone direct what is the issue here?
A: You need to use the compareWith property on the select tag. But note: if you are using Angular 4, your HTML still looks like AngularJS (ng-model and ng-options are AngularJS directives; in Angular you would use [(ngModel)] and *ngFor).
HTML File :
<select [compareWith]="byAnimal" [(ngModel)]="selectedAnimal">
<option *ngFor="let animal of animals" [ngValue]="animal">
{{animal.type}}
</option>
</select>
TS File
byAnimal(item1,item2){
return item1.type == item2.type;
}
One of the best solutions, from this link.
A: It should be,
<select [(ngModel)]="selectedanimal" (ngModelChange)="onChange($event)">
<option *ngFor="let c of test" [ngValue]="c"> {{c}} </option>
</select>
DEMO
A: Correct Way would be:
<select id="select-type-basic" [(ngModel)]="status">
<option *ngFor="let status_item of status_values">
{{status_item}}
</option>
</select>
Value Should be avoided inside option since that will set the default value of the 'Select field'. Default Selection should be binded with [(ngModel)] and Options should be declared likewise.
status : any = "Completed";
status_values: any = ["In Progress", "Completed", "Closed"];
| stackoverflow | {
"language": "en",
"length": 216,
"provenance": "stackexchange_0000F.jsonl.gz:867683",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550964"
} |
15097ba5afac5ed0ce9505951eaa6c93e68aa833 | Stackoverflow Stackexchange
Q: Flutter constructor error I have a problem with a code snippet I tried to play with, and since I am new to dart I don't really understand the error message. Can somebody explain to me why the error message says
The constructor returns type 'dynamic' that isn't of expected type
'widget'.
and how to fix it?
A: The class MaterialList doesn't exist. It looks like maybe you meant TwoLevelList, which is deprecated. You should try ListView instead.
A: If you have other import statements try to use alias as some libraries may be the reason for conflict.
Example: import 'package:html/parser.dart' as parser;
| stackoverflow | {
"language": "en",
"length": 103,
"provenance": "stackexchange_0000F.jsonl.gz:867699",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44550997"
} |
9209b72aad4e5f8f238e8d55a530b670acff248b | Stackoverflow Stackexchange
Q: Hibernate char column to enum I have column 'gender' as a VARCHAR(1)/CHAR.
How to parse values "M"/"F" to java enum with values (MALE, FEMALE) in @Entity class without using
@Column(name="gender")
private Character cGender;
and converting it manually to enum object?
A: You need to provide your own converter:
@Convert(converter = GenderConverter.class)
@Column(name="gender")
private Gender gender;
Then implement it
public class GenderConverter implements AttributeConverter<Gender, Character> {
@Override
public Character convertToDatabaseColumn(Gender from) {
Character value = 'm';
if (from == Gender.FEMALE) {
value = 'f';
}
return value;
}
@Override
public Gender convertToEntityAttribute(Character to) {
Gender g = Gender.MALE;
if ('f' == to) {
g = Gender.FEMALE;
}
return g;
}
}
A: For an enum you can map the field as an enum type instead of a Character type, e.g.:
@Enumerated(EnumType.STRING)
@Column(name = "GENDER")
private Gender gender;
A: In addition to @Alexey Soshin's answer:
It is also possible to add @Converter(autoApply = true) on the converter class; after that Hibernate will convert all Gender fields automatically (no need for @Convert(converter = GenderConverter.class)).
| stackoverflow | {
"language": "en",
"length": 168,
"provenance": "stackexchange_0000F.jsonl.gz:867717",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551058"
} |
1b62ae16c7b5869a5cbfed416d93b66659c274a3 | Stackoverflow Stackexchange
Q: How often does code in an *ngIf condition in Angular 2+ fire? I am curious about how *ngif works:
<img *ngif="isMediaMessage(message) === 'audio'" src="assets/img/audio1" />
1)
When I put a console inside the isMediaMessage function, the console prints out indefinitely; I wonder why it does that. Is it because of the digest loop? dirty checking? I am reading up more on these.
2) Should I use less data binding if I want to reduce rendering time?
3) Would you guys say this article is up to date?
This might be related.
A: This comes down to the change-detection cycle and the bindings being watched on the page.
Every time anything on the page may have changed, change detection runs and re-evaluates the *ngIf expression, so the function in your condition fires again; that is why your console output repeats indefinitely.
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:867785",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551282"
} |
a01ac40ddb8143e6385f6edb43c644564ab5e602 | Stackoverflow Stackexchange
Q: Spark Create a dataframe from an InputStream? I want to avoid writing the entire stream to a file and then load it to dataframe. what's the right way?
A: You can check the Spark Streaming programming guide and the SqlNetworkWordCount example, which show that your problem can be solved by creating a singleton instance of SparkSession from the SparkContext used by Spark Streaming.
You should get a better idea by going through the links above, where DataFrames are created from streaming RDDs.
| stackoverflow | {
"language": "en",
"length": 73,
"provenance": "stackexchange_0000F.jsonl.gz:867827",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551396"
} |
44999e44283067b71124902a3254e3c7e22e7897 | Stackoverflow Stackexchange
Q: When I get a JsonReaderException in JSON.NET, how do I get the original text that caused the parse exception? I use a third party library (the MS Bot Framework StateClient to be specific) that in turn uses JSON.NET. Something is being returned to the third party library and that is causing a JsonReaderException.
I don't have access to the raw original text that is causing the parse exception, and I can't intercept the communication because I'm running in an Azure App Service.
The only information I have is the following (which could mean anything):
Unexpected character encountered while parsing value: T. Path '', line 0, position 0. at Newtonsoft.Json.JsonTextReader.ParseValue(
How can I get the original raw text that caused the parse exception from JSON.NET?
I was hoping it would be on the JsonReaderException object, but it was not. I also tried hooking up an "Error" handler in JsonSerializerSettings, but that does not contain the original text either.
| stackoverflow | {
"language": "en",
"length": 159,
"provenance": "stackexchange_0000F.jsonl.gz:867850",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551474"
} |
95dc2893e661293cb252f651ed76ea58d22b661d | Stackoverflow Stackexchange
Q: How to do a determinant in ArrayFire? How can I compute a simple determinant of an af::array?
Tried to use, on af::array x:
af::det(x)
det(x)
x.det()
and none of them work.
Can someone help me?
error: no matching function for call to ‘det(af::array&)’
if(det(x) == 0){
candidate: template<class T> T af::det(const af::array&)
template<typename T> T det(const array &in);
^
Thanks.
A: According to the documentation, the function is templated. You should try something like this instead:
std::cout << af::det<float>(x);
| stackoverflow | {
"language": "en",
"length": 78,
"provenance": "stackexchange_0000F.jsonl.gz:867853",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551482"
} |
f153e7832f204009dc54ccb06e5afd5be181afc3 | Stackoverflow Stackexchange
Q: maxIter parameter in spark ml.classification What is the role of the maxIter parameter used in LogisticRegression from pyspark.ml.classification?
mlor = LogisticRegression(maxIter=5, regParam=0.01, weightCol="weight",
family="multinomial")
A: Logistic regression is optimized by iterative methods like gradient descent. It is most likely the maximum number of iterations to run the optimization algorithm.
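A tiny self-contained sketch of what that cap bounds (plain-Python gradient descent on a 1-D logistic regression; Spark's actual optimizer is a quasi-Newton method, but maxIter plays the same role of a hard ceiling on optimizer iterations):

```python
import math

def train_logistic(xs, ys, max_iter=100, lr=0.5):
    """1-D logistic regression fitted by plain gradient descent;
    max_iter is the hard cap on optimization iterations."""
    w, b = 0.0, 0.0
    for _ in range(max_iter):  # this loop is what a maxIter parameter bounds
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

xs, ys = [-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1]
w_few, _ = train_logistic(xs, ys, max_iter=5)
w_many, _ = train_logistic(xs, ys, max_iter=200)
print(w_few, w_many)  # more iterations move the weight further toward convergence
```

Too small a maxIter stops the optimizer before it converges; too large just wastes time once the loss has flattened out (regParam adds the regularization that keeps the weights from growing without bound).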
| stackoverflow | {
"language": "en",
"length": 50,
"provenance": "stackexchange_0000F.jsonl.gz:867877",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551549"
} |
6d1fd50f48db06e03e15b939676b04738c54b613 | Stackoverflow Stackexchange
Q: Log4j2 wildcard logger names I am using log4j to do some logging on one of my applications. The Loggers in my config are looking like this.
<Root level="info">
<AppenderRef ref="Console"/>
</Root>
<Logger name="org.eclipse.jetty.server.Server" level="warn" additivity="false">
<AppenderRef ref="Console"></AppenderRef>
</Logger>
<Logger name="org.eclipse.jetty.util.log" level="warn" additivity="false">
<AppenderRef ref="Console"></AppenderRef>
</Logger>
<Logger name="org.eclipse.jetty.server.ServerConnector" level="warn" additivity="false">
<AppenderRef ref="Console"></AppenderRef>
</Logger>
Basically, I want the "info" level messages from the code that I have written, but I want the external libs to only log if something is a warning or more dire.
This is working as I might expect, but there are a lot of classes under "org.eclipse.jetty"
Is it possible to do something like this?
<Logger name="org.eclipse.jetty.*" level="warn">
<AppenderRef ref="Console" level="warn"></AppenderRef>
</Logger>
That is, I want everything in the entire package to only warn/error/fatal.
I tried the above and it had no effect. Is there a "wildcard" or something I can use to set up a logger for everything in the package?
A: For property-based configuration you simply use the package name without any wildcards. The same works with XML too :)
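That is because Log4j2 logger names are hierarchical: a logger configured with a package name applies to every class beneath that package (unless a more specific logger overrides it), so no wildcard syntax is needed. The three Jetty loggers above collapse into one:

```xml
<Logger name="org.eclipse.jetty" level="warn" additivity="false">
  <AppenderRef ref="Console"/>
</Logger>
```

Any class under org.eclipse.jetty, e.g. org.eclipse.jetty.server.Server, now resolves to this logger.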
A: I have a similar issue, but I have tons of classes in an Apache Kafka library package, and I just need to enable logging for the various packages whose names match the regular expression below:
org.apache.kafka*.clients.
In the future, if I update my Kafka library, support for a Kafka producer newer than version 2.6 might get added, along with a producer package for the corresponding version;
e.g. there might be a version 2.8 producer with the package name
org.apache.kafka280.clients.producer.
This is just to future-proof my logging configuration.
The various packages I currently want logged are as follows.
org.apache.kafka.clients.producer
org.apache.kafka260.clients.producer
org.apache.kafka221.clients.producer
org.apache.kafka240.clients.producer
org.apache.kafka250.clients.producer
org.apache.kafka251.clients.producer
| stackoverflow | {
"language": "en",
"length": 278,
"provenance": "stackexchange_0000F.jsonl.gz:867883",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551571"
} |
9626c7465c87ed9c6799e7a4698dae00abd1a6ec | Stackoverflow Stackexchange
Q: Python Keyring, how to pass master password Is there any way to pass the master password automatically/programmatically? As per the code below, we have to input the password manually; can we avoid this?
import keyring
keyring.set_password('testuser','testuser','testpassword')
test = keyring.get_password('testuser', 'testuser')
print(test)
A: If this is your personal computer then I would suggest that you store your master password in a secure location like /etc.
For linux you can create a file like /etc/master_passwd.txt and store password in it (run the below command with sudo).
$ touch /etc/master_passwd.txt > your_password_here
Then in your python script you can get master password with:
with open('/etc/master_passwd.txt', 'r') as passwd_file:
master_password = passwd_file.read()
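One gotcha with this approach: echo and most editors leave a trailing newline in the file, which silently becomes part of the password unless you strip it. A small self-contained sketch (using a temporary file in place of /etc/master_passwd.txt):

```python
import os
import tempfile

def read_master_password(path):
    """Read the stored master password, dropping surrounding whitespace/newlines."""
    with open(path, "r") as f:
        return f.read().strip()

# Demo: a temp file stands in for /etc/master_passwd.txt
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("s3cret\n")  # note the trailing newline that echo would add
print(read_master_password(path))
os.remove(path)
```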
| stackoverflow | {
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:867896",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551614"
} |
f66c5f139aa7e2abfc19f159d792fc00d327d833 | Stackoverflow Stackexchange
Q: How to manually edit the link to share in Google Drive or Google Photos When photos are uploaded to Google Drive or Google Photos and shared with others, the link will be https://docs.google.com/xyz. However, is there any way to convert this into a meaningful link like "Link to Presentation 2017"?
A: This option is currently not available for Google Drive or Google Photos.
Some good references:
*How to change / modify shareable Google Drive link? (which mentions the same, option is currently not available)
*Is it possible to have clean URLs for Google Drive items? (accepted answer discusses why the URLs should stay as they are)
| stackoverflow | {
"language": "en",
"length": 108,
"provenance": "stackexchange_0000F.jsonl.gz:867961",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551784"
} |
18bc117c8236dc5eb806c1fae558d9c2fac77c02 | Stackoverflow Stackexchange
Q: Import CSS from node_modules using Gulp-SASS I want to import a CSS-file from my node_modules using SASS.
@import 'normalize.css/normalize';
This is how my gulpfile.js handles my SASS:
const
gulp = require('gulp'),
sass = require('gulp-sass');
gulp.task('sass', function () {
return gulp.src(['source/assets/css/**.scss', '!source/assets/css/**/_*.[scss|sass'])
.pipe(sass())
.pipe(gulp.dest('output/assets/css'));
});
SASS compiler will not import the css from node_modules. Instead, this will throw an error.
Error: File to import not found or unreadable: normalize.css/normalize.
A: What works for me, in 2020, is this:
function styles() {
return (
gulp.src(paths.styles.src)
.pipe(sourcemaps.init())
.pipe(sass({
includePaths: ['./node_modules/purecss-sass/vendor/assets/stylesheets/',
'./node_modules/modularscale-sass/stylesheets/',
'./node_modules/typi/scss/'
]
}))
.on("error", sass.logError)
.pipe(postcss([autoprefixer(), cssnano()]))
.pipe(sourcemaps.write())
.pipe(gulp.dest(paths.styles.dest))
.pipe(browserSync.stream())
);
}
Now in the scss files, I can
@import 'modularscale';
@import 'typi';
@import 'purecss';
The other options seem to be:
*put the full paths to the main _somelibrary.scss file directly in the scss files (minus the extension), so something like:
@import '../../node_modules/purecss-sass/vendor/assets/stylesheets/_purecss';
*Put includePaths: ['./node_modules'] and add the relative paths in the scss files:
@import 'purecss-sass/vendor/assets/stylesheets/_purecss';
| Q: Import CSS from node_modules using Gulp-SASS I want to import a CSS-file from my node_modules using SASS.
@import 'normalize.css/normalize';
This is how my gulpfile.js handles my SASS:
const
gulp = require('gulp'),
sass = require('gulp-sass');
gulp.task('sass', function () {
return gulp.src(['source/assets/css/**.scss', '!source/assets/css/**/_*.{scss,sass}'])
.pipe(sass())
.pipe(gulp.dest('output/assets/css'));
});
SASS compiler will not import the css from node_modules. Instead, this will throw an error.
Error: File to import not found or unreadable: normalize.css/normalize.
A: What works for me, in 2020, is this:
function styles() {
return (
gulp.src(paths.styles.src)
.pipe(sourcemaps.init())
.pipe(sass({
includePaths: ['./node_modules/purecss-sass/vendor/assets/stylesheets/',
'./node_modules/modularscale-sass/stylesheets/',
'./node_modules/typi/scss/'
]
}))
.on("error", sass.logError)
.pipe(postcss([autoprefixer(), cssnano()]))
.pipe(sourcemaps.write())
.pipe(gulp.dest(paths.styles.dest))
.pipe(browserSync.stream())
);
}
Now in the scss files, I can
@import 'modularscale';
@import 'typi';
@import 'purecss';
The other options seem to be:
*
*put the full paths to the main _somelibrary.scss file directly in the scss files (minus the extension), so something like:
@import '../../node_modules/purecss-sass/vendor/assets/stylesheets/_purecss';
*Put includePaths: ['./node_modules'] and add the relative paths in the scss files:
@import 'purecss-sass/vendor/assets/stylesheets/_purecss';
A: SASS compiler doesn't know where to look for the files. The location needs to be specified.
gulp.task('sass', function () {
return gulp.src(['source/assets/css/**.scss', '!source/assets/css/**/_*.{scss,sass}'])
.pipe(sass({
includePaths: ['node_modules']
}))
.pipe(gulp.dest('output/assets/css'));
});
| stackoverflow | {
"language": "en",
"length": 187,
"provenance": "stackexchange_0000F.jsonl.gz:867975",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551822"
} |
46490d2c88eb11e1d2b81728bb9ff29fb2a2aef9 | Stackoverflow Stackexchange
Q: Can VSCode (Visual Studio Code) be used to run elm-make? I am new to VSCode.
I am new to ELM.
I am perfectly capable of using VIM and command line tools to create an ELM Project, but I want to utilize an IDE. I have chosen VSCode on advice from the internet since it seems to pick up some nice pieces of VIM.
So now I have a few .elm files.
Main.elm
View.elm
Model.elm
I want to run elm-make on Model.elm to make sure it has no errors.
Then I want to run elm-make on Main.elm to create an index.html so I can view my project.
I think this is a pretty simple question for people familiar with how to customize VSCode, but as I stated previously, I am new to VSCode.
A: Try setting up a task for elm-make:
Create a .vscode/tasks.json with the contents:
{
"version": "0.1.0",
"tasks": [
{
"taskName": "elm make",
"isBuildCommand": true,
"command": "elm-make",
"args": ["./main.elm"],
"isShellCommand": true
}
]
}
You can then use the build command to run the task, or run the task individually.
You may also want to look into the elm extension: https://marketplace.visualstudio.com/items?itemName=sbrink.elm
| Q: Can VSCode (Visual Studio Code) be used to run elm-make? I am new to VSCode.
I am new to ELM.
I am perfectly capable of using VIM and command line tools to create an ELM Project, but I want to utilize an IDE. I have chosen VSCode on advice from the internet since it seems to pick up some nice pieces of VIM.
So now I have a few .elm files.
Main.elm
View.elm
Model.elm
I want to run elm-make on Model.elm to make sure it has no errors.
Then I want to run elm-make on Main.elm to create an index.html so I can view my project.
I think this is a pretty simple question for people familiar with how to customize VSCode, but as I stated previously, I am new to VSCode.
A: Try setting up a task for elm-make:
Create a .vscode/tasks.json with the contents:
{
"version": "0.1.0",
"tasks": [
{
"taskName": "elm make",
"isBuildCommand": true,
"command": "elm-make",
"args": ["./main.elm"],
"isShellCommand": true
}
]
}
You can then use the build command to run the task, or run the task individually.
You may also want to look into the elm extension: https://marketplace.visualstudio.com/items?itemName=sbrink.elm
| stackoverflow | {
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:868012",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551944"
} |
575421fb145f260900f0aafce18f2b0d1d2f8e18 | Stackoverflow Stackexchange
Q: Elasticsearch installation : Error missing 'server' JVM at ...jvm.dll After having downloaded elasticsearch and unzipped it following the steps in this link:
Install Elastic Search on Windows
I am receiving the following error:
Error: missing 'server' JVM at 'C:\Program Files (x86)\Java\jre1.8.0_131\bin\server\jvm.dll'.
Please install or use the JRE or JDK that contains these missing components.
Note: I also had to install the JDK8 as suggested in this resolution
Should I change something in the .config file? Maybe this line?
# force the server VM (remove on 32-bit client JVMs)
-server
A: I solved this by installing Java JRE 64-bit.
And then setting the environment variable JAVA_HOME to this version. (In my case C:\Program Files\Java\jre1.8.0_131)
| Q: Elasticsearch installation : Error missing 'server' JVM at ...jvm.dll After having downloaded elasticsearch and unzipped it following the steps in this link:
Install Elastic Search on Windows
I am receiving the following error:
Error: missing 'server' JVM at 'C:\Program Files (x86)\Java\jre1.8.0_131\bin\server\jvm.dll'.
Please install or use the JRE or JDK that contains these missing components.
Note: I also had to install the JDK8 as suggested in this resolution
Should I change something in the .config file? Maybe this line?
# force the server VM (remove on 32-bit client JVMs)
-server
A: I solved this by installing Java JRE 64-bit.
And then setting the environment variable JAVA_HOME to this version. (In my case C:\Program Files\Java\jre1.8.0_131)
A: Set your JAVA_HOME environment variable to point to the path of your JDK 8 installation.
You can do this on the command line as the example below illustrates:
SET JAVA_HOME="C:\Program Files (x86)\Java\jdk1.8.0_131"
SET PATH=%JAVA_HOME%\bin;%PATH%
Confirm that the correct version of the JDK is in your PATH with:
javac -version
A: I solved my issue editing that line of file jvm.options from:
# force the server VM
-server
to:
# force the server VM
-client
A: Quick (hack) alternative:
*
*Copy jdk1.8.0_131\bin\client to jdk1.8.0_131\bin\server
*If necessary, configure Elasticsearch JVM heap size in config/jvm.options
A: I had same issue:
Error: missing 'server' JVM at 'C:\Program Files (x86)\Java\jre1.8.0_131\bin\server\jvm.dll'.
Please install or use the JRE or JDK that contains these missing components.
It got resolved just by setting java_home:
SET JAVA_HOME="C:\Program Files (x86)\Java\jdk1.8.0_131"
SET PATH=%JAVA_HOME%\bin;%PATH%
A: I faced this issue while running SonarQube Server on my local machine.
If none of the above solution works just check the SonarQube Version you are using and the JDK version it runs on which is mentioned on the SonarQube site.
Mine was SonarQube 7.9.3
https://docs.sonarqube.org/latest/requirements/requirements/
Changing JDK 15.0.1 to 11.0.9 Fixed the issue.
| stackoverflow | {
"language": "en",
"length": 301,
"provenance": "stackexchange_0000F.jsonl.gz:868020",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551961"
} |
cd8af46fa70ed79d1dc196419eefdd2c3b272390 | Stackoverflow Stackexchange
Q: libuv how to ignore SIGPIPE when connection reset by peer (linux) I receive a SIGPIPE from uv_write() as the connection is reset by the peer.
p/x stream->flags
0x46064
./src/unix/internal.h:# define UV__POLLRDHUP 0x2000
For BSD to prevent SIGPIPE libuv has:
./src/unix/core.c
#if defined(SO_NOSIGPIPE)
{
int on = 1;
setsockopt(sockfd, SOL_SOCKET, SO_NOSIGPIPE, &on, sizeof(on));
}
#endif
Linux does not have SO_NOSIGPIPE as an option. Any suggestions on how to handle the SIGPIPE signal on Linux, other than signal(SIGPIPE, SIG_IGN)?
A: You would have to update uv_write() to call send() with the MSG_NOSIGNAL flag:
Don't generate a SIGPIPE signal if the peer on a stream-oriented socket has closed the connection. The EPIPE error is still returned. This provides similar behavior to using sigaction(2) to ignore SIGPIPE, but, whereas MSG_NOSIGNAL is a per-call feature, ignoring SIGPIPE sets a process attribute that affects all threads in the process.
Q: libuv how to ignore SIGPIPE when connection reset by peer (linux) I receive a SIGPIPE from uv_write() as the connection is reset by the peer.
p/x stream->flags
0x46064
./src/unix/internal.h:# define UV__POLLRDHUP 0x2000
For BSD to prevent SIGPIPE libuv has:
./src/unix/core.c
#if defined(SO_NOSIGPIPE)
{
int on = 1;
setsockopt(sockfd, SOL_SOCKET, SO_NOSIGPIPE, &on, sizeof(on));
}
#endif
Linux does not have SO_NOSIGPIPE as an option. Any suggestions on how to handle the SIGPIPE signal on Linux, other than signal(SIGPIPE, SIG_IGN)?
A: You would have to update uv_write() to call send() with the MSG_NOSIGNAL flag:
Don't generate a SIGPIPE signal if the peer on a stream-oriented socket has closed the connection. The EPIPE error is still returned. This provides similar behavior to using sigaction(2) to ignore SIGPIPE, but, whereas MSG_NOSIGNAL is a per-call feature, ignoring SIGPIPE sets a process attribute that affects all threads in the process.
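For illustration, Python's socket module exposes the same MSG_NOSIGNAL flag on Linux. A minimal sketch, using a local socketpair as a stand-in for a peer that closed the connection (note that CPython already ignores SIGPIPE by default, so what you observe here is the EPIPE error path the man page describes; in a C program using libuv, the flag is what prevents the process-killing signal):

```python
import socket

# A connected pair of stream sockets; closing one end simulates the
# peer going away (connection reset/closed).
writer, peer = socket.socketpair()
peer.close()

try:
    # With MSG_NOSIGNAL the kernel returns EPIPE for this one call
    # instead of delivering SIGPIPE; Python surfaces EPIPE as
    # BrokenPipeError.
    writer.send(b"payload", socket.MSG_NOSIGNAL)
    got_epipe = False
except BrokenPipeError:
    got_epipe = True
finally:
    writer.close()
```

MSG_NOSIGNAL is per-call, so it only affects this send, unlike signal(SIGPIPE, SIG_IGN), which is process-wide.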
A: The short answer to your question is: no, any way to handle SIGPIPE but to set a signal handler.
See this issue for a discussion about SO_NOSIGPIPE and libuv. It also clarifies why there exists that code for BSD.
| stackoverflow | {
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:868031",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44551990"
} |
5bea75da434f4ce368f259bf965ba8b4591d02c0 | Stackoverflow Stackexchange
Q: Angular Material 2 mat-palette() using lighter and darker option Based on the theming guide you can define a custom theme accent for our example. The second parameter will override the $mat-pink default to A200, but how do the 3rd ($lighter) and 4th ($darker) parameters work?
$candy-app-accent: mat-palette($mat-pink, A200, A100, A400);
From @angular/material/_theming.scss
@function mat-palette($base-palette, $default: 500, $lighter: 100, $darker: 700) {
.....
}
A: Those different hues are used by a few components, such as the progress bar. But it's more useful if you're using them as mixins for your own styling. This answer on a GH issue gives a good explanation.
Q: Angular Material 2 mat-palette() using lighter and darker option Based on the theming guide you can define a custom theme accent for our example. The second parameter will override the $mat-pink default to A200, but how do the 3rd ($lighter) and 4th ($darker) parameters work?
$candy-app-accent: mat-palette($mat-pink, A200, A100, A400);
From @angular/material/_theming.scss
@function mat-palette($base-palette, $default: 500, $lighter: 100, $darker: 700) {
.....
}
A: Those different hues are used by a few components, such as the progress bar. But it's more useful if you're using them as mixins for your own styling. This answer on a GH issue gives a good explanation.
| stackoverflow | {
"language": "en",
"length": 103,
"provenance": "stackexchange_0000F.jsonl.gz:868041",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552017"
} |
d153ae7c254280f78f8a811debd3e2e35c608008 | Stackoverflow Stackexchange
Q: [sklearn][standardscaler] can I inverse the standardscaler for the model output? I have some data structured as below, trying to predict t from the features.
train_df
t: time to predict
f1: feature1
f2: feature2
f3:......
Can t be scaled with StandardScaler, so I instead predict t' and then inverse the StandardScaler to get back the real time?
For example:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(train_df['t'])
train_df['t']= scaler.transform(train_df['t'])
run regression model,
check score,
!! check predicted t' with real time value(inverse StandardScaler) <- possible?
A: Yeah, and it's conveniently called inverse_transform.
The documentation provides examples of its use.
| Q: [sklearn][standardscaler] can I inverse the standardscaler for the model output? I have some data structured as below, trying to predict t from the features.
train_df
t: time to predict
f1: feature1
f2: feature2
f3:......
Can t be scaled with StandardScaler, so I instead predict t' and then inverse the StandardScaler to get back the real time?
For example:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(train_df['t'])
train_df['t']= scaler.transform(train_df['t'])
run regression model,
check score,
!! check predicted t' with real time value(inverse StandardScaler) <- possible?
A: Yeah, and it's conveniently called inverse_transform.
The documentation provides examples of its use.
A: Here is sample code. You can replace here data with train_df['colunm_name'].
Hope it helps.
from sklearn.preprocessing import StandardScaler
data = [[1,1], [2,3], [3,2], [1,1]]
scaler = StandardScaler()
scaler.fit(data)
scaled = scaler.transform(data)
print(scaled)
# for inverse transformation
inversed = scaler.inverse_transform(scaled)
print(inversed)
A: While @Rohan's answer generally worked for me and my DataFrame column, I had to reshape the data according to the below StackOverflow answer.
Sklearn transform error: Expected 2D array, got 1D array instead
scaler = StandardScaler()
scaler.fit(df[[col_name]])
scaled = scaler.transform(df[[col_name]])
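Putting the reshape and the round trip together, a minimal sketch (the column values are made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical target column; StandardScaler expects a 2-D array of
# shape (n_samples, n_features), hence the reshape(-1, 1).
t = np.array([10.0, 20.0, 30.0, 40.0]).reshape(-1, 1)

scaler = StandardScaler()
t_scaled = scaler.fit_transform(t)  # zero mean, unit variance

# ... fit a regressor on t_scaled and predict in the scaled space ...

# Map scaled values (or predictions) back to the original units:
t_back = scaler.inverse_transform(t_scaled)
```

The same scaler instance must be reused for inverse_transform, since it stores the mean and scale learned during fit.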
| stackoverflow | {
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:868049",
"question_score": "28",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552031"
} |
133de06d509e06616b38989ee5fc930532992527 | Stackoverflow Stackexchange
Q: Random number in range - sql oracle I am trying to write a query. It will take employee IDs from a table, and assign random numbers between 1-45 in whole number increments. The data is in Oracle - sql.
A: Got it:
round(DBMS_RANDOM.VALUE (1, 45))
round helps me get whole numbers.
| Q: Random number in range - sql oracle I am trying to write a query. It will take employee IDs from a table, and assign random numbers between 1-45 in whole number increments. The data is in Oracle - sql.
A: Got it:
round(DBMS_RANDOM.VALUE (1, 45))
round helps me get whole numbers.
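One caveat worth knowing: DBMS_RANDOM.VALUE(low, high) is documented to return a value greater than or equal to low and less than high, so round(DBMS_RANDOM.VALUE(1, 45)) makes the endpoint values 1 and 45 roughly half as likely as the interior values; TRUNC(DBMS_RANDOM.VALUE(1, 46)) gives a uniform whole-number draw instead. A quick Python simulation of the two approaches:

```python
import random
from collections import Counter

random.seed(0)

# round(DBMS_RANDOM.VALUE(1, 45)): the draw is uniform on [1, 45), so
# after rounding, 1 and 45 each get about half the probability mass of
# an interior value such as 22.
rounded = Counter(round(random.uniform(1, 45)) for _ in range(100_000))

# TRUNC(DBMS_RANDOM.VALUE(1, 46)) equivalent: uniform over 1..45.
uniform = Counter(int(random.uniform(1, 46)) for _ in range(100_000))
```

If a slight endpoint bias is acceptable for assigning numbers to employees, round() is fine; otherwise prefer the TRUNC form.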
| stackoverflow | {
"language": "en",
"length": 52,
"provenance": "stackexchange_0000F.jsonl.gz:868096",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552182"
} |
f126ec286f7500b3172d0ea142a5adffcee54d36 | Stackoverflow Stackexchange
Q: How to only list folders in S3 in PHP I know that to get the list of all the contents of a bucket you do something like this:
$s3 = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-east-1',
'credentials' => array(
'key' => <key>,
'secret' => <secret>
)
]);
$objects = $s3->getIterator('ListObjects', array('Bucket' => <bucketname>, 'Prefix' => 'downloads/'));
Is there a way to only get the list of folders inside 1 specific folder instead of the entire recursive list of contents?
A: <?php
use Aws\S3\S3Client;
require_once 'vendor/autoload.php';
$s3 = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-east-1',
'credentials' => array(
'key' => <key>,
'secret' => <secret>
)
]);
$objects = $s3->ListObjects(['Bucket' => <bucketname>, 'Delimiter'=>'/', 'Prefix' => 'downloads']);
var_dump($objects);
?>
| Q: How to only list folders in S3 in PHP I know that to get the list of all the contents of a bucket you do something like this:
$s3 = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-east-1',
'credentials' => array(
'key' => <key>,
'secret' => <secret>
)
]);
$objects = $s3->getIterator('ListObjects', array('Bucket' => <bucketname>, 'Prefix' => 'downloads/'));
Is there a way to only get the list of folders inside 1 specific folder instead of the entire recursive list of contents?
A: <?php
use Aws\S3\S3Client;
require_once 'vendor/autoload.php';
$s3 = new Aws\S3\S3Client([
'version' => 'latest',
'region' => 'us-east-1',
'credentials' => array(
'key' => <key>,
'secret' => <secret>
)
]);
$objects = $s3->ListObjects(['Bucket' => <bucketname>, 'Delimiter'=>'/', 'Prefix' => 'downloads']);
var_dump($objects);
?>
| stackoverflow | {
"language": "en",
"length": 120,
"provenance": "stackexchange_0000F.jsonl.gz:868101",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552193"
} |
8887fbf4dfad3efaedc87a1490a4ab871a315e13 | Stackoverflow Stackexchange
Q: Hidden statusBar re-appears when I show an alert dialog, How to prevent it? (Android) I am using Android Studio, and I have set up my main Activity in order to not show the statusBar. But when I display an alert dialog, the status bar reappears, and it won't hide after.
(Also, if I receive a notification on my phone, the status bar will show up and not hide.)
How can I fix that? Can I set my entire app to keep the status Bar hidden?
Here is my current setting:
in onCreate()
View decorView = getWindow().getDecorView();
int uiOptions = View.SYSTEM_UI_FLAG_FULLSCREEN;
decorView.setSystemUiVisibility(uiOptions);
A: I found a working solution!
In my onCreate method, I replaced the previous code by this one:
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
Q: Hidden statusBar re-appears when I show an alert dialog, How to prevent it? (Android) I am using Android Studio, and I have set up my main Activity in order to not show the statusBar. But when I display an alert dialog, the status bar reappears, and it won't hide after.
(Also, if I receive a notification on my phone, the status bar will show up and not hide.)
How can I fix that? Can I set my entire app to keep the status Bar hidden?
Here is my current setting:
in onCreate()
View decorView = getWindow().getDecorView();
int uiOptions = View.SYSTEM_UI_FLAG_FULLSCREEN;
decorView.setSystemUiVisibility(uiOptions);
A: I found a working solution!
In my onCreate method, I replaced the previous code by this one:
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
| stackoverflow | {
"language": "en",
"length": 123,
"provenance": "stackexchange_0000F.jsonl.gz:868110",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552221"
} |
dfb130c52544ab60610cad465c667a7f432d262d | Stackoverflow Stackexchange
Q: What user does cloud-config run as? When spinning up an AWS or other cloud vendor instance, and using cloud-init to configure the host, what user do bash scripts that get called run as?
A: Building an instance with the following config:
#cloud-config
write_files:
- path: /root/test.sh
content: |
#!/bin/bash
set -x
set -e
whoami
runcmd:
- bash /root/test.sh
I got an output of:
+ whoami
root
Ubuntu cloud-config runs as root.
| Q: What user does cloud-config run as? When spinning up an AWS or other cloud vendor instance, and using cloud-init to configure the host, what user do bash scripts that get called run as?
A: Building an instance with the following config:
#cloud-config
write_files:
- path: /root/test.sh
content: |
#!/bin/bash
set -x
set -e
whoami
runcmd:
- bash /root/test.sh
I got an output of:
+ whoami
root
Ubuntu cloud-config runs as root.
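Since everything in runcmd executes as root, a command that needs to run as an unprivileged user has to drop privileges itself. A sketch of that pattern (the ubuntu user name is an assumption; substitute your image's default user):

```yaml
#cloud-config
runcmd:
  # runcmd entries run as root
  - whoami > /root/ran-as.txt
  # explicitly drop to an unprivileged user for this one command
  - sudo -u ubuntu whoami > /tmp/ran-as-user.txt
```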
| stackoverflow | {
"language": "en",
"length": 72,
"provenance": "stackexchange_0000F.jsonl.gz:868130",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552288"
} |
2c4ef9bacd0399876a87fc3369afcb22376020f9 | Stackoverflow Stackexchange
Q: How to enable arm native bridge in android emulator? I wish to test ARM applications in an android emulator (SDK) running android 6.0 with an Intel x86 architecture.
Since running an ARM (armeabi-v7a) system image is very slow compared to an Intel one, which takes advantage of Intel(R) Hardware Accelerated Execution and Intel(R) Virtualization Technology (VT), switching to an ARM image is not an acceptable solution.
I have noted that Android x86 (RemixOS) supports native ARM applications through an ARM bridge. This is possible through the use of the libhoudini.so libraries.
I rooted my virtual device in the emulator and tried pushing these libraries through adb into /system/lib; unfortunately this didn't work.
Is there something that I am doing wrong or maybe there is a simpler way to do this?
| Q: How to enable arm native bridge in android emulator? I wish to test ARM applications in an android emulator (SDK) running android 6.0 with an Intel x86 architecture.
Since running an ARM (armeabi-v7a) system image is very slow compared to an Intel one, which takes advantage of Intel(R) Hardware Accelerated Execution and Intel(R) Virtualization Technology (VT), switching to an ARM image is not an acceptable solution.
I have noted that Android x86 (RemixOS) supports native ARM applications through an ARM bridge. This is possible through the use of the libhoudini.so libraries.
I rooted my virtual device in the emulator and tried pushing these libraries through adb into /system/lib; unfortunately this didn't work.
Is there something that I am doing wrong or maybe there is a simpler way to do this?
| stackoverflow | {
"language": "en",
"length": 126,
"provenance": "stackexchange_0000F.jsonl.gz:868134",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552313"
} |
b87916523c275df222e78776c99dcb9863d72034 | Stackoverflow Stackexchange
Q: Should I commit yarn.lock and package-lock.json files? We're using yarn for all our deterministic pkg installations but don't prevent the user from using npm - I'm guessing having both these files will cause issues however. Should one be added to your .gitignore dir?
A: Here's my rule of thumb: if you are working on an application, commit the lock file(s). If you are maintaining a library, add it to your ignored list. Either way you should be using accurate semver ranges in package.json. Yehuda Katz (cached) wrote a great explanation for when to commit Gemfile.lock (Ruby's lock file) and when to not. At least read the tl;dr section.
| Q: Should I commit yarn.lock and package-lock.json files? We're using yarn for all our deterministic pkg installations but don't prevent the user from using npm - I'm guessing having both these files will cause issues however. Should one be added to your .gitignore dir?
A: Here's my rule of thumb: if you are working on an application, commit the lock file(s). If you are maintaining a library, add it to your ignored list. Either way you should be using accurate semver ranges in package.json. Yehuda Katz (cached) wrote a great explanation for when to commit Gemfile.lock (Ruby's lock file) and when to not. At least read the tl;dr section.
A: These files are managed by your tools, so–assuming using yarn will effectively update the package-lock.json–I suppose committing both files works fine.
I think the most important for your user is package-lock.json (I, for instance, don't use yarn) so this one has to be committed.
For the yarn.lock, it depends if you work alone or in a team. If solo, then I suppose there is no need to commit it. If you (plan to) work in a team, then you probably should commit it, at least until yarn supports it
I suppose the yarn team will eventually stop using yarn.lock and use package-json.lock instead, at this time it will become simpler
A: You should commit 1 dependency tree lock file, but you shouldn't commit both. This also requires standardizing on either yarn or npm (not both) to build + develop a project with.
Here's the yarn article on why yarn.lock should be committed, if you standardize on yarn.
If you commit both the yarn.lock file, AND the package-lock.json files there are a lot of ways that the 2 files can provide different dependency trees (even if yarn's and npm's tree resolution algorithms are identical), and it's non-trivial to ensure that they provide exactly the same answer. Since it's non-trivial, it's unlikely that the same dependency tree will be maintained in both files, and you don't want different behavior depending on whether the build was done using yarn or npm.
If and when yarn switches from using yarn.lock to package-lock.json (issue here), then the choice of lock file to commit becomes easy, and we no longer have to worry about yarn and npm resulting in different builds. Based on this blog post, this is a change we shouldn't expect soon (the blog post also describes the differences between yarn.lock and package-lock.json).
A: Always commit dependency lock files in general
As is covered elsewhere, dependency lock files, which are supported by many package management systems (e.g.:
composer and bundler), should be committed to the codebase in end-of-chain projects - so that each individual trying to run that project is doing so with exactly the tested set of dependencies.
It's less clear whether lock files should always be committed into packages that are intended to be included in other projects (where looser dependencies are desirable). However, both Yarn and NPM (as covered by @Cyrille) intelligently ignore yarn.lock and package-lock.json respectively where necessary, making it safe to always commit these lockfiles.
So you should always commit at least one of yarn.lock or package-lock.json depending on which package manager you're using.
Should you commit both yarn.lock and package-lock.json?
At present we have two different package management systems, which both install the same set of dependencies from package.json, but which generate and read from two different lockfiles. NPM 5 generates package-lock.json, whereas Yarn generates yarn.lock.
If you commit package-lock.json then you're building in support for people installing your dependencies with NPM 5. If you commit yarn.lock, you're building in support for people installing dependencies with Yarn.
Whether you choose to commit yarn.lock or package-lock.json or both depends on whether those developing on your project are only using Yarn or NPM 5 or both. If your project is open-source, the most community-friendly thing to do would probably be to commit both and have an automated process to ensure yarn.lock and package-lock.json always stay in sync.
Update: Yarn have now introduced an import command which will generate a yarn.lock file from a package-lock.json file. This could be useful for keeping the two files in sync. (Thanks @weakish)
This issues was discussed at length on the Yarn project in:
*
*"Idea: support package-lock.json from npm 5"
*"Competing lockfiles create poor UX"
Both are now closed.
A: I was thinking about the same question. Here are my thoughts, hope it helps :
The npm package-lock.json documentation says the following :
package-lock.json is automatically generated for any operations where npm modifies either the node_modules tree, or package.json. It describes the exact tree that was generated, such that subsequent installs are able to generate identical trees, regardless of intermediate dependency updates.
This is great because it prevents the "works on my machine" effect.
Without this file, if you npm install --save A, npm will add "A": "^1.2.3" to your package.json. When somebody else runs npm install on your project, it is possible that the version 1.2.4 of A has been released. Since it is the latest available version that satisfies the semver range specified in your package.json, it will install this version. But what if there's a new bug introduced in this version? This person will have a problem that you can't reproduce because you have the previous version, without any bug.
By fixing the state of your node_modules directory, package-lock.json file prevents this problem because everybody will have the same versions of every packages.
But, what if you're writing and publishing a npm module ? The documentation says the following :
One key detail about package-lock.json is that it cannot be published, and it will be ignored if found in any place other than the toplevel package.
So, even if you commit it, when the user installs your module, he/she will not get the package-lock.json file, but only the package.json file. So npm will install the latest version that satisfies the semver ranges of all your dependencies. It means that you always want to test your module with these versions of your dependencies, and not the ones you installed when you started writing your module. So, in that case, package-lock.json is clearly useless. Moreover, it can be annoying.
A: You're correct! Allowing both npm and yarn to be used is going to cause issues. Take a look at this article.
Currently, we’re planning to add some warnings to users who use both yarn and npm in the same repository to install packages.
We highly recommend you to delete the package-lock.json file if you decide to use yarn in order to avoid future confusion and possible consistency issues.
You may not want both npm and yarn as your package manager.
A: No, using both lock files simultaneously will most often result in inconsistencies in your dependency tree, especially when collaborating on a team. Ignoring one lock or the other is a simple solution. Just make sure your team understands and agrees with this change.
| stackoverflow | {
"language": "en",
"length": 1162,
"provenance": "stackexchange_0000F.jsonl.gz:868143",
"question_score": "172",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552348"
} |
d8b7a76bdf7963bde9db614e3d25353a9bb79132 | Stackoverflow Stackexchange
Q: Prevent TensorFlow from accessing the GPU? Is there a way to run TensorFlow purely on the CPU? All of the memory on my machine is hogged by a separate process running TensorFlow. I have tried setting the per_process_memory_fraction to 0, unsuccessfully.
A: You can use only CPUs by opening a session with a GPU limit of 0:
sess = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
See https://www.tensorflow.org/api_docs/python/tf/ConfigProto for more details.
A proof that it works for @Nicolas:
In Python, write:
import tensorflow as tf
sess_cpu = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
Then in a terminal:
nvidia-smi
You will see something like:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 24869 C /.../python 99MiB |
+-----------------------------------------------------------------------------+
Then repeat the process:
In Python, write:
import tensorflow as tf
sess_gpu = tf.Session()
Then in a terminal:
nvidia-smi
You will see something like:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 25900 C /.../python 5775MiB |
+-----------------------------------------------------------------------------+
| Q: Prevent TensorFlow from accessing the GPU? Is there a way to run TensorFlow purely on the CPU? All of the memory on my machine is hogged by a separate process running TensorFlow. I have tried setting the per_process_memory_fraction to 0, unsuccessfully.
A: You can use only CPUs by openning a session with a GPU limit of 0:
sess = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
See https://www.tensorflow.org/api_docs/python/tf/ConfigProto for more details.
A proof that it works for @Nicolas:
In Python, write:
import tensorflow as tf
sess_cpu = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
Then in a terminal:
nvidia-smi
You will see something like:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 24869 C /.../python 99MiB |
+-----------------------------------------------------------------------------+
Then repeat the process:
In Python, write:
import tensorflow as tf
sess_gpu = tf.Session()
Then in a terminal:
nvidia-smi
You will see something like:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 25900 C /.../python 5775MiB |
+-----------------------------------------------------------------------------+
A: Have a look to this question or this answer.
To summarise you can add this piece of code:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
import tensorflow as tf
Playing with the CUDA_VISIBLE_DEVICES environment variable is one of the best ways (if not the way) to go whenever you have a GPU build of TensorFlow installed and you don't want to use any GPUs.
You want to either export CUDA_VISIBLE_DEVICES= or alternatively use a virtualenv with a non-GPU installation of TensorFlow.
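A minimal, hedged sketch of the environment-variable approach. The variable must be set before TensorFlow is first imported anywhere in the process; the import itself is shown commented out so the snippet stands alone (in newer TensorFlow versions you would check with tf.config.list_physical_devices):

```python
import os

# Hide every GPU from TensorFlow/CUDA. This must happen *before* the
# first `import tensorflow` anywhere in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf
# tf.config.list_physical_devices("GPU")  # would now return []
```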
| stackoverflow | {
"language": "en",
"length": 238,
"provenance": "stackexchange_0000F.jsonl.gz:868221",
"question_score": "47",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552585"
} |
d31d3f33e64c1a3eb0d91b32e729cb932614c49b | Stackoverflow Stackexchange
Q: when can I use * in assembly language? with this data
.data
tableD DWORD 10h, 20h, 30h, 40h, 50h, 60h
Rowsize = ($ - tableD)
DWORD 60h,70h,80h,90h,0A0h
DWORD 0B0h,0C0h,0D0h,0E0h,0F0h
I can use
.code
mov eax,tableD[ebx + esi*TYPE tableD]
but I can't use
mov eax,tableD[ebx*2 + esi*TYPE tableD]
but I can use
mov eax,tableD[ebx*2 + esi]
Can't I use two *s in there?
And what are the proper terms for those parts of the operand?
A: The x86 architecture supports a four-part addressing mode of the following form:
base + index * scale + displacement
where all four parts can be absent (scale is 1 if absent). This means that there can be only one scaled component in a memory operand; so yes, you can only use one *.
Furthermore, scaling factors are limited to 1, 2, 4, or 8; other scaling factors cannot be encoded.
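The rule above can be sketched numerically. Python is used here purely as a calculator; the function name and keyword names are illustrative, not part of any assembler:

```python
def effective_address(base=0, index=0, scale=1, displacement=0):
    """Compute an x86 effective address: base + index*scale + displacement."""
    # Only these scale factors can be encoded in the SIB byte.
    assert scale in (1, 2, 4, 8), "x86 can only encode scales of 1, 2, 4 or 8"
    return base + index * scale + displacement

# mov eax, tableD[ebx + esi*4] with tableD at 0x1000, ebx=8, esi=2:
addr = effective_address(base=8, index=2, scale=4, displacement=0x1000)
# 0x1000 + 8 + 2*4 == 0x1010
```

Because there is exactly one `scale` slot in the formula, only one component of the operand may carry a `*`.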
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:868239",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552628"
} |
bcaf2ea66a311f5152acc75549b4647ece66c299 | Stackoverflow Stackexchange
Q: How to clear HTML data list current options? I am writing a dynamic data list. However, when I tried to update the list, the previous options didn't clear. Are there any solutions?
Here is my code
function loadDataList(selectedSchoolName)
{
var options = '';
//document.getElementById('schoolNameList').remove();
for(var i = 0; i < selectedSchoolName.length; i++)
{
options += '<option value="'+ selectedSchoolName[i] +'" >';
}
document.getElementById('schoolNameList').innerHTML = options;
}
Thank You
A: In this instance, you don't want to remove schoolNameList itself; you want to remove the children of that list (the list items). There are a few ways to do this, but this one should work:
document.getElementById('schoolNameList').innerHTML = '';
A: I like this one. Seems the cleanest I could find, but it is jQuery not vanilla JS DOM.
$('#schoolNameList').empty();
A: A simpler way to do this in vanilla JS is by using the node.replaceWith() method.
Removing all child nodes in a loop can be a costly DOM operation.
Example:
const node = document.getElementById("schoolNameList");
if(node.hasChildNodes()) {
const newNodeToReplace = node.cloneNode(false); //false: because we don't want to deep clone it
node.replaceWith(newNodeToReplace);
}
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:868272",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552724"
} |
07eaf7e6f82f88c5930d257b3b8ed5ee181bd750 | Stackoverflow Stackexchange
Q: How to use JSON::Validator with Perl 5.8? There are not many modules which offer functionality to validate JSON files when a JSON schema is provided.
I found out that
JSON::Validator
has a dependency on Mojolicious. And Mojolicious has a dependency on the feature pragma, which was introduced in Perl 5.10.
Is there any way to avoid using feature, or to use it in Perl 5.8?
| stackoverflow | {
"language": "en",
"length": 64,
"provenance": "stackexchange_0000F.jsonl.gz:868331",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552890"
} |
b7a03e0c181e3fb2d369c0047761936ae5540e1f | Stackoverflow Stackexchange
Q: How to add markers in react-google-maps? Using React JS in Meteor 1.5
Question: Need a way to add Marker using
react-google-maps
Using ES6 and in JSX format
Followed the documentation and was able to get the map embedded in, but not able to add the marker.
Here is my code:
const InitialMap = withGoogleMap(props => {
var index = this.marker.index || [];
return(
<GoogleMap
ref={props.onMapLoad}
defaultZoom={14}
defaultCenter={{lat: 40.6944, lng:-73.9213}}
>
<Marker
key={index}
position={marker.position}
onClick={() => props.onMarkerClick(marker)}
/>
</GoogleMap>
)
});
export default class MapContainer extends Component{
    constructor(props){
        super(props);
        this.state = {
markers:[{
position:{
lat: 255.0112183,
lng:121.52067570000001,
}
}]
}
}
render(){
return(
<div style={{height:"100%"}}>
<InitialMap
containerElement={
<div style={{height:"100%"}}/>
}
mapElement={
<div style={{height:"100%"}} />
}
markers={this.state.markers} />
</div>
)
}
}
A: Added the first constant
const GettingStartedGoogleMap = withGoogleMap(props => (
<GoogleMap
ref={props.onMapLoad}
zoom={13}
center={{ lat: 21.178574, lng: 72.814149 }}
onClick={props.onMapClick}
>
{props.markers.map(marker => (
<Marker
{...marker}
onRightClick={() => props.onMarkerRightClick(marker)}
/>
))}
  </GoogleMap>
));
Changed the containerElement size and mapElement size to pixels instead of percentage
containerElement={
<div style={{ height: `150px` }} />
}
mapElement={
<div style={{ height: `150px` }} />
}
And just adding marker to the function which was called
markers={this.state.markers}
A: I'd check over your lat, lng coordinates again. From google explaining coordinates
"Check that the first number in your latitude coordinate is between -90 and 90."
Also any other error info would be helpful getting an answer for you.
| stackoverflow | {
"language": "en",
"length": 232,
"provenance": "stackexchange_0000F.jsonl.gz:868341",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552917"
} |
ded03ef5b9de6158e56a88d6db6eb7d849e976cf | Stackoverflow Stackexchange
Q: Send Firebase Cloud Messaging notification to users by user property I'm trying to send out an FCM message to a specific set of users (or really, a single user) based on a specific user property but looking through the FCM HTTP API I can't seem to find a way to do that. I can send to users via topics, registration tokens, and device group notification keys, but I don't have any of that infrastructure set up in the near term. I know this functionality exists as you can send such a message via the UI, but I'm not seeing how to do it in the API documentation as yet.
A: There is currently no parameter that you could use to specify a user property (or even for user segments) that will serve as a target for the FCM API to send the message to.
As you've already searched, the only targets possible are single/multiple registration tokens (to and registration_ids), topics, conditions, and device groups (notification_key).
The option you're looking for is currently only available when using the Notifications Console.
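For reference, a hedged sketch of what targeting via the legacy FCM HTTP endpoint looks like, which makes the limitation visible: the only targeting fields are token, topic, condition, or device group, with no field for a user property. SERVER_KEY and the registration token below are placeholders, and the actual network call is left commented out:

```python
import json
import urllib.request

# Placeholder payload: targeting is only possible via "to" (a token or
# topic), "registration_ids", "condition", or a device-group key.
payload = {
    "to": "DEVICE_REGISTRATION_TOKEN",
    "notification": {"title": "Hi", "body": "Targeted by token, not by user property"},
}
request = urllib.request.Request(
    "https://fcm.googleapis.com/fcm/send",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": "key=SERVER_KEY", "Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # not executed here; needs a real server key
```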
| stackoverflow | {
"language": "en",
"length": 180,
"provenance": "stackexchange_0000F.jsonl.gz:868346",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552926"
} |
2cc6e79737341e2224ed8da95e8807f89cd5a43d | Stackoverflow Stackexchange
Q: Dynamically changing the content of: meta property="og:image" I want to change the main photo in a web page; I just have the page URL. So I decided to use the meta tags written for Facebook sharing.
I want to change the image in: meta property="og:image" content="http://myweb.com/image.jpg"
A: Change it with jQuery like:
$('meta[property=og\\:image]').attr('content', 'http://myweb.com/image.jpg');
A: You can change og:image with following code:
$('meta[name=og\\:image]').attr('content', newVideoUrl);
But, if you want to change the image permanently (so Facebook can scrape your data and the image will be available for sharing), you need to change this value on the server.
Facebook is reading <meta og:image> only from the response of the server.
See similar topic: Facebook scraper doesn't load dynamic meta-tags
A: for javascript:
document.querySelectorAll('meta[property=og\\:image]')[0].setAttribute('content', 'http://myweb.com/image.jpg')
A: I think this can be useful to you. Instead of getAttribute you should use setAttribute, and that's all. :-)
| stackoverflow | {
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:868352",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552942"
} |
385e76dc33a3765e993ade15ab617d2412c731df | Stackoverflow Stackexchange
Q: Maintain Order of multiple JobServices scheduled through JobScheduler When using JobScheduler to schedule JobServices, if you queue up multiple jobs how does the order of execution get decided?
For instance, if JobServices get scheduled 1, 2, 3 in that order with the same requirements, what is the logic behind the OS picking which one to do first when those requirements are met?
In my app I've seen the jobTags scheduled 1,2,3,4,5,6 in order and when network finally is available they executed 4,3,6,1,5. Not sure what logic that is but would like to know more if anyone has some insight. I couldn't find anything in the documentation or the several articles there are out there for job scheduling.
| stackoverflow | {
"language": "en",
"length": 118,
"provenance": "stackexchange_0000F.jsonl.gz:868356",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552959"
} |
a21a413a37a916842f3595e3ca01de5b6ad78d45 | Stackoverflow Stackexchange
Q: Getting error: 'Cannot assign to 'location' because it is a constant or a read-only property' with mailto function in Angular app I am trying to set up a function in my Angular 2 app that will send an email using the user's default email client with some pre-populated info:
sendEmail() {
this.title = document.title;
this.title = this.title.replace("&", "-");
window.location = "mailto:?body=" + this.title + " - " + window.location + "&subject=I thought this link might interest you.";
}
But I'm running into an issue where I'm getting an error:
Cannot assign to 'location' because it is a constant or a read-only
property. webpack: Failed to compile.
The examples I've seen so far all describe doing it this way, with "window.location", so how can I resolve this issue?
A: You're missing the href
window.location.href = ....
You can also do this with the Angular Router by giving it a static url:
this.router.navigateByUrl('url')
| stackoverflow | {
"language": "en",
"length": 152,
"provenance": "stackexchange_0000F.jsonl.gz:868367",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552984"
} |
ccabc0e03ec1b7b51bf965f682c078053d3768d8 | Stackoverflow Stackexchange
Q: When using python networkx is it possible to add multiple labels to a single node (i.e. a main label and then a sub label)? When using python networkx is it possible to add multiple labels to a single node (i.e. a main label and then a sub label in each node)?
A: If you mean 'attribute' for 'label', then you can do this in (at least) 2 ways
For example:
import networkx as nx
G = nx.Graph()
G.add_node('Bob', {'age':45, 'gender':'male'})
G.node['Bob']['age']
> 45
G.add_node('Sara', age=40, gender = 'female')
G.node['Sara']['age']
> 40
G.node['Sara']['gender']
> 'female'
Notice that in the assignment for 'Sara' I didn't need to make the attribute names into strings, but when I accessed them, I did.
If on the other hand you mean that you want to have two different names for the node when you reference it, that's a different matter. For example say you want to use G.neighbors(node_name) to access the neighbors of a given node, you won't be able to use 'Robert' and 'Bob' interchangeably for the node name (unless there's something I'm unaware of).
A: Nodes can be any hashable Python object. You could use a tuple of multiple labels if you want.
From the documentation: https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.Graph.add_node.html#networkx.Graph.add_node
A hashable object is one that can be used as a key in a Python dictionary. This includes strings, numbers, tuples of strings and numbers, etc.
On many platforms hashable items also include mutables such as NetworkX Graphs, though one should be careful that the hash doesn’t change on mutables.
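A short sketch covering both readings of the question. Note it uses the modern networkx API (version 2.0 and later takes attributes as keyword arguments and exposes node data via G.nodes[...] rather than the older G.node[...] shown above); the attribute and node names are illustrative:

```python
import networkx as nx

G = nx.Graph()

# Reading 1: a "sub label" stored as a node attribute.
G.add_node("Bob", age=45, gender="male", sublabel="Robert")

# Reading 2: a composite label. Any hashable object, e.g. a tuple,
# is a valid node, so main and sub label can live in the node key itself.
G.add_node(("Sara", "manager"))
```

With the tuple approach, you must always reference the node by the full tuple; the parts are not interchangeable aliases.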
| stackoverflow | {
"language": "en",
"length": 254,
"provenance": "stackexchange_0000F.jsonl.gz:868374",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44552994"
} |
5e39b14fc58a7991f8d479c8107c1b3ffe9b2755 | Stackoverflow Stackexchange
Q: How to "browser back" when using pushState? I have this code:
window.history.pushState(newUrl, "", newUrl);
My question is: how do I make sure that, when using pushState, the browser back button functions as normal, in other words goes "back"?
(without using jQuery)
A: The normal behaviour for the back button is for the browser to go back to the previous document, but when you use pushState, there isn't a previous document.
The point of pushState is to keep the browser on the same document while updating the URL. This is accompanied by DOM changes applied with JavaScript.
It is a simulation of going to a new page.
To make the back button appear to work, you need to write a matching simulation of going to the previous page.
You can do this by listening for a popstate event.
Page <span id="p">1</span>
<button>Next</button>
<script>
document.querySelector("button").addEventListener("click", function () {
document.getElementById('p').textContent++;
history.pushState({}, "", "/" + document.getElementById('p').textContent);
});
addEventListener("popstate", function (e) {
document.getElementById('p').textContent--;
e.preventDefault();
});
</script>
| Q: How to "browser back" when using pushState? I have this code:
window.history.pushState(newUrl, "", newUrl);
My question is, how to make sure that when doing pushState the browser back button will function as normal or in other words should go "back"?
(without using jQUery)
A: The normal behaviour for the back button is for the browser to go back to the previous document, but when you use pushState, there isn't a previous document.
The point of pushState is to keep the browser on the same document while updating the URL. This is accompanied by DOM changes applied with JavaScript.
It is a simulation of going to a new page.
To make the back button appear to work, you need to write a matching simulation of going to the previous page.
You can do this by listening for a popstate event.
Page <span id="p">1</span>
<button>Next</button>
<script>
document.querySelector("button").addEventListener("click", function () {
document.getElementById('p').textContent++;
history.pushState({}, "", "/" + document.getElementById('p').textContent);
});
addEventListener("popstate", function (e) {
document.getElementById('p').textContent--;
e.preventDefault();
});
</script>
A: push is for pushing, i.e. adding; you should go for history.back().
If you want to pop state, emit a popstate event on window or do history.replaceState().
If you want to cancel the commented event, my answer will do the trick:
https://stackoverflow.com/a/44553087/5694206
| stackoverflow | {
"language": "en",
"length": 203,
"provenance": "stackexchange_0000F.jsonl.gz:868377",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553005"
} |
369479980b0dcd1dd9a5e8dd95ad527c9ade3932 | Stackoverflow Stackexchange
Q: Setting 'HttpOnly' and 'Secure' in web.xml I need to have the 'HttpOnly' and 'Secure' attributes set to 'true' to prevent the CWE-614: Sensitive Cookie in HTTPS Session Without 'Secure' Attribute and CWE-402: Transmission of Private Resources into a New Sphere flaws from showing in the Veracode report.
After doing some online searching, it seems that the best thing to do is to simply set the attributes in the project's web.xml file as follows:
<session-config>
<cookie-config>
<http-only>true</http-only>
<secure>true</secure>
</cookie-config>
</session-config>
However, I get an error message on the opening tag saying that "The content of element type "session-config" must match "(session-timeout)?".
I'm not sure what that means exactly. I'm guessing it has something to do with the order of elements but I don't really know how to fix it.
Any thoughts?
Thanks!
A: The support for the secure and http-only attributes is only available from Servlet specification 3.0. Check that the version attribute in your web.xml is "3.0".
<web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
version="3.0">
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:868381",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553017"
} |
664523c53308006eb17ac5ac51fe42e27618a65a | Stackoverflow Stackexchange
Q: import with jest error: Unexpected token import I've seen similar questions but still can't find a viable solution.
I'm trying to integrate Jest into a working project, which uses import/export default in hundreds of places. The following test does work for Jest using require:
const bar = require('../../flows/foo');
test('adds 1 + 2 to equal 3', () => {
expect(bar.foobar(1, 2)).toBe(3);
});
when export is:
module.exports = {
foobar: foobar,
fizz: fizz
}
The functions I'll want to be testing however are exported using:
export default {
foobar: foobar,
fizz: fizz
};
So when I try to update my test to import:
import foobar from '../../flows/foo';
With export:
export default {foobar: foobar};
I get the error
SyntaxError: Unexpected token import
A: All it takes:
// run this command (or npm equivalent)
yarn add @babel/core @babel/preset-env
// add babel.config.js
module.exports = {
presets: [
[
'@babel/preset-env',
{
targets: {
node: 'current'
}
}
]
]
};
Jest automatically picks it up, no other configuration required.
A: You have not set up a .babelrc file in your project, so transpiling is not happening. You need to transpile the ES6+ syntax (import, export, etc) into browser readable ES5.
A: I ran into this and solved it this way thanks to this GitHub issue post:
If you're using babel to transpile your code then remember to use the transform-es2015-modules-commonjs plugin.
To use it, you'll need to:
*
*Install the plugin for BabelJS by entering this command in the CLI:
npm install --save-dev babel-plugin-transform-es2015-modules-commonjs
*Add the plugin to your list of plugins in your babel config
plugins: [
"transform-es2015-modules-commonjs"
]
| stackoverflow | {
"language": "en",
"length": 265,
"provenance": "stackexchange_0000F.jsonl.gz:868383",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553020"
} |
7b722606895190a2fcb101f3f46f45400884712e | Stackoverflow Stackexchange
Q: How to delete a date from pandas date_range So I have a pandas date_range like so
dates = pd.date_range(start='2005-1-1', end='2014-12-31', freq='D')
I want to remove all the extra days resulting from leap years.
I do a for loop
for each in index:
if each.month==2 and each.day==29:
print(each) # I actually want to delete this item from dates
But my problem is that I don't know how to delete the item. The regular python list methods and functions don't work.
I've looked everywhere on SO. I've looked at the documentation for pandas.date_range but found nothing
Any help will be appreciated.
A: You probably want to use drop to remove the rows.
import pandas as pd
dates = pd.date_range(start='2005-1-1', end='2014-12-31', freq='D')
leap = []
for each in dates:
if each.month==2 and each.day ==29:
leap.append(each)
dates = dates.drop(leap)
| Q: How to delete a date from pandas date_range So I have a pandas date_range like so
dates = pd.date_range(start='2005-1-1', end='2014-12-31', freq='D')
I want to remove all the extra days resulting from leap years.
I do a for loop
for each in index:
if each.month==2 and each.day==29:
print(each) # I actually want to delete this item from dates
But my problem is that I don't know how to delete the item. The regular Python list methods and functions don't work.
I've looked everywhere on SO. I've looked at the documentation for pandas.date_range but found nothing
Any help will be appreciated.
A: You probably want to use drop to remove the rows.
import pandas as pd
dates = pd.date_range(start='2005-1-1', end='2014-12-31', freq='D')
leap = []
for each in dates:
    if each.month==2 and each.day ==29:
        leap.append(each)
dates = dates.drop(leap)
A: You could try creating two Series objects to store the months and days separately and use them as masks.
dates = pd.date_range(start='2005-1-1', end='2014-12-31', freq='D') #All dates between range
days = dates.day #Store all the days
months = dates.month #Store all the months
dates = dates[~((days == 29) & (months == 2))] #Filter out only Feb 29 using a negated combined mask
Just to check if the approach works, if you drop the ~ negation, we can see the dates you wish to eliminate.
UnwantedDates = dates[(days == 29) & (months == 2)]
Output:
DatetimeIndex(['2008-02-29', '2012-02-29'], dtype='datetime64[ns]', freq=None)
A: You can try:
dates = dates[~dates['Date'].str.contains('02-29')]
In place of Date you will have to put the name of the column where the dates are stored.
You don't have to use the for loop so it is faster to run.
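For completeness, the same filtering can be done as a vectorized one-liner, building the boolean mask directly on the DatetimeIndex and negating it so that only Feb 29 is dropped:

```python
import pandas as pd

dates = pd.date_range(start='2005-1-1', end='2014-12-31', freq='D')
# Keep everything that is NOT (month == 2 AND day == 29)
dates = dates[~((dates.month == 2) & (dates.day == 29))]
```

This avoids both the loop and the intermediate list.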
| stackoverflow | {
"language": "en",
"length": 271,
"provenance": "stackexchange_0000F.jsonl.gz:868395",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553054"
} |
0c74485515b0069dc08c12bc72d723593f7dd297 | Stackoverflow Stackexchange
Q: How does Yarn resolve conflicting dependencies? I am new to Yarn. I wonder how it resolves dependencies if I have package A and B in package.json where package A depends on package C@1.0.0 and package B depends on package C@2.0.0. Would it include both versions, which would bloat up the build?
| Q: How does Yarn resolve conflicting dependencies? I am new to Yarn. I wonder how it resolves dependencies if I have package A and B in package.json where package A depends on package C@1.0.0 and package B depends on package C@2.0.0. Would it include both versions, which would bloat up the build?
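From what I understand of Yarn's resolution (worth verifying against your own yarn.lock): yes. When the two semver ranges cannot be satisfied by a single version, Yarn installs both copies, hoisting one to the top of node_modules and nesting the other under the package that needs it, roughly like this (package names from the question):

```text
node_modules/
├── A/                    depends on C@1.0.0
├── B/
│   └── node_modules/
│       └── C/            2.0.0, nested because it conflicts
└── C/                    1.0.0, hoisted to the top level
```

If the ranges are compatible (e.g. ^1.0.0 and ^1.2.0), Yarn deduplicates to a single version, so only genuinely incompatible requirements bloat the install or the bundle.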
| stackoverflow | {
"language": "en",
"length": 52,
"provenance": "stackexchange_0000F.jsonl.gz:868470",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553280"
} |
d89a60ac8e0ce68f66445b33848a35f0bc0c26e2 | Stackoverflow Stackexchange
Q: How to get the client ip address from browser in angular (typescript) Hey there, I would really appreciate it if you can provide me with an example where a TypeScript class can get the client's IP address and the browser that the client is using, and set those values in variables.
I want to do this in TypeScript, not in JavaScript. Is that possible, and if not, how can it be done with TypeScript?
-
So For Example I can
*
*set those variables while submitting the form to the database in the back end
*I can for example display for the user the browser he is using
any help would be appreciated thanks
A: Try the services of https://geolocation-db.com to get the public ip address of the user.
import { HttpClient } from "@angular/common/http";
import { catchError, tap } from "rxjs/operators";
this.http.get<any>('https://geolocation-db.com/json/')
.pipe(
catchError(err => {
return throwError(err);
}),
tap(response => {
console.log(response.IPv4);
})
)
| Q: How to get the client ip address from browser in angular (typescript) Hey there, I would really appreciate it if you can provide me with an example where a TypeScript class can get the client's IP address and the browser that the client is using, and set those values in variables.
I want to do this in TypeScript, not in JavaScript. Is that possible, and if not, how can it be done with TypeScript?
-
So For Example I can
*
*set those variables while submitting the form to the database in the back end
*I can for example display for the user the browser he is using
any help would be appreciated thanks
A: Try the services of https://geolocation-db.com to get the public ip address of the user.
import { HttpClient } from "@angular/common/http";
import { catchError, tap } from "rxjs/operators";
this.http.get<any>('https://geolocation-db.com/json/')
.pipe(
catchError(err => {
return throwError(err);
}),
tap(response => {
console.log(response.IPv4);
})
)
A: You should try like this
var json = 'http://ipv4.myexternalip.com/json';
$http.get(json).then(function(result) {
console.log(result.data.ip)
}, function(e) {
alert("error");
});
A: I used this as a basis, but it did not solve my problem because it returned the public IP assigned by the internet provider rather than the internal address.
For an internal network with DHCP, change the URL to the following:
getIpCliente(): Observable<string> {
return this.http
.get('http://api.ipify.org/?format=jsonp&callback=JSONP_CALLBACK')
.map((res: Response) => {
console.log('res ', res);
console.log('res.json() ', res.text());
console.log('parseado stringify ', JSON.stringify(res.text()));
let ipVar = res.text();
let num = ipVar.indexOf(":");
let num2 = ipVar.indexOf("\"});");
ipVar = ipVar.slice(num+2,num2);
return ipVar
}
);
}
A: Try This :
Create Provider and add function with required dependencies :
import { Injectable } from '@angular/core';
import { Http, Response, Headers, RequestOptions } from '@angular/http';
import {Observable} from 'rxjs/Rx';
import 'rxjs/add/operator/toPromise';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
import 'rxjs/Rx';
// Function :
getIP(): Observable<Data[]> {
return this.http.get('http://ipinfo.io') // ...using post request
.map((res:Response) => res.json()) // ...and calling .json() on the response to return data
.catch((error:any) => Observable.throw(error.json().error || 'Server error')); //...errors if any
}
Controller Code :
getIP() {
this.loading = true;
this._myIPService.getIP()
.subscribe(
IPDetails => this.IppDetails = IPDetails,
error => this.errorMessage = <any>error
);
}
You will have all the details of IP in this.IppDetails
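None of the answers above address the second half of the question, detecting the client's browser. A minimal client-side sketch based on navigator.userAgent (substring checks are heuristic, and the order matters because Edge user agents contain "Chrome" and Chrome user agents contain "Safari"):

```typescript
// Heuristic browser detection from a user-agent string.
function detectBrowser(userAgent: string): string {
  if (userAgent.includes('Firefox/')) { return 'Firefox'; }
  if (userAgent.includes('Edg/'))     { return 'Edge'; }
  if (userAgent.includes('Chrome/'))  { return 'Chrome'; }
  if (userAgent.includes('Safari/'))  { return 'Safari'; }
  return 'unknown';
}

// In a component you would call it with the real value:
// this.browser = detectBrowser(navigator.userAgent);
```

Taking the user-agent string as a parameter keeps the function testable outside a browser.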
| stackoverflow | {
"language": "en",
"length": 361,
"provenance": "stackexchange_0000F.jsonl.gz:868483",
"question_score": "16",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553323"
} |
815dc4b1b6a16c0b2d0378613c0662538084ef27 | Stackoverflow Stackexchange
Q: Nix: Querying packages' packages I can query available packages w/ nix-env -qa [package] but how can I look for optional packages (e.g. libraries) that depend on a primary package and can be loaded or installed separately? Example: Coq (coq-8.6) has packages coqPackages_8_6.ssreflect and coqPackages_8_6.mathcomp that I can get no information about in Nix AFAIK
A: nix search can also be used to search for packages. Caching all available packages is slow, but searching the cache is quite fast! Check nix search --help!
| Q: Nix: Querying packages' packages I can query available packages w/ nix-env -qa [package] but how can I look for optional packages (e.g. libraries) that depend on a primary package and can be loaded or installed separately? Example: Coq (coq-8.6) has packages coqPackages_8_6.ssreflect and coqPackages_8_6.mathcomp that I can get no information about in Nix AFAIK
A: nix search can also be used to search for packages. Caching all available packages is slow, but searching the cache is quite fast! Check nix search --help!
A: I personally never use nix-env -qa because it is very slow.
Instead, when I'm looking for a top-level package (an application), I use http://nixos.org/nixos/packages.html.
When (like in your question), I'm looking for a non-top-level package, I use the auto-completion of nix repl. Run it with
nix repl '<nixpkgs>'
Now type for instance coqPackages and use auto-complete to see all the available versions. Type coqPackages_8_6. and use auto-complete to see all the available packages in this set.
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:868495",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553350"
} |
798efc75569e9e92b33cf564951ea3f7b343d0f7 | Stackoverflow Stackexchange
Q: ExitCode 1603 when silently installing a .msi I'm trying to write a script that installs a .msi silently. When I run the command from the Powershell command line as a ./thing.msi with the argument /qn, it works just fine.
However, now that it is in a script it is returning a 1603 error ("A fatal error occurred during install"). If I try to switch it up and go to /qb with or without /quiet, it runs, but it's not silent. Using -WindowStyle Hidden is doing nothing of note either. Any thoughts?
$InsightInstall = Start-Process -FilePath $PSScriptRoot\support.msi -ArgumentList "/quiet /qb" -Wait -Passthru -WindowStyle Hidden
if($InsightInstall.ExitCode -eq 0)
{
Write-Host "Installation complete."
}
else
{
Write-Host "Failed with ExitCode" $InsightInstall.ExitCode
pause
}
A: You don't need to try that hard (I don't think Start-Process is needed). Just run msiexec and specify the package, followed by parameters.
msiexec /i d:\path\package.msi /quiet
| Q: ExitCode 1603 when silently installing a .msi I'm trying to write a script that installs a .msi silently. When I run the command from the Powershell command line as a ./thing.msi with the argument /qn, it works just fine.
However, now that it is in a script it is returning a 1603 error ("A fatal error occurred during install"). If I try to switch it up and go to /qb with or without /quiet, it runs, but it's not silent. Using -WindowStyle Hidden is doing nothing of note either. Any thoughts?
$InsightInstall = Start-Process -FilePath $PSScriptRoot\support.msi -ArgumentList "/quiet /qb" -Wait -Passthru -WindowStyle Hidden
if($InsightInstall.ExitCode -eq 0)
{
Write-Host "Installation complete."
}
else
{
Write-Host "Failed with ExitCode" $InsightInstall.ExitCode
pause
}
A: You don't need to try that hard (I don't think Start-Process is needed). Just run msiexec and specify the package, followed by parameters.
msiexec /i d:\path\package.msi /quiet
| stackoverflow | {
"language": "en",
"length": 150,
"provenance": "stackexchange_0000F.jsonl.gz:868528",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553450"
} |
cbb28d298fda644c497f0f3f85461ee25427aa85 | Stackoverflow Stackexchange
Q: css input required not focused or disabled I'm trying to create a css rule to change the background color of an input if it is required, but not disabled or in focus
I have this, but it's not working obviously!
<input type="tel" class="form-control" placeholder="Phone" name="phone" value="" required>
and css:
input:not(disabled):not(focus):required {
background-color: rgba(255,0,0,.10)
}
A: Use the : for your pseudo-selectors :disabled and :focus
input:required:not(:disabled):not(:focus)
| Q: css input required not focused or disabled I'm trying to create a css rule to change the background color of an input if it is required, but not disabled or in focus
I have this, but it's not working obviously!
<input type="tel" class="form-control" placeholder="Phone" name="phone" value="" required>
and css:
input:not(disabled):not(focus):required {
background-color: rgba(255,0,0,.10)
}
A: Use the : for your pseudo-selectors :disabled and :focus
input:required:not(:disabled):not(:focus)
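Putting it together, the complete corrected rule reads:

```css
input:required:not(:disabled):not(:focus) {
  background-color: rgba(255, 0, 0, 0.10);
}
```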
| stackoverflow | {
"language": "en",
"length": 66,
"provenance": "stackexchange_0000F.jsonl.gz:868565",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553566"
} |
3466ed0156eae7500560b64c5fa925d26d522638 | Stackoverflow Stackexchange
Q: LayoutResultCallback() is not public in LayoutResultCallback; cannot be accessed from outside package error while generating pdf I am trying to print webview html in android api 19. PrintDocumentAdapter is added in api level 19 according to doc
https://developer.android.com/reference/android/print/PrintDocumentAdapter.html
and
https://developer.android.com/reference/android/print/PrintDocumentAdapter.LayoutResultCallback.html
But I am getting 2 errors:
Error:(38, 64) error: LayoutResultCallback() is not public in LayoutResultCallback; cannot be accessed from outside package
and
Error:(42, 101) error: WriteResultCallback() is not public in WriteResultCallback; cannot be accessed from outside package
my code is
public void print(final PrintDocumentAdapter printAdapter, final File path, final String fileName) {
printAdapter.onLayout(null, printAttributes, null, new PrintDocumentAdapter.LayoutResultCallback() {
@Override
public void onLayoutFinished(PrintDocumentInfo info, boolean changed) {
printAdapter.onWrite(null, getOutputFile(path, fileName), new CancellationSignal(), new PrintDocumentAdapter.WriteResultCallback()
{
@Override
public void onWriteFinished(PageRange[] pages) {
super.onWriteFinished(pages);
openHome();
}
}
);
}
}, null);
}
Please, I need help as soon as possible.
A: Create a package inside your src folder with the name: android.print. Then create a file there with your "print" method.
| Q: LayoutResultCallback() is not public in LayoutResultCallback; cannot be accessed from outside package error while generating pdf I am trying to print webview html in android api 19. PrintDocumentAdapter is added in api level 19 according to doc
https://developer.android.com/reference/android/print/PrintDocumentAdapter.html
and
https://developer.android.com/reference/android/print/PrintDocumentAdapter.LayoutResultCallback.html
But I am getting 2 errors:
Error:(38, 64) error: LayoutResultCallback() is not public in LayoutResultCallback; cannot be accessed from outside package
and
Error:(42, 101) error: WriteResultCallback() is not public in WriteResultCallback; cannot be accessed from outside package
my code is
public void print(final PrintDocumentAdapter printAdapter, final File path, final String fileName) {
printAdapter.onLayout(null, printAttributes, null, new PrintDocumentAdapter.LayoutResultCallback() {
@Override
public void onLayoutFinished(PrintDocumentInfo info, boolean changed) {
printAdapter.onWrite(null, getOutputFile(path, fileName), new CancellationSignal(), new PrintDocumentAdapter.WriteResultCallback()
{
@Override
public void onWriteFinished(PageRange[] pages) {
super.onWriteFinished(pages);
openHome();
}
}
);
}
}, null);
}
Please, I need help as soon as possible.
A: Create a package inside your src folder with the name: android.print. Then create a file there with your "print" method.
A: Create the folder android.print inside your src directory.
| stackoverflow | {
"language": "en",
"length": 163,
"provenance": "stackexchange_0000F.jsonl.gz:868573",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553592"
} |
d936fa4d86aaa034027055954083b7a79ad409b0 | Stackoverflow Stackexchange
Q: Insert std::unique_ptr into boost:ptr_map I'm moving some old code to c++14, it used the deprecated auto_ptr and that worked well with boost:ptr_map, you could do:
auto_ptr<Foo> foo(new Foo);
boost::ptr_map<int, Foo> m;
m.insert(5, foo);
Now, replacing that auto_ptr with unique_ptr, it doesn't compile:
unique_ptr<Foo> foo(new Foo);
boost::ptr_map<int, Foo> m;
m.insert(5, foo); // Does not compile
m.insert(5, move(foo)); // Does not compile either,
// this should be the right thing to do
m.insert(5, foo.release()); // Does compile, but isn't exception safe
Is the ptr_map API just not up to date yet?
Edit based on responses, using map of unique_ptr is not a good option in my case because it requires rewrite of a fair amount of code. I really wanted to make it work with map_ptr, I'm dealing with some old code and I wish to make minimum changes.
A: I think in C++14 what you want is this:
std::unordered_map<int, std::unique_ptr<Foo>> x;
x.emplace(5, std::make_unique<Foo>());
You don't need those old boost _ptr containers any more, they were basically workarounds for the lack of an owning, zero overhead pointer that could safely be handled in containers (i.e. unique_ptr).
| Q: Insert std::unique_ptr into boost:ptr_map I'm moving some old code to c++14, it used the deprecated auto_ptr and that worked well with boost:ptr_map, you could do:
auto_ptr<Foo> foo(new Foo);
boost::ptr_map<int, Foo> m;
m.insert(5, foo);
Now, replacing that auto_ptr with unique_ptr, it doesn't compile:
unique_ptr<Foo> foo(new Foo);
boost::ptr_map<int, Foo> m;
m.insert(5, foo); // Does not compile
m.insert(5, move(foo)); // Does not compile either,
// this should be the right thing to do
m.insert(5, foo.release()); // Does compile, but isn't exception safe
Is the ptr_map API just not up to date yet?
Edit based on responses, using map of unique_ptr is not a good option in my case because it requires rewrite of a fair amount of code. I really wanted to make it work with map_ptr, I'm dealing with some old code and I wish to make minimum changes.
A: I think in C++14 what you want is this:
std::unordered_map<int, std::unique_ptr<Foo>> x;
x.emplace(5, std::make_unique<Foo>());
You don't need those old boost _ptr containers any more, they were basically workarounds for the lack of an owning, zero overhead pointer that could safely be handled in containers (i.e. unique_ptr).
A: You can use
std::unordered_map<int, std::unique_ptr<Foo>> x;
x.emplace(5, std::make_unique<Foo>());
It's a C++14 feature. No need for the old boost containers!!! :)
A:
Is the ptr_map API just not up to date yet?
No, you're just using it the wrong way.
As from the documentation:
A ptr_map is a pointer container that uses an underlying std::map to store the pointers.
Note that this doesn't compile:
unique_ptr<Foo> foo(new Foo);
void *ptr = foo;
Because you cannot convert a std::unique_ptr to void * with an assignment, it doesn't make much sense.
That's more or less what happens internally when you try to do with this:
m.insert(5, move(foo));
On the other side this compiles instead:
unique_ptr<Foo> foo(new Foo);
Foo *bar = foo.release();
void *ptr = bar;
That's something close to:
m.insert(5, foo.release());
Therefore you cannot expect the first case to work and actually it doesn't.
That being said, nowadays I'd rather use a map of int and std::unique_ptr<Foo> from the standard template library and get rid of boost::ptr_map, as suggested in the comments to the question.
Something like the following should work:
std::map<int, std::unique_ptr<Foo>>
Note that a std::map is more appropriate than a std::unordered_map if you want something closer to how boost::ptr_map works for, as mentioned above, its underlying data structure is an std::map and not an std::unordered_map.
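A minimal compilable sketch of that std::map replacement (Foo here is a stand-in for the real class):

```cpp
#include <cassert>
#include <map>
#include <memory>

struct Foo { int value = 0; };

// Ownership transfers into the container on emplace, so there is no
// window where the Foo could leak if an insertion throws.
std::map<int, std::unique_ptr<Foo>> make_map() {
    std::map<int, std::unique_ptr<Foo>> m;
    auto foo = std::make_unique<Foo>();
    foo->value = 42;
    m.emplace(5, std::move(foo));
    return m;
}
```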
| stackoverflow | {
"language": "en",
"length": 397,
"provenance": "stackexchange_0000F.jsonl.gz:868625",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553737"
} |
d10abdb17e0381c801a6a93c470af87598ea8bb3 | Stackoverflow Stackexchange
Q: .NET Reactor encryption vs obfuscation I have a requirement to protect our assemblies against reverse engineering, to lessen the risk of IP theft or license hacks. .NET Reactor looks powerful and we already have a license for it.
Reading through the documentation it seems there are several mechanisms for preventing decompilation other than obfuscation. I've read that obfuscation can foul up serialization, which a big part of our system, and I am hoping to avoid it completely.
I'm mainly interested in NecroBit, which claims to encrypt the CIL, making it "impossible to decompile/reverse engineer." It seems to me that if this is true, obfuscation or any other settings would be pointless.
Can any experienced .NET Reactor users give any more practical explanation of the various options and/or suggest a good permutation for a serialized system? What are some good tools for testing this software's claims?
A: As long as the corresponding classes are marked as serializable you can tell .NET Reactor to exclude these classes from obfuscation:
| Q: .NET Reactor encryption vs obfuscation I have a requirement to protect our assemblies against reverse engineering, to lessen the risk of IP theft or license hacks. .NET Reactor looks powerful and we already have a license for it.
Reading through the documentation it seems there are several mechanisms for preventing decompilation other than obfuscation. I've read that obfuscation can foul up serialization, which a big part of our system, and I am hoping to avoid it completely.
I'm mainly interested in NecroBit, which claims to encrypt the CIL, making it "impossible to decompile/reverse engineer." It seems to me that if this is true, obfuscation or any other settings would be pointless.
Can any experienced .NET Reactor users give any more practical explanation of the various options and/or suggest a good permutation for a serialized system? What are some good tools for testing this software's claims?
A: As long as the corresponding classes are marked as serializable you can tell .NET Reactor to exclude these classes from obfuscation:
A: Hopefully this helps some other people using .NET Reactor or similar tools. I'm aware the limitations of any tool. The goal was to reduce the risk of licensing hacks as much as possible with minimal effort. My company has been burned before and the boss wanted it.
Our project in particular is a WPF desktop using Prism. I found when I tried to Merge my assemblies into a single fat exe, some of my interface registrations were failing to resolve in the Unity container. We decided it was ok to protect each dll individually rather than fight with this. Once I did that this tool worked nicely. I literally checked every protection option for the desktop.
Our services run SignalR hubs in a self-hosted OWIN process. In this case the Native EXE File option would not work. We got Bad Image Format exceptions when we ran the services. Otherwise all options checked.
Beyond that I ran into some spotty issues where we were using reflection in the form of Type.GetMethod(string). I had to exclude a few methods and classes with an ObfuscationAttribute.
I was anticipating issues with JSON serialization but didn't get any. Everything just worked :)
A: I have been using netreactor for many years. I use the iserialization interface together with a serialization binder to get around obfuscation etc. It works through every protection method that Netreactor has.
Stream s = null;
BinaryFormatter b = new BinaryFormatter();
Binder CB = new Binder();
b.Binder = CB;
try
{
s = File.Open(fileName, FileMode.OpenOrCreate);
//to serialize
b.Serialize(s, yourObject);
// to deserialize
yourObject = (YourClass)b.Deserialize(s);
}
catch
{
}
finally
{
s.Close();
}
[Serializable]
public class YourClass : System.Runtime.Serialization.ISerializable
{
//Explicit serialization function
public void GetObjectData(SerializationInfo info, StreamingContext ctxt)
{
info.AddValue("stringVar", stringVar);
// and so forth...
}
// Deserialization
public YourClass(SerializationInfo info, StreamingContext ctxt)
{
stringVar = (string)info.GetValue("stringVar", typeof(string));
// and so forth
}
}
// the serialization binder
public class Binder : SerializationBinder
{
public override Type BindToType(string assemblyName, string typeName)
{
return System.Type.GetType(typeName); // Get it from this
//assembly
}
}
| stackoverflow | {
"language": "en",
"length": 510,
"provenance": "stackexchange_0000F.jsonl.gz:868666",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553869"
} |
74027a3665db026fcb4319bba3d2d1af13534f98 | Stackoverflow Stackexchange
Q: Implicit copy constructor in presence of user-defined move constructor/assignment It's written on cppreference that for post C++11 versions of the standard one of the cases when copy constructor is implicitly defined as deleted is the following (constructor for class T):
T has a user-defined move constructor or move assignment operator;
It seems to be true according to compilers, however I didn't manage to find it in the standard...
The place seems to be 12.8/11 (at least for 14882:2011), all other cases seem to be there except the aforementioned one..
Where should I look for this particular case?
A: At least as of the draft I have handiest at the moment (N4618), it's at §[class.copy.ctor]/6:
If the class definition does not explicitly declare a copy constructor, a non-explicit one is declared implicitly. If the class definition declares a move constructor or move assignment operator, the implicitly declared copy constructor is defined as deleted; otherwise, it is defined as defaulted (8.4).
| Q: Implicit copy constructor in presence of user-defined move constructor/assignment It's written on cppreference that for post C++11 versions of the standard one of the cases when copy constructor is implicitly defined as deleted is the following (constructor for class T):
T has a user-defined move constructor or move assignment operator;
It seems to be true according to compilers, however I didn't manage to find it in the standard...
The place seems to be 12.8/11 (at least for 14882:2011), all other cases seem to be there except the aforementioned one..
Where should I look for this particular case?
A: At least as of the draft I have handiest at the moment (N4618), it's at §[class.copy.ctor]/6:
If the class definition does not explicitly declare a copy constructor, a non-explicit one is declared implicitly. If the class definition declares a move constructor or move assignment operator, the implicitly declared copy constructor is defined as deleted; otherwise, it is defined as defaulted (8.4).
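The quoted rule is easy to observe with type traits (the struct names here are illustrative):

```cpp
#include <type_traits>

struct Plain {};

struct HasMove {
    HasMove() = default;
    HasMove(HasMove&&) = default;  // user-declared move constructor
};

// Declaring a move constructor (or move assignment operator) causes the
// implicitly declared copy constructor to be defined as deleted.
static_assert(std::is_copy_constructible<Plain>::value,
              "no move members, so the copy constructor is defaulted");
static_assert(!std::is_copy_constructible<HasMove>::value,
              "copy constructor is implicitly defined as deleted");
```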
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:868683",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44553920"
} |