| content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name |
|---|---|---|---|---|---|---|---|---|
| stringlengths 86–88.9k | stringlengths 0–150 | stringlengths 1–35.8k | sequence | sequence | sequence | sequence | sequence | stringlengths 30–130 |
Q:
Common Files in Visual Studio Solution
Many times I have seen Visual Studio solutions which have multiple projects that share source files. These common source files are usually out in a common directory and in the solution explorer their icon shows up with a link arrow in the bottom left.
However, any time I try to add a source file to the project that is outside of that project's main directory, it just automatically copies it into the directory so that I no longer have a shared copy.
I found that I can get around this by manually opening the project file in a text editor and modifying the path to something like "../../../Common/Source.cs", but this is more of a hack than I would like.
Is there a setting or something I can change that will allow me to do this from within the IDE?
A:
Right-click on the project and select Add -> Existing Item; then click the small arrow on the Add button and choose Add as Link.
A:
Thanks @aku!
I knew this could be done, but I didn't know how to do it from Visual Studio. The file shows up as a shortcut, and the csproj file ends up with XML like this:
<Compile Include="..\CommonAssemblyInfo.cs">
<Link>CommonAssemblyInfo.cs</Link>
</Compile>
I've seen this technique commonly used for common AssemblyInfo files to keep a consistent version.
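This technique pairs naturally with a shared CommonAssemblyInfo.cs. As a sketch (the attribute values here are made up), the shared file holds only the attributes every project should agree on, and each project omits those attributes from its own AssemblyInfo.cs to avoid duplicate-attribute compile errors:
//hypothetical CommonAssemblyInfo.cs, linked into each project
using System.Reflection;

[assembly: AssemblyCompany("Example Corp")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]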
Q:
Is there a way to perform a "Refresh Dependencies" in a setup project outside VS2008?
I have a solution with several projects. One of them is a setup project. If you expand the setup project in the Solution Explorer, you see a Detected Dependencies node. If you right click on it, you get a menu item called Refresh Dependencies. This refreshes any dependencies based on the files included in the setup.
I am asking if I can execute this action outside Visual Studio, using either devenv.com or MSBuild.
I want this because I am using CruiseControl.NET for continuous integration and in some solutions I found that the setup output is missing some dependencies because of the way I automatically build the projects.
Update:
It turned out that my setup is not very friendly to how Setup projects work in Visual Studio. I ended up using Post Build Events in order to create the whole application structure ready to just be copied to a computer and work out of the box. I am not using setup projects in Visual Studio anymore, unless I really have to.
A:
Record or create a macro:
Option Strict Off
Option Explicit Off
Imports System
Imports EnvDTE
Imports EnvDTE80
Imports EnvDTE90
Imports System.Diagnostics
Public Module RefreshDependencies
Sub TemporaryMacro()
DTE.ActiveWindow.Object.GetItem("Project\Setup1\Setup1").Select(vsUISelectionType.vsUISelectionTypeSelect)
DTE.ExecuteCommand("Build.RefreshDependencies")
End Sub
End Module
Then just call the macro in the command line:
devenv /command "Macros.MyMacros.RefreshDependencies C:\MyProjects\MyApp\"
Q:
Why doesn't C# support implied generic types on class constructors?
C# doesn't require you to specify a generic type parameter if the compiler can infer it, for instance:
List<int> myInts = new List<int> {0,1,1,
2,3,5,8,13,21,34,55,89,144,233,377,
610,987,1597,2584,4181,6765};
//this statement is clunky
List<string> myStrings = myInts.
Select<int,string>( i => i.ToString() ).
ToList<string>();
//the type is inferred from the lambda expression
//the compiler knows that it's taking an int and
//returning a string
List<string> myStrings = myInts.
Select( i => i.ToString() ).
ToList();
This is needed for anonymous types where you don't know what the type parameter would be (in intellisense it shows up as 'a) because it's added by the compiler.
Class-level type parameters don't let you do this:
//sample generic class
public class GenericDemo<T>
{
public GenericDemo ( T value )
{
GenericTypedProperty = value;
}
public T GenericTypedProperty {get; set;}
}
//why can't I do:
int anIntValue = 4181;
var item = new GenericDemo( anIntValue ); //type inference fails
//however I can create a wrapper like this:
public static GenericDemo<T> Create<T> ( T value )
{
return new GenericDemo<T> ( value );
}
//then this works - type inference on the method compiles
var item = Create( anIntValue );
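As an aside, the Create wrapper is also the only way to construct a GenericDemo<T> around an anonymous type, whose name cannot be written out; a quick sketch building on the helper above:
//T is the compiler-generated anonymous type here
var anon = Create( new { Name = "fib", Value = 4181 } );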
Why doesn't C# support this class level generic type inference?
A:
Actually, your question isn't bad. I've been toying with a generic programming language for the last few years and although I've never gotten around to actually developing it (and probably never will), I've thought a lot about generic type inference, and one of my top priorities has always been to allow the construction of classes without having to specify the generic type.
C# simply lacks the set of rules to make this possible. I think the developers never saw the necessity to include this. Actually, the following code comes very close to your proposition and solves the problem; all C# needs is added syntax support.
class Foo<T> {
public Foo(T x) { … }
}
// Notice: non-generic class overload. Possible in C#!
class Foo {
public static Foo<T> ctor<T>(T x) { return new Foo<T>(x); }
}
var x = Foo.ctor(42);
Since this code actually works, we've shown that the problem is not one of semantics but simply one of lacking support. I guess I have to take back my previous posting. ;-)
A:
Why doesn't C# support this class level generic type inference?
Because they're generally ambiguous. By contrast, type inference is trivial for function calls (if all types appear in arguments). But in the case of constructor calls (glorified functions, for the sake of discussion), the compiler has to resolve multiple levels at the same time. One level is the class level and the other is the constructor arguments level. I believe solving this is algorithmically non-trivial. Intuitively, I'd say it's even NP-complete.
To illustrate an extreme case where resolution is impossible, imagine the following class and tell me what the compiler should do:
class Foo<T> {
public Foo<U>(U x) { }
}
var x = new Foo(1);
A:
Thanks Konrad, that's a good response (+1), but just to expand on it.
Let's pretend that C# has an explicit constructor function:
//your example
var x = new Foo( 1 );
//becomes
var x = Foo.ctor( 1 );
//your problem is valid because this would be
var x = Foo<T>.ctor<int>( 1 );
//and T can't be inferred
You're quite right that the first constructor can't be inferred.
Now let's go back to the class
class Foo<T>
{
//<T> can't mean anything else in this context
public Foo(T x) { }
}
//this would now throw an exception unless the
//typeparam matches the parameter
var x = Foo<int>.ctor( 1 );
//so why wouldn't this work?
var x = Foo.ctor( 1 );
Of course, if I add your constructor back in (with its alternate type) we have an ambiguous call - exactly as if a normal method overload couldn't be resolved.
Q:
What path should I pass as an AssemblyPath parameter to the Publish.GacRemove function?
I want to use the Publish.GacRemove function to remove an assembly from GAC. However, I don't understand what path I should pass as an argument.
Should it be a path to the original DLL (what if I removed it after installing it in the GAC?) or the path to the assembly in the GAC?
UPDATE:
I finally used these API wrappers.
A:
I am using GacInstall to publish my assemblies; however, once they are installed into the GAC, I sometimes delete my 'temporary' copy of the assemblies.
Then, if I ever want to uninstall the assemblies from the GAC, I no longer have the files at the original path. This is a problem, since I cannot seem to get the GacRemove method to uninstall the assemblies unless I keep the original files.
Conclusion: Yes, you need to specify the path to the original DLL. (And try to not move/delete it later). If you delete it, try to copy the file from the GAC to your original path and you should be able to uninstall it using GacRemove.
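For reference, these methods live on the Publish class in System.EnterpriseServices.Internal (add a reference to System.EnterpriseServices.dll); a minimal sketch with a hypothetical path:
using System.EnterpriseServices.Internal;

class GacExample
{
    static void Main()
    {
        var publish = new Publish();
        publish.GacInstall(@"C:\build\MyAssembly.dll"); //hypothetical path
        //later: pass the same on-disk path to uninstall
        publish.GacRemove(@"C:\build\MyAssembly.dll");
    }
}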
A:
I am not exactly sure about it, but I believe GacRemove should do the same thing as gacutil /u. So it should be the path of your DLL. However, it doesn't have to be the same DLL file: a copy of the original should suffice, since what counts is the identity of the assembly.
Q:
apache mod_proxy error os10060 and returning 503?
Can't get to my site. Apache gives the following error message:
[Fri Sep 05 08:47:42 2008] [error] (OS 10060)A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : proxy: HTTP: attempt to connect to 10.10.10.1:80 (10.10.10.1) failed
A:
Can you connect to the proxied host (10.10.10.1) directly? Is it functioning normally?
A:
http://www.checkupdown.com/status/E503.html
Your Web server is effectively 'closed for repair'. It is still functioning minimally because it can at least respond with a 503 status code, but full service is impossible i.e. your Web site is simply unavailable. There are a myriad possible reasons for this, but generally it is because of some human intervention by the operators of your Web server machine. You can usually expect that someone is working on the problem, and normal service will resume as soon as possible.
You need to restart the webserver and then figure out why it shut itself down.
Q:
Is there a limit with the number of SSL connections?
Is there a limit with the number of SSL connections?
We are trying to connect through SSL with 2000 sessions. We have tried it a couple of times, but it always dies at the 1062nd connection. Is there a limit?
A:
If you are on Linux, your operating system will have a limit on the number of open files.
ulimit -a will show your various limits.
I imagine yours is set to 1024, and some of the sessions just happened to have closed, allowing the figure of 1062 (this last bit is a guess).
A:
Yes, everything has a limit. As far as I'm aware, there is no inherent limit with "SSL"; it is, after all, just a protocol.
But, there is a limited amount of memory, ports, CPU on the machine you are connected to, from and every single one in between.
The actual server you are connected to may have an arbitrary limit set too.
This question doesn't have enough information to answer beyond "YES".
A:
SSL itself doesn't have any limitations, but there are some practical limits you may be running into:
SSL connections require more resources on both ends of the connection, so you may be hitting some built-in server limit.
TCP/IP uses a 16-bit port number to identify connections, only some of which (around 16,000) are used for dynamic client connections. This would limit the number of active connections a single client could make to the same server.
On Linux, each process has a maximum number of file descriptors that it can have open, and each network connection uses one file descriptor. I imagine Windows has a similar limit.
Q:
How Do I Test Rails Logging In from the Console?
I was having a heck of a time figuring out how to login and logout using response objects from Rails. The standard blogs were ok, but I finally diagnosed it, and I wanted to record it here.
app.get '/'
assert_response :success
app.get '/auth_only_url'
assert_response 302
user = User.find(:user_to_login)
app.post '/signin_url',
:user_email => user.email,
:user_password => '<password in clear>'
assert_response 302
app.follow_redirect!
assert_response :success
app.get '/auth_only_url'
assert_response :success
Note, the above implies that you redirect after a failed auth request, and also that you redirect after logging in.
To ensure that you load the fixtures into your test environment DB (which normally occurs during rake test), make sure you execute the following:
rake db:fixtures:load RAILS_ENV=test
(From Patrick Richie)
The default URL will appear to be 'www.example.com', as this is the default host set in ActionController::Integration::Session:
ActionController::Integration::Session.new.host  # => "www.example.com"
It is set in actionpack/lib/action_controller/integration.rb#75
To change it in the integration test, do the following:
session = open_session do |s| s.host = 'my-example-host.com' end
A:
'www.example.com' is the default host as set in ActionController::Integration::Session
>> ActionController::Integration::Session.new.host
=> "www.example.com"
It is set in actionpack/lib/action_controller/integration.rb#75
You should be able to change it in your integration test by doing the following:
session = open_session do |s|
s.host = 'my-example-host.com'
end
Q:
Service to make an audio podcast from a video one?
Video podcast
???
Audio only mp3 player
I'm looking for somewhere which will extract audio from video, not for a single file, but for an ongoing video podcast.
I would most like a website which would suck in the RSS and spit out an RSS (I'm thinking of something like Feedburner), though would settle for something on my own machine.
If it must be on my machine, it should be quick, transparent, and automatic when I download each episode.
What would you use?
Edit: I'm on an Ubuntu 8.04 machine; so running ffmpeg is no problem; however, I'm looking for automation and feed awareness.
Here's my use case: I want to listen to lectures at Google Video, or Structure and Interpretation of Computer Programs. These videos come out fairly often, so anything that's needed to be done manually will also be done fairly often.
Here's one approach I'd thought of:
download the RSS
parse the RSS for enclosures,
download the enclosures, keeping a track what has already been downloaded previously
transcode the files, but not the ones done already
reconstruct an RSS with the audio files, remembering to change the metadata.
schedule to be run periodically
point podcatcher at new RSS feed.
I also liked the approach of gPodder of using a post-download script.
I wish the Lazy Web still worked.
A:
You could automate this using the open source command line tool ffmpeg. Parse the RSS to get the video files, fetch them over the net if needed, then spit each one out to a command line like this:
ffmpeg -i episode1.mov -ab 128000 episode1.mp3
The -ab switch sets the output bit rate to 128 kbits/s on the audio file, adjust as needed.
Once you have the audio files you can reconstruct the RSS feed to link to the audio files if so desired.
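To connect this with the asker's plan, here is a bare-bones C# sketch of the parse-and-transcode steps (the feed URL is hypothetical; downloading, de-duplication and RSS regeneration are omitted, and it assumes each enclosure has already been fetched to the matching local file name):
using System;
using System.Diagnostics;
using System.Xml;

class FeedTranscoder
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.Load("http://example.com/videocast.rss"); //hypothetical feed URL

        //RSS 2.0 enclosures look like <enclosure url="..." type="video/..."/>
        foreach (XmlElement enc in doc.SelectNodes("//enclosure"))
        {
            string url = enc.GetAttribute("url");
            string local = System.IO.Path.GetFileName(new Uri(url).LocalPath);
            string mp3 = System.IO.Path.ChangeExtension(local, ".mp3");

            //transcode the already-downloaded file to audio-only mp3
            Process.Start("ffmpeg",
                string.Format("-i \"{0}\" -ab 128000 \"{1}\"", local, mp3))
                .WaitForExit();
        }
    }
}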
A:
How to extract audio from video to MP3:
http://www.dvdvideosoft.com/guides/dvd/extract-audio-from-video-to-mp3.htm
How to Convert a Video Podcast to Audio Only:
http://www.legalandrew.com/2007/03/10/how-to-convert-a-video-podcast-to-audio-only/
A:
When you edit your video, doesn't your editor provide you an option to split out the audio?
A:
What platform is your own machine? What format is the video podcast?
You could possibly get Handbrake to do this (Windows, Linux and Mac), I don't know if it's scriptable at all but I think it can be used to separate audio and video.
edit: There is a commandline interface for Handbrake, but it appears I was wrong about it accepting non-DVD input.
On the Mac I'd probably rig up something with Applescript and QuickTime - what platform are you on?
Q:
How many app.config files are you allowed to have per AppDomain?
I'm hoping there's a way to avoid custom configuration files if an application runs in a single AppDomain.
A:
From Suzanne Cook's .NET CLR Notes:
App.Config Files:
As default the app config file of the
default appdomain is in the process
exe’s directory and named the same as
the process exe + ".config". Also,
note that a web.config file is an
app.config - ASP.NET sets that as the
config file for your appdomain.
To change the config file, set an
AppDomainSetup.ConfigurationFile to
the new location and pass that
AppDomainSetup to your call to
AppDomain.CreateDomain(). Then, run
all of the code requiring that
application config from within that
new appdomain.
Note, though, that you won’t be able
to choose the CLR version by setting
the ConfigurationFile – at that point,
a CLR will already be running, and
there can only be one per process.
Application configuration files are
per appdomain. So, you can set a ‘dll
config’ by using the method above, but
that means that it will be used for
the entire appdomain, and it only gets
one.
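A minimal sketch of the approach described above (the config path is hypothetical):
using System;

class ConfigDemo
{
    static void PrintConfigFile()
    {
        Console.WriteLine(
            AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
    }

    static void Main()
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ConfigurationFile = @"C:\configs\other.config" //hypothetical path
        };

        AppDomain domain = AppDomain.CreateDomain("ConfiguredDomain", null, setup);
        //code run inside this domain reads other.config instead of <exe>.config
        domain.DoCallBack(PrintConfigFile);
        AppDomain.Unload(domain);
    }
}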
Q:
Should DOM splitText and normalise compose to give the identity?
I got embroiled in a discussion about DOM implementation quirks yesterday, which gave rise to an interesting question regarding how Text.splitText and Element.normalise should behave.
In DOM Level 1 Core, Text.splitText is defined as...
Breaks this Text node into two Text nodes at the specified offset, keeping both in the tree as siblings. This node then only contains all the content up to the offset point. And a new Text node, which is inserted as the next sibling of this node, contains all the content at and after the offset point.
Normalise is...
Puts all Text nodes in the full depth of the sub-tree underneath this Element into a "normal" form where only markup (e.g., tags, comments, processing instructions, CDATA sections, and entity references) separates Text nodes, i.e., there are no adjacent Text nodes. This can be used to ensure that the DOM view of a document is the same as if it were saved and re-loaded, and is useful when operations (such as XPointer lookups) that depend on a particular document tree structure are to be used.
So, if I take a text node containing "Hello World", referenced in textNode, and do
textNode.splitText(3)
textNode now has the content "Hello", and a new sibling containing " World"
If I then
textNode.parent.normalize()
what is textNode? The specification doesn't make it clear that textNode has to still be a child of its previous parent, just updated to contain all adjacent text nodes (which are then removed). It seems to be a conformant behaviour to remove all the adjacent text nodes and then create a new node with the concatenation of the values, leaving textNode pointing to something that is no longer part of the tree. Or, we can update textNode in the same fashion as in splitText, so it retains its tree position and gets a new value.
The choice of behaviour is really quite different, and I can't find a clarification on which is correct, or if this is simply an oversight in the specification (it doesn't seem to be clarified in levels 2 or 3). Can any DOM/XML gurus out there shed some light?
A:
I was on the DOM Working Group in the early days; I'm sure we meant for textNode to contain the new joined value, but if we didn't say it in the spec, it's possible that some implementation might create a new node instead of reusing textNode, though that would require more work for the implementors.
When in doubt, program defensively.
A:
While it would seem like a reasonable assumption, I agree that it is not explicitly made clear in the specification. All I can add is that the way I read it, one of either textNode or its new sibling (i.e. the return value from splitText) would contain the new joined value - the statement specifies that all nodes in the sub-tree are put in normal form, not that the sub-tree is normalised to a new structure. I guess the only safe thing is to keep a reference to the parent before normalising.
A:
I think all bets are off here; I certainly wouldn't depend on any given behaviour. The only safe thing to do is to get the node from its parent again.
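For what it's worth, the experiment is easy to run against any DOM implementation you care about; here is a sketch against .NET's System.Xml, which exposes the same SplitText/Normalize pair:
using System;
using System.Xml;

class NormalizeCheck
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.LoadXml("<p>Hello World</p>");

        var textNode = (XmlText)doc.DocumentElement.FirstChild;
        textNode.SplitText(5); //"Hello" + " World"
        doc.DocumentElement.Normalize();

        //did textNode keep the re-joined value, and is it still in the tree?
        Console.WriteLine(textNode.Value);
        Console.WriteLine(textNode.ParentNode != null);
    }
}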
Q:
Web Design for Google Chrome
What, if any, considerations (HTML, CSS, JavaScript) should you take when designing for Google Chrome?
A:
Chrome uses Webkit, the same engine as is used by Safari, OmniWeb, iCab and more. Just code everything based on the standards and verify in each browser.
A:
I think first and foremost you should focus on using HTML and scripting that follows the standards.
After you have that running, file a bug report then make the browser-specific tweaks. If Chrome is worth a flip you shouldn't have to tweak things for it.
A:
The same ones you'd take for Safari, as they share the same rendering engine (with a slight version mismatch).
A:
I'm sure filing a bug report really helps with all those IE rendering issues!
Realistically, you need to test your application in each browser, no browser 100% follows the W3C standards so ultimately you can't rely on following that at all. You need to test everything you do in any browser you wish to support.
As has been mentioned, Google Chrome has the same rendering engine as Safari, the iPhone, etc.: WebKit, which passes Acid3, so there should be minimal issues if you follow the standards. But don't rely on it. Google Chrome currently uses a slightly older version of WebKit than Safari. I'm sure they'll be on the same version at some point, but unfortunately any new browser becomes just another browser to test in.
A:
Are you designing specifically for Chrome, or do you want to make sure your pages work well with Chrome?
Assuming it's the latter, then just use the same design considerations you'd do for any browser. If applicable, keep in mind that many phones and video game consoles have web browsers now.
Chrome uses a new JavaScript engine, so you'll have to test your JavaScript using Chrome as well as Safari. The HTML and CSS may render pretty much the same, but they use different JavaScript engines.
Q:
Using IIS6, how can I place files in a sub-folder but have them served as if they were in the root?
Our ASP.NET 3.5 website running on IIS 6 has two teams that are adding content:
Development team adding code.
Business team adding simple web pages.
For sanity and organization, we would like for the business team to add their web pages to a sub-folder in the project:
Root: for pages of development team
Content: for pages of business team
But
We would like for users to be able to navigate to the business team content without having to append "Content" in their URLs, as described below:
Root: Default.aspx (Available at: www.oursite.com/default.aspx)
Content: Popcorn.aspx (Available at: www.oursite.com/popcorn.aspx)
Is there a way we can accomplish this without making a config entry in an ISAPI rewrite tool for every one of these pages?
A:
Since the extensions will be ASPX, ASP.NET will pick up the request... you can write an HttpModule that checks for pages that yield a 404 and then check the subfolder also.
If you know that all pages with a certain format will be coming from that folder, then you can just rewrite the URL in ASP.NET (either in Global.asax or an HttpModule).
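A rough sketch of that rewrite in Global.asax.cs follows (the folder name and the existence checks are assumptions, and a real version would need to account for virtual directories):
//inside Global.asax.cs; requires using System; and using System.Web;
protected void Application_BeginRequest(object sender, EventArgs e)
{
    string path = Request.Path; //e.g. "/popcorn.aspx"

    //only rewrite .aspx requests that don't map to a real file in the root
    if (path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase) &&
        !System.IO.File.Exists(Server.MapPath(path)))
    {
        string candidate = "/Content" + path; //assumed subfolder name
        if (System.IO.File.Exists(Server.MapPath(candidate)))
        {
            Context.RewritePath(candidate); //serve the Content copy
        }
    }
}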
A:
I don't have any way to test this right now, but I think you can use the -f flag on RewriteCond to check if a file exists, in either directory.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond Content/%{REQUEST_FILENAME} -f
RewriteRule (.*) Content/$1
Something like that might do what you're after, too.
Q:
How do I oversee my MySQL replication server?
I've had a tough time setting up my replication server. Is there any program (OS X, Windows, Linux, or PHP no problem) that lets me monitor and resolve replication issues? (btw, for those following, I've been on this issue here, here, here and here)
My production database is several megs in size and growing. Every time the database replication stops and the databases inevitably begin to slide out of sync, I cringe. My last resync from dump took almost 4 hours roundtrip!
As always, even after sync, I run into this kind of show-stopping error:
Error 'Duplicate entry '252440' for key 1' on query.
I would love it if there was some way to closely monitor what's going on and perhaps let the software deal with it. I'm even all ears for service companies which may help me monitor my data better. Or an alternate way to mirror altogether.
Edit: going through my previous questions i found this which helps tremendously. I'm still all ears on the monitoring solution.
A:
To monitor the servers we use the free tools from Maatkit ... simple, yet efficient.
The binary replication is available in 5.1, so I guess you've got some balls. We still use 5.0 and it works OK, but of course we had our share of issues with it.
We use a Master-Master replication with a MySql Proxy as a load-balancer in front, and to prevent it from having errors:
we removed all unique indexes
for the few cases where we really needed unique constraints we made sure we used REPLACE instead of INSERT (MySql Proxy can be used to guard for proper usage ... it can even rewrite your queries)
scheduled scripts doing intensive reports are always accessing the same server (not the load-balancer) ... so that dangerous operations are replicated safely
Yeah, I know it sounds simple and stupid, but it solved 95% of all the problems we had.
A:
We use MySQL replication to replicate data to close to 30 servers. We monitor them with Nagios. You can probably check the replication status and use an event handler to restart it with 'SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; START SLAVE;'. That will fix the error, but you'll lose the insert that caused the error.
About the error: do you use memory tables on your slaves? I ask because the only time we ever got a lot of these errors, they were caused by a bug in the latest releases of MySQL. 'DELETE FROM Table WHERE Field = Value' would delete only one row in memory tables even though there were multiple matching rows.
MySQL bug description
Q:
How do you index into a var in LINQ?
I'm trying to get the following bit of code to work in LINQPad but am unable to index into a var. Anybody know how to index into a var in LINQ?
string[] sa = {"one", "two", "three"};
sa[1].Dump();
var va = sa.Select( (a,i) => new {Line = a, Index = i});
va[1].Dump();
// Cannot apply indexing with [] to an expression of type 'System.Collections.Generic.IEnumerable<AnonymousType#1>'
A:
As the comment says, you cannot apply indexing with [] to an expression of type System.Collections.Generic.IEnumerable<T>. The IEnumerable interface only supports the method GetEnumerator(). However with LINQ you can call the extension method ElementAt(int).
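For example, the snippet from the question works once the indexer is replaced with ElementAt:
var va = sa.Select( (a,i) => new {Line = a, Index = i});
va.ElementAt(1).Dump(); //works where va[1] does not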
A:
You can't apply an index to a var unless it's an indexable type:
//works because the compiler infers string[] from the initializer
var arrayVar = new[] {"one", "two", "three"};
arrayVar[1].Dump();
//now let's try
var selectVar = arrayVar.Select( (a,i) => new { Line = a });
//or this (I find this syntax easier, but either works)
var selectVar =
from s in arrayVar
select new { Line = s };
In both these cases selectVar is actually IEnumerable<'a> - not an indexed type. You can easily convert it to one though:
//convert it to a List<'a>
var aList = selectVar.ToList();
//convert it to a 'a[]
var anArray = selectVar.ToArray();
//or even a Dictionary<string,'a>
var aDictionary = selectVar.ToDictionary( x => x.Line );
Q:
Best way to export a QTMovie with a fade-in and fade-out in the audio
I want to take a QTMovie that I have and export it with the audio fading in and fading out for a predetermined amount of time. I want to do this within Cocoa as much as possible. The movie will likely only have audio in it. My research has turned up a couple of possibilities:
Use the newer Audio Context Insert APIs. http://developer.apple.com/DOCUMENTATION/QuickTime/Conceptual/QT7-2_Update_Guide/NewFeaturesChangesEnhancements/chapter_2_section_11.html. This appears to be the most modern was to accomplish this.
Use the Quicktime audio extraction APIs to pull out the audio track of the movie and process it and then put the processed audio back into the movie replacing the original audio.
Am I missing some much easier method?
A:
Quicktime has the notion of Tween Tracks. A tween track is a track that allows you to modify the properties of another set of tracks properties (such as the volume).
See Creating a Tween Track in the Quicktime docs to see an example of how to do this with an Quicktime audio track's volume.
There is also a more complete example called qtsndtween on the Apple Developer website.
Of course, all of this code requires using the Quicktime C APIs. If you can live with building a 32-bit only application, you can get the underlying Quicktime-C handles from a QTMovie, QTTrack, or QTMedia object using the "movie", "track", or "media" functions respectively.
Hopefully we'll get all the features of the Quicktime C APIs in the next version of QTKit, whenever that may be.
| Best way to export a QTMovie with a fade-in and fade-out in the audio | I want to take a QTMovie that I have and export it with the audio fading in and fading out for a predetermined amount of time. I want to do this within Cocoa as much as possible. The movie will likely only have audio in it. My research has turned up a couple of possibilities:
Use the newer Audio Context Insert APIs. http://developer.apple.com/DOCUMENTATION/QuickTime/Conceptual/QT7-2_Update_Guide/NewFeaturesChangesEnhancements/chapter_2_section_11.html. This appears to be the most modern way to accomplish this.
Use the Quicktime audio extraction APIs to pull out the audio track of the movie and process it and then put the processed audio back into the movie replacing the original audio.
Am I missing some much easier method?
| [
"Quicktime has the notion of Tween Tracks. A tween track is a track that allows you to modify the properties of another set of tracks properties (such as the volume).\nSee Creating a Tween Track in the Quicktime docs to see an example of how to do this with an Quicktime audio track's volume.\nThere is also a more complete example called qtsndtween on the Apple Developer website.\nOf course, all of this code requires using the Quicktime C APIs. If you can live with building a 32-bit only application, you can get the underlying Quicktime-C handles from a QTMovie, QTTrack, or QTMedia object using the \"movie\", \"track\", or \"media\" functions respectively. \nHopefully we'll get all the features of the Quicktime C APIs in the next version of QTKit, whenever that may be.\n"
] | [
3
] | [] | [] | [
"cocoa",
"macos",
"objective_c",
"quicktime"
] | stackoverflow_0000033061_cocoa_macos_objective_c_quicktime.txt |
Q:
InfoPath 2003 and the xs:any type
I am implementing exception handling for our BizTalk services, and have run into a fairly major stumbling block.
In order to make the exception processing as generic as possible, and therefore to allow us to use it for any BizTalk application, our XML error schema includes an xs:any node, into which we can place a variety of data, depending on the actual exception. The generated XML should then be presented to a user through an InfoPath 2003 form for manual intervention before being represented back to BizTalk.
The problem is that InfoPath 2003 doesn't like schemas with an xs:any node. What we'd really like to do is to show the content of the exception report in a form with all relevant parameters mapped, and the entire content of the xs:any node in a text box, since users who are able to see these messages will be conversant with XML. Unfortunately, I am unable to make InfoPath even load the schema at design time.
Does anyone have any recommendation for how to achieve what we need, please?
A:
Does your xs:any element have a minOccurs > 0?
http://msdn.microsoft.com/en-us/library/bb251017.aspx#UnsupportedConstructs
I've also read that due to the way that InfoPath works, it cannot handle more than one schema for each namespace. Hence, your xs:any (and the sequence that it defines) should have a unique namespace.
A:
Unfortunately, things have moved on, and we have (almost) made the decision not to use InfoPath for this requirement. It's only partially to do with the xs:any issue, but more to do with (external) audit trails, calls to custom code and web services, and a couple of other factors.
| InfoPath 2003 and the xs:any type | I am implementing exception handling for our BizTalk services, and have run into a fairly major stumbling block.
In order to make the exception processing as generic as possible, and therefore to allow us to use it for any BizTalk application, our XML error schema includes an xs:any node, into which we can place a variety of data, depending on the actual exception. The generated XML should then be presented to a user through an InfoPath 2003 form for manual intervention before being represented back to BizTalk.
The problem is that InfoPath 2003 doesn't like schemas with an xs:any node. What we'd really like to do is to show the content of the exception report in a form with all relevant parameters mapped, and the entire content of the xs:any node in a text box, since users who are able to see these messages will be conversant with XML. Unfortunately, I am unable to make InfoPath even load the schema at design time.
Does anyone have any recommendation for how to achieve what we need, please?
| [
"Does your xs:any element have a minOccurs > 0?\nhttp://msdn.microsoft.com/en-us/library/bb251017.aspx#UnsupportedConstructs\nI've also read that due to the way that InfoPath works, it can not handly more than one schema for each namespace. Hence, your xs:any (and the sequence that it defines) should have a unique namespace.\n",
"Unfortunately, things have moved on, and we have (almost) made the decision not to use InfoPath for this requirement. It's only partially to do with the xs:any issue, but more to do with (external) audit trails, calls to custom code and web services, and a couple of other factors.\n"
] | [
1,
0
] | [] | [] | [
"forms",
"infopath",
"xml"
] | stackoverflow_0000037584_forms_infopath_xml.txt |
Q:
How can I extract a part of a xaml object graph via linq to xml?
I have an object graph serialized to xaml. A rough sample of what it looks like is:
<MyObject xmlns.... >
<MyObject.TheCollection>
<PolymorphicObjectOne .../>
<HiImPolymorphic ... />
</MyObject.TheCollection>
</MyObject>
I want to use Linq to XML in order to extract the serialized objects within the TheCollection.
Note: MyObject may be named differently at runtime; I'm interested in any object that implements the same interface, which has a public collection called TheCollection that contains types of IPolymorphicLol.
The only things I know at runtime are the depth at which I will find the collection and that the collection element is named `*.TheCollection`. Everything else will change.
The xml will be retrieved from a database using Linq; if I could combine both queries so that, instead of getting the entire serialized graph and then extracting the collection objects, I would just get back the collection, that would be sweet.
A:
Will,
It is not possible to find out whether an object implements some interface by looking at XAML.
Given those constraints, you can find any XML element that has a child whose name ends with .TheCollection.
You can use the following code:
It will return all elements that have a child element whose name ends with .TheCollection
static IEnumerable<XElement> FindElement(XElement root)
{
foreach (var element in root.Elements())
{
if (element.Name.LocalName.EndsWith(".TheCollection"))
{
yield return element.Parent;
}
foreach (var subElement in FindElement(element))
{
yield return subElement;
}
}
}
To make sure that the object represented by this element implements some interface, you need to read metadata from your assemblies. I would recommend using the Mono.Cecil framework to analyze types in your assemblies without using reflection.
A:
@aku
Yes, I know that xaml doesn't include any indication of base types or interfaces. But I do know the interface of the root objects, and the interface that the collection holds, at compile time.
The serialized graphs are stored in a sql database as XML, and we're using linq to retrieve them as XElements. Currently, along with your solution, we are limited to deserializing the graphs, iterating through them, pulling out the objects we want from the collection, removing all references to them from, and then disposing, their parents. It's all very kludgy. I was hoping for a single-stroke solution; something along the lines of an xpath, but inline with our linq to sql query that returns just the elements we're looking for...
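As a sketch of that single-query idea (illustrative only - it assumes the LINQ query already yields the graph roots as XElement instances, and the GraphQueries name is made up):
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class GraphQueries
{
    // Pull the serialized children of every *.TheCollection element
    // straight out of the retrieved graphs, in one query.
    public static IEnumerable<XElement> CollectionItems(IEnumerable<XElement> graphs)
    {
        return graphs
            .SelectMany(g => g.DescendantsAndSelf())
            .Where(e => e.Name.LocalName.EndsWith(".TheCollection"))
            .SelectMany(e => e.Elements());
    }
}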
| How can I extract a part of a xaml object graph via linq to xml? | I have an object graph serialized to xaml. A rough sample of what it looks like is:
<MyObject xmlns.... >
<MyObject.TheCollection>
<PolymorphicObjectOne .../>
<HiImPolymorphic ... />
</MyObject.TheCollection>
</MyObject>
I want to use Linq to XML in order to extract the serialized objects within the TheCollection.
Note: MyObject may be named differently at runtime; I'm interested in any object that implements the same interface, which has a public collection called TheCollection that contains types of IPolymorphicLol.
The only things I know at runtime are the depth at which I will find the collection and that the collection element is named `*.TheCollection`. Everything else will change.
The xml will be retrieved from a database using Linq; if I could combine both queries so that, instead of getting the entire serialized graph and then extracting the collection objects, I would just get back the collection, that would be sweet.
| [
"Will,\nIt is not possible to find out whether an object implements some interface by looking at XAML.\nWith constraints given you can find xml element that has a child named .\nYou can use following code:\nIt will return all elements having child element which name ends with .TheCollection\n static IEnumerable<XElement> FindElement(XElement root)\n {\n foreach (var element in root.Elements())\n {\n if (element.Name.LocalName.EndsWith(\".TheCollection\"))\n {\n yield return element.Parent;\n }\n foreach (var subElement in FindElement(element))\n {\n yield return subElement;\n }\n }\n }\n\nTo make sure that object represented by this element implements some interface you need to read metadata from your assemblies. I would recommend you to use Mono.Cecil framework to analyze types in your assemblies without using reflection.\n",
"@aku\nYes, I know that xaml doesn't include any indication of base types or interfaces. But I do know the interface of the root objects, and the interface that the collection holds, at compile time. \nThe serialized graphs are stored in a sql database as XML, and we're using linq to retrieve them as XElements. Currently, along with your solution, we are limited to deserializing the graphs, iterating through them, pulling out the objects we want from the collection, removing all references to them from, and then disposing, their parents. Its all very kludgy. I was hoping for a single stroke solution; something along the lines of an xpath, but inline with our linq to sql query that returns just the elements we're looking for...\n"
] | [
0,
0
] | [] | [] | [
"linq",
"linq_to_xml",
"xaml"
] | stackoverflow_0000045732_linq_linq_to_xml_xaml.txt |
Q:
Java serialization with static initialization
In Java, static and transient fields are not serialized. However, I found out that initialization of static fields causes the generated serialVersionUID to be changed. For example, static int MYINT = 3; causes the serialVersionUID to change. In this example, it makes sense because different versions of the class would get different initial values. Why does any initialization change the serialVersionUID? For example, static String MYSTRING = System.getProperty("foo"); also causes the serialVersionUID to change.
To be specific, my question is why does initialization with a method cause the serialVersionUID to change. The problem I hit is that I added a new static field that was initialized with a system property value (getProperty). That change caused a serialization exception on a remote call.
A:
You can find some information about that in the bug 4365406 and in the algorithm for computing serialVersionUID. Basically, when changing the initialization of your static member with System.getProperty(), the compiler introduces a new static property in your class referencing the System class (I assume that the System class was previously unreferenced in your class), and since this property introduced by the compiler is not private, it takes part in the serialVersionUID computation.
Moral: always use an explicit serialVersionUID; you'll save some CPU cycles and some headaches :)
A:
Automatic serialVersionUID is calculated based on members of a class. These can be shown for a class file using the javap tool in the Sun JDK.
In the case mentioned in the question, the member that is added/removed is the static initialiser. This appears as <clinit>()V in class files. The contents of the method can be disassembled using javap -c. You should be able to make out the System.getProperty("foo") call and the assignment to MYSTRING. However, an assignment with a string literal (or any compile-time constant as defined by the Java Language Specification) is supported directly by the class file, removing the need for a static initialiser.
A common case for code targeting J2SE 1.4 (use -source 1.4 -target 1.4) or earlier is static fields holding Class instances that appear as class literals in source code (MyClass.class). The Class instance is looked up on demand with Class.forName, and then stored in a static field. It is this static field that disrupts the serialVersionUID. From J2SE 5.0, a variant of the ldc opcode gives direct support for class literals, removing the need for the synthetic field. Again, all this can be shown with javap -c.
A:
If I read the spec correctly, the automatic serialVersionUID shouldn't change if you change the value of a static or transient field. Take a look at Chapter 5.6 of the Spec.
However, if you think about this a bit - you start by serializing an object that has static int MYINT = 3; when you then deserialize the class, you expect to get the same object back, that is, with MYINT = 3. So, if you change the static initialization you would expect the serialVersionUID to change because you can't get the same object back again.
Anyways, keep this in all your serializable classes and you can control the serialVersionUID:
private static final long serialVersionUID = 7526472295622776147L;
A:
I updated the question to be more clear. I understand why initialization with a literal changes the serialVersionUID but not why dynamic initialization changes it. If you initialize with a method, the value, of course, may always be different.
Setting the serialVersionUID explicitly is fine in a subsequent version of the class only if you are sure that it is a safe change.
| Java serialization with static initialization | In Java, static and transient fields are not serialized. However, I found out that initialization of static fields causes the generated serialVersionUID to be changed. For example, static int MYINT = 3; causes the serialVersionUID to change. In this example, it makes sense because different versions of the class would get different initial values. Why does any initialization change the serialVersionUID? For example, static String MYSTRING = System.getProperty("foo"); also causes the serialVersionUID to change.
To be specific, my question is why does initialization with a method cause the serialVersionUID to change. The problem I hit is that I added a new static field that was initialized with a system property value (getProperty). That change caused a serialization exception on a remote call.
| [
"You can find some information about that in the bug 4365406 and in the algorithm for computing serialVersionUID. Basically, when changing the initialization of your static member with System.getProperty(), the compiler introduces a new static property in your class referencing the System class (I assume that the System class was previously unreferenced in your class), and since this property introduced by the compiler is not private, it takes part in the serialVersionUID computation.\nMorality: always use explicit serialVersionUID, you'll save some CPU cycles and some headaches :)\n",
"Automatic serialVersionUID is calculated based on members of a class. These can be shown for a class file using the javap tool in the Sun JDK.\nIn the case mentioned in the question, the member that is added/removed is the static initialiser. This appears as ()V in class files. The contents of the method can be disassembled using javap -c. You should be able to make out the System.getProperty(\"foo\") call and assignment to MYSTRING. However an assignment with a string literal (or any compile-time constant as defined by the Java Language Specification) is supported directly by the class file, so removing the need for a static initialiser.\nA common case for code targeting J2SE 1.4 (use -source 1.4 -target 1.4) or earlier is static fields to old Class instances which appear as class literals in source code (MyClass.class). The Class instance is looked up on demand with Class.forName, and the stored in a static field. It is this static field that disrupts the serialVersionUID. From J2SE 5.0, a variant of the ldc opcode gives direct support for class literals, removing the need for the synthetic field. Again, all this can be shown with javap -c.\n",
"If I read the spec correctly the automatic serialVersionUID shouldn't change if you change the value of a static of transient field. Take a look at Chapter 5.6 of the Spec.\nHowever, if you think about this a bit - you start by serializing an object that has static int MYINT = 3, when you then deserialize the class you expect to get the same object back, that is, with MYINT = 3. So, if you change the static initialization you would expect the serialVersionUID to change because you can't get the same object back again.\nAnyways, keep this in all your serializable classes and you can control the serialVersionUID:\nprivate static final long serialVersionUID = 7526472295622776147L;\n\n",
"I updated the question to be more clear. I understand why initialization with a literal changes the serialVersionUID but not why dynamic initialization changes it. If you initialize with a method, the value, of course, may always be different. \nSetting the serialVersionUID explicitly is fine in a subsequent version of the class only if you are sure that it is a safe change. \n"
] | [
6,
2,
0,
0
] | [] | [] | [
"java",
"serialization"
] | stackoverflow_0000041499_java_serialization.txt |
Q:
How to change "Generate Method Stub" to throw NotImplementedException in VS?
How can I change the default Generate Method Stub behavior in Visual Studio to generate a method with the body
throw new NotImplementedException();
instead of
throw new Exception("The method or operation is not implemented.");
A:
Taken from: http://blogs.msdn.com/ansonh/archive/2005/12/08/501763.aspx
Visual Studio 2005 supports targeting the 1.0 version of the compact framework. In order to keep the size of the compact framework small, it does not include all of the same types that exist in the desktop framework. One of the types that is not included is NotImplementedException.
You can change the generated code by editing the code snippet file:
C:\Program Files\Microsoft Visual Studio 8\VC#\Snippets\1033\Refactoring\MethodStub.snippet and changing the Declarations section to the following:
<Declarations>
<Literal Editable="true">
<ID>signature</ID>
<Default>signature</Default>
</Literal>
<Literal>
<ID>Exception</ID>
<Function>SimpleTypeName(global::System.NotImplementedException)</Function>
</Literal>
</Declarations>
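With that change in place, a stub generated by the IDE would then look like this (method and parameter names are hypothetical):
private object GetData(int id)
{
    throw new NotImplementedException();
}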
A:
There's another reason: FxCop catches instances of anybody throwing 'Exception' and flags it, but throwing instances of 'NotImplementedException' is acceptable.
I actually like the default behavior, because it does have this differentiation. NotImplementedException is not a temporary exception to be thrown while you're working your way through your code. It implies "I mean it, I'm really not going to implement this thing". If you leave the codegen the way it is, it's easy for you to differentiate in the code the "I will come back to this later" bits from "I've decided not to do this" bits.
| How to change "Generate Method Stub" to throw NotImplementedException in VS? | How can I change default Generate Method Stub behavior in Visaul Studio to generate method with body
throw new NotImplementedException();
instead of
throw new Exception("The method or operation is not implemented.");
| [
"Taken from: http://blogs.msdn.com/ansonh/archive/2005/12/08/501763.aspx\n\nVisual Studio 2005 supports targeting the 1.0 version of the compact framework. In order to keep the size of the compact framework small, it does not include all of the same types that exist in the desktop framework. One of the types that is not included is NotImplementedException. \n\nYou can change the generated code by editing the code snippet file:\nC:\\Program Files\\Microsoft Visual Studio 8\\VC#\\Snippets\\1033\\Refactoring\\MethodStub.snippet and changing the Declarations section to the following:\n <Declarations>\n <Literal Editable=\"true\">\n <ID>signature</ID>\n <Default>signature</Default>\n </Literal>\n <Literal>\n <ID>Exception</ID>\n <Function>SimpleTypeName(global::System.NotImplementedException)</Function>\n </Literal>\n </Declarations>\n\n",
"There's another reason: FxCop catches instances of anybody throwing 'Exception' and flags it, but throwing instances of 'NotImplementedException' is acceptable.\nI actually like the default behavior, because it does have this differentiation. NotImplementedException is not a temporary exception to be thrown while you're working your way through your code. It implies \"I mean it, I'm really not going to implement this thing\". If you leave the codegen the way it is, it's easy for you to differentiate in the code the \"I will come back to this later\" bits from \"I've decided not to do this\" bits.\n"
] | [
8,
1
] | [] | [] | [
".net",
"configuration",
"visual_studio"
] | stackoverflow_0000046003_.net_configuration_visual_studio.txt |
Q:
Self Updating
What's the best way to terminate a program and then run additional code from the program that's being terminated? For example, what would be the best way for a program to self update itself?
A:
You have a couple options:
You could use another application .exe to do the auto update. This is probably the best method.
You can also rename a program's exe while it is running, which allows you to get the new file from an update server and drop it in place. On the program's next startup it will be using the new .exe, and you can then delete the renamed file on startup.
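A minimal C# sketch of that rename-and-replace trick (paths are hypothetical and the download step is elided):
using System.Diagnostics;
using System.IO;

static class SelfUpdater
{
    // Windows locks a running .exe against deletion, but renaming it is allowed.
    public static void SwapInUpdate(string downloadedExePath)
    {
        string current = Process.GetCurrentProcess().MainModule.FileName;
        string backup = current + ".old";

        if (File.Exists(backup))
            File.Delete(backup);           // clean up a previous update

        File.Move(current, backup);        // rename the running exe
        File.Move(downloadedExePath, current);
        // The next startup runs the new .exe; delete the ".old" file then.
    }
}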
A:
It'd be really helpful to know what language we're talking about here. I'm sure I could give you some really great tips for doing this in PowerBuilder or Cobol, but that might not really be what you're after! If you're talking Java however, then you could use a shutdown hook - works great for me.
A:
Another thing to consider is that most of the "major" apps I've been using (FileZilla, Paint.NET, etc.) are having the updaters uninstall the previous version of the app and then doing a fresh install of the new version of the application.
I understand this won't work for really large applications, but this does seem to be a "preferred" process for the small to medium size applications.
A:
I don't know of a way to do it without a second program that the primary program launches prior to shutting down. Program 2 downloads and installs the changes and then relaunches the primary program.
A:
We did something like this in our previous app. We captured the termination of the program (in .NET 2.0) from either the X or the close button, and then kicked off a background update process that the user didn't see. It would check the server (client-server app) for an update, and if there was one available, it would download in the background using BITS. Then the next time the application opened, it would realize that there was a new version (we set a flag) and popped up a message alerting the user to the new version, and a button to click if they wanted to view the new features added to this version.
A:
It makes it easier if you have a secondary app that runs to do the updates. You would execute the "updater" app, and then inside of it wait for the other process to exit. If you need access to the regular apps DLLs and such but they also need updating, you can run the updater from a secondary location with already updated DLLs so that they are not in use in the original location.
A:
If you're writing a .NET application, you might consider using ClickOnce. If you need quite a bit of customization, you might look elsewhere.
We have an external process that performs updating for us. When it finds an update, it downloads it to a secondary folder and then waits for the main application to exit. On exit, it replaces all of the current files. The primary process just kicks the update process off every 4 hours. Because the update process will wait for the exit of the primary app, the primary app doesn't have to do any special processing other than start the update application.
This is a side issue, but if you're considering writing your own update process, I would encourage you to look into using compression of some sort to (1) save on download and (2) provide one file to pull from an update server.
Hope that makes sense!
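To illustrate that external-updater pattern, here is a rough sketch (the class and method names are hypothetical; the file-copy step is elided):
using System;
using System.Diagnostics;

static class UpdateLauncher
{
    // Main app side: hand our process id to the updater, then exit.
    public static void LaunchUpdaterAndExit(string updaterPath)
    {
        int pid = Process.GetCurrentProcess().Id;
        Process.Start(updaterPath, pid.ToString());
        Environment.Exit(0);
    }

    // Updater side: wait for the main app to exit before replacing files.
    public static void WaitForMainApp(int pid)
    {
        try
        {
            Process.GetProcessById(pid).WaitForExit();
        }
        catch (ArgumentException)
        {
            // The process is already gone - safe to proceed.
        }
        // ...copy the updated files over the originals here...
    }
}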
| Self Updating | What's the best way to terminate a program and then run additional code from the program that's being terminated? For example, what would be the best way for a program to self update itself?
| [
"You have a couple options:\nYou could use another application .exe to do the auto update. This is probably the best method.\nYou can also rename a program's exe while it is running. Hence allowing you to get the file from some update server and replace it. On the program's next startup it will be using the new .exe. You can then delete the renamed file on startup. \n",
"It'd be really helpful to know what language we're talking about here. I'm sure I could give you some really great tips for doing this in PowerBuilder or Cobol, but that might not really be what you're after! If you're talking Java however, then you could use a shut down hook - works great for me.\n",
"Another thing to consider is that most of the \"major\" apps I've been using (FileZilla, Paint.NET, etc.), are having the updaters uninstall the previous version of the app and then doing a fresh install of the new version of the application. \nI understand this won't work for really large applications, but this does seem to be a \"preferred\" process for the small to medium size applications.\n",
"I don't know of a way to do it without a second program that the primary program launches prior to shutting down. Program 2 downloads and installs the changes and then relaunches the primary program.\n",
"We did something like this in our previous app. We captured the termination of the program (in .NET 2.0) from either the X or the close button, and then kicked off a background update process that the user didn't see. It would check the server (client-server app) for an update, and if there was one available, it would download in the background using BITS. Then the next time the application opened, it would realize that there was a new version (we set a flag) and popped up a message alerting the user to the new version, and a button to click if they wanted to view the new features added to this version.\n",
"It makes it easier if you have a secondary app that runs to do the updates. You would execute the \"updater\" app, and then inside of it wait for the other process to exit. If you need access to the regular apps DLLs and such but they also need updating, you can run the updater from a secondary location with already updated DLLs so that they are not in use in the original location.\n",
"If you're using writing a .NET application, you might consider using ClickOnce. If you need quite a bit of customization, you might look elsewhere. \nWe have an external process that performs updating for us. When it finds an update, it downloads it to a secondary folder and then waits for the main application to exit. On exit, it replaces all of the current files. The primary process just kicks the update process off every 4 hours. Because the update process will wait for the exit of the primary app, the primary app doesn't have to do any special processing other than start the update application.\nThis is a side issue, but if you're considering writing your own update process, I would encourage you to look into using compression of some sort to (1) save on download and (2) provide one file to pull from an update server.\nHope that makes sense!\n"
] | [
10,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"patch",
"updating"
] | stackoverflow_0000046013_patch_updating.txt |
Q:
Easy way to AJAX WebControls
I've got a web application that I'm trying to optimize. Some of the controls are hidden in dialog-style DIVs. So, I'd like to have them load in via AJAX only when the user wants to see them. This is fine for controls that are mostly literal-based (various menus and widgets), but when I have what I call "dirty" controls - ones that write extensive information to the ViewState, put tons of CSS or script on the page, require lots of references, etc - these are seemingly impossible to move "out of page", especially considering how ASP.NET will react on postback.
I was considering some kind of step where I override Render, find markers for the bits I want to move out and put AJAX placeholders in there, but not only does the server overhead seem extreme, it also feels like a complete hack. Besides, the key element here is the dialog boxes that contain forms with validation controls on them, and I can't imagine how I would move the controls and their required scripts.
In my fevered imagination, I want to do this:
AJAXifier.AJAXify(ctlEditForm);
Sadly, I know this is a dream.
How close can I really get to a quick-and-easy AJAXification without causing too much load on the server?
A:
Check out the RadAjax control from Telerik - it allows you to avoid using UpdatePanels, and limit the amount of info passed back and forth between server and client by declaring direct relationships between calling controls, and controls that should be "Ajaxified" when the calling controls submit postbacks.
A:
I recommend that you walk over to your local book store this weekend, get a cup of coffee and find jQuery in Action by Manning Press. Go ahead and read the first chapter of this 300 page book in the store, then buy it if it resonates with you.
I think you'll be surprised by how easily jQuery lets you do what you're describing here. From ajax calls to the server in the background, to showing and hiding div tags based on the visitor's actions. The amount of code you have to write is super small.
There are a bunch of good JavaScript libraries, this is just one of them that I like, and it really is easy to get started. Start by including a reference to the current jQuery file with a <script> tag and then write a few lines of code to interact with your page.
A:
Step one is to make your "dirty" pieces self contained user controls
Step two is to embed those controls on your consuming page
Step three is to wrap each user control tag in its own Asp:UpdatePanel
Step four is to ensure your control gets the data it needs by having it read from properties which check against the viewstate for pre-existing values. I know this makes your code rely on ugly global variables but it's a fast way to get this done.
Your mileage may vary.
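A sketch of the kind of ViewState-backed property step four describes (the control and property names are illustrative):
public partial class DirtyControl : System.Web.UI.UserControl
{
    // The control re-reads its data from ViewState on postback instead
    // of relying on whatever the hosting page happens to pass in.
    public string SelectedCustomerId
    {
        get { return (string)(ViewState["SelectedCustomerId"] ?? string.Empty); }
        set { ViewState["SelectedCustomerId"] = value; }
    }
}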
| Easy way to AJAX WebControls | I've got a web application that I'm trying to optimize. Some of the controls are hidden in dialog-style DIVs. So, I'd like to have them load in via AJAX only when the user wants to see them. This is fine for controls that are mostly literal-based (various menus and widgets), but when I have what I call "dirty" controls - ones that write extensive information to the ViewState, put tons of CSS or script on the page, require lots of references, etc - these are seemingly impossible to move "out of page", especially considering how ASP.NET will react on postback.
I was considering some kind of step where I override Render, find markers for the bits I want to move out and put AJAX placeholders in there, but not only does the server overhead seem extreme, it also feels like a complete hack. Besides, the key element here is the dialog boxes that contain forms with validation controls on them, and I can't imagine how I would move the controls and their required scripts.
In my fevered imagination, I want to do this:
AJAXifier.AJAXify(ctlEditForm);
Sadly, I know this is a dream.
How close can I really get to a quick-and-easy AJAXification without causing too much load on the server?
| [
"Check out the RadAjax control from Telerik - it allows you to avoid using UpdatePanels, and limit the amount of info passed back and forth between server and client by declaring direct relationships between calling controls, and controls that should be \"Ajaxified\" when the calling controls submit postbacks. \n",
"I recommend that you walk over to your local book store this weekend, get a cup of coffee and find jQuery in Action by Manning Press. Go ahead and read the first chapter of this 300 page book in the store, then buy it if it resonates with you.\nI think you'll be surprized by how easy jQuery lets you perform what your describing here. From ajax calls to the server in the background, to showing and hiding div tags based on the visitor's actions. The amount of code you have to write is super small. \nThere are a bunch of good JavaScript libraries, this is just one of them that I like, and it really is easy to get started. Start by including a reference to the current jQuery file with a tag and then write a few lines of code to interact with your page.\n",
"Step one is to make your \"dirty\" pieces self contained user controls\nStep two is to embed those controls on your consuming page\nStep three is to wrap each user control tag in their own Asp:UpdatePanel\nStep four is to ensure your control gets the data it needs by having it read from properties which check against the viewstate for pre-existing values. I know this makes your code rely on ugly global variables but it's a fast way to get this done.\nYour mileage may vary.\n"
] | [
5,
2,
1
] | [] | [] | [
"ajax",
"asp.net",
"web_controls"
] | stackoverflow_0000002196_ajax_asp.net_web_controls.txt |
Q:
Attaching to a foreign executable in Visual C++ 2003
I have an executable (compiled by someone else) that is hitting an assertion near my code. I work on the code in Visual C++ 2003, but I don't have a project file for this particular executable (the code is used to build many different tools). Is it possible to launch the binary in Visual C++'s debugger and just tell it where the sources are? I've done this before in GDB, so I know it ought to be possible.
A:
Without the PDB symbols for that application you're going to have a tough time making heads or tails of what is going on and where. Any source code information is going to live only in the PDB file that was created when the application was built.
This is assuming that the PDB file was EVER created for this application - which is not the default configuration for release mode VC++ projects, I think. Since you're hitting an assertion, I'm guessing this is a debug configuration?
A:
Short of any other answers, I would try attaching to the executable process in Visual Studio, setting a break point in your code, and when you step into code you don't have source for, it should ask for a source file.
A:
Yes, it's possible. Just set up an empty project and specify the desired .exe file as debug target. I don't remember exactly how, but I know it's doable, because I used to set winamp.exe as debug target when I developed plug-ins for Winamp.
Since you don't have the source file it will only show the assembly code, but that might still be useful as you can also inspect memory, registers, etc.
Update
If you are debugging an assertion in your own program you should be able to see the source just fine, since the path to the source file is stored in the executable when you compile it with debug information.
| Attaching to a foreign executable in Visual C++ 2003 | I have an executable (compiled by someone else) that is hitting an assertion near my code. I work on the code in Visual C++ 2003, but I don't have a project file for this particular executable (the code is used to build many different tools). Is it possible to launch the binary in Visual C++'s debugger and just tell it where the sources are? I've done this before in GDB, so I know it ought to be possible.
| [
"Without the PDB symbols for that application you're going to have a tough time making heads or tails of what is going on and where. I think any source code information is going to be only in that PDB file that was created when whoever built that application.\nThis is assuming that the PDB file was EVER created for this application - which is not the default configuration for release mode VC++ projects I think. Since you're asserting, I guessing this is a debug configuration?\n",
"Short of any other answers, I would try attaching to the executable process in Visual Studio, setting a break point in your code and when you step into the process you don't have source to, it should ask for a source file.\n",
"Yes, it's possible. Just set up an empty project and specify the desired .exe file as debug target. I don't remember exactly how, but I know it's doable, because I used to set winamp.exe as debug target when I developed plug-ins for Winamp.\nSince you don't have the source file it will only show the assembly code, but that might still be useful as you can also inspect memory, registers, etc.\nUpdate\nIf you are debugging an assertion in your own program you should be able to see the source just fine, since the path to the source file is stored in the executable when you compile it with debug information.\n"
] | [
2,
0,
0
] | [] | [] | [
"debugging",
"visual_c++",
"visual_studio_2003"
] | stackoverflow_0000031075_debugging_visual_c++_visual_studio_2003.txt |
Q:
How do I make AutoCompleteExtender render above select controls in IE6
When an AutoCompleteExtender is displayed in IE6 it seems to ignore z-index and renders below any select controls (like dropdownlists) in IE6.
<asp:TextBox ID="TextBox1" runat="server" />
<cc1:AutoCompleteExtender ID="AutoCompleteExtender1" runat="server"
TargetControlID="TextBox1" EnableCaching="true" CompletionSetCount="5"
FirstRowSelected="true" ServicePath="~/Services/Service1.asmx" ServiceMethod="GetSuggestion" />
<asp:DropDownList ID="DropDownList1" runat="server">
<asp:ListItem Text="Item 1" Value="0" />
<asp:ListItem Text="Item 2" Value="1" />
</asp:DropDownList>
How do I make it render above dropdownlists?
A:
Nothing renders below select controls in IE6. It's one of the many "features" Microsoft bestowed upon us when they gifted IE to the world.
You have to hide them, then re-show them.
Observe the standard lightbox script - which does exactly this
(note that link is just to the first thing I found on google which had the source to lightbox.js as a demonstration. It's got nothing to do with anything else)
A:
@Orion has this partially correct - there is one other way to deal with these, and that is to cover the offending select lists with an iframe. This technique is used in Cody Lindley's ThickBox (written for jQuery). See the code for details on how to do it.
| How do I make AutoCompleteExtender render above select controls in IE6 | When an AutoCompleteExtender is displayed in IE6 it seems to ignore z-index and renders below any select controls (like dropdownlists) in IE6.
<asp:TextBox ID="TextBox1" runat="server" />
<cc1:AutoCompleteExtender ID="AutoCompleteExtender1" runat="server"
TargetControlID="TextBox1" EnableCaching="true" CompletionSetCount="5"
FirstRowSelected="true" ServicePath="~/Services/Service1.asmx" ServiceMethod="GetSuggestion" />
<asp:DropDownList ID="DropDownList1" runat="server">
<asp:ListItem Text="Item 1" Value="0" />
<asp:ListItem Text="Item 2" Value="1" />
</asp:DropDownList>
How do I make it render above dropdownlists?
| [
"Nothing renders below select controls in IE6. It's one of the many \"features\" microsoft bestowed upon us when they gifted IE to the world\nYou have to hide them, then re-show them.\nObserve the standard lightbox script - which does exactly this\n(note that link is just to the first thing I found on google which had the source to lightbox.js as a demonstration. It's got nothing to do with anything else)\n",
"@Orion has this partially correct - there is one other way to deal with these, and that is to cover the offending select lists with an iframe. This technique is used in Cody Lindley's ThickBox (written for jQuery). See the code for details on how to do it.\n"
] | [
1,
0
] | [] | [] | [
"asp.net",
"autocompleteextender",
"internet_explorer_6"
] | stackoverflow_0000045267_asp.net_autocompleteextender_internet_explorer_6.txt |
Q:
Remote debugging an NT application from XP with Visual Studio 6.0
Is it possible to remote-debug a Visual C++ 6.0 application running on a Windows NT machine from a developer workstation running Windows XP? If so, is there a procedure written up somewhere?
A:
Take a look at this article. Also this may be helpful although you don't mention which version of the IDE you're using.
A:
Yes -- you can also use a newer version of Visual Studio. As long as you have the PDB file for the target application it doesn't matter what version it was built with (well, VS6 might not understand a newer PDB, but backwards should be fine).
The remote debugging experience on newer VS versions is a lot smoother than old versions in my experience. It is also easier to set up if you can arrange things so that you are attaching to an existing process that you have started manually rather than kicking off the process (avoid a lot of the path setup).
| Remote debugging an NT application from XP with Visual Studio 6.0 | Is it possible to remote-debug a Visual C++ 6.0 application running on a Windows NT machine from a developer workstation running Windows XP? If so, is there a procedure written up somewhere?
| [
"Take a look at this article. Also this may be helpful although you don't mention which version of the IDE you're using. \n",
"Yes -- you can also use a newer version of Visual Studio. As long as you have the PDB file for the target application it doesn't matter what version it was built with (well, VS6 might not understand a newer PDB, but backwards should be fine).\nThe remote debugging experience on newer VS versions is a lot smoother than old versions in my experience. It is also easier to set up if you can arrange things so that you are attaching to an existing process that you have started manually rather than kicking off the process (avoid a lot of the path setup).\n"
] | [
3,
1
] | [] | [] | [
"debugging",
"visual_c++_6",
"windows_nt",
"windows_xp"
] | stackoverflow_0000046084_debugging_visual_c++_6_windows_nt_windows_xp.txt |
Q:
Where can I find the time and space complexity of the built-in sequence types in Python
I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?
A:
Checkout the TimeComplexity page on the py dot org wiki. It covers set/dicts/lists/etc at least as far as time complexity goes.
A:
Raymond D. Hettinger does an excellent talk (slides) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on set and dict, but list was covered too.
There are also some photos of the pertinent slides from EuroPython in a blog.
Here is a summary of my notes on list:
Stores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.
Tries to avoid memcpy when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% to overallocation.
Some operations pre-size. Examples given were range(n), map(), list(), [None] * n, and slicing.
When shrinking, the array is realloced only when it is wasting 50% of space. pop is cheap.
A:
If you're asking what I think you're asking, you can find them here... page 476 and on.
It's written around optimization techniques for Python; it's mostly Big-O notation for time efficiency, not much on memory.
| Where can I find the time and space complexity of the built-in sequence types in Python | I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?
| [
"Checkout the TimeComplexity page on the py dot org wiki. It covers set/dicts/lists/etc at least as far as time complexity goes.\n",
"Raymond D. Hettinger does an excellent talk (slides) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on set and dict, but list was covered too.\nThere are also some photos of the pertinent slides from EuroPython in a blog.\nHere is a summary of my notes on list:\n\nStores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.\nTries to avoid memcpy when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% to overallocation.\nSome operations pre-size. Examples given were range(n), map(), list(), [None] * n, and slicing.\nWhen shrinking, the array is realloced only when it is wasting 50% of space. pop is cheap.\n\n",
"If your asking what I think your asking, you can find them Here... page 476 and on.\nIt's written around optimization techniques for Python; It's mostly Big-O notation of time efficiencies not much memory.\n"
] | [
19,
15,
2
] | [] | [] | [
"big_o",
"complexity_theory",
"performance",
"python",
"sequences"
] | stackoverflow_0000045228_big_o_complexity_theory_performance_python_sequences.txt |
Q:
What could cause Run-time error 1012 Error accessing application data directories
A friend of mine has a problem :).
There is an application written in Visual Basic 6.0 (not by him).
One of the users reported that when it ran on Windows 2000 and tried to scan folders on disk, it raised a box with the message:
Run-time error 1012 Error accessing application data directories
We couldn't google anything about it and didn't find anything about runtime error 1012 in the VB6 help files.
My guess was that VB calls some old API function which returns a folder to which the app has no access (private, encrypted, belonging to another user, with the app run by a user without the needed privileges).
But we could not reproduce this (on Windows XP Professional).
Has anyone met a bug like this in the past?
A:
Error 1012 is rather generically ERROR_CANT_READ. See this Microsoft list, but it also implies it refers to the registry.
You could try running SysInternals Process Monitor to look for failing file/registry operations by the process.
| What could cause Run-time error 1012 Error accessing application data directories | A friend of mine has a problem :).
There is an application written in Visual Basic 6.0 (not by him).
One of the users reported that when it ran on Windows 2000 and tried to scan folders on disk, it raised a box with the message:
Run-time error 1012 Error accessing application data directories
We couldn't google anything about it and didn't find anything about runtime error 1012 in the VB6 help files.
My guess was that VB calls some old API function which returns a folder to which the app has no access (private, encrypted, belonging to another user, with the app run by a user without the needed privileges).
But we could not reproduce this (on Windows XP Professional).
Has anyone met a bug like this in the past?
| [
"Error 1012 is rather generically ERROR_CANT_READ. See this Microsoft list, but it also implies it refers to the registry.\nYou could try running SysInternals Process Monitor to look for failing file/registry operations by the process.\n"
] | [
2
] | [] | [] | [
"runtime_error",
"vb6",
"windows"
] | stackoverflow_0000046156_runtime_error_vb6_windows.txt |
Q:
ASP.NET MVC Route Help, 2 routes, 1 with a category url structure and the other for content page
I need some help with ASP.NET MVC routes. I need to create 2 routes for a cms type application. One route will be for category-level URLs, and the other route will be for the actual page content.
categories, always ends in a '/'
www.example.com/category/
www.example.com/category/subcategory/
www.example.com/category/subcategory/subsubcategory/
content page, doesn't end in a '/', can only be at the root level or after 1 subcategory page.
www.example.com/root-level-page
www.example.com/category/some-page-name
Ideas?
A:
Routing does not distinguish between URLs ending with a / and URLs that don't end in /.
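Given that, one workaround is a single catch-all route plus a decision inside the action; here is a rough sketch (the Cms controller and Resolve action are hypothetical names, not an established convention):
using System.Web.Mvc;
using System.Web.Routing;

public static class CmsRoutes
{
    public static void Register(RouteCollection routes)
    {
        // One catch-all route receives every CMS URL, at any depth.
        routes.MapRoute(
            "CmsCatchAll",
            "{*path}",
            new { controller = "Cms", action = "Resolve", path = "" });
    }
}

public class CmsController : Controller
{
    // Decide category vs. content page from the raw URL's shape,
    // since the route pattern itself cannot see the trailing '/'.
    public ActionResult Resolve(string path)
    {
        bool isCategory = Request.RawUrl.EndsWith("/");
        return View(isCategory ? "Category" : "Page");
    }
}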
| ASP.NET MVC Route Help, 2 routes, 1 with a category url structure and the other for content page | I need some help with ASP.NET MVC routes. I need to create 2 routes for a cms type application. One route will be for category-level URLs, and the other route will be for the actual page content.
categories, always ends in a '/'
www.example.com/category/
www.example.com/category/subcategory/
www.example.com/category/subcategory/subsubcategory/
content page, doesn't end in a '/', can only be at the root level or after 1 subcategory page.
www.example.com/root-level-page
www.example.com/category/some-page-name
Ideas?
| [
"Routing does not distinguish between URLs ending with a / and URLs that don't end in /.\n"
] | [
2
] | [] | [] | [
"asp.net_mvc",
"asp.net_mvc_routing"
] | stackoverflow_0000035637_asp.net_mvc_asp.net_mvc_routing.txt |
Q:
How well do common programming tasks translate to GPUs?
I have recently begun working on a project to establish how best to leverage the processing power available in modern graphics cards for general programming. It seems that the field general purpose GPU programming (GPGPU) has a large bias towards scientific applications with a lot of heavy math as this fits well with the GPU computational model. This is all good and well, but most people don't spend all their time running simulation software and the like so we figured it might be possible to create a common foundation for easily building GPU-enabled software for the masses.
This leads to the question I would like to pose; What are the most common types of work performed by programs? It is not a requirement that the work translates extremely well to GPU programming as we are willing to accept modest performance improvements (Better little than nothing, right?).
There are a couple of subjects we have in mind already:
Data management - Manipulation of large amounts of data from databases
and otherwise.
Spreadsheet type programs (Is somewhat related to the above).
GUI programming (Though it might be impossible to get access to the
relevant code).
Common algorithms like sorting and searching.
Common collections (And integrating them with data manipulation
algorithms)
Which other coding tasks are very common? I suspect a lot of the code being written is of the category of inventory management and otherwise tracking of real 'objects'.
As I have no industry experience I figured there might be a number of basic types of code which is done more often than I realize but which just doesn't materialize as external products.
Both high-level programming tasks and specific low-level operations will be appreciated.
A:
General programming translates terribly to GPUs. GPUs are dedicated to performing fairly simple tasks on streams of data at a massive rate, with massive parallelism. They do not deal well with the rich data and control structures of general programming, and there's no point trying to shoehorn that into them.
A:
General programming translates terribly to GPUs. GPUs are dedicated to performing fairly simple tasks on streams of data at a massive rate, with massive parallelism. They do not deal well with the rich data and control structures of general programming, and there's no point trying to shoehorn that into them.
This isn't too far away from my impression of the situation but at this point we are not concerning ourselves too much with that. We are starting out by getting a broad picture of which options we have to focus on. After that is done we will analyse them a bit deeper and find out which, if any, are plausible options. If we end up determining that it is impossible to do anything within the field, and we are only increasing everybody's electricity bill then that is a valid result as well.
A:
Things that modern computers do a lot of, where a little benefit could go a long way? Let's see...
Data management: relational database management could benefit from faster relational joins (especially joins involving a large number of relations). Involves massive homogeneous data sets.
Tokenising, lexing, parsing text.
Compilation, code generation.
Optimisation (of queries, graphs, etc).
Encryption, decryption, key generation.
Page layout, typesetting.
Full text indexing.
Garbage collection.
A:
I do a lot of simplifying of configuration. That is, I wrap the generation/management of configuration values inside a UI. The primary benefit is I can control workflow and presentation to make it simpler for non-techie users to configure apps/sites/services.
A:
You might want to take a look at the March/April issue of ACM's Queue magazine, which has several articles on GPUs and how best to use them (besides doing graphics, of course).
A:
The other thing to consider when using a GPU is the bus speed. Most graphics cards are designed to have higher bandwidth when transferring data from the CPU out to the GPU, as that's what they do most of the time. The bandwidth from the GPU back up to the CPU, which is needed to return results etc., isn't as fast. So they work best in a pipelined mode.
| How well do common programming tasks translate to GPUs? | I have recently begun working on a project to establish how best to leverage the processing power available in modern graphics cards for general programming. It seems that the field of general-purpose GPU programming (GPGPU) has a large bias towards scientific applications with a lot of heavy math, as this fits well with the GPU computational model. This is all well and good, but most people don't spend all their time running simulation software and the like, so we figured it might be possible to create a common foundation for easily building GPU-enabled software for the masses.
This leads to the question I would like to pose: What are the most common types of work performed by programs? It is not a requirement that the work translate extremely well to GPU programming, as we are willing to accept modest performance improvements (better a little than nothing, right?).
There are a couple of subjects we have in mind already:
Data management - Manipulation of large amounts of data from databases
and otherwise.
Spreadsheet type programs (Is somewhat related to the above).
GUI programming (Though it might be impossible to get access to the
relevant code).
Common algorithms like sorting and searching.
Common collections (And integrating them with data manipulation
algorithms)
Which other coding tasks are very common? I suspect a lot of the code being written is of the category of inventory management and otherwise tracking of real 'objects'.
As I have no industry experience I figured there might be a number of basic types of code which is done more often than I realize but which just doesn't materialize as external products.
Both high-level programming tasks and specific low-level operations will be appreciated.
| [
"General programming translates terribly to GPUs. GPUs are dedicated to performing fairly simple tasks on streams of data at a massive rate, with massive parallelism. They do not deal well with the rich data and control structures of general programming, and there's no point trying to shoehorn that into them.\n",
"\nGeneral programming translates terribly to GPUs. GPUs are dedicated to performing fairly simple tasks on streams of data at a massive rate, with massive parallelism. They do not deal well with the rich data and control structures of general programming, and there's no point trying to shoehorn that into them.\n\nThis isn't too far away from my impression of the situation but at this point we are not concerning ourselves too much with that. We are starting out by getting a broad picture of which options we have to focus on. After that is done we will analyse them a bit deeper and find out which, if any, are plausible options. If we end up determining that it is impossible to do anything within the field, and we are only increasing everybody's electricity bill then that is a valid result as well.\n",
"Things that modern computers do a lot of, where a little benefit could go a long way? Let's see...\n\nData management: relational database management could benefit from faster relational joins (especially joins involving a large number of relations). Involves massive homogeneous data sets.\nTokenising, lexing, parsing text.\nCompilation, code generation.\nOptimisation (of queries, graphs, etc).\nEncryption, decryption, key generation.\nPage layout, typesetting.\nFull text indexing.\nGarbage collection.\n\n",
"I do a lot of simplifying of configuration. That is I wrap the generation/management of configuration values inside a UI. The primary benefit is I can control work flow and presentation to make it simpler for non-techie users to configure apps/sites/services.\n",
"You might want to take a look at the March/April issue of ACM's Queue magazine, which has several articles on GPUs and how best to use them (besides doing graphics, of course).\n",
"The other thing to consider when using a GPU is the bus speed, Most Graphics cards are designed to have a higher bandwidth when transferring data from the CPU out to the GPU as that's what they do most of the time. The bandwidth from the GPU back up to the CPU, which is needed to return results etc, isn't as fast. So they work best in a pipelined mode. \n"
] | [
3,
3,
2,
0,
0,
0
] | [] | [] | [
"cuda",
"gpgpu"
] | stackoverflow_0000044789_cuda_gpgpu.txt |
Q:
Plug In Design for .NET App
I’m looking at rewriting a portion of our application in C# (currently legacy VB6 code). The module I am starting with is responsible for importing data from a variety of systems into our database. About 5-6 times a year, a new client asks us to write a new import for the system that they use. Presently, this requires us to release a new version of our software for each new import option we add to the application.
One of the goals of the rewrite is to make the application support plug-ins. Every new import can become a separate assembly which the host application will recognize and allow the end user to interact with. This will hopefully simplify life to some degree as we can simply drop a new assembly into the directory and have it be recognized and used by the main (host) application.
One of the items I am struggling with relates to the differences between the import options we currently support. In some cases we actually let the user point to a directory and read all of the files within the directory into our system. In other cases we allow them to point to a single file and import its contents. Additionally, some imports have a date range restriction that the user applies while others do not.
My question is, how can I design the application in a manner that allows for some flexibility among the imports we build and support while at the same time implementing a common interface that will allow the host application to easily recognize the plug-ins and the options that each one exposes to the user?
A:
I would recommend you take a look at the Managed Add-In Framework that shipped with .NET 3.5. The Add-In team has posted some samples and tools at the CodePlex site as well.
A:
.Net 3.5 has the System.AddIn namespace.
This thread also has some good information for older versions of the framework:
http://forums.devshed.com/net-development-87/system-plugin-532149.html
A:
for the theory take a look at the plugin pattern in Martin Fowler's Patterns of Enterprise Application Architecture
for an interesting example take a look at this tutorial: Plugin Architecture using C#
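One way to reconcile the differing import styles behind a single contract is to have each plug-in advertise its own capabilities so the host can adapt its UI. The sketch below is only an illustration, not part of System.AddIn; the IImporter interface and every member name are invented, and error handling is omitted:

public interface IImporter
{
    string DisplayName { get; }
    bool ImportsDirectories { get; }   // user picks a folder rather than a single file
    bool SupportsDateRange { get; }    // host shows date pickers only when true
    void Import(string path, DateTime? from, DateTime? to);
}

// Host side: scan the plug-in folder and instantiate every importer found.
List<IImporter> importers = new List<IImporter>();
foreach (string file in Directory.GetFiles(pluginDir, "*.dll"))
    foreach (Type t in Assembly.LoadFrom(file).GetTypes())
        if (typeof(IImporter).IsAssignableFrom(t) && !t.IsAbstract)
            importers.Add((IImporter)Activator.CreateInstance(t));

The host can then build its import menu from DisplayName and enable the folder picker or date-range fields based on the two flags.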
| Plug In Design for .NET App | I’m looking at rewriting a portion of our application in C# (currently legacy VB6 code). The module I am starting with is responsible for importing data from a variety of systems into our database. About 5-6 times a year, a new client asks us to write a new import for the system that they use. Presently, this requires us to release a new version of our software for each new import option we add to the application.
One of the goals of the rewrite is to make the application support plug-ins. Every new import can become a separate assembly which the host application will recognize and allow the end user to interact with. This will hopefully simplify life to some degree as we can simply drop a new assembly into the directory and have it be recognized and used by the main (host) application.
One of the items I am struggling with relates to the differences between the import options we currently support. In some cases we actually let the user point to a directory and read all of the files within the directory into our system. In other cases we allow them to point to a single file and import its contents. Additionally, some imports have a date range restriction that the user applies while others do not.
My question is, how can I design the application in a manner that allows for some flexibility among the imports we build and support while at the same time implementing a common interface that will allow the host application to easily recognize the plug-ins and the options that each one exposes to the user?
| [
"I would recommend you take a look at the Managed Add-In Framework that shipped with .NET 3.5. The Add-In team has posted some samples and tools at CodePlex site as well..\n",
".Net 3.5 has the system.Addin namespace.\nThis thread also has some good information for older versions of the framework:\nhttp://forums.devshed.com/net-development-87/system-plugin-532149.html\n",
"for the theory take a look at the plugin pattern in martin fowlers Patterns of Enterprise Application Architecture\nfor an interesting example take a look at this tutorial: Plugin Architecture using C#\n"
] | [
3,
1,
1
] | [] | [] | [
".net",
"interface_design",
"plugins"
] | stackoverflow_0000046292_.net_interface_design_plugins.txt |
Q:
Tomcat doFilter() invoked with committed response
I have a Tomcat Filter that delegates requests to a handling object depending on the URL. This is the only filter in the FilterChain. I have an Ajax app that hammers this filter with lots of requests.
Recently I noticed an issue where the filter's doFilter method is often called with a committed response as a parameter (Internally, it is the coyote response that is marked committed).
It seems to me that the only way that this can happen is if the recycle() method is not called on this coyote response. I have checked to make sure that I am not keeping references to any of the request, response, outputStream, or writer objects. Additionally, I made sure to close the outputStream in a finally block. However, this doesn't resolve this issue.
This sounds like I am doing something to abuse the servlet container but I am having trouble tracking it down.
A:
I have tried using Tomcat 6.16 and 6.18. This is definitely the only filter in the chain.
It seems that something is keeping a reference to the servlet outputStream. I wrapped the ServletOutputStream in my own OutputStream and then made sure the reference is destroyed. This fixed the issue so that I no longer see a committed response passed in.
This is an odd side effect of holding a reference. But I don't think it qualifies as a Tomcat bug. More likely a bug in ImageIO.createImageOutputStream() that I suspect is holding the reference.
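A rough Java sketch of that wrapping trick, with invented names (the point is that downstream code only ever sees the wrapper, and release() severs the link to Tomcat's recyclable stream when the request ends):

class ReleasableOutputStream extends FilterOutputStream {
    ReleasableOutputStream(OutputStream out) { super(out); }
    void release() { this.out = null; }  // drop the container's stream
}

// Inside doFilter():
ReleasableOutputStream wrapped =
        new ReleasableOutputStream(response.getOutputStream());
try {
    handler.handle(request, wrapped);  // hand ONLY the wrapper downstream
} finally {
    // Even if something (e.g. ImageIO) still holds the wrapper, it no
    // longer pins the container's recycled stream.
    wrapped.release();
}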
A:
What version of Tomcat are you using? To me this sounds like a bug in Tomcat, I can't think of any reason why your doFilter method should be called with a response that's already been committed (if that filter is the only one in the chain, are you sure about this?).
| Tomcat doFilter() invoked with committed response | I have a Tomcat Filter that delegates requests to a handling object depending on the URL. This is the only filter in the FilterChain. I have an Ajax app that hammers this filter with lots of requests.
Recently I noticed an issue where the filter's doFilter method is often called with a committed response as a parameter (Internally, it is the coyote response that is marked committed).
It seems to me that the only way that this can happen is if the recycle() method is not called on this coyote response. I have checked to make sure that I am not keeping references to any of the request, response, outputStream, or writer objects. Additionally, I made sure to close the outputStream in a finally block. However, this doesn't resolve this issue.
This sounds like I am doing something to abuse the servlet container but I am having trouble tracking it down.
| [
"I have tried using Tomcat 6.16 and 6.18. This is definitely is the only filter in the chain.\nIt seems that something is keeping a reference to the servlet outputStream. I wrapped the ServletOutputStream in my own OutputStream and then made sure the reference is destroyed. This fixed the issue so that I no longer see a committed response passed in.\nThis is an odd side effect of holding a reference. But I don't think it qualifies as a Tomcat bug. More likely a bug in ImageIO.createImageOutputStream() that I suspect is holding the reference.\n",
"What version of Tomcat are you using? To me this sounds like a bug in Tomcat, I can't think of any reason why your doFilter method should be called with a response that's already been committed (if that filter is the only one in the chain, are you sure about this?). \n"
] | [
4,
0
] | [] | [] | [
"java",
"servlets",
"tomcat"
] | stackoverflow_0000045361_java_servlets_tomcat.txt |
Q:
How to Dynamically Generate String Validation?
Does anyone know of a library (preferably php) or algorithm for auto-generating regex's from some common descriptions?
For example, have a form with the possible options of:
- Length (=x, between x & y, etc)
- Starts with
- Ends with
- Character(s) x(yz) at index i
- Specify one or more alternative behavior based on the above
- And so on..
The idea is that for certain data entities in a system, you'll be able to go to a form and set this criteria for a data field. Afterward, any time that data field for that type of data entity is entered, it will be validated against the regex.
This seems like it could grow into a complex problem though, so I'm not expecting anyone to solve it as a whole. Any suggestions are much appreciated.
A:
Would simple globs be enough? For globs it's just a matter of replacing * with .* and adding ^ and $. Or maybe Excel-style patterns? It should not be too hard to write a regexp generator for simple rules like this...
My point is, adjust your requirements to simplify the code, and then maybe add more features as needed.
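A minimal PHP sketch of the glob-to-regex translation (the function name is made up):

function glob_to_regex($glob) {
    $quoted = preg_quote($glob, '/');           // escape regex metacharacters
    $pattern = str_replace(array('\*', '\?'),   // un-escape the glob wildcards
                           array('.*', '.'), $quoted);
    return '/^' . $pattern . '$/';
}
// glob_to_regex('report-*.csv') matches 'report-2008.csv' but not 'x.csv'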
| How to Dynamically Generate String Validation? | Does anyone know of a library (preferably php) or algorithm for auto-generating regex's from some common descriptions?
For example, have a form with the possible options of:
- Length (=x, between x & y, etc)
- Starts with
- Ends with
- Character(s) x(yz) at index i
- Specify one or more alternative behavior based on the above
- And so on..
The idea is that for certain data entities in a system, you'll be able to go to a form and set this criteria for a data field. Afterward, any time that data field for that type of data entity is entered, it will be validated against the regex.
This seems like it could grow into a complex problem though, so I'm not expecting anyone to solve it as a whole. Any suggestions are much appreciated.
| [
"Would simple globs be enough? For globs it's just a matter of replacing * with .* and adding ^ and $. Or may be Excel-style patterns? It should not be too hard to write a regexp generator for simple rules like this...\nMy point is, adjust your requirements to simplify the code, and then may be add more features as needed.\n"
] | [
2
] | [] | [] | [
"php",
"regex",
"validation",
"webforms"
] | stackoverflow_0000046339_php_regex_validation_webforms.txt |
Q:
Redirect from domain name to a dotted quad hosted box
I have a php server that is running my domain name. For testing purposes I am running an asp.net on a dotted quad IP. I am hoping to link them together via either PHP or some kind of DNS/.htaccess voodoo.
So if I go to www.mydomain.com/test it redirects (but keeps the URL www.mydomain.com/test in the browser's address bar) and the pages are served by the dotted quad IP asp.net box.
A:
Instead of pointing www.yourdomain.com/test at your test server, why not use test.yourdomain.com?
Assuming you have access to the DNS records for yourdomain.com, you should just need to create an A record mapping test.yourdomain.com to your test server's IP address.
A:
It is quite possible, if I understand what you're getting at.
You have a PHP server with your domain pointing to it. You also have a separate ASP.NET server that only has an IP address associated with it, no domain.
Is there any drawback to simply pointing your domain name to your ASP.NET box?
A:
The easiest way is to make www.mydomain.com/test serve an HTML file which has a single frame with the plain IP address. However, this means that the URL in the (awesome) address bar always stays exactly the same, even if you click a link on the displayed page. (You can avoid this by adding target=_top in the href, but this would require some modifications to your "asp.net".)
The only other way I can think of is to make www.mydomain.com act as a proxy. That is, at /test it has a script or something that gets the page from your "asp.net" and forwards it to the client.
A:
You can do this with a proxy, but I think Will Harris's answer is the best - use a subdomain. Much simpler, and it'll get rid of issues with relative links as well.
A:
I agree that the sub-domain idea is the best, but if for some reason it doesn't work for you, you could also have the php page at /test proxy requests to a URL at the dotted quad machine (using fopen to access the dotted quad URL).
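For illustration, that fopen proxy could be as small as the sketch below; the IP is a placeholder, and a real version would have to sanitize $path and forward headers and POST data:

<?php
// proxy.php - naive pass-through to the ASP.NET test box (sketch only)
$path = isset($_GET['path']) ? $_GET['path'] : '/';
$remote = fopen('http://203.0.113.5' . $path, 'r'); // placeholder dotted quad
if ($remote) {
    fpassthru($remote); // stream the remote response straight to the client
    fclose($remote);
}
?>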
| Redirect from domain name to a dotted quad hosted box | I have a php server that is running my domain name. For testing purposes I am running an asp.net on a dotted quad IP. I am hoping to link them together via either PHP or some kind of DNS/.htaccess voodoo.
So if I go to www.mydomain.com/test it redirects (but keeps the URL www.mydomain.com/test in the browser's address bar) and the pages are served by the dotted quad IP asp.net box.
| [
"Instead of pointing www.yourdomain.com/test at your test server, why not use test.yourdomain.com?\nAssuming you have access to the DNS records for yourdomain.com, you should just need to create an A record mapping test.yourdomain.com to your test server's IP address.\n",
"It is quite possible, if I understand what you're getting at.\nYou have a PHP server with your domain pointing to it. You also have a separate ASP.NET server that only has an IP address associated with it, no domain.\nIs there any drawback to simply pointing your domain name to your ASP.NEt box?\n",
"The easiest way is to make www.mydomain.com/test serve a HTML file which has a single frame with the plain IP address. However, this means that the URL in the (awesome) address bar always stays exactly the same, even if you click a link on the displayed page. (You can avoid this by adding target=_top in the href, but this would require some modifications to your \"asp.net\".)\nThe only other way I can think of is to make www.mydomain.com act as proxy. That is, at /test it has a script or something that gets the page from your \"asp.net\" and forwards it to the client.\n",
"You can do this with a proxy, but I think Will Harris's answer is the best - use a subdomain. Much simpler, and it'll get rid of issues with relative links as well.\n",
"I agree that the sub-domain idea is the best, but if for some reason it doesn't work for you you could also have the php page at /test proxy requests to a URL at the dotted quad machine (using fopen to access the dotted quad URL).\n"
] | [
6,
0,
0,
0,
0
] | [] | [] | [
"dns",
"hosting",
"php"
] | stackoverflow_0000046074_dns_hosting_php.txt |
Q:
web site structure/architecture
What web site structure(s)/architecture(s) would the community swear by, with a narrowing down in the direction towards more of a small facebook style project?
I understand the question to be rather broad/subjective; but being relatively new to the area of web development, I find just looking at and learning from examples of working projects most times extremely helpful, and that at other times just blows my mind and changes how I construct future assignments.
With the latter paragraph in mind, does the community have any suggestions on places to look/articles to read?
A:
I guess it depends on the technology you select. For web projects in general I've always employed (Web-)MVC for the past two years or so. The advantage being a clear separation of frontend and backend in order to create a manageable code base.
But that's as vague as a recommendation could be. :)
Aside from using a framework to build your site from scratch, you might also want to look into using what's already out there (in terms of open source). I'd recommend any kind of "community software" that's semi-established, well documented, not too often in the news because of security issues and offers API to extend its base. That could indeed jump start you on your facebook-esque site. ;)
A:
Possibly a bit heavy for your immediate needs, but have you seen the Polar bear*? Well worth a browse in the library to see if it's what you require.
*Information Architecture for the World Wide Web, Second Edition
A:
Thanks, IainMH, Till. I'm without a formal computer science qualification and find large blanks in my knowledge. Over the past couple of years I've gone surprisingly far, though not knowing the underlying foundations of the projects I've created pollutes their efficiency and success.
Being a bit of a perfectionist (what programmer isn't?) doesn't help the headaches I get from looking at badly formed projects, which I only recognize as badly formed after stepping back and looking at how they're structured. I guess it's a chicken and egg thing, but also a planning thing.
Anyhow, what has helped is studying existing projects.
| web site structure/architecture | What web site structure(s)/architecture(s) would the community swear by, with a narrowing down in the direction towards more of a small facebook style project?
I understand the question to be rather broad/subjective; but being relatively new to the area of web development, I find just looking at and learning from examples of working projects most times extremely helpful, and that at other times just blows my mind and changes how I construct future assignments.
With the latter paragraph in mind, does the community have any suggestions on places to look/articles to read?
| [
"I guess it depends on the technology you select. For web projects in general I've always employed (Web-)MVC for the past two years or so. The advantage being a clear seperation of frontend and backend in order to create a managable code base.\nBut that's as vague as a recommendation could be. :)\nAside from using a framework to build your site from scratch, you might also want to look into using what's already out there (in terms of open source). I'd recommend any kind of \"community software\" that's semi-established, well documented, not too often in the news because of security issues and offers API to extend its base. That could indeed jump start you on your facebook-esque site. ;)\n",
"Possibly a bit heavy for your immediate needs, but have you seen the Polar bear*? Well worth a browse in the library to see if it's what you require.\n*Information Architecture for the World Wide Web, Second Edition\n",
"Thanks, IainMH, Till. I'm without a formal computer science qualification and find large blanks in my knowledge. Over the past couple of years I've gone surprisingly far, though knowing the underlining foundation of projects I've created pollute their efficiency and success.\nBeing a bit of a perfectionist doesn't help (what programmer isn't?) the headaches I get from looking at badly formed projects, that only to my knowing are badly formed only after stepping back and looking at how they're structured. I guess it's a chicken and egg thing, but also a planning thing.\nAnyhow, what has helped is studying existing projects.\n"
] | [
1,
0,
0
] | [] | [] | [
"architecture"
] | stackoverflow_0000046280_architecture.txt |
Q:
How do I specify multiple constraints on a generic type in C#?
What is the syntax for placing constraints on multiple types? The basic example:
class Animal<SpeciesType> where SpeciesType : Species
I would like to place constraints on both types in the following definition such that SpeciesType must inherit from Species and OrderType must inherit from Order:
class Animal<SpeciesType, OrderType>
A:
public class Animal<SpeciesType,OrderType>
where SpeciesType : Species
where OrderType : Order
{
}
A:
You should be able to go :
class Animal<SpeciesType, OrderType>
where SpeciesType : Species
where OrderType : Order {
}
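As a quick illustration (all of the class names below are invented), the doubly-constrained type then accepts only matching arguments:

class Species { }
class Mammal : Species { }

class Order { }
class Carnivora : Order { }

class Animal<SpeciesType, OrderType>
    where SpeciesType : Species
    where OrderType : Order { }

// Animal<Mammal, Carnivora> compiles; Animal<string, Carnivora> does not,
// because string does not derive from Species.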
| How do I specify multiple constraints on a generic type in C#? | What is the syntax for placing constraints on multiple types? The basic example:
class Animal<SpeciesType> where SpeciesType : Species
I would like to place constraints on both types in the following definition such that SpeciesType must inherit from Species and OrderType must inherit from Order:
class Animal<SpeciesType, OrderType>
| [
"public class Animal<SpeciesType,OrderType>\n where SpeciesType : Species\n where OrderType : Order\n{\n}\n\n",
"You should be able to go :\nclass Animal<SpeciesType, OrderType>\n where SpeciesType : Species\n where OrderType : Order {\n}\n\n"
] | [
60,
18
] | [] | [] | [
".net",
"c#",
"generics",
"oop",
"type_constraints"
] | stackoverflow_0000046377_.net_c#_generics_oop_type_constraints.txt |
Q:
Pylons error - 'MySQL server has gone away'
I'm using Pylons (a python framework) to serve a simple web application, but it seems to die from time to time, with this in the error log: (2006, 'MySQL server has gone away')
I did a bit of checking, and saw that this was because the connections to MySQL were not being renewed. This shouldn't be a problem though, because the sqlalchemy.pool_recycle in the config file should automatically keep it alive. The default was 3600, but I dialed it back to 1800 because of this problem. It helped a bit, but 3600 should be fine according to the docs. The errors still happen semi-regularly. I don't want to lower it too much though and DOS my own database :).
Maybe something in my MySQL config is goofy? Not sure where to look exactly.
Other relevant details:
Python 2.5
Pylons: 0.9.6.2 (w/ sql_alchemy)
MySQL: 5.0.51
A:
I think I fixed it. It turns out I had a simple config error. My ini file read:
sqlalchemy.default.url = [connection string here]
sqlalchemy.pool_recycle = 1800
The problem is that my environment.py file declared that the engine would only map keys with the prefix: sqlalchemy.default so pool_recycle was ignored.
The solution is to simply change the second line in the ini to:
sqlalchemy.default.pool_recycle = 1800
A:
You might want to check MySQL's timeout variables:
show variables like '%timeout%';
You're probably interested in wait_timeout (less likely but possible: interactive_timeout). On Debian and Ubuntu, the defaults are 28800 (MySQL kills connections after 8 hours), but maybe the default for your platform is different or whoever administrates the server has configured things differently.
AFAICT, pool_recycle doesn't actually keep the connections alive, it expires them on its own before MySQL kills them. I'm not familiar with pylons, but if causing the connections to intermittently do a SELECT 1; is an option, that will keep them alive at the cost of basically no server load and minimal network traffic. One final thought: are you somehow managing to use a connection that pylons thinks it has expired?
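For reference, if you build the engine in code rather than from the ini file, the same setting can be passed directly; the connection URL here is a placeholder:

from sqlalchemy import create_engine

engine = create_engine('mysql://user:pw@localhost/mydb', pool_recycle=1800)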
| Pylons error - 'MySQL server has gone away' | I'm using Pylons (a python framework) to serve a simple web application, but it seems to die from time to time, with this in the error log: (2006, 'MySQL server has gone away')
I did a bit of checking, and saw that this was because the connections to MySQL were not being renewed. This shouldn't be a problem though, because the sqlalchemy.pool_recycle in the config file should automatically keep it alive. The default was 3600, but I dialed it back to 1800 because of this problem. It helped a bit, but 3600 should be fine according to the docs. The errors still happen semi-regularly. I don't want to lower it too much though and DOS my own database :).
Maybe something in my MySQL config is goofy? Not sure where to look exactly.
Other relevant details:
Python 2.5
Pylons: 0.9.6.2 (w/ sql_alchemy)
MySQL: 5.0.51
| [
"I think I fixed it. It's turns out I had a simple config error. My ini file read:\nsqlalchemy.default.url = [connection string here]\nsqlalchemy.pool_recycle = 1800\n\nThe problem is that my environment.py file declared that the engine would only map keys with the prefix: sqlalchemy.default so pool_recycle was ignored.\nThe solution is to simply change the second line in the ini to:\nsqlalchemy.default.pool_recycle = 1800\n\n",
"You might want to check MySQL's timeout variables:\nshow variables like '%timeout%';\n\nYou're probably interested in wait_timeout (less likely but possible: interactive_timeout). On Debian and Ubuntu, the defaults are 28800 (MySQL kills connections after 8 hours), but maybe the default for your platform is different or whoever administrates the server has configured things differently.\nAFAICT, pool_recycle doesn't actually keep the connections alive, it expires them on its own before MySQL kills them. I'm not familiar with pylons, but if causing the connections to intermittently do a SELECT 1; is an option, that will keep them alive at the cost of basically no server load and minimal network traffic. One final thought: are you somehow managing to use a connection that pylons thinks it has expired?\n"
] | [
8,
2
] | [] | [] | [
"mysql",
"pylons",
"python"
] | stackoverflow_0000008154_mysql_pylons_python.txt |
Q:
MenuStrip Error
My users are having an intermittent error when using a Windows Forms application built in VB.NET 3.5. Apparently when they click on the form and the form re-paints, a red 'X' will be painted over the MenuStrip control and the app will crash with the following error.
Has anyone seen this before? Can someone point me in the right direction?
System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
at System.Collections.ArrayList.get_Item(Int32 index)
at System.Windows.Forms.ToolStripItemCollection.get_Item(Int32 index)
at System.Windows.Forms.ToolStrip.OnPaint(PaintEventArgs e)
at System.Windows.Forms.Control.PaintWithErrorHandling(PaintEventArgs e, Int16 layer, Boolean disposeEventArgs)
at System.Windows.Forms.Control.WmPaint(Message& m)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
at System.Windows.Forms.ToolStrip.WndProc(Message& m)
at System.Windows.Forms.MenuStrip.WndProc(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
A:
Are you adding items to this strip dynamically?
A:
You will have to find where in the code this is happening, but it is being caused by an integer variable being used to access your dynamic menu. Before you use the menu, use an if statement to make sure it is between 0 and the size of the collection - 1. Also, place a break point where you create the variable and step through the code watching what happens to it.
Also, a code sample of how you are using the dynamic menu would help.
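A guard along those lines might look like this (the control and variable names are placeholders):

' Only touch the item if the index is actually inside the collection
If index >= 0 AndAlso index < MenuStrip1.Items.Count Then
    Dim item As ToolStripItem = MenuStrip1.Items(index)
    ' ... use item ...
End If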
A:
While looking through the code, I discovered that the menu is being cleared and reloaded whenever the form data is being refreshed. The menu only needs to be loaded once, when the form is initially loaded.
I think that the menu may be getting cleared while the form is in the process of being painted. Do you think that this may be true?
A:
Thanks to all of you that helped to point me in the right direction. I made a change to only clear/add the menu when the form is loaded, so I shouldn't see this error again when the form is painting.
| MenuStrip Error | My users are having an intermittent error when using a Windows Forms application built in VB.NET 3.5. Apparently when they click on the form and the form re-paints, a red 'X' will be painted over the MenuStrip control and the app will crash with the following error.
Has anyone seen this before? Can someone point me in the right direction?
System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
at System.Collections.ArrayList.get_Item(Int32 index)
at System.Windows.Forms.ToolStripItemCollection.get_Item(Int32 index)
at System.Windows.Forms.ToolStrip.OnPaint(PaintEventArgs e)
at System.Windows.Forms.Control.PaintWithErrorHandling(PaintEventArgs e, Int16 layer, Boolean disposeEventArgs)
at System.Windows.Forms.Control.WmPaint(Message& m)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
at System.Windows.Forms.ToolStrip.WndProc(Message& m)
at System.Windows.Forms.MenuStrip.WndProc(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
| [
"Are you adding items to this strip dynamically?\n",
"You will have to find where in the the code this is happening but it is being cause by an integer variable being used to access your dynamic menu. Before you use the menu, use an if statement to make sure it is between 0 and the size of the collection - 1. Also, place a break point where you create the variable and step through the code watching what happens to it. \nAlso, a code sample of how you are using the dynamic menu would help.\n",
"While looking through the code, I discovered that the menu is being cleared and reloaded whenever the form data is being refreshed. The menu only needs to be loaded once, when the form is initially loaded. \nI think that the menu may be getting cleared while the form is in the process of being painted. Do you think that this may be true?\n",
"Thanks to all of you that helped to point me in the right direction. I made a change to only clear/add the menu when the form is loaded, so I shouldn't see this error again when the form is painting.\n"
] | [
0,
0,
0,
0
] | [] | [] | [
".net_3.5",
"exception",
"vb.net",
"winforms"
] | stackoverflow_0000045924_.net_3.5_exception_vb.net_winforms.txt |
Q:
How to generate a verification code/number?
I'm working on an application where users have to make a call and type a verification number with the keypad of their phone.
I would like to be able to detect if the number they type is correct or not. The phone system does not have access to a list of valid numbers, but instead, it will validate the number against an algorithm (like a credit card number).
Here are some of the requirements :
It must be difficult to type a valid random code
It must be difficult to have a valid code if I make a typo (transposition of digits, wrong digit)
I must have a reasonable number of possible combinations (let's say 1M)
The code must be as short as possible, to avoid errors from the user
Given these requirements, how would you generate such a number?
EDIT :
@Haaked: The code has to be numerical because the user types it with their phone.
@matt b: On the first step, the code is displayed on a Web page, the second step is to call and type in the code. I don't know the user's phone number.
Followup : I've found several algorithms to check the validity of numbers (See this interesting Google Code project : checkDigits).
A:
After some research, I think I'll go with the ISO 7064 Mod 97,10 formula. It seems pretty solid as it is used to validate IBAN (International Bank Account Number).
The formula is very simple:
Take a number : 123456
Apply the following formula to obtain the 2 digits checksum : mod(98 - mod(number * 100, 97), 97) => 76
Concat number and checksum to obtain the code => 12345676
To validate a code, verify that mod(code, 97) == 1
Test :
mod(12345676, 97) = 1 => GOOD
mod(21345676, 97) = 50 => BAD !
mod(12345678, 97) = 10 => BAD !
Apparently, this algorithm catches most of the errors.
Another interesting option was the Verhoeff algorithm. It has only one verification digit and is more difficult to implement (compared to the simple formula above).
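For concreteness, the Mod 97,10 arithmetic above fits in a few lines of Python:

def add_checksum(number):
    # ISO 7064 Mod 97,10: append a two-digit checksum
    checksum = (98 - (number * 100) % 97) % 97
    return number * 100 + checksum

def is_valid(code):
    return code % 97 == 1

add_checksum(123456)  # -> 12345676
is_valid(12345676)    # -> True
is_valid(21345676)    # -> False (the transposition is caught)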
A:
For 1M combinations you'll need 6 digits. To make sure that there aren't any accidentally valid codes, I suggest 9 digits with a 1/1000 chance that a random code works. I'd also suggest using another digit (10 total) to perform an integrity check. As far as distribution patterns, random will suffice and the check digit will ensure that a single error will not result in a correct code.
Edit: Apparently I didn't fully read your request. Using a credit card number, you could perform a hash on it (MD5 or SHA1 or something similar). You then truncate at an appropriate spot (for example 9 characters) and convert to base 10. Then you add the check digit(s) and this should more or less work for your purposes.
A:
You want to segment your code. Part of it should be a 16-bit CRC of the rest of the code.
If all you want is a verification number then just use a sequence number (assuming you have a single point of generation). That way you know you are not getting duplicates.
Then you prefix the sequence with a CRC-16 of that sequence number AND some private key. You can use anything for the private key, as long as you keep it private. Make it something big, at least a GUID, but it could be the text to War and Peace from project Gutenberg. Just needs to be secret and constant. Having a private key prevents people from being able to forge a key, but using a 16-bit CRC makes it easier to break.
To validate you just split the number into its two parts, and then take a CRC-16 of the sequence number and the private key.
If you want to obscure the sequential portion more, then split the CRC in two parts. Put 3 digits at the front and 2 at the back of the sequence (zero pad so the length of the CRC is consistent).
This method allows you to start with smaller keys too. The first 10 keys will be 6 digits.
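A Python sketch of that scheme, purely to make it concrete (the secret is a placeholder, and a truncated CRC-32 stands in for a true CRC-16):

import zlib

SECRET = b'replace-with-a-real-private-key'  # placeholder

def make_code(seq):
    # 16-bit check value computed over the sequence number plus the secret
    crc = zlib.crc32(str(seq).encode() + SECRET) & 0xFFFF
    return '%05d%06d' % (crc, seq)  # 5 check digits + 6-digit sequence

def verify_code(code):
    crc, seq = int(code[:5]), int(code[5:])
    return crc == (zlib.crc32(str(seq).encode() + SECRET) & 0xFFFF)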
A:
Does it have to be only numbers? You could create a random number between 1 and 1M (I'd suggest even higher though) and then Base32 encode it. The next thing you need to do is Hash that value (using a secret salt value) and base32 encode the hash. Then append the two strings together, perhaps separated by the dash.
That way, you can verify the incoming code algorithmically. You just take the left side of the code, hash it using your secret salt, and compare that value to the right side of the code.
A:
I must have a reasonable number of possible combinations (let's say 1M)
The code must be as short as possible, to avoid errors from the user
Well, if you want it to have at least one million combinations, then you need at least six digits. Is that short enough?
A:
When you are creating the verification code, do you have access to the caller's phone number?
If so I would use the caller's phone number and run it through some sort of hashing function so that you can guarantee that the verification code you gave to the caller in step 1 is the same one that they are entering in step 2 (to make sure they aren't using a friend's validation code or they simply made a very lucky guess).
About the hashing, I'm not sure if it's possible to take a 10 digit number and come out with a hash result that would be < 10 digits (I guess you'd have to live with a certain amount of collision) but I think this would help ensure the user is who they say they are.
Of course this won't work if the phone number used in step 1 is different than the one they are calling from in step 2.
A:
Assuming you already know how to detect which key the user hit, this should be doable reasonably easily. In the security world, there is the notion of a "one time" password. This is sometimes referred to as a "disposable password." Normally these are restricted to the (easily typable) ASCII values. So, [a-zA-Z0-9] and a bunch of easily typable symbols, like comma, period, semi colon, and parenthesis. In your case, though, you'd probably want to limit the range to [0-9] and possibly include * and #.
I am unable to explain all the technical details of how these one-time codes are generated (or work) adequately. There is some intermediate math behind it, which I'd butcher without first reviewing it myself. Suffice it to say that you use an algorithm to generate a stream of one time passwords. No matter how many previous codes you know, the subsequent one should be impossible to guess! In your case, you'll simply use each password on the list as the user's random code.
Rather than fail at explaining the details of the implementation myself, I'll direct you to a 9 page article where you can read up on it yourself: https://www.grc.com/ppp.htm
A:
It sounds like you have the unspoken requirement that it must be quickly determined, via algorithm, that the code is valid. This would rule out you simply handing out a list of one time pad numbers.
There are several ways people have done this in the past.
Make a public key and private key. Encode the numbers 0-999,999 using the private key, and hand out the results. You'll need to throw in some random numbers to make the result come out to the longer version, and you'll have to convert the result from base 64 to base 10. When you get a number entered, convert it back to base64, apply the private key, and see if the interesting numbers are under 1,000,000 (discard the random numbers).
Use a reversible hash function
Use the first million numbers from a PRN seeded at a specific value. The "checking" function can get the seed, and know that the next million values are good. It can either generate them each time and check one by one when a code is received, or on program startup store them all in a table, sorted, and then use binary search (a maximum of about 20 compares) since one million integers is not a whole lot of space.
There are a bunch of other options, but these are common and easy to implement.
-Adam
A:
You linked to the check digits project, and using the "encode" function seems like a good solution. It says:
encode may throw an exception if 'bad' data (e.g. non-numeric) is passed to it, while verify only returns true or false. The idea here is that encode normally gets its data from 'trusted' internal sources (a database key for instance), so it should be pretty unusual, in fact, exceptional that bad data is being passed in.
So it sounds like you could pass the encode function a database key (5 digits, for instance) and you could get a number out that would meet your requirements.
| How to generate a verification code/number? | I'm working on an application where users have to make a call and type a verification number with the keypad of their phone.
I would like to be able to detect if the number they type is correct or not. The phone system does not have access to a list of valid numbers, but instead, it will validate the number against an algorithm (like a credit card number).
Here are some of the requirements :
It must be difficult to type a valid random code
It must be difficult to have a valid code if I make a typo (transposition of digits, wrong digit)
I must have a reasonable number of possible combinations (let's say 1M)
The code must be as short as possible, to avoid errors from the user
Given these requirements, how would you generate such a number?
EDIT :
@Haaked: The code has to be numerical because the user types it with their phone.
@matt b: On the first step, the code is displayed on a Web page, the second step is to call and type in the code. I don't know the user's phone number.
Followup : I've found several algorithms to check the validity of numbers (See this interesting Google Code project : checkDigits).
| [
"After some research, I think I'll go with the ISO 7064 Mod 97,10 formula. It seems pretty solid as it is used to validate IBAN (International Bank Account Number).\nThe formula is very simple:\n\nTake a number : 123456\nApply the following formula to obtain the 2 digits checksum : mod(98 - mod(number * 100, 97), 97) => 76\nConcat number and checksum to obtain the code => 12345676\nTo validate a code, verify that mod(code, 97) == 1\n\nTest :\n\nmod(12345676, 97) = 1 => GOOD\nmod(21345676, 97) = 50 => BAD !\nmod(12345678, 97) = 10 => BAD ! \n\nApparently, this algorithm catches most of the errors.\nAnother interesting option was the Verhoeff algorithm. It has only one verification digit and is more difficult to implement (compared to the simple formula above). \n",
"For 1M combinations you'll need 6 digits. To make sure that there aren't any accidentally valid codes, I suggest 9 digits with a 1/1000 chance that a random code works. I'd also suggest using another digit (10 total) to perform an integrity check. As far as distribution patterns, random will suffice and the check digit will ensure that a single error will not result in a correct code.\nEdit: Apparently I didn't fully read your request. Using a credit card number, you could perform a hash on it (MD5 or SHA1 or something similar). You then truncate at an appropriate spot (for example 9 characters) and convert to base 10. Then you add the check digit(s) and this should more or less work for your purposes.\n",
"You want to segment your code. Part of it should be a 16-bit CRC of the rest of the code. \nIf all you want is a verification number then just use a sequence number (assuming you have a single point of generation). That way you know you are not getting duplicates. \nThen you prefix the sequence with a CRC-16 of that sequence number AND some private key. You can use anything for the private key, as long as you keep it private. Make it something big, at least a GUID, but it could be the text to War and Peace from project Gutenberg. Just needs to be secret and constant. Having a private key prevents people from being able to forge a key, but using a 16 bit CR makes it easier to break.\nTo validate you just split the number into its two parts, and then take a CRC-16 of the sequence number and the private key. \nIf you want to obscure the sequential portion more, then split the CRC in two parts. Put 3 digits at the front and 2 at the back of the sequence (zero pad so the length of the CRC is consistent).\nThis method allows you to start with smaller keys too. The first 10 keys will be 6 digits.\n",
"Does it have to be only numbers? You could create a random number between 1 and 1M (I'd suggest even higher though) and then Base32 encode it. The next thing you need to do is Hash that value (using a secret salt value) and base32 encode the hash. Then append the two strings together, perhaps separated by the dash.\nThat way, you can verify the incoming code algorithmically. You just take the left side of the code, hash it using your secret salt, and compare that value to the right side of the code.\n",
"\n\nI must have a reasonnable number of possible combinations (let's say 1M)\nThe code must be as short as possible, to avoid errors from the user\n\n\nWell, if you want it to have at least one million combinations, then you need at least six digits. Is that short enough?\n",
"When you are creating the verification code, do you have access to the caller's phone number?\nIf so I would use the caller's phone number and run it through some sort of hashing function so that you can guarantee that the verification code you gave to the caller in step 1 is the same one that they are entering in step 2 (to make sure they aren't using a friend's validation code or they simply made a very lucky guess).\nAbout the hashing, I'm not sure if it's possible to take a 10 digit number and come out with a hash result that would be < 10 digits (I guess you'd have to live with a certain amount of collision) but I think this would help ensure the user is who they say they are.\nOf course this won't work if the phone number used in step 1 is different than the one they are calling from in step 2.\n",
"Assuming you already know how to detect which key the user hit, this should be doable reasonably easily. In the security world, there is the notion of a \"one time\" password. This is sometimes referred to as a \"disposable password.\" Normally these are restricted to the (easily typable) ASCII values. So, [a-zA-z0-9] and a bunch of easily typable symbols. like comma, period, semi colon, and parenthesis. In your case, though, you'd probably want to limit the range to [0-9] and possibly include * and #.\nI am unable to explain all the technical details of how these one-time codes are generated (or work) adequately. There is some intermediate math behind it, which I'd butcher without first reviewing it myself. Suffice it to say that you use an algorithm to generate a stream of one time passwords. No matter how mnay previous codes you know, the subsequent one should be impossibel to guess! In your case, you'll simply use each password on the list as the user's random code. \nRather than fail at explaining the details of the implementation myself, I'll direct you to a 9 page article where you can read up on it youself: https://www.grc.com/ppp.htm \n",
"It sounds like you have the unspoken requirement that it must be quickly determined, via algorithm, that the code is valid. This would rule out you simply handing out a list of one time pad numbers.\nThere are several ways people have done this in the past.\n\nMake a public key and private key. Encode the numbers 0-999,999 using the private key, and hand out the results. You'll need to throw in some random numbers to make the result come out to the longer version, and you'll have to convert the result from base 64 to base 10. When you get a number entered, convert it back to base64, apply the private key, and see if the intereting numbers are under 1,000,000 (discard the random numbers).\nUse a reversible hash function\nUse the first million numbers from a PRN seeded at a specific value. The \"checking\" function can get the seed, and know that the next million values are good. It can either generate them each time and check one by one when a code is received, or on program startup store them all in a table, sorted, and then use binary search (maximum of compares) since one million integers is not a whole lot of space.\n\nThere are a bunch of other options, but these are common and easy to implement.\n-Adam\n",
"You linked to the check digits project, and using the \"encode\" function seems like a good solution. It says:\n\nencode may throw an exception if 'bad' data (e.g. non-numeric) is passed to it, while verify only returns true or false. The idea here is that encode normally gets it's data from 'trusted' internal sources (a database key for instance), so it should be pretty usual, in fact, exceptional that bad data is being passed in.\n\nSo it sounds like you could pass the encode function a database key (5 digits, for instance) and you could get a number out that would meet your requirements.\n"
] | [
33,
4,
2,
1,
0,
0,
0,
0,
0
] | [] | [] | [
"algorithm",
"checksum",
"data_consistency",
"error_checking"
] | stackoverflow_0000046231_algorithm_checksum_data_consistency_error_checking.txt |
Q:
What is the best practice for estimating required time for development of the SDLC phases?
As a project manager, you are required to organize time so that the project meets a deadline.
Is there some sort of equation to use for estimating how long the development will take?
let's say the database
time = sql storedprocedures * tables manipulated or something similar
Or are you just stuck having to get the experience to get adequate estimations?
A:
As project manager you have to remember that the best you will ever be able to do on your own is give your best guess as to how long a given project will take. How accurate you are depends on your experience and the scope of the project.
The only way I know of to get a reasonably accurate estimate is to break the project into individual tasks and get the developer who will be doing the actual work to put an estimate on each task. You can then use an evidence based algorithm that takes the estimation accuracy of each developer into account to give you the probability of hitting a given deadline.
If the probability is too low, you have two choices: remove features or move the deadline.
Further reading:
http://www.joelonsoftware.com/items/2007/10/26.html
http://www.wordyard.com/2007/10/11/evidence-based-scheduling/
http://en.wikipedia.org/wiki/Monte_Carlo_method
A:
There will be such a formula as soon as computers can start generating all code themselves. Until then you are stuck with human developers who all have different levels of skill and development speed.
A:
There's no set formula out there that I've seen that would really work. Fogbugz has its monte carlo simulator which has somewhat of a concept for this, but really, experience is going to be your best point of reference. Every developer and every project will be different!
| What is the best practice for estimating required time for development of the SDLC phases? | As a project manager, you are required to organize time so that the project meets a deadline.
Is there some sort of equation to use for estimating how long the development will take?
let's say the database
time = sql storedprocedures * tables manipulated or something similar
Or are you just stuck having to get the experience to get adequate estimations?
| [
"As project manager you have to remember that the best you will ever we be able to do on your own is give your best guess as to how long a given project will take. How accurate you are. depends on your experience and the scope of the project.\nThe only way I know of to get a reasonably accurate estimate that is it to break the project into individual tasks and get the developer who will be doing the actual work to put an estimate on each task. You can then use an evidence based algorithm that takes the estimation accuracy of each developer into account to give you the probability of hitting a given deadline.\nIf the probability is too low, you have two choices: remove features or move the deadline.\nFurther reading:\n\nhttp://www.joelonsoftware.com/items/2007/10/26.html\nhttp://www.wordyard.com/2007/10/11/evidence-based-scheduling/\nhttp://en.wikipedia.org/wiki/Monte_Carlo_method\n\n",
"There will be such a formula as soon as computers can start generating all code themselves. Until then you are stuck with human developers who all have different levels of skill and development speed.\n",
"There's no set formula out there that I've seen that would really work. Fogbugz has its monte carlo simulator which has somewhat of a concept for this, but really, experience is going to be your best point of reference. Every developer and every project will be different!\n"
] | [
2,
0,
0
] | [] | [] | [
"project_management",
"time_management"
] | stackoverflow_0000044247_project_management_time_management.txt |
Q:
Best way to set the permissions for a specific user on a specific folder on a remote machine?
We have a deployment system at my office where we can automatically deploy a given build of our code to a specified dev environment (dev01, dev02, etc.).
These dev environments are generalized virtual machines, so our system has to configure them automatically. We have a new system requirement with our next version; we need to give certain user accounts read/write access to certain folders (specifically, giving the ASPNET user read/write to a logging folder).
I'm pretty sure we could do this with WMI or scripts (we use Sysinternals PSTools in a few places for deployment), but I'm not sure what is the best way to do it. The deployment system is written in C# 2.0, the dev environment is a VM with Windows XP. The VM is on the same domain as the deployment system and I have administrator access.
Edit: There's not really a right answer for this, so I'm hesitant to mark an answer as accepted.
A:
Another option would be to investigate using a PowerShell script. There are a lot of PowerShell community snap-ins to support VMs and Active Directory.
Active Directory Script Resources
Powershell Script Library
Microsoft Script Resources
VMWARE VI Toolkit (for Windows)
A:
If you can run scripts, it might be as simple as running the CACLS command on the VM. Perhaps just have your deployment script read in a config and run the appropriate CACLS commands.
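For example, granting the local ASPNET account read/write on a log folder might look like this when run through PsExec (the machine name and path are placeholders):

psexec \\dev01 cacls "D:\Logs" /E /G ASPNET:C

Here /E edits the existing ACL instead of replacing it, and :C grants Change (read/write) access.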
| Best way to set the permissions for a specific user on a specific folder on a remote machine? | We have a deployment system at my office where we can automatically deploy a given build of our code to a specified dev environment (dev01, dev02, etc.).
These dev environments are generalized virtual machines, so our system has to configure them automatically. We have a new system requirement with our next version; we need to give certain user accounts read/write access to certain folders (specifically, giving the ASPNET user read/write to a logging folder).
I'm pretty sure we could do this with WMI or scripts (we use Sysinternals PSTools in a few places for deployment), but I'm not sure what is the best way to do it. The deployment system is written in C# 2.0, the dev environment is a VM with Windows XP. The VM is on the same domain as the deployment system and I have administrator access.
Edit: There's not really a right answer for this, so I'm hesitant to mark an answer as accepted.
| [
"Another option would be to investigate using a Powershell script There are a lot powershell community snap ins to support VMs and active directory.\nActive Directory Script Rescources\nPowershell Script Library\nMicrosoft Script Resources\nVMWARE VI Toolkit (for Windows)\n",
"If you can run scripts, it might be as simple as runing the CACLS command on the VM. Perhaps just have your deployment script read in a config and run the appropriate CACLs commands.\n"
] | [
1,
0
] | [] | [] | [
".net",
"c#",
"deployment"
] | stackoverflow_0000046173_.net_c#_deployment.txt |
Q:
Mapping collections with LINQ
I have a collection of objects to which I'd like to just add a new property. How do I do that with LINQ?
A:
var a = from i in ObjectCollection select new {i.prop1, i.prop2, i.prop3, ..., newprop = newProperty}
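Spelled out with some invented sample data, that projection looks like:

var people = new[] {
    new { Name = "Ann", Age = 17 },
    new { Name = "Bob", Age = 34 }
};
// Copy the existing properties across and tack on the new one:
var withFlag = from p in people
               select new { p.Name, p.Age, IsAdult = p.Age >= 18 };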
A:
I don't think that you can do this using pure LINQ. However, if you're doing this sort of thing a lot in your code you may be able to make this work with reflection.
A:
Why do you want to add the extra property? Or, put a different way, what do you intend to do with the property once you have it in your new IEnumerable source?
If you need it for data binding, I have a helper class that might help you.
| Mapping collections with LINQ | I have a collection of objects to which I'd like to just add a new property. How do I do that with LINQ?
| [
"var a = from i in ObjectCollection select new {i.prop1, i.prop2, i.prop3, ..., newprop = newProperty}\n\n",
"I don't think that you can using pure LINQ. However, if you're doing this sort of thing a lot in your code you may be able to make this work with reflection.\n",
"Why do you want to add the extra property? Or, put a different way, what do you intend to do with the property once you have it in your new IEnumerable source?\nIf you need it for data binding, I have a helper class that might help you.\n"
] | [
2,
0,
0
] | [] | [] | [
"functional_programming",
"linq"
] | stackoverflow_0000046189_functional_programming_linq.txt |
Q:
Automatic Timeout Web Client Use
One of the problems I have come across having complex tasks on the browser is with automatic timeouts.
Currently our site has a sliding expiration of 30 minutes. Normally, this isn't a problem because we use asp.net and most of the time the users update one or two fields and then submit the form. This obviously keeps the session active. I have struggled with how to handle this with significantly more complex forms/user interaction where there are few if no round trips to the server.
Now users can view reports, type emails etc. on our site, and I am struggling with how to keep their session active while they're actually using browser but they are not actively posting information back. I would dispense with the sliding expiration, but it's a requirement, so I am stuck with it. Most of our users are outside of the organization, so I can't use any form of SSO.
Does anyone have an elegant solution (I'll even go for an ugly one if it works), or am I out of luck?
A:
Have the browser ping the server via Ajax periodically to keep the session alive. Most sites do something useful in this ping, like save a draft of the user's submission.
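A minimal keep-alive along those lines; the endpoint name is hypothetical, and the interval just needs to sit comfortably inside the 30-minute window:

setInterval(function () {
    var xhr = window.XMLHttpRequest
        ? new XMLHttpRequest()
        : new ActiveXObject("Microsoft.XMLHTTP"); // older IE fallback
    xhr.open("GET", "/KeepAlive.aspx", true);
    xhr.send(null);                               // fire-and-forget session ping
}, 10 * 60 * 1000);                               // every 10 minutes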
A:
We recently went through this in my organization. Although it is not the best solution, and hitting the right session across multiple browser windows is rough, we put a countdown timer on our page, included a button that just went back and hit the server to restart the session, and also provided the user with a JavaScript popup (the user's favorite part of the solution) with a message saying that the session was, say, five minutes from timing out and to hit the "OK" button to restart. Then the button would hit the server to restart the session, restart the timer on the base page, close the popup and the base page didn't need to be refreshed at all.
A:
erickson is on the right track.
On the areas of the site that are prone to session-timeout due to "complex forms/user interaction where there are few if any round trips to the server", you can place a keepalive control to keep pinging the server, thus keeping the session alive.
Here is a sample control that you can use, or use as a basis for coding your own.
A:
Ah, the age old problem of not wanting to increase the session time because of higher memory usage.
One answer is to also set a cookie that expires after more like a day that will tell the system to still remember the user. That's what eBay does, among others.
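As a hedged sketch of that idea in ASP.NET (the cookie name, lifetime, and token handling are illustrative; the token would still need server-side validation):
// A sketch of issuing a longer-lived "remember me" cookie alongside the
// normal session. The "RememberMe" name and one-day lifetime are just
// illustrative choices.
using System;
using System.Web;

public static class RememberMeCookie
{
    public static void Issue(HttpResponse response, string userToken)
    {
        HttpCookie cookie = new HttpCookie("RememberMe", userToken);
        cookie.Expires = DateTime.Now.AddDays(1); // outlives the 30-minute session
        cookie.HttpOnly = true;                   // keep it away from script
        response.Cookies.Add(cookie);
    }
}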
| Automatic Timeout Web Client Use | One of the problems I have come across having complex tasks on the browser is with automatic timeouts.
Currently our site has a sliding expiration of 30 minutes. Normally, this isn't a problem because we use asp.net and most of the time the users update one or two fields and then submit the form. This obviously keeps the session active. I have struggled with how to handle this with significantly more complex forms/user interaction where there are few if any round trips to the server.
Now users can view reports, type emails etc. on our site, and I am struggling with how to keep their session active while they're actually using the browser but not actively posting information back. I would dispense with the sliding expiration, but it's a requirement, so I am stuck with it. Most of our users are outside of the organization, so I can't use any form of SSO.
Does anyone have an elegant solution (I'll even go for an ugly one if it works), or am I out of luck?
| [
"Have the browser ping the server via Ajax periodically to keep the session alive. Most sites do something useful in this ping, like save a draft of the user's submission.\n",
"We recently went through this in my organization. Although it is not the best solution, and hitting the right session across multiple browser windows is rough, we put a countdown timer on our page, included a button that just went back and hit the server to restart the session, and also provided the user with a JavaScript popup (the user's favorite part of the solution) with a message saying that the session was, say, five minutes from timing out and to hit the \"OK\" button to restart. Then the button would hit the server to restart the session, restart the timer on the base page, close the popup and the base page didn't need to be refreshed at all. \n",
"erickson is on the the right track.\nOn the areas of the site that are prone to session-timeout due to \"complex forms/user interaction where there are few if no round trips to the server\", you can place a keepalive control to keep pinging the server, thus keeping the session alive.\nHere is a sample control that you can use, or use as a basis for coding your own.\n",
"Ah, the age old problem of not wanting to increase the session time because of higher memory usage. \nOne answer is to also set a cookie that expires after more like a day that will tell the system to still remember the user. That's what eBay does, among others.\n"
] | [
4,
2,
1,
0
] | [] | [] | [
"asp.net",
"c#"
] | stackoverflow_0000046394_asp.net_c#.txt |
Q:
How to get the correct Content-Length for a POST request
I am using a perl script to POST to Google Appengine application. I post a text file containing some XML using the -F option.
http://www.cpan.org/authors/id/E/EL/ELIJAH/bget-1.1
There is a version 1.2, which I have already tested, and I get the same issue. The post looks something like this.
Host: foo.appspot.com
User-Agent: lwp-request/1.38
Content-Type: text/plain
Content-Length: 202
<XML>
<BLAH>Hello World</BLAH>
</XML>
I have modified the example so the 202 isn't right, don't worry about that. On to the problem. The Content-Length matches the number of bytes in the file; however, unless I manually increase the Content-Length, it does not send all of the file: a few bytes get truncated. The number of bytes truncated is not the same for files of different sizes. I used the -r option on the script and I can see what it is sending, and it is sending all of the file, but Google Appengine self.request.body shows that not everything is received. I think the solution is to get the right number for Content-Length, and apparently it isn't as simple as the number of bytes in the file, or the perl script is mangling it somehow.
Update:
Thanks to Erickson for the right answer. I used printf to append characters to the end of the file and it always truncated exactly the number of lines in the file. I suppose I could figure out what is being added by iterating through every character on the server side but not worth it. This wasn't even answered over on the google groups set up for app engine!
A:
Is the number of extra bytes you need equal to the number of lines in the file? I ask because perhaps its possible that somehow carriage-returns are being introduced but not counted.
A:
I've run into similar problems before.
I assume you're using the length() function to determine the size of the file? If so, it's likely the data that you're posting is UTF-8 encoded, instead of ASCII.
To get the correct count you may need to add a "use bytes;" pragma to the top of your script, or wrap the length call in a block:
my $size;
do { use bytes; $size = length($file_data); };
From the perlfunc man page:
"Note the characters: if the EXPR is in Unicode, you will get the number of characters, not the number of bytes."
A:
How are you getting the number of bytes? .. By looking at the size of the file on the filesystem?
You can use "-s" to get the size of the file.
Or, if you want to do more, you may use File::Stat
| How to get the correct Content-Length for a POST request | I am using a perl script to POST to Google Appengine application. I post a text file containing some XML using the -F option.
http://www.cpan.org/authors/id/E/EL/ELIJAH/bget-1.1
There is a version 1.2, which I have already tested, and I get the same issue. The post looks something like this.
Host: foo.appspot.com
User-Agent: lwp-request/1.38
Content-Type: text/plain
Content-Length: 202
<XML>
<BLAH>Hello World</BLAH>
</XML>
I have modified the example so the 202 isn't right, don't worry about that. On to the problem. The Content-Length matches the number of bytes in the file; however, unless I manually increase the Content-Length, it does not send all of the file: a few bytes get truncated. The number of bytes truncated is not the same for files of different sizes. I used the -r option on the script and I can see what it is sending, and it is sending all of the file, but Google Appengine self.request.body shows that not everything is received. I think the solution is to get the right number for Content-Length, and apparently it isn't as simple as the number of bytes in the file, or the perl script is mangling it somehow.
Update:
Thanks to Erickson for the right answer. I used printf to append characters to the end of the file and it always truncated exactly the number of lines in the file. I suppose I could figure out what is being added by iterating through every character on the server side but not worth it. This wasn't even answered over on the google groups set up for app engine!
| [
"Is the number of extra bytes you need equal to the number of lines in the file? I ask because perhaps its possible that somehow carriage-returns are being introduced but not counted.\n",
"I've run into similar problems before.\nI assume you're using the length() function to determine the size of the file? If so, it;s likely the data that you're posting is UTF-8 encoded, instead of ASCII.\nTo get the correct count you may need to add a \"use bytes;\" pragma to the top of your script, or wrap the length call in a block:\nmy $size;\ndo {use bytes; $size = length($file_data)}\n\nFrom the perlfunc man page:\n\"Note the characters: if the EXPR is in Unicode, you will get the number of characters, not the number of bytes.\"\n",
"How are you getting the number of bytes? .. By looking at the size of the file on the filesystem?\nYou can use \"-s\" to get the size of the file.\nOr, if you want to do more, you may use File::Stat\n"
] | [
3,
1,
0
] | [] | [] | [
"google_app_engine",
"https",
"perl"
] | stackoverflow_0000046387_google_app_engine_https_perl.txt |
Q:
How to encrypt email addresses using JQuery
Is there a way to use JQuery to cloak or encrypt email addresses on an HTML page without changing the syntax in the href?
A:
Well, just as a comment, you probably want the source to have a cloaked email address and then use jQuery to fix or construct the link to have the correct address... because bots will be looking at the source, not the results of running your javascript ;-)
A:
Using JQuery may not be the route you want to take since this would be on the client side... Is there a reason you're not encrypting on server side?
A:
Semantic nazis would say "encoding", not "encrypting". Encrypting implies a secret is required to decode. Converting to HTML entity syntax would be a decent encoding process to keep out prying humans, but bots could easily decode it.
A:
To kind of piggy-back on what Mike Stone was suggesting, what I would do is encrypt it on the server-side and have something on the server-side that will decrypt it and return it back as JSON (jsonresult in mvc framework, web service, http handler, whatever). That way you could use jQuery to de-obfuscate the e-mail addresses when you wanted, but it would still confuse any bot that doesn't support JavaScript. Again, this is not a bulletproof solution but it may do what you're looking for.
A:
What I've done is obfuscate it when it's rendered and hide it, then use javascript to fix the obfuscation and show the link.
For example, you can render this from the server:
<a href="mailto:some_address^^some_domain$$com" style='display:none'>Email me</a>
then using Javascript you can use regex to swap ^^ for @ and $$ for .
Whatever scheme you can come up with will probably be fine. Of course if the bot understands javascript then it doesn't matter anyway.
You'll block 95% of the bots that come your way and the rest of your users will see the address just fine.
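For illustration, the server half of that approach could be a small helper in C# (the ^^ and $$ tokens mirror the example above; the class and method names are invented):
// A sketch of producing the obfuscated href on the server, so the raw
// mailto: address never appears in the page source.
public static class EmailObfuscator
{
    public static string Obfuscate(string address)
    {
        // "some_address@some_domain.com" -> "some_address^^some_domain$$com"
        return address.Replace("@", "^^").Replace(".", "$$");
    }
}

The client-side script then reverses the two swaps and removes the display:none before showing the link.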
| How to encrypt email addresses using JQuery | Is there a way to use JQuery to cloak or encrypt email addresses on an HTML page without changing the syntax in the href?
| [
"Well, just as a comment, you probably want the source to have a cloaked email address and then use jQuery to fix or construct the link to have the correct address... because bots will be looking at the source, not the results of running your javascript ;-)\n",
"Using JQuery may not be the route you want to take since this would be on the client side... Is there a reason you're not encrypting on server side?\n",
"Semantic nazis would say \"encoding\", not \"encrypting\". Encrypting implies a secret is required to decode. Converting to HTML entity syntax would be a decent encoding process to keep out prying humans, but bots could easily decode it.\n",
"To kind of piggy-back on what Mike Stone was suggesting, what I would do is encrypt it on the server-side and have something on the server-side that will decrypt it and return it back as JSON (jsonresult in mvc framework, web service, http handler, whatever). That way you could use jQuery to de-obfuscate the e-mail addresses when you wanted but it would still confuse any bot that doesn't support java script. Again this is not a bullet proof solution but it may do what you're looking for.\n",
"What I've done is obfuscate it when it's rendered and hide it, then use javascript to fix the obfuscation and show the link.\nFor example, you can render this from the server:\n<a href=\"mailto:some_address^^some_domain$$com\" style='display:none'>Email me</a>\n\nthen using Javascript you can use regex to swap ^^ for @ and $$ for .\nWhatever scheme you can come up with will probably be fine. Of course if the bot understands javascript then it doesn't matter anyway.\nYou'll block 95% of the bots that come your way and the rest of your users will see the address just fine.\n"
] | [
10,
6,
0,
0,
0
] | [] | [] | [
"email",
"jquery"
] | stackoverflow_0000046484_email_jquery.txt |
Q:
Store more than 3GB of video-frames in memory, on 32-bit OS
At work we have an application to play 2K (2048*1556px) OpenEXR film sequences. It works well, apart from when sequences are over 3GB (quite common); then it has to unload old frames from memory, despite the fact all machines have 8-16GB of memory (which is addressable via the linux BIGMEM stuff).
The frames have to be cached into memory to play back in realtime. The OS is a several-year old 32-bit Fedora Distro (not possible to upgrade to 64-bit for the foreseeable future). The per-process limitation is 3GB per process.
Basically, is it possible to cache more than 3GB of data in memory, somehow? My initial idea was to spread the data between multiple processes, but I've no idea if this is possible..
A:
One possibility may be to use mmap. You would map/unmap different parts of your data into the same virtual memory region. You could only have one set mapped at a time, but as long as there was enough physical memory, the data should stay resident.
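The question is Linux-specific, but purely as an illustration of the same map-a-window-at-a-time idea, here is a sketch using .NET's memory-mapped file API (the file name and window size are invented, and this particular API only appeared in .NET 4):
// A sketch of the mmap-style windowing idea: map one window of a huge
// file at a time instead of holding it all in the address space.
using System.IO;
using System.IO.MemoryMappedFiles;

class FrameWindow
{
    const long WindowSize = 512L * 1024 * 1024; // 512MB window, illustrative

    static void Main()
    {
        using (var mmf = MemoryMappedFile.CreateFromFile("frames.bin", FileMode.Open))
        {
            long offset = 0; // advance this as playback moves through the file
            using (var view = mmf.CreateViewAccessor(offset, WindowSize))
            {
                byte first = view.ReadByte(0); // positions are relative to the window
            }
            // Disposing the view unmaps the window; map the next one as needed.
        }
    }
}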
A:
How about creating a RAM drive and loading the file into that ... assuming the RAM drive supports the BIGMEM stuff for you.
You could use multiple processes: each process loads a view of the file as a shared memory segment, and the player process then maps the segments in turn as needed.
A:
I assume you can modify the application. If so, the easiest thing would be to start the application several times (once for each 3GB chunk of video), have each one hold a chunk of video, and use another program to synchronize them so they each take control of the framebuffer (or other video output) in turn.
The synchronization is going to be a little messy, perhaps, but it can be simplified if each app has its own framebuffer and the sync program points the video controller to the correct framebuffer inbetween frames when switching to the next app.
A:
My, what an interesting problem :)
(EDIT: Oh, I just read Rob's ram drive post...I got all excited by the problem...but have a bit more to suggest, so I won't delete)
Would it be possible to...
setup a multi-gigabyte ram disk, and then
modify the program to do all it's reading from the "disk"?
I'd guess the ram disk part is where all the problem would be, since the size of the ram disk would be OS and file system dependent. You might have to create multiple ram disks and have your code jump between them. Or maybe you could setup a RAID-0 stripe set over multiple ram disks. Or, if there are still OS limitations and you can afford to drop a couple grand (4k?), setup a hardware RAID-0 strip set with some of those new blazing fast solid state drives. Or...
Fun, fun, fun.
Be sure to follow up!
A:
@dbr said:
There is a review machine with an absurd fiber-channel-RAID-array that can play 2K files direct from the array easily. The issue is with the artist-workstations, so it wouldn't be one $4000 RAID array, it'd be hundreds..
Well, if you can accept a limit of ~30GB, then maybe a single 36GB SSD drive would be enough? Those go for ~US$1k each I think, and the data rates might be enough. That very well maybe cheaper than a pure RAM approach. There are smaller sizes available, too. If ~60GB is enough you could probably get away with a JBOD array of 2 for double the cost, and skip the RAID controller. Be sure only to look at the higher end SSD options--the low end is filled with glorified memory sticks. :P
| Store more than 3GB of video-frames in memory, on 32-bit OS | At work we have an application to play 2K (2048*1556px) OpenEXR film sequences. It works well.. apart from when sequences that are over 3GB (quite common), then it has to unload old frames from memory, despite the fact all machines have 8-16GB of memory (which is addressable via the linux BIGMEM stuff).
The frames have to he cached into memory to play back in realtime. The OS is a several-year old 32-bit Fedora Distro (not possible to upgradable to 64bit, for the foreseeable future). The per-process limitation is 3GB per process.
Basically, is it possible to cache more than 3GB of data in memory, somehow? My initial idea was to spread the data between multiple processes, but I've no idea if this is possible..
| [
"One possibility may be to use mmap. You would map/unmap different parts of your data into the same virtual memory region. You could only have one set mapped at a time, but as long as there was enough physical memory, the data should stay resident.\n",
"How about creating a RAM drive and loading the file into that ... assuming the RAM drive supports the BIGMEM stuff for you.\nYou could use multiple processes: each process loads a view of the file as a shared memory segment, and the player process then maps the segments in turn as needed.\n",
"I assume you can modify the application. If so, the easiest thing would be to start the application several times (once for each 3GB chunk of video), have each one hold a chunk of video, and use another program to synchronize them so they each take control of the framebuffer (or other video output) in turn. \nThe synchronization is going to be a little messy, perhaps, but it can be simplified if each app has its own framebuffer and the sync program points the video controller to the correct framebuffer inbetween frames when switching to the next app.\n",
"My, what an interesting problem :)\n(EDIT: Oh, I just read Rob's ram drive post...I got all excited by the problem...but have a bit more to suggest, so I won't delete)\nWould it be possible to... \n\nsetup a multi-gigabyte ram disk, and then\nmodify the program to do all it's reading from the \"disk\"?\n\nI'd guess the ram disk part is where all the problem would be, since the size of the ram disk would be OS and file system dependent. You might have to create multiple ram disks and have your code jump between them. Or maybe you could setup a RAID-0 stripe set over multiple ram disks. Or, if there are still OS limitations and you can afford to drop a couple grand (4k?), setup a hardware RAID-0 strip set with some of those new blazing fast solid state drives. Or...\nFun, fun, fun.\nBe sure to follow up!\n",
"@dbr said:\n\nThere is a review machine with an absurd fiber-channel-RAID-array that can play 2K files direct from the array easily. The issue is with the artist-workstations, so it wouldn't be one $4000 RAID array, it'd be hundreds..\n\nWell, if you can accept a limit of ~30GB, then maybe a single 36GB SSD drive would be enough? Those go for ~US$1k each I think, and the data rates might be enough. That very well maybe cheaper than a pure RAM approach. There are smaller sizes available, too. If ~60GB is enough you could probably get away with a JBOD array of 2 for double the cost, and skip the RAID controller. Be sure only to look at the higher end SSD options--the low end is filled with glorified memory sticks. :P\n"
] | [
3,
2,
1,
1,
0
] | [] | [] | [
"32_bit",
"linux",
"memory"
] | stackoverflow_0000041643_32_bit_linux_memory.txt |
Q:
OpenID authentication in ASP.NET?
I am starting to build a new web application that will require user accounts. Now that I have an OpenID that I am using for this site I thought it would be cool if I could use OpenID for authentication in my application. Are there any good tutorials on how to integrate OpenID with an ASP.NET site?
A:
See Scott Hanselman's post on using DotNetOpenID in ASP.NET. Andrew Arnott's blog is full of samples on using DotNetOpenID with ASP.NET, including ASP.NET MVC.
I recently hooked up DotNetOpenID for the Subtext 2.0 release. It went really smoothly - the code samples included with the DotNetOpenID download are pretty helpful. The one thing I'd recommend is that you just use the library and avoid the ASP.NET control. It uses table based layout (hardcoded) and is pretty difficult to restyle.
A:
DotNetOpenId available at http://code.google.com/p/dotnetopenid
A:
Are there any good tutorials on how to integrate OpenId with an ASP.NET site?
Andrew Arnott's post titled "How to add OpenID to your ASP.NET web site (in C# or VB.NET)"
A:
I'm considering the same thing. On the Open ID site, there's a link 'For Developers' @ http://openid.net/developers/ and from there is a link to 'Open Libraries' @ http://wiki.openid.net/Libraries and finally from there is one called 'DotNetOpenID' @ http://dotnetopenid.googlecode.com/ which is probably what you're looking for.
Good luck.
A:
DotNetNuke may not be a good current example. When we did the integration, DotNetOpenID was not currently supporting OpenID 2.0 spec. I hacked together a fork to get the 2.0 support and have not had a chance to rip it back out for the official DotNetOpenID 2.0 release.
A:
You should check out the DotNetNuke codebase as well, they have been using OpenID for the last several revisions, and you'll find working code for implementing it there.
| OpenID authentication in ASP.NET? | I am starting to build a new web application that will require user accounts. Now that I have an OpenID that I am using for this site I thought it would be cool if I could use OpenID for authentication in my application. Are there any good tutorials on how to integrate OpenID with an ASP.NET site?
| [
"See Scott Hanselman's post on using DotNetOpenID in ASP.NET. Andrew Arnott's blog is full of samples on using DotNetOpenID with ASP.NET, including ASP.NET MVC.\nI recently hooked up DotNetOpenID for the Subtext 2.0 release. It went really smoothly - the code samples included with the DotNetOpenID download are pretty helpful. The one thing I'd recommend is that you just use the library and avoid the ASP.NET control. It uses table based layout (hardcoded) and is pretty difficult to restyle.\n",
"DotNetOpenId available at http://code.google.com/p/dotnetopenid\n",
"\nAre there any good tutorials on how to integrate OpenId with an ASP.NET site?\n\nAndrew Arnott's post titled \"How to add OpenID to your ASP.NET web site (in C# or VB.NET)\"\n",
"I'm considering the same thing. On the Open ID site, there's a link 'For Developers' @ http://openid.net/developers/ and from there is a link to 'Open Libraries' @ http://wiki.openid.net/Libraries and finally from there is one called 'DotNetOpenID' @ http://dotnetopenid.googlecode.com/ which is probably what you're looking for.\nGood luck.\n",
"DotNetNuke may not be a good current example. When we did the integration, DotNetOpenID was not currently supporting OpenID 2.0 spec. I hacked together a fork to get the 2.0 support and have not had a chance to rip it back out for the official DotNetOpenID 2.0 release.\n",
"You should check out the DotNetNuke codebase as well, they have been using OpenID for the last several revisions, and you'll find working code for implementing it there.\n"
] | [
25,
8,
6,
4,
2,
1
] | [] | [] | [
"asp.net",
"dotnetopenauth",
"openid"
] | stackoverflow_0000016716_asp.net_dotnetopenauth_openid.txt |
Q:
ViewState and changing control order
This has been a fun week (if you look back at my questions you'll see a common theme).
I have a repeater that is bound to a collection. Each repeater item contains a dynamic control that corresponds to the collection item; it also renders out a header over each control that contains a Delete link.
When the delete link is clicked, the appropriate item is removed from the collection, and the repeater is rebound.
The problem I am encountering is that once I alter the repeater items, the state on some of the usercontrols is lost. Its always the same controls, regardless of where they are in the collection.
I'm wondering if changing the bound collection is a no-no, and it may confuse viewstate from properly restoring the values.
Can anyone clarify? How else can I do this?
A:
Ok, answered my own question.
The answer is, don't... it's a nightmare.
Instead, I added a softDelete flag, and instead of removing the item from the collection, I just set this flag. Then, the repeater does not render items that are marked for deletion.
When the collection is saved, it discards the items marked for deletion, and saves...
Everything is fixed, if not in an odd way.
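A minimal sketch of that soft-delete pattern, with invented class and property names:
// A sketch of the soft-delete approach: flag items instead of removing
// them, bind only the live ones, and drop the flagged ones on save.
using System.Collections.Generic;
using System.Linq;

class Item
{
    public string Name { get; set; }
    public bool IsDeleted { get; set; } // soft-delete flag
}

class ItemCollection
{
    private List<Item> items = new List<Item>();

    // Bind the repeater to this, so flagged items simply never render.
    public IEnumerable<Item> Visible
    {
        get { return items.Where(i => !i.IsDeleted); }
    }

    public void SoftDelete(Item item)
    {
        item.IsDeleted = true; // no structural change, so ViewState stays aligned
    }

    public void Save()
    {
        items.RemoveAll(i => i.IsDeleted); // discard flagged items on save
        // ...persist the survivors...
    }
}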
| ViewState and changing control order | This has been a fun week (if you look back at my questions you'll see a common theme).
I have a repeater that is bound to a collection. Each repeater item contains a dynamic control that corresponds to the collection item; it also renders out a header over each control that contains a Delete link.
When the delete link is clicked, the appropriate item is removed from the collection, and the repeater is rebound.
The problem I am encountering is that once I alter the repeater items, the state on some of the usercontrols is lost. Its always the same controls, regardless of where they are in the collection.
I'm wondering if changing the bound collection is a no-no, and it may confuse viewstate from properly restoring the values.
Can anyone clarify? How else can I do this?
| [
"Ok, answered my own question.\nThe answer is, don't...its a nightmare.\nInstead, I added a softDelete flag, and instead of removing the item from the collection, I just set this flag. Then, the repeater does not render items are marked for deletion.\nWhen the collection is saved, it discards the items marked for deletion, and saves...\nEverything is fixed, if not in an odd way.\n"
] | [
3
] | [] | [] | [
"asp.net",
"viewstate"
] | stackoverflow_0000046561_asp.net_viewstate.txt |
Q:
When should one use a project reference opposed to a binary reference?
My company has a common code library which consists of many class library projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references.
Some arguments in favor of project references:
Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.
Project references would assist in keeping up with common component changes committed to the source control system as changes would be easily identifiable without the active solution.
Some arguments in favor of binary references:
Binary references would simplify solutions and make for faster solution loading times.
Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.
Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.
Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone.
Binary references will ensure that concurrent development on the class library project will have no impact on the consuming application as a stable version of the binary will be referenced rather than an in-flux version. It would be the decision of the project lead whether or not to incorporate a newer release of the component if necessary.
What is your policy/preference when it comes to using project or binary references?
A:
It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.
However, one thing we've looked into is to reference the binary files, to gain all the advantages you note, but have the binaries built by a common build system where the source code is in a common location, accessible from all developer machines (at least if they're sitting on the network at work), so that any debugging can in fact dive into library code, if necessary.
However, on the same note, we've also tagged a lot of the base classes with appropriate attributes in order to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would only be vastly outsized by code from the base libraries. This way when you hit the Step Into debugging shortcut key on a library class, you resurface into the next piece of code at your current level, instead of having to wade through tons of library code.
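The attributes alluded to are presumably the System.Diagnostics debugger attributes; a sketch of tagging library code this way:
// A sketch of tagging stable library code so the debugger skips it.
// [DebuggerStepThrough] makes Step Into pass straight through a member;
// [DebuggerNonUserCode] hides the type from "Just My Code" debugging.
using System.Diagnostics;

[DebuggerNonUserCode]
public class BaseSerializer
{
    [DebuggerStepThrough]
    public string Serialize(object value)
    {
        // Proven, stable code the team does not want to step into.
        return value.ToString();
    }
}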
Basically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.
Also, if I load the global solution file, that contains all the projects and basically, just everything, ReSharper 4 seems to have some kind of coronary problem, as Visual Studio practically comes to a stand-still.
A:
In my opinion the greatest problem with using project references is that it does not provide consumers with a common baseline for their development. I am assuming that the libraries are changing. If that's the case, building them and ensuring that they are versioned will give you an easily reproducible environment.
Not doing this will mean that your code will mysteriously break when the referenced project changes. But only on some machines.
A:
I tend to treat common libraries like this as 3rd-party resources. This allows the library to have its own build processes, QA testing, etc. When QA (or whomever) "blesses" a release of the library, it's copied to a central location available to all developers. It's then up to each project to decide which version of the library to consume by copying the binaries to a project folder and using binary references in the projects.
One thing that is important is to create debug symbol (pdb) files with each build of the library and make those available as well. The other option is to actually create a local symbol store on your network and have each developer add that symbol store to their VS configuration. This would allow you to debug through the code and still have the benefits of using binary references.
As for the benefits you mention for project references, I don't agree with your second point. To me, it's important that the consuming projects explicitly know which version of the common library they are consuming and for them to take a deliberate step to upgrade that version. This is the best way to guarantee that you don't accidentally pick up changes to the library that haven't been completed or tested.
A:
When you don't want it in your solution, or have the potential to split your solution, send all library output to a common bin directory and reference it there.
I have done this in order to allow developers to open a tight solution that only has the Domain, tests and Web projects. Our win services, silverlight stuff, and web control libraries are in separate solutions that include the projects you need when looking at those, but nant can build it all.
A:
I believe your question is actually about when projects go together in the same solution; the reason being that projects in the same solution should have project references to each other, and projects in different solutions should have binary references to each other.
I tend to think solutions should contain projects that are developed closely together. Such as your API assemblies and your implementations of those APIs.
Closeness is relative, however. A designer for an application, by definition, is closely related to the app, however you wouldn't want to have the designer and the application within the same solution (if they are at all complex, that is). You'd probably want to develop the designer against a branch of the program that is merged at intervals further spaced apart than the normal daily integration.
A:
I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion
I separate it by concept in short
| When should one use a project reference opposed to a binary reference? | My company has a common code library which consists of many class library projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references.
Some arguments in favor of project references:
Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.
Project references would assist in keeping up with common component changes committed to the source control system as changes would be easily identifiable without the active solution.
Some arguments in favor of binary references:
Binary references would simplify solutions and make for faster solution loading times.
Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.
Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.
Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone.
Binary references will ensure that concurrent development on the class library project will have no impact on the consuming application as a stable version of the binary will be referenced rather than an in-flux version. It would be the decision of the project lead whether or not to incorporate a newer release of the component if necessary.
What is your policy/preference when it comes to using project or binary references?
| [
"It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.\nHowever, one thing we've looked into is to reference the binary files, to gain all the advantages you note, but have the binaries built by a common build system where the source code is in a common location, accessible from all developer machines (at least if they're sitting on the network at work), so that any debugging can in fact dive into library code, if necessary.\nHowever, on the same note, we've also tagged a lot of the base classes with appropriate attributes in order to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would only be vastly outsized by code from the base libraries. This way when you hit the Step Into debugging shortcut key on a library class, you resurface into the next piece of code at your current level, instead of having to wade through tons of library code.\nBasically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.\nAlso, if I load the global solution file, that contains all the projects and basically, just everything, ReSharper 4 seems to have some kind of coronary problem, as Visual Studio practically comes to a stand-still.\n",
"In my opinion the greatest problem with using project references is that it does not provide consumers with a common baseline for their development. I am assuming that the libraries are changing. If that's the case, building them and ensuring that they are versioned will give you an easily reproducible environment. \nNot doing this will mean that your code will mysteriously break when the referenced project changes. But only on some machines. \n",
"I tend to treat common libraries like this as 3rd-party resources. This allows the library to have it's own build processes, QA testing, etc. When QA (or whomever) \"blesses\" a release of the library, it's copied to a central location available to all developers. It's then up to each project to decide which version of the library to consume by copying the binaries to a project folder and using binary references in the projects.\nOne thing that is important is to create debug symbol (pdb) files with each build of the library and make those available as well. The other option is to actually create a local symbol store on your network and have each developer add that symbol store to their VS configuration. This would allow you to debug through the code and still have the benefits of usinng binary references.\nAs for the benefits you mention for project references, I don't agree with your second point. To me, it's important that the consuming projects explicitly know which version of the common library they are consuming and for them to take a deliberate step to upgrade that version. This is the best way to guarantee that you don't accidentally pick up changes to the library that haven't been completed or tested.\n",
"when you don't want it in your solution, or have potential to split your solution, send all library output to a common, bin directory and reference there.\nI have done this in order to allow developers to open a tight solution that only has the Domain, tests and Web projects. Our win services, and silverlight stuff, and web control libraries are in seperate solutions that include the projects you need when looking at those, but nant can build it all.\n",
"I believe your question is actually about when projects go together in the same solution; the reason being that projects in the same solution should have project references to each other, and projects in different solutions should have binary references to each other.\nI tend to think solutions should contain projects that are developed closely together. Such as your API assemblies and your implementations of those APIs. \nCloseness is relative, however. A designer for an application, by definition, is closely related to the app, however you wouldn't want to have the designer and the application within the same solution (if they are at all complex, that is). You'd probably want to develop the designer against a branch of the program that is merged at intervals further spaced apart than the normal daily integration. \n",
"I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion\nI separate it by concept in short\n"
] | [
5,
2,
2,
1,
1,
0
] | [] | [] | [
"coding_style",
"standards"
] | stackoverflow_0000046584_coding_style_standards.txt |
Q:
Best way to make a printer-friendly ASP.NET page?
I'm just curious how most people make their ASP.NET pages printer-friendly? Do you create a separate printer-friendly version of the ASPX page, use CSS or something else? How do you handle situations like page breaks and wide tables?
Is there one elegant solution that works for the majority of the cases?
A:
You basically make another CSS file that hides things or gives a simpler "printer-friendly" style to things, then add that with media="print" so that it only applies to print media (when it is printed):
<link rel="stylesheet" type="text/css" media="print" href="print.css" />
A:
Our gracious host wrote a good blog post on this topic:
Coding Horror: Stylesheets for Print and Handheld
A:
I am a php user, but the point must be that the result, no matter what, is HTML, and HTML is styled with CSS, and there is an option in your style sheets for using a style just for printing. This should be the way to do it, imho. About big tables, there isn't really a magic "fix" for that. The page will break where it breaks; I don't really understand the problem here either.
<link rel="stylesheet" type="text/css" media="print" href="print.css" />
<link rel="stylesheet" type="text/css" media="screen" href="screen.css" />
| Best way to make a printer-friendly ASP.NET page? | I'm just curious how most people make their ASP.NET pages printer-friendly? Do you create a separate printer-friendly version of the ASPX page, use CSS or something else? How do you handle situations like page breaks and wide tables?
Is there one elegant solution that works for the majority of the cases?
| [
"You basically make another CSS file that hide things or gives simpler \"printer-friendly\" style to things then add that with a media=\"print\" so that it only applies to print media (when it is printed)\n<link rel=\"stylesheet\" type=\"text/css\" media=\"print\" href=\"print.css\" />\n\n",
"Our gracious host wrote a good blog post on this topic:\nCoding Horror: Stylesheets for Print and Handheld\n",
"I am a php user, but the point must be that the result no matter what is HTML and HTML is styled with CSS and there is an option for your style sheets for just using the style for printing. This should be the way to do it, imho. About big tables, there isnt really a magic \"fix\" for that. Page will break where it breaks, dont really understand the problem here either. \n<link rel=\"stylesheet\" type=\"text/css\" media=\"print\" href=\"print.css\" /> \n<link rel=\"stylesheet\" type=\"text/css\" media=\"screen\" href=\"screen.css\" />\n\n"
] | [
14,
6,
0
] | [] | [] | [
"asp.net"
] | stackoverflow_0000046718_asp.net.txt |
Q:
Can you programmatically restart a j2ee application?
Does anyone know if it is possible to restart a J2EE application (from the application)? If so, how?
I would like to be able to do it in an app-server-agnostic way, if it is possible.
The application will be run on many different app servers-- basically whatever the client prefers.
If it isn't possible to do this in an app-server-agnostic manner, then it probably isn't really worth doing for my purposes. I can always just display a message informing the user that they will need to restart the app manually.
A:
I would suggest that you're unlikely to find an appserver agnostic way. And while I don't pretend to know your requirements, I might question a design that requires the application to restart itself, other than an installer that is deploying a new version. Finally, I would suggest that for any nontrivial purpose "any" appserver will not work. You should have a list of supported app servers and versions, documented in your release notes, so you can test on all of those and don't have to worry about supporting clients on a non-conforming server/version. From experience, there are always subtle differences between, for example, Apache Tomcat and BEA WebLogic, and these differences are often undocumented and hard to determine until you run into them.
A:
Most application servers provide a JMX interface, so you could invoke that.
A:
I'd suggest using servicewrapper to manage the application server, and then use its API methods for requesting a restart of the service. There would be some configuration involved and it's hard to know if this would work in your particular environment, but that's the only solution that I know of which is even reasonably cross-server compatible.
| Can you programmatically restart a j2ee application? | Does anyone know if it is possible to restart a J2EE application (from the application)? If so, how?
I would like to be able to do it in an app-server-agnostic way, if it is possible.
The application will be run on many different app servers-- basically whatever the client prefers.
If it isn't possible to do this in an app-server-agnostic manner, then it probably isn't really worth doing for my purposes. I can always just display a message informing the user that they will need to restart the app manually.
| [
"I would suggest that you're unlikely to find an appserver agnostic way. And while I don't pretend to know your requirements, I might question a design that requires the application to restart itself, other than an installer that is deploying a new version. Finally, I would suggest that for any nontrivial purpose \"any\" appserver will not work. You should have a list of supported app servers and versions, documented in your release notes, so you can test on all of those and dont have to worry about supporting clients on a non-conforming server/version. From experience, there are always subtle differences between, for example, Apache Tomcat and BEA WebLogic, and these differences are often undocument and hard to determine until you run into them. \n",
"Most application servers provide a JMX interface, so you could invoke that.\n",
"I'd suggest using servicewrapper to manage the application server, and then use its api methods for requesting a restart of the service. There would be some configuration involved and its hard to know if this would work in your particuar environment, but thats the only solution that I know of which is even reasonably cross-server compatible.\n"
] | [
6,
3,
2
] | [] | [] | [
"jakarta_ee",
"java"
] | stackoverflow_0000046568_jakarta_ee_java.txt |
Q:
Want to Encode text during Regex.Replace call
I have a regex call that I need help with.
I haven't posted my regex, because it is not relevant here.
What I want to be able to do is, during the Replace, I also want to modify the ${text} portion by doing an Html.Encode on the entire text that is affected by the regex.
Basically, wrap the entire text that is within the range of the regex with the bold tag, but also Html.Encode the text inbetween the bold tag.
RegexOptions regexOptions = RegexOptions.Compiled | RegexOptions.IgnoreCase;
text = Regex.Replace(text, regexBold, @"<b>${text}</b>", regexOptions);
A:
There is an incredibly easy way of doing this (in .net). It's called a MatchEvaluator and it lets you do all sorts of cool find and replace. Essentially you just feed the Regex.Replace method the method name of a method that returns a string and takes in a Match object as its only parameter. Do whatever makes sense for your particular match (html encode) and the string you return will replace the entire text of the match in the input string.
Example: Let's say you wanted to find all the places where there are two numbers being added (in text) and you want to replace the expression with the actual number. You can't do that with a strict regex approach, but when you throw in a MatchEvaluator it becomes easy.
public void Stuff()
{
string pattern = @"(?<firstNumber>\d+)\s*(?<operator>[*+-/])\s*(?<secondNumber>\d+)";
string input = "something something 123 + 456 blah blah 100 - 55";
string output = Regex.Replace(input, pattern, MatchMath);
//output will be "something something 579 blah blah 45"
}
private static string MatchMath(Match match)
{
try
{
double first = double.Parse(match.Groups["firstNumber"].Value);
double second = double.Parse(match.Groups["secondNumber"].Value);
switch (match.Groups["operator"].Value)
{
case "*":
return (first * second).ToString();
case "+":
return (first + second).ToString();
case "-":
return (first - second).ToString();
case "/":
return (first / second).ToString();
}
}
catch { }
return "NaN";
}
Find out more at http://msdn.microsoft.com/en-us/library/system.text.regularexpressions.matchevaluator.aspx
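Applied back to the original question, a hedged sketch: HTML-encode the captured group inside the evaluator before wrapping it in the bold tag (this assumes regexBold defines a (?<text>...) group, matching the ${text} replacement string):
// A sketch applying MatchEvaluator to the original problem: HTML-encode
// the captured "text" group, then wrap it in the bold tag.
using System.Text.RegularExpressions;
using System.Web;

class BoldEncoder
{
    private static string EncodeAndBold(Match match)
    {
        string inner = HttpUtility.HtmlEncode(match.Groups["text"].Value);
        return "<b>" + inner + "</b>";
    }

    public static string Apply(string text, string regexBold)
    {
        RegexOptions options = RegexOptions.Compiled | RegexOptions.IgnoreCase;
        return Regex.Replace(text, regexBold, new MatchEvaluator(EncodeAndBold), options);
    }
}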
A:
Don't use Regex.Replace in this case... use..
foreach(Match in Regex.Matches(...))
{
//do your stuff here
}
A:
Heres an implementation of this I've used to pick out special replace strings from content and localize them.
protected string FindAndTranslateIn(string content)
{
return Regex.Replace(content, @"\{\^(.+?);(.+?)?}", new MatchEvaluator(TranslateHandler), RegexOptions.IgnoreCase);
}
public string TranslateHandler(Match m)
{
if (m.Success)
{
string key = m.Groups[1].Value;
key = FindAndTranslateIn(key);
string def = string.Empty;
if (m.Groups.Count > 2)
{
def = m.Groups[2].Value;
if(def.Length > 1)
{
def = FindAndTranslateIn(def);
}
}
if (group == null)
{
return Translate(key, def);
}
else
{
return Translate(key, group, def);
}
}
return string.Empty;
}
From the match evaluator delegate you return everything you want replaced, so where I have returns you would have bold tags and an encode call, mine also supports recursion, so a little over complicated for your needs, but you can just pare down the example for your needs.
This is equivalent to doing an iteration over the collection of matches and doing parts of the replace methods job. It just saves you some code, and you get to use a fancy shmancy delegate.
A:
If you do a Regex.Match, the resulting match object's group at the 0th index is the subset of the input that matched the regex.
You can use this to stitch in the bold tags and encode it there.
A:
Can you fill in the code inside {} to add the bold tag, and encode the text?
I'm confused as to how to apply the changes to the entire text block AND replace the section in the text variable at the end.
| Want to Encode text during Regex.Replace call | I have a regex call that I need help with.
I haven't posted my regex, because it is not relevant here.
What I want to be able to do is, during the Replace, I also want to modify the ${text} portion by doing an Html.Encode on the entire text that is affected by the regex.
Basically, wrap the entire text that is within the range of the regex with the bold tag, but also Html.Encode the text inbetween the bold tag.
RegexOptions regexOptions = RegexOptions.Compiled | RegexOptions.IgnoreCase;
text = Regex.Replace(text, regexBold, @"<b>${text}</b>", regexOptions);
| [
"There is an incredibly easy way of doing this (in .net). Its called a MatchEvaluator and it lets you do all sorts of cool find and replace. Essentially you just feed the Regex.Replace method the method name of a method that returns a string and takes in a Match object as its only parameter. Do whatever makes sense for your particular match (html encode) and the string you return will replace the entire text of the match in the input string. \nExample: Lets say you wanted to find all the places where there are two numbers being added (in text) and you want to replace the expression with the actual number. You can't do that with a strict regex approach, but you can when you throw in a MatchEvaluator it becomes easy.\npublic void Stuff()\n{\n string pattern = @\"(?<firstNumber>\\d+)\\s*(?<operator>[*+-/])\\s*(?<secondNumber>\\d+)\";\n string input = \"something something 123 + 456 blah blah 100 - 55\";\n string output = Regex.Replace(input, pattern, MatchMath);\n //output will be \"something something 579 blah blah 45\"\n}\n\nprivate static string MatchMath(Match match)\n{\n try\n {\n double first = double.Parse(match.Groups[\"firstNumber\"].Value);\n double second = double.Parse(match.Groups[\"secondNumber\"].Value);\n switch (match.Groups[\"operator\"].Value)\n {\n case \"*\":\n return (first * second).ToString();\n case \"+\":\n return (first + second).ToString();\n case \"-\":\n return (first - second).ToString();\n case \"/\":\n return (first / second).ToString();\n }\n }\n catch { }\n return \"NaN\"; \n}\n\nFind out more at http://msdn.microsoft.com/en-us/library/system.text.regularexpressions.matchevaluator.aspx\n",
"Don't use Regex.Replace in this case... use..\nforeach(Match in Regex.Matches(...))\n{\n //do your stuff here\n}\n\n",
"Heres an implementation of this I've used to pick out special replace strings from content and localize them.\n protected string FindAndTranslateIn(string content)\n {\n return Regex.Replace(content, @\"\\{\\^(.+?);(.+?)?}\", new MatchEvaluator(TranslateHandler), RegexOptions.IgnoreCase);\n }\n\npublic string TranslateHandler(Match m)\n{\n if (m.Success)\n {\n string key = m.Groups[1].Value;\n key = FindAndTranslateIn(key);\n string def = string.Empty;\n if (m.Groups.Count > 2)\n {\n def = m.Groups[2].Value;\n if(def.Length > 1)\n {\n def = FindAndTranslateIn(def);\n }\n }\n\n if (group == null)\n {\n return Translate(key, def);\n }\n else\n {\n return Translate(key, group, def);\n }\n }\n return string.Empty;\n}\n\nFrom the match evaluator delegate you return everything you want replaced, so where I have returns you would have bold tags and an encode call, mine also supports recursion, so a little over complicated for your needs, but you can just pare down the example for your needs.\nThis is equivalent to doing an iteration over the collection of matches and doing parts of the replace methods job. It just saves you some code, and you get to use a fancy shmancy delegate.\n",
"If you do a Regex.Match, the resulting match objects group at the 0th index, is the subset of the intput that matched the regex.\nyou can use this to stitch in the bold tags and encode it there.\n",
"Can you fill in the code inside {} to add the bold tag, and encode the text?\nI'm confused as to how to apply the changes to the entire text block AND replace the section in the text variable at the end.\n"
] | [
4,
3,
2,
0,
0
] | [] | [] | [
"regex"
] | stackoverflow_0000046719_regex.txt |
Q:
Using jQuery, what is the best way to set onClick event listeners for radio buttons?
For the following HTML:
<form name="myForm">
<label>One<input name="area" type="radio" value="S" /></label>
<label>Two<input name="area" type="radio" value="R" /></label>
<label>Three<input name="area" type="radio" value="O" /></label>
<label>Four<input name="area" type="radio" value="U" /></label>
</form>
Changing from the following JavaScript code:
$(function() {
var myForm = document.myForm;
var radios = myForm.area;
// Loop through radio buttons
for (var i=0; i<radios.length; i++) {
if (radios[i].value == "S") {
radios[i].checked = true; // Selected when form displays
radioClicks(); // Execute the function, initial setup
}
radios[i].onclick = radioClicks; // Assign to run when clicked
}
});
Thanks
EDIT: The response I selected answers the question I asked, however I like the answer that uses bind() because it also shows how to distinguish the group of radio buttons
A:
$(document).ready(function(){
$("input[name='area']").bind("click", radioClicks);
});
function radioClicks() {
alert($(this).val());
}
I like to use bind() instead of directly wiring the event handler because you can pass additional data to the event handler (not shown here but the data is a third bind() argument) and because you can easily unbind it (and you can bind and unbind by group--see the jQuery docs).
http://docs.jquery.com/Events/bind#typedatafn
A:
$( function() {
$("input:radio")
.click(radioClicks)
.filter("[value='S']")
.attr("checked", "checked");
});
A:
$(function() {
$("form#myForm input[type='radio']").click( fn );
});
function fn()
{
//do stuff here
}
A:
$(function() {
$('input[@type="radio"]').click(radioClicks);
});
A:
I think something like this should work (but it's untested):
$("input[@type='radio']").each(function(i) {
if (this.val() == 'E') {
radioClicks();
this.get().checked = true;
}
}
$("input[@type='radio']").click(radioClicks);
A:
$(function() {
$('#myForm :radio').each(function() {
        if ($(this).val() == 'S') {
$(this).attr("checked", true);
radioClicks();
}
$(this).click(radioClicks);
});
});
| Using jQuery, what is the best way to set onClick event listeners for radio buttons? | For the following HTML:
<form name="myForm">
<label>One<input name="area" type="radio" value="S" /></label>
<label>Two<input name="area" type="radio" value="R" /></label>
<label>Three<input name="area" type="radio" value="O" /></label>
<label>Four<input name="area" type="radio" value="U" /></label>
</form>
Changing from the following JavaScript code:
$(function() {
var myForm = document.myForm;
var radios = myForm.area;
// Loop through radio buttons
for (var i=0; i<radios.length; i++) {
if (radios[i].value == "S") {
radios[i].checked = true; // Selected when form displays
radioClicks(); // Execute the function, initial setup
}
radios[i].onclick = radioClicks; // Assign to run when clicked
}
});
Thanks
EDIT: The response I selected answers the question I asked, however I like the answer that uses bind() because it also shows how to distinguish the group of radio buttons
| [
"$(document).ready(function(){\n $(\"input[name='area']\").bind(\"click\", radioClicks);\n});\n\nfunctionradioClicks() {\n alert($(this).val());\n}\n\nI like to use bind() instead of directly wiring the event handler because you can pass additional data to the event hander (not shown here but the data is a third bind() argument) and because you can easily unbind it (and you can bind and unbind by group--see the jQuery docs).\nhttp://docs.jquery.com/Events/bind#typedatafn\n",
"$( function() {\n $(\"input:radio\")\n .click(radioClicks)\n .filter(\"[value='S']\")\n .attr(\"checked\", \"checked\");\n});\n\n",
"$(function() {\n\n $(\"form#myForm input[type='radio']\").click( fn );\n\n});\n\nfunction fn()\n{\n //do stuff here\n}\n\n",
"$(function() {\n $('input[@type=\"radio\"]').click(radioClicks);\n});\n\n",
"I think something like this should work (but it's untested):\n$(\"input[@type='radio']\").each(function(i) {\n if (this.val() == 'E') {\n radioClicks();\n this.get().checked = true;\n }\n}\n$(\"input[@type='radio']\").click(radioClicks);\n\n",
"$(function() {\n $('#myForm :radio').each(function() {\n if ($(this).value == 'S') {\n $(this).attr(\"checked\", true);\n radioClicks();\n }\n\n $(this).click(radioClicks);\n });\n});\n\n"
] | [
19,
18,
2,
1,
0,
0
] | [] | [] | [
"javascript",
"jquery"
] | stackoverflow_0000046704_javascript_jquery.txt |
Q:
Terminating intermittently
Has anyone had and solved a problem where programs would terminate without any indication of why? I encounter this problem about every 6 months and I can get it to stop by having me (the administrator) log-in then out of the machine. After this things are back to normal for the next 6 months. I've seen this on Windows XP and Windows 2000 machines.
I've looked in the Event Viewer and monitored API calls and I cannot see anything out of the ordinary.
UPDATE: On the Windows 2000 machine, Visual Basic 6 would terminate when loading a project. On the Windows XP machine, IIS stopped working until I logged in then out.
UPDATE: Restarting the machine doesn't work.
A:
Perhaps it's not solved by you logging in, but by the user logging out. It could be a memory leak and logging out closes the process, causing Windows to reclaim the memory. I assume 'programs' indicates multiple applications, so it could be a shared dll that's causing the problem. Is there any kind of similarity in the programs? .Net, VB6, Office, and so on, or is it everything on the computer? You may be able to narrow it down to shared libraries.
During the 6 month "no error" time frame, is the system always on and logged in? If that's the case, you may suggest the user periodically reboot, perhaps once a week, in order to reclaim leaked memory, or memory claimed by hanging programs that didn't close properly.
A:
You need to take this issue to the software developer.
A:
The more details you provide the more likely it will be that you will get an answer: explain what exact program was 'terminating'. A termination is usually caused by an internal unhandled error, and not all programs check for them, and log them before quitting. However I think you can install Dr Watson, and it will give you at least a stack trace when a crash happens.
| Terminating intermittently | Has anyone had and solved a problem where programs would terminate without any indication of why? I encounter this problem about every 6 months and I can get it to stop by having me (the administrator) log-in then out of the machine. After this things are back to normal for the next 6 months. I've seen this on Windows XP and Windows 2000 machines.
I've looked in the Event Viewer and monitored API calls and I cannot see anything out of the ordinary.
UPDATE: On the Windows 2000 machine, Visual Basic 6 would terminate when loading a project. On the Windows XP machine, IIS stopped working until I logged in then out.
UPDATE: Restarting the machine doesn't work.
| [
"Perhaps it's not solved by you logging in, but by the user logging out. It could be a memory leak and logging out closes the process, causing windows to reclaim the memory. I assume programs indicated multiple applications, so it could be a shared dll that's causing the problem. Is there any kind of similarities in the programs? .Net, VB6, Office, and so on, or is it everything on the computer? You may be able to narrow it down to shared libraries.\nDuring the 6 month \"no error\" time frame, is the system always on and logged in? If that's the case, you may suggest the user periodically reboot, perhaps once a week, in order to reclaim leaked memory, or memory claimed by hanging programs that didn't close properly.\n",
"You need to take this issue to the software developer. \n",
"The more details you provide the more likely it will be that you will get an answer: explain what exact program was 'terminating'. A termination is usually caused by an internal unhandled error, and not all programs check for them, and log them before quitting. However I think you can install Dr Watson, and it will give you at least a stack trace when a crash happens.\n"
] | [
1,
0,
0
] | [] | [] | [
"intermittent",
"windows"
] | stackoverflow_0000046812_intermittent_windows.txt |
Q:
Identifying ASP.NET web service references
At my day job we have load balanced web servers which talk to load balanced app servers via web services (and lately WCF). At any given time, we have 4-6 different teams that have the ability to add new web sites or services or consume existing services. We probably have about 20-30 different web applications and corresponding services.
Unfortunately, given that we have no centralized control over this due to competing priorities, org structures, project timelines, financial buckets, etc., it is quite a mess. We have a variety of services that are reused, but a bunch that are specific to a front-end.
Ideally we would have better control over this situation, and we are trying to get control over it, but that is taking a while. One thing we would like to do is find out more about what all of the inter-relationships between web sites and the app servers.
I have used Reflector to find dependencies among assemblies, but would like to be able to see the traffic patterns between services.
What are the options for trying to map out web service relationships? For the most part, we are mainly talking about internal services (web to app, app to app, batch to app, etc.). Off the top of my head, I can think of two ways to approach it:
Analyze assemblies for any web references. The drawback here is that not everything is a web reference and I'm not sure how WCF connections are listed. However, this would at least be a start for finding 80% of the connections. Does anyone know of any tools that can do that analysis? Like I said, I've used Reflector for assembly references but can't find anything for web references.
Possibly tap into IIS and passively monitor the traffic coming in and out and somehow figure out what is being called and where from. We are looking at enterprise tools that could help but it would be a while before they are implemented (and cost a lot). But is there anything out there that could help out quickly and cheaply? One tool in particular (AmberPoint) can tap into IIS on the servers, monitor inbound and outbound traffic, add a little special sauce, and begin to build a map of the traffic. Very nice, but costs a bundle.
I know, I know, how the heck did you get into this mess in the first place? Beats me, just trying to help us get control of it and get out of it.
Thanks,
Matt
A:
The easiest way is to look through the logs, but if that doesn't include the referrer then you may also want to monitor what is going out from your web to the app server. You can use tools like Wireshark or Microsoft Network Monitor to see this traffic.
The other "solution", and I use this loosely, is to bind a specific web server to an app server and then run through a bundle of requests and see what it is hitting on the app server. You could probably do this in a test environment to lessen the effects on the users of the site.
A:
You need a service registry (UDDI??)... If you had a means to catalog these services and their consumers, it would make this job of dependency discovery a lot easier. That is not an easy solution, though. It takes time and documentation to get a catalog in place.
I think the quickest solution would be to query your IIS logs and find source URLs which originate from your own servers. You would at least be able to track down which servers your consumers are coming from.
Also, if you already have some kind of authentication mechanism in place, you could trace who is using a particular service based on login.
You are right about AmberPoint. There are other tools that catalog the service traffic and provide reports showing what is happening to your services. Systinet, SOA Software, and Actional also have products similar to AmberPoint, but AmberPoint has a freeware version, I believe.
| Identifying ASP.NET web service references | At my day job we have load balanced web servers which talk to load balanced app servers via web services (and lately WCF). At any given time, we have 4-6 different teams that have the ability to add new web sites or services or consume existing services. We probably have about 20-30 different web applications and corresponding services.
Unfortunately, given that we have no centralized control over this due to competing priorities, org structures, project timelines, financial buckets, etc., it is quite a mess. We have a variety of services that are reused, but a bunch that are specific to a front-end.
Ideally we would have better control over this situation, and we are trying to get control over it, but that is taking a while. One thing we would like to do is find out more about what all of the inter-relationships between web sites and the app servers.
I have used Reflector to find dependencies among assemblies, but would like to be able to see the traffic patterns between services.
What are the options for trying to map out web service relationships? For the most part, we are mainly talking about internal services (web to app, app to app, batch to app, etc.). Off the top of my head, I can think of two ways to approach it:
Analyze assemblies for any web references. The drawback here is that not everything is a web reference and I'm not sure how WCF connections are listed. However, this would at least be a start for finding 80% of the connections. Does anyone know of any tools that can do that analysis? Like I said, I've used Reflector for assembly references but can't find anything for web references.
Possibly tap into IIS and passively monitor the traffic coming in and out and somehow figure out what is being called and where from. We are looking at enterprise tools that could help but it would be a while before they are implemented (and cost a lot). But is there anything out there that could help out quickly and cheaply? One tool in particular (AmberPoint) can tap into IIS on the servers, monitor inbound and outbound traffic, add a little special sauce, and begin to build a map of the traffic. Very nice, but costs a bundle.
I know, I know, how the heck did you get into this mess in the first place? Beats me, just trying to help us get control of it and get out of it.
Thanks,
Matt
| [
"The easiest way is to look through the logs, but if that doesn't include the referrer than you may also want to monitor what is going out from your web to the app server. You can use tools like Wireshark or Microsoft Network Monitor to see this traffic.\nThe other \"solution\" and I use this loosely is to bind a specific web server to app server and then run through a bundle and see what it is hitting on the app server. You could probably do this in a test environment to lesson the effects on the users of the site.\n",
"You need a service registry (UDDI??)... If you had a means to catalog these services and their consumers, it would make this job of dependency discovery a lot easier. That is not an easy solution, though. It takes time and documentation to get a catalog in place.\nI think the quickest solution would be to query your IIS logs and find source URLs which originate from your own servers. You would at least be able to track down which servers your consumers are coming from.\nAlso, if you already have some kind of authentication mechanism in place, you could trace who is using a particular service based on login.\nYou are right about AmberPoint. There are other tools that catalog the service traffic and provide reports showing what is happening to your services. Systinet, SOA Software and Actional also has a products similar to Amberpoint but Amberpoint has a free-ware version, I believe.\n"
] | [
2,
0
] | [] | [] | [
"esb",
"iis",
"reflection",
"web_services"
] | stackoverflow_0000044644_esb_iis_reflection_web_services.txt |
Q:
How can I identify in which Java Applet context running without passing an ID?
I'm part of a team that develops a pretty big Swing Java Applet. Most of our code are legacy and there are tons of singleton references. We've bunched all of them to a single "Application Context" singleton. What we now need is to create some way to separate the shared context (shared across all applets currently showing) and non-shared context (specific to each applet currently showing).
However, we don't have an ID at each of the locations that call to the singleton, nor do we want to propagate the ID to all locations. What's the easiest way to identify in which applet context we're running? (I've tried messing with classloaders, thread groups, thread ids... so far I could find nothing that will enable me to ID the origin of the call).
A:
Singletons are evil, what do you expect? ;)
Perhaps the most comprehensive approach would be to load the bulk of the applet in a different class loader (use java.net.URLClassLoader.newInstance). Then use a WeakHashMap to associate class loader with an applet. If you could split most of the code into a common class loader (as a parent of each per-applet class loader) and into the normal applet codebase, that would be faster but more work.
Other hacks:
If you have access to any component, you can use Component.getParent repeatedly or SwingUtilities.getRoot.
If you are in a per-applet instance thread, then you can set up a ThreadLocal.
From the EDT, you can read the current event from the queue (java.awt.EventQueue.getCurrentEvent()), and possibly find a component from that. Alternatively push an EventQueue with an overridden dispatchEvent method.
A:
If I understand you correctly, the idea is to get a different "singleton" object for each caller object or "context".
One thing you can do is to create a thread-local global variable where you write the ID of the current context. (This can be done with AOP.) Then in the singleton getter, the context ID is fetched from the thread-local to use as a key to the correct "singleton" instance for the calling context.
Regarding AOP there should be no problem using it in applets since, depending on your point-cuts, the advices are woven at compile time and a JAR is added to the runtime dependencies. Hence, no special evidence of AOP should remain at run time.
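A minimal sketch of that idea, assuming the per-applet code can stamp a context ID onto a thread before it calls into the legacy singletons (all names here are hypothetical):
import java.util.HashMap;
import java.util.Map;

public final class ContextRegistry {
    private static final ThreadLocal<String> CURRENT_ID = new ThreadLocal<String>();
    private static final Map<String, Object> CONTEXTS = new HashMap<String, Object>();

    private ContextRegistry() {}

    // Called by per-applet code when one of its threads starts doing work
    public static void setContextId(String id) {
        CURRENT_ID.set(id);
    }

    // The "singleton" getter: keyed by whichever applet the calling thread belongs to
    public static synchronized Object getContext() {
        String id = CURRENT_ID.get();
        Object ctx = CONTEXTS.get(id);
        if (ctx == null) {
            ctx = new Object(); // stand-in for the real per-applet application context
            CONTEXTS.put(id, ctx);
        }
        return ctx;
    }
}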
A:
@Hugo regarding threadlocal:
I thought about that solution. However, from experiments I found two problems with that approach:
Shared threads (server connections, etc) are problematic. This can be solved though by paying special attention to these threads (they're all under my control and are pretty much isolated from the legacy code).
The EDT thread is shared across all applets. I failed to find a way to force the creation of a new EDT thread for each applet. This means that the threadlocal for the EDT would be shared across the applets. This one I have no idea how to solve. Suggestions?
| How can I identify in which Java Applet context running without passing an ID? | I'm part of a team that develops a pretty big Swing Java Applet. Most of our code are legacy and there are tons of singleton references. We've bunched all of them to a single "Application Context" singleton. What we now need is to create some way to separate the shared context (shared across all applets currently showing) and non-shared context (specific to each applet currently showing).
However, we don't have an ID at each of the locations that call to the singleton, nor do we want to propagate the ID to all locations. What's the easiest way to identify in which applet context we're running? (I've tried messing with classloaders, thread groups, thread ids... so far I could find nothing that will enable me to ID the origin of the call).
| [
"Singletons are evil, what do you expect? ;)\nPerhaps the most comprehensive approach would be to load the bulk of the applet in a different class loader (use java.net.URLClassLoader.newInstance). Then use a WeakHashMap to associate class loader with an applet. If you could split most of the code into a common class loader (as a parent of each per-applet class loader) and into the normal applet codebase, that would be faster but more work.\nOther hacks:\nIf you have access to any component, you can use Component.getParent repeatedly or SwingUtilities.getRoot.\nIf you are in a per-applet instance thread, then you can set up a ThreadLocal.\nFrom the EDT, you can read the current event from the queue (java.awt.EventQueue.getCurrentEvent()), and possibly find a component from that. Alternatively push an EventQueue with a overridden dispatchEvent method.\n",
"If I understand you correctly, the idea is to get a different \"singleton\" object for each caller object or \"context\".\nOne thing you can do is to create a thread-local global variable where you write the ID of the current context. (This can be done with AOP.) Then in the singleton getter, the context ID is fetched from the thread-local to use as a key to the correct \"singleton\" instance for the calling context. \nRegarding AOP there should be no problem using it in applets since, depending on your point-cuts, the advices are woven at compile time and a JAR is added to the runtime dependencies. Hence, no special evidence of AOP should remain at run time.\n",
"@Hugo regarding threadlocal:\nI thought about that solution. However, from experiments I found two problems with that approach:\n\nShared thread (server connections, etc) are problematic. This can be solved though by paying special attention to these thread (they're all under my control and are pretty much isolated from the legacy code).\nThe EDT thread is shared across all applets. I failed to find a way to force the creation of a new EDT thread for each applet. This means that the threadlocal for the EDT would be shared across the applets. This one I have no idea how to solve. Suggestions?\n\n"
] | [
3,
0,
0
] | [] | [] | [
"applet",
"java",
"swing"
] | stackoverflow_0000007269_applet_java_swing.txt |
Q:
Is there anything wrong with this query?
INSERT INTO tblExcel (ename, position, phone, email) VALUES ('Burton, Andrew', 'Web Developer / Network Assistant', '876-9259', 'aburton@wccs.edu')
I've got an Access table that has five fields: id, ename, position, phone, and email...each one is plain text field with 50 characters, save for position which is 255 and id which is an autoincrement field. I'm using a VB.NET to read data from an Excel table, which gets pushed into a simple class that's used to fill out that query. I do the same thing with two other tables, whose data are pulled from a DB2 table and a MySQL table through. The other two work, but this simple INSERT loop keeps failing, so I don't think it's my "InsertNoExe" function that handles all the OleDb stuff.
So, um, does that query, any of the field titles, etc. look bogus? I can post other bits of code if anyone wants to see it.
EDIT: Fixed. I wasn't sure if the wide image counted as a Stack Overflow bug or not, which is why I left it.
EDIT 2: I'm dense. I use a try...catch to see the bogus query, and don't even check the ex.Message. Gah.
INSERT INTO tblExcel (ename, position, phone, email) VALUES ('Burton, Andrew', 'Web Developer / Network Assistant', '876-9259', 'aburton@wccs.edu')
at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(Int32 hr)
at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult)
at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult)
at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult)
at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method)
at System.Data.OleDb.OleDbCommand.ExecuteNonQuery()
at EmployeeList.EmployeeDatabase.ExeNonQuery(String sql) in C:\andy\html\code\vb\EmployeeList\EmployeeDatabase.vb:line 263
Syntax error in INSERT INTO statement.
EDIT 3: Thank you, Chris.
A:
I believe "position" is a reserved word.
Try...
INSERT into tblExcel (ename, [position], phone, email) VALUES (...
Reserved Words
A:
The spacing of "Web Developer / Network Assistant" looks a little wonky, maybe there is a hidden character in there (carriage return?)
I'd try taking the slash out, and see if the insert works, if not try taking all punctuation out. Then add it back and maybe you will be able to identify the bug.
| Is there anything wrong with this query? | INSERT INTO tblExcel (ename, position, phone, email) VALUES ('Burton, Andrew', 'Web Developer / Network Assistant', '876-9259', 'aburton@wccs.edu')
I've got an Access table that has five fields: id, ename, position, phone, and email...each one is plain text field with 50 characters, save for position which is 255 and id which is an autoincrement field. I'm using a VB.NET to read data from an Excel table, which gets pushed into a simple class that's used to fill out that query. I do the same thing with two other tables, whose data are pulled from a DB2 table and a MySQL table through. The other two work, but this simple INSERT loop keeps failing, so I don't think it's my "InsertNoExe" function that handles all the OleDb stuff.
So, um, does that query, any of the field titles, etc. look bogus? I can post other bits of code if anyone wants to see it.
EDIT: Fixed. I wasn't sure if the wide image counted as a Stack Overflow bug or not, which is why I left it.
EDIT 2: I'm dense. I use a try...catch to see the bogus query, and don't even check the ex.Message. Gah.
INSERT INTO tblExcel (ename, position, phone, email) VALUES ('Burton, Andrew', 'Web Developer / Network Assistant', '876-9259', 'aburton@wccs.edu')
at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(Int32 hr)
at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult)
at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult)
at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult)
at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method)
at System.Data.OleDb.OleDbCommand.ExecuteNonQuery()
at EmployeeList.EmployeeDatabase.ExeNonQuery(String sql) in C:\andy\html\code\vb\EmployeeList\EmployeeDatabase.vb:line 263
Syntax error in INSERT INTO statement.
EDIT 3: Thank you, Chris.
| [
"I beleive \"position\" is a reserved word.\nTry...\nINSERT into tblExcel (ename, [position], phone, email) VALUES (...\nReserved Words\n",
"The spacing of \"Web Developer / Network Assistant\" looks a little wonky, maybe there is a hidden character in there (carriage return?)\nI'd try taking the slash out, and see if the insert works, if not try taking all punctuation out. Then add it back and maybe you will be able to identify the bug.\n"
] | [
5,
0
] | [] | [] | [
"excel",
"ms_access",
"oledb",
"sql",
"vb.net"
] | stackoverflow_0000046907_excel_ms_access_oledb_sql_vb.net.txt |
Q:
Is this the proper use of a mutex?
I have a situation where I might have multiple instances of a program running at once, and it's important that just one specific function not be executing in more than one of these instances at once.
Is this the proper way to use a mutex to prevent this from happening?
lock (this.GetType()) {
_log.Info("Doing Sync");
DoSync();
_log.Info("Sync Completed");
}
A:
You said multiple instances of one application, so we're talking about two program.exe's running, right? The lock statement won't lock across multiple programs, just within the program. If you want a true Mutex, look at the System.Threading.Mutex object.
Here is a usage example:
bool createdNew;
using (Mutex mtx = new Mutex(false, "MyAwesomeMutex", out createdNew))
{
try
{
mtx.WaitOne();
MessageBox.Show("Click OK to release the mutex.");
}
finally
{
mtx.ReleaseMutex();
}
}
The createdNew variable will let you know whether or not it was created the first time. It only tells you if it has been created, though. If you want to acquire the lock, you need to call WaitOne and then call ReleaseMutex to release it. If you just want to see if you created a Mutex, just constructing it is fine.
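Applied to the question's DoSync scenario, a sketch might look like this (the mutex name is arbitrary but must match in every instance; the Global\ prefix, which makes the mutex machine-wide across sessions, is an assumption about what you want):
bool createdNew;
using (Mutex mtx = new Mutex(false, "Global\\MyApp.DoSync", out createdNew))
{
    mtx.WaitOne();
    try
    {
        _log.Info("Doing Sync");
        DoSync();
        _log.Info("Sync Completed");
    }
    finally
    {
        mtx.ReleaseMutex();
    }
}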
A:
TheSeeker is correct.
Jeff Richter's advice in Clr Via C# (p638-9) on locking is to create a private object specifically for the purpose of being locked.
private Object _lock = new Object();
// usage
lock( _lock )
{
// thread-safe code here..
}
This works because _lock cannot be locked by anything outside the current class.
EDIT: this is applicable to threads executing within a single process. @David Mohundro's answer is correct for inter-process locking.
| Is this the proper use of a mutex? | I have a situation where I might have multiple instances of a program running at once, and it's important that just one specific function not be executing in more than one of these instances at once.
Is this the proper way to use a mutex to prevent this from happening?
lock (this.GetType()) {
_log.Info("Doing Sync");
DoSync();
_log.Info("Sync Completed");
}
| [
"You said multiple instances of one application, so we're talking about two program.exe's running, right? The lock statement won't lock across multiple programs, just within the program. If you want a true Mutex, look at the System.Threading.Mutex object.\nHere is a usage example:\nbool createdNew;\nusing (Mutex mtx = new Mutex(false, \"MyAwesomeMutex\", out createdNew))\n{\n try\n {\n mtx.WaitOne();\n\n MessageBox.Show(\"Click OK to release the mutex.\");\n }\n finally\n {\n mtx.ReleaseMutex();\n }\n}\n\nThe createdNew variable will let you know whether or not it was created the first time. It only tells you if it has been created, though. If you want to acquire the lock, you need to call WaitOne and then call ReleaseMutex to release it. If you just want to see if you created a Mutex, just constructing it is fine.\n",
"TheSeeker is correct.\nJeff Richter's advice in Clr Via C# (p638-9) on locking is to create a private object specifically for the purpose of being locked.\nprivate Object _lock = new Object();\n\n// usage\nlock( _lock )\n{\n // thread-safe code here..\n}\n\nThis works because _lock cannot be locked by anything outside the current class.\nEDIT: this is applicable to threads executing within a single process. @David Mohundro's answer is correct for inter-process locking.\n"
] | [
18,
5
] | [] | [] | [
"c#",
"mutex"
] | stackoverflow_0000046909_c#_mutex.txt |
Q:
What is the best way to inherit an array that needs to store subclass specific data?
I'm trying to set up an inheritance hierarchy similar to the following:
abstract class Vehicle
{
public string Name;
public List<Axle> Axles;
}
class Motorcycle : Vehicle
{
}
class Car : Vehicle
{
}
abstract class Axle
{
public int Length;
public void Turn(int numTurns) { ... }
}
class MotorcycleAxle : Axle
{
public bool WheelAttached;
}
class CarAxle : Axle
{
public bool LeftWheelAttached;
public bool RightWheelAttached;
}
I would like to only store MotorcycleAxle objects in a Motorcycle object's Axles array, and CarAxle objects in a Car object's Axles array. The problem is there is no way to override the array in the subclass to force one or the other. Ideally something like the following would be valid for the Motorcycle class:
class Motorcycle : Vehicle
{
public override List<MotorcycleAxle> Axles;
}
but the types have to match when overriding. How can I support this architecture? Will I just have to do a lot of run-time type checking and casting wherever the Axles member is accessed? I don't like adding run-time type checks because you start to lose the benefits of strong typing and polymorphism. There have to be at least some run-time checks in this scenario since the WheelAttached and Left/RightWheelAttached properties depend on the type, but I would like to minimize them.
A:
Use more generics
abstract class Vehicle<T> where T : Axle
{
public string Name;
public List<T> Axles;
}
class Motorcycle : Vehicle<MotorcycleAxle>
{
}
class Car : Vehicle<CarAxle>
{
}
abstract class Axle
{
public int Length;
public void Turn(int numTurns) { ... }
}
class MotorcycleAxle : Axle
{
public bool WheelAttached;
}
class CarAxle : Axle
{
public bool LeftWheelAttached;
public bool RightWheelAttached;
}
A:
2 options spring to mind. 1 is using generics:
abstract class Vehicle<TAxle> where TAxle : Axle {
public List<TAxle> Axles;
}
The second uses shadowing - and this assumes you have properties:
abstract class Vehicle {
public IList<Axle> Axles { get; set; }
}
class Motorcycle : Vehicle {
public new IList<MotorcycleAxle> Axles { get; set; }
}
class Car : Vehicle {
public new IList<CarAxle> Axles { get; set; }
}
void Main() {
Vehicle v = new Car();
// v.Axles is IList<Axle>
Car c = (Car) v;
// c.Axles is IList<CarAxle>
    // ((Vehicle)c).Axles is IList<Axle>
}
The problem with shadowing is that you have a generic List. Unfortunately, you can't constrain the list to only contain CarAxle. Also, you can't cast a List<Axle> into List<CarAxle> - even though there's an inheritance chain there. You have to cast each object into a new List (though that becomes much easier with LINQ).
I'd go for generics myself.
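For the element-by-element copy mentioned above, a LINQ sketch (requires .NET 3.5 and a using System.Linq; directive; GetAxles() is a made-up source):
List<Axle> axles = GetAxles();
List<CarAxle> carAxles = axles.Cast<CarAxle>().ToList();   // throws if any element is not a CarAxle
List<CarAxle> onlyCars = axles.OfType<CarAxle>().ToList(); // silently skips non-CarAxle elements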
A:
I asked a similar question and got a better answer, the problem is related to C#'s support for covariance and contravariance. See that discussion for a little more information.
| What is the best way to inherit an array that needs to store subclass specific data? | I'm trying to set up an inheritance hierarchy similar to the following:
abstract class Vehicle
{
public string Name;
public List<Axle> Axles;
}
class Motorcycle : Vehicle
{
}
class Car : Vehicle
{
}
abstract class Axle
{
public int Length;
public void Turn(int numTurns) { ... }
}
class MotorcycleAxle : Axle
{
public bool WheelAttached;
}
class CarAxle : Axle
{
public bool LeftWheelAttached;
public bool RightWheelAttached;
}
I would like to only store MotorcycleAxle objects in a Motorcycle object's Axles array, and CarAxle objects in a Car object's Axles array. The problem is there is no way to override the array in the subclass to force one or the other. Ideally something like the following would be valid for the Motorcycle class:
class Motorcycle : Vehicle
{
public override List<MotorcycleAxle> Axles;
}
but the types have to match when overriding. How can I support this architecture? Will I just have to do a lot of run-time type checking and casting wherever the Axles member is accessed? I don't like adding run-time type checks because you start to lose the benefits of strong typing and polymorphism. There have to be at least some run-time checks in this scenario since the WheelAttached and Left/RightWheelAttached properties depend on the type, but I would like to minimize them.
| [
"Use more generics\nabstract class Vehicle<T> where T : Axle\n{\n public string Name;\n public List<T> Axles;\n}\n\nclass Motorcycle : Vehicle<MotorcycleAxle>\n{\n}\n\nclass Car : Vehicle<CarAxle>\n{\n}\n\nabstract class Axle\n{\n public int Length;\n public void Turn(int numTurns) { ... }\n}\n\nclass MotorcycleAxle : Axle\n{\n public bool WheelAttached;\n}\n\nclass CarAxle : Axle\n{\n public bool LeftWheelAttached;\n public bool RightWheelAttached;\n}\n\n",
"2 options spring to mind. 1 is using generics:\nabstract class Vehicle<TAxle> where TAxle : Axle {\n public List<TAxle> Axles;\n}\n\nThe second uses shadowing - and this assumes you have properties:\nabstract class Vehicle {\n public IList<Axle> Axles { get; set; }\n}\n\nclass Motorcyle : Vehicle {\n public new IList<MotorcycleAxle> Axles { get; set; }\n}\n\nclass Car : Vehicle {\n public new IList<CarAxle> Axles { get; set; }\n}\n\nvoid Main() {\n Vehicle v = new Car();\n // v.Axles is IList<Axle>\n\n Car c = (Car) v;\n // c.Axles is IList<CarAxle>\n // ((Vehicle)c).Axles is IList<Axle>\n\nThe problem with shadowing is that you have a generic List. Unfortunately, you can't constrain the list to only contain CarAxle. Also, you can't cast a List<Axle> into List<CarAxle> - even though there's an inheritance chain there. You have to cast each object into a new List (though that becomes much easier with LINQ).\nI'd go for generics myself.\n",
"I asked a similar question and got a better answer, the problem is related to C#'s support for covariance and contravariance. See that discussion for a little more information.\n"
] | [
5,
0,
0
] | [] | [] | [
"c#",
"contravariance",
"covariance",
"inheritance",
"oop"
] | stackoverflow_0000031088_c#_contravariance_covariance_inheritance_oop.txt |
Q:
How to find the current name of the test being executed?
I know I can figure out the name of the method as it's being executed, just wondering if there is a way from the setup method. I guess an attribute method would work but from the setup method it would be the best.
EDIT NUnit
A:
I know this is going to sound negative, but don't do it! :-)
The idea behind the setup method is that it executes something required by every test, which means that it doesn't matter which test is being executed, so you don't need to know the name of the method.
If you are after different data used in initialisation, then call a separate method with the data passed as a parameter from your test method.
If you really want what you are asking for, then you may need a different method that takes the name of the current method as a parameter and call that from your test method.
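A sketch of that last suggestion in NUnit (the helper name is invented):
[TestFixture]
public class MyTests
{
    [SetUp]
    public void SetUp()
    {
        // only work that is identical for every test belongs here
    }

    [Test]
    public void Foo_DoesBar()
    {
        InitialiseFor("Foo_DoesBar"); // hypothetical per-test setup helper
        // ... exercise and assert ...
    }

    private void InitialiseFor(string testName)
    {
        // test-specific initialisation keyed off the name passed in
    }
}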
| How to find the current name of the test being executed? | I know I can figure out the name of the method as it's being executed, just wondering if there is a way from the setup method. I guess an attribute method would work but from the setup method it would be the best.
EDIT NUnit
| [
"I know this is going to sound negative, but don't do it! :-)\nThe idea behind the setup method is that it executes something required by every test, which means that it doesn't matter which test is being executed, so you don't need to know the name of the method.\nIf you are after different data used in initialisation, then call a separate method with the data passed as a parameter from your test method.\nIf you really want what you are asking for, then you may need a different method that takes the name of the current method as a parameter and call that from your test method.\n"
] | [
1
] | [] | [] | [
"unit_testing",
"visual_studio"
] | stackoverflow_0000044899_unit_testing_visual_studio.txt |
Q:
Getting an int representation of a String
I am looking for a way to create an int\long representation of an arbitrary alpha-numeric String. Hash codes won't do it, because I can't afford hash collisions i.e. the representation must be unique and repeatable.
The numeric representation will be used to perform efficient (hopefully) compares. The creation of the numeric key will take some time, but it only has to happen once, whereas I need to perform vast numbers of comparisons with it - which will hopefully be much faster than comparing the raw Strings.
Any other idea's on faster String comparison will be most appreciated too...
A:
Unless your string is limited in length, you can't avoid collisions.
There are 4294967296 possible values for an integer (2^32). If you have a string of more than 4 ASCII characters, or more than two unicode characters, then there are more possible string values than possible integer values. You can't have a unique integer value for every possible 5 character string. Long values have more possible values, but they would only provide a unique value for every possible string of 8 ASCII characters.
Hash codes are useful as a two step process: first see if the hash code matches, then check the whole string. For most strings that don't match, you only need to do the first step, and it's really fast.
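A sketch of that two-step compare in Java -- the keyword's hash is computed once up front, so most mismatches cost a single integer comparison:
final class Keyword {
    private final String key;
    private final int keyHash; // computed once, reused for every compare

    Keyword(String key) {
        this.key = key;
        this.keyHash = key.hashCode();
    }

    boolean matches(String candidate) {
        // cheap integer check first; full character compare only on a hash match
        return keyHash == candidate.hashCode() && key.equals(candidate);
    }
}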
A:
Can't you just start with a hash code, and if the hash codes match, do a character by character comparison?
A:
How long are the strings? If they are very short, then a unique ID can be generated by considering the characters as digits in base 36 (26 + 10) that form an n-digit number where n is the length of the string. On the other hand, if the strings are short enough to allow this, direct comparison won't be an issue anyway. A quick sketch of the encoding follows.
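For instance, in Java (two caveats, both assumptions about your data: a signed 64-bit long only holds up to 12 base-36 digits, and a leading '0' digit would collide with the same string minus that zero):
// e.g. "hello" -> 29234652; up to 12 characters of [0-9a-z] fit in a signed long
long key = Long.parseLong(word.toLowerCase(), 36);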
Otherwise you'll have to generate a collision-free hash and this can only be done when the complete problem space is known in advance (i.e. if you know all strings that can possibly occur). You will want to have a look at perfect hashing, although the only feasible algorithm to find a perfect hash function that I know is probabilistic so collisions are still theoretically possible.
There might be other ways to find such a function. Knuth called this a “rather amusing … puzzle” in TAoCP but he doesn't give an algorithm either.
In general, you give far too little information to find an algorithm that doesn't require probing the whole problem space in some manner. This invariably means that the problem has exponential running time but could be solved using machine-learning heuristics. I'm not sure if this is advisable in your case.
A:
Perhaps:
String y = "oiu291981u39u192u3198u389u28u389u";
BigInteger bi = new BigInteger(y, 36);
System.out.println(bi);
A:
At the end of the day, a single alphanumeric character has at least 36 possible values. If you include punctuation, lower case, etc then you can easily pass 72 possible values.
A non-colliding number that allows you to quickly compare strings would necessarily grow exponentially with the length of the string.
So you first must decide on the longest string you are expecting to compare. Assuming it's N characters in length, and assuming you ONLY need uppercase letters and the numerals 0-9 then you need to have an integer representation that can be as high as
36^N
For a string of length 25 (common name field) then you end up needing a binary number with 130 bits.
If you compose that into 32 bit numbers, you'll need 4. Then you can compare each number (four integer compares should take no time, compared to walking the string). I would recommend a big number library, but for this specialized case I'm pretty sure you can write your own and get better performance.
If you want to handle 72 possible values per character (uppercase, lowercase, numerals, punctuation...) and you need 10 characters, then you'll need 62 bits - two 32 bit integers (or one 64 bit if you're on a system that supports 64 bit computing)
If, however, you are not able to restrict the numbers in the string (ie, could be any of the 256 letters/numbers/characters/etc) and you can't define the size of the string, then comparing the strings directly is the only way to go, but there's a shortcut.
Cast the pointer of the string to a 32 bit unsigned integer array, and compare the string 4 bytes at a time (or 64 bits/8bytes at a time on a 64 bit processor). This means that a 100 character string only requires 25 compares maximum to find which is greater.
You may need to re-define the character set (and convert the strings) so that the characters with higher precedence are assigned values closer to 0, and lower precedence values closer to 255 (or vice versa, depending on how you are comparing them).
Good luck!
-Adam
A:
A few questions in the beginning:
Did you test that simple string comparison is too slow?
What does the comparison look like ('ABC' == 'abc' or 'ABC' != 'abc')?
How many strings do you have to compare?
How many comparisons do you have to do?
What do your strings look like (the length, letter case)?

As far as I remember, a String in Java is an object, and two identical string literals point to the same interned object.
So, maybe it would be enough to compare object references (String.equals() typically checks reference equality first).
If that doesn't help, you can try a Pascal-style string representation where the first element is the length; if your strings have varying lengths this should save some CPU time.
A:
As long as it's a hash function, be it String.hashCode(), MD5 or SHA1, collision is unavoidable unless you have a fixed limit on the string's length. It is mathematically impossible to have one-to-one mapping from an infinite group to a finite group.
Stepping back, is collision avoidance absolutely necessary?
A:
How long are your strings? Unless you choose an int representation that's longer than the string, collisions will always be possible no matter what conversion you're using. So if you're using a 32 bit integer, you can only uniquely represent strings of up to 4 bytes.
A:
How big are your strings? Arbitrarily long strings cannot be compressed into 32/64 bit format.
A:
If you don't want collisions, try something insane like SHA-512. I can't guarantee there won't be collisions, but I don't think they have found any yet.
A:
Assuming "alphanumeric" means letters and numbers, you could treat each letter/number as a base-36 digit. Unfortunately, large strings will cause the number to grow rapidly and you'd have to resort to big integers, which are hardly efficient.
If your strings are usually different when you make the comparison (i.e. searching for a specific string) the hash might be your best option. Once you get a potential hit, you can do the string comparison to be sure. A well-designed hash will make collisions exceedingly rare.
A:
It would seem that an MD5 hash would work fine. The risk of a hash collision would be extremely unlikely. Depending on the length of your string, a hash that generates an int/long would run into max value problems very quickly.
A:
Why don't you do something like 1stChar + (B x 2ndChar) + (B^2 x 3rdChar) ...., where B is at least as large as the number of distinct character values (e.g. 36 for letters and digits) and you use the simple integer value of each character, i.e. a = 1, b = 2 etc, or just the integer value if it's not a letter. With a base that large this will give a unique value for each string, even for 2 strings that are just the same letters in a different order (with a smaller multiplier such as 10, different strings can collide, e.g. 'k' and 'aa').
Of course it gets more complicated if you need to worry about Unicode rather than just ASCII, and the numbers get large quickly if you need to use long strings.
Are the standard Java string comparison functions definitely not efficient enough?
A:
String length may vary, but let's say 10 characters for now.
In that case, in order to guarantee uniqueness you'd have to use some sort of big integer representation. I doubt that doing comparisons on big integers would be substantially faster than doing string comparisons in the first place. I'll second what others have said here: use some sort of hash, then in the event of a hash match check the original strings to weed out any collisions.
In any case, if your strings are around 10 characters, I doubt that comparing, say, a bunch of 32 bit hashes will be all that much faster than direct string comparisons. I think you have to ask yourself if it's really worth the additional complexity.
| Getting an int representation of a String | I am looking for a way to create an int\long representation of an arbitrary alpha-numeric String. Hash codes won't do it, because I can't afford hash collisions i.e. the representation must be unique and repeatable.
The numeric representation will be used to perform efficient (hopefully) compares. The creation of the numeric key will take some time, but it only has to happen once, whereas I need to perform vast numbers of comparisons with it - which will hopefully be much faster than comparing the raw Strings.
Any other idea's on faster String comparison will be most appreciated too...
| [
"Unless your string is limited in length, you can't avoid collisions. \nThere are 4294967296 possible values for an integer (2^32). If you have a string of more than 4 ASCII characters, or more than two unicode characters, then there are more possible string values than possible integer values. You can't have a unique integer value for every possible 5 character string. Long values have more possible values, but they would only provide a unique value for every possible string of 8 ASCII characters.\nHash codes are useful as a two step process: first see if the hash code matches, then check the whole string. For most strings that don't match, you only need to do the first step, and it's really fast.\n",
"Can't you just start with a hash code, and if the hash codes match, do a character by character comparison?\n",
"How long are the strings? If they are very short, then a unique ID can be generated by considering the characters as digits in base 36 (26 + 10) that form a n-digits number where n is the length of the string. On the other hand, if the strings are short enough to allow this, direct comparison won't be an issue anyway.\nOtherwise you'll have to generate a collision-free hash and this can only be done when the complete problem space is known in advance (i.e. if you know all strings that can possibly occur). You will want to have a look at perfect hashing, although the only feasible algorithm to find a perfect hash function that I know is probabilistic so collisions are still theoretically possible.\nThere might be other ways to find such a function. Knuth called this a “rather amusing … puzzle” in TAoCP but he doesn't give an algorithm either.\nIn general, you give way too few information to find an algorithm that doesn't require probing the whole problem space in some manner. This does invariably mean that the problem has exponential running time but could be solved using machine-learning heuristics. I'm not sure if this is advisable in your case.\n",
"Perhaps:\nString y = \"oiu291981u39u192u3198u389u28u389u\";\nBigInteger bi = new BigInteger(y, 36);\nSystem.out.println(bi);\n\n",
"At the end of the day, a single alphanumeric character has at least 36 possible values. If you include punctuation, lower case, etc then you can easily pass 72 possible values.\nA non-colliding number that allows you to quickly compare strings would necessarily grow exponentially with the length of the string.\nSo you first must decide on the longest string you are expecting to compare. Assuming it's N characters in length, and assuming you ONLY need uppercase letters and the numerals 0-9 then you need to have an integer representation that can be as high as\n36^N\nFor a string of length 25 (common name field) then you end up needing a binary number with 130 bits.\nIf you compose that into 32 bit numbers, you'll need 4. Then you can compare each number (four integer compares should take no time, compared to walking the string). I would recommend a big number library, but for this specialized case I'm pretty sure you can write your own and get better performance.\nIf you want to handle 72 possible values per character (uppercase, lowercase, numerals, punctuation...) and you need 10 characters, then you'll need 62 bits - two 32 bit integers (or one 64 bit if you're on a system that supports 64 bit computing)\nIf, however, you are not able to restrict the numbers in the string (ie, could be any of the 256 letters/numbers/characters/etc) and you can't define the size of the string, then comparing the strings directly is the only way to go, but there's a shortcut.\nCast the pointer of the string to a 32 bit unsigned integer array, and compare the string 4 bytes at a time (or 64 bits/8bytes at a time on a 64 bit processor). This means that a 100 character string only requires 25 compares maximum to find which is greater.\nYou may need to re-define the character set (and convert the strings) so that the characters with higher precedence are assigned values closer to 0, and lower precedence values closer to 255 (or vice versa, depending on how you are comparing them).\nGood luck!\n-Adam\n",
"A few questions in the beginning:\n\nDid you test that simple string comparison is too slow? \nHow the comparison looks like ('ABC' == 'abc' or 'ABC' != 'abc')? \nHow many string do you have to compare? \nHow many comparison do you have to do?\nHow your strings look like (the length, letter case)?\n\nAs far as I remember String in Java is an object and two identical strings point to the same object.\nSo, maybe it would be enough to compare objects (probably string comparison is already implemented in this way).\nIf it doesn't help you can try to use Pascal implementation of string object when first element is length and if your strings have various length this should save some CPU time.\n",
"As long as it's a hash function, be it String.hashCode(), MD5 or SHA1, collision is unavoidable unless you have a fixed limit on the string's length. It is mathematically impossible to have one-to-one mapping from an infinite group to a finite group.\nStepping back, is collision avoidance absolutely necessary?\n",
"How long are your strings? Unless you choose an int representation that's longer than the string, collisions will always be possible no matter what conversion you're using. So if you're using a 32 bit integer, you can only uniquely represent strings of up to 4 bytes.\n",
"How big are your strings? Arbitrarily long strings cannot be compressed into 32/64 bit format. \n",
"If you don't want collisions, try something insane like SHA-512. I can't guarantee there won't be collisions, but I don't think they have found any yet.\n",
"Assuming \"alphanumeric\" means letters and numbers, you could treat each letter/number as a base-36 digit. Unfortunately, large strings will cause the number to grow rapidly and you'd have to resort to big integers, which are hardly efficient.\nIf your strings are usually different when you make the comparison (i.e. searching for a specific string) the hash might be your best option. Once you get a potential hit, you can do the string comparison to be sure. A well-designed hash will make collisions exceedingly rare.\n",
"It would seem that an MD5 hash would work fine. The risk of a hash collision would be extremely unlikely. Depending on the length of your string, a hash that generates an int/long would run into max value problems very quickly. \n",
"Why don't you do something like 1stChar + (10 x 2ndChar) + 100 x (3rdChar) ...., where you use the simple integer value of each character, i.e. a = 1, b = 2 etc, or just the integer value if it's not a letter. This will give a unique value for each string, even for 2 strings that are just the same letters in a different order.\nOf course if gets more complicated if you need to worry about Unicode rather than just ASCII and the numbers could get large if you need to use long string.\nAre the standard Java string comparison functions definitely not efficient enough?\n",
"\nString length may vary, but let's say 10 characters for now.\n\nIn that case, in order to guarantee uniqueness you'd have to use some sort of big integer representation. I doubt that doing comparisons on big integers would be substantially faster than doing string comparisons in the first place. I'll second what other's have said here, use some sort of hash, then in the event of a hash match check the original strings to weed out any collisions.\nIn any case, If your strings are around 10 characters, I doubt that comparing, say, a bunch of 32 bit hashes will be all that much faster than direct string comparisons. I think you have to ask yourself if it's it really worth the additional complexity.\n"
] | [
12,
10,
6,
2,
2,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"java",
"performance",
"string"
] | stackoverflow_0000046160_java_performance_string.txt |
Q:
How can a simple tree algorithm be coded in a functional language?
Suppose I want to implement a reasonably efficient 'keyword recognition algorithm', that is first given a list of keyword, and must then answer if another given word was in the list.
In an imperative language, I would store the keywords in a tree (one node per character). Then, when receiving a word to test, I would scan my tree to test if the word is a keyword.
I'd like to understand how such an algorithm would be coded in a functional language. How does one get the benefits of 'stateless' programming while keeping the efficiency of 'imperative' algorithms. Isn't it necessary to store the tree somewhere between the lookups if you don't want to rebuild it each time?
A:
I think what you mean is a character per node... sort of like a simple hash tree scheme for keyword lookup. Assuming this or even another kind of tree... imagine doing something like this (in pseudo-LISP):
(defun buildtree (wordlist) ...code to build tree recursively returns the tree...)
(defun lookup (tree word) ...code to look up word using tree, returns t or nil...)

(defun lookupmany (tree querylist)
    (if (eq querylist nil)
        nil
        (cons (lookup tree (car querylist)) (lookupmany tree (cdr querylist)))
    )
)
(defun main (wordlist querylist) ; the main entry point
(lookupmany (buildtree wordlist) querylist)
)
if this is what you mean, this is fairly straight-forward functional programming.
Is it really stateless? That's a matter of debate. Some people would say some
forms of functional programming store what we normally call "state" on the stack.
Moreover, Common LISP even since the first edition of the Steele book has had iterative
constructs, and LISP has had setq for a long, long time.
But in the theory of programming languages, what we mean by "stateless" is pretty much satisfied by the idea shown here.
I think the above is something like the arrangement you mean.
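Filling in the elided bodies, one purely functional way to do it is a persistent trie: every insert returns a new node and shares the untouched branches, so nothing is ever mutated. An untested sketch in Common Lisp (the representation chosen here is an assumption: a node is a cons of an end-of-word flag and an alist of children):
;; a node is (word-ends-here-p . alist of (char . child-node))
(defun empty-node () (cons nil nil))

(defun node-child (node ch)
  (cdr (assoc ch (cdr node))))

(defun insert-word (node word i)
  (if (= i (length word))
      (cons t (cdr node))                 ; mark end of word, keep the children
      (let* ((ch (char word i))
             (child (or (node-child node ch) (empty-node)))
             (new-child (insert-word child word (1+ i))))
        (cons (car node)
              (acons ch new-child
                     (remove ch (cdr node) :key #'car))))))

(defun buildtree (wordlist)
  (reduce (lambda (tree w) (insert-word tree w 0))
          wordlist :initial-value (empty-node)))

(defun lookup (tree word)
  (labels ((walk (node i)
             (cond ((null node) nil)
                   ((= i (length word)) (car node))
                   (t (walk (node-child node (char word i)) (1+ i))))))
    (walk tree 0)))

The tree is built once and simply passed to every lookup, which answers where it "lives" between lookups: it is an argument, not a mutable global.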
A:
I imagine you'd want something like a tree with a list of children, as described here.
| How can a simple tree algorithm be coded in a functional language? | Suppose I want to implement a reasonably efficient 'keyword recognition algorithm', that is first given a list of keyword, and must then answer if another given word was in the list.
In an imperative language, I would store the keywords in a tree (one node per character). Then, when receiving a word to test, I would scan my tree to test if the word is a keyword.
I'd like to understand how such an algorithm would be coded in a functional language. How does one get the benefits of 'stateless' programming while keeping the efficiency of 'imperative' algorithms. Isn't it necessary to store the tree somewhere between the lookups if you don't want to rebuild it each time?
| [
"I think what you mean is a character per node... sort of like a simple hash tree scheme for keyword lookup. Assuming this or even another kind of tree... imagine doing something like this (in pseudo-LISP):\n(defun buildtree (wordlist) ...code to build tree recursively returns the tree...)\n(define lookup (tree word) ...code to look up word using tree, returns t or nil...)\n\n(defun lookupmany (tree querylist)\n (if (eq querylist nil)\n nil\n (cons (lookup tree (car querylist)) (lookupmany tree (cdr querylist))\n )\n)\n\n(defun main (wordlist querylist) ; the main entry point\n (lookupmany (buildtree wordlist) querylist)\n)\n\nif this is what you mean, this is fairly straight-forward functional programming.\nIs it really stateless? That's a matter of debate. Some people would say some\nforms of functional programming store what we normally call \"state\" on the stack.\nMoreover, Common LISP even since the first edition of the Steele book has had iterative\nconstructs, and LISP has had setq for a long, long time.\nBut in the theory of programming languages, what we mean by \"stateless\" is pretty much satisfied by the idea shown here.\nI think the above is something like the arrangement you mean.\n",
"I imagine you'd want something like a tree with a list of children, as described here.\n"
] | [
3,
0
] | [] | [] | [
"algorithm",
"functional_programming"
] | stackoverflow_0000046924_algorithm_functional_programming.txt |
Q:
"Getting" the path in Linux
I am writing a C program in Linux. Commands like execv() require a path in the form of a C string. Is there a command that will return the current path in the form of a C-style string?
A:
getcwd():
SYNOPSIS
#include <unistd.h>
char *getcwd(char *buf, size_t size);
DESCRIPTION
The getcwd() function shall place an absolute pathname of the current working directory in the array pointed to by buf, and return buf. The pathname copied to the array shall contain no components that are symbolic links. The size argument is the size in bytes of the character array pointed to by the buf argument. If buf is a null pointer, the behavior of getcwd() is unspecified.
RETURN VALUE
Upon successful completion, getcwd() shall return the buf argument. Otherwise, getcwd() shall return a null pointer and set errno to indicate the error. The contents of the array pointed to by buf are then undefined....
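A minimal usage sketch (PATH_MAX from <limits.h> is used for the buffer size here; on some systems you may prefer to allocate the buffer dynamically):
#include <stdio.h>
#include <limits.h>
#include <unistd.h>

int main(void)
{
    char buf[PATH_MAX];

    if (getcwd(buf, sizeof buf) != NULL)
        printf("current directory: %s\n", buf);
    else
        perror("getcwd");

    return 0;
}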
A:
The path argument to execv() is the path to the application you wish to execute, not the current working directory (which will be returned by getcwd()) or the shell search path (which will be returned by getenv("PATH")).
Depending on what you're doing, you may get more mileage out of the system() function in the C library rather than the lower-level exec() family.
A:
This is not ANSI C:
#include <stdio.h>
#include <unistd.h>
#include <sys/param.h>   /* for MAXPATHLEN */

char path[MAXPATHLEN];
getcwd(path, MAXPATHLEN);
printf("pwd -> %s\n", path);
A:
If the path can be a relative path, you should be able to use '.' or './' as the path. I'm not sure if it will work, but you could try it.
| "Getting" the path in Linux | I am writing a C program in Linux. Commands like execv() require a path in the form of a C string. Is there a command that will return the current path in the form of a C-style string?
| [
"getcwd():\n\nSYNOPSIS\n#include <unistd.h>\n\nchar *getcwd(char *buf, size_t size);\n\nDESCRIPTION\nThe getcwd() function shall place an absolute pathname of the current working directory in the array pointed to by buf, and return buf. The pathname copied to the array shall contain no components that are symbolic links. The size argument is the size in bytes of the character array pointed to by the buf argument. If buf is a null pointer, the behavior of getcwd() is unspecified.\nRETURN VALUE\nUpon successful completion, getcwd() shall return the buf argument. Otherwise, getcwd() shall return a null pointer and set errno to indicate the error. The contents of the array pointed to by buf are then undefined....\n\n",
"The path argument to execv() is the path to the application you wish to execute, not the current working directory (which will be returned by getcwd()) or the shell search path (which will be returned by getenv(\"PATH\")).\nDepending on what you're doing, you may get more mileage out of the system() function in the C library rather than the lower-level exec() family.\n",
"This is not ANSI C:\n#include <unistd.h>\n\nchar path[MAXPATHLEN];\ngetcwd(path, MAXPATHLEN);\nprintf(\"pwd -> %s\\n\", path);\n\n",
"If the path can be a relative path, you should be able to use '.' or './' as the path. I'm not sure if it will work, but you could try it.\n"
] | [
11,
6,
2,
0
] | [
"You need to grab the environment variable PWD (present working directory).\nI'm not sure what the library it is in, but it is a standard Linux header.\nI was thinking of getenv() which would help if you also need to run system commands and need the various bin paths located in PATH.\n"
] | [
-2
] | [
"c",
"linux",
"operating_system"
] | stackoverflow_0000047066_c_linux_operating_system.txt |
Q:
Recursive function for an xml file (hierarchical data)
I have an XML file in the following format:
<categories>
<category id="1"></category>
<category id="2">
<category id="3"></category>
<category id="4">
<category id="5"></category>
</category>
</category>
</categories>
Can anyone please give me some direction on how I might traverse the file using C#?
A:
First off, System.Xml provides some excellent ways to work with XML.
I'm assuming you loaded your XML into an XmlDocument; doing so allows you to use XPath selectors, or just walk through the DOM.
Something like this would walk from whatever element back up to the top using recursion:
public XmlNode WalkToTopNode (XmlNode CurrentNode)
{
if (CurrentNode.ParentNode == null)
return CurrentNode;
else
return WalkToTopNode(CurrentNode.ParentNode);
}
Using recursion to find a node by ID could be done somewhat like this (Note, I typed this in the textbox, it may be wrong):
public XmlNode GetElementById (string id, XmlNode node)
{
    // Text and document nodes have no Attributes collection, so guard against null.
    if (node.Attributes != null && node.Attributes["id"] != null && node.Attributes["id"].InnerText == id)
    {
        return node;
    }

    foreach (XmlNode childNode in node.ChildNodes)
    {
        // Only return when the recursive call actually finds a match;
        // otherwise keep searching the remaining siblings.
        XmlNode result = GetElementById(id, childNode);
        if (result != null)
        {
            return result;
        }
    }

    return null;
}
However, if you are reaching for recursion when there are so many better node-traversal options built into System.Xml, then perhaps it's time to rethink your strategy.
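For example, a single XPath query can replace the hand-rolled recursion entirely. This is a sketch assuming the XML from the question has been loaded into an XmlDocument (the file name is made up):
XmlDocument doc = new XmlDocument();
doc.Load("categories.xml");  // hypothetical file name

// Finds the <category> element with id="4" anywhere in the tree.
XmlNode node = doc.SelectSingleNode("//category[@id='4']");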
| Recursive function for an xml file (hierarchical data) | I have an XML file in the following format:
<categories>
<category id="1"></category>
<category id="2">
<category id="3"></category>
<category id="4">
<category id="5"></category>
</category>
</category>
</categories>
Can anyone please give me some direction on how I might traverse the file using C#?
| [
"First off, System.XML provides some excellent ways to work with XML.\nI'm assuming you loaded your XML into an XMLDocument, doing so allows you to use XPath Selectors, or just walk through the DOM.\nSomething like this would walk from whatever element back up to the top using recursion:\npublic XmlNode WalkToTopNode (XmlNode CurrentNode)\n{\n if (CurrentNode.ParentNode == null)\n return CurrentNode;\n else\n return WalkToTopNode(CurrentNode.ParentNode);\n}\n\nUsing recursion to find a node by ID could be done somewhat like this (Note, I typed this in the textbox, it may be wrong):\npublic XmlNode GetElementById (string id, XmlNode node)\n{\n if (node.Attributes[\"id\"] != null && node.Attributes[\"id\"].InnerText == id)\n { \n return node;\n }\n else\n {\n foreach (XmlNode childNode in node.Children)\n {\n return GetElementById(id, childNode);\n }\n }\n\n return null; \n}\n\nHowever, if you are using recursion when there are so many better node traversal ways built in to System.XML, then perhaps its time to rethink your strategy.\n"
] | [
2
] | [] | [] | [
"recursion",
"xml"
] | stackoverflow_0000047026_recursion_xml.txt |
Q:
How do you convert an aspx or master page file to page and code behind?
I have a project where a .master page was created without a code behind page. Now I want to add a code behind page for this .master page and move the "in page" code to the code behind file. What is the best/easiest way to go about doing this? I'm using Visual Studio 2008.
A:
Create a new class file, name it yourmaster.master.cs (Visual Studio will automatically group it with the .master), move the code into it, and reference it from your master page.
Then right-click on your project and click "Convert to Web Application", and Visual Studio will create the designer file.
A:
One way to do it is to create a new, empty master page/aspx file and then copy-paste the code you already have into that page. That will take care of all the wire-up and the creation of the code files.
A:
Or you can adapt the Page or Master directive while creating an appropriate code-behind file (.master.cs or .aspx.cs, or for VB.NET .master.vb or .aspx.vb).
I don't know of a simple one-click way to achieve this.
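As a sketch, the adapted Master directive might look like this (the class and file names are assumptions; use CodeFile for web site projects and CodeBehind for web application projects):
<%@ Master Language="C#" CodeFile="Site.master.cs" Inherits="SiteMaster" %>
The referenced code-behind file would then declare public partial class SiteMaster : System.Web.UI.MasterPage and hold the code moved out of the page.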
| How do you convert an aspx or master page file to page and code behind? | I have a project where a .master page was created without a code behind page. Now I want to add a code behind page for this .master page and move the "in page" code to the code behind file. What is the best/easiest way to go about doing this? I'm using Visual Studio 2008.
| [
"Create new class file, name it yourmaster.master.cs (Visual Studio will automaticly group it with the .master) and move the code to it, reference it in your masterpage.\nThen rightclick on your project and click \"Convert to Web Application\" and Visual Studio will create the designer file.\n",
"One way to do it, is to create a new empty masterpage/aspx-file and then copy-paste the code you allready have into that page. That will take care of all the wire-up and creating of code-files.\n",
"Or you can adapt the Page or Master directive while creating an appropiate code behind file (.master.cs or .aspx.cs or for VB.NET .master.vb or .aspx.vb).\nI don't know of a simple click way to achieve this.\n"
] | [
3,
0,
0
] | [] | [] | [
"visual_studio_2008"
] | stackoverflow_0000046938_visual_studio_2008.txt |
Q:
Slow SQL Query due to inner and left join?
Can anyone explain this behavior or how to get around it?
If you execute this query:
select *
from TblA
left join freetexttable ( TblB, *, 'query' ) on TblA.ID = [Key]
inner join DifferentDbCatalog.dbo.TblC on TblA.ID = TblC.TblAID
It will be very very very slow.
If you change that query to use two inner joins instead of a left join, it will be very fast. If you change it to use two left joins instead of an inner join, it will be very fast.
You can observe this same behavior if you use a sql table variable instead of the freetexttable as well.
The performance problem arises any time you have a table variable (or freetexttable) and a table in a different database catalog where one is in an inner join and the other is in a left join.
Does anyone know why this is slow, or how to speed it up?
A:
A general rule of thumb is that OUTER JOINs cause the number of rows in a result set to increase, while INNER JOINs cause the number of rows in a result set to decrease. Of course, there are plenty of scenarios where the opposite is true as well, but it's more likely to work this way than not. What you want to do for performance is keep the size of the result set (working set) as small as possible for as long as possible.
Since both joins match on the first table, changing up the order won't affect the accuracy of the results. Therefore, you probably want to do the INNER JOIN before the LEFT JOIN:
SELECT *
FROM TblA
INNER JOIN DifferentDbCatalog.dbo.TblC on TblA.ID = TblC.TblAID
LEFT JOIN freetexttable ( TblB, *, 'query' ) on TblA.ID = [Key]
As a practical matter, the query optimizer should be smart enough to compile to use the faster option, regardless of which order you specified for the joins. However, it's good practice to pretend that you have a dumb query optimizer, and that query operations happen in order. This helps future maintainers spot potential errors or assumptions about the nature of the tables.
Because the optimizer should re-write things, this probably isn't good enough to fully explain the behavior you're seeing, so you'll still want to examine the execution plan used for each query, and probably add an index as suggested earlier. This is still a good principle to learn, though.
A:
What you should usually do is turn on the "Show Actual Execution Plan" option and then take a close look at what is causing the slowdown. (hover your mouse over each join to see the details) You'll want to make sure that you are getting an index seek and not a table scan.
I would assume what is happening is that SQL is being forced to pull everything from one table into memory in order to do one of the joins. Sometimes reversing the order that you join the tables will also help things.
A:
Putting freetexttable(TblB, *, 'query') into a temp table may help if it's getting called repeatedly in the execution plan.
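A sketch of that idea, using the table and query from the question (FREETEXTTABLE exposes [Key] and [Rank] columns):
SELECT [Key], [Rank]
INTO #ft
FROM FREETEXTTABLE(TblB, *, 'query');

SELECT *
FROM TblA
INNER JOIN DifferentDbCatalog.dbo.TblC ON TblA.ID = TblC.TblAID
LEFT JOIN #ft ON TblA.ID = #ft.[Key];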
A:
Index the field you use to perform the join.
A good rule of thumb is to assign an index to any commonly referenced foreign or candidate keys.
| Slow SQL Query due to inner and left join? | Can anyone explain this behavior or how to get around it?
If you execute this query:
select *
from TblA
left join freetexttable ( TblB, *, 'query' ) on TblA.ID = [Key]
inner join DifferentDbCatalog.dbo.TblC on TblA.ID = TblC.TblAID
It will be very very very slow.
If you change that query to use two inner joins instead of a left join, it will be very fast. If you change it to use two left joins instead of an inner join, it will be very fast.
You can observe this same behavior if you use a sql table variable instead of the freetexttable as well.
The performance problem arises any time you have a table variable (or freetexttable) and a table in a different database catalog where one is in an inner join and the other is in a left join.
Does anyone know why this is slow, or how to speed it up?
| [
"A general rule of thumb is that OUTER JOINs cause the number of rows in a result set to increase, while INNER JOINs cause the number of rows in a result set to decrease. Of course, there are plenty of scenarios where the opposite is true as well, but it's more likely to work this way than not. What you want to do for performance is keep the size of the result set (working set) as small as possible for as long as possible. \nSince both joins match on the first table, changing up the order won't effect the accuracy of the results. Therefore, you probably want to do the INNER JOIN before the LEFT JOIN:\nSELECT * \nFROM TblA\nINNER JOIN DifferentDbCatalog.dbo.TblC on TblA.ID = TblC.TblAID\nLEFT JOIN freetexttable ( TblB, *, 'query' ) on TblA.ID = [Key]\n\nAs a practical matter, the query optimizer should be smart enough to compile to use the faster option, regardless of which order you specified for the joins. However, it's good practice to pretend that you have a dumb query optimizer, and that query operations happen in order. This helps future maintainers spot potential errors or assumptions about the nature of the tables.\nBecause the optimizer should re-write things, this probably isn't good enough to fully explain the behavior you're seeing, so you'll still want to examine the execution plan used for each query, and probably add an index as suggested earlier. This is still a good principle to learn, though.\n",
"What you should usually do is turn on the \"Show Actual Execution Plan\" option and then take a close look at what is causing the slowdown. (hover your mouse over each join to see the details) You'll want to make sure that you are getting an index seek and not a table scan.\nI would assume what is happening is that SQL is being forced to pull everything from one table into memory in order to do one of the joins. Sometimes reversing the order that you join the tables will also help things.\n",
"Putting freetexttable(TblB, *, 'query') into a temp table may help if it's getting called repeatedly in the execution plan.\n",
"Index the field you use to perform the join.\nA good rule of thumb is to assign an index to any commonly referenced foreign or candidate keys.\n"
] | [
8,
4,
1,
0
] | [] | [] | [
"freetext",
"performance",
"sql_server"
] | stackoverflow_0000047104_freetext_performance_sql_server.txt |
Q:
Internationalized page properties in Tapestry 4.1.2
The login page in my Tapestry application has a property in which the password the user types in is stored, which is then compared against the value from the database. If the user enters a password with multi-byte characters, such as:
áéíóú
...an inspection of the return value of getPassword() (the abstract method for the corresponding property) gives:
áéÃóú
Clearly, that's not encoded properly. Yet Firebug reports that the page is served up in UTF-8, so presumably the form submission request would also be encoded in UTF-8. Inspecting the value as it comes from the database produces the correct string, so it wouldn't appear to be an OS or IDE encoding issue. I have not overridden Tapestry's default value for org.apache.tapestry.output-encoding in the .application file, and the Tapestry 4 documentation indicates that the default value for the property is UTF-8.
So why does Tapestry appear to botch the encoding when setting the property?
Relevant code follows:
Login.html
<html jwcid="@Shell" doctype='html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"' ...>
<body jwcid="@Body">
...
<form jwcid="@Form" listener="listener:attemptLogin" ...>
...
<input jwcid="password"/>
...
</form>
...
</body>
</html>
Login.page
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE page-specification
PUBLIC "-//Apache Software Foundation//Tapestry Specification 4.0//EN"
"http://jakarta.apache.org/tapestry/dtd/Tapestry_4_0.dtd">
<page-specification class="mycode.Login">
...
<property name="password" />
...
<component id="password" type="TextField">
<binding name="value" value="password"/>
<binding name="hidden" value="true"/>
...
</component>
...
</page-specification>
Login.java
...
public abstract class Login extends BasePage {
...
public abstract String getPassword();
...
public void attemptLogin() {
// At this point, inspecting getPassword() returns
// the incorrectly encoded String.
}
...
}
Updates
@Jan Soltis: Well, if I inspect the value that comes from the database, it displays the correct string, so it would seem that my editor, OS and database are all encoding the value correctly. I've also checked my .application file; it does not contain an org.apache.tapestry.output-encoding entry, and the Tapestry 4 documentation indicates that the default value for this property is UTF-8. I have updated the description above to reflect the answers to your questions.
@myself: Solution found.
A:
Everything seems to be correct.
Are you really sure getPassword() returns garbage? Isn't it someone else (your editor, OS, database,...) that doesn't know that it's a unicode string when it displays it to you while the password may be perfectly okay? What exactly makes you think it's a garbage?
I would also make sure there's no strange encoding set in the .application config file
<meta key="org.apache.tapestry.output-encoding" value="some strange encoding"/>
A:
I found the problem. Tomcat was mangling the parameters before Tapestry or my page class even had a crack at it. Creating a servlet filter that enforced the desired character encoding fixed it.
CharacterEncodingFilter.java
package mycode;
import java.io.IOException;
import javax.servlet.*;
/**
* Allows you to enforce a particular character encoding on incoming requests.
* @author Robert J. Walker
*/
public class CharacterEncodingFilter implements Filter {
private static final String ENCODINGPARAM = "encoding";
private String encoding;
public void init(FilterConfig config) throws ServletException {
encoding = config.getInitParameter(ENCODINGPARAM);
if (encoding != null) {
encoding = encoding.trim();
}
}
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
throws IOException, ServletException {
request.setCharacterEncoding(encoding);
chain.doFilter(request, response);
}
public void destroy() {
// do nothing
}
}
web.xml
<web-app>
...
<filter>
<filter-name>characterEncoding</filter-name>
<filter-class>mycode.CharacterEncodingFilter</filter-class>
<init-param>
<param-name>encoding</param-name>
<param-value>UTF-8</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>characterEncoding</filter-name>
<url-pattern>/app/*</url-pattern>
</filter-mapping>
...
</web-app>
| Internationalized page properties in Tapestry 4.1.2 | The login page in my Tapestry application has a property in which the password the user types in is stored, which is then compared against the value from the database. If the user enters a password with multi-byte characters, such as:
áéíóú
...an inspection of the return value of getPassword() (the abstract method for the corresponding property) gives:
áéÃóú
Clearly, that's not encoded properly. Yet Firebug reports that the page is served up in UTF-8, so presumably the form submission request would also be encoded in UTF-8. Inspecting the value as it comes from the database produces the correct string, so it wouldn't appear to be an OS or IDE encoding issue. I have not overridden Tapestry's default value for org.apache.tapestry.output-encoding in the .application file, and the Tapestry 4 documentation indicates that the default value for the property is UTF-8.
So why does Tapestry appear to botch the encoding when setting the property?
Relevant code follows:
Login.html
<html jwcid="@Shell" doctype='html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"' ...>
<body jwcid="@Body">
...
<form jwcid="@Form" listener="listener:attemptLogin" ...>
...
<input jwcid="password"/>
...
</form>
...
</body>
</html>
Login.page
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE page-specification
PUBLIC "-//Apache Software Foundation//Tapestry Specification 4.0//EN"
"http://jakarta.apache.org/tapestry/dtd/Tapestry_4_0.dtd">
<page-specification class="mycode.Login">
...
<property name="password" />
...
<component id="password" type="TextField">
<binding name="value" value="password"/>
<binding name="hidden" value="true"/>
...
</component>
...
</page-specification>
Login.java
...
public abstract class Login extends BasePage {
...
public abstract String getPassword();
...
public void attemptLogin() {
// At this point, inspecting getPassword() returns
// the incorrectly encoded String.
}
...
}
Updates
@Jan Soltis: Well, if I inspect the value that comes from the database, it displays the correct string, so it would seem that my editor, OS and database are all encoding the value correctly. I've also checked my .application file; it does not contain an org.apache.tapestry.output-encoding entry, and the Tapestry 4 documentation indicates that the default value for this property is UTF-8. I have updated the description above to reflect the answers to your questions.
@myself: Solution found.
| [
"Everything seems to be correct.\nAre you really sure getPassword() returns garbage? Isn't it someone else (your editor, OS, database,...) that doesn't know that it's a unicode string when it displays it to you while the password may be perfectly okay? What exactly makes you think it's a garbage?\nI would also make sure there's no strange encoding set in the .application config file\n<meta key=\"org.apache.tapestry.output-encoding\" value=\"some strange encoding\"/>\n\n",
"I found the problem. Tomcat was mangling the parameters before Tapestry or my page class even had a crack at it. Creating a servlet filter that enforced the desired character encoding fixed it.\nCharacterEncodingFilter.java\npackage mycode;\n\nimport java.io.IOException;\n\nimport javax.servlet.*;\n\n/**\n * Allows you to enforce a particular character encoding on incoming requests.\n * @author Robert J. Walker\n */\npublic class CharacterEncodingFilter implements Filter {\n private static final String ENCODINGPARAM = \"encoding\";\n\n private String encoding;\n\n public void init(FilterConfig config) throws ServletException {\n encoding = config.getInitParameter(ENCODINGPARAM);\n\n if (encoding != null) {\n encoding = encoding.trim();\n }\n }\n\n public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)\n throws IOException, ServletException {\n request.setCharacterEncoding(encoding);\n chain.doFilter(request, response);\n }\n\n public void destroy() {\n // do nothing\n }\n}\n\nweb.xml\n<web-app>\n ...\n <filter>\n <filter-name>characterEncoding</filter-name>\n <filter-class>mycode.CharacterEncodingFilter</filter-class>\n <init-param>\n <param-name>encoding</param-name>\n <param-value>UTF-8</param-value>\n </init-param>\n </filter>\n <filter-mapping>\n <filter-name>characterEncoding</filter-name>\n <url-pattern>/app/*</url-pattern>\n </filter-mapping>\n ...\n</web-app>\n\n"
] | [
2,
2
] | [] | [] | [
"internationalization",
"java",
"tapestry"
] | stackoverflow_0000043199_internationalization_java_tapestry.txt |
Q:
Is there a way to ensure entire code block execution in a .NET thread?
In my C# program, I have a thread that represents a running test, which can be aborted by a click on a button labeled "Interrupt execution". In order for the thread (and therefore the test) to terminate in an elegant manner (and do some vital work beforehand), this button is enabled only in some well-defined moments, in which I catch ThreadAbortedException, do Thread.ResetAbort() and die beautifully (that is, the thread).
The problem is that, in the time window in which aborting is possible, there are some tasks that need to be done from start to finish once initiated, and, so, I fear TAE. Locks don't provide a solution for this, and, although finally blocks do, I don't find it elegant to wrap important code in the following manner:
try {
} finally {
// vital code
}
However, I didn't find any other solution.
Is there another way to delay the interference of ThreadAbortException until the end of the block?
A:
The situations under which a ThreadAbortException will be generated can be complex. A better option might be to ask why you are using ThreadAbortExceptions at all.
A better pattern would be to simply use a flag to indicate to the running thread that it should abort, and then test the flag regularly while it is working. If it is set you can throw an exception at that point if you want.
With this pattern you are totally in control of when the thread will act on the abort, and don't have to worry about the critical operations being interrupted.
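A minimal sketch of that pattern (the work and cleanup methods are placeholders, not from your code):
public class TestRunner
{
    // volatile so the worker thread always sees the latest value
    private volatile bool _abortRequested;

    // Called on the UI thread by the "Interrupt execution" button.
    public void RequestAbort()
    {
        _abortRequested = true;
    }

    // Runs on the worker thread.
    public void Run()
    {
        while (!_abortRequested)
        {
            DoOneUnitOfWork();  // placeholder: each unit runs start-to-finish, uninterrupted
        }
        DoVitalCleanup();       // placeholder: always reached, never aborted mid-way
    }
}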
A:
Use Thread.BeginCriticalRegion()
Notifies a host that execution is about to enter a region of code in which the effects of a thread abort or unhandled exception might jeopardize other tasks in the application domain.
Thread.BeginCriticalRegion();

// do important stuff here

Thread.EndCriticalRegion();
| Is there a way to ensure entire code block execution in a .NET thread? | In my C# program, I have a thread that represents a running test, which can be aborted by a click on a button labeled "Interrupt execution". In order for the thread (and therefore the test) to terminate in an elegant manner (and do some vital work beforehand), this button is enabled only in some well-defined moments, in which I catch ThreadAbortedException, do Thread.ResetAbort() and die beautifully (that is, the thread).
The problem is that, in the time window in which aborting is possible, there are some tasks that need to be done from start to finish once initiated, and, so, I fear TAE. Locks don't provide a solution for this, and, although finally blocks do, I don't find it elegant to wrap important code in the following manner:
try {
} finally {
// vital code
}
However, I didn't find any other solution.
Is this another way to delay the interference of ThreadAbortException until the end of the block?
| [
"The situations under which a ThreadAbortException will be generated can be complex. A better option might be to ask why you are using ThreadAbortExceptions at all.\nA better pattern would be to simply use a flag to indicate to the running thread that it should abort, and then test the flag regularly while it is working. If it is set you can throw an exception at that point if you want.\nWith this pattern you are totally in control of when the thread will act on the abort, and don't have to worry about the critical operations being interrupted.\n",
"Use Thread.BeginCriticalRegion() \n\nNotifies a host that execution is about to enter a region of code in which the effects of a thread abort or unhandled exception might jeopardize other tasks in the application domain. \n\nThread.BeginCriticalRegion()\n\n//do important stuff here\n\nThread.EndCriticalRegion()\n\n"
] | [
4,
3
] | [] | [] | [
".net",
"multithreading"
] | stackoverflow_0000047144_.net_multithreading.txt |
Q:
How do you prevent SQL injection in LAMP applications?
Here are a few possibilities to get the conversation started:
Escape all input upon initialization.
Escape each value, preferably when generating the SQL.
The first solution is suboptimal, because you then need to unescape each value if you want to use it in anything other than SQL, like outputting it on a web page.
The second solution makes much more sense, but manually escaping each value is a pain.
I'm aware of prepared statements, however I find MySQLi cumbersome. Also, separating the query from the inputs concerns me, because although it's crucial to get the order correct it's easy to make a mistake, and thus write the wrong data to the wrong fields.
A:
Prepared statements are the best answer. You still need testing, because you can make mistakes!
See this question.
A:
as @Rob Walker states, parameterized queries are your best bet. If you're using the latest and greatest PHP, I'd highly recommend taking a look at PDO (PHP Data Objects). This is a native database abstraction library that has support for a wide range of databases (including MySQL of course) as well as prepared statements with named parameters.
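A short sketch of a PDO prepared statement with a named parameter (the DSN, credentials, and table are placeholders):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');

// The query and the input stay separate; the driver handles escaping.
$stmt = $pdo->prepare('SELECT id, name FROM users WHERE email = :email');
$stmt->execute(array(':email' => $_POST['email']));
$rows = $stmt->fetchAll();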
A:
I would go with using prepared statements. If you want to use prepared statements, you probably want to check out the PDO functions for PHP. Not only does this let you easily run prepared statements, it also lets you be a little more database agnostic by not calling functions that begin with mysql_, mysqli_, or pgsql_.
A:
PDO may be worth it some day, but it's just not there yet. It's a DBAL, and its strength is (supposedly) making it easier to switch between vendors. It's not really built to catch SQL injections.
Anyhow, you want to escape and sanitize your inputs; using prepared statements could be a good measure (I second that), although I believe it's much easier with, e.g., PHP's filter extension.
| How do you prevent SQL injection in LAMP applications? | Here are a few possibilities to get the conversation started:
Escape all input upon initialization.
Escape each value, preferably when generating the SQL.
The first solution is suboptimal, because you then need to unescape each value if you want to use it in anything other than SQL, like outputting it on a web page.
The second solution makes much more sense, but manually escaping each value is a pain.
I'm aware of prepared statements, however I find MySQLi cumbersome. Also, separating the query from the inputs concerns me, because although it's crucial to get the order correct it's easy to make a mistake, and thus write the wrong data to the wrong fields.
| [
"Prepared statements are the best answer. You have testing because you can make mistakes!\nSee this question.\n",
"as @Rob Walker states, parameterized queries are your best bet. If you're using the latest and greatest PHP, I'd highly recommend taking a look at PDO (PHP Data Objects). This is a native database abstraction library that has support for a wide range of databases (including MySQL of course) as well as prepared statements with named parameters.\n",
"I would go with using prepared statements. If you want to use prepared statements, you probably want to check out the PDO functions for PHP. Not only does this let you easily run prepared statements, it also lets you be a little more database agnostic by not calling functions that begin with mysql_, mysqli_, or pgsql_.\n",
"PDO may be worth it some day, but it's not just there yet. It's a DBAL and it's strengh is (supposedly) to make switching between vendors more easier. It's not really build to catch SQL injections.\nAnyhow, you want to escape and sanatize your inputs, using prepared statements could be a good measure (I second that). Although I believe it's much easier, e.g. by utilizing filter. \n"
] | [
10,
7,
1,
0
] | [
"I've always used the first solution because 99% of the time, variables in $_GET, $_POST, and $_COOKIE are never outputted to the browser. You also won't ever mistakenly write code with an SQL injection (unless you don't use quotes in the query), whereas with the second solution you could easily forget to escape one of your strings eventually.\nActually, the reason I've always done it that way was because all my sites had the magic_quotes setting on by default, and once you've written a lot of code using one of those two solutions, it takes a lot of work to change to the other one.\n"
] | [
-1
] | [
"lamp",
"mysql",
"php",
"sql"
] | stackoverflow_0000047087_lamp_mysql_php_sql.txt |
Q:
LINQ to SQL Mapping From Money to Double
I'm working with LINQ for the first time and wanted to get the Mapping to work when I have a money type in SQL, but my domain object property is of type double. How can I express this in the XML file, or in code so that the mapping does not throw the usual "invalid cast" exception?
A:
Slightly off topic, but this is something everyone should know when working with the Money type in SQLServer.
You don't want to use a double, you want to use a decimal.
A Double is a long float, and floating point arithmetic should never be used for financial calculations.
Think of this: the fractions 1/3, 1/3 and 1/3 sum to 1. However, when expressed as doubles, i.e.:
0.3333 + 0.3333 + 0.3333 = 0.9999, not 1.
You may think that losing a thousandth of a cent is trivial, but it's not when you're working with someone else's money.
Use System.Decimal.
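A quick illustration of the difference, as a sketch you can drop into a console app:
double d = 0.1 + 0.2;
decimal m = 0.1m + 0.2m;

Console.WriteLine(d == 0.3);   // False: binary floating point cannot represent 0.1 exactly
Console.WriteLine(m == 0.3m);  // True: decimal keeps exact base-10 digits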
A:
pretty sure Money maps to System.Decimal
Check here
A:
In the DBML XML file, you can set the Expression attribute of a Column element to something like this:
<Column Name="Table1.Amount" DbType="smallint" Type="System.Int32"
Expression="CAST(Table1.Amount as int)" />
| LINQ to SQL Mapping From Money to Double | I'm working with LINQ for the first time and wanted to get the Mapping to work when I have a money type in SQL, but my domain object property is of type double. How can I express this in the XML file, or in code so that the mapping does not throw the usual "invalid cast" exception?
| [
"Slightly off topic, but this is something everyone should know when working with the Money type in SQLServer.\nYou don't want to use a double, you want to use a decimal.\nA Double is a long float, and floating point arithmetic should never be used for financial calculations.\nThink of this, the fractions 1/3, 1/3 and 1/3 equal 1. However, when expressed as a double ie:\n.3333 + .3333 + .3333 = .9999 not 1.\nYou may think that losing a thousandth of a cent is trivial, but its not when your working with someone elses money.\nUse System.Decimal.\n",
"pretty sure Money maps to System.Decimal\nCheck here\n",
"In the DBML XML file, you can set the Expression attribute of a Column element to something like this:\n <Column Name=\"Table1.Amount\" DbType=\"smallint\" Type=\"System.Int32\" \n Expression=\"CAST(Table1.Amount as int)\" />\n\n"
] | [
3,
1,
1
] | [] | [] | [
"linq",
"linq_to_sql",
"orm"
] | stackoverflow_0000016265_linq_linq_to_sql_orm.txt |
Q:
Any good resources or advice for working with languages with different orientations? (such as Japanese or Chinese)
We have an enterprise web application where every bit of text in the system is localised to the user's browser's culture setting.
So far we have only supported English, American (similar but mis-spelt ;-) and French (for the Canadian Gov't - app in English or French depending on user preference). During development we also had some European languages in mind like Dutch and German that tend to concatenate words into very long ones.
We're currently investigating support for eastern languages: Chinese, Japanese, and so on. I understand that these use phonetic input converted to written characters. How does that work on the web? Do the same events fire while inputs and textareas are being edited (we're quite Ajax heavy).
What conventions do users of these top-down languages expect online?
What effect does their dual-input (phonetic typing + conversion) have on web controls?
With RTL languages like Arabic do users expect the entire interface to be mirrored? For instance should things like OK/Cancel buttons be swapped and on the left?
A:
As an Arabic speaker, when I do look at Arabic websites, I do expect things like OK/Cancel to be swapped.
When reading Arabic, my eyes read from right to left. So, in situations where you'd want to reader to view an affirmative/action button (e.g. OK, Submit, Yes, etc.) before a negative/inaction button (Cancel, Clear, No, etc.), you'd probably want to put the former on the right.
Caveat: As weird as it sounds, the above only applies (to me personally) when the button text is in Arabic. If the button text is in English (in a mixed-language web page), I'd prefer to see the OK button on the left.
Hope that helps.
A:
Read Globalization Step-by-Step by Microsoft.
I can answer the specifics on CJKV, but you probably want a book on this topic. I haven't read it but CJKV Information Processing is from O'Reilly (2nd ed due Dec, 2008).
I understand that these use phonetic input converted to written characters.
How does that work on the web?
The input is done by a class of software called an IME (Input Method Editor) on Windows, Mac, and Linux (e.g. SCIM). When an IME is turned on, the input from the keyboard first goes to the IME, and the user gets to pick the correct kanji/hiragana combo. When the user commits by hitting the return key, the IME types the kanji/hiragana into the web browser using the current encoding. The encoding situation was a big mess, but if you are writing a web app, go with an encoding of Unicode. I suggest UTF-8.
Do the same events fire while inputs and textareas are being edited?
A Unicode-savvy web browser and OS combo handles multiple languages. For example, one can use the normal English version of Firefox to browse and post to a Japanese website. From the browser's point of view, it's just an array of "bla bla bla" in Unicode. In other words, if the event fires in English, the same event should fire in CJKV if you use a Unicode variant.
What conventions do users of these top-down languages expect online?
CJKV readers expect left-to-right online. Math and science textbooks are written left-to-right. Most word processors, including localized versions of Word, write left-to-right.
What effect does their dual-input (phonetic typing + conversion) have on web controls?
For the most part you should not have to worry about it, unless you are trapping keyboard events. For example, I hate using a Japanese keyboard with its bunch of extra keys, so I assign the IME on/off command to some key on a US keyboard; I personally use right-Alt. Also, the spacebar and enter key are used during conversion, but I'm not sure if these events are passed to the browser.
A:
The directionality question is easy to answer for East Asian languages: websites are left-to-right, top-to-bottom as per usual.
In fact, the general web design layout principles much the same. Have a look at the websites of a newspaper (name top left, navigation bar under with "Home" on the left, headline links below with most important at the top) or a search engine (don't think I need to say which US site you should compare that layout to).
However, just as Arabic/Hebrew/etc right-to-left language users will expect left-to-right progression in some contexts (embedded English fragments and so on), there are situations, even on the web, where top-to-bottom layout is preferred. This is generally done by including an image with the text layout and font desired, or using flash.
Internet Explorer has actually offered tb-rl layout with the CSS writing-mode property since version 5.5; however, none of the other browsers have bothered implementing it (or ruby, which is useful for sites aimed at a young audience). IE 5.5 was released in 2000, so that's eight years of support, and there was a W3C candidate recommendation in 2003, but vertical text layout in CSS is still being poked around.
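For reference, the IE-only syntax was simply this (other browsers of the time ignored it):
.vertical-text {
    writing-mode: tb-rl; /* top-to-bottom lines, progressing right-to-left */
}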
As for your worries with text input and IMEs, as long as you're not doing something bogus like trying to manually translate the virtual keys given by keydown events into text strings, you're unlikely to run into problems.
There are some additional issues you've not mentioned, however. The minimum comfortably readable font size is larger than for languages written with the Latin script. Bold and italic for emphasis in flow are generally not appropriate. Han unification means you need to be picky about specifying the right fonts for the different CJK languages when working with Unicode. You may want to provide both traditional and simplified interfaces for Chinese, depending on what audience you are expecting.
I've been meaning to write up a more comprehensive guide along these lines for a while, if you need more information feel free to kick me.
| Any good resources or advice for working with languages with different orientations? (such as Japanese or Chinese) | We have an enterprise web application where every bit of text in the system is localised to the user's browser's culture setting.
So far we have only supported English, American (similar but mis-spelt ;-) and French (for the Canadian Gov't - app in English or French depending on user preference). During development we also had some European languages in mind like Dutch and German that tend to concatenate words into very long ones.
We're currently investigating support for eastern languages: Chinese, Japanese, and so on. I understand that these use phonetic input converted to written characters. How does that work on the web? Do the same events fire while inputs and textareas are being edited (we're quite Ajax heavy).
What conventions do users of these top-down languages expect online?
What effect does their dual-input (phonetic typing + conversion) have on web controls?
With RTL languages like Arabic do users expect the entire interface to be mirrored? For instance should things like OK/Cancel buttons be swapped and on the left?
| [
"As an Arabic speaker, when I do look at Arabic websites, I do expect things like OK/Cancel to be swapped.\nWhen reading Arabic, my eyes read from right to left. So, in situations where you'd want to reader to view an affirmative/action button (e.g. OK, Submit, Yes, etc.) before a negative/inaction button (Cancel, Clear, No, etc.), you'd probably want to put the former on the right.\nCaveat: As weird as it sounds, the above only applies (to me personally) when the button text is in Arabic. If the button text is in English (in a mixed-language web page), I'd prefer to see the OK button on the left.\nHope that helps.\n",
"Read Globalization Step-by-Step by Microsoft.\nI can answer the specifics on CJKV, but you probably want a book on this topic. I haven't read it but CJKV Information Processing is from O'Reilly (2nd ed due Dec, 2008).\n\nI understand that these use phonetic input converted to written characters.\nHow does that work on the web?\n\nThe input is done by a class of software called an IME (Input Method Editor) on Windows, Mac, and Linux (e.g. SCIM). When an IME is turned on, the input from the keyboard first goes to the IME, and the user gets to pick the correct kanji/hiragana combo. When the user commits by hitting return key, the IME types in the kanji/hiragana into the web browser using the current encoding. Encoding situation was a big mess, but if you are writing a web app, go with an encoding of Unicode. I suggest UTF-8.\n\nDo the same events fire while inputs and textareas are being edited?\n\nA Unicode savvy web browser and OS combo handles multiple languages. For example, one can use English normal version of Firefox to browse and post to a Japanese website. From the browsers point of view, it's just an array of \"bla bla bla\" in Unicode. In other words, if the event fires up in English, the same event should fire up in CJKV if you use a Unicode variant.\n\nWhat conventions do users of these top-down languages expect online?\n\nCJKV readers expect left-to-right online. Math and science textbooks are written from left-to-right. Most word processors, including localized version of Word, write left-to-right.\n\nWhat effect does their dual-input (phonetic typing + conversion) have on web controls?\n\nFor the most part you should not have to worry about it, unless you are trapping keyboard events. For example, I hate using Japanese keyboard with bunch of extra keyboard. So, when I have to assign IME on/off command to some key on US keyboard. I personally use right-Alt. Also, spacebar and enter key is used during conversion, but not sure if these events are passed to browser.\n",
"The directionality question is easy to answer for East Asian languages: websites are left-to-right, top-to-bottom as per usual.\nIn fact, the general web design layout principles much the same. Have a look at the websites of a newspaper (name top left, navigation bar under with \"Home\" on the left, headline links below with most important at the top) or a search engine (don't think I need to say which US site you should compare that layout to).\nHowever, just as Arabic/Hebrew/etc right-to-left language users will expect left-to-right progression in some contexts (embedded English fragments and so on), there are situations, even on the web, where top-to-bottom layout is preferred. This is generally done by including an image with the text layout and font desired, or using flash.\nInternet Explorer has actually offered tb-rl layout with the CSS writing-mode property since version 5.5 however none of the other browsers have bothered implementing it (or ruby, which is useful for sites aimed at a young audience). IE 5.5 was released in 2000, so that's eight years of support, and there was a W3C candidate recommendation in 2003 but text layout in CSS still being poked around.\nAs for your worries with text input and IMEs, as long as you're not doing something bogus like trying to manually translate the virtual keys given by keydown events into text strings, you're unlikely to run into problems.\nThere are some additional issues you've not mentioned however. The minimum comfortably readable font size is larger than for languages written with the Latin script. Bold and italic for emphasis in flow are generally not appropriate. Han unification means to need to be picky about specifying the right fonts for the different CJK languages when working with unicode. You may want to provide both traditional and simplified interfaces for Chinese, depending on what audience you are expecting.\nI've been meaning to write up a more comprehensive guide along these lines for a while, if you need more information feel free to kick me.\n"
] | [
8,
6,
2
] | [] | [] | [
"internationalization",
"multilingual"
] | stackoverflow_0000013279_internationalization_multilingual.txt |
Q:
Is there an event that triggers if the number of ListViewItems in a ListView changes? (Windows Forms)
I'd like to enable/disable some other controls based on how many items are in my ListView control. I can't find any event that would do this, either on the ListView itself or on the ListViewItemCollection. Maybe there's a way to generically watch any collection in C# for changes?
I'd be happy with other events too, even ones that sometimes fire when the items don't change, but for example the ControlAdded and Layout events didn't work :(.
A:
@Domenic
Not too sure, Never quite got that far in the thought process.
Another solution might be to extend ListView, and when adding and removing stuff, instead of calling .items.add, and items.remove, you call your other functions. It would still be possible to add and remove without events being raised, but with a little code review to make sure .items.add and .items.remove weren't called directly, it could work out quite well. Here's a little example. I only showed 1 Add function, but there are 6 you would have to implement, if you wanted to have use of all the available add functions. There's also .AddRange, and .Clear that you might want to take a look at.
Public Class MonitoredListView
Inherits ListView
Public Event ItemAdded()
Public Event ItemRemoved()
Public Sub New()
MyBase.New()
End Sub
Public Function AddItem(ByVal Text As String) As ListViewItem
    'Add first, raise the event, and actually return the new item
    Dim Item As ListViewItem = MyBase.Items.Add(Text)
    RaiseEvent ItemAdded()
    Return Item
End Function

Public Sub RemoveItem(ByVal Item As ListViewItem)
    MyBase.Items.Remove(Item)
    RaiseEvent ItemRemoved()
End Sub
End Class
A:
I can't find any events that you could use. Perhaps you could subclass ListViewItemCollection, and raise your own event when something is added, with code similar to this.
Public Class MyListViewItemCollection
Inherits ListView.ListViewItemCollection
Public Event ItemAdded(ByVal Item As ListViewItem)
Sub New(ByVal owner As ListView)
MyBase.New(owner)
End Sub
Public Overrides Function Add(ByVal value As System.Windows.Forms.ListViewItem) As System.Windows.Forms.ListViewItem
Dim Item As ListViewItem
Item = MyBase.Add(value)
RaiseEvent ItemAdded(Item)
Return Item
End Function
End Class
A:
I think the best thing that you can do here is to subclass ListView and provide the events that you want.
| Is there an event that triggers if the number of ListViewItems in a ListView changes? (Windows Forms) | I'd like to enable/disable some other controls based on how many items are in my ListView control. I can't find any event that would do this, either on the ListView itself or on the ListViewItemCollection. Maybe there's a way to generically watch any collection in C# for changes?
I'd be happy with other events too, even ones that sometimes fire when the items don't change, but for example the ControlAdded and Layout events didn't work :(.
| [
"@Domenic\nNot too sure, Never quite got that far in the thought process. \nAnother solution might be to extend ListView, and when adding and removing stuff, instead of calling .items.add, and items.remove, you call your other functions. It would still be possible to add and remove without events being raised, but with a little code review to make sure .items.add and .items.remove weren't called directly, it could work out quite well. Here's a little example. I only showed 1 Add function, but there are 6 you would have to implement, if you wanted to have use of all the available add functions. There's also .AddRange, and .Clear that you might want to take a look at.\nPublic Class MonitoredListView\n Inherits ListView\n\n Public Event ItemAdded()\n Public Event ItemRemoved()\n\n Public Sub New()\n MyBase.New()\n End Sub\n\n Public Function AddItem(ByVal Text As String) As ListViewItem\n RaiseEvent ItemAdded()\n\n MyBase.Items.Add(Text)\n End Function\n\n Public Sub RemoveItem(ByVal Item As ListViewItem)\n RaiseEvent ItemRemoved()\n\n MyBase.Items.Remove(Item)\n End Sub\n\nEnd Class\n\n",
"I can't find any events that you could use. Perhaps you could subclass ListViewItemCollection, and raise your own event when something is added, with code similar to this.\nPublic Class MyListViewItemCollection\n Inherits ListView.ListViewItemCollection\n\n Public Event ItemAdded(ByVal Item As ListViewItem)\n\n Sub New(ByVal owner As ListView)\n MyBase.New(owner)\n End Sub\n\n Public Overrides Function Add(ByVal value As System.Windows.Forms.ListViewItem) As System.Windows.Forms.ListViewItem\n Dim Item As ListViewItem\n\n Item = MyBase.Add(value)\n\n RaiseEvent ItemAdded(Item)\n\n Return Item\n End Function\nEnd Class\n\n",
"I think the best thing that you can do here is to subclass ListView and provide the events that you want.\n"
] | [
2,
0,
0
] | [] | [] | [
"events",
"listview",
"winforms"
] | stackoverflow_0000047169_events_listview_winforms.txt |
Q:
How to set the order in subnodes of a tree structure
I have a tree representation of pages in a CMS application. I understand how to persist the tree in the database. However, I don't have a good way to:
A) Reorder subpages under a particular parent page.
B) Provide a UI implementation that allows the user to change the order.
Any suggestions?
A:
A) I have a similar CMS app and I store an ordinal value with the page for a particular tree and sort on this value -- because lots of my pages appear in completely different sites I have to maintain the ordinal number against a page / tree combination.
B) I too would like a better way to do this. Currently they click on the node in the treeview, and in the main page screen they can move the page around. I've tried drag and drop with JavaScript and other solutions, but my users could never work with it without lots of hand holding. I'll be interested in the responses to this one.
A:
Changing the order itself will require you store some sort of ordering along with each page in the database. Just the current highest / lowest value +/- 1 would probably be a fine starting point. Once you've got that ordering in there, reordering becomes a case of swapping two values or changing the value for one page to be between two others (you could use floats I guess, but you may need to renumber if you split it too many times).
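As a sketch of that swap at the database level (the table and column names are hypothetical):
-- Swap the ordinals of two sibling pages in a single atomic statement.
UPDATE Pages
SET Ordinal = CASE Id
                  WHEN @PageA THEN @OrdinalB
                  WHEN @PageB THEN @OrdinalA
              END
WHERE Id IN (@PageA, @PageB);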
Anyway, once you've got that, you need a UI. I've seen a very simple 'swap this with the one above/below' approach, which can be a simple web link or an AJAX call. You could also present all the page values to the user and ask them to renumber them as they see fit. If you want to get fancy, JavaScript drag and drop might be a good approach. I've used ExtJS and Mootools as frameworks in this kind of area. If you don't need all the ExtJS widgets, I'd say stay well away from it in future, and look at something like the Mootools Dynamic Sortables demo.
| How to set the order in subnodes of a tree structure | I have a tree representation of pages in a CMS application. I understand how to persist the tree in the database. However, I don't have a good way to:
A) Reorder subpages under a particular parent page.
B) Provide a UI implementation that allows the user to change the order.
Any suggestions?
| [
"A) I have a similar CMS app and I store an ordinal value with the page for a particular tree and sort on this value -- because lots of my pages appear in completely different sites I have to maintain the ordinal number against a page / tree combination.\nB) I too would like a better way to do this. Currently they click on the node in the treeview and in the main page screen they can move the page around. I've tried drag and drop with java script and other solutions but my users could never work with it without lots of hand holding. I'll be interested in the responses to this one.\n",
"Changing the order itself will require you store some sort of ordering along with each page in the database. Just the current highest / lowest value +/- 1 would probably be a fine starting point. Once you've got that ordering in there, reordering becomes a case of swapping two values or changing the value for one page to be between two others (you could use floats I guess, but you may need to renumber if you split it too many times).\nAnyway, once you've got that, you need a UI. I've seen a very simple 'swap this with the one above/below' approach which can be a simple web link or an AJAX call. You could also present all the page values to the user and ask them to renumber them as they see fit. If you want to get fancy, JavaScript drag and drop might be a good approach. I've used ExtJS and Mootools as frameworks in this kind of area. If you don't need all the Extjs widgets, I'd say well away from it in future, and look at something like the Mootools Dynamic Sortables demo.\n"
] | [
0,
0
] | [] | [] | [
"tree",
"treeview"
] | stackoverflow_0000047163_tree_treeview.txt |
Q:
SQL Server, nvarchar(MAX) or ntext, image or varbinary?
When should I choose one or the other? What are the implications regarding space and (full-text) indexing?
BTW: I'm currently using SQL Server 2005, planning to upgrade to 2008 in the following months.
Thanks
A:
The new (max) fields make it a lot easier to deal with the data from .NET code. With varbinary(max), you simply set the value of a SqlParameter to a byte array and you are done. With the image field, you need to write a few hundred lines of code to stream the data into and out of the field.
Also, the image/text fields are deprecated in favor of varbinary(max) and varchar(max), and future versions of Sql Server will discontinue support for them.
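A sketch of what that looks like (the table and variable names are made up; passing -1 as the size maps the parameter to varbinary(max)):
using (SqlCommand cmd = new SqlCommand(
    "UPDATE Documents SET Data = @data WHERE Id = @id", connection))
{
    cmd.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = fileBytes;
    cmd.Parameters.AddWithValue("@id", documentId);
    cmd.ExecuteNonQuery();
}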
A:
Once you put it in the blob, it's going to be difficult to be used for normal SQL comparison. See Using Large-Value Data Types.
| SQL Server, nvarchar(MAX) or ntext, image or varbinary? | When should I choose one or the other? What are the implications regarding space and (full-text) indexing?
BTW: I'm currently using SQL Server 2005 planing to upgrade to 2008 in the following months.
Thanks
| [
"The new (max) fields make it a lot easier to deal with the data from .NET code. With varbinary(max), you simply set the value of a SqlParameter to a byte array and you are done. WIth the image field, you need to write a few hundred lines of code to stream the data into and out of the field.\nAlso, the image/text fields are deprecated in favor of varbinary(max) and varchar(max), and future versions of Sql Server will discontinue support for them.\n",
"Once you put it in the blob, it's going to be difficult to be used for normal SQL comparison. See Using Large-Value Data Types.\n"
] | [
13,
3
] | [] | [] | [
"sql_server",
"sql_server_2005",
"sql_types",
"tsql"
] | stackoverflow_0000047203_sql_server_sql_server_2005_sql_types_tsql.txt |
Q:
Exception in Web Service locks DLL and prevents publishing. Workaround?
I'm using a native DLL (FastImage.dll) in a C# ASP.NET Web Service that sometimes locks (can't delete it---says access denied); this requires stopping IIS to delete the DLL. The inability to delete this DLL in the bin folder of my published Web Service prevents me from publishing successfully (even though it thinks it published successfully!), which makes development and fixing the bug difficult (especially when it just happily runs old code since my changes may not be reflected on the server!). Note that the bug causing the Web Service to bomb and lock up the DLL is in my code, which is outside of said DLL, so I think this is a more general problem than just the FreeImage library (not to bring them any heat).
Has anyone experienced this?
Is there a way to make sure that when it says "Publish succeeded" from the VS IDE that it really means it, or to run some sort of script to check that the files are really deleted before it attempts to publish (like a pre-build step in VC++)? (Right now I manually delete the files before publishing to make sure that I know the DLLs were replaced, instead of running old DLLs. It's still a problem, though, if I can't delete the DLL.)
How would I detect whether a file was successfully deleted from a batch file? (so I can stop and start IIS if it fails)
Is it possible to stop and start IIS from a script (preferably from the Publish... action in the VS IDE) and if so, how?
A:
Using the IISReset command line tool will only restart IIS on the local machine, not on a remote server to which you are publishing.
Assuming that you are publishing to a Windows 2003 server, I'd suggest trying the slightly less drastic step of stopping and restarting the IIS AppPool in the web site or virtual folder in which the web service runs. (That way you are not taking all sites that run on the target server offline.) This too assumes that the web service runs in its own app pool. Ideally it should, so you keep it isolated.
I'd recommend getting away from using the Publishing process and to look into using a Web Deployment Project. Here is a post on ScottGu's blog detailing VS 2005 Web Deployment Projects.
The advantage to the Web Deployment Project approach is that it provides you with all the power and capability of MSbuild, as it is really just a convenience wrapper around MSBuild. Here's a post from the MSBuild team about pre-build and post-build capabilities
Hope this helps.
A:
You could use the IISReset command line tool to stop/restart iis. So you could write a simple batch file to stop iis, copy your files, and then restart iis. I'm not sure how to integrate this with the VS publish feature however.
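A rough sketch of that batch approach, including the deletion check asked about above (the path is hypothetical, and iisreset only affects the local machine):
@echo off
rem Hypothetical path - point this at your service's bin folder.
set DLL=C:\Inetpub\MyService\bin\FastImage.dll

del "%DLL%" 2>nul
if exist "%DLL%" (
    rem Still there, so it is locked - bounce IIS and try again.
    iisreset /stop
    del "%DLL%"
    iisreset /start
)
if exist "%DLL%" (
    echo Could not delete %DLL% even after restarting IIS.
    exit /b 1
)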
| Exception in Web Service locks DLL and prevents publishing. Workaround? | I'm using a native DLL (FastImage.dll) in a C# ASP.NET Web Service that sometimes locks (can't delete it---says access denied); this requires stopping IIS to delete the DLL. The inability to delete this DLL in the bin folder of my published Web Service prevents me from publishing successfully (even though it thinks it published successfully!), which makes development and fixing the bug difficult (especially when it just happily runs old code since my changes may not be reflected on the server!). Note that the bug causing the Web Service to bomb and lock up the DLL is in my code, which is outside of said DLL, so I think this is a more general problem than just the FreeImage library (not to bring them any heat).
Has anyone experienced this?
Is there a way to make sure that when it says "Publish succeeded" from the VS IDE that it really means it, or to run some sort of script to check that the files are really deleted before it attempts to Publish (like a pre-build step in VC++). (Right now I manually delete the files before publishing to make sure that I know the DLLs were replaced, instead of running old DLLs. It's still a problem, though, if I can't delete the DLL.)
How would I detect whether a file was successfully deleted from a batch file? (so I can stop and start IIS if it fails)
Is it possible to stop and start IIS from a script (preferably from the Publish... action in the VS IDE) and if so, how?
| [
"Using the IISReset command line tool will only restart IIS on the local machine, not on a remote server to which you are publishing.\nAssuming that you are publishing to a Windows 2003 server, I'd suggest trying the slightly less drastic step of stopping and restarting the IIS AppPool in the web site or virtual folder in which the web service runs. (That way you are not taking all sites that run on the target server offline.) This too assumes that the web service runs in its own app pool. Ideally it should, so you keep it isolated.\nI'd recommend getting away from using the Publishing process and to look into using a Web Deployment Project. Here is a post on ScottGu's blog detailing VS 2005 Web Deployment Projects.\nThe advantage to the Web Deployment Project approach is that it provides you with all the power and capability of MSbuild, as it is really just a convenience wrapper around MSBuild. Here's a post from the MSBuild team about pre-build and post-build capabilities\nHope this helps.\n",
"You could use the IISReset command line tool to stop/restart iis. So you could write a simple batch file to stop iis, copy your files, and then restart iis. I'm not sure how to integrate this with the VS publish feature however.\n"
] | [
2,
0
] | [] | [] | [
"asp.net",
"c#",
"iis",
"visual_studio",
"web_services"
] | stackoverflow_0000035479_asp.net_c#_iis_visual_studio_web_services.txt |
Q:
Storing Windows passwords
I'm writing (in C# with .NET 3.5) an administrative application which will poll multiple Windows systems for various bits of data. In many cases it will use WMI, but in some cases it may need to read remote registry or remotely execute some command or script on the polled system. This polling will happen at repeating intervals - usually nightly, but can be configured to happen more (or less) frequently. So the poll could happen as often as every 10 minutes or as rarely as once a month. It needs to happen in an automated way, without any human intervention.
These functions will require admin-level access to the polled systems. Now, I expect that in most use cases, there will be a domain, and the polling application can run as a service with Domain Admin (or equivalent) privileges, which means I do not have to worry about storing passwords - the admin setting up the app will define the service's username/password via standard Windows mechanisms.
But there are always a few black sheep out there. The program may run in nondomain environments, or in cases where some polled systems are not members of the domain. In these cases we will have to define a username and password, store them securely, then invoke this user/pass pair at the time we poll that system. So keep in mind - in this case the program being written is the user who sends the password to the authenticating system.
I am not sure whether I will need to use a reversible hash which I then decrypt to plaintext at time of use, or if there is some Windows mechanism which would allow me to store and then reuse the hash only. Obviously the second mechanism is preferable; I'd like my program to either never know the password's plaintext value, or know it for the shortest amount of time possible.
I need suggestions for smart and secure ways to accomplish this.
Thanks for looking!
A:
The answer is here:
How to store passwords in Winforms application?
A:
Well it seems that your program needs to impersonate a user other than the context under which it is already running. Although, it does look like a pretty automated process, but if it's not, can you simply not ask the administrator to put in username and password at the time this 'black-sheep' computer is being polled?
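Neither answer covers the non-domain case directly, where the password has to be recoverable (a hash alone cannot be replayed against the remote machine). One Windows mechanism for that is DPAPI via the ProtectedData class (in System.Security.dll), which encrypts under the service account's credentials; a minimal sketch, with a local variable password assumed to hold the plaintext string:
using System;
using System.Security.Cryptography;
using System.Text;

byte[] plain = Encoding.UTF8.GetBytes(password);
byte[] entropy = { 12, 34, 56, 78 };          // optional app-specific secret
byte[] cipher = ProtectedData.Protect(plain, entropy,
                                      DataProtectionScope.CurrentUser);
Array.Clear(plain, 0, plain.Length);          // scrub the plaintext copy
// ...persist cipher; later, running under the same account:
byte[] recovered = ProtectedData.Unprotect(cipher, entropy,
                                           DataProtectionScope.CurrentUser);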
| Storing Windows passwords | I'm writing (in C# with .NET 3.5) an administrative application which will poll multiple Windows systems for various bits of data. In many cases it will use WMI, but in some cases it may need to read remote registry or remotely execute some command or script on the polled system. This polling will happen at repeating intervals - usually nightly, but can be configured to happen more (or less) frequently. So the poll could happen as often as every 10 minutes or as rarely as once a month. It needs to happen in an automated way, without any human intervention.
These functions will require admin-level access to the polled systems. Now, I expect that in most use cases, there will be a domain, and the polling application can run as a service with Domain Admin (or equivalent) privileges, which means I do not have to worry about storing passwords - the admin setting up the app will define the service's username/password via standard Windows mechanisms.
But there are always a few black sheep out there. The program may run in nondomain environments, or in cases where some polled systems are not members of the domain. In these cases we will have to define a username and password, store them securely, then invoke this user/pass pair at the time we poll that system. So keep in mind - in this case the program being written is the user who sends the password to the authenticating system.
I am not sure whether I will need to use a reversible hash which I then decrypt to plaintext at time of use, or if there is some Windows mechanism which would allow me to store and then reuse the hash only. Obviously the second mechanism is preferable; I'd like my program to either never know the password's plaintext value, or know it for the shortest amount of time possible.
I need suggestions for smart and secure ways to accomplish this.
Thanks for looking!
| [
"The answer is here:\nHow to store passwords in Winforms application?\n",
"Well it seems that your program needs to impersonate a user other than the context under which it is already running. Although, it does look like a pretty automated process, but if it's not, can you simply not ask the administrator to put in username and password at the time this 'black-sheep' computer is being polled?\n"
] | [
6,
0
] | [] | [] | [
".net",
"c#",
"passwords",
"security",
"windows"
] | stackoverflow_0000047262_.net_c#_passwords_security_windows.txt |
Q:
Alignment restrictions for malloc()/free()
Older K&R (2nd ed.) and other C-language texts I have read that discuss the implementation of a dynamic memory allocator in the style of malloc() and free() usually also mention, in passing, something about data type alignment restrictions. Apparently certain computer hardware architectures (CPU, registers, and memory access) restrict how you can store and address certain value types. For example, there may be a requirement that a 4 byte (long) integer must be stored beginning at addresses that are multiples of four.
What restrictions, if any, do major platforms (Intel & AMD, SPARC, Alpha) impose for memory allocation and memory access, or can I safely ignore aligning memory allocations on specific address boundaries?
A:
Sparc, MIPS, Alpha, and most other "classical RISC" architectures only allow aligned accesses to memory, even today. An unaligned access will cause an exception, but some operating systems will handle the exception by copying from the desired address in software using smaller loads and stores. The application code won't know there was a problem, except that the performance will be very bad.
MIPS has special instructions (lwl and lwr) which can be used to access 32 bit quantities from unaligned addresses. Whenever the compiler can tell that the address is likely unaligned it will use this two instruction sequence instead of a normal lw instruction.
x86 can handle unaligned memory accesses in hardware without an exception, but there is still a performance hit of up to 3X compared to aligned accesses.
Ulrich Drepper wrote a comprehensive paper on this and other memory-related topics, What Every Programmer Should Know About Memory. It is a very long writeup, but filled with chewy goodness.
A:
Alignment is still quite important today. Some processors (the 68k family jumps to mind) would throw an exception if you tried to access a word value on an odd boundary. Today, most processors will run two memory cycles to fetch an unaligned word, but this will definitely be slower than an aligned fetch. Some other processors won't even throw an exception, but will fetch an incorrect value from memory!
If for no other reason than performance, it is wise to try to follow your processor's alignment preferences. Usually, your compiler will take care of all the details, but if you're doing anything where you lay out the memory structure yourself, then it's worth considering.
A:
You still need to be aware of alignment issues when laying out a class or struct in C(++). In these cases the compiler will do the right thing for you, but the overall size of the struct/class may be more wasteful than necessary.
For example:
struct
{
char A;
int B;
char C;
int D;
};
Would have a size of 4 * 4 = 16 bytes (assume Windows on x86) whereas
struct
{
char A;
char C;
int B;
int D;
};
Would have a size of 4*3 = 12 bytes.
This is because the compiler enforces a 4 byte alignment for integers, but only 1 byte for chars.
In general pack member variables of the same size (type) together to minimize wasted space.
A:
As Greg mentioned it is still important today (perhaps more so in some ways) and compilers usually take care of the alignment based on the target of the architecture. In managed environments, the JIT compiler can optimize the alignment based on the runtime architecture.
You may see pragma directives (in C/C++) that change the alignment. This should only be used when very specific alignment is required.
// For example, this changes the pack to 2 byte alignment.
#pragma pack(2)
A:
Note that even on IA-32 and the AMD64, some of the SSE instructions/intrinsics require aligned data. These instructions will throw an exception if the data is unaligned, so at least you won't have to debug "wrong data" bugs. There are equivalent unaligned instructions as well, but like Denton says, they're slower.
If you're using VC++, then besides the #pragma pack directives, you also have the __declspec(align) directives for precise alignment. VC++ documentation also mentions an _aligned_malloc function for specific alignment requirements.
As a rule of thumb, unless you are moving data across compilers/languages or are using the SSE instructions, you can probably ignore alignment issues.
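To make the mechanics concrete, here is a minimal C sketch of the over-allocate-and-round-up trick that functions like _aligned_malloc use internally (align must be a power of two):
#include <stdint.h>
#include <stdlib.h>

/* Allocate size bytes whose address is a multiple of align. */
void *aligned_malloc(size_t size, size_t align)
{
    void *raw = malloc(size + align - 1 + sizeof(void *));
    if (raw == NULL)
        return NULL;
    uintptr_t addr = (uintptr_t)raw + sizeof(void *);
    addr = (addr + align - 1) & ~(uintptr_t)(align - 1); /* round up */
    ((void **)addr)[-1] = raw;  /* stash the pointer malloc returned */
    return (void *)addr;
}

void aligned_free(void *p)
{
    if (p != NULL)
        free(((void **)p)[-1]); /* free the original block */
}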
| Alignment restrictions for malloc()/free() | Older K&R (2nd ed.) and other C-language texts I have read that discuss the implementation of a dynamic memory allocator in the style of malloc() and free() usually also mention, in passing, something about data type alignment restrictions. Apparently certain computer hardware architectures (CPU, registers, and memory access) restrict how you can store and address certain value types. For example, there may be a requirement that a 4 byte (long) integer must be stored beginning at addresses that are multiples of four.
What restrictions, if any, do major platforms (Intel & AMD, SPARC, Alpha) impose for memory allocation and memory access, or can I safely ignore aligning memory allocations on specific address boundaries?
| [
"Sparc, MIPS, Alpha, and most other \"classical RISC\" architectures only allow aligned accesses to memory, even today. An unaligned access will cause an exception, but some operating systems will handle the exception by copying from the desired address in software using smaller loads and stores. The application code won't know there was a problem, except that the performance will be very bad.\nMIPS has special instructions (lwl and lwr) which can be used to access 32 bit quantities from unaligned addresses. Whenever the compiler can tell that the address is likely unaligned it will use this two instruction sequence instead of a normal lw instruction.\nx86 can handle unaligned memory accesses in hardware without an exception, but there is still a performance hit of up to 3X compared to aligned accesses.\nUlrich Drepper wrote a comprehensive paper on this and other memory-related topics, What Every Programmer Should Know About Memory. It is a very long writeup, but filled with chewy goodness.\n",
"Alignment is still quite important today. Some processors (the 68k family jumps to mind) would throw an exception if you tried to access a word value on an odd boundary. Today, most processors will run two memory cycles to fetch an unaligned word, but this will definitely be slower than an aligned fetch. Some other processors won't even throw an exception, but will fetch an incorrect value from memory!\nIf for no other reason than performance, it is wise to try to follow your processor's alignment preferences. Usually, your compiler will take care of all the details, but if you're doing anything where you lay out the memory structure yourself, then it's worth considering.\n",
"You still need to be aware of alignment issues when laying out a class or struct in C(++). In these cases the compiler will do the right thing for you, but the overall size of the struct/class may be more wastefull than necessary\nFor example:\nstruct\n{ \n char A;\n int B;\n char C;\n int D;\n};\n\nWould have a size of 4 * 4 = 16 bytes (assume Windows on x86) whereas\nstruct\n{ \n char A;\n char C;\n int B;\n int D;\n};\n\nWould have a size of 4*3 = 12 bytes.\nThis is because the compiler enforces a 4 byte alignment for integers, but only 1 byte for chars.\nIn general pack member variables of the same size (type) together to minimize wasted space.\n",
"As Greg mentioned it is still important today (perhaps more so in some ways) and compilers usually take care of the alignment based on the target of the architecture. In managed environments, the JIT compiler can optimize the alignment based on the runtime architecture.\nYou may see pragma directives (in C/C++) that change the alignment. This should only be used when very specific alignment is required.\n// For example, this changes the pack to 2 byte alignment.\n#pragma pack(2)\n\n",
"Note that even on IA-32 and the AMD64, some of the SSE instructions/intrinsics require aligned data. These instructions will throw an exception if the data is unaligned, so at least you won't have to debug \"wrong data\" bugs. There are equivalent unaligned instructions as well, but like Denton says, they're are slower.\nIf you're using VC++, then besides the #pragma pack directives, you also have the __declspec(align) directives for precise alignment. VC++ documentation also mentions an __aligned_malloc function for specific alignment requirements.\nAs a rule of thumb, unless you are moving data across compilers/languages or are using the SSE instructions, you can probably ignore alignment issues.\n"
] | [
6,
4,
1,
1,
1
] | [] | [] | [
"allocation",
"c",
"malloc",
"memory"
] | stackoverflow_0000036455_allocation_c_malloc_memory.txt |
Q:
Django: Print url of view without hardcoding the url
Can I print out a url /admin/manage/products/add of a certain view in a template?
Here is the rule i want to create a link for
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
I would like to have /manage/products/add in a template without hardcoding it. How can I do this?
Edit: I am not using the default admin (well, I am but it is at another url), this is my own
A:
You can use get_absolute_url, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.
You want to use named URL patterns. Here's a quick intro:
Change the line in your urls.py to:
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, "create-product"),
Then, in your template you use this to display the URL:
{% url create-product %}
If you're using Django 1.5 or higher you need this:
{% url 'create-product' %}
You can do some more powerful things with named URL patterns, they're very handy. Note that they are only in the development version (and also 1.0).
A:
If you use named url patterns you can do the following in your template
{% url create_object %}
A:
The preferred way of creating the URL is by adding a get_absolute_url method to your model classes. You can hardcode the path there so you at least get closer to following the KISS philosophy.
You can go further by utilizing the permalink decorator that figures the path based on the urls configuration.
You can read more in the django documentation here.
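A minimal sketch of that approach for a Django of this era; the 'product-detail' pattern name and its id argument are hypothetical:
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)

    @models.permalink
    def get_absolute_url(self):
        # Resolved against the named URL pattern, so no hardcoded path.
        return ('product-detail', (), {'id': self.id})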
| Django: Print url of view without hardcoding the url | Can I print out a url /admin/manage/products/add of a certain view in a template?
Here is the rule i want to create a link for
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
I would like to have /manage/products/add in a template without hardcoding it. How can I do this?
Edit: I am not using the default admin (well, I am but it is at another url), this is my own
| [
"You can use get_absolute_url, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.\nYou want to use named URL patterns. Here's a quick intro:\nChange the line in your urls.py to:\n(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, \"create-product\"),\n\nThen, in your template you use this to display the URL:\n{% url create-product %}\n\nIf you're using Django 1.5 or higher you need this:\n{% url 'create-product' %}\n\nYou can do some more powerful things with named URL patterns, they're very handy. Note that they are only in the development version (and also 1.0).\n",
"If you use named url patterns you can do the follwing in your template\n{% url create_object %}\n\n",
"The preferred way of creating the URL is by adding a get_absolute_url method to your model classes. You can hardcode the path there so you at least get closer to following the KISS philosophy.\nYou can go further by utilizing the permalink decorator that figures the path based on the urls configuration.\nYou can read more in the django documentation here.\n"
] | [
17,
2,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000047207_django_python.txt |
Q:
How do I group in memory lists?
I have a list of Foo. Foo has properties Bar and Lum. Some Foos have identical values for Bar. How can I use lambda/linq to group my Foos by Bar so I can iterate over each grouping's Lums?
A:
var q = from x in list
group x by x.Bar into g
select g;
foreach (var group in q)
{
Console.WriteLine("Group " + group.Key);
foreach (var item in group)
{
        Console.WriteLine(item.Lum);
}
}
A:
Deeno,
Enjoy:
var foos = new List<Foo> {
new Foo{Bar = 1,Lum = 1},
new Foo{Bar = 1,Lum = 2},
new Foo{Bar = 2,Lum = 3},
};
// Using language integrated queries:
var q = from foo in foos
group foo by foo.Bar into groupedFoos
let lums = from fooGroup in groupedFoos
select fooGroup.Lum
select new { Bar = groupedFoos.Key, Lums = lums };
// Using lambdas
var q = foos.GroupBy(x => x.Bar).
Select(y => new {Bar = y.Key, Lums = y.Select(z => z.Lum)});
foreach (var group in q)
{
Console.WriteLine("Lums for Bar#" + group.Bar);
foreach (var lum in group.Lums)
{
Console.WriteLine(lum);
}
}
To learn more about LINQ read 101 LINQ Samples
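Both samples assume a Foo along these lines (the int property types are just an assumption; any type with sensible equality works as the Bar grouping key):
class Foo
{
    public int Bar { get; set; }
    public int Lum { get; set; }
}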
| How do I group in memory lists? | I have a list of Foo. Foo has properties Bar and Lum. Some Foos have identical values for Bar. How can I use lambda/linq to group my Foos by Bar so I can iterate over each grouping's Lums?
| [
"var q = from x in list\n group x by x.Bar into g\n select g;\n\nforeach (var group in q)\n{\n Console.WriteLine(\"Group \" + group.Key);\n foreach (var item in group)\n {\n Console.WriteLine(item.Bar);\n }\n}\n\n",
"Deeno,\nEnjoy:\nvar foos = new List<Foo> {\n new Foo{Bar = 1,Lum = 1},\n new Foo{Bar = 1,Lum = 2},\n new Foo{Bar = 2,Lum = 3},\n};\n\n// Using language integrated queries:\n\nvar q = from foo in foos\n group foo by foo.Bar into groupedFoos\n let lums = from fooGroup in groupedFoos\n select fooGroup.Lum\n select new { Bar = groupedFoos.Key, Lums = lums };\n\n// Using lambdas\n\nvar q = foos.GroupBy(x => x.Bar).\n Select(y => new {Bar = y.Key, Lums = y.Select(z => z.Lum)});\n\n\nforeach (var group in q)\n{\n Console.WriteLine(\"Lums for Bar#\" + group.Bar);\n foreach (var lum in group.Lums)\n {\n Console.WriteLine(lum);\n }\n}\n\nTo learn more about LINQ read 101 LINQ Samples\n"
] | [
4,
3
] | [] | [] | [
".net",
"c#",
"lambda",
"linq"
] | stackoverflow_0000046130_.net_c#_lambda_linq.txt |
Q:
.Net 3.5, most secure way to pass string between processes
I'd like to be able to pass a SecureString (a cached passphrase) to a child process in C# (.Net 3.5), but I don't know what the most secure way is to do it. If I were to convert the SecureString back to a regular string and pass it as a command-line argument, for example, then I think the value may be prone to disk paging--which would make the plaintext touch the filesystem and ruin the point of using SecureString.
Can the IntPtr for the SecureString be passed instead? Could I use a named pipe without increasing the risk?
A:
In general you should define your threat model before worrying about more exotic attacks. In this case: are you worried that somebody shuts down the computer and does a forensic analysis of the harddrive? Application memory can also be swapped out, so the simple fact that one process has it in memory, makes it potentially possible for it to end in the swap file. What about hibernation? During hibernation the entire content of the memory is written to the harddisk (including the SecureString - and presumably the encryption key!). What if the attacker has access to the system while it's running and can search through the memory of applications?
In general client side security is very tricky and unless you have dedicated hardware (like a TPM chip) it is almost impossible to get right. Two solutions would be:
If you only need to test for equality between two strings (ie: is this string the same as the one I had earlier), store only a (salted) hash value of it.
Make the user re-enter the information when it is needed a second time (not very convenient, but security and convenience are opposed to each other)
A:
Unless your child process also understands how to work with SecureString I don't think there is a way to pass it directly. For example, the Process.Start() method has two overloads that take a SecureString so the risk of the actual string value being sniffed is minimized (it's still possible since somewhere along the the way the actual value has to be retrieved/unmarshalled).
I think a lot of how to do this will depend on what the child process is and how it is being started.
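A minimal sketch of the Process.Start overload mentioned above; the file name, user, and domain are placeholders, and in real code you would append characters to the SecureString one keystroke at a time rather than from a literal:
using System.Diagnostics;
using System.Security;

var password = new SecureString();
foreach (char c in "secret")      // placeholder only - avoid plain strings
    password.AppendChar(c);
password.MakeReadOnly();

// The SecureString is handed over directly, so this code never has to
// materialize the plaintext itself.
Process.Start(@"C:\tools\child.exe", "", "serviceUser", password, "MYDOMAIN");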
| .Net 3.5, most secure way to pass string between processes | I'd like to be able to pass a SecureString (a cached passphrase) to a child process in C# (.Net 3.5), but I don't know what the most secure way is to do it. If I were to convert the SecureString back to a regular string and pass it as a command-line argument, for example, then I think the value may be prone to disk paging--which would make the plaintext touch the filesystem and ruin the point of using SecureString.
Can the IntPtr for the SecureString be passed instead? Could I use a named pipe without increasing the risk?
| [
"In general you should define your threat model before worrying about more exotic attacks. In this case: are you worried that somebody shuts down the computer and does a forensic analysis of the harddrive? Application memory can also be swapped out, so the simple fact that one process has it in memory, makes it potentially possible for it to end in the swap file. What about hibernation? During hibernation the entire content of the memory is written to the harddisk (including the SecureString - and presumably the encryption key!). What if the attacker has access to the system while it's running and can search through the memory of applications?\nIn general client side security is very tricky and unless you have dedicated hardware (like a TPM chip) it is almost impossible to get right. Two solutions would be:\n\nIf you only need to test for equality between two strings (ie: is this string the same as the one I had earlier), store only a (salted) hash value of it.\nMake the user re-enter the information when it is needed a second time (not very convenient, but security and convenience are opposed to each other)\n\n",
"Unless your child process also understands how to work with SecureString I don't think there is a way to pass it directly. For example, the Process.Start() method has two overloads that take a SecureString so the risk of the actual string value being sniffed is minimized (it's still possible since somewhere along the the way the actual value has to be retrieved/unmarshalled).\nI think a lot of how to do this will depend on what the child process is and how it is being started.\n"
] | [
3,
0
] | [] | [] | [
".net",
".net_3.5",
"ipc",
"security"
] | stackoverflow_0000046693_.net_.net_3.5_ipc_security.txt |
Q:
UserControl rendering: write link to current page?
I'm implementing a custom control and in this control I need to write a bunch of links to the current page, each one with a different query parameter. I need to keep the existing query string intact, and add (or modify the value of) an extra query item (eg. "page"):
"Default.aspx?page=1"
"Default.aspx?page=2"
"Default.aspx?someother=true&page=2"
etc.
Is there a simple helper method that I can use in the Render method ... uhmm ... like:
Page.ClientScript.SomeURLBuilderMethodHere(this,"page","1");
Page.ClientScript.SomeURLBuilderMethodHere(this,"page","2");
That will take care of generating a correct URL, maintain existing query string items and not create duplicates eg. page=1&page=2&page=3?
Rolling my own seems like such an unappealing task.
A:
I'm afraid I don't know of any built-in method for this, we use this method that takes the querystring and sets parameters
/// <summary>
/// Set a parameter value in a query string. If the parameter is not found in the passed in query string,
/// it is added to the end of the query string
/// </summary>
/// <param name="queryString">The query string that is to be manipulated</param>
/// <param name="paramName">The name of the parameter</param>
/// <param name="paramValue">The value that the parameter is to be set to</param>
/// <returns>The query string with the parameter set to the new value.</returns>
public static string SetParameter(string queryString, string paramName, object paramValue)
{
//create the regex
//match paramname=*
//string regex = String.Format(@"{0}=[^&]*", paramName);
string regex = @"([&?]{0,1})" + String.Format(@"({0}=[^&]*)", paramName);
RegexOptions options = RegexOptions.RightToLeft;
// Querystring has parameters...
if (Regex.IsMatch(queryString, regex, options))
{
queryString = Regex.Replace(queryString, regex, String.Format("$1{0}={1}", paramName, paramValue));
}
else
{
// If no querystring just return the Parameter Key/Value
if (queryString == String.Empty)
{
return String.Format("{0}={1}", paramName, paramValue);
}
else
{
// Append the new parameter key/value to the end of querystring
queryString = String.Format("{0}&{1}={2}", queryString, paramName, paramValue);
}
}
return queryString;
}
Obviously you could use the QueryString NameValueCollection property of the URI object to make looking up the values easier, but we wanted to be able to parse any querystring.
A:
Oh and we have this method too that allows you to put in a whole URL string without having to get the querystring out of it
public static string SetParameterInUrl(string url, string paramName, object paramValue)
{
int queryStringIndex = url.IndexOf("?");
string path;
string queryString;
if (queryStringIndex >= 0 && !url.EndsWith("?"))
{
path = url.Substring(0, queryStringIndex);
queryString = url.Substring(queryStringIndex + 1);
}
else
{
path = url;
queryString = string.Empty;
}
return path + "?" + SetParameter(queryString, paramName, paramValue);
}
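For completeness, a sketch of calling the helper from a control's Render method (assuming the methods above are accessible to the control):
protected override void Render(HtmlTextWriter writer)
{
    for (int page = 1; page <= 3; page++)
    {
        string url = SetParameterInUrl(Page.Request.RawUrl, "page", page);
        writer.Write("<a href=\"{0}\">{1}</a> ",
                     HttpUtility.HtmlAttributeEncode(url), page);
    }
}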
| UserControl rendering: write link to current page? | I'm implementing a custom control and in this control I need to write a bunch of links to the current page, each one with a different query parameter. I need to keep the existing query string intact, and add (or modify the value of) an extra query item (eg. "page"):
"Default.aspx?page=1"
"Default.aspx?page=2"
"Default.aspx?someother=true&page=2"
etc.
Is there a simple helper method that I can use in the Render method ... uhmm ... like:
Page.ClientScript.SomeURLBuilderMethodHere(this,"page","1");
Page.ClientScript.SomeURLBuilderMethodHere(this,"page","2");
That will take care of generating a correct URL, maintain existing query string items and not create duplicates eg. page=1&page=2&page=3?
Rolling my own seems like such an unappealing task.
| [
"I'm afraid I don't know of any built-in method for this, we use this method that takes the querystring and sets parameters\n /// <summary>\n /// Set a parameter value in a query string. If the parameter is not found in the passed in query string,\n /// it is added to the end of the query string\n /// </summary>\n /// <param name=\"queryString\">The query string that is to be manipulated</param>\n /// <param name=\"paramName\">The name of the parameter</param>\n /// <param name=\"paramValue\">The value that the parameter is to be set to</param>\n /// <returns>The query string with the parameter set to the new value.</returns>\n public static string SetParameter(string queryString, string paramName, object paramValue)\n {\n //create the regex\n //match paramname=*\n //string regex = String.Format(@\"{0}=[^&]*\", paramName);\n string regex = @\"([&?]{0,1})\" + String.Format(@\"({0}=[^&]*)\", paramName);\n\n RegexOptions options = RegexOptions.RightToLeft;\n // Querystring has parameters...\n if (Regex.IsMatch(queryString, regex, options))\n {\n queryString = Regex.Replace(queryString, regex, String.Format(\"$1{0}={1}\", paramName, paramValue));\n }\n else\n {\n // If no querystring just return the Parameter Key/Value\n if (queryString == String.Empty)\n {\n return String.Format(\"{0}={1}\", paramName, paramValue);\n }\n else\n {\n // Append the new parameter key/value to the end of querystring\n queryString = String.Format(\"{0}&{1}={2}\", queryString, paramName, paramValue);\n }\n }\n return queryString;\n }\n\nObviously you could use the QueryString NameValueCollection property of the URI object to make looking up the values easier, but we wanted to be able to parse any querystring.\n",
"Oh and we have this method too that allows you to put in a whole URL string without having to get the querystring out of it\npublic static string SetParameterInUrl(string url, string paramName, object paramValue)\n{\n int queryStringIndex = url.IndexOf(\"?\");\n string path;\n string queryString;\n if (queryStringIndex >= 0 && !url.EndsWith(\"?\"))\n {\n path = url.Substring(0, queryStringIndex);\n queryString = url.Substring(queryStringIndex + 1);\n }\n else\n {\n path = url;\n queryString = string.Empty;\n }\n return path + \"?\" + SetParameter(queryString, paramName, paramValue);\n}\n\n"
] | [
1,
0
] | [] | [] | [
"asp.net",
"custom_server_controls",
"web_applications"
] | stackoverflow_0000047329_asp.net_custom_server_controls_web_applications.txt |
Q:
Information Management Policy in SharePoint
An obscure puzzle, but it's driving me absolutely nuts:
I'm creating a custom Information Management Policy in MOSS. I've implemented IPolicyFeature, and my policy feature happily registers itself by configuring a new SPItemEventReceiver. All new items in my library fire the events as they should, and it all works fine.
IPolicyFeature also has a method ProcessListItem, which is supposed to retroactively apply the policy to items that were already in the library (at least, it's supposed to do that for as long as it keeps returning true). Except it doesn't. It only applies the policy to the first item in the library, and I have absolutely no idea why.
It doesn't seem to be throwing an exception, and it really does return true from processing that first item, and I can't think what else to look at. Anyone?
Edit: Cory's answer, below, set me on the right track. Something else was indeed failing -- I didn't find out what, since my windbg-fu isn't what it should be, but I suspect it was something like "modifying a collection while it's being iterated over". My code was modifying the SPListItem that's passed into ProcessListItem, and then calling SystemUpdate on it; as soon as I changed the code so that it created its own variable (pointing at the exact same SPListItem) and used that, the problem went away...
A:
There's only a couple of things I can think of to try. First, are you developing on the box where you might be able to use Visual Studio to debug? So just stepping through it.
Assuming that's not the case - what I'd do is fire up WinDBG and attach it to the process just before I registered the policy. Turn on first chance exceptions so that it breaks whenever they occur. You can do that by issuing the command "sxe clr" once it is broken in. Here's a little more info about WinDBG:
http://blogs.msdn.com/tess/archive/2008/06/05/setting-net-breakpoints-in-windbg-for-applications-that-crash-on-startup.aspx
What I'd do is then watch for First Chance exceptions to be thrown, and do a !PrintException to see what is going on. My guess is that there is an exception being thrown somewhere that is causing the app to stop processing the other items.
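Roughly, the WinDBG session looks like this; note that !PrintException comes from the SOS extension, which for .NET 2.0 is loaded with .loadby sos mscorwks:
sxe clr                 (break on first-chance CLR exceptions)
g                       (resume; WinDBG breaks when one is thrown)
.loadby sos mscorwks    (load the SOS managed-debugging extension)
!PrintException         (dump the managed exception that was thrown)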
What does the logic look like for your ProcessListItem? Have you tried just doing a return true to make sure it works?
A:
Some nice ideas there, thanks. The Visual Studio debugger wasn't showing an exception (and I've wrapped everything in try/catch blocks just in case), but I hadn't thought of trying Windbg...
| Information Management Policy in SharePoint | An obscure puzzle, but it's driving me absolutely nuts:
I'm creating a custom Information Management Policy in MOSS. I've implemented IPolicyFeature, and my policy feature happily registers itself by configuring a new SPItemEventReceiver. All new items in my library fire the events as they should, and it all works fine.
IPolicyFeature also has a method ProcessListItem, which is supposed to retroactively apply the policy to items that were already in the library (at least, it's supposed to do that for as long as it keeps returning true). Except it doesn't. It only applies the policy to the first item in the library, and I have absolutely no idea why.
It doesn't seem to be throwing an exception, and it really does return true from processing that first item, and I can't think what else to look at. Anyone?
Edit: Cory's answer, below, set me on the right track. Something else was indeed failing -- I didn't find out what, since my windbg-fu isn't what it should be, but I suspect it was something like "modifying a collection while it's being iterated over". My code was modifying the SPListItem that's passed into ProcessListItem, and then calling SystemUpdate on it; as soon as I changed the code so that it created its own variable (pointing at the exact same SPListItem) and used that, the problem went away...
| [
"There's only a couple of things I can think of to try. First, are you developing on the box where you might be able to use Visual Studio to debug? So just stepping through it.\nAssuming that's not the case - what I'd do is fire up WinDBG and attach it to the process just before I registered the policy. Turn on first chance exceptions so that it breaks whenever they occur. you can do that by issuing the command \"sxe clr\" once it is broken in. Here's a little more info about WinDBG:\nhttp://blogs.msdn.com/tess/archive/2008/06/05/setting-net-breakpoints-in-windbg-for-applications-that-crash-on-startup.aspx\nWhat I'd do is then watch for First Chance exceptions to be thrown, and do a !PrintException to see what is going on. My guess is that there is an exception being thrown somewhere that is causing the app to stop processing the other items. \nWhat does the logic look like for your ProcessListItem? Have you tried just doing a return true to make sure it works?\n",
"Some nice ideas there, thanks. The Visual Studio debugger wasn't showing an exception (and I've wrapped everything in try/catch blocks just in case), but I hadn't thought of trying Windbg...\n"
] | [
1,
0
] | [] | [] | [
"information_management",
"moss",
"sharepoint"
] | stackoverflow_0000046692_information_management_moss_sharepoint.txt |
Q:
Library or algorithm to explode an alphanumeric range
I was wondering if there is an open source library or algorithm that can expand a non-numeric range. For example, if you have 1A to 9A you should get
1A, 2A, 3A, 4A, 5A, 6A, 7A, 8A, 9A.
I've tried Googling for this and the best I could come up with were Regex that would expand numerics with dashes (1-3 becoming 1,2,3).
A:
As noted by others, it would be useful to be more specific. I don't think you can expect there to be a library that will generate ranges according to any arbitrary order on strings you can come up with.
If you can simply define what the successor of any given string is, then the solution is quite easy. That is, if you have a successor function S on strings (e.g. with S('3A') = '4A'), then something like the following can be used:
s = initial_string
while s != final_string do
output s
s = S(s)
output s
Something I have used in the past to generate all strings of a given length l and with given range b to e of characters, is the following piece of (pseudo-)code. It can be easily adapted to a wide range of variations.
// initialise s with b at every position
for i in [0..l) do
s[i] = b
done = false
while not done do
output s
j = 0
// if s[j] is e, reset it to b and "add carry"
while j < l and s[j] == e do
s[j] = b
j = j + 1
if j == l then
done = true
if not done then
s[j] = s[j] + 1
For example, to start at a specific string you need only change the initialisation. To set the end you only need to change the behaviour for the inner while to separately handle position l (limiting to the character in the end string on that position and if reached decrementing l).
A:
I was trying to leave it somewhat open because the number of possibilities is staggering. I believe this is one of those questions that could not be answered 100% here without going through a lot of technical detail about what is considered a "good" or "bad" range. I'm just trying to find a jumping-off point for ideas on how other people have tackled this problem. I was hoping that someone wrote a blog post explaining how they went about solving this problem or created a whole library to handle this.
A:
I would say the first step in the solution will be to define how characters and numbers interact and form a sequence. The given example isn't clear, as you would at least assume it to run 1A, 1B .... 8Y, 8Z, 9A - that's assuming your input is restricted to decimal followed by a single character.
If you can define a continuous sequence for characters and decimals, then it will simply be a matter of some recursion / looping to generate part of that sequence.
For example, you could assume that each character in the input is one of (1-9A-Z), therefore you could easily make that continuous by grabbing the decimal ascii value of the alpha characters and subtracting 55, in effect giving you the range (1-35)
A:
If we assume that the start and end ranges will follow the same alternating pattern, and limit the range of digits to 0-9 and A-Z, we can think of each group of digits as a component in a multi-dimensional coordinate. For example, 1A would correspond to the two-dimensional coordinate (1,A) (which is what Excel uses to label its two-dimensional grid of rows and columns); whereas AA1BB2 would be a four-dimensional coordinate (AA,1,BB,2).
Because each component is independent, to expand the range between two coordinates we just return all combinations of the expansion of each component. Below is a quick implementation I cooked up this afternoon. It works for an arbitrary number of alternations of normal and alphabetic numbers, and handles large alphabetic ranges (i.e. from AB to CDE, not just AB to CD).
Note: This is intended as a rough draft of an actual implementation (I'm taking off tomorrow, so it is even less polished than usual ;). All the usual caveats regarding error handling, robustness, (readability ;), etc, apply.
IEnumerable<string> ExpandRange( string start, string end ) {
// Split coordinates into component parts.
string[] startParts = GetRangeParts( start );
string[] endParts = GetRangeParts( end );
// Expand range between parts
// (i.e. 1->3 becomes 1,2,3; A->C becomes A,B,C).
int length = startParts.Length;
int[] lengths = new int[length];
string[][] expandedParts = new string[length][];
for( int i = 0; i < length; ++i ) {
expandedParts[i] = ExpandRangeParts( startParts[i], endParts[i] );
lengths[i] = expandedParts[i].Length;
}
// Return all combinations of expanded parts.
int[] indexes = new int[length];
do {
var sb = new StringBuilder( );
for( int i = 0; i < length; ++i ) {
int partIndex = indexes[i];
sb.Append( expandedParts[i][partIndex] );
}
yield return sb.ToString( );
} while( IncrementIndexes( indexes, lengths ) );
}
readonly Regex RangeRegex = new Regex( "([0-9]*)([A-Z]*)" );
string[] GetRangeParts( string range ) {
// Match all alternating digit-letter components of coordinate.
var matches = RangeRegex.Matches( range );
var parts =
from match in matches.Cast<Match>( )
from matchGroup in match.Groups.Cast<Group>( ).Skip( 1 )
let value = matchGroup.Value
where value.Length > 0
select value;
return parts.ToArray( );
}
string[] ExpandRangeParts( string startPart, string endPart ) {
int start, end;
Func<int, string> toString;
bool isNumeric = char.IsDigit( startPart, 0 );
if( isNumeric ) {
// Parse regular integers directly.
start = int.Parse( startPart );
end = int.Parse( endPart );
toString = ( i ) => i.ToString( );
}
else {
// Convert alphabetic numbers to integers for expansion,
// then convert back for display.
start = AlphaNumberToInt( startPart );
end = AlphaNumberToInt( endPart );
toString = IntToAlphaNumber;
}
int count = end - start + 1;
return Enumerable.Range( start, count )
.Select( toString )
.Where( s => s.Length > 0 )
.ToArray( );
}
bool IncrementIndexes( int[] indexes, int[] lengths ) {
// Increment indexes from right to left (i.e. Arabic numeral order).
bool carry = true;
for( int i = lengths.Length; carry && i > 0; --i ) {
int index = i - 1;
int incrementedValue = (indexes[index] + 1) % lengths[index];
indexes[index] = incrementedValue;
carry = (incrementedValue == 0);
}
return !carry;
}
// Alphabetic numbers are 1-based (i.e. A = 1, AA = 11, etc, mod base-26).
const char AlphaDigitZero = (char)('A' - 1);
const int AlphaNumberBase = 'Z' - AlphaDigitZero + 1;
int AlphaNumberToInt( string number ) {
int sum = 0;
int place = 1;
foreach( char c in number.Cast<char>( ).Reverse( ) ) {
int digit = c - AlphaDigitZero;
sum += digit * place;
place *= AlphaNumberBase;
}
return sum;
}
string IntToAlphaNumber( int number ) {
List<char> digits = new List<char>( );
while( number > 0 ) {
int digit = number % AlphaNumberBase;
if( digit == 0 ) // Compensate for 1-based alphabetic numbers.
return "";
char c = (char)(AlphaDigitZero + digit);
digits.Add( c );
number /= AlphaNumberBase;
}
digits.Reverse( );
return new string( digits.ToArray( ) );
}
| Library or algorithm to explode an alphanumeric range | I was wondering if there is an open source library or algorithm that can expand a non-numeric range. For example, if you have 1A to 9A you should get
1A, 2A, 3A, 4A, 5A, 6A, 7A, 8A, 9A.
I've tried Googling for this and the best I could come up with were Regex that would expand numerics with dashes (1-3 becoming 1,2,3).
| [
"As noted by others, it would be useful to be more specific. I don't think you can expect there to be a library that will generate ranges according to any arbitrary order on string you can come up with.\nIf you can simply define what the successor of any given string is, then the solutions is quite easy. That is, if you have a successor function S on strings (e.g. with S('3A') = '4A'), then something like the following can be used:\ns = initial_string\nwhile s != final_string do\n output s\n s = S(s)\noutput s\n\nSomething I have used in the past to generate all strings of a given length l and with given range b to e of characters, is the following piece of (pseudo-)code. It can be easily adapted to a wide range of variations.\n// initialise s with b at every position\nfor i in [0..l) do\n s[i] = b\ndone = false\nwhile not done do\n output s\n j = 0\n // if s[j] is e, reset it to b and \"add carry\"\n while j < l and s[j] == e do\n s[j] = b\n j = j + 1\n if j == l then\n done = true\n if not done then\n s[j] = s[j] + 1\n\nFor example, to start at a specific string you need only the change the initialisation. To set the end you only need to change the behaviour for the inner while to separately handle position l (limiting to the character in the end string on that position and if reached decrementing l).\n",
"I was trying to leave it somewhat open because the number of possibilities is staggering. I believe this one of those questions that could not be answered 100% here without going through a lot of technical detail about is considered a \"good\" or \"bad\" range. I'm just trying to find a jumping point for ideas on how other people have tackled this problem. I was hoping that someone wrote a blog post explaining how they went about it solving this problem or created a whole library to handle this.\n",
"I would say the first step in the solution will be to define how characters and numbers interact and form a sequence. The given example isn't clear, as you would at least assume it to run 1A, 1B .... 8Y, 8Z, 9A - that's assuming your input is restricted to decimal followed by a single character.\nIf you can define a continuous sequence for characters and decimals, then you it will simply be a matter of some recursion / looping to generate part of that sequence.\nFor example, you could assume that each character in the input is one of (1-9A-Z), therefore you could easily make that continuous by grabbing the decimal ascii value of the alpha characters and subtracting 55, in effect giving you the range (1-35)\n",
"If we assume that the start and end ranges will follow the same alternating pattern, and limit the range of digits to 0-9 and A-Z, we can think of each group of digits as a component in a multi-dimensonal coordinate. For example, 1A would correspond to the two-dimensional coordinate (1,A) (which is what Excel uses to label its two-dimensional grid of rows and columns); whereas AA1BB2 would be a four-dimensional coordinate (AA,1,BB,2).\nBecause each component is independent, to expand the range between two coordinates we just return all combinations of the expansion of each component. Below is a quick implementation I cooked up this afternoon. It works for an arbitrary number of alternations of normal and alphabetic numbers, and handles large alphabetic ranges (i.e. from AB to CDE, not just AB to CD).\nNote: This is intended as a rough draft of an actual implementation (I'm taking off tomorrow, so it is even less polished than usual ;). All the usual caveats regarding error handling, robustness, (readability ;), etc, apply.\nIEnumerable<string> ExpandRange( string start, string end ) {\n // Split coordinates into component parts.\n string[] startParts = GetRangeParts( start );\n string[] endParts = GetRangeParts( end );\n\n // Expand range between parts \n // (i.e. 1->3 becomes 1,2,3; A->C becomes A,B,C).\n int length = startParts.Length;\n int[] lengths = new int[length];\n string[][] expandedParts = new string[length][];\n for( int i = 0; i < length; ++i ) {\n expandedParts[i] = ExpandRangeParts( startParts[i], endParts[i] );\n lengths[i] = expandedParts[i].Length;\n }\n\n // Return all combinations of expanded parts.\n int[] indexes = new int[length];\n do {\n var sb = new StringBuilder( );\n for( int i = 0; i < length; ++i ) {\n int partIndex = indexes[i];\n sb.Append( expandedParts[i][partIndex] );\n }\n yield return sb.ToString( );\n } while( IncrementIndexes( indexes, lengths ) );\n}\n\nreadonly Regex RangeRegex = new Regex( \"([0-9]*)([A-Z]*)\" );\nstring[] GetRangeParts( string range ) {\n // Match all alternating digit-letter components of coordinate.\n var matches = RangeRegex.Matches( range );\n var parts =\n from match in matches.Cast<Match>( )\n from matchGroup in match.Groups.Cast<Group>( ).Skip( 1 )\n let value = matchGroup.Value\n where value.Length > 0\n select value;\n return parts.ToArray( );\n}\n\nstring[] ExpandRangeParts( string startPart, string endPart ) {\n int start, end;\n Func<int, string> toString;\n\n bool isNumeric = char.IsDigit( startPart, 0 );\n if( isNumeric ) {\n // Parse regular integers directly.\n start = int.Parse( startPart );\n end = int.Parse( endPart );\n toString = ( i ) => i.ToString( );\n }\n else {\n // Convert alphabetic numbers to integers for expansion,\n // then convert back for display.\n start = AlphaNumberToInt( startPart );\n end = AlphaNumberToInt( endPart );\n toString = IntToAlphaNumber;\n }\n\n int count = end - start + 1;\n return Enumerable.Range( start, count )\n .Select( toString )\n .Where( s => s.Length > 0 )\n .ToArray( );\n}\n\nbool IncrementIndexes( int[] indexes, int[] lengths ) {\n // Increment indexes from right to left (i.e. Arabic numeral order).\n bool carry = true;\n for( int i = lengths.Length; carry && i > 0; --i ) {\n int index = i - 1;\n int incrementedValue = (indexes[index] + 1) % lengths[index];\n indexes[index] = incrementedValue;\n carry = (incrementedValue == 0);\n }\n return !carry;\n}\n\n// Alphabetic numbers are 1-based (i.e. 
A = 1, AA = 11, etc, mod base-26).\nconst char AlphaDigitZero = (char)('A' - 1);\nconst int AlphaNumberBase = 'Z' - AlphaDigitZero + 1;\nint AlphaNumberToInt( string number ) {\n int sum = 0;\n int place = 1;\n foreach( char c in number.Cast<char>( ).Reverse( ) ) {\n int digit = c - AlphaDigitZero;\n sum += digit * place;\n place *= AlphaNumberBase;\n }\n return sum;\n}\n\nstring IntToAlphaNumber( int number ) {\n List<char> digits = new List<char>( );\n while( number > 0 ) {\n int digit = number % AlphaNumberBase;\n if( digit == 0 ) // Compensate for 1-based alphabetic numbers.\n return \"\";\n\n char c = (char)(AlphaDigitZero + digit);\n digits.Add( c );\n number /= AlphaNumberBase;\n }\n\n digits.Reverse( );\n return new string( digits.ToArray( ) );\n}\n\n"
] | [
1,
0,
0,
0
] | [] | [] | [
".net",
"algorithm"
] | stackoverflow_0000045642_.net_algorithm.txt |
Q:
How can I monitor the executed sql statements on a SQL Server 2005
In a project of mine the SQL statements that are executed against a SQL Server are failing for some unknown reason. Some of the code is already used in production so debugging it is not an easy task. Therefore I need a way to see in the database itself what SQL statements are used, as the statements are generated at runtime by the project and could be flawed when certain conditions are met.
I therefore considered the possibility to monitor the incoming statements and check myself if I see any flaws.
The database is running on a SQL Server 2005, and I use SQL server management studio express as primary tool to manipulate the database. So my question is, what is the best way to do this?
A:
Seeing how you use the Management Studio Express, I will assume you don't have access to the MSSQL 2005 client tools. If you do, install those, because it includes the SQL profiler which does exactly what you want (and more!). For more info about that one, see msdn.
I found this a while ago, because I was thinking about the exact same thing. I have access to the client tools myself, so I don't really need to yet, but that access is not unlimited (it's through my current job). If you try it out, let me know if it works ;-)
A:
Best way is to fire up profiler, start a trace, save the trace and then rerun the statements
| How can I monitor the executed sql statements on a SQL Server 2005 | In a project of mine the SQL statements that are executed against a SQL Server are failing for some unknown reason. Some of the code is already used in production so debugging it is not an easy task. Therefore I need a way to see in the database itself what SQL statements are used, as the statements are generated at runtime by the project and could be flawed when certain conditions are met.
I therefore considered the possibility to monitor the incoming statements and check myself if I see any flaws.
The database is running on a SQL Server 2005, and I use SQL server management studio express as primary tool to manipulate the database. So my question is, what is the best way to do this?
| [
"Seeing how you use the Management Studio Express, I will assume you don't have access to the MSSQL 2005 client tools. If you do, install those, because it includes the SQL profiler which does exactly what you want (and more!). For more info about that one, see msdn.\nI found this a while ago, because I was thinking about the exact same thing. I have access to the client tools myself, so I don't really need to yet, but that access is not unlimited (it's through my current job). If you try it out, let me know if it works ;-)\n",
"Best way is to fire up profiler, start a trace, save the trace and then rerun the statements\n"
] | [
22,
2
] | [] | [] | [
"monitoring",
"sql",
"sql_server",
"sql_server_2005"
] | stackoverflow_0000047376_monitoring_sql_sql_server_sql_server_2005.txt |
Q:
Do namespaces propagate to children in XElement objects?
If I have an XElement that has child elements, and if I remove a child element from the parent, removing all references between the two, will the child XElement have the same namespaces as the parent?
In other words, if I have the following XML:
<parent xmlns:foo="abc">
<foo:child />
</parent>
and I remove the child element, will the child element's xml look like
<child xmlns="abc" />
or like
<child />
A:
The answer is yes, namespaces do propagate to children.
You do NOT have to specify the namespace within child elements. The scoping of a namespace includes all elements until the closing tag of the element it was defined in.
See section #6.1 here http://www.w3.org/TR/REC-xml-names/#scoping
hope that helps
A:
If you include the mentioned element in the new XML tree, it will be in the same namespace.
var xml1 = XElement.Parse("<a xmlns:foo=\"abc\"><foo:b></foo:b></a>");
var xml2 = XElement.Parse("<a xmlns:boo=\"efg\"></a>");
XNamespace ns = "abc";
var elem = xml1.Element(ns + "b");
elem.Remove();
xml2.Add(elem);
Console.WriteLine(xml1.ToString());
Console.WriteLine(xml2.ToString());
Result:
<a xmlns:foo="abc" />
<a xmlns:boo="efg">
<b xmlns="abc"></b>
</a>
| Do namespaces propagate to children in XElement objects? | If I have an XElement that has child elements, and if I remove a child element from the parent, removing all references between the two, will the child XElement have the same namespaces as the parent?
In other words, if I have the following XML:
<parent xmlns:foo="abc">
<foo:child />
</parent>
and I remove the child element, will the child element's xml look like
<child xmlns="abc" />
or like
<child />
| [
"The answer is yes, namespaces do propagate to children. \nYou do NOT have to specify the namespace within child elements. The scoping of a namespace includes all elements until the closing tag of the element it was defined in.\nSee section #6.1 here http://www.w3.org/TR/REC-xml-names/#scoping\nhope that helps\n",
"If you include mentioned element in the new xml tree it will be in the same namespace.\nvar xml1 = XElement.Parse(\"<a xmlns:foo=\\\"abc\\\"><foo:b></foo:b></a>\");\nvar xml2 = XElement.Parse(\"<a xmlns:boo=\\\"efg\\\"></a>\");\nXNamespace ns = \"abc\";\nvar elem = xml1.Element(ns + \"b\");\nelem.Remove();\nxml2.Add(elem);\nConsole.WriteLine(xml1.ToString());\nConsole.WriteLine(xml2.ToString());\n\nResult:\n<a xmlns:foo=\"abc\" />\n<a xmlns:boo=\"efg\">\n <b xmlns=\"abc\"></b>\n</a>\n\n"
] | [
1,
1
] | [] | [] | [
".net",
"linq_to_xml",
"namespaces"
] | stackoverflow_0000046532_.net_linq_to_xml_namespaces.txt |
Q:
Ruby Performance
I'm pretty keen to develop my first Ruby app, as my company has finally blessed its use internally.
In everything I've read about Ruby up to v1.8, there is never anything positive said about performance, but I've found nothing about version 1.9. The last figures I saw about 1.8 had it drastically slower than just about everything out there, so I'm hoping this was addressed in 1.9.
Has performance drastically improved? Are there some concrete things that can be done with Ruby apps (or things to avoid) to keep performance at the best possible level?
A:
There are some benchmarks of 1.8 vs 1.9 at http://www.rubychan.de/share/yarv_speedups.html. Overall, it looks like 1.9 is a lot faster in most cases.
A:
If scalability and performance are really important to you you can also check out Ruby Enterprise Edition. It's a custom implementation of the Ruby interpreter that's supposed to be much better about memory allocation and garbage collection. I haven't seen any objective metrics comparing it directly to JRuby, but all of the anectdotal evidence I've heard has been very very good.
This is from the same company that created Passenger (aka mod_rails) which you should definitely check out as a rails deployment solution if you decide not to go the JRuby route.
A:
Matz's Ruby 1.8.6 is much slower when it comes to performance, and 1.9 and JRuby do a lot to speed it up. But the performance isn't such that it will prevent you from doing anything you want in a web application. There are many large Ruby on Rails sites that do just fine with the "slower interpreted" language. When you get to scaling out web apps, there are many more pressing performance issues than the speed of the language you are writing them in.
A:
I've actually heard really good things about the performance of the JVM implementation, JRuby. Completely anecdotal, but perhaps worth looking into.
See also http://en.wikipedia.org/wiki/JRuby#Performance
A:
Check out "Writing Efficient Ruby Code" from Addison Wesley Professional:
http://safari.oreilly.com/9780321540034
I found some very helpful and interesting insights in this short work. And if you sign up for the free 10-day trial you could read it for free. (It's 50 pages and the trial gets you (AFAIR) 100 page views.)
https://ssl.safaribooksonline.com/promo
A:
I am not a Ruby programmer, but I have been pretty tightly involved in a JRuby deployment lately and can thus draw some conclusions. Do not expect too much from JRuby's performance. In interpreted mode, it seems to be somewhere in the range of C Ruby. JIT mode might be faster, but only in theory. In practice, we tried JIT mode on Glassfish for a decently sized Rails application on a medium-sized server (dual core, 8GB RAM). And the truth is, the JITting took so much time that the server needed 20-30 minutes before it answered the first request. Memory usage was astronomical, and profiling did not work because the whole system ground to a halt with a profiler attached.
Bottom line: JRuby has its merits (multithreading, solid platform, easy Java integration), but given that interpreted mode is the only mode that worked for us in practice, it may be expected to be no better performance-wise than C Ruby.
| Ruby Performance | I'm pretty keen to develop my first Ruby app, as my company has finally blessed its use internally.
In everything I've read about Ruby up to v1.8, there is never anything positive said about performance, but I've found nothing about version 1.9. The last figures I saw about 1.8 had it drastically slower than just about everything out there, so I'm hoping this was addressed in 1.9.
Has performance drastically improved? Are there some concrete things that can be done with Ruby apps (or things to avoid) to keep performance at the best possible level?
| [
"There are some benchmarks of 1.8 vs 1.9 at http://www.rubychan.de/share/yarv_speedups.html. Overall, it looks like 1.9 is a lot faster in most cases.\n",
"If scalability and performance are really important to you you can also check out Ruby Enterprise Edition. It's a custom implementation of the Ruby interpreter that's supposed to be much better about memory allocation and garbage collection. I haven't seen any objective metrics comparing it directly to JRuby, but all of the anectdotal evidence I've heard has been very very good.\nThis is from the same company that created Passenger (aka mod_rails) which you should definitely check out as a rails deployment solution if you decide not to go the JRuby route.\n",
"Matz ruby 1.8.6 is much slower when it comes to performance and 1.9 and JRuby do alot to speed it up. But the performance isn't such that it will prevent you from doing anything you want in a web application. There are many large Ruby on Rails sites that do just fine with the \"slower interpreted\" language. When you get to scaling out web apps there are many more pressing performance issues than the speed of the language you are writing it in.\n",
"I've actually heard really good things performance with about the JVM implementation, JRuby. Completly anecdotal, but perhaps worth looking into.\nSee also http://en.wikipedia.org/wiki/JRuby#Performance\n",
"Check out \"Writing Efficient Ruby Code\" from Addison Wesley Professional:\nhttp://safari.oreilly.com/9780321540034\nI found some very helpful and interesting insights in this short work. And if you sign up for the free 10-day trial you could read it for free. (It's 50 pages and the trial gets you (AFAIR) 100 page views.)\nhttps://ssl.safaribooksonline.com/promo\n",
"I am not a Ruby programmer but I have been pretty tightly involved in a JRuby deployment lately and can thus draw some conclusions. Do not expect to much from JRuby's performance. In interpreted mode, it seems to be somewhere in the range of C Ruby. JIT mode might be faster, but only in theory. In practice, we tried JIT mode on Glassfish for a decently-sized Rails application on a medium-sized server (dual core, 8GB RAM). And the truth is, the JITting took so freakingly much time, that the server needed 20-30 minutes before it answered the first request. Memory usage was astronomic, profiling did not work because the whole system grinded to halt with a profiler attached.\nBottom line: JRuby has its merits (multithreading, solid platform, easy Java integration), but given that interpreted mode is the only mode that worked for us in practice, it may be expected to be no better performance-wise than C Ruby.\n"
] | [
8,
4,
2,
1,
0,
0
] | [
"I'd second the recommendation of the use of Passenger - it makes deployment and management of Rails applications trivial\n"
] | [
-1
] | [
"performance",
"ruby",
"ruby_1.9"
] | stackoverflow_0000025950_performance_ruby_ruby_1.9.txt |
Q:
Which is a better refactoring tool for a beginner (something easy to learn & use)?
Resharper, RefactorPro, etc?
A:
I have tried using Resharper for a while and also CodeRush with Refactor later on.
I have stayed with CodeRush/Refactor. There is one major difference - the discoverability of the commands. Their learning videos are quite nice and show you a lot.
Most importantly, CodeRush has one key/shortcut for all refactorings, which makes you much more likely to actually use them. There is a side window that shows you what keys to press in order to use the templates as well. I have liked Resharper's searching for usages of a method, but CodeRush has a similar feature invoked by Shift + F12, and you can also simply press Tab on a variable, function, etc. to jump to its next usage.
I also liked the interface of CodeRush/Refactor more.
One of the pros of Resharper is the integrated testing tool, so you can run tests directly from Visual Studio.
A:
In addition to Resharper I've tried both Coderush and Visual Assist X from Whole Tomato Software.
In my opinion none of the above could measure up to Resharper from JetBrains, which I decided to go for. The others have many great features, but Resharper is in a class of its own. IMHO CodeRush looks cooler, but I found Resharper more helpful.
In response to Tomas's note about discoverability: I agree it's tough relearning all the shortcuts. But to ease the transition, Resharper also has a shortcut, Ctrl+Shift+R, which will show all refactorings appropriate for the thing the cursor is placed on:
My recommendation is download a trial of all three, try one of them at a time for a while, and make your own choice.
A:
I think ReSharper is great. I've been using it for 3 years now and I just love it more and more.
| Which is a better refactoring tool for a beginner (something easy to learn & use)? | Resharper, RefactorPro, etc?
| [
"I have tried using Resharper for some while and also CodeRush with Refactor later on. \nI have stayed with CodeRush/Refactor. There is one major difference - the discoverability of the commands. Their learning videos are quite nice and show you a lot.\nMost importantly Coderush has one key/shortcut for all refactorings which makes you much more likely to actually use them. There is side window that shows you what keys to press in order to use the templates as well. I have liked Resharper's searching for usage of a method, but CodeRush has a similar feature ignited by Shift + F12 and you can also simply press Tab on a variable, function etc. to jump to its next usage.\nI also liked the interface of CodeRush/Refactor more.\nOne of the pro's for Resharper is the integrated testing tool so yuo can run test directly from Visual Studio.\n",
"In addition to Resharper I've tried both Coderush and Visual Assist X from Whole Tomato Software.\nIn my opinion none of the above could measure up to Resharper from Jetbrains which I decided to go for. The others have many great features but Resharper is in a class of it's own. IMHO Coderush looks cooler, but I found Resharper more helpful.\nIn response to Tomas note about discoverability: I agree it's tough relearning all the shortcuts. But to ease the transistion Resharper also has a shortcut Ctrl+Shift+R which will show all refactorings appropriate for the thing the cursor is placed on:\n\nMy recommendation is download a trial of all three, try one of them at a time for a while, and make your own choice.\n",
"I think ReSharper is great. I've been using it for 3 years now and I just love it more and more.\n"
] | [
2,
2,
0
] | [] | [] | [
"refactoring"
] | stackoverflow_0000047437_refactoring.txt |
Q:
Child spans of the same width
I am trying to create a horizontal menu with the elements represented by <span>'s. The menu itself (parent <div>) has a fixed width, but the elements number is always different.
I would like to have child <span>'s of the same width, independently of how many of them are there.
What I've done so far: added a float: left; style for every span and specified its percentage width (percentages are more or less fine, as the server knows at page-generation time how many menu items there are and can divide 100% by this number). This works, except for the case when we have a division remainder (like for 3 elements); in this case I have a one-pixel hole to the right of the parent <div>, and if I round the percentages up, the last menu element wraps. I also don't really like style generation on the fly, but if there's no other solution, it's fine.
What else could I try?
It seems like this is a very common problem, however googling for "child elements of the same width" didn't help.
A:
If you have a fixed width container, then you are losing some of the effectiveness of a percentage width child span.
For your case of 33% you could add a class to the first and every 4th child span to set the correct width as necessary.
<div>
<span class="first-in-row">/<span><span></span><span></span><span class="first-in-row"><span></span><span></span>...
</div>
where
.first-in-row { width: auto; /* or */ width: XXXpx; }
A:
You might try a table with a fixed table layout. It should calculate the column widths without concerning itself with the cell contents.
table.ClassName {
table-layout: fixed
}
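The same idea can be applied to the original span-based menu without real table markup — a sketch, where div.menu stands for the parent from the question (note that display: table-cell is not supported by older IE versions):
div.menu {
    display: table;
    table-layout: fixed;
    width: 300px; /* the fixed menu width */
}
div.menu span {
    display: table-cell; /* unsized cells share the width equally */
}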
A:
have you tried the decimal values, like setting width to 33.33%?
As specified in the CSS syntax, the width property (http://www.w3.org/TR/CSS21/visudet.html#the-width-property) can be given as a <percentage> (http://www.w3.org/TR/CSS21/syndata.html#value-def-percentage), which is stated to be a <number>.
As said at the number definition (http://www.w3.org/TR/CSS21/syndata.html#value-def-number), there are some value types that must be integers and are stated as <integer>, while the others are real numbers, stated as <number>. The percentage is defined as <number>, not as <integer>, so it might work.
It will depend on the browser's ability to solve this situation: if it can't divide the parent's box by 3 without a remainder, will it draw a 1- or 2-pixel space, or make 1 or 2 spans out of the three wider than the rest?
A:
In reference to Xian's answer, there's also the :first-child pseudo-class. Rather than having a first-in-row class, you'd have this.
span:first-child {
width: auto;
}
Obviously, this is only applicable to a single line menu.
| Child spans of the same width | I am trying to create a horizontal menu with the elements represented by <span>'s. The menu itself (parent <div>) has a fixed width, but the elements number is always different.
I would like to have child <span>'s of the same width, independently of how many of them are there.
What I've done so far: added a float: left; style for every span and specified its percentage width (percents are more or less fine, as the server knows at the time of the page generation, how many menu items are there and could divide 100% by this number). This works, except for the case when we have a division remainder (like for 3 elements), in this case I have a one-pixel hole to the right of the parent <div>, and if I rounding the percents up, the last menu element is wrapped. I also don't really like style generation on the fly, but if there's no other solution, it's fine.
What else could I try?
It seems like this is a very common problem, however googling for "child elements of the same width" didn't help.
| [
"If you have a fixed width container, then you are losing some of the effectiveness of a percentage width child span.\nFor your case of 33% you could add a class to the first and every 4th child span to set the correct width as necessary.\n<div>\n<span class=\"first-in-row\">/<span><span></span><span></span><span class=\"first-in-row\"><span></span><span></span>...\n</div>\n\nwhere \n.first-in-row { width:auto; /* or */ width:XXX px; }\n\n",
"You might try a table with a fixed table layout. It should calculate the column widths without concerning itself with the cell contents.\ntable.ClassName {\n table-layout: fixed\n}\n\n",
"have you tried the decimal values, like setting width to 33.33%?\nAs specified in the CSS syntax, the width property (http://www.w3.org/TR/CSS21/visudet.html#the-width-property) can be given as <percentage> (http://www.w3.org/TR/CSS21/syndata.html#value-def-percentage), which is stated to be a <number>.\nAs said at the number definition (http://www.w3.org/TR/CSS21/syndata.html#value-def-number), there some value types that must be integers, and are stated as <integer>, and the others are real numbers, stated as <number>. The percentage is defined as <number>, not as <integer> so it might work.\nIt will depend on the browser's ability to solve this situation if it can't divide the parent's box by 3 without remaining, will it draw a 1- or 2-pixel space, or make 1 or 2 spans out of three wider than the rest.\n",
"In reference to Xian's answer, there's also the :first-child pseudo-element. Rather than having first-in-row class, you'd have this.\nspan:first-child {\n width: auto;\n}\n\nObviously, this is only applicable to a single line menu.\n"
] | [
2,
2,
0,
0
] | [] | [] | [
"css",
"html"
] | stackoverflow_0000047447_css_html.txt |
Q:
Is Google Chrome's V8 engine really that good?
Does anyone have time to take a look at it?
I've read a bit and it promises a lot; if it's half of what they say, it'll change web development a lot.
A:
I have compared Mozilla Firefox 3.0.1 and Google Chrome 0.2.149.27 on SunSpider JavaScript Benchmark with the following results:
Firefox - total: 2900.0ms +/- 1.8%
Chrome - total: 1549.2ms +/- 1.7%
and on V8 Benchmark Suite with the following results (higher score is better):
Firefox - score: 212
Chrome - score: 1842
and on Web Browser Javascript Benchmark with the following results:
Firefox - total duration: 362 ms
Chrome - total duration: 349 ms
Machine: Windows XP SP2, Intel Core2 DUO T7500 @ 2.2 Ghz, 2 GB RAM
All blog posts and articles that I've read so far also claim that V8 is clearly the fastest JavaScript engine out there. See for example - V8, TraceMonkey, SquirrelFish, IE8 BenchMarks
"... Needless to say, Chrome’s V8 blows away all the current builds of the next-generation of JavaScript VMs. Just to be clear, WebKit and FireFox engines haven’t even hit beta, but it looks like the performance bar has just been set to an astronomical height by the V8 Team."
A:
Perhaps a bit anecdotal but comparing runs between Firefox and Chrome showed a significant difference in benchmarks.
http://www2.webkit.org/perf/sunspider-0.9/sunspider.html
Try for yourself.
A:
While in Microsoft:
Consuming twice as much RAM as Firefox
and saturating the CPU with nearly six
times as many execution threads,
Microsoft's latest beta release of
Internet Explorer 8 is in fact more
demanding on your PC than Windows XP
itself, research firm Devil Mountain
Software found in performance tests.
According to the firm, which operates
a community-based testing network, IE8
Beta 2 consumed 380MB of RAM and
spawned 171 concurrent threads during
a multi-tab browsing test of popular
Web destinations
Slashdot
I wonder how @rjrapson came to that conclusion. Every blog post I see claims it's faster.
A:
The speed initially seemed substantially improved. One interesting thing is that it keeps locking up the Google Reader tab; it's gotten the sad-face at least 5 times this morning...
A:
It's really speedy. Visibly so. I was pretty impressed with its performance compared with Firefox 3. Already made it my default browser.
A:
The browser is incredibly fast in general, and Javascript is very fast in particular.
Edit: The benchmark showed Chrome to be 1.73x faster on average than FF3, and 14.8x faster on average than IE 7. String manipulation is IE 7's weak point, which I'm told has been improved greatly in IE 8.
A:
Yes, V8 is extremely fast on Vista x86 -- up to 50 times as fast as IE 7 for most benchmarks I tried. More impressively, GMail running under Chrome had one-quarter the memory footprint of GMail running under IE 7. This can probably be attributed in large part to V8.
A:
I am finding it visibly much faster on Vista x64 than IE8 and FF3.
A:
It's two times faster than Firefox 3 on my Windows XP box. FWIW, the updates in Fx3.1 are supposed to make it an order of magnitude faster.
A:
I've compared it to Firefox and Internet Explorer using this link: http://celtickane.com/2009/07/javascript-speed-test-2009-browsers/ (was http://celtickane.com/webdesign/jsspeed.php)
The difference is impressive.
212ms in Chrome, 341ms in Firefox 3, and 2188ms for Internet Explorer 7.
A:
I ran the aforementioned SunSpider JavaScript benchmark on FF3 and Chrome and got over a 2x speed increase moving from FF3 to Chrome (on a Vista 64 system - Core 2 Duo 6600 2.4GHz, 2GB RAM).
The links above show you my results - I'm very interested to see what, if any, difference the underlying OS makes.
That being said, I agree with Google that Javascript is becoming more and more important, and that the other browser makers should spend some time on optimizing it.
I love being able to drag and drop tabs - that's something I've needed for over 2 years now...
-Adam
A:
It's definitely fast. Gmail, Google Reader and Yahoo mail all load instantly. Can't say that for FF or Opera.
A:
Yes, I have seen the benchmarks, and V8 does appear to be objectively faster, but as for
it'll change web programming a lot
I personally do not think the bottleneck is currently in JavaScript, but rather in bandwidth.
| Is Google Chrome's V8 engine really that good? | Does anyone have time to take a look at it?
I've read a bit and it promises a lot, if it's half what they say, it'll change web Development a lot
| [
"I have compared Mozilla Firefox 3.0.1 and Google Chrome 0.2.149.27 on SunSpider JavaScript Benchmark with the following results:\n\nFirefox - total: 2900.0ms +/- 1.8%\nChrome - total: 1549.2ms +/- 1.7%\n\nand on V8 Benchmark Suite with the following results (higher score is better):\n\nFirefox - score: 212\nChrome - score: 1842\n\nand on Web Browser Javascript Benchmark with the following results:\n\nFirefox - total duration: 362 ms\nChrome - total duration: 349 ms\n\nMachine: Windows XP SP2, Intel Core2 DUO T7500 @ 2.2 Ghz, 2 GB RAM\nAll blog posts and articles that I've read so far also claim that V8 is clearly the fastest JavaScript engine out there. See for example - V8, TraceMonkey, SquirrelFish, IE8 BenchMarks\n\n\"... Needless to say, Chrome’s V8 blows away all the current builds of the next-generation of JavaScript VMs. Just to be clear, WebKit and FireFox engines haven’t even hit beta, but it looks like the performance bar has just been set to an astronomical height by the V8 Team.\"\n\n",
"Perhaps a bit anecdotal but comparing runs between Firefox and Chrome showed a significant difference in benchmarks. \nhttp://www2.webkit.org/perf/sunspider-0.9/sunspider.html\nTry for yourself.\n",
"While in Microsoft:\n\nConsuming twice as much RAM as Firefox\n and saturating the CPU with nearly six\n times as many execution threads,\n Microsoft's latest beta release of\n Internet Explorer 8 is in fact more\n demanding on your PC than Windows XP\n itself, research firm Devil Mountain\n Software found in performance tests.\n According to the firm, which operates\n a community-based testing network, IE8\n Beta 2 consumed 380MB of RAM and\n spawned 171 concurrent threads during\n a multi-tab browsing test of popular\n Web destinations\n\nSlashdot\nI imagine how @rjrapson came with that conclusion. Every blog post I see, calims it's faster.\n",
"The speed initially seemed substantially improved. One interesting thing is that it keeps locking up the Google REader tab, it's gotten the sad-face at least 5 times over this morning...\n",
"It's really speedy. Visibly so. I was pretty impressed with its performance compared with Firefox 3. Already made it my default browser.\n",
"The browser is incredibly fast in general, and Javascript is very fast in particular.\nEdit: The benchmark showed Chrome to be 1.73x faster on average than FF3, and 14.8x faster on average than IE 7. String manipulation is IE 7's weak point, which I'm told has been improved greatly in IE 8.\n",
"Yes, V8 is extremely fast on Vista x86 -- up to 50 times as fast as IE 7 for most benchmarks I tried. More impressively, GMail running under Chrome had one-quarter the memory footprint of GMail running under IE 7. This can probably be attributed in large part to V8.\n",
"I am finding it visibly much faster on Vista x64 than IE8 and FF3.\n",
"It's two times faster than Firefox 3 on my Windows XP box. FWIW, the updates in Fx3.1 are supposed to make it an order of magnitude faster.\n",
"I've compared it to Firefox and Internet Explorer using this link: http://celtickane.com/2009/07/javascript-speed-test-2009-browsers/ (was http://celtickane.com/webdesign/jsspeed.php)\nThe difference is impressive.\n212ms in Chrome, 341ms in Firefox 3, and 2188ms for Internet Explorer 7.\n",
"I ran the aformentioned sunspider javascript benchmark on FF3 and Chrome and got over a 2x speed increase moving from FF3 to Chrome (on a Vista 64 system - Core 2 duo 6600 2.4GHz, 2GB RAM).\nThe links above show you my results - I'm very interested to see what, if any, difference the underlying OS makes.\nThat being said, I agree with Google that Javascript is becoming more and more important, and that the other browser makers should spend some time on optimizing it.\nI love being able to drag and drop tabs - that's something I've needed for over 2 years now...\n-Adam\n",
"It's definitely fast. Gmail, Google Reader and Yahoo mail all load instantly. Can't say that for FF or Opera.\n",
"Yes I have seen the bench marks and V8 does appear to be objectively faster but as for \n\nit'll change web programming a lot\n\nI personally do not think the bottleneck is currently in javascript, but rather in bandwidth\n"
] | [
19,
5,
2,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"google_chrome",
"javascript",
"v8"
] | stackoverflow_0000040994_google_chrome_javascript_v8.txt |
Q:
Reading from a socket in C#
I am trying to read ASCII text response from a tcp open streaming socket
This is what I have come up with so far. I want to know what is the best way to do this that includes error handling. Should I also be checking the SocketError after the Receive call in the do loop?
IPEndPoint ep = new IPEndPoint(IPAddress.Parse("192.168.1.1"), 9000);
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
sock.SendTimeout = 5000;
sock.ReceiveTimeout = 5000;
if (null != sock)
{
try
{
sock.Connect(ep);
if (sock.Connected)
{
// Blocks until send returns.
int bytesSent = sock.Send(buffer);
Console.WriteLine("Sent {0} bytes.", bytesSent);
            //get the first 4 bytes, should be the length of the rest of the response.
byte[] response = new byte[4];
int bytesRec = sock.Receive(response);
int totalBytesRec = 0;
if (4 == bytesRec)
{
int len = BitConverter.ToInt32(response, 0);
response = new byte[len];
Console.WriteLine("Trying to get {0} bytes.", len);
bytesRec = 0;
do
{
bytesRec = sock.Receive(response);
totalBytesRec += bytesRec;
forecast += Encoding.ASCII.GetString(response, 0, bytesRec);
} while (totalBytesRec < len && bytesRec > 0);
if (totalBytesRec != len)
throw new Exception("The total bytes recieved from manager did not equal the expected bytes");
}
else
throw new Exception("Unable to get the response size from the manager");
Console.WriteLine("Received {0} bytes.", totalBytesRec);
}
}
catch (SocketException ex)
{
Console.WriteLine("{0} Error code: {1}.", ex.Message, ex.ErrorCode);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
sock.Shutdown(SocketShutdown.Both);
sock.Close();
}
}
A:
Your best bet is probably TcpClient. There's a great sample that does exactly what you're doing right in the .NET Documentation here:
http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.aspx
Good luck to you!
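Along those lines, here is a rough sketch of the same length-prefixed read rewritten around TcpClient; the endpoint is taken from the question, and ReadExactly is a hypothetical helper:
// Assumes: using System; using System.IO;
// using System.Net.Sockets; using System.Text;
static string ReadResponse(byte[] request)
{
    using (TcpClient client = new TcpClient("192.168.1.1", 9000))
    using (NetworkStream stream = client.GetStream())
    {
        stream.Write(request, 0, request.Length); // send the request

        byte[] lenBytes = ReadExactly(stream, 4); // 4-byte length prefix
        int len = BitConverter.ToInt32(lenBytes, 0);

        byte[] payload = ReadExactly(stream, len);
        return Encoding.ASCII.GetString(payload);
    }
}

// Stream.Read may return fewer bytes than requested, so loop until done.
static byte[] ReadExactly(Stream stream, int count)
{
    byte[] data = new byte[count];
    int total = 0;
    while (total < count)
    {
        int n = stream.Read(data, total, count - total);
        if (n == 0)
            throw new IOException("Connection closed before the full response arrived");
        total += n;
    }
    return data;
}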
| Reading from a socket in C# | I am trying to read ASCII text response from a tcp open streaming socket
This is what I have come up with so far. I want to know what is the best way to do this that includes error handling. Should I also be checking the SocketError after the Receive call in the do loop?
IPEndPoint ep = new IPEndPoint(IPAddress.Parse("192.168.1.1"), 9000);
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
sock.SendTimeout = 5000;
sock.ReceiveTimeout = 5000;
if (null != sock)
{
try
{
sock.Connect(ep);
if (sock.Connected)
{
// Blocks until send returns.
int bytesSent = sock.Send(buffer);
Console.WriteLine("Sent {0} bytes.", bytesSent);
//get the first 4 bytes, should be the lenngth of the rest of the response.
byte[] response = new byte[4];
int bytesRec = sock.Receive(response);
int totalBytesRec = 0;
if (4 == bytesRec)
{
int len = BitConverter.ToInt32(response, 0);
response = new byte[len];
Console.WriteLine("Trying to get {0} bytes.", len);
bytesRec = 0;
do
{
bytesRec = sock.Receive(response);
totalBytesRec += bytesRec;
forecast += Encoding.ASCII.GetString(response, 0, bytesRec);
} while (totalBytesRec < len && bytesRec > 0);
if (totalBytesRec != len)
throw new Exception("The total bytes recieved from manager did not equal the expected bytes");
}
else
throw new Exception("Unable to get the response size from the manager");
Console.WriteLine("Received {0} bytes.", totalBytesRec);
}
}
catch (SocketException ex)
{
Console.WriteLine("{0} Error code: {1}.", ex.Message, ex.ErrorCode);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
sock.Shutdown(SocketShutdown.Both);
sock.Close();
}
}
| [
"Your best bet is probably TcpClient. There's a great sample that does exactly what you're doing right in the .NET Documentation here:\nhttp://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.aspx\nGood luck to you!\n"
] | [
5
] | [] | [] | [
"c#",
"sockets"
] | stackoverflow_0000047533_c#_sockets.txt |
Q:
Javascript and CSS parsing performance
I am trying to improve the performance of a web application. I have metrics that I can use to optimize the time taken to return the main HTML page, but I'm concerned about the external CSS and JavaScript files that are included from these HTML pages. These are served statically, with HTTP Expires headers, but are shared between all the pages of the application.
I'm concerned that the browser has to parse these CSS and JavaScript files for each page that is displayed and so having all the CSS and JavaScript for the site shared into common files will negatively affect performance. Should I be trying to split out these files so I link from each page to only the CSS and JavaScript needed for that page, or would I get little return for my efforts?
Are there any tools that could help me generate metrics for this?
A:
Context: While it's true that HTTP overhead is more significant than parsing JS and CSS, ignoring the impact of parsing on browser performance (even if you have less than a meg of JS) is a good way to get yourself in trouble.
YSlow, Fiddler, and Firebug are not the best tools to monitor parsing speed. Unless they've been updated very recently, they don't separate the amount of time required to fetch JS over HTTP or load from cache versus the amount of time spent parsing the actual JS payload.
Parse speed is slightly difficult to measure, but we've chased this metric a number of times on projects I've worked on and the impact on pageloads were significant even with ~500k of JS. Obviously the older browsers suffer the most...hopefully Chrome, TraceMonkey and the like help resolve this situation.
Suggestion: Depending on the type of traffic you have at your site, it may be well worth your while to split up your JS payload so some large chunks of JS that will never be used on the most popular pages are never sent down to the client. Of course, this means that when a new client hits a page where this JS is needed, you'll have to send it over the wire.
However, it may well be the case that, say, 50% of your JS is never needed by 80% of your users due to your traffic patterns. If this is so, you should definitely use smaller, packaged JS payloads only on pages where the JS is necessary. Otherwise 80% of your users will suffer unnecessary JS parsing penalties on every single pageload.
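A sketch of what that per-page packaging can look like in the markup (file names are illustrative):
<!-- every page: the small, cache-stable core -->
<script type="text/javascript" src="/js/core.js"></script>
<!-- only on pages that actually use the heavy functionality -->
<script type="text/javascript" src="/js/reports.js"></script>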
Bottom Line: It's difficult to find the proper balance of JS caching and smaller, packaged payloads, but depending on your traffic pattern it's definitely well worth considering a technique other than smashing all of your JS into every single pageload.
A:
I believe YSlow does, but be aware that unless all requests are over a loopback connection you shouldn't worry. The HTTP overhead of split-up files will impact performance far more than parsing, unless your CSS/JS files exceed several megabytes.
A:
To add to kamen's great answer, I would say that on some browsers, the parse time for larger js resources grows non-linearly. That is, a 1 meg JS file will take longer to parse than two 500k files. So if a lot of your traffic is people who are likely to have your JS cached (return visitors), and all your JS files are cache-stable, it may make sense to break them up even if you end up loading all of them on every pageview.
| Javascript and CSS parsing performance | I am trying to improve the performance of a web application. I have metrics that I can use to optimize the time taken to return the main HTML page, but I'm concerned about the external CSS and JavaScript files that are included from these HTML pages. These are served statically, with HTTP Expires headers, but are shared between all the pages of the application.
I'm concerned that the browser has to parse these CSS and JavaScript files for each page that is displayed and so having all the CSS and JavaScript for the site shared into common files will negatively affect performance. Should I be trying to split out these files so I link from each page to only the CSS and JavaScript needed for that page, or would I get little return for my efforts?
Are there any tools that could help me generate metrics for this?
| [
"Context: While it's true that HTTP overhead is more significant than parsing JS and CSS, ignoring the impact of parsing on browser performance (even if you have less than a meg of JS) is a good way to get yourself in trouble.\nYSlow, Fiddler, and Firebug are not the best tools to monitor parsing speed. Unless they've been updated very recently, they don't separate the amount of time required to fetch JS over HTTP or load from cache versus the amount of time spent parsing the actual JS payload.\nParse speed is slightly difficult to measure, but we've chased this metric a number of times on projects I've worked on and the impact on pageloads were significant even with ~500k of JS. Obviously the older browsers suffer the most...hopefully Chrome, TraceMonkey and the like help resolve this situation.\nSuggestion: Depending on the type of traffic you have at your site, it may be well worth your while to split up your JS payload so some large chunks of JS that will never be used on a the most popular pages are never sent down to the client. Of course, this means that when a new client hits a page where this JS is needed, you'll have to send it over the wire.\nHowever, it may well be the case that, say, 50% of your JS is never needed by 80% of your users due to your traffic patterns. If this is so, you should definitely user smaller, packaged JS payloads only on pages where the JS is necessary. Otherwise 80% of your users will suffer unnecessary JS parsing penalties on every single pageload.\nBottom Line: It's difficult to find the proper balance of JS caching and smaller, packaged payloads, but depending on your traffic pattern it's definitely well worth considering a technique other than smashing all of your JS into every single pageload.\n",
"I believe YSlow does, but be aware that unless all requests are over a loopback connection you shouldn't worry. The HTTP overhead of split-up files will impact performance far more than parsing, unless your CSS/JS files exceed several megabytes.\n",
"To add to kamen's great answer, I would say that on some browsers, the parse time for larger js resources grows non-linearly. That is, a 1 meg JS file will take longer to parse than two 500k files. So if a lot of your traffic is people who are likely to have your JS cached (return visitors), and all your JS files are cache-stable, it may make sense to break them up even if you end up loading all of them on every pageview.\n"
] | [
14,
3,
2
] | [] | [] | [
"css",
"javascript",
"performance"
] | stackoverflow_0000046982_css_javascript_performance.txt |
Q:
Enabled Brigded Network in Vmware Server
I have a VMware Server installation with this error; does anyone know how to fix it? VMware Server Error http://soporte.cardinalsystems.com.ar/errorvmwareserver.jpg
A:
In the Network Connections on the host PC, you might try repairing the connections that are created by VMWare. Something like "VMWare Network Adapter VMnet1"
I'm assuming that the network connections (to a LAN/Internet) are working on the host computer. If not, I'd start by fixing the host first.
A:
There should be a vmware.log file or something similar in the directory that contains your vm. After you start the vm, are there any new errors in it?
Also, is the network adapter enabled?
A:
No idea what I did, but now it's working.
This is all I have done:
reinstall VMware Server several times (more than 4)
fix the network adapter
pray (more than 1000 times)
UPDATE: One of the three VMs does not work; the others work perfectly.
| Enabled Brigded Network in Vmware Server | I have the vmware server with this error, anyone knows how to fix it?VMware Server Error http://soporte.cardinalsystems.com.ar/errorvmwareserver.jpg
| [
"In the Network Connections on the host PC, you might try repairing the connections that are created by VMWare. Something like \"VMWare Network Adapter VMnet1\"\nI'm assuming that the network connections (to a LAN/Internet) are working on the host computer. If not, I'd start by fixing the host first.\n",
"There should be a vmware.log file or something similar in the directory that contains your vm. After you start the vm, are there any new errors in it? \nAlso, is the network adapter enabled?\n",
"No Idea what I do, but now its working.\nthis is all I have done:\n\nreinstall Vmware server several times ( more than 4 )\nFix network adapter\nprey ( more than 1000 times)\n\nUPDATE: One of the three VM does not work the other works perfect.\n"
] | [
2,
0,
0
] | [] | [] | [
"virtualization",
"vmware",
"vmware_server"
] | stackoverflow_0000044294_virtualization_vmware_vmware_server.txt |
Q:
Strong Validation in WPF
I have a databound TextBox in my application like so: (The type of Height is decimal?)
<TextBox Text="{Binding Height, UpdateSourceTrigger=PropertyChanged,
ValidatesOnExceptions=True,
Converter={StaticResource NullConverter}}" />
public class NullableConverter : IValueConverter {
public object Convert(object o, Type type, object parameter, CultureInfo culture) {
return o;
}
public object ConvertBack(object o, Type type, object parameter, CultureInfo culture) {
if (o as string == null || (o as string).Trim() == string.Empty)
return null;
return o;
}
}
Configured this way, any non-empty strings which cannot be converted to decimal result in a validation error which will immediately highlight the textbox. However, the TextBox can still lose focus and remain in an invalid state. What I would like to do is either:
Not allow the TextBox to lose focus until it contains a valid value.
Revert the value in the TextBox to the last valid value.
What is the best way to do this?
Update:
I've found a way to do #2. I don't love it, but it works:
private void TextBox_LostKeyboardFocus(object sender, RoutedEventArgs e) {
var box = sender as TextBox;
var binding = box.GetBindingExpression(TextBox.TextProperty);
if (binding.HasError)
binding.UpdateTarget();
}
Does anyone know how to do this better? (Or do #1.)
A:
You can force the keyboard focus to stay on the TextBox by handling the PreviewLostKeyBoardFocus event like this:
<TextBox PreviewLostKeyboardFocus="TextBox_PreviewLostKeyboardFocus" />
private void TextBox_PreviewLostKeyboardFocus(object sender, KeyboardFocusChangedEventArgs e) {
e.Handled = true;
}
A:
It sounds to me like you'll want to handle two events:
GotFocus: Will trigger when the textbox gains focus. You can store the initial value of the box.
LostFocus: Will trigger when the textbox loses focus. At this point you can do your validation and decide if you want to roll back or not.
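A minimal sketch of that store-and-revert idea, stashing the last good value in the TextBox's Tag (handler names and the empty-string rule are illustrative, matching the converter in the question):
private void TextBox_GotFocus(object sender, RoutedEventArgs e) {
    var box = (TextBox)sender;
    box.Tag = box.Text; // remember the last known-good value
}

private void TextBox_LostFocus(object sender, RoutedEventArgs e) {
    var box = (TextBox)sender;
    decimal parsed;
    bool valid = box.Text.Trim() == string.Empty ||
                 decimal.TryParse(box.Text, out parsed);
    if (!valid)
        box.Text = (string)box.Tag; // roll back to the stored value
}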
| Strong Validation in WPF | I have a databound TextBox in my application like so: (The type of Height is decimal?)
<TextBox Text="{Binding Height, UpdateSourceTrigger=PropertyChanged,
ValidatesOnExceptions=True,
Converter={StaticResource NullConverter}}" />
public class NullableConverter : IValueConverter {
public object Convert(object o, Type type, object parameter, CultureInfo culture) {
return o;
}
public object ConvertBack(object o, Type type, object parameter, CultureInfo culture) {
if (o as string == null || (o as string).Trim() == string.Empty)
return null;
return o;
}
}
Configured this way, any non-empty strings which cannot be converted to decimal result in a validation error which will immediately highlight the textbox. However, the TextBox can still lose focus and remain in an invalid state. What I would like to do is either:
Not allow the TextBox to lose focus until it contains a valid value.
Revert the value in the TextBox to the last valid value.
What is the best way to do this?
Update:
I've found a way to do #2. I don't love it, but it works:
private void TextBox_LostKeyboardFocus(object sender, RoutedEventArgs e) {
var box = sender as TextBox;
var binding = box.GetBindingExpression(TextBox.TextProperty);
if (binding.HasError)
binding.UpdateTarget();
}
Does anyone know how to do this better? (Or do #1.)
| [
"You can force the keyboard focus to stay on the TextBox by handling the PreviewLostKeyBoardFocus event like this:\n <TextBox PreviewLostKeyboardFocus=\"TextBox_PreviewLostKeyboardFocus\" /> \n\n private void TextBox_PreviewLostKeyboardFocus(object sender, KeyboardFocusChangedEventArgs e) {\n e.Handled = true;\n }\n\n",
"It sounds to me that you'll want to handle two events:\nGotFocus: Will trigger when the textbox gains focus. You can store the initial value of the box.\nLostFocus: Will trigger when the textbox loses focus. At this point you can do your validation and decide if you want to roll back or not.\n"
] | [
2,
0
] | [] | [] | [
"data_binding",
"validation",
"wpf"
] | stackoverflow_0000044298_data_binding_validation_wpf.txt |
Q:
Maven2 Eclipse integration
There seem to be two rival Eclipse plugins for integrating with Maven:
m2Eclipse
and
q4e.
Has anyone recently evaluated or used these plugins?
Why would I choose one or the other?
A:
Side by side comparison table of three maven plugins.
A:
There is only one point where q4e is actually better: the dependency viewer. You can see the dependency tree, manage your dependencies visually, and even see them in a graph. But m2eclipse works in a better way, especially because you can create your own build commands (in the run menu). q4e comes with some predefined commands, and I can't find where to define a new one. In other words, m2eclipse is more friendly to the Maven way.
A:
I have been using m2Eclipse for quite some time now and have found it to be very reliable. I wasn't aware of q4e until I saw this question, so I can't recommend one over the other.
A:
My 2cents,
I have been using Eclipse for some months now with m2eclipse integration. It's easy to use and straightforward. Once you associate your project with Maven and update the dependencies using m2eclipse, any changes to pom.xml are reflected in the entire project; even the Java version definition causes it to be compiled with the right JRE (if you have it installed and properly configured in Eclipse).
Another advantage I found is that the Maven plug-ins are easy to use integrated with Eclipse (Jetty being my best example; again, properly configured, you can easily integrate Maven, the Jetty plug-in, and the Eclipse debugger).
Compilation, packaging and all other Maven features are equally easy to use with a couple of clicks or shortcuts.
About q4e: I have been reading a lot of good stuff about it, and it seems the next versions will do a lot more than m2eclipse, with better dependency management and even visual graphs (!), but the general opinion is that m2eclipse is still better than q4e, although q4e is getting better with each new version and may surpass m2eclipse soon.
| Maven2 Eclipse integration | There seem to be two rival Eclipse plugins for integrating with Maven:
m2Eclipse
and
q4e.
Has anyone recently evaluated or used these plugins?
Why would I choose one or the other?
| [
"Side by side comparison table of three maven plugins. \n",
"There is only one point where q4e is actually better: dependency viewer. You could see the dependency tree, manage your dependencies visually and even see them in a graph. But, m2eclipse works in a better way, specially because you can create you own build commands (in the run menu). q4e comes with some predefined commands and I can't find where to define a new one. In other words, m2eclipse is more friendly to the maven way.\n",
"I have been using m2Eclipse for quiet some time now and have found it to be very reliable. I wasn't aware of q4e until I saw this question so I can't recommend one over the other.\n",
"My 2cents,\nI am using eclipse for some months now with m2eclipse integration. It's easy to use and straight forward. Once you associate your project to maven and update the dependencies using m2eclipse, any change to pom.xml are reflected to entire project, even Java version definition causes it to be compiled in right JRE (if you have it installed, and properly configured into eclipse.)\nAnother advantage I found is the maven plug-ins are easy to use integrated with eclipse (jetty being my best example, again, properly configured you can easily integrate maven, jetty-plug-in and Eclipse Debugger)\nCompilation, packaging and all other maven features are equally easy to use with a couple clicks or shortcuts.\nAbout q4e I have been reading a lot of good stuff about it and seems the next versions will do a lot more than m2eclipse, with a better dependency management and even visual graphs (!) but the general opinion is that m2eclipse is still better than q4e but q4e is getting better each new version and maybe will surpass m2eclipse soon.\n"
] | [
2,
2,
0,
0
] | [] | [] | [
"build",
"eclipse",
"maven_2"
] | stackoverflow_0000047522_build_eclipse_maven_2.txt |
Q:
GODI installation issue
I'm trying to install GODI on linux (Ubuntu). It's a library management tool for the ocaml language. I've actually installed this before --twice, but awhile ago-- with no issues --that I can remember-- but this time I just can't figure out what I'm missing.
$ ./bootstrap --prefix /home/nlucaroni/godi
$ ./bootstrap_stage2
.: 1: godi_confdir: not found
Error: Command fails with code 2: /bin/sh
Failure!
I had added the proper directories to the path, and they show up with a quick echo $path, and godi_confdir reported as being:
/home/nlucaroni/godi/etc
(...and the directory exists, with the godi.conf file present). So, I can't figure out why ./bootstrap_stage2 isn't working.
A:
What is the output of which godi_confdir?
P.S. I remember having this exact same problem, but I don't remember precisely how I fixed it.
A:
Hey Chris, I just figured it out. Silly mistake.
It was just a permission issue: running everything from /tmp/ worked fine --well, after enabling GODI_BASEPKG_PCRE in godi.conf. I had been running it from my home directory; you forget simple things like that at 3:00am.
--
Actually I'm having another problem. Installing conf-opengl-6:
GODI can't seem to find the GL/gl.h file, though I can --you can see that it is Checking the suggestion.
> ===> Configuring for conf-opengl-6
> Checking the suggestion
> Include=/usr/include/GL/gl.h Library=/<GLU+GL>
> Checking /usr:
> Include=/usr/include/GL/gl.h Library=/usr/lib/<GLU+GL>
> Checking /usr:
> Include=/usr/local/include/GL/gl.h Library=/usr/local/lib/<GLU+GL>
> Checking /usr/local:
> Include=/usr/local/include/GL/gl.h Library=/usr/local/lib/<GLU+GL>
> Exception: Failure "Cannot find library".
> Error: Exec error: File /home/nlucaroni/godi/build/conf/conf-opengl/./../../mk/bsd.pkg.mk, line 1022: Command returned with non-zero exit code
> Error: Exec error: File /home/nlucaroni/godi/build/conf/conf-opengl/./../../mk/bsd.pkg.mk, line 1375: Command returned with non-zero exit code
### Error: Command fails with code 1: godi_console
edit - OK, this is fixed too... it just needed GLU; weird, since the test configuration option said everything was fine.
| GODI installation issue | I'm trying to install GODI on linux (Ubuntu). It's a library management tool for the ocaml language. I've actually installed this before --twice, but awhile ago-- with no issues --that I can remember-- but this time I just can't figure out what I'm missing.
$ ./bootstrap --prefix /home/nlucaroni/godi
$ ./bootstrap_stage2
.: 1: godi_confdir: not found
Error: Command fails with code 2: /bin/sh
Failure!
I had added the proper directories to the path, and they show up with a quick echo $path, and godi_confdir reported as being:
/home/nlucaroni/godi/etc
(...and the directory exists, with the godi.conf file present). So, I can't figure out why ./bootstrap_stage2 isn't working.
| [
"What is the output of which godi_confdir?\nP.S. I remember having this exact same problem, but I don't remember precisely how I fixed it.\n",
"Hey Chris, I just figured it out. Silly mistake.\nIt was just a permission issue, running everything from /tmp/ worked fine --well after enabling GODI_BASEPKG_PCRE in godi.conf. I had been running it from my home directory, you forget simple things like that at 3:00am.\n--\nActually I'm having another problem. Installing conf-opengl-6:\nGODI can't seen to find the GL/gl.h file, though I can --you can see that it is Checking the suggestion.\n> ===> Configuring for conf-opengl-6\n> Checking the suggestion\n> Include=/usr/include/GL/gl.h Library=/<GLU+GL>\n> Checking /usr:\n> Include=/usr/include/GL/gl.h Library=/usr/lib/<GLU+GL>\n> Checking /usr:\n> Include=/usr/local/include/GL/gl.h Library=/usr/local/lib/<GLU+GL>\n> Checking /usr/local:\n> Include=/usr/local/include/GL/gl.h Library=/usr/local/lib/<GLU+GL>\n> Exception: Failure \"Cannot find library\".\n> Error: Exec error: File /home/nlucaroni/godi/build/conf/conf-opengl/./../../mk/bsd.pkg.mk, line 1022: Command returned with non-zero exit code\n> Error: Exec error: File /home/nlucaroni/godi/build/conf/conf-opengl/./../../mk/bsd.pkg.mk, line 1375: Command returned with non-zero exit code\n\n### Error: Command fails with code 1: godi_console\n\nedit - Ok, this is fixed too... just needed GLU, weird since the test configuration option said everything was fine. \n"
] | [
2,
1
] | [] | [] | [
"godi",
"linux",
"ocaml"
] | stackoverflow_0000047309_godi_linux_ocaml.txt |
Q:
How to detect the presence of a default recording device in the system?
How do I detect if the system has a default recording device installed?
I bet this can be done through some calls to the Win32 API; does anyone have any experience with this?
I'm talking about doing this through code, not by opening the control panel and taking a look under sound options.
A:
Using the DirectX SDK, you can call DirectSoundCaptureEnumerate, which will call your DSEnumCallback function for each DirectSoundCapture device on the system. The first parameter passed to your DSEnumCallback is an LPGUID, which is the "Address of the GUID that identifies the device being enumerated, or NULL for the primary device".
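A sketch of that enumeration, assuming (as the documentation describes) that the NULL-GUID entry is only reported when a default capture device is present; it needs dsound.h and a link against dsound.lib:
#include <windows.h>
#include <dsound.h>

BOOL CALLBACK CaptureEnumCallback( LPGUID lpGuid, LPCWSTR lpcstrDescription,
                                   LPCWSTR lpcstrModule, LPVOID lpContext )
{
    if ( lpGuid == NULL ) // a NULL GUID marks the primary (default) device
        *static_cast<bool*>( lpContext ) = true;
    return TRUE; // keep enumerating
}

bool HasDefaultCaptureDevice()
{
    bool found = false;
    DirectSoundCaptureEnumerateW( CaptureEnumCallback, &found );
    return found;
}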
If all you need to do is find out if a recording device is present (I don't think this is good enough if you really need to know the default device), you can use waveInGetNumDevs:
#include <tchar.h>
#include <windows.h>
#include "mmsystem.h"
int _tmain( int argc, wchar_t *argv[] )
{
UINT deviceCount = waveInGetNumDevs();
if ( deviceCount > 0 )
{
        for ( UINT i = 0; i < deviceCount; i++ )
{
WAVEINCAPSW waveInCaps;
            waveInGetDevCapsW( i, &waveInCaps, sizeof( WAVEINCAPSW ) );
// do some stuff with waveInCaps...
}
}
return 0;
}
A:
There is an Open Source Audio API called PortAudio that has a method you could use. I think the method is called Pa_GetDeviceInfo() or something.
A:
The win32 api has a function called waveInGetNumDevs for it.
| How to detect the presence of a default recording device in the system? | How do I detect if the system has a default recording device installed?
I bet this can be done through some calls to the Win32 API, anyone has any experience with this?
I'm talking about doing this through code, not by opening the control panel and taking a look under sound options.
| [
"Using the DirectX SDK, you can call DirectSoundCaptureEnumerate, which will call your DSEnumCallback function for each DirectSoundCapture device on the system. The first parameter passed to your DSEnumCallback is an LPGUID, which is the \"Address of the GUID that identifies the device being enumerated, or NULL for the primary device\".\nIf all you need to do is find out if a recording device is present (I don't think this is good enough if you really need to know the default device), you can use waveInGetNumDevs:\n#include <tchar.h>\n#include <windows.h>\n#include \"mmsystem.h\"\n\nint _tmain( int argc, wchar_t *argv[] )\n{\n UINT deviceCount = waveInGetNumDevs();\n\n if ( deviceCount > 0 )\n {\n for ( int i = 0; i < deviceCount; i++ )\n {\n WAVEINCAPSW waveInCaps;\n\n waveInGetDevCapsW( i, &waveInCaps, sizeof( WAVEINCAPS ) );\n\n // do some stuff with waveInCaps...\n }\n }\n\n return 0;\n}\n\n",
"There is an Open Source Audio API called PortAudio that has a method you could use. I think the method is called Pa_GetDeviceInfo() or something.\n",
"The win32 api has a function called waveInGetNumDevs for it.\n"
] | [
1,
0,
0
] | [] | [] | [
"audio",
"device",
"winapi"
] | stackoverflow_0000041330_audio_device_winapi.txt |
Q:
Developer moving from SQL Server to Oracle
We are bringing a new project in house and whereas previously all our work was on SQL Server the new product uses an oracle back end.
Can anyone advise any crib sheets or such like that gives an SQL Server person like me a rundown of what the major differences are - Would like to be able to get up and running as soon as possible.
A:
@hamishcmcn
Your assertion that '' == Null is simply not true. In the relational world Null should only ever be read to mean "I don't know". The only result you will get from Oracle (and most other decent databases) when you compare a value to Null is 'False'.
Off the top of my head the major differences between SQL Server and Oracle are:
Learn to love transactions, they are your friend - auto commit is not.
Read consistency and the lack of blocking reads
SQL Server Database == Oracle Schema
PL/SQL is a lot more feature rich than T-SQL
Learn the difference between an instance and a database in Oracle
You can have more than one Oracle instance on a server
No pointy clicky wizards (unless you really, really want them)
Everyone else, please help me out and add more.
A:
The main difference I noticed in moving from SQL Server to Oracle was that in Oracle you need to use cursors in the SELECT statements.
Also, temporary tables are used differently. In SQL Server you can create one in a procedure and then DROP it at the end, but in Oracle you're supposed to already have a temporary table created before the procedure is executed.
I'd look at datatypes too since they're quite different.
A:
String concatenation:
Oracle: || or concat()
Sql Server: +
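For example:
-- Oracle
SELECT 'foo' || 'bar' FROM dual;
SELECT CONCAT('foo', 'bar') FROM dual;

-- SQL Server
SELECT 'foo' + 'bar';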
These links could be interesting:
http://www.dba-oracle.com/oracle_news/2005_12_16_sql_syntax_differences.htm
http://www.mssqlcity.com/Articles/Compare/sql_server_vs_oracle.htm (old one: Ora9 vs Sql 2000)
A:
Watch out for the difference in the way the empty string is treated.
INSERT INTO atable (a_varchar_column) VALUES ('');
is the same as
INSERT INTO atable (a_varchar_column) VALUES (NULL);
I have no sqlserver experience, but I understand that it differentiates between the two
A:
@hamishmcn
Generally that's a bad idea. Temporary tables in Oracle should just be created and left (unless it's a once-off/very rarely used). The contents of the temporary table are local to each session and truncated when the session is closed. There is little point in paying the cost of creating/dropping the temporary table; it might even result in clashes if two processes try to create the table at the same time, and in unexpected commits from performing DDL.
A:
What you have asked here is a huge topic, especially since you haven't really said what you are using the database for (e.g., are you going to be going from T-SQL -> PL/SQL, or just changing the backend database your Java application is connected to?)
If you are serious about using your database choice to its potential, then I suggest you dig a bit deeper and read something like Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions by Tom Kyte.
A:
If you need to, you can create and drop temporary tables in procedures using the EXECUTE IMMEDIATE command.
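A minimal PL/SQL sketch (the table name is made up; note that static SQL referencing the table won't compile before it exists, so you'd access it dynamically too):
BEGIN
    EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE my_temp (x NUMBER)';
    EXECUTE IMMEDIATE 'INSERT INTO my_temp VALUES (1)';
    -- ... work with the table via dynamic SQL ...
    EXECUTE IMMEDIATE 'DROP TABLE my_temp';
END;
/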
A:
To andy47: I did not mean that you can use the empty string in a comparison, but Oracle treats it like null if you use it in an insert.
Re-read my entry, then try the following SQL:
CREATE TABLE atable (acol VARCHAR(10));
INSERT INTO atable VALUES( '' );
SELECT * FROM atable WHERE acol IS NULL;
And to avoid a "yes it is, no it isn't" situation, here is an external link
| Developer moving from SQL Server to Oracle | We are bringing a new project in house and whereas previously all our work was on SQL Server the new product uses an oracle back end.
Can anyone advise any crib sheets or such like that gives an SQL Server person like me a rundown of what the major differences are - Would like to be able to get up and running as soon as possible.
| [
"@hamishcmcn\nYour assertion that '' == Null is simply not true. In the relational world Null should only ever be read to mean \"I don't know\". The only result you will get from Oracle (and most other decent databases) when you compare a value to Null is 'False'.\nOff the top of my head the major differences between SQL Server and Oracle are:\n\nLearn to love transactions, they are your friend - auto commit is not.\nRead consistency and the lack of blocking reads\nSQL Server Database == Oracle Schema\nPL/SQL is a lot more feature rich than T-SQL\nLearn the difference between an instance and a database in Oracle\nYou can have more than one Oracle instance on a server\nNo pointy clicky wizards (unless you really, really want them)\n\nEveryone else, please help me out and add more.\n",
"The main difference I noticed in moving from SQL Server to Oracle was that in Oracle you need to use cursors in the SELECT statements.\nAlso, temporary tables are used differently. In SQL Server you can create one in a procedure and then DROP it at the end, but in Oracle you're supposed to already have a temporary table created before the procedure is executed.\nI'd look at datatypes too since they're quite different.\n",
"String concatenation:\nOracle: || or concat()\nSql Server: + \nThese links could be interesting:\nhttp://www.dba-oracle.com/oracle_news/2005_12_16_sql_syntax_differences.htm\nhttp://www.mssqlcity.com/Articles/Compare/sql_server_vs_oracle.htm (old one: Ora9 vs Sql 2000)\n",
"Watch out for the difference in the way the empty string is treated.\nINSERT INTO atable (a_varchar_column) VALUES (''); \nis the same as \nINSERT INTO atable (a_varchar_column) VALUES (NULL);\n\nI have no sqlserver experience, but I understand that it differentiates between the two\n",
"@hamishmcn\nGenerally that's a bad idea.. Temporary tables in oracle should just be created and left (unless its a once off/very rarely used). The contents of the temporary table is local to each session and truncated when the session is closed. There is little point in paying the cost of creating/dropping the temporary table, might even result in clashes if two processes try to create the table at the same time and unexpected commits from performing DDL.\n",
"What you have asked here is a huge topic, especially since you haven't really said what you are using the database for (eg, are you going to be going from TSQL -> PL/SQL or just changing the backend database your java application is connected to?)\nIf you are serious about using your database choice to its potiential, then I suggest you dig a bit deeper and read something like Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions by Tom Kyte.\n",
"If you need to you can create and drop temporary tables in procedures using the Execute Immediate command.\n",
"to andy47, I did not mean that you can use the empty string in a comparison, but oracle treats it like null if you use it in an insert. \nRe-read my entry, then try the following SQL:\nCREATE TABLE atable (acol VARCHAR(10));\nINsERT INTO atable VALUES( '' );\nSELECT * FROM atable WHERE acol IS NULL;\n\nAnd to avoid a \"yes it is, no it isn't\" situation, here is an external link \n"
] | [
3,
2,
2,
1,
1,
1,
0,
0
] | [] | [] | [
"database",
"oracle",
"sql_server"
] | stackoverflow_0000039438_database_oracle_sql_server.txt |
Q:
How do I become "test infected" with TDD?
I keep reading about people who are "test infected", meaning that they don't just "get" TDD but also can't live without it. They've "had the makeover" as it were. The question is, how do I get like that?
A:
Part of the point of being "test infected" is that you've used TDD enough and seen the successes enough that you don't want to code without it. Once you've gone through a cycle of writing tests first, then coding and refactoring and seeing your bug counts go down and your code get better as a result, not only does it become second nature like Zxaos said, you have a hard time going back to Code First. This is being test infected.
A:
You've already read about TDD; reading more isn't going to excite you.
Instead, you need a genuine personal success story.
Here's how. Grab some code from a core module, code that doesn't depend on external systems or too many other subroutines. Doesn't matter how complex or simple the routine is.
Then start writing unit tests against it. (I'm assuming you have an xUnit or similar for your language.) Be really obnoxious with the tests -- test every boundary case, test max-int and min-int, test null's, test strings and lists with millions of elements, test strings with Korean and control characters and right-to-left Arabic and quotes and backslashes and periods and other things that tend to break things if not escaped.
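To make that concrete, here is a small sketch of what such tests might look like in C# with NUnit (StringUtil.Reverse is a hypothetical stand-in for whatever routine you grabbed):
using System;
using NUnit.Framework;

[TestFixture]
public class StringUtilTests
{
    [Test]
    public void Reverse_EmptyString_ReturnsEmpty()
    {
        Assert.AreEqual("", StringUtil.Reverse(""));
    }

    [Test]
    public void Reverse_Null_ThrowsArgumentNullException()
    {
        Assert.Throws<ArgumentNullException>(() => StringUtil.Reverse(null));
    }

    [Test]
    public void Reverse_KoreanControlAndRtlCharacters_RoundTrips()
    {
        // Reversing twice should give back the original, no matter how nasty the input.
        string nasty = "한국어\t\"\\عربي.end";
        Assert.AreEqual(nasty, StringUtil.Reverse(StringUtil.Reverse(nasty)));
    }
}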
What you'll find is.... bugs! At first you might think these bugs aren't important -- you haven't run into these problems yet, your code probably would never do this, etc. etc.. But my experience is if you keep pushing forward you'll be amazed at the number of little problems. Eventually it becomes hard to believe that none of these bugs will ever cause a problem.
Plus you get a great feeling of accomplishment with something is done really, really well. We know code is never perfect and rarely free of bugs, so it's nice when we've exhausted so many tests that we really do feel confident. Confidence is a nice feeling.
Finally, I think the last event that will trigger the love will happen weeks or months later. Maybe you're fixing a bug or adding a feature or refactoring some code, and something you do will break a unit test. "Huh?" you'll say, not understanding why the new change was even relevant to the broken test. Then you'll find it, and find enlightenment. Because you really didn't know that you were breaking code, and the tests saved you.
Hallelujah!
A:
Learn about TDD to start, and then begin integrating it into your workflow. If you use the methodologies enough, you'll find that they become second nature and you'll start framing all of your development tasks within that framework.
Also, start using the JUnit (or xUnit) framework for your language of choice.
A:
One word, practice! There is some overhead with doing TDD and the way to overcome it is to practice and make sure you are using tools to help the process. You need to learn the tools like the back of your hand. Once you learn the tools to go along with the process you are learning, then it will click and you will get fluent with writing tests first to flush the code out. Then you will be "test infected".
I answered a question similar to this a while back. You may want to check it out also. I mention some tools and explain learning TDD. Out of these tools, Resharper and picking a good mocking framework are critical for doing TDD. I can't stress enough how important it is to learn these tools alongside the testing framework you are using.
| How do I become "test infected" with TDD? | I keep reading about people who are "test infected", meaning that they don't just "get" TDD but also can't live without it. They've "had the makeover" as it were. The question is, how do I get like that?
| [
"Part of the point of being \"test infected\" is that you've used TDD enough and seen the successes enough that you don't want to code without it. Once you've gone through a cycle of writing tests first, then coding and refactoring and seeing your bug counts go down and your code get better as a result, not only does it become second nature like Zxaos said, you have a hard time going back to Code First. This is being test infected.\n",
"You've already read about TDD; reading more isn't going to excite you.\nInstead, you need a genuine personal success story.\nHere's how. Grab some code from a core module, code that doesn't depend on external systems or too many other subroutines. Doesn't matter how complex or simple the routine is.\nThen start writing unit tests against it. (I'm assuming you have an xUnit or similar for your language.) Be really obnoxious with the tests -- test every boundary case, test max-int and min-int, test null's, test strings and lists with millions of elements, test strings with Korean and control characters and right-to-left Arabic and quotes and backslashes and periods and other things that tend to break things if not escaped.\nWhat you'll find is.... bugs! At first you might think these bugs aren't important -- you haven't run into these problems yet, your code probably would never do this, etc. etc.. But my experience is if you keep pushing forward you'll be amazed at the number of little problems. Eventually it becomes hard to believe that none of these bugs will ever cause a problem.\nPlus you get a great feeling of accomplishment with something is done really, really well. We know code is never perfect and rarely free of bugs, so it's nice when we've exhausted so many tests that we really do feel confident. Confidence is a nice feeling.\nFinally, I think the last event that will trigger the love will happen weeks or months later. Maybe you're fixing a bug or adding a feature or refactoring some code, and something you do will break a unit test. \"Huh?\" you'll say, not understanding why the new change was even relevant to the broken test. Then you'll find it, and find enlightenment. Because you really didn't know that you were breaking code, and the tests saved you.\nHallelujah!\n",
"Learn about TDD to start, and then begin integrating it into your workflow. If you use the methodologies enough, you'll find that they become second nature and you'll start framing all of your development tasks within that framework.\nAlso, start using the J-Unit (or X-Unit) framework for your language of choice.\n",
"One word, practice! There is some overhead with doing TDD and the way to overcome it is to practice and make sure you are using tools to help the process. You need to learn the tools like the back of your hand. Once you learn the tools to go along with the process you are learning, then it will click and you will get fluent with writing tests first to flush the code out. Then you will be \"test infected\".\nI answered a question similar to this a while back. You may want to check it out also. I mention some tools and explain learning TDD. Out of these tools, Resharper and picking a good mocking framework are critical for doing TDD. I can't stress learning these tools to go along with the testing framework you are using enough.\n"
] | [
16,
16,
3,
1
] | [] | [] | [
"tdd",
"testing",
"unit_testing"
] | stackoverflow_0000047692_tdd_testing_unit_testing.txt |
Q:
Can you compile Apache HTTP Server and redeploy its binaries to a different location?
As part of our product release we ship Apache HTTP Server binaries that we have compiled on our (UNIX) development machine.
We tell our clients to install the binaries (on their UNIX servers) under the same directory structure that we compiled it under. For some clients this is not appropriate, e.g. where there are restrictions on where they can install software on their servers and they don't want to compile Apache themselves.
Is there a way of compiling Apache HTTP Server so its installation location(s) can be specified dynamically using environment variables ?
I spent a few days trying to sort this out and couldn't find a way to do it. It led me to believe that the Apache binaries were hard coding some directory paths at compilation preventing the portability we require.
Has anyone managed to do this ?
A:
I think the way to do (get around) this problem is to develop a "./configure && make" script that your client uses to install, specify and compile the binaries. That would of course require that the client has all the source code installed on their server, or you can make it available on an NFS share.
A:
If you are compiling Apache2 for a particular location but want your clients to be able to install it somewhere else (and I'm assuming they have the same architecture and OS as your build machine) then you can do it but the apachectl script will need some after-market hacking.
I just tested these steps:
Unpacked the Apache2 source (this should work with Apache 1.3 as well though) and ran ./configure --prefix=/opt/apache2
Ran make then sudo make install to install on the build machine.
Switch to the directory above the install (/opt) and tar and gzip up the binaries and config files. I used cd /opt; sudo tar cf - apache2 | gzip -c > ~/apache2.tar.gz
Move the tar file to the target machine. I decided to install in /opt/mynewdir/dan/apache2 to test. So basically, your clients can't use rpm or anything like that -- unless you know how to make that relocatable (I don't :-) ).
Anyway, your client's conf/httpd.conf file will be full of hard-coded absolute paths -- they can just change these to whatever they need. The apachectl script also has hard coded paths. It's just a shell script so you can hack it or give them a sed script to convert the old paths from your build machine to the new path on your clients.
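A sed one-liner of that flavor might look like this (a sketch, reusing the example paths above):
cd /opt/mynewdir/dan/apache2
sed -i.bak 's|/opt/apache2|/opt/mynewdir/dan/apache2|g' conf/httpd.conf bin/apachectl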
I skipped all that hackery and just ran ./bin/httpd -f /opt/mynewdir/dan/apache2/conf/httpd.conf :-)
Hope that helps. Let us know any error messages you get if it's not working for you.
A:
I think the way to do (get around) this problem is to develop a "./configure && make" script that your client uses to install, specify and compile the binaries. That would of course require that the client has all the source code installed on their server, or you can make it available on an NFS share.
Not to mention a complete build toolchain. These days, GCC doesn't come by default with most major distributions. Wouldn't it be sane to force the client to install it to /opt/my_apache2/ or something like that?
A:
@Hissohathair
I suggest 1 change to @Hissohathair's answer.
6). ./bin/httpd -d <server path> (although it can be overridden in the config file)
In apachectl there is an HTTPD variable that you can override to point at the relocated binary.
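For example (a sketch; the exact variable layout in apachectl differs between Apache versions):
# near the top of bin/apachectl on the client machine:
HTTPD='/opt/mynewdir/dan/apache2/bin/httpd -d /opt/mynewdir/dan/apache2'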
| Can you compile Apache HTTP Server and redeploy its binaries to a different location? | As part of our product release we ship Apache HTTP Server binaries that we have compiled on our (UNIX) development machine.
We tell our clients to install the binaries (on their UNIX servers) under the same directory structure that we compiled it under. For some clients this is not appropriate, e.g. where there are restrictions on where they can install software on their servers and they don't want to compile Apache themselves.
Is there a way of compiling Apache HTTP Server so its installation location(s) can be specified dynamically using environment variables ?
I spent a few days trying to sort this out and couldn't find a way to do it. It led me to believe that the Apache binaries were hard coding some directory paths at compilation preventing the portability we require.
Has anyone managed to do this ?
| [
"I think the way to do(get around) this problem is to develop a \"./configure && make\" script that your client uses to install, specify and compile the binaries. That would offcourse require that the client has all the source-code installed on his server or you can make it available on an NFS share.\n",
"If you are compiling Apache2 for a particular location but want your clients to be able to install it somewhere else (and I'm assuming they have the same architecture and OS as your build machine) then you can do it but the apachectl script will need some after-market hacking.\nI just tested these steps:\n\nUnpacked the Apache2 source (this should work with Apache 1.3 as well though) and ran ./configure --prefix=/opt/apache2\n\nRan make then sudo make install to install on the build machine.\n\nSwitch to the install directory (/opt/apache2) and tar and gzip up the binaries and config files. I used cd /opt/apache2; sudo tar cf - apache2 | gzip -c > ~/apache2.tar.gz\n\nMove the tar file to the target machine. I decided to install in /opt/mynewdir/dan/apache2 to test. So basically, your clients can't use rpm or anything like that -- unless you know how to make that relocatable (I don't :-) ).\n\nAnyway, your client's conf/httpd.conf file will be full of hard-coded absolute paths -- they can just change these to whatever they need. The apachectl script also has hard coded paths. It's just a shell script so you can hack it or give them a sed script to convert the old paths from your build machine to the new path on your clients.\n\nI skipped all that hackery and just ran ./bin/httpd -f /opt/mynewdir/dan/conf/httpd.conf :-)\n\n\nHope that helps. Let us know any error messages you get if it's not working for you.\n",
"\nI think the way to do(get around) this problem is to develop a \"./configure && make\" script that your client uses to install, specify and compile the binaries. That would offcourse require that the client has all the source-code installed on his server or you can make it available on an NFS share.\n\nNot to mention a complete build toolchain. These days, GCC doesn't come default with most major distributions. Wouldn't it be sane to force the client to install it to /opt/my_apache2/ or something like that?\n",
"@Hissohathair\nI suggest 1 change to @Hissohathair's answer.\n6). ./bin/httpd -d <server path> (although it can be overridden in the config file) \n\nIn apacheclt there is a variable for HTTPD where you could override to use it.\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"apache",
"httpserver",
"unix"
] | stackoverflow_0000022140_apache_httpserver_unix.txt |
Q:
How to start coding the "Dining Philosophers" simulation?
I'm not a beginner at C# but I really need to increase my understanding, so I've picked a classic deadlock problem to code to help teach myself some of the more advanced concepts of C#. The Dining Philosophers Problem seems like a good one, but I need a little help to get started. I know I need to approach the "diners" as objects, but to simulate the random delays between eating, should I look to threading with each diner in a separate thread? Do I need some kind of "master" to monitor all the actions? Any general design concept advice is welcome, but I'd like to do the grunt programming as an exercise. Thanks!
A:
I think the best approach to simulate it would be a Fork class with a method like use() that takes hold of the fork (setting bool available = false) and a release() that releases it.
A Philosopher class with getFork(Fork) and releaseFork(Fork) methods that handle holding and releasing a Fork object (it seems to me a timer would be good in a useFork() method so you can really perceive the deadlock).
And lastly a DiningTable (or any other name) class that creates the instances and does the logging. If you plan to use threads, this is where you should start a thread for each Philosopher competing for the Forks.
As a suggestion, you could implement a Plate class holding a quantity of spaghetti that the Philosopher.useFork() method lowers over time. This way you can see which Philosopher finishes first.
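To make the shape of the design concrete without spoiling the exercise, here is a bare C# skeleton (all names and signatures are just suggestions; the bodies are deliberately left empty for you to fill in):
using System.Threading;

public class Fork
{
    private bool available = true;
    public bool Use() { /* take the fork if available, setting available = false */ return false; }
    public void Release() { /* set available = true */ }
}

public class Philosopher
{
    public string Name { get; set; }
    public void GetFork(Fork fork) { /* wait and retry until fork.Use() succeeds */ }
    public void ReleaseFork(Fork fork) { /* fork.Release() */ }
    public void UseFork() { /* hold both forks, Thread.Sleep a random delay, eat */ }
}

public class DiningTable
{
    // Creates the Forks and Philosophers, starts one Thread per Philosopher,
    // and logs what happens so you can watch the deadlock occur.
}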
I will leave the implementations to you, of course, since your objective is to learn C# ... in my experience, you learn better by building something concrete like these classes ;) Besides, you can find lots of implementations on Google if you want to cheat ...
I invite you to share the code afterwards. It makes a great study reference.
Hope this helps you.
| How to start coding the "Dining Philosophers" simulation? | I'm not a beginner at C# but I really need to increase my understanding, so I've picked a classic deadlock problem to code to help teach myself some of the more advanced concepts of C#. The Dining Philosophers Problem seems like a good one, but I need a little help to get started. I know I need to approach the "diners" as objects, but to simulate the random delays between eating, should I look to threading with each diner in a separate thread? Do I need some kind of "master" to monitor all the actions? Any general design concept advice is welcome, but I'd like to do the grunt programming as an exercise. Thanks!
| [
"I think the best approach to simulate it would be a Fork class with a method like use() that holds the fork (bool available = false) and a release() that releases it.\nA Philosopher class with getFork(Fork) and releaseFork(Fork) that operates the holding/releasing of the object Fork (seems to me a timer would be good in a method useFork() so you can really perceive the deadlock.\nAnd for Last a DinningTable (or any other name) class that creates instances, and do the log. If you plan to use threads, here is where you should implement a thread for each Philosopher concurring for the Fork.\nAs a suggestion, you could implement a Plate Class, holding a quantity of spaghetti that the Philosopher.useFork() method lower during the time frame. This way you can see which Philosopher finishes first.\nI will let the implementations for you, of course, since your objective is to learn C# ... in my experience, you better learn doing something concrete like these classes ;) Besides, you can find lots of implementations on Google if you want to cheat ... \nI invite you to share the code after it . It's a great Study Reference.\nHope This helps you.\n"
] | [
6
] | [] | [] | [
"c#",
"deadlock",
"puzzle"
] | stackoverflow_0000047707_c#_deadlock_puzzle.txt |
Q:
Referencing Embedded resources from other resources in c#
In my web application I include all of my JavaScripts as js files that are embedded resources in the assembly, and add them to the page using ClientScriptManager.GetWebResourceUrl(). However, in some of my js files, I have references to other static assets like image urls. I would like to make those assembly resources as well. Is there a way to tokenize the reference to the resource? e.g.
this.drophint = document.createElement('img');
this.drophint.src = '/_layouts/images/dragdrophint.gif';
Could become something like:
this.drophint = document.createElement('img');
this.drophint.src = '{resource:assembly.location.dragdrophint.gif}';
A:
I'd suggest that you emit the web resources as a dynamic javascript associative array.
Server side code:
StringBuilder script = new StringBuilder();
script.Append("var imgResources = {};");
script.AppendFormat("imgResources['{0}'] = '{1}';",
"drophint",
Page.ClientScript.GetWebResourceUrl(Page.GetType(), "assembly.location.dragdrophint.gif"));
script.AppendFormat("imgResources['{0}'] = '{1}';",
"anotherimg",
Page.ClientScript.GetWebResourceUrl(Page.GetType(), "assembly.location.anotherimg.gif"));
Page.ClientScript.RegisterClientScriptBlock(
Page.GetType(),
"imgResources",
script.ToString(),
true);
Then your client side code looks like this:
this.drophint = document.createElement('img');
this.drophint.src = imgResources['drophint'];
this.anotherimg = document.createElement('img');
this.anotherimg.src = imgResources['anotherimg'];
Hope this helps.
A:
I don't particularly care for the exact implementation @Jon suggests, but the idea behind it is sound and I would concur that emitting these would be a good thing to do.
A slightly better implementation, though this is all subjective to some degree, would be to create a server-side model (read: C# class(es)) that represents this dictionary (or simply use an instance of Dictionary<string, string>) and serialize that to JavaScript literal object notation. That way you are not dealing with the string hacking you see in Jon's example (if that bothers you).
A:
I concur with Jason's assessment of the initial solution I proposed, it can definitely be improved. My solution represents an older school javascript mentality (read, pre the emergence of ajax and JSON). There are always better ways to solve a problem, which one of the reasons why StackOverflow is so cool. Collectively we are better at the craft of programming than anyone of us on our own.
Based on Jason's ideas I'd revise my initial code, and revise some of what Jason suggested. Implement a C# class with two properties, the img resource id and a property that contains the WebResourceUrl. Then, where I differ some from Jason is that rather than using a Dictionary<string, string> I'd propose using a List<MyImageResourceClass>, which you can then in turn serialize to JSON (using DataContractJsonSerializer), and emit the JSON as the dynamic script, rather than manually generating the javascript using a string builder.
Why a List? I think you may find that dictionaries, when serialized to JSON, at least using the DataContractJsonSerializer (FYI, this is available with the 3.5 framework only; with the 2.0 or 3.0 framework you'd need to bolt on ASP.NET AJAX and use its JSON serializer), are a little more cumbersome to work with than how a list would serialize. Although that is subjective.
There are implications too with your client side code. Now on the client side you'll have an array of the JSON serialized MyImageResourceClass instances. You'd need to iterate through this array creating your img tags as you go.
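A rough sketch of that approach (ImageResource and its property names are invented for illustration; DataContractJsonSerializer requires .NET 3.5):
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

[DataContract]
public class ImageResource
{
    [DataMember(Name = "id")] public string Id { get; set; }
    [DataMember(Name = "url")] public string Url { get; set; }
}

// In the page's code-behind, e.g. called from Page_Load:
protected void EmitImageResources()
{
    var resources = new List<ImageResource>
    {
        new ImageResource
        {
            Id = "drophint",
            Url = Page.ClientScript.GetWebResourceUrl(
                      GetType(), "assembly.location.dragdrophint.gif")
        }
    };

    // Serialize the list to JSON and emit it as a script block.
    var serializer = new DataContractJsonSerializer(typeof(List<ImageResource>));
    string json;
    using (var stream = new MemoryStream())
    {
        serializer.WriteObject(stream, resources);
        json = Encoding.UTF8.GetString(stream.ToArray());
    }

    Page.ClientScript.RegisterClientScriptBlock(
        GetType(), "imgResources", "var imgResources = " + json + ";", true);
}
On the client, imgResources is then a plain array: loop over it and create one img tag per entry.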
Hopefully, these ideas and suggestions can help get you going! And no doubt there are other solutions. I'm interested to see what comes of this.
| Referencing Embedded resources from other resources in c# | In my web application I include all of my JavaScripts as js files that are embedded resources in the assembly, and add them to the page using ClientScriptManager.GetWebResourceUrl(). However, in some of my js files, I have references to other static assets like image urls. I would like to make those assembly resources as well. Is there a way to tokenize the reference to the resource? e.g.
this.drophint = document.createElement('img');
this.drophint.src = '/_layouts/images/dragdrophint.gif';
Could become something like:
this.drophint = document.createElement('img');
this.drophint.src = '{resource:assembly.location.dragdrophint.gif}';
| [
"I'd suggest that you emit the web resources as a dynamic javascript associative array.\nServer side code:\nStringBuilder script = new StringBuilder();\nscript.Append(\"var imgResources = {};\");\nscript.AppendFormat(\"imgResources['{0}'] = '{1}';\", \n \"drophint\", \n Page.ClientScript.GetWebResourceUrl(Page.GetType(), \"assembly.location.dragdrophint.gif\"));\nscript.AppendFormat(\"imgResources['{0}'] = '{1}';\", \n \"anotherimg\", \n Page.ClientScript.GetWebResourceUrl(Page.GetType(), \"assembly.location.anotherimg.gif\"));\n\nPage.ClientScript.RegisterClientScriptBlock(\n Page.GetType(),\n \"imgResources\",\n script.ToString(), \n true);\n\nThen your client side code looks like this:\nthis.drophint = document.createElement('img');\nthis.drophint.src = imgResources['drophint'];\nthis.anotherimg = document.createElement('img');\nthis.anotherimg.src = imgResources['anotherimg'];\n\nHope this helps.\n",
"I don't particularly care for the exact implementation @Jon suggests, but the idea behind it is sound and I would concur that emitting these would be a good thing to do. \nA slightly better implementation, though this is all subjective to some degree, would be to create a server-side model (read: C# class(es)) that represents this dictionary (or simply use an instance of Dictionary<string, string>) and serialize that to JavaScript literal object notation. That way you are not dealing with the string hacking you see in Jon's example (if that bothers you).\n",
"I concur with Jason's assessment of the initial solution I proposed, it can definitely be improved. My solution represents an older school javascript mentality (read, pre the emergence of ajax and JSON). There are always better ways to solve a problem, which one of the reasons why StackOverflow is so cool. Collectively we are better at the craft of programming than anyone of us on our own.\nBased on Jason's ideas I'd revise my initial code, and revise some of what Jason suggested. Implement a C# class with two properties, the img resource id and a property that contains the WebResourceUrl. Then, where I differ some from Jason is that rather than using a Dictionary<string, string> I'd propose using a List<MyImageResourceClass>, which you can then in turn serialize to JSON (using DataContractJsonSerializer), and emit the JSON as the dynamic script, rather than manually generating the javascript using a string builder.\nWhy a List? I think you may find that dictionaries when serialized to JSON, at least using the DataContractJsonSerializer (fyi available with the 3.5 framework only, with the 2.0 or 3.0 framework you'd need to bolt on aspnet ajax and use is JSON serializer), are a little more cumbersome to work with than how a list would serialize. Although that is subjective.\nThere are implications too with your client side code. Now on the client side you'll have an array of the JSON serialized MyImageResourceClass instances. You'd need to iterate through this array creating your img tags as you go.\nHopefully, these ideas and suggestions can help get you going! And no doubt there are other solutions. I'm interested to see what comes of this.\n"
] | [
3,
3,
2
] | [] | [] | [
"asp.net",
"c#",
"javascript"
] | stackoverflow_0000046489_asp.net_c#_javascript.txt |
Q:
Managing web services in FlexBuilder - How does the manager work?
In FlexBuilder 3, there are two items under the 'Data' menu to import and manage web services. After importing a web service, I can update it with the manage option. However, the web services seem to disappear after they are imported. The manager does, however, recognize that a certain WSDL URL was imported and refuses to do anything with it.
How does the manager know this, and how can I make it refresh a certain WSDL URL?
A:
In your src folder of the flexbuilder project you should see the generated classes. For instance, if you use the manager to generate the proxy classes for www.example.com you should see the folders /com/example with the generated proxy classes inside.
To consume these webservices in ActionScript use the statement:
"import com.example.*;"
To consume the webservice in mxml include the .as file using:
<mx:Script source="yourscriptname.as"/>
To refresh the generated proxy classes, consuming the latest WSDL, simply open the manager and select "update".
Also, I found this article very useful for consuming web services.
I hope that helps, the question was kind of vague about the problem.
| Managing web services in FlexBuilder - How does the manager work? | In FlexBuilder 3, there are two items under the 'Data' menu to import and manage web services. After importing a webservice, I can update it with the manage option. However, the webservices seems to disappear after they are imported. The manager does however recognize that a certain WSDL URL was imported and refuses to do anything with it.
How does the manager know this, and how can I make it refresh a certain WSDL URL?
| [
"In your src folder of the flexbuilder project you should see the generated classes. For instance, if you use the manager to generate the proxy classes for www.example.com you should see the folders /com/example with the generated proxy classes inside. \nTo consume these webservices in ActionScript use the statement: \n\"import com.example.*;\"\n\nTo consume the webservice in mxml include the .as file using: \n<mx:Script source=\"yourscriptname.as\"/>\n\nTo refresh the generated proxy classes, consuming the latest WSDL, simply open the manager and select \"update\". \nAlso, I found this article very useful for consuming web services.\nI hope that helps, the question was kind of vague about the problem.\n"
] | [
1
] | [] | [] | [
"apache_flex",
"flexbuilder",
"web_services"
] | stackoverflow_0000043877_apache_flex_flexbuilder_web_services.txt |