Q:
WScript.Shell and blocking execution?
I'm using WScript to automate some tasks, by using WScript.Shell to call external programs.
However, right now it does not wait for the external program to finish, and instead moves on. This causes issues because I have some tasks dependent on others finishing first.
I am using code like:
ZipCommand = "7za.exe a -r -y " & ZipDest & BuildLabel & ".zip " & buildSourceDir
Set wshShell = WScript.CreateObject("Wscript.Shell")
wshShell.run ZipCommand
Is there a way to do this so it blocks until the shell-executed program returns?
A:
Turns out, that while loop is a severe CPU hog :P
I found a better way:
ZipCommand = "7za.exe a -r -y " & ZipDest & BuildLabel & ".zip " & buildSourceDir
Set wshShell = WScript.CreateObject("Wscript.Shell")
wshShell.Run ZipCommand, 1, 1
The last two arguments are the window style (1 = show the window normally) and whether to block execution until the program returns :)
A:
If you use the "Exec" method, it returns a reference, so you can poll the "Status" property to determine when it is complete. Here is a sample from MSDN:
Dim WshShell, oExec
Set WshShell = CreateObject("WScript.Shell")
Set oExec = WshShell.Exec(ZipCommand)
Do While oExec.Status = 0   ' 0 = still running, 1 = finished
    WScript.Sleep 100
Loop
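A nice side benefit of the Exec approach (a small extension of the sample above, not part of the MSDN snippet) is that the returned WshScriptExec object also exposes the child process's exit code and output streams, so you can check the result once the loop finishes:
' After the loop completes:
WScript.Echo "Exit code: " & oExec.ExitCode
WScript.Echo oExec.StdOut.ReadAll()   ' for very chatty programs, read inside the loop so the pipe buffer doesn't fill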
Q:
Why is AppDomain.CurrentDomain.BaseDirectory different between Windows Forms and Library
In my winforms application, AppDomain.CurrentDomain.BaseDirectory is set to "C:\Projects\TestProject\bin\Debug\"
In my unit tests it is "C:\Projects\TestProject\bin\Debug" (no final slash). Why is this?
[Edit] @Will: I am asking why the test project's directory doesn't have a trailing slash.
A:
You may be asking one of two possible questions: Why are they different, or why the test project's directory doesn't have a trailing slash.
Assuming it's the first: that's where the code is executing from. When you debug the program, it's compiled and the binaries are placed under the project's \bin\Debug directory. When you're testing, you're running the test's binaries, which are compiled and placed under the test project's \bin\Debug directory.
Assuming it's the latter: possibly some obscure reason, possibly a bug, or possibly to catch people who are concatenating paths rather than using Path.Combine (naughty naughty!).
Well, I don't know why it's different. Test applications may be run within a custom CLR host; I think this may be the case, as test apps do some weird stuff with private accessors that normally isn't allowed within the standard CLR host. I'm only grasping at straws here, as I don't have any actual knowledge of how this is coded.
Anyhow, the workaround stands (Path.Combine). Nobody should be concatenating paths by hand, as path delimiters can change.
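For illustration, a minimal sketch of the Path.Combine approach (the file name here is just an example); it behaves the same whether or not BaseDirectory carries the trailing slash:
string baseDir = AppDomain.CurrentDomain.BaseDirectory;
// Correct in both the WinForms app ("...\bin\Debug\") and the test run ("...\bin\Debug")
string dataFile = System.IO.Path.Combine(baseDir, "TestData.xml");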
Q:
How to repeatedly merge branches in Mercurial
We're using Mercurial where I work and I want to have a setup similar to how I used SVN:
Trunk
Tags
Production
Branches
Since Mercurial supports branches natively, I know how to create a named branch, but I can't find any documentation on how to repeatedly merge 'Trunk' with 'Production'.
Quite simply, I want to have a development branch for normal work and a production branch that I routinely pull changes from the development branch into. How do I do this with Mercurial?
A:
As the previous poster mentioned, the transplant extension can be used for cherry-picking individual changes from one branch to another. If, however, you always want to pull all the latest changes, the hg merge command will get you there.
The simplest case is when you're using clones to implement branching (since that's the use case Mercurial is designed around). Assuming you've turned on the built-in fetch extension in your .hgrc / Mercurial.ini:
cd ~/src/development
# hack hack hack
hg commit -m "Made some changes"
cd ../production
hg fetch ../development
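(If the fetch extension isn't switched on yet, enabling it takes two lines in that config file; fetch ships with Mercurial, so there is nothing extra to install:)
[extensions]
fetch =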
If you're using local branches:
hg update -C development
# hack hack hack
hg commit -m "Made some changes"
hg update -C production
hg merge development
hg commit -m "Merged from development"
A:
Something like hg transplant? That's what we use on our dev and prod branches.
Q:
Can I prevent user pasting Javascript into Design Mode IFrame?
I'm building a webapp that contains an IFrame in design mode so my users can "tart" their content up and paste in content to be displayed on their page, like the WYSIWYG editor on most blog engines or forums.
I'm trying to think of all potential security holes I need to plug, one of which is a user pasting in Javascript:
<script type="text/javascript">
// Do some nasty stuff
</script>
Now I know I can strip this out at the server end, before saving it and/or serving it back, but I'm worried about the possibility of someone being able to paste some script in and run it there and then, without even sending it back to the server for processing.
Am I worrying over nothing?
Any advice would be great, couldn't find much searching Google.
Anthony
A:
...I'm worried about the possibility of someone being able to paste some script in and run it there and then, without even sending it back to the server for processing.
Am I worrying over nothing?
Firefox has a plug-in called Greasemonkey that allows users to arbitrarily run JavaScript against any page that loads into their browser, and there is nothing you can do about it. Firebug allows you to modify web pages as well as run arbitrary JavaScript.
AFAIK, you really only need to worry once it gets to your server, and then potentially hits other users.
A:
As Jason said, I would focus more on cleaning the data on the server side. You don't have any real control on the client side unless you're using Silverlight / Flex, and even then you'd need to check on the server.
That said, here are some tips from "A List Apart" you may find helpful regarding server-side data cleaning.
http://www.alistapart.com/articles/secureyourcode
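As a rough illustration of that server-side cleaning, a naive tag-stripping pass might look like the sketch below. This is only a sketch: blacklist regexes like these will not catch every XSS vector, so a whitelist-based HTML sanitizer is much safer in practice.
using System.Text.RegularExpressions;

public static string StripScripts(string html)
{
    // Drop <script>...</script> blocks (case-insensitive, across lines)
    html = Regex.Replace(html, @"<script[^>]*>[\s\S]*?</script>", string.Empty,
                         RegexOptions.IgnoreCase);
    // Drop inline event handlers such as onclick="..." (deliberately incomplete)
    html = Regex.Replace(html, @"\son\w+\s*=\s*(""[^""]*""|'[^']*'|[^\s>]+)", string.Empty,
                         RegexOptions.IgnoreCase);
    return html;
}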
Q:
How do I implement a pre-commit hook script in SVN that calls dos2unix to validate checked-in file
I was wondering if anyone here had some experience writing this type of script and if they could give me some pointers.
I would like to modify this script to validate that the checked-in file does not have a carriage return in its EOL formatting. The EOL format is CR LF on Windows and LF on Unix, and when a user checks in code with the Windows format, it no longer compiles on Unix. I know this can be done on the client side, but I need this validation done on the server side. To achieve this, I need to do the following:
1) Make sure the file I check is not a binary. I don't know how to do this with svnlook; should I check the svn:mime-type property of the file? The Red Book does not indicate this clearly, or I must have missed it.
2) Run the dos2unix command to validate that the file has the correct EOL format. I would compare the output of dos2unix against the original file; if the two differ, I give an error message to the client and cancel the check-in.
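Roughly, I picture the hook looking something like this (an untested sketch; the svnlook path, the awk filter, and the binary test are all placeholders):
#!/bin/sh
REPOS="$1"
TXN="$2"
SVNLOOK=/usr/bin/svnlook
CR=$(printf '\r')

STATUS=0
# NOTE: word-splitting breaks on filenames with spaces; fine for a sketch.
for FILE in $($SVNLOOK changed -t "$TXN" "$REPOS" | awk '$1 != "D" { print $2 }'); do
    # 1) Skip binaries: anything whose svn:mime-type is set and not text/*
    MIME=$($SVNLOOK propget -t "$TXN" "$REPOS" svn:mime-type "$FILE" 2>/dev/null)
    case "$MIME" in
        ""|text/*) ;;   # no property, or text: check it
        *) continue ;;  # assume binary: skip the EOL check
    esac
    # 2) Same test as diffing against dos2unix output, minus the temp file:
    #    reject any file that still contains a carriage return.
    if $SVNLOOK cat -t "$TXN" "$REPOS" "$FILE" | grep -q "$CR"; then
        echo "File $FILE has CR LF line endings; please run dos2unix first." >&2
        STATUS=1
    fi
done
exit $STATUS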
I would like your comments/feedback on this approach.
A:
I think you can avoid a commit hook script in this case by using the svn:eol-style property as described in the SVNBook:
End-of-Line Character Sequences
Subversion Properties
This way SVN can worry about your line endings for you.
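For instance, the property can be set per file with svn propset, or applied automatically to new files through the client's auto-props configuration (a sketch; the *.c pattern is just an example):
svn propset svn:eol-style native src/main.c

# in each client's ~/.subversion/config:
[miscellany]
enable-auto-props = yes
[auto-props]
*.c = svn:eol-style=native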
Good luck!
A:
What exactly are you trying to do?
Of course, there are numerous places to learn about svn pre-commit hooks (e.g. here , here, and in the Red Book) but it depends what you're trying to do and what is available on your system.
Can you be more specific?
Q:
WebDev.WebServer.EXE Crashes After VS 2008 SP1 Install
Since, for various reasons, I can't use IIS for an ASP.NET website I'm developing, I run Cassini from the command line to test the site. However, after installing Visual Studio 2008 SP1, I get a System.Net.Sockets.SocketException when I try to start up the web server. Is anyone else having this problem, and if so, how did you fix it?
A:
Is there anything in the Application section of the event log?
Have you tried using a different port?
Per this thread, try:
Unbind from Visual SourceSafe, delete the web project from the solution, rename the folder where the website is stored, then re-add it to the solution as an existing web site and bind it to SourceSafe again.
There may be some incorrect info in your .suo or .sln file. You can safely rename the former, as it is user-specific (solution user options); the latter (the solution itself) would be a bit more of a hassle to recreate.
Q:
Override ScriptControl or BaseValidator for an async ASP.NET validator control?
I'm planning to develop an ASP.NET server control to provide asynchronous username availability validation for new user registrations. The control will allow a developer to point it at a "username" TextBox and it will provide an indication of whether or not the username is available. Like this example, but without the clunky UpdatePanel.
One design decision that's giving me headaches is whether to inherit from ScriptControl or BaseValidator.
By implementing it as a ScriptControl, I can make the client side portion easier to deal with and easily localize it with a resx.
However, I want to make sure that the validator functions properly with respect to Page.IsValid. The only way I know to do this is to override BaseValidator and implement EvaluateIsValid().
So, my question is, how would you suggest structuring this control? Is inheriting from BaseValidator the best (only) way to get the validator part right, or can I do that in some other way?
A:
You should be able to do both by deriving from BaseValidator and also implementing the IScriptControl interface (note that in C# the base class has to come before any interfaces in the list):
public class YourControl : BaseValidator, IScriptControl
Implementing the IScriptControl interface means your control will also have to provide the GetScriptReferences and GetScriptDescriptors methods.
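A skeletal sketch of how the two pieces might fit together (usings omitted; the class name, the script resource, and the CheckUsernameAvailable helper are placeholders, not part of any real API):
public class UsernameValidator : BaseValidator, IScriptControl
{
    protected override bool EvaluateIsValid()
    {
        // Server-side check keeps Page.IsValid trustworthy even without script
        return CheckUsernameAvailable(GetControlValidationValue(ControlToValidate));
    }

    public IEnumerable<ScriptReference> GetScriptReferences()
    {
        yield return new ScriptReference("UsernameValidator.js", "YourAssembly");
    }

    public IEnumerable<ScriptDescriptor> GetScriptDescriptors()
    {
        yield return new ScriptControlDescriptor("YourNamespace.UsernameValidator", ClientID);
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        // IScriptControl implementers register with the ScriptManager
        // (assumes a ScriptManager is present on the page)...
        ScriptManager.GetCurrent(Page).RegisterScriptControl(this);
    }

    protected override void Render(HtmlTextWriter writer)
    {
        // ...and register their descriptors during Render
        ScriptManager.GetCurrent(Page).RegisterScriptDescriptors(this);
        base.Render(writer);
    }
}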
Q:
Ever Heard of a License Transfer Fee upon Acquisition?
My employer was recently acquired by a much larger company. In the process of sorting out all the legal details around our licenses for our development software, we have learned that the vendor of our IDE charges a "nominal" fee of 25% of the cost of a new license to transfer our existing licenses to the new corporate name.
This struck me as absurd. I have not seen such a customer-unfriendly policy from any other vendor. Has anyone else seen this type of policy? Am I way off base in considering this unfriendly and abnormal?
A:
Unfriendly? Yes. Abnormal? No. It's actually very common for tools with a hefty per-seat license fee to charge for a transfer after acquisition. I believe they do it because they can: the cost of transferring licenses is either overlooked during the M&A due diligence or is considered inconsequential compared to the rest.
The tool vendor justifies the fee because they now have one less potential customer, and the combined company will be paying a lower price per seat due to volume discounts.
A:
I would say you are not; I have never seen a practice like that before.
Edit: well, I must be very lucky; it seems that it is common. Very glad I have not run across this before :)
A:
I've heard of it before in regards to some high-end graphics software, but this was also back in the 1990's and only applied if you sold your license to someone else.
However, it does seem a bit odd to charge 25% of the cost of a new license just to change the name on it. I'm not a lawyer, but isn't there some way you could get around having to change the name on the software?
A:
Things like this are quite common. It all depends on the agreement between the vendor and the licensee. It's not limited to software, either; think about buying music, images, etc. I have heard of some agreements where you can't transfer the license at all; you just have to buy a new copy. The thing to remember is that technically, when we buy a copy of a program, we don't "own" the copy, we just lease the use of it. It sucks at times, but that is the way it works.
A:
There have been cases where the tools (capital) a company has purchased are worth more than the company, and the company is purchased and gutted just to obtain those tools at a discount.
This is bad for the company, of course, but the tool vendor especially doesn't want this to happen - they lose a potential full-price customer for software where there is no real competitor. Further, the company that originally purchased the tool doesn't mind the contract because it helps prevent acquisitions based only on getting the capital. (Corollary: If your company is negotiating out of such a contract, get ready to be purchased...)
For tools that are very, very expensive, this is not unheard of. Think 10's of thousands of dollars per seat, and you can see why this economy becomes reality. Further, sometimes tools are purchased for the company by a client (DoD) and they are actually a small company ( a few developers that won a nice contract) - if the client does not retain the license, then the company might go bust and the license sold for pennies on the dollar at an auction to pay creditors.
Etc, etc, etc. In short, very, very expensive licenses change the economic playground enough that very strange rules apply. Note that "expensive" may also mean scarce, as in the case of liquor licenses for restaurants, or otherwise difficult to get (Qualcomm might not want to sell a given company a license for their CDMA patents, but they may not be able to legally prevent that company from acquiring such a license through legal methods).
-Adam
A:
I would have expected your new overlords to have been made aware of this as part of their takeover plans. Part of the process involves checking for exactly this kind of gotcha.
Sounds like they chose to ignore the information or did not check it out.
A:
That sounds pretty harsh to me, but if you think about the amount of money that changes hands during acquisitions, it's probably one of those cases where your IDE vendor just gets paid without complaint most of the time, so they keep with the policy.
I can see why it shouldn't be completely free to transfer the license -- there is some (probably 'nominal') administrative work to do on the vendor's side, and they need to discourage people from transferring licenses all over the place when they really shouldn't be. But 25% seems awfully high for the amount of work and verification they need to do -- it seems like they could put some sort of cap on the license transfer fee, or have a fixed price.
It does seem like the kind of policy that would drive customers to a competitor, particularly one that does not have the same kind of draconian license transfer policy.
A:
It seems that something like this could be negotiable. We have never thought of "fees" as a hard, nonnegotiable item. If they value your business, I would bet they could discount the transfer fee. It certainly seems that some kind of fee is reasonable for the administrative changes required, but to me that should be a flat fee per license: the work required to change their database is the same no matter how much the license costs.
A:
This is quite common. Unless you address this issue up front when you enter into a license you are at the mercy of the licensor when a transaction like you describe happens. The licensor may or may not have a policy to come along and charge a fee, but unless the matter is addressed in your license, they will have the legal ability to do so.
The reason is this: a license is a legal contract with a specific legal entity (your employer in this case) and grants no rights in the software to anyone else (the buyer company in your example). Now, your employer could have insisted on a clause in the original agreement saying that the license could be freely transferred to a possible future buyer without fee, but without such a clause, the licensor can do what they wish, including charging the 25% fee.
This is one reason that many companies have their licenses routinely reviewed by legal counsel who are knowledgeable about software licensing.
Q:
How do you initialize a 2 dimensional array when you do not know the size
I have a two dimensional array that I need to load data into. I know the width of the data (22 values) but I do not know the height (estimated around 4000 records, but variable).
I have it declared as follows:
float[,] _calibrationSet;
....
int calibrationRow = 0;
While (recordsToRead)
{
for (int i = 0; i < SensorCount; i++)
{
_calibrationSet[calibrationRow, i] = calibrationArrayView.ReadFloat();
}
calibrationRow++;
}
This causes a NullReferenceException, so when I try to initialize it like this:
_calibrationSet = new float[,];
I get an "Array creation must have array size or array initializer."
Thank you,
Keith
A:
You can't use an array.
Or rather, you would need to pick a size, and if you ended up needing more, you would have to allocate a new, larger array, copy the data from the old one into the new one, and continue on as before (until you exceed the size of the new one...).
Generally, you would go with one of the collection classes (ArrayList, List<>, LinkedList<>, etc.); which one depends a lot on what you're looking for. List<> will give you the closest thing to what I described initially, while LinkedList<> will avoid the problem of frequent re-allocations (at the cost of slower access and greater memory usage).
Example:
List<float[]> _calibrationSet = new List<float[]>();
// ...
while (recordsToRead)
{
float[] record = new float[SensorCount];
for (int i = 0; i < SensorCount; i++)
{
record[i] = calibrationArrayView.ReadFloat();
}
_calibrationSet.Add(record);
}
// access later: _calibrationSet[record][sensor]
Oh, and it's worth noting (as Grauenwolf did) that what I'm doing here doesn't give you the same memory structure as a single, multi-dimensional array would: under the hood, it's an array of references to other arrays that actually hold the data. This speeds up building the array a good deal by making reallocation cheaper, but can have an impact on access speed (and, of course, memory usage). Whether this is an issue for you depends a lot on what you'll be doing with the data after it's loaded... and whether there are two hundred records or two million records.
A:
You can't create an array in .NET (as opposed to declaring a reference to it, which is what you did in your example) without specifying its dimensions, either explicitly, or implicitly by specifying a set of literal values when you initialize it. (e.g. int[,] array4 = { { 1, 2 }, { 3, 4 }, { 5, 6 }, { 7, 8 } };)
You need to use a variable-size data structure first (a generic list of 22-element 1-d arrays would be the simplest) and then allocate your array and copy your data into it after your read is finished and you know how many rows you need.
A:
I would just use a list, then convert that list into an array.
You will notice here that I used a jagged array (float[][]) instead of a square array (float[,]). Besides being the "standard" way of doing things, it should be much faster: when converting the data from a list to an array you only have to copy one pointer per row, whereas with a square array you would have to copy every float (rows x SensorCount of them).
var tempCalibrationSet = new List<float[]>();
const int SensorCount = 22;

while (recordsToRead())
{
    // Build one complete row, then append it to the list
    var record = new float[SensorCount];
    for (int i = 0; i < SensorCount; i++)
    {
        record[i] = calibrationArrayView.ReadFloat();
    }
    tempCalibrationSet.Add(record);
}

float[][] _calibrationSet = tempCalibrationSet.ToArray();
A:
I generally use the nicer collections for this sort of work (List, ArrayList etc.) and then (if really necessary) cast to T[,] when I'm done.
A:
You would either need to preallocate the array to a maximum size (float[999, 22]), or use a different data structure.
I guess you could copy/resize on the fly (but I don't think you'd want to).
I think the List sounds reasonable.
A:
You could also use a two-dimensional ArrayList (from System.Collections) -- you create an ArrayList, then put another ArrayList inside it. This will give you the dynamic resizing you need, but at the expense of a bit of overhead.
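For completeness, that nested shape looks roughly like this (casts are needed on the way back out, since ArrayList is untyped):
ArrayList rows = new ArrayList();               // System.Collections
ArrayList firstRow = new ArrayList();
firstRow.Add(1.23f);                            // value types get boxed
rows.Add(firstRow);
float value = (float)((ArrayList)rows[0])[0];   // cast both levels when reading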
Q:
Is it possible to track allocation/deallocation?
As far as I can tell, this isn't possible, so I'm really just hoping for a left-field undocumented allocation hook function.
I want a way to track allocations like in _CrtSetAllocHook, but for C#/.NET.
The only visibility into the garbage collector/allocations appears to be GC.CollectionCount.
Anyone have any other .NET memory mojo?
A:
The CLR has a 'profiling API' that hooks into pretty much everything - it is what the commercial .NET memory profiling products use, I believe. Here is an MSDN link to the top level of the documentation: .NET Framework General Reference: About the Profiling API
See this MSDN magazine article for an introduction to the memory piece: Inspect and Optimize Your Program's Memory Usage with the .NET Profiler API
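If the full profiling API is more than you need, a much coarser in-process measurement is possible with GC.GetTotalMemory (a sketch; DoWork stands in for whatever code you want to measure). Note that it reports retained heap rather than individual allocations, so short-lived garbage won't show up:
long before = GC.GetTotalMemory(true);   // true forces a collection first
DoWork();
long after = GC.GetTotalMemory(true);
Console.WriteLine("Approximate bytes still held: {0}", after - before);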
A:
I would just use Red Gate's ANTS Profiler. It will tell you a lot about what's going on in memory without you having to learn the profiling API yourself.
Q:
How to promote WCF to a non-techie?
How would you describe and promote WCF as a technology to a non-technical client/manager/CEO/etc?
What are competing solutions or ideas that they might bring up(such as those they read about in their magazines touting new technology)?
What is WCF not good for that you've seen people try to shoehorn it into?
-Adam
A:
Comparing with .asmx: WCF is the next generation of Microsoft's Web service development platform, which addresses many of the issues with older versions, specifically:
better interoperation, so you can interoperate with Web services that aren't from Microsoft or that are published on the Internet
much more flexible, so it's easier and faster for developers to get their jobs done
easier to configure without changing code, reducing the cost of maintenance significantly
It may be that they raise the question of how it relates to SOA, a "service-oriented architecture". WCF is the Microsoft solution for creating applications that participate in these distributed systems.
A:
Tell them it'll let you do your job easier which translates into less time and less money.
A:
In a single sentence, I'd say that WCF is "software that lets you set up and manage communication between systems a lot more efficiently than in the past".
I can see them bringing up BizTalk as a competitor, but of course you could say that WCF works with it and is in fact used as base technology for it in the more recent versions.
I'm not sure if I can think of any inappropriate shoe-horning of WCF that I have seen, although there are plenty of legacy apps that will probably be "upgraded" to WCF that don't really need to be for any real business reason.
A:
There is an interop angle as well. If you upgrade your ASMX services to WCF services, you can still honor your ASMX clients and then start moving forward with newer WCF clients. WCF is starting to get some REST attention, RSS is there, and Silverlight has a place with WCF. Performance is better, depending on the bindings you choose. One of the big drawbacks is a steeper learning curve compared to ASMX services, the great power/great responsibility problem, and then the 101 ways to do the same thing.
None of this is CxO talk, but you can refactor the language into magazine buzzwords so that they can see the future of this technology.
Q:
VB.NET FormatNumber equivalent in C#?
Is there a C# equivalent for the VB.NET FormatNumber function?
I.e.:
JSArrayString += "^" + (String)FormatNumber(inv.RRP * oCountry.ExchangeRate, 2);
A:
In both C# and VB.NET you can use either the .ToString() function or the String.Format() method to format the text.
Using the .ToString() method your example could be written as:
JSArrayString += "^" + (inv.RRP * oCountry.ExchangeRate).ToString("#0.00")
Alternatively using the String.Format() it could written as:
JSArrayString = String.Format("{0}^{1:#0.00}",JSArrayString,(inv.RRP * oCountry.ExchangeRate))
In both of the above cases I have used a custom format for the number, with # representing an optional digit placeholder and 0 forcing a digit (printing a zero if no digit exists in that position).
Other format specifiers can be used as well, such as D2, which pads a whole number out to two digits, or C, which displays the value as currency. In this case you would not want to use the C formatter, as it would insert a currency symbol and further separators that were not required.
See "String.Format("{0}", "formatting string"};" or "String Format for Int" for more information and examples on how to use String.Format and the different formatting options.
A:
Yes, the .ToString(string) methods.
For instance,
int number = 32;
string formatted = number.ToString("D4");
Console.WriteLine(formatted);
// Shows 0032
Note that in C# you don't use a number to specify a format, but you use a character or a sequence of characters.
Formatting numbers and dates in C# takes some minutes to learn, but once you understand the principle, you can quickly get anything you want from looking at the reference.
Here's a couple MSDN articles to get you started :
Standard Numeric Format Strings
Formatting Types
A:
You can use string formatters to accomplish the same thing.
double MyNumber = inv.RRP * oCountry.ExchangeRate;
JSArrayString += "^" + MyNumber.ToString("#0.00");
A:
While I would recommend using ToString in this case, always keep in mind that you can use ANY VB.NET function or class from C# just by referencing Microsoft.VisualBasic.dll.
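For instance (a sketch; inv and oCountry are from the question):
using Microsoft.VisualBasic;   // requires a project reference to Microsoft.VisualBasic.dll

JSArrayString += "^" + Strings.FormatNumber(inv.RRP * oCountry.ExchangeRate, 2);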
Q:
Wordpress Category Template Question
I am looking at using a custom template for a set of categories. Is it possible to use a category template (like category-4.php) on a parent category and have the children use that template as well?
So based on the answer so far, is there a way to accomplish this? I want to add text and images to all categories within a parent category.
A:
From the documentation it does not appear to be possible without actually adding a separate category template file for each category (unless you custom program it). I run WordPress, and I have only seen it accomplished category by category.
http://codex.wordpress.org/Category_Templates
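The "custom program it" route could look roughly like this inside a generic category.php (a sketch: 4 stands in for your parent category's ID, and the parent check only catches direct children; deeper nesting would need cat_is_ancestor_of or a loop):
<?php
$category = get_category(get_query_var('cat'));
if ($category->cat_ID == 4 || $category->parent == 4) {
    // shared text and images for category 4 and its child categories
}
?>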
Q:
Context Menu Resets ComboBox's SelectedIndex
I have a ContextMenu that is displayed after a user right clicks on a ComboBox. When the user selects an item in the context menu, a form is brought up using the ShowDialog() method.
If frmOptions.ShowDialog() = Windows.Forms.DialogResult.Cancel Then
LoadComboBoxes()
End If
When that form is closed, I refresh all the data in the ComboBoxes on the parent form. However, when this happens the ComboBox that opened the ContextMenu is reset to have a selected index of -1 but the other selected indexes of the other ComboBoxes remain the same.
How do I prevent the ComboBox that opened the context menu from being reset?
A:
One way to handle this would be to use the context menu's Popup event to grab the selected index of the combobox launching the menu. When the dialog form closes reset the selected index.
A:
I figured it out.
I created a method that passed the ContextMenu.SourceControl property by reference so I could manipulate the control that opened the ContextMenu. At the beginning of the method I saved the SelectedValue of the ComboBox, then reloaded the data in the ComboBoxes, and finally set the SelectedValue back to the value saved at the start.
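In outline, it looks something like this (the ShowOptions and source names are made up; frmOptions and LoadComboBoxes are from the question):
Private Sub ShowOptions(ByRef source As Control)
    Dim combo As ComboBox = TryCast(source, ComboBox)
    Dim selected As Object = Nothing
    If combo IsNot Nothing Then selected = combo.SelectedValue

    If frmOptions.ShowDialog() = Windows.Forms.DialogResult.Cancel Then
        LoadComboBoxes()
        ' Restore the selection that the reload just reset
        If combo IsNot Nothing Then combo.SelectedValue = selected
    End If
End Sub

' Called from the menu item's Click handler, e.g.:
' ShowOptions(myContextMenu.SourceControl)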
Thank you DaveK for pointing me in the right direction.
Q:
__doPostBack not rendering on postback
I'm having a strange problem.
I have to use GetPostBackEventReference to force a Postback, and while it works the first time, after the first postback the .NET function is not rendered... any ideas?
This is what I'm missing after the postback:
<script language="javascript" type="text/javascript">
<!--
function __doPostBack(eventTarget, eventArgument) {
var theform;
if (window.navigator.appName.toLowerCase().indexOf("microsoft") > -1) {
theform = document.Main;
}
else {
theform = document.forms["Main"];
}
theform.__EVENTTARGET.value = eventTarget.split("$").join(":");
theform.__EVENTARGUMENT.value = eventArgument;
theform.submit();
}
// -->
</script>
A:
Well, following that idea I created a dummy function with the postback reference, and it works... it is still weird, though, because it rendered correctly the first time
this.Page.RegisterClientScriptBlock("DUMMY", "<script language='javascript'>function dummy() { " + this.Page.GetPostBackEventReference(this) + "; } </script>");
A:
The first thing I would look at is whether you have any ASP controls (such as LinkButtons or ComboBoxes, which don't normally generate a submit but require a postback) being displayed on the page.
The __doPostback function will only be put into the page if ASP thinks that one of your controls requires it.
If you aren't using one of those you can use:
Page.ClientScript.GetPostBackClientHyperlink(controlName, "")
to add the function to your page
| __doPostBack not rendering on postback | I'm having a strange problem.
I have to use GetPostBackEventReference to force a Postback, and while it works the first time, after the first postback the .NET function is not rendered... any ideas?
This is what I'm missing after the postback:
<script language="javascript" type="text/javascript">
<!--
function __doPostBack(eventTarget, eventArgument) {
var theform;
if (window.navigator.appName.toLowerCase().indexOf("microsoft") > -1) {
theform = document.Main;
}
else {
theform = document.forms["Main"];
}
theform.__EVENTTARGET.value = eventTarget.split("$").join(":");
theform.__EVENTARGUMENT.value = eventArgument;
theform.submit();
}
// -->
</script>
| [
"Well, following that idea I created a dummy function with the postbackreference, and it works... it still is weird though, because of it rendering correctly the first time\nthis.Page.RegisterClientScriptBlock(\"DUMMY\", \"<script language='javascript'>function dummy() { \" + this.Page.GetPostBackEventReference(this) + \"; } </script>\");\n\n",
"The first thing I would look at is whether you have any asp controls (such as linkbutton, comboboxes,that don't normally generate a submit but requre a postback) being displayed on the page. \nThe __doPostback function will only be put into the page if ASP thinks that one of your controls requires it.\nIf you aren't using one of those you can use: \nPage.ClientScript.GetPostBackClientHyperlink(controlName, \"\")\n\nto add the function to your page\n"
] | [
4,
3
] | [] | [] | [
"asp.net",
"javascript",
"postback"
] | stackoverflow_0000050579_asp.net_javascript_postback.txt |
Q:
Handling HttpRequestValidationException gracefully and ASP.net AJAX compatible?
ValidateEvents is a great ASP.net function, but the Yellow Screen of Death is not so nice. I found a way to handle the HttpRequestValidationException gracefully here, but that does not work properly with ASP.net AJAX.
Basically, I got an UpdatePanel with a TextBox and a Button, and when the user types HTML into the TextBox, a JavaScript popup with an error message saying not to modify the Response pops up.
So I wonder, what is the best way to handle HttpRequestValidationException gracefully? For "normal" requests I would like to just display an error message, but when it's an AJAX request I'd like to throw the request away and return something to indicate an error, so that my frontend page can react to it.
A:
Found it and blogged about it. Basically, the EndRequestHandler and the args.set_errorHandled are our friends here.
<script type="text/javascript" language="javascript">
var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.add_endRequest(EndRequestHandler);
function EndRequestHandler(sender, args) {
if (args.get_error() != undefined)
{
var errorMessage;
if (args.get_response().get_statusCode() == '200')
{
errorMessage = args.get_error().message;
}
else
{
// Error occurred somewhere other than the server page.
errorMessage = 'An unspecified error occurred. ';
}
args.set_errorHandled(true);
$get('<%= this.newsletterLabel.ClientID %>').innerHTML = errorMessage;
}
}
</script>
A:
That's what I would like to avoid if possible, but this seems to be much more complicated than expected.
Normally, everyone advises using the AsyncPostBackError of the ScriptManager, but this does not work if called on the Global.asax. Unfortunately, as the HttpRequestValidationException is emitted by the runtime, it never enters my code and I cannot do much within the Application_Error.
So yes, it indeed needs to be done in the JavaScript; I just hope there is a way to add a "hook" like the BeginRequestHandler function so that I don't have to "hack" Microsoft code. If I find a solution before someone else, I'll put it up here :-)
A:
Hmmm, it seems you would need to find some sort of JavaScript to check for HTML input, or a client-side validator.
| Handling HttpRequestValidationException gracefully and ASP.net AJAX compatible? | ValidateEvents is a great ASP.net function, but the Yellow Screen of Death is not so nice. I found a way how to handle the HttpRequestValidationException gracefully here, but that does not work with ASP.net AJAX properly.
Basically, I got an UpdatePanel with a TextBox and a Button, and when the user types HTML into the TextBox, a JavaScript popup with an error message saying not to modify the Response pops up.
So I wonder, what is the best way to handle HttpRequestValidationException gracefully? For "normal" requests I would like to just display an error message, but when it's an AJAX request I'd like to throw the request away and return something to indicate an error, so that my frontend page can react to it.
| [
"Found it and blogged about it. Basically, the EndRequestHandler and the args.set_errorHandled are our friends here.\n<script type=\"text/javascript\" language=\"javascript\">\nvar prm = Sys.WebForms.PageRequestManager.getInstance();\nprm.add_endRequest(EndRequestHandler);\n\nfunction EndRequestHandler(sender, args) {\n if (args.get_error() != undefined)\n {\n var errorMessage;\n if (args.get_response().get_statusCode() == '200')\n {\n errorMessage = args.get_error().message;\n }\n else\n {\n // Error occurred somewhere other than the server page.\n errorMessage = 'An unspecified error occurred. ';\n }\n args.set_errorHandled(true);\n $get('<%= this.newsletterLabel.ClientID %>').innerHTML = errorMessage;\n }\n}\n</script>\n\n",
"That's what I would like to avoid if possible, but this seems to be much more complicated than expected.\nNormally, everyone advises using the AsyncPostBackError of the ScriptManager, but this does not work if called on the Global.asax. Unfortunately, as the HttpRequestValidationException is emitted by the runtime, it never enters my code and I cannot do much within the Application_Error.\nSo yes, it needs to be indeed done in the JavaScript, I just hope there is a way to add a \"hook\" like the BeginRequestHandler-Function so that I don't have to \"hack\" Microsoft code. If I find a solution before someone else, i'll put it up here :-)\n",
"hmmmm, it seems you would need to find some sort of JavaScript to check for html input or a client side validator. \n"
] | [
3,
1,
0
] | [] | [] | [
"asp.net",
"asp.net_ajax",
"validation"
] | stackoverflow_0000047864_asp.net_asp.net_ajax_validation.txt |
Q:
What language should I learn as a bridge to C (and derivatives)
The first language I learnt was PHP, but I have more recently picked up Python. As these are both 'high-level' languages, I have found lower-level languages a bit difficult to pick up. I also tried to learn Objective-C, but I gave up.
So, what language should I learn as a bridge between Python and C?
A:
It's not clear why you need a bridge language. Why don't you start working with C directly? C is a very simple language in itself. I think the hardest part for a C learner is pointers and everything else related to memory management. Also, C is oriented toward structured programming, so you will need to learn how to implement data structures and algorithms without OOP goodness. Actually, your question is pretty hard; usually people go from low-level languages to high-level ones, and I can understand the frustration of those who go in the other direction.
A:
The best place to start learning C is the book "The C Programming Language" by Kernighan and Ritchie.
You will recognise a lot of things from PHP, and you will be surprised how much PHP (and Perl, Python etc) do for you.
Oh, and you will also need a C compiler, but I guess you knew that.
A:
I generally agree with most of the others - There's not really a good stepping stone language.
It is, however, useful to understand what is difficult about learning C, which might help you understand what's making it difficult for you.
I'd say the things that would prove difficult in C for someone coming from PHP would be :
Pointers and memory management This is pretty much the reason you're learning C I imagine, so there's not really any getting around it. Learning lower level assembly type languages might make this easier, but C is probably a bridge to do that, not the other way around.
Lack of built in data structures PHP and co all have native String types, and useful things like hash tables built in, which is not the case in C. In C, a String is just an array of characters, which means you'll need to do a lot more work, or look seriously at libraries which add the features you're used to.
Lack of built in libraries Languages like PHP nowadays almost always come with stacks of libraries for things like database connections, image manipulation and stacks of other things. In C, this is not the case other than a very thin standard library which revolves mostly around file reading, writing and basic string manipulation. There are almost always good choices available to fill these needs, but you need to include them yourself.
Suitability for high level tasks If you try to implement the same type of application in C as you might in PHP, you'll find it very slow going. Generating a web page, for example, isn't really something plain C is suited for, so if you're trying to do that, you'll find it very slow going.
Preprocessor and compilation Most languages these days don't have a preprocessor, and if you're coming from PHP, the compilation cycle will seem painful. Both of these are performance trade offs in a way - Scripting languages make the trade off in terms of developer efficiency, where as C prefers performance.
I'm sure there are more that aren't springing to mind for me right now. The moral of the story is that trying to understand what you're finding difficult in C may help you proceed. If you're trying to generate web pages with it, try doing something lower level. If you're missing hash tables, try writing your own, or find a library. If you're struggling with pointers, stick with it :)
A:
Learning any language takes time. I always ensure I have a measurable goal; I set myself an objective, then start learning the language to achieve this objective, as opposed to trying to learn every nook and cranny of the language and syntax.
C is not easy; pointers can be hard to comprehend if you're not coming from assembler roots. I first learned C++, then retrofit C into my repertoire, but I started with x86 and 68000 assembler.
A:
Python is about as close to C as you're going to get. It is in fact a very thin wrapper around C in a lot of places. However, C does require that you know a little more about how the computer works on a low level. Thus, you may benefit from trying an assembly language.
LC-3 is a simple assembly language with a simulated machine.
Alternatively, you could try playing with an interactive C interpreter like CINT.
Finally, toughing it out and reading K&R's book is usually the best approach.
A:
Forget Java - it is not going to bring you anywhere closer to C (you have already proved that you don't have a problem learning new syntax).
Either read K&R or go one lower: Learn about the machine itself. The only tricky part in C is pointers and memory management (which is closely related to pointers, but also has a bit to do with how functions are called). Learning a (simple, maybe even "fake" assembly) language should help you out here.
Then, start reading up on the standard library provided by C. It will be your daily bread and butter.
Oh: another tip! If you really do want to bridge, try FORTH. It helped me get into pointers. Also, using the win32 api from Visual Basic 6.0 can teach you some stuff about pointers ;)
A:
C is a bridge onto itself.
K&R is the only programming language book you can read in one sitting and almost never pick it up again ...
A:
My suggestion is to get a good C-book that is relevant to what you want to do. I agree that K & R is considered to be "The book" on C, but I found "UNIX Systems Programming" by Kay A. Robbins and Steven Robbins to be more practical and hands on. The book is full of clean and short code snippets you can type in, compile and try in just a few minutes each.
There is a preview at http://books.google.com/books?id=tdsZHyH9bQEC&printsec=frontcover (Hyperlinking it didn't work.)
A:
I'm feeling your pain. I also learned PHP first and I'm trying to learn C++; it's not easy, and I am really struggling. It's been 2 years since I started on C++ and still the extent of what I can do is cout, cin, and math.
If anyone reads this and wonders where to start, START LOWER.
A:
Java might actually be a good option here, believe it or not. It is strongly based on C/C++, so if you can get the syntax and the strong typing, picking up C might be easier. The benefit is you can learn the lower level syntax without having to learn pointers (since memory is managed for you just like in Python and PHP). You will, however, learn a similar concept... references (or objects in general).
Also, it is strongly Object Oriented, so it may be difficult to pick up on that if you haven't dealt with OOP yet.... you might be better off just digging in with C like others suggested, but it is an option.
A:
I think C++ is a good "bridge" to C. I learned C++ first at University, and since it's based on C you'll learn a lot of the same concepts - perhaps most notably pointers - but also Object Oriented Design. OO can be applied to all kinds of modern languages, so it's worth learning.
After learning C++, I found it wasn't too hard to pick up the differences between C++ and C as required (for example, when working on devices that didn't support C++).
A:
Try to learn a language which you are comfortable with; try a different approach and the basics.
A:
Languages are easy to learn (especially one like C)... the hard part is learning the libraries and/or coding style of the language. For instance, I know C++ fairly well, but most C/C++ code I see confuses me because the naming conventions are so different from what I work with on a daily basis.
Anyway, I guess what I'm trying to say is don't worry too much about the syntax, focus on said language's library. This isn't specific to C, you can say the same about c#, vb.net, java and just about every other language out there.
A:
Pascal! Close enough syntax, still requires you to do some memory management, but not as rough for beginners.
| What language should I learn as a bridge to C (and derivatives) | The first language I learnt was PHP, but I have more recently picked up Python. As these are all 'high-level' languages, I have found them a bit difficult to pick up. I also tried to learn Objective-C but I gave up.
So, what language should I learn as a bridge between Python and C?
| [
"It's not clear why you need a bridge language. Why don't you start working with C directly? C is a very simple language itself. I think that hardest part for C learner is pointers and everything else related to memory management. Also C lang is oriented on structured programming, so you will need to learn how to implement data structures and algorithms without OOP goodness. Actually, your question is pretty hard, usually people go from low level langs to high level and I can understand frustration of those who goes in other direction.\n",
"The best place to start learning C is the book \"The C Programming Language\" by Kernighan and Ritchie.\nYou will recognise a lot of things from PHP, and you will be surprised how much PHP (and Perl, Python etc) do for you.\nOh and you also will need a C compiler, but i guess you knew that.\n",
"I generally agree with most of the others - There's not really a good stepping stone language.\nIt is, however, useful to understand what is difficult about learning C, which might help you understand what's making it difficult for you.\nI'd say the things that would prove difficult in C for someone coming from PHP would be :\n\nPointers and memory management This is pretty much the reason you're learning C I imagine, so there's not really any getting around it. Learning lower level assembly type languages might make this easier, but C is probably a bridge to do that, not the other way around.\nLack of built in data structures PHP and co all have native String types, and useful things like hash tables built in, which is not the case in C. In C, a String is just an array of characters, which means you'll need to do a lot more work, or look seriously at libraries which add the features you're used to.\nLack of built in libraries Languages like PHP nowadays almost always come with stacks of libraries for things like database connections, image manipulation and stacks of other things. In C, this is not the case other than a very thin standard library which revolves mostly around file reading, writing and basic string manipulation. There are almost always good choices available to fill these needs, but you need to include them yourself.\nSuitability for high level tasks If you try to implement the same type of application in C as you might in PHP, you'll find it very slow going. Generating a web page, for example, isn't really something plain C is suited for, so if you're trying to do that, you'll find it very slow going.\nPreprocessor and compilation Most languages these days don't have a preprocessor, and if you're coming from PHP, the compilation cycle will seem painful. Both of these are performance trade offs in a way - Scripting languages make the trade off in terms of developer efficiency, where as C prefers performance.\n\nI'm sure there are more that aren't springing to mind for me right now. The moral of the story is that trying to understand what you're finding difficult in C may help you proceed. If you're trying to generate web pages with it, try doing something lower level. If you're missing hash tables, try writing your own, or find a library. If you're struggling with pointers, stick with it :)\n",
"Learning any language takes time, I always ensure I have a measurable goal; I set myself an objective, then start learning the language to achieve this objective, as opposed to trying to learn every nook and cranny of the language and syntax. \nC is not easy, pointers can be hard to comprehend if you’re not coming assembler roots. I first learned C++, then retro fit C to my repertoire but I started with x86 and 68000 assembler.\n",
"Python is about as close to C as you're going to get. It is in fact a very thin wrapper around C in a lot of places. However, C does require that you know a little more about how the computer works on a low level. Thus, you may benefit from trying an assembly language.\nLC-3 is a simple assembly language with a simulated machine.\nAlternatively, you could try playing with an interactive C interpreter like CINT.\nFinally, toughing it out and reading K&R's book is usually the best approach.\n",
"Forget Java - it is not going to bring you anywhere closer to C (you have allready proved that you don't have a problem learning new syntax).\nEither read K&R or go one lower: Learn about the machine itself. The only tricky part in C is pointers and memory management (which is closely related to pointers, but also has a bit to do with how functions are called). Learning a (simple, maybe even \"fake\" assembly) language should help you out here.\nThen, start reading up on the standard library provided by C. It will be your daily bread and butter.\nOh: another tip! If you really do want to bridge, try FORTH. It helped me get into pointers. Also, using the win32 api from Visual Basic 6.0 can teach you some stuff about pointers ;)\n",
"C is a bridge onto itself.\nK&R is the only programming language book you can read in one sitting and almost never pick it up again ... \n",
"My suggestion is to get a good C-book that is relevant to what you want to do. I agree that K & R is considered to be \"The book\" on C, but I found \"UNIX Systems Programming\" by Kay A. Robbins and Steven Robbins to be more practical and hands on. The book is full of clean and short code snippets you can type in, compile and try in just a few minutes each.\nThere is a preview at http://books.google.com/books?id=tdsZHyH9bQEC&printsec=frontcover (Hyperlinking it didn't work.)\n",
"I'm feeling your pain, I also learned PHP first and I'm trying to learn C++, it's not easy, and I am really struggling, It's been 2 years since I started on c++ and Still the extent of what I can do is cout, cin, and math.\nIf anyone reads this and wonders where to start, START LOWER.\n",
"Java might actually be a good option here, believe it or not. It is strongly based on C/C++, so if you can get the syntax and the strong typing, picking up C might be easier. The benefit is you can learn the lower level syntax without having to learn pointers (since memory is managed for you just like in Python and PHP). You will, however, learn a similar concept... references (or objects in general).\nAlso, it is strongly Object Oriented, so it may be difficult to pick up on that if you haven't dealt with OOP yet.... you might be better off just digging in with C like others suggested, but it is an option.\n",
"I think C++ is a good \"bridge\" to C. I learned C++ first at University, and since it's based on C you'll learn a lot of the same concepts - perhaps most notably pointers - but also Object Oriented Design. OO can be applied to all kinds of modern languages, so it's worth learning. \nAfter learning C++, I found it wasn't too hard to pick up the differences between C++ and C as required (for example, when working on devices that didn't support C++).\n",
"try to learn a language which you are comfortable with, try different approach and the basics.\n",
"Languages are easy to learn (especially one like C)... the hard part is learning the libraries and/or coding style of the language. For instance, I know C++ fairly well, but most C/C++ code I see confuses me because the naming conventions are so different from what I work with on a daily basis.\nAnyway, I guess what I'm trying to say is don't worry too much about the syntax, focus on said language's library. This isn't specific to C, you can say the same about c#, vb.net, java and just about every other language out there.\n",
"Pascal! Close enough syntax, still requires you to do some memory management, but not as rough for beginners.\n"
] | [
15,
7,
5,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"c",
"python"
] | stackoverflow_0000049195_c_python.txt |
Q:
Font-size independent UI: everything broke when I switched to 120 DPI?
So I was reading those Windows Vista UI guidelines someone linked to in another question, and they mentioned that you should be able to survive a switch to 120 DPI. Well, I fire up my handy VM with my app installed, and what do we get... AAAAGH!!! MASSIVE UI FAIL!
Everything's all jumbled: some containers aren't big enough for their text; some controls that were positioned "next to each other" are now all squished together/spread apart; some buttons aren't tall enough; my ListView columns aren't wide enough... eeek.
It sounds like a completely different approach is in order. My previous one was basically using the VS2008 Windows Forms designer to create, I guess, a pixel-based layout. I can see that if I were to stick with Windows Forms, FlowLayoutPanels would be helpful, although I've found them rather inflexible in the past. They also don't solve the problem where the containers (e.g. the form itself) aren't big enough; presumably there's a way to do that? Maybe that AutoSize property?
This might also be a sign that it's time to jump ship to WPF; I'm under the impression that it's specifically designed for this kind of thing.
The basic issue seems to come down to these:
If I were to stick with Windows Forms, what are all the tricks to achieving a font-size-independent layout that can survive the user setting his fonts large, or setting the display to 120 DPI?
Does WPF have significant advantages here, and if so, can you try to convince me that it's worth the switch?
Are there any general "best-practices" for font-size-independent layouts, either in the .NET stack or in general?
A:
Learn how the Anchor and Dock properties work on your controls, leave anything that can AutoSize itself alone, and use a TableLayoutPanel when you can.
If you do these three things, you'll get a lot of the WPF design experience in Windows Forms. A well-designed TableLayoutPanel will do its best to size the controls so that they fit the form properly. Combined with AutoSize controls, docking, and the AutoScaleMode mentioned by Soeren Kuklau you should be able to make something that scales well. If not, your form might just have too many controls on it; consider splitting it into tab pages, floating toolboxes, or some other space.
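To make that concrete, here is a minimal C# sketch of the TableLayoutPanel + AutoSize idea (the control names and the two-column layout are illustrative assumptions):
    // Illustrative only: an auto-sizing two-column layout that survives font/DPI changes.
    var table = new TableLayoutPanel
    {
        Dock = DockStyle.Fill,
        ColumnCount = 2,
        AutoSize = true
    };
    var nameLabel = new Label { Text = "Name:", AutoSize = true, Anchor = AnchorStyles.Left };
    var nameBox = new TextBox { Anchor = AnchorStyles.Left | AnchorStyles.Right };
    table.Controls.Add(nameLabel, 0, 0);  // column 0, row 0
    table.Controls.Add(nameBox, 1, 0);    // column 1, row 0
    this.Controls.Add(table);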
In WPF it's a lot easier because the concept of auto-sizing controls is built-in; in most cases if you are placing a WPF element by using a coordinate pair you are doing it wrong. Still, you can't change the fact that at lower resolutions it doesn't take much 120 dpi text to fill up the screen. Sometimes the problem is not your layout, but an attempt to put too much into a small space.
A:
If I were to stick with Windows Forms, what are all the tricks to achieving a font-size-independent layout that can survive the user setting his fonts large, or setting the display to 120 DPI?
For one, AutoScaleMode may be your friend.
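For example, a hedged sketch of font-based scaling (these two lines normally live in the designer-generated InitializeComponent):
    this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
    this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
With AutoScaleMode.Font, the form rescales its child controls when the runtime font metrics differ from the design-time metrics recorded in AutoScaleDimensions.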
A:
In general, the problem is one of using two different "constants" for form layout, and then changing one of those constants without changing the other.
You are using pixels for your form entities, and points (a physical unit, 1/72 of an inch) to specify font size. Pixels and points are related by DPI, so when you change the DPI, suddenly your pixel-fixed values don't line up with your point-fixed values.
There are packages and classes for this, but at the end of the day you must choose one unit or the other, or scale one of the units according to the changing constant.
Personally, I'd change the entities on the form into inches. I'm not a C# person, so I don't know if this is supported natively, or if you have to perform some dynamic form sizing on application startup.
If you have to do this in your software, then go ahead and size everything normally (say, to your usual 96 DPI).
When your application starts, verify the system is at 96 DPI before you show your forms. If it is, great. If not, then set a variable with the correction factor, and scale and translate (modify both the location and size of) each entity before you show the form.
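A rough C# sketch of that correction-factor idea, assuming the form was laid out at 96 DPI (illustrative only; WinForms' AutoScaleMode usually does this for you):
    using (Graphics g = this.CreateGraphics())
    {
        float scale = g.DpiX / 96f;  // correction factor vs. design-time DPI
        if (scale != 1f)
        {
            foreach (Control c in this.Controls)
            {
                // Translate and scale each control by the same factor.
                c.Location = new Point((int)(c.Left * scale), (int)(c.Top * scale));
                c.Size = new Size((int)(c.Width * scale), (int)(c.Height * scale));
            }
        }
    }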
The ultimate, though, would be to specify everything in inches or points (a point is 1/72 of an inch) and let the OS deal with it. You might have to deal with corner cases (an outdoor screen with a correctly set DPI would show your application in a few pixels...)
| Font-size independent UI: everything broke when I switched to 120 DPI? | So I was reading those Windows Vista UI guidelines someone linked to in another question, and they mentioned that you should be able to survive a switch to 120 DPI. Well, I fire up my handy VM with my app installed, and what do we get... AAAAGH!!! MASSIVE UI FAIL!
Everything's all jumbled: some containers aren't big enough for their text; some controls that were positioned "next to each other" are now all squished together/spread apart; some buttons aren't tall enough; my ListView columns aren't wide enough... eeek.
It sounds like a completely different approach is in order. My previous one was basically using the VS2008 Windows Forms designer to create, I guess, a pixel-based layout. I can see that if I were to stick with Windows Forms, FlowLayoutPanels would be helpful, although I've found them rather inflexible in the past. They also don't solve the problem where the containers (e.g. the form itself) aren't big enough; presumably there's a way to do that? Maybe that AutoSize property?
This might also be a sign that it's time to jump ship to WPF; I'm under the impression that it's specifically designed for this kind of thing.
The basic issue seems to come down to these:
If I were to stick with Windows Forms, what are all the tricks to achieving a font-size-independent layout that can survive the user setting his fonts large, or setting the display to 120 DPI?
Does WPF have significant advantages here, and if so, can you try to convince me that it's worth the switch?
Are there any general "best-practices" for font-size-independent layouts, either in the .NET stack or in general?
| [
"Learn how the Anchor and Dock properties work on your controls, leave anything that can AutoSize itself alone, and use a TableLayoutPanel when you can.\nIf you do these three things, you'll get a lot of the WPF design experience in Windows Forms. A well-designed TableLayoutPanel will do its best to size the controls so that they fit the form properly. Combined with AutoSize controls, docking, and the AutoScaleMode mentioned by Soeren Kuklau you should be able to make something that scales well. If not, your form might just have too many controls on it; consider splitting it into tab pages, floating toolboxes, or some other space.\nIn WPF it's a lot easier because the concept of auto-sizing controls is built-in; in most cases if you are placing a WPF element by using a coordinate pair you are doing it wrong. Still, you can't change the fact that at lower resolutions it doesn't take much 120 dpi text to fill up the screen. Sometimes the problem is not your layout, but an attempt to put too much into a small space.\n",
"\nIf I were to stick with Windows Forms, what are all the tricks to achieving a font-size-independent layout that can survive the user setting his fonts large, or setting the display to 120 DPI?\n\nFor one, AutoScaleMode may be your friend.\n",
"In general, the problem is one of using two different \"constants\" for form layout, and then changing one of those constants without changing the other.\nYou are using pixels for your form entities, and points (basically inches) to specify font size. Pixels and points are related by DPI, so you change the DPI and suddenly your pixel fixed values don't line up with your point fixed values.\nThere are packages and classes for this, but at the end of the day you must choose one unit or the other, or scale one of the units according to the changing constant.\nPersonally, I'd change the entities on the form into inches. I'm not a C# person, so I don't know if this is supported natively, or if you have to perform some dynamic form sizing on application startup.\nIf you have to do this in your software, then go ahead and size everything normally (say, to your usual 96 DPI).\nWhen your application starts, verify the system is at 96 DPI before you show your forms. If it is, great. If not, then set a variable with the correction factor, and scale and translate (modify both the location and size) of each entity before you show the form.\nThe ultimate, though, would be to specify everything in inches or points (a point is 1/72 of an inch) and let the OS deal with it. You might have to deal with corner cases (an outdoor screen with a correctly set DPI would show your application in a few pixels...)\n"
] | [
11,
4,
4
] | [] | [] | [
".net",
"fonts",
"user_interface",
"winforms",
"wpf"
] | stackoverflow_0000050528_.net_fonts_user_interface_winforms_wpf.txt |
Q:
Rss feed for game programmer?
I was browsing this thread, which has good recommendations but is a bit too general for me.
So, if anyone has a collection of nice game programming feeds, please share them. :)
(both general and specific topics are welcome)
A:
I used http://www.gamedev.net/ in college a lot, especially the NeHe Tutorials
A:
AIGameDev.com: http://feeds.aigamedev.com/AiGameDev
A:
Here are two I've used
DirectX forum feed and Summary of interesting resources
A:
GameDevKicks.com might become interesting over time - if used more:
http://www.gamedevkicks.com/
| Rss feed for game programmer? | I was browsing this thread, which has good recommendation but a bit too general for me.
So, if anyone has a collection of nice game programming feeds, please share them. :)
(both general and specific topics are welcome)
| [
"I used http://www.gamedev.net/ in college a lot, especially the NeHe Tutorials\n",
"AIGameDev.com: http://feeds.aigamedev.com/AiGameDev\n",
"Here are two I've used\nDirectX forum feed and Summary of interesting resources\n",
"GameDevKicks.com might become interesting over time - if used more:\nhttp://www.gamedevkicks.com/\n"
] | [
1,
1,
1,
1
] | [] | [] | [
"artificial_intelligence",
"feed",
"graphics",
"rss"
] | stackoverflow_0000050723_artificial_intelligence_feed_graphics_rss.txt |
Q:
Response.StatusCode and Internet Explorer - Display custom message?
I am implementing a HttpRequestValidationException in my Application_Error Handler, and if possible, I want to display a custom message.
Now, I'm thinking about the StatusCode. In my current example, it sends a 200, which I think should not be done. I would like to send the (IMHO) more appropriate 400 Bad Request instead. However, at the same time, I would like to use Response.Write to enter a custom message. Firefox displays it properly, but IE7 gives me the Default unhelpful Internet Explorer Error Page.
On one side, I guess that Internet Explorer just assumes that everything <> 200 simply does not have any "good" content, and the RFC is not really clear here.
So I just wonder, is sending a HTTP 200 for an Error Page caused by a HttpRequestValidationException good practice or not? Are there good alternatives?
A:
An HTTP 200 Response Code does not indicate an error. It indicates that everything was OK. You should not use a 200 response code for an error.
Internet Explorer shows its "Friendly Errors" page if the response is less than 512 bytes. Here's more on this issue: http://weblogs.asp.net/scottgu/archive/2006/04/09/442332.aspx,
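A hedged ASP.NET sketch of respecting that threshold while still sending an honest status code (the message text and padding size are illustrative assumptions):
    Response.StatusCode = 400;
    Response.Write("<html><body>Your input contained invalid characters.</body></html>");
    // Pad the body past IE's 512-byte "friendly error" threshold.
    Response.Write("<!--" + new string('*', 512) + "-->");
    Response.End();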
A:
No, it's certainly not a good practice. 2XX status codes mean (among other things) that the request is valid, which is just the contrary of raising a HttpRequestValidationException.
I don't know how to make IE behave correctly, sadly. A slightly better way than to send a 200 would be to redirect it to an error page, but still far from perfect.
A:
Internet Explorer shows what they call a "friendly HTTP error message" when the response is 4xx or 5xx. This option can be turned off by the user in IE's Tools.Options.Advanced[Browsing] dialog.
Sending a 200 for an error page is generally bad practice. One alternative would be to have a valid "Error" page that's supposed to show error messages (so a 200 would be okay) and then use a 3xx redirect to that page.
| Response.StatusCode and Internet Explorer - Display custom message? | I am implementing a HttpRequestValidationException in my Application_Error Handler, and if possible, I want to display a custom message.
Now, I'm thinking about the StatusCode. In my current example, it sends a 200, which I think should not be done. I would like to send the (IMHO) more appropriate 400 Bad Request instead. However, at the same time, I would like to use Response.Write to enter a custom message. Firefox displays it properly, but IE7 gives me the Default unhelpful Internet Explorer Error Page.
On one side, I guess that Internet Explorer just assumes that everything <> 200 simply does not have any "good" content, and the RFC is not really clear here.
So I just wonder, is sending a HTTP 200 for an Error Page caused by a HttpRequestValidationException good practice or not? Are there good alternatives?
| [
"An HTTP 200 Response Code does not indicate an error. It indicates that everything was OK. You should not use a 200 response code for an error. \nInternet Explorer shows its \"Friendly Errors\" page if the response is less than 512 bytes. Here's more on this issue: http://weblogs.asp.net/scottgu/archive/2006/04/09/442332.aspx,\n",
"No, it's certainly not a good practice. 2XX status codes mean (among other things) that the request is valid. Which is just the contrary to raising a HttpRequestValidationException.\nI don't know how to make IE behave correctly, sadly. A slightly better way than to send a 200 would be to redirect it to an error page, but still far from perfect.\n",
"Internet Explorer shows what they call a \"friendly HTTP error message\" when the response is 4xx or 5xx. This option can be turned off by the user in IE's Tools.Options.Advanced[Browsing] dialog. \nSending a 200 for an error page is generally bad practice. One alternative would be to have a valid \"Error\" page that's supposed to show error messages (so a 200 would be okay) and then use a 3xx redirect to that page.\n"
] | [
4,
1,
1
] | [] | [] | [
"asp.net",
"http",
"internet_explorer_7"
] | stackoverflow_0000050747_asp.net_http_internet_explorer_7.txt |
Q:
What is a good way to format logs?
I'm designing an application which includes the need to log all incoming messages I receive from a Telnet connection. The text is largely plain, though it can include ANSI tags that provide text colour and formatting (16 colours, bold, underline, etc).
I would like to format my logs to store the text with formatting, date/time and potentially other metadata later. My first thought was all XML, but this could impact my ability to write a fast search tool later. My current idea is Date/Time + text in one file with metadata stored in another XML file, referenced by line number.
Is this a good solution? Also, where and how should I store the formatting commands? The original ANSI tags would disrupt the plain text, but having them in two different files might be awkward.
Additional: Thanks to some answers so far, though I should mention that most of the time the messages will be person-to-person communications rather than system messages. A more primitive IRC of sorts. It's up to my user to decide later (by adding metadata) which messages were important. This is the raw on-the-record log that filtered or edited logs might derive from.
A:
My first suggestion would be to use a drop-in logging tool like log4net, which will make formatting much more automatic.
If you are going to go the route of two files (and I agree with Craig that a database is probably a better choice,) you can probably save yourself a lot of heartache by having one file that is as sparse as you can make it for later fast searches and one that holds all the information in one place (metadata and data) rather than creating a metadata-only format.
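As a hedged illustration of the drop-in approach (this uses log4net's standard ILog/LogManager API; TelnetLogger and rawAnsiText are hypothetical names):
    // A field on the class doing the logging:
    private static readonly log4net.ILog log =
        log4net.LogManager.GetLogger(typeof(TelnetLogger));

    // ...then, wherever a message arrives; timestamp and layout come from configuration:
    log.Info("Received message: " + rawAnsiText);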
A:
G'day,
Definitely do the logging in a flat file and add munge scripts to turn it into XML later.
First suggestion would be to make sure that all date/time strings are in ISO 8601 format, namely YYYY-MM-DD hh:mm:ss.
Second is to make your categories, e.g. exception, fatal, error, warning, info, etc. really stand out in your logs.
Then maybe look at some of the vim syntax files and create a new syntax for your log format so that important log entries really stand out.
It's not really that hard to take one of the standard syntax files and modify it to handle your log strings.
HTH.
cheers,
Rob
A:
If you are capturing logging information for future searching and analysis, perhaps a database would be a better answer.
As for your solution: flat files do not scale well at all, whereas a database scales much better. I wouldn't split the files either; that just compounds the scalability issue. If you have to use a flat file, I would probably try keeping the metadata in a CSV (less overhead) and the data in a series of files indexed by the CSV file. That way all the data doesn't impact your index file. Just my thoughts.
A:
I'm going to "split the fence" and say use the database for all of your analysis/archiving log entries (such as your Telnet communications). This will grant you the benefits of full text searching, columns, and easy ways to search out the data.
Use a flat file (or XML format since the file shouldn't be too big) for any of your debug/critical error type logs.
If you have a broken database connection, or something has gone wacky with your table structure, logging to the DB will be meaningless.
Come to think of it, if you are looking for a slightly more "lightweight" solution, you could use SQLite to log all your telnet traffic so that you can leverage the advantage of the DB structure, but also have the availability of the file.
With another nod to log4net, you could easily accomplish this with the ADO appender they have.
A:
I'm not sure exactly what you are trying to accomplish. Telnet is usually thought of as a character-at-a-time protocol, so when you say "incoming messages" do you mean each character is a message? Or the entire user's session is a message?
I'll make some assumptions.
You have users logging in via telnet and you want to capture everything they do while they are logged in. Later, you want to be able to associate the stuff they did with that user and the time and date they did it. You'll need to be able to search later to find out "who did 'rm *' as root?"
I would store each user's session as a separate file, with a naming convention that includes the user's login and a timestamp.
e.g. 2008_09_08_14_52_07_nidonocu
Within the file, I would capture each byte received, assuming they will mostly be plain text characters.
e.g.
ls
cd www
ls
vi index.html
/copyright 2007
llllllllllllr8:wq
exit
Write the 8-bit ANSI characters to the file as well. You should be able to use a text editor and grep to do basic audits and searches. You could use a binary file viewer or get more sophisticated later if you need to actually read the 8-bit data.
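A minimal C# sketch of that per-session file idea (userLogin and receivedBytes are assumed to come from your telnet session handler; the timestamp pattern follows the convention above):
    // Build the session file name from a timestamp and the login, then
    // append the raw bytes, ANSI codes included.
    string fileName = string.Format("{0:yyyy_MM_dd_HH_mm_ss}_{1}.log", DateTime.Now, userLogin);
    using (var log = new FileStream(fileName, FileMode.Append, FileAccess.Write))
    {
        log.Write(receivedBytes, 0, receivedBytes.Length);
    }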
Backups, archiving, purging, etc. can all be done using regular file system tools and scripting.
My apologies if my assumptions are wrong.
--
Bruce
| What is a good way to format logs? | I'm designing an application which includes the need to log all incoming messages I receive from a Telnet connection. The text is largely plain though can include ANSI tags that provide text colour and formatting (16 colours, bold, underline, etc).
I would like to format my logs to store the text with formatting, date/time and potentially other metadata later. My first thought was all XML, but this could impact my ability to write a fast search tool later. My current idea is Date/Time + text in one file with metadata stored in another XML file, referenced by line number.
Is this a good solution? Also, where and how should I store the formatting commands? The original ANSI tags would disrupt the plain text, but having them in two different files might be awkward.
Additional: Thanks to some answers so far, though I should mention that most of the time the messages will be person-to-person communications rather than system messages. A more primitive IRC of sorts. It's up to my user to decide later (by adding metadata) which messages were important. This is the raw on-the-record log that filtered or edited logs might derive from.
| [
"My first suggestion would be to use a drop-in logging tool like log4net, which will make formatting much more automatic.\nIf you are going to go the route of two files (and I agree with Craig that a database is probably a better choice,) you can probably save yourself a lot of heartache by having one file that is as sparse as you can make it for later fast searches and one that holds all the information in one place (metadata and data) rather than creating a metadata-only format.\n",
"G'day,\nDefinitely do the logging in flat file and add munge scripts to turn it into XMl later.\nFirst suggestion would be to make sure that all date/time strings are in ISO 8601 format, namely YYYY-MM-DD hh:mm:ss.\nSecond is to make your categories, e.g. exception, fatal, error, warning, info, etc. really stand out in your logs.\nThen aybe look at some of the vim syntax files and create a new syntax for your log format so that important log entries really stand out.\nIt's not really that hard to take one of the standard syntax files and modify it to handle your log strings.\nHTH.\ncheers,\nRob\n",
"If you are catpuring logging information for future searching and anaylsis perhaps a database would be a better answer. \nAs for your solution. Flat files do not scale well at all where as a database scale much better. I wouldn't split the files either, that just compounds the scalability issue. If you have to use a flat file I would probably try keeping the meta data in a csv (less over head) and the data in a series of files indexed by the csv file. That way all the data doesn't impact your index file. Just my thoughts.\n",
"I'm going to \"split the fence\" and say use the database for all of your analysis/archiving log entries (such as your Telnet communications). This will grant you the benefits of full text searching, columns, and easy ways to search out the data.\nUse a flat file (or XML format since the file shouldn't be too big) for any of your debug/critical error type logs. \nIf you have a broken database connection, or something has gone wacky with your table structure, logging to the DB will be meaningless.\nCome to think of it, if you are looking for a slightly more \"lightweight\" solution, you could use SQLite to log all your telnet traffic so that you can leverage the advantage of the DB structure, but also have the availability of the file.\nWith another nod to log4net, you could easily accomplish this with the ADO appender they have.\n",
"I'm not sure exactly what you are trying to accomplish. Telnet is usually thought of as a character-at-a-time protocol, so when you say \"incoming messages\" do you mean each character is a message? Or the entire user's session is a message?\nI'll make some assumtions.\nYou have users logging in via telnet and you want to capture everything they do while they are logged in. Later, you want to be able to associate the stuff they did with that user and the time and date they did it. You'll need to be able to search later to find out \"who did 'rm *' as root?\"\nI would store each user's session as a separate file, with a naming convention that includes the user's login and a timestamp.\ne.g. 2008_09_08_14_52_07_nidonocu\nWithin the the file, I would capture each byte received, assuming they will mostly be plain text characters.\ne.g.\nls\ncd www\nls\nvi index.html\n/copyright 2007\nllllllllllllr8:wq\nexit\n\nWrite the 8-bit ANSI characters to the file as well. You should be able to use a text editor and grep to do basic audits and searches. You could use a binary file viewer or get more sophisticated later if you need to actually read the 8-bit data.\nBackups, archiving, purging, etc. can all be done using regular file system tools and scripting.\nMy apologies if my assumptions are wrong.\n--\nBruce\n"
] | [
2,
1,
0,
0,
0
] | [] | [] | [
"c#",
"logging",
"parsing"
] | stackoverflow_0000050704_c#_logging_parsing.txt |
Q:
Visual Studio 2005 crashes on start-up
In my work environment, Visual Studio currently crashes every time I start our main project unless I delete the .suo (solution options) and .ncb (C++ Intellisense symbols) files.
Obviously, I've found a workaround. Is there a more permanent solution than this?
A:
Try monitoring the Visual Studio process using a tool like Process Monitor and get more info. It could be because of some weird file access issues.
A:
Have you installed Visual Studio 2005 Service Pack 1?
A:
The accepted answer wasn't quite correct, but it pointed in the right direction.
There is a hotfix for VS2k5 SP1 described in KB article 947315 that addresses this issue.
| Visual Studio 2005 crashes on start-up | In my work environment, Visual Studio currently crashes every time I start our main project unless I delete the .suo (solution options) and .ncb (C++ Intellisense symbols) files.
Obviously, I've found a workaround. Is there a more permanent solution than this?
| [
"Try monitoring the Visual Studio process using a tool like Process Monitor and get more info. It could be because of some weird file access issues.\n",
"Have you installed Visual Studio 2005 Service Pack 1?\n",
"The accepted answer wasn't quite correct, but it pointed in the right direction.\nThere is a hotfix for VS2k5 SP1 described in KB article 947315 that addresses this issue.\n"
] | [
2,
2,
1
] | [] | [] | [
"crash",
"intellisense",
"visual_studio"
] | stackoverflow_0000050774_crash_intellisense_visual_studio.txt |
Q:
Where do attached properties fit in a class diagram?
What is the most appropriate way to represent attached properties in a UML diagram or an almost-uml diagram like the VS2008 class diagram?
A:
In UML it'll be a quoted tag before the member. Something conventional, like this:
"attached" Align: ElementAlign
| Where do attached properties fit in a class diagram? | What is the most appropriate way to represent attached properties in a UML diagram or an almost-uml diagram like the VS2008 class diagram?
| [
"In UML it'll be quoted tag before the member. Something conventional, like this: \n\"attached\" Align: ElementAlign\n\n"
] | [
1
] | [] | [] | [
".net",
"class_diagram",
"dependency_properties",
"uml",
"wpf"
] | stackoverflow_0000046859_.net_class_diagram_dependency_properties_uml_wpf.txt |
Q:
What role to give a sql login in order to create a database and additional logins
What role should I give a SQL login if I need the login to be able to create a database, and create additional logins and add users based on those logins to the database I created? This is SQL 2005.
A:
There is no fixed database role that includes these permissions. You'll have to create a role and assign the permissions individually.
CREATE ROLE db_creator
GRANT CREATE DATABASE TO db_creator
GRANT ALTER ANY LOGIN TO db_creator
GRANT ALTER ANY USER TO db_creator
| What role to give a sql login in order to create a database and additional logins | What role should I give a sql login if I need the login to be able to create a database, and create additional logins and add users based on those logins to the database i created? This is sql 2005.
| [
"There is no fixed database role that includes these permissions. You'll have to create a role and assign the permissions individually.\n\nCREATE ROLE db_creator\nGRANT CREATE DATABASE TO db_creator\nGRANT ALTER ANY LOGIN TO db_creator\nGRANT ALTER ANY USER TO db_creator\n\n"
] | [
1
] | [] | [] | [
"sql_server_2005"
] | stackoverflow_0000050863_sql_server_2005.txt |
Q:
How do you create a weak reference to an object in Python?
How do you create a weak reference to an object in Python?
A:
>>> import weakref
>>> class Object:
... pass
...
>>> o = Object()
>>> r = weakref.ref(o)
>>> # if the reference is still active, r() will be o, otherwise None
>>> do_something_with_o(r())
See the weakref module docs for more details.
You can also use weakref.proxy to create an object that proxies o. It will throw a ReferenceError if used when the referent is no longer referenced.
| How do you create a weak reference to an object in Python? | How do you create a weak reference to an object in Python?
| [
">>> import weakref\n>>> class Object:\n... pass\n...\n>>> o = Object()\n>>> r = weakref.ref(o)\n>>> # if the reference is still active, r() will be o, otherwise None\n>>> do_something_with_o(r()) \n\nSee the wearkref module docs for more details.\nYou can also use weakref.proxy to create an object that proxies o. Will throw ReferenceError if used when the referent is no longer referenced.\n"
] | [
13
] | [] | [] | [
"python",
"weak_references"
] | stackoverflow_0000050923_python_weak_references.txt |
Q:
Average User Download Speeds
Any ideas what the average user's download speed is? I'm working on a site that streams video and am trying to figure out what an average download speed is, so as to determine quality.
I know I might be comparing apples with oranges, but I'm just looking for something to get a basis for where to start.
A:
Speedtest.net has a lot of stats broken down by country, region, city and ISP. Not sure about accuracy, since it's only based on the people using their "bandwidth measurement" service.
A:
It would depend on the geography that you are targeting. For example, in India, you can safely assume it would be a number below 256kbps.
A:
Try attacking it from the other angle. Look at streaming services that cater to the customer you want, and have significant volume (maybe YouTube) and see what they're pushing. You'll find there's a pretty direct correlation between Alexa rating (popularity) and quality (minimum bitrate required). Vimeo will always have fewer users than YouTube because the user experience is poor for low bitrate users.
There are many other factors, and this should only form one small facet of your bandwidth decision, but it's a useful comparison to make.
Keep in mind, however, that you want to degrade gracefully. As more and more sites come online you'll start bumping into ISPs that limit total transfer, and being able to tell your customers how much of their bandwidth your site is consuming is useful, as well as proclaiming that you are a low bandwidth site.
Further, more and more users are using portable cellular connections (iPhone) where limited bandwidth is a big deal. AT&T has oversold many markets so being able to get useful video through a tiny link will enable you to capture market that Vimeo and Hulu cannot.
Quite frankly, though, the best thing to do is degrade on the fly gracefully. Measure the bandwidth of the connection continuously and adjust bandwidth as needed for a smooth playback experience with good audio. Then you can take all users across the gamut...
-Adam
A:
You could try looking at the lower tier offerings from AT&T and Comcast. Probably 1.5 Mbps for the basic level (which I imagine most people get).
The "test your bandwidth" sites may have some stats on this, too.
A:
There are a lot of factors involved (server bandwidth, local ISP, network in between, etc) which make it difficult to give a hard answer. With my current ISP, I typically get 200-300 kB/sec. Although when the planets align I've gotten as much as 2 MB/sec (the "quoted" peak downlink speed). That was with parallel streams, however. The peak bandwidth I've achieved on a single stream is 1.2 MB/sec
A:
The best strategy is always to give your users options. Why don't you start the stream at a low bitrate that will work for everyone and provide a "High Quality" link for those of us with FTTH connections? I believe YouTube has started doing this.
A:
According to CWA, the average US resident has a 1.9Mbps download speed. They have data by state, so if you have money then you can probably get a more specific report for your intended audience. Keep in mind, however, that more and more people are sharing this with multiple computers, using VOIP devices, and running background processes that consume bandwidth.
-Adam
A:
Wow.
This is so dependent on the device, connection method, connection type, ISP throttling, etc. involved in the end-to-end link.
To try and work out an average speed would be fairly impossible.
Think, fat pipe at home (8Gb plus) versus bad wireless connection provided for free at the airport (9.6kb) and you can start to get an idea of the range of connections you're trying to average over.
Then we move onto variations in screen sizes and device capabilities.
Maybe trawl the UA strings of incoming connections to get an idea of the capabilities of the user devices being used out there.
Maybe see if you can use some sort of geolocation solution to try and see how people are connecting to your site to get an idea of connection capabilities as well.
Are you offering the video in a fixed format, i.e. X x Y pixel size?
HTH.
cheers,
Rob
A:
If I'm using your site, "average" doesn't matter. All I care about is MY experience, and so you either need to make the site adaptive, design for a pretty low speed (iPhone 2G gets you 70-80 kbps if you're lucky, to take one common case), or be very clear about the requirements so I can decide whether or not my connection-of-the-moment will work or not.
What you don't want to subject your users to is unpredictably choppy, intermittent video and audio.
| Average User Download Speeds | Any ideas what the average user's download speed is? I'm working on a site that streams video and am trying to figure out what an average download speed as to determine quality.
I know i might be comparing apples with oranges but I'm just looking for something to get a basis for where to start.
| [
"Speedtest.net has a lot of stats broken down by country, region, city and ISP. Not sure about accuracy, since it's only based on the people using their \"bandwidth measurement\" service.\n",
"It would depend on the geography that you are targeting. For example, in India, you can safely assume it would be a number below 256kbps.\n",
"Try attacking it from the other angle. Look at streaming services that cater to the customer you want, and have significant volume (maybe youtube) and see what they're pushing. You'll find there'a pretty direct correlation between alexa rating (popularity) and quality(minimum bitrate required). Vimeo will always have fewer users than Youtube because the user experience is poor for low bitrate users.\nThere are many other factors, and this should only form one small facet of your bandwidth decision, but it's a useful comparison to make.\nKeep in mind, however, that you want to degrade gracefully. As more and more sites come online you'll start bumping into ISPs that limit total transfer, and being able to tell your customers how much of their bandwidth your site is consuming is useful, as well as proclaiming that you are a low bandwidth site.\nFurther, more and more users are using portable cellular connections (iPhone) where limited bandwidth is a big deal. AT&T has oversold many markets so being able to get useful video through a tiny link will enable you to capture market that vimeo and Hulu cannot.\nQuite frankly, though, the best thing to do is degrade on the fly gracefully. Measure the bandwidth of the connection continuously and adjust bandwidth as needed for a smooth playback experience with good audio. Then you can take all users across the gamut...\n-Adam\n",
"You could try looking at the lower tier offerings from AT&T and Comcast. Probably 1.5 Mbps for the basic level (which I imagine most people get).\nThe \"test your bandwidth\" sites may have some stats on this, too.\n",
"There are a lot of factors involved (server bandwidth, local ISP, network in between, etc) which make it difficult to give a hard answer. With my current ISP, I typically get 200-300 kB/sec. Although when the planets align I've gotten as much as 2 MB/sec (the \"quoted\" peak downlink speed). That was with parallel streams, however. The peak bandwidth I've achieved on a single stream is 1.2 MB/sec\n",
"The best strategy is always to give your users options. Why don't you start the stream at a low bitrate that will work for everyone and provide a \"High Quality\" link for those of us with FTTH connections? I believe YouTube has started doing this.\n",
"According to CWA, the average US resident has a 1.9Mbps download speed. They have data by state, so if you have money then you can probably get a more specific report for your intended audience. Keep in mind, however, that more and more people are sharing this with multiple computers, using VOIP devices, and running background processes that consume bandwidth.\n-Adam\n",
"Wow.\nThis is so dependent on the device, connection method, connection type, ISP throttling, etc. involved in the end-to-end link.\nTo try and work out an average speed would be fairly impossible.\nThink, fat pipe at home (8Gb plus) versus bad wireless connection provided for free at the airport (9.6kb) and you can start to get an idea of the range of connections you're trying to average over.\nThen we move onto variations in screen sizes and device capabilities.\nMaybe trawl the UA stings of incoming connectins to get an idea of the capabilities of the user devices being used out there.\nMaybe see if you can use some sort of geolocation solution to try and see how people are connecting to your site to get an idea of connection capabilities as well.\nAre you offering the video in a fixed format, i.e. X x Y pixel size?\nHTH.\ncheers,\nRob\n",
"If I'm using your site, \"average\" doesn't matter. All I care about is MY experience, and so you either need to make the site adaptive, design for a pretty low speed (iPhone 2G gets you 70-80 kbps if you're lucky, to take one common case), or be very clear about the requirements so I can decide whether or not my connection-of-the-moment will work or not.\nWhat you don't want to subject your users to is unpredictably choppy, intermittent video and audio.\n"
] | [
6,
2,
2,
1,
0,
0,
0,
0,
0
] | [] | [] | [
"average",
"download",
"performance"
] | stackoverflow_0000050890_average_download_performance.txt |
Q:
Can XML comments go anywhere?
I wrote a simple tool to generate a DBUnit XML dataset using queries that the user enters. I want to include each query entered in the XML as a comment, but the DBUnit API to generate the XML file doesn't support inserting the comment where I would like it (above the data it generates), so I am resorting to putting the comment with ALL queries either at the top or bottom.
So my question: is it valid XML to place it at either location? For example, above the XML Declaration:
<!-- Queries used: ... -->
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
...
</dataset>
Or below the root node:
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
...
</dataset>
<!-- Queries used: ... -->
I plan to initially try above the XML Declaration, but I have doubts about whether that is valid XML, despite the claim from Wikipedia:
Comments can be placed anywhere in the tree, including in the text if the content of the element is text or #PCDATA.
I plan to post back if this works, but it would be nice to know if it is an official XML standard.
UPDATE: See my response below for the result of my test.
A:
According to the XML specification, a well-formed XML document is:
document ::= prolog element Misc*
where prolog is
prolog ::= XMLDecl? Misc* (doctypedecl Misc*)?
and Misc is
Misc ::= Comment | PI | S
and
XMLDecl ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>'
which means that, if you want to have comments at the very top, you cannot have an XML declaration.
You can, however, have comments after the declaration and outside the document element, either at the top or the bottom of the document, because Misc* can contain comments.
The specification agrees with Wikipedia on comments:
2.5 Comments
[Definition: Comments may appear anywhere in a document outside other markup; in addition, they may appear within the document type declaration at places allowed by the grammar. They are not part of the document's character data; an XML processor MAY, but need not, make it possible for an application to retrieve the text of comments. For compatibility, the string "--" (double-hyphen) MUST NOT occur within comments.] Parameter entity references MUST NOT be recognized within comments.
All of this together means that you can put comments anywhere that's not inside other markup, except that you cannot have an XML declaration if you lead with a comment.
However, while in theory theory agrees with practice, in practice it doesn't, so I'd be curious to see how your experiment works out.
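A quick way to run that experiment is to feed both variants to a conforming parser; here is a sketch using Python's standard library (any conforming parser should behave the same way):
# probe comment placement with Python's xml.dom.minidom
from xml.dom import minidom

leading = "<!-- comment -->\n<?xml version='1.0' encoding='UTF-8'?>\n<dataset/>"
trailing = "<?xml version='1.0' encoding='UTF-8'?>\n<dataset/>\n<!-- comment -->"

try:
    minidom.parseString(leading)       # declaration no longer first: rejected
except Exception as e:
    print("leading comment rejected: %s" % e)

minidom.parseString(trailing)          # comment after the root element is fine
print("trailing comment accepted")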
A:
The first example is not valid XML; the declaration has to be the first thing in an XML document.
But besides that, comments can go anywhere else.
Correcting your first example:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Queries used: ... -->
<dataset>
</dataset>
A:
The XML declaration (which looks like a processing instruction but technically isn't one) must be the very first thing in the XML content (see XML comment and processing instructions). The following should work:
<?xml version='1.0' encoding='UTF-8'?>
<!-- Queries used: ... -->
<dataset>
...
</dataset>
A:
Thanks for the answers everyone!
As it turns out, the comment ahead of the file seemed to work, but when I delved into the DBUnit source, I found that it is because validation is turned off.
I did try a simple document load via:
import javax.xml.parsers.*;   // imports added for completeness
import org.w3c.dom.Document;
import java.io.File;
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = factory.newDocumentBuilder();
Document document = builder.parse(new File("/path/to/file"));
and this fails with an exception because the XML Declaration is not the first thing (as others indicated would be the case).
So, while DBUnit would work, I prefer to have valid XML, so I moved the comment to the end (since DBUnit generates the XML Declaration, it is not an option to place the comment below it, even though I would prefer that... at least not without modifying the XML after the fact, which would be more work than it is worth).
| Can XML comments go anywhere? | I wrote a simple tool to generate a DBUnit XML dataset using queries that the user enters. I want to include each query entered in the XML as a comment, but the DBUnit API to generate the XML file doesn't support inserting the comment where I would like it (above the data it generates), so I am resorting to putting the comment with ALL queries either at the top or bottom.
So my question: is it valid XML to place it at either location? For example, above the XML Declaration:
<!-- Queries used: ... -->
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
...
</dataset>
Or below the root node:
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
...
</dataset>
<!-- Queries used: ... -->
I plan to initially try above the XML Declaration, but I have doubts on if that is valid XML, despite the claim from wikipedia:
Comments can be placed anywhere in the tree, including in the text if the content of the element is text or #PCDATA.
I plan to post back if this works, but it would be nice to know if it is an official XML standard.
UPDATE: See my response below for the result of my test.
| [
"According to the XML specification, a well-formed XML document is:\n\ndocument ::= prolog element Misc*\n\nwhere prolog is\n\nprolog ::= XMLDecl? Misc* (doctypedecl Misc*)?\n\nand Misc is\n\nMisc ::= Comment | PI | S\n\nand\n\nXMLDecl ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>'\n\nwhich means that, if you want to have comments at the top, you cannot have an XML type declaration. \nYou can, however, have comments after the declaration and outside the document element, either at the top or the bottom of the document, because Misc* can contain comments.\nThe specification agrees with Wikipedia on comments:\n\n2.5 Comments\n[Definition: Comments may appear anywhere in a document outside other markup; in addition, they may appear within the document type declaration at places allowed by the grammar. They are not part of the document's character data; an XML processor MAY, but need not, make it possible for an application to retrieve the text of comments. For compatibility, the string \"--\" (double-hyphen) MUST NOT occur within comments.] Parameter entity references MUST NOT be recognized within comments.\n\nAll of this together means that you can put comments anywhere that's not inside other markup, except that you cannot have an XML declaration if you lead with a comment.\nHowever, while in theory theory agrees with practice, in practice it doesn't, so I'd be curious to see how your experiment works out.\n",
"The first example is not valid XML, the declaration has to be the first thing in a XML document.\nBut besides that, comments can go anywhere else.\nCorrecting your first example:\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- Queries used: ... -->\n<dataset>\n</dataset>\n\n",
"The processing instruction must be the very first thing in the XML content (see XML comment and processing instructions). The following should work:\n<?xml version='1.0' encoding='UTF-8'?>\n<!-- Queries used: ... -->\n<dataset>\n ...\n</dataset>\n\n",
"Thanks for the answers everyone!\nAs it turns out, the comment ahead of the file seemed to work, but when I delved into the DBUnit source, it is because validation is turned off.\nI did try a simple document load via:\nDocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();\nDocumentBuilder builder = factory.newDocumentBuilder();\nDocument document = builder.parse(new File(\"/path/to/file\"));\n\nand this fails with an exception because the XML Declaration is not the first thing (as others indicated would be the case).\nSo, while DBUnit would work, I prefer to have valid XML, so I moved the comment to the end (since DBUnit generates the XML Declaration, it is not an option to place the comment below it, even though I would prefer that... at least not without modifying the XML after the fact, which would be more work than it is worth).\n"
] | [
17,
5,
3,
2
] | [] | [] | [
"comments",
"xml"
] | stackoverflow_0000050824_comments_xml.txt |
Q:
Why is parameterized SQL generated by NHibernate just as fast as a stored procedure?
One of my co-workers claims that even though the execution path is cached, there is no way parameterized SQL generated from an ORM is as quick as a stored procedure. Any help with this stubborn developer?
A:
I would start by reading this article:
http://decipherinfosys.wordpress.com/2007/03/27/using-stored-procedures-vs-dynamic-sql-generated-by-orm/
Here is a speed test between the two:
http://www.blackwasp.co.uk/SpeedTestSqlSproc.aspx
A:
Round 1 - You can start a profiler trace and compare the execution times.
A:
For most people, the best way to convince them is to "show them the proof." In this case, I would create a couple basic test cases to retrieve the same set of data, and then time how long it takes using stored procedures versus NHibernate. Once you have the results, hand it over to them and most skeptical people should yield to the evidence.
A:
I would only add a couple things to Rob's answer:
First, Make sure the amount of data involved in the test cases is similiar to production values. In other words if your queries are normally against tables with hundreds of thousands or rows, then create such a test environment.
Second, make everything else equal except for the use of an nHibernate generated query and a s'proc call. Hopefully you can execute the test by simply swapping out a provider.
Finally, realize that there is usually a lot more at stake than just stored procedures vs. ORM. With that in mind the test should look at all of the factors: execution time, memory consumption, scalability, debugging ability, etc.
A:
The problem here is that you've accepted the burden of proof. You're unlikely to change someone's mind like that. Like it or not, people--even programmers-- are just too emotional to be easily swayed by logic. You need to put the burden of proof back on him- get him to convince you otherwise- and that will force him to do the research and discover the answer for himself.
A better argument to use stored procedures is security. If you use only stored procedures, with no dynamic sql, you can disable SELECT, INSERT, UPDATE, DELETE, ALTER, and CREATE permissions for the application database user. This will protect you against most 2nd order SQL Injection, whereas parameterized queries are only effective against first order injection.
A:
Measure it, but in a non-micro-benchmark, i.e. something that represents real operations in your system. Even if there would be a tiny performance benefit for a stored procedure it will be insignificant against the other costs your code is incurring: actually retrieving data, converting it, displaying it, etc. Not to mention that using stored procedures amounts to spreading your logic out over your app and your database with no significant version control, unit tests or refactoring support in the latter.
A:
Benchmark it yourself. Write a testbed class that executes a sampled stored procedure a few hundred times, and run the NHibernate code the same amount of times. Compare the average and median execution time of each method.
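The shape of such a testbed, sketched in Python for brevity (the real harness would call your stored procedure and your NHibernate query instead of this stub):
# time a callable a few hundred times and report mean and median
import time, statistics

def time_method(fn, runs=300):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.median(samples)

print(time_method(lambda: sum(range(1000))))   # -> (mean_seconds, median_seconds)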
A:
It is just as fast if the query is the same each time. Sql Server 2005 caches query plans at the level of each statement in a batch, regardless of where the SQL comes from.
The long-term difference might be that stored procedures are many, many times easier for a DBA to manage and tune, whereas hundreds of different queries that have to be gleaned from profiler traces are a nightmare.
A:
I've had this argument many times over.
Almost always I end up grabbing a really good DBA, running a proc and a piece of code with the profiler on, and getting the DBA to show that the results are so close it's negligible.
A:
Measure it.
Really, any discussion on this topic is probably futile until you've measured it.
A:
He may be correct for the specific use case he is thinking of. A stored procedure will probably execute faster for some complex set of SQL that can be arbitrarily tuned. However, something you get from things like Hibernate is caching. This may prove much faster for the lifetime of your actual application.
A:
The additional layer of abstraction will cause it to be slower than a pure call to a sproc. Just by the fact that you have additional allocations on the managed heap, and additional pushes and pops off the call stack, it is more efficient to call a sproc than to have an ORM build the query, regardless of how good the ORM is.
How slow, if its even measurable, is debatable. This is also helped by the fact that most ORM's have a caching mechanism to avoid doing the query at all.
A:
Even if the stored procedure is 10% faster (it probably isn't), you may want to ask yourself how much it really matters. What really matters in the end, is how easy it is to write and maintain code for your system. If you are coding a web app, and your pages all return in 0.25 seconds, then the extra time saved by using stored procedures is negligible. However, there can be many added advantages of using an ORM like NHibernate, which would be extremely hard to duplicate using only stored procedures.
| Why is parameterized SQL generated by NHibernate just as fast as a stored procedure? | One of my co-workers claims that even though the execution path is cached, there is no way parameterized SQL generated from an ORM is as quick as a stored procedure. Any help with this stubborn developer?
| [
"I would start by reading this article:\nhttp://decipherinfosys.wordpress.com/2007/03/27/using-stored-procedures-vs-dynamic-sql-generated-by-orm/\nHere is a speed test between the two:\nhttp://www.blackwasp.co.uk/SpeedTestSqlSproc.aspx\n",
"Round 1 - You can start a profiler trace and compare the execution times. \n",
"For most people, the best way to convince them is to \"show them the proof.\" In this case, I would create a couple basic test cases to retrieve the same set of data, and then time how long it takes using stored procedures versus NHibernate. Once you have the results, hand it over to them and most skeptical people should yield to the evidence.\n",
"I would only add a couple things to Rob's answer: \nFirst, Make sure the amount of data involved in the test cases is similiar to production values. In other words if your queries are normally against tables with hundreds of thousands or rows, then create such a test environment. \nSecond, make everything else equal except for the use of an nHibernate generated query and a s'proc call. Hopefully you can execute the test by simply swapping out a provider.\nFinally, realize that there is usually a lot more at stake than just stored procedures vs. ORM. With that in mind the test should look at all of the factors: execution time, memory consumption, scalability, debugging ability, etc.\n",
"The problem here is that you've accepted the burden of proof. You're unlikely to change someone's mind like that. Like it or not, people--even programmers-- are just too emotional to be easily swayed by logic. You need to put the burden of proof back on him- get him to convince you otherwise- and that will force him to do the research and discover the answer for himself.\nA better argument to use stored procedures is security. If you use only stored procedures, with no dynamic sql, you can disable SELECT, INSERT, UPDATE, DELETE, ALTER, and CREATE permissions for the application database user. This will protect you against most 2nd order SQL Injection, whereas parameterized queries are only effective against first order injection.\n",
"Measure it, but in a non-micro-benchmark, i.e. something that represents real operations in your system. Even if there would be a tiny performance benefit for a stored procedure it will be insignificant against the other costs your code is incurring: actually retrieving data, converting it, displaying it, etc. Not to mention that using stored procedures amounts to spreading your logic out over your app and your database with no significant version control, unit tests or refactoring support in the latter.\n",
"Benchmark it yourself. Write a testbed class that executes a sampled stored procedure a few hundred times, and run the NHibernate code the same amount of times. Compare the average and median execution time of each method. \n",
"It is just as fast if the query is the same each time. Sql Server 2005 caches query plans at the level of each statement in a batch, regardless of where the SQL comes from.\nThe long-term difference might be that stored procedures are many, many times easier for a DBA to manage and tune, whereas hundreds of different queries that have to be gleaned from profiler traces are a nightmare.\n",
"I've had this argument many times over.\nAlmost always I end up grabbing a really good dba, and running a proc and a piece of code with the profiler running, and get the dba to show that the results are so close its negligible.\n",
"Measure it. \nReally, any discussion on this topic is probably futile until you've measured it.\n",
"He may be correct for the specific use case he is thinking of. A stored procedure will probably execute faster for some complex set of SQL, that can be arbitrarily tuned. However, something you get from things like hibernate is caching. This may prove much faster for the lifetime of your actual application.\n",
"The additional layer of abstraction will cause it to be slower than a pure call to a sproc. Just by the fact that you have additional allocations on the managed heap, and additional pushes and pops off the callstack, the truth of the matter is that it is more efficient to call a sproc over having an ORM build the query, regardless how good the ORM is.\nHow slow, if its even measurable, is debatable. This is also helped by the fact that most ORM's have a caching mechanism to avoid doing the query at all.\n",
"Even if the stored procedure is 10% faster (it probably isn't), you may want to ask yourself how much it really matters. What really matters in the end, is how easy it is to write and maintain code for your system. If you are coding a web app, and your pages all return in 0.25 seconds, then the extra time saved by using stored procedures is negligible. However, there can be many added advantages of using an ORM like NHibernate, which would be extremely hard to duplicate using only stored procedures.\n"
] | [
13,
8,
5,
5,
2,
1,
1,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"orm",
"sql",
"stored_procedures"
] | stackoverflow_0000050346_orm_sql_stored_procedures.txt |
Q:
Global/session scoped values in PHP
Is there a standard way of dealing with globally scoped variables in PHP? Session scoped?
From the research I've done, it looks like the options are mostly add-ons or external. APC might work, but would be limited to a single PHP instance and not so useful for a farm of servers. Memcached seems like it would work, but I was hoping to find something within PHP.
Does its stateless approach keep there from being a standard method for handling this?
A:
A persistence layer is the only way to go with PHP: either a file-based solution or a database.
PHP natively doesn't provide any mechanism for application-scope variables.
| Global/session scoped values in PHP | Is there a standard way of dealing with globally scoped variables in PHP? Session scoped?
From the research I've done, it looks like the options are mostly add-ons or external. APC might work, but would be limited to a single PHP instance and not so useful for a farm of servers. Memcached seems like it would work, but I was hoping to find something within PHP.
Does its stateless approach keep there from being a standard method for handling this?
| [
"A persistent layar is the only way to go with php. Either file based solution or database. \nphp natively doesn't provide any mechanism to do application scope variable.\n"
] | [
1
] | [
"You can do session variables with $_SESSION.\n"
] | [
-1
] | [
"apc",
"memcached",
"php",
"session_state"
] | stackoverflow_0000050652_apc_memcached_php_session_state.txt |
Q:
How to create a non-interactive window in MFC
In my application I have a window which I popup with small messages on it (think similar to tooltip). This window uses the layered attributes to draw alpha backgrounds etc.
If I have several of these windows open at once, and I click one with my mouse, when they disappear they cause my application to lose focus (it switches focus to the app behind the current one).
How do I stop any interaction in my window?
A:
After playing with the WM_NCACTIVATE message with no luck, I overrode the WM_SETFOCUS message:
void CMyWindow::OnSetFocus(CWnd* pOldWnd)
{
    if (pOldWnd != NULL)
    {
        // Hand focus straight back to whichever window had it,
        // so this popup never keeps it.
        pOldWnd->SetFocus();
    }
}
That seems to do the trick. No idea why it works though! Comments welcome on that issue.
A:
It works because OnSetFocus (like many of the On* methods) gives you a chance to pre-empt an action before it actually occurs. The focus never actually switches to your non-interactive window.
| How to create a non-interactive window in MFC | In my application I have a window which I popup with small messages on it (think similar to tooltip). This window uses the layered attributes to draw alpha backgrounds etc.
If I have several of these windows open at once, and I click one with my mouse, when they disappear they cause my application to lose focus (it switches focus to the app behind the current one).
How do I stop any interaction in my window?
| [
"After playing with the WM_NCACTIVATE message with no luck, I overrode the WM_SETFOCUS message:\nvoid CMyWindow::OnSetFocus(CWnd* pOldWnd)\n{\n if (pOldWnd != NULL)\n {\n pOldWnd->SetFocus();\n }\n}\n\nThat seems to do the trick. No idea why it works though! Comments welcome on that issue.\n",
"It works because OnSetFocus (like many of the On* methods) gives you a chance to pre-empt an action before it actually occurs. The focus never actually switches to your non-interactive window.\n"
] | [
1,
1
] | [] | [] | [
"activation",
"focus",
"mfc"
] | stackoverflow_0000049806_activation_focus_mfc.txt |
Q:
Meaning/cause of RPC Exception 'No interfaces have been exported.'
We have a fairly standard client/server application built using MS RPC. Both client and server are implemented in C++. The client establishes a session to the server, then makes repeated calls to it over a period of time before finally closing the session.
Periodically, however, especially under heavy load conditions, we are seeing an RPC exception show up with code 1754: RPC_S_NOTHING_TO_EXPORT.
It appears that this happens in the middle of a session. The user is logged on for a while, making successful calls, then one of the calls inexplicably returns this error. As far as we can tell, the server receives no indication that anything went wrong - and it definitely doesn't see the call the client made.
The error code appears to have permanent implications, as well. Having the client retry the connection doesn't work, either. However, if the user has multiple user sessions active simultaneously between the same client and server, the other connections are unaffected.
In essence, I have two questions:
Does anyone know what RPC_S_NOTHING_TO_EXPORT means? The MSDN documentation simply says: "No interfaces have been exported." ... Huh? The session was working fine for numerous instances of the same call up until this point...
Does anyone have any ideas as to how to identify the real problem? Note: Capturing network traffic is something we would rather avoid, if possible, as the problem is sporadic enough that we would likely go through multiple gigabytes of traffic before running into an occurrence.
A:
Capturing network traffic would be one of the best ways to tackle this issue. If you can't do that, could you dump the client process and debug with WinDBG or Visual Studio? Perhaps compare a dump when operating normally versus in the error state?
| Meaning/cause of RPC Exception 'No interfaces have been exported.' | We have a fairly standard client/server application built using MS RPC. Both client and server are implemented in C++. The client establishes a session to the server, then makes repeated calls to it over a period of time before finally closing the session.
Periodically, however, especially under heavy load conditions, we are seeing an RPC exception show up with code 1754: RPC_S_NOTHING_TO_EXPORT.
It appears that this happens in the middle of a session. The user is logged on for a while, making successful calls, then one of the calls inexplicably returns this error. As far as we can tell, the server receives no indication that anything went wrong - and it definitely doesn't see the call the client made.
The error code appears to have permanent implications, as well. Having the client retry the connection doesn't work, either. However, if the user has multiple user sessions active simultaneously between the same client and server, the other connections are unaffected.
In essence, I have two questions:
Does anyone know what RPC_S_NOTHING_TO_EXPORT means? The MSDN documentation simply says: "No interfaces have been exported." ... Huh? The session was working fine for numerous instances of the same call up until this point...
Does anyone have any ideas as to how to identify the real problem? Note: Capturing network traffic is something we would rather avoid, if possible, as the problem is sporadic enough that we would likely go through multiple gigabytes of traffic before running into an occurrence.
| [
"Capturing network traffic would be one of the best ways to tackle this issue. If you can't do that, could you dump the client process and debug with WinDBG or Visual Studio? Perhaps compare a dump when operating normally versus in the error state?\n"
] | [
1
] | [] | [] | [
"rpc",
"windows"
] | stackoverflow_0000045977_rpc_windows.txt |
Q:
Java Singleton vs static - is there a real performance benefit?
I am merging a CVS branch and one of the larger changes is the replacement wherever it occurs of a Singleton pattern with abstract classes that have a static initialisation block and all static methods.
Is this something that's worth keeping, given that it will require merging a lot of conflicts? What sort of situation would I be looking at for this refactoring to be worthwhile?
We are running this app under Weblogic 8.1 (so JDK 1.4.2)
sorry Thomas, let me clarify..
the HEAD version has the traditional singleton pattern (private constructor, getInstance() etc)
the branch version has no constructor, is a 'public abstract class' and modified all the methods on the object to be 'static'. The code that used to exist in the private constructor is moved into a static block.
Then all usages of the class are changed which causes multiple conflicts in the merge.
There are a few cases where this change was made.
A:
From a strict runtime performance point of view, the difference is really negligible. The main difference between the two lies in the fact that the "static" lifecycle is linked to the classloader, whereas for the singleton it's a regular instance lifecycle. Usually it's better to stay away from the ClassLoader business; you avoid some tricky problems, especially when you try to reload the web application.
A:
I would use a singleton if it needed to store any state, and static classes otherwise. There's no point in instantiating something, even a single instance, unless it needs to store something.
A:
Static is bad for extensibility since static methods and fields cannot be extended or overridden by subclasses.
It's also bad for unit tests. Within a unit test you cannot keep the side effects of different tests from spilling over since you cannot control the classloader. Static fields initialized in one unit test will be visible in another, or worse, running tests concurrently will yield unpredictable results.
Singleton is generally an ok pattern when used sparingly. I prefer to use a DI framework and let that manage my instances for me (possibly within different scopes, as in Guice).
A:
If my original post was the correct understanding and the discussion from Sun that was linked to is accurate (which I think it might be), then I think you have to make a trade off between clarity and performance.
Ask yourself these questions:
Does the Singleton object make what I'm doing more clear?
Do I need an object to do this task or is it more suited to static methods?
Do I need the performance that I can gain by not using a Singleton?
A:
From my experience, the only thing that matters is which one is easier to mock in unit tests. I always felt Singleton is easier and natural to mock out. If your organization lets you use JMockit, it doesn't matter since you can overcome these concerns.
A:
Does this discussion help? (I don't know if it's taboo to link to another programming forum, but I'd rather not just quote the whole discussion =) )
Sun Discussion on this subject
The verdict seems to be that it doesn't make enough of a difference to matter in most cases, though technically the static methods are more efficient.
A:
Write some code to measure the performance. The answer is going to be dependent on the JVM (Sun's JDK might perform differently than JRockit) and the VM flags your application uses.
| Java Singleton vs static - is there a real performance benefit? | I am merging a CVS branch and one of the larger changes is the replacement wherever it occurs of a Singleton pattern with abstract classes that have a static initialisation block and all static methods.
Is this something that's worth keeping since it will require merging a lot of conflicts, what sort of situation would I be looking at for this refactoring to be worthwhile?
We are running this app under Weblogic 8.1 (so JDK 1.4.2)
sorry Thomas, let me clarify..
the HEAD version has the traditional singleton pattern (private constructor, getInstance() etc)
the branch version has no constructor, is a 'public abstract class' and modified all the methods on the object to be 'static'. The code that used to exist in the private constructor is moved into a static block.
Then all usages of the class are changed which causes multiple conflicts in the merge.
There are a few cases where this change was made.
| [
"From a strict runtime performance point of view, the difference is really negligible. The main difference between the two lies down in the fact that the \"static\" lifecycle is linked to the classloader, whereas for the singleton it's a regular instance lifecycle. Usually it's better to stay away from the ClassLoader business, you avoid some tricky problems, especially when you try to reload the web application.\n",
"I would use a singleton if it needed to store any state, and static classes otherwise. There's no point in instantiating something, even a single instance, unless it needs to store something.\n",
"Static is bad for extensibility since static methods and fields cannot be extended or overridden by subclasses. \nIt's also bad for unit tests. Within a unit test you cannot keep the side effects of different tests from spilling over since you cannot control the classloader. Static fields initialized in one unit test will be visible in another, or worse, running tests concurrently will yield unpredictable results.\nSingleton is generally an ok pattern when used sparingly. I prefer to use a DI framework and let that manage my instances for me (possibly within different scopes, as in Guice).\n",
"If my original post was the correct understanding and the discussion from Sun that was linked to is accurate (which I think it might be), then I think you have to make a trade off between clarity and performance.\nAsk yourself these questions:\n\nDoes the Singleton object make what I'm doing more clear?\nDo I need an object to do this task or is it more suited to static methods?\nDo I need the performance that I can gain by not using a Singleton?\n\n",
"From my experience, the only thing that matters is which one is easier to mock in unit tests. I always felt Singleton is easier and natural to mock out. If your organization lets you use JMockit, it doesn't matter since you can overcome these concerns.\n",
"Does this discussion help? (I don't know if it's taboo to link to another programming forum, but I'd rather not just quote the whole discussion =) )\nSun Discussion on this subject\nThe verdict seems to be that it doesn't make enough of a difference to matter in most cases, though technically the static methods are more efficient.\n",
"Write some code to measure the performance. The answer is going to be dependent on the JVM(Sun's JDK might perform differently than JRockit) and the VM flags your application uses.\n"
] | [
16,
16,
11,
3,
3,
0,
0
] | [] | [] | [
"design_patterns",
"java",
"singleton"
] | stackoverflow_0000028241_design_patterns_java_singleton.txt |
Q:
Message passing in a plug-in framework
First off, there's a bit of background to this issue available on my blog:
http://www.codebork.com/coding/2008/06/25/message-passing-a-plug-framework.html
http://www.codebork.com/coding/2008/07/31/message-passing-2.html
I'm aware that the descriptions aren't hugely clear, so I'll try to summarise what I'm attempting as best I can here. The application is a personal finance program. Further background on the framework itself is available at the end of this post.
There are a number of different types of plug-in that the framework can handle (e.g., accounts, export, reporting, etc.). However, I'm focussing on one particular class of plug-in, so-called data plug-ins, as it is this class that is causing me problems. I have one class of data plug-in for accounts, one for transactions, etc.
I'm midway through a vast re-factoring that has left me with the following architecture for data plug-ins:
The data plug-in object (implementing initialisation, installation and plug-in metadata) [implements IDataPlugin<FactoryType>]
The data object (such as an account) [implements, e.g., IAccount]
A factory to create instances of the data object [implements, e.g., IAccountFactory]
Previously the data object and the plug-in object were combined into one, but this meant that a new transaction plug-in had to be instantiated for each transaction recorded in the account which caused a number of problems. Unfortunately, that re-factoring has broken my message passing. The data object implements INotifyPropertyChanged, and so I've hit a new problem, and one that I'm not sure how to work around: the plug-in object is registering events with the message broker, but it's the data objects that actually fire the events. This means that the subscribing plug-in currently has to subscribe to each created account, transaction, etc.! This is clearly not scalable.
As far as I can tell at the moment I have two possible solutions:
Make the data plug-in object a go-between for the data-objects and message broker, possibly batching change notifications. I don't like this because it adds another layer of complexity to the messaging system that I feel I should be able to do without.
Junk the current event-based implementation and use something else that's more easily manageable (in-memory WCF?!).
So I guess I'm really asking:
How would you solve this problem?
What potential solutions do you think I've overlooked?
Is my approach even vaguely on-track/sensible?! :-)
As you will be able to tell from the dates of the blog posts, some variant of this problem has been taxing me for quite a long time now! As such, any and all responses will be greatly appreciated.
The background to the framework itself is as follows:
My plug-in framework consists of three main components: a plug-in broker, a preferences manager and a message broker. The plug-in broker does the bread-and-butter plug-in stuff: discovering and creating plug-ins. The preferences manager manages user preferences for the framework and individual plug-ins, such as which plug-ins are enabled, where data should be saved, etc. Communication is via publish/subscribe, with the message broker sitting in the middle, gathering all published message types and managing subscriptions. The publish/subscribe is currently implemented via the .NET INotifyPropertyChanged interface, which provides one event called PropertyChanged; the message broker builds a list of all plug-ins implementing INotifyPropertyChanged and subscribes other plug-ins to this event. The purpose of the message passing is to allow the account and transaction plug-ins to notify the storage plug-ins that data has changed so that it may be saved.
A:
Wow! Big question! :)
Correct me if I'm wrong. Your basic solution now is kind of an Observer pattern, where the data object (Account, etc) notifies about changes in their states. You think that the problem is that the subscribing plugin has to register in every object to be able to handle notifications.
That's not a problem per se; you can put the event control in the Domain Model, but I suggest you create a Service Layer and do these event notifications in this layer. That way just one object would be responsible for publishing notifications.
Martin Fowler has a series of Event Patterns on his blog. Check it out! Very good reading.
A:
This is my understanding of your question: You have a plugin object that may have to listen for events on x data objects - you don't want to subscribe to the event on each data object though. I'm assuming that several plugins may want to listen to events on the same data object.
You could create a session type object. Each plugin listens for events on the session object. The data object no longer raises the event - it calls the session object to raise the event (one of the parameters would have to be the data object raising the event).
That means that your plugins only have to subscribe to one event, but they get the event from all data objects.
On the other hand, if only one plugin will ever listen to a data object at a time, why not just have the data object call the plugin directly?
A:
It's early yet, but have you considered trying to use MEF instead of rolling your own?
| Message passing in a plug-in framework | First off, there's a bit of background to this issue available on my blog:
http://www.codebork.com/coding/2008/06/25/message-passing-a-plug-framework.html
http://www.codebork.com/coding/2008/07/31/message-passing-2.html
I'm aware that the descriptions aren't hugely clear, so I'll try to summarise what I'm attempting as best I can here. The application is a personal finance program. Further background on the framework itself is available at the end of this post.
There are a number of different types of plug-in that the framework can handle (e.g., accounts, export, reporting, etc.). However, I'm focussing on one particular class of plug-in, so-called data plug-ins, as it is this class that is causing me problems. I have one class of data plug-in for accounts, one for transactions, etc.
I'm midway through a vast re-factoring that has left me with the following architecture for data plug-ins:
The data plug-in object (implementing intialisation, installation and plug-in metadata) [implements IDataPlugin<FactoryType>]
The data object (such as an account) [implements, e.g., IAccount]
A factory to create instances of the data object [implements, e.g., IAccountFactory]
Previously the data object and the plug-in object were combined into one, but this meant that a new transaction plug-in had to be instantiated for each transaction recorded in the account which caused a number of problems. Unfortunately, that re-factoring has broken my message passing. The data object implements INotifyPropertyChanged, and so I've hit a new problem, and one that I'm not sure how to work around: the plug-in object is registering events with the message broker, but it's the data objects that actually fire the events. This means that the subscribing plug-in currently has to subscribe to each created account, transaction, etc.! This is clearly not scalable.
As far as I can tell at the moment I have two possible solutions:
Make the data plug-in object a go-between for the data-objects and message broker, possibly batching change notifications. I don't like this because it adds another layer of complexity to the messaging system that I feel I should be able to do without.
Junk the current event-based implementation and use something else that's more easily manageable (in-memory WCF?!).
So I guess I'm really asking:
How would you solve this problem?
What potential solutions do you think I've overlooked?
Is my approach even vaguely on-track/sensible?! :-)
As you will be able to tell from the dates of the blog posts, some variant of this problem has been taxing me for quite a long time now! As such, any and all responses will be greatly appreciated.
The background to the framework itself is as follows:
My plug-in framework consists of three main components: a plug-in broker, a preferences manager and a message broker. The plug-in broker does the bread-and-butter plug-in stuff: discovering and creating plug-ins. The preferences manager manages user preferences for the framework and individual plug-ins, such as which plug-ins are enabled, where data should be saved, etc. Communication is via publish/subscribe, with the message broker sitting in the middle, gathering all published message types and managing subscriptions. The publish/subscribe is currently implemented via the .NET INotifyPropertyChanged interface, which provides one event called PropertyChanged; the message broker builds a list of all plug-ins implementing INotifyPropertyChanged and subscribes other plug-ins this event. The purpose of the message passing is to allow the account and transaction plug-ins to notify the storage plug-ins that data has changed so that it may be saved.
| [
"Wow! Big question! :)\nCorrect me if I'm wrong. Your basic solution now is kind of an Observer pattern, where the data object (Account, etc) notifies about changes in their states. You think that the problem is that the subscribing plugin has to register in every object to be able to handle notifications.\nThat's not a problem per se, you can put the event control in the Domain Model, but I suggest you create a Service Layer and do this event notifications in this layer. That way just one object would be responsible for publishing notifications.\nMartin Fowler have a series of Event Patterns in his blog. Check it out! Very good reading.\n",
"This is my understanding of your question: You have a plugin object that may have to listen for events on x data objects - you don't want to subscribe to the event on each data object though. I'm assuming that several plugins may want to listen to events on the same data object.\nYou could create a session type object. Each plugin listens for events on the session object. The data object no longer raises the event - it calls the session object to raise the event (one of the parameters would have to be the data object raising the event).\nThat means that your plugins only have to subscribe to one event, but they get the event from all data objects.\nOn the other hand, if only one plugin will ever listen to a data object at a time, why not just have the data object call the plugin directly?\n",
"It's early yet, but have you considered trying to use MEF instead of rolling your own?\n"
] | [
4,
3,
1
] | [] | [] | [
"c#",
"message_passing",
"plugins"
] | stackoverflow_0000050822_c#_message_passing_plugins.txt |
Q:
How do you Modify TextBox Control Tab Stops
When you use a Windows Forms TextBox, the default number of tab stops (spaces) is 8. How do you modify this?
A:
First add the following namespace
using System.Runtime.InteropServices;
Then add the following after the class declaration:
private const int EM_SETTABSTOPS = 0x00CB;
[DllImport("User32.dll", CharSet = CharSet.Auto)]
public static extern IntPtr SendMessage(IntPtr h,
int msg,
int wParam,
int [] lParam);
Then add the following to the Form_Load event:
// define the tab stops; EM_SETTABSTOPS measures in dialog template
// units, where 4 units is about one average character width
// (so {16} is roughly 4 characters; the edit control default is 32)
int[] stops = {16}; 
// change the indent
SendMessage(this.textBox1.Handle, EM_SETTABSTOPS, 1, stops);
| How do you Modify TextBox Control Tab Stops | When you use a Windows Forms TextBox, the default number of tab stops (spaces) is 8. How do you modify this?
| [
"First add the following namespace\nusing System.Runtime.InteropServices;\n\nThen add the following after the class declaration:\nprivate const int EM_SETTABSTOPS = 0x00CB;\n[DllImport(\"User32.dll\", CharSet = CharSet.Auto)]\npublic static extern IntPtr SendMessage(IntPtr h, \n int msg, \n int wParam, \n int [] lParam);\n\nThen add the following to the Form_Load event:\n// define value of the Tab indent \nint[] stops = {16}; \n// change the indent \nSendMessage(this.textBox1.Handle, EM_SETTABSTOPS, 1, stops);\n\n"
] | [
4
] | [] | [] | [
".net",
"c#",
"winforms"
] | stackoverflow_0000051126_.net_c#_winforms.txt |
Q:
Best way to extract data from a FileMaker Pro database in a script?
My job would be easier, or at least less tedious if I could come up with an automated way (preferably in a Python script) to extract useful information from a FileMaker Pro database. I am working on Linux machine and the FileMaker database is on the same LAN running on an OS X machine. I can log into the webby interface from my machine.
I'm quite handy with SQL, and if somebody could point me to some FileMaker plug-in that could give me SQL access to the data within FileMaker, I would be pleased as punch. Everything I've found only goes the other way: Having FileMaker get data from SQL sources. Not useful.
It's not my first choice, but I'd use Perl instead of Python if there was a Perl-y solution at hand.
Note: XML/XSLT services (as suggested by some folks) are only available on FM Server, not FM Pro. Otherwise, that would probably be the best solution. ODBC is turning out to be extremely difficult to even get working. There is absolutely zero feedback from FM when you set it up so you have to dig through /var/log/system.log and parse obscure error messages.
Conclusion: I got it working by running a python script locally on the machine that queries the FM database through the ODBC connections. The script is actually a TCPServer that accepts socket connections from other systems on the LAN, runs the queries, and returns the data through the socket connection. I had to do this to bypass the fact that FM Pro only accepts ODBC connections locally (FM server is required for external connections).
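For reference, the skeleton of that bridge looks roughly like this (a sketch, not the production script; the DSN name, port and wire format are placeholders, and it assumes the pyodbc module plus a locally configured FileMaker ODBC DSN):
# sketch of the local ODBC-to-socket bridge described above
import pyodbc          # assumes the FileMaker ODBC driver is set up as a DSN
import socketserver    # this module was named SocketServer in the Python 2 of the era

class QueryHandler(socketserver.StreamRequestHandler):
    def handle(self):
        sql = self.rfile.readline().decode().strip()   # one query per connection
        conn = pyodbc.connect("DSN=FileMakerDSN")      # FM Pro only allows local ODBC
        for row in conn.cursor().execute(sql):
            line = "\t".join(str(v) for v in row) + "\n"
            self.wfile.write(line.encode())
        conn.close()

# runs on the same machine as FileMaker Pro; LAN clients connect to port 9000
socketserver.TCPServer(("0.0.0.0", 9000), QueryHandler).serve_forever()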
A:
It has been a really long time since I did anything with FileMaker Pro, but I know that it does have capabilities for an ODBC (and JDBC) connection to be made to it (however, I don't know how, or if, that translates to the Linux/Perl/Python world).
This article shows how to share/expose your FileMaker data via ODBC & JDBC:
Sharing FileMaker Pro data via ODBC or JDBC
From there, if you're able to create an ODBC/JDBC connection you could query out data as needed.
A:
You'll need the FileMaker Pro installation CD to get the drivers. This document details the process for FMP 9 - it is similar for versions 7.x and 8.x as well. Versions 6.x and earlier are completely different and I wouldn't bother trying (xDBC support in those previous versions is "minimal" at best).
FMP 9 supports SQL-92 standard syntax (mostly). Note that rather than querying tables directly you query using the "table occurrence" name, which serves as a table alias of sorts. If the data tables are stored in multiple files it is possible to create a single FMP file with table occurrences/aliases pointing to those data tables. There's an "undocumented feature" where such a file must have a table defined in it as well, and that table must be "related" to any other table on the relationships graph (doesn't matter which one) for ODBC access to work. Otherwise your queries will always return no results.
The PDF document details all of the limitations of using the xDBC interface FMP provides. Performance of simple queries is reasonably fast, ymmv. I have found the performance of queries specifying the "LIKE" operator to be less than stellar.
FMP also has an XML/XSLT interface that you can use to query FMP data over an HTTP connection. It also provides a PHP class for accessing and using FMP data in web applications.
A:
If your leaning is to Python, you may be interested in checking out the Python Wrapper for Filemaker. It provides two way access to the Filemaker data via Filemaker's built-in XML services. You can find some quite thorough information on this at:
http://code.google.com/p/pyfilemaker/
| Best way to extract data from a FileMaker Pro database in a script? | My job would be easier, or at least less tedious if I could come up with an automated way (preferably in a Python script) to extract useful information from a FileMaker Pro database. I am working on Linux machine and the FileMaker database is on the same LAN running on an OS X machine. I can log into the webby interface from my machine.
I'm quite handy with SQL, and if somebody could point me to some FileMaker plug-in that could give me SQL access to the data within FileMaker, I would be pleased as punch. Everything I've found only goes the other way: Having FileMaker get data from SQL sources. Not useful.
It's not my first choice, but I'd use Perl instead of Python if there was a Perl-y solution at hand.
Note: XML/XSLT services (as suggested by some folks) are only available on FM Server, not FM Pro. Otherwise, that would probably be the best solution. ODBC is turning out to be extremely difficult to even get working. There is absolutely zero feedback from FM when you set it up so you have to dig through /var/log/system.log and parse obscure error messages.
Conclusion: I got it working by running a python script locally on the machine that queries the FM database through the ODBC connections. The script is actually a TCPServer that accepts socket connections from other systems on the LAN, runs the queries, and returns the data through the socket connection. I had to do this to bypass the fact that FM Pro only accepts ODBC connections locally (FM server is required for external connections).
| [
"It has been a really long time since I did anything with FileMaker Pro, but I know that it does have capabilities for an ODBC (and JDBC) connection to be made to it (however, I don't know how, or if, that translates to the linux/perl/python world though). \nThis article shows how to share/expose your FileMaker data via ODBC & JDBC:\nSharing FileMaker Pro data via ODBC or JDBC \nFrom there, if you're able to create an ODBC/JDBC connection you could query out data as needed.\n",
"You'll need the FileMaker Pro installation CD to get the drivers. This document details the process for FMP 9 - it is similar for versions 7.x and 8.x as well. Versions 6.x and earlier are completely different and I wouldn't bother trying (xDBC support in those previous versions is \"minimal\" at best).\nFMP 9 supports SQL-92 standard syntax (mostly). Note that rather than querying tables directly you query using the \"table occurrence\" name which serves as a table alias of sorts. If the data tables are stored in multiple files it is possible to create a single FMP file with table occurrences/aliases pointing to those data tables. There's an \"undocumented feature\" where such a file must have a table defined in it as well and that table \"related\" to any other table on the relationships graph (doesn't matter which one) for ODBC access to work. Otherwise your queries will always return no results.\nThe PDF document details all of the limitations of using the xDBC interface FMP provides. Performance of simple queries is reasonably fast, ymmv. I have found the performance of queries specifying the \"LIKE\" operator to be less than stellar.\nFMP also has an XML/XSLT interface that you can use to query FMP data over an HTTP connection. It also provides a PHP class for accessing and using FMP data in web applications.\n",
"If your leaning is to Python, you may be interested in checking out the Python Wrapper for Filemaker. It provides two way access to the Filemaker data via Filemaker's built-in XML services. You can find some quite thorough information on this at:\nhttp://code.google.com/p/pyfilemaker/\n"
] | [
6,
4,
2
] | [] | [] | [
"filemaker",
"linux",
"perl",
"python",
"scripting"
] | stackoverflow_0000028668_filemaker_linux_perl_python_scripting.txt |
Q:
Will random data appended to a JPG make it unusable?
So, to simplify my life I want to be able to append from 1 to 7 additional characters on the end of some jpg images my program is processing*. These are dummy padding (fillers, etc - probably all 0x00) just to make the file size a multiple of 8 bytes for block encryption.
Having tried this out with a few programs, it appears they are fine with the additional characters, which occur after the FF D9 that specifies the end of the image - so it appears that the file format is well defined enough that the 'corruption' I'm adding at the end shouldn't matter.
I can always post process the files later if needed, but my preference is to do the simplest thing possible - which is to let them remain (I'm decrypting other file types and they won't mind, so having a special case is annoying).
I figure with all the Steganography hullabaloo years ago, someone has some input here...
(encryption processing by 8 byte blocks, I don't want to save pre-encrypted file size, so append 0x00 to input data, and leave them there after decoding)
A:
No, you can add bits to the end of a jpg file, without making it unusable. The heading of the jpg file tells how to read it, so the program reading it will stop at the end of the jpg data.
In fact, people have hidden zip files inside jpg files by appending the zip data to the end of the jpg data. Because of the way these formats are structured, the resulting file is valid in either format.
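For the padding case in the question, a minimal sketch (the filename is a placeholder):

# zero-pad a JPEG so its size becomes a multiple of 8 bytes for block encryption
data = open("photo.jpg", "rb").read()
padding = (-len(data)) % 8  # 0 to 7 bytes of 0x00
open("photo.jpg", "wb").write(data + "\x00" * padding)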
A:
You can .. but the results may be unpredictable.
Even though there is enough information in the format to tell the client to ignore the extra data it is likely not a case the programmer tested for.
A paranoid program might look at the size, notice the discrepancy and decide it won't process your file because clearly it doesn't fully understand it. This is particularly likely when reading data from the web when random bytes in a file could be considered a security risk.
A:
You can embed your data in the XMP tag within a JPEG (or EXIF or IPTC fields for that matter).
XMP is XML so you have a fair bit of flexibility there to do you own custom stuff.
It's probably not the simplest thing possible but putting your data here will maintain the integrity of the JPEG and require no "post processing".
You data will then show up in other imaging software such as PhotoShop, which may not be ideal.
A:
As others have stated, you have no control how programs process image files and therefore some programs may find the images valid others may not.
However, there is a bigger issue here. Judging by your question, I'm deducing you're practicing "security through obscurity." It's widely considered a very bad practice. Use Google to find a plethora of articles about the topic.
| Will random data appended to a JPG make it unusable? | So, to simplify my life I want to be able to append from 1 to 7 additional characters on the end of some jpg images my program is processing*. These are dummy padding (fillers, etc - probably all 0x00) just to make the file size a multiple of 8 bytes for block encryption.
Having tried this out with a few programs, it appears they are fine with the additional characters, which occur after the FF D9 that specifies the end of the image - so it appears that the file format is well defined enough that the 'corruption' I'm adding at the end shouldn't matter.
I can always post process the files later if needed, but my preference is to do the simplest thing possible - which is to let them remain (I'm decrypting other file types and they won't mind, so having a special case is annoying).
I figure with all the Steganography hullabaloo years ago, someone has some input here...
(encryption processing by 8 byte blocks, I don't want to save pre-encrypted file size, so append 0x00 to input data, and leave them there after decoding)
| [
"No, you can add bits to the end of a jpg file, without making it unusable. The heading of the jpg file tells how to read it, so the program reading it will stop at the end of the jpg data.\nIn fact, people have hidden zip files inside jpg files by appending the zip data to the end of the jpg data. Because of the way these formats are structured, the resulting file is valid in either format.\n",
"You can .. but the results may be unpredictable. \nEven though there is enough information in the format to tell the client to ignore the extra data it is likely not a case the programmer tested for.\nA paranoid program might look at the size, notice the discrepancy and decide it won't process your file because clearly it doesn't fully understand it. This is particularly likely when reading data from the web when random bytes in a file could be considered a security risk.\n",
"You can embed your data in the XMP tag within a JPEG (or EXIF or IPTC fields for that matter).\nXMP is XML so you have a fair bit of flexibility there to do you own custom stuff.\nIt's probably not the simplest thing possible but putting your data here will maintain the integrity of the JPEG and require no \"post processing\".\nYou data will then show up in other imaging software such as PhotoShop, which may not be ideal.\n",
"As others have stated, you have no control how programs process image files and therefore some programs may find the images valid others may not.\nHowever, there is a bigger issue here. Judging by your question, I'm deducing you're practicing \"security through obscurity.\" It's widely considered a very bad practice. Use Google to find a plethora of articles about the topic.\n"
] | [
23,
7,
3,
0
] | [] | [] | [
"file_format",
"jpeg",
"steganography"
] | stackoverflow_0000050965_file_format_jpeg_steganography.txt |
Q:
Separating concerns with Linq To SQL and DTO's
I recently started a new webforms project and decided to separate the business classes from any DBML references. My business layer classes instead access discrete Data layer methods and are returned collections of DTO's. So the data layer might project DTO's like the following:
(from c in dataContext.Customers
where c.Active == true
select new DTO.Customer
{
CustomerID = c.CustomerID,
Name = c.CustomerName,
...
}).ToList()
Although building the DTO objects adds work, this feels like a better approach to a tight binding between Business & Data layers and means I can test the Business layer without a database being present.
My question is: is this good practice? Is there a way of generating the DTOs (maybe via SQLMetal), and what other problems might I strike as the project progresses?
A:
I don't know if it's best practice but I have written similar code in the not so recent past because I too felt that I could improve the separation of concerns by using my own classes instead of the LINQ-designer-generated ones within my application.
You may want to consider just returning an IQueryable<Customer> instead of an IList<Customer> from your data-access method. Since IQueryable<T> inherits from IEnumerable<T> the rest of your app should be able to deal with it quite well. You can also convert it to a List when you really need to.
The advantage of this is that you can dynamically modify your query quite easily and minimze the amount of data returned from SQL Server.
E.g. if your method signature is
IQueryable<Customer> GetCustomers() you could get a single customer by calling GetCustomers().Where(c => c.CustomerID == 101).Single();
In this example only one record would be returned from the database whereas I imagine currently your code would return either all customers or you'd be required to write separate methods (and thus very repetitive code) to cater for all the different things you may want to filter by.
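A sketch of what that could look like against the question's DTO projection (the method and type names are illustrative, and whether the post-projection filter translates to SQL should be verified against your LINQ to SQL version):

public IQueryable<DTO.Customer> GetCustomers()
{
    return from c in dataContext.Customers
           where c.Active
           select new DTO.Customer
           {
               CustomerID = c.CustomerID,
               Name = c.CustomerName
           };
}

// callers compose further filtering before the query hits SQL Server
var customer = GetCustomers().Where(c => c.CustomerID == 101).Single();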
A:
In my opinion in most cases DTO objects are not needed when dealing with LINQ. Generated LINQ classes can be easily tested. LINQ gives you ability to query your data from different sources using identical queries. It gives you ability to test your queries against lists of objects instead of real db.
| Separating concerns with Linq To SQL and DTO's | I recently started a new webforms project and decided to separate the business classes from any DBML references. My business layer classes instead access discrete Data layer methods and are returned collections of DTO's. So the data layer might project DTO's like the following:
(from c in dataContext.Customers
where c.Active == true
select new DTO.Customer
{
CustomerID = c.CustomerID,
Name = c.CustomerName,
...
}).ToList()
Although building the DTO objects adds work, this feels like a better approach to a tight binding between Business & Data layers and means I can test the Business layer without a database being present.
My question is: is this good practice? Is there a way of generating the DTOs (maybe via SQLMetal), and what other problems might I strike as the project progresses?
| [
"I don't know if it's best practice but I have written similar code in the not so recent past because I too felt that I could improve the separation of concerns by using my own classes instead of the LINQ-designer-generated ones within my application.\nYou may want to consider just returning an IQueryable<Customer> instead of an IList<Customer> from your data-access method. Since IQueryable<T> inherits from IEnumerable<T> the rest of your app should be able to deal with it quite well. You can also convert it to a List when you really need to.\nThe advantage of this is that you can dynamically modify your query quite easily and minimze the amount of data returned from SQL Server.\nE.g. if your method signature is\nIQueryable<Customer> GetCustomers() you could get a single customer by calling GetCustomers().Where(c => c.CustomerID == 101).Single();\nIn this example only one record would be returned from the database whereas I imagine currently your code would return either all customers or you'd be required to write separate methods (and thus very repetitive code) to cater for all the different things you may want to filter by.\n",
"In my opinion in most cases DTO objects are not needed when dealing with LINQ. Generated LINQ classes can be easily tested. LINQ gives you ability to query your data from different sources using identical queries. It gives you ability to test your queries against lists of objects instead of real db.\n"
] | [
5,
2
] | [] | [] | [
"c#",
"dto_mapping",
"linq"
] | stackoverflow_0000051176_c#_dto_mapping_linq.txt |
Q:
Google Finance - Get Quotes search box - Column Alignment
How does Google manage to properly align the second column (i.e. the ticker name) in the "Get Quotes" search box suggestion drop-down on the Google Finance site?
Example: If you enter iii - the second column is perfectly aligned.
It does not use a fixed width font - so just adding the correct numbers of spaces to the ticker will not work.
How do they do that?
A:
most likely just using margins. float the first column left then set the margin to the width of the first column.
A:
I just viewed source with a DOM inspector and it appears that they are spans for each cell with a margin set (as Darren said) to position the right column over.
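A minimal sketch of that technique (the class names and the column width are made up for illustration):

<div class="suggest-row">
  <span class="ticker">III</span>
  <span class="name">3i Group plc</span>
</div>

<style type="text/css">
  .suggest-row .ticker { float: left; }
  .suggest-row .name { margin-left: 80px; /* reserve the first column's width */ }
</style>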
| Google Finance - Get Quotes search box - Column Alignment | How does Google manage to properly align the second column (i.e. the ticker name) in the "Get Quotes" search box suggestion drop-down on the Google Finance site?
Example: If you enter iii - the second column is perfectly aligned.
It does not use a fixed width font - so just adding the correct numbers of spaces to the ticker will not work.
How do they do that?
| [
"most likely just using margins. float the first column left then set the margin to the width of the first column.\n",
"I just viewed source with a DOM inspector and it appears that they are spans for each cell with a margin set (as Darren said) to position the right column over.\n"
] | [
1,
0
] | [] | [] | [
"css",
"html",
"javascript"
] | stackoverflow_0000050814_css_html_javascript.txt |
Q:
Multiline ddl Custom Control
One of the guys I work with needs a custom control that would work like a multiline drop-down list (DDL), since such a thing does not exist as far as we have been able to discover.
Does anyone have any ideas, or has anyone created such a thing before?
We have a couple of ideas, but they involve too much database usage.
We prefer that it be FREE!!!
A:
Have a look at EasyListBox. I used it on a project and, while a bit quirky at first, it got the job done.
A:
I'm not sure exactly what you mean by multi-line, but if it is selecting multiple elements in a drop down list, see this demo.
If its showing elements that wrap mulitple lines in a drop down, see this demo. You can put a break in the HTML to achieve what you might be looking for. I've used this control in this manner before, so I can confirm it works.
Good luck.
A:
We use a custom modified version of suckerfish at work. DB performance isn't an issue for us because we cache the control.
The control renders out nested UL/LIs either for all nodes in the web.sitemap or for a certain set of pages pulled from the DB. We then use jQuery to do all the cool javascript stuff. Because it uses such basic HTML, it's pretty easy to have multi-line or wrapped long items once you style it with CSS.
| Multiline ddl Custom Control | One of the guys I work with needs a custom control that would work like a multiline drop-down list (DDL), since such a thing does not exist as far as we have been able to discover.
Does anyone have any ideas, or has anyone created such a thing before?
We have a couple of ideas, but they involve too much database usage.
We prefer that it be FREE!!!
| [
"Have a look at EasyListBox. I used on a project and while a bit quirky at first, got the job done.\n",
"I'm not sure exactly what you mean by multi-line, but if it is selecting multiple elements in a drop down list, see this demo.\nIf its showing elements that wrap mulitple lines in a drop down, see this demo. You can put a break in the HTML to achieve what you might be looking for. I've used this control in this manner before, so I can confirm it works.\nGood luck.\n",
"We use a custom modified version of suckerfish at work. DB performance isn't an issue for us because we cache the control.\nThe control renders out nested UL/LIs either for all nodes in the web.sitemap or for a certain set of pages pulled from the DB. We then use jQuery to do all the cool javascript stuff. Because it uses such basic HTML, it's pretty easy to have multi-line or wrapped long items once you style it with CSS.\n"
] | [
0,
0,
0
] | [] | [] | [
"asp.net",
"c#",
"custom_controls"
] | stackoverflow_0000050539_asp.net_c#_custom_controls.txt |
Q:
What table/view do you query against to select all the table names in a schema in Oracle?
What object do you query against to select all the table names in a schema in Oracle?
A:
To see all the tables you have access to
select table_name from all_tables where owner='<SCHEMA>';
To select all tables for the current logged in schema (eg, your tables)
select table_name from user_tables;
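If the logged-in account has DBA privileges, there is also a catalog view covering every schema:

select owner, table_name from dba_tables where owner = '<SCHEMA>';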
A:
you're looking for:
select table_name from user_tables;
| What table/view do you query against to select all the table names in a schema in Oracle? | What object do you query against to select all the table names in a schema in Oracle?
| [
"To see all the tables you have access to\nselect table_name from all_tables where owner='<SCHEMA>';\n\nTo select all tables for the current logged in schema (eg, your tables)\nselect table_name from user_tables;\n\n",
"you're looking for:\n\nselect table_name from user_tables;\n\n"
] | [
5,
1
] | [
"You may use:\nselect tabname from tabs \n\nto get the name of tables present in schema.\n"
] | [
-1
] | [
"oracle",
"sql"
] | stackoverflow_0000051264_oracle_sql.txt |
Q:
Implementing user defined display order UI
I have a list of products that are being displayed in a particular order. The store admin can reassign the display order by moving the "hot" items to the top of the list. What's the best way of implementing the admin functionality UI [asp.net C#]? The Products table has a [displayOrder(int)] field which determines the display order.
i'm looking for something intuitive and simple.
thank you.
p.s. i guess i didn't make myself clear, i'm looking for UI advice more than anything.
SOLUTION: ReorderList worked out great, this article helped too. Also, make sure OldValuesParameterFormatString="{0}" in your DataSource.
A:
You need a Rank field for each product (which could also be the DisplayOrder field).
When the administrator ups or downs a product, update the rank value.
When you need to list the products, do a select query which sorts in DESC order of rank.
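A sketch of the rank bookkeeping in SQL (the column names follow the question; the parameter names are made up, and consecutive rank values are assumed):

-- swap the two products' ranks when the admin moves one up
UPDATE Products SET displayOrder = displayOrder + 1 WHERE ProductID = @movedUpId;
UPDATE Products SET displayOrder = displayOrder - 1 WHERE ProductID = @displacedId;

-- list products, hottest first
SELECT * FROM Products ORDER BY displayOrder DESC;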
A:
Using AJAX you could implement a ReorderList control; you can find more information here: http://www.asp.net/AJAX/AjaxControlToolkit/Samples/ReorderList/ReorderList.aspx
Mauro
http://www.brantas.co.uk
A:
i'm implementing that using the 'Order' column/property where people input numbers like 10, 20, 30 (i have ascending ordering). i have a list of items with text boxes to input order, and an 'apply order' button that saves new values to the database and reorders/reloads items on the page with the new ordering applied.
i don't forbid inputting the same value for two items, i sort them by name as a second sort parameter, or leave it to the database to sort it at will if it doesn't matter much. i believe it's understandable enough to put it that way, it seems like an ordered list which everybody understand easily.
A:
If you can modify the database, add an IsHot column. Then sort by IsHot and DisplayOrder (in that order). This will keep the products in the correct order and the "hot" products will bubble up to the top.
| Implementing user defined display order UI | I have a list of products that are being displayed in a particular order. The store admin can reassign the display order by moving the "hot" items to the top of the list. What's the best way of implementing the admin functionality UI [asp.net C#]? The Products table has a [displayOrder(int)] field which determines the display order.
i'm looking for something intuitive and simple.
thank you.
p.s. i guess i didn't make myself clear, i'm looking for UI advice more than anything.
SOLUTION: ReorderList worked out great, this article helped too. Also, make sure OldValuesParameterFormatString="{0}" in your DataSource.
| [
"You need a Rank field for each product (which could also be the DisplayOrder field).\nWhen the administrator ups or downs a product, update the rank value.\nWhen you need to list the products, do a select query which sorts in DESC order of rank.\n",
"using AJAX you could implement a Reoder list control you can find more information here http://www.asp.net/AJAX/AjaxControlToolkit/Samples/ReorderList/ReorderList.aspx\nMauro\nhttp://www.brantas.co.uk\n",
"i'm implementing that using the 'Order' column/property where people input numbers like 10, 20, 30 (i have ascending ordering). i have a list of items with text boxes to input order, and an 'apply order' button that saves new values to the database and reorders/reloads items on the page with the new ordering applied.\ni don't forbid inputting the same value for two items, i sort them by name as a second sort parameter, or leave it to the database to sort it at will if it doesn't matter much. i believe it's understandable enough to put it that way, it seems like an ordered list which everybody understand easily.\n",
"If you can modify the database, add an IsHot column. Then sort by IsHot and DisplayOrder (in that order). This will keep the products in the correct order and the \"hot\" products will bubble up to the top.\n"
] | [
2,
2,
0,
0
] | [] | [] | [
"asp.net",
"c#",
"visual_studio_2005"
] | stackoverflow_0000051342_asp.net_c#_visual_studio_2005.txt |
Q:
How do you backup IIS's metabase in C#?
exact code will be helpful. I assume the DirectoryServices namespace does it but I can't find the method that does it.
I need actual C# code. All the samples I found so far are VB or VBScript. The C# examples I found are for reading/setting ADSI properties. A command like backup seems to have a certain .NET syntax which I am not clear how to use. In VB there's a straightforward backup command. Need an equivalent in .NET.
A:
You'll need to use ADSI objects. The IIsComputer.Backup method is what you want.
As far as how to access ADSI objects from C#, check out this MSDN page.
EDIT: Here's a sample implementation in C#.
A:
I found it:
DirectoryEntry de = new DirectoryEntry("IIS://localhost");
de.Invoke("Backup", new object[0] );
The object[] array needs to be populated with the proper arguments, such as the flag for overwriting the current backup.
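A sketch with the arguments filled in — the constants below are the IIS ADSI backup flags as I remember them from the SDK, so verify them against your IIS version before relying on this:

const int MD_BACKUP_OVERWRITE = 0x1;
const int MD_BACKUP_SAVE_FIRST = 0x2;
const int MD_BACKUP_FORCE_BACKUP = 0x4;

DirectoryEntry de = new DirectoryEntry("IIS://localhost");
de.Invoke("Backup", new object[] {
    "MyBackup",                                   // backup location name (any label you like)
    -1,                                           // MD_BACKUP_NEXT_VERSION (0xFFFFFFFF); marshaling may need adjusting
    MD_BACKUP_OVERWRITE | MD_BACKUP_FORCE_BACKUP  // flags
});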
| How do you backup IIS's metabase in C#? | exact code will be helpful. I assume the DirectoryServices namespace does it but I can't find the method that does it.
I need actual C# code. All the samples I found so far are VB or VBScript. The C# examples I found are for reading/setting ADSI properties. A command like backup seems to have a certain .NET syntax which I am not clear how to use. In VB there's a straightforward backup command. Need an equivalent in .NET.
| [
"You'll need to use ADSI objects. The IIsComputer.Backup method is what you want.\nAs far as how to access ADSI objects from C#, check out this MSDN page.\nEDIT: Here's a sample implementation in C#.\n",
"I found it:\nDirectoryEntry de = new DirectoryEntry(\"IIS://localhost\");\n de.Invoke(\"Backup\", new object[0] );\nnew object needs to be set to hold proper arguments like overwriting current backup\n"
] | [
2,
0
] | [] | [] | [
"directoryservices",
"iis",
"metabase"
] | stackoverflow_0000050089_directoryservices_iis_metabase.txt |
Q:
Open source or low cost "log shipping" program
I have written a log shipping program a number of times. It is a simple program that is used to maintain a warm fail over box for SQL Server.
It has two pieces. On the live dB server it:
Does full and transaction backups and removes old files
On the backup server it:
Copies the backups from the live box
Restores the backups or trans into databases that are set to recovery
zips the backups
deletes them based on retention
If there is a failure, the program can go through each database on the backup server and set them to active.
I am looking for an open source or low cost program that does this.
A:
MS SQL server 2005 and 2008 already support this.
http://technet.microsoft.com/en-us/library/ms188698.aspx
http://technet.microsoft.com/en-us/library/ms188698(SQL.90).aspx
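The home-grown version described in the question boils down to a handful of T-SQL statements; a rough sketch (the database name and paths are placeholders):

-- on the live server
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_full.bak' WITH INIT;
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';

-- on the warm standby, restore so the database keeps accepting logs
RESTORE DATABASE MyDb FROM DISK = 'E:\Incoming\MyDb_full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'E:\Incoming\MyDb_log.trn' WITH NORECOVERY;

-- on failover, bring the standby online
RESTORE DATABASE MyDb WITH RECOVERY;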
| Open source or low cost "log shipping" program | I have written a log shipping program a number of times. It is a simple program that is used to maintain a warm fail over box for SQL Server.
It has two pieces. On the live dB server it:
Does full and transaction backups and removes old files
On the backup server it:
Copies the backups from the live box
Restores the backups or trans into databases that are set to recovery
zips the backups
deletes them based on retention
If there is a failure, the program can go through each database on the backup server and set them to active.
I am looking for an open source or low cost program that does this.
| [
"MS SQL server 2005 and 2008 already support this.\nhttp://technet.microsoft.com/en-us/library/ms188698.aspx\nhttp://technet.microsoft.com/en-us/library/ms188698(SQL.90).aspx\n"
] | [
3
] | [] | [] | [
"backup",
"sql_server"
] | stackoverflow_0000049847_backup_sql_server.txt |
Q:
How do I get a value from an XML web service in C#?
In C#, if I need to open an HTTP connection, download XML and get one value from the result, how would I do that?
For consistency, imagine the webservice is at www.webservice.com and that if you pass it the POST argument fXML=1 it gives you back
<xml><somekey>somevalue</somekey></xml>
I'd like it to spit out "somevalue".
A:
I think it will be useful to read this first:
Creating and Consuming a Web Service (in .NET)
This is a series of tutorials of how web services are used in .NET, including how XML input is used (deserialization).
A:
I use this code and it works great:
System.Xml.XmlDocument xd = new System.Xml.XmlDocument();
xd.Load("http://www.webservice.com/webservice?fXML=1");
string xPath = "/xml/somekey";
// this node's inner text contains "somevalue"
return xd.SelectSingleNode(xPath).InnerText;
EDIT: I just realized you're talking about a webservice and not just plain XML. In your Visual Studio Solution, try right clicking on References in Solution Explorer and choose "Add a Web Reference". A dialog will appear asking for a URL, you can just paste it in: "http://www.webservice.com/webservice.asmx". VS will autogenerate all the helpers you need. Then you can just call:
com.webservice.www.WebService ws = new com.webservice.www.WebService();
// this assumes your web method takes in the fXML as an integer attribute
return ws.SomeWebMethod(1);
A:
You can use something like that:
var client = new WebClient();
var response = client.UploadValues("www.webservice.com", "POST", new NameValueCollection {{"fXML", "1"}});
using (var reader = new StringReader(Encoding.UTF8.GetString(response)))
{
var xml = XElement.Load(reader);
var value = xml.Element("somekey").Value;
Console.WriteLine("Some value: " + value);
}
Note I didn't have a chance to test this code, but it should work :)
A:
It may also be worth adding that if you need to specifically use POST rather than SOAP then you can configure the web service to receive POST calls:
Check out the page on MSDN:
Configuration Options for XML Web Services Created Using ASP.NET
| How do I get a value from an XML web service in C#? | In C#, if I need to open an HTTP connection, download XML and get one value from the result, how would I do that?
For consistency, imagine the webservice is at www.webservice.com and that if you pass it the POST argument fXML=1 it gives you back
<xml><somekey>somevalue</somekey></xml>
I'd like it to spit out "somevalue".
| [
"I think it will be useful to read this first:\nCreating and Consuming a Web Service (in .NET)\nThis is a series of tutorials of how web services are used in .NET, including how XML input is used (deserialization).\n",
"I use this code and it works great:\nSystem.Xml.XmlDocument xd = new System.Xml.XmlDocument;\nxd.Load(\"http://www.webservice.com/webservice?fXML=1\");\nstring xPath = \"/xml/somekey\";\n// this node's inner text contains \"somevalue\"\nreturn xd.SelectSingleNode(xPath).InnerText;\n\n\nEDIT: I just realized you're talking about a webservice and not just plain XML. In your Visual Studio Solution, try right clicking on References in Solution Explorer and choose \"Add a Web Reference\". A dialog will appear asking for a URL, you can just paste it in: \"http://www.webservice.com/webservice.asmx\". VS will autogenerate all the helpers you need. Then you can just call:\ncom.webservice.www.WebService ws = new com.webservice.www.WebService();\n// this assumes your web method takes in the fXML as an integer attribute\nreturn ws.SomeWebMethod(1);\n\n",
"You can use something like that:\nvar client = new WebClient();\nvar response = client.UploadValues(\"www.webservice.com\", \"POST\", new NameValueCollection {{\"fXML\", \"1\"}});\nusing (var reader = new StringReader(Encoding.UTF8.GetString(response)))\n{\n var xml = XElement.Load(reader);\n var value = xml.Element(\"somekey\").Value;\n Console.WriteLine(\"Some value: \" + value); \n}\n\nNote I didn't have a chance to test this code, but it should work :)\n",
"It may also be worth adding that if you need to specifically use POST rather than SOAP then you can configure the web service to receive POST calls:\nCheck out the page on MSDN:\nConfiguration Options for XML Web Services Created Using ASP.NET\n"
] | [
4,
3,
2,
0
] | [] | [] | [
"c#",
"web_services",
"xml"
] | stackoverflow_0000051129_c#_web_services_xml.txt |
Q:
Oracle Application Server SSL Certificates preventing connection to Apache service
We've got an Apache instance deployed through Oracle Application Server. It's currently installed with the default wallet and the self-signed certificate. We've got a GEOTRUST certificate, imported the Trusted Roots and imported the new Cert to the Wallet Manager. We've then updated the SSL properties of the VHOST and the HTTP_SERVER through Enterprise Manager.
Things have restarted fine, however, we now can't connect to the Apache service, we're getting the error:
call to NZ function nzos_Handshake failed
This seems to point to a problem with the root certs, but in my opinion, these are registered with the Wallet correctly.
Anyone seen this before and have some pointers?
A:
Had the same problem with an Apache/JBoss configuration
look at your httpd.conf, you should have three lines:
SSLCertificateFile /usr/local/ssl/crt/public.crt
SSLCertificateKeyFile /usr/local/ssl/private/private.key
SSLCACertificateFile /usr/local/ssl/crt/EV_intermediate.crt
The last line is needed because the Geotrust root CA is not known by most older and some newer servers (you would not have to do this with a verisign or instantssl cert, for instance).
| Oracle Application Server SSL Certificates preventing connection to Apache service | We've got an Apache instance deployed through Oracle Application Server. It's currently installed with the default wallet and the self-signed certificate. We've got a GEOTRUST certificate, imported the Trusted Roots and imported the new Cert to the Wallet Manager. We've then updated the SSL properties of the VHOST and the HTTP_SERVER through Enterprise Manager.
Things have restarted fine, however, we now can't connect to the Apache service, we're getting the error:
call to NZ function nzos_Handshake failed
This seems to point to a problem with the root certs, but in my opinion, these are registered with the Wallet correctly.
Anyone seen this before and have some pointers?
| [
"Had the same problem with an Apache/JBoss configuration\nlook at your httpd.conf, you should have three lines: \nSSLCertificateFile /usr/local/ssl/crt/public.crt\nSSLCertificateKeyFile /usr/local/ssl/private/private.key\nSSLCACertificateFile /usr/local/ssl/crt/EV_intermediate.crt \nThe last line is needed because the Geotrust root CA is not known by most older and some newer servers (you would not have to do this with a verisign or instantssl cert, for instance).\n"
] | [
1
] | [] | [] | [
"apache",
"oracle",
"ssl"
] | stackoverflow_0000049355_apache_oracle_ssl.txt |
Q:
Ubuntu 32 bit maximum address space
Jeff covered this a while back on his blog in terms of 32 bit Vista.
Does the same 32 bit 4 GB memory cap that applies in 32 bit Vista apply to 32 bit Ubuntu? Are there any 32 bit operating systems that have creatively solved this problem?
A:
Ubuntu server has PAE enabled in the kernel, the desktop version does not have this feature enabled by default.
This explains, by the way, why Ubuntu server does not work in some hardware emulators whereas the desktop edition does
A:
Yes, 32 bit ubuntu has the same memory limitations.
There are exceptions to the 4GB limitation, but they are application specific... As in, Microsoft Sql Server can use 16 gigabytes with "Physical address Extensions" [PAE] configured and supported and... ugh
http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3703755&SiteID=17
Also drivers in ubuntu and windows both reduce the amount of memory available from the 4GB address space by mapping memory from that 4GB to devices. Graphics cards are particularly bad at this, your 256MB graphics card is using up at least 256MB of your address space...
If you can [your drivers support it, and cpu is new enough] install a 64 bit os. Your 32 bit applications and games will run fine.
A:
Well, with windows, there's something called PAE, which means you can access up to 64 GB of memory on a windows machine. The downside is that most apps don't support actually using more than 4 GB of RAM. Only a small number of apps, like SQL Server are programmed to actually take advantage of all the extra memory.
A:
In theory, all 32-bit OSes have that problem. You have 32 bits to do addressing.
2^32 bytes / 2^10 (bytes per KB) / 2^10 (KB per MB) / 2^10 (MB per GB) = 2^2 GB = 4 GB.
Although there are some ways around it. (Look up the jump from 16-bit computing to 32-bit computing. They hit the same problem.)
A:
There seems to be some confusion around PAE. PAE is "Physical Address Extension", and is by no means a Windows feature. It is a hack Intel put in their Pentium II (and newer) chips to allow machines to access 64GB of memory. On Windows, applications need to support PAE explicitly, but in the open source world, packages can be compiled and optimized to your liking. The packages that could use more than 4GB of memory on Ubuntu (and other Linux distro's) are compiled with PAE support. This includes all server-specific software.
A:
Linux supports a technology called PAE that lets you use more than 4GB of memory, however I don't know whether Ubuntu has it on by default. You may need to compile a new kernel.
Edit: Some threads on the Ubuntu forums suggest that the server kernel has PAE on by default, you could try installing that.
| Ubuntu 32 bit maximum address space | Jeff covered this a while back on his blog in terms of 32 bit Vista.
Does the same 32 bit 4 GB memory cap that applies in 32 bit Vista apply to 32 bit Ubuntu? Are there any 32 bit operating systems that have creatively solved this problem?
| [
"Ubuntu server has PAE enabled in the kernel, the desktop version does not have this feature enabled by default.\nThis explains, by the way, why Ubuntu server does not work in some hardware emulators whereas the desktop edition does\n",
"Yes, 32 bit ubuntu has the same memory limitations.\nThere are exceptions to the 4GB limitation, but they are application specific... As in, Microsoft Sql Server can use 16 gigabytes with \"Physical address Extensions\" [PAE] configured and supported and... ugh\nhttp://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3703755&SiteID=17\nAlso drivers in ubuntu and windows both reduce the amount of memory available from the 4GB address space by mapping memory from that 4GB to devices. Graphics cards are particularly bad at this, your 256MB graphics card is using up at least 256MB of your address space...\nIf you can [your drivers support it, and cpu is new enough] install a 64 bit os. Your 32 bit applications and games will run fine. \n",
"Well, with windows, there's something called PAE, which means you can access up to 64 GB of memory on a windows machine. The downside is that most apps don't support actually using more than 4 GB of RAM. Only a small number of apps, like SQL Server are programmed to actually take advantage of all the extra memory.\n",
"In theory, all 32-bit OSes have that problem. You have 32 bits to do addressing.\n2^32 bits / 2^10 (bits per kb) / 2^10 (kb per mb) / 2^10 (mb per gb) = 2^2 = 4gb.\n\nAlthough there are some ways around it. (Look up the jump from 16-bit computing to 32-bit computing. They hit the same problem.)\n",
"There seems to be some confusion around PAE. PAE is \"Page Address Extension\", and is by no means a Windows feature. It is a hack Intel put in their Pentium II (and newer) chips to allow machines to access 64GB of memory. On Windows, applications need to support PAE explicitely, but in the open source world, packages can be compiled and optimized to your liking. The packages that could use more than 4GB of memory on Ubuntu (and other Linux distro's) are compiled with PAE support. This includes all server-specific software.\n",
"Linux supports a technology called PAE that lets you use more than 4GB of memory, however I don't know whether Ubuntu has it on by default. You may need to compile a new kernel.\nEdit: Some threads on the Ubuntu forums suggest that the server kernel has PAE on by default, you could try installing that.\n"
] | [
4,
3,
3,
2,
2,
0
] | [] | [] | [
"memory",
"operating_system",
"ubuntu"
] | stackoverflow_0000051093_memory_operating_system_ubuntu.txt |
Q:
How to get the base 10 logarithm of a Fixnum in Ruby?
I want to get the base 10 logarithm of a Fixnum using Ruby, but found that n.log or n.log10 are not defined. Math::log is defined but uses a different base than 10.
What is the easiest way to get the base 10 logarithm of a Fixnum?
A:
There is
Math::log10(n)
And there is also a property of logarithms that log_x(y) = log(y)/log(x)
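For example:

Math.log10(1000)              # => 3.0
Math.log(1000) / Math.log(10) # => 3.0, give or take floating-point rounding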
A:
Reading the documentation for module Math the answer is really obvious:
Math::log10(n)
This gives the base 10 logarithm of n.
A:
Math.log10(numeric) => float
returns base 10 log
| How to get the base 10 logarithm of a Fixnum in Ruby? | I want to get the base 10 logarithm of a Fixnum using Ruby, but found that n.log or n.log10 are not defined. Math::log is defined but uses a different base than 10.
What is the easiest way to get the base 10 logarithm of a Fixnum?
| [
"There is \nMath::log10 (n)\n\nAnd there is also a property of logarithms that logx(y) = log(y)/log(x)\n",
"Reading the documentation for module Math the answer is really obvious:\nMath::log10(n) \n\nThis gives the base 10 logarithm of n.\n",
"Math.log10(numeric) => float\nreturns base 10 log\n"
] | [
10,
2,
0
] | [] | [] | [
"logarithm",
"math",
"ruby"
] | stackoverflow_0000051420_logarithm_math_ruby.txt |
Q:
Can you load a .Net form as a control?
I want to load a desktop application, via reflection, as a Control inside another application.
The application I'm reflecting is a legacy one - I can't make changes to it.
I can dynamically access the Form, but can't load it as a Control.
In .Net, Form derives from Control, and I can assign the reflected Form as a Control, but it throws a run-time exception.
Forms cannot be loaded as controls.
Is there any way to convert the form to a control?
A:
Yes, this works just fine. I'm working on a .NET app right now that loads forms into a panel on a host form.
The relevant snippet:
// setup the new form
form.TopLevel = false;
form.FormBorderStyle = FormBorderStyle.None;
form.Dock = DockStyle.Fill;
form.Show ( );
// add to the panel's list of child controls
panelFormHost.Controls.Add ( form );
A:
You should be able to add the form to the controls collection of your parent form...
See here:
http://vbcity.com/forums/topic.asp?tid=30539
If that fails, try using the adapter pattern to create a container with your legacy form inside it, then load it in an MDI maybe?
A:
What is the exception you get? Is it possible that the control itself is giving the exception (vs the framework)? Perhaps something is called in the original applications Main function that is not being called?
| Can you load a .Net form as a control? | I want to load a desktop application, via reflection, as a Control inside another application.
The application I'm reflecting is a legacy one - I can't make changes to it.
I can dynamically access the Form, but can't load it as a Control.
In .Net, Form derives from Control, and I can assign the reflected Form as a Control, but it throws a run-time exception.
Forms cannot be loaded as controls.
Is there any way to convert the form to a control?
| [
"Yes, this works just fine. I'm working on a .NET app right now that loads forms into a panel on a host form.\nThe relevant snippet:\n// setup the new form\nform.TopLevel = false;\nform.FormBorderStyle = FormBorderStyle.None;\nform.Dock = DockStyle.Fill;\nform.Show ( );\n\n// add to the panel's list of child controls\npanelFormHost.Controls.Add ( form );\n\n",
"You should be able to add the form to the controls collection of your parent form...\nSee here: \nhttp://vbcity.com/forums/topic.asp?tid=30539\nIf that fails, try using the adapter pattern to create a container with your legacy form inside it, then load it in an MDI maybe?\n",
"What is the exception you get? Is it possible that the control itself is giving the exception (vs the framework)? Perhaps something is called in the original applications Main function that is not being called?\n"
] | [
10,
1,
1
] | [] | [] | [
".net",
"winforms"
] | stackoverflow_0000051407_.net_winforms.txt |
Q:
Free JSP plugin for eclipse?
I'm looking for a free plugin for developing/debugging JSP pages in Eclipse.
Any suggestions?
A:
The Eclipse Web Tools Platform Project includes a JSP debugger. I have only ever needed to use it with Tomcat so I cannot say how well it works with other servlet containers.
A:
BEA seems to have a free one BEA JSP plugin - not used it, so not sure how good it is.
Oracle now owns BEA, and they have this plugin which might do a similar job.
A:
The former BEA Workshop is now Oracle Workshop. It is the best JSP editor with WYSIWYG support and it is free. It is not specific to WebLogic. Basic JSP editing is server neutral anyway. However, it supports launching and debugging on many servers.
You can read my blog post about it.
| Free JSP plugin for eclipse? | I'm looking for a free plugin for developing/debugging JSP pages in Eclipse.
Any suggestions?
| [
"The Eclipse Web Tools Platform Project includes a JSP debugger. I have only ever needed to use it with Tomcat so I cannot say how well it works with other servlet containers.\n",
"BEA seems to have a free one BEA JSP plugin - not used it, so not sure how good it is.\nOracle now owns BEA, and they have this plugin which might do a similar job.\n",
"The former BEA Workshop is now Oracle Workshop. It is the best JSP editor with WYSIWYG support and it is free. It is not specific to WebLogic. Basic JSP editing is server neutral anyway. However, it supports launching and debugging on many servers.\nYou can read my blog post about it.\n"
] | [
5,
4,
4
] | [] | [] | [
"eclipse",
"jsp"
] | stackoverflow_0000048250_eclipse_jsp.txt |
Q:
How to host licensed .Net controls in unmanaged C++ app?
I need to host and run managed controls inside of a purely unmanaged C++ app. How to do this?
To run unlicensed controls is typically simple:
if (SUCCEEDED(ClrCreateManagedInstance(type, iid, &obj)))
{
// do something with obj
}
When using a licensed control, however, we need to somehow embed a .licx file into the project (ref application licensing). In an unmanaged C++ app, the requisite glue does not seem to work. The lc.exe tool is supposed to be able to embed the license as an assembly resource but either we were not using the correct invocation, or it failed silently. Any help would be appreciated.
A:
The answer depends on the particular component you're using. Contact your component help desk OR read up the documentation on what it takes to deploy their component.
Basically component developers are free to implement licensing as they deem fit. With the .licx file the component needs to be able to do whatever the developer wished via GetKey and IsValidKey (explained in the link you posted).
So if GetKey checks for a .licx file in the component directory - you just need to make sure its there.
AFAIK the client assembly doesn't need to do anything except instantiate the control.
Also if you post the name of the component and the lc.exe command you're using, people could take a look..
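For reference, a typical lc.exe invocation looks something like this — the assembly and control names are placeholders, and the exact switches depend on the vendor's instructions:

lc.exe /target:MyApp.exe /complist:licenses.licx /i:VendorControls.dll /outdir:obj\

That should produce a MyApp.exe.licenses file which then has to be embedded into the assembly as a resource (e.g. via al.exe or your build).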
| How to host licensed .Net controls in unmanaged C++ app? | I need to host and run managed controls inside of a purely unmanaged C++ app. How to do this?
To run unlicensed controls is typically simple:
if (SUCCEEDED(ClrCreateManagedInstance(type, iid, &obj)))
{
// do something with obj
}
When using a licensed control, however, we need to somehow embed a .licx file into the project (ref application licensing). In an unmanaged C++ app, the requisite glue does not seem to work. The lc.exe tool is supposed to be able to embed the license as an assembly resource but either we were not using the correct invocation, or it failed silently. Any help would be appreciated.
| [
"The answer depends on the particular component you're using. Contact your component help desk OR read up the documentation on what it takes to deploy their component.\nBasically component developers are free to implement licensing as they deem fit. With the .licx file the component needs to be able to do whatever the developer wished via GetKey and IsValidKey (explained in the link you posted).\nSo if GetKey checks for a .licx file in the component directory - you just need to make sure its there.\nAFAIK the client assembly doesn't need to do anything except instantiate the control.\nAlso if you post the name of the component and the lc.exe command you're using, people could take a look..\n"
] | [
1
] | [] | [] | [
".net",
"c++",
"unmanaged"
] | stackoverflow_0000051436_.net_c++_unmanaged.txt |
Q:
Java -> Python?
Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa?
A:
List comprehensions. I often find myself filtering/mapping lists, and being able to say [line.replace("spam","eggs") for line in open("somefile.txt") if line.startswith("nee")] is really nice.
Functions are first class objects. They can be passed as parameters to other functions, defined inside other function, and have lexical scope. This makes it really easy to say things like people.sort(key=lambda p: p.age) and thus sort a bunch of people on their age without having to define a custom comparator class or something equally verbose.
Everything is an object. Java has basic types which aren't objects, which is why many classes in the standard library define 9 different versions of functions (for boolean, byte, char, double, float, int, long, Object, short). Array.sort is a good example. Autoboxing helps, although it makes things awkward when something turns out to be null.
Properties. Python lets you create classes with read-only fields, lazily-generated fields, as well as fields which are checked upon assignment to make sure they're never 0 or null or whatever you want to guard against, etc.'
Default and keyword arguments. In Java if you want a constructor that can take up to 5 optional arguments, you must define 6 different versions of that constructor. And there's no way at all to say Student(name="Eli", age=25)
Functions can only return 1 thing. In Python you have tuple assignment, so you can say spam, eggs = nee() but in Java you'd need to either resort to mutable out parameters or have a custom class with 2 fields and then have two additional lines of code to extract those fields.
Built-in syntax for lists and dictionaries.
Operator Overloading.
Generally better designed libraries. For example, to parse an XML document in Java, you say
Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse("test.xml");
and in Python you say
doc = parse("test.xml")
Anyway, I could go on and on with further examples, but Python is just overall a much more flexible and expressive language. It's also dynamically typed, which I really like, but which comes with some disadvantages.
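For instance, a couple of the points above in runnable (Python 2-era) form:

class Student(object):
    def __init__(self, name, age=25):  # default/keyword arguments
        self.name = name
        self._age = age
    @property
    def age(self):                     # read-only property
        return self._age

def nee():
    return "spam", "eggs"              # returning "two things" as a tuple

spam, eggs = nee()                     # tuple assignment
people = [Student("Eli"), Student("Sam", age=30)]
people.sort(key=lambda p: p.age)       # functions as first-class objects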
Java has much better performance than Python and has way better tool support. Sometimes those things matter a lot and Java is the better language than Python for a task; I continue to use Java for some new projects despite liking Python a lot more. But as a language I think Python is superior for most things I find myself needing to accomplish.
A:
I think this pair of articles by Philip J. Eby does a great job discussing the differences between the two languages (mostly about philosophy/mentality rather than specific language features).
Python is Not Java
Java is Not Python, either
A:
One key difference in Python is significant whitespace. This puts a lot of people off - me too for a long time - but once you get going it seems natural and makes much more sense than ;s everywhere.
From a personal perspective, Python has the following benefits over Java:
No Checked Exceptions
Optional Arguments
Much less boilerplate and less verbose generally
Other than those, this page on the Python Wiki is a good place to look with lots of links to interesting articles.
A:
With Jython you can have both. It's only at Python 2.2, but still very useful if you need an embedded interpreter that has access to the Java runtime.
A:
Apart from what Eli Courtwright said:
I find iterators in Python more concise. You can use for i in something, and it works with pretty much everything. Yeah, Java has gotten better since 1.5, but for example you can iterate through a string in python with this same construct.
Introspection: In python you can get at runtime information about an object or a module about its symbols, methods, or even its docstrings. You can also instantiate them dynamically. Java has some of this, but usually in Java it takes half a page of code to get an instance of a class, whereas in Python it is about 3 lines. And as far as I know the docstrings thing is not available in Java
| Java -> Python? | Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa?
| [
"\nList comprehensions. I often find myself filtering/mapping lists, and being able to say [line.replace(\"spam\",\"eggs\") for line in open(\"somefile.txt\") if line.startswith(\"nee\")] is really nice.\nFunctions are first class objects. They can be passed as parameters to other functions, defined inside other function, and have lexical scope. This makes it really easy to say things like people.sort(key=lambda p: p.age) and thus sort a bunch of people on their age without having to define a custom comparator class or something equally verbose.\nEverything is an object. Java has basic types which aren't objects, which is why many classes in the standard library define 9 different versions of functions (for boolean, byte, char, double, float, int, long, Object, short). Array.sort is a good example. Autoboxing helps, although it makes things awkward when something turns out to be null.\nProperties. Python lets you create classes with read-only fields, lazily-generated fields, as well as fields which are checked upon assignment to make sure they're never 0 or null or whatever you want to guard against, etc.'\nDefault and keyword arguments. In Java if you want a constructor that can take up to 5 optional arguments, you must define 6 different versions of that constructor. And there's no way at all to say Student(name=\"Eli\", age=25)\nFunctions can only return 1 thing. In Python you have tuple assignment, so you can say spam, eggs = nee() but in Java you'd need to either resort to mutable out parameters or have a custom class with 2 fields and then have two additional lines of code to extract those fields.\nBuilt-in syntax for lists and dictionaries.\nOperator Overloading.\nGenerally better designed libraries. For example, to parse an XML document in Java, you say\nDocument doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(\"test.xml\");\nand in Python you say\ndoc = parse(\"test.xml\")\n\nAnyway, I could go on and on with further examples, but Python is just overall a much more flexible and expressive language. It's also dynamically typed, which I really like, but which comes with some disadvantages.\nJava has much better performance than Python and has way better tool support. Sometimes those things matter a lot and Java is the better language than Python for a task; I continue to use Java for some new projects despite liking Python a lot more. But as a language I think Python is superior for most things I find myself needing to accomplish.\n",
"I think this pair of articles by Philip J. Eby does a great job discussing the differences between the two languages (mostly about philosophy/mentality rather than specific language features). \n\nPython is Not Java\nJava is Not Python, either\n\n",
"One key difference in Python is significant whitespace. This puts a lot of people off - me too for a long time - but once you get going it seems natural and makes much more sense than ;s everywhere.\nFrom a personal perspective, Python has the following benefits over Java:\n\nNo Checked Exceptions\nOptional Arguments\nMuch less boilerplate and less verbose generally\n\nOther than those, this page on the Python Wiki is a good place to look with lots of links to interesting articles.\n",
"With Jython you can have both. It's only at Python 2.2, but still very useful if you need an embedded interpreter that has access to the Java runtime.\n",
"Apart from what Eli Courtwright said:\n\nI find iterators in Python more concise. You can use for i in something, and it works with pretty much everything. Yeah, Java has gotten better since 1.5, but for example you can iterate through a string in python with this same construct.\nIntrospection: In python you can get at runtime information about an object or a module about its symbols, methods, or even its docstrings. You can also instantiate them dynamically. Java has some of this, but usually in Java it takes half a page of code to get an instance of a class, whereas in Python it is about 3 lines. And as far as I know the docstrings thing is not available in Java\n\n"
] | [
47,
16,
5,
3,
2
] | [] | [] | [
"java",
"python"
] | stackoverflow_0000049824_java_python.txt |
Q:
How to make only certain parts of a site beta?
Most sites are either fully released, or in beta.
But what happens if you have a large site, and some of the parts are still in Beta, and other parts aren't.
How do you effectively communicate this to the customer?
A:
Maybe take a look at how Facebook, Bloglines, Gmail did it?
Like "We have this beta thing going on, come on over and see the same site with new stuff, but if it doesnt work, use the old parts"
Maybe gmail labs where you can sign up for "beta features"
A:
If there's a certain way you enter the part of the beta site, maybe you can have a modal that pops up that they have to agree to every time. I wouldn't have it on every page since it gets annoying, so I would only use this approach if there is a definitive way to get into that part of the site (e.g. people won't be coming to random parts of the beta section through Google or something).
A:
One way I've used for non-web software is a change to background. So for example if your normal site tended to have a plain white background, you could have the beta site have a repeating beta text in a background image. You want to make it fairly faint so it is present but doesn't detract from the overall experience.
Another subtle but present option would just be to change the title bar.
Or you could do what google does, which is a large site with some of it in beta. Check out Google experimental search. Basically the site is no different, but it is hard to get to accidentally.
A:
There are a few ways.
Provide access to the site via two domains (e.g. www.domain.com and beta.domain.com) and only allow access to beta parts of the site when going in via beta.domain.com.
People will be accessing the same code base, but will only get access to the beta sections if they've specified the beta subdomain. Trying to access beta sections of the site will explain this & tell them how to access the beta.
Strongly flag the beta sections of the application as being beta, and force the user to acknowledge that they're OK using beta features with some kind of agreement screen. The first time they try to use the beta feature, they'll be shown the agreement screen. Subsequent uses of the feature will prominently display that "this part of the site is in beta and is used at your own peril."
| How to make only certain parts of a site beta? | Most sites are either fully released, or in beta.
But what happens if you have a large site, and some of the parts are still in Beta, and other parts aren't.
How do you effectively communicate this to the customer?
| [
"Maybe take a look at how Facebook, Bloglines, Gmail did it?\nLike \"We have this beta thing going on, come on over and see the same site with new stuff, but if it doesnt work, use the old parts\"\nMaybe gmail labs where you can sign up for \"beta features\"\n",
"If there's a certain way you enter the part of the beta site, maybe you can have a modal that pops up that they have to agree to every time. I wouldn't have it on every page since it gets annoying, so I would only use this approach if there is a definitive way to get into that part of the site (e.g. people won't be coming to random parts of the beta section through Google or something).\n",
"One way I've used for non-web software is a change to background. So for example if your normal site tended to have a plain white background, you could have the beta site have a repeating beta text in a background image. You want to make it fairly faint so it is present but doesn't detract from the overall experience.\nAnother subtle but present option would just be to change the title bar.\nOr you could do what google does, which is a large site with some of it in beta. Check out Google experimental search. Basically the site is no different, but it is hard to get to accidentally.\n",
"There are a few ways. \n\nProvide access to the site via two domains (e.g. www.domain.com and beta.domain.com) and only allow access to beta parts of the site when going in via beta.domain.com. \nPeople will be accessing the same code base, but will only get access to the beta sections if they've specified the beta subdomain. Trying to access beta sections of the site will explain this & tell them how to access the beta. \nStrongly Flag the beta sections of the application as being beta, and force the user to acknowledge that they're OK using beta features with some kind of agreement screen. The first time they try to use the beta feature, they'll be shown the agreement screen. Subsequent uses of the feature will prominently deisplay that \"thios part of the site is in beta and is used at your own peril.\"\n\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"beta"
] | stackoverflow_0000051519_beta.txt |
Q:
Get MIME type of a local file in PHP5 without a PECL extension?
mime_content_type() is deprecated.
How can I find the MIME type of a local file using PHP5 but without using this deprecated method or the PECL fileinfo extension?
Edit: That's what I was afraid of. It's unfortunate that they deprecated a built-in function in favour of one that requires an extension that isn't always available.
A:
If you can't use the fileinfo extension, and you don't want to use mime_content_type, your options are limited.
Most likely you'll need to do a lookup based on the file extension. mime_content_type did something a bit more intelligent and actually looked for special data in the file to determine the mime type.
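A minimal extension-based lookup might look something like this (the map is illustrative, not exhaustive):
function get_mime_by_extension($filename)
{
    // Illustrative extension-to-MIME map; extend it for your needs.
    $map = array(
        'jpg'  => 'image/jpeg',
        'jpeg' => 'image/jpeg',
        'png'  => 'image/png',
        'gif'  => 'image/gif',
        'txt'  => 'text/plain',
        'html' => 'text/html',
        'pdf'  => 'application/pdf',
    );
    $ext = strtolower(pathinfo($filename, PATHINFO_EXTENSION));
    return isset($map[$ext]) ? $map[$ext] : 'application/octet-stream';
}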
A:
The getID3() library is a quick and easy works-most-of-the-time option. Originally named for a project to obtain MP3 ID3 data, the library does two hecks of a lot more than that and is quite convenient for all sorts of common or odd file meta data tasks.
I've used it to get the MIME types of files for online image and video tools. In all the testing I've done I've not seen getID3 get the MIME type wrong.
I've also used it to check if QuickTime videos have streaming hints. I mention this as an example of versatility.
A second more time consuming option is to roll your own MIME type checker as already suggested. If you have a MIME magic file you can go a little further than a lookup on the file extension by comparing the first n bytes of file data against a first-n-bytes to MIME type lookup table derived from your MIME magic file.
A typical MIME magic file will contain in excess of 500 sets of MIME types which might result in slow comparisons (lots of checks to make). Hard-coding the 10 most common MIME type checks in your home rolled solution will help there.
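A sketch of that hard-coded signature idea, checking only a few common magic numbers and leaving everything else to an extension lookup:
function get_mime_by_magic($filename)
{
    $fh = fopen($filename, 'rb');
    if (!$fh) return null;
    $bytes = fread($fh, 8);
    fclose($fh);

    if (substr($bytes, 0, 3) === "\xFF\xD8\xFF") return 'image/jpeg';
    if (substr($bytes, 0, 8) === "\x89PNG\x0D\x0A\x1A\x0A") return 'image/png';
    if (substr($bytes, 0, 6) === 'GIF87a' || substr($bytes, 0, 6) === 'GIF89a') return 'image/gif';
    if (substr($bytes, 0, 4) === '%PDF') return 'application/pdf';

    return null; // unknown - fall back to an extension lookup
}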
| Get MIME type of a local file in PHP5 without a PECL extension? | mime_content_type() is deprecated.
How can I find the MIME type of a local file using PHP5 but without using this deprecated method or the PECL fileinfo extension?
Edit: That's what I was afraid of. It's unfortunate that they deprecated a built-in function in favour of one that requires an extension that isn't always available.
| [
"If you can't use the fileinfo extension, and you don't want to use mime_content_type, your options are limited.\nMost likely you'll need to do a lookup based on the file extension. mime_content_type did something a bit more intelligent and actually looked for special data in the file to determine the mime type.\n",
"The getID3() library is a quick and easy works-most-of-the-time option. Originally named for a project to obtain MP3 ID3 data, the library does two hecks of a lot more than that and is quite convenient for all sorts of common or odd file meta data tasks.\nI've used it to get the MIME types of files for online image and video tools. In all the testing I've done I've not seen getID3 get the MIME type wrong.\nI've also used it to check if QuickTime videos have streaming hints. I mention this as an example of versatility.\nA second more time consuming option is to roll your own MIME type checker as already suggested. If you have a MIME magic file you can go a little further than a lookup on the file extension by comparing the first n bytes of file data against a first-n-bytes to MIME type lookup table derived from your MIME magic file.\nA typical MIME magic file will contain in excess of 500 sets of MIME types which might result in slow comparisons (lots of checks to make). Hard-coding the 10 most common MIME type checks in your home rolled solution will help there.\n"
] | [
1,
0
] | [] | [] | [
"mime",
"php"
] | stackoverflow_0000051110_mime_php.txt |
Q:
What is a manageable way to store e-mails for extended periods of time?
If you have a site which sends out emails to the customer, and you want to save a copy of the mail, what is an effective strategy?
If you save it to a table in your database (e.g. create a table called Mail), it gets very large very quickly.
Some strategies I've seen are:
Save it to the file system
Run a scheduled task to clear old entries from the database - but then you wind up not having a copy;
Create a separate table for each time frame (one each year, or one each month)
What strategies have you used?
A:
I don't agree that gmail is an effective backup for business data.
Why trust your business information to a provider who makes no guarantees of service, or over who you have no control whatsoever?
Makes no sense to me.
Depending on how frequently you need to access this information, I'd say go with the filesystem or database archive. At least that way, you have control over your own data.
A:
Data you want to save is saved in a database. The only exception that is justified is large binary data (images, videos). Who cares how large the table gets? If the mails are automated and template-based, you just have to save the variable parts anyway. The size will be about the same wherever you save it, but you probably already have a mechanism to backup your database, so you won't have to invent one to handle millions of files.
A:
Lots of assumptions:
1. You're running windows / would like an archive in windows
2. The ability to search in the mails is important.
Since you are sending mails to your customers, there isn't any reason you can't bcc a mail account of your own. Assuming you have a suitable account on your own server then I'd look at using MailStore (home) to pull the mails out from your account and put them into its own compressed database.
A:
Another option (depending on the email content) is to not save the email, but make sure you can recreate the email by archiving the original content that went into generating the email.
A:
It depends on the content of your email. If it contains large images, I would plump for the file system. Otherwise, if your Mail table is getting very large very quickly, I would go for the separate table, archiving off dead customers.
A:
We save the email to a database table. It really doesn't get that big that quickly. We've a table with 32,000 emails in it (they're biggish emails too @ 50kb per email) and with compression, the file only uses 16MB.
If you're sending a shed load of email, then know that GMail(free) currently only allows 7GB of data. I'd be happy holding that on a disk.
A:
I'd think about putting in place some sort of general archiving functionality. How you implement that depends on your specific retrieval needs.
For example if you wish just to retrieve emails sent to a particular customer for a certain month, then storing them in an appropriate hierarchy on the file system (zip them up if necessary) should be simple to do. You might want to record a list of sent emails in a database table with a pointer to the appropriate directory, but a naming convention for your directories and files might be sufficient.
You might need to access very old emails only very infrequently, so you might archive these to DVD, for example, if online storage is a problem.
If you're wanting to often search the actual content of emails, then you're going to have to put the content in a DB table or use an indexer like Lucene to examine the files stored on disk.
| What is a manageable way to store e-mails for extended periods of time? | If you have a site which sends out emails to the customer, and you want to save a copy of the mail, what is an effective strategy?
If you save it to a table in your database (e.g. create a table called Mail), it gets very large very quickly.
Some strategies I've seen are:
Save it to the file system
Run a scheduled task to clear old entries from the database - but then you wind up not having a copy;
Create a separate table for each time frame (one each year, or one each month)
What strategies have you used?
| [
"I don't agree that gmail is an effective backup for business data.\nWhy trust your business information to a provider who makes no guarantees of service, or over who you have no control whatsoever?\nMakes no sense to me.\nDepending on how frequently you need to access this information, I'd say go with the filesystem or database archive. At least that way, you have control over your own data.\n",
"Data you want to save is saved in a database. The only exception that is justified is large binary data (images, videos). Who cares how large the table gets? If the mails are automated and template-based, you just have to save the variable parts anyway. The size will be about the same wherever you save it, but you probably already have a mechanism to backup your database, so you won't have to invent one to handle millions of files.\n",
"Lots of assumptions:\n1. You're running windows / would like an archive in windows\n2. The ability to search in the mails is important.\nSince you are sending mails to your customers there isn't any reason you can't bcc a mail account of your own. Assuming you have a suitable account on your own server then I'd look at using MailStore (home) to pull the mails out from your account and put them into it's own compressed database.\n",
"Another option (depending on the email content) is to not save the email, but make sure you can recreate the email by archiving the original content that went into generating the email.\n",
"It depends on the content of your email. If it contains large images. I would plump for the file system. Otherwise if your Mail table table is getting very large very quickly I would go for the separate table, archiving off dead customers.\n",
"We save the email to a database table. It really doesn't get that big that quickly. We've a table with 32,000 emails in it (they're biggish emails too @ 50kb per email) and with compression, the file only uses 16MB. \nIf you're sending a shed load of email, then know that GMail(free) currently only allows 7GB of data. I'd be happy holding that on a disk. \n",
"I'd think about putting in place some sort of general archiving functionality. How you implement that depends on your specific retrieval needs.\nFor example if you wish just to retrieve emails sent to a particular customer for a certain month then stocking them in an appropriate heirachy on the File System (zip them up if necessary) should be simple to do. You might want to record a list of sent emails in a database table with a pointer to the appropriate directory but a naming convention for your directories and files might be sufficient\nYou might not need to access very old emails very infrequently so you might archive these to DVD for example if online storage is a problem\nIf you're wanting to often search the actual content of emails then your going to have to put the content in a DB table or use an indexer like Lucerne to examine the files stocked on disk\n"
] | [
6,
4,
3,
2,
1,
0,
0
] | [] | [] | [
"backup",
"email"
] | stackoverflow_0000051528_backup_email.txt |
Q:
Accessing non-generic members of a generic object
Is there a way to collect (e.g. in a List) multiple 'generic' objects that don't share a common super class? If so, how can I access their common properties?
For example:
class MyObject<T>
{
public T Value { get; set; }
public string Name { get; set; }
public MyObject(string name, T value)
{
Name = name;
Value = value;
}
}
var fst = new MyObject<int>("fst", 42);
var snd = new MyObject<bool>("snd", true);
List<MyObject<?>> list = new List<MyObject<?>>(){fst, snd};
foreach (MyObject<?> o in list)
Console.WriteLine(o.Name);
Obviously, this is pseudo code, this doesn't work.
Also I don't need to access the .Value property (since that wouldn't be type-safe).
EDIT: Now that I've been thinking about this, It would be possible to use sub-classes for this. However, I think that would mean I'd have to write a new subclass for every new type.
@Grzenio
Yes, that exactly answered my question. Of course, now I need to duplicate the entire shared interface, but that's not a big problem. I should have thought of that...
@aku
You are right about the duck typing. I wouldn't expect two completely random types of objects to be accessible.
But I thought generic objects would share some kind of common interface, since they are exactly the same, apart from the type they are parametrized by. Apparently, this is not the case automatically.
A:
I don't think it is possible in C#, because MyObject<int> is not a baseclass of MyObject<bool>. What I usually do is to define an interface (a 'normal' one, not generic) and make MyObject<T> implement that interface, e.g.
interface INamedObject
{
string Name {get;}
}
and then you can use the interface:
List<INamedObject> list = new List<INamedObject>(){fst, snd};
foreach (INamedObject o in list)
Console.WriteLine(o.Name);
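A minimal sketch of MyObject<T> implementing that interface (reusing the class from the question):
class MyObject<T> : INamedObject
{
    public T Value { get; set; }
    public string Name { get; set; }

    public MyObject(string name, T value)
    {
        Name = name;
        Value = value;
    }
}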
Did it answer your question?
A:
C# doesn't support duck typing. You have 2 choices: interfaces and inheritance, otherwise you can't access similar properties of different types of objects.
A:
The best way would be to add a common base class, otherwise you can fall back to reflection.
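A rough sketch of the reflection fallback, assuming each object exposes a Name property (the list would have to be a List<object> here):
foreach (object o in list)
{
    System.Reflection.PropertyInfo p = o.GetType().GetProperty("Name");
    Console.WriteLine((string)p.GetValue(o, null));
}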
| Accessing non-generic members of a generic object | Is there a way to collect (e.g. in a List) multiple 'generic' objects that don't share a common super class? If so, how can I access their common properties?
For example:
class MyObject<T>
{
public T Value { get; set; }
public string Name { get; set; }
public MyObject(string name, T value)
{
Name = name;
Value = value;
}
}
var fst = new MyObject<int>("fst", 42);
var snd = new MyObject<bool>("snd", true);
List<MyObject<?>> list = new List<MyObject<?>>(){fst, snd};
foreach (MyObject<?> o in list)
Console.WriteLine(o.Name);
Obviously, this is pseudo code, this doesn't work.
Also I don't need to access the .Value property (since that wouldn't be type-safe).
EDIT: Now that I've been thinking about this, It would be possible to use sub-classes for this. However, I think that would mean I'd have to write a new subclass for every new type.
@Grzenio
Yes, that exactly answered my question. Of course, now I need to duplicate the entire shared interface, but that's not a big problem. I should have thought of that...
@aku
You are right about the duck typing. I wouldn't expect two completely random types of objects to be accessible.
But I thought generic objects would share some kind of common interface, since they are exactly the same, apart from the type they are parametrized by. Apparently, this is not the case automatically.
| [
"I don't think it is possible in C#, because MyObject is not a baseclass of MyObject. What I usually do is to define an interface (a 'normal' one, not generic) and make MyObject implement that interface, e.g.\ninterface INamedObject\n{\n string Name {get;}\n}\n\nand then you can use the interface:\nList<INamedObject> list = new List<INamedObject>(){fst, snd};\n\nforeach (INamedObject o in list)\n Console.WriteLine(o.Name);\n\nDid it answer your question?\n",
"C# doesn't support duck typing. You have 2 choices: interfaces and inheritance, otherwise you can't access similar properties of different types of objects.\n",
"The best way would be to add a common base class, otherwise you can fall back to reflection.\n"
] | [
7,
3,
0
] | [] | [] | [
".net",
"c#",
"generics"
] | stackoverflow_0000051586_.net_c#_generics.txt |
Q:
Display data from XMLDataSource in TextBox
Can anyone give me some pointers on how to display the results of an XPath query in a textbox using code (C#)? My datasource seems to (re)bind correctly once the XPath query has been applied, but I cannot find how to get at the resulting data.
Any help would be greatly appreciated.
A:
XMLDataSource is designed to be used with data-bound controls. ASP.NET's TextBox is not a data-bound control. So to accomplish what you want you either have to find a textbox control with data binding or display the result in some other way.
For example, you could use a Repeater control and create your own rendering template for it.
<asp:Repeater id="Repeater1" runat="server" datasource="XMLds">
<ItemTemplate>
<input type="text" value="<%# XPath("<path to display field>")%>" />
</ItemTemplate>
</asp:Repeater>
A:
Some more information would be nice to have to be able to give you a decent answer. Do you have any existing code snippets you could publish here?
The general idea is to use the XmlDataSource.XPath property as a filter on the XmlDataSource.Data property. Did you try displaying the contents of the Data prop in your textbox?
A:
Based on a selection in a DropDownList, when the SelectedIndexChanged event fires, the XPath for an XMLDataSource object is updated:
protected void ddl_SelectedIndexChanged(object sender, EventArgs e)
{
XMLds.XPath = "/controls/control[@id='AuthorityType']/item[@text='" + ddl.SelectedValue + "']/linkedValue";
XMLds.DataBind();
}
The XPath string is fine, I can output and test that it is working correctly and resolving to the correct nodes. What I am having problems with is getting at the data that is supposedly stored in the XmlDataSource; specifically, getting the data and outputting it in a TextBox. I'd like to be able to do this as part of the function above, i.e.
protected void ddl_SelectedIndexChanged(object sender, EventArgs e)
{
XMLds.XPath = "/controls/control[@id='AuthorityType']/item[@text='" + ddl.SelectedValue + "']/linkedValue";
XMLds.DataBind();
myTextBox.Text = <FieldFromXMLDataSource>;
}
Thank you for your time.
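One possible way to do that last step, sketched under the assumption that the same XPath can simply be evaluated against the data source's underlying document (XMLds and myTextBox are the names from the snippets above):
protected void ddl_SelectedIndexChanged(object sender, EventArgs e)
{
    string xpath = "/controls/control[@id='AuthorityType']/item[@text='" + ddl.SelectedValue + "']/linkedValue";
    XMLds.XPath = xpath;
    XMLds.DataBind();

    // GetXmlDocument() exposes the XmlDataSource's loaded document,
    // so the same XPath can be evaluated against it directly.
    System.Xml.XmlNode node = XMLds.GetXmlDocument().SelectSingleNode(xpath);
    if (node != null)
    {
        myTextBox.Text = node.InnerText;
    }
}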
| Display data from XMLDataSource in TextBox | Can anyone give me some pointers on how to display the results of an XPath query in a textbox using code (C#)? My datasource seems to (re)bind correctly once the XPath query has been applied, but I cannot find how to get at the resulting data.
Any help would be greatly appreciated.
| [
"XMLDataSource is designed to be used with data-bound controls. ASP.NET's TextBox is not a data-bound control. So to accomplish what you want you either have to find a textbox control with data binding or display the result in some other way. \nFor example, you could use a Repeater control and create your own rendering template for it. \n<asp:Repeater id=\"Repeater1\" runat=\"server\" datasource=\"XMLds\">\n <ItemTemplate>\n <input type=\"text\" value=\"<%# XPath(\"<path to display field>\")%>\" />\n </ItemTemplate>\n</asp:Repeater>\n\n",
"Some more information would be nice to have to be able to give you a decent answer. Do you have any existing code snippets you could publish here?\nThe general idea is to use the XmlDataSource.XPath property as a filter on the XmlDataSource.Data property. Did you try displaying the contents of the Data prop in your textbox?\n",
"Based on a slection in a DropDownList, when the SelectedIndexChanged event fires, the XPath for an XMLDataSource object is updated:\nprotected void ddl_SelectedIndexChanged(object sender, EventArgs e)\n{\n XMLds.XPath = \"/controls/control[@id='AuthorityType']/item[@text='\" + ddl.SelectedValue + \"']/linkedValue\";\n XMLds.DataBind();\n}\n\nThe XPath string is fine, I can output and test that it is working correctly and resolving to the correct nodes. What I am having problems with, is getting at the data that is supposedly stored in the XmlDataSource; specifically, getting the data and outputting it in a TextBox. I'd like to be able to do this as part of the function above, i.e.\nprotected void ddl_SelectedIndexChanged(object sender, EventArgs e)\n{\n XMLds.XPath = \"/controls/control[@id='AuthorityType']/item[@text='\" + ddl.SelectedValue + \"']/linkedValue\";\n XMLds.DataBind();\n myTextBox.Text = <FieldFromXMLDataSource>;\n}\n\nThank you for your time.\n"
] | [
1,
0,
0
] | [] | [] | [
"c#",
"xmldatasource"
] | stackoverflow_0000051429_c#_xmldatasource.txt |
Q:
Windows 2003 Scheduled Task Cmdlet (v 1.0)
Does anyone know of a powershell cmdlet out there for automating task scheduler in XP/2003? If you've ever tried to work w/ schtasks you know it's pretty painful.
A:
Ok, Pablo has sparked my interest in saying that the scheduler is accessible via COM.
In PowerShell you can do this:
$svc = new-object -com Schedule.Service
... and that gives you a handle to the task scheduler. You can see what members it has using:
$svc | get-member
One of its methods is NewTask, so I'd start there.
Edit: Some more info here. It's a VBScript example but it'll give you the gist.
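A rough sketch of creating a task through this COM interface (note: Schedule.Service is the Task Scheduler 2.0 API, which, as the next answer points out, is a Vista/Server 2008 feature rather than XP/2003; the numeric arguments are the documented enum values):
$svc = new-object -com Schedule.Service
$svc.Connect()
$task = $svc.NewTask(0)
$task.RegistrationInfo.Description = "Example task"
$trigger = $task.Triggers.Create(1)        # 1 = TASK_TRIGGER_TIME
$trigger.StartBoundary = "2008-09-15T09:00:00"
$action = $task.Actions.Create(0)          # 0 = TASK_ACTION_EXEC
$action.Path = "notepad.exe"
$folder = $svc.GetFolder("\")
# 6 = TASK_CREATE_OR_UPDATE, 3 = TASK_LOGON_INTERACTIVE_TOKEN
$folder.RegisterTaskDefinition("ExampleTask", $task, 6, $null, $null, 3)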
A:
You don't need PowerShell to automate the Task Scheduler, you can use the SCHTASKS command in XP.
According to Wikipedia, the Task Scheduler 2.0 (Vista and Server 2008) is accessible via COM.
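For example, something along these lines creates a daily task (treat this as a sketch; the exact /ST time format differs between Windows versions):
SCHTASKS /Create /TN "ExampleTask" /TR "notepad.exe" /SC DAILY /ST 09:00:00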
A:
This is a good article (be sure to read the other linked article in it) that discusses looking at the scheduled tasks on remote machines. It is not exactly what you were asking for but it should get you headed in the right direction.
A:
Not "native" PowerShell, but if you're running powershell.exe as an administrator then you should have access to the "at" command, which you can use to schedule tasks.
| Windows 2003 Scheduled Task Cmdlet (v 1.0) | Does anyone know of a powershell cmdlet out there for automating task scheduler in XP/2003? If you've ever tried to work w/ schtasks you know it's pretty painful.
| [
"Ok, Pablo has sparked my interest in saying that the scheduler is accessible via COM.\nIn PowerShell you can do this:\n$svc = new-object -com Schedule.Service\n\n... and that gives you a handle to the task scheduler. You can see what members it has using:\n$svc | get-member\n\nOne of its methods is NewTask, so I'd start there.\nEdit: Some more info here. It's a VBScript example but it'll give you the gist.\n",
"You don't need PowerShell to automate the Task Scheduler, you can use the SCHTASKS command in XP.\nAccording to Wikipedia, the Task Scheduler 2.0 (Vista and Server 2008) is accesible via COM.\n",
"This is a good article (be sure to read the other linked article in it) that discusses looking at th scheduled tasks on remote machines. It is not exactly what you were asking for but it should get you headed in the right direction.\n",
"Not \"native\" PowerShell, but if you're running powershell.exe as an administrator then you should have access to the \"at\" command, which you can use to schedule tasks.\n"
] | [
3,
1,
1,
0
] | [
"@slipsec: I don't have access to a 2003 server to try, but googling \"2003 server\" SCHTASKS there are links related to questions about it. Perhaps you misspell the command?\nLike Matt says, you also can use AT, but in this case you don't have access to the scheduled tasks via the Control Panel, only via the AT command.\n"
] | [
-1
] | [
"powershell",
"scheduled_tasks"
] | stackoverflow_0000033779_powershell_scheduled_tasks.txt |
Q:
Managing/Using libraries with Debug builds vs Release builds
I'm curious about everyones practices when it comes to using or distributing libraries for an application that you write.
First of all, when developing your application do you link the debug or release version of the libraries? (For when you run your application in debug mode)
Then when you run your app in release mode just before deploying, which build of the libraries do you use?
How do you perform the switch between your debug and release version of the libraries? Do you do it manually, do you use macros, or whatever else is it that you do?
A:
I would first determine what requirements are needed from the library:
Debug/Release
Unicode support
And so on..
With that determined you can then create configurations for each combination required by yourself or other library users.
When compiling and linking it is very important that you keep the libraries and executable consistent with respect to the configurations used, i.e. don't mix release & debug when linking.
I know on the Windows/VS platform this can cause subtle memory issues if debug & release libs are mixed within an executable.
As Brian has mentioned, in Visual Studio it's best to use the Configuration Manager to set up how you want each configuration you require to be built.
For example our projects require the following configurations to be available depending on the executable being built.
Debug+Unicode
Debug+ASCII
Release+Unicode
Release+ASCII
The users of this particular project use the Configuration Manager to match their executable requirements with the project's available configurations.
Regarding the use of macros, they are used extensively in implementing compile time decisions for requirements like if the debug or release version of a function is to be linked. If you're using VS you can view the pre-processor definitions attribute to see how the various macros are defined, e.g. _DEBUG and _RELEASE; this is how the configuration controls what's compiled.
What platform are you using to compile/link your projects?
EDIT: Expanding on your updated comment..
If the Configuration Manager option is not available to you then I recommend using the following properties from the project:
Linker->Additional Library Directories or Linker->Input
Use the macro $(ConfigurationName) to link with the appropriate library configuration e.g. Debug/Release.
$(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.lib
Build Events or Custom Build Step configuration property
Execute a copy of the required library file(s) from the dependent project prior (or after) to the build occurring.
xcopy $(ProjectDir)\..\third-party-prj\$(ConfigurationName)\third-party.dll $(IntDir)
The macro $(ProjectDir) will be substituted for the current project's location and causes the operation to occur relative to the current project.
The macro $(ConfigurationName) will be substituted for the currently selected configuration (default is Debug or Release) which allows the correct items to be copied depending on what configuration is being built currently.
If you use a regular naming convention for your project configurations it will help, as you can use the $(ConfigurationName) macro, otherwise you can simply use a fixed string.
A:
I use VS. The way that I do it is that I reference the libraries I need through the references of the project, which basically just says in what folder to look for a specific library at project load time. I develop my libraries to be as project independent or reusable as possible. Therefore they are all projects of their own. So for the libraries that I need for a specific project, I create a "3rdParty" or "libs" folder at the same level as my "src" folder in my svn folder tree. I tend to only use release builds of the libraries, but when I get some unknown issues and want to switch to debug, I manually copy a debug version of the files into the "lib" folder and reload the project.
I am unsure whether I should be keeping both debug and release versions in my svn tree. Although since they are projects of their own, keeping them in the svn tree of another project doesn't feel right. They can be built again without a hitch at any moment.
And then I wanted to find a way of making the switch more...hmmm...well basically automatic if you will, but that's not quite what I mean. It just feels that switching the files manually between release and debug isn't right. Maybe I haven't found it yet, but what I would like is an option that would do something like:
For library "stack.dll" look in "......\3rdParty\" for release and "......\3rdPartyD\" for debug.
Is there anything that does something like that? I don't know. What do you suggest?
Remember libraries are external projects. Their built files are totally elsewhere. In fact think of it as you have to check out another project, build it, and copy the built library if you want another copy. How would you set that up?
| Managing/Using libraries with Debug builds vs Release builds | I'm curious about everyones practices when it comes to using or distributing libraries for an application that you write.
First of all, when developing your application do you link the debug or release version of the libraries? (For when you run your application in debug mode)
Then when you run your app in release mode just before deploying, which build of the libraries do you use?
How do you perform the switch between your debug and release version of the libraries? Do you do it manually, do you use macros, or whatever else is it that you do?
| [
"I would first determine what requirements are needed from the library:\n\nDebug/Release\nUnicode support\nAnd so on..\n\nWith that determined you can then create configurations for each combination required by yourself or other library users.\nWhen compiling and linking it is very important that you keep that libraries and executable consistent with respect to configurations used i.e. don't mix release & debug when linking.\nI know on the Windows/VS platform this can cause subtle memory issues if debug & release libs are mixed within an executable.\nAs Brian has mentioned to Visual Studio it's best to use the Configuration Manager to setup how you want each configuration you require to be built.\nFor example our projects require the following configurations to be available depending on the executable being built.\n\nDebug+Unicode\nDebug+ASCII\nRelease+Unicode\nRelease+ASCII\n\nThe users of this particular project use the Configuration Manager to match their executable requirements with the project's available configurations.\nRegarding the use of macros, they are used extensively in implementing compile time decisions for requirements like if the debug or release version of a function is to be linked. If you're using VS you can view the pre-processor definitions attribute to see how the various macros are defined e.g. _DEBUG _RELEASE, this is how the configuration controls whats compiled.\nWhat platform are you using to compile/link your projects?\nEDIT: Expanding on your updated comment..\nIf the Configuration Manager option is not available to you then I recommend using the following properties from the project:\n\nLinker->Additional Library Directories or Linker->Input\n\nUse the macro $(ConfigurationName) to link with the appropriate library configuration e.g. Debug/Release.\n$(ProjectDir)\\..\\third-party-prj\\$(ConfigurationName)\\third-party.lib\n\n\nBuild Events or Custom Build Step configuration property\n\nExecute a copy of the required library file(s) from the dependent project prior (or after) to the build occurring.\nxcopy $(ProjectDir)\\..\\third-party-prj\\$(ConfigurationName)\\third-party.dll $(IntDir)\n\nThe macro $(ProjectDir) will be substituted for the current project's location and causes the operation to occur relative to the current project.\nThe macro $(ConfigurationName) will be substituted for the currently selected configuration (default is Debug or Release) which allows the correct items to be copied depending on what configuration is being built currently.\nIf you use a regular naming convention for your project configurations it will help, as you can use the $(ConfigurationName) macro, otherwise you can simply use a fixed string.\n",
"I use VS. The way that I do it is that the libraries I need through the references of the project. Which basically just says in what folder to look for a specific library at project load time. I develop my libraries to be as project independent or reusable as possible. Therefore they are all projects of their own. So of the libraries that I need for a specific project, I create a \"3rdParty\" or \"libs\" folder at the same level as my \"src\" folder in my svn folder tree. I tend to only use released libraries, but when I get some unknown issues and want to switch to debug, I manually copy a debug version of the files in the \"lib\" folder and reload the project.\nI am unsure wether I should be keeping both debug and released versions in my svn tree. Although since they are projects of their own, keeping them in the svn tree of another project doesn't right. They can be built again without an hitch at any moment.\nAnd then I wanted to find a way of making the switch more...hmmm...well basically automatic if you while, but that's not what I really mean. It just feels that switching the files manually between released and debug isn't right. Maybe I haven't found it yet, but what I would like is an option that would do like:\nFor library \"stack.dll\" look in \"......\\3rdParty\\\" for release and \"......\\3rdPartyD\\\" for debug.\nAnything that those something like I don't know. What do you suggest?\nRemember libraries are external projects. There the built files are totally elsewhere. In fact think of it as you have to check out another project, build it, and copy the built library if you want another copy. How would you set that up?\n"
] | [
3,
0
] | [] | [] | [
"debug_build",
"release_builds"
] | stackoverflow_0000045769_debug_build_release_builds.txt |
Q:
Mail Message Link Handling
I have written an AppleScript which when supplied with a Windows network link, will convert it to the correct smb:// equivalent for the server in our office, mount the network drive, and open the requested folder in Finder.
I have this built in an application which just takes a pasted network path. Ideally I need this to trigger on clicking a link in a Mail.app email message so that it can check if the link is in the correct format, and if so run the script and attempt to mount the drive and load the folder in Finder.
How would I go about doing this?
A:
In order to do this I think you'd need to create a Cocoa application that was registered with OS X Launch Services as the default role handler for smb:// links.
I've written some stuff about how to do this on another question: How do you set your Cocoa application as the default web browser?
If there's a pure AppleScript solution or a way of only handling links within Mail.app I'm not aware of it.
| Mail Message Link Handling | I have written an AppleScript which when supplied with a Windows network link, will convert it to the correct smb:// equivalent for the server in our office, mount the network drive, and open the requested folder in Finder.
I have this built in an application which just takes a pasted network path. Ideally I need this to trigger on clicking a link in a Mail.app email message so that it can check if the link is in the correct format, and if so run the script and attempt to mount the drive and load the folder in Finder.
How would I go about doing this?
| [
"In order to do this I think you'd need to create a Cocoa application that was registered with OS X Launch Services as the default role handler for smb:// links.\nI've written some stuff about how to do this on another question: How do you set your Cocoa application as the default web browser?\nIf there's a pure AppleScript solution or a way of only handling links within Mail.app I'm not aware of it.\n"
] | [
1
] | [] | [] | [
"applescript",
"macos"
] | stackoverflow_0000051701_applescript_macos.txt |
Q:
High availability and scalable platform for Java/C++ on Solaris
I have an application that's a mix of Java and C++ on Solaris. The Java aspects of the code run the web UI and establish state on the devices that we're talking to, and the C++ code does the real-time crunching of data coming back from the devices. Shared memory is used to pass device state and context information from the Java code through to the C++ code. The Java code uses a PostgreSQL database to persist its state.
We're running into some pretty severe performance bottlenecks, and right now the only way we can scale is to increase memory and CPU counts. We're stuck on the one physical box due to the shared memory design.
The really big hit here is being taken by the C++ code. The web interface is fairly lightly used to configure the devices; where we're really struggling is to handle the data volumes that the devices deliver once configured.
Every piece of data we get back from the device has an identifier in it which points back to the device context, and we need to look that up. Right now there's a series of shared memory objects that are maintained by the Java/UI code and referred to by the C++ code, and that's the bottleneck. Because of that architecture we cannot move the C++ data handling off to another machine. We need to be able to scale out so that various subsets of devices can be handled by different machines, but then we lose the ability to do that context lookup, and that's the problem I'm trying to resolve: how to offload the real-time data processing to other boxes while still being able to refer to the device context.
I should note we have no control over the protocol used by the devices themselves, and there is no possible chance that situation will change.
We know we need to move away from this to be able to scale out by adding more machines to the cluster, and I'm in the early stages of working out exactly how we'll do this.
Right now I'm looking at Terracotta as a way of scaling out the Java code, but I haven't got as far as working out how to scale out the C++ to match.
As well as scaling for performance we need to consider high availability as well. The application needs to be available pretty much the whole time -- not absolutely 100%, which isn't cost effective, but we need to do a reasonable job of surviving a machine outage.
If you had to undertake the task I've been given, what would you do?
EDIT: Based on the data provided by @john channing, i'm looking at both GigaSpaces and Gemstone. Oracle Coherence and IBM ObjectGrid appear to be java-only.
A:
The first thing I would do is construct a model of the system to map the data flow and try to understand precisely where the bottleneck lies. If you can model your system as a pipeline, then you should be able to use the theory of constraints (most of the literature is about optimising business processes but it applies equally to software) to continuously improve performance and eliminate the bottleneck.
Next I would collect some hard empirical data that accurately characterises the performance of your system. It is something of a cliché that you cannot manage what you cannot measure, but I have seen many people attempt to optimise a software system based on hunches and fail miserably.
Then I would use the Pareto Principle (80/20 rule) to choose the small number of things that will produce the biggest gains and focus only on those.
To scale a Java application horizontally, I have used Oracle Coherence extensively. Although some dismiss it as a very expensive distributed hashtable, the functionality is much richer than that and you can, for example, directly access data in the cache from C++ code.
Other alternatives for horizontally scaling your Java code would be Giga Spaces, IBM Object Grid or Gemstone Gemfire.
If your C++ code is stateless and is used purely for number crunching, you could look at distributing the process using ICE Grid which has bindings for all of the languages you are using.
A:
You need to scale sideways and out. Maybe something like a message queue could be the backend between the frontend and the crunching.
A:
Andrew, (in addition to modeling as a pipeline etc), measuring things is important. Have you run a profiler over the code and got metrics of where most of the time is spent?
For the database code, how often does it change? Are you looking at caching at the moment? I assume you have looked at indexes etc over the data to speed up the Db?
What levels of traffic do you have on the front end? Are you caching web pages? (It isn't too hard to, say, use a JMS type api to communicate between components. You can then put the Web Page component on one machine (or more), and then put the integration code (c++) on another, and for many JMS products there are usually native C++ api's, ie. ActiveMQ comes to mind), but it really helps to know how much of the time is in Web (JSP?), C++, and Database ops.
Is the database storing business data, or is it also being used to pass data between Java and C++? You say you are using shared mem, not JNI? What level of multi-threading currently exists in the APP? Would you describe the code as being synchronous in nature or async?
Is there a physical relationship between the Solaris code and the devices that must be maintained (ie. do all the devices register with the c++ code, or can that be specified)? ie. if you were to put a web load balancer on the frontend, and just put 2 machines up today, is the relationship of which devices are managed by a box initialized up front or in advance?
What are the HA requirements? ie. just state info? Can the HA be done just in the web tier by clustering Session data?
Is the DB running on another machine?
How big is the DB? Have you optimized your queries, ie. tried using explicit inner/outer joins, which sometimes helps versus nested sub queries? (Again, look at the sql stats.)
| High availability and scalable platform for Java/C++ on Solaris | I have an application that's a mix of Java and C++ on Solaris. The Java aspects of the code run the web UI and establish state on the devices that we're talking to, and the C++ code does the real-time crunching of data coming back from the devices. Shared memory is used to pass device state and context information from the Java code through to the C++ code. The Java code uses a PostgreSQL database to persist its state.
We're running into some pretty severe performance bottlenecks, and right now the only way we can scale is to increase memory and CPU counts. We're stuck on the one physical box due to the shared memory design.
The really big hit here is being taken by the C++ code. The web interface is fairly lightly used to configure the devices; where we're really struggling is to handle the data volumes that the devices deliver once configured.
Every piece of data we get back from the device has an identifier in it which points back to the device context, and we need to look that up. Right now there's a series of shared memory objects that are maintained by the Java/UI code and referred to by the C++ code, and that's the bottleneck. Because of that architecture we cannot move the C++ data handling off to another machine. We need to be able to scale out so that various subsets of devices can be handled by different machines, but then we lose the ability to do that context lookup, and that's the problem I'm trying to resolve: how to offload the real-time data processing to other boxes while still being able to refer to the device context.
I should note we have no control over the protocol used by the devices themselves, and there is no possible chance that situation will change.
We know we need to move away from this to be able to scale out by adding more machines to the cluster, and I'm in the early stages of working out exactly how we'll do this.
Right now I'm looking at Terracotta as a way of scaling out the Java code, but I haven't got as far as working out how to scale out the C++ to match.
As well as scaling for performance we need to consider high availability as well. The application needs to be available pretty much the whole time -- not absolutely 100%, which isn't cost effective, but we need to do a reasonable job of surviving a machine outage.
If you had to undertake the task I've been given, what would you do?
EDIT: Based on the data provided by @john channing, i'm looking at both GigaSpaces and Gemstone. Oracle Coherence and IBM ObjectGrid appear to be java-only.
| [
"The first thing I would do is construct a model of the system to map the data flow and try to understand precisely where the bottleneck lies. If you can model your system as a pipeline, then you should be able to use the theory of constraints (most of the literature is about optimising business processes but it applies equally to software) to continuously improve performance and eliminate the bottleneck.\nNext I would collect some hard empirical data that accurately characterises the performance of your system. It is something of a cliché that you cannot manage what you cannot measure, but I have seen many people attempt to optimise a software system based on hunches and fail miserably.\nThen I would use the Pareto Principle (80/20 rule) to choose the small number of things that will produce the biggest gains and focus only on those.\nTo scale a Java application horizontally, I have used Oracle Coherence extensively. Although some dismiss it as a very expensive distributed hashtable, the functionality is much richer than that and you can, for example, directly access data in the cache from C++ code .\nOther alternatives for horizontally scaling your Java code would be Giga Spaces, IBM Object Grid or Gemstone Gemfire.\nIf your C++ code is stateless and is used purely for number crunching, you could look at distributing the process using ICE Grid which has bindings for all of the languages you are using.\n",
"You need to scale sideways and out. Maybe something like a message queue could be the backend between the frontend and the crunching.\n",
"Andrew, (in addition to modeling as a pipeline etc), measuring things is important. Have you ran a profiler over the code and got metrics of where most of the time is spent?\nFor the database code, how often does it change ? Are you looking at caching at the moment ? I assume you have looked at indexes etc over the data to speed up the Db ?\nWhat levels of traffic do you have on the front end ? Are you caching web pages ? (It isn't too hard to say use a JMS type api to communicate between components. You can then put Web Page component on one machine (or more), and then put the integration code (c++) on another, and for many JMS products there are usually native C++ api's ie. ActiveMQ comes to mind), but it really helps to know how much of the time is in Web (JSP ?) , C++, Database ops. \nIs the database storing business data, or is it being also used to pass data between Java and C++ ? You say you are using shared mem not JNI ? What level of multi-threading currently exists in the APP? Would you describe the code as being synchronous in nature or async? \nIs there a physical relationship between the Solaris code and the devices that must be maintained (ie. do all the devices register with the c++ code, or can that be specified). ie. if you were to put a web load balancer on the frontend, and just put 2 machines up today is the relationhip of which devices are managed by a box initialized up front or in advance? \nWhat are the HA requirements ? ie. just state info ? Can the HA be done just in the web tier by clustering Session data ?\nIs the DB running on another machine ?\nHow big is the DB ? Have you optimized your queries ie. tried using explicit inner/outer joins sometimes helps versus nested sub queries (sometmes). (again look at the sql stats).\n"
] | [
5,
1,
1
] | [] | [] | [
"c++",
"high_availability",
"java",
"scalability",
"solaris"
] | stackoverflow_0000051266_c++_high_availability_java_scalability_solaris.txt |
Q:
Property default values using Properties.Settings.Default
I am using .Net 2 and the normal way to store my settings. I store my custom object serialized to xml. I am trying to retrieve the default value of the property (but without resetting other properties). I use:
ValuationInput valuationInput = (ValuationInput) Settings.Default.Properties["ValuationInput"].DefaultValue;
But it seems to return a string instead of ValuationInput and it throws an exception.
I made a quick hack, which works fine:
string valuationInputStr = (string)
Settings.Default.Properties["ValuationInput"].DefaultValue;
XmlSerializer xmlSerializer = new XmlSerializer(typeof(ValuationInput));
ValuationInput valuationInput = (ValuationInput) xmlSerializer.Deserialize(new StringReader(valuationInputStr));
But this is really ugly - when I use all the tooling to define a strongly typed setting, I don't want to serialize the default value myself; I would like to read it the same way as I read the current value: ValuationInput valuationInput = Settings.Default.ValuationInput;
A:
At some point, something, somewhere is going to have to use Xml Deserialization, whether it is you or a wrapper inside the settings class. You could always abstract it away in a method to remove the "ugly" code from your business logic.
public static T FromXml<T>(string xml)
{
XmlSerializer xmlser = new XmlSerializer(typeof(T));
using (System.IO.StringReader sr = new System.IO.StringReader(xml))
{
return (T)xmlser.Deserialize(sr);
}
}
http://www.vonsharp.net/PutDownTheXmlNodeAndStepAwayFromTheStringBuilder.aspx
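Usage at the call site, with the setting from the question, would then be something like:
ValuationInput valuationInput = FromXml<ValuationInput>(
    (string)Settings.Default.Properties["ValuationInput"].DefaultValue);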
A:
@Grzenio,
Why don't you use your object type directly? You can set the type of your setting on the Project Properties->Settings tab. You can select your type by clicking on Browse in the drop down for the Type column.
Citation from MSDN:
Application settings can be stored as
any data type that is XML serializable
or has a TypeConverter that implements
ToString/FromString
That way you can have strongly typed settings, i.e. (ValuationInput) Settings.Default.Properties["ValuationInput"].DefaultValue; will return an object instead of a string.
| Property default values using Properties.Settings.Default | I am using .Net 2 and the normal way to store my settings. I store my custom object serialized to xml. I am trying to retrieve the default value of the property (but without resetting other properties). I use:
ValuationInput valuationInput = (ValuationInput) Settings.Default.Properties["ValuationInput"].DefaultValue;
But it seems to return a string instead of ValuationInput and it throws an exception.
I made a quick hack, which works fine:
string valuationInputStr = (string)
Settings.Default.Properties["ValuationInput"].DefaultValue;
XmlSerializer xmlSerializer = new XmlSerializer(typeof(ValuationInput));
ValuationInput valuationInput = (ValuationInput) xmlSerializer.Deserialize(new StringReader(valuationInputStr));
But this is really ugly - when I use all the tooling to define a strongly typed setting, I don't want to serialize the default value myself; I would like to read it the same way as I read the current value: ValuationInput valuationInput = Settings.Default.ValuationInput;
| [
"At some point, something, somewhere is going to have to use Xml Deserialization, whether it is you or a wrapper inside the settings class. You could always abstract it away in a method to remove the \"ugly\" code from your business logic.\npublic static T FromXml<T>(string xml)\n{\n XmlSerializer xmlser = new XmlSerializer(typeof(T));\n using (System.IO.StringReader sr = new System.IO.StringReader(xml))\n {\n return (T)xmlser.Deserialize(sr);\n }\n}\n\nhttp://www.vonsharp.net/PutDownTheXmlNodeAndStepAwayFromTheStringBuilder.aspx\n",
"@Grzenio,\nWhy don't you use your object type directly? You can set type of your setting on Project Properties->Settings tab. You can select your type by clicking on Browse in drop down for Type column.\nCitation from MSDN: \n\nApplication settings can be stored as\n any data type that is XML serializable\n or has a TypeConverter that implements\n ToString/FromString\n\nThat way you can have strongly typed settings, i.e. (ValuationInput) Settings.Default.Properties[\"ValuationInput\"].DefaultValue; will return an object instead of string.\n"
] | [
2,
2
] | [] | [] | [
".net",
"c#",
"settings"
] | stackoverflow_0000051700_.net_c#_settings.txt |
Q:
HTTP Errors with .Net 3.5 SP1
I have an ASP.Net website that is failing on AJAX postbacks (both with ASP.Net AJAX and a 3rd party control) in IE. FireFox works fine. If I install the website on another machine without .Net 3.5 SP1, it works as expected.
When it fails, Fiddler shows that I'm getting a 405 "Method Not Allowed". The form seems to be posting to pages other than the page I'm viewing.
The form's action is "#" for the page on the broken website (with SP1). The form's action is "Default.aspx" for the same page on a website without SP1.
Any ideas?
A:
Check out the following Microsoft Knowledge base article:
http://support.microsoft.com/kb/216493
If you're using IIS4 or IIS5 this may be the problem.
A:
SP1 changes the HtmlForm control so that it honors the action attribute, where previous versions ignored it. It sounds like you have something like this on the broken pages:
<form runat="server" action="#">
Remove the action, and it should be fine:
<form runat="server">
More info here: http://forums.asp.net/t/1305800.aspx
| HTTP Errors with .Net 3.5 SP1 | I have an ASP.Net website that is failing on AJAX postbacks (both with ASP.Net AJAX and a 3rd party control) in IE. FireFox works fine. If I install the website on another machine without .Net 3.5 SP1, it works as expected.
When it fails, Fiddler shows that I'm getting a 405 "Method Not Allowed". The form seems to be posting to pages other than the page I'm viewing.
The form's action is "#" for the page on the broken website (with SP1). The form's action is "Default.aspx" for the same page on a website without SP1.
Any ideas?
| [
"Check out the following Microsoft Knowledge base article:\nhttp://support.microsoft.com/kb/216493\nIf you're using IIS4 or IIS5 this may be the problem.\n",
"SP1 changes the HtmlForm control so that it honors the action attribute, where previous versions ignored it. It sounds like you have something like this on the broken pages:\n<form runat=\"server\" action=\"#\">\n\nRemove the action, and it should be fine:\n<form runat=\"server\">\n\nMore info here: http://forums.asp.net/t/1305800.aspx\n"
] | [
0,
0
] | [] | [] | [
".net_3.5",
"asp.net"
] | stackoverflow_0000051751_.net_3.5_asp.net.txt |
Q:
Edit control on stack overflow
Is the edit control I'm typing in now, with all its buttons and rules freely available for use?
My web project is also .Net based.
A:
It's the WMD Markdown editor which is free and seems to be pretty easy to use. Just include the javascript for it and (in the easiest case), it just attaches to the first textarea it finds.
Here's some info about the Perl implementation of Markdown which, according to the site, WMD is 100% compatible with.
@Chris Upchurch Technically the current release isn't open-source, just free to use. The next version is supposed to be released with an MIT license though.
"now completely free to use. The next release will be open source under an MIT-style license."
A:
I don't know about this control, but TinyMCE is:
http://tinymce.moxiecode.com/
It's what wordpress etc use.
A:
The WMD editor is completely free (in the speech and beer senses of the word). It's available under an MIT-style license.
| Edit control on stack overflow | Is the edit control I'm typing in now, with all its buttons and rules freely available for use?
My web project is also .Net based.
| [
"It's the WMD Markdown editor which is free and seems to be pretty easy to use. Just include the javascript for it and (in the easiest case), it just attaches to the first textarea it finds.\nHere's some info about the Perl implementation of Markdown which, according to the site, WMD is 100% compatible with.\n\n@Chris Upchurch Technically the current release isn't open-source, just free to use. The next version is supposed to be released with an MIT license though.\n\n\n\n\n\"now completely free to use. The next release will be open source under an MIT-style license.\"\n\n\n\n\n",
"I don't know about this control, but TinyMCE is:\nhttp://tinymce.moxiecode.com/\nIt's what wordpress etc use.\n",
"The WMD editor is completely free (in the speech and beer senses of the word). It's available under an MIT-style license.\n"
] | [
9,
2,
1
] | [] | [] | [
".net",
"asp.net",
"text_editor"
] | stackoverflow_0000051808_.net_asp.net_text_editor.txt |
Q:
How does off-the-shelf software fit in with agile development?
Maybe my understanding of agile development isn't as good as it should be, but I'm curious how an agile developer would potentially use off-the-shelf (OTS) software when the requirements and knowledge of what the final system should be are changing as rapidly as I understand them to (often after each iteration of development).
I see two situations that are of particular interest to me:
(1) An OTS system meets the initial set of requirements with little to no modification, other than potential integration into an existing system. However, within a few iterations of development, this system no longer meets the needs without rewriting the core code. The developers must choose to either spend additional time learning the core code behind this OTS software or throw it away and build from scratch. Either would have a drastic impact on development time and project cost.
(2) The initial needs are not like any existing OTS system available, however, in the end when the customer accepts the product, it ends up being much like existing solutions due to requirement additions and subtractions. If the developers had more requirements and spent more time working on them up front, this solution could have been used instead of building again. The project was delivered, but later and at a higher cost than necessary.
As a software engineer, part of my responsibilities (as I have been taught), are to deliver high-quality software to the customer on time at the lowest possible cost (among other things). Agile development allows for high-quality software, but in some cases, it might not be apparent that there are better alternatives until it is too late and too much money has been spent.
My questions are:
How does off-the-shelf software fit in with agile development?
How do the agile manager and agile developer deal with these cases?
What do the agile paradigms say about these cases?
A:
Scenario1:
This can occur regardless of the OTS nature of the component. Agile does not mean near-sighted.. you'd need to know the big chunks.. the framework bits and spend thinking time on it beforehand. That said, you can only build to what you know.. Delay only till the last responsible moment. Then you need to pick one of the alternatives and start on it. (I'd avoid third-party applications unless the cost of developing in-house is infeasible.. but that's just me). Prototype multiple solutions to check feasibility against the list of known requirements. Keep things loosely coupled (replaceable), easy to change and fully tested. If you reach the fork of keep hacking or rewrite, you'd need to think about which has better value for the business and pick that option. It comes down to 'Now that we're here, what's the best we can do now?'
Scenario2:
This can happen although the chances are slim compared to the team spending 2-3 months trying to get the requirements 'finalized' only to find that the market needs or customer minds have changed and 'Now we want it this way'. Once again, it's a question of what is the point in time till which you are prepared to investigate and explore before committing to a path of action. Decide wisely with whatever information you have up to that point.. Hindsight is always 20-20 but the customers won't wait forever. You can't wait till the point of time where the requirements coalesce to fit a known OTS component :)
Agile says Do whatever makes sense and strip out the non-value-adding activities :) Agile is no magic bullet. just my 2 agile cents :)
A:
Not a strict answer per se, but I think that using off the shelf software as a component in a software solution can be very beneficial if:
It's data is open, e.g. an open database or a web service to interact with it
The off the shelf system can customised easily using a similar programming paradigm to the rest of your solution
It can be seamlessly adapted to the rest of your work-flow
I'm a big fan of not re-inventing the wheel, and using your development skills to design the 'glue' between off-the-shelf solutions can be a big win.
Remember 'open' is the important part, and a vendor will often tout their solution as open when it isn't really.
A:
I think I read somewhere that if during an iteration you discover that you have more than 20% more work that you initially thought then you should abandon the sprint and start planning a new one taking into account the additional work.
So this would mean replanning with the business to see if they still want to go ahead with the original requirements now that you know more.
At our company we also make use of prototyping before the sprint to try and identify this kind of situation before it arises on a sprint. Although of course that still may not identify the kind of situation that you describe.
A:
C2 wiki discussion: http://c2.com/cgi/wiki?BuyDontBuild
| How does off-the-shelf software fit in with agile development? | Maybe my understanding of agile development isn't as good as it should be, but I'm curious how an agile developer would potentially use off-the-shelf (OTS) software when the requirements and knowledge of what the final system should be are changing as rapidly as I understand them to (often after each iteration of development).
I see two situations that are of particular interest to me:
(1) An OTS system meets the initial set of requirements with little to no modification, other than potential integration into an existing system. However, within a few iterations of development, this system no longer meets the needs without rewriting the core code. The developers must choose to either spend additional time learning the core code behind this OTS software or throw it away and build from scratch. Either would have a drastic impact on development time and project cost.
(2) The initial needs are not like any existing OTS system available, however, in the end when the customer accepts the product, it ends up being much like existing solutions due to requirement additions and subtractions. If the developers had more requirements and spent more time working on them up front, this solution could have been used instead of building again. The project was delivered, but later and at a higher cost than necessary.
As a software engineer, part of my responsibilities (as I have been taught), are to deliver high-quality software to the customer on time at the lowest possible cost (among other things). Agile development allows for high-quality software, but in some cases, it might not be apparent that there are better alternatives until it is too late and too much money has been spent.
My questions are:
How does off-the-shelf software fit in with agile development?
How do the agile manager and agile developer deal with these cases?
What do the agile paradigms say about these cases?
| [
"Scenario1:\nThis can occur regardless off the OTS nature of the component. Agile does not mean near-sighted.. you'd need to know the big chunks.. the framework bits and spend thinking time on it beforehand. That said, you can only build to what you know .. Delay only till the last responsible moment.Then you need to pick one of the alternatives and start on it. (I'd Avoid third party application unless the cost of developing it in-house is infeasible.. but that's just me). Prototype multiple solutions to check feasibility with list of known requirements. Keep things loosely coupled (replacable), easy to change and full tested. If you reach the fork of keep hacking or rewrite, you'd need to think of which has better value for the business and pick that option. It's comes down 'Now that we're here, what's the best we can do now?' \nScenario2:\nThis can happen although the chances are slim compared to the team spending 2-3 months trying to get the requirements 'finalized' only to find that the market needs or customer minds have changed and 'Now we want it this way'. Once again, its a question of what is the point of time till which you are prepared to investigate and explore before committing on a path of action. Decide wisely with whatever information you have upto that point.. Hindsight is always 20-20 but the customers wont wait forever. You can't wait till the point of time where the requirements coalesce to fit a known OTS component :)\nAgile says Do whatever makes sense and strip out the non-value-adding activities :) Agile is no magic bullet. just my 2 agile cents :)\n",
"Not a strict answer per se, but I think that using off the shelf software as a component in a software solution can be very beneficial if:\n\nIt's data is open, e.g. an open database or a web service to interact with it\nThe off the shelf system can customised easily using a similar programming paradigm to the rest of your solution\nIt can be seamlessly adapted to the rest of your work-flow\n\nI'm a big fan of not re-inventing the wheel, and using your development skills to design the 'glue' between off-the-shelf solutions can be a big win.\nRemember 'open' is the important part, and a vendor will often tout their solution as open when it isn't really.\n",
"I think I read somewhere that if during an iteration you discover that you have more than 20% more work that you initially thought then you should abandon the sprint and start planning a new one taking into account the additional work.\nSo this would mean replanning with the business to see if they still want to go ahead with the original requirements now that you know more.\nAt our company we also make use of prototyping before the sprint to try and identify these kind of situations before they arise on a sprint. Although of course that still may not identify the kind of situation that you describe.\n",
"C2 wiki discussion: http://c2.com/cgi/wiki?BuyDontBuild\n"
] | [
4,
3,
1,
1
] | [] | [] | [
"agile"
] | stackoverflow_0000051649_agile.txt |
Q:
Win32 ToolTip disappears never to re-appear with Commctl 6
I'm creating a ToolTip window and adding tools to it using the flags
TTF_IDISHWND | TTF_SUBCLASS. (c++, win32)
I have a manifest file such that my program uses the new Windows XP themes
(comctl32 version 6).
When I hover over a registered tool, the tip appears.
Good.
When I click the mouse, the tip disappears.
Ok.
However, moving away from the tool and back
again does not make the tip re-appear. I need to hover over a different tool
and then come back to my tool to get the tip to come back.
When I remove my manifest file (to use the older non-XP comctl32), the
problem goes away.
After doing some experimentation, I discovered the following differences
between ToolTips in Comctl32 version 5 (old) and Comctl32 version 6 (new):
New TTF_TRANSPARENT ToolTips (when used In-Place) actually return
HTCLIENT from WM_NCHITTEST if a mouse button is down, thus getting
WM_LBUTTONDOWN and stealing focus for a moment before vanishing. This causes
the application's border to flash.
Old TTF_TRANSPARENT ToolTips always return HTTRANSPARENT from WM_NCHITTEST,
and thus never get WM_LBUTTONDOWN themselves and never steal focus. (This seems to be just aesthetic, but may impact the next point...)
New ToolTips seem not to get WM_TIMER events after a mouse-click, and
only resume getting (a bunch of) timer events after being de-activated and
re-activated. Thus, they do not re-display their tip window after a mouse
click and release.
Old ToolTips get a WM_TIMER message as soon as the mouse is moved again
after click/release, so they are ready to re-display their tip.
Thus, as a comctl32 workaround, I had to:
subclass the TOOLTIPS_CLASS window and always return HTTRANSPARENT from
WM_NCHITTEST if the tool asked for transparency (sketched below).
avoid using TTF_SUBCLASS and rather process the mouse messages myself so
I could de-activate/re-activate upon receiving WM_xBUTTONUP.
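For anyone hitting the same thing, here is a minimal sketch of the first workaround (my use of SetWindowSubclass and the subclass id of 1 are incidental choices, not the only way to subclass):
#include <windows.h>
#include <commctrl.h>

// Force the tooltip window to stay mouse-transparent, as the v5 control did.
static LRESULT CALLBACK TipSubclassProc(HWND hwnd, UINT msg, WPARAM wp,
                                        LPARAM lp, UINT_PTR id, DWORD_PTR ref)
{
    if (msg == WM_NCHITTEST)
        return HTTRANSPARENT;   // never claim the mouse, never steal focus
    return DefSubclassProc(hwnd, msg, wp, lp);
}

// After creating the TOOLTIPS_CLASS window:
//   SetWindowSubclass(hwndTip, TipSubclassProc, 1, 0);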
I assume that the change in internal behavior was to accommodate the new "clickable" features in ToolTips like hyperlinks, but the hover behavior appears to be thus broken.
Does anyone know of a better solution than my subclass workaround? Am I missing some other point?
A:
You're not the only one who has hit compatablity issues with tooltips between these DLLS.
I too have had nothing but trouble with the new tooltips in the themable common controls. We have already been monkeying with mouse messages and active/deactivating the tips before adding the manifest and theming our application - so it sounds like what your doing isn't too crazy.
We're still living with problems with TTN_NEEDTEXT messages being sent constantly as the mouse moves (not just when hovering), positioning problems with large tips (maybe not something new), and odd Unicode messages being sent instead of the ANSI versions (which I plan to post as a question at some point).
A:
I don't know, but this sounds like a really "hard" problem (in the sense that all real-world) problems are really hard. I bet the underlying problem is something to do with the setting of the focus. Windows that manually do that are evil and generally suffer from all manner of bugs.
| Win32 ToolTip disappears never to re-appear with Commctl 6 | I'm creating a ToolTip window and adding tools to it using the flags
TTF_IDISHWND | TTF_SUBCLASS. (c++, win32)
I have a manifest file such that my program uses the new Windows XP themes
(comctl32 version 6).
When I hover over a registered tool, the tip appears.
Good.
When I click the mouse, the tip disappears.
Ok.
However, moving away from the tool and back
again does not make the tip re-appear. I need to hover over a different tool
and then come back to my tool to get the tip to come back.
When I remove my manifest file (to use the older non-XP comctl32), the
problem goes away.
After doing some experimentation, I discovered the following differences
between ToolTips in Comctl32 version 5 (old) and Comctl32 version 6 (new):
New TTF_TRANSPARENT ToolTips (when used In-Place) actually return
HTCLIENT from WM_NCHITTEST if a mouse button is down, thus getting
WM_LBUTTONDOWN and stealing focus for a moment before vanishing. This causes
the application's border to flash.
Old TTF_TRANSPARENT ToolTips always return HTTRANSPARENT from WM_NCHITTEST,
and thus never get WM_LBUTTONDOWN themselves and never steal focus. (This seems to be just aesthetic, but may impact the next point...)
New ToolTips seem not to get WM_TIMER events after a mouse-click, and
only resume getting (a bunch of) timer events after being de-activated and
re-activated. Thus, they do not re-display their tip window after a mouse
click and release.
Old ToolTips get a WM_TIMER message as soon as the mouse is moved again
after click/release, so they are ready to re-display their tip.
Thus, as a comctl32 workaround, I had to:
subclass the TOOLTIPS_CLASS window and always return HTTRANSPARENT from
WM_NCHITTEST if the tool asked for transparency.
avoid using TTF_SUBCLASS and rather process the mouse messages myself so
I could de-activate/re-activate upon receiving WM_xBUTTONUP.
I assume that the change in internal behavior was to accommodate the new "clickable" features in ToolTips like hyperlinks, but the hover behavior appears to be thus broken.
Does anyone know of a better solution than my subclass workaround? Am I missing some other point?
| [
"You're not the only one who has hit compatablity issues with tooltips between these DLLS. \nI too have had nothing but trouble with the new tooltips in the themable common controls. We have already been monkeying with mouse messages and active/deactivating the tips before adding the manifest and theming our application - so it sounds like what your doing isn't too crazy.\nWe're still living with problems with TTN_NEEDTEXT messages being send constantly as the mouse moves (not just when hovering), positioning problems with large tips (maybe not something new), and odd unicode messages being sent instead of the ANSI versions (which I plan to post as a question at some point).\n",
"I don't know, but this sounds like a really \"hard\" problem (in the sense that all real-world) problems are really hard. I bet the underlying problem is something to do with the setting of the focus. Windows that manually do that are evil and generally suffer from all manner of bugs.\n"
] | [
1,
0
] | [] | [] | [
"winapi",
"windows"
] | stackoverflow_0000051146_winapi_windows.txt |
Q:
Issue reading XML file into C# DataSet
I was given an .xml file that I needed to read into my code as a DataSet (as background, the file was created by creating a DataSet in C# and calling dataSet.WriteXml(file, XmlWriteMode.IgnoreSchema), but this was done by someone else).
The .xml file was shaped like this:
<?xml version="1.0" standalone="yes"?>
<NewDataSet>
<Foo>
<Bar>abcd</Bar>
<Foo>efg</Foo>
</Foo>
<Foo>
<Bar>hijk</Bar>
<Foo>lmn</Foo>
</Foo>
</NewDataSet>
Using C# and .NET 2.0, I read the file in using the code below:
DataSet ds = new DataSet();
ds.ReadXml(file);
Using a breakpoint, after this line ds.Tables[0] looked like this (using dashes in place of underscores that I couldn't get to format properly):
Bar Foo-Id Foo-Id-0
abcd 0 null
null 1 0
hijk 2 null
null 3 2
I have found a workaround (I know there are many) and have been able to successfully read in the .xml, but what I would like is to understand why ds.ReadXml(file) behaved in this manner, so I will be able to avoid the issue in the future. Thanks.
A:
This appears to be correct for your nested Foo tags:
<NewDataSet>
<Foo> <!-- Foo-Id: 0 -->
<Bar>abcd</Bar>
<Foo>efg</Foo> <!-- Foo-Id: 1, Parent-Id: 0 -->
</Foo>
<Foo> <!-- Foo-Id: 2 -->
<Bar>hijk</Bar>
<Foo>lmn</Foo> <!-- Foo-Id: 3, Parent-Id: 2 -->
</Foo>
</NewDataSet>
So this correctly becomes 4 records in your result, with a parent-child key of "Foo-Id-0"
Try:
<NewDataSet>
<Rec> <!-- Rec-Id: 0 -->
<Bar>abcd</Bar>
<Foo>efg</Foo>
</Rec>
<Rec> <!-- Rec-Id: 1 -->
<Bar>hijk</Bar>
<Foo>lmn</Foo>
</Rec>
</NewDataSet>
Which should result in:
Bar Foo Rec-Id
abcd efg 0
hijk lmn 1
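If it helps to see exactly what ReadXml inferred, here is a minimal C# sketch (assuming the XML above is saved as file.xml; the file name is just for illustration):
using System;
using System.Data;

class InspectSchema
{
    static void Main()
    {
        DataSet ds = new DataSet();
        ds.ReadXml("file.xml");

        // Print every table and the columns ReadXml generated for it.
        foreach (DataTable t in ds.Tables)
        {
            Console.Write("Table {0}:", t.TableName);
            foreach (DataColumn c in t.Columns)
                Console.Write(" {0}", c.ColumnName);
            Console.WriteLine();
        }

        // Print any parent/child relation created from the element nesting.
        foreach (DataRelation r in ds.Relations)
            Console.WriteLine("Relation: {0} -> {1}",
                r.ParentTable.TableName, r.ChildTable.TableName);
    }
}

Running this against the nested-Foo version makes the auto-generated key columns and the hidden parent/child relation visible immediately.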
A:
These are my observations rather than a full answer:
My guess (without trying to reproduce it myself) is that a couple of things may be happening as the DataSet tries to 'flatten' a hierarchical structure into a relational data structure.
1) Thinking about the data from a relational database perspective: there is no obvious primary key field for identifying each of the Foo elements in the collection, so the DataSet has automatically used the ordinal position in the file as an auto-generated field called Foo-Id.
2) There are actually two elements called 'Foo' so that probably explains the generation of a strange name for the column 'Foo-Id-0' (it has auto-generated a unique name for the column - I guess you could think of this as a fault-tolerant behaviour in the DataSet).
| Issue reading XML file into C# DataSet | I was given an .xml file that I needed to read into my code as a DataSet (as background, the file was created by creating a DataSet in C# and calling dataSet.WriteXml(file, XmlWriteMode.IgnoreSchema), but this was done by someone else).
The .xml file was shaped like this:
<?xml version="1.0" standalone="yes"?>
<NewDataSet>
<Foo>
<Bar>abcd</Bar>
<Foo>efg</Foo>
</Foo>
<Foo>
<Bar>hijk</Bar>
<Foo>lmn</Foo>
</Foo>
</NewDataSet>
Using C# and .NET 2.0, I read the file in using the code below:
DataSet ds = new DataSet();
ds.ReadXml(file);
Using a breakpoint, after this line ds.Tables[0] looked like this (using dashes in place of underscores that I couldn't get to format properly):
Bar Foo-Id Foo-Id-0
abcd 0 null
null 1 0
hijk 2 null
null 3 2
I have found a workaround (I know there are many) and have been able to successfully read in the .xml, but what I would like is to understand why ds.ReadXml(file) behaved in this manner, so I will be able to avoid the issue in the future. Thanks.
| [
"This appears to be correct for your nested Foo tags:\n<NewDataSet> \n <Foo> <!-- Foo-Id: 0 -->\n <Bar>abcd</Bar>\n <Foo>efg</Foo> <!-- Foo-Id: 1, Parent-Id: 0 -->\n </Foo>\n <Foo> <!-- Foo-Id: 2 -->\n <Bar>hijk</Bar>\n <Foo>lmn</Foo> <!-- Foo-Id: 3, Parent-Id: 2 -->\n </Foo>\n</NewDataSet>\n\nSo this correctly becomes 4 records in your result, with a parent-child key of \"Foo-Id-0\"\nTry:\n<NewDataSet> \n <Rec> <!-- Rec-Id: 0 -->\n <Bar>abcd</Bar>\n <Foo>efg</Foo> \n </Rec>\n <Rec> <!-- Rec-Id: 1 -->\n <Bar>hijk</Bar>\n <Foo>lmn</Foo> \n </Rec>\n</NewDataSet>\n\nWhich should result in:\nBar Foo Rec-Id\nabcd efg 0\nhijk lmn 1\n\n",
"These are my observations rather than a full answer:\nMy guess (without trying to re-produce it myself) is that a couple of things may be happening as the DataSet tries to 'flatten' a hierarchical structure to a relational data structure.\n1) thinking about the data from a relational database perspective; there is no obvious primary key field for identifying each of the Foo elements in the collection so the DataSet has automatically used the ordinal position in the file as an auto-generated field called Foo-Id.\n2) There are actually two elements called 'Foo' so that probably explains the generation of a strange name for the column 'Foo-Id-0' (it has auto-generated a unique name for the column - I guess you could think of this as a fault-tolerant behaviour in the DataSet).\n"
] | [
4,
0
] | [] | [] | [
".net",
".net_2.0",
"c#",
"xml"
] | stackoverflow_0000051741_.net_.net_2.0_c#_xml.txt |
Q:
How to force my ASP.net 2.0 app to recompile
I have an ASP.NET 2.0 app and I have made some changes to the source files (.cs files). I uploaded the changes with the belief that it would auto-recompile. I also have the compiled DLL in MY_APP/bin. I checked it and noticed that it did not recompile. Please understand I am new to this.
A:
My #1 way to do this: add whitespace to the top of the web.config file, after the XML declaration tag.
It forces the node to re-cache and recompile. We even have a page deep in the admin called Flush.aspx that does it for us.
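For reference, here is a rough sketch of what such a Flush.aspx page might do programmatically (the page name, and using an AppDomain unload instead of touching web.config, are my assumptions, not part of the original trick):
// Flush.aspx.cs - hypothetical admin page that forces a recycle.
using System;
using System.Web;
using System.Web.UI;

public partial class Flush : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Tears down the AppDomain; ASP.NET restarts (and recompiles
        // anything stale) on the next request - same net effect as
        // editing web.config.
        HttpRuntime.UnloadAppDomain();
    }
}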
A:
I use a similar method to ChanChan, but instead of whitespace I put a comment in the web.config to indicate when/why the config was edited.
A:
It's always best to just actually run a build after making .cs changes.
Where are you running it? Is this for debugging or production?
A:
In VS menu you have Build -> Rebuild Solution
| How to force my ASP.net 2.0 app to recompile | I have an ASP.NET 2.0 app and I have made some changes to the source files (.cs files). I uploaded the changes with the belief that it would auto-recompile. I also have the compiled DLL in MY_APP/bin. I checked it and noticed that it did not recompile. Please understand I am new to this.
| [
"my #1 way to do this, add white space to the top of the web config file, after the xml declaration tag.\nIt forces the node to re-cache and recompile. We even have a page deep in the admin called Flush.aspx that does it for us.\n",
"I use a similar method to ChanChan, but instead of whitespace I put a comment in the web.config to indicate when/why the config was edited.\n",
"It's always best to just actually run a build after making .cs changes. \nWhere are you running it? Is this for debugging or production?\n",
"In VS menu you have Build -> Rebuild Solution\n"
] | [
10,
4,
1,
0
] | [] | [] | [
".net_2.0",
"asp.net"
] | stackoverflow_0000051870_.net_2.0_asp.net.txt |
Q:
Open one of a series of files using a batch file
I have up to 4 files based on this structure (note the prefixes are dates)
0830filename.txt
0907filename.txt
0914filename.txt
0921filename.txt
I want to open the most recent one (0921filename.txt). How can I do this in a batch file?
Thanks.
A:
This method uses the actual file modification date, to figure out which one is the latest file:
@echo off
for /F %%i in ('dir /B /O:-D *.txt') do (
call :open "%%i"
exit /B 0
)
:open
start "dummy" "%~1"
exit /B 0
This method, however, chooses the last file in alphabetic order (or the first one, in reverse-alphabetic order), so if the filenames are consistent - it will work:
@echo off
for /F %%i in ('dir /B *.txt^|sort /R') do (
call :open "%%i"
exit /B 0
)
:open
start "dummy" "%~1"
exit /B 0
You actually have to choose which method is better for you.
A:
Sorry for spamming this question, but I just really feel like posting The Real Answer.
If you want your BATCH script to parse and compare the dates in filenames, then you can use something like this:
@echo off
rem Enter the ending of the filenames.
rem Basically, you must specify everything that comes after the date.
set fn_end=filename.txt
rem Do not touch anything below this line.
set max_month=00
set max_day=00
for /F %%i in ('dir /B *%fn_end%') do call :check "%%i"
call :open %max_month% %max_day%
exit /B 0
:check
set name=%~1
set date=%name:~0,4%
set month=%date:~0,2%
set day=%date:~2,2%
if /I %month% GTR %max_month% (
set max_month=%month%
set max_day=%day%
) else if /I %month% EQU %max_month% (
set max_month=%month%
if /I %day% GTR %max_day% (
set max_day=%day%
)
)
exit /B 0
:open
set date=%~1
set month=%~2
set name=%date%%month%%fn_end%
start "dummy" "%name%"
exit /B 0
A:
One liner, using EXIT trick:
FOR /F %%I IN ('DIR *.TXT /B /O:-D') DO NOTEPAD %%I & EXIT
EDIT:
@pam: you're right, I was assuming that the files were in date order, but you can change the command to:
FOR /F %%I IN ('DIR *.TXT /B /O:-N') DO NOTEPAD %%I & EXIT
then you have the file list sorted by name in reverse order.
A:
Here you go... (hope no-one beat me to it...) (You'll need to save the file as lasttext.bat or something)
This will open up / run the most recent .txt file (the FOR loop keeps only the last line of the /od listing, i.e. the newest file)
dir *.txt /b /od > systext.bak
FOR /F %%i in (systext.bak) do set sysRunCommand=%%i
call %sysRunCommand%
del systext.bak
Probably XP only. BEHOLD The mighty power of DOS.
Although this takes the latest filename by date - NOT by filename..
If you want to get the latest filename, change /od to /on .
If you want to sort on something else, add a "sort" command to the second line.
| Open one of a series of files using a batch file | I have up to 4 files based on this structure (note the prefixes are dates)
0830filename.txt
0907filename.txt
0914filename.txt
0921filename.txt
I want to open the most recent one (0921filename.txt). How can I do this in a batch file?
Thanks.
| [
"This method uses the actual file modification date, to figure out which one is the latest file:\n@echo off\nfor /F %%i in ('dir /B /O:-D *.txt') do (\n call :open \"%%i\"\n exit /B 0\n)\n:open\n start \"dummy\" \"%~1\"\nexit /B 0\n\nThis method, however, chooses the last file in alphabetic order (or the first one, in reverse-alphabetic order), so if the filenames are consistent - it will work:\n@echo off\nfor /F %%i in ('dir /B *.txt^|sort /R') do (\n call :open \"%%i\"\n exit /B 0\n)\n:open\n start \"dummy\" \"%~1\"\nexit /B 0\n\nYou actually have to choose which method is better for you.\n",
"Sorry, for spamming this question, but I just really feel like posting The Real Answer.\nIf you want your BATCH script to parse and compare the dates in filenames, then you can use something like this:\n@echo off\n\nrem Enter the ending of the filenames.\nrem Basically, you must specify everything that comes after the date.\nset fn_end=filename.txt\n\nrem Do not touch anything bellow this line.\nset max_month=00\nset max_day=00\n\nfor /F %%i in ('dir /B *%fn_end%') do call :check \"%%i\"\ncall :open %max_month% %max_day%\nexit /B 0\n\n:check\n set name=%~1\n set date=%name:~0,4%\n set month=%date:~0,2%\n set day=%date:~2,2%\n if /I %month% GTR %max_month% (\n set max_month=%month%\n set max_day=%day%\n ) else if /I %month% EQU %max_month% (\n set max_month=%month%\n if /I %day% GTR %max_day% (\n set max_day=%day%\n )\n )\nexit /B 0\n\n:open\n set date=%~1\n set month=%~2\n set name=%date%%month%%fn_end%\n start \"dummy\" \"%name%\"\nexit /B 0\n\n",
"One liner, using EXIT trick:\nFOR /F %%I IN ('DIR *.TXT /B /O:-D') DO NOTEPAD %%I & EXIT\n\nEDIT:\n@pam: you're right, I was assuming that the files were in date order, but you can change the command to:\nFOR /F %%I IN ('DIR *.TXT /B /O:-N') DO NOTEPAD %%I & EXIT\n\nthen you have the file list sorted by name in reverse order.\n",
"Here you go... (hope no-one beat me to it...) (You'll need to save the file as lasttext.bat or something) \nThis will open up / run the oldest .txt file\ndir *.txt /b /od > systext.bak \nFOR /F %%i in (systext.bak) do set sysRunCommand=%%i \ncall %sysRunCommand%\ndel systext.bak /Y\n\nProbably XP only. BEHOLD The mighty power of DOS.\nAlthough this takes the latest filename by date - NOT by filename..\nIf you want to get the latest filename, change /od to /on .\nIf you want to sort on something else, add a \"sort\" command to the second line. \n"
] | [
9,
6,
4,
1
] | [
"Use regular expression to parse the relevant integer out and compare them.\n"
] | [
-1
] | [
"batch_file",
"cmd",
"command_line",
"dos"
] | stackoverflow_0000051837_batch_file_cmd_command_line_dos.txt |
Q:
Arbitrary Naming Convention (Business Objects)
Ok, do you do Business.Name or Business.BusinessName
SubCategory.ID or SubCategory.SubCategoryID
What about in your database?
Why?
I'm torn with both. Would love there to be a "right answer"
A:
The only "right" answer is to be consistent. Decide upfront which one you will be using in a project, and stick to it.
A:
The main drawback of using ID, Name etc is that you have to qualify them with the table name if you are writing an SQL join which overlaps two tables.
Despite that, I find it far more concise and readable to just use ID and Name - your code and tables will 'flow' much more easily past the eyes. Easier to type and less redundant. And typing SELECT Business.Name FROM ... in an SQL query is not really more troublesome than typing SELECT BusinessName FROM ...
In general, if I find myself repeating semantic information it alerts me to look for ways to eliminate it or at least recognise why it repeats. This could be on the small scale (attribute names) or the large scale (behaviour patterns or common class structures).
A:
For very common properties like "Name" and "ID", the convention I have used is to not put the entity name in the field. For more unusual properties, I do put the entity name.
This is a naming convention decision, but I have not regretted projects where this is the convention, if you put the name of the entity for each ID, it ends up seeming to be too verbose.
A:
We do ID on anything that's the primary key. Saying SubCategory.SubCategoryID seems redundant.
A:
I may not be right, but I think Id is a tastier dish.
thing.id
because if you are going to write any reflective stuff that deals with your objects and needs the primary key, it's way easier to know it everywhere than trying to determine it with a formula.
As for the other, that's total preference and I don't see any real implications other than time wasted typing the extra characters, and it's .NET so no one actually types namespaces anyway.
| Arbitrary Naming Convention (Business Objects) | Ok, do you do Business.Name or Business.BusinessName
SubCategory.ID or SubCategory.SubCategoryID
What about in your database?
Why?
I'm torn with both. Would love there to be a "right answer"
| [
"The only \"right\" answer is to be consistent. Decide upfront which one you will be using in a project, and stick to it.\n",
"The main drawback of using ID, Name etc is that you have to qualify them with the table name if you are writing an SQL join which overlaps two tables.\nDespite that, I find it far more concise and readable to just use ID and Name - your code and tables will 'flow' much more easily past the eyes. Easier to type and less redundant. And typing SELECT Business.Name FROM ... in an SQL query is not really more troublesome than typing SELECT BusinessName FROM ...\nIn general, if I find myself repeating semantic information it alerts me to look for ways to eliminate it or at least recognise why it repeats. This could be on the small scale (attribute names) or the large scale (behaviour patterns or common class structures).\n",
"For very common properties like \"Name\" and \"ID\", the convention I have used is to not put the entity name in the field. For more unusual properties, I do put the entity name.\nThis is a naming convention decision, but I have not regretted projects where this is the convention, if you put the name of the entity for each ID, it ends up seeming to be too verbose.\n",
"we do ID on anything that's the primary key. Saying SubCategory.SubCategoryID seems redundant,\n",
"I may not be right, but I think Id is a tastier dish.\nthing.id\nbecause if you are going to write any reflective stuff that deals with your objects and needs primary key, its way easier to know it everywhere, then trying to determine it with a formula.\nAs for the other, thats total preference and I don't see any real implications other than time wasted typing the other characters, and its .net so no one actually types namespaces anyway.\n"
] | [
5,
2,
0,
0,
0
] | [] | [] | [
"c#",
"naming",
"object",
"oop"
] | stackoverflow_0000044485_c#_naming_object_oop.txt |
Q:
What is the best Visual Studio Plugin for Printing Code
Some of the features I think it must include are:
Print Entire Solution
Ability to print line numbers
Proper choice of coding font and size to improve readability
Nice Header Information
Ability to print regions collapsed
Couple feature additions:
Automatically insert page breaks
after methods/classes
Keep long lines readable (nearly all
current implementations are broken)
Note: There are many reasons to need to print code... One very good one is escrow.
A:
I use PrettyCode.Print for .NET. It does everything on your list, and more. (I use it for printing code excerpts for copyright registration paperwork, which is similar to your escrow case.)
It is a little slow to open a really big solution, but not unbearably so, and the output quality is excellent.
A:
Try StarPrint's VSNETcodePrint
A:
Couple feature additions:
Automatically insert page breaks after methods/classes
Keep long lines readable (nearly all current implementations are broken)
| What is the best Visual Studio Plugin for Printing Code | Some of the features I think it must include are:
Print Entire Solution
Ability to print line numbers
Proper choice of coding font and size to improve readability
Nice Header Information
Ability to print regions collapsed
Couple feature additions:
Automatically insert page breaks
after methods/classes
Keep long lines readable (nearly all
current implementations are broken)
Note: There are many reasons to need to print code... One very good one is escrow.
| [
"I use PrettyCode.Print for .NET. It does everything on your list, and more. (I use it for printing code excerpts for copyright registration paperwork, which is similar to your escrow case.)\nIt is a little slow to open a really big solution, but not unbearably so, and the output quality is excellent.\n",
"Try StarPrint's VSNETcodePrint\n",
"Couple feature additions:\n\nAutomatically insert page breaks after methods/classes \nKeep long lines readable (nearly all current implementations are broken)\n\n"
] | [
12,
4,
1
] | [] | [] | [
"visual_studio",
"visual_studio_extensions"
] | stackoverflow_0000051871_visual_studio_visual_studio_extensions.txt |
Q:
Operating System Overheads while profiling?
I am profiling C code in Microsoft VS 2005 on an Intel Core 2 Duo platform.
I measure the time (secs:millisecs) consumed by my function. But I have some doubts about the accuracy of this measurement, as the operating system will not run my application continuously, but instead schedules other apps/services in between the execution of my code. (Although I have no major applications running while I do the profile run, Windows still has a lot of code of its own which it will run by preempting my app.) Because of all this I believe the profiling number (time taken by my app to run) is not accurate.
So my question is: is there any way to find out the operating system overhead, scheduling overhead, on a typical Windows system (I run Windows XP)? E.g. if my application says it ran for 60 milliseconds, out of those 60 msec, how much time was really used by my app, and how much time was it sitting idle, due to being pre-empted by some other task scheduled by the OS?
or
At least, is there any ballpark number for such OS overhead, based on your experience doing something similar?
A:
@Kogus: Even if I run outside the debugger (a standalone app from a command prompt), it could still be preempted by the OS, causing an incorrect measurement of the time consumed by my app.
Isn't it?
-AD
A:
I think you are going to have some problems with the granularity. See similar questions GetLocalTime() API time resolution and Is gettimeofday() guaranteed to be of microsecond resolution?
Also, you may want to take a look at the Windows Resource Kits Tools which include timeit.exe (similar to time on unix/linux) to give you elapsed and process times.
A:
Suggestion
Try running on multi-CPU systems.
A:
1 - Put some debug logging in your code (include timestamps of course), and run it outside of the debugger
2 - Run again in the debugger
3 - Repeat many times, to get statistically valid data.
4 - Compare.
If there is a significant difference in the average execution time of the standalone vs. the debugger, then you are right to be suspicious of the OS (or the overhead of the debugger hooks themselves...). If no difference, then don't sweat it.
Edit0: Obviously the debug messages have some overhead of their own. You may want to leave those in the code even when you are running from the debugger. That way, both the standalone and the debugger are running the very same code.
Edit1: I misunderstood the question. I thought your concern was that --while debugging--, the OS might interrupt your app more frequently than in a normal mode of execution. If you want to know how much time your app actually spent working, just compare the time taken to the "CPU Time" in the Task Manager.
Edit2: Compare the time returned by GetProcessTimes for your process to the actual execution time. The difference is the time spent by the CPU on somebody else.
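A rough sketch of that comparison in C (do_work stands in for the function under test; error handling omitted, and GetTickCount is only accurate to roughly 10-16 ms, so measure long runs):
#include <windows.h>
#include <stdio.h>

static ULONGLONG filetime_to_ms(FILETIME ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 10000;      /* 100-ns units -> milliseconds */
}

static void do_work(void)           /* placeholder for the profiled code */
{
    volatile long i;
    for (i = 0; i < 100000000L; ++i)
        ;
}

int main(void)
{
    FILETIME ftCreate, ftExit, ftK0, ftU0, ftK1, ftU1;
    DWORD wall0, wall1;
    ULONGLONG cpu0, cpu1;

    GetProcessTimes(GetCurrentProcess(), &ftCreate, &ftExit, &ftK0, &ftU0);
    wall0 = GetTickCount();

    do_work();

    wall1 = GetTickCount();
    GetProcessTimes(GetCurrentProcess(), &ftCreate, &ftExit, &ftK1, &ftU1);

    cpu0 = filetime_to_ms(ftK0) + filetime_to_ms(ftU0);
    cpu1 = filetime_to_ms(ftK1) + filetime_to_ms(ftU1);

    /* wall-clock minus CPU time ~ time the OS spent on everything else */
    printf("wall: %lu ms, cpu: %lu ms, elsewhere: ~%lu ms\n",
           (unsigned long)(wall1 - wall0),
           (unsigned long)(cpu1 - cpu0),
           (unsigned long)((wall1 - wall0) - (cpu1 - cpu0)));
    return 0;
}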
A:
The best way of doing this is a dedicated profiling tool. There are lots out there. I haven't used one for C for a few years, someone else will hopefully be able to give better advice. As you are using Visual Studio 2005 this might be a good place to start:
AQ, but I've never used it.
| Operating System Overheads while profiling? | I am profiling C code in Microsoft VS 2005 on an Intel Core 2 Duo platform.
I measure the time (secs:millisecs) consumed by my function. But I have some doubts about the accuracy of this measurement, as the operating system will not run my application continuously, but instead schedules other apps/services in between the execution of my code. (Although I have no major applications running while I do the profile run, Windows still has a lot of code of its own which it will run by preempting my app.) Because of all this I believe the profiling number (time taken by my app to run) is not accurate.
So my question is: is there any way to find out the operating system overhead, scheduling overhead, on a typical Windows system (I run Windows XP)? E.g. if my application says it ran for 60 milliseconds, out of those 60 msec, how much time was really used by my app, and how much time was it sitting idle, due to being pre-empted by some other task scheduled by the OS?
or
At least, is there any ballpark number for such OS overhead, based on your experience doing something similar?
| [
"@Kogus: Even if i run outside debugger(standalone app. from a command prompt) it still could be preempted by OS and cause a incorrect measurement of the time consumed by my app.\nIs'nt it?\n-AD\n",
"I think you are going to have some problems with the granularity. See similar questions GetLocalTime() API time resolution and Is gettimeofday() guaranteed to be of microsecond resolution?\nAlso, you may want to take a look at the Windows Resource Kits Tools which include timeit.exe (similar to time on unix/linux) to give you elapsed and process times.\n",
"Suggestion\nTry run on multi CPU systems. \n",
"1 - Put some debug logging in your code (include timestamps of course), and run it outside of the debugger\n2 - Run again in the debugger\n3 - Repeat many times, to get statistically valid data.\n4 - Compare.\nIf there is a significant difference in the average execution time of the standalone vs. the debugger, then you are right to be suspicious of the OS (or the overhead of the debugger hooks themselves...). If no difference, then don't sweat it.\nEdit0: Obviously the debug messages have some overhead of their own. You may want to leave those in the code even when you are running from the debugger. That way, both the standalone and the debugger are running the very same code.\nEdit1: I misunderstood the question. I thought your concern was that --while debugging--, the OS might interrupt your app more frequently than in a normal mode of execution. If you want to know how much time your app actually spent working, just compare the time taken to the \"CPU Time\" in the Task Manager.\nEdit2: Compare the time returned by GetProcessTimes for your process to the actual execution time. The difference is the time spent by the CPU on somebody else.\n",
"The best way of doing this is a dedicated profiling tool. There are lots out there. I haven't used one for C for a few years, someone else will hopefully be able to give better advice. As you are using Visual Studio 2005 this might be a good place to start:\nAQ, but I've never used it.\n"
] | [
1,
1,
0,
0,
0
] | [] | [] | [
"profile"
] | stackoverflow_0000051887_profile.txt |
Q:
Are there any languages that implement generics _well_?
I liked the discussion at Differences in Generics, and was wondering whether there were any languages that used this feature particularly well.
I really dislike Java's List<? extends Foo> for a List of things that are Liskov-substitutable for Foo. Why can't List<Foo> cover that?
And honestly, Comparable<? super Bar>?
I also can't remember for the life of me why you should never return an Array of generics:
public T[] getAll<T>() { ... }
I never liked templates in C++, but that was mostly because none of the compilers could ever spit out a remotely meaningful error message for them. One time I actually did a make realclean && make 17 times to get something to compile; I never did figure out why the 17th time was the charm.
So, who actually likes using generics in their pet language?
A:
Haskell implements type-constructor parameterisation (generics, or parametric polymorphism) quite well. So does Scala (although it needs a bit of hand-holding sometimes).
Both of these languages have higher-kinded types (a.k.a. abstract type constructors, or type-constructor polymorphism, or higher-order polymorphism).
See here: Generics of a Higher Kind
A:
Heck, English doesn't even implement generics well. :)
My bias is for C#. Mainly because that is what I am currently using and I have used them to good effect.
A:
I think the generics in Java are actually pretty good. The reason why List<Foo> is different than List<? extends Foo> is that when Foo is a subtype of Bar, List<Foo> is not a subtype of List<Bar>. If you could treat a List<Foo> object as a List<Bar>, then you could add Bar objects to it, which could break things. Any reasonable type system will require this. Java lets you get away with treating Foo[] as a subtype of Bar[], but this forces runtime checks, reducing performance. When you return such an array, this makes it difficult for the compiler to know whether to do a runtime check.
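To make that concrete, here is a small Java sketch (Foo/Bar are throwaway names) of exactly what the type system is preventing:
import java.util.ArrayList;
import java.util.List;

class Bar {}
class Foo extends Bar {}

public class VarianceDemo {
    public static void main(String[] args) {
        // Arrays: covariance allowed at compile time, checked at run time.
        Bar[] bars = new Foo[1];
        // bars[0] = new Bar();           // compiles, but throws
        //                                // ArrayStoreException at run time

        // Generics: the same mistake is rejected at compile time instead.
        // List<Bar> list = new ArrayList<Foo>();   // does not compile

        List<? extends Bar> ro = new ArrayList<Foo>();  // fine, but...
        // ro.add(new Foo());             // ...adding does not compile
    }
}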
I have never needed to use the lower bounds (List<? super Foo>), but I would imagine they might be useful for returning generic values. See covariance and contravariance.
On the whole though, I definitely agree with the complaints about overly verbose syntax and confusing error messages. Languages with type inference like OCaml and Haskell will probably make this easier on you, although their error messages can be confusing as well.
A:
I'll add OCaml to the list, which has really generic generics. I agree that Haskell's type classes are really well done, but it's a bit different in that Haskell has no OO semantics, but OCaml does support OO.
A:
I use .Net (VB.Net), and haven't had any problems using generics. It's mostly painless.
Dim Cars as List(Of Car)
Dim Car as Car
For Each Car in Cars
...
Next
Never had any problems using the generic collections, although I haven't gone so far as to design any objects that use generics on my own.
A:
I think that C# and VB.NET do a good job with generics.
| Are there any languages that implement generics _well_? | I liked the discussion at Differences in Generics, and was wondering whether there were any languages that used this feature particularly well.
I really dislike Java's List<? extends Foo> for a List of things that are Liskov-substitutable for Foo. Why can't List<Foo> cover that?
And honestly, Comparable<? super Bar>?
I also can't remember for the life of me why you should never return an Array of generics:
public T[] getAll<T>() { ... }
I never liked templates in C++, but that was mostly because none of the compilers could ever spit out a remotely meaningful error message for them. One time I actually did a make realclean && make 17 times to get something to compile; I never did figure out why the 17th time was the charm.
So, who actually likes using generics in their pet language?
| [
"Haskell implements type-constructor parameterisation (generics, or parametric polymorphism) quite well. So does Scala (although it needs a bit of hand-holding sometimes).\nBoth of these languages have higher-kinded types (a.k.a. abstract type constructors, or type-constructor polymorphism, or higher-order polymorphism).\nSee here: Generics of a Higher Kind\n",
"Heck, English doesn't even implement generics well. :)\nMy bias is for C#. Mainly because that is what I am currently using and I have used them to good effect.\n",
"I think the generics in Java are actually pretty good. The reason why List<Foo> is different than List<? extends Foo> is that when Foo is a subtype of Bar, List<Foo> is not a subtype of List<Bar>. If you could treat a List<Foo> object as a List<Bar>, then you could add Bar objects to it, which could break things. Any reasonable type system will require this. Java lets you get away with treating Foo[] as a subtype of Bar[], but this forces runtime checks, reducing performance. When you return such an array, this makes it difficult for the compiler to know whether to do a runtime check.\nI have never needed to use the lower bounds (List<? super Foo>), but I would imagine they might be useful for returning generic values. See covariance and contravariance. \nOn the whole though, I definitely agree with the complaints about overly verbose syntax and confusing error messages. Languages with type inference like OCaml and Haskell will probably make this easier on you, although their error messages can be confusing as well.\n",
"I'll add OCaml to the list, which has really generic generics. I agree that Haskell's type classes are really well done, but it's a bit different in that Haskell has no OO semantics, but OCaml does support OO.\n",
"I use .Net (VB.Net), and haven't had any problems using generics. It's mostly painless.\nDim Cars as List(Of Car)\nDim Car as Car\n\nFor Each Car in Cars\n...\nNext\n\nNever had any problems using the generic collections, although I haven't gone so far as to design any objects that use generics on my own.\n",
"I think that C# and VB.NET do a good job with generics.\n"
] | [
14,
7,
7,
3,
1,
0
] | [] | [] | [
"generics",
"language_agnostic"
] | stackoverflow_0000050983_generics_language_agnostic.txt |
Q:
Revoke shared folders in windows
Over the last few months/years, I have shared a folder or two with numerous people on my domain. How do I easily revoke those shares to keep access to my system nice and tidy?
A:
Using Computer Management (an MMC snap-in; see Control Panel > Administrative Tools) you can see a list of all folders that are shared. You could delete the shares or change the permissions on the share to only allow access for certain people or groups.
A:
You can also achieve this via the command line:
C:\>net share share-name /d
A:
On Windows XP, go to:
Administrative Tools > Computer Management > System Tools > Shared Folders > Shares
This page lists all shares and lets you remove them easily, in one place.
| Revoke shared folders in windows | Over the last few months/years, I have shared a folder or two with numerous people on my domain. How do I easily revoke those shares to keep access to my system nice and tidy?
| [
"Using computer management (an MMC snap-in. See Control Panel Administrative tools) you can see a list of all folders that are shared. You could delete the shares or change the permissions on the share to only allow access for certain people or groups.\n",
"You can also achieve this via the command line:\n\nC:>net share share-name /d\n\n",
"On Windows XP, go to:\nAdministrative Tools > Computer Management > System Tools > Shared Folders > Shares\n\nThis page lists all shares and lets you remove them easily, in one place.\n"
] | [
3,
2,
1
] | [] | [] | [
"directory",
"networking",
"shared",
"smb",
"windows"
] | stackoverflow_0000026230_directory_networking_shared_smb_windows.txt |
Q:
What to do with queries which don't have a representation in a domain model?
This is not specific to any language, it's just about best practices. I am using JPA/Hibernate (but it could be any other ORM solution) and I would like to know how you guys deal with this situation:
Let's suppose that you have a query returning something that is not represented by any of your domain classes.
Do you create a specific class to represent that specific query?
Do you return the query in some other kind of object (array, map...)
Some other solutions?
I would like to know about your experiences and best practices.
P.S.
Actually I am creating specific objects for specific queries.
A:
We have a situation that sounds similar to yours.
We use separate objects for reporting data that spans several domain objects. Our convention is that these will be backed by a view in the database, so we have come to call them view objects. We generally use them for summarising complex data into a flat format.
A:
I typically write a function that performs a query using SQL and then puts the results into either a list or dictionary (in Java, I'd use either an ArrayList or a HashMap).
If I found myself doing this a lot, I'd probably create a new file to hold all of these queries. Otherwise I'd just make them functions in whatever file they were needed/used.
Since we're talking Java specifically, I would certainly not create a new class in a separate file. However, for queries needed in only one class, you could create a private static inner class with only the function(s) needed to generate the query(s) needed by that class.
A:
The idea of wrapping up the functionality in some sort of manager is always nice. It allows for better testing, and therefore management of schema changes.
It also allows for easier reuse in the application. NEVER just put the SQL in directly! For Hibernate I have found HQL great for just this, in particular if you can use named queries. Also be careful when adding filter values etc.: don't use "string append", use parameters (can we say SQL injection?). Even if the SQL is dynamic in terms of the join or where criteria, having a function in some sort of manager is always best.
A:
@DrPizza
I will be more specific. We have three tables in a database
USER
PROJECT
TASK
USER to TASK 1:n
PROJECT to TASK 1:n
I have a query that returns a list of all projects but also shows some grouped information (all tasks, open tasks, closed tasks). When returned, the query looks like this:
PROJECTID: 1
NAME: New Web Site
ALLTASK: 10
OPENTASK: 7
CLOSEDTASK: 3
I don't have any domain class that could represent this information and I don't want to create specific methods in the Project class (like getAllTasks, getOpenTasks) because each of these methods would trigger a new query.
So the question is:
Do I create a new class (something like ProjectTasksQuery) just to hold that information?
Do I return the information within an array or map?
Something else?
A:
You might feel better after reading about Data Transfer Objects. Some people plain don't like them, but if it feels like a good fit to you, it probably is.
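To make that concrete for the project/task example above, a hypothetical DTO could be as simple as this (all names are made up for illustration):
public class ProjectTasksQuery {
    private final long projectId;
    private final String name;
    private final int allTasks;
    private final int openTasks;
    private final int closedTasks;

    public ProjectTasksQuery(long projectId, String name,
                             int allTasks, int openTasks, int closedTasks) {
        this.projectId = projectId;
        this.name = name;
        this.allTasks = allTasks;
        this.openTasks = openTasks;
        this.closedTasks = closedTasks;
    }

    public long getProjectId()  { return projectId; }
    public String getName()     { return name; }
    public int getAllTasks()    { return allTasks; }
    public int getOpenTasks()   { return openTasks; }
    public int getClosedTasks() { return closedTasks; }
}

Since you are on Hibernate, an HQL constructor expression (select new ProjectTasksQuery(...) from Project p join p.tasks t group by ...) can populate such a class directly in one query, without mapping it as an entity.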
| What to do with queries which don't have a representation in a domain model? | This is not specific to any language, it's just about best practices. I am using JPA/Hibernate (but it could be any other ORM solution) and I would like to know how you guys deal with this situation:
Let's suppose that you have a query returning something that is not represented by any of your domain classes.
Do you create a specific class to represent that specific query?
Do you return the query in some other kind of object (array, map...)
Some other solutions?
I would like to know about your experiences and best practices.
P.S.
Actually I am creating specific objects for specific queries.
| [
"We have a situation that sounds similar to yours.\nWe use separate objects for reporting data that spans several domain objects. Our convention is that these will be backed by a view in the database, so we have come to call them view objects. We generally use them for summarising complex data into a flat format.\n",
"I typically write a function that performs a query using SQL and then puts the results into either a list or dictionary (in Java, I'd use either an ArrayList or a HashMap).\nIf I found myself doing this a lot, I'd probably create a new file to hold all of these queries. Otherwise I'd just make them functions in whatever file they were needed/used.\nSince we're talking Java specifically, I would certainly not create a new class in a separate file. However, for queries needed in only one class, you could create a private static inner class with only the function(s) needed to generate the query(s) needed by that class.\n",
"The idea of wrapping that up the functionality in some sort of manager is always nice. It allows for better testing, and management therefore of schema changes.\nAlso allows for easier reuse in the application. NEVER just put the sql in directly!!!. For Hibernate I have found HQL great for just this. In particular , if you can use Named queries. Also be careful of adding an filter values etc use \"string append\", use parameters (can we say SQL injection ?). Even if the SQL is dynamic in terms of the join or where criteria, have a function in some sort of manager is always best.\n",
"@DrPizza\nI will be more specific. We have three tables in a database\nUSER\nPROJECT\nTASK\nUSER to TASK 1:n\nPROJECT to TASK 1:n\n\nI have a query that returns a list of all projects but showing also some grouped information (all tasks, open tasks, closed tasks). When returned, the query looks like this\nPROJECTID: 1\nNAME: New Web Site\nALLTASK: 10\nOPENTASK: 7\nCLOSEDTASK: 3\n\nI don´t have any domain class that could represent this information and I don´t want to create specific methods in Project class (like getAllTasks, getOpenTasks) because each of these methods would trigger a new query.\nSo the question is:\nI create a new class (somenthing like ProjectTasksQuery) just to hold that information?\nI return information within array or map?\nSomething else?\n",
"You might feel better after reading about Data Transfer Objects. Some people plain don't like them, but if it feels like a good fit to you, it probably is.\n"
] | [
1,
0,
0,
0,
0
] | [] | [] | [
"orm"
] | stackoverflow_0000051653_orm.txt |
Q:
SpecialCells in VSTO
I'm trying to use the SpecialCells method in a VSTO project using C# against the 3.5 framework and Excel 2007.
Here's my code:
Excel.Worksheet myWs = (Excel.Worksheet)ModelWb.Worksheets[1];
Range myRange = myWs.get_Range("A7", "A800");
//Range rAccounts = myRange.SpecialCells(XlCellType.xlCellTypeConstants, XlSpecialCellsValue.xlTextValues);
Range rAccounts = myWs.Cells.SpecialCells(XlCellType.xlCellTypeConstants, XlSpecialCellsValue.xlTextValues);
When I run this, it throws an exception...
System.Exception._COMPlusExceptionCode with a value of -532459699
Note that I get the same exception if I switch (uncomment one and comment the other) the above Range rAccounts line.
A:
I figured it out...
the worksheet was protected!
myWs.Unprotect(Properties.Settings.Default.PasswordSheet);
fixes it...for those playing along at home...don't forget to protect the sheet when you're done.
myWs.Protect(Properties.Settings.Default.PasswordSheet, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);
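A slightly more defensive variant (my assumption: you want the sheet re-protected even if SpecialCells throws, which it does as a COMException when no matching cells exist):
myWs.Unprotect(Properties.Settings.Default.PasswordSheet);
try
{
    Range rAccounts = myWs.Cells.SpecialCells(
        XlCellType.xlCellTypeConstants, XlSpecialCellsValue.xlTextValues);
    // ... use rAccounts ...
}
finally
{
    // Re-protect no matter what happened above.
    myWs.Protect(Properties.Settings.Default.PasswordSheet,
        Type.Missing, Type.Missing, Type.Missing, Type.Missing,
        Type.Missing, Type.Missing, Type.Missing, Type.Missing,
        Type.Missing, Type.Missing, Type.Missing, Type.Missing,
        Type.Missing, Type.Missing, Type.Missing);
}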
| SpecialCells in VSTO | I'm trying to use the SpecialCells method in a VSTO project using C# against the 3.5 framework and Excel 2007.
Here's my code:
Excel.Worksheet myWs = (Excel.Worksheet)ModelWb.Worksheets[1];
Range myRange = myWs.get_Range("A7", "A800");
//Range rAccounts = myRange.SpecialCells(XlCellType.xlCellTypeConstants, XlSpecialCellsValue.xlTextValues);
Range rAccounts = myWs.Cells.SpecialCells(XlCellType.xlCellTypeConstants, XlSpecialCellsValue.xlTextValues);
When I run this, it throws an exception...
System.Exception._COMPlusExceptionCode with a value of -532459699
Note that I get the same exception if I switch (uncomment one and comment the other) the above Range rAccounts line.
| [
"I figured it out...\nthe worksheet was protected!\nmyWs.Unprotect(Properties.Settings.Default.PasswordSheet);\n\nfixes it...for those playing along at home...don't forget to protect the sheet when you're done.\nmyWs.Protect(Properties.Settings.Default.PasswordSheet, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);\n\n"
] | [
0
] | [] | [] | [
"vsto"
] | stackoverflow_0000051754_vsto.txt |
Q:
Determine how much memory a class uses?
I am trying to find a way to determine at run-time how much memory a given class is using in .NET. Using Marshal.SizeOf() is out, as it only works on value types. Is there a way to check exactly how much memory a class uses?
A:
I've only recently started looking into this type of thing, but I have found that memory profilers can give quite detailed information regarding instances of objects within your application.
Here are a couple that are worth trying:
ANTS Profiler
.NET Memory Profiler
A:
I agree that a memory profiler is the easiest way to get the information you are looking for. In addition to the two previously mentioned, I recommend JetBrains dotTrace, which is both a performance profiler and a memory profiler.
If you want to do it yourself, and are willing to get pretty deep into the guts of the CLR, you can use the .NET Profiling API, which is an unmanaged API that (as Microsoft says): "enables a profiler to monitor a program's execution by the common language runtime (CLR)." It's not exactly intended for casual use, but it does have an enormous amount of functionality.
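If a full profiler is overkill, a rough estimate can also be made in plain code. The sketch below is only an approximation (it measures heap growth via GC.GetTotalMemory, not the exact object layout), and the names are illustrative rather than from any library:
using System;

class MemoryEstimator
{
    // Approximates the per-instance heap cost of T by comparing total
    // managed memory before and after allocating many instances.
    static long EstimateInstanceSize<T>(Func<T> factory, int count)
    {
        T[] keepAlive = new T[count];          // keeps the instances reachable
        long before = GC.GetTotalMemory(true); // true = force a collection first
        for (int i = 0; i < count; i++)
            keepAlive[i] = factory();
        long after = GC.GetTotalMemory(true);
        GC.KeepAlive(keepAlive);
        return (after - before) / count;
    }

    static void Main()
    {
        long size = EstimateInstanceSize(() => new System.Text.StringBuilder(), 10000);
        Console.WriteLine("~{0} bytes per instance", size);
    }
}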
A:
just link to related SO question:
sizeof() equivalent for reference types?
| Determine how much memory a class uses? | I am trying to find a way to determine at run-time how much memory a given class is using in .NET. Using Marshal.SizeOf() is out, as it only works on value types. Is there a way to check exactly how much memory a class uses?
| [
"I've only recently started looking into this type of thing, but i have found that memory profilers can give quite detailed information regarding instances of objects within your application.\nHere are a couple that are worth trying:\n\nANTS Profiler\n.NET Memory Profiler\n\n",
"I agree that a memory profiler is the easiest way to get the information you are looking for. In addition to the two previously mentioned, I recommend JetBrains dotTrace, which is both a performance profiler and a memory profiler.\nIf you want to do it yourself, and are willing to get pretty deep into the guts of the CLR, you can use the .NET Profiling API, which is an unmanaged API that (as Microsoft says): \"enables a profiler to monitor a program's execution by the common language runtime (CLR).\" It's not exactly intended for casual use, but it does have an enormous amount of functionality.\n",
"just link to related SO question:\n\nsizeof() equivalent for reference types?\n\n"
] | [
5,
1,
1
] | [] | [] | [
".net",
"memory"
] | stackoverflow_0000051540_.net_memory.txt |
Q:
Adaptive Database
Are there any rapid database prototyping tools that don't require me to declare a database schema, but rather create it based on the way I'm using my entities?
For example, assuming an empty database (pseudo code):
user1 = new User() // Creates the user table with a single id column
user1.firstName = "Allain" // alters the table to have a firstName column as varchar(255)
user2 = new User() // Reuses the table
user2.firstName = "Bob"
user2.lastName = "Loblaw" // Alters the table to have a last name column
There are logical assumptions that can be made when dynamically creating the schema, and you could always override its choices by using your DB tools to tweak it later.
Also, you could generate your schema by unit testing it this way.
And obviously this is only for prototyping.
Is there anything like this out there?
A:
Google's Application Engine works like this. When you download the toolkit you get a local copy of the database engine for testing.
A:
Grails uses Hibernate to persist domain objects and produces behavior similar to what you describe. To alter the schema you simply modify the domain, in this simple case the file is named User.groovy.
class User {
String userName
String firstName
String lastName
Date dateCreated
Date lastUpdated
static constraints = {
userName(blank: false, unique: true)
firstName(blank: false)
lastName(blank: false)
}
String toString() {"$lastName, $firstName"}
}
Saving the file alters the schema automatically. Likewise, if you are using scaffolding it is updated. The prototype process becomes run the application, view the page in your browser, modify the domain, refresh the browser, and see the changes.
A:
I agree with the NHibernate approach and auto-database-generation. But, if you want to avoid writing a configuration file, and stay close to the code, use Castle's ActiveRecord. You declare the 'schema' directly on the class with via attributes.
[ActiveRecord]
public class User : ActiveRecordBase<User>
{
[PrimaryKey]
public Int32 UserId { get; set; }
[Property]
public String FirstName { get; set; }
}
There are a variety of constraints you can apply (validation, bounds, etc) and you can declare relationships between different data model classes. Most of these options are parameters added to the attributes. It's rather simple.
So, you're working with code. Declaring usage in code. And when you're done, let ActiveRecord create the database.
ActiveRecordStarter.Initialize();
ActiveRecordStarter.CreateSchema();
A:
Maybe not exactly responding to your general question, but if you used (N)Hibernate then you can automatically generate the database schema from your hbm mapping files.
It's not done directly from your code as you seem to be wanting, but Hibernate schema generation seems to work well for us.
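For reference, the export itself is only a couple of lines through NHibernate's hbm2ddl tool — a minimal sketch, assuming the mappings are already configured via hibernate.cfg.xml:
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

class SchemaGen
{
    static void Main()
    {
        Configuration cfg = new Configuration().Configure();
        // first argument echoes the generated DDL, second executes it
        new SchemaExport(cfg).Create(true, true);
    }
}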
A:
Do you want the schema, but have it generated, or do you actually want NO schema?
For the former I'd go with NHibernate as @tom-carter said. Have it generate your schema for you, and you are all good (at least until you roll your app out, then look at something like Tarantino and RedGate SQL Diff or whatever it's called to generate update scripts)
If you want the latter.... google app engine does this, as I've discovered this afternoon, and it's very nice. If you want to stick with code under your control, I'd suggest looking at CouchDB, tho it's a bit of upfront work getting it set up. But once you have it, it's a totally, 100% schema-free database. Well, you have an ID and a Version, but that's it - the rest is up to you. http://incubator.apache.org/couchdb/
But by the sounds of it (N)Hibernate would suit best, but I could be wrong.
A:
You could use an object database.
| Adaptive Database | Are there any rapid Database protoyping tools that don't require me to declare a database schema, but rather create it based on the way I'm using my entities.
For example, assuming an empty database (pseudo code):
user1 = new User() // Creates the user table with a single id column
user1.firstName = "Allain" // alters the table to have a firstName column as varchar(255)
user2 = new User() // Reuses the table
user2.firstName = "Bob"
user2.lastName = "Loblaw" // Alters the table to have a last name column
There are logical assumptions that can be made when dynamically creating the schema, and you could always override its choices by using your DB tools to tweak it later.
Also, you could generate your schema by unit testing it this way.
And obviously this is only for prototyping.
Is there anything like this out there?
| [
"Google's Application Engine works like this. When you download the toolkit you get a local copy of the database engine for testing.\n",
"Grails uses Hibernate to persist domain objects and produces behavior similar to what you describe. To alter the schema you simply modify the domain, in this simple case the file is named User.groovy.\nclass User {\n\n String userName\n String firstName\n String lastName\n Date dateCreated\n Date lastUpdated\n\n static constraints = {\n userName(blank: false, unique: true)\n firstName(blank: false)\n lastName(blank: false)\n }\n\n String toString() {\"$lastName, $firstName\"}\n\n}\n\nSaving the file alters the schema automatically. Likewise, if you are using scaffolding it is updated. The prototype process becomes run the application, view the page in your browser, modify the domain, refresh the browser, and see the changes.\n",
"I agree with the NHibernate approach and auto-database-generation. But, if you want to avoid writing a configuration file, and stay close to the code, use Castle's ActiveRecord. You declare the 'schema' directly on the class with via attributes.\n[ActiveRecord]\npublic class User : ActiveRecordBase<User>\n{\n [PrimaryKey]\n public Int32 UserId { get; set; }\n\n [Property]\n public String FirstName { get; set; }\n}\n\nThere are a variety of constraints you can apply (validation, bounds, etc) and you can declare relationships between different data model classes. Most of these options are parameters added to the attributes. It's rather simple.\nSo, you're working with code. Declaring usage in code. And when you're done, let ActiveRecord create the database.\nActiveRecordStarter.Initialize();\nActiveRecordStarter.CreateSchema();\n\n",
"May be not exactly responding to your general question, but if you used (N)Hibernate then you can automatically generate the database schema from your hbm mapping files.\nIts not done directly from your code as you seem to be wanting but Hibernate Schema generation seems to work well for us \n",
"Do you want the schema, but have it generated, or do you actually want NO schema?\nFor the former I'd go with nhibernate as @tom-carter said. Have it generate your schema for you, and you are all good (atleast until you roll your app out, then look at something like Tarantino and RedGate SQL Diff or whatever it's called to generate update scripts)\nIf you want the latter.... google app engine does this, as I've discovered this afternoon, and it's very nice. If you want to stick with code under your control, I'd suggest looking at CouchDB, tho it's a bit of upfront work getting it setup. But once you have it, it's a totally, 100% schema-free database. Well, you have an ID and a Version, but thats it - the rest is up to you. http://incubator.apache.org/couchdb/\nBut by the sounds of it (N)hibernate would suite the best, but I could be wrong.\n",
"You could use an object database. \n"
] | [
2,
1,
1,
0,
0,
0
] | [] | [] | [
"database",
"orm"
] | stackoverflow_0000032231_database_orm.txt |
Q:
Using Makefile instead of Solution/Project files under Visual Studio (2005)
Does anyone have experience using makefiles for Visual Studio C++ builds (under VS 2005) as opposed to using the project/solution setup? For us, the way that projects/solutions work is not intuitive and leads to configuration explosion when you are trying to tweak builds with specific compile-time flags.
Under Unix, it's pretty easy to set up a makefile that has its default options overridden by user settings (or other configuration setting). But doing these types of things seems difficult in Visual Studio.
By way of example, we have a project that needs to get built for 3 different platforms. Each platform might have several configurations (for example debug, release, and several others). One of my goals on a newly formed project is to have a solution that can have all platform builds living together, which makes building and testing code changes easier since you aren't having to open 3 different solutions just to test your code. But Visual Studio will require 3 * (number of base configurations) configurations, i.e. PC Debug, X360 Debug, PS3 Debug, etc.
It seems like a makefile solution is much better here. Wrapped with some basic batch files or scripts, it would be easy to keep the configuration explosion to a minimum and only maintain a small set of files for all of the different builds that we have to do.
However, I have no experience with makefiles under visual studio and would like to know if others have experiences or issues that they can share.
Thanks.
(post edited to mention that these are C++ builds)
A:
I've found some benefits to makefiles with large projects, mainly related to unifying the location of the project settings. It's somewhat easier to manage the list of source files, include paths, preprocessor defines and so on, if they're all in a makefile or other build config file. With multiple configurations, adding an include path means you need to make sure you update every config manually through Visual Studio's fiddly project properties, which can get pretty tedious as a project grows in size.
Projects which use a lot of custom build tools can be easier to manage too, such as if you need to compile pixel / vertex shaders, or code in other languages without native VS support.
You'll still need to have various different project configurations however, since you'll need to differentiate the invocation of the build tool for each config (e.g. passing in different command line options to make).
Immediate downsides that spring to mind:
Slower builds: VS isn't particularly quick at invoking external tools, or even working out whether it needs to build a project in the first place.
Awkward inter-project dependencies: It's fiddly to set up so that a dependee causes the base project to build, and fiddlier to make sure that they get built in the right order. I've had some success getting SCons to do this, but it's always a challenge to get working well.
Loss of some useful IDE features: Edit & Continue being the main one!
In short, you'll spend less time managing your project configurations, but more time coaxing Visual Studio to work properly with it.
A:
Visual Studio is built on top of MSBuild configuration files. You can consider *proj and *sln files as makefiles. They allow you to fully customize the build process.
A:
While it's technically possible, it's not a very friendly solution within Visual Studio. It will be fighting you the entire time.
I recommend you take a look at NAnt. It's a very robust build system where you can do basically anything you need to.
Our NAnt script does this on every build:
Migrate the database to the latest version
Generate C# entities off of the database
Compile every project in our "master" solution
Run all unit tests
Run all integration tests
Additionally, our build server leverages this and adds 1 more task, which is generating Sandcastle documentation.
If you don't like XML, you might also take a look at Rake (ruby), Bake/BooBuildSystem (Boo), or Psake (PowerShell)
A:
You can use NAnt to build the projects individually, thus replacing the solution, and have one coding solution and no build solutions.
One thing to keep in mind is that the solution and csproj files from VS 2005 and up are MSBuild scripts. So if you get acquainted with MSBuild you might be able to wield the existing files, to make VS easier and to make your deployment easier.
A:
We have a similar setup to the one you are describing. We support at least 3 different platforms, so we found that using CMake to manage the different Visual Studio solutions works well. Setup can be a bit painful, but it pretty much boils down to reading the docs and a couple of tutorials. You should be able to do virtually everything you can do by going to the properties of the projects and the solution.
Not sure if you can have all three platform builds living together in the same solution, but you can use CruiseControl to take care of your builds, and run your testing scripts as often as needed.
| Using Makefile instead of Solution/Project files under Visual Studio (2005) | Does anyone have experience using makefiles for Visual Studio C++ builds (under VS 2005) as opposed to using the project/solution setup. For us, the way that the project/solutions work is not intuitive and leads to configuruation explosion when you are trying to tweak builds with specific compile time flags.
Under Unix, it's pretty easy to set up a makefile that has its default options overridden by user settings (or other configuration setting). But doing these types of things seems difficult in Visual Studio.
By way of example, we have a project that needs to get built for 3 different platforms. Each platform might have several configurations (for example debug, release, and several others). One of my goals on a newly formed project is to have a solution that can have all platform builds living together, which makes building and testing code changes easier since you aren't having to open 3 different solutions just to test your code. But Visual Studio will require 3 * (number of base configurations) configurations, i.e. PC Debug, X360 Debug, PS3 Debug, etc.
It seems like a makefile solution is much better here. Wrapped with some basic batch files or scripts, it would be easy to keep the configuration explosion to a minimum and only maintain a small set of files for all of the different builds that we have to do.
However, I have no experience with makefiles under visual studio and would like to know if others have experiences or issues that they can share.
Thanks.
(post edited to mention that these are C++ builds)
| [
"I've found some benefits to makefiles with large projects, mainly related to unifying the location of the project settings. It's somewhat easier to manage the list of source files, include paths, preprocessor defines and so on, if they're all in a makefile or other build config file. With multiple configurations, adding an include path means you need to make sure you update every config manually through Visual Studio's fiddly project properties, which can get pretty tedious as a project grows in size.\nProjects which use a lot of custom build tools can be easier to manage too, such as if you need to compile pixel / vertex shaders, or code in other languages without native VS support.\nYou'll still need to have various different project configurations however, since you'll need to differentiate the invocation of the build tool for each config (e.g. passing in different command line options to make).\nImmediate downsides that spring to mind:\n\nSlower builds: VS isn't particularly quick at invoking external tools, or even working out whether it needs to build a project in the first place.\nAwkward inter-project dependencies: It's fiddly to set up so that a dependee causes the base project to build, and fiddlier to make sure that they get built in the right order. I've had some success getting SCons to do this, but it's always a challenge to get working well.\nLoss of some useful IDE features: Edit & Continue being the main one!\n\nIn short, you'll spend less time managing your project configurations, but more time coaxing Visual Studio to work properly with it.\n",
"Visual studio is being built on top of the MSBuild configurations files. You can consider *proj and *sln files as makefiles. They allow you to fully customize build process.\n",
"While it's technically possible, it's not a very friendly solution within Visual Studio. It will be fighting you the entire time.\nI recommend you take a look at NAnt. It's a very robust build system where you can do basically anything you need to.\nOur NAnt script does this on every build:\n\nMigrate the database to the latest version\nGenerate C# entities off of the database\nCompile every project in our \"master\" solution\nRun all unit tests\nRun all integration tests\n\nAdditionally, our build server leverages this and adds 1 more task, which is generating Sandcastle documentation.\nIf you don't like XML, you might also take a look at Rake (ruby), Bake/BooBuildSystem (Boo), or Psake (PowerShell)\n",
"You can use nant to build the projects individually thus replacing the solution and have 1 coding solution and no build solutions.\n1 thing to keep in mind, is that the solution and csproj files from vs 2005 and up are msbuild scripts. So if you get acquainted with msbuild you might be able to wield the existing files, to make vs easier, and to make your deployment easier.\n",
"We have a similar set up as the one you are describing. We support at least 3 different platforms, so the we found that using CMake to mange the different Visual Studio solutions. Set up can be a bit painful, but it pretty much boils down to reading the docs and a couple of tutorials. You should be able to do virtually everything you can do by going to the properties of the projects and the solution.\nNot sure if you can have all three platforms builds living together in the same solution, but you can use CruiseControl to take care of your builds, and running your testing scripts as often as needed.\n"
] | [
5,
2,
1,
0,
0
] | [] | [] | [
"c++",
"makefile",
"visual_studio"
] | stackoverflow_0000051859_c++_makefile_visual_studio.txt |
Q:
Nesting a GridView within Repeater
I have a scenario wherein, for example, I need to repeat a list of US states and display a table of cities and city populations after the name of each state. The design requirement dictates that every outer repetition must be the name of a state followed by a table of cities, and that requirement cannot be changed at this time. Are there disadvantages to nesting a GridView within a Repeater and then binding each repeated GridView during the Repeater's ItemDataBound event? What are some alternative solutions?
A:
If it were me, I'd reverse the question and ask why I should use a GridView. If you need a bunch of built-in features like paging and sorting, then the GridView might be a good fit. If you just want tabular data, I'd reconsider. Why? Because with GridView you're getting a whole bunch of stuff you won't use, your ViewState will be potentially huge, and your page performance will be slower.
I'm not a bigot when it comes to GridView, but I only use them when there is a damn good reason.
A:
In your above scenario, you'd be better off doing a master-detail style GridView, which will save you the overhead of all those GridView objects that get created.
There are various implementation of it (using a drop down for the master, using a modal popup for the detail, etc.), but the main point is that there are implementations available.
A:
At the very least, hopefully you can turn off ViewState on the GridViews.
A:
The best solution I was able to come up with was to nest the GridView in the Repeater. Then I bound each repeated GridView during the Repeater's ItemDataBound event. I turned off their ViewStates, of course, as they weren't required.
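For illustration, a minimal sketch of such a handler in the page's code-behind (the control ID and the GetCitiesForState data-access call are placeholders, not part of the original answer):
protected void StatesRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item ||
        e.Item.ItemType == ListItemType.AlternatingItem)
    {
        string state = (string)e.Item.DataItem;
        GridView citiesGrid = (GridView)e.Item.FindControl("CitiesGridView");
        citiesGrid.EnableViewState = false;               // display-only data
        citiesGrid.DataSource = GetCitiesForState(state); // placeholder DAL call
        citiesGrid.DataBind();
    }
}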
| Nesting a GridView within Repeater | I have a scenario wherein, for example, I need to repeat a list of US states and display a table of cities and city populations after the name of each state. The design requirement dictates that every outer repetition must be the name of a state followed by a table of cities, and that requirement cannot be changed at this time. Are there disadvantages to nesting a GridView within a Repeater and then binding each repeated GridView during the Repeater's ItemDataBound event? What are some alternative solutions?
| [
"If it were me, I'd reverse the question and ask why I should use a GridView, If you need a bunch of built-in features like paging and sorting, then the GridView might be a good fit. If you just want tabular data, I'd reconsider. Why? Because with GridView you're getting a whole bunch of stuff you won't use, your ViewState will be potentially huge, and your page performance will be slower. \nI'm not a bigot when it comes to GridView, but I only use them when there is a damn good reason. \n",
"In your above scenario, you'd be better off doing a master-detail style GridView, which will save you the overhead of all those GridView objects that get created.\nThere are various implementation of it (using a drop down for the master, using a modal popup for the detail, etc.), but the main point is that there are implementations available.\n",
"At the very least, hopefully you can turn off ViewState on the GridViews.\n",
"The best solution I was able to come up with was to nest the GridView in the Repeater. Then I bound each repeated GridView during the Repeater's ItemDataBound event. I turned off their ViewStates, of course, as they weren't required.\n"
] | [
2,
1,
1,
1
] | [] | [] | [
"asp.net",
"data_binding",
"gridview",
"repeater"
] | stackoverflow_0000050898_asp.net_data_binding_gridview_repeater.txt |
Q:
Is there a standard approach to generating sql dynamically?
I want to ask how other programmers are producing Dynamic SQL strings for execution as the CommandText of a SQLCommand object.
I am producing parameterized queries containing user-generated WHERE clauses and SELECT fields. Sometimes the queries are complex and I need a lot of control over how the different parts are built.
Currently, I am using many loops and switch statements to produce the necessary SQL code fragments and to create the SQL parameters objects needed. This method is difficult to follow and it makes maintenance a real chore.
Is there a cleaner, more stable way of doing this?
Any Suggestions?
EDIT:
To add detail to my previous post:
I cannot really template my query due to the requirements. It just changes too much.
I have to allow for aggregate functions, like Count(). This has consequences for the Group By/Having clause. It also causes nested SELECT statements. This, in turn, affects the column name used by
Some Contact data is stored in an XML column. Users can query this data AS WELL AS the other relational columns together. Consequences are that XML columns cannot appear in Group By clauses [SQL syntax].
I am using an efficient paging technique that uses Row_Number() SQL Function. Consequences are that I have to use a Temp table and then get the @@rowcount, before selecting my subset, to avoid a second query.
I will show some code (the horror!) so that you guys have an idea of what I'm dealing with.
sqlCmd.CommandText = "DECLARE @t Table(ContactId int, ROWRANK int" + declare
+ ")INSERT INTO @t(ContactId, ROWRANK" + insertFields + ")"//Insert as few cols a possible
+ "Select ContactID, ROW_NUMBER() OVER (ORDER BY " + sortExpression + " "
+ sortDirection + ") as ROWRANK" // generates a rowrank for each row
+ outerFields
+ " FROM ( SELECT c.id AS ContactID"
+ coreFields
+ from // sometimes different tables are required
+ where + ") T " // user input goes here.
+ groupBy+ " "
+ havingClause //can be empty
+ ";"
+ "select @@rowcount as rCount;" // return 2 recordsets, avoids second query
+ " SELECT " + fields + ",field1,field2" // join onto the other cols n the table
+" FROM @t t INNER JOIN contacts c on t.ContactID = c.id"
+" WHERE ROWRANK BETWEEN " + ((pageIndex * pageSize) + 1) + " AND "
+ ( (pageIndex + 1) * pageSize); // here I select the pages I want
In this example, I am querying XML data. For purely relational data, the query is much more simple. Each of the section variables are StringBuilders. Where clauses are built like so:
// Add Parameter to SQL Command
AddParamToSQLCmd(sqlCmd, "@p" + z.ToString(), SqlDbType.VarChar, 50, ParameterDirection.Input, qc.FieldValue);
// Create SQL code Fragment
where.AppendFormat(" {0} {1} {2} @p{3}", qc.BooleanOperator, qc.FieldName, qc.ComparisonOperator, z);
A:
I had the need to do this on one of my recent projects. Here is the scheme that I am using for generating the SQL:
Each component of the query is represented by an Object (which in my case is a Linq-to-Sql entity that maps to a table in the DB). So I have the following classes: Query, SelectColumn, Join, WhereCondition, Sort, GroupBy. Each of these classes contains all details relating to that component of the query.
The last five classes are all related to a Query object. So the Query object itself has collections of each class.
Each class has a method that can generate the SQL for the part of the query that it represents. So creating the overall query ends up calling Query.GenerateQuery() which in turn enumerates through all of the sub-collections and calls their respective GenerateQuery() methods
It is still a bit complicated, but in the end you know where the SQL generation for each individual part of the query originates (and I don't think that there are any big switch statements). And don't forget to use StringBuilder.
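To make the idea concrete, here is a stripped-down sketch of that composition (class and member names are illustrative only):
using System.Collections.Generic;
using System.Text;

class WhereCondition
{
    public string Field, Operator, ParameterName;

    public string GenerateSql()
    {
        return string.Format("{0} {1} @{2}", Field, Operator, ParameterName);
    }
}

class Query
{
    public List<string> SelectColumns = new List<string>();
    public string FromTable;
    public List<WhereCondition> Conditions = new List<WhereCondition>();

    public string GenerateSql()
    {
        StringBuilder sql = new StringBuilder("SELECT ");
        sql.Append(string.Join(", ", SelectColumns.ToArray()));
        sql.Append(" FROM ").Append(FromTable);
        if (Conditions.Count > 0)
        {
            List<string> parts = new List<string>();
            foreach (WhereCondition c in Conditions)
                parts.Add(c.GenerateSql());
            sql.Append(" WHERE ").Append(string.Join(" AND ", parts.ToArray()));
        }
        return sql.ToString();
    }
}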
A:
We created our own FilterCriteria object that is kind of a black-box dynamic query builder. It has collection properties for SelectClause, WhereClause, GroupByClause and OrderByClause. It also contains properties for CommandText, CommandType, and MaximumRecords.
We then just pass our FilterCriteria object to our data logic and it executes it against the database server and passes parameter values to a stored procedure that executes the dynamic code.
Works well for us ... and keeps the SQL generation nicely contained in an object.
A:
You could try the approach used by code generation tools like CodeSmith. Create a SQL template with placeholders. At runtime, read the template into a string and substitute the placeholders with actual values. This is only useful if all the SQL code follows a pattern.
A:
Gulzar and Ryan Lanciaux make good points in mentioning CodeSmith and ORM. Either of those might reduce or eliminate your current burden when it comes to generating dynamic SQL. Your current approach of using parameterized SQL is wise, simply because it protects well against SQL injection attacks.
Without an actual code sample to comment on, it's difficult to provide an informed alternative to the loops and switch statements you're currently using. But since you mention that you're setting a CommandText property, I would recommend the use of string.Format in your implementation (if you aren't already using it). I think it may make your code easier to restructure, and therefore improve readability and understanding.
A:
Usually it's something like this:
string query= "SELECT {0} FROM .... WHERE {1}"
StringBuilder selectclause = new StringBuilder();
StringBuilder wherecaluse = new StringBuilder();
// .... the logic here will vary greatly depending on what your system looks like
MySqlcommand.CommandText = String.Format(query, selectclause.ToString(), whereclause.ToString());
I'm also just getting started out with ORMs. You might want to take a look at one of those. ActiveRecord / Hibernate are some good keywords to google.
A:
If you really need to do this from code, then an ORM is probably the way to go to try to keep it clean.
But I'd like to offer an alternative that works well and could avoid the performance problems that accompany dynamic queries, due to changing SQL that requires new query plans to be created, with different demands on indexes.
Create a stored procedure that accepts all possible parameters, and then use something like this in the where clause:
where...
and (@MyParam5 is null or @MyParam5 = Col5)
then, from code, it's much simpler to set the parameter value to DBNull.Value when it is not applicable, rather than changing the SQL string you generate.
Your DBAs will be much happier with you, because they will have one place to go for query tuning, the SQL will be easy to read, and they won't have to dig through profiler traces to find the many different queries being generated by your code.
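On the calling side that looks roughly like this (a sketch; @MyParam5 matches the clause above, everything else is illustrative):
// assumes: using System.Data; using System.Data.SqlClient;
SqlCommand cmd = new SqlCommand("dbo.SearchEntries", connection);
cmd.CommandType = CommandType.StoredProcedure;
int? myParam5 = GetUserCriteria(); // placeholder; null when the user supplied nothing
cmd.Parameters.Add("@MyParam5", SqlDbType.Int).Value =
    myParam5.HasValue ? (object)myParam5.Value : DBNull.Value;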
A:
Out of curiosity, have you considered using an ORM for managing your data access? A lot of the functionality you're trying to implement could already be there. It may be something to look at because it's best not to re-invent the wheel.
A:
ORMs have already solved the problem of dynamic SQL generation (I prefer NHibernate/ActiveRecord). Using these tools you can create a query with an unknown number of conditions by looping across user input and generating an array of Expression objects. Then execute the built-in query methods with that custom expression set.
List<Expression> expressions = new List<Expression>(userConditions.Count);
foreach(Condition c in userConditions)
{
expressions.Add(Expression.Eq(c.Field, c.Value));
}
SomeTable[] records = SomeTable.Find(expressions);
There are more 'Expression' options: non-equality, greater/less than, null/not-null, etc. The 'Condition' type I just made up, you can probably stuff your user input into a useful class.
| Is there a standard approach to generating sql dynamically? | I want to ask how other programmers are producing Dynamic SQL strings for execution as the CommandText of a SQLCommand object.
I am producing parameterized queries containing user-generated WHERE clauses and SELECT fields. Sometimes the queries are complex and I need a lot of control over how the different parts are built.
Currently, I am using many loops and switch statements to produce the necessary SQL code fragments and to create the SQL parameters objects needed. This method is difficult to follow and it makes maintenance a real chore.
Is there a cleaner, more stable way of doing this?
Any Suggestions?
EDIT:
To add detail to my previous post:
I cannot really template my query due to the requirements. It just changes too much.
I have to allow for aggregate functions, like Count(). This has consequences for the Group By/Having clause. It also causes nested SELECT statements. This, in turn, affects the column name used by
Some Contact data is stored in an XML column. Users can query this data AS WELL AS the other relational columns together. Consequences are that XML columns cannot appear in Group By clauses [SQL syntax].
I am using an efficient paging technique that uses Row_Number() SQL Function. Consequences are that I have to use a Temp table and then get the @@rowcount, before selecting my subset, to avoid a second query.
I will show some code (the horror!) so that you guys have an idea of what I'm dealing with.
sqlCmd.CommandText = "DECLARE @t Table(ContactId int, ROWRANK int" + declare
+ ")INSERT INTO @t(ContactId, ROWRANK" + insertFields + ")"//Insert as few cols a possible
+ "Select ContactID, ROW_NUMBER() OVER (ORDER BY " + sortExpression + " "
+ sortDirection + ") as ROWRANK" // generates a rowrank for each row
+ outerFields
+ " FROM ( SELECT c.id AS ContactID"
+ coreFields
+ from // sometimes different tables are required
+ where + ") T " // user input goes here.
+ groupBy+ " "
+ havingClause //can be empty
+ ";"
+ "select @@rowcount as rCount;" // return 2 recordsets, avoids second query
+ " SELECT " + fields + ",field1,field2" // join onto the other cols n the table
+" FROM @t t INNER JOIN contacts c on t.ContactID = c.id"
+" WHERE ROWRANK BETWEEN " + ((pageIndex * pageSize) + 1) + " AND "
+ ( (pageIndex + 1) * pageSize); // here I select the pages I want
In this example, I am querying XML data. For purely relational data, the query is much more simple. Each of the section variables are StringBuilders. Where clauses are built like so:
// Add Parameter to SQL Command
AddParamToSQLCmd(sqlCmd, "@p" + z.ToString(), SqlDbType.VarChar, 50, ParameterDirection.Input, qc.FieldValue);
// Create SQL code Fragment
where.AppendFormat(" {0} {1} {2} @p{3}", qc.BooleanOperator, qc.FieldName, qc.ComparisonOperator, z);
| [
"I had the need to do this on one of my recent projects. Here is the scheme that I am using for generating the SQL: \n\nEach component of the query is represented by an Object (which in my case is a Linq-to-Sql entity that maps to a table in the DB). So I have the following classes: Query, SelectColumn, Join, WhereCondition, Sort, GroupBy. Each of these classes contains all details relating to that component of the query.\nThe last five classes are all related to a Query object. So the Query object itself has collections of each class.\nEach class has a method that can generate the SQL for the part of the query that it represents. So creating the overall query ends up calling Query.GenerateQuery() which in turn enumerates through all of the sub-collections and calls their respective GenerateQuery() methods\n\nIt is still a bit complicated, but in the end you know where the SQL generation for each individual part of the query originates (and I don't think that there are any big switch statements). And don't forget to use StringBuilder.\n",
"We created our own FilterCriteria object that is kind of a black-box dynamic query builder. It has collection properties for SelectClause, WhereClause, GroupByClause and OrderByClause. It also contains a properties for CommandText, CommandType, and MaximumRecords.\nWe then jut pass our FilterCriteria object to our data logic and it executes it against the database server and passes parameter values to a stored procedure that executes the dynamic code. \nWorks well for us ... and keeps the SQL generation nicely contained in an object.\n",
"You could try the approach used by code generation tools like CodeSmith. Create a SQL template with placeholders. At runtime, read the template into a string and substitute the placeholders with actual values. This is only useful if all the SQL code follow a pattern.\n",
"Gulzar and Ryan Lanciaux make good points in mentioning CodeSmith and ORM. Either of those might reduce or eliminate your current burden when it comes to generating dynamic SQL. Your current approach of using parameterized SQL is wise, simply because it protects well against SQL injection attacks.\nWithout an actual code sample to comment on, it's difficult to provide an informed alternative to the loops and switch statements you're currently using. But since you mention that you're setting a CommandText property, I would recommend the use of string.Format in your implementation (if you aren't already using it). I think it may make your code easier to restructure, and therefore improve readability and understanding.\n",
"Usually it's something like this:\nstring query= \"SELECT {0} FROM .... WHERE {1}\"\nStringBuilder selectclause = new StringBuilder();\nStringBuilder wherecaluse = new StringBuilder();\n\n// .... the logic here will vary greatly depending on what your system looks like\n\nMySqlcommand.CommandText = String.Format(query, selectclause.ToString(), whereclause.ToString());\n\nI'm also just getting started out with ORMs. You might want to take a look at one of those. ActiveRecord / Hibernate are some good keywords to google.\n",
"If you really need to do this from code, then an ORM is probably the way to go to try to keep it clean.\nBut I'd like to offer an alternative that works well and could avoid the performance problems that accompany dynamic queries, due to changing SQL that requires new query plans to be created, with different demands on indexes.\nCreate a stored procedure that accepts all possible parameters, and then use something like this in the where clause:\nwhere...\nand (@MyParam5 is null or @MyParam5 = Col5)\n\nthen, from code, it's much simpler to set the parameter value to DBNull.Value when it is not applicable, rather than changing the SQL string you generate.\nYour DBAs will be much happier with you, because they will have one place to go for query tuning, the SQL will be easy to read, and they won't have to dig through profiler traces to find the many different queries being generated by your code.\n",
"Out of curiousity, have you considered using an ORM for managing your data access. A lot of the functionality you're trying to implement could already be there. It may be something to look at because its best not to re-invent the wheel.\n",
"ORMs have already solved the problem of dynamic SQL generation (I prefer NHibernate/ActiveRecord). Using these tools you can create a query with an unknown number of conditions by looping across user input and generating an array of Expression objects. Then execute the built-in query methods with that custom expression set.\nList<Expression> expressions = new List<Expression>(userConditions.Count);\nforeach(Condition c in userConditions)\n{\n expressions.Add(Expression.Eq(c.Field, c.Value));\n}\nSomeTable[] records = SomeTable.Find(expressions);\n\nThere are more 'Expression' options: non-equality, greater/less than, null/not-null, etc. The 'Condition' type I just made up, you can probably stuff your user input into a useful class.\n"
] | [
2,
2,
1,
1,
1,
1,
0,
0
] | [] | [] | [
".net",
"sql"
] | stackoverflow_0000051827_.net_sql.txt |
Q:
Linux/X11 input library without creating a window
Is there a good library to use for gathering user input in Linux from the mouse/keyboard/joystick that doesn't force you to create a visible window to do so? SDL lets you get user input in a reasonable way, but seems to force you to create a window, which is troublesome if you have abstracted control so the control machine doesn't have to be the same as the render machine. However, if the control and render machines are the same, this results in an ugly little SDL window on top of your display.
Edit To Clarify:
The renderer has an output window; in its normal use case, that window is full screen, except when they are both running on the same computer, just so it is possible to give the controller focus. There can actually be multiple renderers displaying a different view of the same data on different computers, all controlled by the same controller, hence the total decoupling of the input from the output (making taking advantage of the built-in X11 client/server stuff for display less usable). Also, multiple controller applications for one renderer are possible. Communication between the controllers and renderers is via sockets.
A:
OK, if you're under X11 and you want to get the kbd, you need to do a grab.
If you're not, my only good answer is ncurses from a terminal.
Here's how you grab everything from the keyboard and release again:
/* Demo code, needs more error checking, compile
 * with "gcc nameofthisfile.c -lX11". */
/* weird formatting for markdown follows. argh! */
#include <stdio.h>   /* printf */
#include <stdlib.h>  /* exit */
#include <string.h>  /* strcmp */
#include <X11/Xlib.h>
int main(int argc, char **argv)
{
Display *dpy;
XEvent ev;
char *s;
unsigned int kc;
int quit = 0;
if (NULL==(dpy=XOpenDisplay(NULL))) {
perror(argv[0]);
exit(1);
}
/*
* You might want to warp the pointer to somewhere that you know
* is not associated with anything that will drain events.
* (void)XWarpPointer(dpy, None, DefaultRootWindow(dpy), 0, 0, 0, 0, x, y);
*/
XGrabKeyboard(dpy, DefaultRootWindow(dpy),
True, GrabModeAsync, GrabModeAsync, CurrentTime);
printf("KEYBOARD GRABBED! Hit 'q' to quit!\n"
"If this job is killed or you get stuck, use Ctrl-Alt-F1\n"
"to switch to a console (if possible) and run something that\n"
"ungrabs the keyboard.\n");
/* A very simple event loop: start at "man XEvent" for more info. */
/* Also see "apropos XGrab" for various ways to lock down access to
* certain types of info. coming out of or going into the server */
for (;!quit;) {
XNextEvent(dpy, &ev);
switch (ev.type) {
case KeyPress:
kc = ((XKeyPressedEvent*)&ev)->keycode;
s = XKeysymToString(XKeycodeToKeysym(dpy, kc, 0));
/* s is NULL or a static no-touchy return string. */
if (s) printf("KEY:%s\n", s);
if (!strcmp(s, "q")) quit=~0;
break;
case Expose:
/* Often, it's a good idea to drain residual exposes to
* avoid visiting Blinky's Fun Club. */
while (XCheckTypedEvent(dpy, Expose, &ev)) /* empty body */ ;
break;
case ButtonPress:
case ButtonRelease:
case KeyRelease:
case MotionNotify:
case ConfigureNotify:
default:
break;
}
}
XUngrabKeyboard(dpy, CurrentTime);
if (XCloseDisplay(dpy)) {
perror(argv[0]);
exit(1);
}
return 0;
}
Run this from a terminal and all kbd events should hit it. I'm testing it under Xorg
but it uses venerable, stable Xlib mechanisms.
Hope this helps.
BE CAREFUL with grabs under X. When you're new to them, sometimes it's a good
idea to start a time delay process that will ungrab the server when you're
testing code and let it sit and run and ungrab every couple of minutes.
It saves having to kill or switch away from the server to externally reset state.
From here, I'll leave it to you to decide how to multiplex renderes. Read
the XGrabKeyboard docs and XEvent docs to get started.
If you have small windows exposed at the screen corners, you could jam
the pointer into one corner to select a controller. XWarpPointer can
shove the pointer to one of them as well from code.
One more point: you can grab the pointer as well, and other resources. If you had one controller running on the box in front of which you sit, you could use keyboard and mouse input to switch it between open sockets with different renderers. You shouldn't need to resize the output window to less than full screen anymore with this approach, ever. With more work, you could actually drop alpha-blended overlays on top using the SHAPE and COMPOSITE extensions to get a nice overlay feature in response to user input (which might count as gilding the lily).
A:
For the mouse you can use GPM.
I'm not sure off the top of my head for keyboard or joystick.
It probably wouldn't be too bad to read directly off the /dev files if need be.
Hope it helps
| Linux/X11 input library without creating a window | Is there a good library to use for gathering user input in Linux from the mouse/keyboard/joystick that doesn't force you to create a visible window to do so? SDL lets you get user input in a reasonable way, but seems to force you to create a window, which is troublesome if you have abstracted control so the control machine doesn't have to be the same as the render machine. However, if the control and render machines are the same, this results in an ugly little SDL window on top of your display.
Edit To Clarify:
The renderer has an output window; in its normal use case, that window is full screen, except when they are both running on the same computer, just so it is possible to give the controller focus. There can actually be multiple renderers displaying a different view of the same data on different computers, all controlled by the same controller, hence the total decoupling of the input from the output (making taking advantage of the built-in X11 client/server stuff for display less usable). Also, multiple controller applications for one renderer are possible. Communication between the controllers and renderers is via sockets.
| [
"OK, if you're under X11 and you want to get the kbd, you need to do a grab.\nIf you're not, my only good answer is ncurses from a terminal.\nHere's how you grab everything from the keyboard and release again:\n\n/* Demo code, needs more error checking, compile\n * with \"gcc nameofthisfile.c -lX11\".\n\n/* weird formatting for markdown follows. argh! */\n\n#include <X11/Xlib.h>\n\nint main(int argc, char **argv)\n{\n Display *dpy;\n XEvent ev;\n char *s;\n unsigned int kc;\n int quit = 0;\n\n if (NULL==(dpy=XOpenDisplay(NULL))) {\n perror(argv[0]);\n exit(1);\n }\n\n /*\n * You might want to warp the pointer to somewhere that you know\n * is not associated with anything that will drain events.\n * (void)XWarpPointer(dpy, None, DefaultRootWindow(dpy), 0, 0, 0, 0, x, y);\n */\n\n XGrabKeyboard(dpy, DefaultRootWindow(dpy),\n True, GrabModeAsync, GrabModeAsync, CurrentTime);\n\n printf(\"KEYBOARD GRABBED! Hit 'q' to quit!\\n\"\n \"If this job is killed or you get stuck, use Ctrl-Alt-F1\\n\"\n \"to switch to a console (if possible) and run something that\\n\"\n \"ungrabs the keyboard.\\n\");\n\n\n /* A very simple event loop: start at \"man XEvent\" for more info. */\n /* Also see \"apropos XGrab\" for various ways to lock down access to\n * certain types of info. coming out of or going into the server */\n for (;!quit;) {\n XNextEvent(dpy, &ev);\n switch (ev.type) {\n case KeyPress:\n kc = ((XKeyPressedEvent*)&ev)->keycode;\n s = XKeysymToString(XKeycodeToKeysym(dpy, kc, 0));\n /* s is NULL or a static no-touchy return string. */\n if (s) printf(\"KEY:%s\\n\", s);\n if (!strcmp(s, \"q\")) quit=~0;\n break;\n case Expose:\n /* Often, it's a good idea to drain residual exposes to\n * avoid visiting Blinky's Fun Club. */\n while (XCheckTypedEvent(dpy, Expose, &ev)) /* empty body */ ;\n break;\n case ButtonPress:\n case ButtonRelease:\n case KeyRelease:\n case MotionNotify:\n case ConfigureNotify:\n default:\n break;\n }\n }\n\n XUngrabKeyboard(dpy, CurrentTime);\n\n if (XCloseDisplay(dpy)) {\n perror(argv[0]);\n exit(1);\n }\n\n return 0;\n}\n\nRun this from a terminal and all kbd events should hit it. I'm testing it under Xorg\nbut it uses venerable, stable Xlib mechanisms.\nHope this helps.\nBE CAREFUL with grabs under X. When you're new to them, sometimes it's a good\nidea to start a time delay process that will ungrab the server when you're\ntesting code and let it sit and run and ungrab every couple of minutes.\nIt saves having to kill or switch away from the server to externally reset state.\nFrom here, I'll leave it to you to decide how to multiplex renderes. Read\nthe XGrabKeyboard docs and XEvent docs to get started.\nIf you have small windows exposed at the screen corners, you could jam\nthe pointer into one corner to select a controller. XWarpPointer can\nshove the pointer to one of them as well from code.\nOne more point: you can grab the pointer as well, and other resources. If you had one controller running on the box in front of which you sit, you could use keyboard and mouse input to switch it between open sockets with different renderers. You shouldn't need to resize the output window to less than full screen anymore with this approach, ever. With more work, you could actually drop alpha-blended overlays on top using the SHAPE and COMPOSITE extensions to get a nice overlay feature in response to user input (which might count as gilding the lily).\n",
"For the mouse you can use GPM.\nI'm not sure off the top of my head for keyboard or joystick.\nIt probably wouldn't be too bad to read directly off there /dev files if need be. \nHope it helps\n"
] | [
7,
2
] | [] | [] | [
"linux",
"sdl",
"user_input"
] | stackoverflow_0000050182_linux_sdl_user_input.txt |
Q:
How do I lock certain SQL rows while running a process on them?
My work has a financial application, written in VB.NET with SQL, that several users can be working on at the same time.
At some point, one user might decide to Post the batch of entries that they (and possibly other people) are currently working on.
Obviously, I no longer want any other users to add, edit, or delete entries in that batch after the Post process has been initiated.
I have already seen that I can lock all data by opening the SQL transaction the moment the Post process starts, but the process can be fairly lengthy and I would prefer not to have the Transaction open for the several minutes it might take to complete the function.
Is there a way to lock just the records that I know need to be operated on from VB.NET code?
A:
If you are using Oracle you would Select for update on the rows you are locking.
here is an example
SELECT address1 , city, country
FROM location
FOR UPDATE;
A:
You probably want to set an isolation level for the entire transaction rather than using with (rowlock) on specific tables.
Look at this page:
http://msdn.microsoft.com/en-us/library/ms173763.aspx
Specifically, search within it for 'row lock', and I think you'll find that READ COMMITTED or REPEATABLE READ are what you want. READ COMMITTED is the SQL Server default. If READ COMMITTED doesn't seem strong enough to you, then go for REPEATABLE READ.
Update: After reading one of your follow up posts, you definitely want repeatable read. That will hold the lock until you either commit or rollback the transaction.
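A sketch of what that looks like from code (C# shown, but the same ADO.NET calls are available from VB.NET; the table and column names are placeholders):
// assumes: using System.Data; using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.RepeatableRead))
    {
        SqlCommand cmd = conn.CreateCommand();
        cmd.Transaction = tx;
        cmd.CommandText = "SELECT * FROM Entries WHERE BatchId = @batch";
        cmd.Parameters.AddWithValue("@batch", batchId);
        // read and post the batch; the row locks are held until...
        tx.Commit(); // ...Commit (or Rollback on failure)
    }
}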
A:
add
with (rowlock)
to your SQL query
SQL Server Performance article
EDIT: ok, I misunderstood the question. What you want is transaction isolation. +1 to Joel :)
A:
Wrap it in a transaction and use a HOLDLOCK + UPDLOCK hint in the select.
Example:
begin tran

select * from
SomeTable with (holdlock, updlock)
where ....

-- processing here

commit
| How do I lock certain SQL rows while running a process on them? | My work has a financial application, written in VB.NET with SQL, that several users can be working on at the same time.
At some point, one user might decide to Post the batch of entries that they (and possibly other people) are currently working on.
Obviously, I no longer want any other users to add, edit, or delete entries in that batch after the Post process has been initiated.
I have already seen that I can lock all data by opening the SQL transaction the moment the Post process starts, but the process can be fairly lengthy and I would prefer not to have the Transaction open for the several minutes it might take to complete the function.
Is there a way to lock just the records that I know need to be operated on from VB.NET code?
| [
"If you are using Oracle you would Select for update on the rows you are locking.\nhere is an example\nSELECT address1 , city, country\nFROM location\nFOR UPDATE;\n\n",
"You probably want to set an isolation level for the entire transaction rather than using with (rowlock) on specific tables. \nLook at this page:\nhttp://msdn.microsoft.com/en-us/library/ms173763.aspx\nSpecifically, search within it for 'row lock', and I think you'll find that READ COMMITTED or REPEATABLE READ are what you want. READ COMMITTED is the SQL Server default. If READ COMMITTED doesn't seem strong enough to you, then go for REPEATABLE READ.\nUpdate: After reading one of your follow up posts, you definitely want repeatable read. That will hold the lock until you either commit or rollback the transaction.\n",
"add \nwith (rowlock)\n\nto your SQL query\nSQL Server Performance article\nEDIT: ok, I misunderstood the question. What you want is transaction isolation. +1 to Joel :) \n",
"wrap it in a tran use an holdlock + updlock in the select\nexample\nbegin tran\nselect * from\nSomeTable (holdlock,updlock)\nwhere ....\n\n\nprocessing here\n\ncommit\n\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"database",
"locking",
"sql",
"transactions",
"vb.net"
] | stackoverflow_0000046034_database_locking_sql_transactions_vb.net.txt |
Q:
TypeLoadException on System.Xml.Linq.XDocument when running T4 template on build server
I'm having a problem running a T4 template using TextTransform.exe on my build server. On my dev machine the template works perfectly. The error message is as follows.
error : Running transformation: System.TypeLoadException: Could not instantiate type System.Xml.Linq.XDocument
The build server has the Windows 2008 SDK on it; my machine has VS 2008. I thought maybe it couldn't find System.Data.Xml, so I ensured the DLL was present in the same directory, but no luck. Any ideas?
A:
Sounds like your server may not have .NET 3.5 installed.
A:
I installed .NET 3.5 SP1 and it corrected the problem
| TypeLoadException on System.Xml.Linq.XDocument when running T4 template on build server | I'm having a problem running a T4 template using TextTransform.exe on my build server. On my dev machine the template works perfectly. The error message is as follows.
error : Running transformation: System.TypeLoadException: Could not instantiate type System.Xml.Linq.XDocument
The build server has the Windows 2008 SDK on it; my machine has VS 2008. I thought maybe it couldn't find System.Data.Xml, so I ensured the DLL was present in the same directory, but no luck. Any ideas?
| [
"Sounds like your server may not have .NET 3.5 installed.\n",
"I installed .NET 3.5 SP1 and it corrected the problem\n"
] | [
1,
1
] | [] | [] | [
".net",
"t4",
"typeloadexception"
] | stackoverflow_0000051931_.net_t4_typeloadexception.txt |
Q:
Determine size of page via HttpModule
Here's an easy one for you:
I'm currently logging request duration via an HttpModule and I'd like to find out the number of bytes each page is as well.
HttpContext.Current.Response.OutputStream.Length throws a NotSupportedException.
What's an easy way to do this?
A:
I have an HttpModule that implements a stream rewriter. It derives from the Stream class. In my HttpModule I have the following code:
void app_PreRequestHandlerExecute(object sender, EventArgs e)
{
HttpResponse response = HttpContext.Current.Response;
response.Filter = new MyRewriterStream(response.Filter);
}
In the stream class I have the following code that overrides the default Write method:
public override void Write(byte[] buffer, int offset, int count)
{
string outStr;
outStr = UTF8Encoding.UTF8.GetString(buffer, offset, count);
//Do useful stuff and write back to the stream
}
You can just take the length of the string at the second point
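One caveat: with multi-byte encodings the decoded string's length is a character count, not a byte count. If the goal is the page size in bytes, it may be simpler to accumulate the raw count in the filter — a sketch, where _inner stands for the wrapped response.Filter stream:
private long _bytesWritten;

public override void Write(byte[] buffer, int offset, int count)
{
    _bytesWritten += count;              // bytes, independent of encoding
    _inner.Write(buffer, offset, count); // pass through to the real stream
}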
| Determine size of page via HttpModule | Here's an easy one for you:
I'm currently logging request duration via an HttpModule and I'd like to find out the number of bytes each page is as well.
HttpContext.Current.Response.OutputStream.Length throws a NotSupportedException.
What's an easy way to do this?
| [
"I have an HttpModule that implements a stream rewriter. It derives from the Stream class. In my HttpModule I have the following code:\nvoid app_PreRequestHandlerExecute(object sender, EventArgs e)\n{\n HttpResponse response = HttpContext.Current.Response;\n response.Filter = new MyRewriterStream(response.Filter);\n}\n\nIn the stream class I have the following code that overrides the default Write method:\npublic override void Write(byte[] buffer, int offset, int count)\n{\n string outStr;\n outStr = UTF8Encoding.UTF8.GetString(buffer, offset, count);\n //Do useful stuff and write back to the stream\n}\n\nYou can just take the length of the string at the second point\n"
] | [
3
] | [] | [] | [
"asp.net"
] | stackoverflow_0000052311_asp.net.txt |
Q:
How to validate an XML file against a schema using Visual Studio 2005
Is it possible to validate an xml file against its associated schema using Visual Studio 2005 IDE?
I could only see options to create a schema based on the current file, or show the XSLT output
A:
It's done automatically, errors appear as warnings in the "Error List" and are additionally underlined with the blue squiggle in the source file.
Not sure if there is another way to validate the file, but this will do for now.
A:
XmlSchemaValidator
Warning: It's not pretty to use.
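For programmatic validation, the friendlier .NET 2.0 route is usually XmlReaderSettings with an attached schema set, which drives XmlSchemaValidator for you. A minimal sketch (file names are placeholders):

using System;
using System.Xml;
using System.Xml.Schema;

XmlReaderSettings settings = new XmlReaderSettings();
settings.ValidationType = ValidationType.Schema;
settings.Schemas.Add(null, "MySchema.xsd"); // null = take the schema's targetNamespace
settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
{
    Console.WriteLine("{0}: {1}", e.Severity, e.Message);
};

using (XmlReader reader = XmlReader.Create("MyFile.xml", settings))
{
    while (reader.Read()) { } // reading the document triggers validation
}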
| How to validate an XML file against a schema using Visual Studio 2005 | Is it possible to validate an xml file against its associated schema using Visual Studio 2005 IDE?
I could only see options to create a schema based on the current file, or show the XSLT output
| [
"It's done automatically, errors appear as warnings in the \"Error List\" and are additionally underlined with the blue squiggle in the source file. \nNot sure if there is another way to validate the file, but this will do for now.\n",
"XmlSchemaValidator\nWarning: It's not pretty to use.\n"
] | [
5,
0
] | [] | [] | [
"visual_studio",
"visual_studio_2005",
"xml",
"xslt"
] | stackoverflow_0000052326_visual_studio_visual_studio_2005_xml_xslt.txt |
Q:
Updating Legacy Code from System.Web.Mail to System.Net.Mail in Visual Studio 2005: Problems sending E-Mail
Using the obsolete System.Web.Mail sending email works fine, here's the code snippet:
Public Shared Sub send(ByVal recipent As String, ByVal from As String, ByVal subject As String, ByVal body As String)
Try
Dim Message As System.Web.Mail.MailMessage = New System.Web.Mail.MailMessage
Message.To = recipent
Message.From = from
Message.Subject = subject
Message.Body = body
Message.BodyFormat = MailFormat.Html
Try
SmtpMail.SmtpServer = MAIL_SERVER
SmtpMail.Send(Message)
Catch ehttp As System.Web.HttpException
critical_error("Email sending failed, reason: " + ehttp.ToString)
End Try
Catch e As System.Exception
critical_error(e, "send() in Util_Email")
End Try
End Sub
and here's the updated version:
Dim mailMessage As New System.Net.Mail.MailMessage()
mailMessage.From = New System.Net.Mail.MailAddress(from)
mailMessage.To.Add(New System.Net.Mail.MailAddress(recipent))
mailMessage.Subject = subject
mailMessage.Body = body
mailMessage.IsBodyHtml = True
mailMessage.Priority = System.Net.Mail.MailPriority.Normal
Try
Dim smtp As New Net.Mail.SmtpClient(MAIL_SERVER)
smtp.Send(mailMessage)
Catch ex As Exception
MsgBox(ex.ToString)
End Try
I have tried many different variations and nothing seems to work, I have a feeling it may have to do with the SmtpClient, is there something that changed in the underlying code between these versions?
There are no exceptions that are thrown back.
A:
The System.Net.Mail library uses the config files to store the settings so you may just need to add a section like this
<system.net>
<mailSettings>
<smtp from="test@foo.com">
<network host="smtpserver1" port="25" userName="username" password="secret" defaultCredentials="true" />
</smtp>
</mailSettings>
</system.net>
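With that section in place, the parameterless SmtpClient constructor picks up the host, port and credentials from config, so the sending code can shrink to something like this (a sketch, untested against the asker's server):

Dim smtp As New System.Net.Mail.SmtpClient() ' host/port/credentials come from web.config
smtp.Send(mailMessage)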
A:
Have you tried adding
smtp.UseDefaultCredentials = True
before the send?
Also, what happens if you try changing:
mailMessage.From = New System.Net.Mail.MailAddress(from)
mailMessage.To.Add(New System.Net.Mail.MailAddress(recipent))
to this:
mailMessage.From = New System.Net.Mail.MailAddress(from,recipent)
-- Kevin Fairchild
A:
I've tested your code and my mail is sent successfully. Assuming that you're using the same parameters for the old code, I would suggest that your mail server (MAIL_SERVER) is accepting the message and there's a delay in processing or it considers it spam and discards it.
I would suggest sending a message using a third way (telnet if you're feeling brave) and see if that is successful.
EDIT: I note (from your subsequent answer) that specifying the port has helped somewhat. You've not said if you're using port 25 (SMTP) or port 587 (Submission) or something else. If you're not doing it already, using the submission port may also help solve your problem.
Wikipedia and rfc4409 have more details.
A:
Are you setting the credentials for the E-Mail?
smtp.Credentials = New Net.NetworkCredential("xyz@gmail.com", "password")
I had this error, however I believe it threw an exception.
A:
Everything you are doing is correct. Here are the things I would check.
Double check that the SMTP service in IIS is running right.
Make sure it's not getting flagged as spam.
those are usually the biggest culprits whenever we have had issues w/ sending email.
Also, just noticed you are doing MsgBox(ex.ToString). I believe they blocked MessageBox from working in ASP.NET in a service pack, so it might be erroring out and you just might not know it. Check your event log.
A:
I added the port number for the mail server and it started working sporadically, it seems that it was a problem with the server and a delay in sending the messages. Thanks for your answers, they were all helpful!
| Updating Legacy Code from System.Web.Mail to System.Net.Mail in Visual Studio 2005: Problems sending E-Mail | Using the obsolete System.Web.Mail sending email works fine, here's the code snippet:
Public Shared Sub send(ByVal recipent As String, ByVal from As String, ByVal subject As String, ByVal body As String)
Try
Dim Message As System.Web.Mail.MailMessage = New System.Web.Mail.MailMessage
Message.To = recipent
Message.From = from
Message.Subject = subject
Message.Body = body
Message.BodyFormat = MailFormat.Html
Try
SmtpMail.SmtpServer = MAIL_SERVER
SmtpMail.Send(Message)
Catch ehttp As System.Web.HttpException
critical_error("Email sending failed, reason: " + ehttp.ToString)
End Try
Catch e As System.Exception
critical_error(e, "send() in Util_Email")
End Try
End Sub
and here's the updated version:
Dim mailMessage As New System.Net.Mail.MailMessage()
mailMessage.From = New System.Net.Mail.MailAddress(from)
mailMessage.To.Add(New System.Net.Mail.MailAddress(recipent))
mailMessage.Subject = subject
mailMessage.Body = body
mailMessage.IsBodyHtml = True
mailMessage.Priority = System.Net.Mail.MailPriority.Normal
Try
Dim smtp As New Net.Mail.SmtpClient(MAIL_SERVER)
smtp.Send(mailMessage)
Catch ex As Exception
MsgBox(ex.ToString)
End Try
I have tried many different variations and nothing seems to work, I have a feeling it may have to do with the SmtpClient, is there something that changed in the underlying code between these versions?
There are no exceptions that are thrown back.
| [
"The System.Net.Mail library uses the config files to store the settings so you may just need to add a section like this\n <system.net>\n <mailSettings>\n <smtp from=\"test@foo.com\">\n <network host=\"smtpserver1\" port=\"25\" userName=\"username\" password=\"secret\" defaultCredentials=\"true\" />\n </smtp>\n </mailSettings>\n </system.net>\n\n",
"Have you tried adding \nsmtp.UseDefaultCredentials = True \n\nbefore the send?\nAlso, what happens if you try changing:\nmailMessage.From = New System.Net.Mail.MailAddress(from)\nmailMessage.To.Add(New System.Net.Mail.MailAddress(recipent))\n\nto this:\nmailMessage.From = New System.Net.Mail.MailAddress(from,recipent)\n\n-- Kevin Fairchild\n",
"I've tested your code and my mail is sent successfully. Assuming that you're using the same parameters for the old code, I would suggest that your mail server (MAIL_SERVER) is accepting the message and there's a delay in processing or it considers it spam and discards it.\nI would suggest sending a message using a third way (telnet if you're feeling brave) and see if that is successful.\nEDIT: I note (from your subsequent answer) that specifying the port has helped somewhat. You've not said if you're using port 25 (SMTP) or port 587 (Submission) or something else. If you're not doing it already, using the sumission port may also help solve your problem.\nWikipedia and rfc4409 have more details.\n",
"Are you setting the credentials for the E-Mail?\nsmtp.Credentials = New Net.NetworkCredential(\"xyz@gmail.com\", \"password\")\n\nI had this error, however I believe it threw an exception.\n",
"Everything you are doing is correct. Here's the things i would check.\n\nDouble check that the SMTP service in IIS is running right.\nMake sure it's not getting flagged as spam.\n\nthose are usually the biggest culprits whenever we have had issues w/ sending email.\nAlso, just noticed you are doing MsgBox(ex.Message). I believe they blocked MessageBox from working asp.net in a service pack, so it might be erroring out, you just might not know it. check your event log.\n",
"I added the port number for the mail server and it started working sporadically, it seems that it was a problem with the server and a delay in sending the messages. Thanks for your answers, they were all helpful!\n"
] | [
1,
0,
0,
0,
0,
0
] | [] | [] | [
".net",
".net_2.0",
"email",
"vb.net",
"visual_studio_2005"
] | stackoverflow_0000052321_.net_.net_2.0_email_vb.net_visual_studio_2005.txt |
Q:
What is causing a JVMTI_ERROR_NULL_POINTER?
I'm getting an error when my application starts. It appears to be after it's initialized its connection to the database. It also may be when it starts to spawn threads, but I haven't been able to cause it to happen on purpose.
The entire error message is:
FATAL ERROR in native method: JDWP NewGlobalRef, jvmtiError=JVMTI_ERROR_NULL_POINTER(100)
JDWP exit error JVMTI_ERROR_NULL_POINTER(100): NewGlobalRef
erickson:
I'm not very familiar with the DB code, but hopefully this string is helpful:
jdbc:sqlserver://localhost;databasename=FOO
Tom Hawtin:
It's likely I was only getting this error when debugging, but it wasn't consistent enough for me to notice.
Also, I fixed a bug that was causing multiple threads to attempt to update the same row in DB and I haven't gotten the JVMTI... error since.
A:
JVMTI is the debugging and profiling interface. So, I'm guessing it's something peculiar to the environment you are attempting to run your application in.
A:
I'm guessing you are using a native-code–based database driver (JDBC driver type 1 or 2). And I'm guessing that driver is buggy. If you could provide more information about the driver and your datasource configuration or connection string, it might help determine some answers.
| What is causing a JVMTI_ERROR_NULL_POINTER? | I'm getting an error when my application starts. It appears to be after it's initialized its connection to the database. It also may be when it starts to spawn threads, but I haven't been able to cause it to happen on purpose.
The entire error message is:
FATAL ERROR in native method: JDWP NewGlobalRef, jvmtiError=JVMTI_ERROR_NULL_POINTER(100)
JDWP exit error JVMTI_ERROR_NULL_POINTER(100): NewGlobalRef
erickson:
I'm not very familiar with the DB code, but hopefully this string is helpful:
jdbc:sqlserver://localhost;databasename=FOO
Tom Hawtin:
It's likely I was only getting this error when debugging, but it wasn't consistent enough for me to notice.
Also, I fixed a bug that was causing multiple threads to attempt to update the same row in DB and I haven't gotten the JVMTI... error since.
| [
"JVMTI is the debugging and profiling protocol. So, I'm guessint it's something peculiar to the environment you are attempting to run your application in.\n",
"I'm guessing you are using a native-code–based database driver (JDBC driver type 1 or 2). And I'm guessing that driver is buggy. If you could provide more information about the driver and your datasource configuration or connection string, it might help determine some answers.\n"
] | [
4,
1
] | [] | [] | [
"java",
"jvm"
] | stackoverflow_0000052343_java_jvm.txt |
Q:
Patterns for the overlap of two objects
I'm sure this has already been asked and answered so I apologize in advance for that but I'm not figuring out the correct keywords to search for. Searching for "Pattern" hits way too many Q & A's to be useful.
I'm working on a regression testing app. I'm displaying a form on the screen and according to which user is logged in to the app some of the fields should be read-only. So I can abstract a field object and I can abstract a user object but what pattern should I be looking at to describe the intersection of these two concepts? In other words how should I describe that for Field 1 and User A, the field should be read-only? It seems like read-only (or not) should be a property of the Field class but as I said, it depends on which user is looking at the form. I've considered a simple two-dimensional array (e. g. ReadOnly[Field,User] = True) but I want to make sure I've picked the most effective structure to represent this.
Are there any software design patterns regarding this kind of data structure? Am I overcomplicating things--would a two-dimensional array be the best way to go here? As I said if this has been asked and answered, I do apologize. I did search here and didn't find anything and a Google search failed to turn up anything either.
A:
Table driven designs can be effective.
Steve Maguire had a few nice examples in Writing Solid Code.
They are also a great way to capture tests, see fit.
In your case something like:
Field1ReadonlyRules = {
'user class 1' : True,
'user class 2' : False
}
field1.readOnly = Field1ReadonlyRules[ someUser.userClass ]
As an aside you probably want to model both users and user classes/roles/groups instead of combining them.
A user typically captures who (authentication) while groups/roles capture what (permissions, capabilities)
A:
At first blush it sounds more like you have two different types of users and they have different access levels. This could be solved by inheritance (PowerUser, User) or by containing a security object or token that sets the level for the user.
If you don't like inheritance as a rule, you could use a State pattern on the application, Decorate the user objects (Shudder) or possibly add strategy patterns for differing security levels. But I think it's a little early yet, I don't normally apply patterns until I have a firm idea of how the item will grow and be maintained.
| Patterns for the overlap of two objects | I'm sure this has already been asked and answered so I apologize in advance for that but I'm not figuring out the correct keywords to search for. Searching for "Pattern" hits way too many Q & A's to be useful.
I'm working on a regression testing app. I'm displaying a form on the screen and according to which user is logged in to the app some of the fields should be read-only. So I can abstract a field object and I can abstract a user object but what pattern should I be looking at to describe the intersection of these two concepts? In other words how should I describe that for Field 1 and User A, the field should be read-only? It seems like read-only (or not) should be a property of the Field class but as I said, it depends on which user is looking at the form. I've considered a simple two-dimensional array (e. g. ReadOnly[Field,User] = True) but I want to make sure I've picked the most effective structure to represent this.
Are there any software design patterns regarding this kind of data structure? Am I overcomplicating things--would a two-dimensional array be the best way to go here? As I said if this has been asked and answered, I do apologize. I did search here and didn't find anything and a Google search failed to turn up anything either.
| [
"Table driven designs can be effective. \nSteve Maguire had few nice examples in Writing Solid Code .\nThey are also a great way to capture tests, see fit .\nIn your case something like:\nField1ReadonlyRules = {\n 'user class 1' : True,\n 'user class 2' : False\n}\n\nfield1.readOnly = Field1ReadonlyRules[ someUser.userClass ]\n\nAs an aside you probably want to model both users and user classes/roles/groups instead of combining them.\nA user typically captures who (authentication) while groups/roles capture what (permissions, capabilities)\n",
"At first blush it sounds more like you have two different types of users and they have different access levels. This could be solved by inheritance (PowerUser, User) or by containing a security object or token that sets the level for the user. \nIf you don't like inheritance as a rule, you could use a State pattern on the application, Decorate the user objects (Shudder) or possibly add strategy patterns for differing security levels. But I think it's a little early yet, I don't normally apply patterns until I have a firm idea of how the item will grown and be maintained.\n"
] | [
2,
1
] | [] | [] | [
"design_patterns",
"intersection",
"object"
] | stackoverflow_0000052400_design_patterns_intersection_object.txt |
Q:
Visual Web Developer Express and .NET, et al
I'm coming from the open source world, and interested in giving ASP.NET a spin. But I'm having a little trouble separating the tools from the platform itself in regards to the licensing. I've downloaded Visual Web Developer 2008 Express, but not sure how different this is from one of the full-featured Visual Studio licenses -- and whether or not my Express license will prevent me from using all the features of ASP.NET.
Is a Visual Studio license just an IDE, or does it include pieces of .NET not available to the Express license? What about the other tools like IIS and SQL Server?
Thanks.
A:
All of .net is available in the .net SDK, so in theory you will not need Visual Studio at all.
Now, there are some things that Express will not do. For example, the Database Designer is not very comprehensive and adding different remote databases is not or only very hardly possible. Still, in code you can connect to everything.
There is also no Remote Debugger, no support for creating Setup Files (well, that does not apply to ASP.net anyway), no real Publish Web Site Feature (although that can be added manually as it's just a Frontend for a SDK tool), no integrated Unit testing (and Microsoft loves to threaten people who add it), etc.
For a full comparison, see here:
Visual Studio 2008 Editions
But as said: Functionality of .net is all in the SDK, Visual Studio is just making it a bit easier to work with.
A:
Visual Studio is just an IDE, you can do all your .NET development with the SDK and notepad if you choose. In fact there is something to be said for learning it that way so you understand better how the pieces fit together!
Microsoft have a version comparison matrix available so you can see exactly what is included in each version.
IIS is a Windows component and considered part of the OS, there is nothing else to buy.
SQL Server comes in many flavours; SQL Express is free to use and, whilst limited compared to the versions you pay for, is more than enough to get started with ASP.Net.
A:
Visual Studio is the IDE and does not include the platform.
IIS and SQL Server are separate products. IIS is available as part of the windows install and the version is different depending on what version of Windows you are using.
SQL Server also has an express product which is not as full featured as the Full versions of SQL Server, yet it is still rather valuable and useful especially for learning purposes.
You can learn a lot from the free tutorials found on asp.net.
A:
Visual Studio is just the IDE. You could theoretically create every file in Notepad and compile manually with just the .net framework.
IIS is an operating system feature, and SQL Server has different flavors with different capabilites.
A:
SharpDevelop is an open-source IDE for C# and VB.NET
| Visual Web Developer Express and .NET, et al | I'm coming from the open source world, and interested in giving ASP.NET a spin. But I'm having a little trouble separating the tools from the platform itself in regards to the licensing. I've downloaded Visual Web Developer 2008 Express, but not sure how different this is from one of the full-featured Visual Studio licenses -- and whether or not my Express license will prevent me from using all the features of ASP.NET.
Is a Visual Studio license just an IDE, or does it include pieces of .NET not available to the Express license? What about the other tools like IIS and SQL Server?
Thanks.
| [
"All of .net is available in the .net SDK, so in theory you will not need Visual Studio at all.\nNow, there are some things that Express will not do. For example, the Database Designer is not very comprehensive and adding different remote databases is not or only very hardly possible. Still, in code you can connect to everything.\nThere is also no Remote Debugger, no support for creating Setup Files (well, that does not apply to ASP.net anyway), no real Publish Web Site Feature (although that can be added manually as it's just a Frontend for a SDK tool), no integrated Unit testing (and Microsoft loves to threaten people who add it), etc.\nFor a full comparison, see here:\nVisual Studio 2008 Editions\nBut as said: Functionality of .net is all in the SDK, Visual Studio is just making it a bit easier to work with.\n",
"Visual Studio is just an IDE, you can do all your .NET development with the SDK and notepad if you choose. In fact there is something to be said for learning it that way so you understand better how the pieces fit together!\nMicrosoft have a version comparison matrix available so you can see exactly what is included each version.\nIIS is a Windows component and considered part of the OS, there is nothing else to buy. \nSQL Server comes in many flavours, SQL EXpress is free to use and whilst limited compared to the versions you pay for, it is more than enough to get started with ASP.Net\n",
"Visual Studio is the IDE and does not include the platform.\nIIS and SQL Server are separate products. IIS is available as part of the windows install and the version is different depending on what version of Windows you are using.\nSQL Server also has an express product which is not as full featured as the Full versions of SQL Server, yet it is still rather valuable and useful especially for learning purposes.\nYou can learn a lot from the free tutorials found on asp.net.\n",
"Visual Studio is just the IDE. You could theoretically create every file in Notepad and compile manually with just the .net framework.\nIIS is an operating system feature, and SQL Server has different flavors with different capabilites.\n",
"SharpDevelop is a Open Source IDE for C# and VB.net\n"
] | [
2,
2,
1,
0,
0
] | [] | [] | [
"asp.net",
"visual_studio"
] | stackoverflow_0000052469_asp.net_visual_studio.txt |
Q:
Best way to manage session in NHibernate?
I'm new to NHibernate (my 1st big project with it).
I had been using a simple method of data access by creating the ISession object within a using block to do my grab my Object or list of Objects, and in that way the session was destroyed after exiting the code block.
This doesn't work in a situation where lazy-loading is required, however.
For example, if I have a Customer object that has a property which is a collection of Orders, then when the lazy-load is attempted, I get a Hibernate exception.
Anyone using a different method?
A:
Session management:
http://code.google.com/p/dot-net-reference-app/source/browse/trunk/src/Infrastructure/Impl/HybridSessionBuilder.cs
Session per request:
http://code.google.com/p/dot-net-reference-app/source/browse/trunk/src/Infrastructure/Impl/NHibernateSessionModule.cs
A:
check out the SummerOfNHibernate webcasts for a great tutorial... What you're looking for specifically doesn't come until webisode 5 or 6.
A:
Keep your session open for your entire unit of work. If your session's lifetime is too short, you cannot benefit from the session-level cache (which is significant). Any time you can prevent a roundtrip to the database is going to save a lot of time. You also cannot take advantage of lazy loading, which is crucial to understand.
If your session lifetime is too big, you can run into other issues.
If this is a web app, you'll probably do fine with the session-per-httpRequest pattern. Basically this is an HttpModule that opens the session at the beginning of the request and flushes/closes at the end. Be sure to store the session in HttpContext.Items NOT A STATIC VARIABLE. <--- leads to all kinds of problems that you don't want to deal with.
You might also look at RhinoCommons for a unit of work implementation.
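A bare-bones sketch of that module pattern (error handling, transactions and the factory wiring are omitted; the holder class and the Items key name are invented for the example):

using System.Web;
using NHibernate;

public class NHibernateSessionModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += delegate
        {
            // one ISession per request, parked in HttpContext.Items
            HttpContext.Current.Items["nh.session"] = SessionFactoryHolder.Factory.OpenSession();
        };
        app.EndRequest += delegate
        {
            ISession session = (ISession)HttpContext.Current.Items["nh.session"];
            if (session != null)
            {
                session.Flush();
                session.Close();
            }
        };
    }

    public void Dispose() { }
}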
A:
Since you are developing a Web App (presumably with ASP.NET), check out NHibernate Best Practices with ASP.NET at CodeProject.
| Best way to manage session in NHibernate? | I'm new to NHibernate (my 1st big project with it).
I had been using a simple method of data access by creating the ISession object within a using block to grab my Object or list of Objects, and in that way the session was destroyed after exiting the code block.
This doesn't work in a situation where lazy-loading is required, however.
For example, if I have a Customer object that has a property which is a collection of Orders, then when the lazy-load is attempted, I get a Hibernate exception.
Anyone using a different method?
| [
"Session management:\nhttp://code.google.com/p/dot-net-reference-app/source/browse/trunk/src/Infrastructure/Impl/HybridSessionBuilder.cs\nSession per request:\nhttp://code.google.com/p/dot-net-reference-app/source/browse/trunk/src/Infrastructure/Impl/NHibernateSessionModule.cs\n",
"check out the SummerOfNHibernate webcasts for a great tutorial... What you're looking for specifically doesn't come until webisode 5 or 6.\n",
"Keep your session open for your entire unit of work. If your session is life is too small, you cannot benefit from the session level cache (which is significant). Any time you can prevent a roundtrip to the database is going to save a lot of time. You also cannot take advantage of lazy loading, which is crucial to understand.\nIf your session lifetime is too big, you can run into other issues.\nIf this is a web app, you'll probably do fine with the session-per-httpRequest pattern. Basically this is an HttpModule that opens the session at the beginning of the request and flushes/closes at the end. Be sure to store the session in HttpContext.Items NOT A STATIC VARIABLE. <--- leads to all kinds of problems that you don't want to deal with.\nYou might also look at RhinoCommons for a unit of work implementation.\n",
"Since you are developing a Web App (presumably with ASP.NET), check out NHibernate Best Practices with ASP.NET at CodeProject.\n"
] | [
6,
2,
1,
0
] | [] | [] | [
".net",
"c#",
"nhibernate"
] | stackoverflow_0000032612_.net_c#_nhibernate.txt |
Q:
Ignore SVN ignore... possible?
So I have some files I want to ignore in a subversion repository, but I don't want my ignore patterns for this to be propagated to the repository.
In other words, I added some private files in my checkout that I want to keep, but they only exist for me and wouldn't make sense to be ignored for everyone, so if I use the svn:ignore, this will apply on the directory, and I either have to check that in (which I don't want to do), or see that this directory was modified every time I do an svn status.
So, ideally I would like something like a .svnignore file which I could then mark to ignore itself as well as some other files (I think this is a possibility in git for example, using a .gitignore file, or whatever the name is).
I'm guessing it might work to ignore the whole directory (maybe), but then I suspect I won't see any new files in that directory, which would also not be desirable.
So does anybody know a way to do this in subversion?
A:
Subversion does have a per-user, global ignore setting, which sounds like what you want. Look in your .subversion directory (found in your home directory) and locate the Miscellany section of the config file. There should be an entry called global-ignores.
For Windows users, this setting is found in the registry under HKEY_LOCAL_MACHINE\Software\Tigris.org\Subversion.
More information is available in the Version Control with Subversion (the SVNBook).
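For reference, the relevant bit of the per-user config file looks like this (the patterns are just examples):

[miscellany]
global-ignores = *.o *.lo .*.swp my-private-notes.txt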
A:
I'm confused about 2 things:
why do you have files that you don't want to check in that other people might want to check in? seems like you'd get a conflict if that happened anyway
does just "ignoring" in the human sense not work for you?
I'm just having trouble seeing a scenario where you wouldn't want to use svn:ignore...
| Ignore SVN ignore... possible? | So I have some files I want to ignore in a subversion repository, but I don't want my ignore patterns for this to be propagated to the repository.
In other words, I added some private files in my checkout that I want to keep, but they only exist for me and wouldn't make sense to be ignored for everyone, so if I use the svn:ignore, this will apply on the directory, and I either have to check that in (which I don't want to do), or see that this directory was modified every time I do an svn status.
So, ideally I would like something like a .svnignore file which I could then mark to ignore itself as well as some other files (I think this is a possibility in git for example, using a .gitignore file, or whatever the name is).
I'm guessing it might work to ignore the whole directory (maybe), but then I suspect I won't see any new files in that directory, which would also not be desirable.
So does anybody know a way to do this in subversion?
| [
"Subversion does have a per-user, global ignore setting, which sounds like what you want. Look in your .subversion directory (found in your home directory) and locate the Miscellany section of the config file. There should be an entry called global-ignores.\nFor Windows users, this setting is found in the registry under HKEY_LOCAL_MACHINE\\Software\\Tigris.org\\Subversion.\nMore information is available in the Version Control with Subversion (the SVNBook).\n",
"I'm confused about 2 things:\n\nwhy do you have files that you don't want to check in that other people might want to check in? seems like you'd get a conflict if that happened anyway\ndoes just \"ignoring\" in the human sense not work for you?\n\nI'm just having trouble seeing a scenario where you wouldn't want to use svn:ignore...\n"
] | [
9,
0
] | [] | [] | [
"svn"
] | stackoverflow_0000052398_svn.txt |
Q:
Class::DBI-like library for php?
I have inherited an old crusty PHP application, and I'd like to refactor it into something a little nicer to deal with, but in a gradual manner. In perl's CPAN, there is a series of classes around Class::DBI that allow you to use database rows as the basis for objects in your code, with the library generating accessor methods etc as appropriate, but also allowing you to add additional methods.
Does anyone know of something like this for PHP? Especially something that doesn't require wholesale adoption of a "framework"... bonus points if it works in PHP4 too, but to be honest, I'd love to have another reason to ditch that. :-)
A:
It's now defunct but phpdbi is possibly worth a look. If you're willing to let go of some of your caveats (the framework one), I've found that Doctrine is a pretty neat way of accessing DBs in PHP. Worth investigating anyway.
A:
Class::DBI is an ORM (Object Relational Mapper) for Perl. Searching for "PHP ORM" on Google gives some good results, including Doctrine, which I've had good luck with. I'd start there and work your way up.
A:
I'm trying to get more feedback on my own projects, so I'll suggest my take on ORM: ORMer
Usage examples are here
You can phase it in, it doesn't require you to adopt MVC, and it requires very little setup.
A:
The right thing to do is to access the database via an abstraction layer, such that if you change your RDBMS or how you implemented that access, you only have to modify this layer while all the rest of your application remains untouched.
To do this, to free your application from knowing how to deal with the database, your abstraction layer for DB access must be implemented by a framework such as ADODB.
All the files related to this layer must be located in a sub directory:
/ado
In this directory you'll put all of your .php.inc files, which contain general methods to access the database.
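For illustration, one of those files might wrap ADOdb like this (connection details are placeholders; ADONewConnection, Connect and Execute are ADOdb's documented calls):

<?php
require_once 'adodb/adodb.inc.php';

function db_connect() {
    $db = ADONewConnection('mysql');                     // pick the driver
    $db->Connect('localhost', 'user', 'secret', 'mydb'); // placeholder credentials
    return $db;
}

function db_fetch_all($db, $sql) {
    $rs = $db->Execute($sql);              // returns a recordset object, or false
    return $rs ? $rs->GetRows() : array();
}
?>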
A:
How about MDB2 from pear?
It provides a common API for all
supported RDBMS. The main difference
to most other DB abstraction packages
is that MDB2 goes much further to
ensure portability.
Btw: @GaryF, what are those strange title attributes your links have? Did you add them or are they added by SO?
| Class::DBI-like library for php? | I have inherited an old crusty PHP application, and I'd like to refactor it into something a little nicer to deal with, but in a gradual manner. In perl's CPAN, there is a series of classes around Class::DBI that allow you to use database rows as the basis for objects in your code, with the library generating accessor methods etc as appropriate, but also allowing you to add additional methods.
Does anyone know of something like this for PHP? Especially something that doesn't require wholesale adoption of a "framework"... bonus points if it works in PHP4 too, but to be honest, I'd love to have another reason to ditch that. :-)
| [
"It's now defunct but phpdbi is possibly worth a look. If you're willing to let go of some of your caveats (the framework one), I've found that Doctrine is a pretty neat way of accessing DBs in PHP. Worth investigating anyway.\n",
"Class::DBI is an ORM (Object Relational Mapper) for perl. Searching for \"PHP ORM\" on google gives some good results, including Doctrin, which I've had good luck with. I'd start there and work your way up.\n",
"I'm trying to get more feedback on my own projects, so I'll suggest my take on ORM: ORMer\nUsage examples are here\nYou can phase it in, it doesn't require you to adopt MVC, and it requires very little setup.\n",
"The right thing to is to access the database via an abstraction layer in a way such if you change your RDBMS or how you implemented that access, you only have to modify this layer while all the rest of your application remains untouched. \nTo do this, to free your application from knowing how to deal with the database, your abstraction layer for DB access must be implemented by a framework such as ADODB. \nAll the files related to this layer must be located in a sub directory:\n\n/ado\n\nIn this directories you'll put all of your .php.inc files which contains general methods to access the database.\n",
"How about MDB2 from pear?\n\nIt provides a common API for all\n supported RDBMS. The main difference\n to most other DB abstraction packages\n is that MDB2 goes much further to\n ensure portability.\n\nBtw: @GaryF what are those strange title attributes your links have ? Did you add them or are they added by SO ?\n"
] | [
4,
1,
1,
0,
0
] | [] | [] | [
"orm",
"perl",
"php"
] | stackoverflow_0000008276_orm_perl_php.txt |
Q:
Automating DB Object Migrations from Source Control
I'm looking for some "Best Practices" for automating the deployment of Stored Procedures/Views/Functions/Table changes from source control. I'm using StarTeam & ANT so the labeling is taken care of; what I am looking for is how some of you have approached automating the pull of these objects from source - not necessarily StarTeam.
I'd like to end up with one script that can then be executed, checked in, and labeled.
I'm NOT asking for anyone to write that - just some ideas or approaches that have (or haven't) worked in the past.
I'm trying to clean up a mess and want to make sure I get this as close to "right" as I can.
We are storing the tables/views/functions etc. in individual files in StarTeam and our DB is SQL 2K5.
A:
We use SQL Compare from redgate (http://www.red-gate.com/).
We have a production database, a development database and each developer has their own database.
The development database is synchronised with the changes a developer has made to their database when they check in their changes.
The developer also checks in a synchronisation script and a comparison report generated by SQL Compare.
When we deploy our application we simply synchronise the development database with the production database using SQL Compare.
This works for us because our application is for in-house use only. If this isn't your scenario then I would look at SQL Packager (also from redgate).
A:
I prefer to separate views, procedures, and triggers (objects that can be re-created at will) from tables. For views, procedures, and triggers, just write a job that will check them out and re-create the latest.
For tables, I prefer to have a database version table with one row. Use that table to determine what new updates have not been applied. Then each update is applied and the version number is updated. If an update fails, you have only that update to check, and you can re-run it knowing that the earlier updates will not happen again.
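In T-SQL that single-row version table can be as simple as this (the table, column and change names are invented for the example):

-- one-row table tracking the schema version
CREATE TABLE SchemaVersion (Version int NOT NULL);
INSERT INTO SchemaVersion (Version) VALUES (0);

-- each checked-in update script guards itself:
IF (SELECT Version FROM SchemaVersion) = 12
BEGIN
    ALTER TABLE Orders ADD ShippedDate datetime NULL;
    UPDATE SchemaVersion SET Version = 13;
END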
| Automating DB Object Migrations from Source Control | I'm looking for some "Best Practices" for automating the deployment of Stored Procedures/Views/Functions/Table changes from source control. I'm using StarTeam & ANT so the labeling is taken care of; what I am looking for is how some of you have approached automating the pull of these objects from source - not necessarily StarTeam.
I'd like to end up with one script that can then be executed, checked in, and labeled.
I'm NOT asking for anyone to write that - just some ideas or approaches that have (or haven't) worked in the past.
I'm trying to clean up a mess and want to make sure I get this as close to "right" as I can.
We are storing the tables/views/functions etc. in individual files in StarTeam and our DB is SQL 2K5.
| [
"We use SQL Compare from redgate (http://www.red-gate.com/).\nWe have a production database, a development database and each developer has their own database.\nThe development database is synchronised with the changes a developer has made to their database when they check in their changes. \nThe developer also checks in a synchronisation script and a comparison report generated by SQL Compare.\nWhen we deploy our application we simply synchronise the development database with the production database using SQL Compare.\nThis works for us because our application is for in-house use only. If this isn't your scenario then I would look at SQL Packager (also from redgate).\n",
"I prefer to separate views, procedures, and triggers (objects that can be re-created at will) from tables. For views, procedures, and triggers, just write a job that will check them out and re-create the latest.\nFor tables, I prefer to have a database version table with one row. Use that table to determine what new updates have not been applied. Then each update is applied and the version number is updated. If an update fails, you have only that update to check and you can re-run know that the earlier updates will not happen again.\n"
] | [
4,
1
] | [] | [] | [
"build_automation",
"sql_server",
"starteam",
"version_control"
] | stackoverflow_0000052626_build_automation_sql_server_starteam_version_control.txt |
Q:
Are off-the-cloud desktop applications dead?
Although somewhat related to this question, I have what I think is a different take on it.
Is a desktop app that has no connections to the "cloud" dead? I believe that some things are going to continue to be on the machine (operating systems obviously, browsers, some light-weight applications), but more and more things are moving to network-based applications (see Google Docs for office suites, GMail and other web-email clients for email, flickr for photo management, and more).
So other than the lightweight applications, is there anything that, in 5 to 10 years, will continue to be (either out of necessity or just demand) remain on the desktop and off the cloud?
A:
10 years or more ago this would have been, "Are non-internet applications dead?"
There's things the cloud does better than desktop applications, and in those places I'm sure non-cloud applications will become increasingly rare. But there's plenty of applications where you might not want to use the cloud, the benefits don't outweigh the costs, or the complexity just isn't worth it.
It's a new tool, and it's a better tool than desktop applications for many things. However, you don't throw away a hammer when you buy a screwdriver, you simply reserve it for when a nail needs to be driven.
A:
Video editing and other resource intensive tasks will probably stay off the cloud for a long time.
A:
IDEs will probably be "off the cloud" for a long time, if ever... powerful customizable editors like Emacs will also probably stay "off the cloud" for a while.
A:
If I look at the application that we've selling and at the applications I've written as a consultant, I must very much agree with you. Most of them are useless if there is no internet connection. Some do work in disconnected mode, some don't, but all of them are pretty useless if you cannot connect to the big supporting system hidden far far away.
On the other hand, I wouldn't want to say that everything will move into the cloud in 5 years. Too much work with porting. There will be desktop applications that will function as a thin and offline-able client (just like, for example, Google Reader does if you install Gears) and there will be fully "clouded" :) applications.
I have no idea what will happen in 10 years. If I put myself 10 years back (and that is very easy to do as I was writing a lot for a local computer magazine in that time), I totally couldn't predict how the computing will become internet-dependant in 2008.
A:
Gosh, I hope not as that's my job.
The main piece of software I write controls electronic hardware (PXI boards and the like) for testing. Without "real" hardware, there's nothing to test. Even the very nature of the tests themselves prevent simultaneous access (once you set the state of a switch, you don't want someone else moving it).
So as long as you interact with any hardware, you're off-the-cloud.
Oh, and some companies have security issues with being on the Internet; I'd say security would also drive desktop apps with no connections.
A:
There's no reason that many corporations will move to an online system simply because of security concerns.
For example, one of the greatest assets of Outlook is the ability to go offline and continue working. Sure, Google Gears has similar functionality, but then you're trusting Google with your corporate security.
A:
Such applications have been dead for 15 years, ever since Sun took market leadership with their JavaStation.
No, wait. They did not. And things are not "more and more" moving to network-based applications. Sure, there is Webmail, but even GMail is FAR away from the comfort of modern Outlook or Thunderbird clients. Same for office. Google Docs is a nice toy for occasional use, but it's vastly inferior to conventional Office suites.
The Desktop is not dead and it will not die anytime soon. Internet applications are alternatives in some situations, but they are just starting to get proper functionality and performance. Let's face it: JavaScript performance is still a joke, the IDE support is not there yet and browsers are too unstable at the moment.
Google Chrome, IE8 and Firefox 3.1 start to go in a better direction, but it will take years for them to be mature enough to create JavaScript applications that actually can fully replace desktop apps. But that would require some proper standardization across browsers, and we all know that this will not happen before the next millennium or so.
A:
About 1% of users actually use Google Docs&Spreadsheets full-time. Almost all of the rest use Microsoft Office. So, no, off-the-cloud applications are not dead simply because a Google office suite exists. And those are, really, the only high-profile true web applications out there that are meant as desktop app replacements.
Webmails are a special case though. It actually makes sense to use those rather than a desktop app, since your email is next-to-useless without a connection anyway. But most applications don't NEED a full-time Internet connection. A word processor certainly doesn't.
What will definitely remain on the desktop:
Games
Small apps (calculator, notepad type of stuff)
Anything that generates data that needs to be secure (I don't imagine tons of people or companies want to trust their accounting details to Google, for example)
Web browsers (obviously)
IDEs (Visual Studio via Ajax? Come on...)
Auxiliary development tools (SVN, etc), since good security policy would forbid their use through a web browser
Anything that needs high enough performance that network latency would be an impediment
What will probably remain primarily on the desktop, at least for the next 5 years:
Office tools (unless web-based limitations can be lifted... which would require much better-performing web browsers than we have now)
Photoshop and such tools
Chat clients (web-based equivalents are disappointing so far)
That's not to say that any of the above cannot have an Internet-based component, of course.
A:
I personally will never leave my stuff on the web under someone else's control. All of my photos and e-mails I keep on local hard drives that I control.
I prefer to make my own stuff available to me through the web on my own hardware. The only way to have reasonable performance and be productive when offline is to use local apps.
To me the future will be local, but remotely accessible and synchronized. At least for the next 20 years or so.
Not only do I think it's not dead, I think it's the way everyone will want to go once we have a few disastrous failures (i.e., websites disappearing with users' content that isn't backed up anywhere, or severe privacy breaches as some large company loses control of access to the data they are protecting).
| Are off-the-cloud desktop applications dead? | Although somewhat related to this question, I have what I think is a different take on it.
Is a desktop app that has no connections to the "cloud" dead? I believe that some things are going to continue to be on the machine (operating systems obviously, browsers, some light-weight applications), but more and more things are moving to network-based applications (see Google Docs for office suites, GMail and other web-email clients for email, flickr for photo management, and more).
So other than the lightweight applications, is there anything that, in 5 to 10 years, will continue to be (either out of necessity or just demand) remain on the desktop and off the cloud?
| [
"10 years or more ago this would have been, \"Are non-internet applications dead?\"\nThere's things the cloud does better than desktop applications, and in those places I'm sure non-cloud applications will become increasingly rare. But there's plenty of applications where you might not want to use the cloud, the benefits don't outweigh the costs, or the complexity just isn't worth it.\nIt's a new tool, and it's a better tool than desktop applications for many things. However, you don't throw away a hammer when you buy a screwdriver, you simply reserve it for when a nail needs to be driven.\n",
"Video editing and other resource intensive tasks will probably stay off the cloud for a long time.\n",
"IDE's will probably be \"off the cloud\" for a long time, if ever... powerful customizable editors like Emacs will also probably stay \"off the cloud\" for a while.\n",
"If I look at the application that we've selling and at the applications I've written as a consultant, I must very much agree with you. Most of them are useless if there is no internet connection. Some do work in disconnected mode, some don't, but all of them are pretty useless if you cannot connect to the big supporting system hidden far far away.\nOn the other hand, I wouldn't want to say that everything will move into the cloud in 5 years. Too much work with porting. There will be desktop applications that will function as a thin and offline-able client (just like, for example, Google Reader does if you install Gears) and there will be fully \"clouded\" :) applications.\nI have no idea what will happen in 10 years. If I put myself 10 years back (and that is very easy to do as I was writing a lot for a local computer magazine in that time), I totally couldn't predict how the computing will become internet-dependant in 2008.\n",
"Gosh, I hope not as that's my job. \nThe main piece of software I write controls electronic hardware (PXI boards and the like) for testing. Without \"real\" hardware, there's nothing to test. Even the very nature of the tests themselves prevent simultaneous access (once you set the state of a switch, you don't want someone else moving it).\nSo as long as you interact with any hardware, you're off-the-cloud.\nOh, and some companies have security issues with being on the Internet; I'd say security would also drive desktop apps with no connections.\n",
"There's no reason that many corporations will move to an online system simply because of security concerns. \nFor example, One of the greatest assets of Outlook is to go offline and continue working. Sure Google Gears has similar functionality, but then you're trusting Google with your corporate security.\n",
"Such applications are dead since 15 years, ever since Sun took market leadership with their JavaStation.\nNo, wait. They did not. And things are not \"more and more\" moving to network-based applications. Sure, there is Webmail, but even GMail is FAR away from the comfort of modern Outlook or Thunderbird Clients. Same for office. Google Docs is a nice toy for ocasional use, but it's vastly inferior to conventional Office suites.\nThe Desktop is not dead and it will not die anytime soon. Internet Applications are alternatives in some situations, but be are just starting getting proper functionality and performance. Let's face it: JavaScript performance is still a Joke, the IDE Support is not there yet and Browsers are too unstable at the moment.\nGoogle Chrome, IE8 and Firefox 3.1 start to go in a better direction, but it will take years for them to be mature enough to create JavaScript applications that actually can fully replace desktop apps. But that would require some proper standardization accross browsers, and we all know that this will not happen before the next millennium or so.\n",
"About 1% of users actually use Google Docs&Spreadsheets full-time. Almost all of the rest use Microsoft Office. So, no, off-the-cloud applications are not dead simply because a Google office suite exists. And those are, really, the only high-profile true web applications out there that are meant as desktop app replacements.\nWebmails are a special case though. It actually makes sense to use those rather than a desktop app, since your email is next-to-useless without a connection anyway. But most applications don't NEED a full-time Internet connection. A word processor certainly doesn't.\nWhat will definitely remain on the desktop:\n\nGames\nSmall apps (calculator, notepad type of stuff)\nAnything that generates data that needs to be secure (I don't imagine tons of people or companies want to trust their accounting details to Google, for example)\nWeb browsers (obviously)\nIDEs (Visual Studio via Ajax? Come on...)\nAuxiliary development tools (SVN, etc), since good security policy would forbid their use through a web browser\nAnything that needs high enough performance that network latency would be an impediment\n\nWhat will probably remain primarily on the desktop, at least for the next 5 years:\n\nOffice tools (unless web-based limitations can be lifted... which would require much better-performing web browsers than we have now)\nPhotoshop and such tools\nChat clients (web-based equivalents are disappointing so far)\n\nThat's not to say that any of the above cannot have an Internet-based component, of course.\n",
"I personally will never leave my stuff on the web under someone else's control. All of my photos and e-mails I keep on local hard drives that I control. \nI prefer to make my own stuff available to me through the web on my own hardware. The only way to have reasonable performance and be productive when offline is to use local apps. \nTo me the future will be local, but remotely accessible and synchronized. At least for the next 20 years or so.\nNot only do I think it's not dead, I think it's the way everyone will want to go once we have a few disastrous failures (ie, websites disappearing with users content that isn't backed up anywhere or severe privacy breeches as some large company loses control of access to the data they are protecting).\n"
] | [
7,
4,
3,
3,
3,
2,
2,
2,
0
] | [] | [] | [
"cloud",
"desktop_application"
] | stackoverflow_0000052520_cloud_desktop_application.txt |
Q:
php is_dir returns true for non-existent folder
Has anyone encountered this oddity?
I'm checking for the existence of a number of directories in one of my unit tests. is_dir is reporting true (1) in spite of the folder not existing at the time it is called. The code looks like this (with a few extraneous intermediate vars to ease debugging):
foreach($userfolders as $uf) {
$uf = sprintf($uf, $user_id);
$uf = ltrim($uf,'/');
$path = trim($base . '/' . $uf);
$res = is_dir($path); //returns false except last time returns 1
$this->assertFalse($res, $path);
}
The machine running Ubuntu Linux 8.04 with PHP Version 5.2.4-2ubuntu5.3
Things I have checked:
- Paths are full paths
- The same thing happens on two separate machines (both running Ubuntu)
- I have stepped through line by line in a debugger
- Paths genuinely don't exist at the point where is_dir is called
- While the code is paused on this line, I can actually drop to a shell and run
the interactive PHP interpreter and get the correct result
- The paths are all WELL under 256 chars
- I can't imagine a permissions problem as the folder doesn't exist! The parent folder can't be causing permissions problems as the other folders in the loop are correctly reported as missing.
Comments on the PHP docs point to the odd issue with is_dir but not this particular one.
I'm not posting this as a "please help me fix" but in the hope that somebody encountering the same thing can search here and hopefully an answer from somebody else who has seen this!
A:
I don't think this would cause your problem, but $path does have the trailing slash, correct?
A:
For what it's worth, is_readable can be used as a workaround.
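Another thing worth ruling out before blaming is_dir itself is PHP's stat cache, which can serve stale results inside a long-running process. A belt-and-braces sketch:

clearstatcache();  // drop any cached stat() results first
$res = is_dir($path) && is_readable($path);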
A:
$path = trim($base . '/' . $uf);
That could be causing it. I'm assuming $base is some sort of root folder you are searching, so if $uf is something like '', '.', or '../' that could return true. We would have to see what values you are using in your foreach to know anything further.
[EDIT]
Doing some more looking, the above code works fine on OpenBSD 4.3 with PHP 5.2.
| php is_dir returns true for non-existent folder | Has anyone encountered this oddity?
I'm checking for the existence of a number of directories in one of my unit tests. is_dir is reporting true (1) in spite of the folder not existing at the time it is called. The code looks like this (with a few extraneous intermediate vars to ease debugging):
foreach($userfolders as $uf) {
$uf = sprintf($uf, $user_id);
$uf = ltrim($uf,'/');
$path = trim($base . '/' . $uf);
$res = is_dir($path); //returns false except last time returns 1
$this->assertFalse($res, $path);
}
The machine running Ubuntu Linux 8.04 with PHP Version 5.2.4-2ubuntu5.3
Things I have checked:
- Paths are full paths
- The same thing happens on two separate machines (both running Ubuntu)
- I have stepped through line by line in a debugger
- Paths genuinely don't exist at the point where is_dir is called
- While the code is paused on this line, I can actually drop to a shell and run
the interactive PHP interpreter and get the correct result
- The paths are all WELL under 256 chars
- I can't imagine a permissions problem as the folder doesn't exist! The parent folder can't be causing permissions problems as the other folders in the loop are correctly reported as missing.
Comments on the PHP docs point to the odd issue with is_dir but not this particular one.
I'm not posting this as a "please help me fix" but in the hope that somebody encountering the same thing can search here and hopefully an answer from somebody else who has seen this!
| [
"I don't think this would cause your problem, but $path does have the trailing slash, correct?\n",
"For what its worth, is_readable can be used as a work around.\n",
"$path = trim($base . '/' . $uf);\n\nThat could be causing it. I'm assuming $base is some sort of root folder you are searching, so if $uf is something like '', '.', or '../' that could return true. We would have to see what values you are using in your foreach to know anything further.\n[EDIT]\nDoing some more looking the above code works fine on OpenBSD 4.3 with PHP 5.2. \n"
] | [
4,
1,
0
] | [] | [] | [
"php"
] | stackoverflow_0000052703_php.txt |
Q:
ASP.Net word count with a custom validator
A requirement for an ASP.Net 2.0 project I'm working on limits a certain field to a max of 10 words (not characters). I'm currently using a CustomValidator control with the following ServerValidate method:
Protected Sub TenWordsTextBoxValidator_ServerValidate(ByVal source As Object, ByVal args As System.Web.UI.WebControls.ServerValidateEventArgs) Handles TenWordsTextBoxValidator.ServerValidate
'' 10 words
args.IsValid = args.Value.Split(" ").Length <= 10
End Sub
Does anyone have a more thorough/accurate method of getting a word count?
A:
You can use one of the built-in validators with a regex that counts the words.
I'm a little rusty with regex so go easy on me:
(\b.*\b){0,10}
A:
This regex seems to be working great:
"^(\b\S+\b\s*){0,10}$"
Update: the above had a few flaws so I ended up using this RegEx:
[\s\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\xBF]+
I split() the string on that regex and use the length of the resulting array to get the correct word count.
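In VB.NET terms, that split might look roughly like this inside the ServerValidate handler -- a sketch, not the poster's exact code:
Dim pattern As String = "[\s\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\xBF]+"
Dim words As String() = System.Text.RegularExpressions.Regex.Split(args.Value.Trim(), pattern)
args.IsValid = words.Length <= 10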
A:
I voted for mharen's answer, and commented on it as well, but since the comments are hidden by default let me explain it again:
The reason you would want to use the regex validator rather than the custom validator is that the regex validator will also automatically validate the regex client-side using javascript, if it's available. If they pass validation it's no big deal, but every time someone fails the client-side validation you save your server from doing a postback.
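For reference, the markup for that might look something like the following (control IDs are made up; the pattern is the one from the answer above):
<asp:RegularExpressionValidator ID="TenWordsRegexValidator" runat="server"
    ControlToValidate="TenWordsTextBox"
    ValidationExpression="^(\b\S+\b\s*){0,10}$"
    ErrorMessage="Please enter 10 words or fewer." />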
| ASP.Net word count with a custom validator | A requirement for an ASP.Net 2.0 project I'm working on limits a certain field to a max of 10 words (not characters). I'm currently using a CustomValidator control with the following ServerValidate method:
Protected Sub TenWordsTextBoxValidator_ServerValidate(ByVal source As Object, ByVal args As System.Web.UI.WebControls.ServerValidateEventArgs) Handles TenWordsTextBoxValidator.ServerValidate
'' 10 words
args.IsValid = args.Value.Split(" ").Length <= 10
End Sub
Does anyone have a more thorough/accurate method of getting a word count?
| [
"You can use one of the builtin validators with a regex that counts the words.\nI'm a little rusty with regex so go easy on me:\n(\\b.*\\b){0,10}\n\n",
"This regex seems to be working great:\n\"^(\\b\\S+\\b\\s*){0,10}$\"\n\nUpdate: the above had a few flaws so I ended up using this RegEx:\n[\\s\\x21-\\x2F\\x3A-\\x40\\x5B-\\x60\\x7B-\\xBF]+\n\nI split() the string on that regex and use the length of the resulting array to get the correct word count.\n",
"I voted for mharen's answer, and commented on it as well, but since the comments are hidden by default let me explain it again:\nThe reason you would want to use the regex validator rather than the custom validator is that the regex validator will also automatically validate the regex client-side using javascript, if it's available. If they pass validation it's no big deal, but every time someone fails the client-side validation you save your server from doing a postback.\n"
] | [
5,
1,
0
] | [] | [] | [
".net_2.0",
"asp.net",
"validation",
"vb.net"
] | stackoverflow_0000052591_.net_2.0_asp.net_validation_vb.net.txt |
Q:
Perform token replacements using VS post-build event command?
I would like to "post-process" my app.config file and perform some token replacements after the project builds.
Is there an easy way to do this using a VS post-build event command?
(Yeah I know I could probably use NAnt or something, looking for something simple.)
A:
Take a look at XmlPreProcess. We use it for producing different config files for our testing and live deployment packages.
We execute it from a NAnt script as part of a continuous build, but since it's a console app, I see no reason why you couldn't add a call in your project's post-build event instead.
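As a rough illustration only -- the XmlPreProcess switches below are placeholders, so check the tool's own documentation for the real ones -- a post-build event command line could look something like:
"$(ProjectDir)tools\XmlPreProcess.exe" /i:"$(TargetDir)$(TargetFileName).config" /o:"$(TargetDir)$(TargetFileName).config" /d:Environment=Test
The $(ProjectDir), $(TargetDir) and $(TargetFileName) macros are standard Visual Studio build macros.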
| Perform token replacements using VS post-build event command? | I would like to "post-process" my app.config file and perform some token replacements after the project builds.
Is there an easy way to do this using a VS post-build event command?
(Yeah I know I could probably use NAnt or something, looking for something simple.)
| [
"Take a look at XmlPreProcess. We use it for producing different config files for our testing and live deployment packages. \nWe execute it from a nant script as part of a continuous build but, since it's a console app, I see no reason why you coudn't add a call in your project's post-build event instead\n"
] | [
1
] | [] | [] | [
".net",
"visual_studio_2005",
"visual_studio_2008"
] | stackoverflow_0000052730_.net_visual_studio_2005_visual_studio_2008.txt |
Q:
How do I add a div to DOM and pick it up later
I think this is specific to IE 6.0 but...
In JavaScript I add a div to the DOM. I assign an id attribute. When I later try to pick up the div by the id all I get is null.
Any suggestions?
Example:
var newDiv = document.createElement("DIV");
newDiv.setAttribute("ID", "obj_1000");
document.appendChild(newDiv);
alert("Added:" + newDiv.getAttribute("ID") + ":" + newDiv.id + ":" + document.getElementById("obj_1000") );
Alert prints "::null"
Seems to work fine in Firefox 2.0+
A:
In addition to what the other answers suggest (that you need to actually insert the element into the DOM for it to be found via getElementById()), you also need to use a lower-case attribute name in order for IE6 to recognize it as the id:
var newDiv = document.createElement("DIV");
newDiv.setAttribute("id", "obj_1000");
document.body.appendChild(newDiv);
alert("Added:"
+ newDiv.getAttribute("id")
+ ":" + newDiv.id + ":"
+ document.getElementById("obj_1000") );
...responds as expected:
Added:obj_1000:obj_1000:[object]
According to the MSDN documentation for setAttribute(), up to IE8 there is an optional third parameter that controls whether or not it is case sensitive with regard to the attribute name. Guess what the default is...
A:
The div needs to be added to an element for it to be part of the document.
document.body.appendChild(newDiv);
alert( document.getElementById("obj_1000") );
A:
You have to add the div to the dom.
// Create the Div
var oDiv = document.createElement('div');
document.body.appendChild(oDiv);
A:
newDiv.setAttribute( "ID", "obj_1000" );
should be
newDiv.id = "obj_1000";
A:
Hummm, thanks for putting me on the right track guys...this was odd but it turns out that if I changed the case to lower case, everything started working just fine...
Finished Result:
var newDiv = document.createElement("DIV");
newDiv.setAttribute("id", "obj_1000");
document.body.appendChild(newDiv);
alert("Added:" +
newDiv.getAttribute("id") + ":" +
newDiv.id + ":" +
document.getElementById("obj_1000"));
ODD...VERY ODD
| How do I add a div to DOM and pick it up later | I think this is specific to IE 6.0 but...
In JavaScript I add a div to the DOM. I assign an id attribute. When I later try to pick up the div by the id all I get is null.
Any suggestions?
Example:
var newDiv = document.createElement("DIV");
newDiv.setAttribute("ID", "obj_1000");
document.appendChild(newDiv);
alert("Added:" + newDiv.getAttribute("ID") + ":" + newDiv.id + ":" + document.getElementById("obj_1000") );
Alert prints "::null"
Seems to work fine in Firefox 2.0+
| [
"In addition to what the other answers suggest (that you need to actually insert the element into the DOM for it to be found via getElementById()), you also need to use a lower-case attribute name in order for IE6 to recognize it as the id:\nvar newDiv = document.createElement(\"DIV\"); \nnewDiv.setAttribute(\"id\", \"obj_1000\");\ndocument.body.appendChild(newDiv);\n\nalert(\"Added:\"\n + newDiv.getAttribute(\"id\") \n + \":\" + newDiv.id + \":\" \n + document.getElementById(\"obj_1000\") );\n\n...responds as expected:\nAdded:obj_1000:obj_1000:[object]\n\n\nAccording to the MSDN documentation for setAttribute(), up to IE8 there is an optional third parameter that controls whether or not it is case sensitive with regard to the attribute name. Guess what the default is...\n",
"The div needs to be added to an element for it to be part of the document.\ndocument.appendChild(newDiv);\n\nalert( document.getElementById(\"obj_1000\") );\n\n",
"You have to add the div to the dom.\n// Create the Div\nvar oDiv = document.createElement('div');\ndocument.body.appendChild(oDiv);\n\n",
"newDiv.setAttribute( \"ID\", \"obj_1000\" );\nshould be\nnewDiv.id = \"obj_1000\";\n",
"Hummm, thanks for putting me on the right track guys...this was odd but it turns out that if I change the case to lower case, everything starting working just fine...\nFinished Result:\nvar newDiv = document.createElement(\"DIV\");\nnewDiv.setAttribute(\"id\", \"obj_1000\");\ndocument.appendChild(newDiv);\n\nalert(\"Added:\" +\n newDiv.getAttribute(\"id\") + \":\" +\n newDiv.id + \":\" +\n document.getElementById(\"obj_1000\"));\n\nODD...VERY ODD\n"
] | [
8,
3,
1,
0,
0
] | [] | [] | [
"css",
"dhtml",
"javascript"
] | stackoverflow_0000052785_css_dhtml_javascript.txt |
Q:
GridView will not update underlying data source
So I've been pounding on this problem all day. I've got a LinqDataSource that points to my model and a GridView that consumes it. When I attempt to do an update on the GridView, it does not update the underlying data source. I thought it might have to do with the LinqDataSource, so I added a SqlDataSource and the same thing happens. The aspx is as follows (the code-behind page is empty):
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
ConnectionString="Data Source=devsql32;Initial Catalog=Steam;Persist Security Info=True;"
ProviderName="System.Data.SqlClient"
SelectCommand="SELECT [LangID], [Code], [Name] FROM [Languages]" UpdateCommand="UPDATE [Languages] SET [Code]=@Code WHERE [LangID]=@LangId">
</asp:SqlDataSource>
<asp:GridView ID="_languageGridView" runat="server" AllowPaging="True"
AllowSorting="True" AutoGenerateColumns="False" DataKeyNames="LangId"
DataSourceID="SqlDataSource1">
<Columns>
<asp:CommandField ShowDeleteButton="True" ShowEditButton="True" />
<asp:BoundField DataField="LangId" HeaderText="Id" ReadOnly="True" />
<asp:BoundField DataField="Code" HeaderText="Code" />
<asp:BoundField DataField="Name" HeaderText="Name" />
</Columns>
</asp:GridView>
<asp:LinqDataSource ID="_languageDataSource" ContextTypeName="GeneseeSurvey.SteamDatabaseDataContext" runat="server" TableName="Languages" EnableInsert="True" EnableUpdate="true" EnableDelete="true">
</asp:LinqDataSource>
What in the world am I missing here? This problem is driving me insane.
A:
You are missing the <UpdateParameters> sections of your DataSources.
LinqDataSource.UpdateParameters
SqlDataSource.UpdateParameters
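For the SqlDataSource in the question, a minimal sketch might be (the parameter types are assumptions based on the column names):
<asp:SqlDataSource ID="SqlDataSource1" runat="server" ...>
    <UpdateParameters>
        <asp:Parameter Name="Code" Type="String" />
        <asp:Parameter Name="LangId" Type="Int32" />
    </UpdateParameters>
</asp:SqlDataSource>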
A:
It turns out that we had a DataBind() call in the Page_Load of the master page of the aspx file that was probably causing the state of the GridView to get tossed out on every page load.
As a note - update parameters for a LINQ query are not required unless you want to set them to some non-null default.
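A common guard for that situation, sketched in C# (assuming the DataBind() call lives in the master page's Page_Load):
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        DataBind();
    }
}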
A:
This is a total shot in the dark since I haven't used ASP at all.
I've been just learning XAML and WPF, which appears to be very similar to what you've posted above and I know that for some UI controls you need to specify the binding mode to two-way in order to get updates in both directions.
| GridView will not update underlying data source | So I've been pounding on this problem all day. I've got a LinqDataSource that points to my model and a GridView that consumes it. When I attempt to do an update on the GridView, it does not update the underlying data source. I thought it might have to do with the LinqDataSource, so I added a SqlDataSource and the same thing happens. The aspx is as follows (the code-behind page is empty):
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
ConnectionString="Data Source=devsql32;Initial Catalog=Steam;Persist Security Info=True;"
ProviderName="System.Data.SqlClient"
SelectCommand="SELECT [LangID], [Code], [Name] FROM [Languages]" UpdateCommand="UPDATE [Languages] SET [Code]=@Code WHERE [LangID]=@LangId">
</asp:SqlDataSource>
<asp:GridView ID="_languageGridView" runat="server" AllowPaging="True"
AllowSorting="True" AutoGenerateColumns="False" DataKeyNames="LangId"
DataSourceID="SqlDataSource1">
<Columns>
<asp:CommandField ShowDeleteButton="True" ShowEditButton="True" />
<asp:BoundField DataField="LangId" HeaderText="Id" ReadOnly="True" />
<asp:BoundField DataField="Code" HeaderText="Code" />
<asp:BoundField DataField="Name" HeaderText="Name" />
</Columns>
</asp:GridView>
<asp:LinqDataSource ID="_languageDataSource" ContextTypeName="GeneseeSurvey.SteamDatabaseDataContext" runat="server" TableName="Languages" EnableInsert="True" EnableUpdate="true" EnableDelete="true">
</asp:LinqDataSource>
What in the world am I missing here? This problem is driving me insane.
| [
"You are missing the <UpdateParameters> sections of your DataSources.\nLinqDataSource.UpdateParameters\nSqlDataSource.UpdateParameters\n",
"It turns out that we had a DataBind() call in the Page_Load of the master page of the aspx file that was probably causing the state of the GridView to get tossed out on every page load.\nAs a note - update parameters for a LINQ query are not required unless you want to set them some non-null default.\n",
"This is a total shot in the dark since I haven't used ASP at all.\nI've been just learning XAML and WPF, which appears to be very similar to what you've posted above and I know that for some UI controls you need to specify the binding mode to two-way in order to get updates in both directions.\n"
] | [
2,
1,
0
] | [] | [] | [
"asp.net",
"data_binding",
"linq_to_sql"
] | stackoverflow_0000052634_asp.net_data_binding_linq_to_sql.txt |
Q:
.NET Production Debugging
I've had a Windows app in production for a while now, and have it set up to send us error reports when it throws exceptions. Most of these are fairly descriptive and help me find the problem very quickly (I use the MS Application Exception Block).
On a few occasions I have reports that are issues that I can't reproduce, and seem to only happen on a few client machines.
I don't have physical access to these client machines; what are some strategies I can use for debugging? Would it be better to build some tracing into the code, or are there some other alternatives?
Thank you.
Edit: I should have been more clear: The error reports that I get do have the stack trace, but since it's production code, it doesn't indicate the exact line that caused the exception, just the method in which it was thrown.
A:
You are on the right track. You need to create a tracking module which logs actions/exceptions locally.
You can then have a button or a menu option that the user can click to either automatically email you this information the moment the issue occurs, or they can have the option to get hold of the file so that they can transfer it to you in any other way.
You can even build in a diagnostics mode that runs an integrity check on the system and sends you a report (maybe it runs all your unit tests to see if they work on that system).
A:
One option is to generate a (mini-)dump file as close to the point where the exception is thrown as possible. This article talks about how to do this from managed code.
You could then load the dump file into Visual Studio or WinDbg and examine it with the aid of SOS
A:
I always use this module from Jeff for unhandled exceptions, sending me an email with stacktrace etc.
A:
Smart Inspect from Gurock Software has come in handy many times for me. It is very easy to put into a .NET application and gives you extremely powerful control when analyzing log files. It has log levels that allow you to turn off certain functionality except in certain cases so you don't lose performance.
They even have server software that your software can connect to in order to save logs when you do not have full access to the machines. For example, you could have a server running at www.yourdomain.com. Your software would have a configuration option to turn on debugging. Smart Inspect would be configured to send the log data to your server (and optionally to a local file) so that you could get live logging no matter where the software is being run.
Smart Inspect is very easy to configure and has many features that you can use to help. I've used it to debug high-impact multi-threaded server applications on the fly without taking down the machines. It has all the hooks to keep track of different processes, threads and machines.
A:
I'd make use of the event log. Take a look here:
http://support.microsoft.com/kb/307024
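A minimal sketch of writing an exception there (the source name is made up, ex is assumed to be the caught exception, and CreateEventSource needs admin rights, so registration is usually done once at install time):
using System.Diagnostics;

const string source = "MyWinFormsApp"; // hypothetical source name
if (!EventLog.SourceExists(source))
    EventLog.CreateEventSource(source, "Application");
EventLog.WriteEntry(source, ex.ToString(), EventLogEntryType.Error);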
| .NET Production Debugging | I've had a Windows app in production for a while now, and have it set up to send us error reports when it throws exceptions. Most of these are fairly descriptive and help me find the problem very quickly (I use the MS Application Exception Block).
On a few occasions I have reports that are issues that I can't reproduce, and seem to only happen on a few client machines.
I don't have physical access to these client machines; what are some strategies I can use for debugging? Would it be better to build some tracing into the code, or are there some other alternatives?
Thank you.
Edit: I should have been more clear: The error reports that I get do have the stack trace, but since it's production code, it doesn't indicate the exact line that caused the exception, just the method in which it was thrown.
| [
"You are on the right track. You need to create a tracking module which logs actions/exceptions locally.\nYou can then have a button or a menu option that the user can click to either automatically email you this information the moment the issue occurs, or they can have the option to get hold of the file so that they can transfer it to you in any other way.\nYou can even build-in a diagnostics code to run an integrity check on the system and sends you a report (maybe it runs all your unit tests to see if they work on that system).\n",
"One option is to generate a (mini-)dump file as close to the point where the exception is thrown as possible. This article talks about how to do this from managed code.\nYou could then load the dump file into Visual Studio or WinDbg and examine it with the aid of SOS\n",
"I always use this module from Jeff for unhandled exceptions, sending me an email with stacktrace etc.\n",
"Smart Inspect from Gurock Software has come in handy many times for me. It is very easy to put into a .NET application and gives you extremely powerful control when analyzing log files. It has log levels that allow you to turn off certain functionality except in certain cases so you don't lose performance. \nThey even have server software that your software can connect to to save logs when you do not have full access to the machines. For example, you could have a server running at www.yourdomain.com. Your software would have a configuration option to turn on debugging. Smart Inspect would be configured to send the log data to your server (And optionally to a local file) so that you could get live logging no matter where the software is being run.\nSmart Inspect is very easy to configure and has many features that you can use to help. I've use it to debug high impact multi-threaded server applications on the fly without taking down the machines. It has all the hooks to keep track of different processes, threads and machines. \n",
"I'd make use of the event log. Take a look here:\nhttp://support.microsoft.com/kb/307024\n"
] | [
2,
2,
1,
1,
0
] | [] | [] | [
".net",
"debugging"
] | stackoverflow_0000052808_.net_debugging.txt |