Dataset Viewer
Auto-converted to Parquet
Columns:
- query_id: string (32 characters)
- query: string (7 to 2.91k characters)
- positive_passages: list (1 to 7 items)
- negative_passages: list (10 to 100 items)
- subset: string (7 classes)
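Since the data is available as Parquet, a row can be inspected directly with standard tooling. The snippet below is a minimal sketch, assuming a locally downloaded Parquet shard; the file name "train-00000-of-00001.parquet" is a placeholder, as the actual repository and shard names are not given here.

```python
import pandas as pd

# Load one Parquet shard of the dataset (file name is a placeholder).
df = pd.read_parquet("train-00000-of-00001.parquet")

# Expected columns: query_id, query, positive_passages, negative_passages, subset
print(df.columns.tolist())

# Inspect the first row: each passage entry is a dict with docid, text, title.
row = df.iloc[0]
print(row["query_id"], row["subset"])
print(row["query"])
print(len(row["positive_passages"]), "positive passages")
print(len(row["negative_passages"]), "negative passages")
print(row["positive_passages"][0]["text"][:200])
```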
873d0cda976a62a1bec378b0f14f4d6f
Can a company charge you for services never requested or received?
[ { "docid": "0a0f16b824e6dab326bf5f18bbd456c0", "text": "In general, you can only be charged for services if there is some kind of contract. The contract doesn't have to be written, but you have to have agreed to it somehow. However, it is possible that you entered into a contract due to some clause in the home purchase contract or the contract with the home owners' association. There are also sometimes services you are legally required to get, such as regular inspection of heating furnaces (though I don't think this translates to automatic contracts). But in any case you would not be liable for services rendered before you entered into the contract, which sounds like it's the case here.", "title": "" }, { "docid": "914cf45781f65096709ea6f6a48237cf", "text": "No. A company cannot bill you for services you did not request nor receive. If they could, imagine how many people would just randomly get bills in their mail. Ignore them. They don't have a contract or agreement with you and can't do anything other than make noise. If they get aggressive or don't stop requesting money, hire an attorney and it will be taken care of.", "title": "" }, { "docid": "c04232d35c3027bae24245c0369769ec", "text": "I have had a couple of businesses do this to me. I simply ask them to come over to talk about the bill. Sometimes this ends it. If they come over then I call the cops to file a report on fraud. A lot of times the police will do nothing unless they have had a load of complaints but it certainly gets the company off your back. And if they are truly unscrupulous it doesn't hurt to get a picture of them talking with the police and their van, and then post the whole situation online - you will see others come forward really quick after doing something like this.", "title": "" }, { "docid": "2663ac52e0b08439c2b736ddc3fd573d", "text": "\"Here's another example of such a practice and the problem it caused. My brother, who lived alone, was missing from work for several days so a co-worker went to his home to search for him and called the local Sheriff's Office for assistance. The local fire department which runs the EMS ambulance was also dispatched in the event there was a medical emergency. They discovered my brother had passed away inside his home and had obviously been dead for days. As our family worked on probate matters to settle his estate following this death, it was learned that the local fire department had levied a bill against my brother's estate for $800 for responding with their ambulance to his home that day. I tried to talk to their commander about this, insisting my brother had not called them, nor had they transported him or even checked his pulse. The commander insisted theirs was common practice - that someone was always billed for their medical response. He would not withdraw his bill for \"\"services\"\". I hate to say, but the family paid the bill in order to prevent delay of his probate issues and from receiving monies that paid for his final expenses.\"", "title": "" } ]
[ { "docid": "2edf29c8d6d138c80ffaab5b810e5260", "text": "If there was some contract in place (even a verbal agreement) that he would complete the work you asked for in return for payment, then you don't have to pay him anything. He hasn't completed the work and what he did do was stolen from another person. He hasn't held up his end of the agreement, so you don't owe squat.", "title": "" }, { "docid": "d5e4ca3bd60328381f8ea5cbd1c4a30f", "text": "If you have not already hired another caterer, potentially your best solution might be to try and work out something with these folks. Presuming of course that they still have access to their equipment, dishware, etc, and to the extent that what you have paid might cover their labor, equipment use etc there might be some way for them to provide the services you have paid for, if you pay for materials such as the food itself directly . This presumes of course that it's only the IRS that they stiffed, and have not had most of their (material) capital assets repossessed or seized. and you still trusted them enough to work out something. Otherwise as Duff points out you will likely need to file a small claims lawsuit and get in line with any other creditors.", "title": "" }, { "docid": "52b93ea21402f1d2f3d73a6d680c120c", "text": "I have already talked to them over the phone and they insist they haven't charged me yet, and I will not be charged. When I informed them I had in fact been charged they agreed it would be reversed. So I have tried to resolve the issue and I don't have any confidence they will reverse the charge as it has not been done yet. They are difficult to communicate which makes the whole process more difficult. Your best next step is to call the credit card company and share this story. I believe the likely result is that the credit card company will initiate a charge back. My question is, is this a valid reason to file a chargeback on my credit card? Yes. If you attempted to work it out with the vendor and it is not working out, this is an appropriate time to initiate a charge back.", "title": "" }, { "docid": "c435f5c350f31fd9c7567c22ec82571e", "text": "Obviously, the credit card's administators know who this charge was submitted by. Contact them, tell them that you don't recognize the charge, and ask them to tell you who it was from. If they can't or won't, tell them you suspect fraud and want it charged back, then wait to see who contacts you to complain that the payment was cancelled. Note that you should charge back any charge you firmly believe is an error, if attempts to resolve it with the company aren't working. Also note that if you really ghink this is fraud, you should contact your bank and ask them to issue a new card number. Standard procedures exist. Use them when appropriate.", "title": "" }, { "docid": "202cf175509a021a1050b9735f8505b3", "text": "\"You have a subscription that costs $25 They have the capabilities to get that $25 from the card on file if you had stopped paying for it, you re-upping the cost of the subscription was more of a courtesy. They would have considered pulling the $25 themselves or it may have gone to collections (or they could courteously ask if you wanted to resubscribe, what a concept) The credit card processing agreements (with the credit card companies) and the FTC would handle such business practices, but \"\"illegal\"\" wouldn't be the word I would use. 
The FTC or Congress may have mandated that an easy \"\"opt-out\"\" number be associated with that kind of business practice, and left it at that.\"", "title": "" }, { "docid": "36bc3419347f5ab9a094d1c7d866fbae", "text": "\"Anything is negotiable. Clearly in the current draft of the contract the company isn't going to calculate or withhold taxes on your behalf - that is your responsibility. But if you want to calculate taxes yourself, and break out the fees you are receiving into several \"\"buckets\"\" on the invoice, the company might agree (they might have to run it past their legal department first). I don't see how that helps anything - it just divides the single fee into two pieces with the same overall total. As @mhoran_psprep points out, it appears that the company expects you to cover your expenses from within your charges. Thus, it's up to you to decide the appropriate fees to charge, and you are assuming the risk that you have estimated your expenses incorrectly. If you want the company to pay you a fee, plus reimburse your expenses, you will need to craft that into the contract. It's not clear what kind of expenses you need to be covered, and sometimes companies will not agree to them. For specific tax rule questions applicable to your locale, you should consult your tax adviser.\"", "title": "" }, { "docid": "639cc7a31d1d784762a35b44780f1a2c", "text": "You definitely have an argument for getting them to reverse the late fee, especially if it hasn't happened very often. (If you are late every month they may be less likely to forgive.) As for why this happens, it's not actually about business days, but instead it's based on when they know that you paid. In general, there are 2 ways for a company to mark a bill as paid: Late Fees: Some systems automatically assign late fees at the start of the day after the due date if money has not been received. In your case, if your bill was due on the 24th, the late fee was probably assessed at midnight of the 25th, and the payment arrived after that during the day of the 25th. You may have been able to initiate the payment on the company's website at 11:59pm on the 24th and not have received a late fee (or whatever their cutoff time is). Suggestion: as a rule of thumb, for utility bills whose due date and amount can vary slightly from month to month, you're usually better off setting up your payments on the company website to pull from your bank account, instead of setting up your bank account to push the payment to the company. This will ensure that you always get the bill paid on time and for the correct amount. If you still would rather push the payment from your bank account, then consider setting up the payment to arrive about 5 days early, to account for holidays and weekends.", "title": "" }, { "docid": "2fb4a9419331064c1938409da6c4e3f8", "text": "Phone conversations are useless if the company is uncooperative, you must take it into the written word so it can be documented. Sent them certified letters and keep copies of everything you send and any written responses from the company. This is how you will get actual action.", "title": "" }, { "docid": "34a9082d8d05827f9fda9ec540a53c71", "text": "W9 is required for any payments. However, in your case - these are not payments, but refunds, i.e.: you're not receiving any income from the company that is subject to tax or withholding rules, you're receiving money that is yours already. 
I do not think they have a right to demand W9 as a condition of refund, and as Joe suggested - would dispute the charge as fraudulent.", "title": "" }, { "docid": "f75c66b588570fe3601c49ee0a1ecd46", "text": "So, since you have no record of picking it up, are you going to do the right thing and claim you never got it? On another note, I was known at the local home depot for being the guy who ordered things online, they actually used my orders to train new people. That was back when buying online got 5% back from Discover.", "title": "" }, { "docid": "1cc4f7ba9a0c307acb4c55a928045ef2", "text": "Inform the company that you didn't receive the payment. Only they can trace the payment via their bank.", "title": "" }, { "docid": "cf2b2bc6c3b544fa27f5fbbea273dbca", "text": "Well, it really depends on for how long the quote has been made. But yes, when you're honoring it, you should let them know that this is a once of thing and that you're out of pocket doing it. Most people will understand and when you make the appropriate quote next time around, especially when elaborate where the additional cost that you did not account for initially, come from. It's important to maintain customer trust by being transparent. You can justify higher prices with time needed, material needed or whatever comes to mind. It's just important to convince that customer that without it, they wouldn't get this superb service that they're getting now.", "title": "" }, { "docid": "7b0436dec2a966beeef456ac1afa55a3", "text": "not if it's only Bob and a couple others that are having the problem. The company is spending more money on the wages of the guy helping him out than what Bob brought to the company with his purchase. There's no sense in paying for a customer.", "title": "" }, { "docid": "6c76b97fce53688c272eebaeee2f0c8d", "text": "What you are describing here is the opposite of a problem: You're trying to contact a debt-collector to pay them money, but THEY'RE ignoring YOU and won't return your calls! LOL! All joking aside, having 'incidental' charges show up as negative marks on your credit history is an annoyance- thankfully you're not the first to deal with such problems, and there are processes in place to remedy the situation. Contact the credit bureau(s) on which the debt is listed, and file a petition to have it removed from your history. If everything that you say here is true, then it should be relatively easy. Edit: See here for Equifax's dispute resolution process- it sounds like you've already completed the first two steps.", "title": "" }, { "docid": "316710461de83750af605d1897addf25", "text": "Chris, since you own your own company, nobody can stop you from charging your personal expenses to your business account. IRS is not a huge fan of mixing business and personal expenses and this practice might indicate to them that you are not treating your business seriously, and it should classify your business as a hobby. IRS defines deductible business expense as being both: ordinary AND necessary. Meditation is not an ordinary expense (other S-corps do not incur such expense.) It is not a necessary expense either. Therefore, you cannot deduct this expense. http://www.irs.gov/Businesses/Small-Businesses-&-Self-Employed/Deducting-Business-Expenses", "title": "" } ]
fiqa
36a898fc52389242f06a91f2eec42c9b
How can I save money on a gym / fitness membership? New Year's Resolution is to get in shape - but on the cheap!
[ { "docid": "3becf428add18f59ba38d20807e3f7d7", "text": "Shop around for Gym January is a great time to look because that's when most people join and the gyms are competing for your business. Also, look beyond the monthly dues. Many gyms will give free personal training sessions when you sign up - a necessity if you are serious about getting in shape! My gym offered a one time fee for 3 years. It cost around $600 which comes out to under $17 a month. Not bad for a new modern state of the art gym.", "title": "" }, { "docid": "d98599e2bb8795a543c46c226255323c", "text": "If you're determined to save money, find ways to integrate exercise into your daily routine and don't join a gym at all. This makes it more likely you'll keep it up if it is a natural part of your day. You could set aside half the money you would spend on the gym towards some of the options below. I know it's not always practical, especially in the winter, but here are a few things you could do. One of the other answers makes a good point. Gym membership can be cost effective if you go regularly, but don't kid yourself that you'll suddenly go 5 times a week every week if you've not done much regular exercise. If you are determined to join a gym, here are a few other things to consider.", "title": "" }, { "docid": "a5965eca10891f12b394ad3541cdc32a", "text": "Try a gym for a month before you sign up on any contracts. This will also give you time to figure out if you are the type who can stick with a schedule to workout on regular basis. Community centres are cost effective and offer pretty good facilities. They have monthly plans as well so no long term committments.", "title": "" }, { "docid": "eb1e4693e06138828d8a9809185fd27e", "text": "Find a physical activity or programme that interests you. Memberships only have real value if you use them. Consider learning a martial art like karate, aikido, kung fu, tai kwan do, judo, tai chi chuan. :-) Even yoga is a good form of exercise. Many of these are offered at local community centres if you just want to try it out without worrying about the cost initially. Use this to gauge your interest before considering more advanced clubs. One advantage later on if you stay with it long enough - some places will compensate you for being a junior or even associate instructor. Regardless of whether this is your interest or if the gym membership is more to your liking real value is achieved if you have a good routine and interest in your physical fitness activity. It also helps to have a workout buddy or partner. They will help motivate you to try even when you don't feel like working out.", "title": "" }, { "docid": "847a632b12e6877c7889efada52dfa79", "text": "The gym I used to use was around £35-40 a month, its quite a big whack but if you think about it; its pretty good value for money. That includes gym use, swimming pool use, and most classes Paying for a gym session is around £6 a go, so if you do that 3 times a week, then make use of the other facilities like swimming at the weekends, maybe a few classes on the nights your not at the gym it does work out ok As for deals, my one used to do family membership deals, and I think things like referring a friend gives you money off etc. They will probably also put on some deals in January since lots of people want to give it a go being new year and all", "title": "" }, { "docid": "63309a9b0948785f9f5d96857b4dde78", "text": "Look for discounts from a health insurance provider, price club, professional memberships or credit cards. 
That goes for a lot of things besides health memberships. My wife is in a professional woman's association for networking at work. A side benefit is an affiliate network they offer for discounts of lots of things, including gym memberships.", "title": "" }, { "docid": "3e873c2f7acf9a5c8ec012a8b705c129", "text": "I came across an article posted at Squawkfox last week. It's particularly relevant to answering this question. See 10 Ways to Cut Your Fitness Membership Costs. Here's an excerpt: [...] If you’re in the market for a shiny new gym membership, it may be wise to read the fine print and know your rights before agreeing to a fitness club contract. No one wants to be stuck paying for a membership they can no longer use, for whatever reason. But if you’re revved and ready to burn a few calories, here are ten ways to get fitter while saving some cash on a fitness club or gym membership. Yay, fitness tips! [...] Check it out!", "title": "" } ]
[ { "docid": "950c17269f8da50f264a91dc43c67c1d", "text": "Saving for retirement is important. So is living within one's means. Also--wear your sunscreen every day, rain or shine, never stop going to the gym, stay the same weight you were in high school, and eat your vegetables if you want to pass for 30 when you are 50.", "title": "" }, { "docid": "4abd220e2e701da0dd7a47df87939235", "text": "It depends on you. If you're not an aggressive shopper and travel , you'll recoup your membership fee in hotel savings with one or two stays. Hilton brands, for example, give you a 10% discount. AARP discounts can sometimes be combined with other offers as well. From an insurance point of view, you should always shop around, but sometimes group plans like AARP's have underwriting standards that work to your advantage.", "title": "" }, { "docid": "bac44a8c730685829aae631e9b51a6dc", "text": "\"Okay. Savings-in-a-nutshell. So, take at least year's worth of rent - $30k or so, maybe more for additional expenses. That's your core emergency fund for when you lose your job or total a few cars or something. Keep it in a good savings account, maybe a CD ladder - but the point is it's liquid, and you can get it when you need it in case of emergency. Replenish it immediately after using it. You may lose a little cash to inflation, but you need liquidity to protect you from risk. It is worth it. The rest is long-term savings, probably for retirement, or possibly for a down payment on a home. A blended set of stocks and bonds is appropriate, with stocks storing most of it. If saving for retirement, you may want to put the stocks in a tax-deferred account (if only for the reduced paperwork! egads, stocks generate so much!). Having some money (especially bonds) in something like a Roth IRA or a non-tax-advantaged account is also useful as a backup emergency fund, because you can withdraw it without penalties. Take the money out of stocks gradually when you are approaching the time when you use the money. If it's closer than five years, don't use stocks; your money should be mostly-bonds when you're about to use it. (And not 30-year bonds or anything like that either. Those are sensitive to interest rates in the short term. You should have bonds that mature approximately the same time you're going to use them. Keep an eye on that if you're using bond funds, which continually roll over.) That's basically how any savings goal should work. Retirement is a little special because it's sort of like 20 years' worth of savings goals (so you don't want all your savings in bonds at the beginning), and because you can get fancy tax-deferred accounts, but otherwise it's about the same thing. College savings? Likewise. There are tools available to help you with this. An asset allocation calculator can be found from a variety of sources, including most investment firms. You can use a target-date fund for something this if you'd like automation. There are also a couple things like, say, \"\"Vanguard LifeStrategy funds\"\" (from Vanguard) which target other savings goals. You may be able to understand the way these sorts of instruments function more easily than you could other investments. You could do a decent job for yourself by just opening up an account at Vanguard, using their online tool, and pouring your money into the stuff they recommend.\"", "title": "" }, { "docid": "7601e04f3bc71c067101f24687e82a63", "text": "Track your expenses. Find out where your money is going, and target areas where you can reduce expenses. 
Some examples: I was spending a lot on food, buying too much packaged food, and eating out too much. So I started cooking from scratch more and eating out less. Now, even though I buy expensive organic produce, imported cheese, and grass-fed beef, I'm spending half of what I used to spend on food. It could be better. I could cut back on meat and eat out even less. I'm working on it. I was buying a ton of books and random impulsive crap off of Amazon. So I no longer let myself buy things right away. I put stuff on my wish list if I want it, and every couple of months I go on there and buy myself a couple of things off my wishlist. I usually end up realizing that some of the stuff on there isn't something I want that badly after all, so I just delete it from my wishlist. I replaced my 11-year-old Jeep SUV with an 11-year-old Saturn sedan that gets twice the gas mileage. That saves me almost $200/month in gasoline costs alone. I had cable internet through Comcast, even though I don't have a TV. So I went from a $70/month cable bill to a $35/month DSL bill, which cut my internet costs in half. I have an iPhone and my bill for that is $85/month. That's insane, with how little I talk on the phone and send text messages. Once it goes out of contract, I plan to replace it with a cheap phone, possibly a pre-paid. That should cut my phone expenses in half, or even less. I'll keep my iPhone, and just use it when wifi is available (which is almost everywhere these days).", "title": "" }, { "docid": "09b119db97e23f1561e931465bf82e81", "text": "Agree wholeheartedly with the first point - keep track! It's like losing weight, the first step is to be aware of what you are doing. It also helps to have a goal (e.g. pay for a trip to Australia, have X in my savings account), and then with each purchase ask 'what will I do with this when I go to Australia' or 'how does this help towards goal x?' Thrift stores and the like require some time searching but can be good value. If you think you need something, watch for sales too.", "title": "" }, { "docid": "3d05671fdb3c36883abcde29fd83fabc", "text": "I make it a habit at the end of every day to think about how much money I spent in total that day, being mindful of what was essential and wasn't. I know that I might have spent $20 on a haircut (essential), $40 on groceries (essential) and $30 on eating out (not essential). Then I realize that I could have just spent $60 instead of $90. This habit, combined with the general attitude that it's better to have not spent some mone than to have spent some money, has been pretty effective for me to bring down my monthly spending. I guess this requires more motivation than the other more-involved techniques given here. You have to really want to reduce your spending. I found motivation easy to come by because I was spending a lot and I'm still looking for a job, so I have no sources of income. But it's worked really well so far.", "title": "" }, { "docid": "b5784f5173fee940085b18abefd8ac43", "text": "The best way to save on clothes is up to you. I have friends who save all year for two yearly shopping trips to update anything that may need updating at the time. By allowing themselves only two trips, they control the money spent. Bring it in cash and stop buying when you run out. On the other hand in my family we shop sales. When we determine that we need something we wait until we find a sale. 
When we see an exceptionally good sale on something we know we will need (basic work dress shoes, for example), we'll purchase it and save it until the existing item it is replacing has worn out. Our strategy is to know what we need and buy it when the price is right. We tend to wait on anything that isn't on sale until we can find the right item at a price we like, which sometimes means stretching the existing piece of clothing it is replacing until well after its prime. If you've got a list you're shopping from, you know what you need. The question becomes: how will you control your spending best? Carefully shopping sales and using coupons, or budgeting for a spree within limits?", "title": "" }, { "docid": "5f218c61466d5c2c295984a1d83a152b", "text": "\"The way I approach \"\"afford to lose\"\", is that you need to sit down and figure out the amount of money you need at different stages of your life. I can look at my current expenses and figure out what I will always roughly be paying - bills, groceries, rent/mortgage. I can figure out when I want to retire and how much I want to live on - I generally group 401k and other retirement separately to what I want to invest. With these numbers I can figure out how much I need to save to achieve this goal. Maybe you want to purchase a house in 5 years - figure out the rough down payment and include that in your savings plan. Continue for all capital purchases that you can think you would aim for. Subtract your income from this and you have the amount of money you have greater discretion over. Subtracting current liabilities (4th of July holiday... christmas presents) and you have the amount you could \"\"afford to lose\"\". As to the asset allocation you should look at, as others have mentioned that the younger you the greater your opportunity is to recoup losses. Personally I would disagree - you should have some plan for the investment and use that goal to drive your diversification.\"", "title": "" }, { "docid": "593c2052f536084940c862901c5f2843", "text": "Interesting, that makes some sense. With Planet Fitness, my understanding is that their cost structure is slanted towards fixed costs. Whether their members come to the gym or not doesn't matter; they still have to pay rent, labor, utilities, buy equipment, etc. Those costs don't change much if people subscribe and don't show up vs. subscribe and do show up. Moviepass seems to be almost entirely variable; their costs are buying movie tickets when people order. They would love it if people signed up and never used it, but unlike PF if people DID use it they'd be completely screwed. It's a risky plan, but it just might work as long as people don't figure out a way to game the system (or, you know, turn out to be movie buffs).", "title": "" }, { "docid": "f35317548c0342e1ecd3c69b1d7c2e3e", "text": "\"A trick that works for some folks: \"\"Pay yourself first.\"\" Have part of your paycheck put directly into an account that you promise yourself you won't touch except for some specific purpose (eg retirement). If that money is gone before it gets to your pocket, it's much less likely to be spent. US-specific: Note that if your employer offers a 401k program with matching funds, and you aren't taking advantage of that, you are leaving free money on the table. That does put an additional barrier between you and the money until you retire, too. 
(In other countries, look for other possible matching fundsand/or tax-advantaged savings programs; for that matter there are some other possibilities in the US, from education savings plans to discounted stock purchase that you could sell immediately for a profit. I probably should be signed up for that last...)\"", "title": "" }, { "docid": "8dc02c817c798f53a098e1f8c3943822", "text": "I've often encountered the practices you describe in the Netherlands too. This is how I deal with it. Avoid gyms with aggressive sales tactics My solution is to only sign up for a gym that does not seem to have one-on-one sales personnel and aggressive sales tactics, and even then to read the terms and conditions thoroughly. I prefer to pay them in monthly terms that I myself initiate, instead of allowing them to charge my account when they please. [1] Avoid gyms that lack respect for their members Maybe you've struggled with the choice for a gym, because one of those 'evil' gyms is very close to home and has really excellent facilities. You may be tempted to ask for a one-off contract without the shady wording, but I advise against this. Think about it this way: Even though regular T&C would not apply, the spirit with which they were drawn up lives on among gym personnel/management. They're simply not inclined to act in your best interest, so it's still possible to run into problems when ending your membership. In my opinion, it's better to completely avoid such places because they are not worthy of your trust. Of course this advice goes beyond gym memberships and is applicable to life in general. Hope this helps. [1] Credit Cards aren't very popular in the Netherlands, but we have a charging mechanism called 'automatic collection' which allows for arbitrary merchant-initiated charges.", "title": "" }, { "docid": "a816d89279fc582023e15c450eb92628", "text": "\"There's plenty of advice out there about how to set up a budget or track your expenses or \"\"pay yourself first\"\". This is all great advice but sometimes the hardest part is just getting in the right frugal mindset. Here's a couple tricks on how I did it. Put yourself through a \"\"budget fire drill\"\" If you've never set a budget for yourself, you don't necessarily need to do that here... just live as though you had lost your job and savings through some imaginary catastrophe and live on the bare minimum for at least a month. Treat every dollar as though you only had a few left. Clip coupons, stop dining out, eat rice and beans, bike or car pool to work... whatever means possible to cut costs. If you're really into it, you can cancel your cable/Netflix/wine of the month bills and see how much you really miss them. This exercise will get you used to resisting impulse buys and train you to live through an actual financial disaster. There's also a bit of a game element here in that you can shoot for a \"\"high score\"\"... the difference between the monthly expenditures for your fire drill and the previous month. Understand the power of compound interest. Sit down with Excel and run some numbers for how your net worth will change long term if you saved more and paid down debt sooner. It will give you some realistic sense of the power of compound interest in terms that relate to your specific situation. Start simple... pick your top 10 recent non-essential purchases and calculate how much that would be worth if you had invested that money in the stock market earning 8% over the next thirty years. 
Then visualize your present self sneaking up to your future self and stealing that much money right out of your own wallet. When I did that, it really resonated with me and made me think about how every dollar I spent on something non-essential was a kick to the crotch of poor old future me.\"", "title": "" }, { "docid": "0309d5e6df68d1710cf557e3de38ac2c", "text": "Congrats on your first real job! Save as much as your can while keeping yourself (relatively) comfortable. As to where to put your hard earned money, first establish why you want to save the money in the first place. Money is a mean to acquire the things we want or need in your life or the lives of others. Once your goals are set, then follow this order:", "title": "" }, { "docid": "397220883f559435621d173d3f45c35c", "text": "You're asking for a LOT. I mean, entire lives and volumes upon volumes of information is out there. I'd recommend Benjamin Graham for finance concepts (might be a little bit dry...), *A Random Walk Down Wall Street,* by Burton Malkiel and *A Concise Guide to Macro Economics* by David Moss.", "title": "" }, { "docid": "d0e336eb05e4701401e2367555b6ec53", "text": "Banks make money by charging fees on products and charging interest on loans. If you keep close to a $0 average balance in your account, and they aren't charging you any fees, then yes, your account is not profitable for them. That's ok. It's not costing them much to keep you as a customer, and some day you may start keeping a balance with them or apply for a loan. The bank is taking a chance that you will continue to be a loyal customer and will one day become profitable for them. Just be on the lookout for a change in their fee structure. Sometimes banks drop customers or start charging fees in cases like yours.", "title": "" } ]
fiqa
6eb6a1ce9252ca457bb221dea84d1437
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)
[ { "docid": "cce513c48e630ab3f072f334d00b67dc", "text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press", "title": "" }, { "docid": "eea39002b723aaa9617c63c1249ef9a6", "text": "Generative Adversarial Networks (GAN) [1] are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.", "title": "" } ]
[ { "docid": "757441e95be19ca4569c519fb35adfb7", "text": "Autonomous driving in public roads requires precise localization within the range of few centimeters. Even the best current precise localization system based on the Global Navigation Satellite System (GNSS) can not always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finder and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDARs) are very expensive sensors and stereo vision requires powerful dedicated hardware to process the cameras information. In this context, this article presents a low-cost architecture of sensors and data fusion algorithm capable of autonomous driving in narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings in the vehicle's backwards. This information is used to localize the vehicle in a map, that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system on a real autonomous driving situation.", "title": "" }, { "docid": "21cde70c4255e706cb05ff38aec99406", "text": "In this paper, a multiple classifier machine learning (ML) methodology for predictive maintenance (PdM) is presented. PdM is a prominent strategy for dealing with maintenance issues given the increasing need to minimize downtime and associated costs. One of the challenges with PdM is generating the so-called “health factors,” or quantitative indicators, of the status of a system associated with a given maintenance issue, and determining their relationship to operating costs and failure risk. The proposed PdM methodology allows dynamical decision rules to be adopted for maintenance management, and can be used with high-dimensional and censored data problems. This is achieved by training multiple classification modules with different prediction horizons to provide different performance tradeoffs in terms of frequency of unexpected breaks and unexploited lifetime, and then employing this information in an operating cost-based maintenance decision system to minimize expected costs. The effectiveness of the methodology is demonstrated using a simulated example and a benchmark semiconductor manufacturing maintenance problem.", "title": "" }, { "docid": "6e7d5e2548e12d11afd3389b6d677a0f", "text": "Internet marketing is a field that is continuing to grow, and the online auction concept may be defining a totally new and unique distribution alternative. Very few studies have examined auction sellers and their internet marketing strategies. This research examines the internet auction phenomenon as it relates to the marketing mix of online auction sellers. The data in this study indicate that, whilst there is great diversity among businesses that utilise online auctions, distinct cost leadership and differentiation marketing strategies are both evident. These two approaches are further distinguished in terms of the internet usage strategies employed by each group.", "title": "" }, { "docid": "085155ebfd2ac60ed65293129cb0bfee", "text": "Today, Convolution Neural Networks (CNN) is adopted by various application areas such as computer vision, speech recognition, and natural language processing. 
Due to a massive amount of computing for CNN, CNN running on an embedded platform may not meet the performance requirement. In this paper, we propose a system-on-chip (SoC) CNN architecture synthesized by high level synthesis (HLS). HLS is an effective hardware (HW) synthesis method in terms of both development effort and performance. However, the implementation should be optimized carefully in order to achieve a satisfactory performance. Thus, we apply several optimization techniques to the proposed CNN architecture to satisfy the performance requirement. The proposed CNN architecture implemented on a Xilinx's Zynq platform has achieved 23% faster and 9.05 times better throughput per energy consumption than an implementation on an Intel i7 Core processor.", "title": "" }, { "docid": "3465c3bc8f538246be5d7f8c8d1292c2", "text": "The minimal depth of a maximal subtree is a dimensionless order statistic measuring the predictiveness of a variable in a survival tree. We derive the distribution of the minimal depth and use it for high-dimensional variable selection using random survival forests. In big p and small n problems (where p is the dimension and n is the sample size), the distribution of the minimal depth reveals a “ceiling effect” in which a tree simply cannot be grown deep enough to properly identify predictive variables. Motivated by this limitation, we develop a new regularized algorithm, termed RSF-Variable Hunting. This algorithm exploits maximal subtrees for effective variable selection under such scenarios. Several applications are presented demonstrating the methodology, including the problem of gene selection using microarray data. In this work we focus only on survival settings, although our methodology also applies to other random forests applications, including regression and classification settings. All examples presented here use the R-software package randomSurvivalForest.", "title": "" }, { "docid": "6c9d163a7ad97ebecdfd82275990f315", "text": "We present and evaluate a new deep neural network architecture for automatic thoracic disease detection on chest X-rays. Deep neural networks have shown great success in a plethora of visual recognition tasks such as image classification and object detection by stacking multiple layers of convolutional neural networks (CNN) in a feed-forward manner. However, the performance gain by going deeper has reached bottlenecks as a result of the trade-off between model complexity and discrimination power. We address this problem by utilizing the recently developed routing-by agreement mechanism in our architecture. A novel characteristic of our network structure is that it extends routing to two types of layer connections (1) connection between feature maps in dense layers, (2) connection between primary capsules and prediction capsules in final classification layer. We show that our networks achieve comparable results with much fewer layers in the measurement of AUC score. We further show the combined benefits of model interpretability by generating Gradient-weighted Class Activation Mapping (Grad-CAM) for localization. We demonstrate our results on the NIH chestX-ray14 dataset that consists of 112,120 images on 30,805 unique patients including 14 kinds of lung diseases.", "title": "" }, { "docid": "36f068b9579788741f23c459570694fe", "text": "One of the difficulties in learning Chinese characters is distinguishing similar characters. This can cause misunderstanding and miscommunication in daily life. 
Thus, it is important for students learning the Chinese language to be able to distinguish similar characters and understand their proper usage. In this paper, the authors propose a game style framework to train students to distinguish similar characters. A major component in this framework is the search for similar Chinese characters in the system. From the authors’ prior work, they find the similar characters by the radical information and stroke correspondence determination. This paper improves the stroke correspondence determination by using the attributed relational graph (ARG) matching algorithm that considers both the stroke and spatial relationship during matching. The experimental results show that the new proposed method is more accurate in finding similar Chinese characters. Additionally, the authors have implemented online educational games to train students to distinguish similar Chinese characters and made use of the improved matching method for creating the game content automatically. DOI: 10.4018/jdet.2010070103 32 International Journal of Distance Education Technologies, 8(3), 31-46, July-September 2010 Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. iNtroDUCtioN The evolution of computer technologies makes a big impact on traditional learning. Shih et al. (2007) and Chen et al. (2005) studied the impact of distant e-learning compared with traditional learning. Distant e-learning has many advantages over traditional learning such as no learning barrier in location, allowing more people to learn and providing an interactive learning environment. There is a great potential in adopting distant e-learning in areas with a sparse population. For example, in China, it is impractical to build schools in every village. As a result, some students have to spend a lot of time for travelling to school that may be quite far away from their home. If computers can be used for e-learning in this location, the students can save a lot of time for other learning activities. Moreover, there is a limit in the number of students whom a school can physically accommodate. Distant e-learning is a solution that gives the chance for more people to learn in their own pace without the physical limitation. In addition, distant e-learning allows certain levels of interactivity. The learners can get the immediate feedback from the e-learning system and enhance the efficiency in their learning. E-learning has been applied in different areas such as engineering by Sziebig (2008), maritime education by Jurian (2006), etc. Some researchers study the e-learning in Chinese handwriting education. Nowadays there exist many e-learning applications to help students learn their native or a foreign language. This paper is focused on the learning of the Chinese language. Some researchers (Tan, 2002; Teo et al., 2002) provide an interactive interface for students to practice Chinese character handwriting. These e-learning methods help students improve their handwriting skill by providing them a framework to repeat some handwriting exercises just like in the traditional learning. However, they have not considered how to maintain students’ motivation to complete the tasks. Green et al. (2007) suggested that game should be introduced for learning because games bring challenges to students, stimulate their curiosity, develop their creativity and let them have fun. 
One of the common problems in Chinese students’ handwriting is mixing up similar characters in the structure (e.g., 困, 因) or sound (e.g., 木, 目), and misusing them. Chinese characters are logographs and there are about 3000 commonly used characters. Learners have to memorize a lot of writing structures and their related meanings. It is difficult to distinguish similar Chinese characters with similar structure or sound even for people whose native language is Chinese. For training people in distinguishing similar characters, teachers often make some questions by presenting the similar characters and ask the students to find out the correct one under each case. There are some web-based games that aim to help students differentiate similar characters (The Academy of Chinese Studies & Erroneous Character Arena). These games work in a similar fashion in which they show a few choices of similar characters to the players and ask them to pick the correct one that should be used in a phrase. These games suffer from the drawback that the question-answer set is limited thus players feel bored easily and there is little replay value. On the other hand, creating a large set of question-answer pairs is time consuming if it is done manually. It is beneficial to have a system to generate the choices automatically.", "title": "" }, { "docid": "ac37ca6b8bb12305ac6e880e6e7c336a", "text": "In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential to carry out any graph signal processing or machine learning task. To realize this, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem can be solved optimally and is based on simple rank ordering. Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.", "title": "" }, { "docid": "b1488b35284b6610d44d178d56cc89eb", "text": "We introduce an unsupervised discriminative model for the task of retrieving experts in online document collections. We exclusively employ textual evidence and avoid explicit feature engineering by learning distributed word representations in an unsupervised way. We compare our model to state-of-the-art unsupervised statistical vector space and probabilistic generative approaches. Our proposed log-linear model achieves the retrieval performance levels of state-of-the-art document-centric methods with the low inference cost of so-called profile-centric approaches. It yields a statistically significant improved ranking over vector space and generative models in most cases, matching the performance of supervised methods on various benchmarks. That is, by using solely text we can do as well as methods that work with external evidence and/or relevance feedback. 
A contrastive analysis of rankings produced by discriminative and generative approaches shows that they have complementary strengths due to the ability of the unsupervised discriminative model to perform semantic matching.", "title": "" }, { "docid": "9a2609d1b13e0fb43849d3e4ca8682fe", "text": "This report presents a brief overview of multimedia data mining and the corresponding workshop series at ACM SIGKDD conference series on data mining and knowledge discovery. It summarizes the presentations, conclusions and directions for future work that were discussed during the 3rd edition of the International Workshop on Multimedia Data Mining, conducted in conjunction with KDD-2002 in Edmonton, Alberta, Canada.", "title": "" }, { "docid": "92386ee2988b6d7b6f2f0b3cdcbf44ba", "text": "In the rst part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weightupdate rule of Littlestone and Warmuth [20] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R n . In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary nite set or a bounded segment of the real line.", "title": "" }, { "docid": "e78d53a2790ac3b6011910f82cefaff9", "text": "A two-dimensional crystal of molybdenum disulfide (MoS2) monolayer is a photoluminescent direct gap semiconductor in striking contrast to its bulk counterpart. Exfoliation of bulk MoS2 via Li intercalation is an attractive route to large-scale synthesis of monolayer crystals. However, this method results in loss of pristine semiconducting properties of MoS2 due to structural changes that occur during Li intercalation. Here, we report structural and electronic properties of chemically exfoliated MoS2. The metastable metallic phase that emerges from Li intercalation was found to dominate the properties of as-exfoliated material, but mild annealing leads to gradual restoration of the semiconducting phase. Above an annealing temperature of 300 °C, chemically exfoliated MoS2 exhibit prominent band gap photoluminescence, similar to mechanically exfoliated monolayers, indicating that their semiconducting properties are largely restored.", "title": "" }, { "docid": "7e682f98ee6323cd257fda07504cba20", "text": "We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. 
We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al.,2004) and STARE (Hoover et al.,2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, being slightly superior than that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods", "title": "" }, { "docid": "5176645b3aca90b9f3e7d9fb8391063d", "text": "The role of dysfunctional attitudes in loneliness among college students was investigated. Subjects were 50 introductory psychology volunteers (20 male, 30 female) who completed measures of loneliness severity, depression, and dysfunctional attitudes. The results showed a strong predictive relationship between dysfunctional attitudes and loneliness even after level of depression was statistically controlled. Lonely college students' thinking is dominated by doubts about ability to find satisfying romantic relationships and fears of being rejected and hurt in an intimate pairing. Lonely individuals also experience much anxiety in interpersonal encounters and regard themselves as undesirable to others. Generally, a negative evaluation of self, especially in the social realm, is present. Implications of the results for treatment planning for lonely clients are discussed.", "title": "" }, { "docid": "3c8cc4192ee6ddd126e53c8ab242f396", "text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.", "title": "" }, { "docid": "934875351d5fa0c9b5c7499ca13727ab", "text": "Computation of the simplicial complexes of a large point cloud often relies on extracting a sample, to reduce the associated computational burden. 
The study considers sampling critical points of a Morse function associated to a point cloud, to approximate the Vietoris-Rips complex or the witness complex and compute persistence homology. The effectiveness of the novel approach is compared with the farthest point sampling, in a context of classifying human face images into ethnics groups using persistence homology.", "title": "" }, { "docid": "5291162cd0841cc025f2a86b360372e6", "text": "The web contains countless semi-structured websites, which can be a rich source of information for populating knowledge bases. Existing methods for extracting relations from the DOM trees of semi-structured webpages can achieve high precision and recall only when manual annotations for each website are available. Although there have been efforts to learn extractors from automatically generated labels, these methods are not sufficiently robust to succeed in settings with complex schemas and information-rich websites. In this paper we present a new method for automatic extraction from semi-structured websites based on distant supervision. We automatically generate training labels by aligning an existing knowledge base with a website and leveraging the unique structural characteristics of semi-structured websites. We then train a classifier based on the potentially noisy and incomplete labels to predict new relation instances. Our method can compete with annotationbased techniques in the literature in terms of extraction quality. A large-scale experiment on over 400,000 pages from dozens of multi-lingual long-tail websites harvested 1.25 million facts at a precision of 90%. PVLDB Reference Format: Colin Lockard, Xin Luna Dong, Arash Einolghozati, Prashant Shiralkar. CERES: Distantly Supervised Relation Extraction from the Semi-Structured Web. PVLDB, 11(10): 1084-1096, 2018. DOI: https://doi.org/10.14778/3231751.3231758", "title": "" }, { "docid": "46ecd1781e1ab5866fde77b3a24be06a", "text": "Viral products and ideas are intuitively understood to grow through a person-to-person diffusion process analogous to the spread of an infectious disease; however, until recently it has been prohibitively difficult to directly observe purportedly viral events, and thus to rigorously quantify or characterize their structural properties. Here we propose a formal measure of what we label “structural virality” that interpolates between two extremes: content that gains its popularity through a single, large broadcast, and that which grows through multiple generations with any one individual directly responsible for only a fraction of the total adoption. We use this notion of structural virality to analyze a unique dataset of a billion diffusion events on Twitter, including the propagation of news stories, videos, images, and petitions. We find that the very largest observed events nearly always exhibit high structural virality, providing some of the first direct evidence that many of the most popular products and ideas grow through person-to-person diffusion. However, medium-sized events—having thousands of adopters—exhibit surprising structural diversity, and regularly grow via both broadcast and viral mechanisms. 
We find that these empirical results are largely consistent with a simple contagion model characterized by a low infection rate spreading on a scale-free network, reminiscent of previous work on the long-term persistence of computer viruses.", "title": "" }, { "docid": "41b17931c63d053bd0a339beab1c0cfc", "text": "The investigation and development of new methods from diverse perspectives to shed light on portfolio choice problems has never stagnated in financial research. Recently, multi-armed bandits have drawn intensive attention in various machine learning applications in online settings. The tradeoff between exploration and exploitation to maximize rewards in bandit algorithms naturally establishes a connection to portfolio choice problems. In this paper, we present a bandit algorithm for conducting online portfolio choices by effectually exploiting correlations among multiple arms. Through constructing orthogonal portfolios from multiple assets and integrating with the upper confidence bound bandit framework, we derive the optimal portfolio strategy that represents the combination of passive and active investments according to a risk-adjusted reward function. Compared with oft-quoted trading strategies in finance and machine learning fields across representative real-world market datasets, the proposed algorithm demonstrates superiority in both risk-adjusted return and cumulative wealth.", "title": "" } ]
scidocsrr
5d3aa8179e63cffbfc4583f97535b24c
Minnesota Satisfaction Questionnaire-Psychometric Properties and Validation in a Population of Portuguese Hospital Workers
[ { "docid": "6521ae2b4592fccdb061f1e414774024", "text": "The development of the Job Satisfaction Survey (JSS), a nine-subscale measure of employee job satisfaction applicable specifically to human service, public, and nonprofit sector organizations, is described. The item selection, item analysis, and determination of the final 36-item scale are also described, and data on reliability and validity and the instrument's norms are summarized. Included are a multitrait-multimethod analysis of the JSS and the Job Descriptive Index (JDI), factor analysis of the JSS, and scale intercorrelations. Correlation of JSS scores with criteria of employee perceptions and behaviors for multiple samples were consistent with findings involving other satisfaction scales and with findings from the private sector. The strongest correlations were with perceptions of the job and supervisor, intention of quitting, and organizational commitment. More modest correlations were found with salary, age, level, absenteeism, and turnover.", "title": "" } ]
[ { "docid": "4a89f20c4b892203be71e3534b32449c", "text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.", "title": "" }, { "docid": "4ae231ad20a99fb0b4c745cdffde456d", "text": "Networks-on-Chip (NoCs) interconnection architectures to be used in future billion-transistor Systems-on-Chip (SoCs) meet the major communication requirements of these systems, offering, at the same time, reusability, scalability and parallelism in communication. Furthermore, they cope with other issues like power constraints and clock distribution. Currently, there is a number of research works which explore different features of NoCs. In this paper, we present SoCIN, a scalable network based on a parametric router architecture to beused in the synthesis of customized low cost NoCs. The architecture of SoCIN and its router are described, and some synthesis results are presented.", "title": "" }, { "docid": "82da6897a36ea57473455d8f4da0a32d", "text": "Traditional learning-based coreference resolvers operate by training amentionpair classifier for determining whether two mentions are coreferent or not. Two independent lines of recent research have attempted to improve these mention-pair classifiers, one by learning amentionranking model to rank preceding mentions for a given anaphor, and the other by training an entity-mention classifier to determine whether a preceding cluster is coreferent with a given mention. We propose a cluster-ranking approach to coreference resolution that combines the strengths of mention rankers and entitymention models. We additionally show how our cluster-ranking framework naturally allows discourse-new entity detection to be learned jointly with coreference resolution. Experimental results on the ACE data sets demonstrate its superior performance to competing approaches.", "title": "" }, { "docid": "3f807cb7e753ebd70558a0ce74b416b7", "text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "0bd720d912575c0810c65d04f6b1712b", "text": "Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework optionally can also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.", "title": "" }, { "docid": "34e544af5158850b7119ac4f7c0b7b5e", "text": "Over the last decade, the surprising fact has emerged that machines can possess therapeutic power. Due to the many healing qualities of touch, one route to such power is through haptic emotional interaction, which requires sophisticated touch sensing and interpretation. We explore the development of touch recognition technologies in the context of a furry artificial lap-pet, with the ultimate goal of creating therapeutic interactions by sensing human emotion through touch. In this work, we build upon a previous design for a new type of fur-based touch sensor. Here, we integrate our fur sensor with a piezoresistive fabric location/pressure sensor, and adapt the combined design to cover a curved creature-like object. We then use this interface to collect synchronized time-series data from the two sensors, and perform machine learning analysis to recognize 9 key affective touch gestures. In a study of 16 participants, our model averages 94% recognition accuracy when trained on individuals, and 86% when applied to the combined set of all participants. The model can also recognize which participant is touching the prototype with 79% accuracy. These results promise a new generation of emotionally intelligent machines, enabled by affective touch gesture recognition.", "title": "" }, { "docid": "30ff2dfb2864a294d2be5e4a33b88964", "text": "Using blockchain technology, it is possible to create contracts that offer a reward in exchange for a trained machine learning model for a particular data set. This would allow users to train machine learning models for a reward in a trustless manner. The smart contract will use the blockchain to automatically validate the solution, so there would be no debate about whether the solution was correct or not. Users who submit the solutions wont have counterparty risk that they wont get paid for their work. Contracts can be created easily by anyone with a dataset, even programmatically by software agents. This creates a market where parties who are good at solving machine learning problems can directly monetize their skillset, and where any organization or software agent that has a problem to solve with AI can solicit solutions from all over the world. 
This will incentivize the creation of better machine learning models, and make AI more accessible to companies and software agents. A consequence of creating this market is that there will be a well defined price of GPU training for machine learning models. Crypto-currency mining also uses GPUs in many cases. We can envision a world where at any given moment, miners can choose to direct their hardware to work on whichever workload is more profitable: cryptocurrency mining, or machine learning training. 1. Background 1.1. Bitcoin and cryptocurrencies Bitcoin was first introduced in 2008 to create a decentralized method of storing and transferring funds from one account to another. It enforced ownership using public key cryptography. Funds are stored in various addresses, and anyone with the private key for an address would be able to transfer funds from this account. To create such a system in a decentralized fashion required innovation on how to achieve consensus between participants, which was solved using a blockchain. This created an ecosystem that enabled fast and trusted transactions between untrusted users. Bitcoin implemented a scripting language for simple tasks. This language wasnt designed to be turing complete. Over time, people wanted to implement more complicated programming tasks on blockchains. Ethereum introduced a turing-complete language to support a wider range of applications. This language was designed to utilize the decentralized nature of the blockchain. Essentially its an application layer on top of the ethereum blockchain. By having a more powerful, turing-complete programming language, it became possible to build new types of applications on top of the ethereum blockchain: from escrow systems, minting new coins, decentralized corporations, and more. The Ethereum whitepaper talks about creating decentralized marketplaces, but focuses on things like identities and reputations to facilitate these transactions. (Buterin, 2014) In this marketplace, specifically for machine learning models, trust is a required feature. This approach is distinctly different than the trustless exchange system proposed in this paper. 1.2. Breakthrough in machine learning In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton were able to train a deep neural network for image classification by utilizing GPUs. (Krizhevsky et al., 2012) Their submission for the Large Scale Visual Recognition Challenge (LSVRC) halved the best error rate at the time. GPUs being able to do thousands of matrix operations in parallel was the breakthrough needed to train deep neural networks. With more research, machine learning (ML) systems have been able to surpass humans in many specific problems. These systems are now better at: lip reading (Chung et al., 2016), speech recognition (Xiong et al., 2016), location tagging (Weyand et al., 2016), playing Go (Silver et al., 2016), image classification (He et al., 2015), and more. In ML, a variety of models and approaches are used to attack different types of problems. Such an approach is called a Neural Network (NN). Neural Networks are made out of nodes, biases and weighted edges, and can represent virtually any function. (Hornik, 1991) Figure 1. Neural Network Schema There are two steps in building a new machine learning model. 
The first step is training, which takes in a dataset as an input, and adjusts the model weights to increase accuracy for the model. The second step is testing, that uses an independent dataset for testing the accuracy of the trained model. This second step is necessary to validate the model and to prevent a problem known as overfitting. An overfitted model is very good at a particular dataset, but is bad at generalizing for the given problem. Once it has been trained, a ML model can be used to perform tasks on new data, such as prediction, classification, and clustering. There is a huge demand for machine learning models, and companies that can get access to good machine learning models stand to profit through improved efficiency and new capabilities. Since there is strong demand for this kind of technology, and limited supply of talent, it makes sense to create a market for machine learning models. Since machine learning is purely software and training it doesnt require interacting with any physical systems, using blockchain for coordination between users, and using cryptocurrency for payment is a natural choice.", "title": "" }, { "docid": "2b743ba2f607f75bb7e1d964c39cbbcf", "text": "The demand and growth of indoor positioning has increased rapidly in the past few years for a diverse range of applications. Various innovative techniques and technologies have been introduced but precise and reliable indoor positioning still remains a challenging task due to dependence on a large number of factors and limitations of the technologies. Positioning technologies based on radio frequency (RF) have many advantages over the technologies utilizing ultrasonic, optical and infrared devices. Both narrowband and wideband RF systems have been implemented for short range indoor positioning/real-time locating systems. Ultra wideband (UWB) technology has emerged as a viable candidate for precise indoor positioning due its unique characteristics. This article presents a comparison of UWB and narrowband RF technologies in terms of modulation, throughput, transmission time, energy efficiency, multipath resolving capability and interference. Secondly, methods for measurement of the positioning parameters are discussed based on a generalized measurement model and, in addition, widely used position estimation algorithms are surveyed. Finally, the article provides practical UWB positioning systems and state-of-the-art implementations. We believe that the review presented in this article provides a structured overview and comparison of the positioning methods, algorithms and implementations in the field of precise UWB indoor positioning, and will be helpful for practitioners as well as for researchers to keep abreast of the recent developments in the field.", "title": "" }, { "docid": "f7808b676e04ae7e80cf06d36edc73e8", "text": "Ontology is the process of growth and elucidation of concepts of an information domain being common for a group of users. Establishing ontology into information retrieval is a normal method to develop searching effects of relevant information users require. Keywords matching process with historical or information domain is significant in recent calculations for assisting the best match for specific input queries. This research presents a better querying mechanism for information retrieval which integrates the ontology queries with keyword search. 
The ontology-based query is changed into a primary order to predicate logic uncertainty which is used for routing the query to the appropriate servers. Matching algorithms characterize warm area of researches in computer science and artificial intelligence. In text matching, it is more dependable to study semantics model and query for conditions of semantic matching. This research develops the semantic matching results between input queries and information in ontology field. The contributed algorithm is a hybrid method that is based on matching extracted instances from the queries and information field. The queries and information domain is focused on semantic matching, to discover the best match and to progress the executive process. In conclusion, the hybrid ontology in semantic web is sufficient to retrieve the documents when compared to standard ontology.", "title": "" }, { "docid": "ae536a72dfba1e7eff57989c3f94ae3e", "text": "Policymakers are often interested in estimating how policy interventions affect the outcomes of those most in need of help. This concern has motivated the practice of disaggregating experimental results by groups constructed on the basis of an index of baseline characteristics that predicts the values of individual outcomes without the treatment. This paper shows that substantial biases may arise in practice if the index is estimated by regressing the outcome variable on baseline characteristics for the full sample of experimental controls. We propose alternative methods that correct this bias and show that they behave well in realistic scenarios.", "title": "" }, { "docid": "47afea1e95f86bb44a1cf11e020828fc", "text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.", "title": "" }, { "docid": "a00ee62d48afbcac22c85d2961c596bc", "text": "Despite oxycodone's (4,5-epoxy-14-hydroxy-3-methoxy-17-methylmorphinan-6-one) history of clinical use and the attention it has received as a drug of abuse, few reports have documented its pharmacology's relevance to its abuse or its mechanism of action. 
The purposes of the present study were to further characterize the analgesic effects of oxycodone, its mechanism of action, and its effects in terms of its relevance to its abuse liability. The results indicate that oxycodone had potent antinociceptive effects in the mouse paraphenylquinone writhing, hot-plate, and tail-flick assays, in which it appeared to be acting as a mu-opioid receptor agonist. It generalized to the heroin discriminative stimulus and served as a positive reinforcer in rats and completely suppressed withdrawal signs in morphine-dependent rhesus monkeys. These results suggest that the analgesic and abuse liability effects of oxycodone are likely mediated through mu-opioid receptors and provide the first laboratory report of its discriminative stimulus, reinforcing, and morphine cross-dependency effects.", "title": "" }, { "docid": "40939d3a4634498fb50c0cda9e31f476", "text": "Learning analytics is receiving increased attention, in part because it offers to assist educational institutions in increasing student retention, improving student success, and easing the burden of accountability. Although these large-scale issues are worthy of consideration, faculty might also be interested in how they can use learning analytics in their own courses to help their students succeed. In this paper, we define learning analytics, how it has been used in educational institutions, what learning analytics tools are available, and how faculty can make use of data in their courses to monitor and predict student performance. Finally, we discuss several issues and concerns with the use of learning analytics in higher education. Have you ever had the sense at the start of a new course or even weeks into the semester that you could predict which students will drop the course or which students will succeed? Of course, the danger of this realization is that it may create a self-fulfilling prophecy or possibly be considered “profiling”. But it could also be that you have valuable data in your head, collected from semesters of experience, that can help you predict who will succeed and who will not based on certain variables. In short, you likely have hunches based on an accumulation of experience. The question is, what are those variables? What are those data? And how well will they help you predict student performance and retention? More importantly, how will those data help you to help your students succeed in your course? Such is the promise of learning analytics. Learning analytics is defined as “the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Long & Siemens, 2011, p. 32). Learning analytics offers promise for predicting and improving student success and retention (e.g., Olmos & Corrin, 2012; Smith, Lange, & Huston, 2012) in part because it allows faculty, institutions, and students to make data-driven decisions about student success and retention. Data-driven decision making involves making use of data, such as the sort provided in Learning Management Systems (LMS), to inform educator’s judgments (Jones, 2012; Long & Siemens, 2011; Picciano, 2012). For example, to argue for increased funding to support student preparation for a course or a set of courses, it would be helpful to have data showing that students who have certain skills or abilities or prior coursework perform better in the class or set of classes than those who do not. 
", "title": "" }, { "docid": "0caa6d4623fb0414facb76ccd8eaa235", "text": "Because of large amounts of unstructured text data generated on the Internet, text mining is believed to have high commercial value. Text mining is the process of extracting previously unknown, understandable, potential and practical patterns or knowledge from the collection of text data. This paper introduces the research status of text mining. Then several general models are described to know text mining in the overall perspective. At last we classify text mining work as text categorization, text clustering, association rule extraction and trend analysis according to applications.", "title": "" }, { "docid": "e41eb91c146b5054b583083b89d0a3fb", "text": "The authors adapt the SERVQUAL scale for medical care services and examine it for reliability, dimensionality, and validity in a primary care clinic setting. In addition, they explore the possibility of a link between perceived service quality--and its various dimensions--and a patient's future intent to complain, compliment, repeat purchase, and switch providers. Findings from 159 matched-pair responses indicate that the SERVQUAL scale can be adapted reliably to a clinic setting and that the dimensions of reliability, dependability, and empathy are most predictive of a patient's intent to complain, compliment, repeat purchase, and switch providers.", "title": "" }, { "docid": "0e3f43a28c477ae0e15a8608d3a1d4a5", "text": "This report provides an overview of the current state of the art deep learning architectures and optimisation techniques, and uses the ADNI hippocampus MRI dataset as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3dimensional hippocampal segmentation, which is important in the diagnosis of Alzheimer's Disease. We found that a slightly unconventional ”stacked 2D” approach provides much better classification performance than simple 2D patches without requiring significantly more computational power. We also examined the popular ”tri-planar” approach used in some recently published studies, and found that it provides much better results than the 2D approaches, but also with a moderate increase in computational power requirement. Finally, we evaluated a full 3D convolutional architecture, and found that it provides marginally better results than the tri-planar approach, but at the cost of a very significant increase in computational power requirement.", "title": "" }, { "docid": "34a6fe0c5183f19d4f25a99b3bcd205e", "text": "In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. 
To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art database labeled faces in the wild. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.", "title": "" }, { "docid": "bf9e44e81e37b0aefb12250202d59111", "text": "There are many clustering tasks which are closely related in the real world, e.g. clustering the web pages of different universities. However, existing clustering approaches neglect the underlying relation and treat these clustering tasks either individually or simply together. In this paper, we will study a novel clustering paradigm, namely multi-task clustering, which performs multiple related clustering tasks together and utilizes the relation of these tasks to enhance the clustering performance. We aim to learn a subspace shared by all the tasks, through which the knowledge of the tasks can be transferred to each other. The objective of our approach consists of two parts: (1) Within-task clustering: clustering the data of each task in its input space individually; and (2) Cross-task clustering: simultaneous learning the shared subspace and clustering the data of all the tasks together. We will show that it can be solved by alternating minimization, and its convergence is theoretically guaranteed. Furthermore, we will show that given the labels of one task, our multi-task clustering method can be extended to transductive transfer classification (a.k.a. cross-domain classification, domain adaption). Experiments on several cross-domain text data sets demonstrate that the proposed multi-task clustering outperforms traditional single-task clustering methods greatly. And the transductive transfer classification method is comparable to or even better than several existing transductive transfer classification approaches.", "title": "" }, { "docid": "959b487a51ae87b2d993e6f0f6201513", "text": "The two-wheel differential drive mobile robots, are one of the simplest and most used structures in mobile robotics applications, it consists of a chassis with two fixed and in-line with each other electric motors. This paper presents new models for differential drive mobile robots and some considerations regarding design, modeling and control solutions. The presented models are to be used to help in facing the two top challenges in developing mechatronic mobile robots system; early identifying system level problems and ensuring that all design requirements are met, as well as, to simplify and accelerate Mechatronics mobile robots design process, including proper selection, analysis, integration and verification of the overall system and sub-systems performance throughout the development process.", "title": "" } ]
scidocsrr
656b9050e1363c9eaaf9703e9b39b5cd
Predicting subscriber dissatisfaction and improving retention in the wireless telecommunications industry
[ { "docid": "00ea9078f610b14ed0ed00ed6d0455a7", "text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.", "title": "" } ]
[ { "docid": "38c9cee29ef1ba82e45556d87de1ff24", "text": "This paper presents a detailed characterization of the Hokuyo URG-04LX 2D laser range finder. While the sensor specifications only provide a rough estimation of the sensor accuracy, the present work analyzes issues such as time drift effects and dependencies on distance, target properties (color, brightness and material) as well as incidence angle. Since the sensor is intended to be used for measurements of a tubelike environment on an inspection robot, the characterization is extended by investigating the influence of the sensor orientation and dependency on lighting conditions. The sensor characteristics are compared to those of the Sick LMS 200 which is commonly used in robotic applications when size and weight are not critical constraints. The results show that the sensor accuracy is strongly depending on the target properties (color, brightness, material) and that it is consequently difficult to establish a calibration model. The paper also identifies cases for which the sensor returns faulty measurements, mainly when the surface has low reflectivity (dark surfaces, foam) or for high incidence angles on shiny surfaces. On the other hand, the repeatability of the sensor seems to be competitive with the LMS 200.", "title": "" }, { "docid": "a71d0d3748f6be2adbd48ab7671dd9f8", "text": "Considerable overlap has been identified in the risk factors, comorbidities and putative pathophysiological mechanisms of Alzheimer disease and related dementias (ADRDs) and type 2 diabetes mellitus (T2DM), two of the most pressing epidemics of our time. Much is known about the biology of each condition, but whether T2DM and ADRDs are parallel phenomena arising from coincidental roots in ageing or synergistic diseases linked by vicious pathophysiological cycles remains unclear. Insulin resistance is a core feature of T2DM and is emerging as a potentially important feature of ADRDs. Here, we review key observations and experimental data on insulin signalling in the brain, highlighting its actions in neurons and glia. In addition, we define the concept of 'brain insulin resistance' and review the growing, although still inconsistent, literature concerning cognitive impairment and neuropathological abnormalities in T2DM, obesity and insulin resistance. Lastly, we review evidence of intrinsic brain insulin resistance in ADRDs. By expanding our understanding of the overlapping mechanisms of these conditions, we hope to accelerate the rational development of preventive, disease-modifying and symptomatic treatments for cognitive dysfunction in T2DM and ADRDs alike.", "title": "" }, { "docid": "180a271a86f9d9dc71cc140096d08b2f", "text": "This communication demonstrates for the first time the capability to independently control the real and imaginary parts of the complex propagation constant in planar, printed circuit board compatible leaky-wave antennas. The structure is based on a half-mode microstrip line which is loaded with an additional row of periodic metallic posts, resulting in a substrate integrated waveguide SIW with one of its lateral electric walls replaced by a partially reflective wall. The radiation mechanism is similar to the conventional microstrip leaky-wave antenna operating in its first higher-order mode, with the novelty that the leaky-mode leakage rate can be controlled by virtue of a sparse row of metallic vias. 
For this topology it is demonstrated that it is possible to independently control the antenna pointing angle and main lobe beamwidth while achieving high radiation efficiencies, thus providing low-cost, low-profile, simply fed, and easily integrable leaky-wave solutions for high-gain frequency beam-scanning applications. Several prototypes operating at 15 GHz have been designed, simulated, manufactured and tested, to show the operation principle and design flexibility of this one dimensional leaky-wave antenna.", "title": "" }, { "docid": "9a65a5c09df7e34383056509d96e772d", "text": "With explosive growth of Android malware and due to its damage to smart phone users (e.g., stealing user credentials, resource abuse), Android malware detection is one of the cyber security topics that are of great interests. Currently, the most significant line of defense against Android malware is anti-malware software products, such as Norton, Lookout, and Comodo Mobile Security, which mainly use the signature-based method to recognize threats. However, malware attackers increasingly employ techniques such as repackaging and obfuscation to bypass signatures and defeat attempts to analyze their inner mechanisms. The increasing sophistication of Android malware calls for new defensive techniques that are harder to evade, and are capable of protecting users against novel threats. In this paper, we propose a novel dynamic analysis method named Component Traversal that can automatically execute the code routines of each given Android application (app) as completely as possible. Based on the extracted Linux kernel system calls, we further construct the weighted directed graphs and then apply a deep learning framework resting on the graph based features for newly unknown Android malware detection. A comprehensive experimental study on a real sample collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed method outperforms other alternative Android malware detection techniques. Our developed system Deep4MalDroid has also been integrated into a commercial Android anti-malware software.", "title": "" }, { "docid": "9365a612900a8bf0ddef8be6ec17d932", "text": "Stabilization exercise program has become the most popular treatment method in spinal rehabilitation since it has shown its effectiveness in some aspects related to pain and disability. However, some studies have reported that specific exercise program reduces pain and disability in chronic but not in acute low back pain, although it can be helpful in the treatment of acute low back pain by reducing recurrence rate (Ferreira et al., 2006).", "title": "" }, { "docid": "4e2bfd87acf1287f36694634a6111b3f", "text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. 
This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.", "title": "" }, { "docid": "6e0a19a9bc744aa05a64bd7450cc4c1b", "text": "The success of deep neural networks hinges on our ability to accurately and efficiently optimize high-dimensional, non-convex functions. In this paper, we empirically investigate the loss functions of state-of-the-art networks, and how commonlyused stochastic gradient descent variants optimize these loss functions. To do this, we visualize the loss function by projecting them down to low-dimensional spaces chosen based on the convergence points of different optimization algorithms. Our observations suggest that optimization algorithms encounter and choose different descent directions at many saddle points to find different final weights. Based on consistency we observe across re-runs of the same stochastic optimization algorithm, we hypothesize that each optimization algorithm makes characteristic choices at these saddle points.", "title": "" }, { "docid": "1d5119a4aeb7d678b58fb4e55c43fe94", "text": "This chapter provides a simplified introduction to cloud computing. This chapter starts by introducing the history of cloud computing and moves on to describe the cloud architecture and operation. This chapter also discusses briefly cloud servicemodels: Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. Clouds are also categorized based on their ownership to private and public clouds. This chapter concludes by explaining the reasons for choosing cloud computing over other technologies by exploring the economic and technological benefits of the cloud.", "title": "" }, { "docid": "07175075dad32287a7dabf3d852f729a", "text": "This paper is intended as a tutorial overview of induction motors signature analysis as a medium for fault detection. The purpose is to introduce in a concise manner the fundamental theory, main results, and practical applications of motor signature analysis for the detection and the localization of abnormal electrical and mechanical conditions that indicate, or may lead to, a failure of induction motors. The paper is focused on the so-called motor current signature analysis which utilizes the results of spectral analysis of the stator current. The paper is purposefully written without “state-of-the-art” terminology for the benefit of practicing engineers in facilities today who may not be familiar with signal processing.", "title": "" }, { "docid": "9e7fc71def2afc58025ff5e0198148d0", "text": "BACKGROUD\nWith the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings including electroencephalography (EEG) has become of increasingly interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects.\n\n\nNEW METHOD\nWe have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs are represented as color coded horizontal lines that are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. 
Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source) can reveal aspects of the multifold complexities of trial-to-trial EEG data variability.\n\n\nRESULTS\nThis study demonstrates new methods for computing and visualizing 'grand' ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute.", "title": "" }, { "docid": "073cd7c54b038dcf69ae400f97a54337", "text": "Interventions to support children with autism often include the use of visual supports, which are cognitive tools to enable learning and the production of language. Although visual supports are effective in helping to diminish many of the challenges of autism, they are difficult and time-consuming to create, distribute, and use. In this paper, we present the results of a qualitative study focused on uncovering design guidelines for interactive visual supports that would address the many challenges inherent to current tools and practices. We present three prototype systems that address these design challenges with the use of large group displays, mobile personal devices, and personal recording technologies. We also describe the interventions associated with these prototypes along with the results from two focus group discussions around the interventions. We present further design guidance for visual supports and discuss tensions inherent to their design.", "title": "" }, { "docid": "df9acaed8dbcfbd38a30e4e1fa77aa8a", "text": "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-theart object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.", "title": "" }, { "docid": "56826bfc5f48105387fd86cc26b402f1", "text": "It is difficult to identify sentence importance from a single point of view. In this paper, we propose a learning-based approach to combine various sentence features. They are categorized as surface, content, relevance and event features. Surface features are related to extrinsic aspects of a sentence. 
Content features measure a sentence based on contentconveying words. Event features represent sentences by events they contained. Relevance features evaluate a sentence from its relatedness with other sentences. Experiments show that the combined features improved summarization performance significantly. Although the evaluation results are encouraging, supervised learning approach requires much labeled data. Therefore we investigate co-training by combining labeled and unlabeled data. Experiments show that this semisupervised learning approach achieves comparable performance to its supervised counterpart and saves about half of the labeling time cost.", "title": "" }, { "docid": "dd9edd37ff5f4cb332fcb8a0ef86323e", "text": "This paper proposes several nonlinear control strategies for trajectory tracking of a quadcopter system based on the property of differential flatness. Its originality is twofold. Firstly, it provides a flat output for the quadcopter dynamics capable of creating full flat parametrization of the states and inputs. Moreover, B-splines characterizations of the flat output and their properties allow for optimal trajectory generation subject to way-point constraints. Secondly, several control strategies based on computed torque control and feedback linearization are presented and compared. The advantages of flatness within each control strategy are analyzed and detailed through extensive simulation results.", "title": "" }, { "docid": "8e3bf062119c6de9fa5670ce4b00764b", "text": "Heating red phosphorus in sealed ampoules in the presence of a Sn/SnI4 catalyst mixture has provided bulk black phosphorus at much lower pressures than those required for allotropic conversion by anvil cells. Herein we report the growth of ultra-long 1D red phosphorus nanowires (>1 mm) selectively onto a wafer substrate from red phosphorus powder and a thin film of red phosphorus in the present of a Sn/SnI4 catalyst. Raman spectra and X-ray diffraction characterization suggested the formation of crystalline red phosphorus nanowires. FET devices constructed with the red phosphorus nanowires displayed a typical I-V curve similar to that of black phosphorus and a similar mobility reaching 300 cm(2)  V(-1)  s with an Ion /Ioff ratio approaching 10(2) . A significant response to infrared light was observed from the FET device.", "title": "" }, { "docid": "1bf801e8e0348ccd1e981136f604dd18", "text": "Sketch recognition is one of the integral components used by law enforcement agencies in solving crime. In recent past, software generated composite sketches are being preferred as they are more consistent and faster to construct than hand drawn sketches. Matching these composite sketches to face photographs is a complex task because the composite sketches are drawn based on the witness description and lack minute details which are present in photographs. This paper presents a novel algorithm for matching composite sketches with photographs using transfer learning with deep learning representation. In the proposed algorithm, first the deep learning architecture based facial representation is learned using large face database of photos and then the representation is updated using small problem-specific training database. 
Experiments are performed on the extended PRIP database and it is observed that the proposed algorithm outperforms recently proposed approach and a commercial face recognition system.", "title": "" }, { "docid": "11e666f5b8746ea4b6fc6d4467295e61", "text": "It is shown that by combining the osmotic pressure and rate of diffusion laws an equation can be derived for the kinetics of osmosis. The equation has been found to agree with experiments on the rate of osmosis for egg albumin and gelatin solutions with collodion membranes.", "title": "" }, { "docid": "1d41e6f55521cdba4fc73febd09d2eb4", "text": "1.", "title": "" }, { "docid": "3b4622a4ad745fc0ffb3b6268eb969fa", "text": "Eruptive syringomas: unresponsiveness to oral isotretinoin A 22-year-old man of Egyptian origin was referred to our department due to exacerbation of pruritic pre-existing papular dermatoses. The skin lesions had been present since childhood. The family history was negative for a similar condition. The patient complained of exacerbation of the pruritus during physical activity under a hot climate and had moderate to severe pruritus during his work. Physical examination revealed multiple reddish-brownish smooth-surfaced, symmetrically distributed papules 2–4 mm in diameter on the patient’s trunk, neck, axillae, and limbs (Fig. 1). The rest of the physical examination was unremarkable. The Darier sign was negative. A skin biopsy was obtained from a representative lesion on the trunk. Histopathologic examination revealed a small, wellcircumscribed neoplasm confined to the upper dermis, composed of small solid and ductal structures relatively evenly distributed in a sclerotic collagenous stroma. The solid elements were of various shapes (round, oval, curvilinear, “comma-like,” or “tadpole-like”) (Fig. 2). These microscopic features and the clinical presentation were consistent with the diagnosis of eruptive syringomas. Our patient was treated with a short course of oral antihistamines without any effect and subsequently with low-dose isotretinoin (10 mg/day) for 5 months. No improvement of the skin eruption was observed while cessation of the pruritus was accomplished. Syringoma is a common adnexal tumor with differentiation towards eccrine acrosyringium composed of small solid and ductal elements embedded in a sclerotic stroma and restricted as a rule to the upper to mid dermis, usually presenting clinically as multiple lesions on the lower eyelids and cheeks of adolescent females. A much less common variant is the eruptive or disseminated syringomas, which involve primarily young women. Eruptive syringomas are characterized by rapid development during a short period of time of hundreds of small (1–5 mm), ill-defined, smooth surfaced, skin-colored, pink, yellowish, or brownish papules typically involving the face, trunk, genitalia, pubic area, and extremities but can occur principally in any site where eccrine glands are found. The pathogenesis of eruptive syringoma remains unclear. Some authors have recently challenged the traditional notion that eruptive syringomas are neoplastic lesions. Chandler and Bosenberg presented evidence that eruptive syringomas result from autoimmune destruction of the acrosyringium and proposed the term autoimmune acrosyringitis with ductal cysts. Garrido-Ruiz et al. support the theory that eruptive syringomas may represent a hyperplastic response of the eccrine duct to an inflammatory reaction. 
In a recent systematic review by Williams and Shinkai the strongest association of syringomas was with Down’s syndrome (183 reported cases, 22.2%). Syringomas are also associated with diabetes mellitus (17 reported cases, 2.1%), Ehlers–Danlos", "title": "" }, { "docid": "161e9a2b7a6783b57ce47bb8e100a80d", "text": "Distributed storage systems provide large-scale data storage services, yet they are confronted with frequent node failures. To ensure data availability, a storage system often introduces data redundancy via replication or erasure coding. As erasure coding incurs significantly less redundancy overhead than replication under the same fault tolerance, it has been increasingly adopted in large-scale storage systems. In erasure-coded storage systems, degraded reads to temporarily unavailable data are very common, and hence boosting the performance of degraded reads becomes important. One challenge is that storage nodes tend to be heterogeneous with different storage capacities and I/O bandwidths. To this end, we propose FastDR, a system that addresses node heterogeneity and exploits I/O parallelism, so as to boost the performance of degraded reads to temporarily unavailable data. FastDR incorporates a greedy algorithm that seeks to reduce the data transfer cost of reading surviving data for degraded reads, while allowing the search of the efficient degraded read solution to be completed in a timely manner. We implement a FastDR prototype, and conduct extensive evaluation through simulation studies as well as testbed experiments on a Hadoop cluster with 10 storage nodes. We demonstrate that our FastDR achieves efficient degraded reads compared to existing approaches.", "title": "" } ]
scidocsrr
c4d19b13e92558c0cfab7f6748d7a35e
Ensemble diversity measures and their application to thinning
[ { "docid": "0fb2afcd2997a1647bb4edc12d2191f9", "text": "Many databases have grown to the point where they cannot fit into the fast memory of even large memory machines, to say nothing of current workstations. If what we want to do is to use these data bases to construct predictions of various characteristics, then since the usual methods require that all data be held in fast memory, various work-arounds have to be used. This paper studies one such class of methods which give accuracy comparable to that which could have been obtained if all data could have been held in core and which are computationally fast. The procedure takes small pieces of the data, grows a predictor on each small piece and then pastes these predictors together. A version is given that scales up to terabyte data sets. The methods are also applicable to on-line learning.", "title": "" } ]
[ { "docid": "a880d38d37862b46dc638b9a7e45b6ee", "text": "This paper presents the modeling, simulation, and analysis of the dynamic behavior of a fictitious 2 × 320 MW variable-speed pump-turbine power plant, including a hydraulic system, electrical equipment, rotating inertias, and control systems. The modeling of the hydraulic and electrical components of the power plant is presented. The dynamic performances of a control strategy in generating mode and one in pumping mode are investigated by the simulation of the complete models in the case of change of active power set points. Then, a pseudocontinuous model of the converters feeding the rotor circuits is described. Due to this simplification, the simulation time can be reduced drastically (approximately factor 60). A first validation of the simplified model of the converters is obtained by comparison of the simulated results coming from the simplified and complete models for different modes of operation of the power plant. Experimental results performed on a 2.2-kW low-power test bench are also compared with the simulated results coming from both complete and simplified models related to this case and confirm the validity of the proposed simplified approach for the converters.", "title": "" }, { "docid": "833c110e040311909aa38b05e457b2af", "text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.", "title": "" }, { "docid": "db4bb32f6fdc7a05da41e223afac3025", "text": "Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. 
We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: \"noise\" characterization and suppression, and \"signal\" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources.", "title": "" }, { "docid": "7dcba854d1f138ab157a1b24176c2245", "text": "Essential oils distilled from members of the genus Lavandula have been used both cosmetically and therapeutically for centuries with the most commonly used species being L. angustifolia, L. latifolia, L. stoechas and L. x intermedia. Although there is considerable anecdotal information about the biological activity of these oils much of this has not been substantiated by scientific or clinical evidence. Among the claims made for lavender oil are that is it antibacterial, antifungal, carminative (smooth muscle relaxing), sedative, antidepressive and effective for burns and insect bites. In this review we detail the current state of knowledge about the effect of lavender oils on psychological and physiological parameters and its use as an antimicrobial agent. Although the data are still inconclusive and often controversial, there does seem to be both scientific and clinical data that support the traditional uses of lavender. However, methodological and oil identification problems have severely hampered the evaluation of the therapeutic significance of much of the research on Lavandula spp. These issues need to be resolved before we have a true picture of the biological activities of lavender essential oil.", "title": "" }, { "docid": "83b8944584693b9568f6ad3533ad297b", "text": "BACKGROUND\nChemotherapy is the standard of care for incurable advanced gastric cancer. Whether the addition of gastrectomy to chemotherapy improves survival for patients with advanced gastric cancer with a single non-curable factor remains controversial. We aimed to investigate the superiority of gastrectomy followed by chemotherapy versus chemotherapy alone with respect to overall survival in these patients.\n\n\nMETHODS\nWe did an open-label, randomised, phase 3 trial at 44 centres or hospitals in Japan, South Korea, and Singapore. Patients aged 20-75 years with advanced gastric cancer with a single non-curable factor confined to either the liver (H1), peritoneum (P1), or para-aortic lymph nodes (16a1/b2) were randomly assigned (1:1) in each country to chemotherapy alone or gastrectomy followed by chemotherapy by a minimisation method with biased-coin assignment to balance the groups according to institution, clinical nodal status, and non-curable factor. Patients, treating physicians, and individuals who assessed outcomes and analysed data were not masked to treatment assignment. Chemotherapy consisted of oral S-1 80 mg/m(2) per day on days 1-21 and cisplatin 60 mg/m(2) on day 8 of every 5-week cycle. 
Gastrectomy was restricted to D1 lymphadenectomy without any resection of metastatic lesions. The primary endpoint was overall survival, analysed by intention to treat. This study is registered with UMIN-CTR, number UMIN000001012.\n\n\nFINDINGS\nBetween Feb 4, 2008, and Sept 17, 2013, 175 patients were randomly assigned to chemotherapy alone (86 patients) or gastrectomy followed by chemotherapy (89 patients). After the first interim analysis on Sept 14, 2013, the predictive probability of overall survival being significantly higher in the gastrectomy plus chemotherapy group than in the chemotherapy alone group at the final analysis was only 13·2%, so the study was closed on the basis of futility. Overall survival at 2 years for all randomly assigned patients was 31·7% (95% CI 21·7-42·2) for patients assigned to chemotherapy alone compared with 25·1% (16·2-34·9) for those assigned to gastrectomy plus chemotherapy. Median overall survival was 16·6 months (95% CI 13·7-19·8) for patients assigned to chemotherapy alone and 14·3 months (11·8-16·3) for those assigned to gastrectomy plus chemotherapy (hazard ratio 1·09, 95% CI 0·78-1·52; one-sided p=0·70). The incidence of the following grade 3 or 4 chemotherapy-associated adverse events was higher in patients assigned to gastrectomy plus chemotherapy than in those assigned to chemotherapy alone: leucopenia (14 patients [18%] vs two [3%]), anorexia (22 [29%] vs nine [12%]), nausea (11 [15%] vs four [5%]), and hyponatraemia (seven [9%] vs four [5%]). One treatment-related death occurred in a patient assigned to chemotherapy alone (sudden cardiopulmonary arrest of unknown cause during the second cycle of chemotherapy) and one occurred in a patient assigned to chemotherapy plus gastrectomy (rapid growth of peritoneal metastasis after discharge 12 days after surgery).\n\n\nINTERPRETATION\nSince gastrectomy followed by chemotherapy did not show any survival benefit compared with chemotherapy alone in advanced gastric cancer with a single non-curable factor, gastrectomy cannot be justified for treatment of patients with these tumours.\n\n\nFUNDING\nThe Ministry of Health, Labour and Welfare of Japan and the Korean Gastric Cancer Association.", "title": "" }, { "docid": "ddb77ec8a722c50c28059d03919fb299", "text": "Among the smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptograph blockchain model, Hawk [8], to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured.", "title": "" }, { "docid": "872ccba4f0a0ba6a57500d4b73384ce1", "text": "This research demonstrates the application of association rule mining to spatio-temporal data. Association rule mining seeks to discover associations among transactions encoded in a database. An association rule takes the form A → B where A (the antecedent) and B (the consequent) are sets of predicates. A spatio-temporal association rule occurs when there is a spatio-temporal relationship in the antecedent or consequent of the rule. 
As a case study, association rule mining is used to explore the spatial and temporal relationships among a set of variables that characterize socioeconomic and land cover change in the Denver, Colorado, USA region from 1970–1990. Geographic Information Systems (GIS)-based data pre-processing is used to integrate diverse data sets, extract spatio-temporal relationships, classify numeric data into ordinal categories, and encode spatio-temporal relationship data in tabular format for use by conventional (non-spatio-temporal) association rule mining software. Multiple level association rule mining is supported by the development of a hierarchical classification scheme (concept hierarchy) for each variable. Further research in spatiotemporal association rule mining should address issues of data integration, data classification, the representation and calculation of spatial relationships, and strategies for finding ‘interesting’ rules.", "title": "" }, { "docid": "5ec64c4a423ccd32a5c1ceb918e3e003", "text": "The leading edge (approximately 1 microgram) of lamellipodia in Xenopus laevis keratocytes and fibroblasts was shown to have an extensively branched organization of actin filaments, which we term the dendritic brush. Pointed ends of individual filaments were located at Y-junctions, where the Arp2/3 complex was also localized, suggesting a role of the Arp2/3 complex in branch formation. Differential depolymerization experiments suggested that the Arp2/3 complex also provided protection of pointed ends from depolymerization. Actin depolymerizing factor (ADF)/cofilin was excluded from the distal 0.4 micrometer++ of the lamellipodial network of keratocytes and in fibroblasts it was located within the depolymerization-resistant zone. These results suggest that ADF/cofilin, per se, is not sufficient for actin brush depolymerization and a regulatory step is required. Our evidence supports a dendritic nucleation model (Mullins, R.D., J.A. Heuser, and T.D. Pollard. 1998. Proc. Natl. Acad. Sci. USA. 95:6181-6186) for lamellipodial protrusion, which involves treadmilling of a branched actin array instead of treadmilling of individual filaments. In this model, Arp2/3 complex and ADF/cofilin have antagonistic activities. Arp2/3 complex is responsible for integration of nascent actin filaments into the actin network at the cell front and stabilizing pointed ends from depolymerization, while ADF/cofilin promotes filament disassembly at the rear of the brush, presumably by pointed end depolymerization after dissociation of the Arp2/3 complex.", "title": "" }, { "docid": "b81f30a692d57ebc2fdef7df652d0ca2", "text": "Suppose that Alice wishes to send messages to Bob through a communication channel C1, but her transmissions also reach an eavesdropper Eve through another channel C2. This is the wiretap channel model introduced by Wyner in 1975. The goal is to design a coding scheme that makes it possible for Alice to communicate both reliably and securely. Reliability is measured in terms of Bob's probability of error in recovering the message, while security is measured in terms of the mutual information between the message and Eve's observations. Wyner showed that the situation is characterized by a single constant Cs, called the secrecy capacity, which has the following meaning: for all ε >; 0, there exist coding schemes of rate R ≥ Cs-ε that asymptotically achieve the reliability and security objectives. However, his proof of this result is based upon a random-coding argument. 
To date, despite consider able research effort, the only case where we know how to construct coding schemes that achieve secrecy capacity is when Eve's channel C2 is an erasure channel, or a combinatorial variation thereof. Polar codes were recently invented by Arikan; they approach the capacity of symmetric binary-input discrete memoryless channels with low encoding and decoding complexity. In this paper, we use polar codes to construct a coding scheme that achieves the secrecy capacity for a wide range of wiretap channels. Our construction works for any instantiation of the wiretap channel model, as long as both C1 and C2 are symmetric and binary-input, and C2 is degraded with respect to C1. Moreover, we show how to modify our construction in order to provide strong security, in the sense defined by Maurer, while still operating at a rate that approaches the secrecy capacity. In this case, we cannot guarantee that the reliability condition will also be satisfied unless the main channel C1 is noiseless, although we believe it can be always satisfied in practice.", "title": "" }, { "docid": "a2c26a8b15cafeb365ad9870f9bbf884", "text": "Microgrids consist of multiple parallel-connected distributed generation (DG) units with coordinated control strategies, which are able to operate in both grid-connected and islanded mode. Microgrids are attracting more and more attention since they can alleviate the stress of main transmission systems, reduce feeder losses, and improve system power quality. When the islanded microgrids are concerned, it is important to maintain system stability and achieve load power sharing among the multiple parallel-connected DG units. However, the poor active and reactive power sharing problems due to the influence of impedance mismatch of the DG feeders and the different ratings of the DG units are inevitable when the conventional droop control scheme is adopted. Therefore, the adaptive/improved droop control, network-based control methods and cost-based droop schemes are compared and summarized in this paper for active power sharing. Moreover, nonlinear and unbalanced loads could further affect the reactive power sharing when regulating the active power, and it is difficult to share the reactive power accurately only by using the enhanced virtual impedance method. Therefore, the hierarchical control strategies are utilized as supplements of the conventional droop controls and virtual impedance methods. The improved hierarchical control approaches such as the algorithms based on graph theory, multi-agent system, the gain scheduling method and predictive control have been proposed to achieve proper reactive power sharing for islanded microgrids and eliminate the effect of the communication delays on hierarchical control. Finally, the future research trends on islanded microgrids are also discussed in this paper.", "title": "" }, { "docid": "87da90ee583f5aa1777199f67bdefc83", "text": "The rapid development of computer networks in the past decades has created many security problems related to intrusions on computer and network systems. Intrusion Detection Systems IDSs incorporate methods that help to detect and identify intrusive and non-intrusive network packets. Most of the existing intrusion detection systems rely heavily on human analysts to analyze system logs or network traffic to differentiate between intrusive and non-intrusive network traffic. With the increase in data of network traffic, involvement of human in the detection system is a non-trivial problem. 
IDS’s ability to perform based on human expertise brings limitations to the system’s capability to perform autonomously over exponentially increasing data in the network. However, human expertise and their ability to analyze the system can be efficiently modeled using soft-computing techniques. Intrusion detection techniques based on machine learning and softcomputing techniques enable autonomous packet detections. They have the potential to analyze the data packets, autonomously. These techniques are heavily based on statistical analysis of data. The ability of the algorithms that handle these data-sets can use patterns found in previous data to make decisions for the new evolving data-patterns in the network traffic. In this paper, we present a rigorous survey study that envisages various soft-computing and machine learning techniques used to build autonomous IDSs. A robust IDSs system lays a foundation to build an efficient Intrusion Detection and Prevention System IDPS.", "title": "" }, { "docid": "2d5a8949119d7881a97693867a009917", "text": "Labeling a histopathology image as having cancerous regions or not is a critical task in cancer diagnosis; it is also clinically important to segment the cancer tissues and cluster them into various classes. Existing supervised approaches for image classification and segmentation require detailed manual annotations for the cancer pixels, which are time-consuming to obtain. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL) (along the line of weakly supervised learning) for histopathology image segmentation. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), medical image segmentation (cancer vs. non-cancer tissue), and patch-level clustering (different classes). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to performing the above three tasks in an integrated framework. In addition, we introduce contextual constraints as a prior for MCIL, which further reduces the ambiguity in MIL. Experimental results on histopathology colon cancer images and cytology images demonstrate the great advantage of MCIL over the competing methods.", "title": "" }, { "docid": "f02b44ff478952f1958ba33d8a488b8e", "text": "Plagiarism is an illicit act of using other’s work wholly or partially as one’s own in any field such as art, poetry literature, cinema, research and other creative forms of study. It has become a serious crime in academia and research fields and access to wide range of resources on the internet has made the situation even worse. Therefore, there is a need for automatic detection of plagiarism in text. This paper presents a survey of various plagiarism detection techniques used for different languages.", "title": "" }, { "docid": "026a0651177ee631a80aaa7c63a1c32f", "text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is rst given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the beneet of readers less familiar with computational linguistics. 
The discussion then moves on to Nlidb architectures, porta-bility issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reeections on the current state of the art.", "title": "" }, { "docid": "02605f4044a69b70673121985f1bd913", "text": "A novel class of low-cost, small-footprint and high-gain antenna arrays is presented for W-band applications. A 4 × 4 antenna array is proposed and demonstrated using substrate-integrated waveguide (SIW) technology for the design of its feed network and longitudinal slots in the SIW top metallic surface to drive the array antenna elements. Dielectric cubes of low-permittivity material are placed on top of each 1 × 4 antenna array to increase the gain of the circular patch antenna elements. This new design is compared to a second 4 × 4 antenna array which, instead of dielectric cubes, uses vertically stacked Yagi-like parasitic director elements to increase the gain. Measured impedance bandwidths of the two 4 × 4 antenna arrays are about 7.5 GHz (94.2-101.8 GHz) at 18 ± 1 dB gain level, with radiation patterns and gains of the two arrays remaining nearly constant over this bandwidth. While the fabrication effort of the new array involving dielectric cubes is significantly reduced, its measured radiation efficiency of 81 percent is slightly lower compared to 90 percent of the Yagi-like design.", "title": "" }, { "docid": "b05f96e22157b69d7033db35ab38524a", "text": "Novelty search has shown to be a promising approach for the evolution of controllers for swarms of robots. In existing studies, however, the experimenter had to craft a task-specific behaviour similarity measure. The reliance on hand-crafted similarity measures places an additional burden to the experimenter and introduces a bias in the evolutionary process. In this paper, we propose and compare two generic behaviour similarity measures: combined state count and sampled average state. The proposed measures are based on the values of sensors and effectors recorded for each individual robot of the swarm. The characterisation of the group-level behaviour is then obtained by combining the sensor-effector values from all the robots. We evaluate the proposed measures in an aggregation task and in a resource sharing task. We show that the generic measures match the performance of task-specific measures in terms of solution quality. Our results indicate that the proposed generic measures operate as effective behaviour similarity measures, and that it is possible to leverage the benefits of novelty search without having to craft task-specific similarity measures.", "title": "" }, { "docid": "ba2710c7df05b149f6d2befa8dbc37ee", "text": "This work proposes a method for blind equalization of possibly non-minimum phase channels using particular infinite impulse response (IIR) filters. In this context, the transfer function of the equalizer is represented by a linear combination of specific rational basis functions. This approach estimates separately the coefficients of the linear expansion and the poles of the rational basis functions by alternating iteratively between an adaptive (fixed pole) estimation of the coefficients and a pole placement method. 
The focus of the work is mainly on the issue of good pole placement (initialization and updating).", "title": "" }, { "docid": "6b0a4a8c61fb4ceabe3aa3d5664b4b67", "text": "Most existing approaches for text classification represent texts as vectors of words, namely ``Bag-of-Words.'' This text representation results in a very high dimensionality of feature space and frequently suffers from surface mismatching. Short texts make these issues even more serious, due to their shortness and sparsity. In this paper, we propose using ``Bag-of-Concepts'' in short text representation, aiming to avoid the surface mismatching and handle the synonym and polysemy problem. Based on ``Bag-of-Concepts,'' a novel framework is proposed for lightweight short text classification applications. By leveraging a large taxonomy knowledgebase, it learns a concept model for each category, and conceptualizes a short text to a set of relevant concepts. A concept-based similarity mechanism is presented to classify the given short text to the most similar category. One advantage of this mechanism is that it facilitates short text ranking after classification, which is needed in many applications, such as query or ad recommendation. We demonstrate the usage of our proposed framework through a real online application: Channel-based Query Recommendation. Experiments show that our framework can map queries to channels with a high degree of precision (avg. precision=90.3%), which is critical for recommendation applications.", "title": "" }, { "docid": "32fb1d8492e06b1424ea61d4c28f3c6c", "text": "Modern IT systems often produce large volumes of event logs, and event pattern discovery is an important log management task. For this purpose, data mining methods have been suggested in many previous works. In this paper, we present the LogCluster algorithm which implements data clustering and line pattern mining for textual event logs. The paper also describes an open source implementation of LogCluster.", "title": "" } ]
scidocsrr
0c39cc7afb570af24adeb2b801b6598e
Personal self-concept and satisfaction with life in adolescence, youth and adulthood.
[ { "docid": "c2448cd1ac95923b11b033041cfa0cb7", "text": "Reigning measures of psychological well-being have little theoretical grounding, despite an extensive literature on the contours of positive functioning. Aspects of well-being derived from this literature (i.e., self-acceptance, positive relations with others, autonomy, environmental mastery, purpose in life, and personal growth) were operationalized. Three hundred and twenty-one men and women, divided among young, middle-aged, and older adults, rated themselves on these measures along with six instruments prominent in earlier studies (i.e., affect balance, life satisfaction, self-esteem, morale, locus of control, depression). Results revealed that positive relations with others, autonomy, purpose in life, and personal growth were not strongly tied to prior assessment indexes, thereby supporting the claim that key aspects of positive functioning have not been represented in the empirical arena. Furthermore, age profiles revealed a more differentiated pattern of well-being than is evident in prior research.", "title": "" } ]
[ { "docid": "07a718d6e7136e90dbd35ea18d6a5f11", "text": "We discuss the importance of understanding psychological aspects of phishing, and review some recent findings. Given these findings, we critique some commonly used security practices and suggest and review alternatives, including educational approaches. We suggest a few techniques that can be used to assess and remedy threats remotely, without requiring any user involvement. We conclude by discussing some approaches to anticipate the next wave of threats, based both on psychological and technical insights. 1 What Will Consumers Believe? There are several reasons why it is important to understand what consumers will find believable. First of all, it is crucial for service providers to know their vulnerabilities (and those of their clients) in order to assess their exposure to risks and the associated liabilities. Second, recognizing what the vulnerabilities are translates into knowing from where the attacks are likely to come; this allows for suitable technical security measures to be deployed to detect and protect against attacks of concern. It also allows for a proactive approach in which the expected vulnerabilities are minimized by the selection and deployment of appropriate email and web templates, and the use of appropriate manners of interaction. Finally, there are reasons for why understanding users is important that are not directly related to security: Knowing what consumers will believe—and will not believe—means a better ability to reach the consumers with information they do not expect, whether for reasons of advertising products or communicating alerts. Namely, given the mimicry techniques used by phishers, there is a risk that consumers incorrectly classify legitimate messages as attempts to attack them. Being aware of potential pitfalls may guide decisions that facilitate communication. While technically knowledgeable, specialists often make the mistake of believing that security measures that succeed in protecting them are sufficient to protect average consumers. For example, it was for a long time commonly held among security practitioners that the widespread deployment of SSL would eliminate phishing once consumers become aware of the risks and nature of phishing attacks. This, very clearly, has not been the case, as supported both by reallife observations and by experiments [48]. This can be ascribed to a lack of attention to security among typical users [47, 35], but also to inconsistent or inappropriate security education [12]— whether implicit or not. An example of a common procedure that indirectly educates user is the case of lock symbols. Many financial institutions place a lock symbol in the content portion of the login page to indicate that a secure connection will be established as the user submits his credentials. This is to benefit from the fact that users have been educated to equate an SSL lock with a higher level of security. However, attackers may also place lock icons in the content of the page, whether they intend to establish an SSL connection or not. Therefore, the use of the lock", "title": "" }, { "docid": "e91d3ae1224ca4c86f72646fd86cc661", "text": "We examine the functional cohesion of procedures using a data slice abstraction. Our analysis identi es the data tokens that lie on more than one slice as the \\glue\" that binds separate components together. 
Cohesion is measured in terms of the relative number of glue tokens, tokens that lie on more than one data slice, and super-glue tokens, tokens that lie on all data slices in a procedure, and the adhesiveness of the tokens. The intuition and measurement scale factors are demonstrated through a set of abstract transformations and composition operators. Index terms | software metrics, cohesion, program slices, measurement theory", "title": "" }, { "docid": "8e26d11fa1ab330a429f072c1ac17fe2", "text": "The objective of this study was to report the signalment, indications for surgery, postoperative complications and outcome in dogs undergoing penile amputation and scrotal urethrostomy. Medical records of three surgical referral facilities were reviewed for dogs undergoing penile amputation and scrotal urethrostomy between January 2003 and July 2010. Data collected included signalment, presenting signs, indication for penile amputation, surgical technique, postoperative complications and long-term outcome. Eighteen dogs were included in the study. Indications for surgery were treatment of neoplasia (n=6), external or unknown penile trauma (n=4), penile trauma or necrosis associated with urethral obstruction with calculi (n=3), priapism (n=4) and balanoposthitis (n=1). All dogs suffered mild postoperative haemorrhage (posturination and/or spontaneous) from the urethrostomy stoma for up to 21 days (mean 5.5 days). Four dogs had minor complications recorded at suture removal (minor dehiscence (n=1), mild bruising and swelling around the urethrostomy site and mild haemorrhage at suture removal (n=2), and granulation at the edge of stoma (n=1)). One dog had a major complication (wound dehiscence and subsequent stricture of the stoma). Long-term outcome was excellent in all dogs with non-neoplastic disease. Local tumour recurrence and/or metastatic disease occurred within five to 12 months of surgery in two dogs undergoing penile amputation for the treatment of neoplasia. Both dogs were euthanased.", "title": "" }, { "docid": "878cd4545931099ead5df71076afc731", "text": "The pioneer deep neural networks (DNNs) have emerged to be deeper or wider for improving their accuracy in various applications of artificial intelligence. However, DNNs are often too heavy to deploy in practice, and it is often required to control their architectures dynamically given computing resource budget, i.e., anytime prediction. While most existing approaches have focused on training multiple shallow sub-networks jointly, we study training thin sub-networks instead. To this end, we first build many inclusive thin sub-networks (of the same depth) under a minor modification of existing multi-branch DNNs, and found that they can significantly outperform the state-of-art dense architecture for anytime prediction. This is remarkable due to their simplicity and effectiveness, but training many thin subnetworks jointly faces a new challenge on training complexity. To address the issue, we also propose a novel DNN architecture by forcing a certain sparsity pattern on multi-branch network parameters, making them train efficiently for the purpose of anytime prediction. In our experiments on the ImageNet dataset, its sub-networks have up to 43.3% smaller sizes (FLOPs) compared to those of the state-of-art anytime model with respect to the same accuracy. 
Finally, we also propose an alternative task under the proposed architecture using a hierarchical taxonomy, which brings a new angle for anytime prediction.", "title": "" }, { "docid": "be447131554900aaba025be449944613", "text": "Attackers increasingly take advantage of innocent users who tend to casually open email messages assumed to be benign, carrying malicious documents. Recent targeted attacks aimed at organizations utilize the new Microsoft Word documents (*.docx). Anti-virus software fails to detect new unknown malicious files, including malicious docx files. In this paper, we present ALDOCX, a framework aimed at accurate detection of new unknown malicious docx files that also efficiently enhances the framework’s detection capabilities over time. Detection relies upon our new structural feature extraction methodology (SFEM), which is performed statically using meta-features extracted from docx files. Using machine-learning algorithms with SFEM, we created a detection model that successfully detects new unknown malicious docx files. In addition, because it is crucial to maintain the detection model’s updatability and incorporate new malicious files created daily, ALDOCX integrates our active-learning (AL) methods, which are designed to efficiently assist anti-virus vendors by better focusing their experts’ analytical efforts and enhance detection capability. ALDOCX identifies and acquires new docx files that are most likely malicious, as well as informative benign files. These files are used for enhancing the knowledge stores of both the detection model and the anti-virus software. The evaluation results show that by using ALDOCX and SFEM, we achieved a high detection rate of malicious docx files (94.44% TPR) compared with the anti-virus software (85.9% TPR)—with very low FPR rates (0.19%). ALDOCX’s AL methods used only 14% of the labeled docx files, which led to a reduction of 95.5% in security experts’ labeling efforts compared with the passive learning and the support vector machine (SVM)-Margin (existing active-learning method). Our AL methods also showed a significant improvement of 91% in number of unknown docx malware acquired, compared with the passive learning and the SVM-Margin, thus providing an improved updating solution for the detection model, as well as the anti-virus software widely used within organizations.", "title": "" }, { "docid": "e2988860c1e8b4aebd6c288d37d1ca4e", "text": "Numerous studies have shown that datacenter computers rarely operate at full utilization, leading to a number of proposals for creating servers that are energy proportional with respect to the computation that they are performing.\n In this paper, we show that as servers themselves become more energy proportional, the datacenter network can become a significant fraction (up to 50%) of cluster power. In this paper we propose several ways to design a high-performance datacenter network whose power consumption is more proportional to the amount of traffic it is moving -- that is, we propose energy proportional datacenter networks.\n We first show that a flattened butterfly topology itself is inherently more power efficient than the other commonly proposed topology for high-performance datacenter networks. We then exploit the characteristics of modern plesiochronous links to adjust their power and performance envelopes dynamically. 
Using a network simulator, driven by both synthetic workloads and production datacenter traces, we characterize and understand design tradeoffs, and demonstrate an 85% reduction in power --- which approaches the ideal energy-proportionality of the network.\n Our results also demonstrate two challenges for the designers of future network switches: 1) We show that there is a significant power advantage to having independent control of each unidirectional channel comprising a network link, since many traffic patterns show very asymmetric use, and 2) system designers should work to optimize the high-speed channel designs to be more energy efficient by choosing optimal data rate and equalization technology. Given these assumptions, we demonstrate that energy proportional datacenter communication is indeed possible.", "title": "" }, { "docid": "555afe09318573b475e96e72d2c7e54e", "text": "A conflict-free replicated data type (CRDT) is an abstract data type, with a well defined interface, designed to be replicated at multiple processes and exhibiting the following properties: (i) any replica can be modified without coordinating with another replicas; (ii) when any two replicas have received the same set of updates, they reach the same state, deterministically, by adopting mathematically sound rules to guarantee state convergence.", "title": "" }, { "docid": "fba5b69c3b0afe9f39422db8c18dba06", "text": "It is well known that stressful experiences may affect learning and memory processes. Less clear is the exact nature of these stress effects on memory: both enhancing and impairing effects have been reported. These opposite effects may be explained if the different time courses of stress hormone, in particular catecholamine and glucocorticoid, actions are taken into account. Integrating two popular models, we argue here that rapid catecholamine and non-genomic glucocorticoid actions interact in the basolateral amygdala to shift the organism into a 'memory formation mode' that facilitates the consolidation of stressful experiences into long-term memory. The undisturbed consolidation of these experiences is then promoted by genomic glucocorticoid actions that induce a 'memory storage mode', which suppresses competing cognitive processes and thus reduces interference by unrelated material. Highlighting some current trends in the field, we further argue that stress affects learning and memory processes beyond the basolateral amygdala and hippocampus and that stress may pre-program subsequent memory performance when it is experienced during critical periods of brain development.", "title": "" }, { "docid": "352c61af854ffc6dab438e7a1be56fcb", "text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. 
The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.", "title": "" }, { "docid": "84301afe8fa5912dc386baab84dda7ea", "text": "There is a growing understanding that machine learning architectures have to be much bigger and more complex to approach any intelligent behavior. There is also a growing understanding that purely supervised learning is inadequate to train such systems. A recent paradigm of artificial recurrent neural network (RNN) training under the umbrella-name Reservoir Computing (RC) demonstrated that training big recurrent networks (the reservoirs) differently than supervised readouts from them is often better. It started with Echo State Networks (ESNs) and Liquid State Machines ten years ago where the reservoir was generated randomly and only linear readouts from it were trained. Rather surprisingly, such simply and fast trained ESNs outperformed classical fully-trained RNNs in many tasks. While full supervised training of RNNs is problematic, intuitively there should also be something better than a random network. In recent years RC became a vivid research field extending the initial paradigm from fixed random reservoir and trained output into using different methods for training the reservoir and the readout. In this thesis we overview existing and investigate new alternatives to the classical supervised training of RNNs and their hierarchies. First we present a taxonomy and a systematic overview of the RNN training approaches under the RC umbrella. Second, we propose and investigate the use of two different neural network models for the reservoirs together with several unsupervised adaptation techniques, as well as unsupervisedly layer-wise trained deep hierarchies of such models. We rigorously empirically test the proposed methods on two temporal pattern recognition datasets, comparing it to the classical reservoir computing state of art.", "title": "" }, { "docid": "fa313356d7267e963f75cd2ba4452814", "text": "INTRODUCTION\nStroke is a major cause of death and disability. Accurately predicting stroke outcome from a set of predictive variables may identify high-risk patients and guide treatment approaches, leading to decreased morbidity. Logistic regression models allow for the identification and validation of predictive variables. However, advanced machine learning algorithms offer an alternative, in particular, for large-scale multi-institutional data, with the advantage of easily incorporating newly available data to improve prediction performance. Our aim was to design and compare different machine learning methods, capable of predicting the outcome of endovascular intervention in acute anterior circulation ischaemic stroke.\n\n\nMETHOD\nWe conducted a retrospective study of a prospectively collected database of acute ischaemic stroke treated by endovascular intervention. Using SPSS®, MATLAB®, and Rapidminer®, classical statistics as well as artificial neural network and support vector algorithms were applied to design a supervised machine capable of classifying these predictors into potential good and poor outcomes. 
These algorithms were trained, validated and tested using randomly divided data.\n\n\nRESULTS\nWe included 107 consecutive acute anterior circulation ischaemic stroke patients treated by endovascular technique. Sixty-six were male and the mean age of 65.3. All the available demographic, procedural and clinical factors were included into the models. The final confusion matrix of the neural network, demonstrated an overall congruency of ∼ 80% between the target and output classes, with favourable receiving operative characteristics. However, after optimisation, the support vector machine had a relatively better performance, with a root mean squared error of 2.064 (SD: ± 0.408).\n\n\nDISCUSSION\nWe showed promising accuracy of outcome prediction, using supervised machine learning algorithms, with potential for incorporation of larger multicenter datasets, likely further improving prediction. Finally, we propose that a robust machine learning system can potentially optimise the selection process for endovascular versus medical treatment in the management of acute stroke.", "title": "" }, { "docid": "43cc6e40a7a31948ca2e7c141b271dbf", "text": "The false discovery rate (FDR)—the expected fraction of spurious discoveries among all the discoveries—provides a popular statistical assessment of the reproducibility of scientific studies in various disciplines. In this work, we introduce a new method for controlling the FDR in meta-analysis of many decentralized linear models. Our method targets the scenario where many research groups—possibly the number of which is random—are independently testing a common set of hypotheses and then sending summary statistics to a coordinating center in an online manner. Built on the knockoffs framework introduced by Barber and Candès (2015), our procedure starts by applying the knockoff filter to each linear model and then aggregates the summary statistics via one-shot communication in a novel way. This method gives exact FDR control non-asymptotically without any knowledge of the noise variances or making any assumption about sparsity of the signal. In certain settings, it has a communication complexity that is optimal up to a logarithmic factor.", "title": "" }, { "docid": "5e07328bf13a9dd2486e9dddbe6a3d8f", "text": "We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer’s functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.", "title": "" }, { "docid": "64a0e5d297c1bf2d42eae909e9548fb6", "text": "How to find the representative bands is a key issue in band selection for hyperspectral data. Very often, unsupervised band selection is associated with data clustering, and the cluster centers (or exemplars) are considered ideal representatives. 
However, partitioning the bands into clusters may be very time-consuming and affected by the distribution of the data points. In this letter, we propose a new band selection method, i.e., exemplar component analysis (ECA), aiming at selecting the exemplars of bands. Interestingly, ECA does not involve actual clustering. Instead, it prioritizes the bands according to their exemplar score, which is an easy-to-compute indicator defined in this letter measuring the possibility of bands to be exemplars. As a result, ECA is of high efficiency and immune to distribution structures of the data. The experiments on real hyperspectral data set demonstrate that ECA is an effective and efficient band selection method.", "title": "" }, { "docid": "74227709f4832c3978a21abb9449203b", "text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.", "title": "" }, { "docid": "867bb8f30a1e9440a03903d8471443f0", "text": "In this paper we present the Reactable, a new electronic musical instrument with a simple and intuitive tabletop interface that turns music into a tangible and visual experience, enabling musicians to experiment with sound, change its structure, control its parameters and be creative in a direct, refreshing and unseen way.", "title": "" }, { "docid": "4c5ed8940b888a4eb2abc5791afd5a36", "text": "A low-gain antenna (LGA) is designed for high cross-polarization discrimination (XPD) and low backward radiation within the 8.025-8.4-GHz frequency band to mitigate cross-polarization and multipath interference given the spacecraft layout constraints. The X-band choke ring horn was optimized, fabricated, and measured. The antenna gain remains higher than 2.5 dBi for angles between 0° and 60° off-boresight. The XPD is higher than 15 dB from 0° to 40° and higher than 20 dB from 40° to 60° off-boresight. The calculated and measured data are in excellent agreement.", "title": "" }, { "docid": "59f083611e4dc81c5280fc118e05401c", "text": "We propose a low area overhead and power-efficient asynchronous-logic quasi-delay-insensitive (QDI) sense-amplifier half-buffer (SAHB) approach with quad-rail (i.e., 1-of-4) data encoding. The proposed quad-rail SAHB approach is targeted for area- and energy-efficient asynchronous network-on-chip (ANoC) router designs. There are three main features in the proposed quad-rail SAHB approach. 
First, the quad-rail SAHB is designed to use four wires for selecting four ANoC router directions, hence reducing the number of transistors and area overhead. Second, the quad-rail SAHB switches only one out of four wires for 2-bit data propagation, hence reducing the number of transistor switchings and dynamic power dissipation. Third, the quad-rail SAHB abides by QDI rules, hence the designed ANoC router features high operational robustness toward process-voltage-temperature (PVT) variations. Based on the 65-nm CMOS process, we use the proposed quad-rail SAHB to implement and prototype an 18-bit ANoC router design. When benchmarked against the dual-rail counterpart, the proposed quad-rail SAHB ANoC router features 32% smaller area and dissipates 50% lower energy under the same excellent operational robustness toward PVT variations. When compared to the other reported ANoC routers, our proposed quad-rail SAHB ANoC router is one of the high operational robustness, smallest area, and most energy-efficient designs.", "title": "" }, { "docid": "19ea89fc23e7c4d564e4a164cfc4947a", "text": "OBJECTIVES\nThe purpose of this study was to evaluate the proximity of the mandibular molar apex to the buccal bone surface in order to provide anatomic information for apical surgery.\n\n\nMATERIALS AND METHODS\nCone-beam computed tomography (CBCT) images of 127 mandibular first molars and 153 mandibular second molars were analyzed from 160 patients' records. The distance was measured from the buccal bone surface to the root apex and the apical 3.0 mm on the cross-sectional view of CBCT.\n\n\nRESULTS\nThe second molar apex and apical 3 mm were located significantly deeper relative to the buccal bone surface compared with the first molar (p < 0.01). For the mandibular second molars, the distance from the buccal bone surface to the root apex was significantly shorter in patients over 70 years of age (p < 0.05). Furthermore, this distance was significantly shorter when the first molar was missing compared to nonmissing cases (p < 0.05). For the mandibular first molars, the distance to the distal root apex of one distal-rooted tooth was significantly greater than the distance to the disto-buccal root apex (p < 0.01). In mandibular second molar, the distance to the apex of C-shaped roots was significantly greater than the distance to the mesial root apex of non-C-shaped roots (p < 0.01).\n\n\nCONCLUSIONS\nFor apical surgery in mandibular molars, the distance from the buccal bone surface to the apex and apical 3 mm is significantly affected by the location, patient age, an adjacent missing anterior tooth, and root configuration.", "title": "" }, { "docid": "3f4953e2fd874fa9be4ab64912cd190a", "text": "Road detection from a monocular camera is an important perception module in any advanced driver assistance or autonomous driving system. Traditional techniques [1, 2, 3, 4, 5, 6] work reasonably well for this problem, when the roads are well maintained and the boundaries are clearly marked. However, in many developing countries or even for the rural areas in the developed countries, the assumption does not hold which leads to failure of such techniques. In this paper we propose a novel technique based on the combination of deep convolutional neural networks (CNNs), along with color lines model [7] based prior in a conditional random field (CRF) framework. While the CNN learns the road texture, the color lines model allows to adapt to varying illumination conditions. 
We show that our technique outperforms the state of the art segmentation techniques on the unmarked road segmentation problem. Though, not a focus of this paper, we show that even on the standard benchmark datasets like KITTI [8] and CamVid [9], where the road boundaries are well marked, the proposed technique performs competitively to the contemporary techniques.", "title": "" } ]
scidocsrr
b76682699bd65eb1bb86bfedf78406c9
A food image recognition system with Multiple Kernel Learning
[{"docid":"432fe001ec8f1331a4bd033e9c49ccdf","text":"Recently, methods based on local image features(...TRUNCATED)
[{"docid":"02156199912027e9230b3c000bcbe87b","text":"Voice conversion (VC) using sequence-to-sequenc(...TRUNCATED)
scidocsrr
8aad42609bf989c816c96442c69dd42f
"Evaluating Reliability and Predictive Validity of the Persian Translation of Quantitative Checklist(...TRUNCATED)
[{"docid":"ab8cc15fe47a9cf4aa904f7e1eea4bc9","text":"Autism, a severe disorder of development, is di(...TRUNCATED)
[{"docid":"9775396477ccfde5abdd766588655539","text":"The use of hand gestures offers an alternative (...TRUNCATED)
scidocsrr
bffc21925bf37c6af821150d9a109478
Improving a credit card fraud detection system using genetic algorithm
[{"docid":"51eb8e36ffbf5854b12859602f7554ef","text":"Fraud is increasing dramatically with the expan(...TRUNCATED)
[{"docid":"fdaf5546d430226721aa1840f92ba5af","text":"The recent development of regulatory policies t(...TRUNCATED)
scidocsrr
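Each full row shown above follows the same shape: a hashed query id, a query string, a list of positive passages and a list of negative passages (each passage carrying docid, text and title fields), and a subset tag such as scidocsrr. As a rough, non-authoritative sketch of how rows with this shape could be flattened into (query, passage, label) pairs for training or evaluating a retrieval model — the example ids, the exact column names, and the commented-out load_dataset() path below are illustrative assumptions, not values taken from this page — one possibility in Python is:

# Minimal illustrative sketch only: flatten retrieval rows shaped like the ones in
# this preview into (query, passage_text, label) pairs. The example ids and the
# commented-out repository path are placeholders, not values taken from this page.
from typing import Dict, Iterable, List, Optional, Tuple

# from datasets import load_dataset
# rows = load_dataset("your-org/your-retrieval-dataset", split="train")  # hypothetical repo id

example_row: Dict = {
    "query_id": "placeholder-query-id",
    "query": "Improving a credit card fraud detection system using genetic algorithm",
    "positive_passages": [
        {"docid": "placeholder-pos-1", "text": "Fraud is increasing dramatically ...", "title": ""},
    ],
    "negative_passages": [
        {"docid": "placeholder-neg-1", "text": "An unrelated abstract used as a non-relevant passage ...", "title": ""},
    ],
    "subset": "scidocsrr",
}

def to_pairs(rows: Iterable[Dict], subset: Optional[str] = None) -> List[Tuple[str, str, int]]:
    # Turn rows into (query, passage_text, label) pairs; label 1 marks a relevant passage.
    pairs: List[Tuple[str, str, int]] = []
    for row in rows:
        if subset is not None and row.get("subset") != subset:
            continue  # keep only the requested subset, e.g. "scidocsrr"
        for passage in row.get("positive_passages", []):
            pairs.append((row["query"], passage["text"], 1))
        for passage in row.get("negative_passages", []):
            pairs.append((row["query"], passage["text"], 0))
    return pairs

if __name__ == "__main__":
    for query, text, label in to_pairs([example_row], subset="scidocsrr"):
        print(label, query, "->", text[:60])

With the real data loaded through the datasets library, the same helper would apply unchanged to each split or subset.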