column              type             range
query_id            stringlengths    32 to 32
query               stringlengths    7 to 2.91k
positive_passages   listlengths      1 to 7
negative_passages   listlengths      10 to 100
subset              stringclasses    7 values
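Each record below pairs one query with a short list of positive passages and a larger pool of negatives, tagged with the subset it was drawn from (fiqa, scidocsrr). As a minimal sketch of how such a dump could be consumed, the Python snippet below reads a JSON-lines export and expands each record into (query, passage, label) pairs. The file name retrieval_rows.jsonl, the JSON-lines layout, and the helper names are assumptions for illustration; only the field names (query_id, query, positive_passages, negative_passages, subset, and the per-passage "text" key) come from the schema and rows shown here.

```python
import json

def load_rows(path):
    """Yield one record per line from a JSON-lines dump with the schema above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def to_training_pairs(row):
    """Expand one record into (query, passage_text, label) tuples for a reranker."""
    pairs = [(row["query"], p["text"], 1) for p in row["positive_passages"]]
    pairs += [(row["query"], n["text"], 0) for n in row["negative_passages"]]
    return pairs

if __name__ == "__main__":
    # "retrieval_rows.jsonl" is a hypothetical export path, not part of the dataset itself.
    for row in load_rows("retrieval_rows.jsonl"):
        print(row["subset"], row["query_id"], row["query"][:60])
        print("  positives:", len(row["positive_passages"]),
              "negatives:", len(row["negative_passages"]),
              "pairs:", len(to_training_pairs(row)))
```

Run as-is, the script prints, for each record, its subset tag, query id, truncated query text, and the counts of positive and negative passages alongside the number of generated training pairs.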
873d0cda976a62a1bec378b0f14f4d6f
Can a company charge you for services never requested or received?
[ { "docid": "0a0f16b824e6dab326bf5f18bbd456c0", "text": "In general, you can only be charged for services if there is some kind of contract. The contract doesn't have to be written, but you have to have agreed to it somehow. However, it is possible that you entered into a contract due to some clause in the home purchase contract or the contract with the home owners' association. There are also sometimes services you are legally required to get, such as regular inspection of heating furnaces (though I don't think this translates to automatic contracts). But in any case you would not be liable for services rendered before you entered into the contract, which sounds like it's the case here.", "title": "" }, { "docid": "914cf45781f65096709ea6f6a48237cf", "text": "No. A company cannot bill you for services you did not request nor receive. If they could, imagine how many people would just randomly get bills in their mail. Ignore them. They don't have a contract or agreement with you and can't do anything other than make noise. If they get aggressive or don't stop requesting money, hire an attorney and it will be taken care of.", "title": "" }, { "docid": "c04232d35c3027bae24245c0369769ec", "text": "I have had a couple of businesses do this to me. I simply ask them to come over to talk about the bill. Sometimes this ends it. If they come over then I call the cops to file a report on fraud. A lot of times the police will do nothing unless they have had a load of complaints but it certainly gets the company off your back. And if they are truly unscrupulous it doesn't hurt to get a picture of them talking with the police and their van, and then post the whole situation online - you will see others come forward really quick after doing something like this.", "title": "" }, { "docid": "2663ac52e0b08439c2b736ddc3fd573d", "text": "\"Here's another example of such a practice and the problem it caused. My brother, who lived alone, was missing from work for several days so a co-worker went to his home to search for him and called the local Sheriff's Office for assistance. The local fire department which runs the EMS ambulance was also dispatched in the event there was a medical emergency. They discovered my brother had passed away inside his home and had obviously been dead for days. As our family worked on probate matters to settle his estate following this death, it was learned that the local fire department had levied a bill against my brother's estate for $800 for responding with their ambulance to his home that day. I tried to talk to their commander about this, insisting my brother had not called them, nor had they transported him or even checked his pulse. The commander insisted theirs was common practice - that someone was always billed for their medical response. He would not withdraw his bill for \"\"services\"\". I hate to say, but the family paid the bill in order to prevent delay of his probate issues and from receiving monies that paid for his final expenses.\"", "title": "" } ]
[ { "docid": "2edf29c8d6d138c80ffaab5b810e5260", "text": "If there was some contract in place (even a verbal agreement) that he would complete the work you asked for in return for payment, then you don't have to pay him anything. He hasn't completed the work and what he did do was stolen from another person. He hasn't held up his end of the agreement, so you don't owe squat.", "title": "" }, { "docid": "d5e4ca3bd60328381f8ea5cbd1c4a30f", "text": "If you have not already hired another caterer, potentially your best solution might be to try and work out something with these folks. Presuming of course that they still have access to their equipment, dishware, etc, and to the extent that what you have paid might cover their labor, equipment use etc there might be some way for them to provide the services you have paid for, if you pay for materials such as the food itself directly . This presumes of course that it's only the IRS that they stiffed, and have not had most of their (material) capital assets repossessed or seized. and you still trusted them enough to work out something. Otherwise as Duff points out you will likely need to file a small claims lawsuit and get in line with any other creditors.", "title": "" }, { "docid": "52b93ea21402f1d2f3d73a6d680c120c", "text": "I have already talked to them over the phone and they insist they haven't charged me yet, and I will not be charged. When I informed them I had in fact been charged they agreed it would be reversed. So I have tried to resolve the issue and I don't have any confidence they will reverse the charge as it has not been done yet. They are difficult to communicate which makes the whole process more difficult. Your best next step is to call the credit card company and share this story. I believe the likely result is that the credit card company will initiate a charge back. My question is, is this a valid reason to file a chargeback on my credit card? Yes. If you attempted to work it out with the vendor and it is not working out, this is an appropriate time to initiate a charge back.", "title": "" }, { "docid": "c435f5c350f31fd9c7567c22ec82571e", "text": "Obviously, the credit card's administators know who this charge was submitted by. Contact them, tell them that you don't recognize the charge, and ask them to tell you who it was from. If they can't or won't, tell them you suspect fraud and want it charged back, then wait to see who contacts you to complain that the payment was cancelled. Note that you should charge back any charge you firmly believe is an error, if attempts to resolve it with the company aren't working. Also note that if you really ghink this is fraud, you should contact your bank and ask them to issue a new card number. Standard procedures exist. Use them when appropriate.", "title": "" }, { "docid": "202cf175509a021a1050b9735f8505b3", "text": "\"You have a subscription that costs $25 They have the capabilities to get that $25 from the card on file if you had stopped paying for it, you re-upping the cost of the subscription was more of a courtesy. They would have considered pulling the $25 themselves or it may have gone to collections (or they could courteously ask if you wanted to resubscribe, what a concept) The credit card processing agreements (with the credit card companies) and the FTC would handle such business practices, but \"\"illegal\"\" wouldn't be the word I would use. 
The FTC or Congress may have mandated that an easy \"\"opt-out\"\" number be associated with that kind of business practice, and left it at that.\"", "title": "" }, { "docid": "36bc3419347f5ab9a094d1c7d866fbae", "text": "\"Anything is negotiable. Clearly in the current draft of the contract the company isn't going to calculate or withhold taxes on your behalf - that is your responsibility. But if you want to calculate taxes yourself, and break out the fees you are receiving into several \"\"buckets\"\" on the invoice, the company might agree (they might have to run it past their legal department first). I don't see how that helps anything - it just divides the single fee into two pieces with the same overall total. As @mhoran_psprep points out, it appears that the company expects you to cover your expenses from within your charges. Thus, it's up to you to decide the appropriate fees to charge, and you are assuming the risk that you have estimated your expenses incorrectly. If you want the company to pay you a fee, plus reimburse your expenses, you will need to craft that into the contract. It's not clear what kind of expenses you need to be covered, and sometimes companies will not agree to them. For specific tax rule questions applicable to your locale, you should consult your tax adviser.\"", "title": "" }, { "docid": "639cc7a31d1d784762a35b44780f1a2c", "text": "You definitely have an argument for getting them to reverse the late fee, especially if it hasn't happened very often. (If you are late every month they may be less likely to forgive.) As for why this happens, it's not actually about business days, but instead it's based on when they know that you paid. In general, there are 2 ways for a company to mark a bill as paid: Late Fees: Some systems automatically assign late fees at the start of the day after the due date if money has not been received. In your case, if your bill was due on the 24th, the late fee was probably assessed at midnight of the 25th, and the payment arrived after that during the day of the 25th. You may have been able to initiate the payment on the company's website at 11:59pm on the 24th and not have received a late fee (or whatever their cutoff time is). Suggestion: as a rule of thumb, for utility bills whose due date and amount can vary slightly from month to month, you're usually better off setting up your payments on the company website to pull from your bank account, instead of setting up your bank account to push the payment to the company. This will ensure that you always get the bill paid on time and for the correct amount. If you still would rather push the payment from your bank account, then consider setting up the payment to arrive about 5 days early, to account for holidays and weekends.", "title": "" }, { "docid": "2fb4a9419331064c1938409da6c4e3f8", "text": "Phone conversations are useless if the company is uncooperative, you must take it into the written word so it can be documented. Sent them certified letters and keep copies of everything you send and any written responses from the company. This is how you will get actual action.", "title": "" }, { "docid": "34a9082d8d05827f9fda9ec540a53c71", "text": "W9 is required for any payments. However, in your case - these are not payments, but refunds, i.e.: you're not receiving any income from the company that is subject to tax or withholding rules, you're receiving money that is yours already. 
I do not think they have a right to demand W9 as a condition of refund, and as Joe suggested - would dispute the charge as fraudulent.", "title": "" }, { "docid": "f75c66b588570fe3601c49ee0a1ecd46", "text": "So, since you have no record of picking it up, are you going to do the right thing and claim you never got it? On another note, I was known at the local home depot for being the guy who ordered things online, they actually used my orders to train new people. That was back when buying online got 5% back from Discover.", "title": "" }, { "docid": "1cc4f7ba9a0c307acb4c55a928045ef2", "text": "Inform the company that you didn't receive the payment. Only they can trace the payment via their bank.", "title": "" }, { "docid": "cf2b2bc6c3b544fa27f5fbbea273dbca", "text": "Well, it really depends on for how long the quote has been made. But yes, when you're honoring it, you should let them know that this is a once of thing and that you're out of pocket doing it. Most people will understand and when you make the appropriate quote next time around, especially when elaborate where the additional cost that you did not account for initially, come from. It's important to maintain customer trust by being transparent. You can justify higher prices with time needed, material needed or whatever comes to mind. It's just important to convince that customer that without it, they wouldn't get this superb service that they're getting now.", "title": "" }, { "docid": "7b0436dec2a966beeef456ac1afa55a3", "text": "not if it's only Bob and a couple others that are having the problem. The company is spending more money on the wages of the guy helping him out than what Bob brought to the company with his purchase. There's no sense in paying for a customer.", "title": "" }, { "docid": "6c76b97fce53688c272eebaeee2f0c8d", "text": "What you are describing here is the opposite of a problem: You're trying to contact a debt-collector to pay them money, but THEY'RE ignoring YOU and won't return your calls! LOL! All joking aside, having 'incidental' charges show up as negative marks on your credit history is an annoyance- thankfully you're not the first to deal with such problems, and there are processes in place to remedy the situation. Contact the credit bureau(s) on which the debt is listed, and file a petition to have it removed from your history. If everything that you say here is true, then it should be relatively easy. Edit: See here for Equifax's dispute resolution process- it sounds like you've already completed the first two steps.", "title": "" }, { "docid": "316710461de83750af605d1897addf25", "text": "Chris, since you own your own company, nobody can stop you from charging your personal expenses to your business account. IRS is not a huge fan of mixing business and personal expenses and this practice might indicate to them that you are not treating your business seriously, and it should classify your business as a hobby. IRS defines deductible business expense as being both: ordinary AND necessary. Meditation is not an ordinary expense (other S-corps do not incur such expense.) It is not a necessary expense either. Therefore, you cannot deduct this expense. http://www.irs.gov/Businesses/Small-Businesses-&-Self-Employed/Deducting-Business-Expenses", "title": "" } ]
fiqa
36a898fc52389242f06a91f2eec42c9b
How can I save money on a gym / fitness membership? New Year's Resolution is to get in shape - but on the cheap!
[ { "docid": "3becf428add18f59ba38d20807e3f7d7", "text": "Shop around for Gym January is a great time to look because that's when most people join and the gyms are competing for your business. Also, look beyond the monthly dues. Many gyms will give free personal training sessions when you sign up - a necessity if you are serious about getting in shape! My gym offered a one time fee for 3 years. It cost around $600 which comes out to under $17 a month. Not bad for a new modern state of the art gym.", "title": "" }, { "docid": "d98599e2bb8795a543c46c226255323c", "text": "If you're determined to save money, find ways to integrate exercise into your daily routine and don't join a gym at all. This makes it more likely you'll keep it up if it is a natural part of your day. You could set aside half the money you would spend on the gym towards some of the options below. I know it's not always practical, especially in the winter, but here are a few things you could do. One of the other answers makes a good point. Gym membership can be cost effective if you go regularly, but don't kid yourself that you'll suddenly go 5 times a week every week if you've not done much regular exercise. If you are determined to join a gym, here are a few other things to consider.", "title": "" }, { "docid": "a5965eca10891f12b394ad3541cdc32a", "text": "Try a gym for a month before you sign up on any contracts. This will also give you time to figure out if you are the type who can stick with a schedule to workout on regular basis. Community centres are cost effective and offer pretty good facilities. They have monthly plans as well so no long term committments.", "title": "" }, { "docid": "eb1e4693e06138828d8a9809185fd27e", "text": "Find a physical activity or programme that interests you. Memberships only have real value if you use them. Consider learning a martial art like karate, aikido, kung fu, tai kwan do, judo, tai chi chuan. :-) Even yoga is a good form of exercise. Many of these are offered at local community centres if you just want to try it out without worrying about the cost initially. Use this to gauge your interest before considering more advanced clubs. One advantage later on if you stay with it long enough - some places will compensate you for being a junior or even associate instructor. Regardless of whether this is your interest or if the gym membership is more to your liking real value is achieved if you have a good routine and interest in your physical fitness activity. It also helps to have a workout buddy or partner. They will help motivate you to try even when you don't feel like working out.", "title": "" }, { "docid": "847a632b12e6877c7889efada52dfa79", "text": "The gym I used to use was around £35-40 a month, its quite a big whack but if you think about it; its pretty good value for money. That includes gym use, swimming pool use, and most classes Paying for a gym session is around £6 a go, so if you do that 3 times a week, then make use of the other facilities like swimming at the weekends, maybe a few classes on the nights your not at the gym it does work out ok As for deals, my one used to do family membership deals, and I think things like referring a friend gives you money off etc. They will probably also put on some deals in January since lots of people want to give it a go being new year and all", "title": "" }, { "docid": "63309a9b0948785f9f5d96857b4dde78", "text": "Look for discounts from a health insurance provider, price club, professional memberships or credit cards. 
That goes for a lot of things besides health memberships. My wife is in a professional woman's association for networking at work. A side benefit is an affiliate network they offer for discounts of lots of things, including gym memberships.", "title": "" }, { "docid": "3e873c2f7acf9a5c8ec012a8b705c129", "text": "I came across an article posted at Squawkfox last week. It's particularly relevant to answering this question. See 10 Ways to Cut Your Fitness Membership Costs. Here's an excerpt: [...] If you’re in the market for a shiny new gym membership, it may be wise to read the fine print and know your rights before agreeing to a fitness club contract. No one wants to be stuck paying for a membership they can no longer use, for whatever reason. But if you’re revved and ready to burn a few calories, here are ten ways to get fitter while saving some cash on a fitness club or gym membership. Yay, fitness tips! [...] Check it out!", "title": "" } ]
[ { "docid": "950c17269f8da50f264a91dc43c67c1d", "text": "Saving for retirement is important. So is living within one's means. Also--wear your sunscreen every day, rain or shine, never stop going to the gym, stay the same weight you were in high school, and eat your vegetables if you want to pass for 30 when you are 50.", "title": "" }, { "docid": "4abd220e2e701da0dd7a47df87939235", "text": "It depends on you. If you're not an aggressive shopper and travel , you'll recoup your membership fee in hotel savings with one or two stays. Hilton brands, for example, give you a 10% discount. AARP discounts can sometimes be combined with other offers as well. From an insurance point of view, you should always shop around, but sometimes group plans like AARP's have underwriting standards that work to your advantage.", "title": "" }, { "docid": "bac44a8c730685829aae631e9b51a6dc", "text": "\"Okay. Savings-in-a-nutshell. So, take at least year's worth of rent - $30k or so, maybe more for additional expenses. That's your core emergency fund for when you lose your job or total a few cars or something. Keep it in a good savings account, maybe a CD ladder - but the point is it's liquid, and you can get it when you need it in case of emergency. Replenish it immediately after using it. You may lose a little cash to inflation, but you need liquidity to protect you from risk. It is worth it. The rest is long-term savings, probably for retirement, or possibly for a down payment on a home. A blended set of stocks and bonds is appropriate, with stocks storing most of it. If saving for retirement, you may want to put the stocks in a tax-deferred account (if only for the reduced paperwork! egads, stocks generate so much!). Having some money (especially bonds) in something like a Roth IRA or a non-tax-advantaged account is also useful as a backup emergency fund, because you can withdraw it without penalties. Take the money out of stocks gradually when you are approaching the time when you use the money. If it's closer than five years, don't use stocks; your money should be mostly-bonds when you're about to use it. (And not 30-year bonds or anything like that either. Those are sensitive to interest rates in the short term. You should have bonds that mature approximately the same time you're going to use them. Keep an eye on that if you're using bond funds, which continually roll over.) That's basically how any savings goal should work. Retirement is a little special because it's sort of like 20 years' worth of savings goals (so you don't want all your savings in bonds at the beginning), and because you can get fancy tax-deferred accounts, but otherwise it's about the same thing. College savings? Likewise. There are tools available to help you with this. An asset allocation calculator can be found from a variety of sources, including most investment firms. You can use a target-date fund for something this if you'd like automation. There are also a couple things like, say, \"\"Vanguard LifeStrategy funds\"\" (from Vanguard) which target other savings goals. You may be able to understand the way these sorts of instruments function more easily than you could other investments. You could do a decent job for yourself by just opening up an account at Vanguard, using their online tool, and pouring your money into the stuff they recommend.\"", "title": "" }, { "docid": "7601e04f3bc71c067101f24687e82a63", "text": "Track your expenses. Find out where your money is going, and target areas where you can reduce expenses. 
Some examples: I was spending a lot on food, buying too much packaged food, and eating out too much. So I started cooking from scratch more and eating out less. Now, even though I buy expensive organic produce, imported cheese, and grass-fed beef, I'm spending half of what I used to spend on food. It could be better. I could cut back on meat and eat out even less. I'm working on it. I was buying a ton of books and random impulsive crap off of Amazon. So I no longer let myself buy things right away. I put stuff on my wish list if I want it, and every couple of months I go on there and buy myself a couple of things off my wishlist. I usually end up realizing that some of the stuff on there isn't something I want that badly after all, so I just delete it from my wishlist. I replaced my 11-year-old Jeep SUV with an 11-year-old Saturn sedan that gets twice the gas mileage. That saves me almost $200/month in gasoline costs alone. I had cable internet through Comcast, even though I don't have a TV. So I went from a $70/month cable bill to a $35/month DSL bill, which cut my internet costs in half. I have an iPhone and my bill for that is $85/month. That's insane, with how little I talk on the phone and send text messages. Once it goes out of contract, I plan to replace it with a cheap phone, possibly a pre-paid. That should cut my phone expenses in half, or even less. I'll keep my iPhone, and just use it when wifi is available (which is almost everywhere these days).", "title": "" }, { "docid": "09b119db97e23f1561e931465bf82e81", "text": "Agree wholeheartedly with the first point - keep track! It's like losing weight, the first step is to be aware of what you are doing. It also helps to have a goal (e.g. pay for a trip to Australia, have X in my savings account), and then with each purchase ask 'what will I do with this when I go to Australia' or 'how does this help towards goal x?' Thrift stores and the like require some time searching but can be good value. If you think you need something, watch for sales too.", "title": "" }, { "docid": "3d05671fdb3c36883abcde29fd83fabc", "text": "I make it a habit at the end of every day to think about how much money I spent in total that day, being mindful of what was essential and wasn't. I know that I might have spent $20 on a haircut (essential), $40 on groceries (essential) and $30 on eating out (not essential). Then I realize that I could have just spent $60 instead of $90. This habit, combined with the general attitude that it's better to have not spent some mone than to have spent some money, has been pretty effective for me to bring down my monthly spending. I guess this requires more motivation than the other more-involved techniques given here. You have to really want to reduce your spending. I found motivation easy to come by because I was spending a lot and I'm still looking for a job, so I have no sources of income. But it's worked really well so far.", "title": "" }, { "docid": "b5784f5173fee940085b18abefd8ac43", "text": "The best way to save on clothes is up to you. I have friends who save all year for two yearly shopping trips to update anything that may need updating at the time. By allowing themselves only two trips, they control the money spent. Bring it in cash and stop buying when you run out. On the other hand in my family we shop sales. When we determine that we need something we wait until we find a sale. 
When we see an exceptionally good sale on something we know we will need (basic work dress shoes, for example), we'll purchase it and save it until the existing item it is replacing has worn out. Our strategy is to know what we need and buy it when the price is right. We tend to wait on anything that isn't on sale until we can find the right item at a price we like, which sometimes means stretching the existing piece of clothing it is replacing until well after its prime. If you've got a list you're shopping from, you know what you need. The question becomes: how will you control your spending best? Carefully shopping sales and using coupons, or budgeting for a spree within limits?", "title": "" }, { "docid": "5f218c61466d5c2c295984a1d83a152b", "text": "\"The way I approach \"\"afford to lose\"\", is that you need to sit down and figure out the amount of money you need at different stages of your life. I can look at my current expenses and figure out what I will always roughly be paying - bills, groceries, rent/mortgage. I can figure out when I want to retire and how much I want to live on - I generally group 401k and other retirement separately to what I want to invest. With these numbers I can figure out how much I need to save to achieve this goal. Maybe you want to purchase a house in 5 years - figure out the rough down payment and include that in your savings plan. Continue for all capital purchases that you can think you would aim for. Subtract your income from this and you have the amount of money you have greater discretion over. Subtracting current liabilities (4th of July holiday... christmas presents) and you have the amount you could \"\"afford to lose\"\". As to the asset allocation you should look at, as others have mentioned that the younger you the greater your opportunity is to recoup losses. Personally I would disagree - you should have some plan for the investment and use that goal to drive your diversification.\"", "title": "" }, { "docid": "593c2052f536084940c862901c5f2843", "text": "Interesting, that makes some sense. With Planet Fitness, my understanding is that their cost structure is slanted towards fixed costs. Whether their members come to the gym or not doesn't matter; they still have to pay rent, labor, utilities, buy equipment, etc. Those costs don't change much if people subscribe and don't show up vs. subscribe and do show up. Moviepass seems to be almost entirely variable; their costs are buying movie tickets when people order. They would love it if people signed up and never used it, but unlike PF if people DID use it they'd be completely screwed. It's a risky plan, but it just might work as long as people don't figure out a way to game the system (or, you know, turn out to be movie buffs).", "title": "" }, { "docid": "f35317548c0342e1ecd3c69b1d7c2e3e", "text": "\"A trick that works for some folks: \"\"Pay yourself first.\"\" Have part of your paycheck put directly into an account that you promise yourself you won't touch except for some specific purpose (eg retirement). If that money is gone before it gets to your pocket, it's much less likely to be spent. US-specific: Note that if your employer offers a 401k program with matching funds, and you aren't taking advantage of that, you are leaving free money on the table. That does put an additional barrier between you and the money until you retire, too. 
(In other countries, look for other possible matching fundsand/or tax-advantaged savings programs; for that matter there are some other possibilities in the US, from education savings plans to discounted stock purchase that you could sell immediately for a profit. I probably should be signed up for that last...)\"", "title": "" }, { "docid": "8dc02c817c798f53a098e1f8c3943822", "text": "I've often encountered the practices you describe in the Netherlands too. This is how I deal with it. Avoid gyms with aggressive sales tactics My solution is to only sign up for a gym that does not seem to have one-on-one sales personnel and aggressive sales tactics, and even then to read the terms and conditions thoroughly. I prefer to pay them in monthly terms that I myself initiate, instead of allowing them to charge my account when they please. [1] Avoid gyms that lack respect for their members Maybe you've struggled with the choice for a gym, because one of those 'evil' gyms is very close to home and has really excellent facilities. You may be tempted to ask for a one-off contract without the shady wording, but I advise against this. Think about it this way: Even though regular T&C would not apply, the spirit with which they were drawn up lives on among gym personnel/management. They're simply not inclined to act in your best interest, so it's still possible to run into problems when ending your membership. In my opinion, it's better to completely avoid such places because they are not worthy of your trust. Of course this advice goes beyond gym memberships and is applicable to life in general. Hope this helps. [1] Credit Cards aren't very popular in the Netherlands, but we have a charging mechanism called 'automatic collection' which allows for arbitrary merchant-initiated charges.", "title": "" }, { "docid": "a816d89279fc582023e15c450eb92628", "text": "\"There's plenty of advice out there about how to set up a budget or track your expenses or \"\"pay yourself first\"\". This is all great advice but sometimes the hardest part is just getting in the right frugal mindset. Here's a couple tricks on how I did it. Put yourself through a \"\"budget fire drill\"\" If you've never set a budget for yourself, you don't necessarily need to do that here... just live as though you had lost your job and savings through some imaginary catastrophe and live on the bare minimum for at least a month. Treat every dollar as though you only had a few left. Clip coupons, stop dining out, eat rice and beans, bike or car pool to work... whatever means possible to cut costs. If you're really into it, you can cancel your cable/Netflix/wine of the month bills and see how much you really miss them. This exercise will get you used to resisting impulse buys and train you to live through an actual financial disaster. There's also a bit of a game element here in that you can shoot for a \"\"high score\"\"... the difference between the monthly expenditures for your fire drill and the previous month. Understand the power of compound interest. Sit down with Excel and run some numbers for how your net worth will change long term if you saved more and paid down debt sooner. It will give you some realistic sense of the power of compound interest in terms that relate to your specific situation. Start simple... pick your top 10 recent non-essential purchases and calculate how much that would be worth if you had invested that money in the stock market earning 8% over the next thirty years. 
Then visualize your present self sneaking up to your future self and stealing that much money right out of your own wallet. When I did that, it really resonated with me and made me think about how every dollar I spent on something non-essential was a kick to the crotch of poor old future me.\"", "title": "" }, { "docid": "0309d5e6df68d1710cf557e3de38ac2c", "text": "Congrats on your first real job! Save as much as your can while keeping yourself (relatively) comfortable. As to where to put your hard earned money, first establish why you want to save the money in the first place. Money is a mean to acquire the things we want or need in your life or the lives of others. Once your goals are set, then follow this order:", "title": "" }, { "docid": "397220883f559435621d173d3f45c35c", "text": "You're asking for a LOT. I mean, entire lives and volumes upon volumes of information is out there. I'd recommend Benjamin Graham for finance concepts (might be a little bit dry...), *A Random Walk Down Wall Street,* by Burton Malkiel and *A Concise Guide to Macro Economics* by David Moss.", "title": "" }, { "docid": "d0e336eb05e4701401e2367555b6ec53", "text": "Banks make money by charging fees on products and charging interest on loans. If you keep close to a $0 average balance in your account, and they aren't charging you any fees, then yes, your account is not profitable for them. That's ok. It's not costing them much to keep you as a customer, and some day you may start keeping a balance with them or apply for a loan. The bank is taking a chance that you will continue to be a loyal customer and will one day become profitable for them. Just be on the lookout for a change in their fee structure. Sometimes banks drop customers or start charging fees in cases like yours.", "title": "" } ]
fiqa
6eb6a1ce9252ca457bb221dea84d1437
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)
[ { "docid": "cce513c48e630ab3f072f334d00b67dc", "text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press", "title": "" }, { "docid": "eea39002b723aaa9617c63c1249ef9a6", "text": "Generative Adversarial Networks (GAN) [1] are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.", "title": "" } ]
[ { "docid": "757441e95be19ca4569c519fb35adfb7", "text": "Autonomous driving in public roads requires precise localization within the range of few centimeters. Even the best current precise localization system based on the Global Navigation Satellite System (GNSS) can not always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finder and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDARs) are very expensive sensors and stereo vision requires powerful dedicated hardware to process the cameras information. In this context, this article presents a low-cost architecture of sensors and data fusion algorithm capable of autonomous driving in narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings in the vehicle's backwards. This information is used to localize the vehicle in a map, that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system on a real autonomous driving situation.", "title": "" }, { "docid": "21cde70c4255e706cb05ff38aec99406", "text": "In this paper, a multiple classifier machine learning (ML) methodology for predictive maintenance (PdM) is presented. PdM is a prominent strategy for dealing with maintenance issues given the increasing need to minimize downtime and associated costs. One of the challenges with PdM is generating the so-called “health factors,” or quantitative indicators, of the status of a system associated with a given maintenance issue, and determining their relationship to operating costs and failure risk. The proposed PdM methodology allows dynamical decision rules to be adopted for maintenance management, and can be used with high-dimensional and censored data problems. This is achieved by training multiple classification modules with different prediction horizons to provide different performance tradeoffs in terms of frequency of unexpected breaks and unexploited lifetime, and then employing this information in an operating cost-based maintenance decision system to minimize expected costs. The effectiveness of the methodology is demonstrated using a simulated example and a benchmark semiconductor manufacturing maintenance problem.", "title": "" }, { "docid": "6e7d5e2548e12d11afd3389b6d677a0f", "text": "Internet marketing is a field that is continuing to grow, and the online auction concept may be defining a totally new and unique distribution alternative. Very few studies have examined auction sellers and their internet marketing strategies. This research examines the internet auction phenomenon as it relates to the marketing mix of online auction sellers. The data in this study indicate that, whilst there is great diversity among businesses that utilise online auctions, distinct cost leadership and differentiation marketing strategies are both evident. These two approaches are further distinguished in terms of the internet usage strategies employed by each group.", "title": "" }, { "docid": "085155ebfd2ac60ed65293129cb0bfee", "text": "Today, Convolution Neural Networks (CNN) is adopted by various application areas such as computer vision, speech recognition, and natural language processing. 
Due to a massive amount of computing for CNN, CNN running on an embedded platform may not meet the performance requirement. In this paper, we propose a system-on-chip (SoC) CNN architecture synthesized by high level synthesis (HLS). HLS is an effective hardware (HW) synthesis method in terms of both development effort and performance. However, the implementation should be optimized carefully in order to achieve a satisfactory performance. Thus, we apply several optimization techniques to the proposed CNN architecture to satisfy the performance requirement. The proposed CNN architecture implemented on a Xilinx's Zynq platform has achieved 23% faster and 9.05 times better throughput per energy consumption than an implementation on an Intel i7 Core processor.", "title": "" }, { "docid": "3465c3bc8f538246be5d7f8c8d1292c2", "text": "The minimal depth of a maximal subtree is a dimensionless order statistic measuring the predictiveness of a variable in a survival tree. We derive the distribution of the minimal depth and use it for high-dimensional variable selection using random survival forests. In big p and small n problems (where p is the dimension and n is the sample size), the distribution of the minimal depth reveals a “ceiling effect” in which a tree simply cannot be grown deep enough to properly identify predictive variables. Motivated by this limitation, we develop a new regularized algorithm, termed RSF-Variable Hunting. This algorithm exploits maximal subtrees for effective variable selection under such scenarios. Several applications are presented demonstrating the methodology, including the problem of gene selection using microarray data. In this work we focus only on survival settings, although our methodology also applies to other random forests applications, including regression and classification settings. All examples presented here use the R-software package randomSurvivalForest.", "title": "" }, { "docid": "6c9d163a7ad97ebecdfd82275990f315", "text": "We present and evaluate a new deep neural network architecture for automatic thoracic disease detection on chest X-rays. Deep neural networks have shown great success in a plethora of visual recognition tasks such as image classification and object detection by stacking multiple layers of convolutional neural networks (CNN) in a feed-forward manner. However, the performance gain by going deeper has reached bottlenecks as a result of the trade-off between model complexity and discrimination power. We address this problem by utilizing the recently developed routing-by agreement mechanism in our architecture. A novel characteristic of our network structure is that it extends routing to two types of layer connections (1) connection between feature maps in dense layers, (2) connection between primary capsules and prediction capsules in final classification layer. We show that our networks achieve comparable results with much fewer layers in the measurement of AUC score. We further show the combined benefits of model interpretability by generating Gradient-weighted Class Activation Mapping (Grad-CAM) for localization. We demonstrate our results on the NIH chestX-ray14 dataset that consists of 112,120 images on 30,805 unique patients including 14 kinds of lung diseases.", "title": "" }, { "docid": "36f068b9579788741f23c459570694fe", "text": "One of the difficulties in learning Chinese characters is distinguishing similar characters. This can cause misunderstanding and miscommunication in daily life. 
Thus, it is important for students learning the Chinese language to be able to distinguish similar characters and understand their proper usage. In this paper, the authors propose a game style framework to train students to distinguish similar characters. A major component in this framework is the search for similar Chinese characters in the system. From the authors’ prior work, they find the similar characters by the radical information and stroke correspondence determination. This paper improves the stroke correspondence determination by using the attributed relational graph (ARG) matching algorithm that considers both the stroke and spatial relationship during matching. The experimental results show that the new proposed method is more accurate in finding similar Chinese characters. Additionally, the authors have implemented online educational games to train students to distinguish similar Chinese characters and made use of the improved matching method for creating the game content automatically. DOI: 10.4018/jdet.2010070103 32 International Journal of Distance Education Technologies, 8(3), 31-46, July-September 2010 Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. iNtroDUCtioN The evolution of computer technologies makes a big impact on traditional learning. Shih et al. (2007) and Chen et al. (2005) studied the impact of distant e-learning compared with traditional learning. Distant e-learning has many advantages over traditional learning such as no learning barrier in location, allowing more people to learn and providing an interactive learning environment. There is a great potential in adopting distant e-learning in areas with a sparse population. For example, in China, it is impractical to build schools in every village. As a result, some students have to spend a lot of time for travelling to school that may be quite far away from their home. If computers can be used for e-learning in this location, the students can save a lot of time for other learning activities. Moreover, there is a limit in the number of students whom a school can physically accommodate. Distant e-learning is a solution that gives the chance for more people to learn in their own pace without the physical limitation. In addition, distant e-learning allows certain levels of interactivity. The learners can get the immediate feedback from the e-learning system and enhance the efficiency in their learning. E-learning has been applied in different areas such as engineering by Sziebig (2008), maritime education by Jurian (2006), etc. Some researchers study the e-learning in Chinese handwriting education. Nowadays there exist many e-learning applications to help students learn their native or a foreign language. This paper is focused on the learning of the Chinese language. Some researchers (Tan, 2002; Teo et al., 2002) provide an interactive interface for students to practice Chinese character handwriting. These e-learning methods help students improve their handwriting skill by providing them a framework to repeat some handwriting exercises just like in the traditional learning. However, they have not considered how to maintain students’ motivation to complete the tasks. Green et al. (2007) suggested that game should be introduced for learning because games bring challenges to students, stimulate their curiosity, develop their creativity and let them have fun. 
One of the common problems in Chinese students’ handwriting is mixing up similar characters in the structure (e.g., 困, 因) or sound (e.g., 木, 目), and misusing them. Chinese characters are logographs and there are about 3000 commonly used characters. Learners have to memorize a lot of writing structures and their related meanings. It is difficult to distinguish similar Chinese characters with similar structure or sound even for people whose native language is Chinese. For training people in distinguishing similar characters, teachers often make some questions by presenting the similar characters and ask the students to find out the correct one under each case. There are some web-based games that aim to help students differentiate similar characters (The Academy of Chinese Studies & Erroneous Character Arena). These games work in a similar fashion in which they show a few choices of similar characters to the players and ask them to pick the correct one that should be used in a phrase. These games suffer from the drawback that the question-answer set is limited thus players feel bored easily and there is little replay value. On the other hand, creating a large set of question-answer pairs is time consuming if it is done manually. It is beneficial to have a system to generate the choices automatically.", "title": "" }, { "docid": "ac37ca6b8bb12305ac6e880e6e7c336a", "text": "In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential to carry out any graph signal processing or machine learning task. To realize this, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem can be solved optimally and is based on simple rank ordering. Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.", "title": "" }, { "docid": "b1488b35284b6610d44d178d56cc89eb", "text": "We introduce an unsupervised discriminative model for the task of retrieving experts in online document collections. We exclusively employ textual evidence and avoid explicit feature engineering by learning distributed word representations in an unsupervised way. We compare our model to state-of-the-art unsupervised statistical vector space and probabilistic generative approaches. Our proposed log-linear model achieves the retrieval performance levels of state-of-the-art document-centric methods with the low inference cost of so-called profile-centric approaches. It yields a statistically significant improved ranking over vector space and generative models in most cases, matching the performance of supervised methods on various benchmarks. That is, by using solely text we can do as well as methods that work with external evidence and/or relevance feedback. 
A contrastive analysis of rankings produced by discriminative and generative approaches shows that they have complementary strengths due to the ability of the unsupervised discriminative model to perform semantic matching.", "title": "" }, { "docid": "9a2609d1b13e0fb43849d3e4ca8682fe", "text": "This report presents a brief overview of multimedia data mining and the corresponding workshop series at ACM SIGKDD conference series on data mining and knowledge discovery. It summarizes the presentations, conclusions and directions for future work that were discussed during the 3rd edition of the International Workshop on Multimedia Data Mining, conducted in conjunction with KDD-2002 in Edmonton, Alberta, Canada.", "title": "" }, { "docid": "92386ee2988b6d7b6f2f0b3cdcbf44ba", "text": "In the rst part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weightupdate rule of Littlestone and Warmuth [20] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R n . In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary nite set or a bounded segment of the real line.", "title": "" }, { "docid": "e78d53a2790ac3b6011910f82cefaff9", "text": "A two-dimensional crystal of molybdenum disulfide (MoS2) monolayer is a photoluminescent direct gap semiconductor in striking contrast to its bulk counterpart. Exfoliation of bulk MoS2 via Li intercalation is an attractive route to large-scale synthesis of monolayer crystals. However, this method results in loss of pristine semiconducting properties of MoS2 due to structural changes that occur during Li intercalation. Here, we report structural and electronic properties of chemically exfoliated MoS2. The metastable metallic phase that emerges from Li intercalation was found to dominate the properties of as-exfoliated material, but mild annealing leads to gradual restoration of the semiconducting phase. Above an annealing temperature of 300 °C, chemically exfoliated MoS2 exhibit prominent band gap photoluminescence, similar to mechanically exfoliated monolayers, indicating that their semiconducting properties are largely restored.", "title": "" }, { "docid": "7e682f98ee6323cd257fda07504cba20", "text": "We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. 
We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al.,2004) and STARE (Hoover et al.,2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, being slightly superior than that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods", "title": "" }, { "docid": "5176645b3aca90b9f3e7d9fb8391063d", "text": "The role of dysfunctional attitudes in loneliness among college students was investigated. Subjects were 50 introductory psychology volunteers (20 male, 30 female) who completed measures of loneliness severity, depression, and dysfunctional attitudes. The results showed a strong predictive relationship between dysfunctional attitudes and loneliness even after level of depression was statistically controlled. Lonely college students' thinking is dominated by doubts about ability to find satisfying romantic relationships and fears of being rejected and hurt in an intimate pairing. Lonely individuals also experience much anxiety in interpersonal encounters and regard themselves as undesirable to others. Generally, a negative evaluation of self, especially in the social realm, is present. Implications of the results for treatment planning for lonely clients are discussed.", "title": "" }, { "docid": "3c8cc4192ee6ddd126e53c8ab242f396", "text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.", "title": "" }, { "docid": "934875351d5fa0c9b5c7499ca13727ab", "text": "Computation of the simplicial complexes of a large point cloud often relies on extracting a sample, to reduce the associated computational burden. 
The study considers sampling critical points of a Morse function associated to a point cloud, to approximate the Vietoris-Rips complex or the witness complex and compute persistence homology. The effectiveness of the novel approach is compared with the farthest point sampling, in a context of classifying human face images into ethnics groups using persistence homology.", "title": "" }, { "docid": "5291162cd0841cc025f2a86b360372e6", "text": "The web contains countless semi-structured websites, which can be a rich source of information for populating knowledge bases. Existing methods for extracting relations from the DOM trees of semi-structured webpages can achieve high precision and recall only when manual annotations for each website are available. Although there have been efforts to learn extractors from automatically generated labels, these methods are not sufficiently robust to succeed in settings with complex schemas and information-rich websites. In this paper we present a new method for automatic extraction from semi-structured websites based on distant supervision. We automatically generate training labels by aligning an existing knowledge base with a website and leveraging the unique structural characteristics of semi-structured websites. We then train a classifier based on the potentially noisy and incomplete labels to predict new relation instances. Our method can compete with annotationbased techniques in the literature in terms of extraction quality. A large-scale experiment on over 400,000 pages from dozens of multi-lingual long-tail websites harvested 1.25 million facts at a precision of 90%. PVLDB Reference Format: Colin Lockard, Xin Luna Dong, Arash Einolghozati, Prashant Shiralkar. CERES: Distantly Supervised Relation Extraction from the Semi-Structured Web. PVLDB, 11(10): 1084-1096, 2018. DOI: https://doi.org/10.14778/3231751.3231758", "title": "" }, { "docid": "46ecd1781e1ab5866fde77b3a24be06a", "text": "Viral products and ideas are intuitively understood to grow through a person-to-person diffusion process analogous to the spread of an infectious disease; however, until recently it has been prohibitively difficult to directly observe purportedly viral events, and thus to rigorously quantify or characterize their structural properties. Here we propose a formal measure of what we label “structural virality” that interpolates between two extremes: content that gains its popularity through a single, large broadcast, and that which grows through multiple generations with any one individual directly responsible for only a fraction of the total adoption. We use this notion of structural virality to analyze a unique dataset of a billion diffusion events on Twitter, including the propagation of news stories, videos, images, and petitions. We find that the very largest observed events nearly always exhibit high structural virality, providing some of the first direct evidence that many of the most popular products and ideas grow through person-to-person diffusion. However, medium-sized events—having thousands of adopters—exhibit surprising structural diversity, and regularly grow via both broadcast and viral mechanisms. 
We find that these empirical results are largely consistent with a simple contagion model characterized by a low infection rate spreading on a scale-free network, reminiscent of previous work on the long-term persistence of computer viruses.", "title": "" }, { "docid": "41b17931c63d053bd0a339beab1c0cfc", "text": "The investigation and development of new methods from diverse perspectives to shed light on portfolio choice problems has never stagnated in financial research. Recently, multi-armed bandits have drawn intensive attention in various machine learning applications in online settings. The tradeoff between exploration and exploitation to maximize rewards in bandit algorithms naturally establishes a connection to portfolio choice problems. In this paper, we present a bandit algorithm for conducting online portfolio choices by effectually exploiting correlations among multiple arms. Through constructing orthogonal portfolios from multiple assets and integrating with the upper confidence bound bandit framework, we derive the optimal portfolio strategy that represents the combination of passive and active investments according to a risk-adjusted reward function. Compared with oft-quoted trading strategies in finance and machine learning fields across representative real-world market datasets, the proposed algorithm demonstrates superiority in both risk-adjusted return and cumulative wealth.", "title": "" } ]
scidocsrr
5d3aa8179e63cffbfc4583f97535b24c
Minnesota Satisfaction Questionnaire-Psychometric Properties and Validation in a Population of Portuguese Hospital Workers
[ { "docid": "6521ae2b4592fccdb061f1e414774024", "text": "The development of the Job Satisfaction Survey (JSS), a nine-subscale measure of employee job satisfaction applicable specifically to human service, public, and nonprofit sector organizations, is described. The item selection, item analysis, and determination of the final 36-item scale are also described, and data on reliability and validity and the instrument's norms are summarized. Included are a multitrait-multimethod analysis of the JSS and the Job Descriptive Index (JDI), factor analysis of the JSS, and scale intercorrelations. Correlation of JSS scores with criteria of employee perceptions and behaviors for multiple samples were consistent with findings involving other satisfaction scales and with findings from the private sector. The strongest correlations were with perceptions of the job and supervisor, intention of quitting, and organizational commitment. More modest correlations were found with salary, age, level, absenteeism, and turnover.", "title": "" } ]
[ { "docid": "4a89f20c4b892203be71e3534b32449c", "text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.", "title": "" }, { "docid": "4ae231ad20a99fb0b4c745cdffde456d", "text": "Networks-on-Chip (NoCs) interconnection architectures to be used in future billion-transistor Systems-on-Chip (SoCs) meet the major communication requirements of these systems, offering, at the same time, reusability, scalability and parallelism in communication. Furthermore, they cope with other issues like power constraints and clock distribution. Currently, there is a number of research works which explore different features of NoCs. In this paper, we present SoCIN, a scalable network based on a parametric router architecture to beused in the synthesis of customized low cost NoCs. The architecture of SoCIN and its router are described, and some synthesis results are presented.", "title": "" }, { "docid": "82da6897a36ea57473455d8f4da0a32d", "text": "Traditional learning-based coreference resolvers operate by training amentionpair classifier for determining whether two mentions are coreferent or not. Two independent lines of recent research have attempted to improve these mention-pair classifiers, one by learning amentionranking model to rank preceding mentions for a given anaphor, and the other by training an entity-mention classifier to determine whether a preceding cluster is coreferent with a given mention. We propose a cluster-ranking approach to coreference resolution that combines the strengths of mention rankers and entitymention models. We additionally show how our cluster-ranking framework naturally allows discourse-new entity detection to be learned jointly with coreference resolution. Experimental results on the ACE data sets demonstrate its superior performance to competing approaches.", "title": "" }, { "docid": "3f807cb7e753ebd70558a0ce74b416b7", "text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "0bd720d912575c0810c65d04f6b1712b", "text": "Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework optionally can also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.", "title": "" }, { "docid": "34e544af5158850b7119ac4f7c0b7b5e", "text": "Over the last decade, the surprising fact has emerged that machines can possess therapeutic power. Due to the many healing qualities of touch, one route to such power is through haptic emotional interaction, which requires sophisticated touch sensing and interpretation. We explore the development of touch recognition technologies in the context of a furry artificial lap-pet, with the ultimate goal of creating therapeutic interactions by sensing human emotion through touch. In this work, we build upon a previous design for a new type of fur-based touch sensor. Here, we integrate our fur sensor with a piezoresistive fabric location/pressure sensor, and adapt the combined design to cover a curved creature-like object. We then use this interface to collect synchronized time-series data from the two sensors, and perform machine learning analysis to recognize 9 key affective touch gestures. In a study of 16 participants, our model averages 94% recognition accuracy when trained on individuals, and 86% when applied to the combined set of all participants. The model can also recognize which participant is touching the prototype with 79% accuracy. These results promise a new generation of emotionally intelligent machines, enabled by affective touch gesture recognition.", "title": "" }, { "docid": "30ff2dfb2864a294d2be5e4a33b88964", "text": "Using blockchain technology, it is possible to create contracts that offer a reward in exchange for a trained machine learning model for a particular data set. This would allow users to train machine learning models for a reward in a trustless manner. The smart contract will use the blockchain to automatically validate the solution, so there would be no debate about whether the solution was correct or not. Users who submit the solutions wont have counterparty risk that they wont get paid for their work. Contracts can be created easily by anyone with a dataset, even programmatically by software agents. This creates a market where parties who are good at solving machine learning problems can directly monetize their skillset, and where any organization or software agent that has a problem to solve with AI can solicit solutions from all over the world. 
This will incentivize the creation of better machine learning models, and make AI more accessible to companies and software agents. A consequence of creating this market is that there will be a well defined price of GPU training for machine learning models. Crypto-currency mining also uses GPUs in many cases. We can envision a world where at any given moment, miners can choose to direct their hardware to work on whichever workload is more profitable: cryptocurrency mining, or machine learning training. 1. Background 1.1. Bitcoin and cryptocurrencies Bitcoin was first introduced in 2008 to create a decentralized method of storing and transferring funds from one account to another. It enforced ownership using public key cryptography. Funds are stored in various addresses, and anyone with the private key for an address would be able to transfer funds from this account. To create such a system in a decentralized fashion required innovation on how to achieve consensus between participants, which was solved using a blockchain. This created an ecosystem that enabled fast and trusted transactions between untrusted users. Bitcoin implemented a scripting language for simple tasks. This language wasnt designed to be turing complete. Over time, people wanted to implement more complicated programming tasks on blockchains. Ethereum introduced a turing-complete language to support a wider range of applications. This language was designed to utilize the decentralized nature of the blockchain. Essentially its an application layer on top of the ethereum blockchain. By having a more powerful, turing-complete programming language, it became possible to build new types of applications on top of the ethereum blockchain: from escrow systems, minting new coins, decentralized corporations, and more. The Ethereum whitepaper talks about creating decentralized marketplaces, but focuses on things like identities and reputations to facilitate these transactions. (Buterin, 2014) In this marketplace, specifically for machine learning models, trust is a required feature. This approach is distinctly different than the trustless exchange system proposed in this paper. ar X iv :1 80 2. 10 18 5v 1 [ cs .C R ] 2 7 Fe b 20 18 Evaluating and Exchanging Machine Learning Models on the Ethereum Blockchain 1.2. Breakthrough in machine learning In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton were able to train a deep neural network for image classification by utilizing GPUs. (Krizhevsky et al., 2012) Their submission for the Large Scale Visual Recognition Challenge (LSVRC) halved the best error rate at the time. GPUs being able to do thousands of matrix operations in parallel was the breakthrough needed to train deep neural networks. With more research, machine learning (ML) systems have been able to surpass humans in many specific problems. These systems are now better at: lip reading (Chung et al., 2016), speech recognition (Xiong et al., 2016), location tagging (Weyand et al., 2016), playing Go (Silver et al., 2016), image classification (He et al., 2015), and more. In ML, a variety of models and approaches are used to attack different types of problems. Such an approach is called a Neural Network (NN). Neural Networks are made out of nodes, biases and weighted edges, and can represent virtually any function. (Hornik, 1991) Figure 1. Neural Network Schema There are two steps in building a new machine learning model. 
The first step is training, which takes in a dataset as an input, and adjusts the model weights to increase accuracy for the model. The second step is testing, that uses an independent dataset for testing the accuracy of the trained model. This second step is necessary to validate the model and to prevent a problem known as overfitting. An overfitted model is very good at a particular dataset, but is bad at generalizing for the given problem. Once it has been trained, a ML model can be used to perform tasks on new data, such as prediction, classification, and clustering. There is a huge demand for machine learning models, and companies that can get access to good machine learning models stand to profit through improved efficiency and new capabilities. Since there is strong demand for this kind of technology, and limited supply of talent, it makes sense to create a market for machine learning models. Since machine learning is purely software and training it doesnt require interacting with any physical systems, using blockchain for coordination between users, and using cryptocurrency for payment is a natural choice.", "title": "" }, { "docid": "2b743ba2f607f75bb7e1d964c39cbbcf", "text": "The demand and growth of indoor positioning has increased rapidly in the past few years for a diverse range of applications. Various innovative techniques and technologies have been introduced but precise and reliable indoor positioning still remains a challenging task due to dependence on a large number of factors and limitations of the technologies. Positioning technologies based on radio frequency (RF) have many advantages over the technologies utilizing ultrasonic, optical and infrared devices. Both narrowband and wideband RF systems have been implemented for short range indoor positioning/real-time locating systems. Ultra wideband (UWB) technology has emerged as a viable candidate for precise indoor positioning due its unique characteristics. This article presents a comparison of UWB and narrowband RF technologies in terms of modulation, throughput, transmission time, energy efficiency, multipath resolving capability and interference. Secondly, methods for measurement of the positioning parameters are discussed based on a generalized measurement model and, in addition, widely used position estimation algorithms are surveyed. Finally, the article provides practical UWB positioning systems and state-of-the-art implementations. We believe that the review presented in this article provides a structured overview and comparison of the positioning methods, algorithms and implementations in the field of precise UWB indoor positioning, and will be helpful for practitioners as well as for researchers to keep abreast of the recent developments in the field.", "title": "" }, { "docid": "f7808b676e04ae7e80cf06d36edc73e8", "text": "Ontology is the process of growth and elucidation of concepts of an information domain being common for a group of users. Establishing ontology into information retrieval is a normal method to develop searching effects of relevant information users require. Keywords matching process with historical or information domain is significant in recent calculations for assisting the best match for specific input queries. This research presents a better querying mechanism for information retrieval which integrates the ontology queries with keyword search. 
The ontology-based query is changed into a primary order to predicate logic uncertainty which is used for routing the query to the appropriate servers. Matching algorithms characterize warm area of researches in computer science and artificial intelligence. In text matching, it is more dependable to study semantics model and query for conditions of semantic matching. This research develops the semantic matching results between input queries and information in ontology field. The contributed algorithm is a hybrid method that is based on matching extracted instances from the queries and information field. The queries and information domain is focused on semantic matching, to discover the best match and to progress the executive process. In conclusion, the hybrid ontology in semantic web is sufficient to retrieve the documents when compared to standard ontology.", "title": "" }, { "docid": "ae536a72dfba1e7eff57989c3f94ae3e", "text": "Policymakers are often interested in estimating how policy interventions affect the outcomes of those most in need of help. This concern has motivated the practice of disaggregating experimental results by groups constructed on the basis of an index of baseline characteristics that predicts the values of individual outcomes without the treatment. This paper shows that substantial biases may arise in practice if the index is estimated by regressing the outcome variable on baseline characteristics for the full sample of experimental controls. We propose alternative methods that correct this bias and show that they behave well in realistic scenarios.", "title": "" }, { "docid": "47afea1e95f86bb44a1cf11e020828fc", "text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.", "title": "" }, { "docid": "a00ee62d48afbcac22c85d2961c596bc", "text": "Despite oxycodone's (4,5-epoxy-14-hydroxy-3-methoxy-17-methylmorphinan-6-one) history of clinical use and the attention it has received as a drug of abuse, few reports have documented its pharmacology's relevance to its abuse or its mechanism of action. 
The purposes of the present study were to further characterize the analgesic effects of oxycodone, its mechanism of action, and its effects in terms of its relevance to its abuse liability. The results indicate that oxycodone had potent antinociceptive effects in the mouse paraphenylquinone writhing, hot-plate, and tail-flick assays, in which it appeared to be acting as a mu-opioid receptor agonist. It generalized to the heroin discriminative stimulus and served as a positive reinforcer in rats and completely suppressed withdrawal signs in morphine-dependent rhesus monkeys. These results suggest that the analgesic and abuse liability effects of oxycodone are likely mediated through mu-opioid receptors and provide the first laboratory report of its discriminative stimulus, reinforcing, and morphine cross-dependency effects.", "title": "" }, { "docid": "40939d3a4634498fb50c0cda9e31f476", "text": "Learning analytics is receiving increased attention, in part because it offers to assist educational institutions in increasing student retention, improving student success, and easing the burden of accountability. Although these large-scale issues are worthy of consideration, faculty might also be interested in how they can use learning analytics in their own courses to help their students succeed. In this paper, we define learning analytics, how it has been used in educational institutions, what learning analytics tools are available, and how faculty can make use of data in their courses to monitor and predict student performance. Finally, we discuss several issues and concerns with the use of learning analytics in higher education. Have you ever had the sense at the start of a new course or even weeks into the semester that you could predict which students will drop the course or which students will succeed? Of course, the danger of this realization is that it may create a self-fulfilling prophecy or possibly be considered “profiling”. But it could also be that you have valuable data in your head, collected from semesters of experience, that can help you predict who will succeed and who will not based on certain variables. In short, you likely have hunches based on an accumulation of experience. The question is, what are those variables? What are those data? And how well will they help you predict student performance and retention? More importantly, how will those data help you to help your students succeed in your course? Such is the promise of learning analytics. Learning analytics is defined as “the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Long & Siemens, 2011, p. 32). Learning analytics offers promise for predicting and improving student success and retention (e.g., Olmos & Corrin, 2012; Smith, Lange, & Huston, 2012) in part because it allows faculty, institutions, and students to make data-driven decisions about student success and retention. Data-driven decision making involves making use of data, such as the sort provided in Learning Management Systems (LMS), to inform educator’s judgments (Jones, 2012; Long & Siemens, 2011; Picciano, 2012). For example, to argue for increased funding to support student preparation for a course or a set of courses, it would be helpful to have data showing that students who have certain skills or abilities or prior coursework perform better in the class or set of classes than those who do not. 
Journal of Interactive Online Learning Dietz-Uhler & Hurn", "title": "" }, { "docid": "0caa6d4623fb0414facb76ccd8eaa235", "text": "Because of large amounts of unstructured text data generated on the Internet, text mining is believed to have high commercial value. Text mining is the process of extracting previously unknown, understandable, potential and practical patterns or knowledge from the collection of text data. This paper introduces the research status of text mining. Then several general models are described to know text mining in the overall perspective. At last we classify text mining work as text categorization, text clustering, association rule extraction and trend analysis according to applications.", "title": "" }, { "docid": "e41eb91c146b5054b583083b89d0a3fb", "text": "The authors adapt the SERVQUAL scale for medical care services and examine it for reliability, dimensionality, and validity in a primary care clinic setting. In addition, they explore the possibility of a link between perceived service quality--and its various dimensions--and a patient's future intent to complain, compliment, repeat purchase, and switch providers. Findings from 159 matched-pair responses indicate that the SERVQUAL scale can be adapted reliably to a clinic setting and that the dimensions of reliability, dependability, and empathy are most predictive of a patient's intent to complain, compliment, repeat purchase, and switch providers.", "title": "" }, { "docid": "0e3f43a28c477ae0e15a8608d3a1d4a5", "text": "This report provides an overview of the current state of the art deep learning architectures and optimisation techniques, and uses the ADNI hippocampus MRI dataset as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3dimensional hippocampal segmentation, which is important in the diagnosis of Alzheimer’s Disease. We found that a slightly unconventional ”stacked 2D” approach provides much better classification performance than simple 2D patches without requiring significantly more computational power. We also examined the popular ”tri-planar” approach used in some recently published studies, and found that it provides much better results than the 2D approaches, but also with a moderate increase in computational power requirement. Finally, we evaluated a full 3D convolutional architecture, and found that it provides marginally better results than the tri-planar approach, but at the cost of a very significant increase in computational power requirement. ar X iv :1 50 5. 02 00 0v 1 [ cs .L G ] 8 M ay 2 01 5", "title": "" }, { "docid": "34a6fe0c5183f19d4f25a99b3bcd205e", "text": "In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. 
To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art database labeled faces in the wild. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.", "title": "" }, { "docid": "bf9e44e81e37b0aefb12250202d59111", "text": "There are many clustering tasks which are closely related in the real world, e.g. clustering the web pages of different universities. However, existing clustering approaches neglect the underlying relation and treat these clustering tasks either individually or simply together. In this paper, we will study a novel clustering paradigm, namely multi-task clustering, which performs multiple related clustering tasks together and utilizes the relation of these tasks to enhance the clustering performance. We aim to learn a subspace shared by all the tasks, through which the knowledge of the tasks can be transferred to each other. The objective of our approach consists of two parts: (1) Within-task clustering: clustering the data of each task in its input space individually; and (2) Cross-task clustering: simultaneous learning the shared subspace and clustering the data of all the tasks together. We will show that it can be solved by alternating minimization, and its convergence is theoretically guaranteed. Furthermore, we will show that given the labels of one task, our multi-task clustering method can be extended to transductive transfer classification (a.k.a. cross-domain classification, domain adaption). Experiments on several cross-domain text data sets demonstrate that the proposed multi-task clustering outperforms traditional single-task clustering methods greatly. And the transductive transfer classification method is comparable to or even better than several existing transductive transfer classification approaches.", "title": "" }, { "docid": "959b487a51ae87b2d993e6f0f6201513", "text": "The two-wheel differential drive mobile robots, are one of the simplest and most used structures in mobile robotics applications, it consists of a chassis with two fixed and in-line with each other electric motors. This paper presents new models for differential drive mobile robots and some considerations regarding design, modeling and control solutions. The presented models are to be used to help in facing the two top challenges in developing mechatronic mobile robots system; early identifying system level problems and ensuring that all design requirements are met, as well as, to simplify and accelerate Mechatronics mobile robots design process, including proper selection, analysis, integration and verification of the overall system and sub-systems performance throughout the development process.", "title": "" } ]
scidocsrr
656b9050e1363c9eaaf9703e9b39b5cd
Predicting subscriber dissatisfaction and improving retention in the wireless telecommunications industry
[ { "docid": "00ea9078f610b14ed0ed00ed6d0455a7", "text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.", "title": "" } ]
[ { "docid": "38c9cee29ef1ba82e45556d87de1ff24", "text": "This paper presents a detailed characterization of the Hokuyo URG-04LX 2D laser range finder. While the sensor specifications only provide a rough estimation of the sensor accuracy, the present work analyzes issues such as time drift effects and dependencies on distance, target properties (color, brightness and material) as well as incidence angle. Since the sensor is intended to be used for measurements of a tubelike environment on an inspection robot, the characterization is extended by investigating the influence of the sensor orientation and dependency on lighting conditions. The sensor characteristics are compared to those of the Sick LMS 200 which is commonly used in robotic applications when size and weight are not critical constraints. The results show that the sensor accuracy is strongly depending on the target properties (color, brightness, material) and that it is consequently difficult to establish a calibration model. The paper also identifies cases for which the sensor returns faulty measurements, mainly when the surface has low reflectivity (dark surfaces, foam) or for high incidence angles on shiny surfaces. On the other hand, the repeatability of the sensor seems to be competitive with the LMS 200.", "title": "" }, { "docid": "a71d0d3748f6be2adbd48ab7671dd9f8", "text": "Considerable overlap has been identified in the risk factors, comorbidities and putative pathophysiological mechanisms of Alzheimer disease and related dementias (ADRDs) and type 2 diabetes mellitus (T2DM), two of the most pressing epidemics of our time. Much is known about the biology of each condition, but whether T2DM and ADRDs are parallel phenomena arising from coincidental roots in ageing or synergistic diseases linked by vicious pathophysiological cycles remains unclear. Insulin resistance is a core feature of T2DM and is emerging as a potentially important feature of ADRDs. Here, we review key observations and experimental data on insulin signalling in the brain, highlighting its actions in neurons and glia. In addition, we define the concept of 'brain insulin resistance' and review the growing, although still inconsistent, literature concerning cognitive impairment and neuropathological abnormalities in T2DM, obesity and insulin resistance. Lastly, we review evidence of intrinsic brain insulin resistance in ADRDs. By expanding our understanding of the overlapping mechanisms of these conditions, we hope to accelerate the rational development of preventive, disease-modifying and symptomatic treatments for cognitive dysfunction in T2DM and ADRDs alike.", "title": "" }, { "docid": "180a271a86f9d9dc71cc140096d08b2f", "text": "This communication demonstrates for the first time the capability to independently control the real and imaginary parts of the complex propagation constant in planar, printed circuit board compatible leaky-wave antennas. The structure is based on a half-mode microstrip line which is loaded with an additional row of periodic metallic posts, resulting in a substrate integrated waveguide SIW with one of its lateral electric walls replaced by a partially reflective wall. The radiation mechanism is similar to the conventional microstrip leaky-wave antenna operating in its first higher-order mode, with the novelty that the leaky-mode leakage rate can be controlled by virtue of a sparse row of metallic vias. 
For this topology it is demonstrated that it is possible to independently control the antenna pointing angle and main lobe beamwidth while achieving high radiation efficiencies, thus providing low-cost, low-profile, simply fed, and easily integrable leaky-wave solutions for high-gain frequency beam-scanning applications. Several prototypes operating at 15 GHz have been designed, simulated, manufactured and tested, to show the operation principle and design flexibility of this one dimensional leaky-wave antenna.", "title": "" }, { "docid": "9a65a5c09df7e34383056509d96e772d", "text": "With explosive growth of Android malware and due to its damage to smart phone users (e.g., stealing user credentials, resource abuse), Android malware detection is one of the cyber security topics that are of great interests. Currently, the most significant line of defense against Android malware is anti-malware software products, such as Norton, Lookout, and Comodo Mobile Security, which mainly use the signature-based method to recognize threats. However, malware attackers increasingly employ techniques such as repackaging and obfuscation to bypass signatures and defeat attempts to analyze their inner mechanisms. The increasing sophistication of Android malware calls for new defensive techniques that are harder to evade, and are capable of protecting users against novel threats. In this paper, we propose a novel dynamic analysis method named Component Traversal that can automatically execute the code routines of each given Android application (app) as completely as possible. Based on the extracted Linux kernel system calls, we further construct the weighted directed graphs and then apply a deep learning framework resting on the graph based features for newly unknown Android malware detection. A comprehensive experimental study on a real sample collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed method outperforms other alternative Android malware detection techniques. Our developed system Deep4MalDroid has also been integrated into a commercial Android anti-malware software.", "title": "" }, { "docid": "9365a612900a8bf0ddef8be6ec17d932", "text": "Stabilization exercise program has become the most popular treatment method in spinal rehabilitation since it has shown its effectiveness in some aspects related to pain and disability. However, some studies have reported that specific exercise program reduces pain and disability in chronic but not in acute low back pain, although it can be helpful in the treatment of acute low back pain by reducing recurrence rate (Ferreira et al., 2006).", "title": "" }, { "docid": "4e2bfd87acf1287f36694634a6111b3f", "text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. 
This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.", "title": "" }, { "docid": "6e0a19a9bc744aa05a64bd7450cc4c1b", "text": "The success of deep neural networks hinges on our ability to accurately and efficiently optimize high-dimensional, non-convex functions. In this paper, we empirically investigate the loss functions of state-of-the-art networks, and how commonlyused stochastic gradient descent variants optimize these loss functions. To do this, we visualize the loss function by projecting them down to low-dimensional spaces chosen based on the convergence points of different optimization algorithms. Our observations suggest that optimization algorithms encounter and choose different descent directions at many saddle points to find different final weights. Based on consistency we observe across re-runs of the same stochastic optimization algorithm, we hypothesize that each optimization algorithm makes characteristic choices at these saddle points.", "title": "" }, { "docid": "1d5119a4aeb7d678b58fb4e55c43fe94", "text": "This chapter provides a simplified introduction to cloud computing. This chapter starts by introducing the history of cloud computing and moves on to describe the cloud architecture and operation. This chapter also discusses briefly cloud servicemodels: Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. Clouds are also categorized based on their ownership to private and public clouds. This chapter concludes by explaining the reasons for choosing cloud computing over other technologies by exploring the economic and technological benefits of the cloud.", "title": "" }, { "docid": "07175075dad32287a7dabf3d852f729a", "text": "This paper is intended as a tutorial overview of induction motors signature analysis as a medium for fault detection. The purpose is to introduce in a concise manner the fundamental theory, main results, and practical applications of motor signature analysis for the detection and the localization of abnormal electrical and mechanical conditions that indicate, or may lead to, a failure of induction motors. The paper is focused on the so-called motor current signature analysis which utilizes the results of spectral analysis of the stator current. The paper is purposefully written without “state-of-the-art” terminology for the benefit of practicing engineers in facilities today who may not be familiar with signal processing.", "title": "" }, { "docid": "9e7fc71def2afc58025ff5e0198148d0", "text": "BACKGROUD\nWith the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings including electroencephalography (EEG) has become of increasingly interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects.\n\n\nNEW METHOD\nWe have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs are represented as color coded horizontal lines that are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. 
Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source) can reveal aspects of the multifold complexities of trial-to-trial EEG data variability.\n\n\nRESULTS\nThis study demonstrates new methods for computing and visualizing 'grand' ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute.", "title": "" }, { "docid": "073cd7c54b038dcf69ae400f97a54337", "text": "Interventions to support children with autism often include the use of visual supports, which are cognitive tools to enable learning and the production of language. Although visual supports are effective in helping to diminish many of the challenges of autism, they are difficult and time-consuming to create, distribute, and use. In this paper, we present the results of a qualitative study focused on uncovering design guidelines for interactive visual supports that would address the many challenges inherent to current tools and practices. We present three prototype systems that address these design challenges with the use of large group displays, mobile personal devices, and personal recording technologies. We also describe the interventions associated with these prototypes along with the results from two focus group discussions around the interventions. We present further design guidance for visual supports and discuss tensions inherent to their design.", "title": "" }, { "docid": "df9acaed8dbcfbd38a30e4e1fa77aa8a", "text": "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-theart object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.", "title": "" }, { "docid": "56826bfc5f48105387fd86cc26b402f1", "text": "It is difficult to identify sentence importance from a single point of view. In this paper, we propose a learning-based approach to combine various sentence features. They are categorized as surface, content, relevance and event features. Surface features are related to extrinsic aspects of a sentence. 
Content features measure a sentence based on contentconveying words. Event features represent sentences by events they contained. Relevance features evaluate a sentence from its relatedness with other sentences. Experiments show that the combined features improved summarization performance significantly. Although the evaluation results are encouraging, supervised learning approach requires much labeled data. Therefore we investigate co-training by combining labeled and unlabeled data. Experiments show that this semisupervised learning approach achieves comparable performance to its supervised counterpart and saves about half of the labeling time cost.", "title": "" }, { "docid": "dd9edd37ff5f4cb332fcb8a0ef86323e", "text": "This paper proposes several nonlinear control strategies for trajectory tracking of a quadcopter system based on the property of differential flatness. Its originality is twofold. Firstly, it provides a flat output for the quadcopter dynamics capable of creating full flat parametrization of the states and inputs. Moreover, B-splines characterizations of the flat output and their properties allow for optimal trajectory generation subject to way-point constraints. Secondly, several control strategies based on computed torque control and feedback linearization are presented and compared. The advantages of flatness within each control strategy are analyzed and detailed through extensive simulation results.", "title": "" }, { "docid": "8e3bf062119c6de9fa5670ce4b00764b", "text": "Heating red phosphorus in sealed ampoules in the presence of a Sn/SnI4 catalyst mixture has provided bulk black phosphorus at much lower pressures than those required for allotropic conversion by anvil cells. Herein we report the growth of ultra-long 1D red phosphorus nanowires (>1 mm) selectively onto a wafer substrate from red phosphorus powder and a thin film of red phosphorus in the present of a Sn/SnI4 catalyst. Raman spectra and X-ray diffraction characterization suggested the formation of crystalline red phosphorus nanowires. FET devices constructed with the red phosphorus nanowires displayed a typical I-V curve similar to that of black phosphorus and a similar mobility reaching 300 cm(2)  V(-1)  s with an Ion /Ioff ratio approaching 10(2) . A significant response to infrared light was observed from the FET device.", "title": "" }, { "docid": "1bf801e8e0348ccd1e981136f604dd18", "text": "Sketch recognition is one of the integral components used by law enforcement agencies in solving crime. In recent past, software generated composite sketches are being preferred as they are more consistent and faster to construct than hand drawn sketches. Matching these composite sketches to face photographs is a complex task because the composite sketches are drawn based on the witness description and lack minute details which are present in photographs. This paper presents a novel algorithm for matching composite sketches with photographs using transfer learning with deep learning representation. In the proposed algorithm, first the deep learning architecture based facial representation is learned using large face database of photos and then the representation is updated using small problem-specific training database. 
Experiments are performed on the extended PRIP database and it is observed that the proposed algorithm outperforms recently proposed approach and a commercial face recognition system.", "title": "" }, { "docid": "11e666f5b8746ea4b6fc6d4467295e61", "text": "It is shown that by combining the osmotic pressure and rate of diffusion laws an equation can be derived for the kinetics of osmosis. The equation has been found to agree with experiments on the rate of osmosis for egg albumin and gelatin solutions with collodion membranes.", "title": "" }, { "docid": "1d41e6f55521cdba4fc73febd09d2eb4", "text": "1.", "title": "" }, { "docid": "3b4622a4ad745fc0ffb3b6268eb969fa", "text": "Eruptive syringomas: unresponsiveness to oral isotretinoin A 22-year-old man of Egyptian origin was referred to our department due to exacerbation of pruritic pre-existing papular dermatoses. The skin lesions had been present since childhood. The family history was negative for a similar condition. The patient complained of exacerbation of the pruritus during physical activity under a hot climate and had moderate to severe pruritus during his work. Physical examination revealed multiple reddish-brownish smooth-surfaced, symmetrically distributed papules 2–4 mm in diameter on the patient’s trunk, neck, axillae, and limbs (Fig. 1). The rest of the physical examination was unremarkable. The Darier sign was negative. A skin biopsy was obtained from a representative lesion on the trunk. Histopathologic examination revealed a small, wellcircumscribed neoplasm confined to the upper dermis, composed of small solid and ductal structures relatively evenly distributed in a sclerotic collagenous stroma. The solid elements were of various shapes (round, oval, curvilinear, “comma-like,” or “tadpole-like”) (Fig. 2). These microscopic features and the clinical presentation were consistent with the diagnosis of eruptive syringomas. Our patient was treated with a short course of oral antihistamines without any effect and subsequently with low-dose isotretinoin (10 mg/day) for 5 months. No improvement of the skin eruption was observed while cessation of the pruritus was accomplished. Syringoma is a common adnexal tumor with differentiation towards eccrine acrosyringium composed of small solid and ductal elements embedded in a sclerotic stroma and restricted as a rule to the upper to mid dermis, usually presenting clinically as multiple lesions on the lower eyelids and cheeks of adolescent females. A much less common variant is the eruptive or disseminated syringomas, which involve primarily young women. Eruptive syringomas are characterized by rapid development during a short period of time of hundreds of small (1–5 mm), ill-defined, smooth surfaced, skin-colored, pink, yellowish, or brownish papules typically involving the face, trunk, genitalia, pubic area, and extremities but can occur principally in any site where eccrine glands are found. The pathogenesis of eruptive syringoma remains unclear. Some authors have recently challenged the traditional notion that eruptive syringomas are neoplastic lesions. Chandler and Bosenberg presented evidence that eruptive syringomas result from autoimmune destruction of the acrosyringium and proposed the term autoimmune acrosyringitis with ductal cysts. Garrido-Ruiz et al. support the theory that eruptive syringomas may represent a hyperplastic response of the eccrine duct to an inflammatory reaction. 
In a recent systematic review by Williams and Shinkai the strongest association of syringomas was with Down’s syndrome (183 reported cases, 22.2%). Syringomas are also associated with diabetes mellitus (17 reported cases, 2.1%), Ehlers–Danlos", "title": "" }, { "docid": "161e9a2b7a6783b57ce47bb8e100a80d", "text": "Distributed storage systems provide large-scale data storage services, yet they are confronted with frequent node failures. To ensure data availability, a storage system often introduces data redundancy via replication or erasure coding. As erasure coding incurs significantly less redundancy overhead than replication under the same fault tolerance, it has been increasingly adopted in large-scale storage systems. In erasure-coded storage systems, degraded reads to temporarily unavailable data are very common, and hence boosting the performance of degraded reads becomes important. One challenge is that storage nodes tend to be heterogeneous with different storage capacities and I/O bandwidths. To this end, we propose FastDR, a system that addresses node heterogeneity and exploits I/O parallelism, so as to boost the performance of degraded reads to temporarily unavailable data. FastDR incorporates a greedy algorithm that seeks to reduce the data transfer cost of reading surviving data for degraded reads, while allowing the search of the efficient degraded read solution to be completed in a timely manner. We implement a FastDR prototype, and conduct extensive evaluation through simulation studies as well as testbed experiments on a Hadoop cluster with 10 storage nodes. We demonstrate that our FastDR achieves efficient degraded reads compared to existing approaches.", "title": "" } ]
scidocsrr
c4d19b13e92558c0cfab7f6748d7a35e
Ensemble diversity measures and their application to thinning
[ { "docid": "0fb2afcd2997a1647bb4edc12d2191f9", "text": "Many databases have grown to the point where they cannot fit into the fast memory of even large memory machines, to say nothing of current workstations. If what we want to do is to use these data bases to construct predictions of various characteristics, then since the usual methods require that all data be held in fast memory, various work-arounds have to be used. This paper studies one such class of methods which give accuracy comparable to that which could have been obtained if all data could have been held in core and which are computationally fast. The procedure takes small pieces of the data, grows a predictor on each small piece and then pastes these predictors together. A version is given that scales up to terabyte data sets. The methods are also applicable to on-line learning.", "title": "" } ]
[ { "docid": "a880d38d37862b46dc638b9a7e45b6ee", "text": "This paper presents the modeling, simulation, and analysis of the dynamic behavior of a fictitious 2 × 320 MW variable-speed pump-turbine power plant, including a hydraulic system, electrical equipment, rotating inertias, and control systems. The modeling of the hydraulic and electrical components of the power plant is presented. The dynamic performances of a control strategy in generating mode and one in pumping mode are investigated by the simulation of the complete models in the case of change of active power set points. Then, a pseudocontinuous model of the converters feeding the rotor circuits is described. Due to this simplification, the simulation time can be reduced drastically (approximately factor 60). A first validation of the simplified model of the converters is obtained by comparison of the simulated results coming from the simplified and complete models for different modes of operation of the power plant. Experimental results performed on a 2.2-kW low-power test bench are also compared with the simulated results coming from both complete and simplified models related to this case and confirm the validity of the proposed simplified approach for the converters.", "title": "" }, { "docid": "833c110e040311909aa38b05e457b2af", "text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.", "title": "" }, { "docid": "db4bb32f6fdc7a05da41e223afac3025", "text": "Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. 
We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: \"noise\" characterization and suppression, and \"signal\" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources.", "title": "" }, { "docid": "7dcba854d1f138ab157a1b24176c2245", "text": "Essential oils distilled from members of the genus Lavandula have been used both cosmetically and therapeutically for centuries with the most commonly used species being L. angustifolia, L. latifolia, L. stoechas and L. x intermedia. Although there is considerable anecdotal information about the biological activity of these oils much of this has not been substantiated by scientific or clinical evidence. Among the claims made for lavender oil are that is it antibacterial, antifungal, carminative (smooth muscle relaxing), sedative, antidepressive and effective for burns and insect bites. In this review we detail the current state of knowledge about the effect of lavender oils on psychological and physiological parameters and its use as an antimicrobial agent. Although the data are still inconclusive and often controversial, there does seem to be both scientific and clinical data that support the traditional uses of lavender. However, methodological and oil identification problems have severely hampered the evaluation of the therapeutic significance of much of the research on Lavandula spp. These issues need to be resolved before we have a true picture of the biological activities of lavender essential oil.", "title": "" }, { "docid": "83b8944584693b9568f6ad3533ad297b", "text": "BACKGROUND\nChemotherapy is the standard of care for incurable advanced gastric cancer. Whether the addition of gastrectomy to chemotherapy improves survival for patients with advanced gastric cancer with a single non-curable factor remains controversial. We aimed to investigate the superiority of gastrectomy followed by chemotherapy versus chemotherapy alone with respect to overall survival in these patients.\n\n\nMETHODS\nWe did an open-label, randomised, phase 3 trial at 44 centres or hospitals in Japan, South Korea, and Singapore. Patients aged 20-75 years with advanced gastric cancer with a single non-curable factor confined to either the liver (H1), peritoneum (P1), or para-aortic lymph nodes (16a1/b2) were randomly assigned (1:1) in each country to chemotherapy alone or gastrectomy followed by chemotherapy by a minimisation method with biased-coin assignment to balance the groups according to institution, clinical nodal status, and non-curable factor. Patients, treating physicians, and individuals who assessed outcomes and analysed data were not masked to treatment assignment. Chemotherapy consisted of oral S-1 80 mg/m(2) per day on days 1-21 and cisplatin 60 mg/m(2) on day 8 of every 5-week cycle. 
Gastrectomy was restricted to D1 lymphadenectomy without any resection of metastatic lesions. The primary endpoint was overall survival, analysed by intention to treat. This study is registered with UMIN-CTR, number UMIN000001012.\n\n\nFINDINGS\nBetween Feb 4, 2008, and Sept 17, 2013, 175 patients were randomly assigned to chemotherapy alone (86 patients) or gastrectomy followed by chemotherapy (89 patients). After the first interim analysis on Sept 14, 2013, the predictive probability of overall survival being significantly higher in the gastrectomy plus chemotherapy group than in the chemotherapy alone group at the final analysis was only 13·2%, so the study was closed on the basis of futility. Overall survival at 2 years for all randomly assigned patients was 31·7% (95% CI 21·7-42·2) for patients assigned to chemotherapy alone compared with 25·1% (16·2-34·9) for those assigned to gastrectomy plus chemotherapy. Median overall survival was 16·6 months (95% CI 13·7-19·8) for patients assigned to chemotherapy alone and 14·3 months (11·8-16·3) for those assigned to gastrectomy plus chemotherapy (hazard ratio 1·09, 95% CI 0·78-1·52; one-sided p=0·70). The incidence of the following grade 3 or 4 chemotherapy-associated adverse events was higher in patients assigned to gastrectomy plus chemotherapy than in those assigned to chemotherapy alone: leucopenia (14 patients [18%] vs two [3%]), anorexia (22 [29%] vs nine [12%]), nausea (11 [15%] vs four [5%]), and hyponatraemia (seven [9%] vs four [5%]). One treatment-related death occurred in a patient assigned to chemotherapy alone (sudden cardiopulmonary arrest of unknown cause during the second cycle of chemotherapy) and one occurred in a patient assigned to chemotherapy plus gastrectomy (rapid growth of peritoneal metastasis after discharge 12 days after surgery).\n\n\nINTERPRETATION\nSince gastrectomy followed by chemotherapy did not show any survival benefit compared with chemotherapy alone in advanced gastric cancer with a single non-curable factor, gastrectomy cannot be justified for treatment of patients with these tumours.\n\n\nFUNDING\nThe Ministry of Health, Labour and Welfare of Japan and the Korean Gastric Cancer Association.", "title": "" }, { "docid": "ddb77ec8a722c50c28059d03919fb299", "text": "Among the smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptograph blockchain model, Hawk [8], to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured.", "title": "" }, { "docid": "872ccba4f0a0ba6a57500d4b73384ce1", "text": "This research demonstrates the application of association rule mining to spatio-temporal data. Association rule mining seeks to discover associations among transactions encoded in a database. An association rule takes the form A → B where A (the antecedent) and B (the consequent) are sets of predicates. A spatio-temporal association rule occurs when there is a spatio-temporal relationship in the antecedent or consequent of the rule. 
As a case study, association rule mining is used to explore the spatial and temporal relationships among a set of variables that characterize socioeconomic and land cover change in the Denver, Colorado, USA region from 1970–1990. Geographic Information Systems (GIS)-based data pre-processing is used to integrate diverse data sets, extract spatio-temporal relationships, classify numeric data into ordinal categories, and encode spatio-temporal relationship data in tabular format for use by conventional (non-spatio-temporal) association rule mining software. Multiple level association rule mining is supported by the development of a hierarchical classification scheme (concept hierarchy) for each variable. Further research in spatio-temporal association rule mining should address issues of data integration, data classification, the representation and calculation of spatial relationships, and strategies for finding ‘interesting’ rules.", "title": "" }, { "docid": "5ec64c4a423ccd32a5c1ceb918e3e003", "text": "The leading edge (approximately 1 micrometer) of lamellipodia in Xenopus laevis keratocytes and fibroblasts was shown to have an extensively branched organization of actin filaments, which we term the dendritic brush. Pointed ends of individual filaments were located at Y-junctions, where the Arp2/3 complex was also localized, suggesting a role of the Arp2/3 complex in branch formation. Differential depolymerization experiments suggested that the Arp2/3 complex also provided protection of pointed ends from depolymerization. Actin depolymerizing factor (ADF)/cofilin was excluded from the distal 0.4 micrometer of the lamellipodial network of keratocytes and in fibroblasts it was located within the depolymerization-resistant zone. These results suggest that ADF/cofilin, per se, is not sufficient for actin brush depolymerization and a regulatory step is required. Our evidence supports a dendritic nucleation model (Mullins, R.D., J.A. Heuser, and T.D. Pollard. 1998. Proc. Natl. Acad. Sci. USA. 95:6181-6186) for lamellipodial protrusion, which involves treadmilling of a branched actin array instead of treadmilling of individual filaments. In this model, Arp2/3 complex and ADF/cofilin have antagonistic activities. Arp2/3 complex is responsible for integration of nascent actin filaments into the actin network at the cell front and stabilizing pointed ends from depolymerization, while ADF/cofilin promotes filament disassembly at the rear of the brush, presumably by pointed end depolymerization after dissociation of the Arp2/3 complex.", "title": "" }, { "docid": "b81f30a692d57ebc2fdef7df652d0ca2", "text": "Suppose that Alice wishes to send messages to Bob through a communication channel C1, but her transmissions also reach an eavesdropper Eve through another channel C2. This is the wiretap channel model introduced by Wyner in 1975. The goal is to design a coding scheme that makes it possible for Alice to communicate both reliably and securely. Reliability is measured in terms of Bob's probability of error in recovering the message, while security is measured in terms of the mutual information between the message and Eve's observations. Wyner showed that the situation is characterized by a single constant Cs, called the secrecy capacity, which has the following meaning: for all ε > 0, there exist coding schemes of rate R ≥ Cs-ε that asymptotically achieve the reliability and security objectives. However, his proof of this result is based upon a random-coding argument. 
To date, despite considerable research effort, the only case where we know how to construct coding schemes that achieve secrecy capacity is when Eve's channel C2 is an erasure channel, or a combinatorial variation thereof. Polar codes were recently invented by Arikan; they approach the capacity of symmetric binary-input discrete memoryless channels with low encoding and decoding complexity. In this paper, we use polar codes to construct a coding scheme that achieves the secrecy capacity for a wide range of wiretap channels. Our construction works for any instantiation of the wiretap channel model, as long as both C1 and C2 are symmetric and binary-input, and C2 is degraded with respect to C1. Moreover, we show how to modify our construction in order to provide strong security, in the sense defined by Maurer, while still operating at a rate that approaches the secrecy capacity. In this case, we cannot guarantee that the reliability condition will also be satisfied unless the main channel C1 is noiseless, although we believe it can be always satisfied in practice.", "title": "" }, { "docid": "a2c26a8b15cafeb365ad9870f9bbf884", "text": "Microgrids consist of multiple parallel-connected distributed generation (DG) units with coordinated control strategies, which are able to operate in both grid-connected and islanded mode. Microgrids are attracting more and more attention since they can alleviate the stress of main transmission systems, reduce feeder losses, and improve system power quality. When the islanded microgrids are concerned, it is important to maintain system stability and achieve load power sharing among the multiple parallel-connected DG units. However, the poor active and reactive power sharing problems due to the influence of impedance mismatch of the DG feeders and the different ratings of the DG units are inevitable when the conventional droop control scheme is adopted. Therefore, the adaptive/improved droop control, network-based control methods and cost-based droop schemes are compared and summarized in this paper for active power sharing. Moreover, nonlinear and unbalanced loads could further affect the reactive power sharing when regulating the active power, and it is difficult to share the reactive power accurately only by using the enhanced virtual impedance method. Therefore, the hierarchical control strategies are utilized as supplements of the conventional droop controls and virtual impedance methods. The improved hierarchical control approaches such as the algorithms based on graph theory, multi-agent system, the gain scheduling method and predictive control have been proposed to achieve proper reactive power sharing for islanded microgrids and eliminate the effect of the communication delays on hierarchical control. Finally, the future research trends on islanded microgrids are also discussed in this paper.", "title": "" }, { "docid": "87da90ee583f5aa1777199f67bdefc83", "text": "The rapid development of computer networks in the past decades has created many security problems related to intrusions on computer and network systems. Intrusion Detection Systems (IDSs) incorporate methods that help to detect and identify intrusive and non-intrusive network packets. Most of the existing intrusion detection systems rely heavily on human analysts to analyze system logs or network traffic to differentiate between intrusive and non-intrusive network traffic. With the increase in data of network traffic, involvement of human in the detection system is a non-trivial problem. 
IDS’s ability to perform based on human expertise brings limitations to the system’s capability to perform autonomously over exponentially increasing data in the network. However, human expertise and their ability to analyze the system can be efficiently modeled using soft-computing techniques. Intrusion detection techniques based on machine learning and soft-computing techniques enable autonomous packet detections. They have the potential to analyze the data packets, autonomously. These techniques are heavily based on statistical analysis of data. The ability of the algorithms that handle these data-sets can use patterns found in previous data to make decisions for the new evolving data-patterns in the network traffic. In this paper, we present a rigorous survey study that envisages various soft-computing and machine learning techniques used to build autonomous IDSs. A robust IDS system lays a foundation to build an efficient Intrusion Detection and Prevention System (IDPS).", "title": "" }, { "docid": "2d5a8949119d7881a97693867a009917", "text": "Labeling a histopathology image as having cancerous regions or not is a critical task in cancer diagnosis; it is also clinically important to segment the cancer tissues and cluster them into various classes. Existing supervised approaches for image classification and segmentation require detailed manual annotations for the cancer pixels, which are time-consuming to obtain. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL) (along the line of weakly supervised learning) for histopathology image segmentation. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), medical image segmentation (cancer vs. non-cancer tissue), and patch-level clustering (different classes). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to performing the above three tasks in an integrated framework. In addition, we introduce contextual constraints as a prior for MCIL, which further reduces the ambiguity in MIL. Experimental results on histopathology colon cancer images and cytology images demonstrate the great advantage of MCIL over the competing methods.", "title": "" }, { "docid": "f02b44ff478952f1958ba33d8a488b8e", "text": "Plagiarism is an illicit act of using other’s work wholly or partially as one’s own in any field such as art, poetry, literature, cinema, research and other creative forms of study. It has become a serious crime in academia and research fields, and access to a wide range of resources on the internet has made the situation even worse. Therefore, there is a need for automatic detection of plagiarism in text. This paper presents a survey of various plagiarism detection techniques used for different languages.", "title": "" }, { "docid": "026a0651177ee631a80aaa7c63a1c32f", "text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is first given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the benefit of readers less familiar with computational linguistics. 
The discussion then moves on to Nlidb architectures, portability issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reflections on the current state of the art.", "title": "" }, { "docid": "02605f4044a69b70673121985f1bd913", "text": "A novel class of low-cost, small-footprint and high-gain antenna arrays is presented for W-band applications. A 4 × 4 antenna array is proposed and demonstrated using substrate-integrated waveguide (SIW) technology for the design of its feed network and longitudinal slots in the SIW top metallic surface to drive the array antenna elements. Dielectric cubes of low-permittivity material are placed on top of each 1 × 4 antenna array to increase the gain of the circular patch antenna elements. This new design is compared to a second 4 × 4 antenna array which, instead of dielectric cubes, uses vertically stacked Yagi-like parasitic director elements to increase the gain. Measured impedance bandwidths of the two 4 × 4 antenna arrays are about 7.5 GHz (94.2-101.8 GHz) at 18 ± 1 dB gain level, with radiation patterns and gains of the two arrays remaining nearly constant over this bandwidth. While the fabrication effort of the new array involving dielectric cubes is significantly reduced, its measured radiation efficiency of 81 percent is slightly lower compared to 90 percent of the Yagi-like design.", "title": "" }, { "docid": "b05f96e22157b69d7033db35ab38524a", "text": "Novelty search has been shown to be a promising approach for the evolution of controllers for swarms of robots. In existing studies, however, the experimenter had to craft a task-specific behaviour similarity measure. The reliance on hand-crafted similarity measures places an additional burden on the experimenter and introduces a bias in the evolutionary process. In this paper, we propose and compare two generic behaviour similarity measures: combined state count and sampled average state. The proposed measures are based on the values of sensors and effectors recorded for each individual robot of the swarm. The characterisation of the group-level behaviour is then obtained by combining the sensor-effector values from all the robots. We evaluate the proposed measures in an aggregation task and in a resource sharing task. We show that the generic measures match the performance of task-specific measures in terms of solution quality. Our results indicate that the proposed generic measures operate as effective behaviour similarity measures, and that it is possible to leverage the benefits of novelty search without having to craft task-specific similarity measures.", "title": "" }, { "docid": "ba2710c7df05b149f6d2befa8dbc37ee", "text": "This work proposes a method for blind equalization of possibly non-minimum phase channels using particular infinite impulse response (IIR) filters. In this context, the transfer function of the equalizer is represented by a linear combination of specific rational basis functions. This approach estimates separately the coefficients of the linear expansion and the poles of the rational basis functions by alternating iteratively between an adaptive (fixed pole) estimation of the coefficients and a pole placement method. 
The focus of the work is mainly on the issue of good pole placement (initialization and updating).", "title": "" }, { "docid": "6b0a4a8c61fb4ceabe3aa3d5664b4b67", "text": "Most existing approaches for text classification represent texts as vectors of words, namely ``Bag-of-Words.'' This text representation results in a very high dimensionality of feature space and frequently suffers from surface mismatching. Short texts make these issues even more serious, due to their shortness and sparsity. In this paper, we propose using ``Bag-of-Concepts'' in short text representation, aiming to avoid the surface mismatching and handle the synonym and polysemy problem. Based on ``Bag-of-Concepts,'' a novel framework is proposed for lightweight short text classification applications. By leveraging a large taxonomy knowledgebase, it learns a concept model for each category, and conceptualizes a short text to a set of relevant concepts. A concept-based similarity mechanism is presented to classify the given short text to the most similar category. One advantage of this mechanism is that it facilitates short text ranking after classification, which is needed in many applications, such as query or ad recommendation. We demonstrate the usage of our proposed framework through a real online application: Channel-based Query Recommendation. Experiments show that our framework can map queries to channels with a high degree of precision (avg. precision=90.3%), which is critical for recommendation applications.", "title": "" }, { "docid": "32fb1d8492e06b1424ea61d4c28f3c6c", "text": "Modern IT systems often produce large volumes of event logs, and event pattern discovery is an important log management task. For this purpose, data mining methods have been suggested in many previous works. In this paper, we present the LogCluster algorithm which implements data clustering and line pattern mining for textual event logs. The paper also describes an open source implementation of LogCluster.", "title": "" } ]
scidocsrr
0c39cc7afb570af24adeb2b801b6598e
Personal self-concept and satisfaction with life in adolescence, youth and adulthood.
[ { "docid": "c2448cd1ac95923b11b033041cfa0cb7", "text": "Reigning measures of psychological well-being have little theoretical grounding, despite an extensive literature on the contours of positive functioning. Aspects of well-being derived from this literature (i.e., self-acceptance, positive relations with others, autonomy, environmental mastery, purpose in life, and personal growth) were operationalized. Three hundred and twenty-one men and women, divided among young, middle-aged, and older adults, rated themselves on these measures along with six instruments prominent in earlier studies (i.e., affect balance, life satisfaction, self-esteem, morale, locus of control, depression). Results revealed that positive relations with others, autonomy, purpose in life, and personal growth were not strongly tied to prior assessment indexes, thereby supporting the claim that key aspects of positive functioning have not been represented in the empirical arena. Furthermore, age profiles revealed a more differentiated pattern of well-being than is evident in prior research.", "title": "" } ]
[ { "docid": "07a718d6e7136e90dbd35ea18d6a5f11", "text": "We discuss the importance of understanding psychological aspects of phishing, and review some recent findings. Given these findings, we critique some commonly used security practices and suggest and review alternatives, including educational approaches. We suggest a few techniques that can be used to assess and remedy threats remotely, without requiring any user involvement. We conclude by discussing some approaches to anticipate the next wave of threats, based both on psychological and technical insights. 1 What Will Consumers Believe? There are several reasons why it is important to understand what consumers will find believable. First of all, it is crucial for service providers to know their vulnerabilities (and those of their clients) in order to assess their exposure to risks and the associated liabilities. Second, recognizing what the vulnerabilities are translates into knowing from where the attacks are likely to come; this allows for suitable technical security measures to be deployed to detect and protect against attacks of concern. It also allows for a proactive approach in which the expected vulnerabilities are minimized by the selection and deployment of appropriate email and web templates, and the use of appropriate manners of interaction. Finally, there are reasons for why understanding users is important that are not directly related to security: Knowing what consumers will believe—and will not believe—means a better ability to reach the consumers with information they do not expect, whether for reasons of advertising products or communicating alerts. Namely, given the mimicry techniques used by phishers, there is a risk that consumers incorrectly classify legitimate messages as attempts to attack them. Being aware of potential pitfalls may guide decisions that facilitate communication. While technically knowledgeable, specialists often make the mistake of believing that security measures that succeed in protecting them are sufficient to protect average consumers. For example, it was for a long time commonly held among security practitioners that the widespread deployment of SSL would eliminate phishing once consumers become aware of the risks and nature of phishing attacks. This, very clearly, has not been the case, as supported both by reallife observations and by experiments [48]. This can be ascribed to a lack of attention to security among typical users [47, 35], but also to inconsistent or inappropriate security education [12]— whether implicit or not. An example of a common procedure that indirectly educates user is the case of lock symbols. Many financial institutions place a lock symbol in the content portion of the login page to indicate that a secure connection will be established as the user submits his credentials. This is to benefit from the fact that users have been educated to equate an SSL lock with a higher level of security. However, attackers may also place lock icons in the content of the page, whether they intend to establish an SSL connection or not. Therefore, the use of the lock", "title": "" }, { "docid": "e91d3ae1224ca4c86f72646fd86cc661", "text": "We examine the functional cohesion of procedures using a data slice abstraction. Our analysis identi es the data tokens that lie on more than one slice as the \\glue\" that binds separate components together. 
Cohesion is measured in terms of the relative number of glue tokens, tokens that lie on more than one data slice, and super-glue tokens, tokens that lie on all data slices in a procedure, and the adhesiveness of the tokens. The intuition and measurement scale factors are demonstrated through a set of abstract transformations and composition operators. Index terms | software metrics, cohesion, program slices, measurement theory", "title": "" }, { "docid": "8e26d11fa1ab330a429f072c1ac17fe2", "text": "The objective of this study was to report the signalment, indications for surgery, postoperative complications and outcome in dogs undergoing penile amputation and scrotal urethrostomy. Medical records of three surgical referral facilities were reviewed for dogs undergoing penile amputation and scrotal urethrostomy between January 2003 and July 2010. Data collected included signalment, presenting signs, indication for penile amputation, surgical technique, postoperative complications and long-term outcome. Eighteen dogs were included in the study. Indications for surgery were treatment of neoplasia (n=6), external or unknown penile trauma (n=4), penile trauma or necrosis associated with urethral obstruction with calculi (n=3), priapism (n=4) and balanoposthitis (n=1). All dogs suffered mild postoperative haemorrhage (posturination and/or spontaneous) from the urethrostomy stoma for up to 21 days (mean 5.5 days). Four dogs had minor complications recorded at suture removal (minor dehiscence (n=1), mild bruising and swelling around the urethrostomy site and mild haemorrhage at suture removal (n=2), and granulation at the edge of stoma (n=1)). One dog had a major complication (wound dehiscence and subsequent stricture of the stoma). Long-term outcome was excellent in all dogs with non-neoplastic disease. Local tumour recurrence and/or metastatic disease occurred within five to 12 months of surgery in two dogs undergoing penile amputation for the treatment of neoplasia. Both dogs were euthanased.", "title": "" }, { "docid": "878cd4545931099ead5df71076afc731", "text": "The pioneer deep neural networks (DNNs) have emerged to be deeper or wider for improving their accuracy in various applications of artificial intelligence. However, DNNs are often too heavy to deploy in practice, and it is often required to control their architectures dynamically given computing resource budget, i.e., anytime prediction. While most existing approaches have focused on training multiple shallow sub-networks jointly, we study training thin sub-networks instead. To this end, we first build many inclusive thin sub-networks (of the same depth) under a minor modification of existing multi-branch DNNs, and found that they can significantly outperform the state-of-art dense architecture for anytime prediction. This is remarkable due to their simplicity and effectiveness, but training many thin subnetworks jointly faces a new challenge on training complexity. To address the issue, we also propose a novel DNN architecture by forcing a certain sparsity pattern on multi-branch network parameters, making them train efficiently for the purpose of anytime prediction. In our experiments on the ImageNet dataset, its sub-networks have up to 43.3% smaller sizes (FLOPs) compared to those of the state-of-art anytime model with respect to the same accuracy. 
Finally, we also propose an alternative task under the proposed architecture using a hierarchical taxonomy, which brings a new angle for anytime prediction.", "title": "" }, { "docid": "be447131554900aaba025be449944613", "text": "Attackers increasingly take advantage of innocent users who tend to casually open email messages assumed to be benign, carrying malicious documents. Recent targeted attacks aimed at organizations utilize the new Microsoft Word documents (*.docx). Anti-virus software fails to detect new unknown malicious files, including malicious docx files. In this paper, we present ALDOCX, a framework aimed at accurate detection of new unknown malicious docx files that also efficiently enhances the framework’s detection capabilities over time. Detection relies upon our new structural feature extraction methodology (SFEM), which is performed statically using meta-features extracted from docx files. Using machine-learning algorithms with SFEM, we created a detection model that successfully detects new unknown malicious docx files. In addition, because it is crucial to maintain the detection model’s updatability and incorporate new malicious files created daily, ALDOCX integrates our active-learning (AL) methods, which are designed to efficiently assist anti-virus vendors by better focusing their experts’ analytical efforts and enhance detection capability. ALDOCX identifies and acquires new docx files that are most likely malicious, as well as informative benign files. These files are used for enhancing the knowledge stores of both the detection model and the anti-virus software. The evaluation results show that by using ALDOCX and SFEM, we achieved a high detection rate of malicious docx files (94.44% TPR) compared with the anti-virus software (85.9% TPR)—with very low FPR rates (0.19%). ALDOCX’s AL methods used only 14% of the labeled docx files, which led to a reduction of 95.5% in security experts’ labeling efforts compared with the passive learning and the support vector machine (SVM)-Margin (existing active-learning method). Our AL methods also showed a significant improvement of 91% in number of unknown docx malware acquired, compared with the passive learning and the SVM-Margin, thus providing an improved updating solution for the detection model, as well as the anti-virus software widely used within organizations.", "title": "" }, { "docid": "e2988860c1e8b4aebd6c288d37d1ca4e", "text": "Numerous studies have shown that datacenter computers rarely operate at full utilization, leading to a number of proposals for creating servers that are energy proportional with respect to the computation that they are performing.\n In this paper, we show that as servers themselves become more energy proportional, the datacenter network can become a significant fraction (up to 50%) of cluster power. In this paper we propose several ways to design a high-performance datacenter network whose power consumption is more proportional to the amount of traffic it is moving -- that is, we propose energy proportional datacenter networks.\n We first show that a flattened butterfly topology itself is inherently more power efficient than the other commonly proposed topology for high-performance datacenter networks. We then exploit the characteristics of modern plesiochronous links to adjust their power and performance envelopes dynamically. 
Using a network simulator, driven by both synthetic workloads and production datacenter traces, we characterize and understand design tradeoffs, and demonstrate an 85% reduction in power --- which approaches the ideal energy-proportionality of the network.\n Our results also demonstrate two challenges for the designers of future network switches: 1) We show that there is a significant power advantage to having independent control of each unidirectional channel comprising a network link, since many traffic patterns show very asymmetric use, and 2) system designers should work to optimize the high-speed channel designs to be more energy efficient by choosing optimal data rate and equalization technology. Given these assumptions, we demonstrate that energy proportional datacenter communication is indeed possible.", "title": "" }, { "docid": "555afe09318573b475e96e72d2c7e54e", "text": "A conflict-free replicated data type (CRDT) is an abstract data type, with a well defined interface, designed to be replicated at multiple processes and exhibiting the following properties: (i) any replica can be modified without coordinating with another replicas; (ii) when any two replicas have received the same set of updates, they reach the same state, deterministically, by adopting mathematically sound rules to guarantee state convergence.", "title": "" }, { "docid": "fba5b69c3b0afe9f39422db8c18dba06", "text": "It is well known that stressful experiences may affect learning and memory processes. Less clear is the exact nature of these stress effects on memory: both enhancing and impairing effects have been reported. These opposite effects may be explained if the different time courses of stress hormone, in particular catecholamine and glucocorticoid, actions are taken into account. Integrating two popular models, we argue here that rapid catecholamine and non-genomic glucocorticoid actions interact in the basolateral amygdala to shift the organism into a 'memory formation mode' that facilitates the consolidation of stressful experiences into long-term memory. The undisturbed consolidation of these experiences is then promoted by genomic glucocorticoid actions that induce a 'memory storage mode', which suppresses competing cognitive processes and thus reduces interference by unrelated material. Highlighting some current trends in the field, we further argue that stress affects learning and memory processes beyond the basolateral amygdala and hippocampus and that stress may pre-program subsequent memory performance when it is experienced during critical periods of brain development.", "title": "" }, { "docid": "352c61af854ffc6dab438e7a1be56fcb", "text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. 
The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.", "title": "" }, { "docid": "84301afe8fa5912dc386baab84dda7ea", "text": "There is a growing understanding that machine learning architectures have to be much bigger and more complex to approach any intelligent behavior. There is also a growing understanding that purely supervised learning is inadequate to train such systems. A recent paradigm of artificial recurrent neural network (RNN) training under the umbrella-name Reservoir Computing (RC) demonstrated that training big recurrent networks (the reservoirs) differently than supervised readouts from them is often better. It started with Echo State Networks (ESNs) and Liquid State Machines ten years ago where the reservoir was generated randomly and only linear readouts from it were trained. Rather surprisingly, such simply and fast trained ESNs outperformed classical fully-trained RNNs in many tasks. While full supervised training of RNNs is problematic, intuitively there should also be something better than a random network. In recent years RC became a vivid research field extending the initial paradigm from fixed random reservoir and trained output into using different methods for training the reservoir and the readout. In this thesis we overview existing and investigate new alternatives to the classical supervised training of RNNs and their hierarchies. First we present a taxonomy and a systematic overview of the RNN training approaches under the RC umbrella. Second, we propose and investigate the use of two different neural network models for the reservoirs together with several unsupervised adaptation techniques, as well as unsupervisedly layer-wise trained deep hierarchies of such models. We rigorously empirically test the proposed methods on two temporal pattern recognition datasets, comparing it to the classical reservoir computing state of art.", "title": "" }, { "docid": "fa313356d7267e963f75cd2ba4452814", "text": "INTRODUCTION\nStroke is a major cause of death and disability. Accurately predicting stroke outcome from a set of predictive variables may identify high-risk patients and guide treatment approaches, leading to decreased morbidity. Logistic regression models allow for the identification and validation of predictive variables. However, advanced machine learning algorithms offer an alternative, in particular, for large-scale multi-institutional data, with the advantage of easily incorporating newly available data to improve prediction performance. Our aim was to design and compare different machine learning methods, capable of predicting the outcome of endovascular intervention in acute anterior circulation ischaemic stroke.\n\n\nMETHOD\nWe conducted a retrospective study of a prospectively collected database of acute ischaemic stroke treated by endovascular intervention. Using SPSS®, MATLAB®, and Rapidminer®, classical statistics as well as artificial neural network and support vector algorithms were applied to design a supervised machine capable of classifying these predictors into potential good and poor outcomes. 
These algorithms were trained, validated and tested using randomly divided data.\n\n\nRESULTS\nWe included 107 consecutive acute anterior circulation ischaemic stroke patients treated by endovascular technique. Sixty-six were male and the mean age of 65.3. All the available demographic, procedural and clinical factors were included into the models. The final confusion matrix of the neural network, demonstrated an overall congruency of ∼ 80% between the target and output classes, with favourable receiving operative characteristics. However, after optimisation, the support vector machine had a relatively better performance, with a root mean squared error of 2.064 (SD: ± 0.408).\n\n\nDISCUSSION\nWe showed promising accuracy of outcome prediction, using supervised machine learning algorithms, with potential for incorporation of larger multicenter datasets, likely further improving prediction. Finally, we propose that a robust machine learning system can potentially optimise the selection process for endovascular versus medical treatment in the management of acute stroke.", "title": "" }, { "docid": "43cc6e40a7a31948ca2e7c141b271dbf", "text": "The false discovery rate (FDR)—the expected fraction of spurious discoveries among all the discoveries—provides a popular statistical assessment of the reproducibility of scientific studies in various disciplines. In this work, we introduce a new method for controlling the FDR in meta-analysis of many decentralized linear models. Our method targets the scenario where many research groups—possibly the number of which is random—are independently testing a common set of hypotheses and then sending summary statistics to a coordinating center in an online manner. Built on the knockoffs framework introduced by Barber and Candès (2015), our procedure starts by applying the knockoff filter to each linear model and then aggregates the summary statistics via one-shot communication in a novel way. This method gives exact FDR control non-asymptotically without any knowledge of the noise variances or making any assumption about sparsity of the signal. In certain settings, it has a communication complexity that is optimal up to a logarithmic factor.", "title": "" }, { "docid": "5e07328bf13a9dd2486e9dddbe6a3d8f", "text": "We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer’s functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.", "title": "" }, { "docid": "64a0e5d297c1bf2d42eae909e9548fb6", "text": "How to find the representative bands is a key issue in band selection for hyperspectral data. Very often, unsupervised band selection is associated with data clustering, and the cluster centers (or exemplars) are considered ideal representatives. 
However, partitioning the bands into clusters may be very time-consuming and affected by the distribution of the data points. In this letter, we propose a new band selection method, i.e., exemplar component analysis (ECA), aiming at selecting the exemplars of bands. Interestingly, ECA does not involve actual clustering. Instead, it prioritizes the bands according to their exemplar score, which is an easy-to-compute indicator defined in this letter measuring the possibility of bands to be exemplars. As a result, ECA is of high efficiency and immune to distribution structures of the data. The experiments on real hyperspectral data set demonstrate that ECA is an effective and efficient band selection method.", "title": "" }, { "docid": "74227709f4832c3978a21abb9449203b", "text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.", "title": "" }, { "docid": "867bb8f30a1e9440a03903d8471443f0", "text": "In this paper we present the Reactable, a new electronic musical instrument with a simple and intuitive tabletop interface that turns music into a tangible and visual experience, enabling musicians to experiment with sound, change its structure, control its parameters and be creative in a direct, refreshing and unseen way.", "title": "" }, { "docid": "4c5ed8940b888a4eb2abc5791afd5a36", "text": "A low-gain antenna (LGA) is designed for high cross-polarization discrimination (XPD) and low backward radiation within the 8.025-8.4-GHz frequency band to mitigate cross-polarization and multipath interference given the spacecraft layout constraints. The X-band choke ring horn was optimized, fabricated, and measured. The antenna gain remains higher than 2.5 dBi for angles between 0° and 60° off-boresight. The XPD is higher than 15 dB from 0° to 40° and higher than 20 dB from 40° to 60° off-boresight. The calculated and measured data are in excellent agreement.", "title": "" }, { "docid": "59f083611e4dc81c5280fc118e05401c", "text": "We propose a low area overhead and power-efficient asynchronous-logic quasi-delay-insensitive (QDI) sense-amplifier half-buffer (SAHB) approach with quad-rail (i.e., 1-of-4) data encoding. The proposed quad-rail SAHB approach is targeted for area- and energy-efficient asynchronous network-on-chip (ANoC) router designs. There are three main features in the proposed quad-rail SAHB approach. 
First, the quad-rail SAHB is designed to use four wires for selecting four ANoC router directions, hence reducing the number of transistors and area overhead. Second, the quad-rail SAHB switches only one out of four wires for 2-bit data propagation, hence reducing the number of transistor switchings and dynamic power dissipation. Third, the quad-rail SAHB abides by QDI rules, hence the designed ANoC router features high operational robustness toward process-voltage-temperature (PVT) variations. Based on the 65-nm CMOS process, we use the proposed quad-rail SAHB to implement and prototype an 18-bit ANoC router design. When benchmarked against the dual-rail counterpart, the proposed quad-rail SAHB ANoC router features 32% smaller area and dissipates 50% lower energy under the same excellent operational robustness toward PVT variations. When compared to the other reported ANoC routers, our proposed quad-rail SAHB ANoC router is one of the high operational robustness, smallest area, and most energy-efficient designs.", "title": "" }, { "docid": "19ea89fc23e7c4d564e4a164cfc4947a", "text": "OBJECTIVES\nThe purpose of this study was to evaluate the proximity of the mandibular molar apex to the buccal bone surface in order to provide anatomic information for apical surgery.\n\n\nMATERIALS AND METHODS\nCone-beam computed tomography (CBCT) images of 127 mandibular first molars and 153 mandibular second molars were analyzed from 160 patients' records. The distance was measured from the buccal bone surface to the root apex and the apical 3.0 mm on the cross-sectional view of CBCT.\n\n\nRESULTS\nThe second molar apex and apical 3 mm were located significantly deeper relative to the buccal bone surface compared with the first molar (p < 0.01). For the mandibular second molars, the distance from the buccal bone surface to the root apex was significantly shorter in patients over 70 years of age (p < 0.05). Furthermore, this distance was significantly shorter when the first molar was missing compared to nonmissing cases (p < 0.05). For the mandibular first molars, the distance to the distal root apex of one distal-rooted tooth was significantly greater than the distance to the disto-buccal root apex (p < 0.01). In mandibular second molar, the distance to the apex of C-shaped roots was significantly greater than the distance to the mesial root apex of non-C-shaped roots (p < 0.01).\n\n\nCONCLUSIONS\nFor apical surgery in mandibular molars, the distance from the buccal bone surface to the apex and apical 3 mm is significantly affected by the location, patient age, an adjacent missing anterior tooth, and root configuration.", "title": "" }, { "docid": "3f4953e2fd874fa9be4ab64912cd190a", "text": "Road detection from a monocular camera is an important perception module in any advanced driver assistance or autonomous driving system. Traditional techniques [1, 2, 3, 4, 5, 6] work reasonably well for this problem, when the roads are well maintained and the boundaries are clearly marked. However, in many developing countries or even for the rural areas in the developed countries, the assumption does not hold which leads to failure of such techniques. In this paper we propose a novel technique based on the combination of deep convolutional neural networks (CNNs), along with color lines model [7] based prior in a conditional random field (CRF) framework. While the CNN learns the road texture, the color lines model allows to adapt to varying illumination conditions. 
We show that our technique outperforms the state of the art segmentation techniques on the unmarked road segmentation problem. Though, not a focus of this paper, we show that even on the standard benchmark datasets like KITTI [8] and CamVid [9], where the road boundaries are well marked, the proposed technique performs competitively to the contemporary techniques.", "title": "" } ]
scidocsrr
b76682699bd65eb1bb86bfedf78406c9
A food image recognition system with Multiple Kernel Learning
[ { "docid": "432fe001ec8f1331a4bd033e9c49ccdf", "text": "Recently, methods based on local image features have shown promise for texture and object recognition tasks. This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels and classifiers. We then conduct a comparative evaluation with several state-of-the-art recognition methods on four texture and five object databases. On most of these databases, our implementation exceeds the best reported results and achieves comparable performance on the rest. Finally, we investigate the influence of background correlations on recognition performance via extensive tests on the PASCAL database, for which ground-truth object localization information is available. Our experiments demonstrate that image representations based on distributions of local features are surprisingly effective for classification of texture and object images under challenging real-world conditions, including significant intra-class variations and substantial background clutter.", "title": "" }, { "docid": "dce51c1fed063c9d9776fce998209d25", "text": "While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lankriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We show that it can be rewritten as a semi-infinite linear program that can be efficiently solved by recycling the standard SVM implementations. Moreover, we generalize the formulation and our method to a larger class of problems, including regression and one-class classification. Experimental results show that the proposed algorithm works for hundred thousands of examples or hundreds of kernels to be combined, and helps for automatic model selection, improving the interpretability of the learning result. In a second part we discuss general speed up mechanism for SVMs, especially when used with sparse feature maps as appear for string kernels, allowing us to train a string kernel SVM on a 10 million real-world splice dataset from computational biology. We integrated Multiple Kernel Learning in our Machine Learning toolbox SHOGUN for which the source code is publicly available at http://www.fml.tuebingen.mpg.de/raetsch/projects/shogun.", "title": "" } ]
[ { "docid": "02156199912027e9230b3c000bcbe87b", "text": "Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters. Although conventional VC can be built from non-parallel data, it is difficult to convert speaker individuality such as phonetic property and speaking rate contained in the posterior probabilities because the source posterior probabilities are directly used for predicting target speech parameters. In this work, we assume that the training data partly include parallel speech data and propose sequence-to-sequence learning between the source and target posterior probabilities. The conversion models perform non-linear and variable-length transformation from the source probability sequence to the target one. Further, we propose a joint training algorithm for the modules. In contrast to conventional VC, which separately trains the speech recognition that estimates posterior probabilities and the speech synthesis that predicts target speech parameters, our proposed method jointly trains these modules along with the proposed probability conversion modules. Experimental results demonstrate that our approach outperforms the conventional VC.", "title": "" }, { "docid": "be009b972c794d01061c4ebdb38cc720", "text": "The existing efforts in computer assisted semen analysis have been focused on high speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval and to summarize spatio-temporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve as a means of external memorization of video data as well as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observation in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.", "title": "" }, { "docid": "de3ba8a5e83dc1fa153b9341ff7cbc76", "text": "The 1990s have seen a rapid growth of research interests in mobile ad hoc networking. The infrastructureless and the dynamic nature of these networks demands new set of networking strategies to be implemented in order to provide efficient end-to-end communication. This, along with the diverse application of these networks in many different scenarios such as battlefield and disaster recovery, have seen MANETs being researched by many different organisations and institutes. 
MANETs employ the traditional TCP/IP structure to provide end-to-end communication between nodes. However, due to their mobility and the limited resource in wireless networks, each layer in the TCP/IP model require redefinition or modifications to function efficiently in MANETs. One interesting research area in MANET is routing. Routing in the MANETs is a challenging task and has received a tremendous amount of attention from researches. This has led to development of many different routing protocols for MANETs, and each author of each proposed protocol argues that the strategy proposed provides an improvement over a number of different strategies considered in the literature for a given network scenario. Therefore, it is quite difficult to determine which protocols may perform best under a number of different network scenarios, such as increasing node density and traffic. In this paper, we provide an overview of a wide range of routing protocols proposed in the literature. We also provide a performance comparison of all routing protocols and suggest which protocols may perform best in large networks.", "title": "" }, { "docid": "2d784404588a3b214684f38b060ac29c", "text": "Complex query types, huge data volumes, and very high read/update ratios make the indexing techniques designed and tuned for traditional database systems unsuitable for data warehouses (DW). We propose an encoded bitmap indexing for DWs which improves the performance of known bitmap indexing in the case of large cardinality domains. A performance analysis and theorems which identify properties of good encodings for better performance are presented. We compare encoded bitmap indexing with related techniques such as bit-slicing, projection, dynamic, and range-based indexing.", "title": "" }, { "docid": "3419c35e0dff7b47328943235419a409", "text": "Several methods of classification of partially edentulous arches have been proposed and are in use. The most familiar classifications are those originally proposed by Kennedy, Cummer, and Bailyn. None of these classification systems include implants, simply because most of them were proposed before implants became widely accepted. At this time, there is no classification system for partially edentulous arches incorporating implants placed or to be placed in the edentulous spaces for a removable partial denture (RPD). This article proposes a simple classification system for partially edentulous arches with implants based on the Kennedy classification system, with modification, to be used for RPDs. It incorporates the number and positions of implants placed or to be placed in the edentulous areas. A different name, Implant-Corrected Kennedy (ICK) Classification System, is given to the new classification system to be differentiated from other partially edentulous arch classification systems.", "title": "" }, { "docid": "af842014eb9d2201f20f5ec5a5025fe5", "text": "In the context of deep learning for robotics, we show effective method of training a real robot to grasp a tiny sphere (1.37cm of diameter), with an original combination of system design choices. We decompose the end-to-end system into a vision module and a closed-loop controller module. The two modules use target object segmentation as their common interface. The vision module extracts information from the robot end-effector camera, in the form of a binary segmentation mask of the target. We train it to achieve effective domain transfer by composing real background images with simulated images of the target. 
The controller module takes as input the binary segmentation mask, and thus is agnostic to visual discrepancies between simulated and real environments. We train our closed-loop controller in simulation using imitation learning and show it is robust with respect to discrepancies between the dynamic model of the simulated and real robot: when combined with eye-in-hand observations, we achieve a 90% success rate in grasping a tiny sphere with a real robot. The controller can generalize to unseen scenarios where the target is moving and even learns to recover from failures.", "title": "" }, { "docid": "e464cde1434026c17b06716c6a416b7a", "text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.", "title": "" }, { "docid": "037fb8eb72b55b8dae1aee107eb6b15c", "text": "Traditional methods on video summarization are designed to generate summaries for single-view video records, and thus they cannot fully exploit the mutual information in multi-view video records. In this paper, we present a multiview metric learning framework for multi-view video summarization. It combines the advantages of maximum margin clustering with the disagreement minimization criterion. The learning framework thus has the ability to find a metric that best separates the input data, and meanwhile to force the learned metric to maintain underlying intrinsic structure of data points, for example geometric information. Facilitated by such a framework, a systematic solution to the multi-view video summarization problem is developed from the viewpoint of metric learning. The effectiveness of the proposed method is demonstrated by experiments.", "title": "" }, { "docid": "7e08a713a97f153cdd3a7728b7e0a37c", "text": "The availability of long circulating, multifunctional polymers is critical to the development of drug delivery systems and bioconjugates. The ease of synthesis and functionalization make linear polymers attractive but their rapid clearance from circulation compared to their branched or cyclic counterparts, and their high solution viscosities restrict their applications in certain settings. Herein, we report the unusual compact nature of high molecular weight (HMW) linear polyglycerols (LPGs) (LPG - 100; M(n) - 104 kg mol(-1), M(w)/M(n) - 1.15) in aqueous solutions and its impact on its solution properties, blood compatibility, cell compatibility, in vivo circulation, biodistribution and renal clearance. The properties of LPG have been compared with hyperbranched polyglycerol (HPG) (HPG-100), linear polyethylene glycol (PEG) with similar MWs. 
The hydrodynamic size and the intrinsic viscosity of LPG-100 in water were considerably lower compared to PEG. The Mark-Houwink parameter of LPG was almost 10-fold lower than that of PEG. LPG and HPG demonstrated excellent blood and cell compatibilities. Unlike LPG and HPG, HMW PEG showed dose dependent activation of blood coagulation, platelets and complement system, severe red blood cell aggregation and hemolysis, and cell toxicity. The long blood circulation of LPG-100 (t(1/2β,) 31.8 ± 4 h) was demonstrated in mice; however, it was shorter compared to HPG-100 (t(1/2β,) 39.2 ± 8 h). The shorter circulation half life of LPG-100 was correlated with its higher renal clearance and deformability. Relatively lower organ accumulation was observed for LPG-100 and HPG-100 with some influence of on the architecture of the polymers. Since LPG showed better biocompatibility profiles, longer in vivo circulation time compared to PEG and other linear drug carrier polymers, and has multiple functionalities for conjugation, makes it a potential candidate for developing long circulating multifunctional drug delivery systems similar to HPG.", "title": "" }, { "docid": "6cd9df79a38656597b124b139746462e", "text": "Load balancing is a technique which allows efficient parallelization of irregular workloads, and a key component of many applications and parallelizing runtimes. Work-stealing is a popular technique for implementing load balancing, where each parallel thread maintains its own work set of items and occasionally steals items from the sets of other threads.\n The conventional semantics of work stealing guarantee that each inserted task is eventually extracted exactly once. However, correctness of a wide class of applications allows for relaxed semantics, because either: i) the application already explicitly checks that no work is repeated or ii) the application can tolerate repeated work.\n In this paper, we introduce idempotent work tealing, and present several new algorithms that exploit the relaxed semantics to deliver better performance. The semantics of the new algorithms guarantee that each inserted task is eventually extracted at least once-instead of exactly once.\n On mainstream processors, algorithms for conventional work stealing require special atomic instructions or store-load memory ordering fence instructions in the owner's critical path operations. In general, these instructions are substantially slower than regular memory access instructions. By exploiting the relaxed semantics, our algorithms avoid these instructions in the owner's operations.\n We evaluated our algorithms using common graph problems and micro-benchmarks and compared them to well-known conventional work stealing algorithms, the THE Cilk and Chase-Lev algorithms. We found that our best algorithm (with LIFO extraction) outperforms existing algorithms in nearly all cases, and often by significant margins.", "title": "" }, { "docid": "65117f76b795ad5fabd271fad5ee4287", "text": "We present a novel, fast, and compact method to improve semantic segmentation of three-dimensional (3-D) point clouds, which is able to learn and exploit common contextual relations between observed structures and objects. Introducing 3-D Entangled Forests (3-DEF), we extend the concept of entangled features for decision trees to 3-D point clouds, enabling the classifier not only to learn, which labels are likely to occur close to each other, but also in which specific geometric configuration. 
Operating on a plane-based representation of a point cloud, our method does not require a final smoothing step and achieves state-of-the-art results on the NYU Depth Dataset in a single inference step. This compactness in turn allows for fast processing times, a crucial factor to consider for online applications on robotic platforms. In a thorough evaluation, we demonstrate the expressiveness of our new 3-D entangled feature set and the importance of spatial context in the scope of semantic segmentation.", "title": "" }, { "docid": "7681a78f2d240afc6b2e48affa0612c1", "text": "Web usage mining applies data mining procedures to analyze user access of Web sites. As with any KDD (knowledge discovery and data mining) process, WUM contains three main steps: preprocessing, knowledge extraction, and results analysis. We focus on data preprocessing, a fastidious, complex process. Analysts aim to determine the exact list of users who accessed the Web site and to reconstitute user sessions-the sequence of actions each user performed on the Web site. Intersites WUM deals with Web server logs from several Web sites, generally belonging to the same organization. Thus, analysts must reassemble the users' path through all the different Web servers that they visited. Our solution is to join all the log files and reconstitute the visit. Classical data preprocessing involves three steps: data fusion, data cleaning, and data structuration. Our solution for WUM adds what we call advanced data preprocessing. This consists of a data summarization step, which will allow the analyst to select only the information of interest. We've successfully tested our solution in an experiment with log files from INRIA Web sites.", "title": "" }, { "docid": "f693b26866ca8eb2a893dead7aa0fb21", "text": "This paper deals with response signals processing in eddy current non-destructive testing. Non-sinusoidal excitation is utilized to drive eddy currents in a conductive specimen. The response signals due to a notch with variable depth are calculated by numerical means. The signals are processed in order to evaluate the depth of the notch. Wavelet transformation is used for this purpose. Obtained results are presented and discussed in this paper. Streszczenie. Praca dotyczy sygnałów wzbudzanych przy nieniszczącym testowaniu za pomocą prądów wirowych. Przy pomocy symulacji numerycznych wyznaczono sygnały odpowiedzi dla niesinusoidalnych sygnałów wzbudzających i defektów o różnej głębokości. Celem symulacji jest wyznaczenie zależności pozwalającej wyznaczyć głębokość defektu w zależności od odbieranego sygnału. W artykule omówiono wykorzystanie do tego celu transformaty falkowej. (Analiza falkowa impulsowych prądów wirowych)", "title": "" }, { "docid": "d81c25a953bc14e3316e2ae7485c023a", "text": "The amphibious robot is so attractive and challenging for its broad application and its complex working environment. It should walk on rough ground, maneuver underwater and pass through transitional terrain such as sand and mud, simultaneously. To tackle with such a complex task, a novel amphibious robot (AmphiHex-I) with transformable leg-flipper composite propulsion is proposed and developed. This paper presents the detailed structure design of the transformable leg-flipper propulsion mechanism and its drive module, which enables the amphibious robot passing through the terrain, water and transitional zone between them. 
A preliminary theoretical analysis is conducted to study the interaction between the elliptic leg and transitional environment such as granular medium. An orthogonal experiment is designed to study the leg locomotion in the sandy and muddy terrain with different water content. Finally, basic propulsion experiments of AmphiHex-I are launched, which verified the locomotion capability on land and underwater is achieved by the transformable leg-flipper mechanism.", "title": "" }, { "docid": "1488c4ad77f042cbc67aa1681fca8d7e", "text": "Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. The BioCreative V organized a Chemical Disease Relation (CDR) Track regarding chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participation system Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations in biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug-disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations in biomedical literature. The CD-REST system provides web services using HTTP POST request. The web services can be accessed fromhttp://clinicalnlptool.com/cdr The online CD-REST demonstration system is available athttp://clinicalnlptool.com/cdr/cdr.html. Database URL:http://clinicalnlptool.com/cdr;http://clinicalnlptool.com/cdr/cdr.html.", "title": "" }, { "docid": "0ebcadb280792dfc14714cd13e550775", "text": "Brushless DC (BLDC) motor drives are continually gaining popularity in motion control applications. Therefore, it is necessary to have a low cost, but effective BLDC motor speed/torque regulator. This paper introduces a novel concept for digital control of trapezoidal BLDC motors. The digital controller was implemented via two different methods, namely conduction-angle control and current-mode control. Motor operation is allowed only at two operating points or states. Alternating between the two operating points results in an average operating point that produces an average operating speed. The controller design equations are derived from Newton's second law. The novel controller is verified via computer simulations and an experimental demonstration is carried out with the rapid prototyping and real-time interface system dSPACE.", "title": "" }, { "docid": "5cdc962d9ce66938ad15829f8d0331ed", "text": "This study aims to provide a picture of how relationship quality can influence customer loyalty or loyalty in the business-to-business context. Building on prior research, we propose relationship quality as a higher construct comprising trust, commitment, satisfaction and service quality. 
These dimensions of relationship quality can reasonably explain the influence of relationship quality on customer loyalty. This study follows the composite loyalty approach providing both behavioural aspects (purchase intentions) and attitudinal loyalty in order to fully explain the concept of customer loyalty. A literature search is undertaken in the areas of customer loyalty, relationship quality, perceived service quality, trust, commitment and satisfaction. This study then seeks to address the following research issues: Does relationship quality influence both aspects of customer loyalty? Which relationship quality dimensions influence each of the components of customer loyalty? This study was conducted in a business-to-business setting of the courier and freight delivery service industry in Australia. The survey was targeted to Australian Small to Medium Enterprises (SMEs). Two methods were chosen for data collection: mail survey and online survey. The total number of usable respondents who completed both survey was 306. In this study, a two step approach (Anderson and Gerbing 1988) was selected for measurement model and structural model. The results also show that all measurement models of relationship dimensions achieved a satisfactory level of fit to the data. The hypothesized relationships were estimated using structural equation modeling. The overall goodness of fit statistics shows that the structural model fits the data well. As the results show, to maintain customer loyalty to the supplier, a supplier may enhance all four aspects of relationship quality which are trust, commitment, satisfaction and service quality. Specifically, in order to enhance customer’s trust, a supplier should promote the customer’s trust in the supplier. In efforts to emphasize commitment, a supplier should focus on building affective aspects of commitment rather than calculative aspects. Satisfaction appears to be a crucial factor in maintaining purchase intentions whereas service quality will strongly enhance both purchase intentions and attitudinal loyalty.", "title": "" }, { "docid": "7170a9d4943db078998e1844ad67ae9e", "text": "Privacy has become increasingly important to the database community which is reflected by a noteworthy increase in research papers appearing in the literature. While researchers often assume that their definition of “privacy” is universally held by all readers, this is rarely the case; so many papers addressing key challenges in this domain have actually produced results that do not consider the same problem, even when using similar vocabularies. This paper provides an explicit definition of data privacy suitable for ongoing work in data repositories such as a DBMS or for data mining. The work contributes by briefly providing the larger context for the way privacy is defined legally and legislatively but primarily provides a taxonomy capable of thinking of data privacy technologically. We then demonstrate the taxonomy’s utility by illustrating how this perspective makes it possible to understand the important contribution made by researchers to the issue of privacy. 
The conclusion of this paper is that privacy is indeed multifaceted so no single current research effort adequately addresses the true breadth of the issues necessary to fully understand the scope of this important issue.", "title": "" }, { "docid": "dfa25bfc23d0a7be74d190af7377b740", "text": "Dental erosion is a contemporary disease, mostly because of the change of the eating patterns that currently exist in society. It is a \"silent\" and multifactorial disease, and is highly influenced by habits and lifestyles. The prevalence of dental erosion has considerably increased, with this condition currently standing as a great challenge for the clinician, regarding the diagnosis, identification of the etiological factors, prevention, and execution of an adequate treatment. This article presents a dental erosion review and a case report of a restorative treatment of dental erosion lesions using a combination of bonded ceramic overlays to reestablish vertical dimension and composite resin to restore the worn palatal and incisal surfaces of the anterior upper teeth. Adequate function and esthetics can be achieved with this approach.", "title": "" }, { "docid": "7835a3ecdb9a8563e29ee122e5987503", "text": "Women diagnosed with complete spinal cord injury (SCI) at T10 or higher report sensations generated by vaginal-cervical mechanical self-stimulation (CSS). In this paper we review brain responses to sexual arousal and orgasm in such women, and further hypothesize that the afferent pathway for this unexpected perception is provided by the Vagus nerves, which bypass the spinal cord. Using functional magnetic resonance imaging (fMRI), we ascertained that the region of the medulla oblongata to which the Vagus nerves project (the Nucleus of the Solitary Tract or NTS) is activated by CSS. We also used an objective measure, CSS-induced analgesia response to experimentally induced finger pain, to ascertain the functionality of this pathway. During CSS, several women experienced orgasms. Brain regions activated during orgasm included the hypothalamic paraventricular nucleus, amygdala, accumbens-bed nucleus of the stria terminalis-preoptic area, hippocampus, basal ganglia (especially putamen), cerebellum, and anterior cingulate, insular, parietal and frontal cortices, and lower brainstem (central gray, mesencephalic reticular formation, and NTS). We conclude that the Vagus nerves provide a spinal cord-bypass pathway for vaginal-cervical sensibility and that activation of this pathway can produce analgesia and orgasm.", "title": "" } ]
scidocsrr
8aad42609bf989c816c96442c69dd42f
Evaluating Reliability and Predictive Validity of the Persian Translation of Quantitative Checklist for Autism in Toddlers (Q-CHAT)
[ { "docid": "ab8cc15fe47a9cf4aa904f7e1eea4bc9", "text": "Autism, a severe disorder of development, is difficult to detect in very young children. However, children who receive early intervention have improved long-term prognoses. The Modified Checklist for Autism in Toddlers (M-CHAT), consisting of 23 yes/no items, was used to screen 1,293 children. Of the 58 children given a diagnostic/developmental evaluation, 39 were diagnosed with a disorder on the autism spectrum. Six items pertaining to social relatedness and communication were found to have the best discriminability between children diagnosed with and without autism/PDD. Cutoff scores were created for the best items and the total checklist. Results indicate that the M-CHAT is a promising instrument for the early detection of autism.", "title": "" }, { "docid": "81ca5239dbd60a988e7457076aac05d7", "text": "OBJECTIVE\nFrontline health professionals need a \"red flag\" tool to aid their decision making about whether to make a referral for a full diagnostic assessment for an autism spectrum condition (ASC) in children and adults. The aim was to identify 10 items on the Autism Spectrum Quotient (AQ) (Adult, Adolescent, and Child versions) and on the Quantitative Checklist for Autism in Toddlers (Q-CHAT) with good test accuracy.\n\n\nMETHOD\nA case sample of more than 1,000 individuals with ASC (449 adults, 162 adolescents, 432 children and 126 toddlers) and a control sample of 3,000 controls (838 adults, 475 adolescents, 940 children, and 754 toddlers) with no ASC diagnosis participated. Case participants were recruited from the Autism Research Centre's database of volunteers. The control samples were recruited through a variety of sources. Participants completed full-length versions of the measures. The 10 best items were selected on each instrument to produce short versions.\n\n\nRESULTS\nAt a cut-point of 6 on the AQ-10 adult, sensitivity was 0.88, specificity was 0.91, and positive predictive value (PPV) was 0.85. At a cut-point of 6 on the AQ-10 adolescent, sensitivity was 0.93, specificity was 0.95, and PPV was 0.86. At a cut-point of 6 on the AQ-10 child, sensitivity was 0.95, specificity was 0.97, and PPV was 0.94. At a cut-point of 3 on the Q-CHAT-10, sensitivity was 0.91, specificity was 0.89, and PPV was 0.58. Internal consistency was >0.85 on all measures.\n\n\nCONCLUSIONS\nThe short measures have potential to aid referral decision making for specialist assessment and should be further evaluated.", "title": "" } ]
[ { "docid": "9775396477ccfde5abdd766588655539", "text": "The use of hand gestures offers an alternative to the commonly used human computer interfaces, providing a more intuitive way of navigating among menus and multimedia applications. This paper presents a system for hand gesture recognition devoted to control windows applications. Starting from the images captured by a time-of-flight camera (a camera that produces images with an intensity level inversely proportional to the depth of the objects observed) the system performs hand segmentation as well as a low-level extraction of potentially relevant features which are related to the morphological representation of the hand silhouette. Classification based on these features discriminates between a set of possible static hand postures which results, combined with the estimated motion pattern of the hand, in the recognition of dynamic hand gestures. The whole system works in real-time, allowing practical interaction between user and application.", "title": "" }, { "docid": "888b24ac96c2258d47bec205ccd418b6", "text": "We present a graph-based semi-supervised label propagation algorithm for acquiring opendomain labeled classes and their instances from a combination of unstructured and structured text sources. This acquisition method significantly improves coverage compared to a previous set of labeled classes and instances derived from free text, while achieving comparable precision.", "title": "" }, { "docid": "798ee46a8ac10787eaa154861d0311c6", "text": "In the last few years, we have seen the transformative impact of deep learning in many applications, particularly in speech recognition and computer vision. Inspired by Google's Inception-ResNet deep convolutional neural network (CNN) for image classification, we have developed\"Chemception\", a deep CNN for the prediction of chemical properties, using just the images of 2D drawings of molecules. We develop Chemception without providing any additional explicit chemistry knowledge, such as basic concepts like periodicity, or advanced features like molecular descriptors and fingerprints. We then show how Chemception can serve as a general-purpose neural network architecture for predicting toxicity, activity, and solvation properties when trained on a modest database of 600 to 40,000 compounds. When compared to multi-layer perceptron (MLP) deep neural networks trained with ECFP fingerprints, Chemception slightly outperforms in activity and solvation prediction and slightly underperforms in toxicity prediction. Having matched the performance of expert-developed QSAR/QSPR deep learning models, our work demonstrates the plausibility of using deep neural networks to assist in computational chemistry research, where the feature engineering process is performed primarily by a deep learning algorithm.", "title": "" }, { "docid": "6f16ccc24022c4fc46f8b0b106b0f3c4", "text": "We reviewed 25 patients ascertained through the finding of trigonocephaly/metopic synostosis as a prominent manifestation. In 16 patients, trigonocephaly/metopic synostosis was the only significant finding (64%); 2 patients had metopic/sagittal synostosis (8%) and in 7 patients the trigonocephaly was part of a syndrome (28%). Among the nonsyndromic cases, 12 were males and 6 were females and the sex ratio was 2 M:1 F. Only one patient with isolated trigonocephaly had an affected parent (5.6%). All nonsyndromic patients had normal psychomotor development. 
In 2 patients with isolated metopic/sagittal synostosis, FGFR2 and FGFR3 mutations were studied and none were detected. Among the syndromic cases, two had Jacobsen syndrome associated with deletion of chromosome 11q 23 (28.5%). Of the remaining five syndromic cases, different conditions were found including Say-Meyer syndrome, multiple congenital anomalies and bilateral retinoblastoma with no detectable deletion in chromosome 13q14.2 by G-banding chromosomal analysis and FISH, I-cell disease, a new acrocraniofacial dysostosis syndrome, and Opitz C trigonocephaly syndrome. The last two patients were studied for cryptic chromosomal rearrangements, with SKY and subtelomeric FISH probes. Also FGFR2 and FGFR3 mutations were studied in two syndromic cases, but none were found. This study demonstrates that the majority of cases with nonsyndromic trigonocephaly are sporadic and benign, apart from the associated cosmetic implications. Syndromic trigonocephaly cases are causally heterogeneous and associated with chromosomal as well as single gene disorders. An investigation to delineate the underlying cause of trigonocephaly is indicated because of its important implications on medical management for the patient and the reproductive plans for the family.", "title": "" }, { "docid": "a500afda393ad60ddd1bb39778655172", "text": "The success and the failure of a data warehouse (DW) project are mainly related to the design phase according to most researchers in this domain. When analyzing the decision-making system requirements, many recurring problems appear and requirements modeling difficulties are detected. Also, we encounter the problem associated with the requirements expression by non-IT professionals and non-experts makers on design models. The ambiguity of the term of decision-making requirements leads to a misinterpretation of the requirements resulting from data warehouse design failure and incorrect OLAP analysis. Therefore, many studies have focused on the inclusion of vague data in information systems in general, but few studies have examined this case in data warehouses. This article describes one of the shortcomings of current approaches to data warehouse design which is the study of in the requirements inaccuracy expression and how ontologies can help us to overcome it. We present a survey on this topic showing that few works that take into account the imprecision in the study of this crucial phase in the decision-making process for the presentation of challenges and problems that arise and requires more attention by researchers to improve DW design. According to our knowledge, no rigorous study of vagueness in this area were made. Keywords— Data warehouses Design, requirements analysis, imprecision, ontology", "title": "" }, { "docid": "c66069fc52e1d6a9ab38f699b6a482c6", "text": "An understanding of the age of the Acheulian and the transition to the Middle Stone Age in southern Africa has been hampered by a lack of reliable dates for key sequences in the region. A number of researchers have hypothesised that the Acheulian first occurred simultaneously in southern and eastern Africa at around 1.7-1.6 Ma. A chronological evaluation of the southern African sites suggests that there is currently little firm evidence for the Acheulian occurring before 1.4 Ma in southern Africa. 
Many researchers have also suggested the occurrence of a transitional industry, the Fauresmith, covering the transition from the Early to Middle Stone Age, but again, the Fauresmith has been poorly defined, documented, and dated. Despite the occurrence of large cutting tools in these Fauresmith assemblages, they appear to include all the technological components characteristic of the MSA. New data from stratified Fauresmith bearing sites in southern Africa suggest this transitional industry maybe as old as 511-435 ka and should represent the beginning of the MSA as a broad entity rather than the terminal phase of the Acheulian. The MSA in this form is a technology associated with archaic H. sapiens and early modern humans in Africa with a trend of greater complexity through time.", "title": "" }, { "docid": "95612aa090b77fc660279c5f2886738d", "text": "Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions.", "title": "" }, { "docid": "e756574e701c9ecc4e28da6135499215", "text": "MicroRNAs are small noncoding RNA molecules that regulate gene expression posttranscriptionally through complementary base pairing with thousands of messenger RNAs. They regulate diverse physiological, developmental, and pathophysiological processes. Recent studies have uncovered the contribution of microRNAs to the pathogenesis of many human diseases, including liver diseases. Moreover, microRNAs have been identified as biomarkers that can often be detected in the systemic circulation. We review the role of microRNAs in liver physiology and pathophysiology, focusing on viral hepatitis, liver fibrosis, and cancer. 
We also discuss microRNAs as diagnostic and prognostic markers and microRNA-based therapeutic approaches for liver disease.", "title": "" }, { "docid": "e9f9d022007833ab7ae928619641e1b1", "text": "BACKGROUND\nDissemination and implementation of health care interventions are currently hampered by the variable quality of reporting of implementation research. Reporting of other study types has been improved by the introduction of reporting standards (e.g. CONSORT). We are therefore developing guidelines for reporting implementation studies (StaRI).\n\n\nMETHODS\nUsing established methodology for developing health research reporting guidelines, we systematically reviewed the literature to generate items for a checklist of reporting standards. We then recruited an international, multidisciplinary panel for an e-Delphi consensus-building exercise which comprised an initial open round to revise/suggest a list of potential items for scoring in the subsequent two scoring rounds (scale 1 to 9). Consensus was defined a priori as 80% agreement with the priority scores of 7, 8, or 9.\n\n\nRESULTS\nWe identified eight papers from the literature review from which we derived 36 potential items. We recruited 23 experts to the e-Delphi panel. Open round comments resulted in revisions, and 47 items went forward to the scoring rounds. Thirty-five items achieved consensus: 19 achieved 100% agreement. Prioritised items addressed the need to: provide an evidence-based justification for implementation; describe the setting, professional/service requirements, eligible population and intervention in detail; measure process and clinical outcomes at population level (using routine data); report impact on health care resources; describe local adaptations to the implementation strategy and describe barriers/facilitators. Over-arching themes from the free-text comments included balancing the need for detailed descriptions of interventions with publishing constraints, addressing the dual aims of reporting on the process of implementation and effectiveness of the intervention and monitoring fidelity to an intervention whilst encouraging adaptation to suit diverse local contexts.\n\n\nCONCLUSIONS\nWe have identified priority items for reporting implementation studies and key issues for further discussion. An international, multidisciplinary workshop, where participants will debate the issues raised, clarify specific items and develop StaRI standards that fit within the suite of EQUATOR reporting guidelines, is planned.\n\n\nREGISTRATION\nThe protocol is registered with Equator: http://www.equator-network.org/library/reporting-guidelines-under-development/#17 .", "title": "" }, { "docid": "82250ed88b90a942ddf551b0e9c78dd5", "text": "This paper is focused on proposal of image steganographic method that is able to embedding of encoded secret message using Quick Response Code (QR) code into image data. Discrete Wavelet Transformation (DWT) domain is used for the embedding of QR code, while embedding process is additionally protected by Advanced Encryption Standard (AES) cipher algorithm. In addition, typical characteristics of QR code was broken using the encryption, therefore it makes the method more secure. The aim of this paper is design of image steganographic method with high secure level and high non-perceptibility level. The relation between security and capacity of the method was improved by special compression of QR code before the embedding process. 
Efficiency of the proposed method was measured by Peak Signal-to-Noise Ratio (PSNR) and achieved results were compared with other steganographic tools.", "title": "" }, { "docid": "dc3417d01a998ee476aeafc0e9d11c74", "text": "We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. 1. Per-channel quantization of weights and per-layer quantization of activations to 8-bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures (section 3.1). 2. Model sizes can be reduced by a factor of 4 by quantizing weights to 8bits, even when 8-bit arithmetic is not supported. This can be achieved with simple, post training quantization of weights (section 3.1). 3. We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed point SIMD capabilities, like the Qualcomm QDSPs with HVX (section 6). 4. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision. Quantization-aware training also allows for reducing the precision of weights to four bits with accuracy losses ranging from 2% to 10%, with higher accuracy drop for smaller networks (section 3.2). 5. We introduce tools in TensorFlow and TensorFlowLite for quantizing convolutional networks (Section 3). 6. We review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations (section 4). 7. We recommend that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits (section 7).", "title": "" }, { "docid": "c808655e7272293ef8ae8af563700c2e", "text": "3D scene flow estimation aims to jointly recover dense geometry and 3D motion from stereoscopic image sequences, thus generalizes classical disparity and 2D optical flow estimation. To realize its conceptual benefits and overcome limitations of many existing methods, we propose to represent the dynamic scene as a collection of rigidly moving planes, into which the input images are segmented. Geometry and 3D motion are then jointly recovered alongside an over-segmentation of the scene. This piecewise rigid scene model is significantly more parsimonious than conventional pixel-based representations, yet retains the ability to represent real-world scenes with independent object motion. It, furthermore, enables us to define suitable scene priors, perform occlusion reasoning, and leverage discrete optimization schemes toward stable and accurate results. Assuming the rigid motion to persist approximately over time additionally enables us to incorporate multiple frames into the inference. To that end, each view holds its own representation, which is encouraged to be consistent across all other viewpoints and frames in a temporal window. We show that such a view-consistent multi-frame scheme significantly improves accuracy, especially in the presence of occlusions, and increases robustness against adverse imaging conditions. 
Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo.", "title": "" }, { "docid": "55a0fb2814fde7890724a137fc414c88", "text": "Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making.", "title": "" }, { "docid": "f5e44676e9ce8a06bcdb383852fb117f", "text": "We explore techniques to significantly improve the compute efficiency and performance of Deep Convolution Networks without impacting their accuracy. To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero-values. We achieve the highest reported accuracy of 76.6% Top-1/93% Top-5 on the Imagenet object classification challenge with low-precision network while reducing the compute requirement by ∼3× compared to a full-precision network that achieves similar accuracy. Furthermore, to fully exploit the benefits of our low-precision networks, we build a deep learning accelerator core, DLAC, that can achieve up to 1 TFLOP/mm2 equivalent for single-precision floating-point operations (∼2 TFLOP/mm2 for half-precision), which is ∼5× better than Linear Algebra Core [16] and ∼4× better than previous deep learning accelerator proposal [8].", "title": "" }, { "docid": "2ead9e973f2a237b604bf68284e0acf1", "text": "Cognitive radio networks challenge the traditional wireless networking paradigm by introducing concepts firmly stemmed into the Artificial Intelligence (AI) field, i.e., learning and reasoning. This fosters optimal resource usage and management allowing a plethora of potential applications such as secondary spectrum access, cognitive wireless backbones, cognitive machine-to-machine etc. The majority of overview works in the field of cognitive radio networks deal with the notions of observation and adaptations, which are not a distinguished cognitive radio networking aspect. 
Therefore, this paper provides insight into the mechanisms for obtaining and inferring knowledge that clearly set apart the cognitive radio networks from other wireless solutions.", "title": "" }, { "docid": "6a3dc4c6bcf2a4133532c37dfa685f3b", "text": "Feature selection can be defined as a problem of finding a minimum set of M relevant attributes that describes the dataset as well as the original N attributes do, where M ≤ N. After examining the problems with both the exhaustive and the heuristic approach to feature selection, this paper proposes a probabilistic approach. The theoretic analysis and the experimental study show that the proposed approach is simple to implement and guaranteed to find the optimal if resources permit. It is also fast in obtaining results and effective in selecting features that improve the performance of a learning algorithm. An on-site application involving huge datasets has been conducted independently. It proves the effectiveness and scalability of the proposed algorithm. Discussed also are various aspects and applications of this feature selection algorithm.", "title": "" }, { "docid": "141e3ad8619577140f02a1038981ecb2", "text": "Sponges are sessile benthic filter-feeding animals, which harbor numerous microorganisms. The enormous diversity and abundance of sponge associated bacteria envisages sponges as hot spots of microbial diversity and dynamics. Many theories were proposed on the ecological implications and mechanism of sponge-microbial association, among these, the biosynthesis of sponge derived bioactive molecules by the symbiotic bacteria is now well-indicated. This phenomenon however, is not exhibited by all marine sponges. Based on the available reports, it has been well established that the sponge associated microbial assemblages keep on changing continuously in response to environmental pressure and/or acquisition of microbes from surrounding seawater or associated macroorganisms. In this review, we have discussed nutritional association of sponges with its symbionts, interaction of sponges with other eukaryotic organisms, dynamics of sponge microbiome and sponge-specific microbial symbionts, sponge-coral association etc.", "title": "" }, { "docid": "9f3f5e2baa1bff4aa28a2ce2a4c47088", "text": "One of the most perplexing problems in risk analysis is why some relatively minor risks or risk events, as assessed by technical experts, often elicit strong public concerns and result in substantial impacts upon society and economy. This article sets forth a conceptual framework that seeks to link systematically the technical assessment of risk with psychological, sociological, and cultural perspectives of risk perception and risk-related behavior. The main thesis is that hazards interact with psychological, social, institutional, and cultural processes in ways that may amplify or attenuate public responses to the risk or risk event. A structural description of the social amplification of risk is now possible. Amplification occurs at two stages: in the transfer of information about the risk, and in the response mechanisms of society. Signals about risk are processed by individual and social amplification stations, including the scientist who communicates the risk assessment, the news media, cultural groups, interpersonal networks, and others. Key steps of amplifications can be identified at each stage. The amplified risk leads to behavioral responses, which, in turn, result in secondary impacts. 
Models are presented that portray the elements and linkages in the proposed conceptual framework.", "title": "" }, { "docid": "165429eb7bf6661af60081aa9cdeb370", "text": "Central to the looming paradigm shift toward data-intensive science, machine-learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a black-box solution? These are controversial issues within the remote-sensing community. In this article, we analyze the challenges of using deep learning for remote-sensing data analysis, review recent advances, and provide resources we hope will make deep learning in remote sensing seem ridiculously simple. More importantly, we encourage remote-sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization.", "title": "" }, { "docid": "a3cb3e28db4e44642ecdac8eb4ae9a8a", "text": "A Ka-band highly linear power amplifier (PA) is implemented in 28-nm bulk CMOS technology. Using a deep class-AB PA topology with appropriate harmonic control circuit, highly linear and efficient PAs are designed at millimeter-wave band. This PA architecture provides a linear PA operation close to the saturated power. Also elaborated harmonic tuning and neutralization techniques are used to further improve the transistor gain and stability. A two-stack PA is designed for higher gain and output power than a common source (CS) PA. Additionally, average power tracking (APT) is applied to further reduce the power consumption at a low power operation and, hence, extend battery life. Both the PAs are tested with two different signals at 28.5 GHz; they are fully loaded long-term evolution (LTE) signal with 16-quadrature amplitude modulation (QAM), a 7.5-dB peakto-average power ratio (PAPR), and a 20-MHz bandwidth (BW), and a wireless LAN (WLAN) signal with 64-QAM, a 10.8-dB PAPR, and an 80-MHz BW. The CS/two-stack PAs achieve power-added efficiency (PAE) of 27%/25%, error vector magnitude (EVM) of 5.17%/3.19%, and adjacent channel leakage ratio (ACLRE-UTRA) of -33/-33 dBc, respectively, with an average output power of 11/14.6 dBm for the LTE signal. For the WLAN signal, the CS/2-stack PAs achieve the PAE of 16.5%/17.3%, and an EVM of 4.27%/4.21%, respectively, at an average output power of 6.8/11 dBm.", "title": "" } ]
scidocsrr
bffc21925bf37c6af821150d9a109478
Improving a credit card fraud detection system using genetic algorithm
[ { "docid": "51eb8e36ffbf5854b12859602f7554ef", "text": "Fraud is increasing dramatically with the expansion of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. Although prevention technologies are the best way to reduce fraud, fraudsters are adaptive and, given time, will usually find ways to circumvent such measures. Methodologies for the detection of fraud are essential if we are to catch fraudsters once fraud prevention has failed. Statistics and machine learning provide effective technologies for fraud detection and have been applied successfully to detect activities such as money laundering, e-commerce credit card fraud, telecommunications fraud and computer intrusion, to name but a few. We describe the tools available for statistical fraud detection and the areas in which fraud detection technologies are most used.", "title": "" }, { "docid": "e404699c5b86d3a3a47a1f3d745eecc1", "text": "We apply Artificial Immune Systems(AIS) [4] for credit card fraud detection and we compare it to other methods such as Neural Nets(NN) [8] and Bayesian Nets(BN) [2], Naive Bayes(NB) and Decision Trees(DT) [13]. Exhaustive search and Genetic Algorithm(GA) [7] are used to select optimized parameters sets, which minimizes the fraud cost for a credit card database provided by a Brazilian card issuer. The specifics of the fraud database are taken into account, such as skewness of data and different costs associated with false positives and negatives. Tests are done with holdout sample sets, and all executions are run using Weka [18], a publicly available software. Our results are consistent with the early result of Maes in [12] which concludes that BN is better than NN, and this occurred in all our evaluated tests. Although NN is widely used in the market today, the evaluated implementation of NN is among the worse methods for our database. In spite of a poor behavior if used with the default parameters set, AIS has the best performance when parameters optimized by GA are used.", "title": "" } ]
[ { "docid": "fdaf5546d430226721aa1840f92ba5af", "text": "The recent development of regulatory policies that permit the use of TV bands spectrum on a secondary basis has motivated discussion about coexistence of primary (e.g. TV broadcasts) and secondary users (e.g. WiFi users in TV spectrum). However, much less attention has been given to coexistence of different secondary wireless technologies in the TV white spaces. Lack of coordination between secondary networks may create severe interference situations, resulting in less efficient usage of the spectrum. In this paper, we consider two of the most prominent wireless technologies available today, namely Long Term Evolution (LTE), and WiFi, and address some problems that arise from their coexistence in the same band. We perform exhaustive system simulations and observe that WiFi is hampered much more significantly than LTE in coexistence scenarios. A simple coexistence scheme that reuses the concept of almost blank subframes in LTE is proposed, and it is observed that it can improve the WiFi throughput per user up to 50 times in the studied scenarios.", "title": "" }, { "docid": "f7121b434ae326469780f300256367a8", "text": "Aerial Manipulators (AMs) are a special class of underactuated mechanical systems formed by the join of Unmanned Aerial Vehicles (UAVs) and manipulators. A thorough analysis of the dynamics and a fully constructive controller design for a quadrotor plus n-link manipulator in a free-motion on an arbitrary plane is provided, via the lDA-PBC methodology. A controller is designed with the manipulator locked at any position ensuring global asymptotic stability in an open set and avoiding the AM goes upside down (autonomous). The major result of stability/robustness arises when it is proved that, additionally, the controller guarantees the boundedness of the trajectories for bounded movements of the manipulator, i.e. the robot manipulator executing planned tasks, giving rise to a non-autonomous port-controlled Hamiltonian system in closed loop. Moreover, all trajectories converge to a positive limit set, a strong result for matching-type controllers.", "title": "" }, { "docid": "1773e82a9f8f928a1ce0abd053a7cd99", "text": "INTRODUCTION\nThe aim of this study was to investigate the role of treatment timing on the effectiveness of vertical-pull chincup (V-PCC) therapy in conjunction with a bonded rapid maxillary expander (RME) in growing subjects with mild-to-severe hyperdivergent facial patterns.\n\n\nMETHODS\nThe records of 39 subjects treated with a bonded RME combined with a V-PCC were compared with 29 untreated subjects with similar vertical skeletal disharmonies. Lateral cephalograms were analyzed before (T1) and after treatment or observation (T2). Both the treated and the untreated samples were divided into prepubertal and pubertal groups on the basis of cervical vertebral maturation (prepubertal treated group, 21 subjects; pubertal treated group, 18 subjects; prepubertal control group, 15 subjects; pubertal control group, 14 subjects). Mean change differences from T2 to T1 were compared in the 2 prepubertal and the 2 pubertal groups with independent-sample t tests.\n\n\nRESULTS\nNo statistically significant differences between the 2 prepubertal groups were found for any cephalometric skeletal measures from T1 to T2. 
When compared with the untreated pubertal sample, the group treated with the RME and V-PCC at puberty showed a statistically significant reduction in the inclination of the mandibular plane to the Frankfort horizontal (-2.2 mm), a statistically significant reduction in the inclination of the condylar axis to the mandibular plane (-2.2 degrees), and statistically significant supplementary growth of the mandibular ramus (1.7 mm).\n\n\nCONCLUSIONS\nTreatment of increased vertical dimension with the RME and V-PCC protocol appears to produce better results during the pubertal growth spurt than before puberty, although the absolute amount of correction in the vertical skeletal parameters is limited.", "title": "" }, { "docid": "8468e279ff6dfcd11a5525ab8a60d816", "text": "We provide a concise introduction to basic approaches to reinforcement learning from the machine learning perspective. The focus is on value function and policy gradient methods. Some selected recent trends are highlighted.", "title": "" }, { "docid": "9b430645f7b0da19b2c55d43985259d8", "text": "Research on human spatial memory and navigational ability has recently shown the strong influence of reference systems in spatial memory on the ways spatial information is accessed in navigation and other spatially oriented tasks. One of the main findings can be characterized as a large cognitive cost, both in terms of speed and accuracy that occurs whenever the reference system used to encode spatial information in memory is not aligned with the reference system required by a particular task. In this paper, the role of aligned and misaligned reference systems is discussed in the context of the built environment and modern architecture. The role of architectural design on the perception and mental representation of space by humans is investigated. The navigability and usability of built space is systematically analysed in the light of cognitive theories of spatial and navigational abilities of humans. It is concluded that a building’s navigability and related wayfinding issues can benefit from architectural design that takes into account basic results of spatial cognition research. 1 Wayfinding and Architecture Life takes place in space and humans, like other organisms, have developed adaptive strategies to find their way around their environment. Tasks such as identifying a place or direction, retracing one’s path, or navigating a large-scale space, are essential elements to mobile organisms. Most of these spatial abilities have evolved in natural environments over a very long time, using properties present in nature as cues for spatial orientation and wayfinding. With the rise of complex social structure and culture, humans began to modify their natural environment to better fit their needs. The emergence of primitive dwellings mainly provided shelter, but at the same time allowed builders to create environments whose spatial structure “regulated” the chaotic natural environment. They did this by using basic measurements and geometric relations, such as straight lines, right angles, etc., as the basic elements of design (Le Corbusier, 1931, p. 69ff.) In modern society, most of our lives take place in similar regulated, human-made spatial environments, with paths, tracks, streets, and hallways as the main arteries of human locomotion. Architecture and landscape architecture embody the human effort to structure space in meaningful and useful ways. Architectural design of space has multiple functions. 
Architecture is designed to satisfy the different representational, functional, aesthetic, and emotional needs of organizations and the people who live or work in these structures. In this chapter, emphasis lies on a specific functional aspect of architectural design: human wayfinding. Many approaches to improving architecture focus on functional issues, like improved ecological design, the creation of improved workplaces, better climate control, lighting conditions, or social meeting areas. Similarly, when focusing on the mobility of humans, the ease of wayfinding within a building can be seen as an essential function of a building’s design (Arthur & Passini, 1992; Passini, 1984). When focusing on wayfinding issues in buildings, cities, and landscapes, the designed spatial environment can be seen as an important tool in achieving a particular goal, e.g., reaching a destination or finding an exit in case of emergency. This view, if taken to a literal extreme, is summarized by Le Corbusier’s (1931) notion of the building as a “machine,” mirroring in architecture the engineering ideals of efficiency and functionality found in airplanes and cars. In the narrow sense of wayfinding, a building thus can be considered of good design if it allows easy and error-free navigation. This view is also adopted by Passini (1984), who states that “although the architecture and the spatial configuration of a building generate the wayfinding problems people have to solve, they are also a wayfinding support system in that they contain the information necessary to solve the problem” (p. 110). Like other problems of engineering, the wayfinding problem in architecture should have one or more solutions that can be evaluated. This view of architecture can be contrasted with the alternative view of architecture as “built philosophy”. According to this latter view, architecture, like art, expresses ideas and cultural progress by shaping the spatial structure of the world – a view which gives consideration to the users as part of the philosophical approach but not necessarily from a usability perspective. Viewing wayfinding within the built environment as a “man-machine-interaction” problem makes clear that good architectural design with respect to navigability needs to take two factors into account. First, the human user comes equipped with particular sensory, perceptual, motoric, and cognitive abilities. Knowledge of these abilities and the limitations of an average user or special user populations thus is a prerequisite for good design. Second, structural, functional, financial, and other design considerations restrict the degrees of freedom architects have in designing usable spaces. In the following sections, we first focus on basic research on human spatial cognition. Even though not all of it is directly applicable to architectural design and wayfinding, it lays the foundation for more specific analyses in part 3 and 4. In part 3, the emphasis is on a specific research question that recently has attracted some attention: the role of environmental structure (e.g., building and street layout) for the selection of a spatial reference frame. In part 4, implications for architectural design are discussed by means of two real-world examples. 2 The human user in wayfinding 2.1 Navigational strategies Finding one’s way in the environment, reaching a destination, or remembering the location of relevant objects are some of the elementary tasks of human activity. 
Fortunately, human navigators are well equipped with an array of flexible navigational strategies, which usually enable them to master their spatial environment (Allen, 1999). In addition, human navigation can rely on tools that extend human sensory and mnemonic abilities. Most spatial or navigational strategies are so common that they do not occur to us when we perform them. Walking down a hallway we hardly realize that the optical and acoustical flows give us rich information about where we are headed and whether we will collide with other objects (Gibson, 1979). Our perception of other objects already includes physical and social models on how they will move and where they will be once we reach the point where paths might cross. Following a path can consist of following a particular visual texture (e.g., asphalt) or feeling a handrail in the dark by touch. At places where multiple continuing paths are possible, we might have learned to associate the scene with a particular action (e.g., turn left; Schölkopf & Mallot, 1995), or we might try to approximate a heading direction by choosing the path that most closely resembles this direction. When in doubt about our path we might ask another person or consult a map. As is evident from this brief (and not exhaustive) description, navigational strategies and activities are rich in diversity and adaptability (for an overview see Golledge, 1999; Werner, Krieg-Brückner, & Herrmann, 2000), some of which are aided by architectural design and signage (see Arthur & Passini, 1992; Passini, 1984). Despite the large number of different navigational strategies, people still experience problems finding their way or even feel lost momentarily. This feeling of being lost might reflect the lack of a key component of human wayfinding: knowledge about where one is located in an environment – with respect to one’s goal, one’s starting location, or with respect to the global environment one is in. As Lynch put it, “the terror of being lost comes from the necessity that a mobile organism be oriented in its surroundings” (1960, p. 125.) Some wayfinding strategies, like vector navigation, rely heavily on this information. Other strategies, e.g. piloting or path-following, which are based on purely local information can benefit from even vague locational knowledge as a redundant source of information to validate or question navigational decisions (see Werner et al., 2000, for examples.) Proficient signage in buildings, on the other hand, relies on a different strategy. It relieves a user from keeping track of his or her position in space by indicating the correct navigational choice whenever the choice becomes relevant. Keeping track of one’s position during navigation can be done quite easily if access to global landmarks, reference directions, or coordinates is possible. Unfortunately, the built environment often does not allow for simple navigational strategies based on these types of information. Instead, spatial information has to be integrated across multiple places, paths, turns, and extended periods of time (see Poucet, 1993, for an interesting model of how this can be achieved). In the next section we will describe an essential ingredient of this integration – the mental representation of spatial information in memory. 
2.2 Alignment effects in spatial memory When observing tourists in an unfamiliar environment, one often notices people frantically turning maps to align the noticeable landmarks depicted in the map with the visible landmarks as seen from the viewpoint of the tourist. This type of behavior indicates a well-established cognitive principle (Levine, Jankovic, & Palij, 1982). Observers more easily comprehend and use information depicted in “You-are-here” (YAH) maps if the up-down direction of the map coincides with the front-back direction of the observer. In this situation, the natural preference of directional mapping of top to front and bottom to back is used, and left and right in the map stay left and right in the depicted world. While th", "title": "" }, { "docid": "54d3d5707e50b979688f7f030770611d", "text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.", "title": "" }, { "docid": "403d54a5672037cb8adb503405845bbd", "text": "This paper introduces adaptor grammars, a class of probabil istic models of language that generalize probabilistic context-free grammar s (PCFGs). Adaptor grammars augment the probabilistic rules of PCFGs with “ada ptors” that can induce dependencies among successive uses. With a particular choice of adaptor, based on the Pitman-Yor process, nonparametric Bayesian mo dels f language using Dirichlet processes and hierarchical Dirichlet proc esses can be written as simple grammars. We present a general-purpose inference al gorithm for adaptor grammars, making it easy to define and use such models, and ill ustrate how several existing nonparametric Bayesian models can be expressed wi thin this framework.", "title": "" }, { "docid": "d952de00554b9a6bb21fbce802729b3f", "text": "In the past five years there has been a dramatic increase in work on Search Based Software Engineering (SBSE), an approach to software engineering in which search based optimisation algorithms are used to address problems in Software Engineering. SBSE has been applied to problems throughout the Software Engineering lifecycle, from requirements and project planning to maintenance and re-engineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This paper provides a review and classification of literature on SBSE. 
The paper identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.", "title": "" }, { "docid": "e2a1c8dfae27d56faf2fee494ffbae28", "text": "Quantitative structure-activity relationship (QSAR) modeling pertains to the construction of predictive models of biological activities as a function of structural and molecular information of a compound library. The concept of QSAR has typically been used for drug discovery and development and has gained wide applicability for correlating molecular information with not only biological activities but also with other physicochemical properties, which has therefore been termed quantitative structure-property relationship (QSPR). Typical molecular parameters that are used to account for electronic properties, hydrophobicity, steric effects, and topology can be determined empirically through experimentation or theoretically via computational chemistry. A given compilation of data sets is then subjected to data pre-processing and data modeling through the use of statistical and/or machine learning techniques. This review aims to cover the essential concepts and techniques that are relevant for performing QSAR/QSPR studies through the use of selected examples from our previous work.", "title": "" }, { "docid": "69201195326d4e8c5cac61d817e4c1f2", "text": "This paper focuses on the evaluation of theoretical and numerical aspects related to an original DC microgrid power architecture for efficient charging of plug-in electric vehicles (PEVs). The proposed DC microgrid is based on photovoltaic array (PVA) generation, electrochemical storage, and grid connection; it is assumed that PEVs have a direct access to their DC charger input. As opposed to conventional power architecture designs, the PVA is coupled directly on the DC link without a static converter, which implies no DC voltage stabilization, increasing energy efficiency, and reducing control complexity. Based on a real-time rule-based algorithm, the proposed power management allows self-consumption according to PVA power production and storage constraints, and the public grid is seen only as back-up. The first phase of modeling aims to evaluate the main energy flows within the proposed DC microgrid architecture and to identify the control structure and the power management strategies. For this, an original model is obtained by applying the Energetic Macroscopic Representation formalism, which allows deducing the control design using Maximum Control Structure. The second phase of simulation is based on the numerical characterization of the DC microgrid components and the energy management strategies, which consider the power source requirements, charging times of different PEVs, electrochemical storage ageing, and grid power limitations for injection mode. The simulation results show the validity of the model and the feasibility of the proposed DC microgrid power architecture which presents good performance in terms of total efficiency and simplified control. OPEN ACCESS Energies 2015, 8 4336", "title": "" }, { "docid": "1394eaac58304e5d6f951ca193e0be40", "text": "We introduce low-cost hardware for performing non-invasive side-channel attacks on Radio Frequency Identification Devices (RFID) and develop techniques for facilitating a correlation power analysis (CPA) in the presence of the field of an RFID reader. 
We practically verify the effectiveness of the developed methods by analysing the security of commercial contactless smartcards employing strong cryptography, pinpointing weaknesses in the protocol and revealing a vulnerability towards side-channel attacks. Employing the developed hardware, we present the first successful key-recovery attack on commercially available contactless smartcards based on the Data Encryption Standard (DES) or TripleDES (3DES) cipher that are widely used for security-sensitive applications, e.g., payment purposes.", "title": "" }, { "docid": "21943e640ce9b56414994b5df504b1a6", "text": "It is a preferable method to transfer power wirelessly using contactless slipring systems for rotary applications. The current single or multiple-unit single-phase systems often have limited power transfer capability, so they may not be able to meet the load requirements. This paper presents a contactless slipring system based on axially traveling magnetic field that can achieve a high output power level. A new index termed mutual inductance per pole is introduced to simplify the analysis of the mutually coupled poly-phase system to a single-phase basis. Both simulation and practical results have shown that the proposed system can transfer 2.7 times more power than a multiple-unit (six individual units) single-phase system with the same amount of ferrite and copper materials at higher power transfer efficiency. It has been found that the new system can achieve about 255.6 W of maximum power at 97% efficiency, compared to 68.4 W at 90% of a multiple-unit (six individual units) single-phase system.", "title": "" }, { "docid": "43a7e786704b5347f3b67c08ac9c4f70", "text": "Before beginning any robot task, users must position the robot's base, a task that now depends entirely on user intuition. While slight perturbation is tolerable for robots with moveable bases, correcting the problem is imperative for fixed-base robots if some essential task sections are out of reach. For mobile manipulation robots, it is necessary to decide on a specific base position before beginning manipulation tasks. This paper presents Reuleaux, an open source library for robot reachability analyses and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations of a whole robot or only the arm can be efficiently determined. This can be applied to both statically mounted robots, where the position of the robot and workpiece ensure the maximum amount of work performed, and to mobile robots, where the maximum amount of workable area can be reached. The methods were tested on different robots of different specifications and evaluated for tasks in simulation and real world environment. Evaluation results indicate that Reuleaux had significantly improved performance than prior existing methods in terms of time-efficiency and range of applicability.", "title": "" }, { "docid": "039bcd32a6ad2ebe93e08f518ffa5c7b", "text": "Falling down and not managing to get up again is one of the main concerns of elderly people living alone in their home. Robotic assistance for the elderly promises to have a great potential of detecting these critical situations and calling for help. This paper presents a feature-based method to detect fallen people on the ground by a mobile robot equipped with a Kinect sensor. 
Point clouds are segmented, layered and classified to detect fallen people, even under occlusions by parts of their body or furniture. Different features, originally from pedestrian and object detection in depth data, and different classifiers are evaluated. Evaluation was done using data of 12 people lying on the floor. Negative samples were collected from objects similar to persons, two tall dogs, and five real apartments of elderly people. The best feature-classifier combination is selected to built a robust system to detect fallen people.", "title": "" }, { "docid": "07fbce97ec4e5e7fd176507b64b01e33", "text": "Drought and heat-induced forest dieback and mortality are emerging global concerns. Although Mediterranean-type forest (MTF) ecosystems are considered to be resilient to drought and other disturbances, we observed a sudden and unprecedented forest collapse in a MTF in Western Australia corresponding with record dry and heat conditions in 2010/2011. An aerial survey and subsequent field investigation were undertaken to examine: the incidence and severity of canopy dieback and stem mortality, associations between canopy health and stand-related factors as well as tree species response. Canopy mortality was found to be concentrated in distinct patches, representing 1.5 % of the aerial sample (1,350 ha). Within these patches, 74 % of all measured stems (>1 cm DBHOB) had dying or recently killed crowns, leading to 26 % stem mortality six months following the collapse. Patches of canopy collapse were more densely stocked with the dominant species, Eucalyptus marginata, and lacked the prominent midstorey species Banksia grandis, compared to the surrounding forest. A differential response to the disturbance was observed among co-occurring tree species, which suggests contrasting strategies for coping with extreme water stress. These results suggest that MTFs, once thought to be resilient to climate change, are susceptible to sudden and severe forest collapse when key thresholds have been reached.", "title": "" }, { "docid": "d774759e03329d0cc5611ab9104f8299", "text": "The flexibility of neural networks is a very powerful property. In many cases, these changes lead to great improvements in accuracy compared to basic models that we discussed in the previous tutorial. In the last part of the tutorial, I will also explain how to parallelize the training of neural networks. This is also an important topic because parallelizing neural networks has played an important role in the current deep learning movement.", "title": "" }, { "docid": "db2b94a49d4907504cf2444305287ec8", "text": "In this paper, we propose a principled Tag Disentangled Generative Adversarial Networks (TDGAN) for re-rendering new images for the object of interest from a single image of it by specifying multiple scene properties (such as viewpoint, illumination, expression, etc.). The whole framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly based on a given set of images that are completely/partially tagged (i.e., supervised/semi-supervised setting). Given an input image, the disentangling network extracts disentangled and interpretable representations, which are then used to generate images by the generative network. In order to boost the quality of disentangled representations, the tag mapping net is integrated to explore the consistency between the image and its tags. 
Furthermore, the discriminative network is introduced to implement the adversarial training strategy for generating more realistic images. Experiments on two challenging datasets demonstrate the state-of-the-art performance of the proposed framework in the problem of interest.", "title": "" }, { "docid": "d7ecb69aeb14b5f899c768032f36cc43", "text": "Building on top of the success of generative adversarial networks (GANs), conditional GANs attempt to better direct the data generation process by conditioning with certain additional information. Inspired by the most recent AC-GAN, in this paper we propose a fast-converging conditional GAN (FC-GAN). In addition to the real/fake classifier used in vanilla GANs, our discriminator has an advanced auxiliary classifier which distinguishes each real class from an extra ‘fake’ class. The ‘fake’ class avoids mixing generated data with real data, which can potentially confuse the classification of real data as AC-GAN does, and makes the advanced auxiliary classifier behave as another real/fake classifier. As a result, FC-GAN can accelerate the process of differentiation of all classes, thus boost the convergence speed. Experimental results on image synthesis demonstrate our model is competitive in the quality of images generated while achieving a faster convergence rate.", "title": "" }, { "docid": "f967ad72daeb84e2fce38aec69997c8a", "text": "While HCI has focused on multitasking with information workers, we report on multitasking among Millennials who grew up with digital media - focusing on college students. We logged computer activity and used biosensors to measure stress of 48 students for 7 days for all waking hours, in their in situ environments. We found a significant positive relationship with stress and daily time spent on computers. Stress is positively associated with the amount of multitasking. Conversely, stress is negatively associated with Facebook and social media use. Heavy multitaskers use significantly more social media and report lower positive affect than light multitaskers. Night habits affect multitasking the following day: late-nighters show longer duration of computer use and those ending their activities earlier in the day multitask less. Our study shows that college students multitask at double the frequency compared to studies of information workers. These results can inform designs for stress management of college students.", "title": "" }, { "docid": "936c4fb60d37cce15ed22227d766908f", "text": "English. The SENTIment POLarity Classification Task 2016 (SENTIPOLC), is a rerun of the shared task on sentiment classification at the message level on Italian tweets proposed for the first time in 2014 for the Evalita evaluation campaign. It includes three subtasks: subjectivity classification, polarity classification, and irony detection. In 2016 SENTIPOLC has been again the most participated EVALITA task with a total of 57 submitted runs from 13 different teams. We present the datasets – which includes an enriched annotation scheme for dealing with the impact on polarity of a figurative use of language – the evaluation methodology, and discuss results and participating systems. Italiano. Descriviamo modalità e risultati della seconda edizione della campagna di valutazione di sistemi di sentiment analysis (SENTIment POLarity Classification Task), proposta nel contesto di “EVALITA 2016: Evaluation of NLP and Speech Tools for Italian”. 
In SENTIPOLC è stata valutata la capacità dei sistemi di riconoscere diversi aspetti del sentiment espresso nei messaggi Twitter in lingua italiana, con un’articolazione in tre sottotask: subjectivity classification, polarity classification e irony detection. La campagna ha suscitato nuovamente grande interesse, con un totale di 57 run inviati da 13 gruppi di partecipanti.", "title": "" } ]
scidocsrr
00edd60b32dca1b610be096d0a5f8e46
Validation of a Greek version of PSS-14; a global measure of perceived stress.
[ { "docid": "51743d233ec269cfa7e010d2109e10a6", "text": "Stress is a part of every life to varying degrees, but individuals differ in their stress vulnerability. Stress is usefully viewed from a biological perspective; accordingly, it involves activation of neurobiological systems that preserve viability through change or allostasis. Although they are necessary for survival, frequent neurobiological stress responses increase the risk of physical and mental health problems, perhaps particularly when experienced during periods of rapid brain development. Recently, advances in noninvasive measurement techniques have resulted in a burgeoning of human developmental stress research. Here we review the anatomy and physiology of stress responding, discuss the relevant animal literature, and briefly outline what is currently known about the psychobiology of stress in human development, the critical role of social regulation of stress neurobiology, and the importance of individual differences as a lens through which to approach questions about stress experiences during development and child outcomes.", "title": "" } ]
[ { "docid": "944efa24cef50c0fd9d940a2ccbcdbcc", "text": "This conceptual paper in sustainable business research introduces a business sustainability maturity model as an innovative solution to support companies move towards sustainable development. Such model offers the possibility for each firm to individually assess its position regarding five sustainability maturity levels and, as a consequence, build a tailored as well as a common strategy along its network of relationships and influence to progress towards higher levels of sustainable development. The maturity model suggested is based on the belief that business sustainability is a continuous process of evolution in which a company will be continuously seeking to achieve its vision of sustainable development in uninterrupted cycles of improvement, where at each new cycle the firm starts the process at a higher level of business sustainability performance. The referred model is therefore dynamic to incorporate changes along the way and enable its own evolution following the firm’s and its network partners’ progress towards the sustainability vision. The research on which this paper is based combines expertise in science and technology policy, R&D and innovation management, team performance and organisational learning, strategy alignment and integrated business performance, knowledge management and technology foresighting.", "title": "" }, { "docid": "71d744aefd254acfc24807d805fb066b", "text": "Bitcoin provides only pseudo-anonymous transactions, which can be exploited to link payers and payees -- defeating the goal of anonymous payments. To thwart such attacks, several Bitcoin mixers have been proposed, with the objective of providing unlinkability between payers and payees. However, existing Bitcoin mixers can be regarded as either insecure or inefficient.\n We present Obscuro, a highly efficient and secure Bitcoin mixer that utilizes trusted execution environments (TEEs). With the TEE's confidentiality and integrity guarantees for code and data, our mixer design ensures the correct mixing operations and the protection of sensitive data (i.e., private keys and mixing logs), ruling out coin theft and address linking attacks by a malicious service provider. Yet, the TEE-based implementation does not prevent the manipulation of inputs (e.g., deposit submissions, blockchain feeds) to the mixer, hence Obscuro is designed to overcome such limitations: it (1) offers an indirect deposit mechanism to prevent a malicious service provider from rejecting benign user deposits; and (2) scrutinizes blockchain feeds to prevent deposits from being mixed more than once (thus degrading anonymity) while being eclipsed from the main blockchain branch. In addition, Obscuro provides several unique anonymity features (e.g., minimum mixing set size guarantee, resistant to dropping user deposits) that are not available in existing centralized and decentralized mixers.\n Our prototype of Obscuro is built using Intel SGX and we demonstrate its effectiveness in Bitcoin Testnet. Our implementation mixes 1000 inputs in just 6.49 seconds, which vastly outperforms all of the existing decentralized mixers.", "title": "" }, { "docid": "2e6d9b7d514463caf66f7adf35868d1d", "text": "Unlike simpler organisms, C. elegans possesses several distinct chemosensory pathways and chemotactic mechanisms. These mechanisms and pathways are individually capable of driving chemotaxis in a chemical concentration gradient. 
However, it is not understood if they are redundant or co-operate in more sophisticated ways. Here we examine the specialisation of different chemotactic mechanisms in a model of chemotaxis to NaCl. We explore the performance of different chemotactic mechanisms in a range of chemical gradients and show that, in the model, far from being redundant, the mechanisms are specialised both for different environments and for distinct features within those environments. We also show that the chemotactic drive mediated by the ASE pathway is not robust to the presence of noise in the chemical gradient. This problem cannot be solved along the ASE pathway without destroying its ability to drive chemotaxis. Instead, we show that robustness to noise can be achieved by introducing a second, much slower NaCl-sensing pathway. This secondary pathway is simpler than the ASE pathway, in the sense that it can respond to either up-steps or down-steps in NaCl but not both, and could correspond to one of several candidates in the literature which we identify and evaluate. This work provides one possible explanation of why there are multiple NaCl sensing pathways and chemotactic mechanisms in C. elegans: rather than being redundant the different pathways and mechanism are specialised both for the characteristics of different environments and for distinct features within a single environment.", "title": "" }, { "docid": "963b6b2b337541fd741d31b2c8addc8d", "text": "I. Unary terms • Body part detection candidates • Capture distribution of scores over all part classes II. Pairwise terms • Capture part relationships within/across people – proximity: same body part class (c = c) – kinematic relations: different part classes (c!= c) III. Integer Linear Program (ILP) • Substitute zdd cc = xdc xd c ydd ′ to linearize objective • NP-Hard problem solved via branch-and-cut (1% gap) • Linear constraints on 0/1 labelings: plausible poses – uniqueness", "title": "" }, { "docid": "9da1449675af42a2fc75ba8259d22525", "text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet `̀ brands'' such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term businessconsumer relationships (Fournier, 1998). 
Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as `̀ perceptions about a brand as reflected by the brand associations held in consumer memory''. These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are The current issue and full text archive of this journal is available at http://www.emerald-library.com The authors thank Paul Herr, Donnie Lichtenstein, Rex Moody, Dave Cravens and Julie Baker for helpful comments on earlier versions of this manuscript. Funding was provided by the Graduate School of the University of Colorado and the Charles Tandy American Enterprise Center at Texas Christian University. Top priority for many firms today 350 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000, pp. 350-368, # MCB UNIVERSITY PRESS, 1061-0421 An executive summary for managers and executive readers can be found at the end of this article multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations, (multi-dimensional) as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; RoedderJohn et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: . test a protocol for developing category-specific measures of brand image; . examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and . explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. 
In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background Brand associations According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything `̀ linked'' in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Practical measurement protocol Importance to marketers and consumers JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 351 Scales to measure partially brand associations have been developed. For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, `̀ using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value''. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. 
Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan,1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand ± whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing Linked in memory to a brand Reasoned or emotional perceptions 352 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 literature. Bruner and Hensel (1996) reported 66 published studies which measured brand attitud", "title": "" }, { "docid": "179e9c0672086798e74fa1197a0fda21", "text": "Narcissism is typically viewed as a dimensional construct in social psychology. Direct evidence supporting this position is lacking, however, and recent research suggests that clinical measures of narcissism exhibit categorical properties. It is therefore unclear whether social psychological researchers should conceptualize narcissism as a category or continuum. To help remedy this, the latent structure of narcissism—measured by the Narcissistic Personality Inventory (NPI)—was examined using 3895 participants and three taxometric procedures. Results suggest that NPI scores are distributed dimensionally. There is no apparent shift from ‘‘normal’’ to ‘‘narcissist’’ observed across the NPI continuum. This is consistent with the prevailing view of narcissism in social psychology and suggests that narcissism is structured similar to other aspects of general personality. This also suggests a difference in how narcissism is structured in clinical versus social psychology (134 words). 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ede04c4692c5e575871e66a249e46d3c", "text": "Distributionally robust stochastic optimization (DRSO) is an approach to optimization under uncertainty in which, instead of assuming that there is an underlying probability distribution that is known exactly, one hedges against a chosen set of distributions. In this paper we first point out that the set of distributions should be chosen to be appropriate for the application at hand, and that some of the choices that have been popular until recently are, for many applications, not good choices. 
We consider sets of distributions that are within a chosen Wasserstein distance from a nominal distribution, for example an empirical distribution resulting from available data. The paper argues that such a choice of sets has two advantages: (1) The resulting distributions hedged against are more reasonable than those resulting from other popular choices of sets. (2) The problem of determining the worst-case expectation over the resulting set of distributions has desirable tractability properties. We derive a dual reformulation of the corresponding DRSO problem and construct approximate worst-case distributions (or an exact worst-case distribution if it exists) explicitly via the first-order optimality conditions of the dual problem. Our contributions are five-fold. (i) We identify necessary and sufficient conditions for the existence of a worst-case distribution, which are naturally related to the growth rate of the objective function. (ii) We show that the worst-case distributions resulting from an appropriate Wasserstein distance have a concise structure and a clear interpretation. (iii) Using this structure, we show that data-driven DRSO problems can be approximated to any accuracy by robust optimization problems, and thereby many DRSO problems become tractable by using tools from robust optimization. (iv) To the best of our knowledge, our proof of strong duality is the first constructive proof for DRSO problems, and we show that the constructive proof technique is also useful in other contexts. (v) Our strong duality result holds in a very general setting, and we show that it can be applied to infinite dimensional process control problems and worst-case value-at-risk analysis.", "title": "" }, { "docid": "100d6140939d37b530888ff9fc644855", "text": "WA-COM has developed an E/D pHEMT process for use in control circuit applications. By adding an E-mode FET to our existing D-mode pHEMT switch process, we are able to integrate logic circuits onto the same die as the RF portion of complex control products (multi-throw switches, multi bit attenuators, etc.). While this capability is not uncommon in the GaAs community, it is new for our fab, and provided new challenges both in processing and in reliability testing. We conducted many tests that focused on the reliability characteristics of this new Emode FET; in the meanwhile, we also needed to assure no degradation of the already qualified D-mode FET. While our initial test suggested low mean-time-tofailure (MTTF) for E-mode devices, recent reliability results have been much better, exceeding our minimum MTTF requirement of 106 hours at channel temperature TCH= 125 °C. Our analysis also shows that devices from this process have high activation energy (Ea 1.6 eV).", "title": "" }, { "docid": "fc3b087bd2c0bd4e12f3cb86f6346c96", "text": "This study investigated whether changes in the technological/social environment in the United States over time have resulted in concomitant changes in the multitasking skills of younger generations. One thousand, three hundred and nineteen Americans from three generations were queried to determine their at-home multitasking behaviors. An anonymous online questionnaire asked respondents to indicate which everyday and technology-based tasks they choose to combine for multitasking and to indicate how difficult it is to multitask when combining the tasks. Combining tasks occurred frequently, especially while listening to music or eating. 
Members of the ‘‘Net Generation” reported more multitasking than members of ‘‘Generation X,” who reported more multitasking than members of the ‘‘Baby Boomer” generation. The choices of which tasks to combine for multitasking were highly correlated across generations, as were difficulty ratings of specific multitasking combinations. The results are consistent with a greater amount of general multitasking resources in younger generations, but similar mental limitations in the types of tasks that can be multitasked. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "917ab22adee174259bef5171fe6f14fb", "text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.", "title": "" }, { "docid": "f8639b0d3a5792bda63dd2f22bfc496a", "text": "The animal metaphor in poststructuralists thinkers like Roland Barthes and Jacques Derrida, offers an understanding into the human self through the relational modes of being and co-being. The present study focuses on the concept of “semiotic animal” proposed by John Deely with reference to Roland Barthes. Human beings are often considered as “rational animal” (Descartes) capable of reason and thinking. By analyzing the “semiotic animal” in Roland Barthes, the intention is to study him as a “mind-dependent” being who discovers the contrast between ens reale and ens rationis through his writing. For Barthes “it is the intimate which seeks utterance” in one and makes “it cry, heard, confronting generality, confronting science.” Roland Barthes attempts to read “his body” from the “tissues of signs” that is driven by the unconscious desires. The study is an attempt to explore the semiological underpinnings in Barthes which are found in the form of rhetorical tropes of cats and dogs and the way he relates it with the ‘self’.", "title": "" }, { "docid": "eb083b4c46d49a6cc639a89b74b1f269", "text": "ROC analyses generated low area under the curve (.695, 95% confidence interval (.637.752)) and cutoff scores with poor sensitivity/specificity balance. BDI-II. Because the distribution of BDI-II scores was not normal, percentile ranks for raw scores were provided for the total sample and separately by gender. symptoms two scales were used: The Beck Depression Inventory-II (BDIII) smokers and non smokers, we found that the mean scores on the BDI-II (9.21 vs.", "title": "" }, { "docid": "1e2768be2148ff1fd102c6621e8da14d", "text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. 
To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.", "title": "" }, { "docid": "cbc6986bf415292292b7008ae4d13351", "text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.", "title": "" }, { "docid": "faca51b6762e4d7c3306208ad800abd3", "text": "Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.", "title": "" }, { "docid": "80cf82caebfb48dac02d001b24163bdf", "text": "This paper presents a new current sensor based on fluxgate principle. The sensor consists of a U-shaped magnetic gathering shell. In the designed sensor, the exciting winding and the secondary winding are arranged orthogonally, so that the magnetic fields produced by the two windings are mutually orthogonal and decoupled. Introducing a magnetic gathering shell into the sensor is to concentrate the detected magnetic field and to reduce the interference of an external stray field. Based on the theoretical analysis and the simulation results, a prototype was designed. 
Test results show that the proposed sensor can measure currents up to 25 A, and has an accuracy of 0.6% and a remarkable resolution.", "title": "" }, { "docid": "2aea197bd094643ecc735b604501b602", "text": "OBJECTIVE\nTo update previous meta-analyses of cohort studies that investigated the association between the Mediterranean diet and health status and to utilize data coming from all of the cohort studies for proposing a literature-based adherence score to the Mediterranean diet.\n\n\nDESIGN\nWe conducted a comprehensive literature search through all electronic databases up to June 2013.\n\n\nSETTING\nCohort prospective studies investigating adherence to the Mediterranean diet and health outcomes. Cut-off values of food groups used to compute the adherence score were obtained.\n\n\nSUBJECTS\nThe updated search was performed in an overall population of 4 172 412 subjects, with eighteen recent studies that were not present in the previous meta-analyses.\n\n\nRESULTS\nA 2-point increase in adherence score to the Mediterranean diet was reported to determine an 8 % reduction of overall mortality (relative risk = 0·92; 95 % CI 0·91, 0·93), a 10 % reduced risk of CVD (relative risk = 0·90; 95 % CI 0·87, 0·92) and a 4 % reduction of neoplastic disease (relative risk = 0·96; 95 % CI 0·95, 0·97). We utilized data coming from all cohort studies available in the literature for proposing a literature-based adherence score. Such a score ranges from 0 (minimal adherence) to 18 (maximal adherence) points and includes three different categories of consumption for each food group composing the Mediterranean diet.\n\n\nCONCLUSIONS\nThe Mediterranean diet was found to be a healthy dietary pattern in terms of morbidity and mortality. By using data from the cohort studies we proposed a literature-based adherence score that can represent an easy tool for the estimation of adherence to the Mediterranean diet also at the individual level.", "title": "" }, { "docid": "9c8fefeb34cc1adc053b5918ea0c004d", "text": "Mezzo is a computer program designed that procedurally writes Romantic-Era style music in real-time to accompany computer games. Leitmotivs are associated with game characters and elements, and mapped into various musical forms. These forms are distinguished by different amounts of harmonic tension and formal regularity, which lets them musically convey various states of markedness which correspond to states in the game story. Because the program is not currently attached to any game or game engine, “virtual” gameplays were been used to explore the capabilities of the program; that is, videos of various game traces were used as proxy examples. For each game trace, Leitmotivs were input to be associated with characters and game elements, and a set of ‘cues’ was written, consisting of a set of time points at which a new set of game data would be passed to Mezzo to reflect the action of the game trace. Examples of music composed for one such game trace, a scene from Red Dead Redemption, are given to illustrate the various ways the program maps Leitmotivs into different levels of musical markedness that correspond with the game state. Introduction Mezzo is a computer program designed by the author that procedurally writes Romantic-Era-style music in real time to accompany computer games. 
It was motivated by the desire for game music to be as rich and expressive as that written for traditional media such as opera, ballet, or film, while still being procedurally generated, and thus able to adapt to a variety of dramatic situations. To do this, it models deep theories of musical form and semiotics in Classical and Romantic music. Characters and other important game elements like props and environmental features are given Leitmotivs, which are constantly rearranged and developed throughout gameplay in ways that evoke the conditions and relationships of these elements. Story states that occur in a game are musically conveyed by employing or withholding normative musical features. This creates various states of markedness, a concept which is defined in semiotic terms as a valuation given to difference (Hatten 1994). An unmarked state or event is one that conveys normativity, while a marked one conveys deviation from or lack of normativity. A succession of musical sections that passes through varying states of markedness and unmarkedness, producing various trajectories of expectation and fulfillment, tension and release, correlates with the sequence of episodes that makes up a game story’s structure. Mezzo uses harmonic tension and formal regularity as its primary vehicles for musically conveying markedness; it is constantly adjusting the values of these features in order to express states of the game narrative. Motives are associated with characters, and markedness with game conditions. These two independent associations allow each coupling of a motive with a level of markedness to be interpreted as a pair of coordinates in a state space (a “semiotic square”), where various regions of the space correspond to different expressive musical qualities (Grabócz 2009). Certain patterns of melodic repetition combined with harmonic function became conventionalized in the Classical Era as normative forms, labeled the sentence, period, and sequence (Caplin 1998, Schoenberg 1969). These forms exist in the middleground of a musical work, each comprising one or several phrase repetitions and one or a small number of harmonic cadences. Each musical form has a normative structure, and various ways in which it can be deformed by introducing irregular amounts of phrase repetition to make the form asymmetrical. Mezzo’s expressive capability comes from the idea that there are different perceptible levels of formal irregularity that can be quantitatively measured, and that these different levels convey different levels of markedness.", "title": "" }, { "docid": "f20a3c60d7415186b065dc7782af16ef", "text": "The present research examined how implicit racial associations and explicit racial attitudes of Whites relate to behaviors and impressions in interracial interactions. Specifically, the authors examined how response latency and self-report measures predicted bias and perceptions of bias in verbal and nonverbal behavior exhibited by Whites while they interacted with a Black partner. As predicted, Whites' self-reported racial attitudes significantly predicted bias in their verbal behavior to Black relative to White confederates. Furthermore, these explicit attitudes predicted how much friendlier Whites felt that they behaved toward White than Black partners. 
In contrast, the response latency measure significantly predicted Whites' nonverbal friendliness and the extent to which the confederates and observers perceived bias in the participants' friendliness.", "title": "" }, { "docid": "678d3dccdd77916d0c653d88785e1300", "text": "BACKGROUND\nFatigue is one of the common complaints of multiple sclerosis (MS) patients, and its treatment is relatively unclear. Ginseng is one of the herbal medicines possessing antifatigue properties, and its administration in MS for such a purpose has been scarcely evaluated. The purpose of this study was to evaluate the efficacy and safety of ginseng in the treatment of fatigue and the quality of life of MS patients.\n\n\nMETHODS\nEligible female MS patients were randomized in a double-blind manner, to receive 250-mg ginseng or placebo twice daily over 3 months. Outcome measures included the Modified Fatigue Impact Scale (MFIS) and the Iranian version of the Multiple Sclerosis Quality Of Life Questionnaire (MSQOL-54). The questionnaires were used after randomization, and again at the end of the study.\n\n\nRESULTS\nOf 60 patients who were enrolled in the study, 52 (86%) subjects completed the trial with good drug tolerance. Statistical analysis showed better effects for ginseng than the placebo as regards MFIS (p = 0.046) and MSQOL (p ≤ 0.0001) after 3 months. No serious adverse events were observed during follow-up.\n\n\nCONCLUSIONS\nThis study indicates that 3-month ginseng treatment can reduce fatigue and has a significant positive effect on quality of life. Ginseng is probably a good candidate for the relief of MS-related fatigue. Further studies are needed to shed light on the efficacy of ginseng in this field.", "title": "" } ]
scidocsrr
18b187af6666031609b07017bfa0c654
Customer relationship management classification using data mining techniques
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "4bf5fd6fdb2cb82fa13abdb13653f3ac", "text": "Customer relationship management (CRM) has once again gained prominence amongst academics and practitioners. However, there is a tremendous amount of confusion regarding its domain and meaning. In this paper, the authors explore the conceptual foundations of CRM by examining the literature on relationship marketing and other disciplines that contribute to the knowledge of CRM. A CRM process framework is proposed that builds on other relationship development process models. CRM implementation challenges as well as CRM's potential to become a distinct discipline of marketing are also discussed in this paper. JEL Classification Codes: M31.", "title": "" } ]
[ { "docid": "c1d5df0e2058e3f191a8227fca51a2fb", "text": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "title": "" }, { "docid": "36cf5e6ffec29f0eede4f369104d00d3", "text": "This meta-analysis is a study of the experimental literature of technology use in postsecondary education from 1990 up to 2010 exclusive of studies of online or distance education previously reviewed by Bernard et al. (2004). It reports the overall weighted average effects of technology use on achievement and attitude outcomes and explores moderator variables in an attempt to explain how technology treatments lead to positive or negative effects. Out of an initial pool of 11,957 study abstracts, 1105 were chosen for analysis, yielding 879 achievement and 181 attitude effect sizes after pre-experimental designs and studies with obvious methodological confounds were removed. The random effects weighted average effect size for achievement was gþ 1⁄4 0.27, k 1⁄4 879, p < .05, and for attitude outcomes it was gþ 1⁄4 0.20, k 1⁄4 181, p < .05. The collection of achievement outcomes was divided into two sub-collections, according to the amount of technology integration in the control condition. These were no technology in the control condition (k 1⁄4 479) and some technology in the control condition (k 1⁄4 400). Random effects multiple meta-regression analysis was run on each sub-collection revealing three significant predictors (subject matter, degree of difference in technology use between the treatment and the control and pedagogical uses of technology). 
The set of predictors for each sub-collection was both significant and homogeneous. Differences were found among the levels of all three moderators, but particularly in favor of cognitive support applications. There were no significant predictors for attitude outcomes. Crown Copyright 2013 Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e28613435d7dbd944a997f2d1fa67598", "text": "Emerging multimedia content including images and texts are always jointly utilized to describe the same semantics. As a result, crossmedia retrieval becomes increasingly important, which is able to retrieve the results of the same semantics with the query but with different media types. In this paper, we propose a novel heterogeneous similarity measure with nearest neighbors (HSNN). Unlike traditional similarity measures which are limited in homogeneous feature space, HSNN could compute the similarity between media objects with different media types. The heterogeneous similarity is obtained by computing the probability for two media objects belonging to the same semantic category. The probability is achieved by analyzing the homogeneous nearest neighbors of each media object. HSNN is flexible so that any traditional similarity measure could be incorporated, which is further regarded as the weak ranker. An effective ranking model is learned from multiple weak rankers through AdaRank for cross-media retrieval. Experiments on the wikipedia dataset show the effectiveness of the proposed approach, compared with stateof-the-art methods. The cross-media retrieval also shows to outperform image retrieval systems on a unimedia retrieval task.", "title": "" }, { "docid": "b617762a18685137a1e18838b2e46f11", "text": "Information Extraction (IE) is a technology for localizing and classifying pieces of relevant information in unstructured natural language texts and detecting relevant relations among them. This thesis deals with one of the central tasks of IE, i.e., relation extraction. The goal is to provide a general framework that automatically learns mappings between linguistic analyses and target semantic relations, with minimal human intervention. Furthermore, this framework is supposed to support the adaptation to new application domains and new relations with various complexities. The central result is a new approach to relation extraction which is based on a minimally supervised method for automatically learning extraction grammars from a large collection of parsed texts, initialized by some instances of the target relation, called semantic seed. Due to the semantic seed approach, the framework can accommodate new relation types and domains with minimal effort. It supports relations of different arity as well as their projections. Furthermore, this framework is general enough to employ any linguistic analysis tools that provide the required type and depth of analysis. The adaptability and the scalability of the framework is facilitated by the DARE rule representation model which is recursive and compositional. In comparison to other IE rule representation models, e.g., Stevenson and Greenwood (2006), the DARE rule representation model is expressive enough to achieve good coverage of linguistic constructions for finding mentions of the target relation. The powerful DARE rules are constructed via a bottom-up and compositional rule discovery strategy, driven by the semantic seed. 
The control of the quality of newly acquired knowledge during the bootstrapping process is realized through a ranking and filtering strategy, taking two aspects into account: the domain relevance and the trustworthiness of the origin. A spe-", "title": "" }, { "docid": "6fe39cbe3811ac92527ba60620b39170", "text": "Providing accurate information about human's state, activity is one of the most important elements in Ubiquitous Computing. Various applications can be enabled if one's state, activity can be recognized. Due to the low deployment cost, non-intrusive sensing nature, Wi-Fi based activity recognition has become a promising, emerging research area. In this paper, we survey the state-of-the-art of the area from four aspects ranging from historical overview, theories, models, key techniques to applications. In addition to the summary about the principles, achievements of existing work, we also highlight some open issues, research directions in this emerging area.", "title": "" }, { "docid": "7111c220a28d7a6fab32d9ecc914c5aa", "text": "Songbirds are one of the best-studied examples of vocal learners. Learning of both human speech and birdsong depends on hearing. Once learned, adult song in many species remains unchanging, suggesting a reduced influence of sensory experience. Recent studies have revealed, however, that adult song is not always stable, extending our understanding of the mechanisms involved in song maintenance, and their similarity to those active during song learning. Here we review some of the processes that contribute to song learning and production, with an emphasis on the role of auditory feedback. We then consider some of the possible neural substrates involved in these processes, particularly basal ganglia circuitry. Although a thorough treatment of human speech is beyond the scope of this article, we point out similarities between speech and song learning, and ways in which studies of these disparate behaviours complement each other in developing an understanding of general principles that contribute to learning and maintenance of vocal behaviour.", "title": "" }, { "docid": "f44bfa0a366fb50a571e6df9f4c3f91d", "text": "BACKGROUND\nIn silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process.\n\n\nRESULTS\ncamb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as techniques to create model ensembles using techniques from the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2).\n\n\nCONCLUSIONS\nOverall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. 
camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: From compounds and data to models: a complete model building workflow in one package.", "title": "" }, { "docid": "2bf1766eccd14d2da3581018ff621f09", "text": "We propose a novel segmentation approach for introducing shape priors in the geometric active contour framework. Following the work of Leventon, we propose to revisit the use of linear principal component analysis (PCA) to introduce prior knowledge about shapes in a more robust manner. Our contribution in this paper is twofold. First, we demonstrate that building a space of familiar shapes by applying PCA on binary images (instead of signed distance functions) enables one to constrain the contour evolution in a way that is more faithful to the elements of a training set. Secondly, we present a novel region-based segmentation framework, able to separate regions of different intensities in an image. Shape knowledge and image information are encoded into two energy functionals entirely described in terms of shapes. This consistent description allows for the simultaneous encoding of multiple types of shapes and leads to promising segmentation results. In particular, our shape-driven segmentation technique offers a convincing level of robustness with respect to noise, clutter, partial occlusions, and blurring.", "title": "" }, { "docid": "6c71078281d0ff7e4829624af5124bfb", "text": "The modeling of artificial, human-level creativity is becoming more and more achievable. In recent years, neural networks have been successfully applied to different tasks such as image and music generation, demonstrating their great potential in realizing computational creativity. The fuzzy definition of creativity combined with varying goals of the evaluated generative systems, however, makes subjective evaluation seem to be the only viable methodology of choice. We review the evaluation of generative music systems and discuss the inherent challenges of their evaluation. Although subjective evaluation should always be the ultimate choice for the evaluation of creative results, researchers unfamiliar with rigorous subjective experiment design and without the necessary resources for the execution of a large-scale experiment face challenges in terms of reliability, validity, and replicability of the results. In numerous studies, this leads to the report of insignificant and possibly irrelevant results and the lack of comparability with similar and previous generative systems. Therefore, we propose a set of simple musically informed objective metrics enabling an objective and reproducible way of evaluating and comparing the output of music generative systems. We demonstrate the usefulness of the proposed metrics with several experiments on real-world data.", "title": "" }, { "docid": "973426438175226bb46c39cc0a390d97", "text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. 
We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.", "title": "" }, { "docid": "9e31cedf404c989d15a2f06c5800f207", "text": "For automatic driving, vehicles must be able to recognize their environment and take control of the vehicle. The vehicle must perceive relevant objects, which includes other traffic participants as well as infrastructure information, assess the situation and generate appropriate actions. This work is a first step of integrating previous works on environment perception and situation analysis toward automatic driving strategies. We present a method for automatic cruise control of vehicles in urban environments. The longitudinal velocity is influenced by the speed limit, the curvature of the lane, the state of the next traffic light and the most relevant target on the current lane. The necessary acceleration is computed in respect to the information which is estimated by an instrumented vehicle.", "title": "" }, { "docid": "4d52865efa6c359d68125c7013647c86", "text": "In recent years, we have witnessed an unprecedented proliferation of large document collections. This development has spawned the need for appropriate analytical means. In particular, to seize the thematic composition of large document collections, researchers increasingly draw on quantitative topic models. Among their most prominent representatives is the Latent Dirichlet Allocation (LDA). Yet, these models have significant drawbacks, e.g. the generated topics lack context and thus meaningfulness. Prior research has rarely addressed this limitation through the lens of mixed-methods research. We position our paper towards this gap by proposing a structured mixedmethods approach to the meaningful analysis of large document collections. Particularly, we draw on qualitative coding and quantitative hierarchical clustering to validate and enhance topic models through re-contextualization. To illustrate the proposed approach, we conduct a case study of the thematic composition of the AIS Senior Scholars' Basket of Journals.", "title": "" }, { "docid": "a62aae5ac55e884d6e1e3ef0282657cc", "text": "Nowadays, the remote Home Automation turns out to be more and more significant and appealing. It improves the value of our lives by automating various electrical appliances or instruments. This paper describes GSM (Global System Messaging) based secured device control system using App Inventor for Android mobile phones. App Inventor is a latest visual programming platform for developing mobile applications for Android-based smart phones. The Android Mobile Phone Platform becomes more and more popular among software developers, because of its powerful capabilities and open architecture. It is a fantastic platform for the real world interface control, as it offers an ample of resources and already incorporates a lot of sensors. No need to write programming codes to develop apps in the App Inventor, instead it provides visual design interface as the way the apps looks and use blocks of interlocking components to control the app’s behaviour. 
The App Inventor aims to make programming enjoyable and accessible to", "title": "" }, { "docid": "506d3e23383de6d3a37471798770ed70", "text": "One of the most controversial issues in uncertainty modelling and information sciences is the relationship between probability theory and fuzzy sets. This paper is meant to survey the literature pertaining to this debate, and to try to overcome misunderstandings and to supply access to many basic references that have addressed the \"probability versus fuzzy set\" challenge. This problem has not a single facet, as will be claimed here. Moreover it seems that a lot of controversies might have been avoided if protagonists had been patient enough to build a common language and to share their scientific backgrounds. The main points made here are as follows. i) Fuzzy set theory is a consistent body of mathematical tools. ii) Although fuzzy sets and probability measures are distinct, several bridges relating them have been proposed that should reconcile opposite points of view ; especially possibility theory stands at the cross-roads between fuzzy sets and probability theory. iii) Mathematical objects that behave like fuzzy sets exist in probability theory. It does not mean that fuzziness is reducible to randomness. Indeed iv) there are ways of approaching fuzzy sets and possibility theory that owe nothing to probability theory. Interpretations of probability theory are multiple especially frequentist versus subjectivist views (Fine [31]) ; several interpretations of fuzzy sets also exist. Some interpretations of fuzzy sets are in agreement with probability calculus and some are not. The paper is structured as follows : first we address some classical misunderstandings between fuzzy sets and probabilities. They must be solved before any discussion can take place. Then we consider probabilistic interpretations of membership functions, that may help in membership function assessment. We also point out nonprobabilistic interpretations of fuzzy sets. The next section examines the literature on possibility-probability transformations and tries to clarify some lurking controversies on that topic. In conclusion, we briefly mention several subfields of fuzzy set research where fuzzy sets and probability are conjointly used.", "title": "" }, { "docid": "eec7a9a6859e641c3cc0ade73583ef5c", "text": "We propose an Apache Spark-based scale-up server architecture using Docker container-based partitioning method to improve performance scalability. The performance scalability problem of Apache Spark-based scale-up servers is due to garbage collection(GC) and remote memory access overheads when the servers are equipped with significant number of cores and Non-Uniform Memory Access(NUMA). The proposed method minimizes the problems using Docker container-based architecture effectively partitioning the original scale-up server into small logical servers. Our evaluation study based on benchmark programs revealed that the partitioning method showed performance improvement by ranging from 1.1x through 1.7x on a 120 core scale-up system. Our proof-of-concept scale-up server architecture provides the basis towards complete and practical design of partitioning-based scale-up servers showing performance scalability.", "title": "" }, { "docid": "ce463006a11477c653c15eb53f673837", "text": "This paper presents a meaning-based statistical math word problem (MWP) solver with understanding, reasoning and explanation. 
It comprises a web user interface and pipelined modules for analysing the text, transforming both body and question parts into their logic forms, and then performing inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating the extracted math quantity with its associated syntactic and semantic information (which specifies the physical meaning of that quantity). Those role-tags are then used to identify the desired operands and filter out irrelevant quantities (so that the answer can be obtained precisely). Since the physical meaning of each quantity is explicitly represented with those role-tags and used in the inference process, the proposed approach could explain how the answer is obtained in a human comprehensible way.", "title": "" }, { "docid": "325003e43d73d68a851a8c3fa6681f94", "text": "This tutorial is aimed at introducing some basic ideas of stochastic programming. The intended audience of the tutorial is optimization practitioners and researchers who wish to acquaint themselves with the fundamental issues that arise when modeling optimization problems as stochastic programs. The emphasis of the paper is on motivation and intuition rather than technical completeness (although we could not avoid giving some technical details). Since it is not intended to be a historical overview of the subject, relevant references are given in the “Notes” section at the end of the paper, rather than in the text. Stochastic programming is an approach for modeling optimization problems that involve uncertainty. Whereas deterministic optimization problems are formulated with known parameters, real world problems almost invariably include parameters which are unknown at the time a decision should be made. When the parameters are uncertain, but assumed to lie in some given set of possible values, one might seek a solution that is feasible for all possible parameter choices and optimizes a given objective function. Such an approach might make sense for example when designing a least-weight bridge with steel having a tensile strength that is known only to within some tolerance. Stochastic programming models are similar in style but try to take advantage of the fact that probability distributions governing the data are known or can be estimated. Often these models apply to settings in which decisions are made repeatedly in essentially the same circumstances, and the objective is to come up with a decision that will perform well on average. An example would be designing truck routes for daily milk delivery to customers with random demand. Here probability distributions (e.g., of demand) could be estimated from data that have been collected over time. The goal is to find some policy that is feasible for all (or almost all) the possible parameter realizations and optimizes the expectation of some function of the decisions and the random variables.", "title": "" }, { "docid": "a6bc752bd6a4fc070fa01a5322fb30a1", "text": "The formulation of a generalized area-based confusion matrix for exploring the accuracy of area estimates is presented. The generalized confusion matrix is appropriate for both traditional classification algorithms and sub-pixel area estimation models. 
An error matrix, derived from the generalized confusion matrix, allows the accuracy of maps generated using area estimation models to be assessed quantitatively and compared to the accuracies obtained from traditional classification techniques. The application of this approach is demonstrated for an area estimation model applied to Landsat data of an urban area of the United Kingdom.", "title": "" }, { "docid": "391fb9de39cb2d0635f2329362db846e", "text": "In recent years, there has been an explosion of interest in mining time series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.", "title": "" }, { "docid": "e78c1fed6f3c09642a8c2c592545bea0", "text": "We present a general framework and algorithmic approach for incremental approximation algorithms. The framework handles cardinality constrained minimization problems, such as the k-median and k-MST problems. Given some notion of ordering on solutions of different cardinalities k, we give solutions for all values of k such that the solutions respect the ordering and such that for any k, our solution is close in value to the value of an optimal solution of cardinality k. For instance, for the k-median problem, the notion of ordering is set inclusion and our incremental algorithm produces solutions such that any k and k', k < k', our solution of size k is a subset of our solution of size k'. We show that our framework applies to this incremental version of the k-median problem (introduced by Mettu and Plaxton [30]), and incremental versions of the k-MST problem, k-vertex cover problem, k-set cover problem, as well as the uncapacitated facility location problem (which is not cardinality-constrained). For these problems we either get new incremental algorithms, or improvements over what was previously known. We also show that the framework applies to hierarchical clustering problems. In particular, we give an improved algorithm for a hierarchical version of the k-median problem introduced by Plaxton [31].", "title": "" } ]
scidocsrr
4bd646da50658547d1ab74cfe5d08613
Metaphors We Think With: The Role of Metaphor in Reasoning
[ { "docid": "45082917d218ec53559c328dcc7c02db", "text": "How are people able to think about things they have never seen or touched? We demonstrate that abstract knowledge can be built analogically from more experience-based knowledge. People's understanding of the abstract domain of time, for example, is so intimately dependent on the more experience-based domain of space that when people make an air journey or wait in a lunch line, they also unwittingly (and dramatically) change their thinking about time. Further, our results suggest that it is not sensorimotor spatial experience per se that influences people's thinking about time, but rather people's representations of and thinking about their spatial experience.", "title": "" }, { "docid": "5ebd92444b69b2dd8e728de2381f3663", "text": "A mind is a computer.", "title": "" }, { "docid": "e39cafd4de135ccb17f7cf74cbd38a97", "text": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.", "title": "" }, { "docid": "c0fc94aca86a6aded8bc14160398ddea", "text": "THE most persistent problems of recall all concern the ways in which past experiences and past reactions are utilised when anything is remembered. From a general point of view it looks as if the simplest explanation available is to suppose that when any specific event occurs some trace, or some group of traces, is made and stored up in the organism or in the mind. Later, an immediate stimulus re-excites the trace, or group of traces, and, provided a further assumption is made to the effect that the trace somehow carries with it a temporal sign, the re-excitement appears to be equivalent to recall. There is, of course, no direct evidence for such traces, but the assumption at first sight seems to be a very simple one, and so it has commonly been made.", "title": "" } ]
[ { "docid": "242686291812095c5320c1c8cae6da27", "text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.", "title": "" }, { "docid": "9adaeac8cedd4f6394bc380cb0abba6e", "text": "The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, \"cocktail-party\" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the \"cocktail party problem\".", "title": "" }, { "docid": "f14daee1ddf6bbf4f3d41fe6ef5fcdb6", "text": "A characteristic that will distinguish successful manufacturing enterprises of the next millennium is agility: the ability to respond quickly, proactively, and aggressively to unpredictable change. The use of extended virtual enterprise Supply Chains (SC) to achieve agility is becoming increasingly prevalent. A key problem in constructing effective SCs is the lack of methods and tools to support the integration of processes and systems into shared SC processes and systems. 
This paper describes the architecture and concept of operation of the Supply Chain Process Design Toolkit (SCPDT), an integrated software system that addresses the challenge of seamless and efficient integration. The SCPDT enables the analysis and design of Supply Chain (SC) processes. SCPDT facilitates key SC process engineering tasks including 1) AS-IS process base-lining and assessment, 2) collaborative TO-BE process requirements definition, 3) SC process integration and harmonization, 4) TO-BE process design trade-off analysis, and 5) TO-BE process planning and implementation.", "title": "" }, { "docid": "3874d10936841f59647d73f750537d96", "text": "The number of studies comparing nutritional quality of restrictive diets is limited. Data on vegan subjects are especially lacking. It was the aim of the present study to compare the quality and the contributing components of vegan, vegetarian, semi-vegetarian, pesco-vegetarian and omnivorous diets. Dietary intake was estimated using a cross-sectional online survey with a 52-items food frequency questionnaire (FFQ). Healthy Eating Index 2010 (HEI-2010) and the Mediterranean Diet Score (MDS) were calculated as indicators for diet quality. After analysis of the diet questionnaire and the FFQ, 1475 participants were classified as vegans (n = 104), vegetarians (n = 573), semi-vegetarians (n = 498), pesco-vegetarians (n = 145), and omnivores (n = 155). The most restricted diet, i.e., the vegan diet, had the lowest total energy intake, better fat intake profile, lowest protein and highest dietary fiber intake in contrast to the omnivorous diet. Calcium intake was lowest for the vegans and below national dietary recommendations. The vegan diet received the highest index values and the omnivorous the lowest for HEI-2010 and MDS. Typical aspects of a vegan diet (high fruit and vegetable intake, low sodium intake, and low intake of saturated fat) contributed substantially to the total score, independent of the indexing system used. The score for the more prudent diets (vegetarians, semi-vegetarians and pesco-vegetarians) differed as a function of the used indexing system but they were mostly better in terms of nutrient quality than the omnivores.", "title": "" }, { "docid": "03a39c98401fc22f1a376b9df66988dc", "text": "A highly efficient wireless power transfer (WPT) system is required in many applications to replace the conventional wired system. The high temperature superconducting (HTS) wires are examined in a WPT system to increase the power-transfer efficiency (PTE) as compared with the conventional copper/Litz conductor. The HTS conductors are naturally can produce higher amount of magnetic field with high induced voltage to the receiving coil. Moreover, the WPT systems are prone to misalignment, which can cause sudden variation in the induced voltage and lead to rapid damage of the resonant capacitors connected in the circuit. Hence, the protection or elimination of resonant capacitor is required to increase the longevity of WPT system, but both the adoptions will operate the system in nonresonance mode. The absence of resonance phenomena in the WPT system will drastically reduce the PTE and correspondingly the future commercialization. This paper proposes an open bifilar spiral coils based self-resonant WPT method without using resonant capacitors at both the sides. The mathematical modeling and circuit simulation of the proposed system is performed by designing the transmitter coil using HTS wire and the receiver with copper coil. 
The three-dimensional modeling and finite element simulation of the proposed system is performed to analyze the current density at different coupling distances between the coil. Furthermore, the experimental results show the PTE of 49.8% under critical coupling with the resonant frequency of 25 kHz.", "title": "" }, { "docid": "18136fba311484e901282c31c9d206fd", "text": "New demands, coming from the industry 4.0 concept of the near future production systems have to be fulfilled in the coming years. Seamless integration of current technologies with new ones is mandatory. The concept of Cyber-Physical Production Systems (CPPS) is the core of the new control and automation distributed systems. However, it is necessary to provide the global production system with integrated architectures that make it possible. This work analyses the requirements and proposes a model-based architecture and technologies to make the concept a reality.", "title": "" }, { "docid": "7ebaee3df1c8ee4bf1c82102db70f295", "text": "Small cells such as femtocells overlaying the macrocells can enhance the coverage and capacity of cellular wireless networks and increase the spectrum efficiency by reusing the frequency spectrum assigned to the macrocells in a universal frequency reuse fashion. However, management of both the cross-tier and co-tier interferences is one of the most critical issues for such a two-tier cellular network. Centralized solutions for interference management in a two-tier cellular network with orthogonal frequency-division multiple access (OFDMA), which yield optimal/near-optimal performance, are impractical due to the computational complexity. Distributed solutions, on the other hand, lack the superiority of centralized schemes. In this paper, we propose a semi-distributed (hierarchical) interference management scheme based on joint clustering and resource allocation for femtocells. The problem is formulated as a mixed integer non-linear program (MINLP). The solution is obtained by dividing the problem into two sub-problems, where the related tasks are shared between the femto gateway (FGW) and femtocells. The FGW is responsible for clustering, where correlation clustering is used as a method for femtocell grouping. In this context, a low-complexity approach for solving the clustering problem is used based on semi-definite programming (SDP). In addition, an algorithm is proposed to reduce the search range for the best cluster configuration. For a given cluster configuration, within each cluster, one femto access point (FAP) is elected as a cluster head (CH) that is responsible for resource allocation among the femtocells in that cluster. The CH performs sub-channel and power allocation in two steps iteratively, where a low-complexity heuristic is proposed for the sub-channel allocation phase. Numerical results show the performance gains due to clustering in comparison to other related schemes. Also, the proposed correlation clustering scheme offers performance, which is close to that of the optimal clustering, with a lower complexity.", "title": "" }, { "docid": "88afb98c0406d7c711b112fbe2a6f25e", "text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. 
We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledge-intensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8ca0edf4c51b0156c279fcbcb1941d2b", "text": "The good fossil record of trilobite exoskeletal anatomy and ontogeny, coupled with information on their nonbiomineralized tissues, permits analysis of how the trilobite body was organized and developed, and the various evolutionary modifications of such patterning within the group. In several respects trilobite development and form appears comparable with that which may have characterized the ancestor of most or all euarthropods, giving studies of trilobite body organization special relevance in the light of recent advances in the understanding of arthropod evolution and development. The Cambrian diversification of trilobites displayed modifications in the patterning of the trunk region comparable with those seen among the closest relatives of Trilobita. In contrast, the Ordovician diversification of trilobites, although contributing greatly to the overall diversity within the clade, did so within a narrower range of trunk conditions. Trilobite evolution is consistent with an increased premium on effective enrollment and protective strategies, and with an evolutionary trade-off between the flexibility to vary the number of trunk segments and the ability to regionalize portions of the trunk. Cephalon: the anteriormost or head division of the trilobite body composed of a set of conjoined segments whose identity is expressed axially Thorax: the central portion of the trilobite body containing freely articulating trunk segments Pygidium: the posterior tergite of the trilobite exoskeleton containing conjoined segments INTRODUCTION The rich record of the diversity and development of the trilobite exoskeleton (along with information on the geological occurrence, nonbiomineralized tissues, and associated trace fossils of trilobites) provides the best history of any Paleozoic arthropod group. The retention of features that may have characterized the most recent common ancestor of all living arthropods, which have been lost or obscured in most living forms, provides insights into the nature of the evolutionary radiation of the most diverse metazoan phylum alive today. Studies of phylogenetic stem-group taxa, of which Trilobita provide a prominent example, have special significance in the light of renewed interest in arthropod evolution prompted by comparative developmental genetics. 
Although we cannot hope to dissect the molecular controls operative within trilobites, the evolutionary developmental biology (evo-devo) approach permits a fresh perspective from which to examine the contributions that paleontology can make to evolutionary biology, which, in the context of the overall evolutionary history of Trilobita, is the subject of this review. TRILOBITES: BODY PLAN AND ONTOGENY Trilobites were a group of marine arthropods that appeared in the fossil record during the early Cambrian approximately 520 Ma and have not been reported from rocks younger than the close of the Permian, approximately 250 Ma. Roughly 15,000 species have been described to date, and although analysis of the occurrence of trilobite genera suggests that the known record is quite complete (Foote & Sepkoski 1999), many new species and genera continue to be established each year. The known diversity of trilobites results from their strongly biomineralized exoskeletons, made of two layers of low magnesium calcite, which was markedly more durable than the sclerites of most other arthropods. Because the exoskeleton was rich in morphological characters and was the only body structure preserved in the vast majority of specimens, skeletal form has figured prominently in the biological interpretation of trilobites.", "title": "" }, { "docid": "221c59b8ea0460dac3128e81eebd6aca", "text": "STUDY DESIGN\nA prospective self-assessment analysis and evaluation of nutritional and radiographic parameters in a consecutive series of healthy adult volunteers older than 60 years.\n\n\nOBJECTIVES\nTo ascertain the prevalence of adult scoliosis, assess radiographic parameters, and determine if there is a correlation with functional self-assessment in an aged volunteer population.\n\n\nSUMMARY OF BACKGROUND DATA\nThere exists little data studying the prevalence of scoliosis in a volunteer aged population, and correlation between deformity and self-assessment parameters.\n\n\nMETHODS\nThere were 75 subjects in the study. Inclusion criteria were: age > or =60 years, no known history of scoliosis, and no prior spine surgery. Each subject answered a RAND 36-Item Health Survey questionnaire, a full-length anteroposterior standing radiographic assessment of the spine was obtained, and nutritional parameters were analyzed from blood samples. For each subject, radiographic, laboratory, and clinical data were evaluated. The study population was divided into 3 groups based on frontal plane Cobb angulation of the spine. Comparison of the RAND 36-Item Health Surveys data among groups of the volunteer population and with United States population benchmark data (age 65-74 years) was undertaken using an unpaired t test. Any correlation between radiographic, laboratory, and self-assessment data were also investigated.\n\n\nRESULTS\nThe mean age of the patients in this study was 70.5 years (range 60-90). Mean Cobb angle was 17 degrees in the frontal plane. In the study group, 68% of subjects met the definition of scoliosis (Cobb angle >10 degrees). No significant correlation was noted among radiographic parameters and visual analog scale scores, albumin, lymphocytes, or transferrin levels in the study group as a whole. Prevalence of scoliosis was not significantly different between males and females (P > 0.03). The scoliosis prevalence rate of 68% found in this study reveals a rate significantly higher than reported in other studies. These findings most likely reflect the targeted selection of an elderly group. 
Although many patients with adult scoliosis have pain and dysfunction, there appears to be a large group (such as the volunteers in this study) that has no marked physical or social impairment.\n\n\nCONCLUSIONS\nPrevious reports note a prevalence of adult scoliosis up to 32%. In this study, results indicate a scoliosis rate of 68% in a healthy adult population, with an average age of 70.5 years. This study found no significant correlations between adult scoliosis and visual analog scale scores or nutritional status in healthy, elderly volunteers.", "title": "" }, { "docid": "9d2a73c8eac64ed2e1af58a5883229c3", "text": "Tetyana Sydorenko Michigan State University This study examines the effect of input modality (video, audio, and captions, i.e., onscreen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire.", "title": "" }, { "docid": "428ecd77262fc57c5d0d19924a10f02a", "text": "In an identity based encryption scheme, each user is identified by a unique identity string. An attribute based encryption scheme (ABE), in contrast, is a scheme in which each user is identified by a set of attributes, and some function of those attributes is used to determine decryption ability for each ciphertext. Sahai and Waters introduced a single authority attribute encryption scheme and left open the question of whether a scheme could be constructed in which multiple authorities were allowed to distribute attributes [SW05]. We answer this question in", "title": "" }, { "docid": "d1756aa5f0885157bdad130d96350cd3", "text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.", "title": "" }, { "docid": "59f022a6e943f46e7b87213f651065d8", "text": "This paper presents a procedure to design a robust switching strategy for the basic Buck-Boost DC-DC converter utilizing switched systems' theory. The converter dynamic is described in the framework of linear switched systems and then sliding-mode controller is developed to ensure the asymptotic stability of the desired equilibrium point for the switched system with constant external input. The inherent robustness of the sliding-mode switching rule leads to efficient regulation of the output voltage under load variations. Simulation results are presented to demonstrate the outperformance of the proposed method compared to a rival scheme in the literature.", "title": "" }, { "docid": "d49fc093d43fa3cdf40ecfa3f670e165", "text": "As a result of the increase in robots in various fields, the mechanical stability of specific robots has become an important subject of research. This study is concerned with the development of a two-wheeled inverted pendulum robot that can be applied to an intelligent, mobile home robot. 
This kind of robotic mechanism has an innately clumsy motion for stabilizing the robot’s body posture. To analyze and execute this robotic mechanism, we investigated the exact dynamics of the mechanism with the aid of 3-DOF modeling. By using the governing equations of motion, we analyzed important issues in the dynamics of a situation with an inclined surface and also the effect of the turning motion on the stability of the robot. For the experiments, the mechanical robot was constructed with various sensors. Its application to a two-dimensional floor environment was confirmed by experiments on factors such as balancing, rectilinear motion, and spinning motion.", "title": "" }, { "docid": "a9fc5418c0b5789b02dd6638a1b61b5d", "text": "As the homeostasis characteristics of nerve systems show, artificial neural networks are considered to be robust to variation of circuit components and interconnection faults. However, the tolerance of neural networks depends on many factors, such as the fault model, the network size, and the training method. In this study, we analyze the fault tolerance of fixed-point feed-forward deep neural networks for the implementation in CMOS digital VLSI. The circuit errors caused by the interconnection as well as the processing units are considered. In addition to the conventional and dropout training methods, we develop a new technique that randomly disconnects weights during the training to increase the error resiliency. Feed-forward deep neural networks for phoneme recognition are employed for the experiments.", "title": "" }, { "docid": "1bdf1bfe81bf6f947df2254ae0d34227", "text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.", "title": "" }, { "docid": "497e2ed6d39ad6c09210b17ce137c45a", "text": "PURPOSE\nThe purpose of this study is to develop a model of Hospital Information System (HIS) user acceptance focusing on human, technological, and organizational characteristics for supporting government eHealth programs. This model was then tested to see which hospital type in Indonesia would benefit from the model to resolve problems related to HIS user acceptance.\n\n\nMETHOD\nThis study used qualitative and quantitative approaches with case studies at four privately owned hospitals and three government-owned hospitals, which are general hospitals in Indonesia. The respondents involved in this study are low-level and mid-level hospital management officers, doctors, nurses, and administrative staff who work at medical record, inpatient, outpatient, emergency, pharmacy, and information technology units. Data was processed using Structural Equation Modeling (SEM) and AMOS 21.0.\n\n\nRESULTS\nThe study concludes that non-technological factors, such as human characteristics (i.e.
compatibility, information security expectancy, and self-efficacy), and organizational characteristics (i.e. management support, facilitating conditions, and user involvement) which have level of significance of p<0.05, significantly influenced users' opinions of both the ease of use and the benefits of the HIS. This study found that different factors may affect the acceptance of each user in each type of hospital regarding the use of HIS. Finally, this model is best suited for government-owned hospitals.\n\n\nCONCLUSIONS\nBased on the results of this study, hospital management and IT developers should have more understanding on the non-technological factors to better plan for HIS implementation. Support from management is critical to the sustainability of HIS implementation to ensure HIS is easy to use and provides benefits to the users as well as hospitals. Finally, this study could assist hospital management and IT developers, as well as researchers, to understand the obstacles faced by hospitals in implementing HIS.", "title": "" }, { "docid": "2923e6f0760006b6a049a5afa297ca56", "text": "Six years ago in this journal we discussed the work of Arthur T. Murray, who endeavored to explore artificial intelligence using the Forth programming language [1]. His creation, which he called MIND.FORTH, was interesting in its ability to understand English sentences in the form: subject-verb-object. It also had the capacity to learn new things and to form mental associations between recent experiences and older memories. In the intervening years, Mr. Murray has continued to develop his MIND.FORTH: he has translated it into Visual BASIC, PERL and Javascript, he has written a book [2] on the subject, and he maintains a wiki web site where anyone may suggest changes or extensions to his design [3]. MIND.FORTH is necessarily complex and opaque by virtue of its functionality; therefore it may be challenging for a newcomer to grasp. However, the more dedicated student will find much of value in this code. Murray himself has become quite a controversial figure.", "title": "" }, { "docid": "369ed2ef018f9b6a031b58618f262dce", "text": "Natural language processing has increasingly moved from modeling documents and words toward studying the people behind the language. This move to working with data at the user or community level has presented the field with different characteristics of linguistic data. In this paper, we empirically characterize various lexical distributions at different levels of analysis, showing that, while most features are decidedly sparse and non-normal at the message-level (as with traditional NLP), they follow the central limit theorem to become much more Log-normal or even Normal at the userand county-levels. Finally, we demonstrate that modeling lexical features for the correct level of analysis leads to marked improvements in common social scientific prediction tasks.", "title": "" } ]
scidocsrr
2113da56aa1ad681b109a5be053bcd0f
Building phylogenetic trees from molecular data with MEGA.
[ { "docid": "7fe1cea4990acabf7bc3c199d3c071ce", "text": "Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net.", "title": "" } ]
[ { "docid": "ee40d2e4a049f61a2c2b7eee2a2a98ae", "text": "In Analog to digital convertor design converter, high speed comparator influences the overall performance of Flash/Pipeline Analog to Digital Converter (ADC) directly. This paper presents the schematic design of a CMOS comparator with high speed, low noise and low power dissipation. A schematic design of this comparator is given with 0.18μm TSMC Technology and simulated in cadence environment. Simulation results are presented and it shows that this design can work under high speed clock frequency 100MHz. The design has a low offset voltage 280.7mv, low power dissipation 0.37 mw and low noise 6.21μV.", "title": "" }, { "docid": "40e06996a22e1de4220a09e65ac1a04d", "text": "Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.", "title": "" }, { "docid": "00277e4562f707d37844e6214d1f8777", "text": "Video super-resolution (SR) aims at estimating a high-resolution video sequence from a low-resolution (LR) one. Given that the deep learning has been successfully applied to the task of single image SR, which demonstrates the strong capability of neural networks for modeling spatial relation within one single image, the key challenge to conduct video SR is how to efficiently and effectively exploit the temporal dependence among consecutive LR frames other than the spatial relation. However, this remains challenging because the complex motion is difficult to model and can bring detrimental effects if not handled properly. We tackle the problem of learning temporal dynamics from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependence. Inspired by the inception module in GoogLeNet [1], filters of various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated, in order to fully exploit the temporal relation among the consecutive LR frames. Second, we decrease the complexity of motion among neighboring frames using a spatial alignment network that can be end-to-end trained with the temporal adaptive network and has the merit of increasing the robustness to complex motion and the efficiency compared with the competing image alignment methods. We provide a comprehensive evaluation of the temporal adaptation and the spatial alignment modules. 
We show that the temporal adaptive design considerably improves the SR quality over its plain counterparts, and the spatial alignment network is able to attain comparable SR performance with the sophisticated optical flow-based approach, but requires a much less running time. Overall, our proposed model with learned temporal dynamics is shown to achieve the state-of-the-art SR results in terms of not only spatial consistency but also the temporal coherence on public video data sets. More information can be found in http://www.ifp.illinois.edu/~dingliu2/videoSR/.", "title": "" }, { "docid": "3f2bb2a383e34bc4a5cae29b3709d199", "text": "We present Cardinal, a tool for computer-assisted authoring of movie scripts. Cardinal provides a means of viewing a script through a variety of perspectives, for interpretation as well as editing. This is made possible by virtue of intelligent automated analysis of natural language scripts and generating different intermediate representations. Cardinal generates 2-D and 3-D visualizations of the scripted narrative and also presents interactions in a timeline-based view. The visualizations empower the scriptwriter to understand their story from a spatial perspective, and the timeline view provides an overview of the interactions in the story. The user study reveals that users of the system demonstrated confidence and comfort using the system.", "title": "" }, { "docid": "bdefafd4277c1f71e9f4c8d7769e0645", "text": "In many applications, one has to actively select among a set of expensive observations before making an informed decision. For example, in environmental monitoring, we want to select locations to measure in order to most effectively predict spatial phenomena. Often, we want to select observations which are robust against a number of possible objective functions. Examples include minimizing the maximum posterior variance in Gaussian Process regression, robust experimental design, and sensor placement for outbreak detection. In this paper, we present the Submodular Saturation algorithm, a simple and efficient algorithm with strong theoretical approximation guarantees for cases where the possible objective functions exhibit submodularity, an intuitive diminishing returns property. Moreover, we prove that better approximation algorithms do not exist unless NP-complete problems admit efficient algorithms. We show how our algorithm can be extended to handle complex cost functions (incorporating non-unit observation cost or communication and path costs). We also show how the algorithm can be used to near-optimally trade off expected-case (e.g., the Mean Square Prediction Error in Gaussian Process regression) and worst-case (e.g., maximum predictive variance) performance. We show that many important machine learning problems fit our robust submodular observation selection formalism, and provide extensive empirical evaluation on several real-world problems. For Gaussian Process regression, our algorithm compares favorably with state-of-the-art heuristics described in the geostatistics literature, while being simpler, faster and providing theoretical guarantees. For robust experimental design, our algorithm performs favorably compared to SDP-based algorithms. 
", "title": "" }, { "docid": "f66dfbbd6d2043744d32b44dba145ef2", "text": "Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge for traditional collaborative filtering-based recommender systems. The problem becomes more challenging when people travel to a new city where they have no activity history.\n In this paper, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item co-occurrence patterns and exploiting item contents. The online recommendation part automatically combines the learnt interest of the querying user and the local preference of the querying city to produce the top-k recommendations. To speed up this online process, a scalable query processing technique is developed by extending the classic Threshold Algorithm (TA). We evaluate the performance of our recommender system on two large-scale real data sets, DoubanEvent and Foursquare. The results show the superiority of LCARS in recommending spatial items for users, especially when traveling to new cities, in terms of both effectiveness and efficiency.", "title": "" }, { "docid": "7de29b042513aaf1a3b12e71bee6a338", "text": "The widespread use of deception in online sources has motivated the need for methods to automatically profile and identify deceivers. This work explores deception, gender and age detection in short texts using a machine learning approach. First, we collect a new open domain deception dataset also containing demographic data such as gender and age. Second, we extract feature sets including n-grams, shallow and deep syntactic features, semantic features, and syntactic complexity and readability metrics. Third, we build classifiers that aim to predict deception, gender, and age. Our findings show that while deception detection can be performed in short texts even in the absence of a predetermined domain, gender and age prediction in deceptive texts is a challenging task. We further explore the linguistic differences in deceptive content that relate to deceivers gender and age and find evidence that both age and gender play an important role in people’s word choices when fabricating lies.", "title": "" }, { "docid": "1c1d8901dea3474d1a6ecf84a2044bd4", "text": "Zero-shot learning (ZSL) is typically achieved by resorting to a class semantic embedding space to transfer the knowledge from the seen classes to unseen ones. Capturing the common semantic characteristics between the visual modality and the class semantic modality (e.g., attributes or word vector) is a key to the success of ZSL.
In this paper, we propose a novel encoder-decoder approach, namely latent space encoding (LSE), to connect the semantic relations of different modalities. Instead of requiring a projection function to transfer information across different modalities like most previous work, LSE performs the interactions of different modalities via a feature aware latent space, which is learned in an implicit way. Specifically, different modalities are modeled separately but optimized jointly. For each modality, an encoder-decoder framework is performed to learn a feature aware latent space via jointly maximizing the recoverability of the original space from the latent space and the predictability of the latent space from the original space. To relate different modalities together, their features referring to the same concept are enforced to share the same latent codings. In this way, the common semantic characteristics of different modalities are generalized with the latent representations. Another property of the proposed approach is that it is easily extended to more modalities. Extensive experimental results on four benchmark datasets [animal with attribute, Caltech UCSD birds, aPY, and ImageNet] clearly demonstrate the superiority of the proposed approach on several ZSL tasks, including traditional ZSL, generalized ZSL, and zero-shot retrieval.", "title": "" }, { "docid": "35d11265d367c6eeca6f3dfb8ef67a36", "text": "A synthetic aperture radar (SAR) can produce high-resolution two-dimensional images of mapped areas. The SAR comprises a pulsed transmitter, an antenna, and a phase-coherent receiver. The SAR is borne by a constant velocity vehicle such as an aircraft or satellite, with the antenna beam axis oriented obliquely to the velocity vector. The image plane is defined by the velocity vector and antenna beam axis. The image orthogonal coordinates are range and cross range (azimuth). The amplitude and phase of the received signals are collected for the duration of an integration time after which the signal is processed. High range resolution is achieved by the use of wide bandwidth transmitted pulses. High azimuth resolution is achieved by focusing, with a signal processing technique, an extremely long antenna that is synthesized from the coherent phase history. The pulse repetition frequency of the SAR is constrained within bounds established by the geometry and signal ambiguity limits. SAR operation requires relative motion between radar and target. Nominal velocity values are assumed for signal processing and measurable deviations are used for error compensation. Residual uncertainties and high-order derivatives of the velocity which are difficult to compensate may cause image smearing, defocusing, and increased image sidelobes. The SAR transforms the ocean surface into numerous small cells, each with dimensions of range and azimuth resolution. An image of a cell can be produced provided the radar cross section of the cell is sufficiently large and the cell phase history is deterministic. Ocean waves evidently move sufficiently uniformly to produce SAR images which correlate well with optical photographs and visual observations. The relationship between SAR images and oceanic physical features is not completely understood, and more analyses and investigations are desired.", "title": "" }, { "docid": "c03a2f4634458d214d961c3ae9438d1d", "text": "An accurate small-signal model of three-phase photovoltaic (PV) inverters with a high-order grid filter is derived in this paper. 
The proposed model takes into account the influence of both the inverter operating point and the PV panel characteristics on the inverter dynamic response. A sensitivity study of the control loops to variations of the DC voltage, PV panel transconductance, supplied power, and grid inductance is performed using the proposed small-signal model. Analytical and experimental results carried out on a 100-kW PV inverter are presented.", "title": "" }, { "docid": "3c27b3e11ba9924e9c102fc9ba7907b6", "text": "The Visagraph IITM Eye Movement Recording System is an instrument that assesses reading eye movement efficiency and related parameters objectively. It also incorporates automated data analysis. In the standard protocol, the patient reads selections only at the level of their current school grade, or at the level that has been determined by a standardized reading test. In either case, deficient reading eye movements may be the consequence of a language-based reading disability, an oculomotor-based reading inefficiency, or both. We propose an addition to the standard protocol: the patient’s eye movements are recorded a second time with text that is significantly below the grade level of the initial reading. The goal is to determine which factor is primarily contributing to the patient’s reading problem, oculomotor or language. This concept is discussed in the context of two representative cases.", "title": "" }, { "docid": "5c4f20fcde1cc7927d359fd2d79c2ba5", "text": "There are different interpretations of user experience that lead to different scopes of measure. The ISO definition suggests measures of user experience are similar to measures of satisfaction in usability. A survey at Nokia showed that user experience was interpreted in a similar way to usability, but with the addition of anticipation and hedonic responses. CHI 2009 SIG participants identified not just measurement methods, but methods that help understanding of how and why people use products. A distinction can be made between usability methods that have the objective of improving human performance, and user experience methods that have the objective of improving user satisfaction with achieving both pragmatic and hedonic goals. Sometimes the term “user experience” is used to refer to both approaches. DEFINITIONS OF USABILITY AND USER EXPERIENCE There has been a lot of recent debate about the scope of user experience, and how it should be defined [5]. The definition of user experience in ISO FDIS 9241-210 is: A person's perceptions and responses that result from the use and/or anticipated use of a product, system or service. This contrasts with the revised definition of usability in ISO FDIS 9241-210: Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Both these definitions suggest that usability or user experience can be measured during or after use of a product, system or service. A person's “perceptions and responses” in the definition of user experience are similar to the concept of satisfaction in usability. From this perspective, measures of user experience can be encompassed within the 3-component model of usability [1], particularly when the experience is task-related. A weakness of both definitions is that they are not explicitly concerned with time. 
Just as the ISO 9241-11 definition of usability has nothing to say about learnability (where usability changes over time), so the ISO 9241-210 definition of user experience has nothing to say about the way user experience evolves from expectation, through actual interaction, to a total experience that includes reflection on the experience [7]. USER EXPERIENCE NEEDS IN DESIGN AND DEVELOPMENT Ketola and Roto [4] surveyed the needs for information on user experience in Nokia, asking senior staff: Which User Experience information (measurable data gained from our target users directly or indirectly), is useful for your organization? How? 21 needs were identified from 18 respondents who worked in Research, Development, Care, and Quality. Ketola and Roto categorised the responses in terms of the area measured: UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance and new technology. In Table 1, the needs have been recategorized by type of measure. It is clear that most of the measures are common to conventional approaches to user centred design, but three measures are specific to user experience: • The impact of expected UX to purchase decisions • Continuous excitement • Why and when the user experiences frustration? USER EXPERIENCE EVALUATION METHODS At the CHI 2009 SIG: “User Experience Evaluation – Do You Know Which Method to Use?” [6] [8], participants were asked to describe user experience evaluation methods that they used. 36 methods were collected (including the example methods presented by the organizers). These have been categorised in Table 2 by the type of evaluation context, and the type of data collected. There was very little mention of using measures specific to user experience, particularly from industry participants. It seems that industry's interpretation of user experience evaluation methods is much broader, going beyond conventional evaluation to encompass methods that collect information that helps design for user experience. In that sense user experience evaluation seems to be interpreted as user centred design methods for achieving user experience. The differentiating factor from more traditional usability work is thus a wider end goal: not just achieving effectiveness, efficiency and satisfaction, but optimising the whole user experience from expectation through actual interaction to reflection on the experience. DIFFERENCES BETWEEN USABILITY AND USER EXPERIENCE Although there is no fundamental difference between measures of usability and measures of user experience at a particular point in time, the difference in emphasis between task performance and pleasure leads to different concerns during development. In the context of user centred design, typical usability concerns include: Measurement category Measurement type Measure Area measured Anticipation Pre-purchase Anticipated use The impact of expected UX to purchase decisions UX lifecycle Overall usability First use Effectiveness Success of taking the product into use UX lifecycle Product upgrade Effectiveness Success in transferring content from old device to the new device UX lifecycle Expectations vs. reality Satisfaction Has the device met your expectations? Retention Long term experience Satisfaction Are you satisfied with the product quality (after 3 months of use) Retention Hedonic Engagement Pleasure Continuous excitement Retention UX Obstacles Frustration Why and when the user experiences frustration? 
Breakdowns Detailed usability Use of device functions How used What functions are used, how often, why, how, when, where? Use of functions Malfunction Technical problems Amount of “reboots” and severe technical problems experienced. Breakdowns Usability problems Usability problems Top 10 usability problems experienced by the customers. Breakdowns Effect of localization Satisfaction with localisation How do users perceive content in their local language? Localization Latencies Satisfaction with device performance Perceived latencies in key tasks. Device performance Performance Satisfaction with device performance Perceived UX on device performance Device performance Perceived complexity Satisfaction with task complexity Actual and perceived complexity of task accomplishments. Device performance User differences Previous devices Previous user experience Which device you had previously? Retention Differences in user groups User differences How different user groups access features? Use of functions Reliability of product planning User differences Comparison of target users vs. actual buyers? Use of functions Support Customer experience in “touchpoints” Satisfaction with support How does customer think & feel about the interaction in the touch points? Customer care Accuracy of support information Consequences of poor support Does inaccurate support information result in product returns? How? Customer care Innovation feedback User wish list New user ideas & innovations triggered by new experiences New technologies Impact of use Change in user behaviour How the device affects user behaviour How are usage patterns changing when new technologies are introduced New technologies Table 1. Categorisation of usability measures reported in [4] 1. Designing for and evaluating overall effectiveness and efficiency. 2. Designing for and evaluating user comfort and satisfaction. 3. Designing to make the product easy to use, and evaluating the product in order to identify and fix usability problems. 4. When relevant, the temporal aspect leads to a concern for learnability. In the context of user centred design, typical user experience concerns include: 1. Understanding and designing the user’s experience with a product: the way in which people interact with a product over time: what they do and why. 2. Maximising the achievement of the hedonic goals of stimulation, identification and evocation and associated emotional responses. Sometimes the two sets of issues are contrasted as usability and user experience. But some organisations would include both under the common umbrella of user experience. 
Evaluation context Lab tests Lab study with mind maps Paper prototyping Field tests Product / Tool Comparison Competitive evaluation of prototypes in the wild Field observation Long term pilot study Longitudinal comparison Contextual Inquiry Observation/Post Interview Activity Experience Sampling Longitudinal Evaluation Ethnography Field observations Longitudinal Studies Evaluation of groups Evaluating collaborative user experiences, Instrumented product TRUE Tracking Realtime User Experience Domain specific Nintendi Wii Children OPOS Outdoor Play Observation Scheme This-or-that Approaches Evaluating UX jointly with usability Evaluation data User opinion/interview Lab study with mind maps Quick and dirty evaluation Audio narrative Retrospective interview Contextual Inquiry Focus groups evaluation Observation \\ Post Interview Activity Experience Sampling Sensual Evaluation Instrument Contextual Laddering Interview ESM User questionnaire Survey Questions Emocards Experience sampling triggered by events, SAM Magnitude Estimation TRUE Tracking Realtime User Experience Questionnaire (e.g. AttrakDiff) Human responses PURE preverbal user reaction evaluation Psycho-physiological measurements Expert evaluation Expert evaluation Heuristic matrix Perspective-Based Inspection Table2. User experience evaluation methods (CHI 2009 SIG) CONCLUSIONS The scope of user experience The concept of user experience both broadens: • The range of human responses that would be measured to include pleasure. • The circumstances in which they would be measured to include anticipated use and reflection on use. Equally importantly the goal to achieve improved user experience over the whole lifecycle of user involvement with the product leads to increased emphasis on use of methods that help understand what can be done to improve this experience through the whole lifecycle of user involvement. However, notably absent from any of the current surveys or initiative", "title": "" }, { "docid": "40d2b1e5b12a3239aed16cd1691037a2", "text": "Identifiers in programs contain semantic information that might be leveraged to build tools that help programmers write code. This work explores using RNN models to predict Haskell type signatures given the name of the entity being typed. A large corpus of real-world type signatures is gathered from online sources for training and evaluation. In real-world Haskell files, the same type signature is often immediately repeated for a new name. To attempt to take advantage of this repetition, a varying attention mechanism was developed and evaluated. The RNN models explored show some facility at predicting type signature structure from the name, but not the entire signature. The varying attention mechanism provided little gain.", "title": "" }, { "docid": "de7d29c7e11445e836bd04c003443c67", "text": "Logistic regression with `1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale `1-regularized logistic regression problems. Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). 
A variation on the basic method, that uses a preconditioned conjugate gradient method to compute the search step, can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes, on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.", "title": "" }, { "docid": "ccacb7e5d59c4d9fc5d31664260f25f5", "text": "This paper presents a systematic survey on existing literatures and seminal works relevant to the application of ontologies in different aspects of Cloud computing. Our hypothesis is that ontologies along with their reasoning capabilities can have significant impact on improving various aspects of the Cloud computing phenomena. Ontologies can promote intelligent decision support mechanisms for various Cloud based services. They can also provide effective interoperability among the Cloud based systems and resources. This survey can promote a comprehensive understanding on the roles and significance of ontologies within the overall domain of Cloud Computing. Also, this project can potentially form the basis of new research area and possibilities for both ontology and Cloud computing communities.", "title": "" }, { "docid": "105b0c048852de36d075b1db929c1fa4", "text": "OBJECTIVES\nThis study was carried out to investigate the potential of titanium to induce hypersensitivity in patients chronically exposed to titanium-based dental or endoprosthetic implants.\n\n\nMETHODS\nFifty-six patients who had developed clinical symptoms after receiving titanium-based implants were tested in the optimized lymphocyte transformation test MELISA against 10 metals including titanium. Out of 56 patients, 54 were patch-tested with titanium as well as with other metals. The implants were removed in 54 patients (2 declined explantation), and 15 patients were retested in MELISA.\n\n\nRESULTS\nOf the 56 patients tested in MELISA, 21 (37.5%) were positive, 16 (28.6%) ambiguous, and 19 (33.9%) negative to titanium. In the latter group, 11 (57.9%) showed lymphocyte reactivity to other metals, including nickel. All 54 patch-tested patients were negative to titanium. Following removal of the implants, all 54 patients showed remarkable clinical improvement. In the 15 retested patients, this clinical improvement correlated with normalization in MELISA reactivity.\n\n\nCONCLUSION\nThese data clearly demonstrate that titanium can induce clinically-relevant hypersensitivity in a subgroup of patients chronically exposed via dental or endoprosthetic implants.", "title": "" }, { "docid": "bdb2a80b6139e7fd229acf2a1f8c33f1", "text": "This paper aims to determine the maximum frequency achievable in a 25 kW series inverter for induction heating applications and to compare, in hard switching conditions, four fast transistors IGBTs 600A and 1200V modules encapsulated in 62mm from different suppliers. The comparison has been done at 25 and 125ºC in a set-up. Important differences between modules have been obtained depending on the die temperature.", "title": "" }, { "docid": "a2f062482157efb491ca841cc68b7fd3", "text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. 
This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.", "title": "" }, { "docid": "87c973e92ef3affcff4dac0d0183067c", "text": "Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.", "title": "" } ]
scidocsrr
30c474ca277ce4ff36b8c8cb8412065b
CNN Based Transfer Learning for Historical Chinese Character Recognition
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "ebb01a778c668ef7b439875eaa5682ac", "text": "In this paper, we present a large scale off-line handwritten Chinese character database-HCL2000 which will be made public available for the research community. The database contains 3,755 frequently used simplified Chinesecharacters written by 1,000 different subjects. The writers’ information is incorporated in the database to facilitate testing on grouping writers with different background such as age, occupation, gender, and education etc. We investigate some characteristics of writing styles from different groups of writers. We evaluate HCL2000 database using three different algorithms as a baseline. We decide to publish the database along with this paper and make it free for a research purpose.", "title": "" } ]
[ { "docid": "6f371e0a8f0bfd3cd1b5eb4208160818", "text": "A key aim of current research is to create robots that can reliably manipulate objects. However, in many applications, general-purpose object detection or manipulation is not required: the robot would be useful if it could recognize, localize, and manipulate the relatively small set of specific objects most important in that application, but do so with very high reliability. Instance-based approaches can achieve this high reliability but to work well, they require large amounts of data about the objects that are being manipulated. The first contribution of this paper is a system that automates this data collection using a robot. When the robot encounters a novel object, it collects data that enables it to detect the object, estimate its pose, and grasp it. However for some objects, information needed to infer a successful grasp is not visible to the robot’s sensors; for example, a heavy object might need to be grasped in the middle or else it will twist out of the robot’s gripper. The second contribution of this paper is an approach that allows a robot to identify the best grasp point by attempting to pick up the object and tracking its successes and failures. Because the number of grasp points is very large, we formalize grasping as an N-armed bandit problem and define a new algorithm for best arm identification in budgeted bandits that enables the robot to quickly find an arm corresponding to a good grasp without pulling all the arms. We demonstrate that a stock Baxter robot with no additional sensing can autonomously acquire models for a wide variety of objects and use the models to detect, classify, and manipulate the objects. Additionally, we show that our adaptation step significantly improves accuracy over a non-adaptive system, enabling a robot to improve its pick success rate from 55% to 75% on a collection of 30 household objects. Our instance-based approach exploits the robot’s ability to collect its own training data, enabling experience with the object to directly improve the robot’s performance during future interactions.", "title": "" }, { "docid": "0d8c38444954a0003117e7334195cb00", "text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.", "title": "" }, { "docid": "3d310295592775bbe785692d23649c56", "text": "BACKGROUND\nEvidence indicates that sexual assertiveness is one of the important factors affecting sexual satisfaction. According to some studies, traditional gender norms conflict with women's capability in expressing sexual desires. 
This study examined the relationship between gender roles and sexual assertiveness in married women in Mashhad, Iran.\n\n\nMETHODS\nThis cross-sectional study was conducted on 120 women who referred to Mashhad health centers through convenient sampling in 2014-15. Data were collected using Bem Sex Role Inventory (BSRI) and Hulbert index of sexual assertiveness. Data were analyzed using SPSS 16 by Pearson and Spearman's correlation tests and linear Regression Analysis.\n\n\nRESULTS\nThe mean scores of sexual assertiveness was 54.93±13.20. According to the findings, there was non-significant correlation between Femininity and masculinity score with sexual assertiveness (P=0.069 and P=0.080 respectively). Linear regression analysis indicated that among the predictor variables, only Sexual function satisfaction was identified as the sexual assertiveness summary predictor variables (P=0.001).\n\n\nCONCLUSION\nBased on the results, sexual assertiveness in married women does not comply with gender role, but it is related to Sexual function satisfaction. So, counseling psychologists need to consider this variable when designing intervention programs for modifying sexual assertiveness and find other variables that affect sexual assertiveness.", "title": "" }, { "docid": "3c0cc3398139b6a558a56b934d96c641", "text": "Targeted nucleases are powerful tools for mediating genome alteration with high precision. The RNA-guided Cas9 nuclease from the microbial clustered regularly interspaced short palindromic repeats (CRISPR) adaptive immune system can be used to facilitate efficient genome engineering in eukaryotic cells by simply specifying a 20-nt targeting sequence within its guide RNA. Here we describe a set of tools for Cas9-mediated genome editing via nonhomologous end joining (NHEJ) or homology-directed repair (HDR) in mammalian cells, as well as generation of modified cell lines for downstream functional studies. To minimize off-target cleavage, we further describe a double-nicking strategy using the Cas9 nickase mutant with paired guide RNAs. This protocol provides experimentally derived guidelines for the selection of target sites, evaluation of cleavage efficiency and analysis of off-target activity. Beginning with target design, gene modifications can be achieved within as little as 1–2 weeks, and modified clonal cell lines can be derived within 2–3 weeks.", "title": "" }, { "docid": "210ec3c86105f496087c7b012619e1d3", "text": "An ultra compact projection system based on a high brightness OLEd micro display is developed. System design and realization of a prototype are presented. This OLEd pico projector with a volume of about 10 cm3 can be integrated into portable systems like mobile phones or PdAs. The Fraunhofer IPMS developed the high brightness monochrome OLEd micro display. The Fraunhofer IOF desig­ ned the specific projection lens [1] and in tegrated the OLEd and the projection optic to a full functional pico projection system. This article provides a closer look on the technology and its possibilities.", "title": "" }, { "docid": "620bed2762c52ad377ceac677adfebef", "text": "Shape is an important image feature - it is one of the primary low level image features exploited in content-based image retrieval (CBIR). There are generally two types of shape descriptors in the literature: contour-based and region-based. 
In MPEG-7, the curvature scale space descriptor (CSSD) and Zernike moment descriptor (ZMD) have been adopted as the contour-based shape descriptor and region-based shape descriptor, respectively. In this paper, the two shape descriptors are evaluated against other shape descriptors, and the two shape descriptors are also evaluated against each other. Standard methodology is used in the evaluation. Specifically, we use standard databases, large data sets and query sets, commonly used performance measurement and guided principles. A Java-based client-server retrieval framework has been implemented to facilitate the evaluation. Results show that Fourier descriptor (FD) outperforms CSSD, and that CSSD can be replaced by either FD or ZMD.", "title": "" }, { "docid": "147c1fb2c455325ff5e4e4e4659a0040", "text": "A Ka-band 2D flat-profiled Luneburg lens antenna implemented with a glide-symmetric holey structure is presented. The required refractive index for the lens design has been investigated via an analysis of the hole depth and the gap between the two metallic layers constituting the lens. The final unit cell is described and applied to create the complete metasurface Luneburg lens showing that a plane wave is obtained when feeding at an opposite arbitrary point with a discrete source.", "title": "" }, { "docid": "572453e5febc5d45be984d7adb5436c5", "text": "An analysis of several role playing games indicates that player quests share common elements, and that these quests may be abstractly represented using a small expressive language. One benefit of this representation is that it can guide procedural content generation by allowing quests to be generated using this abstraction, and then later converting them into a concrete form within a game’s domain.", "title": "" }, { "docid": "4d4540a59e637f9582a28ed62083bfd6", "text": "Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentencelevel neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis.", "title": "" }, { "docid": "f3cb6de57ba293be0b0833a04086b2ce", "text": "Due to increasing globalization, urban societies are becoming more multicultural. The availability of large-scale digital mobility traces e.g. 
from tweets or checkins provides an opportunity to explore multiculturalism that until recently could only be addressed using survey-based methods. In this paper we examine a basic facet of multiculturalism through the lens of language use across multiple cities in Switzerland. Using data obtained from Foursquare over 330 days, we present a descriptive analysis of linguistic differences and similarities across five urban agglomerations in a multicultural, western European country.", "title": "" }, { "docid": "2a13609a94050c4477d94cf0d89cbdd3", "text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.", "title": "" }, { "docid": "76791240fa26fef46578d600bfd7f665", "text": "PURPOSE\nTo investigate the effectiveness of a multistation proprioceptive exercise program for the prevention of ankle injuries in basketball players using a prospective randomized controlled trial in combination with biomechanical tests of neuromuscular performance.\n\n\nMETHODS\nA total of 232 players participated in the study and were randomly assigned to a training or control group following the CONSORT statement. The training group performed a multistation proprioceptive exercise program, and the control group continued with their normal workout routines. During one competitive basketball season, the number of ankle injuries was counted and related to the number of sports participation sessions using logistic regression. Additional biomechanical pre–post tests (angle reproduction and postural sway) were performed in both groups to investigate the effects on neuromuscular performance.\n\n\nRESULTS\nIn the control group, 21 injuries occurred, whereas in the training group, 7 injuries occurred. The risk for sustaining an ankle injury was significantly reduced in the training group by approximately 65%. [corrected] The corresponding number needed to treat was 7. Additional biomechanical tests revealed significant improvements in joint position sense and single-limb stance in the training group.\n\n\nCONCLUSIONS\nThe multistation proprioceptive exercise program effectively prevented ankle injuries in basketball players. Analysis of number needed to treat clearly showed the relatively low prevention effort that is necessary to avoid an ankle injury. Additional biomechanical tests confirmed the neuromuscular effect and confirmed a relationship between injury prevention and altered neuromuscular performance. 
With this knowledge, proprioceptive training may be optimized to specifically address the demands in various athletic activities.", "title": "" }, { "docid": "343dd7c6bb6751eb0368da729c2b704a", "text": "The coupling of computer science and theoretical bases such as nonlinear dynamics and chaos theory allows the creation of 'intelligent' agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on individual basis and not as average trends. These tools can offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable in the traditional 'cause and effect' philosophy.", "title": "" }, { "docid": "accebc4ebc062f9676977b375e0c4f32", "text": "Microtask crowdsourcing organizes complex work into workflows, decomposing large tasks into small, relatively independent microtasks. Applied to software development, this model might increase participation in open source software development by lowering the barriers to contribution and dramatically decrease time to market by increasing the parallelism in development work. To explore this idea, we have developed an approach to decomposing programming work into microtasks. Work is coordinated through tracking changes to a graph of artifacts, generating appropriate microtasks and propagating change notifications to artifacts with dependencies. We have implemented our approach in CrowdCode, a cloud IDE for crowd development. To evaluate the feasibility of microtask programming, we performed a small study and found that a small crowd of 12 workers was able to successfully write 480 lines of code and 61 unit tests in 14.25 person-hours of time.", "title": "" }, { "docid": "ca659ea60b5d7c214460b32fe5aa3837", "text": "Address Decoder is an important digital block in SRAM which takes up to half of the total chip access time and significant part of the total SRAM power in normal read/write cycle. To design address decoder need to consider two objectives, first choosing the optimal circuit technique and second sizing of their transistors. Novel address decoder circuit is presented and analysed in this paper. Address decoder using NAND-NOR alternate stages with predecoder and replica inverter chain circuit is proposed and compared with traditional and universal block architecture, using 90nm CMOS technology. Delay and power dissipation in proposed decoder is 60.49% and 52.54% of traditional and 82.35% and 73.80% of universal block architecture respectively.", "title": "" }, { "docid": "47ef46ef69a23e393d8503154f110a81", "text": "Question answering (Q&A) communities have been gaining popularity in the past few years.
The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.", "title": "" }, { "docid": "19d8b6ff70581307e0a00c03b059964f", "text": "We propose a novel approach for analysing time series using complex network theory. We identify the recurrence matrix (calculated from time series) with the adjacency matrix of a complex network and apply measures for the characterisation of complex networks to this recurrence matrix. By using the logistic map, we illustrate the potential of these complex network measures for the detection of dynamical transitions. Finally, we apply the proposed approach to a marine palaeo-climate record and identify the subtle changes to the climate regime.", "title": "" }, { "docid": "0e02a468a65909b93d3876f30a247ab1", "text": "Implant therapy can lead to peri-implantitis, and none of the methods used to treat this inflammatory response have been predictably effective. It is nearly impossible to treat infected surfaces such as TiUnite (a titanium oxide layer) that promote osteoinduction, but finding an effective way to do so is essential. Experiments were conducted to determine the optimum irradiation power for stripping away the contaminated titanium oxide layer with Er:YAG laser irradiation, the degree of implant heating as a result of Er:YAG laser irradiation, and whether osseointegration was possible after Er:YAG laser microexplosions were used to strip a layer from the surface of implants placed in beagle dogs. The Er:YAG laser was effective at removing an even layer of titanium oxide, and the use of water spray limited heating of the irradiated implant, thus protecting the surrounding bone tissue from heat damage.", "title": "" }, { "docid": "745a89e24f439b6f31cdadea25386b17", "text": "Developmental imaging studies show that cortical grey matter decreases in volume during childhood and adolescence. 
However, considerably less research has addressed the development of subcortical regions (caudate, putamen, pallidum, accumbens, thalamus, amygdala, hippocampus and the cerebellar cortex), in particular not in longitudinal designs. We used the automatic labeling procedure in FreeSurfer to estimate the developmental trajectories of the volume of these subcortical structures in 147 participants (age 7.0-24.3years old, 94 males; 53 females) of whom 53 participants were scanned twice or more. A total of 223 magnetic resonance imaging (MRI) scans (acquired at 1.5-T) were analyzed. Substantial diversity in the developmental trajectories was observed between the different subcortical gray matter structures: the volume of caudate, putamen and nucleus accumbens decreased with age, whereas the volume of hippocampus, amygdala, pallidum and cerebellum showed an inverted U-shaped developmental trajectory. The thalamus showed an initial small increase in volume followed by a slight decrease. All structures had a larger volume in males than females over the whole age range, except for the cerebellum that had a sexually dimorphic developmental trajectory. Thus, subcortical structures appear to not yet be fully developed in childhood, similar to the cerebral cortex, and continue to show maturational changes into adolescence. In addition, there is substantial heterogeneity between the developmental trajectories of these structures.", "title": "" }, { "docid": "6aee20acd54b5d6f2399106075c9fee1", "text": "BACKGROUND\nThe aim of this study was to compare the effectiveness of the ampicillin plus ceftriaxone (AC) and ampicillin plus gentamicin (AG) combinations for treating Enterococcus faecalis infective endocarditis (EFIE).\n\n\nMETHODS\nAn observational, nonrandomized, comparative multicenter cohort study was conducted at 17 Spanish and 1 Italian hospitals. Consecutive adult patients diagnosed of EFIE were included. Outcome measurements were death during treatment and at 3 months of follow-up, adverse events requiring treatment withdrawal, treatment failure requiring a change of antimicrobials, and relapse.\n\n\nRESULTS\nA larger percentage of AC-treated patients (n = 159) had previous chronic renal failure than AG-treated patients (n = 87) (33% vs 16%, P = .004), and AC patients had a higher incidence of cancer (18% vs 7%, P = .015), transplantation (6% vs 0%, P = .040), and healthcare-acquired infection (59% vs 40%, P = .006). Between AC and AG-treated EFIE patients, there were no differences in mortality while on antimicrobial treatment (22% vs 21%, P = .81) or at 3-month follow-up (8% vs 7%, P = .72), in treatment failure requiring a change in antimicrobials (1% vs 2%, P = .54), or in relapses (3% vs 4%, P = .67). However, interruption of antibiotic treatment due to adverse events was much more frequent in AG-treated patients than in those receiving AC (25% vs 1%, P < .001), mainly due to new renal failure (≥25% increase in baseline creatinine concentration; 23% vs 0%, P < .001).\n\n\nCONCLUSIONS\nAC appears as effective as AG for treating EFIE patients and can be used with virtually no risk of renal failure and regardless of the high-level aminoglycoside resistance status of E. faecalis.", "title": "" } ]
scidocsrr
3deab786cc1b2e691452a35b3cf149c5
Spam Deobfuscation using a Hidden Markov Model
[ { "docid": "5301c9ab75519143c5657b9fa780cfcb", "text": "Although discriminatively trained classifiers are usually more accurate when labeled training data is abundant, previous work has shown that when training data is limited, generative classifiers can out-perform them. This paper describes a hybrid model in which a high-dimensional subset of the parameters are trained to maximize generative likelihood, and another, small, subset of parameters are discriminatively trained to maximize conditional likelihood. We give a sample complexity bound showing that in order to fit the discriminative parameters well, the number of training examples required depends only on the logarithm of the number of feature occurrences and feature set size. Experimental results show that hybrid models can provide lower test error and can produce better accuracy/coverage curves than either their purely generative or purely discriminative counterparts. We also discuss several advantages of hybrid models, and advocate further work in this area.", "title": "" } ]
[ { "docid": "7ff2b5900aa1b7ca841f01985ad28fb9", "text": "Article history: Received 4 December 2016 Received in revised form 29 October 2017 Accepted 21 November 2017 Available online 2 December 2017 This paper presents a longitudinal interpretive case study of a UK bank's efforts to combat Money Laundering (ML) by expanding the scope of its profiling ofML behaviour. The concept of structural coupling, taken from systems theory, is used to reflect on the bank's approach to theorize about the nature of ML-profiling. The paper offers a practical contribution by laying a path towards the improvement of money laundering detection in an organizational context while a set of evaluation measures is extracted from the case study. Generalizing from the case of the bank, the paper presents a systems-oriented conceptual framework for ML monitoring. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7c89df8980ee72aa2aa2d094f97a0cc8", "text": "This paper presents a power factor correction (PFC)-based bridgeless canonical switching cell (BL-CSC) converter-fed brushless dc (BLDC) motor drive. The proposed BL-CSC converter operating in a discontinuous inductor current mode is used to achieve a unity power factor at the ac mains using a single voltage sensor. The speed of the BLDC motor is controlled by varying the dc bus voltage of the voltage source inverter (VSI) feeding the BLDC motor via a PFC converter. Therefore, the BLDC motor is electronically commutated such that the VSI operates in fundamental frequency switching for reduced switching losses. Moreover, the bridgeless configuration of the CSC converter offers low conduction losses due to partial elimination of diode bridge rectifier at the front end. The proposed configuration shows a considerable increase in efficiency as compared with the conventional scheme. The performance of the proposed drive is validated through experimental results obtained on a developed prototype. Improved power quality is achieved at the ac mains for a wide range of control speeds and supply voltages. The obtained power quality indices are within the acceptable limits of IEC 61000-3-2.", "title": "" }, { "docid": "a2c9c975788253957e6bbebc94eb5a4b", "text": "The implementation of Substrate Integrated Waveguide (SIW) structures in paper-based inkjet-printed technology is presented in this paper for the first time. SIW interconnects and components have been fabricated and tested on a multilayer paper substrate, which permits to implement low-cost and eco-friendly structures. A broadband and compact ridge substrate integrated slab waveguide covering the entire UWB frequency range is proposed and preliminarily verified. SIW structures appear particularly suitable for implementation on paper, due to the possibility to easily realize multilayered topologies and conformal geometries.", "title": "" }, { "docid": "e48e1a9b9a14e0ef3b2bcc78058089cc", "text": "Reading requires the orchestration of visual, attentional, language-related, and oculomotor processing constraints. This study replicates previous effects of frequency, predictability, and length of fixated words on fixation durations in natural reading and demonstrates new effects of these variables related to 144 sentences. Such evidence for distributed processing of words across fixation durations challenges psycholinguistic immediacy-of-processing and eye-mind assumptions. Most of the time the mind processes several words in parallel at different perceptual and cognitive levels. 
Eye movements can help to unravel these processes.", "title": "" }, { "docid": "cb147151678698565840e4979fa4cb41", "text": "This paper presents a comparative evaluation of silicon carbide power devices for the domestic induction heating (IH) application, which currently has a major industrial, economic and social impact. The compared technologies include MOSFETs, normally on and normally off JFETs, as well as BJTs. These devices have been compared according to different figure-of-merit evaluating conduction and switching performance, efficiency, impact of temperature, as well as other driving and protection issues. To perform the proposed evaluation, a versatile test platform has been developed. As a result of this study, several differential features are identified and discussed, taking into account the pursued induction heating application.", "title": "" }, { "docid": "a059b4908b2ffde33fcedfad999e9f6e", "text": "The use of a hull-climbing robot is proposed to assist hull surveyors in their inspection tasks, reducing cost and risk to personnel. A novel multisegmented hull-climbing robot with magnetic wheels is introduced where multiple two-wheeled modular segments are adjoined by flexible linkages. Compared to traditional rigid-body tracked magnetic robots that tend to detach easily in the presence of surface discontinuities, the segmented design adapts to such discontinuities with improved adhesion to the ferrous surface. Coordinated mobility is achieved with the use of a motion-control algorithm that estimates robot pose through position sensors located in each segment and linkage in order to optimally command each of the drive motors of the system. Self-powered segments and an onboard radio allow for wireless transmission of video and control data between the robot and its operator control unit. The modular-design approach of the system is highly suited for upgrading or adding segments as needed. For example, enhancing the system with a segment that supports an ultrasonic measurement device used to measure hull-thickness of corroded sites can help minimize the number of areas that a surveyor must personally visit for further inspection and repair. Future development efforts may lead to the design of autonomy segments that accept high-level commands from the operator and automatically execute wide-area inspections. It is also foreseeable that with several multi-segmented robots, a coordinated inspection task can take place in parallel, significantly reducing inspection time and cost. *aaron.burmeister@navy.mil The focus of this paper is on the development efforts of the prototype system that has taken place since 2012. Specifically, the tradeoffs of the magnetic-wheel and linkage designs are discussed and the motion-control algorithm presented. Overall system-performance results obtained from various tests and demonstrations are also reported.", "title": "" }, { "docid": "f074965ee3a1d6122f1e68f49fd11d84", "text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. 
An improved ID3 Algorithm with enhanced feature selection method and attribute-importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.", "title": "" }, { "docid": "3b7c0a822c5937ac9e4d702bb23e3432", "text": "In a video surveillance system with static cameras, object segmentation often fails when part of the object has similar color with the background, resulting in poor performance of the subsequent object tracking. Multiple kernels have been utilized in object tracking to deal with occlusion, but the performance still highly depends on segmentation. This paper presents an innovative system, named Multiple-kernel Adaptive Segmentation and Tracking (MAST), which dynamically controls the decision thresholds of background subtraction and shadow removal around the adaptive kernel regions based on the preliminary tracking results. Then the objects are tracked for the second time according to the adaptively segmented foreground. Evaluations of both segmentation and tracking on benchmark datasets and our own recorded video sequences demonstrate that the proposed method can successfully track objects in similar-color background and/or shadow areas with favorable segmentation performance.", "title": "" }, { "docid": "9154228a5f1602e2fbebcac15959bd21", "text": "Evaluation metric plays a critical role in achieving the optimal classifier during the classification training. Thus, a selection of suitable evaluation metric is an important key for discriminating and obtaining the optimal classifier. This paper systematically reviewed the related evaluation metrics that are specifically designed as a discriminator for optimizing generative classifier. Generally, many generative classifiers employ accuracy as a measure to discriminate the optimal solution during the classification training. However, the accuracy has several weaknesses which are less distinctiveness, less discriminability, less informativeness and bias to majority class data. This paper also briefly discusses other metrics that are specifically designed for discriminating the optimal solution. The shortcomings of these alternative metrics are also discussed. Finally, this paper suggests five important aspects that must be taken into consideration in constructing a new discriminator metric.", "title": "" }, { "docid": "4d04debb13948f73e959929dbf82e139", "text": "DynaMIT is a simulation-based real-time system designed to estimate the current state of a transportation network, predict future traffic conditions, and provide consistent and unbiased information to travelers. To perform these tasks, efficient simulators have been designed to explicitly capture the interactions between transportation demand and supply. The demand reflects both the OD flow patterns and the combination of all the individual decisions of travelers while the supply reflects the transportation network in terms of infrastructure, traffic flow and traffic control. This paper describes the design and specification of these simulators, and discusses their interactions. Massachusetts Institute of Technology, Dpt of Civil and Environmental Engineering, Cambridge, Ma. Email: mba@mit.edu Ecole Polytechnique Fédérale de Lausanne, Dpt.
of Mathematics, CH-1015 Lausanne, Switzerland. Email: michel.bierlaire@epfl.ch Volpe National Transportation Systems Center, Dpt of Transportation, Cambridge, Ma. Email: koutsopoulos@volpe.dot.gov The Ohio State University, Columbus, Oh. Email: mishalani.1@osu.edu", "title": "" }, { "docid": "cf6eb57b4740d3e14a73fd6197769bf5", "text": "Microwave Materials such as Rogers RO3003 are subject to process-related fluctuations in terms of the relative permittivity. The behavior of high frequency circuits like patch-antenna arrays and their distribution networks is dependent on the effective wavelength. Therefore, fluctuations of the relative permittivity will influence the resonance frequency and antenna beam direction. This paper presents a grounded coplanar wave-guide based sensor, which can measure the relative permittivity at 77 GHz, as well as at other resonance frequencies, by applying it on top of the manufactured depaneling. In addition, the sensor is robust against floating ground metallizations on inner printed circuit board layers, which are typically distributed over the entire surface below antennas.", "title": "" }, { "docid": "b4eef9e3a95a00cefd3a947637f72329", "text": "Plants are considered as one of the greatest assets in the field of Indian Science of Medicine called Ayurveda. Some plants have its medicinal values apart from serving as the source of food. The innovation in the allopathic medicines has degraded the significance of these therapeutic plants. People failed to have their medications at their door step instead went behind the fastest cure unaware of its side effects. One among the reasons is the lack of knowledge about identifying medicinal plants among the normal ones. So, a Vision based approach is being employed to create an automated system which identifies the plants and provides its medicinal values thus helping even a common man to be aware of the medicinal plants around them. This paper discusses about the formation of the feature set which is the important step in recognizing any plant species.", "title": "" }, { "docid": "68278896a61e13705e5ffb113487cceb", "text": "Universal Language Model for Fine-tuning [6] (ULMFiT) is one of the first NLP methods for efficient inductive transfer learning. Unsupervised pretraining results in improvements on many NLP tasks for English. In this paper, we describe a new method that uses subword tokenization to adapt ULMFiT to languages with high inflection. Our approach results in a new state-of-the-art for the Polish language, taking first place in Task 3 of PolEval’18. After further training, our final model outperformed the second best model by 35%. We have open-sourced our pretrained models and code.", "title": "" }, { "docid": "5db336088113fbfdf93be6e057f97748", "text": "Unmanned Aerial Vehicles (UAVs) are an exciting new remote sensing tool capable of acquiring high resolution spatial data. Remote sensing with UAVs has the potential to provide imagery at an unprecedented spatial and temporal resolution. The small footprint of UAV imagery, however, makes it necessary to develop automated techniques to geometrically rectify and mosaic the imagery such that larger areas can be monitored. In this paper, we present a technique for geometric correction and mosaicking of UAV photography using feature matching and Structure from Motion (SfM) photogrammetric techniques. Images are processed to create three dimensional point clouds, initially in an arbitrary model space.
The point clouds are transformed into a real-world coordinate system using either a direct georeferencing technique that uses estimated camera positions or via a Ground Control Point (GCP) technique that uses automatically identified GCPs within the point cloud. The point cloud is then used to generate a Digital Terrain Model (DTM) required for rectification of the images. Subsequent georeferenced images are then joined together to form a mosaic of the study area. The absolute spatial accuracy of the direct technique was found to be 65–120 cm whilst the GCP technique achieves an accuracy of approximately 10–15 cm.", "title": "" }, { "docid": "615ba820d06c9e5f7dd3e9130bf064bd", "text": "Recommender system has become an indispensable component in many e-commerce sites. One major challenge that largely remains open is the coldstart problem, which can be viewed as an ice barrier that keeps the cold-start users/items from the warm ones. In this paper, we propose a novel rating comparison strategy (RAPARE) to break this ice barrier. The center-piece of our RAPARE is to provide a fine-grained calibration on the latent profiles of cold-start users/items by exploring the differences between cold-start and warm users/items. We instantiate our RAPARE strategy on the prevalent method in recommender system, i.e., the matrix factorization based collaborative filtering. Experimental evaluations on two real data sets validate the superiority of our approach over the existing methods in cold-start scenarios.", "title": "" }, { "docid": "b2e7fc135ec3afa8e38f87a3c47fd5d9", "text": "Advances in 3D graphics technology have accelerated the con struction of dynamic 3D environments. Despite their promise for scientific and educational applications, much of this potential has gone unrealized because runtime c a era control software lacks user-sensitivity. Current environments rely on sequences of viewpoints that directly require the user’s control or are based primarily on actions and geom etry of the scene. Because of the complexity of rapidly changing environments, users typ ically cannot manipulate objects in environments while simultaneously issuing camera contr ol commands. To address these issues, we have developed UC AM , a realtime camera planner that employs cinematographic user models to render customized visualizations of dynamic 3D environments. After interviewing users to determine their preferred directorial sty e and pacing, UCAM examines the resulting cinematographic user model to plan camera sequen ces whose shot vantage points and cutting rates are tailored to the user in realtime. Evalu ations of UCAM in a dynamic 3D testbed are encouraging.", "title": "" }, { "docid": "bc166a431e35bc9b11801bcf1ff6c9fd", "text": "Outsourced storage has become more and more practical in recent years. Users can now store large amounts of data in multiple servers at a relatively low price. An important issue for outsourced storage systems is to design an efficient scheme to assure users that their data stored at remote servers has not been tampered with. This paper presents a general method and a practical prototype application for verifying the integrity of files in an untrusted network storage service. The verification process is managed by an application running in a trusted environment (typically on the client) that stores just one cryptographic hash value of constant size, corresponding to the \"digest\" of an authenticated data structure. 
The proposed integrity verification service can work with any storage service since it is transparent to the storage technology used. Experimental results show that our integrity verification method is efficient and practical for network storage systems.", "title": "" }, { "docid": "5353d9e123261783a5bcb02adaac09b2", "text": "This work presents a new digital control strategy of a three-phase PWM inverter for uninterruptible power supplies (UPS) systems. To achieve a fast transient response, a good voltage regulation, nearly zero steady state inverter output voltage error, and low total harmonic distortion (THD), the proposed control method consists of two discrete-time feedback controllers: a discrete-time optimal + sliding-mode voltage controller in outer loop and a discrete-time optimal current controller in inner loop. To prove the effectiveness of the proposed technique, various simulation results using Matlab/Simulink are shown under both linear and nonlinear loads.", "title": "" }, { "docid": "2a7983e91cd674d95524622e82c4ded7", "text": "• FC (fully-connected) layer takes the pooling results, produces features FROI, Fcontext, Fframe, and feeds them into two streams, inspired by [BV16]. • Classification stream produces a matrix of classification scores S = [FCcls(FROI1); . . . ;FCcls(FROIK)] ∈ RK×C • Localization stream implements the proposed context-aware guidance that uses FROIk, Fcontextk, Fframek to produce a localization score matrix L ∈ RK×C.", "title": "" }, { "docid": "a9ff593d6eea9f28aa1d2b41efddea9b", "text": "A central task in the study of evolution is the reconstruction of a phylogenetic tree from sequences of current-day taxa. A well supported approach to tree reconstruction performs maximum likelihood (ML) analysis. Unfortunately, searching for the maximum likelihood phylogenetic tree is computationally expensive. In this paper, we describe a new algorithm that uses Structural-EM for learning maximum likelihood trees. This algorithm is similar to the standard EM method for estimating branch lengths, except that during iterations of this algorithms the topology is improved as well as the branch length. The algorithm performs iterations of two steps. In the E-Step, we use the current tree topology and branch lengths to compute expected sufficient statistics, which summarize the data. In the M-Step, we search for a topology that maximizes the likelihood with respect to these expected sufficient statistics. As we show, searching for better topologies inside the M-step can be done efficiently, as opposed to standard search over topologies. We prove that each iteration of this procedure increases the likelihood of the topology, and thus the procedure must converge. We evaluate our new algorithm on both synthetic and real sequence data, and show that it is both dramatically faster and finds more plausible trees than standard search for maximum likelihood phylogenies.", "title": "" } ]
scidocsrr
e554268203677574146b4f38da03bc34
Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning
[ { "docid": "c8b382852f445c6f05c905371330dd07", "text": "Novelty and surprise play significant roles in animal behavior and in attempts to understand the neural mechanisms underlying it. They also play important roles in technology, where detecting observations that are novel or surprising is central to many applications, such as medical diagnosis, text processing, surveillance, and security. Theories of motivation, particularly of intrinsic motivation, place novelty and surprise among the primary factors that arouse interest, motivate exploratory or avoidance behavior, and drive learning. In many of these studies, novelty and surprise are not distinguished from one another: the words are used more-or-less interchangeably. However, while undeniably closely related, novelty and surprise are very different. The purpose of this article is first to highlight the differences between novelty and surprise and to discuss how they are related by presenting an extensive review of mathematical and computational proposals related to them, and then to explore the implications of this for understanding behavioral and neuroscience data. We argue that opportunities for improved understanding of behavior and its neural basis are likely being missed by failing to distinguish between novelty and surprise.", "title": "" }, { "docid": "5ed1a40b933e44f0a7f7240bbca24ab4", "text": "We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states and actions, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off.", "title": "" } ]
[ { "docid": "e6ff5af0a9d6105a60771a2c447fab5e", "text": "Object detection and classification in 3D is a key task in Automated Driving (AD). LiDAR sensors are employed to provide the 3D point cloud reconstruction of the surrounding environment, while the task of 3D object bounding box detection in real time remains a strong algorithmic challenge. In this paper, we build on the success of the oneshot regression meta-architecture in the 2D perspective image space and extend it to generate oriented 3D object bounding boxes from LiDAR point cloud. Our main contribution is in extending the loss function of YOLO v2 to include the yaw angle, the 3D box center in Cartesian coordinates and the height of the box as a direct regression problem. This formulation enables real-time performance, which is essential for automated driving. Our results are showing promising figures on KITTI benchmark, achieving real-time performance (40 fps) on Titan X GPU.", "title": "" }, { "docid": "1afc103a3878d859ec15929433f49077", "text": "Large-scale deep neural networks (DNNs) are both compute and memory intensive. As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy. For DNNs, the model size is an important factor affecting performance, scalability and energy efficiency. Weight pruning achieves good compression ratios but suffers from three drawbacks: 1) the irregular network structure after pruning, which affects performance and throughput; 2) the increased training complexity; and 3) the lack of rigirous guarantee of compression ratio and inference accuracy.\n To overcome these limitations, this paper proposes CirCNN, a principled approach to represent weights and process neural networks using block-circulant matrices. CirCNN utilizes the Fast Fourier Transform (FFT)-based fast multiplication, simultaneously reducing the computational complexity (both in inference and training) from O(n2) to O(n log n) and the storage complexity from O(n2) to O(n), with negligible accuracy loss. Compared to other approaches, CirCNN is distinct due to its mathematical rigor: the DNNs based on CirCNN can converge to the same \"effectiveness\" as DNNs without compression. We propose the CirCNN architecture, a universal DNN inference engine that can be implemented in various hardware/software platforms with configurable network architecture (e.g., layer type, size, scales, etc.). In CirCNN architecture: 1) Due to the recursive property, FFT can be used as the key computing kernel, which ensures universal and small-footprint implementations. 2) The compressed but regular network structure avoids the pitfalls of the network pruning and facilitates high performance and throughput with highly pipelined and parallel design. To demonstrate the performance and energy efficiency, we test CirCNN in FPGA, ASIC and embedded processors. Our results show that CirCNN architecture achieves very high energy efficiency and performance with a small hardware footprint. Based on the FPGA implementation and ASIC synthesis results, CirCNN achieves 6 - 102X energy efficiency improvements compared with the best state-of-the-art results.", "title": "" }, { "docid": "be40c84b4bc9d5d6cd21e1e83b9c35ed", "text": "A short overview is given of many recent results in algorithmic graph theory that deal with the notions treewidth, and pathwidth. 
We discuss algorithms that find tree-decompositions, algorithms that use tree-decompositions to solve hard problems efficiently, graph minor theory, and some applications. The paper contains an extensive bibliography.", "title": "" }, { "docid": "ecad37ad1097369fd03f0decff2d23dc", "text": "The unique musculoskeletal structure of the human hand brings in wider dexterous capabilities to grasp and manipulate a repertoire of objects than the non-human primates. It has been widely accepted that the orientation and the position of the thumb plays an important role in this characteristic behavior. There have been numerous attempts to develop anthropomorphic robotic hands with varying levels of success. Nevertheless, manipulation ability in those hands is to be ameliorated even though they can grasp objects successfully. An appropriate model of the thumb is important to manipulate the objects against the fingers and to maintain the stability. Modeling these complex interactions about the mechanical axes of the joints and how to incorporate these joints in robotic thumbs is a challenging task. This article presents a review of the biomechanics of the human thumb and the robotic thumb designs to identify opportunities for future anthropomorphic robotic hands.", "title": "" }, { "docid": "df6b1cb3efbababa8aa0a2c04b999cf0", "text": "A cognitive radio wireless sensor network is one of the candidate areas where cognitive techniques can be used for opportunistic spectrum access. Research in this area is still in its infancy, but it is progressing rapidly. The aim of this study is to classify the existing literature of this fast emerging application area of cognitive radio wireless sensor networks, highlight the key research that has already been undertaken, and indicate open problems. This paper describes the advantages of cognitive radio wireless sensor networks, the difference between ad hoc cognitive radio networks, wireless sensor networks, and cognitive radio wireless sensor networks, potential application areas of cognitive radio wireless sensor networks, challenges and research trend in cognitive radio wireless sensor networks. The sensing schemes suited for cognitive radio wireless sensor networks scenarios are discussed with an emphasis on cooperation and spectrum access methods that ensure the availability of the required QoS. Finally, this paper lists several open research challenges aimed at drawing the attention of the readers toward the important issues that need to be addressed before the vision of completely autonomous cognitive radio wireless sensor networks can be realized.", "title": "" }, { "docid": "5284157f83c2fe578746b9ae3f6ad429", "text": "SLOWbot is a research project conducted via a collaboration between iaso health and FBK (Fondazione Bruno Kessler). There are now thousands of available healthy aging apps, but most don't deliver on their promise to support a healthy aging process in people that need it the most. The neediest include the over-fifties age group, particularly those wanting to prevent the diseases of aging or whom already have a chronic disease. Even the motivated \"quantified selfers\" discard their health apps after only a few months. Our research aims to identify new ways to ensure adherence to a healthy lifestyle program tailored for an over fifties audience which is delivered by a chatbot.
The research covers the participant onboarding process and examines barriers and issues with gathering predictive data that might inform future improved uptake and adherence as well as an increase in health literacy by the participants. The healthy lifestyle program will ultimately be delivered by our \"SLOWbot\" which guides the participant to make informed and enhanced health decision making, specifically around food choices (a \"longevity eating plan\").", "title": "" }, { "docid": "ee045772d55000b6f2d3f7469a4161b1", "text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). 
More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses", "title": "" }, { "docid": "da0d17860604269378c8649e7353ba83", "text": "Responsive, implantable stimulation devices to treat epilepsy are now in clinical trials. New evidence suggests that these devices may be more effective when they deliver therapy before seizure onset. Despite years of effort, prospective seizure prediction, which could improve device performance, remains elusive. In large part, this is explained by lack of agreement on a statistical framework for modeling seizure generation and a method for validating algorithm performance. We present a novel stochastic framework based on a three-state hidden Markov model (HMM) (representing interictal, preictal, and seizure states) with the feature that periods of increased seizure probability can transition back to the interictal state. This notion reflects clinical experience and may enhance interpretation of published seizure prediction studies. Our model accommodates clipped EEG segments and formalizes intuitive notions regarding statistical validation. We derive equations for type I and type II errors as a function of the number of seizures, duration of interictal data, and prediction horizon length and we demonstrate the model's utility with a novel seizure detection algorithm that appeared to predicted seizure onset. We propose this framework as a vital tool for designing and validating prediction algorithms and for facilitating collaborative research in this area.", "title": "" }, { "docid": "11f47bb575a6e50c3d3ccef0e75ff3b9", "text": "Corporate social responsibility is incorporated into strategic management at the enterprise strategy level. This paper delineates the domain of enterprise strategy by focusing on how well a firm's social performance matches its competences and stakeholders rather than on the \"quantity\" of a firm's social responsibility. Enterprise strategy is defined and a classification of enterprise strategies is set forth.", "title": "" }, { "docid": "43184dfe77050618402900bc309203d5", "text": "A prototype of Air Gap RLSA has been designed and simulated using hybrid air gap and FR4 dielectric material. The 28% wide bandwidth has been recorded through this approach. A 12.35dBi directive gain also recorded from the simulation. The 13.3 degree beamwidth of the radiation pattern is sufficient for high directional application. Since the proposed application was for Point to Point Link, this study concluded the Air Gap RLSA is a new candidate for this application.", "title": "" }, { "docid": "e9c383d71839547d41829348bebaabcf", "text": "Receiver operating characteristic (ROC) analysis, which yields indices of accuracy such as the area under the curve (AUC), is increasingly being used to evaluate the performances of diagnostic tests that produce results on continuous scales. Both parametric and nonparametric ROC approaches are available to assess the discriminant capacity of such tests, but there are no clear guidelines as to the merits of each, particularly with non-binormal data. Investigators may worry that when data are non-Gaussian, estimates of diagnostic accuracy based on a binormal model may be distorted. The authors conducted a Monte Carlo simulation study to compare the bias and sampling variability in the estimates of the AUCs derived from parametric and nonparametric procedures. 
Each approach was assessed in data sets generated from various configurations of pairs of overlapping distributions; these included the binormal model and non-binormal pairs of distributions where one or both pair members were mixtures of Gaussian (MG) distributions with different degrees of departures from binormality. The biases in the estimates of the AUCs were found to be very small for both parametric and nonparametric procedures. The two approaches yielded very close estimates of the AUCs and the corresponding sampling variability even when data were generated from non-binormal models. Thus, for a wide range of distributions, concern about bias or imprecision of the estimates of the AUC should not be a major factor in choosing between the nonparametric and parametric approaches.", "title": "" }, { "docid": "17cb27030abc5054b8f51256bdee346a", "text": "Purpose – This paper seeks to define and describe agile project management using the Scrum methodology as a method for more effectively managing and completing projects. Design/methodology/approach – This paper provides a general overview and introduction to the concepts of agile project management and the Scrum methodology in particular. Findings – Agile project management using the Scrum methodology allows project teams to manage digital library projects more effectively by decreasing the amount of overhead dedicated to managing the project. Using an iterative process of continuous review and short-design time frames, the project team is better able to quickly adapt projects to rapidly evolving environments in which systems will be used. Originality/value – This paper fills a gap in the digital library project management literature by providing an overview of agile project management methods.", "title": "" }, { "docid": "5a9b5313575208b0bdf8ffdbd4e271f5", "text": "A new method for the design of predictive controllers for SISO systems is presented. The proposed technique allows uncertainties and constraints to be concluded in the design of the control law. The goal is to design, at each sample instant, a predictive feedback control law that minimizes a performance measure and guarantees of constraints are satisfied for a set of models that describes the system to be controlled. The predictive controller consists of a finite horizon parametric-optimization problem with an additional constraint over the manipulated variable behavior. This is an end-constraint based approach that ensures the exponential stability of the closed-loop system. The inclusion of this additional constraint, in the on-line optimization algorithm, enables robust stability properties to be demonstrated for the closedloop system. This is the case even though constraints and disturbances are present. Finally, simulation results are presented using a nonlinear continuous stirred tank reactor model.", "title": "" }, { "docid": "b3c83fc9495387f286ea83d00673b5b3", "text": "A new walk compensation method for a pulsed time-of-flight rangefinder is suggested. The receiver channel operates without gain control using leading edge timing discrimination principle. The generated walk error is compensated for by measuring the pulse length and knowing the relation between the walk error and pulse length. The walk compensation is possible also at the range where the signal is clipped and where the compensation method by amplitude measurement is impossible. 
Based on the simulations walk error can be compensated within the dynamic range of 1:30 000.", "title": "" }, { "docid": "9f40a57159a06ecd9d658b4d07a326b5", "text": "_____________________________________________________________________________ The aim of the present study was to investigate a cytotoxic oxidative cell stress related and the antioxidant profile of kaempferol, quercetin, and isoquercitrin. The flavonol compounds were able to act as scavengers of superoxide anion (but not hydrogen peroxide), hypochlorous acid, chloramine and nitric oxide. Although flavonoids are widely described as antioxidants and this activity is generally related to beneficial effects on human health, here we show important cytotoxic actions of three well known flavonoids. They were able to promote hemolysis which one was exacerbated on the presence of hypochlorous acid but not by AAPH radical. Therefore, WWW.SCIELO.BR/EQ VOLUME 36, NÚMERO 2, 2011", "title": "" }, { "docid": "2ae1dfeae3c6b8a1ca032198f2989aef", "text": "This study enhances the existing literature on online trust by integrating the consumers’ product evaluations model and technology adoption model in e-commerce environments. In this study, we investigate how perceived value influences the perceptions of online trust among online buyers and their willingness to repurchase from the same website. This study proposes a research model that compares the relative importance of perceived value and online trust to perceived usefulness in influencing consumers’ repurchase intention. The proposed model is tested using data collected from online consumers of e-commerce. The findings show that although trust and ecommerce adoption components are critical in influencing repurchase intention, product evaluation factors are also important in determining repurchase intention. Perceived quality is influenced by the perceptions of competitive price and website reputation, which in turn influences perceived value; and perceived value, website reputation, and perceived risk influence online trust, which in turn influence repurchase intention. The findings also indicate that the effect of perceived usefulness on repurchase intention is not significant whereas perceived value and online trust are the major determinants of repurchase intention. Major theoretical contributions and practical implications are discussed.", "title": "" }, { "docid": "ce55485a60213c7656eb804b89be36cc", "text": "In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. 
This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.", "title": "" }, { "docid": "bd0b233e4f19abaf97dcb85042114155", "text": "BACKGROUND/PURPOSE\nHair straighteners are very popular around the world, although they can cause great damage to the hair. Thus, the characterization of the mechanical properties of curly hair using advanced techniques is very important to clarify how hair straighteners act on hair fibers and to contribute to the development of effective products. On this basis, we chose two nonconventional hair straighteners (formaldehyde and glyoxylic acid) to investigate how hair straightening treatments affect the mechanical properties of curly hair.\n\n\nMETHODS\nThe mechanical properties of curly hair were evaluated using a tensile test, differential scanning calorimetry (DSC) measurements, scanning electronic microscopy (SEM), a torsion modulus, dynamic vapor sorption (DVS), and Fourier transform infrared spectroscopy (FTIR) analysis.\n\n\nRESULTS\nThe techniques used effectively helped the understanding of the influence of nonconventional hair straighteners on hair properties. For the break stress and the break extension tests, formaldehyde showed a marked decrease in these parameters, with great hair damage. Glyoxylic acid had a slight effect compared to formaldehyde treatment. Both treatments showed an increase in shear modulus, a decrease in water sorption and damage to the hair surface.\n\n\nCONCLUSIONS\nA combination of the techniques used in this study permitted a better understanding of nonconventional hair straightener treatments and also supported the choice of the better treatment, considering a good relationship between efficacy and safety. Thus, it is very important to determine the properties of hair for the development of cosmetics used to improve the beauty of curly hair.", "title": "" }, { "docid": "93f8ba979ea679d6b9be6f949f8ee6ed", "text": "This paper presents a method for Simultaneous Localization and Mapping (SLAM), relying on a monocular camera as the only sensor, which is able to build outdoor, closed-loop maps much larger than previously achieved with such input. Our system, based on the Hierarchical Map approach [1], builds independent local maps in real-time using the EKF-SLAM technique and the inverse depth representation proposed in [2]. The main novelty in the local mapping process is the use of a data association technique that greatly improves its robustness in dynamic and complex environments. A new visual map matching algorithm stitches these maps together and is able to detect large loops automatically, taking into account the unobservability of scale intrinsic to pure monocular SLAM. The loop closing constraint is applied at the upper level of the Hierarchical Map in near real-time. We present experimental results demonstrating monocular SLAM as a human carries a camera over long walked trajectories in outdoor areas with people and other clutter, even in the more difficult case of forward-looking camera, and show the closing of loops of several hundred meters.", "title": "" } ]
scidocsrr
69f416d273d3a55a81632dcf5ccbca85
Survey on Evaluation of Student's Performance in Educational Data Mining
[ { "docid": "33e5a3619a6f7d831146c399ff55f5ff", "text": "With the continuous development of online learning platforms, educational data analytics and prediction have become a promising research field, which are helpful for the development of personalized learning system. However, the indicator's selection process does not combine with the whole learning process, which may affect the accuracy of prediction results. In this paper, we induce 19 behavior indicators in the online learning platform, proposing a student performance prediction model which combines with the whole learning process. The model consists of four parts: data collection and pre-processing, learning behavior analytics, algorithm model building and prediction. Moreover, we apply an optimized Logistic Regression algorithm, taking a case to analyze students' behavior and to predict their performance. Experimental results demonstrate that these eigenvalues can effectively predict whether a student was probably to have an excellent grade.", "title": "" }, { "docid": "86f5c3e7b238656ae5f680db6ce0b7f5", "text": "It is important to study and analyse educational data especially students’ performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find out interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically, the students’ performance. This study explores multiple factors theoretically assumed to affect students’ performance in higher education, and finds a qualitative model which best classifies and predicts the students’ performance based on related personal and social factors. Keywords—Data Mining; Education; Students; Performance; Patterns", "title": "" } ]
[ { "docid": "bd4d6e83ccf5da959dac5bbc174d9d6f", "text": "This paper addresses the structure-and-motion problem, that requires to find camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented, that departs from the prevailing sequential paradigm and embraces instead a hierarchical approach. This method has several advantages, like a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method.", "title": "" }, { "docid": "c88f5359fc6dc0cac2c0bd53cea989ee", "text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharged. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification is taking place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.", "title": "" }, { "docid": "0f92bd13b589f0f5328620681547b3ea", "text": "By integrating the perspectives of social presence, interactivity, and peer motivation, this study developed a theoretical model to examine the factors affecting members' purchase intention in the context of social media brand community. Data collected from members of a fan page brand community on Facebook in Taiwan was used to test the model. The results also show that peer extrinsic motivation and peer intrinsic motivation have positive influences on purchase intention. The results also reveal that human-message interaction exerts significant influence on peer extrinsic motivation and peer intrinsic motivation, while human-human interaction has a positive effect on human-message interaction. Finally, the results report that awareness impacts human-message interaction significantly, whereas awareness, affective social presence, and cognitive social presence influence human-human interaction significantly.", "title": "" }, { "docid": "d967d6525cf88d498ecc872a9eef1c7c", "text": "Historical Chinese character recognition has been suffering from the problem of lacking sufficient labeled training samples. A transfer learning method based on Convolutional Neural Network (CNN) for historical Chinese character recognition is proposed in this paper. A CNN model L is trained by printed Chinese character samples in the source domain. 
The network structure and weights of model L are used to initialize another CNN model T, which is regarded as the feature extractor and classifier in the target domain. The model T is then fine-tuned by a few labeled historical or handwritten Chinese character samples, and used for final evaluation in the target domain. Several experiments regarding essential factors of the CNNbased transfer learning method are conducted, showing that the proposed method is effective.", "title": "" }, { "docid": "8854917dff531c706f0234c1e45a496d", "text": "A new equivalent circuit model of an electrical size-reduced coupled line radio frequency Marchand balun is proposed and investigated in this paper. It consists of two parts of coupled lines with significantly reduced electrical length. Compared with the conventional Marchand balun, a short-circuit ending is applied instead of the open-circuit ending, and a capacitive feeding is introduced. The electrical length of the proposed balun is reduced to around 1/3 compared with that of the conventional Marchand balun. Detailed mathematical analysis for this design is included in this paper. Groups of circuit simulation results are shown to verify the conclusions. A sample balun is fabricated in microstrip line type on the Teflon substrate, with low dielectric constant of 2.54. It has a dimension of $0.189\\lambda _{g} \\times 0.066 \\lambda _{g}$ with amplitude imbalance of 0.1 dB and phase imbalance of 179.09° ± 0.14°. The simulation and experiment results are in good agreement.", "title": "" }, { "docid": "799ccd75d6781e38cf5e2faee5784cae", "text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.", "title": "" }, { "docid": "2e40cdb0416198c1ec986e0d3da47fd1", "text": "The slotted-page structure is a database page format commonly used for managing variable-length records. In this work, we develop a novel \"failure-atomic slotted page structure\" for persistent memory that leverages byte addressability and durability of persistent memory to minimize redundant write operations used to maintain consistency in traditional database systems. Failure-atomic slotted paging consists of two key elements: (i) in-place commit per page using hardware transactional memory and (ii) slot header logging that logs the commit mark of each page. 
The proposed scheme is implemented in SQLite and compared against NVWAL, the current state-of-the-art scheme. Our performance study shows that our failure-atomic slotted paging shows optimal performance for database transactions that insert a single record. For transactions that touch more than one database page, our proposed slot-header logging scheme minimizes the logging overhead by avoiding duplicating pages and logging only the metadata of the dirty pages. Overall, we find that our failure-atomic slotted-page management scheme reduces database logging overhead to 1/6 and improves query response time by up to 33% compared to NVWAL.", "title": "" }, { "docid": "ba3636b17e9a5d1cb3d8755afb1b3500", "text": "Anabolic-androgenic steroids (AAS) are used as ergogenic aids by athletes and non-athletes to enhance performance by augmenting muscular development and strength. AAS administration is often associated with various adverse effects that are generally dose related. High and multi-doses of AAS used for athletic enhancement can lead to serious and irreversible organ damage. Among the most common adverse effects of AAS are some degree of reduced fertility and gynecomastia in males and masculinization in women and children. Other adverse effects include hypertension and atherosclerosis, blood clotting, jaundice, hepatic neoplasms and carcinoma, tendon damage, psychiatric and behavioral disorders. More specifically, this article reviews the reproductive, hepatic, cardiovascular, hematological, cerebrovascular, musculoskeletal, endocrine, renal, immunologic and psychologic effects. Drug-prevention counseling to athletes is highlighted and the use of anabolic steroids is must be avoided, emphasizing that sports goals may be met within the framework of honest competition, free of doping substances.", "title": "" }, { "docid": "a118ef8ac178113e9bb06a4196a58bcf", "text": "Clustering is a task of assigning a set of objects into groups called clusters. In general the clustering algorithms can be classified into two categories. One is hard clustering; another one is soft (fuzzy) clustering. Hard clustering, the data’s are divided into distinct clusters, where each data element belongs to exactly one cluster. In soft clustering, data elements belong to more than one cluster, and associated with each element is a set of membership levels. In this paper we represent a survey on fuzzy c means clustering algorithm. These algorithms have recently been shown to produce good results in a wide variety of real world applications.", "title": "" }, { "docid": "b47535d86f17047ff04ceb01d0133163", "text": "Segmentation of femurs in Anterior-Posterior x-ray images is very important for fracture detection, computer-aided surgery and surgical planning. Existing methods do not perform well in segmenting bones in x-ray images due to the presence of large amount of spurious edges. This paper presents an atlas-based approach for automatic segmentation of femurs in x-ray images. A robust global alignment method based on consistent sets of edge segments registers the whole atlas to the image under joint constraints. After global alignment, the femur models undergo local refinement to extract detailed contours of the femurs. Test results show that the proposed algorithm is robust and accurate in segmenting the femur contours of different patients.", "title": "" }, { "docid": "ec5d110ea0267fc3e72e4fa2cb4f186e", "text": "We present a secure Internet of Things (IoT) architecture for Smart Cities. 
The large-scale deployment of IoT technologies within a city promises to make city operations efficient while improving quality of life for city inhabitants. Mission-critical Smart City data, captured from and carried over IoT networks, must be secured to prevent cyber attacks that might cripple city functions, steal personal data and inflict catastrophic harm. We present an architecture containing four basic IoT architectural blocks for secure Smart Cities: Black Network, Trusted SDN Controller, Unified Registry and Key Management System. Together, these basic IoT-centric blocks enable a secure Smart City that mitigates cyber attacks beginning at the IoT nodes themselves.", "title": "" }, { "docid": "99d5eab7b0dfcb59f7111614714ddf95", "text": "To prevent interference problems due to existing nearby communication systems within an ultrawideband (UWB) operating frequency, the significance of an efficient band-notched design is increased. Here, the band-notches are realized by adding independent controllable strips in terms of the notch frequency and the width of the band-notches to the fork shape of the UWB antenna. The size of the flat type band-notched UWB antenna is etched on 24 times 36 mm2 substrate. Two novel antennas are presented. One antenna is designed for single band-notch with a separated strip to cover the 5.15-5.825 GHz band. The second antenna is designed for dual band-notches using two separated strips to cover the 5.15-5.35 GHz band and 5.725-5.825 GHz band. The simulation and measurement show that the proposed antenna achieves a wide bandwidth from 3 to 12 GHz with the dual band-notches successfully.", "title": "" }, { "docid": "14e75e14ba61e01ae905cbf0ba0879b3", "text": "A new Kalman-filter based active contour model is proposed for tracking of nonrigid objects in combined spatio-velocity space. The model employs measurements of gradient-based image potential and of optical-flow along the contour as system measurements. In order to improve robustness to image clutter and to occlusions an optical-flow based detection mechanism is proposed. The method detects and rejects spurious measurements which are not consistent with previous estimation of image motion.", "title": "" }, { "docid": "dc2ea774fb11bc09e80b9de3acd7d5a6", "text": "The Hough transform is a well-known straight line detection algorithm and it has been widely used for many lane detection algorithms. However, its real-time operation is not guaranteed due to its high computational complexity. In this paper, we designed a Hough transform hardware accelerator on FPGA to process it in real time. Its FPGA logic area usage was reduced by limiting the angles of the lines to (-20, 20) degrees which are enough for lane detection applications, and its arithmetic computations were performed in parallel to speed up the processing time. As a result of FPGA synthesis using Xilinx Vertex-5 XC5VLX330 device, it occupies 4,521 slices and 25.6Kbyte block memory giving performance of 10,000fps in VGA images(5000 edge points). The proposed hardware on FPGA (0.1ms) is 450 times faster than the software implementation on ARM Cortex-A9 1.4GHz (45ms). Our Hough transform hardware was verified by applying it to the newly developed LDWS (lane departure warning system).", "title": "" }, { "docid": "70a7aa831b2036a50de1751ed1ace6d9", "text": "Short stature and later maturation of youth artistic gymnasts are often attributed to the effects of intensive training from a young age. 
Given limitations of available data, inadequate specification of training, failure to consider other factors affecting growth and maturation, and failure to address epidemiological criteria for causality, it has not been possible thus far to establish cause-effect relationships between training and the growth and maturation of young artistic gymnasts. In response to this ongoing debate, the Scientific Commission of the International Gymnastics Federation (FIG) convened a committee to review the current literature and address four questions: (1) Is there a negative effect of training on attained adult stature? (2) Is there a negative effect of training on growth of body segments? (3) Does training attenuate pubertal growth and maturation, specifically, the rate of growth and/or the timing and tempo of maturation? (4) Does training negatively influence the endocrine system, specifically hormones related to growth and pubertal maturation? The basic information for the review was derived from the active involvement of committee members in research on normal variation and clinical aspects of growth and maturation, and on the growth and maturation of artistic gymnasts and other youth athletes. The committee was thus thoroughly familiar with the literature on growth and maturation in general and of gymnasts and young athletes. Relevant data were more available for females than males. Youth who persisted in the sport were a highly select sample, who tended to be shorter for chronological age but who had appropriate weight-for-height. Data for secondary sex characteristics, skeletal age and age at peak height velocity indicated later maturation, but the maturity status of gymnasts overlapped the normal range of variability observed in the general population. Gymnasts as a group demonstrated a pattern of growth and maturation similar to that observed among short-, normal-, late-maturing individuals who were not athletes. Evidence for endocrine changes in gymnasts was inadequate for inferences relative to potential training effects. Allowing for noted limitations, the following conclusions were deemed acceptable: (1) Adult height or near adult height of female and male artistic gymnasts is not compromised by intensive gymnastics training. (2) Gymnastics training does not appear to attenuate growth of upper (sitting height) or lower (legs) body segment lengths. (3) Gymnastics training does not appear to attenuate pubertal growth and maturation, neither rate of growth nor the timing and tempo of the growth spurt. (4) Available data are inadequate to address the issue of intensive gymnastics training and alterations within the endocrine system.", "title": "" }, { "docid": "2ba975af095effcbbc4e98d7dc2172ec", "text": "People have strong intuitions about the influence objects exert upon one another when they collide. Because people's judgments appear to deviate from Newtonian mechanics, psychologists have suggested that people depend on a variety of task-specific heuristics. This leaves open the question of how these heuristics could be chosen, and how to integrate them into a unified model that can explain human judgments across a wide range of physical reasoning tasks. We propose an alternative framework, in which people's judgments are based on optimal statistical inference over a Newtonian physical model that incorporates sensory noise and intrinsic uncertainty about the physical properties of the objects being viewed. 
This noisy Newton framework can be applied to a multitude of judgments, with people's answers determined by the uncertainty they have for physical variables and the constraints of Newtonian mechanics. We investigate a range of effects in mass judgments that have been taken as strong evidence for heuristic use and show that they are well explained by the interplay between Newtonian constraints and sensory uncertainty. We also consider an extended model that handles causality judgments, and obtain good quantitative agreement with human judgments across tasks that involve different judgment types with a single consistent set of parameters.", "title": "" }, { "docid": "5b4fd88e33a6422c70f0d7150bb62627", "text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.", "title": "" }, { "docid": "362779d2c9686e9cfe2dc3c38dd80d50", "text": "We use neuroimaging to predict cultural popularity — something that is popular in the broadest sense and appeals to a large number of individuals. Neuroeconomic research suggests that activity in reward-related regions of the brain, notably the orbitofrontal cortex and ventral striatum, is predictive of future purchasing decisions, but it is unknown whether the neural signals of a small group of individuals are predictive of the purchasing decisions of the population at large. For neuroimaging to be useful as a measure of widespread popularity, these neural responses would have to generalize to a much larger population that is not the direct subject of the brain imaging itself. Here, we test the possibility of using functional magnetic resonance imaging (fMRI) to predict the relative popularity of a common good: music. We used fMRI to measure the brain responses of a relatively small group of adolescents while listening to songs of largely unknown artists. As a measure of popularity, the sales of these songs were totaled for the three years following scanning, and brain responses were then correlated with these “future” earnings. Although subjective likability of the songs was not predictive of sales, activity within the ventral striatum was significantly correlated with the number of units sold. These results suggest that the neural responses to goods are not only predictive of purchase decisions for those individuals actually scanned, but such responses generalize to the population at large and may be used to predict cultural popularity. 
", "title": "" }, { "docid": "2316e37df8796758c86881aaeed51636", "text": "Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.", "title": "" } ]
scidocsrr
0a35fd72a697dbf1713858c1861dce7a
A Survey of Data Mining and Deep Learning in Bioinformatics
[ { "docid": "5d8f33b7f28e6a8d25d7a02c1f081af1", "text": "Background The life sciences, biomedicine and health care are increasingly turning into a data intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weaklystructured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1[16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900× 10−12m and the Carbon atom approx. 300 pm . A hepatitis virus is relatively large with 45nm = 45× 10−9m and the X-Chromosome much bigger with 7μm = 7× 10−6m . We produce most of the “Big Data” in the omics world, we estimate many Terabytes ( 1TB = 1× 10 Byte = 1000 GByte) of genomics data in each individual, consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1× 1018 Byte ). Last but not least, this “natural” data is then fused together with “produced” data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data etc. these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available “dirty” data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although, one may argue that “Big Data” is a buzz word, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data driven hypotheses generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past however with partly unsatisfactory success [22,23] especially in terms of performance [24]. The grand challenge is to make data useful to and useable by the end user [25]. Maybe, the key challenge is interaction, due to the fact that it is the human end user who possesses the problem solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29]. 
", "title": "" }, { "docid": "447bbce2f595af07c8d784d422e7f826", "text": "MOTIVATION\nRNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis.\n\n\nRESULTS\nIn this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization algorithm and another two stochastic versions of expectation-maximization algorithms are described. In addition, a strategy for initialization based on likelihood is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility of choosing the number of clusters. Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAn R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org", "title": "" }, { "docid": "1e4ea38a187881d304ea417f98a608d1", "text": "Breast cancer represents the second leading cause of cancer deaths in women today and it is the most common type of cancer in women. This paper presents some experiments for tumour detection in digital mammography. We investigate the use of different data mining techniques, neural networks and association rule mining, for anomaly detection and classification. The results show that the two approaches performed well, obtaining a classification accuracy reaching over 70% for both techniques. Moreover, the experiments we conducted demonstrate the use and effectiveness of association rule mining in image categorization.", "title": "" } ]
[ { "docid": "4285d9b4b9f63f22033ce9a82eec2c76", "text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright  2001 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "5923cd462b5b09a3aabd0fbf5c36f00c", "text": "Exoskeleton robots are used as assistive limbs for elderly persons, rehabilitation for paralyzed persons or power augmentation purposes for healthy persons. The similarity of the exoskeleton robots and human body neuro-muscular system maximizes the device performance. Human body neuro-muscular system provides a flexible and safe movement capability with minimum energy consumption by varying the stiffness of the human joints regularly. Similar to human body, variable stiffness actuators should be used to provide a flexible and safe movement capability in exoskeletons. In the present day, different types of variable stiffness actuator designs are used, and the studies on these actuators are still continuing rapidly. As exoskeleton robots are mobile devices working with the equipment such as batteries, the motors used in the design are expected to have minimal power requirements. In this study, antagonistic, pre-tension and controllable transmission ratio type variable stiffness actuators are compared in terms of energy efficiency and power requirement at an optimal (medium) walking speed for ankle joint. In the case of variable stiffness, the results show that the controllable transmission ratio type actuator compared with the antagonistic design is more efficient in terms of energy consumption and power requirement.", "title": "" }, { "docid": "d60b1a9a23fe37813a24533104a74d70", "text": "Online display advertising is a multi-billion dollar industry where advertisers promote their products to users by having publishers display their advertisements on popular Web pages. An important problem in online advertising is how to forecast the number of user visits for a Web page during a particular period of time. Prior research addressed the problem by using traditional time-series forecasting techniques on historical data of user visits; (e.g., via a single regression model built for forecasting based on historical data for all Web pages) and did not fully explore the fact that different types of Web pages and different time stamps have different patterns of user visits. 
In this paper, we propose a series of probabilistic latent class models to automatically learn the underlying user visit patterns among multiple Web pages and multiple time stamps. The last (and the most effective) proposed model identifies latent groups/classes of (i) Web pages and (ii) time stamps with similar user visit patterns, and learns a specialized forecast model for each latent Web page and time stamp class. Compared with a single regression model as well as several other baselines, the proposed latent class model approach has the capability of differentiating the importance of different types of information across different classes of Web pages and time stamps, and therefore has much better modeling flexibility. An extensive set of experiments along with detailed analysis carried out on real-world data from Yahoo! demonstrates the advantage of the proposed latent class models in forecasting online user visits in online display advertising.", "title": "" }, { "docid": "72e4d7729031d63f96b686444c9b446e", "text": "In this paper we describe the fundamentals of affective gaming from a physiological point of view, covering some of the origins of the genre, how affective videogames operate and current conceptual and technological capabilities. We ground this overview of the ongoing research by taking an in-depth look at one of our own early biofeedback-based affective games. Based on our analysis of existing videogames and our own experience with affective videogames, we propose a new approach to game design based on several high-level design heuristics: assist me, challenge me and emote me (ACE), a series of gameplay \"tweaks\" made possible through affective videogames.", "title": "" }, { "docid": "258c90fe18f120a24d8132550ed85a6e", "text": "Based on the thorough analysis of the literature, Chap. 1 introduces readers with challenges of STEM-driven education in general and those challenges caused by the use of this paradigm in computer science (CS) education in particular. This analysis enables to motivate our approach we discuss throughout the book. Chapter 1 also formulates objectives, research agenda and topics this book addresses. The objectives of the book are to discuss the concepts and approaches enabling to transform the current CS education paradigm into the STEM-driven one at the school and, to some extent, at the university. We seek to implement this transformation through the integration of the STEM pedagogy, the smart content and smart devices and educational robots into the smart STEM-driven environment, using reuse-based approaches taken from software engineering and CS.", "title": "" }, { "docid": "fcc092e71c7a0b38edb23e4eb92dfb21", "text": "In this work, we focus on semantic parsing of natural language conversations. Most existing methods for semantic parsing are based on understanding the semantics of a single sentence at a time. However, understanding conversations also requires an understanding of conversational context and discourse structure across sentences. We formulate semantic parsing of conversations as a structured prediction task, incorporating structural features that model the ‘flow of discourse’ across sequences of utterances. We create a dataset for semantic parsing of conversations, consisting of 113 real-life sequences of interactions of human users with an automated email assistant. The data contains 4759 natural language statements paired with annotated logical forms. 
Our approach yields significant gains in performance over traditional semantic parsing.", "title": "" }, { "docid": "e464cde1434026c17b06716c6a416b7a", "text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.", "title": "" }, { "docid": "314e1b8bbcc0a5735d86bb751d524a93", "text": "Ubiquinone (coenzyme Q), in addition to its function as an electron and proton carrier in mitochondrial and bacterial electron transport linked to ATP synthesis, acts in its reduced form (ubiquinol) as an antioxidant, preventing the initiation and/or propagation of lipid peroxidation in biological membranes and in serum low-density lipoprotein. The antioxidant activity of ubiquinol is independent of the effect of vitamin E, which acts as a chain-breaking antioxidant inhibiting the propagation of lipid peroxidation. In addition, ubiquinol can efficiently sustain the effect of vitamin E by regenerating the vitamin from the tocopheroxyl radical, which otherwise must rely on water-soluble agents such as ascorbate (vitamin C). Ubiquinol is the only known lipid-soluble antioxidant that animal cells can synthesize de novo, and for which there exist enzymic mechanisms that can regenerate the antioxidant from its oxidized form resulting from its inhibitory effect of lipid peroxidation. These features, together with its high degree of hydrophobicity and its widespread occurrence in biological membranes and in low-density lipoprotein, suggest an important role of ubiquinol in cellular defense against oxidative damage. Degenerative diseases and aging may bc 1 manifestations of a decreased capacity to maintain adequate ubiquinol levels.", "title": "" }, { "docid": "e39494d730b0ad81bf950b68dc4a7854", "text": "G4LTL-ST automatically synthesizes control code for industrial Programmable Logic Controls (PLC) from timed behavioral specifications of inputoutput signals. These specifications are expressed in a linear temporal logic (LTL) extended with non-linear arithmetic constraints and timing constraints on signals. G4LTL-ST generates code in IEC 61131-3-compatible Structured Text, which is compiled into executable code for a large number of industrial field-level devices. The synthesis algorithm of G4LTL-ST implements pseudo-Boolean abstraction of data constraints and the compilation of timing constraints into LTL, together with a counterstrategy-guided abstraction-refinement synthesis loop. 
Since temporal logic specifications are notoriously difficult to use in practice, G4LTL-ST supports engineers in specifying realizable control problems by suggesting suitable restrictions on the behavior of the control environment from failed synthesis attempts.", "title": "" }, { "docid": "58bfe45d6f2e8bdb2f641290ee6f0b86", "text": "Intimate partner violence (IPV) is a common phenomenon worldwide. However, there is a relative dearth of qualitative research exploring IPV in which men are the victims of their female partners. The present study used a qualitative approach to explore how Portuguese men experience IPV. Ten male victims (aged 35–75) who had sought help from domestic violence agencies or from the police were interviewed. Transcripts were analyzed using QSR NVivo10 and coded following thematic analysis. The results enhance our understanding of both the nature and dynamics of the violence that men experience as well as the negative impact of violence on their lives. This study revealed the difficulties that men face in the process of seeking help, namely differences in treatment of men versus women victims. It also highlights that help seeking had a negative emotional impact for most of these men. Finally, this study has important implications for practitioners and underlines macro-level social recommendations for raising awareness about this phenomenon, including the need for changes in victims’ services and advocacy for gender-inclusive campaigns and responses.", "title": "" }, { "docid": "288383c6a6d382b6794448796803699f", "text": "A transresistance instrumentation amplifier (dual-input transresistance amplifier) was designed, and a prototype was fabricated and tested in a gamma-ray dosimeter. The circuit, explained in this letter, is a differential amplifier which is suitable for amplification of signals from current-source transducers. In the dosimeter application, the amplifier proved superior to a regular (single) transresistance amplifier, giving better temperature stability and better common-mode rejection.", "title": "" }, { "docid": "7476bbec4720e04223d56a71e6bab03e", "text": "We consider the performance analysis and design optimization of low-density parity check (LDPC) coded multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems for high data rate wireless transmission. The tools of density evolution with mixture Gaussian approximations are used to optimize irregular LDPC codes and to compute minimum operational signal-to-noise ratios (SNRs) for ergodic MIMO OFDM channels. In particular, the optimization is done for various MIMO OFDM system configurations, which include a different number of antennas, different channel models, and different demodulation schemes; the optimized performance is compared with the corresponding channel capacity. It is shown that along with the optimized irregular LDPC codes, a turbo iterative receiver that consists of a soft maximum a posteriori (MAP) demodulator and a belief-propagation LDPC decoder can perform within 1 dB from the ergodic capacity of the MIMO OFDM systems under consideration. It is also shown that compared with the optimal MAP demodulator-based receivers, the receivers employing a low-complexity linear minimum mean-square-error soft-interference-cancellation (LMMSE-SIC) demodulator have a small performance loss (< 1dB) in spatially uncorrelated MIMO channels but suffer extra performance loss in MIMO channels with spatial correlation. 
Finally, from the LDPC profiles that already are optimized for ergodic channels, we heuristically construct small block-size irregular LDPC codes for outage MIMO OFDM channels; as shown from simulation results, the irregular LDPC codes constructed here are helpful in expediting the convergence of the iterative receivers.", "title": "" }, { "docid": "309a20834f17bd87e10f8f1c051bf732", "text": "Tamper-resistant cryptographic processors are becoming the standard way to enforce data-usage policies. Their origins lie with military cipher machines and PIN processing in banking payment networks, expanding in the 1990s into embedded applications: token vending machines for prepayment electricity and mobile phone credit. Major applications such as GSM mobile phone identification and pay TV set-top boxes have pushed low-cost cryptoprocessors toward ubiquity. In the last five years, dedicated crypto chips have been embedded in devices such as game console accessories and printer ink cartridges, to control product and accessory after markets. The \"Trusted Computing\" initiative will soon embed cryptoprocessors in PCs so they can identify each other remotely. This paper surveys the range of applications of tamper-resistant hardware and the array of attack and defense mechanisms which have evolved in the tamper-resistance arms race.", "title": "" }, { "docid": "81cd2034b2096db2be699821e499dfa8", "text": "At the US National Library of Medicine we have developed the Unified Medical Language System (UMLS), whose goal it is to provide integrated access to a large number of biomedical resources by unifying the vocabularies that are used to access those resources. The UMLS currently interrelates some 60 controlled vocabularies in the biomedical domain. The UMLS coverage is quite extensive, including not only many concepts in clinical medicine, but also a large number of concepts applicable to the broad domain of the life sciences. In order to provide an overarching conceptual framework for all UMLS concepts, we developed an upper-level ontology, called the UMLS semantic network. The semantic network, through its 134 semantic types, provides a consistent categorization of all concepts represented in the UMLS. The 54 links between the semantic types provide the structure for the network and represent important relationships in the biomedical domain. Because of the growing number of information resources that contain genetic information, the UMLS coverage in this area is being expanded. We recently integrated the taxonomy of organisms developed by the NLM's National Center for Biotechnology Information, and we are currently working together with the developers of the Gene Ontology to integrate this resource, as well. As additional, standard, ontologies become publicly available, we expect to integrate these into the UMLS construct.", "title": "" }, { "docid": "8381e95910a7500cdb37505e64a9331b", "text": "Previous ensemble streamflow prediction (ESP) studies in Korea reported that modelling error significantly affects the accuracy of the ESP probabilistic winter and spring (i.e. dry season) forecasts, and thus suggested that improving the existing rainfall-runoff model, TANK, would be critical to obtaining more accurate probabilistic forecasts with ESP. 
This study used two types of artificial neural network (ANN), namely the single neural network (SNN) and the ensemble neural network (ENN), to provide better rainfall-runoff simulation capability than TANK, which has been used with the ESP system for forecasting monthly inflows to the Daecheong multipurpose dam in Korea. Using the bagging method, the ENN combines the outputs of member networks so that it can control the generalization error better than an SNN. This study compares the two ANN models with TANK with respect to the relative bias and the root-mean-square error. The overall results showed that the ENN performed the best among the three rainfall-runoff models. The ENN also considerably improved the probabilistic forecasting accuracy, measured in terms of average hit score, half-Brier score and hit rate, of the present ESP system that used TANK. Therefore, this study concludes that the ENN would be more effective for ESP rainfall-runoff modelling than TANK or an SNN. Copyright  2005 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "584540f486e1bf112eb8abe8731de341", "text": "This article overviews the diagnosis and management of traumatic injuries to primary teeth. The child's age, ability to cooperate for treatment, and the potential for collateral damage to developing permanent teeth can complicate the management of these injuries. The etiology of these injuries is reviewed including the disturbing role of child abuse. Serious medical complications including head injury, cervical spine injury, and tetanus are discussed. Diagnostic methods and the rationale for treatment of luxation injuries, crown, and crown/root fractures are included. Treatment priorities should include adequate pain control, safe management of the child's behavior, and protection of the developing permanent teeth.", "title": "" }, { "docid": "6fc9000394cc05b2f70909dd2d0c76fb", "text": "Thesupport-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.", "title": "" }, { "docid": "795f59c0658a56aa68a9271d591c81a6", "text": "We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. 
BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.", "title": "" }, { "docid": "1b1953e3dd28c67e7a8648392422df88", "text": "We examined Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) General Ability Index (GAI) and Full Scale Intelligence Quotient (FSIQ) discrepancies in 100 epilepsy patients; 44% had a significant GAI > FSIQ discrepancy. GAI-FSIQ discrepancies were correlated with the number of antiepileptic drugs taken and duration of epilepsy. Individual antiepileptic drugs differentially interfere with the expression of underlying intellectual ability in this group. FSIQ may significantly underestimate levels of general intellectual ability in people with epilepsy. Inaccurate representations of FSIQ due to selective impairments in working memory and reduced processing speed obscure the contextual interpretation of performance on other neuropsychological tests, and subtle localizing and lateralizing signs may be missed as a result.", "title": "" }, { "docid": "5547f8ad138a724c2cc05ce65f50ebd2", "text": "As machine learning (ML) technology continues to spread by rapid evolution, the system or service using Machine Learning technology, called ML product, makes big impact on our life, society and economy. Meanwhile, Quality Assurance (QA) for ML product is quite more difficult than hardware, non-ML software and service because performance of ML technology is much better than non-ML technology in exchange for the characteristics of ML product, e.g. low explainability. We must keep rapid evolution and reduce quality risk of ML product simultaneously. In this paper, we show a Quality Assurance Framework for Machine Learning product. Scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML Product is proposed. General principles of product evaluation is introduced and applied to ML product evaluation as a part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. A strategy of ML Product Evaluation is constructed as another part of the policy. Quality Integrity Level for ML product is also modelled. Second, we propose a test architecture of ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. 
Finally, we define QA activity levels for ML products.", "title": "" } ]
scidocsrr
6a4bdf8a3531300909b2c97569672111
Gated Multimodal Units for Information Fusion
[ { "docid": "0bbfd07d0686fc563f156d75d3672c7b", "text": "In this paper, we provide a comprehensive survey of the mixture of experts (ME). We discuss the fundamental models for regression and classification and also their training with the expectation-maximization algorithm. We follow the discussion with improvements to the ME model and focus particularly on the mixtures of Gaussian process experts. We provide a review of the literature for other training methods, such as the alternative localized ME training, and cover the variational learning of ME in detail. In addition, we describe the model selection literature which encompasses finding the optimum number of experts, as well as the depth of the tree. We present the advances in ME in the classification area and present some issues concerning the classification model. We list the statistical properties of ME, discuss how the model has been modified over the years, compare ME to some popular algorithms, and list several applications. We conclude our survey with future directions and provide a list of publicly available datasets and a list of publicly available software that implement ME. Finally, we provide examples for regression and classification. We believe that the study described in this paper will provide quick access to the relevant literature for researchers and practitioners who would like to improve or use ME, and that it will stimulate further studies in ME.", "title": "" } ]
[ { "docid": "e668a6b42058bc44925d073fd9ee0cdd", "text": "Reducing the in-order delivery, or playback, delay of reliable transport layer protocols over error prone networks can significantly improve application layer performance. This is especially true for applications that have time sensitive constraints such as streaming services. We explore the benefits of a coded generalization of selective repeat ARQ for minimizing the in-order delivery delay. An analysis of the delay's first two moments is provided so that we can determine when and how much redundancy should be added to meet a user's requirements. Numerical results help show the gains over selective repeat ARQ, as well as the trade-offs between meeting the user's delay constraints and the costs inflicted on the achievable rate. Finally, the analysis is compared with experimental results to help illustrate how our work can be used to help inform system decisions.", "title": "" }, { "docid": "eed45b473ebaad0740b793bda8345ef3", "text": "Plyometric training (PT) enhances soccer performance, particularly vertical jump. However, the effectiveness of PT depends on various factors. A systematic search of the research literature was conducted for randomized controlled trials (RCTs) studying the effects of PT on countermovement jump (CMJ) height in soccer players. Ten studies were obtained through manual and electronic journal searches (up to April 2017). Significant differences were observed when compared: (1) PT group vs. control group (ES=0.85; 95% CI 0.47-1.23; I2=68.71%; p<0.001), (2) male vs. female soccer players (Q=4.52; p=0.033), (3) amateur vs. high-level players (Q=6.56; p=0.010), (4) single session volume (<120 jumps vs. ≥120 jumps; Q=6.12, p=0.013), (5) rest between repetitions (5 s vs. 10 s vs. 15 s vs. 30 s; Q=19.10, p<0.001), (6) rest between sets (30 s vs. 60 s vs. 90 s vs. 120 s vs. 240 s; Q=19.83, p=0.001) and (7) and overall training volume (low: <1600 jumps vs. high: ≥1600 jumps; Q=5.08, p=0.024). PT is an effective form of training to improve vertical jump performance (i.e., CMJ) in soccer players. The benefits of PT on CMJ performance are greater for interventions of longer rest interval between repetitions (30 s) and sets (240 s) with higher volume of more than 120 jumps per session and 1600 jumps in total. Gender and competitive level differences should be considered when planning PT programs in soccer players.", "title": "" }, { "docid": "33431760dfc16c095a4f0b8d4ed94790", "text": "Millions of individuals worldwide are afflicted with acute and chronic respiratory diseases, causing temporary and permanent disabilities and even death. Oftentimes, these diseases occur as a result of altered immune responses. The aryl hydrocarbon receptor (AhR), a ligand-activated transcription factor, acts as a regulator of mucosal barrier function and may influence immune responsiveness in the lungs through changes in gene expression, cell–cell adhesion, mucin production, and cytokine expression. This review updates the basic immunobiology of the AhR signaling pathway with regards to inflammatory lung diseases such as asthma, chronic obstructive pulmonary disease, and silicosis following data in rodent models and humans. Finally, we address the therapeutic potential of targeting the AhR in regulating inflammation during acute and chronic respiratory diseases.", "title": "" }, { "docid": "c906d026937ebea3525f5dee5d923335", "text": "VGGNets have turned out to be effective for object recognition in still images. 
However, directly adapting the VGGNet models trained on the ImageNet dataset does not yield good performance for scene recognition. This report describes our implementation of training the VGGNets on the large-scale Places205 dataset. Specifically, we train three VGGNet models, namely VGGNet-11, VGGNet-13, and VGGNet-16, by using a Multi-GPU extension of Caffe toolbox with high computational efficiency. We verify the performance of trained Places205-VGGNet models on three datasets: MIT67, SUN397, and Places205. Our trained models achieve the state-of-the-art performance on these datasets and are made publicly available.", "title": "" }, { "docid": "7249e8c5db7d9d048f777aeeaf34954c", "text": "With the growth of system size and complexity, reliability has become of paramount importance for petascale systems. Reliability, Availability, and Serviceability (RAS) logs have been commonly used for failure analysis. However, analysis based on just the RAS logs has proved to be insufficient in understanding failures and system behaviors. To overcome the limitations of these existing methodologies, we analyze the Blue Gene/P RAS logs and the Blue Gene/P job logs in a cooperative manner. From our co-analysis effort, we have identified a dozen important observations about failure characteristics and job interruption characteristics on the Blue Gene/P systems. These observations can significantly facilitate the research in fault resilience of large-scale systems.", "title": "" }, { "docid": "c564656568c9ce966e88d11babc0d445", "text": "In this study, Turkish texts belonging to different categories were classified by using word2vec word vectors. Firstly, vectors of the words in all the texts were extracted; then, each text was represented in terms of the mean vectors of the words it contains. Texts were classified by SVM and an F-measure score of 0.92 was obtained for seven different categories. As a result, it was experimentally shown that word2vec is more successful than tf-idf based classification for Turkish document classification.", "title": "" }, { "docid": "a74b091706f4aeb384d2bf3d477da67d", "text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.", "title": "" }, { "docid": "1ede796449f610b186638aa2ac9ceedf", "text": "We introduce a framework for exploring and learning representations of log data generated by enterprise-grade security devices with the goal of detecting advanced persistent threats (APTs) spanning over several weeks. 
The presented framework uses a divide-and-conquer strategy combining behavioral analytics, time series modeling and representation learning algorithms to model large volumes of data. In addition, given that we have access to human-engineered features, we analyze the capability of a series of representation learning algorithms to complement human-engineered features in a variety of classification approaches. We demonstrate the approach with a novel dataset extracted from 3 billion log lines generated at an enterprise network boundaries with reported command and control communications. The presented results validate our approach, achieving an area under the ROC curve of 0.943 and 95 true positives out of the Top 100 ranked instances on the test data set.", "title": "" }, { "docid": "08f49b003a3a5323e38e4423ba6503a4", "text": "Neurofeedback (NF), a type of neurobehavioral training, has gained increasing attention in recent years, especially concerning the treatment of children with ADHD. Promising results have emerged from recent randomized controlled studies, and thus, NF is on its way to becoming a valuable addition to the multimodal treatment of ADHD. In this review, we summarize the randomized controlled trials in children with ADHD that have been published within the last 5 years and discuss issues such as the efficacy and specificity of effects, treatment fidelity and problems inherent in placebo-controlled trials of NF. Directions for future NF research are outlined, which should further address specificity and help to determine moderators and mediators to optimize and individualize NF training. Furthermore, we describe methodological (tomographic NF) and technical ('tele-NF') developments that may also contribute to further improvements in treatment outcome.", "title": "" }, { "docid": "0cf3a201140e02039295a2ef4697a635", "text": "In recent years, deep convolutional neural networks (ConvNet) have shown their popularity in various real world applications. To provide more accurate results, the state-of-the-art ConvNet requires millions of parameters and billions of operations to process a single image, which represents a computational challenge for general purpose processors. As a result, hardware accelerators such as Graphic Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), have been adopted to improve the performance of ConvNet. However, GPU-based solution consumes a considerable amount of power and a traditional RTL design on FPGA requires tedious development that is very time-consuming. In this work, we propose a scalable and parameterized end-to-end ConvNet design using Intel FPGA SDK for OpenCL. To validate the design, we implement VGG 16 model on two different FPGA boards. Consequently, our designs achieve 306.41 GOPS on Intel Stratix A7 and 318.94 GOPS on Intel Arria 10 GX 10AX115. To the best of our knowledge, this outperforms previous FPGA-based accelerators. Compared to the CPU (Intel Xeon E5-2620) and a mid-range GPU (Nvidia K40), our design is 24.3X and 1.7X more energy efficient respectively.", "title": "" }, { "docid": "280672ad5473e061269114d0d11acc90", "text": "With personalization, consumers can choose from various product attributes and a customized product is assembled based on their preferences. Marketers often offer personalization on websites. This paper investigates consumer purchase intentions toward personalized products in an online selling situation. 
The research builds and tests three hypotheses: (1) intention to purchase personalized products will be affected by individualism, uncertainty avoidance, power distance, and masculinity dimensions of a national culture; (2) consumers will be more likely to buy personalized search products than experience products; and (3) intention to buy a personalized product will not be influenced by price premiums up to some level. Results indicate that individualism is the only culture dimension to have a significant effect on purchase intention. Product type and individualism by price interaction also have a significant effect, whereas price does not. Major findings and implications are discussed.", "title": "" }, { "docid": "9e6df649528ce4f011fcc09d089b4559", "text": "Aspect-based sentiment analysis (ABSA) tries to predict the polarity of a given document with respect to a given aspect entity. While neural network architectures have been successful in predicting the overall polarity of sentences, aspect-specific sentiment analysis still remains as an open problem. In this paper, we propose a novel method for integrating aspect information into the neural model. More specifically, we incorporate aspect information into the neural model by modeling word-aspect relationships. Our novel model, Aspect Fusion LSTM (AF-LSTM) learns to attend based on associative relationships between sentence words and aspect which allows our model to adaptively focus on the correct words given an aspect term. This ameliorates the flaws of other state-of-the-art models that utilize naive concatenations to model word-aspect similarity. Instead, our model adopts circular convolution and circular correlation to model the similarity between aspect and words and elegantly incorporates this within a differentiable neural attention framework. Finally, our model is end-to-end differentiable and highly related to convolution-correlation (holographic like) memories. Our proposed neural model achieves state-of-the-art performance on benchmark datasets, outperforming ATAE-LSTM by 4%-5% on average across multiple datasets.", "title": "" }, { "docid": "4f9558d13c3caf7244b31adc69c8832d", "text": "Self-adaptation is a first class concern for cloud applications, which should be able to withstand diverse runtime changes. Variations are simultaneously happening both at the cloud infrastructure level - for example hardware failures - and at the user workload level - flash crowds. However, robustly withstanding extreme variability, requires costly hardware over-provisioning. \n In this paper, we introduce a self-adaptation programming paradigm called brownout. Using this paradigm, applications can be designed to robustly withstand unpredictable runtime variations, without over-provisioning. 
The paradigm is based on optional code that can be dynamically deactivated through decisions based on control theory. \n We modified two popular web application prototypes - RUBiS and RUBBoS - with less than 170 lines of code, to make them brownout-compliant. Experiments show that brownout self-adaptation dramatically improves the ability to withstand flash-crowds and hardware failures.", "title": "" }, { "docid": "dfb78a96f9af81aa3f4be1a28e4ce0a2", "text": "This paper presents two ultra-high-speed SerDes dedicated for PAM4 and NRZ data. The PAM4 TX incorporates an output driver with 3-tap FFE and adjustable weighting to deliver clean outputs at 4 levels, and the PAM4 RX employs a purely linear full-rate CDR and CTLE/1-tap DFE combination to recover and demultiplex the data. NRZ TX includes a tree-structure MUX with built-in PLL and phase aligner. NRZ RX adopts linear PD with special vernier technique to handle the 56 Gb/s input data. All chips have been verified in silicon with reasonable performance, providing prospective design examples for next-generation 400 GbE.", "title": "" }, { "docid": "2bbcdf5f3182262d3fcd6addc1e3f835", "text": "Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MC-FCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.50 and 96.58 percent, respectively, which are significantly better than the best result reported thus far in the literature.", "title": "" }, { "docid": "981da4eddfc1c9fbbceef437f5f43439", "text": "A significant number of schizophrenic patients show patterns of smooth pursuit eye-tracking patterns that differ strikingly from the generally smooth eye-tracking seen in normals and in nonschizophrenic patients. These deviations are probably referable not only to motivational or attentional factors, but also to oculomotor involvement that may have a critical relevance for perceptual dysfunction in schizophrenia.", "title": "" }, { "docid": "9be80d8f93dd5edd72ecd759993935d6", "text": "The excretory system regulates the chemical composition of body fluids by removing metabolic wastes and retaining the proper amount of water, salts and nutrients. The invertebrate excretory structures are classified in according to their marked variations in the morphological structures into three types included contractile vacuoles in protozoa, nephridia (flame cell system) in most invertebrate animals and Malpighian tubules (arthropod kidney) in insects [2]. 
There are three distinct excretory organs formed in succession during the development of the vertebrate kidney, they are called pronephros, mesonephros and metanephros. The pronephros is the most primitive one and exists as a functional kidney only in some of the lowest fishes and is called the archinephros. The mesonephros represents the functional excretory organs in anamniotes and called as opisthonephros. The metanephros is the most caudally located of the excretory organs and the last to appear, it represents the functional kidney in amniotes [2-4].", "title": "" }, { "docid": "ef8be5104f9bc4a0f4353ed236b6afb8", "text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.", "title": "" }, { "docid": "079de41f553c8bd5c87f7c3cfbe5d836", "text": "We present a design study for a nano-scale crossbar memory system that uses memristors with symmetrical but highly nonlinear current-voltage characteristics as memory elements. The memory is non-volatile since the memristors retain their state when un-powered. In order to address the nano-wires that make up this nano-scale crossbar, we use two coded demultiplexers implemented using mixed-scale crossbars (in which CMOS-wires cross nano-wires and in which the crosspoint junctions have one-time configurable memristors). This memory system does not utilize the kind of devices (diodes or transistors) that are normally used to isolate the memory cell being written to and read from in conventional memories. Instead, special techniques are introduced to perform the writing and the reading operation reliably by taking advantage of the nonlinearity of the type of memristors used. After discussing both writing and reading strategies for our memory system in general, we focus on a 64 x 64 memory array and present simulation results that show the feasibility of these writing and reading procedures. Besides simulating the case where all device parameters assume exactly their nominal value, we also simulate the much more realistic case where the device parameters stray around their nominal value: we observe a degradation in margins, but writing and reading is still feasible. These simulation results are based on a device model for memristors derived from measurements of fabricated devices in nano-scale crossbars using Pt and Ti nano-wires and using oxygen-depleted TiO(2) as the switching material.", "title": "" }, { "docid": "35725331e4abd61ed311b14086dd3d5c", "text": "BACKGROUND\nBody dysmorphic disorder (BDD) consists of a preoccupation with an 'imagined' defect in appearance which causes significant distress or impairment in functioning. There has been little previous research into BDD. 
This study replicates a survey from the USA in a UK population and evaluates specific measures of BDD.\n\n\nMETHOD\nCross-sectional interview survey of 50 patients who satisfied DSM-IV criteria for BDD as their primary disorder.\n\n\nRESULTS\nThe average age at onset was late adolescence and a large proportion of patients were either single or divorced. Three-quarters of the sample were female. There was a high degree of comorbidity with the most common additional Axis l diagnosis being either a mood disorder (26%), social phobia (16%) or obsessive-compulsive disorder (6%). Twenty-four per cent had made a suicide attempt in the past. Personality disorders were present in 72% of patients, the most common being paranoid, avoidant and obsessive-compulsive.\n\n\nCONCLUSIONS\nBDD patients had a high associated comorbidity and previous suicide attempts. BDD is a chronic handicapping disorder and patients are not being adequately identified or treated by health professionals.", "title": "" } ]
scidocsrr
5f31dfded71c8aa0596b961f83ad9bfd
A new hybrid global optimization approach for selecting clinical and biological features that are relevant to the effective diagnosis of ovarian cancer
[ { "docid": "86826e10d531b8d487fada7a5c151a41", "text": "Feature selection is an important preprocessing step in data mining. Mutual information-based feature selection is a kind of popular and effective approaches. In general, most existing mutual information-based techniques are greedy methods, which are proven to be efficient but suboptimal. In this paper, mutual information-based feature selection is transformed into a global optimization problem, which provides a new idea for solving feature selection problems. Firstly, a single-objective feature selection algorithm combining relevance and redundancy is presented, which has well global searching ability and high computational efficiency. Furthermore, to improve the performance of feature selection, we propose a multi-objective feature selection algorithm. The method can meet different requirements and achieve a tradeoff among multiple conflicting objectives. On this basis, a hybrid feature selection framework is adopted for obtaining a final solution. We compare the performance of our algorithm with related methods on both synthetic and real datasets. Simulation results show the effectiveness and practicality of the proposed method.", "title": "" }, { "docid": "023166be79a875da0b06a4d6d562839f", "text": "There is no a priori reason why machine learning must borrow from nature. A field could exist, complete with well-defined algorithms, data structures, and theories of learning, without once referring to organisms, cognitive or genetic structures, and psychological or evolutionary theories. Yet at the end of the day, with the position papers written, the computers plugged in, and the programs debugged, a learning edifice devoid of natural metaphor would lack something. It would ignore the fact that all these creations have become possible only after three billion years of evolution on this planet. It would miss the point that the very ideas of adaptation and learning are concepts invented by the most recent representatives of the species Homo sapiens from the careful observation of themselves and life around them. It would miss the point that natural examples of learning and adaptation are treasure troves of robust procedures and structures. Fortunately, the field of machine learning does rely upon nature's bounty for both inspiration and mechanism. Many machine learning systems now borrow heavily from current thinking in cognitive science, and rekindled interest in neural networks and connectionism is evidence of serious mechanistic and philosophical currents running through the field. Another area where natural example has been tapped is in work on genetic algorithms (GAs) and genetics-based machine learning. Rooted in the early cybernetics movement (Holland, 1962), progress has been made in both theory (Holland, 1975; Holland, Holyoak, Nisbett, & Thagard, 1986) and application (Goldberg, 1989; Grefenstette, 1985, 1987) to the point where genetics-based systems are finding their way into everyday commercial use (Davis & Coombs, 1987; Fourman, 1985).", "title": "" } ]
[ { "docid": "085f0bef6bef5f91659edfad039f422e", "text": "With the development in modern communication technology, every physical device is now connecting with the internet. IoT is getting emerging technology for connecting physical devices with the user. In this paper we combined existing energy meter with the IoT technology. By implementation of IoT in the case of meter reading for electricity can give customer relief in using electrical energy. In this work a digital energy meter is connected with cloud server via IoT device. It sends the amount of consumed energy of connected customer to webserver. There is a feature for disconnection in the case of unauthorized and unpaid consumption and also have option for renew the connection by paying bill online. We tried to build up a consumer and business friendly system.", "title": "" }, { "docid": "db111db8aaaf1185d9dc99ba53e6e828", "text": "Topic model uncovers abstract topics within texts documents, which is an essential task in text analysis in social networks. However, identifying topics in text documents in social networks is challenging since the texts are short, unlabeled, and unstructured. For this reason, we propose a topic classification system regarding the features of text documents in social networks. The proposed system is based on several machine-learning algorithms and voting system. The accuracy of the system has been tested using text documents that were classified into three topics. The experiment results show that the proposed system guarantees high accuracy rates in documents topic classification.", "title": "" }, { "docid": "8f6107d045b94917cf0f0bd3f262a1bf", "text": "An interesting challenge for explainable recommender systems is to provide successful interpretation of recommendations using structured sentences. It is well known that user-generated reviews, have strong influence on the users' decision. Recent techniques exploit user reviews to generate natural language explanations. In this paper, we propose a character-level attention-enhanced long short-term memory model to generate natural language explanations. We empirically evaluated this network using two real-world review datasets. The generated text present readable and similar to a real user's writing, due to the ability of reproducing negation, misspellings, and domain-specific vocabulary.", "title": "" }, { "docid": "1c3c21a159bed9bf293838eee7c6c36b", "text": "The laminar location of the cell bodies and terminals of interareal connections determines the hierarchical structural organization of the cortex and has been intensively studied. However, we still have only a rudimentary understanding of the connectional principles of feedforward (FF) and feedback (FB) pathways. Quantitative analysis of retrograde tracers was used to extend the notion that the laminar distribution of neurons interconnecting visual areas provides an index of hierarchical distance (percentage of supragranular labeled neurons [SLN]). We show that: 1) SLN values constrain models of cortical hierarchy, revealing previously unsuspected areal relations; 2) SLN reflects the operation of a combinatorial distance rule acting differentially on sets of connections between areas; 3) Supragranular layers contain highly segregated bottom-up and top-down streams, both of which exhibit point-to-point connectivity. 
This contrasts with the infragranular layers, which contain diffuse bottom-up and top-down streams; 4) Cell filling of the parent neurons of FF and FB pathways provides further evidence of compartmentalization; 5) FF pathways have higher weights, cross fewer hierarchical levels, and are less numerous than FB pathways. Taken together, the present results suggest that cortical hierarchies are built from supra- and infragranular counterstreams. This compartmentalized dual counterstream organization allows point-to-point connectivity in both bottom-up and top-down directions.", "title": "" }, { "docid": "2547e6e8138c49b76062e241391dfc1d", "text": "Methods of deep neural networks (DNNs) have recently demonstrated superior performance on a number of natural language processing tasks. However, in most previous work, the models are learned based on either unsupervised objectives, which does not directly optimize the desired task, or singletask supervised objectives, which often suffer from insufficient training data. We develop a multi-task DNN for learning representations across multiple tasks, not only leveraging large amounts of cross-task data, but also benefiting from a regularization effect that leads to more general representations to help tasks in new domains. Our multi-task DNN approach combines tasks of multiple-domain classification (for query classification) and information retrieval (ranking for web search), and demonstrates significant gains over strong baselines in a comprehensive set of domain adaptation.", "title": "" }, { "docid": "bea5359317e05e0a9c3b4d474ca0067f", "text": "Agile method Scrum can effectively resolve numerous problems encountered when Capability Maturity Model Integration(CMMI) is implemented in small and medium software development organizations, but some special needs are hard to be satisfied. According to small and medium organizations' characteristic, the paper analyzes feasibility of combining Scrum and CMMI in depth. It is useful for organizations that build a new project management framework based on both CMMI and Scrum practices.", "title": "" }, { "docid": "4ba4930befdc19c32c4fb73abe35d141", "text": "Us enhance usab adaptivity and designers mod hindering thus level, increasin possibility of e aims at creat concepts and p literature, app user context an to create a ge This ontology alleviate the a download, is ex areas, person visualization.", "title": "" }, { "docid": "04384b62c17f9ff323db4d51bea86fe9", "text": "Imbalanced data widely exist in many high-impact applications. An example is in air traffic control, where among all three types of accident causes, historical accident reports with ‘personnel issues’ are much more than the other two types (‘aircraft issues’ and ‘environmental issues’) combined. Thus, the resulting data set of accident reports is highly imbalanced. On the other hand, this data set can be naturally modeled as a network, with each node representing an accident report, and each edge indicating the similarity of a pair of accident reports. Up until now, most existing work on imbalanced data analysis focused on the classification setting, and very little is devoted to learning the node representations for imbalanced networks. To bridge this gap, in this paper, we first propose Vertex-Diminished Random Walk (VDRW) for imbalanced network analysis. It is significantly different from the existing Vertex Reinforced Random Walk by discouraging the random particle to return to the nodes that have already been visited. 
This design is particularly suitable for imbalanced networks as the random particle is more likely to visit the nodes from the same class, which is a desired property for learning node representations. Furthermore, based on VDRW, we propose a semi-supervised network representation learning framework named ImVerde for imbalanced networks, where context sampling uses VDRW and the limited label information to create node-context pairs, and balanced-batch sampling adopts a simple under-sampling method to balance these pairs from different classes. Experimental results demonstrate that ImVerde based on VDRW outperforms stateof-the-art algorithms for learning network representations from imbalanced data.", "title": "" }, { "docid": "cf0d5d3877bf26822c2196a3a17bd073", "text": "The purpose of this paper is to review existing sensor and sensor network ontologies to understand whether they can be reused as a basis for a manufacturing perception sensor ontology, or if the existing ontologies hold lessons for the development of a new ontology. We develop an initial set of requirements that should apply to a manufacturing perception sensor ontology. These initial requirements are used in reviewing selected existing sensor ontologies. This paper describes the steps for 1) extending and refining the requirements; 2) proposing hierarchical structures for verifying the purposes of the ontology; and 3) choosing appropriate tools and languages to support such an ontology. Some languages could include OWL (Web Ontology Language) [1] and SensorML (Sensor Markup Language) [2]. This work will be proposed as a standard within the IEEE Robotics and Automation Society (RAS) Ontologies for Robotics Automation (ORA) Working Group [3]. 1. Overview of Sensor Ontology Effort Next generation robotic systems for manufacturing must perform highly complex tasks in dynamic environments. To improve return on investment, manufacturing robots and automation must become more flexible and adaptable, and less dependent on blind, repetitive motions in a structured, fixed environment. To become more adaptable, robots need both precise sensing for parts and assemblies, so they can focus on specific tasks in which they must interact with and manipulate objects; and situational awareness, so they can robustly sense their entire environment for long-term planning and short-term safety. Meeting these requirements will need advances in sensing and perception systems that can identify and locate objects, can detect people and obstacles, and, in general, can perceive as many elements of the manufacturing environment as needed for operation. To robustly and accurately perceive many elements of the environment will require a wide range of collaborating smart sensors such as cameras, laser scanners, stereo cameras, and others. In many cases these sensors will need to be integrated into a distributed sensor network that offers extensive coverage of a manufacturing facility by sensors of complementary capabilities. To support the development of these sensors and networks, the National Institute of Standards and Technology (NIST) manufacturing perception sensor ontology effort looks to create an ontology of sensors, sensor networks, sensor capabilities, environmental objects, and environmental conditions so as to better define and anticipate the wide range of perception systems needed. 
The ontology will include:", "title": "" }, { "docid": "3a723bb57dedaaf473384243fe6e1ab1", "text": "Objective\nWe explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality.\n\n\nMaterials and Methods\nData were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches.\n\n\nResults\nUsing a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP).\n\n\nConclusion\nDeep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months.", "title": "" }, { "docid": "9d55947637b358c4dc30d7ba49885472", "text": "Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models. CCS Concepts •Information systems→ Retrieval models and ranking;", "title": "" }, { "docid": "ff20e5cd554cd628eba07776fa9a5853", "text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many messages types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. 
We also discuss our experience in using our log parser to assist the log sanitization.", "title": "" }, { "docid": "1303770cf8d0f1b0f312feb49281aa10", "text": "A terahertz metamaterial absorber (MA) with properties of broadband width, polarization-insensitive, wide angle incidence is presented. Different from the previous methods to broaden the absorption width, this letter proposes a novel combinatorial way which units a nested structure with multiple metal-dielectric layers. We numerically investigate the proposed MA, and the simulation results show that the absorber achieves a broadband absorption over a frequency range of 0.896 THz with the absorptivity greater than 90%. Moreover, the full-width at half maximum of the absorber is up to 1.224 THz which is 61.2% with respect to the central frequency. The mechanism for the broadband absorption originates from the overlapping of longitudinal coupling between layers and coupling of the nested structure. Importantly, the nested structure makes a great contribution to broaden the absorption width. Thus, constructing a nested structure in a multi-layer absorber may be considered as an effective way to design broadband MAs.", "title": "" }, { "docid": "ea262ac413534326feaed7adf8455881", "text": "Numerous sensors in modern mobile phones enable a range of people-centric applications. This paper envisions a system called PhonePoint Pen that uses the in-built accelerometer in mobile phones to recognize human writing. By holding the phone like a pen, a user should be able to write short messages or draw simple diagrams in the air. The acceleration due to hand gestures can be translated into geometric strokes, and recognized as characters. We prototype the PhonePoint Pen on the Nokia N95 platform, and evaluate it through real users. Results show that English characters can be identified with an average accuracy of 91.9%, if the users conform to a few reasonable constraints. Future work is focused on refining the prototype, with the goal of offering a new user-experience that complements keyboards and touch-screens.", "title": "" }, { "docid": "aaf30f184fcea3852f73a5927100cac7", "text": "Dyslexia is a neurodevelopmental reading disability estimated to affect 5-10% of the population. While there is yet no full understanding of the cause of dyslexia, or agreement on its precise definition, it is certain that many individuals suffer persistent problems in learning to read for no apparent reason. Although it is generally agreed that early intervention is the best form of support for children with dyslexia, there is still a lack of efficient and objective means to help identify those at risk during the early years of school. Here we show that it is possible to identify 9-10 year old individuals at risk of persistent reading difficulties by using eye tracking during reading to probe the processes that underlie reading ability. In contrast to current screening methods, which rely on oral or written tests, eye tracking does not depend on the subject to produce some overt verbal response and thus provides a natural means to objectively assess the reading process as it unfolds in real-time. Our study is based on a sample of 97 high-risk subjects with early identified word decoding difficulties and a control group of 88 low-risk subjects. These subjects were selected from a larger population of 2165 school children attending second grade. 
Using predictive modeling and statistical resampling techniques, we develop classification models from eye tracking records less than one minute in duration and show that the models are able to differentiate high-risk subjects from low-risk subjects with high accuracy. Although dyslexia is fundamentally a language-based learning disability, our results suggest that eye movements in reading can be highly predictive of individual reading ability and that eye tracking can be an efficient means to identify children at risk of long-term reading difficulties.", "title": "" }, { "docid": "45a8fea3e8d780c65811cee79082237f", "text": "Pedestrian dead reckoning, especially on smart-phones, is likely to play an increasingly important role in indoor tracking and navigation, due to its low cost and ability to work without any additional infrastructure. A challenge however, is that positioning, both in terms of step detection and heading estimation, must be accurate and reliable, even when the use of the device is so varied in terms of placement (e.g. handheld or in a pocket) or orientation (e.g holding the device in either portrait or landscape mode). Furthermore, the placement can vary over time as a user performs different tasks, such as making a call or carrying the device in a bag. A second challenge is to be able to distinguish between a true step and other periodic motion such as swinging an arm or tapping when the placement and orientation of the device is unknown. If this is not done correctly, then the PDR system typically overestimates the number of steps taken, leading to a significant long term error. We present a fresh approach, robust PDR (R-PDR), based on exploiting how bipedal motion impacts acquired sensor waveforms. Rather than attempting to recognize different placements through sensor data, we instead simply determine whether the motion of one or both legs impact the measurements. In addition, we formulate a set of techniques to accurately estimate the device orientation, which allows us to very accurately (typically over 99%) reject false positives. We demonstrate that regardless of device placement, we are able to detect the number of steps taken with >99.4% accuracy. R-PDR thus addresses the two main limitations facing existing PDR techniques.", "title": "" }, { "docid": "6a2e3c783b468474ca0f67d7c5af456c", "text": "We evaluated the cytotoxic effects of four prostaglandin analogs (PGAs) used to treat glaucoma. First we established primary cultures of conjunctival stromal cells from healthy donors. Then cell cultures were incubated with different concentrations (0, 0.1, 1, 5, 25, 50 and 100%) of commercial formulations of bimatoprost, tafluprost, travoprost and latanoprost for increasing periods (5 and 30 min, 1 h, 6 h and 24 h) and cell survival was assessed with three different methods: WST-1, MTT and calcein/AM-ethidium homodimer-1 assays. Our results showed that all PGAs were associated with a certain level of cell damage, which correlated significantly with the concentration of PGA used, and to a lesser extent with culture time. Tafluprost tended to be less toxic than bimatoprost, travoprost and latanoprost after all culture periods. The results for WST-1, MTT and calcein/AM-ethidium homodimer-1 correlated closely. When the average lethal dose 50 was calculated, we found that the most cytotoxic drug was latanoprost, whereas tafluprost was the most sparing of the ocular surface in vitro. 
These results indicate the need to design novel PGAs with high effectiveness but free from the cytotoxic effects that we found, or at least to obtain drugs that are functional at low dosages. The fact that the commercial formulation of tafluprost used in this work was preservative-free may support the current tendency to eliminate preservatives from eye drops for clinical use.", "title": "" }, { "docid": "9f3f5e2baa1bff4aa28a2ce2a4c47088", "text": "One of the most perplexing problems in risk analysis is why some relatively minor risks or risk events, as assessed by technical experts, often elicit strong public concerns and result in substantial impacts upon society and economy. This article sets forth a conceptual framework that seeks to link systematically the technical assessment of risk with psychological, sociological, and cultural perspectives of risk perception and risk-related behavior. The main thesis is that hazards interact with psychological, social, institutional, and cultural processes in ways that may amplify or attenuate public responses to the risk or risk event. A structural description of the social amplification of risk is now possible. Amplification occurs at two stages: in the transfer of information about the risk, and in the response mechanisms of society. Signals about risk are processed by individual and social amplification stations, including the scientist who communicates the risk assessment, the news media, cultural groups, interpersonal networks, and others. Key steps of amplifications can be identified at each stage. The amplified risk leads to behavioral responses, which, in turn, result in secondary impacts. Models are presented that portray the elements and linkages in the proposed conceptual framework.", "title": "" }, { "docid": "d07ba52b14c098ca5e2178ce64fc4403", "text": "Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to log n-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with number of potential network parameters exceeding the sample size. The analysis gives some insights why multilayer feedforward neural networks perform well in practice. Interestingly, the depth (number of layers) of the neural network architectures plays an important role and our theory suggests that for nonparametric regression scaling the network depth with the logarithm of the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates.", "title": "" } ]
scidocsrr
f5b3fc2cdb8558c05e48482705db5285
Composing graphical models with neural networks for structured representations and fast inference
[ { "docid": "62d39d41523bca97939fa6a2cf736b55", "text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.", "title": "" } ]
[ { "docid": "27ebec7dcf4372a907e1952b67dbbe3e", "text": "A large sample (N = 141) of college students participated in both a conjunctive visual search task and an ambiguous figures task that have been used as tests of selective attention. Tests for effects of bilingualism on attentional control were conducted by both partitioning the participants into bilinguals and monolinguals and by treating bilingualism as a continuous variable, but there were no effects of bilingualism in any of the tests. Bayes factor analyses confirmed that the evidence substantially favored the null hypothesis. These new findings mesh with failures to replicate language-group differences in congruency-sequence effects, inhibition-of-return, and working memory capacity. The evidence that bilinguals are better than monolinguals at attentional control is equivocal at best.", "title": "" }, { "docid": "55d5e03e86a3b35dc2ee258dc5c6029f", "text": "This paper presents an approach to the automatic generation of electromechanical engineering designs. We apply Messy Genetic Algorithm optimization techniques to the evolution of assemblies composed of the Lego structures. Each design is represented as a labeled assembly graph and is evaluated based on a set of behavior and structural equations. The initial populations are generated at random and design candidates for subsequent generations are produced by user-specified selection techniques. Crossovers are applied by using cut and splice operators at the random points of the chromosomes; random mutations are applied to modify the graph with a certain low probability. This cycle will continue until a suitable design is found. The research contributions in this work include the development of a new GA encoding scheme for mechanical assemblies (Legos), as well as the creation of selection criteria for this domain. Our eventual goal is to introduce a simulation of electromechanical devices into our evaluation functions. We believe that this research creates a foundation for future work and it will apply GA techniques to the evolution of more complex and realistic electromechanical structures.", "title": "" }, { "docid": "e83a360cb318b948b221206b75664b23", "text": "Marine defaunation, or human-caused animal loss in the oceans, emerged forcefully only hundreds of years ago, whereas terrestrial defaunation has been occurring far longer. Though humans have caused few global marine extinctions, we have profoundly affected marine wildlife, altering the functioning and provisioning of services in every ocean. Current ocean trends, coupled with terrestrial defaunation lessons, suggest that marine defaunation rates will rapidly intensify as human use of the oceans industrializes. Though protected areas are a powerful tool to harness ocean productivity, especially when designed with future climate in mind, additional management strategies will be required. Overall, habitat degradation is likely to intensify as a major driver of marine wildlife loss. Proactive intervention can avert a marine defaunation disaster of the magnitude observed on land.", "title": "" }, { "docid": "48664108c3bea8cc90a8e431baaa4f78", "text": "Studying how privacy regulation might impact economic activity on the advertising-supported Internet.", "title": "" }, { "docid": "ead343ffee692a8645420c58016c129d", "text": "One of the most important applications in multiview imaging (MVI) is the development of advanced immersive viewing or visualization systems using, for instance, 3DTV. 
With the introduction of multiview TVs, it is expected that a new age of 3DTV systems will arrive in the near future. Image-based rendering (IBR) refers to a collection of techniques and representations that allow 3-D scenes and objects to be visualized in a realistic way without full 3-D model reconstruction. IBR uses images as the primary substrate. The potential for photorealistic visualization has tremendous appeal, and it has been receiving increasing attention over the years. Applications such as video games, virtual travel, and E-commerce stand to benefit from this technology. This article serves as a tutorial introduction and brief review of this important technology. First the classification, principles, and key research issues of IBR are discussed. Then, an object-based IBR system to illustrate the techniques involved and its potential application in view synthesis and processing are explained. Stereo matching, which is an important technique for depth estimation and view synthesis, is briefly explained and some of the top-ranked methods are highlighted. Finally, the challenging problem of interactive IBR is explained. Possible solutions and some state-of-the-art systems are also reviewed.", "title": "" }, { "docid": "850483f2db17a4f5d5a48db80d326dd3", "text": "The Internet has revolutionized healthcare by offering medical information ubiquitously to patients via the web search. The healthcare status, complex medical information needs of patients are expressed diversely and implicitly in their medical text queries. Aiming to better capture a focused picture of user's medical-related information search and shed insights on their healthcare information access strategies, it is challenging yet rewarding to detect structured user intentions from their diversely expressed medical text queries. We introduce a graph-based formulation to explore structured concept transitions for effective user intent detection in medical queries, where each node represents a medical concept mention and each directed edge indicates a medical concept transition. A deep model based on multi-task learning is introduced to extract structured semantic transitions from user queries, where the model extracts word-level medical concept mentions as well as sentence-level concept transitions collectively. A customized graph-based mutual transfer loss function is designed to impose explicit constraints and further exploit the contribution of mentioning a medical concept word to the implication of a semantic transition. We observe an 8% relative improvement in AUC and 23% relative reduction in coverage error by comparing the proposed model with the best baseline model for the concept transition inference task on real-world medical text queries.", "title": "" }, { "docid": "54537c242bc89fbf15d9191be80c5073", "text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. 
Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on a feature's number of true groundings need to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.", "title": "" }, { "docid": "34e8cbfa11983f896d9e159daf08cc27", "text": "XtratuM is a hypervisor designed to meet safety critical requirements. Initially designed for x86 architectures (version 2.0), it has been strongly redesigned for the SPARC v8 architecture and especially for the LEON2 processor. The current version 2.2 includes all the functionalities required to build safety critical systems based on ARINC 653, AUTOSAR and other standards. Although XtratuM does not provide an API compliant with these standards, partitions can easily offer the appropriate API to the applications. XtratuM is being used by the aerospace sector to build software building blocks of future generic on-board software dedicated to payload management units. XtratuM provides ARINC 653 scheduling policy, partition management, inter-partition communications, health monitoring, logbooks, traces, and other services that can easily be adapted to the ARINC standard. The configuration of the system is specified in a configuration file (XML format) and it is compiled to achieve a static configuration of the final container (XtratuM and the partition's code) to be deployed to the hardware board. As far as we know, XtratuM is the first hypervisor for the SPARC v8 architecture. In this paper, the main design aspects are discussed and the internal architecture described. An evaluation of the most significant metrics is also provided. This evaluation shows that the overhead of a hypervisor is lower than 3% if the slot duration is higher than 1 millisecond.", "title": "" }, { "docid": "2da84ca7d7db508a6f9a443f2dbae7c1", "text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. 
We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that Vote3Deep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.", "title": "" }, { "docid": "0c162c4f83294c4f701eabbd69f171f7", "text": "As storage systems grow in size and complexity, they are increasingly confronted with concurrent disk failures together with multiple unrecoverable sector errors. To ensure high data reliability and availability, erasure codes with high fault tolerance are required. In this article, we present a new family of erasure codes with high fault tolerance, named GRID codes. They are called such because they are a family of strip-based codes whose strips are arranged into multi-dimensional grids. In the construction of GRID codes, we first introduce a concept of matched codes and then discuss how to use matched codes to construct GRID codes. In addition, we propose an iterative reconstruction algorithm for GRID codes. We also discuss some important features of GRID codes. Finally, we compare GRID codes with several categories of existing codes. Our comparisons show that for large-scale storage systems, our GRID codes have attractive advantages over many existing erasure codes: (a) They are completely XOR-based and have very regular structures, ensuring easy implementation; (b) they can provide up to 15 and even higher fault tolerance; and (c) their storage efficiency can reach up to 80% and even higher. All the advantages make GRID codes more suitable for large-scale storage systems.", "title": "" }, { "docid": "c5428f44292952bfb9443f61aa6d6ce0", "text": "In this letter, a tunable protection switch device using open stubs for X-band low-noise amplifiers (LNAs) is proposed. The protection switch is implemented using p-i-n diodes. As the parasitic inductance in the p-i-n diodes may degrade the protection performance, tunable open stubs are attached to these diodes to obtain a grounding effect. The performance is optimized for the desired frequency band by adjusting the lengths of the microstrip line open stubs. The designed LNA protection switch is fabricated and measured, and sufficient isolation is obtained for a 200 MHz operating band. The proposed protection switch is suitable for solid-state power amplifier radars in which the LNAs need to be protected from relatively long pulses.", "title": "" }, { "docid": "77620bb2a19faffd4530e1814ca08f86", "text": "As in any academic discipline, the evaluation of proposed methodologies and techniques is of vital importance for assessing the validity of novel ideas or findings in Software Engineering. Over the years, a large number of evaluation approaches have been employed, some of them drawn from other domains and others particularly developed for the needs of software engineering related research. In this paper we present the results of a survey of evaluation techniques that have been utilized in research papers that appeared in three leading software engineering journals and propose a taxonomy of evaluation approaches which might be helpful towards the organization of knowledge regarding the different strategies for the validation of research outcomes. 
The applicability of the proposed taxonomy has been evaluated by classifying the articles retrieved from ICSE'2012.", "title": "" }, { "docid": "9f5e4d52df5f13a80ccdb917a899bb9e", "text": "This paper proposes a robust background model-based dense-visual-odometry (BaMVO) algorithm that uses an RGB-D sensor in a dynamic environment. The proposed algorithm estimates the background model represented by the nonparametric model from depth scenes and then estimates the ego-motion of the sensor using the energy-based dense-visual-odometry approach based on the estimated background model in order to consider moving objects. Experimental results demonstrate that the ego-motion is robustly obtained by BaMVO in a dynamic environment.", "title": "" }, { "docid": "26a9bf8c2e6a8dc0d13774fd614b8776", "text": "This paper addresses an open challenge in educational data mining, i.e., the problem of automatically mapping online courses from different providers (universities, MOOCs, etc.) onto a universal space of concepts, and predicting latent prerequisite dependencies (directed links) among both concepts and courses. We propose a novel approach for inference within and across course-level and concept-level directed graphs. In the training phase, our system projects partially observed course-level prerequisite links onto directed concept-level links; in the testing phase, the induced concept-level links are used to infer the unknown courselevel prerequisite links. Whereas courses may be specific to one institution, concepts are shared across different providers. The bi-directional mappings enable our system to perform interlingua-style transfer learning, e.g. treating the concept graph as the interlingua and transferring the prerequisite relations across universities via the interlingua. Experiments on our newly collected datasets of courses from MIT, Caltech, Princeton and CMU show promising results.", "title": "" }, { "docid": "4ede3f2caa829e60e4f87a9b516e28bd", "text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.", "title": "" }, { "docid": "9d9e9a25e19c83a2a435128823a6519a", "text": "The rapid deployment of millions of mobile sensors and smartphones has resulted in a demand for opportunistic encounter-based networking to support mobile social networking applications and proximity-based gaming. However, the success of these emerging networks is limited by the lack of effective and energy efficient neighbor discovery protocols. While probabilistic approaches perform well for the average case, they exhibit long tails resulting in high upper bounds on neighbor discovery time. 
Recent deterministic protocols, which allow nodes to wake up at specific timeslots according to a particular pattern, improve on the worst case bound, but do so by sacrificing average case performance. In response to these limitations, we have designed Searchlight, a highly effective asynchronous discovery protocol that is built on three basic ideas. First, it leverages the constant offset between periodic awake slots to design a simple probing-based approach to ensure discovery. Second, it allows awake slots to cover larger sections of time, which ultimately reduces total awake time drastically. Finally, Searchlight has the option to employ probabilistic techniques with its deterministic approach that can considerably improve its performance in the average case when all nodes have the same duty cycle. We validate Searchlight through analysis and real-world experiments on smartphones that show considerable improvement (up to 50%) in worst-case discovery latency over existing approaches in almost all cases, irrespective of duty cycle symmetry.", "title": "" }, { "docid": "0c162c4f83294c4f701eabbd69f171f7", "text": "This paper aims to explore how the principles of a well-known Web 2.0 service, the world's largest social music service \"Last.fm\" (www.last.fm), can be applied to research, what potential it could have in the world of research (e.g. an open and interdisciplinary database, usage-based reputation metrics, and collaborative filtering), and which challenges such a model would face in academia. A real-world application of these principles, \"Mendeley\" (www.mendeley.com), will be demoed at the IEEE e-Science Conference 2008.", "title": "" }, { "docid": "bf14f996f9013351aca1e9935157c0e3", "text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.", "title": "" }, { "docid": "07305bc3eab0d83772ea1ab8ceebed9d", "text": "This paper examines the effect of the freemium strategy on Google Play, an online marketplace for Android mobile apps.
By analyzing a large panel dataset consisting of 1,597 ranked mobile apps, we found that the freemium strategy is positively associated with increased sales volume and revenue of the paid apps. Higher sales rank and review rating of the free version of a mobile app both lead to higher sales rank of its paid version. However, only higher review rating of the free app contributes to higher revenue from the paid version, suggesting that although offering a free version is a viable way to improve the visibility of a mobile app, revenue is largely determined by product quality, not product visibility. Moreover, we found that the impact of review rating is not significant when the free version is offered, or when the mobile app is a hedonic app.", "title": "" } ]
scidocsrr
e745cdf3341de90bb9b19a4739da8659
Game design principles in everyday fitness applications
[ { "docid": "16d949f6915cbb958cb68a26c6093b6b", "text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.", "title": "" }, { "docid": "e5a3119470420024b99df2d6eb14b966", "text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?", "title": "" }, { "docid": "1aeca45f1934d963455698879b1e53e8", "text": "A sedentary lifestyle is a contributing factor to chronic diseases, and it is often correlated with obesity. To promote an increase in physical activity, we created a social computer game, Fish'n'Steps, which links a player’s daily foot step count to the growth and activity of an animated virtual character, a fish in a fish tank. As further encouragement, some of the players’ fish tanks included other players’ fish, thereby creating an environment of both cooperation and competition. In a fourteen-week study with nineteen participants, the game served as a catalyst for promoting exercise and for improving game players’ attitudes towards physical activity. Furthermore, although most player’s enthusiasm in the game decreased after the game’s first two weeks, analyzing the results using Prochaska's Transtheoretical Model of Behavioral Change suggests that individuals had, by that time, established new routines that led to healthier patterns of physical activity in their daily lives. Lessons learned from this study underscore the value of such games to encourage rather than provide negative reinforcement, especially when individuals are not meeting their own expectations, to foster long-term behavioral change.", "title": "" } ]
[ { "docid": "c5081f86c4a173a40175e65b05d9effb", "text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.", "title": "" }, { "docid": "928eb797289d2630ff2e701ced782a14", "text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.", "title": "" }, { "docid": "70ea3e32d4928e7fd174b417ec8b6d0e", "text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.", "title": "" }, { "docid": "fd4bd9edcaff84867b6e667401aa3124", "text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. 
Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378", "title": "" }, { "docid": "b1453c089b5b9075a1b54e4f564f7b45", "text": "Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal as shown by the recent Tesla autopilot crashes. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and the ones that can scale to larger networks suffer from high false positives and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10× larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training process of more robust neural networks.", "title": "" }, { "docid": "ad4d38ee8089a67353586abad319038f", "text": "State-of-the-art systems of Chinese Named Entity Recognition (CNER) require large amounts of hand-crafted features and domain-specific knowledge to achieve high performance. In this paper, we apply a bidirectional LSTM-CRF neural network that utilizes both character-level and radical-level representations. We are the first to use a character-based BLSTM-CRF neural architecture for CNER. By contrasting the results of different variants of LSTM blocks, we find the most suitable LSTM block for CNER. We are also the first to investigate Chinese radical-level representations in a BLSTM-CRF architecture and get better performance without carefully designed features.
We evaluate our system on the third SIGHAN Bakeoff MSRA data set for the simplified CNER task and achieve state-of-the-art performance of 90.95% F1.", "title": "" }, { "docid": "c256283819014d79dd496a3183116b68", "text": "For the 5th generation of terrestrial mobile communications, Multi-Carrier (MC) transmission based on non-orthogonal waveforms is a promising technology component compared to orthogonal frequency division multiplex (OFDM) in order to achieve higher throughput and enable flexible spectrum management. Coverage extension and service continuity can be provided considering satellites as additional components in future networks by allowing vertical handover to terrestrial radio interfaces. In this paper, the properties of Filter Bank Multicarrier (FBMC) as a potential MC transmission scheme are discussed, taking into account the requirements for the satellite-specific PHY-Layer, such as non-linear distortions due to High Power Amplifiers (HPAs). The performance for specific FBMC configurations is analyzed in terms of peak-to-average power ratio (PAPR), computational complexity, non-linear distortions, as well as sensitivity to carrier frequency offsets (CFOs). Even though FBMC and OFDM have similar PAPR and suffer comparable spectral regrowth at the output of the non-linear amplifier, simulations on link level show that FBMC still outperforms OFDM in terms of CFO sensitivity and symbol error rate in the presence of non-linear distortions.", "title": "" }, { "docid": "c2f807e336be1b8d918d716c07668ae1", "text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of the proposed converter are reduced switching losses, a reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to demonstrate its better soft-switching capability, reduced switching losses and improved efficiency compared with the conventional converter.", "title": "" }, { "docid": "7963adab39b58ab0334b8eef4149c59c", "text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence, although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.", "title": "" }, { "docid": "179d8f41102862710595671e5a819d70", "text": "Detecting changes in time series data is an important data analysis task with applications in various scientific domains.
In this paper, we propose a novel approach to address the problem of change detection in time series data, which can find both the amplitude and degree of changes. Our approach is based on wavelet footprints, proposed originally by the signal processing community for signal compression. We, however, exploit the properties of footprints to efficiently capture discontinuities in a signal. We show that transforming time series data using the footprint basis up to degree D generates nonzero coefficients only at the change points with degree up to D. Exploiting this property, we propose a novel change detection query processing scheme which employs footprint-transformed data to identify change points, their amplitudes, and degrees of change efficiently and accurately. We also present two methods for exact and approximate transformation of data. Our analytical and empirical results with both synthetic and real-world data show that our approach outperforms the best known change detection approach in terms of both performance and accuracy. Furthermore, unlike state-of-the-art approaches, our query response time is independent of the number of change points in the data and the user-defined change threshold.", "title": "" }, { "docid": "c59aaad99023e5c6898243db208a4c3c", "text": "This paper presents a method for automated vessel segmentation in retinal images. For each pixel in the field of view of the image, a 41-D feature vector is constructed, encoding information on the local intensity structure, spatial properties, and geometry at multiple scales. An AdaBoost classifier is trained on 789 914 gold standard examples of vessel and nonvessel pixels, then used for classifying previously unseen images. The algorithm was tested on the public digital retinal images for vessel extraction (DRIVE) set, frequently used in the literature and consisting of 40 manually labeled images with gold standard. Results were compared experimentally with those of eight algorithms as well as the additional manual segmentation provided by DRIVE. Training was confined to the dedicated training set from the DRIVE database, and the feature-based AdaBoost classifier (FABC) was tested on the 20 images from the test set. FABC achieved an area under the receiver operating characteristic (ROC) curve of 0.9561, in line with state-of-the-art approaches, but outperforming their accuracy (0.9597 versus 0.9473 for the nearest performer).", "title": "" }, { "docid": "e11b4a08fc864112d4f68db1ea9703e9", "text": "Forecasting is an integral part of any organization's decision-making process, allowing it to predict its targets and modify its strategy in order to improve its sales or productivity in the future. This paper evaluates and compares various machine learning models, namely ARIMA, Auto Regressive Neural Network (ARNN), XGBoost, SVM, hybrid models like Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost and Hybrid ARIMA-SVM, and STL Decomposition (using ARIMA, Snaive, XGBoost) to forecast sales of a drug store company called Rossmann. The training data set contains past sales and supplemental information about the drug stores. The accuracy of these models is measured by metrics such as MAE and RMSE. Initially, a linear model, ARIMA, was applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as neural networks, XGBoost and SVM were used. Nonlinear models performed better than ARIMA and gave low RMSE.
Then, to further optimize the performance, composite models were designed using a hybrid technique and a decomposition technique. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost and Hybrid ARIMA-SVM models were used, and all of them performed better than their respective individual models. Then, a composite model was designed using STL Decomposition, where the decomposed components, namely the seasonal, trend and remainder components, were forecast by Snaive, ARIMA and XGBoost. STL gave better results than the individual and hybrid models. This paper evaluates and analyzes why composite models give better results than an individual model and states that the decomposition technique is better than the hybrid technique for this application.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8c2e69380cebdd6affd43c6bfed2fc51", "text": "A fundamental property of many plasma-membrane proteins is their association with the underlying cytoskeleton to determine cell shape, and to participate in adhesion, motility and other plasma-membrane processes, including endocytosis and exocytosis. The ezrin–radixin–moesin (ERM) proteins are crucial components that provide a regulated linkage between membrane proteins and the cortical cytoskeleton, and also participate in signal-transduction pathways. The closely related tumour suppressor merlin shares many properties with ERM proteins, yet also provides a distinct and essential function.", "title": "" }, { "docid": "a1046f5282cf4057fd143fdce79c6990", "text": "Rheumatoid arthritis is a multisystem disease with underlying immune mechanisms. Osteoarthritis is a debilitating, progressive disease of diarthrodial joints associated with the aging process. Although much is known about the pathogenesis of rheumatoid arthritis and osteoarthritis, our understanding of some immunologic changes remains incomplete. This study examines the numeric changes in the T cell subsets and the alterations in the levels of some cytokines and adhesion molecules in these lesions. To accomplish this goal, peripheral blood and synovial fluid samples were obtained from 24 patients with rheumatoid arthritis, 15 patients with osteoarthritis and six healthy controls. The counts of CD4+ and CD8+ T lymphocytes were examined using flow cytometry. The levels of some cytokines (TNF-α, IL-1β, IL-10, and IL-17) and a soluble intercellular adhesion molecule-1 (sICAM-1) were measured in the sera and synovial fluids using an enzyme-linked immunosorbent assay. We found some variations in the counts of T cell subsets, the levels of cytokines and the sICAM-1 adhesion molecule between the healthy controls and the patients with arthritis. High levels of IL-1β, IL-10, IL-17 and TNF-α (in the serum and synovial fluid) were observed in arthritis compared to the healthy controls. In rheumatoid arthritis, a high serum level of sICAM-1 was found compared to its level in the synovial fluid. A high CD4+/CD8+ T cell ratio was found in the blood of the patients with rheumatoid arthritis.
In rheumatoid arthritis, the cytokine levels correlated positively with some clinicopathologic features. To conclude, the development of rheumatoid arthritis and osteoarthritis is associated with alteration of the levels of some cytokines. The assessment of these immunologic changes may have potential prognostic roles.", "title": "" }, { "docid": "15e034d722778575b43394b968be19ad", "text": "Elections are contests for the highest stakes in national politics and the electoral system is a set of predetermined rules for conducting elections and determining their outcome. Thus defined, the electoral system is distinguishable from the actual conduct of elections as well as from the wider conditions surrounding the electoral contest, such as the state of civil liberties, restraints on the opposition and access to the mass media. While all these aspects are of obvious importance to free and fair elections, the main interest of this study is the electoral system.", "title": "" }, { "docid": "77b78ec70f390289424cade3850fc098", "text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of ECs as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.", "title": "" }, { "docid": "11a1c92620d58100194b735bfc18c695", "text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the ε-pseudospectral abscissa of A + BKC, for a fixed ε ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well-known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature.
Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.", "title": "" }, { "docid": "02469f669769f5c9e2a9dc49cee20862", "text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.", "title": "" }, { "docid": "24e1a6f966594d4230089fc433e38ce6", "text": "The need for omnidirectional antennas for wireless applications has increased considerably. The antennas are used in a variety of bands anywhere from 1.7 to 2.5 GHz, in different configurations which mainly differ in gain. The omnidirectionality is mostly obtained using back-to-back elements or simply using dipoles in different collinear-array configurations. The antenna proposed in this paper is a patch which was built in a cylindrical geometry rather than a planar one, and which generates an omnidirectional pattern in the H-plane.", "title": "" } ]
scidocsrr
0b8742ea9f684f8439af828120db0df2
Learning beyond datasets: Knowledge Graph Augmented Neural Networks for Natural Language Processing
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "4e2fbac1742c7afe9136e274150d6ee9", "text": "Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper addresses a new issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples, and proposes a novel generative model for embedding, TransG. The new model can discover latent semantics for a relation and leverage a mixture of relation-specific component vectors to embed a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, which is able to deal with multiple relation semantics. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines.", "title": "" } ]
[ { "docid": "4236e1b86150a9557b518b789418f048", "text": "Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30 s of the signal of a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decisions trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields the state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. As sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver the state-of-the-art classification performance with a small computational cost.", "title": "" }, { "docid": "94ea3cbf3df14d2d8e3583cb4714c13f", "text": "The emergence of computers as an essential tool in scientific research has shaken the very foundations of differential modeling. Indeed, the deeply-rooted abstraction of smoothness, or differentiability, seems to inherently clash with a computer's ability of storing only finite sets of numbers. While there has been a series of computational techniques that proposed discretizations of differential equations, the geometric structures they are supposed to simulate are often lost in the process.", "title": "" }, { "docid": "9869bc5dfc8f20b50608f0d68f7e49ba", "text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. 
Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.", "title": "" }, { "docid": "e645deb8bfd17dd8ef657ef0a0e0e960", "text": "Employee engagement refers to the level of commitment workers make to their employer, seen in their willingness to stay at the firm and to go beyond the call of duty. Firms want employees that are highly motivated and feel they have a real stake in the company’s success. Such employees are willing to finish tasks in their own time and see a strong link between the firm’s success and their own career prospects. In short, motivated, empowered employees work hand in hand with employers in an atmosphere of mutual trust. Companies with engaged workforces have also reported less absenteeism, more engagement with customers, greater employee satisfaction, fewer mistakes, fewer employees leaving, and naturally higher profits. Such is the power of this concept that former Secretary of State for Business, Peter Mandelson, commissioned David McLeod and Nita Clarke to investigate how much UK competitiveness could be enhanced by wider use of employee engagement. David and Nita concluded that in a world where work tasks have become increasingly similar, engaged employees could give some companies the edge over their rivals. They also identified significant barriers to engagement, such as a lack of appreciation for the concept of employee engagement by some companies and managers. Full participation by line managers is particularly crucial. From the employee point of view, it is easy to view engagement as a management fad, particularly if the company fails to demonstrate the necessary commitment. Some also feel that in a recession, employee engagement becomes less of a priority.", "title": "" }, { "docid": "2fad2d005416a59ba2d876a297cc5215", "text": "Executive approaches to creativity emphasize that generating creative ideas can be hard and requires mental effort. Few studies, however, have examined effort-related physiological activity during creativity tasks. Using motivational intensity theory as a framework, we examined predictors of effort-related cardiac activity during a creative challenge. A sample of 111 adults completed a divergent thinking task. Sympathetic (PEP and RZ) and parasympathetic (RSA and RMSSD) outcomes were assessed using impedance cardiography. As predicted, people with high creative achievement (measured with the Creative Achievement Questionnaire) showed significantly greater increases in sympathetic activity from baseline to task, reflecting higher effort. People with more creative achievements generated ideas that were significantly more creative, and creative performance correlated marginally with PEP and RZ. The results support the view that creative thought can be a mental challenge.", "title": "" }, { "docid": "d3997f030d5d7287a4c6557681dc7a46", "text": "This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way.
Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.", "title": "" }, { "docid": "254b380eacf71429dd1d4d6589c69262", "text": "Big data technology offers unprecedented opportunities to society as a whole and also to its individual members. At the same time, this technology poses significant risks to those it overlooks. In this article, we give an overview of recent technical work on diversity, particularly in selection tasks, discuss connections between diversity and fairness, and identify promising directions for future work that will position diversity as an important component of a data-responsible society. We argue that diversity should come to the forefront of our discourse, for reasons that are both ethical, to mitigate the risks of exclusion, and utilitarian, to enable more powerful, accurate, and engaging data analysis and use.", "title": "" }, { "docid": "e984ca3539c2ea097885771e52bdc131", "text": "This study proposes and tests a novel theoretical mechanism to explain increased self-disclosure intimacy in text-based computer-mediated communication (CMC) versus face-to-face (FtF) interactions. On the basis of the joint effects of perception intensification processes in CMC and the disclosure reciprocity norm, the authors predict a perception-behavior intensification effect, according to which people perceive partners’ initial disclosures as more intimate in CMC than FtF and, consequently, reciprocate with more intimate disclosures of their own. An experiment compares disclosure reciprocity in text-based CMC and FtF conversations, in which participants interacted with a confederate who made either intimate or nonintimate disclosures across the two communication media. The utterances generated by the participants are coded for disclosure frequency and intimacy. Consistent with the proposed perception-behavior intensification effect, CMC participants perceive the confederate’s disclosures as more intimate, and, importantly, reciprocate with more intimate disclosures than FtF participants do.", "title": "" }, { "docid": "7bef5a19f6d8f71d4aa12194dd02d0c4", "text": "To build a natural sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units corresponding to an arbitrary input text. In this paper we discuss our efforts in addressing the issues of Font-to-Akshara mapping, pronunciation rules for Aksharas, and text normalization in the context of building text-to-speech systems in Indian languages.", "title": "" }, { "docid": "551f1dca9718125b385794d8e12f3340", "text": "Social media provides increasing opportunities for users to voluntarily share their thoughts and concerns in a large volume of data. While user-generated data from each individual may not provide considerable information, when combined, they include hidden variables, which may convey significant events. In this paper, we pursue the question of whether social media context can provide socio-behavior \"signals\" for crime prediction.
The hypothesis is that publicly available crowd data in social media, in particular Twitter, may include predictive variables, which can indicate the changes in crime rates. We developed a model for crime trend prediction where the objective is to employ Twitter content to identify whether crime rates have dropped or increased for the prospective time frame. We also present a Twitter sampling model to collect historical data to avoid missing data over time. The prediction model was evaluated for different cities in the United States. The experiments revealed the correlation between features extracted from the content and crime rate directions. Overall, the study provides insight into the correlation of social content and crime trends as well as the impact of social data in providing predictive indicators.", "title": "" }, { "docid": "4725347f7d04e1ca052ee2b963dd140f", "text": "Classically, the procedure for reverse engineering binary code is to use a disassembler and to manually reconstruct the logic of the original program. Unfortunately, this is not always practical as obfuscation can make the binary extremely large by overcomplicating the program logic or adding bogus code. We present a novel approach, based on extracting semantic information by analyzing the behavior of the execution of a program. As obfuscation consists in manipulating the program while keeping its functionality, we argue that there are some characteristics of the execution that are strictly correlated with the underlying logic of the code and are invariant after applying obfuscation. We aim at highlighting these patterns by introducing different techniques for processing memory and execution traces. Our goal is to identify interesting portions of the traces by finding patterns that depend on the original semantics of the program. Using this approach, the high-level information about the business logic is revealed and the amount of binary code to be analyzed is considerably reduced. For testing and simulations we used obfuscated code of cryptographic algorithms, as our focus is on DRM systems and mobile banking applications. We argue, however, that the methods presented in this work are generic and apply to other domains where obfuscated code is used.", "title": "" }, { "docid": "2eaebb640d4b4cd74cb548dd209e06a8", "text": "Deep learning models have gained great success in many real-world applications. However, most existing networks are typically designed in heuristic manners and thus lack rigorous mathematical principles and derivations. Several recent studies build deep structures by unrolling a particular optimization model that involves task information. Unfortunately, due to the dynamic nature of network parameters, their resultant deep propagation networks do not possess the nice convergence property of the original optimization scheme. This paper provides a novel proximal unrolling framework to establish deep models by integrating experimentally verified network architectures and rich cues of the tasks. More importantly, we prove in theory that 1) the propagation generated by our unrolled deep model globally converges to a critical point of a given variational energy, and 2) the proposed framework is still able to learn priors from training data to generate a convergent propagation even when task information is only partially available. Indeed, these theoretical results are the best we can ask for, unless stronger assumptions are enforced.
Extensive experiments on various real-world applications verify the theoretical convergence and demonstrate the effectiveness of the designed deep models.", "title": "" }, { "docid": "2ee8910adbdff2111d64b9a06242050f", "text": "Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal.", "title": "" }, { "docid": "0ed5f426be75ebcc85da0c1ab0c1ad65", "text": "The impact of global air pollution on climate and the environment is a new focus in atmospheric science. Intercontinental transport and hemispheric air pollution by ozone jeopardize agricultural and natural ecosystems worldwide and have a strong effect on climate. Aerosols, which are spread globally but have a strong regional imbalance, change global climate through their direct and indirect effects on radiative forcing. In the 1990s, nitrogen oxide emissions from Asia surpassed those from North America and Europe and should continue to exceed them for decades. International initiatives to mitigate global air pollution require participation from both developed and developing countries.", "title": "" }, { "docid": "cc2fd3848c4e035c1d7176abd93fba10", "text": "Cloud computing data centers dynamically provide millions of virtual machines (VMs) in actual cloud markets. In this context, Virtual Machine Placement (VMP) is one of the most challenging problems in cloud infrastructure management, considering the large number of possible optimization criteria and different formulations that could be studied. The VMP literature includes relevant research topics such as energy efficiency, Service Level Agreement (SLA), Quality of Service (QoS), cloud service pricing schemes and carbon dioxide emissions, all of them with high economic and ecological impact.
This work classifies the most relevant VMP literature in an extensive, up-to-date survey and proposes a novel taxonomy in order to identify research opportunities and define a general vision of this research area.", "title": "" }, { "docid": "65b64f338b0126151a5e8dbcd4a9cf33", "text": "This free executive summary is provided by the National Academies as part of our mission to educate the world on issues of science, engineering, and health. If you are interested in reading the full book, please visit us online at http://www.nap.edu/catalog/9728.html . You may browse and search the full, authoritative version for free; you may also purchase a print or electronic version of the book. If you have questions or just want more information about the books published by the National Academies Press, please contact our customer service department toll-free at 888-624-8373.", "title": "" }, { "docid": "2e976aa51bc5550ad14083d5df7252a8", "text": "This paper presents a 60-dB gain bulk-driven Miller OTA operating from a 0.25-V power supply in a 130-nm digital CMOS process. The amplifier operates in the weak-inversion region with an input bulk-driven differential pair sporting positive-feedback source degeneration for transconductance enhancement. In addition, a distributed layout configuration is used for all the transistors to mitigate the effect of halo implants for higher output impedance. Combining these two approaches, we experimentally demonstrate a high gain of over 60 dB with just 18-nW power consumption from a 0.25-V power supply. The use of an enhanced bulk-driven differential pair and a distributed layout can help overcome some of the constraints imposed by nanometer CMOS processes on high-performance analog circuits in the weak-inversion region.", "title": "" }, { "docid": "5ae157937813e060a72ecb918d4dc5d1", "text": "Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges: the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Bayesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification. Our empirical study shows that the proposed methods have substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models.", "title": "" }, { "docid": "88a2ed90fc39a4ad083aff9fabcf2bc6", "text": "This two-part article provides an overview of the global burden of atherothrombotic cardiovascular disease. Part I initially discusses the epidemiological transition which has resulted in a decrease in deaths in childhood due to infections, with a concomitant increase in cardiovascular and other chronic diseases; and then provides estimates of the burden of cardiovascular (CV) diseases with specific focus on the developing countries.
Next, we summarize key information on risk factors for cardiovascular disease (CVD) and indicate that their importance may have been underestimated. Then, we describe overarching factors influencing variations in CVD by ethnicity and region and the influence of urbanization. Part II of this article describes the burden of CV disease by specific region or ethnic group, the risk factors of importance, and possible strategies for prevention.", "title": "" }, { "docid": "e56af4a3a8fbef80493d77b441ee1970", "text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. As design examples, wideband quasi-Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.", "title": "" } ]
scidocsrr
95c59e3a429233fd83bde1c55fa2c103
Cognitive, metacognitive, and motivational aspects of problem solving
[ { "docid": "c56c71775a0c87f7bb6c59d6607e5280", "text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.", "title": "" }, { "docid": "f71d0084ebb315a346b52c7630f36fb2", "text": "A theory of motivation and emotion is proposed in which causal ascriptions play a key role. It is first documented that in achievement-related contexts there are a few dominant causal perceptions. The perceived causes of success and failure share three common properties: locus, stability, and controllability, with intentionality and globality as other possible causal structures. The perceived stability of causes influences changes in expectancy of success; all three dimensions of causality affect a variety of common emotional experiences, including anger, gratitude, guilt, hopelessness, pity, pride, and shame. Expectancy and affect, in turn, are presumed to guide motivated behavior. The theory therefore relates the structure of thinking to the dynamics of feeling and action. Analysis of a created motivational episode involving achievement strivings is offered, and numerous empirical observations are examined from this theoretical position. The strength of the empirical evidence, the capability of this theory to address prevalent human emotions, and the potential generality of the conception are stressed.", "title": "" } ]
[ { "docid": "6c1a1e47ce91b2d9ae60a0cfc972b7e4", "text": "We investigate automatic classification of speculative language (‘hedging’), in biomedical text using weakly supervised machine learning. Our contributions include a precise description of the task with annotation guidelines, analysis and discussion, a probabilistic weakly supervised learning model, and experimental evaluation of the methods presented. We show that hedge classification is feasible using weakly supervised ML, and point toward avenues for future research.", "title": "" }, { "docid": "b7715fb5c6fb19363cb1bdaf92981643", "text": "The composition and antifungal activity of clove essential oil (EO), obtained from Syzygium aromaticum, were studied. Clove oil was obtained commercially and analysed by GC and GC-MS. The EO analysed showed a high content of eugenol (85.3 %). MICs, determined according to Clinical and Laboratory Standards Institute protocols, and minimum fungicidal concentration were used to evaluate the antifungal activity of the clove oil and its main component, eugenol, against Candida, Aspergillus and dermatophyte clinical and American Type Culture Collection strains. The EO and eugenol showed inhibitory activity against all the tested strains. To clarify its mechanism of action on yeasts and filamentous fungi, flow cytometric and inhibition of ergosterol synthesis studies were performed. Propidium iodide rapidly penetrated the majority of the yeast cells when the cells were treated with concentrations just over the MICs, meaning that the fungicidal effect resulted from an extensive lesion of the cell membrane. Clove oil and eugenol also caused a considerable reduction in the quantity of ergosterol, a specific fungal cell membrane component. Germ tube formation by Candida albicans was completely or almost completely inhibited by oil and eugenol concentrations below the MIC values. The present study indicates that clove oil and eugenol have considerable antifungal activity against clinically relevant fungi, including fluconazole-resistant strains, deserving further investigation for clinical application in the treatment of fungal infections.", "title": "" }, { "docid": "e06b2385b1b9a81b9678fa5be485151a", "text": "We propose a new weight compensation mechanism with a non-circular pulley and a spring. We show the basic principle and numerical design method to derive the shape of the non-circular pulley. After demonstration of the weight compensation for an inverted/ordinary pendulum system, we extend the same mechanism to a parallel five-bar linkage system, analyzing the required torques using transposed Jacobian matrices. Finally, we develop a three degree of freedom manipulator with relatively small output actuators and verified that the weight compensation mechanism significantly contributes to decrease static torques to keep the same posture within manipulator's work space.", "title": "" }, { "docid": "356361bf2ca0e821250e4a32d299d498", "text": "DRAM has been a de facto standard for main memory, and advances in process technology have led to a rapid increase in its capacity and bandwidth. In contrast, its random access latency has remained relatively stagnant, as it is still around 100 CPU clock cycles. Modern computer systems rely on caches or other latency tolerance techniques to lower the average access latency. However, not all applications have ample parallelism or locality that would help hide or reduce the latency. 
Moreover, applications' demands for memory space continue to grow, while the capacity gap between last-level caches and main memory is unlikely to shrink. Consequently, reducing the main-memory latency is important for application performance. Unfortunately, previous proposals have not adequately addressed this problem, as they have focused only on improving the bandwidth and capacity or reduced the latency at the cost of significant area overhead.\n We propose asymmetric DRAM bank organizations to reduce the average main-memory access latency. We first analyze the access and cycle times of a modern DRAM device to identify key delay components for latency reduction. Then we reorganize a subset of DRAM banks to reduce their access and cycle times by half with low area overhead. By synergistically combining these reorganized DRAM banks with support for non-uniform bank accesses, we introduce a novel DRAM bank organization with center high-aspect-ratio mats called CHARM. Experiments on a simulated chip-multiprocessor system show that CHARM improves both the instructions per cycle and system-wide energy-delay product up to 21% and 32%, respectively, with only a 3% increase in die area.", "title": "" }, { "docid": "948257544ca485b689d8663aaba63c5d", "text": "This paper presents a new single-pass shadow mapping technique that achieves better quality than the approaches based on perspective warping, such as perspective, light-space, and trapezoidal shadow maps. The proposed technique is appropriate for real-time rendering of large virtual environments that include dynamic objects. By performing operations in camera space, this solution successfully handles the general and the dueling frustum cases and produces high-quality shadows even for extremely large scenes. This paper also presents a fast nonlinear projection technique for shadow map stretching that enables complete utilization of the shadow map by eliminating wastage. The application of stretching results in a significant reduction in unwanted perspective aliasing, commonly found in all shadow mapping techniques. Technique is compared with other shadow mapping techniques, and the benefits of the proposed method are presented. The proposed shadow mapping technique is simple and flexible enough to handle most of the special scenarios. An API for a generic shadow mapping solution is presented. This API simplifies the generation of fast and high-quality shadows.", "title": "" }, { "docid": "2ceb6aae1478e42ffae56895e17a9e14", "text": "Proposed in 1994, the “QED project” was one of the seminally influential initiatives in automated reasoning: It envisioned the formalization of “all of mathematics” and the assembly of these formalizations in a single coherent database. Even though it never led to the concrete system, communal resource, or even joint research envisioned in the QED manifesto, the idea lives on and shapes the research agendas of a significant part of the community This paper surveys a decade of work on representation languages and knowledge management tools for mathematical knowledge conducted in the KWARC research group at Jacobs University Bremen. It assembles the various research strands into a coherent agenda for realizing the QED dream with modern insights and technologies.", "title": "" }, { "docid": "ad3970fe4a43977f521b9c8a68d32647", "text": "Current key initiatives in deep-space optical communications are treated in terms of historical context, contemporary trends, and prospects for the future. 
An architectural perspective focusing on high-level drivers, systems, and related operations concepts is provided. Detailed subsystem and component topics are not addressed. A brief overview of past ideas and architectural concepts sets the stage for current developments. Current requirements that might drive a transition from radio frequencies to optical communications are examined. These drivers include mission demand for data rates and/or data volumes; spectrum to accommodate such data rates; and desired power, mass, and cost benefits. As is typical, benefits come with associated challenges. For optical communications, these include atmospheric effects, link availability, pointing, and background light. The paper describes how NASA's Space Communication and Navigation Office will respond to the drivers, achieve the benefits, and mitigate the challenges, as documented in its Optical Communications Roadmap. Some nontraditional architectures and operations concepts are advanced in an effort to realize benefits and mitigate challenges as quickly as possible. Radio frequency communications is considered as both a competitor to and a partner with optical communications. The paper concludes with some suggestions for two affordable first steps that can yet evolve into capable architectures that will fulfill the vision inherent in optical communications.", "title": "" }, { "docid": "c1dbf418f72ad572b3b745a94fe8fbf7", "text": "In this work we show how to integrate prior statistical knowledge, obtained through principal components analysis (PCA), into a convolutional neural network in order to obtain robust predictions even when dealing with corrupted or noisy data. Our network architecture is trained end-to-end and includes a specifically designed layer which incorporates the dataset modes of variation discovered via PCA and produces predictions by linearly combining them. We also propose a mechanism to focus the attention of the CNN on specific regions of interest of the image in order to obtain refined predictions. We show that our method is effective in challenging segmentation and landmark localization tasks.", "title": "" }, { "docid": "a26d98c1f9cb219f85153e04120053a7", "text": "The purpose of this paper is to examine the academic and athletic motivation and identify the factors that determine the academic performance among university students in the Emirates of Dubai. The study examined motivation based on non-traditional measure adopting a scale to measure both academic as well as athletic motivation. Keywords-academic performance, academic motivation, athletic performance, university students, business management, academic achievement, career motivation, sports motivation", "title": "" }, { "docid": "6e36dda80f462c23bb7f6224e741e13d", "text": "Usual way of character's animation is the use of motion captured data. Acquired bones' orientations are blended together according to user input in real-time. Although this massively used method gives a nice results, practical experience show how important is to have a system for interactive direct manipulation of character's skeleton in order to satisfy various tasks in Cartesian space. For this purpose, various methods for solving inverse kinematics problem are used. This paper presents three of such methods: Algebraical method based on limbs positioning; iterative optimization method based on Jacobian pseudo-inversion; and heuristic CCD iterative method. 
The paper describes them all in detail and discusses practical scope of their use in real-time applications.", "title": "" }, { "docid": "479b2ba292c60ac2441586ac3670e4b8", "text": "Educational Goals of Course(s): i. Explore and evaluate Multi-Objective Evolutionary algorithm (MOEA) space, Multi-Objective Problem (MOP) space, and parameter space along with MOEA performance comparisons ii. Motivate the student to investigate new areas of MOEA design, implementation, and performance metrics iii. Developed an ability to utilize and improve MOEA performance across a wide variety of application problem domains.", "title": "" }, { "docid": "a1046f5282cf4057fd143fdce79c6990", "text": "Rheumatoid arthritis is a multisystem disease with underlying immune mechanisms. Osteoarthritis is a debilitating, progressive disease of diarthrodial joints associated with the aging process. Although much is known about the pathogenesis of rheumatoid arthritis and osteoarthritis, our understanding of some immunologic changes remains incomplete. This study tries to examine the numeric changes in the T cell subsets and the alterations in the levels of some cytokines and adhesion molecules in these lesions. To accomplish this goal, peripheral blood and synovial fluid samples were obtained from 24 patients with rheumatoid arthritis, 15 patients with osteoarthritis and six healthy controls. The counts of CD4 + and CD8 + T lymphocytes were examined using flow cytometry. The levels of some cytokines (TNF-α, IL1-β, IL-10, and IL-17) and a soluble intercellular adhesion molecule-1 (sICAM-1) were measured in the sera and synovial fluids using enzyme linked immunosorbant assay. We found some variations in the counts of T cell subsets, the levels of cytokines and sICAM-1 adhesion molecule between the healthy controls and the patients with arthritis. High levels of IL-1β, IL-10, IL-17 and TNF-α (in the serum and synovial fluid) were observed in arthritis compared to the healthy controls. In rheumatoid arthritis, a high serum level of sICAM-1 was found compared to its level in the synovial fluid. A high CD4+/CD8+ T cell ratio was found in the blood of the patients with rheumatoid arthritis. In rheumatoid arthritis, the cytokine levels correlated positively with some clinicopathologic features. To conclude, the development of rheumatoid arthritis and osteoarthritis is associated with alteration of the levels of some cytokines. The assessment of these immunologic changes may have potential prognostic roles.", "title": "" }, { "docid": "d8ead5d749b9af092adf626245e8178a", "text": "This paper describes a LIN (Local Interconnect Network) Transmitter designed in a BCD HV technology. The key design target is to comply with EMI (electromagnetic interference) specification limits. The two main aspects are low EME (electromagnetic emission) and sufficient immunity against RF disturbance. A gate driver is proposed which uses a certain current summation network for lowering the slew rate on the one hand and being reliable against radio frequency (RF) disturbances within the automotive environment on the other hand. Nowadays the low cost single wire LIN Bus is used for establishing communication between sensors, actuators and other components.", "title": "" }, { "docid": "c45d911aea9d06208a4ef273c9ab5ff3", "text": "A wide range of research has used face data to estimate a person's engagement, in applications from advertising to student learning. 
An interesting and important question not addressed in prior work is if face-based models of engagement are generalizable and context-free, or do engagement models depend on context and task. This research shows that context-sensitive face-based engagement models are more accurate, at least in the space of web-based tools for trauma recovery. Estimating engagement is important as various psychological studies indicate that engagement is a key component to measure the effectiveness of treatment and can be predictive of behavioral outcomes in many applications. In this paper, we analyze user engagement in a trauma-recovery regime during two separate modules/tasks: relaxation and triggers. The dataset comprises of 8M+ frames from multiple videos collected from 110 subjects, with engagement data coming from 800+ subject self-reports. We build an engagement prediction model as sequence learning from facial Action Units (AUs) using Long Short Term Memory (LSTMs). Our experiments demonstrate that engagement prediction is contextual and depends significantly on the allocated task. Models trained to predict engagement on one task are only weak predictors for another and are much less accurate than context-specific models. Further, we show the interplay of subject mood and engagement using a very short version of Profile of Mood States (POMS) to extend our LSTM model.", "title": "" }, { "docid": "0719942bf0fc7ddf03b4caf6402dec30", "text": "Recent years have seen a renewed interest in the harvesting and conversion of solar energy. Among various technologies, the direct conversion of solar to chemical energy using photocatalysts has received significant attention. Although heterogeneous photocatalysts are almost exclusively semiconductors, it has been demonstrated recently that plasmonic nanostructures of noble metals (mainly silver and gold) also show significant promise. Here we review recent progress in using plasmonic metallic nanostructures in the field of photocatalysis. We focus on plasmon-enhanced water splitting on composite photocatalysts containing semiconductor and plasmonic-metal building blocks, and recently reported plasmon-mediated photocatalytic reactions on plasmonic nanostructures of noble metals. We also discuss the areas where major advancements are needed to move the field of plasmon-mediated photocatalysis forward.", "title": "" }, { "docid": "54c6e02234ce1c0f188dcd0d5ee4f04c", "text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.", "title": "" }, { "docid": "635d981a3f54735ccea336feb0ead45b", "text": "Keyphrase is an efficient representation of the main idea of documents. While background knowledge can provide valuable information about documents, they are rarely incorporated in keyphrase extraction methods. In this paper, we propose WikiRank, an unsupervised method for keyphrase extraction based on the background knowledge from Wikipedia. Firstly, we construct a semantic graph for the document. 
Then we transform the keyphrase extraction problem into an optimization problem on the graph. Finally, we get the optimal keyphrase set to be the output. Our method obtains improvements over other state-of-art models by more than 2% in F1-score.", "title": "" }, { "docid": "0ca588e42d16733bc8eef4e7957e01ab", "text": "Three-dimensional (3D) finite element (FE) models are commonly used to analyze the mechanical behavior of the bone under different conditions (i.e., before and after arthroplasty). They can provide detailed information but they are numerically expensive and this limits their use in cases where large or numerous simulations are required. On the other hand, 2D models show less computational cost, but the precision of results depends on the approach used for the simplification. Two main questions arise: Are the 3D results adequately represented by a 2D section of the model? Which approach should be used to build a 2D model that provides reliable results compared to the 3D model? In this paper, we first evaluate if the stem symmetry plane used for generating the 2D models of bone-implant systems adequately represents the results of the full 3D model for stair climbing activity. Then, we explore three different approaches that have been used in the past for creating 2D models: (1) without side-plate (WOSP), (2) with variable thickness side-plate and constant cortical thickness (SPCT), and (3) with variable thickness side-plate and variable cortical thickness (SPVT). From the different approaches investigated, a 2D model including a side-plate best represents the results obtained with the full 3D model with much less computational cost. The side-plate needs to have variable thickness, while the cortical bone thickness can be kept constant.", "title": "" }, { "docid": "f6b6b175f556e7ae88661b057eb1c373", "text": "Legacy encryption systems depend on sharing a key (public or private) among the peers involved in exchanging an encrypted message. However, this approach poses privacy concerns. The users or service providers with the key have exclusive rights on the data. Especially with popular cloud services, control over the privacy of the sensitive data is lost. Even when the keys are not shared, the encrypted material is shared with a third party that does not necessarily need to access the content. Moreover, untrusted servers, providers, and cloud operators can keep identifying elements of users long after users end the relationship with the services. Indeed, Homomorphic Encryption (HE), a special kind of encryption scheme, can address these concerns as it allows any third party to operate on the encrypted data without decrypting it in advance. Although this extremely useful feature of the HE scheme has been known for over 30 years, the first plausible and achievable Fully Homomorphic Encryption (FHE) scheme, which allows any computable function to perform on the encrypted data, was introduced by Craig Gentry in 2009. Even though this was a major achievement, different implementations so far demonstrated that FHE still needs to be improved significantly to be practical on every platform. Therefore, this survey focuses on HE and FHE schemes. First, we present the basics of HE and the details of the well-known Partially Homomorphic Encryption (PHE) and Somewhat Homomorphic Encryption (SWHE), which are important pillars for achieving FHE. Then, the main FHE families, which have become the base for the other follow-up FHE schemes, are presented. 
Furthermore, the implementations and recent improvements in Gentry-type FHE schemes are also surveyed. Finally, further research directions are discussed. This survey is intended to give a clear knowledge and foundation to researchers and practitioners interested in knowing, applying, and extending the state-of-the-art HE, PHE, SWHE, and FHE systems.", "title": "" }, { "docid": "a5082b49cc584548ac066b9c6ffb2452", "text": "In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, uids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision.", "title": "" } ]
scidocsrr
22ad268ada2c230126faa965804de169
Phase distribution of software development effort
[ { "docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb", "text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" } ]
[ { "docid": "56a7243414824a2e4ab3993dc3a90fbe", "text": "The primary objectives of periodontal therapy are to maintain and to obtain health and integrity of the insertion apparatus and to re-establish esthetics by means of the quantitative and qualitative restoration of the gingival margin. Esthetics can be considered essential to the success of any dental procedure. However, in cleft lip and palate patients gingival esthetics do not play a relevant role, since most patients present little gingiva exposure (Mikami, 1990). The treatment protocol for cleft palate patients is complex and often requires a myriad of surgical and rehabilitative procedures that last until adulthood. In order to rehabilitate these patients and provide them with adequate physical and psychological conditions for a good quality of life, plastic surgery has been taking place since the 19th century, with the development of new techniques. By the age of six months the patients have undergone lip repair procedures (Bill, 1956; Jolleys, 1954), followed by palatoplasty at the age of 1218 months. As a consequence of these surgical interventions, the formation of innumerous scars and fibrous tissue in the anterior region may cause some sequels, such as orofacial growth alterations (Quarta and Koch, 1989; Ozawa, 2001), a shallow vestibule with lack of attached gingiva and gingival margin mobility (Falcone, 1966). A shallow vestibule in the cleft lip and palate patient is associated with the contraction of the upper lip during healing (Iino et al, 2001), which causes deleterious effects on growth, facial expression, speech, orthodontic and prosthetic treatment problems, diminished keratinized gingiva, bone graft resorption and changes in the upper lip muscle pattern. The surgical protocol at the Hospital for Rehabilitation of Craniofacial Anomalies (HRCA) in Bauru consists of carrying out primary surgeries (cheiloplasty and palatoplasty) during the first months of Periodontal Health Re-Establishment in Cleft Lip and Palate Patients through Vestibuloplasty Associated with Free Gingival Graft", "title": "" }, { "docid": "a947380864130c898d15d7d34280825f", "text": "Automatic oil tank detection plays a very important role for remote sensing image processing. To accomplish the task, a hierarchical oil tank detector with deep surrounding features is proposed in this paper. The surrounding features extracted by the deep learning model aim at making the oil tanks more easily to recognize, since the appearance of oil tanks is a circle and this information is not enough to separate targets from the complex background. The proposed method is divided into three modules: 1) candidate selection; 2) feature extraction; and 3) classification. First, a modified ellipse and line segment detector (ELSD) based on gradient orientation is used to select candidates in the image. Afterward, the feature combing local and surrounding information together is extracted to represent the target. Histogram of oriented gradients (HOG) which can reliably capture the shape information is extracted to characterize the local patch. For the surrounding area, the convolutional neural network (CNN) trained in ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) contest is applied as a blackbox feature extractor to extract rich surrounding feature. Then, the linear support vector machine (SVM) is utilized as the classifier to give the final output. 
Experimental results indicate that the proposed method is robust under different complex backgrounds and has high detection rate with low false alarm.", "title": "" }, { "docid": "c0ec2818c7f34359b089acc1df5478c6", "text": "Methods We searched Medline from Jan 1, 2009, to Nov 19, 2013, limiting searches to phase 3, randomised trials of patients with atrial fi brillation who were randomised to receive new oral anticoagulants or warfarin, and trials in which both effi cacy and safety outcomes were reported. We did a prespecifi ed meta-analysis of all 71 683 participants included in the RE-LY, ROCKET AF, ARISTOTLE, and ENGAGE AF–TIMI 48 trials. The main outcomes were stroke and systemic embolic events, ischaemic stroke, haemorrhagic stroke, all-cause mortality, myocardial infarction, major bleeding, intracranial haemorrhage, and gastrointestinal bleeding. We calculated relative risks (RRs) and 95% CIs for each outcome. We did subgroup analyses to assess whether diff erences in patient and trial characteristics aff ected outcomes. We used a random-eff ects model to compare pooled outcomes and tested for heterogeneity.", "title": "" }, { "docid": "5ae974ffec58910ea3087aefabf343f8", "text": "With the ever-increasing use of multimedia contents through electronic commerce and on-line services, the problems associated with the protection of intellectual property, management of large database and indexation of content are becoming more prominent. Watermarking has been considered as efficient means to these problems. Although watermarking is a powerful tool, there are some issues with the use of it, such as the modification of the content and its security. With respect to this, identifying content itself based on its own features rather than watermarking can be an alternative solution to these problems. The aim of fingerprinting is to provide fast and reliable methods for content identification. In this paper, we present a new approach for image fingerprinting using the Radon transform to make the fingerprint robust against affine transformations. Since it is quite easy with modern computers to apply affine transformations to audio, image and video, there is an obvious necessity for affine transformation resilient fingerprinting. Experimental results show that the proposed fingerprints are highly robust against most signal processing transformations. Besides robustness, we also address other issues such as pairwise independence, database search efficiency and key dependence of the proposed method. r 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2f3046369c717cc3dc15632fc163a429", "text": "We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment. State-of-the-art face tracking methods in the VR context are focused on the animation of rigged 3D avatars (Li et al. 2015; Olszewski et al. 2016). Although they achieve good tracking performance, the results look cartoonish and not real. In contrast to these model-based approaches, FaceVR enables VR teleconferencing using an image-based technique that results in nearly photo-realistic outputs. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. 
Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions or change gaze directions in the prerecorded target video. In a live setup, we apply these newly introduced algorithmic components.", "title": "" }, { "docid": "de630d018f3ff24fad06976e8dc390fa", "text": "A critical first step in navigation of unmanned aerial vehicles is the detection of the horizon line. This information can be used for adjusting flight parameters, attitude estimation as well as obstacle detection and avoidance. In this paper, a fast and robust technique for precise detection of the horizon is presented. Our approach is to apply convolutional neural networks to the task, training them to detect the sky and ground regions as well as the horizon line in flight videos. Thorough experiments using large datasets illustrate the significance and accuracy of this technique for various types of terrain as well as seasonal conditions.", "title": "" }, { "docid": "ab231cbc45541b5bdbd0da82571b44ca", "text": "ABSTRACT Evidence of Sedona magnetic anomaly and brainwave EEG synchronization can be demonstrated with portable equipment on site in the field, during sudden magnetic events. Previously, we have demonstrated magnetic anomaly charts recorded in both known and unrecognized Sedona vortex activity locations. We have also shown a correlation or amplification of vortex phenomena with Schumann Resonance. Adding the third measurable parameter of brain wave activity, we demonstrate resonance and amplification among them. We suggest tiny magnetic crystals, biogenic magnetite, make human beings highly sensitive to ELF field fluctuations. Biological Magnetite could act as a transducer of both low frequency magnetic fields and RF fields.", "title": "" }, { "docid": "53e74115eceda124c28975cdaa8e4088", "text": "Current state-of-the-art motion planners rely on samplingbased planning to explore the problem space for a solution. However, sampling valid configurations in narrow or cluttered workspaces remains a challenge. If a valid path for the robot correlates to a path in the workspace, then the planning process can employ a representation of the workspace that captures its salient topological features. Prior approaches have investigated exploiting geometric decompositions of the workspace to bias sampling; while beneficial in some environments, complex narrow passages remain challenging to navigate. In this work, we present Dynamic Region-biased RRT, a novel samplingbased planner that guides the exploration of a Rapidly-exploring Random Tree (RRT) by moving sampling regions along an embedded graph that captures the workspace topology. These sampling regions are dynamically created, manipulated, and destroyed to greedily bias sampling through unexplored passages that lead to the goal. We show that our approach reduces online planning time compared with related methods on a set of maze-like problems.", "title": "" }, { "docid": "0d7ce42011c48232189c791e71c289f5", "text": "RECENT WORK in virtue ethics, particularly sustained reflection on specific virtues, makes it possible to argue that the classical list of cardinal virtues (prudence, justice, temperance, and fortitude) is inadequate, and that we need to articulate the cardinal virtues more correctly. 
With that end in view, the first section of this article describes the challenges of espousing cardinal virtues today, the second considers the inadequacy of the classical listing of cardinal virtues, and the third makes a proposal. Since virtues, no matter how general, should always relate to concrete living, the article is framed by a case.", "title": "" }, { "docid": "84e71d32b1f40eb59d63a0ec6324d79b", "text": "Typically a classifier trained on a given dataset (source domain) does not performs well if it is tested on data acquired in a different setting (target domain). This is the problem that domain adaptation (DA) tries to overcome and, while it is a well explored topic in computer vision, it is largely ignored in robotic vision where usually visual classification methods are trained and tested in the same domain. Robots should be able to deal with unknown environments, recognize objects and use them in the correct way, so it is important to explore the domain adaptation scenario also in this context. The goal of the project is to define a benchmark and a protocol for multimodal domain adaptation that is valuable for the robot vision community. With this purpose some of the state-of-the-art DA methods are selected: Deep Adaptation Network (DAN), Domain Adversarial Training of Neural Network (DANN), Automatic Domain Alignment Layers (AutoDIAL) and Adversarial Discriminative Domain Adaptation (ADDA). Evaluations have been done using different data types: RGB only, depth only and RGB-D over the following datasets, designed for the robotic community: RGB-D Object Dataset (ROD), Web Object Dataset (WOD), Autonomous Robot Indoor Dataset (ARID), Big Berkeley Instance Recognition Dataset (BigBIRD) and Active Vision Dataset. Although progresses have been made on the formulation of effective adaptation algorithms and more realistic object datasets are available, the results obtained show that, training a sufficiently good object classifier, especially in the domain adaptation scenario, is still an unsolved problem. Also the best way to combine depth with RGB informations to improve the performance is a point that needs to be investigated more.", "title": "" }, { "docid": "73e1b088461da774889ec2bd7ee2f524", "text": "In this paper, we propose a method for obtaining sentence-level embeddings. While the problem of securing word-level embeddings is very well studied, we propose a novel method for obtaining sentence-level embeddings. This is obtained by a simple method in the context of solving the paraphrase generation task. If we use a sequential encoder-decoder model for generating paraphrase, we would like the generated paraphrase to be semantically close to the original sentence. One way to ensure this is by adding constraints for true paraphrase embeddings to be close and unrelated paraphrase candidate sentence embeddings to be far. This is ensured by using a sequential pair-wise discriminator that shares weights with the encoder that is trained with a suitable loss function. Our loss function penalizes paraphrase sentence embedding distances from being too large. This loss is used in combination with a sequential encoder-decoder network. We also validated our method by evaluating the obtained embeddings for a sentiment analysis task. The proposed method results in semantic embeddings and outperforms the state-of-the-art on the paraphrase generation and sentiment analysis task on standard datasets. 
These results are also shown to be statistically significant.", "title": "" }, { "docid": "8c03df6650b3e400bc5447916d01820a", "text": "People called night owls habitually have late bedtimes and late times of arising, sometimes suffering a heritable circadian disturbance called delayed sleep phase syndrome (DSPS). Those with DSPS, those with more severe progressively-late non-24-hour sleep-wake cycles, and those with bipolar disorder may share genetic tendencies for slowed or delayed circadian cycles. We searched for polymorphisms associated with DSPS in a case-control study of DSPS research participants and a separate study of Sleep Center patients undergoing polysomnography. In 45 participants, we resequenced portions of 15 circadian genes to identify unknown polymorphisms that might be associated with DSPS, non-24-hour rhythms, or bipolar comorbidities. We then genotyped single nucleotide polymorphisms (SNPs) in both larger samples, using Illumina Golden Gate assays. Associations of SNPs with the DSPS phenotype and with the morningness-eveningness parametric phenotype were computed for both samples, then combined for meta-analyses. Delayed sleep and \"eveningness\" were inversely associated with loci in circadian genes NFIL3 (rs2482705) and RORC (rs3828057). A group of haplotypes overlapping BHLHE40 was associated with non-24-hour sleep-wake cycles, and less robustly, with delayed sleep and bipolar disorder (e.g., rs34883305, rs34870629, rs74439275, and rs3750275 were associated with n=37, p=4.58E-09, Bonferroni p=2.95E-06). Bright light and melatonin can palliate circadian disorders, and genetics may clarify the underlying circadian photoperiodic mechanisms. After further replication and identification of the causal polymorphisms, these findings may point to future treatments for DSPS, non-24-hour rhythms, and possibly bipolar disorder or depression.", "title": "" }, { "docid": "da7fc676542ccc6f98c36334d42645ae", "text": "Extracting the defects of the road pavement in images is difficult and, most of the time, one image is used alone. The difficulties of this task are: illumination changes, objects on the road, artefacts due to the dynamic acquisition. In this work, we try to solve some of these problems by using acquisitions from different points of view. In consequence, we present a new methodology based on these steps : the detection of defects in each image, the matching of the images and the merging of the different extractions. We show the increase in performances and more particularly how the false detections are reduced.", "title": "" }, { "docid": "347e7b80b2b0b5cd5f0736d62fa022ae", "text": "This article presents the results of an interview study on how people perceive and play social network games on Facebook. During recent years, social games have become the biggest genre of games if measured by the number of registered users. These games are designed to cater for large audiences in their design principles and values, a free-to-play revenue model and social network integration that make them easily approachable and playable with friends. Although these games have made the headlines and have been seen to revolutionize the game industry, we still lack an understanding of how people perceive and play them. For this article, we interviewed 18 Finnish Facebook users from a larger questionnaire respondent pool of 134 people. This study focuses on a user-centric approach, highlighting the emergent experiences and the meaning-making of social games players. 
Our findings reveal that social games are usually regarded as single player games with a social twist, and as suffering partly from their design characteristics, while still providing a wide spectrum of playful experiences for different needs. The free-to-play revenue model provides an easy access to social games, but people disagreed with paying for additional content for several reasons.", "title": "" }, { "docid": "6f0faf1a90d9f9b19fb2e122a26a0f77", "text": "Social media shatters the barrier to communicate anytime anywhere for people of all walks of life. The publicly available, virtually free information in social media poses a new challenge to consumers who have to discern whether a piece of information published in social media is reliable. For example, it can be difficult to understand the motivations behind a statement passed from one user to another, without knowing the person who originated the message. Additionally, false information can be propagated through social media, resulting in embarrassment or irreversible damages. Provenance data associated with a social media statement can help dispel rumors, clarify opinions, and confirm facts. However, provenance data about social media statements is not readily available to users today. Currently, providing this data to users requires changing the social media infrastructure or offering subscription services. Taking advantage of social media features, research in this nascent field spearheads the search for a way to provide provenance data to social media users, thus leveraging social media itself by mining it for the provenance data. Searching for provenance data reveals an interesting problem space requiring the development and application of new metrics in order to provide meaningful provenance data to social media users. This lecture reviews the current research on information provenance, explores exciting research opportunities to address pressing needs, and shows how data mining can enable a social media user to make informed judgements about statements published in social media.", "title": "" }, { "docid": "cf21fd00999dff7d974f39b99e71bb13", "text": "Taking r > 0, let π2r(x) denote the number of prime pairs (p, p+ 2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π2r(x) ∼ 2C2r li2(x) with an explicit constant C2r > 0. There seems to be no good conjecture for the remainders ω2r(x) = π2r(x)−2C2r li2(x) that corresponds to Riemann’s formula for π(x)−li(x). However, there is a heuristic approximate formula for averages of the remainders ω2r(x) which is supported by numerical results.", "title": "" }, { "docid": "ed66f39bda7ccd5c76f64543b5e3abd6", "text": "BACKGROUND\nLoeys-Dietz syndrome is a recently recognized multisystemic disorder caused by mutations in the genes encoding the transforming growth factor-beta receptor. It is characterized by aggressive aneurysm formation and vascular tortuosity. We report the musculoskeletal demographic, clinical, and imaging findings of this syndrome to aid in its diagnosis and treatment.\n\n\nMETHODS\nWe retrospectively analyzed the demographic, clinical, and imaging data of sixty-five patients with Loeys-Dietz syndrome seen at one institution from May 2007 through December 2008.\n\n\nRESULTS\nThe patients had a mean age of twenty-one years, and thirty-six of the sixty-five patients were less than eighteen years old. Previous diagnoses for these patients included Marfan syndrome (sixteen patients) and Ehlers-Danlos syndrome (two patients). 
Spinal and foot abnormalities were the most clinically important skeletal findings. Eleven patients had talipes equinovarus, and nineteen patients had cervical anomalies and instability. Thirty patients had scoliosis (mean Cobb angle [and standard deviation], 30 degrees +/- 18 degrees ). Two patients had spondylolisthesis, and twenty-two of thirty-three who had computed tomography scans had dural ectasia. Thirty-five patients had pectus excavatum, and eight had pectus carinatum. Combined thumb and wrist signs were present in approximately one-fourth of the patients. Acetabular protrusion was present in approximately one-third of the patients and was usually mild. Fourteen patients had previous orthopaedic procedures, including scoliosis surgery, cervical stabilization, clubfoot correction, and hip arthroplasty. Features of Loeys-Dietz syndrome that are important clues to aid in making this diagnosis include bifid broad uvulas, hypertelorism, substantial joint laxity, and translucent skin.\n\n\nCONCLUSIONS\nPatients with Loeys-Dietz syndrome commonly present to the orthopaedic surgeon with cervical malformations, spinal and foot deformities, and findings in the craniofacial and cutaneous systems.\n\n\nLEVEL OF EVIDENCE\nTherapeutic Level IV. See Instructions to Authors for a complete description of levels of evidence.", "title": "" }, { "docid": "20705a14783c89ac38693b2202363c1f", "text": "This paper analyzes the effect of employee recognition, pay, and benefits on job satisfaction. In this cross-sectional study, survey responses from university students in the U.S. (n = 457), Malaysia (n = 347) and Vietnam (n = 391) were analyzed. Employee recognition, pay, and benefits were found to have a significant impact on job satisfaction, regardless of home country income level (high, middle or low income) and culture (collectivist or individualist). However, the effect of benefits on job satisfaction was significantly more important for U.S. respondents than for respondents from Malaysia and Vietnam. The authors conclude that both financial and nonfinancial rewards have a role in influencing job satisfaction, which ultimately impacts employee performance. Theoretical and practical implications for developing effective recruitment and retention policies for employees are also discussed.", "title": "" }, { "docid": "d3e8dce306eb20a31ac6b686364d0415", "text": "Lung diseases are the deadliest disease in the world. The computer aided detection system in lung diseases needed accurate lung segmentation to preplan the pulmonary treatment and surgeries. The researchers undergone the lung segmentation need a deep study and understanding of the traditional and recent papers developed in the lung segmentation field so that they can continue their research journey in an efficient way with successful outcomes. The need of reviewing the research papers is now a most wanted one for researches so this paper makes a survey on recent trends of pulmonary lung segmentation. Seven recent papers are carried out to analyze the performance characterization of themselves. The working methods, purpose for development, name of algorithm and drawbacks of the method are taken into consideration for the survey work. The tables and charts are drawn based on the reviewed papers. 
The study of lung segmentation research is more helpful to new and fresh researchers who are committed their research in lung segmentation.", "title": "" }, { "docid": "43f9e6edee92ddd0b9dfff885b69f64d", "text": "In this paper, we present a scalable and exact solution for probabilistic linear discriminant analysis (PLDA). PLDA is a probabilistic model that has been shown to provide state-of-the-art performance for both face and speaker recognition. However, it has one major drawback: At training time estimating the latent variables requires the inversion and storage of a matrix whose size grows quadratically with the number of samples for the identity (class). To date, two approaches have been taken to deal with this problem, to 1) use an exact solution that calculates this large matrix and is obviously not scalable with the number of samples or 2) derive a variational approximation to the problem. We present a scalable derivation which is theoretically equivalent to the previous nonscalable solution and thus obviates the need for a variational approximation. Experimentally, we demonstrate the efficacy of our approach in two ways. First, on labeled faces in the wild, we illustrate the equivalence of our scalable implementation with previously published work. Second, on the large Multi-PIE database, we illustrate the gain in performance when using more training samples per identity (class), which is made possible by the proposed scalable formulation of PLDA.", "title": "" } ]
scidocsrr
bf13df297bf633e8c7b9ef85c122c3ec
Unbiased Estimation of the Value of an Optimized Policy
[ { "docid": "e5cd8e17db6c3c65320c0581dfecee79", "text": "In this paper we propose methods for estimating heterogeneity in causal effects in experimental and observational studies and for conducting hypothesis tests about the magnitude of differences in treatment effects across subsets of the population. We provide a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects. The approach enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size, and without \"sparsity\" assumptions. We propose an \"honest\" approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation. Our approach builds on regression tree methods, modified to optimize for goodness of fit in treatment effects and to account for honest estimation. Our model selection criterion anticipates that bias will be eliminated by honest estimation and also accounts for the effect of making additional splits on the variance of treatment effect estimates within each subpopulation. We address the challenge that the \"ground truth\" for a causal effect is not observed for any individual unit, so that standard approaches to cross-validation must be modified. Through a simulation study, we show that for our preferred method honest estimation results in nominal coverage for 90% confidence intervals, whereas coverage ranges between 74% and 84% for nonhonest approaches. Honest estimation requires estimating the model with a smaller sample size; the cost in terms of mean squared error of treatment effects for our preferred method ranges between 7-22%.", "title": "" } ]
[ { "docid": "4eb9808144e04bf0c01121f2ec7261d2", "text": "The rise of multicore computing has greatly increased system complexity and created an additional burden for software developers. This burden is especially troublesome when it comes to optimizing software on modern computing systems. Autonomic or adaptive computing has been proposed as one method to help application programmers handle this complexity. In an autonomic computing environment, system services monitor applications and automatically adapt their behavior to increase the performance of the applications they support. Unfortunately, applications often run as performance black-boxes and adaptive services must infer application performance from low-level information or rely on system-specific ad hoc methods. This paper proposes a standard framework, Application Heartbeats, which applications can use to communicate both their current and target performance and which autonomic services can use to query these values.\n The Application Heartbeats framework is designed around the well-known idea of a heartbeat. At important points in the program, the application registers a heartbeat. In addition, the interface allows applications to express their performance in terms of a desired heart rate and/or a desired latency between specially tagged heartbeats. Thus, the interface provides a standard method for an application to directly communicate its performance and goals while allowing autonomic services access to this information. Thus, Heartbeat-enabled applications are no longer performance black-boxes. This paper presents the Applications Heartbeats interface, characterizes two reference implementations (one suitable for clusters and one for multicore), and illustrates the use of Heartbeats with several examples of systems adapting behavior based on feedback from heartbeats.", "title": "" }, { "docid": "645faf32f40732d291e604d7240f0546", "text": "Fault Diagnostics and Prognostics has been an increasing interest in recent years, as a result of the increased degree of automation and the growing demand for higher performance, efficiency, reliability and safety in industrial systems. On-line fault detection and isolation methods have been developed for automated processes. These methods include data mining methodologies, artificial intelligence methodologies or combinations of the two. Data Mining is the statistical approach of extracting knowledge from data. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. Activities in AI include searching, recognizing patterns and making logical inferences. This paper focuses on the various techniques used for Fault Diagnostics and Prognostics in Industry application domains.", "title": "" }, { "docid": "e05e91be6ca5423d795f17be8a1cec10", "text": "A novel active gate driver (AGD) for silicon carbide (SiC) MOSFET is studied in this paper. The gate driver (GD) increases the gate resistance value during the voltage plateau area of the gate-source voltage, in both turn-on and turn-off transitions. The proposed AGD is validated in both simulation and experimental environments and in hard-switching conditions. The simulation is evaluated in MATLAB/Simulink with 100 kHz of switching frequency and 600 V of dc-bus, whereas, the experimental part was realised at 100 kHz and 100 V of dc-bus. 
The results show that the gate driver can reduce the over-voltage and ringing, with low switching losses.", "title": "" }, { "docid": "c13bf429abfb718e6c3557ae71f45f8f", "text": "Researchers who study punishment and social control, like those who study other social phenomena, typically seek to generalize their findings from the data they have to some larger context: in statistical jargon, they generalize from a sample to a population. Generalizations are one important product of empirical inquiry. Of course, the process by which the data are selected introduces uncertainty. Indeed, any given dataset is but one of many that could have been studied. If the dataset had been different, the statistical summaries would have been different, and so would the conclusions, at least by a little. How do we calibrate the uncertainty introduced by data collection? Nowadays, this question has become quite salient, and it is routinely answered using wellknown methods of statistical inference, with standard errors, t-tests, and P-values, culminating in the “tabular asterisks” of Meehl (1978). These conventional answers, however, turn out to depend critically on certain rather restrictive assumptions, for instance, random sampling.1 When the data are generated by random sampling from a clearly defined population, and when the goal is to estimate population parameters from sample statistics, statistical inference can be relatively straightforward. The usual textbook formulas apply; tests of statistical significance and confidence intervals follow. If the random-sampling assumptions do not apply, or the parameters are not clearly defined, or the inferences are to a population that is only vaguely defined, the calibration of uncertainty offered by contemporary statistical technique is in turn rather questionable.2 Thus, investigators who use conventional statistical technique", "title": "" }, { "docid": "e6332552fb29765414020ee97184cc07", "text": "In A History of God, Karen Armstrong describes a division, made by fourth century Christians, between kerygma and dogma: 'religious truth … capable of being expressed and defined clearly and logically,' versus 'religious insights [that] had an inner resonance that could only be apprehended by each individual in his own time during … contemplation' (Armstrong, 1993, p.114). This early dual-process theory had its roots in Plato and Aristotle, who suggested a division between 'philosophy,' which could be 'expressed in terms of reason and thus capable of proof,' and knowledge contained in myths, 'which eluded scientific demonstration' (Armstrong, 1993, 113–14). This division—between what can be known and reasoned logically versus what can only be experienced and apprehended—continued to influence Western culture through the centuries, and arguably underlies our current dual-process theories of reasoning. In psychology, the division between these two forms of understanding have been described in many different ways. The underlying theme of 'overtly reasoned' versus 'perceived, intuited' often ties these dual process theories together. In Western culture, the latter form of thinking has often been maligned (Dijksterhuis and Nordgren, 2006; Gladwell, 2005; Lieberman, 2000). 
Recently, cultural psychologists have suggested that although the distinction itself—between reasoned and intuited knowledge—may have precedents in the intellectual traditions of other cultures, the privileging of the former rather than the latter may be peculiar to Western cultures. The Chinese philosophical tradition illustrates this difference of emphasis. Instead of an epistemology that was guided by abstract rules, 'the Chinese in esteeming what was immediately perceptible—especially visually perceptible—sought intuitive instantaneous understanding through direct perception' (Nakamura, 1960/1988, p.171). Taoism—the great Chinese philosophical school besides Confucianism—developed an epistemology that was particularly oriented towards concrete perception and direct experience (Fung, 1922; Nakamura, 1960/1988). Moreover, whereas the Greeks were concerned with definitions and devising rules for the purposes of classification, for many influential Taoist philosophers, such as Chuang Tzu, '… the problem of … how terms and attributes are to be delimited, leads one in precisely the wrong direction. Classifying or limiting knowledge fractures the greater knowledge' (Mote, 1971, p.102).", "title": "" }, { "docid": "7064d73864a64e2b76827e3252390659", "text": "In his original paper on the subject, Shannon found upper and lower bounds for the entropy of printed English based on the number of trials required for a subject to guess subsequent symbols in a given text. The guessing approach precludes asymptotic consistency of either the upper or lower bounds except for degenerate ergodic processes. Shannon’s technique of guessing the next symbol is altered by having the subject place sequential bets on the next symbol of text. If S_n denotes the subject’s capital after n bets at 27 for 1 odds, and if it is assumed that the subject knows the underlying probability distribution for the process X, then the entropy estimate is H_n(X) = (1 - (1/n) log_27 S_n) log_2 27 bits/symbol. If the subject does not know the true probability distribution for the stochastic process, then H_n(X) is an asymptotic upper bound for the true entropy. If X is stationary, E H_n(X) → H(X), H(X) being the true entropy of the process. Moreover, if X is ergodic, then by the Shannon-McMillan-Breiman theorem H_n(X) → H(X) with probability one. Preliminary indications are that English text has an entropy of approximately 1.3 bits/symbol, which agrees well with Shannon’s estimate.", "title": "" }, { "docid": "2bdefbc66ae89ce8e48acf0d13041e0a", "text": "We introduce an ac transconductance dispersion method (ACGD) to profile the oxide traps in an MOSFET without needing a body contact. The method extracts the spatial distribution of oxide traps from the frequency dependence of transconductance, which is attributed to charge trapping as modulated by an ac gate voltage. The results from this method have been verified by the use of the multifrequency charge pumping (MFCP) technique. In fact, this method complements the MFCP technique in terms of the trap depth that each method is capable of probing. We will demonstrate the method with InP passivated InGaAs substrates, along with electrically stressed Si N-MOSFETs.", "title": "" }, { "docid": "8dee3ada764a40fce6b5676287496ccd", "text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. 
While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.", "title": "" }, { "docid": "6659c5b954c14003e6e62c557fffa0f2", "text": "Existing language models such as n-grams for software code often fail to capture a long context where dependent code elements scatter far apart. In this paper, we propose a novel approach to build a language model for software code to address this particular issue. Our language model, partly inspired by human memory, is built upon the powerful deep learning-based Long Short Term Memory architecture that is capable of learning long-term dependencies which occur frequently in software code. Results from our intrinsic evaluation on a corpus of Java projects have demonstrated the effectiveness of our language model. This work contributes to realizing our vision for DeepSoft, an end-to-end, generic deep learning-based framework for modeling software and its development process.", "title": "" }, { "docid": "734fc66c7c745498ca6b2b7fc6780919", "text": "In this paper, we investigate the use of an unsupervised label clustering technique and demonstrate that it enables substantial improvements in visual relationship prediction accuracy on the Person in Context (PIC) dataset. We propose to group object labels with similar patterns of relationship distribution in the dataset into fewer categories. Label clustering not only mitigates both the large classification space and class imbalance issues, but also potentially increases data samples for each clustered category. We further propose to incorporate depth information as an additional feature into the instance segmentation model. The additional depth prediction path supplements the relationship prediction model in a way that bounding boxes or segmentation masks are unable to deliver. We have rigorously evaluated the proposed techniques and performed various ablation analysis to validate the benefits of them.", "title": "" }, { "docid": "0dd8e07502ed70b38fe6eb478115f5a8", "text": "Over the last 30 years, the video game industry has grown into a multi-billion dollar business. More children and adults are spending time playing computer games, consoles games, and online games than ever before. Violence is a dominant theme in most of the popular video games. This article reviews the current literature on effects of violent video game exposure on aggression-related variables. 
Exposure to violent video games causes increases in aggressive behavior, cognitions, and affect. Violent video game exposure also causes increases in physiological desensitization to real-life violence and decreases in helping behavior. The current video game literature is interpreted in terms of the general aggression model (GAM). Differences between violent video game exposure and violent television are also discussed.", "title": "" }, { "docid": "b078c459182501c52f38400e363cb2ca", "text": "Design considerations for piezoelectric-based energy harvesters for MEMS-scale sensors are presented, including a review of past work. Harvested ambient vibration energy can satisfy power needs of advanced MEMS-scale autonomous sensors for numerous applications, e.g., structural health monitoring. Coupled 1-D and modal (beam structure) electromechanical models are presented to predict performance, especially power, from measured low-level ambient vibration sources. Models are validated by comparison to prior published results and tests of a MEMS-scale device. A non-optimized prototype low-level ambient MEMS harvester producing 30 μW/cm3 is designed and modeled. A MEMS fabrication process for the prototype device is presented based on past work.", "title": "" }, { "docid": "1846bbaac13e4a8d5c34b1657a5b634c", "text": "Technology advancement entails an analog design scenario in which sophisticated signal processing algorithms are deployed in mixed-mode and radio frequency circuits to compensate for deterministic and random deficiencies of process technologies. This article reviews one such approach of applying a common communication technique, equalization, to correct for nonlinear distortions in analog circuits, which is analogized as non-ideal communication channels. The efficacy of this approach is showcased by a few latest advances in data conversion and RF transmission integrated circuits, where unprecedented energy efficiency, circuit linearity, and post-fabrication adaptability have been attained with low-cost digital processing.", "title": "" }, { "docid": "46658067ffc4fd2ecdc32fbaaa606170", "text": "Adolescent resilience research differs from risk research by focusing on the assets and resources that enable some adolescents to overcome the negative effects of risk exposure. We discuss three models of resilience-the compensatory, protective, and challenge models-and describe how resilience differs from related concepts. We describe issues and limitations related to resilience and provide an overview of recent resilience research related to adolescent substance use, violent behavior, and sexual risk behavior. We then discuss implications that resilience research has for intervention and describe some resilience-based interventions.", "title": "" }, { "docid": "05049ac85552c32f2c98d7249a038522", "text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. 
We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.", "title": "" }, { "docid": "7252372bdacaa69b93e52a7741c8f4c2", "text": "This paper introduces a novel type of actuator that is investigated by ESA for force-reflection to a wearable exoskeleton. The actuator consists of a DC motor that is relocated from the joint by means of Bowden cable transmissions. The actuator shall support the development of truly ergonomic and compact wearable man-machine interfaces. Important Bowden cable transmission characteristics are discussed, which dictate a specific hardware design for such an actuator. A first prototype is shown, which was used to analyze these basic characteristics of the transmissions and to proof the overall actuation concept. A second, improved prototype is introduced, which is currently used to investigate the achievable performance as a master actuator in a master-slave control with force-feedback. Initial experimental results are presented, which show good actuator performance in a 4 channel control scheme with a slave joint. The actuator features low movement resistance in free motion and can reflect high torques during hard contact situations. High contact stability can be achieved. The actuator seems therefore well suited to be implemented into the ESA exoskeleton for space-robotic telemanipulation", "title": "" }, { "docid": "87baf6381f4297b6e9af7659ef111f5c", "text": "Indonesian Sign Language System (ISLS) has been used widely by Indonesian for translating the sign language of disabled people to many applications, including education or entertainment. ISLS consists of static and dynamic gestures in representing words or sentences. However, individual variations in performing sign language have been a big challenge especially for developing automatic translation. The accuracy of recognizing the signs will decrease linearly with the increase of variations of gestures. This research is targeted to solve these issues by implementing the multimodal methods: leap motion and Myo armband controllers (EMG electrodes). By combining these two data and implementing Naïve Bayes classifier, we hypothesized that the accuracy of gesture recognition system for ISLS then can be increased significantly. The data streams captured from hand-poses were based on time-domain series method which will warrant the generated data synchronized accurately. The selected features for leap motion data would be based on fingers positions, angles, and elevations, while for the Myo armband would be based on electrical signal generated by eight channels of EMG electrodes relevant to the activities of linked finger’s and forearm muscles. 
This study will investigate the accuracy of gesture recognition by using either single modal or multimodal for translating Indonesian sign language. For multimodal strategy, both features datasets were merged into a single dataset which was then used for generating a model for each hand gesture. The result showed that there was a significant improvement on its accuracy, from 91% for single modal using leap motion to 98% for multi-modal (combined with Myo armband). The confusion matrix of multimodal method also showed better performance than the single-modality. Finally, we concluded that the implementation of multi-modal controllers for ISLS’s gesture recognition showed better accuracy and performance compared of single modality of using only leap motion controller.", "title": "" }, { "docid": "098da928abe37223e0eed0c6bf0f5747", "text": "With the proliferation of social media, fashion inspired from celebrities, reputed designers as well as fashion influencers has shortned the cycle of fashion design and manufacturing. However, with the explosion of fashion related content and large number of user generated fashion photos, it is an arduous task for fashion designers to wade through social media photos and create a digest of trending fashion. Designers do not just wish to have fashion related photos at one place but seek search functionalities that can let them search photos with natural language queries such as ‘red dress’, ’vintage handbags’, etc in order to spot the trends. This necessitates deep parsing of fashion photos on social media to localize and classify multiple fashion items from a given fashion photo. While object detection competitions such as MSCOCO have thousands of samples for each of the object categories, it is quite difficult to get large labeled datasets for fast fashion items. Moreover, state-of-the-art object detectors [2, 7, 9] do not have any functionality to ingest large amount of unlabeled data available on social media in order to fine tune object detectors with labeled datasets. In this work, we show application of a generic object detector [11], that can be pretrained in an unsupervised manner, on 24 categories from recently released Open Images V4 dataset. We first train the base architecture of the object detector using unsupervisd learning on 60K unlabeled photos from 24 categories gathered from social media, and then subsequently fine tune it on 8.2K labeled photos from Open Images V4 dataset. On 300 × 300 image inputs, we achieve 72.7% mAP on a test dataset of 2.4K photos while performing 11% to 17% better as compared to the state-of-the-art object detectors. We show that this improvement is due to our choice of architecture that lets us do unsupervised learning and that performs significantly better in identifying small objects. 1", "title": "" }, { "docid": "59d6765507415b0365f3193843d01459", "text": "Password typing is the most widely used identity verification method in World Wide Web based Electronic Commerce. Due to its simplicity, however, it is vulnerable to imposter attacks. Keystroke dynamics and password checking can be combined to result in a more secure verification system. We propose an autoassociator neural network that is trained with the timing vectors of the owner's keystroke dynamics and then used to discriminate between the owner and an imposter. An imposter typing the correct password can be detected with very high accuracy using the proposed approach. 
This approach can be effectively implemented by a Java applet and used in the World Wide Web.", "title": "" }, { "docid": "f77982f55dfd6f188b8fb09e7c36c695", "text": "Bisimulation is the primitive notion of equivalence between concurrent processes in Milner's Calculus of Communicating Systems (CCS); there is a nontrivial game-like protocol for distinguishing nonbisimular processes. In contrast, process distinguishability in Hoare's theory of Communicating Sequential Processes (CSP) is determined solely on the basis of traces of visible actions. We examine what additional operations are needed to explain bisimulation similarly—specifically in the case of finitely branching processes without silent moves. We formulate a general notion of Structured Operational Semantics for processes with Guarded recursion (GSOS), and demonstrate that bisimulation does not agree with trace congruence with respect to any set of GSOS-definable contexts. In justifying the generality and significance of GSOS's, we work out some of the basic proof theoretic facts which justify the SOS discipline.", "title": "" } ]
scidocsrr
b1632b21c1d9d47d82e89b1667a6e303
A comparison of social, learning, and financial strategies on crowd engagement and output quality
[ { "docid": "741619d65757e07394a161f4b96ec408", "text": "Self-disclosure plays a central role in the development and maintenance of relationships. One way that researchers have explored these processes is by studying the links between self-disclosure and liking. Using meta-analytic procedures, the present work sought to clarify and review this literature by evaluating the evidence for 3 distinct disclosure-liking effects. Significant disclosure-liking relations were found for each effect: (a) People who engage in intimate disclosures tend to be liked more than people who disclose at lower levels, (b) people disclose more to those whom they initially like, and (c) people like others as a result of having disclosed to them. In addition, the relation between disclosure and liking was moderated by a number of variables, including study paradigm, type of disclosure, and gender of the discloser. Taken together, these results suggest that various disclosure-liking effects can be integrated and viewed as operating together within a dynamic interpersonal system. Implications for theory development are discussed, and avenues for future research are suggested.", "title": "" }, { "docid": "ff8dec3914e16ae7da8801fe67421760", "text": "A hypothesized need to form and maintain strong, stable interpersonal relationships is evaluated in light of the empirical literature. The need is for frequent, nonaversive interactions within an ongoing relational bond. Consistent with the belongingness hypothesis, people form social attachments readily under most conditions and resist the dissolution of existing bonds. Belongingness appears to have multiple and strong effects on emotional patterns and on cognitive processes. Lack of attachments is linked to a variety of ill effects on health, adjustment, and well-being. Other evidence, such as that concerning satiation, substitution, and behavioral consequences, is likewise consistent with the hypothesized motivation. Several seeming counterexamples turned out not to disconfirm the hypothesis. Existing evidence supports the hypothesis that the need to belong is a powerful, fundamental, and extremely pervasive motivation.", "title": "" } ]
[ { "docid": "738a69ad1006c94a257a25c1210f6542", "text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.", "title": "" }, { "docid": "dd5bfaaf18138d1b714de8d91fbacc7a", "text": "Ball-balancing robots (BBRs) are endowed with rich dynamics. When properly designed and stabilized via feedback to eliminate jitter, and intuitively coordinated with a well-designed smartphone interface, BBRs exhibit a uniquely fluid and organic motion. Unlike mobile inverted pendulums (MIPs, akin to unmanned Segways), BBRs stabilize both fore/aft and left/right motions with feedback, and bank when turning. Previous research on BBRs focused on vehicles from 50cm to 2m in height; the present work is the first to build significantly smaller BBRs, with heights under 25cm. We consider the unique issues arising when miniaturizing a BBR to such a scale, which are characterized by faster time scales and reduced weight (and, thus, reduced normal force and stiction between the omniwheels and the ball). Two key patent-pending aspects of our design are (a) moving the omniwheels to contact the ball down to around 20 to 30 deg N latitude, which increases the normal force between the omniwheels and the ball, and (b) orienting the omniwheels into mutually-orthogonal planes, which improves efficiency. Design iterations were facilitated by rapid prototyping and leveraged low-cost manufacturing principles and inexpensive components. Classical successive loop closure control strategies are implemented, which prove to be remarkably effective when the BBR isn't spinning quickly, and thus the left/right and fore/aft stabilization problems decompose into two decoupled MIP problems.", "title": "" }, { "docid": "196ddcefb2c3fcb6edd5e8d108f7e219", "text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. 
that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.", "title": "" }, { "docid": "486e15d89ea8d0f6da3b5133c9811ee1", "text": "Frequency-modulated continuous wave radar systems suffer from permanent leakage of the transmit signal into the receive path. Besides leakage within the radar device itself, an unwanted object placed in front of the antennas causes so-called short-range (SR) leakage. In an automotive application, for instance, it originates from signal reflections of the car’s own bumper. Particularly the residual phase noise of the downconverted SR leakage signal causes a severe degradation of the achievable sensitivity. In an earlier work, we proposed an SR leakage cancellation concept that is feasible for integration in a monolithic microwave integrated circuit. In this brief, we present a hardware prototype that holistically proves our concept with discrete components. The fundamental theory and properties of the concept are proven with measurements. Further, we propose a digital design for real-time operation of the cancellation algorithm on a field programmable gate array. Ultimately, by employing measurements with a bumper mounted in front of the antennas, we show that the leakage canceller significantly improves the sensitivity of the radar.", "title": "" }, { "docid": "053afa7201df9174e7f44dded8fa3c36", "text": "Fault Detection and Diagnosis systems offer enhanced availability and reduced risk of safety hazards when component failure and other unexpected events occur in a controlled plant. For Online FDD an appropriate method and Online data are required. It is quite difficult to get Online data for FDD in industrial applications and a solution using OPC is suggested. Top-down and bottom-up approaches to diagnostic reasoning of the whole system were represented and two new approaches were suggested. Solution 1 using qualitative data from “similar” subsystems was proposed and Solution 2 using a reference subsystem was proposed.", "title": "" }, { "docid": "b2817d85893a624574381eee4f8648db", "text": "A coupled-fed antenna design capable of covering eight-band WWAN/LTE operation in a smartphone and suitable to integrate with a USB connector is presented. The antenna comprises an asymmetric T-shaped monopole as a coupling feed and a radiator as well, and a coupled-fed loop strip shorted to the ground plane. The antenna generates a wide lower band to cover (824-960 MHz) for GSM850/900 operation and a very wide upper band of larger than 1 GHz to cover the GPS/GSM1800/1900/UMTS/LTE2300/2500 operation (1565-2690 MHz). 
The proposed antenna provides wideband operation and exhibits great flexible behavior. The antenna is capable of providing eight-band operation for nine different sizes of PCBs, and enhance impedance matching only by varying a single element length, L. Details of proposed antenna, parameters and performance are presented and discussed in this paper.", "title": "" }, { "docid": "8da6cc5c6a8a5d45fadbab8b7ca8b71f", "text": "Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.", "title": "" }, { "docid": "f103277dbbcab26d8e5c176520666db9", "text": "Air pollution in urban environments has risen steadily in the last several decades. Such cities as Beijing and Delhi have experienced rises to dangerous levels for citizens. As a growing and urgent public health concern, cities and environmental agencies have been exploring methods to forecast future air pollution, hoping to enact policies and provide incentives and services to benefit their citizenry. Much research is being conducted in environmental science to generate deterministic models of air pollutant behavior; however, this is both complex, as the underlying molecular interactions in the atmosphere need to be simulated, and often inaccurate. As a result, with greater computing power in the twenty-first century, using machine learning methods for forecasting air pollution has become more popular. This paper investigates the use of the LSTM recurrent neural network (RNN) as a framework for forecasting in the future, based on time series data of pollution and meteorological information in Beijing. Due to the sequence dependencies associated with large-scale and longer time series datasets, RNNs, and in particular LSTM models, are well-suited. Our results show that the LSTM framework produces equivalent accuracy when predicting future timesteps compared to the baseline support vector regression for a single timestep. Using our LSTM framework, we can now extend the prediction from a single timestep out to 5 to 10 hours into the future. This is promising in the quest for forecasting urban air quality and leveraging that insight to enact beneficial policy.", "title": "" }, { "docid": "723f2a824bba1167b462b528a34b4b72", "text": "The Korea Advanced Institute of Science and Technology (KAIST) humanoid robot 1 (KHR-1) was developed for the purpose of researching the walking action of bipeds. KHR-1, which has no hands or head, has 21 degrees of freedom (DOF): 12 DOF in the legs, 1 DOF in the torso, and 8 DOF in the arms. The second version of this humanoid robot, KHR-2, (which has 41 DOF) can walk on a living-room floor; it also moves and looks like a human. The third version, KHR-3 (HUBO), has more human-like features, a greater variety of movements, and a more human-friendly character. 
We present the mechanical design of HUBO, including the design concept, the lower body design, the upper body design, and the actuator selection of joints. Previously we developed and published details of KHR-1 and KHR-2. The HUBO platform, which is based on KHR-2, has 41 DOF, stands 125 cm tall, and weighs 55 kg. From a mechanical point of view, HUBO has greater mechanical stiffness and a more detailed frame design than KHR-2. The stiffness of the frame was increased and the detailed design around the joints and link frame were either modified or fully redesigned. We initially introduced an exterior art design concept for KHR-2, and that concept was implemented in HUBO at the mechanical design stage.", "title": "" }, { "docid": "2c969a6f8292eb42e1775dad1ad2a741", "text": "Solar energy forms the major alternative for the generation of power keeping in mind the sustainable development with reduced greenhouse emission. For improved efficiency of the MPPT which uses solar energy in photovoltaic systems(PV), this paper presents a technique utilizing improved incremental conductance(Inc Cond) MPPT with direct control method using SEPIC converter. Several improvements in the existing technique is proposed which includes converter design aspects, system simulation & DSP programming. For the control part dsPIC30F2010 is programmed accordingly to get the maximum power point for different illuminations. DSP controller also forms the interfacing of PV array with the load. Now the improved Inc Cond helps to get point to point values accurately to track MPP's under different atmospheric conditions. MATLAB and Simulink were employed for simulation studies validation of the proposed technique. Experiment result proves the improvement from existing method.", "title": "" }, { "docid": "6ea0e96496d0c3054ae81e93a3012eb7", "text": "Supervised hierarchical topic modeling and unsupervised hierarchical topic modeling are usually used to obtain hierarchical topics, such as hLLDA and hLDA. Supervised hierarchical topic modeling makes heavy use of the information from observed hierarchical labels, but cannot explore new topics; while unsupervised hierarchical topic modeling is able to detect automatically new topics in the data space, but does not make use of any information from hierarchical labels. In this paper, we propose a semi-supervised hierarchical topic model which aims to explore new topics automatically in the data space while incorporating the information from observed hierarchical labels into the modeling process, called SemiSupervised Hierarchical Latent Dirichlet Allocation (SSHLDA). We also prove that hLDA and hLLDA are special cases of SSHLDA. We conduct experiments on Yahoo! Answers and ODP datasets, and assess the performance in terms of perplexity and clustering. The experimental results show that predictive ability of SSHLDA is better than that of baselines, and SSHLDA can also achieve significant improvement over baselines for clustering on the FScore measure.", "title": "" }, { "docid": "ebd65c03599cc514e560f378f676cc01", "text": "The purpose of this paper is to examine an integrated model of TAM and D&M to explore the effects of quality features, perceived ease of use, perceived usefulness on users’ intentions and satisfaction, alongside the mediating effect of usability towards use of e-learning in Iran. Based on the e-learning user data collected through a survey, structural equations modeling (SEM) and path analysis were employed to test the research model. 
The results revealed that ‘‘intention’’ and ‘‘user satisfaction’’ both had positive effects on actual use of e-learning. ‘‘System quality’’ and ‘‘information quality’’ were found to be the primary factors driving users’ intentions and satisfaction towards use of e-learning. At last, ‘‘perceived usefulness’’ mediated the relationship between ease of use and users’ intentions. The sample consisted of e-learning users of four public universities in Iran. Past studies have seldom examined an integrated model in the context of e-learning in developing countries. Moreover, this paper tries to provide a literature review of recent published studies in the field of e-learning. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "efba71635ca38b4588d3e4200d655fee", "text": "BACKGROUND\nCircumcisions and cesarian sections are common procedures. Although complications to the newborn child fortunately are rare, it is important to emphasize the potential significance of this problem and its frequent iatrogenic etiology. The authors present 7 cases of genitourinary trauma in newborns, including surgical management and follow-up.\n\n\nMETHODS\nThe authors relate 7 recent cases of genitourinary trauma in newborns from a children's hospital in a major metropolitan area.\n\n\nRESULTS\nCase 1 and 2: Two infants suffered degloving injuries to both the prepuce and penile shaft from a Gomco clamp. Successful full-thickness skin grafting using the previously excised foreskin was used in 1 child. Case 3, 4, and 5: A Mogen clamp caused glans injuries in 3 infants. In 2, hemorrhage from the severed glans was controlled with topical epinephrine; the glans healed with a flattened appearance. Another infant sustained a laceration ventrally, requiring a delayed modified meatal advancement glanoplasty to correct the injury. Case 6: A male infant suffered a ventral slit and division of the ventral urethra before placement of a Gomco clamp. Formal hypospadias repair was required. Case 7: An emergent cesarean section resulted in a grade 4-perineal laceration in a female infant. The vaginal tear caused by the surgeon's finger, extended up to the posterior insertion of the cervix and into the rectum. The infant successfully underwent an emergent multilayered repair.\n\n\nCONCLUSIONS\nGenitourinary trauma in the newborn is rare but often necessitates significant surgical intervention. Circumcision often is the causative event. There has been only 1 prior report of a perineal injury similar to case 7, with a fatal outcome.", "title": "" }, { "docid": "f89236f0cf15d8fa64aca8682d87447f", "text": "This research targeted the learning preferences, goals and motivations, achievements, challenges, and possibilities for life change of self-directed online learners who subscribed to the monthly OpenCourseWare (OCW) e-newsletter from MIT. Data collection included a 25-item survey of 1,429 newsletter subscribers; 613 of whom also completed an additional 15 open-ended survey items. The 25 close-ended survey findings indicated that respondents used a wide range of devices and places to learn for their self-directed learning needs. Key motivational factors included curiosity, interest, and internal need for self-improvement. Factors leading to success or personal change included freedom to learn, resource abundance, choice, control, and fun. In terms of achievements, respondents were learning both specific skills as well as more general skills that help them advance in their careers. 
Science, math, and foreign language skills were the most desired by the survey respondents. The key obstacles or challenges faced were time, lack of high quality open resources, and membership or technology fees. Several brief stories of life change across different age ranges are documented. Among the chief implications is that learning something new to enhance one’s life or to help others is often more important than course transcript credit or a certificate of completion.", "title": "" }, { "docid": "7adb0a3079fb3b64f7a503bd8eae623e", "text": "Attack trees have found their way to practice because they have proved to be an intuitive aid in threat analysis. Despite, or perhaps thanks to, their apparent simplicity, they have not yet been provided with an unambiguous semantics. We argue that such a formal interpretation is indispensable to precisely understand how attack trees can be manipulated during construction and analysis. We provide a denotational semantics, based on a mapping to attack suites, which abstracts from the internal structure of an attack tree, we study transformations between attack trees, and we study the attribution and projection of an attack tree.", "title": "" }, { "docid": "59a91a18b3706f3e170063818e964ce8", "text": "We present an approach to capture the 3D structure and motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent, (2) subtle motion needs to be measured over a space large enough to host a social group, and (3) human appearance and configuration variation is immense. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the perceptual integration of a large variety of view points. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. The algorithmic contributions include a hierarchical approach for generating skeletal trajectory proposals, and an optimization framework for skeletal reconstruction with trajectory re-association.", "title": "" }, { "docid": "051aa7421187bab5d9e11184da16cc9e", "text": "This paper compares the approaches to reuse in software engineering and knowledge engineering. In detail, definitions are given, the history is enlightened, the main approaches are described, and their feasibility is discussed. The aim of the paper is to show the close relation between software and knowledge engineering and to help the knowledge engineering community to learn from experiences in software engineering with respect to reuse. 1 Reuse in Software Engineering", "title": "" }, { "docid": "1e347f69d739577d4bb0cc050d87eb5b", "text": "The rapidly growing paradigm of the Internet of Things (IoT) requires new search engines, which can crawl heterogeneous data sources and search in highly dynamic contexts. Existing search engines cannot meet these requirements as they are designed for traditional Web and human users only. This is contrary to the fact that things are emerging as major producers and consumers of information. Currently, there is very little work on searching IoT and a number of works claim the unavailability of public IoT data. 
However, it is dismissed that a majority of real-time web-based maps are sharing data that is generated by things, directly. To shed light on this line of research, in this paper, we firstly create a set of tools to capture IoT data from a set of given data sources. We then create two types of interfaces to provide real-time searching services on dynamic IoT data for both human and machine users.", "title": "" }, { "docid": "a00acd7a9a136914bf98478ccd85e812", "text": "Deep-learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, thus resulting in sub-optimal performance. In order to mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function or the Dice loss function, have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalized Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.", "title": "" }, { "docid": "412e10ae26c0abcb37379c6b37ea022a", "text": "This paper presents the Gavagai Living Lexicon, which is an online distributional semantic model currently available in 20 different languages. We describe the underlying distributional semantic model, and how we have solved some of the challenges in applying such a model to large amounts of streaming data. We also describe the architecture of our implementation, and discuss how we deal with continuous quality assurance of the lexicon.", "title": "" } ]
scidocsrr
e5293b67d91dad5e4ed00f3bb89f6425
Detecting patterns of anomalies
[ { "docid": "3df95e4b2b1bb3dc80785b25c289da92", "text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.", "title": "" } ]
[ { "docid": "be4fbfdde6ec503bebd5b2a8ddaa2820", "text": "Attack-defence Capture The Flag (CTF) competitions are effective pedagogic platforms to teach secure coding practices due to the interactive and real-world experiences they provide to the contest participants. Two of the key challenges that prevent widespread adoption of such contests are: 1) The game infrastructure is highly resource intensive requiring dedication of significant hardware resources and monitoring by organizers during the contest and 2) the participants find the gameplay to be complicated, requiring performance of multiple tasks that overwhelms inexperienced players. In order to address these, we propose a novel attack-defence CTF game infrastructure which uses application containers. The results of our work showcase effectiveness of these containers and supporting tools in not only reducing the resources organizers need but also simplifying the game infrastructure. The work also demonstrates how the supporting tools can be leveraged to help participants focus more on playing the game i.e. attacking and defending services and less on administrative tasks. The results from this work indicate that our architecture can accommodate over 150 teams with 15 times fewer resources when compared to existing infrastructures of most contests today.", "title": "" }, { "docid": "4540c8ed61e6c8ab3727eefc9a048377", "text": "Network Functions Virtualization (NFV) is incrementally deployed by Internet Service Providers (ISPs) in their carrier networks, by means of Virtual Network Function (VNF) chains, to address customers' demands. The motivation is the increasing manageability, reliability and performance of NFV systems, the gains in energy and space granted by virtualization, at a cost that becomes competitive with respect to legacy physical network function nodes. From a network optimization perspective, the routing of VNF chains across a carrier network implies key novelties making the VNF chain routing problem unique with respect to the state of the art: the bitrate of each demand flow can change along a VNF chain, the VNF processing latency and computing load can be a function of the demands traffic, VNFs can be shared among demands, etc. In this paper, we provide an NFV network model suitable for ISP operations. We define the generic VNF chain routing optimization problem and devise a mixed integer linear programming formulation. By extensive simulation on realistic ISP topologies, we draw conclusions on the trade-offs achievable between legacy Traffic Engineering (TE) ISP goals and novel combined TE-NFV goals.", "title": "" }, { "docid": "ff572d9c74252a70a48d4ba377f941ae", "text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. 
Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.", "title": "" }, { "docid": "73e2738994b78d54d8fbad5df4622451", "text": "Although online consumer reviews (OCR) have helped consumers to know about the strengths and weaknesses of different products and find the ones that best suit their needs, they introduce a challenge for businesses to analyze them because of their volume, variety, velocity and veracity. This research investigates the predictors of readership and helpfulness of OCR using a sentiment mining approach for big data analytics. Our findings show that reviews with higher levels of positive sentiment in the title receive more readerships. Sentimental reviews with neutral polarity in the text are also perceived to be more helpful. The length and longevity of a review positively influence both its readership and helpfulness. Because the current methods used for sorting OCR may bias both their readership and helpfulness, the approach used in this study can be adopted by online vendors to develop scalable automated systems for sorting and classification of big OCR data which will benefit both vendors and consumers.", "title": "" }, { "docid": "fc6f02a4eb006efe54b34b1705559a55", "text": "Company movements and market changes often are headlines of the news, providing managers with important business intelligence (BI). While existing corporate analyses are often based on numerical financial figures, relatively little work has been done to reveal from textual news articles factors that represent BI. In this research, we developed BizPro, an intelligent system for extracting and categorizing BI factors from news articles. BizPro consists of novel text mining procedures and BI factor modeling and categorization. Expert guidance and human knowledge (with high inter-rater reliability) were used to inform system development and profiling of BI factors. We conducted a case study of using the system to profile BI factors of four major IT companies based on 6859 sentences extracted from 231 news articles published in major news sources. The results show that the chosen techniques used in BizPro – Naïve Bayes (NB) and Logistic Regression (LR) – significantly outperformed a benchmark technique. NB was found to outperform LR in terms of precision, recall, F-measure, and area under ROC curve. This research contributes to developing a new system for profiling company BI factors from news articles, to providing new empirical findings to enhance understanding in BI factor extraction and categorization, and to addressing an important yet under-explored concern of BI analysis. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "dc4d11c0478872f3882946580bb10572", "text": "An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. 
This discipline holds the potential to improve many patients' lives dramatically by offering improved-and in some cases entirely new-forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensure that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define \"neurosecurity\"-a version of computer science security principles and methods applied to neural engineering-and discuss why neurosecurity should be a critical consideration in the design of future neural devices.", "title": "" }, { "docid": "2653554c6dec7e9cfa0f5a4080d251e2", "text": "Clustering is a key technique within the KDD process, with k-means, and the more general k-medoids, being well-known incremental partition-based clustering algorithms. A fundamental issue within this class of algorithms is to find an initial set of medians (or medoids) that improves the efficiency of the algorithms (e.g., accelerating its convergence to a solution), at the same time that it improves its effectiveness (e.g., finding more meaningful clusters). Thus, in this article we aim at providing a technique that, given a set of elements, quickly finds a very small number of elements as medoid candidates for this set, allowing to improve both the efficiency and effectiveness of existing clustering algorithms. We target the class of k-medoids algorithms in general, and propose a technique that selects a well-positioned subset of central elements to serve as the initial set of medoids for the clustering process. Our technique leads to a substantially smaller amount of distance calculations, thus improving the algorithm’s efficiency when compared to existing methods, without sacrificing effectiveness. A salient feature of our proposed technique is that it is not a new k-medoid clustering algorithm per se, rather, it can be used in conjunction with any existing clustering algorithm that is based on the k-medoid paradigm. Experimental results, using both synthetic and real datasets, confirm the efficiency, effectiveness and scalability of the proposed technique.", "title": "" }, { "docid": "abf6f1218543ce69b0095bba24f40ced", "text": "Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. 
The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.", "title": "" }, { "docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76", "text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e77cf8938714824d46cfdbdb1b809f93", "text": "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fullyobserved samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.", "title": "" }, { "docid": "b85df2aec85417d45b251299dfce4f39", "text": "A growing body of studies is developing approaches to evaluating human interaction with Web search engines, including the usability and effectiveness of Web search tools. This study explores a user-centered approach to the evaluation of the Web search engine Inquirus – a Web metasearch tool developed by researchers from the NEC Research Institute. The goal of the study reported in this paper was to develop a user-centered approach to the evaluation including: (1) effectiveness: based on the impact of users' interactions on their information problem and information seeking stage, and (2) usability: including screen layout and system capabilities for users. 
Twenty-two (22) volunteers searched Inquirus on their own personal information topics. Data analyzed included: (1) user preand post-search questionnaires and (2) Inquirus search transaction logs. Key findings include: (1) Inquirus was rated highly by users on various usability measures, (2) all users experienced some level of shift/change in their information problem, information seeking, and personal knowledge due to their Inquirus interaction, (3) different users experienced different levels of change/shift, and (4) the search measure precision did not correlate with other user-based measures. Some users experienced major changes/shifts in various userbased variables, such as information problem or information seeking stage with a search of low precision and vice versa. Implications for the development of user-centered approaches to the evaluation of Web and IR systems and further research are discussed.", "title": "" }, { "docid": "e4b54824b2528b66e28e82ad7d496b36", "text": "Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. Methods: The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Results: Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients’ heterogeneity. Significance: The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.", "title": "" }, { "docid": "024265b0b1872dd89d875dd5d3df5b78", "text": "In this paper, we present a novel system to analyze human body motions for action recognition task from two sets of features using RGBD videos. The Bag-of-Features approach is used for recognizing human action by extracting local spatialtemporal features and shape invariant features from all video frames. These feature vectors are computed in four steps: Firstly, detecting all interest keypoints from RGB video frames using Speed-Up Robust Features and filters motion points using Motion History Image and Optical Flow, then aligned these motion points to the depth frame sequences. 
Secondly, using a Histogram of orientation gradient descriptor for computing the features vector around these points from both RGB and depth channels, then combined these feature values in one RGBD feature vector. Thirdly, computing Hu-Moment shape features from RGBD frames, fourthly, combining the HOG features with Hu-moments features in one feature vector for each video action. Finally, the k-means clustering and the multi-class K-Nearest Neighbor is used for the classification task. This system is invariant to scale, rotation, translation, and illumination. All tested are utilized on a dataset that is available to the public and used often in the community. By using this new feature combination method improves performance on actions with low movement and reach recognition rates superior to other publications of the dataset. Keywords—RGBD Videos; Feature Extraction; k-means Clustering; KNN (K-Nearest Neighbor)", "title": "" }, { "docid": "cf817c1802b65f93e5426641a5ea62e2", "text": "To protect sensitive data processed by current applications, developers, whether security experts or not, have to rely on cryptography. While cryptography algorithms have become increasingly advanced, many data breaches occur because developers do not correctly use the corresponding APIs. To guide future research into practical solutions to this problem, we perform an empirical investigation into the obstacles developers face while using the Java cryptography APIs, the tasks they use the APIs for, and the kind of (tool) support they desire. We triangulate data from four separate studies that include the analysis of 100 StackOverflow posts, 100 GitHub repositories, and survey input from 48 developers. We find that while developers find it difficult to use certain cryptographic algorithms correctly, they feel surprisingly confident in selecting the right cryptography concepts (e.g., encryption vs. signatures). We also find that the APIs are generally perceived to be too low-level and that developers prefer more task-based solutions.", "title": "" }, { "docid": "77f7644a5e2ec50b541fe862a437806f", "text": "This paper describes SRM (Scalable Reliable Multicast), a reliable multicast framework for application level framing and light-weight sessions. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The framework has been prototyped in wb, a distributed whiteboard application, and has been extensively tested on a global scale with sessions ranging from a few to more than 1000 participants. The paper describes the principles that have guided our design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. 
With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies.", "title": "" }, { "docid": "56667d286f69f8429be951ccf5d61c24", "text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.", "title": "" }, { "docid": "104c845c9c34e8e94b6e89d651635ae8", "text": "Three families of Bacillus cyclic lipopeptides--surfactins, iturins, and fengycins--have well-recognized potential uses in biotechnology and biopharmaceutical applications. This study outlines the isolation and characterization of locillomycins, a novel family of cyclic lipopeptides produced by Bacillus subtilis 916. Elucidation of the locillomycin structure revealed several molecular features not observed in other Bacillus lipopeptides, including a unique nonapeptide sequence and macrocyclization. Locillomycins are active against bacteria and viruses. Biochemical analysis and gene deletion studies have supported the assignment of a 38-kb gene cluster as the locillomycin biosynthetic gene cluster. Interestingly, this gene cluster encodes 4 proteins (LocA, LocB, LocC, and LocD) that form a hexamodular nonribosomal peptide synthetase to biosynthesize cyclic nonapeptides. Genome analysis and the chemical structures of the end products indicated that the biosynthetic pathway exhibits two distinct features: (i) a nonlinear hexamodular assembly line, with three modules in the middle utilized twice and the first and last two modules used only once and (ii) several domains that are skipped or optionally selected.", "title": "" }, { "docid": "4b432638ecceac3d1948fb2b2e9be49b", "text": "Software process refers to the set of tools, methods, and practices used to produce a software artifact. The objective of a software process management model is to produce software artifacts according to plans while simultaneously improving the organization's capability to produce better artifacts. The SEI's Capability Maturity Model (CMM) is a software process management model; it assists organizations to provide the infrastructure for achieving a disciplined and mature software process. There is a growing concern that the CMM is not applicable to small firms because it requires a huge investment. In fact, detailed studies of the CMM show that its applications may cost well over $100,000. This article attempts to address the above concern by studying the feasibility of a scaled-down version of the CMM for use in small software firms. 
The logic for a scaled-down CMM is that the same quantitative quality control principles that work for larger projects can be scaled-down and adopted for smaller ones. Both the CMM and the Personal Software Process (PSP) are briefly described and are used as basis.", "title": "" }, { "docid": "20a2390dede15514cd6a70e9b56f5432", "text": "The ability to record and replay program executions with low overhead enables many applications, such as reverse-execution debugging, debugging of hard-toreproduce test failures, and “black box” forensic analysis of failures in deployed systems. Existing record-andreplay approaches limit deployability by recording an entire virtual machine (heavyweight), modifying the OS kernel (adding deployment and maintenance costs), requiring pervasive code instrumentation (imposing significant performance and complexity overhead), or modifying compilers and runtime systems (limiting generality). We investigated whether it is possible to build a practical record-and-replay system avoiding all these issues. The answer turns out to be yes — if the CPU and operating system meet certain non-obvious constraints. Fortunately modern Intel CPUs, Linux kernels and user-space frameworks do meet these constraints, although this has only become true recently. With some novel optimizations, our system RR records and replays real-world lowparallelism workloads with low overhead, with an entirely user-space implementation, using stock hardware, compilers, runtimes and operating systems. RR forms the basis of an open-source reverse-execution debugger seeing significant use in practice. We present the design and implementation of RR, describe its performance on a variety of workloads, and identify constraints on hardware and operating system design required to support our approach.", "title": "" }, { "docid": "a9b366b2b127b093b547f8a10ac05ca5", "text": "Each user session in an e-commerce system can be modeled as a sequence of web pages, indicating how the user interacts with the system and makes his/her purchase. A typical recommendation approach, e.g., Collaborative Filtering, generates its results at the beginning of each session, listing the most likely purchased items. However, such approach fails to exploit current viewing history of the user and hence, is unable to provide a real-time customized recommendation service. In this paper, we build a deep recurrent neural network to address the problem. The network tracks how users browse the website using multiple hidden layers. Each hidden layer models how the combinations of webpages are accessed and in what order. To reduce the processing cost, the network only records a finite number of states, while the old states collapse into a single history state. Our model refreshes the recommendation result each time when user opens a new web page. As user's session continues, the recommendation result is gradually refined. Furthermore, we integrate the recurrent neural network with a Feedfoward network which represents the user-item correlations to increase the prediction accuracy. Our approach has been applied to Kaola (http://www.kaola.com), an e-commerce website powered by the NetEase technologies. It shows a significant improvement over previous recommendation service.", "title": "" } ]
scidocsrr
ba87e2ebdc3c8a4aea5b135201401c75
Bi-directional conversion between graphemes and phonemes using a joint N-gram model
[ { "docid": "fc79bfdb7fbbfa42d2e1614964113101", "text": "Probability Theory, 2nd ed. Princeton, N. J.: 960. Van Nostrand, 1 121 T. T. Kadota, “Optimum reception of binary gaussian signals,” Bell Sys. Tech. J., vol. 43, pp. 2767-2810, November 1964. 131 T. T. Kadota. “Ootrmum recention of binarv sure and Gaussian signals,” Bell Sys. ?‘ech: J., vol. 44;~~. 1621-1658, October 1965. 141 U. Grenander, ‘Stochastic processes and statistical inference,” Arkiv fiir Matematik, vol. 17, pp. 195-277, 1950. 151 L. A. Zadeh and J. R. Ragazzini, “Optimum filters for the detection of signals in noise,” Proc. IRE, vol. 40, pp. 1223-1231, O,+nhm 1 a.63 161 J. H. Laning and R. H. Battin, Random Processes in Automatic Control. New York: McGraw-Hill. 1956. nn. 269-358. 171 C.. W. Helstrom, “ Solution of the dete&on integral equation for stationary filtered white noise,” IEEE Trans. on Information Theory, vol. IT-II, pp. 335-339, July 1965. 181 T. Kailath, “The detection of known signals in colored Gaussian noise,” Stanford Electronics Labs., Stanford Univ., Stanford, Calif. Tech. Rept. 7050-4, July 1965. 191 T. T. Kadota, “Optimum reception of nf-ary Gaussian signals in Gaussian noise,” Bell. Sys. Tech. J., vol. 44, pp. 2187-2197, November 1965. [lOI T. T. Kadota, “Term-by-term differentiability of Mercer’s expansion,” Proc. of Am. Math. Sot., vol. 18, pp. 69-72, February 1967.", "title": "" } ]
[ { "docid": "cfbfc01ee75019b563c46d4bebfba0f4", "text": "We present results from gate-all-around (GAA) silicon nanowire (SiNW) MOSFETs fabricated using a process flow capable of achieving a nanowire pitch of 30 nm and a scaled gate pitch of 60 nm. We demonstrate for the first time that GAA SiNW devices can be integrated to density targets commensurate with CMOS scaling needs of the 10 nm node and beyond. In addition, this work achieves the highest performance for GAA SiNW NFETs at a gate pitch below 100 nm.", "title": "" }, { "docid": "8d8dc05c2de34440eb313503226f7e99", "text": "Disambiguating entity references by annotating them with unique ids from a catalog is a critical step in the enrichment of unstructured content. In this paper, we show that topic models, such as Latent Dirichlet Allocation (LDA) and its hierarchical variants, form a natural class of models for learning accurate entity disambiguation models from crowd-sourced knowledge bases such as Wikipedia. Our main contribution is a semi-supervised hierarchical model called Wikipedia-based Pachinko Allocation Model} (WPAM) that exploits: (1) All words in the Wikipedia corpus to learn word-entity associations (unlike existing approaches that only use words in a small fixed window around annotated entity references in Wikipedia pages), (2) Wikipedia annotations to appropriately bias the assignment of entity labels to annotated (and co-occurring unannotated) words during model learning, and (3) Wikipedia's category hierarchy to capture co-occurrence patterns among entities. We also propose a scheme for pruning spurious nodes from Wikipedia's crowd-sourced category hierarchy. In our experiments with multiple real-life datasets, we show that WPAM outperforms state-of-the-art baselines by as much as 16% in terms of disambiguation accuracy.", "title": "" }, { "docid": "2693a2815adf4e731d87f9630cd7c427", "text": "A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two stages. The first stage computes a fuzzy derivative for eight different directions. The second stage uses these fuzzy derivatives to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Both stages are based on fuzzy rules which make use of membership functions. The filter can be applied iteratively to effectively reduce heavy noise. In particular, the shape of the membership functions is adapted according to the remaining noise level after each iteration, making use of the distribution of the homogeneity in the image. A statistical model for the noise distribution can be incorporated to relate the homogeneity to the adaptation scheme of the membership functions. Experimental results are obtained to show the feasibility of the proposed approach. These results are also compared to other filters by numerical measures and visual inspection.", "title": "" }, { "docid": "e96c9bdd3f5e9710f7264cbbe02738a7", "text": "25 years ago, Lenstra, Lenstra and Lovász presented their c el brated LLL lattice reduction algorithm. Among the various applicatio ns of the LLL algorithm is a method due to Coppersmith for finding small roots of polyn mial equations. We give a survey of the applications of this root finding metho d t the problem of inverting the RSA function and the factorization problem. 
A s we will see, most of the results are of a dual nature, they can either be interpret ed as cryptanalytic results or as hardness/security results.", "title": "" }, { "docid": "9a9fdd35a3f9df6ebdd7ea8f0cac5a00", "text": "The recent appearance of augmented reality headsets, such as the Microsoft HoloLens, is a marked move from traditional 2D screen to 3D hologram-like interfaces. Striving to be completely portable, these devices unfortunately suffer multiple limitations, such as the lack of real-time, high quality depth data, which severely restricts their use as research tools. To mitigate this restriction, we provide a simple method to augment a HoloLens headset with much higher resolution depth data. To do so, we calibrate an external depth sensor connected to a computer stick that communicates with the HoloLens headset in real-time. To show how this system could be useful to the research community, we present an implementation of small object detection on HoloLens device.", "title": "" }, { "docid": "2fa5df7c70c05445c9f300c7da0f8f87", "text": "In this paper, we describe K-Extractor, a powerful NLP framework that provides integrated and seamless access to structured and unstructured information with minimal effort. The K-Extractor converts natural language documents into a rich set of semantic triples that, not only, can be stored within an RDF semantic index, but also, can be queried using natural language questions, thus eliminating the need to manually formulate SPARQL queries. The K-Extractor greatly outperforms a free text search index-based question answering system.", "title": "" }, { "docid": "9673939625a3caafecf3da68a19742b0", "text": "Automatic detection of road regions in aerial images remains a challenging research topic. Most existing approaches work well on the requirement of users to provide some seedlike points/strokes in the road area as the initial location of road regions, or detecting particular roads such as well-paved roads or straight roads. This paper presents a fully automatic approach that can detect generic roads from a single unmanned aerial vehicles (UAV) image. The proposed method consists of two major components: automatic generation of road/nonroad seeds and seeded segmentation of road areas. To know where roads probably are (i.e., road seeds), a distinct road feature is proposed based on the stroke width transformation (SWT) of road image. To the best of our knowledge, it is the first time to introduce SWT as road features, which show the effectiveness on capturing road areas in images in our experiments. Different road features, including the SWT-based geometry information, colors, and width, are then combined to classify road candidates. Based on the candidates, a Gaussian mixture model is built to produce road seeds and background seeds. Finally, starting from these road and background seeds, a convex active contour model segmentation is proposed to extract whole road regions. Experimental results on varieties of UAV images demonstrate the effectiveness of the proposed method. Comparison with existing techniques shows the robustness and accuracy of our method to different roads.", "title": "" }, { "docid": "9d9428fe9adbe3d1197e12ba4cbafe87", "text": "BACKGROUND\nLegalization of euthanasia and physician-assisted suicide has been heavily debated in many countries. 
To help inform this debate, we describe the practices of euthanasia and assisted suicide, and the use of life-ending drugs without an explicit request from the patient, in Flanders, Belgium, where euthanasia is legal.\n\n\nMETHODS\nWe mailed a questionnaire regarding the use of life-ending drugs with or without explicit patient request to physicians who certified a representative sample (n = 6927) of death certificates of patients who died in Flanders between June and November 2007.\n\n\nRESULTS\nThe response rate was 58.4%. Overall, 208 deaths involving the use of life-ending drugs were reported: 142 (weighted prevalence 2.0%) were with an explicit patient request (euthanasia or assisted suicide) and 66 (weighted prevalence 1.8%) were without an explicit request. Euthanasia and assisted suicide mostly involved patients less than 80 years of age, those with cancer and those dying at home. Use of life-ending drugs without an explicit request mostly involved patients 80 years of older, those with a disease other than cancer and those in hospital. Of the deaths without an explicit request, the decision was not discussed with the patient in 77.9% of cases. Compared with assisted deaths with the patient's explicit request, those without an explicit request were more likely to have a shorter length of treatment of the terminal illness, to have cure as a goal of treatment in the last week, to have a shorter estimated time by which life was shortened and to involve the administration of opioids.\n\n\nINTERPRETATION\nPhysician-assisted deaths with an explicit patient request (euthanasia and assisted suicide) and without an explicit request occurred in different patient groups and under different circumstances. Cases without an explicit request often involved patients whose diseases had unpredictable end-of-life trajectories. Although opioids were used in most of these cases, misconceptions seem to persist about their actual life-shortening effects.", "title": "" }, { "docid": "11c117d839be466c369274f021caba13", "text": "Android smartphones are becoming increasingly popular. The open nature of Android allows users to install miscellaneous applications, including the malicious ones, from third-party marketplaces without rigorous sanity checks. A large portion of existing malwares perform stealthy operations such as sending short messages, making phone calls and HTTP connections, and installing additional malicious components. In this paper, we propose a novel technique to detect such stealthy behavior. We model stealthy behavior as the program behavior that mismatches with user interface, which denotes the user's expectation of program behavior. We use static program analysis to attribute a top level function that is usually a user interaction function with the behavior it performs. Then we analyze the text extracted from the user interface component associated with the top level function. Semantic mismatch of the two indicates stealthy behavior. To evaluate AsDroid, we download a pool of 182 apps that are potentially problematic by looking at their permissions. Among the 182 apps, AsDroid reports stealthy behaviors in 113 apps, with 28 false positives and 11 false negatives.", "title": "" }, { "docid": "118c147b4bca8036f2ce360609a3c3e5", "text": "Robot manipulators are increasingly used in minimally invasive surgery (MIS). They are required to have small size, wide workspace, adequate dexterity and payload ability when operating in confined surgical cavity. 
Snake-like flexible manipulators are well suited to these applications. However, conventional fully actuated snake-like flexible manipulators are difficult to miniaturize and even after miniaturization the payload is very limited. The alternative is to use underactuated snake-like flexible manipulators. Three prevailing designs are tendon-driven continuum manipulators (TCM), tendon-driven serpentine manipulators (TSM) and concentric tube manipulators (CTM). In this paper, the three designs are compared at the mechanism level from the kinematics point of view. The workspace and distal end dexterity are compared for TCM, TSM and CTM with one, two and three sections, respectively. Other aspects of these designs are also discussed, including sweeping motion, scaling, force sensing, stiffness control, etc. From the results, the tendon-driven designs and concentric tube design complement each other in terms of their workspace, which is influenced by the number of sections as well as the length distribution among sections. The tendon-driven designs entail better distal end dexterity while generate larger sweeping motion in positions close to the shaft.", "title": "" }, { "docid": "a408e25435dded29744cf2af0f7da1e5", "text": "Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time when the document file is synchronized. We analyze the problem causes in depth, and propose EdgeCourier, a system to address the problem. We also propose the concept of edge-hosed personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads.", "title": "" }, { "docid": "ff705a36e71e2aa898e99fbcfc9ec9d2", "text": "This paper presents a design concept for smart home automation system based on the idea of the internet of things (IoT) technology. The proposed system has two scenarios where first one is denoted as a wireless based and the second is a wire-line based scenario. Each scenario has two operational modes for manual and automatic use. In Case of the wireless scenario, Arduino-Uno single board microcontroller as a central controller for home appliances is applied. Cellular phone with Matlab-GUI platform for monitoring and controlling processes through Wi-Fi communication technology is addressed. For the wire-line scenario, field-programmable gate array (FPGA) kit as a main controller is used. Simulation and hardware realization for the proposed system show its reliability and effectiveness.", "title": "" }, { "docid": "34cc6503494981fda7f69c794525776a", "text": "In this article we investigate the problem of human action recognition in static images. By action recognition we intend a class of problems which includes both action classification and action detection (i.e. simultaneous localization and classification). 
Bag-of-words image representations yield promising results for action classification, and deformable part models perform very well object detection. The representations for action recognition typically use only shape cues and ignore color information. Inspired by the recent success of color in image classification and object detection, we investigate the potential of color for action classification and detection in static images. We perform a comprehensive evaluation of color descriptors and fusion approaches for action recognition. Experiments were conducted on the three datasets most used for benchmarking action recognition in still images: Willow, PASCAL VOC 2010 and Stanford-40. Our experiments demonstrate that incorporating color information considerably improves recognition performance, and that a descriptor based on color names outperforms pure color descriptors. Our experiments demonstrate that late fusion of color and shape information outperforms other approaches on action recognition. Finally, we show that the different color–shape fusion approaches result in complementary information and combining them yields state-of-the-art performance for action classification.", "title": "" }, { "docid": "185ae8a2c89584385a810071c6003c15", "text": "In this paper, we propose a free viewpoint image rendering method combined with filter based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using guided filter. In addition, we extend view synthesis method to deal the alpha channel. Experiment results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.", "title": "" }, { "docid": "04e478610728f0aae76e5299c28da25a", "text": "Single image super resolution is one of the most important topic in computer vision and image processing research, many convolutional neural networks (CNN) based super resolution algorithms were proposed and achieved advanced performance, especially in recovering image details, in which PixelCNN is the most representative one. However, due to the intensive computation requirement of PixelCNN model, running time remains a major challenge, which limited its wider application. In this paper, several modifications are proposed to improve PixelCNN based recursive super resolution model. First, a discrete logistic mixture likelihood is adopted, then a cache structure for generating process is proposed, with these modifications, numerous redundant computations are removed without loss of accuracy. Finally, a partial generating network is proposed for higher resolution generation. Experiments on CelebA dataset demonstrate the effectiveness the superiority of the proposed method.", "title": "" }, { "docid": "4e9ca5976fc68c319e8303076ca80dc7", "text": "A self-driving car, to be deployed in real-world driving environments, must be capable of reliably detecting and effectively tracking of nearby moving objects. 
This paper presents our new, moving object detection and tracking system that extends and improves our earlier system used for the 2007 DARPA Urban Challenge. We revised our earlier motion and observation models for active sensors (i.e., radars and LIDARs) and introduced a vision sensor. In the new system, the vision module detects pedestrians, bicyclists, and vehicles to generate corresponding vision targets. Our system utilizes this visual recognition information to improve a tracking model selection, data association, and movement classification of our earlier system. Through the test using the data log of actual driving, we demonstrate the improvement and performance gain of our new tracking system.", "title": "" }, { "docid": "fb1c9fcea2f650197b79711606d4678b", "text": "Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.", "title": "" }, { "docid": "40c87c73dad1bf79e1dd047b320a5b49", "text": "Very recently, an increasing number of software companies adopted DevOps to adapt themselves to the ever-changing business environment. While it is important to mature adoption of the DevOps for these companies, no dedicated maturity models for DevOps exist. Meanwhile, maturity models such as CMMI models have demonstrated their effects in the traditional paradigm of software industry, however, it is not clear whether the CMMI models could guide the improvements with the context of DevOps. This paper reports a case study aiming at evaluating the feasibility to apply the CMMI models to guide process improvement for DevOps projects and identifying possible gaps. Using a structured method(i.e., SCAMPI C), we conducted a case study by interviewing four employees from one DevOps project. Based on evidence we collected in the case study, we managed to characterize the maturity/capability of the DevOps project, which implies the possibility to use the CMMI models to appraise the current processes in this DevOps project and guide future improvements. Meanwhile, several gaps also are identified between the CMMI models and the DevOps mode. 
In this sense, the CMMI models could be taken as a good foundation to design suitable maturity models so as to guide process improvement for projects adopting the DevOps.", "title": "" }, { "docid": "c699ede2caeb5953decc55d8e42c2741", "text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.", "title": "" }, { "docid": "715877425204ebb5764bd6ca57ac54ea", "text": "User Generated Content (UGC) is re-shaping the way people watch video and TV, with millions of video producers and consumers. In particular, UGC sites are creating new viewing patterns and social interactions, empowering users to be more creative, and developing new business opportunities. To better understand the impact of UGC systems, we have analyzed YouTube, the world's largest UGC VoD system. Based on a large amount of data collected, we provide an in-depth study of YouTube and other similar UGC systems. In particular, we study the popularity life-cycle of videos, the intrinsic statistical properties of requests and their relationship with video age, and the level of content aliasing or of illegal content in the system. We also provide insights on the potential for more efficient UGC VoD systems (e.g. utilizing P2P techniques or making better use of caching). Finally, we discuss the opportunities to leverage the latent demand for niche videos that are not reached today due to information filtering effects or other system scarcity distortions. Overall, we believe that the results presented in this paper are crucial in understanding UGC systems and can provide valuable information to ISPs, site administrators, and content owners with major commercial and technical implications.", "title": "" } ]
scidocsrr
920b475e55e68a6aadf7289885d0ee8f
Boosting for transfer learning with multiple sources
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "794c597a786486ac4d91d861d89eb242", "text": "Human learners appear to have inherent ways to transfer knowledge between tasks. That is, we recognize and apply relevant knowledge from previous learning experiences when we encounter new tasks. The more related a new task is to our previous experience, the more easily we can master it. Common machine learning algorithms, in contrast, traditionally address isolated tasks. Transfer learning attempts to improve on traditional machine learning by transferring knowledge learned in one or more source tasks and using it to improve learning in a related target task (see Figure 1). Techniques that enable knowledge transfer represent progress towards making machine learning as efficient as human learning. This chapter provides an introduction to the goals, settings, and challenges of transfer learning. It surveys current research in this area, giving an overview of the state of the art and outlining the open problems. ABStrAct", "title": "" }, { "docid": "418a5ef9f06f8ba38e63536671d605c1", "text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. 
We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.", "title": "" } ]
[ { "docid": "5c0994fab71ea871fad6915c58385572", "text": "We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.", "title": "" }, { "docid": "61a2b0e51b27f46124a8042d59c0f022", "text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.", "title": "" }, { "docid": "a48ada0e9d835f26a484d90c62ffc4cf", "text": "Plastics have become an important part of modern life and are used in different sectors of applications like packaging, building materials, consumer products and much more. Each year about 100 million tons of plastics are produced worldwide. Demand for plastics in India reached about 4.3 million tons in the year 2001-02 and would increase to about 8 million tons in the year 2006-07. Degradation is defined as reduction in the molecular weight of the polymer. The Degradation types are (a).Chain end degradation/de-polymerization (b).Random degradation/reverse of the poly condensation process. Biodegradation is defined as reduction in the molecular weight by naturally occurring microorganisms such as bacteria, fungi, and actinomycetes. 
That is involved in the degradation of both natural and synthetic plastics. Examples of Standard Testing for Polymer Biodegradability in Various Environments. ASTM D5338: Standard Test Method for Determining the Aerobic Biodegradation of Plastic Materials under Controlled Composting Conditions, ASTM D5210: Standard Test Method for Determining the Anaerobic Biodegradation of Plastic Materials in the Presence of Municipal Sewage Sludge, ASTM D5526: Standard Test Method for Determining Anaerobic Biodegradation of Plastic Materials under Accelerated Landfill Conditions, ASTM D5437: Standard Practice for Weathering of Plastics under Marine Floating Exposure. Plastics are biodegraded, (1).In wild nature by aerobic conditions CO2, water are produced,(2).In sediments & landfills by anaerobic conditions CO2, water, methane are produced, (3).In composts and soil by partial aerobic & anaerobic conditions. This review looks at the technological advancement made in the development of more easily biodegradable plastics and the biodegradation of conventional plastics by microorganisms. Additives, such as pro-oxidants and starch, are applied in synthetic materials to modify and make plastics biodegradable. Reviewing published and ongoing studies on plastic biodegradation, this paper attempts to make conclusions on potentially viable methods to reduce impacts of plastic waste on the", "title": "" }, { "docid": "6ccd0d743360b18365210456c56efc19", "text": "Falls are leading cause of injury and death for elderly people. T herefore it is necessary to design a proper fall prevention system to prevent falls at old age The use of MEMS sensor drastically reduces the size of the system which enables the module to be developed as a wearable suite. A special alert notification regarding the fall is activated using twitter. The state of the person can be viewed every 30sec and is well suited for monitoring aged persons. On a typical fall motion the device releases the compressed air module which is to be designed and alarms the concerned.", "title": "" }, { "docid": "04a8932566311e2e4abacf196b83aadb", "text": "Remote sensing and Geographic Information System play a pivotal role in environmental mapping, mineral exploration, agriculture, forestry, geology, water, ocean, infrastructure planning and management, disaster mitigation and management etc. Remote Sensing and GIS has grown as a major tool for collecting information on almost every aspect on the earth for last few decades. In the recent years, very high spatial and spectral resolution satellite data are available and the applications have multiplied with respect to various purpose. Remote sensing and GIS has contributed significantly towards developmental activities for the four decades in India. In the present paper, we have discussed the remote sensing and GIS applications of few environmental issues like Mining environment, Urban environment, Coastal and marine environment and Wasteland environment.", "title": "" }, { "docid": "6fb416991c80cb94ad09bc1bb09f81c7", "text": "Children with Autism Spectrum Disorder often require therapeutic interventions to support engagement in effective social interactions. In this paper, we present the results of a study conducted in three public schools that use an educational and behavioral intervention for the instruction of social skills in changing situational contexts. 
The results of this study led to the concept of interaction immediacy to help children maintain appropriate spatial boundaries, reply to conversation initiators, disengage appropriately at the end of an interaction, and identify potential communication partners. We describe design principles for Ubicomp technologies to support interaction immediacy and present an example design. The contribution of this work is twofold. First, we present an understanding of social skills in mobile and dynamic contexts. Second, we introduce the concept of interaction immediacy and show its effectiveness as a guiding principle for the design of Ubicomp applications.", "title": "" }, { "docid": "cb5ec5bc55e825289fc8c3251c5b8f92", "text": "This research presents a review of the psychometric measures on boredom that have been developed over the past 25 years. Specifically, the author examined the Boredom Proneness Scale (BPS; R. Farmer & N. D. Sundberg, 1986), the job boredom scales by E. A. Grubb (1975) and T. W. Lee (1986), a boredom coping measure (J. A. Hamilton, R. J. Haier, & M. S. Buchsbaum, 1984), 2 scales that assess leisure and free-time boredom (S. E. Iso-Ahola & E. Weissinger, 1990; M. G. Ragheb & S. P. Merydith, 2001), the Sexual Boredom Scale (SBS; J. D. Watt & J. E. Ewing, 1996), and the Boredom Susceptibility (BS) subscale of the Sensation Seeking Scale (M. Zuckerman, 1979a). Particular attention is devoted to discussing the literature regarding the psychometric properties of the BPS because it is the only full-scale measure on the construct of boredom.", "title": "" }, { "docid": "ac6fa78301c58ba516e22ac17b908c98", "text": "Human facial expressions change with different states of health; therefore, a facial-expression recognition system can be beneficial to a healthcare framework. In this paper, a facial-expression recognition system is proposed to improve the service of the healthcare in a smart city. The proposed system applies a bandlet transform to a face image to extract sub-bands. Then, a weighted, center-symmetric local binary pattern is applied to each sub-band block by block. The CS-LBP histograms of the blocks are concatenated to produce a feature vector of the face image. An optional feature-selection technique selects the most dominant features, which are then fed into two classifiers: a Gaussian mixture model and a support vector machine. The scores of these classifiers are fused by weight to produce a confidence score, which is used to make decisions about the facial expression’s type. Several experiments are performed using a large set of data to validate the proposed system. Experimental results show that the proposed system can recognize facial expressions with 99.95% accuracy.", "title": "" }, { "docid": "f2e2a19506651498eea81c984e8c61d7", "text": "MicroRNAs (miRNA) are crucial post-transcriptional regulators of gene expression and control cell differentiation and proliferation. However, little is known about their targeting of specific developmental pathways. Hedgehog (Hh) signalling controls cerebellar granule cell progenitor development and a subversion of this pathway leads to neoplastic transformation into medulloblastoma (MB). Using a miRNA high-throughput profile screening, we identify here a downregulated miRNA signature in human MBs with high Hh signalling. Specifically, we identify miR-125b and miR-326 as suppressors of the pathway activator Smoothened together with miR-324-5p, which also targets the downstream transcription factor Gli1. 
Downregulation of these miRNAs allows high levels of Hh-dependent gene expression leading to tumour cell proliferation. Interestingly, the downregulation of miR-324-5p is genetically determined by MB-associated deletion of chromosome 17p. We also report that whereas miRNA expression is downregulated in cerebellar neuronal progenitors, it increases alongside differentiation, thereby allowing cell maturation and growth inhibition. These findings identify a novel regulatory circuitry of the Hh signalling and suggest that misregulation of specific miRNAs, leading to its aberrant activation, sustain cancer development.", "title": "" }, { "docid": "d06cb1f4699757d95a00014e340f927f", "text": "Because of appearance variations, training samples of the tracked targets collected by the online tracker are required for updating the tracking model. However, this often leads to tracking drift problem because of potentially corrupted samples: 1) contaminated/outlier samples resulting from large variations (e.g. occlusion, illumination), and 2) misaligned samples caused by tracking inaccuracy. Therefore, in order to reduce the tracking drift while maintaining the adaptability of a visual tracker, how to alleviate these two issues via an effective model learning (updating) strategy is a key problem to be solved. To address these issues, this paper proposes a novel and optimal model learning (updating) scheme which aims to simultaneously eliminate the negative effects from these two issues mentioned above in a unified robust feature template learning framework. Particularly, the proposed feature template learning framework is capable of: 1) adaptively learning uncontaminated feature templates by separating out contaminated samples, and 2) resolving label ambiguities caused by misaligned samples via a probabilistic multiple instance learning (MIL) model. Experiments on challenging video sequences show that the proposed tracker performs favourably against several state-of-the-art trackers.", "title": "" }, { "docid": "d3984f8562288fabf0627b15af4dd64a", "text": "Volumetric representation has been widely used for 3D deep learning in shape analysis due to its generalization ability and regular data format. However, for fine-grained tasks like part segmentation, volumetric data has not been widely adopted compared to other representations. Aiming at delivering an effective volumetric method for 3D shape part segmentation, this paper proposes a novel volumetric convolutional neural network. Our method can extract discriminative features encoding detailed information from voxelized 3D data under limited resolution. To this purpose, a spatial dense extraction (SDE) module is designed to preserve spatial resolution during feature extraction procedure, alleviating the loss of details caused by sub-sampling operations such as max pooling. An attention feature aggregation (AFA) module is also introduced to adaptively select informative features from different abstraction levels, leading to segmentation with both semantic consistency and high accuracy of details. Experimental results demonstrate that promising results can be achieved by using volumetric data, with part segmentation accuracy comparable or superior to state-of-the-art non-volumetric methods.", "title": "" }, { "docid": "c9e9e00924b215c8c14e3756ea0d1ffc", "text": "A complex activity is a temporal composition of sub-events, and a sub-event typically consists of several low level micro-actions, such as body movement of different actors. 
Extracting these micro actions explicitly is beneficial for complex activity recognition due to actor selectivity, higher discriminative power, and motion clutter suppression. Moreover, considering both static and motion features is vital for activity recognition. However, optimally controlling the contribution from static and motion features still remains uninvestigated. In this work, we extract motion features at micro level, preserving the actor identity, to later obtain a high-level motion descriptor using a probabilistic model. Furthermore, we propose two novel schemas for combining static and motion features: Cholesky-transformation based and entropy-based. The former allows to control the contribution ratio precisely, while the latter obtains the optimal ratio mathematically. The ratio given by the entropy based method matches well with the experimental values obtained by the Choleksy transformation based method. This analysis also provides the ability to characterize a dataset, according to its richness in motion information. Finally, we study the effectiveness of modeling the temporal evolution of sub-event using an LSTM network. Experimental results demonstrate that the proposed technique outperforms state-of-the-art, when tested against two popular datasets.", "title": "" }, { "docid": "ad9f00a73306cba20073385c7482ba43", "text": "We present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.", "title": "" }, { "docid": "dbbd9f6440ee0c137ee0fb6a4aadba38", "text": "In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. 
We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings.", "title": "" }, { "docid": "173c0124ac81cfe8fa10fbdc20a1a094", "text": "This paper presents a new approach to compare fuzzy numbers using α-distance. Initially, the metric distance on the interval numbers based on the convex hull of the endpoints is proposed and it is extended to fuzzy numbers. All the properties of the α-distance are proved in details. Finally, the ranking of fuzzy numbers by the α-distance is discussed. In addition, the proposed method is compared with some known ones, the validity of the new method is illustrated by applying its to several group of fuzzy numbers.", "title": "" }, { "docid": "2ff15076533d1065209e0e62776eaa69", "text": "In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. 
Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high", "title": "" }, { "docid": "27d1e83593d51b34974eb4080993afc2", "text": "The use of on-demand techniques in routing protocols for multi-hop wireless ad hoc networks has been shown to have significant advantages in terms of reducing the routing protocol's overhead and improving its ability to react quickly to topology changes in the network. A number of on-demand multicast routing protocols have been proposed, but each also relies on significant periodic (non-on-demand) behavior within portions of the protocol. This paper presents the design and initial evluation of the Adaptive Demand-Driven Multicast Routing protocol (ADMR), a new on-demand ad hoc network multicast routing protocol that attemps to reduce as much as possible any non-on-demand components within the protocol. Multicast routing state is dynamically established and maintained only for active groups and only in nodes located between multicast senders and receivers. Each multicast data packet is forwarded along the shortest-delay path with multicast forwarding state, from the sender to the receivers, and receivers dynamically adapt to the sending pattern of senders in order to efficiently balance overhead and maintenance of the multicast routing state as nodes in the network move or as wireless transmission conditions in the network change. We describe the operation of the ADMR protocol and present an initial evaluation of its performance based on detailed simulation in ad hoc networks of 50 mobile nodes. We show that ADMR achieves packet delivery ratios within 1% of a flooding-based protocol, while incurring half to a quarter of the overhead.", "title": "" }, { "docid": "56c7c065c390d1ed5f454f663289788d", "text": "This paper presents a novel approach to character identification, that is an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network that takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the embeddings learned and creates cluster embeddings for entity linking. Our coreference resolution model shows comparable results to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, showing the F1 score of 86.76% and the accuracy of 95.30% for character identification.", "title": "" }, { "docid": "4da2675e6e4af699e6d887dfe0c3ca51", "text": "Using an original method of case evaluation which involved an analysis panel of over 80 Italian psychologists and included a lay case evaluation, the author has investigated the effectiveness of transactional analysis psychotherapy for a case of mixed anxiety and depression with a 39 year old white British male who attended 14 weekly sessions. 
CORE-OM (Evans, Mellor-Clark, Margison, Barkham, Audin, Connell and McGrath, 2000), PHQ-9 (Kroenke, Spitzer & Williams, 2001), GAD-7 (Spitzer, Kroenke, Williams & Löwe, 2006), Hamilton Rating Scale for Depression (Hamilton, 1980) were used for screening and also for outcome measurement, along with Session Rating Scale (SRS v.3.0) (Duncan, Miller, Sparks, Claud, Reynolds, Brown and Johnson, 2003) and Comparative Psychotherapy Process Scale (CPPS) (Hilsenroth, Blagys, Ackerman, Bonge and Blais, 2005), within an overall adjudicational case study method. The conclusion of the analysis panel and the lay judge was unanimously that this was a good outcome case and that the client’s changes had been as a direct result of therapy. Previous case study research has demonstrated that TA is effective for depression, and this present case provides foundation evidence for the effectiveness of TA for depression with comorbid anxiety.", "title": "" } ]
scidocsrr
9d7233877a79481ef40fe83b7edbf01f
Who Owns the Data? Open Data for Healthcare
[ { "docid": "f1294ba7d894db9c5145d11f1251a498", "text": "A grand goal of future medicine is in modelling the complexity of patients to tailor medical decisions, health practices and therapies to the individual patient. This trend towards personalized medicine produces unprecedented amounts of data, and even though the fact that human experts are excellent at pattern recognition in dimensions of ≤ 3, the problem is that most biomedical data is in dimensions much higher than 3, making manual analysis difficult and often impossible. Experts in daily medical routine are decreasingly capable of dealing with the complexity of such data. Moreover, they are not interested the data, they need knowledge and insight in order to support their work. Consequently, a big trend in computer science is to provide efficient, useable and useful computational methods, algorithms and tools to discover knowledge and to interactively gain insight into high-dimensional data. A synergistic combination of methodologies of two areas may be of great help here: Human–Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine learning. A trend in both disciplines is the acquisition and adaptation of representations that support efficient learning. Mapping higher dimensional data into lower dimensions is a major task in HCI, and a concerted effort of computational methods including recent advances from graphtheory and algebraic topology may contribute to finding solutions. Moreover, much biomedical data is sparse, noisy and timedependent, hence entropy is also amongst promising topics. This paper provides a rough overview of the HCI-KDD approach and focuses on three future trends: graph-based mining, topological data mining and entropy-based data mining.", "title": "" } ]
[ { "docid": "21af4ea62f07966097c8ab46f7226907", "text": "With the introduction of Microsoft Kinect, there has been considerable interest in creating various attractive and feasible applications in related research fields. Kinect simultaneously captures the depth and color information and provides real-time reliable 3D full-body human-pose reconstruction that essentially turns the human body into a controller. This article presents a finger-writing system that recognizes characters written in the air without the need for an extra handheld device. This application adaptively merges depth, skin, and background models for the hand segmentation to overcome the limitations of the individual models, such as hand-face overlapping problems and the depth-color nonsynchronization. The writing fingertip is detected by a new real-time dual-mode switching method. The recognition accuracy rate is greater than 90 percent for the first five candidates of Chinese characters, English characters, and numbers.", "title": "" }, { "docid": "7654ada6aabee2f8abf411dba5383d96", "text": "In the past decade, Convolutional Neural Networks (CNNs) have been demonstrated successful for object detections. However, the size of network input is limited by the amount of memory available on GPUs. Moreover, performance degrades when detecting small objects. To alleviate the memory usage and improve the performance of detecting small traffic signs, we proposed an approach for detecting small traffic signs from large images under real world conditions. In particular, large images are broken into small patches as input to a Small-Object-Sensitive-CNN (SOS-CNN) modified from a Single Shot Multibox Detector (SSD) framework with a VGG-16 network as the base network to produce patch-level object detection results. Scale invariance is achieved by applying the SOS-CNN on an image pyramid. Then, image-level object detection is obtained by projecting all the patch-level detection results to the image at the original scale. Experimental results on a real-world conditioned traffic sign dataset have demonstrated the effectiveness of the proposed method in terms of detection accuracy and recall, especially for those with small sizes.", "title": "" }, { "docid": "947a96e2115f5b271f5550e090859133", "text": "Degenerative lumbar spinal stenosis is caused by mechanical factors and/or biochemical alterations within the intervertebral disk that lead to disk space collapse, facet joint hypertrophy, soft-tissue infolding, and osteophyte formation, which narrows the space available for the thecal sac and exiting nerve roots. The clinical consequence of this compression is neurogenic claudication and varying degrees of leg and back pain. Degenerative lumbar spinal stenosis is a major cause of pain and impaired quality of life in the elderly. The natural history of this condition varies; however, it has not been shown to worsen progressively. Nonsurgical management consists of nonsteroidal anti-inflammatory drugs, physical therapy, and epidural steroid injections. If nonsurgical management is unsuccessful and neurologic decline persists or progresses, surgical treatment, most commonly laminectomy, is indicated. 
Recent prospective randomized studies have demonstrated that surgery is superior to nonsurgical management in terms of controlling pain and improving function in patients with lumbar spinal stenosis.", "title": "" }, { "docid": "4261e44dad03e8db3c0520126b9c7c4d", "text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a may that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.", "title": "" }, { "docid": "a949afe3f53bf7695a35c0c1cc8374c3", "text": "Increasingly complex proteins can be made by a recombinant chemical approach where proteins that can be made easily can be combined by site-specific chemical conjugation to form multifunctional or more active protein therapeutics. Protein dimers may display increased avidity for cell surface receptors. The increased size of protein dimers may also increase circulation times. Cytokines bind to cell surface receptors that dimerise, so much of the solvent accessible surface of a cytokine is involved in binding to its target. Interferon (IFN) homo-dimers (IFN-PEG-IFN) were prepared by two methods: site-specific bis-alkylation conjugation of PEG to the two thiols of a native disulphide or to two imidazoles on a histidine tag of two His8-tagged IFN (His8IFN). Several control conjugates were also prepared to assess the relative activity of these IFN homo-dimers. The His8IFN-PEG20-His8IFN obtained by histidine-specific conjugation displayed marginally greater in vitro antiviral activity compared to the IFN-PEG20-IFN homo-dimer obtained by disulphide re-bridging conjugation. This result is consistent with previous observations in which enhanced retention of activity was made possible by conjugation to an N-terminal His-tag on the IFN. Comparison of the antiviral and antiproliferative activities of the two IFN homo-dimers prepared by disulphide re-bridging conjugation indicated that IFN-PEG10-IFN was more biologically active than IFN-PEG20-IFN. This result suggests that the size of PEG may influence the antiviral activity of IFN-PEG-IFN homo-dimers.", "title": "" }, { "docid": "49ff096deb6621438286942b792d6af3", "text": "Fast fashion is a business model that offers (the perception of) fashionable clothes at affordable prices. From an operations standpoint, fast fashion requires a highly responsive supply chain that can support a product assortment that is periodically changing. Though the underlying principles are simple, the successful execution of the fast-fashion business model poses numerous challenges. 
We present a careful examination of this business model and discuss its execution by analyzing the most prominent firms in the industry. We then survey the academic literature for research that is specifically relevant or directly related to fast fashion. Our goal is to expose the main components of fast fashion and to identify untapped research opportunities.", "title": "" }, { "docid": "274373d46b748d92e6913496507353b1", "text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.", "title": "" }, { "docid": "3fd8092faee792a316fb3d1d7c2b6244", "text": "The complete dynamics model of a four-Mecanum-wheeled robot considering mass eccentricity and friction uncertainty is derived using the Lagrange’s equation. Then based on the dynamics model, a nonlinear stable adaptive control law is derived using the backstepping method via Lyapunov stability theory. In order to compensate for the model uncertainty, a nonlinear damping term is included in the control law, and the parameter update law with σ-modification is considered for the uncertainty estimation. Computer simulations are conducted to illustrate the suggested control approach.", "title": "" }, { "docid": "a6b5f49b8161b45540bdd333d8588cd8", "text": "Personality inconsistency is one of the major problems for chit-chat sequence to sequence conversational agents. Works studying this problem have proposed models with the capability of generating personalized responses, but there is not an existing evaluation method for measuring the performance of these models on personality. This thesis develops a new evaluation method based on the psychological study of personality, in particular the Big Five personality traits. With the new evaluation method, the thesis examines if the responses generated by personalized chit-chat sequence to sequence conversational agents are distinguished for speakers with different personalities. The thesis also proposes a new model that generates distinguished responses based on given personalities. The results of our experiments in the thesis show that: for both the existing personalized model and the new model that we propose, the generated responses for speakers with different personalities are significantly more distinguished than a random baseline; specially for our new model, it has the capability of generating distinguished responses for different types of personalities measured by the Big Five personality traits.", "title": "" }, { "docid": "1148cc41ee6d016a495856789a7b739d", "text": "Visual reasoning is a special visual question answering problem that is multi-step and compositional by nature, and also requires intensive text-vision interactions. We propose CMM: Cascaded Mutual Modulation as a novel end-to-end visual reasoning model. CMM includes a multi-step comprehension process for both question and image. 
In each step, we use a Feature-wise Linear Modulation (FiLM) technique to enable textual/visual pipeline to mutually control each other. Experiments show that CMM significantly outperforms most related models, and reach state-of-the-arts on two visual reasoning benchmarks: CLEVR and NLVR, collected from both synthetic and natural languages. Ablation studies confirm that both our multistep framework and our visual-guided language modulation are critical to the task. Our code is available at https://github.com/FlamingHorizon/CMM-VR.", "title": "" }, { "docid": "dfb83ad16854797137e34a5c7cb110ae", "text": "The increasing computing requirements for GPUs (Graphics Processing Units) have favoured the design and marketing of commodity devices that nowadays can also be used to accelerate general purpose computing. Therefore, future high performance clusters intended for HPC (High Performance Computing) will likely include such devices. However, high-end GPU-based accelerators used in HPC feature a considerable energy consumption, so that attaching a GPU to every node of a cluster has a strong impact on its overall power consumption. In this paper we detail a framework that enables remote GPU acceleration in HPC clusters, thus allowing a reduction in the number of accelerators installed in the cluster. This leads to energy, acquisition, maintenance, and space savings.", "title": "" }, { "docid": "15a76f43782ef752e4b8e61e38726d69", "text": "This paper considers invariant texture analysis. Texture analysis approaches whose performances are not affected by translation, rotation, affine, and perspective transform are addressed. Existing invariant texture analysis algorithms are carefully studied and classified into three categories: statistical methods, model based methods, and structural methods. The importance of invariant texture analysis is presented first. Each approach is reviewed according to its classification, and its merits and drawbacks are outlined. The focus of possible future work is also suggested. © 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "18968ed6bec670e4c8ba93933d0cc3e3", "text": "The medial prefrontal cortex (MPFC) is regarded as a region of the brain that supports self-referential processes, including the integration of sensory information with self-knowledge and the retrieval of autobiographical information. I used functional magnetic resonance imaging and a novel procedure for eliciting autobiographical memories with excerpts of popular music dating to one's extended childhood to test the hypothesis that music and autobiographical memories are integrated in the MPFC. Dorsal regions of the MPFC (Brodmann area 8/9) were shown to respond parametrically to the degree of autobiographical salience experienced over the course of individual 30 s excerpts. Moreover, the dorsal MPFC also responded on a second, faster timescale corresponding to the signature movements of the musical excerpts through tonal space. These results suggest that the dorsal MPFC associates music and memories when we experience emotionally salient episodic memories that are triggered by familiar songs from our personal past. MPFC acted in concert with lateral prefrontal and posterior cortices both in terms of tonality tracking and overall responsiveness to familiar and autobiographically salient songs.
These findings extend the results of previous autobiographical memory research by demonstrating the spontaneous activation of an autobiographical memory network in a naturalistic task with low retrieval demands.", "title": "" }, { "docid": "04c52aa382cf53c3ab208bd3c0fc5354", "text": "This article is the last of our series of articles on survey research. In it, we discuss how to analyze survey data. We provide examples of correct and incorrect analysis techniques used in software engineering surveys.", "title": "" }, { "docid": "bb547f90a98aa25d0824dc63b9de952d", "text": "When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.", "title": "" }, { "docid": "0e2a3870af4b7636c0f5e56d658fcc77", "text": "In this review, we provide an overview of protein synthesis in the yeast Saccharomyces cerevisiae The mechanism of protein synthesis is well conserved between yeast and other eukaryotes, and molecular genetic studies in budding yeast have provided critical insights into the fundamental process of translation as well as its regulation. The review focuses on the initiation and elongation phases of protein synthesis with descriptions of the roles of translation initiation and elongation factors that assist the ribosome in binding the messenger RNA (mRNA), selecting the start codon, and synthesizing the polypeptide. We also examine mechanisms of translational control highlighting the mRNA cap-binding proteins and the regulation of GCN4 and CPA1 mRNAs.", "title": "" }, { "docid": "8695757545e44358fd63f06936335903", "text": "We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.", "title": "" }, { "docid": "a06274d9bf6dba90ea0178ec11a20fb6", "text": "Osteoporosis has become one of the most prevalent and costly diseases in the world. It is a metabolic disease characterized by reduction in bone mass due to an imbalance between bone formation and resorption. Osteoporosis causes fractures, prolongs bone healing, and impedes osseointegration of dental implants. Its pathological features include osteopenia, degradation of bone tissue microstructure, and increase of bone fragility. In traditional Chinese medicine, the herb Rhizoma Drynariae has been commonly used to treat osteoporosis and bone nonunion. However, the precise underlying mechanism is as yet unclear. 
Osteoprotegerin is a cytokine receptor shown to play an important role in osteoblast differentiation and bone formation. Hence, activators and ligands of osteoprotegerin are promising drug targets and have been the focus of studies on the development of therapeutics against osteoporosis. In the current study, we found that naringin could synergistically enhance the action of 1α,25-dihydroxyvitamin D3 in promoting the secretion of osteoprotegerin by osteoblasts in vitro. In addition, naringin can also influence the generation of osteoclasts and subsequently bone loss during organ culture. In conclusion, this study provides evidence that natural compounds such as naringin have the potential to be used as alternative medicines for the prevention and treatment of osteolysis.", "title": "" }, { "docid": "3907bddf6a56b96c4e474d46ddd04359", "text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.", "title": "" } ]
scidocsrr
aada452949a24e57489e7bb6d45a177a
Technology addiction's contribution to mental wellbeing: The positive effect of online social capital
[ { "docid": "94d0d80880adeb6ad7a333cf6382fa90", "text": "In 2 daily experience studies and a laboratory study, the authors test predictions from approach-avoidance motivational theory to understand how dating couples can maintain feelings of relationship satisfaction in their daily lives and over the course of time. Approach goals were associated with increased relationship satisfaction on a daily basis and over time, particularly when both partners were high in approach goals. Avoidance goals were associated with decreases in relationship satisfaction over time, and people were particularly dissatisfied when they were involved with a partner with high avoidance goals. People high in approach goals and their partners were rated as relatively more satisfied and responsive to a partner's needs by outside observers in the lab, whereas people with high avoidance goals and their partners were rated as less satisfied and responsive. Positive emotions mediated the link between approach goals and daily satisfaction in both studies, and responsiveness to the partner's needs was an additional behavioral mechanism in Study 2. Implications of these findings for approach-avoidance motivational theory and for the maintenance of satisfying relationships over time are discussed.", "title": "" } ]
[ { "docid": "d99b2bab853f867024d1becb0835548d", "text": "In this paper, we tackle challenges in migrating enterprise services into hybrid cloud-based deployments, where enterprise operations are partly hosted on-premise and partly in the cloud. Such hybrid architectures enable enterprises to benefit from cloud-based architectures, while honoring application performance requirements, and privacy restrictions on what services may be migrated to the cloud. We make several contributions. First, we highlight the complexity inherent in enterprise applications today in terms of their multi-tiered nature, large number of application components, and interdependencies. Second, we have developed a model to explore the benefits of a hybrid migration approach. Our model takes into account enterprise-specific constraints, cost savings, and increased transaction delays and wide-area communication costs that may result from the migration. Evaluations based on real enterprise applications and Azure-based cloud deployments show the benefits of a hybrid migration approach, and the importance of planning which components to migrate. Third, we shed insight on security policies associated with enterprise applications in data centers. We articulate the importance of ensuring assurable reconfiguration of security policies as enterprise applications are migrated to the cloud. We present algorithms to achieve this goal, and demonstrate their efficacy on realistic migration scenarios.", "title": "" }, { "docid": "7974d8e70775f1b7ef4d8c9aefae870e", "text": "Low-rank decomposition plays a central role in accelerating convolutional neural network (CNN), and the rank of decomposed kernel-tensor is a key parameter that determines the complexity and accuracy of a neural network. In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy. Combinatorial optimization is not feasible due to search space limitations. To restrict the search space and obtain the optimal rank, we define the space constraint parameters with a boundary condition. We also propose a linearly-approximated accuracy function to predict the fine-tuned accuracy of the optimized CNN model during the cost reduction. Experimental results on AlexNet and VGG-16 show that the proposed rank selection algorithm satisfies the accuracy constraint. Our method combined with truncated-SVD outperforms state-of-the-art methods in terms of inference and training time at almost the same accuracy.", "title": "" }, { "docid": "d64c30da6f8d94ca4effd83075b15901", "text": "The task of natural question generation is to generate a corresponding question given the input passage (fact) and answer. It is useful for enlarging the training set of QA systems. Previous work has adopted sequence-to-sequence models that take a passage with an additional bit to indicate answer position as input. However, they do not explicitly model the information between answer and other context within the passage. We propose a model that matches the answer with the passage before generating the question. Experiments show that our model outperforms the existing state of the art using rich features.", "title": "" }, { "docid": "414160c5d5137def904c38cccc619628", "text": "Side-channel attacks, particularly differential power analysis (DPA) attacks, are efficient ways to extract secret keys of the attacked devices by leaked physical information. 
To resist DPA attacks, hiding and masking methods are commonly used, but it usually resulted in high area overhead and performance degradation. In this brief, a DPA countermeasure circuit based on digital controlled ring oscillators is presented to efficiently resist the first-order DPA attack. The implementation of the critical S-box of the advanced encryption standard (AES) algorithm shows that the area overhead of a single S-box is about 19% without any extra delay in the critical path. Moreover, the countermeasure circuit can be mounted onto different S-box implementations based on composite field or look-up table (LUT). Based on our approach, a DPA-resistant AES chip can be proposed to maintain the same throughput with less than 2K extra gates.", "title": "" }, { "docid": "17c6859c2ec80d4136cb8e76859e47a6", "text": "This paper describes a complete and efficient vision system d eveloped for the robotic soccer team of the University of Aveiro, CAMB ADA (Cooperative Autonomous Mobile roBots with Advanced Distributed Ar chitecture). The system consists on a firewire camera mounted vertically on th e top of the robots. A hyperbolic mirror placed above the camera reflects the 360 d egrees of the field around the robot. The omnidirectional system is used to find t he ball, the goals, detect the presence of obstacles and the white lines, used by our localization algorithm. In this paper we present a set of algorithms to extract efficiently the color information of the acquired images and, in a second phase, ex tract the information of all objects of interest. Our vision system architect ure uses a distributed paradigm where the main tasks, namely image acquisition, co lor extraction, object detection and image visualization, are separated in se veral processes that can run at the same time. We developed an efficient color extracti on algorithm based on lookup tables and a radial model for object detection. Our participation in the last national robotic contest, ROBOTICA 2007, where we have obtained the first place in the Medium Size League of robotic soccer, shows the e ffectiveness of our algorithms. Moreover, our experiments show that the sys tem is fast and accurate having a maximum processing time independently of the r obot position and the number of objects found in the field.", "title": "" }, { "docid": "5229fb13c66ca8a2b079f8fe46bb9848", "text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.", "title": "" }, { "docid": "306136e7ffd6b1839956d9f712afbda2", "text": "Dynamic scheduling cloud resources according to the change of the load are key to improve cloud computing on-demand service capabilities. This paper proposes a load-adaptive cloud resource scheduling model based on ant colony algorithm. By real-time monitoring virtual machine of performance parameters, once judging overload, it schedules fast cloud resources using ant colony algorithm to bear some load on the load-free node. So that it can meet changing load requirements. 
By analyzing an example result, the model can meet the goals and requirements of self-adaptive cloud resources scheduling and improve the efficiency of the resource utilization.", "title": "" }, { "docid": "f1255742f2b1851457dd92ad97db7c8e", "text": "Model transformations are frequently applied in business process modeling to bridge between languages on a different level of abstraction and formality. In this paper, we define a transformation between BPMN which is developed to enable business user to develop readily understandable graphical representations of business processes and YAWL, a formal workflow language that is able to capture all of the 20 workflow patterns reported. We illustrate the transformation challenges and present a suitable transformation algorithm. The benefit of the transformation is threefold. Firstly, it clarifies the semantics of BPMN via a mapping to YAWL. Secondly, the deployment of BPMN business process models is simplified. Thirdly, BPMN models can be analyzed with YAWL verification tools.", "title": "" }, { "docid": "2f8439098872e3af2c8d0ade5fbb15e8", "text": "Natural language explanations of deep neural network decisions provide an intuitive way for an AI agent to articulate a reasoning process. Current textual explanations learn to discuss class discriminative features in an image. However, it is also helpful to understand which attributes might change a classification decision if present in an image (e.g., “This is not a Scarlet Tanager because it does not have black wings.”) We call such textual explanations counterfactual explanations, and propose an intuitive method to generate counterfactual explanations by inspecting which evidence in an input is missing, but might contribute to a different classification decision if present in the image. To demonstrate our method we consider a fine-grained image classification task in which we take as input an image and a counterfactual class and output text which explains why the image does not belong to a counterfactual class. We then analyze our generated counterfactual explanations both qualitatively and quantitatively using proposed automatic metrics.", "title": "" }, { "docid": "aaba5dc8efc9b6a62255139965b6f98d", "text": "The interaction of an autonomous mobile robot with the real world critically depends on the robot's morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insufficient for accurate validation of control algorithms. If simulation environments are often very efficient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enough support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-effectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware.
The size and the price of the described robot open the way to cost-effective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.", "title": "" }, { "docid": "70859cc5754a4699331e479a566b70f1", "text": "The relationship between mind and brain has philosophical, scientific, and practical implications. Two separate but related surveys from the University of Edinburgh (University students, n= 250) and the University of Liège (health-care workers, lay public, n= 1858) were performed to probe attitudes toward the mind-brain relationship and the variables that account for differences in views. Four statements were included, each relating to an aspect of the mind-brain relationship. The Edinburgh survey revealed a predominance of dualistic attitudes emphasizing the separateness of mind and brain. In the Liège survey, younger participants, women, and those with religious beliefs were more likely to agree that the mind and brain are separate, that some spiritual part of us survives death, that each of us has a soul that is separate from the body, and to deny the physicality of mind. Religious belief was found to be the best predictor for dualistic attitudes. Although the majority of health-care workers denied the distinction between consciousness and the soma, more than one-third of medical and paramedical professionals regarded mind and brain as separate entities. The findings of the study are in line with previous studies in developmental psychology and with surveys of scientists' attitudes toward the relationship between mind and brain. We suggest that the results are relevant to clinical practice, to the formulation of scientific questions about the nature of consciousness, and to the reception of scientific theories of consciousness by the general public.", "title": "" }, { "docid": "024b739dc047e17310fe181591fcd335", "text": "In this paper, a Ka-Band patch sub-array structure for millimeter-wave phased array applications is demonstrated. The conventional corner truncated patch is modified to improve the impedance and CP bandwidth alignment. A new sub-array feed approach is introduced to reduce complexity of the feed line between elements and increase the radiation efficiency. A sub-array prototype is built and tested. Good agreement with the theoretical results is obtained.", "title": "" }, { "docid": "64cbd9f9644cc71f5108c3f2ee7851e7", "text": "The use of neurofeedback as an operant conditioning paradigm has disclosed that participants are able to gain some control over particular aspects of their electroencephalogram (EEG). Based on the association between theta activity (4-7 Hz) and working memory performance, and sensorimotor rhythm (SMR) activity (12-15 Hz) and attentional processing, we investigated the possibility that training healthy individuals to enhance either of these frequencies would specifically influence a particular aspect of cognitive performance, relative to a non-neurofeedback control-group. The results revealed that after eight sessions of neurofeedback the SMR-group were able to selectively enhance their SMR activity, as indexed by increased SMR/theta and SMR/beta ratios. In contrast, those trained to selectively enhance theta activity failed to exhibit any changes in their EEG.
Furthermore, the SMR-group exhibited a significant and clear improvement in cued recall performance, using a semantic working memory task, and to a lesser extent showed improved accuracy of focused attentional processing using a 2-sequence continuous performance task. This suggests that normal healthy individuals can learn to increase a specific component of their EEG activity, and that such enhanced activity may facilitate semantic processing in a working memory task and to a lesser extent focused attention. We discuss possible mechanisms that could mediate such effects and indicate a number of directions for future research.", "title": "" }, { "docid": "84ad547eb8a3435b214ed1a192fa96a9", "text": "We present the first known case of somatic PTEN mosaicism causing features of Cowden syndrome (CS) and inheritance in the subsequent generation. A 20-year-old woman presented for genetics evaluation with multiple ganglioneuromas of the colon. On examination, she was found to have a thyroid goiter, macrocephaly, and tongue papules, all suggestive of CS. However, her reported family history was not suspicious for CS. A deleterious PTEN mutation was identified in blood lymphocytes, 966A>G, 967delA. Genetic testing was recommended for her parents. Her 48-year-old father was referred for evaluation and was found to have macrocephaly and a history of Hashimoto’s thyroiditis, but no other features of CS. Site-specific genetic testing carried out on blood lymphocytes showed mosaicism for the same PTEN mutation identified in his daughter. Identifying PTEN mosaicism in the proband’s father had significant implications for the risk assessment/genetic testing plan for the rest of his family. His result also provides impetus for somatic mosaicism in a parent to be considered when a de novo PTEN mutation is suspected.", "title": "" }, { "docid": "f631cca2bd0c22f60af1d5f63a7522b5", "text": "We introduce the problem of k-pattern set mining, concerned with finding a set of k related patterns under constraints. This contrasts to regular pattern mining, where one searches for many individual patterns. The k-pattern set mining problem is a very general problem that can be instantiated to a wide variety of well-known mining tasks including concept-learning, rule-learning, redescription mining, conceptual clustering and tiling. To this end, we formulate a large number of constraints for use in k-pattern set mining, both at the local level, that is, on individual patterns, and on the global level, that is, on the overall pattern set. Building general solvers for the pattern set mining problem remains a challenge. Here, we investigate to what extent constraint programming (CP) can be used as a general solution strategy. We present a mapping of pattern set constraints to constraints currently available in CP. This allows us to investigate a large number of settings within a unified framework and to gain insight in the possibilities and limitations of these solvers. This is important as it allows us to create guidelines in how to model new problems successfully and how to model existing problems more efficiently. It also opens up the way for other solver technologies.", "title": "" }, { "docid": "ee2c37fd2ebc3fd783bfe53213e7470e", "text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. 
A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.", "title": "" }, { "docid": "957a179c41a641f337b89dbfdc8ea1a9", "text": "Medical staff around the world must take reasonable steps to identify newborns and infants clearly, so as to prevent mix-ups, and to ensure the correct medication reaches the correct child. Footprints are frequently taken despite verification with footprints being challenging due to strong noise. The noise is introduced by the tininess of the structures, movement during capture, and the infant's rapid growth. In this article we address the image processing part of the problem and introduce a novel algorithm for the extraction of creases from infant footprints. The algorithm uses directional filtering on different resolution levels, morphological processing, and block-wise crease line reconstruction. We successfully test our method on noise-affected infant footprints taken from the same infants at different ages.", "title": "" }, { "docid": "45c19ce0417a5f873184dc72eb107cea", "text": "Common Information Model (CIM) is emerging as a standard for information modelling for power control centers. While, IEC 61850 by International Electrotechnical Commission (IEC) is emerging as a standard for achieving interoperability and automation at the substation level. In future, once these two standards are well adopted, the issue of integration of these standards becomes imminent. Some efforts reported towards the integration of these standards have been surveyed. This paper describes a possible approach for the integration of IEC 61850 and CIM standards based on mapping between the representation of elements of these two standards. This enables seamless data transfer from one standard to the other. Mapping between the objects of IEC 61850 and CIM standards both in the static and dynamic models is discussed. A CIM based topology processing application is used to demonstrate the design of the data transfer between the standards. The scope and status of implementation of CIM in the Indian power sector is briefed.", "title": "" }, { "docid": "39036fc99ab177774593bd0fb0fbeef0", "text": "Manipulation of deformable objects, such as ropes and cloth, is an important but challenging problem in robotics. We present a learning-based system where a robot takes as input a sequence of images of a human manipulating a rope from an initial to goal configuration, and outputs a sequence of actions that can reproduce the human demonstration, using only monocular images as input. 
To perform this task, the robot learns a pixel-level inverse dynamics model of rope manipulation directly from images in a self-supervised manner, using about 60K interactions with the rope collected autonomously by the robot. The human demonstration provides a high-level plan of what to do and the low-level inverse model is used to execute the plan. We show that by combining the high and low-level plans, the robot can successfully manipulate a rope into a variety of target shapes using only a sequence of human-provided images for direction.", "title": "" }, { "docid": "feef714b024ad00086a5303a8b74b0a4", "text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.", "title": "" } ]
scidocsrr
bf7cb5713c1bc22b3c7b27902d580a24
Why Users Disintermediate Peer-to-Peer Marketplaces
[ { "docid": "00b98536f0ecd554442a67fb31f77f4c", "text": "We use a large, nationally-representative sample of working-age adults to demonstrate that personality (as measured by the Big Five) is stable over a four-year period. Average personality changes are small and do not vary substantially across age groups. Intra-individual personality change is generally unrelated to experiencing adverse life events and is unlikely to be economically meaningful. Like other non-cognitive traits, personality can be modeled as a stable input into many economic decisions. JEL classi cation: J3, C18.", "title": "" }, { "docid": "a8ca6ef7b99cca60f5011b91d09e1b06", "text": "When virtual teams need to establish trust at a distance, it is advantageous for them to use rich media to communicate. We studied the emergence of trust in a social dilemma game in four different communication situations: face-to-face, video, audio, and text chat. All three of the richer conditions were significant improvements over text chat. Video and audio conferencing groups were nearly as good as face-to-face, but both did show some evidence of what we term delayed trust (slower progress toward full cooperation) and fragile trust (vulnerability to opportunistic behavior)", "title": "" }, { "docid": "76034cd981a64059f749338a2107e446", "text": "We examine how financial assurance structures and the clearly defined financial transaction at the core of monetized network hospitality reduce uncertainty for Airbnb hosts and guests. We apply the principles of social exchange and intrinsic and extrinsic motivation to a qualitative study of Airbnb hosts to 1) describe activities that are facilitated by the peer-to-peer exchange platform and 2) how the assurance of the initial financial exchange facilitates additional social exchanges between hosts and guests. The study illustrates that the financial benefits of hosting do not necessarily crowd out intrinsic motivations for hosting but instead strengthen them and even act as a gateway to further social exchange and interpersonal interaction. We describe the assurance structures in networked peer-to-peer exchange, and explain how such assurances can reconcile contention between extrinsic and intrinsic motivations. We conclude with implications for design and future research.", "title": "" } ]
[ { "docid": "12680d4fcf57a8a18d9c2e2b1107bf2d", "text": "Recent advances in computer and technology resulted into ever increasing set of documents. The need is to classify the set of documents according to the type. Laying related documents together is expedient for decision making. Researchers who perform interdisciplinary research acquire repositories on different topics. Classifying the repositories according to the topic is a real need to analyze the research papers. Experiments are tried on different real and artificial datasets such as NEWS 20, Reuters, emails, research papers on different topics. Term Frequency-Inverse Document Frequency algorithm is used along with fuzzy K-means and hierarchical algorithm. Initially experiment is being carried out on small dataset and performed cluster analysis. The best algorithm is applied on the extended dataset. Along with different clusters of the related documents the resulted silhouette coefficient, entropy and F-measure trend are presented to show algorithm behavior for each data set.", "title": "" }, { "docid": "d31ba2b9ca7f5a33619fef33ade3b75a", "text": "We present ARPKI, a public-key infrastructure that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI is the first such infrastructure that systematically takes into account requirements identified by previous research. Moreover, ARPKI is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing all features required for deployment. ARPKI efficiently handles the certification process with low overhead and without incurring additional latency to TLS.\n ARPKI offers extremely strong security guarantees, where compromising n-1 trusted signing and verifying entities is insufficient to launch an impersonation attack. Moreover, it deters misbehavior as all its operations are publicly visible.", "title": "" }, { "docid": "8b205549e43d355174e8d8fce645ca99", "text": "In recent era, the weighted matrix rank minimization is used to reduce image noise, promisingly. However, low-rank weighted conditions may cause oversmoothing or oversharpening of the denoised image. This demands a clever engineering algorithm. Particularly, to remove heavy noise in image is always a challenging task, specially, when there is need to preserve the fine edge structures. To attain a reliable estimate of heavy noise image, a norm weighted fusion estimators method is proposed in wavelet domain. This holds the significant geometric structure of the given noisy image during the denoising process. Proposed method is applied on standard benchmark images, and simulation results outperform the most popular rivals of noise reduction approaches, such as BM3D, EPLL, LSSC, NCSR, SAIST, and WNNM in terms of the quality measurement metric PSNR (dB) and structural analysis SSIM indices.", "title": "" }, { "docid": "7f575dd097ac747eddd2d7d0dc1055d5", "text": "It has been widely believed that biometric template aging does not occur for iris biometrics. We compare the match score distribution for short time-lapse iris image pairs, with a mean of approximately one month between the enrollment image and the verification image, to the match score distributions for image pairs with one, two and three years of time lapse. 
We find clear and consistent evidence of a template aging effect that is noticeable at one year and that increases with increasing time lapse. For a state-of-the-art iris matcher, and three years of time lapse, at a decision threshold corresponding to a one in two million false match rate, we observe a 153% increase in the false non-match rate, with a bootstrap estimated 95% confidence interval of 85% to 307%.", "title": "" }, { "docid": "2a1c5ba1c1057364420fd220995a74ff", "text": "A multicell rectifier (MC) structure with N + 2 redundancy is presented. The topology is based on power cells implemented with integrated gate-commutated thyristors (IGCTs) to challenge the SCR standard industry solution for the past 35 years. This rectifier is a reliable, compact, efficient, nonpolluting alternative and cost-effective solution for electrolytic applications. Its structure, based on power cells, enables load shedding to ensure power delivery even in the event of power cell failures. It injects quasi-sinusoidal input currents and provides unity power factor without the use of passive or active filters. A complete evaluation based on IEEE standards 493-1997 and IEEE C57.18.10 for average downtime, failure rates, and efficiency is included. For comparison purposes, results are shown against conventional systems known for their high efficiency and reliability.", "title": "" }, { "docid": "36da2b6102762c80b3ae8068d764e220", "text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional
Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, 2 British Journal of Educational Technology © 2009 The Authors. Journal compilation © 2009 Becta. only a handful of teachers actually changed their teaching practices (ie, moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, and the inability to sustain interest in the particular lessons and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. We agree with those who suggest that any proposed classroom intervention correlates directly to the expectations and perceived value/benefit on the part of the integrating teachers, who largely control what and how their students learn (Hanusheck, Kain & Rivkin, 1998). Faced with these significant obstacles, it should not be surprising that video games, like other technologies, have been less than successful in transforming the classroom. 
We further suggest that overcoming these obstacles requires both a top-down and a bottom-up approach. Policy makers carry the burden of correcting the infrastructural issues both for practical reasons as well as for creating optimism on the part of teachers to believe that their administrators actually support their decisions. On the other hand, anyone associated with educational systems for any length of time will agree that a top-down only approach is destined for failure. The successful adoption of any new classroom intervention is based, in larger part, on teachers’ investing in the belief that the experience is worth the effort. If a teacher sees little or no value in an intervention, or is unfamiliar with its use, then the chances that it will be properly implemented are minimised. In other words, a teacher’s adoption of any instructional strategy is directly correlated with his or her views, ideas, and expectations about what is possible, feasible, and useful. In their studies into the game playing habits of various college students, Shaffer, Squire and Gee (2005) alluded to the fact that of those that they interviewed, future teachers indicated that they did not play video games as often as those enrolled in other majors. Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. Expectancy-value 3 © 2009 The Authors. Journal compilation © 2009 Becta. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies. 
Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move", "title": "" }, { "docid": "dedc509f31c9b7e6c4409d655a158721", "text": "Envelope tracking (ET) is by now a well-established technique that improves the efficiency of microwave power amplifiers (PAs) compared to what can be obtained with conventional class-AB or class-B operation for amplifying signals with a time-varying envelope, such as most of those used in present wireless communication systems. ET is poised to be deployed extensively in coming generations of amplifiers for cellular handsets because it can reduce power dissipation for signals using the long-term evolution (LTE) standard required for fourthgeneration (4G) wireless systems, which feature high peak-to-average power ratios (PAPRs). The ET technique continues to be actively developed for higher carrier frequencies and broader bandwidths. This article reviews the concepts and history of ET, discusses several applications currently on the drawing board, presents challenges for future development, and highlights some directions for improving the technique.", "title": "" }, { "docid": "c5bc0cd14aa51c24a00107422fc8ca10", "text": "This paper proposes a new high-voltage Pulse Generator (PG), fed from low voltage dc supply Vs. This input supply voltage is utilized to charge two arms of N series-connected modular multilevel converter sub-module capacitors sequentially through a resistive-inductive branch, such that each arm is charged to NVS. With a step-up nano-crystalline transformer of n turns ratio, the proposed PG is able to generate bipolar rectangular pulses of peak ±nNVs, at high repetition rates. However, equal voltage-second area of consecutive pulse pair polarities should be assured to avoid transformer saturation. Not only symmetrical pulses can be generated, but also asymmetrical pulses with equal voltage-second areas are possible. The proposed topology is tested via simulations and a scaled-down experimentation, which establish the viability of the topology for water treatment applications.", "title": "" }, { "docid": "c5ae1d66d31128691e7e7d8e2ccd2ba8", "text": "The scope of this paper is two-fold: firstly it proposes the application of a 1-2-3 Zones approach to Internet of Things (IoT)-related Digital Forensics (DF) investigations. Secondly, it introduces a Next-Best-Thing Triage (NBT) Model for use in conjunction with the 1-2-3 Zones approach where necessary and vice versa. These two `approaches' are essential for the DF process from an IoT perspective: the atypical nature of IoT sources of evidence (i.e. 
Objects of Forensic Interest - OOFI), the pervasiveness of the IoT environment and its other unique attributes - and the combination of these attributes - dictate the necessity for a systematic DF approach to incidents. The two approaches proposed are designed to serve as a beacon to incident responders, increasing the efficiency and effectiveness of their IoT-related investigations by maximizing the use of the available time and ensuring relevant evidence identification and acquisition. The approaches can also be applied in conjunction with existing, recognised DF models, methodologies and frameworks.", "title": "" }, { "docid": "4f3936b753abd2265d867c0937aec24c", "text": "A weighted constraint satisfaction problem (WCSP) is a constraint satisfaction problem in which preferences among solutions can be expressed. Bucket elimination is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply bucket elimination is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical on large scale problems. In response to this situation, we present a memetic algorithm for WCSPs in which bucket elimination is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. As a case study, we have applied these algorithms to the resolution of the maximum density still life problem, a hard constraint optimization problem based on Conway’s game of life. The resulting algorithm consistently finds optimal patterns for up to date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances.", "title": "" }, { "docid": "7159d958139d684e4a74abe252788a40", "text": "Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.", "title": "" }, { "docid": "7618fa5b704c892b6b122f3602893d75", "text": "At the dawn of the second automotive century it is apparent that the competitive realm of the automotive industry is shifting away from traditional classifications based on firms’ production systems or geographical homes. 
Companies across the regional and volume spectrum have adopted a portfolio of manufacturing concepts derived from both mass and lean production paradigms, and the recent wave of consolidation means that regional comparisons can no longer be made without considering the complexities induced by the diverse ownership structure and plethora of international collaborations. In this chapter we review these dynamics and propose a double helix model illustrating how the basis of competition has shifted from cost-leadership during the heyday of Ford’s original mass production, to variety and choice following Sloan’s portfolio strategy, to diversification through leadership in design, technology or manufacturing excellence, as in the case of Toyota, and to mass customisation, which marks the current competitive frontier. We will explore how the production paradigms that have determined much of the competition in the first automotive century have evolved, what trends shape the industry today, and what it will take to succeed in the automotive industry of the future. 1 This chapter provides a summary of research conducted as part of the ILIPT Integrated Project and the MIT International Motor Vehicle Program (IMVP), and expands on earlier works, including the book The second century: reconnecting customer and value chain through build-toorder (Holweg and Pil 2004) and the paper Beyond mass and lean production: on the dynamics of competition in the automotive industry (Économies et Sociétés: Série K: Économie de l’Enterprise, 2005, 15:245–270).", "title": "" }, { "docid": "411f47c2edaaf3696d44521d4a97eb28", "text": "An energy-efficient 3 Gb/s current-mode interface scheme is proposed for on-chip global interconnects and silicon interposer channels. The transceiver core consists of an open-drain transmitter with one-tap pre-emphasis and a current sense amplifier load as the receiver. The current sense amplifier load is formed by stacking a PMOS diode stage and a cross-coupled NMOS stage, providing an optimum current-mode receiver without any bias current. The proposed scheme is verified with two cases of transceivers implemented in 65 nm CMOS. A 10 mm point-to-point data-only channel shows an energy efficiency of 9.5 fJ/b/mm, and a 20 mm four-drop source-synchronous link achieves 29.4 fJ/b/mm including clock and data channels.", "title": "" }, { "docid": "227d8ad4000e6e1d9fd1aa6bff8ed64c", "text": "Recently, speed sensorless control of Induction Motor (IM) drives received great attention to avoid the different problems associated with direct speed sensors. Among different rotor speed estimation techniques, Model Reference Adaptive System (MRAS) schemes are the most common strategies employed due to their relative simplicity and low computational effort. In this paper a novel adaptation mechanism is proposed which replaces normally used conventional Proportional-Integral (PI) controller in MRAS adaptation mechanism by a Fractional Order PI (FOPI) controller. The performance of two adaptation mechanism controllers has been verified through simulation results using MATLAB/SIMULINK software. It is seen that the performance of the induction motor has improved when FOPI controller is used in place of classical PI controller.", "title": "" }, { "docid": "8f65f1971405e0c225e3625bb682a2d4", "text": "We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. 
Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet (Chang et al. Shapenet: an information-rich 3d model repository, 2015. arXiv:1512.03012) and ModelNet (Wu et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2015) as well as on real robotics data from KITTI (Geiger et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2012) and Kinect (Yang et al., 3d object dense reconstruction from a single depth view, 2018. arXiv:1802.00411), we demonstrate that the proposed amortized maximum likelihood approach is able to compete with the fully supervised baseline of Dai et al. (in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2017) and outperforms the data-driven approach of Engelmann et al. (in: Proceedings of the German conference on pattern recognition (GCPR), 2016), while requiring less supervision and being significantly faster.", "title": "" }, { "docid": "4bce887df71f59085938c8030e7b0c1c", "text": "Context plays an important role in human language understanding, thus it may also be useful for machines learning vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required. After that, we designed a model which still keeps an RNN as the encoder, while using a non-autoregressive convolutional decoder. We further combine a suite of effective designs to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabelled corpora, and in both cases the transferability is evaluated on a set of downstream NLP tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks.", "title": "" }, { "docid": "370d8cccec6964954154e796f0e558c8", "text": "We present optical imaging-based methods to measure vital physiological signals, including breathing frequency (BF), exhalation flow rate, heart rate (HR), and pulse transit time (PTT). The breathing pattern tracking was based on the detection of body movement associated with breathing using a differential signal processing approach. A motion-tracking algorithm was implemented to correct random body movements that were unrelated to breathing. The heartbeat pattern was obtained from the color change in selected region of interest (ROI) near the subject's mouth, and the PTT was determined by analyzing pulse patterns at different body parts of the subject. 
The measured BF, exhaled volume flow rate and HR are consistent with those measured simultaneously with reference technologies (r = 0.98, p < 0.001 for HR; r = 0.93, p < 0.001 for breathing rate), and the measured PTT difference (30-40 ms between mouth and palm) is comparable to the results obtained with other techniques in the literature. The imaging-based methods are suitable for tracking vital physiological parameters under free-living condition and this is the first demonstration of using noncontact method to obtain PTT difference and exhalation flow rate.", "title": "" }, { "docid": "7bd0d55e08ff4d94c021dd53142ef5aa", "text": "From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible.", "title": "" }, { "docid": "57889499aaa45b38754d9d6cebff96b8", "text": "Abstraction speeds up the DTW algorithm by operating on a reduced representation of the data. These algorithms include IDDTW [3], PDTW [13], and COW [2]. The left side of Figure 5 shows a full-resolution cost matrix for which a minimum-distance warp path must be found. Rather than running DTW on the full resolution (1/1) cost matrix, the time series are reduced in size to make the number of cells in the cost matrix more manageable. A warp path is found for the lower-resolution time series and is mapped back to full resolution. The resulting speedup depends on how much abstraction is used. Obviously, the calculated warp path becomes increasingly inaccurate as the level of abstraction increases. Projecting the low resolution warp path to the full resolution usually creates a warp path that is far from optimal. This is because even IF an optimal warp path passes through the low-resolution cell, projecting the warp path to the higher resolution ignores local variations in the warp path that can be very significant. Indexing [9][14] uses lower-bounding functions to prune the number of times DTW is run for similarity search [17]. Indexing speeds up applications in which DTW is used, but it does not make the actual DTW calculation any more efficient. Our FastDTW algorithm uses ideas from both the constraints and abstraction approaches. 
Using a combination of both overcomes many limitations of using either method individually, and yields an accurate algorithm that is O(N) in both time and space complexity. Our multi-level approach is superficially similar to IDDTW [3] because they both evaluate several different resolutions. However, IDDTW simply executes PDTW [13] at increasingly higher resolutions until a desired “accuracy” is achieved. IDDTW does not project low resolution solutions to higher resolutions. In Section 4, we will demonstrate that these methods are more inaccurate than our method given the same amount of execution time. Projecting warp paths to higher resolutions is also done in the construction of “Match Webs” [15]. However, their approach is still O(N²) due to the simultaneous search for many warp paths (they call them “chains”). A multi-resolution approach in their application also could not continue down to the low resolutions without severely reducing the number of “chains” that could be found. Some recent research [18] asserts that there is no need to speed up the original DTW algorithm. However, this is only true under the following (common) conditions: 1) Tight Constraints: A relatively strict near-linear warp path is allowable. 2) Short Time Series: All time series are short enough for the DTW algorithm to execute quickly (~3,000 points if a warp path is needed, or ~100,000 if no warp path is needed and the user has a lot of patience).", "title": "" }, { "docid": "97ac64bb4d06216253eacb17abfcb103", "text": "UIMA Ruta is a rule-based system designed for information extraction tasks, but it is also applicable for many natural language processing use cases. This demonstration gives an overview of the UIMA Ruta Workbench, which provides a development environment and tooling for the rule language. It was developed to ease every step in engineering rule-based applications. In addition to the full-featured rule editor, the user is supported by explanation of the rule execution, introspection in results, automatic validation and rule induction. Furthermore, the demonstration covers the usage and combination of arbitrary components for natural language processing.", "title": "" } ]
scidocsrr
76eb60f0799df01a8a24943c98214c6b
A Deep Learning Approach to Link Prediction in Dynamic Networks
[ { "docid": "40f21a8702b9a0319410b716bda0a11e", "text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.", "title": "" }, { "docid": "a864a63a12fc30e05f6a7a64ff931d98", "text": "Prediction of links - both new as well as recurring - in a social network representing interactions between individuals is an important problem. In the recent years, there is significant interest in methods that use only the graph structure to make predictions. However, most of them consider a single snapshot of the network as the input, neglecting an important aspect of these social networks viz., their evolution over time.\n In this work, we investigate the value of incorporating the history information available on the interactions (or links) of the current social network state. Our results unequivocally show that time-stamps of past interactions significantly improve the prediction accuracy of new and recurrent links over rather sophisticated methods proposed recently. Furthermore, we introduce a novel testing method which reflects the application of link prediction better than previous approaches.", "title": "" } ]
[ { "docid": "09e8e0d6a0df9228ece3ea9416a7d5e4", "text": "This paper reviews and summarizes Maglev train technologies from an electrical engineering point of view and assimilates the results of works over the past three decades carried out all over the world. Many researches and developments concerning the Maglev train have been accomplished; however, they are not always easy to understand. The purpose of this paper is to make the Maglev train technologies clear at a glance. Included are general understandings, technologies, and worldwide practical projects. Further research needs are also addressed.", "title": "" }, { "docid": "994bebd20ef2594f5337387d97c6bd12", "text": "In complex, open, and heterogeneous environments, agents must be able to reorganize towards the most appropriate organizations to adapt unpredictable environment changes within Multi-Agent Systems (MAS). Types of reorganization can be seen from two different levels. The individual agents level (micro-level) in which an agent changes its behaviors and interactions with other agents to adapt its local environment. And the organizational level (macro-level) in which the whole system changes it structure by adding or removing agents. This chapter is dedicated to overview different aspects of what is called MAS Organization including its motivations, paradigms, models, and techniques adopted for statically or dynamically organizing agents in MAS.", "title": "" }, { "docid": "ee20233660c2caa4a24dbfb512172277", "text": "Any projection of a 3D scene into a wide-angle image unavoidably results in distortion. Current projection methods either bend straight lines in the scene, or locally distort the shapes of scene objects. We present a method that minimizes this distortion by adapting the projection to content in the scene, such as salient scene regions and lines, in order to preserve their shape. Our optimization technique computes a spatially-varying projection that respects user-specified constraints while minimizing a set of energy terms that measure wide-angle image distortion. We demonstrate the effectiveness of our approach by showing results on a variety of wide-angle photographs, as well as comparisons to standard projections.", "title": "" }, { "docid": "d72e4df2e396a11ae7130ca7e0b2fb56", "text": "Advances in location-acquisition and wireless communication technologies have led to wider availability of spatio-temporal (ST) data, which has unique spatial properties (i.e. geographical hierarchy and distance) and temporal properties (i.e. closeness, period and trend). In this paper, we propose a <u>Deep</u>-learning-based prediction model for <u>S</u>patio-<u>T</u>emporal data (DeepST). We leverage ST domain knowledge to design the architecture of DeepST, which is comprised of two components: spatio-temporal and global. The spatio-temporal component employs the framework of convolutional neural networks to simultaneously model spatial near and distant dependencies, and temporal closeness, period and trend. The global component is used to capture global factors, such as day of the week, weekday or weekend. Using DeepST, we build a real-time crowd flow forecasting system called UrbanFlow1. Experiment results on diverse ST datasets verify DeepST's ability to capture ST data's spatio-temporal properties, showing the advantages of DeepST beyond four baseline methods.", "title": "" }, { "docid": "6ad201e411520ff64881b49915415788", "text": "What is the right supervisory signal to train visual representations? 
Current approaches in computer vision use category labels from datasets such as ImageNet to train ConvNets. However, in case of biological agents, visual representation learning does not require millions of semantic labels. We argue that biological agents use physical interactions with the world to learn visual representations unlike current vision systems which just use passive observations (images and videos downloaded from web). For example, babies push objects, poke them, put them in their mouth and throw them to learn representations. Towards this goal, we build one of the first systems on a Baxter platform that pushes, pokes, grasps and observes objects in a tabletop environment. It uses four different types of physical interactions to collect more than 130K datapoints, with each datapoint providing supervision to a shared ConvNet architecture allowing us to learn visual representations. We show the quality of learned representations by observing neuron activations and performing nearest neighbor retrieval on this learned representation. Quantitatively, we evaluate our learned ConvNet on image classification tasks and show improvements compared to learning without external data. Finally, on the task of instance retrieval, our network outperforms the ImageNet network on recall@1 by 3 %.", "title": "" }, { "docid": "c98ac230a74b68a24554ce3021466b49", "text": "Enterprise social media can provide visibility of users' actions and thus has the potential to reveal insights about users in the organization. We mined large-scale social media use in an enterprise to examine: a) user roles with such broad platforms and b) whether people with large social networks are highly regarded. First, a factor analysis revealed that most variance of social media usage is explained by commenting and 'liking' behaviors while other usage can be characterized as patterns of distinct tool usage. These results informed the development of a model showing that online network size interacts with other media usage to predict who is highly assessed in the organization. We discovered that the smaller one's online social network size in the organization, the more highly assessed they were by colleagues. We explain this inverse relationship as due to friending behavior being highly visible but not yet valued in the organization.", "title": "" }, { "docid": "e49e74c4104116b54d49147028c3392d", "text": "Defining hope as a cognitive set comprising agency (belief in one's capacity to initiate and sustain actions) and pathways (belief in one's capacity to generate routes) to reach goals, the Hope Scale was developed and validated previously as a dispositional self-report measure of hope (Snyder et al., 1991). The present 4 studies were designed to develop and validate a measure of state hope. The 6-item State Hope Scale is internally consistent and reflects the theorized agency and pathways components. The relationships of the State Hope Scale to other measures demonstrate concurrent and discriminant validity; moreover, the scale is responsive to events in the lives of people as evidenced by data gathered through both correlational and causal designs. 
The State Hope Scale offers a brief, internally consistent, and valid self-report measure of ongoing goal-directed thinking that may be useful to researchers and applied professionals.", "title": "" }, { "docid": "9b94a383b2a6e778513a925cc88802ad", "text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was largely ignored in literature. In this paper, a novel model is proposed for pedestrian behavior modeling by including stationary crowd groups as a key component. Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality classification, and abnormal event detection. To evaluate our model, a large pedestrian walking route dataset is built. The walking routes of 12,684 pedestrians from a one-hour crowd surveillance video are manually annotated. It will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.", "title": "" }, { "docid": "bbedbe2d901f63e3f163ea0f24a2e2d7", "text": "The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \"bright side\" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \"dark side\" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. 
Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …", "title": "" }, { "docid": "756d6388fc5aa289e7f95cd14d66819e", "text": "Cluster analysis of biological networks is one of the most important approaches for identifying functional modules and predicting protein functions. Furthermore, visualization of clustering results is crucial to uncover the structure of biological networks. In this paper, ClusterViz, an APP of Cytoscape 3 for cluster analysis and visualization, has been developed. In order to reduce complexity and enable extendibility for ClusterViz, we designed the architecture of ClusterViz based on the framework of Open Services Gateway Initiative. According to the architecture, the implementation of ClusterViz is partitioned into three modules including interface of ClusterViz, clustering algorithms and visualization and export. ClusterViz fascinates the comparison of the results of different algorithms to do further related analysis. Three commonly used clustering algorithms, FAG-EC, EAGLE and MCODE, are included in the current version. Due to adopting the abstract interface of algorithms in module of the clustering algorithms, more clustering algorithms can be included for the future use. To illustrate usability of ClusterViz, we provided three examples with detailed steps from the important scientific articles, which show that our tool has helped several research teams do their research work on the mechanism of the biological networks.", "title": "" }, { "docid": "f153ee3853f40018ed0ae8b289b1efcf", "text": "In this paper, the common mode (CM) EMI noise characteristic of three popular topologies of resonant converter (LLC, CLL and LCL) is analyzed. The comparison of their EMI performance is provided. A state-of-art LLC resonant converter with matrix transformer is used as an example to further illustrate the CM noise problem of resonant converters. The CM noise model of LLC resonant converter is provided. A novel method of shielding is provided for matrix transformer to reduce common mode noise. The CM noise of LLC converter has a significantly reduction with shielding. The loss of shielding is analyzed by finite element analysis (FEA) tool. Then the method to reduce the loss of shielding is discussed. There is very little efficiency sacrifice for LLC converter with shielding according to the experiment result.", "title": "" }, { "docid": "ba65c99adc34e05cf0cd1b5618a21826", "text": "We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typicall 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. 
The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.", "title": "" }, { "docid": "86aaee95a4d878b53fd9ee8b0735e208", "text": "The tensegrity concept has long been considered as a basis for lightweight and compact packaging deployable structures, but very few studies are available. This paper presents a complete design study of a deployable tensegrity mast with all the steps involved: initial formfinding, structural analysis, manufacturing and deployment. Closed-form solutions are used for the formfinding. A manufacturing procedure in which the cables forming the outer envelope of the mast are constructed by two-dimensional weaving is used. The deployment of the mast is achieved through the use of self-locking hinges. A stiffness comparison between the tensegrity mast and an articulated truss mast shows that the tensegrity mast is weak in bending.", "title": "" }, { "docid": "082f19bb94536f61a7c9e4edd9a9c829", "text": "Phytoplankton abundance and composition and the cyanotoxin, microcystin, were examined relative to environmental parameters in western Lake Erie during late-summer (2003–2005). Spatially explicit distributions of phytoplankton occurred on an annual basis, with the greatest chlorophyll (Chl) a concentrations occurring in waters impacted by Maumee River inflows and in Sandusky Bay. Chlorophytes, bacillariophytes, and cyanobacteria contributed the majority of phylogenetic-group Chl a basin-wide in 2003, 2004, and 2005, respectively. Water clarity, pH, and specific conductance delineated patterns of group Chl a, signifying that water mass movements and mixing were primary determinants of phytoplankton accumulations and distributions. Water temperature, irradiance, and phosphorus availability delineated patterns of cyanobacterial biovolumes, suggesting that biotic processes (most likely, resource-based competition) controlled cyanobacterial abundance and composition. Intracellular microcystin concentrations corresponded to Microcystis abundance and environmental parameters indicative of conditions coincident with biomass accumulations. It appears that environmental parameters regulate microcystin indirectly, via control of cyanobacterial abundance and distribution.", "title": "" }, { "docid": "f06de491c9de78e7bedd39d83a4bc3d5", "text": "Recent advances in visual tracking showed that deep Convolutional Neural Networks (CNN) trained for image classification can be strong feature extractors for discriminative trackers. 
However, due to the drastic difference between image classification and tracking, extra treatments such as model ensemble and feature engineering must be carried out to bridge the two domains. Such procedures are either time consuming or hard to generalize well across datasets. In this paper we discovered that the internal structure of Region Proposal Network (RPN)’s top layer feature can be utilized for robust visual tracking. We showed that such property has to be unleashed by a novel loss function which simultaneously considers classification accuracy and bounding box quality. Without ensemble and any extra treatment on feature maps, our proposed method achieved state-of-the-art results on several large scale benchmarks including OTB50, OTB100 and VOT2016. We will make our code publicly available.", "title": "" }, { "docid": "2b8c0923372e97ca5781378b7e220021", "text": "Motivated by requirements of Web 2.0 applications, a plethora of non-relational databases raised in recent years. Since it is very difficult to choose a suitable database for a specific use case, this paper evaluates the underlying techniques of NoSQL databases considering their applicability for certain requirements. These systems are compared by their data models, query possibilities, concurrency controls, partitioning and replication opportunities.", "title": "" }, { "docid": "5d8cce4bc41812ab45eb83c7a2a35c30", "text": "Following B.B. Mandelbrot's fractal theory (1982), it was found that the fractal dimension could be obtained in medical images by the concept of fractional Brownian motion. An estimation concept for determination of the fractal dimension based upon the concept of fractional Brownian motion is discussed. Two applications are found: (1) classification; (2) edge enhancement and detection. For the purpose of classification, a normalized fractional Brownian motion feature vector is defined from this estimation concept. It represented the normalized average absolute intensity difference of pixel pairs on a surface of different scales. The feature vector uses relatively few data items to represent the statistical characteristics of the medial image surface and is invariant to linear intensity transformation. For edge enhancement and detection application, a transformed image is obtained by calculating the fractal dimension of each pixel over the whole medical image. The fractal dimension value of each pixel is obtained by calculating the fractal dimension of 7x7 pixel block centered on this pixel.", "title": "" }, { "docid": "b16f7a4242a9ff353d7726e66669ba97", "text": "The ARPA MT Evaluation methodology effort is intended to provide a basis for measuring and thereby facilitating the progress of MT systems of the ARPAsponsored research program. The evaluation methodologies have the further goal of being useful for identifying the context of that progress among developed, production MT systems in use today. Since 1991, the evaluations have evolved as we have discovered more about what properties are valuable to measure, what properties are not, and what elements of the tests/evaluations can be adjusted to enhance significance of the results while still remaining relatively portable. 
This paper describes this evolutionary process, along with measurements of the most recent MT evaluation (January 1994) and the current evaluation process now underway.", "title": "" }, { "docid": "33aca3fca17c8a9c786b35b4da4de47c", "text": "This paper is concerned with the design of the power control system for a single-phase voltage source inverter feeding a parallel resonant induction heating load. The control of the inverter output current, meaning the active component of the current through the induction coil when the control frequency is equal or slightly exceeds the resonant frequency, is achieved by a Proportional-IntegralDerivative controller tuned in accordance with the Modulus Optimum criterion in Kessler variant. The response of the current loop for different work pipes and set currents has been tested by simulation under the Matlab-Simulink environment and illustrates a very good behavior of the control system.", "title": "" }, { "docid": "d0c940a651b1231c6ef4f620e7acfdcc", "text": "Harvard Business School Working Paper Number 05-016. Working papers are distributed in draft form for purposes of comment and discussion only. They may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author(s). Abstract Much recent research has pointed to the critical role of architecture in the development of a firm's products, services and technical capabilities. A common theme in these studies is the notion that specific characteristics of a product's design – for example, the degree of modularity it exhibits – can have a profound effect on among other things, its performance, the flexibility of the process used to produce it, the value captured by its producer, and the potential for value creation at the industry level. Unfortunately, this stream of work has been limited by the lack of appropriate tools, metrics and terminology for characterizing key attributes of a product's architecture in a robust fashion. As a result, there is little empirical evidence that the constructs emerging in the literature have power in predicting the phenomena with which they are associated. This paper reports data from a research project which seeks to characterize the differences in design structure between complex software products. In particular, we adopt a technique based upon Design Structure Matrices (DSMs) to map the dependencies between different elements of a design then develop metrics that allow us to compare the structures of these different DSMs. We demonstrate the power of this approach in two ways: First, we compare the design structures of two complex software products – the Linux operating system and the Mozilla web browser – that were developed via contrasting modes of organization: specifically, open source versus proprietary development. We find significant differences in their designs, consistent with an interpretation that Linux possesses a more \" modular \" architecture. We then track the evolution of Mozilla, paying particular attention to a major \" redesign \" effort that took place several months after its release as an open source product. We show that this effort resulted in a design structure that was significantly more modular than its predecessor, and indeed, more modular than that of a comparable version of Linux. Our findings demonstrate that it is possible to characterize the structure of complex product designs and draw meaningful conclusions about the precise ways in which they differ. 
We provide a description of a set of tools …", "title": "" } ]
scidocsrr
0f1525313cf095d9a5cd350e1f6197c7
Semantic Web in data mining and knowledge discovery: A comprehensive survey
[ { "docid": "cb08df0c8ff08eecba5d7fed70c14f1e", "text": "In this article, we propose a family of efficient kernels for l a ge graphs with discrete node labels. Key to our method is a rapid feature extraction scheme b as d on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequ ence of graphs, whose node attributes capture topological and label information. A fami ly of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly e ffici nt kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of e dges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classifica tion benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale ap plic tions of graph kernels in various disciplines such as computational biology and social netwo rk analysis.", "title": "" }, { "docid": "ec58ee349217d316f87ff684dba5ac2b", "text": "This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases.", "title": "" } ]
[ { "docid": "c5759678a84864a843c20c5f4a23f29f", "text": "We propose a novel framework called transient imaging for image formation and scene understanding through impulse illumination and time images. Using time-of-flight cameras and multi-path analysis of global light transport, we pioneer new algorithms and systems for scene understanding through time images. We demonstrate that our proposed transient imaging framework allows us to accomplish tasks that are well beyond the reach of existing imaging technology. For example, one can infer the geometry of not only the visible but also the hidden parts of a scene, enabling us to look around corners. Traditional cameras estimate intensity per pixel I(x,y). Our transient imaging camera captures a 3D time-image I(x,y,t) for each pixel and uses an ultra-short pulse laser for illumination. Emerging technologies are supporting cameras with a temporal-profile per pixel at picosecond resolution, allowing us to capture an ultra-high speed time-image. This time-image contains the time profile of irradiance incident at a sensor pixel. We experimentally corroborated our theory with free space hardware experiments using a femtosecond laser and a picosecond accurate sensing device. The ability to infer the structure of hidden scene elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.", "title": "" }, { "docid": "4d2b0b01fae0ff2402fc2feaa5657574", "text": "In this paper, we give an algorithm for the analysis and correction of the distorted QR barcode (QR-code) image. The introduced algorithm is based on the code area finding by four corners detection for 2D barcode. We combine Canny edge detection with contours finding algorithms to erase noises and reduce computation and utilize two tangents to approximate the right-bottom point. Then, we give a detail description on how to use inverse perspective transformation in rebuilding a QR-code image from a distorted one. We test our algorithm on images taken by mobile phones. The experiment shows that our algorithm is effective.", "title": "" }, { "docid": "66a8e7c076ad2cfb7bbe42836607a039", "text": "The Spider system at the Oak Ridge National Laboratory’s Leadership Computing Facility (OLCF) is the world’s largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF’s diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF’s diverse computational platforms, the aggregate performance and storage capacity of Spider exceed that of our previously deployed systems by a factor of 6x 240 GB/sec, and 17x 10 Petabytes, respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. 
We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service alongside with our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.", "title": "" }, { "docid": "70ec2398526863c05b41866593214d0a", "text": "Matrix factorization (MF) is one of the most popular techniques for product recommendation, but is known to suffer from serious cold-start problems. Item cold-start problems are particularly acute in settings such as Tweet recommendation where new items arrive continuously. In this paper, we present a meta-learning strategy to address item cold-start when new items arrive continuously. We propose two deep neural network architectures that implement our meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history while the second architecture learns a neural network whose biases are instead adjusted. We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation.", "title": "" }, { "docid": "933f8ba333e8cbef574b56348872b313", "text": "Automatic image annotation has been an important research topic in facilitating large scale image management and retrieval. Existing methods focus on learning image-tag correlation or correlation between tags to improve annotation accuracy. However, most of these methods evaluate their performance using top-k retrieval performance, where k is fixed. Although such setting gives convenience for comparing different methods, it is not the natural way that humans annotate images. The number of annotated tags should depend on image contents. Inspired by the recent progress in machine translation and image captioning, we propose a novel Recurrent Image Annotator (RIA) model that forms image annotation task as a sequence generation problem so that RIA can natively predict the proper length of tags according to image contents. We evaluate the proposed model on various image annotation datasets. In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high quality baseline for the arbitrary length image tagging task. Moreover, the results of our experiments show that the order of tags in training phase has a great impact on the final annotation performance.", "title": "" }, { "docid": "b0575058a6950bc17a976504145dca0e", "text": "BACKGROUND\nCitation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.\n\n\nMETHODS\nFour systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. 
These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection.\n\n\nRESULTS\nOf the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9 % rituximab, 40 % dietary fibre, 67 % aHUS, and 57 % ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16 % (aHUS) to 45 % (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7 %. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25 % and increased the workload saving by 10 % but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80 %) but reduced the precision (6.8 %) and increased the number of missed citations.\n\n\nCONCLUSIONS\nSemi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.", "title": "" }, { "docid": "9feeeabb8491a06ae130c99086a9d069", "text": "Dopamine (DA) is a key transmitter in the basal ganglia, yet DA transmission does not conform to several aspects of the classic synaptic doctrine. Axonal DA release occurs through vesicular exocytosis and is action potential- and Ca²⁺-dependent. However, in addition to axonal release, DA neurons in midbrain exhibit somatodendritic release by an incompletely understood, but apparently exocytotic, mechanism. Even in striatum, axonal release sites are controversial, with evidence for DA varicosities that lack postsynaptic specialization, and largely extrasynaptic DA receptors and transporters. Moreover, DA release is often assumed to reflect a global response to a population of activities in midbrain DA neurons, whether tonic or phasic, with precise timing and specificity of action governed by other basal ganglia circuits. This view has been reinforced by anatomical evidence showing dense axonal DA arbors throughout striatum, and a lattice network formed by DA axons and glutamatergic input from cortex and thalamus. Nonetheless, localized DA transients are seen in vivo using voltammetric methods with high spatial and temporal resolution. Mechanistic studies using similar methods in vitro have revealed local regulation of DA release by other transmitters and modulators, as well as by proteins known to be disrupted in Parkinson's disease and other movement disorders. Notably, the actions of most other striatal transmitters on DA release also do not conform to the synaptic doctrine, with the absence of direct synaptic contacts for glutamate, GABA, and acetylcholine (ACh) on striatal DA axons. 
Overall, the findings reviewed here indicate that DA signaling in the basal ganglia is sculpted by cooperation between the timing and pattern of DA input and those of local regulatory factors.", "title": "" }, { "docid": "b2c299e13eff8776375c14357019d82e", "text": "This paper is focused on the application of complementary split-ring resonators (CSRRs) to the suppression of the common (even) mode in microstrip differential transmission lines. By periodically and symmetrically etching CSRRs in the ground plane of microstrip differential lines, the common mode can be efficiently suppressed over a wide band whereas the differential signals are not affected. Throughout the paper, we present and discuss the principle for the selective common-mode suppression, the circuit model of the structure (including the models under even- and odd-mode excitation), the strategies for bandwidth enhancement of the rejected common mode, and a methodology for common-mode filter design. On the basis of the dispersion relation for the common mode, it is shown that the maximum achievable rejection bandwidth can be estimated. Finally, theory is validated by designing and measuring a differential line and a balanced bandpass filter with common-mode suppression, where double-slit CSRRs (DS-CSRRs) are used in order to enhance the common-mode rejection bandwidth. Due to the presence of DS-CSRRs, the balanced filter exhibits more than 40 dB of common-mode rejection within a 34% bandwidth around the filter pass band.", "title": "" }, { "docid": "353f91c6e35cd5703b5b238f929f543e", "text": "This paper provides an overview of prominent deep learning toolkits and, in particular, reports on recent publications that contributed open source software for implementing tasks that are common in intelligent user interfaces (IUI). We provide a scientific reference for researchers and software engineers who plan to utilise deep learning techniques within their IUI research and development projects. ACM Classification", "title": "" }, { "docid": "7ddfa92cee856e2ef24caf3e88d92b93", "text": "Applications are getting increasingly interconnected. Although the interconnectedness provide new ways to gather information about the user, not all user information is ready to be directly implemented in order to provide a personalized experience to the user. Therefore, a general model is needed to which users’ behavior, preferences, and needs can be connected to. In this paper we present our works on a personality-based music recommender system in which we use users’ personality traits as a general model. We identified relationships between users’ personality and their behavior, preferences, and needs, and also investigated different ways to infer users’ personality traits from user-generated data of social networking sites (i.e., Facebook, Twitter, and Instagram). Our work contributes to new ways to mine and infer personality-based user models, and show how these models can be implemented in a music recommender system to positively contribute to the user experience.", "title": "" }, { "docid": "786ef1b656c182ab71f7a63e7f263b3f", "text": "The spectrum of a first-order sentence is the set of cardinalities of its finite models. This paper is concerned with spectra of sentences over languages that contain only unary function symbols. 
In particular, it is shown that a set S of natural numbers is the spectrum of a sentence over the language of one unary function symbol precisely if S is an eventually periodic set.", "title": "" }, { "docid": "a9b96c162e9a7f39a90c294167178c05", "text": "The performance of automotive radar systems is expected to significantly increase in the near future. With enhanced resolution capabilities more accurate and denser point clouds of traffic participants and roadside infrastructure can be acquired and so the amount of gathered information is growing drastically. One main driver for this development is the global trend towards self-driving cars, which all rely on precise and fine-grained sensor information. New radar signal processing concepts have to be developed in order to provide this additional information. This paper presents a prototype high resolution radar sensor which helps to facilitate algorithm development and verification. The system is operational under real-time conditions and achieves excellent performance in terms of range, velocity and angular resolution. Complex traffic scenarios can be acquired out of a moving test vehicle, which is very close to the target application. First measurement runs on public roads are extremely promising and show an outstanding single-snapshot performance. Complex objects can be precisely located and recognized by their contour shape. In order to increase the possible recording time, the raw data rate is reduced by several orders of magnitude in real-time by means of constant false alarm rate (CFAR) processing. The number of target cells can still exceed more than 10 000 points in a single measurement cycle for typical road scenarios.", "title": "" }, { "docid": "7b45559be60b099de0bcf109c9a539b7", "text": "The split-heel technique has distinct advantages over the conventional medial or lateral approach in the operative debridement of extensive and predominantly plantar chronic calcaneal osteomyelitis in children above 5 years of age. We report three cases (age 5.5-11 years old) of chronic calcaneal osteomyelitis in children treated using the split-heel approach with 3-10 years follow-up showing excellent functional and cosmetic results.", "title": "" }, { "docid": "b27e5e9540e625912a4e395079f6ac68", "text": "We propose Cooperative Training (CoT) for training generative models that measure a tractable density for discrete data. CoT coordinately trains a generator G and an auxiliary predictive mediator M . The training target of M is to estimate a mixture density of the learned distribution G and the target distribution P , and that of G is to minimize the Jensen-Shannon divergence estimated through M . CoT achieves independent success without the necessity of pre-training via Maximum Likelihood Estimation or involving high-variance algorithms like REINFORCE. This low-variance algorithm is theoretically proved to be unbiased for both generative and predictive tasks. We also theoretically and empirically show the superiority of CoT over most previous algorithms in terms of generative quality and diversity, predictive generalization ability and computational cost.", "title": "" }, { "docid": "6ceab65cc9505cf21824e9409cf67944", "text": "Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. 
We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level", "title": "" }, { "docid": "c2d4f97913bb3acceb3703f1501547a8", "text": "Pattern recognition is the discipline studying the design and operation of systems capable to recognize patterns with specific properties in data sources. Intrusion detection, on the other hand, is in charge of identifying anomalous activities by analyzing a data source, be it the logs of an operating system or in the network traffic. It is easy to find similarities between such research fields, and it is straightforward to think of a way to combine them. As to the descriptions above, we can imagine an Intrusion Detection System (IDS) using techniques proper of the pattern recognition field in order to discover an attack pattern within the network traffic. What we propose in this work is such a system, which exploits the results of research in the field of data mining, in order to discover potential attacks. The paper also presents some experimental results dealing with performance of our system in a real-world operational scenario.", "title": "" }, { "docid": "32faa5a14922d44101281c783cf6defb", "text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.", "title": "" }, { "docid": "b3ffb805b3dcffc4e5c9cec47f90e566", "text": "Real-time ride-sharing, which enables on-the-fly matching between riders and drivers (even en-route), is an important problem due to its environmental and societal benefits. With the emergence of many ride-sharing platforms (e.g., Uber and Lyft), the design of a scalable framework to match riders and drivers based on their various constraints while maximizing the overall profit of the platform becomes a distinguishing business strategy.\n A key challenge of such framework is to satisfy both types of the users in the system, e.g., reducing both riders' and drivers' travel distances. However, the majority of the existing approaches focus only on minimizing the total travel distance of drivers which is not always equivalent to shorter trips for riders. Hence, we propose a fair pricing model that simultaneously satisfies both the riders' and drivers' constraints and desires (formulated as their profiles). In particular, we introduce a distributed auction-based framework where each driver's mobile app automatically bids on every nearby request taking into account many factors such as both the driver's and the riders' profiles, their itineraries, the pricing model, and the current number of riders in the vehicle. Subsequently, the server determines the highest bidder and assigns the rider to that driver.
We show that this framework is scalable and efficient, processing hundreds of tasks per second in the presence of thousands of drivers. We compare our framework with the state-of-the-art approaches in both industry and academia through experiments on New York City's taxi dataset. Our results show that our framework can simultaneously match more riders to drivers (i.e., higher service rate) by engaging the drivers more effectively. Moreover, our framework schedules shorter trips for riders (i.e., better service quality). Finally, as a consequence of higher service rate and shorter trips, our framework increases the overall profit of the ride-sharing platforms.", "title": "" }, { "docid": "32fe17034223a3ea9a7c52b4107da760", "text": "With the prevalence of the internet, mobile devices and commercial streaming music services, the amount of digital music available is greater than ever. Sorting through all this music is an extremely time-consuming task. Music recommendation systems search through this music automatically and suggest new songs to users. Music recommendation systems have been developed in commercial and academic settings, but more research is needed. The perfect system would handle all the user's listening needs while requiring only minimal user input. To this end, I have reviewed 20 articles within the field of music recommendation with the goal of finding how the field can be improved. I present a survey of music recommendation, including an explanation of collaborative and content-based filtering with their respective strengths and weaknesses. I propose a novel next-track recommendation system that incorporates techniques advocated by the literature. The system relies heavily on user skipping behavior to drive both a content-based and a collaborative approach. It uses active learning to balance the needs of exploration vs. exploitation in playing music for the user.", "title": "" } ]
scidocsrr
c96961e39cc0c86ce7c02b7f3da00879
The prevalence and impact of depression among medical students: a nationwide cross-sectional study in South Korea.
[ { "docid": "f84f279b6ef3b112a0411f5cba82e1b0", "text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed", "title": "" } ]
[ { "docid": "b59d72a5c2661c6646663dbbf8d5e30b", "text": "Media representation of mental illness has received growing research attention within a variety of academic disciplines. Cultural and media studies have often dominated in this research and discussion. More recently healthcare professionals have become interested in this debate, yet despite the importance of this subject only a selection of papers have been published in professional journals relating to nursing and healthcare. This paper examines the way in which mental illness in the United Kingdom is portrayed in public life. Literature from the field of media studies is explored alongside the available material from the field of mental healthcare. Three main areas are used to put forward an alternative approach: film representation and newspaper reporting of mental illness; the nature of the audience; and finally the concept of myth. The paper concludes by considering this approach in the context of current mental health policy on mental health promotion.", "title": "" }, { "docid": "ebbf02210b8e887a0564319594bc6ce5", "text": "Polycystic ovary syndrome (PCOS) is one of the most common endocrine and metabolic disorders in premenopausal women. Heterogeneous by nature, PCOS is defined by a combination of signs and symptoms of androgen excess and ovarian dysfunction in the absence of other specific diagnoses. The aetiology of this syndrome remains largely unknown, but mounting evidence suggests that PCOS might be a complex multigenic disorder with strong epigenetic and environmental influences, including diet and lifestyle factors. PCOS is frequently associated with abdominal adiposity, insulin resistance, obesity, metabolic disorders and cardiovascular risk factors. The diagnosis and treatment of PCOS are not complicated, requiring only the judicious application of a few well-standardized diagnostic methods and appropriate therapeutic approaches addressing hyperandrogenism, the consequences of ovarian dysfunction and the associated metabolic disorders. This article aims to provide a balanced review of the latest advances and current limitations in our knowledge about PCOS while also providing a few clear and simple principles, based on current evidence-based clinical guidelines, for the proper diagnosis and long-term clinical management of women with PCOS.", "title": "" }, { "docid": "9e3562c5d4baf6be3293486383e62b3e", "text": "Many philosophical and contemplative traditions teach that \"living in the moment\" increases happiness. However, the default mode of humans appears to be that of mind-wandering, which correlates with unhappiness, and with activation in a network of brain areas associated with self-referential processing. We investigated brain activity in experienced meditators and matched meditation-naive controls as they performed several different meditations (Concentration, Loving-Kindness, Choiceless Awareness). We found that the main nodes of the default-mode network (medial prefrontal and posterior cingulate cortices) were relatively deactivated in experienced meditators across all meditation types. Furthermore, functional connectivity analysis revealed stronger coupling in experienced meditators between the posterior cingulate, dorsal anterior cingulate, and dorsolateral prefrontal cortices (regions previously implicated in self-monitoring and cognitive control), both at baseline and during meditation. Our findings demonstrate differences in the default-mode network that are consistent with decreased mind-wandering. 
As such, these provide a unique understanding of possible neural mechanisms of meditation.", "title": "" }, { "docid": "5eab71f546a7dc8bae157a0ca4dd7444", "text": "We introduce a new usability inspection method called HED (heuristic evaluation during demonstrations) for measuring and comparing usability of competing complex IT systems in public procurement. The method presented enhances traditional heuristic evaluation to include the use context, comprehensive view of the system, and reveals missing functionality by using user scenarios and demonstrations. HED also quantifies the results in a comparable way. We present findings from a real-life validation of the method in a large-scale procurement project of a healthcare and social welfare information system. We analyze and compare the performance of HED to other usability evaluation methods used in procurement. Based on the analysis HED can be used to evaluate the level of usability of an IT system during procurement correctly, comprehensively and efficiently.", "title": "" }, { "docid": "cd71e990546785bd9ba0c89620beb8d2", "text": "Crime is one of the most predominant and alarming aspects in our society and its prevention is a vital task. Crime analysis is a systematic way of detecting and investigating patterns and trends in crime. In this work, we use various clustering approaches of data mining to analyse the crime data of Tamilnadu. The crime data is extracted from National Crime Records Bureau (NCRB) of India. It consists of crime information about six cities namely Chennai, Coimbatore, Salem, Madurai, Thirunelvelli and Thiruchirapalli from the year 2000–2014 with 1760 instances and 9 attributes to represent the instances. K-Means clustering, Agglomerative clustering and Density Based Spatial Clustering with Noise (DBSCAN) algorithms are used to cluster crime activities based on some predefined cases and the results of these clustering are compared to find the best suitable clustering algorithm for crime detection. The result of K-Means clustering algorithm is visualized using Google Map for interactive and easy understanding. The K-Nearest Neighbor (KNN) classification is used for crime prediction. The performance of each clustering algorithms are evaluated using the metrics such as precision, recall and F-measure, and the results are compared. This work helps the law enforcement agencies to predict and detect crimes in Tamilnadu with improved accuracy and thus reduces the crime rate.", "title": "" }, { "docid": "bc432ab6ba8075aa9e142dd186a1e2a5", "text": "The impact of missing data on quantitative research can be serious, leading to biased estimates of parameters, loss of information, decreased statistical power, increased standard errors, and weakened generalizability of findings. In this paper, we discussed and demonstrated three principled missing data methods: multiple imputation, full information maximum likelihood, and expectation-maximization algorithm, applied to a real-world data set. Results were contrasted with those obtained from the complete data set and from the listwise deletion method. The relative merits of each method are noted, along with common features they share. The paper concludes with an emphasis on the importance of statistical assumptions, and recommendations for researchers. 
Quality of research will be enhanced if (a) researchers explicitly acknowledge missing data problems and the conditions under which they occurred, (b) principled methods are employed to handle missing data, and (c) the appropriate treatment of missing data is incorporated into review standards of manuscripts submitted for publication.", "title": "" }, { "docid": "9ac16df20364b0ae28d3164bbfb08654", "text": "Complex event detection is an advanced form of data stream processing where the stream(s) are scrutinized to identify given event patterns. The challenge for many complex event processing (CEP) systems is to be able to evaluate event patterns on high-volume data streams while adhering to realtime constraints. To solve this problem, in this paper we present a hardware based complex event detection system implemented on field-programmable gate arrays (FPGAs). By inserting the FPGA directly into the data path between the network interface and the CPU, our solution can detect complex events at gigabit wire speed with constant and fully predictable latency, independently of network load, packet size or data distribution. This is a significant improvement over CPU based systems and an architectural approach that opens up interesting opportunities for hybrid stream engines that combine the flexibility of the CPU with the parallelism and processing power of FPGAs.", "title": "" }, { "docid": "de887adb8d3383ffa1ed4aa033e0bd4a", "text": "An offline recognition system for Arabic handwritten words is presented. The recognition system is based on a semi-continuous 1-dimensional HMM. From each binary word image normalization parameters were estimated. First height, length, and baseline skew are normalized, then features are collected using a sliding window approach. This paper presents these methods in more detail. Some parameters were modified and the consequent effect on the recognition results are discussed. Significant tests were performed using the new IFN/ENIT database of handwritten Arabic words. The comprehensive database consists of 26459 Arabic words (Tunisian town/village names) handwritten by 411 different writers and is free for non-commercial research. In the performed tests we achieved maximal recognition rates of about 89% on a word level.", "title": "" }, { "docid": "309a20834f17bd87e10f8f1c051bf732", "text": "Tamper-resistant cryptographic processors are becoming the standard way to enforce data-usage policies. Their origins lie with military cipher machines and PIN processing in banking payment networks, expanding in the 1990s into embedded applications: token vending machines for prepayment electricity and mobile phone credit. Major applications such as GSM mobile phone identification and pay TV set-top boxes have pushed low-cost cryptoprocessors toward ubiquity. In the last five years, dedicated crypto chips have been embedded in devices such as game console accessories and printer ink cartridges, to control product and accessory after markets. The \"Trusted Computing\" initiative will soon embed cryptoprocessors in PCs so they can identify each other remotely. This paper surveys the range of applications of tamper-resistant hardware and the array of attack and defense mechanisms which have evolved in the tamper-resistance arms race.", "title": "" }, { "docid": "b64a2e6bb533043a48b7840b72f71331", "text": "Autonomous long range navigation in partially known planetary-like terrain is an open challenge for robotics. 
Navigating several hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently running on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to integrate various instances of the perception and decision functionalities, and on the difficulties raised by this integration.", "title": "" }, { "docid": "67da66dc42bfc1f2ca103ff80c7a5eb9", "text": "Low voltage (LV) analog circuit design techniques are addressed in this tutorial. In particular, (i) technology considerations; (ii) transistor model capable to provide performance and power tradeoffs; (iii) low voltage implementation techniques capable to reduce the power supply requirements, such as bulk-driven, floating-gate, and self-cascode MOSFETs; (iv) basic LV building blocks; (v) multi-stage frequency compensation topologies; and (vi) fully-differential and fully-balanced systems. key words: analog circuits, amplifiers, transistor model, bulk-driven, floating-gate, self-cascode, NGCC frequency compensation, fully-differential and fully-balanced systems.", "title": "" }, { "docid": "29aa7084f7d6155d4626b682a5fc88ef", "text": "There is an underlying cascading behavior over road networks. Traffic cascading patterns are of great importance to easing traffic and improving urban planning. However, what we can observe is individual traffic conditions on different road segments at discrete time intervals, rather than explicit interactions or propagation (e.g., A→B) between road segments. Additionally, the traffic from multiple sources and the geospatial correlations between road segments make it more challenging to infer the patterns. In this paper, we first model the three-fold influences existing in traffic propagation and then propose a data-driven approach, which finds the cascading patterns through maximizing the likelihood of observed traffic data. As this is equivalent to a submodular function maximization problem, we solve it by using an approximate algorithm with provable near-optimal performance guarantees based on its submodularity. Extensive experiments on real-world datasets demonstrate the advantages of our approach in both effectiveness and efficiency.", "title": "" }, { "docid": "c2c81d5f7c1be2f6a877811cd61f055d", "text": "Since the cognitive revolution of the sixties, representation has served as the central concept of cognitive theory and representational theories of mind have provided the establishment view in cognitive science (Fodor, 1980; Gardner, 1985; Vera & Simon, 1993). Central to this line of thinking is the belief that knowledge exists solely in the head, and instruction involves finding the most efficient means for facilitating the “acquisition” of this knowledge (Gagne, Briggs, & Wager, 1993). Over the last two decades, however, numerous educational psychologists and instructional designers have begun abandoning cognitive theories that emphasize individual thinkers and their isolated minds. Instead, these researchers have adopted theories that emphasize the social and contextualized nature of cognition and meaning (Brown, Collins, & Duguid, 1989; Greeno, 1989, 1997; Hollan, Hutchins, & Kirsch, 2000; Lave & Wenger, 1991; Resnick, 1987; Salomon, 1993). 
Central to these reconceptualizations is an emphasis on contextualized activity and ongoing participation as the core units of analysis (Barab & Kirshner, 2001; Barab & Plucker, 2002; Brown & Duguid, 1991; Cook & Yanow, 1993;", "title": "" }, { "docid": "cc4c028027c1761428d5f80e07b1b614", "text": "When humans and other mammals run, the body's complex system of muscle, tendon and ligament springs behaves like a single linear spring ('leg spring'). A simple spring-mass model, consisting of a single linear leg spring and a mass equivalent to the animal's mass, has been shown to describe the mechanics of running remarkably well. Force platform measurements from running animals, including humans, have shown that the stiffness of the leg spring remains nearly the same at all speeds and that the spring-mass system is adjusted for higher speeds by increasing the angle swept by the leg spring. The goal of the present study is to determine the relative importance of changes to the leg spring stiffness and the angle swept by the leg spring when humans alter their stride frequency at a given running speed. Human subjects ran on treadmill-mounted force platform at 2.5ms-1 while using a range of stride frequencies from 26% below to 36% above the preferred stride frequency. Force platform measurements revealed that the stiffness of the leg spring increased by 2.3-fold from 7.0 to 16.3 kNm-1 between the lowest and highest stride frequencies. The angle swept by the leg spring decreased at higher stride frequencies, partially offsetting the effect of the increased leg spring stiffness on the mechanical behavior of the spring-mass system. We conclude that the most important adjustment to the body's spring system to accommodate higher stride frequencies is that leg spring becomes stiffer.", "title": "" }, { "docid": "f5d58660137891111a009bc841950ad2", "text": "Lateral brow ptosis is a common aging phenomenon, contributing to the lateral upper eyelid hooding, in addition to dermatochalasis. Lateral brow lift complements upper blepharoplasty in achieving a youthful periorbital appearance. In this study, the author reports his experience in utilizing a temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia. A retrospective analysis of all patients undergoing the proposed technique by one surgeon from 2009 to 2016 was conducted. Additional procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were used for analysis. Operation was performed under local anesthesia. The surgical technique included a temporal (pretrichial) incision with subcutaneous dissection toward the lateral brow, with superolateral lift and closure. Total of 45 patients (44 females, 1 male; mean age: 58 years) underwent the temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia in office setting. The procedure was unilateral in 4 cases. Additional procedures included upper blepharoplasty (38), ptosis surgery (16), and lower blepharoplasty (24). Average follow-up time was 1 year (range, 6 months to 5 years). All patients were satisfied with the eyebrow contour and scar appearance. One patient required additional brow lift on one side for asymmetry. There were no cases of frontal nerve paralysis. In conclusion, the temporal (pretrichial) subcutaneous approach is an effective, safe technique for lateral brow lift/contouring, which can be performed under local anesthesia. It is ideal for women. 
Additional advantages include ease of operation, cost, and shortening the hairline (if necessary).", "title": "" }, { "docid": "ae5fac207e5d3bf51bffbf2ec01fd976", "text": "Deep learning has revolutionized the way sensor data are analyzed and interpreted. The accuracy gains these approaches offer make them attractive for the next generation of mobile, wearable and embedded sensory applications. However, state-of-the-art deep learning algorithms typically require a significant amount of device and processor resources, even just for the inference stages that are used to discriminate high-level classes from low-level data. The limited availability of memory, computation, and energy on mobile and embedded platforms thus pose a significant challenge to the adoption of these powerful learning techniques. In this paper, we propose SparseSep, a new approach that leverages the sparsification of fully connected layers and separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms. As a result, SparseSep allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy. We experiment using SparseSep across a variety of common processors such as the Qualcomm Snapdragon 400, ARM Cortex M0 and M3, and Nvidia Tegra K1, and show that it allows inference for various deep models to execute more efficiently; for example, on average requiring 11.3 times less memory and running 13.3 times faster on these representative platforms.", "title": "" }, { "docid": "061dc618163d08972b73af42e8628159", "text": "< draft-ietf-pkix-ipki-new-rfc2527-01.txt > Status of this Memo This document is an Internet-Draft and is subject to all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of 6 months and may be updated, replaced, or may become obsolete by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as work in progress. To view the entire list of current Internet-Drafts, please check the \"1id-abstracts.txt\" listing contained in the Internet-Drafts Shadow Directories on ftp.is.co.za (Africa), ftp.nordu.net (Northern Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast). Abstract This document presents a framework to assist the writers of certificate policies or certification practice statements for participants within public key infrastructures, such as certification authorities, policy authorities, and communities of interest that wish to rely on certificates. In particular, the framework provides a comprehensive list of topics that potentially (at the writer's discretion) need to be covered in a certificate policy or a certification practice statement. This document is being submitted to the RFC Editor with a request for publication as an Informational RFC that will supercede RFC 2527 [CPF].", "title": "" }, { "docid": "f4009fde2b4ac644d3b83b664e178b5f", "text": "This chapter describes the history of metaheuristics in five distinct periods, starting long before the first use of the term and ending a long time in the future.", "title": "" } ]
scidocsrr
efb4dd43048d7298ab1eaa064d0bf263
The effect of strength training on performance in endurance athletes.
[ { "docid": "558eb032e7060abcc6c7f79be7c728aa", "text": "In the exercising human, maximal oxygen uptake (VO2max) is limited by the ability of the cardiorespiratory system to deliver oxygen to the exercising muscles. This is shown by three major lines of evidence: 1) when oxygen delivery is altered (by blood doping, hypoxia, or beta-blockade), VO2max changes accordingly; 2) the increase in VO2max with training results primarily from an increase in maximal cardiac output (not an increase in the a-v O2 difference); and 3) when a small muscle mass is overperfused during exercise, it has an extremely high capacity for consuming oxygen. Thus, O2 delivery, not skeletal muscle O2 extraction, is viewed as the primary limiting factor for VO2max in exercising humans. Metabolic adaptations in skeletal muscle are, however, critical for improving submaximal endurance performance. Endurance training causes an increase in mitochondrial enzyme activities, which improves performance by enhancing fat oxidation and decreasing lactic acid accumulation at a given VO2. VO2max is an important variable that sets the upper limit for endurance performance (an athlete cannot operate above 100% VO2max, for extended periods). Running economy and fractional utilization of VO2max also affect endurance performance. The speed at lactate threshold (LT) integrates all three of these variables and is the best physiological predictor of distance running performance.", "title": "" } ]
[ { "docid": "9f1336d17f5d8fd7e04bd151eabb6a97", "text": "Immensely popular video sharing websites such as YouTube have become the most important sources of music information for Internet users and the most prominent platform for sharing live music. The audio quality of this huge amount of live music recordings, however, varies significantly due to factors such as environmental noise, location, and recording device. However, most video search engines do not take audio quality into consideration when retrieving and ranking results. Given the fact that most users prefer live music videos with better audio quality, we propose the first automatic, non-reference audio quality assessment framework for live music video search online. We first construct two annotated datasets of live music recordings. The first dataset contains 500 human-annotated pieces, and the second contains 2,400 synthetic pieces systematically generated by adding noise effects to clean recordings. Then, we formulate the assessment task as a ranking problem and try to solve it using a learning-based scheme. To validate the effectiveness of our framework, we perform both objective and subjective evaluations. Results show that our framework significantly improves the ranking performance of live music recording retrieval and can prove useful for various real-world music applications.", "title": "" }, { "docid": "9b176a25a16b05200341ac54778a8bfc", "text": "This paper reports on a study of motivations for the use of peer-to-peer or sharing economy services. We interviewed both users and providers of these systems to obtain different perspectives and to determine if providers are matching their system designs to the most important drivers of use. We found that the motivational models implicit in providers' explanations of their systems' designs do not match well with what really seems to motivate users. Providers place great emphasis on idealistic motivations such as creating a better community and increasing sustainability. Users, on the other hand are looking for services that provide what they need whilst increasing value and convenience. We discuss the divergent models of providers and users and offer design implications for peer system providers.", "title": "" }, { "docid": "2b8b06965cca346f3714cbaa1704ab83", "text": "Visual question answering (Visual QA) has attracted a lot of attention lately, seen essentially as a form of (visual) Turing test that artificial intelligence should strive to achieve. In this paper, we study a crucial component of this task: how can we design good datasets for the task? We focus on the design of multiplechoice based datasets where the learner has to select the right answer from a set of candidate ones including the target (i.e. the correct one) and the decoys (i.e. the incorrect ones). Through careful analysis of the results attained by state-of-the-art learning models and human annotators on existing datasets, we show that the design of the decoy answers has a significant impact on how and what the learning models learn from the datasets. In particular, the resulting learner can ignore the visual information, the question, or both while still doing well on the task. Inspired by this, we propose automatic procedures to remedy such design deficiencies. We apply the procedures to re-construct decoy answers for two popular Visual QA datasets as well as to create a new Visual QA dataset from the Visual Genome project, resulting in the largest dataset for this task. 
Extensive empirical studies show that the design deficiencies have been alleviated in the remedied datasets and the performance on them is likely a more faithful indicator of the difference among learning models. The datasets are released and publicly available via http://www.teds. usc.edu/website_vqa/.", "title": "" }, { "docid": "a5911891697a1b2a407f231cf0ad6c28", "text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.", "title": "" }, { "docid": "2b0fa1c4dceb94a2d8c1395dae9fad99", "text": "Among the major problems facing technical management today are those involving the coordination of many diverse activities toward a common goal. In a large engineering project, for example, almost all the engineering and craft skills are involved as well as the functions represented by research, development, design, procurement, construction, vendors, fabricators and the customer. Management must devise plans which will tell with as much accuracy as possible how the efforts of the people representing these functions should be directed toward the project's completion. In order to devise such plans and implement them, management must be able to collect pertinent information to accomplish the following tasks:\n (1) To form a basis for prediction and planning\n (2) To evaluate alternative plans for accomplishing the objective\n (3) To check progress against current plans and objectives, and\n (4) To form a basis for obtaining the facts so that decisions can be made and the job can be done.", "title": "" }, { "docid": "56ff9b231738b24fda47ab152bf78ba1", "text": "We present the Real-time Accurate Cell-shape Extractor (RACE), a high-throughput image analysis framework for automated three-dimensional cell segmentation in large-scale images. RACE is 55-330 times faster and 2-5 times more accurate than state-of-the-art methods. We demonstrate the generality of RACE by extracting cell-shape information from entire Drosophila, zebrafish, and mouse embryos imaged with confocal and light-sheet microscopes. Using RACE, we automatically reconstructed cellular-resolution tissue anisotropy maps across developing Drosophila embryos and quantified differences in cell-shape dynamics in wild-type and mutant embryos. We furthermore integrated RACE with our framework for automated cell lineaging and performed joint segmentation and cell tracking in entire Drosophila embryos. RACE processed these terabyte-sized datasets on a single computer within 1.4 days. 
RACE is easy to use, as it requires adjustment of only three parameters, takes full advantage of state-of-the-art multi-core processors and graphics cards, and is available as open-source software for Windows, Linux, and Mac OS.", "title": "" }, { "docid": "cb793f98ea1a001dde3ac87a0b181ebd", "text": "We propose a simplified model of attention which is applicable to feed-forward neural networks and demonstrate that the resulting model can solve the synthetic “addition” and “multiplication” long-term memory problems for sequence lengths which are both longer and more widely varying than the best published results for these tasks. 1 MODELS FOR SEQUENTIAL DATA Many problems in machine learning are best formulated using sequential data and appropriate models for these tasks must be able to capture temporal dependencies in sequences, potentially of arbitrary length. One such class of models are recurrent neural networks (RNNs), which can be considered a learnable function f whose output ht = f(xt, ht−1) at time t depends on input xt and the model’s previous state ht−1. Training of RNNs with backpropagation through time (Werbos, 1990) is hindered by the vanishing and exploding gradient problem (Pascanu et al., 2012; Hochreiter & Schmidhuber, 1997; Bengio et al., 1994), and as a result RNNs are in practice typically only applied in tasks where sequential dependencies span at most hundreds of time steps. Very long sequences can also make training computationally inefficient due to the fact that RNNs must be evaluated sequentially and cannot be fully parallelized.", "title": "" }, { "docid": "71819107f543aa2b20b070e322cf1bbb", "text": "Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of the TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer and task-specific patterns beyond exact optical flow. Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of features extraction time.", "title": "" }, { "docid": "f279060b5ebe9b163d08f29b0e70619c", "text": "Silver film over nanospheres (AgFONs) were successfully employed as surface-enhanced Raman spectroscopy (SERS) substrates to characterize several artists' red dyes including: alizarin, purpurin, carminic acid, cochineal, and lac dye. Spectra were collected on sample volumes (1 x 10(-6) M or 15 ng/microL) similar to those that would be found in a museum setting and were found to be higher in resolution and consistency than those collected on silver island films (AgIFs). In fact, to the best of the authors' knowledge, this work presents the highest resolution spectrum of the artists' material cochineal to date. 
In order to determine an optimized SERS system for dye identification, experiments were conducted in which laser excitation wavelengths were matched with correlating AgFON localized surface plasmon resonance (LSPR) maxima. Enhancements of approximately two orders of magnitude were seen when resonance SERS conditions were met in comparison to non-resonance SERS conditions. Finally, because most samples collected in a museum contain multiple dyestuffs, AgFONs were employed to simultaneously identify individual dyes within several dye mixtures. These results indicate that AgFONs have great potential to be used to identify not only real artwork samples containing a single dye but also samples containing dyes mixtures.", "title": "" }, { "docid": "b59e527be8cfb1a0d9f475904bbf1602", "text": "Clustering is grouping input data sets into subsets, called ’clusters’ within which the elements are somewhat similar. In general, clustering is an unsupervised learning task as very little or no prior knowledge is given except the input data sets. The tasks have been used in many fields and therefore various clustering algorithms have been developed. Clustering task is, however, computationally expensive as many of the algorithms require iterative or recursive procedures and most of real-life data is high dimensional. Therefore, the parallelization of clustering algorithms is inevitable, and various parallel clustering algorithms have been implemented and applied to many applications. In this paper, we review a variety of clustering algorithms and their parallel versions as well. Although the parallel clustering algorithms have been used for many applications, the clustering tasks are applied as preprocessing steps for parallelization of other algorithms too. Therefore, the applications of parallel clustering algorithms and the clustering algorithms for parallel computations are described in this paper.", "title": "" }, { "docid": "2ea12c68f02657acb9fb27f6ace7e746", "text": "1. Established relevance of authalic spherical parametrization for creating geometry images used subsequently in CNN. 2. Robust authalic parametrization of arbitrary shapes using area restoring diffeomorphic flow and barycentric mapping. 3. Creation of geometry images (a) with appropriate shape feature for rigid/non-rigid shape analysis, (b) which are robust to cut and amenable to learn using CNNs. Experiments Cuts & Data Augmentation", "title": "" }, { "docid": "429c900f6ac66bcea5aa068d27f5b99f", "text": "Recent researches shows that Brain Computer Interface (BCI) technology provides effective way of communication between human and physical device. In this work, an EEG based wireless mobile robot is implemented for people suffer from motor disabilities can interact with physical devices based on Brain Computer Interface (BCI). An experimental model of mobile robot is explored and it can be controlled by human eye blink strength. EEG signals are acquired from NeuroSky Mind wave Sensor (single channel prototype) in non-invasive manner and Signal features are extracted by adopting Discrete Wavelet Transform (DWT) to amend the signal resolution. We analyze and compare the db4 and db7 wavelets for accurate classification of blink signals. Different classes of movements are achieved based on different blink strength of user. 
The experimental setup of adaptive human machine interface system provides better accuracy and navigates the mobile robot based on user command, so it can be adaptable for disabled people.", "title": "" }, { "docid": "4c12b827ee445ab7633aefb8faf222a2", "text": "Research shows that speech dereverberation (SD) with Deep Neural Network (DNN) achieves the state-of-the-art results by learning spectral mapping, which, simultaneously, lacks the characterization of the local temporal spectral structures (LTSS) of speech signal and calls for a large storage space that is impractical in real applications. Contrarily, the Convolutional Neural Network (CNN) offers a better modeling ability by considering local patterns and has less parameters with its weights sharing property, which motivates us to employ the CNN for SD task. In this paper, to our knowledge, a Deep Convolutional Encoder-Decoder (DCED) model is proposed for the first time in dealing with the SD task (DCED-SD), where the advantage of the DCED-SD model lies in its powerful LTSS modeling capability via convolutional encoder-decoder layers with smaller storage requirement. By taking the reverberant and anechoic spectrum as training pairs, the proposed DCED-SD is well-trained in a supervised manner with less convergence time. Additionally, the DCED-SD model size is 23 times smaller than the size of DNN-SD model with better performance achieved. By using the simulated and real-recorded data, extensive experiments have been conducted to demonstrate the superiority of DCED-based SD method over the DNN-based SD method under different unseen reverberant conditions.", "title": "" }, { "docid": "3c592d5ba9aa08f30f1e3afe890677a2", "text": "Education in Latin America is an important part of social policy. Although huge strides were made in the last decade, a region of disparity between the rich and· poor needs to focus on the reduction of inequality of access and provision if it is to hope for qualitative change. Detailed achievements and challenges are presented, with an emphasis on improving school enrolment and a change to curriculum relevant for the future and local community and business involvement. Change will be achieved by a combination of new teachers, new management and leadership, and the involvement of all society.", "title": "" }, { "docid": "2550502036aac5cf144cb8a0bc2d525b", "text": "Significant achievements have been made on the development of next-generation filtration and separation membranes using graphene materials, as graphene-based membranes can afford numerous novel mass-transport properties that are not possible in state-of-art commercial membranes, making them promising in areas such as membrane separation, water desalination, proton conductors, energy storage and conversion, etc. The latest developments on understanding mass transport through graphene-based membranes, including perfect graphene lattice, nanoporous graphene and graphene oxide membranes are reviewed here in relation to their potential applications. A summary and outlook is further provided on the opportunities and challenges in this arising field. The aspects discussed may enable researchers to better understand the mass-transport mechanism and to optimize the synthesis of graphene-based membranes toward large-scale production for a wide range of applications.", "title": "" }, { "docid": "3f467988a35ecb7b6b9feef049407bb2", "text": "Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. 
Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods.", "title": "" }, { "docid": "0ddb95e00f5502c826e6ec380d58911b", "text": "Antenna selection is a multiple-input multiple-output (MIMO) technology, which uses radio frequency (RF) switches to select a good subset of antennas. Antenna selection can alleviate the requirement on the number of RF transceivers, thus being attractive for massive MIMO systems. In massive MIMO antenna selection systems, RF switching architectures need to be carefully considered. In this paper, we examine two switching architectures, i.e., full-array and sub-array. By assuming independent and identically distributed Rayleigh flat fading channels, we use asymptotic theory on order statistics to derive the asymptotic upper capacity bounds of massive MIMO channels with antenna selection for the both switching architectures in the large-scale limit. We also use the derived bounds to further derive the upper bounds of the ergodic achievable spectral efficiency considering the channel state information (CSI) acquisition. It is also showed that the ergodic capacity of sub-array antenna selection system scales no faster than double logarithmic rate. In addition, optimal antenna selection algorithms based on branch-and-bound are proposed for both switching architectures. Our results show that the derived asymptotic bounds are effective and also apply to the finite-dimensional MIMO. The CSI acquisition is one of the main limits for the massive MIMO antenna selection systems in the time-variant channels. The proposed optimal antenna selection algorithms are much faster than the exhaustive-search-based antenna selection, e.g., 1000 × speedup observed in the large-scale system. Interestingly, the full-array and sub-array systems have very close performance, which is validated by their exact capacities and their close upper bounds on capacity.", "title": "" }, { "docid": "05d282026dcecb3286c9ffbd88cb72a3", "text": "Although deep neural networks (DNNs) are state-of-the-art artificial intelligence systems, it is unclear what insights, if any, they provide about human intelligence. 
We address this issue in the domain of visual perception. After briefly describing DNNs, we provide an overview of recent results comparing human visual representations and performance with those of DNNs. In many cases, DNNs acquire visual representations and processing strategies that are very different from those used by people. We conjecture that there are at least two factors preventing them from serving as better psychological models. First, DNNs are currently trained with impoverished data, such as data lacking important visual cues to three-dimensional structure, data lacking multisensory statistical regularities, and data in which stimuli are unconnected to an observer’s actions and goals. Second, DNNs typically lack adaptations to capacity limits, such as attentional mechanisms, visual working memory, and compressed mental representations biased toward preserving task-relevant abstractions.", "title": "" }, { "docid": "3e18a760083cd3ed169ed8dae36156b9", "text": "Clinicians do not make correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is flawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making", "title": "" }, { "docid": "90549b287e67a38516a08a87756130fc", "text": "Based on a sample of 944 respondents who were recruited from 20 elementary schools in South Korea, this research surveyed the factors that lead to smartphone addiction. This research examined the user characteristics and media content types that can lead to addiction.
With regard to user characteristics, results showed that those who have lower self-control and those who have greater stress were more likely to be addicted to smartphones. For media content types, those who use smartphones for SNS, games, and entertainment were more likely to be addicted to smartphones, whereas those who use smartphones for study-related purposes were not. Although both SNS use and game use were positive predictors of smartphone addiction, SNS use was a stronger predictor of smartphone addiction than game use.", "title": "" } ]
scidocsrr
4c45393f8d80acbf4b4bc8630255ea0e
Compositional Verification for Autonomous Systems with Deep Learning Components
[ { "docid": "ed9e22167d3e9e695f67e208b891b698", "text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" }, { "docid": "17611b0521b69ad2b22eeadc10d6d793", "text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "title": "" }, { "docid": "01ed1959250874c55bd32d472461718f", "text": "Deep neural networks have become widely used, obtaining remarkable results in domains such as computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, and bio-informatics, where they have produced results comparable to human experts. However, these networks can be easily “fooled” by adversarial perturbations: minimal changes to correctly-classified inputs, that cause the network to misclassify them. This phenomenon represents a concern for both safety and security, but it is currently unclear how to measure a network’s robustness against such perturbations. Existing techniques are limited to checking robustness around a few individual input points, providing only very limited guarantees. 
We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations. The approach is data-guided, relying on clustering to identify well-defined geometric regions as candidate safe regions. We then utilize verification techniques to confirm that these regions are safe or to provide counter-examples showing that they are not safe. We also introduce the notion of targeted robustness which, for a given target label and region, ensures that a NN does not map any input in the region to the target label. We evaluated our technique on the MNIST dataset and on a neural network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). For these networks, our approach identified multiple regions which were completely safe as well as some which were only safe for specific labels. It also discovered several adversarial perturbations of interest.", "title": "" }, { "docid": "11a69c06f21e505b3e05384536108325", "text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "title": "" } ]
[ { "docid": "cc6b9165f395e832a396d59c85f482cc", "text": "Vision-based automatic counting of people has widespread applications in intelligent transportation systems, security, and logistics. However, there is currently no large-scale public dataset for benchmarking approaches on this problem. This work fills this gap by introducing the first real-world RGBD People Counting DataSet (PCDS) containing over 4, 500 videos recorded at the entrance doors of buses in normal and cluttered conditions. It also proposes an efficient method for counting people in real-world cluttered scenes related to public transportations using depth videos. The proposed method computes a point cloud from the depth video frame and re-projects it onto the ground plane to normalize the depth information. The resulting depth image is analyzed for identifying potential human heads. The human head proposals are meticulously refined using a 3D human model. The proposals in each frame of the continuous video stream are tracked to trace their trajectories. The trajectories are again refined to ascertain reliable counting. People are eventually counted by accumulating the head trajectories leaving the scene. To enable effective head and trajectory identification, we also propose two different compound features. A thorough evaluation on PCDS demonstrates that our technique is able to count people in cluttered scenes with high accuracy at 45 fps on a 1.7 GHz processor, and hence it can be deployed for effective real-time people counting for intelligent transportation systems.", "title": "" }, { "docid": "708fbc1eff4d96da2f3adaa403db3090", "text": "We propose a new system for generating art. The system generates art by looking at art and learning about style; and becomes creative by increasing the arousal potential of the generated art by deviating from the learned styles. We build over Generative Adversarial Networks (GAN), which have shown the ability to learn to generate novel images simulating a given distribution. We argue that such networks are limited in their ability to generate creative products in their original design. We propose modifications to its objective to make it capable of generating creative art by maximizing deviation from established styles and minimizing deviation from art distribution. We conducted experiments to compare the response of human subjects to the generated art with their response to art created by artists. The results show that human subjects could not distinguish art generated by the proposed system from art generated by contemporary artists and shown in top art", "title": "" }, { "docid": "9cebb39b2eb340a21c4f64c1bb42217e", "text": "Text characters and strings in natural scene can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. 
An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.", "title": "" }, { "docid": "f569131096d56336fa3ed547c05c2be4", "text": "Providing high quality recommendations is important for e-commerce systems to assist users in making effective selection decisions from a plethora of choices. Collaborative filtering is a widely accepted technique to generate recommendations based on the ratings of like-minded users. However, it suffers from several inherent issues such as data sparsity and cold start. To address these problems, we propose a novel method called ''Merge'' to incorporate social trust information (i.e., trusted neighbors explicitly specified by users) in providing recommendations. Specifically, ratings of a user's trusted neighbors are merged to complement and represent the preferences of the user and to find other users with similar preferences (i.e., similar users). In addition, the quality of merged ratings is measured by the confidence considering the number of ratings and the ratio of conflicts between positive and negative opinions. Further, the rating confidence is incorporated into the computation of user similarity. The prediction for a given item is generated by aggregating the ratings of similar users. Experimental results based on three real-world data sets demonstrate that our method outperforms other counterparts both in terms of accuracy and coverage. The emergence of Web 2.0 applications has greatly changed users' styles of online activities from searching and browsing to interacting and sharing [6,40]. The available choices grow up exponentially, and make it challenge for users to find useful information which is well-known as the information overload problem. Recommender systems are designed and heavily used in modern e-commerce applications to cope with this problem, i.e., to provide users with high quality, personalized recommendations , and to help them find items (e.g., books, movies, news, music, etc.) of interest from a plethora of available choices. Collaborative filtering (CF) is one of the most well-known and commonly used techniques to generate recommendations [1,17]. The heuristic is that the items appreciated by those who have similar taste will also be in favor of by the active users (who desire recommendations). However, CF suffers from several inherent issues such as data sparsity and cold start. The former issue refers to the difficulty in finding sufficient and reliable similar users due to the fact that users in general only rate a small portion of items, while the latter refers to the dilemma that accurate recommendations are expected for the cold users who rate only a few items and thus whose preferences are hard to be inferred. To resolve these issues and model …", "title": "" }, { "docid": "6cf2ffb0d541320b1ad04dc3b9e1c9a4", "text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. 
This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.", "title": "" }, { "docid": "8f47dc7401999924dba5cb3003194071", "text": "Few types of signal streams are as ubiquitous as music. Here we consider the problem of extracting essential ingredients of music signals, such as well-defined global temporal structure in the form of nested periodicities (or meter). Can we construct an adaptive signal processing device that learns by example how to generate new instances of a given musical style? Because recurrent neural networks can in principle learn the temporal structure of a signal, they are good candidates for such a task. Unfortunately, music composed by standard recurrent neural networks (RNNs) often lacks global coherence. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long Short-Term Memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing & counting and learning of context sensitive languages. In the current study we show that LSTM is also a good mechanism for learning to compose music. We present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and we believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.", "title": "" }, { "docid": "b6ae8a6fdd207686ae4c5108a4b77f1f", "text": "Many IoT applications ingest and process time series data with emphasis on 5Vs (Volume, Velocity, Variety, Value and Veracity). To design and test such systems, it is desirable to have a high-performance traffic generator specifically designed for time series data, preferably using archived data to create a truly realistic workload. However, most existing traffic generator tools either are designed for generic network applications, or only produce synthetic data based on certain time series models. In addition, few have raised their performance bar to millions-packets-per-second level with minimum time violations. In this paper, we design, implement and evaluate a highly efficient and scalable time series traffic generator for IoT applications. Our traffic generator stands out in the following four aspects: 1) it generates time-conforming packets based on high-fidelity reproduction of archived time series data; 2) it leverages an open-source Linux Exokernel middleware and a customized userspace network subsystem; 3) it includes a scalable 10G network card driver and uses \"absolute\" zero-copy in stack processing; and 4) it has an efficient and scalable application-level software architecture and threading model. We have conducted extensive experiments on both a quad-core Intel workstation and a 20-core Intel server equipped with Intel X540 10G network cards and Samsung's NVMe SSDs. 
Compared with a stock Linux baseline and a traditional mmap-based file I/O approach, we observe that our traffic generator significantly outperforms other alternatives in terms of throughput (10X), scalability (3.6X) and time violations (46.2X).", "title": "" }, { "docid": "bc388488c5695286fe7d7e56ac15fa94", "text": "In this paper a new parking guiding and information system is described. The system assists the user to find the most suitable parking space based on his/her preferences and learned behavior. The system takes into account parameters such as driver's parking duration, arrival time, destination, type preference, cost preference, driving time, and walking distance as well as time-varying parking rules and pricing. Moreover, a prediction algorithm is proposed to forecast the parking availability for different parking locations for different times of the day based on the real-time parking information, and previous parking availability/occupancy data. A novel server structure is used to implement the system. Intelligent parking assist system reduces the searching time for parking spots in urban environments, and consequently leads to a reduction in air pollutions and traffic congestion. On-street parking meters, off-street parking garages, as well as free parking spaces are considered in our system.", "title": "" }, { "docid": "eff903cb53fc7f7e9719a2372d517ab3", "text": "The freshwater angelfishes (Pterophyllum) are South American cichlids that have become very popular among aquarists, yet scarce information on their culture and aquarium husbandry exists. We studied Pterophyllum scalare to analyze dietary effects on fecundity, growth, and survival of eggs and larvae during 135 days. Three diets were used: A) decapsulated cysts of Artemia, B) commercial dry fish food, and C) a mix diet of the rotifer Brachionus plicatilis and the cladoceran Daphnia magna. The initial larval density was 100 organisms in each 40 L aquarium. With diet A, larvae reached a maximum weight of 3.80 g, a total length of 6.3 cm, and a height of 5.8 cm; with diet B: 2.80 g, 4.81 cm, and 4.79 cm, and with diet C: 3.00 g, 5.15 cm, and 5.10 cm, respectively. Significant differences were observed between diet A, and diet B and C, but no significantly differences were observed between diets B and C. Fecundity varied from 234 to 1,082 eggs in 20 and 50 g females, respectively. Egg survival ranged from 87.4% up to 100%, and larvae survival (80 larvae/40 L aquarium) from 50% to 66.3% using diet B and A, respectively. Live food was better for growing fish than the commercial balanced food diet. Fecundity and survival are important factors in planning a good production of angelfish.", "title": "" }, { "docid": "a4154317f6bb6af635edb1b2ef012d09", "text": "The pulp industry in Taiwan discharges tons of wood waste and pulp sludge (i.e., wastewater-derived secondary sludge) per year. The mixture of these two bio-wastes, denoted as wood waste with pulp sludge (WPS), has been commonly converted to organic fertilizers for agriculture application or to soil conditioners. However, due to energy demand, the WPS can be utilized in a beneficial way to mitigate an energy shortage. This study elucidated the performance of applying torrefaction, a bio-waste to energy method, to transform the WPS into solid bio-fuel. Two batches of the tested WPS (i.e., WPS1 and WPS2) were generated from a virgin pulp factory in eastern Taiwan. 
The WPS1 and WPS2 samples contained a large amount of organics and had high heating values (HHV) on a dry-basis (HHD) of 18.30 and 15.72 MJ/kg, respectively, exhibiting a potential for their use as a solid bio-fuel. However, the wet WPS as received bears high water and volatile matter content and required de-watering, drying, and upgrading. After a 20 min torrefaction time (tT), the HHD of torrefied WPS1 (WPST1) can be enhanced to 27.49 MJ/kg at a torrefaction temperature (TT) of 573 K, while that of torrefied WPS2 (WPST2) increased to 19.74 MJ/kg at a TT of 593 K. The corresponding values of the energy densification ratio of torrefied solid bio-fuels of WPST1 and WPST2 can respectively rise to 1.50 and 1.25 times that of the raw bio-waste. The HHD of WPST1 of 27.49 MJ/kg is within the range of 24–35 MJ/kg for bituminous coal. In addition, the wet-basis HHV of WPST1 with an equilibrium moisture content of 5.91 wt % is 25.87 MJ/kg, which satisfies the Quality D coal specification of the Taiwan Power Co., requiring a value of above 20.92 MJ/kg.", "title": "" }, { "docid": "6b6fd5bfbe1745a49ce497490cef949d", "text": "This paper investigates optimal power allocation strategies over a bank of independent parallel Gaussian wiretap channels where a legitimate transmitter and a legitimate receiver communicate in the presence of an eavesdropper and an unfriendly jammer. In particular, we formulate a zero-sum power allocation game between the transmitter and the jammer where the payoff function is the secrecy rate. We characterize the optimal power allocation strategies as well as the Nash equilibrium in some asymptotic regimes. We also provide a set of results that cast further insight into the problem. Our scenario, which is applicable to current OFDM communications systems, demonstrates that transmitters that adapt to jammer experience much higher secrecy rates than non-adaptive transmitters.", "title": "" }, { "docid": "fc3aeb32f617f7a186d41d56b559a2aa", "text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-theart models.", "title": "" }, { "docid": "2181c4d52e721aab267057b8f271a9ee", "text": "Recently, the widespread availability of consumer grade drones is responsible for the new concerns of air traffic control. This paper investigates the feasibility of drone detection by passive bistatic radar (PBR) system. Wuhan University has successfully developed a digitally multichannel PBR system, which is dedicated for the drone detection. Two typical trials with a cooperative drone have been designed to examine the system's capability of small drone detection. 
The agreement between experimental results and ground truth indicate the effectiveness of sensing and processing method, which verifies the practicability and prospects of drone detection by this digitally multichannel PBR system.", "title": "" }, { "docid": "fb6494dcf01a927597ff784a3323e8c2", "text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly are critical in increasing repair efficiency and assuring the quality of newly manufactured machines. Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in both cast and fabricated rotors along with additional deficiencies in quantifiable test results and arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.", "title": "" }, { "docid": "935c404529b02cee2620e52f7a09b84d", "text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.", "title": "" }, { "docid": "7c8412c5a7c71fe76105983d3bf7e16d", "text": "A novel wideband dual-cavity-backed circularly polarized (CP) crossed dipole antenna is presented in this letter. 
The exciter of the antenna comprises two classical orthogonal straight dipoles for a simple design. Dual-cavity structure is employed to achieve unidirectional radiation and improve the broadside gain. In particular, the rim edges of the cavity act as secondary radiators, which contribute to significantly enhance the overall CP performance of the antenna. The final design with an overall size of 0.57λo × 0.57λo × 0.24λo where λo is the free-space wavelength at the lowest CP operating frequency of 2.0 GHz, yields a measured –10 dB impedance bandwidth (BW) of 79.4% and 3 dB axial-ratio BW of 66.7%. The proposed antenna exhibits right-handed circular polarization with a maximum broadside gain of about 9.7 dBic.", "title": "" }, { "docid": "90fe763855ca6c4fabe4f9d042d5c61a", "text": "While learning models of intuitive physics is an increasingly active area of research, current approaches still fall short of natural intelligences in one important regard: they require external supervision, such as explicit access to physical states, at training and sometimes even at test times. Some authors have relaxed such requirements by supplementing the model with an handcrafted physical simulator. Still, the resulting methods are unable to automatically learn new complex environments and to understand physical interactions within them. In this work, we demonstrated for the first time learning such predictors directly from raw visual observations and without relying on simulators. We do so in two steps: first, we learn to track mechanically-salient objects in videos using causality and equivariance, two unsupervised learning principles that do not require auto-encoding. Second, we demonstrate that the extracted positions are sufficient to successfully train visual motion predictors that can take the underlying environment into account. We validate our predictors on synthetic datasets; then, we introduce a new dataset, ROLL4REAL, consisting of real objects rolling on complex terrains (pool table, elliptical bowl, and random height-field). We show that in all such cases it is possible to learn reliable extrapolators of the object trajectories from raw videos alone, without any form of external supervision and with no more prior knowledge than the choice of a convolutional neural network architecture.", "title": "" }, { "docid": "ba0051fdc72efa78a7104587042cea64", "text": "Open innovation breaks the original innovation border of organization and emphasizes the use of suppliers, customers, partners, and other internal and external innovative thinking and resources. How to effectively implement and manage open innovation has become a new business problem. Business ecosystem is the network system of value creation and co-evolution achieved by suppliers, users, partner, and other groups with self-organization mode. This study began with the risk analysis of open innovation implementation; then innovation process was embedded into business ecosystem structure; open innovation mode based on business ecosystem was proposed; business ecosystem based on open innovation was built according to influence degree of each innovative object.
Study finds that both sides have a mutual promotion relationship, which provides a new analysis perspective for open innovation and business ecosystem; at the same time, it is also conducive to guiding the concrete practice of implementing open innovation.", "title": "" }, { "docid": "f27391f29b44bfa9989146566a288b79", "text": "An appealing feature of blockchain technology is smart contracts. A smart contract is executable code that runs on top of the blockchain to facilitate, execute and enforce an agreement between untrusted parties without the involvement of a trusted third party. In this paper, we conduct a systematic mapping study to collect all research that is relevant to smart contracts from a technical perspective. The aim of doing so is to identify current research topics and open challenges for future studies in smart contract research. We extract 24 papers from different scientific databases. The results show that about two thirds of the papers focus on identifying and tackling smart contract issues. Four key issues are identified, namely, codifying, security, privacy and performance issues. The rest of the papers focuses on smart contract applications or other smart contract related topics. Research gaps that need to be addressed in future studies are provided.", "title": "" } ]
scidocsrr
a936526b6932b8c79b4eca960fca3b41
Tracking Knowledge Proficiency of Students with Educational Priors
[ { "docid": "7209596ad58da21211bfe0ceaaccc72b", "text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.", "title": "" } ]
[ { "docid": "1885ee33c09d943736b03895f41cea06", "text": "Since the late 1990s, there has been a burst of research on robotic devices for poststroke rehabilitation. Robot-mediated therapy produced improvements on recovery of motor capacity; however, so far, the use of robots has not shown qualitative benefit over classical therapist-led training sessions, performed on the same quantity of movements. Multidegree-of-freedom robots, like the modern upper-limb exoskeletons, enable a distributed interaction on the whole assisted limb and can exploit a large amount of sensory feedback data, potentially providing new capabilities within standard rehabilitation sessions. Surprisingly, most publications in the field of exoskeletons focused only on mechatronic design of the devices, while little details were given to the control aspects. On the contrary, we believe a paramount aspect for robots potentiality lies on the control side. Therefore, the aim of this review is to provide a taxonomy of currently available control strategies for exoskeletons for neurorehabilitation, in order to formulate appropriate questions toward the development of innovative and improved control strategies.", "title": "" }, { "docid": "7d35f3afeb9a8e1dc6f99e4d241273c7", "text": "In this paper, we propose Motion Dense Sampling (MDS) for action recognition, which detects very informative interest points from video frames. MDS has three advantages compared to other existing methods. The first advantage is that MDS detects only interest points which belong to action regions of all regions of a video frame. The second one is that it can detect the constant number of points even when the size of action region in an image drastically changes. The Third one is that MDS enables to describe scale invariant features by computing sampling scale for each frame based on the size of action regions. Thus, our method detects much more informative interest points from videos unlike other methods. We also propose Category Clustering and Component Clustering, which generate the very effective codebook for action recognition. Experimental results show a significant improvement over existing methods on YouTube dataset. Our method achieves 87.5 % accuracy for video classification by using only one descriptor.", "title": "" }, { "docid": "ba206d552bb33f853972e3f2e70484bc", "text": "Presumptive stressful life event scale Dear Sir, in different demographic and clinical categories, which has not been attempted. I have read with considerable interest the article entitled, Presumptive stressful life events scale (PSLES)-a new stressful life events scale for use in India by Gurmeet Singh et al (April 1984 issue). I think it is a commendable effort to develop such a scale which would potentially be of use in our setting. However, the research raises several questions, which have not been dealt with in the' paper. The following are the questions or comments which ask for response from the authors: a) The mode of selection of 51 items is not mentioned. If taken arbitrarily they could suggest a bias. If selected from clinical experience, there could be a likelihood of certain events being missed. An ideal way would be to record various events from a number of persons (and patients) and then prepare a list of commonly occuring events. b) It is noteworthy that certain culture specific items as dowry, birth of daughter, etc. are included. 
Other relevant events as conflict with in-laws (not regarding dowry), refusal by match seeking team (difficulty in finding match for marriage) and lack of son, could be considered stressful in our setting. c) Total number of life events are a function of age, as has been mentioned in the review of literature also, hence age categorisation as under 35 and over 35 might neither be proper nor sufficient. The relationship of number of life events in different age groups would be interesting to note. d) Also, more interesting would be to examine the rank order of life events e) A briefened version would be more welcome. The authors should try to evolve a version of around about 25-30 items, which could be easily applied clinically or for research purposes. As can be seen, from items after serial number 30 (Table 4) many could be excluded. f) The cause and effect relationship is difficult to comment from the results given by the scale. As is known, 'stressfulness' of the event depends on an individuals perception of the event. That persons with higher neu-roticism scores report more events could partly be due to this. g) A minor point, Table 4 mentions Standard Deviations however S. D. has not been given for any item. Reply: I am grateful for the interest shown by Dr. Chaturvedi and his …", "title": "" }, { "docid": "b53f683ead67e4bf7915545a8d1a2822", "text": "Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs. Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks. In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner. In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm? We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models.", "title": "" }, { "docid": "7f4a26bbd2335079c97c7f5bb1961af2", "text": "We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them—specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor–critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. 
We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.", "title": "" }, { "docid": "a5e7a1e5900dc7c8b5047efee6f1268e", "text": "Google Hacking continues to be abused by attackers to find vulnerable websites on current Internet. Through searching specific terms of vulnerabilities in search engines, attackers can easily and automatically find a lot of vulnerable websites in a large scale. However, less work has been done to study the characteristics of vulnerabilities targeted by Google Hacking (e.g., what kind of vulnerabilities are typically targeted by Google Hacking? What kind of vulnerabilities usually have a large victim population? What is the impact of Google Hacking and how easy to defend against Google Hacking?). In this paper, we conduct the first quantitative characterization study of Google Hacking. Starting from 997 Google Dorks used in Google Hacking, we collect a total of 305,485 potentially vulnerable websites, and 6,301 verified vulnerable websites. From these vulnerabilities and potentially vulnerable websites, we study the characteristics of vulnerabilities targeted by Google Hacking from different perspectives. We find that web-related CVE vulnerabilities may not fully reflect the tastes of Google Hacking. Our results show that only a few specially chosen vulnerabilities are exploited in Google Hacking. Specifically, attackers only target on certain categories of vulnerabilities and prefer vulnerabilities with high severity score but low attack complexity. Old vulnerabilities are also preferred in Google Hacking. To defend against the Google Hacking, simply modifying few keywords in web pages can defeat 65.5 % of Google Hacking attacks.", "title": "" }, { "docid": "9297a6eaaf5ba6c1ebec8f96243d39ac", "text": "Editor: T.M. Harrison Non-arc basalts of Archean and Proterozoic age have model primary magmas that exhibit mantle potential temperatures TP that increase from 1350 °C at the present to a maximum of ∼1500–1600 °C at 2.5–3.0 Ga. The overall trend of these temperatures converges smoothly to that of the present-day MORB source, supporting the interpretation that the non-arc basalts formed by the melting of hot ambient mantle, not mantle plumes, and that they can constrain the thermal history of the Earth. These petrological results are very similar to those predicted by thermal models characterized by a low Urey ratio and more sluggish mantle convection in the past. We infer that the mantle was warming in deep Archean–Hadean time because internal heating exceeded surface heat loss, and it has been cooling from 2.5 to 3.0 Ga to the present. Non-arc Precambrian basalts are likely to be similar to those that formed oceanic crust and erupted on continents. It is estimated that ∼25–35 km of oceanic crust formed in the ancient Earth by about 30% melting of hot ambient mantle. In contrast, komatiite parental magmas reveal TP that are higher than those of non-arc basalts, consistent with the hot plume model. 
However, the associated excess magmatism was minor and oceanic plateaus, if they existed, would have had subtle bathymetric variations, unlike those of Phanerozoic oceanic plateaus. Primary magmas of Precambrian ambient mantle had 18–24% MgO, and they left behind residues of harzburgite that are now found as xenoliths of cratonic mantle. We infer that primary basaltic partial melts having 10–13% MgO are a feature of Phanerozoic magmatism, not of the early Earth, which may be why modern-day analogs of oceanic crust have not been reported in Archean greenstone belts.", "title": "" }, { "docid": "c619d692d9e8a262f85324f6e35471e6", "text": "Affect conveys important implicit information in human communication. Having the capability to correctly express affect during human-machine conversations is one of the major milestones in artificial intelligence. In recent years, extensive research on open-domain neural conversational models has been conducted. However, embedding affect into such models is still under explored. In this paper, we propose an endto-end affect-rich open-domain neural conversational model that produces responses not only appropriate in syntax and semantics, but also with rich affect. Our model extends the Seq2Seq model and adopts VAD (Valence, Arousal and Dominance) affective notations to embed each word with affects. In addition, our model considers the effect of negators and intensifiers via a novel affective attention mechanism, which biases attention towards affect-rich words in input sentences. Lastly, we train our model with an affect-incorporated objective function to encourage the generation of affect-rich words in the output responses. Evaluations based on both perplexity and human evaluations show that our model outperforms the state-of-the-art baseline model of comparable size in producing natural and affect-rich responses. Introduction Affect is a psychological experience of feeling or emotion. As a vital part of human intelligence, having the capability to recognize, understand and express affect and emotions like human has been arguably one of the major milestones in artificial intelligence (Picard 1997). Open-domain conversational models aim to generate coherent and meaningful responses when given user input sentences. In recent years, neural network based generative conversational models relying on Sequence-to-Sequence network (Seq2Seq) (Sutskever, Vinyals, and Le 2014) have been widely adopted due to its success in neural machine translation. Seq2Seq based conversational models have the advantages of end-to-end training paradigm and unrestricted response space over conventional retrieval-based models. To make neural conversational models more engaging, various techniques have been proposed, such as using stochastic latent variable (Serban et al. 2017) to promote response diversity and encoding topic (Xing et al. 2017) into conversational models to produce more coherent responses. Copyright c © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. However, embedding affect into neural conversational models has been seldom explored, despite that it has many benefits such as improving user satisfaction (Callejas, Griol, and López-Cózar 2011), fewer breakdowns (Martinovski and Traum 2003), and more engaged conversations (Robison, McQuiggan, and Lester 2009). 
For real-world applications, Fitzpatrick, Darcy, and Vierhile (2017) developed a rule-based empathic chatbot to deliver cognitive behavior therapy to young adults with depression and anxiety, and obtained significant results on depression reduction. Despite these benefits, there are a few challenges in the affect embedding in neural conversational models that existing approaches fail to address: (i) It is difficult to capture the emotion of a sentence, partly because negators and intensifiers often change its polarity and strength. Handling negators and intensifiers properly still remains a challenge in sentiment analysis. (ii) It is difficult to embed emotions naturally in responses with correct grammar and semantics (Ghosh et al. 2017). In this paper, we propose an end-to-end single-turn open-domain neural conversational model to address the aforementioned challenges to produce responses that are natural and affect-rich. Our model extends the Seq2Seq model with attention (Luong, Pham, and Manning 2015). We leverage an external corpus (Warriner, Kuperman, and Brysbaert 2013) to provide affect knowledge for each word in the Valence, Arousal and Dominance (VAD) dimensions (Mehrabian 1996). We then incorporate the affect knowledge into the embedding layer of our model. VAD notation has been widely used as a dimensional representation of human emotions in psychology and various computational models, e.g., (Wang, Tan, and Miao 2016; Tang et al. 2017). 2D plots of selected words with extreme VAD values are shown in Figure 1. To capture the effect of negators and intensifiers, we propose a novel biased attention mechanism that explicitly considers negators and intensifiers in attention computation. To maintain correct grammar and semantics, we train our Seq2Seq model with a weighted cross-entropy loss that encourages the generation of affect-rich words without degrading language fluency. Our main contributions are summarized as follows: • For the first time, we propose a novel affective attention mechanism to incorporate the effect of negators and intensifiers in conversation modeling.", "title": "" }, { "docid": "7882d2d18bc8a30a63e9fdb726c48ff1", "text": "Flying ad-hoc networks (FANETs) are a very vibrant research area nowadays. They have many military and civil applications. Limited battery energy and the high mobility of micro unmanned aerial vehicles (UAVs) represent their two main problems, i.e., short flight time and inefficient routing. In this paper, we try to address both of these problems by means of efficient clustering. First, we adjust the transmission power of the UAVs by anticipating their operational requirements. Optimal transmission range will have minimum packet loss ratio (PLR) and better link quality, which ultimately save the energy consumed during communication. Second, we use a variant of the K-Means Density clustering algorithm for selection of cluster heads. Optimal cluster heads enhance the cluster lifetime and reduce the routing overhead. The proposed model outperforms state-of-the-art artificial intelligence techniques such as the Ant Colony Optimization-based clustering algorithm and the Grey Wolf Optimization-based clustering algorithm. 
The performance of the proposed algorithm is evaluated in term of number of clusters, cluster building time, cluster lifetime and energy consumption.", "title": "" }, { "docid": "40e0d6e93c426107cbefbdf3d4ca85b9", "text": "H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.", "title": "" }, { "docid": "45d6863e54b343d7a081e79c84b81e65", "text": "In order to obtain optimal 3D structure and viewing parameter estimates, bundle adjustment is often used as the last step of feature-based structure and motion estimation algorithms. Bundle adjustment involves the formulation of a large scale, yet sparse minimization problem, which is traditionally solved using a sparse variant of the Levenberg-Marquardt optimization algorithm that avoids storing and operating on zero entries. This paper argues that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique. Detailed comparative experimental results provide strong evidence supporting this claim", "title": "" }, { "docid": "e9402a771cc761e7e6484c2be6bc2cce", "text": "In this work, we present the Text Conditioned Auxiliary Classifier Generative Adversarial Network, (TAC-GAN) a text to image Generative Adversarial Network (GAN) for synthesizing images from their text descriptions. Former approaches have tried to condition the generative process on the textual data; but allying it to the usage of class information, known to diversify the generated samples and improve their structural coherence, has not been explored. We trained the presented TAC-GAN model on the Oxford102 dataset of flowers, and evaluated the discriminability of the generated images with Inception-Score, as well as their diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our approach outperforms the stateof-the-art models, i.e., its inception score is 3.45, corresponding to a relative increase of 7.8% compared to the recently introduced StackGan. A comparison of the mean MS-SSIM scores of the training and generated samples per class shows that our approach is able to generate highly diverse images with an average MS-SSIM of 0.14 over all generated classes.", "title": "" }, { "docid": "6c7284ca77809210601c213ee8a685bb", "text": "Patients with non-small cell lung cancer (NSCLC) require careful staging at the time of diagnosis to determine prognosis and guide treatment recommendations. 
The seventh edition of the TNM Classification of Malignant Tumors is scheduled to be published in 2009 and the International Association for the Study of Lung Cancer (IASLC) created the Lung Cancer Staging Project (LCSP) to guide revisions to the current lung cancer staging system. These recommendations will be submitted to the American Joint Committee on Cancer (AJCC) and to the Union Internationale Contre le Cancer (UICC) for consideration in the upcoming edition of the staging manual. Data from over 100,000 patients with lung cancer were submitted for analysis and several modifications were suggested for the T descriptors and the M descriptors although the current N descriptors remain unchanged. These recommendations will further define homogeneous patient subsets with similar survival rates. More importantly, these revisions will help guide clinicians in making optimal, stage-specific, treatment recommendations.", "title": "" }, { "docid": "0dbb5f492e6e2336abea6bf8ce6ee3cc", "text": "This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using planar Frenet-Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G = SE(2) is a symmetry group for the control law). A generalization of the control law for n vehicles is presented, and the corresponding (relative) equilibria are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem. The practical motivation for this work is the problem of formation control for meter-scale UAVs; therefore, an implementation approach consistent with UAV payload constraints is also discussed.", "title": "" }, { "docid": "263088de40b85afeb051244de4821a25", "text": "Deep neural networks (DNN) are powerful models for many pattern recognition tasks, yet they tend to have many layers and many neurons resulting in a high computational complexity. This limits their application to high-performance computing platforms. In order to evaluate a trained DNN on a lower-performance computing platform like a mobile or embedded device, model reduction techniques which shrink the network size and reduce the number of parameters without considerable performance degradation performance are highly desirable. In this paper, we start with a trained fully connected DNN and show how to reduce the network complexity by a novel layerwise pruning method. We show that if some neurons are pruned and the remaining parameters (weights and biases) are adapted correspondingly to correct the errors introduced by pruning, the model reduction can be done almost without performance loss. The main contribution of our pruning method is a closed-form solution that only makes use of the first and second order moments of the layer outputs and, therefore, only needs unlabeled data. Using three benchmark datasets, we compare our pruning method with the low-rank approximation approach.", "title": "" }, { "docid": "ca715288ff8af17697e65d8b3c9f01bf", "text": "In the last five years, biologically inspired features (BIF) always held the state-of-the-art results for human age estimation from face images. 
Recently, researchers mainly put their focuses on the regression step after feature extraction, such as support vector regression (SVR), partial least squares (PLS), canonical correlation analysis (CCA) and so on. In this paper, we apply convolutional neural network (CNN) to the age estimation problem, which leads to a fully learned end-toend system can estimate age from image pixels directly. Compared with BIF, the proposed method has deeper structure and the parameters are learned instead of hand-crafted. The multi-scale analysis strategy is also introduced from traditional methods to the CNN, which improves the performance significantly. Furthermore, we train an efficient network in a multi-task way which can do age estimation, gender classification and ethnicity classification well simultaneously. The experiments on MORPH Album 2 illustrate the superiorities of the proposed multi-scale CNN over other state-of-the-art methods.", "title": "" }, { "docid": "c04fc6682403d89e1fbca19787f7a118", "text": "This paper presents a Compact Dual-Circularly Polarized Corrugated Horn with Integrated Septum Polarizer in the X-band. Usually such a complicated structure would be fabricated in parts and assembled together. However, exploiting the versatility afforded by Metal 3D-printing, a complete prototype is fabricated as a single part, enabling a compact design. Any variation due to mating tolerance of separate parts is eliminated. The prototype is designed to work from 9.5GHz to 10GHz. It has an impedance match of better than |S11|<−15dB and a gain about 13.4dBic at 9.75GHz. The efficiency is better than 95% in the operating band.", "title": "" }, { "docid": "ec0d1addabab76d9c2bd044f0bfe3153", "text": "Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author’s influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI into four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, starting with the latent Dirichlet allocation, to the more advanced models including author-link topic model and dynamic author citation topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.", "title": "" }, { "docid": "20b7da7c9f630f12b0ef86d92ed7aa0f", "text": "In this paper, a Rectangular Dielectric Resonator Antenna (RDRA) with a modified feeding line is designed and investigated at 28GHz. The modified feed line is designed to excite the DR with relative permittivity of 10 which contributes to a wide bandwidth operation. The proposed single RDRA has been fabricated and mounted on a RT/Duroid 5880 (εr = 2.2 and tanδ = 0.0009) substrate. The optimized single element has been applied to array structure to improve the gain and achieve the required gain performance. 
The radiation pattern, impedance bandwidth and gain are simulated and measured accordingly. The number of elements and element spacing are studied for an optimum performance. The proposed antenna obtains a reflection coefficient response from 27.0GHz to 29.1GHz which cover the desired frequency band. This makes the proposed antenna achieve 2.1GHz impedance bandwidth and gain of 12.1 dB. Thus, it has potential for millimeter wave and 5G applications.", "title": "" }, { "docid": "c3c4e6374ace436e22a09a5816be046f", "text": "BACKGROUND Dermal fillers are increasingly being utilized for multiple cosmetic dermatology indications. The appeal of these products can be partly attributed to their strong safety profiles. Nevertheless, complications can sometimes occur. OBJECTIVE To summarize the complications associated with each available dermal filling agent, strategies to avoid them, and management options if they do arise. METHODS AND MATERIALS Complications with dermal fillers reported in peer-reviewed publications, prescribing information, and recent presentations at professional meetings were reviewed. Recommendations for avoiding and managing complications are provided, based on the literature review and the author's experience. RESULTS Inappropriate placement or superficial placement is one of the most frequent reasons for patient dissatisfaction. Due to the reversibility of hyaluronic acid, complications from these fillers can be easily corrected. Sensitivity to any of the currently approved FDA products is quite rare and can usually be managed with anti-inflammatory agents. Infection is quite uncommon as well and can usually be managed with either antibiotics or antivirals depending on the clinical features. The most concerning complication is cutaneous necrosis, and a protocol to treat the full spectrum of this process is reviewed. CONCLUSIONS Complications with dermal fillers are infrequent, and strategies to minimize their incidence and impact are easily deployed. Familiarity with each family of soft-tissue augmentation products, potential complications, and their management will optimize the use of these agents.", "title": "" } ]
scidocsrr
3b1f2e12d3b4042889fc551daf6e9cf7
Template Based Inference in Symmetric Relational Markov Random Fields
[ { "docid": "9707365fac6490f52b328c2b039915b6", "text": "Identification of protein–protein interactions often provides insight into protein function, and many cellular processes are performed by stable protein complexes. We used tandem affinity purification to process 4,562 different tagged proteins of the yeast Saccharomyces cerevisiae. Each preparation was analysed by both matrix-assisted laser desorption/ionization–time of flight mass spectrometry and liquid chromatography tandem mass spectrometry to increase coverage and accuracy. Machine learning was used to integrate the mass spectrometry scores and assign probabilities to the protein–protein interactions. Among 4,087 different proteins identified with high confidence by mass spectrometry from 2,357 successful purifications, our core data set (median precision of 0.69) comprises 7,123 protein–protein interactions involving 2,708 proteins. A Markov clustering algorithm organized these interactions into 547 protein complexes averaging 4.9 subunits per complex, about half of them absent from the MIPS database, as well as 429 additional interactions between pairs of complexes. The data (all of which are available online) will help future studies on individual proteins as well as functional genomics and systems biology.", "title": "" }, { "docid": "8dc493568e94d94370f78e663da7df96", "text": "Expertise in C++, C, Perl, Haskell, Linux system administration. Technical experience in compiler design and implementation, release engineering, network administration, FPGAs, hardware design, probabilistic inference, machine learning, web search engines, cryptography, datamining, databases (SQL, Oracle, PL/SQL, XML), distributed knowledge bases, machine vision, automated web content generation, 2D and 3D graphics, distributed computing, scientific and numerical computing, optimization, virtualization (Xen, VirtualBox). Also experience in risk analysis, finance, game theory, firm behavior, international economics. Familiar with Java, C++ Standard Template Library, Java Native Interface, Java Foundation Classes, Android development, MATLAB, CPLEX, NetPBM, Cascading Style Sheets (CSS), Tcl/Tk, Windows system administration, Mac OS X system administration, ElasticSearch, modifying the Ubuntu installer.", "title": "" } ]
[ { "docid": "907888b819c7f65fe34fb8eea6df9c93", "text": "Most time-series datasets with multiple data streams have (many) missing measurements that need to be estimated. Most existing methods address this estimation problem either by interpolating within data streams or imputing across data streams; we develop a novel approach that does both. Our approach is based on a deep learning architecture that we call a Multidirectional Recurrent Neural Network (M-RNN). An M-RNN differs from a bi-directional RNN in that it operates across streams in addition to within streams, and because the timing of inputs into the hidden layers is both lagged and advanced. To demonstrate the power of our approach we apply it to a familiar real-world medical dataset and demonstrate significantly improved performance.", "title": "" }, { "docid": "1e18be7d7e121aa899c96cbcf5ea906b", "text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. 
We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility.", "title": "" }, { "docid": "26e60be4012b20575f3ddee16f046daa", "text": "Natural scene character recognition is challenging due to the cluttered background, which is hard to separate from text. In this paper, we propose a novel method for robust scene character recognition. Specifically, we first use robust principal component analysis (PCA) to denoise character images by recovering the missing low-rank component and filtering out the sparse noise term, and then use a simple Histogram of oriented Gradient (HOG) to perform image feature extraction, and finally, use a sparse representation based classifier for recognition. In experiments on four public datasets, namely the Char74K dataset, ICDAR 2003 robust reading dataset, Street View Text (SVT) dataset and IIIT5K-word dataset, our method was demonstrated to be competitive with the state-of-the-art methods.", "title": "" }, { "docid": "d8b19c953cc66b6157b87da402dea98a", "text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.", "title": "" }, { "docid": "645d9a7186080d4ec3c7ce708b1c9818", "text": "With millions of users and billions of photos, web-scale face recognition is a challenging task that demands speed, accuracy, and scalability. Most current approaches do not address and do not scale well to Internet-sized scenarios such as tagging friends or finding celebrities. Focusing on web-scale face identification, we gather an 800,000 face dataset from the Facebook social network that models real-world situations where specific faces must be recognized and unknown identities rejected. We propose a novel Linearly Approximated Sparse Representation-based Classification (LASRC) algorithm that uses linear regression to perform sample selection for ℓ1-minimization, thus harnessing the speed of least-squares and the robustness of sparse solutions such as SRC. Our efficient LASRC algorithm achieves comparable performance to SRC with a 100–250 times speedup and exhibits similar recall to SVMs with much faster training. 
Extensive tests demonstrate our proposed approach is competitive on pair-matching verification tasks and outperforms current state-of-the-art algorithms on open-universe identification in uncontrolled, web-scale scenarios. 2013 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "362cf1594043c92f118876f959e078a4", "text": "Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively. However standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data. In our experiments we can see over 13% increase in accuracy in the low-data regime experiments in Omniglot (from 69% to 82%), EMNIST (73.9% to 76%) and VGG-Face (4.5% to 12%); in Matching Networks for Omniglot we observe an increase of 0.5% (from 96.9% to 97.4%) and an increase of 1.8% in EMNIST (from 59.5% to 61.3%).", "title": "" }, { "docid": "bd41083b19e2d542b3835c3a008b30e6", "text": "Formalizations are used in systems development to support the description of artifacts and to shape and regulate developer behavior. The limits to applying formalizations in these two ways are discussed based on examples from systems development practice. It is argued that formalizations, for example in the form of methods, are valuable in some situations, but inappropriate in others. The alternative to uncritically using formalizations is that systems developers reflect on the situations in which they find themselves and manage based on a combination of formal and informal approaches.", "title": "" }, { "docid": "ce41d07b369635c5b0a914d336971f8e", "text": "In this paper, a fuzzy controller for an inverted pendulum system is presented in two stages. These stages are: investigation of fuzzy control system modeling methods and solution of the “Inverted Pendulum Problem” by using Java programming with Applets for internet based control education. In the first stage, fuzzy modeling and fuzzy control system investigation, Java programming language, classes and multithreading were introduced. In the second stage specifically, simulation of the inverted pendulum problem was developed with Java Applets and the simulation results were given. Also some stability concepts are introduced. c © 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7aa1df89f94fe1f653f1680fbf33e838", "text": "Several modes of vaccine delivery have been developed in the last 25 years, which induce strong immune responses in pre-clinical models and in human clinical trials. 
Some modes of delivery include, adjuvants (aluminum hydroxide, Ribi formulation, QS21), liposomes, nanoparticles, virus like particles, immunostimulatory complexes (ISCOMs), dendrimers, viral vectors, DNA delivery via gene gun, electroporation or Biojector 2000, cell penetrating peptides, dendritic cell receptor targeting, toll-like receptors, chemokine receptors and bacterial toxins. There is an enormous amount of information and vaccine delivery methods available for guiding vaccine and immunotherapeutics development against diseases.", "title": "" }, { "docid": "228c59c9bf7b4b2741567bffb3fcf73f", "text": "This paper presents a new PSO-based optimization DBSCAN space clustering algorithm with obstacle constraints. The algorithm introduces obstacle model and simplifies two-dimensional coordinates of the cluster object coding to one-dimensional, then uses the PSO algorithm to obtain the shortest path and minimum obstacle distance. At the last stage, this paper fulfills spatial clustering based on obstacle distance. Theoretical analysis and experimental results show that the algorithm can get high-quality clustering result of space constraints with more reasonable and accurate quality.", "title": "" }, { "docid": "0250d6bb0bcf11ca8af6c2661c1f7f57", "text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.", "title": "" }, { "docid": "19ebb5c0cdf90bf5aef36ad4b9f621a1", "text": "There has been a dramatic increase in the number and complexity of new ventilation modes over the last 30 years. The impetus for this has been the desire to improve the safety, efficiency, and synchrony of ventilator-patient interaction. Unfortunately, the proliferation of names for ventilation modes has made understanding mode capabilities problematic. New modes are generally based on increasingly sophisticated closed-loop control systems or targeting schemes. 
We describe the 6 basic targeting schemes used in commercially available ventilators today: set-point, dual, servo, adaptive, optimal, and intelligent. These control systems are designed to serve the 3 primary goals of mechanical ventilation: safety, comfort, and liberation. The basic operations of these schemes may be understood by clinicians without any engineering background, and they provide the basis for understanding the wide variety of ventilation modes and their relative advantages for improving patient-ventilator synchrony. Conversely, their descriptions may provide engineers with a means to better communicate to end users.", "title": "" }, { "docid": "8588a3317d4b594d8e19cb005c3d35c7", "text": "Histograms of Oriented Gradients (HOG) is one of the wellknown features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N.Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on a image region and the combined features are classified by using linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using Stepwise Forward Selection (SFS) algorithm or Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates are confirmed through experiments using MIT pedestrian dataset.", "title": "" }, { "docid": "932088f443c5f0f3e239ed13032e56d7", "text": "Hydro Muscles are linear actuators resembling ordinary biological muscles in terms of active dynamic output, passive material properties and appearance. The passive and dynamic characteristics of the latex based Hydro Muscle are addressed. The control tests of modular muscles are presented together with a muscle model relating sensed quantities with net force. Hydro Muscles are discussed in the context of conventional actuators. The hypothesis that Hydro Muscles have greater efficiency than McKibben Muscles is experimentally verified. Hydro Muscle peak efficiency with (without) back flow consideration was 88% (27%). Possible uses of Hydro Muscles are illustrated by relevant robotics projects at WPI. It is proposed that Hydro Muscles can also be an excellent educational tool for moderate-budget robotics classrooms and labs; the muscles are inexpensive (in the order of standard latex tubes of comparable size), made of off-the-shelf elements in less than 10 minutes, easily customizable, lightweight, biologically inspired, efficient, compliant soft linear actuators that are adept for power-augmentation. Moreover, a single source can actuate many muscles by utilizing control of flow and/or pressure. Still further, these muscles can utilize ordinary tap water and successfully operate within a safe range of pressures not overly exceeding standard water household pressure of about 0.59 MPa (85 psi).", "title": "" }, { "docid": "d9710b9a214d95c572bdc34e1fe439c4", "text": "This paper presents a new method, capable of automatically generating attacks on binary programs from software crashes. 
We analyze software crashes with a symbolic failure model by performing concolic executions following the failure directed paths, using a whole system environment model and concrete address mapped symbolic memory in S2 E. We propose a new selective symbolic input method and lazy evaluation on pseudo symbolic variables to handle symbolic pointers and speed up the process. This is an end-to-end approach able to create exploits from crash inputs or existing exploits for various applications, including most of the existing benchmark programs, and several large scale applications, such as a word processor (Microsoft office word), a media player (mpalyer), an archiver (unrar), or a pdf reader (foxit). We can deal with vulnerability types including stack and heap overflows, format string, and the use of uninitialized variables. Notably, these applications have become software fuzz testing targets, but still require a manual process with security knowledge to produce mitigation-hardened exploits. Using this method to generate exploits is an automated process for software failures without source code. The proposed method is simpler, more general, faster, and can be scaled to larger programs than existing systems. We produce the exploits within one minute for most of the benchmark programs, including mplayer. We also transform existing exploits of Microsoft office word into new exploits within four minutes. The best speedup is 7,211 times faster than the initial attempt. For heap overflow vulnerability, we can automatically exploit the unlink() macro of glibc, which formerly requires sophisticated hacking efforts.", "title": "" }, { "docid": "665da3a85a548d12864de5fad517e3ee", "text": "To characterize the neural correlates of being personally involved in social interaction as opposed to being a passive observer of social interaction between others we performed an fMRI study in which participants were gazed at by virtual characters (ME) or observed them looking at someone else (OTHER). In dynamic animations virtual characters then showed socially relevant facial expressions as they would appear in greeting and approach situations (SOC) or arbitrary facial movements (ARB). Differential neural activity associated with ME>OTHER was located in anterior medial prefrontal cortex in contrast to the precuneus for OTHER>ME. Perception of socially relevant facial expressions (SOC>ARB) led to differentially increased neural activity in ventral medial prefrontal cortex. Perception of arbitrary facial movements (ARB>SOC) differentially activated the middle temporal gyrus. The results, thus, show that activation of medial prefrontal cortex underlies both the perception of social communication indicated by facial expressions and the feeling of personal involvement indicated by eye gaze. Our data also demonstrate that distinct regions of medial prefrontal cortex contribute differentially to social cognition: whereas the ventral medial prefrontal cortex is recruited during the analysis of social content as accessible in interactionally relevant mimic gestures, differential activation of a more dorsal part of medial prefrontal cortex subserves the detection of self-relevance and may thus establish an intersubjective context in which communicative signals are evaluated.", "title": "" }, { "docid": "f0f2cdccd8f415cbd3fffcea4509562a", "text": "Textual inference is an important component in many applications for understanding natural language. 
Classical approaches to textual inference rely on logical representations for meaning, which may be regarded as “external” to the natural language itself. However, practical applications usually adopt shallower lexical or lexical-syntactic representations, which correspond closely to language structure. In many cases, such approaches lack a principled meaning representation and inference framework. We describe an inference formalism that operates directly on language-based structures, particularly syntactic parse trees. New trees are generated by applying inference rules, which provide a unified representation for varying types of inferences. We use manual and automatic methods to generate these rules, which cover generic linguistic structures as well as specific lexical-based inferences. We also present a novel packed data-structure and a corresponding inference algorithm that allows efficient implementation of this formalism. We proved the correctness of the new algorithm and established its efficiency analytically and empirically. The utility of our approach was illustrated on two tasks: unsupervised relation extraction from a large corpus, and the Recognizing Textual Entailment (RTE) benchmarks.", "title": "" }, { "docid": "a5214112059506a67f031d98a4e6f04f", "text": "Accurate segmentation of cervical cells in Pap smear images is an important task for automatic identification of pre-cancerous changes in the uterine cervix. One of the major segmentation challenges is the overlapping of cytoplasm, which was less addressed by previous studies. In this paper, we propose a learning-based method to tackle the overlapping issue with robust shape priors by segmenting individual cell in Pap smear images. Specifically, we first define the problem as a discrete labeling task for multiple cells with a suitable cost function. We then use the coarse labeling result to initialize our dynamic multiple-template deformation model for further boundary refinement on each cell. Multiple-scale deep convolutional networks are adopted to learn the diverse cell appearance features. Also, we incorporate high level shape information to guide segmentation where the cells boundary is noisy or lost due to touching and overlapping cells. We evaluate the proposed algorithm on two different datasets, and our comparative experiments demonstrate the promising performance of the proposed method in terms of segmentation accuracy.", "title": "" }, { "docid": "83f44152fe9103a8027b602de7360270", "text": "The BATS project focuses on helping students with visual impairments access and explore spatial information using standard computer hardware and open source software. Our work is largely based on prior techniques used in presenting maps to the blind such as text-to-speech synthesis, auditory icons, and tactile feedback. We add spatial sound to position auditory icons and speech callouts in three dimensions, and use consumer-grade haptic feedback devices to provide additional map information through tactile vibrations and textures. Two prototypes have been developed for use in educational settings and have undergone minimal assessment. A system for public release and plans for more rigorous evaluation are in development.", "title": "" }, { "docid": "4bc7687ba89699a537329f37dda4e74d", "text": "At the same time as cities are growing, their share of older residents is increasing. 
To engage and assist cities to become more “age-friendly,” the World Health Organization (WHO) prepared the Global Age-Friendly Cities Guide and a companion “Checklist of Essential Features of Age-Friendly Cities”. In collaboration with partners in 35 cities from developed and developing countries, WHO determined the features of age-friendly cities in eight domains of urban life: outdoor spaces and buildings; transportation; housing; social participation; respect and social inclusion; civic participation and employment; communication and information; and community support and health services. In 33 cities, partners conducted 158 focus groups with persons aged 60 years and older from lower- and middle-income areas of a locally defined geographic area (n = 1,485). Additional focus groups were held in most sites with caregivers of older persons (n = 250 caregivers) and with service providers from the public, voluntary, and commercial sectors (n = 515). No systematic differences in focus group themes were noted between cities in developed and developing countries, although the positive, age-friendly features were more numerous in cities in developed countries. Physical accessibility, service proximity, security, affordability, and inclusiveness were important characteristics everywhere. Based on the recurring issues, a set of core features of an age-friendly city was identified. The Global Age-Friendly Cities Guide and companion “Checklist of Essential Features of Age-Friendly Cities” released by WHO serve as reference for other communities to assess their age readiness and plan change.", "title": "" } ]
scidocsrr
89614a6ddc0d9dedd24685c5b6a1164b
Short-term load forecasting in smart grid: A combined CNN and K-means clustering approach
[ { "docid": "f9b56de3658ef90b611c78bdb787d85b", "text": "Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting , weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a-priori. SVMs have also been proven to outperform other non-linear techniques including neural-network based non-linear prediction techniques such as multi-layer perceptrons.The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources.", "title": "" }, { "docid": "0254d49cb759e163a032b6557f969bd3", "text": "The smart electricity grid enables a two-way flow of power and data between suppliers and consumers in order to facilitate the power flow optimization in terms of economic efficiency, reliability and sustainability. This infrastructure permits the consumers and the micro-energy producers to take a more active role in the electricity market and the dynamic energy management (DEM). The most important challenge in a smart grid (SG) is how to take advantage of the users’ participation in order to reduce the cost of power. However, effective DEM depends critically on load and renewable production forecasting. This calls for intelligent methods and solutions for the real-time exploitation of the large volumes of data generated by a vast amount of smart meters. Hence, robust data analytics, high performance computing, efficient data network management, and cloud computing techniques are critical towards the optimized operation of SGs. This research aims to highlight the big data issues and challenges faced by the DEM employed in SG networks. It also provides a brief description of the most commonly used data processing methods in the literature, and proposes a promising direction for future research in the field.", "title": "" }, { "docid": "26032527ca18ef5a8cdeff7988c6389c", "text": "This paper aims to develop a load forecasting method for short-term load forecasting, based on an adaptive two-stage hybrid network with self-organized map (SOM) and support vector machine (SVM). In the first stage, a SOM network is applied to cluster the input data set into several subsets in an unsupervised manner. Then, groups of 24 SVMs for the next day's load profile are used to fit the training data of each subset in the second stage in a supervised way. 
The proposed structure is robust with different data types and can deal well with the nonstationarity of load series. In particular, our method has the ability to adapt to different models automatically for the regular days and anomalous days at the same time. With the trained network, we can straightforwardly predict the next-day hourly electricity load. To confirm the effectiveness, the proposed model has been trained and tested on the data of the historical energy load from New York Independent System Operator.", "title": "" } ]
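Editor's aside: the positive passage above, like the query itself, describes a two-stage forecaster: first group historical load profiles with an unsupervised clusterer, then fit a separate regressor for each group. The sketch below illustrates that cluster-then-regress pattern with K-means standing in for the SOM and scikit-learn support vector regression for the second stage; the synthetic load data, the number of clusters and the SVR hyper-parameters are assumptions rather than values taken from any cited paper.

```python
# Illustrative sketch only: a cluster-then-regress short-term load forecaster.
# The synthetic data, cluster count and SVR settings below are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
days = np.arange(400)
daily_shape = 100 + 20 * np.sin(2 * np.pi * np.arange(24) / 24)    # base 24-hour profile
weekday_bump = 10 * (days[:, None] % 7 < 5)                        # weekdays load more
loads = daily_shape + weekday_bump + rng.normal(0, 3, (400, 24))   # (n_days, 24) hourly loads

X, y = loads[:-1], loads[1:]        # today's profile -> tomorrow's profile
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# Stage 1: unsupervised grouping of daily profiles (K-means in place of the SOM).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)

# Stage 2: one multi-output SVR per cluster, i.e. 24 hourly regressors per day type.
models = {}
for c in range(km.n_clusters):
    mask = km.labels_ == c
    models[c] = MultiOutputRegressor(SVR(C=10.0, epsilon=0.5)).fit(X_tr[mask], y_tr[mask])

# Forecast by routing each test day to the model of its own cluster.
pred = np.vstack([models[c].predict(x[None, :])
                  for x, c in zip(X_te, km.predict(X_te))])
mape = np.mean(np.abs(pred - y_te) / y_te) * 100
print(f"Held-out MAPE: {mape:.2f}%")
```

Routing each new day to the model of its own cluster lets the regressors specialize on day types (for example weekday versus weekend shapes), mirroring the SOM-plus-24-SVMs design described in the passage.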
[ { "docid": "1232e633a941b7aa8cccb28287b56e5b", "text": "This paper presents a complete system for constructing panoramic image mosaics from sequences of images. Our mosaic representation associates a transformation matrix with each input image, rather than explicitly projecting all of the images onto a common surface (e.g., a cylinder). In particular, to construct a full view panorama, we introduce a rotational mosaic representation that associates a rotation matrix (and optionally a focal length) with each input image. A patch-based alignment algorithm is developed to quickly align two images given motion models. Techniques for estimating and refining camera focal lengths are also presented. In order to reduce accumulated registration errors, we apply global alignment (block adjustment) to the whole sequence of images, which results in an optimally registered image mosaic. To compensate for small amounts of motion parallax introduced by translations of the camera and other unmodeled distortions, we use a local alignment (deghosting) technique which warps each image based on the results of pairwise local image registrations. By combining both global and local alignment, we significantly improve the quality of our image mosaics, thereby enabling the creation of full view panoramic mosaics with hand-held cameras. We also present an inverse texture mapping algorithm for efficiently extracting environment maps from our panoramic image mosaics. By mapping the mosaic onto an arbitrary texture-mapped polyhedron surrounding the origin, we can explore the virtual environment using standard 3D graphics viewers and hardware without requiring special-purpose players.", "title": "" }, { "docid": "8ee0764d45e512bfc6b0273f7e90d2c1", "text": "This work introduces a new dataset and framework for the exploration of topological data analysis (TDA) techniques applied to time-series data. We examine the end-toend TDA processing pipeline for persistent homology applied to time-delay embeddings of time series – embeddings that capture the underlying system dynamics from which time series data is acquired. In particular, we consider stability with respect to time series length, the approximation accuracy of sparse filtration methods, and the discriminating ability of persistence diagrams as a feature for learning. We explore these properties across a wide range of time-series datasets spanning multiple domains for single source multi-segment signals as well as multi-source single segment signals. Our analysis and dataset captures the entire TDA processing pipeline and includes time-delay embeddings, persistence diagrams, topological distance measures, as well as kernels for similarity learning and classification tasks for a broad set of time-series data sources. We outline the TDA framework and rationale behind the dataset and provide insights into the role of TDA for time-series analysis as well as opportunities for new work.", "title": "" }, { "docid": "75f8f0d89bdb5067910a92553275b0d7", "text": "It is well known that recognition performance degrades signi cantly when moving from a speakerdependent to a speaker-independent system. Traditional hidden Markov model (HMM) systems have successfully applied speaker-adaptation approaches to reduce this degradation. In this paper we present and evaluate some techniques for speaker-adaptation of a hybrid HMM-arti cial neural network (ANN) continuous speech recognition system. 
These techniques are applied to a well trained, speaker-independent, hybrid HMM-ANN system and the recognizer parameters are adapted to a new speaker through o -line procedures. The techniques are evaluated on the DARPA RM corpus using varying amounts of adaptation material and different ANN architectures. The results show that speaker-adaptation within the hybrid framework can substantially improve system performance.", "title": "" }, { "docid": "cfeb97a848766269c2088d8191206cc8", "text": "We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.", "title": "" }, { "docid": "c67ffe3dfa6f0fe0449f13f1feb20300", "text": "The associations between giving a history of physical, emotional, and sexual abuse in children and a range of mental health, interpersonal, and sexual problems in adult life were examined in a community sample of women. Abuse was defined to establish groups giving histories of unequivocal victimization. A history of any form of abuse was associated with increased rates of psychopathology, sexual difficulties, decreased self-esteem, and interpersonal problems. The similarities between the three forms of abuse in terms of their association with negative adult outcomes was more apparent than any differences, though there was a trend for sexual abuse to be particularly associated to sexual problems, emotional abuse to low self-esteem, and physical abuse to marital breakdown. Abuse of all types was more frequent in those from disturbed and disrupted family backgrounds. The background factors associated with reports of abuse were themselves often associated to the same range of negative adult outcomes as for abuse. Logistic regressions indicated that some, though not all, of the apparent associations between abuse and adult problems was accounted for by this matrix of childhood disadvantage from which abuse so often emerged.", "title": "" }, { "docid": "a5e01cfeb798d091dd3f2af1a738885b", "text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. 
For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.", "title": "" }, { "docid": "a37aae87354ff25bf7937adc7a9f8e62", "text": "Vectorizing hand-drawn sketches is an important but challenging task. Many businesses rely on fashion, mechanical or structural designs which, sooner or later, need to be converted in vectorial form. For most, this is still a task done manually. This paper proposes a complete framework that automatically transforms noisy and complex hand-drawn sketches with different stroke types in a precise, reliable and highly-simplified vectorized model. The proposed framework includes a novel line extraction algorithm based on a multi-resolution application of Pearson’s cross correlation and a new unbiased thinning algorithm that can get rid of scribbles and variable-width strokes to obtain clean 1-pixel lines. Other contributions include variants of pruning, merging and edge linking procedures to post-process the obtained paths. Finally, a modification of the original Schneider’s vectorization algorithm is designed to obtain fewer control points in the resulting Bézier splines. All the steps presented in this framework have been extensively tested and compared with state-of-the-art algorithms, showing (both qualitatively and quantitatively) their outperformance. Moreover they exhibit fast real-time performance, making them suitable for integration in any computer graphics toolset.", "title": "" }, { "docid": "b17e909f1301880e93797ed75d26ce57", "text": "We propose a simple, yet effective, Word Sense Disambiguation method that uses a combination of a lexical knowledge-base and embeddings. Similar to the classic Lesk algorithm, it exploits the idea that overlap between the context of a word and the definition of its senses provides information on its meaning. Instead of counting the number of words that overlap, we use embeddings to compute the similarity between the gloss of a sense and the context. Evaluation on both Dutch and English datasets shows that our method outperforms other Lesk methods and improves upon a state-of-theart knowledge-based system. Additional experiments confirm the effect of the use of glosses and indicate that our approach works well in different domains.", "title": "" }, { "docid": "c1d75b9a71f373a6e44526adf3694f37", "text": "Segmentation means segregating area of interest from the image. The aim of image segmentation is to cluster the pixels into salient image regions i.e. regions corresponding to individual surfaces, objects, or natural parts of objects. Automatic Brain tumour segmentation is a sensitive step in medical field. 
A significant medical informatics task is to perform the indexing of the patient databases according to image location, size and other characteristics of brain tumours based on magnetic resonance (MR) imagery. This requires segmenting tumours from different MR imaging modalities. Automated brain tumour segmentation from MR modalities is a challenging, computationally intensive task. Image segmentation plays an important role in image processing. MRI is generally more useful for brain tumour detection because it provides more detailed information about its type, position and size. For this reason, MRI imaging is the choice of study for the diagnostic purpose and, thereafter, for surgery and monitoring treatment outcomes. This paper presents a review of the various methods used in brain MRI image segmentation. The review covers imaging modalities, magnetic resonance imaging and methods for segmentation approaches. The paper concludes with a discussion on the upcoming trend of advanced researches in brain image segmentation. Keywords-Region growing, Level set method, Split and merge algorithm, MRI images", "title": "" }, { "docid": "51c0d682dd0d9c24e23696ba09dc4f49", "text": "Graph embedding methods represent nodes in a continuous vector space, preserving information from the graph (e.g. by sampling random walks). There are many hyper-parameters to these methods (such as random walk length) which have to be manually tuned for every graph. In this paper, we replace random walk hyperparameters with trainable parameters that we automatically learn via backpropagation. In particular, we learn a novel attention model on the power series of the transition matrix, which guides the random walk to optimize an upstream objective. Unlike previous approaches to attention models, the method that we propose utilizes attention parameters exclusively on the data (e.g. on the random walk), and not used by the model for inference. We experiment on link prediction tasks, as we aim to produce embeddings that best-preserve the graph structure, generalizing to unseen information. We improve state-of-the-art on a comprehensive suite of real world datasets including social, collaboration, and biological networks. Adding attention to random walks can reduce the error by 20% to 45% on datasets we attempted. Further, our learned attention parameters are different for every graph, and our automatically-found values agree with the optimal choice of hyper-parameter if we manually tune existing methods.", "title": "" }, { "docid": "e45e49fb299659e2e71f5c4eb825aff6", "text": "We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledgebase. Knowledge is transferred by learning reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks, are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using two techniques: (1) a deep skill array and (2) skill distillation, our novel variation of policy distillation (Rusu et al. 2015) for learning skills. Skill distillation enables the HDRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network.
The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q Network (Mnih et al. 2015) in sub-domains of Minecraft.", "title": "" }, { "docid": "19a02cb59a50f247663acc77b768d7ec", "text": "Machine learning is a useful technology for decision support systems and assumes greater importance in research and practice. Whilst much of the work focuses on technical implementations and the adaption of machine learning algorithms to application domains, the factors of machine learning design affecting the usefulness of decision support are still understudied. To enhance the understanding of machine learning and its use in decision support systems, we report the results of our content analysis of design-oriented research published between 1994 and 2013 in major Information Systems outlets. The findings suggest that the usefulness of machine learning for supporting decision-makers is dependent on the task, the phase of decision-making, and the applied technologies. We also report about the advantages and limitations of prior research, the applied evaluation methods and implications for future decision support research. Our findings suggest that future decision support research should shed more light on organizational and people-related evaluation criteria.", "title": "" }, { "docid": "a90dd405d9bd2ed912cacee098c0f9db", "text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.", "title": "" }, { "docid": "9b53d96025c26254b38a4325c9d2da15", "text": "The parameter spaces of hierarchical systems such as multilayer perceptrons include singularities due to the symmetry and degeneration of hidden units. A parameter space forms a geometrical manifold, called the neuromanifold in the case of neural networks. Such a model is identified with a statistical model, and a Riemannian metric is given by the Fisher information matrix. However, the matrix degenerates at singularities. Such a singular structure is ubiquitous not only in multilayer perceptrons but also in the gaussian mixture probability densities, ARMA time-series model, and many other cases. The standard statistical paradigm of the Cramér-Rao theorem does not hold, and the singularity gives rise to strange behaviors in parameter estimation, hypothesis testing, Bayesian inference, model selection, and in particular, the dynamics of learning from examples. Prevailing theories so far have not paid much attention to the problem caused by singularity, relying only on ordinary statistical theories developed for regular (nonsingular) models. Only recently have researchers remarked on the effects of singularity, and theories are now being developed. This article gives an overview of the phenomena caused by the singularities of statistical manifolds related to multilayer perceptrons and gaussian mixtures. We demonstrate our recent results on these problems. Simple toy models are also used to show explicit solutions.
We explain that the maximum likelihood estimator is no longer subject to the gaussian distribution even asymptotically, because the Fisher information matrix degenerates, that the model selection criteria such as AIC, BIC, and MDL fail to hold in these models, that a smooth Bayesian prior becomes singular in such models, and that the trajectories of dynamics of learning are strongly affected by the singularity, causing plateaus or slow manifolds in the parameter space. The natural gradient method is shown to perform well because it takes the singular geometrical structure into account. The generalization error and the training error are studied in some examples.", "title": "" }, { "docid": "06c0ee8d139afd11aab1cc0883a57a68", "text": "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.", "title": "" }, { "docid": "af89b3636290235e0b241c6cced2a336", "text": "Assume we were to come up with a family of distributions parameterized by θ in order to approximate the posterior, qθ(ω). Our goal is to set θ such that qθ(ω) is as similar to the true posterior p(ω|D) as possible. For clarity, qθ(ω) is a distribution over stochastic parameters ω that is determined by a set of learnable parameters θ and some source of randomness. The approximation is therefore limited by our choice of parametric function qθ(ω) as well as the randomness.1 Given ω and an input x, an output distribution p(y|x,ω) = p(y|fω(x)) = fω(x,y) is induced by observation noise (the conditionality of which is omitted for brevity).", "title": "" }, { "docid": "5ffb3e630e5f020365e471e94d678cbb", "text": "This paper presents one perspective on recent developments related to software engineering in the industrial automation sector that spans from manufacturing factory automation to process control systems and energy automation systems. The survey's methodology is based on the classic SWEBOK reference document that comprehensively defines the taxonomy of software engineering domain. This is mixed with classic automation artefacts, such as the set of the most influential international standards and dominating industrial practices. The survey focuses mainly on research publications which are believed to be representative of advanced industrial practices as well.", "title": "" }, { "docid": "56fb6fe1f6999b5d7a9dab19e8b877ef", "text": "Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large scale 3D hand pose dataset based on synthetic hand models for training the involved networks. 
Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation on single color images.", "title": "" }, { "docid": "e6e91ce66120af510e24a10dee6d64b7", "text": "AI plays an increasingly prominent role in society since decisions that were once made by humans are now delegated to automated systems. These systems are currently in charge of deciding bank loans, criminals’ incarceration, and the hiring of new employees, and it’s not difficult to envision that they will in the future underpin most of the decisions in society. Despite the high complexity entailed by this task, there is still not much understanding of basic properties of such systems. For instance, we currently cannot detect (neither explain nor correct) whether an AI system is operating fairly (i.e., is abiding by the decision-constraints agreed by society) or it is reinforcing biases and perpetuating a preceding prejudicial practice. Issues of discrimination have been discussed extensively in legal circles, but there exists still not much understanding of the formal conditions that a system must adhere to be deemed fair. In this paper, we use the language of structural causality (Pearl, 2000) to fill in this gap. We start by introducing three new fine-grained measures of transmission of change from stimulus to effect, which we called counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. We then derive the causal explanation formula, which allows the AI designer to quantitatively evaluate fairness and explain the total observed disparity of decisions through different discriminatory mechanisms. We apply these results to various discrimination analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. We conclude studying the trade-off between different types of fairness criteria (outcome and procedural), and provide a quantitative approach to policy implementation and the design of fair decision-making systems.", "title": "" } ]
scidocsrr
3cbee5298bcfe2cb9e5d520cbe0f6fc3
Using Machine Learning Algorithms for Breast Cancer Risk Prediction and Diagnosis
[ { "docid": "6c81b1fe36a591b3b86a5e912a8792c1", "text": "Mobile phones, sensors, patients, hospitals, researchers, providers and organizations are nowadays, generating huge amounts of healthcare data. The real challenge in healthcare systems is how to find, collect, analyze and manage information to make people's lives healthier and easier, by contributing not only to understand new diseases and therapies but also to predict outcomes at earlier stages and make real-time decisions. In this paper, we explain the potential benefits of big data to healthcare and explore how it improves treatment and empowers patients, providers and researchers. We also describe the ability of reality mining in collecting large amounts of data to understand people's habits, detect and predict outcomes, and illustrate the benefits of big data analytics through five effective new pathways that could be adopted to promote patients' health, enhance medicine, reduce cost and improve healthcare value and quality. We cover some big data solutions in healthcare and we shed light on implementations, such as Electronic Healthcare Record (HER) and Electronic Healthcare Predictive Analytics (e-HPA) in US hospitals. Furthermore, we complete the picture by highlighting some challenges that big data analytics faces in healthcare.", "title": "" } ]
[ { "docid": "a68084f8dba1017a2cfa9b01ee571771", "text": "OAuth 2.0 is a delegated authorization framework enabling secure authorization for applications running on various kinds of platforms. In healthcare services, OAuth allows the patient (resource owner) seeking real time clinical care to authorize automatic monthly payments from his bank account (resource server) without the patient being required to supply his credentials to the clinic (client app). OAuth 2.0 achieves this with the help of tokens issued by an authorization server which enables validated access to a protected resource. To ensure security, access tokens have an expiry time and are short-lived. So the clinical app may use a refresh token to obtain a new access token to cash monthly payments for rendering real time health care services. Refresh tokens need secure storage to ensure they are not leaked, since any malicious party can use them to obtain new access and refresh tokens. Since OAuth 2.0 has dropped signatures and relies completely on SSL/TLS, it is vulnerable to phishing attack when accessing interoperable APIs. In this paper, we develop an approach that combines JSON web token (JWT) with OAuth 2.0 to request an OAuth access token from authorization server when a client wishes to utilize a previous authentication and authorization. Experimental evaluation confirms that the proposed scheme is practically efficient, removes secure storage overhead by removing the need to have or store refresh token, uses signature and prevents different security attacks which is highly desired in health care services using an IOT cloud platform.", "title": "" }, { "docid": "de276ac8417b92ed155f5a9dcb5e680d", "text": "With the development of parallel computing, distributed computing, grid computing, a new computing model appeared. The concept of computing comes from grid, public computing and SaaS. It is a new method that shares basic framework. The basic principles of cloud computing is to make the computing be assigned in a great number of distributed computers, rather then local computer or remoter server. The running of the enterprise’s data center is just like Internet. This makes the enterprise use the resource in the application that is needed, and access computer and storage system according to the requirement. This article introduces the background and principle of cloud computing, the character, style and actuality. This article also introduces the application field the merit of cloud computing, such as, it do not need user’s high level equipment, so it reduces the user’s cost. It provides secure and dependable data storage center, so user needn’t do the awful things such storing data and killing virus, this kind of task can be done by professionals. It can realize data share through different equipments. It analyses some questions and hidden troubles, and puts forward some solutions, and discusses the future of cloud computing. Cloud computing is a computing style that provide power referenced with IT as a service. Users can enjoy the service even he knows nothing about the technology of cloud computing and the professional knowledge in this field and the power to control it.", "title": "" }, { "docid": "47fd07d8f2f540ee064e1c674c550637", "text": "Virtual reality and 360-degree video streaming are growing rapidly, yet, streaming high-quality 360-degree video is still challenging due to high bandwidth requirements. 
Existing solutions reduce bandwidth consumption by streaming high-quality video only for the user's viewport. However, adding the spatial domain (viewport) to the video adaptation space prevents the existing solutions from buffering future video chunks for a duration longer than the interval that user's viewport is predictable. This makes playback more prone to video freezes due to rebuffering, which severely degrades the user's Quality of Experience especially under challenging network conditions. We propose a new method that alleviates the restrictions on buffer duration by utilizing scalable video coding. Our method significantly reduces the occurrence of rebuffering on links with varying bandwidth without compromising playback quality or bandwidth efficiency compared to the existing solutions. We demonstrate the efficiency of our proposed method using experimental results with real world cellular network bandwidth traces.", "title": "" }, { "docid": "5c03be451f3610f39c94043d30314617", "text": "Syphilis is a sexually transmitted disease (STD) produced by Treponema pallidum, which mainly affects humans and is able to invade practically any organ in the body. Its infection facilitates the transmission of other STDs. Since the end of the last decade, successive outbreaks of syphilis have been reported in most western European countries. Like other STDs, syphilis is a notifiable disease in the European Union. In Spain, epidemiological information is obtained nationwide via the country's system for recording notifiable diseases (Spanish acronym EDO) and the national microbiological information system (Spanish acronym SIM), which compiles information from a network of 46 sentinel laboratories in twelve Spanish regions. The STDs that are epidemiologically controlled are gonococcal infection, syphilis, and congenital syphilis. The incidence of each of these diseases is recorded weekly. The information compiled indicates an increase in the cases of syphilis and gonococcal infection in Spain in recent years. According to the EDO, in 1999, the number of cases of syphilis per 100,000 inhabitants was recorded to be 1.69, which has risen to 4.38 in 2007. In this article, we review the reappearance and the evolution of this infectious disease in eight European countries, and alert dentists to the importance of a) diagnosing sexually-transmitted diseases and b) notifying the centres that control them.", "title": "" }, { "docid": "d338c807948016bf978aa7a03841f292", "text": "Emotions accompany everyone in the daily life, playing a key role in non-verbal communication, and they are essential to the understanding of human behavior. Emotion recognition could be done from the text, speech, facial expression or gesture. In this paper, we concentrate on recognition of “inner” emotions from electroencephalogram (EEG) signals as humans could control their facial expressions or vocal intonation. The need and importance of the automatic emotion recognition from EEG signals has grown with increasing role of brain computer interface applications and development of new forms of human-centric and human-driven interaction with digital media. We propose fractal dimension based algorithm of quantification of basic emotions and describe its implementation as a feedback in 3D virtual environments. 
The user emotions are recognized and visualized in real time on his/her avatar adding one more so-called “emotion dimension” to human computer interfaces.", "title": "" }, { "docid": "07c39fa141334c0b18ecb274a50bed44", "text": "Virtual reality (VR) using head-mounted displays (HMDs) is becoming popular. Smartphone-based HMDs (SbHMDs) are so low cost that users can easily experience VR. Unfortunately, their input modality is quite limited. We propose a real-time eye tracking technique that uses the built-in front facing camera to capture the user's eye. It realizes stand-alone pointing functionality without any additional device.", "title": "" }, { "docid": "e097d29240e7b3a83ad437b5fb7014f1", "text": "We contribute an approach for interactive policy learning through expert demonstration that allows an agent to actively request and effectively represent demonstration examples. In order to address the inherent uncertainty of human demonstration, we represent the policy as a set of Gaussian mixture models (GMMs), where each model, with multiple Gaussian components, corresponds to a single action. Incrementally received demonstration examples are used as training data for the GMM set. We then introduce our confident execution approach, which focuses learning on relevant parts of the domain by enabling the agent to identify the need for and request demonstrations for specific parts of the state space. The agent selects between demonstration and autonomous execution based on statistical analysis of the uncertainty of the learned Gaussian mixture set. As it achieves proficiency at its task and gains confidence in its actions, the agent operates with increasing autonomy, eliminating the need for unnecessary demonstrations of already acquired behavior, and reducing both the training time and the demonstration workload of the expert. We validate our approach with experiments in simulated and real robot domains.", "title": "" }, { "docid": "6379e89db7d9063569a342ef2056307a", "text": "Grounded Theory is a research method that generates theory from data and is useful for understanding how people resolve problems that are of concern to them. Although the method looks deceptively simple in concept, implementing Grounded Theory research can often be confusing in practice. Furthermore, despite many papers in the social science disciplines and nursing describing the use of Grounded Theory, there are very few examples and relevant guides for the software engineering researcher. This paper describes our experience using classical (i.e., Glaserian) Grounded Theory in a software engineering context and attempts to interpret the canons of classical Grounded Theory in a manner that is relevant to software engineers. We provide model to help the software engineering researchers interpret the often fuzzy definitions found in Grounded Theory texts and share our experience and lessons learned during our research. We summarize these lessons learned in a set of fifteen guidelines.", "title": "" }, { "docid": "20f1a40e7f352085c04709e27c1a2aa2", "text": "Automatic speech recognition (ASR) outputs often contain various disfluencies. It is necessary to remove these disfluencies before processing downstream tasks. In this paper, an efficient disfluency detection approach based on right-to-left transitionbased parsing is proposed, which can efficiently identify disfluencies and keep ASR outputs grammatical. 
Our method exploits a global view to capture long-range dependencies for disfluency detection by integrating a rich set of syntactic and disfluency features with linear complexity. The experimental results show that our method outperforms state-of-the-art work and achieves a 85.1% f-score on the commonly used English Switchboard test set. We also apply our method to in-house annotated Chinese data and achieve a significantly higher f-score compared to the baseline of CRF-based approach.", "title": "" }, { "docid": "0a3ec7d30f50ffac3e906f2b71ea5bac", "text": "Working memory is an essential component of human cognition. Persistent activity related to working memory has been reported in many brain areas, including the inferior temporal and prefrontal cortex [1-8]. The medial temporal lobe (MTL) contains \"concept cells\" that respond invariantly to specific individuals or places whether presented as images, text, or speech [9, 10]. It is unknown, however, whether the MTL also participates in working memory processes. We thus sought to determine whether human MTL neurons respond to images held in working memory. We recorded from patients with chronically intractable epilepsy as they performed a task that required them to remember three or four sequentially presented pictures across a brief delay. 48% of visually selective neurons continued to carry image-specific information after image offset, but most ceased to encode previously presented images after a subsequent presentation of a different image. However, 8% of visually selective neurons encoded previously presented images during a final maintenance period, despite presentation of further images in the intervening interval. Population activity of stimulus-selective neurons predicted behavioral outcome in terms of correct and incorrect responses. These findings indicate that the MTL is part of a brain-wide network for working memory.", "title": "" }, { "docid": "875508043025aeb1e99214bdad269c22", "text": "Article: Emotions and art are intimately related (Tan, 2000). From ancient to modern times, theories of aesthetics have emphasized the role of art in evoking, shaping, and modifying human feelings. The experimental study of preferences, evaluations, and feelings related to art has a long history in psychology. Aesthetics is one of the oldest areas of psychological research, dating to Fechner's (1876) landmark work. Psychology has had a steady interest in aesthetic problems since then, but art has never received as much attention as one would expect (see Berlyne, 1971a; Tan, 2000; Valentine, 1962). The study of art and the study of emotions, as areas of scientific inquiry, both languished during much of the last century. It is not surprising that the behavioral emphasis on observable action over inner experience would lead to a neglect of research on aesthetics. In an interesting coincidence, both art and emotion resurfaced in psychology at about the same time. As emotion psychologists began developing theories of basic emotions (Ekman & Friesen, 1971; Izard, 1971; Tomkins, 1962), experimental psychologists began tackling hedonic qualities of art (Berlyne, 1971a, 1972, 1974). 
Since then, the psychology of emotion and the psychology of art have had little contact (see Silvia, in press-b; Tan, 2000).", "title": "" }, { "docid": "e57bc6f2d8299292d80e880969504174", "text": "In the zero-emission electric power generation system, a multiple-input DC-DC converter is useful to obtain the regulated output voltage from several input power sources such as a solar array, wind generator, fuel cell, and so forth. A new multiple-input DC-DC converter is proposed and analyzed. As a result, the static and dynamic characteristics are clarified theoretically, and the results are confirmed by experiment.", "title": "" }, { "docid": "bdc3aca95784fa167b1118fedac9d3c5", "text": "This cross-sectional study compared somatic, endurance performance determinants and heart rate variability (HRV) profiles of professional soccer players divided into different age groups: GI (17-19.9 years; n = 23), GII (20-24.9 years; n = 45), GIII (25-29.9 years; n = 30), and GIV (30-39 years; n = 26). Players underwent somatic and HRV assessment and maximal exercise testing. HRV was analyzed by spectral analysis of HRV, and high (HF) and low (LF) frequency power was transformed by a natural logarithm (Ln). Players in GIV (83 ± 7 kg) were heavier (p < 0.05) compared to both GI (73 ± 6 kg), and GII (78 ± 6 kg). Significantly lower maximal oxygen uptake (VO2max, ml•kg-1•min-1) was observed for GIV (56.6 ± 3.8) compared to GI (59.6 ± 3.9), GII (59.4 ± 4.2) and GIV (59.7 ± 4.1). All agegroups, except for GII, demonstrated comparable relative maximal power output (Pmax). For supine HRV, significantly lower Ln HF (ms2) was identified in both GIII (7.1 ± 0.8) and GIV (6.9 ± 1.0) compared to GI (7.9 ± 0.6) and GII (7.7 ± 0.9). In conclusion, soccer players aged >25 years showed negligible differences in Pmax unlike the age group differences demonstrated in VO2max. A shift towards relative sympathetic dominance, particularly due to reduced vagal activity, was apparent after approximately 8 years of competing at the professional level.", "title": "" }, { "docid": "40cea15a4fbe7f939a490ea6b6c9a76a", "text": "An application provider leases resources (i.e., virtual machine instances) of variable configurations from a IaaS provider over some lease duration (typically one hour). The application provider (i.e., consumer) would like to minimize their cost while meeting all service level obligations (SLOs). The mechanism of adding and removing resources at runtime is referred to as autoscaling. The process of autoscaling is automated through the use of a management component referred to as an autoscaler. This paper introduces a novel autoscaling approach in which both cloud and application dynamics are modeled in the context of a stochastic, model predictive control problem. The approach exploits trade-off between satisfying performance related objectives for the consumer's application while minimizing their cost. Simulation results are presented demonstrating the efficacy of this new approach.", "title": "" }, { "docid": "36a5de24f61c4113ba96adcfb5fe192d", "text": "This paper presents a control method for a quadrotor equipped with a multi-DOF manipulator to transport a common object to the desired position. By considering a quadrotor and robot arm as a combined system, called quadrorot-manipulator system, the kinematic and dynamitic models are built together in a general version using Euler-Lagrange(EL) equations. The impact on the quadrotormanipulator system caused by the object is also considered. 
The transportation task can be decomposed into five steps. The planning trajectory can be obtained when the initial and the final position of the object is given. With the combined dynamic model, we propose a control scheme consisting of position controller, attitude controller and manipulator controller to track the planning trajectory. To validate our approach, the simulation results of a transportation task with a quadrotor with a 2-DOF manipulator was presented.", "title": "" }, { "docid": "397a184ed4ba52d9017ddd2f51ea7fc2", "text": "In recent years, a specific machine learning method called deep learning has gained huge attraction, as it has obtained astonishing results in broad applications such as pattern recognition, speech recognition, computer vision, and natural language processing. Recent research has also been shown that deep learning techniques can be combined with reinforcement learning methods to learn useful representations for the problems with high dimensional raw data input. This paper reviews the recent advances in deep reinforcement learning with a focus on the most used deep architectures such as autoencoders, convolutional neural networks and recurrent neural networks which have successfully been come together with the reinforcement learning framework.", "title": "" }, { "docid": "13bd8d8f7ae0295e2b2bba26f02ea378", "text": "Teamwork plays an important role in many areas of today's society, such as business activities. Thus, the question of how to form an effective team is of increasing interest. In this paper we use the team-oriented multiplayer online game Dota 2 to study cooperation within teams and the success of teams. Making use of game log data, we choose a statistical approach to identify factors that increase the chance of a team to win. The factors that we analyze are related to the roles that players can take within the game, the experiences of the players and friendship ties within a team.
Our results show that such data can be used to infer social behavior patterns.", "title": "" }, { "docid": "d6cb714b47b056e1aea8ef0682f4ae51", "text": "Artificial neural networks are being used with increasing frequency for high dimensional problems of regression or classification. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques.", "title": "" }, { "docid": "85a076e58f4d117a37dfe6b3d68f5933", "text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.", "title": "" } ]
scidocsrr
d6b38bfacf6234254620ae433837e2a4
A prediction based approach for stock returns using autoregressive neural networks
[ { "docid": "186b18a1ce29ce50bf9309137c09a9b5", "text": "This work presents a new prediction-based portfolio optimization model tha t can capture short-term investment opportunities. We used neural network predictors to predict stock s’ returns and derived a risk measure, based on the prediction errors, that have the same statistical foundation o f he mean-variance model. The efficient diversification effects holds thanks to the selection of predictor s with low and complementary pairwise error profiles. We employed a large set of experiments with real data from the Brazilian sto ck market to examine our portfolio optimization model, which included the evaluation of the Normality o f the prediction errors. Our results showed that it is possible to obtain Normal prediction errors with non-Normal time series of stock returns, and that the prediction-based portfolio optimization model to ok advantage of short term opportunities, outperforming the mean-variance model and beating the m arket index.", "title": "" }, { "docid": "ee11c968b4280f6da0b1c0f4544bc578", "text": "A report is presented of some results of an ongoing project using neural-network modeling and learning techniques to search for and decode nonlinear regularities in asset price movements. The author focuses on the case of IBM common stock daily returns. Having to deal with the salient features of economic data highlights the role to be played by statistical inference and requires modifications to standard learning techniques which may prove useful in other contexts.<<ETX>>", "title": "" } ]
[ { "docid": "2a76205b80c90ff9a4ca3ccb0434bb03", "text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.", "title": "" }, { "docid": "2663800ed92ce1cd44ab1b7760c43e0f", "text": "Synchronous reluctance motor (SynRM) have rather poor power factor. This paper investigates possible methods to improve the power factor (pf) without impacting its torque density. The study found two possible aspects to improve the power factor with either refining rotor dimensions and followed by current control techniques. Although it is a non-linear mathematical field, it is analysed by analytical equations and FEM simulation is utilized to validate the design progression. Finally, an analytical method is proposed to enhance pf without compromising machine torque density. There are many models examined in this study to verify the design process. The best design with high performance is used for final current control optimization simulation.", "title": "" }, { "docid": "a1c126807088d954b73c2bd5d696c481", "text": "or, why space syntax works when it looks as though it shouldn't 0 Abstract A common objection to the space syntax analysis of cities is that even in its own terms the technique of using a non-uniform line representation of space and analysing it by measures that are essentially topological, ignores too much geometric and metric detail to be credible. In this paper it is argued that far from ignoring geometric and metric properties the 'line-graph' internalises them into the structure of the graph and in doing so allows the graph analysis to pick up the nonlocal, or extrinsic, properties of spaces that are critical to the movement dynamics through which a city evolves its essential structures. 
Nonlocal properties are those which are defined by the relation of elements to all others in the system, rather than intrinsic to the element itself. The method also leads to a powerful analysis of urban structures because cities are essentially nonlocal systems. 1 Preliminaries 1.1 The critique of line graphs Space syntax is a family of techniques for representing and analysing spatial layouts of all kinds. A spatial representation is first chosen according to how space is defined for the purposes of the research-rooms, convex spaces, lines, convex isovists, and so on-and then one or more measures of 'configuration' are selected to analyse the patterns formed by that representation. Prior to the researcher setting up the research question, no one representation or measure is privileged over others. Part of the researcher's task is to discover which representation and which measure captures the logic of a particular system, as shown by observation of its functioning. In the study of cities, one representation and one type of measure has proved more consistently fruitful than others: the representation of urban space as a matrix of the 'longest and fewest' lines, the 'axial map', and the analysis of this by translating the line matrix into a graph, and the use of the various versions of the 'topological' (i.e. nonmetric) measure of patterns of line connectivity called 'integration'. (Hillier et al 1982, Steadman 1983, Hillier & Hanson 1984) This 'line graph' approach has proved quite unexpectedly successful. It has generated not only models for predicting urban et al 1998), but also strong theoretical results on urban structure, and even a general theory of the dynamics linking the urban grid, movement, land uses and building densities in 'organic' cities …", "title": "" }, { "docid": "f13000c4870a85e491f74feb20f9b2d4", "text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.", "title": "" }, { "docid": "1c6a589d2c74bd1feb3e98c21a1375a9", "text": "UNLABELLED\nMinimally invasive approach for groin hernia treatment is still controversial, but in the last decade, it tends to become the standard procedure for one day surgery. We present herein the technique of laparoscopic Trans Abdominal Pre Peritoneal approach (TAPP). The surgical technique is presented step-by step;the different procedures key points (e.g. 
anatomic landmarks recognition, diagnosis of \"occult\" hernias, preperitoneal and hernia sac dissection, mesh placement and peritoneal closure) are described and discussed in detail, several tips and tricks being noted and highlighted.\n\n\nCONCLUSIONS\nTAPP is a feasible method for treating groin hernia associated with low rate of postoperative morbidity and recurrence. The anatomic landmarks are easily recognizable. The laparoscopic exploration allows for the treatment of incarcerated strangulated hernias and the intraoperative diagnosis of occult hernias.", "title": "" }, { "docid": "1fc58f0ed6c2fbd05f190b3d3da2d319", "text": "Seismology is the scientific study of earthquakes & the propagation of seismic waves through the earth. The large improvement has been seen in seismology from around hundreds of years. The seismic data plays important role in the seismic data acquisition. This data can be used for analysis which helps to locate the correct location of the earthquake. The more efficient systems are used now a day to locate the earthquakes as large improvements has been done in this field. In older days analog systems are used for data acquisition. The analog systems record seismic signals in a permanent way. These systems are large in size, costly and are incompatible with computer. Due to these drawbacks these analog systems are replaced by digital systems so that data can be recorded digitally. Using different sensor to indentify the natural disaster, MEMS, VIBRATION sensor is used to monitor the earth condition , the different values of the different sensor is given to the ADC to convert the values in digital format, if any changes occurs or in abnormality condition BUZZER will ring.", "title": "" }, { "docid": "537d6fdfb26e552fb3254addfbb6ac49", "text": "We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters, hence raising the question from our title.", "title": "" }, { "docid": "968555bbada2d930b97d8bb982580535", "text": "With the recent developments in three-dimensional (3-D) scanner technologies and photogrammetric techniques, it is now possible to acquire and create accurate models of historical and archaeological sites. In this way, unrestricted access to these sites, which is highly desirable from both a research and a cultural perspective, is provided. Through the process of virtualisation, numerous virtual collections are created. 
These collections must be archives, indexed and visualised over a very long period of time in order to be able to monitor and restore them as required. However, the intrinsic complexities and tremendous importance of ensuring long-term preservation and access to these collections have been widely overlooked. This neglect may lead to the creation of a so-called “Digital Rosetta Stone”, where models become obsolete and the data cannot be interpreted or virtualised. This paper presents a framework for the long-term preservation of 3-D culture heritage data as well as the application thereof in monitoring, restoration and virtual access. The interplay between raw data and model is considered as well as the importance of calibration. Suitable archiving and indexing techniques are described and the issue of visualisation over a very long period of time is addressed. An approach to experimentation though detachment, migration and emulation is presented.", "title": "" }, { "docid": "8b752b8607b6296b35d34bb59830e8e4", "text": "The innate immune system is the first line of defense against infection and responses are initiated by pattern recognition receptors (PRRs) that detect pathogen-associated molecular patterns (PAMPs). PRRs also detect endogenous danger-associated molecular patterns (DAMPs) that are released by damaged or dying cells. The major PRRs include the Toll-like receptor (TLR) family members, the nucleotide binding and oligomerization domain, leucine-rich repeat containing (NLR) family, the PYHIN (ALR) family, the RIG-1-like receptors (RLRs), C-type lectin receptors (CLRs) and the oligoadenylate synthase (OAS)-like receptors and the related protein cyclic GMP-AMP synthase (cGAS). The different PRRs activate specific signaling pathways to collectively elicit responses including the induction of cytokine expression, processing of pro-inflammatory cytokines and cell-death responses. These responses control a pathogenic infection, initiate tissue repair and stimulate the adaptive immune system. A central theme of many innate immune signaling pathways is the clustering of activated PRRs followed by sequential recruitment and oligomerization of adaptors and downstream effector enzymes, to form higher-order arrangements that amplify the response and provide a scaffold for proximity-induced activation of the effector enzymes. Underlying the formation of these complexes are co-operative assembly mechanisms, whereby association of preceding components increases the affinity for downstream components. This ensures a rapid immune response to a low-level stimulus. Structural and biochemical studies have given key insights into the assembly of these complexes. Here we review the current understanding of assembly of immune signaling complexes, including inflammasomes initiated by NLR and PYHIN receptors, the myddosomes initiated by TLRs, and the MAVS CARD filament initiated by RIG-1. We highlight the co-operative assembly mechanisms during assembly of each of these complexes.", "title": "" }, { "docid": "20c31eaaa80b66cf100ffd24b3b01ede", "text": "Time series data has become a ubiquitous and important data source in many application domains. Most companies and organizations strongly rely on this data for critical tasks like decision-making, planning, predictions, and analytics in general. 
While all these tasks generally focus on actual data representing organization and business processes, it is also desirable to apply them to alternative scenarios in order to prepare for developments that diverge from expectations or assess the robustness of current strategies. When it comes to the construction of such what-if scenarios, existing tools either focus on scalar data or they address highly specific scenarios. In this work, we propose a generally applicable and easy-to-use method for the generation of what-if scenarios on time series data. Our approach extracts descriptive features of a data set and allows the construction of an alternate version by means of filtering and modification of these features.", "title": "" }, { "docid": "f2a1e5d8e99977c53de9f2a82576db69", "text": "During the last years, several masking schemes for AES have been proposed to secure hardware implementations against DPA attacks. In order to investigate the effectiveness of these countermeasures in practice, we have designed and manufactured an ASIC. The chip features an unmasked and two masked AES-128 encryption engines that can be attacked independently. In addition to conventional DPA attacks on the output of registers, we have also mounted attacks on the output of logic gates. Based on simulations and physical measurements we show that the unmasked and masked implementations leak side-channel information due to glitches at the output of logic gates. It turns out that masking the AES S-Boxes does not prevent DPA attacks, if glitches occur in the circuit.", "title": "" }, { "docid": "e640d487052b9399bea6c0d06ce189b0", "text": "We propose a novel deep supervised neural network for the task of action recognition in videos, which implicitly takes advantage of visual tracking and shares the robustness of both deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN). In our method, a multi-branch model is proposed to suppress noise from background jitters. Specifically, we firstly extract multi-level deep features from deep CNNs and feed them into 3dconvolutional network. After that we feed those feature cubes into our novel joint LSTM module to predict labels and to generate attention regularization. We evaluate our model on two challenging datasets: UCF101 and HMDB51. The results show that our model achieves the state-of-art by only using convolutional features.", "title": "" }, { "docid": "17c0ef52e8f4dade526bf56f158967ef", "text": "Consider a distributed computing setup consisting of a master node and n worker nodes, each equipped with p cores, and a function f (x) = g(f1(x), f2(x),…, fk(x)), where each fi can be computed independently of the rest. Assuming that the worker computational times have exponential tails, what is the minimum possible time for computing f? Can we use coding theory principles to speed up this distributed computation? In [1], it is shown that distributed computing of linear functions can be expedited by applying linear erasure codes. However, it is not clear if linear codes can speed up distributed computation of ‘nonlinear’ functions as well. To resolve this problem, we propose the use of sparse linear codes, exploiting the modern multicore processing architecture. 
We show that 1) our coding solution achieves the order optimal runtime, and 2) it is at least Θ(√log n) times faster than any uncoded schemes where the number of workers is n.", "title": "" }, { "docid": "39d522e6db7971ccf8a9d3bd3a915a10", "text": "The Internet of Things (IoT) is next generation technology that is intended to improve and optimize daily life by operating intelligent sensors and smart objects together. At application layer, communication of resourceconstrained devices is expected to use constrained application protocol (CoAP).Communication security is an important aspect of IoT environment. However closed source security solutions do not help in formulating security in IoT so that devices can communicate securely with each other. To protect the transmission of confidential information secure CoAP uses datagram transport layer security (DTLS) as the security protocol for communication and authentication of communicating devices. DTLS was initially designed for powerful devices that are connected through reliable and high bandwidth link. This paper proposes a collaboration of DTLS and CoAP for IoT. Additionally proposed DTLS header compression scheme that helps to reduce packet size, energy consumption and avoids fragmentation by complying the 6LoWPAN standards. Also proposed DTLS header compression scheme does not compromises the point-to-point security provided by DTLS. Since DTLS has chosen as security protocol underneath the CoAP, enhancement to the existing DTLS also provided by introducing the use of raw public key in DTLS.", "title": "" }, { "docid": "477ca9c55310235c691f6420d63005a7", "text": "We present Sigma*, a novel technique for learning symbolic models of software behavior. Sigma* addresses the challenge of synthesizing models of software by using symbolic conjectures and abstraction. By combining dynamic symbolic execution to discover symbolic input-output steps of the programs and counterexample guided abstraction refinement to over-approximate program behavior, Sigma* transforms arbitrary source representation of programs into faithful input-output models. We define a class of stream filters---programs that process streams of data items---for which Sigma* converges to a complete model if abstraction refinement eventually builds up a sufficiently strong abstraction. In other words, Sigma* is complete relative to abstraction. To represent inferred symbolic models, we use a variant of symbolic transducers that can be effectively composed and equivalence checked. Thus, Sigma* enables fully automatic analysis of behavioral properties such as commutativity, reversibility and idempotence, which is useful for web sanitizer verification and stream programs compiler optimizations, as we show experimentally. We also show how models inferred by Sigma* can boost performance of stream programs by parallelized code generation.", "title": "" }, { "docid": "ea5697d417fe154be77d941c19d8a86e", "text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. 
Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.", "title": "" }, { "docid": "5104891f21240e2ac0f0480e4b5da28e", "text": "The paper describes the modeling and control of a robot with flexible joints (the DLR medical robot), which has strong mechanical couplings between pairs of joints realized with a differential gear-box. Because of this coupling, controllers developed before for the DLR light-weight robots cannot be directly applied. The previous control approach is extended in order to allow a multi-input-multi-output (MIMO) design for the strongly coupled joints. Asymptotic stability is shown for the MIMO controller. Finally, experimental results with the DLR medical robot are presented.", "title": "" }, { "docid": "10e66f0c9cc3532029de388c2018f8ed", "text": "1. ABSTRACT WC have developed a series of lifelike computer characters called Virtual Petz. These are autonomous agents with real-time layered 3D animation and sound. Using a mouse the user moves a hand-shaped cursor to directly touch, pet, and pick up the characters, as well as use toys and objects. Virtual Petz grow up over time on the user’s PC computer desktop, and strive to be the user’s friends and companions. They have evolving social relationships with the user and each other. To implement these agents we have invented hybrid techniques that draw from cartoons, improvisational drama, AI and video games. 1.1", "title": "" }, { "docid": "a636f977eb29b870cefe040f3089de44", "text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.", "title": "" }, { "docid": "f050f004d455648767c8663768cfcc42", "text": "In this paper, a metamaterial-based scanning leaky-wave antenna is developed, and then applied to the Doppler radar system for the noncontact vital-sign detection. With the benefit of the antenna beam scanning, the radar system can not only sense human subject, but also detect the vital signs within the specific scanning region. The Doppler radar module is designed at 5.8 GHz, and implemented by commercial integrated circuits and coplanar waveguide (CPW) passive components. Two scanning antennas are then connected with the transmitting and receiving ports of the module, respectively. In addition, since the main beam of the developed scanning antenna is controlled by the frequency, one can easily tune the frequency of the radar source from 5.1 to 6.5 GHz to perform the 59° spatial scanning. 
The measured respiration and heartbeat rates are in good agreement with the results acquired from the medical finger pulse sensor.", "title": "" } ]
scidocsrr
859de6f75fd982136341046da15cecea
Optimal Cluster Preserving Embedding of Nonmetric Proximity Data
[ { "docid": "5d247482bb06e837bf04c04582f4bfa2", "text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.", "title": "" } ]
[ { "docid": "1635b235c59cc57682735202c0bb2e0d", "text": "The introduction of structural imaging of the brain by computed tomography (CT) scans and magnetic resonance imaging (MRI) has further refined classification of head injury for prognostic, diagnosis, and treatment purposes. We describe a new classification scheme to be used both as a research and a clinical tool in association with other predictors of neurologic status.", "title": "" }, { "docid": "7f04ef4eb5dc53cbfa6c8b5379a95e0e", "text": "Memory scanning is an essential component in detecting and deactivating malware while the malware is still active in memory. The content here is confined to user-mode memory scanning for malware on 32-bit and 64-bit Windows NT based systems that are memory resident and/or persistent over reboots. Malware targeting 32-bit Windows are being created and deployed at an alarming rate today. While there are not many malware targeting 64-bit Windows yet, many of the existing Win32 malware for 32-bit Windows will work fine on 64-bit Windows due to the underlying WoW64 subsystem. Here, we will present an approach to implement user-mode memory scanning for Windows. This essentially means scanning the virtual address space of all processes in memory. In case of an infection, while the malware is still active in memory, it can significantly limit detection and disinfection. The real challenge hence actually lies in fully disinfecting the machine and restoring back to its clean state. Today’s malware apply complex anti-disinfection techniques making the task of restoring the machine to a clean state extremely difficult. Here, we will discuss some of these techniques with examples from real-world malware scenarios. Practical approaches for user-mode disinfection will be presented. By leveraging the abundance of redundant information available via various Win32 and Native API from user-mode, certain techniques to detect hidden processes will also be presented. Certain challenges in porting the memory scanner to 64-bit Windows and Vista will be discussed. The advantages and disadvantages of implementing a memory scanner in user-mode (rather than kernel-mode) will also be discussed.", "title": "" }, { "docid": "8d3e7a6032d6e017537b68b47c4dae38", "text": "With the increasing complexity of modern radar system and the increasing number of devices used in the radar system, it would be highly desirable to model the complete radar system including hardware and software by a single tool. This paper presents a novel software-based simulation method for modern radar system which here is automotive radar application. Various functions of automotive radar, like target speed, distance and azimuth and elevation angle detection, are simulated in test case and the simulation results are compared with the measurement results.", "title": "" }, { "docid": "054fcf065915118bbfa3f12759cb6912", "text": "Automatization of the diagnosis of any kind of disease is of great importance and its gaining speed as more and more deep learning solutions are applied to different problems. One of such computer-aided systems could be a decision support tool able to accurately differentiate between different types of breast cancer histological images – normal tissue or carcinoma (benign, in situ or invasive). In this paper authors present a deep learning solution, based on convolutional capsule network, for classification of four types of images of breast tissue biopsy when hematoxylin and eosin staining is applied. 
The crossvalidation accuracy, averaged over four classes, was achieved to be 87 % with equally high sensitivity.", "title": "" }, { "docid": "ed4178ec9be6f4f8e87a50f0bf1b9a41", "text": "PURPOSE\nTo report a case of central retinal artery occlusion (CRAO) in a patient with biopsy-verified Wegener's granulomatosis (WG) with positive C-ANCA.\n\n\nMETHODS\nA 55-year-old woman presented with a 3-day history of acute painless bilateral loss of vision; she also complained of fever and weight loss. Examination showed a CRAO in the left eye and angiographically documented choroidal ischemia in both eyes.\n\n\nRESULTS\nThe possibility of systemic vasculitis was not kept in mind until further studies were carried out; methylprednisolone pulse therapy was then started. Renal biopsy disclosed focal and segmental necrotizing vasculitis of the medium-sized arteries, supporting the diagnosis of WG, and cyclophosphamide pulse therapy was administered with gradual improvement, but there was no visual recovery.\n\n\nCONCLUSION\nCRAO as presenting manifestation of WG, in the context of retinal vasculitis, is very uncommon, but we should be aware of WG in the etiology of CRAO. This report shows the difficulty of diagnosing Wegener's granulomatosis; it requires a high index of suspicion, and we should obtain an accurate medical history and repeat serological and histopathological examinations. It emphasizes that inflammation of arteries leads to irreversible retinal infarction, and visual loss may occur.", "title": "" }, { "docid": "6f9186944cdeab30da7a530a942a5b3d", "text": "In this work, we perform a comparative analysis of the impact of substrate technologies on the performance of 28 GHz antennas for 5G applications. For this purpose, we model, simulate, analyze and compare 2×2 patch antenna arrays on five substrate technologies typically used for manufacturing integrated antennas. The impact of these substrates on the impedance bandwidth, efficiency and gain of the antennas is quantified. Finally, the antennas are fabricated and measured. Excellent correlation is obtained between measurement and simulation results.", "title": "" }, { "docid": "09a6f724e5b2150a39f89ee1132a33e9", "text": "This paper concerns a deep learning approach to relevance ranking in information retrieval (IR). Existing deep IR models such as DSSM and CDSSM directly apply neural networks to generate ranking scores, without explicit understandings of the relevance. According to the human judgement process, a relevance label is generated by the following three steps: 1) relevant locations are detected; 2) local relevances are determined; 3) local relevances are aggregated to output the relevance label. In this paper we propose a new deep learning architecture, namely DeepRank, to simulate the above human judgment process. Firstly, a detection strategy is designed to extract the relevant contexts. Then, a measure network is applied to determine the local relevances by utilizing a convolutional neural network (CNN) or two-dimensional gated recurrent units (2D-GRU). Finally, an aggregation network with sequential integration and term gating mechanism is used to produce a global relevance score. DeepRank well captures important IR characteristics, including exact/semantic matching signals, proximity heuristics, query term importance, and diverse relevance requirement. 
Experiments on both benchmark LETOR dataset and a large scale clickthrough data show that DeepRank can significantly outperform learning to ranking methods, and existing deep learning methods.", "title": "" }, { "docid": "b531674f21e88ac82071583531e639c6", "text": "OBJECTIVE\nTo evaluate use of, satisfaction with, and social adjustment with adaptive devices compared with prostheses in young people with upper limb reduction deficiencies.\n\n\nMETHODS\nCross-sectional study of 218 young people with upper limb reduction deficiencies (age range 2-20 years) and their parents. A questionnaire was used to evaluate participants' characteristics, difficulties encountered, and preferred solutions for activities, use satisfaction, and social adjustment with adaptive devices vs prostheses. The Quebec User Evaluation of Satisfaction with assistive Technology and a subscale of Trinity Amputation and Prosthesis Experience Scales were used.\n\n\nRESULTS\nOf 218 participants, 58% were boys, 87% had transversal upper limb reduction deficiencies, 76% with past/present use of adaptive devices and 37% with past/present use of prostheses. Young people (> 50%) had difficulties in performing activities. Of 360 adaptive devices, 43% were used for self-care (using cutlery), 28% for mobility (riding a bicycle) and 5% for leisure activities. Prostheses were used for self-care (4%), mobility (9%), communication (3%), recreation and leisure (6%) and work/employment (4%). The preferred solution for difficult activities was using unaffected and affected arms/hands and other body parts (> 60%), adaptive devices (< 48%) and prostheses (< 9%). Satisfaction and social adjustment with adaptive devices were greater than with prostheses (p < 0.05).\n\n\nCONCLUSION\nYoung people with upper limb reduction deficiencies are satisfied and socially well-adjusted with adaptive devices. Adaptive devices are good alternatives to prostheses.", "title": "" }, { "docid": "5bee78694f3428d3882e27000921f501", "text": "We introduce a new approach to perform background subtraction in moving camera scenarios. Unlike previous treatments of the problem, we do not restrict the camera motion or the scene geometry. The proposed approach relies on Bayesian selection of the transformation that best describes the geometric relation between consecutive frames. Based on the selected transformation, we propagate a set of learned background and foreground appearance models using a single or a series of homography transforms. The propagated models are subjected to MAP-MRF optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization process provides the final background/foreground labels. Extensive experimental evaluation with challenging videos shows that the proposed method outperforms the baseline and state-of-the-art methods in most cases.", "title": "" }, { "docid": "5c7080162c4df9fdd7d5f385c4005bd3", "text": "The placebo effect is very well known, being replicated in many scientific studies. At the same time, its exact mechanisms still remain unknown. Quite a few hypothetical explanations for the placebo effect have been suggested, including faith, belief, hope, classical conditioning, conscious/subconscious expectation, endorphins, and the meaning response. 
This article argues that all these explanations may boil down to autosuggestion, in the sense of \"communication with the subconscious.\" An important implication of this is that the placebo effect can in principle be used effectively without the placebo itself, through a direct use of autosuggestion. The benefits of such a strategy are clear: fewer side effects from medications, huge cost savings, no deception of patients, relief of burden on the physician's time, and healing in domains where medication or other therapies are problematic.", "title": "" }, { "docid": "4d4a09c7cef74e9be52844a61ca57bef", "text": "The key of zero-shot learning (ZSL) is how to find the information transfer model for bridging the gap between images and semantic information (texts or attributes). Existing ZSL methods usually construct the compatibility function between images and class labels with the consideration of the relevance on the semantic classes (the manifold structure of semantic classes). However, the relationship of image classes (the manifold structure of image classes) is also very important for the compatibility model construction. It is difficult to capture the relationship among image classes due to unseen classes, so that the manifold structure of image classes often is ignored in ZSL. To complement each other between the manifold structure of image classes and that of semantic classes information, we propose structure propagation (SP) for improving the performance of ZSL for classification. SP can jointly consider the manifold structure of image classes and that of semantic classes for approximating to the intrinsic structure of object classes. Moreover, the SP can describe the constrain condition between the compatibility function and these manifold structures for balancing the influence of the structure propagation iteration. The SP solution provides not only unseen class labels but also the relationship of two manifold structures that encode the positive transfer in structure propagation. Experimental results demonstrate that SP can attain the promising results on the AwA, CUB, Dogs and SUN databases.", "title": "" }, { "docid": "2a2db7ff8bb353143ca2bb9ad8ec2d7d", "text": "A revision of the genus Leptoplana Ehrenberg, 1831 in the Mediterranean basin is undertaken. This revision deals with the distribution and validity of the species of Leptoplana known for the area. The Mediterranean sub-species polyclad, Leptoplana tremellaris forma mediterranea Bock, 1913 is elevated to the specific level. Leptoplana mediterranea comb. nov. is redescribed from the Lake of Tunis, Tunisia. This flatworm is distinguished from Leptoplana tremellaris mainly by having a prostatic vesicle provided with a long diverticulum attached ventrally to the seminal vesicle, a genital pit closer to the male pore than to the female one and a twelve-eyed hatching juvenile instead of the four-eyed juvenile of L. tremellaris. The direct development in L. mediterranea is described at 15 °C.", "title": "" }, { "docid": "259972cd20a1f763b07bef4619dc7f70", "text": "This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based learning of language. The advantage of the language origination itself is taken as a learning platform due to the complexity in Chinese language as compared to other types of languages. 
Users especially children enjoy more by utilize this learning system because they are able to memories the Chinese Character easily and understand more of the origin of the Chinese character under pleasurable learning environment, compares to traditional approach which children need to rote learning Chinese Character under un-pleasurable environment. Skeletonization is used as the representation of Chinese character and object with an animated pictograph evolution to facilitate the learning of the language. Shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. User is required to either write a word or draw a simple 2D object in the input panel and the matched word and object will be displayed as well as the pictograph evolution to instill learning. The target of computer-based learning system is for pre-school children between 4 to 6 years old to learn Chinese characters in a flexible and entertaining manner besides utilizing visual and mind mapping strategy as learning methodology.", "title": "" }, { "docid": "eb99d3fb9f6775453ac25861cb05f04c", "text": "Hate content in social media is ever increasing. While Facebook, Twitter, Google have attempted to take several steps to tackle this hate content, they most often risk the violation of freedom of speech. Counterspeech, on the other hand, provides an effective way of tackling the online hate without the loss of freedom of speech. Thus, an alternative strategy for these platforms could be to promote counterspeech as a defense against hate content. However, in order to have a successful promotion of such counterspeech, one has to have a deep understanding of its dynamics in the online world. Lack of carefully curated data largely inhibits such understanding. In this paper, we create and release the first ever dataset for counterspeech using comments from YouTube. The data contains 9438 manually annotated comments where the labels indicate whether a comment is a counterspeech or not. This data allows us to perform a rigorous measurement study characterizing the linguistic structure of counterspeech for the first time. This analysis results in various interesting insights such as: the counterspeech comments receive double the likes received by the non-counterspeech comments, for certain communities majority of the non-counterspeech comments tend to be hate speech, the different types of counterspeech are not all equally effective and the language choice of users posting counterspeech is largely different from those posting noncounterspeech as revealed by a detailed psycholinguistic analysis. Finally, we build a set of machine learning models that are able to automatically detect counterspeech in YouTube videos with an F1-score of 0.73.", "title": "" }, { "docid": "59e0bdccc5d983350ef7a53cfd953c07", "text": "1,2 Computer Studies Department , Faculty of Science, The Polytechnic, Ibadan Oyo State, Nigeria. ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Patient identification is the foundation of effective healthcare: the correct care needs to be delivered to the correct patient. However, relying on manual identification processes such as demographic searches and social security numbers often results in patient misidentification hence, the needs for electronic medical records (EMR). . 
It was discovered that many medical systems switching to electronic health records in order to explore the advantages of electronic medical records (EMR) creates new problems by producing more targets for medical data to be hacked. Hackers are believed to have gained access to up to 80 million records that contained Social Security numbers, birthdays, postal addresses, and e-mail addresses.", "title": "" }, { "docid": "1ab0974dc10f84c6e1fc80ac3f251ac3", "text": "The optimisation of a printed circuit board assembly line is mainly influenced by the constraints of the surface mount device placement (SMD) machine and the characteristics of the production environment. Hence, this paper surveys the various machine technologies and characteristics and proposes five categories of machines based on their specifications and operational methods. These are dual-delivery, multi-station, turret-type, multi-head and sequential pick-and-place SMD placement machines. We attempt to associate the assembly machine technologies with heuristic methods; and address the scheduling issues of each category of machine. This grouping aims to guide future researchers in this field to have a better understanding of the various SMD placement machine specifications and operational methods, so that they can subsequently use them to apply, or even design heuristics, which are more appropriate to the machine characteristics and the operational methods. We also discuss our experiences in solving the pick-and-place sequencing problem of the theoretical and real machine problem, and highlight some of the important practical issues that should be considered in solving real SMD placement machine problems.", "title": "" }, { "docid": "57c090eaab37e615b564ef8451412962", "text": "Variational inference is an umbrella term for algorithms which cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (opvi), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs for variational inference. We can characterize different properties of variational objectives, such as objectives that admit data subsampling—allowing inference to scale to massive data—as well as objectives that admit variational programs—a rich class of posterior approximations that does not require a tractable density. We illustrate the benefits of opvi on a mixture model and a generative model of images.", "title": "" }, { "docid": "a27a05cb00d350f9021b5c4f609d772c", "text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. 
A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.", "title": "" }, { "docid": "73c3b82e723b5e76a6e9c3a556888c48", "text": "In this paper we present the first large-scale scene attribute database. First, we perform crowd-sourced human studies to find a taxonomy of 102 discriminative attributes. Next, we build the “SUN attribute database” on top of the diverse SUN categorical database. Our attribute database spans more than 700 categories and 14,000 images and has potential for use in high-level scene understanding and fine-grained scene recognition. We use our dataset to train attribute classifiers and evaluate how well these relatively simple classifiers can recognize a variety of attributes related to materials, surface properties, lighting, functions and affordances, and spatial envelope properties.", "title": "" }, { "docid": "5dc4dfc2d443c31332c70a56c2d70c7d", "text": "Sentiment analysis or opinion mining is an important type of text analysis that aims to support decision making by extracting and analyzing opinion oriented text, identifying positive and negative opinions, and measuring how positively or negatively an entity (i.e., people, organization, event, location, product, topic, etc.) is regarded. As more and more users express their political and religious views on Twitter, tweets become valuable sources of people's opinions. Tweets data can be efficiently used to infer people's opinions for marketing or social studies. This paper proposes a Tweets Sentiment Analysis Model (TSAM) that can spot the societal interest and general people's opinions in regard to a social event. In this paper, Australian federal election 2010 event was taken as an example for sentiment analysis experiments. We are primarily interested in the sentiment of the specific political candidates, i.e., two primary minister candidates - Julia Gillard and Tony Abbot. Our experimental results demonstrate the effectiveness of the system.", "title": "" } ]
scidocsrr
8b2e64216e390328a4e7f0e8db02b960
Towards affective camera control in games
[ { "docid": "eded90c762031357c1f5366fefca007c", "text": "The authors examined whether the nature of the opponent (computer, friend, or stranger) influences spatial presence, emotional responses, and threat and challenge appraisals when playing video games. In a within-subjects design, participants played two different video games against a computer, a friend, and a stranger. In addition to self-report ratings, cardiac interbeat intervals (IBIs) and facial electromyography (EMG) were measured to index physiological arousal and emotional valence. When compared to playing against a computer, playing against another human elicited higher spatial presence, engagement, anticipated threat, post-game challenge appraisals, and physiological arousal, as well as more positively valenced emotional responses. In addition, playing against a friend elicited greater spatial presence, engagement, and self-reported and physiological arousal, as well as more positively valenced facial EMG responses, compared to playing against a stranger. The nature of the opponent influences spatial presence when playing video games, possibly through the mediating influence on arousal and attentional processes.", "title": "" } ]
[ { "docid": "984a289e33debae553dffc4f601dc203", "text": "Nowadays, the prevailing detectors of steganographic communication in digital images mainly consist of three steps, i.e., residual computation, feature extraction, and binary classification. In this paper, we present an alternative approach to steganalysis of digital images based on convolutional neural network (CNN), which is shown to be able to well replicate and optimize these key steps in a unified framework and learn hierarchical representations directly from raw images. The proposed CNN has a quite different structure from the ones used in conventional computer vision tasks. Rather than a random strategy, the weights in the first layer of the proposed CNN are initialized with the basic high-pass filter set used in the calculation of residual maps in a spatial rich model (SRM), which acts as a regularizer to suppress the image content effectively. To better capture the structure of embedding signals, which usually have extremely low SNR (stego signal to image content), a new activation function called a truncated linear unit is adopted in our CNN model. Finally, we further boost the performance of the proposed CNN-based steganalyzer by incorporating the knowledge of selection channel. Three state-of-the-art steganographic algorithms in spatial domain, e.g., WOW, S-UNIWARD, and HILL, are used to evaluate the effectiveness of our model. Compared to SRM and its selection-channel-aware variant maxSRMd2, our model achieves superior performance across all tested algorithms for a wide variety of payloads.", "title": "" }, { "docid": "78e712f5d052c08a7dcbc2ee6fd92f96", "text": "Bug report contains a vital role during software development, However bug reports belongs to different categories such as performance, usability, security etc. This paper focuses on security bug and presents a bug mining system for the identification of security and non-security bugs using the term frequency-inverse document frequency (TF-IDF) weights and naïve bayes. We performed experiments on bug report repositories of bug tracking systems such as bugzilla and debugger. In the proposed approach we apply text mining methodology and TF-IDF on the existing historic bug report database based on the bug s description to predict the nature of the bug and to train a statistical model for manually mislabeled bug reports present in the database. The tool helps in deciding the priorities of the incoming bugs depending on the category of the bugs i.e. whether it is a security bug report or a non-security bug report, using naïve bayes. Our evaluation shows that our tool using TF-IDF is giving better results than the naïve bayes method.", "title": "" }, { "docid": "809046f2f291ce610938de209d98a6f2", "text": "Pregnancy loss before 20 weeks’ gestation without outside intervention is termed spontaneous abortion and may be encountered in as many as 20% of clinically diagnosed pregnancies.1 It is said to be complete when all products of conception are expelled, the uterus is in a contracted state, and the cervix is closed. On the other hand, retention of part of products of conception inside the uterus, cervix, or vagina results in incomplete abortion. Although incomplete spontaneous miscarriages are commonly encountered in early pregnancy,2 traumatic fetal decapitation has not been mentioned in the medical literature as a known complication of spontaneous abortion. 
We report an extremely rare and unusual case of traumatic fetal decapitation due to self-delivery during spontaneous abortion in a 26-year-old woman who presented at 15 weeks’ gestation with gradually worsening vaginal bleeding and lower abdominal pain and with the fetal head still lying in the uterine cavity. During our search for similar cases, we came across just 1 other case report describing traumatic fetal decapitation after spontaneous abortion,3 although there are reports of fetal decapitation from amniotic band syndrome, vacuum-assisted deliveries, and destructive operations.4–8 A 26-year-old woman, gravida 2, para 0, presented to the emergency department with vaginal bleeding and cramping pain in her lower abdomen, both of which had gradually increased in severity over the previous 2 days. Her pulse and blood pressure were 86 beats per minute and 100/66 mm Hg, respectively, and her respiratory rate was 26 breaths per minute. She had a high-grade fever; her temperature was 103°F (39.4°C), recorded orally. There was suprapubic tenderness on palpation. About 8 or 9 days before presentation, she had severe pain in the lower abdomen, followed by vaginal bleeding. She gave a history of passing brown to black clots, one of which was particularly large, and she had to pull it out herself as if it was stuck. It resembled “an incomplete very small baby” in her own words. Although not sure, she could not make out the head of the “baby,” although she could appreciate the limbs and trunk. Thereafter, the bleeding gradually decreased over the next 2 days, but her lower abdominal pain persisted. However, after 1 day, she again started bleeding, and her pain increased in intensity. Meanwhile she also developed fever. She gave a history of recent cocaine use and alcohol drinking occasionally. No history of smoking was present. According to her last menstrual period, the gestational age was at 15 weeks, and during this pregnancy, she never had a sonographic examination. She reported taking a urine test for pregnancy at home 4 weeks before, which showed positive results. She gave a history of being pregnant 11⁄2 years before. At that time, also, she aborted spontaneously at 9 weeks’ gestation. No complications were seen at that time. She resumed her menses normally after about 2 months and was regular until 3 months back. The patient was referred for emergency sonography, which revealed that the fetal head was lying in the uterine cavity (Figure 1, A and B) along with the presence of fluid/ hemorrhage in the cervix and upper vagina (Figure 1C). No other definite fetal part could be identified. The placenta was also seen in the uterine cavity, and it was upper anterior and fundic (Figure 1D). No free fluid in abdomen was seen. Subsequently after stabilization, the patient underwent dilation and evacuation and had an uneventful postoperative course. As mentioned earlier, traumatic fetal decapitation accompanying spontaneous abortion is a very rare occurrence; we came across only 1 other case3 describing similar findings. Patients presenting to the emergency department with features suggestive of abortion, whether threatened, incomplete, or complete, should be thoroughly evaluated by both pelvic and sonographic examinations to check for any retained products of conception with frequent followups in case of threatened or incomplete abortions.", "title": "" }, { "docid": "3e845c9a82ef88c7a1f4447d57e35a3e", "text": "Link prediction is a key problem for network-structured data. 
Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.", "title": "" }, { "docid": "a0d2ea9b5653d6ca54983bb3d679326e", "text": "A dynamic reasoning system (DRS) is an adaptation of a conventional formal logical system that explicitly portrays reasoning as a temporal activity, with each extralogical input to the system and each inference rule application being viewed as occurring at a distinct timestep. Every DRS incorporates some well-defined logic together with a controller that serves to guide the reasoning process in response to user inputs. Logics are generic, whereas controllers are application specific. Every controller does, nonetheless, provide an algorithm for nonmonotonic belief revision. The general notion of a DRS comprises a framework within which one can formulate the logic and algorithms for a given application and prove that the algorithms are correct, that is, that they serve to (1) derive all salient information and (2) preserve the consistency of the belief set. This article illustrates the idea with ordinary first-order predicate calculus, suitably modified for the present purpose, and two examples. The latter example revisits some classic nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how these can be resolved in the context of a DRS, using an expanded version of first-order logic that incorporates typed predicate symbols. All concepts are rigorously defined and effectively computable, thereby providing the foundation for a future software implementation.", "title": "" }, { "docid": "19cb14825c6654101af1101089b66e16", "text": "Critical infrastructures, such as power grids and transportation systems, are increasingly using open networks for operation. The use of open networks poses many challenges for control systems. The classical design of control systems takes into account modeling uncertainties as well as physical disturbances, providing a multitude of control design methods such as robust control, adaptive control, and stochastic control. 
With the growing level of integration of control systems with new information technologies, modern control systems face uncertainties not only from the physical world but also from the cybercomponents of the system. The vulnerabilities of the software deployed in the new control system infrastructure will expose the control system to many potential risks and threats from attackers. Exploitation of these vulnerabilities can lead to severe damage as has been reported in various news outlets [1], [2]. More recently, it has been reported in [3] and [4] that a computer worm, Stuxnet, was spread to target Siemens supervisory control and data acquisition (SCADA) systems that are configured to control and monitor specific industrial processes.", "title": "" }, { "docid": "725f9c045b5618fe0feb39a5f4cb4d8c", "text": "This paper discusses the experiments carried out by us at Jadavpur University as part of the participation in ICON 2015 task: POS Tagging for Code-mixed Indian Social Media Text. The tool that we have developed for the task is based on Trigram Hidden Markov Model that utilizes information from dictionary as well as some other word level features to enhance the observation probabilities of the known tokens as well as unknown tokens. We submitted runs for Bengali-English, Hindi-English and Tamil-English Language pairs. Our system has been trained and tested on the datasets released for ICON 2015 shared task: POS Tagging For Code-mixed Indian Social Media Text. In constrained mode, our system obtains average overall accuracy (averaged over all three language pairs) of 75.60% which is very close to other participating two systems (76.79% for IIITH and 75.79% for AMRITA_CEN) ranked higher than our system. In unconstrained mode, our system obtains average overall accuracy of 70.65% which is also close to the system (72.85% for AMRITA_CEN) which obtains the highest average overall accuracy.", "title": "" }, { "docid": "47d673d7b917f3948274f1e32a847a35", "text": "Real-time lane detection and tracking is one of the most reliable approaches to prevent road accidents by alarming the driver of the excessive lane changes. This paper addresses the problem of correct lane detection and tracking of the current lane of a vehicle in real-time. We propose a solution that is computationally efficient and performs better than previous approaches. The proposed algorithm is based on detecting straight lines from the captured road image, marking a region of interest, filtering road marks and detecting the current lane by using the information gathered. This information is obtained by analyzing the geometric shape of the lane boundaries and the convergence point of the lane markers. To provide a feasible solution, the only sensing modality on which the algorithm depends on is the camera of an off-the-shelf mobile device. The proposed algorithm has a higher average accuracy of 96.87% when tested on the Caltech Lanes Dataset as opposed to the state-of-the-art technology for lane detection. The algorithm operates on three frames per second on a 2.26 GHz quad-core processor of a mobile device with an image resolution of 640×480 pixels. It is tested and verified under various visibility and road conditions.", "title": "" }, { "docid": "08c60605026ab1ba625c69cb27539daa", "text": "Few educational problems have received more attention in recent times than the failure to ensure that elementary and secondary classrooms are all staffed with qualified teachers. 
Over the past two decades, dozens of studies, commissions, and national reports have warned of a coming crisis resulting from widespread teacher shortages. This article briefly summarizes a recent study I undertook that used national data to examine the sources of school staffing problems and teacher shortages. This research shows that although these issues are among the most important facing schools, they are also among the least understood. The data also reveal that many currently popular reforms will not solve the staffing problems of schools because they do not address some of their key causes. Disciplines Educational Administration and Supervision | Teacher Education and Professional Development Comments This brief is based on the following Center for the Study of Teaching Policy (CTP) Research Report: Teacher Turnover, Teacher Shortages, and the Organization of Schools. View on the CPRE website. This policy brief is available at ScholarlyCommons: https://repository.upenn.edu/cpre_policybriefs/21", "title": "" }, { "docid": "d7c236983c54213f17a0d8db886d5f2f", "text": "Traffic light detection is an important system because it can alert driver on upcoming traffic light so that he/she can anticipate a head of time. In this paper we described our work on detecting traffic light color using machine learning approach. Using HSV color representation, our approach is to extract features based on an area of X×X pixels. Traffic light color model is then created by applying a learning algorithm on a set of examples of features representing pixels of traffic and non-traffic light colors. The learned model is then used to classify whether an area of pixels contains traffic light color or not. Evaluation of this approach reveals that it significantly improves the detection performance over the one based on value-range color segmentation technique.", "title": "" }, { "docid": "f03749ebd15b51b95e8ece5d6d58108c", "text": "The holding of an infant with ventral skin-to-skin contact typically in an upright position with the swaddled infant on the chest of the parent, is commonly referred to as kangaroo care (KC), due to its simulation of marsupial care. It is recommended that KC, as a feasible, natural, and cost-effective intervention, should be standard of care in the delivery of quality health care for all infants, regardless of geographic location or economic status. Numerous benefits of its use have been reported related to mortality, physiological (thermoregulation, cardiorespiratory stability), behavioral (sleep, breastfeeding duration, and degree of exclusivity) domains, as an effective therapy to relieve procedural pain, and improved neurodevelopment. Yet despite these recommendations and a lack of negative research findings, adoption of KC as a routine clinical practice remains variable and underutilized. Furthermore, uncertainty remains as to whether continuous KC should be recommended in all settings or if there is a critical period of initiation, dose, or duration that is optimal. This review synthesizes current knowledge about the benefits of KC for infants born preterm, highlighting differences and similarities across low and higher resource countries and in a non-pain and pain context. 
Additionally, implementation considerations and unanswered questions for future research are addressed.", "title": "" }, { "docid": "8c58b608430e922284d8b4b8cd5cc51d", "text": "At the end of the 19th century, researchers observed that biological substances have frequency- dependent electrical properties and that tissue behaves \"like a capacitor\" [1]. Consequently, in the first half of the 20th century, the permittivity of many types of cell suspensions and tissues was characterized up to frequencies of approximately 100 MHz. From the measurements, conclusions were drawn, in particular, about the electrical properties of the cell membranes, which are the main contributors to the tissue impedance at frequencies below 10 MHz [2]. In 1926, a study found a significant different permittivity for breast cancer tissue compared with healthy tissue at 20 kHz [3]. After World War II, new instrumentation enabled measurements up to 10 GHz, and a vast amount of data on the dielectric properties of different tissue types in the microwave range was published [4]-[6].", "title": "" }, { "docid": "967f1e68847111ecf96d964422bea913", "text": "Text preprocessing is an essential stage in text categorization (TC) particularly and text mining generally. Morphological tools can be used in text preprocessing to reduce multiple forms of the word to one form. There has been a debate among researchers about the benefits of using morphological tools in TC. Studies in the English language illustrated that performing stemming during the preprocessing stage degrades the performance slightly. However, they have a great impact on reducing the memory requirement and storage resources needed. The effect of the preprocessing tools on Arabic text categorization is an area of research. This work provides an evaluation study of several morphological tools for Arabic Text Categorization. The study includes using the raw text, the stemmed text, and the root text. The stemmed and root text are obtained using two different preprocessing tools. The results illustrated that using light stemmer combined with a good performing feature selection method enhances the performance of Arabic Text Categorization especially for small threshold values.", "title": "" }, { "docid": "ef4272cd4b0d4df9aa968cc9ff528c1e", "text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.", "title": "" }, { "docid": "79b55ac2983d0604613341d8d8775506", "text": "In this paper, an effective method is proposed to handle the facial expression recognition problem. 
The system detects the face and facial components including eyes, brows and mouths. Since facial expressions result from facial muscle movements or deformations, and Histogram of Oriented Gradients (HOG) is very sensitive to the object deformations, we apply the HOG to encode these facial components as features. A linear SVM is then trained to perform the facial expression classification. We evaluate our proposed method on the JAFFE dataset and an extended Cohn-Kanade dataset. The average classification rate on the two datasets reaches 94.3% and 88.7%, respectively. Experimental results demonstrate the competitive classification accuracy of our proposed method. Keywords—facial expression recognition, HOG features, facial component detection, SVM", "title": "" }, { "docid": "292b22b2ba3b79df1769fb794b1ca0da", "text": "High-throughput genotype screening is rapidly becoming a standard research tool in the post-genomic era. A major bottleneck currently exists, however, that limits the utility of this approach in the plant sciences. The rate-limiting step in current high-throughput pipelines is that tissue samples from living plants must be collected manually, one plant at a time. In this article I describe a novel method for harvesting tissue samples from living seedlings that eliminates this bottleneck. The method has been named Ice-Cap to reflect the fact that ice is used to capture the tissue samples. The planting of seeds, growth of seedlings, and harvesting of tissue are all performed in a 96-well format. I demonstrate the utility of this system by using tissue harvested by Ice-Cap to genotype a population of Arabidopsis seedlings that is segregating a previously characterized mutation. Because the harvesting of tissue is performed in a nondestructive manner, plants with the desired genotype can be transferred to soil and grown to maturity. I also show that Ice-Cap can be used to analyze genomic DNA from rice (Oryza sativa) seedlings. It is expected that this method will be applicable to high-throughput screening with many different plant species, making it a useful technology for performing marker assisted selection.", "title": "" }, { "docid": "c649d226448782ee972c620bea3e0ea3", "text": "Parents of children with developmental disabilities, particularly autism spectrum disorders (ASDs), are at risk for high levels of distress. The factors contributing to this are unclear. This study investigated how child characteristics influence maternal parenting stress and psychological distress. Participants consisted of mothers and developmental-age matched preschool-aged children with ASD (N = 51) and developmental delay without autism (DD) ( N = 22). Evidence for higher levels of parenting stress and psychological distress was found in mothers in the ASD group compared to the DD group. Children's problem behavior was associated with increased parenting stress and psychological distress in mothers in the ASD and DD groups. This relationship was stronger in the DD group. Daily living skills were not related to parenting stress or psychological distress. Results suggest clinical services aiming to support parents should include a focus on reducing problem behaviors in children with developmental disabilities.", "title": "" }, { "docid": "20e105b3b8d4469b2ddc0dbbc2a64082", "text": "For over a century, heme metabolism has been recognized to play a central role during intraerythrocytic infection by Plasmodium parasites, the causative agent of malaria. 
Parasites liberate vast quantities of potentially cytotoxic heme as a by-product of hemoglobin catabolism within the digestive vacuole, where heme is predominantly sequestered as inert crystalline hemozoin. Plasmodium spp. also utilize heme as a metabolic cofactor. Despite access to abundant host-derived heme, parasites paradoxically maintain a biosynthetic pathway. This pathway has been assumed to produce the heme incorporated into mitochondrial cytochromes that support electron transport. In this review, we assess our current understanding of the love-hate relationship between Plasmodium parasites and heme, we discuss recent studies that clarify several long-standing riddles about heme production and utilization by parasites, and we consider remaining challenges and opportunities for understanding and targeting heme metabolism within parasites.", "title": "" }, { "docid": "0b9e7adde5f9b577930cab27cd4bc7a0", "text": "Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.", "title": "" } ]
scidocsrr
8f732a61afa6e6ecf62e180852f5c111
Taylor–Fourier Analysis of Blood Pressure Oscillometric Waveforms
[ { "docid": "27c56cabe2742fbe69154e63073e193e", "text": "Developing a good model for oscillometric blood-pressure measurements is a hard task. This is mainly due to the fact that the systolic and diastolic pressures cannot be directly measured by noninvasive automatic oscillometric blood-pressure meters (NIBP) but need to be computed based on some kind of algorithm. This is in strong contrast with the classical Korotkoff method, where the diastolic and systolic blood pressures can be directly measured by a sphygmomanometer. Although an NIBP returns results similar to the Korotkoff method for patients with normal blood pressures, a big discrepancy exist between both methods for severe hyper- and hypotension. For these severe cases, a statistical model is needed to compensate or calibrate the oscillometric blood-pressure meters. Although different statistical models have been already studied, no immediate calibration method has been proposed. The reason is that the step from a model, describing the measurements, to a calibration, correcting the blood-pressure meters, is a rather large leap. In this paper, we study a “databased” Fourier series approach to model the oscillometric waveform and use the Windkessel model for the blood flow to correct the oscillometric blood-pressure meters. The method is validated on a measurement campaign consisting of healthy patients and patients suffering from either hyper- or hypotension.", "title": "" }, { "docid": "63b210cc5e1214c51b642e9a4a2a1fb0", "text": "This paper proposes a simplified method to compute the systolic and diastolic blood pressures from measured oscillometric blood-pressure waveforms. Therefore, the oscillometric waveform is analyzed in the frequency domain, which reveals that the measured blood-pressure signals are heavily disturbed by nonlinear contributions. The proposed approach will linearize the measured oscillometric waveform in order to obtain a more accurate and transparent estimation of the systolic and diastolic pressure based on a robust preprocessing technique. This new approach will be compared with the Korotkoff method and a commercially available noninvasive blood-pressure meter. This allows verification if the linearized approach contains as much information as the Korotkoff method in order to calculate a correct systolic and diastolic blood pressure.", "title": "" } ]
[ { "docid": "2b38ac7d46a1b3555fef49a4e02cac39", "text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.", "title": "" }, { "docid": "5d79ed3cd52cf572ca11e40cc9c98fb1", "text": "Misinformation such as fake news is one of the big challenges of our society. Research on automated fact-checking has proposed methods based on supervised learning, but these approaches do not consider external evidence apart from labeled training instances. Recent approaches counter this deficit by considering external sources related to a claim. However, these methods require substantial feature modeling and rich lexicons. This paper overcomes these limitations of prior work with an end-toend model for evidence-aware credibility assessment of arbitrary textual claims, without any human intervention. It presents a neural network model that judiciously aggregates signals from external evidence articles, the language of these articles and the trustworthiness of their sources. It also derives informative features for generating user-comprehensible explanations that makes the neural network predictions transparent to the end-user. Experiments with four datasets and ablation studies show the strength of our method.", "title": "" }, { "docid": "cfe31ce3a6a23d9148709de6032bd90b", "text": "I argue that Non-Photorealistic Rendering (NPR) research will play a key role in the scientific understanding of visual art and illustration. NPR can contribute to scientific understanding of two kinds of problems: how do artists create imagery, and how do observers respond to artistic imagery? I sketch out some of the open problems, how NPR can help, and what some possible theories might look like. Additionally, I discuss the thorny problem of how to evaluate NPR research and theories.", "title": "" }, { "docid": "fb655a622c2e299b8d7f8b85769575b4", "text": "With the substantial development of digital technologies in multimedia, network communication and user interfaces, we are seeing an increasing number of applications of these technologies, in particular in the entertainment domain. They include computer gaming, elearning, high-definition and interactive TVs, and virtual environments. The development of these applications typically involves the integration of existing technologies as well as the development of new technologies. This Introduction summarizes latest interactive entertainment technologies and applications, and briefly highlights some potential research directions. It also introduces the seven papers that are accepted to the special issue. 
Hopefully, this will provide the readers some insights into future research topics in interactive entertainment technologies and applications.", "title": "" }, { "docid": "4812ae9ee481b8c4b4f74b4ab01f3e1b", "text": "Recent work has shown how to train Convolutional Neural Networks (CNNs) rapidly on large image datasets [1], then transfer the knowledge gained from these models to a variety of tasks [2]. Following [3], in this work, we demonstrate similar scalability and transfer for Recurrent Neural Networks (RNNs) for Natural Language tasks. By utilizing mixed precision arithmetic and a 32k batch size distributed across 128 NVIDIA Tesla V100 GPUs, we are able to train a character-level 4096-dimension multiplicative LSTM (mLSTM) [4] for unsupervised text reconstruction over 3 epochs of the 40 GB Amazon Reviews dataset [5] in four hours. This runtime compares favorably with previous work taking one month to train the same size and configuration for one epoch over the same dataset [3]. Converging large batch RNN models can be challenging. Recent work has suggested scaling the learning rate as a function of batch size, but we find that simply scaling the learning rate as a function of batch size leads either to significantly worse convergence or immediate divergence for this problem. We provide a learning rate schedule that allows our model to converge with a 32k batch size. Since our model converges over the Amazon Reviews dataset in hours, and our compute requirement of 128 Tesla V100 GPUs, while substantial, is commercially available, this work opens up large scale unsupervised NLP training to most commercial applications and deep learning researchers 11Our code is publicly available: https://github.com/NVIDIA/sentiment-discovery, A model can be trained over most public or private text datasets overnight.", "title": "" }, { "docid": "9304c82e4b19c2f5e23ca45e7f2c9538", "text": "Previous work has shown that using the GPU as a brute force method for SELECT statements on a SQLite database table yields significant speedups. However, this requires that the entire table be selected and transformed from the B-Tree to row-column format. This paper investigates possible speedups by traversing B+ Trees in parallel on the GPU, avoiding the overhead of selecting the entire table to transform it into row-column format and leveraging the logarithmic nature of tree searches. We experiment with different input sizes, different orders of the B+ Tree, and batch multiple queries together to find optimal speedups for SELECT statements with single search parameters as well as range searches. We additionally make a comparison to a simple GPU brute force algorithm on a row-column version of the B+ Tree.", "title": "" }, { "docid": "c2e0b234898df278ee57ae5827faadeb", "text": "In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need of learning patches pairs from external data sets. We achieve this by modeling images and, more precisely, lines of images as piecewise smooth functions and propose a resolution enhancement method for this type of functions. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging from the multi-resolution analysis in wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. 
We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels.", "title": "" }, { "docid": "e26c8fde7d79298ea0dba161bf24f2da", "text": "We present a new exact subdivision algorithm CEVAL for isolating the complex roots of a square-free polynomial in any given box. It is a generalization of a previous real root isolation algorithm called EVAL. Under suitable conditions, our approach is applicable for general analytic functions. CEVAL is based on the simple Bolzano Principle and is easy to implement exactly. Preliminary experiments have shown its competitiveness.\n We further show that, for the \"benchmark problem\" of isolating all roots of a square-free polynomial with integer coefficients, the asymptotic complexity of both algorithms EVAL and CEVAL matches (up a logarithmic term) that of more sophisticated real root isolation methods which are based on Descartes' Rule of Signs, Continued Fraction or Sturm sequence. In particular, we show that the tree size of EVAL matches that of other algorithms. Our analysis is based on a novel technique called Δ-clusters from which we expect to see further applications.", "title": "" }, { "docid": "3ad25dabe3b740a91b939a344143ea9e", "text": "Recently, much attention in research and practice has been devoted to the topic of IT consumerization, referring to the adoption of private consumer IT in the workplace. However, research lacks an analysis of possible antecedents of the trend on an individual level. To close this gap, we derive a theoretical model for IT consumerization behavior based on the theory of planned behavior and perform a quantitative analysis. Our investigation shows that it is foremost determined by normative pressures, specifically the behavior of friends, co-workers and direct supervisors. In addition, behavioral beliefs and control beliefs were found to affect the intention to use non-corporate IT. With respect to the former, we found expected performance improvements and an increase in ease of use to be two of the key determinants. As for the latter, especially monetary costs and installation knowledge were correlated with IT consumerization intention.", "title": "" }, { "docid": "aa948c6380a54c8b5a24b062f854002c", "text": "This work focuses on the study of constant-time implementations; giving formal guarantees that such implementations are protected against cache-based timing attacks in virtualized platforms where their supporting operating system executes concurrently with other, potentially malicious, operating systems. We develop a model of virtualization that accounts for virtual addresses, physical and machine addresses, memory mappings, page tables, translation lookaside buffer, and cache; and provides an operational semantics for a representative set of actions, including reads and writes, allocation and deallocation, context switching, and hypercalls. 
We prove a non-interference result on the model that shows that an adversary cannot discover secret information using cache side-channels, from a constant-time victim.", "title": "" }, { "docid": "c3bfe9b5231c5f9b4499ad38b6e8eac6", "text": "As the World Wide Web has increasingly become a necessity in daily life, the acute need to safeguard user privacy and security has become manifestly apparent. After users realized that browser cookies could allow websites to track their actions without permission or notification, many have chosen to reject cookies in order to protect their privacy. However, more recently, methods of fingerprinting a web browser have become an increasingly common practice. In this paper, we classify web browser fingerprinting into four main categories: (1) Browser Specific, (2) Canvas, (3) JavaScript Engine, and (4) Cross-browser. We then summarize the privacy and security implications, discuss commercial fingerprinting techniques, and finally present some detection and prevention methods.", "title": "" }, { "docid": "f36ef9dd6b78605683f67b382b9639ac", "text": "Stable clones of neural stem cells (NSCs) have been isolated from the human fetal telencephalon. These self-renewing clones give rise to all fundamental neural lineages in vitro. Following transplantation into germinal zones of the newborn mouse brain they participate in aspects of normal development, including migration along established migratory pathways to disseminated central nervous system regions, differentiation into multiple developmentally and regionally appropriate cell types, and nondisruptive interspersion with host progenitors and their progeny. These human NSCs can be genetically engineered and are capable of expressing foreign transgenes in vivo. Supporting their gene therapy potential, secretory products from NSCs can correct a prototypical genetic metabolic defect in neurons and glia in vitro. The human NSCs can also replace specific deficient neuronal populations. Cryopreservable human NSCs may be propagated by both epigenetic and genetic means that are comparably safe and effective. By analogy to rodent NSCs, these observations may allow the development of NSC transplantation for a range of disorders.", "title": "" }, { "docid": "b4b8261b603fd153b03cca3c1d03621e", "text": "Stock market prediction is an attractive research problem to be investigated. News contents are one of the most important factors that have influence on market. Considering the news impact in analyzing the stock market behavior, leads to more precise predictions and as a result more profitable trades. So far various prototypes have been developed which consider the impact of news in stock market prediction. In this paper, the main components of such forecasting systems have been introduced. In addition, different developed prototypes have been introduced and the way whereby the main components are implemented compared. Based on studied attempts, the potential future research activities have been suggested.", "title": "" }, { "docid": "e685f07edfc9c7761c96d7926e00a64f", "text": "A middle-aged woman presented with fatigue and mild increases in hematocrit and red cell mass. Polycythemia vera was diagnosed. She underwent therapeutic phlebotomy but clinically worsened. On reevaluation, other problems were noted including episodic malaise, nausea, rash and vasomotor issues. The JAK2V617F mutation was absent; paraneoplastic erythrocytosis was investigated. 
Serum tryptase and urinary N-methylhistamine were normal, but urinary prostaglandin D2 was elevated. Skin and marrow biopsies showed no mast cell abnormalities. Extensive other evaluation was negative. Gastrointestinal tract biopsies were histologically normal but revealed increased, aberrant mast cells on immunohistochemistry; the KITD816V mutation was absent. Mast cell activation syndrome, recently identified as a clonal disorder involving assorted KIT mutations, was diagnosed. Imatinib 200 mg/d rapidly effected complete, sustained response. Diagnosis of mast cell activation syndrome is hindered by multiple factors, but existing therapies for mast cell disease are usually achieve significant benefit, highlighting the importance of early diagnosis. Multiple important aspects of clinical reasoning are illustrated by the case.", "title": "" }, { "docid": "e7fb4643c062e092a52ac84928ab46e9", "text": "Object detection and tracking are main tasks in video surveillance systems. Extracting the background is an intensive task with high computational cost. This work proposes a hardware computing engine to perform background subtraction on low-cost field programmable gate arrays (FPGAs), focused on resource-limited environments. Our approach is based on the codebook algorithm and offers very low accuracy degradation. We have analyzed resource consumption and performance trade-offs in Spartan-3 FPGAs by Xilinx. In addition, an accuracy evaluation with standard benchmark sequences has been performed, obtaining better results than previous hardware approaches. The implementation is able to segment objects in sequences with resolution $$768\\times 576$$ at 50 fps using a robust and accurate approach, and an estimated power consumption of 5.13 W.", "title": "" }, { "docid": "d9f442d281de14651ca17ec5d160b2d2", "text": "Query expansion of named entities can be employed in order to increase the retrieval effectiveness. A peculiarity of named entities compared to other vocabulary terms is that they are very dynamic in appearance, and synonym relationships between terms change with time. In this paper, we present an approach to extracting synonyms of named entities over time from the whole history of Wikipedia. In addition, we will use their temporal patterns as a feature in ranking and classifying them into two types, i.e., time-independent or time-dependent. Time-independent synonyms are invariant to time, while time-dependent synonyms are relevant to a particular time period, i.e., the synonym relationships change over time. Further, we describe how to make use of both types of synonyms to increase the retrieval effectiveness, i.e., query expansion with time-independent synonyms for an ordinary search, and query expansion with time-dependent synonyms for a search wrt. temporal criteria. Finally, through an evaluation based on TREC collections, we demonstrate how retrieval performance of queries consisting of named entities can be improved using our approach.", "title": "" }, { "docid": "45cc3369df084b22642cfc7288bc0abb", "text": "This paper proposes a novel unsupervised feature selection method by jointing self-representation and subspace learning. In this method, we adopt the idea of self-representation and use all the features to represent each feature. A Frobenius norm regularization is used for feature selection since it can overcome the over-fitting problem. 
The Locality Preserving Projection (LPP) is used as a regularization term as it can maintain the local adjacent relations between data when performing feature space transformation. Further, a low-rank constraint is also introduced to find the effective low-dimensional structures of the data, which can reduce the redundancy. Experimental results on real-world datasets verify that the proposed method can select the most discriminative features and outperform the state-of-the-art unsupervised feature selection methods in terms of classification accuracy, standard deviation, and coefficient of variation.", "title": "" }, { "docid": "101c03b85e3cc8518a158d89cc9b3b39", "text": "Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.", "title": "" }, { "docid": "fbcf0375db0e665c1028f8db77ccdc34", "text": "Design fiction is an emergent field within HCI and interaction design the understanding of which ultimately relies, so we argue, of an integrative account of poetics and design praxis. In this paper we give such an account. Initially, a precise definition of design fiction is given by drawing on the theory of possible worlds found within poetics. Further, we offer a method of practicing design fiction, which relies on the equal integration of literary practice with design practice. The use of this method is demonstrated by 4 design projects from a workshop set up in collaboration with a Danish author. All of this substantiates our notion of a poetics of practicing design fiction, and through our critical examination of related work we conclude on how our approach contribute to HCI and interaction design.", "title": "" }, { "docid": "d4406b74040e9f06b1d05cefade12c6c", "text": "Steganography is a science to hide information, it hides a message to another object, and it increases the security of data transmission and archiving it. In the process of steganography, the hidden object in which data is hidden the carrier object and the new object, is called the steganography object. The multiple carriers, such as text, audio, video, image and so can be mentioned for steganography; however, audio has been significantly considered due to the multiplicity of uses in various fields such as the internet. For steganography process, several methods have been developed; including work in the temporary and transformation, each of has its own advantages and disadvantages, and special function. In this paper we mainly review and evaluate different types of audio steganography techniques, advantages and disadvantages.", "title": "" } ]
scidocsrr
6edcaefcf4167d4824ef701952916eff
Work environment stressors, social support, anxiety, and depression among secondary school teachers.
[ { "docid": "779d75beb7ea4967f9503d6c4d087a5d", "text": "BACKGROUND\nTeaching is considered a highly stressful occupation. Burnout is a negative affective response occurring as a result of chronic work stress. While the early theories of burnout focused exclusively on work-related stressors, recent research adopts a more integrative approach where both environmental and individual factors are studied. Nevertheless, such studies are scarce with teacher samples.\n\n\nAIMS\nThe present cross-sectional study sought to investigate the association between burnout, personality characteristics and job stressors in primary school teachers from Cyprus. The study also investigates the relative contribution of these variables on the three facets of burnout - emotional exhaustion, depersonalization and reduced personal accomplishment.\n\n\nSAMPLE\nA representative sample of 447 primary school teachers participated in the study.\n\n\nMETHOD\nTeachers completed measures of burnout, personality and job stressors along with demographic and professional data. Surveys were delivered by courier to schools, and were distributed at faculty meetings.\n\n\nRESULTS\nResults showed that both personality and work-related stressors were associated with burnout dimensions. Neuroticism was a common predictor of all dimensions of burnout although in personal accomplishment had a different direction. Managing student misbehaviour and time constraints were found to systematically predict dimensions of burnout.\n\n\nCONCLUSIONS\nTeachers' individual characteristics as well as job related stressors should be taken into consideration when studying the burnout phenomenon. The fact that each dimension of the syndrome is predicted by different variables should not remain unnoticed especially when designing and implementing intervention programmes to reduce burnout in teachers.", "title": "" } ]
[ { "docid": "f313aeee26751cc81701eb0bd4f986b6", "text": "We have applied a little-known data transformation to subsets of the Surveillance, Epidemiology, and End Results (SEER) publically available data of the National Cancer Institute (NCI) to make it suitable input to standard machine learning classifiers. This transformation properly treats the right-censored data in the SEER data and the resulting Random Forest and Multi-Layer Perceptron models predict full survival curves. Treating the 6, 12, and 60 months points of the resulting survival curves as 3 binary classifiers, the 18 resulting classifiers have AUC values ranging from .765 to .885. Further evidence that the models have generalized well from the training data is provided by the extremely high levels of agreement between the random forest and neural network models predictions on the 6, 12, and 60 month binary classifiers.", "title": "" }, { "docid": "a84143b7aa2d42f3297d81a036dc0f5e", "text": "Vehicular Ad hoc Networks (VANETs) have emerged recently as one of the most attractive topics for researchers and automotive industries due to their tremendous potential to improve traffic safety, efficiency and other added services. However, VANETs are themselves vulnerable against attacks that can directly lead to the corruption of networks and then possibly provoke big losses of time, money, and even lives. This paper presents a survey of VANETs attacks and solutions in carefully considering other similar works as well as updating new attacks and categorizing them into different classes.", "title": "" }, { "docid": "645c0e5b4946217bb6ccaf7f03454cc2", "text": "Nowadays, huge sheet music collections exist on the Web, allowing people to access public domain scores for free. However, beginners may be lost in finding a score appropriate to their instrument level, and should often rely on themselves to start out on the chosen piece. In this instrumental e-Learning context, we propose a Score Analyzer prototype in order to automatically extract the difficulty level of a MusicXML piece and suggest advice thanks to a Musical Sign Base (MSB). To do so, we first review methods related to score performance information retrieval. We then identify seven criteria to characterize technical instrumental difficulties and propose methods to extract them from a MusicXML score. The relevance of these criteria is then evaluated through a Principal Components Analysis and compared to human estimations. Lastly we discuss the integration of this work to @MUSE, a collaborative score annotation platform based on multimedia contents indexation.", "title": "" }, { "docid": "687caec27d44691a6aac75577b32eb81", "text": "We present unsupervised approaches to the problem of modeling dialog acts in asynchronous conversations; i.e., conversations where participants collaborate with each other at different times. In particular, we investigate a graph-theoretic deterministic framework and two probabilistic conversation models (i.e., HMM and HMM+Mix) for modeling dialog acts in emails and forums. We train and test our conversation models on (a) temporal order and (b) graph-structural order of the datasets. 
Empirical evaluation suggests (i) the graph-theoretic framework that relies on lexical and structural similarity metrics is not the right model for this task, (ii) conversation models perform better on the graphstructural order than the temporal order of the datasets and (iii) HMM+Mix is a better conversation model than the simple HMM model.", "title": "" }, { "docid": "21c4cd3a91a659fcd3800967943a2ffd", "text": "Ground reaction force (GRF) measurement is important in the analysis of human body movements. The main drawback of the existing measurement systems is the restriction to a laboratory environment. This study proposes an ambulatory system for assessing the dynamics of ankle and foot, which integrates the measurement of the GRF with the measurement of human body movement. The GRF and the center of pressure (CoP) are measured using two 6D force/moment sensors mounted beneath the shoe. The movement of the foot and the lower leg is measured using three miniature inertial sensors, two rigidly attached to the shoe and one to the lower leg. The proposed system is validated using a force plate and an optical position measurement system as a reference. The results show good correspondence between both measurement systems, except for the ankle power. The root mean square (rms) difference of the magnitude of the GRF over 10 evaluated trials was 0.012 ± 0.001 N/N (mean ± standard deviation), being 1.1 ± 0.1 % of the maximal GRF magnitude. It should be noted that the forces, moments, and powers are normalized with respect to body weight. The CoP estimation using both methods shows good correspondence, as indicated by the rms difference of 5.1± 0.7 mm, corresponding to 1.7 ± 0.3 % of the length of the shoe. The rms difference between the magnitudes of the heel position estimates was calculated as 18 ± 6 mm, being 1.4 ± 0.5 % of the maximal magnitude. The ankle moment rms difference was 0.004 ± 0.001 Nm/N, being 2.3 ± 0.5 % of the maximal magnitude. Finally, the rms difference of the estimated power at the ankle was 0.02 ± 0.005 W/N, being 14 ± 5 % of the maximal power. This power difference is caused by an inaccurate estimation of the angular velocities using the optical reference measurement system, which is due to considering the foot as a single segment. The ambulatory system considers separate heel and forefoot segments, thus allowing an additional foot moment and power to be estimated. Based on the results of this research, it is concluded that the combination of the instrumented shoe and inertial sensing is a promising tool for the assessment of the dynamics of foot and ankle in an ambulatory setting.", "title": "" }, { "docid": "72226ba8d801a3db776cf40d5243c521", "text": "Hyperspectral image (HSI) classification is one of the most widely used methods for scene analysis from hyperspectral imagery. In the past, many different engineered features have been proposed for the HSI classification problem. In this paper, however, we propose a feature learning approach for hyperspectral image classification based on convolutional neural networks (CNNs). The proposed CNN model is able to learn structured features, roughly resembling different spectral band-pass filters, directly from the hyperspectral input data. 
Our experimental results, conducted on a commonly-used remote sensing hyperspectral dataset, show that the proposed method provides classification results that are among the state-of-the-art, without using any prior knowledge or engineered features.", "title": "" }, { "docid": "63815cfef828fffddd589f2f1c34299a", "text": "In this paper, we propose a Recurrent Highway Network with Language CNN for image caption generation. Our network consists of three sub-networks: the deep Convolutional Neural Network for image representation, the Convolutional Neural Network for language modeling, and the Multimodal Recurrent Highway Network for sequence prediction. Our proposed model can naturally exploit the hierarchical and temporal structure of history words, which are critical for image caption generation. The effectiveness of our model is validated on two datasets MS COCO and Flickr30K. Our extensive experiment results show that our method is competitive with the state-of-the-art methods.", "title": "" }, { "docid": "f4cbdcdb55e2bf49bcc62a79293f19b7", "text": "Network slicing for 5G provides Network-as-a-Service (NaaS) for different use cases, allowing network operators to build multiple virtual networks on a shared infrastructure. With network slicing, service providers can deploy their applications and services flexibly and quickly to accommodate diverse services’ specific requirements. As an emerging technology with a number of advantages, network slicing has raised many issues for the industry and academia alike. Here, the authors discuss this technology’s background and propose a framework. They also discuss remaining challenges and future research directions.", "title": "" }, { "docid": "74e0fb4cb7b57d8b84eed3f895a39ef3", "text": "High-throughput data production has revolutionized molecular biology. However, massive increases in data generation capacity require analysis approaches that are more sophisticated, and often very computationally intensive. Thus, making sense of high-throughput data requires informatics support. Galaxy (http://galaxyproject.org) is a software system that provides this support through a framework that gives experimentalists simple interfaces to powerful tools, while automatically managing the computational details. Galaxy is distributed both as a publicly available Web service, which provides tools for the analysis of genomic, comparative genomic, and functional genomic data, or a downloadable package that can be deployed in individual laboratories. Either way, it allows experimentalists without informatics or programming expertise to perform complex large-scale analysis with just a Web browser.", "title": "" }, { "docid": "ad5b787fd972c202a69edc98a8fbc7ba", "text": "BACKGROUND\nIntimate partner violence (IPV) is a major public health problem with serious consequences for women's physical, mental, sexual and reproductive health. Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness of IPV being a product of social context. 
This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria.\n\n\nMETHODS\nMultilevel logistic regression analyses were performed on nationally-representative cross-sectional data including 19,226 women aged 15-49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage.\n\n\nRESULTS\nWomen who had experienced physical IPV, sexual IPV, and any IPV were more likely to have terminated a pregnancy compared to women who had not experienced these IPV types.IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justifying wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy.\n\n\nCONCLUSION\nFindings indicate the role of community influence in the association between IPV-exposure and terminated pregnancy, and stress the need for screening women seeking abortions for a history of abuse.", "title": "" }, { "docid": "be7f7d9c6a28b7d15ec381570752de95", "text": "Neural network are most popular in the research community due to its generalization abilities. Additionally, it has been successfully implemented in biometrics, features selection, object tracking, document image preprocessing and classification. This paper specifically, clusters, summarize, interpret and evaluate neural networks in document Image preprocessing. The importance of the learning algorithms in neural networks training and testing for preprocessing is also highlighted. Finally, a critical analysis on the reviewed approaches and the future research guidelines in the field are suggested.", "title": "" }, { "docid": "211cf327b65cbd89cf635bbeb5fa9552", "text": "BACKGROUND\nAdvanced mobile communications and portable computation are now combined in handheld devices called \"smartphones\", which are also capable of running third-party software. The number of smartphone users is growing rapidly, including among healthcare professionals. The purpose of this study was to classify smartphone-based healthcare technologies as discussed in academic literature according to their functionalities, and summarize articles in each category.\n\n\nMETHODS\nIn April 2011, MEDLINE was searched to identify articles that discussed the design, development, evaluation, or use of smartphone-based software for healthcare professionals, medical or nursing students, or patients. A total of 55 articles discussing 83 applications were selected for this study from 2,894 articles initially obtained from the MEDLINE searches.\n\n\nRESULTS\nA total of 83 applications were documented: 57 applications for healthcare professionals focusing on disease diagnosis (21), drug reference (6), medical calculators (8), literature search (6), clinical communication (3), Hospital Information System (HIS) client applications (4), medical training (2) and general healthcare applications (7); 11 applications for medical or nursing students focusing on medical education; and 15 applications for patients focusing on disease management with chronic illness (6), ENT-related (4), fall-related (3), and two other conditions (2). 
The disease diagnosis, drug reference, and medical calculator applications were reported as most useful by healthcare professionals and medical or nursing students.\n\n\nCONCLUSIONS\nMany medical applications for smartphones have been developed and widely used by health professionals and patients. The use of smartphones is getting more attention in healthcare day by day. Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communication. Also, smartphones can play a very important role in patient education, disease self-management, and remote monitoring of patients.", "title": "" }, { "docid": "25ae604f6e56aae8baf92693fa4df3d4", "text": "Many automatic image annotation methods are based on the learning by example paradigm. Image tagging, through manual image inspection, is the first step towards this end. However, manual image annotation, even for creating the training sets, is time-consuming, complicated and contains human subjectivity errors. Thus, alternative ways for automatically creating training examples, i.e., pairs of images and tags, are crucial. As we showed in one of our previous studies, tags accompanying photos in social media and especially the Instagram hashtags can be used for image annotation. However, it turned out that only a 20% of the Instagram hashtags are actually relevant to the content of the image they accompany. Identifying those hashtags through crowdsourcing is a plausible solution. In this work, we investigate the effectiveness of the HITS algorithm for identifying the right tags in a crowdsourced image tagging scenario. For this purpose, we create a bipartite graph in which the first type of nodes corresponds to the annotators and the second type to the tags they select, among the hashtags, to annotate a particular Instagram image. From the results, we conclude that the authority value of the HITS algorithm provides an accurate estimation of the appropriateness of each Instagram hashtag to be used as a tag for the image it accompanies while the hub value can be used to filter out the dishonest annotators.", "title": "" }, { "docid": "bad3eec42d75357aca75fa993ab49e52", "text": "By robust image hashing (RIH), a digital image is transformed into a short binary string, of fixed length, called hash value, hash code or simply hash. Other terms used occasionally for the hash are digital signature, fingerprint, message digest or label. The hash is attached to the image, inserted by watermarking or transmitted by side channels. The hash is robust to image low distortion, fragile to image tempering and have low collision probability. The main applications of RIH are in image copyright protection, content authentication and database indexing. The goal of copyright protections is to prevent possible illegal usage of digital images by identifying the image even when its pixels are distorted by small tempering or by common manipulation (transmission, lossy compression etc.). In such cases, the image is still identifiable by the hash, which is robust to low distortions (Khelifi & Jiang, 2010). The content authentication is today, one of the main issues in digital image security. The image content can be easily modified by using commercial image software. A common example is the object insertion or removal. Although visually undetectable, such modifications are put into evidence by the hash, which is fragile to image tempering (Zhao & al., 2013). 
Finally, in large databases management, the RIH can be an effective solution for image efficient retrieval, by replacing the manual annotation with the hash, which is automated extracted (Lin & al., 2001). The properties that recommend the hash for indexing are the low collision probability and the content-based features. The origins of the hash lay in computer science, where one of the earliest applications was the efficient search of large tables. Here, the hash – calculated by a hash function – serves as index for the data recorded in the table. Since, in general, such functions map more data strings to the same hash, the hash designates in fact a bucket of records, helping to narrow the search. Although very efficient in table searching, these hashes are not appropriate for file authentication, where the low collision probability is of high concern. The use in authentication applications has led to the development of the cryptographic hashing, a branch including hash functions with the following special properties: preimage resistance (by knowing the hash it is very difficult to find out the file that generated it), second image resistance (given a file, it is very difficult to find another with the same hash) and collision resistance (it is very difficult to find two files with the same hash). They allow the hash to withstand the cryptanalytic attacks. The development of multimedia applications in the last two decades has brought central stage the digital images. The indexing or authentication of these data has been a new challenge for hashing because of a property that might be called perceptible identity. It could be defined as follows: although the image pixels undergo slight modification during ordinary operations, the image is perceived as being the same. The perceptual similar images must have similar hashes. The hashing complying with this demand is called robust or perceptual. Specific methods have had to be developed in order to obtain hashes tolerant to distortions, inherent to image conventional handling like archiving, scaling, rotation, cropping, noise filtering, print-and-scan etc., called in one word non malicious attacks. These methods are grouped under the generic name of RIH. In this article, we define the main terms used in RIH and discuss the solutions commonly used for designing a RIH scheme. The presentation will be done in the light of robust hash inherent properties: randomness, independence and robustness.", "title": "" }, { "docid": "6ec3f783ec49c0b3e51a704bc3bd03ec", "text": "Abstract: It has been suggested by many supply chain practitioners that in certain cases inventory can have a stimulating effect on the demand. In mathematical terms this amounts to the demand being a function of the inventory level alone. In this work we propose a logistic growth model for the inventory dependent demand rate and solve first the continuous time deterministic optimal control problem of maximising the present value of the total net profit over an infinite horizon. It is shown that under a strict condition there is a unique optimal stock level which the inventory planner should maintain in order to satisfy demand. The stochastic version of the optimal control problem is considered next. A bang-bang type of optimal control problem is formulated and the associated Hamilton-Jacobi-Bellman equation is solved. 
The inventory level that signifies a switch in the ordering strategy is worked out in the stochastic case.", "title": "" }, { "docid": "9b848e6e472d875db48fab62d8dd31a4", "text": "We present a semantics-driven approach for stroke-based painterly rendering, based on recent image parsing techniques [Tu et al. 2005; Tu and Zhu 2006] in computer vision. Image parsing integrates segmentation for regions, sketching for curves, and recognition for object categories. In an interactive manner, we decompose an input image into a hierarchy of its constituent components in a parse tree representation with occlusion relations among the nodes in the tree. To paint the image, we build a brush dictionary containing a large set (760) of brush examples of four shape/appearance categories, which are collected from professional artists, then we select appropriate brushes from the dictionary and place them on the canvas guided by the image semantics included in the parse tree, with each image component and layer painted in various styles. During this process, the scene and object categories also determine the color blending and shading strategies for inhomogeneous synthesis of image details. Compared with previous methods, this approach benefits from richer meaningful image semantic information, which leads to better simulation of painting techniques of artists using the high-quality brush dictionary. We have tested our approach on a large number (hundreds) of images and it produced satisfactory painterly effects.", "title": "" }, { "docid": "b30b588142b60b105a39a79166ba2a36", "text": "JavaScript is used everywhere from the browser to the server, including desktops and mobile devices. However, the current state of the art in JavaScript static analysis lags far behind that of other languages such as C and Java. Our goal is to help remedy this lack. We describe JSAI, a formally specified, robust abstract interpreter for JavaScript. JSAI uses novel abstract domains to compute a reduced product of type inference, pointer analysis, control-flow analysis, string analysis, and integer and boolean constant propagation. Part of JSAI's novelty is user-configurable analysis sensitivity, i.e., context-, path-, and heap-sensitivity. JSAI is designed to be provably sound with respect to a specific concrete semantics for JavaScript, which has been extensively tested against a commercial JavaScript implementation. We provide a comprehensive evaluation of JSAI's performance and precision using an extensive benchmark suite, including real-world JavaScript applications, machine generated JavaScript code via Emscripten, and browser addons. We use JSAI's configurability to evaluate a large number of analysis sensitivities (some well-known, some novel) and observe some surprising results that go against common wisdom. These results highlight the usefulness of a configurable analysis platform such as JSAI.", "title": "" }, { "docid": "beb9fe0cb07e8531f01744bd8800d67b", "text": "Networked embedded systems have become quite important nowadays, especially for monitoring and controlling the devices. Advances in embedded system technologies have led to the development of residential gateway and automation systems. Apart from frequent power cuts, residential areas suffer from a serious problem that people are not aware about the power cuts due to power disconnections in the transformers, also power theft issues and power wastage in the street lamps during day time exists. 
So this paper presents a lifestyle system using GSM which transmits the status of the transformer.", "title": "" }, { "docid": "8c1445126369faaf6e2785769a33ba0b", "text": "Recommender systems objectives can be broadly characterized as modeling user preferences over shortor long-term time horizon. A large body of previous research studied long-term recommendation through dimensionality reduction techniques applied to the historical user-item interactions. A recently introduced session-based recommendation setting highlighted the importance of modeling short-term user preferences. In this task, Recurrent Neural Networks (RNN) have shown to be successful at capturing the nuances of user’s interactions within a short time window. In this paper, we evaluate RNN-based models on both short-term and long-term recommendation tasks. Our experimental results suggest that RNNs are capable of predicting immediate as well as distant user interactions. We also find the best performing configuration to be a stacked RNN with layer normalization and tied item embeddings.", "title": "" }, { "docid": "8c4b1b74d21dcf6d10852deecccece36", "text": "Trolley problems have been used in the development of moral theory and the psychological study of moral judgments and behavior. Most of this research has focused on people from the West, with implicit assumptions that moral intuitions should generalize and that moral psychology is universal. However, cultural differences may be associated with differences in moral judgments and behavior. We operationalized a trolley problem in the laboratory, with economic incentives and real-life consequences, and compared British and Chinese samples on moral behavior and judgment. We found that Chinese participants were less willing to sacrifice one person to save five others, and less likely to consider such an action to be right. In a second study using three scenarios, including the standard scenario where lives are threatened by an on-coming train, fewer Chinese than British participants were willing to take action and sacrifice one to save five, and this cultural difference was more pronounced when the consequences were less severe than death.", "title": "" } ]
scidocsrr
59f8ad6c9ba12b08572d4fc9bf0fe2ed
NEUTROSOPHIC INDEX NUMBERS. Neutrosophic logic applied in the statistical indicators theory
[ { "docid": "d80ca368563546b1c2a7aa99d97e39d2", "text": "In this paper we present a short history of logics: from parti cular cases of 2-symbol or numerical valued logic to the general case of n-symbol or num erical valued logic. We show generalizations of 2-valued Boolean logic to fuzzy log ic, also from the Kleene’s and Lukasiewicz’ 3-symbol valued logics or Belnap’s 4ymbol valued logic to the most generaln-symbol or numerical valued refined neutrosophic logic . Two classes of neutrosophic norm ( n-norm) and neutrosophic conorm ( n-conorm) are defined. Examples of applications of neutrosophic logic to physics are listed in the last section. Similar generalizations can be done for n-Valued Refined Neutrosophic Set , and respectively n-Valued Refined Neutrosopjhic Probability .", "title": "" } ]
[ { "docid": "8cd99d9b59e6f1b631767b57fb506619", "text": "We describe origami programming methodology based on constraint functional logic programming. The basic operations of origami are reduced to solving systems of equations which describe the geometric properties of paper folds. We developed two software components: one that provides primitives to construct, manipulate and visualize paper folds and the other that solves the systems of equations. Using these components, we illustrate computer-supported origami construction and show the significance of the constraint functional logic programming paradigm in the program development.", "title": "" }, { "docid": "6e051906ec3deac14acb249ea4982d2e", "text": "Recent attempts to fabricate surfaces with custom reflectance functions boast impressive angular resolution, yet their spatial resolution is limited. In this paper we present a method to construct spatially varying reflectance at a high resolution of up to 220dpi, orders of magnitude greater than previous attempts, albeit with a lower angular resolution. The resolution of previous approaches is limited by the machining, but more fundamentally, by the geometric optics model on which they are built. Beyond a certain scale geometric optics models break down and wave effects must be taken into account. We present an analysis of incoherent reflectance based on wave optics and gain important insights into reflectance design. We further suggest and demonstrate a practical method, which takes into account the limitations of existing micro-fabrication techniques such as photolithography to design and fabricate a range of reflection effects, based on wave interference.", "title": "" }, { "docid": "1ecf01e0c9aec4159312406368ceeff0", "text": "Image phylogeny is the problem of reconstructing the structure that represents the history of generation of semantically similar images (e.g., near-duplicate images). Typical image phylogeny approaches break the problem into two steps: (1) estimating the dissimilarity between each pair of images and (2) reconstructing the phylogeny structure. Given that the dissimilarity calculation directly impacts the phylogeny reconstruction, in this paper, we propose new approaches to the standard formulation of the dissimilarity measure employed in image phylogeny, aiming at improving the reconstruction of the tree structure that represents the generational relationships between semantically similar images. These new formulations exploit a different method of color adjustment, local gradients to estimate pixel differences and mutual information as a similarity measure. The results obtained with the proposed formulation remarkably outperform the existing counterparts in the literature, allowing a much better analysis of the kinship relationships in a set of images, allowing for more accurate deployment of phylogeny solutions to tackle traitor tracing, copyright enforcement and digital forensics problems.", "title": "" }, { "docid": "65ffbc6ee36ae242c697bb81ff3be23a", "text": "Full-duplex hands-free telecommunication systems employ an acoustic echo canceler (AEC) to remove the undesired echoes that result from the coupling between a loudspeaker and a microphone. Traditionally, the removal is achieved by modeling the echo path impulse response with an adaptive finite impulse response (FIR) filter and subtracting an echo estimate from the microphone signal. 
It is not uncommon that an adaptive filter with a length of 50-300 ms needs to be considered, which makes an AEC highly computationally expensive. In this paper, we propose an echo suppression algorithm to eliminate the echo effect. Instead of identifying the echo path impulse response, the proposed method estimates the spectral envelope of the echo signal. The suppression is done by spectral modification-a technique originally proposed for noise reduction. It is shown that this new approach has several advantages over the traditional AEC. Properties of human auditory perception are considered, by estimating spectral envelopes according to the frequency selectivity of the auditory system, resulting in improved perceptual quality. A conventional AEC is often combined with a post-processor to reduce the residual echoes due to minor echo path changes. It is shown that the proposed algorithm is insensitive to such changes. Therefore, no post-processor is necessary. Furthermore, the new scheme is computationally much more efficient than a conventional AEC.", "title": "" }, { "docid": "95a102f45ff856d2064d8042b0b1a0ad", "text": "Diagnosis and monitoring of health is a very important task in health care industry. Due to time constraint people are not visiting hospitals, which could lead to lot of health issues in one instant of time. Priorly most of the health care systems have been developed to predict and diagnose the health of the patients by which people who are busy in their schedule can also monitor their health at regular intervals. Many studies have shown that early prediction is the best way to cure health because early diagnosis will help and alert the patients to know the health status. In this paper, we review the various Internet of Things (IoT) enable devices and its actual implementation in the area of health care children’s, monitoring of the patients etc. Further, this paper addresses how different innovations as server, ambient intelligence and sensors can be leveraged in health care context; determines how they can facilitate economies and societies in terms of suitable development. KeywordsInternet of Things (IoT);ambient intelligence; monitoring; innovations; leveraged. __________________________________________________*****_________________________________________________", "title": "" }, { "docid": "b9d8ea80169ac5a5c48fd631c9d5625a", "text": "Deep convolutional networks have achieved great success for image recognition. However, for action recognition in videos, their advantage over traditional methods is not so evident. We present a general and flexible video-level framework for learning action models in videos. This method, called temporal segment network (TSN), aims to model long-range temporal structures with a new segment-based sampling and aggregation module. This unique design enables our TSN to efficiently learn action models by using the whole action videos. The learned models could be easily adapted for action recognition in both trimmed and untrimmed videos with simple average pooling and multi-scale temporal window integration, respectively. We also study a series of good practices for the instantiation of TSN framework given limited training samples. Our approach obtains the state-the-of-art performance on four challenging action recognition benchmarks: HMDB51 (71.0%), UCF101 (94.9%), THUMOS14 (80.1%), and ActivityNet v1.2 (89.6%). 
Using the proposed RGB difference for motion models, our method can still achieve competitive accuracy on UCF101 (91.0 %) while running at 340 FPS. Furthermore, based on the temporal segment networks, we won the video classification track at the ActivityNet challenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and the proposed good practices.", "title": "" }, { "docid": "3100f5d0ed870be38770caf729798624", "text": "Our research objective is to facilitate the identification of true input manipulation vulnerabilities via the combination of static analysis, runtime detection, and automatic testing. We propose an approach for SQL injection vulnerability detection, automated by a prototype tool SQLInjectionGen. We performed case studies on two small web applications for the evaluation of our approach compared to static analysis for identifying true SQL injection vulnerabilities. In our case study, SQLInjectionGen had no false positives, but had a small number of false negatives while the static analysis tool had a false positive for every vulnerability that was actually protected by a white or black list.", "title": "" }, { "docid": "7ae137752af46ecd4bf8957691069779", "text": "We measured contrast detection thresholds for a foveal Gabor signal flanked by two high contrast Gabor signals. The spatially localized target and masks enabled investigation of space dependent lateral interactions between foveal and neighboring spatial channels. Our data show a suppressive region extending to a radius of two wavelengths, in which the presence of the masking signals have the effect of increasing target threshold. Beyond this range a much larger facilitatory region (up to a distance of ten wavelengths) is indicated, in which contrast thresholds were found to decrease by up to a factor of two. The interactions between the foveal target and the flanking Gabor signals are spatial-frequency and orientation specific in both regions, but less specific in the suppression region.", "title": "" }, { "docid": "945cb86f3eec8305ead0fd72162e8240", "text": "In this paper, the design of an integrated portable device that can monitor heart rate (HR) continuously and send notifications through short message service (SMS) over the cellular network using Android application is presented. The primary goal of our device is to ensure medical attention within the first few critical hours of an ailment of the patient with poor heart condition, hence boost chances of his or her survival. In situations where there is an absence of doctor or clinic nearby (e.g., rural area) and where the patient cannot realize their actual poor heart condition, is where our implemented system is of paramount importance. The designed system shows the real time HR on the mobile screen through Android application continuously and if any abnormal HR of the patient is detected, the system will immediately send a message to the concerned doctors and relatives whose numbers were previously saved in the Android application. 
This device ensures nonstop monitoring for cardiac events along with notifying immediately when an emergency strikes.", "title": "" }, { "docid": "66f6ca5a7ed26e43a5e06fb2c218aa94", "text": "We design two compressed data structures for the full-text indexing problem that support efficient substring searches using roughly the space required for storing the text in compressed form. Our first compressed data structure retrieves the occ occurrences of a pattern P[1,p] within a text T[1,n] in O(p + occ log^{1+ε} n) time for any chosen ε, 0<ε<1. This data structure uses at most 5nH_k(T) + o(n) bits of storage, where H_k(T) is the kth-order empirical entropy of T. The space usage is Θ(n) bits in the worst case and o(n) bits for compressible texts. This data structure exploits the relationship between suffix arrays and the Burrows-Wheeler Transform, and can be regarded as a compressed suffix array. Our second compressed data structure achieves O(p + occ) query time using O(nH_k(T) log^ε n) + o(n) bits of storage for any chosen ε, 0<ε<1. Therefore, it provides optimal output-sensitive query time using o(n log n) bits in the worst case. This second data structure builds upon the first one and exploits the interplay between two compressors: the Burrows-Wheeler Transform and the LZ78 algorithm.", "title": "" }, { "docid": "f733125d8cd3d90ac7bf463ae93ca24a", "text": "Various online, networked systems offer a lightweight process for obtaining identities (e.g., confirming a valid e-mail address), so that users can easily join them. Such convenience comes with a price, however: with minimum effort, an attacker can subvert the identity management scheme in place, obtain a multitude of fake accounts, and use them for malicious purposes. In this work, we approach the issue of fake accounts in large-scale, distributed systems, by proposing a framework for adaptive identity management. Instead of relying on users' personal information as a requirement for granting identities (unlike existing proposals), our key idea is to estimate a trust score for identity requests, and price them accordingly using a proof of work strategy. The research agenda that guided the development of this framework comprised three main items: (i) investigation of a candidate trust score function, based on an analysis of users' identity request patterns, (ii) combination of trust scores and proof of work strategies (e.g. cryptographic puzzles) for adaptively pricing identity requests, and (iii) reshaping of traditional proof of work strategies, in order to make them more resource-efficient, without compromising their effectiveness (in stopping attackers).", "title": "" }, { "docid": "3ed5ec863971e04523a7ede434eaa80d", "text": "This article reports on the design, implementation, and usage of the CourseMarker (formerly known as CourseMaster) courseware Computer Based Assessment (CBA) system at the University of Nottingham. Students use CourseMarker to solve (programming) exercises and to submit their solutions. CourseMarker returns immediate results and feedback to the students. Educators author a variety of exercises that benefit the students while offering practical benefits. 
To date, both educators and students have been hampered by CBA software that has been constructed to assess text-based or multiple-choice answers only. Although there exist a few CBA systems with some capability to automatically assess programming coursework, none assess Java programs and none are as flexible, architecture-neutral, robust, or secure as the CourseMarker CBA system.", "title": "" }, { "docid": "da1f4117851762bfb5ef80c0893248c3", "text": "The recently-developed WaveNet architecture (van den Oord et al., 2016a) is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today’s massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples at more than 20 times faster than real-time, a 1000x speed up relative to the original WaveNet, and capable of serving multiple English and Japanese voices in a production setting.", "title": "" }, { "docid": "b8e705c7dd974ee43b315d3146a0b149", "text": "The use of repeated measures, where the same subjects are tested under a number of conditions, has numerous practical and statistical benefits. For one thing it reduces the error variance caused by between-group individual differences, however, this reduction of error comes at a price because repeated measures designs potentially introduce covariation between experimental conditions (this is because the same people are used in each condition and so there is likely to be some consistency in their behaviour across conditions). In between-group ANOVA we have to assume that the groups we test are independent for the test to be accurate (Scariano & Davenport, 1987, have documented some of the consequences of violating this assumption). As such, the relationship between treatments in a repeated measures design creates problems with the accuracy of the test statistic. The purpose of this article is to explain, as simply as possible, the issues that arise in analysing repeated measures data with ANOVA: specifically, what is sphericity and why is it important? What is Sphericity?", "title": "" }, { "docid": "13173c37670511963b23a42a3cc7e36b", "text": "In patients having a short nose with a short septal length and/or severe columellar retraction, a septal extension graft is a good solution, as it allows the dome to move caudally and pushes down the columellar base. Fixing the medial crura of the alar cartilages to a septal extension graft leads to an uncomfortably rigid nasal tip and columella, and results in unnatural facial animation. Further, because of the relatively small and weak septal cartilage in the East Asian population, undercorrection of a short nose is not uncommon. To overcome these shortcomings, we have used the septal extension graft combined with a derotation graft. Among 113 patients who underwent the combined procedure, 82 patients had a short nose deformity alone; the remaining 31 patients had a short nose with columellar retraction. Thirty-two patients complained of nasal tip stiffness caused by a septal extension graft from previous operations. 
In addition to the septal extension graft, a derotation graft was used for bridging the gap between the alar cartilages and the septal extension graft for tip lengthening. Satisfactory results were obtained in 102 (90%) patients. Eleven (10%) patients required revision surgery. This combination method is a good surgical option for patients who have a short nose with small septal cartilages and do not have sufficient cartilage for tip lengthening by using a septal extension graft alone. It can also overcome the postoperative nasal tip rigidity of a septal extension graft.", "title": "" }, { "docid": "09e07f66760c1216e6e01841af2e48b7", "text": "Traditional approaches to rule-based information extraction (IE) have primarily been based on regular expression grammars. However, these grammar-based systems have difficulty scaling to large data sets and large numbers of rules. Inspired by traditional database research, we propose an algebraic approach to rule-based IE that addresses these scalability issues through query optimization. The operators of our algebra are motivated by our experience in building several rule-based extraction programs over diverse data sets. We present the operators of our algebra and propose several optimization strategies motivated by the text-specific characteristics of our operators. Finally we validate the potential benefits of our approach by extensive experiments over real-world blog data.", "title": "" }, { "docid": "f070e150ca71ac43d191ca1c3e2f2333", "text": "Weblog, one of the fastest growing user generated contents, often contains key learnings gleaned from people's past experiences which are really worthy to be well presented to other people. One of the key learnings contained in weblogs is often vented in the form of advice. In this paper, we aim to provide a methodology to extract sentences that reveal advices on weblogs. We observed our data to discover the characteristics of advices contained in weblogs. Based on this observation, we define our task as a classification problem using various linguistic features. We show that our proposed method significantly outperforms the baseline. The presence or absence of imperative mood expression appears to be the most important feature in this task. It is also worth noting that the work presented in this paper is the first attempt on mining advices from English data.", "title": "" }, { "docid": "9869f2a28b11a5f0a83127937408b0ac", "text": "With the advent of the Semantic Web, the field of domain ontology engineering has gained more and more importance. This innovative field may have a big impact on computer-based education and will certainly contribute to its development. This paper presents a survey on domain ontology engineering and especially domain ontology learning. The paper focuses particularly on automatic methods for ontology learning from texts. It summarizes the state of the art in natural language processing techniques and statistical and machine learning techniques for ontology extraction. It also explains how intelligent tutoring systems may benefit from this engineering and talks about the challenges that face the field.", "title": "" }, { "docid": "e5fc30045f458f84435363349d22204d", "text": "Today, root cause analysis of failures in data centers is mostly done through manual inspection. More often than not, cus- tomers blame the network as the culprit. However, other components of the system might have caused these failures. 
To troubleshoot, huge volumes of data are collected over the entire data center. Correlating such large volumes of diverse data collected from different vantage points is a daunting task even for the most skilled technicians. In this paper, we revisit the question: how much can you infer about a failure in the data center using TCP statistics collected at one of the endpoints? Using an agent that captures TCP statistics, we devised a classification algorithm that identifies the root cause of failure using this information at a single endpoint. Using insights derived from this classification algorithm, we identify dominant TCP metrics that indicate where/why problems occur in the network. We validate and test these methods using data that we collect over a period of six months in a production data center.", "title": "" }, { "docid": "8baa6af3ee08029f0a555e4f4db4e218", "text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any dataset-specific engineering.", "title": "" } ]
scidocsrr
7e45f0f26f2bdce07b23ce2c2383ec40
Does sexual selection explain human sex differences in aggression?
[ { "docid": "0688abcb05069aa8a0956a0bd1d9bf54", "text": "Sex differences in mortality rates stem from genetic, physiological, behavioral, and social causes that are best understood when integrated in an evolutionary life history framework. This paper investigates the Male-to-Female Mortality Ratio (M:F MR) from external and internal causes and across contexts to illustrate how sex differences shaped by sexual selection interact with the environment to yield a pattern with some consistency, but also with expected variations due to socioeconomic and other factors.", "title": "" } ]
[ { "docid": "ba8cddc6ed18f941ed7409524137c28c", "text": "This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent’s past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.", "title": "" }, { "docid": "88530d3d70df372b915556eab919a3fe", "text": "The airway mucosa is lined by a continuous epithelium comprised of multiple cell phenotypes, several of which are secretory. Secretions produced by these cells mix with a variety of macromolecules, ions and water to form a respiratory tract fluid that protects the more distal airways and alveoli from injury and infection. The present article highlights the structure of the mucosa, particularly its secretory cells, gives a synopsis of the structure of mucus, and provides new information on the localization of mucin (MUC) genes that determine the peptide sequence of the protein backbone of the glycoproteins, which are a major component of mucus. Airway secretory cells comprise the mucous, serous, Clara and dense-core granulated cells of the surface epithelium, and the mucous and serous acinar cells of the submucosal glands. Several transitional phenotypes may be found, especially during irritation or disease. Respiratory tract mucins constitute a heterogeneous group of high molecular weight, polydisperse richly glycosylated molecules: both secreted and membrane-associated forms of mucin are found. Several mucin (MUC) genes encoding the protein core of mucin have been identified. We demonstrate the localization of MUC gene expression to a number of distinct cell types and their upregulation both in response to experimentally administered lipopolysaccharide and cystic fibrosis.", "title": "" }, { "docid": "46f3f27a88b4184a15eeb98366e599ec", "text": "Radiomics is an emerging field in quantitative imaging that uses advanced imaging features to objectively and quantitatively describe tumour phenotypes. Radiomic features have recently drawn considerable interest due to its potential predictive power for treatment outcomes and cancer genetics, which may have important applications in personalized medicine. In this technical review, we describe applications and challenges of the radiomic field. We will review radiomic application areas and technical issues, as well as proper practices for the designs of radiomic studies.", "title": "" }, { "docid": "0d2f933b139f50ff9195118d9d1466aa", "text": "Ambient Intelligence (AmI) and Smart Environments (SmE) are based on three foundations: ubiquitous computing, ubiquitous communication and intelligent adaptive interfaces [41]. This type of systems consists of a series of interconnected computing and sensing devices which surround the user pervasively in his environment and are invisible to him, providing a service that is dynamically adapted to the interaction context, so that users can naturally interact with the system and thus perceive it as intelligent. To ensure such a natural and intelligent interaction, it is necessary to provide an effective, easy, safe and transparent interaction between the user and the system. 
With this objective, as an attempt to enhance and ease human-to-computer interaction, in the last years there has been an increasing interest in simulating human-tohuman communication, employing the so-called multimodal dialogue systems [46]. These systems go beyond both the desktop metaphor and the traditional speech-only interfaces by incorporating several communication modalities, such as speech, gaze, gestures or facial expressions. Multimodal dialogue systems offer several advantages. Firstly, they can make use of automatic recognition techniques to sense the environment allowing the user to employ different input modalities, some of these technologies are automatic speech recognition [62], natural language processing [12], face location and tracking [77], gaze tracking [58], lipreading recognition [13], gesture recognition [39], and handwriting recognition [78].", "title": "" }, { "docid": "f0f16472cdb6b52b05d1d324e55da081", "text": "We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when the regularization parameter scales as 1/ √ n, we show that the proposed algorithm is communication efficient: the required round of communication does not increase with the sample size n, and only grows slowly with the number of machines.", "title": "" }, { "docid": "e32e17bb36f39d6020bced297b3989fe", "text": "Memory networks are a recently introduced model that combines reasoning, attention and memory for solving tasks in the areas of language understanding and dialogue -- where one exciting direction is the use of these models for dialogue-based recommendation. In this talk we describe these models and how they can learn to discuss, answer questions about, and recommend sets of items to a user. The ultimate goal of this research is to produce a full dialogue-based recommendation assistant. We will discuss recent datasets and evaluation tasks that have been built to assess these models abilities to see how far we have come.", "title": "" }, { "docid": "2f48ab4d20f0928837bf10d2f638fed3", "text": "Duchenne muscular dystrophy (DMD), a recessive sex-linked hereditary disorder, is characterized by degeneration, atrophy, and weakness of skeletal and cardiac muscle. The purpose of this study was to document the prevalence of abnormally low resting BP recordings in patients with DMD in our outpatient clinic. The charts of 31 patients with DMD attending the cardiology clinic at Rush University Medical Center were retrospectively reviewed. Demographic data, systolic, diastolic, and mean blood pressures along with current medications, echocardiograms, and documented clinical appreciation and management of low blood pressure were recorded in the form of 104 outpatient clinical visits. Blood pressure (BP) was classified as low if the systolic and/or mean BP was less than the fifth percentile for height for patients aged ≤17 years (n = 23). 
For patients ≥18 years (n = 8), systolic blood pressure (SBP) <90 mmHg or a mean arterial pressure (MAP) <60 mmHg was recorded as a low reading. Patients with other forms of myopathy or unclear diagnosis were excluded. Statistical analysis was done using PASW version 18. BP was documented at 103 (99.01 %) outpatient encounters. Low systolic and mean BP were recorded in 35 (33.7 %) encounters. This represented low recordings for 19 (61.3 %) out of a total 31 patients with two or more successive low recordings for 12 (38.7 %) patients. Thirty-one low BP encounters were in patients <18 years old. Hispanic patients accounted for 74 (71.2 %) visits and had low BP recorded in 32 (43.2 %) instances. The patients were non-ambulant in 71 (68.3 %) encounters. Out of 35 encounters with low BP, 17 patients (48.6 %) were taking heart failure medication. In instances when patients had low BP, 22 (66.7 %) out of 33 echocardiography encounters had normal left ventricular ejection fraction. Clinician comments on low BP reading were present in 11 (10.6 %) encounters, and treatment modification occurred in only 1 (1 %) patient. Age in years (p = .031) and ethnicity (p = .035) were independent predictors of low BP using stepwise multiple regression analysis. Low BP was recorded in a significant number of patient encounters in patients with DMD. Age 17 years or less and Hispanic ethnicity were significant predictors associated with low BP readings in our DMD cohort. Concomitant heart failure therapy was not a statistically significant association. There is a need for enhanced awareness of low BP in DMD patients among primary care and specialty physicians. The etiology and clinical impact of these findings are unclear but may impact escalation of heart failure therapy.", "title": "" }, { "docid": "3c1cc57db29b8c86de4f314163ccaca0", "text": "We are motivated by the need for a generic object proposal generation algorithm which achieves good balance between object detection recall, proposal localization quality and computational efficiency. We propose a novel object proposal algorithm, BING++, which inherits the virtue of good computational efficiency of BING [1] but significantly improves its proposal localization quality. At high level we formulate the problem of object proposal generation from a novel probabilistic perspective, based on which our BING++ manages to improve the localization quality by employing edges and segments to estimate object boundaries and update the proposals sequentially. We propose learning the parameters efficiently by searching for approximate solutions in a quantized parameter space for complexity reduction. We demonstrate the generalization of BING++ with the same fixed parameters across different object classes and datasets. Empirically our BING++ can run at half speed of BING on CPU, but significantly improve the localization quality by 18.5 and 16.7 percent on both VOC2007 and Microhsoft COCO datasets, respectively. Compared with other state-of-the-art approaches, BING++ can achieve comparable performance, but run significantly faster.", "title": "" }, { "docid": "7b463b290988262db44984a89846129c", "text": "We describe an integrated strategy for planning, perception, state-estimation and action in complex mobile manipulation domains based on planning in the belief space of probability distributions over states using hierarchical goal regression (pre-image back-chaining). 
We develop a vocabulary of logical expressions that describe sets of belief states, which are goals and subgoals in the planning process. We show that a relatively small set of symbolic operators can give rise to task-oriented perception in support of the manipulation goals. An implementation of this method is demonstrated in simulation and on a real PR2 robot, showing robust, flexible solution of mobile manipulation problems with multiple objects and substantial uncertainty.", "title": "" }, { "docid": "a1018c89d326274e4b71ffc42f4ebba2", "text": "We describe a method for improving the classification of short text strings using a combination of labeled training data plus a secondary corpus of unlabeled but related longer documents. We show that such unlabeled background knowledge can greatly decrease error rates, particularly if the number of examples or the size of the strings in the training set is small. This is particularly useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular problem on the World Wide Web. Our approach views the task as one of information integration using WHIRL, a tool that combines database functionalities with techniques from the information-retrieval literature.", "title": "" }, { "docid": "d2b545b4f9c0e7323760632c65206480", "text": "This brief presents a quantitative analysis of the operating characteristics of three-phase diode bridge rectifiers with ac-side reactance and constant-voltage loads. We focus on the case where the ac-side currents vary continuously (continuous ac-side conduction mode). This operating mode is of particular importance in alternators and generators, for example. Simple approximate expressions are derived for the line and output current characteristics as well as the input power factor. Expressions describing the necessary operating conditions for continuous ac-side conduction are also developed. The derived analytical expressions are applied to practical examples and both simulations and experimental results are utilized to validate the analytical results. It is shown that the derived expressions are far more accurate than calculations based on traditional constant-current models.", "title": "" }, { "docid": "afae709279cd8adeda2888089872d70e", "text": "One-class classification problemhas been investigated thoroughly for past decades. Among one of themost effective neural network approaches for one-class classification, autoencoder has been successfully applied for many applications. However, this classifier relies on traditional learning algorithms such as backpropagation to train the network, which is quite time-consuming. To tackle the slow learning speed in autoencoder neural network, we propose a simple and efficient one-class classifier based on extreme learning machine (ELM).The essence of ELM is that the hidden layer need not be tuned and the output weights can be analytically determined, which leads to much faster learning speed.The experimental evaluation conducted on several real-world benchmarks shows that the ELM based one-class classifier can learn hundreds of times faster than autoencoder and it is competitive over a variety of one-class classification methods.", "title": "" }, { "docid": "1c80fdc30b2b37443367dae187fbb376", "text": "The web is a catalyst for drawing people together around shared goals, but many groups never reach critical mass. 
It can thus be risky to commit time or effort to a goal: participants show up only to discover that nobody else did, and organizers devote significant effort to causes that never get off the ground. Crowdfunding has lessened some of this risk by only calling in donations when an effort reaches a collective monetary goal. However, it leaves unsolved the harder problem of mobilizing effort, time and participation. We generalize the concept into activation thresholds, commitments that are conditioned on others' participation. With activation thresholds, supporters only need to show up for an event if enough other people commit as well. Catalyst is a platform that introduces activation thresholds for on-demand events. For more complex coordination needs, Catalyst also provides thresholds based on time or role (e.g., a bake sale requiring commitments for bakers, decorators, and sellers). In a multi-month field deployment, Catalyst helped users organize events including food bank volunteering, on-demand study groups, and mass participation events like a human chess game. Our results suggest that activation thresholds can indeed catalyze a large class of new collective efforts.", "title": "" }, { "docid": "d013bf1a031dd8a4e546c963cd8bde84", "text": "Parallel text is the fuel that drives modern machine translation systems. The Web is a comprehensive source of preexisting parallel text, but crawling the entire web is impossible for all but the largest companies. We bring web-scale parallel text to the masses by mining the Common Crawl, a public Web crawl hosted on Amazon’s Elastic Cloud. Starting from nothing more than a set of common two-letter language codes, our open-source extension of the STRAND algorithm mined 32 terabytes of the crawl in just under a day, at a cost of about $500. Our large-scale experiment uncovers large amounts of parallel text in dozens of language pairs across a variety of domains and genres, some previously unavailable in curated datasets. Even with minimal cleaning and filtering, the resulting data boosts translation performance across the board for five different language pairs in the news domain, and on open domain test sets we see improvements of up to 5 BLEU. We make our code and data available for other researchers seeking to mine this rich new data resource.1", "title": "" }, { "docid": "f676c503bcf59a8916995a6db3908792", "text": "Bone tissue engineering has been increasingly studied as an alternative approach to bone defect reconstruction. In this approach, new bone cells are stimulated to grow and heal the defect with the aid of a scaffold that serves as a medium for bone cell formation and growth. Scaffolds made of metallic materials have preferably been chosen for bone tissue engineering applications where load-bearing capacities are required, considering the superior mechanical properties possessed by this type of materials to those of polymeric and ceramic materials. The space holder method has been recognized as one of the viable methods for the fabrication of metallic biomedical scaffolds. In this method, temporary powder particles, namely space holder, are devised as a pore former for scaffolds. In general, the whole scaffold fabrication process with the space holder method can be divided into four main steps: (i) mixing of metal matrix powder and space-holding particles; (ii) compaction of granular materials; (iii) removal of space-holding particles; (iv) sintering of porous scaffold preform. 
In this review, detailed procedures in each of these steps are presented. Technical challenges encountered during scaffold fabrication with this specific method are addressed. In conclusion, strategies are yet to be developed to address problematic issues raised, such as powder segregation, pore inhomogeneity, distortion of pore sizes and shape, uncontrolled shrinkage and contamination.", "title": "" }, { "docid": "3194a0dd979b668bb25afb10260c30d2", "text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times6$ </tex-math></inline-formula> mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times5.8$ </tex-math></inline-formula> mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.", "title": "" }, { "docid": "8bf7524cf8f4696833cfc3d7b5d57349", "text": "This article is concerned with the design, implementation and control of a redundant robotic tool for Minimal Invasive Surgical (MIS) operations. The robotic tool is modular, comprised of identical stages of dual rotational Degrees of Freedom (DoF). An antagonistic tendon-driven mechanism using two DC-motors in a puller-follower configuration is used for each DoF. The inherent Coulomb friction is compensated using an adaptive scheme while varying the follower's reaction. Preliminary experimental results are provided to investigate the efficiency of the robot in typical surgical manoeuvres.", "title": "" }, { "docid": "408696a41684af20733b25833e741259", "text": "We propose a method for accurate 3D shape reconstruction using uncalibrated multiview photometric stereo. A coarse mesh reconstructed using multiview stereo is first parameterized using a planar mesh parameterization technique. Subsequently, multiview photometric stereo is performed in the 2D parameter domain of the mesh, where all geometric and photometric cues from multiple images can be treated uniformly. Unlike traditional methods, there is no need for merging view-dependent surface normal maps. Our key contribution is a new photometric stereo based mesh refinement technique that can efficiently reconstruct meshes with extremely fine geometric details by directly estimating a displacement texture map in the 2D parameter domain. We demonstrate that intricate surface geometry can be reconstructed using several challenging datasets containing surfaces with specular reflections, multiple albedos and complex topologies.", "title": "" }, { "docid": "413b9fe872843974cc4c1fcb9839ce0e", "text": "1. 
INTRODUCTION Despite the long history of machine translation projects, and the well-known effects that evaluations such as the ALPAC Report (Pierce et al., 1966) have had on that history, optimal MT evaluation methodologies remain elusive. This is perhaps due in part to the subjectivity inherent in judging the quality of any translation output (human or machine). The difficulty also lies in the heterogeneity of MT language pairs, computational approaches, and intended end-use. The DARPA machine translation initiative is faced with all of these issues in evaluation, and so requires a suite of evaluation methodologies which minimize subjectivity and transcend the heterogeneity problems. At the same time, the initiative seeks to formulate this suite in such a way that it is economical to administer and portable to other MT development initiatives. This paper describes an evaluation of three research MT systems along with benchmark human and external MT outputs. Two sets of evaluations were performed, one using a relatively complex suite of methodologies, and the other using a simpler set on the same data. The test procedure is described, along with a comparison of the results of the different methodologies. The authors would like to express their gratitude to Michael Naber for his assistance in compiling, expressing and interpreting data.", "title": "" }, { "docid": "8e03f4410676fb4285596960880263e9", "text": "Fuzzy computing (FC) has made a great impact in capturing human domain knowledge and modeling non-linear mapping of input-output space. In this paper, we describe the design and implementation of FC systems for detection of money laundering behaviors in financial transactions and monitoring of distributed storage system load. Our objective is to demonstrate the power of FC for real-world applications which are characterized by imprecise, uncertain data, and incomplete domain knowledge. For both applications, we designed fuzzy rules based on experts’ domain knowledge, depending on money laundering scenarios in transactions or the “health” of a distributed storage system. In addition, we developed a generic fuzzy inference engine and contributed to the open source community.", "title": "" } ]
scidocsrr
19f8b598960e1c072f368b92b18620c8
Analyzing Opinion Spammers’ Network Behavior in Online Review Systems
[ { "docid": "02edc26b93581e84de950fe11e04d5fc", "text": "Opinionated social media such as product reviews are now widely used by individuals and organizations for their decision making. However, due to the reason of profit or fame, people try to game the system by opinion spamming (e.g., writing fake reviews) to promote or demote some target products. For reviews to reflect genuine user experiences and opinions, such spam reviews should be detected. Prior works on opinion spam focused on detecting fake reviews and individual fake reviewers. However, a fake reviewer group (a group of reviewers who work collaboratively to write fake reviews) is even more damaging as they can take total control of the sentiment on the target product due to its size. This paper studies spam detection in the collaborative setting, i.e., to discover fake reviewer groups. The proposed method first uses a frequent itemset mining method to find a set of candidate groups. It then uses several behavioral models derived from the collusion phenomenon among fake reviewers and relation models based on the relationships among groups, individual reviewers, and products they reviewed to detect fake reviewer groups. Additionally, we also built a labeled dataset of fake reviewer groups. Although labeling individual fake reviews and reviewers is very hard, to our surprise labeling fake reviewer groups is much easier. We also note that the proposed technique departs from the traditional supervised learning approach for spam detection because of the inherent nature of our problem which makes the classic supervised learning approach less effective. Experimental results show that the proposed method outperforms multiple strong baselines including the state-of-the-art supervised classification, regression, and learning to rank algorithms.", "title": "" }, { "docid": "646097feed29f603724f7ec6b8bbeb8b", "text": "Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.", "title": "" }, { "docid": "2b1f85f35d6609b2538d920bdffc46c6", "text": "Different real-world applications have varying definitions of suspicious behaviors. 
Detection methods often look for the most suspicious parts of the data by optimizing scores, but quantifying the suspiciousness of a behavioral pattern is still an open issue.", "title": "" } ]
[ { "docid": "e74298e5bfd1cde8aaed2465cfb6ed33", "text": "We introduce a new low-distortion embedding of l<sub>2</sub><sup>d</sup> into l<sub>p</sub><sup>O(log n)</sup> (p=1,2), called the <i>Fast-Johnson-Linden-strauss-Transform</i>. The FJLT is faster than standard random projections and just as easy to implement. It is based upon the preconditioning of a sparse projection matrix with a randomized Fourier transform. Sparse random projections are unsuitable for low-distortion embeddings. We overcome this handicap by exploiting the \"Heisenberg principle\" of the Fourier transform, ie, its local-global duality. The FJLT can be used to speed up search algorithms based on low-distortion embeddings in l<sub>1</sub> and l<sub>2</sub>. We consider the case of approximate nearest neighbors in l<sub>2</sub><sup>d</sup>. We provide a faster algorithm using classical projections, which we then further speed up by plugging in the FJLT. We also give a faster algorithm for searching over the hypercube.", "title": "" }, { "docid": "a5879d5e7934380913cd2683ba2525b9", "text": "This paper deals with the design & development of a theft control system for an automobile, which is being used to prevent/control the theft of a vehicle. The developed system makes use of an embedded system based on GSM technology. The designed & developed system is installed in the vehicle. An interfacing mobile is also connected to the microcontroller, which is in turn, connected to the engine. Once, the vehicle is being stolen, the information is being used by the vehicle owner for further processing. The information is passed onto the central processing insurance system, where by sitting at a remote place, a particular number is dialed by them to the interfacing mobile that is with the hardware kit which is installed in the vehicle. By reading the signals received by the mobile, one can control the ignition of the engine; say to lock it or to stop the engine immediately. Again it will come to the normal condition only after entering a secured password. The owner of the vehicle & the central processing system will know this secured password. The main concept in this design is introducing the mobile communications into the embedded system. The designed unit is very simple & low cost. The entire designed unit is on a single chip. When the vehicle is stolen, owner of vehicle may inform to the central processing system, then they will stop the vehicle by just giving a ring to that secret number and with the help of SIM tracking knows the location of vehicle and informs to the local police or stops it from further movement.", "title": "" }, { "docid": "a14b09005a7c7ccd5c6cb49b39c83a91", "text": "We report the case of a 53-years-old patient, known to have coronary artery disease, presenting with typical angina at rest with normal ECG and laboratory findings. His angina is relieved by sublingual nitroglycerin. He had undergone a cardiac catheterisation two weeks prior to his presentation for the same complaints. It showed nonsignificant coronary lesions. Another catheterisation was performed during his current admission. He developed coronary spasm during the procedure, still with no ECG changes. The spasm was reversed by administration of 2 mg of intracoronary isosorbide dinitrate. 
Variant (Prinzmetal's) angina was diagnosed in the absence of electrical ECG changes during pain episodes.", "title": "" }, { "docid": "126b52ab2e2585eabf3345ef7fb39c51", "text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.", "title": "" }, { "docid": "d8bdaaceba4fa1422244e043d2b6c78e", "text": "We consider the problem of clustering observations using a potentially large set of features. One might expect that the true underlying clusters present in the data differ only with respect to a small fraction of the features, and will be missed if one clusters the observations using the full set of features. We propose a novel framework for sparse clustering, in which one clusters the observations using an adaptively chosen subset of the features. The method uses a lasso-type penalty to select the features. We use this framework to develop simple methods for sparse K-means and sparse hierarchical clustering. A single criterion governs both the selection of the features and the resulting clusters. These approaches are demonstrated on simulated data and on genomic data sets.", "title": "" }, { "docid": "9badb6e864118f1782d86486f6df9ff3", "text": "The genera Opechona Looss and Prodistomum Linton are redefined: the latter is re-established, its diagnostic character being the lack of a uroproct. Pharyngora Lebour and Neopechona Stunkard are considered synonyms of Opechona, and Acanthocolpoides Travassos, Freitas & Bührnheim is considered a synonym of Prodistomum. Opechona bacillaris (Molin) and Prodistomum [originally Distomum] polonii (Molin) n. comb. are described from the NE Atlantic Ocean. Separate revisions with keys to Opechona, Prodistomum and ‘Opechona-like’ species incertae sedis are presented. Opechona is considered to contain: O. bacillaris (type-species), O. alaskensis Ward & Fillingham, O. [originally Neopechona] cablei (Stunkard) n. comb., O. chloroscombri Nahhas & Cable, O. occidentalis Montgomery, O. parvasoma Ching sp. inq., O. pharyngodactyla Manter, O. [originally Distomum] pyriforme (Linton) n. comb. and O. sebastodis (Yamaguti). Prodistomum includes: P. gracile Linton (type-species), P. [originally Opechona] girellae (Yamaguti) n. comb., P. [originally Opechona] hynnodi (Yamaguti) n. comb., P. [originally Opechona] menidiae (Manter) n. comb., P. [originally Pharyngora] orientalis (Layman) n. comb., P. polonii and P. [originally Opechona] waltairensis (Madhavi) n. comb. Some species are considered ‘Opechona-like’ species incertae sedis: O. formiae Oshmarin, O. 
siddiqii Ahmad, 1986 nec 1984, O. mohsini Ahmad, O. magnatestis Gaevskaya & Kovaleva, O. vinodae Ahmad, O. travassosi Ahmad, ‘Lepidapedon’ nelsoni Gupta & Mehrotra and O. siddiqi Ahmad, 1984 nec 1986. The related genera Cephalolepidapedon Yamaguti and Clavogalea Bray and the synonymies of their constituent species are discussed, and further comments are made on related genera and misplaced species. The new combination Clavogalea [originally Stephanostomum] trachinoti (Fischthal & Thomas) is made. The taxonomy, life-history, host-specificity and zoogeography of the genera are briefly discussed.", "title": "" }, { "docid": "40b4a9b3a594e2a9cb7d489a3f44c328", "text": "The present article integrates findings from diverse studies on the generalized role of perceived coping self-efficacy in recovery from different types of traumatic experiences. They include natural disasters, technological catastrophes, terrorist attacks, military combat, and sexual and criminal assaults. The various studies apply multiple controls for diverse sets of potential contributors to posttraumatic recovery. In these different multivariate analyses, perceived coping self-efficacy emerges as a focal mediator of posttraumatic recovery. Verification of its independent contribution to posttraumatic recovery across a wide range of traumas lends support to the centrality of the enabling and protective function of belief in one's capability to exercise some measure of control over traumatic adversity.", "title": "" }, { "docid": "957170b015e5acd4ab7ce076f5a4c900", "text": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.", "title": "" }, { "docid": "e8bbbc1864090b0246735868faa0e11f", "text": "A pre-trained deep convolutional neural network (DCNN) is the feed-forward computation perspective which is widely used for the embedded vision systems. In the DCNN, the 2D convolutional operation occupies more than 90% of the computation time. Since the 2D convolutional operation performs massive multiply-accumulation (MAC) operations, conventional realizations could not implement a fully parallel DCNN. The RNS decomposes an integer into a tuple of L integers by residues of moduli set. 
Since no pair of modulus have a common factor with any other, the conventional RNS decomposes the MAC unit into circuits with different sizes. It means that the RNS could not utilize resources of an FPGA with uniform size. In this paper, we propose the nested RNS (NRNS), which recursively decompose the RNS. It can decompose the MAC unit into circuits with small sizes. In the DCNN using the NRNS, a 48-bit MAC unit is decomposed into 4-bit ones realized by look-up tables of the FPGA. In the system, we also use binary to NRNS converters and NRNS to binary converters. The binary to NRNS converter is realized by on-chip BRAMs, while the NRNS to binary one is realized by DSP blocks and BRAMs. Thus, a balanced usage of FPGA resources leads to a high clock frequency with less hardware. The ImageNet DCNN using the NRNS is implemented on a Xilinx Virtex VC707 evaluation board. As for the performance per area GOPS (Giga operations per second) per a slice, the proposed one is 5.86 times better than the existing best realization.", "title": "" }, { "docid": "0b1a8b80b4414fa34d6cbb5ad1342ad7", "text": "OBJECTIVE\nThe aim of the study was to evaluate the efficacy of topical 2% lidocaine gel in reducing pain and discomfort associated with nasogastric tube insertion (NGTI) and compare lidocaine to ordinary lubricant gel in the ease in carrying out the procedure.\n\n\nMETHODS\nThis prospective, randomized, double-blind, placebo-controlled, convenience sample trial was conducted in the emergency department of our tertiary care university-affiliated hospital. Five milliliters of 2% lidocaine gel or placebo lubricant gel were administered nasally to alert hemodynamically stable adult patients 5 minutes before undergoing a required NGTI. The main outcome measures were overall pain, nasal pain, discomfort (eg, choking, gagging, nausea, vomiting), and difficulty in performing the procedure. Standard comparative statistical analyses were used.\n\n\nRESULTS\nThe study cohort included 62 patients (65% males). Thirty-one patients were randomized to either lidocaine or placebo groups. Patients who received lidocaine reported significantly less intense overall pain associated with NGTI compared to those who received placebo (37 ± 28 mm vs 51 ± 26 mm on 100-mm visual analog scale; P < .05). The patients receiving lidocaine also had significantly reduced nasal pain (33 ± 29 mm vs 48 ± 27 mm; P < .05) and significantly reduced sensation of gagging (25 ± 30 mm vs 39 ± 24 mm; P < .05). However, conducting the procedure was significantly more difficult in the lidocaine group (2.1 ± 0.9 vs 1.4 ± 0.7 on 5-point Likert scale; P < .05).\n\n\nCONCLUSION\nLidocaine gel administered nasally 5 minutes before NGTI significantly reduces pain and gagging sensations associated with the procedure but is associated with more difficult tube insertion compared to the use of lubricant gel.", "title": "" }, { "docid": "81a45cb4ca02c38839a81ad567eb1491", "text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. 
This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.", "title": "" }, { "docid": "62425652b113c72c668cf9c73b7c8480", "text": "Knowledge graph (KG) completion aims to fill the missing facts in a KG, where a fact is represented as a triple in the form of (subject, relation, object). Current KG completion models compel twothirds of a triple provided (e.g., subject and relation) to predict the remaining one. In this paper, we propose a new model, which uses a KGspecific multi-layer recurrent neutral network (RNN) to model triples in a KG as sequences. It outperformed several state-of-the-art KG completion models on the conventional entity prediction task for many evaluation metrics, based on two benchmark datasets and a more difficult dataset. Furthermore, our model is enabled by the sequential characteristic and thus capable of predicting the whole triples only given one entity. Our experiments demonstrated that our model achieved promising performance on this new triple prediction task.", "title": "" }, { "docid": "d5771929cdaf41ce059e00b35825adf2", "text": "We develop a new collaborative filtering (CF) method that combines both previously known users’ preferences, i.e. standard CF, as well as product/user attributes, i.e. classical function approximation, to predict a given user’s interest in a particular product. Our method is a generalized low rank matrix completion problem, where we learn a function whose inputs are pairs of vectors – the standard low rank matrix completion problem being a special case where the inputs to the function are the row and column indices of the matrix. We solve this generalized matrix completion problem using tensor product kernels for which we also formally generalize standard kernel properties. Benchmark experiments on movie ratings show the advantages of our generalized matrix completion method over the standard matrix completion one with no information about movies or people, as well as over standard multi-task or single task learning methods.", "title": "" }, { "docid": "6bae81e837f4a498ae4c814608aac313", "text": "person’s ability to focus on his or her primary task. Distractions occur especially in mobile environments, because walking, driving, or other real-world interactions often preoccupy the user. A pervasivecomputing environment that minimizes distraction must be context aware, and a pervasive-computing system must know the user’s state to accommodate his or her needs. Context-aware applications provide at least two fundamental services: spatial awareness and temporal awareness. Spatially aware applications consider a user’s relative and absolute position and orientation. Temporally aware applications consider the time schedules of public and private events. 
With an interdisciplinary class of Carnegie Mellon University (CMU) students, we developed and implemented a context-aware, pervasive-computing environment that minimizes distraction and facilitates collaborative design.", "title": "" }, { "docid": "68e646d8aa50b331b1218a6b049d401f", "text": "In this paper we address the problem of clustering trajectories, namely sets of short sequences of data measured as a function of a dependent variable such as time. Examples include storm path trajectories, longitudinal data such as drug therapy response, functional expression data in computational biology, and movements of objects or individuals in video sequences. Our clustering algorithm is based on a principled method for probabilistic modelling of a set of trajectories as individual sequences of points generated from a finite mixture model consisting of regression model components. Unsupervised learning is carried out using maximum likelihood principles. Specifically, the EM algorithm is used to cope with the hidden data problem (i.e., the cluster memberships). We also develop generalizations of the method to handle non-parametric (kernel) regression components as well as multi-dimensional outputs. Simulation results comparing our method with other clustering methods such as K-means and Gaussian mixtures are presented as well as experimental results on real data sets. Figure 1: Trajectories of the estimated vertical position of a moving hand as a function of time, estimated from 6 different video sequences.", "title": "" }, { "docid": "87d15c47894210ad306948f32122a2c4", "text": "We design and implement MobileInsight, a software tool that collects, analyzes and exploits runtime network information from operational cellular networks. MobileInsight runs on commercial off-the-shelf phones without extra hardware or additional support from operators. It exposes protocol messages on both control plane and (below IP) data plane from the 3G/4G chipset. It provides in-device protocol analysis and operation logic inference. It further offers a simple API, through which developers and researchers obtain access to low-level network information for their mobile applications. We have built three showcases to illustrate how MobileInsight is applied to cellular network research.", "title": "" }, { "docid": "2e8a8ced91b9033a17abe4e54223fc19", "text": "Multimedia data security is important for multimedia commerce. Previous cryptography studies have focused on text data. The encryption algorithms developed to secure text data may not be suitable to multimedia applications because of large data sizes and real-time constraints. For multimedia applications, lightweight encryption algorithms are attractive. We present a novel MPEG Video Encryption Algorithm, called VEA. The basic idea of VEA is to use a secret key randomly changing the sign bits of all of the DCT coefficients of MPEG video. VEA’s encryption effects are achieved by the IDCT during MPEG video decompression processing. VEA adds minimum overhead to MPEG codec: one XOR operation to each nonzero DCT coefficient. A software implementation of VEA is fast enough to meet the real-time requirement of MPEG video applications. Our experimental results show that VEA achieves satisfying results.
We believe that it can be used to secure video-on-demand, tideo conferencing and video email applications.", "title": "" }, { "docid": "866e60129032c4e41761b7b19483c74a", "text": "The technology to immerse people in computer generated worlds was proposed by Sutherland in 1965, and realised in 1968 with a head-mounted display that could present a user with a stereoscopic 3-dimensional view slaved to a sensing device tracking the user's head movements (Sutherland 1965; 1968). The views presented at that time were simple wire frame models. The advance of computer graphics knowledge and technology, itself tied to the enormous increase in processing power and decrease in cost, together with the development of relatively efficient and unobtrusive sensing devices, has led to the emergence of participatory immersive virtual environments, commonly referred to as \"virtual reality\" (VR) (Fisher 1982; Fisher et. al. 1986; Teitel 1990; see also SIGGRAPH Panel Proceedings 1989,1990). Ellis defines virtualisation as \"the process by which a human viewer interprets a patterned sensory impression to be an extended object in an environment other than that in which it physically exists\" (Ellis, 1991). In this definition the idea is taken from geometric optics, where the concept of a \"virtual image\" is precisely defined, and is well understood. In the context of virtual reality the \"patterned sensory impressions\" are generated to the human senses through visual, auditory, tactile and kinesthetic displays, though systems that effectively present information in all such sensory modalities do not exist at present. Ellis further distinguishes between a virtual space, image and environment. An example of the first is a flat surface on which an image is rendered. Perspective depth cues, texture gradients, occlusion, and other similar aspects of the image lead to an observer perceiving", "title": "" }, { "docid": "c757e54a14beec3b4930ad050a16d311", "text": "The University Class Scheduling Problem (UCSP) is concerned with assigning a number of courses to classrooms taking into consideration constraints like classroom capacities and university regulations. The problem also attempts to optimize the performance criteria and distribute the courses fairly to classrooms depending on the ratio of classroom capacities to course enrollments. The problem is a classical scheduling problem and considered to be NP-complete. It has received some research during the past few years given its wide use in colleges and universities. Several formulations and algorithms have been proposed to solve scheduling problems, most of which are based on local search techniques. In this paper, we propose a complete approach using integer linear programming (ILP) to solve the problem. The ILP model of interest is developed and solved using the three advanced ILP solvers based on generic algorithms and Boolean Satisfiability (SAT) techniques. SAT has been heavily researched in the past few years and has lead to the development of powerful 0-1 ILP solvers that can compete with the best available generic ILP solvers. Experimental results indicate that the proposed model is tractable for reasonable-sized UCSP problems. 
Index Terms — University Class Scheduling, Optimization, Integer Linear Programming (ILP), Boolean Satisfiability.", "title": "" }, { "docid": "217dfc849cea5e0d80555790362af2e7", "text": "Research examining online political forums has until now been overwhelmingly guided by two broad perspectives: (1) a deliberative conception of democratic communication and (2) a diverse collection of incommensurable multi-sphere approaches. While these literatures have contributed many insightful observations, their disadvantages have left many interesting communicative dynamics largely unexplored. This article seeks to introduce a new framework for evaluating online political forums (based on the work of Jürgen Habermas and Lincoln Dahlberg) that addresses the shortcomings of prior approaches by identifying three distinct, overlapping models of democracy that forums may manifest: the liberal, the communitarian and the deliberative democratic. For each model, a set of definitional variables drawn from the broader online forum literature is documented and discussed.", "title": "" } ]
scidocsrr
fb362389826df4832c188d77800a4705
Iterative Machine Teaching
[ { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions [3]. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators [1] have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "title": "" }, { "docid": "bf1252479d6c1af3ceea574e41a2a782", "text": "This paper is concerned with various combinatorial parameters of classes that can be learned from a small set of examples. We show that the recursive teaching dimension, recently introduced by Zilles et al. (2008), is strongly connected to known complexity notions in machine learning, e.g., the self-directed learning complexity and the VC-dimension. To the best of our knowledge these are the first results unveiling such relations between teaching and query learning as well as between teaching and the VC-dimension. It will turn out that for many natural classes the RTD is upper-bounded by the VCD, e.g., classes of VC-dimension 1, intersection-closed classes and finite maximum classes. However, we will also show that there are certain (but rare) classes for which the recursive teaching dimension exceeds the VC-dimension. Moreover, for maximum classes, the combinatorial structure induced by the RTD, called teaching plan, is highly similar to the structure of sample compression schemes. Indeed one can transform any repetition-free teaching plan for a maximum class C into an unlabeled sample compression scheme for C and vice versa, while the latter is produced by (i) the corner-peeling algorithm of Rubinstein and Rubinstein (2012) and (ii) the tail matching algorithm of Kuzmin and Warmuth (2007).", "title": "" } ]
[ { "docid": "0a4f21f3254445c68f6e43f7d2ce5e8e", "text": "The Multidimensional (MD) modeling, which is the foundation of data warehouses (DWs), MD databases, and On-Line Analytical Processing (OLAP) applications, is based on several properties different from those in traditional database modeling. In the past few years, there have been some proposals, providing their own formal and graphical notations, for representing the main MD properties at the conceptual level. However, unfortunately none of them has been accepted as a standard for conceptual MD modeling. In this paper, we present an extension of the Unified Modeling Language (UML) using a UML profile. This profile is defined by a set of stereotypes, constraints and tagged values to elegantly represent main MD properties at the conceptual level. We make use of the Object Constraint Language (OCL) to specify the constraints attached to the defined stereotypes, thereby avoiding an arbitrary use of these stereotypes. We have based our proposal in UML for two main reasons: (i) UML is a well known standard modeling language known by most database designers, thereby designers can avoid learning a new notation, and (ii) UML can be easily extended so that it can be tailored for a specific domain with concrete peculiarities such as the multidimensional modeling for data warehouses. Moreover, our proposal is Model Driven Architecture (MDA) compliant and we use the Query View Transformation (QVT) approach for an automatic generation of the implementation in a target platform. Throughout the paper, we will describe how to easily accomplish the MD modeling of DWs at the conceptual level. Finally, we show how to use our extension in Rational Rose for MD modeling.", "title": "" }, { "docid": "83c87294c33601023fdd0624d2dacecc", "text": "In modern road surveys, hanging power cables are among the most commonly-found geometric features. These cables are catenary curves that are conventionally modelled with three parameters in 2D Cartesian space. With the advent and popularity of the mobile mapping system (MMS), the 3D point clouds of hanging power cables can be captured within a short period of time. These point clouds, similarly to those of planar features, can be used for feature-based self-calibration of the system assembly errors of an MMS. However, to achieve this, a well-defined 3D equation for the catenary curve is needed. This paper proposes three 3D catenary curve models, each having different parameters. The models are examined by least squares fitting of simulated data and real data captured with an MMS. The outcome of the fitting is investigated in terms of the residuals and correlation matrices. Among the proposed models, one of them could estimate the parameters accurately and without any extreme correlation between the variables. This model can also be applied to those transmission lines captured by airborne laser scanning or any other hanging cable-like objects.", "title": "" }, { "docid": "b779b82b0ecc316b13129480586ac483", "text": "Chainspace is a decentralized infrastructure, known as a distributed ledger, that supports user defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is verifiable by all. The system is scalable, by sharding state and the execution of transactions, and using S-BAC, a distributed commit protocol, to guarantee consistency. 
Chainspace is secure against subsets of nodes trying to compromise its integrity or availability properties through Byzantine Fault Tolerance (BFT), and extremely highauditability, non-repudiation and ‘blockchain’ techniques. Even when BFT fails, auditing mechanisms are in place to trace malicious participants. We present the design, rationale, and details of Chainspace; we argue through evaluating an implementation of the system about its scaling and other features; we illustrate a number of privacy-friendly smart contracts for smart metering, polling and banking and measure their performance.", "title": "" }, { "docid": "07657456a2328be11dfaf706b5728ddc", "text": "Knowledge of wheelchair kinematics during a match is prerequisite for performance improvement in wheelchair basketball. Unfortunately, no measurement system providing key kinematic outcomes proved to be reliable in competition. In this study, the reliability of estimated wheelchair kinematics based on a three inertial measurement unit (IMU) configuration was assessed in wheelchair basketball match-like conditions. Twenty participants performed a series of tests reflecting different motion aspects of wheelchair basketball. During the tests wheelchair kinematics were simultaneously measured using IMUs on wheels and frame, and a 24-camera optical motion analysis system serving as gold standard. Results showed only small deviations of the IMU method compared to the gold standard, once a newly developed skid correction algorithm was applied. Calculated Root Mean Square Errors (RMSE) showed good estimates for frame displacement (RMSE≤0.05 m) and speed (RMSE≤0.1m/s), except for three truly vigorous tests. Estimates of frame rotation in the horizontal plane (RMSE<3°) and rotational speed (RMSE<7°/s) were very accurate. Differences in calculated Instantaneous Rotation Centres (IRC) were small, but somewhat larger in tests performed at high speed (RMSE up to 0.19 m). Average test outcomes for linear speed (ICCs>0.90), rotational speed (ICC>0.99) and IRC (ICC> 0.90) showed high correlations between IMU data and gold standard. IMU based estimation of wheelchair kinematics provided reliable results, except for brief moments of wheel skidding in truly vigorous tests. The IMU method is believed to enable prospective research in wheelchair basketball match conditions and contribute to individual support of athletes in everyday sports practice.", "title": "" }, { "docid": "08a6297a0959e0c12801b603d585e12c", "text": "The national exchequer, the banking industry and regular citizens all incur a high overhead in using physical cash. Electronic cash and cell phone-based payment in particular is a viable alternative to physical cash since it incurs much lower overheads and offers more convenience. Because security is of paramount importance in financial transactions, it is imperative that attack vectors in this application be identified and analyzed. In this paper, we investigate vulnerabilities in several dimensions – in choice of hardware/software platform, in technology and in cell phone operating system. We examine how existing and future mobile worms can severely compromise the security of transacting payments through a cell phone.", "title": "" }, { "docid": "300e215e91bb49aef0fcb44c3084789e", "text": "We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference acoustic representation containing the desired prosody. 
We show that conditioning Tacotron on this learned embedding space results in synthesized audio that matches the prosody of the reference signal with fine time detail even when the reference and synthesis speakers are different. Additionally, we show that a reference prosody embedding can be used to synthesize text that is different from that of the reference utterance. We define several quantitative and subjective metrics for evaluating prosody transfer, and report results with accompanying audio samples from single-speaker and 44-speaker Tacotron models on a prosody transfer task.", "title": "" }, { "docid": "94c5f0bba64e131a64989813652846a5", "text": "The ability to access patents and relevant patent-related information pertaining to a patented technology can fundamentally transform the patent system and its functioning and patent institutions such as the USPTO and the federal courts. This paper describes an ontology-based computational framework that can resolve some of the difficult issues in retrieving patents and patent-related information for the legal and justice system.", "title": "" }, { "docid": "e7808c1fa1c5e02119a3c9da855f7499", "text": "Cloud computing provides users with great flexibility when provisioning resources, with cloud providers offering a choice of reservation and on-demand purchasing options. Reservation plans offer cheaper prices, but must be chosen in advance, and therefore must be appropriate to users' requirements. If demand is uncertain, the reservation plan may not be sufficient and on-demand resources have to be provisioned. Previous work focused on optimally placing virtual machines with cloud providers to minimize total cost. However, many applications require large amounts of network bandwidth. Therefore, considering only virtual machines offers an incomplete view of the system. Exploiting recent developments in software defined networking (SDN), we propose a unified approach that integrates virtual machine and network bandwidth provisioning. We solve a stochastic integer programming problem to obtain an optimal provisioning of both virtual machines and network bandwidth, when demand is uncertain. Numerical results clearly show that our proposed solution minimizes users' costs and provides superior performance to alternative methods. We believe that this integrated approach is the way forward for cloud computing to support network intensive applications.", "title": "" }, { "docid": "48b14b78512a8f63d3a9dcdf70d88182", "text": "Acute lymphocytic leukemia (ALL) is a malignant disease characterized by the accumulation of lymphoblasts in the bone marrow. An improved scheme for ALL detection in blood microscopic images is presented here. In this study, features, i.e., Hausdorff dimension and contour signature, are employed to classify a lymphocytic cell in the blood image into normal lymphocyte or lymphoblast (blasts). In addition, shape and texture features are also extracted for better classification. Initial segmentation is done using K-means clustering, which segregates leukocytes or white blood cells (WBC) from other blood components, i.e., erythrocytes and platelets. The results of K-means are used for evaluating individual cell shape, texture and other features for final detection of leukemia. Fractal features, i.e., Hausdorff dimension, are implemented for measuring perimeter roughness and hence classifying a lymphocytic cell nucleus.
A total of 108 blood smear images were considered for feature extraction and final performance evaluation is validated with the results of a hematologist.", "title": "" }, { "docid": "42903610920a47773627a33db25590f3", "text": "We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are “big” in terms of the size of recorded data but rarely have sufficient labels required to train complex models (e.g., conventional deep learning methods). Furthermore, in many scientific applications, the goal is to be able to understand the underlying features related to the classification, which prohibits the blind application of deep networks. This motivates the development of a new model based on parameterized convolutional filters guided by previous neuroscience research; the filters learn relevant frequency bands while targeting synchrony, which are frequency-specific power and phase correlations between electrodes. This results in a highly expressive convolutional neural network with only a few hundred parameters, applicable to smaller datasets. The proposed approach is demonstrated to yield competitive (often state-of-the-art) predictive performance during our empirical tests while yielding interpretable features. Furthermore, a Gaussian process adapter is developed to combine analysis over distinct electrode layouts, allowing the joint processing of multiple datasets to address overfitting and improve generalizability. Finally, it is demonstrated that the proposed framework effectively tracks neural dynamics on children in a clinical trial on Autism Spectrum Disorder.", "title": "" }, { "docid": "a436bdc20d63dcf4f0647005bb3314a7", "text": "The purpose of this study is to evaluate the feasibility of the integration of concept maps and tablet PCs in anti-phishing education for enhancing students’ learning motivation and achievement. The subjects were 155 students from grades 8 and 9. They were divided into an experimental group (77 students) and a control group (78 students). To begin with, the two groups received identical anti-phishing training: the teacher explained the concept of anti-phishing and asked the students questions; the students then used tablet PCs for polling and answering the teachers’ questions. Afterwards, the two groups performed different group activities: the experimental group was divided into smaller groups, which used tablet PCs to draw concept maps; the control group was also divided into groups which completed worksheets. The study found that the use of concept maps on tablet PCs during the anti-phishing education significantly enhanced the students’ learning motivation when their initial motivation was already high. For learners with low initial motivation or prior knowledge, the use of worksheets could increase their posttest achievement and motivation. This study therefore proposes that motivation and achievement in teaching the anti-phishing concept can be effectively enhanced if the curriculum is designed based on the students’ learning preferences or prior knowledge, in conjunction with the integration of mature and accessible technological media into the learning activities. The findings can also serve as a reference for anti-phishing educators and researchers.", "title": "" }, { "docid": "80ae8494ba7ebc70e9454d68f4dc5cbd", "text": "Advanced deep learning methods have been developed to conduct prostate MR volume segmentation in either a 2D or 3D fully convolutional manner. 
However, 2D methods tend to have limited segmentation performance, since large amounts of spatial information of prostate volumes are discarded during the slice-by-slice segmentation process; and 3D methods also have room for improvement, since they use isotropic kernels to perform 3D convolutions whereas most prostate MR volumes have anisotropic spatial resolution. Besides, the fully convolutional structural methods achieve good performance for localization issues but neglect the per-voxel classification for segmentation tasks. In this paper, we propose a 3D Global Convolutional Adversarial Network (3D GCA-Net) to address efficient prostate MR volume segmentation. We first design a 3D ResNet encoder to extract 3D features from prostate scans, and then develop the decoder, which is composed of a multi-scale 3D global convolutional block and a 3D boundary refinement block, to address the classification and localization issues simultaneously for volumetric segmentation. Additionally, we combine the encoder-decoder segmentation network with an adversarial network in the training phrase to enforce the contiguity of long-range spatial predictions. Throughout the proposed model, we use anisotropic convolutional processing for better feature learning on prostate MR scans. We evaluated our 3D GCA-Net model on two public prostate MR datasets and achieved state-of-the-art performances.", "title": "" }, { "docid": "15316c80d2a880b06846e8dd398a5c3f", "text": "One weak spot is all it takes to open secured digital doors and online accounts causing untold damage and consequences.", "title": "" }, { "docid": "3ea533be157b63e673f43205d195d13e", "text": "Recent work on fairness in machine learning has begun to be extended to recommender systems. While there is a tension between the goals of fairness and of personalization, there are contexts in which a global evaluations of outcomes is possible and where equity across such outcomes is a desirable goal. In this paper, we introduce the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. We show that a modified version of the SLIM algorithm can be used to improve the balance of user neighborhoods, with the result of achieving greater outcome fairness in a real-world dataset with minimal loss in ranking performance.", "title": "" }, { "docid": "87d51053f5e66aefaf24318cc2b3ba22", "text": "In this paper we study the distribution of average user rating of entities in three different domains: restaurants, movies, and products. We find that the distribution is heavily skewed, closely resembling a log-normal in all the cases. In contrast, the distribution of average critic rating is much closer to a normal distribution. We propose user selection bias as the underlying behavioral phenomenon causing this disparity in the two distributions. We show that selection bias can indeed lead to a skew in the distribution of user ratings even when we assume the quality of entities are normally distributed. Finally, we apply these insights to the problem of predicting the overall rating of an entity given its few initial ratings, and obtain a simple method that outperforms strong baselines.", "title": "" }, { "docid": "e6548454f46962b5ce4c5d4298deb8e7", "text": "The use of SVM (Support Vector Machines) in detecting e-mail as spam or nonspam by incorporating feature selection using GA (Genetic Algorithm) is investigated. 
An GA approach is adopted to select features that are most favorable to SVM classifier, which is named as GA-SVM. Scaling factor is exploited to measure the relevant coefficients of feature to the classification task and is estimated by GA. Heavy-bias operator is introduced in GA to promote sparse in the scaling factors of features. So, feature selection is performed by eliminating irrelevant features whose scaling factor is zero. The experiment results on UCI Spam database show that comparing with original SVM classifier, the number of support vector decreases while better classification results are achieved based on GA-SVM.", "title": "" }, { "docid": "a3f06bfcc2034483cac3ee200803878c", "text": "This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full details (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.", "title": "" }, { "docid": "6968d5646db3941b06d3763033cb8d45", "text": "Path prediction is useful in a wide range of applications. Most of the existing solutions, however, are based on eager learning methods where models and patterns are extracted from historical trajectories and then used for future prediction. Since such approaches are committed to a set of statistically significant models or patterns, problems can arise in dynamic environments where the underlying models change quickly or where the regions are not covered with statistically significant models or patterns.\n We propose a \"semi-lazy\" approach to path prediction that builds prediction models on the fly using dynamically selected reference trajectories. Such an approach has several advantages. First, the target trajectories to be predicted are known before the models are built, which allows us to construct models that are deemed relevant to the target trajectories. Second, unlike the lazy learning approaches, we use sophisticated learning algorithms to derive accurate prediction models with acceptable delay based on a small number of selected reference trajectories. Finally, our approach can be continuously self-correcting since we can dynamically re-construct new models if the predicted movements do not match the actual ones.\n Our prediction model can construct a probabilistic path whose probability of occurrence is larger than a threshold and which is furthest ahead in term of time. Users can control the confidence of the path prediction by setting a probability threshold. 
We conducted a comprehensive experimental study on real-world and synthetic datasets to show the effectiveness and efficiency of our approach.", "title": "" }, { "docid": "6b44ee250cce2aa7f7589d85cb26417f", "text": "Financial fraud under IoT environment refers to the unauthorized use ofmobile transaction usingmobile platform through identity theft or credit card stealing to obtain money fraudulently. Financial fraud under IoT environment is the fast-growing issue through the emergence of smartphone and online transition services. In the real world, a highly accurate process of financial fraud detection under IoT environment is needed since financial fraud causes financial loss. Therefore, we have surveyed financial fraud methods using machine learning and deep learning methodology, mainly from 2016 to 2018, and proposed a process for accurate fraud detection based on the advantages and limitations of each research. Moreover, our approach proposed the overall process of detecting financial fraud based on machine learning and compared with artificial neural networks approach to detect fraud and process large amounts of financial data. To detect financial fraud and process large amounts of financial data, our proposed process includes feature selection, sampling, and applying supervised and unsupervised algorithms. The final model was validated by the actual financial transaction data occurring in Korea, 2015.", "title": "" }, { "docid": "b76af76207fa3ef07e8f2fbe6436dca0", "text": "Face recognition applications for airport security and surveillance can benefit from the collaborative coupling of mobile and cloud computing as they become widely available today. This paper discusses our work with the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The challenge lies with how to perform task partitioning from mobile devices to cloud and distribute compute load among cloud servers (cloudlet) to minimize the response time given diverse communication latencies and server compute powers. Our preliminary simulation results show that optimal task partitioning algorithms significantly affect response time with heterogeneous latencies and compute powers. Motivated by these results, we design, implement, and validate the basic functionalities of MOCHA as a proof-of-concept, and develop algorithms that minimize the overall response time for face recognition. Our experimental results demonstrate that high-powered cloudlets are technically feasible and indeed help reduce overall processing time when face recognition applications run on mobile devices using the cloud as the backend servers.", "title": "" } ]
scidocsrr
2d63698cbd21c4bed006f2b126a476dc
A Combined Deep Learning GRU-Autoencoder for the Early Detection of Respiratory Disease in Pigs Using Multiple Environmental Sensors
[ { "docid": "ba9122284ddc43eb3bc4dff89502aa9d", "text": "Recent advancements in sensor technology have made it possible to collect enormous amounts of data in real time. However, because of the sheer volume of data most of it will never be inspected by an algorithm, much less a human being. One way to mitigate this problem is to perform some type of anomaly (novelty /interestingness/surprisingness) detection and flag unusual patterns for further inspection by humans or more CPU intensive algorithms. Most current solutions are “custom made” for particular domains, such as ECG monitoring, valve pressure monitoring, etc. This customization requires extensive effort by domain expert. Furthermore, hand-crafted systems tend to be very brittle to concept drift. In this demonstration, we will show an online anomaly detection system that does not need to be customized for individual domains, yet performs with exceptionally high precision/recall. The system is based on the recently introduced idea of time series bitmaps. To demonstrate the universality of our system, we will allow testing on independently annotated datasets from domains as diverse as ECGs, Space Shuttle telemetry monitoring, video surveillance, and respiratory data. In addition, we invite attendees to test our system with any dataset available on the web.", "title": "" } ]
[ { "docid": "1095c68a0843daf6b8a3c8abc5ddc521", "text": "Crosslingual word embeddings represent lexical items from different languages using the same vector space, enabling crosslingual transfer. Most prior work constructs embeddings for a pair of languages, with English on one side. We investigate methods for building high quality crosslingual word embeddings for many languages in a unified vector space. In this way, we can exploit and combine information from many languages. We report competitive performance on bilingual lexicon induction, monolingual similarity and crosslingual document classification", "title": "" }, { "docid": "2f201cd1fe90e0cd3182c672110ce96d", "text": "BACKGROUND\nFor many years, high dose radiation therapy was the standard treatment for patients with locally or regionally advanced non-small-cell lung cancer (NSCLC), despite a 5-year survival rate of only 3%-10% following such therapy. From May 1984 through May 1987, the Cancer and Leukemia Group B (CALGB) conducted a randomized trial that showed that induction chemotherapy before radiation therapy improved survival during the first 3 years of follow-up.\n\n\nPURPOSE\nThis report provides data for 7 years of follow-up of patients enrolled in the CALGB trial.\n\n\nMETHODS\nThe patient population consisted of individuals who had clinical or surgical stage III, histologically documented NSCLC; a CALGB performance status of 0-1; less than 5% loss of body weight in the 3 months preceding diagnosis; and radiographically visible disease. Patients were randomly assigned to receive either 1) cisplatin (100 mg/m2 body surface area intravenously on days 1 and 29) and vinblastine (5 mg/m2 body surface area intravenously weekly on days 1, 8, 15, 22, and 29) followed by radiation therapy with 6000 cGy given in 30 fractions beginning on day 50 (CT-RT group) or 2) radiation therapy with 6000 cGy alone beginning on day 1 (RT group) for a maximum duration of 6-7 weeks. Patients were evaluated for tumor regression if they had measurable or evaluable disease and were monitored for toxic effects, disease progression, and date of death.\n\n\nRESULTS\nThere were 78 eligible patients randomly assigned to the CT-RT group and 77 randomly assigned to the RT group. Both groups were similar in terms of sex, age, histologic cell type, performance status, substage of disease, and whether staging had been clinical or surgical. All patients had measurable or evaluable disease at the time of random assignment to treatment groups. Both groups received a similar quantity and quality of radiation therapy. As previously reported, the rate of tumor response, as determined radiographically, was 56% for the CT-RT group and 43% for the RT group (P = .092). After more than 7 years of follow-up, the median survival remains greater for the CT-RT group (13.7 months) than for the RT group (9.6 months) (P = .012) as ascertained by the logrank test (two-sided). The percentages of patients surviving after years 1 through 7 were 54, 26, 24, 19, 17, 13, and 13 for the CT-RT group and 40, 13, 10, 7, 6, 6, and 6 for the RT group.\n\n\nCONCLUSIONS\nLong-term follow-up confirms that patients with stage III NSCLC who receive 5 weeks of chemotherapy with cisplatin and vinblastine before radiation therapy have a 4.1-month increase in median survival. The use of sequential chemotherapy-radiotherapy increases the projected proportion of 5-year survivors by a factor of 2.8 compared with that of radiotherapy alone. 
However, inasmuch as 80%-85% of such patients still die within 5 years and because treatment failure occurs both in the irradiated field and at distant sites in patients receiving either sequential chemotherapy-radiotherapy or radiotherapy alone, the need for further improvements in both the local and systemic treatment of this disease persists.", "title": "" }, { "docid": "819c9080a44a5ff6c8d99d37e82c7a0a", "text": "In this study we introduce a new indicator for private consumption based on search query time series provided by Google Trends. The indicator is based on factors extracted from consumption-related search categories of the Google Trends application Insights for Search. The forecasting performance of the new indicator is assessed relative to the two most common survey-based indicators, the University of Michigan Consumer Sentiment Index and the Conference Board Consumer Confidence Index. The results show that in almost all conducted in-sample and out-of-sample forecasting experiments the Google indicator outperforms the survey-based indicators. This suggests that incorporating information from Google Trends may offer significant benefits to forecasters of private consumption. JEL Classification: C53, E21, E27", "title": "" }, { "docid": "bf65f2c68808755cfcd13e6cc7d0ccab", "text": "Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.", "title": "" }, { "docid": "d909528f98e49f8107bf0cee7a83bbfe", "text": "INTRODUCTION\nThe increasing use of cone-beam computed tomography in orthodontics has been coupled with heightened concern about the long-term risks of x-ray exposure in orthodontic populations.
An industry response to this has been to offer low-exposure alternative scanning options in newer cone-beam computed tomography models.\n\n\nMETHODS\nEffective doses resulting from various combinations of field of view size and field location comparing child and adult anthropomorphic phantoms with the recently introduced i-CAT FLX cone-beam computed tomography unit (Imaging Sciences, Hatfield, Pa) were measured with optical stimulated dosimetry using previously validated protocols. Scan protocols included high resolution (360° rotation, 600 image frames, 120 kV[p], 5 mA, 7.4 seconds), standard (360°, 300 frames, 120 kV[p], 5 mA, 3.7 seconds), QuickScan (180°, 160 frames, 120 kV[p], 5 mA, 2 seconds), and QuickScan+ (180°, 160 frames, 90 kV[p], 3 mA, 2 seconds). Contrast-to-noise ratio was calculated as a quantitative measure of image quality for the various exposure options using the QUART DVT phantom.\n\n\nRESULTS\nChild phantom doses were on average 36% greater than adult phantom doses. QuickScan+ protocols resulted in significantly lower doses than standard protocols for the child (P = 0.0167) and adult (P = 0.0055) phantoms. The 13 × 16-cm cephalometric fields of view ranged from 11 to 85 μSv in the adult phantom and 18 to 120 μSv in the child phantom for the QuickScan+ and standard protocols, respectively. The contrast-to-noise ratio was reduced by approximately two thirds when comparing QuickScan+ with standard exposure parameters.\n\n\nCONCLUSIONS\nQuickScan+ effective doses are comparable with conventional panoramic examinations. Significant dose reductions are accompanied by significant reductions in image quality. However, this trade-off might be acceptable for certain diagnostic tasks such as interim assessment of treatment results.", "title": "" }, { "docid": "e90b54f7ae5ebc0b46d0fb738bb0f458", "text": "The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark.", "title": "" }, { "docid": "21c4cd3a91a659fcd3800967943a2ffd", "text": "Ground reaction force (GRF) measurement is important in the analysis of human body movements. The main drawback of the existing measurement systems is the restriction to a laboratory environment. This study proposes an ambulatory system for assessing the dynamics of ankle and foot, which integrates the measurement of the GRF with the measurement of human body movement. The GRF and the center of pressure (CoP) are measured using two 6D force/moment sensors mounted beneath the shoe. The movement of the foot and the lower leg is measured using three miniature inertial sensors, two rigidly attached to the shoe and one to the lower leg. The proposed system is validated using a force plate and an optical position measurement system as a reference. 
The results show good correspondence between both measurement systems, except for the ankle power. The root mean square (rms) difference of the magnitude of the GRF over 10 evaluated trials was 0.012 ± 0.001 N/N (mean ± standard deviation), being 1.1 ± 0.1 % of the maximal GRF magnitude. It should be noted that the forces, moments, and powers are normalized with respect to body weight. The CoP estimation using both methods shows good correspondence, as indicated by the rms difference of 5.1± 0.7 mm, corresponding to 1.7 ± 0.3 % of the length of the shoe. The rms difference between the magnitudes of the heel position estimates was calculated as 18 ± 6 mm, being 1.4 ± 0.5 % of the maximal magnitude. The ankle moment rms difference was 0.004 ± 0.001 Nm/N, being 2.3 ± 0.5 % of the maximal magnitude. Finally, the rms difference of the estimated power at the ankle was 0.02 ± 0.005 W/N, being 14 ± 5 % of the maximal power. This power difference is caused by an inaccurate estimation of the angular velocities using the optical reference measurement system, which is due to considering the foot as a single segment. The ambulatory system considers separate heel and forefoot segments, thus allowing an additional foot moment and power to be estimated. Based on the results of this research, it is concluded that the combination of the instrumented shoe and inertial sensing is a promising tool for the assessment of the dynamics of foot and ankle in an ambulatory setting.", "title": "" }, { "docid": "07c39fa141334c0b18ecb274a50bed44", "text": "Virtual reality (VR) using head-mounted displays (HMDs) is becoming popular. Smartphone-based HMDs (SbHMDs) are so low cost that users can easily experience VR. Unfortunately, their input modality is quite limited. We propose a real-time eye tracking technique that uses the built-in front facing camera to capture the user's eye. It realizes stand-alone pointing functionality without any additional device.", "title": "" }, { "docid": "499075bf796e8f914d1c925258497144", "text": "In this paper we examine the social and legal conditions in which many transgender people (often called trans people) live, and the medical perspectives that frame the provision of health care for transgender people across much of the world. Modern research shows much higher numbers of transgender people than were apparent in earlier clinic-based studies, as well as biological factors associated with gender incongruence. We examine research showing that many transgender people live on the margins of society, facing stigma, discrimination, exclusion, violence, and poor health. They often experience difficulties accessing appropriate health care, whether specific to their gender needs or more general in nature. Some governments are taking steps to address human rights issues and provide better legal protection for transgender people, but this action is by no means universal. The mental illness perspective that currently frames health-care provision for transgender people across much of the world is under scrutiny. The WHO diagnostic manual may soon abandon its current classification of transgender people as mentally disordered. 
Debate exists as to whether there should be a diagnosis of any sort for transgender children below the age of puberty.", "title": "" }, { "docid": "b8f7e3d3470766750374cefb9f4c9210", "text": "For years, recursive neural networks (RvNNs) have been shown to be suitable for representing text into fixed-length vectors and achieved good performance on several natural language processing tasks. However, the main drawback of RvNNs is that they require structured input, which makes data preparation and model implementation hard. In this paper, we propose Gumbel Tree-LSTM, a novel tree-structured long short-term memory architecture that learns how to compose task-specific tree structures only from plain text data efficiently. Our model uses Straight-Through Gumbel-Softmax estimator to decide the parent node among candidates dynamically and to calculate gradients of the discrete decision. We evaluate the proposed model on natural language inference and sentiment analysis, and show that our model outperforms or is at least comparable to previous models. We also find that our model converges significantly faster than other models.", "title": "" }, { "docid": "c1ee5f717481652d91431f647401d6d2", "text": "Cluster ensembles have recently emerged as a powerful alternative to standard cluster analysis, aggregating several input data clusterings to generate a single output clustering, with improved robustness and stability. From the early work, these techniques held great promise; however, most of them generate the final solution based on incomplete information of a cluster ensemble. The underlying ensemble-information matrix reflects only cluster-data point relations, while those among clusters are generally overlooked. This paper presents a new link-based approach to improve the conventional matrix. It achieves this using the similarity between clusters that are estimated from a link network model of the ensemble. In particular, three new link-based algorithms are proposed for the underlying similarity assessment. The final clustering result is generated from the refined matrix using two different consensus functions of feature-based and graph-based partitioning. This approach is the first to address and explicitly employ the relationship between input partitions, which has not been emphasized by recent studies of matrix refinement. The effectiveness of the link-based approach is empirically demonstrated over 10 data sets (synthetic and real) and three benchmark evaluation measures. The results suggest the new approach is able to efficiently extract information embedded in the input clusterings, and regularly illustrate higher clustering quality in comparison to several state-of-the-art techniques.", "title": "" }, { "docid": "616fcc0eb15da5d2e2b6ce2e63ad49dd", "text": "Many factory optimization problems, from inventory control to scheduling and reliability , can be formulated as continuous-time Markov decision processes. A primary goal in such problems is to nd a gain-optimal policy that minimizes the long-run average cost. This paper describes a new average-reward algorithm called SMART for nd-ing gain-optimal policies in continuous time semi-Markov decision processes. The paper presents a detailed experimental study of SMART on a large unreliable production inventory problem. SMART outperforms two well-known reliability heuristics from industrial engineering. 
A key feature of this study is the integration of the reinforcement learning algorithm directly into two commercial discrete-event simulation packages, ARENA and CSIM, paving the way for this approach to be applied to many other factory optimization problems for which there already exist simulation models.", "title": "" }, { "docid": "904efe77c6a31867cf096770b99e856b", "text": "Deep neural networks have been proven powerful at processing perceptual data, such as images and audio. However for tabular data, tree-based models are more popular. A nice property of tree-based models is their natural interpretability. In this work, we present Deep Neural Decision Trees (DNDT) – tree models realised by neural networks. A DNDT is intrinsically interpretable, as it is a tree. Yet as it is also a neural network (NN), it can be easily implemented in NN toolkits, and trained with gradient descent rather than greedy splitting. We evaluate DNDT on several tabular datasets, verify its efficacy, and investigate similarities and differences between DNDT and vanilla decision trees. Interestingly, DNDT self-prunes at both split and feature-level.", "title": "" }, { "docid": "28c3e990b40b62069010e0a7f94adb11", "text": "Steep sub-threshold transistors are promising candidates to replace the traditional MOSFETs for sub-threshold leakage reduction. In this paper, we explore the use of Inter-Band Tunnel Field Effect Transistors (TFETs) in SRAMs at ultra low supply voltages. The uni-directional current conducting TFETs limit the viability of 6T SRAM cells. To overcome this limitation, 7T SRAM designs were proposed earlier at the cost of extra silicon area. In this paper, we propose a novel 6T SRAM design using Si-TFETs for reliable operation with low leakage at ultra low voltages. We also demonstrate that a functional 6T TFET SRAM design with comparable stability margins and faster performances at low voltages can be realized using proposed design when compared with the 7T TFET SRAM cell. We achieve a leakage reduction improvement of 700X and 1600X over traditional CMOS SRAM designs at VDD of 0.3V and 0.5V respectively which makes it suitable for use at ultra-low power applications.", "title": "" }, { "docid": "9798859ddb2d29fa461dab938c5183bb", "text": "The emergence of the extended manufacturing enterprise, a globally dispersed collection of strategically aligned organizations, has brought new attention to how organizations coordinate the flow of information and materials across their w supply chains. This paper explores and develops the concept of enterprise logistics Greis, N.P., Kasarda, J.D., 1997. Ž . x Enterprise logistics in the information age. California Management Review 39 3 , 55–78 as a tool for integrating the logistics activities both within and between the strategically aligned organizations of the extended enterprise. Specifically, this paper examines the fit between an organization’s enterprise logistics integration capabilities and its supply chain structure. Using a configurations approach, we test whether globally dispersed network organizations that adopt enterprise logistics practices are able to achieve higher levels of organizational performance. Results indicate that enterprise logistics is a necessary tool for the coordination of supply chain operations that are geographically dispersed around the world. However, for a pure network structure, a high level of enterprise logistics integration alone does not guarantee improved organizational performance. 
The paper ends with a discussion of managerial implications and directions for future research. q 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "392d6bf78f8a8a59f08d8102cec3ea91", "text": "Cancellous and cortical autografts histologically have three differences: (1) cancellous grafts are revascularized more rapidly and completely than cortical grafts; (2) creeping substitution of cancellous bone initially involves an appositional bone formation phase, followed by a resorptive phase, whereas cortical grafts undergo a reverse creeping substitution process; (3) cancellous grafts tend to repair completely with time, whereas cortical grafts remain as admixtures of necrotic and viable bone. Physiologic skeletal metabolic factors influence the rate, amount, and completeness of bone repair and graft incorporation. The mechanical strengths of cancellous and cortical grafts are correlated with their respective repair processes: cancellous grafts tend to be strengthened first, whereas cortical grafts are weakened. Bone allografts are influenced by the same immunologic factors as other tissue grafts. Fresh bone allografts may be rejected by the host's immune system. The histoincompatibility antigens of bone allografts are presumably the proteins or glycoproteins on cell surfaces. The matrix proteins may or may not elicit graft rejection. The rejection of a bone allograft is considered to be a cellular rather than a humoral response, although the humoral component may play a part. The degree of the host response to an allograft may be related to the antigen concentration and total dose. The rejection of a bone allograft is histologically expressed by the disruption of vessels, an inflammatory process including lymphocytes, fibrous encapsulation, peripheral graft resorption, callus bridging, nonunions, and fatigue fractures.", "title": "" }, { "docid": "c4fa5e593136812de615bcfae07ac5a9", "text": "BACKGROUND\nDelirium in critically ill children is a severe neuropsychiatric disorder which has gained increased attention from clinicians. Early identification of delirium is essential for successful management. The Sophia Observation withdrawal Symptoms-Paediatric Delirium (SOS-PD) scale was developed to detect Paediatric Delirium (PD) at an early stage.\n\n\nOBJECTIVE\nThe aim of this study was to determine the measurement properties of the PD component of the SOS-PD scale in critically ill children.\n\n\nMETHODS\nA prospective, observational study was performed in patients aged 3 months or older and admitted for more than 48h. These patients were assessed with the SOS-PD scale three times a day. If the SOS-PD total score was 4 or higher in two consecutive observations, the child psychiatrist was consulted to assess the diagnosis of PD using the Diagnostic and Statistical Manual-IV criteria as the \"gold standard\". The child psychiatrist was blinded to outcomes of the SOS-PD. The interrater reliability of the SOS-PD between the care-giving nurse and a researcher was calculated with the intraclass correlation coefficient (ICC).\n\n\nRESULTS\nA total of 2088 assessments were performed in 146 children (median age 49 months; IQR 13-140). The ICC of 16 paired nurse-researcher observations was 0.90 (95% CI 0.70-0.96). We compared 63 diagnoses of the child psychiatrist versus SOS-PD assessments in 14 patients, in 13 of whom the diagnosis of PD was confirmed. 
The sensitivity was 96.8% (95% CI 80.4-99.5%) and the specificity was 92.0% (95% CI 59.7-98.9%).\n\n\nCONCLUSIONS\nThe SOS-PD scale shows promising validity for early screening of PD. Further evidence should be obtained from an international multicentre study.", "title": "" }, { "docid": "71cac5680dafbc3c56dbfffa4472b67a", "text": "Three-dimensional printing has significant potential as a fabrication method in creating scaffolds for tissue engineering. The applications of 3D printing in the field of regenerative medicine and tissue engineering are limited by the variety of biomaterials that can be used in this technology. Many researchers have developed novel biomaterials and compositions to enable their use in 3D printing methods. The advantages of fabricating scaffolds using 3D printing are numerous, including the ability to create complex geometries, porosities, co-culture of multiple cells, and incorporate growth factors. In this review, recently-developed biomaterials for different tissues are discussed. Biomaterials used in 3D printing are categorized into ceramics, polymers, and composites. Due to the nature of 3D printing methods, most of the ceramics are combined with polymers to enhance their printability. Polymer-based biomaterials are 3D printed mostly using extrusion-based printing and have a broader range of applications in regenerative medicine. The goal of tissue engineering is to fabricate functional and viable organs and, to achieve this, multiple biomaterials and fabrication methods need to be researched.", "title": "" }, { "docid": "0305bac1e39203b49b794559bfe0b376", "text": "The emerging field of semantic web technologies promises new stimulus for Software Engineering research. However, since the underlying concepts of the semantic web have a long tradition in the knowledge engineering field, it is sometimes hard for software engineers to overlook the variety of ontology-enabled approaches to Software Engineering. In this paper we therefore present some examples of ontology applications throughout the Software Engineering lifecycle. We discuss the advantages of ontologies in each case and provide a framework for classifying the usage of ontologies in Software Engineering.", "title": "" }, { "docid": "fe65dd3bd5f11bea22c5421e84fad8da", "text": "*This contract was given to the United States Association for Small Business and Entrepreneurship (USASBE) for a best doctoral student paper award, presented to the awardees at the USASBE annual meeting. The opinions and recommendations of the authors of this study do not necessarily reflect official policies of the SBA or other agencies of the U.S. government. Note The 2009 Office of Advocacy Best Doctoral Paper award was presented to Pankaj Patel and Rodney D'Souza, doctoral students at the University of Louisville, at the United States Association for Small Business and Entrepreneurship (USASBE) annual meeting. Purpose Export strategy has become increasingly important for SMEs in recent years. To realize the full potential of export strategy, SMEs must be able to address challenges in export markets successfully. A firm must have adequate capabilities to meet unique challenges in such efforts. However, SMEs are limited by their access to resources and capabilities. While prior studies have looked at the importance of organizational learning in export strategy, they have overlooked the firm capabilities that facilitate the use of the learning. 
As firms that partake in export activity are entrepreneurial in nature, these firms would benefit by proactively seeking new markets, engaging in innovative action to meet local market needs, and be able and willing to take risks by venturing into previously unknown markets. The authors of this paper propose that SMEs make use of capabilities such as entrepreneurial orientation in an attempt to reduce impediments to exporting, which in turn could lead to enhanced export performance. This study finds that proactivity and risk-taking play a role in enhancing export performance of SMEs. However, it did not find support for innovation as a factor that enhances export performance. These findings could mean that firms that are proactive in nature are better at reducing export impediments. This is because these firms are able to bring new products quickly into the marketplace, and are better able to anticipate future demand, creating a first mover advantage. The results of the study also suggest that risk-taking firms might choose strategies that move away from the status quo, thereby increasing the firm's engagement in process enhancements, new product services, innovative marketing techniques , and the like. The data for this report were collected for the National Federation of Independent Business by the executive interviewing group of The Gallup Organization. The survey focused on international trade efforts of small manufacturers …", "title": "" } ]
scidocsrr
cbc5fcfac69eb91fe3ec48f00893245a
Efficient Partial Order Preserving Unsupervised Feature Selection on Networks
[ { "docid": "d7573e7b3aac75b49132076ce9fc83e0", "text": "The prevalent use of social media produces mountains of unlabeled, high-dimensional data. Feature selection has been shown effective in dealing with high-dimensional data for efficient data mining. Feature selection for unlabeled data remains a challenging task due to the absence of label information by which the feature relevance can be assessed. The unique characteristics of social media data further complicate the already challenging problem of unsupervised feature selection, (e.g., part of social media data is linked, which makes invalid the independent and identically distributed assumption), bringing about new challenges to traditional unsupervised feature selection algorithms. In this paper, we study the differences between social media data and traditional attribute-value data, investigate if the relations revealed in linked data can be used to help select relevant features, and propose a novel unsupervised feature selection framework, LUFS, for linked social media data. We perform experiments with real-world social media datasets to evaluate the effectiveness of the proposed framework and probe the working of its key components.", "title": "" }, { "docid": "2052b47be2b5e4d0c54ab0be6ae1958b", "text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .", "title": "" }, { "docid": "227786365219fe1efab6414bae0d8cdb", "text": "Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open.\n We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. 
We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function.\n Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.", "title": "" }, { "docid": "81c90998c5e456be34617e702dbfa4f5", "text": "In this paper, a new unsupervised learning algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), is proposed. To exploit the discriminative information in unsupervised scenarios, we perform spectral clustering to learn the cluster labels of the input samples, during which the feature selection is performed simultaneously. The joint learning of the cluster labels and feature selection matrix enables NDFS to select the most discriminative features. To learn more accurate cluster labels, a nonnegative constraint is explicitly imposed to the class indicators. To reduce the redundant or even noisy features, ℓ2,1-norm minimization constraint is added into the objective function, which guarantees the feature selection matrix sparse in rows. Our algorithm exploits the discriminative information and feature correlation simultaneously to select a better feature subset. A simple yet efficient iterative algorithm is designed to optimize the proposed objective function. Experimental results on different real world datasets demonstrate the encouraging performance of our algorithm over the state-of-the-arts. Introduction The dimension of data is often very high in many domains (Jain and Zongker 1997; Guyon and Elisseeff 2003), such as image and video understanding (Wang et al. 2009a; 2009b), and bio-informatics. In practice, not all the features are important and discriminative, since most of them are often correlated or redundant to each other, and sometimes noisy (Duda, Hart, and Stork 2001; Liu, Wu, and Zhang 2011). These features may result in adverse effects in some learning tasks, such as over-fitting, low efficiency and poor performance (Liu, Wu, and Zhang 2011). Consequently, it is necessary to reduce dimensionality, which can be achieved by feature selection or transformation to a low dimensional space. In this paper, we focus on feature selection, which is to choose discriminative features by eliminating the ones with little or no predictive information based on certain criteria. Many feature selection algorithms have been proposed, which can be classified into three main families: filter, wrapper, and embedded methods. The filter methods (Duda, Hart, and Stork 2001; He, Cai, and Niyogi 2005; Zhao and Liu 2007; Masaeli, Fung, and Dy 2010; Liu, Wu, and Zhang 2011; Yang et al. 2011a) use statistical properties of the features to filter out poorly informative ones. They are usually performed before applying classification algorithms. They select a subset of features only based on the intrinsic properties of the data. In the wrapper approaches (Guyon and Elisseeff 2003; Rakotomamonjy 2003), feature selection is “wrapped” in a learning algorithm and the classification performance of features is taken as the evaluation criterion. 
Embedded methods (Vapnik 1998; Zhu et al. 2003) perform feature selection in the process of model construction. In contrast with filter methods, wrapper and embedded methods are tightly coupled with in-built classifiers, which causes that they are less generality and computationally expensive. In this paper, we focus on the filter feature selection algorithm. Because of the importance of discriminative information in data analysis, it is beneficial to exploit discriminative information for feature selection, which is usually encoded in labels. However, how to select discriminative features in unsupervised scenarios is a significant but hard task due to the lack of labels. In light of this, we propose a novel unsupervised feature selection algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), in this paper. We perform spectral clustering and feature selection simultaneously to select the discriminative features for unsupervised learning. The cluster label indicators are obtained by spectral clustering to guide the feature selection procedure. Different from most of the previous spectral clustering algorithms (Shi and Malik 2000; Yu and Shi 2003), we explicitly impose a nonnegative constraint into the objective function, which is natural and reasonable as discussed later in this paper. With nonnegative and orthogonality constraints, the learned cluster indicators are much closer to the ideal results and can be readily utilized to obtain cluster labels. Our method exploits the discriminative information and feature correlation in a joint framework. For the sake of feature selection, the feature selection matrix is constrained to be sparse in rows, which is formulated as ℓ2,1-norm minimization term. To solve the proposed problem, a simple yet effective iterative algorithm is proposed. Extensive experiments are conducted on different datasets, which show that the proposed approach outperforms the state-of-the-arts in different applications.", "title": "" } ]
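The LUFS and NDFS passages above come down to the same mechanics: learn a feature-weight matrix against cluster pseudo-labels and keep the features whose rows of that matrix have large ℓ2 norm. The sketch below only illustrates that row-sparsity scoring, not the published algorithms: KMeans stands in for spectral clustering, a closed-form ridge solve replaces the constrained iterative optimization of the ℓ2,1-regularized objective, and the synthetic data, cluster count, and regularization strength are arbitrary assumptions.

```python
# Minimal sketch: rank features by the row-wise L2 norms of a transformation
# matrix learned against cluster "pseudo-labels", in the spirit of the
# NDFS-style objective summarized above. KMeans stands in for spectral
# clustering; a ridge solve replaces the l2,1-regularized iterative solver.
import numpy as np
from sklearn.cluster import KMeans

def feature_scores(X, n_clusters=3, reg=1.0):
    n, d = X.shape
    # Pseudo cluster-indicator matrix F (n x k): one-hot cluster memberships.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    F = np.eye(n_clusters)[labels]
    # Ridge solve for W in X W ~ F (d x k); the real method instead adds an
    # l2,1 penalty on W plus nonnegativity/orthogonality constraints on F.
    W = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ F)
    # l2,1-style score: the L2 norm of each row of W; larger = more relevant.
    return np.linalg.norm(W, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    X[:, 0] += np.repeat([0.0, 3.0], 100)      # make feature 0 informative
    scores = feature_scores(X, n_clusters=2)
    print(np.argsort(scores)[::-1])            # features ranked by relevance
```

Ranking by row norms is the step the ℓ2,1 penalty is meant to justify: a feature whose entire row is driven toward zero contributes nothing to predicting any cluster indicator and can be discarded.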
[ { "docid": "c1f6052ecf802f1b4b2e9fd515d7ea15", "text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.", "title": "" }, { "docid": "f1af321a5d7c2e738c181373d5dbfc9a", "text": "This research examined how motivation (perceived control, intrinsic motivation, and extrinsic motivation), cognitive learning strategies (deep and surface strategies), and intelligence jointly predict long-term growth in students' mathematics achievement over 5 years. Using longitudinal data from six annual waves (Grades 5 through 10; Mage  = 11.7 years at baseline; N = 3,530), latent growth curve modeling was employed to analyze growth in achievement. Results showed that the initial level of achievement was strongly related to intelligence, with motivation and cognitive strategies explaining additional variance. In contrast, intelligence had no relation with the growth of achievement over years, whereas motivation and learning strategies were predictors of growth. These findings highlight the importance of motivation and learning strategies in facilitating adolescents' development of mathematical competencies.", "title": "" }, { "docid": "d292d1334594bec8531e6011fabaafd2", "text": "Insight into the growth (or shrinkage) of “knowledge communities” of authors that build on each other's work can be gained by studying the evolution over time of clusters of documents. We cluster documents based on the documents they cite in common using the Streemer clustering method, which finds cohesive foreground clusters (the knowledge communities) embedded in a diffuse background. We build predictive models with features based on the citation structure, the vocabulary of the papers, and the affiliations and prestige of the authors and use these models to study the drivers of community growth and the predictors of how widely a paper will be cited. 
We find that scientific knowledge communities tend to grow more rapidly if their publications build on diverse information and use narrow vocabulary and that papers that lie on the periphery of a community have the highest impact, while those not in any community have the lowest impact.", "title": "" }, { "docid": "09c5da2fbf8a160ba27221ff0c5417ac", "text": " The burst fracture of the spine was first described by Holdsworth in 1963 and redefined by Denis in 1983 as being a fracture of the anterior and middle columns of the spine with or without an associated posterior column fracture. This injury has received much attention in the literature as regards its radiological diagnosis and also its clinical managment. The purpose of this article is to review the way that imaging has been used both to diagnose the injury and to guide management. Current concepts of the stability of this fracture are presented and our experience in the use of magnetic resonance imaging in deciding treatment options is discussed.", "title": "" }, { "docid": "737231466c50ac647f247b60852026e2", "text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people are accessing key-based security systems. Existing methods of obtaining such secret information rely on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user’s fine-grained hand movements, which enable attackers to reproduce the trajectories of the user’s hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user’s hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 7,000 key entry traces collected from 20 adults for key-based security systems (i.e., ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80 percent accuracy with only one try and more than 90 percent accuracy with three tries. Moreover, the performance of our system is consistently good even under low sampling rate and when inferring long PIN sequences. To the best of our knowledge, this is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.", "title": "" }, { "docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd", "text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. 
Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "title": "" }, { "docid": "29e5f1dfc38c48f5296d9dde3dbc3172", "text": "Low-cost smartphone adapters can bring virtual reality to the masses, but input is typically limited to using head tracking, which makes it difficult to perform complex tasks like navigation. Walking-in-place (WIP) offers a natural and immersive form of virtual locomotion that can reduce simulation sickness. We present VR-Drop; an immersive puzzle game that illustrates the use of WIP for virtual locomotion. Our WIP implementation doesn't require any instrumentation as it is implemented using a smartphone's inertial sensors. VR-Drop demonstrates that WIP can significantly increase VR input options and allows for a deep and immersive VR experience.", "title": "" }, { "docid": "527c1e2a78e7f171025231a475a828b9", "text": "Cryptography is the science to transform the information in secure way. Encryption is best alternative to convert the data to be transferred to cipher data which is an unintelligible image or data which cannot be understood by any third person. Images are form of the multimedia data. There are many image encryption schemes already have been proposed, each one of them has its own potency and limitation. This paper presents a new algorithm for the image encryption/decryption scheme which has been proposed using chaotic neural network. Chaotic system produces the same results if the given inputs are same, it is unpredictable in the sense that it cannot be predicted in what way the system's behavior will change for any little change in the input to the system. The objective is to investigate the use of ANNs in the field of chaotic Cryptography. The weights of neural network are achieved based on chaotic sequence. The chaotic sequence generated and forwarded to ANN and weighs of ANN are updated which influence the generation of the key in the encryption algorithm. The algorithm has been implemented in the software tool MATLAB and results have been studied. To compare the relative performance peak signal to noise ratio (PSNR) and mean square error (MSE) are used.", "title": "" }, { "docid": "429ac6709131b648bb44a6ccaebe6a19", "text": "We highlight a practical yet rarely discussed problem in dialogue state tracking (DST), namely handling unknown slot values. Previous approaches generally assume predefined candidate lists and thus are not designed to output unknown values, especially when the spoken language understanding (SLU) module is absent as in many end-to-end (E2E) systems. We describe in this paper an E2E architecture based on the pointer network (PtrNet) that can effectively extract unknown slot values while still obtains state-of-the-art accuracy on the standard DSTC2 benchmark. 
We also provide extensive empirical evidence to show that tracking unknown values can be challenging and our approach can bring significant improvement with the help of an effective feature dropout technique.", "title": "" }, { "docid": "9ac90eeb0dec90578e060828b210a120", "text": "Computer networks are limited in performance by the electronic equipment. Terminals have received little attention, but need to be redesigned in order to be able to manage 10 Gigabit Ethernet. The Internet checksum computation, which is used in TCP and UDP requires specialized processing resources. The TUCFP hardware accelerator calculates the Internet checksum. It processes 32 bits in parallel and is designed for easy integration in the general purpose protocol processor. It handles UDP as well as TCP packets in both IPv4 and IPv6 environments. A synthesized implementation for 0.18 micron technology proves a throughput of over 12 Gigabits/s.", "title": "" }, { "docid": "ea52c884ddfb34ce3336f6795455ddbe", "text": "In this paper we introduce Smooth Particle Networks (SPNets), a framework for integrating fluid dynamics with deep networks. SPNets adds two new layers to the neural network toolbox: ConvSP and ConvSDF, which enable computing physical interactions with unordered particle sets. We use these layers in combination with standard neural network layers to directly implement fluid dynamics inside a deep network, where the parameters of the network are the fluid parameters themselves (e.g., viscosity, cohesion, etc.). Because SPNets are implemented as a neural network, the resulting fluid dynamics are fully differentiable. We then show how this can be successfully used to learn fluid parameters from data, perform liquid control tasks, and learn policies to manipulate liquids.", "title": "" }, { "docid": "587682bab865d7d6386a4951aacba45b", "text": "Biogeography-based optimization algorithm(BBO) is a new kind of optimization algorithm based on Biogeography. It mimics the migration strategy of animals to solve the problem of optimization. A new algorithm for Traveling Salesman Problem(TSPBBO) based on BBO is presented in this paper. It is tested on classical TSP problem. The comparison results with the other kinds of optimization algorithm show that TSPBBO is a very effective algorithm for TSP combination optimization. It provides a new method for this kind of problem.", "title": "" }, { "docid": "9e37c463a38a3efe746d9af7e8872dc6", "text": "OBJECTIVES\nTo examine the relationship of corporal punishment with children's behavior problems while accounting for neighborhood context and while using stronger statistical methods than previous literature in this area, and to examine whether different levels of corporal punishment have different effects in different neighborhood contexts.\n\n\nDESIGN\nLongitudinal cohort study.\n\n\nSETTING\nGeneral community.\n\n\nPARTICIPANTS\n1943 mother-child pairs from the National Longitudinal Survey of Youth.\n\n\nMAIN OUTCOME MEASURE\nInternalizing and externalizing behavior problem scales of the Behavior Problems Index.\n\n\nRESULTS AND CONCLUSIONS\nParental use of corporal punishment was associated with a 0.71 increase (P<.05) in children's externalizing behavior problems even when several parenting behaviors, neighborhood quality, and all time-invariant variables were accounted for. The association of corporal punishment and children's externalizing behavior problems was not dependent on neighborhood context. 
The research found no discernible relationship between corporal punishment and internalizing behavior problems.", "title": "" }, { "docid": "5038df440c0db19e1588cc69b10cc3c4", "text": "Electronic document management (EDM) technology has the potential to enhance the information management in construction projects considerably, without radical changes to current practice. Over the past fifteen years this topic has been overshadowed by building product modelling in the construction IT research world, but at present EDM is quickly being introduced in practice, in particular in bigger projects. Often this is done in the form of third party services available over the World Wide Web. In the paper, a typology of research questions and methods is presented, which can be used to position the individual research efforts which are surveyed in the paper. Questions dealt with include: What features should EDM systems have? How much are they used? Are there benefits from use and how should these be measured? What are the barriers to wide-spread adoption? Which technical questions need to be solved? Is there scope for standardisation? How will the market for such systems evolve?", "title": "" }, { "docid": "37257f51eddbad5d7a151c12083e51a7", "text": "As data rate pushes to 10Gbps and beyond, timing jitter has become one of the major factors that limit the link performance. Thorough understanding of the link jitter characteristics and accurate modeling of their impact on link performance is a must even at early design stage. This paper discusses the characteristics of timing jitter in typical I/O interfaces and overviews various jitter modeling methods proposed in the literature during the past few years. Recommendations are given based on the characteristics of timing jitter and their locations.", "title": "" }, { "docid": "ea0c9e70789c43e2c14c0b35d8f45dc2", "text": "Harlequin ichthyosis (HI) is a rare and severe form of congenital ichthyosis. Linked to deletion and truncation mutations of a keratinocyte lipid transporter, HI is characterized by diffuse epidermal hyperkeratinization and defective desquamation. At birth, the HI phenotype is striking with thick hyperkeratotic plate-like scales with deep dermal fissures, severe ectropion and eclabium, among other findings. Over the first months of life, the hyperkeratotic covering is shed, revealing a diffusely erythematous, scaly epidermis, which persists for the remainder of the patient's life. Although HI infants have historically succumbed in the perinatal period related to their profound epidermal compromise, the prognosis of HI infants has vastly improved over the past 20 years. Here, we report a case of HI treated with acitretin, focusing on the multi-faceted management of the disease in the inpatient setting. A review of the literature of the management of HI during the perinatal period is also presented.", "title": "" }, { "docid": "3f220d8863302719d3cf69b7d99f8c4e", "text": "The numerical representation precision required by the computations performed by Deep Neural Networks (DNNs) varies across networks and between layers of the same network. This observation motivates a precision-based approach to acceleration which takes into account both the computational structure and the required numerical precision representation. This work presents Stripes (STR), a hardware accelerator that uses bit-serial computations to improve energy efficiency and performance. 
Experimental measurements over a set of state-of-the-art DNNs for image classification show that STR improves performance over a state-of-the-art accelerator from 1.35× to 5.33× and by 2.24× on average. STR’s area and power overhead are estimated at 5 percent and 12 percent respectively. STR is 2.00× more energy efficient than the baseline.", "title": "" }, { "docid": "38fcf1fca6856c339cc7569ef725ab85", "text": "The multitarget recursive Bayes nonlinear filter is the theoretically optimal approach to multisensor-multitarget detection, tracking, and identification. For applications in which this filter is appropriate, it is likely to be tractable for only a small number of targets. In earlier papers we derived closed-form equations for an approximation of this filter based on propagation of a first-order multitarget moment called the probability hypothesis density (PHD). In a recent paper, Erdinc, Willett, and Bar-Shalom argued for the need for a PHD-type filter which remains first-order in the states of individual targets, but which is higher-order in target number. In this paper we show that this is indeed possible. We derive a closed-form cardinalized PHD (CPHD) filter, which propagates not only the PHD but also the entire probability distribution on target number.", "title": "" }, { "docid": "77c35887241735b833b0b8baaee569c4", "text": "Existing research efforts into tennis visualization have primarily focused on using ball and player tracking data to enhance professional tennis broadcasts and to aid coaches in helping their students. Gathering and analyzing this data typically requires the use of an array of synchronized cameras, which are expensive for non-professional tennis matches. In this paper, we propose TenniVis, a novel tennis match visualization system that relies entirely on data that can be easily collected, such as score, point outcomes, point lengths, service information, and match videos that can be captured by one consumer-level camera. It provides two new visualizations to allow tennis coaches and players to quickly gain insights into match performance. It also provides rich interactions to support ad hoc hypothesis development and testing. We first demonstrate the usefulness of the system by analyzing the 2007 Australian Open men's singles final. We then validate its usability by two pilot user studies where two college tennis coaches analyzed the matches of their own players. The results indicate that useful insights can quickly be discovered and ad hoc hypotheses based on these insights can conveniently be tested through linked match videos.", "title": "" }, { "docid": "604acce1aeb26ea5b6a72e230752ff60", "text": "Research in experimental psychology suggests that, in violation of Bayes' rule, most people tend to \"overreact\" to unexpected and dramatic news events. 
This study of market efficiency investigates whether such behavior affects stock prices. The empirical evidence, based on CRSP monthly return data, is consistent with the overreaction hypothesis. Substantial weak form market inefficiencies are discovered. The results also shed new light on the January returns earned by prior \"winners\" and \"losers.\" Portfolios of losers experience exceptionally large January returns as late as five years after portfolio formation. As ECONOMISTS INTERESTED IN both market behavior and the psychology of individual decision making, we have been struck by the similarity of two sets of empirical findings. Both classes of behavior can be characterized as displaying ouerreaction. This study was undertaken to investigate the possibility that these phenomena are related by more than just appearance. We begin by describing briefly the individual and market behavior that piqued our interest. The term overreaction carries with it an implicit comparison to some degree of reaction that is considered to be appropriate. What is an appropriate reaction? One class.of tasks which have a well-established norm are probability revision problems for which Bayes' rule prescribes the correct reaction to new information. It has now been well-established that Bayes' rule is not an apt characterization of how individuals actually respond to new data (Kahneman et al. [14]). In revising their beliefs, individuals tend to overweight recent information and underweight prior (or base rate) data. People seem to make predictions according to a simple matching rule: \"The predicted value is selected so that the standing of the case in the distribution of outcomes matches its standing in the distribution of impressions\" (Kahneman and Tversky [14, p. 4161). This rule-of-thumb, an instance of what Kahneman and Tversky call the representativeness heuristic, violates the basic statistical principal that the extremeness of predictions must be moderated by considerations of predictability. Grether [12] has replicated this finding under incentive compatible conditions. There is also considerable evidence that the actual expectations of professional security analysts and economic forecasters display the same overreaction bias (for a review, see De Bondt [7]). One of the earliest observations about overreaction in markets was made by J. M. Keynes:\". . .day-to-day fluctuations in the profits of existing investments, * University of Wisconsin at Madison and Cornell University, respectively. The financial support of the C.I.M. Doctoral Fellowship Program (Brussels, Belgium) and the Cornell Graduate School of Management is gratefully acknowledged. We received helpful comments from Seymour Smidt, Dale Morse, Peter Bernstein, Fischer Black, Robert Jarrow, Edwin Elton, and Ross Watts. 794 The Journal of Finance which are obviously of an ephemeral and nonsignificant character, tend to have an altogether excessive, and even an absurd, influence on the market\" [17, pp. 153-1541. About the same time, Williams noted in this Theory of Investment Value that \"prices have been based too much on current earning power and too little on long-term dividend paying power\" [28, p. 191. More recently, Arrow has concluded that the work of Kahneman and Tversky \"typifies very precisely the exessive reaction to current information which seems to characterize all the securities and futures markets\" [I, p. 51. 
Two specific examples of the research to which Arrow was referring are the excess volatility of security prices and the so-called price earnings ratio anomaly. The excess volatility issue has been investigated most thoroughly by Shiller [27]. Shiller interprets the Miller-Modigliani view of stock prices as a constraint on the likelihood function of a price-dividend sample. Shiller concludes that, at least over the last century, dividends simply do not vary enough to rationally justify observed aggregate price movements. Combining the results with Kleidon's [18] findings that stock price movements are strongly correlated with the following year's earnings changes suggests a clear pattern of overreaction. In spite of the observed trendiness of dividends, investors seem to attach disproportionate importance to short-run economic developments.' The price earnings ratio (PIE) anomaly refers to the observation that stocks with extremely low PIE ratios (i.e., lowest decile) earn larger risk-adjusted returns than high PIE stocks (Basu [3]). Most financial economists seem to regard the anomaly as a statistical artifact. Explanations are usually based on alleged misspecification of the capital asset pricing model (CAPM). Ball [2] emphasizes the effects of omitted risk factors. The PIE ratio is presumed to be a proxy for some omitted factor which, if included in the \"correct\" equilibrium valuation model, would eliminate the anomaly. Of course, unless these omitted factors can be identified, the hypothesis is untestable. Reinganum [21] has claimed that the small firm effect subsumes the PIE effect and that both are related to the same set of missing (and again unknown) factors. However, Basu [4] found a significant PIE effect after controlling for firm size, and earlier Graham [ l l ] even found an effect within the thirty Dow Jones Industrials, hardly a group of small firms! An alternative behavioral explanation for the anomaly based on investor overreaction is what Basu called the \"price-ratio\" hypothesis (e.g., Dreman [8]). Companies with very low PIE'Sare thought to be temporarily \"undervalued\" because investors become excessively pessimistic after a series of bad earnings reports or other bad news. Once future earnings turn out to be better than the unreasonably gloomy forecasts, the price adjusts. Similarly, the equity of companies with very high PIE'Sis thought to be \"overvalued,\" before (predictably) falling in price. While the overreaction hypothesis has considerable a priori appeal, the obvious question to ask is: How does the anomaly survive the process of arbitrage? There Of course, the variability of stock prices may also reflect changes in real interest rates. If so, the price movements of other assets-such as land or housing-should match those of stocks. However, this is not actually observed. A third hypothesis, advocated by Marsh and Merton [19], is that Shiller's findings are a result of his misspecification of the dividend process. 795 Does the Stock Market Overreact? is really a more general question here. What are the equilibria conditions for markets in which some agents are not rational in the sense that they fail to revise their expectations according to Bayes' rule? Russell and Thaler [24] address this issue. They conclude that the existence of some rational agents is not sufficient to guarantee a rational expectations equilibrium in an economy with some of what they call quasi-rational agents. 
(The related question of market equilibria with agents having heterogeneous expectations is investigated by Jarrow [13].) While we are highly sensitive to these issues, we do not have the space to address them here. Instead, we will concentrate on an empirical test of the overreaction hypothesis. If stock prices systematically overshoot, then their reversal should be predictable from past return data alone, with no use of any accounting data such as earnings. Specifically, two hypotheses are suggested: (1)Extreme movements in stock prices will be followed by subsequent price movements in the opposite direction. (2) The more extreme the initial price movement, the greater will be the subsequent adjustment. Both hypotheses imply a violation of weak-form market efficiency. To repeat, our goal is to test whether the overreaction hypothesis is predictive. In other words, whether it does more for us than merely to explain, ex post, the PIE effect or Shiller's results on asset price dispersion. The overreaction effect deserves attention because it represents a behavioral principle that may apply in many other contexts. For example, investor overreaction possibly explains Shiller's earlier [26] findings that when long-term interest rates are high relative to short rates, they tend to move down later on. Ohlson and Penman [20] have further suggested that the increased volatility of security returns following stock splits may also be linked to overreaction. The present empirical tests are to our knowledge the first attempt to use a behavioral principle to predict a new market anomaly. The remainder of the paper is organized as follows. The next section describes the actual empirical tests we have performed. Section I1 describes the results. Consistent with the overreaction hypothesis, evidence of weak-form market inefficiency is found. We discuss the implications for other empirical work on asset pricing anomalies. The paper ends with a brief summary of conclusions. I. The Overreaction Hypothesis: Empirical Tests The empirical testing procedures are a variant on a design originally proposed by Beaver and Landsman [5] in a different context. Typically, tests of semistrong form market efficiency start, at time t = 0, with the formation of portfolios on the basis of some event that affects all stocks in the portfolio, say, an earnings announcement. One then goes on to investigate whether later on ( t > 0) the estimated residual portfolio return rip,--measured relative to the single-period CAPM-equals zero. Statistically significant departures from zero are interpreted as evidence consistent with semistrong form market inefficiency, even though the results may also be due to misspecification of the CAPM, misestimation of the relevant alphas and/or betas, or simply market inefficiency of the weak form. 796 The Journal of Finance In contrast, the tests in this study assess the extent to which systematic nonzero", "title": "" } ]
scidocsrr
6152ef013710e91ee1b49d780ac4d16d
TR 09-004 Detecting Anomalies in a Time Series Database
[ { "docid": "ba9122284ddc43eb3bc4dff89502aa9d", "text": "Recent advancements in sensor technology have made it possible to collect enormous amounts of data in real time. However, because of the sheer volume of data most of it will never be inspected by an algorithm, much less a human being. One way to mitigate this problem is to perform some type of anomaly (novelty /interestingness/surprisingness) detection and flag unusual patterns for further inspection by humans or more CPU intensive algorithms. Most current solutions are “custom made” for particular domains, such as ECG monitoring, valve pressure monitoring, etc. This customization requires extensive effort by domain expert. Furthermore, hand-crafted systems tend to be very brittle to concept drift. In this demonstration, we will show an online anomaly detection system that does not need to be customized for individual domains, yet performs with exceptionally high precision/recall. The system is based on the recently introduced idea of time series bitmaps. To demonstrate the universality of our system, we will allow testing on independently annotated datasets from domains as diverse as ECGs, Space Shuttle telemetry monitoring, video surveillance, and respiratory data. In addition, we invite attendees to test our system with any dataset available on the web.", "title": "" } ]
[ { "docid": "21cde70c4255e706cb05ff38aec99406", "text": "In this paper, a multiple classifier machine learning (ML) methodology for predictive maintenance (PdM) is presented. PdM is a prominent strategy for dealing with maintenance issues given the increasing need to minimize downtime and associated costs. One of the challenges with PdM is generating the so-called “health factors,” or quantitative indicators, of the status of a system associated with a given maintenance issue, and determining their relationship to operating costs and failure risk. The proposed PdM methodology allows dynamical decision rules to be adopted for maintenance management, and can be used with high-dimensional and censored data problems. This is achieved by training multiple classification modules with different prediction horizons to provide different performance tradeoffs in terms of frequency of unexpected breaks and unexploited lifetime, and then employing this information in an operating cost-based maintenance decision system to minimize expected costs. The effectiveness of the methodology is demonstrated using a simulated example and a benchmark semiconductor manufacturing maintenance problem.", "title": "" }, { "docid": "0f49e229c08672dfba4026ec5ebca3bc", "text": "A grid array antenna is presented in this paper with sub grid arrays and multiple feed points, showing enhanced radiation characteristics and sufficient design flexibility. For instance, the grid array antenna can be easily designed as a linearly- or circularly-polarized, unbalanced or balanced antenna. A design example is given for a linearly-polarized unbalanced grid array antenna in Ferro A6M low temperature co-fired ceramic technology for 60-GHz radios to operate from 57 to 66 GHz (≈ 14.6% at 61.5 GHz ). It consists of 4 sub grid arrays and 4 feed points that are connected to a single-ended 50-Ω source by a quarter-wave matched T-junction network. The simulated results indicate that the grid array antenna has the maximum gain of 17.7 dBi at 59 GHz , an impedance bandwidth (|S11| ≤ -10&nbsp;dB) nearly from 56 to 67.5 GHz (or 18.7%), a 3-dB gain bandwidth from 55.4 to 66 GHz (or 17.2%), and a vertical beam bandwidth in the broadside direction from 57 to 66 GHz (14.6%). The measured results are compared with the simulated ones. Discrepancies and their causes are identified with a tolerance analysis on the fabrication process.", "title": "" }, { "docid": "a1d300bd5ac779e1b21a7ed20b3b01ad", "text": "a r t i c l e i n f o Keywords: Luxury brands Perceived social media marketing (SMM) activities Value equity Relationship equity Brand equity Customer equity Purchase intention In light of a growing interest in the use of social media marketing (SMM) among luxury fashion brands, this study set out to identify attributes of SMM activities and examine the relationships among those perceived activities, value equity, relationship equity, brand equity, customer equity, and purchase intention through a structural equation model. Five constructs of perceived SSM activities of luxury fashion brands are entertainment , interaction, trendiness, customization, and word of mouth. Their effects on value equity, relationship equity, and brand equity are significantly positive. For the relationship between customer equity drivers and customer equity, brand equity has significant negative effect on customer equity while value equity and relationship equity show no significant effect. 
As for purchase intention, value equity and relationship equity had significant positive effects, while relationship equity had no significant influence. Finally, the relationship between purchase intention and customer equity has significance. The findings of this study can enable luxury brands to forecast the future purchasing behavior of their customers more accurately and provide a guide to managing their assets and marketing activities as well. The luxury market has attained maturity, along with the gradual expansion of the scope of its market and a rapid growth in the number of customers. Luxury market is a high value-added industry basing on high brand assets. Due to the increased demand for luxury in emerging markets such as China, India, and the Middle East, opportunities abound to expand the business more than ever. In the past, luxury fashion brands could rely on strong brand assets and secure regular customers. However, the recent entrance of numerous fashion brands into the luxury market, followed by heated competition, signals unforeseen changes in the market. A decrease in sales related to a global economic downturn drives luxury businesses to change. Now they can no longer depend solely on their brand symbol but must focus on brand legacy, quality, esthetic value, and trustworthy customer relationships in order to succeed. A key element to luxury industry becomes providing values to customers in every way possible. As a means to constitute customer assets through effective communication with consumers, luxury brands have tilted their eyes toward social media. Marketing communication using social media such as Twitter, Facebook, and …", "title": "" }, { "docid": "765e766515c9c241ffd2d84572fd887f", "text": "The cost of reconciling consistency and state management with high availability is highly magnified by the unprecedented scale and robustness requirements of today’s Internet applications. We propose two strategies for improving overall availability using simple mechanisms that scale over large applications whose output behavior tolerates graceful degradation. We characterize this degradation in terms of harvest and yield, and map it directly onto engineering mechanisms that enhance availability by improving fault isolation, and in some cases also simplify programming. By collecting examples of related techniques in the literature and illustrating the surprising range of applications that can benefit from these approaches, we hope to motivate a broader research program in this area. 1. Motivation, Hypothesis, Relevance Increasingly, infrastructure services comprise not only routing, but also application-level resources such as search engines [15], adaptation proxies [8], and Web caches [20]. These applications must confront the same operational expectations and exponentially-growing user loads as the routing infrastructure, and consequently are absorbing comparable amounts of hardware and software. The current trend of harnessing commodity-PC clusters for scalability and availability [9] is reflected in the largest web server installations. These sites use tens to hundreds of PC’s to deliver 100M or more read-mostly page views per day, primarily using simple replication or relatively small data sets to increase throughput. The scale of these applications is bringing the wellknown tradeoff between consistency and availability [4] into very sharp relief. In this paper we propose two general directions for future work in building large-scale robust systems. 
Our approaches tolerate partial failures by emphasizing simple composition mechanisms that promote fault containment, and by translating possible partial failure modes into engineering mechanisms that provide smoothlydegrading functionality rather than lack of availability of the service as a whole. The approaches were developed in the context of cluster computing, where it is well accepted [22] that one of the major challenges is the nontrivial software engineering required to automate partial-failure handling in order to keep system management tractable. 2. Related Work and the CAP Principle In this discussion, strong consistency means singlecopy ACID [13] consistency; by assumption a stronglyconsistent system provides the ability to perform updates, otherwise discussing consistency is irrelevant. High availability is assumed to be provided through redundancy, e.g. data replication; data is considered highly available if a given consumer of the data can always reach some replica. Partition-resilience means that the system as whole can survive a partition between data replicas. Strong CAP Principle. Strong Consistency, High Availability, Partition-resilience: Pick at most 2. The CAP formulation makes explicit the trade-offs in designing distributed infrastructure applications. It is easy to identify examples of each pairing of CAP, outlining the proof by exhaustive example of the Strong CAP Principle: CA without P: Databases that provide distributed transactional semantics can only do so in the absence of a network partition separating server peers. CP without A: In the event of a partition, further transactions to an ACID database may be blocked until the partition heals, to avoid the risk of introducing merge conflicts (and thus inconsistency). AP without C: HTTP Web caching provides clientserver partition resilience by replicating documents, but a client-server partition prevents verification of the freshness of an expired replica. In general, any distributed database problem can be solved with either expiration-based caching to get AP, or replicas and majority voting to get PC (the minority is unavailable). In practice, many applications are best described in terms of reduced consistency or availability. For example, weakly-consistent distributed databases such as Bayou [5] provide specific models with well-defined consistency/availability tradeoffs; disconnected filesystems such as Coda [16] explicitly argued for availability over strong consistency; and expiration-based consistency mechanisms such as leases [12] provide fault-tolerant consistency management. These examples suggest that there is a Weak CAP Principle which we have yet to characterize precisely: The stronger the guarantees made about any two of strong consistency, high availability, or resilience to partitions, the weaker the guarantees that can be made about the third. 3. Harvest, Yield, and the CAP Principle Both strategies we propose for improving availability with simple mechanisms rely on the ability to broaden our notion of “correct behavior” for the target application, and then exploit the tradeoffs in the CAP principle to improve availability at large scale. We assume that clients make queries to servers, in which case there are at least two metrics for correct behavior: yield, which is the probability of completing a request, and harvest, which measures the fraction of the data reflected in the response, i.e. the completeness of the answer to the query. 
Yield is the common metric and is typically measured in “nines”: “four-nines availability” means a completion probability of . In practice, good HA systems aim for four or five nines. In the presence of faults there is typically a tradeoff between providing no answer (reducing yield) and providing an imperfect answer (maintaining yield, but reducing harvest). Some applications do not tolerate harvest degradation because any deviation from the single well-defined correct behavior renders the result useless. For example, a sensor application that must provide a binary sensor reading (presence/absence) does not tolerate degradation of the output.1 On the other hand, some applications tolerate graceful degradation of harvest: online aggregation [14] allows a user to explicitly trade running time for precision and confidence in performing arithmetic aggregation queries over a large dataset, thereby smoothly trading harvest for response time, which is particularly useful for approximate answers and for avoiding work that looks unlikely to be worthwhile based on preliminary results. At first glance, it would appear that this kind of degradation applies only to queries and not to updates. However, the model can be applied in the case of “single-location” updates: those changes that are localized to a single node (or technically a single partition). In this case, updates that 1This is consistent with the use of the term yield in semiconductor manufacturing: typically, each die on a wafer is intolerant to harvest degradation, and yield is defined as the fraction of working dice on a wafer. affect reachable nodes occur correctly but have limited visibility (a form of reduced harvest), while those that require unreachable nodes fail (reducing yield). These localized changes are consistent exactly because the new values are not available everywhere. This model of updates fails for global changes, but it is still quite useful for many practical applications, including personalization databases and collaborative filtering. 4. Strategy 1: Trading Harvest for Yield— Probabilistic Availability Nearly all systems are probabilistic whether they realize it or not. In particular, any system that is 100% available under single faults is probabilistically available overall (since there is a non-zero probability of multiple failures), and Internet-based servers are dependent on the best-effort Internet for true availability. Therefore availability maps naturally to probabilistic approaches, and it is worth addressing probabilistic systems directly, so that we can understand and limit the impact of faults. This requires some basic decisions about what needs to be available and the expected nature of faults. For example, node faults in the Inktomi search engine remove a proportional fraction of the search database. Thus in a 100-node cluster a single-node fault reduces the harvest by 1% during the duration of the fault (the overall harvest is usually measured over a longer interval). Implicit in this approach is graceful degradation under multiple node faults, specifically, linear degradation in harvest. By randomly placing data on nodes, we can ensure that the 1% lost is a random 1%, which makes the average-case and worstcase fault behavior the same. In addition, by replicating a high-priority subset of data, we reduce the probability of losing that data. This gives us more precise control of harvest, both increasing it and reducing the practical impact of missing data. 
Of course, it is possible to replicate all data, but doing so may have relatively little impact on harvest and yield despite significant cost, and in any case can never ensure 100% harvest or yield because of the best-effort Internet protocols the service relies on. As a similar example, transformation proxies for thin clients [8] also trade harvest for yield, by degrading results on demand to match the capabilities of clients that might otherwise be unable to get results at all. Even when the 100%-harvest answer is useful to the client, it may still be preferable to trade response time for harvest when client-to-server bandwidth is limited, for example, by intelligent degradation to low-bandwidth formats [7]. 5. Strategy 2: Application Decomposition and Orthogonal Mechanisms Some large applications can be decomposed into subsystems that are independently intolerant to harvest degradation (i.e. they fail by reducing yield), but whose independent failure allows the overall application to continue functioning with reduced utility. The application as a whole is then tolerant of harvest degradation. A good decomposition has at least one actual benefit and one potential benefit. The actual benefi", "title": "" }, { "docid": "627587e2503a2555846efb5f0bca833b", "text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.", "title": "" }, { "docid": "88ff3300dafab6b87d770549a1dc4f0e", "text": "Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. This paper elaborates on the problem of constrained novelty search and proposes two novelty search algorithms which search within both the feasible and the infeasible space. Inspired by the FI-2pop genetic algorithm, both algorithms maintain and evolve two separate populations, one with feasible and one with infeasible individuals, while each population can use its own selection method. The proposed algorithms are applied to the problem of generating diverse but playable game levels, which is representative of the larger problem of procedural game content generation.
Results show that the two-population constrained novelty search methods can create, under certain conditions, larger and more diverse sets of feasible game levels than current methods of novelty search, whether constrained or unconstrained. However, the best algorithm is contingent on the particularities of the search space and the genetic operators used. Additionally, the proposed enhancement of offspring boosting is shown to enhance performance in all cases of two-population novelty search.", "title": "" }, { "docid": "3899c40009ac15e213e74bd08392ecec", "text": "In the past decade, research in person re-identification (re-id) has exploded due to its broad use in security and surveillance applications. Issues such as inter-camera viewpoint, illumination and pose variations make it an extremely difficult problem. Consequently, many algorithms have been proposed to tackle these issues. To validate the efficacy of re-id algorithms, numerous benchmarking datasets have been constructed. While early datasets contained relatively few identities and images, several large-scale datasets have recently been proposed, motivated by data-driven machine learning. In this paper, we introduce a new large-scale real-world re-id dataset, DukeMTMC4ReID, using 8 disjoint surveillance camera views covering parts of the Duke University campus. The dataset was created from the recently proposed fully annotated multi-target multi-camera tracking dataset DukeMTMC[36]. A benchmark summarizing extensive experiments with many combinations of existing re-id algorithms on this dataset is also provided for an up-to-date performance analysis.", "title": "" }, { "docid": "17752f2b561d81643b35b6d2d10e4e46", "text": "This randomised controlled trial was undertaken to evaluate the effectiveness of acupuncture as a treatment for frozen shoulder. Thirty-five patients with a diagnosis of frozen shoulder were randomly allocated to an exercise group or an exercise plus acupuncture group and treated for a period of 6 weeks. Functional mobility, power, and pain were assessed by a blinded assessor using the Constant Shoulder Assessment, at baseline, 6 weeks and 20 weeks. Analysis was based on the intention-to-treat principle. Compared with the exercise group, the exercise plus acupuncture group experienced significantly greater improvement with treatment. Improvements in scores by 39.8% (standard deviation, 27.1) and 76.4% (55.0) were seen for the exercise and the exercise plus acupuncture groups, respectively at 6 weeks (P=0.048), and were sustained at the 20-week re-assessment (40.3% [26.7] and 77.2% [54.0], respectively; P=0.025). We conclude that the combination of acupuncture with shoulder exercise may offer effective treatment for frozen shoulder.", "title": "" }, { "docid": "f3641aadeaf2ccd31f96e2db8d33f936", "text": "This paper proposes a novel approach to dynamically manage the traffic lights cycles and phases in an isolated intersection. The target of the work is a system that, comparing with previous solutions, offers improved performance, is flexible and can be implemented on off-the-shelf components. The challenge here is to find an effective design that achieves the target while avoiding complex and computationally expensive solutions, which would not be appropriate for the problem at hand and would impair the practical applicability of the approach in real scenarios. 
The proposed solution is a traffic lights dynamic control system that combines an IEEE 802.15.4 Wireless Sensor Network (WSN) for real-time traffic monitoring with multiple fuzzy logic controllers, one for each phase, that work in parallel. Each fuzzy controller addresses vehicles turning movements and dynamically manages both the phase and the green time of traffic lights. The proposed system combines the advantages of the WSN, such as easy deployment and maintenance, flexibility, low cost, noninvasiveness, and scalability, with the benefits of using four parallel fuzzy controllers, i.e., better performance, fault-tolerance, and support for phase-specific management. Simulation results show that the proposed system outperforms other solutions in the literature, significantly reducing the vehicles waiting times. A proof-of-concept implementation on an off-the-shelf device proves that the proposed controller does not require powerful hardware and can be easily implemented on a low-cost device, thus paving the way for extensive usage in practice. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c1b34059a896564df02ef984085b93a0", "text": "Robotics has become a standard tool in outreaching to grades K-12 and attracting students to the STEM disciplines. Performing these activities in the class room usually requires substantial time commitment by the teacher and integration into the curriculum requires major effort, which makes spontaneous and short-term engagements difficult. This paper studies using “Cubelets”, a modular robotic construction kit, which requires virtually no setup time and allows substantial engagement and change of perception of STEM in as little as a 1-hour session. This paper describes the constructivist curriculum and provides qualitative and quantitative results on perception changes with respect to STEM and computer science in particular as a field of study.", "title": "" }, { "docid": "5093e3d152d053a9f3322b34096d3e4e", "text": "To create conversational systems working in actual situations, it is crucial to assume that they interact with multiple agents. In this work, we tackle addressee and response selection for multi-party conversation, in which systems are expected to select whom they address as well as what they say. The key challenge of this task is to jointly model who is talking about what in a previous context. For the joint modeling, we propose two modeling frameworks: 1) static modeling and 2) dynamic modeling. To show benchmark results of our frameworks, we created a multi-party conversation corpus. Our experiments on the dataset show that the recurrent neural network based models of our frameworks robustly predict addressees and responses in conversations with a large number of agents.", "title": "" }, { "docid": "7e4b634b7b16d152fefe476d264c6726", "text": "We introduce openXBOW, an open-source toolkit for the generation of bag-of-words (BoW) representations from multimodal input. In the BoW principle, word histograms were first used as features in document classification, but the idea was and can easily be adapted to, e. g., acoustic or visual low-level descriptors, introducing a prior step of vector quantisation. The openXBOW toolkit supports arbitrary numeric input features and text input and concatenates computed subbags to a final bag. It provides a variety of extensions and options. To our knowledge, openXBOW is the first publicly available toolkit for the generation of crossmodal bags-of-words. 
The capabilities of the tool are exemplified in two sample scenarios: time-continuous speech-based emotion recognition and sentiment analysis in tweets where improved results over other feature representation forms were observed.", "title": "" }, { "docid": "3e0741fb69ee9bdd3cc455577aab4409", "text": "Recurrent neural network architectures have been shown to efficiently model long term temporal dependencies between acoustic events. However the training time of recurrent networks is higher than feedforward networks due to the sequential nature of the learning algorithm. In this paper we propose a time delay neural network architecture which models long term temporal dependencies with training times comparable to standard feed-forward DNNs. The network uses sub-sampling to reduce computation during training. On the Switchboard task we show a relative improvement of 6% over the baseline DNN model. We present results on several LVCSR tasks with training data ranging from 3 to 1800 hours to show the effectiveness of the TDNN architecture in learning wider temporal dependencies in both small and large data scenarios.", "title": "" }, { "docid": "a8cad81570a7391175acdcf82bc9040b", "text": "A model of Convolutional Fuzzy Neural Network for real world objects and scenes images classification is proposed. The Convolutional Fuzzy Neural Network consists of convolutional, pooling and fully-connected layers and a Fuzzy Self Organization Layer. The model combines the power of convolutional neural networks and fuzzy logic and is capable of handling uncertainty and impreciseness in the input pattern representation. The Training of The Convolutional Fuzzy Neural Network consists of three independent steps for three components of the net.", "title": "" }, { "docid": "2462af24189262b0145a6559d4aa6b3d", "text": "A 30-MHz voltage-mode buck converter using a delay-line-based pulse-width-modulation controller is proposed in this brief. Two voltage-to-delay cells are used to convert the voltage difference to delay-time difference. A charge pump is used to charge or discharge the loop filter, depending on whether the feedback voltage is larger or smaller than the reference voltage. A delay-line-based voltage-to-duty-cycle (V2D) controller is used to replace the classical ramp-comparator-based V2D controller to achieve wide duty cycle. A type-II compensator is implemented in this design with a capacitor and resistor in the loop filter. The prototype buck converter was fabricated using a 0.18-μm CMOS process. It occupies an active area of 0.834 mm² including the testing PADs. The tunable duty cycle ranges from 11.9%–86.3%, corresponding to 0.4 V–2.8 V output voltage with 3.3 V input. With a step of 400 mA in the load current, the settling time is around 3 μs. The peak efficiency is as high as 90.2% with 2.4 V output and the maximum load current is 800 mA.", "title": "" }, { "docid": "ab33dcd4172dec6cc88e13af867fed88", "text": "It is necessary to understand the content of articles and user preferences to make effective news recommendations. While ID-based methods, such as collaborative filtering and low-rank factorization, are well known for making recommendations, they are not suitable for news recommendations because candidate articles expire quickly and are replaced with new ones within short spans of time.
Word-based methods, which are often used in information retrieval settings, are good candidates in terms of system performance but have issues such as their ability to cope with synonyms and orthographical variants and define \"queries\" from users' historical activities. This paper proposes an embedding-based method to use distributed representations in a three step end-to-end manner: (i) start with distributed representations of articles based on a variant of a denoising autoencoder, (ii) generate user representations by using a recurrent neural network (RNN) with browsing histories as input sequences, and (iii) match and list articles for users based on inner-product operations by taking system performance into consideration. The proposed method performed well in an experimental offline evaluation using past access data on Yahoo! JAPAN's homepage. We implemented it on our actual news distribution system based on these experimental results and compared its online performance with a method that was conventionally incorporated into the system. As a result, the click-through rate (CTR) improved by 23% and the total duration improved by 10%, compared with the conventionally incorporated method. Services that incorporated the method we propose are already open to all users and provide recommendations to over ten million individual users per day who make billions of accesses per month.", "title": "" }, { "docid": "ef53fb4fa95575c6472173db51d77a65", "text": "I review existing knowledge, unanswered questions, and new directions in research on stress, coping resource, coping strategies, and social support processes. New directions in research on stressors include examining the differing impacts of stress across a range of physical and mental health outcomes, the \"carry-overs\" of stress from one role domain or stage of life into another, the benefits derived from negative experiences, and the determinants of the meaning of stressors. Although a sense of personal control and perceived social support influence health and mental health both directly and as stress buffers, the theoretical mechanisms through which they do so still require elaboration and testing. New work suggests that coping flexibility and structural constraints on individuals' coping efforts may be important to pursue. Promising new directions in social support research include studies of the negative effects of social relationships and of support giving, mutual coping and support-giving dynamics, optimal \"matches\" between individuals' needs and support received, and properties of groups which can provide a sense of social support. Qualitative comparative analysis, optimal matching analysis, and event-structure analysis are new techniques which may help advance research in these broad topic areas. To enhance the effectiveness of coping and social support interventions, intervening mechanisms need to be better understood. Nevertheless, the policy implications of stress research are clear and are important given current interest in health care reform in the United States.", "title": "" }, { "docid": "12bdec4e6f70a7fe2bd4c750752287c3", "text": "Rapid growth in the Internet of Things (IoT) has resulted in a massive growth of data generated by these devices and sensors put on the Internet. Physical-cyber-social (PCS) big data consist of this IoT data, complemented by relevant Web-based and social data of various modalities. 
Smart data is about exploiting this PCS big data to get deep insights and make it actionable, and making it possible to facilitate building intelligent systems and applications. This article discusses key AI research in semantic computing, cognitive computing, and perceptual computing. Their synergistic use is expected to power future progress in building intelligent systems and applications for rapidly expanding markets in multiple industries. Over the next two years, this column on IoT will explore many challenges and technologies on intelligent use and applications of IoT data.", "title": "" }, { "docid": "940b907c28adeaddc2515f304b1d885e", "text": "In this study, we intend to identify the evolutionary footprints of the South Iberian population focusing on the Berber and Arab influence, which has received little attention in the literature. Analysis of the Y-chromosome variation represents a convenient way to assess the genetic contribution of North African populations to the present-day South Iberian genetic pool and could help to reconstruct other demographic events that could have influenced on that region. A total of 26 Y-SNPs and 17 Y-STRs were genotyped in 144 samples from 26 different districts of South Iberia in order to assess the male genetic composition and the level of substructure of male lineages in this area. To obtain a more comprehensive picture of the genetic structure of the South Iberian region as a whole, our data were compared with published data on neighboring populations. Our analyses allow us to confirm the specific impact of the Arab and Berber expansion and dominion of the Peninsula. Nevertheless, our results suggest that this influence is not bigger in Andalusia than in other Iberian populations.", "title": "" }, { "docid": "eb71ba791776ddfe0c1ddb3dc66f6e06", "text": "An enterprise resource planning (ERP) is an enterprise-wide application software package that integrates all necessary business functions into a single system with a common database. In order to implement an ERP project successfully in an organization, it is necessary to select a suitable ERP system. This paper presents a new model, which is based on linguistic information processing, for dealing with such a problem. In the study, a similarity degree based algorithm is proposed to aggregate the objective information about ERP systems from some external professional organizations, which may be expressed by different linguistic term sets. The consistency and inconsistency indices are defined by considering the subject information obtained from internal interviews with ERP vendors, and then a linear programming model is established for selecting the most suitable ERP system. Finally, a numerical example is given to demonstrate the application of the", "title": "" } ]
scidocsrr
1edd1ffbef283d1cebfa1a3ce9e8a1ac
LabelRankT: incremental community detection in dynamic networks via label propagation
[ { "docid": "a50ec2ab9d5d313253c6656049d608b3", "text": "A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process de ned on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates ow in G by rst identifying G in a canonical way with a Markov graph G1. Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs G(i). Flow expansion corresponds with taking the k power of a stochastic matrix, where k 2 IN . Flow contraction corresponds with a parametrized operator r, r 0, which maps the set of (column) stochastic matrices onto itself. The image rM is obtained by raising each entry in M to the r th power and rescaling each column to have sum 1 again. The heuristic underlying this approach is the expectation that ow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph G. Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process in uence the granularity of the output. The algorithm is space and time e cient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by rst considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature. 2000 Mathematics Subject Classi cation: 05B20, 15A48, 15A51, 62H30, 68R10, 68T10, 90C35.", "title": "" }, { "docid": "f96bf84a4dfddc8300bb91227f78b3af", "text": "Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy. The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and realworld networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.", "title": "" } ]
[ { "docid": "dde5083017c2db3ffdd90668e28bab4b", "text": "Current industry standards for describing Web Services focus on ensuring interoperability across diverse platforms, but do not provide a good foundation for automating the use of Web Services. Representational techniques being developed for the Semantic Web can be used to augment these standards. The resulting Web Service specifications enable the development of software programs that can interpret descriptions of unfamiliar Web Services and then employ those services to satisfy user goals. OWL-S (“OWL for Services”) is a set of notations for expressing such specifications, based on the Semantic Web ontology language OWL. It consists of three interrelated parts: a profile ontology, used to describe what the service does; a process ontology and corresponding presentation syntax, used to describe how the service is used; and a grounding ontology, used to describe how to interact with the service. OWL-S can be used to automate a variety of service-related activities involving service discovery, interoperation, and composition. A large body of research on OWL-S has led to the creation of many open-source tools for developing, reasoning about, and dynamically utilizing Web Services.", "title": "" }, { "docid": "5d624fadc5502ef0b65c227d4dd47a9a", "text": "In this work, highly selective filters based on periodic arrays of electrically small resonators are pointed out. The high-pass filters are implemented in microstrip technology by etching complementary split ring resonators (CSRRs), or complementary spiral resonators (CSRs), in the ground plane, and series capacitive gaps, or interdigital capacitors, in the signal strip. The structure exhibits a composite right/left handed (CRLH) behavior and, by properly tuning the geometry of the elements, a high pass response with a sharp transition band is obtained. The low-pass filters, also implemented in microstrip technology, are designed by cascading open complementary split ring resonators (OCSRRs) in the signal strip. These low pass filters do also exhibit a narrow transition band. The high selectivity of these microwave filters is due to the presence of a transmission zero. Since the resonant elements are small, filter dimensions are compact. Several prototype device examples are reported in this paper.", "title": "" }, { "docid": "eaa6daff2f28ea7f02861e8c67b9c72b", "text": "The demand of fused magnesium furnaces (FMFs) refers to the average value of the power of the FMFs over a fixed period of time before the current time. The demand is an indicator of the electricity consumption of high energy-consuming FMFs. When the demand exceeds the limit of the Peak Demand (a predetermined maximum demand), the power supply of some FMF will be cut off to ensure that the demand is no more than Peak Demand. But the power cutoff will destroy the heat balance, reduce the quality and yield of the product. The composition change of magnesite in FMFs will cause demand spike occasionally, which a sudden increase in demand exceeds the limit and then drops below the limit. As a result, demand spike cause the power cutoff. In order to avoid the power cutoff at the moment of demand spike, the demand of FMFs needs to be forecasted. This paper analyzes the dynamic model of the demand of FMFs, using the power data, presents a data-driven demand forecasting method. 
This method consists of the following: PACF based decision module for the number of the input variables of the forecasting model, RBF neural network (RBFNN) based power variation rate forecasting model and demand forecasting model. Simulations based on actual data and industrial experiments at a fused magnesia plant show the effectiveness of the proposed method.", "title": "" }, { "docid": "323abed1a623e49db50bed383ab26a92", "text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.", "title": "" }, { "docid": "9ffd665d6fe680fc4e7b9e57df48510c", "text": "BACKGROUND\nIn light of the increasing rate of dengue infections throughout the world despite vector-control measures, several dengue vaccine candidates are in development.\n\n\nMETHODS\nIn a phase 3 efficacy trial of a tetravalent dengue vaccine in five Latin American countries where dengue is endemic, we randomly assigned healthy children between the ages of 9 and 16 years in a 2:1 ratio to receive three injections of recombinant, live, attenuated, tetravalent dengue vaccine (CYD-TDV) or placebo at months 0, 6, and 12 under blinded conditions. The children were then followed for 25 months. The primary outcome was vaccine efficacy against symptomatic, virologically confirmed dengue (VCD), regardless of disease severity or serotype, occurring more than 28 days after the third injection.\n\n\nRESULTS\nA total of 20,869 healthy children received either vaccine or placebo. At baseline, 79.4% of an immunogenicity subgroup of 1944 children had seropositive status for one or more dengue serotypes. In the per-protocol population, there were 176 VCD cases (with 11,793 person-years at risk) in the vaccine group and 221 VCD cases (with 5809 person-years at risk) in the control group, for a vaccine efficacy of 60.8% (95% confidence interval [CI], 52.0 to 68.0). In the intention-to-treat population (those who received at least one injection), vaccine efficacy was 64.7% (95% CI, 58.7 to 69.8). Serotype-specific vaccine efficacy was 50.3% for serotype 1, 42.3% for serotype 2, 74.0% for serotype 3, and 77.7% for serotype 4. Among the severe VCD cases, 1 of 12 was in the vaccine group, for an intention-to-treat vaccine efficacy of 95.5%. Vaccine efficacy against hospitalization for dengue was 80.3%. 
The safety profile for the CYD-TDV vaccine was similar to that for placebo, with no marked difference in rates of adverse events.\n\n\nCONCLUSIONS\nThe CYD-TDV dengue vaccine was efficacious against VCD and severe VCD and led to fewer hospitalizations for VCD in five Latin American countries where dengue is endemic. (Funded by Sanofi Pasteur; ClinicalTrials.gov number, NCT01374516.).", "title": "" }, { "docid": "c3c7c392b4e7afedb269aa39e2b4680a", "text": "The temporal-difference (TD) algorithm from reinforcement learning provides a simple method for incrementally learning predictions of upcoming events. Applied to classical conditioning, TD models suppose that animals learn a real-time prediction of the unconditioned stimulus (US) on the basis of all available conditioned stimuli (CSs). In the TD model, similar to other error-correction models, learning is driven by prediction errors--the difference between the change in US prediction and the actual US. With the TD model, however, learning occurs continuously from moment to moment and is not artificially constrained to occur in trials. Accordingly, a key feature of any TD model is the assumption about the representation of a CS on a moment-to-moment basis. Here, we evaluate the performance of the TD model with a heretofore unexplored range of classical conditioning tasks. To do so, we consider three stimulus representations that vary in their degree of temporal generalization and evaluate how the representation influences the performance of the TD model on these conditioning tasks.", "title": "" }, { "docid": "907b8a8a8529b09114ae60e401bec1bd", "text": "Studies of information seeking and workplace collaboration often find that social relationships are a strong factor in determining who collaborates with whom. Social networks provide one means of visualizing existing and potential interaction in organizational settings. Groupware designers are using social networks to make systems more sensitive to social situations and guide users toward effective collaborations. Yet, the implications of embedding social networks in systems have not been systematically studied. This paper details an evaluation of two different social networks used in a system to recommend individuals for possible collaboration. The system matches people looking for expertise with individuals likely to have expertise. The effectiveness of social networks for matching individuals is evaluated and compared. One finding is that social networks embedded into systems do not match individuals' perceptions of their personal social network. This finding and others raise issues for the use of social networks in groupware. Based on the evaluation results, several design considerations are discussed.", "title": "" }, { "docid": "39351cdf91466aa12576d9eb475fb558", "text": "Fault tolerance is a remarkable feature of biological systems and their self-repair capability influence modern electronic systems. In this paper, we propose a novel plastic neural network model, which establishes homeostasis in a spiking neural network. Combined with this plasticity and the inspiration from inhibitory interneurons, we develop a fault-resilient robotic controller implemented on an FPGA establishing obstacle avoidance task. We demonstrate the proposed methodology on a spiking neural network implemented on Xilinx Artix-7 FPGA. The system is able to maintain stable firing (tolerance ±10%) with a loss of up to 75% of the original synaptic inputs to a neuron. 
Our repair mechanism has minimal hardware overhead with a tuning circuit (repair unit) which consumes only three slices/neuron for implementing a threshold voltage-based homeostatic fault-tolerant unit. The overall architecture has a minimal impact on power consumption and, therefore, supports scalable implementations. This paper opens a novel way of implementing the behavior of natural fault tolerant system in hardware establishing homeostatic self-repair behavior.", "title": "" }, { "docid": "8d02b303ad5fc96a082880d703682de4", "text": "Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space.", "title": "" }, { "docid": "ca8c262513466709a9d1eee198c804cc", "text": "Theories of language production have long been expressed as connectionist models. We outline the issues and challenges that must be addressed by connectionist models of lexical access and grammatical encoding, and review three recent models. The models illustrate the value of an interactive activation approach to lexical access in production, the need for sequential output in both phonological and grammatical encoding, and the potential for accounting for structural effects on errors and structural priming from learning.", "title": "" }, { "docid": "38d650cb945dc50d97762186585659a4", "text": "Sustainable biofuels, biomaterials, and fine chemicals production is a critical matter that research teams around the globe are focusing on nowadays. Polyhydroxyalkanoates represent one of the biomaterials of the future due to their physicochemical properties, biodegradability, and biocompatibility. Designing efficient and economic bioprocesses, combined with the respective social and environmental benefits, has brought together scientists from different backgrounds highlighting the multidisciplinary character of such a venture.
In the current review, challenges and opportunities regarding polyhydroxyalkanoate production are presented and discussed, covering key steps of their overall production process by applying pure and mixed culture biotechnology, from raw bioprocess development to downstream processing.", "title": "" }, { "docid": "f923a3a18e8000e4094d4a6d6e69b18f", "text": "We describe the functional and architectural breakdown of a monocular pedestrian detection system. We describe in detail our approach for single-frame classification based on a novel scheme of breaking down the class variability by repeatedly training a set of relatively simple classifiers on clusters of the training set. Single-frame classification performance results and system level performance figures for daytime conditions are presented with a discussion about the remaining gap to meet a daytime normal weather condition production system.", "title": "" }, { "docid": "b4b66392aec0c4e00eb6b1cabbe22499", "text": "ADJ: Adjectives that occur with the NP CMC: Orthographic features of the NP CPL: Phrases that occur with the NP VERB: Verbs that appear with the NP Task: Predict whether a noun phrase (NP) belongs to a category (e.g. “city”) Category # Examples animal 20,733 beverage 18,932 bird 19,263 bodypart 21,840 city 21,778 disease 21,827 drug 20,452 fish 19,162 food 19,566 fruit 18,911 muscle 21,606 person 21,700 protein 21,811 river 21,723 vegetable 18,826", "title": "" }, { "docid": "4a6dc591d385d0fb02a98067d8a42f33", "text": "A new field has emerged to investigate the cognitive neuroscience of social behaviour, the popularity of which is attested by recent conferences, special issues of journals and by books. But the theoretical underpinnings of this new field derive from an uneasy marriage of two different approaches to social behaviour: sociobiology and evolutionary psychology on the one hand, and social psychology on the other. The first approach treats the study of social behaviour as a topic in ethology, continuous with studies of motivated behaviour in other animals. The second approach has often emphasized the uniqueness of human behaviour, and the uniqueness of the individual person, their environment and their social surroundings. These two different emphases do not need to conflict with one another. In fact, neuroscience might offer a reconciliation between biological and psychological approaches to social behaviour in the realization that its neural regulation reflects both innate, automatic and COGNITIVELY IMPENETRABLE mechanisms, as well as acquired, contextual and volitional aspects that include SELF-REGULATION. We share the first category of features with other species, and we might be distinguished from them partly by elaborations on the second category of features. In a way, an acknowledgement of such an architecture simply provides detail to the way in which social cognition is complex — it is complex because it is not monolithic, but rather it consists of several tracks of information processing that can be variously recruited depending on the circumstances. 
Specifying those tracks, the conditions under which they are engaged, how they interact, and how they must ultimately be coordinated to regulate social behaviour in an adaptive fashion, is the task faced by a neuroscientific approach to social cognition.", "title": "" }, { "docid": "95395c693b4cdfad722ae0c3545f45ef", "text": "Aiming at automatic, convenient and non-instrusive motion capture, this paper presents a new generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles(UAVs) each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the using of visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gaussian-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.", "title": "" }, { "docid": "175229c7b756a2ce40f86e27efe28d53", "text": "This paper describes a comparative study of the envelope extraction algorithms for the cardiac sound signal segmentation. In order to extract the envelope curves based on the time elapses of the first and the second heart sounds of cardiac sound signals, three representative algorithms such as the normalized average Shannon energy, the envelope information of Hilbert transform, and the cardiac sound characteristic waveform (CSCW) are introduced. Performance comparison of the envelope extraction algorithms, and the advantages and disadvantages of the methods are examined by some parameters. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5d80bf63f19f3aa271c0d16e179c90d6", "text": "3D meshes are deployed in a wide range of application processes (e.g., transmission, compression, simplification, watermarking and so on) which inevitably introduce geometric distortions that may alter the visual quality of the rendered data. Hence, efficient model-based perceptual metrics, operating on the geometry of the meshes being compared, have been recently introduced to control and predict these visual artifacts. However, since the 3D models are ultimately visualized on 2D screens, it seems legitimate to use images of the models (i.e., snapshots from different viewpoints) to evaluate their visual fidelity. In this work we investigate the use of image metrics to assess the visual quality of 3D models. For this goal, we conduct a wide-ranging study involving several 2D metrics, rendering algorithms, lighting conditions and pooling algorithms, as well as several mean opinion score databases. The collected data allow (1) to determine the best set of parameters to use for this image-based quality assessment approach and (2) to compare this approach to the best performing model-based metrics and determine for which use-case they are respectively adapted. 
We conclude by exploring several applications that illustrate the benefits of image-based quality assessment.", "title": "" }, { "docid": "19a28d8bbb1f09c56f5c85be003a9586", "text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.", "title": "" }, { "docid": "bd9064905ba4ed166ad1e9c41eca7b34", "text": "Governments worldwide are encouraging public agencies to join e-Government initiatives in order to provide better services to their citizens and businesses; hence, methods of evaluating the readiness of individual public agencies to execute specific e-Government programs and directives are a key ingredient in the successful expansion of e-Government. To satisfy this need, a model called the eGovernment Maturity Model (eGov-MM) was developed, integrating the assessment of technological, organizational, operational, and human capital capabilities, under a multi-dimensional, holistic, and evolutionary approach. The model is strongly supported by international best practices, and provides tuning mechanisms to enable its alignment with nation-wide directives on e-Government. This article describes how the model was conceived, designed, developed, field tested by expert public officials from several government agencies, and finally applied to a selection of 30 public agencies in Chile, generating the first formal measurements, assessments, and rankings of their readiness for eGovernment. The implementation of the model also provided several recommendations to policymakers at the national and agency levels.", "title": "" }, { "docid": "e36e318dd134fd5840d5a5340eb6e265", "text": "Business Intelligence (BI) promises a range of technologies for using information to ensure compliance to strategic and tactical objectives, as well as government laws and regulations. These technologies can be used in conjunction with conceptual models of business objectives, processes and situations (aka business schemas) to drive strategic decision-making about opportunities and threats etc. This paper focuses on three key concepts for strategic business models -situation, influence and indicator -and how they are used for strategic analysis. The semantics of these concepts are defined using a state-ofthe-art upper ontology (DOLCE+). We also propose a method for building a business schema, and demonstrate alternative ways of formal analysis of the schema based on existing tools for goal and probabilistic reasoning.", "title": "" } ]
scidocsrr
c68908e6ee2bb178d3dd3da0db3ec66c
SCOPE AND LIMITATION OF ELECTRONIC VOTING SYSTEM
[ { "docid": "3293e4e0d7dd2e29505db0af6fbb13d1", "text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.", "title": "" } ]
[ { "docid": "c9e1c4b2a043ba43fbd07b05e8742e41", "text": "BACKGROUND\nThere has been research on the use of offline video games for therapeutic purposes but online video game therapy is still fairly under-researched. Online therapeutic interventions have only recently included a gaming component. Hence, this review represents a timely first step toward taking advantage of these recent technological and cultural innovations, particularly for the treatment of special-needs groups such as the young, the elderly and people with various conditions such as ADHD, anxiety and autism spectrum disorders.\n\n\nMATERIAL\nA review integrating research findings on two technological advances was conducted: the home computer boom of the 1980s, which triggered a flood of research on therapeutic video games for the treatment of various mental health conditions; and the rise of the internet in the 1990s, which caused computers to be seen as conduits for therapeutic interaction rather than replacements for the therapist.\n\n\nDISCUSSION\nWe discuss how video games and the internet can now be combined in therapeutic interventions, as attested by a consideration of pioneering studies.\n\n\nCONCLUSION\nFuture research into online video game therapy for mental health concerns might focus on two broad types of game: simple society games, which are accessible and enjoyable to players of all ages, and online worlds, which offer a unique opportunity for narrative content and immersive remote interaction with therapists and fellow patients. Both genres might be used for assessment and training purposes, and provide an unlimited platform for social interaction. The mental health community can benefit from more collaborative efforts between therapists and engineers, making such innovations a reality.", "title": "" }, { "docid": "f248f5bfb4d4aa8b1d90fcdcc19c3b7d", "text": "As telecommunication networks evolve rapidly in terms of scalability, complexity, and heterogeneity, the efficiency of fault localization procedures and the accuracy in the detection of anomalous behaviors are becoming important factors that largely influence the decision making process in large management companies. For this reason, telecommunication companies are doing a big effort investing in new technologies and projects aimed at finding efficient management solutions. One of the challenging issues for network and system management operators is that of dealing with the huge amount of alerts generated by the managed systems and networks. In order to discover anomalous behaviors and speed up fault localization processes, alert correlation is one of the most popular resources. Although many different alert correlation techniques have been investigated, it is still an active research field. In this paper, a survey of the state of the art in alert correlation techniques is presented. Unlike other authors, we consider that the correlation process is a common problem for different fields in the industry. Thus, we focus on showing the broad influence of this problem. Additionally, we suggest an alert correlation architecture capable of modeling current and prospective proposals. Finally, we also review some of the most important commercial products currently available. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "64cf7bd992bc6fea358273497d962619", "text": "Magnetic skyrmions are promising candidates for next-generation information carriers, owing to their small size, topological stability, and ultralow depinning current density. 
A wide variety of skyrmionic device concepts and prototypes have recently been proposed, highlighting their potential applications. Furthermore, the intrinsic properties of skyrmions enable new functionalities that may be inaccessible to conventional electronic devices. Here, we report on a skyrmion-based artificial synapse device for neuromorphic systems. The synaptic weight of the proposed device can be strengthened/weakened by positive/negative stimuli, mimicking the potentiation/depression process of a biological synapse. Both short-term plasticity and long-term potentiation functionalities have been demonstrated with micromagnetic simulations. This proposal suggests new possibilities for synaptic devices in neuromorphic systems with adaptive learning function.", "title": "" }, { "docid": "32874ff6ff0a4556950281fb300198ed", "text": "In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate", "title": "" }, { "docid": "da9ffb00398f6aad726c247e3d1f2450", "text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.", "title": "" }, { "docid": "a4a6501af9edda1f7ede81d85a0f370b", "text": "This paper discusses the development of new winding configuration for six-phase permanent-magnet (PM) machines with 18 slots and 8 poles, which eliminates and/or reduces undesirable space harmonics in the stator magnetomotive force. The proposed configuration improves power/torque density and efficiency with a reduction in eddy-current losses in the rotor permanent magnets and copper losses in end windings. To improve drive train availability for applications in electric vehicles (EVs), this paper proposes the design of a six-phase PM machine as two independent three-phase windings.
A number of possible phase shifts between two sets of three-phase windings due to their slot-pole combination and winding configuration are investigated, and the optimum phase shift is selected by analyzing the harmonic distributions and their effect on machine performance, including the rotor eddy-current losses. The machine design is optimized for a given set of specifications for EVs, under electrical, thermal and volumetric constraints, and demonstrated by the experimental measurements on a prototype machine.", "title": "" }, { "docid": "7f0023af2f3df688aa58ae3317286727", "text": "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem. The proposed methods can be applied with mobile queries, mobile objects or both, given a suitable indexing method. Our experimental evaluation is based on R-trees and their extensions for dynamic objects.", "title": "" }, { "docid": "b42c9db51f55299545588a1ee3f7102f", "text": "With the increasing development of Web 2.0, such as social media and online businesses, the need for perception of opinions, attitudes, and emotions grows rapidly. Sentiment analysis, the topic studying such subjective feelings expressed in text, has attracted significant attention from both the research community and industry. Although we have known sentiment analysis as a task of mining opinions expressed in text and analyzing the entailed sentiments and emotions, so far the task is still vaguely defined in the research literature because it involves many overlapping concepts and sub-tasks. Because this is an important area of scientific research, the field needs to clear this vagueness and define various directions and aspects in detail, especially for students, scholars, and developers new to the field. In fact, the field includes numerous natural language processing tasks with different aims (such as sentiment classification, opinion information extraction, opinion summarization, sentiment retrieval, etc.) and these have multiple solution paths. Bing Liu has done a great job in this book in providing a thorough exploration and an anatomy of the sentiment analysis problem and conveyed a wealth of knowledge about different aspects of the field.", "title": "" }, { "docid": "95602759411f04ccbc29f96901addba4", "text": "Low-level feature extraction is the first step in any image analysis procedure and is essential for the performance of stereo vision and object recognition systems. Research concerning the detection of corners, blobs and circular or point like features is particularly rich and many procedures have been proposed in the literature. 
In this paper, several frequently used methods and some novel ideas are tested and compared. We measure the performance of the detectors under the criteria of their detection and repeatability rate as well as the localization accuracy. We present a short review of the major interest point detectors, propose some improvements and describe the experimental setup used for our comparison. Finally, we determine which detector leads to the best results and show that it satisfies the criteria specified above.", "title": "" }, { "docid": "ee7710bc2db66c00ff046c08a0f52718", "text": "This paper presents a new idea to determine a set of optimal design parameters of a linear delta robot (LDR) whose workspace is as close as possible of being equal to a prescribed cuboid dexterous workspace (PCDW). The optimal design procedure on the basis of three algorithms is introduced in this paper. The kinematic problem is analyzed in brief to determine the design parameters and their relation. Two algorithms are designed to determine the reachable, rectangular dexterous workspace, and the maximal inscribed rectangle of dexterous workspace in the O-xy plane. Another algorithm is used to solve the optimal problem. As applying example, the results of four cases PCDW to LDR are presented. And the design result is compared with a new concept of the distance between the best state of the LDR and the requirement of the operation task. The method and result of this paper are very useful for the design and comparison of the parallel robot.", "title": "" }, { "docid": "26813ea092f8bbedd3f970010a8a6fe6", "text": "Lane-border detection is one of the best-developed modules in vision-based driver assistance systems today. However, there is still a need for further improvement for challenging road and traffic situations, and a need to design tools for quantitative performance evaluation. This paper discusses and refines a previously published method to generate ground truth for lane markings from recorded video, applies two lanedetection methods to such video data, and then illustrates the proposed performance evaluation by comparing calculated ground truth with detected lane positions. This paper also proposes appropriate performance measures that are required to evaluate the proposed method.", "title": "" }, { "docid": "9f5dbb6aa2351d8a36368be88f35236e", "text": "Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we derive into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive and two heuristic algorithms toward the minimization of the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases.", "title": "" }, { "docid": "a2d7fc045b1c8706dbfe3772a8f6ef70", "text": "This paper is concerned with the problem of domain adaptation with multiple sources from a causal point of view. In particular, we use causal models to represent the relationship between the features X and class label Y , and consider possible situations where different modules of the causal model change with the domain. 
In each situation, we investigate what knowledge is appropriate to transfer and find the optimal target-domain hypothesis. This gives an intuitive interpretation of the assumptions underlying certain previous methods and motivates new ones. We finally focus on the case where Y is the cause for X with changing PY and PX|Y , that is, PY and PX|Y change independently across domains. Under appropriate assumptions, the availability of multiple source domains allows a natural way to reconstruct the conditional distribution on the target domain; we propose to model PX|Y (the process to generate effect X from cause Y ) on the target domain as a linear mixture of those on source domains, and estimate all involved parameters by matching the target-domain feature distribution. Experimental results on both synthetic and real-world data verify our theoretical results. Traditional machine learning relies on the assumption that both training and test data are from the same distribution. In practice, however, training and test data are probably sampled under different conditions, thus violating this assumption, and the problem of domain adaptation (DA) arises. Consider remote sensing image classification as an example. Suppose we already have several data sets on which the class labels are known; they are called source domains here. For a new data set, or a target domain, it is usually difficult to find the ground truth reference labels, and we aim to determine the labels by making use of the information from the source domains. Note that those domains are usually obtained in different areas and time periods, and that the corresponding data distribution various due to the change in illumination conditions, physical factors related to ground (e.g., different soil moisture or composition), vegetation, and atmospheric conditions. Other well-known instances of this situation include sentiment data analysis (Blitzer, Dredze, and Pereira 2007) and flow cytometry data analysis (Blanchard, Lee, and Scott 2011). DA approaches have Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. many applications in varies areas including natural language processing, computer vision, and biology. For surveys on DA, see, e.g., (Jiang 2008; Pan and Yang 2010; Candela et al. 2009). In this paper, we consider the situation with n source domains on which both the features X and label Y are given, i.e., we are given (x,y) = (x k , y (i) k ) mi k=1, where i = 1, ..., n, and mi is the sample size of the ith source domain. Our goal is to find the classifier for the target domain, on which only the features x = (xk) m k=1 are available. Here we are concerned with a difficult scenario where no labeled point is available in the target domain, known as unsupervised domain adaptation. Since PXY changes across domains, we have to find what knowledge in the source domains should be transferred to the target one. Previous work in domain adaptation has usually assumed that PX changes but PY |X remain the same, i.e., the covariate shift situation; see, e.g., (Shimodaira 2000; Huang et al. 2007; Sugiyama et al. 2008; Ben-David, Shalev-Shwartz, and Urner 2012). It is also known as sample selection bias (particularly on the features X) in (Zadrozny 2004). In practice it is very often that both PX and PY |X change simultaneously across domains. For instance, both of them are likely to change over time and location for a satellite image classification system. 
If the data distribution changes arbitrarily across domains, clearly knowledge from the sources may not help in predicting Y on the target domain (Rosenstein et al. 2005). One has to find what type of information should be transferred from sources to the target. One possibility is to assume the change in both PX and PY |X is due to the change in PY , while PX|Y remains the same, as known as prior probability shift (Storkey 2009; Plessis and Sugiyama 2012) or target shift (Zhang et al. 2013). The latter further models the change in PX|Y caused by a location-scale (LS) transformation of the features for each class. The constraint of the LS transformation renders PX|Y on the target domain, denoted by P t X|Y , identifiable; however, it might be too restrictive. Fortunately, the availability of multiple source domains provides more hints as to find P t X|Y , as well as P t Y |X . Several algorithms have been proposed to combine knowledge from multiple source domains. For instance, (Mansour, Mohri, and Rostamizadeh 2008) proposed to form the target hypothesis by combining source hypotheses with a distribution weighted rule. (Gao et al. 2008), (Duan et al. 2009), and (Chattopadhyay et al. 2011) combine the predictions made by the source hypotheses, with the weights determined in different ways. An intuitive interpretation of the assumptions underlying those algorithms would facilitate choosing or developing DA methods for the problem at hand. To the best of our knowledge, however, it is still missing in the literature. One of our contributions in this paper is to provide such an interpretation. This paper studies the multi-source DA problem from a causal point of view where we consider the underlying data generating process behind the observed domains. We are particularly interested in what types of information stay the same, what types of information change, and how they change across domains. This enables us to construct the optimal hypothesis for the target domain in various situations. To this end, we use causal models to represent the relationship between X and Y , because they provide a compact description of the properties of the change in the data distribution.1 They, for instance, help characterize transportability of experimental findings (Pearl and Bareinboim 2011) or recoverability from selection bias (Bareinboim, Tian, and Pearl 2014). As another contribution, we further focus on a typical DA scenario where both PY and PX|Y (or the causal mechanism to generate effect X from cause Y ) change across domains, but their changes are independent from each other, as implied by the causal model Y → X . We assume that the source domains contains rich information such that for each class, P t X|Y can be approximated by a linear mixture of PX|Y on source domains. Together with other mild conditions on PX|Y , we then show that P t X|Y , as well as P t Y , is identifiable (or can be uniquely recovered). We present a computationally efficient method to estimate the involved parameters based on kernel mean distribution embedding (Smola et al. 2007; Gretton et al. 2007), followed by several approaches to constructing the target classifier using those parameters. One might wonder how to find the causal information underlying the data to facilitate domain adaptation. We note that in practice, background causal knowledge is usually available, helping formulating how to transfer the knowledge from source domains to the target. 
Even if this is not the case, multiple source domains with different data distributions may allow one to identify the causal structure, since the causal knowledge can be seen from the change in data distributions; see e.g., (Tian and Pearl 2001). 1 Possible DA Situations and Their Solutions DA can be considered as a learning problem in nonstationary environments (Sugiyama and Kawanabe 2012). It is helpful to find how the data distribution changes; it provides the clues as to find the learning machine for the target domain. The causal model also describes how the components of the joint distribution are related to each other, which, for instance, gives a causal explanation of the behavior of semi-supervised learning (Schölkopf et al. 2012). Table 1: Notation used in this paper. X , Y random variables X , Y domains", "title": "" }, { "docid": "8161c7e946d7fdeb22ebd6eb1b245e1e", "text": "A novel compact wideband four-way waveguide power divider has been developed for millimeter-wave/THz array application. The power divider was realized by interconnecting three novel H-plane T-junction structures. By properly choosing the length of the interconnecting rectangular waveguide, the bandwidth of the proposed four-way power divider can be broadened. Good phase and amplitude balance of all output ports are guaranteed by its symmetrical structure. According to EM simulations, the simulated -20dB return loss bandwidth is from 74.9GHz to 98.3GHz, and good amplitude and phase balance are got in the operating band. The proposed structure has the characteristics of wideband, compact structure, and high power capacity.", "title": "" }, { "docid": "6141b0cb5d5b2f24336714453a29b03f", "text": "We present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with paraphrase and negation. Both new resources contribute to better understanding the paraphrase phenomenon, and allow for studying the relationship between paraphrasing and negation. To the developers of Paraphrase Identification systems ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.", "title": "" }, { "docid": "29d9137c5fdc7e96e140f19acd6dee80", "text": "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the \"proximity\" of nodes in a network. 
Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.", "title": "" }, { "docid": "31ccfd3694ac87cf42f9ca9bc74cc0f4", "text": "This paper presents a highly accurate and efficient method for crack detection using percolation-based image processing. The detection of cracks in concrete surfaces during the maintenance and diagnosis of concrete structures is important to ensure the safety of these structures. Recently, the image-based crack detection method has attracted considerable attention due to its low cost and objectivity. However, there are several problems in the practical application of image processing for crack detection since real concrete surface images have noises such as concrete blebs, stains, and shadings of several sizes. In order to resolve these problems, our proposed method focuses on the number of pixels in a crack and the connectivity of the pixels. Our method employs a percolation model for crack detection in order to consider the features of the cracks. Through experiments using real concrete surface images, we demonstrate the accuracy and efficiency of our method.", "title": "" }, { "docid": "b19aab238e0eafef52974a87300750a3", "text": "This paper introduces a method to detect a fault associated with critical components/subsystems of an engineered system. It is required, in this case, to detect the fault condition as early as possible, with specified degree of confidence and a prescribed false alarm rate. Innovative features of the enabling technologies include a Bayesian estimation algorithm called particle filtering, which employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme requires a fault progression model describing the degrading state of the system in the operation. A generic model based on fatigue analysis is provided and its parameters adaptation is discussed in detail. The scheme provides the probability of abnormal condition and the presence of a fault is confirmed for a given confidence level. The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig.", "title": "" }, { "docid": "15f6b6be4eec813fb08cb3dd8b9c97f2", "text": "ACKNOWLEDGEMENTS First, I would like to thank my supervisor Professor H. Levent Akın for his guidance. This thesis would not have been possible without his encouragement and enthusiastic support. I would also like to thank all the staff at the Artificial Intelligence Laboratory for their encouragement throughout the year. Their success in RoboCup is always a good motivation. Sharing their precious ideas during the weekly seminars have always guided me to the right direction. Finally I am deeply grateful to my family and to my wife Derya. They always give me endless love and support, which has helped me to overcome the various challenges along the way. Thank you for your patience... The field of Intelligent Transport Systems (ITS) is improving rapidly in the world. Ultimate aim of such systems is to realize fully autonomous vehicle. The researches in the field offer the potential for significant enhancements in safety and operational efficiency. 
Lane tracking is an important topic in autonomous navigation because the navigable region usually stands between the lanes, especially in urban environments. Several approaches have been proposed, but Hough transform seems to be the dominant among all. A robust lane tracking method is also required for reducing the effect of the noise and achieving the required processing time. In this study, we present a new lane tracking method which uses a partitioning technique for obtaining Multiresolution Hough Transform (MHT) of the acquired vision data. After the detection process, a Hidden Markov Model (HMM) based method is proposed for tracking the detected lanes. Traffic signs are important instruments to indicate the rules on roads. This makes them an essential part of the ITS researches. It is clear that leaving traffic signs out of concern will cause serious consequences. Although the car manufacturers have started to deploy intelligent sign detection systems on their latest models, the road conditions and variations of actual signs on the roads require much more robust and fast detection and tracking methods. Localization of such systems is also necessary because traffic signs differ slightly between countries. This study also presents a fast and robust sign detection and tracking method based on geometric transformation and genetic algorithms (GA). Detection is done by a genetic algorithm (GA) approach supported by a radial symmetry check so that false alerts are considerably reduced. Classification v is achieved by a combination of SURF features with NN or SVM classifiers. A heuristic …", "title": "" } ]
scidocsrr
f5abffa5b9526f85df481cab3a6bc537
Canonical Genetic Signatures of the Adult Human Brain
[ { "docid": "8159b022ee9252d2320e8c2bf7b582f6", "text": "The Human Connectome Project consortium led by Washington University, University of Minnesota, and Oxford University is undertaking a systematic effort to map macroscopic human brain circuits and their relationship to behavior in a large population of healthy adults. This overview article focuses on progress made during the first half of the 5-year project in refining the methods for data acquisition and analysis. Preliminary analyses based on a finalized set of acquisition and preprocessing protocols demonstrate the exceptionally high quality of the data from each modality. The first quarterly release of imaging and behavioral data via the ConnectomeDB database demonstrates the commitment to making HCP datasets freely accessible. Altogether, the progress to date provides grounds for optimism that the HCP datasets and associated methods and software will become increasingly valuable resources for characterizing human brain connectivity and function, their relationship to behavior, and their heritability and genetic underpinnings.", "title": "" } ]
[ { "docid": "5d21df36697616719bcc3e0ee22a08bd", "text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics", "title": "" }, { "docid": "be86e50e71e8d8ede9e3c64ae510f1d0", "text": "The subscription covering optimization, whereby a general subscription quenches the forwarding of more specific ones, is a common technique to reduce network traffic and routing state in content-based routing networks. Such optimizations, however, leave the system vulnerable to unsubscriptions that trigger the immediate forwarding of all the subscriptions they had previously quenched. These subscription bursts can severely congest the network, and destabilize the system. This paper presents techniques to retain much of the benefits of subscription covering while avoiding bursty subscription traffic. Heuristics are used to estimate the similarity among subscriptions, and a distributed algorithm determines the portions of a subscription propagation tree that should be preserved. Evaluations show that these mechanisms avoid subscription bursts while maintaining relatively compact routing tables.", "title": "" }, { "docid": "8cdf1b78bdf379e9355228a07f8b2016", "text": "OBJECTIVE AND BACKGROUND\nAutism is characterized by repetitive behaviors and impaired socialization and communication. Preliminary evidence showed possible language benefits in autism from the β-adrenergic antagonist propranolol. Earlier studies in other populations suggested propranolol might benefit performance on tasks involving a search of semantic and associative networks under certain conditions. Therefore, we wished to determine whether this benefit of propranolol includes an effect on semantic fluency in autism.\n\n\nMETHODS\nA sample of 14 high-functioning adolescent and adult participants with autism and 14 matched controls were given letter and category word fluency tasks on 2 separate testing sessions; 1 test was given 60 minutes after the administration of 40 mg propranolol orally, and 1 test was given after placebo, administered in a double-blinded, counterbalanced manner.\n\n\nRESULTS\nParticipants with autism were significantly impaired compared with controls on both fluency tasks. Propranolol significantly improved performance on category fluency, but not letter fluency among autism participants. No drug effect was observed among controls. Expected drug effects on heart rate and blood pressure were observed in both the groups.\n\n\nCONCLUSIONS\nResults are consistent with a selective beneficial effect of propranolol on flexibility of access to semantic and associative networks in autism, with no observed effect on phonological networks. 
Further study will be necessary to understand potential clinical implications of this finding.", "title": "" }, { "docid": "0f0305afce53933df1153af6a31c09fb", "text": "In the study of indoor simultaneous localization and mapping (SLAM) problems using a stereo camera, two types of primary features-point and line segments-have been widely used to calculate the pose of the camera. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. In this paper, an improved indoor visual SLAM method to better utilize the advantages of point and line segment features and achieve robust results in difficult environments is proposed. First, point and line segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization problem of line segment features, we add minimization of angle observation in addition to the traditional re-projection error of endpoints. Finally, our model of motion estimation, which is adaptive to the motion state of the camera, is applied to build a new combinational Hessian matrix and gradient vector for iterated pose estimation. Furthermore, our proposal has been tested on EuRoC MAV datasets and sequence images captured with our stereo camera. The experimental results demonstrate the effectiveness of our improved point-line feature based visual SLAM method in improving localization accuracy when the camera moves with rapid rotation or violent fluctuation.", "title": "" }, { "docid": "18f13858b5f9e9a8e123d80b159c4d72", "text": "Cryptocurrency, and its underlying technologies, has been gaining popularity for transaction management beyond financial transactions. Transaction information is maintained in the blockchain, which can be used to audit the integrity of the transaction. The focus on this paper is the potential availability of block-chain technology of other transactional uses. Block-chain is one of the most stable open ledgers that preserves transaction information, and is difficult to forge. Since the information stored in block-chain is not related to personally identifiable information, it has the characteristics of anonymity. Also, the block-chain allows for transparent transaction verification since all information in the block-chain is open to the public. These characteristics are the same as the requirements for a voting system. That is, strong robustness, anonymity, and transparency. In this paper, we propose an electronic voting system as an application of blockchain, and describe block-chain based voting at a national level through examples.", "title": "" }, { "docid": "b13c9597f8de229fb7fec3e23c0694d1", "text": "Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. 
Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.", "title": "" }, { "docid": "bd7f571534a9aa49cd875adf3615e2be", "text": "Built on an analogy between the visual and auditory systems, the following dual stream model for language processing was suggested recently: a dorsal stream is involved in mapping sound to articulation, and a ventral stream in mapping sound to meaning. The goal of the study presented here was to test the neuroanatomical basis of this model. Combining functional magnetic resonance imaging (fMRI) with a novel diffusion tensor imaging (DTI)-based tractography method we were able to identify the most probable anatomical pathways connecting brain regions activated during two prototypical language tasks. Sublexical repetition of speech is subserved by a dorsal pathway, connecting the superior temporal lobe and premotor cortices in the frontal lobe via the arcuate and superior longitudinal fascicle. In contrast, higher-level language comprehension is mediated by a ventral pathway connecting the middle temporal lobe and the ventrolateral prefrontal cortex via the extreme capsule. Thus, according to our findings, the function of the dorsal route, traditionally considered to be the major language pathway, is mainly restricted to sensory-motor mapping of sound to articulation, whereas linguistic processing of sound to meaning requires temporofrontal interaction transmitted via the ventral route.", "title": "" }, { "docid": "8bda505118b1731e778b41203520b3b8", "text": "Image search and retrieval systems depend heavily on availability of descriptive textual annotations with images, to match them with textual queries of users. In most cases, such systems have to rely on users to provide tags or keywords with images. Users may add insufficient or noisy tags. A system to automatically generate descriptive tags for images can be extremely helpful for search and retrieval systems. Automatic image annotation has been explored widely in both image and text processing research communities. In this paper, we present a novel approach to tackle this problem by incorporating contextual information provided by scene analysis of image. Image can be represented by features which indicate type of scene shown in the image, instead of representing individual objects or local characteristics of that image. We have used such features to provide context in the process of predicting tags for images.", "title": "" }, { "docid": "261af8bb868e629b0020bb4c2a63d867", "text": "A double-layer TFT NAND-type flash memory is demonstrated, ushering into the era of three-dimensional (3D) flash memory. A TFT device using bandgap engineered SONOS (BE-SONOS) (Lue et al., 2005, Lai et al., 2006) with fully-depleted (FD) poly silicon (60 nm) channel and tri-gate P+-poly gate is integrated into a NAND array. Small devices (L/W=0.2/0.09 mum) with excellent performance and reliability properties are achieved. The bottom layer shows no sign of reliability degradation compared to the top layer, indicating the potential for further multi-layer stacking. The present work illustrates the feasibility of 3D flash memory", "title": "" }, { "docid": "93bc875cf2145dfdcd8a2ce44049aa0d", "text": "We construct a counterfactual statement when we reason conjecturally about an event which did or did not occur in the past: If an event had occurred, what would have happened? 
Would it be relevant? Real world examples, as studied by Byrne, Rescher and many others, show that these conditionals involve a complex reasoning process. An intuitive and elegant approach to evaluate counterfactuals, without deep revision mechanisms, is proposed by Pearl. His Do-Calculus identifies causal relations in a Bayesian network resorting to counterfactuals. Though leaving out probabilities, we adopt Pearl’s stance, and its prior epistemological justification to counterfactuals in causal Bayesian networks, but for programs. Logic programming seems a suitable environment for several reasons. First, its inferential arrow is adept at expressing causal direction and conditional reasoning. Secondly, together with its other functionalities such as abduction, integrity constraints, revision, updating and debugging (a form of counterfactual reasoning), it proffers a wide range of expressibility itself. We show here how programs under the weak completion semantics in an abductive framework, comprising the integrity constraints, can smoothly and uniformly capture well-known and off-the-shelf counterfactual problems and conundrums, taken from the psychological and philosophical literature. Our approach is adroitly reconstructable in other three-valued LP semantics, or restricted to two-valued ones.", "title": "" }, { "docid": "daf63012a3603e5fd2fda4bdd693d010", "text": "Vertical selection is the task of predicting relevant verticals for a Web query so as to enrich the Web search results with complementary vertical results. We investigate a novel variant of this task, where the goal is to detect queries with a question intent. Specifically, we address queries for which the user would like an answer with a human touch. We call these CQA-intent queries, since answers to them are typically found in community question answering (CQA) sites. A typical approach in vertical selection is using a vertical’s specific language model of relevant queries and computing the query-likelihood for each vertical as a selective criterion. This works quite well for many domains like Shopping, Local and Travel. Yet, we claim that queries with CQA intent are harder to distinguish by modeling content alone, since they cover many different topics. We propose to also take the structure of queries into consideration, reasoning that queries with question intent have quite a different structure than other queries. We present a supervised classification scheme, random forest over word-clusters for variable length texts, which can model the query structure. Our experiments show that it substantially improves classification performance in the CQA-intent selection task compared to content-oriented based classification, especially as query length grows.", "title": "" }, { "docid": "bb88a929b1ac6565c7d31abb65813b29", "text": "Esophagitis dissecans superficialis and eosinophilic esophagitis are distinct esophageal pathologies with characteristic clinical and histologic findings. Esophagitis dissecans superficialis is a rare finding on endoscopy consisting of the peeling of large fragments of esophageal mucosa. Histology shows sloughing of the epithelium and parakeratosis. Eosinophilic esophagitis is an allergic disease of the esophagus characterized by eosinophilic inflammation of the epithelium and symptoms of esophageal dysfunction. Both of these esophageal processes have been associated with other diseases, but there is no known association between them. 
We describe a case of esophagitis dissecans superficialis and eosinophilic esophagitis in an adolescent patient. To our knowledge, this is the first case describing an association between esophageal dissecans superficialis and eosinophilic esophagitis. Citation: Guerra MR, Vahabnezhad E, Swanson E, Naini BV, Wozniak LJ (2015) Esophagitis dissecans associated with eosinophilic esophagitis in an adolescent. Adv Pediatr Res 2:8. doi:10.12715/apr.2015.2.8 Received: January 27, 2015; Accepted: February 19, 2015; Published: March 19, 2015 Copyright: © 2015 Guerra et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Competing interests: The authors have declared that no competing interests exist. * Email: marjorieanneguerra@mednet.ucla.edu", "title": "" }, { "docid": "e6f28d4bd8cbbc67acdbb06cc84a8c40", "text": "• Regularization: To force the label embedding as the anchor points for each classes, we regularize the learned label embeddings to be on its corresponding manifold Model Yahoo DBPedia AGNews Yelp P. Yelp F. Bag-ofwords 68.9 96.6 88.8 92.2 58 CNN 70.94 98.28 91.45 95.11 59.48 LSTM 70.84 98.55 86.06 94.74 58.17 Deep CNN 73.43 98.71 91.27 95.72 64.26 SWEM 73.53 98.42 92.24 93.76 61.11 fastText 72.3 98.6 92.5 95.7 63.9 HAN 75.8 Bi-BloSAN 76.28 98.77 93.32 94.56 62.13 LEAM 77.42 99.02 92.45 95.31 64.09 Test Accuracy on document classification tasks, in percentage", "title": "" }, { "docid": "b3874f8390e284c119635e7619e7d952", "text": "Since a vehicle logo is the clearest indicator of a vehicle manufacturer, most vehicle manufacturer recognition (VMR) methods are based on vehicle logo recognition. Logo recognition can be still a challenge due to difficulties in precisely segmenting the vehicle logo in an image and the requirement for robustness against various imaging situations simultaneously. In this paper, a convolutional neural network (CNN) system has been proposed for VMR that removes the requirement for precise logo detection and segmentation. In addition, an efficient pretraining strategy has been introduced to reduce the high computational cost of kernel training in CNN-based systems to enable improved real-world applications. A data set containing 11 500 logo images belonging to 10 manufacturers, with 10 000 for training and 1500 for testing, is generated and employed to assess the suitability of the proposed system. An average accuracy of 99.07% is obtained, demonstrating the high classification potential and robustness against various poor imaging situations.", "title": "" }, { "docid": "70f35b19ba583de3b9942d88c94b9148", "text": "ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site GUIDE) is an IST project, funded by the EU, aiming at providing a personalized Virtual Reality guide and tour assistant to archaeological site visitors and a multimedia repository and information system for archaeologists and site curators. The system provides monument reconstructions, ancient life simulation, and database tools for creating and archiving archaeological multimedia material.", "title": "" }, { "docid": "830588b6ff02a05b4d76b58a3e4e7c44", "text": "The integration of GIS and multicriteria decision analysis has attracted significant interest over the last 15 years or so. 
This paper surveys the GISbased multicriteria decision analysis (GIS-MCDA) approaches using a literature review and classification of articles from 1990 to 2004. An electronic search indicated that over 300 articles appeared in refereed journals. The paper provides taxonomy of those articles and identifies trends and developments in GISMCDA.", "title": "" }, { "docid": "0790fd5a24e8a683fde04f3c8976ba9e", "text": "In a sophisticated and coordinated cyber-attack $100 million has been stolen from Bangladesh's account. Attackers introduced malicious code remotely into the Bangladesh Bank's server, which allowed them to process and authorize the transactions. Advanced attack techniques poses threats to all web application systems. Cross Site Scripting (XSS) and Cross Site Request Forgery (CSRF) are two vulnerabilities which have techniques that are similar to those of the Bangladesh Bank heist. XSS and CSRF are third and eighth of the top ten web application vulnerabilities on OWASP list from 2013 till now. Both these attacks violate the users trust for the websites and web browsers. Because of the severity of these vulnerabilities, security specialists have always shared their concern and warned the web developers. Yet Bangladesh government's and developers' reluctance to address the severity of the attacks resulted in Bangladesh Bank heist. In this paper, we aim to study and conduct an investigation of the vulnerabilities of similar attacks as these of the Bangladesh Bank heist on web applications of Bangladesh. We would focus on XSS and CSRF vulnerabilities due to their high ranking on the OWASP list. We analyze the data collected during the investigation and provide a summary of the current state and a guideline for the future web developers.", "title": "" }, { "docid": "8c800687e0f091cfd1edbd7e125cfed4", "text": "Semantic Annotation is required to add machine-readable content to natural language text. A global initiative such as the Semantic Web directly depends on the annotation of massive amounts of textual Web resources. However, considering the amount of those resources, a manual semantic annotation of their contents is neither feasible nor scalable. In this paper we introduce a methodology to partially annotate textual content of Web resources in an automatic and unsupervised way. It uses several well-established learning techniques and heuristics to discover relevant entities in text and to associate them to classes of an input ontology by means of linguistic patterns. It also relies on the Web information distribution to assess the degree of semantic co-relation between entities and classes of the input domain ontology. Special efforts have been put in minimizing the amount of Web accesses required to evaluate entities in order to ensure the scalability of the approach. A manual evaluation has been carried out to test the methodology for several domains showing promising results.", "title": "" }, { "docid": "88c5a6fca072ae849d300e6f30d15c40", "text": "Models such as feed-forward neural networks and certain other structures investigated in the computer science literature are not amenable to closed-form Bayesian analysis. The paper reviews the various approaches taken to overcome this difficulty, involving the use of Gaussian approximations, Markov chain Monte Carlo simulation routines and a class of non-Gaussian but “deterministic” approximations called variational approximations.", "title": "" } ]
scidocsrr
82d049448ca6604fbf3346624cf322c3
Efficient 3D shape matching and retrieval using a concrete radialized spherical projection representation
[ { "docid": "eea54b2aba2f533113176c4e87e80a44", "text": "of the dissertation", "title": "" } ]
[ { "docid": "8e8dc6f3579cf4360118a4ce5550de7e", "text": "In the Internet-age, malware poses a serious and evolving threat to security, making the detection of malware of utmost concern. Many research efforts have been conducted on intelligent malware detection by applying data mining and machine learning techniques. Though great results have been obtained with these methods, most of them are built on shallow learning architectures, which are still somewhat unsatisfying for malware detection problems. In this paper, based on the Windows Application Programming Interface (API) calls extracted from the Portable Executable (PE) files, we study how a deep learning architecture using the stacked AutoEncoders (SAEs) model can be designed for intelligent malware detection. The SAEs model performs as a greedy layerwise training operation for unsupervised feature learning, followed by supervised parameter fine-tuning (e.g., weights and offset vectors). To the best of our knowledge, this is the first work that deep learning using the SAEs model based on Windows API calls is investigated in malware detection for real industrial application. A comprehensive experimental study on a real and large sample collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed method can further improve the overall performance in malware detection compared with traditional shallow learning methods.", "title": "" }, { "docid": "96b1688b19bf71e8f1981d9abe52fc2c", "text": "Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) – specifically “How?” and “Why?” questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement comparing to baselines that disregard process structure.", "title": "" }, { "docid": "9eedeec21ab380c0466ed7edfe7c745d", "text": "In this paper, we study the effect of using-grams (sequences of words of length n) for text categorization. We use an efficient algorithm for gener ating suchn-gram features in two benchmark domains, the 20 newsgroups data set and 21,578 REU TERS newswire articles. Our results with the rule learning algorithm R IPPER indicate that, after the removal of stop words, word sequences of length 2 or 3 are most useful. Using l o er sequences reduces classification performance.", "title": "" }, { "docid": "b41f99ba59923c108c43577c9d08f3dd", "text": "I use daily prices collected from online retailers in five countries to study the impact of measurement bias on three common price stickiness statistics. 
Relative to previous results, I find that online prices have longer durations, with fewer price changes close to 0, and hazard functions that initially increase over time. I show that time-averaging and imputed prices in scanner and CPI data can fully explain the differences with the literature. I then report summary statistics for the duration and size of price changes using scraped data collected from 181 retailers in 31 countries.", "title": "" }, { "docid": "257c887438ec1fbbe93c8ae757fb3a61", "text": "Facial landmark detection has received much attention in recent years, with two detection paradigms emerging: local approaches, where each facial landmark is modeled individually and with the help of a shape model; and holistic approaches, where the face appearance and shape are modeled jointly. In recent years both of these approaches have shown great performance gains for facial landmark detection even under \"in-the-wild\" conditions of varying illumination, occlusion and image quality. However, their accuracy and robustness are very often reduced for profile faces where face alignment is more challenging (e.g., no more facial symmetry, less defined features and more variable background). In this paper, we present a new model, named Holistically Constrained Local Model (HCLM), which unifies local and holistic facial landmark detection by integrating head pose estimation, sparse-holistic landmark detection and dense-local landmark detection. We evaluate our new model on two publicly available datasets, 300-W and AFLW, as well as a newly introduced dataset, IJB-FL which includes a larger proportion of profile face poses. Our HCLM model shows state-of-the-art performance, especially with extreme head poses.", "title": "" }, { "docid": "f3c2663cb0341576d754bb6cd5f2c0f5", "text": "This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.", "title": "" }, { "docid": "3a17d60c2eb1df3bf491be3297cffe79", "text": "Received: 3 October 2009 Revised: 22 June 2011 Accepted: 3 July 2011 Abstract Studies claiming to use the Grounded theory methodology (GTM) have been quite prevalent in information systems (IS) literature. A cursory review of this literature reveals conflict in the understanding of GTM, with a variety of grounded theory approaches apparent. The purpose of this investigation was to establish what alternative grounded theory approaches have been employed in IS, and to what extent each has been used. 
In order to accomplish this goal, a comprehensive set of IS articles that claimed to have followed a grounded theory approach were reviewed. The articles chosen were those published in the widely acknowledged top eight IS-centric journals, since these journals most closely represent exemplar IS research. Articles for the period 1985-2008 were examined. The analysis revealed four main grounded theory approaches in use, namely (1) the classic grounded theory approach, (2) the evolved grounded theory approach, (3) the use of the grounded theory approach as part of a mixed methodology, and (4) the application of grounded theory techniques, typically for data analysis purposes. The latter has been the most common approach in IS research. The classic approach was the least often employed, with many studies opting for an evolved or mixed method approach. These and other findings are discussed and implications drawn. European Journal of Information Systems (2013) 22, 119–129. doi:10.1057/ejis.2011.35; published online 30 August 2011", "title": "" }, { "docid": "5f0e1c63d60a4bdd8af5994b25b6654d", "text": "The machine representation of floating point values has limited precision such that errors may be introduced during execution. These errors may get propagated and magnified by the following operations, leading to instability problems, e.g., control flow path may be undesirably altered and faulty output may be emitted. In this paper, we develop an on-the-fly efficient monitoring technique that can predict if an execution is stable. The technique does not explicitly compute errors as doing so incurs high overhead. Instead, it detects possible places where an error becomes substantially inflated regarding the corresponding value, and then tags the value with one bit to denote that it has an inflated error. It then tracks inflation bit propagation, taking care of operations that may cut off such propagation. It reports instability if any inflation bit reaches a critical execution point, such as a predicate, where the inflated error may induce substantial execution difference, such as different execution paths. Our experiment shows that with appropriate thresholds, the technique can correctly detect that over 99.999996% of the inputs of all the programs we studied are stable while a traditional technique relying solely on inflation detection mistakenly classifies majority of the inputs as unstable for some of the programs. Compared to the state of the art technique that is based on high precision computation and causes several hundred times slowdown, our technique only causes 7.91 times slowdown on average and can report all the true unstable executions with the appropriate thresholds.", "title": "" }, { "docid": "e62daef8b5273096e0f174c73e3674a8", "text": "A wide range of human-robot collaborative applications in diverse domains such as manufacturing, search-andrescue, health care, the entertainment industry, and social interactions, require an autonomous robot to follow its human companion. Different working environments and applications pose diverse challenges by adding constraints on the choice of sensors, the degree of autonomy, and dynamics of the person-following robot. Researchers have addressed these challenges in many ways and contributed to the development of a large body of literature. This paper provides a comprehensive overview of the literature by categorizing different aspects of person-following by autonomous robots. 
Also, the corresponding operational challenges are identified based on various design choices for ground, underwater, and aerial scenarios. In addition, state-of-the-art methods for perception, planning, control, and interaction are elaborately discussed and their applicability in varied operational scenarios are presented. Then, qualitative evaluations of some of the prominent methods are performed, corresponding practicalities are illustrated, and their feasibility is analyzed in terms of standard metrics. Furthermore, several prospective application areas are identified, and open problems are highlighted for future research.", "title": "" }, { "docid": "bcb52f8aa483a8717e9e82fb8ae3160f", "text": "This paper presents a dynamic task scheduling approach to executing dense linear algebra algorithms on multicore systems (either shared-memory or distributed-memory). We use a task-based library to replace the existing linear algebra subroutines such as PBLAS to transparently provide the same interface and computational function as the ScaLAPACK library. Linear algebra programs are written with the task-based library and executed by a dynamic runtime system. We mainly focus our runtime system design on the metric of performance scalability. We propose a distributed algorithm to solve data dependences without process cooperation. We have implemented the runtime system and applied it to three linear algebra algorithms: Cholesky, LU, and QR factorizations. Our experiments on both shared-memory machines (16, 32 cores) and distributed-memory machines (1024 cores) demonstrate that our runtime system is able to achieve good scalability. Furthermore, we provide analytical analysis to show why the tiled algorithms are scalable and the expected execution time.", "title": "" }, { "docid": "46ab85859bd3966b243db79696a236f0", "text": "The general purpose optimization method known as Particle Swarm Optimization (PSO) has received much attention in past years, with many attempts to find the variant that performs best on a wide variety of optimization problems. The focus of past research has been with making the PSO method more complex, as this is frequently believed to increase its adaptability to other optimization problems. This study takes the opposite approach and simplifies the PSO method. To compare the efficacy of the original PSO and the simplified variant here, an easy technique is presented for efficiently tuning their behavioural parameters. The technique works by employing an overlaid meta-optimizer, which is capable of simultaneously tuning parameters with regard to multiple optimization problems, whereas previous approaches to meta-optimization have tuned behavioural parameters to work well on just a single optimization problem. It is then found that the PSO method and its simplified variant not only have comparable performance for optimizing a number of Artificial Neural Network problems, but the simplified variant appears to offer a small improvement in some cases.", "title": "" }, { "docid": "8d3a65d1dcf04773839a9ac4de0014ac", "text": "This paper proposes an energy-efficient deep inmemory architecture for NAND flash (DIMA-F) to perform machine learning and inference algorithms on NAND flash memory. Algorithms for data analytics, inference, and decision-making require processing of large data volumes and are hence limited by data access costs. 
DIMA-F achieves energy savings and throughput improvement for such algorithms by reading and processing data in the analog domain at the periphery of NAND flash memory. This paper also provides behavioral models of DIMA-F that can be used for analysis and large scale system simulations in presence of circuit non-idealities and variations. DIMA-F is studied in the context of linear support vector machines and knearest neighbor for face detection and recognition, respectively. An estimated 8×-to-23× reduction in energy and 9×-to-15× improvement in throughput resulting in EDP gains up to 345× over the conventional NAND flash architecture incorporating an external digital ASIC for computation.", "title": "" }, { "docid": "0a63a875b57b963372640f8fb527bd5c", "text": "KEMI-TORNIO UNIVERSITY OF APPLIED SCIENCES Degree programme: Business Information Technology Writer: Guo, Shuhang Thesis title: Analysis and evaluation of similarity metrics in collaborative filtering recommender system Pages (of which appendix): 62 (1) Date: May 15, 2014 Thesis instructor: Ryabov, Vladimir This research is focused on the field of recommender systems. The general aims of this thesis are to summary the state-of-the-art in recommendation systems, evaluate the efficiency of the traditional similarity metrics with varies of data sets, and propose an ideology to model new similarity metrics. The literatures on recommender systems were studied for summarizing the current development in this filed. The implementation of the recommendation and evaluation was achieved by Apache Mahout which provides an open source platform of recommender engine. By importing data information into the project, a customized recommender engine was built. Since the recommending results of collaborative filtering recommender significantly rely on the choice of similarity metrics and the types of the data, several traditional similarity metrics provided in Apache Mahout were examined by the evaluator offered in the project with five data sets collected by some academy groups. From the evaluation, I found out that the best performance of each similarity metric was achieved by optimizing the adjustable parameters. The features of each similarity metric were obtained and analyzed with practical data sets. In addition, an ideology by combining two traditional metrics was proposed in the thesis and it was proven applicable and efficient by the metrics combination of Pearson correlation and Euclidean distance. The observation and evaluation of traditional similarity metrics with practical data is helpful to understand their features and suitability, from which new models can be created. Besides, the ideology proposed for modeling new similarity metrics can be found useful both theoretically and practically.", "title": "" }, { "docid": "5f365973899e33de3052dda238db13c1", "text": "The global threat to public health posed by emerging multidrug-resistant bacteria in the past few years necessitates the development of novel approaches to combat bacterial infections. Endolysins encoded by bacterial viruses (or phages) represent one promising avenue of investigation. These enzyme-based antibacterials efficiently kill Gram-positive bacteria upon contact by specific cell wall hydrolysis. However, a major hurdle in their exploitation as antibacterials against Gram-negative pathogens is the impermeable lipopolysaccharide layer surrounding their cell wall. 
Therefore, we developed and optimized an approach to engineer these enzymes as outer membrane-penetrating endolysins (Artilysins), rendering them highly bactericidal against Gram-negative pathogens, including Pseudomonas aeruginosa and Acinetobacter baumannii. Artilysins combining a polycationic nonapeptide and a modular endolysin are able to kill these (multidrug-resistant) strains in vitro with a 4 to 5 log reduction within 30 min. We show that the activity of Artilysins can be further enhanced by the presence of a linker of increasing length between the peptide and endolysin or by a combination of both polycationic and hydrophobic/amphipathic peptides. Time-lapse microscopy confirmed the mode of action of polycationic Artilysins, showing that they pass the outer membrane to degrade the peptidoglycan with subsequent cell lysis. Artilysins are effective in vitro (human keratinocytes) and in vivo (Caenorhabditis elegans). Importance: Bacterial resistance to most commonly used antibiotics is a major challenge of the 21st century. Infections that cannot be treated by first-line antibiotics lead to increasing morbidity and mortality, while millions of dollars are spent each year by health care systems in trying to control antibiotic-resistant bacteria and to prevent cross-transmission of resistance. Endolysins--enzymes derived from bacterial viruses--represent a completely novel, promising class of antibacterials based on cell wall hydrolysis. Specifically, they are active against Gram-positive species, which lack a protective outer membrane and which have a low probability of resistance development. We modified endolysins by protein engineering to create Artilysins that are able to pass the outer membrane and become active against Pseudomonas aeruginosa and Acinetobacter baumannii, two of the most hazardous drug-resistant Gram-negative pathogens.", "title": "" }, { "docid": "df4146f0b223b9bc7a983a4198589b48", "text": "Since its official introduction in 2012, the Robot Web Tools project has grown tremendously as an open-source community, enabling new levels of interoperability and portability across heterogeneous robot systems, devices, and front-end user interfaces. At the heart of Robot Web Tools is the rosbridge protocol as a general means for messaging ROS topics in a client-server paradigm suitable for wide area networks, and human-robot interaction at a global scale through modern web browsers. Building from rosbridge, this paper describes our efforts with Robot Web Tools to advance: 1) human-robot interaction through usable client and visualization libraries for more efficient development of front-end human-robot interfaces, and 2) cloud robotics through more efficient methods of transporting high-bandwidth topics (e.g., kinematic transforms, image streams, and point clouds). We further discuss the significant impact of Robot Web Tools through a diverse set of use cases that showcase the importance of a generic messaging protocol and front-end development systems for human-robot interaction.", "title": "" }, { "docid": "29cceb730e663c08e20107b6d34ced8b", "text": "Cumulative citation recommendation refers to the task of filtering a time-ordered corpus for documents that are highly relevant to a predefined set of entities. This task has been introduced at the TREC Knowledge Base Acceleration track in 2012, where two main families of approaches emerged: classification and ranking. 
In this paper we perform an experimental comparison of these two strategies using supervised learning with a rich feature set. Our main finding is that ranking outperforms classification on all evaluation settings and metrics. Our analysis also reveals that a ranking-based approach has more potential for future improvements.", "title": "" }, { "docid": "0b6a3b143dfccd7ca9ea09f7fa5b5e8c", "text": "Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized as an aim to model the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in the everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques as an aim to model cancer risk or patient outcomes.", "title": "" }, { "docid": "750a1dd126b0bb90def0bba34dc73cdd", "text": "Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this article, we present a novel skinning algorithm based on linear combination of dual quaternions. Even though our proposed method is approximate, it does not exhibit any of the artifacts inherent in previous methods and still permits an efficient GPU implementation. Upgrading an existing animation system from linear to dual quaternion skinning is very easy and has a relatively minor impact on runtime performance.", "title": "" }, { "docid": "caf01ca9e0bb31bbaf3e32741637477c", "text": "Deep convolutional neural networks (DCNNs) have been used to achieve state-of-the-art performance on many computer vision tasks (e.g., object recognition, object detection, semantic segmentation) thanks to a large repository of annotated image data. 
Large labeled datasets for other sensor modalities, e.g., multispectral imagery (MSI), are not available due to the large cost and manpower required. In this paper, we adapt state-of-the-art DCNN frameworks in computer vision for semantic segmentation for MSI imagery. To overcome label scarcity for MSI data, we substitute real MSI for generated synthetic MSI in order to initialize a DCNN framework. We evaluate our network initialization scheme on the new RIT-18 dataset that we present in this paper. This dataset contains very-high resolution MSI collected by an unmanned aircraft system. The models initialized with synthetic imagery were less prone to over-fitting and provide a state-of-the-art baseline for future work.", "title": "" }, { "docid": "eb0d9e1ebb725c5c14bfaec29faed500", "text": "STUDY DESIGN\nMulticentered randomized controlled trial.\n\n\nOBJECTIVES\nTo determine if previously validated low back pain (LBP) subgroups respond differently to contrasting exercise prescriptions.\n\n\nSUMMARY OF BACKGROUND DATA\nThe role of \"patient-specific\" exercises in managing LBP is controversial.\n\n\nMETHODS\nA total of 312 acute, subacute, and chronic patients, including LBP-only and sciatica, underwent a standardized mechanical assessment classifying them by their pain response, specifically eliciting either a \"directional preference\" (DP) (i.e., an immediate, lasting improvement in pain from performing either repeated lumbar flexion, extension, or sideglide/rotation tests), or no DP. Only DP subjects were randomized to: 1) directional exercises \"matching\" their preferred direction (DP), 2) exercises directionally \"opposite\" their DP, or 3) \"nondirectional\" exercises. Outcome measures included pain intensity, location, disability, medication use, degree of recovery, depression, and work interference.\n\n\nRESULTS\nA DP was elicited in 74% (230) of subjects. One third of both the opposite and non-directionally treated subjects withdrew within 2 weeks because of no improvement or worsening (no matched subject withdrew). Significantly greater improvements occurred in matched subjects compared with both other treatment groups in every outcome (P values <0.001), including a threefold decrease in medication use.\n\n\nCONCLUSIONS\nConsistent with prior evidence, a standardized mechanical assessment identified a large subgroup of LBP patients with a DP. Regardless of subjects' direction of preference, the response to contrasting exercise prescriptions was significantly different: exercises matching subjects' DP significantly and rapidly decreased pain and medication use and improved in all other outcomes. If repeatable, such subgroup validation has important implications for LBP management.", "title": "" } ]
scidocsrr
f34c5c27c3fd420b3efea774e854d2c2
Irregular Bipolar Fuzzy Graphs
[ { "docid": "fd48614d255b7c7bc7054b4d5de69a15", "text": "Article history: Received 31 December 2007 Received in revised form 12 December 2008 Accepted 3 January 2009", "title": "" } ]
[ { "docid": "232d7e7986de374499c8ca580d055729", "text": "In this paper we provide a survey of recent contributions to robust portfolio strategies from operations research and finance to the theory of portfolio selection. Our survey covers results derived not only in terms of the standard mean-variance objective, but also in terms of two of the most popular risk measures, mean-VaR and mean-CVaR developed recently. In addition, we review optimal estimation methods and Bayesian robust approaches.", "title": "" }, { "docid": "d58c81bf22cdad5c1a669dd9b9a77fbd", "text": "The rapid increase in healthcare demand has seen novel developments in health monitoring technologies, such as the body area networks (BAN) paradigm. BAN technology envisions a network of continuously operating sensors, which measure critical physical and physiological parameters e.g., mobility, heart rate, and glucose levels. Wireless connectivity in BAN technology is key to its success as it grants portability and flexibility to the user. While radio frequency (RF) wireless technology has been successfully deployed in most BAN implementations, they consume a lot of battery power, are susceptible to electromagnetic interference and have security issues. Intrabody communication (IBC) is an alternative wireless communication technology which uses the human body as the signal propagation medium. IBC has characteristics that could naturally address the issues with RF for BAN technology. This survey examines the on-going research in this area and highlights IBC core fundamentals, current mathematical models of the human body, IBC transceiver designs, and the remaining research challenges to be addressed. IBC has exciting prospects for making BAN technologies more practical in the future.", "title": "" }, { "docid": "1d19e616477e464e00570ca741ee3734", "text": "Data Warehouses are a good source of data for downstream data mining applications. New data arrives in data warehouses during the periodic refresh cycles. Appending of data on existing data requires that all patterns discovered earlier using various data mining algorithms are updated with each refresh. In this paper, we present an incremental density based clustering algorithm. Incremental DBSCAN is an existing incremental algorithm in which data can be added/deleted to/from existing clusters, one point at a time. Our algorithm is capable of adding points in bulk to existing set of clusters. In this new algorithm, the data points to be added are first clustered using the DBSCAN algorithm and then these new clusters are merged with existing clusters, to come up with the modified set of clusters. That is, we add the clusters incrementally rather than adding points incrementally. It is found that the proposed incremental clustering algorithm produces the same clusters as obtained by Incremental DBSCAN. We have used R*-trees as the data structure to hold the multidimensional data that we need to cluster. One of the major advantages of the proposed approach is that it allows us to see the clustering patterns of the new data along with the existing clustering patterns. Moreover, we can see the merged clusters as well. The proposed algorithm is capable of considerable savings, in terms of region queries performed, as compared to incremental DBSCAN. 
Results are presented to support the claim.", "title": "" }, { "docid": "be82ba26b91658ee90b6075c75c5f7bd", "text": "In this paper, we propose a content-based recommendation Algorithm which extends and updates the Minkowski distance in order to address the challenge of matching people and jobs. The proposed algorithm FoDRA (Four Dimensions Recommendation Algorithm) quantifies the suitability of a job seeker for a job position in a more flexible way, using a structured form of the job and the candidate's profile, produced from a content analysis of the unstructured form of the job description and the candidate's CV. We conduct an experimental evaluation in order to check the quality and the effectiveness of FoDRA. Our primary study shows that FoDRA produces promising results and creates new prospects in the area of Job Recommender Systems (JRSs).", "title": "" }, { "docid": "b3bcf4d5962cd2995d21cfbbe9767b9d", "text": "In computer, Cloud of Things (CoT) it is a Technique came by integrated two concepts Internet of Things(IoT) and Cloud Computing. Therefore, Cloud of Things is a currently a wide area of research and development. This paper discussed the concept of Cloud of Things (CoT) in detail and explores the challenges, open research issues, and various tools that can be used with Cloud of Things (CoT). As a result, this paper gives a knowledge and platform to explore Cloud of Things (CoT), and it gives new ideas for researchers to find the open research issues and solution to challenges.", "title": "" }, { "docid": "dd545adf1fba52e794af4ee8de34fc60", "text": "We propose solving continuous parametric simulation optimizations using a deterministic nonlinear optimization algorithm and sample-path simulations. The optimization problem is written in a modeling language with a simulation module accessed with an external function call. Since we allow no changes to the simulation code at all, we propose using a quadratic approximation of the simulation function to obtain derivatives. Results on three different queueing models are presented that show our method to be effective on a variety of practical problems.", "title": "" }, { "docid": "d038c7b29701654f8ee908aad395fe8c", "text": "Vaginal fibroepithelial polyp is a rare lesion, and although benign, it can be confused with malignant connective tissue lesions. Treatment is simple excision, and recurrence is extremely uncommon. We report a case of a newborn with vaginal fibroepithelial polyp. The authors suggest that vaginal polyp must be considered in the evaluation of interlabial masses in prepubertal girls.", "title": "" }, { "docid": "b07cff84fd585f9ee88865e7c51171f5", "text": "Convolutional neural networks (CNN) are extensions to deep neural networks (DNN) which are used as alternate acoustic models with state-of-the-art performances for speech recognition. In this paper, CNNs are used as acoustic models for speech activity detection (SAD) on data collected over noisy radio communication channels. When these SAD models are tested on audio recorded from radio channels not seen during training, there is severe performance degradation. We attribute this degradation to mismatches between the two dimensional filters learnt in the initial CNN layers and the novel channel data. Using a small amount of supervised data from the novel channels, the filters can be adapted to provide significant improvements in SAD performance. 
In mismatched acoustic conditions, the adapted models provide significant improvements (about 10-25%) relative to conventional DNN-based SAD systems. These results illustrate that CNNs have a considerable advantage in fast adaptation for acoustic modeling in these settings.", "title": "" }, { "docid": "0ec5953981251bc48cf0afc15e4c0c31", "text": "The Product Creation Process is described in its context. A phased model for Product Creation is shown. Many organizations use a phased model as blueprint for the way of working. The operational organization of the product creation process is discussed, especially the role of the operational leader. Distribution This article or presentation is written as part of the Gaudí project. The Gaudí project philosophy is to improve by obtaining frequent feedback. Frequent feedback is pursued by an open creation process. This document is published as intermediate or nearly mature version to get feedback. Further distribution is allowed as long as the document remains complete and unchanged. All Gaudí documents are available at: http://www.gaudisite.nl/ version: 2.2 status: concept September 9, 2018", "title": "" }, { "docid": "ee3c8327cb0083c4d9e0618c21d129c8", "text": "Information retrieval is used to find a subset of relevant documents against a set of documents. Determining semantic similarity between two terms is a crucial problem in Web Mining for such applications as information retrieval systems and recommender systems. Semantic similarity refers to the sameness of two terms based on sameness of their meaning or their semantic contents. Recently many techniques have introduced measuring semantic similarity using Wikipedia, a free online encyclopedia. In this paper, a new technique of measuring semantic similarity is proposed. The proposed method uses Wikipedia as an ontology and spreading activation strategy to compute semantic similarity. The utility of the proposed system is evaluated by using the taxonomy of Wikipedia categories.", "title": "" }, { "docid": "f562bd72463945bd35d42894e4815543", "text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.", "title": "" }, { "docid": "2f01161f6873cf4f6fefd18deff4425b", "text": "Stock price prediction is a challenging task owing to the complexity patterns behind time series. 
Autoregressive integrated moving average (ARIMA) model and back propagation neural network (BPNN) model are popular linear and nonlinear models for time series forecasting respectively. The integration of two models can effectively capture the linear and nonlinear patterns hidden in a time series and improve forecast accuracy. In this paper, a new hybrid ARIMA-BPNN model containing technical indicators is proposed to forecast four individual stocks consisting of both main board market and growth enterprise market in software and information services sector. Experiment results show that the proposed method achieves the better one-step-ahead forecasting accuracies namely 78.79%, 72.73%, 59.09% and 66.67% respectively for each series than those of ARIMA, BPNN, and Khashei and Bijari's hybrid models.", "title": "" }, { "docid": "aa2af8bd2ef74a0b5fa463a373a4c049", "text": "What modern game theorists describe as “fictitious play” is not the learning process George W. Brown defined in his 1951 paper. Brown’s original version differs in a subtle detail, namely the order of belief updating. In this note we revive Brown’s original fictitious play process and demonstrate that this seemingly innocent detail allows for an extremely simple and intuitive proof of convergence in an interesting and large class of games: nondegenerate ordinal potential games. © 2006 Elsevier Inc. All rights reserved. JEL classification: C72", "title": "" }, { "docid": "f095118c63d1531ebdbaec3565b0d91f", "text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.", "title": "" }, { "docid": "e4237adfc7150443c5e0e9c5f4c967a5", "text": "Besides the well known microwave radar systems mainly used in navigation, surveillance, and control applications, High-Frequency (HF) radars gain increased attention during the last decades. 
These HF radars are operated in the 3-30 MHz frequency range and due to ground-wave or sky-wave propagation provide over-the-horizon (OTH) capabilities. Many of these OTH radars apply frequency modulated continuous wave (FMCW) modulation for range resolution, which enables them to operate with a relatively low transmit power of a few watts only. Sometimes these types of radars are referred to as “silent radar”. This paper discusses the signal processing chain from deramping the received signal down to processing of range-Doppler-azimuth spectra. Some steps which are critical for the overall system performance are discussed in detail and a new technique to derive the structure of radio frequency interference (RFI), which is superposed to the radar echoes, is described.", "title": "" }, { "docid": "b65ead6ac95bff543a5ea690caade548", "text": "Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and more large-delay satellite links.To address this problem, we develop a novel approach to Internet congestion control that outperforms TCP in conventional environments, and remains efficient, fair, scalable, and stable as the bandwidth-delay product increases. This new eXplicit Control Protocol, XCP, generalizes the Explicit Congestion Notification proposal (ECN). In addition, XCP introduces the new concept of decoupling utilization control from fairness control. This allows a more flexible and analytically tractable protocol design and opens new avenues for service differentiation.Using a control theory framework, we model XCP and demonstrate it is stable and efficient regardless of the link capacity, the round trip delay, and the number of sources. Extensive packet-level simulations show that XCP outperforms TCP in both conventional and high bandwidth-delay environments. Further, XCP achieves fair bandwidth allocation, high utilization, small standing queue size, and near-zero packet drops, with both steady and highly varying traffic. Additionally, the new protocol does not maintain any per-flow state in routers and requires few CPU cycles per packet, which makes it implementable in high-speed routers.", "title": "" }, { "docid": "804b320c6f5b07f7f4d7c5be29c572e9", "text": "Softmax is the most commonly used output function for multiclass problems and is widely used in areas such as vision, natural language processing, and recommendation. A softmax model has linear costs in the number of classes which makes it too expensive for many real-world problems. A common approach to speed up training involves sampling only some of the classes at each training step. It is known that this method is biased and that the bias increases the more the sampling distribution deviates from the output distribution. Nevertheless, almost all recent work uses simple sampling distributions that require a large sample size to mitigate the bias. In this work, we propose a new class of kernel based sampling methods and develop an efficient sampling algorithm. Kernel based sampling adapts to the model as it is trained, thus resulting in low bias. It can also be easily applied to many models because it relies only on the model’s last hidden layer. 
We empirically study the trade-off of bias, sampling distribution and sample size and show that kernel based sampling results in low bias with few samples.", "title": "" }, { "docid": "50708eb1617b59f605b926583d9215bf", "text": "Due to filmmakers focusing on violence, traumatic events, and hallucinations when depicting characters with schizophrenia, critics have scrutinized the representation of mental disorders in contemporary films for years. This study compared previous research on schizophrenia with the fictional representation of the disease in contemporary films. Through content analysis, this study examined 10 films featuring a schizophrenic protagonist, tallying moments of violence and charting if they fell into four common stereotypes. Results showed a high frequency of violent behavior in films depicting schizophrenic characters, implying that those individuals are overwhelmingly dangerous and to be feared.", "title": "" }, { "docid": "446a7404a0e4e78156532fcb93270475", "text": "Convolutional Neural Networks (CNNs) can provide accurate object classification. They can be extended to perform object detection by iterating over dense or selected proposed object regions. However, the runtime of such detectors scales as the total number and/or area of regions to examine per image, and training such detectors may be prohibitively slow. However, for some CNN classifier topologies, it is possible to share significant work among overlapping regions to be classified. This paper presents DenseNet, an open source system that computes dense, multiscale features from the convolutional layers of a CNN based object classifier. Future work will involve training efficient object detectors with DenseNet feature descriptors.", "title": "" }, { "docid": "d6cb714b47b056e1aea8ef0682f4ae51", "text": "Arti cial neural networks are being used with increasing frequency for high dimensional problems of regression or classi cation. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques.", "title": "" } ]
scidocsrr
adaf2bef0dbb2e7c31e8ff663b5c63a4
Diffusion tensor imaging of the corpus callosum in Autism
[ { "docid": "908716e7683bdc78283600f63bd3a1b0", "text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.", "title": "" }, { "docid": "891efd54485c7cf73edd690e0d9b3cfa", "text": "Quantitative-diffusion-tensor MRI consists of deriving and displaying parameters that resemble histological or physiological stains, i.e., that characterize intrinsic features of tissue microstructure and microdynamics. Specifically, these parameters are objective, and insensitive to the choice of laboratory coordinate system. Here, these two properties are used to derive intravoxel measures of diffusion isotropy and the degree of diffusion anisotropy, as well as intervoxel measures of structural similarity, and fiber-tract organization from the effective diffusion tensor, D, which is estimated in each voxel. First, D is decomposed into its isotropic and anisotropic parts, [D] I and D - [D] I, respectively (where [D] = Trace(D)/3 is the mean diffusivity, and I is the identity tensor). Then, the tensor (dot) product operator is used to generate a family of new rotationally and translationally invariant quantities. Finally, maps of these quantitative parameters are produced from high-resolution diffusion tensor images (in which D is estimated in each voxel from a series of 2D-FT spin-echo diffusion-weighted images) in living cat brain. Due to the high inherent sensitivity of these parameters to changes in tissue architecture (i.e., macromolecular, cellular, tissue, and organ structure) and in its physiologic state, their potential applications include monitoring structural changes in development, aging, and disease.", "title": "" } ]
[ { "docid": "4a741431c708cd92a250bcb91e4f1638", "text": "PURPOSE\nIn today's workplace, nurses are highly skilled professionals possessing expertise in both information technology and nursing. Nursing informatics competencies are recognized as an important capability of nurses. No established guidelines existed for nurses in Asia. This study focused on identifying the nursing informatics competencies required of nurses in Taiwan.\n\n\nMETHODS\nA modified Web-based Delphi method was used for two expert groups in nursing, educators and administrators. Experts responded to 323 items on the Nursing Informatics Competencies Questionnaire, modified from the initial work of Staggers, Gassert and Curran to include 45 additional items. Three Web-based Delphi rounds were conducted. Analysis included detailed item analysis. Competencies that met 60% or greater agreement of item importance and appropriate level of nursing practice were included.\n\n\nRESULTS\nN=32 experts agreed to participate in Round 1, 23 nursing educators and 9 administrators. The participation rates for Rounds 2 and 3=68.8%. By Round 3, 318 of 323 nursing informatics competencies achieved required consensus levels. Of the new competencies, 42 of 45 were validated. A high degree of agreement existed for specific nursing informatics competencies required for nurses in Taiwan (97.8%).\n\n\nCONCLUSIONS\nThis study provides a current master list of nursing informatics competency requirements for nurses at four levels in the U.S. and Taiwan. The results are very similar to the original work of Staggers et al. The results have international relevance because of the global importance of information technology for the nursing profession.", "title": "" }, { "docid": "07ba242cd29754cb008d225eb5663cf5", "text": "This paper summarizes the state of the real-time eld in the areas of scheduling and operating system kernels. Given the vast amount of work that has been done by both the operations research and computer science communities in the scheduling area, we discuss four paradigms underlying the scheduling approaches and present several exemplars of each. The four paradigms are: static table-driven scheduling, static priority preemptive scheduling, dynamic planning-based scheduling, and dynamic best-e ort scheduling. In the operating system context, we argue that most of the proprietary commercial kernels as well as real-time extensions to timesharing operating system kernels do not t the needs of predictable real-time systems. We discuss several research kernels that are currently being built to explicitly meet the needs of real-time applications. This material is based upon work supported by the National Science Foundation under grants CDA8922572 and IRI 9208920, and by the O ce of Naval Research under grant N00014-92-J-1048.", "title": "" }, { "docid": "0ccbb82de4b25ee10a9c8690c41232cf", "text": "Since amount of big data is extremely huge, low-delay and low-complexity signal processing devices are strongly required in big data signal processing. Digital filters are the key device for digital signal processing. Digital filters with sparse coefficients (0 coefficients) are beneficial to reduce the computational complexity. This paper proposes a design method for low-delay FIR filters with sparse coefficients. We consider the optimization of combination of selection for sparse coefficients. If the sparse coefficients are selected, the real coefficients can be computed based on the Lagrange multiplier method. 
We employ the branch and bound method incorporated with the Lagrange multiplier method. Also, we propose a selection method of the initial cost for high-speed search. The feature of this method is as follows: (a) The number of 0 coefficients can explicitly specify. (b) The optimality is guaranteed in the least squares sense. We present a design example in order to demonstrate the effectiveness of our method.", "title": "" }, { "docid": "456b7ad01115d9bc04ca378f1eb6d7f2", "text": "Article history: Received 13 October 2007 Received in revised form 12 June 2008 Accepted 31 July 2008", "title": "" }, { "docid": "6dfd7202e254fa8ec968c6d64b53e6ce", "text": "The Domain Name Service (DNS) provides a critical function in directing Internet traffic. Defending DNS servers from bandwidth attacks is assisted by the ability to effectively mine DNS log data for statistical patterns. Processing DNS log data can be classified as a data-intensive problem, and as such presents challenges unique to this class of problem. When problems occur in capturing log data, or when the DNS server experiences an outage (scheduled or unscheduled), the normal pattern of traffic for that server becomes clouded. Simple linear interpolation of the holes in the data does not preserve features such as peaks in traffic (which can occur during an attack, making them of particular interest). We demonstrate a method for estimating values for missing portions of time sensitive DNS log data. This method would be suitable for use with a variety of datasets containing time series values where certain portions are missing.", "title": "" }, { "docid": "e45f05e26ae6f7c1f5069bb3a3be236b", "text": "This paper presents a character-level encoder-decoder modeling method for question answering (QA) from large-scale knowledge bases (KB). This method improves the existing approach [9] from three aspects. First, long short-term memory (LSTM) structures are adopted to replace the convolutional neural networks (CNN) for encoding the candidate entities and predicates. Second, a new strategy of generating negative samples for model training is adopted. Third, a data augmentation strategy is applied to increase the size of the training set by generating factoid questions using another trained encoder-decoder model. Experimental results on the SimpleQuestions dataset and the Freebase5M KB demonstrates the effectiveness of the proposed method, which improves the state-of-the-art accuracy from 70.3% to 78.8% when augmenting the training set with 70,000 generated triple-question pairs.", "title": "" }, { "docid": "ad5005bc593b0fbddfe483732b30fe5e", "text": "Recent multi-agent extensions of Q-Learning require knowledge of other agents’ payoffs and Q-functions, and assume game-theoretic play at all times by all other agents. This paper proposes a fundamentally different approach, dubbed “Hyper-Q” Learning, in which values of mixed strategies rather than base actions are learned, and in which other agents’ strategies are estimated from observed actions via Bayesian inference. Hyper-Q may be effective against many different types of adaptive agents, even if they are persistently dynamic. Against certain broad categories of adaptation, it is argued that Hyper-Q may converge to exact optimal time-varying policies. In tests using Rock-Paper-Scissors, Hyper-Q learns to significantly exploit an Infinitesimal Gradient Ascent (IGA) player, as well as a Policy Hill Climber (PHC) player. 
Preliminary analysis of Hyper-Q against itself is also presented.", "title": "" }, { "docid": "8c0c2d5abd8b6e62f3184985e8e01d66", "text": "Neural networks are known to be vulnerable to adversarial examples: inputs that are close to natural inputs but classified incorrectly. In order to better understand the space of adversarial examples, we survey ten recent proposals that are designed for detection and compare their efficacy. We show that all can be defeated by constructing new loss functions. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and the properties believed to be intrinsic to adversarial examples are in fact not. Finally, we propose several simple guidelines for evaluating future proposed defenses.", "title": "" }, { "docid": "c6e1c8aa6633ec4f05240de1a3793912", "text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attentiondemanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.", "title": "" }, { "docid": "e3557b0f064d848c5a9127a0c3d5f1db", "text": "Understanding the behaviors of a software system is very important for performing daily system maintenance tasks. In practice, one way to gain knowledge about the runtime behavior of a system is to manually analyze system logs collected during the system executions. With the increasing scale and complexity of software systems, it has become challenging for system operators to manually analyze system logs. To address these challenges, in this paper, we propose a new approach for contextual analysis of system logs for understanding a system's behaviors. In particular, we first use execution patterns to represent execution structures reflected by a sequence of system logs, and propose an algorithm to mine execution patterns from the program logs. 
The mined execution patterns correspond to different execution paths of the system. Based on these execution patterns, our approach further learns essential contextual factors (e.g., the occurrences of specific program logs with specific parameter values) that cause a specific branch or path to be executed by the system. The mining and learning results can help system operators to understand a software system's runtime execution logic and behaviors during various tasks such as system problem diagnosis. We demonstrate the feasibility of our approach upon two real-world software systems (Hadoop and Ethereal).", "title": "" }, { "docid": "9fb90f0d2480f212653c68f2dc334cd9", "text": "Consumer choice is influenced in a direct and meaningful way by the actions taken by others. These “actions” range from face-to-face recommendations from a friend to the passive observation of what a stranger is wearing. We refer to the set of such contexts as “social interactions” (SI). We believe that at least some of the SI effects are partially within the firm’s control and that this represents an exciting research opportunity. We present an agenda that identifies a list of unanswered questions of potential interest to both researchers and managers. In order to appreciate the firm’s choices with respect to its management of SI, it is important to first evaluate where we are in terms of understanding the phenomena themselves. We highlight five questions in this regard: (1) What are the antecedents of word of mouth (WOM)? (2) How does the transmission of positive WOM differ from that of negative WOM? (3) How does online WOM differ from offline WOM? (4) What is the impact of WOM? (5) How can we measure WOM? Finally, we identify and discuss four principal, non-mutually exclusive, roles that the firm might play: (1) observer, (2) moderator, (3) mediator, and (4) participant.", "title": "" }, { "docid": "2ddd492da2191f685daa111d5f89eedd", "text": "Given the abundance of cameras and LCDs in today's environment, there exists an untapped opportunity for using these devices for communication. Specifically, cameras can tune to nearby LCDs and use them for network access. The key feature of these LCD-camera links is that they are highly directional and hence enable a form of interference-free wireless communication. This makes them an attractive technology for dense, high contention scenarios. The main challenge however, to enable such LCD-camera links is to maximize coverage, that is to deliver multiple Mb/s over multi-meter distances, independent of the view angle. To do so, these links need to address unique types of channel distortions, such as perspective distortion and blur.\n This paper explores this novel communication medium and presents PixNet, a system for transmitting information over LCD-camera links. PixNet generalizes the popular OFDM transmission algorithms to address the unique characteristics of the LCD-camera link which include perspective distortion, blur, and sensitivity to ambient light. We have built a prototype of PixNet using off-the-shelf LCDs and cameras. An extensive evaluation shows that a single PixNet link delivers data rates of up to 12 Mb/s at a distance of 10 meters, and works with view angles as wide as 120 degree°.", "title": "" }, { "docid": "d88e8a363bbe8a00f814efd05ce6f46f", "text": "Many software applications today are written as web-based applications to be run in an Internet browser. 
Selenium is a set of powerful different software tools working with many browsers, operating systems, programming languages, and testing frameworks each with a different approach to supporting automation test for testing web-based applications. JMeter is used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types. JMeter operates at the protocol-level, on the other hand, Selenium works at the user-level. In this paper, authors have designed an automatic software testing framework for web applications based on the Selenium and JMeter. With the use of the software framework, we efficiently improve the extensibility and reusability of automated test. The results show that the new software framework improves software products quality and develop efficiency. This paper also illustrates how to design web-based test automation framework in details.", "title": "" }, { "docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0", "text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.", "title": "" }, { "docid": "210e22e098340e4f858b4ceab1c643e6", "text": "Dimethylsulfoxide (DMSO) controlled puff induction and repression (or non-induction) in larval polytene chromosomes of Chironomus tentans were studied for the case of the Balbiani rings (BR). A characteristic reaction pattern, involving BR 1, BR 2 and BR 3, all in salivary gland chromosome IV was found. In vivo exposure of 4th instar larvae (not prepupae) to 10% DMSO at 18° C first evokes an over-stimulation of BR 3 while DMSO-stimulation of puffing at BR 1 and BR 2 always follows that of BR 3. After removal of the drug, a rapid uniform collapse of all puffs occurs, thus more or less restoring the banding pattern of all previously decondensed chromosome segments. Recovery proceeds as BR's and other puffs reappear. By observing the restoration, one can locate the site from which a BR (puff) originates. BR 2, which is normally the most active non-ribosomal gene locus in untreated larvae, here serves as an example. As the sizes of BR 3, BR 1 and BR 2 change, so do the quantities of the transcriptional products in these gene loci (and vice versa), as estimated electron-microscopically in ultrathin sections and autoradiographically in squash preparations. In autoradiograms, the DMSO-stimulated BRs exhibit the most dense concentration of silver grains and therefore the highest rate of transcriptional activity. In DMSO-repressed BRs (and other puffs) the transcription of the locus specific genes is not completely shut off. In chromosomes from nuclei with high labelling intensities the repressed BRs (and other puffs) always exhibit a low level of 3H-uridine incorporation in vivo. The absence of cytologically visible BR (puff) formation therefore does not necessarily indicate complete transcriptional inactivity. 
Typically, before the stage of puff formation the 3H-uridine labelling first appears in the interband-like regions.", "title": "" }, { "docid": "b3ddcc6dbe3e118dfd0630feb42713c9", "text": "This thesis details the use of a programmable logic device to increase the playing strength of a chess program. The time-consuming task of generating chess moves is relegated to hardware in order to increase the processing speed of the search algorithm. A simpler inter-square connection protocol reduces the number of wires between chess squares, when compared to the Deep Blue design. With this interconnection scheme, special chess moves are easily resolved. Furthermore, dynamically programmable arbiters are introduced for optimal move ordering. Arbiter centrality is also shown to improve move ordering, thereby creating smaller search trees. The move generator is designed to allow the integration of crucial move ordering heuristics. With its new hardware move generator, the chess program's playing ability is noticeably improved.", "title": "" }, { "docid": "5433a8e449bf4bf9d939e645e171f7e5", "text": "Software Testing (ST) processes attempt to verify and validate the capability of a software system to meet its required attributes and functionality. As software systems become more complex, the need for automated software testing methods emerges. Machine Learning (ML) techniques have shown to be quite useful for this automation process. Various works have been presented in the junction of ML and ST areas. The lack of general guidelines for applying appropriate learning methods for software testing purposes is our major motivation in this current paper. In this paper, we introduce a classification framework which can help to systematically review research work in the ML and ST domains. The proposed framework dimensions are defined using major characteristics of existing software testing and machine learning methods. Our framework can be used to effectively construct a concrete set of guidelines for choosing the most appropriate learning method and applying it to a distinct stage of the software testing life-cycle for automation purposes.", "title": "" }, { "docid": "cc6c485fdd8d4d61c7b68bfd94639047", "text": "Passive geolocation of communication emitters provides great benefits to military and civilian surveillance and security operations. Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) measurement combination for stationary emitters may be obtained by sensors mounted on mobile platforms, for example on a pair of UAVs. Complex Ambiguity Function (CAF) of received complex signals can be efficiently calculated to provide required TDOA / FDOA measurement combination. TDOA and FDOA measurements are nonlinear in the sense that the emitter uncertainty given measurements in the Cartesian domain is non-Gaussian. Multiple non-linear measurements of emitter location need to be fused to provide the geolocation estimates. Gaussian Mixture Measurement (GMM) filter fuses nonlinear measurements as long as the uncertainty of each measurement in the surveillance (Cartesian) space is modeled by a Gaussian Mixture. Simulation results confirm this approach and compare it with geolocation using Bearings Only (BO) measurements.", "title": "" }, { "docid": "c7a55c0588c1cdccb5b01193a863eee0", "text": "Hypothyroidism is a very common, yet often overlooked disease. It can have a myriad of signs and symptoms, and is often nonspecific.
Identification requires analysis of thyroid hormones circulating in the bloodstream, and treatment is simply replacement with exogenous hormone, usually levothyroxine (Synthroid). The deadly manifestation of hypothyroidism is myxedema coma. Similarly nonspecific and underrecognized, treatment with exogenous hormone is necessary to decrease the high mortality rate.", "title": "" }, { "docid": "d0bf34417300c70e4781ecf4cd6b5f1c", "text": "Recent advances in functional connectivity methods have made it possible to identify brain hubs - a set of highly connected regions serving as integrators of distributed neuronal activity. The integrative role of hub nodes makes these areas points of high vulnerability to dysfunction in brain disorders, and abnormal hub connectivity profiles have been described for several neuropsychiatric disorders. The identification of analogous functional connectivity hubs in preclinical species like the mouse may provide critical insight into the elusive biological underpinnings of these connectional alterations. To spatially locate functional connectivity hubs in the mouse brain, here we applied a fully-weighted network analysis to map whole-brain intrinsic functional connectivity (i.e., the functional connectome) at a high-resolution voxel-scale. Analysis of a large resting-state functional magnetic resonance imaging (rsfMRI) dataset revealed the presence of six distinct functional modules related to known large-scale functional partitions of the brain, including a default-mode network (DMN). Consistent with human studies, highly-connected functional hubs were identified in several sub-regions of the DMN, including the anterior and posterior cingulate and prefrontal cortices, in the thalamus, and in small foci within well-known integrative cortical structures such as the insular and temporal association cortices. According to their integrative role, the identified hubs exhibited mutual preferential interconnections. These findings highlight the presence of evolutionarily-conserved, mutually-interconnected functional hubs in the mouse brain, and may guide future investigations of the biological foundations of aberrant rsfMRI hub connectivity associated with brain pathological states.", "title": "" } ]
scidocsrr
b7b87b360c40464ad25019e46f4a0369
A neural model of voluntary and automatic emotion regulation: implications for understanding the pathophysiology and neurodevelopment of bipolar disorder
[ { "docid": "6e031a7dab98c28ca348d969f01787f0", "text": "Emotion regulation plays a central role in mental health and illness, but little is known about even the most basic forms of emotion regulation. To examine the acute effects of inhibiting negative and positive emotion, we asked 180 female participants to watch sad, neutral, and amusing films under 1 of 2 conditions. Suppression participants (N = 90) inhibited their expressive behavior while watching the films; no suppression participants (N = 90) simply watched the films. Suppression diminished expressive behavior in all 3 films and decreased amusement self-reports in sad and amusing films. Physiologically, suppression had no effect in the neutral film, but clear effects in both negative and positive emotional films, including increased sympathetic activation of the cardiovascular system. On the basis of these findings, we suggest several ways emotional inhibition may influence psychological functioning.", "title": "" } ]
[ { "docid": "70991373ae71f233b0facd2b5dd1a0d3", "text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.", "title": "" }, { "docid": "8326b4f0599718b3d8d3c7c8c8cd64ce", "text": "This paper introduces a neural model for concept-to-text generation that scales to large, rich domains. It generates biographical sentences from fact tables on a new dataset of biographies from Wikipedia. This set is an order of magnitude larger than existing resources with over 700k samples and a 400k vocabulary. Our model builds on conditional neural language models for text generation. To deal with the large vocabulary, we extend these models to mix a fixed vocabulary with copy actions that transfer sample-specific words from the input database to the generated output sentence. To deal with structured data, we allow the model to embed words differently depending on the data fields in which they occur. Our neural model significantly outperforms a Templated Kneser-Ney language model by nearly 15 BLEU.", "title": "" }, { "docid": "90eb392765c01b6166daa2a7a62944d1", "text": "Recent studies have demonstrated the potential for reducing energy consumption in integrated circuits by allowing errors during computation. While most proposed techniques for achieving this rely on voltage overscaling (VOS), this paper shows that Imprecise Hardware (IHW) with design-time structural parameters can achieve orthogonal energy-quality tradeoffs. Two IHW adders are improved and two IHW multipliers are introduced in this paper. In addition, a simulation-free error estimation technique is proposed to rapidly and accurately estimate the impact of IHW on output quality. Finally, a quality-aware energy minimization methodology is presented. To validate this methodology, experiments are conducted on two computational kernels: DOT-PRODUCT and L2-NORM -- used in three applications -- Leukocyte Tracker, SVM classification and K-means clustering. 
Results show that the Hellinger distance between estimated and simulated error distribution is within 0.05 and that the methodology enables designers to explore energy-quality tradeoffs with significant reduction in simulation complexity.", "title": "" }, { "docid": "be05abd038de9b32cc255ca221634a2c", "text": "This paper sees a smart city not as a status of how smart a city is but as a city's effort to make itself smart. The connotation of a smart city represents city innovation in management and policy as well as technology. Since the unique context of each city shapes the technological, organizational and policy aspects of that city, a smart city can be considered a contextualized interplay among technological innovation, managerial and organizational innovation, and policy innovation. However, only little research discusses innovation in management and policy while the literature of technology innovation is abundant. This paper aims to fill the research gap by building a comprehensive framework to view the smart city movement as innovation comprised of technology, management and policy. We also discuss inevitable risks from innovation, strategies to innovate while avoiding risks, and contexts underlying innovation and risks.", "title": "" }, { "docid": "e325165aa6628514015a6b467bf6c036", "text": "Wafer-scale beamforming lenses for future IEEE802.15.3c 60 GHz WPAN applications are presented. An on-wafer fabrication is of particular interest because a beamforming lens can be fabricated with sub-circuits in a single process. It means that the beamforming lens system would be compact, reliable, and cost-effective. The Rotman lens and the Rotman lens with antenna arrays were fabricated on a high-resistivity silicon (HRS) wafer in a semiconductor process, which is a preliminary research to check the feasibility of a Rotman lens for a chip scale packaging. In the case of the Rotman lens only, the efficiency is in the range from 50% to 70% depending on which beam port is excited. Assuming that the lens is coupled with ideal isotropic antennas, the synthesized beam patterns from the S-parameters shows that the beam directions are -29.3°, -15.1°, 0.2°, 15.2°, and 29.5 °, and the beam widths are 15.37°, 15.62°, 15.46°, 15.51°, and 15.63°, respectively. In the case of the Rotman lens with antenna array, the patterns were measured by using on-wafer measurement setup. It shows that the beam directions are -26.6°, -21.8°, 0°, 21.8°, and 26.6° . These results are in good agreement with the calculated results from ray-optic. Thus, it is verified that the lens antenna implemented on a wafer can be feasible for the system-in-package (SiP) and wafer-level package technologies.", "title": "" }, { "docid": "6bd9fc02c8e26e64cecb13dab1a93352", "text": "Kohlberg, who was born in 1927, grew up in Bronxville, New York, and attended the Andover Academy in Massachusetts, a private high school for bright and usually wealthy students. He did not go immediately to college, but instead went to help the Israeli cause, in which he was made the Second Engineer on an old freighter carrying refugees from parts of Europe to Israel. After this, in 1948, he enrolled at the University of Chicago, where he scored so high on admission tests that he had to take only a few courses to earn his bachelor's degree. This he did in one year. He stayed on at Chicago for graduate work in psychology, at first thinking he would become a clinical psychologist. 
However, he soon became interested in Piaget and began interviewing children and adolescents on moral issues. The result was his doctoral dissertation (1958a), the first rendition of his new stage theory.", "title": "" }, { "docid": "5f89dba01f03d4e7fbb2baa8877e0dff", "text": "The basic aim of a biometric identification system is to discriminate automatically between subjects in a reliable and dependable way, according to a specific-target application. Multimodal biometric identification systems aim to fuse two or more physical or behavioral traits to provide optimal False Acceptance Rate (FAR) and False Rejection Rate (FRR), thus improving system accuracy and dependability. In this paper, an innovative multimodal biometric identification system based on iris and fingerprint traits is proposed. The paper is a state-of-the-art advancement of multibiometrics, offering an innovative perspective on features fusion. In greater detail, a frequency-based approach results in a homogeneous biometric vector, integrating iris and fingerprint data. Successively, a hamming-distance-based matching algorithm deals with the unified homogenous biometric vector. The proposed multimodal system achieves interesting results with several commonly used databases. For example, we have obtained an interesting working point with FAR = 0% and FRR = 5.71% using the entire fingerprint verification competition (FVC) 2002 DB2B database and a randomly extracted same-size subset of the BATH database. At the same time, considering the BATH database and the FVC2002 DB2A database, we have obtained a further interesting working point with FAR = 0% and FRR = 7.28% ÷ 9.7%.", "title": "" }, { "docid": "0c20ed6f2506ecb181909128796c0e5d", "text": "This paper presents a multilevel spin-orbit torque magnetic random access memory (SOT-MRAM). The conventional SOT-MRAMs enables a reliable and energy efficient write operation. However, these cells require two access transistors per cell, hence the efficiency of the SOT-MRAMs can be questioned in high-density memory application. To deal with this obstacle, we propose a multilevel cell which stores two bits per memory cell. In addition, we propose a novel sensing scheme to read out the stored data in the multilevel SOT-MRAM cell. Our simulation results show that the proposed cell can achieve 3X more energy efficient write operation in comparison with the conventional STT-MRAMs. In addition, the proposed cell store two bits without any area penalty in comparison to the conventional one bit SOT-MRAM cells.", "title": "" }, { "docid": "8c58b3da5e724888992ebf0accd2889d", "text": "This paper describes an automatic tissue segmentation method for newborn brains from magnetic resonance images (MRI). The analysis and study of newborn brain MRI is of great interest due to its potential for studying early growth patterns and morphological changes in neurodevelopmental disorders. Automatic segmentation of newborn MRI is a challenging task mainly due to the low intensity contrast and the growth process of the white matter tissue. Newborn white matter tissue undergoes a rapid myelination process, where the nerves are covered in myelin sheathes. It is necessary to identify the white matter tissue as myelinated or non-myelinated regions. The degree of myelination is a fractional voxel property that represents regional changes of white matter as a function of age. Our method makes use of a registered probabilistic brain atlas. 
The method first uses robust graph clustering and parameter estimation to find the initial intensity distributions. The distribution estimates are then used together with the spatial priors to perform bias correction. Finally, the method refines the segmentation using training sample pruning and non-parametric kernel density estimation. Our results demonstrate that the method is able to segment the brain tissue and identify myelinated and non-myelinated white matter regions.", "title": "" }, { "docid": "87db1c76d4c90122206a3911b36cf44a", "text": "Availability of autonomous systems can be enhanced with self-monitoring and fault-tolerance methods based on failures prediction. With each correct prediction, proactive actions may be taken to prevent or to mitigate a failure. On the other hand, incorrect predictions will introduce additional downtime associated with the overhead of a proactive action that may decrease availability. The total effect on availability will depend on the quality of prediction (measured with precision and recall), the overhead of proactive actions (penalty), and the benefit of proactive actions when prediction is correct (reward). In this paper, we quantify the impact of failure prediction and proactive actions on steady-state availability. Furthermore, we provide guidelines for optimizing failure prediction to maximize availability by selecting a proper precision and recall trade-off with respect to penalty and reward. A case study to demonstrate the approach is also presented.", "title": "" }, { "docid": "a037986af265203341286983c434f6f8", "text": "We create a new cryptocurrency scheme based on the mini-blockchain scheme and homomorphic commitments. The aim is to improve the miniblockchain by making it more private. We also make a comparison of Bitcoin and our scheme regarding their ability to resist blockchain analysis.", "title": "" }, { "docid": "e4dbca720626a29f60a31ed9d22c30aa", "text": "Text classification is the process of classifying documents into predefined categories based on their content. It is the automated assignment of natural language texts to predefined categories. Text classification is the primary requirement of text retrieval systems, which retrieve texts in response to a user query, and text understanding systems, which transform text in some way such as producing summaries, answering questions or extracting data. Existing supervised learning algorithms to automatically classify text need sufficient documents to learn accurately. This paper presents a new algorithm for text classification using data mining that requires fewer documents for training. Instead of using words, word relation i.e. association rules from these words is used to derive feature set from pre-classified text documents. The concept of Naïve Bayes classifier is then used on derived features and finally only a single concept of Genetic Algorithm has been added for final classification. A system based on the proposed algorithm has been implemented and tested. The experimental results show that the proposed system works as a successful text classifier.", "title": "" }, { "docid": "8d197bf27af825b9972a490d3cc9934c", "text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. 
However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.", "title": "" }, { "docid": "8087fe4979dc5c056decd31e7c1e6ee1", "text": "With over 100 million users, Duolingo is the most popular education app in the world in Android and iOS. In the first part of this talk, we will describe the motivation for creating Duolingo, its philosophy, and some of the basic techniques used to successfully teach languages and keep users engaged. The second part will focus on the machine learning and natural language processing algorithms we use to model student learning. Proceedings of the 8th International Conference on Educational Data Mining 3 Proceedings of the 8th International Conference on Educational Data Mining 4 Personal Knowledge/Learning Graph George Siemens University of Texas Arlington and Athabasca University gsiemens@gmail.com Ryan Baker Teachers College Columbia University baker2@exchange. tc.columbia.edu Dragan Gasevic Schools of Education and Informatics University of Edinburgh dragan.gasevic@ed.ac.uk", "title": "" }, { "docid": "f4c78c6f0424458cbeea67a498679344", "text": "In the United States, the office of the Medical Examiner-Coroner is responsible for investigating all sudden and unexpected deaths and deaths by violence. Its jurisdiction includes deaths during the arrest procedures and deaths in police custody. Police officers are sometimes required to subdue and restrain an individual who is violent, often irrational and resisting arrest. This procedure may cause harm to the subject and to the arresting officers. This article deals with our experiences in Los Angeles and reviews the policies and procedures for investigating and determining the cause and manner of death in such cases. 
We have taken a \"quality improvement approach\" to the study of these deaths due to restraint asphyxia and related officer involved deaths, Since 1999, through interagency coordination with law enforcement agencies similar to the hospital healthcare quality improvement meeting program, detailed information related to the sequence of events in these cases and ideas for improvements to prevent such deaths are discussed.", "title": "" }, { "docid": "7b8bd0a884ebcfe66eb4e7fb69bf05b2", "text": "We propose a novel monocular visual odometry (VO) system called UnDeepVO in this paper. UnDeepVO is able to estimate the 6-DoF pose of a monocular camera and the depth of its view by using deep neural networks. There are two salient features of the proposed UnDeepVo:one is the unsupervised deep learning scheme, and the other is the absolute scale recovery. Specifically, we train UnDeepVoby using stereo image pairs to recover the scale but test it by using consecutive monocular images. Thus, UnDeepVO is a monocular system. The loss function defined for training the networks is based on spatial and temporal dense information. A system overview is shown in Fig. 1. The experiments on KITTI dataset show our UnDeepVO achieves good performance in terms of pose accuracy.", "title": "" }, { "docid": "aa6502972088385f0d72d5744f43779f", "text": "We are living in a cyber space with an unprecedented rapid expansion of the space and its elements. All interactive information is processed and exchanged via this space. Clearly a well-built cyber security is vital to ensure the security of the cyber space. However the definitions and scopes of both cyber space and cyber security are still not well-defined and this makes it difficult to establish sound security models and mechanisms for protecting this space. Out of existing models, maturity models offer a manageable approach for assessing the security level of a system or organization. The paper first provides a review of various definitions of cyber space and cyber security in order to ascertain a common understanding of the space and its security. The paper investigates existing security maturity models, focusing on their defining characteristics and identifying their strengths and weaknesses. Finally, the paper discusses and suggests measures for a sound and applicable cyber security model.", "title": "" }, { "docid": "98e7492293b295200b78c99cce8824dd", "text": "Ann Campbell Burke examines the development and evolution [5] of vertebrates, in particular, turtles [6]. Her Harvard University [7] experiments, described in \"Development of the Turtle Carapace [4]: Implications for the Evolution of a Novel Bauplan,\" were published in 1989. Burke used molecular techniques to investigate the developmental mechanisms responsible for the formation of the turtle shell. Burke's work with turtle embryos has provided empirical evidence for the hypothesis that the evolutionary origins of turtle morphology [8] depend on changes in the embryonic and developmental mechanisms underpinning shell production.", "title": "" }, { "docid": "6f049f55c1b6f65284c390bd9a2d7511", "text": "Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve these results, they use millions of parameters to be trained. However, when targetting embedded applications the size of these models becomes problematic. As a consequence, their usage on smartphones or other resource limited devices is prohibited. 
In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.", "title": "" }, { "docid": "6a2b9761b745f4ece1bba3fab9f5d8b1", "text": "Driven by the evolution of consumer-to-consumer (C2C) online marketplaces, we examine the role of communication tools (i.e., an instant messenger, internal message box and a feedback system), in facilitating dyadic online transactions in the Chinese C2C marketplace. Integrating the Chinese concept of guanxi with theories of social translucence and social presence, we introduce a structural model that explains how rich communication tools influence a website’s interactivity and presence, subsequently building trust and guanxi among buyers and sellers, and ultimately predicting buyers’ repurchase intentions. The data collected from 185 buyers in TaoBao, China’s leading C2C online marketplace, strongly support the proposed model. We believe that this research is the first formal study to show evidence of guanxi in online C2C marketplaces, and it is attributed to the role of communication tools to enhance a website’s interactivity and presence.", "title": "" } ]
scidocsrr
eebb92bfbac3d4927460a20e10f640e5
Navigating the Local Modes of Big Data: The Case of Topic Models
[ { "docid": "a8bd9e8470ad414c38f5616fb14d433d", "text": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.", "title": "" }, { "docid": "cce513c48e630ab3f072f334d00b67dc", "text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press", "title": "" } ]
[ { "docid": "66da54da90bbd252386713751cec7c67", "text": "A cyber world (CW) is a digitized world created on cyberspaces inside computers interconnected by networks including the Internet. Following ubiquitous computers, sensors, e-tags, networks, information, services, etc., is a road towards a smart world (SW) created on both cyberspaces and real spaces. It is mainly characterized by ubiquitous intelligence or computational intelligence pervasion in the physical world filled with smart things. In recent years, many novel and imaginative researcheshave been conducted to try and experiment a variety of smart things including characteristic smart objects and specific smart spaces or environments as well as smart systems. The next research phase to emerge, we believe, is to coordinate these diverse smart objects and integrate these isolated smart spaces together into a higher level of spaces known as smart hyperspace or hyper-environments, and eventually create the smart world. In this paper, we discuss the potential trends and related challenges toward the smart world and ubiquitous intelligence from smart things to smart spaces and then to smart hyperspaces. Likewise, we show our efforts in developing a smart hyperspace of ubiquitous care for kids, called UbicKids.", "title": "" }, { "docid": "3bfeb0096c0255aee35001c23acb2057", "text": "Tensegrity structures, isolated solid rods connected by tensile cables, are of interest in the field of soft robotics due to their flexible and robust nature. This makes them suitable for uneven and unpredictable environments in which traditional robots struggle. The compliant structure also ensures that the robot will not injure humans or delicate equipment in co-robotic applications [1]. A 6-bar tensegrity structure is being used as the basis for a new generation of robotic landers and rovers for space exploration [1]. In addition to a soft tensegrity structure, we are also exploring use of soft sensors as an integral part of the compliant elements. Fig. 1 shows an example of a 6-bar tensegrity structure, with integrated liquid metalembedded hyperelastic strain sensors as the 24 tensile components. For this tensegrity, the strain sensors are primarily composed of a silicone elastomer with embedded microchannels filled with conductive liquid metal (eutectic gallium indium alloy (eGaIn), Sigma-Aldrich) (fig.2). As the sensor is elongated, the resistance of the eGaIn channel will increase due to the decreased microchannel cross-sectional area and the increased microchannel length [2]. The primary functions of this hyperelastic sensor tensegrity are model validation, feedback control, and structure analysis under payload. Feedback from the sensors can be used for experimental validation of existing models of tensegrity structures and dynamics, such as for the NASA Tensegrity Robotics Toolkit [3]. In addition, the readings from the sensors can provide distance changed between the ends of the bars, which can be used as a state estimator for UC Berkeley’s rapidly prototyped tensegrity robot to perform feedback control [1]. Furthermore, this physical model allows us to observe and record the force distribution and structure deformation with different payload conditions. Currently, we are exploring the possibility of integrating shape memory alloys into the hyperelastic sensors, which can provide the benefit of both actuation and sensing in a compact module. 
Preliminary tests indicate that this combination has the potential to generate enough force and displacement to achieve punctuated rolling motion for the 6-bar tensegrity structure.", "title": "" }, { "docid": "3c0cc3398139b6a558a56b934d96c641", "text": "Targeted nucleases are powerful tools for mediating genome alteration with high precision. The RNA-guided Cas9 nuclease from the microbial clustered regularly interspaced short palindromic repeats (CRISPR) adaptive immune system can be used to facilitate efficient genome engineering in eukaryotic cells by simply specifying a 20-nt targeting sequence within its guide RNA. Here we describe a set of tools for Cas9-mediated genome editing via nonhomologous end joining (NHEJ) or homology-directed repair (HDR) in mammalian cells, as well as generation of modified cell lines for downstream functional studies. To minimize off-target cleavage, we further describe a double-nicking strategy using the Cas9 nickase mutant with paired guide RNAs. This protocol provides experimentally derived guidelines for the selection of target sites, evaluation of cleavage efficiency and analysis of off-target activity. Beginning with target design, gene modifications can be achieved within as little as 1–2 weeks, and modified clonal cell lines can be derived within 2–3 weeks.", "title": "" }, { "docid": "b915033fd3f8fdea3fc7bf9e3f95146d", "text": "Software traceability is a required element in the development and certification of safety-critical software systems. However, trace links, which are created at significant cost and effort, are often underutilized in practice due primarily to the fact that project stakeholders often lack the skills needed to formulate complex trace queries. To mitigate this problem, we present a solution which transforms spoken or written natural language queries into structured query language (SQL). TiQi includes a general database query mechanism and a domain-specific model populated with trace query concepts, project-specific terminology, token disambiguators, and query transformation rules. We report results from four different experiments exploring user preferences for natural language queries, accuracy of the generated trace queries, efficacy of the underlying disambiguators, and stability of the trace query concepts. Experiments are conducted against two different datasets and show that users have a preference for written NL queries. Queries were transformed at accuracy rates ranging from 47 to 93 %.", "title": "" }, { "docid": "51da24a6bdd2b42c68c4465624d2c344", "text": "Hashing based Approximate Nearest Neighbor (ANN) search has attracted much attention due to its fast query time and drastically reduced storage. However, most of the hashing methods either use random projections or extract principal directions from the data to derive hash functions. The resulting embedding suffers from poor discrimination when compact codes are used. In this paper, we propose a novel data-dependent projection learning method such that each hash function is designed to correct the errors made by the previous one sequentially. The proposed method easily adapts to both unsupervised and semi-supervised scenarios and shows significant performance gains over the state-ofthe-art methods on two large datasets containing up to 1 million points.", "title": "" }, { "docid": "f92351eac81d6d28c3fd33ea96b75f91", "text": "There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. 
Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. We here describe our approach to automation of one the riskiest autonomous manœuvres involving vehicles – overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manœuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning-based system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is the generation of action on the vehicle’s actuators, i.e., the steering wheel and throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles – a motorbike, car, and truck – with encouraging results. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c4d204b8ceda86e9d8e4ca56214f0ba3", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" }, { "docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0", "text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.", "title": "" }, { "docid": "f79167ce151d9f9c73cf307d4cff7fe7", "text": "Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real life data sets contain a small amount of labelled data points, that are typically disregarded when training generative models. We propose the Cluster-aware Generative Model, that uses unlabelled information to infer a latent representation that models the natural clustering of the data, and additional labelled data points to refine this clustering. The generative performances of the model significantly improve when labelled information is exploited, obtaining a log-likelihood of−79.38 nats on permutation invariant MNIST, while also achieving competitive semi-supervised classification accuracies. 
The model can also be trained fully unsupervised, and still improve the log-likelihood performance with respect to related methods.", "title": "" }, { "docid": "081347f2376f4e4061ea5009af137ca7", "text": "The Internet of things can be defined as to make the “things” belong to the Internet. However, many wonder if the current Internet can support such a challenge. For this and other reasons, hundreds of worldwide initiatives to redesign the Internet are underway. This article discusses the perspectives, challenges and opportunities behind a future Internet that fully supports the “things”, as well as how the “things” can help in the design of a more synergistic future Internet. Keywords–Internet of things, smart things, future Internet, software-defined networking, service-centrism, informationcentrism, ID/Loc splitting, security, privacy, trust.", "title": "" }, { "docid": "0d7ce42011c48232189c791e71c289f5", "text": "RECENT WORK in virtue ethics, particularly sustained reflection on specific virtues, makes it possible to argue that the classical list of cardinal virtues (prudence, justice, temperance, and fortitude) is inadequate, and that we need to articulate the cardinal virtues more correctly. With that end in view, the first section of this article describes the challenges of espousing cardinal virtues today, the second considers the inadequacy of the classical listing of cardinal virtues, and the third makes a proposal. Since virtues, no matter how general, should always relate to concrete living, the article is framed by a case.", "title": "" }, { "docid": "b59281f7deb759c5126687ab8df13527", "text": "Despite orthogeriatric management, 12% of the elderly experienced PUs after hip fracture surgery. PUs were significantly associated with a low albumin level, history of atrial fibrillation coronary artery disease, and diabetes. The risk ratio of death at 6 months associated with pressure ulcer was 2.38 (95% CI 1.31-4.32%, p = 0.044).\n\n\nINTRODUCTION\nPressure ulcers in hip fracture patients are frequent and associated with a poor outcome. An orthogeriatric management, recommended by international guidelines in hip fracture patients and including pressure ulcer prevention and treatment, could influence causes and consequences of pressure ulcer. However, remaining factors associated with pressure ulcer occurrence and prognostic value of pressure ulcer in hip fracture patients managed in an orthogeriatric care pathway remain unknown.\n\n\nMETHODS\nFrom June 2009 to April 2015, all consecutive patients with hip fracture admitted to a unit for Post-operative geriatric care were evaluated for eligibility. Patients were included if their primary presentation was due to hip fracture and if they were ≥ 70 years of age. Patients were excluded in the presence of pathological fracture or if they were already hospitalized at the time of the fracture. In our unit, orthogeriatric principles are implemented, including a multi-component intervention to improve pressure ulcer prevention and management. Patients were followed-up until 6 months after discharge.\n\n\nRESULTS\nFive hundred sixty-seven patients were included, with an overall 14.4% 6-month mortality (95% CI 11.6-17.8%). Of these, 67 patients (12%) experienced at least one pressure ulcer. 
Despite orthogeriatric management, pressure ulcers were significantly associated with a low albumin level (RR 0.90, 95% CI 0.84-0.96; p = 0.003) and history of atrial fibrillation (RR 1.91, 95% CI 1.05-3.46; p = 0.033), coronary artery disease (RR 2.16, 95% CI 1.17-3.99; p = 0.014), and diabetes (RR 2.33, 95% CI 1.14-4.75; p = 0.02). A pressure ulcer was associated with 6-month mortality (RR 2.38, 95% CI 1.31-4.32, p = 0.044).\n\n\nCONCLUSION\nIn elderly patients with hip fracture managed in an orthogeriatric care pathway, pressure ulcer remained associated with poorly modifiable risk factors and long-term mortality.", "title": "" }, { "docid": "e099b35de6036539a14cf13abe98142f", "text": "Human multitasking is often the result of self-init iated interruptions in the performance of an ongoing task. These self-interruptions occur in the absence of external triggers such as electronic alerts or email notifications. Compared to externally induced interruptions, self-interr uptions have not received enough research attention. To address this gap, this paper develops a typology of self-interruptions based on the integration of Flow Theory and Self-regulation Theory . In this new typology, the two major categories stem from positive and negative feelings of task progress and prospects of goal attainment. The proposed classification is validated in an experimental multitasking environmen t with pre-defined tasks. Empirical findings indicate that negat ive feelings trigger more self-interruptions than positive feelings. In general, more self-interr uptions result in lower accuracy in all tasks. The results suggest that negative internal triggers of self-interruptions unleash a downward spiral that may degrade performance. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d5c25d94fc0824a008d6632664669db2", "text": "The main barrier of the skin is formed by the lipids in the apical skin layer, the stratum corneum (SC). In SC mainly ceramides (CER), free fatty acids (FFA) and cholesterol (CHOL) are present. The CER are composed of at least six different fractions. CER 1 has an exceptional molecular structure as it contains a linoleic acid linked to a long-chain omega-hydroxy acid (C > 30). The SC lipids are organized in two lamellar phases with periodicities of approximately 6 and 13 nm, respectively. Recent studies revealed that ceramides isolated from pig SC mixed with cholesterol in confined ratios mimic stratum corneum lipid phase behavior closely (Bouwstra, J.A., et al. 1996. J. Lipid Res. 37: 999-1011). In this paper the role of CER 1 for the SC lipid lamellar organization was studied. For this purpose lipid phase behavior of mixtures of CHOL and total ceramide fraction was compared with that of mixtures of CHOL and a ceramide mixture lacking CER 1. These studies showed that in the absence of CER 1 almost no long periodicity phase was formed over a wide CHOL/CER molar ratio. A model is proposed for the molecular arrangement of the two lamellar phases. This model is based on the dominant role CER 1 plays in the formation of the long periodicity phase, electron density distribution calculations, and observations, such as i) the bimodal distribution of the fatty acid chain lengths of the ceramides, ii) the phase separation between long-chain ceramides and short-chain ceramides in a monolayer approach, and iii) the absence of swelling of the lamellae upon increasing the water content organization in SC. 
In this molecular model the short periodicity phase is composed of only two high electron density regions indicating the presence of only one bilayer, similar to that often found in phospholipid membranes. The molecular arrangement in the long periodicity phase is very exceptional. This phase most probably consists of two broad and one narrow low electron density regions. The two broad regions are formed by partly interdigitating ceramides with long-chain fatty acids of approximately 24-26 C atoms, while the narrow low-electron density region is formed by fully interdigitating ceramides with a short free fatty acid chain of approximately 16 to 18 C atoms.", "title": "" }, { "docid": "ab589fb1d97849e95da05d7e9b1d0f4f", "text": "We introduce a new speaker independent method for reducing wind noise in single-channel recordings of noisy speech. The method is based on non-negative sparse coding and relies on a wind noise dictionary which is estimated from an isolated noise recording. We estimate the parameters of the model and discuss their sensitivity. We then compare the algorithm with the classical spectral subtraction method and the Qualcomm-ICSI-OGI noise reduction method. We optimize the sound quality in terms of signal-to-noise ratio and provide results on a noisy speech recognition task.", "title": "" }, { "docid": "07eb3f5527e985c33ff7132381ee266d", "text": "Since the first application of indirect composite resins, numerous advances in adhesive dentistry have been made. Furthermore, improvements in structure, composition and polymerization techniques led to the development of a second-generation of indirect resin composites (IRCs). IRCs have optimal esthetic performance, enhanced mechanical properties and reparability. Due to these characteristics they can be used for a wide range of clinical applications. IRCs can be used for inlays, onlays, crowns’ veneering material, fixed dentures prostheses and removable prostheses (teeth and soft tissue substitution), both on teeth and implants. The purpose of this article is to review the properties of these materials and describe a case series of patients treated with different type of restorations in various indications. *Corresponding author: Aikaterini Petropoulou, Clinical Instructor, Department of Prosthodontics, School of Dentistry, National and Kapodistrian University of Athens, Greece, Tel: +306932989104; E-mail: aikatpetropoulou@gmail.com Received November 10, 2013; Accepted November 28, 2013; Published November 30, 2013 Citation: Petropoulou A, Pantzari F, Nomikos N, Chronopoulos V, Kourtis S (2013) The Use of Indirect Resin Composites in Clinical Practice: A Case Series. Dentistry 3: 173. doi:10.4172/2161-1122.1000173 Copyright: © 2013 Petropoulou A, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.", "title": "" }, { "docid": "e79db51ac85ceafba66dddd5c038fbdf", "text": "Machine learning based anti-phishing techniques are based on various features extracted from different sources. These features differentiate a phishing website from a legitimate one. Features are taken from various sources like URL, page content, search engine, digital certificate, website traffic, etc, of a website to detect it as a phishing or non-phishing. The websites are declared as phishing sites if the heuristic design of the websites matches with the predefined rules. 
The accuracy of the anti-phishing solution depends on features set, training data and machine learning algorithm. This paper presents a comprehensive analysis of Phishing attacks, their exploitation, some of the recent machine learning based approaches for phishing detection and their comparative study. It provides a better understanding of the phishing problem, current solution space in machine learning domain, and scope of future research to deal with Phishing attacks efficiently using machine learning based approaches.", "title": "" }, { "docid": "5dca3981eb4c353712b51f3cc32ff3ec", "text": "We present a new classification architecture based on autoassociative neural networks that are used to learn discriminant models of each class. The proposed architecture has several interesting properties with respect to other model-based classifiers like nearest-neighbors or radial basis functions: it has a low computational complexity and uses a compact distributed representation of the models. The classifier is also well suited for the incorporation of a priori knowledge by means of a problem-specific distance measure. In particular, we will show that tangent distance (Simard, Le Cun, & Denker, 1993) can be used to achieve transformation invariance during learning and recognition. We demonstrate the application of this classifier to optical character recognition, where it has achieved state-of-the-art results on several reference databases. Relations to other models, in particular those based on principal component analysis, are also discussed.", "title": "" }, { "docid": "f83ca1c2732011e9a661f8cf9a0516ac", "text": "We provide a characterization of pseudoentropy in terms of hardness of sampling: Let (X,B) be jointly distributed random variables such that B takes values in a polynomial-sized set. We show that B is computationally indistinguishable from a random variable of higher Shannon entropy given X if and only if there is no probabilistic polynomial-time S such that (X,S(X)) has small KL divergence from (X,B). This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS '95) for Shannon entropy (rather than min-entropy).\n Using this characterization, we show that if f is a one-way function, then (f(Un),Un) has \"next-bit pseudoentropy\" at least n+log n, establishing a conjecture of Haitner, Reingold, and Vadhan (STOC '10). Plugging this into the construction of Haitner et al., this yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs the hash functions that are randomness extractors (e.g. universal hash functions) rather than needing them to support \"local list-decoding\" (as in the Goldreich--Levin hardcore predicate, STOC '89).\n With an additional idea, we also show how to improve the seed length of the pseudorandom generator to ~{O}(n3), compared to O(n4) in the construction of Haitner et al.", "title": "" }, { "docid": "2777fdcc4442c3d63b51b92710f3914d", "text": "Non-invasive pressure simulators that regenerate oscillometric waveforms promise an alternative to expensive clinical trials for validating oscillometric noninvasive blood pressure devices. However, existing simulators only provide oscillometric pressure in cuff and thus have a limited accuracy. It is promising to build a physical simulator that contains a synthetic arm with a built-in brachial artery and an affiliated hydraulic model of cardiovascular system. 
To guide the construction of this kind of simulator, this paper presents a computer model of cardiovascular system with a relatively simple structure, where the distribution of pressures and flows in aorta root and brachial artery can be simulated, and the produced waves are accordant with the physical data. This model can be used to provide the parameters and structure that will be needed to build the new simulator.", "title": "" } ]
scidocsrr
c7f5a23df45b056a8a593e880402f3ed
Advances in dental veneers: materials, applications, and techniques
[ { "docid": "b42c230ff1af8da8b8b4246bc9cb2bd8", "text": "Patients have many restorative options for changing the appearance of their teeth. The most conservative restorative treatments for changing the appearance of teeth include tooth bleaching, direct composite resin veneers, and porcelain veneers. Patients seeking esthetic treatment should undergo a comprehensive clinical examination that includes an esthetic evaluation. When selecting a conservative treatment modality, the use of minimally invasive or no-preparation porcelain veneers should be considered. As with any treatment decision, the indications and contraindications must be considered before a definitive treatment plan is made. Long-term research has demonstrated a 94% survival rate for minimally invasive porcelain veneers. While conservation of tooth structure is important, so is selecting the right treatment modality for each patient based on clinical findings.", "title": "" } ]
[ { "docid": "d78117c809f963a2983c262cca2399e9", "text": "Range detection applications based on radar can be separated into measurements of short distances with high accuracy or large distances with low accuracy. In this paper an approach is investigated to combine the advantages of both principles. Therefore an FMCW radar will be extended with an additional phase evaluation technique. In order to realize this combination an increased range resolution of the FMCW radar is required. This paper describes an frequency estimation algorithm to increase the frequency resolution and hence the range resolution of an FMCW radar at 24 GHz for a line based range detection system to evaluate the possibility of an extended FMCW radar using the phase information.", "title": "" }, { "docid": "d1ebf47c1f0b1d8572d526e9260dbd32", "text": "In this paper, mortality in the immediate aftermath of an earthquake is studied on a worldwide scale using multivariate analysis. A statistical method is presented that analyzes reported earthquake fatalities as a function of a heterogeneous set of parameters selected on the basis of their presumed influence on earthquake mortality. The ensemble was compiled from demographic, seismic, and reported fatality data culled from available records of past earthquakes organized in a geographic information system. The authors consider the statistical relation between earthquake mortality and the available data ensemble, analyze the validity of the results in view of the parametric uncertainties, and propose a multivariate mortality analysis prediction method. The analysis reveals that, although the highest mortality rates are expected in poorly developed rural areas, high fatality counts can result from a wide range of mortality ratios that depend on the effective population size.", "title": "" }, { "docid": "89039f8d247b3f178c0be6a1f30004b8", "text": "We study the property of the Fused Lasso Signal Approximator (FLSA) for estimating a blocky signal sequence with additive noise. We transform the FLSA to an ordinary Lasso problem, and find that in general the resulting design matrix does not satisfy the irrepresentable condition that is known as an almost necessary and sufficient condition for exact pattern recovery. We give necessary and sufficient conditions on the expected signal pattern such that the irrepresentable condition holds in the transformed Lasso problem. However, these conditions turn out to be very restrictive. We apply the newly developed preconditioning method — Puffer Transformation (Jia and Rohe, 2015) to the transformed Lasso and call the new procedure the preconditioned fused Lasso. We give nonasymptotic results for this method, showing that as long as the signal-to-noise ratio is not too small, our preconditioned fused Lasso estimator always recovers the correct pattern with high probability. Theoretical results give insight into what controls the ability of recovering the pattern — it is the noise level instead of the length of the signal sequence. Simulations further confirm our theorems and visualize the significant improvement of the preconditioned fused Lasso estimator over the vanilla FLSA in exact pattern recovery. © 2015 Published by Elsevier B.V.", "title": "" }, { "docid": "e756574e701c9ecc4e28da6135499215", "text": "MicroRNAs are small noncoding RNA molecules that regulate gene expression posttranscriptionally through complementary base pairing with thousands of messenger RNAs. 
They regulate diverse physiological, developmental, and pathophysiological processes. Recent studies have uncovered the contribution of microRNAs to the pathogenesis of many human diseases, including liver diseases. Moreover, microRNAs have been identified as biomarkers that can often be detected in the systemic circulation. We review the role of microRNAs in liver physiology and pathophysiology, focusing on viral hepatitis, liver fibrosis, and cancer. We also discuss microRNAs as diagnostic and prognostic markers and microRNA-based therapeutic approaches for liver disease.", "title": "" }, { "docid": "7b7289900ac45f4ee5357084f16a4c0d", "text": "We present a simple and accurate span-based model for semantic role labeling (SRL). Our model directly takes into account all possible argument spans and scores them for each label. At decoding time, we greedily select higher scoring labeled spans. One advantage of our model is to allow us to design and use spanlevel features, that are difficult to use in tokenbased BIO tagging approaches. Experimental results demonstrate that our ensemble model achieves the state-of-the-art results, 87.4 F1 and 87.0 F1 on the CoNLL-2005 and 2012 datasets, respectively.", "title": "" }, { "docid": "d9f7d78b6e1802a17225db13edd033f6", "text": "The edit distance between two character strings can be defined as the minimum cost of a sequence of editing operations which transforms one string into the other. The operations we admit are deleting, inserting and replacing one symbol at a time, with possibly different costs for each of these operations. The problem of finding the longest common subsequence of two strings is a special case of the problem of computing edit distances. We describe an algorithm for computing the edit distance between two strings of length n and m, n > m, which requires O(n * max( 1, m/log n)) steps whenever the costs of edit operations are integral multiples of a single positive real number and the alphabet for the strings is finite. These conditions are necessary for the algorithm to achieve the time bound.", "title": "" }, { "docid": "712098110f7713022e4664807ac106c7", "text": "Getting a machine to understand human narratives has been a classic challenge for NLP and AI. This paper proposes a new representation for the temporal structure of narratives. The representation is parsimonious, using temporal relations as surrogates for discourse relations. The narrative models, called Temporal Discourse Models, are treestructured, where nodes include abstract events interpreted as pairs of time points and where the dominance relation is expressed by temporal inclusion. Annotation examples and challenges are discussed, along with a report on progress to date in creating annotated corpora.", "title": "" }, { "docid": "67c444b9538ccfe7a2decdd11523dcd5", "text": "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. 
Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.", "title": "" }, { "docid": "a7c9d58c49f1802b94395c6f12c2d6dd", "text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signaturebased NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security. a 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8d5d2f266181d456d4f71df26075a650", "text": "Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better tactic coordination of application subsystems compared to federated systems. In order to support safety-critical application subsystems, an integrated architecture needs to support fault-tolerant strategies that enable the continued operation of the system in the presence of failures. The basis for the implementation and validation of fault-tolerant strategies is a fault hypothesis that identifies the fault containment regions, specifies the failure modes and provides realistic failure rate assumptions. This paper describes a fault hypothesis for integrated architectures, which takes into account the collocation of multiple software components on shared node computers. We argue in favor of a differentiation of fault containment regions for hardware and software faults. In addition, the fault hypothesis describes the assumptions concerning the respective frequencies of transient and permanent failures in consideration of recent semiconductor trends", "title": "" }, { "docid": "d3a79da70eed0ec0352cb924c8ce0744", "text": "2. School of Electronics Engineering and Computer science. Peking University, Beijing 100871,China Abstract—Speech emotion recognition (SER) is to study the formation and change of speaker’s emotional state from the speech signal perspective, so as to make the interaction between human and computer more intelligent. SER is a challenging task that has encountered the problem of less training data and low prediction accuracy. 
Here we propose a data augmentation algorithm based on the imaging principle of the retina and convex lens, to acquire the different sizes of spectrogram and increase the amount of training data by changing the distance between the spectrogram and the convex lens. Meanwhile, with the help of deep learning to get the high-level features, we propose the Deep Retinal Convolution Neural Networks (DRCNNs) for SER and achieve the average accuracy over 99%. The experimental results indicate that DRCNNs outperforms the previous studies in terms of both the number of emotions and the accuracy of recognition. Predictably, our results will dramatically improve human-computer interaction.", "title": "" }, { "docid": "5c5c21bd0c50df31c6ccec63d864568c", "text": "Intellectual Property issues (IP) is a concern that refrains companies to cooperate in whatever of Open Innovation (OI) processes. Particularly, SME consider open innovation as uncertain, risky processes. Despite the opportunities that online OI platforms offer, SMEs have so far failed to embrace them, and proved reluctant to OI. We intend to find whether special collaborative spaces that facilitate a sort of preventive idea claiming, explicit claiming evolution of defensive publication, as so far patents and publications for prevailing innovation, can be the right complementary instruments in OI as to when stronger IP protection regimes might drive openness by SME in general. These spaces, which we name NIR (Networking Innovation Rooms), are a practical, smart paradigm to boost OI for SME. There users sign smart contracts as NDA which takes charge of timestamping any IP disclosure or creation and declares what corrective actions (if they might apply) might be taken for unauthorised IP usage or disclosure of any of the NDA signers. With Blockchain, a new technology emerges which enables decentralised, fine-grained IP management for OI.", "title": "" }, { "docid": "1514ce079eba01f4a78ab13c49cc2fa7", "text": "The task of event trigger labeling is typically addressed in the standard supervised setting: triggers for each target event type are annotated as training data, based on annotation guidelines. We propose an alternative approach, which takes the example trigger terms mentioned in the guidelines as seeds, and then applies an eventindependent similarity-based classifier for trigger labeling. This way we can skip manual annotation for new event types, while requiring only minimal annotated training data for few example events at system setup. Our method is evaluated on the ACE-2005 dataset, achieving 5.7% F1 improvement over a state-of-the-art supervised system which uses the full training data.", "title": "" }, { "docid": "e0d8936ecce870fbcee6b3bd4bc66d10", "text": "UNLABELLED\nMathematical modeling is a process by which a real world problem is described by a mathematical formulation. The cancer modeling is a highly challenging problem at the frontier of applied mathematics. A variety of modeling strategies have been developed, each focusing on one or more aspects of cancer.\n\n\nMATERIAL AND METHODS\nThe vast majority of mathematical models in cancer diseases biology are formulated in terms of differential equations. We propose an original mathematical model with small parameter for the interactions between these two cancer cell sub-populations and the mathematical model of a vascular tumor. We work on the assumption that, the quiescent cells' nutrient consumption is long. One the equations system includes small parameter epsilon. 
The smallness of epsilon is relative to the size of the solution domain.\n\n\nRESULTS\nMATLAB simulations obtained for transition rate from the quiescent cells' nutrient consumption is long, we show a similar asymptotic behavior for two solutions of the perturbed problem. In this system, the small parameter is an asymptotic variable, different from the independent variable. The graphical output for a mathematical model of a vascular tumor shows the differences in the evolution of the tumor populations of proliferating, quiescent and necrotic cells. The nutrient concentration decreases sharply through the viable rim and tends to a constant level in the core due to the nearly complete necrosis in this region.\n\n\nCONCLUSIONS\nMany mathematical models can be quantitatively characterized by ordinary differential equations or partial differential equations. The use of MATLAB in this article illustrates the important role of informatics in research in mathematical modeling. The study of avascular tumor growth cells is an exciting and important topic in cancer research and will profit considerably from theoretical input. Interpret these results to be a permanent collaboration between math's and medical oncologists.", "title": "" }, { "docid": "5ae4b1d4ef00afbde49edfaa2728934b", "text": "A wideband, low loss inline transition from microstrip line to rectangular waveguide is presented. This transition efficiently couples energy from a microstrip line to a ridge and subsequently to a TE10 waveguide. This unique structure requires no mechanical pressure for electrical contact between the microstrip probe and the ridge because the main planar circuitry and ridge sections are placed on a single housing. The measured insertion loss for back-to-back transition is 0.5 – 0.7 dB (0.25 – 0.35 dB/transition) in the band 50 – 72 GHz.", "title": "" }, { "docid": "9d175a211ec3b0ee7db667d39c240e1c", "text": "In recent years, there has been an increased effort to introduce coding and computational thinking in early childhood education. In accordance with the international trend, programming has become an increasingly growing focus in European education. With over 9.5 million iOS downloads, ScratchJr is the most popular freely available introductory programming language for young children (ages 5-7). This paper provides an overview of ScratchJr, and the powerful ideas from computer science it is designed to teach. In addition, data analytics are presented to show trends of usage in Europe and and how it compares to the rest of the world. Data reveals that countries with robust computer science initiatives such as the UK and the Nordic countries have high usage of ScratchJr.", "title": "" }, { "docid": "0243c98e13e814320ec2a3416d2bcc94", "text": "Projects that are over-budget, delivered late, and fall short of user's expectations have been a common problem are a for software development efforts for years. Agile methods, which represent an emerging set of software development methodologies based on the concepts of adaptability and flexibility, are currently touted as a way to alleviate these reoccurring problems and pave the way for the future of development. The estimation in Agile Software Development methods depends on an expert opinion and historical data of project for estimation of cost, size, effort and duration. In absence of the historical data and experts the previous method like analogy and planning poker are not useful. 
This paper focuses on the research work in Agile Software development and estimation in Agile. It also focuses on the problems in current Agile practices and thereby proposes a method for accurate cost and effort estimation.", "title": "" }, { "docid": "1738a8ccb1860e5b85e2364f437d4058", "text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word error rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can lead to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses. Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimentally, this approach leads to a significant WER reduction in a large vocabulary recognition task.", "title": "" }, { "docid": "ddb51863430250a28f37c5f12c13c910", "text": "Much of our understanding of human thinking is based on probabilistic models. This innovative book by Jerome R. Busemeyer and Peter D. Bruza argues that, actually, the underlying mathematical structures from quantum theory provide a much better account of human thinking than traditional models. They introduce the foundations for modelling probabilistic-dynamic systems using two aspects of quantum theory. The first, “contextuality,” is a way to understand interference effects found with inferences and decisions under conditions of uncertainty. The second, “quantum entanglement,” allows cognitive phenomena to be modelled in non-reductionist ways. Employing these principles drawn from quantum theory allows us to view human cognition and decision in a totally new light. Introducing the basic principles in an easy-to-follow way, this book does not assume a physics background or a quantum brain and comes complete with a tutorial and fully worked-out applications in important areas of cognition and decision.", "title": "" } ]
scidocsrr
a46d0a29c078e13ab90409bbff71c217
Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors
[ { "docid": "0ee97a3afcc2471a05924a1171ac82cf", "text": "A number of researchers around the world have built machines that recognize, express, model, communicate, and respond to emotional information, instances of ‘‘affective computing.’’ This article raises and responds to several criticisms of affective computing, articulating state-of-the art research challenges, especially with respect to affect in humancomputer interaction. r 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "5e428061a28f7fa08656590984e6e12a", "text": "Will consumer wearable technology ever be adopted or accepted by the medical community? Patients and practitioners regularly use digital technology (e.g., thermometers and glucose monitors) to identify and discuss symptoms. In addition, a third of general practitioners in the United Kingdom report that patients arrive with suggestions for treatment based on online search results [1]. However, consumer health wearables are predicted to become the next “Dr Google.”One in six (15%) consumers in the United States currently uses wearable technology, including smartwatches or fitness bands. While 19 million fitness devices are likely to be sold this year, that number is predicted to grow to 110 million in 2018 [2]. As the line between consumer health wearables and medical devices begins to blur, it is now possible for a single wearable device to monitor a range of medical risk factors (Fig 1). Potentially, these devices could give patients direct access to personal analytics that can contribute to their health, facilitate preventive care, and aid in the management of ongoing illness. However, how this new wearable technology might best serve medicine remains unclear.", "title": "" } ]
[ { "docid": "4eead577c1b3acee6c93a62aee8a6bb5", "text": "The present study examined teacher attitudes toward dyslexia and the effects of these attitudes on teacher expectations and the academic achievement of students with dyslexia compared to students without learning disabilities. The attitudes of 30 regular education teachers toward dyslexia were determined using both an implicit measure and an explicit, self-report measure. Achievement scores for 307 students were also obtained. Implicit teacher attitudes toward dyslexia related to teacher ratings of student achievement on a writing task and also to student achievement on standardized tests of spelling but not math for those students with dyslexia. Self-reported attitudes of the teachers toward dyslexia did not relate to any of the outcome measures. Neither the implicit nor the explicit measures of teacher attitudes related to teacher expectations. The results show implicit attitude measures to be a more valuable predictor of the achievement of students with dyslexia than explicit, self-report attitude measures.", "title": "" }, { "docid": "b267bf90b86542e3032eaddcc2c3350f", "text": "Many modalities of treatment for acquired skin hyperpigmentation are available including chemical agents or physical therapies, but none are completely satisfactory. Depigmenting compounds should act selectively on hyperactivated melanocytes, without short- or long-term side-effects, and induce a permanent removal of undesired pigment. Since 1961 hydroquinone, a tyrosinase inhibitor, has been introduced and its therapeutic efficacy demonstrated, and other whitening agents specifically acting on tyrosinase by different mechanisms have been proposed. Compounds with depigmenting activity are now numerous and the classification of molecules, based on their mechanism of action, has become difficult. Systematic studies to assess both the efficacy and the safety of such molecules are necessary. Moreover, the evidence that bleaching compounds are fairly ineffective on dermal accumulation of melanin has prompted investigations on the effectiveness of physical therapies, such as lasers. This review which describes the different approaches to obtain depigmentation, suggests a classification of whitening molecules on the basis of the mechanism by which they interfere with melanogenesis, and confirms the necessity to apply standardized protocols to evaluate depigmenting treatments.", "title": "" }, { "docid": "0b587770a13ba76572a1e51df52d95a3", "text": "Current approaches to supervised learning of metaphor tend to use sophisticated features and restrict their attention to constructions and contexts where these features apply. In this paper, we describe the development of a supervised learning system to classify all content words in a running text as either being used metaphorically or not. We start by examining the performance of a simple unigram baseline that achieves surprisingly good results for some of the datasets. We then show how the recall of the system can be improved over this strong baseline.", "title": "" }, { "docid": "c72a42af9b6c69bc780c93997c6c2c5f", "text": "Water strider can slide agilely on water surface at high speed. To study its locomotion characters, movements of water strider are recorded by a high speed camera. The trajectories and angle variations of water strider leg are obtained from the photo series, and provide basic information for bionic robot design. Thus a water strider robot based on surface tension is proposed. 
The driving mechanism is designed to replicate the trajectory of water strider's middle leg.", "title": "" }, { "docid": "5cfaec0f198065bb925a1fb4ffb53f60", "text": "In the emerging inter-disciplinary field of art and image processing, algorithms have been developed to assist the analysis of art work. In most applications, especially brush stroke analysis, high resolution digital images of paintings are required to capture subtle patterns and details in the high frequency range of the spectrum. Algorithms have been developed to learn styles of painters from their digitized paintings to help identify authenticity of controversial paintings. However, high quality testing datasets containing both original and forgery are limited to confidential image files provided by museums, which is not publicly available, and a small sets of original/copy paintings painted by the same artist, where copies were deferred to two weeks after the originals were finished. Up to date, no synthesized painting by computers from a real painting has been used as a negative test case, mainly due to the limitation of prevailing style transfer algorithms. There are two main types of style transfer algorithms, either transferring the tone (color, contrast, saturation, etc.) of an image, preserving its patterns and details, or distorting the texture uniformly of an image to create “style”. In this paper, we are interested in a higher level of style transfer, particularly, transferring a source natural image (e.g. a photo) to a high resolution painting given a reference painting of similar object. The transferred natural image would have a similar presentation of the original object to that of the reference painting. In general, an object is painted in a different style of brush strokes than that of the background, hence the desired style transferring algorithm should be able to recognize the object in the source natural image and transfer brush stroke styles in the reference painting in a content-aware way such that the styles of the foreground and the background, and moreover different parts of the foreground in the transferred image, are consistent to that in the reference painting. Recently, an algorithm based on deep convolutional neural network has been developed to transfer artistic style from an art painting to a photo [2]. Successful as it is in transferring styles from impressionist paintings of artists such as Vincent van Gogh to photos of various scenes, the algorithm is prone to distorting the structure of the content in the source image and introducing artifacts/new", "title": "" }, { "docid": "7fb967f01038fb24c7d2b0c98df68b51", "text": "Any modern organization that is serious about security deploys a network intrusion detection system (NIDS) to monitor network traffic for signs of malicious activity. The most widely deployed NIDS system is Snort, an open source system originally released in 1998. Snort is a single threaded system that uses a set of clear text rules to instruct a base engine how to react when particular traffic patterns are detected. In 2009, the US Department of Homeland Security and a consortium of private companies provided substantial grant funding to a newly created organization known as the Open Information Security Foundation (OISF), to build a multi-threaded alternative to Snort, called Suricata. 
Despite many similarities between Snort and Suricata, the OISF stated it was essential to replace the older single-threaded Snort engine with a multi-threaded system that could deliver higher performance and better scalability. Key Snort developers argued that Suricata’s multi-threaded architecture would actually slow the detection process. Given these competing claims, an objective head-to-head comparison of the performance of Snort and Suricata is needed. In this paper, we present a comprehensive quantitative comparison of the two systems. We have developed a rigorous testing framework that examines the performance of both systems as we scale system resources. Our results show that a single instance of Suricata is able to deliver substantially higher performance than a corresponding single instance of Snort, but has problems scaling with a higher number of cores. We find that while Suricata needs tuning for a higher number of cores, it is still able to surpass Snort even at 1 core where we would have expected Snort to shine.", "title": "" }, { "docid": "1e27234f694ac4ac307e9088804a7444", "text": "Anomaly detection in social media refers to the detection of users’ abnormal opinions, sentiment patterns, or special temporal aspects of such patterns. Social media platforms, such as Sina Weibo or Twitter, provide a Big-data platform for information retrieval, which include user feedbacks, opinions, and information on most issues. This paper proposes a hybrid neural network model called Convolutional Neural Network-Long-Short Term Memory(CNN-LSTM), we successfully applies the model to sentiment analysis on a microblog Big-data platform and obtains significant improvements that enhance the generalization ability. Based on the sentiment of a single post in Weibo, this study also adopted the multivariate Gaussian model and the power law distribution to analyze the users’ emotion and detect abnormal emotion on microblog, the multivariate Gaussian method automatically captures the correlation between different features of the emotions and saves a certain amount of time through the batch calculation of the joint probability density of data sets. Through the measure of a joint probability density value and validation of the corpus from social network, anomaly detection accuracy of an individual user is 83.49% and that for a different month is 87.84%. The results of the distribution test show that individual user’s neutral, happy, and sad emotions obey the normal distribution but the surprised and angry emotions do not. In addition, the group-based emotions on microblogs obey the power law distribution but individual emotions do not.", "title": "" }, { "docid": "53edb03722153d091fb2e78c811d4aa5", "text": "One of the main reasons for failure in Software Process Improvement (SPI) initiatives is the lack of motivation of the professionals involved. Therefore, motivation should be encouraged throughout the software process. Gamification allows us to define mechanisms that motivate people to develop specific tasks. A gamification framework was adapted to the particularities of an organization and software professionals to encourage motivation. Thus, it permitted to facilitate the adoption of SPI improvements and a higher success rate. The objective of this research was to validate the framework presented and increase the actual implementation of gamification in organizations. 
To achieve this goal, a qualitative research methodology was employed through interviews that involved a total of 29 experts in gamification and SPI. The results of this study confirm the validity of the framework presented, its relevance in the field of SPI and its alignment with the standard practices of gamification implementation within organizations.", "title": "" }, { "docid": "aebdcd5b31d26ec1b4147efe842053e4", "text": "We describe a novel camera calibration algorithm for square, circle, and ring planar calibration patterns. An iterative refinement approach is proposed that utilizes the parameters obtained from traditional calibration algorithms as initialization to perform undistortion and unprojection of calibration images to a canonical fronto-parallel plane. This canonical plane is then used to localize the calibration pattern control points and recompute the camera parameters in an iterative refinement until convergence. Undistorting and unprojecting the calibration pattern to the canonical plane increases the accuracy of control point localization and consequently of camera calibration. We have conducted an extensive set of experiments with real and synthetic images for the square, circle and ring pattern, and the pixel reprojection errors obtained by our method are about 50% lower than those of the OpenCV Camera Calibration Toolbox. Increased accuracy of camera calibration directly leads to improvements in other applications; we demonstrate recovery of fine object structure for visual hull reconstruction, and recovery of precise epipolar geometry for stereo camera calibration.", "title": "" }, { "docid": "db8cd5dad5c3d3bda0f10f3369351bbd", "text": "The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.", "title": "" }, { "docid": "055a7be9623e794168b858e41bceaabd", "text": "Lexical Pragmatics is a research field that tries to give a systematic and explanatory account of pragmatic phenomena that are connected with the semantic underspecification of lexical items. Cases in point are the pragmatics of adjectives, systematic polysemy, the distribution of lexical and productive causatives, blocking phenomena, the interpretation of compounds, and many phenomena presently discussed within the framework of Cognitive Semantics. The approach combines a constrained-based semantics with a general mechanism of conversational implicature. The basic pragmatic mechanism rests on conditions of updating the common ground and allows to give a precise explication of notions as generalized conversational implicature and pragmatic anomaly. 
The fruitfulness of the basic account is established by its application to a variety of recalcitrant phenomena among which its precise treatment of Atlas & Levinson's Q- and I-principles and the formalization of the balance between informativeness and efficiency in natural language processing (Horn's division of pragmatic labor) deserve particular mention. The basic mechanism is subsequently extended by an abductive reasoning system which is guided by subjective probability. The extended mechanism turned out to be capable of giving a principled account of lexical blocking, the pragmatics of adjectives, and systematic polysemy.", "title": "" }, { "docid": "04ed69959c28c3c4185d3af55521d864", "text": "A new differential-fed broadband antenna element with unidirectional radiation is proposed. This antenna is composed of a folded bowtie, a center-fed loop, and a box-shaped reflector. A pair of differential feeds is developed to excite the antenna and provide an ultrawideband (UWB) impedance matching. The box-shaped reflector is used for the reduction of the gain fluctuation across the operating frequency band. An antenna prototype for UWB applications is fabricated and measured, exhibiting an impedance bandwidth of 132% with standing wave ratio ≤ 2 from 2.48 to 12.12 GHz, over which the gain varies between 7.2 and 14.1 dBi at boresight. The proposed antenna radiates unidirectionally with low cross polarization and low back radiation. Furthermore, the time-domain characteristic of the proposed antenna is evaluated. In addition, a 2 × 2 element array using the proposed element is also investigated in this communication.", "title": "" }, { "docid": "5ca5cfcd0ed34d9b0033977e9cde2c74", "text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of off-patent products. First, we construct a vertical differentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several off-patent molecules before and after the policy reform. Off-patent drugs not subject to RP serve as our control group. We find that RP significantly reduces both brand-name and generic prices, and results in significantly lower brand-name market shares. Finally, we show that RP has a strong negative effect on average molecule prices, suggesting significant cost-savings, and that patients’ copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classifications: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for financial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: kurt.brekke@nhh.no. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: tor.holmas@uni.no. Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. 
E-mail: o.r.straume@eeg.uminho.pt.", "title": "" }, { "docid": "3e2012134aa2e88b230f95518c11994d", "text": "Echo Chamber is a game that persuades players to re-examine their argumentation style and adopt new rhetorical techniques procedurally delivered through gameplay. Several games have been made addressing the environmental impacts of climate change; none have examined the gap between scientific and public discourse over climate change, and our goal was to teach players more effective communication techniques for conveying climate change in public venues. Our game provides other developers insight into persuasion through game mechanics with good design practices for similar persuasive games.", "title": "" }, { "docid": "fdcf6e60ad11b10fba077a62f7f1812d", "text": "Delivering web software as a service has grown into a powerful paradigm for deploying a wide range of Internet-scale applications. However for end-users, accessing software as a service is fundamentally at odds with free software, because of the associated cost of maintaining server infrastructure. Users end up paying for the service in one way or another, often indirectly through ads or the sale of their private data. In this paper, we aim to enable a new generation of portable and free web apps by proposing an alternative model to the existing client-server web architecture. freedom.js is a platform for developing and deploying rich multi-user web apps, where application logic is pushed out from the cloud and run entirely on client-side browsers. By shifting the responsibility of where code runs, we can explore a novel incentive structure where users power applications with their own resources, gain the ability to control application behavior and manage privacy of data. For developers, we lower the barrier of writing popular web apps by removing much of the deployment cost and making applications simpler to write. We provide a set of novel abstractions that allow developers to automatically scale their application with low complexity and overhead. freedom.js apps are inherently sandboxed, multi-threaded, and composed of reusable modules. We demonstrate the flexibility of freedom.js through a number of applications that we have built on top of the platform, including a messaging application, a social file synchronization tool, and a peer-to-peer (P2P) content delivery network (CDN). Our experience shows that we can implement a P2P-CDN with 50% fewer lines of application-specific code in the freedom.js framework when compared to a standalone version. In turn, we incur an additional startup latency of 50-60ms (about 6% of the page load time) with the freedom.js version, without any noticeable impact on system throughput.", "title": "" }, { "docid": "9b75357d49ece914e02b04a6eaa927a0", "text": "Feminist criticism of health care and of bioethics has become increasingly rich and sophisticated in the last years of the twentieth century. Nonetheless, this body of work remains quite marginalized. I believe that there are (at least) two reasons for this. First, many people are still confused about feminism. Second, many people are unconvinced that significant sexism still exists and are therefore unreceptive to arguments that it should be remedied if there is no larger benefit. In this essay I argue for a thin, ``core'' conception of feminism that is easy to understand and difficult to reject. 
Core feminism would render debate within feminism more fruitful, clear the way for appropriate recognition of differences among women and their circumstances, provide intellectually compelling reasons for current non-feminists to adopt a feminist outlook, and facilitate mutually beneficial cooperation between feminism and other progressive social movements. This conception of feminism also makes it clear that feminism is part of a larger egalitarian moral and political agenda, and adopting it would help bioethics focus on the most urgent moral priorities. In addition, integrating core feminism into bioethics would open a gateway to the more speculative parts of feminist work where a wealth of creative thinking is occurring. Engaging with this feminist work would challenge and strengthen mainstream approaches; it should also motivate mainstream bioethicists to explore other currently marginalized parts of bioethics.", "title": "" }, { "docid": "facf85be0ae23eacb7e7b65dd5c45b33", "text": "We review evidence for partially segregated networks of brain areas that carry out different attentional functions. One system, which includes parts of the intraparietal cortex and superior frontal cortex, is involved in preparing and applying goal-directed (top-down) selection for stimuli and responses. This system is also modulated by the detection of stimuli. The other system, which includes the temporoparietal cortex and inferior frontal cortex, and is largely lateralized to the right hemisphere, is not involved in top-down selection. Instead, this system is specialized for the detection of behaviourally relevant stimuli, particularly when they are salient or unexpected. This ventral frontoparietal network works as a 'circuit breaker' for the dorsal system, directing attention to salient events. Both attentional systems interact during normal vision, and both are disrupted in unilateral spatial neglect.", "title": "" }, { "docid": "14ca9dfee206612e36cd6c3b3e0ca61e", "text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.", "title": "" }, { "docid": "59b10765f9125e9c38858af901a39cc7", "text": "--------__------------------------------------__---------------", "title": "" }, { "docid": "dba3434c600ed7ddbb944f0a3adb1ba0", "text": "Although acoustic waves are the most versatile and widely used physical layer technology for underwater wireless communication networks (UWCNs), they are adversely affected by ambient noise, multipath propagation, and fading. The large propagation delays, low bandwidth, and high bit error rates of the underwater acoustic channel hinder communication as well. These operational limits call for complementary technologies or communication alternatives when the acoustic channel is severely degraded. Magnetic induction (MI) is a promising technique for UWCNs that is not affected by large propagation delays, multipath propagation, and fading. In this paper, the MI communication channel has been modeled. Its propagation characteristics have been compared to the electromagnetic and acoustic communication systems through theoretical analysis and numerical evaluations. 
The results prove the feasibility of MI communication in underwater environments. The MI waveguide technique is developed to reduce path loss. The communication range between source and destination is considerably extended to hundreds of meters in fresh water due to its superior bit error rate performance.", "title": "" } ]
scidocsrr
443b689900fc69a1a256fb30af2036e5
SYSTEMS CONTINUANCE: AN EXPECTATION-CONFIRMATION MODEL
[ { "docid": "6c2afcf5d7db0f5d6baa9d435c203f8a", "text": "An attempt to extend current thinking on postpurchase response to include attribute satisfaction and dissatisfaction as separate determinants not fully reflected in either cognitive (i.e.. expectancy disconfirmation) or affective paradigms is presented. In separate studies of automobile satisfaction and satisfaction with course instruction, respondents provided the nature of emotional experience, disconfirmation perceptions, and separate attribute satisfaction and dissatisfaction judgments. Analysis confirmed the disconfirmation effect and tbe effects of separate dimensions of positive and negative affect and also suggested a multidimensional structure to the affect dimensions. Additionally, attribute satisfaction and dissatisfaction were significantly related to positive and negative affect, respectively, and to overall satisfaction. It is suggested that all dimensions tested are needed for a full accounting of postpurchase responses in usage.", "title": "" } ]
[ { "docid": "b3e90fdfda5346544f769b6dd7c3882b", "text": "Bromelain is a complex mixture of proteinases typically derived from pineapple stem. Similar proteinases are also present in pineapple fruit. Beneficial therapeutic effects of bromelain have been suggested or proven in several human inflammatory diseases and animal models of inflammation, including arthritis and inflammatory bowel disease. However, it is not clear how each of the proteinases within bromelain contributes to its anti-inflammatory effects in vivo. Previous in vivo studies using bromelain have been limited by the lack of assays to control for potential differences in the composition and proteolytic activity of this naturally derived proteinase mixture. In this study, we present model substrate assays and assays for cleavage of bromelain-sensitive cell surface molecules can be used to assess the activity of constituent proteinases within bromelain without the need for biochemical separation of individual components. Commercially available chemical and nutraceutical preparations of bromelain contain predominately stem bromelain. In contrast, the proteinase activity of pineapple fruit reflects its composition of fruit bromelain>ananain approximately stem bromelain. Concentrated bromelain solutions (>50 mg/ml) are more resistant to spontaneous inactivation of their proteolytic activity than are dilute solutions, with the proteinase stability in the order of stem bromelain>fruit bromelain approximately ananain. The proteolytic activity of concentrated bromelain solutions remains relatively stable for at least 1 week at room temperature, with minimal inactivation by multiple freeze-thaw cycles or exposure to the digestive enzyme trypsin. The relative stability of concentrated versus dilute bromelain solutions to inactivation under physiologically relevant conditions suggests that delivery of bromelain as a concentrated bolus would be the preferred method to maximize its proteolytic activity in vivo.", "title": "" }, { "docid": "135d451e66cdc8d47add47379c1c35f9", "text": "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. 
This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.", "title": "" }, { "docid": "9a9fd442bc7353d9cd202e9ace6e6580", "text": "The idea of developmental dyspraxia has been discussed in the research literature for almost 100 years. However, there continues to be a lack of consensus regarding both the definition and description of this disorder. This paper presents a neuropsychologically based operational definition of developmental dyspraxia that emphasizes that developmental dyspraxia is a disorder of gesture. Research that has investigated the development of praxis is discussed. Further, different types of gestural disorders displayed by children and different mechanisms that underlie developmental dyspraxia are compared to and contrasted with adult acquired apraxia. The impact of perceptual-motor, language, and cognitive impairments on children's gestural development and the possible associations between these developmental disorders and developmental dyspraxia are also examined. Also, the relationship among limb, orofacial, and verbal dyspraxia is discussed. Finally, problems that exist in the neuropsychological assessment of developmental dyspraxia are discussed and recommendations concerning what should be included in such an assessment are presented.", "title": "" }, { "docid": "22285844f638715765d21bff139d1bb1", "text": "The field of Terahertz (THz) radiation, electromagnetic energy, between 0.3 to 3 THz, has seen intense interest recently, because it combines some of the best properties of IR along with those of RF. For example, THz radiation can penetrate fabrics with less attenuation than IR, while its short wavelength maintains comparable imaging capabilities. We discuss major challenges in the field: designing systems and applications which fully exploit the unique properties of THz radiation. To illustrate, we present our reflective, radar-inspired THz imaging system and results, centered on biomedical burn imaging and skin hydration, and discuss challenges and ongoing research.", "title": "" }, { "docid": "85d9b0ed2e9838811bf3b07bb31dbeb6", "text": "In recent years, the medium which has negative index of refraction is widely researched. The medium has both the negative permittivity and the negative permeability. In this paper, we have researched the frequency range widening of negative permeability using split ring resonators.", "title": "" }, { "docid": "0d2260653f223db82e2e713f211a2ba0", "text": "Smartphone usage is a hot topic in pervasive computing due to their popularity and personal aspect. We present our initial results from analyzing how individual differences, such as gender and age, affect smartphone usage. The dataset comes from a large scale longitudinal study, the Menthal project. We select a sample of 30, 677 participants, from which 16, 147 are males and 14, 523 are females, with a median age of 21 years. These have been tracked for at least 28 days and they have submitted their demographic data through a questionnaire. The ongoing experiment has been started in January 2014 and we have used our own mobile data collection and analysis framework. Females use smartphones for longer periods than males, with a daily mean of 166.78 minutes vs. 154.26 minutes. 
Younger participants use their phones longer and usage is directed towards entertainment and social interactions through specialized apps. Older participants use it less and mainly for getting information or using it as a classic phone.", "title": "" }, { "docid": "893942f986718d639aa46930124af679", "text": "In this work we consider the problem of controlling a team of microaerial vehicles moving quickly through a three-dimensional environment while maintaining a tight formation. The formation is specified by a shape matrix that prescribes the relative separations and bearings between the robots. Each robot plans its trajectory independently based on its local information of other robot plans and estimates of states of other robots in the team to maintain the desired shape. We explore the interaction between nonlinear decentralized controllers, the fourth-order dynamics of the individual robots, the time delays in the network, and the effects of communication failures on system performance. An experimental evaluation of our approach on a team of quadrotors suggests that suitable performance is maintained as the formation motions become increasingly aggressive and as communication degrades.", "title": "" }, { "docid": "62f5640954e5b731f82599fb52ea816f", "text": "This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.", "title": "" }, { "docid": "1c6a9910a51656a47a8599a98dba77bb", "text": "In real life facial expressions show mixture of emotions. This paper proposes a novel expression descriptor based expression map that can efficiently represent pure, mixture and transition of facial expressions. The expression descriptor is the integration of optic flow and image gradient values and the descriptor value is accumulated in temporal scale. The expression map is realized using self-organizing map. We develop an objective scheme to find the percentage of different prototypical pure emotions (e.g., happiness, surprise, disgust etc.) that mix up to generate a real facial expression. Experimental results show that the expression map can be used as an effective classifier for facial expressions.", "title": "" }, { "docid": "210052dbabdb5c48502079d75cdd6ce6", "text": "Sketch It, Make It (SIMI) is a modeling tool that enables non-experts to design items for fabrication with laser cutters. SIMI recognizes rough, freehand input as a user iteratively edits a structured vector drawing. The tool combines the strengths of sketch-based interaction with the power of constraint-based modeling. 
Several interaction techniques are combined to present a coherent system that makes it easier to make precise designs for laser cutters.", "title": "" }, { "docid": "426d3b0b74eacf4da771292abad06739", "text": "Brain tumor is considered as one of the deadliest and most common form of cancer both in children and in adults. Consequently, determining the correct type of brain tumor in early stages is of significant importance to devise a precise treatment plan and predict patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amount of training data and can not properly handle input transformations. Capsule networks (referred to as CapsNets) are brand new machine learning architectures proposed very recently to overcome these shortcomings of CNNs, and posed to revolutionize deep learning solutions. Of particular interest to this work is that Capsule networks are robust to rotation and affine transformation, and require far less training data, which is the case for processing medical image datasets including brain Magnetic Resonance Imaging (MRI) images. In this paper, we focus to achieve the following four objectives: (i) Adopt and incorporate CapsNets for the problem of brain tumor classification to design an improved architecture which maximizes the accuracy of the classification problem at hand; (ii) Investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) Explore whether or not CapsNets are capable of providing better fit for the whole brain images or just the segmented tumor, and; (iv) Develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully overcome CNNs for the brain tumor classification problem.", "title": "" }, { "docid": "4357e361fd35bcbc5d6a7c195a87bad1", "text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.", "title": "" }, { "docid": "753f837e53a08a59392c30515481b503", "text": "Light is a powerful zeitgeber that synchronizes our endogenous circadian pacemaker with the environment and has been previously described as an agent in improving cognitive performance. With that in mind, this study was designed to explore the influence of exposure to blue-enriched white light in the morning on the performance of adolescent students. 58 High school students were recruited from four classes in two schools. 
In each school, one classroom was equipped with blue-enriched white lighting while the classroom next door served as a control setting. The effects of classroom lighting on cognitive performance were assessed using standardized psychological tests. Results show beneficial effects of blue-enriched white light on students' performance. In comparison to standard lighting conditions, students showed faster cognitive processing speed and better concentration. The blue-enriched white lighting seems to influence very basic information processing primarily, as no effects on short-term encoding and retrieval of memories were found. & 2014 Elsevier GmbH. All rights reserved.", "title": "" }, { "docid": "47b7ebc460ce1273941bdef5bc754d4a", "text": "When people predict their future behavior, they tend to place too much weight on their current intentions, which produces an optimistic bias for behaviors associated with currently strong intentions. More realistic self-predictions require greater sensitivity to situational barriers, such as obstacles or competing demands, that may interfere with the translation of current intentions into future behavior. We consider three reasons why people may not adjust sufficiently for such barriers. First, self-predictions may focus exclusively on current intentions, ignoring potential barriers altogether. We test this possibility, in three studies, with manipulations that draw greater attention to barriers. Second, barriers may be discounted in the self-prediction process. We test this possibility by comparing prospective and retrospective ratings of the impact of barriers on the target behavior. Neither possibility was supported in these tests, or in a further test examining whether an optimally weighted statistical model could improve on the accuracy of self-predictions by placing greater weight on anticipated situational barriers. Instead, the evidence supports a third possibility: Even when they acknowledge that situational factors can affect the likelihood of carrying out an intended behavior, people do not adequately moderate the weight placed on their current intentions when predicting their future behavior.", "title": "" }, { "docid": "9a397ca2a072d9b1f861f8a6770aa792", "text": "Computational photography systems are becoming increasingly diverse, while computational resources---for example on mobile platforms---are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. 
In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.", "title": "" }, { "docid": "9c1687323661ccb6bf2151824edc4260", "text": "In this work we present the design of a digitally controlled ring type oscillator in 0.5 μm CMOS technology for a low-cost and portable radio-frequency diathermy (RFD) device. The oscillator circuit is composed by a low frequency ring oscillator (LFRO), a voltage controlled ring oscillator (VCRO), and a logic control. The digital circuit generates an input signal for the LFO, which generates a voltage ramp that controls the oscillating output signal of the VCRO in the range of 500 KHz to 1 MHz. Simulation results show that the proposed circuit exhibits controllable output characteristics in the range of 500 KHz–1 MHz, with low power consumption and low phase noise, making it suitable for a portable RFD device.", "title": "" }, { "docid": "47faebfa7d65ebf277e57436cf7c2ca4", "text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable", "title": "" }, { "docid": "04435e017e720c0ed6e5c0cd29f1b4fc", "text": "Blobworld is a system for image retrieval based on finding coherent image regions which roughly correspond to objects. Each image is automatically segmented into regions (“blobs”) with associated color and texture descriptors. Querying is based on the attributes of one or two regions of interest, rather than a description of the entire image. In order to make large-scale retrieval feasible, we index the blob descriptions using a tree. Because indexing in the high-dimensional feature space is computationally prohibitive, we use a lower-rank approximation to the high-dimensional distance. 
Experiments show encouraging results for both querying and indexing.", "title": "" } ]
scidocsrr
f56f899275cdcaa5153e3b9f16a78a0d
JAIST: Combining multiple features for Answer Selection in Community Question Answering
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "cd45dd9d63c85bb0b23ccb4a8814a159", "text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. •Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: •Prefer recall over precision. •Prefer word choice over word order. •Prefer correct translations of content words over function words. •Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases. Visualization", "title": "" }, { "docid": "f2478e4b1156e112f84adbc24a649d04", "text": "Community Question Answering (cQA) provides new interesting research directions to the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES/NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.", "title": "" } ]
[ { "docid": "0e81057fffb454c912c8626721a43d85", "text": "Although scaffolding is an important and frequently studied concept, much discussion exists with regard to its conceptualizations, appearances, and effectiveness. Departing from the last decade’s scaffolding literature, this review scrutinizes these three areas of scaffolding. First, contingency, fading, and transfer of responsibility are discerned in this review as the three key characteristics of scaffolding. Second, an overview is presented of the numerous descriptive studies that provided narratives on the appearances of scaffolding and classifications of scaffolding strategies. These strategies are synthesized into a framework for analysis, distinguishing between scaffolding means and intentions. Third, the small number of effectiveness studies available is discussed and the results suggest that scaffolding is effective. However, more research is needed. The main challenge in scaffolding research appears to be its measurement. Based on the encountered and described measurement problems, suggestions for future research are made.", "title": "" }, { "docid": "6fa9cc1030bd87a626fac1cfc7696054", "text": "Frequently, the design of interactive systems focuses exclusively on the capabilities provided by the dynamic nature of computational media. Yet our experience includes many examples in which physical models provide certain strengths not found in computational models. Rather than viewing this as a dichotomy—where one must choose between one or the other—we are exploring the creation of computational environments that build on the strengths of combined physical and virtual approaches. Over the last decade, we have developed different design environments to support stakeholders engaged in design processes by enhancing communication, facilitating shared understanding, and creating better artifacts. Until a few years ago, our work explored physical and computational media separately. In this paper we present our efforts to develop integrated design environments linking physical and computational dimensions to attain the complementary synergies that these two worlds offer. Our purpose behind this integration is the development of systems that can enhance the movement from conceptual thinking to concrete representations using face-to-face interaction to promote the negotiation of meaning, the direct interaction with artifacts, and the possibility that diverse stakeholders can participate fully in the process of design. To this end, we analyze the strengths, affordances, weaknesses, and limitations of the two media used separately and illustrate with our most recent work the value added by integrating these environments.", "title": "" }, { "docid": "2af96909058f0323d60a9c2b3807690a", "text": "Minutiae point pattern matching is the most common approach for fingerprint verification. Although many minutiae point pattern matching algorithms have been proposed, reliable automatic fingerprint verification remains as a challenging problem, both with respect to recovering the optimal alignment and the construction of an adequate matching function. In this paper, we develop a memetic fingerprint matching algorithm (MFMA) which aims to identify the optimal or near optimal global matching between two minutiae sets. Within the MFMA, we first introduce an efficient matching operation to produce an initial population of local alignment configurations by examining local features of minutiae. 
Then, we devise a hybrid evolutionary procedure by combining the use of the global search functionality of a genetic algorithm with a local improvement operator to search for the optimal or near optimal global alignment. Finally, we define a reliable matching function for fitness computation. The proposed algorithm was evaluated by means of a series of experiments conducted on the FVC2002 database and compared with previous work. Experimental results confirm that the MFMA is an effective and practical matching algorithm for fingerprint verification. The algorithm is faster and more accurate than a traditional genetic-algorithm-based method. It is also more accurate than a number of other methods implemented for comparison, though our method generally requires more computational time in performing fingerprint matching.", "title": "" }, { "docid": "cab1d175b7976b6e941764c319a6c85d", "text": "Participants listened to randomly selected excerpts of popular music and rated how nostalgic each song made them feel. Nostalgia was stronger to the extent that a song was autobiographically salient, arousing, familiar, and elicited a greater number of positive, negative, and mixed emotions. These effects were moderated by individual differences (nostalgia proneness, mood state, dimensions of the Affective Neurosciences Personality Scale, and factors of the Big Five Inventory). Nostalgia proneness predicted stronger nostalgic experiences, even after controlling for other individual difference measures. Nostalgia proneness was predicted by the Sadness dimension of the Affective Neurosciences Personality Scale and Neuroticism of the Big Five Inventory. Nostalgia was associated with both joy and sadness, whereas nonnostalgic and nonautobiographical experiences were associated with irritation.", "title": "" }, { "docid": "1f2832276b346316b15fe05d8593217c", "text": "This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools.", "title": "" }, { "docid": "4933f3f3007dab687fc852e9c2b1ab0a", "text": "This paper presents a topology for bidirectional solid-state transformers with a minimal device count. The topology, referenced as dynamic-current or Dyna-C, has two current-source inverter stages with a high-frequency galvanic isolation, requiring 12 switches for four-quadrant three-phase ac/ac power conversion. The topology has voltage step-up/down capability, and the input and output can have arbitrary power factors and frequencies. Further, the Dyna-C can be configured as isolated power converters for single- or multiterminal dc, and single- or multiphase ac systems. 
The modular nature of the Dyna-C lends itself to be connected in series and/or parallel for high-voltage high-power applications. The proposed converter topology can find a broad range of applications such as isolated battery chargers, uninterruptible power supplies, renewable energy integration, smart grid, and power conversion for space-critical applications including aviation, locomotives, and ships. This paper outlines various configurations of the Dyna-C, as well as the relative operation and controls. The converter functionality is validated through simulations and experimental measurements of a 50-kVA prototype.", "title": "" }, { "docid": "0ff76204fcdf1a7cf2a6d13a5d3b1597", "text": "In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.", "title": "" }, { "docid": "8f930fc4f06f8b17e2826f0975af1fa1", "text": "Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called \"anchor\" nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. 
Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of both energy consumption, delay and throughput achieved by a smart parking network.", "title": "" }, { "docid": "f33147619ba2d24efcea9e32f70c7695", "text": "The wide use of micro bloggers such as Twitter offers a valuable and reliable source of information during natural disasters. The big volume of Twitter data calls for a scalable data management system whereas the semi-structured data analysis requires full-text searching function. As a result, it becomes challenging yet essential for disaster response agencies to take full advantage of social media data for decision making in a near-real-time fashion. In this work, we use Lucene to empower HBase with full-text searching ability to build a scalable social media data analytics system for observing and analyzing human behaviors during the Hurricane Sandy disaster. Experiments show the scalability and efficiency of the system. Furthermore, the discovery of communities has the benefit of identifying influential users and tracking the topical changes as the disaster unfolds. We develop a novel approach to discover communities in Twitter by applying spectral clustering algorithm to retweet graph. The topics and influential users of each community are also analyzed and demonstrated using Latent Semantic Indexing (LSI).", "title": "" }, { "docid": "4a57ca8bc92a11c044b3e40c1496932a", "text": "Recent times have witnessed an increase in use of high-performance reconfigurable computing for accelerating large-scale simulations. A characteristic of such simulations, like infrared (IR) scene simulation, is the use of large quantities of uncorrelated random numbers. It is therefore of interest to have a fast uniform random number generator implemented in reconfigurable hardware. While there have been previous attempts to accelerate the MT19937 pseudouniform random number generator using FPGAs we believe that we can substantially improve the previous implementations to develop a higher throughput and more areatime efficient design. Due to the potential for parallel implementation of random numbers generators, designs that have both a small area footprint and high throughput are to be preferred to ones that have the high throughput but with significant extra area requirements. In this paper, we first present a single port design and then present an enhanced 624 port hardware implementation of the MT19937 algorithm. The 624 port hardware implementation when implemented on a Xilinx XC2VP70-6 FPGA chip has a throughput of 119.6 × 10 32 bit random numbers per second which is more than 17x that of the previously best published uniform random number generator. Furthermore it has the lowest area time metric of all the currently published FPGA-based pseudouniform random number generators.", "title": "" }, { "docid": "5088c5e6880f8557fa37b824b7d91b28", "text": "Localization of sensor nodes is an important aspect in Wireless Sensor Networks (WSNs). This paper presents an overview of the major localization techniques for WSNs. These techniques are classified into centralized and distributed depending on where the computational effort is carried out. The paper concentrates on the factors that need to be considered when selecting a localization technique. 
The advantages and limitation of various techniques are also discussed. Finally, future research directions and challenges are highlighted.", "title": "" }, { "docid": "53ab46387cb1c04e193d2452c03a95ad", "text": "Real time control of five-axis machine tools requires smooth generation of feed, acceleration and jerk in CNC systems without violating the physical limits of the drives. This paper presents a feed scheduling algorithm for CNC systems to minimize the machining time for five-axis contour machining of sculptured surfaces. The variation of the feed along the five-axis tool-path is expressed in a cubic B-spline form. The velocity, acceleration and jerk limits of the five axes are considered in finding the most optimal feed along the toolpath in order to ensure smooth and linear operation of the servo drives with minimal tracking error. The time optimal feed motion is obtained by iteratively modulating the feed control points of the B-spline to maximize the feed along the tool-path without violating the programmed feed and the drives’ physical limits. Long tool-paths are handled efficiently by applying a moving window technique. The improvement in the productivity and linear operation of the five drives is demonstrated with five-axis simulations and experiments on a CNC machine tool. r 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8eb31344917df5420df8912cdd4966d1", "text": "This contribution presents a precise localization method for advanced driver assistance systems. A Maximally Stable Extremal Region (MSER) detector is used to extract bright areas, i.e. lane markings, from grayscale camera images. Furthermore, this algorithm is also used to extract features from a laser scanner grid map. These regions are automatically stored as landmarks in a geospatial data base during a map creation phase. A particle filter is then employed to perform the pose estimation. For the weight update of the filter the similarity between the set of online MSER detections and the set of mapped landmarks within the field of view is evaluated. Hereby, a two stage sensor fusion is carried out. First, in order to have a large field of view available, not only a forward facing camera but also a rearward facing camera is used and the detections from both sensors are fused. Secondly, the weight update also integrates the features detected from the laser grid map, which is created using measurements of three laser scanners. The performance of the proposed algorithm is evaluated on a 7 km long stretch of a rural road. The evaluation reveals that a relatively good position estimation and a very accurate orientation estimation (0.01 deg ± 0.22 deg) can be achieved using the presented localization method. In addition, an evaluation of the localization performance based only on each of the respective kinds of MSER features is provided in this contribution and compared to the combined approach.", "title": "" }, { "docid": "b1f000790b6ff45bd9b0b7ba3aec9cb2", "text": "Broad-scale destruction and fragmentation of native vegetation is a highly visible result of human land-use throughout the world (Chapter 4). From the Atlantic Forests of South America to the tropical forests of Southeast Asia, and in many other regions on Earth, much of the original vegetation now remains only as fragments amidst expanses of land committed to feeding and housing human beings. 
Destruction and fragmentation of habitats are major factors in the global decline of populations and species (Chapter 10), the modification of native plant and animal communities and the alteration of ecosystem processes (Chapter 3). Dealing with these changes is among the greatest challenges facing the “mission-orientated crisis discipline” of conservation biology (Soulé 1986; see Chapter 1). Habitat fragmentation, by definition, is the “breaking apart” of continuous habitat, such as tropical forest or semi-arid shrubland, into distinct pieces. When this occurs, three interrelated processes take place: a reduction in the total amount of the original vegetation (i.e. habitat loss); subdivision of the remaining vegetation into fragments, remnants or patches (i.e. habitat fragmentation); and introduction of new forms of land-use to replace vegetation that is lost. These three processes are closely intertwined such that it is often difficult to separate the relative effect of each on the species or community of concern. Indeed, many studies have not distinguished between these components, leading to concerns that “habitat fragmentation” is an ambiguous, or even meaningless, concept (Lindenmayer and Fischer 2006). Consequently, we use “landscape change” to refer to these combined processes and “habitat fragmentation” for issues directly associated with the subdivision of vegetation and its ecological consequences. This chapter begins by summarizing the conceptual approaches used to understand conservation in fragmented landscapes. We then examine the biophysical aspects of landscape change, and how such change affects species and communities, posing two main questions: (i) what are the implications for the patterns of occurrence of species and communities?; and (ii) how does landscape change affect processes that influence the distribution and viability of species and communities? The chapter concludes by identifying the kinds of actions that will enhance the conservation of biota in fragmented landscapes.", "title": "" }, { "docid": "63a58b3b6eb46cdd92b9c241b1670926", "text": "The Healthcare industry is generally “information rich”, but unfortunately not all the data are mined which is required for discovering hidden patterns & effective decision making. Advanced data mining techniques are used to discover knowledge in database and for medical research, particularly in Heart disease prediction. This paper has analysed prediction systems for Heart disease using more number of input attributes. The system uses medical terms such as sex, blood pressure, cholesterol like 13 attributes to predict the likelihood of patient getting a Heart disease. Until now, 13 attributes are used for prediction. This research paper added two more attributes i.e. obesity and smoking. The data mining classification techniques, namely Decision Trees, Naive Bayes, and Neural Networks are analyzed on Heart disease database. The performance of these techniques is compared, based on accuracy. As per our results accuracy of Neural Networks, Decision Trees, and Naive Bayes are 100%, 99.62%, and 90.74% respectively. Our analysis shows that out of these three classification models Neural Networks predicts Heart disease with highest accuracy.", "title": "" }, { "docid": "e5338d8c6c765165c65de5c4f390da2a", "text": "Rewards are sparse in the real world and most today’s reinforcement learning algorithms struggle with such sparsity.
One solution to this problem is to allow the agent to create rewards for itself — thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward — making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory — which incorporates rich information about environment dynamics. This allows us to overcome the known “couch-potato” issues of prior work — when the agent finds a way to instantly gratify itself by exploiting actions which lead to unpredictable consequences. We test our approach in visually rich 3D environments in VizDoom and DMLab. In VizDoom, our agent learns to successfully navigate to a distant goal at least 2 times faster than the state-of-the-art curiosity method ICM. In DMLab, our agent generalizes well to new procedurally generated levels of the game — reaching the goal at least 2 times more frequently than ICM on test mazes with very sparse reward.", "title": "" }, { "docid": "83cd04900b09258aa975f44dc2e3649d", "text": "The transition stage from the natural cognitive decline of normal aging to the more serious decline of dementia is referred to as mild cognitive impairment (MCI). The cognitive changes caused in MCI are noticeable by the individuals experiencing them and by others, but the changes are not severe enough to interfere with daily life or with independent activities. Because there is a thin line between normal aging and MCI, it is difficult for individuals to discern between the two conditions. Moreover, if the symptoms of MCI are not diagnosed in time, it may lead to more serious and permanent conditions. However, if these symptoms are detected in time and proper care and precaution are taken, it is possible to prevent the condition from worsening. A smart-home environment that unobtrusively keeps track of the individual's daily living activities is a possible solution to improve care and quality of life.", "title": "" }, { "docid": "adf3678a3f1fcd5db580a417194239f2", "text": "In training deep neural networks for semantic segmentation, the main limiting factor is the low amount of ground truth annotation data that is available in currently existing datasets. The limited availability of such data is due to the time cost and human effort required to accurately and consistently label real images on a pixel level. Modern sandbox video game engines provide open world environments where traffic and pedestrians behave in a pseudo-realistic manner. This caters well to the collection of a believable road-scene dataset. Utilizing open-source tools and resources found in single-player modding communities, we provide a method for persistent, ground truth, asset annotation of a game world. By collecting a synthetic dataset containing upwards of 1, 000, 000 images, we demonstrate realtime, on-demand, ground truth data annotation capability of our method. 
Supplementing this synthetic data to Cityscapes dataset, we show that our data generation method provides qualitative as well as quantitative improvements—for training networks—over previous methods that use video games as surrogate.", "title": "" }, { "docid": "5d92f58e929a851097eae320eb9c3ddc", "text": "In recent years, the study of genomic alterations and protein expression involved in the pathways of breast cancer carcinogenesis has provided an increasing number of targets for drugs development in the setting of metastatic breast cancer (i.e., trastuzumab, everolimus, palbociclib, etc.) significantly improving the prognosis of this disease. These drugs target specific molecular abnormalities that confer a survival advantage to cancer cells. On these bases, emerging evidence from clinical trials provided increasing proof that the genetic landscape of any tumor may dictate its sensitivity or resistance profile to specific agents and some studies have already showed that tumors treated with therapies matched with their molecular alterations obtain higher objective response rates and longer survival. Predictive molecular biomarkers may optimize the selection of effective therapies, thus reducing treatment costs and side effects. This review offers an overview of the main molecular pathways involved in breast carcinogenesis, the targeted therapies developed to inhibit these pathways, the principal mechanisms of resistance and, finally, the molecular biomarkers that, to date, are demonstrated in clinical trials to predict response/resistance to targeted treatments in metastatic breast cancer.", "title": "" }, { "docid": "8390fd7e559832eea895fabeb48c3549", "text": "An algorithm is presented to perform connected component labeling of images of arbitrary dimension that are represented by a linear bintree. The bintree is a generalization of the quadtree data structure that enables dealing with images of arbitrary dimension. The linear bintree is a pointerless representation. The algorithm uses an active border which is represented by linked lists instead of arrays. This results in a significant reduction in the space requirements, thereby making it feasible to process three- and higher dimensional images. Analysis of the execution time of the algorithm shows almost linear behavior with respect to the number of leaf nodes in the image, and empirical tests are in agreement. The algorithm can be modified easily to compute a (d − 1)-dimensional boundary measure (e.g., perimeter in two dimensions and surface area in three dimensions) with linear", "title": "" } ]
scidocsrr
a04245add9a1b1f59b8f46260db49621
Supplementary material for “Masked Autoregressive Flow for Density Estimation”
[ { "docid": "b6a8f45bd10c30040ed476b9d11aa908", "text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.", "title": "" }, { "docid": "4c21ec3a600d773ea16ce6c45df8fe9d", "text": "The efficacy of particle identification is compared using artificial neutral networks and boosted decision trees. The comparison is performed in the context of the MiniBooNE, an experiment at Fermilab searching for neutrino oscillations. Based on studies of Monte Carlo samples of simulated data, particle identification with boosting algorithms has better performance than that with artificial neural networks for the MiniBooNE experiment. Although the tests in this paper were for one experiment, it is expected that boosting algorithms will find wide application in physics. r 2005 Elsevier B.V. All rights reserved. PACS: 29.85.+c; 02.70.Uu; 07.05.Mh; 14.60.Pq", "title": "" }, { "docid": "3cdab5427efd08edc4f73266b7ed9176", "text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.", "title": "" } ]
[ { "docid": "2a5710aeaba7e39c5e08c1a5310c89f6", "text": "We present an augmented reality system that supports human workers in a rapidly changing production environment. By providing spatially registered information on the task directly in the user's field of view the system can guide the user through unfamiliar tasks (e.g. assembly of new products) and visualize information directly in the spatial context were it is relevant. In the first version we present the user with picking and assembly instructions in an assembly application. In this paper we present the initial experience with this system, which has already been used successfully by several hundred users who had no previous experience in the assembly task.", "title": "" }, { "docid": "527c4c17aadb23a991d85511004a7c4f", "text": "Accurate and robust recognition and prediction of traffic situation plays an important role in autonomous driving, which is a prerequisite for risk assessment and effective decision making. Although there exist a lot of works dealing with modeling driver behavior of a single object, it remains a challenge to make predictions for multiple highly interactive agents that react to each other simultaneously. In this work, we propose a generic probabilistic hierarchical recognition and prediction framework which employs a two-layer Hidden Markov Model (TLHMM) to obtain the distribution of potential situations and a learning-based dynamic scene evolution model to sample a group of future trajectories. Instead of predicting motions of a single entity, we propose to get the joint distribution by modeling multiple interactive agents as a whole system. Moreover, due to the decoupling property of the layered structure, our model is suitable for knowledge transfer from simulation to real world applications as well as among different traffic scenarios, which can reduce the computational efforts of training and the demand for a large data amount. A case study of highway ramp merging scenario is demonstrated to verify the effectiveness and accuracy of the proposed framework.", "title": "" }, { "docid": "08c26a40328648cf6a6d0a7efc3917a5", "text": "Person re-identification (ReID) is an important task in video surveillance and has various applications. It is non-trivial due to complex background clutters, varying illumination conditions, and uncontrollable camera settings. Moreover, the person body misalignment caused by detectors or pose variations is sometimes too severe for feature matching across images. In this study, we propose a novel Convolutional Neural Network (CNN), called Spindle Net, based on human body region guided multi-stage feature decomposition and tree-structured competitive feature fusion. It is the first time human body structure information is considered in a CNN framework to facilitate feature learning. The proposed Spindle Net brings unique advantages: 1) it separately captures semantic features from different body regions thus the macro-and micro-body features can be well aligned across images, 2) the learned region features from different semantic regions are merged with a competitive scheme and discriminative features can be well preserved. State of the art performance can be achieved on multiple datasets by large margins. 
We further demonstrate the robustness and effectiveness of the proposed Spindle Net on our proposed dataset SenseReID without fine-tuning.", "title": "" }, { "docid": "b2f0b5ef76d9e98e93e6c5ed64642584", "text": "The yeast and fungal prions determine heritable and infectious traits, and are thus genes composed of protein. Most prions are inactive forms of a normal protein as it forms a self-propagating filamentous β-sheet-rich polymer structure called amyloid. Remarkably, a single prion protein sequence can form two or more faithfully inherited prion variants, in effect alleles of these genes. What protein structure explains this protein-based inheritance? Using solid-state nuclear magnetic resonance, we showed that the infectious amyloids of the prion domains of Ure2p, Sup35p and Rnq1p have an in-register parallel architecture. This structure explains how the amyloid filament ends can template the structure of a new protein as it joins the filament. The yeast prions [PSI(+)] and [URE3] are not found in wild strains, indicating that they are a disadvantage to the cell. Moreover, the prion domains of Ure2p and Sup35p have functions unrelated to prion formation, indicating that these domains are not present for the purpose of forming prions. Indeed, prion-forming ability is not conserved, even within Saccharomyces cerevisiae, suggesting that the rare formation of prions is a disease. The prion domain sequences generally vary more rapidly in evolution than does the remainder of the molecule, producing a barrier to prion transmission, perhaps selected in evolution by this protection.", "title": "" }, { "docid": "7b88e651bf87e3a780fd1cf31b997bc5", "text": "While the use of the internet and social media as a tool for extremists and terrorists has been well documented, understanding the mechanisms at work has been much more elusive. This paper begins with a grounded theory approach guided by a new theoretical approach to power that utilizes both terrorism cases and extremist social media groups to develop an explanatory model of radicalization. Preliminary hypotheses are developed, explored and refined in order to develop a comprehensive model which is then presented. This model utilizes and applies concepts from social theorist Michel Foucault, including the use of discourse and networked power relations in order to normalize and modify thoughts and behaviors. The internet is conceptualized as a type of institution in which this framework of power operates and seeks to recruit and radicalize. Overall, findings suggest that the explanatory model presented is a well suited, yet still incomplete in explaining the process of online radicalization.", "title": "" }, { "docid": "d1ebf47c1f0b1d8572d526e9260dbd32", "text": "In this paper, mortality in the immediate aftermath of an earthquake is studied on a worldwide scale using multivariate analysis. A statistical method is presented that analyzes reported earthquake fatalities as a function of a heterogeneous set of parameters selected on the basis of their presumed influence on earthquake mortality. The ensemble was compiled from demographic, seismic, and reported fatality data culled from available records of past earthquakes organized in a geographic information system. The authors consider the statistical relation between earthquake mortality and the available data ensemble, analyze the validity of the results in view of the parametric uncertainties, and propose a multivariate mortality analysis prediction method. 
The analysis reveals that, although the highest mortality rates are expected in poorly developed rural areas, high fatality counts can result from a wide range of mortality ratios that depend on the effective population size.", "title": "" }, { "docid": "2e812c0a44832721fcbd7272f9f6a465", "text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.", "title": "" }, { "docid": "2f4cfa040664d08b1540677c8d72f962", "text": "We study the problem of modeling spatiotemporal trajectories over long time horizons using expert demonstrations. For instance, in sports, agents often choose action sequences with long-term goals in mind, such as achieving a certain strategic position. Conventional policy learning approaches, such as those based on Markov decision processes, generally fail at learning cohesive long-term behavior in such high-dimensional state spaces, and are only effective when fairly myopic decisionmaking yields the desired behavior. The key difficulty is that conventional models are “single-scale” and only learn a single state-action policy. We instead propose a hierarchical policy class that automatically reasons about both long-term and shortterm goals, which we instantiate as a hierarchical neural network. We showcase our approach in a case study on learning to imitate demonstrated basketball trajectories, and show that it generates significantly more realistic trajectories compared to non-hierarchical baselines as judged by professional sports analysts.", "title": "" }, { "docid": "83413682f018ae5aec9ec415679de940", "text": "An 18-year-old female patient arrived at the emergency department complaining of abdominal pain and fullness after a heavy meal. Physical examination revealed she was filthy and cover in feces, and she experienced severe abdominal distension. She died in ED and a diagnostic autopsy examination was requested. At external examination, the pathologist observed a significant dilation of the anal sphincter and suspected sexual assault, thus alerting the Judicial Authority who assigned the case to our department for a forensic autopsy. During the autopsy, we observed anal orifice expansion without signs of violence; food was found in the pleural cavity. 
The stomach was hyper-distended and perforated at three different points as well as the diaphragm. The patient was suffering from anorexia nervosa with episodes of overeating followed by manual voiding of her feces from the anal cavity (thus explaining the anal dilatation). The forensic pathologists closed the case as an accidental death.", "title": "" }, { "docid": "b692e35c404da653d27dc33c01867b6e", "text": "We demonstrate that it is possible to perform automatic sentiment classification in the very noisy domain of customer feedback data. We show that by using large feature vectors in combination with feature reduction, we can train linear support vector machines that achieve high classification accuracy on data that present classification challenges even for a human annotator. We also show that, surprisingly, the addition of deep linguistic analysis features to a set of surface level word n-gram features contributes consistently to classification accuracy in this domain.", "title": "" }, { "docid": "1701da2aed094fdcbfaca6c2252d2e53", "text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras.", "title": "" }, { "docid": "1c79bf1b4dcad01f9afc54f467d8067f", "text": "With the rapid growth of network bandwidth, increases in CPU cores on a single machine, and application API models demanding more short-lived connections, a scalable TCP stack is performance-critical. Although many clean-state designs have been proposed, production environments still call for a bottom-up parallel TCP stack design that is backward-compatible with existing applications.\n We present Fastsocket, a BSD Socket-compatible and scalable kernel socket design, which achieves table-level connection partition in TCP stack and guarantees connection locality for both passive and active connections. Fastsocket architecture is a ground up partition design, from NIC interrupts all the way up to applications, which naturally eliminates various lock contentions in the entire stack. Moreover, Fastsocket maintains the full functionality of the kernel TCP stack and BSD-socket-compatible API, and thus applications need no modifications.\n Our evaluations show that Fastsocket achieves a speedup of 20.4x on a 24-core machine under a workload of short-lived connections, outperforming the state-of-the-art Linux kernel TCP implementations. 
When scaling up to 24 CPU cores, Fastsocket increases the throughput of Nginx and HAProxy by 267% and 621% respectively compared with the base Linux kernel. We also demonstrate that Fastsocket can achieve scalability and preserve BSD socket API at the same time. Fastsocket is already deployed in the production environment of Sina WeiBo, serving 50 million daily active users and billions of requests per day.", "title": "" }, { "docid": "92ac3bfdcf5e554152c4ce2e26b77315", "text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.", "title": "" }, { "docid": "caf88f7fd5ec7f3a3499f46f541b985b", "text": "Photo-based question answering is a useful way of finding information about physical objects. Current question answering (QA) systems are text-based and can be difficult to use when a question involves an object with distinct visual features. A photo-based QA system allows direct use of a photo to refer to the object. We develop a three-layer system architecture for photo-based QA that brings together recent technical achievements in question answering and image matching. The first, template-based QA layer matches a query photo to online images and extracts structured data from multimedia databases to answer questions about the photo. To simplify image matching, it exploits the question text to filter images based on categories and keywords. The second, information retrieval QA layer searches an internal repository of resolved photo-based questions to retrieve relevant answers. The third, human-computation QA layer leverages community experts to handle the most difficult cases. A series of experiments performed on a pilot dataset of 30,000 images of books, movie DVD covers, grocery items, and landmarks demonstrate the technical feasibility of this architecture. We present three prototypes to show how photo-based QA can be built into an online album, a text-based QA, and a mobile application.", "title": "" }, { "docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04", "text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. 
This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.", "title": "" }, { "docid": "e8055c37b0082cff57e02389949fb7ca", "text": "Distributed SDN controllers have been proposed to address performance and resilience issues. While approaches for datacenters are built on strongly-consistent state sharing among controllers, others for WAN and constrained networks rely on a loosely-consistent distributed state. In this paper, we address the problem of failover for distributed SDN controllers by proposing two strategies for neighbor active controllers to take over the control of orphan OpenFlow switches: (1) a greedy incorporation and (2) a pre-partitioning among controllers. We built a prototype with distributed Floodlight controllers to evaluate these strategies. The results show that the failover duration with the greedy approach is proportional to the quantity of orphan switches while the pre-partitioning approach, introducing a very small additional control traffic, enables to react quicker in less than 200ms.", "title": "" }, { "docid": "7eed5e11e47807a3ff0af21461e88385", "text": "We propose Attentive Regularization (AR), a method to constrain the activation maps of kernels in Convolutional Neural Networks (CNNs) to specific regions of interest (ROIs). Each kernel learns a location of specialization along with its weights through standard backpropagation. A differentiable attention mechanism requiring no additional supervision is used to optimize the ROIs. Traditional CNNs of different types and structures can be modified with this idea into equivalent Targeted Kernel Networks (TKNs), while keeping the network size nearly identical. By restricting kernel ROIs, we reduce the number of sliding convolutional operations performed throughout the network in its forward pass, speeding up both training and inference. We evaluate our proposed architecture on both synthetic and natural tasks across multiple domains. TKNs obtain significant improvements over baselines, requiring less computation (around an order of magnitude) while achieving superior performance.", "title": "" }, { "docid": "5e4ef99cd48e385984509613b3697e37", "text": "RC4 has been the most popular stream cipher in the history of symmetric key cryptography. Its internal state contains a permutation over all possible bytes from 0 to 255, and it attempts to generate a pseudo-random sequence of bytes (called keystream) by extracting elements of this permutation. Over the last twenty years, numerous cryptanalytic results on RC4 stream cipher have been published, many of which are based on non-random (biased) events involving the secret key, the state variables, and the keystream of the cipher. Though biases based on the secret key are common in RC4 literature, none of the existing ones depends on the length of the secret key. In the first part of this paper, we investigate the effect of RC4 keylength on its keystream, and report significant biases involving the length of the secret key. In the process, we prove the two known empirical biases that were experimentally reported and used in recent attacks against WEP and WPA by Sepehrdad, Vaudenay and Vuagnoux in EUROCRYPT 2011. 
After our current work, there remains no bias in the literature of WEP and WPA attacks without a proof. In the second part of the paper, we present theoretical proofs of some significant initial-round empirical biases observed by Sepehrdad, Vaudenay and Vuagnoux in SAC 2010. In the third part, we present the derivation of the complete probability distribution of the first byte of RC4 keystream, a problem left open for a decade since the observation by Mironov in CRYPTO 2002. Further, the existence of positive biases towards zero for all the initial bytes 3 to 255 is proved and exploited towards a generalized broadcast attack on RC4. We also investigate for long-term non-randomness in the keystream, and prove a new long-term bias of RC4.", "title": "" }, { "docid": "95a376ec68ac3c4bd6b0fd236dca5bcd", "text": "Long-term suppression of postprandial glucose concentration is an important dietary strategy for the prevention and treatment of type 2 diabetes. Because previous reports have suggested that seaweed may exert anti-diabetic effects in animals, the effects of Wakame or Mekabu intake with 200 g white rice, 50 g boiled soybeans, 60 g potatoes, and 40 g broccoli on postprandial glucose, insulin and free fatty acid levels were investigated in healthy subjects. Plasma glucose levels at 30 min and glucose area under the curve (AUC) at 0-30 min after the Mekabu meal were significantly lower than that after the control meal. Plasma glucose and glucose AUC were not different between the Wakame and control meals. Postprandial serum insulin and its AUC and free fatty acid concentration were not different among the three meals. In addition, fullness, satisfaction, and wellness scores were not different among the three meals. Thus, consumption of 70 g Mekabu with a white rice-based breakfast reduces postprandial glucose concentration.", "title": "" }, { "docid": "c18903fad6b70086de9be9bafffb2b65", "text": "In this work we determine how well the common objective image quality measures (Mean Squared Error (MSE), local MSE, Signalto-Noise Ratio (SNR), Structural Similarity Index (SSIM), Visual Signalto-Noise Ratio (VSNR) and Visual Information Fidelity (VIF)) predict subjective radiologists’ assessments for brain and body computed tomography (CT) images. A subjective experiment was designed where radiologists were asked to rate the quality of compressed medical images in a setting similar to clinical. We propose a modified Receiver Operating Characteristic (ROC) analysis method for comparison of the image quality measures where the “ground truth” is considered to be given by subjective scores. The best performance was achieved by the SSIM index and VIF for brain and body CT images. The worst results were observed for VSNR. We have utilized a logistic curve model which can be used to predict the subjective assessments with an objective criteria. This is a practical tool that can be used to determine the quality of medical images.", "title": "" } ]
scidocsrr
c83376cb5c074c60dafe2dcc919c7d5a
The well-being of playful adults: Adult playfulness, subjective well-being, physical well-being, and the pursuit of enjoyable activities
[ { "docid": "440d104d20d89e0f331de31c70993f2e", "text": "The prime aim of this set of studies was to test the disposition to play (playfulness) in adults in its relation with various measures of personality but also ability (self-estimated but also psychometrically measured ingenuity). Study 1 (n = 180) shows that adults playfulness relates primarily to extraversion, lower conscientiousness, and higher endorsements of culture; joy of being laughed at (gelotophilia) and agreeableness were also predictive in a regression analysis; Study 2 (n = 264) shows that playfulness relates primarily to a high expectation of intrinsic and a low expectation of extrinsic goals as well as greater intrinsic and lower extrinsic importance of goals (for expressive and fun-variants of playfulness); Study 3 (n = 212) shows that playfulness relates to greater selfperception of one’s degree of ingenuity and psychometric ingenuity correlated primarily with greater spontaneous and creative variants of playfulness (in about the same range for origence and fluidity of the productions). Overall, the findings were in line with the expectations and could stimulate further studies of playfulness in adults. Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-63532 Published Version Originally published at: Proyer, Rene T (2012). Examining playfulness in adults: Testing its correlates with personality, positive psychological functioning, goal aspirations, and multi-methodically assessed ingenuity. Psychological Test and Assessment Modeling, 54(2):103-127. Psychological Test and Assessment Modeling, Volume 54, 2012 (2), 103-127 Examining playfulness in adults: Testing its correlates with personality, positive psychological functioning, goal aspirations, and multi-methodically assessed ingenuity", "title": "" } ]
[ { "docid": "ec6f53bd2cbc482c1450934b1fd9e463", "text": "Cloud computing providers have setup several data centers at different geographical locations over the Internet in order to optimally serve needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine optimal location for hosting application services to achieve reasonable QoS levels. Further, the Cloud computing providers are unable to predict geographic distribution of users consuming their services, hence the load coordination must happen automatically, and distribution of services must change in response to changes in the load. To counter this problem, we advocate creation of federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that federated Cloud computing model has immense potential as it offers significant performance gains as regards to response time and cost saving under dynamic workload scenarios.", "title": "" }, { "docid": "e7475c3fd58141c496e8b430a2db24d3", "text": "This study concerns the quality of life of patients after stroke and how this is influenced by disablement and emotional factors. Ninety-six consecutive patients of mean age 71 years were followed for two years. At the end of that time 23% had experienced a recurrence of stroke and 27% were deceased. Of the survivors 76% were independent as regards activities of daily life (ADL) and lived in their own homes. Age as well as initial function were prognostically important factors. Patients who could participate in interviews marked on a visual analogue scale their evaluation of quality of life before and after stroke. Most of them had experienced a decrease and no improvement was observed during the two years. The deterioration was more pronounced in ADL dependent patients than among the independent. However, depression and anxiety were found to be of similar importance for quality of life as was physical disablement. These findings call for a greater emphasis on psychological support in the care of post stroke patients. The visual analogue scale can be a useful tool for detecting special needs.", "title": "" }, { "docid": "45ba18a9561acc0a6ecc9f7d47a05616", "text": "This paper presents a new active gate drive for SiC MOSFETs switching. The proposed driver is based on feedforward control method. The switch is benefited from a simple and effective analog gate driver (GD). The main achievement of this GD is the transient enhancement with minimum undesirable effect on the switching efficiency. Also, the electromagnetic interference (EMI) as the main threat to the operation of SiC MOSFET is eliminated by this method. 
The proposed GD has been validated through the simulation and experimental tests. All the evaluation have been carried out in a hard switching condition with high-frequency switching.", "title": "" }, { "docid": "97578b3a8f5f34c96e7888f273d4494f", "text": "We analyze the use, advantages, and drawbacks of graph kernels in chemoin-formatics, including a comparison of kernel-based approaches with other methodology, as well as examples of applications. Kernel-based machine learning [1], now widely applied in chemoinformatics, delivers state-of-the-art performance [2] in tasks like classification and regression. Molecular graph kernels [3] are a recent development where kernels are defined directly on the molecular structure graph. This allows the adaptation of methods from graph theory to structure graphs and their direct use with kernel learning algorithms. The main advantage of kernel learning, the so-called “kernel trick”, allows for a systematic, computationally feasible, and often globally optimal search for non-linear patterns, as well as the direct use of non-numerical inputs such as strings and graphs. A drawback is that solutions are expressed indirectly in terms of similarity to training samples, and runtimes that are typically quadratic or cubic in the number of training samples. Graph kernels [3] are positive semidefinite functions defined directly on graphs. The most important types are based on random walks, subgraph patterns, optimal assignments, and graphlets. Molecular structure graphs have strong properties that can be exploited [4], e.g., they are undirected, have no self-loops and no multiple edges, are connected (except for salts), annotated, often planar in the graph-theoretic sense, and their vertex degree is bounded by a small constant. In many applications, they are small. Many graph kernels are generalpurpose, some are suitable for structure graphs, and a few have been explicitly designed for them. We present three exemplary applications of the iterative similarity optimal assignment kernel [5], which was designed for the comparison of small structure graphs: The discovery of novel agonists of the peroxisome proliferator-activated receptor g [6] (ligand-based virtual screening), the estimation of acid dissociation constants [7] (quantitative structure-property relationships), and molecular de novo design [8].", "title": "" }, { "docid": "5cccc7cc748d3461dc3c0fb42a09245f", "text": "The self and attachment difficulties associated with chronic childhood abuse and other forms of pervasive trauma must be understood and addressed in the context of the therapeutic relationship for healing to extend beyond resolution of traditional psychiatric symptoms and skill deficits. The authors integrate contemporary research and theory about attachment and complex developmental trauma, including dissociation, and apply it to psychotherapy of complex trauma, especially as this research and theory inform the therapeutic relationship. 
Relevant literature on complex trauma and attachment is integrated with contemporary trauma theory as the background for discussing relational issues that commonly arise in this treatment, highlighting common challenges such as forming a therapeutic alliance, managing frame and boundaries, and working with dissociation and reenactments.", "title": "" }, { "docid": "e82c0826863ccd9cd647725fc00a2137", "text": "Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.", "title": "" }, { "docid": "36d5ba974945cba3bf9120f3ab9aa7a0", "text": "In this paper, we analyze the spectral efficiency of multicell massive multiple-input-multiple-output (MIMO) systems with downlink training and a new pilot contamination precoding (PCP) scheme. First, we analyze the spectral efficiency of the beamforming training (BT) scheme with maximum-ratio transmission (MRT) precoding. Then, we derive an approximate closed-form expression of the spectral efficiency to find the optimal lengths of uplink and downlink pilots. Simulation results show that the achieved spectral efficiency can be improved due to channel estimation at the user side, but in comparison with a single-cell scenario, the spectral efficiency per cell in multicell scenario degrades because of pilot contamination. We focus on the practical case where the number of base station (BS) antennas is large but still finite and propose the BT and PCP (BT-PCP) transmission scheme to mitigate the pilot contamination with limited cooperation between BSs. We confirm the effectiveness of the proposed BT-PCP scheme with simulation, and we show that the proposed BT-PCP scheme achieves higher spectral efficiency than the conventional PCP method and that the performance gap from the perfect channel state information (CSI) scenario without pilot contamination is small.", "title": "" }, { "docid": "fe6c0bfab2443fd6c5c6e2f8f8da6f0b", "text": "Monocular optical flow has been widely used to detect obstacles in Micro Air Vehicles (MAVs) during visual navigation. However, this approach requires significant movement, which reduces the efficiency of navigation and may even introduce risks in narrow spaces. In this paper, we introduce a novel setup of self-supervised learning (SSL), in which optical flow cues serve as a scaffold to learn the visual appearance of obstacles in the environment. 
We apply it to a landing task, in which initially ‘surface roughness’ is estimated from the optical flow field in order to detect obstacles. Subsequently, a linear regression function is learned that maps appearance features represented by texton distributions to the roughness estimate. After learning, the MAV can detect obstacles by just analyzing a still image. This allows the MAV to search for a landing spot without moving. We first demonstrate this principle to work with offline tests involving images captured from an on-board camera, and then demonstrate the principle in flight. Although surface roughness is a property of the entire flow field in the global image, the appearance learning even allows for the pixel-wise segmentation of obstacles.", "title": "" }, { "docid": "8c3a76aa28177f64e72c52df5ff4a679", "text": "Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards lowor high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both lowand highorder feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its “wide” and “deep” parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.", "title": "" }, { "docid": "38ecb51f7fca71bd47248987866a10d2", "text": "Machine Translation has been a topic of research from the past many years. Many methods and techniques have been proposed and developed. However, quality of translation has always been a matter of concern. In this paper, we outline a target language generation mechanism with the help of language English-Sanskrit language pair using rule based machine translation technique [1]. Rule Based Machine Translation provides high quality translation and requires in depth knowledge of the language apart from real world knowledge and the differences in cultural background and conceptual divisions. A string of English sentence can be translated into string of Sanskrit ones. The methodology for design and development is implemented in the form of software named as “EtranS”. KeywordsAnalysis, Machine translation, translation theory, Interlingua, language divergence, Sanskrit, natural language processing.", "title": "" }, { "docid": "9d9afbd6168c884f54f72d3daea57ca7", "text": "0167-8655/$ see front matter 2009 Elsevier B.V. A doi:10.1016/j.patrec.2009.06.012 * Corresponding author. Tel.: +82 2 705 8931; fax: E-mail addresses: sjyoon@sogang.ac.kr (S. Yoon), sa Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. 
We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f9d33c91e71a3e84f3b06af83fcdbb6c", "text": "OBJECTIVES\nTo estimate the magnitude of small meaningful and substantial individual change in physical performance measures and evaluate their responsiveness.\n\n\nDESIGN\nSecondary data analyses using distribution- and anchor-based methods to determine meaningful change.\n\n\nSETTING\nSecondary analysis of data from an observational study and clinical trials of community-dwelling older people and subacute stroke survivors.\n\n\nPARTICIPANTS\nOlder adults with mobility disabilities in a strength training trial (n=100), subacute stroke survivors in an intervention trial (n=100), and a prospective cohort of community-dwelling older people (n=492).\n\n\nMEASUREMENTS\nGait speed, Short Physical Performance Battery (SPPB), 6-minute-walk distance (6MWD), and self-reported mobility.\n\n\nRESULTS\nMost small meaningful change estimates ranged from 0.04 to 0.06 m/s for gait speed, 0.27 to 0.55 points for SPPB, and 19 to 22 m for 6MWD. Most substantial change estimates ranged from 0.08 to 0.14 m/s for gait speed, 0.99 to 1.34 points for SPPB, and 47 to 49 m for 6MWD. Based on responsiveness indices, per-group sample sizes for clinical trials ranged from 13 to 42 for substantial change and 71 to 161 for small meaningful change.\n\n\nCONCLUSION\nBest initial estimates of small meaningful change are near 0.05 m/s for gait speed, 0.5 points for SPPB, and 20 m for 6MWD and of substantial change are near 0.10 m/s for gait speed, 1.0 point for SPPB, and 50 m for 6MWD. For clinical use, substantial change in these measures and small change in gait speed and 6MWD, but not SPPB, are detectable. For research use, these measures yield feasible sample sizes for detecting meaningful change.", "title": "" }, { "docid": "9fd3f40785872710c03f1953e81f311a", "text": "The Wnt pathway is integrally involved in regulating self-renewal, proliferation, and maintenance of cancer stem cells (CSCs). We explored the effect of the Wnt antagonist, secreted frizzled-related protein 4 (sFRP4), in modulating epithelial to mesenchymal transition (EMT) in CSCs from human glioblastoma cells lines, U87 and U373. sFRP4 chemo-sensitized CSC-enriched cells to the most commonly used anti-glioblastoma drug, temozolomide (TMZ), by the reversal of EMT. Cell movement, colony formation, and invasion in vitro were suppressed by sFRP4+TMZ treatment, which correlated with the switch of expression of markers from mesenchymal (Twist, Snail, N-cadherin) to epithelial (E-cadherin). sFRP4 treatment elicited activation of the Wnt-Ca2(+) pathway, which antagonizes the Wnt/ß-catenin pathway. Significantly, the chemo-sensitization effect of sFRP4 was correlated with the reduction in the expression of drug resistance markers ABCG2, ABCC2, and ABCC4. The efficacy of sFRP4+TMZ treatment was demonstrated in vivo using nude mice, which showed minimum tumor engraftment using CSCs pretreated with sFRP4+TMZ. These studies indicate that sFRP4 treatment would help to improve response to commonly used chemotherapeutics in gliomas by modulating EMT via the Wnt/ß-catenin pathway. 
These findings could be exploited for designing better targeted strategies to improve chemo-response and eventually eliminate glioblastoma CSCs.", "title": "" }, { "docid": "1465aa476fe6313f15009bed69546a7d", "text": "The skyline operator and its variants such as dynamic skyline and reverse skyline operators have attracted considerable attention recently due to their broad applications. However, computations of such operators are challenging today since there is an increasing trend of applications to deal with big data. For such data-intensive applications, the MapReduce framework has been widely used recently. In this paper, we propose efficient parallel algorithms for processing the skyline and its variants using MapReduce. We first build histograms to effectively prune out non-skyline (non-reverse skyline) points in advance. We next partition data based on the regions divided by the histograms and compute candidate (reverse) skyline points for each region independently using MapReduce. Finally, we check whether each candidate point is actually a (reverse) skyline point in every region independently. Our performance study confirms the effectiveness and scalability of the proposed algorithms.", "title": "" }, { "docid": "84ad9c8ae3e1ed3d25650a29af0673c6", "text": "As data mining evolves and matures more and more businesses are incorporating this technology into their business practices. However, currently data mining and decision support software is expensive and selection of the wrong tools can be costly in many ways. This paper provides direction and decision-making information to the practicing professional. A framework for evaluating data mining tools is presented and a methodology for applying this framework is described. Finally a case study to demonstrate the method’s effectiveness is presented. This methodology represents the first-hand experience using many of the leading data mining tools against real business data at the Center for Data Insight (CDI) at Northern Arizona University (NAU). This is not a comprehensive review of commercial tools but instead provides a method and a point-of-reference for selecting the best software tool for a particular problem. Experience has shown that there is not one best data-mining tool for all purposes. This instrument is designed to accommodate differences in environments and problem domains. It is expected that this methodology will be used to publish tool comparisons and benchmarking results.", "title": "" }, { "docid": "4af5aa24efc82a8e66deb98f224cd033", "text": "Abstract—In the recent years, the rapid spread of mobile device has create the vast amount of mobile data. However, some shallow-structure models such as support vector machine (SVM) have difficulty dealing with high dimensional data with the development of mobile network. In this paper, we analyze mobile data to predict human trajectories in order to understand human mobility pattern via a deep-structure model called “DeepSpace”. To the best of out knowledge, it is the first time that the deep learning approach is applied to predicting human trajectories. Furthermore, we develop the vanilla convolutional neural network (CNN) to be an online learning system, which can deal with the continuous mobile data stream. In general, “DeepSpace” consists of two different prediction models corresponding to different scales in space (the coarse prediction model and fine prediction models). 
This two models constitute a hierarchical structure, which enable the whole architecture to be run in parallel. Finally, we test our model based on the data usage detail records (UDRs) from the mobile cellular network in a city of southeastern China, instead of the call detail records (CDRs) which are widely used by others as usual. The experiment results show that “DeepSpace” is promising in human trajectories prediction.", "title": "" }, { "docid": "6f6ae8ea9237cca449b8053ff5f368e7", "text": "With the rapid development of Location-based Social Network (LBSN) services, a large number of Point-of-Interests (POIs) have been available, which consequently raises a great demand of building personalized POI recommender systems. A personalized POI recommender system can significantly help users to find their preferred POIs and assist POI owners to attract more customers. However, due to the complexity of users’ checkin decision making process that is influenced by many different factors such as POI distance and region’s prosperity, and the dynamics of user’s preference, POI recommender systems usually suffer from many challenges. Although different latent factor based methods (e.g., probabilistic matrix factorization) have been proposed, most of them do not successfully incorporate both geographical influence and temporal effect together into latent factor models. To this end, in this paper, we propose a new Spatial-Temporal Probabilistic Matrix Factorization (STPMF) model that models a user’s preference for POI as the combination of his geographical preference and other general interest in POI. Furthermore, in addition to static general interest of user, we capture the temporal dynamics of user’s interest as well by modeling checkin data in a unique way. To evaluate the proposed STPMF model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results clearly demonstrate the effectiveness of our proposed STPMF model.", "title": "" }, { "docid": "a1cff98eecf6691777bb89e849645077", "text": "Information-centric networking(ICN) opens new opportunities in the IoT domain due to its in-network caching capability. This significantly reduces read latency and the load on the origin server. In-network caching in ICN however introduces its own set of challenges because of its ubiquitous caches. Maintaining cache consistency without incurring high overhead is an important problem that needs to be handled in ICN to prevent a client from retrieving stale data. We propose a cache consistency approach based on the rate at which an IoT application generates its data. Our technique is lightweight and can be deployed easily in real network. Our simulation results demonstrate that our proposed algorithm significantly reduces the network traffic as well as the load on the origin server while serving fresh content to the clients.", "title": "" }, { "docid": "816c78f518d8d0621015d4922623e58d", "text": "Driver fatigue is a significant factor in many traffic accidents. We propose a novel approach for driver fatigue detection from facial image sequences, which is based on multiscale dynamic features. First, Gabor filters are used to get a multiscale representation for image sequences. Then Local Binary Patterns are extracted from each multiscale image. 
To account for the temporal aspect of human fatigue, the LBP image sequence is divided into dynamic units, and a histogram of each dynamic unit is computed and concatenated as dynamic features. Finally a statistical learning algorithm is applied to extract the most discriminative features from the multiscale dynamic features and construct a strong classifier for fatigue detection. The proposed approach is validated under real-life fatigue conditions. The test data includes 600 image sequences with illumination and pose variations from 30 people’s videos. Experimental results show the validity of the proposed approach, and a correct rate of 98.33% is achieved which is much better than the baselines.", "title": "" }, { "docid": "8bb5794d38528ab459813ab1fa484a69", "text": "We introduce the ACL Anthology Network (AAN), a manually curated networked database of citations, collaborations, and summaries in the field of Computational Linguistics. We also present a number of statistics about the network including the most cited authors, the most central collaborators, as well as network statistics about the paper citation, author citation, and author collaboration networks.", "title": "" } ]
scidocsrr
492f7220fe4063f99776054b7db6dcc4
Comparative study of machine learning algorithms for breast cancer detection and diagnosis
[ { "docid": "0b6a3b143dfccd7ca9ea09f7fa5b5e8c", "text": "Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized as an aim to model the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in the everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques as an aim to model cancer risk or patient outcomes.", "title": "" } ]
[ { "docid": "2a4e5635e2c15ce8ed84e6e296c4bbf4", "text": "The games with a purpose paradigm proposed by Luis von Ahn [9] is a new approach for game design where useful but boring tasks, like labeling a random image found in the web, are packed within a game to make them entertaining. But there are not only large numbers of internet users that can be used as voluntary data producers but legions of mobile device owners, too. In this paper we describe the design of a location-based mobile game with a purpose: CityExplorer. The purpose of this game is to produce geospatial data that is useful for non-gaming applications like a location-based service. From the analysis of four use case studies of CityExplorer we report that such a purposeful game is entertaining and can produce rich geospatial data collections.", "title": "" }, { "docid": "4287db8deb3c4de5d7f2f5695c3e2e70", "text": "The brain is complex and dynamic. The spatial scales of interest to the neurobiologist range from individual synapses (approximately 1 microm) to neural circuits (centimeters); the timescales range from the flickering of channels (less than a millisecond) to long-term memory (years). Remarkably, fluorescence microscopy has the potential to revolutionize research on all of these spatial and temporal scales. Two-photon excitation (2PE) laser scanning microscopy allows high-resolution and high-sensitivity fluorescence microscopy in intact neural tissue, which is hostile to traditional forms of microscopy. Over the last 10 years, applications of 2PE, including microscopy and photostimulation, have contributed to our understanding of a broad array of neurobiological phenomena, including the dynamics of single channels in individual synapses and the functional organization of cortical maps. Here we review the principles of 2PE microscopy, highlight recent applications, discuss its limitations, and point to areas for future research and development.", "title": "" }, { "docid": "9b086872cad65b92237696ec3a48550f", "text": "Memory-augmented neural networks (MANNs) refer to a class of neural network models equipped with external memory (such as neural Turing machines and memory networks). These neural networks outperform conventional recurrent neural networks (RNNs) in terms of learning long-term dependency, allowing them to solve intriguing AI tasks that would otherwise be hard to address. This paper concerns the problem of quantizing MANNs. Quantization is known to be effective when we deploy deep models on embedded systems with limited resources. Furthermore, quantization can substantially reduce the energy consumption of the inference procedure. These benefits justify recent developments of quantized multilayer perceptrons, convolutional networks, and RNNs. However, no prior work has reported the successful quantization of MANNs. The in-depth analysis presented here reveals various challenges that do not appear in the quantization of the other networks. Without addressing them properly, quantized MANNs would normally suffer from excessive quantization error which leads to degraded performance. In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge. In our experiments, we achieved a computation-energy gain of 22× with 8-bit fixed-point and binary quantization compared to the floating-point implementation. 
Measured on the bAbI dataset, the resulting model, named the quantized MANN (Q-MANN), improved the error rate by 46% and 30% with 8-bit fixed-point and binary quantization, respectively, compared to the MANN quantized using conventional techniques.", "title": "" }, { "docid": "e4893b639d75a6650756927d36fa37f8", "text": "BACKGROUND\nThe length of stay (LOS) is an important indicator of the efficiency of hospital management. Reduction in the number of inpatient days results in decreased risk of infection and medication side effects, improvement in the quality of treatment, and increased hospital profit with more efficient bed management. The purpose of this study was to determine which factors are associated with length of hospital stay, based on electronic health records, in order to manage hospital stay more efficiently.\n\n\nMATERIALS AND METHODS\nResearch subjects were retrieved from a database of patients admitted to a tertiary general university hospital in South Korea between January and December 2013. Patients were analyzed according to the following three categories: descriptive and exploratory analysis, process pattern analysis using process mining techniques, and statistical analysis and prediction of LOS.\n\n\nRESULTS\nOverall, 55% (25,228) of inpatients were discharged within 4 days. The department of rehabilitation medicine (RH) had the highest average LOS at 15.9 days. Of all the conditions diagnosed over 250 times, diagnoses of I63.8 (cerebral infarction, middle cerebral artery), I63.9 (infarction of middle cerebral artery territory) and I21.9 (myocardial infarction) were associated with the longest average hospital stay and high standard deviation. Patients with these conditions were also more likely to be transferred to the RH department for rehabilitation. A range of variables, such as transfer, discharge delay time, operation frequency, frequency of diagnosis, severity, bed grade, and insurance type was significantly correlated with the LOS.\n\n\nCONCLUSIONS\nAccurate understanding of the factors associating with the LOS and progressive improvements in processing and monitoring may allow more efficient management of the LOS of inpatients.", "title": "" }, { "docid": "9e2f1e5f74fbb856f4d6196203d4c23c", "text": "This paper summarises the experiences of UNICEF India in tackling the problem of high fluoride content in rural drinking water supply sources, using household-based defluoridation filter using activated alumina. Since 1991, UNICEF supported the research work for development of the technology by the Department of Chemistry, Indian Institute of Technology (IIT), Kanpur. This resulted in pilot projects on Domestic Defluoridation Units in the states of Andhra Pradesh and Rajasthan during 1996-2002. Gradually a demand for these filters has grown and the private sector is gradually becoming interested.", "title": "" }, { "docid": "980ad058a2856048765f497683557386", "text": "Hierarchical reinforcement learning (HRL) has recently shown promising advances on speeding up learning, improving the exploration, and discovering intertask transferable skills. Most recent works focus on HRL with two levels, i.e., a master policy manipulates subpolicies, which in turn manipulate primitive actions. However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract, while their actions are very primitive. 
Therefore, in this paper, we propose a diversity-driven extensible HRL (DEHRL), where an extensible and scalable framework is built and learned levelwise to realize HRL with multiple levels. DEHRL follows a popular assumption: diverse subpolicies are useful, i.e., subpolicies are believed to be more useful if they are more diverse. However, existing implementations of this diversity assumption usually have their own drawbacks, which makes them inapplicable to HRL with multiple levels. Consequently, we further propose a novel diversity-driven solution to achieve this assumption in DEHRL. Experimental studies evaluate DEHRL with five baselines from four perspectives in two domains; the results show that DEHRL outperforms the state-of-the-art baselines in all four aspects.", "title": "" }, { "docid": "a249375471d58592f1911f2a285aa945", "text": "The existing state-of-the-art in the field of intrusion detection systems (IDSs) generally involves some use of machine learning algorithms. However, the computer security community is growing increasingly aware that a sophisticated adversary could target the learning module of these IDSs in order to circumvent future detections. Consequently, going forward, robustness of machine-learning based IDSs against adversarial manipulation (i.e., poisoning) will be the key factor for the overall success of these systems in the real world. In our work, we focus on adaptive IDSs that use anomaly-based detection to identify malicious activities in an information system. To be able to evaluate the susceptibility of these IDSs to deliberate adversarial poisoning, we have developed a novel framework for their performance testing under adversarial contamination. We have also studied the viability of using deep autoencoders in the detection of anomalies in adaptive IDSs, as well as their overall robustness against adversarial poisoning. Our experimental results show that our proposed autoencoder-based IDS outperforms a generic PCA-based counterpart by more than 15% in terms of detection accuracy. The obtained results concerning the detection ability of the deep autoencoder IDS under adversarial contamination, compared to that of the PCA-based IDS, are also encouraging, with the deep autoencoder IDS maintaining a more stable detection in parallel to limiting the contamination of its training dataset to just bellow 2%.", "title": "" }, { "docid": "6a6191695c948200658ad6020f21f203", "text": "Given a random pair of images, an arbitrary style transfer method extracts the feel from the reference image to synthesize an output based on the look of the other content image. Recent arbitrary style transfer methods transfer second order statistics from reference image onto content image via a multiplication between content image features and a transformation matrix, which is computed from features with a pre-determined algorithm. These algorithms either require computationally expensive operations, or fail to model the feature covariance and produce artifacts in synthesized images. Generalized from these methods, in this work, we derive the form of transformation matrix theoretically and present an arbitrary style transfer approach that learns the transformation matrix with a feed-forward network. Our algorithm is highly efficient yet allows a flexible combination of multi-level styles while preserving content affinity during style transfer process. 
We demonstrate the effectiveness of our approach on four tasks: artistic style transfer, video and photo-realistic style transfer as well as domain adaptation, including comparisons with the stateof-the-art methods.", "title": "" }, { "docid": "c23976667414fd4786ac1d71363ee04d", "text": "More and more sensitive information is transmitted and stored in computer networks. Security has become a critical issue. As traditional cryptographic systems are now vulnerable to attacks, DNA based cryptography has been identified as a promising technology because of the vast parallelism and extraordinary information density. While a body of research has proposed the DNA based encryption algorithm, no research has provided solutions to distribute complex and long secure keys. This paper introduces a Hamming code and a block cipher mechanism to ensure secure transmission of a secure key. The research overcomes the limitation on the length of the secure key represented by DNA strands. Therefore it proves that real biological DNA strands are useful for encryption computing. To evaluate our method, we apply the block cipher mechanism to optimize a DNA-based implementation of a conventional symmetric encryption algorithm, described as “yet another encryption algorithm”. Moreover, a maximum length matching algorithm is developed to provide immunity against frequency attacks.", "title": "" }, { "docid": "e5048285c2616e9bfb28accd91629187", "text": "Hidden Markov Models (HMMs) are learning methods for pattern recognition. The probabilistic HMMs have been one of the most used techniques based on the Bayesian model. First-order probabilistic HMMs were adapted to the theory of belief functions such that Bayesian probabilities were replaced with mass functions. In this paper, we present a second-order Hidden Markov Model using belief functions. Previous works in belief HMMs have been focused on the first-order HMMs. We extend them to the second-order model.", "title": "" }, { "docid": "309e020e38f4a9286cef5aaba33a78a5", "text": "Brain-machine interface (BMI) systems convert neural signals from motor regions of the brain into control signals to guide prosthetic devices. The ultimate goal of BMIs is to improve the quality of life for people with paralysis by providing direct neural control of prosthetic arms or computer cursors. While considerable research over the past 15 years has led to compelling BMI demonstrations, there remain several challenges to achieving clinically viable BMI systems. In this review, we focus on the challenge of increasing BMI performance and robustness. We review and highlight key aspects of intracortical BMI decoder design, which is central to the conversion of neural signals into prosthetic control signals, and discuss emerging opportunities to improve intracortical BMI decoders. This is one of the primary research opportunities where information systems engineering can directly impact the future success of BMIs.", "title": "" }, { "docid": "1cf5ffbd1929b1d6d475cdfabeb9bf2a", "text": "In this paper we concern ourselves with the problem of minimizing leakage power in CMOS circuits consisting of AOI (and-or-invert) gates as they operate in stand-by mode or an idle mode waiting for other circuits to complete their operation. It is known that leakage power due to subthreshold leakage current in transistors in the OFF state is dependent on the input vector applied. 
Therefore, we try to compute an input vector that can be applied to the circuit in stand-by mode so that the power loss due to sub-threshold leakage current is the minimum possible. We employ a integer linear programming (ILP) approach to solve the problem of minimizing leakage by first obtaining a good lower bound (estimate) on the minimum leakage power and then rounding the solution to actually obtain an input vector that causes low leakage. The chief advantage of this technique as opposed to others in the literature is that it invariably provides us with a good idea about the quality of the input vector found.", "title": "" }, { "docid": "a575c136fa021d7efe1f00ebdaf12740", "text": "Most works in automatic music generation have addressed so far specific tasks. Such a reductionist approach has been extremely successful and some of these tasks have been solved once and for all. However, few works have addressed the issue of generating automatically fully fledged music material, of human-level quality. In this article, we report on a specific experiment in holistic music generation: the reorchestration of Beethoven’s Ode to Joy, the European anthem, in seven styles. These reorchestrations were produced with algorithms developed in the Flow Machines project and within a short time frame. We stress the benefits of having had such a challenging and unifying goal, and the interesting problems and challenges it raised along the way.", "title": "" }, { "docid": "050dd71858325edd4c1a42fc1a25de95", "text": "This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way.", "title": "" }, { "docid": "e173580f0dd327c78fd0b16b234112a1", "text": "Multi-view data is very popular in real-world applications, as different view-points and various types of sensors help to better represent data when fused across views or modalities. Samples from different views of the same class are less similar than those with the same view but different class. We consider a more general case that prior view information of testing data is inaccessible in multi-view learning. Traditional multi-view learning algorithms were designed to obtain multiple view-specific linear projections and would fail without this prior information available. That was because they assumed the probe and gallery views were known in advance, so the correct view-specific projections were to be applied in order to better learn low-dimensional features. To address this, we propose a Low-Rank Common Subspace (LRCS) for multi-view data analysis, which seeks a common low-rank linear projection to mitigate the semantic gap among different views. The low-rank common projection is able to capture compatible intrinsic information across different views and also well-align the within-class samples from different views. Furthermore, with a low-rank constraint on the view-specific projected data and that transformed by the common subspace, the within-class samples from multiple views would concentrate together. 
Different from the traditional supervised multi-view algorithms, our LRCS works in a weakly supervised way, where only the view information gets observed. Such a common projection can make our model more flexible when dealing with the problem of lacking prior view information of testing data. Two scenarios of experiments, robust subspace learning and transfer learning, are conducted to evaluate our algorithm. Experimental results on several multi-view datasets reveal that our proposed method outperforms state-of-the-art, even when compared with some supervised learning methods.", "title": "" }, { "docid": "929640bc4813841f1a220e31da3bd631", "text": "In this paper, a U-slot rectangular microstrip patch antenna is designed in order to overcome the narrowband characteristic and gain broader band. The antenna has dual-band characteristics, so it has wider operating bandwidth. The antenna works at Ku-band and the center frequency is 16GHz. The characteristics are analyzed and optimized with Ansoft HFSS, the simulation results show that the absolute bandwidth and gain of the antenna unit are 2.7GHz and 8.1dB, and of the antenna array are 3.1GHz and 14.4dB. The relative bandwidth reaches to 19.4%, which is much wider than the general bandwidth of about 1% to 7%.", "title": "" }, { "docid": "797166b4c68bcdc7a8860462117e2051", "text": "In this paper we propose a novel feature descriptor Extended Co-occurrence HOG (ECoHOG) and integrate it with dense point trajectories demonstrating its usefulness in fine grained activity recognition. This feature is inspired by original Co-occurrence HOG (CoHOG) that is based on histograms of occurrences of pairs of image gradients in the image. Instead relying only on pure histograms we introduce a sum of gradient magnitudes of co-occurring pairs of image gradients in the image. This results in giving the importance to the object boundaries and straightening the difference between the moving foreground and static background. We also couple ECoHOG with dense point trajectories extracted using optical flow from video sequences and demonstrate that they are extremely well suited for fine grained activity recognition. Using our feature we outperform state of the art methods in this task and provide extensive quantitative evaluation.", "title": "" }, { "docid": "15b26ceb3a81f4af6233ab8a36f66d3f", "text": "The number of web images has been explosively growing due to the development of network and storage technology. These images make up a large amount of current multimedia data and are closely related to our daily life. To efficiently browse, retrieve and organize the web images, numerous approaches have been proposed. Since the semantic concepts of the images can be indicated by label information, automatic image annotation becomes one effective technique for image management tasks. Most existing annotation methods use image features that are often noisy and redundant. Hence, feature selection can be exploited for a more precise and compact representation of the images, thus improving the annotation performance. In this paper, we propose a novel feature selection method and apply it to automatic image annotation. There are two appealing properties of our method. First, it can jointly select the most relevant features from all the data points by using a sparsity-based model. Second, it can uncover the shared subspace of original features, which is beneficial for multi-label learning. 
To solve the objective function of our method, we propose an efficient iterative algorithm. Extensive experiments are performed on large image databases that are collected from the web. The experimental results together with the theoretical analysis have validated the effectiveness of our method for feature selection, thus demonstrating its feasibility of being applied to web image annotation.", "title": "" }, { "docid": "820f67fa3521ee4af7da0e022a8d0be3", "text": "The visual appearance of rain is highly complex. Unlike the particles that cause other weather conditions such as haze and fog, rain drops are large and visible to the naked eye. Each drop refracts and reflects both scene radiance and environmental illumination towards an observer. As a result, a spatially distributed ensemble of drops moving at high velocities (rain) produces complex spatial and temporal intensity fluctuations in images and videos. To analyze the effects of rain, it is essential to understand the visual appearance of a single rain drop. In this paper, we develop geometric and photometric models for the refraction through, and reflection (both specular and internal) from, a rain drop. Our geometric and photometric models show that each rain drop behaves like a wide-angle lens that redirects light from a large field of view towards the observer. From this, we observe that in spite of being a transparent object, the brightness of the drop does not depend strongly on the brightness of the background. Our models provide the fundamental tools to analyze the complex effects of rain. Thus, we believe our work has implications for vision in bad weather as well as for efficient rendering of rain in computer graphics.", "title": "" } ]
scidocsrr
2a7eb157a26fbd8d97ee39ca8d717fe0
Mining criminal networks from unstructured text documents
[ { "docid": "a1d167f6c1c1d574e8e5c0c6cba2c775", "text": "Hypothesis generation, a crucial initial step for making scientific discoveries, relies on prior knowledge, experience and intuition. Chance connections made between seemingly distinct subareas sometimes turn out to be fruitful. The goal in text mining is to assist in this process by automatically discovering a small set of interesting hypotheses from a suitable text collection. In this paper we present open and closed text mining algorithms that are built within the discovery framework established by Swanson and Smalheiser. Our algorithms represent topics using metadata profiles. When applied to MEDLINE these are MeSH based profiles. We present experiments that demonstrate the effectiveness of our algorithms. Specifically, our algorithms generate ranked term lists where the key terms representing novel relationships between topics are ranked high.", "title": "" }, { "docid": "902b1d774e89adae23d9b8396327532e", "text": "A new generation of data mining tools and applications work to unearth hidden patterns in large volumes of crime data.", "title": "" }, { "docid": "55b405991dc250cd56be709d53166dca", "text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.", "title": "" } ]
[ { "docid": "80f1c8b99de81b9b1220b4178d126042", "text": "Indigenous groups offer alternative knowledge and perspectives based on their own locally developed practices of resource use. We surveyed the international literature to focus on the role of Traditional Ecological Knowledge in monitoring, responding to, and managing ecosystem processes and functions, with special attention to ecological resilience. Case studies revealed that there exists a diversity of local or traditional practices for ecosystem management. These include multiple species management, resource rotation, succession management, landscape patchiness management, and other ways of responding to and managing pulses and ecological surprises. Social mechanisms behind these traditional practices include a number of adaptations for the generation, accumulation, and transmission of knowledge; the use of local institutions to provide leaders/stewards and rules for social regulation; mechanisms for cultural internalization of traditional practices; and the development of appropriate world views and cultural values. Some traditional knowledge and management systems were characterized by the use of local ecological knowledge to interpret and respond to feedbacks from the environment to guide the direction of resource management. These traditional systems had certain similarities to adaptive management with its emphasis on feedback learning, and its treatment of uncertainty and unpredictability intrinsic to all ecosystems.", "title": "" }, { "docid": "bc58f2f9f6f5773f5f8b2696d9902281", "text": "Software development is a complicated process and requires careful planning to produce high quality software. In large software development projects, release planning may involve a lot of unique challenges. Due to time, budget and some other constraints, potentially there are many problems that may possibly occur. Subsequently, project managers have been trying to identify and understand release planning, challenges and possible resolutions which might help them in developing more effective and successful software products. This paper presents the findings from an empirical study which investigates release planning challenges. It takes a qualitative approach using interviews and observations with practitioners and project managers at five large software banking projects in Informatics Services Corporation (ISC) in Iran. The main objective of this study is to explore and increase the understanding of software release planning challenges in several software companies in a developing country. A number of challenges were elaborated and discussed in this study within the domain of software banking projects. These major challenges are classified into two main categories: the human-originated including people cooperation, disciplines and abilities; and the system-oriented including systematic approaches, resource constraints, complexity, and interdependency among the systems.", "title": "" }, { "docid": "d319a17ad2fa46e0278e0b0f51832f4b", "text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. 
In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.", "title": "" }, { "docid": "cf299917f1de627e5d09ea943ab92157", "text": "Discovering hyponym relations among domain-specific terms is a fundamental task in taxonomy learning and knowledge acquisition. However, the great diversity of various domain corpora and the lack of labeled training sets make this task very challenging for conventional methods that are based on text content. The hyperlink structure of Wikipedia article pages was found to contain recurring network motifs in this study, indicating the probability of a hyperlink being a hyponym hyperlink. Hence, a novel hyponym relation extraction approach based on the network motifs of Wikipedia hyperlinks was proposed. This approach automatically constructs motif-based features from the hyperlink structure of a domain; every hyperlink is mapped to a 13-dimensional feature vector based on the 13 types of three-node motifs. The approach extracts structural information from Wikipedia and heuristically creates a labeled training set. Classification models were determined from the training sets for hyponym relation extraction. Two experiments were conducted to validate our approach based on seven domain-specific datasets obtained from Wikipedia. The first experiment, which utilized manually labeled data, verified the effectiveness of the motif-based features. The second experiment, which utilized an automatically labeled training set of different domains, showed that the proposed approach performs better than the approach based on lexico-syntactic patterns and achieves comparable result to the approach based on textual features. Experimental results show the practicability and fairly good domain scalability of the proposed approach.", "title": "" }, { "docid": "5c29083624be58efa82b4315976f8dc2", "text": "This paper presents a structured ordinal measure method for video-based face recognition that simultaneously lear ns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space . The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization metho d is employed to handle the discrete and low-rank constraints , yielding high-quality codes that capture prior structures well. 
Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples.", "title": "" }, { "docid": "6615eff178f12e0d5c357fb92c493c7e", "text": "A novel aperture stacked patch (ASP) antenna with circular polarization is proposed. The antenna consists of four parasitic patches, each one being rotated by an angle of 30° relative to its adjacent patches. The proposed antenna has achieved a simultaneous axial ratio <3 dB and voltage standing wave ratio (VSWR) <2 bandwidth of 33.6% (7.2-10.11 GHz) in the single element and 36.15% (7.1-10.2 GHz) in a 2 × 1-element array configuration. The antenna behavior is explained by a thorough parameter study together with fabrication and measurement.", "title": "" }, { "docid": "357ae5590fb6f11fbd210baced2fc4ee", "text": "To achieve the best results from an OCR system, the pre-processing steps must be performed with a high degree of accuracy and reliability. There are two critically important steps in the OCR pre-processing phase. First, blocks must be extracted from each page of the scanned document. Secondly, all blocks resulting from the first step must be arranged in the correct order. One of the most notable techniques for block ordering in the second step is the recursive x-y cut (RXYC) algorithm. This technique works accurately only when applied to documents with a simple page layout but it causes incorrect block ordering when applied to documents with complex page layouts. This paper proposes a modified recursive x-y cut algorithm for solving block ordering problems for documents with complex page layouts. This proposed algorithm can solve problems such as (1) the overlapping block problem; (2) the blocks overlay problem, and (3) the L-Shaped block problem.", "title": "" }, { "docid": "f20f8dee5d7a576d1ffd5932afcdc0c5", "text": "Stock market prediction is the act of trying to determine the future value of a company stock or other financial instrument traded on a financial exchange. The successful prediction of a stock's future price will maximize investor’s gains. This paper proposes a machine learning model to predict stock market price. The proposed algorithm integrates Particle swarm optimization (PSO) and least square support vector machine (LS-SVM). The PSO algorithm is employed to optimize LS-SVM to predict the daily stock prices. Proposed model is based on the study of stocks historical data and technical indicators. PSO algorithm selects best free parameters combination for LS-SVM to avoid over-fitting and local minima problems and improve prediction accuracy. The proposed model was applied and evaluated using thirteen benchmark financials datasets and compared with artificial neural network with Levenberg-Marquardt (LM) algorithm. The obtained results showed that the proposed model has better prediction accuracy and the potential of PSO algorithm in optimizing LS-SVM.", "title": "" }, { "docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd", "text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. 
We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "title": "" }, { "docid": "0e796ac2c27a1811eaafb8e3a65c7d59", "text": "When dealing with large graphs, such as those that arise in the context of online social networks, a subset of nodes may be labeled. These labels can indicate demographic values, interest, beliefs or other characteristics of the nodes (users). A core problem is to use this information to extend the labeling so that all nodes are assigned a label (or labels). In this chapter, we survey classification techniques that have been proposed for this problem. We consider two broad categories: methods based on iterative application of traditional classifiers using graph information as features, and methods which propagate the existing labels via random walks. We adopt a common perspective on these methods to highlight the similarities between different approaches within and across the two categories. We also describe some extensions and related directions to the central problem of node classification.", "title": "" }, { "docid": "cbb41bdd23a34b3531d2980dfc2211bf", "text": "Examined bones were obtained from eight adult African giant rats, Cricetomys gambianus Waterhouse. Animals used had an average body mass of 730.00 ± 41.91 gm and body length of 67.20 ± 0.05 cm. The vertebral formula was found to be C7, T13, L6, S4, Ca31-36. The lowest and highest points of the cervicothoracic curvature were at C5 and T2, respectively. The spinous process of the axis was the largest in the cervical group while others were sharp and pointed. The greatest diameter of the vertebral canal was at the atlas (0.8 cm) and the lowest at the caudal sacral bones (2 mm). The diameter of the vertebral foramen was the largest at C1 and the smallest at the S4; the foramina were negligibly indistinct caudal to the sacral vertebrae. There were 13 pairs of ribs. The first seven pairs were sternal, and six pairs were asternal of which the last 2-3 pairs were floating ribs. The sternum was composed of deltoid-shaped manubrium sterni, four sternebrae, and a slender processus xiphoideus. No sex-related differences were observed. The vertebral column is adapted for strong muscular attachment and actions helping the rodent suited for speed, agility, dexterity, and strength which might enable it to overpower prey and escape predation.", "title": "" }, { "docid": "82b03b45a093fb6342e92602c437741b", "text": "Human-like path planning is still a challenging task for automated vehicles. Imitation learning can teach these vehicles to learn planning from human demonstration. In this work, we propose to formulate the planning stage as a convolutional neural network (CNN). Thus, we can employ well established CNN techniques to learn planning from imitation. With the proposed method, we train a network for planning in complex traffic situations from both simulated and real world data. 
The resulting planning network exhibits human-like path generation.", "title": "" }, { "docid": "173c0124ac81cfe8fa10fbdc20a1a094", "text": "This paper presents a new approach to compare fuzzy numbers using α-distance. Initially, the metric distance on the interval numbers based on the convex hull of the endpoints is proposed and it is extended to fuzzy numbers. All the properties of the α-distance are proved in details. Finally, the ranking of fuzzy numbers by the α-distance is discussed. In addition, the proposed method is compared with some known ones, the validity of the new method is illustrated by applying its to several group of fuzzy numbers.", "title": "" }, { "docid": "e86c2af47c55a574aecf474f95fb34d3", "text": "This paper presents a novel way to address the extrinsic calibration problem for a system composed of a 3D LIDAR and a camera. The relative transformation between the two sensors is calibrated via a nonlinear least squares (NLS) problem, which is formulated in terms of the geometric constraints associated with a trihedral object. Precise initial estimates of NLS are obtained by dividing it into two sub-problems that are solved individually. With the precise initializations, the calibration parameters are further refined by iteratively optimizing the NLS problem. The algorithm is validated on both simulated and real data, as well as a 3D reconstruction application. Moreover, since the trihedral target used for calibration can be either orthogonal or not, it is very often present in structured environments, making the calibration convenient.", "title": "" }, { "docid": "7f82ff12310f74b17ba01cac60762a8c", "text": "For worst case parameter mismatch, modest levels of unbalance are predicted through the use of minimum gate decoupling, dynamic load lines with high Q values, common source inductance or high yield screening. Each technique is evaluated in terms of current unbalance, transition energy, peak turn-off voltage and parasitic oscillations, as appropriate, for various pulse duty cycles and frequency ranges.", "title": "" }, { "docid": "70b481d263ecb5c46d211e4414b63b85", "text": "S ome years ago I interviewed the chief executive officer of a successful Thai manufacturing firm as part of a pilot survey project. While trying to figure out a good way to quantify the firm’s experience with government regulations and corruption in the foreign trade sector, the CEO exclaimed: “I hope to be reborn as a custom official.” When a well-paid CEO wishes for a job with low official pay in the government sector, corruption is almost surely a problem! The most devastating forms of corruption include the diversion and outright theft of funds for public programs and the damage caused by firms and individuals that pay bribes to avoid health and safety regulations intended to benefit the public. Examples abound. A conservative estimate is that the former President of Zaire, Mobutu Sese Seko, looted the treasury of some $5 billion—an amount equal to the country’s entire external debt at the time he was ousted in 1997. The funds allegedly embezzled by the former presidents of Indonesia and Philippines, Mohamed Suharto and Ferdinand Marcos, are estimated to be two and seven times higher (Transparency International, 2004). 
In the Goldenberg scam in Kenya in the early 1990s, the Goldenberg firm received as much as $1 billion from the government as part of an export compensation scheme for fictitious exports of commodities of which Kenya either produced little (gold) or nothing at all (diamonds) (“Public Inquiry into Kenya Gold Scam,” 2003). An internal IMF report found that nearly $1 billion of oil revenues, or $77 per capita, vanished from Angolan state coffers in 2001 alone (Pearce, 2002). This amount was about three times the value of the humanitarian aid received by Angola in 2001—in a country where three-quarters of the population survives on less than $1 a day and where one", "title": "" }, { "docid": "eb218a1d8b7cbcd895dd0cd8cfcf9d80", "text": "Caring is considered as the essence of nursing and is the basic factor that distinguishes between nurses and other health professions. The literature is rich of previous studies that focused on perceptions of nurses toward nurse caring behaviors, but less studywas applied in pediatric nurses in different settings. Aim of the study:evaluate the effect of application of Watson caring theory for nurses in pediatric critical care unit. Method(s): A convenience sample of 70 nurses of Pediatric Critical Care Unit in El-Menoufya University Hospital and educational hospital in ShebenElkom.were completed the demographics questionnaire, and the Caring Behavior Assessment (CBA) questionnaire,medical record to collect medical data regarding children characteristics such as age and diagnosis, Interviewing questionnaire for nurses regarding their barrier to less interest of comfort behavior such as doing doctor order, Shortage of nursing staff, Large number of patients, Heavy workloads, Secretarial jobs for nurses and Emotional stress. Results: more thantwothirds of nurses in study group and majority of control group had age less than 30 years, there were highly statistically significant difference related to mean scores for Caring Behavior Assessment (CBA) as rated by nurses in pretest (1.4750 to 2.0750) than in posttest (3.5 to 4.55). Also, near to two-thirds (64.3%) of the nurses stated that doing doctor order act as a barrier to apply this theory. In addition, there were a statistical significance difference between educational qualifications of nurses and a Supportive\\ protective\\corrective environment subscale with mean score for master degree 57.0000, also between years of experiences and human needs assistance. Conclusion: Program instructions for all nurses to apply Watson Caring theory for children in pediatric critical care unit were successful and effective and this study provided evidence for application of this theory for different departments in all settings. Recommendations: It was recommended that In-service training programs for nurses about caring behavior and its different areas, with special emphasis on communication are needed to improve their own behaviors in all aspects of the caring behaviors for all health care settings. Motivating hospital authorities to recruit more nurses, then, the nurses would be able to have more care that is direct. Consequently, the amount and the quality of nurse-child communication and opportunities for patient education would increase, this in turn improve child's outcome.", "title": "" }, { "docid": "facf85be0ae23eacb7e7b65dd5c45b33", "text": "We review evidence for partially segregated networks of brain areas that carry out different attentional functions. 
One system, which includes parts of the intraparietal cortex and superior frontal cortex, is involved in preparing and applying goal-directed (top-down) selection for stimuli and responses. This system is also modulated by the detection of stimuli. The other system, which includes the temporoparietal cortex and inferior frontal cortex, and is largely lateralized to the right hemisphere, is not involved in top-down selection. Instead, this system is specialized for the detection of behaviourally relevant stimuli, particularly when they are salient or unexpected. This ventral frontoparietal network works as a 'circuit breaker' for the dorsal system, directing attention to salient events. Both attentional systems interact during normal vision, and both are disrupted in unilateral spatial neglect.", "title": "" }, { "docid": "d4c19a8e4e51ede55ce62a3bcc3df5ad", "text": "The daily average PM2.5 concentration forecast is a leading component nowadays in air quality research, which is necessary to perform in order to assess the impact of air on the health and welfare of every living being. The present work is aimed at analyzing and benchmarking a neural-network approach to the prediction of average PM2.5 concentrations. The model thus obtained will be indispensable, as a control tool, for the purpose of preventing dangerous situations that may arise. To this end we have obtained data and measurements based on samples taken during the early hours of the day. Results from three different topologies of neural networks were compared so as to identify their potential uses, or rather, their strengths and weaknesses: Multilayer Perceptron (MLP), Radial Basis Function (RBF) and Square Multilayer Perceptron (SMLP). Moreover, two classical models were built (a persistence model and a linear regression), so as to compare their results with the ones provided by the neural network models. The results clearly demonstrated that the neural approach not only outperformed the classical models but also showed fairly similar values among different topologies. Moreover, a differential behavior in terms of stability and length of the training phase emerged during testing as well. The RBF shows up to be the network with the shortest training times, combined with a greater stability during the prediction stage, thus characterizing this topology as an ideal solution for its use in environmental applications instead of the widely used and less effective MLP. 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ebc3bc7b74b7e24ec629ca1a659a5093", "text": "OBJECTIVE. In this study, we examined the effectiveness of using weighted vests for improving attention, impulse control, and on-task behavior in children with attention deficit hyperactivity disorder (ADHD). METHOD. In a randomized, two-period crossover design, 110 children with ADHD were measured using the Conners' Continuous Performance Test-II (CPT-II) task. RESULTS. In the weighted vest condition, the participants did show significant improvement in all three attentional variables of the CPT-II task, including inattention; speed of processing and responding; consistency of executive management; and three of four on-task behaviors, including off task, out of seat, and fidgets. No significant improvements in impulse control and automatic vocalizations were found. CONCLUSION. 
Although wearing a weighted vest is not a cure-all strategy, our findings support the use of the weighted vest to remedy attentional and on-task behavioral problems of children with ADHD.", "title": "" } ]
scidocsrr
a22e3ee7da53c0b8fe9336622d42fa38
A character-based convolutional neural network for language-agnostic Twitter sentiment analysis
[ { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" }, { "docid": "fabc65effd31f3bb394406abfa215b3e", "text": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).", "title": "" } ]
[ { "docid": "ff272c41a811b6e0031d6e90a895f919", "text": "Three-dimensional reconstruction of dynamic scenes is an important prerequisite for applications like mobile robotics or autonomous driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a prove of concept and demonstrate the usefulness of our method.", "title": "" }, { "docid": "6c9acb831bc8dc82198aef10761506be", "text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.", "title": "" }, { "docid": "14e6cf0e85c184f85ae0ae6202246d91", "text": "The use of probiotics for human and animal health is continuously increasing. The probiotics used in humans commonly come from dairy foods, whereas the sources of probiotics used in animals are often the animals' own digestive tracts. Increasingly, probiotics from sources other than milk products are being selected for use in people who are lactose intolerant. These sources are non-dairy fermented foods and beverages, non-dairy and non-fermented foods such as fresh fruits and vegetables, feces of breast-fed infants and human breast milk. The probiotics that are used in both humans and animals are selected in stages; after the initial isolation of the appropriate culture medium, the probiotics must meet important qualifications, including being non-pathogenic acid and bile-tolerant strains that possess the ability to act against pathogens in the gastrointestinal tract and the safety-enhancing property of not being able to transfer any antibiotic resistance genes to other bacteria. The final stages of selection involve the accurate identification of the probiotic species.", "title": "" }, { "docid": "b56f65fd08c8b6a9fe9ff05441ff8734", "text": "While symbolic parsers can be viewed as deduction systems, t his view is less natural for probabilistic parsers. 
We present a view of parsing as directed hypergraph analysis which naturally covers both symbolic and probabilistic parsing. We illustrate the approach by showing how a dynamic extension of Dijkstra’s algorithm can be used to construct a probabilistic chart parser with an O(n³) time bound for arbitrary PCFGs, while preserving as much of the flexibility of symbolic chart parsers as allowed by the inherent ordering of probabilistic dependencies.", "title": "" }, { "docid": "cb67ffc6559d42628022994961179208", "text": "Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Build upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with parameters of FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. Particularly, we train 3 segmentation models using 2D image patches and slices obtained in axial, coronal and sagittal views respectively, and combine them to segment brain tumors using a voting based fusion strategy. Our method could segment brain images slice-by-slice, much faster than those based on image patches. We have evaluated our method based on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results have demonstrated that our method could build a segmentation model with Flair, T1c, and T2 scans and achieve competitive performance as those built with Flair, T1, T1c, and T2 scans.", "title": "" }, { "docid": "efb81d85abcf62f4f3747a58154c5144", "text": "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion. Our code is available at https://github.com/sergeytulyakov/mocogan.", "title": "" }, { "docid": "4424a73177671ce5f1abcd304e546434", "text": "Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. 
Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoder-decoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-the-art results on large pose face recognition.", "title": "" }, { "docid": "51fb43ac979ce0866eb541adc145ba70", "text": "In many cooperatively breeding species, group members form a dominance hierarchy or queue to inherit the position of breeder. Models aimed at understanding individual variation in helping behavior, however, rarely take into account the effect of dominance rank on expected future reproductive success and thus the potential direct fitness costs of helping. Here we develop a kin-selection model of helping behavior in multimember groups in which only the highest ranking individual breeds. Each group member can invest in the dominant’s offspring at a cost to its own survivorship. The model predicts that lower ranked subordinates, who have a smaller probability of inheriting the group, should work harder than higher ranked subordinates. This prediction holds regardless of whether the intrinsic mortality rate of subordinates increases or decreases with rank. The prediction does not necessarily hold, however, where the costs of helping are higher for lower ranked individuals: a situation that may be common in vertebrates. The model makes two further testable predictions: that the helping effort of an individual of given rank should be lower in larger groups, and the reproductive success of dominants should be greater where group members are more closely related. Empirical evidence for these predictions is discussed. We argue that the effects of rank on stable helping effort may explain why attempts to correlate individual helping effort with relatedness in cooperatively breeding species have met with limited success.", "title": "" }, { "docid": "3f4c1474f79a4d3b179d2a8391719d5f", "text": "An unresolved challenge for all kind of temporal data is the reliable anomaly detection, especially when adaptability is required in the case of non-stationary time series or when the nature of future anomalies is unknown or only vaguely defined. Most of the current anomaly detection algorithms follow the general idea to classify an anomaly as a significant deviation from the prediction. In this paper we present a comparative study where several online anomaly detection algorithms are compared on the large Yahoo Webscope S5 anomaly benchmark. 
We show that a relatively Simple Online Regression Anomaly Detector (SORAD) is quite successful compared to other anomaly detectors. We discuss the importance of several adaptive and online elements of the algorithm and their influence on the overall anomaly detection accuracy.", "title": "" }, { "docid": "4c2b22c651aa4cc40807cc92a044a008", "text": "Robotic grasping is very sensitive to how accurate is the pose estimation of the object to grasp. Even a small error in the estimated pose may cause the planned grasp to fail. Several methods for robust grasp planning exploit the object geometry or tactile sensor feedback. However, object pose range estimation introduces specific uncertainties that can also be exploited to choose more robust grasps. We present a grasp planning method that explicitly considers the uncertainties on the visually-estimated object pose. We assume a known shape (e.g. primitive shape or triangle mesh), observed as a –possibly sparse– point cloud. The measured points are usually not uniformly distributed over the surface as the object is seen from a particular viewpoint; additionally this non-uniformity can be the result of heterogeneous textures over the object surface, when using stereo-vision algorithms based on robust feature-point matching. Consequently the pose estimation may be more accurate in some directions and contain unavoidable ambiguities. The proposed grasp planner is based on a particle filter to estimate the object probability distribution as a discrete set. We show that, for grasping, some ambiguities are less unfavorable so the distribution can be used to select robust grasps. Some experiments are presented with the humanoid robot iCub and its stereo cameras.", "title": "" }, { "docid": "74686e9acab0a4d41c87cadd7da01889", "text": "Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the community of biomedical engineering due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structure similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which is the count of a codeword appeared in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.", "title": "" }, { "docid": "7e557091d8cfe6209b1eda3b664ab551", "text": "With the increasing penetration of mobile phones, problematic use of mobile phone (PUMP) deserves attention. In this study, using a path model we examined the relationship between depression and PUMP, with motivations as mediators. 
Findings suggest that depressed people may rely on mobile phone to alleviate their negative feelings and spend more time on communication activities via mobile phone, which in turn can deteriorate into PUMP. However, face-to-face communication with others played a moderating role, weakening the link between use of mobile phone for communication activities and dete-", "title": "" }, { "docid": "9f005054e640c2db97995c7540fe2034", "text": "Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been a very few prior studies that take into account the attacker’s tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is a key to derive solutions that perform well in practice. In this investigation, we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker’s objective balances the benefit from attacks and the cost of being detected while the defender’s objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies.", "title": "" }, { "docid": "9fd3321922a73539210cb5b73d8d5d9c", "text": "This paper presents a new model for controlling information flow in systems with mutual distrust and decentralized authority. The model allows users to share information with distrusted code (e.g., downloaded applets), yet still control how that code disseminates the shared information to others. The model improves on existing multilevel security models by allowing users to declassify information in a decentralized way, and by improving support for fine-grained data sharing. The paper also shows how static program analysis can be used to certify proper information flows in this model and to avoid most run-time information flow checks.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "4287b25e6e80d16d3d19f69bece2dfcc", "text": "Short ranges and limited field-of-views in semi-passive radio frequency identification (RFID) tags are the most prominent obstacles that limit the number of RFID applications relying on backscatter modulation to exchange data between a reader and a tag. 
We propose a retrodirective array structure that, if equipped on a tag, can increase the field-of-view and the coverage area of RFID systems by making the tag insensitive to its orientation with respect to one or more RFID readers. In this article, we derive and experimentally validate the conditions under which a rat-race coupler is retrodirective. The performance of the fabricated passive retrodirective structure is evaluated through the retrodirective ideality factor (RIF) amounting to a value of 1.003, which is close to the ideal RIF of one. The article ends with a discussion on how the proposed design can improve current RFID systems from a communication perspective.", "title": "" }, { "docid": "67544e71b45acb84923a3db84534a377", "text": "The precision of point-of-gaze (POG) estimation during a fixation is an important factor in determining the usability of a noncontact eye-gaze tracking system for real-time applications. The objective of this paper is to define and measure POG fixation precision, propose methods for increasing the fixation precision, and examine the improvements when the methods are applied to two POG estimation approaches. To achieve these objectives, techniques for high-speed image processing that allow POG sampling rates of over 400 Hz are presented. With these high-speed POG sampling rates, the fixation precision can be improved by filtering while maintaining an acceptable real-time latency. The high-speed sampling and digital filtering techniques developed were applied to two POG estimation techniques, i.e., the highspeed pupil-corneal reflection (HS P-CR) vector method and a 3-D model-based method allowing free head motion. Evaluation on the subjects has shown that when operating at 407 frames per second (fps) with filtering, the fixation precision for the HS P-CR POG estimation method was improved by a factor of 5.8 to 0.035deg (1.6 screen pixels) compared to the unfiltered operation at 30 fps. For the 3-D POG estimation method, the fixation precision was improved by a factor of 11 to 0.050deg (2.3 screen pixels) compared to the unfiltered operation at 30 fps.", "title": "" }, { "docid": "5d48b6fcc1d8f1050b5b5dc60354fedb", "text": "The latency in the current neural based dialogue state tracking models prohibits them from being used efficiently for deployment in production systems, albeit their highly accurate performance. This paper proposes a new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model by Zhong et al. (2018) which uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. By using only one recurrent networks with global conditioning, compared to (1 + # slots) recurrent networks with global and local conditioning used in the GLAD model, our proposed model reduces the latency in training and inference times by 35% on average, while preserving performance of belief state tracking, by 97.38% on turn request and 88.51% on joint goal and accuracy. Evaluation on Multi-domain dataset (Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform and joint goal accuracy.", "title": "" }, { "docid": "fe38b44457f89bcb63aabe65babccd03", "text": "Single sample face recognition have become an important problem because of the limitations on the availability of gallery images. 
In many real-world applications such as passport or driver license identification, there is only a single facial image per subject available. The variations between the single gallery face image and the probe face images, captured in unconstrained environments, make the single sample face recognition even more difficult. In this paper, we present a fully automatic face recognition system robust to most common face variations in unconstrained environments. Our proposed system is capable of recognizing faces from non-frontal views and under different illumination conditions using only a single gallery sample for each subject. It normalizes the face images for both in-plane and out-of-plane pose variations using an enhanced technique based on active appearance models (AAMs). We improve the performance of AAM fitting, not only by training it with in-the-wild images and using a powerful optimization technique, but also by initializing the AAM with estimates of the locations of the facial landmarks obtained by a method based on flexible mixture of parts. The proposed initialization technique results in significant improvement of AAM fitting to non-frontal poses and makes the normalization process robust, fast and reliable. Owing to the proper alignment of the face images, made possible by this approach, we can use local feature descriptors, such as Histograms of Oriented Gradients (HOG), for matching. The use of HOG features makes the system robust against illumination variations. In order to improve the discriminating information content of the feature vectors, we also extract Gabor features from the normalized face images and fuse them with HOG features using Canonical Correlation Analysis (CCA). Experimental results performed on various databases outperform the state-of-the-art methods and show the effectiveness of our proposed method in normalization and recognition of face images obtained in unconstrained environments.", "title": "" } ]
scidocsrr
d20ee5fba86a814522db8b64606f8b5e
Migrant mothers, left-behind fathers: the negotiation of gender subjectivities in Indonesia and the Philippines
[ { "docid": "2da67ed8951caf3388ca952465d61b37", "text": "As a significant supplier of labour migrants, Southeast Asia presents itself as an important site for the study of children in transnational families who are growing up separated from at least one migrant parent and sometimes cared for by 'other mothers'. Through the often-neglected voices of left-behind children, we investigate the impact of parental migration and the resulting reconfiguration of care arrangements on the subjective well-being of migrants' children in two Southeast Asian countries, Indonesia and the Philippines. We theorise the child's position in the transnational family nexus through the framework of the 'care triangle', representing interactions between three subject groups- 'left-behind' children, non-migrant parents/other carers; and migrant parent(s). Using both quantitative (from 1010 households) and qualitative (from 32 children) data from a study of child health and migrant parents in Southeast Asia, we examine relationships within the caring spaces both of home and of transnational spaces. The interrogation of different dimensions of care reveals the importance of contact with parents (both migrant and nonmigrant) to subjective child well-being, and the diversity of experiences and intimacies among children in the two study countries.", "title": "" } ]
[ { "docid": "513224bb1034217b058179f3805dd37f", "text": "Existing work on subgraph isomorphism search mainly focuses on a-query-at-a-time approaches: optimizing and answering each query separately. When multiple queries arrive at the same time, sequential processing is not always the most efficient. In this paper, we study multi-query optimization for subgraph isomorphism search. We first propose a novel method for efficiently detecting useful common subgraphs and a data structure to organize them. Then we propose a heuristic algorithm based on the data structure to compute a query execution order so that cached intermediate results can be effectively utilized. To balance memory usage and the time for cached results retrieval, we present a novel structure for caching the intermediate results. We provide strategies to revise existing single-query subgraph isomorphism algorithms to seamlessly utilize the cached results, which leads to significant performance improvement. Extensive experiments verified the effectiveness of our solution.", "title": "" }, { "docid": "9cf1791f7d73f7e2471b27dd7667e023", "text": "We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.", "title": "" }, { "docid": "571c7cb6e0670539a3effbdd65858d2a", "text": "When writing software, developers often employ abbreviations in identifier names. In fact, some abbreviations may never occur with the expanded word, or occur more often in the code. However, most existing program comprehension and search tools do little to address the problem of abbreviations, and therefore may miss meaningful pieces of code or relationships between software artifacts. In this paper, we present an automated approach to mining abbreviation expansions from source code to enhance software maintenance tools that utilize natural language information. Our scoped approach uses contextual information at the method, program, and general software level to automatically select the most appropriate expansion for a given abbreviation. 
We evaluated our approach on a set of 250 potential abbreviations and found that our scoped approach provides a 57% improvement in accuracy over the current state of the art.", "title": "" }, { "docid": "8dc9170093a0317fff3971b18f758ff3", "text": "In many Web applications, such as blog classification and newsgroup classification, labeled data are in short supply. It often happens that obtaining labeled data in a new domain is expensive and time consuming, while there may be plenty of labeled data in a related but different domain. Traditional text classification approaches are not able to cope well with learning across different domains. In this paper, we propose a novel cross-domain text classification algorithm which extends the traditional probabilistic latent semantic analysis (PLSA) algorithm to integrate labeled and unlabeled data, which come from different but related domains, into a unified probabilistic model. We call this new model Topic-bridged PLSA, or TPLSA. By exploiting the common topics between two domains, we transfer knowledge across different domains through a topic-bridge to help the text classification in the target domain. A unique advantage of our method is its ability to maximally mine knowledge that can be transferred between domains, resulting in superior performance when compared to other state-of-the-art text classification approaches. Experimental evaluation on different kinds of datasets shows that our proposed algorithm can improve the performance of cross-domain text classification significantly.", "title": "" }, { "docid": "dd7a87be674da00360de58df77bf980a", "text": "This paper presents an overview of single-pass interferometric Synthetic Aperture Radar (SAR) missions employing two or more satellites flying in a close formation. The simultaneous reception of the scattered radar echoes from different viewing directions by multiple spatially distributed antennas enables the acquisition of unique Earth observation products for environmental and climate monitoring. After a short introduction to the basic principles and applications of SAR interferometry, designs for the twin satellite missions TanDEM-X and Tandem-L are presented. The primary objective of TanDEM-X (TerraSAR-X add-on for Digital Elevation Measurement) is the generation of a global Digital Elevation Model (DEM) with unprecedented accuracy as the basis for a wide range of scientific research as well as for commercial DEM production. This goal is achieved by enhancing the TerraSAR-X mission with a second TerraSAR-X like satellite that will be launched in spring 2010. Both satellites act then as a large single-pass SAR interferometer with the opportunity for flexible baseline selection. Building upon the experience gathered with the TanDEM-X mission design, the fully polarimetric L-band twin satellite formation Tandem-L is proposed. Important objectives of this highly capable interferometric SAR mission are the global acquisition of three-dimensional forest structure and biomass inventories, large-scale measurements of millimetric displacements due to tectonic shifts, and systematic observations of glacier movements. The sophisticated mission concept and the high data-acquisition capacity of Tandem-L will moreover provide a unique data source to systematically observe, analyze, and quantify the dynamics of a wide range of additional processes in the bio-, litho-, hydro-, and cryosphere. 
By this, Tandem-L will be an essential step to advance our understanding of the Earth system and its intricate dynamics. Enabling technologies and techniques are described in detail. An outlook on future interferometric and tomographic concepts and developments, including multistatic SAR systems with multiple receivers, is provided.", "title": "" }, { "docid": "539a25209bf65c8b26cebccf3e083cd0", "text": "We study the problem of web search result diversification in the case where intent based relevance scores are available. A diversified search result will hopefully satisfy the information need of user-L.s who may have different intents. In this context, we first analyze the properties of an intent-based metric, ERR-IA, to measure relevance and diversity altogether. We argue that this is a better metric than some previously proposed intent aware metrics and show that it has a better correlation with abandonment rate. We then propose an algorithm to rerank web search results based on optimizing an objective function corresponding to this metric and evaluate it on shopping related queries.", "title": "" }, { "docid": "79fdfee8b42fe72a64df76e64e9358bc", "text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.", "title": "" }, { "docid": "b75f793f4feac0b658437026d98a1e8b", "text": "From a certain (admittedly narrow) perspective, one of the annoying features of natural language is the ubiquitous syntactic ambiguity. For a computational model intended to assign syntactic descriptions to natural language text, this seem like a design defect. In general, when context and lexical content are taken into account, such syntactic ambiguity can be resolved: sentences used in context show, for the most part, little ambiguity. But the grammar provides many alternative analyses, and gives little guidance about resolving the ambiguity. Prepositional phrase attachment is the canonical case of structural ambiguity, as in the time worn example,", "title": "" }, { "docid": "9b42c1b58bb7b74bdcf09c7556800ad5", "text": "In this paper, we propose a method to find the safest path between two locations, based on the geographical model of crime intensities. We consider the police records and news articles for finding crime density of different areas of the city. It is essential to consider news articles as there is a significant delay in updating police crime records. We address this problem by updating the crime intensities based on current news feeds. Based on the updated crime intensities, we identify the safest path. 
It is this real time updation of crime intensities which makes our model way better than the models that are presently in use. Our model would also inform the user of crime sprees in a particular area thereby ensuring that user avoids these crime hot spots.", "title": "" }, { "docid": "59ba83e88085445e3bcf009037af6617", "text": "— We examine the relationship between resource abundance and several indicators of human welfare. Consistent with the existing literature on the relationship between resource abundance and economic growth we find that, given an initial income level, resource-intensive countries tend to suffer lower levels of human development. While we find only weak support for a direct link between resources and welfare, there is an indirect link that operates through institutional quality. There are also significant differences in the effects that resources have on different measures of institutional quality. These results imply that the ‘‘resource curse’’ is a more encompassing phenomenon than previously considered, and that key differences exist between the effects of different resource types on various aspects of governance and human welfare. 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "68bc2abd13bcd19566eed66f0031c934", "text": "As DRAM density keeps increasing, more rows need to be protected in a single refresh with the constant refresh number. Since no memory access is allowed during a refresh, the refresh penalty is no longer trivial and can result in significant performance degradation. To mitigate the refresh penalty, a Concurrent-REfresh-Aware Memory system (CREAM) is proposed in this work so that memory access and refresh can be served in parallel. The proposed CREAM architecture distinguishes itself with the following key contributions: (1) Under a given DRAM power budget, sub-rank-level refresh (SRLR) is developed to reduce refresh power and the saved power is used to enable concurrent memory access; (2) sub-array-level refresh (SALR) is also devised to effectively lower the probability of the conflict between memory access and refresh; (3) In addition, novel sub-array level refresh scheduling schemes, such as sub-array round-robin and dynamic scheduling, are designed to further improve the performance. A quasi-ROR interface protocol is proposed so that CREAM is fully compatible with JEDEC-DDR standard with negligible hardware overhead and no extra pin-out. The experimental results show that CREAM can improve the performance by 12.9% and 7.1% over the conventional DRAM and the Elastic-Refresh DRAM memory, respectively.", "title": "" }, { "docid": "73edaa7319dcf225c081f29146bbb385", "text": "Sign language is a specific area of human gesture communication and a full-edged complex language that is used by various deaf communities. In Bangladesh, there are many deaf and dumb people. It becomes very difficult to communicate with them for the people who are unable to understand the Sign Language. In this case, an interpreter can help a lot. So it is desirable to make computer to understand the Bangladeshi sign language that can serve as an interpreter. In this paper, a Computer Vision-based Bangladeshi Sign Language Recognition System (BdSL) has been proposed. In this system, separate PCA (Principal Component Analysis) is used for Bengali Vowels and Bengali Numbers recognition. 
The system is tested for 6 Bengali Vowels and 10 Bengali Numbers.", "title": "" }, { "docid": "3292af68a03deb0cffcf3b701e1c0f63", "text": "Limitations imposed by the traditional practice in financial institutions of running risk analysis on the desktop mean many rely on models which assume a “normal” Gaussian distribution of events which can seriously underestimate the real risk. In this paper, we propose an alternative service which uses the elastic capacities of Cloud Computing to escape the limitations of the desktop and produce accurate results more rapidly. The Business Intelligence as a Service (BIaaS) in the Cloud has a dual-service approach to compute risk and pricing for financial analysis. In the first type of BIaaS service uses three APIs to simulate the Heston Model to compute the risks and asset prices, and computes the volatility (unsystematic risks) and the implied volatility (systematic risks) which can be tracked down at any time. The second type of BIaaS service uses two APIs to provide business analytics for stock market analysis, and compute results in the visualised format, so that stake holders without prior knowledge can understand. A full case study with two sets of experiments is presented to support the validity and originality of BIaaS. Additional three examples are used to support accuracy of the predicted stock index movement as a result of the use of Heston Model and its associated APIs. We describe the architecture of deployment, together with examples and results which show how our approach improves risk and investment analysis and maintaining accuracy and efficiency whilst improving performance over desktops.", "title": "" }, { "docid": "f6264315a5bbf32b9fa21488b4c80f03", "text": "into empirical, corpus-based learning approaches to natural language processing (NLP). Most empirical NLP work to date has focused on relatively low-level language processing such as part-ofspeech tagging, text segmentation, and syntactic parsing. The success of these approaches has stimulated research in using empirical learning techniques in other facets of NLP, including semantic analysis—uncovering the meaning of an utterance. This article is an introduction to some of the emerging research in the application of corpusbased learning techniques to problems in semantic interpretation. In particular, we focus on two important problems in semantic interpretation, namely, word-sense disambiguation and semantic parsing.", "title": "" }, { "docid": "28f61d005f1b53ad532992e30b9b9b71", "text": "We propose a method for nonlinear residual echo suppression that consists of extracting spectral features from the far-end signal, and using an artificial neural network to model the residual echo magnitude spectrum from these features. We compare the modeling accuracy achieved by realizations with different features and network topologies, evaluating the mean squared error of the estimated residual echo magnitude spectrum. We also present a low complexity real-time implementation combining an offline-trained network with online adaptation, and investigate its performance in terms of echo suppression and speech distortion for real mobile phone recordings.", "title": "" }, { "docid": "45d6563b2b4c64bb11ad65c3cff0d843", "text": "The performance of single cue object tracking algorithms may degrade due to complex nature of visual world and environment challenges. 
In recent past, multicue object tracking methods using single or multiple sensors such as vision, thermal, infrared, laser, radar, audio, and RFID are explored to a great extent. It was acknowledged that combining multiple orthogonal cues enhance tracking performance over single cue methods. The aim of this paper is to categorize multicue tracking methods into single-modal and multi-modal and to list out new trends in this field via investigation of representative work. The categorized works are also tabulated in order to give detailed overview of latest advancement. The person tracking datasets are analyzed and their statistical parameters are tabulated. The tracking performance measures are also categorized depending upon availability of ground truth data. Our review gauges the gap between reported work and future demands for object tracking.", "title": "" }, { "docid": "40ab6e98dbf02235b882ea56a8675bba", "text": "BACKGROUND\nThe lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed dyslipidaemic.\n\n\nMETHODS\nOf 19342 hypertensive patients (aged 40-79 years with at least three other cardiovascular risk factors) randomised to one of two antihypertensive regimens in the Anglo-Scandinavian Cardiac Outcomes Trial, 10305 with non-fasting total cholesterol concentrations 6.5 mmol/L or less were randomly assigned additional atorvastatin 10 mg or placebo. These patients formed the lipid-lowering arm of the study. We planned follow-up for an average of 5 years, the primary endpoint being non-fatal myocardial infarction and fatal CHD. Data were analysed by intention to treat.\n\n\nFINDINGS\nTreatment was stopped after a median follow-up of 3.3 years. By that time, 100 primary events had occurred in the atorvastatin group compared with 154 events in the placebo group (hazard ratio 0.64 [95% CI 0.50-0.83], p=0.0005). This benefit emerged in the first year of follow-up. There was no significant heterogeneity among prespecified subgroups. Fatal and non-fatal stroke (89 atorvastatin vs 121 placebo, 0.73 [0.56-0.96], p=0.024), total cardiovascular events (389 vs 486, 0.79 [0.69-0.90], p=0.0005), and total coronary events (178 vs 247, 0.71 [0.59-0.86], p=0.0005) were also significantly lowered. There were 185 deaths in the atorvastatin group and 212 in the placebo group (0.87 [0.71-1.06], p=0.16). Atorvastatin lowered total serum cholesterol by about 1.3 mmol/L compared with placebo at 12 months, and by 1.1 mmol/L after 3 years of follow-up.\n\n\nINTERPRETATION\nThe reductions in major cardiovascular events with atorvastatin are large, given the short follow-up time. These findings may have implications for future lipid-lowering guidelines.", "title": "" }, { "docid": "ff345d732a273577ca0f965b92e1bbbd", "text": "Integrated circuit (IC) testing for quality assurance is approaching 50% of the manufacturing costs for some complex mixed-signal IC’s. For many years the market growth and technology advancements in digital IC’s were driving the developments in testing. The increasing trend to integrate information acquisition and digital processing on the same chip has spawned increasing attention to the test needs of mixed-signal IC’s. The recent advances in wireless communications indicate a trend toward the integration of the RF and baseband mixed signal technologies. 
In this paper we examine the developments in IC testing form the historic, current status and future view points. In separate sections we address the testing developments for digital, mixed signal and RF IC’s. With these reviews as context, we relate new test paradigms that have the potential to fundamentally alter the methods used to test mixed-signal and RF parts.", "title": "" }, { "docid": "f617b8b5c2c5fc7829cbcd0b2e64ed2d", "text": "This paper proposes a novel lifelong learning (LL) approach to sentiment classification. LL mimics the human continuous learning process, i.e., retaining the knowledge learned from past tasks and use it to help future learning. In this paper, we first discuss LL in general and then LL for sentiment classification in particular. The proposed LL approach adopts a Bayesian optimization framework based on stochastic gradient descent. Our experimental results show that the proposed method outperforms baseline methods significantly, which demonstrates that lifelong learning is a promising research direction.", "title": "" }, { "docid": "856012f3cf81a1527916da8a5136ce79", "text": "Folk psychology postulates a spatial unity of self and body, a \"real me\" that resides in one's body and is the subject of experience. The spatial unity of self and body has been challenged by various philosophical considerations but also by several phenomena, perhaps most notoriously the \"out-of-body experience\" (OBE) during which one's visuo-spatial perspective and one's self are experienced to have departed from their habitual position within one's body. Here the authors marshal evidence from neurology, cognitive neuroscience, and neuroimaging that suggests that OBEs are related to a failure to integrate multisensory information from one's own body at the temporo-parietal junction (TPJ). It is argued that this multisensory disintegration at the TPJ leads to the disruption of several phenomenological and cognitive aspects of self-processing, causing illusory reduplication, illusory self-location, illusory perspective, and illusory agency that are experienced as an OBE.", "title": "" } ]
scidocsrr
02bcbf5daa4d06a34c43d2d5c6cfa67a
Enhancing Person Re-identification in a Self-Trained Subspace
[ { "docid": "22650cb6c1470a076fc1dda7779606ec", "text": "This paper addresses the problem of handling spatial misalignments due to camera-view changes or human-pose variations in person re-identification. We first introduce a boosting-based approach to learn a correspondence structure which indicates the patch-wise matching probabilities between images from a target camera pair. The learned correspondence structure can not only capture the spatial correspondence pattern between cameras but also handle the viewpoint or human-pose variation in individual images. We further introduce a global-based matching process. It integrates a global matching constraint over the learned correspondence structure to exclude cross-view misalignments during the image patch matching process, hence achieving a more reliable matching score between images. Experimental results on various datasets demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "0c62db080c7c49a2642eca00c04f92ba", "text": "Dimensionality reduction is one of the important preprocessing steps in high-dimensional data analysis. In this paper, we consider the supervised dimensionality reduction problem where samples are accompanied with class labels. Traditional Fisher discriminant analysis is a popular and powerful method for this purpose. However, it tends to give undesired results if samples in some class form several separate clusters, i.e., multimodal. In this paper, we propose a new dimensionality reduction method called local Fisher discriminant analysis (LFDA), which is a localized variant of Fisher discriminant analysis. LFDA takes local structure of the data into account so the multimodal data can be embedded appropriately. We also show that LFDA can be extended to non-linear dimensionality reduction scenarios by the kernel trick.", "title": "" } ]
[ { "docid": "07cb7c48a534cc002c5088225a540b1e", "text": "OBJECTIVES\nThe Health Information Technology for Economic and Clinical Health (HITECH) Act created incentives for adopting electronic health records (EHRs) for some healthcare organisations, but long-term care (LTC) facilities are excluded from those incentives. There are realisable benefits of EHR adoption in LTC facilities; however, there is limited research about this topic. The purpose of this systematic literature review is to identify EHR adoption factors for LTC facilities that are ineligible for the HITECH Act incentives.\n\n\nSETTING\nWe conducted systematic searches of Cumulative Index of Nursing and Allied Health Literature (CINAHL) Complete via Ebson B. Stephens Company (EBSCO Host), Google Scholar and the university library search engine to collect data about EHR adoption factors in LTC facilities since 2009.\n\n\nPARTICIPANTS\nSearch results were filtered by date range, full text, English language and academic journals (n=22).\n\n\nINTERVENTIONS\nMultiple members of the research team read each article to confirm applicability and study conclusions.\n\n\nPRIMARY AND SECONDARY OUTCOME MEASURES\nResearchers identified common themes across the literature: specifically facilitators and barriers to adoption of the EHR in LTC.\n\n\nRESULTS\nResults identify facilitators and barriers associated with EHR adoption in LTC facilities. The most common facilitators include access to information and error reduction. The most prevalent barriers include initial costs, user perceptions and implementation problems.\n\n\nCONCLUSIONS\nSimilarities span the system selection phases and implementation process; of those, cost was the most common mentioned. These commonalities should help leaders in LTC facilities align strategic decisions to EHR adoption. This review may be useful for decision-makers attempting successful EHR adoption, policymakers trying to increase adoption rates without expanding incentives and vendors that produce EHRs.", "title": "" }, { "docid": "3553d1dc8272bf0366b2688e5107aa3f", "text": "The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from the users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been studied well on the English language and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. Since there is a limited number of publically available Arabic dataset and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to corpus-based approach.", "title": "" }, { "docid": "b93b2c6ccb1b155996d2af3947497497", "text": "This paper surveys the techniques used for designing the most efficient algorithms for finding a maximum cardinality or weighted matching in (general or bipartite) graphs. 
It also lists some open problems concerning possible improvements in existing algorithms and the existence of fast parallel algorithms for these problems.", "title": "" }, { "docid": "28846c26b51e53e4d42bb49c6d410379", "text": "Social media language contains huge amount and wide variety of nonstandard tokens, created both intentionally and unintentionally by the users. It is of crucial importance to normalize the noisy nonstandard tokens before applying other NLP techniques. A major challenge facing this task is the system coverage, i.e., for any user-created nonstandard term, the system should be able to restore the correct word within its top n output candidates. In this paper, we propose a cognitivelydriven normalization system that integrates different human perspectives in normalizing the nonstandard tokens, including the enhanced letter transformation, visual priming, and string/phonetic similarity. The system was evaluated on both wordand messagelevel using four SMS and Twitter data sets. Results show that our system achieves over 90% word-coverage across all data sets (a 10% absolute increase compared to state-ofthe-art); the broad word-coverage can also successfully translate into message-level performance gain, yielding 6% absolute increase compared to the best prior approach.", "title": "" }, { "docid": "7d0b37434699aa5c3b36de33549a2b68", "text": "In Ethiopia, malaria control has been complicated due to resistance of the parasite to the current drugs. Thus, new drugs are required against drug-resistant Plasmodium strains. Historically, many of the present antimalarial drugs were discovered from plants. This study was, therefore, conducted to document antimalarial plants utilized by Sidama people of Boricha District, Sidama Zone, South Region of Ethiopia. An ethnobotanical survey was carried out from September 2011 to February 2012. Data were collected through semistructured interview and field and market observations. Relative frequency of citation (RFC) was calculated and preference ranking exercises were conducted to estimate the importance of the reported medicinal plants in Boricha District. A total of 42 antimalarial plants belonging to 27 families were recorded in the study area. Leaf was the dominant plant part (59.0%) used in the preparation of remedies and oral (97.4%) was the major route of administration. Ajuga integrifolia scored the highest RFC value (0.80). The results of this study revealed the existence of rich knowledge on the use of medicinal plants in the study area to treat malaria. Thus, an attempt should be made to conserve and evaluate the claimed antimalarial medicinal plants with priority given to those that scored the highest RFC values.", "title": "" }, { "docid": "2d02bf71ee22e062d12ce4ec0b53d4c9", "text": "BACKGROUND\nTherapies that maintain remission for patients with Crohn's disease are essential. Stable remission rates have been demonstrated for up to 2 years in adalimumab-treated patients with moderately to severely active Crohn's disease enrolled in the CHARM and ADHERE clinical trials.\n\n\nAIM\nTo present the long-term efficacy and safety of adalimumab therapy through 4 years of treatment.\n\n\nMETHODS\nRemission (CDAI <150), response (CR-100) and corticosteroid-free remission over 4 years, and maintenance of these endpoints beyond 1 year were assessed in CHARM early responders randomised to adalimumab. Corticosteroid-free remission was also assessed in all adalimumab-randomised patients using corticosteroids at baseline. 
Fistula healing was assessed in adalimumab-randomised patients with fistula at baseline. As observed, last observation carried forward and a hybrid nonresponder imputation analysis for year 4 (hNRI) were used to report efficacy. Adverse events were reported for any patient receiving at least one dose of adalimumab.\n\n\nRESULTS\nOf 329 early responders randomised to adalimumab induction therapy, at least 30% achieved remission (99/329) or CR-100 (116/329) at year 4 of treatment (hNRI). The majority of patients (54%) with remission at year 1 maintained this endpoint at year 4 (hNRI). At year 4, 16% of patients taking corticosteroids at baseline were in corticosteroid-free remission and 24% of patients with fistulae at baseline had healed fistulae. The incidence rates of adverse events remained stable over time.\n\n\nCONCLUSIONS\nProlonged adalimumab therapy maintained clinical remission and response in patients with moderately to severely active Crohn's disease for up to 4 years. No increased risk of adverse events or new safety signals were identified with long-term maintenance therapy. (clinicaltrials.gov number: NCT00077779).", "title": "" }, { "docid": "67f3426cbcb52a82a9970198d107acfc", "text": "In current conceptualizations of visual attention, selection takes place through integrated competition between recurrently connected visual processing networks. Selection, which facilitates the emergence of a 'winner' from among many potential targets, can be associated with particular spatial locations or object properties, and it can be modulated by both stimulus-driven and goal-driven factors. Recent neurobiological data support this account, revealing the activation of striate and extrastriate brain regions during conditions of competition. In addition, parietal and temporal cortices play a role in selection, biasing the ultimate outcome of the competition.", "title": "" }, { "docid": "2b1e2b90d7fcff0f3b159908d58c0cae", "text": "Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. They learn regression models from training images with associated human subjective scores to predict the perceptual quality of test images. Such opinion-aware methods, however, require a large amount of training samples with associated human subjective scores and of a variety of distortion types. The BIQA models learned by opinion-aware methods often have weak generalization capability, hereby limiting their usability in practice. By comparison, opinion-unaware methods do not need human subjective scores for training, and thus have greater potential for good generalization capability. Unfortunately, thus far no opinion-unaware BIQA method has shown consistently better quality prediction accuracy than the opinion-aware methods. Here, we aim to develop an opinion-unaware BIQA method that can compete with, and perhaps outperform, the existing opinion-aware methods. By integrating the features of natural image statistics derived from multiple cues, we learn a multivariate Gaussian model of image patches from a collection of pristine natural images. Using the learned multivariate Gaussian model, a Bhattacharyya-like distance is used to measure the quality of each image patch, and then an overall quality score is obtained by average pooling. The proposed BIQA method does not need any distorted sample images nor subjective quality scores for training, yet extensive experiments demonstrate its superior quality-prediction performance to the state-of-the-art opinion-aware BIQA methods. 
The MATLAB source code of our algorithm is publicly available at www.comp.polyu.edu.hk/~cslzhang/IQA/ILNIQE/ILNIQE.htm.", "title": "" }, { "docid": "8e80d35cd01bde9b34651ca14e715171", "text": "A complementary metal-oxide semiconductor (CMOS) single-stage cascode low-noise amplifier (LNA) is presented in this paper. The microwave monolithic integrated circuit (MMIC) is fabricated using digital 90-nm silicon-on-insulator (SOI) technology. All impedance matching and bias elements are implemented on the compact chip, which has a size of 0.6 mm × 0.3 mm. The supply voltage and supply current are 2.4 V and 17 mA, respectively. At 35 GHz and 50 Ω source/load impedances, a gain of 11.9 dB, a noise figure of 3.6 dB, an output compression point of 4 dBm, an input return loss of 6 dB, and an output return loss of 18 dB are measured. The -3-dB frequency bandwidth ranges from 26 to 42 GHz. All results include the pad parasitics. To the knowledge of the author, the results are by far the best for a silicon-based millimeter-wave LNA reported to date. The LNA is well suited for systems operating in accordance to the local multipoint distribution service (LMDS) standards at 28 and 38 GHz and the multipoint video distribution system (MVDS) standard at 42 GHz.", "title": "" }, { "docid": "250c1a5ac98dc6556bc62cc05555499d", "text": "Smartphones are programmable and equipped with a set of cheap but powerful embedded sensors, such as accelerometer, digital compass, gyroscope, GPS, microphone, and camera. These sensors can collectively monitor a diverse range of human activities and the surrounding environment. Crowdsensing is a new paradigm which takes advantage of the pervasive smartphones to sense, collect, and analyze data beyond the scale of what was previously possible. With the crowdsensing system, a crowdsourcer can recruit smartphone users to provide sensing service. Existing crowdsensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for crowdsensing. We consider two system models: the crowdsourcer-centric model where the crowdsourcer provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the crowdsourcer-centric model, we design an incentive mechanism using a Stackelberg game, where the crowdsourcer is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the crowdsourcer is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms.", "title": "" }, { "docid": "c3261d1552912642d407b512d08cc6f7", "text": "Four studies apply self-determination theory (SDT; Ryan & Deci, 2000) in investigating motivation for computer game play, and the effects of game play on wellbeing. Studies 1–3 examine individuals playing 1, 2 and 4 games, respectively and show that perceived in-game autonomy and competence are associated with game enjoyment, preferences, and changes in well-being pre- to post-play.
Competence and autonomy perceptions are also related to the intuitive nature of game controls, and the sense of presence or immersion in participants’ game play experiences. Study 4 surveys an on-line community with experience in multiplayer games. Results show that SDT’s theorized needs for autonomy, competence, and relatedness independently predict enjoyment and future game play. The SDT model is also compared with Yee’s (2005) motivation taxonomy of game play motivations. Results are discussed in terms of the relatively unexplored landscape of human motivation within virtual worlds.", "title": "" }, { "docid": "2d94bc7459304885c60c7bf29341fa5d", "text": "Bayesian optimization schemes often rely on Gaussian processes (GP). GP models are very flexible, but are known to scale poorly with the number of training points. While several efficient sparse GP models are known, they have limitations when applied in optimization settings. We propose a novel Bayesian optimization framework that uses sparse online Gaussian processes. We introduce a new updating scheme for the online GP that accounts for our preference during optimization for regions with better performance. We apply this method to optimize the performance of a free-electron laser, and demonstrate empirically that the weighted updating scheme leads to substantial improvements to performance in optimization.", "title": "" }, { "docid": "fb7f0dbfb4d603ff122b95a41ac3a3bc", "text": "Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT’16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT’16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.", "title": "" }, { "docid": "05d3029a38631e4c0e445731f655b52c", "text": "This paper presents a non-inverting buck-boost based power-factor-correction (PFC) converter operating in the boundary-conduction-mode (BCM) for the wide input-voltage-range applications. Unlike other conventional PFC converters, the proposed non-inverting buck-boost based PFC converter has both step-up and step-down conversion functionalities to provide positive DC output-voltage. In order to reduce the turn-on switching-loss in high frequency applications, the BCM current control is employed to achieve zero current turn-on for the power switches. Besides, the relationships of the power factor versus the voltage conversion ratio between the BCM boost PFC converter and the proposed BCM non-inverting buck-boost PFC converter are also provided. 
Finally, the 70-watt prototype circuit of the proposed BCM buck-boost based PFC converter is built for the verification of the high frequency and wide input-voltage-range.", "title": "" }, { "docid": "15a079037d3dbb1b08591c0a3c8e0804", "text": "The paper offers an introduction and a road map to the burgeoning literature on two-sided markets. In many industries, platforms court two (or more) sides that use the platform to interact with each other. The platforms’ usage or variable charges impact the two sides’ willingness to trade, and thereby their net surpluses from potential interactions; the platforms’ membership or fixed charges in turn determine the end-users’ presence on the platform. The platforms’ fine design of the structure of variable and fixed charges is relevant only if the two sides do not negotiate away the corresponding usage and membership externalities. The paper first focuses on usage charges and provides conditions for the allocation of the total usage charge (e.g., the price of a call or of a payment card transaction) between the two sides not to be neutral; the failure of the Coase theorem is necessary but not sufficient for two-sidedness. Second, the paper builds a canonical model integrating usage and membership externalities. This model allows us to unify and compare the results obtained in the two hitherto disparate strands of the literature emphasizing either form of externality; and to place existing membership (or indirect) externalities models on a stronger footing by identifying environments in which these models can accommodate usage pricing. We also obtain general results on usage pricing of independent interest. Finally, the paper reviews some key economic insights on platform price and non-price strategies.", "title": "" }, { "docid": "be1bfd488f90deca658937dd20ee0915", "text": "This research examined the effects of hands-free cell phone conversations on simulated driving. The authors found that these conversations impaired driver's reactions to vehicles braking in front of them. The authors assessed whether this impairment could be attributed to a withdrawal of attention from the visual scene, yielding a form of inattention blindness. Cell phone conversations impaired explicit recognition memory for roadside billboards. Eye-tracking data indicated that this was due to reduced attention to foveal information. This interpretation was bolstered by data showing that cell phone conversations impaired implicit perceptual memory for items presented at fixation. The data suggest that the impairment of driving performance produced by cell phone conversations is mediated, at least in part, by reduced attention to visual inputs.", "title": "" }, { "docid": "4fe2467d44337a911f30e652c436db8f", "text": "The use of computer programming in K-12 spread into schools worldwide in the 70s and 80s of the last century, but it disappeared from the educational landscape in the early 90s. With the development of visual programming languages such as Scratch, this movement has emerged again in recent years, as teachers at all educational levels and from different disciplines consider that the use of programming enhances learning in many subjects and allows students to develop important skills. 
The systematic literature review presented in this article aims to summarize the results of recent research using programming with Scratch in subjects not related to computing and communications, as well as studies analyzing the kind of skills students develop while learning to code in this environment. Although the analyzed papers provide promising results regarding the use of programming as an educational resource, this review highlights the need to conduct more empirical research in classrooms, using larger samples of students that allow to obtain clear conclusions about the types of learning that could be enhanced through programming.", "title": "" }, { "docid": "23583b155fc8ec3301cfef805f568e57", "text": "We address the problem of covering an environment with robots equipped with sensors. The robots are heterogeneous in that the sensor footprints are different. Our work uses the location optimization framework in with three significant extensions. First, we consider robots with different sensor footprints, allowing, for example, aerial and ground vehicles to collaborate. We allow for finite size robots which enables implementation on real robotic systems. Lastly, we extend the previous work allowing for deployment in non convex environments.", "title": "" }, { "docid": "0bce954374d27d4679eb7562350674fc", "text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.", "title": "" } ]
scidocsrr
cb5d74020b03c492872fa66c730053a3
Comparison processes in social judgment: mechanisms and consequences.
[ { "docid": "e9b2f987c4744e509b27cbc2ab1487be", "text": "Analogy and similarity are often assumed to be distinct psychological processes. In contrast to this position, the authors suggest that both similarity and analogy involve a process of structural alignment and mapping, that is, that similarity is like analogy. In this article, the authors first describe the structure-mapping process as it has been worked out for analogy. Then, this view is extended to similarity, where it is used to generate new predictions. Finally, the authors explore broader implications of structural alignment for psychological processing.", "title": "" } ]
[ { "docid": "776de4218230e161570d599440183354", "text": "For the first time, we present a state-of-the-art energy-efficient 16nm technology integrated with FinFET transistors, 0.07um2 high density (HD) SRAM, Cu/low-k interconnect and high density MiM for mobile SoC and computing applications. This technology provides 2X logic density and >35% speed gain or >55% power reduction over our 28nm HK/MG planar technology. To our knowledge, this is the smallest fully functional 128Mb HD FinFET SRAM (with single fin) test-chip demonstrated with low Vccmin for 16nm node. Low leakage (SVt) FinFET transistors achieve excellent short channel control with DIBL of <;30 mV/V and superior Idsat of 520/525 uA/um at 0.75V and Ioff of 30 pA/um for NMOS and PMOS, respectively.", "title": "" }, { "docid": "bda7775f0ec70cf1f80093d484e84332", "text": "Comprehensive situational awareness is paramount to the effectiveness of proprietary navigational and higher-level functions of intelligent vehicles. In this paper, we address a graph-based approach for 2D road representation of 3D point clouds with respect to the road topography. We employ the gradient cues of the road geometry to construct a Markov Random Filed (MRF) and implement an efficient belief propagation (BP) algorithm to classify the road environment into four categories, i.e. the reachable region, the drivable region, the obstacle region and the unknown region. The proposed approach can overcome a wide variety of practical challenges, such as sloped terrains, rough road surfaces, rolling/pitching of the host vehicle, etc., and represent the road environment accurately as well as robustly. Experimental results in typical but challenging environments have substantiated that the proposed approach is more sensitive and reliable than the conventional vertical displacements analysis and show superior performance against other local classifiers.", "title": "" }, { "docid": "cc3b36d8026396a7a931f07ef9d3bcfb", "text": "Planning an itinerary before traveling to a city is one of the most important travel preparation activities. In this paper, we propose a novel framework called TripPlanner, leveraging a combination of location-based social network (i.e., LBSN) and taxi GPS digital footprints to achieve personalized, interactive, and traffic-aware trip planning. First, we construct a dynamic point-of-interest network model by extracting relevant information from crowdsourced LBSN and taxi GPS traces. Then, we propose a two-phase approach for personalized trip planning. In the route search phase, TripPlanner works interactively with users to generate candidate routes with specified venues. In the route augmentation phase, TripPlanner applies heuristic algorithms to add user's preferred venues iteratively to the candidate routes, with the objective of maximizing the route score while satisfying both the venue visiting time and total travel time constraints. To validate the efficiency and effectiveness of the proposed approach, extensive empirical studies were performed on two real-world data sets from the city of San Francisco, which contain more than 391 900 passenger delivery trips generated by 536 taxis in a month and 110 214 check-ins left by 15 680 Foursquare users in six months.", "title": "" }, { "docid": "b775aec0813048fb7101d428f67d0690", "text": "This paper proposes a new type of cache-collision timing attacks on software implementations of AES. 
Our major technique is of differential nature and is based on the internal cryptographic properties of AES, namely, on the MDS property of the linear code providing the diffusion matrix used in the MixColumns transform. It is a chosen-plaintext attack where pairs of AES executions are treated differentially. The method can be easily converted into a chosen-ciphertext attack. We also thoroughly study the physical behavior of cache memory enabling this attack. On the practical side, we demonstrate that our theoretical findings lead to efficient realworld attacks on embedded systems implementing AES at the example of ARM9. As this is one of the most wide-spread embedded platforms today [7], our experimental results might make a revision of the practical security of many embedded applications with security functionality necessary. To our best knowledge, this is the first paper to study cache timing attacks on embedded systems.", "title": "" }, { "docid": "162b678c669a84386e38d5bc8b794bb3", "text": "Women who have been sexually coerced by an intimate partner experience many negative health consequences. Recent research has focused on predicting this sexual coercion. In two studies, we investigated the relationship between men’s use of partner-directed insults and sexually coercive behaviors in the context of intimate relationships. Study 1 secured self-reports from 247 men on the Partner-Directed Insults Scale and the Sexual Coercion in Intimate Relationships Scale. Study 2 obtained partner-reports from 378 women on the same measures. Across both studies, results indicate that men’s use of sexually coercive behaviors can be statistically predicted by the frequency and content of the insults that men direct at their intimate partner. Insults derogating a partner’s value as a person and accusing a partner of sexual infidelity were most useful in predicting sexual coercion. The discussion notes limitations of the current research and highlights directions for future research.", "title": "" }, { "docid": "08dfd4bb173f7d70cff710590b988f08", "text": "Gallium-67 citrate is currently considered as the tracer of first choice in the diagnostic workup of fever of unknown origin (FUO). Fluorine-18 2'-deoxy-2-fluoro-D-glucose (FDG) has been shown to accumulate in malignant tumours but also in inflammatory processes. The aim of this study was to prospectively evaluate FDG imaging with a double-head coincidence camera (DHCC) in patients with FUO in comparison with planar and single-photon emission tomography (SPET) 67Ga citrate scanning. Twenty FUO patients underwent FDG imaging with a DHCC which included transaxial and longitudinal whole-body tomography. In 18 of these subjects, 67Ga citrate whole-body and SPET imaging was performed. The 67Ga citrate and FDG images were interpreted by two investigators, both blinded to the results of other diagnostic modalities. Forty percent (8/20) of the patients had infection, 25% (5/20) had auto-immune diseases, 10% (2/20) had neoplasms and 15% (3/20) had other diseases. Fever remained unexplained in 10% (2/20) of the patients. Of the 20 patients studied, FDG imaging was positive and essentially contributed to the final diagnosis in 11 (55%). The sensitivity of transaxial FDG tomography in detecting the focus of fever was 84% and the specificity, 86%. Positive and negative predictive values were 92% and 75%, respectively. If the analysis was restricted to the 18 patients who were investigated both with 67Ga citrate and FDG, sensitivity was 81% and specificity, 86%. 
Positive and negative predictive values were 90% and 75%, respectively. The diagnostic accuracy of whole-body FDG tomography (again restricted to the aforementioned 18 patients) was lower (sensitivity, 36%; specificity, 86%; positive and negative predictive values, 80% and 46%, respectively). 67Ga citrate SPET yielded a sensitivity of 67% in detecting the focus of fever and a specificity of 78%. Positive and negative predictive values were 75% and 70%, respectively. A low sensitivity (45%), but combined with a high specificity (100%), was found in planar 67Ga imaging. Positive and negative predictive values were 100% and 54%, respectively. It is concluded that in the context of FUO, transaxial FDG tomography performed with a DHCC is superior to 67Ga citrate SPET. This seems to be the consequence of superior tracer kinetics of FDG compared with those of 67Ga citrate and of a better spatial resolution of a DHCC system compared with SPET imaging. In patients with FUO, FDG imaging with either dedicated PET or DHCC should be considered the procedure of choice.", "title": "" }, { "docid": "3d72ed32a523f4c51b9c57b0d7d0f9ab", "text": "A theoretical study on the design of broadbeam leaky-wave antennas (LWAs) of uniform type and rectilinear geometry is presented. A new broadbeam LWA structure based on the hybrid printed-circuit waveguide is proposed, which allows for the necessary flexible and independent control of the leaky-wave phase and leakage constants. The study shows that both the real and virtual focus LWAs can be synthesized in a simple manner by tapering the printed-slot along the LWA properly, but the real focus LWA is preferred in practice. Practical issues concerning the tapering of these LWA are investigated, including the tuning of the radiation pattern asymmetry level and beamwidth, the control of the ripple level inside the broad radiated main beam, and the frequency response of the broadbeam LWA. The paper provides new insight and guidance for the design of this type of LWAs.", "title": "" }, { "docid": "941dc605dab6cf9bfe89bedb2b4f00a3", "text": "Word boundary detection in continuous speech is very common and important problem in speech synthesis and recognition. Several researches are open on this field. Since there is no sign of start of the word, end of the word and number of words in the spoken utterance of any natural language, one must study the intonation pattern of a particular language. In this paper an algorithm is proposed to detect word boundaries in continuous speech of Hindi language. A careful study of the intonation pattern of Hindi language has been done. Based on this study it is observed that, there are several suprasegmental parameters of speech signal such as pitch, F0 fundamental frequency, duration, intensity, and pause, which can play important role in finding some clues to detect the start and the end of the word from the spoken utterance of Hindi Language. The proposed algorithm is based mainly on two prosodic parameters, pitch and intensity.", "title": "" }, { "docid": "6cabc50fda1107a61c2704c4917b9501", "text": "A vehicle tracking system is very useful for tracking the movement of a vehicle from any location at any time. In this work, real time Google map and Arduino based vehicle tracking system is implemented with Global Positioning System (GPS) and Global system for mobile communication (GSM) technology. GPS module provides geographic coordinates at regular time intervals. 
Then the GSM module transmits the location of vehicle to cell phone of owner/user in terms of latitude and longitude. At the same time, location is displayed on LCD. Finally, Google map displays the location and name of the place on cell phone. Thus, owner/user will be able to continuously monitor a moving vehicle using the cell phone. In order to show the feasibility and effectiveness of the system, this work presents experimental result of the vehicle tracking system. The proposed system is user friendly and ensures safety and surveillance at low maintenance cost.", "title": "" }, { "docid": "7633393bdc807165f2042f0e9e3c7407", "text": "We present our system for the WNUT 2017 Named Entity Recognition challenge on Twitter data. We describe two modifications of a basic neural network architecture for sequence tagging. First, we show how we exploit additional labeled data, where the Named Entity tags differ from the target task. Then, we propose a way to incorporate sentence level features. Our system uses both methods and ranked second for entity level annotations, achieving an F1-score of 40.78, and second for surface form annotations, achieving an F1score of 39.33.", "title": "" }, { "docid": "774797d2a1bb201bdca750f808d8eb37", "text": "Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning problematic. Recently, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as “soft targets”) achieved superior performance in all three scenarios. In addition, we reduced the computational cost of generative replay by integrating the generative model into the main model by equipping it with generative feedback connections. This Replay-through-Feedback approach substantially shortened training time with no or negligible loss in performance. We believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications.", "title": "" }, { "docid": "1c60ddeb7e940992094cb8f3913e811a", "text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. 
Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet", "title": "" }, { "docid": "9cf8a2f73a906f7dc22c2d4fbcf8fa6b", "text": "In this paper the effect of spoilers on aerodynamic characteristics of an airfoil were observed by CFD.As the experimental airfoil NACA 2415 was choosen and spoiler was extended from five different positions based on the chord length C. Airfoil section is designed with a spoiler extended at an angle of 7 degree with the horizontal.The spoiler extends to 0.15C.The geometry of 2-D airfoil without spoiler and with spoiler was designed in GAMBIT.The numerical simulation was performed by ANS YS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graph and the conceptual assumptions were verified as the lift is reduced and the drag is increased that obeys the basic function of a spoiler. Coefficient of drag. I. INTRODUCTION An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remains the same. Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is ―lifted‖. The faster an airplane moves, the more lift there is and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing which can be extended upward into the airflow and spoil it. By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers are designed to reduce lift also making considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed. When the …", "title": "" }, { "docid": "c24bfd3b7bbc8222f253b004b522f7d5", "text": "The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) \"Real-life depression, and affect\" will be the seventh competition event aimed at comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. 
The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches to depression and emotion recognition from real-life data. This paper presents the novelties introduced this year, the challenge guidelines, the data used, and the performance of the baseline system on the two proposed tasks: dimensional emotion recognition (time and value-continuous), and dimensional depression estimation (value-continuous).", "title": "" }, { "docid": "55fabe16bf74ac28c7d7bb9fddd7d12d", "text": "Efficiently and accurately searching for similarities among time series and discovering interesting patterns is an important and non-trivial problem. In this paper, we introduce a new representation of time series, the multiresolution vector quantized (MVQ) approximation, along with a new distance function. The novelty of MVQ is that it keeps both local and global information about the original time series in a hierarchical mechanism, processing the original time series at multiple resolutions. Moreover, the proposed representation is symbolic employing key subsequences and potentially allows the application of text-based retrieval techniques into the similarity analysis of time series. The proposed method is fast and scales linearly with the size of database and the dimensionality. Contrary to the vast majority in the literature that uses the Euclidean distance, MVQ uses a multi-resolution/hierarchical distance function. We performed experiments with real and synthetic data. The proposed distance function consistently outperforms all the major competitors (Euclidean, dynamic time warping, piecewise aggregate approximation) achieving up to 20% better precision/recall and clustering accuracy on the tested datasets.", "title": "" }, { "docid": "87ddd84859d182085c6422e1988b08d8", "text": "The effects of menstrual cycle phase and hormones on women’s visual ability to detect symmetry and visual preference for symmetry were examined. Participants completed tests of symmetry detection and preference for male facial symmetry at two of three menstrual cycle phases (menses, periovulatory, and luteal). Women were better at detecting facial symmetry during the menses than luteal phase of their cycle. A trend indicated the opposite pattern for dot symmetry detection. Similarly, change in salivary progesterone levels across the cycle was negatively related to change in facial symmetry detection scores. However, there was no clear evidence of a greater preference for facial symmetry at any cycle phase, despite an overall preference for facial symmetry across phases. These findings suggest a menses phase advantage and a low progesterone advantage in women’s ability to detect facial symmetry. The results are discussed in the context of hormonal, evolutionary mate selection, and functional neurocognitive theories. # 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e27d560bd974985dec1df3791fdf2f13", "text": "Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have shown to achieve the state-of-the-art performance. 
Although there exist relatively large annotated data, can machines learn all knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we enrich the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models to achieve the state-of-the-art performance on the SNLI and MultiNLI datasets.", "title": "" }, { "docid": "757f066b6e693738037994b48d38fcfd", "text": "In this study, results of a variety of ML algorithms are tested against artificially polluted datasets with noise. Two noise models are tested, each of these studied on a range of noise levels from 0 to 50algorithm, a linear regression algorithm, a decision tree, a M5 algorithm, a decision table classifier, a voting interval scheme as well as a hyper pipes classifier. The study is based on an environmental field of application employing data from two air quality prediction problems, a toxicity classification problem and four artificially produced datasets. The results contain evaluation of classification criteria for every algorithm and noise level for the noise sensitivity study. The results suggest that the best algorithms per problem in terms of showing the lower RMS error are the decision table and the linear regression, for classification and regression problems respectively.", "title": "" }, { "docid": "e1dd2a719d3389a11323c5245cd2b938", "text": "Secure identity tokens such as Electronic Identity (eID) cards are emerging everywhere. At the same time user-centric identity management gains acceptance. Anonymous credential schemes are the optimal realization of user-centricity. However, on inexpensive hardware platforms, typically used for eID cards, these schemes could not be made to meet the necessary requirements such as future-proof key lengths and transaction times on the order of 10 seconds. The reasons for this is the need for the hardware platform to be standardized and certified. Therefore an implementation is only possible as a Java Card applet. This results in severe restrictions: little memory (transient and persistent), an 8-bit CPU, and access to hardware acceleration for cryptographic operations only by defined interfaces such as RSA encryption operations.\n Still, we present the first practical implementation of an anonymous credential system on a Java Card 2.2.1. We achieve transaction times that are orders of magnitudes faster than those of any prior attempt, while raising the bar in terms of key length and trust model. Our system is the first one to act completely autonomously on card and to maintain its properties in the face of an untrusted terminal. In addition, we provide a formal system specification and share our solution strategies and experiences gained and with the Java Card.", "title": "" }, { "docid": "d1940745dcc684006037ad099697c4a4", "text": "On a day in November, the body of a 31-year-old man was found near a swimming lake with two open and partly emptied fish tins lying next to him. Further investigations showed that the man had been allergic to fish protein and suffered from severe depression and drug psychosis. Already some days before the suicide, he had repeatedly asked for fish to kill himself. 
Although the results of the chemical and toxicological examinations were negative, the autopsy findings and histological tests suggest that death was caused by an anaphylactic reaction.", "title": "" } ]
scidocsrr
441a8cccfe1b05140b8bed527e8a2359
Building a Recommender Agent for e-Learning Systems
[ { "docid": "323113ab2bed4b8012f3a6df5aae63be", "text": "Clustering data generally involves some input parameters or heuristics that are usually unknown at the time they are needed. We discuss the general problem of parameters in clustering and present a new approach, TURN, based on boundary detection and apply it to the clustering of web log data. We also present the use of di erent lters on the web log data to focus the clustering results and discuss di erent coeÆcients for de ning similarity in a non-Euclidean space.", "title": "" } ]
[ { "docid": "d297360f609e4b03c9d70fda7cc04123", "text": "This paper describes an FPGA implementation of a single-precision floating-point multiply-accumulator (FPMAC) that supports single-cycle accumulation while maintaining high clock frequencies. A non-traditional internal representation reduces the cost of mantissa alignment within the accumulator. The FPMAC is evaluated on an Altera Stratix III FPGA.", "title": "" }, { "docid": "35981768a2a46c2dd9d52ebbd5b63750", "text": "A vehicle detection and classification system has been developed based on a low-cost triaxial anisotropic magnetoresistive sensor. Considering the characteristics of vehicle magnetic detection signals, especially the signals for low-speed congested traffic in large cities, a novel fixed threshold state machine algorithm based on signal variance is proposed to detect vehicles within a single lane and segment the vehicle signals effectively according to the time information of vehicles entering and leaving the sensor monitoring area. In our experiments, five signal features are extracted, including the signal duration, signal energy, average energy of the signal, ratio of positive and negative energy of x-axis signal, and ratio of positive and negative energy of y-axis signal. Furthermore, the detected vehicles are classified into motorcycles, two-box cars, saloon cars, buses, and Sport Utility Vehicle commercial vehicles based on a classification tree model. The experimental results have shown that the detection accuracy of the proposed algorithm can reach up to 99.05% and the average classification accuracy is 93.66%, which verify the effectiveness of our algorithm for low-speed congested traffic.", "title": "" }, { "docid": "176cf87aa657a5066a02bfb650532070", "text": "Structural Design of Reinforced Concrete Tall Buildings Author: Ali Sherif S. Rizk, Director, Dar al-Handasah Shair & Partners Subject: Structural Engineering", "title": "" }, { "docid": "02c687cbe7961f082c60fad1cc3f3f80", "text": "The simplicity of Transpose Jacobian (TJ) control is a significant characteristic of this algorithm for controlling robotic manipulators. Nevertheless, a poor performance may result in tracking of fast trajectories, since it is not dynamics-based. Use of high gains can deteriorate performance seriously in the presence of feedback measurement noise. Another drawback is that there is no prescribed method of selecting its control gains. In this paper, based on feedback linearization approach a Modified TJ (MTJ) algorithm is presented which employs stored data of the control command in the previous time step, as a learning tool to yield improved performance. The gains of this new algorithm can be selected systematically, and do not need to be large, hence the noise rejection characteristics of the algorithm are improved. Based on Lyapunov’s theorems, it is shown that both the standard and the MTJ algorithms are asymptotically stable. Analysis of the required computational effort reveals the efficiency of the proposed MTJ law compared to the Model-based algorithms. Simulation results are presented which compare tracking performance of the MTJ algorithm to that of the TJ and Model-Based algorithms in various tasks. Results of these simulations show that performance of the new MTJ algorithm is comparable to that of Computed Torque algorithms, without requiring a priori knowledge of plant dynamics, and with reduced computational burden. 
Therefore, the proposed algorithm is well suited to most industrial applications where simple efficient algorithms are more appropriate than complicated theoretical ones with massive computational burden. © 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b22137cbb14396f1dcd24b2a15b02508", "text": "This paper studies the self-alignment properties between two chips that are stacked on top of each other with copper pillars micro-bumps. The chips feature alignment marks used for measuring the resulting offset after assembly. The accuracy of the alignment is found to be better than 0.5 µm in x and y directions, depending on the process. The chips also feature waveguides and vertical grating couplers (VGC) fabricated in the front-end-of-line (FEOL) and organized in order to realize an optical interconnection between the chips. The coupling of light between the chips is measured and compared to numerical simulation. This high accuracy self-alignment was obtained after studying the impact of flux and fluxless treatments on the wetting of the pads and the successful assembly yield. The composition of the bump surface was analyzed with Time-of-Flight Secondary Ions Mass Spectroscopy (ToF-SIMS) in order to understand the impact of each treatment. This study confirms that copper pillars micro-bumps can be used to self-align photonic integrated circuits (PIC) with another die (for example a microlens array) in order to achieve high throughput alignment of optical fiber to the PIC.", "title": "" }, { "docid": "e4007c7e6a80006238e1211a213e391b", "text": "Various techniques for multiprogramming parallel multiprocessor systems have been proposed recently as a way to improve performance. A natural approach is to divide the set of processing elements into independent partitions, and simultaneously execute a different parallel program in each partition. Several issues arise, including the determination of the optimal number of programs allowed to execute simultaneously (i.e., the number of partitions) and the corresponding partition sizes. This can be done statically, dynamically, or adaptively, depending on the system and workload characteristics. In this paper several adaptive partitioning policies are evaluated. Their behavior, as well as the behavior of static policies, is investigated using real parallel programs. The policy applicability to actual systems is addressed, and implementation results of the proposed policies on an iPSC/2 hypercube system are reported. The concept of robustness (i.e., the ability to perform well on a wide range of workload types over a wide range of arrival rates) is presented and quantified. Relative rankings of the policies are obtained, depending on the specific workload characteristics. A trade-off is shown between potential performance and the amount of knowledge of the workload characteristics required to select the best policy. A policy that performs best when such knowledge of workload parallelism and/or arrival rate is not available is proposed as the most robust of those analyzed.", "title": "" }, { "docid": "18b3328725661770be1f408f37c7eb64", "text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information.
A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.", "title": "" }, { "docid": "a712b6efb5c869619864cd817c2e27e1", "text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.", "title": "" }, { "docid": "307d9742739cbd2ade98c3d3c5d25887", "text": "In this paper, we present a smart US imaging system (SMUS) based on an android-OS smartphone, which can provide maximally optimized efficacy in terms of weight and size in point-of-care diagnostic applications. The proposed SMUS consists of the smartphone (Galaxy S5 LTE-A, Samsung., Korea) and a 16-channel probe system. The probe system contains analog and digital front-ends, which conducts beamforming and mid-processing procedures. Otherwise, the smartphone performs the back-end processing including envelope detection, log compression, 2D image filtering, digital scan conversion, and image display with custom-made graphical user interface (GUI). Note that the probe system and smartphone are interconnected by the USB 3.0 protocol. As a result, the developed SMUS can provide real-time B-mode image with the sufficient frame rate (i.e., 58 fps), battery run-time for point-of-care diagnosis (i.e., 54 min), and 35.0°C of transducer surface temperature during B-mode imaging, which satisfies the temperature standards for the safety and effectiveness of medical electrical equipment, IEC 60601-1 (i.e., 43°C).", "title": "" }, { "docid": "4d4a09c7cef74e9be52844a61ca57bef", "text": "The key of zero-shot learning (ZSL) is how to find the information transfer model for bridging the gap between images and semantic information (texts or attributes). 
Existing ZSL methods usually construct the compatibility function between images and class labels with the consideration of the relevance on the semantic classes (the manifold structure of semantic classes). However, the relationship of image classes (the manifold structure of image classes) is also very important for the compatibility model construction. It is difficult to capture the relationship among image classes due to unseen classes, so that the manifold structure of image classes often is ignored in ZSL. To complement each other between the manifold structure of image classes and that of semantic classes information, we propose structure propagation (SP) for improving the performance of ZSL for classification. SP can jointly consider the manifold structure of image classes and that of semantic classes for approximating to the intrinsic structure of object classes. Moreover, the SP can describe the constrain condition between the compatibility function and these manifold structures for balancing the influence of the structure propagation iteration. The SP solution provides not only unseen class labels but also the relationship of two manifold structures that encode the positive transfer in structure propagation. Experimental results demonstrate that SP can attain the promising results on the AwA, CUB, Dogs and SUN databases.", "title": "" }, { "docid": "4100a10b2a03f3a1ba712901cee406d2", "text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.", "title": "" }, { "docid": "b7ca3a123963bb2f0bfbe586b3bc63d0", "text": "Objective In symptom-dependent diseases such as functional dyspepsia (FD), matching the pattern of epigastric symptoms, including severity, kind, and perception site, between patients and physicians is critical. Additionally, a comprehensive examination of the stomach, duodenum, and pancreas is important for evaluating the origin of such symptoms. Methods FD-specific symptoms (epigastric pain, epigastric burning, early satiety, and postprandial fullness) and other symptoms (regurgitation, nausea, belching, and abdominal bloating) as well as the perception site of the above symptoms were investigated in healthy subjects using a new questionnaire with an illustration of the human body. A total of 114 patients with treatment-resistant dyspeptic symptoms were evaluated for their pancreatic exocrine function using N-benzoyl-L-tyrosyl-p-aminobenzoic acid. Results A total of 323 subjects (men:women, 216:107; mean age, 52.1 years old) were initially enrolled. 
Most of the subjects felt the FD-specific symptoms at the epigastrium, while about 20% felt them at other abdominal sites. About 30% of expressed as epigastric symptoms were FD-nonspecific symptoms. At the epigastrium, epigastric pain and epigastric burning were mainly felt at the upper part, and postprandial fullness and early satiety were felt at the lower part. The prevalence of patients with pancreatic exocrine dysfunction was 71% in the postprandial fullness group, 68% in the epigastric pain group, and 82% in the diarrhea group. Conclusion We observed mismatch in the perception site and expression between the epigastric symptoms of healthy subjects and FD-specific symptoms. Postprandial symptoms were often felt at the lower part of the epigastrium, and pancreatic exocrine dysfunction may be involved in the FD symptoms, especially for treatment-resistant dyspepsia patients.", "title": "" }, { "docid": "a5e960a4b20959a1b4a85e08eebab9d3", "text": "This paper presents a new class of dual-, tri- and quad-band BPF by using proposed open stub-loaded shorted stepped-impedance resonator (OSLSSIR). The OSLSSIR consists of a two-end-shorted three-section stepped-impedance resistor (SIR) with two identical open stubs loaded at its impedance junctions. Two 50- Ω tapped lines are directly connected to two shorted sections of the SIR to serve as I/O ports. As the electrical lengths of two identical open stubs increase, many more transmission poles (TPs) and transmission zeros (TZs) can be shifted or excited within the interested frequency range. The TZs introduced by open stubs divide the TPs into multiple groups, which can be applied to design a multiple-band bandpass filter (BPF). In order to increase many more design freedoms for tuning filter performance, a high-impedance open stub and the narrow/broad side coupling are introduced as perturbations in all filters design, which can tune the even- and odd-mode TPs separately. In addition, two branches of I/O coupling and open stub-loaded shorted microstrip line are employed in tri- and quad-band BPF design. As examples, two dual-wideband BPFs, one tri-band BPF, and one quad-band BPF have been successfully developed. The fabricated four BPFs have merits of compact sizes, low insertion losses, and high band-to-band isolations. The measured results are in good agreement with the full-wave simulated results.", "title": "" }, { "docid": "d6f473f6b6758b2243dde898840656b0", "text": "In this paper, we introduce the new generation 3300V HiPak2 IGBT module (130x190)mm employing the recently developed TSPT+ IGBT with Enhanced Trench MOS technology and Field Charge Extraction (FCE) diode. The new chip-set enables IGBT modules with improved electrical performance in terms of low losses, good controllability, high robustness and soft diode recovery. Due to the lower losses and the excellent SOA, the current rating of the 3300V HiPak2 module can be increased from 1500A for the current SPT+ generation to 1800A for the new TSPT+ version.", "title": "" }, { "docid": "7635d39eda6ac2b3969216b39a1aa1f7", "text": "We introduce tailored displays that enhance visual acuity by decomposing virtual objects and placing the resulting anisotropic pieces into the subject's focal range. The goal is to free the viewer from needing wearable optical corrections when looking at displays. Our tailoring process uses aberration and scattering maps to account for refractive errors and cataracts. 
It splits an object's light field into multiple instances that are each in-focus for a given eye sub-aperture. Their integration onto the retina leads to a quality improvement of perceived images when observing the display with naked eyes. The use of multiple depths to render each point of focus on the retina creates multi-focus, multi-depth displays. User evaluations and validation with modified camera optics are performed. We propose tailored displays for daily tasks where using eyeglasses are unfeasible or inconvenient (e.g., on head-mounted displays, e-readers, as well as for games); when a multi-focus function is required but undoable (e.g., driving for farsighted individuals, checking a portable device while doing physical activities); or for correcting the visual distortions produced by high-order aberrations that eyeglasses are not able to.", "title": "" }, { "docid": "69f710a71b27cf46039d54e20b5f589b", "text": "This paper presents a new needle deflection model that is an extension of prior work in our group based on the principles of beam theory. The use of a long flexible needle in percutaneous interventions necessitates accurate modeling of the generated curved trajectory when the needle interacts with soft tissue. Finding a feasible model is important in simulators with applications in training novice clinicians or in path planners used for needle guidance. Using intra-operative force measurements at the needle base, our approach relates mechanical and geometric properties of needle-tissue interaction to the net amount of deflection and estimates the needle curvature. To this end, tissue resistance is modeled by introducing virtual springs along the needle shaft, and the impact of needle-tissue friction is considered by adding a moving distributed external force to the bending equations. Cutting force is also incorporated by finding its equivalent sub-boundary conditions. Subsequently, the closed-from solution of the partial differential equations governing the planar deflection is obtained using Green's functions. To evaluate the performance of our model, experiments were carried out on artificial phantoms.", "title": "" }, { "docid": "c0f46732345837cf959ea9ee030874fd", "text": "In this paper we discuss the development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts are provided along with opportunities for future work in the modification of NMF algorithms for large-scale and time-varying datasets.", "title": "" }, { "docid": "481d62df8c6cc7ed6bc93a4e3c27a515", "text": "Minutiae points are defined as the minute discontinuities of local ridge flows, which are widely used as the fine level features for fingerprint recognition. Accurate minutiae detection is important and traditional methods are often based on the hand-crafted processes such as image enhancement, binarization, thinning and tracing of the ridge flows etc. These methods require strong prior knowledge to define the patterns of minutiae points and are easily sensitive to noises. In this paper, we propose a machine learning based algorithm to detect the minutiae points with the gray fingerprint image based on Convolution Neural Networks (CNN). 
The proposed approach is divided into the training and testing stages. In the training stage, a number of local image patches are extracted and labeled and CNN models are trained to classify the image patches. The test fingerprint is scanned with the CNN model to locate the minutiae position in the testing stage. To improve the detection accuracy, two CNN models are trained to classify the local patch into minutiae v.s. non-minutiae and into ridge ending v.s. bifurcation, respectively. In addition, multi-scale CNNs are constructed with the image patches of varying sizes and are combined to achieve more accurate detection. Finally, the proposed algorithm is tested the fingerprints of FVC2002 DB1 database. Experimental results and comparisons have been presented to show the effectiveness of the proposed method.", "title": "" }, { "docid": "57a48dee2cc149b70a172ac5785afc6c", "text": "We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ~ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.", "title": "" }, { "docid": "438e690466823b7ae79cf28f62ba87be", "text": "Decades of research have documented that young word learners have more difficulty learning verbs than nouns. Nonetheless, recent evidence has uncovered conditions under which children as young as 24 months succeed. Here, we focus in on the kind of linguistic information that undergirds 24-month-olds' success. We introduced 24-month-olds to novel words (either nouns or verbs) as they watched dynamic scenes (e.g., a man waving a balloon); the novel words were presented in semantic contexts that were either rich (e.g., The man is pilking a balloon), or more sparse (e.g., He's pilking it). Toddlers successfully learned nouns in both the semantically rich and sparse contexts, but learned verbs only in the rich context. This documents that to learn the meaning of a novel verb, English-acquiring toddlers take advantage of the semantically rich information provided in lexicalized noun phrases. Implications for cross-linguistic theories of acquisition are discussed.", "title": "" } ]
scidocsrr
587beaec003e11de1c6ce7a00b43e31a
Improving biocuration of microRNAs in diseases: a case study in idiopathic pulmonary fibrosis
[ { "docid": "2564e804c862e3e40a5f8d0d6dada0c0", "text": "microRNAs (miRNAs) are short non-coding RNA species, which act as potent gene expression regulators. Accurate identification of miRNA targets is crucial to understanding their function. Currently, hundreds of thousands of miRNA:gene interactions have been experimentally identified. However, this wealth of information is fragmented and hidden in thousands of manuscripts and raw next-generation sequencing data sets. DIANA-TarBase was initially released in 2006 and it was the first database aiming to catalog published experimentally validated miRNA:gene interactions. DIANA-TarBase v7.0 (http://www.microrna.gr/tarbase) aims to provide for the first time hundreds of thousands of high-quality manually curated experimentally validated miRNA:gene interactions, enhanced with detailed meta-data. DIANA-TarBase v7.0 enables users to easily identify positive or negative experimental results, the utilized experimental methodology, experimental conditions including cell/tissue type and treatment. The new interface provides also advanced information ranging from the binding site location, as identified experimentally as well as in silico, to the primer sequences used for cloning experiments. More than half a million miRNA:gene interactions have been curated from published experiments on 356 different cell types from 24 species, corresponding to 9- to 250-fold more entries than any other relevant database. DIANA-TarBase v7.0 is freely available.", "title": "" } ]
[ { "docid": "7d603d154025f7160c0711bba92e1049", "text": "Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.", "title": "" }, { "docid": "4322f123ff6a1bd059c41b0037bac09b", "text": "Nowadays, as a beauty-enhancing product, clothing plays an important role in human's social life. In fact, the key to a proper outfit usually lies in the harmonious clothing matching. Nevertheless, not everyone is good at clothing matching. Fortunately, with the proliferation of fashion-oriented online communities, fashion experts can publicly share their fashion tips by showcasing their outfit compositions, where each fashion item (e.g., a top or bottom) usually has an image and context metadata (e.g., title and category). Such rich fashion data offer us a new opportunity to investigate the code in clothing matching. However, challenges co-exist with opportunities. The first challenge lies in the complicated factors, such as color, material and shape, that affect the compatibility of fashion items. Second, as each fashion item involves multiple modalities (i.e., image and text), how to cope with the heterogeneous multi-modal data also poses a great challenge. Third, our pilot study shows that the composition relation between fashion items is rather sparse, which makes traditional matrix factorization methods not applicable. Towards this end, in this work, we propose a content-based neural scheme to model the compatibility between fashion items based on the Bayesian personalized ranking (BPR) framework. The scheme is able to jointly model the coherent relation between modalities of items and their implicit matching preference. Experiments verify the effectiveness of our scheme, and we deliver deep insights that can benefit future research.", "title": "" }, { "docid": "556dbae297d06aaaeb0fd78016bd573f", "text": "This paper presents a learning and scoring framework based on neural networks for speaker verification. The framework employs an autoencoder as its primary structure while three factors are jointly considered in the objective function for speaker discrimination. The first one, relating to the sample reconstruction error, makes the structure essentially a generative model, which benefits to learn most salient and useful properties of the data. Functioning in the middlemost hidden layer, the other two attempt to ensure that utterances spoken by the same speaker are mapped into similar identity codes in the speaker discriminative subspace, where the dispersion of all identity codes are maximized to some extent so as to avoid the effect of over-concentration. Finally, the decision score of each utterance pair is simply computed by cosine similarity of their identity codes. 
Dealing with utterances represented by i-vectors, the results of experiments conducted on the male portion of the core task in the NIST 2010 Speaker Recognition Evaluation (SRE) significantly demonstrate the merits of our approach over the conventional PLDA method.", "title": "" }, { "docid": "c6c04fe37b540df1ab54f31dd01afef6", "text": "A backtracking algorithm for testing a pair of digraphs for isomorphism is presented. The information contained in the distance matrix representation of a graph is used to establish an initial partition of the graph's vertices. This distance matrix information is then applied in a backtracking procedure to reduce the search tree of possible mappings. While the algorithm is not guaranteed to run in polynomial time, it performs efficiently for a large class of graphs.", "title": "" }, { "docid": "154c5c644171c63647e5a1c83ed06440", "text": "Recommender System are new generation internet tool that help user in navigating through information on the internet and receive information related to their preferences. Although most of the time recommender systems are applied in the area of online shopping and entertainment domains like movie and music, yet their applicability is being researched upon in other area as well. This paper presents an overview of the Recommender Systems which are currently working in the domain of online book shopping. This paper also proposes a new book recommender system that combines user choices with not only similar users but other users as well to give diverse recommendation that change over time. The overall architecture of the proposed system is presented and its implementation with a prototype design is described. Lastly, the paper presents empirical evaluation of the system based on a survey reflecting the impact of such diverse recommendations on the user choices. Key-Words: Recommender system; Collaborative filtering; Content filtering; Data mining; Time; Book", "title": "" }, { "docid": "85736b2fd608e3d109ce0f3c46dda9ac", "text": "The WHO (2001) recommends exclusive breast-feeding and delaying the introduction of solid foods to an infant's diet until 6 months postpartum. However, in many countries, this recommendation is followed by few mothers, and earlier weaning onto solids is a commonly reported global practice. Therefore, this prospective, observational study aimed to assess compliance with the WHO recommendation and examine weaning practices, including the timing of weaning of infants, and to investigate the factors that predict weaning at ≤ 12 weeks. From an initial sample of 539 pregnant women recruited from the Coombe Women and Infants University Hospital, Dublin, 401 eligible mothers were followed up at 6 weeks and 6 months postpartum. Quantitative data were obtained on mothers' weaning practices using semi-structured questionnaires and a short dietary history of the infant's usual diet at 6 months. Only one mother (0.2%) complied with the WHO recommendation to exclusively breastfeed up to 6 months. Ninety-one (22.6%) infants were prematurely weaned onto solids at ≤ 12 weeks with predictive factors after adjustment, including mothers' antenatal reporting that infants should be weaned onto solids at ≤ 12 weeks, formula feeding at 12 weeks and mothers' reporting of the maternal grandmother as the principal source of advice on infant feeding. 
Mothers who weaned their infants at ≤ 12 weeks were more likely to engage in other sub-optimal weaning practices, including the addition of non-recommended condiments to their infants' foods. Provision of professional advice and exploring antenatal maternal misperceptions are potential areas for targeted interventions to improve compliance with the recommended weaning practices.", "title": "" }, { "docid": "c15aa2444187dffe2be4636ad00babdd", "text": "Most people have become “big data” producers in their daily life. Our desires, opinions, sentiments, social links as well as our mobile phone calls and GPS track leave traces of our behaviours. To transform these data into knowledge, value is a complex task of data science. This paper shows how the SoBigData Research Infrastructure supports data science towards the new frontiers of big data exploitation. Our research infrastructure serves a large community of social sensing and social mining researchers and it reduces the gap between existing research centres present at European level. SoBigData integrates resources and creates an infrastructure where sharing data and methods among text miners, visual analytics researchers, socio-economic scientists, network scientists, political scientists, humanities researchers can indeed occur. The main concepts related to SoBigData Research Infrastructure are presented. These concepts support virtual and transnational (on-site) access to the resources. Creating and supporting research communities are considered to be of vital importance for the success of our research infrastructure, as well as contributing to train the new generation of data scientists. Furthermore, this paper introduces the concept of exploratory and shows their role in the promotion of the use of our research infrastructure. The exploratories presented in this paper represent also a set of real applications in the context of social mining. Finally, a special attention is given to the legal and ethical aspects. Everything in SoBigData is supervised by an ethical and legal framework.", "title": "" }, { "docid": "00d081e61bfbfa64371b4d9e30fcd452", "text": "In the coming era of social companions, many researches have been pursuing natural dialog interactions and long-term relations between social companions and users. With respect to the quick decrease of user interests after the first few interactions, various emotion and memory models are developed and integrated with social companions for better user engagement. This paper reviews related works in the effort of combining memory and emotion with natural language dialog on social companions. We separate these works into three categories: (1) Affective system with dialog, (2) Task-driven memory with dialog, (3) Chat-driven memory with dialog. In addition, we discussed limitations and challenging issues to be solved. Finally, we also introduced our framework of social companions.", "title": "" }, { "docid": "b4153b7a973b3ca413f944cdd5723033", "text": "A number of Lactobacillus species, Bifidobacterium sp, Saccharomyces boulardii, and some other microbes have been proposed as and are used as probiotic strains, i.e. live microorganisms as food supplement in order to benefit health. 
The health claims range from rather vague as regulation of bowel activity and increasing of well-being to more specific, such as exerting antagonistic effect on the gastroenteric pathogens Clostridium difficile, Campylobacter jejuni, Helicobacter pylori and rotavirus, neutralising food mutagens produced in colon, shifting the immune response towards a Th2 response, and thereby alleviating allergic reactions, and lowering serum cholesterol (Tannock, 2002). Unfortunately, most publications are case reports, uncontrolled studies in humans, or reports of animal or in vitro studies. Whether or not the probiotic strains employed shall be of human origin is a matter of debate but this is not a matter of concern, as long as the strains can be shown to survive the transport in the human gastrointestinal (GI) tract and to colonise the human large intestine. This includes survival in the stressful environment of the stomach - acidic pH and bile - with induction of new genes encoding a number of stress proteins. Since the availability of antioxidants decreases rostrally in the GI tract production of antioxidants by colonic bacteria provides a beneficial effect in scavenging free radicals. LAB strains commonly produce antimicrobial substance(s) with activity against the homologous strain, but LAB strains also often produce microbicidal substances with effect against gastric and intestinal pathogens and other microbes, or compete for cell surface and mucin binding sites. This could be the mechanism behind reports that some probiotic strains inhibit or decrease translocation of bacteria from the gut to the liver. A protective effect against cancer development can be ascribed to binding of mutagens by intestinal bacteria, reduction of the enzymes beta-glucuronidase and beta-glucosidase, and deconjugation of bile acids, or merely by enhancing the immune system of the host. The latter has attracted considerable interest, and LAB have been tested in several clinical trials in allergic diseases. Characteristics ascribed to a probiotic strain are in general strain specific, and individual strains have to be tested for each property. Survival of strains during production, packing and storage of a viable cell mass has to be tested and declared.", "title": "" }, { "docid": "b8429d68d520906656bb612087b1bce6", "text": "Saliva has been advocated as an alternative to serum or plasma for steroid monitoring. Little normative information is available concerning expected concentrations of the major reproductive steroids in saliva during pregnancy and the extended postpartum. Matched serum and saliva specimens controlled for time of day and collected less than 30 minutes apart were obtained in 28 women with normal singleton pregnancies between 32 and 38 weeks of gestation and in 43 women during the first six months postpartum. Concentrations of six steroids (estriol, estradiol, progesterone, testosterone, cortisol, dehydroepiandrosterone) were quantified in saliva by enzyme immunoassay. For most of the steroids examined, concentrations in antepartum saliva showed linear increases near end of gestation, suggesting an increase in the bioavailable hormone component. Observed concentrations were in agreement with the limited data available from previous reports. Modal concentrations of the ovarian steroids were undetectable in postpartum saliva and, when detectable in individual women, approximated early follicular phase values. 
Only low to moderate correlations between the serum and salivary concentrations were found, suggesting that during the peripartum period saliva provides information that is not redundant to serum. Low correlations in the late antepartum may be due to differential rates of change in the total and bioavailable fractions of the circulating steroid in the final weeks of the third trimester as a consequence of dynamic changes in carrier proteins such as corticosteroid binding globulin.", "title": "" }, { "docid": "9df78ef5769ed4da768d1a7b359794ab", "text": "We describe a computer-aided optimization technique for the efficient and reliable design of compact wide-band waveguide septum polarizers (WSP). Wide-band performance is obtained by a global optimization which considers not only the septum section but also several step discontinuities placed before the ridge-to-rectangular bifurcation and the square-to-circular discontinuity. The proposed technique mnakes use of a dynamical optimization procedure which has been tested by designing several WSP operating in different frequency bands. In this work two examples are reported, one operating at Ku band and a very wideband prototype (3.4-4.2 GHz) operating in the C band. The component design, entirely carried out at computer level, has demonstrated significant advantages in terms of development times and no need of post manufacturing adjustments. The very satisfactory agreement between experimental and theoretical results further confirm the validity of the proposed technique.", "title": "" }, { "docid": "da9b9a32db674e5f6366f6b9e2c4ee10", "text": "We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives: a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can greatly differ between them, the network is conditioned on a variable, which can be a culture, a region or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.", "title": "" }, { "docid": "fb729bf4edf25f082a4808bd6bb0961d", "text": "The paper reports some of the reasons behind the low use of Information and Communication Technology (ICT) by teachers. The paper has reviewed a number or studies from different parts of the world and paid greater attention to Saudi Arabia. The literature reveals a number of factors that hinder teachers’ use of ICT. This paper will focus on lack of access to technology, lack of training and lack of time.", "title": "" }, { "docid": "36484e10b0644f01e8adbb3268c20561", "text": "Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural networks, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training framework with a graph-regularised objective, namely Neural Graph Machines, that can combine the power of neural networks and label propagation. This work generalises previous literature on graph-augmented training of neural networks, enabling it to be applied to multiple neural architectures (Feed-forward NNs, CNNs and LSTM RNNs) and a wide range of graphs. 
The new objective allows the neural networks to harness both labeled and unlabeled data by: (a)~allowing the network to train using labeled data as in the supervised setting, (b)~biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs, with a runtime that is linear in the number of edges. The proposed joint training approach convincingly outperforms many existing methods on a wide range of tasks (multi-label classification on social graphs, news categorization, document classification and semantic intent classification), with multiple forms of graph inputs (including graphs with and without node-level features) and using different types of neural networks.", "title": "" }, { "docid": "8c6ec02821d17fbcf79d1a42ed92a971", "text": "OBJECTIVE\nTo explore whether an association exists between oocyte meiotic spindle morphology visualized by polarized light microscopy at the time of intracytoplasmic sperm injection and the ploidy of the resulting embryo.\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\nPrivate IVF clinic.\n\n\nPATIENT(S)\nPatients undergoing preimplantation genetic screening/diagnosis (n = 113 patients).\n\n\nINTERVENTION(S)\nOocyte meiotic spindles were assessed by polarized light microscopy and classified at the time of intracytoplasmic sperm injection as normal, dysmorphic, translucent, telophase, or no visible spindle. Single blastomere biopsy was performed on day 3 of culture for analysis by array comparative genomic hybridization.\n\n\nMAIN OUTCOME MEASURE(S)\nSpindle morphology and embryo ploidy association was evaluated by regression methods accounting for non-independence of data.\n\n\nRESULT(S)\nThe frequency of euploidy in embryos derived from oocytes with normal spindle morphology was significantly higher than all other spindle classifications combined (odds ratio [OR] 1.93, 95% confidence interval [CI] 1.33-2.79). Oocytes with translucent (OR 0.25, 95% CI 0.13-0.46) and no visible spindle morphology (OR 0.35, 95% CI 0.19-0.63) were significantly less likely to result in euploid embryos when compared with oocytes with normal spindle morphology. There was no significant difference between normal and dysmorphic spindle morphology (OR 0.73, 95% CI 0.49-1.08), whereas no telophase spindles resulted in euploid embryos (n = 11). Assessment of spindle morphology was found to be independently associated with embryo euploidy after controlling for embryo quality (OR 1.73, 95% CI 1.16-2.60).\n\n\nCONCLUSION(S)\nOocyte spindle morphology is associated with the resulting embryo's ploidy. Oocytes with normal spindle morphology are significantly more likely to produce euploid embryos compared with oocytes with meiotic spindles that are translucent or not visible.", "title": "" }, { "docid": "b4fa57fec99131cdf0cb6fc4795fce43", "text": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. 
The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.", "title": "" }, { "docid": "490d63de99f1973d5bab4c1a90633d18", "text": "Flows transported across mobile ad hoc wireless networks suffer from route breakups caused by nodal mobility. In a network that aims to support critical interactive real-time data transactions, to provide for the uninterrupted execution of a transaction, or for the rapid transport of a high value file, it is essential to identify robust routes across which such transactions are transported. Noting that route failures can induce long re-routing delays that may be highly interruptive for many applications and message/stream transactions, it is beneficial to configure the routing scheme to send a flow across a route whose lifetime is longer, with sufficiently high probability, than the estimated duration of the activity that it is selected to carry. We evaluate the ability of a mobile ad hoc wireless network to distribute flows across robust routes by introducing the robust throughput measure as a performance metric. The utility gained by the delivery of flow messages is based on the level of interruption experienced by the underlying transaction. As a special case, for certain applications only transactions that are completed without being prematurely interrupted may convey data to their intended users that is of acceptable utility. We describe the mathematical calculation of a network’s robust throughput measure, as well as its robust throughput capacity. We introduce the robust flow admission and routing algorithm (RFAR) to provide for the timely and robust transport of flow transactions across mobile ad hoc wireless net-", "title": "" }, { "docid": "4ad261905326b55a40569ebbc549a67c", "text": "OBJECTIVES\nTo analyze the Spanish experience in an international study which evaluated tocilizumab in patients with rheumatoid arthritis (RA) and an inadequate response to conventional disease-modifying antirheumatic drugs (DMARDs) or tumor necrosis factor inhibitors (TNFis) in a clinical practice setting.\n\n\nMATERIAL AND METHODS\nSubanalysis of 170 patients with RA from Spain who participated in a phase IIIb, open-label, international clinical trial. Patients presented inadequate response to DMARDs or TNFis. They received 8mg/kg of tocilizumab every 4 weeks in combination with a DMARD or as monotherapy during 20 weeks. Safety and efficacy of tocilizumab were analyzed. Special emphasis was placed on differences between failure to a DMARD or to a TNFi and the need to switch to tocilizumab with or without a washout period in patients who had previously received TNFi.\n\n\nRESULTS\nThe most common adverse events were infections (25%), increased total cholesterol (38%) and transaminases (15%). Five patients discontinued the study due to an adverse event. After six months of tocilizumab treatment, 71/50/30% of patients had ACR 20/50/70 responses, respectively. 
A higher proportion of TNFi-naive patients presented an ACR20 response: 76% compared to 64% in the TNFi group with previous washout and 66% in the TNFi group without previous washout.\n\n\nCONCLUSIONS\nSafety results were consistent with previous results in patients with RA and an inadequate response to DMARDs or TNFis. Tocilizumab is more effective in patients who did not respond to conventional DMARDs than in patients who did not respond to TNFis.", "title": "" }, { "docid": "46b8fef519545e285a410d29340b87ad", "text": "The inferior gluteal artery is described in standard anatomy textbooks as contributing to the blood supply of the hip through an anastomosis with the medial femoral circumflex artery. The site(s) of the anastomosis has not been described previously. We undertook an injection study to define the anastomotic connections between these two arteries and to determine whether the inferior gluteal artery could supply the lateral epiphyseal arteries alone. From eight fresh-frozen cadaver pelvic specimens we were able to inject the vessels in 14 hips with latex moulding compound through either the medial femoral circumflex artery or the inferior gluteal artery. Injected vessels around the hip were then carefully exposed and documented photographically. In seven of the eight specimens a clear anastomosis was shown between the two arteries adjacent to the tendon of obturator externus. The terminal vessel arising from this anastomosis was noted to pass directly beneath the posterior capsule of the hip before ascending the superior aspect of the femoral neck and terminating in the lateral epiphyseal vessels. At no point was the terminal vessel found between the capsule and the conjoined tendon. The medial femoral circumflex artery receives a direct supply from the inferior gluteal artery immediately before passing beneath the capsule of the hip. Detailed knowledge of this anatomy may help to explain the development of avascular necrosis after hip trauma, as well as to allow additional safe surgical exposure of the femoral neck and head.", "title": "" }, { "docid": "793cd937ea1fc91e73735b2b8246f1f5", "text": "Using data from a national probability sample of heterosexual U.S. adults (N02,281), the present study describes the distribution and correlates of men’s and women’s attitudes toward transgender people. Feeling thermometer ratings of transgender people were strongly correlated with attitudes toward gay men, lesbians, and bisexuals, but were significantly less favorable. Attitudes toward transgender people were more negative among heterosexual men than women. Negative attitudes were associated with endorsement of a binary conception of gender; higher levels of psychological authoritarianism, political conservatism, and anti-egalitarianism, and (for women) religiosity; and lack of personal contact with sexual minorities. In regression analysis, sexual prejudice accounted for much of the variance in transgender attitudes, but respondent gender, educational level, authoritarianism, anti-egalitarianism, and (for women) religiosity remained significant predictors with sexual prejudice statistically controlled. Implications and directions for future research on attitudes toward transgender people are discussed.", "title": "" } ]
scidocsrr
741db789cc170256224dc080ca2f1ba1
A Corpus for Modeling Word Importance in Spoken Dialogue Transcripts
[ { "docid": "8feb5dce809acf0efb63d322f0526fcf", "text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.", "title": "" }, { "docid": "d2bf33fcd8d1de5cca697ef97e774feb", "text": "The accuracy of Automated Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. Researchers who evaluate ASR performance often focus on improving the Word Error Rate (WER) metric, but WER has been found to have little correlation with human-subject performance on many applications. We propose a new captioning-focused evaluation metric that better predicts the impact of ASR recognition errors on the usability of automatically generated captions for people who are Deaf or Hard of Hearing (DHH). Through a user study with 30 DHH users, we compared our new metric with the traditional WER metric on a caption usability evaluation task. In a side-by-side comparison of pairs of ASR text output (with identical WER), the texts preferred by our new metric were preferred by DHH participants. Further, our metric had significantly higher correlation with DHH participants' subjective scores on the usability of a caption, as compared to the correlation between WER metric and participant subjective scores. This new metric could be used to select ASR systems for captioning applications, and it may be a better metric for ASR researchers to consider when optimizing ASR systems.", "title": "" } ]
[ { "docid": "6075b9f909a5df033d1222685d30b1dc", "text": "Recent advances in high-throughput cDNA sequencing (RNA-seq) can reveal new genes and splice variants and quantify expression genome-wide in a single assay. The volume and complexity of data from RNA-seq experiments necessitate scalable, fast and mathematically principled analysis software. TopHat and Cufflinks are free, open-source software tools for gene discovery and comprehensive expression analysis of high-throughput mRNA sequencing (RNA-seq) data. Together, they allow biologists to identify new genes and new splice variants of known ones, as well as compare gene and transcript expression under two or more conditions. This protocol describes in detail how to use TopHat and Cufflinks to perform such analyses. It also covers several accessory tools and utilities that aid in managing data, including CummeRbund, a tool for visualizing RNA-seq analysis results. Although the procedure assumes basic informatics skills, these tools assume little to no background with RNA-seq analysis and are meant for novices and experts alike. The protocol begins with raw sequencing reads and produces a transcriptome assembly, lists of differentially expressed and regulated genes and transcripts, and publication-quality visualizations of analysis results. The protocol's execution time depends on the volume of transcriptome sequencing data and available computing resources but takes less than 1 d of computer time for typical experiments and ∼1 h of hands-on time.", "title": "" }, { "docid": "72e4d7729031d63f96b686444c9b446e", "text": "In this paper we describe the fundamentals of affective gaming from a physiological point of view, covering some of the origins of the genre, how affective videogames operate and current conceptual and technological capabilities. We ground this overview of the ongoing research by taking an in-depth look at one of our own early biofeedback-based affective games. Based on our analysis of existing videogames and our own experience with affective videogames, we propose a new approach to game design based on several high-level design heuristics: assist me, challenge me and emote me (ACE), a series of gameplay \"tweaks\" made possible through affective videogames.", "title": "" }, { "docid": "cc3f47aba00cb986bdb8234f98726c57", "text": "Gender differences in brain development and in the prevalence of neuropsychiatric disorders such as depression have been reported. Gender differences in human brain might be related to patterns of gene expression. Microarray technology is one useful method for investigation of gene expression in brain. We investigated gene expression, cell types, and regional expression patterns of differentially expressed sex chromosome genes in brain. We profiled gene expression in male and female dorsolateral prefrontal cortex, anterior cingulate cortex, and cerebellum using the Affymetrix oligonucleotide microarray platform. Differentially expressed genes between males and females on the Y chromosome (DBY, SMCY, UTY, RPS4Y, and USP9Y) and X chromosome (XIST) were confirmed using real-time PCR measurements. In situ hybridization confirmed the differential expression of gender-specific genes and neuronal expression of XIST, RPS4Y, SMCY, and UTY in three brain regions examined. The XIST gene, which silences gene expression on regions of the X chromosome, is expressed in a subset of neurons. 
Since a subset of neurons express gender-specific genes, neural subpopulations may exhibit a subtle sexual dimorphism at the level of differences in gene regulation and function. The distinctive pattern of neuronal expression of XIST, RPS4Y, SMCY, and UTY and other sex chromosome genes in neuronal subpopulations may possibly contribute to gender differences in prevalence noted for some neuropsychiatric disorders. Studies of the protein expression of these sex-chromosome-linked genes in brain tissue are required to address the functional consequences of the observed gene expression differences.", "title": "" }, { "docid": "0a761fba9fa9246261ca7627ff6afe91", "text": "Compositing is one of the most commonly performed operations in computer graphics. A realistic composite requires adjusting the appearance of the foreground and background so that they appear compatible; unfortunately, this task is challenging and poorly understood. We use statistical and visual perception experiments to study the realism of image composites. First, we evaluate a number of standard 2D image statistical measures, and identify those that are most significant in determining the realism of a composite. Then, we perform a human subjects experiment to determine how the changes in these key statistics influence human judgements of composite realism. Finally, we describe a data-driven algorithm that automatically adjusts these statistical measures in a foreground to make it more compatible with its background in a composite. We show a number of compositing results, and evaluate the performance of both our algorithm and previous work with a human subjects study.", "title": "" }, { "docid": "723cf2a8b6142a7e52a0ff3fb74c3985", "text": "The Internet of Mobile Things (IoMT) requires support for a data lifecycle process ranging from sorting, cleaning and monitoring data streams to more complex tasks such as querying, aggregation, and analytics. Current solutions for stream data management in IoMT have been focused on partial aspects of a data lifecycle process, with special emphasis on sensor networks. This paper aims to address this problem by developing an offline and real-time data lifecycle process that incorporates a layered, data-flow centric, and an edge/cloud computing approach that is needed for handling heterogeneous, streaming and geographicallydispersed IoMT data streams. We propose an end to end architecture to support an instant intra-layer communication that establishes a stream data flow in real-time to respond to immediate data lifecycle tasks at the edge layer of the system. Our architecture also provides offline functionalities for later analytics and visualization of IoMT data streams at the core layer of the system. Communication and process are thus the defining factors in the design of our stream data management solution for IoMT. We describe and evaluate our prototype implementation using real-time transit data feeds and a commercial edge-based platform. Preliminary results are showing the advantages of running data lifecycle tasks at the edge of the network for reducing the volume of data streams that are redundant and should not be transported to the cloud. 
Keywords—stream data lifecycle, edge computing, cloud computing, Internet of Mobile Things, end to end architectures", "title": "" }, { "docid": "335847313ee670dc0648392c91d8567a", "text": "Several large scale data mining applications, such as text categorization and gene expression analysis, involve high-dimensional data that is also inherently directional in nature. Often such data is L2 normalized so that it lies on the surface of a unit hypersphere. Popular models such as (mixtures of) multi-variate Gaussians are inadequate for characterizing such data. This paper proposes a generative mixture-model approach to clustering directional data based on the von Mises-Fisher (vMF) distribution, which arises naturally for data distributed on the unit hypersphere. In particular, we derive and analyze two variants of the Expectation Maximization (EM) framework for estimating the mean and concentration parameters of this mixture. Numerical estimation of the concentration parameters is non-trivial in high dimensions since it involves functional inversion of ratios of Bessel functions. We also formulate two clustering algorithms corresponding to the variants of EM that we derive. Our approach provides a theoretical basis for the use of cosine similarity that has been widely employed by the information retrieval community, and obtains the spherical kmeans algorithm (kmeans with cosine similarity) as a special case of both variants. Empirical results on clustering of high-dimensional text and gene-expression data based on a mixture of vMF distributions show that the ability to estimate the concentration parameter for each vMF component, which is not present in existing approaches, yields superior results, especially for difficult clustering tasks in high-dimensional spaces.", "title": "" }, { "docid": "07cfc30244cb9269861a7db9ad594ad4", "text": "In this paper we report on results from a cross-sectional survey with manufacturers in four typical Chinese industries, i.e., power generating, chemical/petroleum, electrical/electronic and automobile, to evaluate their perceived green supply chain management (GSCM) practices and relate them to closing the supply chain loop. Our findings provide insights into the capabilities of Chinese organizations on the adoption of GSCM practices in different industrial contexts and that these practices are not considered equitably across the four industries. Academic and managerial implications of our findings are discussed. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5c01d28cd3b94c2eba7435ec08f323bf", "text": "Methods to overcome metal artifacts in computed tomography (CT) images have been researched and developed for nearly 40 years. When X-rays pass through a metal object, depending on its size and density, different physical effects will negatively affect the measurements, most notably beam hardening, scatter, noise, and the non-linear partial volume effect. These phenomena severely degrade image quality and hinder the diagnostic power and treatment outcomes in many clinical applications. In this paper, we first review the fundamental causes of metal artifacts, categorize metal object types, and present recent trends in the CT metal artifact reduction (MAR) literature. To improve image quality and recover information about underlying structures, many methods and correction algorithms have been proposed and tested. 
We comprehensively review and categorize these methods into six different classes of MAR: metal implant optimization, improvements to the data acquisition process, data correction based on physics models, modifications to the reconstruction algorithm (projection completion and iterative reconstruction), and image-based post-processing. The primary goals of this paper are to identify the strengths and limitations of individual MAR methods and overall classes, and establish a relationship between types of metal objects and the classes that most effectively overcome their artifacts. The main challenges for the field of MAR continue to be cases with large, dense metal implants, as well as cases with multiple metal objects in the field of view. Severe photon starvation is difficult to compensate for with only software corrections. Hence, the future of MAR seems to be headed toward a combined approach of improving the acquisition process with dual-energy CT, higher energy X-rays, or photon-counting detectors, along with advanced reconstruction approaches. Additional outlooks are addressed, including the need for a standardized evaluation system to compare MAR methods.", "title": "" }, { "docid": "a431c8c717fd4452a9654e59c6974031", "text": "While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/ .", "title": "" }, { "docid": "91d3008dcd6c351d6cc0187c59cad8df", "text": "Peer-to-peer markets such as eBay, Uber, and Airbnb allow small suppliers to compete with traditional providers of goods or services. We view the primary function of these markets as making it easy for buyers to find sellers and engage in convenient, trustworthy transactions. We discuss elements of market design that make this possible, including search and matching algorithms, pricing, and reputation systems. We then develop a simple model of how these markets enable entry by small or flexible suppliers, and the resulting impact on existing firms. Finally, we consider the regulation of peer-to-peer markets, and the economic arguments for different approaches to licensing and certification, data and employment regulation. We appreciate support from the National Science Foundation, the Stanford Institute for Economic Policy Research, the Toulouse Network on Information Technology, and the Alfred P. Sloan Foundation. Einav and Levin: Department of Economics, Stanford University and NBER. Farronato: Harvard Business School. Email: leinav@stanford.edu, chiarafarronato@gmail.com, jdlevin@stanford.edu.", "title": "" }, { "docid": "62d3ed4ab5baeea14ccf93ae1b064dda", "text": "Many challenges are associated with the integration of geographic information systems (GISs) with models in specific applications. One of them is adapting models to the environment of GISs. 
Unique aspects of water resource management problems require a special approach to development of GIS data structures. Expanded development of GIS applications for handling water resources management analysis can be assisted by use of an object oriented approach. In this paper, we model a river basin water allocation problem as a collection of spatial and thematic objects. A conceptual GIS data model is formulated to integrate the physical and logical components of the modeling problem into an operational framework, based on which, extended GIS functions are developed to implement a tight linkage between the GIS and the water resources management model. Through the object-oriented approach, data, models and users interfaces are integrated in the GIS environment, creating great flexibility for modeling and analysis. The concept and methodology described in this paper is also applicable to connecting GIS with models in other fields that have a spatial dimension and hence to which GIS can provide a powerful additional component of the modeler’s tool kit. © 2002 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "3fb2879369216d47d5462db09be970a8", "text": "Automatic synthesis of digital circuits has played a key role in obtaining high-performance designs. While considerable work has been done in the past, emerging device technologies call for a need to re-examine the synthesis approaches, so that better circuits that harness the true power of these technologies can be developed. This paper presents a methodology for synthesis applicable to devices that support ternary logic. We present an algorithm for synthesis that combines a geometrical representation with unary operators of multivalued logic. The geometric representation facilitates scanning appropriately to obtain simple sum-of-products expressions in terms of unary operators. An implementation based on Python is described. The power of the approach lies in its applicability to a wide variety of circuits. The proposed approach leads to the savings of 26% and 22% in transistor-count, respectively, for a ternary full-adder and a ternary content-addressable memory (TCAM) over the best existing designs. Furthermore, the proposed approach requires, on an average, less than 10% of the number of the transistors in comparison with a recent decoder-based design for various ternary benchmark circuits. Extensive HSPICE simulation results show roughly 92% reduction in power-delay product (PDP) for a $12\times 12$ TCAM and 60% reduction in PDP for a 24-ternary digit barrel shifter over recent designs.", "title": "" }, { "docid": "08dab42f86183ffcdcca88735525bddd", "text": "Do GANS (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of (Goodfellow et al 2014) suggested they do, if they were given “sufficiently large” deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al (to appear at ICML 2017) raised doubts whether the same holds when discriminator has finite size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support —in other words, the training objective is unable to prevent mode collapse. The current note reports experiments suggesting that such problems are not merely theoretical. It presents empirical evidence that well-known GANs approaches do learn distributions of fairly low support, and thus presumably are not learning the target distribution. 
The main technical contribution is a new proposed test, based upon the famous birthday paradox, for estimating the support size of the generated distribution.", "title": "" }, { "docid": "aa6502972088385f0d72d5744f43779f", "text": "We are living in a cyber space with an unprecedented rapid expansion of the space and its elements. All interactive information is processed and exchanged via this space. Clearly a well-built cyber security is vital to ensure the security of the cyber space. However the definitions and scopes of both cyber space and cyber security are still not well-defined and this makes it difficult to establish sound security models and mechanisms for protecting this space. Out of existing models, maturity models offer a manageable approach for assessing the security level of a system or organization. The paper first provides a review of various definitions of cyber space and cyber security in order to ascertain a common understanding of the space and its security. The paper investigates existing security maturity models, focusing on their defining characteristics and identifying their strengths and weaknesses. Finally, the paper discusses and suggests measures for a sound and applicable cyber security model.", "title": "" }, { "docid": "e0807a0ee11caa23207d3eb7da6c87b4", "text": "Considering recent advancements and successes in the development of efficient quantum algorithms for electronic structure calculations-alongside impressive results using machine learning techniques for computation-hybridizing quantum computing with machine learning for the intent of performing electronic structure calculations is a natural progression. Here we report a hybrid quantum algorithm employing a restricted Boltzmann machine to obtain accurate molecular potential energy surfaces. By exploiting a quantum algorithm to help optimize the underlying objective function, we obtained an efficient procedure for the calculation of the electronic ground state energy for a small molecule system. Our approach achieves high accuracy for the ground state energy for H2, LiH, H2O at a specific location on its potential energy surface with a finite basis set. With the future availability of larger-scale quantum computers, quantum machine learning techniques are set to become powerful tools to obtain accurate values for electronic structures.", "title": "" }, { "docid": "4260077a3a48f3ed2a71208e2dd68924", "text": "Algorithmic image-based diagnosis and prognosis of neurodegenerative diseases on longitudinal data has drawn great interest from computer vision researchers. The current state-of-the-art models for many image classification tasks are based on the Convolutional Neural Networks (CNN). However, a key challenge in applying CNN to biological problems is that the available labeled training samples are very limited. Another issue for CNN to be applied in computer aided diagnosis applications is that to achieve better diagnosis and prognosis accuracy, one usually has to deal with the longitudinal dataset, i.e., the dataset of images scanned at different time points. Here we argue that an enhanced CNN model with transfer learning for the joint analysis of tasks from multiple time points or regions of interests may have a potential to improve the accuracy of computer aided diagnosis. To reach this goal, we innovate a CNN based deep learning multi-task dictionary learning framework to address the above challenges. 
Firstly, we pretrain CNN on the ImageNet dataset and transfer the knowledge from the pre-trained model to the medical imaging progression representation, generating the features for different tasks. Then, we propose a novel unsupervised learning method, termed Multi-task Stochastic Coordinate Coding (MSCC), for learning different tasks by using shared and individual dictionaries and generating the sparse features required to predict the future cognitive clinical scores. We apply our new model in a publicly available neuroimaging cohort to predict clinical measures with two different feature sets and compare them with seven other state-of-the-art methods. The experimental results show our proposed method achieved superior results.", "title": "" }, { "docid": "af271bf4b478d6b46d53d9df716d75ee", "text": "The mobile technology is an ever evolving concept. The world has seen various generations of mobile technology be it 1G, 2G, 3G or 4G. The fifth generation of mobile technology i.e. 5G is seen as a futuristic notion that would help in solving the issues that are pertaining in the 4G. In this paper we have discussed various security issues of 4G with respect to Wi-max and long term evolution. These issues are discussed at MAC and physical layer level. The security issues are seen in terms of possible attacks, system vulnerabilities and privacy concerns. We have also highlighted how the notions of 5G can be tailored to provide a more secure mobile computing environment. We have considered the futuristic architectural framework for 5G networks in our discussion. The basic concepts and features of the fifth generation technology are explained here. We have also analyzed five pillars of strength for the 5G network security which would work in collaboration with each other to provide a secure mobile computing environment to the user.", "title": "" }, { "docid": "a53065d1cfb1fe898182d540d65d394b", "text": "This paper presents a novel approach for detecting affine invariant interest points. Our method can deal with significant affine transformations including large scale changes. Such transformations introduce significant changes in the point location as well as in the scale and the shape of the neighbourhood of an interest point. Our approach allows to solve for these problems simultaneously. It is based on three key ideas: 1) The second moment matrix computed in a point can be used to normalize a region in an affine invariant way (skew and stretch). 2) The scale of the local structure is indicated by local extrema of normalized derivatives over scale. 3) An affine-adapted Harris detector determines the location of interest points. A multi-scale version of this detector is used for initialization. An iterative algorithm then modifies location, scale and neighbourhood of each point and converges to affine invariant points. For matching and recognition, the image is characterized by a set of affine invariant points; the affine transformation associated with each point allows the computation of an affine invariant descriptor which is also invariant to affine illumination changes. A quantitative comparison of our detector with existing ones shows a significant improvement in the presence of large affine deformations. Experimental results for wide baseline matching show an excellent performance in the presence of large perspective transformations including significant scale changes. 
Results for recognition are very good for a database with more than 5000 images.", "title": "" }, { "docid": "154f19af2518b8e4cd197847214c2410", "text": "This paper presents a generalized i-vector framework with phonetic tokenizations and tandem features for speaker verification as well as language identification. First, the tokens for calculating the zero-order statistics is extended from the MFCC trained Gaussian Mixture Models (GMM) components to phonetic phonemes, 3-grams and tandem feature trained GMM components using phoneme posterior probabilities. Second, given the calculated zero-order statistics (posterior probabilities on tokens), the feature used to calculate the first-order statistics is also extended from MFCC to tandem features and is not necessarily the same feature employed by the tokenizer. Third, the zero-order and first-order statistics vectors are then concatenated and represented by the simplified supervised i-vector approach followed by the standard back end modeling methods. We study different system setups with different tokens and features. Finally, selected effective systems are fused at the score level to further improve the performance. Experimental results are reported on the NIST SRE 2010 common condition 5 female part task and the NIST LRE 2007 closed set 30 seconds task for speaker verification and language identification, respectively. The proposed generalized i-vector framework outperforms the i-vector baseline by relatively 45% in terms of equal error rate (EER) and norm minDCF values.", "title": "" }, { "docid": "79263437dad5927ce3615edd36ca1eab", "text": "This paper gives an insight on how to develop plug-ins (signal processing blocks) for GNU Radio Companion. GRC is on the monitoring computer and does bulk of the signal processing before transmission and after reception. The coding done in order to develop any block is discussed. A block that performs Huffman coding has been built. Huffman coding is a coding technique that gives a prefix code. A block that performs convolution coding at any desired rate using any generator polynomial has also been built. Both Huffman and Convolution coding are done on data stored in file sources by these blocks. This paper thus describes the ease of signal processing that can be attained by developing blocks in demand by changing the C++ and PYTHON codes of the HOWTO package. Being an open source it is available to all, is highly cost effective and is a field with great potential.", "title": "" } ]
scidocsrr
c8e4a79a61c855d7c527cd225c143542
Wechsler Intelligence Scale for Children-V: Test Review.
[ { "docid": "040587526c0fa1fd5ba2a28ee554329f", "text": "Data from 14 nations reveal IQ gains ranging from 5 to 25 points in a single generation. Some of the largest gains occur on culturally reduced tests and tests of fluid intelligence. The Norwegian data show that a nation can make significant gains on a culturally reduced test while suffering losses on other tests. The Dutch data prove the existence of unknown environmental factors so potent that they account for 15 of the 20 points gained. The hypothesis that best fits the results is that IQ tests do not measure intelligence but rather a correlate with a weak causal link to intelligence. This hypothesis can also explain differential trends on various mental tests, such as the combination of IQ gains and Scholastic Aptitude Test losses in the United States.", "title": "" } ]
[ { "docid": "1f81e5e9851b4750aac009da5ae578a1", "text": "This paper describes a method to automatically create dialogue resources annotated with dialogue act information by reusing existing dialogue corpora. Numerous dialogue corpora are available for research purposes and many of them are annotated with dialogue act information that captures the intentions encoded in user utterances. Annotated dialogue resources, however, differ in various respects: data collection settings and modalities used, dialogue task domains and scenarios (if any) underlying the collection, number and roles of dialogue participants involved and dialogue act annotation schemes applied. The presented study encompasses three phases of data-driven investigation. We, first, assess the importance of various types of features and their combinations for effective cross-domain dialogue act classification. Second, we establish the best predictive model comparing various cross-corpora training settings. Finally, we specify models adaptation procedures and explore late fusion approaches to optimize the overall classification decision taking process. The proposed methodology accounts for empirically motivated and technically sound classification procedures that may reduce annotation and training costs significantly.", "title": "" }, { "docid": "9b72d423e13bdd125b3a8c30b40e6b49", "text": "With the increasing popularity of the web, some new web technologies emerged and introduced dynamics to web applications, in comparison to HTML, as a static programming language. JavaScript is the language that provided a dynamic web site which actively communicates with users. JavaScript is used in today's web applications as a client script language and on the server side. The JavaScript language supports the Model View Controller (MVC) architecture that maintains a readable code and clearly separates parts of the program code. The topic of this research is to compare the popular JavaScript frameworks: AngularJS, Ember, Knockout, Backbone. All four frameworks are based on MVC or similar architecture. In this paper, the advantages and disadvantages of each framework, the impact on application speed, the ways of testing such JS applications and ways to improve code security are presented.", "title": "" }, { "docid": "45e4a8bd1689d2f127f20b2e692b56cc", "text": "BACKGROUND\nPreventive health care promotes health and prevents disease or injuries by addressing factors that lead to the onset of a disease, and by detecting latent conditions to reduce or halt their progression. Many risk factors for costly and disabling conditions (such as cardiovascular diseases, cancer, diabetes, and chronic respiratory diseases) can be prevented, yet healthcare systems do not make the best use of their available resources to support this process. 
Mobile phone messaging applications, such as Short Message Service (SMS) and Multimedia Message Service (MMS), could offer a convenient and cost-effective way to support desirable health behaviours for preventive health care.\n\n\nOBJECTIVES\nTo assess the effects of mobile phone messaging interventions as a mode of delivery for preventive health care, on health status and health behaviour outcomes.\n\n\nSEARCH METHODS\nWe searched: the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library 2009, Issue 2), MEDLINE (OvidSP) (January 1993 to June 2009), EMBASE (OvidSP) (January 1993 to June 2009), PsycINFO (OvidSP) (January 1993 to June 2009), CINAHL (EbscoHOST) (January 1993 to June 2009), LILACS (January 1993 to June 2009) and African Health Anthology (January 1993 to June 2009).We also reviewed grey literature (including trial registers) and reference lists of articles.\n\n\nSELECTION CRITERIA\nWe included randomised controlled trials (RCTs), quasi-randomised controlled trials (QRCTs), controlled before-after (CBA) studies, and interrupted time series (ITS) studies with at least three time points before and after the intervention. We included studies using SMS or MMS as a mode of delivery for any type of preventive health care. We only included studies in which it was possible to assess the effects of mobile phone messaging independent of other technologies or interventions.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed all studies against the inclusion criteria, with any disagreements resolved by a third review author. Study design features, characteristics of target populations, interventions and controls, and results data were extracted by two review authors and confirmed by a third author. Primary outcomes of interest were health status and health behaviour outcomes. We also considered patients' and providers' evaluation of the intervention, perceptions of safety, health service utilisation and costs, and potential harms or adverse effects. Because the included studies were heterogeneous in type of condition addressed, intervention characteristics and outcome measures, we did not consider that it was justified to conduct a meta-analysis to derive an overall effect size for the main outcome categories; instead, we present findings narratively.\n\n\nMAIN RESULTS\nWe included four randomised controlled trials involving 1933 participants.For the primary outcome category of health, there was moderate quality evidence from one study that women who received prenatal support via mobile phone messages had significantly higher satisfaction than those who did not receive the messages, both in the antenatal period (mean difference (MD) 1.25, 95% confidence interval (CI) 0.78 to 1.72) and perinatal period (MD 1.19, 95% CI 0.37 to 2.01). Their confidence level was also higher (MD 1.12, 95% CI 0.51 to 1.73) and anxiety level was lower (MD -2.15, 95% CI -3.42 to -0.88) than in the control group in the antenatal period. In this study, no further differences were observed between groups in the perinatal period. 
There was low quality evidence that the mobile phone messaging intervention did not affect pregnancy outcomes (gestational age at birth, infant birth weight, preterm delivery and route of delivery).For the primary outcome category of health behaviour, there was moderate quality evidence from one study that mobile phone message reminders to take vitamin C for preventive reasons resulted in higher adherence (risk ratio (RR) 1.41, 95% CI 1.14 to 1.74). There was high quality evidence from another study that participants receiving mobile phone messaging support had a significantly higher likelihood of quitting smoking than those in a control group at 6 weeks (RR 2.20, 95% CI 1.79 to 2.70) and at 12 weeks follow-up (RR 1.55, 95% CI 1.30 to 1.84). At 26 weeks, there was only a significant difference between groups if, for participants with missing data, the last known value was carried forward. There was very low quality evidence from one study that mobile phone messaging interventions for self-monitoring of healthy behaviours related to childhood weight control did not have a statistically significant effect on physical activity, consumption of sugar-sweetened beverages or screen time.For the secondary outcome of acceptability, there was very low quality evidence from one study that user evaluation of the intervention was similar between groups. There was moderate quality evidence from one study of no difference in adverse effects of the intervention, measured as rates of pain in the thumb or finger joints, and car crash rates.None of the studies reported the secondary outcomes of health service utilisation or costs of the intervention.\n\n\nAUTHORS' CONCLUSIONS\nWe found very limited evidence that in certain cases mobile phone messaging interventions may support preventive health care, to improve health status and health behaviour outcomes. However, because of the low number of participants in three of the included studies, combined with study limitations of risk of bias and lack of demonstrated causality, the evidence for these effects is of low to moderate quality. The evidence is of high quality only for interventions aimed at smoking cessation. Furthermore, there are significant information gaps regarding the long-term effects, risks and limitations of, and user satisfaction with, such interventions.", "title": "" }, { "docid": "205880d3205cb0f4844c20dcf51c4890", "text": "Recently, deep networks were proved to be more effective than shallow architectures to face complex real–world applications. However, theoretical results supporting this claim are still few and incomplete. In this paper, we propose a new topological measure to study how the depth of feedforward networks impacts on their ability of implementing high complexity functions. Upper and lower bounds on network complexity are established, based on the number of hidden units and on their activation functions, showing that deep architectures are able, with the same number of resources, to address more difficult classification problems.", "title": "" }, { "docid": "43b18a9fe6c1c67109ea7ee27285714b", "text": "Nonlinear dimensionality reduction methods have demonstrated top-notch performance in many pattern recognition and image classification tasks. Despite their popularity, they suffer from highly expensive time and memory requirements, which render them inapplicable to large-scale datasets. To leverage such cases we propose a new method called “Path-Based Isomap”. 
Similar to Isomap, we exploit geodesic paths to find the low-dimensional embedding. However, instead of preserving pairwise geodesic distances, the low-dimensional embedding is computed via a path-mapping algorithm. Due to the much fewer number of paths compared to number of data points, a significant improvement in time and memory complexity with a comparable performance is achieved. The method demonstrates state-of-the-art performance on well-known synthetic and real-world datasets, as well as in the presence of noise.", "title": "" }, { "docid": "bb1d208ad8f31e59ecba7eea35dcff8a", "text": "Over the past two decades, the molecular machinery that underlies autophagic responses has been characterized with ever increasing precision in multiple model organisms. Moreover, it has become clear that autophagy and autophagy-related processes have profound implications for human pathophysiology. However, considerable confusion persists about the use of appropriate terms to indicate specific types of autophagy and some components of the autophagy machinery, which may have detrimental effects on the expansion of the field. Driven by the overt recognition of such a potential obstacle, a panel of leading experts in the field attempts here to define several autophagy-related terms based on specific biochemical features. The ultimate objective of this collaborative exchange is to formulate recommendations that facilitate the dissemination of knowledge within and outside the field of autophagy research.", "title": "" }, { "docid": "c48d4bd9d5fde3fa61e600449411fd25", "text": "Shape-From-Silhouette (SFS), also known as Visual Hull (VH) construction, is a popular 3D reconstruction method which estimates the shape of an object from multiple silhouette images. The original SFS formulation assumes that all of the silhouette images are captured either at the same time or while the object is static. This assumption is violated when the object moves or changes shape. Hence the use of SFS with moving objects has been restricted to treating each time instant sequentially and independently. Recently we have successfully extended the traditional SFS formulation to refine the shape of a rigidly moving object over time. Here we further extend SFS to apply to dynamic articulated objects. Given silhouettes of a moving articulated object, the process of recovering the shape and motion requires two steps: (1) correctly segmenting (points on the boundary of) the silhouettes to each articulated part of the object, (2) estimating the motion of each individual part using the segmented silhouette. In this paper, we propose an iterative algorithm to solve this simultaneous assignment and alignment problem. Once we have estimated the shape and motion of each part of the object, the articulation points between each pair of rigid parts are obtained by solving a simple motion constraint between the connected parts. To validate our algorithm, we first apply it to segment the different body parts and estimate the joint positions of a person. The acquired kinematic (shape and joint) information is then used to track the motion of the person in new video sequences.", "title": "" }, { "docid": "b06dcdb662a8d55219c9ae1c7e507987", "text": "Most programs today are written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support. 
For example, a teacher might write a grading spreadsheet to save time grading, or an interaction designer might use an interface builder to test some user interface design ideas. Although these end-user programmers may not have the same goals as professional developers, they do face many of the same software engineering challenges, including understanding their requirements, as well as making decisions about design, reuse, integration, testing, and debugging. This article summarizes and classifies research on these activities, defining the area of End-User Software Engineering (EUSE) and related terminology. The article then discusses empirical research about end-user software engineering activities and the technologies designed to support them. The article also addresses several crosscutting issues in the design of EUSE tools, including the roles of risk, reward, and domain complexity, and self-efficacy in the design of EUSE tools and the potential of educating users about software engineering principles.", "title": "" }, { "docid": "1bd2fb70817734ec1a0e96d67ca5daaf", "text": "This paper proposed a new detection and prevention system against DDoS (Distributed Denial of Service) attack in SDN (software defined network) architecture, FL-GUARD (Floodlight-based guard system). Based on characteristics of SDN and centralized control, etc., FL-GUARD applies dynamic IP address binding to solve the problem of IP spoofing, and uses 3.3.2 C-SVM algorithm to detect attacks, and finally take advantage of the centralized control of software-defined network to issue flow tables to block attacks at the source port. The experiment results show the effectiveness of our system. The modular design of FL-GUARD lays a good foundation for the future improvement.", "title": "" }, { "docid": "92c91a8e9e5eec86f36d790dec8020e7", "text": "Aspect-based opinion mining, which aims to extract aspects and their corresponding ratings from customers reviews, provides very useful information for customers to make purchase decisions. In the past few years several probabilistic graphical models have been proposed to address this problem, most of them based on Latent Dirichlet Allocation (LDA). While these models have a lot in common, there are some characteristics that distinguish them from each other. These fundamental differences correspond to major decisions that have been made in the design of the LDA models. While research papers typically claim that a new model outperforms the existing ones, there is normally no \"one-size-fits-all\" model. In this paper, we present a set of design guidelines for aspect-based opinion mining by discussing a series of increasingly sophisticated LDA models. We argue that these models represent the essence of the major published methods and allow us to distinguish the impact of various design decisions. We conduct extensive experiments on a very large real life dataset from Epinions.com (500K reviews) and compare the performance of different models in terms of the likelihood of the held-out test set and in terms of the accuracy of aspect identification and rating prediction.", "title": "" }, { "docid": "888efce805d5271f0b6571748793c4c6", "text": "Pedagogical changes and new models of delivering educational content should be considered in the effort to address the recommendations of the 2007 Institute of Medicine report and Benner's recommendations on the radical transformation of nursing. 
Transition to the nurse anesthesia practice doctorate addresses the importance of these recommendations, but educational models and specific strategies on how to implement changes in educational models and systems are still emerging. The flipped classroom (FC) is generating a considerable amount of buzz in academic circles. The FC is a pedagogical model that employs asynchronous video lectures, reading assignments, practice problems, and other digital, technology-based resources outside the classroom, and interactive, group-based, problem-solving activities in the classroom. This FC represents a unique combination of constructivist ideology and behaviorist principles, which can be used to address the gap between didactic education and clinical practice performance. This article reviews recent evidence supporting use of the FC in health profession education and suggests ways to implement the FC in nurse anesthesia educational programs.", "title": "" }, { "docid": "f031bd5139a31ac327c61ce3d306a376", "text": "In this paper we give an overview of the Tri-lingual Entity Discovery and Linking (EDL) task at the Knowledge Base Population (KBP) track at TAC2017, and of the Ten Low Resource Language EDL Pilot. We will summarize several new and effective research directions including multi-lingual common space construction for cross-lingual knowledge transfer, rapid approaches for silver-standard training data generation and joint entity and word representation. We will also sketch out remaining challenges and future research directions.", "title": "" }, { "docid": "e0fbfac63b894c46e3acda86adb67053", "text": "OBJECTIVE\nTo investigate the effectiveness of acupuncture compared with minimal acupuncture and with no acupuncture in patients with tension-type headache.\n\n\nDESIGN\nThree armed randomised controlled multicentre trial.\n\n\nSETTING\n28 outpatient centres in Germany.\n\n\nPARTICIPANTS\n270 patients (74% women, mean age 43 (SD 13) years) with episodic or chronic tension-type headache.\n\n\nINTERVENTIONS\nAcupuncture, minimal acupuncture (superficial needling at non-acupuncture points), or waiting list control. Acupuncture and minimal acupuncture were administered by specialised physicians and consisted of 12 sessions per patient over eight weeks.\n\n\nMAIN OUTCOME MEASURE\nDifference in numbers of days with headache between the four weeks before randomisation and weeks 9-12 after randomisation, as recorded by participants in headache diaries.\n\n\nRESULTS\nThe number of days with headache decreased by 7.2 (SD 6.5) days in the acupuncture group compared with 6.6 (SD 6.0) days in the minimal acupuncture group and 1.5 (SD 3.7) days in the waiting list group (difference: acupuncture v minimal acupuncture, 0.6 days, 95% confidence interval -1.5 to 2.6 days, P = 0.58; acupuncture v waiting list, 5.7 days, 3.9 to 7.5 days, P < 0.001). The proportion of responders (at least 50% reduction in days with headache) was 46% in the acupuncture group, 35% in the minimal acupuncture group, and 4% in the waiting list group.\n\n\nCONCLUSIONS\nThe acupuncture intervention investigated in this trial was more effective than no treatment but not significantly more effective than minimal acupuncture for the treatment of tension-type headache.\n\n\nTRIAL REGISTRATION NUMBER\nISRCTN9737659.", "title": "" }, { "docid": "08084de7a702b87bd8ffc1d36dbf67ea", "text": "In recent years, the mobile data traffic is increasing and many more frequency bands have been employed in cellular handsets. 
A simple π type tunable band elimination filter (BEF) with switching function has been developed using a wideband tunable surface acoustic wave (SAW) resonator circuit. The frequency of BEF is tuned approximately 31% by variable capacitors without spurious. In LTE low band, the arrangement of TX and RX frequencies is to be reversed in Band 13, 14 and 20 compared with the other bands. The steep edge slopes of the developed filter can be exchanged according to the resonance condition and switching. With combining the TX and RX tunable BEFs and the small sized broadband circulator, a new tunable duplexer has been fabricated, and its TX-RX isolation is proved to be more than 50dB in LTE low band operations.", "title": "" }, { "docid": "f154fb6af73bc0673d208716f8b77d72", "text": "Deep autoencoder networks have successfully been applied in unsupervised dimension reduction. The autoencoder has a \"bottleneck\" middle layer of only a few hidden units, which gives a low dimensional representation for the data when the full network is trained to minimize reconstruction error. We propose using a deep bottlenecked neural network in supervised dimension reduction. Instead of trying to reproduce the data, the network is trained to perform classification. Pretraining with restricted Boltzmann machines is combined with supervised finetuning. Finetuning with supervised cost functions has been done, but with cost functions that scale quadratically. Training a bottleneck classifier scales linearly, but still gives results comparable to or sometimes better than two earlier supervised methods.", "title": "" }, { "docid": "be73344151ac52835ba9307e363f36d9", "text": "BACKGROUND AND OBJECTIVE\nSmoking is the largest preventable cause of death and diseases in the developed world, and advances in modern electronics and machine learning can help us deliver real-time intervention to smokers in novel ways. In this paper, we examine different machine learning approaches to use situational features associated with having or not having urges to smoke during a quit attempt in order to accurately classify high-urge states.\n\n\nMETHODS\nTo test our machine learning approaches, specifically, Bayes, discriminant analysis and decision tree learning methods, we used a dataset collected from over 300 participants who had initiated a quit attempt. The three classification approaches are evaluated observing sensitivity, specificity, accuracy and precision.\n\n\nRESULTS\nThe outcome of the analysis showed that algorithms based on feature selection make it possible to obtain high classification rates with only a few features selected from the entire dataset. The classification tree method outperformed the naive Bayes and discriminant analysis methods, with an accuracy of the classifications up to 86%. These numbers suggest that machine learning may be a suitable approach to deal with smoking cessation matters, and to predict smoking urges, outlining a potential use for mobile health applications.\n\n\nCONCLUSIONS\nIn conclusion, machine learning classifiers can help identify smoking situations, and the search for the best features and classifier parameters significantly improves the algorithms' performance. In addition, this study also supports the usefulness of new technologies in improving the effect of smoking cessation interventions, the management of time and patients by therapists, and thus the optimization of available health care resources. 
Future studies should focus on providing more adaptive and personalized support to people who really need it, in a minimum amount of time by developing novel expert systems capable of delivering real-time interventions.", "title": "" }, { "docid": "6d285e0e8450791f03f95f58792c8f3c", "text": "Basic psychology research suggests the possibility that confessions-a potent form of incrimination-may taint other evidence, thereby creating an appearance of corroboration. To determine if this laboratory-based phenomenon is supported in the high-stakes world of actual cases, we conducted an archival analysis of DNA exoneration cases from the Innocence Project case files. Results were consistent with the corruption hypothesis: Multiple evidence errors were significantly more likely to exist in false-confession cases than in eyewitness cases; in order of frequency, false confessions were accompanied by invalid or improper forensic science, eyewitness identifications, and snitches and informants; and in cases containing multiple errors, confessions were most likely to have been obtained first. We believe that these findings underestimate the problem and have important implications for the law concerning pretrial corroboration requirements and the principle of \"harmless error\" on appeal.", "title": "" }, { "docid": "a0dad0be3da6f4c7672427924036d904", "text": "About fifteen years ago, I wrote a paper on security problems in the TCP/IP protocol suite. In particular, I focused on protocol-level issues, rather than implementation flaws. It is instructive to look back at that paper, to see where my focus and my predictions were accurate, where I was wrong, and where dangers have yet to happen. This is a reprint of the original paper, with added commentary.", "title": "" }, { "docid": "aa60d0d73efdf21adcc95c6ad7a7dbc3", "text": "While hardware obfuscation has been used in industry for many years, very few scientific papers discuss layout-level obfuscation. The main aim of this paper is to start a discussion about hardware obfuscation in the academic community and point out open research problems. In particular, we introduce a very flexible layout-level obfuscation tool that we use as a case study for hardware obfuscation. In this obfuscation tool, a small custom-made obfuscell is used in conjunction with a standard cell to build a new obfuscated standard cell library called Obfusgates. This standard cell library can be used to synthesize any HDL code with standard synthesis tools, e.g. Synopsis Design Compiler. However, only obfuscating the functionality of individual gates is not enough. Not only the functionality of individual gates, but also their connectivity, leaks important information about the design. In our tool we therefore designed the obfuscation gates to include a large number of \"dummy wires\". Due to these dummy wires, the connectivity of the gates in addition to their logic functionality is obfuscated. We argue that this aspect of obfuscation is of great importance in practice and that there are many interesting open research questions related to this.", "title": "" }, { "docid": "8115fddcf7bd64ad0976619f0a51e5a8", "text": "Current research in content-based semantic image understanding is largely confined to exemplar-based approaches built on low-level feature extraction and classification. The ability to extract both low-level and semantic features and perform knowledge integration of different types of features is expected to raise semantic image understanding to a new level. 
Belief networks, or Bayesian networks (BN), have proven to be an effective knowledge representation and inference engine in artificial intelligence and expert systems research. Their effectiveness is due to the ability to explicitly integrate domain knowledge in the network structure and to reduce a joint probability distribution to conditional independence relationships. In this paper, we present a general-purpose knowledge integration framework that employs BN in integrating both low-level and semantic features. The efficacy of this framework is demonstrated via three applications involving semantic understanding of pictorial images. The first application aims at detecting main photographic subjects in an image, the second aims at selecting the most appealing image in an event, and the third aims at classifying images into indoor or outdoor scenes. With these diverse examples, we demonstrate that effective inference engines can be built within this powerful and flexible framework according to specific domain knowledge and available training data to solve inherently uncertain vision problems. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
52cd6d7d61d23ad6857c32ab4a8b159c
Women's partnered orgasm consistency is associated with greater duration of penile-vaginal intercourse but not of foreplay.
[ { "docid": "172aaf47ee3f89818abba35a463ecc76", "text": "I examined the relationship of recalled and diary recorded frequency of penile-vaginal intercourse (FSI), noncoital partnered sexual activity, and masturbation to measured waist and hip circumference in 120 healthy adults aged 19-38. Slimmer waist (in men and in the sexes combined) and slimmer hips (in men and women) were associated with greater FSI. Slimmer waist and hips were associated with rated importance of intercourse for men. Noncoital partnered sexual activity had a less consistent association with slimness. Slimmer waist and hips were associated with less masturbation (in men and in the sexes combined). I discuss the results in terms of differences between different sexual behaviors, attractiveness, emotional relatedness, physical sensitivity, sexual dysfunction, sociobiology, psychopharmacological aspects of excess fat and carbohydrate consumption, and implications for sex therapy.", "title": "" } ]
[ { "docid": "77233d4f7a7bb0150b5376c7bb93c108", "text": "In-filled frame structures are commonly used in buildings, even in those located in seismically active regions. Present codes, unfortunately, do not have adequate guidance for treating the modelling, analysis and design of in-filled frame structures. This paper addresses this need and first develops an appropriate technique for modelling the infill-frame interface and then uses it to study the seismic response of in-filled frame structures. Finite element time history analyses under different seismic records have been carried out and the influence of infill strength, openings and soft-storey phenomenon are investigated. Results in terms of tip deflection, fundamental period, inter-storey drift ratio and stresses are presented and they will be useful in the seismic design of in-filled frame structures.", "title": "" }, { "docid": "dcc56a831c040a1ac8e08fcd177962d1", "text": "Clothing image recognition has recently received considerable attention from many communities, such as multimedia information processing and computer vision, due to its commercial and social applications. However, the large variations in clothing images’ appearances and styles and their complicated formation conditions make the problem challenging. In addition, a generic treatment with convolutional neural networks (CNNs) cannot provide a satisfactory solution considering the training time and recognition performance. Therefore, how to balance those two factors for clothing image recognition is an interesting problem. Motivated by the fast training and straightforward solutions exhibited by extreme learning machines (ELMs), in this paper, we propose a recognition framework that is based on multiple sources of features and ELM neural networks. In this framework, three types of features are first extracted, including CNN features with pre-trained networks, histograms of oriented gradients and color histograms. Second, those low-level features are concatenated and taken as the inputs to an autoencoder version of the ELM for deep feature-level fusion. Third, we propose an ensemble of adaptive ELMs for decision-level fusion using the previously obtained feature-level fusion representations. Extensive experiments are conducted on an up-to-date large-scale clothing image data set. Those experimental results show that the proposed framework is competitive and efficient.", "title": "" }, { "docid": "707828ef765512b0b5ebef27ca133504", "text": "In the mammalian myocardium, potassium (K(+)) channels control resting potentials, action potential waveforms, automaticity, and refractory periods and, in most cardiac cells, multiple types of K(+) channels that subserve these functions are expressed. Molecular cloning has revealed the presence of a large number of K(+) channel pore forming (alpha) and accessory (beta) subunits in the heart, and considerable progress has been made recently in defining the relationships between expressed K(+) channel subunits and functional cardiac K(+) channels. To date, more than 20 mouse models with altered K(+) channel expression/functioning have been generated using dominant-negative transgenic and targeted gene deletion approaches. In several instances, the genetic manipulation of K(+) channel subunit expression has revealed the role of specific K(+) channel subunit subfamilies or individual K(+) channel subunit genes in the generation of myocardial K(+) channels. In other cases, however, the phenotypic consequences have been unexpected. 
This review summarizes what has been learned from the in situ genetic manipulation of cardiac K(+) channel functioning in the mouse, discusses the limitations of the models developed to date, and explores the likely directions of future research.", "title": "" }, { "docid": "0d1ff6f8cfc8022138565116f832db03", "text": "Suppose X is a uniformly distributed n-dimensional binary vector and Y is obtained by passing X through a binary symmetric channel with crossover probability α. A recent conjecture by Courtade and Kumar postulates that I(f(X); Y ) ≤ 1 - h(α) for any Boolean function f. So far, the best known upper bound was essentially I(f(X); Y ) ≤ (1 - 2α)². In this paper, we derive a new upper bound that holds for all balanced functions, and improves upon the best known previous bound for α > 1/3.", "title": "" }, { "docid": "d3b82bc0ec07047abf4965e982a436cf", "text": "An adaptive control system, using a recurrent cerebellar model articulation controller (RCMAC) and based on a sliding mode technique, is developed for uncertain nonlinear systems. The proposed dynamic structure of RCMAC has superior capability to the conventional static cerebellar model articulation controller in an efficient learning mechanism and dynamic response. Temporal relations are embedded in RCMAC by adding feedback connections in the association memory space so that the RCMAC provides a dynamical structure. The proposed control system consists of an adaptive RCMAC and a compensated controller. The adaptive RCMAC is used to mimic an ideal sliding mode controller, and the compensated controller is designed to compensate for the approximation error between the ideal sliding mode controller and the adaptive RCMAC. The online adaptive laws of the control system are derived based on the Lyapunov stability theorem, so that the stability of the system can be guaranteed. In addition, in order to relax the requirement of the approximation error bound, an estimation law is derived to estimate the error bound. Finally, the simulation and experimental studies demonstrate the effectiveness of the proposed control scheme for the nonlinear systems with unknown dynamic functions", "title": "" }, { "docid": "ef2ef7812c88e7db8010590dd6dc38b4", "text": "A robust adaptive control approach is proposed to solve the consensus problem of multiagent systems. Compared with the previous work, the agent's dynamics includes the uncertainties and external disturbances, which is more practical in real-world applications. Due to the approximation capability of neural networks, the uncertain dynamics is compensated by the adaptive neural network scheme. The effects of the approximation error and external disturbances are counteracted by employing the robustness signal. The proposed algorithm is decentralized because the controller for each agent only utilizes the information of its neighbor agents. By the theoretical analysis, it is proved that the consensus error can be reduced as small as desired. The proposed method is then extended to two cases: agents form a prescribed formation, and agents have the higher order dynamics. Finally, simulation examples are given to demonstrate the satisfactory performance of the proposed method.", "title": "" }, { "docid": "0dd334ac819bfb77094e06dc0c00efee", "text": "How to propagate label information from labeled examples to unlabeled examples over a graph has been intensively studied for a long time. 
Existing graph-based propagation algorithms usually treat unlabeled examples equally, and transmit seed labels to the unlabeled examples that are connected to the labeled examples in a neighborhood graph. However, such a popular propagation scheme is very likely to yield inaccurate propagation, because it falls short of tackling ambiguous but critical data points (e.g., outliers). To this end, this paper treats the unlabeled examples in different levels of difficulties by assessing their reliability and discriminability, and explicitly optimizes the propagation quality by manipulating the propagation sequence to move from simple to difficult examples. In particular, we propose a novel iterative label propagation algorithm in which each propagation alternates between two paradigms, teaching-to-learn and learning-to-teach (TLLT). In the teaching-to-learn step, the learner conducts the propagation on the simplest unlabeled examples designated by the teacher. In the learning-to-teach step, the teacher incorporates the learner’s feedback to adjust the choice of the subsequent simplest examples. The proposed TLLT strategy critically improves the accuracy of label propagation, making our algorithm substantially robust to the values of tuning parameters, such as the Gaussian kernel width used in graph construction. The merits of our algorithm are theoretically justified and empirically demonstrated through experiments performed on both synthetic and real-world data sets.", "title": "" }, { "docid": "7d09c7f94dda81e095b80736e229d00e", "text": "With the constant deepening of research on marine environment simulation and information expression, there are higher and higher requirements for the sense of reality of ocean data visualization results and the real-time interaction in the visualization process. This paper tackle the challenge of key technology of three-dimensional interaction and volume rendering technology based on GPU technology, develops large scale marine hydrological environmental data-oriented visualization software and realizes oceanographic planar graph, contour line rendering, isosurface rendering, factor field volume rendering and dynamic simulation of current field. To express the spatial characteristics and real-time update of massive marine hydrological environmental data better, this study establishes nodes in the scene for the management of geometric objects to realize high-performance dynamic rendering. The system employs CUDA (Computing Unified Device Architecture) parallel computing for the improvement of computation rate, uses NetCDF (Network Common Data Form) file format for data access and applies GPU programming technology to realize fast volume rendering of marine water environmental factors. The visualization software of marine hydrological environment developed can simulate and show properties and change process of marine water environmental factors efficiently and intuitively. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "90d0d75ca8413dad8ffe42b6d064905b", "text": "BACKGROUND\nDebate continues about the consequences of adolescent cannabis use. Existing data are limited in statistical power to examine rarer outcomes and less common, heavier patterns of cannabis use than those already investigated; furthermore, evidence has a piecemeal approach to reporting of young adult sequelae. 
We aimed to provide a broad picture of the psychosocial sequelae of adolescent cannabis use.\n\n\nMETHODS\nWe integrated participant-level data from three large, long-running longitudinal studies from Australia and New Zealand: the Australian Temperament Project, the Christchurch Health and Development Study, and the Victorian Adolescent Health Cohort Study. We investigated the association between the maximum frequency of cannabis use before age 17 years (never, less than monthly, monthly or more, weekly or more, or daily) and seven developmental outcomes assessed up to age 30 years (high-school completion, attainment of university degree, cannabis dependence, use of other illicit drugs, suicide attempt, depression, and welfare dependence). The number of participants varied by outcome (N=2537 to N=3765).\n\n\nFINDINGS\nWe recorded clear and consistent associations and dose-response relations between the frequency of adolescent cannabis use and all adverse young adult outcomes. After covariate adjustment, compared with individuals who had never used cannabis, those who were daily users before age 17 years had clear reductions in the odds of high-school completion (adjusted odds ratio 0·37, 95% CI 0·20-0·66) and degree attainment (0·38, 0·22-0·66), and substantially increased odds of later cannabis dependence (17·95, 9·44-34·12), use of other illicit drugs (7·80, 4·46-13·63), and suicide attempt (6·83, 2·04-22·90).\n\n\nINTERPRETATION\nAdverse sequelae of adolescent cannabis use are wide ranging and extend into young adulthood. Prevention or delay of cannabis use in adolescence is likely to have broad health and social benefits. Efforts to reform cannabis legislation should be carefully assessed to ensure they reduce adolescent cannabis use and prevent potentially adverse developmental effects.\n\n\nFUNDING\nAustralian Government National Health and Medical Research Council.", "title": "" }, { "docid": "18b744209b3918d6636a87feed2597c6", "text": "Robot learning is critically enabled by the availability of appropriate state representations. We propose a robotics-specific approach to learning such state representations. As robots accomplish tasks by interacting with the physical world, we can facilitate representation learning by considering the structure imposed by physics; this structure is reflected in the changes that occur in the world and in the way a robot can effect them. By exploiting this structure in learning, robots can obtain state representations consistent with the aspects of physics relevant to the learning task. We name this prior knowledge about the structure of interactions with the physical world robotic priors. We identify five robotic priors and explain how they can be used to learn pertinent state representations. We demonstrate the effectiveness of this approach in simulated and real robotic experiments with distracting moving objects. We show that our method extracts task-relevant state representations from high-dimensional observations, even in the presence of taskirrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.", "title": "" }, { "docid": "4f57be0a248769b0b8a46b879c42b1e0", "text": "This study tested whether mental training alone can produce a gain in muscular strength. 
Thirty male university athletes, including football, basketball and rugby players, were randomly assigned to perform mental training of their hip flexor muscles, to use weight machines to physically exercise their hip flexors, or to form a control group which received neither mental nor physical training. The hip strength of each group was measured before and after training. Physical strength was increased by 24% through mental practice (p = .008). Strength was also increased through physical training, by 28%, but did not change significantly in the control condition. The strength gain was greatest among the football players given mental training. Mental and physical training produced similar decreases in heart rate, and both yielded a marginal reduction in systolic blood pressure. The results support the related findings of Ranganathan, Siemionow, Liu, Sahgal, and Yue (2004).", "title": "" }, { "docid": "fcf70e8f0a35ae805ec682a0d8cacae2", "text": "One of the most important factors in training object recognition networks using convolutional neural networks (CNN) is the provision of annotated data accompanying human judgment. Particularly, in object detection or semantic segmentation, the annotation process requires considerable human effort. In this paper, we propose a semi-supervised learning (SSL)-based training methodology for object detection, which makes use of automatic labeling of un-annotated data by applying a network previously trained from an annotated dataset. Because an inferred label by the trained network is dependent on the learned parameters, it is often meaningless for re-training the network. To transfer a valuable inferred label to the unlabeled data, we propose a re-alignment method based on co-occurrence matrix analysis that takes into account one-hot-vector encoding of the estimated label and the correlation between the objects in the image. We used an MS-COCO detection dataset to verify the performance of the proposed SSL method and deformable neural networks (D-ConvNets) [1] as an object detector for basic training. The performance of the existing state-of-the-art detectors (D-ConvNets, YOLO v2 [2], and single shot multi-box detector (SSD) [3]) can be improved by the proposed SSL method without using the additional model parameter or modifying the network architecture.", "title": "" }, { "docid": "fb34a0868942928ada71cf8d1c746c19", "text": "We introduce the new Multimodal Named Entity Disambiguation (MNED) task for multimodal social media posts such as Snapchat or Instagram captions, which are composed of short captions with accompanying images. Social media posts bring significant challenges for disambiguation tasks because 1) ambiguity not only comes from polysemous entities, but also from inconsistent or incomplete notations, 2) very limited context is provided with surrounding words, and 3) there are many emerging entities often unseen during training. To this end, we build a new dataset called SnapCaptionsKB, a collection of Snapchat image captions submitted to public and crowd-sourced stories, with named entity mentions fully annotated and linked to entities in an external knowledge base. We then build a deep zeroshot multimodal network for MNED that 1) extracts contexts from both text and image, and 2) predicts correct entity in the knowledge graph embeddings space, allowing for zeroshot disambiguation of entities unseen in training set as well. 
The proposed model significantly outperforms the stateof-the-art text-only NED models, showing efficacy and potentials of the MNED task.", "title": "" }, { "docid": "73e6f03d67508bd2f04b955fc750c18d", "text": "Interleaving is a key component of many digital communication systems involving error correction schemes. It provides a form of time diversity to guard against bursts of errors. Recently, interleavers have become an even more integral part of the code design itself, if we consider for example turbo and turbo-like codes. In a non-cooperative context, such as passive listening, it is a challenging problem to estimate the interleaver parameters. In this paper we propose an algorithm that allows us to estimate the parameters of the interleaver at the output of a binary symmetric channel and to locate the codewords in the interleaved block. This gives us some clues about the interleaving function used.", "title": "" }, { "docid": "a6d0c3a9ca6c2c4561b868baa998dace", "text": "Diprosopus or duplication of the lower lip and mandible is a very rare congenital anomaly. We report this unusual case occurring in a girl who presented to our hospital at the age of 4 months. Surgery and problems related to this anomaly are discussed.", "title": "" }, { "docid": "87748c1fc9dc379c2225c92d2218e278", "text": "If components (denoted by horizontal and vertical axis in Figure 2a) are correlated, then samples (points in Figure 2a) are in a non-spherical shape, then eigenvalues are mutually different. Hence correlation leads to non-uniformity of eigenvalues. Since the eigenvectors are orthogonal by design, it suffices to focus on eigenvalues only. To reduce correlation, we encourage the eigenvalues to be uniform (Figure 2b). Rotation does not affect eigenvalues or uncorrelation. For a component matrix A and rotation matrix R, A>A equals to A>R>RA and they have the same eigendecomposition (say UEU>). Ensuring the eigenvalue matrix E is close to identity implies the latent components are rotations of the orthonormal (and hence uncorrelated) eigenvectors.", "title": "" }, { "docid": "6c784fc34cf7a8e700c67235e05d8cb0", "text": "Fully automatic methods that extract lists of objects from the Web have been studied extensively. Record extraction, the first step of this object extraction process, identifies a set of Web page segments, each of which represents an individual object (e.g., a product). State-of-the-art methods suffice for simple search, but they often fail to handle more complicated or noisy Web page structures due to a key limitation -- their greedy manner of identifying a list of records through pairwise comparison (i.e., similarity match) of consecutive segments. This paper introduces a new method for record extraction that captures a list of objects in a more robust way based on a holistic analysis of a Web page. The method focuses on how a distinct tag path appears repeatedly in the DOM tree of the Web document. Instead of comparing a pair of individual segments, it compares a pair of tag path occurrence patterns (called visual signals) to estimate how likely these two tag paths represent the same list of objects. The paper introduces a similarity measure that captures how closely the visual signals appear and interleave. Clustering of tag paths is then performed based on this similarity measure, and sets of tag paths that form the structure of data records are extracted. 
Experiments show that this method achieves higher accuracy than previous methods.", "title": "" }, { "docid": "c8d56c100db663ba532df4766e458345", "text": "Decomposing sensory measurements into relevant parts is a fundamental prerequisite for solving complex tasks, e.g., in the field of mobile manipulation in domestic environments. In this paper, we present a fast approach to surface reconstruction in range images by means of approximate polygonal meshing. The obtained local surface information and neighborhoods are then used to 1) smooth the underlying measurements, and 2) segment the image into planar regions and other geometric primitives. An evaluation using publicly available data sets shows that our approach does not rank behind state-of-the-art algorithms while allowing to process range images at high frame rates.", "title": "" }, { "docid": "796dc233bbf4e9e063485f26ab7b5b64", "text": "Anomaly detection refers to identifying the patterns in data that deviate from expected behavior. These non-conforming patterns are often termed as outliers, malwares, anomalies or exceptions in different application domains. This paper presents a novel, generic real-time distributed anomaly detection framework for multi-source stream data. As a case study, we have decided to detect anomaly for multi-source VMware-based cloud data center. The framework monitors VMware performance stream data (e.g., CPU load, memory usage, etc.) continuously. It collects these data simultaneously from all the VMwares connected to the network. It notifies the resource manager to reschedule its resources dynamically when it identifies any abnormal behavior of its collected data. We have used Apache Spark, a distributed framework for processing performance stream data and making prediction without any delay. Spark is chosen over a traditional distributed framework (e.g., Hadoop and MapReduce, Mahout, etc.) that is not ideal for stream data processing. We have implemented a flat incremental clustering algorithm to model the benign characteristics in our distributed Spark based framework. We have compared the average processing latency of a tuple during clustering and prediction in Spark with Storm, another distributed framework for stream data processing. We experimentally find that Spark processes a tuple much quicker than Storm on average.", "title": "" } ]
scidocsrr
918bbff49a2dc80244b34fa50592d0dc
Deep Learning: An Introduction for Applied Mathematicians
[ { "docid": "9e243ada78a3920a9af58f9958408399", "text": "The problem of non-iterative one-shot and non-destructive correction of unavoidable mistakes arises in all Artificial Intelligence applications in the real world. Its solution requires robust separation of samples with errors from samples where the system works properly. We demonstrate that in (moderately) high dimension this separation could be achieved with probability close to one by linear discriminants. Based on fundamental properties of measure concentration, we show that for M1-ϑ, where 1>ϑ>0 is a given small constant. Exact values of a,b>0 depend on the probability distribution that determines how the random M-element sets are drawn, and on the constant ϑ. These stochastic separation theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. Theoretical statements are illustrated with numerical examples.", "title": "" }, { "docid": "938395ce421e0fede708e3b4ab7185b5", "text": "This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.", "title": "" } ]
[ { "docid": "e9e7cb42ed686ace9e9785fafd3c72f8", "text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).", "title": "" }, { "docid": "2a059577ca2a186c53ac76c6a3eae82d", "text": "• Talk with faculty members who are best acquainted with the field(s) of study that interest you. • Talk with professionals in the occupations you wish to enter about the types of degrees and/or credentials they hold or recommend for entry or advancement in field. • Browse program websites or look through catalogs and graduate school reference books (some of these are available in the CSC Resource Library) and determine prerequisites, length and scope of program, etc. • Narrow down your choices based on realistic assessment of your strengths and the programs, which will meet your needs.", "title": "" }, { "docid": "06e58f46c989f22037f443ccf38198ce", "text": "Many biological surfaces in both the plant and animal kingdom possess unusual structural features at the micro- and nanometre-scale that control their interaction with water and hence wettability. An intriguing example is provided by desert beetles, which use micrometre-sized patterns of hydrophobic and hydrophilic regions on their backs to capture water from humid air. As anyone who has admired spider webs adorned with dew drops will appreciate, spider silk is also capable of efficiently collecting water from air. Here we show that the water-collecting ability of the capture silk of the cribellate spider Uloborus walckenaerius is the result of a unique fibre structure that forms after wetting, with the ‘wet-rebuilt’ fibres characterized by periodic spindle-knots made of random nanofibrils and separated by joints made of aligned nanofibrils. These structural features result in a surface energy gradient between the spindle-knots and the joints and also in a difference in Laplace pressure, with both factors acting together to achieve continuous condensation and directional collection of water drops around spindle-knots. Submillimetre-sized liquid drops have been driven by surface energy gradients or a difference in Laplace pressure, but until now neither force on its own has been used to overcome the larger hysteresis effects that make the movement of micrometre-sized drops more difficult. By tapping into both driving forces, spider silk achieves this task. Inspired by this finding, we designed artificial fibres that mimic the structural features of silk and exhibit its directional water-collecting ability.", "title": "" }, { "docid": "f773798785419625b8f283fc052d4ab2", "text": "The increasing interest in energy storage for the grid can be attributed to multiple factors, including the capital costs of managing peak demands, the investments needed for grid reliability, and the integration of renewable energy sources. Although existing energy storage is dominated by pumped hydroelectric, there is the recognition that battery systems can offer a number of high-value opportunities, provided that lower costs can be obtained. 
The battery systems reviewed here include sodium-sulfur batteries that are commercially available for grid applications, redox-flow batteries that offer low cost, and lithium-ion batteries whose development for commercial electronics and electric vehicles is being applied to grid storage.", "title": "" }, { "docid": "2074ab39d5cec1f9e645ff2ad457f3e3", "text": "[Context and motivation] The current breakthrough of natural language processing (NLP) techniques can provide the requirements engineering (RE) community with powerful tools that can help addressing specific tasks of natural language (NL) requirements analysis, such as traceability, ambiguity detection and requirements classification, to name a few. [Question/problem] However, modern NLP techniques are mainly statistical, and need large NL requirements datasets, to support appropriate training, test and validation of the techniques. The RE community has experimented with NLP since long time, but datasets were often proprietary, or limited to few software projects for which requirements were publicly available. Hence, replication of the experiments and generalization have always been an issue. [Principal idea/results] Our near future commitment is to provide a publicly available NL requirements dataset. [Contribution] To this end, we are collecting requirements documents from the Web, and we are representing them in a common XML format. In this paper, we present the current version of the dataset, together with our agenda concerning formatting, extension, and annotation of the dataset.", "title": "" }, { "docid": "9ca71bbeb4643a6a347050002f1317f5", "text": "In modern society, we are increasingly disconnected from natural light/dark cycles and beset by round-the-clock exposure to artificial light. Light has powerful effects on physical and mental health, in part via the circadian system, and thus the timing of light exposure dictates whether it is helpful or harmful. In their compelling paper, Obayashi et al. (Am J Epidemiol. 2018;187(3):427-434.) offer evidence that light at night can prospectively predict an elevated incidence of depressive symptoms in older adults. Strengths of the study include the longitudinal design and direct, objective assessment of light levels, as well as accounting for multiple plausible confounders during analyses. Follow-up studies should address the study's limitations, including reliance on a global self-report of sleep quality and a 2-night assessment of light exposure that may not reliably represent typical light exposure. In addition, experimental studies including physiological circadian measures will be necessary to determine whether the light effects on depression are mediated through the circadian system or are so-called \"direct\" effects of light. In any case, these exciting findings could inform novel approaches to preventing depressive disorders in older adults.", "title": "" }, { "docid": "58858f0cd3561614f1742fe7b0380861", "text": "This study focuses on how technology can encourage and ease awkwardness-free communications between people in real-world scenarios. We propose a device, The Wearable Aura, able to project a personalized animation onto one's Personal Distance zone. This projection, as an extension of one-self is reactive to user's cognitive status, aware of its environment, context and user's activity. 
Our user study supports the idea that an interactive projection around an individual can indeed benefit the communications with other individuals.", "title": "" }, { "docid": "2705142c78ee84292a13d06a593e99e6", "text": "The objective of this paper is to conduct a critical review of cybersecurity procedures and practices in the water distribution sector. Specifically, this paper provides a characterization of the current state of cybersecurity practice and risk management in drinking water systems. This characterization is critically important due to the number of cyber attacks that have occurred against water systems recently. While organizations such as AWWA (American Water Works Association) have provided guidelines for implementing cybersecurity measures, limited formal research is being done in the field of risk analysis to assess the nature and impact of risks, and ways to mitigate them. Our work illuminates areas of concern: lack of detailed risk guidance and decision making strategies, and a need for a cost-benefit analysis to be performed. Addressing these will allow future research to address risks and decision making strategies. We believe this characterization is the next step towards developing a comprehensive risk assessment methodology that can identify cyber vulnerabilities and prioritize cyber measures in drinking water systems.", "title": "" }, { "docid": "7d314a77d3b853f37b2b3d59d1255af7", "text": "The paper introduces first insights into a methodology for developing eBusiness business models, which was elaborated at evolaris and is currently validated in various business cases. This methodology relies upon a definition of the term business model, which is first examined and upon which prerequisites for such a methodology are presented. A business model is based on a mental representation of certain aspects of the real world that are relevant for the business. Supporting this change of the mental model is therefore a major prerequisite for a methodology for developing business models. This paper demonstrates that it addition, a business model discussion should be theory based, able to handle complex systems, provide a way for risk free experiments and be practically applicable. In order to fulfill the above critieria, the evolaris methodology is grounded on system theory and combines aspects of system dynamics and action research.", "title": "" }, { "docid": "513ae13c6848f3a83c36dc43d34b43a5", "text": "In this paper, we describe the design, analysis, implementation, and operational deployment of a real-time trip information system that provides passengers with the expected fare and trip duration of the taxi ride they are planning to take. This system was built in cooperation with a taxi operator that operates more than 15,000 taxis in Singapore. We first describe the overall system design and then explain the efficient algorithms used to achieve our predictions based on up to 21 months of historical data consisting of approximately 250 million paid taxi trips. We then describe various optimisations (involving region sizes, amount of history, and data mining techniques) and accuracy analysis (involving routes and weather) we performed to increase both the runtime performance and prediction accuracy. 
Our large scale evaluation demonstrates that our system is (a) accurate --- with the mean fare error under 1 Singapore dollar (~ 0.76 US$) and the mean duration error under three minutes, and (b) capable of real-time performance, processing thousands to millions of queries per second. Finally, we describe the lessons learned during the process of deploying this system into a production environment.", "title": "" }, { "docid": "dd5bb8731211666b4f7ab6362d0b62b9", "text": "This paper proposes a new trajectory clustering scheme for objects moving on road networks. A trajectory on road networks can be defined as a sequence of road segments a moving object has passed by. We first propose a similarity measurement scheme that judges the degree of similarity by considering the total length of matched road segments. Then, we propose a new clustering algorithm based on such similarity measurement criteria by modifying and adjusting the FastMap and hierarchical clustering schemes. To evaluate the performance of the proposed clustering scheme, we also develop a trajectory generator considering the fact that most objects tend to move from the starting point to the destination point along their shortest path. The performance result shows that our scheme has the accuracy of over 95%.", "title": "" }, { "docid": "70176231436f713c426028c34a76a6c6", "text": "This paper reports the first millimeter-wave double-balanced up-conversion mixer that is realized using a standard CMOS 90nm process. The circuit integrates passive on-chip baluns for single-ended to differential-ended conversions at the high frequency LO and RF ports and an active balun at the IF port. The passive baluns use a stacked configuration and also provide impedance matching to improve conversion gain. The active balun employs the quasi-symmetric properties of the drain and the source of the MOSFET to provide a balanced output. The mixer has a measured 11dB conversion loss and a LO rejection of 26.5dB. The power consumption is 13.2mW.", "title": "" }, { "docid": "26cecceea22566025c22e66376dbb138", "text": "The development of technologies related to the Internet of Things (IoT) provides a new perspective on applications pertaining to smart cities. Smart city applications focus on resolving issues facing people in everyday life, and have attracted a considerable amount of research interest. The typical issue encountered in such places of daily use, such as stations, shopping malls, and stadiums is crowd dynamics management. Therefore, we focus on crowd dynamics management to resolve the problem of congestion using IoT technologies. Real-time crowd dynamics management can be achieved by gathering information relating to congestion and propose less crowded places. Although many crowd dynamics management applications have been proposed in various scenarios and many models have been devised to this end, a general model for evaluating the control effectiveness of crowd dynamics management has not yet been developed in IoT research. Therefore, in this paper, we propose a model to evaluate the performance of crowd dynamics management applications. In other words, the objective of this paper is to present the proof-of-concept of control effectiveness of crowd dynamics management. Our model uses feedback control theory, and enables an integrated evaluation of the control effectiveness of crowd dynamics management methods under various scenarios. 
We also provide extensive numerical results to verify the effectiveness of the model.", "title": "" }, { "docid": "91979667e2846ab489db4dad8df1df0a", "text": "Performance of speaker identification SID systems is known to degrade rapidly in the presence of mismatch such as noise and channel degradations. This study introduces a novel class of curriculum learning CL based algorithms for noise robust speaker recognition. We introduce CL-based approaches at two stages within a state-of-the-art speaker verification system: at the i-Vector extractor estimation and at the probabilistic linear discriminant PLDA back-end. Our proposed CL-based approaches operate by categorizing the available training data into progressively more challenging subsets using a suitable difficulty criterion. Next, the corresponding training algorithms are initialized with a subset that is closest to a clean noise-free set, and progressively moving to subsets that are more challenging for training as the algorithms progress. We evaluate the performance of our proposed approaches on the noisy and severely degraded data from the DARPA RATS SID task, and show consistent and significant improvement across multiple test sets over a baseline SID framework with a standard i-Vector extractor and multisession PLDA-based back-end. We also construct a very challenging evaluation set by adding noise to the NIST SRE 2010 C5 extended condition trials, where our proposed CL-based PLDA is shown to offer significant improvements over a traditional PLDA based back-end.", "title": "" }, { "docid": "5e4c4a9f298a2eb015ce96fa2c82c2c2", "text": "Tendons are able to respond to mechanical forces by altering their structure, composition, and mechanical properties--a process called tissue mechanical adaptation. The fact that mechanical adaptation is effected by cells in tendons is clearly understood; however, how cells sense mechanical forces and convert them into biochemical signals that ultimately lead to tendon adaptive physiological or pathological changes is not well understood. Mechanobiology is an interdisciplinary study that can enhance our understanding of mechanotransduction mechanisms at the tissue, cellular, and molecular levels. The purpose of this article is to provide an overview of tendon mechanobiology. The discussion begins with the mechanical forces acting on tendons in vivo, tendon structure and composition, and its mechanical properties. Then the tendon's response to exercise, disuse, and overuse are presented, followed by a discussion of tendon healing and the role of mechanical loading and fibroblast contraction in tissue healing. Next, mechanobiological responses of tendon fibroblasts to repetitive mechanical loading conditions are presented, and major cellular mechanotransduction mechanisms are briefly reviewed. Finally, future research directions in tendon mechanobiology research are discussed.", "title": "" }, { "docid": "424fe4ffd8077d390ddee2a05ff5dcea", "text": "A re-emergence of research on EEG-neurofeedback followed controlled evidence of clinical benefits and validation of cognitive/affective gains in healthy participants including correlations in support of feedback learning mediating outcome. Controlled studies with healthy and elderly participants, which have increased exponentially, are reviewed including protocols from the clinic: sensory-motor rhythm, beta1 and alpha/theta ratios, down-training theta maxima, and from neuroscience: upper-alpha, theta, gamma, alpha desynchronisation. 
Outcome gains include sustained attention, orienting and executive attention, the P300b, memory, spatial rotation, RT, complex psychomotor skills, implicit procedural memory, recognition memory, perceptual binding, intelligence, mood and well-being. Twenty-three of the controlled studies report neurofeedback learning indices along with beneficial outcomes, of which eight report correlations in support of a meditation link, results which will be supplemented by further creativity and the performing arts evidence in Part II. Validity evidence from optimal performance studies represents an advance for the neurofeedback field demonstrating that cross fertilisation between clinical and optimal performance domains will be fruitful. Theoretical and methodological issues are outlined further in Part III.", "title": "" }, { "docid": "08134d0d76acf866a71d660062f2aeb8", "text": "Colorization methods using deep neural networks have become a recent trend. However, most of them do not allow user inputs, or only allow limited user inputs (only global inputs or only local inputs), to control the output colorful images. The possible reason is that it’s difficult to differentiate the influence of different kind of user inputs in network training. To solve this problem, we present a novel deep colorization method, which allows simultaneous global and local inputs to better control the output colorized images. The key step is to design an appropriate loss function that can differentiate the influence of input data, global inputs and local inputs. With this design, our method accepts no inputs, or global inputs, or local inputs, or both global and local inputs, which is not supported in previous deep colorization methods. In addition, we propose a global color theme recommendation system to help users determine global inputs. Experimental results shows that our methods can better control the colorized images and generate state-of-art results.", "title": "" }, { "docid": "e91e350cd2e3f385333be9156d38feac", "text": "Mobile devices store a diverse set of private user data and have gradually become a hub to control users' other personal Internet-of-Things devices. Access control on mobile devices is therefore highly important. The widely accepted solution is to protect access by asking for a password. However, password authentication is tedious, e.g., a user needs to input a password every time she wants to use the device. Moreover, existing biometrics such as face, fingerprint, and touch behaviors are vulnerable to forgery attacks. We propose a new touch-based biometric authentication system that is passive and secure against forgery attacks. In our touch-based authentication, a user's touch behaviors are a function of some random \"secret\". The user can subconsciously know the secret while touching the device's screen. However, an attacker cannot know the secret at the time of attack, which makes it challenging to perform forgery attacks even if the attacker has already obtained the user's touch behaviors. We evaluate our touch-based authentication system by collecting data from 25 subjects. Results are promising: the random secrets do not influence user experience and, for targeted forgery attacks, our system achieves 0.18 smaller Equal Error Rates (EERs) than previous touch-based authentication.", "title": "" }, { "docid": "21eddfd81b640fc1810723e93f94ae5d", "text": "R. B. Gnanajothi, Topics in graph theory, Ph. D. thesis, Madurai Kamaraj University, India, 1991. E. M. 
Badr, On the Odd Gracefulness of Cyclic Snakes With Pendant Edges, International journal on applications of graph theory in wireless ad hoc networks and sensor networks (GRAPH-HOC) Vol. 4, No. 4, December 2012. E. M. Badr, M. I. Moussa & K. Kathiresan (2011): Crown graphs and subdivision of ladders are odd graceful, International Journal of Computer Mathematics, 88:17, 3570-3576. A. Rosa, On certain valuation of the vertices of a graph, Theory of Graphs (International Symposium, Rome, July 1966), Gordon and Breach, New York and Dunod Paris (1967) 349-355. A. Solairaju & P. Muruganantham, Even Vertex Gracefulness of Fan Graph,", "title": "" }, { "docid": "09f8641595d946e85f985aa4e3140b37", "text": "Wavelet transform combined with the set partitioning coders (SPC) are the most widely used fingerprint image compression approach. Many different SPC coders have been proposed in the literature to encode the wavelet transform coefficients a common feature of which is trying to maximize the global peak-signal-to-noise ratio (PSNR) at a given bit rate. Unfortunately, they have not considered the local variations of SNR within the compressed fingerprint image; therefore, different regions in the compressed image will have different ridge-valley qualities. This problem causes the verification performance to be decreased because minutiae and other useful features cannot be extracted precisely from the low-bit-rate-compressed fingerprint images. Contrast variation within the original image worsens the problem. This paper deals with those applications of fingerprint image compression in which high compression ratios and preserving or improving the verification performance of the compressed images are the main concern. We propose a compression scheme in which the local-SNR (signal-to-noise ratio) variations within the compressed image are minimized (and thus, general quality is maximized everywhere) by means of an iterative procedure. The proposed procedure can be utilized in conjunction with any SPC coder without the need to modify the SPC coder’s algorithm. In addition, we used image enhancement to further improve the ridge-valley quality as well as the verification performance of the compressed fingerprint images through alleviating the leakage effect. We evaluated the compression and verification performances of some conventional and modern SPC coders including STW, EZW, SPIHT, WDR, and ASWDR combined with the proposed scheme. This evaluation was performed on the FVC2004 dataset with respect to measures including average PSNR curve versus bit rate, verification accuracy, detection error trade-off (DET) curve, and correlation of matching scores versus the average quality of involved fingerprint images. Simulation results showed considerable improvement on verification performance of all examined SPC coders, especially the SPIHT coder, by using the proposed scheme.", "title": "" } ]
scidocsrr
62c2666f78c3d16db14c058e12d2651c
Behavioral Assessment of Emotion Discrimination, Emotion Regulation, and Cognitive Control in Childhood, Adolescence, and Adulthood
[ { "docid": "9beaf6c7793633dceca0c8df775e8959", "text": "The course, antecedents, and implications for social development of effortful control were examined in this comprehensive longitudinal study. Behavioral multitask batteries and parental ratings assessed effortful control at 22 and 33 months (N = 106). Effortful control functions encompassed delaying, slowing down motor activity, suppressing/initiating activity to signal, effortful attention, and lowering voice. Between 22 and 33 months, effortful control improved considerably, its coherence increased, it was stable, and it was higher for girls. Behavioral and parent-rated measures converged. Children's focused attention at 9 months, mothers' responsiveness at 22 months, and mothers' self-reported socialization level all predicted children's greater effortful control. Effortful control had implications for concurrent social development. Greater effortful control at 22 months was linked to more regulated anger, and at 33 months, to more regulated anger and joy and to stronger restraint.", "title": "" } ]
[ { "docid": "f513165fd055b04544dff6eb5b7ec771", "text": "Low power wide area (LPWA) networks are attracting a lot of attention primarily because of their ability to offer affordable connectivity to the low-power devices distributed over very large geographical areas. In realizing the vision of the Internet of Things, LPWA technologies complement and sometimes supersede the conventional cellular and short range wireless technologies in performance for various emerging smart city and machine-to-machine applications. This review paper presents the design goals and the techniques, which different LPWA technologies exploit to offer wide-area coverage to low-power devices at the expense of low data rates. We survey several emerging LPWA technologies and the standardization activities carried out by different standards development organizations (e.g., IEEE, IETF, 3GPP, ETSI) as well as the industrial consortia built around individual LPWA technologies (e.g., LoRa Alliance, Weightless-SIG, and Dash7 alliance). We further note that LPWA technologies adopt similar approaches, thus sharing similar limitations and challenges. This paper expands on these research challenges and identifies potential directions to address them. While the proprietary LPWA technologies are already hitting the market with large nationwide roll-outs, this paper encourages an active engagement of the research community in solving problems that will shape the connectivity of tens of billions of devices in the next decade.", "title": "" }, { "docid": "4ff5953f4c81a6c77f46c66763d791dc", "text": "We propose a system that finds text in natural scenes using a variety of cues. Our novel data-driven method incorporates coarse-to-fine detection of character pixels using convolutional features (Text-Conv), followed by extracting connected components (CCs) from characters using edge and color features, and finally performing a graph-based segmentation of CCs into words (Word-Graph). For Text-Conv, the initial detection is based on convolutional feature maps similar to those used in Convolutional Neural Networks (CNNs), but learned using Convolutional k-means. Convolution masks defined by local and neighboring patch features are used to improve detection accuracy. The Word-Graph algorithm uses contextual information to both improve word segmentation and prune false character/word detections. Different definitions for foreground (text) regions are used to train the detection stages, some based on bounding box intersection, and others on bounding box and pixel intersection. Our system obtains pixel, character, and word detection f-measures of 93.14%, 90.26%, and 86.77% respectively for the ICDAR 2015 Robust Reading Focused Scene Text dataset, out-performing state-of-the-art systems. This approach may work for other detection targets with homogenous color in natural scenes.", "title": "" }, { "docid": "b163fb3faa31f6db35599d32d7946523", "text": "Humans learn how to behave directly through environmental experience and indirectly through rules and instructions. Behavior analytic research has shown that instructions can control behavior, even when such behavior leads to sub-optimal outcomes (Hayes, S. (Ed.). 1989. Rule-governed behavior: cognition, contingencies, and instructional control. Plenum Press.). Here we examine the control of behavior through instructions in a reinforcement learning task known to depend on striatal dopaminergic function. 
Participants selected between probabilistically reinforced stimuli, and were (incorrectly) told that a specific stimulus had the highest (or lowest) reinforcement probability. Despite experience to the contrary, instructions drove choice behavior. We present neural network simulations that capture the interactions between instruction-driven and reinforcement-driven behavior via two potential neural circuits: one in which the striatum is inaccurately trained by instruction representations coming from prefrontal cortex/hippocampus (PFC/HC), and another in which the striatum learns the environmentally based reinforcement contingencies, but is \"overridden\" at decision output. Both models capture the core behavioral phenomena but, because they differ fundamentally on what is learned, make distinct predictions for subsequent behavioral and neuroimaging experiments. Finally, we attempt to distinguish between the proposed computational mechanisms governing instructed behavior by fitting a series of abstract \"Q-learning\" and Bayesian models to subject data. The best-fitting model supports one of the neural models, suggesting the existence of a \"confirmation bias\" in which the PFC/HC system trains the reinforcement system by amplifying outcomes that are consistent with instructions while diminishing inconsistent outcomes.", "title": "" }, { "docid": "4fea04d8f04012b0dbbf45a6ab3a5951", "text": "Nowadays large-scale distributed machine learning systems have been deployed to support various analytics and intelligence services in IT firms. To train a large dataset and derive the prediction/inference model, e.g., a deep neural network, multiple workers are run in parallel to train partitions of the input dataset, and update shared model parameters. In a shared cluster handling multiple training jobs, a fundamental issue is how to efficiently schedule jobs and set the number of concurrent workers to run for each job, such that server resources are maximally utilized and model training can be completed in time. Targeting a distributed machine learning system using the parameter server framework, we design an online algorithm for scheduling the arriving jobs and deciding the adjusted numbers of concurrent workers and parameter servers for each job over its course, to maximize overall utility of all jobs, contingent on their completion times. Our online algorithm design utilizes a primal-dual framework coupled with efficient dual subroutines, achieving good long-term performance guarantees with polynomial time complexity. Practical effectiveness of the online algorithm is evaluated using trace-driven simulation and testbed experiments, which demonstrate its outperformance as compared to commonly adopted scheduling algorithms in today's cloud systems.", "title": "" }, { "docid": "27775805c45a82cbd31fd9a5e93f3df1", "text": "In a dynamic world, mechanisms allowing prediction of future situations can provide a selective advantage. We suggest that memory systems differ in the degree of flexibility they offer for anticipatory behavior and put forward a corresponding taxonomy of prospection. The adaptive advantage of any memory system can only lie in what it contributes for future survival. The most flexible is episodic memory, which we suggest is part of a more general faculty of mental time travel that allows us not only to go back in time, but also to foresee, plan, and shape virtually any specific future event. 
We review comparative studies and find that, in spite of increased research in the area, there is as yet no convincing evidence for mental time travel in nonhuman animals. We submit that mental time travel is not an encapsulated cognitive system, but instead comprises several subsidiary mechanisms. A theater metaphor serves as an analogy for the kind of mechanisms required for effective mental time travel. We propose that future research should consider these mechanisms in addition to direct evidence of future-directed action. We maintain that the emergence of mental time travel in evolution was a crucial step towards our current success.", "title": "" }, { "docid": "2efe399d3896f78c6f152d98aa6d33a0", "text": "We consider the problem of verifying the identity of a distribution: Given the description of a distribution over a discrete support p = (p_1, p_2, ..., p_n), how many samples (independent draws) must one obtain from an unknown distribution, q, to distinguish, with high probability, the case that p = q from the case that the total variation distance (L_1 distance) ||p - q||_1 ≥ ϵ? We resolve this question, up to constant factors, on an instance by instance basis: there exist universal constants c, c' and a function f(p, ϵ) on distributions and error parameters, such that our tester distinguishes p = q from ||p - q||_1 ≥ ϵ using f(p, ϵ) samples with success probability > 2/3, but no tester can distinguish p = q from ||p - q||_1 ≥ c · ϵ when given c' · f(p, ϵ) samples. The function f(p, ϵ) is upperbounded by a multiple of ||p||_{2/3}/ϵ², but is more complicated, and is significantly smaller in some cases when p has many small domain elements, or a single large one. This result significantly generalizes and tightens previous results: since distributions of support at most n have L_{2/3} norm bounded by √n, this result immediately shows that for such distributions, O(√n/ϵ²) samples suffice, tightening the previous bound of O(√n polylog/n⁴) for this class of distributions, and matching the (tight) known results for the case that p is the uniform distribution over support n. The analysis of our very simple testing algorithm involves several hairy inequalities. To facilitate this analysis, we give a complete characterization of a general class of inequalities, generalizing Cauchy-Schwarz, Hölder's inequality, and the monotonicity of L_p norms. Specifically, we characterize the set of sequences (a)_i = a_1, ..., a_r, (b)_i = b_1, ..., b_r, (c)_i = c_1, ..., c_r, for which it holds that for all finite sequences of positive numbers (x)_j = x_1, ... and (y)_j = y_1, ..., ∏_{i=1}^{r} (Σ_j x_j^{a_i} y_j^{b_i})^{c_i} ≥ 1. For example, the standard Cauchy-Schwarz inequality corresponds to the sequences a = (1, 0, 1/2), b = (0, 1, 1/2), c = (1/2, 1/2, -1). Our characterization is of a non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought through trial and error, by hand. 
We do not believe such a characterization has appeared in the literature, and hope its computational nature will be useful to others, and facilitate analyses like the one here.", "title": "" }, { "docid": "6f0283efa932663c83cc2c63d19fd6cf", "text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.", "title": "" }, { "docid": "c94460bfeeec437b751e987f399778c0", "text": "The Steiner packing problem is to find the maximum number of edge-disjoint subgraphs of a given graph G that connect a given set of required points S. This problem is motivated by practical applications in VLSI- layout and broadcasting, as well as theoretical reasons. In this paper, we study this problem and present an algorithm with an asymptotic approximation factor of |S|/4. This gives a sufficient condition for the existence of k edge-disjoint Steiner trees in a graph in terms of the edge-connectivity of the graph. We will show that this condition is the best possible if the number of terminals is 3. At the end, we consider the fractional version of this problem, and observe that it can be reduced to the minimum Steiner tree problem via the ellipsoid algorithm.", "title": "" }, { "docid": "1a91e143f4430b11f3af242d6e07cbba", "text": "Random graph matching refers to recovering the underlying vertex correspondence between two random graphs with correlated edges; a prominent example is when the two random graphs are given by Erdős-Rényi graphs G(n, d n ). This can be viewed as an average-case and noisy version of the graph isomorphism problem. Under this model, the maximum likelihood estimator is equivalent to solving the intractable quadratic assignment problem. This work develops an Õ(nd + n)-time algorithm which perfectly recovers the true vertex correspondence with high probability, provided that the average degree is at least d = Ω(log n) and the two graphs differ by at most δ = O(log−2(n)) fraction of edges. For dense graphs and sparse graphs, this can be improved to δ = O(log−2/3(n)) and δ = O(log−2(d)) respectively, both in polynomial time. The methodology is based on appropriately chosen distance statistics of the degree profiles (empirical distribution of the degrees of neighbors). Before this work, the best known result achieves δ = O(1) and n ≤ d ≤ n for some constant c with an n-time algorithm [BCL18] and δ = Õ((d/n)) and d = Ω̃(n) with a polynomial-time algorithm [DCKG18].", "title": "" }, { "docid": "08731e24a7ea5e8829b03d79ef801384", "text": "A new power-rail ESD clamp circuit designed with PMOS as main ESD clamp device has been proposed and verified in a 65nm 1.2V CMOS process. The new proposed design with adjustable holding voltage controlled by the ESD detection circuit has better immunity against mis-trigger or transient-induced latch-on event. 
The layout area and the standby leakage current of this new proposed design are much superior to that of traditional RC-based power-rail ESD clamp circuit with NMOS as main ESD clamp device.", "title": "" }, { "docid": "29025f061a22aed656e8d24416c52002", "text": "This contribution deals with the Heeger-Bergen pyramid-based texture analysis/synthesis algorithm. It brings a detailed explanation of the original algorithm tested on many characteristic examples. Our analysis reproduces the original results, but also brings a minor improvement concerning non-periodic textures. Inspired by visual perception theories, Heeger and Bergen proposed to characterize a texture by its first-order statistics of both its color and its responses to multiscale and multi-orientation filters, namely the steerable pyramid. The Heeger-Bergen algorithm consists in the following procedure: starting from a white noise image, histogram matchings are performed to the image alternately in the image domain and the steerable pyramid domain, so that the corresponding output histograms match the ones of the input texture. Source Code An on-line demo1 of the Heeger-Bergen pyramid-based texture synthesis algorithm is available. The demo permits to upload a color image to extract a subimage and to run the texture synthesis algorithm on this subimage. The algorithm available in the demo is a slightly improved version treating non-periodic textures by a “periodic+smooth” decomposition [13]. The algorithm works with color textures and is able to synthesize textures with larger size than the input image. The original version of the Heeger-Bergen algorithm (where the boundaries are handled by mirror symmetrization) is optional in the source code. An ANSI C implementation is available for download here2. It is provided with: • An illustrated html documentation; • Source code; This code requires libpng, libfftw3, openmp, and getopt. Compilation and usage instructions are included in the README.txt file of the zip archive. The illustrated HTML documentation can be reproduced from the source code by using doxygen (see the README.txt file of the zip archive for details).", "title": "" }, { "docid": "abc48ae19e2ea1e1bb296ff0ccd492a2", "text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also", "title": "" }, { "docid": "4a6e382b9db87bf5915fec8de4a67b55", "text": "BACKGROUND\nThe aim of the study is to analyze the nature, extensions, and dural relationships of hormonally inactive giant pituitary tumors. The relevance of the anatomic relationships to surgery is analyzed.\n\n\nMETHODS\nThere were 118 cases of hormonally inactive pituitary tumors analyzed with the maximum dimension of more than 4 cm. These cases were surgically treated in our neurosurgical department from 1995 to 2002. Depending on the anatomic extensions and the nature of their meningeal coverings, these tumors were divided into 4 grades. The grades reflected an increasing order of invasiveness of adjacent dural and arachnoidal compartments. The strategy and outcome of surgery and radiotherapy was analyzed for these 4 groups. Average duration of follow-up was 31 months.\n\n\nRESULTS\nThere were 54 giant pituitary tumors, which remained within the confines of sellar dura and under the diaphragma sellae and did not enter into the compartment of cavernous sinus (Grade I). 
Transgression of the medial wall and invasion into the compartment of the cavernous sinus (Grade II) was seen in 38 cases. Elevation of the dura of the superior wall of the cavernous sinus and extension of this elevation into various compartments of brain (Grade III) was observed in 24 cases. Supradiaphragmatic-subarachnoid extension (Grade IV) was seen in 2 patients. The majority of patients were treated by transsphenoidal route.\n\n\nCONCLUSIONS\nGiant pituitary tumors usually have a meningeal cover and extend into well-defined anatomic pathways. Radical surgery by a transsphenoidal route is indicated and possible in Grade I-III pituitary tumors. Such a strategy offers a reasonable opportunity for recovery in vision and a satisfactory postoperative and long-term outcome. Biopsy of the tumor followed by radiotherapy could be suitable for Grade IV pituitary tumors.", "title": "" }, { "docid": "a75e29521b04d5e09228918e4ed560a6", "text": "This study assessed motives for social network site (SNS) use, group belonging, collective self-esteem, and gender effects among older adolescents. Communication with peer group members was the most important motivation for SNS use. Participants high in positive collective self-esteem were strongly motivated to communicate with peer group via SNS. Females were more likely to report high positive collective self-esteem, greater overall use, and SNS use to communicate with peers. Females also posted higher means for group-in-self, passing time, and entertainment. Negative collective self-esteem correlated with social compensation, suggesting that those who felt negatively about their social group used SNS as an alternative to communicating with other group members. Males were more likely than females to report negative collective self-esteem and SNS use for social compensation and social identity gratifications.", "title": "" }, { "docid": "dd32079de1ca0b5cac5b2dc5fc146d17", "text": "In this paper, we propose a new authentication method to prevent authentication vulnerability of Claim Token method of Membership Service provide in Private BlockChain. We chose Hyperledger Fabric v1.0 using JWT authentication method of membership service. TOTP, which generate OTP tokens and user authentication codes that generate additional time-based password on existing authentication servers, has been applied to enforce security and two-factor authentication method to provide more secure services.", "title": "" }, { "docid": "cb1a99cc1bb705d8ad5f26cc9a61e695", "text": "In the smart grid system, dynamic pricing can be an efficient tool for the service provider which enables efficient and automated management of the grid. However, in practice, the lack of information about the customers' time-varying load demand and energy consumption patterns and the volatility of electricity price in the wholesale market make the implementation of dynamic pricing highly challenging. In this paper, we study a dynamic pricing problem in the smart grid system where the service provider decides the electricity price in the retail market. In order to overcome the challenges in implementing dynamic pricing, we develop a reinforcement learning algorithm. To resolve the drawbacks of the conventional reinforcement learning algorithm such as high computational complexity and low convergence speed, we propose an approximate state definition and adopt virtual experience. 
Numerical results show that the proposed reinforcement learning algorithm can effectively work without a priori information of the system dynamics.", "title": "" }, { "docid": "0b67a35902f4a027032e5b9034997342", "text": "In order to make software applications simpler to write and easier to maintain, a software digital signal processing library that performs essential signal and image processing functions is an important part of every DSP developer’s toolset. In general, such a library provides high-level interface and mechanisms, therefore developers only need to known how to use algorithm, not the details of how they work. Then, complex signal transformations become function calls, e.g. C-callable functions. Considering the 2-D convolver function as an example of great significance for DSPs, this work proposes to replace this software function by an emulation on a FPGA initially configured by software programming. Therefore, the exploration of the 2-D convolver’s design space will provide guidelines for the development of a library of DSP-oriented hardware configurations intended to significantly speed-up the performance of general DSP processors. Based on the specific convolver, and considering operators supported in the library as hardware accelerators, a series of trade-offs for efficiently exploiting the bandwidth between the general purpose DSP and the accelerators are proposed. In terms of implementation, this work explores the performance and architectural tradeoffs involved in the design of an FPGA-based 2D convolution coprocessor for the TMS320C40 DSP microprocessor from Texas Instruments. However, the proposed concept is not limited to a particular processor. Copyright  1999 IEEE . This paper is an extended version of a paper accepted in the IEEE VLSI Systems Transaction. The paper will be published in 1999. Personnel use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collectives works for resale or redistribution to servers or lists, or to reuse any copyrithed component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE service Center / 445 Hoes Lane / P.O. Box 1331 / Pistacataway, NJ 08855-1331, USA. Telephone: + Intl. 732562-3966.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "0fd37a459c95b20e3d80021da1bb281d", "text": "Social media data are increasingly used as the source of research in a variety of domains. A typical example is urban analytics, which aims at solving urban problems by analyzing data from different sources including social media. The potential value of social media data in tourism studies, which is one of the key topics in urban research, however has been much less investigated. This paper seeks to understand the relationship between social media dynamics and the visiting patterns of visitors to touristic locations in real-world cases. 
By conducting a comparative study, we demonstrate how social media characterizes touristic locations differently from other data sources. Our study further shows that social media data can provide real-time insights of tourists’ visiting patterns in big events, thus contributing to the understanding of social media data utility in tourism studies.", "title": "" }, { "docid": "3df9e73ce61d6168dba668dc9f02078a", "text": "Web mail search is an emerging topic, which has not been the object of as many studies as traditional Web search. In particular, little is known about the characteristics of mail searchers and of the queries they issue. We study here the characteristics of Web mail searchers, and explore how demographic signals such as location, age, gender, and inferred income, influence their search behavior. We try to understand for instance, whether women exhibit different mail search patterns than men, or whether senior people formulate more precise queries than younger people. We compare our results, obtained from the analysis of a Yahoo Web mail search query log, to similar work conducted in Web and Twitter search. In addition, we demonstrate the value of the user’s personal query log, as well as of the global query log and of the demographic signals, in a key search task: dynamic query auto-completion. We discuss how going beyond users’ personal query logs (their search history) significantly improves the quality of suggestions, in spite of the fact that a user’s mailbox is perceived as being highly personal. In particular, we note the striking value of demographic features for queries relating to companies/organizations, thus verifying our assumption that query completion benefits from leveraging queries issued by “people like me\". We believe that demographics and other such global features can be leveraged in other mail applications, and hope that this work is a first step in this direction.", "title": "" } ]
scidocsrr
8260303b7e590d8e86f700a350832b4e
ChEMBL: a large-scale bioactivity database for drug discovery
[ { "docid": "e9326cb2e3b79a71d9e99105f0259c5a", "text": "Although drugs are intended to be selective, at least some bind to several physiological targets, explaining side effects and efficacy. Because many drug–target combinations exist, it would be useful to explore possible interactions computationally. Here we compared 3,665 US Food and Drug Administration (FDA)-approved and investigational drugs against hundreds of targets, defining each target by its ligands. Chemical similarities between drugs and ligand sets predicted thousands of unanticipated associations. Thirty were tested experimentally, including the antagonism of the β1 receptor by the transporter inhibitor Prozac, the inhibition of the 5-hydroxytryptamine (5-HT) transporter by the ion channel drug Vadilex, and antagonism of the histamine H4 receptor by the enzyme inhibitor Rescriptor. Overall, 23 new drug–target associations were confirmed, five of which were potent (<100 nM). The physiological relevance of one, the drug N,N-dimethyltryptamine (DMT) on serotonergic receptors, was confirmed in a knockout mouse. The chemical similarity approach is systematic and comprehensive, and may suggest side-effects and new indications for many drugs.", "title": "" } ]
[ { "docid": "c6baff0d600c76fac0be9a71b4238990", "text": "Nature has provided rich models for computational problem solving, including optimizations based on the swarm intelligence exhibited by fireflies, bats, and ants. These models can stimulate computer scientists to think nontraditionally in creating tools to address application design challenges.", "title": "" }, { "docid": "d552b6beeea587bc014a4c31cabee121", "text": "Recent successes of neural networks in solving combinatorial problems and games like Go, Poker and others inspire further attempts to use deep learning approaches in discrete domains. In the field of automated planning, the most popular approach is informed forward search driven by a heuristic function which estimates the quality of encountered states. Designing a powerful and easily-computable heuristics however is still a challenging problem on many domains. In this paper, we use machine learning to construct such heuristic automatically. We train a neural network to predict a minimal number of moves required to solve a given instance of Rubik’s cube. We then use the trained network as a heuristic distance estimator with a standard forward-search algorithm and compare the results with other heuristics. Our experiments show that the learning approach is competitive with state-of-the-art and might be the best choice in some use-case scenarios.", "title": "" }, { "docid": "87ded3ada9aa454d8f9a914ef92ccc4a", "text": "We advocate the use of a new distribution family—the transelliptical—for robust inference of high dimensional graphical models. The transelliptical family is an extension of the nonparanormal family proposed by Liu et al. (2009). Just as the nonparanormal extends the normal by transforming the variables using univariate functions, the transelliptical extends the elliptical family in the same way. We propose a nonparametric rank-based regularization estimator which achieves the parametric rates of convergence for both graph recovery and parameter estimation. Such a result suggests that the extra robustness and flexibility obtained by the semiparametric transelliptical modeling incurs almost no efficiency loss. We also discuss the relationship between this work with the transelliptical component analysis proposed by Han and Liu (2012).", "title": "" }, { "docid": "4cdef79370abcd380357c8be92253fa5", "text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.", "title": "" }, { "docid": "bae1f44165387e086868efecf318ecd2", "text": "Clustering graphs under the Stochastic Block Model (SBM) and extensions are well studied. Guarantees of correctness exist under the assumption that the data is sampled from a model. In this paper, we propose a framework, in which we obtain “correctness” guarantees without assuming the data comes from a model. The guarantees we obtain depend instead on the statistics of the data that can be checked. 
We also show that this framework ties in with the existing model-based framework, and that we can exploit results in model-based recovery, as well as strengthen the results existing in that area of research.", "title": "" }, { "docid": "ffa1dcdc856d2400defba36ed155bfdc", "text": "The theory of possibility described in this paper is related to the theory of fuzzy sets by defining the concept of a possibility distribution as a fuzzy restriction which acts as an elastic constraint on the values that may be assigned to a variable. More specifically, if F is a fuzzy subset of a universe of discourse U = {u} which is characterized by its membership function μF, then a proposition of the form \"X is F,\" where X is a variable taking values in U, induces a possibility distribution ΠX which equates the possibility of X taking the value u to μF(u)--the compatibility of u with F. In this way, X becomes a fuzzy variable which is associated with the possibility distribution ΠX in much the same way as a random variable is associated with a probability distribution. In general, a variable may be associated both with a possibility distribution and a probability distribution, with the weak connection between the two expressed as the possibility/probability consistency principle. A thesis advanced in this paper is that the imprecision that is intrinsic in natural languages is, in the main, possibilistic rather than probabilistic in nature. Thus, by employing the concept of a possibility distribution, a proposition, p, in a natural language may be translated into a procedure which computes the probability distribution of a set of attributes which are implied by p. Several types of conditional translation rules are discussed and, in particular, a translation rule for propositions of the form \"X is F is α-possible,\" where α is a number in the interval [0, 1], is formulated and illustrated by examples.", "title": "" }, { "docid": "c2b1dd2d2dd1835ed77cf6d43044eed8", "text": "The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the hand-engineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.", "title": "" }, { "docid": "345f54e3a6d00ecb734de529ed559933", "text": "Size and cost of a switched mode power supply can be reduced by increasing the switching frequency. The maximum switching frequency and the maximum input voltage range, respectively, is limited by the minimum propagated on-time pulse, which is mainly determined by the level shifter speed. At switching frequencies above 10 MHz, a voltage conversion with an input voltage range up to 50 V and output voltages below 5 V requires an on-time of a pulse width modulated signal of less than 5 ns. This cannot be achieved with conventional level shifters.
This paper presents a level shifter circuit, which controls an NMOS power FET on a high-voltage domain up to 50 V. The level shifter was implemented as part of a DCDC converter in a 180 nm BiCMOS technology. Experimental results confirm a propagation delay of 5 ns and on-time pulses of less than 3 ns. An overlapping clamping structure with low parasitic capacitances in combination with a high-speed comparator makes the level shifter also very robust against large coupling currents during high-side transitions as fast as 20 V/ns, verified by measurements. Due to the high dv/dt, capacitive coupling currents can be two orders of magnitude larger than the actual signal current. Depending on the conversion ratio, the presented level shifter enables an increase of the switching frequency for multi-MHz converters towards 100 MHz. It supports high input voltages up to 50 V and it can be applied also to other high-speed applications.", "title": "" }, { "docid": "ae04395194c7079aecd95e3b1efb7b50", "text": "Various methods for the estimation of populations of algae and other small freshwater organisms are described. A method of counting is described in detail. It is basically that of Utermöhl and uses an inverted microscope. If the organisms are randomly distributed, a single count is sufficient to obtain an estimate of their abundance and confidence limits for this estimate, even if pipetting, dilution or concentration are involved. The errors in the actual counting and in converting colony counts to cell numbers are considered and found to be small relative to the random sampling error. Data are also given for a variant of Utermöhl's method using a normal microscope and for a method of using a haemocytometer for the larger plankton algae.", "title": "" }, { "docid": "4ce0ba9266d5a73fb3a120a19510857c", "text": "This paper presents a novel linear time-varying model predictive controller (LTV-MPC) using a sparse clothoid-based path description: a LTV-MPCC. Clothoids are used world-wide in road design since they allow smooth driving associated with low jerk values. The formulation of the MPC controller is based on the fact that the path of a vehicle traveling at low speeds defines a segment of clothoids if the steering angle is chosen to vary piecewise linearly. Therefore, we can compute the vehicle motion as clothoid parameters and translate them to vehicle inputs. We present simulation results that demonstrate the ability of the controller to produce a very comfortable and smooth driving while maintaining a tracking accuracy comparable to that of a regular LTV-MPC. While the regular MPC controllers use path descriptions where waypoints are close to each other, our LTV-MPCC has the ability of using paths described by very sparse waypoints. In this case, each pair of waypoints describes a clothoid segment and the cost function minimization is performed in a more efficient way allowing larger prediction distances to be used. This paper also presents a novel algorithm that addresses the problem of path sparsification using a reduced number of clothoid segments. The path sparsification enables a path description using few waypoints with almost no loss of detail. The detail of the reconstruction is an adjustable parameter of the algorithm. The higher the required detail, the more clothoid segments are used.", "title": "" }, { "docid": "6c07a47e1b691f492a7efa6c64d13e06", "text": "Four studies investigate the relationship between individuals' mood and their reliance on the ease retrieval heuristic. 
Happy participants were consistently more likely to rely on the ease of retrieval heuristic, whereas sad participants were more likely to rely on the activated content. Additional analyses indicate that this pattern is not due to a differential recall (Experiment 2) and that happy participants ceased to rely on the ease of retrieval when the diagnosticity of this information was called into question (Experiment 3). Experiment 4 shows that reliance on the ease of retrieval heuristic resulted in faster judgments than reliance on content, with the former but not the latter being a function of the amount of activated information.", "title": "" }, { "docid": "c749e0a0ae26f95bd8baedfa6e8c5f05", "text": "This paper proposes a new polynomial time constant factor approximation algorithm for a more-a-decade-long open NP-hard problem, the minimum four-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem in unit disk graph UDG with any positive integer <inline-formula> <tex-math notation=\"LaTeX\">$m \\geq 1$ </tex-math></inline-formula> for the first time in the literature. We observe that it is difficult to modify the existing constant factor approximation algorithm for the minimum three-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem to solve the minimum four-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem in UDG due to the structural limitation of Tutte decomposition, which is the main graph theory tool used by Wang <i>et al.</i> to design their algorithm. To resolve this issue, we first reinvent a new constant factor approximation algorithm for the minimum three-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem in UDG and later use this algorithm to design a new constant factor approximation algorithm for the minimum four-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem in UDG.", "title": "" }, { "docid": "2f7944399a1f588d1b11d3cf7846af1c", "text": "Corrosion can cause section loss or cracks in the steel members which is one of the most important causes of deterioration of steel bridges. For some critical components of a steel bridge, it is fatal and could even cause the collapse of the whole bridge. Nowadays the most common approach to steel bridge inspection is visual inspection by inspectors with inspection trucks. This paper mainly presents a climbing robot with magnetic wheels which can move on the surface of steel bridge. Experiment results shows that the climbing robot can move on the steel bridge freely without disrupting traffic to reduce the risks to the inspectors.", "title": "" }, { "docid": "2fcd7e151c658e29cacda5c4f5542142", "text": "The connection between gut microbiota and energy homeostasis and inflammation and its role in the pathogenesis of obesity-related disorders are increasingly recognized. Animals models of obesity connect an altered microbiota composition to the development of obesity, insulin resistance, and diabetes in the host through several mechanisms: increased energy harvest from the diet, altered fatty acid metabolism and composition in adipose tissue and liver, modulation of gut peptide YY and glucagon-like peptide (GLP)-1 secretion, activation of the lipopolysaccharide toll-like receptor-4 axis, and modulation of intestinal barrier integrity by GLP-2. 
Instrumental for gut microbiota manipulation is the understanding of mechanisms regulating gut microbiota composition. Several factors shape the gut microflora during infancy: mode of delivery, type of infant feeding, hospitalization, and prematurity. Furthermore, the key importance of antibiotic use and dietary nutrient composition are increasingly recognized. The role of the Western diet in promoting an obesogenic gut microbiota is being confirmation in subjects. Following encouraging results in animals, several short-term randomized controlled trials showed the benefit of prebiotics and probiotics on insulin sensitivity, inflammatory markers, postprandial incretins, and glucose tolerance. Future research is needed to unravel the hormonal, immunomodulatory, and metabolic mechanisms underlying microbe-microbe and microbiota-host interactions and the specific genes that determine the health benefit derived from probiotics. While awaiting further randomized trials assessing long-term safety and benefits on clinical end points, a healthy lifestyle--including breast lactation, appropriate antibiotic use, and the avoidance of excessive dietary fat intake--may ensure a friendly gut microbiota and positively affect prevention and treatment of metabolic disorders.", "title": "" }, { "docid": "d12d51010fcf4433c5a74a6fbead5cb5", "text": "This paper introduces the power-density and temperature induced issues in the modern on-chip systems. In particular, the emerging Dark Silicon problem is discussed along with critical research challenges. Afterwards, an overview of key research efforts and concepts is presented that leverage dark silicon for performance and reliability optimization. In case temperature constraints are violated, an efficient dynamic thermal management technique is employed.", "title": "" }, { "docid": "3a37bf4ffad533746d2335f2c442a6d6", "text": "Keyphrase extraction is the task of identifying single or multi-word expressions that represent the main topics of a document. In this paper we present TopicRank, a graph-based keyphrase extraction method that relies on a topical representation of the document. Candidate keyphrases are clustered into topics and used as vertices in a complete graph. A graph-based ranking model is applied to assign a significance score to each topic. Keyphrases are then generated by selecting a candidate from each of the topranked topics. We conducted experiments on four evaluation datasets of different languages and domains. Results show that TopicRank significantly outperforms state-of-the-art methods on three datasets.", "title": "" }, { "docid": "95db9ce9faaf13e8ff8d5888a6737683", "text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. 
The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: boydce1@auburn.edu Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:", "title": "" }, { "docid": "6176a2fd4e07d0c72a53c6207af305ca", "text": "At present, Bluetooth Low Energy (BLE) is dominantly used in commercially available Internet of Things (IoT) devices -- such as smart watches, fitness trackers, and smart appliances. Compared to classic Bluetooth, BLE has been simplified in many ways that include its connection establishment, data exchange, and encryption processes. Unfortunately, this simplification comes at a cost. For example, only a star topology is supported in BLE environments and a peripheral (an IoT device) can communicate with only one gateway (e.g. a smartphone, or a BLE hub) at a set time. When a peripheral goes out of range, it loses connectivity to a gateway, and cannot connect and seamlessly communicate with another gateway without user interventions. In other words, BLE connections do not get automatically migrated or handed-off to another gateway. In this paper, we propose a system which brings seamless connectivity to BLE-capable mobile IoT devices in an environment that consists of a network of gateways. 
Our framework ensures that unmodified, commercial off-the-shelf BLE devices seamlessly and securely connect to a nearby gateway without any user intervention.", "title": "" }, { "docid": "79934e1cb9a6c07fb965da9674daeb69", "text": "BACKGROUND\nAtrophic scars can complicate moderate and severe acne. There are, at present, several modalities of treatment with different results. Percutaneous collagen induction (PCI) has recently been proposed as a simple and effective therapeutic option for the management of atrophic scars.\n\n\nOBJECTIVE\nThe aim of our study was to analyze the efficacy and safety of percutaneous collagen induction for the treatment of acne scarring in different skin phototypes.\n\n\nMETHODS & MATERIALS\nA total of 60 patients of skin types phototype I to VI were included in the study. They were divided into three groups before beginning treatment: Group A (phototypes I to II), Group B (phototypes III to V), and Group C (phototypes VI). Each patient had three treatments at monthly intervals. The aesthetic improvement was evaluated by using a Global Aesthetic Improvement Scale (GAIS), and analyzed statistically by computerized image analysis of the patients' photographs. The differences in the GAIS scores in the different time-points of each group were found using the Wilcoxon's test for nonparametric-dependent continuous variables. Computerized image analysis of silicone replicas was used to quantify the irregularity of the surface micro-relief with Fast Fourier Transformation (FFT); average values of gray were obtained along the x- and y-axes. The calculated indexes were the integrals of areas arising from the distribution of pixels along the axes.\n\n\nRESULTS\nAll patients completed the study. The Wilcoxon's test for nonparametric-dependent continuous variables showed a statistically significant (p < 0.05) reduction in severity grade of acne scars at T5 compared to baseline (T1). The analysis of the surface micro-relief performed on skin replicas showed a decrease in the degree of irregularity of skin texture in all three groups of patients, with an average reduction of 31% in both axes after three sessions. No short- or long-term dyschromia was observed.\n\n\nCONCLUSION\nPCI offers a simple and safe modality to improve the appearance of acne scars without risk of dyspigmentation in patient of all skin types.", "title": "" }, { "docid": "9951ef687bdf5f01f8d4a38b1120c459", "text": "Urban ecosystems evolve over time and space as the outcome of dynamic interactions between socio-economic and biophysical processes operating over multiple scales. The ecological resilience of urban ecosystems—the degree to which they tolerate alteration before reorganizing around a new set of structures and processes—is influenced by these interactions. In cities and urbanizing areas fragmentation of natural habitats, simplification and homogenization of species composition, disruption of hydrological systems, and alteration of energy flow and nutrient cycling reduce cross-scale resilience, leaving systems increasingly vulnerable to shifts in system control and structure. Because varied urban development patterns affect the amount and interspersion of built and natural land cover, as well as the human demands on ecosystems differently, we argue that alternative urban patterns (i.e., urban form, land use distribution, and connectivity) generate varied effects on ecosystem dynamics and their ecological resilience. 
We build on urban economics, landscape ecology, population dynamics, and complex system science to propose a conceptual model and a set of hypotheses that explicitly link urban pattern to human and ecosystem functions in urban ecosystems. Drawing on preliminary results from an empirical study of the relationships between urban pattern and bird and aquatic macroinvertebrate diversity in the Puget Sound region, we propose that resilience in urban ecosystems is a function of the patterns of human activities and natural habitats that control and are controlled by both socio-economic and biophysical processes operating at various scales. We discuss the implications of this conceptual model for urban planning and design.", "title": "" } ]
scidocsrr
5d2571d1d08ae27e52fafebc7a084d10
Semantically Consistent Regularization for Zero-Shot Recognition
[ { "docid": "bcb756857adef42264eab0f1361f8be7", "text": "The problem of multi-class boosting is considered. A new fra mework, based on multi-dimensional codewords and predictors is introduced . The optimal set of codewords is derived, and a margin enforcing loss proposed. The resulting risk is minimized by gradient descent on a multidimensional functi onal space. Two algorithms are proposed: 1) CD-MCBoost, based on coordinate des cent, updates one predictor component at a time, 2) GD-MCBoost, based on gradi ent descent, updates all components jointly. The algorithms differ in the w ak learners that they support but are both shown to be 1) Bayes consistent, 2) margi n enforcing, and 3) convergent to the global minimum of the risk. They also red uce to AdaBoost when there are only two classes. Experiments show that both m et ods outperform previous multiclass boosting approaches on a number of data sets.", "title": "" }, { "docid": "a81b08428081cd15e7c705d5a6e79a6f", "text": "Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.", "title": "" }, { "docid": "285cd651dd4c32671df5a002c5011c49", "text": "Due to the dramatic expanse of data categories and the lack of labeled instances, zero-shot learning, which transfers knowledge from observed classes to recognize unseen classes, has started drawing a lot of attention from the research community. In this paper, we propose a semi-supervised max-margin learning framework that integrates the semisupervised classification problem over observed classes and the unsupervised clustering problem over unseen classes together to tackle zero-shot multi-class classification. By further integrating label embedding into this framework, we produce a dual formulation that permits convenient incorporation of auxiliary label semantic knowledge to improve zero-shot learning. We conduct extensive experiments on three standard image data sets to evaluate the proposed approach by comparing to two state-of-the-art methods. Our results demonstrate the efficacy of the proposed framework.", "title": "" }, { "docid": "9bb8a69b500d7d3ab5299262c8f17726", "text": "Collecting training images for all visual categories is not only expensive but also impractical. Zero-shot learning (ZSL), especially using attributes, offers a pragmatic solution to this problem. However, at test time most attribute-based methods require a full description of attribute associations for each unseen class. 
Providing these associations is time consuming and often requires domain specific knowledge. In this work, we aim to carry out attribute-based zero-shot classification in an unsupervised manner. We propose an approach to learn relations that couples class embeddings with their corresponding attributes. Given only the name of an unseen class, the learned relationship model is used to automatically predict the class-attribute associations. Furthermore, our model facilitates transferring attributes across data sets without additional effort. Integrating knowledge from multiple sources results in a significant additional improvement in performance. We evaluate on two public data sets: Animals with Attributes and aPascal/aYahoo. Our approach outperforms state-of the-art methods in both predicting class-attribute associations and unsupervised ZSL by a large margin.", "title": "" } ]
[ { "docid": "62bf93deeb73fab74004cb3ced106bac", "text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, object-oriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object oriented design patterns. Practitioners can use these relationships to help them identity those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.", "title": "" }, { "docid": "a456e0d4a421fbae34cbbb3ca6217fa1", "text": "Software-Defined Networking (SDN) is an emerging network architecture, centralized in the SDN controller entity, that decouples the control plane from the data plane. This controller-based solution allows programmability, and dynamic network reconfigurations, providing decision taking with global knowledge of the network. Currently, there are more than thirty SDN controllers with different features, such as communication protocol version, programming language, and architecture. Beyond that, there are also many studies about controller performance with the goal to identify the best one. However, some conclusions have been unjust because benchmark tests did not follow the same methodology, or controllers were not in the same category. Therefore, a standard benchmark methodology is essential to compare controllers fairly. The standardization can clarify and help us to understand the real behavior and weaknesses of an SDN controller. The main goal of this work-in-progress is to show existing benchmark methodologies, bringing a discussion about the need SDN controller benchmark standardization.", "title": "" }, { "docid": "d6e565c0123049b9e11692b713674ccf", "text": "Nowadays many research is going on for text summarization. Because of increasing information in the internet, these kind of research are gaining more and more attention among the researchers. Extractive text summarization generates a brief summary by extracting proper set of sentences from a document or multiple documents by deep learning. The whole concept is to reduce or minimize the important information present in the documents. The procedure is manipulated by Restricted Boltzmann Machine (RBM) algorithm for better efficiency by removing redundant sentences. The restricted Boltzmann machine is a graphical model for binary random variables. It consist of three layers input, hidden and output layer. The input data uniformly distributed in the hidden layer for operation. The experimentation is carried out and the summary is generated for three different document set from different knowledge domain. The f-measure value is the identifier to the performance of the proposed text summarization method. The top responses of the three different knowledge domain in accordance with the f-measure are 0.85, 1.42 and 1.97 respectively for the three document set.", "title": "" }, { "docid": "888e8f68486c08ffe538c46ba76de85c", "text": "Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query.
Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR.", "title": "" }, { "docid": "01490975c291a64b40484f6d37ea1c94", "text": "Context-aware systems offer entirely new opportunities for application developers and for end users by gathering context data and adapting systems’ behavior accordingly. Especially in combination with mobile devices such mechanisms are of great value and claim to increase usability tremendously. In this paper, we present a layered architectural framework for context-aware systems. Based on our suggested framework for analysis, we introduce various existing context-aware systems focusing on context-aware middleware and frameworks, which ease the development of context-aware applications. We discuss various approaches and analyze important aspects in context-aware computing on the basis of the presented systems.", "title": "" }, { "docid": "7af90ba25ca20b64e02e2674d408d395", "text": "We present a search engine for mathematical formulae. The MathWebSearch system harvests the web for content representations (currently MathML and OpenMath) of formulae and indexes them with substitution tree indexing, a technique originally developed for accessing intermediate results in automated theorem provers. For querying, we present a generic language extension approach that allows constructing queries by minimally annotating existing representations. First experiments show that this architecture results in a scalable application.", "title": "" }, { "docid": "044de981e34f0180accfb799063a7ec1", "text": "This paper proposes a novel hybrid full-bridge three-level LLC resonant converter. It integrates the advantages of the hybrid full-bridge three-level converter and the LLC resonant converter. It can operate not only under three-level mode but also under two-level mode, so it is very suitable for wide input voltage range application, such as fuel cell power system. The input current ripple and output filter can also be reduced. Three-level leg switches just sustain only half of the input voltage. ZCS is achieved for the rectifier diodes, and the voltage stress across the rectifier diodes can be minimized to the output voltage. The main switches can realize ZVS from zero to full load. 
A 200-400 V input, 360 V/4 A output prototype converter is built in our lab to verify the operation principle of the proposed converter", "title": "" }, { "docid": "848d80584211c6a4f69ad20ffea21ecf", "text": "This study develops a MEMS-based low-cost sensing platform for sensing gas flow rate and flow direction comprising four silicon nitride cantilever beams arranged in a cross-form configuration, a circular hot-wire flow meter suspended on a silicon nitride membrane, and an integrated resistive temperature detector (RTD). In the proposed device, the flow rate is inversely derived from the change in the resistance signal of the flow meter when exposed to the sensed air stream. To compensate for the effects of the ambient temperature on the accuracy of the flow rate measurements, the output signal from the flow meter is compensated using the resistance signal generated by the RTD. As air travels over the surface of the cross-form cantilever structure, the upstream cantilevers are deflected in the downward direction, while the downstream cantilevers are deflected in the upward direction. The deflection of the cantilever beams causes a corresponding change in the resistive signals of the piezoresistors patterned on their upper surfaces. The amount by which each beam deflects depends on both the flow rate and the orientation of the beam relative to the direction of the gas flow. Thus, following an appropriate compensation by the temperature-corrected flow rate, the gas flow direction can be determined through a suitable manipulation of the output signals of the four piezoresistors. The experimental results have confirmed that the resulting variation in the output signals of the integrated sensors can be used to determine not only the ambient temperature and the velocity of the air flow, but also its direction relative to the sensor with an accuracy of ± 7.5° error.", "title": "" }, { "docid": "a6ea435c346d2d3051d1fc31db59ca35", "text": "As news reading on social media becomes more and more popular, fake news becomes a major issue concerning the public and government. The fake news can take advantage of multimedia content to mislead readers and get dissemination, which can cause negative effects or even manipulate the public events. One of the unique challenges for fake news detection on social media is how to identify fake news on newly emerged events. Unfortunately, most of the existing approaches can hardly handle this challenge, since they tend to learn event-specific features that can not be transferred to unseen events. In order to address this issue, we propose an end-to-end framework named Event Adversarial Neural Network (EANN), which can derive event-invariant features and thus benefit the detection of fake news on newly arrived events. It consists of three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The multi-modal feature extractor is responsible for extracting the textual and visual features from posts. It cooperates with the fake news detector to learn the discriminable representation for the detection of fake news. The role of event discriminator is to remove the event-specific features and keep shared features among events. Extensive experiments are conducted on multimedia datasets collected from Weibo and Twitter. 
The experimental results show our proposed EANN model can outperform the state-of-the-art methods, and learn transferable feature representations.", "title": "" }, { "docid": "45db5ab7ceac7156da713d19d5576598", "text": "The development of user interface systems has languished with the stability of desktop computing. Future systems, however, that are off-the-desktop, nomadic or physical in nature will involve new devices and new software systems for creating interactive applications. Simple usability testing is not adequate for evaluating complex systems. The problems with evaluating systems work are explored and a set of criteria for evaluating new UI systems work is presented.", "title": "" }, { "docid": "7b79b0643dfb779bb0d3a8eb852bdd9e", "text": "Traditionally, the document summarisation task has been tackled either as a natural language processing problem, with an instantiated meaning template being rendered into coherent prose, or as a passage extraction problem, where certain fragments (typically sentences) of the source document are deemed to be highly representative of its content, and thus delivered as meaningful “approximations” of it. Balancing the conflicting requirements of depth and accuracy of a summary, on the one hand, and document and domain independence, on the other, has proven a very hard problem. This paper describes a novel approach to content characterisation of text documents. It is domainand genre-independent, by virtue of not requiring an in-depth analysis of the full meaning. At the same time, it remains closer to the core meaning by choosing a different granularity of its representations (phrasal expressions rather than sentences or paragraphs), by exploiting a notion of discourse contiguity and coherence for the purposes of uniform coverage and context maintenance, and by utilising a strong linguistic notion of salience, as a more appropriate and representative measure of a document’s “aboutness”.", "title": "" }, { "docid": "a214ed60c288762210189f14a8cf8256", "text": "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.", "title": "" }, { "docid": "fe77670a01f93c3192c6760e46bbab46", "text": "Group recommendation has attracted significant research efforts for its importance in benefiting a group of users. 
This paper investigates the Group Recommendation problem from a novel aspect, which tries to maximize the satisfaction of each group member while minimizing the unfairness between them. In this work, we present several semantics of the individual utility and propose two concepts of social welfare and fairness for modeling the overall utilities and the balance between group members. We formulate the problem as a multiple objective optimization problem and show that it is NP-Hard in different semantics. Given the multiple-objective nature of fairness-aware group recommendation problem, we provide an optimization framework for fairness-aware group recommendation from the perspective of Pareto Efficiency. We conduct extensive experiments on real-world datasets and evaluate our algorithm in terms of standard accuracy metrics. The results indicate that our algorithm achieves superior performances and considering fairness in group recommendation can enhance the recommendation accuracy.", "title": "" }, { "docid": "7eff2743d36414e3f008be72598bfd8e", "text": "BACKGROUND\nPsychiatry has been consistently shown to be a profession characterised by 'high-burnout'; however, no nationwide surveys on this topic have been conducted in Japan.\n\n\nAIMS\nThe objective of this study was to estimate the prevalence of burnout and to ascertain the relationship between work environment satisfaction, work-life balance satisfaction and burnout among psychiatrists working in medical schools in Japan.\n\n\nMETHOD\nWe mailed anonymous questionnaires to all 80 psychiatry departments in medical schools throughout Japan. Work-life satisfaction, work-environment satisfaction and social support assessments, as well as the Maslach Burnout Inventory (MBI), were used.\n\n\nRESULTS\nSixty psychiatric departments (75.0%) responded, and 704 psychiatrists provided answers to the assessments and MBI. Half of the respondents (n = 311, 46.0%) experienced difficulty with their work-life balance. Based on the responses to the MBI, 21.0% of the respondents had a high level of emotional exhaustion, 12.0% had a high level of depersonalisation, and 72.0% had a low level of personal accomplishment. Receiving little support, experiencing difficulty with work-life balance, and having less work-environment satisfaction were significantly associated with higher emotional exhaustion. A higher number of nights worked per month was significantly associated with higher depersonalisation.\n\n\nCONCLUSIONS\nA low level of personal accomplishment was quite prevalent among Japanese psychiatrists compared with the results of previous studies. Poor work-life balance was related to burnout, and social support was noted to mitigate the impact of burnout.", "title": "" }, { "docid": "f6e60500973da7e3078b9b975b5aa774", "text": "UNLABELLED\nIt has been argued that internal hemipelvectomy without reconstruction of the pelvic ring leads to poor ambulation and inferior patient acceptance. To determine the accuracy of this contention, we posed the following questions: First, how effectively does a typical patient ambulate following this procedure? Second, what is the typical functional capacity of a patient following internal hemipelvectomy? In the spring of 2006, we obtained video documentation of eight patients who had undergone resection arthroplasty of the hemipelvis seen in our clinic during routine clinical followup. 
The minimum followup in 2006 was 1.1 years (mean, 8.2 years; range, 1.1-22.7 years); at the time of last followup in 2008 the minimum followup was 2.9 years (mean, 9.8 years; range, 2.9-24.5 years). At last followup seven of the eight patients were without pain, and were able to walk without supports. The remaining patient used narcotic medication and a cane or crutch only occasionally. The mean MSTS score at the time of most recent followup was 73.3% of normal (range 53.3-80.0%; mean raw score was 22.0; range 16-24). All eight patients ultimately returned to gainful employment. These observations demonstrate independent painless ambulation and acceptable function is possible following resection arthroplasty of the hemipelvis.\n\n\nLEVEL OF EVIDENCE\nLevel IV, case series. See Guidelines for Authors for a complete description of levels of evidence.", "title": "" }, { "docid": "b1823c456360037d824614a6cf4eceeb", "text": "This paper provides an overview of the Industrial Internet with the emphasis on the architecture, enabling technologies, applications, and existing challenges. The Industrial Internet is enabled by recent rising sensing, communication, cloud computing, and big data analytic technologies, and has been receiving much attention in the industrial section due to its potential for smarter and more efficient industrial productions. With the merge of intelligent devices, intelligent systems, and intelligent decisioning with the latest information technologies, the Industrial Internet will enhance the productivity, reduce cost and wastes through the entire industrial economy. This paper starts by investigating the brief history of the Industrial Internet. We then present the 5C architecture that is widely adopted to characterize the Industrial Internet systems. Then, we investigate the enabling technologies of each layer that cover from industrial networking, industrial intelligent sensing, cloud computing, big data, smart control, and security management. This provides the foundations for those who are interested in understanding the essence and key enablers of the Industrial Internet. Moreover, we discuss the application domains that are gradually transformed by the Industrial Internet technologies, including energy, health care, manufacturing, public section, and transportation. Finally, we present the current technological challenges in developing Industrial Internet systems to illustrate open research questions that need to be addressed to fully realize the potential of future Industrial Internet systems.", "title": "" }, { "docid": "872f224c2dbf06a335eee267bac4ec79", "text": "Shallow supervised 1-hidden layer neural networks have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but lack their representational power. Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks. Contrary to previous approaches using shallow networks, we focus on problems where deep learning is reported as critical for success. We thus study CNNs on two large-scale image recognition tasks: ImageNet and CIFAR-10. Using a simple set of ideas for architecture and training we find that solving sequential 1-hidden-layer auxiliary problems leads to a CNN that exceeds AlexNet performance on ImageNet. 
Extending our training methodology to construct individual layers by solving 2-and-3-hidden layer auxiliary problems, we obtain an 11-layer network that exceeds VGG-11 on ImageNet obtaining 89.8% top-5 single crop.To our knowledge, this is the first competitive alternative to end-to-end training of CNNs that can scale to ImageNet. We conduct a wide range of experiments to study the properties this induces on the intermediate layers.", "title": "" }, { "docid": "7a202dfa59cb8c50a6999fe8a50895a9", "text": "The process for transferring knowledge of multiple reinforcement learning policies into a single multi-task policy via distillation technique is known as policy distillation. When policy distillation is under a deep reinforcement learning setting, due to the giant parameter size and the huge state space for each task domain, it requires extensive computational efforts to train the multi-task policy network. In this paper, we propose a new policy distillation architecture for deep reinforcement learning, where we assume that each task uses its taskspecific high-level convolutional features as the inputs to the multi-task policy network. Furthermore, we propose a new sampling framework termed hierarchical prioritized experience replay to selectively choose experiences from the replay memories of each task domain to perform learning on the network. With the above two attempts, we aim to accelerate the learning of the multi-task policy network while guaranteeing a good performance. We use Atari 2600 games as testing environment to demonstrate the efficiency and effectiveness of our proposed solution for policy distillation.", "title": "" }, { "docid": "de668bf99b307f96580e294f7e58afcf", "text": "Sliding window is one direct way to extend a successful recognition system to handle the more challenging detection problem. While action recognition decides only whether or not an action is present in a pre-segmented video sequence, action detection identifies the time interval where the action occurred in an unsegmented video stream. Sliding window approaches for action detection can however be slow as they maximize a classifier score over all possible sub-intervals. Even though new schemes utilize dynamic programming to speed up the search for the optimal sub-interval, they require offline processing on the whole video sequence. In this paper, we propose a novel approach for online action detection based on 3D skeleton sequences extracted from depth data. It identifies the sub-interval with the maximum classifier score in linear time. Furthermore, it is invariant to temporal scale variations and is suitable for real-time applications with low latency.", "title": "" }, { "docid": "255f4d19d89e9ff7acb6cca900fe9ed6", "text": "Objectives: Burnout syndrome (B.S.) affects millions of workers around the world, having a significant impact on their quality of life and the services they provide. It’s a psycho-social phenomenon, which can be handled through emotional management and psychological help. Emotional Intelligence (E.I) is very important to emotional management. This paper aims to investigate the relationship between Burnout syndrome and Emotional Intelligence in health professionals occupied in the sector of rehabilitation. 
Methods: The data were collected from a sample of 148 healthcare professionals, workers in the field of rehabilitation, who completed the Maslach Burnout Inventory, the Trait Emotional Intelligence Questionnaire-Short Form and a questionnaire collecting demographic data as well as personal and professional information. Simple linear regression and multiple regression analyses were conducted to analyze the data. Results: The results indicated that there is a positive relationship between Emotional Intelligence and Burnout syndrome as Emotional Intelligence acts protectively against Burnout syndrome and even reduces it. In particular, it was found that the higher the Emotional Intelligence, the lower the Burnout syndrome. Also, among all factors of Emotional Intelligence, “Emotionality” seems to influence Burnout syndrome the most, as the higher the rate of Emotionality, the lower the rate of Burnout. At the same time, evidence was found on the variability of Burnout syndrome through various models of explanation and correlation between Burnout syndrome and Emotional Intelligence, and also between Burnout syndrome and Emotional Intelligence factors. Conclusion: Employers could focus on building emotional relationships with their employees, especially in the health care field. Furthermore, they could also promote some experimental seminars, sponsored by public or private institutions, in order to enhance Emotional Intelligence and to improve the workers’ quality of life and the quality of services they provide.", "title": "" } ]
scidocsrr
72421606941910464582042677d9730c
Role of Dopamine Receptors in ADHD: A Systematic Meta-analysis
[ { "docid": "39a25e2a4b3e4d56345d0e268d4a1cb1", "text": "OBJECTIVE\nAttention deficit hyperactivity disorder is a heterogeneous disorder of unknown etiology. Little is known about the comorbidity of this disorder with disorders other than conduct. Therefore, the authors made a systematic search of the psychiatric and psychological literature for empirical studies dealing with the comorbidity of attention deficit hyperactivity disorder with other disorders.\n\n\nDATA COLLECTION\nThe search terms included hyperactivity, hyperkinesis, attention deficit disorder, and attention deficit hyperactivity disorder, cross-referenced with antisocial disorder (aggression, conduct disorder, antisocial disorder), depression (depression, mania, depressive disorder, bipolar), anxiety (anxiety disorder, anxiety), learning problems (learning, learning disability, academic achievement), substance abuse (alcoholism, drug abuse), mental retardation, and Tourette's disorder.\n\n\nFINDINGS\nThe literature supports considerable comorbidity of attention deficit hyperactivity disorder with conduct disorder, oppositional defiant disorder, mood disorders, anxiety disorders, learning disabilities, and other disorders, such as mental retardation, Tourette's syndrome, and borderline personality disorder.\n\n\nCONCLUSIONS\nSubgroups of children with attention deficit hyperactivity disorder might be delineated on the basis of the disorder's comorbidity with other disorders. These subgroups may have differing risk factors, clinical courses, and pharmacological responses. Thus, their proper identification may lead to refinements in preventive and treatment strategies. Investigation of these issues should help to clarify the etiology, course, and outcome of attention deficit hyperactivity disorder.", "title": "" } ]
[ { "docid": "b59a2c49364f3e95a2c030d800d5f9ce", "text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.", "title": "" }, { "docid": "7dc0be689a4c58f4bc6ee0624605df81", "text": "Oil spills represent a major threat to ocean ecosystems and their health. Illicit pollution requires continuous monitoring and satellite remote sensing technology represents an attractive option for operational oil spill detection. Previous studies have shown that active microwave satellite sensors, particularly Synthetic Aperture Radar (SAR) can be effectively used for the detection and classification of oil spills. Oil spills appear as dark spots in SAR images. However, similar dark spots may arise from a range of unrelated meteorological and oceanographic phenomena, resulting in misidentification. A major focus of research in this area is the development of algorithms to distinguish oil spills from `look-alikes'. This paper describes the development of a new approach to SAR oil spill detection employing two different Artificial Neural Networks (ANN), used in sequence. The first ANN segments a SAR image to identify pixels belonging to candidate oil spill features. A set of statistical feature parameters are then extracted and used to drive a second ANN which classifies objects into oil spills or look-alikes. The proposed algorithm was trained using 97 ERS-2 SAR and ENVSAT ASAR images of individual verified oil spills or/and look-alikes. The algorithm was validated using a large dataset comprising full-swath images and correctly identified 91.6% of reported oil spills and 98.3% of look-alike phenomena. The segmentation stage of the new technique outperformed the established edge detection and adaptive thresholding approaches. An analysis of feature descriptors highlighted the importance of image gradient information in the classification stage.", "title": "" }, { "docid": "0ae8f9626a6621949c9d6c5fa7c2a098", "text": "In this paper, numerical observability analysis is restudied. Algorithms to determine observable islands and to decide a minimal set of pseudo-measurements to make the unobservable system observable are presented. The algorithms make direct use of the measurement Jacobian matrix. Gaussian elimination, which makes the whole process of observability analysis simple and effective, is the only computation required by the algorithms. Numerical examples are used to illustrate the proposed algorithms. Comparison of computation expense on the Texas system among the proposed algorithm and the existing algorithms is performed.", "title": "" }, { "docid": "b1a440cb894c1a76373bdbf7ff84318d", "text": "We present a language-theoretic approach to symbolic model checking of PCTL over discrete-time Markov chains. The probability with which a path formula is satisfied is represented by a regular expression. 
A recursive evaluation of the regular expression yields an exact rational value when transition probabilities are rational, and rational functions when some probabilities are left unspecified as parameters of the system. This allows for parametric model checking by evaluating the regular expression for different parameter values, for instance, to study the influence of a lossy channel in the overall reliability of a randomized protocol.", "title": "" }, { "docid": "fd0defe3aaabd2e27c7f9d3af47dd635", "text": "A fast test for triangle-triangle intersection by computing signed vertex-plane distances (sufficient if one triangle is wholly to one side of the other) and signed line-line distances of selected edges (otherwise) is presented. This algorithm is faster than previously published algorithms and the code is available online.", "title": "" }, { "docid": "c21e39d4cf8d3346671ae518357c8edb", "text": "The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is promising approach to constructing deep learning applications in the future.", "title": "" }, { "docid": "8e4eb520c80dfa8d39c69b1273ea89c8", "text": "This paper examines the potential impact of automatic meter reading (AMR) on short-term load forecasting for a residential customer. Real-time measurement data from customers' smart meters provided by a utility company is modeled as the sum of a deterministic component and a Gaussian noise signal. The shaping filter for the Gaussian noise is calculated using spectral analysis. Kalman filtering is then used for load prediction. The accuracy of the proposed method is evaluated for different sampling periods and planning horizons. The results show that the availability of more real-time measurement data improves the accuracy of the load forecast significantly. However, the improved prediction accuracy can come at a high computational cost. Our results qualitatively demonstrate that achieving the desired prediction accuracy while avoiding a high computational load requires limiting the volume of data used for prediction. Consequently, the measurement sampling rate must be carefully selected as a compromise between these two conflicting requirements.", "title": "" }, { "docid": "a55eed627afaf39ee308cc9e0e10a698", "text": "Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first person) perspective or the (third person) perspective of their mothers in response to situations involving social emotions or to neutral situations. 
The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar, the somatosensory cortex, and the right inferior parietal lobe are crucial in the process of self/ other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy.", "title": "" }, { "docid": "afde8e4d9ed4b2a95d780522e7905047", "text": "Compared to offline shopping, the online shopping experience may be viewed as lacking human warmth and sociability as it is more impersonal, anonymous, automated and generally devoid of face-to-face interactions. Thus, understanding how to create customer loyalty in online environments (e-Loyalty) is a complex process. In this paper a model for e-Loyalty is proposed and used to examine how varied conditions of social presence in a B2C e-Services context influence e-Loyalty and its antecedents of perceived usefulness, trust and enjoyment. This model is examined through an empirical study involving 185 subjects using structural equation modeling techniques. Further analysis is conducted to reveal gender differences concerning hedonic elements in the model on e-Loyalty. 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "cfc4dc24378c5b7b83586db56fad2cac", "text": "This study investigated the effects of proximal and distal constructs on adolescent's academic achievement through self-efficacy. Participants included 482 ninth- and tenth- grade Norwegian students who completed a questionnaire designed to assess school-goal orientations, organizational citizenship behavior, academic self-efficacy, and academic achievement. The results of a bootstrapping technique used to analyze relationships between the constructs indicated that school-goal orientations and organizational citizenship predicted academic self-efficacy. Furthermore, school-goal orientation, organizational citizenship, and academic self-efficacy explained 46% of the variance in academic achievement. Mediation analyses revealed that academic self-efficacy mediated the effects of perceived task goal structure, perceived ability structure, civic virtue, and sportsmanship on adolescents' academic achievements. The results are discussed in reference to current scholarship, including theories underlying our hypothesis. Practical implications and directions for future research are suggested.", "title": "" }, { "docid": "33d98005d696cc5cee6a23f5c1e7c538", "text": "Design activity has recently attempted to embrace designing the user experience. Designers need to demystify how we design for user experience and how the products we design achieve specific user experience goals. This paper proposes an initial framework for understanding experience as it relates to user-product interactions. We propose a system for talking about experience, and look at what influences experience and qualities of experience. 
The framework is presented as a tool to understand what kinds of experiences products evoke.", "title": "" }, { "docid": "e98e902e22d9b8acb6e9e9dcd241471c", "text": "We introduce a novel iterative approach for event coreference resolution that gradually builds event clusters by exploiting inter-dependencies among event mentions within the same chain as well as across event chains. Among event mentions in the same chain, we distinguish withinand cross-document event coreference links by using two distinct pairwise classifiers, trained separately to capture differences in feature distributions of withinand crossdocument event clusters. Our event coreference approach alternates between WD and CD clustering and combines arguments from both event clusters after every merge, continuing till no more merge can be made. And then it performs further merging between event chains that are both closely related to a set of other chains of events. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods in joint task of WD and CD event coreference resolution.", "title": "" }, { "docid": "fbcdb57ae0d42e9665bc95dbbca0d57b", "text": "Data classification and tag recommendation are both important and challenging tasks in social media. These two tasks are often considered independently and most efforts have been made to tackle them separately. However, labels in data classification and tags in tag recommendation are inherently related. For example, a Youtube video annotated with NCAA, stadium, pac12 is likely to be labeled as football, while a video/image with the class label of coast is likely to be tagged with beach, sea, water and sand. The existence of relations between labels and tags motivates us to jointly perform classification and tag recommendation for social media data in this paper. In particular, we provide a principled way to capture the relations between labels and tags, and propose a novel framework CLARE, which fuses data CLAssification and tag REcommendation into a coherent model. With experiments on three social media datasets, we demonstrate that the proposed framework CLARE achieves superior performance on both tasks compared to the state-of-the-art methods.", "title": "" }, { "docid": "1de10e40580ba019045baaa485f8e729", "text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. 
The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.", "title": "" }, { "docid": "0d7c29b40f92b5997791f1bbe192269c", "text": "We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics – natural language captions or other labels – depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC’16 benchmark, video summarization on the SumMe and TV-Sum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art, in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks.", "title": "" }, { "docid": "28bb93f193b62cc1829cf082ffcea7f9", "text": "Analyzing a user's first impression of a Web site is essential for interface designers, as it is tightly related to their overall opinion of a site. In fact, this early evaluation affects user navigation behavior. Perceived usability and user interest (e.g., revisiting and recommending the site) are parameters influenced by first opinions. Thus, predicting the latter when creating a Web site is vital to ensure users’ acceptance. In this regard, Web aesthetics is one of the most influential factors in this early perception. We propose the use of low-level image parameters for modeling Web aesthetics in an objective manner, which is an innovative research field. Our model, obtained by applying a stepwise multiple regression algorithm, infers a user's first impression by analyzing three different visual characteristics of Web site screenshots—texture, luminance, and color—which are directly derived from MPEG-7 descriptors. The results obtained over three wide Web site datasets (composed by 415, 42, and 6 Web sites, respectively) reveal a high correlation between low-level parameters and the users’ evaluation, thus allowing a more precise and objective prediction of users’ opinion than previous models that are based on other image characteristics with fewer predictors. 
Therefore, our model is meant to support a rapid assessment of Web sites in early stages of the design process to maximize the likelihood of the users’ final approval.", "title": "" }, { "docid": "25c41bdba8c710b663cb9ad634b7ae5d", "text": "Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases, and instead must be processed “on the fly” as they are produced. Similarly, sensor networks produce multiple data streams of observations from their sensors. There is growing focus on manipulating data streams, and hence, there is a need to identify basic operations of interest in managing data streams, and to support them efficiently. We propose computation of the Hamming norm as a basic operation of interest. The Hamming norm formalises ideas that are used throughout data processing. When applied to a single stream, the Hamming norm gives the number of distinct items that are present in that data stream, which is a statistic of great interest in databases. When applied to a pair of streams, the Hamming norm gives an important measure of (dis)similarity: the number of unequal item counts in the two streams. Hamming norms have many uses in comparing data streams. We present a novel approximation technique for estimating the Hamming norm for massive data streams; this relies on what we call the “ l0 sketch” and we prove its accuracy. We test our approximation method on a large quantity of synthetic and real stream data, and show that the estimation is accurate to within a few percentage points. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 28th VLDB Conference, Hong Kong, China, 2002", "title": "" }, { "docid": "288dc197e9be9b5289615b10eddbb987", "text": "As biometric applications are fielded to serve large population groups, issues of performance differences between individual sub-groups are becoming increasingly important. In this paper we examine cases where we believe race is one such factor. We look in particular at two forms of problem; facial classification and image synthesis. We take the novel approach of considering race as a boundary for transfer learning in both the task (facial classification) and the domain (synthesis over distinct datasets). We demonstrate a series of techniques to improve transfer learning of facial classification; outperforming similar models trained in the target's own domain. We conduct a study to evaluate the performance drop of Generative Adversarial Networks trained to conduct image synthesis, in this process, we produce a new annotation for the Celeb-A dataset by race. These networks are trained solely on one race and tested on another - demonstrating the subsets of the CelebA to be distinct domains for this task.", "title": "" }, { "docid": "8e74a27a3edea7cf0e88317851bc15eb", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "0daa16a3f40612946187d6c66ccd96f4", "text": "A 60 GHz frequency band planar diplexer based on Substrate Integrated Waveguide (SIW) technology is presented in this research. The 5th order millimeter wave SIW filter is investigated first, and then the 60 GHz SIW diplexer is designed and been simulated. SIW-microstrip transitions are also included in the final design. The relative bandwidths of up and down channels are 1.67% and 1.6% at 59.8 GHz and 62.2 GHz respectively. Simulation shows good channel isolation, small return losses and moderate insertion losses in pass bands. The diplexer can be easily integrated in millimeter wave integrated circuits.", "title": "" } ]
scidocsrr
8bcf0a9eed2179d9bb6d3fa3a3e7f29e
Linear classifier design under heteroscedasticity in Linear Discriminant Analysis
[ { "docid": "5d0a77058d6b184cb3c77c05363c02e0", "text": "For two-class discrimination, Ref. [1] claimed that, when covariance matrices of the two classes were unequal, a (class) unbalanced dataset had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 realworld datasets, Ref. [1] provided empirical evidence to support the claim using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, there is no solid theoretical analysis presented in [1], and AUC can lead to a quite different conclusion from that led to by misclassification error rate (ER) on the discrimination performance of LDA for unbalanced datasets. Our empirical and simulation studies suggest that, for LDA, the increase of the median of AUC (and thus the improvement of performance of LDA) from re-balancing is relatively small, while, in contrast, the increase of the median of ER (and thus the decline in performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. In addition, re-balancing affects the performance of LDA for datasets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between original and re-balanced data.", "title": "" } ]
[ { "docid": "3c22c94c9ab99727840c2ca00c66c0f3", "text": "The impact of numerous distributed generators (DGs) coupled with the implementation of virtual inertia on the transient stability of power systems has been studied extensively. Time-domain simulation is the most accurate and reliable approach to evaluate the dynamic behavior of power systems. However, the computational efficiency is restricted by their multi-time-scale property due to the combination of various DGs and synchronous generators. This paper presents a novel projective integration method (PIM) for the efficient transient stability simulation of power systems with high DG penetration. One procedure of the proposed PIM is decomposed into two stages, which adopt mixed explicit-implicit integration methods to achieve both efficiency and numerical stability. Moreover, the stability of the PIM is not affected by its parameter, which is related to the step size. Based on this property, an adaptive parameter scheme is developed based on error estimation to fit the time constants of the system dynamics and further increase the simulation speed. The presented approach is several times faster than the conventional integration methods with a similar level of accuracy. The proposed method is demonstrated using test systems with DGs and virtual synchronous generators, and the performance is verified against MATLAB/Simulink and DIgSILENT PowerFactory.", "title": "" }, { "docid": "89297a4aef0d3251e8d947ccc2acacc7", "text": "We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.", "title": "" }, { "docid": "e2b74db574db8001dace37cbecb8c4eb", "text": "Distributed key-value stores are now a standard component of high-performance web services and cloud computing applications. While key-value stores offer significant performance and scalability advantages compared to traditional databases, they achieve these properties through a restricted API that limits object retrieval---an object can only be retrieved by the (primary and only) key under which it was inserted. This paper presents HyperDex, a novel distributed key-value store that provides a unique search primitive that enables queries on secondary attributes. The key insight behind HyperDex is the concept of hyperspace hashing in which objects with multiple attributes are mapped into a multidimensional hyperspace. 
This mapping leads to efficient implementations not only for retrieval by primary key, but also for partially-specified secondary attribute searches and range queries. A novel chaining protocol enables the system to achieve strong consistency, maintain availability and guarantee fault tolerance. An evaluation of the full system shows that HyperDex is 12-13x faster than Cassandra and MongoDB for finding partially specified objects. Additionally, HyperDex achieves 2-4x higher throughput for get/put operations.", "title": "" }, { "docid": "e599659ec215993598b98d26384ce6ac", "text": "The aim of the computer aided modeling and optimization analysis of the crankshaft was to evaluate and compare the fatigue performance of two competing manufacturing technologies for automotive crankshafts, namely forged steel and ductile cast iron. In this study a dynamic simulation was conducted on two crankshafts, cast iron and forged steel, from similar single cylinder four stroke engines. Finite element analysis was performed to obtain the variation of stress magnitude at critical locations. The dynamic analysis was done analytically and was verified by simulations in ANSYS. Results achieved from the aforementioned analysis were used in optimization of the forged steel crankshaft. Geometry, material and manufacturing processes were optimized considering different constraints, manufacturing feasibility and cost. The optimization process included geometry changes compatible with the current engine and fillet rolling, and resulted in increased fatigue strength and reduced cost of the crankshaft, without changing the connecting rod and engine block.", "title": "" }, { "docid": "d2c202e120fecf444e77b08bd929e296", "text": "Deep Learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis using multiple speakers. Some hidden layers are shared by all the speakers, while there is a specific output layer for each speaker. Objective and perceptual experiments prove that this scheme produces much better results in comparison with a single speaker model. Moreover, we also tackle the problem of speaker interpolation by adding a new output layer (α-layer) on top of the multi-output branches. An identifying code is injected into the layer together with acoustic features of many speakers. Experiments show that the α-layer can effectively learn to interpolate the acoustic features between speakers.", "title": "" }, { "docid": "49680e94843e070a5ed0179798f66f33", "text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically, neighboring nodes with the strongest connectivity are more likely to be selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key feature of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. 
The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.", "title": "" }, { "docid": "f055f5f02b264b47c6218ea6683bcc7b", "text": "Prepositions are very common and very ambiguous, and understanding their sense is critical for understanding the meaning of the sentence. Supervised corpora for the preposition-sense disambiguation task are small, suggesting a semi-supervised approach to the task. We show that signals from unannotated multilingual data can be used to improve supervised prepositionsense disambiguation. Our approach pre-trains an LSTM encoder for predicting the translation of a preposition, and then incorporates the pre-trained encoder as a component in a supervised classification system, and fine-tunes it for the task. The multilingual signals consistently improve results on two preposition-sense datasets.", "title": "" }, { "docid": "41a74c1664143f602bdde3be9e26312f", "text": "This paper presents a new class of gradient methods for distributed machine learning that adaptively skip the gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly-varying gradients and, therefore, trigger the reuse of outdated gradients. The resultant gradient-based algorithms are termed Lazily Aggregated Gradient — justifying our acronym LAG used henceforth. Theoretically, the merits of this contribution are: i) the convergence rate is the same as batch gradient descent in strongly-convex, convex, and nonconvex smooth cases; and, ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the communication rounds needed to achieve a targeted accuracy are reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared to alternatives.", "title": "" }, { "docid": "f499ea5160d1e787a51b456ee01c3814", "text": "In this paper, a tri band compact octagonal fractal monopole MIMO antenna is presented. The proposed antenna is microstrip line fed and its structure is based on fractal geometry where the resonance frequency of antenna is lowered by applying iteration techniques. The simulated bandwidth of the antenna are 2.3706GHz to 2.45GHz, 3.398GHz to 3.677GHz and 4.9352GHz to 5.8988GHz (S11 <; -10 dB), covering the bands of WLAN and WiMAX. The characteristics of small size, nearly omnidirectional radiation pattern and moderate gain make the proposed MIMO antenna entirely applicable to WLAN and WiMAX applications. The proposed antenna has compact size of 50 mm × 50 mm. Details of the proposed antenna design and performance are presented and discussed.", "title": "" }, { "docid": "3ddf6fab70092eade9845b04dd8344a0", "text": "Fractional Fourier transform (FRFT) is a generalization of the Fourier transform, rediscovered many times over the past 100 years. In this paper, we provide an overview of recent contributions pertaining to the FRFT. Specifically, the paper is geared toward signal processing practitioners by emphasizing the practical digital realizations and applications of the FRFT. It discusses three major topics. 
First, the manuscript relates the FRFT to other mathematical transforms. Second, it discusses various approaches for practical realizations of the FRFT. Third, we overview the practical applications of the FRFT. From these discussions, we can clearly state that the FRFT is closely related to other mathematical transforms, such as time–frequency and linear canonical transforms. Nevertheless, we still feel that major contributions are expected in the field of the digital realizations and its applications.", "title": "" }, { "docid": "843a56ac5a061e8131bd4ce3ff7238a5", "text": "OBJECTIVE\nTo compare the efficacy of 2 atypical anti-psychotic drugs, olanzapine and risperidone, in the treatment of paradoxical insomnia.\n\n\nMETHODS\nIn this cross-sectional study over a 2-year period (September 2008 to September 2010), 29 patients with paradoxical insomnia, diagnosed in Kermanshah, Iran by both psychiatric interview and actigraphy, were randomly assigned to 2 groups. For 8 weeks, the first group (n=14) was treated with 10 mg olanzapine daily, and the second group (n=15) was treated with 4 mg risperidone daily. All participants completed the Pittsburgh Sleep Quality Inventory (PSQI) at baseline and at the end of the study.\n\n\nRESULTS\nAs expected, a baseline actigraphy analysis showed that total sleep time was not significantly different between the 2 treatment groups (p<0.3). In both groups, sleep quality was improved (p<0.001) with treatment. When comparing the 2 treatments directions, a significant difference emerged (9.21+/-2.35, 6.07+/-4.46) among the 2 treatment groups based on data from the PSQI. Patients who were treated with olanzapine showed greater improvement than patients who were treated by risperidone (p<0.04).\n\n\nCONCLUSION\nAtypical anti-psychotic drugs such as olanzapine and risperidone may be beneficial options for treatment of paradoxical insomnia. Larger clinical trials with longer periods of follow-up are needed for further investigation.", "title": "" }, { "docid": "f78b6308d5fc78ec6440433af45925bb", "text": "Recognizing the potentially ruinous effect of negative reviews on the reputation of the hosts as well as a subjective nature of the travel experience judgements, peer-to-peer accommodation sharing platforms, like Airbnb, have readily embraced the “response” option, empowering hosts with the voice to challenge, deny or at least apologize for the subject of critique. However, the effects of different response strategies on trusting beliefs towards the host remain unclear. To fill this gap, this study focuses on understanding the impact of different response strategies and review negativity on trusting beliefs towards the host in peer-to-peer accommodation sharing setting utilizing experimental methods. Examination of two different contexts, varying in the controllability of the subject of complaint, reveals that when the subject of complaint is controllable by a host, such strategies as confession / apology and denial can improve trusting beliefs towards the host. 
However, when the subject of criticism is beyond the control of the host, denial of the issue does not yield guest’s confidence in the host, whereas confession and excuse have positive influence on trusting beliefs.", "title": "" }, { "docid": "b722f2fbdf20448e3a7c28fc6cab026f", "text": "Alternative Mechanisms Rationale/Arguments/ Assumptions Connected Literature/Theory Resulting (Possible) Effect Support for/Against A1. Based on WTP and Exposure Theory A1a Light user segments (who are likely to have low WTP) are more likely to reduce (or even discontinue in extreme cases) their consumption of NYT content after the paywall implementation. Utility theory — WTP (Danaher 2002) Juxtaposing A1a and A1b leads to long tail effect due to the disproportionate reduction of popular content consumption (as a results of reduction of content consumption by light users). A1a. Supported (see the descriptive statistics in Table 11). A1b. Supported (see results from the postestimation of finite mixture model in Table 9) Since the resulting effects as well as both the assumptions (A1a and A1b) are supported, we suggest that there is support for this mechanism. A1b Light user segments are more likely to consume popular articles whereas the heavy user segment is more likely to consume a mix of niche articles and popular content. Exposure theory (McPhee 1963)", "title": "" }, { "docid": "c5cc7fc9651ff11d27e08e1910a3bd20", "text": "An omnidirectional circularly polarized (OCP) antenna operating at 28 GHz is reported and has been found to be a promising candidate for device-to-device (D2D) communications in the next generation (5G) wireless systems. The OCP radiation is realized by systematically integrating electric and magnetic dipole elements into a compact disc-shaped configuration (9.23 mm $^{3} =0.008~\\lambda _{0}^{3}$ at 28 GHz) in such a manner that they are oriented in parallel and radiate with the proper phase difference. The entire antenna structure was printed on a single piece of dielectric substrate using standard PCB manufacturing technologies and, hence, is amenable to mass production. A prototype OCP antenna was fabricated on Rogers 5880 substrate and was tested. The measured results are in good agreement with their simulated values and confirm the reported design concepts. Good OCP radiation patterns were produced with a measured peak realized RHCP gain of 2.2 dBic. The measured OCP overlapped impedance and axial ratio bandwidth was 2.2 GHz, from 26.5 to 28.7 GHz, an 8 % fractional bandwidth, which completely covers the 27.5 to 28.35 GHz band proposed for 5G cellular systems.", "title": "" }, { "docid": "929f294583267ca8cb8616e803687f1e", "text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. 
We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.", "title": "" }, { "docid": "ea9bafe86af4418fa51abe27a2c2180b", "text": "In this work, we propose a novel phenomenological model of the EEG signal based on the dynamics of a coupled Duffing-van der Pol oscillator network. An optimization scheme is adopted to match data generated from the model with clinically obtained EEG data from subjects under resting eyes-open (EO) and eyes-closed (EC) conditions. It is shown that a coupled system of two Duffing-van der Pol oscillators with optimized parameters yields signals with characteristics that match those of the EEG in both the EO and EC cases. The results, which are reinforced using statistical analysis, show that the EEG recordings under EC and EO resting conditions are clearly distinct realizations of the same underlying model occurring due to parameter variations with qualitatively different nonlinear dynamic characteristics. In addition, the interplay between noise and nonlinearity is addressed and it is shown that, for appropriately chosen values of noise intensity in the model, very good agreement exists between the model output and the EEG in terms of the power spectrum as well as Shannon entropy. In summary, the results establish that an appropriately tuned stochastic coupled nonlinear oscillator network such as the Duffing-van der Pol system could provide a useful framework for modeling and analysis of the EEG signal. In turn, design of algorithms based on the framework has the potential to positively impact the development of novel diagnostic strategies for brain injuries and disorders. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "21c7cbcf02141c60443f912ae5f1208b", "text": "A novel driving scheme based on simultaneous emission is reported for 2D/3D AMOLED TVs. The new method reduces leftright crosstalk without sacrificing luminance. The new scheme greatly simplifies the pixel circuit as the number of transistors for Vth compensation is reduced from 6 to 3. The capacitive load of scan lines is reduced by 48%, enabling very high refresh rate (240 Hz).", "title": "" }, { "docid": "afd378cf5e492a9627e746254586763b", "text": "Gradient-based optimization has enabled dramatic advances in computational imaging through techniques like deep learning and nonlinear optimization. These methods require gradients not just of simple mathematical functions, but of general programs which encode complex transformations of images and graphical data. Unfortunately, practitioners have traditionally been limited to either hand-deriving gradients of complex computations, or composing programs from a limited set of coarse-grained operators in deep learning frameworks. 
At the same time, writing programs with the level of performance needed for imaging and deep learning is prohibitively difficult for most programmers.\n We extend the image processing language Halide with general reverse-mode automatic differentiation (AD), and the ability to automatically optimize the implementation of gradient computations. This enables automatic computation of the gradients of arbitrary Halide programs, at high performance, with little programmer effort. A key challenge is to structure the gradient code to retain parallelism. We define a simple algorithm to automatically schedule these pipelines, and show how Halide's existing scheduling primitives can express and extend the key AD optimization of \"checkpointing.\"\n Using this new tool, we show how to easily define new neural network layers which automatically compile to high-performance GPU implementations, and how to solve nonlinear inverse problems from computational imaging. Finally, we show how differentiable programming enables dramatically improving the quality of even traditional, feed-forward image processing algorithms, blurring the distinction between classical and deep methods.", "title": "" }, { "docid": "debb2bc6845eb2355c54c2599b40e102", "text": "Graphs are used to model many real objects such as social networks and web graphs. Many real applications in various fields require efficient and effective management of large-scale, graph-structured data. Although distributed graph engines such as GBase and Pregel handle billion-scale graphs, users need to be skilled at managing and tuning a distributed system in a cluster, which is a non-trivial job for ordinary users. Furthermore, these distributed systems need many machines in a cluster in order to provide reasonable performance. Several recent works proposed non-distributed graph processing platforms as complements to distributed platforms. In fact, efficient non-distributed platforms require less hardware resource and can achieve better energy efficiency than distributed ones. GraphChi is a representative non-distributed platform that is disk-based and can process billions of edges on CPUs in a single PC. However, the design drawbacks of GraphChi on I/O and computation model have limited its parallelism and performance. In this paper, we propose a general, disk-based graph engine called gGraph to process billion-scale graphs efficiently by utilizing both CPUs and GPUs in a single PC. GGraph exploits full parallelism and full overlap of computation and I/O processing as much as possible. Experiment results show that gGraph outperforms GraphChi and PowerGraph. In addition, gGraph achieves the best energy efficiency among all evaluated platforms.", "title": "" }, { "docid": "f49bb940c12e2eac57112862a564c95f", "text": "Hydrogels in which cells are encapsulated are of great potential interest for tissue engineering applications. These gels provide a structure inside which cells can spread and proliferate. Such structures benefit from controlled microarchitectures that can affect the behavior of the enclosed cells. Microfabrication-based techniques are emerging as powerful approaches to generate such cell-encapsulating hydrogel structures. In this paper we introduce common hydrogels and their crosslinking methods and review the latest microscale approaches for generation of cell containing gel particles. We specifically focus on microfluidics-based methods and on techniques such as micromolding and electrospinning.", "title": "" } ]
scidocsrr
22239accc498928007bf36ee6dc778d7
On Estimation and Selection for Topic Models
[ { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "8f704e4c4c2a0c696864116559a0f22c", "text": "Friendships with competitors can improve the performance of organizations through the mechanisms of enhanced collaboration, mitigated competition, and better information exchange. Moreover, these benefits are best achieved when competing managers are embedded in a cohesive network of friendships (i.e., one with many friendships among competitors), since cohesion facilitates the verification of information culled from the network, eliminates the structural holes faced by customers, and facilitates the normative control of competitors. The first part of this analysis examines the performance implications of the friendship-network structure within the Sydney hotel industry, with performance being the yield (i.e., revenue per available room) of a given hotel. This shows that friendships with competitors lead to dramatic improvements in hotel yields. Performance is further improved if a manager’s competitors are themselves friends, evidencing the benefit of cohesive friendship networks. The second part of the analysis examines the structure of friendship ties among hotel managers and shows that friendships are more likely between managers who are competitors.", "title": "" }, { "docid": "1f81e5e9851b4750aac009da5ae578a1", "text": "This paper describes a method to automatically create dialogue resources annotated with dialogue act information by reusing existing dialogue corpora. Numerous dialogue corpora are available for research purposes and many of them are annotated with dialogue act information that captures the intentions encoded in user utterances. Annotated dialogue resources, however, differ in various respects: data collection settings and modalities used, dialogue task domains and scenarios (if any) underlying the collection, number and roles of dialogue participants involved and dialogue act annotation schemes applied. The presented study encompasses three phases of data-driven investigation. We, first, assess the importance of various types of features and their combinations for effective cross-domain dialogue act classification. Second, we establish the best predictive model comparing various cross-corpora training settings. Finally, we specify models adaptation procedures and explore late fusion approaches to optimize the overall classification decision taking process. The proposed methodology accounts for empirically motivated and technically sound classification procedures that may reduce annotation and training costs significantly.", "title": "" }, { "docid": "7c8ab43cfd1c9e03b33a1454651512a7", "text": "The authors investigated the extent to which touch, vision, and audition mediate the processing of statistical regularities within sequential input. Few researchers have conducted rigorous comparisons across sensory modalities; in particular, the sense of touch has been virtually ignored. The current data reveal not only commonalities but also modality constraints affecting statistical learning across the senses. To be specific, the authors found that the auditory modality displayed a quantitative learning advantage compared with vision and touch. In addition, they discovered qualitative learning biases among the senses: Primarily, audition afforded better learning for the final part of input sequences. 
These findings are discussed in terms of whether statistical learning is likely to consist of a single, unitary mechanism or multiple, modality-constrained ones.", "title": "" }, { "docid": "f636dece7889f998fa10c19736d90a9a", "text": "Our use of language depends upon two capacities: a mental lexicon of memorized words and a mental grammar of rules that underlie the sequential and hierarchical composition of lexical forms into predictably structured larger words, phrases, and sentences. The declarative/procedural model posits that the lexicon/grammar distinction in language is tied to the distinction between two well-studied brain memory systems. On this view, the memorization and use of at least simple words (those with noncompositional, that is, arbitrary form-meaning pairings) depends upon an associative memory of distributed representations that is subserved by temporal-lobe circuits previously implicated in the learning and use of fact and event knowledge. This \"declarative memory\" system appears to be specialized for learning arbitrarily related information (i.e., for associative binding). In contrast, the acquisition and use of grammatical rules that underlie symbol manipulation is subserved by frontal/basal-ganglia circuits previously implicated in the implicit (nonconscious) learning and expression of motor and cognitive \"skills\" and \"habits\" (e.g., from simple motor acts to skilled game playing). This \"procedural\" system may be specialized for computing sequences. This novel view of lexicon and grammar offers an alternative to the two main competing theoretical frameworks. It shares the perspective of traditional dual-mechanism theories in positing that the mental lexicon and a symbol-manipulating mental grammar are subserved by distinct computational components that may be linked to distinct brain structures. However, it diverges from these theories where they assume components dedicated to each of the two language capacities (that is, domain-specific) and in their common assumption that lexical memory is a rote list of items. Conversely, while it shares with single-mechanism theories the perspective that the two capacities are subserved by domain-independent computational mechanisms, it diverges from them where they link both capacities to a single associative memory system with broad anatomic distribution. The declarative/procedural model, but neither traditional dual- nor single-mechanism models, predicts double dissociations between lexicon and grammar, with associations among associative memory properties, memorized words and facts, and temporal-lobe structures, and among symbol-manipulation properties, grammatical rule products, motor skills, and frontal/basal-ganglia structures. In order to contrast lexicon and grammar while holding other factors constant, we have focused our investigations of the declarative/procedural model on morphologically complex word forms. Morphological transformations that are (largely) unproductive (e.g., in go-went, solemn-solemnity) are hypothesized to depend upon declarative memory. These have been contrasted with morphological transformations that are fully productive (e.g., in walk-walked, happy-happiness), whose computation is posited to be solely dependent upon grammatical rules subserved by the procedural system. Here evidence is presented from studies that use a range of psycholinguistic and neurolinguistic approaches with children and adults. 
It is argued that converging evidence from these studies supports the declarative/procedural model of lexicon and grammar.", "title": "" }, { "docid": "07c43b1daa2520196e733b6efbd75a2b", "text": "Disruptive digital technologies empower customers to define how they would like to interact with organizations. Consequently, organizations often struggle to implement an appropriate omni-channel strategy (OCS) that both meets customers’ interaction preferences and can be operated efficiently. Despite this strong practical need, research on omni-channel management predominantly adopts a descriptive perspective. There is little prescriptive knowledge to support organizations in assessing the business value of OCSs and comparing them accordingly. To address this research gap, we propose an economic decision model that helps select an appropriate OCS, considering online and offline channels, the opening and closing of channels, non-sequential customer journeys, and customers’ channel preferences. Drawing from investment theory and value-based management, the decision model recommends implementing the OCS with the highest contribution to an organization’s long-term firm value. We validate the decision model using real-world data on the omni-channel environment of a German financial service provider.", "title": "" }, { "docid": "3c02edb767f39d38ede6310987ca5816", "text": "OBJECTIVE\nThe Connor-Davidson Resilience Scale (CD-RISC) measures various aspects of psychological resilience in patients with posttraumatic stress disorder (PTSD) and other psychiatric ailments. This study sought to assess the reliability and validity of the Korean version of the Connor-Davidson Resilience Scale (K-CD-RISC).\n\n\nMETHODS\nIn total, 576 participants were enrolled (497 females and 79 males), including hospital nurses, university students, and firefighters. Subjects were evaluated using the K-CD-RISC, the Beck Depression Inventory (BDI), the Impact of Event Scale-Revised (IES-R), the Rosenberg Self-Esteem Scale (RSES), and the Perceived Stress Scale (PSS). Test-retest reliability and internal consistency were examined as a measure of reliability, and convergent validity and factor analysis were also performed to evaluate validity.\n\n\nRESULTS\nCronbach's alpha coefficient and test-retest reliability were 0.93 and 0.93, respectively. The total score on the K-CD-RISC was positively correlated with the RSES (r=0.56, p<0.01). Conversely, BDI (r=-0.46, p<0.01), PSS (r=-0.32, p<0.01), and IES-R scores (r=-0.26, p<0.01) were negatively correlated with the K-CD-RISC. The K-CD-RISC showed a five-factor structure that explained 57.2% of the variance.\n\n\nCONCLUSION\nThe K-CD-RISC showed good reliability and validity for measurement of resilience among Korean subjects.", "title": "" }, { "docid": "8096886eff1b288561cbe75302e8c578", "text": "In this paper, we develop a framework to classify supply chain risk management problems and approaches for the solution of these problems. We argue that risk management problems need to be handled at three levels strategic, operational and tactical. In addition, risk within the supply chain might manifest itself in the form of deviations, disruptions and disasters. To handle unforeseen events in the supply chain there are two obvious approaches: (1) to design chains with built in risk-tolerance and (2) to contain the damage once the undesirable event has occurred. 
Both of these approaches require a clear understanding of undesirable events that may take place in the supply chain and also the associated consequences and impacts from these events. We can then focus our efforts on mapping out the propagation of events in the supply chain due to supplier non-performance, and employ our insight to develop two mathematical programming based preventive models for strategic level deviation and disruption management. The first model, a simple integer quadratic optimization model, adapted from the Markowitz model, determines optimal partner selection with the objective of minimizing both the operational cost and the variability of total operational cost. The second model, a simple mixed integer programming optimization model, adapted from the credit risk minimization model, determines optimal partner selection such that the supply shortfall is minimized even in the face of supplier disruptions. Hence, both of these models offer possible approaches to robust supply chain design.", "title": "" }, { "docid": "95fb51b0b6d8a3a88edfc96157233b10", "text": "Various types of video can be captured with fisheye lenses; their wide field of view is particularly suited to surveillance video. However, fisheye lenses introduce distortion, and this changes as objects in the scene move, making fisheye video difficult to interpret. Current still fisheye image correction methods are either limited to small angles of view, or are strongly content dependent, and therefore unsuitable for processing video streams. We present an efficient and robust scheme for fisheye video correction, which minimizes time-varying distortion and preserves salient content in a coherent manner. Our optimization process is controlled by user annotation, and takes into account a wide set of measures addressing different aspects of natural scene appearance. Each is represented as a quadratic term in an energy minimization problem, leading to a closed-form solution via a sparse linear system. We illustrate our method with a range of examples, demonstrating coherent natural-looking video output. The visual quality of individual frames is comparable to those produced by state-of-the-art methods for fisheye still photograph correction.", "title": "" }, { "docid": "406fab96a8fd49f4d898a9735ee1512f", "text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.", "title": "" }, { "docid": "4f0274c2303560867fb1f4fe922db86f", "text": "Cerebral activation was measured with positron emission tomography in ten human volunteers. The primary auditory cortex showed increased activity in response to noise bursts, whereas acoustically matched speech syllables activated secondary auditory cortices bilaterally. 
Instructions to make judgments about different attributes of the same speech signal resulted in activation of specific lateralized neural systems. Discrimination of phonetic structure led to increased activity in part of Broca's area of the left hemisphere, suggesting a role for articulatory recoding in phonetic perception. Processing changes in pitch produced activation of the right prefrontal cortex, consistent with the importance of right-hemisphere mechanisms in pitch perception.", "title": "" }, { "docid": "3e83f454f66e8aba14733205c8e19753", "text": "BACKGROUND\nNormal-weight adults gain lower-body fat via adipocyte hyperplasia and upper-body subcutaneous (UBSQ) fat via adipocyte hypertrophy.\n\n\nOBJECTIVES\nWe investigated whether regional fat loss mirrors fat gain and whether the loss of lower-body fat is attributed to decreased adipocyte number or size.\n\n\nDESIGN\nWe assessed UBSQ, lower-body, and visceral fat gains and losses in response to overfeeding and underfeeding in 23 normal-weight adults (15 men) by using dual-energy X-ray absorptiometry and abdominal computed tomography scans. Participants gained ∼5% of weight in 8 wk and lost ∼80% of gained fat in 8 wk. We measured abdominal subcutaneous and femoral adipocyte sizes and numbers after weight gain and loss.\n\n\nRESULTS\nVolunteers gained 3.1 ± 2.1 (mean ± SD) kg body fat with overfeeding and lost 2.4 ± 1.7 kg body fat with underfeeding. Although UBSQ and visceral fat gains were completely reversed after 8 wk of underfeeding, lower-body fat had not yet returned to baseline values. Abdominal and femoral adipocyte sizes, but not numbers, decreased with weight loss. Decreases in abdominal adipocyte size and UBSQ fat mass were correlated (ρ = 0.76, P = 0.001), as were decreases in femoral adipocyte size and lower-body fat (ρ = 0.49, P = 0.05).\n\n\nCONCLUSIONS\nUBSQ and visceral fat increase and decrease proportionately with a short-term weight gain and loss, whereas a gain of lower-body fat does not relate to the loss of lower-body fat. The loss of lower-body fat is attributed to a reduced fat cell size, but not number, which may result in long-term increases in fat cell numbers.", "title": "" }, { "docid": "137b9760d265304560f1cac14edb7f21", "text": "Gallstones are solid particles formed from bile in the gall bladder. In this paper, we propose a technique to automatically detect Gallstones in ultrasound images, christened as, Automated Gallstone Segmentation (AGS) Technique. Speckle Noise in the ultrasound image is first suppressed using Anisotropic Diffusion Technique. The edges are then enhanced using Unsharp Filtering. NCUT Segmentation Technique is then put to use to segment the image. Afterwards, edges are detected using Sobel Edge Detection. Further, Edge Thickening Process is used to smoothen the edges and probability maps are generated using Floodfill Technique. Then, the image is scribbled using Automatic Scribbling Technique. Finally, we get the segmented gallstone within the gallbladder using the Closed Form Matting Technique.", "title": "" }, { "docid": "d9cf753cf3e7a73a69199e67fc7fcc3e", "text": "Litz wire is useful for high-frequency power but has high cost, limited thermal conductivity, and decreased effectiveness above 1 MHz. Multiple parallel layers of foil have the potential to overcome these limitations, but techniques are needed to enforce current sharing between the foil layers, as is accomplished by twisting in litz wire. 
Four strategies for this include already known techniques for interchanging foil layer positions, which are reviewed, and three new approaches: 1) balancing flux linkage by adjusting spacing between layers, 2) using capacitance between layers to provide ballast impedance, and 3) adding miniature balancing transformers to the foil terminations. The methods are analyzed and their scope of applicability is compared.", "title": "" }, { "docid": "c0c30c3b9539511e9079ec7894ad754f", "text": "Cardiovascular disease remains the world's leading cause of death. Yet, we have known for decades that the vast majority of atherosclerosis and its subsequent morbidity and mortality are influenced predominantly by diet. This paper will describe a health-promoting whole food, plant-based diet; delineate macro- and micro-nutrition, emphasizing specific geriatric concerns; and offer guidance to physicians and other healthcare practitioners to support patients in successfully utilizing nutrition to improve their health.", "title": "" }, { "docid": "4a16195478fcb1285ed5e5129a49199d", "text": "BACKGROUND AND PURPOSE\nLittle research has been done regarding the attitudes and behaviors of physical therapists relative to the use of evidence in practice. The purposes of this study were to describe the beliefs, attitudes, knowledge, and behaviors of physical therapist members of the American Physical Therapy Association (APTA) as they relate to evidence-based practice (EBP) and to generate hypotheses about the relationship between these attributes and personal and practice characteristics of the respondents.\n\n\nMETHODS\nA survey of a random sample of physical therapist members of APTA resulted in a 48.8% return rate and a sample of 488 that was fairly representative of the national membership. Participants completed a questionnaire designed to determine beliefs, attitudes, knowledge, and behaviors regarding EBP, as well as demographic information about themselves and their practice settings. Responses were summarized for each item, and logistic regression analyses were used to examine relationships among variables.\n\n\nRESULTS\nRespondents agreed that the use of evidence in practice was necessary, that the literature was helpful in their practices, and that quality of patient care was better when evidence was used. Training, familiarity with and confidence in search strategies, use of databases, and critical appraisal tended to be associated with younger therapists with fewer years since they were licensed. Seventeen percent of the respondents stated they read fewer than 2 articles in a typical month, and one quarter of the respondents stated they used literature in their clinical decision making less than twice per month. The majority of the respondents had access to online information, although more had access at home than at work. According to the respondents, the primary barrier to implementing EBP was lack of time.\n\n\nDISCUSSION AND CONCLUSION\nPhysical therapists stated they had a positive attitude about EBP and were interested in learning or improving the skills necessary to implement EBP. They noted that they needed to increase the use of evidence in their daily practice.", "title": "" }, { "docid": "7e736d4f906a28d4796fe7ac404b5f94", "text": "The internal program representation chosen for a software development environment plays a critical role in the nature of that environment. A form should facilitate implementation and contribute to the responsiveness of the environment to the user. 
The program dependence graph (PDG) may be a suitable internal form. It allows programs to be sliced in linear time for debugging and for use by language-directed editors. The slices obtained are more accurate than those obtained with existing methods because I/O is accounted for correctly and irrelevant statements on multi-statement lines are not displayed. The PDG may be interpreted in a data driven fashion or may have highly optimized (including vectorized) code produced from it. It is amenable to incremental data flow analysis, improving response time to the user in an interactive environment and facilitating debugging through data flow anomaly detection. It may also offer a good basis for software complexity metrics, adding to the completeness of an environment based on it.", "title": "" }, { "docid": "041b308fe83ac9d5a92e33fd9c84299a", "text": "Spaceborne synthetic aperture radar systems are severely constrained to a narrow swath by ambiguity limitations. Here a vertically scanned-beam synthetic aperture system (SCANSAR) is proposed as a solution to this problem. The potential length of synthetic aperture must be shared between beam positions, so the along-track resolution is poorer; a direct tradeoff exists between resolution and swath width. The length of the real aperture is independently traded against the number of scanning positions. Design curves and equations are presented for spaceborne SCANSARs for altitudes between 400 and 1400 km and inner angles of incidence between 20° and 40°. When the real antenna is approximately square, it may also be used for a microwave radiometer. The combined radiometer and synthetic-aperture (RADISAR) should be useful for those applications where the poorer resolution of the radiometer is useful for some purposes, but the finer resolution of the radar is needed for others.", "title": "" }, { "docid": "756929d22f107a5ff0b3bf0b19414a06", "text": "Users of social networking sites such as Facebook frequently post self-portraits on their profiles. While research has begun to analyze the motivations for posting such pictures, less is known about how selfies are evaluated by recipients. Although producers of selfies typically aim to create a positive impression, selfies may also be regarded as narcissistic and therefore fail to achieve the intended goal. The aim of this study is to examine the potentially ambivalent reception of selfies compared to photos taken by others based on the Brunswik lens model Brunswik (1956). In a between-subjects online experiment (N = 297), Facebook profile mockups were shown which differed with regard to picture type (selfie vs. photo taken by others), gender of the profile owner (female vs. male), and number of individuals within a picture (single person vs. group). Results revealed that selfies were indeed evaluated more negatively than photos taken by others. Persons in selfies were rated as less trustworthy, less socially attractive, less open to new experiences, more narcissistic and more extroverted than the same persons in photos taken by others. In addition, gender differences were observed in the perception of pictures. Male profile owners were rated as more narcissistic and less trustworthy than female profile owners, but there was no significant interaction effect of type of picture and gender. Moreover, a mediation analysis of presumed motives for posting selfies revealed that negative evaluations of selfie posting individuals were mainly driven by the perceived motivation of impression management. 
Findings suggest that selfies are likely to be evaluated less positively than producers of selfies might suppose.", "title": "" }, { "docid": "13055a3a35f058eb3622fb60afc436fc", "text": "AIM\nTo investigate the probability of and factors influencing periapical status of teeth following primary (1°RCTx) or secondary (2°RCTx) root canal treatment.\n\n\nMETHODOLOGY\nThis prospective study involved annual clinical and radiographic follow-up of 1°RCTx (1170 roots, 702 teeth and 534 patients) or 2°RCTx (1314 roots, 750 teeth and 559 patients) carried out by Endodontic postgraduate students for 2-4 (50%) years. Pre-, intra- and postoperative data were collected prospectively on customized forms. The proportion of roots with complete periapical healing was estimated, and prognostic factors were investigated using multiple logistic regression models. Clustering effects within patients were adjusted in all models using robust standard error.\n\n\nRESULTS\nproportion of roots with complete periapical healing after 1°RCTx (83%; 95% CI: 81%, 85%) or 2°RCTx (80%; 95% CI: 78%, 82%) were similar. Eleven prognostic factors were identified. The conditions that were found to improve periapical healing significantly were: the preoperative absence of a periapical lesion (P = 0.003); in presence of a periapical lesion, the smaller its size (P ≤ 0.001), the better the treatment prognosis; the absence of a preoperative sinus tract (P = 0.001); achievement of patency at the canal terminus (P = 0.001); extension of canal cleaning as close as possible to its apical terminus (P = 0.001); the use of ethylene-diamine-tetra-acetic acid (EDTA) solution as a penultimate wash followed by final rinse with NaOCl solution in 2°RCTx cases (P = 0.002); abstaining from using 2% chlorexidine as an adjunct irrigant to NaOCl solution (P = 0.01); absence of tooth/root perforation (P = 0.06); absence of interappointment flare-up (pain or swelling) (P =0.002); absence of root-filling extrusion (P ≤ 0.001); and presence of a satisfactory coronal restoration (P ≤ 0.001).\n\n\nCONCLUSIONS\nSuccess based on periapical health associated with roots following 1°RCTx (83%) or 2°RCTx (80%) was similar, with 10 factors having a common effect on both, whilst the 11th factor 'EDTA as an additional irrigant' had different effects on the two treatments.", "title": "" }, { "docid": "b475a47a9c8e8aca82c236250bbbfc33", "text": "OBJECTIVE\nTo issue a recommendation on the types and amounts of physical activity needed to improve and maintain health in older adults.\n\n\nPARTICIPANTS\nA panel of scientists with expertise in public health, behavioral science, epidemiology, exercise science, medicine, and gerontology.\n\n\nEVIDENCE\nThe expert panel reviewed existing consensus statements and relevant evidence from primary research articles and reviews of the literature.\n\n\nPROCESS\nAfter drafting a recommendation for the older adult population and reviewing drafts of the Updated Recommendation from the American College of Sports Medicine (ACSM) and the American Heart Association (AHA) for Adults, the panel issued a final recommendation on physical activity for older adults.\n\n\nSUMMARY\nThe recommendation for older adults is similar to the updated ACSM/AHA recommendation for adults, but has several important differences including: the recommended intensity of aerobic activity takes into account the older adult's aerobic fitness; activities that maintain or increase flexibility are recommended; and balance exercises are recommended for older adults at 
risk of falls. In addition, older adults should have an activity plan for achieving recommended physical activity that integrates preventive and therapeutic recommendations. The promotion of physical activity in older adults should emphasize moderate-intensity aerobic activity, muscle-strengthening activity, reducing sedentary behavior, and risk management.", "title": "" } ]
scidocsrr
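One of the passages in the record above describes the program dependence graph (PDG) and notes that it lets programs be sliced in linear time for debugging. Purely as a rough, self-contained illustration of that idea — not code from the cited work — the Python sketch below models a PDG as a plain dependence map (the statement names s1–s4 are invented for the example) and computes a backward slice by reverse reachability, which visits each dependence edge at most once.

```python
# Hedged sketch of backward slicing over a program dependence graph (PDG).
# The PDG is modelled simply as a mapping from each statement to the
# statements it depends on (data or control dependences); node names are
# invented and are not taken from the cited paper.

from collections import deque

def backward_slice(pdg, criterion):
    """Return every statement the slicing criterion transitively depends on."""
    seen = {criterion}
    work = deque([criterion])
    while work:
        node = work.popleft()
        for dep in pdg.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                work.append(dep)
    return seen

# Example: s4 depends on s2 and s3; s2 and s3 each depend on s1.
example_pdg = {
    "s2": ["s1"],
    "s3": ["s1"],
    "s4": ["s2", "s3"],
}
print(sorted(backward_slice(example_pdg, "s4")))  # ['s1', 's2', 's3', 's4']
```

Because every dependence edge is traversed at most once, the traversal runs in time linear in the size of the graph, which is the property the passage highlights.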
1397c20487ec9b557651d74e004a2989
Switching from long-term benzodiazepine therapy to pregabalin in patients with generalized anxiety disorder: a double-blind, placebo-controlled trial.
[ { "docid": "d8c27be808c422d024bd23a37251884b", "text": "and treatable mental disorders presenting in general medical as well as specialty settings. There are a number of case-finding instruments for detecting depression in primary care, ranging from 2 to 28 items.1,2 Typically these can be scored as continuous measures of depression severity and also have established cutpoints above which the probability of major depression is substantially increased. Scores on these various measures tend to be highly correlated3, with little evidence that one measure is superior to any other.1,2,4", "title": "" } ]
[ { "docid": "d5f3c534ecbb1ab8bc4ecc42aed38c18", "text": "Automatic construction of large knowledge graphs (KG) by mining web-scale text datasets has received considerable attention recently. Estimating accuracy of such automatically constructed KGs is a challenging problem due to their size and diversity and has largely been ignored in prior research. In this work, we try to fill this gap by proposing KGEval. KGEval uses coupling constraints to bind facts and crowdsource those few that can infer large parts of the graph. We demonstrate that the objective optimized by KGEval is submodular and NP-hard, allowing guarantees for our approximation algorithm. Through experiments on real-world datasets, we demonstrate that KGEval best estimates KG accuracy compared to other baselines, while requiring significantly lesser number of human evaluations.", "title": "" }, { "docid": "58c8287255ad6d861bd6f970f9b723fb", "text": "BACKGROUND\nFatigue and burnout are two concepts often linked in the literature. However, regardless of their commonalities they should be approached as distinct concepts. The current and ever-growing reforms regarding the delivery of nursing care in Cyprus, stress for the development of ways to prevent burnout and effectively manage fatigue that can result from working in stressful clinical environments.\n\n\nMETHODS\nTo explore the factors associated with the burnout syndrome in Cypriot nurses working in various clinical departments. A random sampling method taking into account geographical location, specialty and type of employment has been used.\n\n\nRESULTS\nA total of 1,482 nurses (80.4% were females) working both in the private and public sectors completed and returned an anonymous questionnaire that included several aspects related to burnout; the MBI scale, questions related to occupational stress, and questions pertaining to self reported fatigue. Two-thirds (65.1%) of the nurses believed that their job is stressful with the majority reporting their job as stressful being female nurses (67.7%). Twelve point eight percent of the nurses met Maslach's criteria for burnout. The prevalence of fatigue in nurses was found 91.9%. The prevalence of fatigue was higher in females (93%) than in males (87.5%) (p = 0.003). As opposed to the burnout prevalence, fatigue prevalence did not differ among the nursing departments (p = 0.166) and among nurses with a different marital status (p = 0.553). Burnout can be associated adequately knowing if nurses find their job stressful, their age, the level of emotional exhaustion and depersonalization. It has been shown that the fatigue may be thought of as a predictor of burnout, but its influence is already accounted by emotional exhaustion and depersonalization.\n\n\nCONCLUSION\nThe clinical settings in Cyprus appear as stress generating environment for nurses. Nurses working both in the private and public sector appear to experience low to severe burnout. Self-reported fatigue interferes to the onset of emotional exhaustion and depersonalization.", "title": "" }, { "docid": "485accd6923115eb66996cf734cb1cab", "text": "We describe extensions to the method of Pritchard et al. for inferring population structure from multilocus genotype data. Most importantly, we develop methods that allow for linkage between loci. The new model accounts for the correlations between linked loci that arise in admixed populations (\"admixture linkage disequilibium\"). 
This modification has several advantages, allowing (1) detection of admixture events farther back into the past, (2) inference of the population of origin of chromosomal regions, and (3) more accurate estimates of statistical uncertainty when linked loci are used. It is also of potential use for admixture mapping. In addition, we describe a new prior model for the allele frequencies within each population, which allows identification of subtle population subdivisions that were not detectable using the existing method. We present results applying the new methods to study admixture in African-Americans, recombination in Helicobacter pylori, and drift in populations of Drosophila melanogaster. The methods are implemented in a program, structure, version 2.0, which is available at http://pritch.bsd.uchicago.edu.", "title": "" }, { "docid": "cd0786a460701482df190fe04be01bb0", "text": "This software is designed to solve conic programming problems whose constraint cone is a product of semidefinite cones, second-order cones, nonnegative orthants and Euclidean spaces; and whose objective function is the sum of linear functions and log-barrier terms associated with the constraint cones. This includes the special case of determinant maximization problems with linear matrix inequalities. It employs an infeasible primal-dual predictor-corrector path-following method, with either the HKM or the NT search direction. The basic code is written in Matlab, but key subroutines in C are incorporated via Mex files. Routines are provided to read in problems in either SDPA or SeDuMi format. Sparsity and block diagonal structure are exploited. We also exploit low-rank structures in the constraint matrices associated the semidefinite blocks if such structures are explicitly given. To help the users in using our software, we also include some examples to illustrate the coding of problem data for our SQLP solver. Various techniques to improve the efficiency and stability of the algorithm are incorporated. For example, step-lengths associated with semidefinite cones are calculated via the Lanczos method. Numerical experiments show that this general purpose code can solve more than 80% of a total of about 300 test problems to an accuracy of at least 10−6 in relative duality gap and infeasibilities. Department of Mathematics, National University of Singapore, 2 Science Drive 2, Singapore 117543 (mattohkc@nus.edu.sg); and Singapore-MIT Alliance, E4-04-10, 4 Engineering Drive 3, Singapore 117576. Research supported in parts by NUS Research Grant R146-000-076-112 and SMA IUP Research Grant. Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, USA (reha@cmu.edu). Research supported in part by NSF through grants CCR-9875559, CCF-0430868 and by ONR through grant N00014-05-1-0147 School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York 14853, USA (miketodd@cs.cornell.edu). Research supported in part by NSF through grant DMS-0209457 and by ONR through grant N00014-02-1-0057.", "title": "" }, { "docid": "0a8763b4ba53cf488692d1c7c6791ab4", "text": "a r t i c l e i n f o To address the longitudinal relation between adolescents' habitual usage of media violence and aggressive behavior and empathy, N = 1237 seventh and eighth grade high school students in Germany completed measures of violent and nonviolent media usage, aggression, and empathy twice in twelve months. 
Cross-lagged panel analyses showed significant pathways from T1 media violence usage to higher physical aggression and lower empathy at T2. The reverse paths from T1 aggression or empathy to T2 media violence usage were nonsignificant. The links were similar for boys and girls. No links were found between exposure to nonviolent media and aggression or between violent media and relational aggression. T1 physical aggression moderated the impact of media violence usage, with stronger effects of media violence usage among the low aggression group. Introduction Despite the rapidly growing body of research addressing the potentially harmful effects of exposure to violent media, the evidence currently available is still limited in several ways. First, there is a shortage of longitudinal research examining the associations of media violence usage and aggression over time. Such evidence is crucial for examining hypotheses about the causal directions of observed co-variations of media violence usage and aggression that cannot be established on the basis of cross-sectional research. Second, most of the available longitudinal evidence has focused on aggression as the critical outcome variable, giving comparatively little attention to other potentially harmful effects, such as a decrease in empathy with others in distress. Third, the vast majority of studies available to date were conducted in North America. However, even in the age of globalization, patterns of media violence usage and their cultural contexts may vary considerably, calling for a wider database from different countries to examine the generalizability of results to address each of these aspects. It presents findings from a longitudinal study with a large sample of early adolescents in Germany, relating habitual usage of violence in movies, TV series, and interactive video games to self-reports of physical aggression and empathy over a period of twelve months. The study focused on early adolescence as a developmental period characterized by a confluence of risk factors as a result of biological, psychological, and social changes for a range of adverse outcomes. Regular media violence usage may significantly contribute to the overall risk of aggression as one such negative outcome. Media consumption increases from childhood …", "title": "" }, { "docid": "0b50ec58f82b7ac4ad50eb90425b3aea", "text": "OBJECTIVES\nThe study aimed (1) to examine if there are equivalent results in terms of union, alignment and elbow functionally comparing single- to dual-column plating of AO/OTA 13A2 and A3 distal humeral fractures and (2) if there are more implant-related complications in patients managed with bicolumnar plating compared to single-column plate fixation.\n\n\nDESIGN\nThis was a multi-centred retrospective comparative study.\n\n\nSETTING\nThe study was conducted at two academic level 1 trauma centres.\n\n\nPATIENTS/PARTICIPANTS\nA total of 105 patients were identified to have surgical management of extra-articular distal humeral fractures Arbeitsgemeinschaft für Osteosynthesefragen/Orthopaedic Trauma Association (AO/OTA) 13A2 and AO/OTA 13A3).\n\n\nINTERVENTION\nPatients were treated with traditional dual-column plating or a single-column posterolateral small-fragment pre-contoured locking plate used as a neutralisation device with at least five screws in the short distal segment.\n\n\nMAIN OUTCOME MEASUREMENTS\nThe patients' elbow functionality was assessed in terms of range of motion, union and alignment. 
In addition, the rate of complications between the groups including radial nerve palsy, implant-related complications (painful prominence and/or ulnar nerve neuritis) and elbow stiffness were compared.\n\n\nRESULTS\nPatients treated with single-column plating had similar union rates and alignment. However, single-column plating resulted in a significantly better range of motion with less complications.\n\n\nCONCLUSIONS\nThe current study suggests that exposure/instrumentation of only the lateral column is a reliable and preferred technique. This technique allows for comparable union rates and alignment with increased elbow functionality and decreased number of complications.", "title": "" }, { "docid": "bbb33d4eb3894471e446db3e5bb936ab", "text": "In this article we propose the novel approach to measure anthropometrical features such as height, width of shoulder, circumference of the chest, hip and waist. The sub-pixel processing and convex hull technique are used to efficiently measure the features from 2d image. The SVM technique is used to classify men and women based on measured features. The results of real data processing are presented.", "title": "" }, { "docid": "d07a10da23e0fc18b473f8a30adaebfb", "text": "DATA FLOW IS A POPULAR COMPUTATIONAL MODEL for visual programming languages. Data flow provides a view of computation which shows the data flowing from one filter function to another, being transformed as it goes. In addition, the data flow model easily accomodates the insertion of viewing monitors at various points to show the data to the user. Consequently, many recent visual programming languages are based on the data flow model. This paper describes many of the data flow visual programming languages. The languages are grouped according to their application domain. For each language, pertinent aspects of its appearance, and the particular design alternatives it uses, are discussed. Next, some strengths of data flow visual programming languages are mentioned. Finally, unsolved problems in the design of such languages are discussed.", "title": "" }, { "docid": "b11a161588bd1a3d4d7cd78ecce4aa64", "text": "This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up a VE into a configuration task, and hence reducing the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA) based upon GERAM and created in the IMS GLOBEMEN project.", "title": "" }, { "docid": "e17558c5a39f3e231aa6d09c8e2124fc", "text": "Surveys of child sexual abuse in large nonclinical populations of adults have been conducted in at least 19 countries in addition to the United States and Canada, including 10 national probability samples. All studies have found rates in line with comparable North American research, ranging from 7% to 36% for women and 3% to 29% for men. Most studies found females to be abused at 1 1/2 to 3 times the rate for males. Few comparisons among countries are possible because of methodological and definitional differences. 
However, they clearly confirm sexual abuse to be an international problem.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "d151881de9a0e1699e95db7bbebc032b", "text": "Despite the noticeable progress in perceptual tasks like detection, instance segmentation and human parsing, computers still perform unsatisfactorily on visually understanding humans in crowded scenes, such as group behavior analysis, person re-identification and autonomous driving, etc. To this end, models need to comprehensively perceive the semantic information and the differences between instances in a multi-human image, which is recently defined as the multi-human parsing task. In this paper, we present a new large-scale database “Multi-Human Parsing (MHP)” for algorithm development and evaluation, and advances the state-of-the-art in understanding humans in crowded scenes. MHP contains 25,403 elaborately annotated images with 58 fine-grained semantic category labels, involving 2-26 persons per image and captured in real-world scenes from various viewpoints, poses, occlusion, interactions and background. We further propose a novel deep Nested Adversarial Network (NAN) model for multi-human parsing. NAN consists of three Generative Adversarial Network (GAN)-like sub-nets, respectively performing semantic saliency prediction, instance-agnostic parsing and instance-aware clustering. These sub-nets form a nested structure and are carefully designed to learn jointly in an end-to-end way. NAN consistently outperforms existing state-of-the-art solutions on our MHP and several other datasets, and serves as a strong baseline to drive the future research for multi-human parsing.", "title": "" }, { "docid": "d0a41ebc758439b91f96b44c40dd711b", "text": "Chirp signals are very common in radar, communication, sonar, and etc. Little is known about chirp images, i.e., 2-D chirp signals. In fact, such images frequently appear in optics and medical science. Newton's rings fringe pattern is a classical example of the images, which is widely used in optical metrology. It is known that the fractional Fourier transform(FRFT) is a convenient method for processing chirp signals. Furthermore, it can be extended to 2-D fractional Fourier transform for processing 2-D chirp signals. It is interesting to observe the chirp images in the 2-D fractional Fourier transform domain and extract some physical parameters hidden in the images. Besides that, in the FRFT domain, it is easy to separate the 2-D chirp signal from other signals to obtain the desired image.", "title": "" }, { "docid": "1d9dc60534c6f5fa0d510b41bd151b33", "text": "Android multitasking provides rich features to enhance user experience and offers great flexibility for app developers to promote app personalization. However, the security implication of Android multitasking remains under-investigated. With a systematic study of the complex tasks dynamics, we find design flaws of Android multitasking which make all recent versions of Android vulnerable to task hijacking attacks. 
We demonstrate proof-of-concept examples utilizing the task hijacking attack surface to implement UI spoofing, denialof-service and user monitoring attacks. Attackers may steal login credentials, implement ransomware and spy on user’s activities. We have collected and analyzed over 6.8 million apps from various Android markets. Our analysis shows that the task hijacking risk is prevalent. Since many apps depend on the current multitasking design, defeating task hijacking is not easy. We have notified the Android team about these issues and we discuss possible mitigation techniques in this paper.", "title": "" }, { "docid": "595a31e82d857cedecd098bf4c910e99", "text": "Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.", "title": "" }, { "docid": "245c02139f875fac756dc17d1a2fc6c2", "text": "This paper tries to answer two questions. First, how to infer real-time air quality of any arbitrary location given environmental data and historical air quality data from very sparse monitoring locations. Second, if one needs to establish few new monitoring stations to improve the inference quality, how to determine the best locations for such purpose? The problems are challenging since for most of the locations (>99%) in a city we do not have any air quality data to train a model from. We design a semi-supervised inference model utilizing existing monitoring data together with heterogeneous city dynamics, including meteorology, human mobility, structure of road networks, and point of interests (POIs). We also propose an entropy-minimization model to suggest the best locations to establish new monitoring stations. We evaluate the proposed approach using Beijing air quality data, resulting in clear advantages over a series of state-of-the-art and commonly used methods.", "title": "" }, { "docid": "22f2a272b948595f98227bbaa0a7e719", "text": "The grasping dynamics between a soft gripper and a deformable object has not been investigated so far. 
To this end, a 3D printed soft robot gripper with modular design was proposed in this paper. The gripper consists of a rigid base and three modular soft fingers. A snap-lock mechanism was designed on each finger for easy attach-detach to the base. All components were 3D printed using the Objet260 Connex system. The soft finger is air-driven and the idea is based on the principle of fluidic elastomer actuator. A curvature sensor was integrated inside each finger to measure the curvature during grasping. The fingers integrated with sensors were calibrated under different pneumatic pressure inputs. Relationship between pressure loading and bending angle, and relationship between bending angle and sensor output (voltage) were derived. Experiments with the gripper grasping and lifting a paper container filled with peanuts were conducted and results were presented and discussed.", "title": "" }, { "docid": "27ed0ab08b10935d12b59b6d24bed3f1", "text": "A major stumbling block to progress in understanding basic human interactions, such as getting out of bed or opening a refrigerator, is lack of good training data. Most past efforts have gathered this data explicitly: starting with a laundry list of action labels, and then querying search engines for videos tagged with each label. In this work, we do the reverse and search implicitly: we start with a large collection of interaction-rich video data and then annotate and analyze it. We use Internet Lifestyle Vlogs as the source of surprisingly large and diverse interaction data. We show that by collecting the data first, we are able to achieve greater scale and far greater diversity in terms of actions and actors. Additionally, our data exposes biases built into common explicitly gathered data. We make sense of our data by analyzing the central component of interaction - hands. We benchmark two tasks: identifying semantic object contact at the video level and non-semantic contact state at the frame level. We additionally demonstrate future prediction of hands.", "title": "" }, { "docid": "2cd327bd5a7814776825e090b12664ec", "text": "is an open access repository that collects the work of Arts et Métiers ParisTech researchers and makes it freely available over the web where possible. This article proposes a method based on wavelet transform and neural networks for relating pupillary behavior to psychological stress. The proposed method was tested by recording pupil diameter and electrodermal activity during a simulated driving task. Self-report measures were also collected. Participants performed a baseline run with the driving task only, followed by three stress runs where they were required to perform the driving task along with sound alerts, the presence of two human evaluators, and both. Self-reports and pupil diameter successfully indexed stress manipulation, and significant correlations were found between these measures. However, electrodermal activity did not vary accordingly. After training, the four-way parallel neu-ral network classifier could guess whether a given unknown pupil diameter signal came from one of the four experimental trials with 79.2% precision. The present study shows that pupil diameter signal has good discriminating power for stress detection. 1. INTRODUCTION Stress detection and measurement are important issues in several human–computer interaction domains such as Affective Computing, Adaptive Automation, and Ambient Intelligence. 
In general, researchers and system designers seek to estimate the psychological state of operators in order to adapt or redesign the working environment accordingly (Sauter, 1991). The primary goal of such adaptation is to enhance overall system performance, trying to reduce workers' psychophysi-cal detriment (e. One key aspect of stress measurement concerns the recording of physiological parameters, which are known to be modulated by the autonomic nervous system (ANS). However, despite", "title": "" }, { "docid": "091bb846b07dac79c8844491c95725bf", "text": "The Internet has opened new avenues for information accessing and sharing in a variety of media formats. Such popularity has resulted in an increase of the amount of resources consumed in backbone links, whose capacities have witnessed numerous upgrades to cope with the ever-increasing demand for bandwidth. Consequently, network traffic processing at today’s data transmission rates is a very demanding task, which has been traditionally accomplished by means of specialized hardware tailored to specific tasks. However, such approaches lack either of flexibility or extensibility—or both. As an alternative, the research community has pointed to the utilization of commodity hardware, which may provide flexible and extensible cost-aware solutions, ergo entailing large reductions of the operational and capital expenditure investments. In this chapter, we provide a survey-like introduction to high-performance network traffic processing using commodity hardware. We present the required background to understand the different solutions proposed in the literature to achieve high-speed lossless packet capture, which are reviewed and compared.", "title": "" } ]
scidocsrr
ec8ca1843aede3eba3652535c2ba7e56
Arithmetic Coding for Data Compression
[ { "docid": "bbf581230ec60c2402651d51e3a37211", "text": "The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.", "title": "" } ]
[ { "docid": "eb761eb499b2dc82f7f2a8a8a5ff64a7", "text": "We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (dmc) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second dmc). Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R, and the equivocation d of the data as seen by the wire-tapper. In this paper, we find the trade-off curve between R and d, assuming essentially perfect (“error-free”) transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider that the transmission is accomplished in perfect secrecy. Our results imply that there exists a Cs > 0, such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.", "title": "" }, { "docid": "69d3c943755734903b9266ca2bd2fad1", "text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.", "title": "" }, { "docid": "c3f25271d25590bf76b36fee4043d227", "text": "Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, this does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme to be effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. 
It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.", "title": "" }, { "docid": "a09cb533a0a90a056857d597213efdf2", "text": "一 引言 图像的边缘是图像的重要的特征,它给出了图像场景中物体的轮廓特征信息。当要对图 像中的某一个物体进行识别时,边缘信息是重要的可以利用的信息,例如在很多系统中采用 的模板匹配的识别算法。基于此,我们设计了一套基于 PCI Bus和 Vision Bus的可重构的机 器人视觉系统[3]。此系统能够实时的对图像进行采集,并可以通过系统实时的对图像进行 边缘的提取。 对于图像的边缘提取,采用二阶的边缘检测算子处理后要进行过零点检测,计算量很大 而且用硬件实现资源占用大且速度慢,所以在我们的视觉系统中,卷积器中选择的是一阶的 边缘检测算子。采用一阶的边缘检测算子进行卷积运算之后,仅仅需要对卷积得到的图像进 行阈值处理就可以得到图像的边缘,而阈值处理的操作用硬件实现占用资源少且速度快。由 于本视觉系统要求与应用环境下的精密装配机器人配合使用,系统的实时性要求非常高。因 此,如何对实时采集图像进行快速实时的边缘提取阈值的自动选取,是我们必须要考虑的问 题。 遗传算法是一种仿生物系统的基因进化的迭代搜索算法,其基本思想是由美国Michigan 大学的 J.Holland 教授提出的。由于遗传算法的整体寻优策略以及优化计算时不依赖梯度信 息,所以它具有很强的全局搜索能力,即对于解空间中的全局最优解有着很强的逼近能力。 它适用于问题结构不是十分清楚,总体很大,环境复杂的场合,而对于实时采集的图像进行 边缘检测阈值的选取就是此类问题。本文在对传统的遗传算法进行改进的基础上,提出了一 种对于实时采集图像进行边缘检测的阈值的自动选取方法。", "title": "" }, { "docid": "a8d3a75cdc3bb43217a0120edf5025ff", "text": "An important approach to text mining involves the use of natural-language information extraction. Information extraction (IE) distills structured data or knowledge from unstructured text by identifying references to named entities as well as stated relationships between such entities. IE systems can be used to directly extricate abstract knowledge from a text corpus, or to extract concrete data from a set of documents which can then be further analyzed with traditional data-mining techniques to discover more general patterns. We discuss methods and implemented systems for both of these approaches and summarize results on mining real text corpora of biomedical abstracts, job announcements, and product descriptions. We also discuss challenges that arise when employing current information extraction technology to discover knowledge in text.", "title": "" }, { "docid": "71dd012b54ae081933bddaa60612240e", "text": "This paper analyzes & compares four adders with different logic styles (Conventional, transmission gate, 14 transistors & GDI based technique) for transistor count, power dissipation, delay and power delay product. It is performed in virtuoso platform, using Cadence tool with available GPDK - 90nm kit. The width of NMOS and PMOS is set at 120nm and 240nm respectively. Transmission gate full adder has sheer advantage of high speed but consumes more power. GDI full adder gives reduced voltage swing not being able to pass logic 1 and logic 0 completely showing degraded output. Transmission gate full adder shows better performance in terms of delay (0.417530 ns), whereas 14T full adder shows better performance in terms of all three aspects.", "title": "" }, { "docid": "79a20b9a059a2b4cc73120812c010495", "text": "The present article summarizes the state of the art algorithms to compute the discrete Moreau envelope, and presents a new linear-time algorithm, named NEP for NonExpansive Proximal mapping. Numerical comparisons between the NEP and two existing algorithms: The Linear-time Legendre Transform (LLT) and the Parabolic Envelope (PE) algorithms are performed. Worst-case time complexity, convergence results, and examples are included. 
The fast Moreau envelope algorithms first factor the Moreau envelope as several one-dimensional transforms and then reduce the brute force quadratic worst-case time complexity to linear time by using either the equivalence with Fast Legendre Transform algorithms, the computation of a lower envelope of parabolas, or, in the convex case, the non expansiveness of the proximal mapping.", "title": "" }, { "docid": "efe70da1a3118e26acf10aa480ad778d", "text": "Background: Facebook (FB) is becoming an increasingly salient feature in peoples’ lives and has grown into a bastion in our current society with over 1 billion users worldwide –the majority of which are college students. However, recent studies conducted suggest that the use of Facebook may impacts individuals’ well being. Thus, this paper aimed to explore the effects of Facebook usage on adolescents’ emotional states of depression, anxiety, and stress. Method and Material: A cross sectional design was utilized in this investigation. The study population included 76 students enrolled in the Bachelor of Science in Nursing program from a government university in Samar, Philippines. Facebook Intensity Scale (FIS) and the Depression Anxiety and Stress Scale (DASS) were the primary instruments used in this study. Results: Findings indicated correlation coefficients of 0.11 (p=0.336), 0.07 (p=0.536), and 0.10 (p=0.377) between Facebook Intensity Scale (FIS) and Depression, Anxiety, and Stress scales in the DASS. Time spent on FBcorrelated significantly with depression (r=0.233, p=0.041) and anxiety (r=0.259, p=0.023). Similarly, the three emotional states (depression, anxiety, and stress) correlated significantly. Conclusions: Intensity of Facebook use is not directly related to negative emotional states. However, time spent on Facebooking increases depression and anxiety scores. Implications of the findings to the fields of counseling and psychology are discussed.", "title": "" }, { "docid": "c487af41ead3ee0bc8fe6c95b356a80b", "text": "With such a large volume of material accessible from the World Wide Web, there is an urgent need to increase our knowledge of factors in#uencing reading from screen. We investigate the e!ects of two reading speeds (normal and fast) and di!erent line lengths on comprehension, reading rate and scrolling patterns. Scrolling patterns are de\"ned as the way in which readers proceed through the text, pausing and scrolling. Comprehension and reading rate are also examined in relation to scrolling patterns to attempt to identify some characteristics of e!ective readers. We found a reduction in overall comprehension when reading fast, but the type of information recalled was not dependent on speed. A medium line length (55 characters per line) appears to support e!ective reading at normal and fast speeds. This produced the highest level of comprehension and was also read faster than short lines. Scrolling patterns associated with better comprehension (more time in pauses and more individual scrolling movements) contrast with scrolling patterns used by faster readers (less time in pauses between scrolling). Consequently, e!ective readers can only be de\"ned in relation to the aims of the reading task, which may favour either speed or accuracy. ( 2001 Academic Press", "title": "" }, { "docid": "34623fb38c81af8efaf8e7073e4c43bc", "text": "The k-means problem consists of finding k centers in R that minimize the sum of the squared distances of all points in an input set P from R to their closest respective center. Awasthi et. 
al. recently showed that there exists a constant ε′ > 0 such that it is NP-hard to approximate the k-means objective within a factor of 1 + ε′. We establish that the constant ε′ is at least 0.0013. For a given set of points P ⊂ R, the k-means problem consists of finding a partition of P into k clusters (C1, . . . , Ck) with corresponding centers (c1, . . . , ck) that minimize the sum of the squared distances of all points in P to their corresponding center, i.e. the quantity arg min (C1,...,Ck),(c1,...,ck) k ∑", "title": "" }, { "docid": "455e3f0c6f755d78ecafcdff14c46014", "text": "BACKGROUND\nIn neonatal and early childhood surgeries such as meningomyelocele repairs, closing deep wounds and oncological treatment, tensor fasciae lata (TFL) flaps are used. However, there are not enough data about structural properties of TFL in foetuses, which can be considered as the closest to neonates in terms of sampling. This study's main objective is to gather data about morphological structures of TFL in human foetuses to be used in newborn surgery.\n\n\nMATERIALS AND METHODS\nFifty formalin-fixed foetuses (24 male, 26 female) with gestational age ranging from 18 to 30 weeks (mean 22.94 ± 3.23 weeks) were included in the study. TFL samples were obtained by bilateral dissection and then surface area, width and length parameters were recorded. Digital callipers were used for length and width measurements whereas surface area was calculated using digital image analysis software.\n\n\nRESULTS\nNo statistically significant differences were found in terms of numerical value of parameters between sides and sexes (p > 0.05). Linear functions for TFL surface area, width, anterior and posterior margin lengths were calculated as y = -225.652 + 14.417 × age (weeks), y = -5.571 + 0.595 × age (weeks), y = -4.276 + 0.909 × age (weeks), and y = -4.468 + 0.779 × age (weeks), respectively.\n\n\nCONCLUSIONS\nLinear functions for TFL surface area, width and lengths can be used in designing TFL flap dimensions in newborn surgery. In addition, using those described linear functions can also be beneficial in prediction of TFL flap dimensions in autopsy studies.", "title": "" }, { "docid": "89322e0d2b3566aeb85eeee9f505d5b2", "text": "Parkinson's disease is a neurological disorder with evolving layers of complexity. It has long been characterised by the classical motor features of parkinsonism associated with Lewy bodies and loss of dopaminergic neurons in the substantia nigra. However, the symptomatology of Parkinson's disease is now recognised as heterogeneous, with clinically significant non-motor features. Similarly, its pathology involves extensive regions of the nervous system, various neurotransmitters, and protein aggregates other than just Lewy bodies. The cause of Parkinson's disease remains unknown, but risk of developing Parkinson's disease is no longer viewed as primarily due to environmental factors. Instead, Parkinson's disease seems to result from a complicated interplay of genetic and environmental factors affecting numerous fundamental cellular processes. The complexity of Parkinson's disease is accompanied by clinical challenges, including an inability to make a definitive diagnosis at the earliest stages of the disease and difficulties in the management of symptoms at later stages. Furthermore, there are no treatments that slow the neurodegenerative process. 
In this Seminar, we review these complexities and challenges of Parkinson's disease.", "title": "" }, { "docid": "6033f644fb18ce848922a51d3b0000ab", "text": "This paper tests two of the simplest and most popular trading rules moving average and trading range break, by utilitizing a very long data series, the Dow Jones index from 1897 to 1986. Standard statistical analysis is extended through the use .of bootstrap techniques. Overall our results provide strong support for the technical strategies that are explored. The returns obtained from buy (sell) signals are not consistent with the three popular null models: the random walk, the AR(I) and the GARCH-M. Consistently, buy signals generate higher returns than sell signals. Moreover, returns following sell signals are negative which is not easily explained by any of the currently existing equilibrium models. Furthermore the returns following buy signals are less volatile than returns following sell signals. The term, \"technical analysis,\" is a general heading for a myriad of trading techniques. Technical analysts attempt to forecast prices by the study of past prices and a few other related summary statistics about security trading. They believe that shifts in supply and demand can be detected in charts of market action. Technical analysis is considered by many to be the original form of investment analysis, dating back to the 1800's. It came into widespread use before the period of extensive and fully disclosed financial information, which in turn enabled the practice of fnndamental analysis to develop. In the U.S., the use of trading rules to detect patterns in stock prices is probably as old as the stock market itself. The oldest technique is attributed to Charles Dow and is traced to the late 1800's. Many of the techniques used today have been utilized for over 60 years. These techniques for discovering hidden relations in stock returns can range from extremely simple to quite elaborate. The attitude of academics towards technical analysis, until recently, is well described by Malkiel(1981): \"Obviously, I am biased against the chartist. This is not only a personal predilection, but a professional one as well. Technical analysis is anathema to, the academic world. We love to pick onit. Our bullying tactics' are prompted by two considerations: (1) the method is patently false; and (2) it's easy to pick on. And while it may seem a bit unfair to pick on such a sorry target, just remember': His your money we are trying to save.\" , Nonetheless, technical analysis has been enjoying a renaissance on Wall Street. All major brokerage firms publish technical commentary on the market and individual securities\" and many of the newsletters published by various \"experts\" are based on technical analysis. In recent years the efficient market hypothesis has come under serious siege. Various papers suggested that stock returns are not fully explained by common risk measures. A significant relationship between expected return and fundamental variables such as price-earnings ratio, market-to, book ratio and size was documented. Another group ofpapers has uncovered systematic patterns in stock returns related to various calendar periods such as the weekend effect, the tnrn-of-the-month effect, the holiday effect and the, January effect. A line of research directly related to this work provides evidence of predictability of equity returns from past returns. 
De Bandt and Thaler(1985), Fama and French(1986), and Poterba and Summers(1988) find negative serial correlation in returns of individual stocks aid various portfolios over three to ten year intervals. Rosenberg, Reid, and Lanstein(1985) provide evidence for the presence of predictable return reversals on a monthly basis", "title": "" }, { "docid": "f4b0a7e2ab8728b682b8d399a887c3df", "text": "This paper presents a framework for localization or grounding of phrases in images using a large collection of linguistic and visual cues.1 We model the appearance, size, and position of entity bounding boxes, adjectives that contain attribute information, and spatial relationships between pairs of entities connected by verbs or prepositions. We pay special attention to relationships between people and clothing or body part mentions, as they are useful for distinguishing individuals. We automatically learn weights for combining these cues and at test time, perform joint inference over all phrases in a caption. The resulting system produces a 4% improvement in accuracy over the state of the art on phrase localization on the Flickr30k Entities dataset [25] and a 4-10% improvement for visual relationship detection on the Stanford VRD dataset [20].", "title": "" }, { "docid": "90d1d78d3d624d3cb1ecc07e8acaefd4", "text": "Wheat straw is an abundant agricultural residue with low commercial value. An attractive alternative is utilization of wheat straw for bioethanol production. However, production costs based on the current technology are still too high, preventing commercialization of the process. In recent years, progress has been made in developing more effective pretreatment and hydrolysis processes leading to higher yield of sugars. The focus of this paper is to review the most recent advances in pretreatment, hydrolysis and fermentation of wheat straw. Based on the type of pretreatment method applied, a sugar yield of 74-99.6% of maximum theoretical was achieved after enzymatic hydrolysis of wheat straw. Various bacteria, yeasts and fungi have been investigated with the ethanol yield ranging from 65% to 99% of theoretical value. So far, the best results with respect to ethanol yield, final ethanol concentration and productivity were obtained with the native non-adapted Saccharomyses cerevisiae. Some recombinant bacteria and yeasts have shown promising results and are being considered for commercial scale-up. Wheat straw biorefinery could be the near-term solution for clean, efficient and economically-feasible production of bioethanol as well as high value-added products.", "title": "" }, { "docid": "8646bc8ddeadf17e443e5ddcf705e492", "text": "This paper proposes a model predictive control (MPC) scheme for the interleaved dc-dc boost converter with coupled inductors. The main control objectives are the regulation of the output voltage to its reference value, despite changes in the input voltage and the load, and the equal sharing of the load current by the two circuit inductors. An inner control loop, using MPC, regulates the input current to its reference that is provided by the outer loop, which is based on a load observer. Simulation results are provided to highlight the performance of the proposed control scheme.", "title": "" }, { "docid": "2113655d3467fbdbf7769e36952d2a6f", "text": "The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. 
In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature make an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey, we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over 80 privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method on how to choose privacy metrics based on nine questions that help identify the right privacy metrics for a given scenario, and highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.", "title": "" }, { "docid": "b0901a572ecaaeb1233b92d5653c2f12", "text": "This qualitative study offers a novel exploration of the links between social media, virtual intergroup contact, and empathy by examining how empathy is expressed through interactions on a popular social media blog. Global leaders are encouraging individuals to engage in behaviors and support policies that provide basic social foundations. It is difficult to motivate people to undertake such actions. However, research shows that empathy intensifies motivation to help others. It can cause individuals to see the world from the perspective of stigmatized group members and increase positive feelings. Social media offers a new pathway for virtual intergroup contact, providing opportunities to increase conversation about disadvantaged others and empathy. We examined expressions of empathy within a popular blog, Humans of New York (HONY), and engaged in purposeful case selection by focusing on (1) events where specific prosocial action was taken corresponding to interactions on the HONY blog and (2) presentation of people in countries other than the United States. Nine overarching themes; (1) perspective taking, (2) fantasy, (3) empathic concern, (4) personal distress, (5) relatability, (6) prosocial action, (7) community appreciation, (8) anti-empathy, and (9) rejection of anti-empathy, exemplify how the HONY community expresses and shares empathic thoughts and feelings.", "title": "" }, { "docid": "976aee37c264dbf53b7b1fbbf0d583c4", "text": "This paper applies Halliday's (1994) theory of the interpersonal, ideational and textual meta-functions of language to conceptual metaphor. Starting from the observation that metaphoric expressions tend to be organized in chains across texts, the question is raised what functions those expressions serve in different parts of a text as well as in relation to each other. The empirical part of the article consists of the sample analysis of a business magazine text on marketing. This analysis is two-fold, integrating computer-assisted quantitative investigation with qualitative research into the organization and multifunctionality of metaphoric chains as well as the cognitive scenarios evolving from those chains. The paper closes by summarizing the main insights along the lines of the three Hallidayan meta-functions of conceptual metaphor and suggesting functional analysis of metaphor at levels beyond that of text. 
", "title": "" }, { "docid": "9cea5720bdba8af6783d9e9f8bc7b7d1", "text": "BACKGROUND\nFeasible, cost-effective instruments are required for the surveillance of moderate-to-vigorous physical activity (MVPA) and sedentary behaviour (SB) and to assess the effects of interventions. However, the evidence base for the validity and reliability of the World Health Organisation-endorsed Global Physical Activity Questionnaire (GPAQ) is limited. We aimed to assess the validity of the GPAQ, compared to accelerometer data in measuring and assessing change in MVPA and SB.\n\n\nMETHODS\nParticipants (n = 101) were selected randomly from an on-going research study, stratified by level of physical activity (low, moderate or highly active, based on the GPAQ) and sex. Participants wore an accelerometer (Actigraph GT3X) for seven days and completed a GPAQ on Day 7. This protocol was repeated for a random sub-sample at a second time point, 3-6 months later. Analysis involved Wilcoxon-signed rank tests for differences in measures, Bland-Altman analysis for the agreement between measures for median MVPA and SB mins/day, and Spearman's rho coefficient for criterion validity and extent of change.\n\n\nRESULTS\n95 participants completed baseline measurements (44 females, 51 males; mean age 44 years, (SD 14); measurements of change were calculated for 41 (21 females, 20 males; mean age 46 years, (SD 14). There was moderate agreement between GPAQ and accelerometer for MVPA mins/day (r = 0.48) and poor agreement for SB (r = 0.19). The absolute mean difference (self-report minus accelerometer) for MVPA was -0.8 mins/day and 348.7 mins/day for SB; and negative bias was found to exist, with those people who were more physically active over-reporting their level of MVPA: those who were more sedentary were less likely to under-report their level of SB. Results for agreement in change over time showed moderate correlation (r = 0.52, p = 0.12) for MVPA and poor correlation for SB (r = -0.024, p = 0.916).\n\n\nCONCLUSIONS\nLevels of agreement with objective measurements indicate the GPAQ is a valid measure of MVPA and change in MVPA but is a less valid measure of current levels and change in SB. Thus, GPAQ appears to be an appropriate measure for assessing the effectiveness of interventions to promote MVPA.", "title": "" } ]
scidocsrr
2c080e7343fd71060049df0f6fda1cc7
A detailed analysis of intrusion detection datasets
[ { "docid": "11a2882124e64bd6b2def197d9dc811a", "text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.", "title": "" }, { "docid": "320c7c49dd4341cca532fa02965ef953", "text": "During the last decade, anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based IDSs in detecting novel attacks, and KDDCUP'99 is the mostly widely used data set for the evaluation of these systems. Having conducted a statistical analysis on this data set, we found two important issues which highly affects the performance of evaluated systems, and results in a very poor evaluation of anomaly detection approaches. To solve these issues, we have proposed a new data set, NSL-KDD, which consists of selected records of the complete KDD data set and does not suffer from any of mentioned shortcomings.", "title": "" } ]
[ { "docid": "91cb2ee27517441704bf739ee811d6c6", "text": "The primo vascular system has a specific anatomical and immunohistochemical signature that sets it apart from the arteriovenous and lymphatic systems. With immune and endocrine functions, the primo vascular system has been found to play a large role in biological processes, including tissue regeneration, inflammation, and cancer metastases. Although scientifically confirmed in 2002, the original discovery was made in the early 1960s by Bong-Han Kim, a North Korean scientist. It would take nearly 40 years after that discovery for scientists to revisit Kim's research to confirm the early findings. The presence of primo vessels in and around blood and lymph vessels, nerves, viscera, and fascia, as well as in the brain and spinal cord, reveals a common link that could potentially open novel possibilities of integration with cranial, lymphatic, visceral, and fascial approaches in manual medicine.", "title": "" }, { "docid": "9308c1dfdf313f6268db9481723f533d", "text": "We report the discovery of a highly active Ni-Co alloy electrocatalyst for the oxidation of hydrazine (N(2)H(4)) and provide evidence for competing electrochemical (faradaic) and chemical (nonfaradaic) reaction pathways. The electrochemical conversion of hydrazine on catalytic surfaces in fuel cells is of great scientific and technological interest, because it offers multiple redox states, complex reaction pathways, and significantly more favorable energy and power densities compared to hydrogen fuel. Structure-reactivity relations of a Ni(60)Co(40) alloy electrocatalyst are presented with a 6-fold increase in catalytic N(2)H(4) oxidation activity over today's benchmark catalysts. We further study the mechanistic pathways of the catalytic N(2)H(4) conversion as function of the applied electrode potential using differentially pumped electrochemical mass spectrometry (DEMS). At positive overpotentials, N(2)H(4) is electrooxidized into nitrogen consuming hydroxide ions, which is the fuel cell-relevant faradaic reaction pathway. In parallel, N(2)H(4) decomposes chemically into molecular nitrogen and hydrogen over a broad range of electrode potentials. The electroless chemical decomposition rate was controlled by the electrode potential, suggesting a rare example of a liquid-phase electrochemical promotion effect of a chemical catalytic reaction (\"EPOC\"). The coexisting electrocatalytic (faradaic) and heterogeneous catalytic (electroless, nonfaradaic) reaction pathways have important implications for the efficiency of hydrazine fuel cells.", "title": "" }, { "docid": "724845cb5c9f531e09f2c8c3e6f52fe4", "text": "Deep learning has given way to a new era of machine learning, apart from computer vision. Convolutional neural networks have been implemented in image classification, segmentation and object detection. Despite recent advancements, we are still in the very early stages and have yet to settle on best practices for network architecture in terms of deep design, small in size and a short training time. In this work, we propose a very deep neural network comprised of 16 Convolutional layers compressed with the Fire Module adapted from the SQUEEZENET model. We also call for the addition of residual connections to help suppress degradation. This model can be implemented on almost every neural network model with fully incorporated residual learning. This proposed model Residual-Squeeze-VGG16 (ResSquVGG16) trained on the large-scale MIT Places365-Standard scene dataset. 
In our tests, the model performed with accuracy similar to the pre-trained VGG16 model in Top-1 and Top-5 validation accuracy while also enjoying a 23.86% reduction in training time and an 88.4% reduction in size. In our tests, this model was trained from scratch. Keywords— Convolutional Neural Networks; VGG16; Residual learning; Squeeze Neural Networks; Residual-Squeeze-VGG16; Scene Classification; ResSquVGG16.", "title": "" }, { "docid": "3a06104103bbfbadbe67a89e84f425ab", "text": "According to the Technology Acceptance Model (TAM), behavioral intentions to use a new IT are primarily the product of a rational analysis of its desirable perceived outcomes, namely perceived usefulness (PU) and perceived ease of use (PEOU). But what happens with the continued use of an IT among experienced users? Does habit also kick in as a major factor or is continued use only the product of its desirable outcomes? This study examines this question in the context of experienced online shoppers. The data show that, as hypothesized, online shoppers’ intentions to continue using a website that they last bought at depend not only on PU and PEOU, but also on habit. In fact, habit alone can explain a large proportion of the variance of continued use of a website. Moreover, the explained variance indicates that habit may also be a major predictor of PU and PEOU among experienced shoppers. Implications are discussed.", "title": "" }, { "docid": "73af8236cc76e386aa76c6d20378d774", "text": "Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase1. The constructed gazetteers contains approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types, person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).", "title": "" }, { "docid": "b26882cddec1690e3099757e835275d2", "text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. 
Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.", "title": "" }, { "docid": "353bbc5e68ec1d53b3cd0f7c352ee699", "text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" }, { "docid": "3d45de7d6ef9e162552698839550a6ee", "text": "The queries people issue to a search engine and the results clicked following a query change over time. For example, after the earthquake in Japan in March 2011, the query japan spiked in popularity and people issuing the query were more likely to click government-related results than they would prior to the earthquake. We explore the modeling and prediction of such temporal patterns in Web search behavior. We develop a temporal modeling framework adapted from physics and signal processing and harness it to predict temporal patterns in search behavior using smoothing, trends, periodicities, and surprises. Using current and past behavioral data, we develop a learning procedure that can be used to construct models of users' Web search activities. We also develop a novel methodology that learns to select the best prediction model from a family of predictive models for a given query or a class of queries. Experimental results indicate that the predictive models significantly outperform baseline models that weight historical evidence the same for all queries. We present two applications where new methods introduced for the temporal modeling of user behavior significantly improve upon the state of the art. Finally, we discuss opportunities for using models of temporal dynamics to enhance other areas of Web search and information retrieval.", "title": "" }, { "docid": "2d0a82799d75c08f288d1105280a6d60", "text": "The increasing complexity of deep learning architectures is resulting in training time requiring weeks or even months. 
This slow training is due in part to \"vanishing gradients,\" in which the gradients used by back-propagation are extremely large for weights connecting deep layers (layers near the output layer), and extremely small for shallow layers (near the input layer), this results in slow learning in the shallow layers. Additionally, it has also been shown that in highly non-convex problems, such as deep neural networks, there is a proliferation of high-error low curvature saddle points, which slows down learning dramatically [1]. In this paper, we attempt to overcome the two above problems by proposing an optimization method for training deep neural networks which uses learning rates which are both specific to each layer in the network and adaptive to the curvature of the function, increasing the learning rate at low curvature points. This enables us to speed up learning in the shallow layers of the network and quickly escape high-error low curvature saddle points. We test our method on standard image classification datasets such as MNIST, CIFAR10 and ImageNet, and demonstrate that our method increases accuracy as well as reduces the required training time over standard algorithms.", "title": "" }, { "docid": "ee03340751553afa79f6183a230f64f0", "text": "We provide an overview of the recent trends toward digitalization and large-scale data analytics in healthcare. It is expected that these trends are instrumental in the dramatic changes in the way healthcare will be organized in the future. We discuss the recent political initiatives designed to shift care delivery processes from paper to electronic, with the goals of more effective treatments with better outcomes; cost pressure is a major driver of innovation. We describe newly developed networks of healthcare providers, research organizations, and commercial vendors to jointly analyze data for the development of decision support systems. We address the trend toward continuous healthcare where health is monitored by wearable and stationary devices; a related development is that patients increasingly assume responsibility for their own health data. Finally, we discuss recent initiatives toward a personalized medicine, based on advances in molecular medicine, data management, and data analytics.", "title": "" }, { "docid": "dd130195f82c005d1168608a0388e42d", "text": "CONTEXT\nThe educational environment makes an important contribution to student learning. The DREEM questionnaire is a validated tool assessing the environment.\n\n\nOBJECTIVES\nTo translate and validate the DREEM into Greek.\n\n\nMETHODS\nForward translations from English were produced by three independent Greek translators and then back translations by five independent bilingual translators. The Greek DREEM.v0 that was produced was administered to 831 undergraduate students from six Greek medical schools. Cronbach's alpha and test-retest correlation were used to evaluate reliability and factor analysis was used to assess validity. Questions that increased alpha if deleted and/or sorted unexpectedly in factor analysis were further checked through two focus groups.\n\n\nFINDINGS\nQuestionnaires were returned by 487 respondents (59%), who were representative of all surveyed students by gender but not by year of study or medical school. The instrument's overall alpha was 0.90, and for the learning, teachers, academic, atmosphere and social subscales the alphas were 0.79 (expected 0.69), 0.78 (0.67), 0.69 (0.60), 0.68 (0.69), 0.48 (0.57), respectively. 
In a subset of the whole sample, test and retest alphas were both 0.90, and mean item scores highly correlated (p<0.001). Factor analysis produced meaningful subscales but not always matching the original ones. Focus group evaluation revealed possible misunderstanding for questions 17, 25, 29 and 38, which were revised in the DREEM.Gr.v1. The group mean overall scale score was 107.7 (SD 20.2), with significant differences across medical schools (p<0.001).\n\n\nCONCLUSION\nAlphas and test-retest correlation suggest the Greek translated and validated DREEM scale is a reliable tool for assessing the medical education environment and for informing policy. Factor analysis and focus group input suggest it is a valid tool. Reasonable school differences suggest the instrument's sensitivity.", "title": "" }, { "docid": "3cc84fda5e04ccd36f5b632d9da3a943", "text": "We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.", "title": "" }, { "docid": "bbedbe2d901f63e3f163ea0f24a2e2d7", "text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. 
Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …", "title": "" }, { "docid": "dc198f396142376e36d7143a5bfe7d19", "text": "Successful direct pulp capping of cariously exposed permanent teeth with reversible pulpitis and incomplete apex formation can prevent the need for root canal treatment. A case report is presented which demonstrates the use of mineral trioxide aggregate (MTA) as a direct pulp capping material for the purpose of continued maturogenesis of the root. Clinical and radiographic follow-up demonstrated a vital pulp and physiologic root development in comparison with the contralateral tooth. MTA can be considered as an effective material for vital pulp therapy, with the goal of maturogenesis.", "title": "" }, { "docid": "de99ebecca6a9c3e6539ba00fd91feba", "text": "In previous lectures, we have analyzed random forms of optimization problems, in which the randomness was injected (via random projection) for algorithmic reasons. On the other hand, in statistical problems—even without considering approximations—the starting point is a random instance of an optimization problem. To be more concrete, suppose that we are interested in estimating some parameter θ * ∈ R d based on a set of n samples, say {Z 1 ,. .. , Z n }. Many estimators of θ * are based on solving the (random) optimization problem θ ∈ argmin θ∈C L n (θ), where C ⊂ R d is some subset of R d , and L n (θ) = 1 n n i=1 i (θ; Z i) decomposes as a sum of terms, one for each data point. Our interest will be in analyzing the sequence {θ t } ∞ t=0 generated by some optimization algorithm. A traditional analysis in (deterministic) optimization involves bounding the optimization error θ t − θ, measuring the distance between the iterates and (some) optimum. On the other hand, the population version of this problem is defined in terms of the averaged function ¯ L(θ) : = E[L n (θ)]. If the original problem has been constructed in a reasonable way, then it should be the case that θ * ∈ arg min θ∈C ¯ L(θ), meaning that the quantity of interest is a global minimizer of the population function.", "title": "" }, { "docid": "514f65393daf3c02b1100e16e22b24be", "text": "A capsule is a collection of neurons which represents different variants of a pattern in the network. The routing scheme ensures only certain capsules which resemble lower counterparts in the higher layer should be activated. 
However, the computational complexity becomes a bottleneck for scaling up to larger networks, as lower capsules need to correspond to each and every higher capsule. To resolve this limitation, we approximate the routing process with two branches: a master branch which collects primary information from its direct contact in the lower layer and an aide branch that replenishes master based on pattern variants encoded in other lower capsules. Compared with previous iterative and unsupervised routing scheme, these two branches are communicated in a fast, supervised and one-time pass fashion. The complexity and runtime of the model are therefore decreased by a large margin. Motivated by the routing to make higher capsule have agreement with lower capsule, we extend the mechanism as a compensation for the rapid loss of information in nearby layers. We devise a feedback agreement unit to send back higher capsules as feedback. It could be regarded as an additional regularization to the network. The feedback agreement is achieved by comparing the optimal transport divergence between two distributions (lower and higher capsules). Such an add-on witnesses a unanimous gain in both capsule and vanilla networks. Our proposed EncapNet performs favorably better against previous state-of-the-arts on CIFAR10/100, SVHN and a subset of ImageNet.", "title": "" }, { "docid": "82c03a96e993095abb66d35508e287b4", "text": "By all accounts, 2016 is the year of the chatbot. Some commentators take the view that chatbot technology will be so disruptive that it will eliminate the need for websites and apps. But chatbots have a long history. So what’s new, and what’s different this time? And is there an opportunity here to improve how our industry does technology transfer? 1 The year of interacting conversationally This year’s most hyped language technology is the intelligent virtual assistant. Whether you call these things digital assistants, conversational interfaces or just chatbots, the basic concept is the same: achieve some result by conversing with a machine in a dialogic fashion, using natural language. Most visible at the forefront of the technology, we have the voice-driven digital assistants from the Big Four: Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa and Google’s new Assistant. Following up behind, we have many thousands of text-based chatbots that target specific functionalities, enabled by tools that let you build bots for a number of widely used messaging platforms. Many see this technology as heralding a revolution in how we interact with devices, websites and apps. The MIT Technology Review lists conversational interfaces as one of the ten breakthrough technologies of 2016. In January of this year, Uber’s Chris Messina wrote an influential blog piece declaring 2016 the year of conversational commerce. In March, Microsoft CEO Satya Nadella announced that chatbots were the next big thing, on a par with the graphical user interface, the web browser 1 https://www.technologyreview.com/s/600766/10-breakthrough-technologies2016-conversational-interfaces 2 https://medium.com/chris-messina/2016-will-be-the-year-of-conversationalcommerce-1586e85e3991
", "title": "" }, { "docid": "58702f835df43337692f855f35a9f903", "text": "A dual-mode wide-band transformer based VCO is proposed. The two port impedance of the transformer based resonator is analyzed to derive the optimum primary to secondary capacitor load ratio, for robust mode selectivity and minimum power consumption. Fabricated in a 16nm FinFET technology, the design achieves 2.6× continuous tuning range spanning 7-to-18.3 GHz using a coil area of 120×150 μm2. The absence of lossy switches helps in maintaining phase noise of -112 to -100 dBc/Hz at 1 MHz offset, across the entire tuning range. The VCO consumes 3-4.4 mW and realizes power frequency tuning normalized figure of merit of 12.8 and 2.4 dB at 7 and 18.3 GHz respectively.", "title": "" }, { "docid": "c784bfbd522bb4c9908c3f90a31199fe", "text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.", "title": "" } ]
scidocsrr
7fabdf6063107d656b2ae326017db1fe
Interpersonal influences on adolescent materialism: A new look at the role of parents and peers
[ { "docid": "d602cafe18d720f024da1b36c9283ba5", "text": "Associations between materialism and peer relations are likely to exist in elementary school children but have not been studied previously. The first two studies introduce a new Perceived Peer Group Pressures (PPGP) Scale suitable for this age group, demonstrating that perceived pressure regarding peer culture (norms for behavioral, attitudinal, and material characteristics) can be reliably measured and that it is connected to children's responses to hypothetical peer pressure vignettes. Studies 3 and 4 evaluate the main theoretical model of associations between peer relations and materialism. Study 3 supports the hypothesis that peer rejection is related to higher perceived peer culture pressure, which in turn is associated with greater materialism. Study 4 confirms that the endorsement of social motives for materialism mediates the relationship between perceived peer pressure and materialism.", "title": "" } ]
[ { "docid": "49c1924821c326f803cefff58ca7ab67", "text": "Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the program being analyzed, architecture/OS specificity, being user-mode only, and lacking APIs. We present DECAF, a virtual machine based, multi-target, whole-system dynamic binary analysis framework built on top of QEMU. DECAF provides Just-In-Time Virtual Machine Introspection and a plugin architecture with a simple-to-use event-driven programming interface. DECAF implements a new instruction-level taint tracking engine at bit granularity, which exercises fine control over the QEMU Tiny Code Generator (TCG) intermediate representation to accomplish on-the-fly optimizations while ensuring that the taint propagation is sound and highly precise. We perform a formal analysis of DECAF's taint propagation rules to verify that most instructions introduce neither false positives nor false negatives. We also present three platform-neutral plugins—Instruction Tracer, Keylogger Detector, and API Tracer, to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. Implementation of DECAF consists of 9,550 lines of C++ code and 10,270 lines of C code and we evaluate DECAF using CPU2006 SPEC benchmarks and show average overhead of 605 percent for system wide tainting and 12 percent for VMI.", "title": "" }, { "docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24", "text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.", "title": "" }, { "docid": "3c444d8918a31831c2dc73985d511985", "text": "This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. 
Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.", "title": "" }, { "docid": "fa6ec1ff4a0849e5a4ec2dda7b20d966", "text": "Most digital still cameras acquire imagery with a color filter array (CFA), sampling only one color value for each pixel and interpolating the other two color values afterwards. The interpolation process is commonly known as demosaicking. In general, a good demosaicking method should preserve the high-frequency information of imagery as much as possible, since such information is essential for image visual quality. We discuss in this paper two key observations for preserving high-frequency information in CFA demosaicking: (1) the high frequencies are similar across three color components, and 2) the high frequencies along the horizontal and vertical axes are essential for image quality. Our frequency analysis of CFA samples indicates that filtering a CFA image can better preserve high frequencies than filtering each color component separately. This motivates us to design an efficient filter for estimating the luminance at green pixels of the CFA image and devise an adaptive filtering approach to estimating the luminance at red and blue pixels. Experimental results on simulated CFA images, as well as raw CFA data, verify that the proposed method outperforms the existing state-of-the-art methods both visually and in terms of peak signal-to-noise ratio, at a notably lower computational cost.", "title": "" }, { "docid": "eced59d8ec159f3127e7d2aeca76da96", "text": "Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face to face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches, such as handheld devices with composited graphics or see-through head worn displays, is that users are able to interact with 3D virtual objects and each other without cumbersome devices that obstruct face to face interaction. We detail our prototype system and a number of interactive experiences. We present an initial user experiment that shows that participants are able to deduce the size and distance of a virtual projected object. A second experiment shows that participants are able to infer which of a number of targets the other user indicates by pointing.", "title": "" }, { "docid": "c797b2a78ea6eb434159fd948c0a1bf0", "text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. 
To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.", "title": "" }, { "docid": "d43dc521d3f0f17ccd4840d6081dcbfe", "text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.", "title": "" }, { "docid": "6a04e07937d1c5beef84acb0a4e0e328", "text": "Linear hashing and spiral storage are two dynamic hashing schemes originally designed for external files. This paper shows how to adapt these two methods for hash tables stored in main memory. The necessary data structures and algorithms are described, the expected performance is analyzed mathematically, and actual execution times are obtained and compared with alternative techniques. Linear hashing is found to be both faster and easier to implement than spiral storage. Two alternative techniques are considered: a simple unbalanced binary tree and double hashing with periodic rehashing into a larger table. 
The retrieval time of linear hashing is similar to double hashing and substantially faster than a binary tree, except for very small trees. The loading times of double hashing (with periodic reorganization), a binary tree, and linear hashing are similar. Overall, linear hashing is a simple and efficient technique for applications where the cardinality of the key set is not known in advance.", "title": "" }, { "docid": "6c4433b640cf1d7557b2e74cbd2eee85", "text": "A compact Ka-band broadband waveguide-based traveling-wave spatial power combiner is presented. The low loss micro-strip probes are symmetrically inserted into both broadwalls of waveguide, quadrupling the coupling ways but the insertion loss increases little. The measured 16 dB return-loss bandwidth of the eight-way back-to-back structure is from 30 GHz to 39.4 GHz (more than 25%) and the insertion loss is less than 1 dB, which predicts the power-combining efficiency is higher than 90%.", "title": "" }, { "docid": "89349e8f3e7d8df8bb8ab6f55404a91f", "text": "Due to the high intake of sugars, especially sucrose, global trends in food processing have encouraged producers to use sweeteners, particularly synthetic ones, to a wide extent. For several years, increasing attention has been paid in the literature to the stevia (Stevia rebaudiana), containing glycosidic diterpenes, for which sweetening properties have been identified. Chemical composition, nutritional value and application of stevia leaves are briefly summarized and presented.", "title": "" }, { "docid": "31873424960073962d3d8eba151f6a4b", "text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.", "title": "" }, { "docid": "3323474060ba5f1fbbbdcb152c22a6a9", "text": "A compact triple-band microstrip slot antenna applied to WLAN/WiMAX applications is proposed in this letter. This antenna has a simpler structure than other antennas designed for realizing triple-band characteristics. It is just composed of a microstrip feed line, a substrate, and a ground plane on which some simple slots are etched. Then, to prove the validation of the design, a prototype is fabricated and measured. 
The experimental data show that the antenna can provide three impedance bandwidths of 600 MHz centered at 2.7 GHz, 430 MHz centered at 3.5 GHz, and 1300 MHz centered at 5.6 GHz.", "title": "" }, { "docid": "713010fe0ee95840e6001410f8a164cc", "text": "Three studies tested the idea that when social identity is salient, group-based appraisals elicit specific emotions and action tendencies toward out-groups. Participants' group memberships were made salient and the collective support apparently enjoyed by the in-group was measured or manipulated. The authors then measured anger and fear (Studies 1 and 2) and anger and contempt (Study 3), as well as the desire to move against or away from the out-group. Intergroup anger was distinct from intergroup fear, and the inclination to act against the out-group was distinct from the tendency to move away from it. Participants who perceived the in-group as strong were more likely to experience anger toward the out-group and to desire to take action against it. The effects of perceived in-group strength on offensive action tendencies were mediated by anger.", "title": "" }, { "docid": "a7e8c3a64f6ba977e142de9b3dae7e57", "text": "Craniofacial superimposition is a process that aims to identify a person by overlaying a photograph and a model of the skull. This process is usually carried out manually by forensic anthropologists; thus being very time consuming and presenting several difficulties in finding a good fit between the 3D model of the skull and the 2D photo of the face. In this paper we present a fast and automatic procedure to tackle the superimposition problem. The proposed method is based on real-coded genetic algorithms. Synthetic data are used to validate the method. Results on a real case from our Physical Anthropology lab of the University of Granada are also presented.", "title": "" }, { "docid": "77cfc86c63ca0a7b3ed3b805ea16b9c9", "text": "The research presented in this paper is about detecting collaborative networks inside the structure of a research social network. As case study we consider ResearchGate and SEE University academic staff. First we describe the methodology used to crawl and create an academic-academic network depending from their fields of interest. We then calculate and discuss four social network analysis centrality measures (closeness, betweenness, degree, and PageRank) for entities in this network. In addition to these metrics, we have also investigated grouping of individuals, based on automatic clustering depending from their reciprocal relationships.", "title": "" }, { "docid": "7354d8c1e8253a99cfd62d8f96e57a77", "text": "In the past few decades, clustering has been widely used in areas such as pattern recognition, data analysis, and image processing. Recently, clustering has been recognized as a primary data mining method for knowledge discovery in spatial databases, i.e. databases managing 2D or 3D points, polygons etc. or points in some d-dimensional feature space. The well-known clustering algorithms, however, have some drawbacks when applied to large spatial databases. First, they assume that all objects to be clustered reside in main memory. Second, these methods are too inefficient when applied to large databases. To overcome these limitations, new algorithms have been developed which are surveyed in this paper. 
These algorithms make use of efficient query processing techniques provided by spatial database systems.", "title": "" }, { "docid": "23493c14053a4608203f8e77bd899445", "text": "In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of image (matrix) or volumetric data (tensor), next a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such approach guarantees a specifiable maximum error between original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.", "title": "" }, { "docid": "3a3c0c21d94c2469bd95a103a9984354", "text": "Recently it was shown that the problem of Maximum Inner Product Search (MIPS) is efficient and it admits provably sub-linear hashing algorithms. Asymmetric transformations before hashing were the key in solving MIPS which was otherwise hard. In [18], the authors use asymmetric transformations which convert the problem of approximate MIPS into the problem of approximate near neighbor search which can be efficiently solved using hashing. In this work, we provide a different transformation which converts the problem of approximate MIPS into the problem of approximate cosine similarity search which can be efficiently solved using signed random projections. Theoretical analysis show that the new scheme is significantly better than the original scheme for MIPS. Experimental evaluations strongly support the theoretical findings.", "title": "" } ]
scidocsrr
c822b341f266abe617affdf50abb121d
Dual Iterative Hard Thresholding: From Non-convex Sparse Minimization to Non-smooth Concave Maximization
[ { "docid": "e43e5723ad4c2362b2a899bf4b8af0cb", "text": "The Hard Thresholding Pursuit (HTP) is a class of truncated gradient descent methods for finding sparse solutions of l0-constrained loss minimization problems. The HTP-style methods have been shown to have strong approximation guarantee and impressive numerical performance in high dimensional statistical learning applications. However, the current theoretical treatment of these methods has traditionally been restricted to the analysis of parameter estimation consistency. It remains an open problem to analyze the support recovery performance (a.k.a., sparsistency) of this type of methods for recovering the global minimizer of the original NP-hard problem. In this paper, we bridge this gap by showing, for the first time, that exact recovery of the global sparse minimizer is possible for HTP-style methods under restricted strong condition number bounding conditions. We further show that HTP-style methods are able to recover the support of certain relaxed sparse solutions without assuming bounded restricted strong condition number. Numerical results on simulated data confirms our theoretical predictions.", "title": "" }, { "docid": "1548b8de344be4a6dddff04d833a633b", "text": "We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with an `2 penalty. We show that this new k-support norm provides a tighter relaxation than the elastic net and can thus be advantageous in in sparse prediction problems. We also bound the looseness of the elastic net, thus shedding new light on it and providing justification for its use.", "title": "" }, { "docid": "6696d9092ff2fd93619d7eee6487f867", "text": "We propose an accelerated stochastic block coordinate descent algorithm for nonconvex optimization under sparsity constraint in the high dimensional regime. The core of our algorithm is leveraging both stochastic partial gradient and full partial gradient restricted to each coordinate block to accelerate the convergence. We prove that the algorithm converges to the unknown true parameter at a linear rate, up to the statistical error of the underlying model. Experiments on both synthetic and real datasets backup our theory.", "title": "" } ]
[ { "docid": "4d2be7aac363b77c6abd083947bc28c7", "text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.", "title": "" }, { "docid": "abc75d5b44323133d2b1ffef57a920f3", "text": "With the increasing adoption of mobile 4G LTE networks, video streaming as the major contributor of 4G LTE data traffic, has become extremely hot. However, the battery life has become the bottleneck when mobile users are using online video services. In this paper, we deploy a real mobile system for power measurement and profiling of online video streaming in 4G LTE networks. Based on some designed experiments with different configurations, we measure the power consumption for online video streaming, offline video playing, and mobile background. A RRC state study is taken to understand how RRC states impact power consumption. Then, we profile the power consumption of video streaming and show the results with different impact factors. According to our experimental statistics, the power saving room for online video streaming in 4G LTE networks can be up to 69%.", "title": "" }, { "docid": "b84971bc1f2d2ebf43815d33cea86c8c", "text": "The container-inhabiting mosquito simulation model (CIMSiM) is a weather-driven, dynamic life table simulation model of Aedes aegypti (L.) and similar nondiapausing Aedes mosquitoes that inhabit artificial and natural containers. This paper presents a validation of CIMSiM simulating Ae. aegypti using several independent series of data that were not used in model development. Validation data sets include laboratory work designed to elucidate the role of diet on fecundity and rates of larval development and survival. Comparisons are made with four field studies conducted in Bangkok, Thailand, on seasonal changes in population dynamics and with a field study in New Orleans, LA, on larval habitat. Finally, predicted ovipositional activity of Ae. aegypti in seven cities in the southeastern United States for the period 1981-1985 is compared with a data set developed by the U.S. Public Health Service. On the basis of these comparisons, we believe that, for stated design goals, CIMSiM adequately simulates the population dynamics of Ae. aegypti in response to specific information on weather and immature habitat. We anticipate that it will be useful in simulation studies concerning the development and optimization of control strategies and that, with further field validation, can provide entomological inputs for a dengue virus transmission model.", "title": "" }, { "docid": "46d3009e8fd4071c932e50a0c50b8f41", "text": "Background\nE-health technology applications are essential tools of modern information technology that improve quality of healthcare delivery in hospitals of both developed and developing countries. 
However, despite its positive benefits, studies indicate that the rate of the e-health adoption in some developing countries is either low or underutilized. This is due in part, to barriers such as resistance from healthcare professionals, poor infrastructure, and low technical expertise among others.\n\n\nObjective\nThe aim of this study is to investigate, identify and analyze the underlying factors that affect healthcare professionals decision to adopt and use e-health technology applications in developing countries, with particular reference to hospitals in Nigeria.\n\n\nMethods\nThe study used a cross sectional approach in the form of a close-ended questionnaire to collect quantitative data from a sample of 465 healthcare professionals randomly selected from 15 hospitals in Nigeria. We used the modified Technology Acceptance Model (TAM) as the dependent variable and external factors as independent variables. The collected data was then analyzed using SPSS statistical analysis such as frequency test, reliability analysis, and correlation coefficient analysis.\n\n\nResults\nThe results obtained, which correspond with findings from other researches published, indicate that perceived usefulness, belief, willingness, as well as attitude of healthcare professionals have significant influence on their intention to adopt and use the e-health technology applications. Other strategic factors identified include low literacy level and experience in using the e-health technology applications, lack of motivation, poor organizational and management policies.\n\n\nConclusion\nThe study contributes to the literature by pinpointing significant areas where findings can positively affect, or be found useful by, healthcare policy decision makers in Nigeria and other developing countries. This can help them understand their areas of priorities and weaknesses when planning for e-health technology adoption and implementation.", "title": "" }, { "docid": "0ea6d4a02a4013a0f9d5aa7d27b5a674", "text": "Recently, there has been growing interest in social network analysis. Graph models for social network analysis are usually assumed to be a deterministic graph with fixed weights for its edges or nodes. As activities of users in online social networks are changed with time, however, this assumption is too restrictive because of uncertainty, unpredictability and the time-varying nature of such real networks. The existing network measures and network sampling algorithms for complex social networks are designed basically for deterministic binary graphs with fixed weights. This results in loss of much of the information about the behavior of the network contained in its time-varying edge weights of network, such that is not an appropriate measure or sample for unveiling the important natural properties of the original network embedded in the varying edge weights. In this paper, we suggest that using stochastic graphs, in which weights associated with the edges are random variables, can be a suitable model for complex social network. Once the network model is chosen to be stochastic graphs, every aspect of the network such as path, clique, spanning tree, network measures and sampling algorithms should be treated stochastically. In particular, the network measures should be reformulated and new network sampling algorithms must be designed to reflect the stochastic nature of the network. 
In this paper, we first define some network measures for stochastic graphs, and then we propose four sampling algorithms based on learning automata for stochastic graphs. In order to study the performance of the proposed sampling algorithms, several experiments are conducted on real and synthetic stochastic graphs. The performances of these algorithms are studied in terms of Kolmogorov-Smirnov D statistics, relative error, Kendall’s rank correlation coefficient and relative cost.", "title": "" }, { "docid": "b084482c7dcffc70e307f60cb9bd3409", "text": "The evolution of the endoscopic endonasal transsphenoidal technique, which was initially reserved only for sellar lesions through the sphenoid sinus cavity, has lead in the last decades to a progressive possibility to access the skull base from the nose. This route allows midline access and visibility to the suprasellar, retrosellar and parasellar space while obviating brain retraction, and makes possible to treat transsphenoidally a variety of relatively small midline skull base and parasellar lesions traditionally approached transcranially. We report our current knowledge of the endoscopic anatomy of the midline skull base as seen from the endonasal perspective, in order to describe the surgical path and structures whose knowledge is useful during the operation. Besides, we describe the step-by-step surgical technique to access the different compartments, the \"dangerous landmarks\" to avoid in order to minimize the risks of complications and how to manage them, and our paradigm and techniques for dural and bony reconstruction. Furthermore, we report a brief description of the useful instruments and tools for the extended endoscopic approaches. Between January 2004 and April 2006 we performed 33 extended endonasal approaches for lesions arising from or involving the sellar region and the surrounding areas. The most representative pathologies of this series were the ten cranioparvngiomas, the six giant adenomas and the five meningiomas; we also used this procedure in three cases of chordomas, three of Rathke's cleft cysts and three of meningo-encephaloceles, one case of optic nerve glioma, one olfactory groove neuroendocrine tumor and one case of fibro-osseous dysplasia. Tumor removal, as assessed by post-operative MRI, revealed complete removal of the lesion in 2/6 pituitary adenomas, 7/10 craniopharyngiomas, 4/5 meningiomas, 3/3 Rathke's cleft cyst, 3/3 meningo-encephalocele. Surgical complications have been observed in 3 patients, two with a craniopharyngioma, one with a clival meningioma and one with a recurrent giant pituitary macroadenoma involving the entire left cavernous sinus, who developed a CSF leak and a second operation was necessary in order to review the cranial base reconstruction and seal the leak. One of them developed a bacterial meningitis, which resolved after a cycle of intravenous antibiotic therapy with no permanent neurological deficits. One patient with an intra-suprasellar non-functioning adenoma presented with a generalized epileptic seizure a few hours after the surgical procedure, due to the intraoperative massive CSF loss and consequent presence of intracranial air. We registered one surgical mortality. In three cases of craniopharyngioma and in one case of meningioma a new permanent diabetes insipidus was observed. One patient developed a sphenoid sinus mycosis, cured with antimycotic therapy. Epistaxis and airway difficulties were never observed. 
It is difficult today to define the boundaries and the future limits of the extended approaches because the work is still in progress. Such extended endoscopic approaches, although at a first glance might be considered something that everyone can do, require an advanced and specialized training.", "title": "" }, { "docid": "3611d022aee93b9cbcc961bb7cbdd3ff", "text": "Due to the popularity of Deep Neural Network (DNN) models, we have witnessed extreme-scale DNN models with the continued increase of the scale in terms of depth and width. However, the extremely high memory requirements for them make it difficult to run the training processes on single many-core architectures such as a Graphic Processing Unit (GPU), which compels researchers to use model parallelism over multiple GPUs to make it work. However, model parallelism always brings very heavy additional overhead. Therefore, running an extreme-scale model in a single GPU is urgently required. There still exist several challenges to reduce the memory footprint for extreme-scale deep learning. To address this tough problem, we first identify the memory usage characteristics for deep and wide convolutional networks, and demonstrate the opportunities for memory reuse at both the intra-layer and inter-layer levels. We then present Layrub, a runtime data placement strategy that orchestrates the execution of the training process. It achieves layer-centric reuse to reduce memory consumption for extreme-scale deep learning that could not previously be run on a single GPU. Experiments show that, compared to the original Caffe, Layrub can cut down the memory usage rate by an average of 58.2% and by up to 98.9%, at the moderate cost of 24.1% higher training execution time on average. Results also show that Layrub outperforms some popular deep learning systems such as GeePS, vDNN, MXNet, and Tensorflow. More importantly, Layrub can tackle extreme-scale deep learning tasks. For example, it makes an extra-deep ResNet with 1,517 layers that can be trained successfully in one GPU with 12GB memory, while other existing deep learning systems cannot.", "title": "" }, { "docid": "c399a885345466505cfbaf8c175533b7", "text": "Science is going through two rapidly changing phenomena: one is the increasing capabilities of the computers and software tools from terabytes to petabytes and beyond, and the other is the advancement in high-throughput molecular biology producing piles of data related to genomes, transcriptomes, proteomes, metabolomes, interactomes, and so on. Biology has become a data intensive science and as a consequence biology and computer science have become complementary to each other bridged by other branches of science such as statistics, mathematics, physics, and chemistry. The combination of versatile knowledge has caused the advent of big-data biology, network biology, and other new branches of biology. Network biology for instance facilitates the system-level understanding of the cell or cellular components and subprocesses. It is often also referred to as systems biology. The purpose of this field is to understand organisms or cells as a whole at various levels of functions and mechanisms. Systems biology is now facing the challenges of analyzing big molecular biological data and huge biological networks. 
This review gives an overview of the progress in big-data biology, and data handling and also introduces some applications of networks and multivariate analysis in systems biology.", "title": "" }, { "docid": "8e7088af6940cf3c2baa9f6261b402be", "text": "Empathy is an integral part of human social life, as people care about and for others who experience adversity. However, a specific “pathogenic” form of empathy, marked by automatic contagion of negative emotions, can lead to stress and burnout. This is particularly detrimental for individuals in caregiving professions who experience empathic states more frequently, because it can result in illness and high costs for health systems. Automatically recognizing pathogenic empathy from text is potentially valuable to identify at-risk individuals and monitor burnout risk in caregiving populations. We build a model to predict this type of empathy from social media language on a data set we collected of users’ Facebook posts and their answers to a new questionnaire measuring empathy. We obtain promising results in identifying individuals’ empathetic states from their social media (Pearson r = 0.252,", "title": "" }, { "docid": "974f5d138d2a85d81b5dd64f13311721", "text": "We present a new constraint solver over Boolean variables, available as library(clpb) in SWI-Prolog. Our solver distinguishes itself from other available CLP(B) solvers by several unique features: First, it is written entirely in Prolog and is hence portable to different Prolog implementations. Second, it is the first freely available BDDbased CLP(B) solver. Third, we show that new interface predicates allow us to solve new types of problems with CLP(B) constraints. We also use our implementation experience to contrast features and state necessary requirements of attributed variable interfaces to optimally support CLP(B) constraints in different Prolog systems. Finally, we also present some performance results and comparisons with SICStus Prolog.", "title": "" }, { "docid": "3a32fe66af2e99f3601aae71dc9b64c2", "text": "Low-power wide-area networking (LPWAN) technologies are capable of supporting a large number of Internet of Things (IoT) use cases. While several LPWAN technologies exist, Long Range (LoRa) and its network architecture LoRaWAN, is currently the most adopted technology. LoRa provides a range of physical layer communication settings, such as bandwidth, spreading factor, coding rate, and transmission frequency. These settings impact throughput, reliability, and communication range. As IoT use cases result in varying communication patterns, it is essential to analyze how LoRa's different communication settings impact on real IoT use cases. In this paper, we analyze the impact of LoRa's communication settings on four IoT use cases, e.g. smart metering, smart parking, smart street lighting, and vehicle fleet tracking. Our results demonstrate that the setting corresponding to the fastest data rate achieves up to 380% higher packet delivery ratio and uses 0.004 times the energy compared to other evaluated settings, while being suitable to support the IoT use cases presented here. However, the setting covers a smaller communication area compared to the slow data rate settings. 
Moreover, we modified the Aloha-based channel access mechanism used by LoRaWAN and our results demonstrate that the modified channel access positively impacts the performance of the different communication settings.", "title": "" }, { "docid": "7beeea42e8f5d0f21ea418aa7f433ab9", "text": "This application note describes principles and uses for continuous ST segment monitoring. It also provides a detailed description of the ST Analysis algorithm implemented in the multi-lead ST/AR (ST and Arrhythmia) algorithm, and an assessment of the ST analysis algorithm's performance.", "title": "" }, { "docid": "793edca657c68ade4d2391c23f585c41", "text": "In the linear bandit problem a learning agent chooses an arm at each round and receives a stochastic reward. The expected value of this stochastic reward is an unknown linear function of the arm choice. As is standard in bandit problems, a learning agent seeks to maximize the cumulative reward over an n round horizon. The stochastic bandit problem can be seen as a special case of the linear bandit problem when the set of available arms at each round is the standard basis ei for the Euclidean space R, i.e. the vector ei is a vector with all 0s except for a 1 in the ith coordinate. As a result each arm is independent of the others and the reward associated with each arm depends only on a single parameter as is the case in stochastic bandits. The underlying algorithmic approach to solve this problem uses the optimism in the face of uncertainty (OFU) principle. The OFU principle solves the exploration-exploitation tradeoff in the linear bandit problem by maintaining a confidence set for the vector of coefficients of the linear function that governs rewards. In each round the algorithm chooses an estimate of the coefficients of the linear function from the confidence set and then takes an action so that the predicted reward is maximized. The problem reduces to constructing confidence sets for the vector of coefficients of the linear function based on the action-reward pairs observed in the past time steps. The linear bandit problem was first studied by Auer et al. (2002) [1] under the name of linear reinforcement learning. Since the introduction of the problem, several works have improved the analysis and explored variants of the problem. The most influential works include Dani et al. (2008) [2], Rusmevichientong et al. (2010) [3], and Abbasi et al. (2011) [4]. In each of these works the set of available arms remains constant, but the set is only restricted to being a bounded subset of a finite-dimensional vector space. Variants of the problem formulation have also been widely applied to recommendation systems following the work of Li et al. (2010) [5] within the context of web advertisement. An important property of this problem is that the arms are not independent because future arm choices depend on the confidence sets constructed from past choices. In the literature, several works including [5] have failed to recognize this property leading to faulty analysis. This fine detail requires special care which we explore in depth in Section 2.", "title": "" }, { "docid": "b8b4e582fbcc23a5a72cdaee1edade32", "text": "In recent years, research into the mining of user check-in behavior for point-of-interest (POI) recommendations has attracted a lot of attention. Existing studies on this topic mainly treat such recommendations in a traditional manner—that is, they treat POIs as items and check-ins as ratings. 
However, users usually visit a place for reasons other than to simply say that they have visited. In this article, we propose an approach referred to as Urban POI-Walk (UPOI-Walk), which takes into account a user's social-triggered intentions (SI), preference-triggered intentions (PreI), and popularity-triggered intentions (PopI), to estimate the probability of a user checking-in to a POI. The core idea of UPOI-Walk involves building a HITS-based random walk on the normalized check-in network, thus supporting the prediction of POI properties related to each user's preferences. To achieve this goal, we define several user--POI graphs to capture the key properties of the check-in behavior motivated by user intentions. In our UPOI-Walk approach, we propose a new kind of random walk model—Dynamic HITS-based Random Walk—which comprehensively considers the relevance between POIs and users from different aspects. On the basis of similitude, we make an online recommendation as to the POI the user intends to visit. To the best of our knowledge, this is the first work on urban POI recommendations that considers user check-in behavior motivated by SI, PreI, and PopI in location-based social network data. Through comprehensive experimental evaluations on two real datasets, the proposed UPOI-Walk is shown to deliver excellent performance.", "title": "" }, { "docid": "1302963869cdcb958a331838786c51de", "text": "Introduction: Benefits from mental health early interventions may not be sustained over time, and longer-term intervention programs may be required to maintain early clinical gains. However, due to the high intensity of face-to-face early intervention treatments, this may not be feasible. Adjunctive internet-based interventions specifically designed for youth may provide a cost-effective and engaging alternative to prevent loss of intervention benefits. However, until now online interventions have relied on human moderators to deliver therapeutic content. More sophisticated models responsive to user data are critical to inform tailored online therapy. Thus, integration of user experience with a sophisticated and cutting-edge technology to deliver content is necessary to redefine online interventions in youth mental health. This paper discusses the development of the moderated online social therapy (MOST) web application, which provides an interactive social media-based platform for recovery in mental health. We provide an overview of the system's main features and discus our current work regarding the incorporation of advanced computational and artificial intelligence methods to enhance user engagement and improve the discovery and delivery of therapy content. Methods: Our case study is the ongoing Horyzons site (5-year randomized controlled trial for youth recovering from early psychosis), which is powered by MOST. We outline the motivation underlying the project and the web application's foundational features and interface. We discuss system innovations, including the incorporation of pertinent usage patterns as well as identifying certain limitations of the system. This leads to our current motivations and focus on using computational and artificial intelligence methods to enhance user engagement, and to further improve the system with novel mechanisms for the delivery of therapy content to users. In particular, we cover our usage of natural language analysis and chatbot technologies as strategies to tailor interventions and scale up the system. 
Conclusions: To date, the innovative MOST system has demonstrated viability in a series of clinical research trials. Given the data-driven opportunities afforded by the software system, observed usage patterns, and the aim to deploy it on a greater scale, an important next step in its evolution is the incorporation of advanced and automated content delivery mechanisms.", "title": "" }, { "docid": "bedf1cc302c4ca05dc8371c29d396169", "text": "We propose Mixcoin, a protocol to facilitate anonymous payments using the Bitcoin currency system. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. Unlike other proposals to improve anonymity in Bitcoin, our scheme can be deployed immediately with no changes to Bitcoin itself. We demonstrate that incentives of mixes and clients can be aligned to ensure that rational mixes will not steal from clients. We contrast mixing for financial anonymity with better-studied communication mixes, demonstrating important and subtle new attacks.", "title": "" }, { "docid": "b3212385bb7e4650833814e108ea2591", "text": "Credit scoring has been widely investigated in the area of finance, in general, and banking sectors, in particular. Recently, genetic programming (GP) has attracted attention in both academic and empirical fields, especially for credit problems. The primary aim of this paper is to investigate the ability of GP, which was proposed as an extension of genetic algorithms and was inspired by the Darwinian evolution theory, in the analysis of credit scoring models in Egyptian public sector banks. The secondary aim is to compare GP with probit analysis (PA), a successful alternative to logistic regression, and weight of evidence (WOE) measure, the later a neglected technique in published research. Two evaluation criteria are used in this paper, namely, average correct classification (ACC) rate criterion and estimated misclassification cost (EMC) criterion with different misclassification cost (MC) ratios, in order to evaluate the capabilities of the credit scoring models. Results so far revealed that GP has the highest ACC rate and the lowest EMC. However, surprisingly, there is a clear rule for the WOE measure under EMC with higher MC ratios. In addition, an analysis of the dataset using Kohonen maps is undertaken to provide additional visual insights into cluster groupings. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d8ec0c507217500a97c1664c33b2fe72", "text": "To realize ideal force control of robots that interact with a human, a very precise actuating system with zero impedance is desired. For such applications, a rotary series elastic actuator (RSEA) has been introduced recently. This paper presents the design of RSEA and the associated control algorithms. To generate joint torque as desired, a torsional spring is installed between a motor and a human joint, and the motor is controlled to produce a proper spring deflection for torque generation. When the desired torque is zero, the motor must follow the human joint motion, which requires that the friction and the inertia of the motor be compensated. The human joint and the body part impose the load on the RSEA. They interact with uncertain environments and their physical properties vary with time. In this paper, the disturbance observer (DOB) method is applied to make the RSEA precisely generate the desired torque under such time-varying conditions. 
Based on the nominal model preserved by the DOB, feedback and feedforward controllers are optimally designed for the desired performance, i.e., the RSEA: (1) exhibits very low impedance and (2) generates the desired torque precisely while interacting with a human. The effectiveness of the proposed design is verified by experiments.", "title": "" }, { "docid": "7a417c3fe0a93656f5628463d9c425e7", "text": "Given a finite range space Σ = (X, R), with N = |X| + |R|, we present two simple algorithms, based on the multiplicative-weight method, for computing a small-size hitting set or set cover of Σ. The first algorithm is a simpler variant of the Brönnimann-Goodrich algorithm but more efficient to implement, and the second algorithm can be viewed as solving a two-player zero-sum game. These algorithms, in conjunction with some standard geometric data structures, lead to near-linear algorithms for computing a small-size hitting set or set cover for a number of geometric range spaces. For example, they lead to O(N polylog(N)) expected-time randomized O(1)-approximation algorithms for both hitting set and set cover if X is a set of points and ℜ a set of disks in R2.", "title": "" } ]
scidocsrr
d1fed528c5a08bb4995f74ffe1391fa8
Structure and function of auditory cortex: music and speech
[ { "docid": "a411780d406e8b720303d18cd6c9df68", "text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.", "title": "" } ]
[ { "docid": "a24b4546eb2da7ce6ce70f45cd16e07d", "text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.", "title": "" }, { "docid": "6d1f374686b98106ab4221066607721b", "text": "How does one instigate a scientific revolution, or more modestly, a shift of scientific paradigm? This must have been on the minds of the organizers of the two conferences \"The Economy as an Evolving Complex System, I and II\" and the research program in economics at the Santa Fe Institute documented in the present volume and its predecessor of ten years ago.(1) Their strategy might be reconstructed as follows. First, the stranglehold of neoclassical economics on the Anglo-Saxon academic community since World War II is at least partly due to the ascendancy of mathematical rigor as the touchstone of serious economic theorizing. Thus if one could beat the prevailing paradigm at its own game one would immediately have a better footing in the community than the heretics, mostly from the left or one of the variousìnstitu-tional' camps, who had been sniping at it from the sidelines all the while but were never above the suspicion of not being mathematically up to comprehending it in the first place. Second, one could enlist both prominent representatives and path-breaking methods from the natural sciences to legitimize the introduction of (to economists) fresh and in some ways disturbing approaches to the subject. 
This was particularly the tack taken in 1987, where roughly equal numbers of scientists and economists were brought together in an extensive brain storming session. Physics has always been the role model for other aspiring `hard' sciences, and physicists seem to have succeeded in institutionalizing a `permanent revolution' in their own methodology, i.e., they are relatively less dogmatic and willing to be more eclectic in the interests of getting results. The fact that, with the exception of a brief chapter by Philip Anderson in the present volume, physicists as representatives of their discipline are no longer present, presumably indicates that their services can now be dispensed with in this enterprise.(2) Finally, one should sponsor research of the highest caliber, always laudable in itself, and make judicious use of key personalities. Care should also be taken that the work is of a form and style which, rather than explicitly provoking the profession, makes it appear as if it were the natural generalization of previous mainstream research and thus reasonably amenable to inclusion in the canon. This while tacitly encouraging and profiting from a wave of publicity in the popular media, a difficult line to tread if one does not want to appear …", "title": "" }, { "docid": "e0c71e449f4c155a993ae04ece4bc822", "text": "This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. Connection between these seemingly separate fields is shown by considering standard textual representation of compound, SMILES. The problem of activity prediction against a target protein is considered, which is a crucial part of computer aided drug design process. Conducted experiments show that this way one can not only outrank state of the art results of hand crafted representations but also gets direct structural insights into the way decisions are made.", "title": "" }, { "docid": "f4b6f3b281a420999b60b38c245113a6", "text": "There is growing interest in using intranasal oxytocin (OT) to treat social dysfunction in schizophrenia and bipolar disorders (i.e., psychotic disorders). While OT treatment results have been mixed, emerging evidence suggests that OT system dysfunction may also play a role in the etiology of metabolic syndrome (MetS), which appears in one-third of individuals with psychotic disorders and associated with increased mortality. Here we examine the evidence for a potential role of the OT system in the shared risk for MetS and psychotic disorders, and its prospects for ameliorating MetS. Using several studies to demonstrate the overlapping neurobiological profiles of metabolic risk factors and psychiatric symptoms, we show that OT system dysfunction may be one common mechanism underlying MetS and psychotic disorders. Given the critical need to better understand metabolic dysregulation in these disorders, future OT trials assessing behavioural and cognitive outcomes should additionally include metabolic risk factor parameters.", "title": "" }, { "docid": "8612b5e8f00fd8469ba87f1514b69fd0", "text": "Online gaming is one of the most profitable businesses on the Internet. Among various threats to continuous player subscriptions, network lags are particularly notorious. It is widely known that frequent and long lags frustrate game players, but whether the players actually take action and leave a game is unclear. 
Motivated to answer this question, we apply survival analysis to a 1, 356-million-packet trace from a sizeable MMORPG, called ShenZhou Online. We find that both network delay and network loss significantly affect a player’s willingness to continue a game. For ShenZhou Online, the degrees of player “intolerance” of minimum RTT, RTT jitter, client loss rate, and server loss rate are in the proportion of 1:2:11:6. This indicates that 1) while many network games provide “ping time,” i.e., the RTT, to players to facilitate server selection, it would be more useful to provide information about delay jitters; and 2) players are much less tolerant of network loss than delay. This is due to the game designer’s decision to transfer data in TCP, where packet loss not only results in additional packet delays due to in-order delivery and retransmission, but also a lower sending rate.", "title": "" }, { "docid": "63663dbc320556f7de09b5060f3815a6", "text": "There has been a long history of applying AI technologies to address software engineering problems especially on tool automation. On the other hand, given the increasing importance and popularity of AI software, recent research efforts have been on exploring software engineering solutions to improve the productivity of developing AI software and the dependability of AI software. The emerging field of intelligent software engineering is to focus on two aspects: (1) instilling intelligence in solutions for software engineering problems; (2) providing software engineering solutions for intelligent software. This extended abstract shares perspectives on these two aspects of intelligent software engineering.", "title": "" }, { "docid": "ddc56e9f2cbe9c086089870ccec7e510", "text": "Serotonin is an ancient monoamine neurotransmitter, biochemically derived from tryptophan. It is most abundant in the gastrointestinal tract, but is also present throughout the rest of the body of animals and can even be found in plants and fungi. Serotonin is especially famous for its contributions to feelings of well-being and happiness. More specifically it is involved in learning and memory processes and is hence crucial for certain behaviors throughout the animal kingdom. This brief review will focus on the metabolism, biological role and mode-of-action of serotonin in insects. First, some general aspects of biosynthesis and break-down of serotonin in insects will be discussed, followed by an overview of the functions of serotonin, serotonin receptors and their pharmacology. Throughout this review comparisons are made with the vertebrate serotonergic system. Last but not least, possible applications of pharmacological adjustments of serotonin signaling in insects are discussed.", "title": "" }, { "docid": "83aa2a89f8ecae6a84134a2736a5bb22", "text": "The activity of dozens of simultaneously recorded neurons can be used to control the movement of a robotic arm or a cursor on a computer screen. This motor neural prosthetic technology has spurred an increased interest in the algorithms by which motor intention can be inferred. The simplest of these algorithms is the population vector algorithm (PVA), where the activity of each cell is used to weight a vector pointing in that neuron's preferred direction. Off-line, it is possible to show that more complicated algorithms, such as the optimal linear estimator (OLE), can yield substantial improvements in the accuracy of reconstructed hand movements over the PVA. We call this open-loop performance. 
In contrast, this performance difference may not be present in closed-loop, on-line control. The obvious difference between open and closed-loop control is the ability to adapt to the specifics of the decoder in use at the time. In order to predict performance gains that an algorithm may yield in closed-loop control, it is necessary to build a model that captures aspects of this adaptation process. Here we present a framework for modeling the closed-loop performance of the PVA and the OLE. Using both simulations and experiments, we show that (1) the performance gain with certain decoders can be far less extreme than predicted by off-line results, (2) that subjects are able to compensate for certain types of bias in decoders, and (3) that care must be taken to ensure that estimation error does not degrade the performance of theoretically optimal decoders.", "title": "" }, { "docid": "7d8884a7f6137068f8ede464cf63da5b", "text": "Object detection and localization is a crucial step for inspection and manipulation tasks in robotic and industrial applications. We present an object detection and localization scheme for 3D objects that combines intensity and depth data. A novel multimodal, scale- and rotation-invariant feature is used to simultaneously describe the object's silhouette and surface appearance. The object's position is determined by matching scene and model features via a Hough-like local voting scheme. The proposed method is quantitatively and qualitatively evaluated on a large number of real sequences, proving that it is generic and highly robust to occlusions and clutter. Comparisons with state of the art methods demonstrate comparable results and higher robustness with respect to occlusions.", "title": "" }, { "docid": "850becfa308ce7e93fea77673db8ab50", "text": "Controlled generation of text is of high practical use. Recent efforts have made impressive progress in generating or editing sentences with given textual attributes (e.g., sentiment). This work studies a new practical setting of text content manipulation. Given a structured record, such as (PLAYER: Lebron, POINTS: 20, ASSISTS: 10), and a reference sentence, such as Kobe easily dropped 30 points, we aim to generate a sentence that accurately describes the full content in the record, with the same writing style (e.g., wording, transitions) of the reference. The problem is unsupervised due to lack of parallel data in practice, and is challenging to minimally yet effectively manipulate the text (by rewriting/adding/deleting text portions) to ensure fidelity to the structured content. We derive a dataset from a basketball game report corpus as our testbed, and develop a neural method with unsupervised competing objectives and explicit content coverage constraints. Automatic and human evaluations show superiority of our approach over competitive methods including a strong rule-based baseline and prior approaches designed for style transfer.", "title": "" }, { "docid": "7e127a6f25e932a67f333679b0d99567", "text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. 
The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.", "title": "" }, { "docid": "ff1ed09b9952f9d0b67d6f6bb1cd507a", "text": "Microblogging websites have emerged to the center of information production and diffusion, on which people can get useful information from other users’ microblog posts. In the era of Big Data, we are overwhelmed by the large amount of microblog posts. To make good use of these informative data, an effective search tool is required specialized for microblog posts. However, it is not trivial to do microblog search due to the following reasons: 1) microblog posts are noisy and time-sensitive rendering general information retrieval models ineffective. 2) Conventional IR models are not designed to consider microblog-specific features. In this paper, we propose to utilize learning to rank model for microblog search. We combine content-based, microblog-specific and temporal features into learning to rank models, which are found to model microblog posts effectively. To study the performance of learning to rank models, we evaluate our models using tweet data set provided by TERC 2011 and TREC 2012 microblogs track with the comparison of three stateof-the-art information retrieval baselines, vector space model, language model, BM25 model. Extensive experimental studies demonstrate the effectiveness of learning to rank models and the usefulness to integrate microblog-specific and temporal information for microblog search task.", "title": "" }, { "docid": "d4a96cc393a3f1ca3bca94a57e07941e", "text": "With the increasing number of scientific publications, research paper recommendation has become increasingly important for scientists. Most researchers rely on keyword-based search or following citations in other papers, in order to find relevant research articles. And usually they spend a lot of time without getting satisfactory results. This study aims to propose a personalized research paper recommendation system, that facilitate this task by recommending papers based on users' explicit and implicit feedback. The users will be allowed to explicitly specify the papers of interest. In addition, user activities (e.g., viewing abstracts or full-texts) will be analyzed in order to enhance users' profiles. Most of the current research paper recommendation and information retrieval systems use the classical bag-of-words methods, which don't consider the context of the words and the semantic similarity between the articles. This study will use Recurrent Neural Networks (RNNs) to discover continuous and latent semantic features of the papers, in order to improve the recommendation quality. 
The proposed approach utilizes PubMed so far, since it is frequently used by physicians and scientists, but it can easily incorporate other datasets in the future.", "title": "" }, { "docid": "188c55ef248f7021a66c1f2e05c2fc98", "text": "The objective of the proposed study is to explore the performance of credit scoring using a two-stage hybrid modeling procedure with artificial neural networks and multivariate adaptive regression splines (MARS). The rationale under the analyses is firstly to use MARS in building the credit scoring model, the obtained significant variables are then served as the input nodes of the neural networks model. To demonstrate the effectiveness and feasibility of the proposed modeling procedure, credit scoring tasks are performed on one bank housing loan dataset using cross-validation approach. As the results reveal, the proposed hybrid approach outperforms the results using discriminant analysis, logistic regression, artificial neural networks and MARS and hence provides an alternative in handling credit scoring tasks. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "70b6abe2cb82eead9235612c1a1998d7", "text": "PURPOSE\nThe aim of the study was to investigate white blood cell counts and neutrophil to lymphocyte ratio (NLR) as markers of systemic inflammation in the diagnosis of localized testicular cancer as a malignancy with initially low volume.\n\n\nMATERIALS AND METHODS\nThirty-six patients with localized testicular cancer with a mean age of 34.22±14.89 years and 36 healthy controls with a mean age of 26.67±2.89 years were enrolled in the study. White blood cell counts and NLR were calculated from complete blood cell counts.\n\n\nRESULTS\nWhite blood cell counts and NLR were statistically significantly higher in patients with testicular cancer compared with the control group (p<0.0001 for all).\n\n\nCONCLUSIONS\nBoth white blood cell counts and NLR can be used as a simple test in the diagnosis of testicular cancer besides the well-known accurate serum tumor markers as AFP (alpha fetoprotein), hCG (human chorionic gonadotropin) and LDH (lactate dehydrogenase).", "title": "" }, { "docid": "655413f10d0b99afd15d54d500c9ffb6", "text": "Herbal medicine (phytomedicine) uses remedies possessing significant pharmacological activity and, consequently, potential adverse effects and drug interactions. The explosion in sales of herbal therapies has brought many products to the marketplace that do not conform to the standards of safety and efficacy that physicians and patients expect. Unfortunately, few surgeons question patients regarding their use of herbal medicines, and 70% of patients do not reveal their use of herbal medicines to their physicians and pharmacists. All surgeons should question patients about the use of the following common herbal remedies, which may increase the risk of bleeding during surgical procedures: feverfew, garlic, ginger, ginkgo, and Asian ginseng. Physicians should exercise caution in prescribing retinoids or advising skin resurfacing in patients using St John's wort, which poses a risk of photosensitivity reaction. Several herbal medicines, such as aloe vera gel, contain pharmacologically active ingredients that may aid in wound healing. 
Practitioners who wish to recommend herbal medicines to patients should counsel them that products labeled as supplements have not been evaluated by the US Food and Drug Administration and that no guarantee of product quality can be made.", "title": "" }, { "docid": "5c45aa22bb7182259f75260c879f81d6", "text": "This paper presents an approach to parsing the Manhattan structure of an indoor scene from a single RGBD frame. The problem of recovering the floor plan is recast as an optimal labeling problem which can be solved efficiently using Dynamic Programming.", "title": "" }, { "docid": "0bba0afb68f80afad03d0ba3d1ce9c89", "text": "The Luneburg lens is an aberration-free lens that focuses light from all directions equally well. We fabricated and tested a Luneburg lens in silicon photonics. Such fully-integrated lenses may become the building blocks of compact Fourier optics on chips. Furthermore, our fabrication technique is sufficiently versatile for making perfect imaging devices on silicon platforms.", "title": "" }, { "docid": "89ed5dc0feb110eb3abc102c4e50acaf", "text": "Automatic object detection in infrared images is a vital task for many military defense systems. The high detection rate and low false detection rate of this phase directly affect the performance of the following algorithms in the system as well as the general performance of the system. In this work, a fast and robust algorithm is proposed for detection of small and high intensity objects in infrared scenes. Top-hat transformation and mean filter was used to increase the visibility of the objects, and a two-layer thresholding algorithm was introduced to calculate the object sizes more accurately. Finally, small objects extracted by using post processing methods.", "title": "" } ]
scidocsrr
1d1c7ed520b543c6c4fd71f0e3776c9d
Teachers' pedagogical beliefs and their use of digital media in classrooms: Sharpening the focus of the 'will, skill, tool' model and integrating teachers' constructivist orientations
[ { "docid": "48dd3e8e071e7dd580ea42b528ee9427", "text": "Information systems (IS) implementation is costly and has a relatively low success rate. Since the seventies, IS research has contributed to a better understanding of this process and its outcomes. The early efforts concentrated on the identification of factors that facilitated IS use. This produced a long list of items that proved to be of little practical value. It became obvious that, for practical reasons, the factors had to be grouped into a model in a way that would facilitate analysis of IS use. In 1985, Fred Davis suggested the technology acceptance model (TAM). It examines the mediating role of perceived ease of use and perceived usefulness in their relation between systems characteristics (external variables) and the probability of system use (an indicator of system success). More recently, Davis proposed a new version of his model: TAM2. It includes subjective norms, and was tested with longitudinal research designs. Overall the two explain about 40% of system’s use. Analysis of empirical research using TAM shows that results are not totally consistent or clear. This suggests that significant factors are not included in the models. We conclude that TAM is a useful model, but has to be integrated into a broader one which would include variables related to both human and social change processes, and to the adoption of the innovation model. # 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "ff0644de5cd474dbd858c96bb4c76fd9", "text": "With the growth of the Internet of Things, many insecure embedded devices are entering into our homes and businesses. Some of these web-connected devices lack even basic security protections such as secure password authentication. As a result, thousands of IoT devices have already been infected with malware and enlisted into malicious botnets and many more are left vulnerable to exploitation. In this paper we analyze the practical security level of 16 popular IoT devices from high-end and low-end manufacturers. We present several low-cost black-box techniques for reverse engineering these devices, including software and fault injection based techniques for bypassing password protection. We use these techniques to recover device rmware and passwords. We also discover several common design aws which lead to previously unknown vulnerabilities. We demonstrate the e ectiveness of our approach by modifying a laboratory version of the Mirai botnet to automatically include these devices. We also discuss how to improve the security of IoT devices without signi cantly increasing their cost.", "title": "" }, { "docid": "6aed31a677c2fca976c91c67abd1e7b1", "text": "Facebook is the most popular Social Network Site (SNS) among college students. Despite the popularity and extensive use of Facebook by students, its use has not made significant inroads into classroom usage. In this study, we seek to examine why this is the case and whether it would be worthwhile for faculty to invest the time to integrate Facebook into their teaching. To this end, we decided to undertake a study with a sample of 214 undergraduate students at the University of Huelva (Spain). We applied the structural equation model specifically designed by Mazman and Usluel (2010) to identify the factors that may motivate these students to adopt and use social network tools, specifically Facebook, for educational purposes. According to our results, Social Influence is the most important factor in predicting the adoption of Facebook; students are influenced to adopt it to establish or maintain contact with other people with whom they share interests. Regarding the purposes of Facebook usage, Social Relations is perceived as the most important factor among all of the purposes collected. Our findings also revealed that the educational use of Facebook is explained directly by its purposes of usage and indirectly by its adoption. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e2459b9991cfda1e81119e27927140c5", "text": "This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol enabling multiplayer functionality in a mobile AR environment. One phone acts as the server and the other acts as the client. The server phone will capture the video frames, process the video frame, and send the current frame and the markers transformation matrices to the client phone.", "title": "" }, { "docid": "e79a335fb5dc6e2169484f8ac4130b35", "text": "We obtained expressions for TE and TM modes of the planar hyperbolic secant (HS) waveguide. We found waveguide parameters for which the fundamental mode has minimal width. By FDTD-simulation we show propagation of TE-modes and periodical reconstruction of non-modal fields in bounded HS-waveguides. 
We show that truncated HS-waveguide focuses plane wave into spot with diameter 0.132 of wavelength.", "title": "" }, { "docid": "b15b88a31cc1762618ca976bdf895d57", "text": "How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1,000+ pixels) natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task. Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead. Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task. We conclude that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.", "title": "" }, { "docid": "4dbbcaf264cc9beda8644fa926932d2e", "text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.", "title": "" }, { "docid": "85856deb5bf7cafef8f68ad13414d4b1", "text": "human health and safety, serving as an early-warning system for hazardous environmental conditions, such as poor air and water quality (e.g., Glasgow et al. 2004, Normander et al. 2008), and natural disasters, such as fires (e.g., Hefeeda and Bagheri 2009), floods (e.g., Young 2002), and earthquakes (e.g., Hart and Martinez 2006). Collectively, these changes in the technological landscape are altering the way that environmental conditions are monitored, creating a platform for new scientific discoveries (Porter et al. 2009). Although sensor networks can provide many benefits, they are susceptible to malfunctions that can result in lost or poor-quality data. Some level of sensor failure is inevitable; however, steps can be taken to minimize the risk of loss and to improve the overall quality of the data. In the ecological community, it has become common practice to post streaming sensor data online with limited or no quality control. That is, these data are often delivered to end users in a raw form, without any checks or evaluations having been performed. 
In such cases, the data are typically released provisionally with the understanding that they could change in the future. However, when provisional data are made publically available before they have been comprehensively checked, there is the potential for erroneous or misleading results. Streaming sensor networks have advanced ecological research by providing enormous quantities of data at fine temporal and spatial resolutions in near real time (Szewczyk et al. 2004, Porter et al. 2005, Collins et al. 2006). The advent of wireless technologies has enabled connections with sensors in remote locations, making it possible to transmit data instantaneously using communication devices such as cellular phones, radios, and local area networks. Advancements in cyberinfrastructure have improved data storage capacity, processing speed, and communication bandwidth, making it possible to deliver to end users the most current observations from sensors (e.g., within minutes after their collection). Recent technological developments have resulted in a new generation of in situ sensors that provide continuous data streams on the physical, chemical, optical, acoustical, and biological properties of ecosystems. These new types of sensors provide a window into natural patterns not obtainable with discrete measurements (Benson et al. 2010). Techniques for rapidly processing and interpreting digital data, such as webcam images in investigations of tree phenology (Richardson et al. 2009) and acoustic data in wildlife research (Szewczyk et al. 2004), have also enhanced our understanding of ecological processes. Access to near-real-time data has become important for", "title": "" }, { "docid": "c429bf418a4ecbd56c7b2ab6f4ca3cd6", "text": "The Internet exhibits a gigantic measure of helpful data which is generally designed for its users, which makes it hard to extract applicable information from different sources. Accordingly, the accessibility of strong, adaptable Information Extraction framework that consequently concentrate structured data such as, entities, relationships between entities, and attributes from unstructured or semi-structured sources. But somewhere during extraction of information may lead to the loss of its meaning, which is absolutely not feasible. Semantic Web adds solution to this problem. It is about providing meaning to the data and allow the machine to understand and recognize these augmented data more accurately. The proposed system is about extracting information from research data of IT domain like journals of IEEE, Springer, etc., which aid researchers and the organizations to get the data of journals in an optimized manner so the time and hard work of surfing and reading the entire journal's papers or articles reduces. Also the accuracy of the system is taken care of using RDF, the data extracted has a specific declarative semantics so that the meaning of the research papers or articles during extraction remains unchanged. In addition, the same approach shall be applied on multiple documents, so that time factor can get saved.", "title": "" }, { "docid": "bf5cedb076c779157e1c1fbd4df0adc9", "text": "Generating novel graph structures that optimize given objectives while obeying some given underlying rules is fundamental for chemistry, biology and social science research. 
This is especially important in the task of molecular graph generation, whose goal is to discover novel molecules with desired properties such as drug-likeness and synthetic accessibility, while obeying physical laws such as chemical valency. However, designing models to find molecules that optimize desired properties while incorporating highly complex and non-differentiable rules remains to be a challenging task. Here we propose Graph Convolutional Policy Network (GCPN), a general graph convolutional network based model for goaldirected graph generation through reinforcement learning. The model is trained to optimize domain-specific rewards and adversarial loss through policy gradient, and acts in an environment that incorporates domain-specific rules. Experimental results show that GCPN can achieve 61% improvement on chemical property optimization over state-of-the-art baselines while resembling known molecules, and achieve 184% improvement on the constrained property optimization task.", "title": "" }, { "docid": "a6a7770857964e96f98bd4021d38f59f", "text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.", "title": "" }, { "docid": "aafae4864d274540d0f80842970c7eac", "text": "Fraud is increasing with the extensive use of internet and the increase of online transactions. More advanced solutions are desired to protect financial service companies and credit card holders from constantly evolving online fraud attacks. The main objective of this paper is to construct an efficient fraud detection system which is adaptive to the behavior changes by combining classification and clustering techniques. This is a two stage fraud detection system which compares the incoming transaction against the transaction history to identify the anomaly using BOAT algorithm in the first stage. In second stage to reduce the false alarm rate suspected anomalies are checked with the fraud history database and make sure that the detected anomalies are due to fraudulent transaction or any short term change in spending profile. In this work BOAT supports incremental update of transactional database and it handles maximum fraud coverage with high speed and less cost. Proposed model is evaluated on both synthetically generated and real life data and shows very good accuracy in detecting fraud transaction.", "title": "" }, { "docid": "a19c27371c6bf366fddabc2fd3f277b7", "text": "Simultaneous sparse coding (SSC) or nonlocal image representation has shown great potential in various low-level vision tasks, leading to several state-of-the-art image restoration techniques, including BM3D and LSSC. However, it still lacks a physically plausible explanation about why SSC is a better model than conventional sparse coding for the class of natural images. 
Meanwhile, the problem of sparsity optimization, especially when tangled with dictionary learning, is computationally difficult to solve. In this paper, we take a low-rank approach toward SSC and provide a conceptually simple interpretation from a bilateral variance estimation perspective, namely that singular-value decomposition of similar packed patches can be viewed as pooling both local and nonlocal information for estimating signal variances. Such perspective inspires us to develop a new class of image restoration algorithms called spatially adaptive iterative singular-value thresholding (SAIST). For noise data, SAIST generalizes the celebrated BayesShrink from local to nonlocal models; for incomplete data, SAIST extends previous deterministic annealing-based solution to sparsity optimization through incorporating the idea of dictionary learning. In addition to conceptual simplicity and computational efficiency, SAIST has achieved highly competent (often better) objective performance compared to several state-of-the-art methods in image denoising and completion experiments. Our subjective quality results compare favorably with those obtained by existing techniques, especially at high noise levels and with a large amount of missing data.", "title": "" }, { "docid": "1831e2a5a75fc85299588323d68947b2", "text": "The Transaction Processing Performance Council (TPC) is completing development of TPC-DS, a new generation industry standard decision support benchmark. The TPC-DS benchmark, first introduced in the “The Making of TPC-DS” [9] paper at the 32 International Conference on Very Large Data Bases (VLDB), has now entered the TPC’s “Formal Review” phase for new benchmarks; companies and researchers alike can now download the draft benchmark specification and tools for evaluation. The first paper [9] gave an overview of the TPC-DS data model, workload model, and execution rules. This paper details the characteristics of different phases of the workload, namely: database load, query workload and data maintenance; and also their impact to the benchmark’s performance metric. As with prior TPC benchmarks, this workload will be widely used by vendors to demonstrate their capabilities to support complex decision support systems, by customers as a key factor in purchasing servers and software, and by the database community for research and development of optimization techniques.", "title": "" }, { "docid": "797166b4c68bcdc7a8860462117e2051", "text": "In this paper we propose a novel feature descriptor Extended Co-occurrence HOG (ECoHOG) and integrate it with dense point trajectories demonstrating its usefulness in fine grained activity recognition. This feature is inspired by original Co-occurrence HOG (CoHOG) that is based on histograms of occurrences of pairs of image gradients in the image. Instead relying only on pure histograms we introduce a sum of gradient magnitudes of co-occurring pairs of image gradients in the image. This results in giving the importance to the object boundaries and straightening the difference between the moving foreground and static background. We also couple ECoHOG with dense point trajectories extracted using optical flow from video sequences and demonstrate that they are extremely well suited for fine grained activity recognition. 
Using our feature we outperform state of the art methods in this task and provide extensive quantitative evaluation.", "title": "" }, { "docid": "6a1ade9670c8ee161209d54901318692", "text": "The motion of a plane can be described by a homography. We study how to parameterize homographies to maximize plane estimation performance. We compare the usual 3 × 3 matrix parameterization with a parameterization that combines 4 fixed points in one of the images with 4 variable points in the other image. We empirically show that this 4pt parameterization is far superior. We also compare both parameterizations with a variety of direct parameterizations. In the case of unknown relative orientation, we compare with a direct parameterization of the plane equation, and the rotation and translation of the camera(s). We show that the direct parameteri-zation is both less accurate and far less robust than the 4-point parameterization. We explain the poor performance using a measure of independence of the Jacobian images. In the fully calibrated setting, the direct parameterization just consists of 3 parameters of the plane equation. We show that this parameterization is far more robust than the 4-point parameterization, but only approximately as accurate. In the case of a moving stereo rig we find that the direct parameterization of plane equation, camera rotation and translation performs very well, both in terms of accuracy and robustness. This is in contrast to the corresponding direct parameterization in the case of unknown relative orientation. Finally, we illustrate the use of plane estimation in 2 automotive applications.", "title": "" }, { "docid": "90dc36628f9262157ea8722d82830852", "text": "Inexpensive fixed wing UAV are increasingly useful in remote sensing operations. They are a cheaper alternative to manned vehicles, and are ideally suited for dangerous or monotonous missions that would be inadvisable for a human pilot. Groups of UAV are of special interest for their abilities to coordinate simultaneous coverage of large areas, or cooperate to achieve goals such as mapping. Cooperation and coordination in UAV groups also allows increasingly large numbers of aircraft to be operated by a single user. Specific applications under consideration for groups of cooperating UAV are border patrol, search and rescue, surveillance, communications relaying, and mapping of hostile territory. The capabilities of small UAV continue to grow with advances in wireless communications and computing power. Accordingly, research topics in cooperative UAV control include efficient computer vision for real-time navigation and networked computing and communication strategies for distributed control, as well as traditional aircraft-related topics such as collision avoidance and formation flight. Emerging results in cooperative UAV control are presented via discussion of these topics, including particular requirements, challenges, and some promising strategies relating to each area. Case studies from a variety of programs highlight specific solutions and recent results, ranging from pure simulation to control of multiple UAV. This wide range of case studies serves as an overview of current problems of Interest, and does not present every relevant result.", "title": "" }, { "docid": "0f10bb2afc1797fad603d8c571058ecb", "text": "This paper presents findings from the All Wales Hate Crime Project. Most hate crime research has focused on discrete victim types in isolation. 
For the first time, internationally, this paper examines the psychological and physical impacts of hate crime across seven victim types drawing on quantitative and qualitative data. It contributes to the hate crime debate in two significant ways: (1) it provides the first look at the problem in Wales and (2) it provides the first multi-victim-type analysis of hate crime, showing that impacts are not homogenous across victim groups. The paper provides empirical credibility to the impacts felt by hate crime victims on the margins who have routinely struggled to gain support.", "title": "" }, { "docid": "a4dea5e491657e1ba042219401ebcf39", "text": "Beam scanning arrays typically suffer from scan loss; an increasing degradation in gain as the beam is scanned from broadside toward the horizon in any given scan plane. Here, a metasurface is presented that reduces the effects of scan loss for a leaky-wave antenna (LWA). The metasurface is simple, being composed of an ultrathin sheet of subwavelength split-ring resonators. The leaky-wave structure is balanced, scanning from the forward region, through broadside, and into the backward region, and designed to scan in the magnetic plane. The metasurface is effectively invisible at broadside, where balanced LWAs are most sensitive to external loading. It is shown that the introduction of the metasurface results in increased directivity, and hence, gain, as the beam is scanned off broadside, having an increasing effect as the beam is scanned to the horizon. Simulations show that the metasurface improves the effective aperture distribution at higher scan angles, resulting in a more directive main beam, while having a negligible impact on cross-polarization gain. Experimental validation results show that the scan range of the antenna is increased from $-39 {^{\\circ }} \\leq \\theta \\leq +32 {^{\\circ }}$ to $-64 {^{\\circ }} \\leq \\theta \\leq +70 {^{\\circ }}$ , when loaded with the metasurface, demonstrating a flattened gain profile over a 135° range centered about broadside. Moreover, this scan range occurs over a frequency band spanning from 9 to 15.5 GHz, demonstrating a relative bandwidth of 53% for the metasurface.", "title": "" }, { "docid": "16b5c5d176f2c9292d9c9238769bab31", "text": "We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.", "title": "" }, { "docid": "2a4201c5789a546edf8944acbcf99546", "text": "Relation extraction models based on deep learning have been attracting a lot of attention recently. Little research is carried out to reduce their need of labeled training data. In this work, we propose an unsupervised pre-training method based on the sequence-to-sequence model for deep relation extraction models. The pre-trained models need only half or even less training data to achieve equivalent performance as the same models without pre-training.", "title": "" } ]
scidocsrr
e712a6a8962386e24801f52412fdce61
Quantifying the relation between performance and success in soccer
[ { "docid": "b88ceafe9998671820291773be77cabc", "text": "The aim of this study was to propose a set of network methods to measure the specific properties of a team. These metrics were organised at macro-analysis levels. The interactions between teammates were collected and then processed following the analysis levels herein announced. Overall, 577 offensive plays were analysed from five matches. The network density showed an ambiguous relationship among the team, mainly during the 2nd half. The mean values of density for all matches were 0.48 in the 1st half, 0.32 in the 2nd half and 0.34 for the whole match. The heterogeneity coefficient for the overall matches rounded to 0.47 and it was also observed that this increased in all matches in the 2nd half. The centralisation values showed that there was no 'star topology'. The results suggest that each node (i.e., each player) had nearly the same connectivity, mainly in the 1st half. Nevertheless, the values increased in the 2nd half, showing a decreasing participation of all players at the same level. Briefly, these metrics showed that it is possible to identify how players connect with each other and the kind and strength of the connections between them. In summary, it may be concluded that network metrics can be a powerful tool to help coaches understand team's specific properties and support decision-making to improve the sports training process based on match analysis.", "title": "" }, { "docid": "6325188ee21b6baf65dbce6855c19bc2", "text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.", "title": "" } ]
[ { "docid": "c6e14529a55b0e6da44dd0966896421a", "text": "Context-based pairing solutions increase the usability of IoT device pairing by eliminating any human involvement in the pairing process. This is possible by utilizing on-board sensors (with same sensing modalities) to capture a common physical context (e.g., ambient sound via each device's microphone). However, in a smart home scenario, it is impractical to assume that all devices will share a common sensing modality. For example, a motion detector is only equipped with an infrared sensor while Amazon Echo only has microphones. In this paper, we develop a new context-based pairing mechanism called Perceptio that uses time as the common factor across differing sensor types. By focusing on the event timing, rather than the specific event sensor data, Perceptio creates event fingerprints that can be matched across a variety of IoT devices. We propose Perceptio based on the idea that devices co-located within a physically secure boundary (e.g., single family house) can observe more events in common over time, as opposed to devices outside. Devices make use of the observed contextual information to provide entropy for Perceptio's pairing protocol. We design and implement Perceptio, and evaluate its effectiveness as an autonomous secure pairing solution. Our implementation demonstrates the ability to sufficiently distinguish between legitimate devices (placed within the boundary) and attacker devices (placed outside) by imposing a threshold on fingerprint similarity. Perceptio demonstrates an average fingerprint similarity of 94.9% between legitimate devices while even a hypothetical impossibly well-performing attacker yields only 68.9% between itself and a valid device.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "581f8909adca17194df618cc951749cd", "text": "In this paper the problem of emotion recognition using physiological signals is presented. Firstly the problems with acquisition of physiological signals related to specific human emotions are described. It is not a trivial problem to elicit real emotions and to choose stimuli that always, and for all people, elicit the same emotion. Also different kinds of physiological signals for emotion recognition are considered. A set of the most helpful biosignals is chosen. An experiment is described that was performed in order to verify the possibility of eliciting real emotions using specially prepared multimedia presentations, as well as finding physiological signals that are most correlated with human emotions. The experiment was useful for detecting and identifying many problems and helping to find their solutions. The results of this research can be used for creation of affect-aware applications, for instance video games, that will be able to react to user's emotions.", "title": "" }, { "docid": "2603c07864b92c6723b40c83d3c216b9", "text": "Background: A study was undertaken to record exacerbations and health resource use in patients with COPD during 6 months of treatment with tiotropium, salmeterol, or matching placebos. Methods: Patients with COPD were enrolled in two 6-month randomised, placebo controlled, double blind, double dummy studies of tiotropium 18 μg once daily via HandiHaler or salmeterol 50 μg twice daily via a metered dose inhaler. 
The two trials were combined for analysis of health outcomes consisting of exacerbations, health resource use, dyspnoea (assessed by the transitional dyspnoea index, TDI), health related quality of life (assessed by St George’s Respiratory Questionnaire, SGRQ), and spirometry. Results: 1207 patients participated in the study (tiotropium 402, salmeterol 405, placebo 400). Compared with placebo, tiotropium but not salmeterol was associated with a significant delay in the time to onset of the first exacerbation. Fewer COPD exacerbations/patient year occurred in the tiotropium group (1.07) than in the placebo group (1.49, p<0.05); the salmeterol group (1.23 events/year) did not differ from placebo. The tiotropium group had 0.10 hospital admissions per patient year for COPD exacerbations compared with 0.17 for salmeterol and 0.15 for placebo (not statistically different). For all causes (respiratory and non-respiratory) tiotropium, but not salmeterol, was associated with fewer hospital admissions while both groups had fewer days in hospital than the placebo group. The number of days during which patients were unable to perform their usual daily activities was lowest in the tiotropium group (tiotropium 8.3 (0.8), salmeterol 11.1 (0.8), placebo 10.9 (0.8), p<0.05). SGRQ total score improved by 4.2 (0.7), 2.8 (0.7) and 1.5 (0.7) units during the 6 month trial for the tiotropium, salmeterol and placebo groups, respectively (p<0.01 tiotropium v placebo). Compared with placebo, TDI focal score improved in both the tiotropium group (1.1 (0.3) units, p<0.001) and the salmeterol group (0.7 (0.3) units, p<0.05). Evaluation of morning pre-dose FEV1, peak FEV1 and mean FEV1 (0–3 hours) showed that tiotropium was superior to salmeterol while both active drugs were more effective than placebo. Conclusions: Exacerbations of COPD and health resource usage were positively affected by daily treatment with tiotropium. With the exception of the number of hospital days associated with all causes, salmeterol twice daily resulted in no significant changes compared with placebo. Tiotropium also improved health related quality of life, dyspnoea, and lung function in patients with COPD.", "title": "" }, { "docid": "155e53e97c23498a557f848ef52da2a7", "text": "We propose a simultaneous extraction method for 12 organs from non-contrast three-dimensional abdominal CT images. The proposed method uses an abdominal cavity standardization process and atlas guided segmentation incorporating parameter estimation with the EM algorithm to deal with the large fluctuations in the feature distribution parameters between subjects. Segmentation is then performed using multiple level sets, which minimize the energy function that considers the hierarchy and exclusiveness between organs as well as uniformity of grey values in organs. To assess the performance of the proposed method, ten non-contrast 3D CT volumes were used. The accuracy of the feature distribution parameter estimation was slightly improved using the proposed EM method, resulting in better performance of the segmentation process. Nine organs out of twelve were statistically improved compared with the results without the proposed parameter estimation process. The proposed multiple level sets also boosted the performance of the segmentation by 7.2 points on average compared with the atlas guided segmentation. Nine out of twelve organs were confirmed to be statistically improved compared with the atlas guided method.
The proposed method was statistically proved to have better performance in the segmentation of 3D CT volumes.", "title": "" }, { "docid": "5ea45a4376e228b3eacebb8dd8e290d2", "text": "The sharing economy has quickly become a very prominent subject of research in the broader computing literature and the in human--computer interaction (HCI) literature more specifically. When other computing research areas have experienced similarly rapid growth (e.g. human computation, eco-feedback technology), early stage literature reviews have proved useful and influential by identifying trends and gaps in the literature of interest and by providing key directions for short- and long-term future work. In this paper, we seek to provide the same benefits with respect to computing research on the sharing economy. Specifically, following the suggested approach of prior computing literature reviews, we conducted a systematic review of sharing economy articles published in the Association for Computing Machinery Digital Library to investigate the state of sharing economy research in computing. We performed this review with two simultaneous foci: a broad focus toward the computing literature more generally and a narrow focus specifically on HCI literature. We collected a total of 112 sharing economy articles published between 2008 and 2017 and through our analysis of these papers, we make two core contributions: (1) an understanding of the computing community's contributions to our knowledge about the sharing economy, and specifically the role of the HCI community in these contributions (i.e.what has been done) and (2) a discussion of under-explored and unexplored aspects of the sharing economy that can serve as a partial research agenda moving forward (i.e.what is next to do).", "title": "" }, { "docid": "8d40b29088a331578e502abb2148ea8c", "text": "Governments are increasingly realizing the importance of utilizing Information and Communication Technologies (ICT) as a tool to better address user’s/citizen’s needs. As citizen’s expectations grow, governments need to deliver services of high quality level to motivate more users to utilize these available e-services. In spite of this, governments still fall short in their service quality level offered to citizens/users. Thus understanding and measuring service quality factors become crucial as the number of services offered is increasing while not realizing what citizens/users really look for when they utilize these services. The study presents an extensive literature review on approaches used to evaluate e-government services throughout a phase of time. The study also suggested those quality/factors indicators government’s need to invest in of high priority in order to meet current and future citizen’s expectations of service quality.", "title": "" }, { "docid": "00946bbfab7cd0ab0d51875b944bca66", "text": "We introduce RelNet: a new model for relational reasoning. RelNet is a memory augmented neural network which models entities as abstract memory slots and is equipped with an additional relational memory which models relations between all memory pairs. The model thus builds an abstract knowledge graph on the entities and relations present in a document which can then be used to answer questions about the document. It is trained end-to-end: only supervision to the model is in the form of correct answers to the questions. 
We test the model on the 20 bAbI question-answering tasks with 10k examples per task and find that it solves all the tasks with a mean error of 0.3%, achieving 0% error on 11 of the 20 tasks.", "title": "" }, { "docid": "56e1778df9d5b6fa36cbf4caae710e67", "text": "The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic, and finding the minimum of the quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and acts more like the Gauss-Newton method when the parameters are close to their optimal value. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.", "title": "" }, { "docid": "735cc7f7b067175705cb605affd7f06e", "text": "This paper presents a design, simulation, implementation and measurement of a novel microstrip meander patch antenna for the application of sensor networks. The dimension of the microstrip chip antenna is 15 mm times 15 mm times 2 mm. The meander-type radiating patch is constructed on the upper layer of the 2 mm height substrate with 0.0 5 mm height metallic conduct lines. Because of using the very high relative permittivity substrate ( epsivr=90), the proposed antenna achieves 315 MHz band operations.", "title": "" }, { "docid": "43b76baccb237dd36dddfac5854414b8", "text": "PISCES is a public server for culling sets of protein sequences from the Protein Data Bank (PDB) by sequence identity and structural quality criteria. PISCES can provide lists culled from the entire PDB or from lists of PDB entries or chains provided by the user. The sequence identities are obtained from PSI-BLAST alignments with position-specific substitution matrices derived from the non-redundant protein sequence database. PISCES therefore provides better lists than servers that use BLAST, which is unable to identify many relationships below 40% sequence identity and often overestimates sequence identity by aligning only well-conserved fragments. PDB sequences are updated weekly. PISCES can also cull non-PDB sequences provided by the user as a list of GenBank identifiers, a FASTA format file, or BLAST/PSI-BLAST output.", "title": "" }, { "docid": "a692778b7f619de5ad4bc3b2d627c265", "text": "Many students are being left behind by an educational system that some people believe is in crisis. 
Improving educational outcomes will require efforts on many fronts, but a central premise of this monograph is that one part of a solution involves helping students to better regulate their learning through the use of effective learning techniques. Fortunately, cognitive and educational psychologists have been developing and evaluating easy-to-use learning techniques that could help students achieve their learning goals. In this monograph, we discuss 10 learning techniques in detail and offer recommendations about their relative utility. We selected techniques that were expected to be relatively easy to use and hence could be adopted by many students. Also, some techniques (e.g., highlighting and rereading) were selected because students report relying heavily on them, which makes it especially important to examine how well they work. The techniques include elaborative interrogation, self-explanation, summarization, highlighting (or underlining), the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice. To offer recommendations about the relative utility of these techniques, we evaluated whether their benefits generalize across four categories of variables: learning conditions, student characteristics, materials, and criterion tasks. Learning conditions include aspects of the learning environment in which the technique is implemented, such as whether a student studies alone or with a group. Student characteristics include variables such as age, ability, and level of prior knowledge. Materials vary from simple concepts to mathematical problems to complicated science texts. Criterion tasks include different outcome measures that are relevant to student achievement, such as those tapping memory, problem solving, and comprehension. We attempted to provide thorough reviews for each technique, so this monograph is rather lengthy. However, we also wrote the monograph in a modular fashion, so it is easy to use. In particular, each review is divided into the following sections: General description of the technique and why it should work How general are the effects of this technique?  2a. Learning conditions  2b. Student characteristics  2c. Materials  2d. Criterion tasks Effects in representative educational contexts Issues for implementation Overall assessment The review for each technique can be read independently of the others, and particular variables of interest can be easily compared across techniques. To foreshadow our final recommendations, the techniques vary widely with respect to their generalizability and promise for improving student learning. Practice testing and distributed practice received high utility assessments because they benefit learners of different ages and abilities and have been shown to boost students' performance across many criterion tasks and even in educational contexts. Elaborative interrogation, self-explanation, and interleaved practice received moderate utility assessments. The benefits of these techniques do generalize across some variables, yet despite their promise, they fell short of a high utility assessment because the evidence for their efficacy is limited. For instance, elaborative interrogation and self-explanation have not been adequately evaluated in educational contexts, and the benefits of interleaving have just begun to be systematically explored, so the ultimate effectiveness of these techniques is currently unknown. 
Nevertheless, the techniques that received moderate-utility ratings show enough promise for us to recommend their use in appropriate situations, which we describe in detail within the review of each technique. Five techniques received a low utility assessment: summarization, highlighting, the keyword mnemonic, imagery use for text learning, and rereading. These techniques were rated as low utility for numerous reasons. Summarization and imagery use for text learning have been shown to help some students on some criterion tasks, yet the conditions under which these techniques produce benefits are limited, and much research is still needed to fully explore their overall effectiveness. The keyword mnemonic is difficult to implement in some contexts, and it appears to benefit students for a limited number of materials and for short retention intervals. Most students report rereading and highlighting, yet these techniques do not consistently boost students' performance, so other techniques should be used in their place (e.g., practice testing instead of rereading). Our hope is that this monograph will foster improvements in student learning, not only by showcasing which learning techniques are likely to have the most generalizable effects but also by encouraging researchers to continue investigating the most promising techniques. Accordingly, in our closing remarks, we discuss some issues for how these techniques could be implemented by teachers and students, and we highlight directions for future research.", "title": "" }, { "docid": "7a12529d179d9ca6b94dbac57c54059f", "text": "A novel design of a hand functions task training robotic system was developed for the stroke rehabilitation. It detects the intention of hand opening or hand closing from the stroke person using the electromyography (EMG) signals measured from the hemiplegic side. This training system consists of an embedded controller and a robotic hand module. Each hand robot has 5 individual finger assemblies capable to drive 2 degrees of freedom (DOFs) of each finger at the same time. Powered by the linear actuator, the finger assembly achieves 55 degree range of motion (ROM) at the metacarpophalangeal (MCP) joint and 65 degree range of motion (ROM) at the proximal interphalangeal (PIP) joint. Each finger assembly can also be adjusted to fit for different finger length. With this task training system, stroke subject can open and close their impaired hand using their own intention to carry out some of the daily living tasks.", "title": "" }, { "docid": "ea277c160544fb54bef69e2a4fa85233", "text": "This paper proposes approaches to measure linkography in protocol studies of designing. It outlines the ideas behind using clustering and Shannon’s entropy as measures of designing behaviour. Hypothetical cases are used to illustrate the methods. The paper concludes that these methods may form the basis of a new tool to assess designer behaviour in terms of chunking of design ideas and the opportunities for idea development.", "title": "" }, { "docid": "a5df1d285a359c493d53d1a3bf9920c2", "text": "In this paper, we have reported a new failure phenomenon of read-disturb in MLC NAND flash memory caused by boosting hot-carrier injection effect. 1) The read-disturb failure occurred on unselected WL (WLn+1) after the adjacent selected WL (WLn) was performed with more than 1K read cycles. 2) The read-disturb failure of WLn+1 depends on WLn cell’s Vth and its applied voltage. 
3) The mechanism of this kind of failure can be explained by hot carrier injection that is generated by discharging from boosting voltage in unselected cell area (Drain of WLn) to ground (Source of WLn). Experiment A NAND Flash memory was fabricated based on 70nm technology. In order to investigate the mechanisms of readdisturb, 3 different read voltages and 4 different cell data states (S0, S1, S2 and S3) were applied on the selected WL with SGS/SGD rising time shift scheme [1]. Fig. 1 and Fig. 2 show the operation condition and waveform for readdisturb evaluation. In the evaluation, the selected WLn was performed with more than 100K read cycles. Result And Discussion Fig. 3 shows the measured results of WL2 Vth shift (i.e. read-disturb failure) during WL1 read-didturb cycles with different WL1 voltages (VWL1) and cell data states (S0~S3). From these data, a serious WL2 Vth shift can be observed in VWL1=0.5V and VWL1=1.8V after 1K read cycles. In Fig. 3(a), the magnitude of Vth shift with WL1=S2 state is larger than that with WL1=S3 state. However, obviously WL2 Vth shift can be found only when WL1 is at S3 state in Fig. 3(b). In Fig. 3(c), WL2 Vth is unchanged while VWL1 is set to 3.6V. To precisely analyze the phenomenon, further TCAD simulation and analysis were carried out to clarify the mechanism of the read-disturb failure. Based on simulation results of Fig. 4, the channel potential difference between selected WLn (e.g. WL1) and unselected WLn+1 (e.g. WL2) is related to cell data states (S0~S3) and the read voltage of the selected WL (VWL1). Fig. 4(a) exhibits that the selected WL1 channel was tuned off and the channel potential of unselected WL2~31 was boosted to high level when the WL1 cell data state is S2 or S3. Therefore, a sufficient potential difference appears between WLn and WLn+1 and provides a high transverse electric field. When VWL1 is increased to 1.8V as Fig. 4(b), a high programming cell state (S3) is required to support the potential boosting of unselected WL2~31. In addition, from Fig. 4(c) and the case of WL1=S2 in Fig. 4(b), we can find that the potential difference were depressed since the WL1 channel is turned on by high WL1 voltage. Therefore, the potential difference can be reduced by sharing effect. These simulation results are well corresponding with read disturb results of Fig. 3. Electron current density is another factor to cause the Vth shift of WLn+1. From Fig. 3(a), the current density of WL1=S2 should higher than that of WL=S3 since its Vth is lower. Consequently, the probability of impact ionization can be increased due to the high current density in case of WL1=S2. According to the model, we can clearly explain the phenomenon of serious WL2 Vth shift occurs in the condition of WL 1=S2 rather than WL1=S3. Fig. 5 shows the schematic diagram of the mechanism of Boosting Hot-carrier Injection in MLC NAND flash memories. The transverse E-field can be enhanced by the channel potential difference and consequently make a high probability of impact ionization. As a result, electron-hole pairs will be generated, and then electrons will inject into the adjacent cell (WL2) since the higher vertical field of VWL2. Thus, the Vth of adjacent cell will be changed after 1K cycles with the continual injecting of the hot electrons. Table 1 shows the measured result of cell array Vth shift for WL1 to WL4 after the read-disturb cycles on WL1 or WL2. 
From the data, it concretely indicates that the WLn read cycles can only cause a WLn+1 Vth shift even if the read cycles were not applied to WLn+1. The result is consistent with the measured data and also supports that the read-disturb on the adjacent cell results from boosting hot-carrier injection. Conclusion A new read-disturb failure mechanism caused by the boosting hot-carrier injection effect in MLC NAND flash memory has been reported and clarified. Simulation and measured data show that the electrostatic potential difference between the reading cell and the adjacent cell plays a significant role in enhancing the hot-carrier injection effect. Reference [1] Ken Takeuchi, “A 56nm CMOS 99mm2 8Gb Multilevel NAND Flash Memory with 10MB/s Program Throughput,” ISSCC, 2006. Table 1 (Read Disturbance Test Result, WL0–WL4): Case 1 (WL1 read = 0.5V): all word lines pass for WL1=S0 and WL1=S1; WL2 fails for WL1=S2 and WL1=S3. Case 2 (WL1 read = 1.8V): WL2 fails only for WL1=S3; all other word lines pass. Case 3 (WL1 read = 3.6V): all word lines pass for every WL1 state. Read cycles applied to WL2: all word lines pass for WL2=S0 and WL2=S1; WL3 fails for WL2=S2 and WL2=S3.", "title": "" }, { "docid": "ee0d11cbd2e723aff16af1c2f02bbc2b", "text": "This study simplifies the complicated metric distance method [L.S. Chen, C.H. Cheng, Selecting IS personnel using ranking fuzzy number by metric distance method, Eur. J. Operational Res. 160 (3) 2005 803–820], and proposes an algorithm to modify Chen’s Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [C.T. Chen, Extensions of the TOPSIS for group decision-making under fuzzy environment, Fuzzy Sets Syst., 114 (2000) 1–9]. From experimental verification, Chen directly assigned the fuzzy numbers 1̃ and 0̃ as the fuzzy positive ideal solution (PIS) and negative ideal solution (NIS). Chen’s method sometimes violates the basic concepts of traditional TOPSIS. This study thus proposes fuzzy hierarchical TOPSIS, which not only is well suited for evaluating fuzziness and uncertainty problems, but also can provide more objective and accurate criterion weights, while simultaneously avoiding the problem of Chen’s Fuzzy TOPSIS. For application and verification, this study presents a numerical example and builds a practical supplier selection problem to verify our proposed method and compare it with other methods.", "title": "" }, { "docid": "2ecdf4a4d7d21ca30f3204506a91c22c", "text": "Because of the transition from analog to digital technologies, content owners are seeking technologies for the protection of copyrighted multimedia content. Encryption and watermarking are two major tools that can be used to prevent unauthorized consumption and duplication. In this paper, we generalize an idea in a recent paper that embeds a binary pattern in the form of a binary image in the LL and HH bands at the second level of Discrete Wavelet Transform (DWT) decomposition. 
Our generalization includes all four bands (LL, HL, LH, and HH), and a comparison of embedding a watermark at the first and second level decompositions. We tested the proposed algorithm against fifteen attacks. Embedding the watermark in lower frequencies is robust to one group of attacks, and embedding the watermark in higher frequencies is robust to another set of attacks. Only for rewatermarking and collusion attacks are the watermarks extracted from all four bands identical. Our experiments indicate that first level decomposition appears advantageous for two reasons: the area for watermark embedding is maximized, and the extracted watermarks are more textured with better visual quality.", "title": "" }, { "docid": "89cc39369eeb6c12a12c61e210c437e3", "text": "Multimodal learning with deep Boltzmann machines (DBMs) is a generative approach to fusing multimodal inputs, and can learn the shared representation via Contrastive Divergence (CD) for classification and information retrieval tasks. However, it is a 2-fan DBM model, and cannot effectively handle multiple prediction tasks. Moreover, this model cannot recover the hidden representations well by sampling from the conditional distribution when more than one modality is missing. In this paper, we propose a K-fan deep structure model, which can handle multi-input and multi-output learning problems effectively. In particular, the deep structure has K branches for different inputs, where each branch can be composed of a multi-layer deep model, and a shared representation is learned in a discriminative manner to tackle multimodal tasks. Given the deep structure, we propose two objective functions to handle two multi-input and multi-output tasks: joint visual restoration and labeling, and multi-view multi-class object recognition. To estimate the model parameters, we initialize the deep model parameters with CD to maximize the joint distribution, and then we use backpropagation to update the model according to the specific objective function. The experimental results demonstrate that the model effectively leverages multi-source information and predicts multiple tasks well over competitive baselines.", "title": "" }, { "docid": "d4ea4a718837db4ecdfd64896661af77", "text": "Laboratory studies have documented that women often respond less favorably to competition than men. Conditional on performance, men are often more eager to compete, and the performance of men tends to respond more positively to an increase in competition. This means that few women enter and win competitions. We review studies that examine the robustness of these differences as well as the factors that may give rise to them. Both laboratory and field studies largely confirm these initial findings, showing that gender differences in competitiveness tend to result from differences in overconfidence and in attitudes toward competition. Gender differences in risk aversion, however, seem to play a smaller and less robust role. We conclude by asking what could and should be done to encourage qualified males and females to compete.", "title": "" }, { "docid": "c7d901f63f0d7ca0b23d5b8f23d92f7d", "text": "We propose a novel approach to automatic spoken language identification (LID) based on vector space modeling (VSM). 
It is assumed that the overall sound characteristics of all spoken languages can be covered by a universal collection of acoustic units, which can be characterized by the acoustic segment models (ASMs). A spoken utterance is then decoded into a sequence of ASM units. The ASM framework furthers the idea of language-independent phone models for LID by introducing an unsupervised learning procedure to circumvent the need for phonetic transcription. Analogous to representing a text document as a term vector, we convert a spoken utterance into a feature vector with its attributes representing the co-occurrence statistics of the acoustic units. As such, we can build a vector space classifier for LID. The proposed VSM approach leads to a discriminative classifier backend, which is demonstrated to give superior performance over likelihood-based n-gram language modeling (LM) backend for long utterances. We evaluated the proposed VSM framework on 1996 and 2003 NIST Language Recognition Evaluation (LRE) databases, achieving an equal error rate (EER) of 2.75% and 4.02% in the 1996 and 2003 LRE 30-s tasks, respectively, which represents one of the best results reported on these popular tasks", "title": "" } ]
scidocsrr
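The vector-space LID passage in the record above (docid c7d901f6…) describes decoding an utterance into acoustic-unit tokens, forming a co-occurrence (n-gram count) vector, and classifying it with a discriminative backend. The sketch below is an editor-added illustration of that general pattern only, not the authors' system; the token strings, labels, and the use of scikit-learn are assumptions.

```python
# Illustrative sketch (not the paper's system): an utterance decoded into
# acoustic-unit tokens becomes an n-gram count vector and is classified by a
# discriminative linear model. Token strings and labels are toy placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

utterances = ["a12 a07 a33 a12", "a07 a07 a19 a02",
              "a33 a12 a12 a40", "a02 a19 a19 a07"]   # fake decoded token streams
languages = ["en", "zh", "en", "zh"]                   # toy language labels

# Unigram + bigram counts stand in for the co-occurrence statistics of the units.
vectorizer = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")
X = vectorizer.fit_transform(utterances)

clf = LinearSVC().fit(X, languages)                    # vector-space classifier backend
print(clf.predict(vectorizer.transform(["a12 a33 a12 a07"])))
```

Any linear classifier over the count vectors would serve here; the point is only that the utterance is treated like a text document over acoustic "terms".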
ae3dbdad428b7cd12dadceef2f3ef261
Linguistic Reflections of Student Engagement in Massive Open Online Courses
[ { "docid": "a7eff25c60f759f15b41c85ac5e3624f", "text": "Connectivist massive open online courses (cMOOCs) represent an important new pedagogical approach ideally suited to the network age. However, little is known about how the learning experience afforded by cMOOCs is suited to learners with different skills, motivations, and dispositions. In this study, semi-structured interviews were conducted with 29 participants on the Change11 cMOOC. These accounts were analyzed to determine patterns of engagement and factors affecting engagement in the course. Three distinct types of engagement were recognized – active participation, passive participation, and lurking. In addition, a number of key factors that mediated engagement were identified including confidence, prior experience, and motivation. This study adds to the overall understanding of learning in cMOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how the learning experience afforded by cMOOCs suits the diverse range of learners that may coexist within a cMOOC. These insights can be used by designers of future cMOOCs to tailor the learning experience to suit the diverse range of learners that may choose to learn in this way.", "title": "" }, { "docid": "2fbc75f848a0a3ae8228b5c6cbe76ec4", "text": "The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed. The relationships of goal setting to other theories are described as are the theory's limitations.", "title": "" } ]
[ { "docid": "4dca240e5073db9f09e6fdc3b022a29a", "text": "We describe an evolutionary approach to the control problem of bipedal walking. Using a full rigid-body simulation of a biped, it was possible to evolve recurrent neural networks that controlled stable straight-line walking on a planar surface. No proprioceptive information was necessary to achieve this task. Furthermore, simple sensory input to locate a sound source was integrated to achieve directional walking. To our knowledge, this is the first work that demonstrates the application of evolutionary optimization to three-dimensional physically simulated biped locomotion.", "title": "" }, { "docid": "cf0b49aabe042b93be0c382ad69e4093", "text": "This paper shows a technique to enhance the resolution of a frequency modulated continuous wave (FMCW) radar system. The range resolution of an FMCW radar system is limited by the bandwidth of the transmitted signal. By using high resolution methods such as the Matrix Pencil Method (MPM) it is possible to enhance the resolution. In this paper a new method to obtain a better resolution for FMCW radar systems is used. This new method is based on the MPM and is enhanced to require less computing power. To evaluate this new technique, simulations and measurements are used. The result shows that this new method is able to improve the performance of FMCW radar systems.", "title": "" }, { "docid": "a5b147f5b3da39fed9ed11026f5974a2", "text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).", "title": "" }, { "docid": "db7a4ab8d233119806e7edf2a34fffd1", "text": "Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user’s topical interests. In this paper, we propose a new embedding approach to learning user profiles, where users are embedded on a topical interest space. We then directly utilize the user profiles for search personalization. 
Experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines.", "title": "" }, { "docid": "d9a87325efbd29520c37ec46531c6062", "text": "Predicting the risk of potential diseases from Electronic Health Records (EHR) has attracted considerable attention in recent years, especially with the development of deep learning techniques. Compared with traditional machine learning models, deep learning based approaches achieve superior performance on risk prediction task. However, none of existing work explicitly takes prior medical knowledge (such as the relationships between diseases and corresponding risk factors) into account. In medical domain, knowledge is usually represented by discrete and arbitrary rules. Thus, how to integrate such medical rules into existing risk prediction models to improve the performance is a challenge. To tackle this challenge, we propose a novel and general framework called PRIME for risk prediction task, which can successfully incorporate discrete prior medical knowledge into all of the state-of-the-art predictive models using posterior regularization technique. Different from traditional posterior regularization, we do not need to manually set a bound for each piece of prior medical knowledge when modeling desired distribution of the target disease on patients. Moreover, the proposed PRIME can automatically learn the importance of different prior knowledge with a log-linear model.Experimental results on three real medical datasets demonstrate the effectiveness of the proposed framework for the task of risk prediction", "title": "" }, { "docid": "719c1b6ad0d945b68b34abceb1ed8e3b", "text": "This editorial provides a behavioral science view on gamification and health behavior change, describes its principles and mechanisms, and reviews some of the evidence for its efficacy. Furthermore, this editorial explores the relation between gamification and behavior change frameworks used in the health sciences and shows how gamification principles are closely related to principles that have been proven to work in health behavior change technology. Finally, this editorial provides criteria that can be used to assess when gamification provides a potentially promising framework for digital health interventions.", "title": "" }, { "docid": "927f2c68d709c7418ad76fd9d81b18c4", "text": "With the growing deployment of host and network intrusion detection systems, managing reports from these systems becomes critically important. We present a probabilistic approach to alert correlation, extending ideas from multisensor data fusion. Features used for alert correlation are based on alert content that anticipates evolving IETF standards. The probabilistic approach provides a unified mathematical framework for correlating alerts that match closely but not perfectly, where the minimum degree of match required to fuse alerts is controlled by a single configurable parameter. Only features in common are considered in the fusion algorithm. For each feature we define an appropriate similarity function. The overall similarity is weighted by a specifiable expectation of similarity. In addition, a minimum similarity may be specified for some or all features. Features in this set must match at least as well as the minimum similarity specification in order to combine alerts, regardless of the goodness of match on the feature set as a whole. 
Our approach correlates attacks over time, correlates reports from heterogeneous sensors, and correlates multiple attack steps.", "title": "" }, { "docid": "121f2bfd854b79a14e8171d875ba951f", "text": "Arising from many applications at the intersection of decision-making and machine learning, Marginal Maximum A Posteriori (Marginal MAP) problems unify the two main classes of inference, namely maximization (optimization) and marginal inference (counting), and are believed to have higher complexity than both of them. We propose XOR_MMAP, a novel approach to solve the Marginal MAP problem, which represents the intractable counting subproblem with queries to NP oracles, subject to additional parity constraints. XOR_MMAP provides a constant factor approximation to the Marginal MAP problem, by encoding it as a single optimization in a polynomial size of the original problem. We evaluate our approach in several machine learning and decision-making applications, and show that our approach outperforms several state-of-the-art Marginal MAP solvers.", "title": "" }, { "docid": "3bae971fce094c3ff6c34595bac60ef2", "text": "In this work, we present a 3D 128Gb 2bit/cell vertical-NAND (V-NAND) Flash product. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1xnm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50MB/s write throughput with 3K endurance for typical embedded applications. Also, extended endurance of 35K is achieved with 36MB/s of write throughput for data center and enterprise SSD applications. And 2nd generation of 3D V-NAND opens up a whole new world at SSD endurance, density and battery life for portables.", "title": "" }, { "docid": "7a4bb28ae7c175a018b278653e32c3a1", "text": "Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. 
Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems.", "title": "" }, { "docid": "f2a1e5d8e99977c53de9f2a82576db69", "text": "During the last years, several masking schemes for AES have been proposed to secure hardware implementations against DPA attacks. In order to investigate the effectiveness of these countermeasures in practice, we have designed and manufactured an ASIC. The chip features an unmasked and two masked AES-128 encryption engines that can be attacked independently. In addition to conventional DPA attacks on the output of registers, we have also mounted attacks on the output of logic gates. Based on simulations and physical measurements we show that the unmasked and masked implementations leak side-channel information due to glitches at the output of logic gates. It turns out that masking the AES S-Boxes does not prevent DPA attacks, if glitches occur in the circuit.", "title": "" }, { "docid": "d6d07f50778ba3d99f00938b69fe0081", "text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.", "title": "" }, { "docid": "f2fed9066ac945ae517aef8ec5bb5c61", "text": "BACKGROUND\nThe aging of society is a global trend, and care of older adults with dementia is an urgent challenge. As dementia progresses, patients exhibit negative emotions, memory disorders, sleep disorders, and agitated behavior. 
Agitated behavior is one of the most difficult problems for family caregivers and healthcare providers to handle when caring for older adults with dementia.\n\n\nPURPOSE\nThe aim of this study was to investigate the effectiveness of white noise in improving agitated behavior, mental status, and activities of daily living in older adults with dementia.\n\n\nMETHODS\nAn experimental research design was used to study elderly participants two times (pretest and posttest). Six dementia care centers in central and southern Taiwan were targeted to recruit participants. There were 63 participants: 28 were in the experimental group, and 35 were in the comparison group. Experimental group participants received 20 minutes of white noise consisting of ocean, rain, wind, and running water sounds between 4 and 5 P.M. daily over a period of 4 weeks. The comparison group received routine care. Questionnaires were completed, and observations of agitated behaviors were collected before and after the intervention.\n\n\nRESULTS\nAgitated behavior in the experimental group improved significantly between pretest and posttest. Furthermore, posttest scores on the Mini-Mental Status Examination and Barthel Index were slightly better for this group than at pretest. However, the experimental group registered no significant difference in mental status or activities of daily living at posttest. For the comparison group, agitated behavior was unchanged between pretest and posttest.\n\n\nCONCLUSIONS\nThe results of this study support white noise as a simple, convenient, and noninvasive intervention that improves agitated behavior in older adults with dementia. These results may provide a reference for related healthcare providers, educators, and administrators who care for older adults with dementia.", "title": "" }, { "docid": "e3d0a58ddcffabb26d5e059d3ae6b370", "text": "HCI ( Human Computer Interaction ) studies the ways humans use digital or computational machines, systems or infrastructures. The study of the barriers encountered when users interact with the various interfaces is critical to improving their use, as well as their experience. Access and information processing is carried out today from multiple devices (computers, tablets, phones... ) which is essential to maintain a multichannel consistency. This complexity increases with environments in which we do not have much experience as users, where interaction with the machine is a challenge even in phases of research: virtual reality environments, augmented reality, or viewing and handling of large amounts of data, where the simplicity and ease of use are critical.", "title": "" }, { "docid": "e8c9067f13c9a57be46823425deb783b", "text": "In order to utilize the tremendous computing power of graphics hardware and to automatically adapt to the fast and frequent changes in its architecture and performance characteristics, this paper implements an automatic tuning system to generate high-performance matrix-multiplication implementation on graphics hardware. The automatic tuning system uses a parameterized code generator to generate multiple versions of matrix multiplication, whose performances are empirically evaluated by actual execution on the target platform. An ad-hoc search engine is employed to search over the implementation space for the version that yields the best performance. 
In contrast to similar systems on CPUs, which utilize cache blocking, register tiling, instruction scheduling tuning strategies, this paper identifies and exploits several tuning strategies that are unique for graphics hardware. These tuning strategies include optimizing for multiple-render-targets, SIMD instructions with data packing, overcoming limitations on instruction count and dynamic branch instruction. The generated implementations have comparable performance with expert manually tuned version in spite of the significant overhead incurred due to the use of the high-level BrookGPU language.", "title": "" }, { "docid": "01f8616cafa72c473e33f149faff044a", "text": "We show that the e-commerce domain can provide all the right ingredients for successful data mining and claim that it is a killer domain for data mining. We describe an integrated architecture, based on our experience at Blue Martini Software, for supporting this integration. The architecture can dramatically reduce the pre-processing, cleaning, and data understanding effort often documented to take 80% of the time in knowledge discovery projects. We emphasize the need for data collection at the application server layer (not the web server) in order to support logging of data and metadata that is essential to the discovery process. We describe the data transformation bridges required from the transaction processing systems and customer event streams (e.g., clickstreams) to the data warehouse. We detail the mining workbench, which needs to provide multiple views of the data through reporting, data mining algorithms, visualization, and OLAP. We conclude with a set of challenges.", "title": "" }, { "docid": "fe41de4091692d1af643bf144ac1dcaa", "text": "Introduction. This research addresses a primary issue that involves motivating academics to share knowledge. Adapting the theory of reasoned action, this study examines the role of motivation that consists of intrinsic motivators (commitment; enjoyment in helping others) and extrinsic motivators (reputation; organizational rewards) to determine and explain the behaviour of Malaysian academics in sharing knowledge. Method. A self-administered questionnaire was distributed using a non-probability sampling technique. A total of 373 completed responses were collected with a total response rate of 38.2%. Analysis. The partial least squares analysis was used to analyse the data. Results. The results indicated that all five of the hypotheses were supported. Analysis of data from the five higher learning institutions in Malaysia found that commitment and enjoyment in helping others (i.e., intrinsic motivators) and reputation and organizational rewards (i.e., extrinsic motivators) have a positive and significant relationship with attitude towards knowledge-sharing. In addition, the findings revealed that intrinsic motivators are more influential than extrinsic motivators. This suggests that academics are influenced more by intrinsic motivators than by extrinsic motivators. Conclusions. 
The findings provided an indication of the determinants in enhancing knowledgesharing intention among academics in higher education institutions through extrinsic and intrinsic motivators.", "title": "" }, { "docid": "2da67ed8951caf3388ca952465d61b37", "text": "As a significant supplier of labour migrants, Southeast Asia presents itself as an important site for the study of children in transnational families who are growing up separated from at least one migrant parent and sometimes cared for by 'other mothers'. Through the often-neglected voices of left-behind children, we investigate the impact of parental migration and the resulting reconfiguration of care arrangements on the subjective well-being of migrants' children in two Southeast Asian countries, Indonesia and the Philippines. We theorise the child's position in the transnational family nexus through the framework of the 'care triangle', representing interactions between three subject groups- 'left-behind' children, non-migrant parents/other carers; and migrant parent(s). Using both quantitative (from 1010 households) and qualitative (from 32 children) data from a study of child health and migrant parents in Southeast Asia, we examine relationships within the caring spaces both of home and of transnational spaces. The interrogation of different dimensions of care reveals the importance of contact with parents (both migrant and nonmigrant) to subjective child well-being, and the diversity of experiences and intimacies among children in the two study countries.", "title": "" }, { "docid": "db0b55cd4064799b9d7c52c6f3da6aac", "text": "Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-toend to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.", "title": "" }, { "docid": "4c21f108d05132ce00fe6d028c17c7ab", "text": "In this work, a new predictive phase-locked loop (PLL) for encoderless control of a permanent-magnet synchronous generator (PMSG) in a variable-speed wind energy conversion system (WECS) is presented. The idea of the predictive PLL is derived from the direct-model predictive control (DMPC) principle. The predictive PLL uses a limited (discretized) number of rotor-angles for predicting/estimating the back-electromotive-force (BEMF) of the PMSG. subsequently, that predicted angle, which optimizes a pre-defined quality function, is chosen to become the best rotor-angle/position. Accordingly, the fixed gain proportional integral (FGPI) regulator that is normally used in PLLs is eliminated. 
The performance of the predictive PLL is validated experimentally and compared with that of the traditional one under various operating scenarios and under variations of the PMSG parameters.", "title": "" } ]
scidocsrr
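The predictive-PLL passage in the record above (docid 4c21f108…) describes evaluating a discretized set of candidate rotor angles, predicting the back-EMF each angle implies, and keeping the candidate that optimizes a quality function, in place of a PI regulator. The snippet below sketches that selection loop only; the back-EMF model, the "measured" values, and every name in it are assumptions rather than the paper's implementation.

```python
# Minimal numerical sketch of the candidate-angle selection loop in a
# predictive PLL: predict the back-EMF implied by each discretized rotor
# angle and keep the angle that minimizes a quality (cost) function.
# The back-EMF model, the "measured" values, and all names are assumptions.
import numpy as np

psi_pm, omega = 0.1, 2 * np.pi * 50          # PM flux linkage [Wb], electrical speed [rad/s]
e_meas = np.array([-4.8, 30.9])              # toy measured alpha/beta back-EMF [V]

candidates = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)  # discretized rotor angles

def predicted_bemf(theta):
    # Standard PMSG back-EMF in the stationary alpha/beta frame.
    return omega * psi_pm * np.array([-np.sin(theta), np.cos(theta)])

costs = [np.sum((predicted_bemf(th) - e_meas) ** 2) for th in candidates]
theta_best = candidates[int(np.argmin(costs))]
print(f"estimated rotor angle: {np.degrees(theta_best):.1f} deg")
```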
c0159657811c724b694af1cb60a2c215
How to increase and sustain positive emotion : The effects of expressing gratitude and visualizing best possible selves
[ { "docid": "c03265e4a7d7cc14e6799c358a4af95a", "text": "Three studies considered the consequences of writing, talking, and thinking about significant events. In Studies 1 and 2, students wrote, talked into a tape recorder, or thought privately about their worst (N = 96) or happiest experience (N = 111) for 15 min each during 3 consecutive days. In Study 3 (N = 112), students wrote or thought about their happiest day; half systematically analyzed, and half repetitively replayed this day. Well-being and health measures were administered before each study's manipulation and 4 weeks after. As predicted, in Study 1, participants who processed a negative experience through writing or talking reported improved life satisfaction and enhanced mental and physical health relative to those who thought about it. The reverse effect for life satisfaction was observed in Study 2, which focused on positive experiences. Study 3 examined possible mechanisms underlying these effects. Students who wrote about their happiest moments--especially when analyzing them--experienced reduced well-being and physical health relative to those who replayed these moments. Results are discussed in light of current understanding of the effects of processing life events.", "title": "" }, { "docid": "f515695b3d404d29a12a5e8e58a91fc0", "text": "One area of positive psychology analyzes subjective well-being (SWB), people's cognitive and affective evaluations of their lives. Progress has been made in understanding the components of SWB, the importance of adaptation and goals to feelings of well-being, the temperament underpinnings of SWB, and the cultural influences on well-being. Representative selection of respondents, naturalistic experience sampling measures, and other methodological refinements are now used to study SWB and could be used to produce national indicators of happiness.", "title": "" } ]
[ { "docid": "88804f285f4d608b81a1cd741dbf2b7e", "text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.", "title": "" }, { "docid": "9256277615e0016992d007b29a2bcf21", "text": "Three experiments explored how words are learned from hearing them across contexts. Adults watched 40-s videotaped vignettes of parents uttering target words (in sentences) to their infants. Videos were muted except for a beep or nonsense word inserted where each \"mystery word\" was uttered. Participants were to identify the word. Exp. 1 demonstrated that most (90%) of these natural learning instances are quite uninformative, whereas a small minority (7%) are highly informative, as indexed by participants' identification accuracy. Preschoolers showed similar information sensitivity in a shorter experimental version. Two further experiments explored how cross-situational information helps, by manipulating the serial ordering of highly informative vignettes in five contexts. Response patterns revealed a learning procedure in which only a single meaning is hypothesized and retained across learning instances, unless disconfirmed. Neither alternative hypothesized meanings nor details of past learning situations were retained. These findings challenge current models of cross-situational learning which assert that multiple meaning hypotheses are stored and cross-tabulated via statistical procedures. Learners appear to use a one-trial \"fast-mapping\" procedure, even under conditions of referential uncertainty.", "title": "" }, { "docid": "7c27bfa849ba0bd49f9ddaec9beb19b5", "text": "Very High Spatial Resolution (VHSR) large-scale SAR image databases are still an unresolved issue in the Remote Sensing field. In this work, we propose such a dataset and use it to explore patch-based classification in urban and periurban areas, considering 7 distinct semantic classes. In this context, we investigate the accuracy of large CNN classification models and pre-trained networks for SAR imaging systems. 
Furthermore, we propose a Generative Adversarial Network (GAN) for SAR image generation and test, whether the synthetic data can actually improve classification accuracy.", "title": "" }, { "docid": "205a38ac9f2df57a33481d36576e7d54", "text": "Business process improvement initiatives typically employ various process analysis techniques, including evidence-based analysis techniques such as process mining, to identify new ways to streamline current business processes. While plenty of process mining techniques have been proposed to extract insights about the way in which activities within processes are conducted, techniques to understand resource behaviour are limited. At the same time, an understanding of resources behaviour is critical to enable intelligent and effective resource management an important factor which can significantly impact overall process performance. The presence of detailed records kept by today’s organisations, including data about who, how, what, and when various activities were carried out by resources, open up the possibility for real behaviours of resources to be studied. This paper proposes an approach to analyse one aspect of resource behaviour: the manner in which a resource prioritises his/her work. The proposed approach has been formalised, implemented, and evaluated using a number of synthetic and real datasets. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c625221e79bdc508c7c772f5be0458a1", "text": "Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning contextbased word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having an opposite sentiment polarity (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model is based on adjusting the vector representations of words such that they can be closer to both semantically and sentimentally similar words and further away from sentimentally dissimilar words. Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on Stanford Sentiment Treebank (SST).", "title": "" }, { "docid": "1e176f66a29b6bd3dfce649da1a4db9d", "text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. 
We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.", "title": "" }, { "docid": "e3bb490de9489a0c02f023d25f0a94d7", "text": "During the past two decades, self-efficacy has emerged as a highly effective predictor of students' motivation and learning. As a performance-based measure of perceived capability, self-efficacy differs conceptually and psychometrically from related motivational constructs, such as outcome expectations, self-concept, or locus of control. Researchers have succeeded in verifying its discriminant validity as well as convergent validity in predicting common motivational outcomes, such as students' activity choices, effort, persistence, and emotional reactions. Self-efficacy beliefs have been found to be sensitive to subtle changes in students' performance context, to interact with self-regulated learning processes, and to mediate students' academic achievement. Copyright 2000 Academic Press.", "title": "" }, { "docid": "bef64076bf62d9e8fbb6fbaf5534fdc6", "text": "This paper presents an application of PageRank, a random-walk model originally devised for ranking Web search results, to ranking WordNet synsets in terms of how strongly they possess a given semantic property. The semantic properties we use for exemplifying the approach are positivity and negativity, two properties of central importance in sentiment analysis. The idea derives from the observation that WordNet may be seen as a graph in which synsets are connected through the binary relation “a term belonging to synset sk occurs in the gloss of synset si”, and on the hypothesis that this relation may be viewed as a transmitter of such semantic properties. The data for this relation can be obtained from eXtended WordNet, a publicly available sensedisambiguated version of WordNet. We argue that this relation is structurally akin to the relation between hyperlinked Web pages, and thus lends itself to PageRank analysis. We report experimental results supporting our intuitions.", "title": "" }, { "docid": "574838d3fecf8e8dfc4254b41d446ad2", "text": "This paper proposes a new procedure to detect Glottal Closure and Opening Instants (GCIs and GOIs) directly from speech waveforms. The procedure is divided into two successive steps. First a mean-based signal is computed, and intervals where speech events are expected to occur are extracted from it. Secondly, at each interval a precise position of the speech event is assigned by locating a discontinuity in the Linear Prediction residual. The proposed method is compared to the DYPSA algorithm on the CMU ARCTIC database. A significant improvement as well as a better noise robustness are reported. Besides, results of GOI identification accuracy are promising for the glottal source characterization.", "title": "" }, { "docid": "cee0d7bac437a3a98fa7aba31969341b", "text": "Throughout history, the educational process used different educational technologies which did not significantly alter the manner of learning in the classroom. 
By implementing e-learning technology to the educational process, new and completely different innovative learning scenarios are made possible, including more active student involvement outside the traditional classroom. The quality of the realization of the educational objective in any learning environment depends primarily on the teacher who creates the educational process, mentors and acts as a moderator in the communication within the educational process, but also relies on the student who acquires the educational content. The traditional classroom learning and e-learning environment enable different manners of adopting educational content, and this paper reveals their key characteristics with the purpose of better use of e-learning technology in the educational process.", "title": "" }, { "docid": "e0c87b957faf9c14ce96ed09f968e8ee", "text": "It is well-known that the power factor of Vernier machines is small compared to permanent magnet machines. However, the power factor equations already derived show a huge deviation to the finite-element analysis (FEA) when used for Vernier machines with concentrated windings. Therefore, this paper develops an analytic model to calculate the power factor of Vernier machines with concentrated windings and different numbers of flux modulating poles (FMPs) and stator slots. The established model bases on the winding function theory in combination with a magnetic equivalent circuit. Consequently, equations for the q-inductance and for the no-load back-EMF of the machine are derived, thus allowing the calculation of the power factor. Thereby, the model considers stator leakage effects, as they are crucial for a good power factor estimation. Comparing the results of the Vernier machine to those of a pm machine explains the decreased power factor of Vernier machines. In addition, a FEA confirms the results of the derived model.", "title": "" }, { "docid": "1d724b07c232098e2a5e5af2bb1e7c83", "text": "[2] Brown SJ, McLean WH. One remarkable molecule: filaggrin. J Invest Dermatol 2012;132:751–62. [3] Sandilands A, Terron-Kwiatkowski A, Hull PR, O’Regan GM, Clayton TH, Watson RM, et al. Comprehensive analysis of the gene encoding filaggrin uncovers prevalent and rare mutations in ichthyosis vulgaris and atopic eczema. Nat Genet 2007;39:650–4. [4] Margolis DJ, Apter AJ, Gupta J, Hoffstad O, Papadopoulos M, Campbell LE, et al. The persistence of atopic dermatitis and Filaggrin mutations in a US longitudinal cohort. J Allergy Clin Immunol 2012;130(4):912–7. [5] Smith FJ, Irvine AD, Terron-Kwiatkowski A, Sandilands A, Campbell LE, Zhao Y, et al. Loss-of-function mutations in the gene encoding filaggrin cause ichthyosis vulgaris. Nat Genet 2006;38:337–42. [6] Paternoster L, Standl M, Chen CM, Ramasamy A, Bonnelykke K, Duijts L, et al. Meta-analysis of genome-wide association studies identifies three new risk Table 1 Reliability and validity comparisons for FLG null mutations as assayed by TaqMan and beadchip methods.", "title": "" }, { "docid": "7843fb4bbf2e94a30c18b359076899ab", "text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. 
In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.", "title": "" }, { "docid": "47199e959f3b10c6fa6b4b8c68434b94", "text": "The everyday use of smartphones with high quality built-in cameras has lead to an increase in museum visitors' use of these devices to document and share their museum experiences. In this paper, we investigate how one particular photo sharing application, Instagram, is used to communicate visitors' experiences while visiting a museum of natural history. Based on an analysis of 222 instagrams created in the museum, as well as 14 interviews with the visitors who created them, we unpack the compositional resources and concerns contributing to the creation of instagrams in this particular context. By re-categorizing and re-configuring the museum environment, instagrammers work to construct their own narratives from their visits. These findings are then used to discuss what emerging multimedia practices imply for the visitors' engagement with and documentation of museum exhibits. Drawing upon these practices, we discuss the connection between online social media dialogue and the museum site.", "title": "" }, { "docid": "4dc38ae50a2c806321020de4a140ed5f", "text": "Transcranial direct current stimulation (tDCS) is a promising technology to enhance cognitive and physical performance. One of the major areas of interest is the enhancement of memory function in healthy individuals. The early arrival of tDCS on the market for lifestyle uses and cognitive enhancement purposes lead to the voicing of some important ethical concerns, especially because, to date, there are no official guidelines or evaluation procedures to tackle these issues. The aim of this article is to review ethical issues related to uses of tDCS for memory enhancement found in the ethics and neuroscience literature and to evaluate how realistic and scientifically well-founded these concerns are? In order to evaluate how plausible or speculative each issue is, we applied the methodological framework described by Racine et al. (2014) for \"informed and reflective\" speculation in bioethics. This framework could be succinctly presented as requiring: (1) the explicit acknowledgment of factual assumptions and identification of the value attributed to them; (2) the validation of these assumptions with interdisciplinary literature; and (3) the adoption of a broad perspective to support more comprehensive reflection on normative issues. We identified four major considerations associated with the development of tDCS for memory enhancement: safety, autonomy, justice and authenticity. In order to assess the seriousness and likelihood of harm related to each of these concerns, we analyzed the assumptions underlying the ethical issues, and the level of evidence for each of them. We identified seven distinct assumptions: prevalence, social acceptance, efficacy, ideological stance (bioconservative vs. 
libertarian), potential for misuse, long term side effects, and the delivery of complete and clear information. We conclude that ethical discussion about memory enhancement via tDCS sometimes involves undue speculation, and closer attention to scientific and social facts would bring a more nuanced analysis. At this time, the most realistic concerns are related to safety and violation of users' autonomy by a breach of informed consent, as potential immediate and long-term health risks to private users remain unknown or not well defined. Clear and complete information about these risks must be provided to research participants and consumers of tDCS products or related services. Broader public education initiatives and warnings would also be worthwhile to reach those who are constructing their own tDCS devices.", "title": "" }, { "docid": "65e320e250cbeb8942bf00f335be4cbd", "text": "In this paper, we propose a deep progressive reinforcement learning (DPRL) method for action recognition in skeleton-based videos, which aims to distil the most informative frames and discard ambiguous frames in sequences for recognizing actions. Since the choices of selecting representative frames are multitudinous for each video, we model the frame selection as a progressive process through deep reinforcement learning, during which we progressively adjust the chosen frames by taking two important factors into account: (1) the quality of the selected frames and (2) the relationship between the selected frames to the whole video. Moreover, considering the topology of human body inherently lies in a graph-based structure, where the vertices and edges represent the hinged joints and rigid bones respectively, we employ the graph-based convolutional neural network to capture the dependency between the joints for action recognition. Our approach achieves very competitive performance on three widely used benchmarks.", "title": "" }, { "docid": "bbf987eef74d76cf2916ae3080a2b174", "text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.", "title": "" }, { "docid": "81190a4c576f86444a95e75654bddf29", "text": "Enforcing a variety of security measures (such as intrusion detection systems, and so on) can provide a certain level of protection to computer networks. However, such security practices often fall short in face of zero-day attacks. Due to the information asymmetry between attackers and defenders, detecting zero-day attacks remains a challenge. Instead of targeting individual zero-day exploits, revealing them on an attack path is a substantially more feasible strategy. Such attack paths that go through one or more zero-day exploits are called zero-day attack paths. 
In this paper, we propose a probabilistic approach and implement a prototype system ZePro for zero-day attack path identification. In our approach, a zero-day attack path is essentially a graph. To capture the zero-day attack, a dependency graph named object instance graph is first built as a supergraph by analyzing system calls. To further reveal the zero-day attack paths hidden in the supergraph, our system builds a Bayesian network based upon the instance graph. By taking intrusion evidence as input, the Bayesian network is able to compute the probabilities of object instances being infected. Connecting the high-probability-instances through dependency relations forms a path, which is the zero-day attack path. The experiment results demonstrate the effectiveness of ZePro for zero-day attack path identification.", "title": "" }, { "docid": "4b90fefa981e091ac6a5d2fd83e98b66", "text": "This paper explores an analysis-aware data cleaning architecture for a large class of SPJ SQL queries. In particular, we propose QuERy, a novel framework for integrating entity resolution (ER) with query processing. The aim of QuERy is to correctly and efficiently answer complex queries issued on top of dirty data. The comprehensive empirical evaluation of the proposed solution demonstrates its significant advantage in terms of efficiency over the traditional techniques for the given problem settings.", "title": "" }, { "docid": "3630c575bf7b5250930c7c54d8a1c6d0", "text": "The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine.", "title": "" } ]
scidocsrr
8627e2833f43092297a911400c8ece69
Video-based Framework for Safer and Smarter Computer Aided Surgery
[ { "docid": "3e66d3e2674bdaa00787259ac99c3f68", "text": "Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. DempsterShafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.", "title": "" } ]
[ { "docid": "d09b4b59c30925bae0983c7e56c3386d", "text": "We describe a system that automatically extracts 3D geometry of an indoor scene from a single 2D panorama. Our system recovers the spatial layout by finding the floor, walls, and ceiling; it also recovers shapes of typical indoor objects such as furniture. Using sampled perspective sub-views, we extract geometric cues (lines, vanishing points, orientation map, and surface normals) and semantic cues (saliency and object detection information). These cues are used for ground plane estimation and occlusion reasoning. The global spatial layout is inferred through a constraint graph on line segments and planar superpixels. The recovered layout is then used to guide shape estimation of the remaining objects using their normal information. Experiments on synthetic and real datasets show that our approach is state-of-the-art in both accuracy and efficiency. Our system can handle cluttered scenes with complex geometry that are challenging to existing techniques.", "title": "" }, { "docid": "d4c8acbbee72b8a9e880e2bce6e2153a", "text": "This paper presents a simple linear operator that accurately estimates the position and parameters of ellipse features. Based on the dual conic model, the operator avoids the intermediate stage of precisely extracting individual edge points by exploiting directly the raw gradient information in the neighborhood of an ellipse's boundary. Moreover, under the dual representation, the dual conic can easily be constrained to a dual ellipse when minimizing the algebraic distance. The new operator is assessed and compared to other estimation approaches in simulation as well as in real situation experiments and shows better accuracy than the best approaches, including those limited to the center position.", "title": "" }, { "docid": "b19630c809608601948a7f16910396f7", "text": "This paper presents a novel, smart and portable active knee rehabilitation orthotic device (AKROD) designed to train stroke patients to correct knee hyperextension during stance and stiff-legged gait (defined as reduced knee flexion during swing). The knee brace provides variable damping controlled in ways that foster motor recovery in stroke patients. A resistive, variable damper, electro-rheological fluid (ERF) based component is used to facilitate knee flexion during stance by providing resistance to knee buckling. Furthermore, the knee brace is used to assist in knee control during swing, i.e. to allow patients to achieve adequate knee flexion for toe clearance and adequate knee extension in preparation to heel strike. The detailed design of AKROD, the first prototype built, closed loop control results and initial human testing are presented here", "title": "" }, { "docid": "a2f3b158f1ec7e6ecb68f5ddfeaf0502", "text": "Facial landmark detection of face alignment has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multitask learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. 
Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [29]. In this technical report, we extend the method presented in our ECCV 2014 [39] paper to handle more landmark points (68 points instead of 5 major facial points) without either redesigning the deep model or involving significant increase in run time cost. This is made possible by transferring the learned 5-point model to the desired facial landmark configuration, through model fine-tuning with dense landmark annotations. Our new model achieves the state-of-the-art result on the 300-W benchmark dataset (mean error of 9.15% on the challenging IBUG subset).", "title": "" }, { "docid": "ef1d28df2575c2c844ca2fa109893d92", "text": "Measurement of the quantum-mechanical phase in quantum matter provides the most direct manifestation of the underlying abstract physics. We used resonant x-ray scattering to probe the relative phases of constituent atomic orbitals in an electronic wave function, which uncovers the unconventional Mott insulating state induced by relativistic spin-orbit coupling in the layered 5d transition metal oxide Sr2IrO4. A selection rule based on intra-atomic interference effects establishes a complex spin-orbital state represented by an effective total angular momentum = 1/2 quantum number, the phase of which can lead to a quantum topological state of matter.", "title": "" }, { "docid": "758310a8bcfcdec01b11889617f5a2c7", "text": "1 †This paper is an extended version of the ICSCA 2017 paper “Reference scope identification for citances by classification with text similarity measures” [55]. This work is supported by the Ministry of Science and Technology (MOST), Taiwan (Grant number: MOST 104-2221-E-178-001). *Corresponding author. Tel: +886 4 23226940728; fax: +886 4 23222621. On Identifying Cited Texts for Citances and Classifying Their Discourse Facets by Classification Techniques", "title": "" }, { "docid": "49b7fa9ad8912c23a7e9e725307cf69c", "text": "In recent years, with the development of social networks, sentiment analysis has become one of the most important research topics in the field of natural language processing. The deep neural network model combining attention mechanism has achieved remarkable success in the task of target-based sentiment analysis. In current research, however, the attention mechanism is more combined with LSTM networks, such neural network- based architectures generally rely on complex computation and only focus on the single target, thus it is difficult to effectively distinguish the different polarities of variant targets in the same sentence. To address this problem, we propose a deep neural network model combining convolutional neural network and regional long short-term memory (CNN-RLSTM) for the task of target-based sentiment analysis. The approach can reduce the training time of neural network model through a regional LSTM. At the same time, the CNN-RLSTM uses a sentence-level CNN to extract sentiment features of the whole sentence, and controls the transmission of information through different weight matrices, which can effectively infer the sentiment polarities of different targets in the same sentence. 
Finally, experimental results on multi-domain datasets of two languages from SemEval2016 and auto data show that, our approach yields better performance than SVM and several other neural network models.", "title": "" }, { "docid": "6d41ec322f71c32195119807f35fde53", "text": "Input current distortion in the vicinity of input voltage zero crossings of boost single-phase power factor corrected (PFC) ac-dc converters is studied in this paper. Previously known causes for the zero-crossing distortion are reviewed and are shown to be inadequate in explaining the observed input current distortion, especially under high ac line frequencies. A simple linear model is then presented which reveals two previously unknown causes for zero-crossing distortion, namely, the leading phase of the input current and the lack of critical damping in the current loop. Theoretical and practical limitations in reducing the phase lead and increasing the damping factor are discussed. A simple phase compensation technique to reduce the zero-crossing distortion is also presented. Numerical simulation and experimental results are presented to validate the theory.", "title": "" }, { "docid": "84195c27330dad460b00494ead1654c8", "text": "We present a unified framework for the computational implementation of syntactic, semantic, pragmatic and even \"stylistic\" constraints on anaphora. We build on our BUILDRS implementation of Discourse Representation (DR) Theory and Lexical Functional Grammar (LFG) discussed in Wada & Asher (1986). We develop and argue for a semantically based processing model for anaphora resolution that exploits a number of desirable features: (1) the partial semantics provided by the discourse representation structures (DRSs) of DR theory, (2) the use of syntactic and lexical features to filter out unacceptable potential anaphoric antecedents from the set of logically possible antecedents determined by the logical structure of the DRS, (3) the use of pragmatic or discourse constraints, noted by those working on focus, to impose a salience ordering on the set of grammatically acceptable potential antecedents. Only where there is a marked difference in the degree of salience among the possible antecedents does the salience ranking allow us to make predictions on preferred readings. In cases where the difference is extreme, we predict the discourse to be infelicitous if, because of other constraints, one of the markedly less salient antecedents must be linked with the pronoun. We also briefly consider the applications of our processing model to other definite noun phrases besides anaphoric pronouns.", "title": "" }, { "docid": "9493fa9f3749088462c1af7b34d9cfc9", "text": "Computer vision assisted diagnostic systems are gaining popularity in different healthcare applications. This paper presents a video analysis and pattern recognition framework for the automatic grading of vertical suspension tests on infants during the Hammersmith Infant Neurological Examination (HINE). The proposed vision-guided pipeline applies a color-based skin region segmentation procedure followed by the localization of body parts before feature extraction and classification. After constrained localization of lower body parts, a stick-diagram representation is used for extracting novel features that correspond to the motion dynamic characteristics of the infant's leg movements during HINE. This set of pose features generated from such a representation includes knee angles and distances between knees and hills. 
Finally, a time-series representation of the feature vector is used to train a Hidden Markov Model (HMM) for classifying the grades of the HINE tests into three predefined categories. Experiments are carried out by testing the proposed framework on a large number of vertical suspension test videos recorded at a Neuro-development clinic. The automatic grading results obtained from the proposed method matches the scores of experts at an accuracy of 74%.", "title": "" }, { "docid": "05046c00903852a983bf194f4348799c", "text": "This paper describes a temperature sensor realized in a 65nm CMOS process with a batch-calibrated inaccuracy of ±0.5°C (3s) and a trimmed inaccuracy of ±0.2°C (3s) from −70°C to 125°C. This represents a 10-fold improvement in accuracy compared to other deep-submicron temperature sensors [1,2], and is comparable with that of state-of-the-art sensors implemented in larger-feature-size processes [3,4]. The sensor draws 8.3µA from a 1.2V supply and occupies an area of 0.1mm2, which is 45 times less than that of sensors with comparable accuracy [3,4]. These advances are enabled by the use of NPN transistors as sensing elements, the use of dynamic techniques i.e. correlated double sampling (CDS) and dynamic element matching (DEM), and a single room-temperature trim.", "title": "" }, { "docid": "563d5144c9e053bb4e3cf5a06b19f656", "text": "After introductory remarks on the definition of marketing, the evolution of library and information services (LIS) marketing is explained. The authors then describe how marketing was applied to LIS over the years. Marketing is also related to other concepts used in the management of LIS. Finally the role of professional associations in diffusing marketing theory is portrayed and the importance of education addressed. The entry ends with a reflection on the future of marketing for LIS.", "title": "" }, { "docid": "bd37aa47cf495c7ea327caf2247d28e4", "text": "The purpose of this study is to identify the negative effects of social network sites such as Facebook among Asia Pacific University scholars. The researcher, distributed 152 surveys to students of the chosen university to examine and study the negative effects. Electronic communication is emotionally gratifying but how do such technological distraction impact on academic performance? Because of social media platform’s widespread adoption by university students, there is an interest in how Facebook is related to academic performance. This paper measure frequency of use, participation in activities and time spent preparing for class, in order to know if Facebook affects the performance of students. Moreover, the impact of social network site on academic performance also raised another major concern which is health. Today social network sites are running the future and carrier of students. Social network sites were only an electronic connection between users, but unfortunately it has become an addiction for students. This paper examines the relationship between social network sites and health threat. Lastly, the paper provides a comprehensive analysis of the law and privacy of Facebook. It shows how Facebook users socialize on the site, while they are not aware or misunderstand the risk involved and how their privacy suffers as a result.", "title": "" }, { "docid": "b282d29318b44b56e5bfe07d28c00286", "text": "Word2vec (Mikolov et al., 2013b) has proven to be successful in natural language processing by capturing the semantic relationships between different words. 
Built on top of single-word embeddings, paragraph vectors (Le and Mikolov, 2014) find fixed-length representations for pieces of text with arbitrary lengths, such as documents, paragraphs, and sentences. In this work, we propose a novel interpretation for neural-network-based paragraph vectors by developing an unsupervised generative model whose maximum likelihood solution corresponds to traditional paragraph vectors. This probabilistic formulation allows us to go beyond point estimates of parameters and to perform Bayesian posterior inference. We find that the entropy of paragraph vectors decreases with the length of documents, and that information about posterior uncertainty improves performance in supervised learning tasks such as sentiment analysis and paraphrase detection.", "title": "" }, { "docid": "15102e561d9640ee39952e4ad62ef896", "text": "OBJECTIVE\nTo define the relative position of the maxilla and mandible in fetuses with trisomy 18 at 11 + 0 to 13 + 6 weeks of gestation.\n\n\nMETHODS\nA three-dimensional (3D) volume of the fetal head was obtained before karyotyping at 11 + 0 to 13 + 6 weeks of gestation in 36 fetuses subsequently found to have trisomy 18, and 200 chromosomally normal fetuses. The frontomaxillary facial (FMF) angle and the mandibulomaxillary facial (MMF) angle were measured in a mid-sagittal view of the fetal face.\n\n\nRESULTS\nIn the chromosomally normal group both the FMF and MMF angles decreased significantly with crown-rump length (CRL). In the trisomy 18 fetuses the FMF angle was significantly greater and the angle was above the 95(th) centile of the normal range in 21 (58.3%) cases. In contrast, in trisomy 18 fetuses the MMF angle was significantly smaller than that in normal fetuses and the angle was below the 5(th) centile of the normal range in 12 (33.3%) cases.\n\n\nCONCLUSIONS\nTrisomy 18 at 11 + 0 to 13 + 6 weeks of gestation is associated with both mid-facial hypoplasia and micrognathia or retrognathia that can be documented by measurement of the FMF angle and MMF angle, respectively.", "title": "" }, { "docid": "4e4bd38230dba0012227d8b40b01e867", "text": "In this paper, we present a travel guidance system W2Go (Where to Go), which can automatically recognize and rank the landmarks for travellers. In this system, a novel Automatic Landmark Ranking (ALR) method is proposed by utilizing the tag and geo-tag information of photos in Flickr and user knowledge from Yahoo Travel Guide. ALR selects the popular tourist attractions (landmarks) based on not only the subjective opinion of the travel editors as is currently done on sites like WikiTravel and Yahoo Travel Guide, but also the ranking derived from popularity among tourists. Our approach utilizes geo-tag information to locate the positions of the tag-indicated places, and computes the probability of a tag being a landmark/site name. For potential landmarks, impact factors are calculated from the frequency of tags, user numbers in Flickr, and user knowledge in Yahoo Travel Guide. These tags are then ranked based on the impact factors. Several representative views for popular landmarks are generated from the crawled images with geo-tags to describe and present them in context of information derived from several relevant reference sources. The experimental comparisons to the other systems are conducted on eight famous cities over the world. 
User-based evaluation demonstrates the effectiveness of the proposed ALR method and the W2Go system.", "title": "" }, { "docid": "4575b5c93aa86c150944597638402439", "text": "Multilayer networks are networks where edges exist in multiple layers that encode different types or sources of interactions. As one of the most important problems in network science, discovering the underlying community structure in multilayer networks has received an increasing amount of attention in recent years. One of the challenging issues is to develop effective community structure quality functions for characterizing the structural or functional properties of the expected community structure. Although several quality functions have been developed for evaluating the detected community structure, little has been explored about how to explicitly bring our knowledge of the desired community structure into such quality functions, in particular for the multilayer networks. To address this issue, we propose the multilayer edge mixture model (MEMM), which is positioned as a general framework that enables us to design a quality function that reflects our knowledge about the desired community structure. The proposed model is based on a mixture of the edges, and the weights reflect their role in the detection process. By decomposing a community structure quality function into the form of MEMM, it becomes clear which type of community structure will be discovered by such quality function. Similarly, after such decomposition we can also modify the weights of the edges to find the desired community structure. In this paper, we apply the quality functions modified with the knowledge of MEMM to different multilayer benchmark networks as well as real-world multilayer networks and the detection results confirm the feasibility of MEMM.", "title": "" }, { "docid": "a47d001dc8305885e42a44171c9a94b2", "text": "Community detection in complex network has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and relatively new proposed link clustering methods have inherent drawbacks to discover overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing node clustering technique. The network decomposition contributes to reducing the computation time and noise link elimination conduces to improving the quality of obtained communities. Besides, we employ node clustering technique rather than link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.", "title": "" }, { "docid": "558e6532f9a228c1ec41448a67214df2", "text": "We consider the problem of shape recovery for real world scenes, where a variety of global illumination (inter-reflections, subsurface scattering, etc.) and illumination defocus effects are present. 
These effects introduce systematic and often significant errors in the recovered shape. We introduce a structured light technique called Micro Phase Shifting, which overcomes these problems. The key idea is to project sinusoidal patterns with frequencies limited to a narrow, high-frequency band. These patterns produce a set of images over which global illumination and defocus effects remain constant for each point in the scene. This enables high quality reconstructions of scenes which have traditionally been considered hard, using only a small number of images. We also derive theoretical lower bounds on the number of input images needed for phase shifting and show that Micro PS achieves the bound.", "title": "" }, { "docid": "1b17fd5250b50a750931660ac0e130fe", "text": "MOS varactors are used extensively as tunable elements in the tank circuits of RF voltage-controlled oscillators (VCOs) based on submicrometer CMOS technologies. MOS varactor topologies include conventionalD=S=B connected, inversion-mode (I-MOS), and accumulation-mode (A-MOS) structures. When incorporated into the VCO tank circuit, the large-signal swing of the VCO output oscillation modulates the varactor capacitance in time, resulting in a VCO tuning curve that deviates from the dc tuning curve of the particular varactor structure. This paper presents a detailed analysis of this large-signal effect. Simulated results are compared to measurements for an example 2.5-GHz complementary LC VCO using I-MOS varactors implemented in 0.35m CMOS technology.", "title": "" } ]
scidocsrr
d3e0cc84199f9795bfe1f2001d87685e
Aromatase inhibitors versus tamoxifen in early breast cancer: patient-level meta-analysis of the randomised trials
[ { "docid": "f2b291fd6dacf53ed88168d7e1e4ecce", "text": "BACKGROUND\nAs trials of 5 years of tamoxifen in early breast cancer mature, the relevance of hormone receptor measurements (and other patient characteristics) to long-term outcome can be assessed increasingly reliably. We report updated meta-analyses of the trials of 5 years of adjuvant tamoxifen.\n\n\nMETHODS\nWe undertook a collaborative meta-analysis of individual patient data from 20 trials (n=21,457) in early breast cancer of about 5 years of tamoxifen versus no adjuvant tamoxifen, with about 80% compliance. Recurrence and death rate ratios (RRs) were from log-rank analyses by allocated treatment.\n\n\nFINDINGS\nIn oestrogen receptor (ER)-positive disease (n=10,645), allocation to about 5 years of tamoxifen substantially reduced recurrence rates throughout the first 10 years (RR 0·53 [SE 0·03] during years 0-4 and RR 0·68 [0·06] during years 5-9 [both 2p<0·00001]; but RR 0·97 [0·10] during years 10-14, suggesting no further gain or loss after year 10). Even in marginally ER-positive disease (10-19 fmol/mg cytosol protein) the recurrence reduction was substantial (RR 0·67 [0·08]). In ER-positive disease, the RR was approximately independent of progesterone receptor status (or level), age, nodal status, or use of chemotherapy. Breast cancer mortality was reduced by about a third throughout the first 15 years (RR 0·71 [0·05] during years 0-4, 0·66 [0·05] during years 5-9, and 0·68 [0·08] during years 10-14; p<0·0001 for extra mortality reduction during each separate time period). Overall non-breast-cancer mortality was little affected, despite small absolute increases in thromboembolic and uterine cancer mortality (both only in women older than 55 years), so all-cause mortality was substantially reduced. In ER-negative disease, tamoxifen had little or no effect on breast cancer recurrence or mortality.\n\n\nINTERPRETATION\n5 years of adjuvant tamoxifen safely reduces 15-year risks of breast cancer recurrence and death. ER status was the only recorded factor importantly predictive of the proportional reductions. Hence, the absolute risk reductions produced by tamoxifen depend on the absolute breast cancer risks (after any chemotherapy) without tamoxifen.\n\n\nFUNDING\nCancer Research UK, British Heart Foundation, and Medical Research Council.", "title": "" } ]
[ { "docid": "e6704cac805b39fe7f321f095a92ebf4", "text": "Crowd counting is a challenging task, mainly due to the severe occlusions among dense crowds. This paper aims to take a broader view to address crowd counting from the perspective of semantic modeling. In essence, crowd counting is a task of pedestrian semantic analysis involving three key factors: pedestrians, heads, and their context structure. The information of different body parts is an important cue to help us judge whether there exists a person at a certain position. Existing methods usually perform crowd counting from the perspective of directly modeling the visual properties of either the whole body or the heads only, without explicitly capturing the composite body-part semantic structure information that is crucial for crowd counting. In our approach, we first formulate the key factors of crowd counting as semantic scene models. Then, we convert the crowd counting problem into a multi-task learning problem, such that the semantic scene models are turned into different sub-tasks. Finally, the deep convolutional neural networks are used to learn the sub-tasks in a unified scheme. Our approach encodes the semantic nature of crowd counting and provides a novel solution in terms of pedestrian semantic analysis. In experiments, our approach outperforms the state-of-the-art methods on four benchmark crowd counting data sets. The semantic structure information is demonstrated to be an effective cue in scene of crowd counting.", "title": "" }, { "docid": "c61f68104b2d058acb0d16c89e4b1454", "text": "Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation on input examples, has improved the generalization performance of neural networks. In contrast to the biased individual inputs to enhance the generality, this paper introduces adversarial dropout, which is a minimal set of dropouts that maximize the divergence between 1) the training supervision and 2) the outputs from the network with the dropouts. The identified adversarial dropouts are used to automatically reconfigure the neural network in the training process, and we demonstrated that the simultaneous training on the original and the reconfigured network improves the generalization performance of supervised and semi-supervised learning tasks on MNIST, SVHN, and CIFAR-10. We analyzed the trained model to find the performance improvement reasons. We found that adversarial dropout increases the sparsity of neural networks more than the standard dropout. Finally, we also proved that adversarial dropout is a regularization term with a rank-valued hyper parameter that is different from a continuous-valued parameter to specify the strength of the regularization.", "title": "" }, { "docid": "ab47dbcafba637ae6e3b474642439bd3", "text": "Ear detection from a profile face image is an important step in many applications including biometric recognition. But accurate and rapid detection of the ear for real-time applications is a challenging task, particularly in the presence of occlusions. In this work, a cascaded AdaBoost based ear detection approach is proposed. In an experiment with a test set of 203 profile face images, all the ears were accurately detected by the proposed detector with a very low (5 x 10-6) false positive rate. It is also very fast and relatively robust to the presence of occlusions and degradation of the ear images (e.g. motion blur). 
The detection process is fully automatic and does not require any manual intervention.", "title": "" }, { "docid": "fef45863bc531960dbf2a7783995bfdb", "text": "The main goal of facial attribute recognition is to determine various attributes of human faces, e.g. facial expressions, shapes of mouth and nose, headwears, age and race, by extracting features from the images of human faces. Facial attribute recognition has a wide range of potential application, including security surveillance and social networking. The available approaches, however, fail to consider the correlations and heterogeneities between different attributes. This paper proposes that by utilizing these correlations properly, an improvement can be achieved on the recognition of different attributes. Therefore, we propose a facial attribute recognition approach based on the grouping of different facial attribute tasks and a multi-task CNN structure. Our approach can fully utilize the correlations between attributes, and achieve a satisfactory recognition result on a large number of attributes with limited amount of parameters. Several modifications to the traditional architecture have been tested in the paper, and experiments have been conducted to examine the effectiveness of our approach.", "title": "" }, { "docid": "2f9b8ee2f7578c7820eced92fb98c696", "text": "The Tic tac toe is very popular game having a 3 × 3 grid board and 2 players. A Special Symbol (X or O) is assigned to each player to indicate the slot is covered by the respective player. The winner of the game is the player who first cover a horizontal, vertical and diagonal row of the board having only player's own symbols. This paper presents the design model of Tic tac toe Game using Multi-Tape Turing Machine in which both player choose input randomly and result of the game is declared. The computational Model of Tic tac toe is used to describe it in a formal manner.", "title": "" }, { "docid": "3c203c55c925fb3f78506d46b8b453a8", "text": "In this paper, we provide combinatorial interpretations for some determinantal identities involving Fibonacci numbers. We use the method due to Lindström-Gessel-Viennot in which we count nonintersecting n-routes in carefully chosen digraphs in order to gain insight into the nature of some well-known determinantal identities while allowing room to generalize and discover new ones.", "title": "" }, { "docid": "5705022b0a08ca99d4419485f3c03eaa", "text": "In this paper, we propose a wireless sensor network paradigm for real-time forest fire detection. The wireless sensor network can detect and forecast forest fire more promptly than the traditional satellite-based detection approach. This paper mainly describes the data collecting and processing in wireless sensor networks for real-time forest fire detection. A neural network method is applied to in-network data processing. We evaluate the performance of our approach by simulations.", "title": "" }, { "docid": "673674dd11047747db79e5614daa4974", "text": "Distracted driving is one of the main causes of vehicle collisions in the United States. Passively monitoring a driver's activities constitutes the basis of an automobile safety system that can potentially reduce the number of accidents by estimating the driver's focus of attention. This paper proposes an inexpensive vision-based system to accurately detect Eyes Off the Road (EOR). The system has three main components: 1) robust facial feature tracking; 2) head pose and gaze estimation; and 3) 3-D geometric reasoning to detect EOR. 
From the video stream of a camera installed on the steering wheel column, our system tracks facial features from the driver's face. Using the tracked landmarks and a 3-D face model, the system computes head pose and gaze direction. The head pose estimation algorithm is robust to nonrigid face deformations due to changes in expressions. Finally, using a 3-D geometric analysis, the system reliably detects EOR.", "title": "" }, { "docid": "c281538d7aa7bd8727ce4718de82c7c8", "text": "More than 15 years after model predictive control (MPC) appeared in industry as an effective means to deal with multivariable constrained control problems, a theoretical basis for this technique has started to emerge. The issues of feasibility of the on-line optimization, stability and performance are largely understood for systems described by linear models. Much progress has been made on these issues for non-linear systems but for practical applications many questions remain, including the reliability and efficiency of the on-line computation scheme. To deal with model uncertainty ‘rigorously’ an involved dynamic programming problem must be solved. The approximation techniques proposed for this purpose are largely at a conceptual stage. Among the broader research needs the following areas are identified: multivariable system identification, performance monitoring and diagnostics, non-linear state estimation, and batch system control. Many practical problems like control objective prioritization and symptom-aided diagnosis can be integrated systematically and effectively into the MPC framework by expanding the problem formulation to include integer variables yielding a mixed-integer quadratic or linear program. Efficient techniques for solving these problems are becoming available. © 1999 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "fa2ba8897c9dcd087ea01de2caaed9e4", "text": "This paper aims to investigate the relationship between library anxiety and emotional intelligence of Bushehr University of Medical Sciences’ students and Persian Gulf University’s students in Bushehr municipality. In this descriptive study which is of correlation type, 700 students of Bushehr University of Medical Sciences and the Persian Gulf University selected through stratified random sampling. Required data has been collected using normalized Siberia Shrink’s emotional intelligence questionnaire and localized Bostick’s library anxiety scale. The results show that the rate of library anxiety among students is less than average (91.73%) except “mechanical factors”. There is not a significant difference in all factors of library anxiety except “interaction with librarian” between male and female. The findings also indicate that there is a negative significant relationship between library anxiety and emotional intelligence (r= -0.41). According to the results, it seems that by improving the emotional intelligence we can decrease the rate of library anxiety among students during their search in a library. Emotional intelligence can optimize academic library’s productivity.", "title": "" }, { "docid": "dcee61dad66f59b2450a3e154726d6b1", "text": "Mussels are marine organisms that have been mimicked due to their exceptional adhesive properties to all kind of surfaces, including rocks, under wet conditions. The proteins present on the mussel's foot contain 3,4-dihydroxy-l-alanine (DOPA), an amino acid from the catechol family that has been reported by their adhesive character. 
Therefore, we synthesized a mussel-inspired conjugated polymer, modifying the backbone of hyaluronic acid with dopamine by carbodiimide chemistry. Ultraviolet-visible (UV-vis) spectroscopy and nuclear magnetic resonance (NMR) techniques confirmed the success of this modification. Different techniques have been reported to produce two-dimensional (2D) or three-dimensional (3D) systems capable to support cells and tissue regeneration; among others, multilayer systems allow the construction of hierarchical structures from nano- to macroscales. In this study, the layer-by-layer (LbL) technique was used to produce freestanding multilayer membranes made uniquely of chitosan and dopamine-modified hyaluronic acid (HA-DN). The electrostatic interactions were found to be the main forces involved in the film construction. The surface morphology, chemistry, and mechanical properties of the freestanding membranes were characterized, confirming the enhancement of the adhesive properties in the presence of HA-DN. The MC3T3-E1 cell line was cultured on the surface of the membranes, demonstrating the potential of these freestanding multilayer systems to be used for bone tissue engineering.", "title": "" }, { "docid": "31dbf3fcd1a70ad7fb32fb6e69ef88e3", "text": "OBJECTIVE\nHealth care researchers have not taken full advantage of the potential to effectively convey meaning in their multivariate data through graphical presentation. The aim of this paper is to translate knowledge from the fields of analytical chemistry, toxicology, and marketing research to the field of medicine by introducing the radar plot, a useful graphical display method for multivariate data.\n\n\nSTUDY DESIGN AND SETTING\nDescriptive study based on literature review.\n\n\nRESULTS\nThe radar plotting technique is described, and examples are used to illustrate not only its programming language, but also the differences in tabular and bar chart approaches compared to radar-graphed data displays.\n\n\nCONCLUSION\nRadar graphing, a form of radial graphing, could have great utility in the presentation of health-related research, especially in situations in which there are large numbers of independent variables, possibly with different measurement scales. This technique has particular relevance for researchers who wish to illustrate the degree of multiple-group similarity/consensus, or group differences on multiple variables in a single graphical display.", "title": "" }, { "docid": "206263c06b0d41725aeec7844f3b3a01", "text": "Basic properties of the operational transconductance amplifier (OTA) are discussed. Applications of the OTA in voltage-controlled amplifiers, filters, and impedances are presented. A versatile family of voltage-controlled filter sections suitable for systematic design requirements is described. The total number of components used in these circuits is small, and the design equations and voltage-control characteristics are attractive. Limitations as well as practical considerations of OTA-based filters using commercially available bipolar OTAs are discussed. Applications of OTAs in continuous-time monolithic filters are considered.", "title": "" }, { "docid": "9b3a39ddeadd14ea5a50be8ac2057a26", "text": "0 7 4 0 7 4 5 9 / 0 0 / $ 1 0 . 0 0 © 2 0 0 0 I E E E J u l y / A u g u s t 2 0 0 0 I E E E S O F T W A R E 19 design, algorithm, code, or test—does indeed improve software quality and reduce time to market. 
Additionally, student and professional programmers consistently find pair programming more enjoyable than working alone. Yet most who have not tried and tested pair programming reject the idea as a redundant, wasteful use of programming resources: “Why would I put two people on a job that just one can do? I can’t afford to do that!” But we have found, as Larry Constantine wrote, that “Two programmers in tandem is not redundancy; it’s a direct route to greater efficiency and better quality.”1 Our supportive evidence comes from professional programmers and from advanced undergraduate students who participated in a structured experiment. The experimental results show that programming pairs develop better code faster with only a minimal increase in prerelease programmer hours. These results apply to all levels of programming skill from novice to expert.", "title": "" }, { "docid": "d76b7b25bce29cdac24015f8fa8ee5bb", "text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a <inline-formula> <tex-math notation=\"LaTeX\">$3\\times4$ </tex-math></inline-formula> MNZ unit cell. The measured results indicate that the magnitude of <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}$ </tex-math></inline-formula> is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.", "title": "" }, { "docid": "e30df718ca1981175e888755cce3ce90", "text": "Human identification at distance by analysis of gait patterns extracted from video has recently become very popular research in biometrics. This paper presents multi-projections based approach to extract gait patterns for human recognition. Binarized silhouette of a motion object is represented by 1-D signals which are the basic image features called the distance vectors. The distance vectors are differences between the bounding box and silhouette, and extracted using four projections to silhouette. Eigenspace transformation is applied to time-varying distance vectors and the statistical distance based supervised pattern classification is then performed in the lower-dimensional eigenspace for human identification. A fusion strategy developed is finally executed to produce final decision. Based on normalized correlation on the distance vectors, gait cycle estimation is also performed to extract the gait cycle. Experimental results on four databases demonstrate that the right person in top two matches 100% of the times for the cases where training and testing sets corresponds to the same walking styles, and in top three-four matches 100% of the times for training and testing sets corresponds to the different walking styles.", "title": "" }, { "docid": "c5eb252d17c2bec8ab168ca79ec11321", "text": "Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that personalization methods can propagate societal or systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms to combat bias and inequality. 
Algorithmically, bandit optimization has enjoyed great success in learning user preferences and personalizing content or feeds accordingly. We propose an algorithmic framework that allows for the possibility to control bias or discrimination in such bandit-based personalization. Our model allows for the specification of general fairness constraints on the sensitive types of the content that can be displayed to a user. The challenge, however, is to come up with a scalable and low regret algorithm for the constrained optimization problem that arises. Our main technical contribution is a provably fast and low-regret algorithm for the fairness-constrained bandit optimization problem. Our proofs crucially leverage the special structure of our problem. Experiments on synthetic and real-world data sets show that our algorithmic framework can control bias with only a minor loss to revenue. ∗A short version of this paper appeared in the FAT/ML 2017 workshop (https://arxiv.org/abs/1707.02260) 1 ar X iv :1 80 2. 08 67 4v 1 [ cs .L G ] 2 3 Fe b 20 18", "title": "" }, { "docid": "3db3308b3f98563390e8f21e565798b7", "text": "RDF question/answering (Q/A) allows users to ask questions in natural languages over a knowledge base represented by RDF. To answer a natural language question, the existing work takes a two-stage approach: question understanding and query evaluation. Their focus is on question understanding to deal with the disambiguation of the natural language phrases. The most common technique is the joint disambiguation, which has the exponential search space. In this paper, we propose a systematic framework to answer natural language questions over RDF repository (RDF Q/A) from a graph data-driven perspective. We propose a semantic query graph to model the query intention in the natural language question in a structural way, based on which, RDF Q/A is reduced to subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions at the time when matches of query are found. The cost of disambiguation is saved if there are no matching found. More specifically, we propose two different frameworks to build the semantic query graph, one is relation (edge)-first and the other one is node-first. We compare our method with some state-of-the-art RDF Q/A systems in the benchmark dataset. Extensive experiments confirm that our method not only improves the precision but also speeds up query performance greatly.", "title": "" }, { "docid": "23ff0b54dcef99754549275eb6714a9a", "text": "The HCI community has developed guidelines and recommendations for improving the usability system that are usually applied at the last stages of the software development process. On the other hand, the SE community has developed sound methods to elicit functional requirements in the early stages, but usability has been relegated to the last stages together with other nonfunctional requirements. Therefore, there are no methods of usability requirements elicitation to develop software within both communities. An example of this problem arises if we focus on the Model-Driven Development paradigm, where the methods and tools that are used to develop software do not support usability requirements elicitation. In order to study the existing publications that deal with usability requirements from the first steps of the software development process, this work presents a mapping study. 
Our aim is to compare usability requirements methods and to identify the strong points of each one.", "title": "" }, { "docid": "a6d550a64dc633e50ee2b21255344e7b", "text": "Sentiment classification is a much-researched field that identifies positive or negative emotions in a large number of texts. Most existing studies focus on document-based approaches and documents are represented as bag-of word. Therefore, this feature representation fails to obtain the relation or associative information between words and it can't distinguish different opinions of a sentiment word with different targets. In this paper, we present a dependency tree-based sentence-level sentiment classification approach. In contrast to a document, a sentence just contains little information and a small set of features which can be used for the sentiment classification. So we not only capture flat features (bag-of-word), but also extract structured features from the dependency tree of a sentence. We propose a method to add more information to the dependency tree and provide an algorithm to prune dependency tree to reduce the noisy, and then introduce a convolution tree kernel-based approach to the sentence-level sentiment classification. The experimental results show that our dependency tree-based approach achieved significant improvement, particularly for implicit sentiment classification.", "title": "" } ]
scidocsrr
797ec259cf5128e687eb9748f3e338f9
Chronic insomnia and its negative consequences for health and functioning of adolescents: a 12-month prospective study.
[ { "docid": "b6bf6c87040bc4996315fee62acb911b", "text": "The influence of the sleep patterns of 2,259 students, aged 11 to 14 years, on trajectories of depressive symptoms, self-esteem, and grades was longitudinally examined using latent growth cross-domain models. Consistent with previous research, sleep decreased over time. Students who obtained less sleep in sixth grade exhibited lower initial self-esteem and grades and higher initial levels of depressive symptoms. Similarly, students who obtained less sleep over time reported heightened levels of depressive symptoms and decreased self-esteem. Sex of the student played a strong role as a predictor of hours of sleep, self-esteem, and grades. This study underscores the role of sleep in predicting adolescents' psychosocial outcomes and highlights the importance of using idiographic methodologies in the study of developmental processes.", "title": "" } ]
[ { "docid": "e510140bfc93089e69cb762b968de5e9", "text": "Owing to the popularity of the PDF format and the continued exploitation of Adobe Reader, the detection of malicious PDFs remains a concern. All existing detection techniques rely on the PDF parser to a certain extent, while the complexity of the PDF format leaves an abundant space for parser confusion. To quantify the difference between these parsers and Adobe Reader, we create a reference JavaScript extractor by directly tapping into Adobe Reader at locations identified through a mostly automatic binary analysis technique. By comparing the output of this reference extractor against that of several opensource JavaScript extractors on a large data set obtained from VirusTotal, we are able to identify hundreds of samples which existing extractors fail to extract JavaScript from. By analyzing these samples we are able to identify several weaknesses in each of these extractors. Based on these lessons, we apply several obfuscations on a malicious PDF sample, which can successfully evade all the malware detectors tested. We call this evasion technique a PDF parser confusion attack. Lastly, we demonstrate that the reference JavaScript extractor improves the accuracy of existing JavaScript-based classifiers and how it can be used to mitigate these parser limitations in a real-world setting.", "title": "" }, { "docid": "e9838d3c33d19bdd20a001864a878757", "text": "FPGAs are increasingly popular as application-specific accelerators because they lead to a good balance between flexibility and energy efficiency, compared to CPUs and ASICs. However, the long routing time imposes a barrier on FPGA computing, which significantly hinders the design productivity. Existing attempts of parallelizing the FPGA routing either do not fully exploit the parallelism or suffer from an excessive quality loss. Massive parallelism using GPUs has the potential to solve this issue but faces non-trivial challenges.\n To cope with these challenges, this work presents Corolla, a GPU-accelerated FPGA routing method. Corolla enables applying the GPU-friendly shortest path algorithm in FPGA routing, leveraging the idea of problem size reduction by limiting the search in routing subgraphs. We maintain the convergence after problem size reduction using the dynamic expansion of the routing resource subgraphs. In addition, Corolla explores the fine-grained single-net parallelism and proposes a hybrid approach to combine the static and dynamic parallelism on GPU. To explore the coarse-grained multi-net parallelism, Corolla proposes an effective method to parallelize mutli-net routing while preserving the equivalent routing results as the original single-net routing. Experimental results show that Corolla achieves an average of 18.72x speedup on GPU with a tolerable loss in the routing quality and sustains a scalable speedup on large-scale routing graphs. To our knowledge, this is the first work to demonstrate the effectiveness of GPU-accelerated FPGA routing.", "title": "" }, { "docid": "77d11e0b66f3543fadf91d0de4c928c9", "text": "In the United States, the number of people over 65 will double between ow and 2030 to 69.4 million. Providing care for this increasing population becomes increasingly difficult as the cognitive and physical health of elders deteriorates. 
This survey article describes ome of the factors that contribute to the institutionalization of elders, and then presents some of the work done towards providing technological support for this vulnerable community.", "title": "" }, { "docid": "02855c493744435d868d669a6ddedd1c", "text": "Recurrent neural networks (RNNs), particularly long short-term memory (LSTM), have gained much attention in automatic speech recognition (ASR). Although some successful stories have been reported, training RNNs remains highly challenging, especially with limited training data. Recent research found that a well-trained model can be used as a teacher to train other child models, by using the predictions generated by the teacher model as supervision. This knowledge transfer learning has been employed to train simple neural nets with a complex one, so that the final performance can reach a level that is infeasible to obtain by regular training. In this paper, we employ the knowledge transfer learning approach to train RNNs (precisely LSTM) using a deep neural network (DNN) model as the teacher. This is different from most of the existing research on knowledge transfer learning, since the teacher (DNN) is assumed to be weaker than the child (RNN); however, our experiments on an ASR task showed that it works fairly well: without applying any tricks on the learning scheme, this approach can train RNNs successfully even with limited training data.", "title": "" }, { "docid": "c6160b8ad36bc4f297bfb1f6b04c79e0", "text": "Despite their incentive structure flaws, mining pools account for more than 95% of Bitcoin’s computation power. This paper introduces an attack against mining pools in which a malicious party pays pool members to withhold their solutions from their pool operator. We show that an adversary with a tiny amount of computing power and capital can execute this attack. Smart contracts enforce the malicious party’s payments, and therefore miners need neither trust the attacker’s intentions nor his ability to pay. Assuming pool members are rational, an adversary with a single mining ASIC can, in theory, destroy all big mining pools without losing any money (and even make some profit).", "title": "" }, { "docid": "62c6050db8e42b1de54f8d1d54fd861f", "text": "In this paper we present our approach of solving the PAN 2016 Author Profiling Task. It involves classifying users’ gender and age using social media posts. We used SVM classifiers and neural networks on TF-IDF and verbosity features. Results showed that SVM classifiers are better for English datasets and neural networks perform better for Dutch and Spanish datasets.", "title": "" }, { "docid": "3e9f54363d930c703dfe20941b2568b0", "text": "Organizations are looking to new graduate nurses to fill expected staffing shortages over the next decade. Creative and effective onboarding programs will determine the success or failure of these graduates as they transition from student to professional nurse. This longitudinal quantitative study with repeated measures used the Casey-Fink Graduate Nurse Experience Survey to investigate the effects of offering a prelicensure extern program and postlicensure residency program on new graduate nurses and organizational outcomes versus a residency program alone. 
Compared with the nurse residency program alone, the combination of extern program and nurse residency program improved neither the transition factors most important to new nurse graduates during their first year of practice nor a measure important to organizations, retention rates. The additional cost of providing an extern program should be closely evaluated when making financially responsible decisions.", "title": "" }, { "docid": "68971b7efc9663c37113749206b5382b", "text": "Trehalose 6-phosphate (Tre6P), the intermediate of trehalose biosynthesis, has a profound influence on plant metabolism, growth, and development. It has been proposed that Tre6P acts as a signal of sugar availability and is possibly specific for sucrose status. Short-term sugar-feeding experiments were carried out with carbon-starved Arabidopsis thaliana seedlings grown in axenic shaking liquid cultures. Tre6P increased when seedlings were exogenously supplied with sucrose, or with hexoses that can be metabolized to sucrose, such as glucose and fructose. Conditional correlation analysis and inhibitor experiments indicated that the hexose-induced increase in Tre6P was an indirect response dependent on conversion of the hexose sugars to sucrose. Tre6P content was affected by changes in nitrogen status, but this response was also attributable to parallel changes in sucrose. The sucrose-induced rise in Tre6P was unaffected by cordycepin but almost completely blocked by cycloheximide, indicating that de novo protein synthesis is necessary for the response. There was a strong correlation between Tre6P and sucrose even in lines that constitutively express heterologous trehalose-phosphate synthase or trehalose-phosphate phosphatase, although the Tre6P:sucrose ratio was shifted higher or lower, respectively. It is proposed that the Tre6P:sucrose ratio is a critical parameter for the plant and forms part of a homeostatic mechanism to maintain sucrose levels within a range that is appropriate for the cell type and developmental stage of the plant.", "title": "" }, { "docid": "555e3bbc504c7309981559a66c584097", "text": "The hippocampus has been implicated in the regulation of anxiety and memory processes. Nevertheless, the precise contribution of its ventral (VH) and dorsal (DH) division in these issues still remains a matter of debate. The Trial 1/2 protocol in the elevated plus-maze (EPM) is a suitable approach to assess features associated with anxiety and memory. Information about the spatial environment on initial (Trial 1) exploration leads to a subsequent increase in open-arm avoidance during retesting (Trial 2). The objective of the present study was to investigate whether transient VH or DH deactivation by lidocaine microinfusion would differently interfere with the performance of EPM-naive and EPM-experienced rats. Male Wistar rats were bilaterally-implanted with guide cannulas aimed at the VH or the DH. One-week after surgery, they received vehicle or lidocaine 2.0% in 1.0 microL (0.5 microL per side) at pre-Trial 1, post-Trial 1 or pre-Trial 2. There was an increase in open-arm exploration after the intra-VH lidocaine injection on Trial 1. Intra-DH pre-Trial 2 administration of lidocaine also reduced the open-arm avoidance. No significant changes were observed in enclosed-arm entries, an EPM index of general exploratory activity. The cautious exploration of potentially dangerous environment requires VH functional integrity, suggesting a specific role for this region in modulating anxiety-related behaviors. 
With regard to the DH, it may be preferentially involved in learning and memory since the acquired response of inhibitory avoidance was no longer observed when lidocaine was injected pre-Trial 2.", "title": "" }, { "docid": "4ec266df91a40330b704c4e10eacb820", "text": "Recently many cases of missing children between ages 14 and 17 years are reported. Parents always worry about the possibility of kidnapping of their children. This paper proposes an Android based solution to aid parents to track their children in real time. Nowadays, most mobile phones are equipped with location services capabilities allowing us to get the device’s geographic position in real time. The proposed solution takes the advantage of the location services provided by mobile phone since most of kids carry mobile phones. The mobile application use the GPS and SMS services found in Android mobile phones. It allows the parent to get their child’s location on a real time map. The system consists of two sides, child side and parent side. A parent’s device main duty is to send a request location SMS to the child’s device to get the location of the child. On the other hand, the child’s device main responsibility is to reply the GPS position to the parent’s device upon request. Keywords—Child Tracking System, Global Positioning System (GPS), SMS-based Mobile Application.", "title": "" }, { "docid": "065b0af0f1ed195ac90fa3ad041fa4c4", "text": "We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction with direct multi-touch interaction and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and careful design is necessary to validate their properties.", "title": "" }, { "docid": "16d417e6d2c75edbdf2adbed8ec8d072", "text": "Network middleboxes are difficult to manage and troubleshoot, due to their proprietary monolithic design. Moving towards Network Functions Virtualization (NFV), virtualized middlebox appliances can be more flexibly instantiated and dynamically chained, making troubleshooting even more difficult. To guarantee carrier-grade availability and minimize outages, operators need ways to automatically verify that the deployed network and middlebox configurations obey higher level network policies. In this paper, we first define and identify the key challenges for checking the correct forwarding behavior of Service Function Chains (SFC). We then design and develop a network diagnosis framework that aids network administrators in verifying the correctness of SFC policy enforcement. Our prototype - SFC-Checker can verify stateful service chains efficiently, by analyzing the switches' forwarding rules and the middleboxes' stateful forwarding behavior. 
Built on top of the network function models we proposed, we develop a diagnosis algorithm that is able to check the stateful forwarding behavior of a chain of network service functions.", "title": "" }, { "docid": "1a2fe54f7456c5e726f87a401a4628f3", "text": "Starting from a neurobiological standpoint, I will propose that our capacity to understand others as intentional agents, far from being exclusively dependent upon mentalistic/linguistic abilities, be deeply grounded in the relational nature of our interactions with the world. According to this hypothesis, an implicit, prereflexive form of understanding of other individuals is based on the strong sense of identity binding us to them. We share with our conspecifics a multiplicity of states that include actions, sensations and emotions. A new conceptual tool able to capture the richness of the experiences we share with others will be introduced: the shared manifold of intersubjectivity. I will posit that it is through this shared manifold that it is possible for us to recognize other human beings as similar to us. It is just because of this shared manifold that intersubjective communication and ascription of intentionality become possible. It will be argued that the same neural structures that are involved in processing and controlling executed actions, felt sensations and emotions are also active when the same actions, sensations and emotions are to be detected in others. It therefore appears that a whole range of different \"mirror matching mechanisms\" may be present in our brain. This matching mechanism, constituted by mirror neurons originally discovered and described in the domain of action, could well be a basic organizational feature of our brain, enabling our rich and diversified intersubjective experiences. This perspective is in a position to offer a global approach to the understanding of the vulnerability to major psychoses such as schizophrenia.", "title": "" }, { "docid": "7c9d35fb9cec2affbe451aed78541cef", "text": "Dental caries, also known as dental cavities, is the most widespread pathology in the world. Up to a very recent period, almost all individuals had the experience of this pathology at least once in their life. Early detection of dental caries can help in a sharp decrease in the dental disease rate. Thanks to the growing accessibility to medical imaging, the clinical applications now have better impact on patient care. Recently, there has been interest in the application of machine learning strategies for classification and analysis of image data. In this paper, we propose a new method to detect and identify dental caries using X-ray images as dataset and deep neural network as technique. This technique is based on stacked sparse auto-encoder and a softmax classifier. Those techniques, sparse auto-encoder and softmax, are used to train a deep neural network. The novelty here is to apply deep neural network to diagnosis of dental caries. This approach was tested on a real dataset and has demonstrated a good performance of detection. Keywords-dental X-ray; classification; Deep Neural Networks; Stacked sparse auto-encoder; Softmax.", "title": "" }, { "docid": "e98aefff2ab776efcc13c1d9534ec9fb", "text": "Many software providers operate crash reporting services to automatically collect crashes from millions of customers and file bug reports. 
Precisely triaging crashes is necessary and important for software providers because the millions of crashes that may be reported every day are critical in identifying high impact bugs. However, the triaging accuracy of existing systems is limited, as they rely only on the syntactic information of the stack trace at the moment of a crash without analyzing program semantics.\n In this paper, we present RETracer, the first system to triage software crashes based on program semantics reconstructed from memory dumps. RETracer was designed to meet the requirements of large-scale crash reporting services. RETracer performs binary-level backward taint analysis without a recorded execution trace to understand how functions on the stack contribute to the crash. The main challenge is that the machine state at an earlier time cannot be recovered completely from a memory dump, since most instructions are information destroying.\n We have implemented RETracer for x86 and x86-64 native code, and compared it with the existing crash triaging tool used by Microsoft. We found that RETracer eliminates two thirds of triage errors based on a manual analysis of 140 bugs fixed in Microsoft Windows and Office. RETracer has been deployed as the main crash triaging system on Microsoft's crash reporting service.", "title": "" }, { "docid": "ed3a859e2cea465a6d34c556fec860d9", "text": "Multi-word expressions constitute a significant portion of the lexicon of every natural language, and handling them correctly is mandatory for various NLP applications. Yet such entities are notoriously hard to define, and are consequently missing from standard lexicons and dictionaries. Multi-word expressions exhibit idiosyncratic behavior on various levels: orthographic, morphological, syntactic and semantic. In this work we take advantage of the morphological and syntactic idiosyncrasy of Hebrew noun compounds and employ it to extract such expressions from text corpora. We show that relying on linguistic information dramatically improves the accuracy of compound extraction, reducing over one third of the errors compared with the best baseline.", "title": "" }, { "docid": "c80dbfc2e1f676a7ffe4a6a4f7460d36", "text": "Coarse-grained semantic categories such as supersenses have proven useful for a range of downstream tasks such as question answering or machine translation. To date, no effort has been put into integrating the supersenses into distributional word representations. We present a novel joint embedding model of words and supersenses, providing insights into the relationship between words and supersenses in the same vector space. Using these embeddings in a deep neural network model, we demonstrate that the supersense enrichment leads to a significant improvement in a range of downstream classification tasks.", "title": "" }, { "docid": "e1e1005788a0133025f9f3951b9a5372", "text": "Despite the recent success of neural networks in tasks involving natural language understanding (NLU) there has only been limited progress in some of the fundamental challenges of NLU, such as the disambiguation of the meaning and function of words in context. This work approaches this problem by incorporating contextual information into word representations prior to processing the task at hand. To this end we propose a general-purpose reading architecture that is employed prior to a task-specific NLU model. 
It is responsible for refining context-agnostic word representations with contextual information and lends itself to the introduction of additional, context-relevant information from external knowledge sources. We demonstrate that previously non-competitive models benefit dramatically from employing contextual representations, closing the gap between general-purpose reading architectures and the state-of-the-art performance obtained with fine-tuned, task-specific architectures. Apart from our empirical results we present a comprehensive analysis of the computed representations which gives insights into the kind of information added during the refinement process.", "title": "" }, { "docid": "a3cb839b4299a50c475b2bb1b608ee91", "text": "In this work, we present an event detection method in Twitter based on clustering of hashtags and introduce an enhancement technique by using the semantic similarities between the hashtags. To this aim, we devised two methods for tweet vector generation and evaluated their effect on clustering and event detection performance in comparison to word-based vector generation methods. By analyzing the contexts of hashtags and their co-occurrence statistics with other words, we identify their paradigmatic relationships and similarities. We make use of this information while applying a lexico-semantic expansion on tweet contents before clustering the tweets based on their similarities. Our aim is to tolerate spelling errors and capture statements which actually refer to the same concepts. We evaluate our enhancement solution on a three-day dataset of tweets with Turkish content. In our evaluations, we observe clearer clusters, improvements in accuracy, and earlier event detection times.", "title": "" }, { "docid": "de2527840267fbc3bf5412498323933b", "text": "In time series classification, signals are typically mapped into some intermediate representation which is used to construct models. We introduce the joint time-frequency scattering transform, a locally time-shift invariant representation which characterizes the multiscale energy distribution of a signal in time and frequency. It is computed through wavelet convolutions and modulus non-linearities and may therefore be implemented as a deep convolutional neural network whose filters are not learned but calculated from wavelets. We consider the progression from mel-spectrograms to time scattering and joint time-frequency scattering transforms, illustrating the relationship between increased discriminability and refinements of convolutional network architectures. The suitability of the joint time-frequency scattering transform for characterizing time series is demonstrated through applications to chirp signals and audio synthesis experiments. The proposed transform also obtains state-of-the-art results on several audio classification tasks, outperforming time scattering transforms and achieving accuracies comparable to those of fully learned networks.", "title": "" } ]
scidocsrr