Search Results

Search found 6438 results on 258 pages for 'layer groups'.


  • MS DPM 2007: Testing the Recovery for a Production Domain

    - by NewToDPM
    Hi everybody! MS DPM 2007 is a new technology in my company, and so am I to the product. We have a classic Microsoft domain with two DCs, Exchange 2007 and a couple Web/MS SQL servers. I have deployed DPM one month ago on the domain, and after fixing the various issues I got with the replicas inconsistence and adapting the schedule and retention range to the server storage pool size, I can say the backup system is working correctly (no errors) as of today. However, there is one problem: we did not attempt to restore from the backups yet, which is a big no-no of course. I'm not sure about the way I should handle this, my main concern being Exchange and the System State of the DCs. From my understanding, DPM can only protect AND restore data on a server which is part of the same domain as the backup server. If I restore the System State (containing Active Directory) and the Exchange Storage Groups on a testing server, I am afraid it would completely disturb the domain functioning (for example, having two primary DCs on the domain). I am thinking about building a second DPM server on a testing separate domain which would mirror the replicas and then restore it on testing servers from this new domain. Is it the right way to handle the data recovery testing? How did you do on your domain when you first deployed DPM? I'd be grateful for any link/documentation or advice. Thank you in advance for your help! EDIT: Two options seem possible so far: i. Create another DC/Exchange server in the alternate location; ii. Create a separate domain in the alternate location and setup a trust between this domain and the production one. The option i is certainly the best but implies setting up a secondary Exchange server, with a dedicated public IP address so that if Exchange #1 dies, we can still send emails with Exchange #2. I don't know how complex this can be and would need to discuss it with my colleagues. The option ii would only fit the testing purposes. My only question regarding this is: if my production and DPM servers are part of domain A, and there is a trust between domains A and B, can I restore a domain A content to any domain B server?


  • How do you backup 40+ Centos5.5 servers?

    - by John Little
    We are embarrassed to ask this question, and apologize for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached.

    All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged to a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore).

    Googling returns some applications to do this: Clonezilla looks difficult to install and invasive, and Mondo only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup tools? Any ideas? This must be a pretty standard problem for which googling doesn't give an obvious answer.
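
    For what it's worth, one low-tech way to get the kind of remotely initiated, low-level images described above is to stream dd over SSH from each server to the machine holding the USB disk. The sketch below is only a rough illustration of that idea, not a recommendation over Clonezilla/Mondo/Amanda: the hostnames, device name and destination path are placeholders, root SSH access is assumed, and imaging a mounted disk on a live server only gives a crash-consistent copy.

        #!/usr/bin/env python3
        # Minimal sketch: pull compressed raw disk images from remote CentOS hosts
        # over SSH onto a locally attached (e.g. USB) disk. All names below are
        # placeholders; adjust for the real environment.
        import subprocess
        from datetime import date

        HOSTS = ["web01", "web02", "db01"]   # placeholder hostnames
        DEVICE = "/dev/sda"                  # disk (or md device) to image
        DEST = "/mnt/usb_backup"             # mount point of the backup drive

        for host in HOSTS:
            out = f"{DEST}/{host}{DEVICE.replace('/', '_')}-{date.today()}.img.gz"
            # dd runs on the remote host; the gzip-compressed image streams back over SSH.
            cmd = f"ssh root@{host} 'dd if={DEVICE} bs=4M | gzip -c' > {out}"
            print(f"Imaging {host}:{DEVICE} -> {out}")
            subprocess.run(cmd, shell=True, check=True)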


  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get. But now for some performance details! During the normal workflow of a single project run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely read-heavy, as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption. The entire dataset is continually hit with random queries from end-user derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will very likely incur disk hits. A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old; it is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that 1:10 RAM-GB to HD-GB is a good rule of thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.
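
    There is no authoritative answer here, but the trade-off can at least be framed with the numbers already quoted in the question (roughly a terabyte of data, the 1:10 RAM-to-disk rule of thumb for slow disks). The sketch below is back-of-envelope arithmetic only; the 1:30 SSD ratio and the working-set fraction are assumptions to be tuned, not MongoDB guidance.

        # Back-of-envelope sizing from the figures quoted above. The 1:10
        # RAM-to-disk rule for spinning disks comes from the question; the
        # looser SSD ratio and the working-set fraction are assumptions.
        DATASET_GB = 1024            # "most of a TB within half a year"
        WORKING_SET_FRACTION = 0.25  # guess: share of data hot during a project run

        ram_hdd_rule = DATASET_GB / 10    # classic 1:10 rule for slow disks
        ram_ssd_guess = DATASET_GB / 30   # assumption: SSDs tolerate a thinner cache
        working_set_gb = DATASET_GB * WORKING_SET_FRACTION

        print(f"1:10 rule (HDD):      ~{ram_hdd_rule:.0f} GB RAM")
        print(f"1:30 guess (SSD):     ~{ram_ssd_guess:.0f} GB RAM")
        print(f"Working-set estimate: ~{working_set_gb:.0f} GB")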


  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet. The NAT is in place of course. I can connect over SSH to an instance in the public subnet, as well as to the NAT. I can even SSH to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e. ssh -i keyfile.pem user@1.2.3.4 -p 1300, and 1.2.3.4 should route it to the internal server 10.10.10.10. Now I heard iptables is the job for this, so I went ahead and researched and played around with some routing with that. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this since AWS apparently pre-configures the NAT instances when you set them up and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    So when I try this from home, it just times out. No connection refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communications on these ports in both directions in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
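
    Not an answer from the original thread, but two things that commonly bite this kind of DNAT forwarding setup on EC2 and may be worth ruling out: IP forwarding must be enabled in the kernel of the public instance (net.ipv4.ip_forward=1) or the FORWARD/DNAT rules do nothing, and the EC2 source/destination check can interfere with instances that forward traffic on behalf of others. The boto3 sketch below covers the latter; the instance ID and region are placeholders, and boto3 is just one convenient way to flip the setting (the console works too).

        # Hypothetical instance ID and region; needs credentials allowed to call
        # ec2:ModifyInstanceAttribute. On the forwarding instance itself, also run:
        #   sudo sysctl -w net.ipv4.ip_forward=1
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        ec2.modify_instance_attribute(
            InstanceId="i-0123456789abcdef0",      # the public-subnet instance
            SourceDestCheck={"Value": False},
        )
        print("Source/destination check disabled")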


  • How Should I Generate Trade Statistics For CouchDB/Rails3 Application?

    - by James
    My Problem: I am trying to developing a web application for currency traders. The application allows traders to enter or upload information about their trades and I want to calculate a wide variety of statistics based on what the user entered. Now, normally I would use a relational database for this, but I have two requirements that don't fit well with a relational database so I am attempting to use couchdb. Those two problems are: 1) Primarily, I have a companion desktop application that users will be able to work with and replicate to the site using couchdb's awesome replication feature and 2) I would like to allow users to be able to define their own custom things to track about trades and generate results based off of what they enter. The schema less nature of couch seems perfect here, but it may end up being harder than it sounds. (I already know couch requires you to define views in advance and such so I was just planning on sticking all the custom attributes in an array and then emitting the array in the view and further processing from there.) What I Am Doing: Right now I am just emitting each trade in couch keyed by each user's system and querying with the key of the system to get an array of trades per system. Simple. I am not using a reduce function currently to calculate any stats because I couldn't figure out how to get everything I need without getting a reduce overflow error. Here is an example of rows that are getting emitted from couch: {"total_rows":134,"offset":0,"rows":[ {"id":"5b1dcd47221e160d8721feee4ccc64be", "key":["80e40ba2fa43589d57ec3f1d19db41e6","2010/05/14 04:32:37 +0000"], null, "doc":{ "_id":"5b1dcd47221e160d8721feee4ccc64be", "_rev":"1-bc9fe763e2637694df47d6f5efb58e5b", "couchrest-type":"Trade", "system":"80e40ba2fa43589d57ec3f1d19db41e6", "pair":"EUR/USD", "direction":"Buy", "entry":12600, "exit":12700, "stop_loss":12500, "profit_target":12700, "status":"Closed", "slug":"101332132375", "custom_tracking": [{"name":"signal", "value":"Pin Bar"}] "updated_at":"2010/05/14 04:32:37 +0000", "created_at":"2010/05/14 04:32:37 +0000", "result":100}} ]} In my rails 3 controller I am basically just populating an array of trades such as the one above and then extracting out the relevant data into smaller arrays that I can compute my statistics on. Here is my show action for the page that I want to display the stats and all the trades: def show @trades = Trade.by_system(:startkey => [@system.id], :endkey => [@system.id, Time.now ]) @trades.each do |trade| if trade.result > 0 @winning_trades << trade.result elsif trade.result < 0 @losing_trades << trade.result else @breakeven_trades << trade.result end if trade.direction == "Buy" @long_trades << trade.result else @short_trades << trade.result end if trade["custom_tracking"] != nil @custom_tracking << {"result" => trade.result, "variables" => trade["custom_tracking"]} end end end I am omitting some other stuff that is going on, but that is the gist of what I am doing. 
Then I am calculating stuff in the view layer to produce some results: <% winning_long_trades = @long_trades.reject {|trade| trade <= 0 } %> <% winning_short_trades = @short_trades.reject {|trade| trade <= 0 } %> <ul> <li>Total Trades: <%= @trades.count %></li> <li>Winners: <%= @winning_trades.size %></li> <li>Biggest Winner (Pips): <%= @winning_trades.max %></li> <li>Average Win(Pips): <%= @winning_trades.sum/@winning_trades.size %></li> <li>Losers: <%= @losing_trades.size %></li> <li>Biggest Loser (Pips): <%= @losing_trades.min %></li> <li>Average Loss(Pips): <%= @losing_trades.sum/@losing_trades.size %></li> <li>Breakeven Trades: <%= @breakeven_trades.size %></li> <li>Long Trades: <%= @long_trades.size %></li> <li>Winning Long Trades: <%= winning_long_trades.size %></li> <li>Short Trades: <%= @short_trades.size %></li> <li>Winning Short Trades: <%= winning_short_trades.size %></li> <li>Total Pips: <%= @winning_trades.sum + @losing_trades.sum %></li> <li>Win Rate (%): <%= @winning_trades.size/@trades.count.to_f * 100 %></li> </ul> This produces the following results, which aside from a few things is exactly what I want: Total Trades: 134 Winners: 70 Biggest Winner (Pips): 1488 Average Win(Pips): 440 Losers: 58 Biggest Loser (Pips): -516 Average Loss(Pips): -225 Breakeven Trades: 6 Long Trades: 125 Winning Long Trades: 67 Short Trades: 9 Winning Short Trades: 3 Total Pips: 17819 Win Rate (%): 52.23880597014925 What I Am Wondering- Finally The Actual Questions: I am starting to get really skeptical of how well this method will work when a user has 5,000 trades instead of just 134 like in this example. I anticipate most users will only have somewhere under 200 per year, but some users may have a couple thousand trades per year. Probably no more than 5,000 per year. It seems to work ok now, but the page load times are already getting a tad high for my tastes. (About 800ms to generate the page according to rails logs with about a 250ms of that spent in the view layer.) I will end up caching this page I am sure, but I still need the regenerate the page each time a trade is updated and I can't afford to have this be too slow. Sooo..... Is doing something similar here possible with a straight couchdb reduce function? I am assuming handing this off to couch would possibly help with larger data sets. I couldn't figure out how, but I suppose that doesn't mean it isn't possible. If possible, any hints will be helpful. Could I use a list function if a reduce was not available due to reduce constraints? Are couchdb list functions suitable for this type of calculations? Anyone have any idea of whether or not list functions perform well? Any hints what one would look like for the type of calculations I am trying to achieve? I thought about other options such as running the calculations at the time each trade was saved or nightly if I had to and saving the results to a statistics doc that I could then query so that all the processing was done ahead of time. I would like this to be the last resort because then I can't really filter out trades by time periods dynamically like I would really like to. (I want to have a slider that a user can slide to only show trades from that time period using the startkey and endkey in couchdb if I can.) If I should continue running the calculations inside the rails app at the time of the page view, what can I do to improve my current implementation. I am new to rails, couch and programming in general. I am sure that I could be doing something better here. 
Do I need to create an array for each stat or is there a better way to do that. I guess I just would really like some advice on how to tackle this problem. I want to keep the page generation time minimal since I anticipate these being some of the highest trafficked pages. My gut is that I will need to offload the statistics calculation to either couch or run the stats in advance of when they are called, but I am not sure. Lastly: Like I mentioned above, one of the primary reasons for using couch is to allow users to define their own things to track per trade. Getting the data into couch is no problem, but how would I be able to take the custom_tracking array and find how many winning trades for each named tracking attribute. If anyone can give me any hints to the possibility of doing this that would be great. Thanks a bunch. Would really appreciate any help. Willing to fork out some $$$ if someone wants to take on the problem for me. (Don't know if that is allowed on stack overflow or not.)
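
    As a rough illustration of the "run the calculations at the time each trade is saved" option mentioned above, the sketch below aggregates the same statistics in a single pass over trade documents shaped like the sample earlier in the question (the result, direction and custom_tracking fields). It is plain Python rather than a CouchDB reduce or Ruby, purely to show the shape of the precomputation; the tracking tally at the end is one hedged way to count winning trades per named custom attribute.

        # Single-pass aggregation over trade docs shaped like the sample above.
        # Field names are taken from the question; everything else is illustrative.
        from collections import defaultdict

        def trade_stats(trades):
            s = {"total": 0, "winners": 0, "losers": 0, "breakeven": 0,
                 "long": 0, "short": 0, "pips": 0,
                 "biggest_win": None, "biggest_loss": None}
            tracking_wins = defaultdict(int)   # e.g. {"signal=Pin Bar": 3}
            for t in trades:
                r = t["result"]
                s["total"] += 1
                s["pips"] += r
                if r > 0:
                    s["winners"] += 1
                    s["biggest_win"] = r if s["biggest_win"] is None else max(s["biggest_win"], r)
                    for c in t.get("custom_tracking") or []:
                        tracking_wins[f"{c['name']}={c['value']}"] += 1
                elif r < 0:
                    s["losers"] += 1
                    s["biggest_loss"] = r if s["biggest_loss"] is None else min(s["biggest_loss"], r)
                else:
                    s["breakeven"] += 1
                s["long" if t["direction"] == "Buy" else "short"] += 1
            s["win_rate_pct"] = 100.0 * s["winners"] / s["total"] if s["total"] else 0.0
            return s, dict(tracking_wins)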


  • Overlapping and Stacking in CSS

    - by ApacheCode
    Hello, I'm a programmer and learning new techniques in web development. I've ran into a problem if you could look at the link below. http://bailesslaw.com/Bailess_003/bailesHeader/header.html This example I made isnt fixed and it needs to be, which is becoming difficult. This looks fine on here, but when I put those layers on main website, index.html, place this code as the header, the banner moves in the documents position 0,0 . I need these boxes fixed, center page and I cannot get them to do that without messing up the layers order and content. Layer1-rotating images, js causes the rotation Layer2-blue triangle with backdrop effect overlapping layer 1, Layer 3-is a static image with a high z-index Below I including some code, the important part that needs 3 overlapped layers exactly matching in width and height, except it has to be fixed in center 780px wide. Code: <style rel="stylesheet" type="text/css"> div#layer1 { border: 1px solid #000; height: 200px; left: 0px; position: fixed; top: 0px; width: 780px; z-index: 1; } div#layer2 { border: 1px solid #000; height: 200px; left: 0px; position: absolute; top: 0px; width: 780px; z-index: 2; } div#layer3 { border: 1px solid #000000; height: 200px; left: 0px; position: absolute; top: 0px; width: 780px; z-index: 3; } </style> </head> <body class="oneColFixCtr"> <div id="container"> <div id="mainContent"> <div id="layer1"> </div> <div id="layer2"> <div class="slideshow"> <span id="rotating1"> <p class="rotating"> </p> </span> <span id="rotating2"> <p class="rotating"> </p> </span> <span id="rotating3"> <p class="rotating"> </p> </span> <span id="rotating4"> <p class="rotating"> </p> </span> </div> </div> <div id="layer3"> <table width="385" border="0"> <tr> <th width="81" scope="col"> &nbsp; </th> <th width="278" scope="col"> &nbsp; </th> <th width="12" scope="col"> &nbsp; </th> </tr> <tr> <td> &nbsp; </td> <td> </td> <td> &nbsp; </td> </tr> <tr> <td> &nbsp; </td> <td> &nbsp; </td> <td> &nbsp; </td> </tr> </table> </div> </div> <!-- end #container --> </div> </body> </body> </html> CSS: @charset "utf-8"; CSS code: #rotating1 { height: 200px; width: 780px; } #rotating2 { height: 200px; width: 780px; } #rotating3 { height: 200px; width: 780px; } #main { background-repeat: no-repeat; height: 200px; width: 780px; z-index: 100; } #test { width: 780px; z-index: 2; } #indexContent { background-color: #12204d; background-repeat: no-repeat; height: 200px; width: 780px; z-index: 1; } #indexContent p { padding: .5em 2em .5em 2em; text-align: justify; text-indent: 2em; } .rotating { float: right; margin-top: 227px; text-indent: 0px !important; } .clearfix:after { clear: both; content: " "; display: block; font-size: 0; height: 0; visibility: hidden; } .clearfix { display: inline-block; } * html .clearfix { height: 1%; } .clearfix { display: block; }


  • SINGLE SIGN ON SECURITY THREAT! FACEBOOK access_token broadcast in the open/clear

    - by MOKANA
    Subsequent to my posting there was a remark made that this was not really a question, but I thought I did indeed postulate one. So that there is no ambiguity, here is the question with a lead-in: Since there is no data sent from Facebook during the Canvas Load process that is not at some point divulged, including the access_token, session and other data that could uniquely identify a user, does anyone see any way, other than adding one more layer, i.e., a password, sent over the wire via HTTPS along with the access_token, that will ensure unique, untampered-with security for the user?

    Using Wireshark I captured the local broadcast while loading my Canvas Application page. I was hugely surprised to see the access_token broadcast in the open, viewable for anyone to see. This access_token is appended to any HTTPS call to the Facebook OpenGraph API. Using Facebook as a single-click log-on has now raised huge concerns for me. It is stored in a session object in memory and the cookie is cleared upon app termination, and after reviewing the FB.Init calls I saw a lot of HTTPS calls, so I assumed the access_token was always encrypted. But last night I saw in the status bar what was simply an HTTP call that included the App ID, so I felt I should sniff the Application Canvas load sequence. Today I did sniff the broadcast, and in the attached image you can see that there are HTTP calls with the access_token being broadcast in the open and clear for anyone to gain access to. Am I missing something? Is what I am seeing, and my interpretation of it, really correct? If anyone can sniff and get the access_token they can theoretically make calls to the Graph API via HTTPS, even though the callback would still need to be the site established in Facebook's application set up. But what is truly a security threat is anyone using the access_token for access to their own site. I do not see the value of a single sign-on via Facebook if the only thing that was established as secure was the access_token, because from what I can see it clearly is not secure. Access tokens that never have an expiry date do not change. Access_tokens are different for every user, so access to another site could be held tight to just a single user, but compromising even a single user's data is unacceptable. http://www.creatingstory.com/images/InTheOpen.png

    Went back and did more research on this. FINDINGS: I re-ran the canvas application to verify that it was not any of my code that was doing the broadcasting. In this call: HTTP GET /connect.php/en_US/js/CacheData HTTP/1.1 the USER ID is clearly visible in the cookie. So USER IDs are fully visible, but they are already: anyone can go to pretty much anyone's page and hover over the image and see the USER ID. So no big threat. APP IDs are also easily obtainable - but . . . http://www.creatingstory.com/images/InTheOpen2.png The above file clearly shows the FULL ACCESS TOKEN clearly in the OPEN via a Facebook-initiated call. Am I wrong? TELL ME I AM WRONG, because I want to be wrong about this. I have since reset my app secret, so I am showing the real sniff of the Canvas Page being loaded.

    Additional data 02/20/2011: @ifaour - I appreciate the time you took to compile your response. I am pretty familiar with the OAuth process and have a pretty solid understanding of the signed_request unpacking and utilization of the access_token.
I perform a substantial amount of my processing on the server and my Facebook server side flows are all complete and function without any flaw that I know of. The application secret is secure and never passed to the front end application and is also changed regularly. I am being as fanatical about security as I can be, knowing there is so much I don’t know that could come back and bite me. Two huge access_token issues: The issues concern the possible utilization of the access_token from the USER AGENT (browser). During the FB.INIT() process of the Facebook JavaScript SDK, a cookie is created as well as an object in memory called a session object. This object, along with the cookie contain the access_token, session, a secret, and uid and status of the connection. The session object is structured such that is supports both the new OAuth and the legacy flows. With OAuth, the access_token and status are pretty much al that is used in the session object. The first issue is that the access_token is used to make HTTPS calls to the GRAPH API. If you had the access_token, you could do this from any browser: https://graph.facebook.com/220439?access_token=... and it will return a ton of information about the user. So any one with the access token can gain access to a Facebook account. You can also make additional calls to any info the user has granted access to the application tied to the access_token. At first I thought that a call into the GRAPH had to have a Callback to the URL established in the App Setup, but I tested it as mentioned below and it will return info back right into the browser. Adding that callback feature would be a good idea I think, tightens things up a bit. The second issue is utilization of some unique private secured data that identifies the user to the third party data base, i.e., like in my case, I would use a single sign on to populate user information into my database using this unique secured data item (i.e., access_token which contains the APP ID, the USER ID, and a hashed with secret sequence). None of this is a problem on the server side. You get a signed_request, you unpack it with secret, make HTTPS calls, get HTTPS responses back. When a user has information entered via the USER AGENT(browser) that must be stored via a POST, this unique secured data element would be sent via HTTPS such that they are validated prior to data base insertion. However, If there is NO secured piece of unique data that is supplied via the single sign on process, then there is no way to guarantee unauthorized access. The access_token is the one piece of data that is utilized by Facebook to make the HTTPS calls into the GRAPH API. it is considered unique in regards to BOTH the USER and the APPLICATION and is initially secure via the signed_request packaging. If however, it is subsequently transmitted in the clear and if I can sniff the wire and obtain the access_token, then I can pretend to be the application and gain the information they have authorized the application to see. I tried the above example from a Safari and IE browser and it returned all of my information to me in the browser. In conclusion, the access_token is part of the signed_request and that is how the application initially obtains it. 
After OAuth authentication and authorization, i.e., the USER has logged into Facebook and then runs your app, the access_token is stored as mentioned above and I have sniffed it such that I see it stored in a Cookie that is transmitted over the wire, resulting in there being NO UNIQUE SECURED IDENTIFIABLE piece of information that can be used to support interaction with the database, or in other words, unless there were one more piece of secure data sent along with the access_token to my database, i.e., a password, I would not be able to discern if it is a legitimate call. Luckily I utilized secure AJAX via POST and the call has to come from the same domain, but I am sure there is a way to hijack that. I am totally open to any ideas on this topic on how to uniquely identify my USERS other than adding another layer (password) via this single sign on process or if someone would just share with me that I read and analyzed my data incorrectly and that the access_token is always secure over the wire. Mahalo nui loa in advance.
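
    To make the exposure concrete: the claim above is that the bearer access_token alone is enough to read whatever the user granted the application, with no app secret or registered callback involved. A minimal sketch of that call follows; the token value is a placeholder, and the endpoint is the same Graph API URL pattern quoted earlier in the post.

        # Whoever holds the bearer access_token can query the Graph API directly.
        # The token below is a placeholder, not a real credential.
        import requests

        SNIFFED_TOKEN = "AAAC...placeholder..."

        resp = requests.get(
            "https://graph.facebook.com/me",
            params={"access_token": SNIFFED_TOKEN},
            timeout=10,
        )
        print(resp.status_code)
        print(resp.json())   # profile fields the token's app was granted access to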


  • Oracle ADF Coverage at OOW

    - by Frank Nimphius
    Below is the schedule for all ADF related sessions at a glance. Note the Meet and greet session added for Wednesday Octiber 3rd from 4.30 pm to 5:30. Oracle ADF and Fusion Development General Session Mon 1 Oct, 2012 Time Title Location 10:45 AM - 11:45 AM General Session: The Future of Development for Oracle Fusion—From Desktop to Mobile to Cloud Marriott Marquis - Salon 8 12:15 PM - 1:15 PM General Session: Extend Oracle Fusion Apps to Tablets/Smartphones with Oracle Mobile Technology Moscone West - 3014 1:45 PM - 2:45 PM General Session: Extend Oracle Applications to Mobile Devices with Oracle’s Mobile Technologies Moscone West - 3002/3004 4:45 PM - 5:45 PM General Session: Building Mobile Applications with Oracle Cloud Moscone West - 2002/2004 Conference Session Mon 1 Oct, 2012 Time Title Location 12:15 PM - 1:15 PM Understanding Oracle ADF and Its Role in Oracle Fusion Moscone South - 306 1:45 PM - 2:45 PM Building Performant Oracle ADF Business Components to Meet Tomorrow’s Needs Marriott Marquis - Golden Gate C3 3:15 PM - 4:15 PM End-to-End Oracle ADF Development in Eclipse Marriott Marquis - Golden Gate C3 4:45 PM - 5:45 PM Classic Mistakes with Oracle Application Development Framework Marriott Marquis - Salon 7 Tues 2 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM One Size Doesn’t Fit All: Oracle ADF Architecture Fundamentals Marriott Marquis - Golden Gate C2 10:15 AM - 11:15 AM Oracle Business Process Management/Oracle ADF Integration Best Practices Marriott Marquis - Golden Gate C3 11:45 AM - 12:45 PM Mobile-Enable Oracle Fusion Middleware and Enterprise Applications with Oracle ADF Moscone South - 306 11:45 AM - 12:45 PM Secrets of Successful Projects with Oracle Application Development Framework Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM Develop On-Device iPhone and iPad Apps Without Writing Any Objective-C Code Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM BPM, SOA, and Oracle ADF Combined: Patterns Learned from Oracle Fusion Applications Moscone West - 3003 1:15 PM - 2:15 PM The Future of Forms Is … Oracle Forms (and Friends) Moscone South - 306 5:00 PM - 6:00 PM Best Practices for Integrating SOAP and REST Service into Oracle ADF Marriott Marquis - Golden Gate C2 Wed 3 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA Suite Moscone West - 3001 10:15 AM - 11:15 AM Visualize This! 
Best Practices for Data Visualization in Desktop and Mobile Apps Marriott Marquis - Golden Gate C3 10:15 AM - 11:15 AM Set Up Your Oracle ADF Project and Development Team for Productivity: Seven Essential Tips Marriott Marquis - Golden Gate C2 11:45 AM - 12:45 PM How to Migrate an Oracle Forms Application to Oracle ADF Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM Oracle ADF: Lessons Learned in Real-World Implementations Moscone South - 309 3:30 PM - 4:30 PM Oracle ADF Implementations Around the Globe: Best Practices Marriott Marquis - Golden Gate C2 3:30 PM - 4:30 PM Oracle Developer Cloud Services Marriott Marquis - Salon 7 4:30 PM - 5:30 PM Oracle JDeveloper and Oracle ADF: What’s New Hilton San Francisco - Continental Ballroom 5 5:00 PM - 6:00 PM Mobile Solutions for Oracle E-Business Suite Applications: Technical Insight Moscone West - 2020 5:00 PM - 6:00 PM Extending Social into Enterprise Applications and Business Processes Marriott Marquis - Golden Gate C3 5:00 PM - 6:00 PM The Tie That Binds: An Introduction to Oracle ADF Bindings Marriott Marquis - Golden Gate C2 Thur 4 Oct, 2012 Time Title Location 11:15 AM - 12:15 PM Using Oracle ADF with Oracle E-Business Suite: The Full Integration View Moscone West - 3003 11:15 AM - 12:15 PM Deep Dive into Oracle ADF: Advanced Techniques Marriott Marquis - Golden Gate C2 12:45 PM - 1:45 PM Monitor, Analyze, and Troubleshoot Your Oracle ADF Application Marriott Marquis - Golden Gate C2 2:15 PM - 3:15 PM Oracle WebCenter Portal: Creating and Using Content Presenter Templates Marriott Marquis - Golden Gate C2 HOL (Hands-on Lab) Mon 1 Oct, 2012 Time Title Location 10:45 AM - 11:45 AM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 1:45 PM - 2:45 PM Build Mobile Applications for Oracle E-Business Suite Marriott Marquis - Salon 10A 3:15 PM - 4:15 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 3:15 PM - 4:15 PM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 4:45 PM - 5:45 PM Application Lifecycle Management with Oracle JDeveloper: Hands-on Lab Marriott Marquis - Salon 3/4 Tues 2 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 5:00 PM - 6:00 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A Wed 3 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 11:45 AM - 12:45 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 1:15 PM - 2:15 PM Build Mobile Applications for Oracle E-Business Suite Marriott Marquis - Salon 10A 3:30 PM - 4:30 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 5:00 PM - 6:00 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A Thur 4 Oct, 2012 Time Title Location 11:15 AM - 12:15 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 11:15 AM - 12:15 PM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 12:45 PM - 1:45 PM Oracle ADF for Java EE Developers with 
Oracle Enterprise Pack for Eclipse Marriott Marquis - Salon 3/4 BOF (Birds-of-a-Feather) Mon 1 Oct, 2012 Time Title Location 6:15 PM - 7:00 PM How to Get Started with Oracle ADF Marriott Marquis - Club Room 7:15 PM - 8:00 PM Building Next-Generation Applications with Oracle ADF and Oracle BPM Marriott Marquis - Golden Gate C3 7:15 PM - 8:00 PM The Future of Oracle Forms: Upgrade, Modernize, or Migrate? Marriott Marquis - Golden Gate C2 7:15 PM - 8:00 PM Oracle ADF Faces: One Site for Many Devices Marriott Marquis - Golden Gate C1 - User Group Forum (Sunday Only) Sun 30 Sept, 2012 Time Title Location 9:00 AM - 10:00 AM Oracle ADF Immersion: How an Oracle Forms Developer Immersed Himself in the Oracle ADF World Moscone South - 305 10:15 AM - 11:15 AM Deploy with Joy: Using Hudson to Build and Deploy Your Oracle ADF Applications Moscone South - 305 11:30 AM - 12:30 PM ADF EMG User Group: A Peek into the Oracle ADF Architecture of Oracle Fusion Applications Moscone South - 305 12:45 PM - 3:45 PM ADF EMG User Group: Oracle Fusion Middleware Live Application Development Demo Moscone South - 305 3:15 PM - 4:15 PM Mobile Development with Oracle JDeveloper and Oracle ADF Moscone West - 2010 Demos Demo Location Developer Moscone North, Upper Lobby - N-002 Oracle ADF Mobile Development Moscone North, Upper Lobby - N-001 Oracle Eclipse Projects Hilton San Francisco, Grand Ballroom - HHJ-008 Oracle Enterprise Pack for Eclipse Moscone South, Right - S-208 Oracle JDeveloper and Oracle ADF Moscone South, Right - S-207 Exhibits 0 Exhibitor Location Accenture Moscone South - 1813 Moscone South - 2221 Infosys Moscone South - 1701 Moscone South - SMR-005 Innowave Technology Moscone South - 2309 ODTUG Moscone West, Level 2 Lobby - Kiosk in the User Groups Pavilion Oracle ADF Developers Meet Up Wednesday, Oct 03 Time Activity Location 4:30 PM - 5:30 PM Stop by the OTN Lounge and meet other Oracle ADF & Fusion developers as well as product managers and engineers who work on Oracle ADF, ADF Mobile and ADF Essentials. Feedback and questions welcome, or simply stop by and say ‘hi!’ and enjoy free beer. OTN Lounge


  • Cocos2d-xna memory management for WP8

    - by Arkiliknam
    I recently upgraded to VS2012 and try my in dev game out on the new WP8 emulators but was dismayed to find out the emulator now crashes and throws an out of memory exception during my sprite loading procedure (funnily, it still works in WP7 emulators and on my WP7). Regardless of whether the problem is the emulator or not, I want to get a clear understanding of how I should be managing memory in the game. My game consists of a character whom has 4 or more different animations. Each animation consists of 4 to 7 frames. On top of that, the character has up to 8 stackable visualization modifications (eg eye type, nose type, hair type, clothes type). Pre memory issue, I preloaded all textures for each animation frame and customization and created animate action out of them. The game then plays animations using the customizations applied to that current character. I re-looked at this implementation when I received the out of memory exceptions and have started playing with RenderTexture instead, so instead of pre loading all possible textures, it on loads textures needed for the character, renders them onto a single texture, from which the animation is built. This means the animations use 1/8th of the sprites they were before. I thought this would solve my issue, but it hasn't. Here's a snippet of my code: var characterTexture = CCRenderTexture.Create((int)width, (int)height); characterTexture.BeginWithClear(0, 0, 0, 0); // stamp a body onto my texture var bodySprite = MethodToCreateSpecificSprite(); bodySprite.Position = centerPoint; bodySprite.Visit(); bodySprite.Cleanup(); bodySprite = null; // stamp eyes, nose, mouth, clothes, etc... characterTexture.End(); As you can see, I'm calling CleanUp and setting the sprite to null in the hope of releasing the memory, though I don't believe this is the right way, nor does it seem to work... I also tried using SharedTextureCache to load textures before Stamping my texture out, and then clearing the SharedTextureCache with: CCTextureCache.SharedTextureCache.RemoveAllTextures(); But this didn't have an effect either. Any tips on what I'm not doing? I used VS to do a memory profile of the emulation causing the crash. Both WP7.1 and WP8 emulators peak at about 150mb of usage. WP8 crashes and throws an out of memory exception. Each customisation/frame is 15kb at the most. Lets say there are 8 layers of customisation = 120kb but I render then onto one texture which I would assume is only 15kb again. Each animation is 8 frames at the most. That's 15kb for 1 texture, or 960kb for 8 textures of customisation. There are 4 animation sets. That's 60Kb for 4 sets of 1 texture, or 3.75MB for 4 sets of 8 textures of customisation. So even if its storing every layer, its 3.75MB.... no where near the 150mb breaking point my profiler seems to suggest :( WP 7.1 Memory Profile (max 150MB) WP8 Memory Profile (max 150MB and crashes)
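
    One possible explanation for the gap between the ~3.75 MB estimate above and the ~150 MB the profiler reports, offered only as an assumption the post does not confirm: 15 KB is the compressed size of the asset on disk, while a loaded texture normally sits in memory uncompressed at roughly 4 bytes per pixel. The sketch below just redoes the arithmetic on that basis; the 512x512 frame size is a placeholder, not a figure from the game.

        # Compressed file size vs. uncompressed in-memory texture size.
        # WIDTH/HEIGHT are placeholders; 4 bytes per pixel assumes 32-bit RGBA.
        WIDTH, HEIGHT = 512, 512        # assumed frame dimensions in pixels
        BYTES_PER_PIXEL = 4             # 32-bit RGBA
        FRAMES_PER_ANIM = 8
        ANIMATIONS = 4
        LAYERS = 8                      # stacked customisation layers, if all retained

        one_frame_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / (1024 * 1024)
        composited_mb = ANIMATIONS * FRAMES_PER_ANIM * one_frame_mb
        worst_case_mb = composited_mb * LAYERS   # every layer texture kept alive

        print(f"One frame in memory:              {one_frame_mb:.2f} MB")
        print(f"4 animations x 8 frames:          {composited_mb:.0f} MB")
        print(f"Worst case with all layers alive: {worst_case_mb:.0f} MB")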


  • SSAS: Using fake dimension and scopes for dynamic ranges

    - by DigiMortal
    In one of my BI projects I needed to find count of objects in income range. Usual solution with range dimension was useless because range where object belongs changes in time. These ranges depend on calculation that is done over incomes measure so I had really no option to use some classic solution. Thanks to SSAS forums I got my problem solved and here is the solution. The problem – how to create dynamic ranges? I have two dimensions in SSAS cube: one for invoices related to objects rent and the other for objects. There is measure that sums invoice totals and two calculations. One of these calculations performs some computations based on object income and some other object attributes. Second calculation uses first one to define income ranges where object belongs. What I need is query that returns me how much objects there are in each group. I cannot use dimension for range because on one date object may belong to one range and two days later to another income range. By example, if object is not rented out for two days it makes no money and it’s income stays the same as before. If object is rented out after two days it makes some income and this income may move it to another income range. Solution – fake dimension and scopes Thanks to Gerhard Brueckl from pmOne I got everything work fine after some struggling with BI Studio. The original discussion he pointed out can be found from SSAS official forums thread Create a banding dimension that groups by a calculated measure. Solution was pretty simple by nature – we have to define fake dimension for our range and use scopes to assign values for object count measure. Object count measure is primitive – it just counts objects and that’s it. We will use it to find out how many objects belong to one or another range. We also need table for fake ranges and we have to fill it with ranges used in ranges calculation. After creating the table and filling it with ranges we can add fake range dimension to our cube. Let’s see now how to solve the problem step-by-step. Solving the problem Suppose you have ranges calculation defined like this: CASE WHEN [Measures].[ComplexCalc] < 0 THEN 'Below 0'WHEN [Measures].[ComplexCalc] >=0 AND  [Measures].[ComplexCalc] <=50 THEN '0 - 50'...END Let’s create now new table to our analysis database and name it as FakeIncomeRange. Here is the definition for table: CREATE TABLE [FakeIncomeRange] (     [range_id] [int] IDENTITY(1,1) NOT NULL,     [range_name] [nvarchar](50) NOT NULL,     CONSTRAINT [pk_fake_income_range] PRIMARY KEY CLUSTERED      (         [range_id] ASC     ) ) Don’t forget to fill this table with range labels you are using in ranges calculation. To use ranges from table we have to add this table to our data source view and create new dimension. We cannot bind this table to other tables but we have to leave it like it is. Our dimension has two attributes: ID and Name. The next thing to create is calculation that returns objects count. This calculation is also fake because we override it’s values for all ranges later. Objects count measure can be defined as calculation like this: COUNT([Object].[Object].[Object].members) Now comes the most crucial part of our solution – defining the scopes. Based on data used in this posting we have to define scope for each of our ranges. Here is the example for first range. 
SCOPE([FakeIncomeRange].[Name].&[Below 0], [Measures].[ObjectCount])     This=COUNT(            FILTER(                [Object].[Object].[Object].members,                 [Measures].[ComplexCalc] < 0          )     ) END SCOPE To get these scopes defined in cube we need MDX script blocks for each line given here. Take a look at the screenshot to get better idea what I mean. This example is given from SQL Server books online to avoid conflicts with NDA. :) From previous example the lines (MDX scripts) are: Line starting with SCOPE Block for This = Line with END SCOPE And now it is time to deploy and process our cube. Although you may see examples where there are semicolons in the end of statements you don’t need them. Visual Studio BI tools generate separate command from each script block so you don’t need to worry about it.


  • The Challenge with HTML5 – In Pictures

    - by dwahlin
    I love working with Web technologies and am looking forward to the new functionality that HTML5 will ultimately bring to the table (some of which can be used today). Having been through the div versus layer battle back in the IE4 and Netscape 4 days I think we’re headed down that road again as a result of browsers implementing features differently. I’ve been spending a lot of time researching and playing around with HTML5 samples and features (mainly because we’re already seeing demand for training on HTML5) and there’s a lot of great stuff there that will truly revolutionize web applications as we know them. However, browsers just aren’t there yet and many people outside of the development world don’t really feel a need to upgrade their browser if it’s working reasonably well (Mom and Dad come to mind) so it’s going to be awhile. There’s a nice test site at http://www.HTML5Test.com that runs through different HTML5 features and scores how well they’re supported. They don’t test for everything and are very clear about that on the site: “The HTML5 test score is only an indication of how well your browser supports the upcoming HTML5 standard and related specifications. It does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform. The score is calculated by testing for the many new features of HTML5. Each feature is worth one or more points. Apart from the main HTML5 specification and other specifications created the W3C HTML Working Group, this test also awards points for supporting related drafts and specifications. Some of these specifications were initially part of HTML5, but are now further developed by other W3C working groups. WebGL is also part of this test despite not being developed by the W3C, because it extends the HTML5 canvas element with a 3d context. The test also awards bonus points for supporting audio and video codecs and supporting SVG or MathML embedding in a plain HTML document. These test do not count towards the total score because HTML5 does not specify any required audio or video codec. Also SVG and MathML are not required by HTML5, the specification only specifies rules for how such content should be embedded inside a plain HTML file. Please be aware that the specifications that are being tested are still in development and could change before receiving an official status. In the future new tests will be added for the pieces of the specification that are currently still missing. The maximum number of points that can be scored is 300 at this moment, but this is a moving goalpost.” It looks like their tests haven’t been updated since June, but the numbers are pretty scary as a developer because it means I’m going to have to do a lot of browser sniffing before assuming a particular feature is available to use. Not that much different from what we do today as far as browser sniffing you say? I’d have to disagree since HTML5 takes it to a whole new level. In today’s world we have script libraries such as jQuery (my personal favorite), Prototype, script.aculo.us, YUI Library, MooTools, etc. that handle the heavy lifting for us. Until those libraries handle all of the key HTML5 features available it’s going to be a challenge. 
Certain features such as Canvas are supported fairly well across most of the major browsers, while other features such as audio and video are hit or miss depending upon what codec you want to use. Run the tests yourself to see what passes and what fails for different browsers. You can also view the HTML5 Test Suite Conformance Results at http://test.w3.org/html/tests/reporting/report.htm (a work in progress). The table below lists the scores that the HTML5Test site returned for different browsers I have installed on my desktop PC and laptop. A specific list of tests run and features supported is given when you go to the site. Note that I went ahead and tested the IE9 beta and it didn't do nearly as well as I expected it would, but it's not officially out yet so I expect that number will change a lot. Am I opposed to HTML5 as a result of these tests? Of course not - I'm actually really excited about what it offers. However, I'm trying to be realistic and feel it'll definitely add a new level of headache to the Web application development process, having been through something like this many years ago. On the flip side, developers that are able to target a specific browser (typically intranet apps) or master the cross-browser issues are going to release some pretty sweet applications. Check out http://html5gallery.com/ for a look at some of the more cutting-edge sites out there that use HTML5. Also check out the http://www.beautyoftheweb.com site that Microsoft put together to showcase IE9.

[Table of HTML5Test scores for Chrome 8, Safari 5 for Windows, Opera 10, Firefox 3.6, Internet Explorer 9 Beta (still beta), and Internet Explorer 8; the numeric scores did not survive extraction.]


  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
    Ever wanted to use your server's OS for authenticating MySQL users ? Or the corporate LDAP repository ? Unfortunately options like the above are plentiful nowadays. And providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken the step into the right direction by providing an infrastructure allowing one to make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact the API supplied is so versatile that we took the possibility to re-design the current "native" authentication mechanism into a built-in always-on plugin ! OK, let me give you an example: Imagine we have a bunch of users defined in your OS, e.g. we have a user joro with his respective password. And we have a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user. And there's a problem right there : MySQL's CREATE USER joro@localhost IDENTIFIED BY 'joros_password' statement needs a password. And this is a password in no way related to the password that joro have set up in the OS. What's worse : if joro changes his OS password this will in no way be reflected in MySQL. So he'll need to change his MySQL password in a separate step. Not very convenient, specially when you have a lot of users. This is a laborious setup for joro's DBA as well : he'll have to disable his access in both MySQL and the OS should he decides that joro's out of the "nice" list. Now mysql 5.5 to the rescue: Imagine that the smart DBA has created a MySQL server plugin that will check if the name of the user logging in is a valid and enabled OS name and if the password supplied to the mysql client matches the OS and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done by the following command : CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os'; Now joro can login to MySQL using his current OS password. Note : joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server. So you can e.g. safely experiment with external authentication for selected users while keeping your current user base operational. What happens under the hood when joro logs in ? The server will find out by the user definition that it needs to use a non-default authentication and will ask the client to "switch" to using the appropriate client-side plugin (if of course the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available) the server will reject the login. Otherwise the server will let the server-side plugin decide (while possibly talking to the client side plugin and the OS user directory) if this is a valid login or not. If it is the login process will continue as usual, while if it's not the login will get rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above. 
Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have 1-to-1 mapping between your external user directory and your mysql user repository) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics from MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test. Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html Primary new sections: Pluggable authentication Proxy users Client plugin C API functions Revised sections: New PROXY privilege New proxies_priv grant table Passwords might be external New external_user and proxy_user system variables New --default-auth and --plugin-dir mysql options New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options() CREATE USER has IDENTIFIED WITH clause to specify auth plugin GRANT has PROXY privilege, IDENTIFIED WITH clause to specify auth plugin The data structure for writing client plugins


  • Convert YouTube Videos to MP3 with YouTube Downloader

    - by DigitalGeekery
    Are you looking for a way to take the music videos you watch on YouTube and convert them to MP3? Today we take a look at an easy way to convert those YouTube videos to MP3 for free with YouTube Downloader. The YouTube Downloader functions in two steps. First, it downloads the video from YouTube in MP4 format, and then allows you to convert that MP4 file to MP3. Note: It also supports conversion conversion to some other formats such as AVI video, MOV, iPhone, PSP, 3GP, and WMV.   Installation and usage Download and Install YouTube Downloader. (See download link below) Open the YouTube Downloader by clicking on the desktop icon. Find a YouTube video you’d like to convert to MP3 and copy the URL. Paste the URL into the “Enter video URL” text box in YouTube Downloader. When you hover your mouse over the text box, the text box will auto-fill with the URL from your clipboard. Select the “Download video from YouTube” radio button and click “Ok.” Choose a folder to location to download your YouTube video and click “Save.” The video is downloaded in MP4 format. Now wait while the video is downloaded to your hard drive.   Select the “Convert video (previously downloaded) from file” radio button. Click the (…) button to the right of the “Select video file” text box to browse for and select the MP4 file you just downloaded. Then select “MPEG Audio Layer (MP3) from the “Convert to” drop down list. Select “OK” to begin the conversion. Choose the conversion quality by moving the slider to the right or left. The options are: Low (96kbps bite rate), Medium (128kbps bit rate), Optimal (192kbps bit rate), and High 256kbps bit rate). Here you can select the output volume as well. Click “OK” when finished. If there is a portion of the beginning or end of the video that you wish to cut out of the MP3, select the “Cut video” check box and choose a Start and End time. Click “OK” when finished. Note: The start and end time represent the audio portion of the MP3 you wish to keep. All portions before and after these times will be cut.   The conversion process will begin and should only take a few moments. Times will vary depending on the size of the video you’re converting. Conversion was successful! The MP3 you converted will be in the same directory you downloaded the video to. Now you’re ready to listen to your MP3 or import it to your Zune, iTunes, or music library. You may also want to delete the MP4 files after the conversion if you will no longer need them. Conclusion YouTube Downloader features a very simple interface that’s user friendly and easy to use. It comes in handy when you watch videos that look horrible, but the sound quality is good. Or if you just need to hear the audio of something posted and don’t need the video. It also allows you to download from Google Video, MySpace, and others. 
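
    If you would rather script the second (conversion) step than click through the dialogs, a command-line tool such as ffmpeg can perform the same MP4-to-MP3 conversion at the bit rates listed above. This is only an alternative sketch: ffmpeg is a separate tool, not part of YouTube Downloader, and the file names here are placeholders.

        # Transcode an already-downloaded MP4 to MP3 with ffmpeg (a separate tool).
        # 96k/128k/192k/256k correspond to the Low/Medium/Optimal/High presets above.
        import subprocess

        src = "video.mp4"     # placeholder input file
        dst = "audio.mp3"     # placeholder output file
        bitrate = "192k"      # "Optimal" preset

        subprocess.run(
            ["ffmpeg", "-i", src, "-vn", "-codec:a", "libmp3lame", "-b:a", bitrate, dst],
            check=True,
        )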
    Download YouTube Downloader


  • Change the User Interface Language in Ubuntu

    - by Matthew Guay
    Would you like to use your Ubuntu computer in another language?  Here’s how you can easily change your interface language in Ubuntu. Ubuntu’s default install only includes a couple languages, but it makes it easy to find and add a new interface language to your computer.  To get started, open the System menu, select Administration, and then click Language Support. Ubuntu may ask if you want to update or add components to your current default language when you first open the dialog.  Click Install to go ahead and install the additional components, or you can click Remind Me Later to wait as these will be installed automatically when you add a new language. Now we’re ready to find and add an interface language to Ubuntu.  Click Install / Remove Languages to add the language you want. Find the language you want in the list, and click the check box to install it.  Ubuntu will show you all the components it will install for the language; this often includes spellchecking files for OpenOffice as well.  Once you’ve made your selection, click Apply Changes to install your new language.  Make sure you’re connected to the internet, as Ubuntu will have to download the additional components you’ve selected. Enter your system password when prompted, and then Ubuntu will download the needed languages files and install them.   Back in the main Language & Text dialog, we’re now ready to set our new language as default.  Find your new language in the list, and then click and drag it to the top of the list. Notice that Thai is the first language listed, and English is the second.  This will make Thai the default language for menus and windows in this account.  The tooltip reminds us that this setting does not effect system settings like currency or date formats. To change these, select the Text Tab and pick your new language from the drop-down menu.  You can preview the changes in the bottom Example box. The changes we just made will only affect this user account; the login screen and startup will not be affected.  If you wish to change the language in the startup and login screens also, click Apply System-Wide in both dialogs.  Other user accounts will still retain their original language settings; if you wish to change them, you must do it from those accounts. Once you have your new language settings all set, you’ll need to log out of your account and log back in to see your new interface language.  When you re-login, Ubuntu may ask you if you want to update your user folders’ names to your new language.  For example, here Ubuntu is asking if we want to change our folders to their Thai equivalents.  If you wish to do so, click Update or its equivalents in your language. Now your interface will be almost completely translated into your new language.  As you can see here, applications with generic names are translated to Thai but ones with specific names like Shutter keep their original name. Even the help dialogs are translated, which makes it easy for users around to world to get started with Ubuntu.  Once again, you may notice some things that are still in English, but almost everything is translated. Adding a new interface language doesn’t add the new language to your keyboard, so you’ll still need to set that up.  Check out our article on adding languages to your keyboard to get this setup. 
If you wish to revert to your original language or switch to another new language, simply repeat the above steps, this time dragging your original or new language to the top instead of the one you chose previously. Conclusion Ubuntu has a large number of supported interface languages to make it user-friendly to people around the globe.  And since you can set the language for each user account, it’s easy for multi-lingual individuals to share the same computer. Or, if you’re using Windows, check out our article on how you can Change the User Interface Language in Vista or Windows 7, too!
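If you prefer the terminal, the same result can usually be achieved from the command line. The following is only a minimal sketch, assuming Ubuntu’s standard language-pack package naming and using Thai (th) as the example target; substitute the codes for your own language and locale.

    # Install the Thai language packs (assumes the standard language-pack-* naming).
    sudo apt-get install language-pack-th language-pack-gnome-th
    # Make Thai the default locale for new sessions, then log out and back in.
    sudo update-locale LANG=th_TH.UTF-8

Note that update-locale corresponds roughly to the Apply System-Wide option described above; per-user settings are still easiest to manage through the Language Support dialog.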

    Read the article

  • Mobile HCM: It’s not the future, it is right now

    - by Natalia Rachelson
    A guest post by Steve Boese, Director Product Strategy, Oracle. I’ll bet you reached for your iPhone or Android or BlackBerry and took a quick look at email or Facebook or last night’s text messages before you even got out of bed this morning. Come on, admit it, it’s ok, you are among friends here. See, feel better now? But seriously, the incredible growth and near-ubiquity of increasingly powerful, capable mobile devices that are, for many of us, essential to our daily lives has profoundly changed the way we communicate, consume information, socialize, and more and more, conduct business and get our work done. And if you doubt that profound change has happened, just think for a moment about the last time you misplaced your iPhone.  The shivers, the cold sweats, the panic... We have all been there. And indeed your personal experience with mobile technology echoes throughout the world - here are a few data points to consider: Market research firm IDC estimates 1.8 billion mobile phones will be shipped in 2012. A recent Pew study reports 46% of Americans own a smartphone of some kind. And finally in the USA, ownership of tablets like the iPad has doubled from 10% to 19% in the last year. So truly for the Human Resources leader, the question is no longer, ‘Should HR explore ways to exploit mobile devices and their always-on nature to better support and empower the modern workforce?’, but rather ‘How can HR best take advantage of smartphone and tablet capability to provide information, enable transactions, and enhance decision making?’. Because even though moving HCM applications to mobile devices seems inherently logical given today’s fast-moving and mobile workforces, and its promise to deliver incredible value to the organization, HR leaders also have to consider many factors before devising their Mobile HCM strategy and embarking on mobile HR technology projects. Here are just some of the important considerations for HR leaders as you build your strategies and evaluate mobile HCM solutions: Does your organization provide mobile devices to the workforce today, and if so, will the current set of deployed devices have the necessary capability and ecosystems to support your mobile HCM initiatives? Will you allow workers to use or bring their own mobile devices (commonly abbreviated as ‘BYOD’), and if so are your IT and Security organizations in agreement and capable of supporting that strategy? Do you know which workers need access to mobile HCM applications? Often mobile HCM capability flows down in an organization, with executives and other ‘road-warrior’ types having the most immediate needs, followed by field sales staff, project managers, and even potential job candidates.
But just as an organization will have to spend time understanding ‘who’ should have access to mobile HCM technology, the ‘what’ of the way the solutions should be deployed to these groups will also vary. What works and makes sense for the executive, (company-wide dashboards and analytics on an iPad), might not be as relevant for a retail store manager, (employee schedules, location-level sales and inventory data, transaction approvals, etc.). With Oracle Fusion HCM, we are taking an approach to mobile HR that encompasses not just the mobile solution needs for the various types of worker, but also incorporates the fundamental attributes of great mobile applications - the ability to support end-to-end transactions, apps that respond with lightning-fast speed, with functions that are embedded in a worker’s daily activities, and features that can be mashed-up easily with other business areas like Finance and CRM. Finally, and perhaps most importantly for the Oracle Fusion HCM team, delivering mobile experiences that truly enhance, enable, and empower the mobile workforce, and deliver on the design mantras of the best-in-class consumer applications, continues to shape and drive design decisions. Mobile is no longer the future, it is right now, and the cutting-edge HR leader of today will need to consider how mobile fits her HCM technology strategy from here on out. You can learn more about our ideas and plans for Oracle Fusion HCM mobile solutions at https://fusiontap.oracle.com/.

    Read the article

  • Using Oracle ADF Data Visualization Tools (DVT) Line Graphs to Display Weather Information

    - by Christian David Straub
    Overview: A guest post by Jeanne Waldman. I have a simple JDeveloper Fusion application that retrieves weather data. I wanted to compare the week's temperatures of different locations in a graph. I decided to check out the dvt:lineGraph component, and it took me a few minutes to add it to my jspx page and supply it with data.
    Drag and drop the dvt:lineGraph onto your page: I opened my .jspx page in design mode. In the Component Palette, I selected ADF Data Visualization. Then I dragged 'Line' onto my page. A dialog popped up giving me options of the type of line graph. I chose the default. A lineGraph displayed with some default data.
    Hook up your weather data: Now I wanted to hook up my own data. I browsed the tagdoc, and I found the tabularData attribute.
    Attribute: tabularData
    Type: java.util.List
    TagDoc: Specifies a list of data that the graph uses to create a grid and populate itself. The List consists of a three-member Object array for each data value to be passed to the graph. The members of each array must be organized as follows: The first member (index 0) is the column label, in the grid, of the data value. This is generally a String. If the graph has a time axis, then this should be a Java Date. Column labels typically identify groups in the graph. The second member (index 1) is the row label, in the grid, of the data value. This is generally a String. Row labels appear as series labels in the graph (usually in the legend). The third member (index 2) is the data value, which is usually a Double.
    The first member is the column label of the data value. This would be the day of the week. The second member is the row label of the data value. This would be the location name. The third member is the data value, usually a Double. This would be the temperature. I already had all this information, I just needed to put it in a List with a three-member Object array for each data value.
      /**
       * This is used for the lineGraph to show the data for each location.
       */
      public List<Object[]> getTabularData()
      {
         List<Object[]> tabularData = new ArrayList<Object[]>();
         List<WeatherForecast> weatherForecastList = getWeatherForecastList();
         // loop through the list and build up the tabular data. Then cache it.
         for (WeatherForecast wf : weatherForecastList)
         {
            List<ForecastDay> forecastDayList = wf.getForecastDayList();
            String location = wf.getLocation();
            for (ForecastDay fday : forecastDayList)
            {
               String day = fday.getPrettyDate();
               String highTemp = fday.getHighF();
               tabularData.add(new Object[]{day, location, Double.valueOf(highTemp)});
            }
         }
         return tabularData;
      }
    Now I bound the lineGraph to this method by setting tabularData to #{weatherForAllLocationsBean.tabularData}. weatherForAllLocationsBean is my bean that is defined in faces-config.xml.
    Adding a barGraph: In about 30 seconds, I added a barGraph with the same data. I dragged and dropped a bar graph onto the page, used the same tabularData as I did in the line graph. The page looks like this:
    Conclusion: I was very happy how fast it was to hook up my weather data to these graphs. They look great, and they have built in functionality. For instance, I can hide/show a location by clicking on the name of the location in the legend.
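For reference, the page binding described above looks roughly like the following JSPX fragment. This is a sketch rather than the author's exact page: the tabularData attribute and the bean EL expression come from the article, while the id and subType values are illustrative assumptions.

    <!-- Illustrative sketch; only tabularData and the bean EL expression are taken from the article above. -->
    <dvt:lineGraph id="weatherLineGraph"
                   subType="LINE_VERT_ABS"
                   tabularData="#{weatherForAllLocationsBean.tabularData}"/>
    <dvt:barGraph  id="weatherBarGraph"
                   subType="BAR_VERT_CLUST"
                   tabularData="#{weatherForAllLocationsBean.tabularData}"/>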

    Read the article

  • OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App

    - by user809526
    Isn't it nice to discover OBIEE 11g through a nice "How To" catalog of features? To observe OBI and Essbase relationships at work? To discover TimesTen? The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices: enhanced visualizations such as geo-spatial maps and interactive dashboards, Action Framework, BI Publisher, Scorecard and Strategy Management, mobile style sheets, semantic layer modeling, multi-source federation, integration with products such as Essbase, Oracle OLAP, ODM, TimesTen, ODI and more. The FSA is intended to be comprehensive, and it is big (see CAVEAT below). The FSA is not an Oracle product; it is a good-will free deployment of OBIEE/Essbase designed to exemplify OBIEE features, infrastructure and security around the Fusion Middleware components. Its contents and code are distributed free for demonstrative purposes only. It is neither maintained nor supported by Oracle as a licensed product. The OBIEE Full Sample App is independent of the default Sample App that comes with the OBIEE product. BENEFITS The FSA serves as a demonstrator of OBIEE 11g best practices, a tutorial, a "test and scrap" environment, an SR bench (regression, conflicts), a tuning bench, a quick ready-made POC seed for projects, a security options environment, and more. The FSA is organized around a catalog of functional features and has been deployed over 1000 times, so it should be stable. RELEASE The Full Sample App (V107) is bound to OBIEE 11.1.1.5 and Essbase 11.1.2.1 (November 2011). The FSA release dates are independent of the product GA date (OBIEE). In early December 2011, a new functional patch (V110) is released. It is easily applied (in less than 15 minutes) on top of OBIEE SampleApp 11.1.1.5 (V107). The patch (V110) includes additional functional examples: 1. Web Catalog Statistics Application: provides detailed insight into your web catalog content, dormant catalog objects, webcat impact analysis for metadata changes and more. 2. Data Inflation Scripts: a set of simple SQL procedures to quickly inflate SampleApp fact and dimension data to millions of records in a few minutes. 3. Public Content Extensions Framework: a patching framework for public examples and contributions leveraging SampleApp. 4. Additional report examples (including bridge report, external chart integrations) and bug fixes. DISTRIBUTION as VBox image (November 2011) The ready-made VBox image is designed to run on VirtualBox. It can be converted to VMware (see another BLOG). 1/ http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html VBox Image Deployment Guide Sampleapp_v107_GA.ovf - VBox image key file The above http URL provides the user:password for the ftp URLs below. 2/ ftp://user:[email protected]/static/SampleAppV107/ 12 "7-zip" files Sampleapp_v107_GA_7_20.7z.001 -> .012 We recommend the 7-zip file manager for unzipping (http://www.7-zip.org/). Select the "Unzip here" option; it will create the contents under a directory named "SampleApp_10722". On Windows, it is important to download and save the zip files under the root directory (e.g. C:\ or D:\) because of possible long pathnames. 3/ ftp://user:[email protected]/static/SampleAppV107/Unzipped_Version/ 4 files Sampleapp_v107_GA-disk[1234].vmdk Important note: check the provided checksums (md5sum). Please do it!
DISTRIBUTION as Installation files for existing OBI 11.1.1.5 (November 2011) http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html Install files Deployment Guide SampleApp_10722_1.zip - 198 MB CAVEAT Many computers have RAM chip problems that often stay silent... until you manipulate big files. It is strongly advised that you run a memory check program, e.g. MEMTEST in the GRUB boot manager. Running md5sum repeatedly on the very same big file must give consistent results (the same checksum each time); otherwise a hardware memory problem should be suspected. For VirtualBox, you should most likely enable VT-x (Vanderpool) hardware virtualization in the BIOS. 80 GB of free disk space is required to perform the VBox image installation safely. A virtual machine with a minimum of 6 to 7 GB of memory fits the needs of running OBIEE and Essbase together.
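As a practical illustration of the checksum and unzip advice above, the commands below show one way to do it from a shell with the p7zip command-line tools installed (on Windows, the 7-Zip GUI and a md5 utility do the same job). The file names follow the article; treat the working directory as an assumption for your own download location.

    # Verify every downloaded part against the published checksums; run it more than
    # once if you suspect flaky RAM - the results must be identical every time.
    md5sum Sampleapp_v107_GA_7_20.7z.*
    # Extracting the first part of a split 7-zip archive pulls in the remaining parts
    # automatically and creates the SampleApp_10722 directory.
    7z x Sampleapp_v107_GA_7_20.7z.001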

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace.  In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability. Crucial Questions For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base by the acknowledged USPS certified address accuracy experts at Melissa Data. Reference Data Services Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization.  These capabilities extend the data quality that DQS has natively by quite a bit. For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area. Step by Step to Guide That was all it took to hook in the subscribed provider -Melissa Data- directly to my DQS Client. The next step was for me to attach and map in my Reference Data from the newly acquired reference data provider, to a domain in my knowledge base. On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen. 
Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name, Address, Suite, City, State, Zip. I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third party reference data provider (Melissa Data’s Address Check) and mapped each domain that I had to the provider’s schema. Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source, matching it to the appropriate domain column, and then kicking off the verification process. It took just a few minutes, with progress indicators showing that it was working. When the process concluded, there was a helpful set of tabs that placed the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional. Final Note As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • What’s New for Oracle Commerce? Executive QA with John Andrews, VP Product Management, Oracle Commerce

    - by Katrina Gosek
    Oracle Commerce was for the fifth time positioned as a leader by Gartner in the Magic Quadrant for E-Commerce. This inspired me to sit down with Oracle Commerce VP of Product Management, John Andrews to get his perspective on what continues to make Oracle a leader in the industry and what’s new for Oracle Commerce in 2013. Q: Why do you believe Oracle Commerce continues to be a leader in the industry? John: Oracle has a great acquisition strategy – it brings best-of-breed technologies into the product fold and then continues to grow and innovate them. This is particularly true with products unified into the Oracle Commerce brand. Oracle acquired ATG in late 2010 – and then Endeca in late 2011. This means that under the hood of Oracle Commerce you have market-leading technologies for cross-channel commerce and customer experience, both designed and developed in direct response to the unique challenges online businesses face. And we continue to innovate on capabilities core to what our customers need to be successful – contextual and personalized experience delivery, merchant-inspired tools, and architecture for performance and scalability. Q: It’s not a slow moving industry. What are you doing to keep the pace of innovation at Oracle Commerce? John: Oracle owes our customers the most innovative commerce capabilities. By unifying the core components of ATG and Endeca we are delivering on this promise. Oracle Commerce is continuing to innovate and redefine how commerce is done and in a way that drive business results and keeps customers coming back for experiences tailored just for them. Our January and May 2013 releases not only marked the seventh significant releases for the solution since the acquisitions of ATG and Endeca, we also continue to demonstrate rapid and significant progress on the unification of commerce and customer experience capabilities of the two commerce technologies. Q: Can you tell us what was notable about these latest releases under the Oracle Commerce umbrella? John: Specifically, our latest product innovations give businesses selling online the ability to get to market faster with more personalized commerce experiences in the following ways: Mobile: the latest Commerce Reference Application in this release offers a wider range of examples for online businesses to leverage for iOS development and specifically new iPad reference capabilities. This release marks the first release of the iOS Universal application that serves both the iPhone and iPad devices from a single download or binary. Business users can now drive page content management and layout of search results and category pages, as well as create additional storefront elements such as categories, facets / dimensions, and breadcrumbs through Experience Manager tools. Cross-Channel Commerce: key commerce platform capabilities have been added to support cross-channel commerce, including an expanded inventory model to maintain inventory for stores, pickup in stores and Web-based returns. Online businesses with in-store operations can now offer advanced shipping options on the web and make returns and exchange logic easily available on the web. Multi-Site Capabilities: significant enhancements to the Commerce Platform multi-site architecture that allows business users to quickly launch and manage multiple sites on the same cluster and share data, carts, and other components. 
First introduced in 2010, with this latest release business users can now partition or share customer profiles, control users’ site-based access, and manage personalization assets using site groups. Internationalization: continued language support and enhancements for business user tools as well as search and navigation. Guided Search now supports 35 total languages with 11 new languages (including Danish, Arabic, Norwegian, Serbian Cyrillic) added in this release. Commerce Platform tools now include localized support for 17 locales with 4 new languages (Danish, Portuguese (European), Finnish, and Thai). No development or customization is required in order for business users to use the applications in any of these supported languages. Business Tool Experience: valuable new Commerce Merchandising features include a new workflow for making emergency changes quickly and increased visibility into promotions rules and qualifications in preview mode. Oracle Commerce business tools continue to become more and more feature-rich to provide intuitive, easy-to-use (yet powerful) capabilities to allow business users to manage content and the shopping experience. Commerce & Experience Unification: demonstrable unification of commerce and customer experience capabilities includes productized cartridges that provide supported integration between the Commerce Platform and Experience Management tools, cross-channel returns, Oracle Service Cloud integration, and an integrated iPad application. The mission guiding our product development is to deliver differentiated, personalized user experiences across any device in a contextual manner – and to give the business the best tools to tune and optimize those user experiences to meet their business objectives. We also need to do this in a way that makes it operationally efficient for the business, keeping the overall total cost of ownership low – yet also allows the business to expand, whether it be to new business models, geographies or brands. To learn more about the latest Oracle Commerce releases and mission, visit the links below: • Hear more from John about the Oracle Commerce mission • Hear from Oracle Commerce customers • Documentation on the new releases • Listen to the Oracle ATG Commerce 10.2 Webcast • Listen to the Oracle Endeca Commerce 3.1.2 Webcast

    Read the article

  • How to reproject a shapefile from WGS 84 to Spherical/Web Mercator projection.

    - by samkea
    Definitions: You will need to know the meaning of the terms below. I have given a small description of each acronym, but you can google them to learn more. #1: WGS-84 - World Geodetic System (1984) - is a standard reference coordinate system used for Cartography, Geodesy and Navigation. #2: EPSG - European Petroleum Survey Group - was a scientific organization with ties to the European petroleum industry consisting of specialists working in applied geodesy, surveying, and cartography related to oil exploration. EPSG::4326 is a common coordinate reference system that refers to WGS84 as (latitude, longitude) pair coordinates in degrees with Greenwich as the central meridian. Any degree representation (e.g., decimal or DMSH: degrees minutes seconds hemisphere) may be used. Which degree representation is used must be declared for the user by the supplier of data. So, the Spherical/Web Mercator projection is referred to as EPSG::3785, which is renamed to EPSG:900913 by Google for use in Google Maps. The associated CRS (Coordinate Reference System) for this is the "Popular Visualisation CRS / Mercator". This is the kind of projection that is used by Google Maps, Bing Maps, OSM, Virtual Earth, DeepEarth etc. to show interactive maps over the web with their nearly precise coordinates.  Reprojection: After reading a lot about reprojecting my coordinates from the DeepEarth project on CodePlex, I still could not do it. After some help from a colleague, I got the ball rolling. This is how I did it. #1 You need to download and open your shapefile using QGIS; it's the one with the biggest number of coordinate reference systems/projections. #2 Use the Plugins menu, and enable fTools and the WFS plugin. #3 Use the Vector menu --> Data Management Tools and choose Define current projection. Enable Use predefined reference system and choose the WGS 84 coordinate system. I am personally in zone 36, so I chose WGS84-UTM Zone 36N under (Projected Coordinate Systems --> Universal Transverse Mercator) and clicked OK. #4 Now use the Vector menu --> Data Management Tools and choose Export to new projection. The same dialog will pop up. Now choose WGS 84 EPSG::4326 under Geodetic Coordinate Systems. My input user-defined spatial reference system looks like this: +proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs Your output user-defined spatial reference system should look like this: +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs Browse for the place where the new shapefile is going to be saved and give it a name (like original_reprojected). If it prompts you to add the projected layer to the TOC, accept. There, you have your re-projected map with a latitude and longitude pair of coordinates. #5 Now, this is not the actual Spherical/Web Mercator projection, but don't worry, this is where you have to stop. All the other custom web-mapping portals will pick this projection and transform it into EPSG::3785 or EPSG:900913, but the coordinates will still remain as the LatLon pair of the projected shapefile. If you want to test a particular known point, QGIS has a lot of room for that. Go ahead and test it.
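If you prefer the command line to the QGIS dialogs, the same reprojection can be done with GDAL/OGR's ogr2ogr using the two spatial reference definitions quoted above. This is a sketch under the assumption that GDAL is installed; the input and output file names are placeholders.

    # Reproject from the UTM zone 36N definition quoted above to WGS 84 lat/lon (EPSG:4326).
    ogr2ogr -s_srs "+proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs" \
            -t_srs EPSG:4326 \
            original_reprojected.shp original.shp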

    Read the article

  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is very common to have a sample database for any database product. Different companies keep improving their products and keep coming up with innovations in them. To demonstrate the capability of their new enhancements, they need sample databases. Microsoft has various sample databases available for free download for its SQL Server product. I have collected them here in a single blog post. Download an AdventureWorks Database The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources. Coconut Dal Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft’s Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead, and Coconut Dal might be the answer. DataBooster – Extension to ADO.NET Data Provider The dbParallel DataBooster library is a high-performance extension to the ADO.NET Data Provider and includes two aspects: 1) A slimmed-down API encapsulation which simplifies the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class, DbAccess, to help applications keep a clean DAL and avoid over-packing and redundant copying during data transfer. 2) A booster for writing mass data into the database, based on rational utilization of database concurrency and effective utilization of network bandwidth. Tabular AMO 2012 The sample is made of two project parts. The first part is a library of functions to manage tabular models (AMO2Tabular V2). The second part is a sample to build a tabular model (AdventureWorks Tabular AMO 2012) using the AMO2Tabular library; the created model is similar to the ‘AdventureWorks Tabular Model 2012’. SQL Server Analysis Services Product Samples SQL Server Analysis Services provides a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining. Analysis Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Reporting Services Product Samples This project contains Reporting Services samples released with the Microsoft SQL Server product. These samples are in the following five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers’ forum. Reporting Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Integration Services Product Samples This project contains Integration Services samples released with the Microsoft SQL Server product. These samples are in the following two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers’ forum.
Integration Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. Windows Azure SQL Reporting Admin Sample The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of SQL Reporting APIs, and manages (add/update/delete) permissions of SQL Reporting users. Windows Azure SQL Reporting ReportViewer-SOAP API usage sample These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers and how to use SQL Reporting SOAP APIs in your Windows Azure Web application. Enterprise Library 5.0 – Integration Pack for Windows Azure This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database
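As a quick usage note for the AdventureWorks downloads listed above: if you grab the .bak backup flavor, a hedged T-SQL sketch for bringing it online looks like this. The disk path and database name are assumptions; run the FILELISTONLY step first if the restore complains about file locations, and add WITH MOVE clauses as needed.

    -- Inspect the logical/physical file layout inside the downloaded backup (path is illustrative).
    RESTORE FILELISTONLY
    FROM DISK = N'C:\Samples\AdventureWorks2012.bak';

    -- Restore the sample database; add WITH MOVE clauses if the default paths do not exist on your server.
    RESTORE DATABASE AdventureWorks2012
    FROM DISK = N'C:\Samples\AdventureWorks2012.bak'
    WITH RECOVERY;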

    Read the article

  • HTML Presence Controls for Communications Server 14 CodePlex Project

    Showing Presence on the Web If you're running Office Communicator Server 2007 R2, you know that your only out-of-the-box option for showing presence on the web is to use the NameControl ActiveX control that ships as part of Office.  Being an ActiveX control, this obviously means that you're limited to Internet Explorer.  Also, nobody likes ActiveX controls. What if you want to show the presence of users in a pure ASP.NET or HTML application and can't assume that the user has Communicator installed? You need an ASP.NET or HTML presence control.  HTML Presence Controls for Microsoft Communications Server 14 We recently worked with the UC team at Microsoft on a keynote demo for TechEd 2010 in New Orleans.  The demo was for a fictitious airline, Fabrikam Airlines, that wanted to show the presence of customer service and reservations agents on its website.  Customers could also start an instant message conversation with the agents using a Silverlight web chat window that used WCF to communicate with the backend UCMA application. We built HTML Presence Controls that use AJAX to poll a REST-based WCF service running in IIS and hosting a UCMA 3.0 presence subscription application.   Microsoft has graciously allowed us to publish these on CodePlex so that the development community can benefit from them:  http://htmlpresencecontrols.codeplex.com/ We will be maintaining the CodePlex project as new builds of UCMA 3.0 become available.  Check out the project home page on CodePlex for some more in-depth details on how the controls are implemented. ASP.NET Server Control Implementation We're providing an ASP.NET server control implementation that you can use stand-alone or in a GridView or Repeater (or other layout control).  The control has properties that allow you to control its appearance, e.g. you can choose whether or not to show the contact's name or availability text. You can also use the server control in a layout control such as a GridView by putting it in a TemplateColumn and binding to the Sip Uri in the data source. Disclaimer Once we started working on these, we realized why Microsoft hasn't shipped such controls as part of the product.  There are some tradeoffs you have to be aware of when using these controls; here's the high level. Privacy The backend UCMA 3.0 application that subscribes to the presence of contacts runs as a trusted application and can thus retrieve the presence of any user in the organization.  There's currently no good way in UCMA to apply any privacy rules to ensure that the consumer of the presence controls has permission to see the presence of the contacts that the controls are bound to.  Just to be absolutely crystal clear: these controls provide a way to query the presence of any user in the organization, regardless of the privacy relationship between the person consuming the controls and the contacts whose presence is being displayed. We're exploring options for a design pattern that would allow you to inject some privacy controls.  Keep in mind though that you would most likely be responsible for implementing this logic, as there is currently no functionality in UCMA that allows you to do that. Polling the WCF REST Service The controls poll the backend WCF service to retrieve the presence of contacts - you can control the refresh interval so that they poll less often. We implemented a caching layer so that the WCF service is always communicating with a presence cache; it never communicates directly with Communications Server.
For example, if your web page is showing the presence of sip:[email protected] and 500 people have the page open, the presence cache only contains one instance of the subscription; Communications Server is not being polled 500 times for the presence of that contact. Once the presence of a contact changes, it is updated in the cache.  There are some server-based push mechanisms that would work nicely here, such as the one that Outlook Web Access 2010 uses.  Unfortunately we didn't have time to explore these options. Community Contribution Take a look at the project Issue Tracker; there are a couple of things we can use some help with.  Shoot me a note if you're interested in contributing to the project.
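To make the caching idea above concrete, here is a minimal C# sketch of a presence cache keyed by SIP URI. It is purely illustrative: the class and member names are hypothetical and are not taken from the CodePlex project; it only shows how many polling web clients can share a single cached subscription per contact.

    // Hypothetical sketch - not code from the HTML Presence Controls project.
    using System;
    using System.Collections.Concurrent;

    public sealed class PresenceCache
    {
        // One entry per SIP URI, no matter how many browsers are polling that contact.
        private readonly ConcurrentDictionary<string, string> _availability =
            new ConcurrentDictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        // Called by the REST service on every AJAX poll; never touches Communications Server.
        public string GetAvailability(string sipUri)
        {
            string state;
            return _availability.TryGetValue(sipUri, out state) ? state : "Unknown";
        }

        // Called by the UCMA subscription application whenever a contact's presence changes.
        public void Update(string sipUri, string availability)
        {
            _availability[sipUri] = availability;
        }
    }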

    Read the article

  • Last GUID used up - new ScottGuID unique ID to replace it

    - by Eilon
    You might have heard in recent news that the last ever GUID was used up. The GUID {FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF} was just consumed by a soon to be released project at Microsoft. Immediately after the GUID's creation the word spread around the Microsoft campuses around the globe. Microsoft's approximately 100,000 worldwide employees then started blogging, tweeting, and facebooking about the dubious "achievement." The following screenshot shows GUIDGEN (the Windows tool for creating GUIDs) with the last ever GUID. All GUIDs created by projects at Microsoft must be registered in a central repository for record keeping. This allows quick-fix engineers, security engineers, anti-malware developers, and testers to do a quick look up of an unknown GUID and find out if it belongs to Microsoft. The following screenshot shows the Microsoft GUID Tracker internal application and the last few GUIDs being used up by various Microsoft projects. What is perhaps more interesting than the news about the GUID is the project that used that last GUID. The recent announcements regarding the development experience for the Windows Phone 7 Series (WP7S) all involve free editions of Visual Studio 2010. One of the lesser known developer tools is based on a resurrected project that many of you are probably familiar with, but have never used. The tool is in fact Microsoft Bob 7 Series (MB7S). MB7S is an agent-based approach for mobile phone app development. The UI incorporates both natural language interfaces and motion gesture behaviors, similar to the Windows Phone 7 Series “Metro” interface. If it works, it will help to expand the breadth of mobile app developers. After the GUID: The ScottGuID It came as no big surprise that eventually the last GUID would be used up. Knowing this, a group of engineers at Microsoft has designed, implemented, and tested a replacement to the GUID: The ScottGuID. There are several core principles of the ScottGuID: 1. The concepts used in ScottGuIDs must be easily understood by a developer who is already familiar with GUIDs 2. There must exist a compatibility layer between ScottGuIDs and GUIDs 3. A ScottGuID must be usable in a practical manner in non-computing environments 4. There must exist ScottGuID APIs for all common platforms: Win32/Win64/WinCE, .NET (incl. Silverlight), Linux, FreeBSD, MacOS (incl. iPhone OS), Symbian, RIM BlackBerry, Google Android, etc. 5. ScottGuIDs must never run out ScottGuID use cases One of the more subtle principles of the ScottGuID is principle #3. While technically a GUID could be used in any environment, it was not practical to do so in terms of data entry and error detection. In order to have the ScottGuID be a true universal ID it must be usable in non-computing environments. Prior to the announcement of the ScottGuID there have been a number of until-now confidential projects. One of the tools that will soon become public is ScottGuIDGen, which is in essence an updated version of GUIDGEN that can create ScottGuIDs. The following screenshot shows a sample ScottGuID. To demonstrate the various applications of the ScottGuID there were test deployments around the globe. The following examples are a small showcase of the applications that have already been prototyped. Log in to Hotmail: Pay for gas: Sign in to Twitter: Dispense cat food: Conclusion I hope that this brief introduction to the ScottGuID shows how technology can continue to move forward, even when it appears there is a point that cannot be passed. 
With a small number of principles, a team of smart engineers, and a passion for "getting it right" the ScottGuID should last well past our lifetimes. In the coming months expect further announcements regarding additional developer tools, samples, whitepapers, podcasts, and videos. Please leave a comment on this post if you have any questions about the ScottGuID or what you would like to see us do with it. With ScottGuID, the possibilities are nearly endless and we want to stretch their reach as far as possible.

    Read the article

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate Last Sunday I went on a walkabout.  That’s when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination. I favor “social trails”, the unmapped routes pioneered by both animal and human explorers.  These tracks  are usually more challenging than established, marked routes and you can’t be 100% sure of where you’re going to end up. But I’ve found the rewards to be much greater. For awhile, I pondered on how—depending upon your perspective—the current economic situation worldwide could be viewed as either a classic “the glass is half empty” or a “the glass is half full” scenario. Midsize companies buy Oracle to grow and so I’m continually amazed and fascinated by the success stories our customers relate to me.  Oracle’s successful midsize companies are growing via innovation, agility, and opportunity. For them, the glass isn’t half full—it’s overflowing. Growing Midsize Companies are Thankful for: Innovation The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May.  You might not recognize the name but, chances are, your local evening weather report relies on this company’s weather observation, monitoring and measurement products.  For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end to end product and service solutions based on the needs of distinctly different customer groups across industrial, public sector, and defense sectors.  Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch.  I cut it down. The name of that customer’s company was stamped on the housing. “It’s a radiosonde from a high altitude weather balloon,” he told me the next day. “Keep it as a souvenir.”  It sits on my fireplace mantle and elicits many questions from guests. Growing Midsize Companies are Thankful for: Agility In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets. They are best known for their epic outdoor theme parks. However, their primary growth today is coming from a chain of indoor amusement centers in the USA where billiards, bowling, and laser tag take the place of roller coasters, kiddy rides, and wave pools. With mountains and rivers right out my front door, I’m not much for theme parks, but I’ll take a spirited game of laser tag any day.  This company has grown dramatically since first implementing Oracle ERP more than a decade ago. Their profitable expansion into a completely foreign market is derived from the ability to replicate proven and efficient best business practices across diverse operating environments.  They recently went live on Oracle’s Fusion HCM and Taleo. Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices. Growing Midsize Companies are Thankful for: Opportunity I have three GPS apps on my iPhone. I use them mainly to keep track of my stats—distance, time, and vertical gain. 
However, every once in awhile I need to find the most efficient route back home before dark from my current location (notice I didn’t use the word “lost”). In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions—the integrated use of telecommunications and informatics—for managing the mobile workforce. These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery. Forgive the overused metaphor, but this is route optimization on steroids.  The company’s executive team saw an opportunity in this emerging market and went “all in”. Consequently, they are being rewarded with tremendous growth results and market domination by providing the ability for their clients to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity. This Thanksgiving, I’m thankful for health, family, friends, and a career with an innovative company that helps companies leverage top tier software to drive and manage growth. And I’m thankful to have learned the lesson that good things happen when you get off the beaten path—both when hiking and when forging new routes through a complex world economy. Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an obstructed view of 14,265 ft Mt Evans just a few miles to the west.  There, nowhere near a house or a trail, someone had placed a wooden lounge chair. Its wood was worn and faded but it was sturdy. I had lunch and a cold drink in my pack. Opportunity knocked and I seized it. Happy Thanksgiving.  

    Read the article

  • Open Source but not Free Software (or vice versa)

    - by TRiG
    The definition of "Free Software" from the Free Software Foundation: “Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program's users have the four essential freedoms: The freedom to run the program, for any purpose (freedom 0). The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this. The freedom to redistribute copies so you can help your neighbor (freedom 2). The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so. The definition of "Open Source Software" from the Open Source Initiative: Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria: Free Redistribution The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale. Source Code The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost preferably, downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed. Derived Works The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software. Integrity of The Author's Source Code The license may restrict source-code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software. No Discrimination Against Persons or Groups The license must not discriminate against any person or group of persons. No Discrimination Against Fields of Endeavor The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research. Distribution of License The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties. 
License Must Not Be Specific to a Product The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution. License Must Not Restrict Other Software The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software. License Must Be Technology-Neutral No provision of the license may be predicated on any individual technology or style of interface. These definitions, although they derive from very different ideologies, are broadly compatible, and most Free Software is also Open Source Software and vice versa. I believe, however, that it is possible for this not to be the case: It is possible for software to be Open Source without being Free, or to be Free without being Open Source. Questions Is my belief correct? Is it possible for software to fall into one camp and not the other? Does any such software actually exist? Please give examples. Clarification I've already accepted an answer now, but I seem to have confused a lot of people, so perhaps a clarification is in order. I was not asking about the difference between copyleft (or "viral", though I don't like that term) and non-copyleft ("permissive") licenses. Nor was I asking about your personal idiosyncratic definitions of "Free" and "Open". I was asking about "Free Software as defined by the FSF" and "Open Source Software as defined by the OSI". Are the two always the same? Is it possible to be one without being the other? And the answer, it seems, is that it's impossible to be Free without being Open, but possible to be Open without being Free. Thank you everyone who actually answered the question.

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >