Search Results

Search found 23347 results on 934 pages for 'key storage'.


  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch project is a very easy way to visualize and manipulate information directly from one of our ADO.NET Providers. But when it comes to executing stored procedures, things can be a bit more complicated. In this article, we will demonstrate how to execute a stored procedure in LightSwitch. For the purposes of this article, we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET Providers.

    Creating the RIA Service

    Step 1: Open Visual Studio and create a new WCF RIA Service Class Project.
    Step 2: Add a reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project.
    Step 3: Add a new Domain Service Class to the (ProjectName).Web project.
    Step 4: In the new Domain Service Class, create a new class with the attributes needed for the stored procedure's parameters. In this demo, the stored procedure we are executing is called SendMessage. The parameters we will need are as follows:

        public class NewMessage {
            [Key]
            public int ID { get; set; }
            public string FromEmail { get; set; }
            public string ToEmail { get; set; }
            public string Subject { get; set; }
            public string Text { get; set; }
        }

    Note: The created class must have an ID, which will serve as the key value.

    Step 5: Create a new method that will be executed when the insert event fires. Inside this method you can use standard ADO.NET code to execute the stored procedure.

        [Insert]
        public void SendMessage(NewMessage newMessage) {
            try {
                EmailConnection conn = new EmailConnection(connectionString);
                EmailCommand comm = new EmailCommand("SendMessage", conn);
                comm.CommandType = System.Data.CommandType.StoredProcedure;
                if (!newMessage.FromEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail));
                if (!newMessage.ToEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail));
                if (!newMessage.Subject.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject));
                if (!newMessage.Text.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text));
                comm.ExecuteNonQuery();
            } catch (Exception exc) {
                Console.WriteLine(exc.Message);
            }
        }

    Step 6: Create a query method. We are not going to be using getNewMessages(), so for the purposes of our example it does not matter what it returns, but you will need to create a method for the query event as well.

        [Query(IsDefault=true)]
        public IEnumerable<NewMessage> getNewMessages() {
            return null;
        }

    Step 7: Rebuild the whole solution.

    Creating the LightSwitch Project

    Step 8: Open Visual Studio and create a new LightSwitch Application Project.
    Step 9: Under Data Sources, add a new data source and choose WCF RIA Service.
    Step 10: Choose to add a new reference and select the (ProjectName).Web.dll generated by the RIA Service.
    Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity.
    Step 12: In the Screens section, create a new screen and select the NewMessage entity as the Screen Data.
    Step 13: After you run the project, you will be able to add a new record and save it. This will execute the stored procedure and send the new message. If you create a screen to check the sent messages, you can refresh it to see the mail you sent.

    Sample Project

    To help you get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.

    Read the article

  • not being able to access any sudo function on my pc

    - by explorex
    Hi, I cannot access any admin functions on my desktop. My only OS is Ubuntu 10.04 Lucid Linux and I am new to Ubuntu. I rebooted my computer thinking that Google Chrome had crashed: Chrome showed its opening message but never opened, so I restarted. While the system was loading (I was playing with the keyboard and don't know what I typed), and after Ubuntu loaded, I was unable to access anything. Some of the symptoms are listed below. I cannot hear any sound. I cannot access the wired ethernet connection in the corner where I usually enable internet access, and I have no internet. The local Apache server does not work either: whenever I try to start Apache I get a message like "setuid must be root". When I type sudo, I also get the message "setuid must be root". I cannot access external storage devices like my pendrive and portable hard drive, and cannot mount my other FAT32 partitions. When I try to start the Apache web server without typing sudo, I get a message like "cannot open socket". EDIT: I also remember running the command chown -R www-data / earlier and getting an error message. EDIT: I also cannot shut down my computer; it only logs off.
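
    [Note: a hedged first-aid sketch, not part of the original question. The symptoms ("setuid must be root" from sudo, failing mounts) are consistent with the chown -R www-data / mentioned in the edit, which strips root's ownership, and with it the setuid bit, from binaries across the filesystem. Assuming the stock path /usr/bin/sudo, a repair from the recovery-mode root shell would be:

        chown root:root /usr/bin/sudo
        chmod 4755 /usr/bin/sudo

    That restores sudo itself, but a recursive chown of / damages far more than one binary, so a full reinstall is often the more realistic fix.]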

    Read the article

  • Rotating 2D Object

    - by Vico Pelaez
    Well, I am trying to learn OpenGL and want to make a triangle move one unit (0.1) every time I press one of the keyboard arrows. However, I want the triangle to turn first, pointing in the direction it will move. So if my triangle is pointing up and I press right, it should point right first and then move one unit along the x axis. I have implemented the code to move the object one unit in any direction, but I cannot get it to turn to point in the direction it is going. The initial position of the triangle is pointing up.

        #define LENGTH 0.05
        float posX = -0.5, posY = -0.5, posZ = 0;
        float inX = 0.0, inY = 0.0, inZ = 0.0; // what values????

        void rect(){
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glPushMatrix();
            glTranslatef(posX, posY, posZ);
            glRotatef(rotate, inX, inY, inZ);
            glBegin(GL_TRIANGLES);
                glColor3f(0.0, 0.0, 1.0);
                glVertex2f(-LENGTH, -LENGTH);
                glVertex2f(LENGTH-LENGTH, LENGTH);
                glVertex2f(LENGTH, -LENGTH);
            glEnd();
            glPopMatrix();
        }

        void display(){
            // Clear window
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            rect();
            glFlush();
        }

        void init(){
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glColor3f(1.0, 1.0, 1.0);
        }

        float move_unit = 0.01;
        bool change = false;

        void keyboardown(int key, int x, int y) {
            switch (key){
                case GLUT_KEY_UP:
                    if(rotate = 0) posY += move_unit;
                    else{ inX = 1.0; rotate = 0; }
                    break;
                case GLUT_KEY_RIGHT:
                    if(rotate = -90) posX += move_unit;
                    else{ inX = 1.0; // is this value ok??
                          rotate -= 90; }
                    break;
                case GLUT_KEY_LEFT:
                    if(rotate = 90) posX -= move_unit;
                    else{ inX = 1.0; // is this value ok???
                          rotate += 90; }
                    break;
                case GLUT_KEY_DOWN:
                    if(rotate = 180) posY -= move_unit;
                    else{ inX = 1.0; rotate += 180; }
                    break;
                case 27: // Escape key
                    exit(0);
                    break;
                default:
                    break;
            }
            glutPostRedisplay();
        }

        int main(int argc, char** argv){
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(0, 0);
            glutCreateWindow("Triangle turn");
            glutSpecialFunc(keyboardown);
            glutDisplayFunc(display);
            init();
            glutMainLoop();
        }
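
    [Note: a sketch of a likely fix, not from the original poster. Two things stand out: every if test uses assignment (=) where comparison (==) is intended, and for a 2D rotation in the x-y plane the axis passed to glRotatef should be the z axis, so rect() would call glRotatef(rotate, 0.0, 0.0, 1.0) and the inX/inY/inZ globals can go away. Assuming a global float rotate = 0; holding the absolute heading in degrees (its declaration did not survive the excerpt), the handler could look like:

        void keyboardown(int key, int x, int y) {
            switch (key){
                case GLUT_KEY_UP:
                    if (rotate == 0) posY += move_unit; // already facing up: move
                    else rotate = 0;                    // otherwise turn first
                    break;
                case GLUT_KEY_RIGHT:
                    if (rotate == -90) posX += move_unit;
                    else rotate = -90;
                    break;
                case GLUT_KEY_LEFT:
                    if (rotate == 90) posX -= move_unit;
                    else rotate = 90;
                    break;
                case GLUT_KEY_DOWN:
                    if (rotate == 180) posY -= move_unit;
                    else rotate = 180;
                    break;
                case 27:
                    exit(0);
            }
            glutPostRedisplay();
        }

    Setting rotate to an absolute angle rather than incrementing it keeps the handler from drifting out of the four expected headings. The glMatrixMode(GL_PROJECTION) call in rect() also looks suspect: the translate/rotate belongs on the modelview matrix that display() already selects.]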

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24 bpp/32 bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats.

    My questions: What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)? How should bitmaps be stored for high-performance algorithms, so that read/write times are fastest (fixed array? with/without padding? 24 bpp or 32 bpp?)? How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)?

    Some possible methods: Use a fixed, packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer arithmetic, with all pixels allocated in one contiguous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage where each line of the bitmap is allocated separately, using more memory but requiring smaller contiguous memory segments. Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
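
    [Note: a minimal C++ sketch of the first method, under stated assumptions (32 bpp, packed pixels, row-major, stride == width with no padding); the names are illustrative, not from the question:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct Bitmap32 {
            int width;
            int height;
            std::vector<std::uint32_t> pixels; // one packed 32-bpp pixel per element

            Bitmap32(int w, int h)
                : width(w), height(h),
                  pixels(static_cast<std::size_t>(w) * h) {} // one contiguous chunk

            // Row-major, no padding: pixel (x, y) lives at index y * width + x.
            std::uint32_t& at(int x, int y) {
                return pixels[static_cast<std::size_t>(y) * width + x];
            }
        };

    The 32-bpp layout trades roughly a third more memory for word-aligned pixel access, which is generally faster than 24 bpp, and the single contiguous allocation keeps scanline traversal cache-friendly.]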

    Read the article

  • TechEd 2012: Fast SQL Server

    - by Tim Murphy
    While I spend a certain amount of my time creating databases (coding around SQL Server, and setting up a server when I have to), it isn't my bread and butter. Since I have run into a number of times when SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I could learn. Brent Ozar packed a mountain of information into his session on making SQL Server faster. I'm not sure how he found time to hit all of his points, since he was allowing the audience to abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity, since he appeared to be using a fruit laptop. He had my attention, though, when he stated that he had given up on telling people not to use "select *". He posited that it could be fixed with hardware by caching the data in memory, and continued by cautioning that having too many indexes could defeat this approach. His logic was sound if not always practical, but it was a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominantly for OLTP databases, prior to moving on to data warehouse solutions. Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance, so I won't rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed, and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that going forward we should be able to find the details we need easily. Just what the doctor ordered.

    Read the article

  • is Java free for mobile development?

    - by exTrace101
    Q1. I would like to know if it's free for a developer (I mean, if I have to pay no royalties to Sun/Oracle) to develop (Android) mobile apps in Java? After reading this snippet about the Java license's field-of-use terms, I'm getting the impression that Java is not free for mobile development; is that right?

    .."General Purpose Desktop Computers and Servers" means computers, including desktop and laptop computers, or servers, used for general computing functions under end user control (such as but not specifically limited to email, general purpose Internet browsing, and office suite productivity tools). The use of Software in systems and solutions that provide dedicated functionality (other than as mentioned above) or designed for use in embedded or function-specific software applications, for example but not limited to: Software embedded in or bundled with industrial control systems, wireless mobile telephones, wireless handheld devices, netbooks, kiosks, TV/STB, Blu-ray Disc devices, telematics and network control switching equipment, printers and storage management systems, and other related systems are excluded from this definition and not licensed under this Agreement...

    and from http://www.excelsiorjet.com/embedded/ : "Notice: The Java SE Embedded technology license currently prohibits the use of Java SE in cell phones."

    Q2. How come the plethora of Android Java developers aren't paying Sun/Oracle a dime?

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links for navigating through the results. Re-executing the search on each page request to get the appropriate results for that page can be too expensive (up to 15 s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs each location might have: for example, storing the IDs of the entities in the result set on the client, storing them in the user's session on the web server, or storing them in a temporary table in the database. I'm not looking for a single solution, as different scenarios may call for different approaches (and such a question would be better suited to stackoverflow.com), but rather a design comparison between the possible approaches.
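
    [Note: a hedged C++ sketch of the session-side option, with a hypothetical 64-bit entity ID: snapshot the ordered result IDs once per search execution and page against the frozen snapshot, re-fetching entities by ID on each view. Membership and order then stay consistent for next/previous links, at a cost of roughly 8 bytes per result of server-side storage plus the staleness of the frozen set:

        #include <algorithm>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct ResultSnapshot {
            static const std::size_t kPageSize = 20;
            std::vector<std::int64_t> ids; // ordered entity IDs captured at search time

            // Page n of the snapshot; entity fields are fetched fresh by ID,
            // so membership and order are stable while field values stay current.
            std::vector<std::int64_t> page(std::size_t n) const {
                const std::size_t begin = n * kPageSize;
                if (begin >= ids.size()) return std::vector<std::int64_t>();
                const std::size_t end = std::min(begin + kPageSize, ids.size());
                return std::vector<std::int64_t>(ids.begin() + begin, ids.begin() + end);
            }
        };

    The same shape maps onto the other two locations: serialize the ID list to the client, or write it to a keyed temporary table in the database, trading network payload against server memory and database storage.]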

    Read the article

  • Amazon EC2 vs Dedicated server at Hetzner, what's the use for EC2?

    - by C-Blu
    After searching the web I still can't find the reason to use EC2. What's the point of scaling EC2? If you expect a huge burst in traffic, they say. OK, but what if you already have a couple of sites with good traffic and, for example, a medium reserved EC2 instance is not enough? You are paying $36.60 (medium reserved for 1 year) in EU (Ireland) + traffic + optional expenses for databases and S3 if you use them. Of course, at some point, while you are under $56.6-$66.1, you can optimize your hosting costs with Amazon EC2. But if at some point you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time before you get massive traffic. (Am I wrong?) CPU: i7-2600 quad-core (3.4-3.8 GHz). RAM: 16 GB. HDD: 2x3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS. Traffic: 10 TiB per month included. This is what you get from Hetzner for $56 (less 19% VAT), or $66 for EU residents. Please tell me, what's the reason to use Amazon? What load would a server from Hetzner not handle that Amazon Auto Scaling would? Is the maintenance of a dedicated server vs. EC2 still the same? Or is it that a hardware failure at Amazon won't ruin your EBS storage? I'm still not at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether the Amazon infrastructure is better than the pure performance of Hetzner's hardware.

    Read the article

  • Slightly off topic - How to Fix Sky Go Error [t6013-c1501] (and [t6000-c1501])

    - by bconlon
    Sky doesn't seem to understand what their own errors mean, so I cobbled together an understanding from some other posts and managed to get it working. When you see the error [t6013-c1501] instead of your TV programme in Sky Go, it seems to mean: 'You registered a device, but then changed the hardware, so now I'm confused!' In other words, the digital rights management (DRM) used between Sky Go and Silverlight stored an old fingerprint of your PC, but rather than recognising this and allowing you to remove the device, it just disappears from the 'Manage Devices' page.

    DISCLAIMER: Perform the following steps at your own risk. It worked for me, but I didn't care if it broke stuff. If you care... don't do it!

    So, to fix this I did the following:
    1. Log in to Sky Go and click 'Watch live TV' from the home page. It will attempt to show Sky News and fail with the error [t6013-c1501].
    2. Right-click on the error and you should see the menu option 'Silverlight'. Select this and a dialog should appear. Click the 'Application Storage' tab and delete any entry that relates to Sky Go. Click OK to close the dialog.
    3. Open Explorer and navigate to the folder C:\ProgramData\Microsoft\PlayReady.
    4. Rename the file mspr.hds to mspr.hds.OLD.
    5. Go back to the browser and press F5. You may need to log out/log in (not sure).

    Note: Don't rename/delete the folder C:\ProgramData\Microsoft\PlayReady or you will get the error [t6000-c1501]. The folder must exist in order for the new file to be created by Silverlight.

    Techie talk: whoever wrote the code to create a new mspr.hds file didn't write code to check that the folder existed, causing what I assume is a generic t6000 error, probably from something like:

        catch (Exception ex) { WriteToLog("Oops, something broke!"); }

    Read the article

  • Cloud Fact for Business Managers #3: Where Your Data Is, and Who Has Access to It Might Surprise You

    - by yaldahhakim
    Written by: David Krauss. While data security and operational risk conversations usually happen around the desk of a CCO/CSO (chief compliance and/or security officer), or perhaps the CFO, business managers are now selecting cloud providers, so they need to be able to at least ask some high-level questions on the topic of risk and compliance. While the report found that 76% of adopters were motivated to adopt cloud apps because of quick access to software, most of these managers found that after they made a purchase decision, their access to exciting new capabilities in the cloud could be hindered by performance and scalability constraints put forth by their cloud provider. If you are going to let your business consume its mission-critical business applications as a service, then it's important to understand who is providing those cloud services and what kind of performance you are going to get. Different types of departments, companies and industries all have unique requirements, so it's key to take this into consideration as well. Nothing puts a CEO in a bad mood like a public data breach, or finding out the company lost money when customers couldn't buy a product or service because your cloud service provider had a problem. With 42% of business managers having seen a data security breach in their department associated directly with the use of cloud applications, this is happening more than you think. We've talked about the importance of being able to avoid information silos through a unified cloud approach and platform. This is also important when keeping your data safe and secure, and it is a key conversation to have with your cloud provider. Your customers want to know that their information is protected when they do business with you, just as you want your own company information protected. This is really hard to do when each line of business is running different cloud application services managed by different cloud providers, all with different processes and controls. It only adds to the complexity, and the more complex, the more risky, and the greater the chance that something will go wrong. What about compliance? Depending on the cloud provider, it can be difficult at best to understand who has access to your data and where your data is actually stored. Add to this multiple cloud providers spanning multiple departments, and it becomes very problematic when trying to comply with certain industry and country data security regulations. With 73% of business managers complaining that having cloud data handled externally by one or more cloud vendors makes it hard for their department to be compliant, this is a big time suck for executives and it puts the organization at risk. Is There a Complete, Integrated, Modern Cloud Out There for Business Executives? If you are a business manager looking to drive faster innovation for your business and want a cloud application that your CIO would approve of, I would encourage you to take a look at Oracle Cloud. It's everything you want from a SaaS-based application, but without compromising on functionality and other modern capabilities like embedded business intelligence, social relationship management (for your entire business), and advanced mobile. And because Oracle Cloud is built and managed by Oracle, you can be confident that your cloud application services are enterprise-grade. Over 25 million users and 10,000 companies around the globe rely on Oracle Cloud application services every day; maybe your business should too.
    For more information, visit cloud.oracle.com.

    Additional Resources
      • Try it: cloud.oracle.com
      • Learn more: http://www.oracle.com/us/corporate/features/complete-cloud/index.html
      • Research Report: Cloud for Business Managers: The Good, the Bad, and the Ugly

    Read the article

  • MySQL Server 5.6 default my.cnf and my.ini

    - by user12626240
    We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # socket = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    There is also a template file called my-default.cnf or my-default.ini that has these lines near the start:

        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

    On Linux systems, the mysql_install_db command will copy the template file to the final location, where the server will read and use the file, removing the extra three lines. On Windows, the installer will create extra settings based on the answers you gave during installation. Neither will overwrite an existing my.cnf or my.ini file.

    The only initially active setting here changes the value of sql_mode from the server default of NO_ENGINE_SUBSTITUTION to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This strict mode turns warnings for some non-standard behaviour into errors. It can cause applications that rely on the non-standard things, like dates that aren't valid, to lose data. If we had just changed the server default, the new setting would affect all servers that lack an explicit sql_mode setting, including those where strict mode is harmful. So we did it in the default file instead, because that will only affect new server installations. You should expect that in our next version after 5.6, the server default will include STRICT_TRANS_TABLES. Our Windows installer and some of our connectors already use STRICT_TRANS_TABLES by default. Strict has been our preferred setting for many years, and it is good to see some development platforms are using it. If you need the old behaviour, just remove the STRICT_TRANS_TABLES setting. If you do this, please also ask your application provider to make it unnecessary. They can do that by setting the session sql_mode in their own connections (for example, SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';), so the rest of the applications using the server don't have to live with an undesirable default.

    We've kept this file as small as possible, because we found that our old files were too big and confused people. We've also now removed the old my-huge and related example files. One key part of this is the link to the documentation, where we will provide an introduction to some key settings. We'd like to hear your feedback on settings that will benefit most users or are most important to call out for existing users. Please do that by commenting here, or if you prefer, by adding comments to this bug report.

    Read the article

  • Moving to New Machine... also upgrade to 64bit. What steps?

    - by Kendor
    I am about to move to a new Lenovo X201 from my current X61. The current setup has a separate /home, a separate swap partition, and a separate /Data partition. I am currently running 10.04 32-bit. I am considering running 64-bit on the new machine because I will now have 8 GB of RAM, and I would also like to move to 10.10. Ideally I would like to preserve as much of my current setup as possible. The new machine has Win7 on it, but I will blow that away, as I've made a Clonezilla copy of it and will use VirtualBox for when I need Windows. Can someone suggest a good step-by-step for me? I'm networked to a NAS and also have plenty of external USB storage in case I need intermediary steps. So do I set up the new machine first with 64-bit 10.10, with the partition scheme I want, then rsync over /home from the old machine (overwriting the target /home)? Do I need to upgrade the X61 to 10.10 first?

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 Edition of the Oracle Information InDepth Newsletter ORACLE WEBCENTER EDITION Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences. A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions. Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives—plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention. The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user’s behavior, and explicit targeting that takes a user’s profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer. Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence—and share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation. The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices that are in use today. 
Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives while also managing to minimize the time and effort required to manage mobile sites. Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • Bring on the Cheer, Oracle’s Q3 is Here

    - by Kristin Rose
    November is long gone and December is near… this must mean OPN’s Q2 Winter Wrap-Up is here! Listed below are just a few of the highlights from Oracle’s past three months… Yet another successful Oracle OpenWorld 2012 and the launch of our first ever Oracle PartnerNetwork Exchange program! Get the recap. Our exciting Java Embedded @ JavaOne event. Get the low-down here! The debut of our new Oracle Cloud programs for partners, which have already created some awesome buzz in the Channel. Check out the CRN article, and don’t forget to watch the Cloud Programs Overview video and visit our OPN Cloud Knowledge Zone! On the product front, Oracle’s Sun ZFS Storage Appliance was awarded the 2012 Tech Innovator and Enterprise App Award by CRN. Read the full article. Oracle partner, Hitachi Consulting, reached OPN’s premier Diamond Level status. Read more. Was Oracle part of your September, October or November highlights? If so, leave us a comment below, we’d love to feature your story! Also, don’t forget to share the love by re-tweeting this post on Twitter or “liking” this post on Facebook! Stay Warm, The OPN Communications Team 

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business-level requirement to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource-intensive, and traditional infrastructure management architectures have developed over time on a project-by-project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack: application, middleware, operating system, compute, network and storage. Only when this end-to-end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure datacenter. Managing Exalogic is substantially less complex and error-prone than managing traditional systems built from individually sourced, multi-vendor components, because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". The full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only; if you get an error message, please register for the Community first).

    Read the article

  • Documents stored on separate internal drive, Ubuntu doesn't notice on startup

    - by PlanoAlto
    My machine has Windows 7 Ultimate x64 and Ubuntu 12.04 LTS running side by side on a single hard drive with the GRUB bootloader, each with 500 GB of storage. I keep my personal documents on a separate 1 TB hard drive so they remain isolated from any changes I make to the OS drive, but when Ubuntu starts it does not seem to notice my documents drive. While I've installed and worked with Ubuntu 12.04 Server x32 before, using Ubuntu as a desktop OS is new to me. I use my documents drive for all of my personal data, including wallpapers and music, so it is imperative that Ubuntu recognize it on startup. Concerning the two specific examples: Ubuntu loads with the default blue-colored desktop instead of my desired picture of the spectacular Carina galaxy; when I right-click the desktop and select "Change Desktop Background", it wakes up from its amnesia and loads the proper background. As for my music, Rhythmbox defaults to an empty library upon reboot, forcing me to reload the settings manually each time. This gets quite tedious, because I certainly can't work to my full potential without my music. The second thing I would like to address is making Ubuntu point the document directories in ~ to their appropriate counterparts on the 1 TB documents drive. I realize that this question is not new, but when I created the symbolic links, they were created inside the target directories rather than converting the directories themselves into symbolic links. I would also prefer not to move the files themselves from their current location on the 1 TB drive. I believe this would help with the Rhythmbox library problem as well, considering Music is a default directory for the music player. Excerpt from fstab:

        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sdb6 during installation
        UUID=057ac83e-76ad-460d-86e5-b6d46e9b1d80 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sdb7 during installation
        #UUID=1183df90-23fc-44e4-aa17-4e7c9865d5cb none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0

    That's enough content for one question. I really like the Ubuntu experience so far, since it doesn't treat me like an idiot out of the box (can't say the same for Windows), so I can't wait to hear from the community! Thanks for your help in advance.
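
    [Note: illustration only, not from the original question. A drive absent from fstab is only mounted on demand, which matches the "doesn't notice on startup" symptom. A hypothetical entry for the documents drive would look like the line below (get the real UUID from sudo blkid; the ext4 type is a guess - use ntfs-3g if the drive is shared with Windows):

        # hypothetical entry for the 1TB documents drive
        UUID=replace-with-real-uuid  /media/documents  ext4  defaults  0  2

    After that, for example, ln -s /media/documents/Music ~/Music - removing or renaming the empty ~/Music first, since symlinking while the target directory still exists is exactly what creates the link inside the directory instead.]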

    Read the article

  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files. The .htaccess file also had its equivalent code inserted. I have, on many occasions, removed the malicious code, replaced files, changed the FTP password in my FTP client (which is CoreFTP), and changed the connection method to FTPS for more secure storage of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, neither of which found any malware on my computer that might have been stealing my FTP passwords. I used Sucuri SiteCheck to check my website, which says my website is clean of malware. That is bizarre, because I just attempted to click one of the links on the site a minute ago and it redirected me to another one of these random stats.php sites, even though it appears I have gotten rid of the #c3284d# code again (which will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here, and the problem still persists. Currently, when I click on a link within the site's navigation menu in Google Chrome, I get Google's malware warning page:

    Warning: Something's Not Right Here! oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site. Google has found that malicious software may be installed onto your computer if you proceed. If you've visited this site in the past or you trust this site, it's possible that it has just recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go somewhere else? We have already notified oxsanasiberians.com that we found malware on the site. For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing diagnostic page.

    I'm wondering if it is possible that the Google Chrome browser I am using has itself been hacked? Does anyone else get redirected when clicking links on the website?
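
    [Note: a hedged aside, not from the original question. Whatever the re-infection vector, the recurring cleanup can at least be scripted. Run from the site root, something like:

        # list files carrying the injected marker
        grep -rl 'c3284d' .

        # list .php files modified in the last two days
        find . -name '*.php' -mtime -2

    A marker that keeps returning after cleanup usually means the attacker still has a way in - a backdoor file, a stolen credential, or a vulnerable plugin/CMS component - so the search is a diagnostic, not a fix.]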

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influences you. We'll be fetching data from a few social networking sites (e.g. LinkedIn, Facebook, Twitter) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relationship between two users can be determined. We'll be accessing data offline as well, to provide users with accurate results. If we consider Facebook activities, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing the data and its storage and retrieval with the least overhead. For that I've summarized a few points below: Should we switch over to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though none of the team members has experience with them? Should we use SQL Server Analysis Services? Is there any library other than Json.NET for parsing the data? Is it advisable to use a C# library over FQL + GET requests? I've tried to provide as much info as possible. Please share your views.

    Read the article

  • Free Xsigo Technical Pre-sales Workshop for Selected Partners!

    - by mseika
    In 2012 Oracle acquired Xsigo, a developer of network I/O virtualisation solutions. This acquisition complements Oracle's extensive virtualisation portfolio. With Oracle Virtual Networking products (Xsigo) you can: virtualise connectivity from any server to any storage and any network; reduce datacentre complexity by 70%; cut infrastructure expenses by up to 50%. Benefits to Channel Partners: offer a unique proposition that your competitors can't match; provide an innovative solution that delivers more performance at less cost; earn high margins that help sell more products and services. This course is aimed at Technical Pre-Sales Consultants, equipping them to provide detailed demos and to architect RFP feedback and customer solutions. The language of this event is French.

    WHEN: 24th September 2013
    WHERE: Oracle France, 15, boulevard Charles De Gaulle, 92715 COLOMBES
    FEES: Free of charge

    09.00: Welcome, Coffee & Introduction
    09.30: Value Propositions, Architecture & Use Cases
    11.30: Build an OVN Web Quote & TCO
    12.30: Lunch
    13.30: Competitive Summary
    14.00: Design Scenario Workshop
    15.45: Questions/Opportunities

    REGISTRATION: Register via this link as soon as possible, by 14th June at the latest. Note that we have only 20 seats in total for this event, and that after 14th June we will release free seats for other organizations to register. We look forward to your participation!

    What we expect from you: You will bring your own laptop (recommended browser is Firefox 10 ESR). You have checked the material and conducted the assessments. You will be flexible in terms of agenda and progress, as we intend this to be more of a workshop with dialogue rather than sticking tightly to the tentative timeline.

    What this is not: This PartnerLab does not replace Oracle University trainings. It does not lead to a certification as such, and it does not equip Partners with full and complete implementation skills.

    Read the article

  • Offsite Backup

    - by Grant Fritchey
    There was a recent weather event in the United States that seriously impacted our power grid and our physical well-being. Lots of businesses found that they couldn't get to their building, or that their building was gone. Many of them got to do a full test of their disaster recovery processes. A big part of DR is having the ability to get yourself back online in a different location. Now, most of us are not going to be paying for multiple sites, but we need the ability to move to one if needed. The best way to start setting this up is to have an off-site backup. Want an easy way to automate that? I mean, yeah, you can go to tape or to a portable drive (much more likely these days) and then carry that home, but we've all got access to off-site storage these days: SkyDrive, DropBox, S3, etc. How about just backing up to there? I agree. Great idea. That's why Red Gate is setting up some methods around it. Want to take part in the early access program? Go here and try it out.

    Read the article

  • ADF - Now with Robots!

    - by Duncan Mills
    I mentioned this briefly in a tweet the other day, just before the full rush of OOW really kicked off, so I thought it was worth re-visiting. Check out this video, and then read on: So why so interesting? Well, you probably guessed from the title: ADF is involved. Indeed this is about as far from the traditional ADF data entry application as you can get. Instead of a database at the back end there's basically a robot. That's right, this remarkable tape drive is controlled through ADF, using all your usual friends of ADF Faces, Controller and Binding (but no ADF BC, for obvious reasons). ADF is used both on the touch screen you see on the front of the device in the video, and also for the remote management console, which provides a visual representation of the slots and drives. The latter uses ADF's Active Data Framework to provide a real-time view of what's going on in the rack. What's even more interesting (for the techno-geeks) is the fact that all of this is running out of flash storage on a ridiculously small form factor with a tiny processor. I probably shouldn't reveal the actual specs, but take my word for it: don't complain about the capabilities of your laptop ever again! This is a project that I've been personally involved in, and I'm pumped to see such a good result. I have to say, those hardware guys are great to work with (and have way better toys on their desks than we do). More info on the SL150 (should you feel the urge to own one) is here.

    Read the article

  • Is it OK to create all primary partitions?

    - by james
    I have a 320 GB hard disk. I only use Ubuntu or Kubuntu (12.04 for now). I don't want Windows or any other OS in a dual boot. And I need only 3 partitions on my hard disk: one for the OS and the remaining two for data storage. I don't want to create swap either. Now, can I create all the partitions as primary partitions on the hard disk? Are there any disadvantages in doing so? If all the partitions are primary, I think I can easily resize partitions in the future. On second thought, I have the idea of using a separate partition for /home. Is that a good practice? If I do this, I will create 4 partitions, all primary. In any case I don't want to create more than 4 partitions, and I know the limit will be 4. So is it safe to create all 3 or 4 partitions as primary? Please suggest what the good practices are. (Previously I used Win XP and Win 7 on dual boot with 2 primary partitions, and that bugged me somehow; I don't remember how. Since then I have felt there should be only one primary partition on a hard disk.) EDIT 1: Now I will use four partitions in the sequence / , /home , /for-data , swap. I have another question: does a partition need contiguous blocks on the disk? I mean, if I want to resize partitions later, can I add space from sda3 to sda1? Is it possible, and is it safe to do?

    Read the article

  • Default mount options on auto-mounted NTFS partitions (how to add `noexec` and `fmask=0111`?)

    - by jetxee
    I use auto-mounting of external USB devices, and it works as expected, except that NTFS partitions are mounted with the executable flag on. For example:

        /dev/sdb1 on /media/Elements type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)

    All normal files are -rwxrwxrwx on this partition. I am not happy with the x's. I know I can have it mounted the way I want if I pass the fmask=0111 option. Now I use Lucid, and I suppose it uses some new auto-mounting mechanism (gvfs-mount?), but I don't really know how the default mounting options can be changed now. GConf settings in /system/storage/default_options/ntfs/mount_options have no effect. So, how do I make fmask=0111 the default auto-mounting option for all NTFS partitions? (I'd also be grateful if someone explained how the current auto-mounting mechanism works, how to configure it, and, if the default mounting options are hard-coded, what I have to recompile to change them.) I know that I can put a line in /etc/fstab and/or mount manually, but this is not the solution I want, because 1) I don't want to edit /etc/fstab for each and every external drive I use, and 2) fstab records appear in the Places pane of Nautilus even if the drives are not present.

    Read the article

  • What's wrong with performing unit tests against concrete implementations if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite, which was written 10 years ago and has served its purpose. One thing that we cannot change is the database schema, as we have a 500+ client base using this system. Our db schema has over 150 tables. We have decided on Entity Framework 4.1 as the DAL, and we are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go with setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing repositories, units of work, mocking frameworks, etc. This means there will be cost and investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, that will be mostly hand-written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change storage whenever you like; however, that is very likely a YAGNI in our case. What we need is more of an integration-testing setup, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems to be more beneficial for us. Any thoughts on whether I am missing anything here?

    Read the article
