Search Results

Search found 30896 results on 1236 pages for 'best buy'.


  • Build vs Buy Webcast: November 8, 2012

    - by TammyBednar
    Date: Thursday, November 8, 2012, 1:00 PM EST You have a choice. Do you build your own database platform or buy a pre-engineered database appliance? Building a high-availability database platform presents unique challenges. Combining servers, storage, networking, OS, firmware, and database is complicated and raises important concerns: Will coordination between multiple SMEs delay deployment? Will it be reliable? Will it scale? Will routine maintenance consume precious IT-staff time? Ultimately, will it work? Enter the Oracle Database Appliance, a complete package of software, server, storage, and networking that’s engineered for simplicity. It saves time and money by simplifying deployment, maintenance, and support of database workloads. Plus, it’s based on Intel Xeon processors to ensure a high level of performance and scalability. Attend this Webcast to hear customer stories and discover how the Oracle Database Appliance: increases ROI by reducing capital and operational expenses; frees IT staff by reducing deployment and management time from weeks to hours; takes the worry out of supporting mission-critical application workloads. Register for this Webcast today!

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by Singletony
    Hi guys. We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI, i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the DBs. We are currently using JDBC and just performing queries using a ResultSet. As more and more data is being created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs: We need to support 'chunk' manipulation rather than an entire DB at once (e.g. LIMIT in JDBC gives very poor performance). We do not need to be constantly connected, since we are just pulling results and creating new tables of our own. We want to understand JDBC alternatives, with respect to advantages and disadvantages. Whether you think JDBC is the way to go or not, what are the best practices to follow depending on context (e.g. for large DBs queried in chunks; a sketch of one option follows this item)? If my question is not clear, I will gladly elaborate! THANK YOU SO MUCH!

    Read the article
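    One common alternative to LIMIT-based paging is to let the JDBC driver stream the ResultSet in batches by setting a fetch size. Below is a minimal sketch of that idea; the connection string, credentials, and the "facts" table are made up for illustration (useCursorFetch is a MySQL-specific hint, and other drivers have their own streaming switches):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class ChunkedScan {
            public static void main(String[] args) throws SQLException {
                // Hypothetical connection details; useCursorFetch makes MySQL honor the fetch size.
                String url = "jdbc:mysql://localhost:3306/warehouse?useCursorFetch=true";
                try (Connection conn = DriverManager.getConnection(url, "user", "secret");
                     Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                           ResultSet.CONCUR_READ_ONLY)) {
                    stmt.setFetchSize(10_000); // stream rows in chunks instead of buffering everything
                    try (ResultSet rs = stmt.executeQuery("SELECT id, amount FROM facts")) {
                        long rows = 0;
                        double total = 0;
                        while (rs.next()) {
                            total += rs.getDouble("amount"); // aggregate, or write to an intermediate table
                            rows++;
                        }
                        System.out.printf("scanned %d rows, total %.2f%n", rows, total);
                    }
                }
            }
        }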

  • NGinx Best Practices

    - by The Pixel Developer
    What best practices do you use while using NGinx? try_files in Subdirectory Credits go to Igor for helping me with this one. location /wordpress { try_files $uri $uri/ @wordpress; } location @wordpress { fastcgi_pass 127.0.0.1:9000; fastcgi_split_path_info ^(/wordpress)(/.*)$; fastcgi_param SCRIPT_FILENAME /var/www/wordpress/index.php; fastcgi_param PATH_INFO $fastcgi_path_info; } Normally PATH_INFO would include the "/wordpress" prefix, so we use the "fastcgi_split_path_info" directive to grab the part of the URI after "/wordpress". This allows us to serve WordPress with and without the index.php file.

    Read the article

  • Best way to replicate / mirror 100s of databases in SQL 2005

    - by mrwayne
    Hi, I currently host around 400-500 SQL 2005 databases of varying sizes (1-10 GB each). I am aware of most of the different methods available and the general pros/cons of mirroring, log shipping, replication and clustering, but I am not aware of how well they tend to perform when employed at the scale I have specified (400-500 unique databases). Does anyone have any good advice on what is likely the best method for being able to fail over to another server with this sort of setup? Failover does not need to be immediate; I'm just looking for something better than taking backups every day and moving them to storage. I'm preferably looking for something that also makes it easy to manage the databases in bulk (as opposed to one at a time). Thanks for your input!

    Read the article

  • Best Practice: iDRAC & NIC Selection

    - by Josh Brower
    I am setting up a new Dell server with iDRAC 6 Express. My options for the NIC are: 1) Shared 2) Shared with failover to LOM2 3) Shared with failover to all LOMs. The server has 2x dual-NIC PCI-E cards (4 NICs total). My questions are: What is the best practice for setting this up? Is there any reason why I would not want option 3? If the NIC is being used for both iDRAC and the OS (there is no dedicated iDRAC NIC), does this ever cause any kind of issue for either iDRAC or the OS? Thanks- -Josh

    Read the article

  • datacenter network change control best practices

    - by jpolache
    I have been tasked with compiling a list of possible network equipment changes at a data center. The task includes tagging which changes need change control and which don't. Does anyone know of a "best practices" list that I can start from? The methods for doing change control at this data center are well established. The list would be of specific configuration items that should or should not be included in the change control process, for example: static route entries, switch port assignments, firewall rule additions/changes, etc.

    Read the article

  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology considering that: 6 x Exchange 2010 Standard licenses; 2 x separate locations that are supposed to provide redundancy in case of link problems; 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection/Security; multiple locations worldwide using those Exchange servers. Most locations will be connected with a VPN tunnel (the ones hosting Exchange, for sure). I was thinking something like this: Location MAIN (about 70-100 people): 2x TMG 2010 in NLB, 1x Exchange 2010 CAS/HUB role, 2x Exchange 2010 Mailbox role (Active + Passive). Location SUPPORT (about 20 people): 2x TMG 2010 in NLB, 1x Exchange 2010 CAS/HUB role, 2x Exchange 2010 Mailbox role (Active + Passive). Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice-versa. We have 6-7 locations and more coming up (not big ones, but like 10+ people per location). I do know that CAS/HUB is a single point of failure (and no NLB), but I simply lack the licenses to add redundancy there. What do you think about this approach? What would be a better approach, in your opinion?

    Read the article

  • SQL Server data platform upgrade - Why upgrade and how best you can reduce pre & post upgrade problems?

    - by ssqa.net
    Whether a SQL Server upgrade involves database(s), instance(s), or both, the process and procedures must follow best practices in order to reduce any problems that may occur after the platform is upgraded. The success of any project relies upon simple methods of implementation and a process to reduce the complexity in testing to ensure a successful outcome. This has also been a popular topic ... (read more)

    Read the article

  • What is the best way to do development with git?

    - by marlene
    I have been searching the web for best practices, but don't see anything that is consistent. If you have an excellent development process that includes successful releases of your product as well as hotfixes/patches and maintenance releases, and you use git, I would love to hear how you use git to accomplish this. Do you use branches, tags, etc.? How do you use them? (One common convention is sketched after this item.) I am looking for details, please.

    Read the article
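    One widely used convention is a git-flow style layout: long-lived master and develop branches, short-lived release and hotfix branches cut from them, and an annotated tag for every release. The sketch below is offered only as an illustration of that convention, not as the poster's own process; branch and version names are made up:

        # cut a release branch from the integration branch
        git checkout -b release/1.3 develop
        # ...stabilize, bump version numbers, fix last-minute bugs...
        git checkout master
        git merge --no-ff release/1.3
        git tag -a v1.3.0 -m "Release 1.3.0"
        git checkout develop
        git merge --no-ff release/1.3      # carry release fixes back into develop
        git branch -d release/1.3

        # patch a production problem directly from the released tag
        git checkout -b hotfix/1.3.1 v1.3.0
        # ...fix and commit...
        git checkout master
        git merge --no-ff hotfix/1.3.1
        git tag -a v1.3.1 -m "Hotfix 1.3.1"
        git checkout develop
        git merge --no-ff hotfix/1.3.1
        git branch -d hotfix/1.3.1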

  • How to Buy an SD Card: Speed Classes, Sizes, and Capacities Explained

    - by Chris Hoffman
    Memory cards are used in digital cameras, music players, smartphones, tablets, and even laptops. But not all SD cards are created equal — there are different speed classes, physical sizes, and capacities to consider. Different devices require different types of SD cards. Here are the differences you’ll need to keep in mind when picking out the right SD card for your device.
    Speed Class: In a nutshell, not all SD cards offer the same speeds. This matters for some tasks more than it matters for others. For example, if you’re a professional photographer taking photos in rapid succession on a DSLR camera and saving them in high-resolution RAW format, you’ll want a fast SD card so your camera can save them as fast as possible. A fast SD card is also important if you want to record high-resolution video and save it directly to the SD card. If you’re just taking a few photos on a typical consumer camera, or you’re just using an SD card to store some media files on your smartphone, the speed isn’t as important. Manufacturers use “speed classes” to measure an SD card’s speed. The SD Association that defines the SD card standard doesn’t actually define the exact speeds associated with these classes, but it does provide guidelines. There are four different speed classes — 2, 4, 6, and 10. Class 10 is the fastest, while class 2 is the slowest. Class 2 is suitable for standard-definition video recording, while classes 4 and 6 are suitable for high-definition video recording. Class 10 is suitable for “full HD video recording” and “HD still consecutive recording.” There are also two Ultra High Speed (UHS) speed classes, but they’re more expensive and are designed for professional use; UHS cards are designed for devices that support UHS. (The class logos, from slowest to fastest, appear in the original article.) You’ll probably be okay with a class 4 or 6 card for typical use in a digital camera, smartphone, or tablet. Class 10 cards are ideal if you’re shooting high-resolution videos or RAW photos. Class 2 cards are a bit on the slow side these days, so you may want to avoid them for all but the cheapest digital cameras. Even a cheap smartphone can record HD video, after all. An SD card’s speed class is identified on the SD card itself. You’ll also see the speed class on the online store listing or on the card’s packaging when purchasing it. (In the photo accompanying the original article, the middle SD card is speed class 4, while the two other cards are speed class 6.) If you see no speed class symbol, you have a class 0 SD card. These cards were designed and produced before the speed class rating system was introduced; they may be slower than even a class 2 card.
    Physical Size: Different devices use different sizes of SD cards. You’ll find standard-size SD cards, miniSD cards, and microSD cards. Standard SD cards are the largest, although they’re still very small. They measure 32x24x2.1 mm and weigh just two grams. Most consumer digital cameras for sale today still use standard SD cards. They have the standard “cut corner” design. miniSD cards are smaller than standard SD cards, measuring 21.5x20x1.4 mm and weighing about 0.8 grams. This is the least common size today. miniSD cards were designed to be especially small for mobile phones, but we now have a smaller size. microSD cards are the smallest size of SD card, measuring 15x11x1 mm and weighing just 0.25 grams. These cards are used in most cell phones and smartphones that support SD cards. They’re also used in many other devices, such as tablets. SD cards will only fit into matching slots. You can’t plug a microSD card into a standard SD card slot — it won’t fit. However, you can purchase an adapter that gives a smaller SD card the shape of a larger one so it fits the appropriate slot.
    Capacity: Like USB flash drives, hard drives, solid-state drives, and other storage media, different SD cards can have different amounts of storage. But the differences between SD card capacities don’t stop there. Standard SDSC (SD) cards are 1 MB to 2 GB in size, or perhaps 4 GB — although 4 GB is non-standard. The SDHC standard was created later and allows cards 2 GB to 32 GB in size. SDXC is a more recent standard that allows cards 32 GB to 2 TB in size. You’ll need a device that supports SDHC or SDXC cards to use them. At this point, the vast majority of devices should support SDHC; in fact, the SD cards you have are probably SDHC cards. SDXC is newer and less common. When buying an SD card, you’ll need to buy the right speed class, size, and capacity for your needs. Be sure to check what your device supports and consider what speed and capacity you’ll actually need. Image Credit: Ryosuke SEKIDO on Flickr, Clive Darra on Flickr, Steven Depolo on Flickr

    Read the article

  • Will buy simple Cocos2D bubbles iPad game for private use (source)

    - by boliva
    Hi, First of all, sorry if this is the wrong place for posting this kind of request; I don't know if there is already a marketplace on the Stack community. I'm a fairly experienced iPhone/iPad developer with several apps already published. I have a deep understanding of Objective-C and the Cocoa framework, as well as of the iPhone development tools. However, I have never used Cocos2D (or any other gaming engine for that matter), as I've mostly specialized in utilities/productivity apps. I am in urgent need of developing a really simple iPad game (for which I will provide all of the media assets - graphics and sounds) that needs to be deployed in about a week from now. Basically, the game should allow the user to pop bubbles of different sizes and speeds as they move from the bottom to the top of the screen. While I could take the time to read the documentation and start working on this game myself, I'm currently on a couple of other projects that I need to finish soon, so I would like to ask for the help of some other, more experienced Cocos2D developer who could develop this game in its basic form for me. If you think you can help, please send me your quote, timing and, if possible, samples of previous work done with Cocos2D that would be similar to what I need. I can provide more detail upon request. Best, and thank you all.

    Read the article

  • ASP.NET MVC Best Practices, Tips and Tricks

    - by Koistya Navin
    Please share your ideas on what could serve as best practices or guidelines for creating ASP.NET MVC web applications. These ideas and/or code samples should be relevant to ASP.NET MVC application creation itself and not to TDD or similar practices. Other resources: ASP.NET MVC Best Practices (Part 1) by Kazi Manzur Rashid ASP.NET MVC Best Practices (Part 2) by Kazi Manzur Rashid

    Read the article

  • How to organize a large number of objects

    - by shane
    We have a large number of documents and metadata (XML files) associated with these documents. What is the best way to organize them? Currently we put them into a series of nested folders: /repository/category/date(when they were loaded into our db)/document_number.pdf and .xml. We use the path as a unique identifier for the document in our system. This is more versatile than putting them all in a single flat folder; it is also independent of our database/application, so we can reload the files in case of failure. Yet it introduces some limitations: for example, we can't move the files once they've been placed in this structure, and it takes work to put them there in the first place. What is the best practice? How do websites such as Scribd deal with this problem?

    Read the article

  • NHibernate Transactions Best Practices

    - by Ramiro
    I have been reading about NHibernate for a while and have been trying to use it for a site I'm implementing. I read the article by Billy McCafferty on NHibernate best practices, but I did not see any indication of where the best place to handle transactions is. I thought of putting that code in the Data Access Object (DAO), but then I'm not sure how to handle cases in which more than one DAO is used (one common pattern is sketched after this item). What are the best places to put transaction code in your NHibernate application?

    Read the article
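    The usual answer is to keep the transaction boundary above the DAO layer, in a unit of work scoped to a request or use case, so a single transaction can span several DAOs. The question is about NHibernate, but the pattern reads almost identically in Java Hibernate, whose API NHibernate mirrors; the class and DAO names below are invented for illustration:

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;
        import java.util.function.Consumer;

        // Transaction handling lives in one place, above the DAOs.
        public class UnitOfWork {
            private final SessionFactory sessionFactory;

            public UnitOfWork(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            public void run(Consumer<Session> work) {
                Session session = sessionFactory.openSession();
                Transaction tx = session.beginTransaction();
                try {
                    work.accept(session);   // hand the same session to every DAO involved
                    tx.commit();
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }
        }

        // Usage: service-level code composes DAOs inside one unit of work.
        // new UnitOfWork(factory).run(session -> {
        //     orderDao.save(session, order);
        //     auditDao.record(session, "order created");
        // });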

  • Best approach for coding?

    - by ahmed
    How should I decide on the best approach for coding, the way a smart programmer would? I only started programming last year, in VB, and I keep hearing this phrase, but I have never been able to choose the best approach for coding on my own. When I search for a coding example on the internet, I find different approaches used to achieve the same target. So help me find the best approach. (asp.net, vb.net)

    Read the article

  • Am I wrong to disagree with A Gentle Introduction to symfony's template best practices?

    - by AndrewKS
    I am currently learning symfony and going through the book A Gentle Introduction to symfony, and came across this section in "Chapter 4: The Basics of Page Creation" on creating templates (or views): "If you need to execute some PHP code in the template, you should avoid using the usual PHP syntax, as shown in Listing 4-4. Instead, write your templates using the PHP alternative syntax, as shown in Listing 4-5, to keep the code understandable for non-PHP programmers." Listing 4-4 - The Usual PHP Syntax, Good for Actions, But Bad for Templates <p>Hello, world!</p> <?php if ($test) { echo "<p>".time()."</p>"; } ?> (The ironic thing about this is that the echo statement would look even better if time were a variable declared in the controller, because then you could just embed the variable in the string instead of concatenating.) Listing 4-5 - The Alternative PHP Syntax, Good for Templates <p>Hello, world!</p> <?php if ($test): ?> <p><?php echo time(); ?></p> <?php endif; ?> I fail to see how Listing 4-5 makes the code "understandable for non-PHP programmers", and its readability is shaky at best; 4-4 looks much more readable to me. Are there any programmers using symfony who write their templates like those in 4-4 rather than 4-5? Are there reasons I should use one over the other? There is the very slim chance that somewhere down the road someone less technical could be editing the template, but how does 4-5 actually make it more understandable to them?

    Read the article

  • automated email downloading and threading similar messages

    - by Michael
    Okay, here it is: I have built a C# console app that downloads email, saves attachments, and stores the subject, from, to, and body in an MS SQL database. I use the aspNetPOP3 component to do this. I have built a front-end ASP.NET application to search and view the messages. Works great. Next steps (this is where I need help): Now I want my users (of the ASP.NET app) to reply to a message, send the email to the originator, and thread any additional replies back and forth from that original message (like Basecamp). This would allow my end users to not have to log in to a system; they can just continue using email (our users can as well). The question is: what should I use to determine if messages are related? The subject line, I think, is a bad approach. I believe the best method I've seen so far is the way Basecamp does it, but I'm not sure how that is done. Here is a real example of the reply-to address from a Basecamp email (I've changed the host name): [email protected] Basecamp is obviously pre-pending a tracking id to the email address; however, when I try this with my mail service, it's rejected. Is this the best approach? Is there a way I can accomplish this? Is there a better approach, or even a better email component tool? (The tracking-id idea is sketched after this item.) Thanks, Mike

    Read the article
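    The tracking-id trick usually relies on sub-addressing (the user+tag@domain form), which the receiving mail server has to support; a raw prefix bolted onto the mailbox name is typically rejected, which may be what happened here. Below is a minimal sketch of generating and parsing such a reply-to address; the mailbox name, domain, and id format are all invented for illustration (the post's app is C#, but the idea is language-neutral):

        import java.util.Optional;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ReplyTracking {
            // Hypothetical mailbox and domain; the "+msg-<id>" tag survives delivery
            // on mail servers that support sub-addressing.
            private static final String MAILBOX = "support";
            private static final String DOMAIN = "example.com";
            private static final Pattern REPLY_TO = Pattern.compile(
                    "^" + Pattern.quote(MAILBOX) + "\\+msg-(\\d+)@" + Pattern.quote(DOMAIN) + "$");

            // Build the Reply-To value for an outgoing notification about a stored message.
            static String replyToFor(long messageId) {
                return MAILBOX + "+msg-" + messageId + "@" + DOMAIN;
            }

            // When a reply arrives, recover the original message id from the To address.
            static Optional<Long> messageIdFrom(String toAddress) {
                Matcher m = REPLY_TO.matcher(toAddress.trim().toLowerCase());
                return m.matches() ? Optional.of(Long.parseLong(m.group(1))) : Optional.empty();
            }

            public static void main(String[] args) {
                String out = replyToFor(42);               // support+msg-42@example.com
                System.out.println("Reply-To: " + out);
                System.out.println("Thread id: " + messageIdFrom(out).orElse(-1L));
            }
        }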

  • Are there any "best practices" on cross-device development?

    - by vstrien
    Developing for smartphones in the way the industry is currently doing it is relatively new. Of course, there has been enterprise-level mobile development for several decades, but the platforms have changed. Think of the move from stylus input to touch input (different screen resolutions, different control layouts, etc.) and of new ways of handling multi-tasking on mobile platforms (e.g. WP7's "tombstoning"). The way these platforms work isn't totally new (the iPhone has been around for quite a while now, for example), but at the moment, developing a functionally equal application for both desktop and smartphone comes down to developing two applications from the ground up. Especially with the birth of Windows Phone, with the .NET platform on board and Silverlight as the UI language, it's becoming appealing to promote the re-use of (parts of) the UI. Still, it's fairly obvious that the needs of an application on a smartphone (or tablet) are very different from the needs of a desktop application. An (almost) one-to-one conversion will therefore be impossible. My question: are there "best practices", pitfalls, etc. documented about developing "cross-device" applications (for example, developing an app for both the desktop and the smartphone/tablet)? I've been looking at weblogs, scientific papers and more for a week or so, but what I've found so far is only about "migratory interfaces".

    Read the article

  • Best practice for removing a DC from a site that no longer connects via VPN in another city

    - by dasko
    Hi, I am looking for a recap of what I have done already to see if I missed anything. I had two cities connected by WAN using a persistent IPsec tunnel between gateways. I had one DC (domain controller) in each city that was a global catalog server (GC); they were set up to replicate, and I had them configured under Sites and Services with their own subnets, etc. About 6 months ago the one city was removed, and I was not able to gracefully remove, through dcpromo, the server that was there. It is no longer used and cannot be brought back. The company went from two sites down to a single site. The problem is I had a whole bunch of KCC errors and replication bugs in the event viewer. I wanted to clean up my Active Directory and decided to use the ntdsutil metadata cleanup commands. I removed the server from the specified site based on a procedure from the Petri website. I then removed the instances of the old DC and site from Sites and Services. Then I went and cleaned up DNS by removing the Host A records and the NS server name from both the local DNS forward lookup zone and _msdcs. I also removed the reverse lookup zone for the subnet that no longer exists. Is there anything I missed? Thanks in advance for any help. gd

    Read the article

  • What are the best practices for service accounts?

    - by LockeCJ
    We're running several services in our company using a shared domain account. Unfortunately, the credentials for this account are widely distributed and being used frequently for both service and non-service purposes. This has led to a situation where it is possible that the services will be temporarily down due to this shared account being locked. Obviously, this situation needs to change. The plan is to change the services to run under a new account, but I don't think this goes far enough, as that account is subject to the same locking policy. My question is this: should we be setting up the service accounts differently than other domain accounts, and if so, how do we manage those accounts? Please keep in mind that we are running a 2003 domain, and upgrading the domain controller is not a viable solution in the near term.

    Read the article

  • GPO best practices : Security-Group Filtering Versus OU

    - by Olivier Rochaix
    Good afternoon everyone, I'm quite new to Active Directory stuff. After upgrading the functional level of our AD from 2003 to 2008 R2 (I needed it for fine-grained password policies), I started to reorganize my OUs. I keep in mind that a good OU organization facilitates the application of GPOs (and maybe GPPs). But in the end, it feels more natural for me to use security-group filtering (from the Scope tab) to apply my policies, instead of linking directly to OUs. Do you think this is a good practice, or should I stick to OUs? We are a small organisation with 20 users and 30-35 computers, so we have a simple OU tree but a more subtle split with security groups. The OU tree doesn't contain any objects except at the bottom level. Each bottom-level OU contains computers, users and, of course, security groups. These security groups contain the users and computers of the same OU. Thanks for your advice, Olivier

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can submit a more elegant solution.
    1) File Based: The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack: someone loading site.com could set the host header to site.fakeTLD and then the system would load my development config files instead of production.
    2) Config Based: The config files contain one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting.
    3) File System Check Based: All the development machines have an extra empty file called "development.txt" or something. Each time the system loads, it checks for this file - if found, it knows it is in development mode; if missing, it knows it is in production mode. (A sketch of this approach follows this item.) Since the file is NEVER ADDED to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since all filesystem checks are slow.

    Read the article
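    A minimal sketch of the file-system-check approach (method 3); the marker file name and config keys are invented for illustration, and the same idea works in any language:

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Map;

        public class Config {
            // The marker file exists on developer machines only and is never committed,
            // so its absence means we are running in production.
            private static final String DEV_MARKER = "development.txt";

            public static Map<String, String> load() {
                boolean development = Files.exists(Paths.get(DEV_MARKER));
                if (development) {
                    return Map.of(
                            "db.host", "127.0.0.1",
                            "db.name", "site_dev",
                            "debug", "true");
                }
                return Map.of(
                        "db.host", "db.internal.example.com",
                        "db.name", "site",
                        "debug", "false");
            }

            public static void main(String[] args) {
                System.out.println("Loaded config: " + load());
            }
        }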

  • best practices for setting development environment

    - by Sharique
    I use Linux as my primary OS. I need some suggestions on how I should set up my desktop and development environment. I work mostly on .NET and Drupal, but some of the time on other LAMP products and on C/C++ and Qt. I'm also interested in mobile (Android...) and embedded development. Currently I install everything on my main OS, even if I use it only a little, and I use VMs a little. Should I use a separate VM for each kind of development (like one for .NET/Mono, another for C++, one for mobile, one for DB only, one for xyz things, etc.), or keep the primary development environment on the main OS and move the others into VMs? The main OS should not get messed up, things should be easy to organize (a must), and performance should be optimal.

    Read the article

  • Best way to override 1024 process ulimit

    - by CamelBlues
    On CentOS distros, there is an /etc/security/limits.d/90-nproc.conf that sets a process limit for all users: # Default limit for number of user's processes to prevent # accidental fork bombs. # See rhbz #432903 for reasoning. * soft nproc 1024 I'd like to keep this limit in there, but allow one user to have more than 1024 processes. Because of how the server is puppetized, I'm unable to use the built-in bash ulimit command. (A per-user override is sketched after this item.)

    Read the article
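    limits.d entries can be scoped to a single account, so one option is a drop-in file that raises nproc for just that user while the global 1024 default stays in place. A minimal sketch, with the file name and username made up for illustration (pam_limits must be active for the login path the service uses):

        # /etc/security/limits.d/91-appuser-nproc.conf   (hypothetical drop-in)
        # Raise the process ceiling for the one service account only.
        appuser    soft    nproc    4096
        appuser    hard    nproc    8192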

  • Win7 Domain User Profile- Desktop Icon management best practices request

    - by Doltknuckle
    Here's the situation: We have a large (5,000+ user) organization that is currently using folder redirection to manage the Windows desktop icons. This folder is redirected to a network share where we can centrally manage the different sites and such. When a user tries to use a computer while the network is not available, they are unable to use any shortcuts in the Public folder. We only redirect the C:\Users\%username%\Desktop folder. Does anyone have any suggestions on how to go about managing desktop icons? We still want a central location to manage these items, but need a way to keep the system working when the network is unavailable. As a point of clarification, the network rarely goes down; we do have instances where a few computers do not have a network connection, and usually something is simply unplugged. Since we have multiple sites, the line from a branch to the central office has gone down a few times. This is more an attempt to maintain a positive end-user experience when disconnected from the network.

    Read the article
