Search Results

Search found 13664 results on 547 pages for 'storage engine'.

Page 32 of 547

  • MySQL Exotic Storage Engines

    Database Journal: "MySQL has an interesting architecture that sets it apart from some other enterprise database systems. It allows you to plug in different modules to handle storage. What that means to end users is that it is quite flexible, offering an interesting array of different storage engines with different features, strengths, and tradeoffs."
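
    A quick way to see that pluggability from a client session is to ask the server which engines it supports and then pick one per table. The sketch below is illustrative only: the connection details are hypothetical, and it assumes the mysql-connector-python package is installed.

        import mysql.connector  # pip install mysql-connector-python

        # Hypothetical credentials; point these at your own server.
        conn = mysql.connector.connect(host='localhost', user='demo',
                                       password='secret', database='test')
        cur = conn.cursor()

        # List the storage engines this server was built with.
        cur.execute('SHOW ENGINES')
        for row in cur.fetchall():
            print('%s: %s' % (row[0], row[1]))  # engine name, support level

        # The engine is chosen per table, not per server.
        cur.execute('CREATE TABLE scratch (id INT PRIMARY KEY) ENGINE=MEMORY')
        conn.close()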

    Read the article

  • Virtual Machine Storage Provisioning and best practices

    If you're using virtualization technology, then at some point you'll have run out of (or will run out of) virtual disk space and had to provision extra storage; are you confident that you know how to do that? Sean Duffy makes sure you're doing it right, sharing his recommendations and tips in this step-by-step guide to virtual machine storage provisioning for VMware. Follow this advice, and you'll be a virtualization veteran in no time.

    Read the article

  • Right-size IT Budgets with Windows Server 2012 "Storage Spaces"

    - by KeithMayer
    What is the largest single cost category in your IT hardware budget? If you're like most of the enterprise customer organizations we surveyed when designing Windows Server 2012, your answer is probably the same as theirs: STORAGE! Among the organizations surveyed, as much as 60% of annual hardware budgets was allocated to expensive SAN hardware to keep up with ever-increasing storage requirements. Wouldn't it be nice to have some of that budget back?

    Read the article

  • Storage Device Manager and NTFS automount at boot time

    - by muneesh
    I am using Storage Device Manager to auto-mount an NTFS file system at boot time. In the assistant options I repeatedly try to uncheck the 'read only mode' checkbox, but the setting does not stick, so I am unable to auto-mount my NTFS partition in read/write mode. Can anyone suggest a solution?
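
    If Storage Device Manager keeps reverting the option, one workaround is to bypass it and describe the mount in /etc/fstab directly. A minimal sketch, assuming the ntfs-3g driver is installed; the device name and mount point are hypothetical and must match your partition:

        # /etc/fstab entry (hypothetical device and mount point)
        /dev/sda5  /media/windows  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0

    With uid/gid set to your user's IDs, the partition comes up read/write at boot instead of read-only.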

    Read the article

  • Survey of MySQL Storage Engines

    MySQL has an interesting architecture that sets it apart from some other enterprise database systems. It allows you to plug in different modules to handle storage. What that means to end users is that it is quite flexible, offering an interesting array of different storage engines with different features, strengths, and tradeoffs.

    Read the article

  • Why might my Fedora 15 live USB persistent storage not work?

    - by Richard J Foster
    I created a Fedora 15 "live" USB stick using the liveusb-creator tool found at https://fedorahosted.org/liveusb-creator/ and the Fedora 15 i686 Desktop ISO image, with the persistent storage space set to 4096 MB. (The USB stick has an 8 GB capacity, so there should be plenty of space.) Fedora appears to boot correctly, but the persistent storage does not seem to be working. To verify this, I opened a terminal, ran su -, then yum update yum. As expected, I was informed that a new version was available (the live image contains version 3.2.29-4; at the time of writing, 3.2.29-6 is current). After installing it, I confirmed the new version with yum --version. I then shut the system down with shutdown now. After rebooting and returning to a terminal, yum --version reported 3.2.29-4, i.e. the original version. Why might the persistent storage not be working, and is there anything I can do to fix it?
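
    One hedged way to check whether the persistent overlay is actually attached, based on how Fedora's LiveOS images are usually assembled (device names can vary between releases):

        # From the booted live session: with persistence active, the root
        # filesystem sits on a device-mapper snapshot (typically live-rw)
        # rather than a plain read-only loop mount.
        mount | grep -i live

    If no snapshot device shows up, the overlay was never attached, which would explain changes vanishing on reboot.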

    Read the article

  • External storage for 2 TB of backups and 4 TB of data: what RAID level? Hardware vs. software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge of external drives for storage, and I'm maxed out. I have some new laptops on the way with much larger drives, and I need to work out a good storage solution for backing them up as well as for storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media, and I would like to build this to handle around 6 TB total so I have some growing room. Since I'm using a Mac Mini as the server, I need external enclosures that support USB 2, FireWire 800 (preferred), or gigabit Ethernet. Performance isn't a huge concern, since the majority of access from other computers happens over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll use my existing two 1 TB drives plus some new 2 TB drives, and swap the 1 TB ones out as I fill up.

    As to the actual questions: Should I use hardware RAID in some enclosure? If the enclosure dies, I'd have to find an identical one to get at my data, right? Wouldn't software RAID be better, since I can connect the drives to the system any way I like? (Remember, OS X Server is my OS.) If I had to reinstall OS X, could I restore the software RAID easily? And what RAID level should I use? For the 2 TB used as the Time Machine disk I don't see why I need RAID at all, since it's already the backup; a single 2 TB drive will do. The remaining 4 TB, however, would be the only copy of the data, so it needs some redundancy. Years ago I had a 2 TB RAID 5 array on a cheap PCI RAID card, and when a drive died it took 48 hours to rebuild. Is that crazy slow for an array of this size, or is it to be expected? Any suggestions for drive enclosures?
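
    On the software-RAID question specifically, OS X's built-in AppleRAID is driven by diskutil. A hedged sketch (the set name and disk identifiers are hypothetical; run diskutil list first to find yours):

        # Mirror two disks into one JHFS+ volume named 'MediaRAID'
        diskutil appleRAID create mirror MediaRAID JHFS+ disk2 disk3

    Since AppleRAID keeps its metadata on the member disks themselves, a set built this way is generally recognized again when the disks are attached through a different enclosure or after an OS reinstall, which bears on the portability concern above.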

    Read the article

  • Which is the best smart automatic file replication solution for cloud storage based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others. The scenario: I upload my birthday video, share it with all of my friends, and it gets stored on one cluster node that has a 100 Mbit connection. Once all of my friends want to download the file, they can't, because the bottleneck is that 100 Mbit link, about 15 MB per second in total; with 1,000 friends downloading, each gets only about 15 KB per second (not even counting the cost of the disk serving the same file). My network infrastructure is as follows: one 1 Gbit (client-facing) server connected to 4 storage nodes that each have a 100 Mbit connection. The 1 Gbit server can handle the traffic of 1,000 users if a storage node can stream more than 15 MB per second to it, with visitors streaming directly from the client-facing server instead of the storage nodes. I can achieve that by replicating the file onto 2 nodes, but I don't want to replicate every file uploaded to my network, since that costs much more. So I need a cloud-based system that automatically pushes files onto replica nodes when demand for them is high, and deletes the extra copies when demand drops, so that each file stays on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It can replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions (other than recommending Amazon S3)?
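
    No off-the-shelf product recommendation here, but the policy being asked for is simple enough to sketch. The code below is a minimal, hypothetical illustration of demand-driven replication: it counts recent downloads per file in a sliding window and adds or drops a replica when the rate crosses a threshold. The thresholds, window size, and the replicate()/unreplicate() hooks are all assumptions; a real system would supply its own transport and node selection.

        import time
        from collections import defaultdict, deque

        REPLICATE_ABOVE = 50   # downloads per window that justify a second copy (assumed)
        DROP_BELOW = 5         # rate below which the extra copy is dropped (assumed)
        WINDOW_SECONDS = 300   # sliding window for measuring demand (assumed)

        class DemandReplicator:
            def __init__(self, replicate, unreplicate):
                # replicate(file_id) and unreplicate(file_id) are hooks the
                # storage layer would provide; hypothetical in this sketch.
                self.replicate = replicate
                self.unreplicate = unreplicate
                self.hits = defaultdict(deque)  # file_id -> recent download times
                self.replicated = set()

            def record_download(self, file_id):
                now = time.time()
                window = self.hits[file_id]
                window.append(now)
                # Forget downloads that fell out of the sliding window.
                while window and window[0] < now - WINDOW_SECONDS:
                    window.popleft()
                rate = len(window)
                if rate > REPLICATE_ABOVE and file_id not in self.replicated:
                    self.replicate(file_id)    # hot file: push a copy to a second node
                    self.replicated.add(file_id)
                elif rate < DROP_BELOW and file_id in self.replicated:
                    self.unreplicate(file_id)  # demand faded: back to a single copy
                    self.replicated.discard(file_id)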

    Read the article

  • Looping Redirect with PyFacebook and Google App Engine

    - by Nick Gotch
    I have a Python Facebook project hosted on Google App Engine and use the following code to handle initialization of the Facebook API using PyFacebook:

        # Facebook Initialization
        def initialize_facebook(f):
            # Redirection handler
            def redirect(self, url):
                logger.info('Redirecting the user to: ' + url)
                self.response.headers.add_header("Cache-Control", "max-age=0")
                self.response.headers.add_header("Pragma", "no-cache")
                self.response.out.write('<html><head><script>parent.location.replace(\'' + url + '\');</script></head></html>')
                return 'Moved temporarily'

            auth_token = request.params.get('auth_token', None)
            fbapi = Facebook(settings['FACEBOOK_API_KEY'], settings['FACEBOOK_SECRET_KEY'], auth_token=auth_token)
            if not fbapi:
                logger.error('Facebook failed to initialize')
            if fbapi.check_session(request) or auth_token:
                pass
            else:
                logger.info('User not logged into Facebook')
                return lambda a: redirect(a, fbapi.get_login_url())
            if fbapi.added:
                pass
            else:
                logger.info('User does not have ' + settings['FACEBOOK_APP_NAME'] + ' added')
                return lambda a: redirect(a, fbapi.get_add_url())
            # Return the validated API
            logger.info('Facebook successfully initialized')
            return lambda a: f(a, fbapi=fbapi)

    I'm trying to set it up so that I can drop this decorator on any page handler method and verify that the user has everything set up correctly. The issue is that when the redirect handler gets called, it starts an infinite loop of redirection. I tried using an HTTP 302 redirection in place of the JavaScript, but that kept failing too. Does anyone know what I can do to fix this? I saw a similar question, but it has no answers.
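
    For reference, a decorator of that shape would be dropped onto a handler like this. This is a hypothetical sketch (the handler class is invented, google.appengine.ext.webapp is assumed, and the comment about fbapi.uid assumes PyFacebook exposes the session uid that way); the wrapped method must accept the fbapi keyword that the final lambda supplies:

        from google.appengine.ext import webapp

        class CanvasHandler(webapp.RequestHandler):
            @initialize_facebook
            def get(self, fbapi=None):
                # fbapi is the validated PyFacebook client injected by the
                # decorator (assuming it exposes the session uid as fbapi.uid)
                self.response.out.write('Logged-in uid: %s' % fbapi.uid)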

    Read the article

  • Google App Engine issue 777: particular solution?

    - by Niklas R
    I use 64.202.189.170 (GoDaddy's forwarding service) for HTTP access to a www subdomain on Google App Engine, as described in GAE issue 777, so that the blank (naked) subdomain forwards to www.domain. Instead, the blank subdomain responds with "This website is temporarily unavailable, please try again later." There is information about this issue at http://knol.google.com/k/google-apps-discussion-group#view and at http://code.google.com/p/googleappengine/issues/detail?id=777. Since I managed to make this work with a .com domain (the GoDaddy-hosted DNS for gralumo.com correctly responds to www..), I now want to do it with a domain whose DNS is managed off-site, and I get the following information about the servers:

        $ ping montao.com.br
        PING montao.com.br (64.202.189.170) 56(84) bytes of data.
        64 bytes from pwfwd-v01.prod.mesa1.secureserver.net (64.202.189.170): icmp_seq=1 ttl=113 time=188 ms
        64 bytes from pwfwd-v01.prod.mesa1.secureserver.net (64.202.189.170): icmp_seq=2 ttl=113 time=188 ms
        ^C
        --- montao.com.br ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1001ms
        rtt min/avg/max/mdev = 188.459/188.692/188.926/0.493 ms

        ubuntu@ubuntu:~$ ping www.montao.com.br
        PING ghs.l.google.com (74.125.43.121) 56(84) bytes of data.
        64 bytes from bw-in-f121.1e100.net (74.125.43.121): icmp_seq=1 ttl=56 time=30.2 ms
        64 bytes from bw-in-f121.1e100.net (74.125.43.121): icmp_seq=2 ttl=56 time=28.0 ms
        64 bytes from bw-in-f121.1e100.net (74.125.43.121): icmp_seq=3 ttl=56 time=24.2 ms
        ^C
        --- ghs.l.google.com ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2003ms
        rtt min/avg/max/mdev = 24.201/27.522/30.282/2.514 ms

    So these look like the same IP numbers whether or not the DNS is off-site; it just works for one domain and not the other. Could it be fixed by simply resetting the appspot app, i.e. removing and re-adding it? Can you recommend how to proceed? Thanks in advance.
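
    For context, the configuration the issue describes usually boils down to two DNS records. The sketch below is a simplified, unverified zone fragment using the hostnames from the question (ghs.google.com was the CNAME target Google documented for Apps domains at the time, and it resolves to the ghs.l.google.com seen in the ping above):

        ; naked domain: A record at GoDaddy's forwarding IP, which redirects to www
        montao.com.br.       A       64.202.189.170

        ; www: CNAME to Google's front-end servers
        www.montao.com.br.   CNAME   ghs.google.com.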

    Read the article

  • Google App Engine: JDO does the job, JPA does not

    - by Phuong Nguyen de ManCity fan
    I have set up a project using both JDO and JPA, with JPA annotations to declare my entities. I then set up my test cases based on LocalTestHelper (from the Google App Engine documentation). When I run the tests, a call to makePersistent on the JDO PersistenceManager works perfectly, but a call to persist on the JPA EntityManager raises this error:

        java.lang.IllegalArgumentException: Type ("org.seamoo.persistence.jpa.model.ExampleModel") is not that of an entity but needs to be for this operation
            at org.datanucleus.jpa.EntityManagerImpl.assertEntity(EntityManagerImpl.java:888)
            at org.datanucleus.jpa.EntityManagerImpl.persist(EntityManagerImpl.java:385)
        Caused by: org.datanucleus.exceptions.NoPersistenceInformationException: The class "org.seamoo.persistence.jpa.model.ExampleModel" is required to be persistable yet no Meta-Data/Annotations can be found for this class. Please check that the Meta-Data/annotations is defined in a valid file location.
            at org.datanucleus.ObjectManagerImpl.assertClassPersistable(ObjectManagerImpl.java:3894)
            at org.datanucleus.jpa.EntityManagerImpl.assertEntity(EntityManagerImpl.java:884)
            ... 27 more

    How can this be? Here is a link to the source code of the Maven projects that reproduce the problem: http://seamoo.com/jpa-bug-reproduce.tar.gz. Execute the Maven test goal on the parent POM and you will notice that 3 of the 4 tests in org.seamoo.persistence.jdo.JdoGenericDAOImplTest pass, while all tests in org.seamoo.persistence.jpa.JpaGenericDAOImplTest fail.

    Read the article

  • Google App Engine - "Invalid sender format" when sending e-mail

    - by Taylor Leese
    I'm trying to send an e-mail using Google App Engine. I'm getting the exception below and I'm not sure why at the moment. Any ideas?

        javax.mail.SendFailedException: Send failure (javax.mail.MessagingException: Illegal Arguments (java.lang.IllegalArgumentException: Bad Request: Invalid sender format))
            at javax.mail.Transport.send(Transport.java:163)
            at javax.mail.Transport.send(Transport.java:48)
            at com.mystuff.service.mail.MailService.sendActivationEmail(MailService.java:145)

    Below is the code related to sending the e-mail:

        public final void sendActivationEmail(final UserAccount user) {
            final Properties props = new Properties();
            final Session session = Session.getDefaultInstance(props, null);
            final Message message = new MimeMessage(session);
            final Multipart multipart = new MimeMultipart();
            final MimeBodyPart htmlPart = new MimeBodyPart();
            final MimeBodyPart textPart = new MimeBodyPart();
            final Locale locale = LocaleContextHolder.getLocale();
            try {
                message.setFrom(new InternetAddress(getFromAddress(), "Qoogeo"));
                message.addRecipient(Message.RecipientType.TO, new InternetAddress(user.getUsername(),
                        user.getFirstName() + " " + user.getLastName()));
                message.setSubject(messageSource.getMessage("mail.subject", null, locale));
                textPart.setContent(messageSource.getMessage("mail.body.txt",
                        new Object[] {getHostname(), user.getActivationKey()}, locale), "text/plain");
                htmlPart.setContent(messageSource.getMessage("mail.body.html",
                        new Object[] {getHostname(), user.getActivationKey()}, locale), "text/html");
                multipart.addBodyPart(textPart);
                multipart.addBodyPart(htmlPart);
                message.setContent(multipart);
                Transport.send(message);
            } catch (MessagingException e) {
                LOGGER.warn(ERROR_MSG, e);
            } catch (UnsupportedEncodingException e) {
                LOGGER.warn(ERROR_MSG, e);
            }
        }

    Also, getFromAddress() returns "[email protected]".

    Read the article

  • Google App Engine: Unit testing concurrent access to memcache

    - by Phuong Nguyen de ManCity fan
    Would you guys show me a way to simulate concurrent access to memcache on Google App Engine? I'm trying with LocalServiceTestHelper and threads, but I haven't had any luck. Every time I try to access memcache from within a thread, I get this error:

        ApiProxy$CallNotFoundException: The API package 'memcache' or call 'Increment()' was not found

    I guess the testing library in the GAE SDK tries to mimic the real environment and thus sets up the environment only for the thread running the test, so it cannot be seen by other threads. Here is a piece of code that reproduces the problem:

        package org.seamoo.cache.memcacheImpl;

        import org.testng.Assert;
        import org.testng.annotations.AfterMethod;
        import org.testng.annotations.BeforeMethod;
        import org.testng.annotations.Test;

        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;
        import com.google.appengine.tools.development.testing.LocalMemcacheServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

        public class MemcacheTest {
            LocalServiceTestHelper helper;

            public MemcacheTest() {
                LocalMemcacheServiceTestConfig memcacheConfig = new LocalMemcacheServiceTestConfig();
                helper = new LocalServiceTestHelper(memcacheConfig);
            }

            @BeforeMethod
            public void setUp() {
                helper.setUp();
            }

            /**
             * @see LocalServiceTest#tearDown()
             */
            @AfterMethod
            public void tearDown() {
                helper.tearDown();
            }

            @Test
            public void memcacheConcurrentAccess() throws InterruptedException {
                final MemcacheService service = MemcacheServiceFactory.getMemcacheService();
                Runnable runner = new Runnable() {
                    @Override
                    public void run() {
                        service.increment("test-key", 1L, 1L);
                        try {
                            Thread.sleep(200L);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        service.increment("test-key", 1L, 1L);
                    }
                };
                Thread t1 = new Thread(runner);
                Thread t2 = new Thread(runner);
                t1.start();
                t2.start();
                while (t1.isAlive()) {
                    Thread.sleep(100L);
                }
                Assert.assertEquals((Long) (service.get("test-key")), new Long(4L));
            }
        }

    Read the article

  • AJAX - querying a search engine and returning the number of results

    - by Moddy
    Right, so basically I need to query a selection of search engines from an AJAX app. As there are a number of different search engines, there's no engine-specific API I can use either. My main problem is getting the number of results returned by a search. So far, I have decided it is probably best to run a regexp over the returned HTML of the search results page and then convert the matched string of characters to an integer. However, this method just doesn't feel clean to me; it seems a bit rough around the edges, and I think it could do with improving. I guess not being 100% confident with regular expressions doesn't help, but it just feels like it could be improved. Any ideas on how to implement this would be great, cheers! It doesn't seem like that exotic a thing to do, so I was wondering if perhaps any of you have done this before and have a few tips. Note: this is an AJAX app at the moment, but I may be re-writing this functionality in a different app soon, which won't be AJAX. I'm confident I can transfer any AJAX implementation to the other language, though.
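
    A minimal sketch of the regexp approach, assuming the engine's results page contains a phrase such as "About 1,234,000 results"; the exact wording and markup differ per engine, so the pattern needs adjusting for each one:

        import re

        def extract_result_count(html):
            # Look for e.g. "About 1,234,000 results" in the returned page.
            match = re.search(r'(?:About\s+)?([\d][\d,.\s]*)\s+results', html)
            if match is None:
                return None
            # Strip thousands separators before converting to an integer.
            return int(re.sub(r'[^\d]', '', match.group(1)))

    A sturdier alternative is to parse the page with an HTML parser and read the element that carries the count; that breaks less often than a raw regexp when the page layout shifts.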

    Read the article

  • Why use Django on Google App Engine?

    - by Travis Bradshaw
    When researching Google App Engine (GAE), it's clear that using Django is wildly popular for developing in Python on GAE. I've been scouring the web to find information on the costs and benefits of using Django, to find out why it's so popular. While I've been able to find a wide variety of sources on how to run Django on GAE and the various methods of doing so, I haven't found any comparative analysis on why Django is preferable to using the webapp framework provided by Google. To be clear, it's immediately apparent why using Django on GAE is useful for developers with an existing skillset in Django (a majority of Python web developers, no doubt) or existing code in Django (where using GAE is more of a porting exercise). My team, however, is evaluating GAE for use on an all-new project, and our existing experience is with TurboGears, not Django. It's been quite difficult to determine why Django is beneficial to a development team when the BigTable libraries have replaced Django's ORM, sessions and authentication are necessarily changed, and Django's templating (if desirable) is available without using the entire Django stack. Finally, it's clear that using Django does have the advantage of providing an "exit strategy" if we later wanted to move away from GAE and needed a platform to target for the exodus. I'd be extremely appreciative of help in pointing out why using Django is better than using webapp on GAE. I'm also completely inexperienced with Django, so elaboration on smaller features and/or conveniences that work on GAE is also valuable to me. Thanks in advance for your time!
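
    For readers weighing the same choice, this is the shape of a bare-bones handler in the webapp framework GAE shipped at the time (a stock example, nothing project-specific); the question is essentially whether Django's remaining pieces justify carrying its stack on top of this:

        from google.appengine.ext import webapp
        from google.appengine.ext.webapp.util import run_wsgi_app

        class MainPage(webapp.RequestHandler):
            def get(self):
                # webapp supplies request/response plumbing and little else;
                # data access, sessions, and templating come from other APIs.
                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write('Hello from webapp')

        application = webapp.WSGIApplication([('/', MainPage)], debug=True)

        def main():
            run_wsgi_app(application)

        if __name__ == '__main__':
            main()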

    Read the article
