Search Results

Search found 10755 results on 431 pages for 'cluster shared volume'.

  • C++ shared libraries

    - by saminny
    Hi, I am trying to get my head around the way shared libraries work in the C++/Unix environment. I understand that we only need header files, not the shared libraries themselves, when compiling code. But if I want to create an executable or shared library from my compiled files, do I need to specify the shared library dependencies (the dynamic ones)? And do the paths of the shared libraries at link time need to match the paths from which they are loaded at runtime? I am using Linux 2.6.18-164.11.1.el5 #1 SMP x86_64 GNU/Linux. I am having a problem where my code is not able to pick up a library at runtime. I have tried setting LD_LIBRARY_PATH and PATH, but at runtime, when I run the executable, I get the following error:

      Error: librc.so: cannot open shared object file: No such file or directory

    Sam
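
    For reference, a minimal sketch of the usual Linux build-and-run cycle (the library name is taken from the error above, but all paths are purely illustrative):

      # Build the shared library with position-independent code
      g++ -fPIC -shared rc.cpp -o /path/to/libs/librc.so

      # Link the executable; -L is only consulted at link time
      g++ main.cpp -o myapp -L/path/to/libs -lrc

      # At runtime the dynamic loader ignores -L (and PATH), so either
      # extend its search path for the session...
      export LD_LIBRARY_PATH=/path/to/libs:$LD_LIBRARY_PATH
      ./myapp

      # ...or bake the search path into the binary with an rpath:
      g++ main.cpp -o myapp -L/path/to/libs -lrc -Wl,-rpath,/path/to/libs

      # ldd shows which libraries the loader resolves, and which it cannot:
      ldd myapp

    In short: at link time only the library's location via -L matters, while at load time the loader searches the ld.so.conf paths, LD_LIBRARY_PATH, and any rpath baked into the binary; PATH is never used for libraries.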

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher, and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice, when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.)

    The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but there is no easy way to know that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that is obvious to the human eye. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out which particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out.

    The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle, and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy, and I was hoping that you CS folks and programmers might know of a more elegant solution?

    Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N² implementation first. So: take every particle, calculate the distance to all other particles, and the threshold for being in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order-N search and incrementing a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ≈ 7d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, and each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N·log(N) as opposed to N² speeds. I got somewhere between the two: for a 50,000-particle subset I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.

  • System State Backups using NTbackup fail with error 0x800423f4 (relating to volume shadow copy)

    - by Paul Zimmerman
    We have a Windows Server 2003 R2 running Service Pack 2. It is a domain controller (Global Catalog) and our main internal DNS server. We run a System State backup of the machine to back up Active Directory information and save the backup to a different server. This server has a single drive (C:), and we do have Shadow Copies enabled for the volume (which are completing successfully). The System State backup is now failing with the following listed in the backup logs:

      Volume shadow copy creation: Attempt 1. "Event Log Writer" has reported an error 0x800423f4. This is part of System State. The backup cannot continue.
      Error returned while creating the volume shadow copy: 800423f4
      Aborting Backup.
      The operation did not successfully complete.

    When doing a vssadmin list writers, we sometimes get the following reported for the Event Log Writer (other times it says that it is in the state "[1] Stable" with "No error"):

      Writer name: 'Event Log Writer'
      Writer Id: {eee8c692-67ed-4250-8d86-390603070d00}
      Writer Instance Id: {c7194e96-868a-49e5-ba99-89b61977753c}
      State: [8] Failed
      Last error: Retryable error

    We have tried disabling the event log service via the registry, rebooting, deleting the event log files from the drive, then re-enabling the service via the registry and rebooting, but this didn't seem to solve the issue. We also get an error message in the Event Viewer when trying to open the log for the "File Replication Service": "Unable to complete the operation on 'File Replication Service'. The security descriptor structure is invalid." I have searched the error via Google and tried a number of different things, but nothing has seemed to help. Any suggestions on what we might try to get the Event Log Writer to behave would be greatly appreciated!

  • Error 0x80300001 Installing Windows Server 2008 R2 64bit on FastTrak TX4660 RAID volume

    - by Konstantin Boyandin
    I am trying to install Windows Server 2008 R2 Enterprise 64-bit on the following hardware:

      - motherboard: Intel DBS1200BTL
      - Promise FastTrak TX4660 RAID controller
      - 4 disks set up in two RAID1 arrays (handled by the FastTrak)

    I am trying to install Windows so that it boots from a RAID1 volume created with the FastTrak controller. The installation goes as in the manual: I insert the disk with the driver, select 'Browse' and specify the correct driver, and it finds all the RAID arrays, but it reports error 0x80300001 and warns that Windows can't be installed on the listed RAID volumes since they may not be bootable (even though the target RAID volume is first in the boot options list). If I proceed with the installation, Windows copies and unpacks itself and performs the other standard actions after that. After the computer is restarted, it won't boot (Windows Boot Manager appears in the boot devices list; however, neither it nor the RAID volume itself boots). Is this a known problem? I can attach the boot disks to the motherboard and use its RAID capabilities instead, but I'd prefer the FastTrak ones. The driver version is 1.3.0.4. Thanks.

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days, at different times -- not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back, and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade; however, it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, as the event logs provide no help?

    Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. Whilst I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster is assuming that the VM is running happily, as I presume Hyper-V says everything is great even though there is a problem.

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

      mgorven@moab:~% sudo lvdisplay /dev/moab/backup
        --- Logical volume ---
        LV Name                /dev/moab/backup
        VG Name                moab
        LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                500.00 GiB
        Current LE             128000
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     2048
        Block device           252:3

      mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
        type:    LUKS1
        cipher:  aes-cbc-essiv:sha256
        keysize: 256 bits
        device:  /dev/mapper/moab-backup
        offset:  3072 sectors
        size:    1048572928 sectors
        mode:    read/write

      mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun  1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun  1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks
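
    For reference, a commonly cited sequence for shrinking this particular stack, as a sketch only (the 90G intermediate size is an arbitrary safety margin, and everything should be backed up first):

      # 1. Unmount and check the filesystem
      umount /srv/backup
      e2fsck -f /dev/mapper/backup

      # 2. Shrink ext4 to comfortably below the target LV size
      resize2fs /dev/mapper/backup 90G

      # 3. Shrink the LV itself to the target size
      lvreduce -L 100G /dev/moab/backup

      # 4. Shrink the LUKS mapping; with no explicit size, cryptsetup
      #    re-reads the (now smaller) underlying device size
      cryptsetup resize backup

      # 5. Grow the filesystem back out to fill the LUKS mapping exactly
      resize2fs /dev/mapper/backup
      e2fsck -f /dev/mapper/backup

    Shrinking the filesystem below the target first and re-growing it at the end avoids having to hand-calculate the LUKS header offset (3072 sectors here) when converting between LV size and mapping size.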

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad-core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (LeftHand-speak for network mirroring; think of it as RAID 1 across nodes, or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server at our main site. All connections will be GbE, with two dedicated to storage. Management and data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, DHCP, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated data/management interfaces?

  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit Minimal server, to test out glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some "hands-on" experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

      192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

    where 192.168.122.120 is the IP address of the first "glusterfs server". If I issue either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, a mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up, the volume is not mounted! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy get-around, I put a sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol into each client node's /etc/rc.local. It seems to be able to get the volume mounted on each node, as far as I can tell. But this is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-not-mounting-at-boot issue correctly. Note that I used the IP address of the first "glusterfs server" although the /etc/hosts of all nodes has been populated with their hostnames; I figured that using the IP address is more robust. --Zack
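
    One less ugly variant of the same workaround is to move the remount out of rc.local and into an upstart job that fires once the network is actually up. A sketch only -- the job name is made up here, and the event combination is untested against 12.04's exact boot timing:

      # /etc/init/mount-testvol.conf
      description "Mount the glusterfs test volume once networking is up"
      start on (started networking and static-network-up)
      task
      script
          # retry briefly in case the gluster server is still coming up
          for i in 1 2 3 4 5; do
              mount.glusterfs 192.168.122.120:/testvol /var/local/testvol && exit 0
              sleep 2
          done
      end script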

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario:

      - Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume.
      - As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots.
      - Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr.
      - That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var.
      - The backup script takes a snapshot of /var.
      - The backup script creates backups from the snapshots it has, er, snapshotted.

    So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
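
    LVM itself has no atomic multi-volume snapshot, but one common mitigation is to quiesce all the filesystems for the instant the snapshots are taken, so no write can land between them. A sketch, assuming a hypothetical vg0 volume group and util-linux's fsfreeze:

      # Block new writes on both filesystems (reads still proceed)
      fsfreeze --freeze /usr
      fsfreeze --freeze /var

      # Take both snapshots while everything is quiescent
      lvcreate --snapshot --size 5G --name usr_snap /dev/vg0/usr
      lvcreate --snapshot --size 5G --name var_snap /dev/vg0/var

      # Thaw immediately; the stall is typically a fraction of a second
      fsfreeze --unfreeze /var
      fsfreeze --unfreeze /usr

    This narrows the window rather than closing it semantically -- the upgrade could still be mid-transaction at the freeze -- but it does guarantee that the /usr and /var snapshots reflect the same instant.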

  • Volume licensed copy of MS Office 2007 shows "Non Commercial Use" in title bar

    - by Linker3000
    I have just removed the demo copy of Office 2007 preinstalled on a new laptop and replaced it with an install of the full Professional edition downloaded from the MS Volume Licensing site, and installed one of our volume licence keys, yet the apps (Word etc.) show "Non Commercial Use" in the title bar, which is what usually happens in the Home and Student edition. I have tried:

      - Deleting the Office registration keys in the registry and using one of our other Office 2007 volume licence keys (we have 7) when prompted to re-register
      - Uninstalling Office completely and reinstalling it from a newly-downloaded ISO burned to CD, and also from a compressed file that installs from hard disk/USB stick (both from Microsoft - no dodgy stuff)

    Yet the non-commercial message persists. Although it's a cosmetic issue, the laptop is going to be used for customer presentations, so the sales person is rightly concerned about the image this portrays. I presume there may be something floating around the registry or in a file somewhere, but I can't find it. Articles I have found elsewhere just refer to the message being related to the use of a Home and Student licence key, which is 100% not the case. Any thoughts? Thanks.

  • Network Misconfiguration when adding first host to new vSphere cluster

    - by dunxd
    I am building a new vSphere cluster from scratch. I have installed ESXi on the first host and built a vCenter server on a VM residing on that host (storage is on the local hard drive, although we have iSCSI targets which I can reach from the host). The cluster is configured for HA. When I try to add the host to the cluster, I get an error at the point where HA is configured - Cannot complete the . I have stripped the network configuration of the host down to the most basic - a single NIC attached to a single vSwitch - and this is running the VMkernel port on VLAN 8, which is our management VLAN. The vCenter server will have a network address on this VLAN, so I also set the initial Virtual Machine Port Group to this VLAN and connected the vCenter server NIC to this port group. I understand I can't connect the vCenter server to the VMkernel port group, but shouldn't I be able to connect the vCenter server to a port group in the same VLAN? If not, do I need to create a VLAN specifically for the VMkernel port group? I plan to set up another port group for vMotion with a dedicated and isolated VLAN (i.e. the VLAN isn't routed), so that wouldn't allow vCenter to communicate. Does anyone have any suggestions, or other ideas for what might be causing the problem? I've read through the documentation, but it isn't giving me any pointers, and the error message isn't helping me beyond telling me something is wrong with my network config.

  • Alignment of ext3 partition on LVM RAID volume group

    - by John P
    I'm trying to add a partition on an LVM that resides on a RAID6 volume group, and fdisk is complaining about the partition not residing on a physical sector boundary. My question is: how do you calculate the correct starting sector for a partition on an LVM? This partition will be formatted ext3. Would it be better to just format the LVM directly instead of creating a new partition?

      Disk /dev/dedvol/backup: 2199.0 GB, 2199023255552 bytes
      255 heads, 63 sectors/track, 267349 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 1048576 bytes / 8388608 bytes
      Disk identifier: 0x4e428f49

                   Device Boot  Start     End      Blocks      Id  System
      /dev/dedvol/backup1           63  267349  2146982827+    83  Linux
      Partition 1 does not start on physical sector boundary.

      lvdisplay /dev/dedvol/backup
        --- Logical volume ---
        LV Name                /dev/dedvol/backup
        VG Name                dedvol
        LV UUID                OV2n5j-7LHb-exJL-t8dI-dU8A-2vxf-uIicCt
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                2.00 TiB
        Current LE             524288
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     32768
        Block device           253:1

      vgdisplay dedvol
        --- Volume group ---
        VG Name                dedvol
        System ID
        Format                 lvm2
        Metadata Areas         1
        Metadata Sequence No   3
        VG Access              read/write
        VG Status              resizable
        MAX LV                 0
        Cur LV                 2
        Open LV                1
        Max PV                 0
        Cur PV                 1
        Act PV                 1
        VG Size                14.55 TiB
        PE Size                4.00 MiB
        Total PE               3815448
        Alloc PE / Size        3670016 / 14.00 TiB
        Free  PE / Size        145432 / 568.09 GiB
        VG UUID                8fBcOk-aXGx-P3Qy-VVpJ-0zK1-fQgy-Cb691J
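
    On the second question: since an LV is already a block device, the usual answer is to skip the partition table entirely and format the LV directly, which sidesteps the alignment problem altogether. A sketch, with stride/stripe-width inferred from the I/O sizes fdisk reports above (1 MiB minimum, 8 MiB optimal, 4 KiB blocks -- these numbers are a guess and should be verified against the real RAID chunk size):

      # stride       = chunk size  / block size = 1 MiB / 4 KiB = 256
      # stripe-width = full stripe / block size = 8 MiB / 4 KiB = 2048
      mkfs.ext3 -E stride=256,stripe-width=2048 /dev/dedvol/backup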

  • LSI1068E hidden drives after failed raid volume creation

    - by silk
    We are using the LSI 1068E RAID chipset with SAS drives. We had added new drives to the system and tried to create a new RAID volume with lsiutil; unfortunately the creation failed. The problem is that now we do not have the new RAID volume, and the disks 'disappeared' and are not available as targets for RAID.

      - lsiutil option 8 (scan for devices) does not display these disks at all.
      - lsiutil option 16 (display attached devices) does list them as targets.
      - lsiutil option 21+30 (create raid) does not list these disks.

    Just after inserting them into the enclosure, these disks appeared in the system, as expected. During the RAID creation the kernel logged:

      Mar 4 08:40:02 kilo kernel: [57555.687946] mptbase: ioc0: RAID STATUS CHANGE for PhysDisk 2 id=0
      Mar 4 08:40:02 kilo kernel: [57555.687978] mptbase: ioc0: PhysDisk has been created
      Mar 4 08:40:02 kilo kernel: [57555.695438] scsi target0:0:2: mptsas: ioc0: RAID Hidding: fw_channel=0, fw_id=0, physdsk 2, sas_addr 0x5000c50008ebe5fd

    for both of them, again as expected. Unfortunately they did not appear back even though the volume was not created. The same situation exists in the controller's BIOS after a reboot. Taking the disks out and inserting them in different slots did not help, either. Has anyone seen a similar problem? And does anyone know how to 'get back' our disks?

  • Expand a volume residing on one X-RAID disk installed on a Netgear ReadyNas Duo v2

    - by Sid
    I've got a Netgear ReadyNAS Duo v2 (2 disk slots). The system is configured with X-RAID, which does not provide flexibility but automatically expands based on a sort of RAID-5 logic. I had two 500 GB hard disks installed, redundant, so I had 500 GB of volume size. I wanted to upgrade the whole system to two 3 TB hard disks while maintaining both the data already on the NAS and the data on one of the two 3 TB hard disks. So I did this:

      1. Unplugged one disk from the ReadyNAS. Now the ReadyNAS has 1 x 500 GB, non-redundant.
      2. Plugged in one empty 3 TB hard disk. Now the ReadyNAS has 1 x 500 GB + 1 x 3 TB, redundant.
      3. Waited for the resync.
      4. Unplugged the 500 GB hard disk, so that I have only the 3 TB hard disk with the previous data.

    Now what I want is to copy the data from my other 3 TB hard disk onto the NAS, so that I can then plug that disk into the NAS and use it for redundancy. The problem is that the NAS has the (single) 3 TB hard disk in X-RAID, but the volume does not expand to 3 TB; it remains fixed at 500 GB. Is there a way to tell the ReadyNAS to force expanding the volume to the whole disk without plugging in another hard disk of the same size?

  • Suitable Hosting for Web Development Company: VPS Hosting, Cloud hosting, Shared Hosting

    - by KoolKabin
    Hi, we are a web development company here in Nepal. We are still in our growth phase. Our URL: http://www.outsourcingnepal.com

    We have clients demanding web hosting and are now ready to get a hosting package to host our clients' websites. Since we have multiple clients and they want their own cPanel for self-configuration, only creating an FTP channel is not appropriate; their own cPanel is needed. So I thought of a reseller package. When I searched for reseller packages, I came across VPS hosting, cloud hosting and shared hosting. So now I'm confused: which one is better for our company?

  • Looking for Windows shared web hosting with PHP support

    - by Ladislav Mrnka
    I'm looking for Windows-based shared web hosting which supports multiple hosted web sites (multiple domains). Supported technologies should include:

      - ASP.NET 4, ASP.NET MVC
      - IIS 7
      - MS SQL 2008
      - PHP, MySQL

    It is for my hobby projects, so it should not be too expensive. I tried GoDaddy's Windows Deluxe hosting, but the experience is very bad and I want to move elsewhere. WordPress hosted on GoDaddy's Windows hosting is unloaded every few minutes, and the next request takes around 20s to complete. A subsequent request to an empty site takes around 3s to complete. Even a request for the RSS feed, which transfers 1.2KB, takes several seconds. The delay happens in PHP processing, because static content is served within 200ms. It helped to migrate to Linux hosting (all requests are served under 1s), but Linux hosting is not what I'm looking for.

  • Valid concerns over shared developer team

    - by alphadogg
    Say your company is committed to sharing/pooling a development team across a handful of business units (and doesn't want to consider not doing this). What would you set up as concerns/expectations that must be cleared before doing this? For example:

      - Agreement between units on how much actual time (FTE) is allocated to each unit
      - Agreement on scheduling of staff
      - Agreement on the request procedure if extra time is required by one party
      - etc...

    Have you been in a situation like this as the manager of one unit destined to use the shared team? If so, what problems did you experience? What would you have implemented, or what did you implement? Same if you were the manager of the shared team. Please assume, for discussion, that the people concerned know that you can swap devs in and out on a whim. I don't want to know the disadvantages of this approach; I know them. I want to anticipate issues and know how to mitigate the fallout.

  • managing information/functionality on shared common project classes

    - by ilansch
    In my company, we have a common solution that contains common projects (2 projects so far: one for .NET 3.5 and one for .NET 4.5). My main problem is that over time a lot of code gets added. For example, a class called ServiceManagement hosts a process as a Windows service, but no one except its developer knows it, so if someone wants to use this shared class, he does not know it exists. So I am looking for a way to document and manage all the classes with tags - a 3rd-party utility or web utility that I can search by tags and maybe find common classes that I can use (if we keep all our code well-documented). Is anyone familiar with this sort of tool?

  • Analyze Drupal and WordPress sites' CPU load on a shared server

    - by Tedi
    Our hosting company is complaining that both our Drupal and WordPress websites, running on a shared server, are consuming too many CPU resources. The traffic for each site is not more than 100 users per day and, at first glance, we don't have very many plugins/add-ons. Is there any tool or resource to analyse what is causing that high CPU load? Thanks

    Update: We decided to suspend our accounts while the problem was being debugged, but our host (Site5) still said that they saw unacceptable activity on our sites, so we had to move to a dedicated server... We asked them several times to provide us with more information, and they always came back saying that we had to purchase a higher account. We finally decided to move to another hosting service.
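
    If the host exposes raw access logs (cPanel-style hosts usually do), a quick pass over them often shows whether the CPU is going to real visitors or to bots hammering login/xmlrpc endpoints. A sketch, assuming a combined-format Apache log at a hypothetical path:

      # Top 20 requested URLs; wp-login.php or xmlrpc.php dominating is a red flag
      awk '{print $7}' ~/access-logs/example.com | sort | uniq -c | sort -rn | head -20

      # Top client IPs, to spot a single scraper or brute-forcer
      awk '{print $1}' ~/access-logs/example.com | sort | uniq -c | sort -rn | head -20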

  • Book about TCP, HTTP, named pipes, shared memory, WCF and other inter-process communication protocols

    - by Samuel
    Recently, I had to create a program to send messages between two WinForms executables. I used a tool with simple built-in functionality to avoid having to figure out all the ins and outs of the vast number of protocols that exist. But now, I'm ready to learn more about the internal differences between each of these protocols. I googled a couple of them, but it would be greatly appreciated to have a good reference book that gives me a clear idea of how each protocol works and what the pros and cons are in various contexts. Here is a list of nice protocols that I found:

      - Shared memory
      - TCP
      - Named pipes
      - File mapping
      - Mailslots
      - MSMQ (Microsoft Message Queuing)
      - WCF

    I know that none of these protocols are specific to a language; it would be nice if examples could be in .NET. Thank you very much.

  • Develop an Android and iPhone application with a shared database

    - by Bongo
    I have a great idea for a smartphone application, and I want to develop it for both Android and iPhone. In addition, I need to use a spatial database for geo-indexing that will be shared by both applications. I am new to this app world and I have some questions:

      - Is there a way to develop for both platforms at once? I know Java but not Objective-C.
      - My guess is that I need to separate the database from the computing to support both applications. What are the best cloud computing providers with spatial database support that can host the server?
      - Do I need 2 hosting servers, or is there one server that can support both of them?
      - Which database providers support geo-indexing and this integration? I prefer providers with reasonable free quotas.

    Thanks.

  • Why can't I get to a shared folder from Windows

    - by Ron
    I want to access a folder on my new Ubuntu 12.10 box from any machine on my network without the need to provide credentials.

      - My machine name is Ubuntu1.
      - I have a 2TB disk, formatted NTFS, that has media on it.
      - The mount point is mount1.
      - I have numerous folders on the disk and I want to share each of them individually.
      - I have enabled folder1 and folder2 for sharing.
      - I have enabled shared access, allowed others to create and delete files in the folder, and allowed guest access.

    The folder icon now has arrows, so I assume all is good. From Windows I can see \\Ubuntu1\folder1 and \\Ubuntu1\folder2 under Network. When I click to open the folder from Windows I get the error: "You cannot access \\Ubuntu1\folder1. You do not have permission to access \\Ubuntu1\folder1." I have them both in the same workgroup. Your assistance would be appreciated.
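
    When the Nautilus sharing GUI (net usershare) misbehaves, it can help to compare against a hand-written share definition in /etc/samba/smb.conf. A sketch of a guest-accessible equivalent -- the path is a guess based on the mount point above, and smbd needs a restart after editing:

      [global]
          # map unknown users to the guest account instead of rejecting them
          map to guest = Bad User

      [folder1]
          path = /media/mount1/folder1
          guest ok = yes
          read only = no
          browseable = yes

    If guests still can't write, it is also worth checking how the NTFS volume is mounted (the uid/gid/umask options in /etc/fstab), since those govern the on-disk permissions Samba sees.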

  • Finding out the shared hosting providers located in a particular data center

    - by unixman83
    I know the physical location of the data centers that I want my website hosted in. One of these is located at 350 E Cermak in Chicago, IL. My problem is that I am looking for all the providers of low-cost shared hosting in this data center. Do you have a list? And if you do have such a list, can you please tell me how you came up with it? I know many discount hosting providers are physically located in the Arizona-Utah area, but I am located near Chicago.

  • Enable 'mod_rewrite' Using .htaccess File On cPanel Shared Hosting Server

    - by zulhfreelancer
    I'm using cPanel to host my website. I need to enable 'mod_rewrite' on this shared-hosting cPanel account to run my script. I've Googled for solutions high and low but haven't had any luck yet. The tutorials that I found only work well with a VPS, and some of them said that only the hosting provider can change and enable it. But some of them said that it can be done easily by editing the .htaccess file. My question: if I want to edit the .htaccess file, what should I include in that file? What are the 'rules' and 'conditions' that should be included?
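
    On cPanel shared hosting, mod_rewrite is almost always already compiled in; .htaccess can only use the module, not enable it. Assuming it is present, a minimal front-controller style ruleset looks like this (the index.php target is a guess at what your script expects):

      RewriteEngine On
      RewriteBase /

      # Pass any request that is not a real file or directory to the script
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

    If this produces a 500 error, the host has likely disabled the FileInfo override for .htaccess, and only they can change that.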

  • I cannot print to a Windows 7 shared printer

    - by lrichard
    For a while I have not been able to print from my Ubuntu 12.04 64-bit PC to a Windows 7 shared printer. Before that, everything was OK. The printer is an HP LaserJet 1100. The error message is:

      "File "/usr/lib/cups/backend/smb" not available: No such file or directory"

    I have tried to reinstall Samba, and I have configured the workgroup properly. I have tried to reinstall the printer on the Ubuntu machine with the proper address, or with lpd (as I read somewhere). If I restart the printer in the CUPS web interface ("Resume Printer"), the problem seems to be solved, but it is not: the printer appears to be idle, but if I try to print, I still get the error message. I have checked the share rights on the Windows 7 PC, and I think they are OK.
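
    That exact error usually just means the CUPS smb backend is missing: it is provided by the smbspool binary from the smbclient package and exposed to CUPS through a symlink. A sketch of the commonly suggested fix (the paths are the usual Ubuntu 12.04 ones; verify locally):

      # Make sure smbspool is installed
      sudo apt-get install smbclient

      # Recreate the backend symlink CUPS is complaining about
      sudo ln -s /usr/bin/smbspool /usr/lib/cups/backend/smb

      # Restart CUPS so it re-scans its backends
      sudo restart cups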
