Search Results

Search found 16032 results on 642 pages for 'everyday problems'.

  • Still about SSD potentials...write and read speed

    - by Macroideal
    I have been working with SSDs (solid state disks) for several months, and problems and questions keep hitting me unexpectedly, since I'm new to SSDs. Lately I have been testing the read/write speed of an SSD, which is what I care most about, but the results turned out worse than I expected, or even bad. I implemented three kinds of read/write tests: (1) reading and writing the SSD directly, opening it as a whole device (on Windows: _open("\\:g", ***)). This can be very tricky and hairy: you have to write data in multiples of 512 bytes, at disk offsets that are multiples of 512 bytes, so even to write just one byte or 4 bytes you have to write a whole sector at a time. (2) Reading and writing data in files located on the SSD. (3) Reading and writing data in files on a mechanical disk. I compared these tests and found the SSD disappointing: it performed worse than the mechanical disk. So I am wondering how I can get at the potential performance of the SSD, since SSDs are said to be the future replacement for mechanical disks. And yet when I test the SSD with a professional hard-disk tool, it is about twice as fast as the mechanical disk. Why?
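
    A note for anyone reproducing this kind of test: raw 512-byte transfers mostly measure per-request overhead, not the drive. As a minimal sketch of an uncached sequential benchmark (assuming a Linux box with GNU dd; /dev/sdX is a placeholder for the SSD, and writing to it destroys its contents):

        # Sequential write, bypassing the page cache so the drive itself is measured
        dd if=/dev/zero of=/dev/sdX bs=1M count=1024 oflag=direct
        # Sequential read, also uncached
        dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
        # Same test with sector-sized requests, to see how much tiny transfers cost
        dd if=/dev/zero of=/dev/sdX bs=512 count=100000 oflag=direct

    Large, cache-bypassing requests are much closer to what professional benchmark tools issue, which would explain the 2x gap between a hand-rolled 512-byte test and the pro tool.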

  • New 3TB HDD, can see full 2.7TB in Linux and Windows, but shows up as 801.6GB in BIOS

    - by Ben Lee
    I recently purchased a Seagate Barracuda 3TB drive (ST3000DM001). After installing it, my BIOS recognized it but reported the size as 801.6GB. I went ahead and booted into Linux anyway (Ubuntu 11.10 64-bit). Linux saw it as 2.7TB. Following some online instructions (I don't have the link handy, unfortunately), it looked like converting the drive to GPT was the recommended fix. So I used GParted to do that, then formatted it to NTFS, also using GParted. (I'm using NTFS because my machine is dual-boot and I want access to the drive in Windows too.) I rebooted into Windows (Windows 7 64-bit), and Windows also sees the drive with 2.7TB free. Everything seems to be working fine. The only issue is that my BIOS still reports the drive as 801.6GB. My motherboard is an ASRock 770 Extreme3 and the BIOS is the latest version. Since everything seems to be working with the new drive anyway, I'm hoping that the BIOS reporting the wrong size is not an actual problem. But honestly, I don't really know. Is anyone out there more familiar with this, and could it potentially cause any problems in the future? Is there any way to get the BIOS to report the correct size?
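
    The 801.6GB figure is the classic signature of a BIOS truncating the drive's sector count to 32 bits. A quick arithmetic sketch (shell; assumes the usual 3TB capacity of 3,000,592,982,016 bytes and 512-byte sectors):

        SECTORS=$(( 3000592982016 / 512 ))   # 5860533168 sectors on a 3TB drive
        WRAPPED=$(( SECTORS % 2**32 ))       # a legacy BIOS keeps only the low 32 bits
        echo $(( WRAPPED * 512 ))            # 801569726464 bytes, i.e. ~801.6GB

    Since the match is exact, the BIOS number is almost certainly a cosmetic legacy-LBA display limit: the OS addresses the drive directly and sees all the sectors, which is why GPT plus a modern kernel works fine.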

  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack in Ubuntu 12.04, for various reasons: The official Ubuntu website recommends Juju and MAAS. However, this is a single node I am trying to get OpenStack installed on, and MAAS requires "two or more nodes" according to the docs. Additionally, I don't have any experience in MAAS and Juju and would rather stick to technologies I am more familiar with so that I can debug problems that arise. I have tried StackGeek but this fails because the node only has a single Ethernet port. The node does, however, have the second hard drive required for the nova storage. I have tried DevStack but cannot log into the dashboard. The login form appears fine, but as soon as I try to submit the page, my browser begins loading indefinitely. I have tried installing straight from packages, but I get an Internal Server Error in the dashboard upon trying to log in, with no helpful logs anywhere in sight to aid me in debugging the issue. Each of these attempts was with a fresh Ubuntu 12.04 LTS setup; I'm finding it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?
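
    On the DevStack attempt: a minimal single-node localrc sketch of the sort its docs of this era describe, dropped into the devstack checkout before running ./stack.sh (values are placeholders; I suspect a HOST_IP that doesn't match the interface you browse to is one way to get the indefinitely-loading dashboard login you describe):

        HOST_IP=192.168.1.10
        ADMIN_PASSWORD=secret
        DATABASE_PASSWORD=secret
        RABBIT_PASSWORD=secret
        SERVICE_PASSWORD=secret
        SERVICE_TOKEN=tokentoken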

  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) on EC2 via gems (on RHEL 6) and I'm running into problems with the cert names and with getting the master to talk to itself. My puppet.conf looks like this:

        [main]
        logdir = /var/log/puppet
        rundir = /var/run/puppet
        vardir = /var/lib/puppet
        ssldir = $vardir/ssl
        pluginsync = true
        environment = production
        report = true
        certname = master

    When I start the puppetmaster process the ssl directory looks like:

        ssl/private_keys/master.pem
        ssl/crl.pem
        ssl/public_keys/master.pem
        ssl/ca/ca_crl.pem
        ssl/ca/signed/master.pem
        ssl/ca/ca_crt.pem
        ssl/ca/ca_pub.pem
        ssl/ca/ca_key.pem
        ssl/certs/ca.pem
        ssl/certs/master.pem

    I have an /etc/hosts entry on the box pointing the 'puppet' hostname at localhost so that I don't have to change the 'server' option. When I run the agent I get the following:

        # puppet agent --test
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'puppet' did not match server certificate; expected master
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
        err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry) I get:

        # puppet agent --test --server master
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
        info: Caching catalog for master
        info: Applying configuration version '1321805956'
        notice: Finished catalog run in 0.05 seconds

    which is success of a sort, though that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and got mixed results. I'd like to avoid setting server = 'x' and instead use DNS/hosts to control what 'puppet' resolves to in order to decide which server to use (that plays better with availability zones, etc.).
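
    One plausible fix, sketched against Puppet 2.7's tooling (command forms from that era; verify against your gem install): reissue the master's certificate with DNS alt names covering every name agents will use, so that both 'master' and 'puppet' validate against the same cert:

        # wipe the master's existing cert, then regenerate it with alt names
        puppet cert clean master
        puppet master --no-daemonize --verbose --dns_alt_names=master,puppet
        # Ctrl-C once the cert is issued, then start the master normally

    The equivalent permanent setting is dns_alt_names = master,puppet in puppet.conf, which has to be in place before the certificate is first generated.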

  • USB Wifi will not connect on Windows 7 (Even though the driver installs OK)

    - by Pete Roberts
    Windows 7 will not connect to a WiFi network using a USB network adapter. I have three adapters: a Senoa SUB 364 (EXT), a Repeatit SU2410 USB V2, and a ZyXEL G202. All of these devices install OK on Windows 7 Home Premium, both on my desktop PC (64-bit) and on my Asus netbook (32-bit). In each case the adapter can be enabled and disabled, and the driver properties say it is working correctly. But when I try to connect to a network, Windows 7 behaves as though the adapter does not exist and reports no networks. The netbook has an integrated adapter which works perfectly under Windows and connects to any of the 3 networks available to me. I have done all the checks I can on the configuration. What seems odd to me is that this happens with all 3 devices on 2 different Windows 7 PCs, both of which are working perfectly in every other respect. That suggests the common denominator is me, and I must be doing something wrong. What's also strange is that I cannot find any similar problems reported on any of the forums. From what reading I've been able to do, it seems like the new WiFi virtualisation layer in Windows 7 is not recognising the adapters, which suggests I'm missing a configuration option somewhere. Looking forward to finding out whether I'm not alone or just being stupid. Pete

  • VMWare Fusion: "No Permission to access this virtual machine"

    - by Craig Walker
    I had a VMware Fusion VM backed up on my home network file server (Ubuntu). I wanted to run it again, so I copied it back to my MacBook. When I tried to launch it in VMware, I got an error message:

        No permission to access this virtual machine.
        Configuration file: /Users/craig/WinXP Clean + Scanner.vmwarevm/WinXP Pro Test.vmx

    The permissions look fine to me:

    - The bundle directory is 777
    - The bundle files (including the listed .vmx) are all 666
    - User is craig (my current user); group is staff. I changed the group to wheel at the suggestion of this page, but that didn't help.
    - Finder shows read & write for craig, staff, and everyone on the bundle directory
    - The bundle dir is also not locked
    - Finder also shows rw and unlocked for the .vmx file
    - The parent directory is also rw & unlocked
    - Disk Utility's permissions check doesn't show any problems with any of the associated files

    It sure looks like I should have wide-open access to run this VM; why is Fusion complaining?
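
    Unix mode bits aren't the only thing in play here: ACLs, BSD file flags, quarantine attributes from the copy, and stale VMware lock folders all survive a round trip through a file server, and none of them show in a plain ls -l. A sketch of what I'd inspect from Terminal (standard Mac OS X tools; only clear what actually shows up):

        cd "/Users/craig/WinXP Clean + Scanner.vmwarevm"
        ls -leO@ .                        # -e lists ACLs, -O lists flags (e.g. uchg), -@ lists xattrs
        xattr -l "WinXP Pro Test.vmx"     # e.g. com.apple.quarantine left by the copy
        ls -d *.lck                       # leftover VMware lock folders from the old host
        # then, for example:
        chflags -R nouchg .
        xattr -d com.apple.quarantine "WinXP Pro Test.vmx"
        rm -rf *.lck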

  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves; one will be used as a test server and the other 4 will be for the live site. I have read some websites on the internet, and they all reference either using one server with load-balancing software installed on it, or using a hardware, rack-mounted load balancer that the servers plug into, which then distributes the load. So I have a few questions about each:

    1) How do you set up the software version, with one "master" to direct traffic and the other servers as "slaves"?
    2) Which of the two options above is more reliable and better suited for a startup that doesn't have many users per month, yet (hopefully)?
    3) Is there a theoretical maximum number of servers that can be connected to a software load-balancing system? Obviously this will change from software to software, but in terms of the server being able to handle it?
    4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Is there anything you would stay away from if you had to start over?
    5) I also purchased an Apple RAID system, so if you are familiar with it: is there any way to connect it to multiple Xserves so they all serve the same data?

    I'm a little confused on this, so thanks for all your help and for being patient with me. Note: Take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.
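
    To make the software option concrete, a minimal round-robin front end for the four live Xserves might look like this (haproxy syntax, as one common free example; the IPs are placeholders, and the balancer runs on whichever machine owns the public address):

        listen web :80
            mode http
            balance roundrobin
            server xserve1 192.168.1.11:80 check
            server xserve2 192.168.1.12:80 check
            server xserve3 192.168.1.13:80 check
            server xserve4 192.168.1.14:80 check

    The 'check' keyword health-checks each backend so a dead Xserve is pulled out of rotation automatically, which is most of what the hardware boxes give you at this scale.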

  • Wifi Drops Connections with WPA2-PSK

    - by graf_ignotiev
    I run a small computer lab made up of 10 computers with identical hardware and software (Dell Latitudes with Windows 7 x64 Enterprise), and I use a ZyWALL 2WG as a router/firewall. Nine of the computers connect to the router over WiFi using WPA2-PSK encryption, while the last one is connected by ethernet cable. I'm having a problem where any computer connected to the WiFi occasionally drops off the network (it cannot be pinged and the client cannot ping the gateway). It only happens on the WiFi side, and only when the encryption is WPA2-PSK or WPA-PSK. I tried another router of a different make and model and had no problems. Thinking it could be a software error, I reset the router to factory defaults and installed the newest firmware (V4.04(AQI.8) | 04/09/2010), but I still have the problem. The 802.1X log gives the following error:

        User logout because of user disassociation.

    with this note:

        WPA2-PSK:00242c582ece:logout

    where 00242c582ece is the MAC address of the device. At this point I'm out of things to try and leads to follow. It looks like this user had the same or a similar problem, but none of those proposed solutions work for me.

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres support using the following configure:

        ./configure \
            --build=x86_64-redhat-linux-gnu \
            --host=x86_64-redhat-linux-gnu \
            --target=x86_64-redhat-linux-gnu \
            --program-prefix= \
            --prefix=/usr/ \
            --exec-prefix=/usr/ \
            --bindir=/usr/bin/ \
            --sbindir=/usr/sbin/ \
            --sysconfdir=/etc \
            --datadir=/usr/share \
            --includedir=/usr/include/ \
            --libdir=/usr/lib64 \
            --libexecdir=/usr/libexec \
            --localstatedir=/var \
            --sharedstatedir=/usr/com \
            --mandir=/usr/share/man \
            --infodir=/usr/share/info \
            --cache-file=../config.cache \
            --with-libdir=lib64 \
            --with-config-file-path=/etc \
            --with-config-file-scan-dir=/etc/php.d \
            --with-pic \
            --disable-rpath \
            --with-pear \
            --with-pic \
            --with-bz2 \
            --with-exec-dir=/usr/bin \
            --with-freetype-dir=/usr \
            --with-png-dir=/usr \
            --with-xpm-dir=/usr \
            --enable-gd-native-ttf \
            --with-t1lib=/usr \
            --without-gdbm \
            --with-gettext \
            --without-gmp \
            --with-iconv \
            --with-jpeg-dir=/usr \
            --with-openssl \
            --with-zlib \
            --with-layout=GNU \
            --enable-exif \
            --enable-ftp \
            --enable-magic-quotes \
            --enable-sockets \
            --enable-sysvsem \
            --enable-sysvshm \
            --enable-sysvmsg \
            --with-kerberos \
            --enable-ucd-snmp-hack \
            --enable-shmop \
            --enable-calendar \
            --with-libxml-dir=/usr \
            --enable-xml \
            --with-system-tzdata \
            --with-mime-magic=/usr/share/file/magic \
            --with-apxs2=/usr/sbin/apxs \
            --with-mysql=/usr/include/mysql \
            --without-gd \
            --with-dom=/usr/include/libxml2/libxml \
            --disable-dba \
            --without-unixODBC \
            --disable-pdo \
            --enable-xmlreader \
            --enable-xmlwriter \
            --without-sqlite \
            --without-sqlite3 \
            --disable-phar \
            --enable-fileinfo \
            --enable-json \
            --without-pspell \
            --disable-wddx \
            --with-curl=/usr/include/curl \
            --enable-posix \
            --with-mcrypt \
            --enable-mbstring \
            --with-pgsql=/mnt/mv/pgsql

    I'm using Postgres 8.4.0 and Apache 2.2.8; I have the following line in my Apache conf file:

        LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so

    And when I attempt to restart Apache, I get the following error message:

        Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib64/httpd/modules/libphp5.so into server: /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid

    Now, I know this is a problem between Postgres and PHP, because lo_import_with_oid is a function in the Postgres source that allows importing of large objects; also, if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have ANY insight into what is causing my problems?
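
    That undefined symbol usually means libphp5.so was compiled against 8.4 headers but the runtime linker is resolving an older libpq. A quick sketch to confirm which library is actually being loaded (standard binutils tools; the libpq path to check is whatever ldd prints):

        # which libpq does the PHP module pull in at load time?
        ldd /usr/lib64/httpd/modules/libphp5.so | grep -i libpq
        # does that library export the symbol the module needs?
        nm -D /usr/lib64/libpq.so.5 | grep lo_import_with_oid

    If the symbol is missing there, an older Postgres client library is shadowing the 8.4 one under /mnt/mv/pgsql; pointing the linker at the 8.4 lib directory (or removing the stale libpq) should clear it. As far as I know, lo_import_with_oid was added to libpq in 8.4, which fits this picture.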

  • Experience with AMCC 3ware 9650se raid cards? Ours seems dead

    - by antiduh
    We have an 8-port 3ware 9650se raid card for our main disk array. We had to bring the server down for a pending power outage, and when we turned the machine back on, the raid card never started. This card has been in service for a couple of years without problems, and was working up until the shutdown. Now, when we turn the machine on, the BIOS option ROM that normally kicks in before the bootloader doesn't show up, none of the drives start, and when the OS tries to access the device, it just times out. The firmware on it has been upgraded in the past, so it's possible we've hit some sort of firmware bug. We're using it in a Silicon Mechanics R272 machine with Gentoo as the OS. The OS eventually boots, but alas, without the card. We've ordered a new one, but I'm worried that if we replace the card it won't recognize the existing array. Has anybody performed a card swap before? Any help would be greatly appreciated. Edit: These are the kernel errors we see:

        3ware 9000 Storage Controller device driver for Linux v2.26.02.012.
        3w-9xxx 0000:09:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
        3w-9xxx 0000:09:00.0: setting latency timer to 64
        3w-9xxx: scsi0: ERROR: (0x06:0x000D): PCI Abort: clearing.
        3w-9xxx: scsi0: ERROR: (0x06:0x001F): Microcontroller not ready during reset sequence.
        3w-9xxx: scsi0: ERROR: (0x06:0x0036): Response queue (large) empty failed during reset sequence.
        3w-9xxx 0000:09:00.0: PCI INT A disabled
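
    On the array-survival worry: as far as I know, 3ware controllers keep the RAID configuration on the disks themselves, so a replacement card of the same family should normally re-import the existing unit on first boot. A sketch of how I'd verify with 3ware's CLI after the swap (tw_cli ships with their management tools; controller and unit numbers are placeholders):

        tw_cli show         # list the controllers the driver sees
        tw_cli /c0 show     # units and drive ports on controller 0, with status

    If the unit shows up as OK (or importable) there, the data side of the swap is done before the OS ever mounts anything.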

  • Is it necessary to burn-in RAM for server-class systems?

    - by ewwhite
    When using server-class systems with ECC RAM, is it necessary or even useful to burn in the memory DIMMs prior to deployment? I've encountered an environment where all server RAM is put through a lengthy multi-day burn-in/stress-testing process. This has delayed system deployments on occasion and adds an extra step to the hardware lead time. The server hardware is primarily Supermicro, so the RAM is sourced from a variety of vendors, not directly from the system manufacturer as it would be with a Dell PowerEdge or HP ProLiant. Is this process useful? In my past experience, I simply used vendor RAM out of the box. Isn't that what the POST memory tests are for? I've encountered and responded to ECC errors long before a DIMM actually failed; the ECC thresholds were usually the trigger for a warranty replacement. Do you burn in your RAM? If so, what method do you use to perform the tests? Has the burn-in process resulted in any additional platform stability? Has it identified any pre-deployment problems?
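
    On method: besides bootable memtest86+ passes, a common in-OS approach is memtester, which locks a region of RAM and hammers it with test patterns; on an ECC platform you can then read the corrected-error counters back. A sketch (Linux, run as root; the size and loop count are illustrative):

        memtester 4G 3      # lock 4GB and run 3 full pattern passes
        # on a box with the EDAC drivers loaded, check corrected-error counts afterwards:
        grep . /sys/devices/system/edac/mc/mc*/ce_count

    A climbing ce_count under load is the same early-warning signal described above, without a multi-day offline soak.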

  • Win 7 running slowly with low CPU usage and memory

    - by guywhoneedsahand
    I have a relatively new (under 2 years old) Windows 7 machine. It has 9GB of RAM and an Intel Core i7 930 (2.8GHz, showing 8 logical CPUs). About 8 months after a clean install, I noticed my computer was running slowly. I figured it was fragmentation etc., so I did a complete wipe and clean reinstall. However, my problems have somehow persisted. The computer runs painfully slowly, but in fits and starts: sometimes it will work fine for 3 hours, then suddenly freeze up just from clicking the Start button. The freezes happen randomly, not during any especially intensive computing. I initially thought something might be eating through my CPU and/or memory, but Task Manager indicates that neither the CPU nor memory spikes. In fact, even during serious lag, CPU usage remains at less than 5% and memory at ~1.5GB. It's beyond me why a fresh install on a powerful machine is performing so poorly... and it certainly is frustrating! What could be causing the poor performance, and what can I do to fix it?

  • How do I fix this Windows 7 wireless connectivity issue?

    - by Charles Randall
    I have a laptop with an Intel Centrino 6300 wireless module. Recently, the machine has stopped connecting properly to my wireless router: it gets stuck in a loop of connecting, disconnecting, and reconnecting, and while connected it simply says "No Internet Access." Running inSSIDer 2.0, it shows my network jumping around between two channels; I know this isn't the case, because I've set my router to sit on one single channel. My MacBook Pro, Boxee Box, PS3, and Xbox 360 all connect fine to the wireless and have no problems at all. I know it's not the wireless module, as I recently bought a second one assuming the first had died, but I get the same behavior with both. Sometimes I can fix the issue temporarily by deleting the network (using the Manage Wireless Networks page) and then re-adding it (via the standard wireless methods). Then it will work for a few days. But inevitably the problem comes back, and now the laptop simply won't connect to the wireless at all, even if I take the steps that usually work. Since I've ruled out the hardware, and an interference issue is unlikely (I would expect to see it on the multitude of other devices), I would think at this point that it's a problem with Windows itself. One thing that might be a hint: even though I delete the network, when I add it again it's always listed as "Wireless Network Connection 2", even though there isn't another in the list.
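
    Given that the profile always re-registers as "Wireless Network Connection 2", stale network-stack state is a reasonable suspect, and Windows 7 can rebuild most of it from an elevated command prompt. A sketch using standard netsh commands (the profile name is a placeholder; reboot after the two resets):

        netsh wlan delete profile name="MyNetwork"
        netsh winsock reset
        netsh int ip reset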

  • Making the iPhone work with stripped down Windows XP

    - by Gabriel
    Hi, this is my first time posting here and I have a really specific question. I have an ASUS Eee 901 running Windows XP Home. I had everything working well, but then I decided to improve performance by moving Windows to the smaller but faster internal SSD. I used nLite to strip down Windows, following the instructions here: http://wiki.eeeuser.com/howto:nlitexp I now have a very lightweight installation of XP Home with SP3 and all the current updates. Almost everything is working really well. I have installed iTunes and I CAN sync with no problems. However, each time I plug in my iPhone 3GS (latest firmware), Windows tries and fails to install drivers. The Found New Hardware Wizard launches, but nothing I do will make it complete successfully, with the result that the iPhone does not show up in Windows as removable storage or as a camera. When I launch the Camera and Scanner Wizard, it shows only my webcam, not the iPhone. I have verified that I have the following files in place:

        Windows\System32\ptpusb.dll (regsvr32 successful)
        Windows\System32\ptpusd.dll (entry point not found, cannot be registered)
        Windows\System32\usbaaplrc.dll (entry point not found, cannot be registered)
        Windows\System32\drivers\usbaapl.sys
        Windows\System32\drivers\usbscan.sys
        Windows\System32\drivers\usbstor.sys

    Does anyone know if some other file is required, or if there's some other element preventing this from working? Edit (from posted answer): I did select Cameras & Camcorders in nLite, and my webcam is working fine for video and still capture.

  • haproxy and tomcat intermittent hangs

    - by user7347
    I am trying to run haproxy in front of Tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, a request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the Tomcat manager app, and hitting Tomcat directly there are no problems at all. Hitting it repeatedly with curl will trigger the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    haproxy is running on port 81, Tomcat on port 7000. haproxy returns a 504 gateway timeout to the client and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the Tomcat server. The request count is not incremented; the manager app only shows activity on one thread, serving up the manager app itself. Here are my haproxy and Tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false"
                   maxKeepAliveRequests="1" connectionLinger="10" />

    haproxy config:

        global
            log loghost local0
            chroot /var/haproxy
        listen http_proxy :81
            mode http
            log global
            option httplog
            option httpclose
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000
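
    A hedged observation on those settings: maxKeepAliveRequests="1" makes Tomcat close every connection after a single request, while option httpclose only inserts Connection: close headers rather than actively closing the socket, so haproxy can occasionally reuse a connection Tomcat has already shut down; the request then vanishes exactly like this. A sketch of what I'd try first, on the Tomcat side:

        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false" />

    and on the haproxy side (stock directives in haproxy of that era):

        option forceclose    # actively close both directions after each response
        retries 3
        option redispatch    # retry the connection if the server connect fails

    The goal is simply to stop the two ends from disagreeing about connection lifetime.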

  • How do I fix issue causing "incomplete startup packet" log message trying to implement replication in Postgresql?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using the pg_start_backup/pg_stop_backup strategy from this other blog post. I've read through the docs and Postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recovery state reached" is normal or the first sign of a problem. The searching I've done on "the database is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The postgres log on the slave looks like so:

        2013-08-26 13:01:38 CDT LOG: entering standby mode
        2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive
        2013-08-26 13:01:38 CDT LOG: incomplete startup packet
        2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020
        2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0
        cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory
        2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:41 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT LOG: incomplete startup packet
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up

    thanks! brad
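
    Two hedged notes on that log: the cp "cannot stat ... 00000003A" message is normal behavior (the standby asking for the next WAL segment before the master has archived it), and the repeated "the database system is starting up" FATALs are what clients get when they reach a standby that is recovering but not open for read-only connections. For reference, the slave side of a 9.2 streaming setup reduces to something like this (host, user, and password are placeholders):

        # recovery.conf on the slave
        standby_mode = 'on'
        primary_conninfo = 'host=master.example.com port=5432 user=replicator password=secret'
        restore_command = 'cp /var/lib/postgresql/9.2/archive/%f %p'

        # postgresql.conf on the slave
        hot_standby = on   # without this, every connection gets "the database system is starting up"

    The "incomplete startup packet" lines just mean something opened a TCP connection and sent nothing before closing (monitoring probes and port checks do this constantly), so they are likely a red herring.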

  • SIP and NAT routers?

    - by OverTheRainbow
    Hello. SIP was not built with NAT routers in mind, and I'd like to get to the bottom of this issue: to check what needs to be done on all devices so that it works with NAT routers, and to understand in which contexts it just can't be used, so that I should look at more NAT-friendly alternatives like IAX. A picture being worth a thousand words, here's the layout I need to use: http://img62.imageshack.us/img62/4077/sipandnatrouters.jpg

    - The PBX server is located in a private LAN behind a NAT router connected to the Internet (I know it'd be easier if it were in the public network, but this router doesn't support DMZs, so the server has to be in the private network).
    - A couple of (soft|hard)phones are located on the same LAN and connected to the PBX server, along with a PSTN gateway (Linksys 3102 or a Digium PCI card).
    - Remote users with (soft|hard)phones are located somewhere on the Net, with dynamic IPs, and are also behind NAT routers.
    - I may or may not have control over the local NAT router where the PBX server is located, but I have no control over the remote NAT routers, either because the users don't have the computer knowledge to map ports or because the routers are off-limits (e.g. web cafés, hotel LANs, etc.).

    Is it possible to configure the PBX server, the (soft|hard)phones, and the PSTN gateway so that all conversations work fine, no matter the endpoints (POTS caller/local phone, POTS caller/remote phone, local phone/local phone, remote phone/local phone)? In which cases should I expect problems, and are there solutions? FWIW, I'm leaning toward Freeswitch, but I could end up using Asterisk if there are technical advantages to it in this context. Thank you for any info.
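
    Whichever server wins out, the server-side half of NAT traversal mostly comes down to advertising the router's public address and latching media to wherever packets actually arrive from. As a concrete sketch in Asterisk sip.conf terms (FreeSWITCH exposes equivalent external SIP/RTP address settings in its SIP profiles; the addresses are placeholders):

        [general]
        nat=yes                             ; assume peers are NATed; reply to the packet's source port
        externip=203.0.113.5                ; public IP of the router in front of the PBX
        localnet=192.168.0.0/255.255.255.0  ; address ranges considered "inside"

    On the PBX-side router you still need UDP 5060 and the RTP port range forwarded to the server; the remote phones behind routers you don't control then mostly work, because the server replies to the source address and port of their own outbound packets.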

  • Folder Redirection Issues - Freezing, Strange Warnings

    - by JCardenas
    I have Folder Redirection set up in a test environment for a couple of accounts. I have followed the instructions for setting up the folder security settings here, and I can confirm that folders are created automatically by the system with the correct security settings when a user logs in. The GPO has been configured to automatically move user files up to the redirected folders, and this is working properly. Problems start occurring when a Windows 7 PC is in use. It is rare, but Explorer will lock up when performing a file write operation (move/copy/save from an application). This leaves the entire system unusable, and only a hard reset resolves it: Task Manager doesn't start, the "three finger salute" does nothing, and apps stop working. The mouse functions, but clicks do nothing. The other issue is that occasionally, when copying/creating/modifying files, a dialog box pops up with the message "You need permission to perform this action. You require permission from XYZ\cardenas to make changes to this folder," even though the folder in question, created by copying an existing one, has the correct security settings and lists me as the owner. My company will not be implementing Folder Redirection on XP, since we are making a "clean break" by introducing new technologies with the Windows 7 rollout, so this behavior has not been, nor will be, checked for in XP. Thanks in advance for your help!

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible). Requirements summarized: storage: 2TB; users: ~15; file sizes: < 100GB; should be easily reachable (mountable as a network drive, or at least with solid "communication" software). My first question: where can I get halfway affordable online storage, and how should I connect it to my server? Getting an additional server is overkill, as I know of no hoster that allows 2TB on a small 2GHz dual-core, 2GB-RAM machine (that would be more than enough; I just need a lot of space), and connecting storage via NFS or FTP over the Internet seems strange and cripples performance. Do you have any advice on where I could get such a storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide the space, this question becomes irrelevant; the second one is the more important one anyway, so don't crawl for hours through hosting services, just recommend some based on experience.) livedrive, for example, offers 5TB for 17€/month; I'd be happy with 2TB for 20€. The caveat: it doesn't allow multiple users, which leads me to my second question: where are the security problems? Which protocol is sufficient (I want private and "public" folders etc., the usual "every user has their own space plus a public one" arrangement), secure, and fast? I'd tend toward (S)FTP; the problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and a single user leads me into "hacking" together a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions). Is using something like PKI or 802.1X overkill for private use?

  • How do I fix libdispatch problem crashing Mac OS X apps?

    - by david-ocallaghan
    In the last day I have started having a lot of brokenness on my Mac (MacBook Air running Mac OS X 10.6.2 with all software updates). Most noticeably, iTunes no longer syncs with my iPhone. It fails with a crash dialog reporting "AppleMobileDeviceHelper quit unexpectedly" and an error dialog "iTunes was unable to load dataclass information from SyncServices. Reconnect or try again later." I've attempted the fix at support.apple.com/kb/HT1747, but it failed. I've also been having problems (at first seemingly unrelated) with the horrible Cisco VPN client, which started giving me this error:

        Error 51: Unable to communicate with the VPN subsystem

    I followed the steps at www.anders.com/cms/192/CiscoVPN/Error.51:.Unable.to.communicate.with.the.VPN.subsystem which don't seem to work for me, although I can connect if I use the command line with sudo:

        sudo vpnclient connect MyProfile

    I had a look in the Console app at the diagnostic messages and noticed a pattern: a number of apps were reporting "BUG IN CLIENT OF LIBDISPATCH". The affected programs are:

        AppleMobileBackup
        AppleMobileDeviceHelper
        Safari Webpage Preview Fetcher
        cvpnd (the Cisco VPN daemon)

    Of these, only the last is non-Apple software! The common text in the diagnostic messages is:

        Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
        Exception Codes: 0x0000000000000001, 0x0000000000000000
        Crashed Thread: 1 Dispatch queue: com.apple.libdispatch-manager
        Application Specific Information:
        BUG IN CLIENT OF LIBDISPATCH: Do not close random Unix descriptors

    I'm beginning to wonder if there's a permissions problem, or corruption of an important library... I should note that I've rebooted several times and verified the disk permissions and the disk. Any help would be great!

  • I cannot format my PC

    - by Jesus Buelna
    I have a Toshiba Satellite L505 (laptop 1) with 6GB RAM and a 6.00GB hard disk. Initially I had a problem with another Satellite (laptop 2): a motherboard problem. I took that laptop to a technician and it cost a lot of money (almost as much as buying a new one). So, since I have HDD problems with the first one (1), I decided to use the hard disk from the other one (2). I formatted that HDD and erased its partitions down to one partition (or no partition). The problem is that when I try to reinstall from the OS CD, on the screen where I have to decide in which partition to install the OS, the only option I have says "unallocated partition", and I get this message: "Windows cannot install the OS in this partition, run files do not exist or may be corrupted." When I erased the disk with Parted Magic, did I erase some files needed for the install disc to run? I don't know. Is it possible to fix or reinstate the disk so I can install the OS? By the way, I checked the disk's physical health with Parted Magic, and it is OK. One more thing: when I erased the disc to zeros, I used the safety option offered by Parted Magic. Need help please.
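
    If the installer is a Vista/7-era disc, the usual way out is to rebuild the disk from within the installer: press Shift+F10 at the disk-selection screen to get a command prompt, then use diskpart (standard commands; "disk 0" is an assumption, so read the list disk output first):

        diskpart
        rem identify the right drive before touching anything
        list disk
        select disk 0
        clean
        rem BIOS-era laptop, so MBR rather than GPT
        convert mbr
        create partition primary
        exit

    Back in the installer, hitting refresh on the drive list should then show unallocated space it is willing to partition and install to.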

  • problem booting crusty old windows XP

    - by Carson Myers
    I have an Acer Aspire laptop running Windows XP Home. I believe I have some virus on it; I'm not sure. I mostly just run Linux in a VM on it, so I wasn't too worried, and I'm not sure whether the virus caused this problem. The laptop wasn't recognizing my USB hard drive for some reason, so I decided to restart it. When it started up, it got past the memory test and past the boot screen (though it paused there on a blank screen for a while), flashed the desktop once (like it does just before the login screen), and then crashed. I got a quick BSOD and then it restarted. Then it tried to boot again, and so on: an infinite loop of failure. Before trying safe mode, I disabled automatic restart on system crash so I could read the blue screen. There wasn't anything important on it; it said:

        *** STOP: 0x00000000 (0xC0000000 0x,.... )
        beginning physical memory dump
        physical memory dump complete

    That's not verbatim (obviously), but it didn't help me. So I booted in safe mode, and it stopped on the driver gagp30kx.sys and then restarted (the infinite loop of failure again). I burned a recovery CD and tried that. It loaded, and I went into repair mode. I ran chkdsk and then disabled the AGP driver. Same thing on booting in safe mode, except that it stopped at mup.sys instead. I enabled the AGP driver again and ran chkdsk again from the CD. It said it found problems but didn't say it fixed them. So I ran it a second time, and it said "performing additional checking or recovery" lots of times (I can't tell how many; they scrolled off the top of the screen). I tried booting again: no luck. Every time I run chkdsk after trying to boot, it says it found and fixed more errors. I think the culprit might be whichever driver loads after the AGP driver, but I don't know what that is or how to find out. Can anyone help me fix this?

  • Cassandra Remote Connection

    - by Lyuben Todorov
    I'm not managing to connect to Cassandra from outside machines. The database is hosted on a Windows machine, and I'm trying to connect from a Mac (but this shouldn't cause problems). A local connection works:

        C:\cassandra\bin>cassandra-cli
        Starting Cassandra Client
        Connected to: "Test Cluster" on 127.0.0.1/9160
        Welcome to Cassandra CLI version 1.1.6

    But it fails from other machines on the same network:

        bin/cassandra-cli --host 192.168.0.10 --port 9160
        org.apache.thrift.transport.TTransportException: java.net.ConnectException: Operation timed out
            at org.apache.thrift.transport.TSocket.open(TSocket.java:183)
            at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
            at org.apache.cassandra.cli.CliMain.connect(CliMain.java:70)
            at org.apache.cassandra.cli.CliMain.main(CliMain.java:246)
        Exception connecting to 192.168.0.10/9160. Reason: Operation timed out.
        Welcome to Cassandra CLI version 1.2.0-beta3
        Type 'help;' or '?' for help.
        Type 'quit;' or 'exit;' to quit.

    There is a router on the network, but these ports have been triggered: 1024, 7000, 7001, 7199, 9160. And the same ports were forwarded to 192.168.0.10 (where Cassandra is hosted). The Cassandra version is 1.0.7. These are the settings I think I need to change in cassandra.yaml:

        listen_address: 192.168.0.10
        rpc_address:

    I'm not really sure if I've missed any steps. Any help would be appreciated.
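
    One hedged pointer: when rpc_address is left blank, Cassandra binds the client (Thrift) port based on what the node's configured hostname resolves to, which on a Windows box is frequently a loopback or otherwise unreachable address, and remote clients then time out exactly like this. A sketch of the cassandra.yaml lines I'd set (the IP is taken from the question):

        listen_address: 192.168.0.10   # inter-node traffic
        rpc_address: 192.168.0.10      # client/Thrift traffic; 0.0.0.0 would bind every interface

    After restarting, verify the listener from the server itself with netstat -an | findstr 9160. Also, since the Mac is on the same 192.168.0.x subnet, the router's port triggering and forwarding never enter the picture; the Windows firewall on 9160 is the likelier blocker.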

  • bind9 "error sending response: host unreachable"

    - by wolfgangsz
    I have a number of DNS servers, all running BIND 9 (9.5.1, to be specific) under Fedora. Four of them are slaves, fed by a common master for our public DNS. They are all located on the public gateways of our various offices. One of them has tons of messages in its log files similar to these:

        Jul 21 17:26:18 gateway named[3487]: client 10.171.3.8#52500: view internal: error sending response: host unreachable

    I wonder where that comes from. The firewall is open on port 53 between the two machines (10.171.3.8 is an internal DNS server located on a Windows domain controller). The internal domains do NOT list the gateway as a name server (so there should not be any attempts at replicating the domains), and the gateway does not handle any internal DNS. The clients in these messages vary between the two domain controllers on the internal network and a third internal name server (running BIND 9 on Debian in a different segment of the network). Any pointers are highly welcome.

    In response to the first reply: the issue with this really is that tcpdump doesn't show any problems. Here is an extract from "tcpdump -i any port 53":

        09:13:38.283308 IP valine.aminocom.com.61815 > ns-pri.ripe.net.domain: 14075 PTR? 166.225.58.95.in-addr.arpa. (44)
        09:13:42.007410 IP gateway-eng.aminocom.com.37047 > alanine.aminocom.com.domain: 35410+ PTR? 12.3.172.10.in-addr.arpa. (42)

    At the same time, the DNS log shows:

        Jul 22 09:13:38 gateway named[3487]: client 10.171.3.6#61300: view internal: error sending response: host unreachable
        Jul 22 09:13:40 gateway named[3487]: client 10.172.3.12#56230: view internal: error sending response: host unreachable
        Jul 22 09:13:40 gateway named[3487]: client 10.171.3.8#55221: view internal: error sending response: host unreachable
        Jul 22 09:13:49 gateway named[3487]: client 10.171.3.8#51342: view internal: error sending response: host unreachable

    So clearly at 09:13:40 there were two unsuccessful attempts to respond to internal machines (10.172.3.12 and 10.171.3.8, both DNS servers), but nothing in the tcpdump output.
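
    "host unreachable" is the kernel's sendto() error rather than anything BIND decides, so the missing piece usually shows up next to the DNS traffic as ICMP, or in firewall connection state. A sketch of what I'd capture on the gateway (the interface name is a placeholder; conntrack requires conntrack-tools):

        # watch DNS plus the ICMP errors that should accompany each failure
        tcpdump -ni eth0 'port 53 or icmp'
        # on an iptables box, check for rules dropping INVALID-state replies
        iptables -L -v -n | grep -i INVALID
        # inspect what the connection-tracking table holds for DNS
        conntrack -L -p udp | grep 'dport=53' | head

    Note that the earlier capture filtered on port 53 only, and an ICMP host-unreachable is not port-53 traffic, which would explain why tcpdump showed nothing at the moment of the errors.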

  • Windows XP SP3 client over NAT to a Windows 2008 R2 SP1 file server disconnection

    - by Patrick Pellegrino
    We just transferred a pilot group from our old(!!) Netware infrastructure to a Microsoft infrastructure. Since then, our users have had problems accessing their files: they all experience disconnections from their mapped drives. The file server is accessed over a WAN connection through a firewall (SonicWall) between the two networks, and we do NAT. All clients run Windows XP SP3 and the file server is Windows 2008 R2 SP1. On the file server I see many events with ID 2012. Many posts on the Internet suggest a problem between the SMB protocol and NAT. We need a short-term fix so we can continue transferring users from Netware to Microsoft; after that we will work on removing the NAT. I found this MS KB, http://support.microsoft.com/kb/2444558, which suggests a kind of workaround for Windows 7 clients, but I can't find anything for Windows XP. Can anyone help me with this? We don't want to stop the project and do a network overhaul before migrating. Regards. Update: our few Windows 7 computers don't seem to have this issue.
