Search Results

Search found 1392 results on 56 pages for 'maze solving'.

Page 38/56

  • Weird Apache Crash (with Dump) zend_hash_find (), libphp5.so

    - by Jacob84
    To be honest, I don't have experience working with Apache. I'm just doing my best to solve this and don't know if I'm going about it correctly, so any help will be greatly appreciated. We have a PHP page which is throwing the following message in the browser: "Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data." The logs in /var/log/httpd don't seem to help, because it appears Apache is unable to write any information. So the exception or error is preventing the writing (maybe it occurs at some stage of the process that makes it impossible to log?). I've read about the procedure for generating core dumps of Apache, and here is the content:

      Reading symbols from /lib64/libgpg-error.so.0...(no debugging symbols found)...done.
      Loaded symbols for /lib64/libgpg-error.so.0
      Reading symbols from /usr/lib64/php/modules/zip.so...(no debugging symbols found)...done.
      Loaded symbols for /usr/lib64/php/modules/zip.so
      Core was generated by `/usr/sbin/httpd'.
      Program terminated with signal 11, Segmentation fault.
      #0  0x00007fb828fff712 in zend_hash_find () from /etc/httpd/modules/libphp5.so
      Missing separate debuginfos, use: debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64

    I've been looking through the PHP files and haven't found any direct call to zend_hash_find (which seems to be causing the error). I've searched Google but found nothing related. Can somebody please help? Is there any step I need to take to find out more? Thanks a lot, as always!
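
    A hedged sketch of the usual next step: install the debug symbols the dump complains about, then re-open the core in gdb so the backtrace shows which PHP function was executing when zend_hash_find () crashed. The core file path below is hypothetical, and debuginfo-install assumes the CentOS debuginfo repositories are enabled.

      sudo debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64 php
      gdb /usr/sbin/httpd /tmp/apache-core.12345   # core file path is an assumption
      (gdb) bt full                                # full backtrace, now with symbols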

  • How do I keep folders synced and backed up between two Macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers: a Mac Pro, and a MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up both computers in their entirety to an external drive with Time Machine, which is rather useless, as it doesn't sync anything. What I really want is to keep my important files synced between both computers and my NAS (which is running RAID 5). That way I'm not backing up easily replaceable system files, and I've got all my important files in three places, two of which are running RAID, so at least five drives would have to crash at the same time before any actual data loss occurred. The folders I want to keep synced are basically my photo, documents, development, MAMP, and work folders; I also want the user Library folder backed up, but not synced. I'm thinking I'd have to use rsync, but I don't know how. Before anyone suggests Dropbox and similar services: I don't want to use them, for several reasons, among them security (Dropbox has obviously proved this point), speed (sometimes I'll sync gigabytes of data, and that will be significantly faster locally, and probably even through VPN, as I have a gigabit pipe), space (space on my NAS is cheap and practically limited only by my needs), reliability (even if my Internet connection went down, I'd still need to be able to keep my files synced in case I had to go somewhere on the fly), and price (I already have all the hardware, and for the number of gigabytes and the bandwidth I'd need, I doubt there's any free or cheap service). Those are my main reasons for wanting to keep it local. I apologise for any spelling or grammatical mistakes I may have made; I'm writing this on my smartphone from a shaky train, and English isn't my mother tongue. I gratefully appreciate any answers, even ones that only partly solve my problem.
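
    A minimal sketch of the rsync side, assuming the NAS is reachable over SSH under the alias "nas" and that the folder and volume names below (which are hypothetical) match the real layout:

      #!/bin/bash
      # Mirror the synced folders to the NAS; -a preserves permissions and
      # times, -z compresses over the wire, --delete makes the NAS copy an
      # exact mirror of the local copy.
      for dir in Pictures Documents Development MAMP Work; do
        rsync -az --delete "$HOME/$dir/" "nas:/volume1/sync/$dir/"
      done
      # The user Library is backed up but never mirrored back, so no --delete.
      rsync -az "$HOME/Library/" "nas:/volume1/backup/$(hostname)-library/"

    Note that rsync on its own is one-way; keeping two Macs in true two-way sync through the NAS is usually done with a tool like unison instead, or by treating the NAS copy as the master.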

  • Create a special folder within an Outlook PST file

    - by Tony Dallimore
    Original question: I have two problems caused by missing special folders. I added a second email address, for which Outlook created a new PST file with an Inbox to which emails are successfully imported. But there is no Deleted Items folder: if I attempt to delete an unwanted email, it is struck out, and if I move an email to a different PST file, it is copied. I also created a new PST file using Data File Management. This PST file has no Drafts folder. This is not important, but I fail to see why I cannot have a Drafts folder if I want one. Any suggestions for solving these problems, particularly the first, gratefully received. Update: Thanks to Ramhound and Dave Rook for their helpful responses to my original question. I assumed that not having a Drafts folder in an archive PST file and not having a Deleted Items folder associated with an Inbox were parts of the same problem, or I would not have mentioned the Drafts folder issue, since I have an easy workaround for it. Perhaps my question should have been: how do I load emails from an IMAP account and still be able to delete the spam?

  • Windows 7 Unidentified Network problem. Cannot connect to the internet.

    - by Gordon
    This is my first time on this website, but I was told this was a good place to ask. I basically have a problem with Windows 7 connecting to my home network: it keeps labelling my home network as "Unidentified", and it sits at "Identifying..." until it simply says it cannot connect to the Internet. I don't know how this problem occurred; it simply happened one morning. I am running Windows 7 Ultimate with a Realtek network adapter. I don't think it's the drivers. I have already tried a system restore to a date when my computer was working fine, but it still didn't fix the problem. From what I've read online, there was a bug in this area involving services.msc, something to do with the Bonjour service, but I cannot find either, so I do not think that is the problem. I'll be online for a while, so I can provide any additional details if needed. I don't really know how to explain it, because it's so fudging complicated. I would really appreciate clear and open steps to solving this. I have tried some things like System Restore and rolling back drivers, but they don't seem to help.
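
    For what it's worth, a hedged sketch of the generic stack-reset steps usually tried on an "Unidentified Network" before digging deeper; run them in an elevated command prompt and reboot afterwards. These are standard Windows commands, but whether they fix this particular case is an open question.

      ipconfig /release
      ipconfig /renew
      ipconfig /flushdns
      netsh winsock reset
      netsh int ip reset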

  • Can't access my accelerated hard disk from MS-DOS after installing Linux on an SSD cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my SSD (I forgot my PC actually came with one). When the installer detected a ~31 GiB disk that it wanted to install to, I was a bit confused, since I had freed up 30 GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into Windows and got an Intel RAID disk utility screen saying I should disable acceleration on a disk because something couldn't be found. I cancelled it, but whatever I tried (recovery tools, setups, etc.), I just couldn't access the drive, which was apparently using the SSD as a cache. I've been stuck since then. I tried setting the 'raid' flag on the disk from GParted; still I couldn't get in. I tried the diskraid utility from the Windows recovery disk, and it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as RAW type, and I can't access anything. I can, however, mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment, so I can't simply do a factory re-install. Please, how do I go about solving this issue? Precisely, I would like to know how to boot into the drive again. Thanks!
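
    Since the files are still readable from Mint, a hedged first step is to copy them off before touching the RAID metadata any further. A minimal sketch, in which the device name and mount points are assumptions to be checked against the real disk layout (lsblk will show it):

      lsblk -f                                   # identify the Windows partition
      sudo mkdir -p /mnt/windows
      sudo mount -o ro /dev/sda3 /mnt/windows    # /dev/sda3 is hypothetical; mount read-only
      sudo rsync -a /mnt/windows/Users/ /media/externaldisk/backup/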

  • Suggestions for Windows 8 migration [closed]

    - by Big Endian
    I'm thinking of migrating to Windows 8. At first I hated it, but I'm pretty sure the Windows 8 model is the future, and I don't particularly want to end up hating the future like my parents, frustrated and bewildered by anything past Windows XP. I'm currently running Windows 7, and my system has been accumulating some problems. It's probably an accumulation of issues from installing too much software, changing firewall settings, installing Ubuntu alongside Windows, and... well, I'm not sure, but my computer has been buggy in unexpected ways lately (freezing and unfreezing, the display driver crashing and recovering, and what I call a "deep freeze/thaw cycle" where the mouse won't even move for a while). I'm good at solving computer problems, but I can't seem to get to the root of these, and my best idea for fixing them is to make sure I've backed up every file and then re-install the entire OS. Luckily for me, a new OS is just around the corner, so this would be a good time to get two things out of the way at once. The problem I see is that the upgrade options I've found are all "seamless". I don't want a seamless upgrade; I want to wipe the slate clean and start all over. Does this mean I will have to buy a full, new copy of Windows 8 rather than one of the cheaper upgrade options? Or does it not make sense for me to go to Windows 8, given that I have a laptop, not a tablet? Maybe I should just re-install Windows 7, or even call good enough good enough, try to eliminate the bugs, and start with a fresh slate in 2-3 years after this computer eventually dies entirely from (inevitable) hardware failure. What would be the advantages, disadvantages, and costs of each option? How would I go about upgrading to Windows 8, if that's the option I choose? And what is your personal opinion about my situation?

  • How to set up a Mac server to use two gateways

    - by Brady
    I recently asked this question: "How to set Mac server to use different Gateway for internet bound traffic". The answer given works, but it has presented me with another issue that I didn't make clear in that question. Here is my network layout as it stands: at the moment, outside staff members use some services on the existing "internet 1" link. Those services are hosted by the Mac server. If I change the gateway of the Mac server to the second modem, those outside staff lose visibility of those services. I don't know how to go about solving this. I want the second link to be used when the Mac server rsyncs data offsite, but everything else should use link one. How do I do this? Thanks, Scott. EDIT: This has been resolved by setting the default gateway on the Mac server to 192.168.1.254, thus leaving everything on the network as it was before; then, to get the Mac server to use the other link for rsync, I added a route on the Mac server that sends traffic to the rsync server through the second gateway:

      sudo route add -net {server IP's}/{Netmask} 192.168.1.1

    I've awarded the answer to gravyface for pointing me to a post on how to make this route persistent on a Mac.
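
    A hedged sketch of one way to make that route survive a reboot, via a launchd job. The label, the plist path, and the 203.0.113.0/24 target network are all assumptions to be replaced with the real rsync server's network. Create /Library/LaunchDaemons/net.example.rsyncroute.plist with this content:

      <?xml version="1.0" encoding="UTF-8"?>
      <plist version="1.0">
      <dict>
        <key>Label</key><string>net.example.rsyncroute</string>
        <key>ProgramArguments</key>
        <array>
          <string>/sbin/route</string><string>add</string>
          <string>-net</string><string>203.0.113.0/24</string>
          <string>192.168.1.1</string>
        </array>
        <key>RunAtLoad</key><true/>
      </dict>
      </plist>

    then load it once (it re-adds the route at every boot thereafter):

      sudo launchctl load /Library/LaunchDaemons/net.example.rsyncroute.plist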

  • Spots appear in a rectangular area on screen, Ubuntu GNOME 13.04, NVIDIA driver

    - by frozen-flame
    I am using Ubuntu GNOME 13.04 with the nvidia-310 driver installed. My GPU is a GeForce GTX 650. Strange spots frequently appear on screen, with the following traits: the spots are restricted to one or two rectangular areas at any instant; when typing, the pattern of spots changes (they may multiply, or all disappear when one key is pressed); mouse movement also influences them. The problem persists for the whole session, and the only way I can get rid of it is to reboot; it can be detected as soon as I reach the desktop. Simultaneously, the "power off" option is missing from the top-right menu of GNOME 3. I never had such a problem when using Windows 7 on the same computer, nor on Ubuntu with the Nouveau driver. Occasionally, half of the screen becomes black. I have googled a lot; similar conditions are described, but with no confirmed solution. The uninstall-reinstall strategy does not work. Any clue to solving this will be appreciated.

  • Enable CPU fan always on

    - by Gundars Meness
    I am using a three-year-old overheating laptop, and I want my CPU fan to be spinning 24/7, regardless of the consequences. How do I make it spin? The problem is that the CPU and GPU heat up to 68°C (154°F) right after boot and never cool down, because the CPU fan is not spinning at full throttle. It starts spinning faster when the temperature goes over 70°C and stops when it reaches seventy again. When doing heavy work on databases, it goes from 70 to 90 in no time and the machine automatically powers off. The BIOS does not contain any "fan spin 100%" option, just "spin slowly all the time" and "auto", which is even more useless than the first one, since my fan doesn't have a PWM wire. Currently I'm working around this with a cooling stand (3x5V), but it isn't much help. I would rather use the CPU fan, since it is the only fan directly responsible for cooling the CPU/GPU. But how do I make it spin at 100% all the time? Should I attach its red power wire to the motherboard to get a constant 5V (is there such an option?), or is there a way to control it via software?

      Laptop: Samsung R528, 2.3 GHz Intel i3 with Nvidia GeForce 310M
      BIOS: Phoenix 03KT.M003.20100622.KSJ (and that is the latest update)
      OS: Ubuntu 12.04.2 LTS with the 3.2.0.51 kernel
      CPU fan: 5V, 0.4A, only 3 pins, no PWM

    P.S. Yes, I did clean everything with alcohol, freed the air vents, changed the thermal paste, etc.; that reduced the temperature by 4 degrees.
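
    On the software side, a hedged sketch of how to find out whether the embedded controller exposes any fan channel at all; on many laptops it does not, in which case no userspace tool can help and a hard-wired 5V feed really is the only route. The package names are standard Ubuntu ones.

      sudo apt-get install lm-sensors fancontrol
      sudo sensors-detect            # probe for hardware monitoring chips
      sensors                        # do any fan*/temp* channels show up?
      ls /sys/class/hwmon/hwmon*/    # no pwm* files here means no software fan control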

  • Wireless driver activation problem on a Compaq C700 in Ubuntu 9.04

    - by Fazil
    I am using Ubuntu 9.04, and I can't get my wireless working. I activated the madwifi driver in Administration > Hardware Drivers, but the wireless still couldn't be activated. When I type lspci, I get the following output:

      00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 03)
      00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 03)
      00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 03)
      00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 04)
      00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 04)
      00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 04)
      00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 04)
      00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 04)
      00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 04)
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f4)
      00:1f.0 ISA bridge: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller (rev 04)
      00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 04)
      00:1f.2 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 04)
      00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 04)
      01:00.0 Ethernet controller: Atheros Communications Inc. AR242x 802.11abg Wireless PCI Express Adapter (rev 01)
      02:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

    When I tried in Windows, however, I found that the driver for my laptop is "Atheros AR5007 802.11b/g WiFi Adapter". So what can I do to solve this problem?
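
    A hedged sketch: on 9.04 the AR242x is often better served by the in-kernel ath5k driver than by madwifi, and the two conflict if both are loaded. The module names below are the standard ones, but whether ath5k handles this particular chip revision has to be tested.

      sudo modprobe -r ath_pci                  # unload the madwifi module if loaded
      echo "blacklist ath_pci" | sudo tee -a /etc/modprobe.d/blacklist.conf
      sudo modprobe ath5k                       # load the mainline driver instead
      iwconfig                                  # check whether a wireless interface appears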

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this post (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr>

    <introduction> I am programming something I have never done before, and there might already be tools out there that do the exact thing I am programming, but at this point I am having too much fun to care, so I am still going to do it from scratch even if a tool exists. Anyway, I am working on a Ruby on Rails app and need a certain feature. Basically, I want each entry in one of my tables (let's say, for example, a table of video games) to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and also tracked across different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of its different versions as objects in Ruby, and then serialize them, preferably in human-readable form (so I'll most likely use YAML), in case manual editing is ever needed due to corruption by a software bug or a mistake made by an admin doing some version editing. So at first I dived head-first into this feature, only to find that generating a diff patch efficiently is more difficult than I thought. I did some research and came across some ideas, some of which I have implemented already and some I have not. It all pretty much revolves around the longest common subsequence problem, as you will know if you have done anything with diff or diff-like features, and optimizing the function that solves it. Currently my implementation truncates the compared versions of the text body from the beginning and end until non-matching lines are found. It then solves the problem using a comparison matrix; but instead of incrementing the value stored in a cell when it finds a matching line, as in most longest-common-subsequence algorithms I have seen examples of, it increments on a non-matching line, so as to calculate edit distance instead of longest common subsequence. As far as I can tell, the two approaches are essentially two sides of the same coin, so either could be used to derive an answer. It then back-traces through the comparison matrix, noting where there was an increment and in which adjacent cell (west, northwest, or north), to determine that line's diff entry, and assumes all other lines to be unchanged. (A sketch of this baseline approach appears after the list below.) Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started worrying about optimizing at least enough that a spammer who somehow knew how I implemented the version control system, and knew my worst-case-scenario entry, still couldn't hit the server that badly. After some searching and reading of research papers and articles on the internet, I've come across several optimizations that seem decent, but all seem to have pros and cons, and I am having a hard time deciding how well the pros and cons balance out in this situation. So, are the ones listed here worth it? I have listed them with their known pros and cons. </introduction>

    <optimizations>

    1. Chop the compared sequences into multiple chunks of subsequences by splitting where lines are unchanged, then truncate the unchanged lines at the beginning and end of each section, and solve the edit distance of each subsequence separately.
       Pro: Changes the time growth as the changed area gets bigger from quadratic to something closer to linear.
       Con: Figuring out where to split already seems to require solving edit distance, except now you don't care how it changed. It would be fine if this were solvable by a process closer to solving Hamming distance, but a single insertion throws that off.

    2. Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness, then solve the edit distance by comparing the hash integers instead of the sequence elements themselves.
       Pro: Comparing two integers is faster than comparing two strings, so a slight performance gain is made on every comparison, which can add up.
       Con: The cryptographic hash function takes time to convert all the sequence elements, and that conversion may cost more than the integer comparisons gain back. You could use the built-in hash function for a string, but that does not guarantee uniqueness.

    3. Use lazy evaluation to calculate only the three center-most diagonals of the comparison matrix, computing additional diagonals only as needed, and use the same approach to possibly remove the need on some comparisons to compare all three adjacent cells, as described here.
       Pro: Can turn an algorithm that always takes O(n * m) time into one where that is only the worst case: the best case becomes practically linear, and the average case lands somewhere between the two.
       Con: It is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time working out how to convert it into Ruby from the description at the site linked above.

    4. Write a C module that does the hard work at the native level, with a Ruby wrapper around it so Ruby can make all the calls to it that it needs.
       Pro: I have to imagine that evaluating something like this in C would be a LOT faster.
       Con: I have no idea how Rails handles apps with Ruby code that has C extensions, and it hurts the portability of the app.

    5. (An optimization for after the edit distance is solved.) Store additional combined diffs alongside the ones produced for each version, making a delta-tree data structure with the most recently made diff as the root node, so that getting to any version takes O(log n) time in the worst case instead of O(n).
       Pro: Would make going back to an old version a lot faster.
       Con: Every new commit would give the delta-tree a new root node, costing time to reorganize the tree for an operation that is carried out far more often than rolling back a version, not to mention the unlikelihood that the rollback would be to an old version.

    </optimizations>

    So, are these things worth the effort?
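
    For reference, a minimal Ruby sketch of the baseline described in the introduction: truncate the common prefix and suffix, then fill a comparison matrix that increments on non-matching lines (i.e. classic edit distance over lines). Illustrative only, with none of the optimizations above applied.

      # a, b: arrays of lines (strings); returns the edit distance between them
      def line_edit_distance(a, b)
        a, b = a.dup, b.dup
        # strip the common prefix and suffix so the matrix only covers the changed region
        while !a.empty? && !b.empty? && a.first == b.first
          a.shift; b.shift
        end
        while !a.empty? && !b.empty? && a.last == b.last
          a.pop; b.pop
        end
        # m[i][j] = edit distance between the first i lines of a and the first j lines of b
        m = Array.new(a.size + 1) { |i| Array.new(b.size + 1) { |j| i.zero? ? j : (j.zero? ? i : 0) } }
        (1..a.size).each do |i|
          (1..b.size).each do |j|
            cost = a[i - 1] == b[j - 1] ? 0 : 1
            m[i][j] = [m[i - 1][j] + 1,         # north: delete a line
                       m[i][j - 1] + 1,         # west: insert a line
                       m[i - 1][j - 1] + cost]  # northwest: keep or change a line
                      .min
          end
        end
        m[a.size][b.size]
      end

    The back-trace that turns the matrix into actual diff entries is omitted here, but it walks the same three directions from m[a.size][b.size] back to m[0][0].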

  • How can I do a large file upload using Sinatra, haml, nginx, and passenger?

    - by mmr
    Hi all, I need to be able to allow a user to upload 30-60 MB files at a time. Right now, I'm solving the problem with a simple form post:

      %form{:action=>"/Upload", :method=>"post", :enctype=>"multipart/form-data"}
        - @theModelHash.each do |key,value|
          %br
          %input{:type=>"checkbox", :name=>"#{key}", :value=>1, :checked=>value}
          =key
        %br
        %input{:type=>"file", :name=>"file"}
        %input{:type=>"submit", :value=>"Upload"}

    This form allows the user to select processing options contained in theModelHash and upload a file for processing. The problem is that this method both freezes the user's UI and requires the entire form to be re-posted when the user presses the 'back' button. I've looked at SWFUpload, but have no idea how to integrate it into my relatively simple app. There's a page here about integrating it with Rails, but I'm using Sinatra, and I'm new enough to this whole web programming thing that I don't know how to modify those files to work with what I need to do. Is there a how-to for adding large file uploads to my form? Something relatively simple that just adds a progress bar and doesn't re-post? I feel like I'm having to triple the size of my application just to make this feature play nice, and that's bothering me a bit.
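
    On the server side, one hedged note regardless of which upload widget is chosen: nginx caps request bodies at 1 MB by default, so a 30-60 MB POST will be rejected with a 413 before Sinatra ever sees it. A minimal sketch of the relevant vhost, in which the server name and paths are hypothetical:

      server {
          listen 80;
          server_name example.com;              # hypothetical
          root /var/www/myapp/public;           # hypothetical Sinatra app path
          passenger_enabled on;
          client_max_body_size 100m;            # allow the large uploads through nginx
      }

    The progress bar itself still has to come from the client side (SWFUpload or similar), since a plain form post gives the browser nothing to report progress on.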

  • Some free cloud solutions to enhance your business

    - by Saif Bechan
    I am co-owner of a small internet business. I am in charge of IT, and I try to get things done at the lowest possible cost. When investing in servers, resources, and overall business costs, your project can soon turn into a financial disaster. Cloud solutions can help you solve some financial problems, as well as scalability problems and overall performance problems with your server or web application. Recently I moved the whole internal/external communication (email, calendar, documents) of my business to the cloud, using the free version of Google Apps. This works great and is a big advantage on multiple levels: I no longer have to fight spam on my own system, fewer resources are used on my server, and switching servers will go a lot easier. Questions: Can you name some cloud solutions that you have used, or that you would simply recommend? They can range from financial benefits to organizational or performance benefits; it doesn't matter, as long as it helps you spread the load of your business.

  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way. Security is a little tricky; the staff and patron networks need to be separated, for security reasons. Even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs so they didn't share any servers, they need to use the same two printers at the very least. Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network, and has the same Windows server connected on a different port, providing DNS and DHCP, and another, faster DSL modem (separate connections are very useful, especially as we heavily depend on some cloud-hosted software). tl;dr: We have a public network, and a NATed staff network within it. My question is; is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget. (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)

  • open_basedir problems with APC and Symfony2

    - by Stephen Orr
    I'm currently setting up a shared staging environment for one of our applications, written in PHP 5.3 and using the Symfony2 framework. If I host only a single instance of the application per server, everything works as it should. However, if I deploy additional instances of the application (which may or may not share the exact same code, depending on client customisations), I get errors like this:

      [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Warning: require(/var/www/vhosts/application1/httpdocs/vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php): failed to open stream: Operation not permitted in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193
      [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Fatal error: require(): Failed opening required '/var/www/vhosts/application1/httpdocs/app/../vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193

    Basically, the second site is trying to require files from the first site, but due to open_basedir restrictions it can't. I'm not willing to disable open_basedir, as that would only mask the problem instead of solving it, and would create a dependency between applications that should not be present. I initially believed this was a Symfony2 error, but I've now tracked it down to an issue with APC; disabling APC also makes the error go away, but I'm concerned about the performance impact of doing so. Does anyone have any suggestions on what I might be able to do?
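
    A hedged sketch of a possible middle ground between "disable APC everywhere" and "shared cache collisions": keep APC but stop it caching for the affected vhosts, so each application resolves its own files. This assumes mod_php (so php_admin_flag applies per vhost) and that apc.cache_by_default is changeable at that level; both assumptions need verifying against the installed APC version.

      <VirtualHost *:80>
          ServerName app2.example.com
          DocumentRoot /var/www/vhosts/application2/httpdocs
          # Trade opcode caching for isolation on this vhost only
          php_admin_flag apc.cache_by_default off
          php_admin_value open_basedir /var/www/vhosts/application2:/tmp
      </VirtualHost>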

  • Not able to run Firefox on a headless Ubuntu Server 9.10

    - by Julio J.
    I need to run Firefox on my server in order to execute some Selenium tests from Hudson. I would love not to have to install a complete GUI, so I installed Xvfb in order to fake one (that's my understanding of it; correct me if my assumptions are wrong). After some time trying to make it work, I'm stuck with the following situation:

      $ sudo Xvfb -ac :99 &
      [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)

      $ firefox
      [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
      [config/dbus] couldn't register object path
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      Xlib: extension "RANDR" missing on display ":99.0".
      GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: /bin/dbus-launch terminated abnormally without any error message)

    I'm running Firefox without having installed it from the repositories, and I'm getting a socket timeout when I try to run the Selenium tests, so I guess the problem is in Firefox and Xvfb. I have already installed the following package, which some forums suggest as a fix, but in my case it doesn't work:

      gconf-defaults-service - GNOME configuration database system (system defaults service)

    Any explanation of the problem, and ways of solving it without installing a full GUI, would be very helpful.
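
    A hedged sketch of a slightly different startup sequence: xvfb-run wraps the Xvfb-plus-DISPLAY dance, and spawning a D-Bus session first often silences GConf errors like the one above. The display number and geometry are arbitrary choices.

      sudo apt-get install xvfb dbus-x11
      eval $(dbus-launch --sh-syntax)              # gives Firefox/GConf a session bus
      xvfb-run -a -s "-screen 0 1024x768x24 -ac" firefox &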

  • How to remove static IP from Mitel 5312 and enable DHCP

    - by jimbo
    I'm not sure this is the right forum for this question -- although I'm confident I'll be told if not! -- but I've read the fine manual (at least, such a manual as I have), I've googled and I cannot get any insight into where to even start solving this problem. I have a bunch of Mitel 5312 handsets, talking to a 3300 ICP controller. Some handsets are at a remote location, get an address from my DHCP server over there, and use the Mitel "Teleworker" extension to connect in over the Internet. The remaining handsets were set up with static IPs by a BT-supplied engineer, on the same subnet as the ICP itself. So far, so good. I have one remaining teleworker licence, and need to move a handset from the home location to the remote. I've managed to boot it and configure teleworker, but I cannot for the life of me see where I tell it to forget its static IP, and make a DHCP request. Any ideas? Should I be looking on the controller, or holding magic combinations of buttons on the handset itself? EDIT: Following some advice from Robert, below, I've broken out a spare device and reassigned the profile for this user's extension to the MAC of the new phone, and a new profile to the old MAC. Unfortunately this still doesn't get me anywhere -- the new handset now asks for the teleworker install password. I suspect I'm going to have to get a Mitel engineer involved here, since I've never been given that password... Unless anyone has any great ideas?

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem: I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve the static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before things are in sync. I see two solutions to the problem of pics not yet being available on the 'home' server: 1. Write a script on EC2 (where the app resides) that fetches from the DB the pics that have a status of "not-yet-synced", which is the default state when a user uploads a picture. The script then pings each picture, and if it gets an OK response, updates the DB from "not-yet-synced" to "synced". 2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 when it sees a 404 (i.e. it doesn't find the image requested). This way I wouldn't need the script from solution 1. Which approach do you suggest I take to solve this redundancy problem? What is the practice in production environments? To clarify further: I'd like to serve images from the 'home' server first, and if that fails, serve them from S3. Tnx, Alan
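
    A hedged sketch of option 2 with mod_rewrite: if the requested image does not exist on disk yet, redirect the client to the S3 copy. The bucket name and the /images/ prefix are hypothetical.

      RewriteEngine On
      # Only rewrite when the local file is missing
      RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
      RewriteRule ^/images/(.*)$ http://my-bucket.s3.amazonaws.com/images/$1 [R=302,L]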

  • Access router set up as a bridge behind another router

    - by Alari Truuts
    I have a problem my ISP is refusing to help me with, even though they set up the whole system. Specifications: there's a Thomson TG784 router through which the internet comes into the building. Behind that (for some reason) is a Juniper NetScreen 5XT-105 firewall/router, which leads to an AMX NXA-ENET24 switch that carries the connections all over the building, and a series of Apple AirPorts for Wi-Fi. Problem: the first router (the Thomson) is required for ipTV (by Elion). The TV, or the ipTV box, has to be connected straight to the Thomson router. My service provider cannot see the Thomson router from their side, but they do see the Juniper, so we might conclude the Thomson has been configured as a bridge. I need a way to access the Thomson router and see its configuration, because currently, when connecting a Samsung TV to that router (with the Elion app for ipTV viewing) or even a computer, it cannot access the internet; and even if it could, it would update the Thomson's firmware, losing the configuration I need to preserve. I'm unable to find out the Thomson router's IP address so I can connect to it, and when connecting directly with a Cat5 cable, it doesn't give me an IP address. I hope someone can point me in the right direction for solving this issue. Thank you all for reading, and I appreciate any help. Alari Truuts

  • Ubuntu purple splash screen with blinking pixels?

    - by joxnas
    I had Ubuntu 9.10 and upgraded to 10.04 after solving some problems (a freeze at boot). Since then, I don't get the Ubuntu logo at boot, but a purple screen with some blinking pixels. I didn't care much about it, but today my computer stayed on that screen for too long (normally it's just a quarter of a second, but today it was like a minute), and it happened 4 or 5 times in a row (only on the 5th time did I realise it was not freezing, it simply took more time). After a reboot it is again a quarter-second of purple screen, but I don't want this problem to return, so I want to get rid of the purple screen (I think it is an indicator of the underlying problem). I have already installed the graphics drivers (going to System > Administration > Hardware Drivers), but that didn't solve anything. (I don't know if it is even related.) I searched on Google and found something old (from 2006) that I think may have some relation to my problems: http://ubuntuforums.org/archive/index.php/t-294692.html. But I couldn't understand the conversation (I'm a Linux novice). Sorry for my horrible English; I would appreciate any help! My hardware: ATI Mobility Radeon HD 4650, P7450 2.13 GHz Core 2 Duo.

  • SSH sometimes screws up connection when terminal overflows?

    - by SeveQ
    I've got a problem with SSH on a Debian Lenny based server (it's a vHost within a Xen environment, booted on a Xen kernel), and I hope someone can help me with it. The SSH connection frequently seems to get screwed up when the terminal overflows (new lines beyond the bottom of the terminal, usually forcing it to scroll). The connection gets lost, but is not properly disconnected. It nearly always happens when I do the following: an existing SSH connection gets disconnected (properly); I tell PuTTY to re-establish the connection; the login prompt appears at the very bottom of the PuTTY terminal window; I enter my login name and press Enter; I'm asked for the password, I enter it, press Enter, and BOOM! Nothing more happens, and I have to reconnect again. So it is reproducible. I'm not totally sure whether the connection dies before or after I enter the password. Furthermore, it also happens when there is a lot of text to be displayed (for example, when I compile something, or run ls -l on a directory with many entries). Using 'screen' helps to reduce the frequency of occurrence, but doesn't solve the problem completely. It occurs regardless of which terminal software I use; I mostly use PuTTY, but it also happens with other clients. I certainly hope somebody can help me solve this problem. Thanks in advance! Edit: I've just made a Wireshark trace of the SSH connection, and there is nothing, I repeat, nothing different between the working and the failing connection (at least aside from frame numbers, ports, and times, which obviously can't be equal). This leads me to the assumption that the error has to happen on the server's side.
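
    Not a root-cause fix, but a hedged mitigation worth ruling out: client-side keepalives, which stop intermediate NAT or firewall state from silently dropping the TCP session. These are standard OpenSSH client options (PuTTY has the equivalent under Connection > "Seconds between keepalives"); the host alias below is hypothetical.

      # ~/.ssh/config
      Host lennybox
          ServerAliveInterval 30    # probe the server every 30 seconds
          ServerAliveCountMax 4     # give up after 4 unanswered probes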

  • Errors generating gpg keypair

    - by Evan Lynch
    I posted this originally on Stack Overflow, but was told that it was off-topic and this would be the better place, so I am reposting it here and deleting my original topic. I have a rather old PGP key, but I long ago lost the private key for it, so I'm trying to generate a new key with GPG on Windows 7. While it technically generates the key, GPA crashes every time I generate the keypair. I've tried this four times now, have just downloaded what appears to be the latest version of Gpg4win, and am still getting the problem. A comment on my original post informed me that "GPA crashes" is not a very good description of the problem, but unfortunately I can't do much better than that: all it tells me is "gpa.exe has crashed and will close now"; I don't get an error dump or anything. Is there anything I can do to fix this, or is this just a bug in the latest version of Gpg4win? Here are the specs of what I'm using: GPA 0.9.4, GnuPG 2.0.22. My operating system is Windows 7 64-bit, and I have 5 GB of RAM. Also, I was told to try generating the keypair on the command line, but can't find any documentation on how to do this in Windows 7. If anyone could link me to current documentation for that, it would be a good workaround for this problem.
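
    For the command-line route, a hedged sketch: GnuPG's own interactive generator runs fine in cmd.exe and bypasses GPA entirely. Depending on the Gpg4win version, the binary may be installed as gpg.exe or gpg2.exe.

      gpg2 --gen-key
      REM answer the prompts (key type, size, expiry, name and email address)
      gpg2 --list-keys
      REM export the new public key; the email address is a placeholder
      gpg2 --armor --export you@example.com > publickey.asc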

  • How do you enable View Source in IE8 when it gets magically disabled

    - by Tim Meers
    I have multiple computers that all seem to have View Source disabled in the context menu when you right-click on a web page. Now, I know it's not the web page somehow disabling it; I'm pretty sure that's not even possible. But alas, I have at least 3 machines in my office (not on AD) that have this problem, and I have also worked on clients' computers with the same issue. It's downright maddening! I tried to Google for it, but it just shows results from the dawn of IE6, in all of its "glory", with a bug where View Source would be disabled if the cache was full. But that is not the case in IE8. Does anybody have a clue why this is happening, or a fix for it? Maybe a registry setting? Update: I got a little closer to solving it, but there is still an issue on one computer where View Source works in HTTP but not in HTTPS; one other computer works correctly in both. I found these two keys missing in the registry:

      [-HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\View Source Editor]
      [-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\View Source Editor]
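
    A hedged note on those keys: the "View Source Editor" key, when present, points IE at an external editor for View Source, and a stale path there is one known way for the command to silently break. The leading "-" in the lines above is .reg deletion syntax, so a sketch of a .reg file that removes the override (harmless if the keys are already absent):

      Windows Registry Editor Version 5.00

      [-HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\View Source Editor]
      [-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\View Source Editor]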

  • Having problems VPN'ing into our Windows server network.

    - by Pure.Krome
    Hi folks. When two people (on their notebooks) try to VPN into our office, only the first user gets a connection; the second user always times out. Is it possible for VPN to allow two or more people, using/sharing the same external public IP, to connect and authenticate? Now for some specifics (because those two statements are very broad). I'm not in the IT department; I'm a developer. Our IT department doesn't really care (sigh), so it's up to me to fix this. Our office is all Microsoft-shop stuff, servers and clients. We also have a firewall (WatchGuard brand?) and some other crazy setups (yes, I know, this is very vague :( ). So I'm wondering: is it possible for multiple users behind the same public IP to connect via VPN to a Windows server? I'm under the impression it is. But is it possible that this only works when the clients (who all sit behind that single public IP; otherwise they would have their own IPs) have UPnP running, or something like that? This is killing me, and I need to start asking the right questions, because these guys don't know what they are doing, and I can't work without this connection. I know this is a vague question with so many ifs and whats, but maybe some questions or suggestions from you folks might start to lead towards solving the problem. EDIT: Network connection: WAN Miniport (PPTP)

  • How do you optimize your Outlook Exchange + IMAP setup?

    - by Mike
    My company provides an Outlook/Exchange account we must use for mail/calendar. Like many companies, they unfortunately also provide a ridiculously small mail quota. I got tired of managing and backing up .pst files (since I'm always in my e-mail there is never a good time to back it up), so I started storing my archived mail "in the cloud", using an IMAP server I set up on my Linux box. This has a few drawbacks for me: IMAP (at least the implementation in Outlook) is *very slow*. Furthermore, if I move a large number of messages to the IMAP server, it blocks the entire Outlook client for hours sometimes, which is quite annoying. Can't use exchange over HTTP to do mail without launching a VPN session, because the client-side rules I have which organize my mail fail and disable the rule if the IMAP server can't be reached. If I reply to a message from my IMAP store, I have to specify a SMTP server willing to relay for me in order to send e-mail, unless I always remember to select my Exchange account while composing e-mail. ... but the main advantage of being very easy to back up, with a couple of cron jobs that essentially do an 'rsync'. Short of moving the IMAP server to my local host (which seem like might have the same file locking problems as using a .pst), my options seem limited for solving (1). I'd like to come up with a solution for (2) and (3) though. For problem (2) would it be possible to somehow tell Outlook that the IMAP server is "offline", and have it synchronize my changes during a periodic "send and receive"? If so, I wonder if it would block the Outlook client, like it does in problem (1), and if it would be compatible with the client-only rules I use to sort my mail into folders. I've looked all over the options menu and have not found a way to tell Outlook to not use a certain account for sending mail, which would solve (3). Is anyone else crazy enough to be doing something like this? Any ideas?
