Search Results

Search found 18422 results on 737 pages for 'down town'.

  • Migrating ODBC information through a batch file

    - by DeskSide
    I am a desktop support technician currently working on a large-scale migration project across multiple sites. I am looking for a way to transfer ODBC entries from Windows XP to Windows 7. If anyone knows of a program or anything prebuilt that already does this, please redirect me. I've already looked but haven't found anything, so I'm trying to build my own. I know enough basic programming to read the work of others and monkey around with something that already exists, but not much else.

    I have come across a custom batch file written at one site that (among other things) exports ODBC information from the old computer and stores it on a server (mapped as y: through net use at the beginning of the file), then later transfers it from the server to a new computer. The pre-existing code is for Windows XP to XP migrations. Here are the pertinent bits of code:

        echo Exporting ODBC Information
        start /wait regedit.exe /e "y:\%username%\odbc.reg" HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI

    and later on:

        echo Importing ODBC
        start /wait regedit /s "y:\%username%\odbc.reg"

    We are now migrating from Windows XP to 7, and this part of the batch file still seems to work for this particular site, where Oracle 8i and 10g are used. I'm looking to use my cut-down version of this code at multiple sites, and I'm wondering if the same lines of code will still work for anything other than Oracle. Also, my research on this issue has shown that 64-bit operating systems keep 32-bit and 64-bit ODBC entries in different registry locations, and I'm wondering what effect that would have on the code. Could I copy the same data to both parts of the registry, in hopes of catching everything? Any assistance would be appreciated. Thank you for your time.
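
    As a rough illustration of the "copy to both parts of the registry" idea on 64-bit Windows 7 (assuming the usual locations: 64-bit DSNs under HKLM\SOFTWARE\ODBC\ODBC.INI and 32-bit DSNs under the Wow6432Node view), a minimal sketch in Python; this is not the site's existing script, just a hypothetical helper:

        # Sketch: mirror ODBC DSN keys from the 64-bit registry view into the
        # 32-bit (Wow6432Node) view after the .reg import, so both driver
        # bitnesses see the DSNs. Needs to run elevated.
        import winreg

        SRC_VIEW = winreg.KEY_WOW64_64KEY   # where regedit /s imported the data
        DST_VIEW = winreg.KEY_WOW64_32KEY   # the view used by 32-bit drivers
        ODBC_INI = r"SOFTWARE\ODBC\ODBC.INI"

        def copy_key(path, src_view, dst_view):
            src = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0,
                                 winreg.KEY_READ | src_view)
            dst = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                                     winreg.KEY_WRITE | dst_view)
            # copy all values on this key
            i = 0
            while True:
                try:
                    name, value, vtype = winreg.EnumValue(src, i)
                except OSError:
                    break
                winreg.SetValueEx(dst, name, 0, vtype, value)
                i += 1
            # recurse into subkeys (each DSN is a subkey)
            i = 0
            while True:
                try:
                    sub = winreg.EnumKey(src, i)
                except OSError:
                    break
                copy_key(path + "\\" + sub, src_view, dst_view)
                i += 1

        copy_key(ODBC_INI, SRC_VIEW, DST_VIEW)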

  • Cisco VoIP stuck as Unregistered?

    - by Shifty
    Question: Why is one VoIP phone stuck as Unregistered?

    Background: We have a Cisco UC540 Small Business switch/router/VoIP combo. This phone was working until I powered everything down to install a larger UPS unit. The phone originally had a status of "Deceased". I removed the registration and tried to add it again. Now it just sits as "Unregistered". I even tried giving it another extension. I am stuck using the Cisco Communication Assistant since this is small-business hardware. There is very limited CLI access. Also, from what I have heard, if you access the CLI without Cisco's permission, you will void any warranty. The phone in question is a Cisco SPA501G. It is connected to a SG300-28P. There are 5 other phones on this switch working just fine. I have tried other ports with no luck. Both the link and PoE lights are lit up. Any ideas?

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case.¹ I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though not being a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turn it on... and it sounds like four professional-grade hair-dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents. Basically, my last desktop build included a $45-at-the-time graphics card, and it's been MacBook Pros and workstations since then, so I have zero idea whether I'll just be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum framerate, or should I just send this thing back now and get the quietest card in my price range?

    ¹ One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I'm only mentioning that because I suspect it could drop the overall output a few decibels, but obviously not that many.

  • PHP potential issues with compiling 5.3.8 extensions against RHEL 6 / CentOS 6 PHP 5.3.3 package

    - by user101203
    I'm working on getting a Red Hat 6 LAMP server going, and while the PHP that comes with it has many extensions we use, it doesn't have all of them. To solve this, I was thinking about one of the following:

    1. Compiling the PHP extensions which come in the ext folder of the downloadable source code of PHP 5.3.3 from php.net.
    2. Same as #1, but using the extensions from the latest PHP version (currently 5.3.8).
    3. Do #1, but manually decide which updates to backport from the latest version of the PHP extensions into the older version, and then compile the backported result.

    A drawback to #1 is that security and bug fixes come out which we wouldn't be able to take advantage of. A drawback to #3 is that it might be a lot of work. Does anyone know what the drawbacks to #2 are? I don't want to go down that route if it might result in some unexpected negative outcomes. Also, are there any other drawbacks to the other options, or a better way to go altogether?

    I want to use the PHP 5.3.3 which comes with the Linux distro because I don't want us to get to a place again where we are forced to upgrade to a new version of PHP to stay on top of security updates, like from PHP 5.2.x to 5.3.x, and there be backwards-incompatible changes (this is the situation we're in now, with PHP 5.2.x no longer being supported).

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow the users that connect to their network accounts to store their data on a NAS drive. I want the users to connect to the Lion server, as this allows for better policies and management for me, and for their AFP share to be located on a NAS drive. I've looked into home directories and network logins; however, I don't want the users to connect into a different login environment, just an authentication against their provided account on the Lion server, and for their Finder to take them to their own storage area located on the NAS drive.

    Currently I am using FreeNAS for both authentication and storage; however, there are getting to be far too many people to manage each AFP share and account, plus just using FreeNAS is extremely limiting for expansion, and if something goes wrong with one entity the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business.

    I have looked into LDAP, using the Lion server as an LDAP server to authenticate against for FreeNAS; however, I have had issues with this and thought a different approach could be better from the other side of the situation: providing the account with somewhere to store data, rather than the AFP share authenticating against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server which can be recognised as a local drive, so it can be used for network accounts?

  • Unusual HEAD requests to nonsense URLs from Chrome

    - by JeremyDWill
    I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random-character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of the request as recorded by Fiddler 2:

        HEAD http://xqwvykjfei/ HTTP/1.1
        Host: xqwvykjfei
        Proxy-Connection: keep-alive
        Content-Length: 0
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The response to this request is as follows:

        HTTP/1.1 502 Fiddler - DNS Lookup Failed
        Content-Type: text/html
        Connection: close
        Timestamp: 08:15:45.283

        Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known

    I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it before. The one modification I made to my system last week that was unusual was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both of these, but am still seeing the traffic. I have run a virus scan (Trend Micro) and HijackThis looking for malicious code, but I have not found any. I would appreciate any help tracking down the source of the requests, so I can determine if they are benign or indicative of a bigger problem. Thanks.
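
    To check whether one of these probes ever gets past DNS at all (rather than just failing as Fiddler reports), a minimal sketch that replays the HEAD request is below; the hostname is copied from the capture and is expected not to resolve:

        # Sketch: replay a HEAD request to a random-looking hostname to see whether
        # it resolves; a DNS failure here matches what Fiddler is reporting.
        import http.client
        import socket

        host = "xqwvykjfei"   # example hostname from the capture

        try:
            conn = http.client.HTTPConnection(host, 80, timeout=5)
            conn.request("HEAD", "/")
            resp = conn.getresponse()
            print("Got response:", resp.status, resp.reason)
        except socket.gaierror as exc:
            print("DNS lookup failed:", exc)   # expected for a nonsense hostname
        except OSError as exc:
            print("Connection failed:", exc)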

  • Emails sent from Coldfusion using the same SMTP/Exchange server works from one machine but fails for another

    - by Peter Herdenborg
    First, apologies if this question is too vague or has too little information to really be answerable. I am not normally working with these issues, and I don't have full access to the environment. However, the hosting provider seems to have a hard time tracking down the issue, so I am hoping that someone can at least provide me with some qualified guesses about the most likely problem. Here goes:

    A client I work for has a hosted IT environment, based on virtual machines running Windows 2008 R2 Standard. Our website, based on ColdFusion 9, was recently migrated from one virtual machine to another, and though ColdFusion is configured in the exact same way, using the same SMTP server (i.e. the client's Exchange server hosted in the same environment and in the same AD as both web servers), sending emails to external recipients is no longer working. It is still working fine when testing from the old machine. This is what I've learnt so far (all emails are sent using a valid from-address on the client's domain):

    • Emails sent to other recipients on the same domain are delivered without any problem.
    • Emails sent to external recipients on other domains are never delivered.
    • When sending emails to both internal and external recipients, no emails are delivered.
    • When receiving one of these emails to an internal address, the sender is now indicated as "[email protected]", while when sent from the old machine it used to say just "sender". This seems to me to hint that the Exchange machine "recognizes" the old web server, while the new one is a stranger to it.
    • In ColdFusion's mail log, all messages appear to be successfully delivered to the SMTP server.

    Any ideas what settings to look at, what log entries to search for, or how to compare the old web server with the new one will be highly appreciated.

  • Outbound ports to allow through firewall - core requirements

    - by dunxd
    This question was asked before, but in a rather general way. I'm asking more specifically based on my current requirements.

    We have a number of remote offices made up of a bunch of PCs and an ASA 5505 which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request. But I want to leave open the most commonly required ports so we can get up and running without waiting on other folks' technical staff to get back. I plan to allow the following TCP ports to support email and web access, which I know everyone will need:

    • POP3 (110 and 995)
    • HTTP (80 and 443)
    • IMAP4 (143 and 993)
    • SMTP (25 and 465)

    The question really is, what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up?

    Just to note: I'll also be setting up monitoring systems to keep an eye on the ports we do allow. Any of the above could be misused, of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with.

    See also: outbound ports that are always open
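
    As a quick way to confirm from inside an office which outbound ports actually get through once the ASA rules are in place, a minimal sketch is below; the target hostnames and the port list are just illustrative placeholders, not a recommendation:

        # Sketch: test outbound TCP connectivity to known hosts on each allowed port.
        import socket

        targets = [
            ("pop.example.com", 110), ("pop.example.com", 995),
            ("www.example.com", 80), ("www.example.com", 443),
            ("imap.example.com", 143), ("imap.example.com", 993),
            ("smtp.example.com", 25), ("smtp.example.com", 465),
            ("8.8.8.8", 53),  # DNS over TCP, a rough proxy for port 53 reachability
        ]

        for host, port in targets:
            try:
                with socket.create_connection((host, port), timeout=5):
                    print(f"{host}:{port} reachable")
            except OSError as exc:
                print(f"{host}:{port} blocked or unreachable ({exc})")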

  • Sendmail slow to accept emails

    - by Rich
    I have a PHP web app which is using SMTP to sendmail on localhost to send email. I would like sendmail to accept the mail request immediately and queue it for later sending, as I don't want to have user-facing request threads blocked on emails. Sendmail is installed with the default settings on RHEL web servers.

    Sometimes sendmail is blocking for a long time after the MAIL command is sent, sometimes taking 60 or 90 seconds to accept the mail. The time taken is usually very close to 60 or 90 seconds, which makes me think this is some kind of timeout. I have looked in the sendmail logs, and there are plenty of "deferred" emails, but nothing which looks responsible for this delay. How can I diagnose what is slowing down sendmail? How can I configure sendmail to always accept the mail immediately and to queue the mail for later sending?

    Update: I'm not sure, but it looks like this might be linked to aol.com addresses. I strongly suspect that sendmail is doing some kind of blocking recipient address verification at the accept-email-for-sending stage. How can I disable that, so that sendmail doesn't block my UI threads?

    Update 2: This only seems to happen at busy times. Perhaps I am running out of sendmail threads or something? How can I check that?
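
    One way to narrow down which SMTP stage is eating the time is to talk to the local sendmail directly and time each command; a rough sketch is below (the addresses are placeholders, not real accounts):

        # Sketch: time each stage of an SMTP conversation with the local sendmail
        # to see whether the stall happens at connect, EHLO, MAIL, RCPT or DATA.
        import smtplib
        import time

        def timed(label, fn, *args):
            start = time.time()
            result = fn(*args)
            print(f"{label}: {time.time() - start:.1f}s -> {result}")
            return result

        server = smtplib.SMTP()
        timed("connect", server.connect, "localhost", 25)
        timed("EHLO", server.ehlo)
        timed("MAIL FROM", server.mail, "sender@example.com")
        timed("RCPT TO", server.rcpt, "recipient@example.com")
        timed("DATA", server.data, "Subject: timing test\r\n\r\nbody\r\n")
        server.quit()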

  • How to backup virtual machines on a standalone ESXi host?

    - by Massimo
    Standalone ESXi (4.1) host without any vCenter Server. How can I back up virtual machines as quickly and storage-efficiently as possible? I know I can access the ESXi console and use the standard Unix cp command, but this has the drawback of copying the whole VMDK files, not only their actually used space; so, for a 30-GB VMDK of which only 1 GB is used, the backup would take 30 full GBs of space, and time accordingly. And yes, I know about thin-provisioned virtual disks, but they tend to behave very badly when physically copied, and/or to blow up to their full provisioned size; also, they are not recommended for actual VM performance. It is OK for me to shut down the VMs before backing them up (i.e. I don't need "live" backups), but I need a way to copy them around efficiently; and yes, a way to automate shutdown/startup when taking a backup would also help. I only have ESXi; no Service Console, no vCenter Server... what's the best way to handle this task? Also, what about restores?

  • Tuning Linux + HAProxy

    - by react
    I'm currently rolling out HAProxy on CentOS 6, which will send requests to some Apache HTTPD servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k/sec connections consistently when benchmarking (sometimes I do get 30k/sec though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf:

        # Max open file descriptors
        fs.file-max = 331287

        # TCP Tuning
        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.ip_local_port_range = 1024 65023
        net.ipv4.tcp_max_syn_backlog = 10240
        net.ipv4.tcp_max_tw_buckets = 400000
        net.ipv4.tcp_max_orphans = 60000
        net.ipv4.tcp_synack_retries = 3
        net.core.somaxconn = 40000
        net.ipv4.tcp_rmem = 4096 8192 16384
        net.ipv4.tcp_wmem = 4096 8192 16384
        net.ipv4.tcp_mem = 65536 98304 131072
        net.core.netdev_max_backlog = 40000
        net.ipv4.tcp_tw_reuse = 1

    If I use ab to hit a webserver directly, I easily get 30k/s connections. If I stop the webservers and use ab to hit HAProxy, then I also get 30k/s connections, but obviously that's useless. I've also disabled iptables for now, since I read that nf_conntrack can slow everything down; no change. I've also disabled the irqbalance service. The fact that I can hit each individual device with 30k/s makes me believe the tuning of the servers is OK and that it must be some HAProxy config? Here's the config, which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU

    The server is a dual Xeon CPU E5-2620 (6 cores) with 32GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.

  • How do I setup a secondary incoming mail server?

    - by abrahamvegh
    I currently have a server running Debian 6, with postfix and dovecot handling email. This server hosts email for a number of domains and users, so I use MySQL as my backing store for users and forwardings and everything related. Currently, this server is the only server listed in an MX record for all of the domains it serves. I would like to create a secondary server that would be listed in the DNS with a lower priority (e.g. current primary server is priority 5, secondary would be priority 10), so that in the event that I need to reboot the primary server, or otherwise make it unavailable, the secondary server would receive email, and hold it until the primary server came back up, at which point it would deliver any held email to the primary server. I do not need the secondary server to function as a backup sending server. Users would never need to see the secondary server, they would simply not lose incoming emails if the primary server is down, and they would be unable to send or receive until the primary came back up. How would I go about doing this? I would like to use the same software if they can handle this task, because I’m already familiar with managing them.
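
    Once a secondary MX is in place, one quick sanity check is to connect to it directly on port 25 and make sure it is willing to accept (and therefore queue) mail for one of the hosted domains. A rough sketch with placeholder hostnames and addresses:

        # Sketch: check that a backup MX accepts RCPT TO for a hosted domain
        # without actually sending a message. Names below are placeholders.
        import smtplib

        BACKUP_MX = "mx2.example.com"

        server = smtplib.SMTP(BACKUP_MX, 25, timeout=10)
        server.ehlo("test.example.com")
        code, _ = server.mail("postmaster@example.com")
        assert code == 250, f"MAIL FROM rejected: {code}"
        code, msg = server.rcpt("someuser@example.org")   # a domain the MX should relay for
        print("RCPT TO response:", code, msg.decode(errors="replace"))
        server.rset()   # abandon the transaction instead of sending DATA
        server.quit()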

  • How do you enable webcam support in Facebook for Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an insolvable equation:

    Chromium v.7 + Ubuntu 10.04 + Sun Java 6 + Webcam + Facebook + Flash 10 = non-functional

    All of the items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual googling turn up few posts related to this specific equation. Two of the major suggestions include:

    1. Installing the Sun (I refuse to say Oracle, sob)-provided Java implementation instead of the OpenJDK normally installed in Ubuntu. And yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones.
    2. Somehow enabling Facebook as a permitted site to access my webcam using Flash settings. I have not been able to explore this option because I cannot find a way to adjust the Flash settings in Chromium 7.

    Other factors that do not help include the fact that I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated!

  • SQL Server Replication Backup

    - by user18039
    Hi,

    We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary on-line transactional processing (OLTP) database that accepts a high volume of updates from several thousand Point of Sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database: one-way replication. The solution has worked well in test.

    I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting db or any of the system replication (e.g. distribution) databases fail, then it's no big deal: we can clear it all down and then re-create the replication. I realise that taking a complete snapshot of the OLTP would be time consuming (approx 5 hours), but I'd be more relaxed about this than trying to restore backups of the various data and log files in the correct sequence.

    My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than I have, e.g. if there were multiple subscribers with 2-way replication. Would you agree? I'd be grateful for any advice.

    Many thanks, Rob

  • Windows 7 Sharing issue on RAID 5 Array(s)

    - by K.A.I.N
    Greetings all, I'm having a very odd error with a Windows 7 Ultimate x64 system. The network setup is as follows:

    • 2x XP Pro 32-bit machines
    • 1x Vista Ultimate x64 machine
    • 2x Windows 7 Ultimate x64 machines

    All are chained into 1x 16-port Netgear ProSafe gigabit switch; the Windows 7 and Vista machines are duplexed. Also, there is a router (Netgear RangeMax) chained off the switch.

    I am basically using one of the Windows 7 machines to host storage and stream media to the other machines. To this end I have put 2x 3TB hardware RAID 5 arrays in it, plus assorted other spare disks, of which I have shared the roots. The unusual problems start when I get "Access denied, please contact administrator for permission" (blah blah blah) when trying to access both of the RAID 5 arrays, but not the other standalone disks. I have checked the permission settings; I have added Everyone to the read permission for the root; I have tried moving things into sub-directories and then sharing them. I have tried various setting combinations in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, and always the same. I have tried flushing caches all round, disabling and re-enabling shares, and sharing after restart, as well as several other things, and the result is always the same: no problem on the individual drives, but access denied on both RAID arrays from the XP, Vista and Windows 7 machines.

    One interesting quirk that may lead to an answer is that there is no "offline status" information regarding the folders when you select the RAID 5s from a Windows 7 machine, yet there is on the normal drives, which say they are online. It is as if the RAID is present but turned off or spun down, but as far as I was aware Windows will spin an array back up on network request, and on the machine itself the drives seem to be online and can be accessed. Have to admit this has me stumped. Any suggestions anyone? Thanks in advance for any fellow geek assistance. K.A.I.N

  • Software for RAID Failure Alerts?

    - by QF_Developer
    I have two 256 GB Samsung 840 Pro SSD disks in a RAID 1 array. I would like to receive a notification if one of the disks in the array fails. Can anybody recommend an application I can install on the server to fire an email if such an event occurs? Here are some additional specs:

    • Supermicro X9SCM-IIF motherboard utilising the onboard hardware RAID controller
    • OS = Windows Server 2012 Standard

    Also, is it possible to simulate a disk failure by pulling a disk out of the bay? SSDs appear to fail close together when in a mirrored config, so I'd like to know ASAP if one goes down so I can swap them out with minimum delay.

    Update (26th June 2013): None of the software that ships with the Supermicro X9SCM-* motherboards offers support for RAID monitoring. As has been pointed out here, these boards are built on an Intel chipset for RAID, so I installed Intel Rapid Storage Technology, which supports automated email notifications on RAID failure: http://www.intel.com/support/chipsets/imsm/sb/cs-020784.htm

    One small issue: the software only allows you to send email notifications without SMTP authentication. There's a bunch of different workarounds here: http://communities.intel.com/thread/30771
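
    If the built-in notifier is a dead end because it cannot authenticate, one generic workaround is a small script that sends the alert through an authenticated relay itself, triggered by whatever event or scheduled status check the RAID tool allows. A minimal sketch, with placeholder server names, credentials and addresses:

        # Sketch: send a RAID-failure alert through an SMTP relay that requires
        # authentication. Server name, credentials and addresses are placeholders.
        import smtplib
        from email.message import EmailMessage

        def send_alert(subject: str, body: str) -> None:
            msg = EmailMessage()
            msg["From"] = "raid-monitor@example.com"
            msg["To"] = "admin@example.com"
            msg["Subject"] = subject
            msg.set_content(body)

            with smtplib.SMTP("smtp.example.com", 587, timeout=30) as server:
                server.starttls()                     # upgrade to TLS before logging in
                server.login("raid-monitor@example.com", "app-password")
                server.send_message(msg)

        send_alert("RAID 1 degraded on fileserver01",
                   "Intel RST reported a degraded array; check the mirror ASAP.")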

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritise any or specific root activity ahead of everything else? For instance, several times per year something goes wrong (usually human error, by overstressing Apache/MySQL) and the system becomes unresponsive under heavy load, like 200 (8-core CPU). I know there are limits for PHP scripts to run and then be killed, but that's not the way, because this limit has to be at least 45 minutes long. The problem is, by the time I'm able to log in via SSH and restart Apache/MySQL under this server stress, it nearly hits those 45 minutes anyway. Also, a hardware restart usually causes fsck to run at boot time on all hard drives, since it's usually been quite a while since the box was restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete.

    What is the fastest way to restart Apache/MySQL? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restart (or rather stop) commands wouldn't take so long? One thing comes to my mind: use nice for Apache/MySQL, but no way. I can't risk limiting those two vital apps 24/7, or could I? I'm a little bit scared that other system processes would slow the pages down too much: any backup process, swap (if any), etc. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hw/sw resource available. I can't throttle it the whole time, just at certain points when the system gets unresponsive, so I could maintain it.
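
    One way to avoid depending on an SSH login at all is a small watchdog that runs as root and bounces the services once the load average crosses a threshold. A rough sketch; the threshold and service names are assumptions for a typical Debian box, not settings from this server:

        # Sketch: restart Apache/MySQL automatically when the 1-minute load average
        # gets out of hand, so recovery does not depend on logging in over SSH.
        import os
        import subprocess
        import time

        LOAD_LIMIT = 50.0          # 1-minute load average that counts as "collapsing"
        CHECK_INTERVAL = 30        # seconds between checks
        SERVICES = ["apache2", "mysql"]

        while True:
            load1, _, _ = os.getloadavg()
            if load1 > LOAD_LIMIT:
                for svc in SERVICES:
                    # "service <name> restart" is the generic Debian way to bounce a daemon
                    subprocess.run(["service", svc, "restart"], check=False)
                time.sleep(300)    # give the box time to recover before checking again
            time.sleep(CHECK_INTERVAL)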

  • Unable to connect to SQL Database (can the password be reset)

    - by user45450
    I have recently joined a company which has an SQL Server 2005 instance running a few databases. The server looks like no one has touched it in a couple of years, and this week it ran out of disk space. After a quick hard drive scan it looks like some of the databases have become a little bloated; in particular the Sharepoint_config~*~_log and WSS_Content_log.ldf have grown to about 15GB.

    I have been able to log into a couple of the other databases and use the shrinkfile command to free up disk space, but for some reason I am unable to log into the SharePoint and Microsoft#SSEE databases (which gives me the "cannot connect to Sharepoint, a network related or instance specific error occurred..." message when I try to connect). I can see that the database is running via the SQL Surface Area Configuration, and I have made sure that the remote connection settings allow me to connect locally, but I am still unable to log in either with Windows authentication or locally.

    Is there any way to reset or recover the database login details so I can get in? (I have tried logging in with all the administrative passwords I can find, and after tracking down the company who installed it in the first place I found out that they have no idea what the password could have been.)

  • Strange boot problems on 6 month old setup

    - by Balefire
    I've already exhausted my knowledge on this one, so forgive me if this post is a bit long.

    I built a computer 6 months ago for my wife and it worked fine until last week. Then it randomly shut down and would lock up while trying to boot, on the boot screen. I cleared CMOS and it allowed me to do startup recovery, but it "failed to fix the issue", so I reinstalled Windows on the HD (moving the old install to windows_old). It worked, so I started installing drivers again, but then when I restarted to finalize installations it locked up again. This time, I took the hard drive and hooked it up to my computer, backed up all her files, and then formatted the hard drive before reinstalling it (again I had to clear CMOS to let me boot from disk). It installed Windows, I installed drivers, and it worked for a few hours but then died during startup again. So then I got a new HD, cleared CMOS, and installed clean again, with the same result as the time before: it worked for a few hours, installed Windows updates, then crashed on the 3rd or 4th time turning it on.

    I decided next to try reinstalling and then going online to see if there were any updates for the BIOS or drivers on the motherboard, but now I can't get it to even bring up the boot menu, so now I'm just left wondering: was it the motherboard, or is it the CPU, or the RAM? The problem was strangely intermittent, so I thought it had to be a software issue, since a hardware issue would ALWAYS fail to boot, right? But now it seems to be a hardware issue, because it's not bringing up anything. Any suggestions?

    System:
    • Windows 7 64-bit
    • Gigabyte 970A-DS3 motherboard
    • AMD Phenom II X4 955 Deneb 3.2GHz quad-core processor
    • GeForce GT 430 (Fermi) 1GB video card
    • 500W PSU
    • 2 x G.SKILL Ripjaws X Series 4GB 240-pin DDR3 1600 RAM

  • Is there an Installer Analyser tool that can list what Registry Keys will be created?

    - by EvoGamer
    I can think of 3 ways to achieve my goal:

    1. Create a clean VPC, install a given piece of software, and compare the before and after states.
    2. Somehow reverse-engineer the installer.
    3. Somehow redirect the output of the installer in question so that all registry calls and copy/move file commands are recorded, but not executed.

    The first option can be done manually, or potentially automated, but I feel it's rather OTT for my needs. The second could cause all sorts of licensing issues, not to mention it may not always return a correct result. Also, without delving into hex editing, I can't think of a way that it would be possible to do manually (some installers, e.g. anti-virus software, may react unfavourably to automated attempts to investigate the installer). The third option shows the most promise, although if the first could be stripped down into a lightweight throwaway environment, it would work pretty much the same way. However, I'm not sure how to do it.

    So my question is: what tools are available (if any), and/or how could I find out this information manually? I'm not looking to reverse-engineer anything (if I can help it); I just want to know exactly what changes are being made to my PC by a given piece of software.
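
    For the snapshot-and-compare approach (option 1), the registry side can be scripted rather than eyeballed. A minimal sketch using Python's winreg, limited to a single hive subtree for brevity; the subtree name is just an example and would need to be widened (or run over several roots) for a real comparison:

        # Sketch: snapshot a registry subtree before and after an install and
        # report keys/values that appeared. Walks one subtree only.
        import winreg

        def snapshot(root, path, out, prefix=""):
            """Recursively collect {key\\value: data} under root\\path into out."""
            try:
                key = winreg.OpenKey(root, path)
            except OSError:
                return
            i = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, i)
                except OSError:
                    break
                out[f"{prefix}{path}\\{name}"] = value
                i += 1
            i = 0
            while True:
                try:
                    sub = winreg.EnumKey(key, i)
                except OSError:
                    break
                snapshot(root, f"{path}\\{sub}", out, prefix)
                i += 1

        before, after = {}, {}
        snapshot(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\ExampleVendor", before, "HKLM\\")
        input("Run the installer, then press Enter...")
        snapshot(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\ExampleVendor", after, "HKLM\\")

        for k in sorted(set(after) - set(before)):
            print("NEW:", k, "=", after[k])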

  • sub application and virtual directory file permissions

    - by Zeus
    I have a website setup in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub application cms. In a rather convoluted way, we have content in our cms system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg, (yes, cms twice...) and this works just fine. Now, we have a virtual directory under the parent website, called stuff which points at the content of the cms. So I should be able to get to the image using the url http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a server 500 error "There is a problem with the resource you are looking for, and it cannot be displayed." Whilst you do have to log into the cms system to access any of the admin pages within, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also it's a server 500 error, rather than a 403. I'm sure I must be missing something obvious here- will the virtual directory be using the permissions defined in the parent application, or the subapplication to which it is pointing? Or is there some other permissions I may have missed? Sorry, that was a bit long, thanks for reading all the way down here! (I also must point out that I'm pretty new to the server management stuff.) edit: also, we have <location path="." inheritInChildApplications="false"> specified in the webconfig of the parent app, so it's hopefully not the issue described in this config file hierarchy article.

  • Moving Images from Database to File System

    - by msarchet
    So currently in our system we have been storing image files in the database (SQL Express 2005). Unfortunately it wasn't foreseen that this would reach the max database size allowed by the SQL Express licence. So I have proposed a plan of storing the images in the file system and only indexing where the file is in the database. The plan is to save the root path in our OptionsTable as something like ImagesRoot, and then only save the actual ImageID in the table, which is basically a FK from the PK of the record with the image. I have determined that it would be best to then split this down into sub-directories inside of the ImagesRoot, based on groups of 1000 images, so basically (ImageID / 1000)\(ImageID % 1000) (e.g. if ImageID is 1999, it would be in %ImagesRoot%\1\999).

    I'm looking for any potential pitfalls of this system and anything that could be improved, as I am already receiving some resistance from the owner of the company, who wants everything to be in databases. Along those lines, I would also take reasons why it should all be in databases. I should mention we already have automated backups in place that run for all of our customers' databases and any files generated by our program that are required to be saved over a period of time. (These are optional, but if someone isn't using our system it is expected that they are using their own, or data loss isn't our problem; it is if our system fails and they are using it!) Thanks
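
    The directory scheme itself is only a couple of lines; a sketch of how the path could be derived from the ImageID, following the division/modulo rule described above (the drive letter and extension are placeholders):

        # Sketch: map an ImageID to its storage path under ImagesRoot using the
        # (ImageID / 1000)\(ImageID % 1000) scheme, e.g. 1999 -> <root>\1\999\1999.jpg
        import os

        def image_path(images_root: str, image_id: int, ext: str = ".jpg") -> str:
            bucket = image_id // 1000        # top-level directory
            leaf = image_id % 1000           # sub-directory within the bucket
            return os.path.join(images_root, str(bucket), str(leaf), f"{image_id}{ext}")

        print(image_path(r"D:\Images", 1999))   # D:\Images\1\999\1999.jpg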

  • after building in more ram, bios/debian does nothing [closed]

    - by derty
    My private server has 2x1GB RAM working with 64-bit Debian and an Intel Q6600. It runs 2 virtual machines, each of which receives 512MB RAM, which, as you can imagine, is a bit tight for the whole system. Now I have got 2x2GB RAM from a friend. I'm not sure if they are clocked at the same speed, but I'm sure my power supply is not at its limit and can handle them. There are 2 RAM sockets left on the mainboard. I shut down the system, built in the 4 gigs and looked at what happened. After pressing the start button, everything gets noisy as usual, BUT the screen shows nothing, not even the BIOS stuff it usually does. Why isn't the system booting? I imagine there is no direct link between Debian booting and the BIOS not showing a thing. Or is it GRUB? I mounted the system disk, and I can see it, switch folders, and write stuff with vim; it does not seem like there is a problem with it.

  • Svchost.exe connecting to different IPs with remote port 445

    - by Coll911
    I'm using Windows XP Professional SP2. Whenever I start Windows, svchost.exe starts connecting to all the possible IPs on the LAN, from 192.168.1.2 up to 192.168.1.200. The local port ranges from 1000-1099 and the remote port is 445. After it's done with the local IPs, it starts connecting to other random IPs. I tried blocking connections to port 445 using the local security policies, but it didn't work. Is there any possible way I could prevent svchost from connecting to these IPs without involving any installed firewall? My PC slows down due to the load. I scanned my PC with MalwareBytes and found out it was infected with a worm; it's deleted now, but svchost is still connecting to the IPs.

    I also found out that in my Windows Firewall settings, under Internet Control Message Protocol (ICMP), there's a tick on "allow incoming echo request" (usually disabled) which is locked, and I can't disable it. Its description is as follows: "Messages sent to this computer will be repeated back to the sender. This is used for troubleshooting, e.g. to ping a machine. Requests of this type are automatically allowed if TCP port 445 is enabled."

    Any solutions? I can't bear going through the reinstalling-Windows phase again.
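
    To see exactly which process owns the port-445 connections (and how many there are), a rough diagnostic sketch using the third-party psutil package is below, assuming a Python version that still runs on the machine; mapping an svchost PID back to the services it hosts can then be done with tasklist /svc:

        # Sketch: list TCP connections whose remote port is 445, with the owning
        # process, to confirm which process/svchost instance is responsible.
        import psutil

        for conn in psutil.net_connections(kind="tcp"):
            if conn.raddr and conn.raddr.port == 445:
                try:
                    name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
                except psutil.NoSuchProcess:
                    name = "exited"
                print(f"PID {conn.pid} ({name}) {conn.laddr.ip}:{conn.laddr.port} "
                      f"-> {conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")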

  • Adding a prefix or postfix to a Word 2013 quotation source

    - by user2690527
    I am using the German version of Word 2013, so I am not absolutely sure if "quotation source" is the term used in the English version. I am talking about the automatic text field one can get via "References" (German: "Verweise"), "Insert Citation" (German: "Zitat einfügen"). One then gets a drop-down menu with all the entries from the bibliography, can pick one, and an automatic text field is inserted into the text.

    After it is hopefully clear what I am talking about, here is the question: I chose the citation style "ISO 690", which generates labels of the pattern "(author + year)" in round parentheses. Sometimes I have to append a prefix or postfix to the label, but this prefix/postfix must go inside the parentheses. Is there any way I can do this?

    For example, I can add page numbers to the end of a quotation label via a special dialog box that appears after right-clicking on the text field and choosing "Edit Citation" (German: "Zitat bearbeiten"). But there are a lot more cases of optional pre-/postfixes than page numbers. I am looking for a way to add general pre-/postfixes.
