Search Results

Search found 22900 results on 916 pages for 'pascal case'.

Page 431/916 | < Previous Page | 427 428 429 430 431 432 433 434 435 436 437 438  | Next Page >

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem is that once the redirect is set, it also affects /specialdir - even if I right-click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing design according to particular use cases (by the tool builder). Text editors are probably the most prominent example--a coder who works on Windows at work and codes in Haskell on the Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg)? I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counter-example--some task (relevant to a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data--not that git is better, etc. My guess is that such counter-examples exist, hence this question.

    Read the article

  • Joining a network with a Virtual Windows Server 2008 R2

    - by Triztian
    Hi all, here's my case: I have set up a share on a virtual Windows Server 2008 R2 machine hosted by GoDaddy. My question is, how do I access the server's public folders? I need to open a file locally (on the client), and to do that I need the server to show up in my Network locations. I have the right credentials and have created a special group that has access to the particular folder I'm sharing; the problem is I don't know how to add the server to my network locations. I have tried a VPN connection, but it is my understanding that it cannot be done since it is a virtual shared server. Any help is truly appreciated.

    Read the article

  • Developing my momentum on open source projects

    - by sashang
    Hi, I've been struggling to develop momentum contributing to open source projects. I have in the past tried with gcc and contributed a fix to libstdc++, but it was a one-off, and even though I spent months of my spare time on the dev mailing list and reading through things, I never seemed to develop any momentum with the code. Eventually I unsubscribed, got my free time back and uncluttered my mailbox. Like a lot of people I have some little defunct open source projects lying around on the net, but they're not large and I'm the only contributor. At the moment I'm more interested in contributing to a large open source project and want to know how people got started, because I find it difficult while working full time to develop any momentum with the code base. Other more regular contributors, who are on the project full-time, are able to make changes at will and as a result enter that positive feedback cycle where they understand the code and also know where it's heading. It makes the barrier to entry higher for those who come along later. My questions are for people who actively contribute to large open source projects, like the Linux kernel, gcc, or clang/llvm, or anything else with, say, a developer head count of more than 10. How did you get started? Was there a large chunk of time in your life that you could just dedicate to working on the project? I know in Linus's case he had a chunk of time (6 months) to get it started. What barriers to entry did you encounter? Can you describe the initial stages of your time with the project, from when you had little understanding of the code to when you understood enough to commit regularly? Thanks

    Read the article

  • Why are some recovery tools still able to find deleted files after I purge the Recycle Bin, defrag the disk and zero-fill free space?

    - by Ivan
    As far as I understand, when I delete a file (without using the Recycle Bin), its record is removed from the file system table of contents (FAT/MFT/etc.), but the contents of the disk sectors the file occupied remain intact until those sectors are reused to write something else. When I use some sort of erased-file recovery tool, it reads those sectors directly and tries to build up the original file. In this case, what I can't understand is why recovery tools are still able to find deleted files (with a reduced chance of rebuilding them, though) after I defragment the drive and overwrite all the free space with zeros. Can you explain this? I thought zero-overwritten deleted files could only be found by means of special forensic-lab magnetic scan hardware, and that those complex wiping algorithms (overwriting free space multiple times with random and non-random patterns) only make sense to prevent such a physical scan from succeeding, but in practice it seems that a plain zero-fill is not enough to wipe all traces of deleted files. How can this be?

    Read the article

  • Ubuntu: encrypt a user's home directory and protect it from the admin?

    - by Luc
    I have the following problem: I need to run some scripts on an Ubuntu machine, but I do not want those scripts to be visible to anybody. What would be the best way to do that? I was thinking of the following:
      - create a particular user
      - add the scripts to this user's home directory
      - protect and encrypt the user's home directory
    Can I run the scripts from outside if the directory is encrypted? Can the superuser see the content of the home dir? Is there a right way to do this?
    UPDATE: I think the best way would be for root to own those scripts. In that case I would need to allow another user to modify the network configuration. Is it possible to grant ONLY network rights to a user (via sudo or otherwise)?
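    For the sudo route, a minimal sketch (the username "deploy" and the exact command paths are placeholders to adapt):

        # /etc/sudoers.d/deploy-network -- always edit with: visudo -f /etc/sudoers.d/deploy-network
        # allow the "deploy" user to run only these network commands as root, and nothing else
        deploy ALL=(root) NOPASSWD: /sbin/ip, /sbin/ifup, /sbin/ifdown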

    Read the article

  • Working with the ADF DVT Map Component

    - by Shay Shmeltzer
    The map component provided by the ADF Faces DVT set of components is one that we are always ending up using in key demos - simply because it is so nice looking, but also because it is quite simple to use. So in case you need to show some geographical data, or if you just want to impress your manager, here is a little video that shows you how to create two types of maps. The first one is a color themed map - where you show different states with different colors based on the value of some data point there. The other is a point theme - basically showing specific locations on the map. For both cases I'm using the Oracle provided mapviewer instance at http://elocation.oracle.com/mapviewer. You can find more information about using the map component in the Web User Interface Developer's Guide here and in the tag doc and components demo. For the first map the query I'm using (on the HR demo schema in the Oracle DB) is:
        SELECT COUNT(EMPLOYEES.EMPLOYEE_ID), Department_name, STATE_PROVINCE
        FROM EMPLOYEES, DEPARTMENTS, LOCATIONS
        WHERE employees.department_id = departments.department_id
          AND departments.location_id = locations.location_id
        GROUP BY Department_name, LOCATIONS.STATE_PROVINCE

    Read the article

  • Amazon Careers website - are resumes processed in plain text format only?

    - by sapphiremirage
    The submission site has the following options: "Please upload your resume (Word Document, max size: 512 KB)" OR "Please copy and paste the text version of your file here", with a text box below the latter option. I went ahead and uploaded my shiny LaTeX resume (as a PDF), despite the fact that they seem to want a Word document, and there didn't seem to be any issues. However, when I went back to edit my profile, there was no evidence that my PDF had been uploaded, other than a text version of my resume, awfully formatted and clearly stripped from the PDF, sitting in the text box below "Please copy and paste the text version of your file here". Exasperated, I did a quick and dirty copy of the text from my resume into a Word doc and uploaded that. Same result: no evidence of an uploaded file, just a stripped text version in the text box. What I'm wondering now is, are they only going to look at the text version of my resume? If that's the case then I'm obviously going to edit it so that it looks halfway decent and doesn't contain such atrocities from the conversion as "Other Skills: LTEX". I can pretty up plain-text files without too much effort, so this isn't that big of a deal. However, my LaTeX resume is going to look better than anything I can do in plain text, so if the site is actually keeping a copy of that, then I certainly don't want to overwrite it. Has anyone here either gone through the Amazon hiring process or interviewed candidates and know how this works? (i.e., when on site with Amazon, did the interviewers have diversely formatted resumes, or did they all look suspiciously similar?)

    Read the article

  • Is it possible to have multiple subdomains point to the same Blogger blog?

    - by cclark
    For our application we want to have a status page which is hosted outside the rest of our infrastructure, so that if there are issues in our data center we can post updates that our users will still be able to reach. We registered a blog on Blogger and set it up with xyzstatus.blogspot.com and status.xyz.com. Everything seems to work fine. We need to perform some maintenance at our datacenter which will sever all connectivity, so we're unable to use a redirect in nginx or apache. We'd like to do this with a short-TTL CNAME DNS entry. Ideally www.xyz.com and app.xyz.com could be CNAMEd to status.xyz.com. When I set up the CNAME and go to that URL, I get a Google broken-robot 404 page. I figure I must need to let Google know it should associate traffic for www.xyz.com and app.xyz.com with the blog served up by status.xyz.com, but I can't see anywhere to do this in Blogger. Does anyone know if this is possible?

    Read the article

  • Does lshw list the "factory" speed of a memory module or the effective speed, and how do I find the former?

    - by Panayiotis Karabassis
    I hope I phrased this correctly. lshw gives:
        description: DIMM Synchronous 400 MHz (2.5 ns)
        product: M378B5773CH0-CH9
        vendor: Samsung
        physical id: 0
        slot: DIMM0
        size: 2GiB
        width: 64 bits
        clock: 400MHz (2.5ns)
    And indeed the memory speed is set to 800MHz in the BIOS, which I think makes sense since it is a double data rate. On the other hand, Googling strongly suggests that this product number corresponds to the PC3-10600 type, which is 1333MHz, not 800MHz. And this seems to be confirmed in the BIOS, where if I select Auto for the memory bus speed, 1333MHz is selected "based on SPD settings". However, in the latter case the computer does not boot, i.e. the kernel panics, complaining that something attempted to kill the idle process. So I am beginning to suspect that I was given defective memory, that the technician who installed it saw this, and that he lowered the bus speed. Is this a possibility?
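    One quick way to cross-check what the module itself reports versus what the BIOS configured (assuming dmidecode is available; the exact field names vary by BIOS):

        sudo dmidecode --type memory | grep -iE 'part number|speed'
        # "Speed" is the module's rated (SPD / "factory") speed;
        # "Configured Clock Speed", where present, is what the BIOS actually set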

    Read the article

  • Linux Software Raid runs checkarray on the First Sunday of the Month? Why?

    - by mgjk
    It looks like Debian has a default to run checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2TB mirror. Doing this "just in case" is bizarre to me. Discovering data out of sync between the two disks, without quorum, would be a failure anyway. This massive check could only tell me that I have an unrecoverable drive failure and corrupt data. Which is nice, but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check necessary? Should I take it out of my cron?
        /etc/cron.d# tail -1 /etc/cron.d/mdadm
        57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet
    Thanks for any insight,
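    If you do decide to disable it, a sketch of the usual Debian knobs (paths as shipped by the Debian/Ubuntu mdadm package):

        # either turn off the periodic check globally in the package's config...
        sudo sed -i 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm
        # ...or simply comment out the cron entry shown above
        sudo sed -i 's/^57 0/#57 0/' /etc/cron.d/mdadm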

    Read the article

  • Bash: Quotes getting stripped when a command is passed as argument to a function

    - by Shoaibi
    I am trying to implement a dry-run kind of mechanism for my script and am facing the issue of quotes getting stripped off when a command is passed as an argument to a function, resulting in unexpected behavior.
        dry_run () {
            echo "$@"
            #printf '%q ' "$@"
            if [ "$DRY_RUN" ]; then
                return 0
            fi
            "$@"
        }

        email_admin() {
            echo " Emailing admin"
            dry_run su - $target_username -c "cd $GIT_WORK_TREE && git log -1 -p|mail -s '$mail_subject' $admin_email"
            echo " Emailed"
        }
    Output is:
        su - webuser1 -c cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]
    Expected:
        su - webuser1 -c "cd /home/webuser1/public_html && git log -1 -p|mail -s 'Git deployment on webuser1' [email protected]"
    With printf enabled instead of echo:
        su - webuser1 -c cd\ /home/webuser1/public_html\ \&\&\ git\ log\ -1\ -p\|mail\ -s\ \'Git\ deployment\ on\ webuser1\'\ [email protected]
    Result:
        su: invalid option -- 1
    That shouldn't be the case if the quotes remained where they were inserted. I have also tried using "eval", without much difference. If I remove the dry_run call in email_admin and then run the script, it works great.
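    For what it's worth, a minimal sketch of the pattern usually suggested (names and paths below are illustrative): the shell removes the quotes before dry_run is ever called, so they cannot be preserved literally, but "$@" still carries each argument intact for execution, and printf %q logs an equivalent, re-quotable command line:

        dry_run() {
            printf '%q ' "$@"; echo          # log a copy-pasteable, equivalent command line
            [ -n "$DRY_RUN" ] && return 0    # in dry-run mode, stop after logging
            "$@"                             # otherwise execute the exact argument vector
        }
        dry_run su - webuser1 -c "cd /home/webuser1/public_html && git log -1 -p | mail -s 'subject' admin@example.com"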

    Read the article

  • How to optimize calls to multiple APIs at once and return as one set?

    - by Martin
    I have a web app that searches across 2 APIs right now. I have my own RESTful web service that I call, and it does all the work on the backend to asynchronously call the 2 APIs and concatenate the results into one result set for my web app to use. I want to scale this out and add as many other APIs as I can (currently looking at about 10 more). But as I add APIs, the call to my service gets (potentially) slower and more complex. How do I handle one API not responding ... and other issues that arise? What would be the best way to approach this? Should I create a service call for each API, so that each one is independent and not coupled to all the other calls? Is there a way on the backend to handle the multiple API calls without all the extra complexity they add? If I go the route of a service call per API, my client code gets more complex (and I have a lot of clients). It's also more work for the client, and since I have mobile apps, it will cost the client more data usage. If I go with one service call, is there a way to set up some sort of connection so I can return data as I get it, in case one service call hangs?
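    For illustration only, a shell-level sketch of the fan-out idea (the endpoints are hypothetical): each call gets its own timeout, so one unresponsive API can neither block nor sink the combined result, and whatever arrived in time is merged:

        curl -s --max-time 3 'https://api-one.example/search?q=term' > /tmp/r1 &
        curl -s --max-time 3 'https://api-two.example/search?q=term' > /tmp/r2 &
        wait                       # both calls run concurrently; the slowest is bounded at 3s
        cat /tmp/r1 /tmp/r2        # concatenate whichever partial results did come back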

    Read the article

  • Does exportfs disrupt users already utilizing those filesystems?

    - by CptSupermrkt
    I need to modify a servers /etc/exports file to export to an additional host. After modifying this file, for it to take effect (i.e. for the additional host to have access to the designated filesystem), I believe I have to run "exportfs" on the server exporting the filesystem. Does this disrupt users who are currently using filesystems that are exported from that serving host? I'm hoping to add this new host "silently", without disruption. Any additional advice related to this, common traps, things to be careful of, etc. would be appreciated if you have any. Edit: just in case...uname -a returns 2.6.32-358.18.1.el6.x86_64 #1 SMP Fri Aug 2 17:04:38 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
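    For reference, a sketch of the usual sequence after editing /etc/exports (hostname and path are examples); exportfs -r re-reads the file and applies only the differences, so existing client mounts are normally left alone:

        # /etc/exports: add the new host to the existing line, e.g.
        #   /srv/data  oldclient(rw,sync)  newclient(rw,sync)
        sudo exportfs -ra      # re-export everything according to /etc/exports
        sudo exportfs -v       # verify what is exported, and to which hosts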

    Read the article

  • what to do when ctrl-c can't kill a process?

    - by Dustin Boswell
    Ctrl-C doesn't always work to kill the current process (for instance, if that process is busy with certain network operations). In that case, you just see "^C" by your cursor and can't do much else. What's the easiest way to force that process to die now, without losing my terminal? Summary of answers below: Usually, you can press Ctrl-Z to put the process to sleep, and then do "kill -9 process-pid", where you find the process's pid with 'ps' and other tools. On Bash (and possibly other shells) you can do "kill -9 %1" (or '%N' in general), which is easier. If Ctrl-Z doesn't work, you'll have to open another terminal and kill from there.
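    A compact version of that sequence (the program name is a placeholder):

        # in the stuck terminal: press Ctrl-Z to suspend the foreground job, then
        kill -9 %1                                  # %1 = most recent job; check with: jobs
        # or, from another terminal:
        kill -9 "$(pgrep -f stuck-program-name)"    # find the PID by (partial) command line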

    Read the article

  • How do you keep up with Nagios/Capistrano configs when using EC2?

    - by imaginative
    I use Amazon EC2 for my mobile app. Depending on load of the application at a given time, I might spawn new instances and then take them down when load is lower to save costs. How does one keep up with Nagios configurations for such a dynamic environment? When one deals with managed hardware, configuration files are predictable. In this case Nagios, Capistrano and a bunch of other configuration files would need to be added. Capistrano needs to know where to deploy a new build to for an app server. Nagios needs to know to remove an existing instance or add a new instance for monitoring. Nagios also needs to know if a node was intentionally taken down or if the host is down due to error. How is this done with the wonderful world of VPS/dynamic instances?

    Read the article

  • Can I create a DC without a DNS Server?

    - by onik
    So as the title says, I need to promote a standalone Win2008R2 server to a Domain Controller, and I don't need a DNS server (I think), as there will be no clients connected to the domain; it will only be used for Remote Desktop Services. Yes, I know it's considered bad practice to install other roles on the DC, but in this case it's necessary. Do I need to install the DNS Server role, and if I do, how do I make it as transparent as possible? EDIT: It seems that I do need to install the DNS Server role, so how can I configure it not to mess up my entire domain? For example: the server I need to promote is rdc.mydomain.com, and it has an A entry for its IP in the current DNS, while the other servers under mydomain.com are running Linux and don't need to know anything about this Windows box. The domain uses a third-party DNS, and all edits and updates need to be done via a separate web page; our servers don't have write/update access.

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources - specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, a copy of any texture data supplied has been made elsewhere, likely in VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects that point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

    Read the article

  • Write to stdin of a running process using pipe

    - by aditya
    I am in a similar situation as in this post, but I couldn't get the solution provided there to work in my situation, as the answer seems related to that question only. In particular, I couldn't understand what the purpose of cat my.fifo | nc remotehost.tld 10000 was. In my case, I have a process running and waiting for input. How can I send input to that process using named pipes? I've tried echo 'h' > /proc/PID/fd/0 but it just displays 'h' in the process's window.
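    For reference, a minimal sketch of the named-pipe approach (the program name is a placeholder): writing to /proc/PID/fd/0 only helps if that fd is something the process actually reads from; if fd 0 is the terminal, the 'h' is simply echoed there. The usual trick is to start the process with its stdin attached to a FIFO and keep a writer open so it never sees EOF:

        mkfifo /tmp/ctl
        sleep infinity > /tmp/ctl &          # hold the write end open so the FIFO never delivers EOF
        some-interactive-program < /tmp/ctl &
        echo 'h' > /tmp/ctl                  # this line now reaches the program's stdin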

    Read the article

  • Move existing Windows XP to a new hard drive

    - by MrKanister
    I've bought an SSD for my laptop and made a fresh install of Windows 7. On my old HDD I've got a Windows XP system, which I would like to move to the SSD. I've tried it this way:
      - put the HDD in a USB case
      - moved the Windows XP partition to a partition on my SSD using DriveImage XML
      - used EasyBCD 2.0 to create a new entry in the boot menu
    The problem: Windows XP starts booting up, but before the login screen comes up it seems to freeze. No bluescreens, no errors - just nothing happens. I tried to start it in "protected mode", but with the same result. I'm not sure whether DriveImage XML didn't work well or there's another reason.

    Read the article

  • rsync: Read input from a file and sync accordingly

    - by Dheeraj
    I have a text file which contains the list of files and directories that I want to copy (one per line). Now I want rsync to take this input from my text file and sync it to the destination that I provide. I've tried playing around with the "--include-from=FILE" and "--files-from=FILE" options of rsync, but it is just not working. I also tried prefixing "+" to each line in my file, but still it is not working. I have tried coming up with various filter PATTERNs as outlined in the rsync man page, but it is not working. Could someone provide the correct syntax for this use case? I've tried the above on Fedora 15, RHEL 6.2 and Ubuntu 10.04 and none worked, so I am definitely missing something. Many thanks.
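    For reference, a minimal sketch of the list-driven form (paths and host are examples). Note the option is spelled --files-from, and the paths in the list are taken relative to the source directory given on the command line:

        # files.txt: one path per line, relative to /data/
        #   docs/report.txt
        #   projects/site/
        rsync -av -r --files-from=files.txt /data/ user@backuphost:/backup/
        # -r is added explicitly: with --files-from, -a no longer implies recursion
        # into the directories named in the list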

    Read the article

  • Oracle Partner Days and Oracle Days are coming to an EMEA city near you!

    - by Javier Puerta
    Oracle Partner Days
    A new round of Oracle Partner Days is coming to a large number of European cities. These events are exclusive to Oracle partners and will deliver real business return on your OPN membership. You will hear about the business opportunities coming from adoption of the entire Oracle stack, the latest product value propositions and related sales strategy, and be able to connect directly with Oracle executives and find new business opportunities with other partners in your region. The EMEA Oracle Partner Days are local/regional live events targeting key contacts in sales and consultancy, delivering Oracle strategy, engaging around the several perspectives of the Oracle portfolio, with executive keynotes and deep-dive business content-related breakout sessions. The first city will be Frankfurt, on Oct. 29. Check the full list to find an Oracle Partner Day in a city near you.
    Oracle Days
    Oracle Days will be hosted after Oracle OpenWorld across EMEA, during October and November. By attending an Oracle Day, customers and partners can:
      - Learn how to leverage the power of the Oracle stack, by hearing customer case studies about successful business transformation and by following cross-stack solution tracks within the agenda
      - Discuss key issues for business and IT executives in cloud, big data, social, and mobile solutions, and network with peers who are facing the same challenges
      - Meet Oracle experts and watch live demos of new products
      - Get the latest news from Oracle OpenWorld
    See the full calendar and cities here.

    Read the article

  • How do I recover a RAID 1 volume on Mac OS X (10.7)?

    - by Avry
    I have a Synology NAS that I've set up with RAID 1. The device is set up with two drives, both the same size (i.e. 500 GB each), formatted in ext3, as a RAID 1 volume (i.e. even though the total capacity is 1TB, I effectively only get 500 GB). In the case of a device failure where I can only access one of the drives, how can I recover my data? The solution I'm looking for is something like: 'Put the working drive in an enclosure, and use <some software> to recover your data.'

    Read the article

  • Developing an iOS app for a single device - licensing issue

    - by bfavaretto
    I'm developing an iOS app for a museum as a freelancer. It's a very simple video player, to be installed on a single iPad that will be part of a permanent exhibition, basically acting as a kiosk. It turns out the iPad is the ideal device for that if you're looking for a small and affordable touchscreen. The problem is: as far as I can tell, none of the Apple Developer Program options available will allow me to distribute an app like that. The relevant options are (from the link above): iOS Developer Program ($99/year) Select this program if you would like to distribute apps on the App Store as an individual, sole proprietor, company, organization, government entity or educational institution. iOS Developer Enterprise Program ($299/year) Select this program if you would like to develop proprietary apps for internal distribution within your company, organization, government entity or educational institution. The regular program requires distribution through the App Store. The Enterprise version is for internal distribution within my own organization. Neither is the case here! It seems like I'm doomed to violate Apple's terms of service (and I can think of at least two ways of doing that: jailbreaking, or changing the iPad's date so it won't know the provisioning profile expired). Is that really so, or did I get the descriptions wrong? Has anyone here been in a similar situation?

    Read the article

  • Should I sell video tutorials on my own or via publishers like lynda.com? [closed]

    - by Derfder
    I am asking this because I am deciding between two models right now. One way is to create video tutorials on my own (make some short free videos and longer pay-per-download/stream videos); the other is to sell them to lynda.com or tutsplus. The second way is easier, because they will do all the boring business stuff, host the files for download, etc. In that case, all I need is a good microphone and to obey their guidelines. On the other hand, if I do it on my own, I have to do all the unwanted business stuff, pay for the server and other things. This is quite a big downside; however, I will have all the videos under my control in the future. I know that lynda.com has bigger reach and marketing than I am capable of, but if you take e.g. phpvideotutrials.com (r.i.p ;), I think Leigh was very successful with a relatively small budget. The interesting question will be the cost, or how much they will pay me. Would it be less than selling them myself, once monthly server hosting and other expenses are factored in? Any advice from people who actively sell their videos to companies or do it on their own is highly appreciated.

    Read the article
