Search Results

Search found 22224 results on 889 pages for 'point of sale'.


  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (P_i), i = 0..N, where each P_i is linked to its direct neighbours (P_{i-1} and P_{i+1}). The goal: perform efficient collision detection between any two non-adjacent links, (P_iP_{i+1}) vs. (P_jP_{j+1}). The question: every work treating this subject of collision detection recommends a broad phase, implemented via a bounding volume hierarchy. For a chain made out of P_i nodes, it can look like this: imagine the big blue sphere containing all links, the green ones half of them, the red ones a quarter, and so on (the picture is not accurate, but it is there to help understand the question). What I do not understand is: how can such a hierarchy speed up computations between segment collision pairs if it has to be updated every frame for a deformable linear object such as a chain/wire/etc.? More clearly, what is the actual principle of a collision-detection broad phase in this particular case? How can it work when computing the bounding spheres is itself a time-consuming task that has to be redone in each frame update (since the geometry changes)? I think I am missing a key point: if we look at the picture where the chain is in a spiral pose, most spheres are already contained within half of the others or intersect them. It seems odd if this is the way it is supposed to work.
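
    A broad phase can still pay off for a deformable chain because the hierarchy is refitted rather than rebuilt each frame: the tree topology never changes (leaf i always covers link P_iP_{i+1}), so only the sphere centres and radii are recomputed bottom-up, which is O(N), while sphere-vs-sphere pruning typically avoids the O(N^2) segment-segment tests. A minimal sketch of that refit, assuming a simple level-by-level sphere tree (Python; all names illustrative):

        import math

        def sphere_of_segment(p, q):
            # Tightest sphere around one link: centred at the midpoint.
            c = [(a + b) / 2.0 for a, b in zip(p, q)]
            return (c, math.dist(p, q) / 2.0)

        def merge(s1, s2):
            # Sphere enclosing two child spheres (cheap, not minimal).
            (c1, r1), (c2, r2) = s1, s2
            d = math.dist(c1, c2)
            if d + r2 <= r1:                  # child 2 already inside child 1
                return s1
            if d + r1 <= r2:                  # child 1 already inside child 2
                return s2
            r = (d + r1 + r2) / 2.0
            t = (r - r1) / d                  # slide centre from c1 towards c2
            return ([a + t * (b - a) for a, b in zip(c1, c2)], r)

        def refit(points):
            # Bottom-up refit, O(N) per frame; a real implementation would
            # update spheres in place in a fixed tree instead of rebuilding lists.
            level = [sphere_of_segment(points[i], points[i + 1])
                     for i in range(len(points) - 1)]
            levels = [level]
            while len(level) > 1:
                level = [merge(level[i], level[i + 1]) if i + 1 < len(level)
                         else level[i] for i in range(0, len(level), 2)]
                levels.append(level)
            return levels                     # levels[-1][0] is the root sphere

    The spiral-pose observation is expected: near-degenerate poses make the spheres overlap heavily and the broad phase degrades, but in typical poses two disjoint parent spheres prune all of their descendants' segment pairs in a single test.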


  • RHEL kickstart with external DVDrom drive

    - by AndyM
    Hi, I've an old Dell PowerEdge server with a CD-ROM drive. I've attached a USB DVD-ROM drive, plus a USB stick with my ks.cfg on it, so I can install RHEL from DVD rather than loads of CDs :-) I can boot from the RHEL media in the DVD-ROM and point the installer to the ks.cfg on the USB stick. This works, but the ks.cfg script has the cdrom keyword in it, so the install then stops and asks for the RHEL media to be in the CD-ROM drive, not the DVD-ROM. How can I change the ks.cfg so it uses the external DVD-ROM for the install media, not the server's built-in CD-ROM drive? I know I could rebuild my DVD image to include the ks.cfg, but that is an extra step I don't want to take if I can avoid it. Regards, Andy
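
    For what it's worth, the cdrom keyword takes no arguments and always means the first optical drive, so it cannot be pointed at the USB DVD-ROM directly. A hedged ks.cfg sketch using anaconda's other documented install-source directives (device names and paths illustrative); a common workaround is to copy the DVD ISO onto the USB stick and point harddrive at it:

        # Pick exactly one install source in ks.cfg:
        # cdrom                                        <- first optical drive only
        # nfs --server=10.0.0.1 --dir=/exports/rhel    <- network install tree
        # url --url=http://10.0.0.1/rhel               <- HTTP/FTP install tree
        harddrive --partition=sdb1 --dir=/             # ISO/tree on the USB stick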


  • Unattended install of IIS 6 with PHP / FastCGI

    - by Ciaran
    I'm trying to figure out the best method of installing PHP on IIS 6 unattended. I've figured out that to install IIS 6 unattended I need to use sysocmgr, and modify a registry key beforehand to point at the correct sources folder so I'm not asked for an installation disc. The bit I'm caught up on at the moment is the process of installing FastCGI. The info I've read about installing FastCGI so far is that the installer unpacks 3 files and "registers" them with IIS. Question is, does anyone know what happens with these three files, so I can handle it without using the installer? Also, if anyone has experience with the steps I need to take following the FastCGI install, I'd appreciate help.
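
    For reference, a minimal sketch of the unattended IIS 6 part (the component names are the standard sysoc ones; paths are illustrative). The FastCGI step remains the open question; as far as I can tell the installer drops fcgiext.dll and fcgiext.ini into %windir%\system32\inetsrv and registers the DLL as a web service extension plus a script map, but replicating that by hand is untested here:

        rem install-iis.cmd -- unattended IIS 6 install (sketch)
        sysocmgr /i:%windir%\inf\sysoc.inf /u:C:\setup\iis_unattend.txt

        rem C:\setup\iis_unattend.txt contains:
        rem   [Components]
        rem   iis_common = on
        rem   iis_inetmgr = on
        rem   iis_www = on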


  • Why is Ubuntu offline (except torrents) while Windows is online?

    - by Fahim al Islam
    I am using a static wired connection. Everything was perfect, but suddenly, since a few hours ago, I can't access any website. Dropbox and Ubuntu One also can't connect, and ping requests are unsuccessful, yet I can download through torrents. I am not running torrent downloads and browsing at the same time, so I don't think this is an issue of torrents using all the bandwidth. One important point is that this connection works perfectly under Windows on this same PC (my PC is dual-boot). I have tried what izx suggested (sudo sh -c 'echo nameserver 8.8.8.8 > /etc/resolv.conf'), but I'm facing the same problem again. Now I can't even ping 8.8.8.8 or google.com, though I can ping 74.125.228.2 (which is a Google IP address). I can't understand what's happening or why. I'm new to this website and many of its rules and conventions are unknown to me, so please bear with my mistakes. Looking forward to help from anyone. Thanks to all.
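
    Since one raw Google IP answers ping but 8.8.8.8 does not, this looks like more than a pure DNS failure. A short triage sketch to separate DNS from routing (run each line and note which fails):

        ping -c3 74.125.228.2         # raw IP that works: routing to Google is OK
        ping -c3 8.8.8.8              # raw IP that fails: some routes are broken
        nslookup google.com 8.8.8.8   # name lookup against an explicit server
        ip route                      # compare the default gateway with Windows'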


  • Blank Screen after inactive time

    - by Hari K T
    I am using Ubuntu 11.04 on a Dell Vostro 1510. If I am inactive for a certain time, the screen gets locked and the login prompt appears. Sometimes I can see the login screen and log in as normal, but sometimes, after it locks, nothing happens when the mouse or keyboard is moved. This has happened twice now, and it doesn't happen every time. From the locked screen I can switch to another terminal with Ctrl + Alt + F1 etc., but when I switch back to the graphical one with Ctrl + Alt + F7 (while it is locked) I see only a blank screen. I once tried logging in on a console (Ctrl + Alt + F1) and running startx, but it said something was locked and asked whether to delete the lock and retry. So I removed it (I don't remember exactly what the file was), but that was not a success either, and I was forced to press the power button. Is this a bug? I saw some similar reports, but they all happen when switching users; I've never experienced it after logging out. It happens only when the screen locks automatically. Update: I strongly feel this is a bug. Since I upgraded to 11.10 I haven't noticed the same issue, but if at some point anyone has an answer, you can post it and I can accept it.
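
    For the startx complaint about something being locked: that is usually the stale X server lock file left behind by the dead session. A hedged cleanup sketch from a text console (display :0 assumed):

        sudo rm /tmp/.X0-lock        # stale lock from the dead X session
        sudo rm /tmp/.X11-unix/X0    # its socket, if present
        startx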


  • Rant - Why is Windows Azure not available in Africa?

    - by Allan Rwakatungu
    Yesterday at the .NET user group meeting in Kampala, Uganda, I gave a talk on cloud computing with Windows Azure (details will be in my next blog post). The guys were excited: without owning their own infrastructure, and at low cost, they can build scalable, highly available applications. Not quite. Azure accounts are only available to people in particular countries, and none of them are in Africa. I attended PDC in 2008, when Microsoft unveiled Windows Azure. One of the case studies shown to demonstrate the benefits of cloud computing was a project for an education service in Ethiopia. The point they were making was that the cloud is perfect for scenarios where computing infrastructure is not sophisticated, like Ethiopia. Perfect, I thought. So I got my beta account from PDC and started playing around in the cloud. Then Azure went live, my beta account stopped working, and I can't pay because I am from Uganda. Microsoft, this sucks. I don't know Microsoft's reasons for doing this, but I am sure we can work something out. We in Africa need the cloud more than anybody else in the world. Setting up highly scalable and available data centers for our startups is not an option we have, but we also can't pay for cloud computing with Microsoft. Microsoft, we know we are a tiny, insignificant market for a company of your size, but excluding us only continues to widen the digital divide. Microsoft, how about a reseller model for cloud computing? Instead of trying to deal directly with each client, you would have local partners who help you sell and bill your cloud services. I think that would lead to Windows Azure being available in Africa. I can help you resell in Uganda.


  • Windows 2000 under Windows 7 Virtual PC not working correctly

    - by dave
    I have just moved my Windows 2000 virtual PCs from Vista to Windows 7 Professional (64-bit). The machines work up to a point, but I have found some problems:

    - Drive mapping does not seem to work any more. I need this to exchange data. I do not need network access to the virtual PC, so I would rather leave it unconnected.
    - The virtual PC automatically shuts down the session and goes to the login screen after a few minutes of inactivity.

    I tried installing the Virtual PC Integration Components, but the install failed (one of the messages basically says they are XP and later only). Now I'm stuck in 640x480 mode with mouse capture. I have heard that you can install an older version of the Integration Components, but this sounds a bit suspect. Does anyone have any ideas on how to get Windows 2000 working with drive sharing on Virtual PC?


  • Antenna Aligner part 2: Finding the right direction

    - by Chris George
    Last time I managed to get "my first app(tm)" built, published and running on my iPhone. This was really cool, a piece of my code running on my very own device. Ok, so I'm easily pleased! The next challenge was actually trying to determine what it was I wanted this app to do, and how to do it. Reverting back to good old paper and pen, I started sketching out designs for the app. I knew I wanted it to get a list of transmitters, then clicking on a transmitter would display a compass-type view, with an arrow pointing the right way. I figured there would not be much point in continuing until I knew I could do the graphical part of the project, i.e. the rotating compass, so armed with that reasoning (plus the fact I just wanted to get on and code!), I once again dived into Visual Studio. Using my friend (Google) I found some example code for getting the compass data from the phone using the PhoneGap framework:

        // onSuccess: Get the current heading
        function onSuccess(heading) {
            alert('Heading: ' + heading);
        }

        navigator.compass.getCurrentHeading(onSuccess, onError);

    Using the Ripple mobile emulator this showed that it was successfully getting the compass heading. But it didn't work when uploaded to my phone. It turns out that the examples I had been looking at were for PhoneGap 1.0, and Nomad uses PhoneGap 1.4.1. In 1.4.1, getCurrentHeading provides a compass object to onSuccess, not just a numeric value, so the code now looks like:

        // onSuccess: Get the current magnetic heading
        function onSuccess(heading) {
            alert('Heading: ' + heading.magneticHeading);
        };

        navigator.compass.getCurrentHeading(onSuccess, onError);

    So the lesson learnt from this... read the documentation for the version you are actually using! This does, however, lead to compatibility problems with Ripple, as it only supports 1.0, which is a real pain. I hope that the Ripple system is updated sometime soon.


  • Video game "Gish" will only launch from command line

    - by aberration
    Platform: Lubuntu 11.10 x64. Program: Gish. When I try to launch Gish from the command line (/opt/gish/gi.sh), there are no problems. But when I try to launch it from the LXDE menu, it will not start. Contents of /usr/share/applications/gish.desktop:

        [Desktop Entry]
        Categories=Game;ActionGame;AdventureGame;ArcadeGame;
        Exec=/opt/gish/gi.sh
        Path=/opt/gish
        Icon=x-gish
        Terminal=false
        Type=Application
        Name=Gish

    I tried changing Terminal=false to Terminal=true to debug it, but then I just got a blank terminal, and the game didn't start. Edit: here is some additional information, as requested by Eliah Kagan below. I tried editing /usr/share/applications/gish.desktop as recommended, but it had no effect. However, ~/.xsession-errors contained the following error:

        [: 8: x86_64: unexpected operator
        ./gish_32: error while loading shared libraries: libGL.so.1: wrong ELF class: ELFCLASS64

    I think there's a problem with the /opt/gish/gi.sh shell script. This is its contents:

        cd /opt/gish/
        MACHINE_TYPE=`uname -m`
        if [ ${MACHINE_TYPE} == 'x86_64' ]; then
          ./gish_64
        else
          ./gish_32
        fi

    I'm not too familiar with Bash, so hopefully someone else can point out the error. I have a 64-bit machine. I think that when the script is run from the command line it properly launches the 64-bit version (/opt/gish/gish_64), but when it is run from the LXDE menu it launches the 32-bit version (/opt/gish/gish_32), which causes the libGL.so.1 error. However, this may be related to my libGL.so.1 problems with 2 other games.
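
    The "[: 8: x86_64: unexpected operator" line pins the bug down: the LXDE menu runs the script with /bin/sh (dash on Lubuntu), and dash's [ does not accept the bashism ==, so the test fails, the else branch runs, and the 32-bit binary hits a 64-bit libGL (the ELFCLASS64 error). From an interactive bash shell the comparison happens to work, which is presumably why the command line is fine. A POSIX-safe sketch of the script:

        #!/bin/sh
        cd /opt/gish/ || exit 1
        MACHINE_TYPE=$(uname -m)
        # POSIX test uses a single '=' for string comparison
        if [ "${MACHINE_TYPE}" = 'x86_64' ]; then
            ./gish_64
        else
            ./gish_32
        fi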


  • how can I git-revise configs in my /etc/ dir? (sudo has different keys..)

    - by Dean Rather
    I'd like to keep some of the folders in my /etc/ dir under git revision control, because I'm quite new to server administration and am constantly messing around in my /etc/nginx/ and /etc/bind/ directories. I've heard of people putting their entire /etc/ directory under git, but that seems a bit like overkill, as at this point I'm only messing with those 2 subdirectories. The problem I'm having is that if I sudo my git operations, I don't have the right public keys to push to my remote repo (Bitbucket). But if I don't sudo, I need to mess around with all the permissions (again, not very pro at this). Does anyone know best practices for managing their configs, or how I should solve this problem? Thanks, Dean. PS. It's Ubuntu 12.04, Git, nginx, bind9, Amazon AWS, Bitbucket...
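
    One hedged pattern that sidesteps the key problem entirely: let root commit inside /etc, and push from an ordinary user clone that has your Bitbucket key. A sketch with illustrative paths and remote URL (etckeeper automates the /etc side of this if you ever do want the whole directory):

        # one-time setup: root owns the repo in /etc, your user owns a clone
        sudo git init /etc/nginx
        git clone /etc/nginx ~/etc-nginx
        cd ~/etc-nginx
        git remote add bitbucket git@bitbucket.org:dean/etc-nginx.git  # illustrative

        # after each config change: commit as root, push as yourself
        cd /etc/nginx && sudo git add -A && sudo git commit -m 'tune nginx'
        cd ~/etc-nginx && git pull origin master && git push bitbucket master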


  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In "What would you choose for your project between .NET and Java at this point in time?" I say that I would consider "Will you always deploy to Windows?" the single most important decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons:

    - OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK.
    - Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono?
    - Businesses may not want to depend on open-source frameworks as the official plan B.

    I am aware that with the new governance of Java by Oracle the future is unsafe, but e.g. IBM provides JDKs for many platforms, including Linux; they are just not open-sourced. So, under which circumstances is Mono a valid business strategy for .NET applications?


  • How do I get a Belkin F5D8053 wireless adapter working?

    - by disassembler
    I've tried getting my Belkin N Wireless adapter to work on Ubuntu many times with no luck at all. Each time I seem to arrive at a dead end. After some thorough searching of UbuntuForums and WifiDocs I've gathered some information and narrowed the problem down to an issue with the rtl819xU driver. Here's some info that may help:

        $ sudo lshw -C network
        *-network DISABLED
            description: Wireless interface
            physical id: 1
            bus info: usb@1:2
            logical name: wlan0
            serial: 00:22:75:38:52:ac
            capabilities: ethernet physical wireless
            configuration: broadcast=yes driver=rtl819xU multicast=yes wireless=802.11b/g/n

        $ sudo lsmod
        Module                Size   Used by
        vesafb                13449  1
        snd_ice1724           106559 2
        snd_ice17xx_ak4xxx    13163  1 snd_ice1724
        snd_ac97_codec        105614 1 snd_ice1724
        ac97_bus              12642  1 snd_ac97_codec
        snd_ak4xxx_adda       18436  2 snd_ice1724,snd_ice17xx_ak4xxx
        snd_ak4114            14326  1 snd_ice1724
        snd_pt2258            12986  1 snd_ice1724
        snd_i2c               13831  2 snd_ice1724,snd_pt2258
        snd_ak4113            14307  1 snd_ice1724
        snd_pcm               80244  4 snd_ice1724,snd_ac97_codec,snd_ak4114,snd_ak4113
        fglrx                 2434640 121
        snd_seq_midi          13132  0
        snd_rawmidi           25269  2 snd_ice1724,snd_seq_midi
        binfmt_misc           13213  1
        snd_seq_midi_event    14475  1 snd_seq_midi
        snd_seq               51291  2 snd_seq_midi,snd_seq_midi_event
        ppdev                 12849  0
        snd_timer             28659  2 snd_pcm,snd_seq
        snd_seq_device        14110  3 snd_seq_midi,snd_rawmidi,snd_seq
        dcdbas                14054  0
        r8192u_usb            297246 0
        snd                   55295  16 snd_ice1724,snd_ac97_codec,snd_ak4xxx_adda,snd_ak4114,snd_pt2258,snd_i2c,snd_ak4113,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore             12600  1 snd
        parport_pc            32111  1
        snd_page_alloc        14073  1 snd_pcm
        shpchp                32345  0
        lp                    13349  0
        parport               36746  3 ppdev,parport_pc,lp
        usbhid                41704  0
        hid                   77084  1 usbhid
        e100                  40108  0
        floppy                60032  0

        $ sudo iwconfig
        wlan0  802.11b/g/n  Mode:Managed  Access Point: Not-Associated
               Bit Rate:1 Mb/s
               Retry min limit:7  RTS thr:off  Fragment thr:off
               Encryption key:off
               Power Management:off
               Link Quality=0/100  Signal level=0 dBm  Noise level=0 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    I'd like to know: 1) is the driver properly installed and recognized by Ubuntu? 2) What can I do to load the driver properly and make use of the adapter? Thanks!
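
    Worth noting before deeper surgery: lsmod shows r8192u_usb loaded and lshw reports the matching rtl819xU driver bound to wlan0, so the driver appears installed and recognized; the interface is simply DISABLED (down, possibly rfkill-blocked). A short hedged check:

        rfkill list                     # any hard/soft block on the radio?
        sudo rfkill unblock all
        sudo ip link set wlan0 up       # bring the interface up
        sudo iwlist wlan0 scan | head   # if scanning works, the driver is alive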


  • Empty APN list on your Android phone? Restart the phone to fix it

    - by Gopinath
    Today I tried to connect to the internet on my Google Galaxy Nexus running Android 4.2.2 using a cellular data connection, and it failed. I tried reaching a customer care representative to figure out why the data connection was not working, but the robots (interactive voice response systems) never allowed me to reach a human. After digging through the settings I found that an empty list of APNs (Access Point Names) was the reason I could not connect to the internet. I'm not sure what caused the APN list to vanish, but I tried to create a new APN matching the settings required for AT&T mobile. To my surprise, the newly created APN was not shown in the APN list either. So there was clearly something wrong with the phone: my APNs were not shown, and a newly created one was not displayed. A simple Google search on this problem turned up many forum discussions, and the solution is to restart the phone. As soon as I restarted my phone, the APN list was automatically populated and I was able to connect to the internet on my mobile.


  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse-proxy migration. Everything appears to be technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions, due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel=canonical tags we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing; in theory, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our peak season at the moment.
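
    For concreteness, the two pieces of the fix described above look something like this (illustrative URLs); the subdomain must be crawlable before Google can see the canonical at all:

        # http://blog.example.com/robots.txt -- stop blocking the old subdomain
        User-agent: *
        Disallow:

        <!-- in the <head> of each blog.example.com page -->
        <link rel="canonical" href="http://www.example.com/blog/same-post/" />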


  • Hybrid graphics functionality won't work with my Asus UL30V anymore

    - by futuress
    The problem is that I am no longer able to boot in compatibility mode just to turn on the Nvidia graphics and install the driver, because no login screen appears when Ubuntu loads. In Ubuntu 11.10 I was able to activate the 'nvidia graphics only' option this way:

    1) Change the BIOS to 'compatibility mode', which turns off the Intel card.
    2) Install the Nvidia proprietary driver using Ubuntu's driver finder (Additional Drivers) and then reboot.

    I was not interested in using only the Intel graphics, for the sake of battery life. Now I have both cards running, they drain my battery dramatically, and the main problem with this configuration is that no OpenGL is available, so I can't play any games any more. At this point I have a stopgap: I uninstalled the Nvidia drivers and installed Bumblebee, and now the Intel card is recognized. I would prefer to run just the Nvidia card, as in Ubuntu 11.10, but for now this is better than nothing. Does anybody else have the same problem?
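
    A side note on the Bumblebee stopgap described above: OpenGL is not lost entirely, since Bumblebee runs individual programs on the Nvidia card on demand via optirun, e.g.:

        optirun glxgears       # renders on the Nvidia GPU, displays via Intel
        optirun ./some-game    # illustrative game binary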


  • Change X settings to boot a laptop with a non-working screen using an external monitor

    - by dassouki
    My laptop screen committed suicide. Is there a way I can boot into Ubuntu using an external screen? Ubuntu 10.10; video: Nvidia 9500 GM, I think. I can get dual screens up to and including the Ubuntu login screen, at which point it goes back to laptop display only. Then I can just type my password and press Enter. The system (I assume) boots into Ubuntu, but I have no way of getting to the X or Nvidia settings to change my display to the external monitor. Edit: I booted into a terminal using Ctrl + Alt + F1, I think, and now I'm trying to reconfigure X, but strangely xorg.conf seems bland, with not a lot of settings in it. Edit 2: xrandr returns "Can't open display". Edit 3: after some messing around with xrandr and xinit, my xorg.conf only shows one monitor instead of two in its settings, although both the laptop and external screens are connected. Edit 4: it seems that xorg.conf now has a "Screen" and a "Monitor" section; I still can't seem to boot Linux onto the monitor, and I get a "Monitor is not a valid keyword" error in this subsection.
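
    Two hedged hints for the dead ends in the edits above: "Can't open display" just means DISPLAY is not set on the text console, and with the proprietary Nvidia driver of that era output switching generally goes through nvidia-settings rather than xrandr. A sketch (display number :0 assumed):

        DISPLAY=:0 xrandr --query    # talk to the running X server from a console
        DISPLAY=:0 nvidia-settings   # output configuration for the Nvidia driver
        sudo nvidia-xconfig          # regenerate a fuller xorg.conf skeleton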


  • How to handle editing a large file for a non-technical user

    - by Luke
    I have a client who is given a tab-delimited .txt file containing hundreds of thousands of rows. I have a user story as follows: as a user, I want to take the text file and add a new value at the end of each line which contains the concatenated value of two of the columns. For example, if a line of the file read:

        text_one    text_two

    I need to output the following (preferably to a .txt file):

        text_one    text_two    text_onetext_two

    My first approach was to ask the vendor supplying the file to do the concatenation before providing the file; the easiest way to solve a problem is to eliminate it, right? However, they are very uncooperative and have point-blank refused. I've looked at building a simple JavaScript application that does this client-side, so a non-technical user could select the file using a file selector. This approach has a few problems:

    - The file could be over a GB in size and so can't be loaded straight into memory; I've tried, and the browser crashes.
    - There is no means to write a file in JavaScript, so I'd need to output the content to the screen and have the user save it (somehow).

    I was thinking that if I could get around the file-size limitations I could just output the edited content to the page and have the user save the page as a .txt file, but I think there is a better way than JavaScript that will still accommodate the user's lack of technical know-how. Please consider this question to be stack-agnostic, but bear in mind that a nice little shell script or Python script would be deemed unsuitable for a non-technical user unless there is a way of "packaging" it nicely. Updates: the file is too large to open in Excel. The process needs to be run weekly, but it doesn't require scheduling or automation... (yet)
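
    Whatever the final packaging, the core operation is a line-by-line stream, so file size stops mattering. A minimal sketch in Python (column indices and filenames are illustrative) that could later be wrapped in a double-clickable script or a tiny GUI for the non-technical user:

        import csv
        import sys

        def append_concat(src_path, dst_path, col_a=0, col_b=1):
            """Stream a tab-delimited file, appending col_a + col_b to each row."""
            # assumes UTF-8; adjust the encoding if the vendor uses something else
            with open(src_path, newline='', encoding='utf-8') as src, \
                 open(dst_path, 'w', newline='', encoding='utf-8') as dst:
                reader = csv.reader(src, delimiter='\t')
                writer = csv.writer(dst, delimiter='\t')
                for row in reader:
                    row.append(row[col_a] + row[col_b])
                    writer.writerow(row)

        if __name__ == '__main__':
            append_concat(sys.argv[1], sys.argv[2])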


  • Secure Server Distro

    - by Drama
    Hello, I have a root server (i7 / 24GB / 1TB) running Ubuntu 10.04 LTS as my OS. After some security audits (OpenVAS, Retina, etc.) I see that Ubuntu isn't the most secure system for a semi-corporate environment. It is updated from many sources, of course from the Ubuntu security repo too. But nevertheless I was able to exploit my OpenSSL install with an exploit from August/September; there are some critical updates needed which Ubuntu does not provide. I used Debian and Ubuntu for almost 5 years, but now I have doubts. Which distro is secure and up to date from your point of view? How can I make the server more secure? Outsourcing every software module to a VM? I am not new to server hardening: my packages are up to date, I read Ubuntu Security Notices, and I have no unneeded services installed on my server. Thanks.


  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem: considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, in case it could not be finished prior to release, I asked if it would be a viable option to scrap the current state and revert to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement came from higher management, and due to its complexity, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request and on the responsibility for this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me baffled. Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state, due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks.


  • How to set up an inter-OS partition?

    - by Confuzzled Persun
    I need a working partition configuration for use and accessibility on both Ubuntu and Windows. I have an 8GB USB flash drive onto which I am installing Ubuntu 11.10 so that I can have a personal bootable OS wherever I go. I've installed Ubuntu several times, but I just can't seem to get this one partition right. This is my own configuration:

        Partition 1: Primary - 200MB  - Beginning - Ext4  - /boot
        Partition 2: Primary - 1300MB - End       - swap area
        Partition 3: Logical - 5200MB - Beginning - Ext4  - /
        Partition 4: Logical - 1258MB - Beginning - Ext4  - /home
        Partition 5: Logical - 42MB   - End       - FAT32? - /windows?

    What I want is to get partition 5 configured so I can access it on both the installed Ubuntu system and a Windows system (when the USB drive is connected while Windows is booted). Basically, I want Ubuntu installed on the USB drive along with a partition that I can access from other operating systems. I'm thinking I just need the technical configuration of "Use as:" and "Mount point:" for my final partition, but I don't know. Any help with this is appreciated, and any other tips are appreciated as well.
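
    One caveat, flagged as an assumption: Windows of that era only exposes the first partition of a removable USB stick, so the shared FAT32 partition may need to be partition 1 rather than partition 5 to show up there at all. On the Ubuntu side, "Use as: FAT32 file system" with a mount point such as /windows is all that is needed; the resulting /etc/fstab line would look roughly like:

        # /etc/fstab (sketch; the device name is illustrative)
        /dev/sdb1  /windows  vfat  utf8,umask=000  0  0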


  • Gparted: Copy Greyed Out

    - by David
    I booted my system from a GParted Linux live CD, and I have two partitions: /dev/sda1 and /dev/sda2. I want to move both of them to my new physical hard drive (called /dev/sdb). When I right-click on /dev/sda1, I can choose Copy and then paste it onto /dev/sdb. When I right-click on /dev/sda2, Copy is greyed out and there is a yellow exclamation point to the left of the disk. I know that the disk works, since I can boot my computer from it. Why won't GParted let me copy /dev/sda2 to my new hard disk? Since the option is greyed out, I don't even get an error message. Thanks!


  • tradeoffs of iSCSI vs. AFP when using Time Machine with a NAS?

    - by ajit.george
    I'm setting up a home NAS device (Synology DS409) that I'm planning to use for Time Machine backups (amongst other things). What are the tradeoffs between using iSCSI and AFP to mount the backup volume? The Synology wiki suggests that iSCSI is better if the Mac will be frequently disconnected from the network or sleeping, from the point of view of the volume automatically remounting. What about filesystem consistency? Given that unplugging a USB drive without properly unmounting it often requires the Time Machine volume to be repaired, would iSCSI have the same issues? Thanks in advance.


  • Virtualhost setup, same IP address, different DirectoryIndex's

    - by kaykills
    I am trying to set up 2 virtual host entries in Apache but I'm not sure how to accomplish what I want to do. I have two domain names, both pointing to the same IP address. I need the DirectoryIndex to be different, which is pretty much the only difference in the entries. I have the following set up:

        <VirtualHost *:80>
            ServerName firstdomain.com
            ServerAdmin [email protected]
            DocumentRoot "/srv/www"
            DirectoryIndex /portals/site/index.html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName seconddomain.com
            ServerAdmin [email protected]
            DocumentRoot "/srv/www"
            DirectoryIndex /portals/site/index_fr.html
        </VirtualHost>

    Not sure what I need to do differently, but the second entry doesn't work. The only real difference is that I need the second domain to point to a different DirectoryIndex. If there is a better way to accomplish this, your help would be appreciated.
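
    A hedged guess at why the second entry is ignored: on Apache 2.2 and earlier, name-based virtual hosts sharing one address need an explicit NameVirtualHost directive, otherwise the first <VirtualHost> block catches every request. Adding this above both blocks may be all that is missing; apache2ctl -S then prints the parsed vhost table and makes any mismatch obvious:

        NameVirtualHost *:80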


  • Controlling access to site folders if you cannot use Roles

    - by DavidMadden
    I find myself on an assignment where I could not use System.Web.Security.Roles. That meant that I could not use Visual Studio's Website | ASP.NET Configuration. I had to go about things another way. The clues were in these two websites:

        http://www.csharpaspnetarticles.com/2009/02/formsauthentication-ticket-roles-aspnet.html
        http://msdn.microsoft.com/en-us/library/b6x6shw7(v=VS.71).aspx

    You can set the restrictions on folders in your web.config without having to set the restrictions in multiple folders through their own web.config files. In my main default.aspx file in my protected subfolder off my main site, I did the following code, due to MultiFormAuthentication (MFA) providing the security to this point:

        string role = string.Empty;
        if (((Login)Session["Login"]).UserLevelID > 3)
        {
            role = "PowerUser";
        }
        else
        {
            role = "Newbie";
        }

        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
            1,
            ((Login)Session["Login"]).UserID,
            DateTime.Now,
            DateTime.Now.AddMinutes(20),
            false,
            role,
            FormsAuthentication.FormsCookiePath);

        string hashCookies = FormsAuthentication.Encrypt(ticket);
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, hashCookies);
        Response.Cookies.Add(cookie);

    This all gave me the ability to change restrictions on folders without having to restart the website or do any hard-coding.
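
    For the folder-restriction half mentioned above, the standard central mechanism is a <location> element with role-based authorization in the site's root web.config; a minimal sketch, where the path and role names simply mirror the post:

        <!-- root web.config; "protected" and "PowerUser" mirror the post -->
        <location path="protected">
          <system.web>
            <authorization>
              <allow roles="PowerUser" />
              <deny users="*" />
            </authorization>
          </system.web>
        </location>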


  • Running a home mail server using dynamic dns [closed]

    - by Anand
    Hi, is it possible to run an email server on my home box using dynamic DNS? The scenario: I want to auto-CC all incoming and outgoing emails from one of my accounts to another, via some server-side configuration, instead of configuring email clients with rules. I have tried Google Apps Mail, but it doesn't allow auto-CC of outgoing emails. After having read tons of blogs, forum messages, etc. (I hope I have been reading the correct info :) ), the only option to achieve what I need seems to be to set up my own mail server, but the cost of getting a static IP doesn't fit my budget. Please can someone point me in the correct direction? Platform doesn't matter; I can set up a Windows or Linux server. Many thanks

