Search Results

Search found 3994 results on 160 pages for 'implementing'.

Page 112 of 160.

  • Differences between a retail and wholesale website, and how difficult would it be to integrate them? [closed]

    - by kmy
    I was told to come here for some guidance. I know my questions can be quite broad, but I just really want some kind of direction to go in (links, articles, etc.). So here it goes: for the past month I have been planning and started implementing a wholesale website, but plans changed on me and I now need to allow both retail and wholesale orders, which will also depend on whether the account is a regular customer or also has a seller account. There are various types of products, and it has been decided that I will use subcategories as tables to organize them and allow various item-specific fields. So here are some of my specific questions:

    1. How different is the database structure if each item is to be available for both wholesale and retail? Will it just be adding a quantity column and two different prices, or is it much more complicated? I am unaware if there are any price tables, but would that be more difficult?
    2. They use QuickBooks POS software. How difficult/inefficient will it be to update inventory quantities if they have around 2000-4000 products? And what would be the best way to extract the information and update the system? I know it exports an Excel spreadsheet, so maybe a suggested PHP plugin for that?
    3. How difficult is this project in your professional opinion, and how big should a team usually be? (At the moment it is just me.)
    4. What is the projected time for planning, implementation, and quality assurance relative to team size?
    5. I am an entry-level developer and I know that I do not have enough knowledge to direct myself on this website. What kind of web developer skills will I need to find to help me? (My company is willing to hire people to get this website done as fast as possible.)
    6. Also, what would be some good questions to ask the product stakeholder about what he wants from the website? (He has made it clear he is neither computer nor internet savvy...)

    Sorry for the number of questions; I'm an entry-level web developer and do not have a senior to look up to for guidance. I have knowledge of HTML, CSS, JavaScript, jQuery, PHP, MySQL, and phpMyAdmin. I have never used any frameworks like Zend, Magento, etc., and am doing this all from scratch. So far the website is built in an object-oriented way, MVC architecture to the best of my abilities, but I have many doubts because I really want to have this done right. If I have been unclear on some parts, please tell me and I'll add more detail. I'm sure I'll have additional questions later on; if anyone is open to that, please tell me. Thanks in advance!
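    For illustration of the pricing question: a common pattern is to keep prices out of the item table and key them by price type, so each item can carry both a retail and a wholesale row. A minimal sketch, assuming MySQL and hypothetical table/column names:

        -- Hypothetical schema sketch: one price row per (item, price type).
        CREATE TABLE item_price (
            item_id    INT NOT NULL,
            price_type ENUM('retail', 'wholesale') NOT NULL,
            price      DECIMAL(10,2) NOT NULL,
            min_qty    INT NOT NULL DEFAULT 1,  -- wholesale rows can demand a minimum quantity
            PRIMARY KEY (item_id, price_type),
            FOREIGN KEY (item_id) REFERENCES item(id)
        );

    The checkout query then picks the row matching the account's buyer type, which leaves the subcategory tables untouched.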

    Read the article

  • LiteSpeed enable Access-Control-Allow-Origin (no response header on CORS request)

    - by Joe Coder Guy
    Seriously, I can't find a single page discussing this for LiteSpeed. Using this format in the .htaccess:

        Header set Access-Control-Allow-Origin http://aSite.com

    (and https) sends the setting in the HTTP response header, but I still get the error "XMLHttpRequest cannot load https://aSite.com/aFile.php. Origin aSite.com is not allowed by Access-Control-Allow-Origin" when trying to access https from an http origin. Also, I receive no response header for https; only that message shows up in Chrome. Is the server still blocking it even though I've sent the proper headers? I read elsewhere that it helps to add these terms:

        Access-Control-Allow-Headers X-Requested-With
        Access-Control-Allow-Methods OPTIONS, GET, POST
        Access-Control-Allow-Headers Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control

    but I don't see these in my headers. Using these, my PHP files aren't even reached (because they register no errors or anything), so it looks like it comes from the server only, but what do I know. Thanks in advance!

    Update: Since there is no response header, Prashant's experience at http://stackoverflow.com/questions/11953132/no-response-obtained-while-implementing-cors suggests it's a server issue, since the same approach worked on another server. Anyone know how to flip this switch?

    Headers work now: the LiteSpeed format was bad. It should look like this (still being denied, though):

        Header set Access-Control-Allow-Headers X-Requested-With
        Header set Access-Control-Allow-Methods OPTIONS
        Header set Access-Control-Allow-Methods GET
        Header set Access-Control-Allow-Methods POST
        Header set Access-Control-Allow-Headers Content-Type
        Header set Access-Control-Allow-Headers Depth
        Header set Access-Control-Allow-Headers User-Agent
        Header set Access-Control-Allow-Headers X-File-Size
        Header set Access-Control-Allow-Headers X-Requested-With
        Header set Access-Control-Allow-Headers If-Modified-Since
        Header set Access-Control-Allow-Headers X-File-Name
        Header set Access-Control-Allow-Headers Cache-Control
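    A note on that last block: in Apache's mod_headers, which LiteSpeed emulates, "Header set" replaces any previous value of the same header, so a run of per-value "set" lines ends up advertising only the last value of each header. Assuming LiteSpeed matches Apache semantics here, a quoted comma-separated value avoids that; a sketch:

        Header set Access-Control-Allow-Origin "http://aSite.com"
        Header set Access-Control-Allow-Methods "OPTIONS, GET, POST"
        Header set Access-Control-Allow-Headers "Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control"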

    Read the article

  • Windows 7 Sysprep Default User

    - by Demonwolf
    I seem to be having a problem with implementing my sysprep. I have been playing with Windows 7, WAIK, Server 2008 R2 and various other things. I managed to create a WIM with everything I need installed, and I have worked out the autounattend.xml. I now have a Windows 7 64-bit complete unattended install from a USB device. It has all my programs, settings and everything done except one thing: the default profile set up 100% correctly.

    I have created a mostly set up default profile. I booted into audit mode, customized the Administrator account (mostly, anyway) and then used sysprep with an unattend.xml file containing the copyprofile=true setting. The file was set up with the WSIM and does not contain any extra info. This all works wonderfully. I recreated the WIM and all was good.

    I then decided to move the default location of the visible stuff in the user profile (Documents, Music, Pictures etc.) without changing the location of AppData or other hidden folders. This is where things went a little... wrong. I went to the user folder (generally has the user name) with all the other folders in it. I right-clicked on My Documents, found the Location tab and changed it to M:\Documents. Now if I run

        sysprep /generalize /oobe /reboot /unattend:unattend.xml

    it starts the generalize... then spits out a fatal error and goes no further. The setuperr.log contains the following errors:

        2011-08-18 23:21:43, Error [0x0f0043] SYSPRP WinMain:The sysprep dialog box returned FALSE
        2011-08-18 23:31:57, Error [0x0f0082] SYSPRP LaunchDll:Failure occurred while executing 'C:\Windows\System32\slc.dll,SLReArmWindows', returned error code -1073425657
        2011-08-18 23:31:57, Error [0x0f0070] SYSPRP RunExternalDlls:An error occurred while running registry sysprep DLLs, halting sysprep execution. dwRet = -1073425657
        2011-08-18 23:31:57, Error [0x0f00a8] SYSPRP WinMain:Hit failure while processing sysprep generalize internal providers; hr = 0xc004d307

    Does anyone have any ideas how I can redirect My Documents and other items in a user profile to a second drive in the default profile so it affects each person logging in?
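    For what it's worth, error 0xc004d307 from slc.dll/SLReArmWindows is the signature of the Windows activation rearm count being used up (each sysprep /generalize consumes one; slmgr /dlv shows the remaining count). A commonly cited workaround, sketched here as an assumption rather than a verified fix for this exact setup, is to tell the generalize pass to skip the rearm:

        <!-- unattend.xml fragment (sketch): skip activation rearm during generalize -->
        <settings pass="generalize">
          <component name="Microsoft-Windows-Security-SPP"
                     processorArchitecture="amd64"
                     publicKeyToken="31bf3856ad364e35"
                     language="neutral" versionScope="nonSxS">
            <SkipRearm>1</SkipRearm>
          </component>
        </settings>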

    Read the article

  • Moving from WDS to MDT + WDS - Prestaged Computer Name

    - by MSCF
    We previously used just WDS to deploy our images. WDS was set up to request approval for new machines; we used the "Name and Approve" option to name the machines as we added them. If a machine was pre-existing, it would just use the existing computer name from AD. Then in our unattend.xml file we had Computername=%MACHINENAME%. This picked up the name we gave it during approval and set the computer name accordingly.

    We are now implementing MDT to manage our images and drivers. But upon testing, we noticed it would assign random computer names. I went into the unattend.xml for the deploy task sequence and added that value under the specialize pass (amd64_Microsoft-Windows-Shell-Setup_neutral): Computername=%MACHINENAME%. But when we try applying the image, it errors out at that point of the install. How can an MDT deployment be configured to leverage the pre-staged name?

    Some additional info. Error message during the imaging process:

        Windows could not parse or process the unattend answer file for pass [specialize]. The settings specified in the answer file cannot be applied. The error was detected while processing settings for component [Microsoft-Windows-Shell-Setup].

    setuperr.log:

        2014-07-22 14:02:13, Error [setup.exe] [Action Queue] : Unattend action failed with exit code 4
        2014-07-22 14:02:13, Error [setup.exe] Execution of unattend GCs failed; hr = 0x0; pResults->hrResult = 0x8030000b
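    A plausible reading (an assumption, not verified against this environment): MDT fills the unattend's ComputerName from its own OSDComputerName variable during its ZTIConfigure step, and %MACHINENAME% is a WDS-side token MDT never resolves, hence the parse failure. One direction worth testing is to set OSDComputerName in CustomSettings.ini instead of editing the unattend directly; a sketch with placeholder MACs and names:

        ; CustomSettings.ini sketch -- MAC-keyed sections mimic per-machine prestaging
        [Settings]
        Priority=MACAddress, Default

        [00:15:5D:00:3A:01]
        OSDComputerName=LAB-PC-01

        [Default]
        OSDComputerName=WS-#Right("%SerialNumber%",7)#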

    Read the article

  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We have a need to enforce at-rest encryption on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We have implemented a working solution using BitLocker, using Windows Server 2012 on a Hyper-V virtual server which has iSCSI access to a LUN on our SAN. We were able to do this successfully by using the "floppy disk key storage" hack as defined in this post. However, that method seems hokey to me.

    In my continued research, I found that the Amazon corporate IT team published a whitepaper that outlined exactly what I was looking for in a more elegant solution, without the floppy disk hack. On page 7 of that whitepaper, they state that they implemented Windows DPAPI encryption key management to securely manage their BitLocker keys. This is exactly what I am looking to do, but they stated that they had to write a script to do this, yet they don't provide the script or even any pointers on how to create one.

    Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server's machine account DPAPI key" (as they state in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.
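    The machine-account DPAPI half of that quote maps onto .NET's ProtectedData class with LocalMachine scope, so a sketch of the idea in PowerShell (hypothetical paths and a placeholder recovery password; not the whitepaper's actual script) might look like:

        # Protect the recovery password once, with machine-scope DPAPI:
        Add-Type -AssemblyName System.Security
        $rp   = '111111-222222-333333-444444-555555-666666-777777-888888'  # placeholder
        $blob = [System.Security.Cryptography.ProtectedData]::Protect(
                    [Text.Encoding]::UTF8.GetBytes($rp), $null,
                    [System.Security.Cryptography.DataProtectionScope]::LocalMachine)
        [IO.File]::WriteAllBytes('C:\keystore\vol_e.bin', $blob)

        # Later (e.g. from a service at boot), unprotect and unlock the volume:
        $blob = [IO.File]::ReadAllBytes('C:\keystore\vol_e.bin')
        $rp   = [Text.Encoding]::UTF8.GetString(
                    [System.Security.Cryptography.ProtectedData]::Unprotect($blob, $null,
                    [System.Security.Cryptography.DataProtectionScope]::LocalMachine))
        manage-bde -unlock E: -RecoveryPassword $rp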

    Read the article

  • Splitting Servers into Two Groups

    - by Matt Hanson
    At our organization, we're looking at implementing some sort of informal internal policy for server maintenance. What we're looking at doing is completing maintenance on our entire server pool every two months; each month we'll do half of the servers. What I'm trying to figure out is some way to split the servers into the two groups. Our naming convention leaves much to be desired (but is getting better), so splitting by name or number doesn't really work. I can easily take a list of all the servers and split it in two, but with new servers being added constantly, and old ones retired, that list would be a headache to maintain. I'd like to look at any given server and know whether it should have its maintenance done this month or next. For example, it would be nice to look at the serial number: if it started with an even number, it would get maintenance done on even months, and vice versa. That example won't work, though, as a little over half of the servers are virtual. Any ideas?
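    One sketch of the "look at the server and know" property: hash a stable attribute every machine has, physical or virtual, such as the hostname, and take it modulo 2. Assuming a Linux-ish shell with coreutils on hand:

        # Stable 0/1 bucket from the hostname -- no master list to maintain.
        h=$(hostname | md5sum | cut -c1-8)   # first 32 bits of the MD5, as hex
        if (( 0x$h % 2 == 0 )); then
            echo "maintenance in even months"
        else
            echo "maintenance in odd months"
        fi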

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500Gb (System, Boot)
        D1: 1Tb
        D2: 500Gb
        D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100Mb: System-Reserved Boot
        D0 P2: 50Gb:  Virtual Machine VMDKs for OS
        D0 P3: 350Gb: Data
        D1 P1: 100Mb: System-Reserved Boot
        D1 P2: 50Gb:  Virtual Machine VMDKs for OS
        D1 P3: 800Gb: Data
        D2 P1: 450Gb: Data
        D3 P1: 200Gb: Data

    This will result in a mirrored boot partition, a mirrored operating system, mirrored virtual machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
        HOST    50Gb   the Windows 7 partition (RAID-1)
        GUESTS  50Gb   virtual machine guest VMDKs (RAID-1)
        VG1     900Gb  volume group consisting of a RAID-5 and two RAID-1s
        VG2     300Gb  volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Custom HTTP Status Codes (a la Twitter 420: Enhance Your Calm) [migrated]

    - by Max Bucknell
    I'm currently implementing an HTTP API, my first ever. I've been spending a lot of time looking at the Wikipedia page for HTTP status codes, because I'm determined to implement the right codes for the right situations. Listed on that page is a code with number 420, which is a custom code that Twitter used to use for rate limiting. There is already a code for rate limiting, though. It's 429.

    This led me to wonder why they would set a custom one when there is already a use case. Is that just being cute? And if so, which circumstances would make it acceptable to return a different status code, and what problems, if any, might clients have with it?

    I read somewhere that Mozilla doesn't implement the joke 418: I'm a teapot response, which makes me think that clients choose which status codes they implement. If that's true, then I can imagine Twitter's funny little enhance-your-calm code being problematic. Unless I'm mistaken, and we can appropriate any code number to mean whatever we like, and only convention dictates that 404 means not found and 429 means take it easy.

    Read the article

  • DansGuardian/Squid Traffic doesn't get back to user

    - by DKNUCKLES
    I've purchased a Squid appliance that I'm attempting to implement, but the lack of documentation has left me a bit high and dry. Forgive me if this is a silly question; this is my first attempt at implementing Squid. From what I can ascertain from the documentation (or lack thereof), users connect to DansGuardian first on port 8080, where the filtering is done, at which point it forwards the traffic to the Squid appliance on port 3128. The traffic is then sent to the internet. The setup I have is as follows:

        Gateway (MikroTik router): 192.168.88.1
        Squid/DansGuardian:        192.168.88.100
        Client:                    192.168.88.238

        Client --- Gateway --- Proxy --- Internet

    I have set up a simple NAT rule to forward all traffic from the client machine (for testing purposes) to the DansGuardian. The traffic seems to get there, although I see a lot of SYN_RECV with netstat -antp on the virtual appliance machine. From this I gather that the traffic is NOT being routed back to the client machine.

        Proto Recv-Q Send-Q Local Address          Foreign Address        State       PID/Program name
        tcp        0      0 0.0.0.0:8080           0.0.0.0:*              LISTEN      -
        tcp        0      0 192.168.88.100:8080    192.168.88.238:55786   SYN_RECV    -
        tcp        0      0 192.168.88.100:8080    192.168.88.238:55787   SYN_RECV    -
        tcp        0      0 192.168.88.100:8080    192.168.88.238:55785   SYN_RECV    -
        tcp        0      0 192.168.88.100:8080    192.168.88.238:55788   SYN_RECV    -
        tcp        0      0 0.0.0.0:10000          0.0.0.0:*              LISTEN      -
        tcp        0      0 0.0.0.0:22             0.0.0.0:*              LISTEN      -

    Is this a routing issue or an issue with the Squid appliance?

    Read the article

  • Driver issue for 7850 HD Diamond Windows 7

    - by Mr.Student
    I have an issue with Windows 7 implementing the drivers ATI provides for my Diamond 7850 HD video card. The video card is plugged correctly into the PCIe 2.0 slot of my motherboard (ASUS P6T) with more than enough power. The card works, as far as that goes: my monitor is plugged into the card itself and I can see just fine. However, Windows 7 does not replace the standard VGA default Microsoft driver with the Catalyst drivers I've tried installing, both from online and from the CD.

    Every time I install the drivers using Catalyst Install Manager, it finishes with "Install complete (Warnings occurred)". The warnings report only that it successfully installed the SDK, Catalyst Install Manager, and HDMI/Audio packages. When I check Device Manager after that, the display adapter still shows as standard VGA, still using the Microsoft default display driver.

    This card is replacing an old ATI 4890 HD, which works beautifully. In fact, for the old card to work, all I need to do is go to Device Manager, right-click on the standard VGA adapter, click "Update driver", and Windows 7 magically finds the correct driver to install, and everything is good. Not so with the new card. I've even gone into the registry and uninstalled every bit of ATI driver software before reinstalling the new card. Nothing's worked thus far. I've already exchanged my card once and talked to customer support for my motherboard, Microsoft, and ATI, all blaming each other. Please help me out!!

    Read the article

  • ARP requests are sent continuously and my Linux machine is disconnected from the world

    - by sees
    I have the following problem and really need your help. I'm implementing a small server to receive requests from clients on port 18999 (just a sample) using a TCP socket. When I tested my server with a lot of requests from a tablet through a router, I ran into the ARP problem(?). My network looks like:

        TABLET <------- WIRELESS ROUTER <------- MY SERVER (LINUX)

    Problems:

        1. Cannot connect to my Linux machine any more (telnet, ping, etc.: unreachable).
        2. I used a serial cable to connect to my Linux machine and used Wireshark (from Windows) to capture the messages sent by Linux. It shows that Linux keeps sending out ARP messages continuously, every 3 seconds, like the following:

            xx:xx:99:77:ff:69  ff:ff:ff:ff:ff:ff  ARP  60  Who has 192.168.10.2? Tell 192.168.10.3

    In the above message, xx:xx:99:77:ff:69 is my Linux MAC address, 192.168.10.2 is my tablet's address, and 192.168.10.3 is my Linux IP address. Can you help me figure out the problem? Or tell me a way to detect the problem and reset the network back to normal (maybe restart Linux, but I want to detect the problem and restart automatically)?

    UPDATE:

        1. The above network works normally if the tablet sends messages to my Linux machine at normal speed (but it also goes down after 48 hours).
        2. The router works again after I unplugged my Linux ethernet cable (RJ45) from the router.
        3. The wireless network still works (I can browse the router page from the tablet).
        4. When I use ifconfig down and then ifconfig up, the Linux machine restarts (?!)
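    For the "detect and reset" angle, the kernel's neighbor table can be inspected and cleared without bouncing the interface; a few stock tcpdump/iproute2 commands (the interface name eth0 is an assumption):

        tcpdump -n -i eth0 arp          # watch the who-has storm live
        ip neigh show dev eth0          # is 192.168.10.2 stuck in INCOMPLETE/FAILED?
        ip neigh flush dev eth0         # clear the table in place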

    Read the article

  • Network Load Balancing, intermittent port problem

    - by Jimmy Chandra
    Trying to troubleshoot an intermittent problem; I think it might be related to an NLB issue. We are using Windows Network Load Balancing to balance load for our multi-server SharePoint front ends. Say Web Front End 1's IP is 192.168.1.100 and Web Front End 2's IP is 192.168.1.101; the NLB is set up to load-balance both WFE servers on any incoming traffic to the IP 192.168.1.200.

    Sometimes we get an intermittent issue where, when we try to access the SharePoint site using 192.168.1.200:8080 (say the site is set up to run on port 8080) from a remote client, it displays "page not found". Pinging 192.168.1.200 gets responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on an individual WFE (192.168.1.100 or 192.168.1.101) shows no problem whatsoever. My guess also (we didn't get a chance to try it yet, but I think it should work) is that if I try connecting remotely to an individual server, it will respond just fine, but any attempt to connect using the virtual IP (192.168.1.200) will fail miserably. The funny thing is, after a while it returns to normal.

    Has anyone had similar experience with this type of problem while implementing NLB before? We are doing this in a virtual environment.

    Read the article

  • IP address from a MAC address

    - by acermate433s
    I'm writing a class to integrate a POS card reader device with our software. In order for it to work, I must know what IP it's using. We were given some sample code by the service provider, and the way they do this is to request a web page (http://www.ebizchargeemv.com/getip.php?mac={MAC address of device}), which returns the IP address of the device.

    The device I'm using is a POSLynx220 Mini. It has an ethernet port that connects to the internet to communicate with the service provider. I send TCP data to it, and the device then controls a PIN pad that prompts a client to swipe his card. It's probably a mini computer that communicates with the service provider and uses the PIN pad as its input device.

    Just being curious, but how did they implement this? Are they implementing it using ARP? I'm planning on not using their website to determine the IP of the device. I've seen some code that uses ARP, but executing ARP on one of the PCs didn't detect the POS device.
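    One relevant detail: ARP only answers within the local broadcast domain, which may be why the vendor routed the lookup through a web service (the device phones home, and the service reports the address it sees). On the same subnet, though, a sweep-then-lookup sketch works; the subnet and MAC below are placeholders:

        nmap -sn 192.168.1.0/24                 # any ping sweep will populate the ARP cache
        arp -an | grep -i "00:11:22:aa:bb:cc"   # then map the device's MAC to its IP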

    Read the article

  • vDS - vCenter Problem

    - by rbmadison
    We are implementing a vSphere farm and are using a distributed switch. The VC is a VM within the farm, connected to the distributed switch. We had a SAN issue and all of our VMs went down. When the SAN recovered and we restarted the ESX host containing the VC, the VC couldn't connect to the network through the vDS. We had to remove a NIC from the vDS on that host and create a regular vSwitch, then connect the VC to that, before the VC would connect to the network.

    Is this typical behavior? If the VC goes down, does all vDS networking stop on all the hosts? That seems to be a very bad thing. I thought networking would keep working even though the VC is down, because the hosts have the vDS configuration cached. Is there a better way to configure it to prevent this from happening? We want to keep the VC as a VM for HA and recoverability purposes. Can anyone offer suggestions or explanations? I appreciate the help. Thanks, Rick

    Read the article

  • Value of Itanium over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, currently on SUSE (potentially migrating to Red Hat). Our database is approximately 100GB and performs adequately on our current hardware (x86_64) with approximately 6GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value of each license by choosing the most appropriate CPU to run the software on. The questions are:

        - Are there substantial benefits to looking at Itanium hardware, and are there any drawbacks?
        - Is there a point where Itanium starts to scale out better?
        - What are the long-term support options for Itanium?
        - Given the dominance of x86, would it be safer long term to stick with x86?
        - On average, what would be the performance benefit of implementing an Oracle database on Itanium over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first?

    If anyone can point me towards some solid documentation comparing the two platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Read the article

  • Problem with setting up RAID5 on FreeNAS, please help

    - by Benjy23
    Hey guys, I've been running FreeNAS for a while now. The hardware is:

        CPU: 1.8GHz Celeron
        RAM: 1GB
        SATA card: VIA (not sure of the model); it has 2 ports
        Disks: 6 x 1.5TB HDs

    All ran OK while running on the individual 1.5TB disks, no RAID. I'm now trying to create a RAID5 with my 6 HDs, software RAID. Is it normal for it to take up to 2 weeks just to build the array? Sorry, I'm very new to implementing RAID, and googling doesn't tell much other than that it takes a long time. Also, the RAID building process seems to fail many times, going to degraded. I suspect it's because 4 of my HDs are connected to my motherboard and the other 2 are connected to my SATA card; what's your take?

    I'm considering 2 options now: either get an 8-port SATA card and attach all the HDs to it, or get an 8-port RAID controller card, which is probably going to be more pricey. Also, how do you access hardware RAID through FreeNAS? I like how FreeNAS emails you should your hard drive fail, so can this be done as well with hardware RAID? Thanks in advance guys.

    Read the article

  • knife on Windows inconsistently reads ~\.chef\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of Chef (open-source v10.12) in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, but that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64), and I use PowerShell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results.

    I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get:

        ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem
        Check your configuration file and ensure that your private key is readable

    (Note that the path is incorrect.) This is the progression as I walk up the tree towards ~, running knife client list:

        C:\Users\waldo\Dropbox\_company\projects\   => Above error
        C:\Users\waldo\Dropbox\_company\            => Above error
        C:\Users\waldo\Dropbox\                     => It works! (Expected results)
        C:\Users\waldo\                             => Expected results
        C:\Users\waldo\Documents\                   => Expected results
        C:\Users\waldo\Documents\GitHub             => Expected results
        C:\Users\waldo\Documents\GitHub\aProject\   => Expected results

    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. The question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?
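    Until the lookup behavior is explained, one low-effort workaround is a PowerShell wrapper that always passes -c, so the working directory stops mattering. The knife.bat path below is an assumption; point it at the actual binstub:

        function knife {
            & 'C:\opscode\chef\bin\knife.bat' -c "$env:USERPROFILE\.chef\knife.rb" @args
        }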

    Read the article

  • IP-dependent local port-forwarding on Linux

    - by chronos
    I have configured my server's sshd to listen on a non-standard port, 42. However, at work I am behind a firewall/proxy which only allows outgoing connections to ports 21, 22, 80 and 443. Consequently, I cannot ssh to my server from work, which is bad. I do not want to return sshd to port 22.

    The idea is this: on my server, locally forward port 22 to port 42 if the source IP matches the external IP of my work's network. For clarity, let us assume that my server's IP is 169.1.1.1 (on eth1), and my work's external IP is 169.250.250.250. For all IPs different from 169.250.250.250, my server should respond with the expected 'connection refused', as it does for a non-listening port.

    I'm very new to iptables. I have briefly looked through the long iptables manual and these related/relevant questions:

        http://serverfault.com/questions/57872/iptables-question-forwarding-port-x-to-an-ssh-port-of-different-machine-on-the-n
        http://serverfault.com/questions/140622/how-can-i-port-forward-with-iptables

    However, those questions deal with more complicated several-host scenarios, and it is not clear to me which tables and chains I should use for local port-forwarding, and whether I should have 2 rules (for "question" and "answer" packets) or only 1 rule for "question" packets. So far I have only enabled forwarding via sysctl. I will start testing solutions tomorrow, and will appreciate pointers or case-specific examples for implementing my simple scenario. Is the draft solution below correct?

        iptables -A INPUT [-m state] [-i eth1] --source 169.250.250.250 -p tcp --destination 169.1.1.1:42 --dport 22 --state NEW,ESTABLISHED,RELATED -j ACCEPT

    Should I use the mangle table instead of filter? And/or the FORWARD chain instead of INPUT?
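    For reference while testing: the filter table's INPUT chain can only accept or drop, not retarget a port; local port redirection lives in the nat table's PREROUTING chain. Neither mangle nor FORWARD is needed for traffic addressed to the box itself, and connection tracking handles the "answer" packets, so one rule suffices. A hedged sketch of that shape:

        # nat/PREROUTING: rewrite :22 to :42, but only for the work IP
        iptables -t nat -A PREROUTING -i eth1 -s 169.250.250.250 \
                 -d 169.1.1.1 -p tcp --dport 22 -j REDIRECT --to-ports 42
        # Other sources never match this rule, so they still get the
        # normal 'connection refused' from the closed port 22.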

    Read the article

  • Updating Samba From RPMs

    - by KnickerKicker
    My Red Hat Enterprise Linux 4 comes with Samba version 3.0.10, which does not have support for the "inherit owner" attribute that is essential in implementing a deny-delete, write-once-read-many share (for examples, search Google for a-shared-drop-box-using-samba). (BTW, if anybody knows an alternative way to do it without updating Samba, I'm all ears!)

    I am not all that comfortable building from source, and after hours of googling (no, I do not have a Red Hat subscription, so I cannot just run the up2date command), I found a whole bunch of RPMs on http://ftp.sernet.de/pub/samba/tested/rhel/4/i386/ (Samba 3.2.15 for RHEL 4). Next, I tried updating them with the rpm -U --nodeps command, but I got file conflict errors. So I went ahead and overwrote everything (or so I thought) by using rpm's --force option. But no good has come of all that: /usr/sbin/smbd -V still returns the old version. As of now, rpm -qa | grep samba returns:

        samba3-client-3.2.15-40.el4
        samba-3.0.10-1.4E.2
        samba-client-3.0.10-1.4E.2
        system-config-samba-1.2.21-1
        samba3-3.2.15-40.el4
        samba-common-3.0.10-1.4E.2
        samba3-winbind-3.2.15-40.el4

    I cannot remove the older ones because:

        samba-common >= 3.0.8-0.pre1.3 is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64
        libsmbclient.so.0()(64bit) is needed by (installed) kdebase-3.3.1-5.8.x86_64
        libsmbclient.so.0()(64bit) is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64

    Now that's a whole bunch of dependencies that I dare not touch :) Any and all pointers are welcome at this stage. Thanks in advance!
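    One thing that rpm -qa output shows is that the sernet samba3-* packages installed alongside the distro's samba-* packages rather than replacing them, so the old /usr/sbin/smbd may simply still own the PATH. A quick hedged check:

        which smbd                                  # which binary actually runs
        rpm -qf $(which smbd)                       # ...and which package owns it
        rpm -ql samba3 | grep -E 'bin/(smbd|nmbd)'  # where the sernet build put its daemons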

    Read the article

  • Has anyone had luck running 802.1x over ethernet using the stock Windows or other free supplicant?

    - by maxxpower
    I just wanted to see if anyone else has had luck implementing 802.1x over ethernet. Here's my basic setup: the switch sends out 3 EAPOL messages spaced 5 seconds apart. If there's no response, the machine gets put on a guest VLAN with restricted access. If the machine is properly configured, it authenticates and is placed into a secure VLAN.

    About 10% of my Windows XP users are getting self-assigned 169.x addresses. I've used the Odyssey Access Client and it worked without a hitch. I'm using the setting to automatically use the user's Windows login to authenticate, but it's working on 90% of the machines, so I don't think that's the issue. Checking the logs on the DC, it seems that the machines are trying to authenticate with computer credentials even though they are configured not to. I'm running Juniper switches with IAS for RADIUS, configured for PEAP and MSCHAPv2. Macs and Linux boxes seem to have no issues authenticating. One last thing to add: unplugging the ethernet cable and plugging it back in usually resolves the issue, but I'd hardly call that acceptable for production.

    Kinda long-winded and specific for a discussion, but I just want to see if anyone else has had similar issues or experiences, or if anyone knows of a free XP supplicant that actually works with 802.1x over ethernet.

    Read the article

  • iptables captive portal remove user

    - by Burgos
    I followed this guide (a captive portal using PHP and iptables): http://aryo.info/labs/captive-portal-using-php-and-iptables.html. I've set up the web server and iptables on a Linux router, and everything is working as it should. I can allow a user to access the internet with:

        sudo iptables -I internet -t mangle -m mac --mac-source USER_MAC_ADDRESS -j RETURN

    and I can remove access with:

        sudo iptables -D internet -t mangle -m mac --mac-source USER_MAC_ADDRESS -j RETURN

    However, on removal, the user can still open the last viewed page as many times as he wants (if he restarts his ethernet adapter, future connections will be closed). On the blog page I found a script that should remove their "redirection" connection track:

        /usr/sbin/conntrack -L \
          | grep $1 \
          | grep ESTAB \
          | grep 'dport=80' \
          | awk "{ system(\"conntrack -D --orig-src $1 --orig-dst \" \
                 substr(\$6,5) \" -p tcp --orig-port-src \" substr(\$7,7) \" \
                 --orig-port-dst 80\"); }"

    But when I execute that script, nothing happens; the user still has access to that page. When I execute /usr/sbin/conntrack -L | grep USER_IP after running the script, nothing is returned. So my question: is there anything else that can help me clear these tracks? Obviously I can't reset my own network adapter, nor the user's.
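    Since conntrack is already in play, a blunter hedged alternative is to drop every tracked flow from the user's source IP in one shot, instead of pattern-matching the port-80 entries (placeholder IP shown):

        conntrack -D --orig-src 192.168.1.100   # kill all of USER_IP's tracked connections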

    Read the article

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with sensible classification (by similarity, need, read/write access, etc.), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", etc.

    At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial-and-error and experimentation.

    So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably-enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link.

    I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc. It could be used to display directory information in the standard file lists served by webservers. And so on. So, the question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
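    To make the proposal concrete, a sketch of what one such .filing file might contain (directory names invented for illustration):

        Master images from the studio live here, one directory per shoot,
        named *YYYY-MM-DD-client*. Nothing in this tree is edited after
        import; finished exports belong in [deliverables](../deliverables),
        and scans of paper releases go in [releases](../../legal/releases).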

    Read the article

  • How to connect to a remote server and run some code on that particular server?

    - by seedeg
    I am implementing an automated backup scheme, so I created a shell script which first creates SQL dumps of all MySQL databases, then retrieves all the websites from /var/www on a remote server. The latter works, as I am using rsync to get the remote files. However, the MySQL dumps being retrieved are obviously the ones on the local server, which is not what I want: I want the SQL dumps from the remote server as well.

    I have a tunnel between the local and remote servers which I can connect over without a password (I added the public key to authorized_keys), so I tried adding the following to the script:

        ssh [email protected]

    Then I tried to retrieve the SQL dumps and exit from the remote server. However, this does not work, as I still have to type exit manually in the terminal before the SQL dumps are retrieved. I don't know why this is happening. Basically this is what the script is trying to do:

        # connect to the remote server
        ssh [email protected]
        # retrieve the SQL dumps
        # ...code to retrieve...
        # exit from the remote server
        exit
        # use rsync to get /var/www from the remote server (working)

    Is there a way to connect to the remote host AND run the script's code ON THAT remote host? Many thanks in advance.
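    The underlying behavior: a bare ssh in a script opens an interactive shell, and the script simply waits inside it; the lines after it never run remotely. Passing the command to ssh as an argument runs it on the remote host and returns when it finishes. A hedged sketch (paths and dump options are placeholders):

        # run the dump remotely, then pull the file across
        ssh [email protected] 'mysqldump --all-databases > /tmp/all.sql'
        scp [email protected]:/tmp/all.sql /backups/remote/

        # or stream it directly, with no temp file on the remote side
        ssh [email protected] 'mysqldump --all-databases' > /backups/remote/all.sql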

    Read the article

  • Server 2008 R2 domain windows update strategy

    - by Joost Verdaasdonk
    Let me explain my question a bit. We are a small company that has now made the first move to a bigger network. For now, the network consists of 5 Windows Server 2008 R2 machines (DC, SQL, web, etc.). Everything we need is now in place, but for now we cannot afford to finish the network by implementing redundant systems (secondary DC, DNS, SQL cluster, etc.). For some people this is hard to understand, but this is the current situation (and we are aware, and will fix it when we can).

    Because we want to keep our systems secure and up to date, I've made sure that all systems are updated regularly. The problem is, of course, that the number of updates Microsoft rolls out that need a system reboot seems to keep growing (maybe I'm wrong and it just feels like this). ;-) In our domain, servers depend on each other for services (like SQL, web, or whatever), so just rebooting a server at will is NOT a good idea! For now I update all of them without rebooting at once. After all are up to date, I bring them down in the order they depend on each other, and then I reboot all of them in the inverse order.

    I understand, of course, that if I DID have redundancy in my system, updating and rebooting would not be such a problem, because the server's tasks could be taken over by another node, but this is something we generally need to add when we can. So my question is: given the above situation, can you suggest update strategies or general ideas that could help me do this process in a better/faster way? Thanks for your thoughts!

    Read the article
