Search Results

Search found 34337 results on 1374 pages for 'build machine'.

Page 207/1374 | < Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >

  • Active Directory + IIS + SQL + ASP.NET

    - by Amira Elsayed Ismail
    I sent the following question to the Stack Overflow website: I have installed Windows Server 2008 R2 on a virtual machine. Can I install Active Directory with a domain controller, plus IIS and SQL Server, on the same machine? I want to build a web application that authenticates users against Active Directory. The application should be published on the server's IIS, and users should access it remotely from home using my machine's domain name. Someone told me that it is very wrong to have IIS and Active Directory on the same machine.

    I got the following answer: You can't use Active Directory over the internet, at least not without something like a VPN as a middleman. Their home computers will not be joined to the domain, so there is no pass-through authentication. Yes, it's a bad idea to put AD on the web server. Why is too complex to get into in an answer here; suffice it to say that even if you did do this, it probably would not work the way you are thinking it should. It's not impossible to do. For instance, many of the Microsoft "Small Business" products put IIS, AD, and SQL Server on the same server. But you kind of have to know what you're doing to configure it securely.

    Then I added the following comment: Thanks for your reply. What do you think is the best way to do this, as I haven't done anything like it before? Should I install Active Directory on one machine and IIS on another? And what about SQL Server; should I add it to the same server as Active Directory? I also didn't mention that Microsoft Dynamics will access some information about work, and I have to read data from Axapta as well. Also, what is a VPN, and how can I use it to let users access my web application from anywhere? Sorry for my long questions, and thanks in advance; if anyone can help, I will be thankful.

    Read the article

  • Configuring SASL support in libmemcached

    - by John Keyes
    I'm trying to build libmemcached with SASL support on OS X Mountain Lion. I have built memcached (1.4.15) with SASL support:

        $ memcached -S -vv
        Initialized SASL.
        slab class   1: chunk size        96 perslab   10922
        ...
        slab class  42: chunk size   1048576 perslab       1
        <17 server listening (binary)
        <18 server listening (binary)
        <19 send buffer was 9216, now 3728270
        <20 send buffer was 9216, now 3728270
        <19 server listening (udp)
        <20 server listening (udp)
        ...

    I am trying to build libmemcached with SASL support too. I have tried the following:

        $ ./configure --prefix=/usr/local \
            --with-memcached-sasl=/usr/local/bin/memcached
        ...
        $ ./configure --prefix=/usr/local \
            --with-memcached-sasl="/usr/local/bin/memcached -S"
        ...

    But the resulting configuration summary is the same for both:

        Configuration summary for libmemcached version 1.0.11
          * Installation prefix:    /usr/local
          * System type:            apple-darwin12.2.0
          * Host CPU:               x86_64
          * C Compiler:             i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
          * C Flags:                -O2 -Werror -Wall -Wextra -std=c99 -Wbad-function-cast -Wmissing-prototypes -Wnested-externs -Woverride-init
          * C++ Compiler:           i686-apple-darwin11-llvm-g++-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
          * C++ Flags:              -O2 -Werror -Wall -Wextra -Wpragmas -D_FORTIFY_SOURCE=2 -Waddress -Wchar-subscripts -Wcomment -Wctor-dtor-privacy -Wfloat-equal -Wformat=2 -Wmissing-field-initializers -Wmissing-noreturn -Wnon-virtual-dtor -Wnormalized=id -Woverloaded-virtual -Wpointer-arith -Wredundant-decls -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wstrict-overflow=1 -Wswitch-enum -Wundef -Wunused-variable -Wwrite-strings -fwrapv -ggdb
          * CPP Flags:              -I/usr/local/include
          * Assertions enabled:     no
          * Debug enabled:          no
          * Warnings as failure:    no
          * SASL support:

    Am I doing something incorrectly? Thanks.
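
    One thing worth checking, as a sketch only: --with-memcached-sasl appears to point the test suite at a SASL-enabled memcached binary rather than switching SASL on in the library itself. The --enable-sasl flag below is an assumption for this libmemcached version, so verify it against ./configure --help first; it also presumes the Cyrus SASL (libsasl2) headers are installed:

        # Confirm what the build system actually offers:
        $ ./configure --help | grep -i sasl

        # Then enable SASL explicitly, keeping the test-suite binary hint:
        $ ./configure --prefix=/usr/local \
            --enable-sasl \
            --with-memcached-sasl=/usr/local/bin/memcached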

    Read the article

  • How do I restore to a delta file (disk) on VMware ESXi?

    - by Oscar
    Using VMware ESXi (the free version), I have a virtual machine (a Win 2k3 R2 server). When I first provisioned it, I took a snapshot of it. I recently tried to clone the primary drive using my standard hardware-based method for growing a Windows disk (using Knoppix, clone the drive to a new drive, make it bootable, then extend the partition via diskpart from within Windows). This process failed; I tried setting the cloned drive (via the VMware GUI) to replace the original drive, boot, and be done. This didn't work out so well: the machine never booted. I checked the boot order, the disk location, and all the basics I usually check. As a failsafe, I then changed all the settings back so the machine would boot from the original drive while I figured out (as I eventually did) a better way of growing the disk. However, when I powered on the machine with the original drive, it reverted to that initial snapshot I created; it lost all the changes made since. I looked in the file system and found a few files; I think the key file here is one named "delta", and I'm assuming that's the disk I want, but I can't find a way to make the virtual machine actually use that drive/file. It isn't available when I go to add an existing drive. Do I need to somehow commit that delta to the original drive and then boot from it again? Can you point me in the right direction? I've since discovered the proper way of growing drives using vmkfstools, but I need to get back to the original state of the machine to try this out. Any help would be greatly appreciated.
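
    A sketch of one way to pick the delta back up (the paths are hypothetical; vmkfstools -i clones a virtual disk, and pointing it at the snapshot's descriptor should flatten the whole parent chain into a new standalone disk):

        # From the ESXi console/SSH, clone the snapshot descriptor (not the -flat file);
        # the clone follows the parent links and produces a consolidated disk.
        vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm-000001.vmdk \
                      /vmfs/volumes/datastore1/myvm/myvm-current.vmdk
        # Then edit the VM's settings to use myvm-current.vmdk as its disk and boot.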

    Read the article

  • OpenVPN bridge network from routed clients

    - by gphilip
    I have the following setup:

        subnet 1 - 10.0.1.0/24, with a machine used as NAT that also runs an OpenVPN client
        subnet 2 - 192.168.1.0/24, with an OpenVPN server (the NAT machine in subnet 1 connects here)
        subnet 3 - 10.0.2.0/24, which uses the NAT machine in subnet 1 to access the internet, so all non-local traffic is routed to its eth0 interface

    The OpenVPN client creates the tun0 interface and the appropriate routing, so that I can reach machines in 192.168.1.0/24:

        [root@ip-10-0-1-208 ~]# telnet 192.168.1.186 8081
        Trying 192.168.1.186...
        Connected to 192.168.1.186.
        Escape character is '^]'.

        [root@ip-10-0-1-208 ~]# route -n
        Kernel IP routing table
        Destination      Gateway    Genmask          Flags Metric Ref Use Iface
        0.0.0.0          10.0.1.1   0.0.0.0          UG    0      0   0   eth0
        10.0.1.0         0.0.0.0    255.255.255.0    U     0      0   0   eth0
        10.8.0.1         10.8.0.5   255.255.255.255  UGH   0      0   0   tun0
        10.8.0.5         0.0.0.0    255.255.255.255  UH    0      0   0   tun0
        169.254.169.254  0.0.0.0    255.255.255.255  UH    0      0   0   eth0
        192.168.0.0      10.8.0.5   255.255.0.0      UG    0      0   0   tun0

    However, when I try the same from subnet 3, it can't reach that machine:

        [root@ip-10-0-2-61 ~]# telnet 192.168.1.186 8081
        Trying 192.168.1.186...

    I suspect that this is because traffic from subnet 3 arrives on eth0 of the NAT machine in subnet 1 and cannot jump to tun0. What's the easiest way to resolve this? I don't want to use iptables. I can't change the routing on the machines in subnet 1 because it's done in AWS, and so it works only with specific interfaces. Also, the NAT machine gets its IP via DHCP, so bridging is a bit complicated. IP forwarding is enabled on the NAT machine:

        [root@ip-10-0-1-208 ~]# cat /proc/sys/net/ipv4/ip_forward
        1

    Thank you!
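
    One direction to explore, as a sketch (the directives are standard OpenVPN, but the file layout here is an assumption): the OpenVPN server also has to learn that 10.0.2.0/24 lives behind the NAT machine's tunnel, otherwise return traffic has nowhere to go; that is what iroute is for:

        # server.conf on the OpenVPN server (subnet 2):
        route 10.0.2.0 255.255.255.0          # kernel route into the tunnel
        client-config-dir ccd

        # ccd/<common-name-of-the-NAT-client>:
        iroute 10.0.2.0 255.255.255.0         # tells OpenVPN which client owns that subnet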

    Read the article

  • Can't see more than the first few lines in an SSH connection

    - by hello
    I need some help with an SSH output/buffer problem. I have a Vista machine at home and have installed "Free SSHD" on it. I also have dynamic DNS set up to access some of my home lab equipment, which is connected to this Vista machine. From my work machine, which runs XP, I connect to my home machine using PuTTY. Everything up to this point works fine without any problem. The issue is that I can't see more than the first few lines of the output. I press the space bar to get more output; the output scrolls up and gets lost as more output is displayed on the screen. The PuTTY client on my work machine has been set up with a large enough scrollback buffer, but the output still displays only a few lines, and as it moves up, the buffer gets emptied automatically. I have searched the entire web and haven't found any proper solution anywhere. Can someone please help here? Thanks.

    Read the article

  • How to do a Windows 7 Image restore to an external drive?

    - by Vaccano
    I have a system that I have made a Windows 7 system image of. I would like to migrate that image to a different hard drive. Is there a way to restore the image to an externally connected hard drive? For example, I have three hard drives:

        1. The first is in the source machine (the one I want to copy).
        2. The second is in the machine that I want to do the work.
        3. The third is not in a machine; it is the target that I want to overwrite with the contents of the first.

    I boot up the second machine and connect the third hard drive externally (using some cool cables I have). I then use some cool feature of Windows 7 to replace what is on the third hard drive with the Windows 7 image of my first machine (which is on my networked backup server). I need to know what the above-mentioned "cool feature of Windows 7" is, if there is one, and how to use it. Any ideas? Note that I very much do not want it to overwrite what is on the second machine's hard drive.

    Read the article

  • Why is REMOTE_ADDR only sometimes available as an Apache environment variable?

    - by Xiong Chiamiov
    To avoid having to parse X-Forwarded-For in Varnish, I'm trying to just set a header on the SSL terminator (currently Apache) that stores the direct client IP. On our development machine, this works:

        RequestHeader set X-Foo %{REMOTE_ADDR}e

    However, in staging it doesn't. Specifically, the header is empty, as illustrated by varnishlog:

        13 TxHeader     b X-Foo: (null)

    (On the development machine, this shows the IP address as expected.) Similarly, logging REMOTE_ADDR shows that it only appears to be populated on the dev machine:

        # Config
        LogFormat "%{X-Forwarded-For}i %{REMOTE_ADDR}e" combined
        CustomLog "/var/log/httpd/access_log" combined

        # Log file, staging
        <my ip> -

        # Log file, development
        <my ip> <my ip>

    Since the dev machine is, well, a dev machine, it differs in a number of ways; however, I can't track down which difference is causing this. The versions of Apache are the same (2.2.22), and I don't see anything relevant in any of the standard config files or /etc/sysconfig/httpd. And the rest of the system is reasonably similar, since both are built off the same CentOS 5 base image. I can't even tell from the Apache documentation whether REMOTE_ADDR is expected to exist as an environment variable, but it clearly works on one machine, whether by fluke or design, and the inconsistency is driving me mad.
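
    A sketch of a workaround that does not depend on REMOTE_ADDR being pre-populated as an environment variable (the X-Foo header name is from the question; the CLIENT_IP variable name is invented here for illustration): have mod_rewrite copy the connection's address into an environment variable explicitly, then reference that in RequestHeader:

        RewriteEngine On
        # Set CLIENT_IP on every request from the connection's remote address
        RewriteRule .* - [E=CLIENT_IP:%{REMOTE_ADDR}]
        RequestHeader set X-Foo %{CLIENT_IP}e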

    Read the article

  • Apache: Setting up local test server with subdomains

    - by RC
    Hi everyone, I have XAMPP running on my desktop machine, and I do all my work on it with no issue:

        http://localhost        ---> points to public_html
        http://site1.localhost  ---> points to site 1
        http://site2.localhost  ---> points to site 2
        http://site3.localhost  ---> points to site 3

    Entering the above URLs in a web browser on the machine running Apache works great, and I can work on multiple sites within distinct subdomains. But what I want to do now is transfer Apache and all the files to another Windows 7 machine within the LAN, but still be able to view the subdomains from my main development machine. With a vanilla XAMPP installation on the new hosting machine, entering the IP address of that machine (e.g. 192.168.1.10) on my development computer sends me to the main public_html folder. But how do I set up subdomains such that I can access them externally? For example:

        http://site1.devmachine

    Thanks for any help.
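
    A sketch of the usual approach (the devmachine hostname follows the question; the file paths assume a default XAMPP layout): name-based virtual hosts on the new hosting machine, plus hosts-file entries on each machine that needs to resolve those names:

        # On the hosting machine, in apache\conf\extra\httpd-vhosts.conf:
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName site1.devmachine
            DocumentRoot "C:/xampp/htdocs/site1"
        </VirtualHost>
        # ...one block per site...

        # On the development machine, in C:\Windows\System32\drivers\etc\hosts:
        192.168.1.10  devmachine site1.devmachine site2.devmachine site3.devmachine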

    Read the article

  • Can't ping on my home wireless network

    - by Naunidh
    Hello, this may seem like just another ping problem, but I have tried a lot before posting here. I have a Linksys WRT54G, firmware v8.00.8. I have two laptops, one running Windows Vista (192.168.1.99) and one running Windows XP (192.168.1.13), both connected over WiFi. The router's IP address is 192.168.1.4, and the default gateway is the ADSL modem (192.168.1.1), connected by wire. The problem is that the laptops cannot ping each other; they can ping the gateway and the Linksys router, and both can access the internet. The following has been tried (I am pinging from the XP machine to the Vista machine):

        1. I saw that ARP entries for the Vista machine were not being populated, so I added a static ARP entry: 192.168.1.99 00-19-7e-70-d0-4e static
        2. I checked in Ethereal that an ICMP packet addressed to the Vista machine's MAC address does go out from the XP machine towards the Vista machine, but it never arrives. So it gets eaten by the router?
        3. I added the Vista machine to the DMZ in my Linksys router, so that all the ports are open (in case that was an issue).
        4. Firewalls, antivirus, etc. were turned off; echo was enabled explicitly on Vista; file sharing and network discovery were turned on; the network type was set to private.
        5. I unchecked everything in the router's firewall, even though those settings are only meant for WAN requests.

    Is there anything else that I should try? Thanks.

    Read the article

  • Windows 7 system freezes: would like to know if they could be related to MrxSmb, Event ID 8003 errors

    - by lifegoeson
    First, this question centers around a home network. Is it okay to ask here, or should I go to Super User? (I see fewer answers over there, but I'll go there if that would be more appropriate.) Network setup:

        1 machine running XP Pro
        1 machine running Win7 Ultimate
        Comcast router
        Linksys WRT610N wireless router

    The Win7 machine frequently goes into a total, unrecoverable system freeze. I was tearing out my hair trying to ascertain a cause, but I noticed that it usually seems to correspond with performing operations on the shared folders on the XP machine. The last two times the Win7 machine froze, I saw this entry for Event ID 8003 from source MrxSmb in the event log of the XP machine:

        The master browser has received a server announcement from the computer WIN7_COMPUTER
        that believes that it is the master browser for the domain on transport
        NetBT_Tcpip_{320B32A7-FED9. The master browser is stopping or an election is being forced.

    My question is twofold: Could this cause a Win7 system freeze? If so, what could I configure differently on my network to stop these conflicts over who is the master browser? Thank you for your help!
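
    A sketch of one way to stop the election fights (the registry path and value are the documented Computer Browser service parameters, but treat this as an assumption to verify before applying): tell one of the machines never to compete for master browser:

        REM On the machine that should stop competing (run as administrator):
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Browser\Parameters" ^
            /v MaintainServerList /t REG_SZ /d No /f
        REM Then restart the Computer Browser service:
        net stop browser && net start browser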

    Read the article

  • Can't ping devices by IP address for devices allocated IPs by DHCP

    - by GiddyUpHorsey
    I have a home network with a Trendnet wireless router and a Windows domain. The domain controller/DNS server is a Windows 2000 Server and is configured to forward queries to DNS servers provided by the ISP. The router provides DHCP and is configured with the Windows 2000 Server as the DNS server. The network has been set up for a couple of years and usually works fine. When I connect iPhones to the network over WiFi, the router can ping the iPhones through its browser-based admin interface, but Windows machines that are part of the Windows domain cannot. A laptop that wasn't joined to the domain was connected to the network over WiFi, and it could see the iPhones. The router UI shows that the laptop has a reserved IP allocated via DHCP. All machines have either a static or a DHCP-allocated IP on the 192.168.0.* subnet:

        Router                    - 192.168.0.1   - Static        - Wired
        Windows Domain Controller - 192.168.0.8   - Static        - Virtual
        Windows 7 Workstation     - 192.168.0.200 - DHCP Auto     - Wired
        VMware ESXi Host          - 192.168.0.201 - Static?       - Wired
        iPhone 1                  - 192.168.0.202 - DHCP Auto     - WiFi
        iPhone 2                  - 192.168.0.203 - DHCP Auto     - WiFi
        Windows Vista Laptop      - 192.168.0.204 - DHCP Reserved - WiFi

    Using the Windows 7 machine (.200), I try to ping each machine, and the only DHCP machine that responds is itself. The other DHCP machines fail with "Reply from 192.168.0.200: Destination host unreachable." Using nslookup fails with "*** domain.controller.name can't find 192.168.0.203: Non-existent domain." Using the Windows 2000 domain controller (.8), I try to ping each machine, and the only DHCP machine that responds is the Windows 7 machine (.200). Pinging the other DHCP machines fails with "Request timed out." Using nslookup also fails with "*** domain.controller.name can't find 192.168.0.203: Non-existent domain." Using iPhone 2 (.203), I can ping (Network Ping Lite) the machines with static IP addresses just fine, but when I try to ping the Windows 7 machine (.200), I am unable to get a response. How do I configure the DNS server/Windows domain/router properly so that the Windows domain machines can see the IPs allocated via DHCP?

    Read the article

  • TCP/IP & throughput between FreeNAS (BSD) server & other LAN machines

    - by Tim Dickerson
    I have a question for someone who knows BSD a bit better than I do, regarding my LAN setup at home/work here outside Chicago. I can't seem to fully optimize my network's (LAN) throughput via my FreeNAS (BSD-based) file server. It runs the latest FreeBSD release, modified to support several protocols for file transfers and more. Every machine behind my Smoothwall (Linux-based) router is on the usual 192.168.0.x subnet and for the most part works just fine. Behind the Smoothwall box, all machines are connected to a GB HP unmanaged switch. I host a large WISP here and have an OC-3 connection at home/work, and I have no issues with downloading/uploading from/to the net. My problem is with throughput. When I transfer large files (really any, for that matter) between any of the machines and the FreeNAS server via FTP, the maximum throughput I can achieve, say between a Win 7 or a Linux box, is ~65 Mbit/sec. All machines run Intel Pro 1000 GB NICs and all cable is CAT6. Each is set to auto-negotiation and each shows 1500 MTU, full duplex, at 1 GB, so I know the hardware is okay. I have not adjusted the MTU on any machine, as I understand it to be pointless unless certain configurations are used (I assume I am not one of those). My settings for the FreeNAS machine are the following:

        # FreeNAS /etc/sysctl.conf - pertinent settings shown
        kern.ipc.maxsockbuf=262144
        kern.ipc.nmbclusters=32768
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.inflight.enable=0
        net.inet.tcp.path_mtu_discovery=0
        net.inet.tcp.recvbuf_auto=1
        net.inet.tcp.recvbuf_inc=524288
        net.inet.tcp.recvbuf_max=16777216
        net.inet.tcp.recvspace=65536
        net.inet.tcp.rfc1323=1
        net.inet.tcp.sendbuf_inc=16384
        net.inet.tcp.sendbuf_max=16777216
        net.inet.tcp.sendspace=65536
        net.inet.udp.recvspace=65536
        net.local.stream.recvspace=65536
        net.local.stream.sendspace=65536
        net.inet.tcp.hostcache.expire=1

    From what I can tell, that looks to be a somewhat optimized profile for a typical BSD machine acting as a server for a LAN. I might be wrong, and I just wanted to find out from someone who knows BSD better than I do whether that is indeed okay, or whether something is out of tune. Are there other approaches I would find better for P2P file transfers? I honestly do not know what I SHOULD be looking for with respect to throughput between the NAS box and another client when transferring files via FTP, but I am told that what I get on average (40-70 MB/sec) is too low for what it could be. I have thought about adding another NIC in both the FreeNAS box and the Win7 machine and using a crossover cable via a static route, but I wanted to check with someone first to see if that might be worth it. I don't know if doing that would bypass the HP GB switch and allow for a machine-to-machine transfer anyway. The FTP client I use is FileZilla, and I have tried both active and passive modes with no real gain over each other. The NAS box runs ProFTPD.
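
    A sketch of how to separate the network from the FTP stack (iperf is a standard tool, but it would need to be installed on both ends, e.g. from ports/packages): measure raw TCP throughput first; if iperf shows near-gigabit speeds, the sysctl profile is fine and the bottleneck is ProFTPD or disk I/O:

        # On the FreeNAS box:
        iperf -s

        # On the Win7 or Linux client (address is a placeholder):
        iperf -c 192.168.0.x -t 30
        # ~900+ Mbit/s here means the wire and TCP stack are fine,
        # and the tuning effort belongs in FTP/disk, not sysctls.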

    Read the article

  • Synchronizing/deploying scripts across several systems

    - by otto
    I have a few time-consuming tasks that I like to spread across several computers. These tasks require running an identical Ruby or Python script (or series of scripts that call each other) on each machine. Each machine has a separate config file telling the script what portion of the task to complete. I want to figure out the best way to synchronize the scripts on these machines prior to running them. Up until now, I have been making changes to a copy of the script on a network share and then copying a fresh copy to each machine when I want to run it. But this is cumbersome and leaves room for error (e.g. missing a file on the copy, or not clicking "copy and replace"). Let's assume the systems are standard Windows machines that are not dedicated to this task, and that I don't need to run these scripts all the time (so I don't want a solution that runs 24/7 and always keeps them up to date; I'd prefer something that pushes/pulls on command). My thoughts on various options:

        1. Simple adaptation of my current workflow: keep the originals on the network drive, but write a batch file that copies over the latest version of the scripts so everything is a one-click operation. Requires action on each system, but that's not the end of the world (since each one usually needs its configuration file changed slightly too).
        2. Put everything in a Mercurial/Git repository and pull a fresh copy onto each node. Going straight to the repo from each machine would guarantee a current version (and would have the fringe benefit of allowing edits to the script to be made from any machine). The cons are that it requires a VCS to be installed on each machine, and there might be some pain dealing with authentication, since I wouldn't use a public repo.
        3. Open up write access on a shared folder and write a script that uses rsync (or similar) to push the changes out to all of the machines at once. This gets a current version on every machine (though you would have to change the script to omit a machine or add a new one). A possible issue is that each computer has to allow write access.
        4. Dropbox is a reasonable suggestion (and could work well), but I don't want to use an external service, and I'd prefer not to have Dropbox running 24/7 on systems that would normally not need it.

    Is there something simple that I am missing? Some tool designed expressly for doing this kind of thing? Otherwise I am leaning toward just tying all of the systems into Mercurial, since, while it requires extra software, it is a little more robust than writing a batch file (e.g. if I split part of a script into a separate module, Mercurial will know what to do, whereas I would have to add a line to the batch file).
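
    A sketch of option 1 (the share and target paths are hypothetical): robocopy ships with Windows Vista/7, and its /MIR switch mirrors the source tree, which also removes locally deleted files, so the one-click pull is a two-line batch file:

        @echo off
        REM pull-scripts.bat - mirror the master copy from the network share
        robocopy \\fileserver\scripts C:\tasks\scripts /MIR /R:2 /W:5
        REM robocopy exit codes of 8 or higher indicate a failed copy
        if errorlevel 8 echo Copy failed & exit /b 1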

    Read the article

  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception? Consider a WCF service that is supposed to use PowerShell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for PowerShell and via PowerShell for MSBuild), rather than "shelling out"; this was a specific design decision to avoid command-line hell as well as to enable passing actual objects into the PowerShell script. The PowerShell script that calls MSBuild is shown below:

        function Run-MSBuild
        {
            [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine")

            $engine = New-Object Microsoft.Build.BuildEngine.Engine
            $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5"

            $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5")
            $project.Load("deploy.targets")
            $project.InitialTargets = "DoStuff"

            #
            # Set some initial Properties & Items
            #

            # Optionally set up some loggers (have also tried it without any loggers)
            $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger
            $engine.RegisterLogger($consoleLogger)

            $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger
            $fileLogger.Parameters = "verbosity=diagnostic"
            $engine.RegisterLogger($fileLogger)

            # Run the build - this is the line that throws a CmdletInvocationException
            $result = $project.Build()

            $engine.Shutdown()
        }

    When running the above script from a PowerShell command prompt, it all works fine. However, as soon as the script is executed from C#, it fails with the above exception. The C# code being used to call PowerShell is shown below (remoting functionality removed for simplicity's sake):

        // Build the DTO object that will be passed to Powershell
        dto = SetupDTO();

        RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();
        using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig))
        {
            runspace.Open();

            IList errors;
            using (var scriptInvoker = new RunspaceInvoke(runspace))
            {
                // The Powershell script lives in a file that gets compiled as an embedded resource
                TextReader tr = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource"));
                string script = tr.ReadToEnd();

                // Load the script into the Runspace
                scriptInvoker.Invoke(script);

                // Call the function defined in the script, passing the DTO as an input object
                var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
            }
        }

    Assuming that the issue was related to MSBuild outputting something that the PowerShell runspace can't cope with, I have also tried the following variations of the second .Invoke() call:

        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors);
        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors);
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null");

    I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls to it being made. Do the great and the good of Stack Overflow have any insight that might save my sanity?
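
    A diagnostic sketch that may help here: CmdletInvocationException wraps whatever the script actually threw, so walking the InnerException chain usually names the true culprit:

        // CmdletInvocationException lives in System.Management.Automation
        try
        {
            var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
        }
        catch (CmdletInvocationException ex)
        {
            // The real failure is usually wrapped one or more levels down.
            for (Exception inner = ex; inner != null; inner = inner.InnerException)
            {
                Console.WriteLine("{0}: {1}", inner.GetType().FullName, inner.Message);
            }
            throw;
        }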

    Read the article

  • Coded UI Test - How to change the exe it runs

    - by Vaccano
    I created a Coded UI Test from a Microsoft Test Manager recording. The exe it runs is the one the tester recorded against. I want this to be a test I run with my build. How do I change the exe that the Coded UI test uses so that it is the output of:

        1. the TFS build, when a TFS build is being run;
        2. the local build, when the test is being run on my machine?
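
    A sketch of one approach (ApplicationUnderTest.Launch is the Coded UI API for starting the application explicitly; the AUT_PATH variable name and fallback path are assumptions for illustration): launch the exe yourself in test setup and resolve its path from something each build flavor sets differently:

        using System;
        using Microsoft.VisualStudio.TestTools.UITesting;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestInitialize]
        public void StartAppUnderTest()
        {
            // The TFS build agent could set AUT_PATH to the drop/binaries folder;
            // local runs fall back to the project's own output directory.
            string exePath = Environment.GetEnvironmentVariable("AUT_PATH")
                             ?? @"..\..\..\MyApp\bin\Debug\MyApp.exe";
            ApplicationUnderTest.Launch(exePath);
        }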

    Read the article

  • How to specify different Debug/Release output directories in QMake .pro file

    - by esavard
    I have a Qt project and I would like to output compilation files outside the source tree. I currently have the following directory structure:

        /
        |_ build
        |_ mylib
           |_ include
           |_ src
           |_ resources

    Depending on the configuration (debug/release), I would like to output the resulting files inside the build directory, under build/debug or build/release. How can I do that using a .pro file?
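
    A sketch of the usual qmake idiom (the directory names follow the question; CONFIG(debug, debug|release) is the standard scope test for distinguishing the two modes in a single .pro file):

        CONFIG(debug, debug|release) {
            DESTDIR = ../build/debug
        } else {
            DESTDIR = ../build/release
        }

        # Keep intermediates out of the source tree as well
        OBJECTS_DIR = $$DESTDIR/.obj
        MOC_DIR     = $$DESTDIR/.moc
        RCC_DIR     = $$DESTDIR/.rcc
        UI_DIR      = $$DESTDIR/.ui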

    Read the article

  • j2ee deployment error

    - by Rajesh
    Hi, I'm new to J2EE. The problem is that while deploying a particular project I get a deployment error saying "module has not been deployed", but I'm able to deploy other projects. The error shown is as follows:

        In-place deployment at F:\onlineexam_1\build\web
        deploy?path=F:\onlineexam_1\build\web&name=onlineexam_1&force=true failed on GlassFish v3 Domain
        F:\onlineexam_1\nbproject\build-impl.xml:577: The module has not been deployed.
        BUILD FAILED (total time: 3 seconds)

    Please assist me in overcoming this problem. Thanks in advance, Raj

    Read the article

  • Installing jdk without sudo?

    - by Legend
    Currently, I have a machine on which I am working in Eclipse; it says that the JRE System Library version is sun-jdk-1.5.0.11, but on my active development machine it is java-6-sun-1.6.0.16. What is the difference between these two (besides the versioning, of course)? Is there any way I can make the first machine use the same java-6-sun-1.6.0.16 version without having sudo permissions on the machine?
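
    A sketch of a user-local install (the download filename is hypothetical; Sun's Linux JDKs of that era shipped as self-extracting .bin files that need no root): extract under $HOME, then point your shell and Eclipse at it:

        # Extract into your home directory - no sudo needed
        cd $HOME/opt
        sh jdk-6u16-linux-i586.bin

        # Use it for your shell sessions
        export JAVA_HOME=$HOME/opt/jdk1.6.0_16
        export PATH=$JAVA_HOME/bin:$PATH

        # In Eclipse: Window > Preferences > Java > Installed JREs > Add,
        # then point the new JRE at $HOME/opt/jdk1.6.0_16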

    Read the article

  • Help with SSH tunnel [closed]

    - by Andrew Johnson
    I am running a Django instance locally and doing some Facebook development, so I set up a port on a remote machine to forward to my local machine, so that Facebook can hit the web server and have the requests forwarded to my local machine. Unfortunately, I'm getting an error in my browser when I try to access the page: http://dev.thegreathive.com/ Any idea what I'm doing wrong? I think the problem is on my local machine, since if I kill the SSH tunnel, the error message changes.
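
    A sketch of the reverse tunnel being described (host, ports, and user are placeholders): note that the remote end of an -R forward only listens on loopback unless sshd is told otherwise, which is a common reason the public hostname fails:

        # On the local machine: expose local port 8000 as port 8080 on the remote host
        ssh -N -R 8080:localhost:8000 user@dev.thegreathive.com

        # On the remote host, /etc/ssh/sshd_config must allow non-loopback binds:
        #   GatewayPorts yes
        # (or front the tunnel with a reverse proxy listening on the public interface)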

    Read the article

  • GCC: Simple inheritance test fails

    - by knight666
    I'm building an open source 2D game engine called YoghurtGum. Right now I'm working on the Android port, using the NDK provided by Google. I was going mad because of the errors I was getting in my application, so I made a simple test program:

        class Base
        {
        public:
            Base() { }
            virtual ~Base() { }
        }; // class Base

        class Vehicle : virtual public Base
        {
        public:
            Vehicle() : Base() { }
            ~Vehicle() { }
        }; // class Vehicle

        class Car : public Vehicle
        {
        public:
            Car() : Base(), Vehicle() { }
            ~Car() { }
        }; // class Car

        int main(int a_Data, char** argv)
        {
            Car* stupid = new Car();
            return 0;
        }

    Seems easy enough, right? Here's how I compile it, which is the same way I compile the rest of my code (line breaks added for clarity):

        /home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/bin/arm-eabi-g++
            -g -std=c99 -Wall -Werror -O2 -w -shared -fshort-enums
            -I ../../YoghurtGum/src/GLES
            -I ../../YoghurtGum/src
            -I /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/include
            -c src/Inheritance.cpp -o intermediate/Inheritance.o

    This compiles fine. But then we get to the linker:

        /home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/bin/arm-eabi-gcc
            -lstdc++
            -Wl,
            --entry=main,
            -rpath-link=/system/lib,
            -rpath-link=/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib,
            -dynamic-linker=/system/bin/linker,
            -L/home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/lib/gcc/arm-eabi/4.4.0,
            -L/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib,
            -rpath=../../YoghurtGum/lib/GLES
            -nostdlib -lm -lc -lGLESv1_CM -z
            /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib/crtbegin_dynamic.o
            /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib/crtend_android.o
            intermediate/Inheritance.o
            ../../YoghurtGum/bin/YoghurtGum.a
            -o bin/Galaxians.android

    As you can probably tell, there's a lot of cruft in there that isn't really needed. That's because it doesn't work. It fails with the following errors:

        intermediate/Inheritance.o:(.rodata._ZTI3Car[typeinfo for Car]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info'
        intermediate/Inheritance.o:(.rodata._ZTI7Vehicle[typeinfo for Vehicle]+0x0): undefined reference to `vtable for __cxxabiv1::__vmi_class_type_info'
        intermediate/Inheritance.o:(.rodata._ZTI4Base[typeinfo for Base]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info'
        collect2: ld returned 1 exit status
        make: *** [bin/Galaxians.android] Fout 1

    These are the same errors I get from my actual application. If someone could explain where I went wrong in my test, or what option I forgot in my linker, I would be very, extremely grateful. Thanks in advance.

    UPDATE: When I make my destructors non-inlined, I get new and more exciting link errors:

        intermediate/Inheritance.o:(.rodata+0x78): undefined reference to `vtable for __cxxabiv1::__si_class_type_info'
        intermediate/Inheritance.o:(.rodata+0x90): undefined reference to `vtable for __cxxabiv1::__vmi_class_type_info'
        intermediate/Inheritance.o:(.rodata+0xb0): undefined reference to `vtable for __cxxabiv1::__class_type_info'
        collect2: ld returned 1 exit status
        make: *** [bin/Galaxians.android] Fout 1
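
    One thing worth trying, as a sketch (this is standard GNU ld behavior, not NDK-specific advice): with -nostdlib, the C++ runtime that defines those __cxxabiv1 typeinfo vtables must be named after the objects that reference it, because the linker resolves archives left to right:

        # Move -lstdc++ (or the NDK's libsupc++, an assumption here) after the objects:
        arm-eabi-gcc ... crtbegin_dynamic.o crtend_android.o \
            intermediate/Inheritance.o ../../YoghurtGum/bin/YoghurtGum.a \
            -lstdc++ -lm -lc -lGLESv1_CM \
            -o bin/Galaxians.android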

    Read the article

  • Java EE deployment error

    - by Rajesh
    While deploying a particular project I'm getting a deployment error saying "module has not been deployed", but I'm able to deploy other projects. The error shown is as follows:

        In-place deployment at F:\onlineexam_1\build\web
        deploy?path=F:\onlineexam_1\build\web&name=onlineexam_1&force=true failed on GlassFish v3 Domain
        F:\onlineexam_1\nbproject\build-impl.xml:577: The module has not been deployed.
        BUILD FAILED (total time: 3 seconds)

    Read the article

  • How do I gather TeamCity code coverage reports from multiple projects into one report?

    - by Loofer
    We use the built-in coverage tooling in TeamCity 6 (about to upgrade to 7.1). If we wish to see the code coverage (or other metrics) of a particular build, that is fine, as we can navigate to that build. But it would be great if we could pluck out a few interesting metrics from all/some of the current projects/build configurations and display them all together. For convenience I would expect the new display to be accessible from within TeamCity itself; however, if there are solutions that require a separate tool, we could look at them. Thanks
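
    A sketch of one way to roll your own combined view (the URL shape follows TeamCity's REST API, which is built in from 7.x and a plugin on 6.x; the build configuration id and the CodeCoverageS statistic key are assumptions to verify against your server's /statistics output):

        # Pull the latest successful build's statistics for one build configuration:
        curl -s --user user:pass \
          "http://teamcity/httpAuth/app/rest/builds/buildType:(id:MyProject_Main),status:SUCCESS/statistics"

        # Repeat per build configuration and collect the coverage properties
        # (e.g. CodeCoverageS for statements) into a single summary page.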

    Read the article

  • 64-bit ant.jar

    - by sonic
    I have installed 64-bit RHEL, and I have the following questions regarding ant.jar on this system:

        1. I was not able to find an ant.jar built with a 64-bit JVM on the Apache website. Do I have to build it from the source code if I intend to run the jar on a 64-bit JVM?
        2. Would it speed up the build process if I used an ant.jar built with a 64-bit JVM and ran it on a 64-bit JVM?

    Read the article

  • P2V converter for desktop MS Virtual PC

    - by Wavel
    Are there any tools available for converting a desktop Vista machine into a virtual machine to run under MS Virtual PC? I am buying a new workstation and would like to virtualize my old machine onto the new one. I know of the tools for Hyper-V, but I'll be running Win7 on the new machine, not Hyper-V Server.

    Read the article

  • remsh rsh error redirect problem

    - by soField
    I am using the following command on HP-UX:

        remsh opera -l myuser crontab -l > /opt1/exp_opera_crontab 2> /opt/a.log

    When I echo $? I get 0, because crontab -l executes successfully on the remote machine. But I don't have an /opt1 directory, so the export is never written to /opt1/exp_opera_crontab on my local machine, and I don't get any error about this when I run this remsh or rsh command. Is there any way to identify errors on both the remote and the local machine, and redirect them to my local machine?
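
    A sketch of a common workaround (remsh/rsh famously return the local status, not the remote command's exit code): have the remote side print its own exit status and check it locally, and test the local destination before redirecting:

        #!/bin/sh
        OUT=/opt1/exp_opera_crontab
        ERR=/opt/a.log

        # Fail early if the local destination can't be written
        [ -d "`dirname $OUT`" ] || { echo "missing `dirname $OUT`" >&2; exit 1; }

        # Run the remote command and capture its real exit status on stdout
        remsh opera -l myuser 'crontab -l; echo RC=$?' > "$OUT" 2> "$ERR"
        RC=`sed -n 's/^RC=//p' "$OUT"`
        grep -v '^RC=' "$OUT" > "$OUT.tmp" && mv "$OUT.tmp" "$OUT"
        [ "$RC" = "0" ] || { echo "remote crontab -l failed (RC=$RC)" >&2; exit 1; }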

    Read the article
