Search Results

Search found 7891 results on 316 pages for 'multi layer perceptron'.


  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options, aliases, or similar shortcuts like .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone, and I also have a minimal rsync tool on it. I'm getting tired of having to log in as root on the phone and symlink both tools to the standard places the Android OS looks for executables, since the ssh server is bare bones and ships a typical BusyBox-style multi-call binary for the basic Unix commands (which does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.
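
    The rsync client itself has no per-site config file, but one workaround is to pair a Host alias in ~/.ssh/config with a small shell wrapper that injects the fixed --rsync-path whenever the target is the phone. A sketch; the alias, address, port and paths below are assumptions for illustration:

        # ~/.ssh/config (hypothetical host entry)
        Host android
            HostName 192.168.1.50
            Port 2222
            User root

        # shell function: adds the fixed options only for the 'android' host
        rsync() {
            case "$*" in
                *android:*) command rsync --rsync-path=/data/local/bin/rsync \
                                --no-g --no-p --no-t "$@" ;;
                *)          command rsync "$@" ;;
            esac
        }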


  • Linksys SFE2000 Interface

    - by boburob
    I have a real problem with a layer 3 Linksys switch. First, every time it loses power, it seems to reset back to an older config. Not only this, but when it happens the switch loses the interface settings on one subnet. This would not be a problem, but I am completely unable to get to the web interface on the working side. It allows me to log on and then just displays a blank screen. I have tried this on IE6, IE7, IE8, IE9, Firefox 3.5, Opera, and Chrome, all with the same results, except for Opera, which loads half the interface but nothing I can really use. I really need to get onto this switch so I can sort out routing and VLAN-tagged ports, so if anyone has any ideas on either of these issues please let me know ASAP! Thanks! Also, due to its location and my lack of laptops with serial ports, I cannot PuTTY into it. UPDATE: Looked into this a bit more, and it looks like this model of switch does not save the current config to boot unless you make sure to save it yourself, which explains the first issue. The broken interface is more worrying, though!
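
    On the first issue: most small-business Linksys/Cisco-style switches keep the running configuration in RAM only, so it has to be written to startup explicitly or it is lost on a power-cycle. A sketch of the usual CLI command, assuming this model follows that convention and console access is eventually possible:

        copy running-config startup-config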


  • Network Sniffing and Hubs

    - by Chris_K
    This will likely seem naive to the experts... but it has been on my mind lately. For years I've been using ntop and a cheap 4 port hub to sniff client networks to determine who's doing what -- and how much. Great way to see what's going on when they call and say "Geeze, the network seems really slow today." No need to bring in a managed switch (or access the existing one) and no need to configure spanning or mirroring. I just drop in the hub inline where I want to measure. Lately I noticed it is just about impossible to buy a real honest-to-goodness hub anymore. While looking for a new one, I had someone tell me that I should be sure to get a full-duplex hub or I'd only be seeing half the traffic when I monitor. Really? I've been using a crusty old Netgear DS104 all this time. No clue if it is half or FD. Have I really been understating my measurements? I'm just not bright enough about the physical layer to really know... Side note: Just ordered a Dualcomm Ethernet Switch TAP as a hub replacement. Seems like a nifty gadget. Any notes or tips about it would be welcome in the comments :-)


  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.


  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 Apache servers with a master-master MySQL backend. In front of Apache we have Varnish as the cache layer. We have been running Apache mod_pagespeed for several weeks now, and for the most part it has been working great. The issue arises when we do fresh code updates from Git and any/all of the JS/CSS assets change. Basically the problem appears to be twofold. One, after the code push we generally take the opportunity to flush Varnish, restart Apache, and restart Varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is, upon doing this the file names that mod_pagespeed creates change, but the old files appear to still be cached client side for many people, leading to very unexpected results. However, if we do not restart Apache, the changes to the files may or may not appear client side due to the cached minified assets. The simple solution is to disable mod_pagespeed, however I would rather not do that as it has made a fairly large impact on performance. I feel as if there must be a better way to deal with the inconsistencies in cache between the client and server, to prevent people having to go to great lengths or perform a large number of page refreshes to see a working page. I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses: http://wellplayed.org http://wellplayed.org/tv Thanks in advance!
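
    One deploy flow worth trying, sketched here with an assumed cache path and Varnish 3 syntax: flush mod_pagespeed's own cache by touching its cache.flush file (no Apache restart needed), then invalidate only the HTML in Varnish so pages pick up the new .pagespeed. asset URLs while the old content-hashed assets simply age out on clients:

        # mod_pagespeed watches for this file; the path must match ModPagespeedFileCachePath
        touch /var/cache/mod_pagespeed/cache.flush

        # ban cached HTML only, leaving the hashed JS/CSS objects cached
        varnishadm "ban obj.http.Content-Type ~ text/html"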


  • What is the typical maximum number of database connections for Oracle running on Windows Server?

    - by Sake
    We are maintaining a database server that serves a large number of clients. Each client typically runs several client applications. The total number of connections to the database server (Oracle 9i) reaches 800 connections at peak load, and the Windows 2003 server is starting to run out of memory. We are now planning to move to 64-bit Windows in order to gain higher memory capacity. As a developer I suggest moving to a multi-tier architecture with connection pooling, which I believe is a natural solution to this problem. However, in order to support my idea, I want information on: what exactly is the typical number of connections allowed for an Oracle database? What is the problem when the number of connections is too high? Too much memory consumption? Too many sockets opened? Too much context switching between threads? To be a little bit more specific, how can Oracle Forms applications scale to thousands of users without facing this problem? Should Oracle RAC be applied in this case? I'm sure the answer to this question depends on quite a number of factors, like the exact spec of the hardware being used. I'm expecting a rough estimate or some experience from the real world.
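
    For reference, the hard ceiling is whatever the instance's processes/sessions init parameters allow, not a fixed Oracle constant; each dedicated-server connection also consumes server memory, which is typically what exhausts a 32-bit Windows box first. A quick sketch for checking the configured limits against current usage:

        SELECT name, value FROM v$parameter WHERE name IN ('processes', 'sessions');
        SELECT COUNT(*) AS current_sessions FROM v$session;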


  • Did Adobe Photoshop just kill my graphics card for good?

    - by user6004
    I was working with Adobe Photoshop, just some regular work, when I came to edit a PSD file and change the text of some layer, and all of a sudden the PC froze. No mouse, frozen screen, keyboard strokes getting me nothing, no Task Manager, nada. So I rebooted my PC, and then something quite terrifying appeared before my eyes. It was not the Checkdisk utility being launched that made me terrified (by the way, that reboot damaged the partition table of an external HDD that was connected to my PC at the time, but that's another story). It was the screen itself, covered in dots. After Checkdisk finished and Windows loaded, I noticed that the resolution was not right. Instead of 1440x900, which I had set, it was 1280x1024. When I went to change it back, I had no option to change back to my old resolution and had only 3 other generic resolutions, as if my video card (GeForce 8800 GTS, btw) was not recognized. And what do you know, in the Device Manager it appeared with an exclamation mark. Inside the hardware properties, it said this: Windows has stopped this device because it has reported problems. (Code 43) Uninstalling the drivers, then downloading and installing the newest drivers from NVIDIA, did not work. It always comes back to this. So, do you have any advice before I go out and buy a new graphics card? I thought this was the only option left, but maybe the experts at Super User can help me out. By the way, the dotted screen appears after every reboot, and I see the dots as soon as the ASUS motherboard screen shows up at boot. Thanks in advance.


  • Bind9 not doing anything with forwarded query responses?

    - by Rykaro
    I have a Bind DNS server that is the local production DNS server, and a Windows 2008 R2 domain controller which provides DNS for a lab environment with the domain xyz.lab. I've configured Bind to forward DNS requests for the domain xyz.lab to the Windows DNS server with this config: zone "xyz.lab" { type forward; forward only; forwarders { x.x.x.x; }; }; zone "x.x.x.in-addr.arpa" { type forward; forward only; forwarders { x.x.x.x; }; }; The Bind options are (the all_internal acl includes the subnets of both the production and lab networks, as well as the loopback of the Bind server): allow-query { all_internal; }; allow-recursion { all_internal; }; allow-transfer { none; }; notify no; minimal-responses yes; version "unknown"; Unfortunately, when I do an nslookup or dig on the Bind server for a host in the lab domain, the request times out. The logs on the Windows 2008 DNS server show it receiving the query and responding to it, and a network packet trace shows the query responses arriving at the Bind DNS server. The servers reside on the same switch, with a router providing connectivity between the layer 3 subnets (production and lab are on different subnets), and there is a round-trip time of between 3ms and 5ms on pings between the two servers, so I don't think latency is causing the timeout. In summary, a query response arrives back at the Bind server and the nslookup/dig still times out. Why does Bind not seem to be doing anything with the query responses when it receives them?
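
    One way to narrow this down is to separate a forwarding problem from a response-handling problem by querying both servers directly from the Bind host; a sketch (x.x.x.x is the Windows DNS server, the host name is assumed):

        # ask the Windows server directly; the packet trace suggests this works
        dig @x.x.x.x somehost.xyz.lab A

        # ask Bind itself, so only the forwarding path is exercised
        dig @127.0.0.1 somehost.xyz.lab A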


  • Vim: auto-comment in new line

    - by padde
    Vim automatically inserts a comment when I start a new line from a commented-out line, because I have set formatoptions=tcroql. For example (cursor is *): // this is a comment* and after hitting <Enter> (insert mode) or o (normal mode) I am left with: // this is a comment // * This feature is very handy when writing long multi-line comments, but often I just want a single-line comment. Now if I want to end the comment series I have several options: hit <Esc>S, or hit <BS> three times. Both of these cost three keystrokes, which taken together with the <Enter> means four keystrokes for a new line; I think that is too much. Ideally, I would like to just hit <Enter> a second time and be left with: // this is a comment * It is important that the solution also works with different indentation levels, i.e. int main(void) { // this is a comment* } hit <Enter> int main(void) { // this is a comment // * } hit <Enter> int main(void) { // this is a comment * } I think I have seen this feature in some text editor a few years ago, but I do not recall which one it was. Is anyone aware of a solution that will do this for me in Vim? Pointers in the right direction on how to roll my own solution are also very welcome.
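
    One possible starting point for rolling your own: an expression mapping that notices when the current line holds nothing but the comment leader and replays the <Esc>S trick for you, so the second <Enter> clears the comment. A rough sketch for //-style comments only; the pattern would need extending per filetype:

        " if the line is only indent plus '//', wipe it and reopen the
        " line with S (keeps the indent); otherwise insert a normal <CR>
        inoremap <expr> <CR> getline('.') =~# '^\s*//\s*$' ? "\<Esc>S" : "\<CR>"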


  • Hosting options for data-enabled web application

    - by Hertfordian
    I am independently developing an ASP.NET business application with a MySQL database. I currently have a Windows web hosting account which includes MySQL and MS SQL as installed supported options. I am not yet finally committed to using MySQL, and I want to keep my options open to evaluate MS SQL and possibly other options such as PostgreSQL later, when more of the business logic is in place; my data access layer will handle the database connectivity. The web hosting setup I have now is fine for development purposes, but if in future I want to use, say, PostgreSQL, with a level of usage of, say, 10,000 hits per day concentrated in business hours, I'm assuming I'll need a dedicated server. But in that case, should I just install PostgreSQL on the dedicated server, or is it best practice to have a separate database server, perhaps locked down so that it can only be accessed through the web server? And supposing it was only 2,000 hits a day: how would that change things? I'd appreciate it if anyone could point me in the direction of a useful guide to these sorts of issues. Naturally, if I start paying for separate servers, I would like to know exactly why I'm doing it and what the performance issues and thresholds are.


  • Does Mac OS X throttle the RATE of socket creation?

    - by pbhogan
    This may seem programming related, but this is an OS question. I'm writing a small high-performance daemon that takes thousands of connections per second. It's working fine on Linux (specifically Ubuntu 9.10 on EC2). On Mac OS X, if I throw a few thousand connections at it (roughly 16350) in a benchmark that simply opens a connection, does its thing and closes the connection, then the benchmark program hangs for several seconds waiting for a socket to become available before continuing (or timing out in the process). I used both Apache Bench and Siege (to make sure it wasn't the benchmark application). So why/how is Mac OS X limiting the RATE at which sockets can be used, and can I stop it from doing this? Or is there something else going on? I know there is a file descriptor limit, but I'm not hitting that. There is no error on accepting a socket; it simply hangs for a while after the first (roughly) 16000, waiting, I assume, for the OS to release a socket. This shouldn't happen, since all the prior sockets are closed at that point. They're supposed to become available at the rate they're closed, and do on Ubuntu, but there seems to be some kind of multi-second (5-10?) delay on Mac OS X. I tried tweaking ulimit every which way. Nada.
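
    A plausible culprit is TIME_WAIT exhaustion: a closed socket holds its ephemeral port for 2*MSL, and the default ephemeral range of 49152-65535 yields roughly 16K ports, suspiciously close to the ~16350 connections observed before the hang. A sketch of inspecting the relevant knobs and, for a benchmark box only, shrinking the MSL (the value chosen is an assumption):

        sysctl net.inet.tcp.msl                                  # default 15000 ms
        sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last
        sudo sysctl -w net.inet.tcp.msl=1000                     # shortens TIME_WAIT for testing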


  • VMware ESX Linux Guest Customization

    - by andyh_ky
    Hello, I am interested in deploying several RHEL 4 Update 8 virtual machines for the creation of a test environment. Here are the steps I am taking: In off hours, P2V/V2V the production machines and convert them to templates. Deploy the virtual machines with a customization specification that changes the hostname and IP address. I am interested in how these processes are done and whether there are any options for further customization. Are the machines brought onto the network when they are powered on, before they are reconfigured? Is there a potential IP address conflict? Is there an option to run additional scripts which reside on the guest as part of the reconfiguration? For example, restoring an Oracle database. This is an option with Windows guests and sysprep, but I have been unable to locate anything showing a RHEL equivalent. I am dealing with a multi-tier application. The main issue I am attempting to mitigate is that the application servers reference database servers by hostname and in tnsnames files. I am interested in scripting the reconfiguration of the application during the deployment, so that the app/db servers point to the test environment. I am OK with placing the 'cleanup' script on the source and executing it after the machine has been brought up. I am interested in the automation of the script's execution post clone/boot, as well as whether there could be an IP address conflict. (cross-posted to VMTN's ESX 4 community)
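
    On the in-guest side, one pattern that works without any VMware support is a self-removing first-boot hook staged on the template; a sketch with assumed paths and host names:

        #!/bin/sh
        # /usr/local/sbin/firstboot.sh, invoked once from /etc/rc.local
        # rewrite tnsnames to point at the test database (names are placeholders)
        sed -i 's/proddb.example.com/testdb.example.com/g' \
            /opt/oracle/network/admin/tnsnames.ora
        # remove the hook so it only ever runs on the first boot
        sed -i '/firstboot.sh/d' /etc/rc.local
        rm -f /usr/local/sbin/firstboot.sh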


  • How to decouple development server from Internet?

    - by intoxicated.roamer
    I am working in a small setup where there are 4 developers (which might grow to 6 or 8 in a couple of years). I want to set up an environment in which developers get internet access but cannot share any data from the company on the internet. I have thought of the following plan: Set up a centralized git server (Debian). The server will have internet access. A developer will only have a git account on that server, and won't have any other account on it. Do not give internet access to developers' individual machines (Windows XP/Windows 7). Run a virtual machine (any multi-user OS) on the centralized server (the same one on which git is hosted). Developers will have accounts on this virtual machine and can access the internet via it. Any data movement between this virtual machine and the underlying server, as well as any of the developers' machines, is prohibited. All developers require a USB port on their local machine so that they can burn their code into a microcontroller. This port will be made available only to the associated software that dumps the code into a microcontroller (MPLAB in the current case). All other software will be prohibited from accessing the port. As more developers get added, providing internet support for them will become difficult with this plan, as it will slow down the virtual machine running on the server. Can anyone suggest an alternative? Are there any obvious flaws in the above plan? Some key details of the server are as follows: 1) OS: Debian 2) RAM: 8GB 3) CPU: Intel Xeon E3-1220v2 4C/4T


  • Too many connections to SQL Server 2008

    - by Luis Forero
    I have an application in C# on Framework 4.0. Like many apps, this one connects to a database to get information. In my case this database is SQL Server 2008 Express, and it is on my machine. In my data layer I'm using Enterprise Library 5.0. When I publish my app on my local machine (classic app pool, Windows 7 Professional, IIS 7.5), the application works fine. I'm using this query to check the number of connections my application is creating while I'm testing it: SELECT db_name(dbid) as DatabaseName, count(dbid) as NoOfConnections, loginame as LoginName FROM sys.sysprocesses WHERE dbid > 0 AND db_name(dbid) = 'MyDataBase' GROUP BY dbid, loginame When I start testing, the number of connections starts growing, but at some point the maximum number of connections is 26. I think that's OK, because the app works. When I publish the app to TestMachine1 (XP Mode virtual machine, Windows XP Professional, IIS 5.1), it works fine; the behavior is the same, the number of connections to the database increases to 24 or 26, and after that it stays at that point no matter what I do in the application. The problem: when I publish to TestMachine2 (classic app pool, Windows Server 2008 R2, IIS 7.5) and start to test the application, the number of connections to the database starts to grow, but this time it grows very rapidly and doesn't stop at 24 or 26; the number of connections grows until it reaches 100, and the application stops working at that point. I have checked for any differences between the deployments, especially between Windows 7 Professional and Windows Server, and they seem to have the same parameters and configuration. Any clues why this could be happening? Any suggestions?
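
    Runaway growth that stops exactly at 100 is suggestive: 100 is ADO.NET's default Max Pool Size, and hitting it usually means connections are opened but never disposed, so they return to the pool only when the garbage collector runs; timing differences between machines change how fast the pool fills. It may be worth auditing the data layer for the pattern below, a minimal ADO.NET sketch with assumed names:

        // using System.Data.SqlClient;
        // Dispose() returns the connection to the pool immediately,
        // instead of leaving it to the garbage collector
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT 1", connection))
        {
            connection.Open();
            command.ExecuteScalar();
        }   // disposed here even if an exception is thrown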


  • Need a solution for network/servers

    - by rehanplus
    Dear all, please help me. I just joined a new hospital and want some help managing my network. There are some requirements. Current network: there is a DSL connection terminated on a Linux proxy, which is connected to D-Link layer 2 switches providing internet to more than 200 PCs (this will increase to 1500 in a couple of months). The D-Link switches are not configured yet. There is also one database server, a report server, and an application server. In the near future the application should be accessible to local users as well as remote users from the internet via our web server. We do have a sharing server, and all these servers, databases and PCs are on a single subnet. Required network: all I want is to secure my network from outside access, allowing in only specific users via the web application. They will submit their records for the patient card and appointment facility by means of the application, entering their records (in our database) but not touching our other network resources. Secondly, in-house users also need to access the same application and the internet, but they must have unique identities and rights (i.e. finance and lab department people have limited access to that application). Notes: Should I create VLANs or break up the subnets? Will a firewall solve my issues? Is a router needed in this type of scenario? Currently all access is restricted by the Linux proxy. Thanks.


  • [openVPN] Server and client on the same machine, and multiple VPN servers

    - by HiWorld
    Hello everyone, I'm stuck configuring OpenVPN to build a chained multi-VPN connection, like this: CLIENT - VPN1 - VPN2 - INTERNET. I already know how to set up a normal single VPN, but I want to use a chain of VPNs, so I'll explain what I have done and how I did it. On VPN1, I have one OpenVPN instance running as a server (which CLIENT connects to) and another running as a client connecting to VPN2, which runs as a server. Here comes the problem: when I connect VPN1 as a client of VPN2, I can no longer connect to VPN1 from CLIENT. My question is how to proceed with this... I also have a third instance working as a server, to use VPN1 without chaining. On VPN2, there is one OpenVPN instance as a server, where VPN1 connects and is then forwarded to the net. I'm using TUN interfaces in the configs, and iptables is set up this way: VPN1 - openvpn ip server1 : 192.168.6.0 / ip as client of VPN2: 192.168.5.70 iptables -t nat -A POSTROUTING -s 192.168.6.0 -j SNAT --to-source 192.168.5.70 VPN2 - openvpn ip server2 : 192.168.5.0 iptables -t nat -A POSTROUTING -s 192.168.5.0/24 -j SNAT --to-source EXTERNAL_IP_TO_INTERNET I hope someone can help me with this. Thanks in advance.
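
    A common failure mode in this kind of setup: when VPN1 brings up its client tunnel (especially with redirect-gateway), it replaces its own default route, so replies to CLIENT's incoming connections get pushed into the tunnel to VPN2 instead of back out the physical interface. A policy-routing sketch for VPN1; the address, gateway and interface are placeholders:

        # keep traffic sourced from VPN1's public address on the physical path
        ip rule add from <VPN1-public-ip> table 100
        ip route add default via <original-gateway> dev eth0 table 100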


  • MAC-Address based routing

    - by d-fens
    Here is what I want to do: I have a bunch of systems, some of which might have the same public IP, and I disable ARP. I have a firewall (either an IP-layer or bridging firewall) between these systems and the internet. Depending on the destination port of IP packets coming in to some of these public IPs, I want to set the destination Ethernet address. So for instance: System A has IP 8.8.8.8, MAC de:ad:be:ef:de:ad, ARP disabled. System B has IP 8.8.8.8, MAC 1f:1f:1f:1f:1f:1f, ARP disabled. The firewall has IP 8.8.8.1, ARP disabled on that interface. 1) Incoming packet to IP 8.8.8.8, TCP dest port 100. 2) Incoming packet to IP 8.8.8.8, TCP dest port 101. The firewall sets the dest MAC for 1) to de:ad:be:ef:de:ad and the dest MAC for 2) to 1f:1f:1f:1f:1f:1f. Second scenario: System A and System B establish outgoing TCP connections, and the firewall matches the dest MAC of the incoming IP packets (response packets) to the sender's MAC address. Is this possible in any way with Linux and iptables? Edit: I read that ebtables might "work" in a hackish way for this purpose, but I am not sure...
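
    Since the requirement is "rewrite the destination MAC based on IP and port", this is ebtables dnat territory rather than iptables. A sketch for the first scenario on a bridging firewall, assuming the ebtables nat table is available:

        ebtables -t nat -A PREROUTING -p IPv4 --ip-dst 8.8.8.8 \
            --ip-proto tcp --ip-dport 100 -j dnat --to-dst de:ad:be:ef:de:ad
        ebtables -t nat -A PREROUTING -p IPv4 --ip-dst 8.8.8.8 \
            --ip-proto tcp --ip-dport 101 -j dnat --to-dst 1f:1f:1f:1f:1f:1f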


  • PPTP VPN on Server 2008 Enterprise

    - by Mike K
    I asked this question on Server Fault and was told it was not allowed there, so I'm moving it here. I am running Windows Server 2008 Enterprise in my HOME network, inside VMware Workstation. I am running this on my home network to set up a PPTP VPN connection at home. I have correctly set up everything I needed to make it work, including opening all the ports: TCP 1723 and GRE (IP protocol 47). I am able to connect just fine, but when I connect I don't have internet unless I uncheck "use remote gateway". The thing is, I want to use the remote gateway to route all my traffic through that connection. Can someone tell me why this isn't working and how to get it to work? When I have remote gateway checked and I do an ipconfig, I don't get a remote gateway for the VPN connection; it's 0.0.0.0, whereas I'd assume that, if it connected properly, it should be 192.168.1.254 (my AT&T home router). Also, if I can't get the remote gateway issue to work and have to uncheck that box to get internet, does this mean my VPN session is no longer encrypted? I am fully aware that PPTP is the weakest VPN encryption out there, but still, having that extra layer of security when I'm on an unsecured Wi-Fi connection makes me feel a bit better. Thank you for all your help in advance. Someone told me I need to set up a gateway or router configured on the server. If that's the case, how do I go about telling the remote co


  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User, but I realized this belongs here. Long story short, I am setting up a network with 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux; we also do not have a router as of yet and are looking into making one of the servers act as our router/DHCP server etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network; it just won't have a direct connection to the outside world. I've worked on such setups before, but always long after they were set up. So I'm reaching out to see what everyone knows as far as how groups have handled the initial setup and maintenance of such a situation. What is the best way to get them all configured and up to date? What are the best ways to automate updates, network-wide installs, etc.? Given that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done Perl scripting, and some teammates have played with Puppet, so we aren't completely in the dark; I just wanted to avoid reinventing the wheel, since this is a common challenge.
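
    For the package side, one common pattern is to stage RPMs from the external drive into a local yum repository on the central server and point every node at it; a sketch with assumed host names and paths (assuming a yum-based release):

        # on the central server: index the staged RPMs and serve them over HTTP
        createrepo /srv/mirror/rhel/

        # on each node, a repo file such as /etc/yum.repos.d/local.repo:
        [local]
        name=Local offline mirror
        baseurl=http://central/mirror/rhel/
        enabled=1
        gpgcheck=0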


  • Integrating external computer into a domain - some recommendations please

    - by TomTom
    Given: a multi-location company. Every office has local routers that connect to a central VPN-capable router in a data center. All fine so far. We now need to move a computer off site into a hosting center across the globe, to get it closer to some supplier computers we work with. It will run limited logic, but latency is important, and our latency so far is too large. This computer will be in a data center and does not require incoming connections except for administrative purposes, although it needs outgoing connections. I have no real chance to put one of my VPN routers there, sadly; otherwise I would have no problem. Usage of RRAS is not recommended (we had various problems with it over time), though I could deal with it. The computer MUST integrate into the corporate structure via VPN, join the domain, and be fully "tracked" (monitored for performance). What is the best suggestion? So far it looks like my best bet would be to log in via RRAS and deal with whatever issues arise there, plus use the local firewall to limit incoming connections to this computer to what is needed (which comes down to allowing an emergency RDP connection). Anyone have a better idea?


  • 2008R2 Standard and Hyper-V and Ram Usage (Usable vs Available)

    - by Mark
    A new server was purchased for our development team to start utilizing the full feature set of TFS, namely Lab Management. Because of the need for Lab Management we bought a fairly beefy machine to handle this task and also act as a build machine. I have been tasked with setting up additional TFS features on this machine, starting with a build controller and eventually moving towards a full Lab Management setup using Hyper-V. My question: upon initially logging in, I noticed that Windows registers 64 GB of RAM but only 32 GB usable. I know this is a licensing limitation, since only Standard Edition is installed. Since Hyper-V is another layer that handles the virtualization of guest OSes, is Hyper-V able to access this memory? Or is Hyper-V memory usage also limited by 2008 R2 Standard? If Hyper-V can somehow access this memory, is this how it should be set up? Or should the host 2008 R2 Standard be upgraded to Enterprise so the host can utilize the full 64 GB? Before I go hog wild with TFS I wanted to ask some experts, so I don't need to reinstall the OS down the road to utilize the additional 32 GB. Thanks for any help or links you can share.


  • Windows user moving to Ubuntu 12.04. Where are the system tools, or equivalents?

    - by Big Endian
    I am a Windows user who has begun experimenting with Ubuntu. Ubuntu seems great, except for all the things it seems like I CAN'T do. How do I get to the advanced administration stuff, like the list of drivers, all of the installed software, and something equivalent to Windows' Device Manager? I always heard that Linux was supposed to be very raw, and that you had to have lots of computer experience to make it work. This seems just the opposite. Ubuntu seems very modern and user-friendly, better in some regards than any operating system I have seen. Unfortunately, I can't find any of the guts of this system beneath all of the user-friendly frosting... gunk... crap... stuff. I'm reminded more and more of an Apple computer (except Linux is more affordable :). So how do I peel back this layer and start using the computer? A solution other than installing GNOME 3 would be appreciated.
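
    Most of those guts are a terminal away; a few stock commands that map roughly onto the Windows tools mentioned (lshw may need installing via apt-get):

        lspci -k           # hardware, with the kernel driver each device uses
        lsmod              # loaded kernel modules (closest thing to a driver list)
        dpkg -l            # every installed package
        sudo lshw -short   # Device Manager-style hardware inventory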


  • Huge discrepancy in Inkscape file size

    - by Keyran
    When using Inkscape to create many pictures with common elements across them, I tend to copy the first SVG file I have created as many times as I need pictures, and then edit the copies. If I reuse files across projects, a file can end up being copied and modified tens to hundreds of times. I have recently realized that the latest copies have a size between 29 and 60 MB, slowing my computer down significantly. My pictures are very simple, nothing that would normally go over 1 MB in size. As an experiment, I copied the entire content of one of the latest files into a new Inkscape file. I am certain that I copied the content of the file entirely (I have only one layer and I used the "Select All" option). The new file has a size of 102.2 KB. This would indicate that about 30 MB of data per file is irrelevant to me. What could be the cause of this size difference? Is there a way to reduce the size of a file without having to copy the content into a new file? I am using Inkscape 0.48.4 on Debian Unstable. Thanks for any input you might be able to provide!
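
    A frequent cause is the <defs> section accumulating unused gradients, filters and markers with every copy-and-edit cycle. Inkscape can strip those from the command line; a sketch (the file name is a placeholder, and the flag modifies the file in place):

        inkscape --vacuum-defs drawing.svg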


  • Intermittent 403 errors when using allow to limit access to a URL with both an explicit IP and SetEnvIf

    - by rbieber
    We are running Apache 2.2.22 in a Solaris 10 environment. We have a specific URL that we want to limit access to by IP. We recently implemented a CDN and now have the added complexity that the IPs requests appear to come from are actually the CDN servers and not the ultimate end user. In case we need to back the CDN out, we want to handle both the case where the CDN is forwarding the request and the case where the ultimate client is sending the request directly. The CDN sends the end user's IP address in an HTTP header (for this scenario that header is called "User-IP"). Here is the configuration that we have put in place: SetEnvIf User-IP (\d+\.\d+\.\d+\.\d+) REAL_USER_IP=$1 SetEnvIf REAL_USER_IP "(10\.1\.2\.3|192\.168\..+)" access_allowed=1 <Location /uri/> Order deny,allow Allow from 10.1.2.3 192.168. allow from env=access_allowed Deny from all </Location> This seems to work fine for a time, but at some point the web server starts serving 403 errors to the end user, so for some reason it is restricting access. The odd thing is that a bounce of the web server seems to resolve the issue, but only for a time; then the behavior comes back. It might be worthwhile to note as well that this URL is delegated to a JBoss server via mod_jk. The denial of access is, however, confirmed to be at the Apache layer, and the issue only seems to happen after the server has been running for some time.
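
    Before restructuring the rules, it may help to log what Apache actually computed per request, so the intermittent 403s can be correlated with the header and env values; a sketch using standard mod_log_config escapes (the log name and location are assumptions):

        LogFormat "%h user-ip=%{User-IP}i real=%{REAL_USER_IP}e allowed=%{access_allowed}e \"%r\" %>s" envdebug
        CustomLog logs/access_env.log envdebug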

