Search Results

Search found 7165 results on 287 pages for 'regions dynamic regions'.

  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (don't need transitions, effects, etc.). Basically, pitivi would do what I want - except, I use 640x480 30 fps AVI's from a camera, and as soon as I put in over a couple of minutes of that kind of material, pitivi starts freezing on preview, and thus becomes unusable. So, I started looking for a command line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are so far candidates, but I cannot find examples of the use I have in mind. Basically, I'd imagine there are encoder and player tools (like ffmpeg vs ffplay; or mencoder vs mplayer) - such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution - a pseudocode would look like: videnctool -compose --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 --file=mypicture.png --duration=00:00:02:00 --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 --output=editedvid.avi ... or, it could have a "playlist" text file, like: vid1.avi 00:00:30:12 00:01:45:00 vid2.avi 00:05:00:00 00:07:12:25 mypicture.png - 00:00:02:00 vid3.avi 00:02:00:00 00:02:45:10 ... so it could be called with videnctool -compose --playlist=playlist.txt --output=editedvid.avi The idea here would be that all of the videos are in the same format - allowing the tool to avoid transcoding, and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") - or, failing that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files. The thing is, I can so far see that mencoder and ffmpeg can operate on individual files; e.g. cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting - so you can define multiple cut regions, but it's again attributed to a single file). This implies I would first have to cut the pieces from the individual files (each of which would demand its own temporary file on disk), and then join them in a final video file. I would then imagine that there is a corresponding player tool, which can read the same command line option format / playlist file as the encoding tool - except it will not generate an output file, but instead play the video; e.g. in pseudocode: vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13 ... and, given there's enough memory, it would generate a low-res video preview in RAM, and play it back in a window, while offering some limited interaction (like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, and include any file that may end up in that region in the playlist. Thus, the end result of all this would be: command line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering the final output... which I myself think would be nice. So, while I think that all of the above may be a bit of a stretch - does there exist anything that would approximate the workflow described above?
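
    A minimal sketch of how the cut-and-join part could be approximated with ffmpeg alone (not from the original post; file names and timestamps are placeholders, and it assumes an ffmpeg build that includes the concat demuxer). Stream copy avoids transcoding but snaps cut points to keyframes; a still PNG would first have to be rendered into a short clip with the same codec, resolution and frame rate before it could be spliced in the same way.

      # cut two regions without re-encoding (stream copy; cuts land on keyframes)
      ffmpeg -ss 00:00:30 -i vid1.avi -t 00:01:15 -c copy part1.avi
      ffmpeg -ss 00:05:00 -i vid2.avi -t 00:02:12 -c copy part2.avi

      # join the pieces with the concat demuxer, again without re-encoding
      printf "file '%s'\n" part1.avi part2.avi > playlist.txt
      ffmpeg -f concat -i playlist.txt -c copy editedvid.avi

      # preview the result (or any intermediate piece) without a final render
      ffplay editedvid.avi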

    Read the article

  • filter / directing URLs coming onto a network

    - by Jon
    Hi all, I am not sure if this is possible or not, but what I would like to do is as follows: I have one IP address (dynamic, using zoneedit.com to keep it up to date). I have one webserver running my main site, which is an Ubuntu machine running Apache. I also have a Windows 2008 server running another site. Just to confuse things, I also run part of my Apache site on the Windows server, currently using ProxyPassReverse to get the information from it. So it looks something like this: IP 1.2.3.4 maps to mydomain.com as well as myotherdomain.com. All requests that come into port 80 are forwarded to the Apache box, and I use VirtualHost settings to proxy the Windows sites where needed. So mydomain.com is an Apache site; mydomain.com/mywindowssection is the Apache server using ProxyPassReverse to get part of the site from the Windows server; myotherdomain.com uses Apache and ProxyPassReverse to get the whole site. What I would like to be able to do is forward all HTTP requests that come into my network to one machine that figures out who should be serving that content. So: mydomain.com would go to the Apache machine, and myotherdomain.com would go to the Windows machine. I am just in the process of setting up an Astaro gateway (never done this before, so it is taking a while to configure) as my firewall, DNS, DHCP etc., and I don't know if it can handle this. I have the capacity to run a VM on the network if a separate box would be needed for this process as well. Thanks for any and all feedback. Jon
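
    For what it's worth, a hedged sketch of the usual "one public IP, several backends" pattern: keep a single front end answering port 80 and route on the Host header, which is essentially a tidier version of the ProxyPassReverse setup already described. The domain name and backend IP below are placeholders, and mod_proxy/mod_proxy_http must be enabled on the Apache box.

      # name-based virtual host on the Apache front end:
      # everything for myotherdomain.com is handed off to the Windows 2008 server
      <VirtualHost *:80>
          ServerName myotherdomain.com
          ProxyPreserveHost On
          ProxyPass        / http://192.168.0.20/
          ProxyPassReverse / http://192.168.0.20/
      </VirtualHost>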

    Read the article

  • how many sites IIS 6 can handle

    - by Sarah Nasir
    Is there a limit for creating sites in IIS? I have searched, and some forum discussions say there is no limit. Someone mentioned that he has created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally I feel that, whatever the limit of IIS, the resources will run out well before that limit is reached. How do big sites like Blogger and WordPress handle a huge number of sites on their servers? Questions: 1) Is there an upper limit for IIS 6.0? If yes, what is it? 2) What would be a good number of requests for IIS to serve on a decent server? (I am not talking about dynamic requests on the server, or logs.) 3) Is there a way I can do a test run on my cloud to test the capability of my server? What factors should I keep in view: DB requests, page size, disk reads/writes, etc.? A response will be highly appreciated.
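
    On question 3, one rough-and-ready way to probe capacity is ApacheBench (ab), which ships with Apache but is happy to hammer an IIS box too; the URL and numbers below are placeholders, and the counters to watch are only a suggested starting point.

      # 10,000 requests, 100 concurrent, against a representative page
      ab -n 10000 -c 100 http://your-test-server/somepage.aspx

      # meanwhile, sample a few PerfMon counters on the server every 5 seconds
      typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 5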

    Read the article

  • SSH through standard Belkin router to Asus Tomato router

    - by Luke
    I've set up SSH on the Tomato firmware on an Asus N10, via port 22 with key authentication. I've tested the keys by connecting with putty directly to the router when connected to its network. That works OK. But this router is behind a Belkin (F5D7632-4) router which also acts as modem and when I try to connect through with the (dynamic) public IP it times out. I'm guessing it's something to do with the NAT? My putty settings are taken from various online tutorials, but it's set up for port 22, with the correct key as mentioned. The Belkin router has port forwarding to the Asus (192.168.2.3) for port 22 TCP and UDP set up. It's now tough to see what to do in order to connect to the Asus router with an external IP - if it's even possible. Ideally I would have liked to have only needed to use the Asus router, but as it doesn't act as a modem, I need to connect it to the Belkin to use Tomato's features. Perhaps there's a solution here too? Network: Internet -> Belkin modem/router -> Asus router (Tomato SSH) -> Devices
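
    A hedged way to narrow this down (addresses are placeholders): check from a host that is genuinely outside the Belkin's network whether port 22 is reachable at all, and let the SSH client say where it stalls - many consumer routers refuse "hairpin" connections, so testing the public IP from inside the LAN proves nothing. It may also be worth confirming that Tomato's Admin Access settings allow SSH on what it considers its WAN side, since the forwarded traffic arrives on the Asus's WAN interface.

      # from a machine out on the internet, not behind the Belkin:
      nmap -Pn -p 22 203.0.113.10        # open, closed, or filtered?
      ssh -v -p 22 root@203.0.113.10     # -v shows exactly where the attempt hangs

      # on the Tomato router itself, confirm the SSH daemon is listening at all
      netstat -ltn | grep ':22'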

    Read the article

  • How to get nginx to pass HTTP_AUTHORIZATION header to Apache

    - by codeinthehole
    Am using Nginx as a reverse proxy to an Apache server that uses HTTP Auth. For some reason, I can't get the HTTP_AUTHORIZATION header through to Apache, it seems to get filtered out by Nginx. Hence, no requests can authenticate. Note that the Basic auth is dynamic so I don't want to hard-code it in my nginx config. My nginx config is: server { listen 80; server_name example.co.uk ; access_log /var/log/nginx/access.cdk-dev.tangentlabs.co.uk.log; gzip on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 120; location / { proxy_pass http://localhost:81/; } location ~* \.(jpg|png|gif|jpeg|js|css|mp3|wav|swf|mov|doc|xls|ppt|docx|pptx|xlsx|swf)$ { if (!-f $request_filename) { break; proxy_pass http://localhost:81; } root /var/www/example; } } Anyone know why this is happening? Update - turns out the problem was something I had overlooked in my original question: mod_wsgi. The site in question here is a Django site, and it turns out that Apache does get the auth variables passed through, however mod_wsgi filters them out. The resolution is to use: WSGIPassAuthorization On See http://www.arnebrodowski.de/blog/508-Django,-mod_wsgi-and-HTTP-Authentication.html for more details
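
    For anyone hitting the same symptom, the fix the poster found comes down to a single mod_wsgi directive on the Apache side; a sketch of how it might sit in the vhost that serves the Django app (the port matches the proxied backend above, the path is a placeholder):

      <VirtualHost *:81>
          ServerName example.co.uk
          # placeholder path to the Django WSGI entry point
          WSGIScriptAlias / /var/www/example/django.wsgi
          # mod_wsgi strips the Authorization header from the WSGI environment
          # by default; this hands it through to the Django application
          WSGIPassAuthorization On
      </VirtualHost>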

    Read the article

  • Jenkins CI - Cannot allocate memory

    - by Programmieraffe
    I tested jenkins-ci successfully on an Ubuntu 10.04 machine (under VMware Fusion) on my local computer. Now I want to install and use it on my virtual server at hosteurope. The basic installation was no problem, but now I have problems with my build project. After pulling a Mercurial update from a repository, Ant is invoked and throws the following error in my build project: "Buildfile: /var/lib/jenkins/workspace/concrete5-seed-clean/build.xml [property] java.io.IOException: Cannot run program "/usr/bin/env": java.io.IOException: error=12, Cannot allocate memory" There is a known problem with heap size on virtual servers at hosteurope (http://faq.hosteurope.de/index.php?cpid=13918), so I tried to set the heap size manually: # for ant export ANT_OPTS="-Xms512m -Xmx512m" # jenkins # edited /etc/default/jenkins, added line JAVA_ARGS="-Xms512m -Xmx512m" # restarted jenkins via /etc/init.d/jenkins restart After setting this for Ant, the command "ant -diagnostics" runs through and does not cause an error, but the error still occurs when I try to build the project. Server details: - http://www.hosteurope.de/produkt/Virtual-Server-Linux-L Ubuntu 10.04 LTS RAM: 1GB / Dynamic 2GB My questions: - Is 1GB enough for Jenkins, or do I have to upgrade the server? - Is this error caused by Ant or Jenkins? Update: I got it running with the Ant options -Xmx128m -Xms128m, but sometimes the error occurs again (this freaks me out, because I cannot reproduce it reliably :/ ). Help much appreciated! Cheers, Matthias
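
    For context (a hedged reading of the error, not from the original post): error=12 is ENOMEM from fork(). When Ant shells out to /usr/bin/env, the JVM has to fork, and at that moment the kernel wants to be able to commit roughly another copy of the JVM's address space; on a 1 GB virtual server with no swap that can fail even though the heap itself is fine, which would also explain why shrinking -Xmx helps. A few things that could be checked or tried, if the virtualisation used by the host allows swap at all:

      # how much memory is really free, and how strictly the kernel accounts for it
      free -m
      cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 2 = strict accounting

      # if the VPS permits it, a modest swap file gives fork() room to breathe
      sudo dd if=/dev/zero of=/swapfile bs=1M count=512
      sudo mkswap /swapfile && sudo swapon /swapfile
      echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab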

    Read the article

  • Can I format a Veritas cluster shared volume from windows?

    - by spaghettidba
    We have a Microsoft Failover Cluster with dynamic disks managed by Veritas Storage Foundation. Today the sysadmins added a new disk for SQL Server but the cluster size on the volume was wrong, so I issued a quick format to change it. The disk volume failed, the SQL Server group failed as well and the cluster became unresponsive. After some minutes I managed to fail over to a passive node. The SAN admins say it's my fault because I shouldn't have formatted the disk from the Windows format applet, but I should have used Veritas Enterprise Administrator instead. Can a format operation bring offline a whole cluster group this way? Relevant error messages: From the eventlog: The cluster resource host subsystem (RHS) stopped unexpectedly. An attempt will be made to restart it. This is usually due to a problem in a resource DLL. Please determine which resource DLL is causing the issue and report the problem to the resource vendor. From the cluster.log ERR [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Control(STORAGE_GET_DISK_INFO_EX) to resource 'NameOfTheDiskGroup' timed out.' Veritas Documentation: Excerpt from Symantec's documentation: Note: Before manually creating the resource, you must format the cluster-shared volume with NTFS using the VEA GUI and mount it on the node where you are trying to create the resource. Does this mean the disk cannot be formatted from Windows? I don't read it that way. For the record, I formatted many disks using the Windows applet in the past and nothing bad happened.

    Read the article

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: an ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15ms each. Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5s; another after 1s; then at 1.5s and 2s. Logging on the server side suggests the individual responses are still only taking 15ms. So it appears IIS is queueing things up into 500ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5s, two at 1s etc, for example – and it’s always at 500ms intervals, not anything else.) It’s also repeatable cross-browser, and it’s not repeatable with other forms of content. I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug, so is it: i) IIS on desktop OSs deliberately does it, to make you use server OSs in production? ii) There is some magical setting that has eluded me for as long as I’ve known IIS? iii) Something peculiar to MVC or SQL Server 2008? or something else?

    Read the article

  • MD RAID 1 with external bitmap doesn't fully resync

    - by user64744
    I have an interesting configuration: dual boot system with a RAID 1 that needs to be visible in both Windows and Linux. The Windows install is Win 7 Enterprise, and the Linux install is Kubuntu 10.04. To get the RAID to work, I set it up using Windows's "Dynamic Disks" RAID 1, and brought it up in Linux using MD with no persistent superblock, and a write-intent bitmap on another partition. (Without this bitmap, MD had no way of knowing that the array was in sync, and would do a complete resync every time the array started.) The array is assembled like so: mdadm --build /dev/md1 -l 1 -n 2 -b /var/local/md1.bitmap /dev/sdb2 /dev/sdc2 I expected that the first time I ran this command, it would resync the array, write out a bitmap with no dirty chunks, and all would be good. This wasn't the case: after completing the resync, the bitmap was mostly clean, but about 5% dirty blocks remained, as revealed by mdadm -X /var/local/md1.bitmap I didn't mount the filesystem on /dev/md1 or touch it in any other way. I then found that stopping and restarting the array: mdadm --stop /dev/md1 mdadm --build /dev/md1 -l 1 -n 2 -b /var/local/md1.bitmap /dev/sdb2 /dev/sdc2 did indeed read in the bitmap, with an ensuing resync that went quickly because most of the blocks were marked clean. The confusing part is that this resync further reduced the number of dirty blocks, but still did not remove all of them. By repeatedly stopping and restarting I could slowly bring the dirty block count down to around 0.6%, where it seemed to level out. Any ideas what could be causing this? It smells to me of a race condition somewhere that leads to blocks either being skipped over during synchronization or not properly cleared from the bitmap, but I really have no evidence to prove this. It doesn't look like hardware issues since both drives are new and have zero read errors and reallocated sectors reported by smartctl -a.
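
    One way to gather more evidence (a hedged suggestion; the sysfs paths are the ones md normally exposes for an assembled array): ask the md layer to verify the mirror independently of the bitmap and report how many sectors actually differ, then dump the bitmap again afterwards.

      # read-and-compare pass over the whole mirror
      echo check | sudo tee /sys/block/md1/md/sync_action

      # progress, then the count of sectors that differed between the two halves
      cat /proc/mdstat
      cat /sys/block/md1/md/mismatch_cnt

      # and the write-intent bitmap again, to see whether dirty chunks remain
      mdadm -X /var/local/md1.bitmap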

    Read the article

  • asterisk extensions.conf & sip.conf

    - by Josh
    I'm trying to get my Dialplan to work. When I call, the only thing I get is a dial tone prompting for an extension; no Background(thanks-calling) is played. When extension 123 is dialed, a busy signal is triggered and the Asterisk CLI freezes. Any help will be appreciated. Conf files below. ; PSTN on sip.conf [pstn] type=friend host=dynamic context=pstn username=pstn secret=password nat=yes canreinvite=no dtmfmode=rfc2833 qualify=yes insecure=port,invite disallow=all allow=ulaw ; PSTN on extensions.conf [pstn] exten => s,1,Answer exten => s,2,Wait,2 exten => s,4,DigitTimeout,5 exten => s,5,ResponseTimeout,10 exten => s,6,Background(thanks-calling) exten => 0,1,Goto(incoming,123,1) ; (Member Services) [incoming] exten => 123,1,NoOP(${CALLERID}) ; show the caller ID info in the console exten => 123,n,Ringing() exten => 123,n,Answer() exten => 123,n,Playback(silence/1) exten => 123,n,Playback(connecting1) exten => 123,n,Wait(3) exten => 123,n,Dial(SIP/line1,60) exten => 123,n,Congestion
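
    One thing worth checking (an observation about the pasted dialplan, not from an answer): in the [pstn] context the s extension jumps from priority 2 to priority 4, and Asterisk stops executing an extension as soon as the next numbered priority is missing, which by itself would explain why Background(thanks-calling) never plays. A hedged sketch of the same context with consecutive priorities, using the n shorthand and the TIMEOUT() function in place of the older DigitTimeout/ResponseTimeout applications:

      [pstn]
      exten => s,1,Answer()
      exten => s,n,Wait(2)
      exten => s,n,Set(TIMEOUT(digit)=5)
      exten => s,n,Set(TIMEOUT(response)=10)
      exten => s,n,Background(thanks-calling)
      exten => s,n,WaitExten(10)

      exten => 0,1,Goto(incoming,123,1)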

    Read the article

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by blade
    Hi, I have deployed a bunch of Windows Server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (along with the DC). If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server. I provide the hostname of the desired target server and this fails, but then if I provide the IP, this works. How can I pass the hostname and have this resolve to an IP? Is there anything I need to look for in the DNS server? It has records of the hostnames (in forward lookup I think), but reverse is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP? Also, I don't know what the subnet mask is, because this is not in my control, so the machines may not be in the same subnet - can this be a cause of the problem? Where is the problem? Thanks
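
    For reference, forward lookup is name-to-IP (the direction that is failing here) and reverse lookup is IP-to-name, so an empty reverse zone would not cause this. A hedged set of checks, run from the Windows command prompt on the machine that cannot resolve (names and addresses are placeholders):

      rem ask the domain controller's DNS service directly, then try the FQDN
      nslookup targetserver 10.0.0.10
      nslookup targetserver.corp.example.com

      rem is the DC actually listed as this client's DNS server, with the right suffix?
      ipconfig /all

      rem on the target server, if its A record is missing from the forward zone
      ipconfig /registerdns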

    Read the article

  • Why my internal hard disk can be ejected?

    - by Bear Bear
    I have 6 hard disks in my computer and one DVD. The disk that I connect to the 6th SATA port is a regular disk. But my Windows 7 64 bit shows it as a removable drive. I can eject my hard disk! Just like a pen drive. If I connect another disk to the 6th port, I can eject that disk too. So it has nothing to do with the physical disk, it must be something related to Windows or BIOS I don't understand why Windows 7 is seeing a normal hard disk as removable. In Computer Management - Disk Management, the disk looks identical to the others - there is nothing there to suggest the drive is different from the others. But in the tray I have the icon to eject it. The motherboard model is ASUS F2A85 V PRO FM2. All the disks are formatted normally, no Dynamic Disk, no RAID, nothing special. How can I tell Windows 7 to treat the disk exactly like the others, so it can't be ejected?

    Read the article

  • Website hosting from home - IIS6

    - by Paul
    I want to host a few websites from home, primarily because I'm using some beta Microsoft software (.NET 4 and EF) and don't want to install it on my production server, which is hosted at eukhost.com. Basically, I'm completely new to this sort of thing. So far, here is what I've done: Registered the domain name at namecheap.com (let's call it mydomain.com) Gone to "Nameserver Registration" in the panel and entered my IP address for the NS1 and NS2 records (let's say the IP is 0.0.0.0). Gone to "Domain Name Server Setup" and entered ns1.mydomain.com & ns2.mydomain.com Forwarded requests from port 80 to my internal IP (let's say 192.168.1.254) Created the website in IIS (I'm just testing with a single website so far, so have not created any host header values) Now, if I type in the IP address (http://0.0.0.0) I get the site as expected. However, if I enter http://www.mydomain.com I get an error saying "DNS Error - Cannot find server". I'm aware that there is a service from DynDNS that will automatically update the record if I have a dynamic address; however, my IP has remained static since the ISP installed my connection (back in October), so I don't need this. Is there any way that I can get the DNS to work just by configuring IIS or something in Windows? I don't really want to have to pay for any 3rd party service. Thanks,
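
    A hedged note on where this usually breaks: registering ns1/ns2.mydomain.com against the home IP only helps if something at that IP actually answers DNS queries on port 53, and IIS does not do that; without a DNS server there, resolution dies before the web server is ever asked. The simpler route is often to leave the zone on the registrar's own nameservers and just point an A record for www at the home IP. A few lookups (run from any machine with dig; 0.0.0.0 stands in for the real public IP, as in the question) show which step fails:

      # who does the registry believe is authoritative for the domain?
      dig NS mydomain.com +short

      # does anything answer DNS at the home IP at all?
      dig @0.0.0.0 www.mydomain.com A +short

      # and what the world's resolvers currently return for the site name
      dig www.mydomain.com A +short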

    Read the article

  • What's the piece of hardware listening on Facebook's or Wikipedia's IP address?

    - by Igor Ostrovsky
    I am trying to understand how massive sites like Facebook or Wikipedia work, for my intellectual curiosity. I read about various techniques for building scalable sites, but I am still puzzled about one particular detail. The part that confuses me is that ultimately, the DNS will map the entire domain to a single IP address, or a handful of IP addresses in the case of round-robin DNS. For example, wikipedia.org has only one type-A DNS record. So, people from all over the world visiting Wikipedia have to send a request to the one IP address specified in DNS. What is the piece of hardware that listens on the IP address for a massive site, and how can it possibly handle all the load coming from the requests for users all over the world? Edit 1: Thanks for all the responses! Anycast seems like a feasible answer... Does anyone know of a way to check whether a particular IP address is anycast-routed, so that I could verify that this really is the trick used in practice by large sites? Edit 2: After more reading on the topic, it appears that anycast is not typically used for dynamic web content. Anycast is usually used for UDP (e.g., DNS lookups), or sometimes for static content. One interesting thing to note is that Facebook uses profile.ak.fbcdn.net to host static content like style sheets and javascript libraries. Each time I ping this name, I get a response from a different IP address. However, I can't tell whether this is anycast in action, or a completely different technique. Back to my original question: as far as I can tell, even a large site will have a single expensive piece of load-balancing hardware listening on its handful of public IP addresses.
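
    On the follow-up about detecting anycast: a hedged, low-tech test is to probe the same address from hosts in different parts of the world (or from public looking-glass servers) and compare the forward paths and round-trip times, which diverge sharply when the address is announced from multiple locations. DNS-based distribution, by contrast, shows up as short TTLs and rotating answers. The target address below is a documentation placeholder, not a real endpoint.

      # rotating answers with a short TTL suggest DNS/CDN load distribution
      dig profile.ak.fbcdn.net A

      # run these from two hosts on different continents and compare the paths
      traceroute -n 203.0.113.10
      mtr --report --report-cycles 10 203.0.113.10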

    Read the article

  • Disable CTRL+mouse wheel zooming in Chrome?

    - by Peter Nore
    I'm a normal-sighted person and I would like to view pages at 100% all the time. I use keyboard shortcuts that involve CTRL a lot, so about twenty times a day I accidentally hit CTRL at the same time that I'm scrolling, which results in the page being reflowed and repainted. This is annoying because it can take up to 30 seconds to fix the issue, depending on how complex the site layout is. On sites with dynamic layout such as Google Docs the problem is more serious; accidentally hitting CTRL+mouse wheel corrupts the display and forces me to refresh the page entirely, sometimes causing me to lose information in the process. I would like to either decouple CTRL+mouse wheel from zoom, or disable zoom functionality altogether. This is possible on Firefox by using about:config; is there a similar way to edit detailed settings in Chrome? Would I have access to the detailed settings if I used Chromium instead of Chrome? I'll probably jump ship back to Firefox if I can't solve this problem. There is a Super User question that asks basically the same thing I'm asking, but for Firefox and Internet Explorer exclusively. Other people on the Chrome forum have had related issues, but none have the same problem: one post ("I would really like it if I could deactivate the auto zoom in/out.") turned out to be about "something with laptops and Windows 7", not the feature built into Chrome. Other people have had PDF-specific issues, which don't concern me. I've also tried searching for extensions that allow you to disable the scroll; I had hoped that "Zoom Lock" would have the ability to lock the zoom at 100% and prevent CTRL+scroll wheel from distorting the display, but it doesn't work for my use case. Google Chrome version 9.0.597.84 (Official Build 72991), Operating System: Ubuntu 10.10

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). is fs dead?

    - by ip
    My company has a server with one big partition with a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually: [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)! [329862.817846] EXT3-fs: group descriptors corrupted! I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested: - e2retrieve - testdisk - photorec - dd_rescue/dd_rhelp - ddrescue - fsck.ext2 - e2salvage without any success. dumpe2fs 1.41.3 (12-Oct-2008) Filesystem volume name: /dev/sda3 Last mounted on: <not available> Filesystem UUID: dd51610b-6de0-4392-a6f3-67160dbc0343 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal filetype sparse_super Default mount options: (none) Filesystem state: not clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 9502720 Block count: 18987570 Reserved block count: 949378 Free blocks: 11555345 Free inodes: 11858398 First block: 0 Block size: 4096 Fragment size: 4096 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 16384 Inode blocks per group: 512 Last mount time: Wed Mar 24 09:31:03 2010 Last write time: Mon Apr 12 11:46:32 2010 Mount count: 10 Maximum mount count: 30 Last checked: Thu Jan 1 01:00:00 1970 Check interval: 0 (<none>) Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 128 Journal inode: 8 Journal backup: inode blocks dumpe2fs: A block group is missing an inode table while reading journal inode e2fsck 1.41.3 (12-Oct-2008) fsck.ext3: Group descriptors look bad... trying backup blocks... fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3 I also tried the backup superblocks, with the same error. Are there any other tools I should try before considering this disk definitely unrecoverable? Many thanks, ip

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on departments and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal because it does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • Python module: Trouble Installing Bitarray 0.8.0 on Mac OSX 10.7.4

    - by Gabriele
    I'm new here! I have trouble installing bitarray (vers 0.8.0) on my Mac OSX 10.7.4. Thanks! ('gcc' does not seem to be the problem) Last login: Sun Sep 9 22:24:25 on ttys000 host-001:~ gabriele$ gcc -version i686-apple-darwin11-llvm-gcc-4.2: no input files host-001:~ gabriele$ Last login: Sun Sep 9 22:18:41 on ttys000 host-001:~ gabriele$ cd /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/bitarray-0.8.0/ host-001:bitarray-0.8.0 gabriele$ python2.7 setup.py installrunning install running bdist_egg running egg_info creating bitarray.egg-info writing bitarray.egg-info/PKG-INFO writing top-level names to bitarray.egg-info/top_level.txt writing dependency_links to bitarray.egg-info/dependency_links.txt writing manifest file 'bitarray.egg-info/SOURCES.txt' reading manifest file 'bitarray.egg-info/SOURCES.txt' writing manifest file 'bitarray.egg-info/SOURCES.txt' installing library code to build/bdist.macosx-10.6-intel/egg running install_lib running build_py creating build creating build/lib.macosx-10.6-intel-2.7 creating build/lib.macosx-10.6-intel-2.7/bitarray copying bitarray/__init__.py -> build/lib.macosx-10.6-intel-2.7/bitarray copying bitarray/test_bitarray.py -> build/lib.macosx-10.6-intel-2.7/bitarray running build_ext building 'bitarray._bitarray' extension creating build/temp.macosx-10.6-intel-2.7 creating build/temp.macosx-10.6-intel-2.7/bitarray gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c bitarray/_bitarray.c -o build/temp.macosx-10.6-intel-2.7/bitarray/_bitarray.o unable to execute gcc-4.2: No such file or directory error: command 'gcc-4.2' failed with exit status 1 host-001:bitarray-0.8.0 gabriele$
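
    A hedged reading of the failure: distutils is invoking the compiler Python itself was built with (gcc-4.2), which newer Xcode installs no longer provide. Two workarounds that usually get past this - pointing the build at whichever gcc is present, or recreating the old name as a symlink:

      # option 1: override the compiler for this build only
      CC=gcc python2.7 setup.py install

      # option 2: give the expected name back (points at the current gcc)
      sudo ln -s "$(which gcc)" /usr/bin/gcc-4.2
      python2.7 setup.py install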

    Read the article

  • How do you use VIM to edit tabular data (tables)? Specifically, BIND (named) DNS db files.

    - by Richard Bronosky
    I'm usually a purist when it comes to vimming. I don't like remapping keys, or learning to rely on a bunch of plugins. I like to feel just as powerful on foreign boxen as I do on my own dev box. I do, however, believe in syntax files. Even though the solution may not be a syntax file (bindzone.vim is what I use), I want it bad enough to do whatever. I regularly view or edit tab (or comma, but that would be a bonus) delimited data. I hate having to set my tabstop to some ridiculous number in order to have everything line up. Example: The BIND zone files are ~40+,6,2,5,15+. So, even though I could view them on a single screen, if I set ts=40, I cannot. I have been searching for a "dynamic tab size" solution for years, but no luck. I hate that my only good way of editing or even visualizing tabular data is to scp it to my work station and open it in Open Office. There has to be a better way.
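
    Not the dynamic tabstop being asked for, but a hedged workaround that stays inside stock tools: pad every field to its own column width with column -t for viewing, and drop the padding again before saving. The zone file name is a placeholder.

      # view a tab/space-delimited zone file with each field padded to its column
      column -t db.example.com | less -S

      # inside vim, the same filter can be applied to the whole buffer with
      #     :%!column -t
      # and undone with a single u before writing the file back out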

    Read the article

  • Problem routing between directly connected Subnets w/ ASA-5510

    - by Zephyr Pellerin
    This is an issue I've been struggling with for quite some time, with a seemingly simple answer (Aren't all IT problems?). And that is the problem of passing traffic between two directly connected subnets with an ASA. While I'm aware that best practice is to have Internet - Firewall - Router, in many cases this isn't possible. For example, I have an ASA with two interfaces, named OutsideNetwork (10.19.200.3/24) and InternalNetwork (10.19.4.254/24). You'd expect Outside to be able to get to, say, 10.19.4.1, or at LEAST 10.19.4.254, but pinging the interface gives only bad news. Result of the command: "ping OutsideNetwork 10.19.4.254" Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.19.4.254, timeout is 2 seconds: ????? Success rate is 0 percent (0/5) Naturally, you'd assume that you could add a static route, to no avail. [ERROR] route Outsidenetwork 10.19.4.0 255.255.255.0 10.19.4.254 1 Cannot add route, connected route exists At this point, you might wonder if it's a NAT or access-list problem. access-list Outsidenetwork_access_in extended permit ip any any access-list Internalnetwork_access_in extended permit ip any any There is no dynamic NAT (or static NAT for that matter), and un-NATted traffic is permitted. When I try pinging the above address (10.19.4.254 from Outsidenetwork), I get this error message from level 0 logging (debugging). Routing failed to locate next hop for icmp from NP Identity Ifc:10.19.200.3/0 to Outsidenetwork:10.19.4.1/0 This led me to set same-security traffic permit, and to assign the same, lesser and greater security levels to the two interfaces. Am I overlooking something obvious? Is there a command to set static routes that are classified higher than connected routes?

    Read the article

  • Someone used or hacked my computer to commit a crime? what defense do I have?

    - by srguws
    Hello, I need IMMEDIATE help on a computer crime that I was arrested for. It may involve my computer, my IP, and my ex-girlfriend being the true criminal. The police do not tell you much; they are very vague. I was charged though! So my questions are: - If someone did use my computer at my house and business to post a rude Craigslist ad about a friend of my girlfriend at the time, from a fake email address, how can I be the ONLY suspect? Also, how can I be charged? I have noticed over the last few days that there are many ways to use other people's computers, connections, etc. Here are a few things I found: You can steal or illegally use an IP address or MAC address. A dynamic IP is less secure and more vulnerable than a static one. People can sidejack and spoof your MAC, IP, etc. There is another thing called ARP spoofing. I am sure there are more things, but how can I prove that this happened to me or didn't happen to me? - The police contacted Craigslist, the victim, AOL, and the two ISP companies. They say they traced the IPs to my business and my home. My ex, who I lived with and had a business with, has access to the computers and the keys to both buildings. My brother also lives and works with me. My business has many teenagers who use the computer and wifi. My brother is a college kid and also has friends over at the house, and they use the computer freely. So how can they say it was me, because of an angry ex-girlfriend?

    Read the article

  • VPN on a ubuntu server limited to certain ips

    - by Hultner
    I have a server running Ubuntu Server 9.10, and I sometimes need access to it and other parts of my network when not at home. There are two places I need to access the VPN from. One of them has a static IP, and the other has a dynamic one but with DynDNS set up, so I can always get the current IP if I want to. Now, when it comes to servers people call me kind of paranoid, but security is always my number one priority and I never like to allow access to the server from outside the network, so there are two things I have to have on this VPN. One, it shouldn't be accessible from any IP other than these two, and two, it has to use a very secure key so it will be virtually impossible to brute-force even from those IPs. I have no experience whatsoever in setting up VPNs; I have used SSH tunneling but never an actual VPN. So what would be the best, most stable, safest and most performance-efficient way to set this up on an Ubuntu server? Is it possible, or should I just set up some kind of SSH tunnel instead? Thanks in advance for answers.
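
    A hedged sketch of one way to satisfy both requirements with OpenVPN (port, addresses and hostname are placeholders): per-client certificates plus a tls-auth pre-shared key cover the "very secure key" part, and iptables limits who can even reach the port. Since iptables only sees the address a DynDNS name resolved to when the rule was loaded, the second rule would need refreshing from cron whenever that IP changes.

      # allow the VPN port only from the static IP and the last-known DynDNS IP
      VPN_PORT=1194
      STATIC_IP=198.51.100.7
      DYN_IP="$(dig +short myhome.dyndns.example | head -n1)"
      iptables -A INPUT -p udp --dport "$VPN_PORT" -s "$STATIC_IP" -j ACCEPT
      iptables -A INPUT -p udp --dport "$VPN_PORT" -s "$DYN_IP" -j ACCEPT
      iptables -A INPUT -p udp --dport "$VPN_PORT" -j DROP

      # on the OpenVPN side, certificate auth (easy-rsa) plus "tls-auth ta.key 0"
      # in server.conf drops any packet lacking the pre-shared HMAC key before
      # a handshake is even attempted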

    Read the article

  • Require and Includes not Functioning Nginx Fpm/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that php will run under each individual user and set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment because on this new domain I set up, require, require_once, include, include_once do not function, or rather, they may not be getting passed up to the interpreter to be rendered as php. Since I already have a Wordpress install on this server that runs perfectly, I'm pretty sure the error is in my server block for nginx. server { server_name www.domain.com; rewrite ^(.*) http://domain.com$1 permanent; } server { listen 80; server_name domain.com; client_max_body_size 500M; index index.php index.html index.htm; root /home/username/public_html; location / { try_files $uri $uri/ index.php; } location ~ \.php$ { if (!-e $request_filename) { rewrite ^(.*)$ /index.php?name=$1 break; } fastcgi_pass 127.0.0.1:9002; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location ~ /\.ht { deny all; } } The problem I'm finding I think is that there are dynamic calls to the doc root index file, while all calls to anything within a sub-folder should be routed as normal ie: NOT passed to index.php. I can't seem to find the right mix here. It should run like so: domain.com/cindy (file doesn't exist) --> index.php?name=$1 domain.com/admin/anyfile.php (files DO exist) --> admin/anyfile.php?$args
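
    A hedged sketch of the routing described at the end (directive values are guesses at the intent, not a drop-in config): let try_files serve anything that really exists on disk, fall back to /index.php only when nothing matches, and refuse to hand non-existent .php paths to FPM.

      location / {
          # real files and directories win; everything else becomes index.php?name=...
          try_files $uri $uri/ /index.php?name=$uri&$args;
      }

      location ~ \.php$ {
          # don't forward requests for scripts that don't exist
          try_files $uri =404;
          fastcgi_pass 127.0.0.1:9002;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
      }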

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a VPS (512MB) from Linode and I was running nginx + php5-fpm (which comes with php5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf: pm = dynamic pm.max_children = 5 pm.start_servers = 2 pm.min_spare_servers = 2 pm.max_spare_servers = 5 pm.max_requests = 300 That's what top command tells me: top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00 Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 532288k total, 469628k used, 62660k free, 28760k buffers Swap: 1048568k total, 408k used, 1048160k free, 210060k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22806 www-data 20 0 178m 67m 31m S 1 13.1 0:05.02 php5-fpm 8980 mysql 20 0 241m 55m 7384 S 0 10.6 2:42.42 mysqld 22807 www-data 20 0 162m 43m 22m S 0 8.3 0:04.84 php5-fpm 22808 www-data 20 0 160m 41m 23m S 0 8.0 0:04.68 php5-fpm 25102 www-data 20 0 151m 30m 21m S 0 5.9 0:00.80 php5-fpm 10849 root 20 0 44100 8352 1808 S 0 1.6 0:03.16 munin-node 22805 root 20 0 145m 4712 1472 S 0 0.9 0:00.16 php5-fpm 21859 root 20 0 66168 3248 2540 S 1 0.6 0:00.02 sshd 21863 root 20 0 66028 3188 2548 S 0 0.6 0:00.06 sshd 3956 www-data 20 0 31756 3052 928 S 0 0.6 0:06.42 nginx 3954 www-data 20 0 31712 3036 928 S 0 0.6 0:06.74 nginx 3951 www-data 20 0 31712 3008 928 S 0 0.6 0:06.42 nginx 3957 www-data 20 0 31688 2992 928 S 0 0.6 0:06.56 nginx 3950 www-data 20 0 31676 2980 928 S 0 0.6 0:06.72 nginx 3955 www-data 20 0 31552 2896 928 S 0 0.5 0:06.56 nginx 3953 www-data 20 0 31552 2888 928 S 0 0.5 0:06.42 nginx 3952 www-data 20 0 31544 2880 928 S 0 0.5 0:06.60 nginx So, the question is there any way to use less memory? Btw, I have 16 cores and it would be nice to make use of them...
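
    A hedged way to judge whether 210-230 MB is really too much, and to size the pool for the hardware: measure what one FPM worker actually costs, then derive pm.max_children from the memory you are willing to give PHP (the budget is up to you; pm = ondemand only arrived in PHP 5.3.9, so on 5.3.3 the dynamic settings are what there is to tune).

      # resident memory of each php5-fpm process (KB), largest last
      ps -o rss=,cmd= -C php5-fpm | sort -n

      # average worker size, as a rough input for
      #   pm.max_children = (memory budget for PHP) / (average worker RSS)
      ps -o rss= -C php5-fpm | awk '{sum+=$1; n++} END {printf "avg RSS: %.0f MB\n", sum/n/1024}'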

    Read the article
