Search Results

Search found 4702 results on 189 pages for 'coding district'.

Page 144/189 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • Getting a Dell E6320 with an i7 to work with 3 monitors at 1920x1080 each

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) and Intel HD Graphics 3000. The laptop will come with a docking station, and I want to connect 3 monitors to that docking station so that working at home gives me an additional boost. The docking station will only allow me to connect 2 monitors, so I'm looking at the following other options:

      - A Matrox TRIPLEHEAD2GO DIGITAL Edition or TRIPLEHEAD2GO DP Edition. But according to the Matrox support page, the Intel GPU can't drive the highest resolution with 3 monitors connected, and it gets worse: it seems the monitors would have to be able to work at 50Hz. Also, I'm not sure, but it seems that Matrox doesn't present the monitors as 3 separate displays but simply as one big surface (which is rather the opposite of what I need).
      - Buying 2, or maybe just 1, USB-based monitor, but that would also mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. Also, I found only a couple of models, and most of them require USB 3.0 and no other cables (nice but costly; I couldn't find a decent monitor that uses USB only for the video signal and takes power normally), and the docking station has only one USB 3.0 port. Can I use a hub and still get it to work?
      - Finding some converters from digital to USB (I think DisplayLink makes some?).
      - Buying a different laptop, but what kind? I need an i7, small (13"), fast and lightweight, and it needs a docking station that I can use at home to connect 3 external monitors.
      - Some other suggested solution...

    Edit: I need the 3 monitors for work, meaning coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe a movie once in a while.

    Read the article

  • How to publish internal data to the internet - as simply as possible

    - by mlarsen
    I asked this at Stack Overflow, but I would like your opinion too, as it has as much to do with administration as it does with coding. We have a .NET 2-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005 and 2008, and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration are as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission critical by the IT department, so we need to keep their work down to a minimum. Now we are starting to get wishes for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

      - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
      - As little firewall configuration as possible. Best is if we can run without any special configuration, as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
      - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
      - Data moves both ways, but not in the same tables, so we don't have to support merges.
      - It would be nice if we didn't have to roll our own framework completely.

    Looking forward to hearing your suggestions.
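
    A minimal sketch of the outbound-initiated batch agent from the first requirement, in Python just for brevity (the endpoint URL, token and row-handling helpers are hypothetical placeholders, not part of our product):

        # sync_agent.py - runs inside the customer's LAN; every connection is
        # outbound HTTPS, so no inbound firewall holes are needed. Sketch only.
        import json
        import time
        import urllib.request

        SAAS_URL = "https://saas.example.com/api/sync"   # hypothetical endpoint
        API_TOKEN = "customer-specific-token"            # hypothetical credential

        def fetch_outgoing_rows():
            # Placeholder: read rows changed since the last sync from the local DB.
            return [{"table": "orders", "id": 1, "status": "shipped"}]

        def apply_incoming_rows(rows):
            # Placeholder: write rows received from the SaaS side into the local DB.
            pass

        def sync_once():
            payload = json.dumps({"push": fetch_outgoing_rows()}).encode("utf-8")
            req = urllib.request.Request(
                SAAS_URL, data=payload,
                headers={"Authorization": "Bearer " + API_TOKEN,
                         "Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:  # outbound-only connection
                apply_incoming_rows(json.load(resp).get("pull", []))

        while True:
            sync_once()
            time.sleep(600)  # batch every ten minutes, per the requirements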

    Read the article

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every file specified, while pee does the same for pipes. These programs send every line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" stdin across different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

        pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel supported this functionality, like so (I made up the --demux-stdin option):

        pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

        pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there were a "load-balancing" tee (let's call it lee), I could do this:

        pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
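
    For what it's worth, the "lee" idea can be approximated in a few lines of Python; this is only a sketch of the round-robin scheme (worker command and count are illustrative), and worker buffering means the merged output is not guaranteed to stay line-atomic:

        #!/usr/bin/env python
        # lee.py - a rough "load-balancing tee": round-robin stdin lines
        # across N copies of a worker command, all sharing our stdout.
        import subprocess
        import sys
        from itertools import cycle

        N = 4
        workers = [subprocess.Popen(["sed", "s/something//"], stdin=subprocess.PIPE)
                   for _ in range(N)]
        for line, worker in zip(sys.stdin.buffer, cycle(workers)):
            worker.stdin.write(line)   # one line per worker, round-robin
        for worker in workers:
            worker.stdin.close()       # EOF lets each sed flush and exit
            worker.wait()

    (I have also read that newer versions of GNU parallel grew a --pipe option for splitting stdin between jobs, which may make all of this unnecessary.)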

    Read the article

  • Set proper rights for sshfs mountpoint so it can be shared with samba

    - by CS01
    I have a domain hoster that provides access via SSH. My platforms are Gentoo 2.6.36-r5 and Windows (XP/Vista/7). I work on Windows and use Gentoo to do all the magic Windows can't do. Therefore I use sshfs to mount the remote public directory for my domain at /mnt/mydomain.com. Authentication is done via keys, so lazy me doesn't have to type in the password every now and then. Since I do my coding on Windows, and I don't want to upload/download the changed files all the time, I want to access /mnt/mydomain.com via a samba share. So I shared /mnt in samba; all mounts except mydomain.com are listed in my Windows Explorer. My theories are:

      - sshfs does not set the mountpoint uid/gid to something that samba expects
      - samba does not know that it has to include the uid/gid that /mnt/mydomain.com has been set to
      - all of the above is wrong, and I don't know

    Here are the configs and output from the console; if you need anything else, just let me know. Also, there are no errors or warnings that I noticed being relevant to this issue, but I might be wrong.

        gentoo ~ # ls -lah /mnt
        total 20K
        drwxr-xr-x  9 root  root  4.0K Mar 26 16:15 .
        drwxr-xr-x 18 root  root  4.0K Mar 26  2011 ..
        -rw-r--r--  1 root  root     0 Feb  1 16:12 .keep
        drwxr-xr-x  1 root  root     0 Mar 18 12:09 buffer
        drwxr-s--x  1 68591 68591 4.0K Feb 16 15:43 mydomain.com
        drwx------  2 root  root  4.0K Feb  1 16:12 cdrom
        drwx------  2 root  root  4.0K Feb  1 16:12 floppy
        drwxr-xr-x  1 root  root     0 Sep  1  2009 services
        drwxr-xr-x  1 root  root     0 Feb 10 15:08 www

    /etc/samba/smb.conf:

        [mnt]
        comment = Mount points
        writable = yes
        writeable = yes
        browseable = yes
        browsable = yes
        path = /mnt

    /etc/fstab:

        sshfs#[email protected]:/home/to/pub/dir/ /mnt/mydomain.com/ fuse comment=sshfs,noauto,users,exec,uid=0,gid=0,allow_other,reconnect,follow_symlinks,transform_symlinks,idmap=none,SSHOPT=HostBasedAuthentication 0 0

    For an easier read: [email protected] /home/to/pub/dir/ mounted at /mnt/mydomain.com/ with options: comment=sshfs, noauto, users, exec, uid=0, gid=0, allow_other, reconnect, follow_symlinks, transform_symlinks, idmap=none, SSHOPT=HostBasedAuthentication. Help!

    Read the article

  • Windows Explorer Hangs on Right-Click

    - by Bryan
    I am not sure if this is the right site to post this on, as I typically post coding questions on Stack Overflow, but I'll ask anyway, and hopefully someone can move it if it's incorrect. I currently have a custom-built PC with an Intel i7 chip, a 1300 watt PSU, 8GB of RAM, and two video cards. Originally I had one video card (NVIDIA) that used PSU connectors and had two DVI outputs. After purchasing a third monitor, I installed another (ATI) graphics card that doesn't need any PSU connectors. After installing it and restarting, I noticed that when I right-click on my desktop or in Windows Explorer, Explorer will hang, freeze, then restart. Sometimes after Windows Explorer restarts, the problem dissipates. I checked to make sure everything was connected properly, and it was. I repaired the ATI Catalyst Control Center to see if that had an issue, and I checked whether either video card required updated drivers. Nothing worked. I tried restarting my PC, and that didn't work. I tried using ShellExView (I forgot what it's actually called) and tried closing processes, but that didn't work either. Does anyone have any idea what could have caused this, or possible solutions I should try? Thanks in advance.

    Read the article

  • Slowdown upon router/modem setup change

    - by Ollie Saunders
    I’ve been using a Belkin F5D7632-4 modem router to connect to my TalkTalk-provided ADSL internet connection for some time and have been pretty happy with it. Recently, however, the connection has been failing, and I decided to get an ASUS RT-N16 instead, which is also a much more capable router generally. The ASUS RT-N16 doesn’t come with a built-in modem, so I purchased a Zoom modem as well. I’ve set them both up and am using them to post this message. But I’m a bit miffed to find that I get a significantly and consistently slower downstream rate from the new configuration than with the old Belkin.

        Belkin modem router: downstream 3.45 Mbps, upstream 0.73 Mbps
        ASUS router + Zoom modem: downstream 2.71 Mbps, upstream 0.66 Mbps

    Any ideas why this is? The really weird thing is that the Zoom supports ADSL2 and ADSL2+, but I don’t think the old Belkin does. At first I thought it might be due to the Zoom modem being limited to PPPoE instead of PPPoA, which my ISP supports, but then I tried using PPPoE with the Belkin, and that still gave a high speed. I’m using VC-Mux encapsulation with both, with a VPI of 0 and a VCI of 38. I pulled this data off the Zoom:

        Mode: ADSL2
        Line Coding: Trellis On
        Status: No Defect
        Link Power State: L0

        (Downstream / Upstream)
        SNR Margin (dB): 12.3 / 11.8
        Attenuation (dB): 43.0 / 24.9
        Output Power (dBm): 12.9 / 0.0
        Attainable Rate (Kbps): 3936 / 844
        Rate (Kbps): 3194 / 840
        MSGc (number of bytes in overhead channel message): 59 / 10
        B (number of bytes in Mux Data Frame): 99 / 14
        M (number of Mux Data Frames in FEC Data Frame): 2 / 16
        T (Mux Data Frames over sync bytes): 1 / 8
        R (number of check bytes in FEC Data Frame): 8 / 8
        S (ratio of FEC over PMD Data Frame length): 1.9833 / 9.0594
        L (number of bits in PMD Data Frame): 839 / 219
        D (interleaver depth): 32 / 2
        Delay (msec): 15 / 4
        Super Frames: 15808 / 14078
        Super Frame Errors: 0 / 4294967232
        RS Words: 513778 / 111753
        RS Correctable Errors: 126 / 4294967238
        RS Uncorrectable Errors: 0 / N/A
        HEC Errors: 0 / 4294967279
        OCD Errors: 0 / 0
        LCD Errors: 0 / 0
        Total Cells: 1920175 / 237597
        Data Cells: 205993 / 392
        Bit Errors: 0 / 0
        Total ES: 0 / 0
        Total SES: 0 / 0
        Total UAS: 34 / 0

    Read the article

  • ffmpeg: video file played OK on Ubuntu, but no sound on XP

    - by Andy Le
    I created a video clip using ffmpeg (vcodec: mpeg2video, acodec: AC3 5.1). The file plays normally on Ubuntu, but when I play it on an XP machine there is no sound, even though I can play AC3 files and other movies with AC3 sound there. I have already tried many codec packs and many players. When I compare the MediaInfo tab of the file's Properties window with that of another playable movie, I see that the Audio ID of the audio stream in my file is 0x80, while it is 0x02 in the other movie. So I guess that's why players on XP can't recognize the audio codec. When I use an MKV container instead of MPEG (still the mpeg2video codec), the result is OK on both Ubuntu and XP (with the correct Audio ID). I really need MPEG, though. Any idea? This is the command I used:

        ~/ffmpeg/ffmpeg/ffmpeg -loop_input \
            -t 97 -r 30000/1001 -i v%4d.tga -i final.ac3 \
            -vcodec mpeg2video -qscale 1 -s 400x400 -r 30000/1001 \
            -acodec copy -y out6.mpeg

    This is the output of mediainfo (on Ubuntu):

        General
        Complete name : out6.mpeg
        Format : MPEG-PS
        File size : 6.86 MiB
        Duration : 1mn 37s
        Overall bit rate : 593 Kbps

        Video
        ID : 224 (0xE0)
        Format : MPEG Video
        Format version : Version 2
        Format profile : Main@Main
        Format settings, BVOP : No
        Format settings, Matrix : Default
        Format_Settings_GOP : M=1, N=12
        Duration : 1mn 37s
        Bit rate mode : Variable
        Bit rate : 122 Kbps
        Width : 400 pixels
        Height : 400 pixels
        Display aspect ratio : 1.000
        Frame rate : 29.970 fps
        Resolution : 8 bits
        Colorimetry : 4:2:0
        Scan type : Progressive
        Bits/(Pixel*Frame) : 0.025
        Stream size : 1.41 MiB (21%)

        Audio
        ID : 128 (0x80)
        Format : AC-3
        Format/Info : Audio Coding 3
        Duration : 1mn 36s
        Bit rate mode : Constant
        Bit rate : 448 Kbps
        Channel(s) : 6 channels
        Channel positions : Front: L C R, Side: L R, LFE
        Sampling rate : 44.1 KHz
        Stream size : 5.18 MiB (75%)

    Read the article

  • Technology mash: is this possible?

    - by Jon Story
    I'm in the process of setting up my own DNS + hosting on a couple of VPSes and my home machines, mostly for academic/learning purposes, but also for conveniently accessing my files, hosting my personal websites, private git repositories, etc. I've got a main web server with DNS and a slave DNS server. I've also got a couple of machines at home doing file hosting, video streaming and all that fun stuff. I'm intending to use my VPSes to provide myself with a dynamic DNS system, so that I can point mydomain.com at my DNS servers, with home.mydomain.com going into my home network via a Raspberry Pi. HOWEVER... I don't have access to the network infrastructure at home (rented accommodation with managed internet), so I can't forward ports on the router to my own machines. As such, I'm wondering if it's possible to route all the traffic via an SSH/HTTP tunnel through one of the VPSes. My plan is to have the Raspberry Pi provide a VPN into my home network: the Raspberry Pi uses SSH to connect to the VPS, and the VPS forwards any traffic for home.mydomain.com through the tunnel to the Raspberry Pi. Is this even possible, and how do I go about it? I don't mind getting my hands dirty with coding and low-level tools; I'm just not sure where to start or what the best way to go about it is.

    Read the article

  • How can something related to graphics completely kill a motherboard?

    - by leladax
    I was coding something in OpenGL, and after a bug there was an "OS slowed down" situation. After a few seconds the screen went blank and the laptop shut down. Now not a single LED turns on, on battery or AC. It doesn't appear to be the AC adapter or the battery, since there was some battery charge left when it died, and when it's connected to AC the laptop produces, near the AC connection, a very slight clicking noise (very faint; one has to be very careful to notice it, and I don't know if it was there all along, to be honest). I suspect the motherboard died, as in something between the point where it gets AC or battery power and the point where it actually feeds itself. But I can't figure out how that effect could have been produced by the OpenGL bug or by graphics overheating. If the graphics chip died alone, the laptop should at least give some indication that it is barely alive: at least an LED, a sound, anything. Instead the laptop is completely dead (other than the faint clicking I mentioned). Does anyone have expert advice on this? I'm especially interested in any ideas connecting "graphics overheated/bugged" to "killed the motherboard". I have lengthy experience with this stuff as a hobbyist, and it really puzzles me. It's not just an "AC adapter died" situation I can easily google.

    Read the article

  • Encrypting peer-to-peer application with iptables and stunnel

    - by Jonathan Oliver
    I'm running legacy applications for which I do not have access to the source code. These components talk to each other in plaintext on a particular port. I would like to be able to secure the communications between two or more nodes using something like stunnel to facilitate peer-to-peer communication, rather than using a more traditional (and centralized) VPN package like OpenVPN, etc. Ideally, the traffic flow would go like this:

      - app@hostA:1234 tries to open a TCP connection to app@hostB:1234.
      - iptables captures and redirects the traffic on port 1234 to stunnel running on hostA at port 5678.
      - stunnel@hostA negotiates and establishes a connection with stunnel@hostB:4567.
      - stunnel@hostB forwards any decrypted traffic to app@hostB:1234.

    In essence, I'm trying to set this up so that any outbound traffic (generated on the local machine) to port N forwards through stunnel to port N+1, and the receiving side receives on port N+1, decrypts, and forwards to the local application at port N. I'm not particularly concerned about losing the hostA origin IP address/machine identity when stunnel@hostB forwards to app@hostB, because the communications payload contains identifying information. The other trick is that normally with stunnel you have a client/server architecture, but this application is much more P2P, because nodes can come and go dynamically, and hard-coding some kind of "connection = hostN:port" in the stunnel configuration won't work.

    Read the article

  • A very peculiar problem with an old PC and a newer laptop...

    - by user553492
    I got my old PC (248 MB RAM, 80 GB disk) repaired, and the tech people put XP on it. My newer laptop has Ubuntu 10.04. Now, I only have one cable and one USB cord, so I connected my modem (with only one CAT5 port and 4 USB ports) to the laptop with the CAT5 cable. The internet is working fine. I also wanted to use the net on the older PC, so I installed the USB drivers for Windows and it worked. But I got fed up with Windows XP and made a separate partition for FreeBSD, which I planned to install. During the install I screwed something up, and now FreeBSD starts with a boot option that has a ? mark in place of Windows XP. If I click on that, it gives me an "NTLDR missing" message. I tried connecting the CAT5 cable between the old and new PC, and tried connecting my laptop with the USB cable, but nothing happened, and then I realized the modem doesn't have a working USB driver for Linux. FreeBSD doesn't even detect the LAN cable if I use it for the old PC (wasn't FreeBSD supposed to detect it?). So basically I have an old PC with FreeBSD that I can only start and stare at the blank terminal console, but which works perfectly otherwise. And I have a laptop with Linux that only works if I connect it with a CAT5 cable. So what can I do with my old PC? Any local server (if that's even possible) or some such thing? Can you suggest any use? I'm 18 and I'm into learning programming and coding, so I could practice on it. Thanks!

    Read the article

  • How to have PHP and mod_wsgi python app on the same domain?

    - by Lazik
    I am using Apache with mod_wsgi (Python 3) on Ubuntu 12.04. I have a Python app (bottle) which is at www.mysite.com/. In my Python app I have routes like www.mysite.com/abbb?q=blab. I would like the path www.mysite.com/forum to resolve to a PHP app (Simple Machines Forum). Ideally, I would like Apache to handle the forum part and pass it to PHP (instead of coding it in the Python app). I don't know if that's possible; I'm new to this. I have read https://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#The_Apache_Alias_Directive but I don't understand how to use it. Here is my Apache conf for the mod_wsgi app; I don't know how to specify the PHP portion:

        <VirtualHost *:80>
            ServerName www.ex.com
            ServerAlias ex.com *.ex.com

            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}$1 [R=301,L]

            WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
            WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi

            <Directory /var/www/vhosts/ex>
                WSGIProcessGroup ex
                WSGIApplicationGroup %{GLOBAL}
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
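
    Following the Alias section of the linked guide, something along these lines inside the same VirtualHost might work; an untested sketch, with an illustrative forum path, assuming mod_php is already enabled:

        Alias /forum /var/www/vhosts/forum

        <Directory /var/www/vhosts/forum>
            Order deny,allow
            Allow from all
        </Directory>

    As I understand the guide, the more specific Alias takes precedence over WSGIScriptAlias /, so /forum would be served by Apache/PHP while everything else still reaches the bottle app.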

    Read the article

  • Have a .cmd file in Startup wait for the system-wide startup script to run

    - by Dan
    On a computer where I'm not an administrator, there is a startup file (a .cmd file in C:\Documents and Settings\All Users\Start Menu\Programs\Startup) for all users. It does several things I don't like, so I have created my own .cmd file, which I have placed in C:\Documents and Settings\[user]\Start Menu\Programs\Startup. I want my code to run after the system-wide script, because my program first undoes the network mappings the system-wide file did, and then replaces them with my own network mappings. How can I make my program wait and start as soon as the system-wide one is done? (I cannot remove or edit the system-wide file.) Thanks. EDIT: The time that the system-wide file takes to run varies, and I would like my file to run right after it. The "if exist" method seems a little too contingent, since the system-wide script can change or a file could be moved. I am hoping to give my script to a few of my coworkers, so I'd like it to work without me having to update anything. Also, being a Linux guy, I don't know cmd scripting, so please write out any coding suggestions.

    Read the article

  • How to "drag and drop" folders or multiple HTML files into a browser and have them open in multiple tabs

    - by PoorLuzer
    I save pages that I browse on the net and find interesting into a folder called C:\PageSaves. Later, during the commute, I open these pages to see what they are and move them into a neatly categorized folder tree. For example, Perl-related pages go to C:\Pages\Perl, MySQL-related pages go to C:\Pages\MySQL, and so on. I was wondering if there is any way I could open any number of HTML files on disc / inside a folder (C:\PageSaves in my case) in Mozilla/Firefox/K-Meleon, etc. For example, I would like to just drag and drop the folder C:\PageSaves onto Firefox and have it open all the .html pages in the folder, each in a separate tab. Right now, if I drag and drop multiple HTML files, it just opens the last file in the selection. I'd also like a set of toolbar buttons, basically a plugin, that would allow me to nuke the page (if I don't want to keep it anymore) from disc, or move the file (and its corresponding folder) into a predefined/new folder. I am familiar with coding full-blown Firefox plugins, so even if something very basic / almost similar exists, I can take it forward. Hints/clues/other methods of achieving the same result are all welcome!
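
    In the meantime, half of this (open every saved page in a tab) can be scripted outside the browser; a small Python sketch, with the folder path as an example:

        # open_saves.py - open every .html file in a folder as a browser tab.
        # Uses whatever the default browser is; sketch only.
        import glob
        import os
        import webbrowser

        for path in sorted(glob.glob(r"C:\PageSaves\*.html")):
            url = "file:///" + os.path.abspath(path).replace("\\", "/")
            webbrowser.open_new_tab(url)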

    Read the article

  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system that cannot subsequently be overwritten, appended to, updated in any way, or deleted; not by a sudoer, root, or anybody. I am attempting to meet the requirements of the financial services regulations for recordkeeping, FINRA 17a-4, which basically require that electronic documents are written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices. Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.? I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again. I'm looking for any smart ideas; worst case, I'm willing to do a little coding to "enhance" an existing file system to provide this, assuming there is a file system that is a good starting point, and to put in place a carefully configured Linux server to act as this type of network storage device, doing nothing else. After all of that, encryption of the files would be useful too!
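
    To make the starting point concrete, here is a sketch of the "immutable immediately after write" idea using the immutable attribute mentioned above; it is not regulatory-grade, since root can still run chattr -i, which is exactly the weakness in question:

        # worm_write.py - write a record, mark it read-only, then set the
        # ext2/3/4 immutable attribute via chattr. Sketch only; needs root.
        import os
        import subprocess

        def worm_write(path, data):
            # O_EXCL refuses to touch a path that already exists.
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o444)
            with os.fdopen(fd, "wb") as f:
                f.write(data)
            subprocess.run(["chattr", "+i", path], check=True)

        worm_write("/srv/worm/record-0001.dat", b"example record")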

    Read the article

  • I need to take the first three letters of a filename and write them into a text file in a certain fashion. How can I do this?

    - by JuniorD
    Okay, so I need to take the first three letters of each file name from a list of files and place them into a text file in a certain manner. I will provide an example below. Let's say that I have two files, one called cougar.txt and the other bear.txt, in the animals directory. I need to take the first three letters of these names and transpose them into a text file along with the directory, in the following format:

        BEA = "animals/bear.txt"
        COU = "animals/cougar.txt"

    This should happen with any random thing that might be in the list. I'm fairly new to this sort of coding, so I'm not quite sure which language to use, and I'm learning as I go. This new challenge seems fairly daunting to me, and I would much appreciate it if you guys could help. Also, I'm using Windows 7. I've been attempting this all day, to no avail. Preferably this would be done in batch, but if that is impossible I'm open to recommendations.
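
    Since batch isn't mandatory, a short Python sketch of the same idea (directory and output file names are just examples):

        # make_index.py - emit lines like: BEA = "animals/bear.txt"
        import os

        folder = "animals"  # example directory
        with open("index.txt", "w") as out:
            for name in sorted(os.listdir(folder)):
                prefix = os.path.splitext(name)[0][:3].upper()
                out.write('%s = "%s/%s"\n' % (prefix, folder, name))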

    Read the article

  • Push, parse & import "selected" data, text, info blobs from Webpages/ Emails as Event/ Appointment to standard Calendar directly or as .ics file?

    - by Alex S
    Is there any tool, plugin, extension, or script/code to push "selected" data, text, or information blobs from web pages, emails, etc., and have them parsed and imported as a structured event or appointment (e.g. .ics) in a standard calendar like Outlook, Google, or iCal? If not, what and how could I use scripting, coding, or existing tools and extensions to build on top and do this? I come across a lot of unstructured information on web pages, in emails, FB events, etc., where I just want to add that information to my calendar. Instead of entering all the information by hand all the time, there should be an easy enough way to have the information parsed, organized and imported into a calendar: either directly into a calendar from the source, or translated to a standard format such as .ics that can be imported and saved easily. I would love to see some suggestions for this incorporating one or more of the following: on Windows with Chrome & Outlook; on iPhone/iPad into its Calendar. PS: I'll come back and see if I can add more information to this question, and to answer it as well. I have not found a solution yet.
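
    The "translate to .ics" half is mechanical once the event details are extracted; a Python sketch of emitting a minimal VEVENT (parsing the free-form blob itself is the hard part and is not shown, and all event values below are made up):

        # blob_to_ics.py - serialize an already-parsed event into a minimal .ics file.
        from datetime import datetime

        def to_ics(summary, start, end, location=""):
            fmt = "%Y%m%dT%H%M%S"
            return "\r\n".join([
                "BEGIN:VCALENDAR",
                "VERSION:2.0",
                "PRODID:-//blob-to-ics//EN",
                "BEGIN:VEVENT",
                "UID:" + start.strftime(fmt) + "@blob-to-ics.local",
                "DTSTAMP:" + datetime.utcnow().strftime(fmt) + "Z",
                "DTSTART:" + start.strftime(fmt),
                "DTEND:" + end.strftime(fmt),
                "SUMMARY:" + summary,
                "LOCATION:" + location,
                "END:VEVENT",
                "END:VCALENDAR",
                ""])

        with open("event.ics", "w") as f:
            f.write(to_ics("Team dinner", datetime(2014, 5, 1, 19, 0),
                           datetime(2014, 5, 1, 21, 0), "Main St 1"))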

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: approximately 1.25 years ago, the company I work for was acquired by a larger, 400-person company. Before the acquisition (and still today) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc.). We are approximately 15 employees within the larger organization. Some time after the acquisition, the owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) requires them to retrieve and store any related information from us. Because we were a separate company up until the acquisition, there is a high probability that our personal machines contain information about what the lawsuit alleges (email, documents, chat logs?, etc.). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since the acquisition (1.25 yrs), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc. for work. Email is via POP3S and not hanging around on the mail server; it's on everyone's client. Documents are spread across personal machines. So, now they want us each to back up our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents; that single folder will be excluded from the backup. Of course, that means a total rearrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article

  • Transferring files from FTP to a local system

    - by user1056221
    I want to copy a file from FTP and paste it to my local system, and I want to run this through a batch file. I have been trying this for a week, but I couldn't find the solution. Can anyone help me, please? This is my actual task: I want to copy a file named "Friday.bat" from ftp://172.16.3.132 (with username and password), so I use the script below:

        @echo off
        @ftp -i -s:"%~f0"&GOTO:EOF
        open 172.16.3.132
        mmftp
        ((((password entered here))))
        binary
        get Friday.bat
        pause

    Result:

        ftp> @echo off
        ftp> @ftp -i -s:"%~f0"&GOTO:EOF
        Invalid command.
        ftp> open 172.16.3.132
        Connected to 172.16.3.132.
        220 Welcome to ABL FTP service.
        User (172.16.3.132:(none)): 331 Please specify the password.
        230 Login successful.
        ftp> binary
        200 Switching to Binary mode.
        ftp> get Friday.bat
        200 PORT command successful. Consider using PASV.
        550 Failed to open file.
        ftp> pause

    In the end, a file named Friday.bat is copied to my local system with 0 bytes, and it will not open.
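
    If the Windows ftp client keeps fighting you, Python's standard ftplib does the same job unambiguously; a sketch using the host and user from the post (the password and remote directory are placeholders; a 550 "Failed to open file" often just means the file isn't in the login directory):

        # fetch_friday.py - download Friday.bat over FTP in binary mode.
        from ftplib import FTP

        ftp = FTP("172.16.3.132")
        ftp.login("mmftp", "password_here")   # placeholder password
        ftp.cwd("/path/to/file")              # hypothetical remote directory
        with open("Friday.bat", "wb") as f:
            ftp.retrbinary("RETR Friday.bat", f.write)
        ftp.quit()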

    Read the article

  • Can I host multiple sites with one Amazon EC2 instance [duplicate]

    - by user22
    This question already has an answer here: Can you help me with my capacity planning? (2 answers)
    I currently have a VPS server for which I pay around $75 per month, and I get: 40GB HD, 2GB RAM, 100GB bandwidth, and a 6-core CPU (which I don't use much). I have only one live website running, and traffic is at most 100 user visits per day. I mostly do my testing stuff and run some of my internal sites for playing with coding, but I do need one server. I am thinking of moving to Amazon EC2 if the price difference is not too big, because then I can learn some more stuff. I am thinking of getting the 3-year heavy-utilization reserved instance, because my server will be running all day and night. I tried their online calculator with a medium instance, heavy-utilization reserved for 3 years: for EC2 it comes to $31 per month (effective price), and even with EBS and S3 I think it's maybe $40 for all the other stuff. I would be at no loss compared to what I am getting at present. Am I correct, or have I missed something? Now, on my current VPS I have Apache for PHP sites and mod_wsgi for Python sites. I am not sure if I will be able to do all that stuff in Amazon EC2. Can I host both Python and PHP sites in an Amazon EC2 instance using name-based virtual hosts and Nginx?

    Read the article

  • Cannot connect to my nginx server from a remote machine

    - by margincall
    I thought it was an iptables problem, but it seems not, and I really have no idea about this situation. I have a hosted server (CentOS). I installed Nginx + Django, and nginx uses port 8080. A domain is connected to the server. When I executed "wget [domain]:8080/[app name]/" on the server, it worked. Of course, "wget 127.0.0.1:8080/[app name]/" has no problem ("wget [server ip]:8080/[app name]/" works, too). However, connecting from other computers fails (the message says "no route"). I checked my firewall settings and executed these commands:

        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        iptables -I OUTPUT -p tcp --sport 8080 -j ACCEPT
        iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
        /etc/init.d/iptables restart

    I don't really understand all the options of these commands, and some of them were probably useless, but I just tried every iptables setting I could google. Still I cannot connect to my server. What should I check first? I don't know if this is important, but I'll add it: on port 80, an Apache server is running, and it works fine; I can connect to Apache from other computers. There is a DB connection issue (PHP to MySQL), but I think that is just a PHP coding bug. Please excuse my English; I'm not a native speaker, but I've tried to explain things as well as possible. Thank you for reading this question.

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts:

      1. Read input data (from a file, database, TCP connection, etc.).
      2. Run calculations on the input data, where each calculation is independent of any other calculation.
      3. Write results of the calculations (to a file, database, TCP connection, etc.).

    We can parallelize the program in two dimensions:

      - Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter.
      - Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out.

    This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel:

      1. Process the input file into raw data (lists/iterables of integers)
      2. Calculate the sums of the data, in parallel
      3. Output the sums

    Below is a traditional, single-process-bound Python program which solves these three tasks:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # basicsums.py
        """A program that reads integer values from a CSV file and writes out
        their sums to another CSV file.
        """

        import csv
        import optparse
        import sys

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            return cli_parser

        def parse_input_csv(csvfile):
            """Parses the input CSV and yields tuples with the index of the row
            as the first element, and the integers of the row as the second
            element. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.reader` instance

            """
            for i, row in enumerate(csvfile):
                row = [int(entry) for entry in row]
                yield i, row

        def sum_rows(rows):
            """Yields a tuple with the index of each input list of integers
            as the first element, and the sum of the list of integers as the
            second element. The index is zero-index based.

            :Parameters:
            - `rows`: an iterable of tuples, with the index of the original row
              as the first element, and a list of integers as the second element

            """
            for i, row in rows:
                yield i, sum(row)

        def write_results(csvfile, results):
            """Writes a series of results to an outfile, where the first column
            is the index of the original row of data, and the second column is
            the result of the calculation. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.writer` instance to which to write results
            - `results`: an iterable of tuples, with the index (zero-based) of
              the original row as the first element, and the calculated result
              from that row as the second element

            """
            for result_row in results:
                csvfile.writerow(result_row)

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)
            # gets an iterable of rows that's not yet evaluated
            input_rows = parse_input_csv(in_csvfile)
            # sends the rows iterable to sum_rows() for results iterable, but
            # still not evaluated
            result_rows = sum_rows(input_rows)
            # finally evaluation takes place as a chain in write_results()
            write_results(out_csvfile, result_rows)
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, which needs to be fleshed out to address the parts in the comments:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # multiproc_sums.py
        """A program that reads integer values from a CSV file and writes out
        their sums to another CSV file, using multiple processes if desired.
        """

        import csv
        import multiprocessing
        import optparse
        import sys

        NUM_PROCS = multiprocessing.cpu_count()

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            cli_parser.add_option('-n', '--numprocs', type='int',
                    default=NUM_PROCS,
                    help="Number of processes to launch [DEFAULT: %default]")
            return cli_parser

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)

            # Parse the input file and add the parsed data to a queue for
            # processing, possibly chunking to decrease communication between
            # processes.

            # Process the parsed data as soon as any (chunks) appear on the
            # queue, using as many processes as allotted by the user
            # (opts.numprocs); place results on a queue for output.
            #
            # Terminate processes when the parser stops putting data in the
            # input queue.

            # Write the results to disk as soon as they appear on the output
            # queue.

            # Ensure all child processes have terminated.

            # Clean up files.
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about it; bonus points for addressing any/all:

      - Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read?
      - Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results?
      - Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes too? apply_async()? map_async()? imap()? imap_unordered()?
      - Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
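
    As one baseline, if the "everything fits in memory" case at the end is acceptable, part 2 alone can be parallelized with a plain Pool while the main process keeps doing the I/O; a minimal sketch (it deliberately sidesteps the queue design the skeleton asks about):

        # pool_sums.py - main process reads and writes; a Pool sums rows in parallel.
        import csv
        import multiprocessing
        import sys

        def sum_row(row):
            return sum(int(entry) for entry in row)

        if __name__ == '__main__':
            pool = multiprocessing.Pool()  # one worker per CPU by default
            with open(sys.argv[1]) as infile, open(sys.argv[2], 'w') as outfile:
                writer = csv.writer(outfile)
                # imap preserves input order while rows are summed in parallel;
                # chunking cuts down on inter-process communication.
                results = pool.imap(sum_row, csv.reader(infile), chunksize=1000)
                for i, total in enumerate(results):
                    writer.writerow([i, total])
            pool.close()
            pool.join()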

    Read the article

  • Issue with 'selected' value within form

    - by JM4
    I currently have a form built so that after validation, if errors exist, the data stays on screen for the consumer to correct. An example of how this works, for say the 'Year of Birth' field, is:

        <select name="DOB3">
            <option value="">Year</option>
            <?php
            for ($i=date('Y'); $i>=1900; $i--) {
                echo "<option value='$i'";
                if ($fields["DOB3"] == $i) echo " selected";
                echo ">$i</option>";
            }
            ?>
        </select>

    If an error is found, the year-of-birth value returns the year previously entered. I am able to make this work on all fields with the exception of my 'State' field. I build the array and function for the drop-down with the following code:

        <?php
        $states_arr = array(
            'AL'=>"Alabama", 'AK'=>"Alaska", 'AZ'=>"Arizona", 'AR'=>"Arkansas",
            'CA'=>"California", 'CO'=>"Colorado", 'CT'=>"Connecticut", 'DE'=>"Delaware",
            'DC'=>"District Of Columbia", 'FL'=>"Florida", 'GA'=>"Georgia", 'HI'=>"Hawaii",
            'ID'=>"Idaho", 'IL'=>"Illinois", 'IN'=>"Indiana", 'IA'=>"Iowa",
            'KS'=>"Kansas", 'KY'=>"Kentucky", 'LA'=>"Louisiana", 'ME'=>"Maine",
            'MD'=>"Maryland", 'MA'=>"Massachusetts", 'MI'=>"Michigan", 'MN'=>"Minnesota",
            'MS'=>"Mississippi", 'MO'=>"Missouri", 'MT'=>"Montana", 'NE'=>"Nebraska",
            'NV'=>"Nevada", 'NH'=>"New Hampshire", 'NJ'=>"New Jersey", 'NM'=>"New Mexico",
            'NY'=>"New York", 'NC'=>"North Carolina", 'ND'=>"North Dakota", 'OH'=>"Ohio",
            'OK'=>"Oklahoma", 'OR'=>"Oregon", 'PA'=>"Pennsylvania", 'RI'=>"Rhode Island",
            'SC'=>"South Carolina", 'SD'=>"South Dakota", 'TN'=>"Tennessee", 'TX'=>"Texas",
            'UT'=>"Utah", 'VT'=>"Vermont", 'VA'=>"Virginia", 'WA'=>"Washington",
            'WV'=>"West Virginia", 'WI'=>"Wisconsin", 'WY'=>"Wyoming");

        function showOptionsDrop($array, $active, $echo=true) {
            $string = '';
            foreach ($array as $k => $v) {
                $s = ($active == $k) ? ' selected="selected"' : '';
                $string .= '<option value="'.$k.'"'.$s.'>'.$v.'</option>'."\n";
            }
            if ($echo) { echo $string; } else { return $string; }
        }
        ?>

    I then call the function from within the form using:

        <td><select name="State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td>

    Not sure what I'm missing, but I would love any assistance if somebody sees the error in my code. Thanks!

    Read the article

  • Which programming idiom to choose for this open source library?

    - by Walkman
    I have an interesting question about which programming idiom is easier for beginner developers writing concrete file-parsing classes. I'm developing an open source library in which one of the main pieces of functionality is to parse plain text files and get structured information from them. All of the files contain the same kind of information, but they can be in different formats, like XML, plain text (each of them structured differently), etc. There is a common set of information pieces which is the same in all of them (e.g. player names, table names, some ID numbers). There are formats which are very similar to each other, so it's possible to define a common base class for them to facilitate the concrete format parser implementations. So I can clearly define base classes like SplittablePlainTextFormat, XMLFormat, SeparateSummaryFormat, etc. Each of them hints at the kind of structure it aims to parse. All of the concrete classes should have the same information pieces, no matter what. To be useful at all, this library needs to define at least 30-40 of these parsers. A couple of them are more important than others (obviously the more popular formats). Now my question is: which programming idiom is best to facilitate the development of these concrete classes?

    Let me explain. I think imperative programming is easy to follow even for beginners, because the flow is fixed; the statements just come one after another. Right now, I have this:

        class SplittableBaseFormat:
            def parse(self):
                "Parses the body of the hand history, but first parses the header if not yet parsed."
                if not self.header_parsed:
                    self.parse_header()
                self._parse_table()
                self._parse_players()
                self._parse_button()
                self._parse_hero()
                self._parse_preflop()
                self._parse_street('flop')
                self._parse_street('turn')
                self._parse_street('river')
                self._parse_showdown()
                self._parse_pot()
                self._parse_board()
                self._parse_winners()
                self._parse_extra()
                self.parsed = True

    So the concrete parser needs to define these methods, in order, in any way it wants. Easy to follow, but it takes longer to implement each individual concrete parser.

    So what about declarative? In this case the base classes (like SplittableFormat and XMLFormat) would do the heavy lifting based on regex and line/node number declarations in the concrete class, and the concrete classes would have no code at all, just line numbers and regexes, and maybe other kinds of rules. Like this:

        class SplittableFormat:
            def parse_table():
                "Parses TABLE_REGEX and gets information"
                # set attributes here
            def parse_players():
                "Parses PLAYER_REGEX and gets information"
                # set attributes here

        class SpecificFormat1(SplittableFormat):
            TABLE_REGEX = re.compile('^(?P<table_name>.*) other info \d* etc')
            TABLE_LINE = 1
            PLAYER_REGEX = re.compile('^Player \d: (?P<player_name>.*) has (.*) in chips.')
            PLAYER_LINE = 16

        class SpecificFormat2(SplittableFormat):
            TABLE_REGEX = re.compile(r'^Tournament #(\d*) (?P<table_name>.*) other info2 \d* etc')
            TABLE_LINE = 2
            PLAYER_REGEX = re.compile(r'^Seat \d: (?P<player_name>.*) has a stack of (\d*)')
            PLAYER_LINE = 14

    So if I want to make it possible for non-developers to write these classes, the way to go seems to be the declarative one. However, I'm almost certain I can't eliminate the regex declarations, which clearly need (senior :D) programmers, so should I care about this at all? Do you think it matters which idiom I choose, or doesn't it matter at all? Maybe if somebody wants to work on this project they will, and if not, no matter which idiom I choose. Can I "convert" non-programmers to help develop these? What are your observations?

    Other considerations:

      - Imperative will allow any kind of work; there is a simple flow which implementers can follow, but inside it they can do whatever they want. It would be harder to force a common interface with imperative because of these arbitrary implementations.
      - Declarative will be much more rigid, which is a bad thing, because formats might change over time without any notice.
      - Declarative will be harder for me to develop and will take longer. Imperative is already ready to release.

    I hope a nice discussion will happen in this thread about programming idioms: which to use when, which is better for open source projects with different scenarios, and which is better for a wide range of developer skills.

    TL;DR: Parsing different file formats (plain text, XML). They contain the same kind of information. Target audience: non-developers, beginners. Regex probably cannot be avoided. 30-40 concrete parser classes are needed, and I want to facilitate coding these concrete classes. Which idiom is better?

    Read the article

< Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >