Search Results

Search found 684 results on 28 pages for 'cygwin x'.


  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background: This is an optimization problem. Oracle Forms XML files have elements such as:

      <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

      sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
      sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute-force approach.

    Problem: There are 37,497 files to compare against each other; it takes 8 minutes to compare one file against all the others, so the script will complete in approximately 208 days. Logically, if A = B and A = C, then there is no need to check whether B = C. So the problem is: how do you eliminate the redundant comparisons?

    Script source code: The comparison script is as follows:

      #!/bin/bash
      echo Loading directory ...
      for i in $(find sql/ -type f -name \*.sql); do
        echo Comparing $i ...
        for j in $(find sql/ -type f -name \*.sql); do
          if [ "$i" = "$j" ]; then continue; fi
          # Case insensitive compare, ignore spaces
          diff -IEbwBaq $i $j > /dev/null
          # 0 = no difference (i.e., duplicate code)
          if [ $? = 0 ]; then
            echo $i :: $j >> clones.txt
          fi
        done
      done

    Question: How would you optimize the script so that checking for cloned code is a few orders of magnitude faster?

    System constraints: Using a quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome. Thank you!
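
    A common way to attack this (not something the question proposes) is to read each file once, hash a normalized copy of it, and group files whose hashes match: O(n) hashing plus a sort instead of O(n²) pairwise diffs. A rough Cygwin/bash sketch follows; the lowercase/strip-whitespace normalization only approximates what the case-insensitive, whitespace-ignoring diff in the question considers equal, and it assumes the file names contain no spaces (which matches the naming scheme above):

      #!/bin/bash
      # Hash each normalized file once, then report files that share a hash.
      find sql/ -type f -name '*.sql' | while read -r f; do
        h=$(tr -d '[:space:]' < "$f" | tr '[:upper:]' '[:lower:]' | md5sum | cut -d' ' -f1)
        printf '%s %s\n' "$h" "$f"
      done |
      sort |
      awk '$1 == prev { print first " :: " $2; next } { prev = $1; first = $2 }' > clones.txt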

    Read the article

  • Which DVCS would work best on Windows for my scenario?

    - by PoorLuzer
    At work I use ClearCase and SourceSafe, but thanks to a disposable laptop I have found some time to code for myself en route. I wish I had a lightweight VCS on that system so I could make changes to my code during the commute and then push/grab them from my Linux systems. I use git on my home system, but I can't really get it working on Windows, and I don't want the whole Cygwin hack -- if it does not run natively on Windows, it just won't do. What have you tried on your Windows system -- something that YOU actually use? The big player at the moment seems to be Mercurial. What would be best for a one (or maybe two) person team? I just need to maintain:

    - Versioned copies of source code; checking in and out should be as unobtrusive as possible.
    - A multiple-undo kind of feature (like that in an Emacs buffer) but persistent. I really like the way git keeps track of lines moving between files in a source code set.
    - The ability to move parts or subtrees of the source tree (each subtree is a module/plugin of the main software I am building) to an archival system, completely or partially, and restore them from the archive as and when required, with the system tracking any changes to the restored tree as well.
    - The freedom to experiment with my code as much as possible without manually keeping track of what I modified and what I need to undo once I try out some idea, so that I can get back to where I want to continue from.

    Notes: a similar topic came up a year ago: http://stackoverflow.com/questions/4670/dvcs-choices-whats-good-for-windows -- I hope things have changed since then. I really want people to share their own, real-life experiences, not something they recommend without having used it or merely think will work.

    Read the article

  • How can I attach multiple urls to a single git remote?

    - by deterb
    I'm currently using git on Windows through a combination of msysgit and Cygwin. I have a laptop that I move around quite a bit, so it's not at a consistent location. Its computer name is not resolved at all of the locations I connect from, so I can't just use the name as the host in the URL (e.g. git://compname/repo) and have to use the IP address instead. Is there a way I can add multiple URLs to pull from for a particular remote? I've seen git remote set-url --add [--push] <name> <newurl> as a way to add multiple URLs to a remote, and I can see the updates in the .git/config file, but git only tries to use the first one. Is there a way to get git to try all of the URLs? I've tried both git fetch and git remote update, but neither tries anything after the first URL. Note that I haven't tried this on Linux yet, and I can't fix the computer-name resolution because this is at work. Thanks
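
    One workaround (not something the question mentions) is to skip multi-URL remotes entirely and define one remote per address, then fetch whichever answers. A sketch, where the remote names and IP addresses are made-up placeholders:

      # One remote per possible address of the laptop
      git remote add laptop-home git://192.168.1.20/repo
      git remote add laptop-work git://10.0.0.15/repo

      git remote update                 # fetches every configured remote in turn
      # or stop at the first address that is actually reachable:
      for r in laptop-home laptop-work; do git fetch "$r" && break; done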

    Read the article

  • C++: What is the size of an object of an empty class?

    - by Ashwin
    I was wondering what the size of an object of an empty class could be. It surely could not be 0 bytes, since it should be possible to reference and point to it like any other object. But how big is such an object? I used this small program:

      #include <iostream>
      using namespace std;

      class Empty {};

      int main()
      {
          Empty e;
          cerr << sizeof(e) << endl;
          return 0;
      }

    The output I got on both the Visual C++ and Cygwin g++ compilers was 1 byte! This was a little surprising to me, since I was expecting it to be the size of the machine word (32 bits, or 4 bytes). Can anyone explain why the size is 1 byte and not 4 bytes? Is this dependent on the compiler or the machine too? Also, can someone give a more cogent reason why an empty class object will not be 0 bytes in size?

    Read the article

  • Inspire Geek Love with These Hilarious Geek Valentines

    - by Eric Z Goodnight
    Want to send some Geek Love to that special someone? Why not do it with these elementary school throwback valentines, and win their heart this upcoming Valentine's day -- the geek way! Read on to see the simple method to make your own custom Valentines, as well as download a set of eleven ready-made ones any geek guy or gal should be delighted to get. It's amore!

    How to Make Custom Valentines: A size we've used for all of our Valentines is 3" x 4" at 150 dpi. This is fairly low resolution for print, but makes a great graphic to email. With your new image open, navigate to Edit > Fill and fill your background layer with a rich, red color (or whatever appeals to you). By setting "Use" to "Foreground color" as shown above, you'll paint whatever foreground color you have in your color picker. Press to select the text tool. Set a few text objects, using whatever fonts appeal to you. Pixel fonts, like this one, are freely downloadable, and we've already shared a great list of Valentines fonts. Copy an image from the internet if you're confident your sweetie won't mind a bit of fair use of copyrighted imagery. If they do mind, find yourself some great Creative Commons images. to do a free transform on your image, sizing it to whatever dimensions work best for your design. Right-click your newly added image layer in your panel and choose "Blending Effects" to pick a Layer Style. "Stroke" with this setting adds a black line around your image. Also turning on "Outer Glow" with this setting puts a dark black shadow around the top and bottom (and sides, although they are hidden). Add some more text; double entendre is recommended. Click and hold down on the "Rectangle Tool" to get the "Custom Shape Tool." The custom shape tool has useful vector shapes built into it. Find the "Shape" dropdown in the menu to find the heart image. Click and drag to create a vector heart shape in your image. Your layers panel is where you can change the color, if it happens to use the wrong one at first. Click the color swatch in your panel, highlighted in blue above. will transform your vector heart. You can also use it to rotate, if you like. Add some details, like this Power or Standby symbol, which can be found in symbol fonts, taken from images online, or drawn by hand. Your Valentine is now ready to be saved as a JPG or PNG and sent to the object of your affection! Keep reading to see a list of 11 downloadable How-To Geek Valentines, including this one and the three from the header image.

    Download The HTG Set of Valentines: Download the HTG Geek Valentines (ZIP). When he's not wooing ladies with Valentines cards, you can email the author at [email protected] with your Photoshop and Graphics questions. Your questions may be featured in a future How-To Geek article!

    Read the article

  • Rsync: windows 7, synology: login error and permission denied error

    - by loonboon
    Good day to you all. I'm running into strange errors, and I hope somebody will be so kind as to help me out. I have to admit I am by no means a guru, so please bear with me :-)

    Situation:

    - Synology NAS (runs Linux) and a Windows 7 desktop (one normal/restricted user, Lisa, and one admin user).
    - Data from the W7 desktop is to be rsynced to the Synology at /volume1/homes/Lisa/Backup.
    - Rsync command: c:\cygwin\bin\rsync -avz /cygdrive/e/Lisa/ [email protected]:/volume1/homes/Lisa/Backup
    - I've set up ssh per these two threads: http://www.cesareriva.com/archives/102 and http://www.cesareriva.com/archives/112

    Now the horrors begin:

    - root is allowed to run the rsync successfully; however, root does not log in automatically, so I can not use rsync in W7 batch scripts (which is of course required).
    - Lisa is allowed to log in automatically, but can not successfully finish the rsync command because of permission errors: "rsync change dir /volume1/homes/Lisa/Backup failed: permissions denied". This happens for each and every file and subdirectory rsync tries to create; however, the main directory (Backup) is created.

    When I try to copy files from Windows Explorer to the directory 'Backup' using the very same user Lisa, everything goes smoothly. So, obviously, there is a permission problem somewhere: either my rsync command isn't correct, or the folder permissions for homes/Lisa aren't correct (but, then again, Windows 7 copies files to that folder without any problems, so I don't believe the homes/Lisa permissions are the problem). I also tried adding --chmod=Dugo+x --chmod=ugo+r, which I found somewhere on the web, to the rsync command, but this didn't solve anything and gave the exact same errors.

    Would anybody please help me fix this? I am utterly frustrated, because I have been trying for a month to get everything to work and it simply doesn't. I bought the big Synology to end the horrors of 20 external USB disks once and for all (we have many pictures and home videos of our deceased dogs and want to watch these; the horror being 'what material is on what disk'). I'll gladly return the favour of somebody helping me out by buying you a nice beer (PayPal), if you could end my misery. I am not extremely skilled in Linux (not at all :-( ), so if you could add an extra word of explanation where possible I'd be very grateful. I really hope somebody can help me out. Thank you in advance, Lisa
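
    A hedged guess, not something stated in the question: if the Backup tree was created during the earlier successful runs as root, it will be owned by root, so Lisa cannot create anything inside it over SSH, even though copying via Windows Explorer (which goes through Samba and its own permission mapping) still works. One cheap thing to test is handing the tree to Lisa on the NAS and rsyncing with permissive permissions ("synology" below stands for the NAS address used in the question):

      # Run once on the Synology, logged in as root:
      chown -R Lisa /volume1/homes/Lisa/Backup
      chmod -R u+rwX /volume1/homes/Lisa/Backup

      # Then retry from the Windows side (same command as in the question, plus --chmod):
      c:/cygwin/bin/rsync -avz --chmod=ugo=rwX /cygdrive/e/Lisa/ Lisa@synology:/volume1/homes/Lisa/Backup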

    Read the article

  • batch file infinite loop when parsing file

    - by Bart
    Okay, this should be a really simple task, but it's proving to be more complicated than I think it should be. I'm clearly doing something wrong and would like someone else's input. What I would like to do is parse through a file containing paths to directories and set permissions on those directories. An example line of the input file (there are several lines, all formatted the same way, each with a different path to a directory):

      E:\stuff\Things\something else (X)\

    (The file in question is generated under Cygwin, using find to list all directories with "(X)" in the name. The file is then passed through unix2win to make it Windows-compatible. I've also tried manually creating the input file from within Windows, to rule out the file's creation method as the problem.)

    Here's where I'm stuck. I wrote the following quick-and-dirty batch file in Windows XP and it worked without any issues at all, but it will not work in Server 2k8. Batch file code to run through the file and set permissions:

      FOR /F "tokens=*" %%A IN (dirlist.txt) DO echo y| cacls "%%A" /T /C /G "Domain Admins":f "Some Group":f "some-security-group":f

    What this is SUPPOSED to do (and does in XP) is loop through the specified file (dirlist.txt) and run cacls.exe on each directory it pulls from the file. The "echo y|" is in there to automagically confirm when cacls helpfully asks "are you sure?" for every directory in the list. Unfortunately, what it DOES is fall into an infinite loop. I've tried surrounding everything after "DO" with quotes, which prevents the endless loop but confuses cacls so it throws an error. Interestingly, I've tried running the code after "DO" manually (obviously replacing the variable with the full path, copied straight from the file) at a command prompt and it runs as expected. I don't think it's the file or the loop, as adding quotes to the command to be executed prevents the loop from continuing past where it's supposed to... I really have no idea at this point. Any help would be appreciated. I have a feeling it's going to be something incredibly stupid... but I'm pulling my hair out, so I thought I'd ask.
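
    One way to sidestep both the cmd quoting and the "are you sure?" prompt -- not something the question tries -- is to drive the loop from the Cygwin side and use icacls, which ships with Server 2008 and does not prompt. A sketch, reusing the group names from the question; the exact /grant syntax here is from memory, so check icacls /? on the server:

      #!/bin/bash
      # Read Windows-style paths from dirlist.txt and grant full control recursively.
      while IFS= read -r dir; do
        dir=${dir%$'\r'}                 # drop a trailing CR if the file is CRLF-terminated
        [ -n "$dir" ] || continue
        icacls "$dir" /T /C /grant 'Domain Admins:(OI)(CI)F' 'Some Group:(OI)(CI)F' 'some-security-group:(OI)(CI)F'
      done < dirlist.txt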

    Read the article

  • OBIEE 11.1.1 - OBIEE 11g Full Sample App on VMware Player 4

    - by user809526
    The Full Sample App is designed to run on Virtual Box. Let's describe how to run it on VMware Player 4. Open Virtualization Format Tool http://communities.vmware.com/community/vmtn/server/vsphere/automationtools/ovf VMware Player Documentation https://www.vmware.com/support/pubs/player_pubs.html Full Sample App Deployment Guide sampleapp107-vbimage-deployguide-453583.pdf INSTALL VMplayer 4.0.0 as root LINUX # sh VMware-Player-4.0.0-471780.x86_64.bundle (A new VM is not needed and can be deleted later after that installation is completed. "I will install OS later" - blank hard disk Guest: linux, Red Hat Enterprise Linux 5-64bits => rename to RHEL target: eg /a/root/vmware/ Max disk size: 5 GB (will be deleted) Disk: Single file Dummy RHEL.vmk, RHEL.vmdk is generated. "Delete VM from Disk" in VM Player.) Copy Full Sample App files to target /a/root/vmware/ WARNING: Select a target eg /a/root/vmware/ with lots of free space, 95 GB. Check checksums (md5sum). Please do it! ff85c7eacf7fb8c382e98da875e879e1  Sampleapp_v107_GA-disk1.vmdk 973258cb3c7d64ab03ae853278cf2233  Sampleapp_v107_GA-disk2.vmdk e576be16e36d810479736bfb15d050f5  Sampleapp_v107_GA-disk3.vmdk 3455df77279e53e07d5fee6712f1597d  Sampleapp_v107_GA-disk4.vmdk OVF FILE   Sampleapp_v107_GA.ovf CONVERSION $ cd /a/root/vmware/ LINUX $ /usr/bin/ovftool -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot] Opening OVF source: Sampleapp_v107_GA.ovf Warning: No manifest file Opening OVF target: . Writing OVF package: Sampleapp_v107_GA/Sampleapp_v107_GA.ovf Disk Transfer Completed                   Completed successfully WINDOWS CYGWIN $ /cygdrive/c/VMwarePlayer/OVFTool/ovftool.exe -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot] Opening OVF source: Sampleapp_v107_GA.ovf Warning: No manifest file Opening OVF target: . Writing OVF package: Sampleapp_v107_GA\Sampleapp_v107_GA.ovf Disk Transfer Completed Completed successfully /a/root/vmware$ du -sk 49095328    .   [50 GB already occupied] IMPORT - First start of VM Player 4: /usr/bin/vmplayer "Open a Virtual Machine" Browse to /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf [the new generated .ovf] "Import Virtual Machine" dialog Name: Sampleapp_v107_GA Location: /a/root/vmware/Sampleapp_v107_GA/storage [was /home/tdubois/vmware/Sampleapp_v107_GA] "Import" "The import failed because /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf did not pass OVF specification conformance or virtual hardware compliance checks. Click Retry to relax OVF specification..." "Retry" ; Long import /a/root/vmware/Sampleapp_v107_GA/storage/Sampleapp_v107_GA.vmx and new .vmdk files are created. /a/root/vmware$ du -sk 95551384    .   [95 GB occupied] Full Sample App GUEST SETUP "Edit VM settings" min 3GB, 2+ processors, network bridged. For OBIEE + Essbase testing use 8 GB RAM hardware. At first time lauch of Full Sample App, leave OEL booting for several minutes undisturbed. Problem with X display server may occur [/usr/bin/Xorg ; man Xorg]. "Failed to start the X server.... Would you like to view the X server output to diagnose the problem?" "No" [tab key] "Would you like to try to configure the X server? Note that you will need the root password for this." "Yes" [oracle] X Display Settings 800x600 saved in /etc/X11/xorg.conf "Trying to restart the X server" Login as root/oracle in guest OEL. In guest OEL, Virtual Machine > Install VMware Tools... 
Extract archive VMwareTools-8.8.0-471268.tar.gz all files in writable local directory eg /root In Terminal run Perl script # cd /root/vmware-tools-distrib ; ./vmware-install.pl [keep all default answers] Set keyboard layout System > Preferences > Keyboard > Layouts Restart X server eg System > Log Out root... , relogin Modify X resolution System > Preferences > Screen Resolution Full Sample App OEL login: oracle/oracle ; root/oracle [default US keyboard layout] Credentials are described in the 'sampleapp107-vbimage-deployguide-453583.pdf' The large files in /a/root/vmware/ /a/root/vmware/Sampleapp_v107_GA/ may be removed. FAILURE REMARK: Adding the 4 original Sampleapp_v107_GA-disks[1234].vmdk to VM Player does NOT work as described below. "Edit VM settings" "Remove" "Hard Disk" "Edit VM settings" "Add" "Hard Disk" "Next" "Use an existing virtual disk" "Browse" "Finish" "Keep existing format" "Ok" for each 4 disks settings one by one. Start VM Player 4. "You do not have write access to a partition" Allow all Sampleapp_v107 OEL linux launches. OEL stalls silently after 'Checking filesystems'.

    Read the article

  • OperationalError "unable to open database file" processing query results with SQLAlchemy and SQLite3

    - by Peter
    I'm running into this little problem that I hope is just a dumb user error. It looks like some sort of a size limit with a query to a SQLite database. I managed to reproduce the issue with an in-memory DB and a simple script shown below. I can make it work by either reducing the number of records in the DB; or by reducing the size of each record; or by dropping the order_by() call. I am using Python 2.5.5 and SQLAlchemy 0.6.0 in a Cygwin environment. Thanks! #!/usr/bin/python from sqlalchemy.orm import sessionmaker import sqlalchemy import sqlalchemy.orm class Person(object): def __init__(self, name): self.name = name engine = sqlalchemy.create_engine('sqlite:///:memory:') Session = sessionmaker(bind=engine) metadata = sqlalchemy.schema.MetaData(bind=engine) person_table = sqlalchemy.Table('person', metadata, sqlalchemy.Column('id', sqlalchemy.types.Integer, primary_key=True), sqlalchemy.Column('name', sqlalchemy.types.String)) metadata.create_all(engine) sqlalchemy.orm.mapper(Person, person_table) session = Session() session.add_all([Person("012345678901234567890123456789012") for i in range(5000)]) session.commit() persons = session.query(Person).order_by(Person.name).all() print "count =", len(persons) session.close() The all() call to the query result fails with the OperationalError exception: Traceback (most recent call last): File "./stress.py", line 27, in <module> persons = session.query(Person).order_by(Person.name).all() File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1343, in all return list(self) File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1451, in __iter__ return self._execute_and_instances(context) File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1456, in _execute_and_instances mapper=self._mapper_zero_or_none()) File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py", line 737, in execute clause, params or {}) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1109, in execute return Connection.executors[c](self, object, multiparams, params) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1186, in _execute_clauseelement return self.__execute_context(context) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1215, in __execute_context context.parameters[0], context=context) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1284, in _cursor_execute self._handle_dbapi_exception(e, statement, parameters, cursor, context) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1282, in _cursor_execute self.dialect.do_execute(cursor, statement, parameters, context=context) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.OperationalError: (OperationalError) unable to open database file u'SELECT person.id AS person_id, person.name AS person_name \nFROM person ORDER BY person.name' ()
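
    A hedged guess about the cause, not confirmed in the question: the ORDER BY on a non-indexed column makes SQLite build a temporary sort structure, and when that spills to disk SQLite needs a writable temp directory, which under Cygwin can resolve to somewhere unusable; the generic "unable to open database file" error is what that tends to look like. A cheap way to test the theory from the shell, before touching the code (stress.py is the reproduction script from the traceback):

      # Point SQLite (and anything else honouring TMPDIR) at a directory that
      # definitely exists and is writable, then rerun the reproduction script.
      mkdir -p /tmp/sqlite-tmp
      export SQLITE_TMPDIR=/tmp/sqlite-tmp TMPDIR=/tmp/sqlite-tmp
      python stress.py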

    Read the article

  • Installing PyGraphViz on Windows, Python 2.7

    - by user574043
    I can't install pygraphviz on Windows XP. I'm using Python27. Before to launch the setup I've changet these two variables of the setup.py file library_path="C:\\Archivos de programa\\Graphviz 2.28\\bin" include_path="C:\\Archivos de programa\\Graphviz 2.28\\include\\graphviz" Then I've launched the setup. I'm using mingw32 as a compiler. I don't know what can I do now. I'm using the following command: C:\Python27\pygraphviz-1.1>c:\python27\python setup.py build -c mingw32 And I've got the folling result library_path=C:\Archivos de programa\Graphviz 2.28\bin include_path=C:\Archivos de programa\Graphviz 2.28\include\graphviz running build running build_py running build_ext building 'pygraphviz._graphviz' extension C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall "-IC:\Archivos de programa\Graphviz 2.28\include\graphviz" -Ic:\python27\include -Ic:\python 27\PC -c pygraphviz/graphviz_wrap.c -o build\temp.win32-2.7\Release\pygraphviz\graphviz_wrap.o pygraphviz/graphviz_wrap.c: In function 'agattr_label': pygraphviz/graphviz_wrap.c:2855:5: warning: return makes integer from pointer without a cast writing build\temp.win32-2.7\Release\pygraphviz\_graphviz.def Traceback (most recent call last): File "setup.py", line 146, in <module> package_data = package_data File "c:\python27\lib\distutils\core.py", line 152, in setup dist.run_commands() File "c:\python27\lib\distutils\dist.py", line 953, in run_commands self.run_command(cmd) File "c:\python27\lib\distutils\dist.py", line 972, in run_command cmd_obj.run() File "c:\python27\lib\distutils\command\build.py", line 127, in run self.run_command(cmd_name) File "c:\python27\lib\distutils\cmd.py", line 326, in run_command self.distribution.run_command(command) File "c:\python27\lib\distutils\dist.py", line 972, in run_command cmd_obj.run() File "c:\python27\lib\distutils\command\build_ext.py", line 340, in run self.build_extensions() File "c:\python27\lib\distutils\command\build_ext.py", line 449, in build_extensions self.build_extension(ext) File "c:\python27\lib\distutils\command\build_ext.py", line 531, in build_extension target_lang=language) File "c:\python27\lib\distutils\ccompiler.py", line 741, in link_shared_object extra_preargs, extra_postargs, build_temp, target_lang) File "c:\python27\lib\distutils\cygwinccompiler.py", line 260, in link target_lang) File "c:\python27\lib\distutils\unixccompiler.py", line 218, in link libraries) File "c:\python27\lib\distutils\ccompiler.py", line 1121, in gen_lib_options opt = compiler.runtime_library_dir_option(dir) File "c:\python27\lib\distutils\unixccompiler.py", line 285, in runtime_library_dir_option compiler = os.path.basename(sysconfig.get_config_var("CC")) File "c:\python27\lib\ntpath.py", line 198, in basename return split(p)[1] File "c:\python27\lib\ntpath.py", line 170, in split d, p = splitdrive(p) File "c:\python27\lib\ntpath.py", line 125, in splitdrive if p[1:2] == ':': TypeError: 'NoneType' object is not subscriptable Any idea on how to solve it over windows? In other computer with Ubuntu I've installed without problems.

    Read the article

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in. These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...) I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems to not work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons: Leaving a big fat open proxy like that just stinks A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied Requirements: Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS) I have or can write a simple port forwarder client that runs under Windows (or Cygwin) Edit DAG: Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection preventing tunnelling over HTTPS, exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's better solutions available.

    Read the article

  • How do gitignore exclusion rules actually work?

    - by meowsqueak
    I'm trying to solve a gitignore problem on a large directory structure, but to simplify my question I have reduced it to the following. I have the following directory structure of two files (foo, bar) in a brand new git repository (no commits so far): a/b/c/foo a/b/c/bar Obviously, a 'git status -u' shows: # Untracked files: ... # a/b/c/bar # a/b/c/foo What I want to do is create a .gitignore file that ignores everything inside a/b/c but does not ignore the file 'foo'. If I create a .gitignore thus: c/ Then a 'git status -u' shows both foo and bar as ignored: # Untracked files: ... # .gitignore Which is as I expect. Now if I add an exclusion rule for foo, thus: c/ !foo According to the gitignore manpage, I'd expect this to to work. But it doesn't - it still ignores foo: # Untracked files: ... # .gitignore This doesn't work either: c/ !a/b/c/foo Neither does this: c/* !foo Gives: # Untracked files: ... # .gitignore # a/b/c/bar # a/b/c/foo In that case, although foo is no longer ignored, bar is also not ignored. The order of the rules in .gitignore doesn't seem to matter either. This also doesn't do what I'd expect: a/b/c/ !a/b/c/foo That one ignores both foo and bar. One situation that does work is if I create the file a/b/c/.gitignore and put in there: * !foo But the problem with this is that eventually there will be other subdirectories under a/b/c and I don't want to have to put a separate .gitignore into every single one - I was hoping to create 'project-based' .gitignore files that can sit in the top directory of each project, and cover all the 'standard' subdirectory structure. This also seems to be equivalent: a/b/c/* !a/b/c/foo This might be the closest thing to "working" that I can achieve, but the full relative paths and explicit exceptions need to be stated, which is going to be a pain if I have a lot of files of name 'foo' in different levels of the subdirectory tree. Anyway, either I don't quite understand how exclusion rules work, or they don't work at all when directories (rather than wildcards) are ignored - by a rule ending in a / Can anyone please shed some light on this? Is there a way to make gitignore use something sensible like regular expressions instead of this clumsy shell-based syntax? I'm using and observe this with git-1.6.6.1 on Cygwin/bash3 and git-1.7.1 on Ubuntu/bash3.
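
    For what it's worth, the behaviour described above is by design: once a directory itself is excluded (a pattern naming the directory, or ending in /), git does not descend into it at all, so no later pattern can re-include a file inside it. The workable layouts are therefore exactly the ones found above: exclude the directory's *contents* rather than the directory, then re-include the file. A small sketch of the top-level .gitignore, written from the repository root with the paths from the question:

      # Write the two-line whitelist-style .gitignore and check the result.
      printf '%s\n' 'a/b/c/*' '!a/b/c/foo' > .gitignore
      git status -u   # foo should be listed as untracked; bar stays ignored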

    Read the article

  • ImageMagick on Mac OSX Snow Leopard. Is there any way to get it to work?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than any other platform--including Windows cygwin For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, Mac Ports, fails: tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick ---> Computing dependencies for p5-locale-gettext ---> Configuring p5-locale-gettext Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2 Command output: checking for gettext... no checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18. no Error: Unable to upgrade port: 1 Error: Unable to execute port: upgrade xorg-libXt failed Before reporting a bug, first run the command again with the -d flag to get complete output. tppllc-Mac-Pro:ImageMagick-sl swirsky$ Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl This script downloads and installs an ImageMagic independently of MacPorts issues tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap It downloads everything and compiles fine, but fails when I try to run it, with the message above. So now I'm two steps away from ImageMagick, trying to get a newer libiconv on my machine. I downloaded the latest libiconv, compiled and built it. I put the resulting library in /opt/local/lib, and I still get the same error message: tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib tppllc-Mac-Pro:.libs swirsky$ convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap So here's my question: Is it possible to get ImageMagick to run on OSX Snow Leopard? Are there any binary distributions that have static libraries baked in so I don't have to worry about these issue/

    Read the article

  • ImageMagick on Mac OSX Snow Leopard. Is there any way to get it to compile and run?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than any other platform--including Windows cygwin For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, Mac Ports, fails: tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick ---> Computing dependencies for p5-locale-gettext ---> Configuring p5-locale-gettext Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2 Command output: checking for gettext... no checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18. no Error: Unable to upgrade port: 1 Error: Unable to execute port: upgrade xorg-libXt failed Before reporting a bug, first run the command again with the -d flag to get complete output. tppllc-Mac-Pro:ImageMagick-sl swirsky$ Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl This script downloads and installs an ImageMagic independently of MacPorts issues tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap It downloads everything and compiles fine, but fails when I try to run it, with the message above. So now I'm two steps away from ImageMagick, trying to get a newer libiconv on my machine. I downloaded the latest libiconv, compiled and built it. I put the resulting library in /opt/local/lib, and I still get the same error message: tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib tppllc-Mac-Pro:.libs swirsky$ convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap Now here's something interesting. The error message shows it's looking in /opt/local/lib/libiconv.2.dylib. otools -L shows that this does implement 8.0.0: tppllc-Mac-Pro:.libs swirsky$ otool -L /opt/local/lib/libiconv.2.dylib /opt/local/lib/libiconv.2.dylib: /usr/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.0) tppllc-Mac-Pro:.libs swirsky$ And, for good measure, I set the DYLD_LIBRARY_PATH to make sure this directory is the one for dynamic libraries. So even though I do have a library that provides 8.0.0, it's being seen as 7.0.0! Any ideas why this would happen? So here's my question: Is it possible to get ImageMagick to run on OSX Snow Leopard? Are there any binary distributions that have static libraries baked in so I don't have to worry about these issue/

    Read the article

  • gcc, strict-aliasing, and casting through a union

    - by Joseph Quinsey
    About a year ago the following paragraph was added to the GCC Manual, version 4.3.4, regarding -fstrict-aliasing: Similarly, access by taking the address, casting the resulting pointer and dereferencing the result has undefined behavior [emphasis added], even if the cast uses a union type, e.g.: union a_union { int i; double d; }; int f() { double d = 3.0; return ((union a_union *)&d)->i; } Does anyone have an example to illustrate this undefined behavior? Note this question is not about what the C99 standard says, or does not say. It is about the actual functioning of gcc, and other existing compilers, today. My simple, naive, attempt fails. For example: #include <stdio.h> union a_union { int i; double d; }; int f1(void) { union a_union t; t.d = 3333333.0; return t.i; // gcc manual: 'type-punning is allowed, provided ...' } int f2(void) { double d = 3333333.0; return ((union a_union *)&d)->i; // gcc manual: 'undefined behavior' } int main(void) { printf("%d\n", f1()); printf("%d\n", f2()); return 0; } works fine, giving on CYGWIN: -2147483648 -2147483648 Also note that taking addresses is obviously wrong (or right, if you are trying to illustrate undefined behavior). For example, just as we know this is wrong: extern void foo(int *, double *); union a_union t; t.d = 3.0; foo(&t.i, &t.d); // UD behavior so is this wrong: extern void foo(int *, double *); double d = 3.0; foo(&((union a_union *)&d)->i, &d); // UD behavior For background discussion about this, see for example: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1422.pdf http://gcc.gnu.org/ml/gcc/2010-01/msg00013.html http://davmac.wordpress.com/2010/02/26/c99-revisited/ http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html http://stackoverflow.com/questions/98650/what-is-the-strict-aliasing-rule http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041 The first link, draft minutes of an ISO meeting seven months ago, notes in section 4.16: Is there anybody that thinks the rules are clear enough? No one is really able to interpret tham.

    Read the article

  • Makefile issue with compiling a C++ program

    - by Steve
    I recently got MySQL compiled and working on Cygwin, and got a simple test example from online to verify that it worked. The test example compiled and ran successfully. However, when I incorporate MySQL into a hobby project of mine it doesn't compile, which I believe is due to how the Makefile is set up. I have no experience with Makefiles, and after reading tutorials about them I have a better grasp but still can't get it working correctly. When I try to compile my hobby project I receive errors such as:

      Obj/Database.o:Database.cpp:(.text+0x492): undefined reference to `_mysql_insert_id'
      Obj/Database.o:Database.cpp:(.text+0x4c1): undefined reference to `_mysql_affected_rows'
      collect2: ld returned 1 exit status
      make[1]: *** [build] Error 1
      make: *** [all] Error 2

    Here is my Makefile; it worked for compiling and building the source before I attempted to add MySQL support to the project. The LIBMYSQL paths are correct, verified with 'mysql_config'.

      COMPILER = g++
      WARNING1 = -Wall -Werror -Wformat-security -Winline -Wshadow -Wpointer-arith
      WARNING2 = -Wcast-align -Wcast-qual -Wredundant-decls
      LIBMYSQL = -I/usr/local/include/mysql -L/usr/local/lib/mysql -lmysqlclient
      DEBUGGER = -g3
      OPTIMISE = -O
      C_FLAGS  = $(OPTIMISE) $(DEBUGGER) $(WARNING1) $(WARNING2) -export-dynamic $(LIBMYSQL)
      L_FLAGS  = -lz -lm -lpthread -lcrypt $(LIBMYSQL)
      OBJ_DIR  = Obj/
      SRC_DIR  = Source/
      MUD_EXE  = project
      MUD_DIR  = TestP/
      LOG_DIR  = $(MUD_DIR)Files/Logs/
      ECHOCMD  = echo -e
      L_GREEN  = \e[1;32m
      L_WHITE  = \e[1;37m
      L_BLUE   = \e[1;34m
      L_RED    = \e[1;31m
      L_NRM    = \e[0;00m
      DATE     = `date +%d-%m-%Y`

      FILES    = $(wildcard $(SRC_DIR)*.cpp)
      C_FILES  = $(sort $(FILES))
      O_FILES  = $(patsubst $(SRC_DIR)%.cpp, $(OBJ_DIR)%.o, $(C_FILES))

      all:
          @$(ECHOCMD) " Compiling $(L_RED)$(MUD_EXE)$(L_NRM).";
          @$(MAKE) -s build

      build: $(O_FILES)
          @rm -f $(MUD_EXE)
          $(COMPILER) -o $(MUD_EXE) $(L_FLAGS) $(O_FILES)
          @echo " Finished Compiling $(MUD_EXE).";
          @chmod g+w $(MUD_EXE)
          @chmod a+x $(MUD_EXE)
          @chmod g+w $(O_FILES)

      $(OBJ_DIR)%.o: $(SRC_DIR)%.cpp
          @echo " Compiling $@";
          $(COMPILER) -c $(C_FLAGS) $< -o $@

      .cpp.o:
          $(COMPILER) -c $(C_FLAGS) $<

      clean:
          @echo " Complete compile on $(MUD_EXE).";
          @rm -f $(OBJ_DIR)*.o $(MUD_EXE)
          @$(MAKE) -s build

    I like the functionality of the Makefile: instead of spitting out all the arguments, it just prints "Compiling [filename]" and so on. If I add -c to L_FLAGS then it compiles (I think) but instead spits out stuff like:

      g++: Obj/Database.o: linker input file unused because linking not done

    After a full day of trying and research on Google, I'm no closer to solving my problem, so I come to you to see if you can explain why this is happening and, if possible, the steps to solve it. Regards, Steve
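
    Not part of the question, but one detail worth checking: with GNU ld, libraries are resolved left to right, so -lmysqlclient generally has to appear after the object files that reference it. In the build rule above the $(L_FLAGS) libraries come before the objects; swapping the order is a small, low-risk change to try:

      build: $(O_FILES)
          @rm -f $(MUD_EXE)
          $(COMPILER) -o $(MUD_EXE) $(O_FILES) $(L_FLAGS)   # objects first, then the -L/-l libraries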

    Read the article

  • Rsync: how to mount truecrypt on-the-fly on the receiving side?

    - by deepc
    The short version: how can I keep an rsync backup on a truecrypt volume? The hard part is to mount/unmount this volume on the fly when it is needed for rsync. Details This is my current backup configuration (which works fairly well for the most part): backup source is on Win7 64 bit, destination is a remote Linux box (Debian) actual data transfer is done by rsync via ssh (cwRsync with cygwin) rsync daemon is started on demand via ssh On the Linux box the backup is protected by file permissions only. I want to increase security here and put the backup into a truecrypt volume. I can fuse-mount that volume manually in the shell. The question is now how can I make rsync not only open an ssh connection and starting the rsync daemon, but also to mount the truecrypt volume before (and unmount it after)? My money is on option --rsync-path which can be used to pass a command line to ssh - provided that stdin and stdout still work the same. I guess that command would have to be a shell script. Is this possible, and what would the script look like? For reference, here's a quote of that option: --rsync-path=PROGRAM Use this to specify what program is to be run on the remote machine to start-up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate. One tricky example is to set a different default directory on the remote machine for use with the --relative option. For instance: rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/ This is the full rsync man page. Truecrypt volume auto-mount Solved! Turns out this option is actually key to auto-mounting the truecrypt volume on the remote side. The following command line does the trick (one line!): rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" --rsync-path="/usr/local/bin/truecrypt -d && /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" $localSourceDir $user:$remoteMountMountDir Truecrypt volume auto-dismount Still open: how can I unmount the volume when rsync is done? Not sure if the following makes sense to anyone but I give it a try... Right now I am unmounting (truecrypt -d), then mounting again, then continuing with rsync. At this time rsync needs to do its thing but I dont know when its done. Adding ... rsync && truecrypt -d to the command line does not work because then the rsync daemon does not start. This is because rsync starts the daemon with parameter --server on the remote side and that parameter would go to the final truecrypt -d.
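
    For the still-open dismount question, one commonly used pattern (not from the post) is to point --rsync-path at a small wrapper script on the Linux box instead of chaining commands on the rsync line: rsync appends its --server arguments to whatever --rsync-path names, so inside the wrapper they arrive as "$@", and the wrapper is free to dismount after the real rsync exits. A sketch, where the script path, volume path and mount point are placeholders:

      #!/bin/bash
      # /usr/local/bin/rsync-in-tc (hypothetical path): mount, run the real rsync, dismount.
      # How the password reaches this script is up to you -- it is not forwarded over ssh.
      /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 \
          --non-interactive -p "$TC_PASSWORD" /path/to/volume /path/to/mountpoint
      /usr/bin/rsync "$@"                      # receives rsync's --server arguments
      rc=$?
      /usr/local/bin/truecrypt -d /path/to/mountpoint
      exit $rc

    The Windows-side call then shrinks to --rsync-path="/usr/local/bin/rsync-in-tc", with no && chaining left to swallow the --server arguments.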

    Read the article

  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated. It is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of now that certain actions on one file could silently affect a number of other files? I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem. I don't see any unexpected consequences. I don't think copying hardlinked files is a problem, but I'm not as confident about any unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has 3 hard disks.) Changing permissions does affect all linked files. So far this has proven handy. (I made a large number of the hardlinked files read-only.) None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode). My proposed solution to this is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule. In that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know? BTW, I'm running Kubuntu 12.04. I'm also using btrfs. (I have 2 SSD's and 1 HDD in the PC. I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.) BTW, since I have more than one drive (with separate file systems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script: sed -i 's/\(.\)/\1/' file1 Surprisingly, this even unlinks zero byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use-case, unlink command doesn't really "unlink", rather it "removes".
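
    Two small helpers that fit this workflow (not from the question; paths and file names are examples): listing which files still share an inode, and breaking a link in place without the copy-to-another-disk detour.

      # List regular files with more than one link, grouped by inode (GNU find).
      find /data -type f -links +1 -printf '%n %i %p\n' | sort -k2,2n

      # Break the hardlink on one name: copy to a temporary name, then replace the original.
      # The replacement gets a fresh inode; cp -p preserves content, mode and timestamps.
      cp -p file1 file1.tmp.$$ && mv file1.tmp.$$ file1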

    Read the article

  • IIS 7.5 on Windows Server 2008 R2 refusing to create PASSIVE MODE FTP connections

    - by Campbell
    I'm attempting to get an FTP client written in perl to transfer files from an IIS 7.5 FTP server using passive mode. I've configured the FTP server as per instructions and have also configured Windows Firewall to allow this type of traffic. I have validated that the firewall is behaviong correctly by checking to ensure there are no blocked packets in the logs. I have verified the that FTP control channel is being opened on Port 21. I believe the client is being told by IIS which port to connect on for passive mode and IIS is refusing to allow this connection. The perl log looks like: C:\cygwin\Perl\lib\FMT>perl FTPTest.pl Net::FTP>>> Net::FTP(2.77) Net::FTP>>> Exporter(5.64_01) Net::FTP>>> Net::Cmd(2.29) Net::FTP>>> IO::Socket::INET(1.31) Net::FTP>>> IO::Socket(1.31) Net::FTP>>> IO::Handle(1.28) Net::FTP=GLOB(0x20abac0)<<< 220 Microsoft FTP Service Net::FTP=GLOB(0x20abac0)>>> USER ftpuser Net::FTP=GLOB(0x20abac0)<<< 331 Password required for ftpuser. Net::FTP=GLOB(0x20abac0)>>> PASS .... Net::FTP=GLOB(0x20abac0)<<< 230 User logged in. Net::FTP=GLOB(0x20abac0)>>> CWD /Logs Net::FTP=GLOB(0x20abac0)<<< 250 CWD command successful. Net::FTP=GLOB(0x20abac0)>>> PASV Net::FTP=GLOB(0x20abac0)<<< 227 Entering Passive Mode (xx,xxx,xxx,xxx,160,41). Net::FTP=GLOB(0x20abac0)>>> RETR filename.txt Can't use an undefined value as a symbol reference at C:/Utilities/strawberryper l/perl/lib/Net/FTP/dataconn.pm line 54. IIS logs look as follows: 2010-10-02 17:40:06 xx.xxx.xx.xx - yy.y.yy.yy ControlChannelOpened - - 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - - 2010-10-02 17:40:06 xx.xxx.xx.xx - yy.y.yy.yy USER ftpuser 331 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - - 2010-10-02 17:40:06 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy PASS *** 230 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a / - 2010-10-02 17:40:06 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy CWD /Logs 250 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a /Logs - 2010-10-02 17:40:06 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy PASV - 227 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - - 2010-10-02 17:40:27 - MACHINENAME\ftpuser zz.z.zz.zzz 41001 DataChannelClosed - - 64 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - - 2010-10-02 17:40:27 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy ControlChannelClosed - - 64 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - - 2010-10-02 17:40:27 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy RETR filename.txt 550 1236 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a filename.txt - We've managed to see this issue with other FTP clients also, I don't think its something funny in Perl. I've been informed that this works fine in the IIS 6 FTP server. I'm wondering if there is something we're missing here.

    Read the article

  • ndd on Solaris 10

    - by user12620111
    This is mostly a repost of LaoTsao's Weblog with some tweaks. Last time that I tried to cut & paste directly off of his page, some of the XML was messed up. I run this from my MacBook. It should also work from your windows laptop if you use cygwin. ================If not already present, create a ssh key on you laptop================ # ssh-keygen -t rsa ================ Enable passwordless ssh from my laptop. Need to type in the root password for the remote machines. Then, I no longer need to type in the password when I ssh or scp from my laptop to servers. ================ #!/usr/bin/env bash for server in `cat servers.txt` do   echo root@$server   cat ~/.ssh/id_rsa.pub | ssh root@$server "cat >> .ssh/authorized_keys" done ================ servers.txt ================ testhost1testhost2 ================ etc_system_addins ================ set rpcmod:clnt_max_conns=8 set zfs:zfs_arc_max=0x1000000000 set nfs:nfs3_bsize=131072 set nfs:nfs4_bsize=131072 ================ ndd-nettune.txt ================ #!/sbin/sh # # ident   "@(#)ndd-nettune.xml    1.0     01/08/06 SMI" . /lib/svc/share/smf_include.sh . /lib/svc/share/net_include.sh # Make sure that the libraries essential to this stage of booting  can be found. LD_LIBRARY_PATH=/lib; export LD_LIBRARY_PATH echo "Performing Directory Server Tuning..." >> /tmp/smf.out # # Standard SuperCluster Tunables # /usr/sbin/ndd -set /dev/tcp tcp_max_buf 2097152 /usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 1048576 /usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 1048576 # Reset the library path now that we are past the critical stage unset LD_LIBRARY_PATH ================ ndd-nettune.xml ================ <?xml version="1.0"?> <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <!-- ident "@(#)ndd-nettune.xml 1.0 04/09/21 SMI" --> <service_bundle type='manifest' name='SUNWcsr:ndd'>   <service name='network/ndd-nettune' type='service' version='1'>     <create_default_instance enabled='true' />     <single_instance />     <dependency name='fs-minimal' type='service' grouping='require_all' restart_on='none'>       <service_fmri value='svc:/system/filesystem/minimal' />     </dependency>     <dependency name='loopback-network' grouping='require_any' restart_on='none' type='service'>       <service_fmri value='svc:/network/loopback' />     </dependency>     <dependency name='physical-network' grouping='optional_all' restart_on='none' type='service'>       <service_fmri value='svc:/network/physical' />     </dependency>     <exec_method type='method' name='start' exec='/lib/svc/method/ndd-nettune' timeout_seconds='3' > </exec_method>     <exec_method type='method' name='stop'  exec=':true'                       timeout_seconds='3' > </exec_method>     <property_group name='startd' type='framework'>       <propval name='duration' type='astring' value='transient' />     </property_group>     <stability value='Unstable' />     <template>       <common_name>     <loctext xml:lang='C'> ndd network tuning </loctext>       </common_name>       <documentation>     <manpage title='ndd' section='1M' manpath='/usr/share/man' />       </documentation>     </template>   </service> </service_bundle> ================ system_tuning.sh ================ #!/usr/bin/env bash for server in `cat servers.txt` do   cat etc_system_addins | ssh root@$server "cat >> /etc/system"   scp ndd-nettune.xml root@${server}:/var/svc/manifest/site/ndd-nettune.xml   scp ndd-nettune.txt root@${server}:/lib/svc/method/ndd-nettune   ssh root@$server chmod +x 
/lib/svc/method/ndd-nettune   ssh root@$server svccfg validate /var/svc/manifest/site/ndd-nettune.xml   ssh root@$server svccfg import /var/svc/manifest/site/ndd-nettune.xml done

    Read the article

  • How I Work: A Cloud Developer's Workstation

    - by BuckWoody
    I've written here a little about how I work during the day, including things like using a stand-up desk (still doing that, by the way). Inspired by a Twitter conversation yesterday, I thought I might explain how I set up my computing environment. First, a couple of important points. I work in Cloud Computing, specifically (but not limited to) Windows Azure. Windows Azure has features to run a Virtual Machine (IaaS), run code without having to control a Virtual Machine (PaaS) and use databases, video streaming, Hadoop and more (a kind of SaaS for tech pros). As such, my designs run the gamut of on-premises, VMs in the Cloud, and software that I write for a platform. I focus on data primarily, meaning that I design a lot of systems that use an RDBMS (like SQL Server or Windows Azure Databases) or a NoSQL approach (MongoDB on Azure or large-scale Key-Value Pairs in Table storage) and even Hadoop and R, and also Cloud Numerics in F#. All that being said, those things inform my choices below.

    Hardware
    I have a Lenovo X220 tablet/laptop which I really like a great deal - it's a light, tough, extremely fast system. When I travel, that's the system I take. It has 8GB of RAM and an SSD drive. I sometimes use that to develop or work at a client's site, on the road, or in the living room when I'm not in my home office. My main system is a Gateway DX430017 - I've maxed it out on RAM, and I have two 1TB drives in it. It's not only my workstation for work; I leave it on all the time and it streams our videos, music and books. I have about 3400 e-books, and I've just started using Calibre to stream the library. I run Windows 8 on it so I can set up Hyper-V images, since Windows Azure allows me to move regular Hyper-V disks back and forth to the Cloud. That's where all my "servers" are, when I have to use an IaaS approach. The reason I use a desktop-style system rather than a laptop-only approach is that a good part of my job is setting up architectures to solve really big, complex problems. That means I have to simulate entire networks on-premises, along with the Hybrid Cloud approach I use a lot. I need a lot of disk space and memory for that, and I use two huge monitors on my stand-up desk. I could probably use 10 monitors if I had the room for them. Also, since it's our home system as well, I leave it on all the time and it doesn't travel.

    Software
    For the software for my systems, it's important to keep in mind that I not only write code, but I design databases, teach, present, and create Linux and other environments.
    - Windows 8 - While the jury is out for me on the new interface, the context-sensitive search, integrated everything, and speed make it just hands-down the right choice. I've evaluated a server OS, Linux, even an Apple, but I just am not as efficient on those as I am with Windows 8.
    - Visual Studio Ultimate - I develop primarily in .NET (C# and F# mostly) and I use the Team Foundation Server in the cloud, and I'm asked to do everything from UI to Services, so I need everything.
    - Windows Azure SDK, Windows Azure Training Kit - I need the first to set up my Azure PaaS coding, and the second has all the info I need for PaaS, IaaS and SaaS. This is primarily how I get paid. :)
    - SQL Server Developer Edition - While I might install Oracle, MySQL and Postgres on my VMs, the "outside" environment is SQL Server for an RDBMS. I install the Developer Edition because it has the same features as Enterprise Edition, and comes with all the client tools and documentation.
    - Microsoft Office - Even if I didn't work here, this is what I would use. I've just grown too accustomed to doing business this way to change, so my advice is always "use what works", and this does. The parts I use are:
      - OneNote (and a Math Add-in) - I do almost everything - and I mean everything - in OneNote. I can code, do high-end math, present, design, collaborate and more. All my notebooks are on my SkyDrive. I can use them from any system, anywhere. If you take the time to learn this program, you'll be hooked.
      - Excel with PowerPivot - Don't make that face. Excel is the world's database, and every Data Scientist I know - even the ones where I teach at the University of Washington - knows it, uses it, and loves it.
      - Outlook - Primary communications, CRM and contact tool. I have all of my social media hooked up to it, so when I get an e-mail from you, I see everything, see all the history we've had on e-mail, find you on a map and more.
      - Lync - I was fine with LiveMeeting, although it has its moments. For me, the Lync client is tres-awesome. I use this throughout my day, present on it, stay in contact with colleagues and the folks on the dev team (who wish I didn't have it) and more.
      - PowerPoint - Once again, don't make that face. Whenever I see someone complaining about PowerPoint, I have 100% of the time found they don't know how to use it. If you suck at presenting or creating content, don't blame PowerPoint. Works great on my machine. :)
    - Zoomit / Magnifier - On Windows 7 (and 8 as well) there's a built-in magnifier, but I install Zoomit out of habit. It enlarges the screen. If you don't use one of these tools (or their equivalent on some other OS) then you're presenting/teaching wrong, and you should stop presenting/teaching until you get them and learn how to show people what you can see on your tiny, tiny monitor. :)
    - Cygwin - Unix for Windows. OK, that's not true, but it's mostly that. I grew up on mainframes and Unix (IBM and HP, thank you) and I can't imagine life without sed, awk, grep, vim, and bash. I also tend to take a lot of the "Science" and "Development" and "Database" packages in it as well.
    - PuTTY - Speaking of Unix, when I need to connect to my Linux VMs in Windows Azure, I want to do it securely. This is the tool for that.
    - Notepad++ - Somewhere between torturing myself in vim and luxuriating in OneNote is Notepad++. Everyone has a favorite text editor; this one is mine. Too many features to name, and it's free.
    - Browsers - I install Chrome, Firefox and of course IE. I know it's in vogue to rant on IE, but I tend to think for myself a great deal, and I've had few (none) problems with it. The others I have for the haterz that make sites that won't run in IE.
    - Visio - I've used a lot of design packages, but none have the extreme meta-data edit capabilities of Visio. I don't use this all the time - it can be rather heavy, but what it does it does really well. I also present this way when I'm not using PowerPoint. Yup, I just bring up Visio and diagram away as I'm chatting with clients. Depending on what we're covering, this can be the right tool for that.
    - Tweetdeck - The AIR one, not that new disaster they came out with. I live on social media, since you, dear readers, are my cube-mates. When I get tired of you all, I close Tweetdeck. When I need help or someone needs help from me, or if I want to see a picture of a cat while I'm coding, I bring it up. It's up most all day and night.
    - Windows Media Player - I listen to Trance or Classical when I code, and I find music managers overbearing and extra. I just use what comes in the box, and it works great for me.
    - R - F# and Cloud Numerics now allow me to load in R libraries (yay!) and I use this for statistical work on big data loads.
    - Microsoft Math - One of the most amazing, free, rich, amazing, awesome, amazing calculators out there. I get the 64-bit version for quick math conversions, plots and formula-checks.
    - Python - I know, right? Who knew that the scientific community loved Python so much. But they do. I use 2.7; not as much runs with 3+. I also use IronPython in Visual Studio, or I edit in Notepad++.
    - Camstudio recorder / Windows PSR - In much of my training, and all of my teaching at the UW, I need to show a process on a screen. Camstudio records screen and voice, and it's free. If I need to make static training, I use the Windows PSR tool that's built right in. It's ostensibly for problem duplication, but I use it to record for training.

    OK - your turn. Post a link to your blog entry below, and tell me how you set your system up.

    Read the article

  • SSH as root using public key still prompts for password on RHEL 6.1

    - by Dean Schulze
    I've generated RSA keys with Cygwin's ssh-keygen and copied them to the server with

    ssh-copy-id -i id_rsa.pub root@my.ip.address

    I've got the following settings in my /etc/ssh/sshd_config file:

    RSAAuthentication yes
    PubkeyAuthentication yes
    AuthorizedKeysFile .ssh/authorized_keys
    PermitRootLogin yes

    When I ssh root@my.ip.address it still prompts for a password. The output below from /usr/sbin/sshd -d says that a matching key was found in the .ssh/authorized_keys file, but it still requires a password from the client. I've read a bunch of web postings about permissions on files and directories, but nothing works. Is it possible to ssh with keys in RHEL 6.1, or is this forbidden? The debug output from ssh and sshd is below.

    $ ssh -v root@my.ip.address
    OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012
    debug1: Connecting to my.ip.address [my.ip.address] port 22.
    debug1: Connection established.
    debug1: identity file /home/dschulze/.ssh/id_rsa type 1
    debug1: identity file /home/dschulze/.ssh/id_rsa-cert type -1
    debug1: identity file /home/dschulze/.ssh/id_dsa type 2
    debug1: identity file /home/dschulze/.ssh/id_dsa-cert type -1
    debug1: identity file /home/dschulze/.ssh/id_ecdsa type -1
    debug1: identity file /home/dschulze/.ssh/id_ecdsa-cert type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
    debug1: match: OpenSSH_5.3 pat OpenSSH_5*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_6.1
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Server host key: RSA 9f:00:e0:1e:a2:cd:05:53:c8:21:d5:69:25:80:39:92
    debug1: Host 'my.ip.address' is known and matches the RSA host key.
    debug1: Found key in /home/dschulze/.ssh/known_hosts:3
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: Roaming not allowed by server
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /home/dschulze/.ssh/id_rsa
    debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
    debug1: Offering DSA public key: /home/dschulze/.ssh/id_dsa
    debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
    debug1: Trying private key: /home/dschulze/.ssh/id_ecdsa
    debug1: Next authentication method: password

    Here is the server output from /usr/sbin/sshd -d

    [root@ga2-lab .ssh]# /usr/sbin/sshd -d
    debug1: sshd version OpenSSH_5.3p1
    debug1: read PEM private key done: type RSA
    debug1: private host key: #0 type 1 RSA
    debug1: read PEM private key done: type DSA
    debug1: private host key: #1 type 2 DSA
    debug1: rexec_argv[0]='/usr/sbin/sshd'
    debug1: rexec_argv[1]='-d'
    debug1: Bind to port 22 on 0.0.0.0.
    Server listening on 0.0.0.0 port 22.
    debug1: Bind to port 22 on ::.
    Server listening on :: port 22.
    debug1: Server will not fork when running in debugging mode.
    debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
    debug1: inetd sockets after dupping: 3, 3
    Connection from 172.60.254.24 port 53401
    debug1: Client protocol version 2.0; client software version OpenSSH_6.1
    debug1: match: OpenSSH_6.1 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_5.3
    debug1: permanently_set_uid: 74/74
    debug1: list_hostkey_types: ssh-rsa,ssh-dss
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received
    debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT
    debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: KEX done
    debug1: userauth-request for user root service ssh-connection method none
    debug1: attempt 0 failures 0
    debug1: PAM: initializing for "root"
    debug1: userauth-request for user root service ssh-connection method publickey
    debug1: attempt 1 failures 0
    debug1: test whether pkalg/pkblob are acceptable
    debug1: PAM: setting PAM_RHOST to "172.60.254.24"
    debug1: PAM: setting PAM_TTY to "ssh"
    debug1: temporarily_use_uid: 0/0 (e=0/0)
    debug1: trying public key file /root/.ssh/authorized_keys
    debug1: fd 4 clearing O_NONBLOCK
    debug1: matching key found: file /root/.ssh/authorized_keys, line 1
    Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c
    debug1: restore_uid: 0/0
    Postponed publickey for root from 172.60.254.24 port 53401 ssh2
    debug1: userauth-request for user root service ssh-connection method publickey
    debug1: attempt 2 failures 0
    debug1: temporarily_use_uid: 0/0 (e=0/0)
    debug1: trying public key file /root/.ssh/authorized_keys
    debug1: fd 4 clearing O_NONBLOCK
    debug1: matching key found: file /root/.ssh/authorized_keys, line 1
    Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c
    debug1: restore_uid: 0/0
    debug1: ssh_rsa_verify: signature correct
    debug1: do_pam_account: called
    Accepted publickey for root from 172.60.254.24 port 53401 ssh2
    debug1: monitor_child_preauth: root has been authenticated by privileged process
    debug1: temporarily_use_uid: 0/0 (e=0/0)
    debug1: ssh_gssapi_storecreds: Not a GSSAPI mechanism
    debug1: restore_uid: 0/0
    debug1: SELinux support enabled
    debug1: PAM: establishing credentials
    PAM: pam_open_session(): Authentication failure
    debug1: Entering interactive session for SSH2.
    debug1: server_init_dispatch_20
    debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384
    debug1: input_session_request
    debug1: channel 0: new [server-session]
    debug1: session_new: session 0
    debug1: session_open: channel 0
    debug1: session_open: session 0: link with channel 0
    debug1: server_input_channel_open: confirm session
    debug1: server_input_global_request: rtype [email protected] want_reply 0
    debug1: server_input_channel_req: channel 0 request pty-req reply 1
    debug1: session_by_channel: session 0 channel 0
    debug1: session_input_channel_req: session 0 req pty-req
    debug1: Allocating pty.
    debug1: session_pty_req: session 0 alloc /dev/pts/1
    ssh_selinux_setup_pty: security_compute_relabel: Invalid argument
    debug1: server_input_channel_req: channel 0 request shell reply 1
    debug1: session_by_channel: session 0 channel 0
    debug1: session_input_channel_req: session 0 req shell
    debug1: Setting controlling tty using TIOCSCTTY.
    debug1: Received SIGCHLD.
    debug1: session_by_pid: pid 17323
    debug1: session_exit_message: session 0 channel 0 pid 17323
    debug1: session_exit_message: release channel 0
    debug1: session_pty_cleanup: session 0 release /dev/pts/1
    debug1: session_by_channel: session 0 channel 0
    debug1: session_close_by_channel: channel 0 child 0
    debug1: session_close: session 0 pid 0
    debug1: channel 0: free: server-session, nchannels 1
    Received disconnect from 172.60.254.24: 11: disconnected by user
    debug1: do_cleanup
    debug1: PAM: cleanup
    debug1: PAM: deleting credentials
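    Since the server log shows the public key being accepted and the failure coming from PAM and SELinux right afterwards, a hedged troubleshooting sketch of the usual suspects on RHEL 6 (assuming the standard paths shown in the logs above) is to tighten permissions, restore the SELinux labels on root's .ssh, and watch the PAM side while retrying:

        # run on the server as root
        chmod 700 /root/.ssh
        chmod 600 /root/.ssh/authorized_keys
        # relabel in case the ssh_home_t context was lost when the keys were copied over
        restorecon -R -v /root/.ssh
        # then retry the login and watch the PAM/SELinux messages
        tail -f /var/log/secure

    If the pam_open_session failure persists, checking getenforce and looking for related denials in /var/log/audit/audit.log would be the next step.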

    Read the article

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: Portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com, asking here at another user's suggestion.

    I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require Admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options; all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual license, but I'm more interested in finding the right tool than anything else. Not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET.

    Requirements:
    - Manageable footprint (1-100 files, under 30 MB installed)
    - Run on Windows XP and Server (2003+)
    - No installer (exe, msi)
    - Works with external pipes, processes, and files
    - Support for MS SQL Server or ODBC connections

    Bonus Points:
    - Open Source
    - FFI for calling functions in native DLLs
    - GUI support (native or gtk, wx, fltk, etc)
    - Linux, AIX, and/or OS X support
    - Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development
    - Able to package or compile scripts into executables

    So far I've tried:
    - Ruby: 148 MB on disk, 23000 files
    - Portable Python: 54 MB on disk, 2800 files
    - Strawberry Perl: 123 MB on disk, 3600 files
    - REBOL: Great, except closed source and no MSSQL or ODBC in free version
    - Squeak Smalltalk: Great, except poor support for scripting

    ---- cut: points of clarification ----

    Why all the limitations? I realize some of my criteria seem arbitrarily confining. It's primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and where I don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine - I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, suddenly it becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and get the problem solved. That's why I want something that can be copied and run in the manner of a portable app.

    What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links.
    Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine I've never logged in to. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.
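    As a hedged aside on the footprint numbers above: assuming a candidate runtime has simply been unpacked into a folder (the path below is a placeholder) and that a portable sh with find, du and wc is on hand, the file count and installed size can be checked with a couple of lines of shell:

        #!/bin/sh
        # count the files and total size of an unpacked runtime folder
        DIR="d:/portable/ruby"
        echo "Files: $(find "$DIR" -type f | wc -l)"
        echo "Size (KB): $(du -sk "$DIR" | cut -f1)"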

    Read the article

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files - images, videos etc. The download seems to be capped at around 550kB/s, even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here.

    I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

    SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes)
    EnableSendfile Off #(minor performance boost)
    EnableMMAP Off
    Win32DisableAcceptEx
    HostnameLookups Off #(default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards:

    HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
    HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me.

    So:
    AnalogX HTTP: 3300kB/s
    Gene6 FTPD, plain: 3500kB/s
    Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s
    freeSSHD: 1100kB/s
    SMB shared folder: about 3000kB/s
    Apache HTTP, plain: 550kB/s
    Apache HTTPS: 2700kB/s

    Clients that were used in the bandwidth testing:
    Internet Explorer 8 (HTTP, HTTPS)
    Firefox 8 (HTTP, HTTPS)
    Chrome 13 (HTTP, HTTPS)
    Opera 11.60 (HTTP, HTTPS)
    wget under CygWin (HTTP, HTTPS)
    FileZilla (FTP, FTPS, FTP+ES, SFTP)
    Windows Explorer (SMB)

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks!

    Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (a Firefox extension). The output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the liberty to edit the URL, removing folders and changing the actual server's domain name to www.mydomain.com.
    Here it is:

    LiveHTTPHeaders, Plain HTTP:

    http://www.mydomain.com/elephantsdream_source.264

    GET /elephantsdream_source.264 HTTP/1.1
    Host: www.mydomain.com
    User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
    Accept-Encoding: gzip, deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Connection: keep-alive

    HTTP/1.1 200 OK
    Date: Wed, 21 Dec 2011 20:55:16 GMT
    Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
    Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
    Etag: "c000000013fa5-29cf10e9-493b311889d3c"
    Accept-Ranges: bytes
    Content-Length: 701436137
    Keep-Alive: timeout=15, max=100
    Connection: Keep-Alive
    Content-Type: text/plain

    LiveHTTPHeaders, HTTPS:

    https://www.mydomain.com/elephantsdream_source.264

    GET /elephantsdream_source.264 HTTP/1.1
    Host: www.mydomain.com
    User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
    Accept-Encoding: gzip, deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Connection: keep-alive

    HTTP/1.1 200 OK
    Date: Wed, 21 Dec 2011 20:56:57 GMT
    Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
    Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
    Etag: "c000000013fa5-29cf10e9-493b311889d3c"
    Accept-Ranges: bytes
    Content-Length: 701436137
    Keep-Alive: timeout=15, max=100
    Connection: Keep-Alive
    Content-Type: text/plain
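    As a cross-check outside the browser, a short loop around curl can reproduce the HTTP-vs-HTTPS comparison (a sketch only - it assumes curl is available, for example under CygWin, and reuses the placeholder domain and test file from the headers above):

        #!/bin/bash
        # compare the average download speed of the same file over HTTP and HTTPS
        for url in "http://www.mydomain.com/elephantsdream_source.264" \
                   "https://www.mydomain.com/elephantsdream_source.264"; do
            # -o /dev/null discards the body, -k tolerates a self-signed certificate,
            # and %{speed_download} prints the average rate in bytes per second
            speed=$(curl -sk -o /dev/null -w '%{speed_download}' "$url")
            echo "$url -> $speed bytes/s"
        done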

    Read the article

  • Best Method to SFTP or FTPS Files via SSIS

    - by Registered User
    What is the best method using SSIS (SQL Server Integration Services) to upload a file to either a remote SFTP (secure FTP with the SSH2 protocol) or FTPS (FTP over SSL) site? I've used the following methods, but each has shortcomings I would like to avoid:

    COZYROC LIBRARY
    Method: Install the CozyRoc library on each development and production server and use the SFTP task to upload the files.
    Pros:
    - Easy to use. It looks, smells, and feels like a normal SSIS task.
    - SSIS also recognizes the password as sensitive information and allows you all the normal options for protecting the sensitive information instead of just storing it in clear text in a non-secure manner.
    - Works well with other SSIS tasks such as ForEach Loop Containers.
    - Errors out when uploads and downloads fail.
    - Works well when you don't know the names of the files on the remote FTP site to download, or when you won't know the name of the file to upload until run-time.
    Cons:
    - Costs money to license in a production environment.
    - Makes you dependent upon the vendor to update their libraries between each version. Although they already have a 2008 version, this caused me a problem during the CTPs of 2008.
    - Requires installing the libraries on each development and production machine.

    COMMAND LINE SFTP PROGRAM
    Method: Install a free command-line SFTP application such as Putty and execute it either by running a batch file or an operating system process task.
    Pros:
    - Free, free, and free.
    - You can be sure it is secure if you are using Putty, since numerous GUI FTP clients appear to use Putty under the covers.
    - You DEFINITELY know you are using SSH2 and not SSH.
    Cons:
    - The two command-line utilities I tried (Putty and Cygwin) required storing the SFTP password in a non-secure location.
    - I haven't found a good way to capture failures or errors when uploading files.
    - The process doesn't look and smell like SSIS. Most of the code is encapsulated in text files instead of SSIS itself.
    - Difficult to use if you don't know the exact name of the file you are uploading or downloading.

    A 3RD PARTY C# or VB.NET LIBRARY
    Method: Install an SFTP or FTPS library and use a Script Task that references the library to upload the files. (I've never tried this, so I'm going to guess at the pros and cons.)
    Pros:
    - Probably easy to capture errors.
    - Should work well with variables, so it would probably be easy to use even when you don't know the exact name of the file you are uploading or downloading.
    Cons:
    - It's a script task combined with .NET libraries. If you are using SSIS, then you probably are more comfortable with SSIS tasks than .NET code. Script tasks are also difficult to troubleshoot since they don't have the same debugging tools and features as regular .NET projects.
    - Creates a dependency on 3rd party code that may not work between different versions of SQL Server. To be fair, it is probably MORE likely to work between different versions of SQL Server than a 3rd party SSIS task library.
    - Another huge con - I haven't found a free C# or VB.NET library that does this as of yet. So if anyone knows of one, then please let me know!
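    On the command-line route specifically, the clear-text password and silent-failure drawbacks can be worked around with key-based authentication plus batch mode. The sketch below uses OpenSSH's sftp client, and every host, user, key and file name in it is a placeholder; the same idea should also carry over to Putty's psftp with a key loaded in Pageant:

        #!/bin/bash
        set -e                                   # surface any failure to the calling job or SSIS process task
        batch=$(mktemp)
        echo "put /cygdrive/c/exports/report.csv /inbound/report.csv" > "$batch"
        # -b makes sftp exit non-zero if any command in the batch file fails;
        # -i authenticates with a private key instead of a stored clear-text password
        sftp -b "$batch" -i ~/.ssh/id_rsa sftpuser@sftp.example.com
        rm -f "$batch"
        echo "upload finished"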

    Read the article

< Previous Page | 23 24 25 26 27 28  | Next Page >