Search Results

Search found 6756 results on 271 pages for 'hybrid syntax'.

Page 163/271

  • The overlooked OUTPUT clause

    - by steveh99999
    I often find myself applying ad-hoc data updates to production systems – usually running scripts written by other people. One of my favourite features of SQL syntax is the OUTPUT clause – I find it is rarely used, and I often wonder if this is due to a lack of awareness of the feature. The OUTPUT clause was added to SQL Server in the SQL 2005 release – so it has been around for quite a while now, yet I often see scripts like this… SELECT somevalue FROM sometable WHERE keyval = XXX UPDATE sometable SET somevalue = newvalue WHERE keyval = XXX -- now check the update has worked… SELECT somevalue FROM sometable WHERE keyval = XXX This can be rewritten to achieve the same end result using the OUTPUT clause: UPDATE sometable SET somevalue = newvalue OUTPUT deleted.somevalue AS 'old value', inserted.somevalue AS 'new value' WHERE keyval = XXX The UPDATE statement with the OUTPUT clause also requires less IO - i.e. I've replaced three SQL statements with one, using only a third of the IO. If you are not aware of the power of the OUTPUT clause, I recommend you look it up in Books Online. And finally, here's an example of the output produced using the Northwind database…
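
    To make the rewritten statement easier to read than the run-together excerpt above, here is the same pattern laid out against a made-up table; the table, column names and the literal values are placeholders, not the ones from the post.

        UPDATE dbo.sometable
        SET    somevalue = 'newvalue'
        OUTPUT deleted.somevalue  AS [old value],
               inserted.somevalue AS [new value]
        WHERE  keyval = 123;

    The deleted and inserted pseudo-tables supply the before and after values, so the old and new values come back from the very statement that performs the change.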

    Read the article

  • good literature for teaching object oriented thinking in C [closed]

    - by Dipan Mehta
    Quite often C is the primary platform for development, and when things are large scale, partitioning the system into different objects is quite a natural thing to do; many of the object-oriented analysis and design principles are applied there very well. This is not a debate about whether or not C is a good candidate for object-oriented programming, and it is also NOT a question about how to do OO in C - you can refer to this question, and there are probably many such citations. As far as I am concerned, I have learned some of these things while working with many open source and commercial projects (libjpeg, ffmpeg, GStreamer which is based on GObject). I can offer a few references that explain some of these concepts, such as: 1. an Event Helix article, 2. a Linux Mag article, 3. one of my answers which links Schreiner's reference. Unfortunately, when we induct younger folks, it seems too hard to make them learn all of it the hard way. Usually, when we say it's C, the general reaction is to throw away all of the "object thinking". I am looking for help extending the above references from those who have worked in similar areas. Is there any good formal literature that explains how object thinking can be put to use while working in C? I have seen tons of books on general "object-oriented paradigms", but they all focus on higher-level languages, mostly not on C. Most C books, on the other hand, focus only on the syntax and the obfuscated corners of C, and that's it. There is hardly ANY good reference - especially books or any systematic (I mean formal) literature - on how to apply OO in C. This is very surprising given that so many large-scale open source projects use C and apply this thinking very well, yet we hardly see any good formal literature on the subject.
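
    For anyone who has not seen the pattern these projects use, here is a minimal, purely illustrative sketch (my own, not code from libjpeg, ffmpeg or GStreamer): a struct of function pointers plays the role of the class, and each object carries a pointer to it, which is all dynamic dispatch really needs.

        #include <stdio.h>

        typedef struct Shape Shape;

        typedef struct {
            double (*area)(const Shape *self);   /* "virtual" method */
        } ShapeOps;

        struct Shape {
            const ShapeOps *ops;                  /* per-object vtable pointer */
            double width, height;
        };

        static double rect_area(const Shape *self) { return self->width * self->height; }
        static const ShapeOps rect_ops = { rect_area };

        int main(void) {
            Shape r = { &rect_ops, 3.0, 4.0 };
            printf("area = %.1f\n", r.ops->area(&r));   /* dispatch chosen at run time */
            return 0;
        }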

    Read the article

  • That Tool is cURLy

    When you just use IE, Firefox or Chrome it can be easy to forget that HTTP is about more than just going to check the latest tech news at Engadget. It is a full and rich protocol, and a great way to experience that richness is the powerful command line utility cURL. cURL has a lot of options, but the syntax starts out simple. You can retrieve the contents of a web page with a simple curl http://blogs.claritycon.com/. The results should be the full text of the web page, tags and all. From there, you can use -X to specify the HTTP verb (POST, PUT, DELETE, PATCH, etc.) and -d to specify the payload of a POST or PUT. I have found cURL to be incredibly useful in two scenarios: first, as a good way to test basic web services; second, while working a bit with CouchDB and another document-based database, cURL has helped me learn more about RESTful APIs, including the different verbs and response codes. cURL is a mainstay in our environments and programming languages precisely because it is simple, powerful and discoverable. I encourage more .NET developers to take a look, bask on the command line for a while and enjoy the plain text of the web. -- Relevant Links -- It's not always the case with manuals, but the manual for cURL is quite useful: http://curl.haxx.se/docs/manual.html To make your command line look a little nicer (and more powerful) on Windows, check out Console and add some transparency effects: http://sourceforge.net/projects/console/
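
    As a quick illustration of the flags mentioned above, here are a few hedged examples; the CouchDB-style URLs and the JSON body are invented for the sketch, not endpoints from the post.

        # Fetch a page (GET is the default verb):
        curl http://blogs.claritycon.com/

        # POST a JSON document, choosing the verb with -X and the payload with -d:
        curl -X POST -d '{"title": "hello"}' \
             -H 'Content-Type: application/json' \
             http://localhost:5984/example_db

        # DELETE a (hypothetical) document by id and revision:
        curl -X DELETE 'http://localhost:5984/example_db/doc_id?rev=1-abc123'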

    Read the article

  • Generic software code style enforcer

    - by FuzziBear
    It seems to me to be a fairly common thing to do: you have some code that you'd like to run automatically through a code style tool to catch when people break your coding style guide(s). Particularly if you're working on code that mixes multiple languages (which is becoming more common with web-language-x plus javascript), you generally want to apply similar style guides to both and have them enforced. I've done a bit of research, but I've only been able to find tools that enforce style guidelines (not necessarily applying the style, just telling you when you break it) for one particular language. It would seem to me a reasonably trivial thing to do by just using the current IDE rules for syntax highlighting (so that you don't check style guide rules inside quotes or strings, etc.) and a whole lot of regexes to enforce some really generic things. Examples: requiring if ( rather than if(, or flagging lines that contain only whitespace. Are there any tools that do this kind of really generic style checking? I'd prefer it to be easily configurable for different languages (because like it or not, some things would just not work cross-language) and to allow adding new "rules" to check new things.
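
    A rough, assumed sketch of the kind of generic check described above (this is not an existing tool, and unlike an IDE-aware checker it will also flag matches inside strings and comments):

        # Flag "if(" written without a space, in PHP and JavaScript sources:
        grep -rnE 'if\(' --include='*.php' --include='*.js' src/

        # Flag lines that consist of nothing but whitespace:
        grep -rnE '^[[:space:]]+$' --include='*.php' --include='*.js' src/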

    Read the article

  • G++ Compiling errors

    - by egn56
    I am attempting to do some compiling with g++, and when I run g++ test.cpp this is what I get. I am in the correct directory, and I have even messed with the permission settings to make those directories chmod 777 as a test; still nothing. I tried running it as sudo g++ test.cpp and got nothing. It can compile and create a .o if I run g++ -c test.cpp, but it can't seem to link it and create the executable. Any suggestions? /usr/bin/ld: 1: /usr/bin/ld: /bin: Permission denied /usr/bin/ld: 2: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 3: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 4: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 5: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 6: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 7: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 8: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 9: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 10: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 11: /usr/bin/ld: test.cpp: not found /usr/bin/ld: 12: /usr/bin/ld: Syntax error: "(" unexpected collect2: ld returned 2 exit status

    Read the article

  • Graph data structures and journal format for mini-IDE

    - by matec
    Background: I am writing a small/partial IDE. Code is internally converted/parsed into a graph data structure (for fast navigation, syntax checks, etc.). Undo/redo functionality (also between sessions) and recovery from a crash are implemented by writing to and reading from a journal. The journal records modifications to the graph (not to the source). Question: I am hoping for advice on a decision about data structures and journal format. For the graph I see two possible versions: g-a, graph edges are implemented so that one node stores references to other nodes by memory address; g-b, every node has an ID, there is an ID-to-memory-address map, and the graph uses IDs (instead of addresses) to connect nodes, so moving along an edge from one node to another requires a lookup in the ID-to-address map each time. And likewise for the journal: j-a, there is a current node (like the current working directory in a shell plus file-system setting), and the journal contains entries like "create new node and connect to current" or "connect first child of current node" (relative IDs); j-b, the journal uses absolute IDs, e.g. "delete edge 7 - 5", "delete node 5". I could e.g. combine g-a with j-a or combine g-b with j-b; in principle g-b with j-a should also be possible. [My first/original attempt was g-a with a version of j-b that used addresses, but that turned out to cause severe restrictions: nodes cannot change their addresses (or the journal would have to keep track of it), and using the journal between two sessions is a mess (or even impossible).] I wonder whether variant a, variant b, or a combination would be a good idea, what advantages and disadvantages they would have, and especially whether some variant might cause trouble later.

    Read the article

  • What do you wish language designers paid attention to?

    - by Berin Loritsch
    The purpose of this question is not to assemble a laundry list of programming language features that you can't live without, or wish were in your main language of choice. The purpose of this question is to bring to light corners of language design most language designers might not think about. So, instead of thinking about language feature X, think a little more philosophically. One of my biases, and perhaps it might be controversial, is that the softer side of engineering--the whys and what-fors--is many times more important than the more concrete side. For example, Ruby was designed with a stated goal of improving developer happiness. While your opinions may be mixed on whether it delivered or not, the fact that this was a goal means that some of the choices in language design were influenced by that philosophy. Please do not post: Syntax flame wars (I couldn't care less whether you use whitespace [Python], keywords [Ruby], or curly braces [Java, C/C++, et al.] to denote program blocks). That's just an implementation detail. "Any language that doesn't have feature X doesn't deserve to exist" type comments. There is at least one reason for every programming language to exist--good or bad. Please do post: Philosophical ideas that language designers seem to miss. Technical concepts that seem to be poorly implemented more often than not; please provide an example of the pain they cause, and any ideas of how you would prefer them to function. Things you wish were in the platform's common library but seldom are. By the same token, things that usually are in a common library that you wish were not. Conceptual features such as built-in test/assertion/contract/error-handling support that you wish all programming languages would implement properly--and define properly. My hope is that this will be a fun and stimulating topic.

    Read the article

  • What is the proper response to lousy error message?

    - by William Pursell
    I've just come across (for the 47 millionth time) some code that looks like this: except IOError, e: print "Problems reading file: %s." % filename sys.exit( 1 ) My first reaction is very visceral: the person who coded this is a complete idiot. How hard is it to print error messages to stderr and to include the system error message in the string? I haven't used python in years, and it took me all of 4 minutes to track down the documentation to figure out how to get the error message from the exception object e and the syntax for printing to stderr. My "complete idiot" reaction was slightly lessened since at least a non-zero value is passed to sys.exit, but I still find this code offensive. My prime thought is that the developer who wrote this is a complete novice for whom I have zero respect. Am I over-reacting? Surely there are excuses for all sorts of bad coding practices, but is there anything that can possibly excuse this sort of $#|t? I guess there are two question here: one is a duplicate of What are developer's problems with helpful error messages?, and the other is "am I over-reacting, or is it valid to conclude that the author of the above code is a novice?"
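
    For contrast, a minimal sketch of the version being argued for, keeping the old Python 2 except syntax that appears in the snippet (filename is assumed to be defined earlier in the script):

        import sys

        try:
            f = open(filename)
        except IOError, e:
            # Report to stderr and include the system's own error message.
            sys.stderr.write("Problems reading file %s: %s\n" % (filename, e.strerror))
            sys.exit(1)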

    Read the article

  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder containing a lot of files; running ls -1 | wc -l returns that I have something like 160000 files inside. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in their names, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file is there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an Argument list too long error. Some searching revealed that it happens because I have so many files, and then I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. For example, trying find -name '*:*' | xargs rename : _ gives xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option [\n] syntax error at (eval 1) line 1, near ":" [\n] xargs: rename: exited with status 255; aborting Adding the -0 after xargs turns the error message into xargs: argument line too long These files are archive files generated by various PHP scripts. The best solution would be having a chance to rename them before they are moved to Windows, but if there is no way to do it, we might have a way to rename the files while they are moved to Windows. I use samba and proftpd to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is, a server, with only a command-line interface.
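
    A hedged sketch of the usual workaround (untested against this particular dump of 160,000 files): let find hand the names to rename in batches through xargs, so the shell never has to expand * itself, and give the substitution its missing closing slash:

        # Replace ':' with '_' in the file names, handing them to rename in
        # null-separated batches so long lists and odd characters are safe;
        # a broader expression such as s/[:?]/_/g would cover '?' as well.
        find . -maxdepth 1 -type f -name '*:*' -print0 \
          | xargs -0 -n 500 rename 's/:/_/g'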

    Read the article

  • No Thank You &ndash; Yours Truly &ndash; F#

    - by MarkPearl
    I am plodding along with my F# book. I have reached the part where I know enough about the syntax of the language to understand something if I read it – but not enough about the language to be productive and write something useful. A bit of a frustrating place to be. Needless to say, when you are in this state of mind you end up paging mindlessly through chapters of my F# book with no real incentive to learn anything, until you hit “Exceptions”. Raising an exception explicitly So let's look at raising an exception explicitly – in C# we would throw the exception; F# is a lot more polite: instead of throwing the exception, it raises it… (raise (System.InvalidOperationException("no thank you"))) quite simple… Catching an Exception So I would expect to be able to catch an exception as well – let's look at some C# code first… try { Console.WriteLine("Raise Exception"); throw new InvalidOperationException("no thank you"); } catch { Console.WriteLine("Catch Exception and Carry on.."); } Console.WriteLine("Carry on..."); Console.ReadLine();   The F# equivalent would go as follows… open System; try Console.WriteLine("Raise Exception") raise (System.InvalidOperationException("no thank you")) with | _ -> Console.WriteLine("Catch Exception and Carry on..") Console.WriteLine("Carry on...") Console.ReadLine();   In F# there is a “try, with” and a “try, finally”. Finally… In F# there is a finally block; however, the “with” and “finally” can't be combined. open System; try Console.WriteLine("Raise Exception") raise (System.InvalidOperationException("no thank you")) finally Console.WriteLine("Finally carry on...") Console.ReadLine()
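
    Since "with" and "finally" can't share one try, a small sketch of my own (not from the book) showing the usual way to get both, which is to nest the try...with inside a try...finally:

        open System

        try
            try
                Console.WriteLine("Raise Exception")
                raise (InvalidOperationException("no thank you"))
            with
            | _ -> Console.WriteLine("Catch Exception and carry on...")
        finally
            Console.WriteLine("Finally carry on...")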

    Read the article

  • How do you make slides for programming talks?

    - by Yuvi Masory
    I've given a few talks recently and I have not found a good way to make slides. Here are a few desirable characteristics for programming slides: They're slides, so a standard emacs buffer won't do it. They have syntax highlighting for code. They support basic formatting, like font size, color and bullets. No fancy animations are needed; the only animation I desire is one-by-one appearance of bullets. So far I have considered: Microsoft Office - out of the question for Linux users. OpenOffice.org - too much for my needs; code formatting/highlighting needs to be done externally and pasted in. On the plus side it supports bullets, bullet-by-bullet animation, and font formatting. Emacs - supports all the code formatting, but I haven't found a slides mode that lets me transition from one chunk to another. HTML5 - I once made slides using html5rocks as a template. It supports everything, but is too hard and time-consuming to "throw together" a few slides before a minor talk. Also, the HTML5-only features may not work in the browser installed on the podium computer. Any suggestions for programs/techniques for making code-centric presentations?

    Read the article

  • Connection String Incorrectly Formatted [migrated]

    - by Randy E
    I'm running into some issues. Every time I launch into debug mode and hit the "Create User" button, an exception is thrown because the connection string is either in the wrong syntax or just wrong. I am using Visual Studio 2010, and the project is .NET 3.5 with SQL 2008 Express. This is just a personal project that I'm testing some other things with; I know this generally isn't the recommended format, but for a small personal project I don't see the point in doing it any other way. The things I'm testing actually work :) Data Source=.\SQLEXPRESS;AttachDbFilename=|Data Directory|ASPAppDatabase.mdf;Integrated Security=True;User Instance=True That doesn't work. Data Source=.\SQLEXPRESS;AttachDbFilename="|Data Directory|ASPAppDatabase.mdf";Integrated Security=True;User Instance=True Neither does that. Data Source=.\SQLEXPRESS;AttachDbFilename='|Data Directory|ASPAppDatabase.mdf';Integrated Security=True;User Instance=True And again, neither does that :/ However, if I use the full path to the local DB file rather than "|Data Directory|", it works just fine: no exception is thrown, and I can read and write as I need. And just to cover all my bases, here is the button click event that creates the user. protected void btnAddUser_Click(object sender, EventArgs e) { Membership.CreateUser(txtUserName.Text, txtPassword.Text); btnLogin_Click(sender, e); } So, what am I missing in regards to the |Data Directory|? Here is an example of the above not working correctly, taken directly from web.config. <add name="ASPAppDatabaseConnection" connectionString='Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|ASPAppDatabase.mdf;Integrated Security=True;User Instance=True'/>
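
    For reference, a hedged example of the conventional token form, with no space inside |DataDirectory| and a backslash before the file name; whether this alone resolves the exception above is an assumption, not something tested against this project:

        <add name="ASPAppDatabaseConnection"
             connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\ASPAppDatabase.mdf;Integrated Security=True;User Instance=True" />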

    Read the article

  • Install tmux on Mac OS X

    - by unixben
    This is a short rundown on how to get tmux running on your Mac OS X system. The same methodology applies when compiling this on Solaris. What is tmux? According to the developer's page, "tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached". Why not just use screen? For me, the primary reason I switched to tmux from screen is the much easier configuration syntax that tmux offers. If you've ever struggled with formatting screen's caption or hardstatus line, then you will appreciate the ease with which you can achieve the same results in tmux. Preparing your environment You will need a C compiler installed. I believe that OS X ships by default with GNU make, but if not, then you will need to obtain it or use Xcode. Download the sources While I'm putting all this together, I like to keep everything neatly tucked away in a build directory. mkdir ~/build cd ~/build curl -OL http://downloads.sourceforge.net/tmux/tmux-1.5.tar.gz curl -OL http://downloads.sourceforge.net/project/levent/libevent/libevent-2.0/libevent-2.0.16-stable.tar.gz Unpack the sources tar xzf tmux-1.5.tar.gz tar xzf libevent-2.0.16-stable.tar.gz Compiling libevent cd libevent-2.0.16-stable ./configure --prefix=/opt make sudo make install Compiling tmux cd ../tmux-1.5 LDFLAGS="-L/opt/lib" CPPFLAGS="-I/opt/include" LIBS="-lresolv" ./configure --prefix=/opt make sudo make install That's all there is to it!

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/tshirtshop <Directory /var/www/tshirtshop> Options Indexes FollowSymLinks AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> and the following in the .htaccess file at /var/www/tshirtshop/.htaccess <IfModule mod_rewrite.c> # Enable mod_rewrite RewriteEngine On # Specify the folder in which the application resides. # Use / if the application is in the root. RewriteBase /tshirtshop #RewriteBase / # Rewrite to correct domain to avoid canonicalization problems # RewriteCond %{HTTP_HOST} !^www\.example\.com # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L] # Rewrite URLs ending in /index.php or /index.html to / RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L] # Rewrite category pages RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L] RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L] # Rewrite department pages RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L] RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L] # Rewrite subpages of the home page RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L] # Rewrite product details pages RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L] </IfModule> The site is working on localhost, but as if no .htaccess rules were specified at all: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. sudo apache2ctl -M Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) deflate_module (shared) dir_module (shared) env_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) reqtimeout_module (shared) rewrite_module (shared) setenvif_module (shared) status_module (shared) Syntax OK Can anyone point out the mistake, if any, in the above configuration, or is there anything else I need to check?

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel but something went wrong and I'm trying to remove it now. The error message is: mhd@Tarek-Laptop:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: linux-image-2.6.37-020637-generic 0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded. 1 not fully installed or removed. After this operation, 111MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 188780 files and directories currently installed.) Removing linux-image-2.6.37-020637-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic /etc/default/grub: 33: Syntax error: EOF in backquote substitution run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328. dpkg: error processing linux-image-2.6.37-020637-generic (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: linux-image-2.6.37-020637-generic E: Sub-process /usr/bin/dpkg returned an error code (1) The previous, unsolved error is described in this bug.

    Read the article

  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we have a cooperation with. Now, the top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology description. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology per-user granted access exceptions within my system. The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, after I am having a users table in the DB anyway (and managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity from implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether. On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connection to the active directory, while I would have to code the authentication if done in-system (and my boss has clearly stated that getting the project delivered on time has much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication. I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway even if I am only transporting the pw to the AD), lookup of a password hash and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?

    Read the article

  • PDFtk Password Protection Help

    - by Dave W.
    I am using Ubuntu 11.10 and am looking for a solution to password protect a bunch of pdf files in a directory in batch. I came across PDFtk and it looks like it might do what I need, but I've reviewed the command line PDFtk examples and can't figure out if there is a way to do it in batch without having to individually specify the output file name for every file. I'm hoping a command-line guru can take a look at the PDFtk syntax and tell me if there is some trick / command that will allow me to password protect a directory of pdf files (e.g., *.pdf) and overwrite the existing files using the same name, or consistently rename the individual output files without having to specify each output name individually. Here's a link to the PDFtk command line examples page: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ Thanks for your help. I think I've answered my own question. Here's a bash script that appears to do the trick. I'd welcome help evaluating why the code I've commented out doesn't work... #!/bin/bash # Created by Dave, 2012-02-23 # This script uses PDFtk to password protect every PDF file # in the directory specified. The script creates a directory named "protected_[DATE]" # to hold the password protected version of the files. # # I'm using the "user_pw" parameter, # which means no one will be able to open or view the file without # the password. # # PDFtk must be installed for this script to work. # # Usage: ./protect_with_pdftk.bsh [FILE(S)] # [FILE(S)] can use wildcard expansion (e.g., *.pdf) # This part isn't working.... ignore. The goal is to avoid errors if the # directory to be created already exists by only attempting to create # it if it doesn't exists # #TARGET_DIR="protected_$(date +%F)" #if [ -d "$TARGET_DIR" ] #then #echo # echo "$TARGET_DIR directory exists!" #else #echo # echo "$TARGET_DIR directory does not exist!" #fi # mkdir protected_$(date +%F) for i in *pdf ; do pdftk "$i" output "./protected_$(date +%F)/$i" user_pw [PASSWORD]; done echo "Complete. Output is in the directory: ./protected_$(date +%F)"
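
    For the commented-out block above, a hedged sketch of the existence check it seems to be aiming for (same [PASSWORD] placeholder as in the script, untested against the poster's setup); the test only needs to guard the mkdir, and no else branch is required:

        #!/bin/bash
        # Create the dated output directory just once, skipping the mkdir
        # when it already exists, then password protect each PDF.
        TARGET_DIR="protected_$(date +%F)"
        if [ ! -d "$TARGET_DIR" ]; then
            mkdir "$TARGET_DIR"        # mkdir -p "$TARGET_DIR" would also do
        fi
        for i in *.pdf; do
            pdftk "$i" output "$TARGET_DIR/$i" user_pw [PASSWORD]
        done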

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let’s review each area in more details. Simpler Code My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time). Strongly Typed Before diving into the code, the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface. // With the SDK public class MyData1 : TableServiceEntity {     public string Message { get; set; }     public string Level { get; set; }     public string Severity { get; set; } } //  With the Enzo Azure API public class MyData2 : BaseAzureTable {     public string Message { get; set; }     public string Level { get; set; }     public string Severity { get; set; } } Simpler Code Now that the classes representing an Azure Table entity are defined, let’s review the methods that the Azure SDK would look like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table): // With the Azure SDK public List<MyData1> FetchAllEntities() {      CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);      CloudTableClient tableClient = storageAccount.CreateCloudTableClient();      TableServiceContext serviceContext = tableClient.GetDataServiceContext();      CloudTableQuery<MyData1> partitionQuery =         (from e in serviceContext.CreateQuery<MyData1>(_tableName)         select new MyData1()         {            PartitionKey = e.PartitionKey,            RowKey = e.RowKey,            Timestamp = e.Timestamp,            Message = e.Message,            Level = e.Level,            Severity = e.Severity            }).AsTableServiceQuery<MyData1>();        return partitionQuery.ToList();  } This code gives you automatic retries because the AsTableServiceQuery does that for you. Also, note that this method is strongly-typed because it is using LINQ. Although this doesn’t look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. 
And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this: // With the Enzo Azure API public List<MyData2> FetchAllEntities() {        AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);        List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");        return res; } As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).  Fetch Strategies Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes more than 2 to 3 times faster than the sequential methods discussed previously): public List<MyData2> FetchAllEntitiesGUID() {     AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);     List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");     return res; } Faster Results With Sequential Fetch Methods Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster.  For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).
With Fetch Strategies When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB per entity), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with a higher bandwidth. Additional Methods The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities: - Support for batch updates, deletes and inserts - Conversion of entities to DataRow, and List<> to a DataTable - Extension methods for Delete, Merge, Update, Insert - Support for asynchronous calls and cancellation - Support for fetch statistics (total bytes, total REST calls, retries…) For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx). About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specializing in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Calculate travel time on road map with semaphores

    - by Ivansek
    I have a road map with intersections. At intersections there are semaphores. For each semaphore I generate a red light time and a green light time, represented with the syntax [R:T1, G:T2], for example: 119 185 250 A ------- B: [R:6, G:4] ------ C: [R:5, G:5] ------ D I want to calculate the car travel time from A - D. Now I do this with this pseudo code: function get_travel_time(semaphores_configuration) { time = 0; for( i=1; i<path.length;i++) { prev_node = path[i-1]; next_node = path[i]); cost = cost_between(prev_node, next_node) time += (cost/movement_speed) // movement_speed = 50px per second light_times = get_light_times(path[i], semaphore_configurations) lights_cycle = get_lights_cycle(light_times) // Eg: [R,R,R,G,G,G,G], where [R:3, G:4] lights_sum = light_times.green_time+light_times.red_light; // Lights cycle time light = lights_cycle[cost%lights_sum]; if( light == "R" ) { time += light_times.red_light; } } return time; } So for the distance 119 between A and B the travel time is 119/50 = 2.38s (the exactly measured time is between 2.5s and 2.6s); then we add time if we arrive at a red light at B. Whether we arrive at a red light is calculated with these lines: lights_cycle = get_lights_cycle(light_times) // Eg: [R,R,R,G,G,G,G], where [R:3, G:4] lights_sum = light_times.green_time+light_times.red_light light = lights_cycle[cost%lights_sum]; if( light == "R" ) { time += light_times.red_light; } This pseudo code doesn't calculate exactly the same times as the measured ones, but the calculations are very close. Any idea how I would calculate this more precisely?

    Read the article

  • Running a Screen instance of a program as non-root

    - by user288467
    I've got a dedicated server (Ubuntu 12.04, no GUI) set up to launch an instance of McMyAdmin and attach it to a screen instance every time I reboot the hardware. I have the command saved to root's crontab as: @reboot cd /var/MC_SVR && screen -dmS McMyAdmin ./MCMA2_Linux_x86_64 Problem being, though, I have a user set up specifically for FTP access to the server files so I don't always have to SSH into the machine. Since the server is being started as a root process, all the files it makes are, obviously, set with root as the owner. So I chown'd all the files and set them to ftpuser. Now I'm stuck with trying to get the process to start as ftpuser. I've tried doing the following but to no avail: cd /var/MC_SVR && su ftpuser - -c 'screen -dmS McMyAdmin ./MCMA2_Linux_x86_64' I try this in terminal and I get no errors or anything (in fact I never get anything unless it's a syntax error from su), but there's no screen instance to access and so I can assume the server never starts. So, what am I doing wrong? Or am I just not accessing the screen instance correctly since it's (supposed) to be launched by another user?

    Read the article

  • CoffeeScript - inability to support progressive adoption

    - by Renso
    First off, what is CoffeeScript? Web definitions: CoffeeScript is a programming language that compiles statement-by-statement to JavaScript. The language adds syntactic sugar inspired by Ruby and Python to enhance JavaScript's brevity and readability, as well as adding more sophisticated features like array comprehension and pattern matching. The issue with CoffeeScript is that it eliminates any progressive adoption. It is a purist approach, kind of like the Amish: if you're not born Amish, tough luck. So folks with thousands of lines of JavaScript code will have a tough time converting it to CoffeeScript. You can use the js2coffee API to convert a JavaScript file to CoffeeScript, but in my experience it had trouble converting the files. It would convert the file to CoffeeScript without any complaints, but then compiling the generated CoffeeScript produced errors with, guess what: INDENTATION! I tried to convince the CoffeeScript community on github but got lots of push-back on progressive adoption, with comments like "stupid", "crap", "child's comportment", "it's like Ruby, Python", "legacy code" etc. As a matter of interest, one of the first comments was that the code needs to be re-designed before being converted to CoffeeScript. Well, I rest my case then :-) So far the community on github has been very reluctant to even consider introducing some way to define code blocks; obviously curly braces are not an option as they are used for JSON object definitions. They also have no consideration for a progressive adoption where some, if not all, JavaScript syntax would be allowed, which means all of us in the real world who have thousands of lines of JavaScript will have a real issue converting it over. Worse, I for one lack the confidence that tools like js2coffee will produce the correct indentation that will determine the flow of control in your code!!! Actually it is hard for me to find enough justification for using spaces or tabs to control the flow of code. It is no wonder that C#, C, C++, Java, all enterprise-scale frameworks, still use curly braces. I have never seen an enterprise app built with Ruby or PHP. Let me know what your concerns are with CoffeeScript and how you dealt with large scale JavaScript conversions to CoffeeScript.

    Read the article

  • Static DataTable or DataSet in a class - bad idea?

    - by Superbest
    I have several instances of a class. Each instance stores data in a common database. So, I thought "I'll make the DataTable table field static; that way every instance can just add/modify rows to its own table field, but all the data will actually be in one place!" However, apparently it's a bad idea to use static fields, especially where databases are involved: Don't Use "Static" in C#? Is this a bad idea? Will I run into problems later on if I use it? This is a small project, so I can accept no testing as a compromise if that is the only drawback. The benefit of using a static database is that there can be many objects of type MyClass but only one table they all talk to, so a static field seems to be an implementation of exactly this, while keeping the syntax concise. I don't see why I shouldn't use a static field (although I wouldn't really know), but if I had to, the best alternative I can think of is creating one DataTable and passing a reference to it when creating each instance of MyClass, perhaps as a constructor parameter. But is this really an improvement? It seems less intuitive than a static field.
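
    A minimal sketch (with made-up class names, not the poster's MyClass) contrasting the two options discussed above, the implicit static table versus a table handed in through the constructor:

        using System.Data;

        class LoggerStatic
        {
            // One table shared implicitly by every instance.
            private static readonly DataTable Table = new DataTable("Entries");

            public void Add(object[] row) { Table.Rows.Add(row); }
        }

        class LoggerInjected
        {
            private readonly DataTable _table;

            // The caller decides which table is shared; handing a fresh table
            // to a test becomes trivial.
            public LoggerInjected(DataTable sharedTable) { _table = sharedTable; }

            public void Add(object[] row) { _table.Rows.Add(row); }
        }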

    Read the article

  • When is it ever ok to write your own development tools? (editor into IDE)

    - by mario
    So I'm foremost using a text editor for coding. It's a very bare-bones editor that provides mostly just syntax highlighting. But on rare occasions I also need to debug something, and that's when I have to resort to an IDE (mostly Netbeans, but I got fiddly Eclipse/Aptana working as a second fallback). For general use, however, IDEs feel unworkable to me. It's a visual thing, being used to console UIs etc. And switching back and forth between a text editor and an IDE is slightly cumbersome too. That's why I'm considering extending the editor, not really into a full-fledged IDE, but at the very least to integrate a debug feature. Since I'm working on PHP, it seems like not that much effort. DBGp allows the debug handler to live outside the editor, so it's just minor integration work and figuring out how to shoehorn a breakpoint feature into the editor (joe btw). And while I've also got the time to do that, I'm wondering if this is really worthwhile. In this case it's not a needed development tool; it's just for convenience, and the cause for doing it is basically just not liking the existing solution. While over time I might extend and adapt this debugger thing, it initially will be as circumstantial as Eclipse. It inevitably starts out as a poor development tool. Furthermore, there is likely not much reuse. (Okay, this is not an important point. Most such software exists sans much of a use case. And also obviously, similar extensions already exist for emacs and vim, so it cannot be completely pointless.) But what's a general guideline on attempting to concoct custom development tools, particularly if they are not really needed but satisfy personal preferences? (Usability enhancement not certain.)

    Read the article

  • Can org.freedesktop.Notifications.CloseNotification(uint id) be triggered and invoked via DBus?

    - by george rowell
    ref: Close button on notify-osd? Bookmark: Can org.freedesktop.Notifications.CloseNotification(uint id) be triggered and invoked via DBus? Currently, this script dbus-monitor "interface='org.freedesktop.Notifications'" | \ grep --line-buffered "member=Notify" | \ sed -u -e 's/.*/killall notify-osd/g' | \ bash will kill all pending notifications. It would be better to finesse the specific target OSD notification to cancel, by using org.freedesktop.Notifications.CloseNotification(uint id). Is there an interface method that can put this on (in?) the DBus to fire when a particular notify event occurs? The method will need to get the notify PID to use as the argument for CloseNotification(uint id). Alternatively, qdbus org.freedesktop.Notifications \ /org/freedesktop/Notifications \ org.freedesktop.Notifications.CloseNotification(uint id) could be used from the shell, if the (uint id) argument could be determined. The actual command syntax would use an integer in place of (uint id). Perhaps a better question to ask first might be "How is the DBus address for a notification found?". In hindsight the previous question "How is the (uint id) for a notification found?" is rhetorical! This previous answer: http://askubuntu.com/a/186311/89468 provided details so either method below can be used: gdbus call --session --dest org.freedesktop.DBus \ --object-path / \ --method org.freedesktop.DBus.GetConnectionUnixProcessID :1.16 returning: (uint32 8957,) or qdbus --literal --session org.freedesktop.DBus / \ org.freedesktop.DBus.GetConnectionUnixProcessID :1.16 returning: 8957
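
    To make that last point concrete, a hedged sketch of the call once a numeric id is known; 42 is an invented id, and the interface and method names come from the freedesktop Notifications specification:

        gdbus call --session \
          --dest org.freedesktop.Notifications \
          --object-path /org/freedesktop/Notifications \
          --method org.freedesktop.Notifications.CloseNotification 42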

    Read the article

  • Entry level engineer question regarding memory mangement

    - by Ealianis
    It has been a few months since I started my position as an entry level software developer. Now that I am past some learning curves (e.g. the language, jargon, and syntax of VB and C#), I'm starting to focus on more esoteric topics, so as to write better software. A simple question I presented to a fellow coworker was met with "I'm focusing on the wrong things." While I respect this coworker, I do disagree that this is a "wrong thing" to focus upon. Here is the code (in VB), followed by the question. Note: the function GenerateAlert() returns an integer. Dim alertID as Integer = GenerateAlert() _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), alertID)) vs... _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), GenerateAlert())) I originally wrote the latter and rewrote it with the "Dim alertID" so that someone else might find it easier to read. But here was my concern and question: "Should one write this with the Dim alertID, it would in fact take up more memory; finite but more, and should this method be called many times, could it lead to an issue? How will .NET handle this object alertID? Outside of .NET, should one manually dispose of the object after use (near the end of the sub)?" I want to ensure I become a knowledgeable programmer who does not just rely upon garbage collection. Am I overthinking this? Am I focusing on the wrong things?

    Read the article
