Search Results

Search found 5456 results on 219 pages for 'named pipes'.


  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before the command is stopped/killed. To test, run (careful: this uses lots of system memory very fast!):

        $ cat /dev/zero | less

    From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests it would limit itself in such a way. I also couldn't find any documentation on Google about this. Any light shed on this quite surprising discovery would be great!

    System information: quad-core Intel i7, 8 GB RAM.

        $ uname -a
        Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

        $ less --version
        less 458 (GNU regular expressions)
        Copyright (C) 1984-2012 Mark Nudelman
        less comes with NO WARRANTY, to the extent permitted by law.
        For information about the terms of redistribution,
        see the file named README in the less distribution.
        Homepage: http://www.greenwoodsoftware.com/less

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04 LTS
        Release:        14.04
        Codename:       trusty
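    A hedged way to probe this limit without risking the whole machine: cap the pager's address space before launching it, so it hits a bound you control instead of exhausting the 8 GB. This Python sketch is an illustration, not from the original post; the 1 GiB cap is an arbitrary choice, and it assumes Linux, where RLIMIT_AS bounds malloc'd memory.

        #!/usr/bin/env python
        # Sketch: run "less" with its virtual address space capped, so its
        # internal buffering fails at a limit we choose.
        import resource
        import subprocess

        CAP_BYTES = 1 * 1024**3  # cap less at 1 GiB (arbitrary, for the experiment)

        def cap_memory():
            # Runs in the child between fork() and exec().
            resource.setrlimit(resource.RLIMIT_AS, (CAP_BYTES, CAP_BYTES))

        zeros = subprocess.Popen(["cat", "/dev/zero"], stdout=subprocess.PIPE)
        pager = subprocess.Popen(["less"], stdin=zeros.stdout, preexec_fn=cap_memory)
        zeros.stdout.close()   # let less own its end of the pipe
        pager.wait()           # less exits once allocation fails at the cap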

    Read the article

  • Prevent 'Run-time error '7' out of memory' error in Excel when using macro

    - by MasterJedi
    I keep getting this error whenever I run a macro in my Excel file. Is there any way I can prevent this? My code is below. Debugging highlights the following line as the issue:

        ActiveSheet.Shapes.SelectAll

    My macro:

        Private Sub Save()
            Dim sh As Worksheet
            ActiveWorkbook.Sheets("Report").Copy 'Create new workbook with Sheets("Report"(2)) as only sheet.
            Set sh = ActiveWorkbook.Sheets(1) 'Set the new sheet to a variable. New workbook is now active workbook.
            sh.Name = sh.Range("B9") & "_" & Format(Date, "mmyyyy") 'Rename the new sheet to B9 value + date.
            With sh.UsedRange.Cells
                .Value = .Value 'eliminate all formulas
                .Validation.Delete 'remove all validation
                .FormatConditions.Delete 'remove all conditional formatting
                ActiveSheet.Buttons.Delete
                ActiveSheet.Shapes.SelectAll
                Selection.Delete
                lrow = Range("I" & Rows.Count).End(xlUp).Row 'find last row containing data in column I
                Rows(lrow + 1 & ":" & Rows.Count).Delete 'delete rows with no data in column I
                Application.ScreenUpdating = False
                .Range("A410:XFD1048576").Delete Shift:=xlUp 'delete all cells outside report range
                Application.ScreenUpdating = True
                Dim counter
                Dim nameCount
                nameCount = ActiveWorkbook.Names.Count
                counter = nameCount
                Do While counter > 0
                    ActiveWorkbook.Names(counter).Delete
                    counter = counter - 1
                Loop 'remove named ranges from workbook
            End With
            ActiveWorkbook.SaveAs "\\Marko\Report\" & sh.Name & ".xlsx" 'Save new workbook using same name as new sheet.
            ActiveWorkbook.Close False 'Close the new workbook.
            MsgBox ("Export complete. Choose the next ADP in cell B9 and click 'Calculate'.") 'Inform the user that the report has been saved.
        End Sub

    Not sure how to make this more efficient or how to prevent this error.
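    One commonly suggested workaround (a sketch, not from the original post, and untested against this workbook) is to skip the SelectAll/Selection pair entirely and delete the shapes one by one, counting backwards so deletions don't shift the collection under the loop:

        Dim i As Long
        For i = sh.Shapes.Count To 1 Step -1
            sh.Shapes(i).Delete 'delete each shape directly instead of Shapes.SelectAll + Selection.Delete
        Next i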

    Read the article

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with a database on my server (note: all other databases work fine). Once I try to export it with mysqldump I get this error:

        # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
        mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine. I can also execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem still persists. I had this problem before; then it disappeared somehow without me doing anything to resolve the issue. Now the problem is back and I'm simply unable to create a backup of this database.

    Used software: Debian 6.0.7 x64, MySQL 5.1.66-0.

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+-------------------+
        | Variable_name           | Value             |
        +-------------------------+-------------------+
        | protocol_version        | 10                |
        | version                 | 5.1.66-0+squeeze1 |
        | version_comment         | (Debian)          |
        | version_compile_machine | x86_64            |
        | version_compile_os      | debian-linux-gnu  |
        +-------------------------+-------------------+

    Read the article

  • Selecting whole column except first X (header) cells in Excel

    - by Robert Koritnik
    I know I can select all cells in a particular column by clicking on the column header descriptor (i.e. A or AB). But is it possible to then exclude a few cells from the selection, like my data table headings?

    Example: I would like to select the data cells of a particular column to set Data Validation (that would eventually display a drop-down of list values defined in a named range). But I don't want my data header cells to be included in this selection (so they won't have these drop-downs displayed, nor will they be validated). What if I later decide to change the validation settings of these cells? How can I select my column then?

    A sidenote: I know I can set data validation on the whole column and then select only those cells that I want to exclude and clear their data validation. What I would like to know is: is it possible to do the correct selection in the first step, to avoid this second one? I tried clicking on the column descriptor to select the whole column and then CTRL-clicking the cells I don't want to include in my selection, but it didn't work as expected.

    Read the article

  • Undo Google Sync in chrome

    - by iamcreasy
    I didn't know that my Google account wasn't in sync with my Chrome for the last couple of months, and now that I have linked it again, the restored record is several months old. Now that I've lost all my recent bookmarks and all other stuff: is there anything or any way I can revert the Google sync so I can get my bookmarks back?

    Update 1: I have found that under C:\Users\Profile_Name\AppData\Local\Google\Chrome\User Data\Default there is a file named Bookmarks.bak that holds the old state of my bookmarks before the sync.

    Update 2: Bookmarks is the file that holds the current (after sync) bookmark list. I replaced Bookmarks with Bookmarks.bak and restarted Chrome, but Chrome still isn't fetching information from the updated file. So I have my old bookmark information, but how do I restore it in Chrome?

    Update 3 (solved): I still couldn't figure out why replacing the Bookmarks file didn't work, and apparently that's the only solution available on the web. I reinstalled everything and then copied the old Bookmarks file over. Then I got my bookmarks back again. Lesson learned: check regularly that Google sync is working.

    Read the article

  • linux: per-process monitor, every 10 minutes, with history access

    - by Inbar Rose
    I really didn't know a better way to ask my question, hence you get a horribly named question. I will explain what I want to do; maybe that will help you help me. I would like my Linux machine to continuously monitor (every 10 minutes) all the processes on the machine. The information I require from each process is the name, CPU usage, allocated (virtual) memory, and resident (RAM) memory. If these periodic reports were to be looked at, they would look something like this:

        PROCESS    CPU    RAM    VIRTUAL
        name1      %      MB     MB
        name2      %      MB     MB
        ...etc...etc

    These reports should be stored in such a way that I can access them at a later date by giving a date/time scope (range). For instance, if I want to see the history of my processes from 12:00:00 1.12.12 till 12:00:00 2.12.12, it should give me the history of the processes for every 10 minutes between those date/time borders. The format of the output is not important; that will be handled by a script anyway and can be modified into anything I need.

    I have looked into a few things so far but have not found something that clearly meets my needs. Among the things I searched: sar, free(1), top(1), and a few other things. It should be a simple issue: I can already see all this information by simply looking at my htop, but I need a tool that will gather the desired fields for me for each process every 10 minutes, and then also let me extract slices of that data based on date/time scopes (ranges).

    Note: I have limited experience with Linux, so please give detailed information.

    Note 2: The desired output will be something like this (after receiving the desired range):

        CPU USAGE BY PROCESS:
        proc_nameA 1,2,2,2,2,2......  (numbers represent % usage every 10 minutes)
        proc_nameB 4,3,3,6,1,2......

    The same idea applies to the other information.
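    For what it's worth, a minimal sketch of such a collector in Python, assuming the third-party psutil package is available (the CSV path and field choices are mine, not the poster's). Run from cron every 10 minutes, the timestamp column gives you date/time range filtering for free:

        #!/usr/bin/env python
        # Sketch: append one CSV row per process per run; cron it every 10 minutes.
        # Assumes the third-party "psutil" package (pip install psutil).
        import csv
        import time
        import psutil

        LOG = "/var/log/proc-history.csv"  # arbitrary location for the history

        with open(LOG, "a", newline="") as f:
            w = csv.writer(f)
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            for p in psutil.process_iter(["name", "cpu_percent", "memory_info"]):
                try:
                    mem = p.info["memory_info"]
                    w.writerow([stamp, p.info["name"],
                                p.info["cpu_percent"],     # CPU % (first sample per process may read 0.0)
                                mem.rss // (1024 * 1024),  # resident MB
                                mem.vms // (1024 * 1024)]) # virtual MB
                except psutil.NoSuchProcess:
                    pass  # process exited between listing and sampling

    A crontab entry like */10 * * * * python /path/to/collect.py completes the loop; slicing a date range is then just a matter of filtering on the first column.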

    Read the article

  • Cannot install SQL Server CE 4

    - by Manos Dilaverakis
    I'm trying to install SQL Server CE 4 on a WinXP Pro SP3 machine. I double-click on the file and absolutely nothing happens. There is nothing in the event viewer, and the only effect I can see is the addition of an empty, randomly named folder in C:\, which looks something like C:\7c59aaeb5e43f6bdcb2430e923. I've tried this with both SQL Server CE 4 and the SP1 version. I've tried disabling the AV (NOD32) file protection, but it didn't make a difference. I've checked the installed program list in case it's already installed, but I don't see it anywhere. I checked in C:\Program Files\Microsoft SQL Server Compact Edition\ and there's only the \3.5 folder in there, from the already installed 3.5 version. Does anyone know what's going on or how I can further diagnose the problem?

    Edit, in response to Ramhound: I have .NET 4 installed. Why, does it need a particular version?

    Edit, in response to leinad13: I tried Process Explorer and filtered by the name of the temporary folder created. I see the following, but can't make much sense of it.

    Read the article

  • Where does Windows store MSI files for uninstallation?

    - by Nilzor
    I'm trying to figure out how Windows (XP through 7) handles installation and uninstallation of MSI files. I have run into situations where Windows Installer is unable to uninstall because it's missing the original MSI file, which leads me to believe that it stores a copy of all installed MSI packages somewhere. Where? I've had a couple of theories:

    1. It expects the MSI to reside in the same folder it was installed from. The registry keys in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall do point to the original installation folder, and error messages when the MSI file is missing often point there too. Removing the MSI file from this folder does not hinder the uninstallation process, though, so I've ruled out this theory.

    2. C:\Windows\Installer. This folder actually contains a bunch of seemingly randomly named MSI files. But the list is incomplete: I find entries under the registry key mentioned in 1) which have no MSI copy in this folder.

    So how does this work? How is Windows Installer able to uninstall MSI-installed applications even though the MSI is not in 1) and not in 2)?
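    On the machines I've checked, the product-to-cached-package mapping lives under the Installer\UserData registry tree; the exact path below is my assumption from those systems, not something stated in the post. A small Python sketch that lists each product's LocalPackage, i.e. the cached MSI in C:\Windows\Installer:

        # Sketch: enumerate cached MSI packages recorded by Windows Installer.
        # Assumption: per-machine installs live under the S-1-5-18 (SYSTEM) branch.
        import winreg

        BASE = (r"SOFTWARE\Microsoft\Windows\CurrentVersion\Installer"
                r"\UserData\S-1-5-18\Products")

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as products:
            i = 0
            while True:
                try:
                    squished_guid = winreg.EnumKey(products, i)
                except OSError:
                    break  # no more product subkeys
                i += 1
                subkey = BASE + "\\" + squished_guid + r"\InstallProperties"
                try:
                    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as props:
                        name, _ = winreg.QueryValueEx(props, "DisplayName")
                        pkg, _ = winreg.QueryValueEx(props, "LocalPackage")
                        print(name, "->", pkg)
                except OSError:
                    pass  # some products lack an InstallProperties key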

    Read the article

  • What character can be safely used for naming files on unix/linux?

    - by Eric DANNIELOU
    Until yesterday, I used only lower-case letters, numbers, dot (.) and underscore (_) for directory and file naming. Today I would like to start using more special characters. Which ones are safe (by safe I mean I will never have any problem)?

    PS: I can't believe this question hasn't been asked already on this site, but I've searched for the word "naming" and read the canonical questions without success (most are about computer names).

    Edit #1: (By the way, I don't use upper-case letters for file names. I don't remember why. But for a few months I have had production problems with upper-case letters: some OSes do not support ASCII!) Here's what happened yesterday at work: as usual, I had to create a self-signed SSL certificate. As usual, I used the name of the website for the files: www2.example.com.key, www2.example.com.crt, www2.example.com.csr. Then comes the problem: generate a wildcard self-signed certificate. I did that and named the files example.com.key, example.com.crt and example.com.csr, which is misleading (it's a certificate for *.example.com). I came back home and started putting some stars in Apache configuration file names to see if it works (on a useless home computer, not even staging). Stars in file names really scare me: some coworkers/vendors/... could run a script using rm/find/xargs that would lead to http://www.ucs.cam.ac.uk/support/unix-support/misc/horror, and one answer already talks about disaster.

    Edit #2: Just figured out that : does not need to be escaped. Does anyone know why it is not used in file names?

    Read the article

  • Looking for a powershell script that can pull a file from a set of PC's and FTP

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result); then it would need to go into a directory on that PC, pull a file off of it, rename the file, place it into a source directory, then remove the original. Naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, it would be best to first move all the files to a source directory to save on FTP bandwidth, but since the files will be named the same, the files must be renamed during the move process. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can manually retrieve the file, so the script should output a file (txt is fine) that lists which PCs were offline. Everything is in one domain, and the script will be run from a server with admin creds. Thank you!
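    Not the PowerShell the poster asked for, but a minimal Python sketch of the same workflow may help pin down the moving parts: ping check, move-and-rename, FTP upload, offline report. Every hostname, share path, and server below is a made-up placeholder.

        #!/usr/bin/env python
        # Sketch of the workflow: ping each PC, move+rename its file into a
        # staging folder, FTP everything, report the PCs that were offline.
        # Assumes a Windows host (ping -n, admin shares); all paths are placeholders.
        import ftplib
        import shutil
        import subprocess
        import time
        from pathlib import Path

        PCS = ["pc01", "pc02"]                  # ~50 entries in practice
        REMOTE_FILE = r"c$\data\report.dat"     # admin-share path on each PC
        STAGING = Path(r"\\server\staging")
        offline = []

        for pc in PCS:
            # One echo request; returncode 0 means the host answered.
            up = subprocess.run(["ping", "-n", "1", pc],
                                capture_output=True).returncode == 0
            if not up:
                offline.append(pc)
                continue
            src = Path(rf"\\{pc}\{REMOTE_FILE}")
            stamp = time.strftime("%Y%m%d-%H%M%S")
            shutil.move(str(src), str(STAGING / f"{pc}-{stamp}.dat"))  # move empties the source dir

        with ftplib.FTP("ftp.example.com", "user", "password") as ftp:
            for f in STAGING.glob("*.dat"):
                with open(f, "rb") as fh:
                    ftp.storbinary(f"STOR {f.name}", fh)

        Path("offline.txt").write_text("\n".join(offline))  # PCs to visit manually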

    Read the article

  • Windows 7 - "Magic" frequent folder

    - by TheAdamGaskins
    Every week, I export an mp3 file from audacity into a folder with that day's date (e.g. this past sunday I exported the file to a folder named 20130609). Then I close everything and that's it for a while. Then, I come back a few hours later to upload the file to ftp. I usually have some folders open, so to open a new one, I right click on the folder icon on the taskbar... to open a new folder window and browse to this folder I just created, right? Well I look up a little bit and: So I click it and upload the file, and it actually saves me 30 seconds, which is really awesome... but what in the world? It happens every single week without fail. I create the folder inside the audacity export window. The folder stays on the frequent list until I create a new folder the following week. This was definitely not an advertised feature of Windows 7, and it's extremely handy... but it really just seems like magic to me. How does it work?

    Read the article

  • Can I host multiple sites with one Amazon EC2 instance [duplicate]

    - by user22
    This question already has an answer here: "Can you help me with my capacity planning?" (2 answers)

    I currently have a VPS server, I pay around $75 per month, and I get: 40 GB HD, 2 GB RAM, 100 GB BW, 6-core CPU (but I don't use much). I have only one live website running, and traffic is at most 100 user visits per day. I mostly do my testing stuff and some of my internal sites for playing with coding. But I do need one server. I am thinking of moving to Amazon EC2 if the price difference is not too much, because then I can learn some more stuff. I am thinking of getting the 3-year Heavy Utilization Reserved Instance because my server will be running all day and night. I tried their online calculator with a Medium Instance, Heavy Reserved for 3 years: for EC2 it comes to $31 per month (effective price), and for EBS and S3 and all the other stuff, I think it's at most $40. I will be at no loss compared to what I am getting at present. Am I correct, or did I miss something?

    Now, on my current VPS I have Apache for PHP sites and mod_wsgi for Python sites. I am not sure if I will be able to do all that stuff on Amazon EC2. Can I host both Python and PHP sites on an Amazon EC2 instance using named virtual hosts and Nginx?

    Read the article

  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by 4 scripts in a publicly available folder in each site named /edit. I now have 2 websites working on that simple system:

        websiteA/index.php (etc)
        websiteA/edit/
        websiteB/index.php (etc)
        websiteB/edit/

    What is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their view of /edit and yet the code only exists in one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions, seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but I admit to now becoming confused about some of the terminology. Each website has its own config file which contains path settings and so on. What, in your opinion, is the best way to handle this?

    EDIT: In light of a reply, I suppose that given a virtual host directive such as this:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined
        </VirtualHost>

    Is it possible to create an alias inside that directive for the folder websitea.com/edit?
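    It should be, at least on a stock Apache with mod_alias: an Alias maps a URL path to a directory outside the DocumentRoot, so each vhost can point /edit at one shared copy of the scripts. A sketch, where the shared path is an assumption of mine:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com

            # Serve /edit from a single shared copy of the CMS scripts.
            Alias /edit /var/www/shared/edit
            <Directory /var/www/shared/edit>
                Require all granted   # Apache 2.4; use Order/Allow directives on 2.2
            </Directory>
        </VirtualHost>

    The shared scripts can still load per-site settings, e.g. by keying off SERVER_NAME at runtime.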

    Read the article

  • Can you see something wrong in my working .htaccess?

    - by AlexV
    OK, after much searching, trial and error, I've managed to create an .htaccess that does what I wanted (see explanations and questions after the code block):

        <IfModule mod_rewrite.c>
            RewriteEngine On

            #1 If the requested file is not url-mapper.php (to avoid .htaccess loop)
            RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$

            #2 If the requested URI does not end with an extension OR if the URI ends with .php*
            RewriteCond %{REQUEST_URI} !\.(.*) [OR]
            RewriteCond %{REQUEST_URI} \.php.*$ [NC]

            #3 If the requested URI is not in an excluded location
            RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$

            #Then serve the URI via the mapper
            RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA]
        </IfModule>

    This is what the .htaccess should do:

    #1 checks that the file requested is not url-mapper.php (to avoid infinite redirect loops). This file will always be at the root of the domain.

    #2 the .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) and URLs ending with .php* files (.php, .php4, .php5, .php123...).

    #3 some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2).

    Finally, the .htaccess feeds the mapper with a hidden GET parameter named uri containing the requested URI.

    Even though I've tested it and everything works, I want to know if what I'm doing is correct (and if it's the "best" way to do it). I've learned a lot with this "project", but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check it here before putting it in production...

    Read the article

  • Multiboot USB (OSX only): How to customize partition name?

    - by wrk2bike
    Trying to deal with all the Mac OSX recovery disks I've got by moving them to bootable USB images. I've got a big USB drive with multiple partitions for each recovery disk, and it's easy to use Disk Utility to "restore" the recovery DVD to a partition. When I boot my target Mac while holding down the Alt key, I can see all my bootable images and they work great. Problem is, they've all got the same name: "Mac OS X Install DVD." I manage Macs of various vintages. If my target Mac needs 10.6.3 for example, my only option seems to be to try each one until I get past the "Mac OSX can't be installed on this computer" message. I originally named my partitions with the OSX revision number, but that name is replaced by the disk image name during Disk Utility restore. Is there any way to customize the name during or after Disk Utility restore? I tried making a new DVD image on disk first and renaming it, but when I restore it to my recovery partition it has the original name. EDIT: After booting to the wrong partition, and getting the "..can't be installed" message, I can open the Startup Disk menu and see the other partitions - and as I select each one, the info at the bottom indicates which OS revision is on that partition. So I know the info is in there! Just want it at the boot screen if possible.

    Read the article

  • How do I use a list of filenames to find the folder on my hard drive that contains the most matches of these filenames?

    - by Web Master
    I need a program that will use a list of file names to find the folder on my hard drive that contains the most of these file names. Long story short, I made a giant map. This map was live and got ruined. New map data files have been generated, and previous map data files have been altered. What does this mean? It means file sizes have changed, and there will be new files that have never been in the backup folder. Some map files could also have been generated in other projects, so there could be file names on my computer not associated with this map, due to the way the files are named when created. So if I take an individual file, for example "r.-1.-1.mca", this file could show up on my hard drive 10 times. Anyway, the goal is to take 100 map files, turn them into a list, and then search the hard drive and find the folder that has the highest count of matching map file names. Can anyone figure out a way to do this? I am thinking about manually searching for every single file.
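    A hedged sketch of how one might script this in Python (the list file "names.txt" and the search root are placeholders of mine): walk the drive once, keep a per-folder tally of matching names, then print the best candidates:

        #!/usr/bin/env python
        # Sketch: find the folders containing the most files from a name list.
        # "names.txt" holds one file name per line; the search root is a placeholder.
        import os
        from collections import Counter

        with open("names.txt") as f:
            wanted = {line.strip() for line in f if line.strip()}

        tally = Counter()
        for folder, _dirs, files in os.walk("C:\\"):
            hits = wanted.intersection(files)   # list names present in this folder
            if hits:
                tally[folder] = len(hits)

        for folder, count in tally.most_common(5):
            print(f"{count:4d} matches  {folder}")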

    Read the article

  • VLAN across a router to give wireless access to remote sites?

    - by Don
    I've been looking online for this answer but getting conflicting information. I was under the impression that you couldn't span a VLAN across a router, but maybe it's possible (according to some documentation I see online)? I was hoping someone could clear it up for me. Here's what I'm working with: we have a remote site with a handful of users. We recently gave them an access point (Cisco 1142N) for internal wireless. It's plugged into a switch and working fine (getting IPs from the same DHCP scope as the wired users). Private wireless is set on VL50. At the home office we have private wireless for our internal network working, also on VL50, with a test VLAN set up for VL60, which points to our DSL line for the time being. Both private and public wireless work fine internally (not crossing a router). VL50 is named the same at both sites for consistency in naming.

    If we wanted to give the remote site access to the public wireless (VL60), would that be possible across the routers? For more information: currently the site is connected to the home office via a T1 connection, with Cisco routers on both ends. I didn't think it was possible due to VLANs being a layer 2 construct, but I am far from an expert on this and would appreciate any instruction as to the actual truth of the matter. The end result I'm going for is: how do we get our remote sites access to a public (outside) connection along with their private connection, without actually having a DSL (or similar) line dropped at their location? Thanks in advance for your thoughts.

    Read the article

  • BKF file corruption

    - by Naitik Semwaal
    I don't want to ask anything here, as I have nothing to ask. Instead, would you guys mind if I share some useful info here? If not, let me proceed. You must have heard about "backup", the process in which we create backup copies of our crucial data in a file, called a BKF (backup) file. Having a valid BKF file provides security for our data against unwanted data loss or corruption. Whenever such a critical situation takes place, we can restore our BKF file and get our data back (but only if it was backed up earlier). Did you ever think about why a BKF file gets corrupted? What could be the reasons that make a BKF file corrupted or inaccessible? One day while googling, I found a blog post named "Reasons of BKF file corruption". I read it; it was very informative. In this blog I came to know the reasons for corruption in BKF files. I shared the blog here so that users can read it and clear their doubts about BKF file corruption. I hope this is helpful.

    Read the article

  • I'm trying to run some PHP scripts as CLI instead of over HTTP. How do I make them play nice?

    - by gnfti
    Hi everyone. I'm using some PHP scripts from FeedForAll to join together RSS feeds (RSSmesh) and display them as HTML (RSS2HTML). Because I intend to run these scripts fairly intensively and don't want the resulting HTTP requests and bandwidth to count towards my hosting quota, I am in the process of moving to running them on the web host's server in an umbrella PHP "batch" script, called via cron (this is a Linux server, by the way).

    Here's a (working) sample request over HTTP:

        http://www.mydomain.com/a/rss2htmlcore/rss2html2.php?XMLFILE=http://www.mydomain.com/a/myapp/xmlcache/feed.xml&TEMPLATE=template.html

    This will produce the desired HTML output. An example of how I want this to work on the command line:

        /srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html

    This is with the correct shebang line added to "rss2html2-cli.php". I could just as well specify the executable ("/usr/local/bin/php") in the request; I doubt it makes a difference, because I am able to run another script (that I wrote myself) either way without problems.

    Now, RSS2HTML and RSSmesh are different in that, for starters, they include secondary files -- for example, both include an XML parser script -- and I suspect that this is where I am getting a bit in over my head. Right now I'm calling exec() from the "umbrella" batch script, like so:

        exec("/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html", $output)

    But no output is being produced. What's the best way to go about this, and what "gotchas" should I keep in mind? Is exec() the right way to approach this? It works fine for the other (simple) script, but that one writes its own output. For this I want to get the output and write it to a file from within the umbrella script, if possible. I've also tried output buffering, but to no avail. Do I need to pay attention to anything specific with regard to the includes? Right now they're specified in the scripts as

        include_once("FeedForAll_XMLParser.inc.php");

    and the specified files are indeed in the same folder.

    Further info:

    - This is a Linux server.
    - I have no direct access to the shell, so I can't test things directly on a command line; everything is via crontab.
    - I will admit that support for the FeedForAll scripts leaves a lot to be desired, but I'd like to keep using their scripts if at all possible, if only because I know them and have been using them for a while. I have looked into SimplePie, but the FFA scripts do some things that I've seen no obvious solutions for with SimplePie, like limiting the number of items per individual feed (RSSmesh) or limiting the description length (RSS2HTML).
    - Yahoo! Pipes is out; they cache their data for too long for my application.

    Should you want to take a look at the code, here are the scripts as txt files. RSS2HTML2 and RSSmesh are the FeedForAll scripts, FeedForAll_XMLParser... is the included parser. Note that I have not yet amended these to handle $argv etc. I have, however, in "scraper-universal-rss-cli", which works fine with CLI. If anyone has any thoughts to share on this it would be very much appreciated. Thank you in advance.
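    One gotcha worth checking first (an assumption about the failure mode, not something the poster confirmed): when cron runs a PHP CLI script, the working directory is usually not the script's own folder, so a relative include_once like the one above can fail silently. A one-line guard at the top of the CLI wrapper sidesteps that:

        <?php
        // Resolve relative includes against this script's own folder,
        // regardless of the working directory cron happens to use.
        chdir(__DIR__);
        include_once("FeedForAll_XMLParser.inc.php");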

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem

    Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context.

    XML Analogues

    I recently needed to trawl through gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

        <foo user="me">
            <baz key="zoidberg" value="squid" />
            <baz key="leela" value="cyclops" />
            <baz key="fry" value="rube" />
        </foo>

    And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one-liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet, which is installing as I type this and which I look forward to using in the future.

    JSON Solutions?

    So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:

        {
            "firstName": "Bender",
            "lastName": "Robot",
            "age": 200,
            "address": {
                "streetAddress": "123",
                "city": "New York",
                "state": "NY",
                "postalCode": "1729"
            },
            "phoneNumber": [
                { "type": "home", "number": "666 555-1234" },
                { "type": "fax", "number": "666 555-4567" }
            ]
        }

    and wanted to find the average number of phone numbers each person had, I could do something like this with XPath:

        fn:avg(/fn:count(phoneNumber))

    Questions

    - Are there any command-line tools that can "query" JSON files in this way?
    - If you have to process a bunch of JSON files on a Unix command line, what tools do you use?
    - Heck, is there even work being done to make a query language like this for JSON?
    - If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas?

    I'm noticing more and more data serialization being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong, and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data, shell tools are needed.

    Related Questions

    - Grep and Sed Equivalent for XML Command Line Processing
    - Is there a query language for JSON?
    - JSONPath or other XPath-like utility for JSON/JavaScript; or jQuery JSON
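    In the same spirit as perl -e, a python -c one-liner can stand in until a dedicated JSON query tool is at hand. A sketch for the phone-number average above, assuming a file with one JSON object per line (a common dump format; "people.jsonl" is a placeholder name):

        $ python -c '
        import json, sys
        counts = [len(json.loads(line).get("phoneNumber", []))
                  for line in sys.stdin if line.strip()]
        print(sum(counts) / float(len(counts)))
        ' < people.jsonl

    Dedicated tools in exactly this niche do exist (jq is the best-known example today), so it is worth searching before scripting.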

    Read the article

  • Right code to retrieve data from a SQL Server database

    - by HasanGursoy
    Hi, I have some problems in my database connection and wonder if I have something wrong in my code. Please review. This question is related to my "Switch between databases, use two databases simultaneously" question.

        cs="Data Source=mywebsite.com;Initial Catalog=database;User Id=root;Password=toor;Connect Timeout=10;Pooling='true';"

        using (SqlConnection cnn = new SqlConnection(WebConfigurationManager.ConnectionStrings["cs"].ConnectionString))
        {
            using (SqlCommand cmmnd = new SqlCommand("", cnn))
            {
                try
                {
                    cnn.Open();
                    #region Header & Description
                    cmmnd.Parameters.Add("@CatID", SqlDbType.Int).Value = catId;
                    cmmnd.CommandText = "SELECT UpperID, Title, Description FROM Categories WHERE CatID=@CatID;";
                    string mainCat = String.Empty, rootCat = String.Empty;
                    using (SqlDataReader rdr = cmmnd.ExecuteReader())
                    {
                        if (rdr.Read())
                        {
                            mainCat = rdr["Title"].ToString();
                            upperId = Convert.ToInt32(rdr["UpperID"]);
                            description = rdr["Title"];
                        }
                        else { Response.Redirect("/", false); }
                    }
                    if (upperId > 0) //If upper category exists add its name
                    {
                        cmmnd.Parameters["@CatID"].Value = upperId;
                        cmmnd.CommandText = "SELECT Title FROM Categories WHERE CatID=@CatID;";
                        using (SqlDataReader rdr = cmmnd.ExecuteReader())
                        {
                            if (rdr.Read()) { rootCat = "<a href='x.aspx'>" + rdr["Title"] + "</a> &raquo; "; }
                        }
                    }
                    #endregion
                    #region Sub-Categories
                    if (upperId == 0) //show only at root categories
                    {
                        cmmnd.Parameters["@CatID"].Value = catId;
                        cmmnd.CommandText = "SELECT Count(CatID) FROM Categories WHERE UpperID=@CatID;";
                        if (Convert.ToInt32(cmmnd.ExecuteScalar()) > 0)
                        {
                            cmmnd.CommandText = "SELECT CatID, Title FROM Categories WHERE UpperID=@CatID ORDER BY Title;";
                            using (SqlDataReader rdr = cmmnd.ExecuteReader())
                            {
                                while (rdr.Read())
                                {
                                    subcat.InnerHtml += "<a href='x.aspx'>" + rdr["Title"].ToString().ToLower() + "</a>\n";
                                    description += rdr["Title"] + ", ";
                                }
                            }
                        }
                    }
                    #endregion
                }
                catch (Exception ex)
                {
                    HasanG.LogException(ex, Request.RawUrl, HttpContext.Current);
                    Response.Redirect("/", false);
                }
                finally { cnn.Close(); }
            }
        }

    The random errors I'm receiving are:

    - A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
    - A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
    - Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
    - Cannot open database "db" requested by the login. The login failed. Login failed for user 'root'.
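    The first two errors read as transient network failures rather than code bugs, so one common mitigation (a sketch of the general pattern, not a fix confirmed for this site) is to retry the open-and-query cycle a few times with a short backoff:

        // Sketch of a retry wrapper for transient SQL connection failures.
        // Assumes: using System; using System.Data.SqlClient; using System.Threading;
        static void WithRetry(Action runQueries, int attempts)
        {
            for (int attempt = 1; ; attempt++)
            {
                try { runQueries(); return; }
                catch (SqlException)
                {
                    if (attempt >= attempts) throw;   // give up, rethrow the last error
                    Thread.Sleep(500 * attempt);      // brief, growing backoff
                }
            }
        }

        // Usage: WithRetry(delegate { /* open the connection and run the queries */ }, 3);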

    Read the article

  • TDD - beginner problems and stumbling blocks

    - by Noufal Ibrahim
    While I've written unit tests for most of the code I've done, I only recently got my hands on a copy of TDD by Example by Kent Beck. I have always regretted certain design decisions I made, since they prevented the application from being 'testable'. I read through the book, and while some of it looks alien, I felt that I could manage it and decided to try it out on my current project, which is basically a client/server system where the two pieces communicate via USB: one on the gadget and the other on the host. The application is in Python.

    I started off and very soon got entangled in a mess of rewrites and tiny tests which I later figured didn't really test anything. I threw away most of them, and now have a working application for which the tests have all coagulated into just 2. Based on my experiences, I have a few questions which I'd like to ask. I gained some information from http://stackoverflow.com/questions/1146218/new-to-tdd-are-there-sample-applications-with-tests-to-show-how-to-do-tdd but have some specific questions which I'd like answers to/discussion on.

    1. Kent Beck uses a list which he adds to and strikes out from to guide the development process. How do you make such a list? I initially had a few items like "server should start up", "server should abort if channel is not available", etc., but they got mixed up, and finally now it's just something like "client should be able to connect to server" (which subsumed server startup etc.).

    2. How do you handle rewrites? I initially selected a half-duplex system based on named pipes so that I could develop the application logic on my own machine and then later add the USB communication part. It then moved to become a socket-based thing, and then moved from using raw sockets to using the Python SocketServer module. Each time things changed, I found that I had to rewrite considerable parts of the tests, which was annoying. I'd figured that the tests would be a somewhat invariable guide during my development. They just felt like more code to handle.

    3. I needed a client and a server to communicate through the channel to test either side. I could mock one of the sides to test the other, but then the whole channel wouldn't be tested, and I worry that I'd miss that. This detracted from the whole red/green/refactor rhythm. Is this just lack of experience, or am I doing something wrong?

    4. The "fake it till you make it" approach left me with a lot of messy code that I later spent a lot of time refactoring and cleaning up. Is this the way things work?

    At the end of the session, I now have my client and server running with around 3 or 4 unit tests. It took me around a week to do it. I think I could have done it in a day if I were writing the unit tests after the code. I fail to see the gain. I'm looking for comments and advice from people who have implemented large non-trivial projects completely (or almost completely) using this methodology. It makes sense to me to follow the way after I have something already running and want to add a new feature, but doing it from scratch seems too tiresome and not worth the effort.

    P.S.: Please let me know if this should be community wiki and I'll mark it like that.

    Update 0: All the answers were equally helpful. I picked the one I did because it resonated with my experiences the most.

    Update 1: Practice Practice Practice!

    Read the article

  • Haskell Monad bind currying

    - by Chime
    I am currently in need of a bit of brain training, and I found this article on Haskell and monads. I'm having trouble with exercise 7 regarding the randomised function bind. To make the problem simpler to experiment with, I replaced the StdGen type with an unspecified type. So instead of...

        bind :: (a -> StdGen -> (b,StdGen)) -> (StdGen -> (a,StdGen)) -> (StdGen -> (b,StdGen))

    I used...

        bind :: (a -> c -> (b,c)) -> (c -> (a,c)) -> (c -> (b,c))

    and for the actual function implementation (just straight from the exercise)

        bind f x seed = let (x',seed') = x seed in f x' seed'

    and also 2 randomised functions to trial with:

        rndf1 :: (Num a, Num b) => a -> b -> (a,b)
        rndf1 a s = (a+1,s+1)

        rndf2 :: (Num a, Num b) => a -> b -> (a,b)
        rndf2 a s = (a+8,s+2)

    So with this in a Haskell interpreter (ghci), I get...

        :t bind rndf2
        bind rndf2 :: (Num a, Num c) => (c -> (a, c)) -> c -> (a, c)

    This matches bind curried with rndf2 as the first parameter. But the thing I don't understand is how...

        :t bind rndf2 . rndf1

    suddenly gives

        bind rndf2 . rndf1 :: (Num a, Num c) => a -> c -> (a, c)

    This is the correct type of the composition that we are trying to produce, because bind rndf2 . rndf1 is a function that:

    - takes the same parameter type(s) as rndf1, AND
    - takes the return from rndf1 and pipes it as an input of rndf2 to return the same type as rndf2.

    rndf1 can take 2 parameters a -> c, and rndf2 returns (a, c), so it matches that a composition of these functions should have type:

        bind rndf2 . rndf1 :: (Num a, Num c) => a -> c -> (a, c)

    This does not match the naive type that I initially came up with for bind:

        bind f :: (a -> b -> (c, d)) -> (c, d) -> (e, f)

    Here bind mythically takes a function that takes two parameters and produces a function that takes a tuple, in order that the output from rndf1 can be fed into rndf2. I don't understand:

    - why the bind function needs to be coded as it is
    - why the bind function does not have the naive type
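    One way to see it (my own worked derivation, not from the article, assuming the definitions above are in scope): composition only needs rndf1's first argument, because partially applying rndf1 to that argument already yields a seed-consuming function of exactly the shape bind rndf2 expects.

        -- Worked type derivation:
        --
        --   (.)        :: (y -> z) -> (x -> y) -> (x -> z)
        --   bind rndf2 :: (c -> (a,c)) -> (c -> (a,c))   -- plays the (y -> z) role
        --   rndf1      :: a -> (c -> (a,c))              -- plays the (x -> y) role
        --
        -- Unifying: y = c -> (a,c), z = c -> (a,c), x = a, so:
        --
        --   bind rndf2 . rndf1 :: a -> (c -> (a,c))      -- i.e. a -> c -> (a,c)
        --
        -- The "naive" tuple-taking type never arises because rndf1 applied to
        -- one argument returns a function still awaiting the seed, not a tuple;
        -- bind is what threads the seed through that function.
        composed :: (Num a, Num c) => a -> c -> (a, c)
        composed = bind rndf2 . rndf1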

    Read the article

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world.

    Introduction

    A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types.

    This note is a thought experiment to extend the JVM's performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today's Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.

    An Axiom of Value

    In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:

    - A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished.
    - Changing the component of a value type requires construction of a new value.
    - The equals and hashCode operations are strictly component-wise.
    - If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.

    A value type can be viewed in terms of what it doesn't do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
    These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type.

    Representations of Value

    Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables:

        /*Complex x = Complex.valueOf(1.0, 2.0):*/
        double x_re = 1.0, x_im = 2.0;

    These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers:

        double distance(/*Complex x:*/ double x_re, double x_im,
                /*Complex y:*/ double y_re, double y_im) {
            /*Complex z = x.minus(y):*/
            double z_re = x_re - y_re, z_im = x_im - y_im;
            /*return z.abs():*/
            return Math.sqrt(z_re*z_re + z_im*z_im);
        }

    A boxed representation groups component values under a single object reference. The reference is to a 'wrapper class' that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no 'array of structs' feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special-case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any 'real' JVM type should have a story for returns, arrays, and API interoperability.

    The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types.
    If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions.

    Let's Start Coding

    Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type.

        @ValueSafe
        public final class Complex implements java.io.Serializable {
            // immutable component structure:
            public final double re, im;
            private Complex(double re, double im) {
                this.re = re; this.im = im;
            }

            // interoperability methods:
            public String toString() { return "Complex("+re+","+im+")"; }
            public List<Double> asList() { return Arrays.asList(re, im); }
            public boolean equals(Complex c) {
                return re == c.re && im == c.im;
            }
            public boolean equals(@ValueSafe Object x) {
                return x instanceof Complex && equals((Complex) x);
            }
            public int hashCode() {
                return 31*Double.valueOf(re).hashCode()
                        + Double.valueOf(im).hashCode();
            }

            // factory methods:
            public static Complex valueOf(double re, double im) {
                return new Complex(re, im);
            }
            public Complex changeRe(double re2) { return valueOf(re2, im); }
            public Complex changeIm(double im2) { return valueOf(re, im2); }
            public static Complex cast(@ValueSafe Object x) {
                return x == null ? ZERO : (Complex) x;
            }

            // utility methods and constants:
            public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }
            public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }
            public double abs() { return Math.sqrt(re*re + im*im); }
            public static final Complex PI = valueOf(Math.PI, 0.0);
            public static final Complex ZERO = valueOf(0.0, 0.0);
        }

    This is not a minimal definition, because it includes some utility methods and other optional parts. The essential elements are as follows:

    - The class is marked as a value type with an annotation.
    - The class is final, because it does not make sense to create subclasses of value types.
    - The fields of the class are all non-private and final. (I.e., the type is immutable and structurally transparent.)
    - From the supertype Object, all public non-final methods are overridden.
    - The constructor is private.

    Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types:

    - One or more factory methods are responsible for value creation, including a component-wise valueOf method.
    - There are utility methods for complex arithmetic and instance creation, such as plus and changeIm.
    - There are static utility constants, such as PI.
    - The type is serializable, using the default mechanisms.
    - There are methods for converting to and from dynamically typed references, such as asList and cast.

    The Rules

    In order to use value types properly, the programmer must avoid value-unsafe operations. A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations. No such compilers exist today, but to simplify our account here, we will pretend that they do exist.

    A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type. If a value-safe class is marked final, it is in fact a value type.
    All other value-safe classes must be abstract. The non-static fields of a value class must be non-public and final, and all its constructors must be private.

    Under the above rules, a standard interface could be helpful to define value types like Complex. Here is an example:

        @ValueSafe
        public interface ValueType extends java.io.Serializable {
            // All methods listed here must get redefined.
            // Definitions must be value-safe, which means
            // they may depend on component values only.
            List<? extends Object> asList();
            int hashCode();
            boolean equals(@ValueSafe Object c);
            String toString();
        }

        //@ValueSafe inherited from supertype:
        public final class Complex implements ValueType { …

    The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system. It could appear as an element type or parameter bound, for facilities which are designed to work on value types only. More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types.

    Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods. (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.) Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following:

    - The type of E is a value-safe type.
    - E names a field, parameter, or local variable whose declaration is marked @ValueSafe.
    - E is a call to a method whose declaration is marked @ValueSafe.
    - E is an assignment to a value-safe variable, field reference, or array reference.
    - E is a cast to a value-safe type from a value-safe expression.
    - E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe.

    Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation. In particular, it cannot be synchronized on, nor can it be compared with the "==" operator, not even with a null or with another value-safe type.

    In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation. Thus, the prime axiom of value types will be satisfied: no two value types will be distinguishable as long as their component values are equal.
    More Code

    To illustrate these rules, here are some usage examples for Complex:

        Complex pi = Complex.valueOf(Math.PI, 0);
        Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0;
        ValueType vtype = pi;
        @SuppressWarnings("value-unsafe")
          Object obj = pi;
        @ValueSafe Object obj2 = pi;
        obj2 = new Object();  // ok
        List<Complex> clist = new ArrayList<Complex>();
        clist.add(pi);  // (ok assuming List.add param is @ValueSafe)
        List<ValueType> vlist = new ArrayList<ValueType>();
        vlist.add(pi);  // (ok)
        List<Object> olist = new ArrayList<Object>();
        olist.add(pi);  // warning: "value-unsafe"
        boolean z = pi.equals(zero);
        boolean z1 = (pi == zero);  // error: reference comparison on value type
        boolean z2 = (pi == null);  // error: reference comparison on value type
        boolean z3 = (pi == obj2);  // error: reference comparison on value type
        synchronized (pi) { }  // error: synch of value, unpredictable result
        synchronized (obj2) { }  // unpredictable result
        Complex qq = pi;
        qq = null;  // possible NPE; warning: "null-unsafe"
        qq = (Complex) obj;  // warning: "null-unsafe"
        qq = Complex.cast(obj);  // OK
        @SuppressWarnings("null-unsafe")
          Complex empty = null;  // possible NPE
        qq = empty;  // possible NPE (null pollution)

    The Payoffs

    It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations. Fields and variables of value types can be split into their unboxed components. Non-static methods on value types can be transformed into static methods which take the components as value parameters.

    Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules? Why not detect programs automagically and perform unboxing transparently? The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced. Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal. In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur. This is why I'm writing this note: to enlist help from, and provide assurances to, the programmer. Basically, I'm shooting for a good set of user-supplied "pragmas" to frame the desired optimization.

    Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules. There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables.

    There are some rough corners to the present scheme. Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions. One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type. That is what the "cast" method does above.

    Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances. Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around.  Here are a couple of first approaches:

public interface java.util.List<@ValueSafe T> extends Collection<T> { …

public interface java.util.List<T extends Object|ValueType> extends Collection<T> { …

(The second approach would require disjunctive types, in which value-safety is "contagious" from the constituent types.)

With more transformations, the return value types of methods can also be unboxed.  This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title "Tuples in the VM".

But for starters, the JVM can apply this transformation under the covers, to internally compiled methods.  This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors.  The lack of multiple return values has a strong distorting effect on many Java APIs.

Even if the JVM fails to unbox a value, there is still potential benefit to the value type.  Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands.  When copying JVM objects, it is extremely helpful to know when an object's identity is important or not.  If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible.  Proxies must be managed carefully, and this can be expensive.  On the other hand, value types are exactly those types which a JVM can "copy and forget" with no downside.

Array types are crucial to bulk data interfaces.  (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.)  Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data.

Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems.  They require the deepest transformations, relative to today's JVM.  There is an impedance mismatch between value-type arrays and Java's covariant array typing, so compromises will need to be struck with existing Java semantics.  It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains.

It may be sufficient to extend the "value-safe" concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form.  Such value-safe arrays would not be convertible to Object[] arrays.  Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements.  Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays.
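As an aside, the text later observes that an array of Complex can already be faked today with a double-length array of double.  A small wrapper (my own sketch, assuming a re-then-im layout and the accessors from the earlier Complex sketch) shows the kind of density an unboxed value-type array would provide natively:

// Flat storage: [re0, im0, re1, im1, ...] -- two doubles per element,
// with no per-element object headers or pointer chains.
final class FakeComplexArray {
    private final double[] flat;

    FakeComplexArray(int length) { this.flat = new double[2 * length]; }

    int length() { return flat.length / 2; }

    // Boxing happens only at this API boundary, never in the storage.
    Complex get(int i) {
        return Complex.valueOf(flat[2 * i], flat[2 * i + 1]);
    }

    void set(int i, Complex z) {
        flat[2 * i] = z.re();
        flat[2 * i + 1] = z.im();
    }
}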
Implicit Method Definitions

The example of class Complex above may be unattractively complex.  I believe most or all of the elements of the example class are required by the logic of value types.

If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code.  On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated.

Java has a rule for implicitly defining a class's constructor, if it defines no constructors explicitly.  Likewise, there are rules for providing default access modifiers for interface members.  Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types.  Here's an example of a "highly implicit" definition of a complex number type:

public class Complex implements ValueType {  // implicitly final
    public double re, im;  // implicitly public final
    // implicit methods are defined elementwise from the fields:
    //   toString, asList, equals(2), hashCode, valueOf, cast
    // optionally, explicit methods (plus, abs, etc.) would go here
}

In other words, with the right defaults, a simple value type definition can be a one-liner.  The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>.

Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly:

public @ValueType class Complex { …  // implicitly final, implements ValueType

(But to me it seems better to communicate the "magic" via an interface, even if it is rooted in an annotation.)

Implicitly Defined Value Types

So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer.  A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or a Cartesian point.  The name and the methods convey the intended meaning.

But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage?  Perhaps we are writing a method (like "divideAndRemainder") which naturally returns a pair of numbers instead of a single number.  Wrapping the pair of numbers in a nominal type (like "QuotientAndRemainder") makes as little sense as wrapping a single return value in a nominal type (like "Quotient").  What we need here are structural value types, commonly known as tuples.

For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows:

public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {
    double $1, $2;
}

Here the component names are fixed and all the required methods are defined implicitly.  The supertype is an abstract class which has suitable shared declarations.  The name itself mentions a JVM-style method parameter descriptor, which may be "cracked" to determine the number and types of the component fields.

The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it.
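To see the shape such a tuple would take, here is a hand-written stand-in, which compiles today because $ is legal in Java identifiers (so $1 and $2 are legal field names).  In the proposal the JVM would spin types like this on demand; nobody would write them by hand:

abstract class Tuple { }                // stand-in for java.lang.tuple.Tuple

final class $DD extends Tuple {         // "DD": two double components
    final double $1, $2;                // fixed, positional component names
    $DD(double $1, double $2) { this.$1 = $1; this.$2 = $2; }
}

class TupleDemo {
    // The motivating example: a pair-returning method that needs no
    // nominal "QuotientAndRemainder" wrapper type.
    static $DD divideAndRemainder(double n, double d) {
        double q = Math.floor(n / d);
        return new $DD(q, n - q * d);
    }
}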
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.)

Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or "destructuring bind") for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day.

Other Use Cases

Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations.  These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses.

Other types have a variable-length 'tail' of internal values.  The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values.  Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values.  Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values.  The value type may also include 'header' information.

Variable-sized values often have a length distribution which favors short lengths.  In that case, the design of the value type can make the first few values in the sequence be direct 'header' fields of the value type.  In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference.  Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough.  This is the case with String, where the tail is a mutable (but never mutated) character array.

Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user.
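To make the header-plus-tail pattern above concrete, here is a rough sketch of a bit-vector value type.  The names, the 64-bit header threshold, and the layout are all my own assumptions:

final class BitVectorValue /* implements ValueType */ {
    private final long head;     // bits 0..63 stored directly in the header
    private final long[] tail;   // bits 64 and up; null in the common case

    private BitVectorValue(long head, long[] tail) {
        this.head = head; this.tail = tail;
    }

    // Short vectors need no heap-allocated tail at all.
    static BitVectorValue valueOf(long bits) {
        return new BitVectorValue(bits, null);
    }

    boolean get(int i) {
        if (i < 64)
            return ((head >>> i) & 1) != 0;
        // Like String's character array, the tail is never mutated after
        // construction, so the header type can safely share it.
        return ((tail[(i - 64) >>> 6] >>> ((i - 64) & 63)) & 1) != 0;
    }
}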
All of this brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction?  Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double).

My initial thought was to make the fields always public, which would make the security problem moot.  But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes.  I believe we can win back both sides of the trade-off, by training the verifier never to split up the components in an unboxed value.  Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.

Similar to String, we could build an efficient multi-precision decimal type along these lines:

public final class DecimalValue implements ValueType {
    protected final long header;
    protected final BigInteger digits;

    private DecimalValue(long header, BigInteger digits) {
        this.header = header; this.digits = digits;
    }

    public static DecimalValue valueOf(int value, int scale) {
        assert(scale >= 0);
        return new DecimalValue(((long)value << 32) + scale, null);
    }

    public static DecimalValue valueOf(long value, int scale) {
        if (value == (int) value)
            return valueOf((int)value, scale);
        return new DecimalValue(-scale, BigInteger.valueOf(value));
    }
}

Values of this type would be passed between methods as two machine words.  Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed.

(Note the tension between encapsulation and unboxing in this case.  It would be better if the header and digits fields were private, but depending on where the unboxing information must "leak", it is probably safer to make a public revelation of the internal structure.)

Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues.  (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.)  Getting the full benefit of unboxing and arrays will require some new JVM magic.

Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits.  For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types.  This is probably only worthwhile if the unboxing arrays can be packed with such values.

More Daydreams

A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant.  An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes.  More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.
@ValueSafe
public interface ValueType extends java.io.Serializable,
        Unsynchronizable, Interchangeable { …

public class Complex implements ValueType {
    // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
    …

It seems possible that Integer and the other wrapper types could be retrofitted as value-safe types.  This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable.  It is likely that code which violates value-safety for wrapper types exists but is uncommon.  It is less plausible to retrofit String, since the prominent operation String.intern is often used with value-unsafe code.

We should also reconsider the distinction between boxed and unboxed values in code.  The design presented above obscures that distinction.  As another thought experiment, we could imagine making a first-class distinction in the type system between boxed and unboxed representations.  Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one.  For example:

complex pi = complex.valueOf(Math.PI, 0);
Complex boxPi = pi;  // convert to boxed
myList.add(boxPi);
complex z = myList.get(0);  // unbox

Such a convention could perhaps absorb the current difference between int and Integer, double and Double.  It might also allow the programmer to express a helpful distinction among array types.

As said above, array types are crucial to bulk data interfaces, but are limited in the JVM.  Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type.  Something like this which can also accommodate value type components seems worthwhile.  On the other hand, does it make sense for value types to contain short arrays?  And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural "jumbo" data structure?  These considerations must wait for another day and another note.

More Work

It seems to me that a good sequence for introducing such value types would be as follows:

1. Add the value-safety restrictions to an experimental version of javac.
2. Code some sample applications with value types, including Complex and DecimalValue.
3. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  Ensure the feasibility of the performance model for the sample applications.
4. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

A similar investigation should be applied (concurrently) to array types.  In this case, it seems to me that the starting point is in the JVM:

1. Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids.  No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
2. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  Ensure the feasibility of the performance model for the sample applications.
3. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

That's enough musing from me for now.  Back to work!

    Read the article

  • Reusing XSL template to be invoked with different relative XPaths

    - by meomaxy
    Here is my contrived example that illustrates what I am attempting to accomplish. I have an input XML file that I wish to flatten for further processing. Input file:

    <BICYCLES>
      <BICYCLE>
        <COLOR>BLUE</COLOR>
        <WHEELS>
          <WHEEL>
            <WHEEL_TYPE>FRONT</WHEEL_TYPE>
            <FLAT>NO</FLAT>
            <REFLECTORS>
              <REFLECTOR>
                <REFLECTOR_NUM>1</REFLECTOR_NUM>
                <COLOR>RED</COLOR>
                <SHAPE>SQUARE</SHAPE>
              </REFLECTOR>
              <REFLECTOR>
                <REFLECTOR_NUM>2</REFLECTOR_NUM>
                <COLOR>WHITE</COLOR>
                <SHAPE>ROUND</SHAPE>
              </REFLECTOR>
            </REFLECTORS>
          </WHEEL>
          <WHEEL>
            <WHEEL_TYPE>REAR</WHEEL_TYPE>
            <FLAT>NO</FLAT>
          </WHEEL>
        </WHEELS>
      </BICYCLE>
    </BICYCLES>

    The input is a list of <BICYCLE> nodes. Each <BICYCLE> has a <COLOR> and optionally has <WHEELS>. <WHEELS> is a list of <WHEEL> nodes, each of which has a few attributes, and optionally has <REFLECTORS>. <REFLECTORS> is a list of <REFLECTOR> nodes, each of which has a few attributes. The goal is to flatten this XML. This is the XSL I'm using:

    <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:fo="http://www.w3.org/1999/XSL/Format"
        xmlns:xs="http://www.w3.org/2001/XMLSchema"
        xmlns:fn="http://www.w3.org/2005/xpath-functions">
      <xsl:output method="xml" encoding="UTF-8" indent="yes" omit-xml-declaration="yes" xml:space="preserve"/>
      <xsl:template match="/">
        <BICYCLES>
          <xsl:apply-templates/>
        </BICYCLES>
      </xsl:template>
      <xsl:template match="BICYCLE">
        <xsl:choose>
          <xsl:when test="WHEELS">
            <xsl:apply-templates select="WHEELS"/>
          </xsl:when>
          <xsl:otherwise>
            <BICYCLE>
              <COLOR><xsl:value-of select="COLOR"/></COLOR>
              <WHEEL_TYPE/>
              <FLAT/>
              <REFLECTOR_NUM/>
              <COLOR/>
              <SHAPE/>
            </BICYCLE>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:template>
      <xsl:template match="WHEELS">
        <xsl:apply-templates select="WHEEL"/>
      </xsl:template>
      <xsl:template match="WHEEL">
        <xsl:choose>
          <xsl:when test="REFLECTORS">
            <xsl:apply-templates select="REFLECTORS"/>
          </xsl:when>
          <xsl:otherwise>
            <BICYCLE>
              <COLOR><xsl:value-of select="../../COLOR"/></COLOR>
              <WHEEL_TYPE><xsl:value-of select="WHEEL_TYPE"/></WHEEL_TYPE>
              <FLAT><xsl:value-of select="FLAT"/></FLAT>
              <REFLECTOR_NUM/>
              <COLOR/>
              <SHAPE/>
            </BICYCLE>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:template>
      <xsl:template match="REFLECTORS">
        <xsl:apply-templates select="REFLECTOR"/>
      </xsl:template>
      <xsl:template match="REFLECTOR">
        <BICYCLE>
          <COLOR><xsl:value-of select="../../../../COLOR"/></COLOR>
          <WHEEL_TYPE><xsl:value-of select="../../WHEEL_TYPE"/></WHEEL_TYPE>
          <FLAT><xsl:value-of select="../../FLAT"/></FLAT>
          <REFLECTOR_NUM><xsl:value-of select="REFLECTOR_NUM"/></REFLECTOR_NUM>
          <COLOR><xsl:value-of select="COLOR"/></COLOR>
          <SHAPE><xsl:value-of select="SHAPE"/></SHAPE>
        </BICYCLE>
      </xsl:template>
    </xsl:stylesheet>

    The output is:

    <BICYCLES xmlns:fn="http://www.w3.org/2005/xpath-functions"
        xmlns:fo="http://www.w3.org/1999/XSL/Format"
        xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <BICYCLE>
        <COLOR>BLUE</COLOR>
        <WHEEL_TYPE>FRONT</WHEEL_TYPE>
        <FLAT>NO</FLAT>
        <REFLECTOR_NUM>1</REFLECTOR_NUM>
        <COLOR>RED</COLOR>
        <SHAPE>SQUARE</SHAPE>
      </BICYCLE>
      <BICYCLE>
        <COLOR>BLUE</COLOR>
        <WHEEL_TYPE>FRONT</WHEEL_TYPE>
        <FLAT>NO</FLAT>
        <REFLECTOR_NUM>2</REFLECTOR_NUM>
        <COLOR>WHITE</COLOR>
        <SHAPE>ROUND</SHAPE>
      </BICYCLE>
      <BICYCLE>
        <COLOR>BLUE</COLOR>
        <WHEEL_TYPE>REAR</WHEEL_TYPE>
        <FLAT>NO</FLAT>
        <REFLECTOR_NUM/>
        <COLOR/>
        <SHAPE/>
      </BICYCLE>
    </BICYCLES>

    What I don't like about this is that I'm outputting the color attribute in several forms:

    <COLOR><xsl:value-of select="../../../../COLOR"/></COLOR>
    <COLOR><xsl:value-of select="../../COLOR"/></COLOR>
    <COLOR><xsl:value-of select="COLOR"/></COLOR>
    <COLOR/>

    It seems like there ought to be a way to make a named template and invoke it from the various places where it is needed and pass some parameter that represents the path back to the <BICYCLE> node to which it refers. Is there a way to clean this up, say with a named template for bicycle fields, for wheel fields and for reflector fields? In the real world example this is based on, there are many more attributes to a "bicycle" than just color, and I want to make this XSL easy to change to include or exclude fields without having to change the XSL in multiple places.
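    One possible approach (an untested sketch; it relies on XSLT 2.0 empty-sequence defaults, which are available since the stylesheet already declares version="2.0") is a single named template that emits one flattened row, taking each level's context node as a parameter. Call sites pass only the nodes they have; xsl:value-of over an absent parameter naturally yields an empty element, so the <COLOR/> case falls out for free:

    <!-- Hypothetical named template: the one place that emits a row. -->
    <xsl:template name="emitRow">
      <xsl:param name="bike"/>
      <xsl:param name="wheel" select="()"/>
      <xsl:param name="reflector" select="()"/>
      <BICYCLE>
        <COLOR><xsl:value-of select="$bike/COLOR"/></COLOR>
        <WHEEL_TYPE><xsl:value-of select="$wheel/WHEEL_TYPE"/></WHEEL_TYPE>
        <FLAT><xsl:value-of select="$wheel/FLAT"/></FLAT>
        <REFLECTOR_NUM><xsl:value-of select="$reflector/REFLECTOR_NUM"/></REFLECTOR_NUM>
        <COLOR><xsl:value-of select="$reflector/COLOR"/></COLOR>
        <SHAPE><xsl:value-of select="$reflector/SHAPE"/></SHAPE>
      </BICYCLE>
    </xsl:template>

    <!-- Example call from the REFLECTOR template; the BICYCLE and WHEEL
         templates would make the same call with fewer parameters, and the
         remaining fields come out empty: -->
    <xsl:template match="REFLECTOR">
      <xsl:call-template name="emitRow">
        <xsl:with-param name="bike" select="ancestor::BICYCLE"/>
        <xsl:with-param name="wheel" select="ancestor::WHEEL"/>
        <xsl:with-param name="reflector" select="."/>
      </xsl:call-template>
    </xsl:template>

    With this shape, adding or removing a "bicycle" field means editing only emitRow, and the ancestor:: axes replace the brittle ../../.. chains.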

    Read the article
