Search Results

Search found 4288 results on 172 pages for 'alex man'.


  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there's a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test if these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge them, so there shouldn't be any trace of them left, right?
    EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any packages and I'm not sure if they are safe to remove:
        /usr/bin/ipython2.6
        /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
        /usr/lib/python2.6/dist-packages/easy_install.py
        /usr/lib/python2.6/dist-packages/IPython
        /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
        /usr/lib/python2.6/dist-packages/setuptools
        /usr/lib/python2.6/dist-packages/setuptools.egg-info
        /usr/lib/python2.6/dist-packages/setuptools.pth
        /usr/lib/python2.6/dist-packages/site.py
        /usr/lib/python2.6/dist-packages/wx.pth
        /usr/local/lib/python2.6
        /usr/local/lib/python2.6/dist-packages
        /usr/local/lib/python2.6/site-packages
        /usr/share/man/man1/ipython2.6.1.gz
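    For anyone wanting to run the same kind of check, a minimal sketch (assuming a dpkg-based system such as Mint or Ubuntu) is to ask dpkg which package, if any, owns each leftover path; anything no package claims is a candidate for removal:

        # print the python2.6 paths that no installed package claims to own
        for f in /usr/bin/ipython2.6 /usr/lib/python2.6/dist-packages/* /usr/local/lib/python2.6; do
            dpkg -S "$f" >/dev/null 2>&1 || echo "not owned by any package: $f"
        done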

    Read the article

  • What should I know about memory management?

    - by bua
    First of all: I don't use stackadmin or similar, so please don't vote for moving this there. I'm reading man top and the paper "What Every Programmer Should Know About Memory...", but I need a really simple explanation. Given the following top dump:
        top - 11:21:19 up 37 days, 21:16, 4 users, load average: 0.41, 0.75, 1.09
        Tasks: 313 total, 5 running, 308 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.4%us, 0.6%sy, 0.9%ni, 96.2%id, 0.1%wa, 0.0%hi, 1.9%si, 0.0%st
        Mem: 132103848k total, 131916948k used, 186900k free, 54000k buffers
        Swap: 73400944k total, 73070884k used, 330060k free, 13931192k cached
        PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
        3305 tudb 25 10 144m 52m 940 R 6.0 0.0 1306:09 app
        3011 tudb 15 0 71528 19m 604 S 3.3 0.0 171:57.83 app
        3373 tudb 25 10 209m 93m 940 S 3.0 0.1 1074:53 app
        3338 tudb 25 10 144m 47m 940 R 2.7 0.0 780:48.48 app
        4227 tudb 25 10 208m 99m 904 S 1.3 0.1 198:56.01 app
        8506 tudb 25 10 80.7g 49g 932 S 2.0 39.6 458:31.22 app
    I'm wondering what these mean: RES (my explanation: physical memory consumption? see the 49g), VIRT (memory-mapped disk used as cache? see the 80.7g), SHR (shared pages?), and the "cached" label on the Swap line (is that for disk memory-mapped into the swap cache?). Should the sum of RES add up to the "used" figure on the Mem line, or maybe the sum of VIRT?
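    As a rough cross-check (only a sketch — it over-counts, because shared pages are counted once per process), you can sum the resident sizes reported by ps and compare them with the kernel's own accounting:

        # total resident set size of all processes, in kilobytes
        ps -e -o rss= | awk '{ sum += $1 } END { print sum, "kB total RSS" }'
        # kernel-level view of used/free/buffers/cached memory
        free -k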

    Read the article

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize any (or specific) root activity ahead of everything else? For instance, several times per year something goes wrong (usually human error, overstressing apache/mysql) and the system becomes unresponsive under heavy load, like a load average of 200 on an 8-core CPU. I know PHP scripts can be given a run-time limit after which they are killed, but that's not the way to go here, because the limit has to be at least 45 minutes. The problem is that by the time I manage to log in via SSH and restart apache/mysql under this stress, it has nearly hit those 45 minutes anyway. A hardware restart also usually triggers fsck at boot on all hard drives, since the box typically hasn't been rebooted for a long time. I was told it's really not a good idea to disable fsck, but then again it takes more than an hour to complete. What is the fastest way to restart apache/mysql? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restart (or rather stop) commands wouldn't take so long? One idea comes to mind: use nice for apache/mysql, but I can't risk limiting those two vital apps 24/7 — or could I? I'm a little scared that some other system process (a backup job, swap if any, etc.) would then slow the pages down too much. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hardware/software resource available. I can't throttle it all the time, just at the points when the system becomes unresponsive, so that I can maintain it.
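    One direction worth testing (a sketch under the assumption of a sysvinit-style Debian box, not a complete answer) is to leave apache/mysql at their normal priority and instead raise the CPU and I/O priority of sshd, so that logging in and issuing the restart commands stays responsive:

        # give sshd (and the shells it spawns) a higher scheduling priority
        sudo renice -n -10 -p $(pgrep -x sshd)
        sudo ionice -c 2 -n 0 -p $(pgrep -x sshd)
        # once logged in, run the restarts themselves at high priority too
        sudo nice -n -15 /etc/init.d/apache2 restart
        sudo nice -n -15 /etc/init.d/mysql restart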

    Read the article

  • If I define `mydomain`, postfix does not expand mail aliases

    - by Norky
    I have postfix v2.6.6 running on CentOS 6.3, hostname priest.ocsl.local (a private, internal domain), with a number of aliases:
        supportpeople: [email protected], [email protected], [email protected]
        requests: "|/opt/rt4/bin/rt-mailgate --queue 'general' --action correspond --url http://localhost/", supportpeople
        help: "|/opt/rt4/bin/rt-mailgate --queue 'help' --action correspond --url http://localhost/", supportpeople
    If I leave postfix with its default configuration, the aliases are resolved correctly/as I expect, so that incoming mail to, say, [email protected] will be piped through the rt-mailgate command and also delivered (via the mail server for ocsl.co.uk, a publicly resolvable domain) to [email protected], user2, etc. The problem comes when I define mydomain = ocsl.co.uk in /etc/postfix/main.cf (with the intention that outgoing mail come from, for example, [email protected]). When I do this, postfix continues to run the piped command correctly; however, it no longer expands the nested aliases as I expect: instead of trying to deliver to [email protected], user2, etc., it tries to send to [email protected], which does not exist on the upstream mail server and generates NDRs. postconf -n for the non-working configuration follows (the working configuration differs only by the "mydomain" line):
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = ocsl.co.uk
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
    We did have things working as we expected/wanted previously on an older system running Sendmail.
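    As a hedged aside (an assumption about the intent, not a verified fix for the alias behaviour): if the only goal is for outgoing mail to carry the ocsl.co.uk domain, it may be enough to set myorigin instead of mydomain and leave everything else alone, for example:

        # append ocsl.co.uk to locally submitted mail without redefining mydomain
        sudo postconf -e 'myorigin = ocsl.co.uk'
        sudo postfix reload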

    Read the article

  • What is the --daemon option?

    - by Pascal Dimassimo
    I was installing Solr with Jetty using these instructions. Basically, those instructions have you download the Jetty startup script and copy it to /etc/init.d/jetty. But it was not working: each time I started Jetty, I got a "FAILED" message and no clue as to why. I decided to open up the /etc/init.d/jetty script to understand what was happening, and saw that it uses start-stop-daemon to launch Jetty. After some debugging, I discovered that removing the --daemon option at the end of the start-stop-daemon call fixed my problem. After a bit of research I found that this guy had the same problem and resolved it the same way: by removing the --daemon option. What is weird is that the switch does not seem to belong to start-stop-daemon, because it is not documented in its man page. Also, I've seen it used with other commands. So what does that --daemon option do? And why did removing it resolve my problem? Note that I am working on Ubuntu 10.04.2 LTS.
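    A quick way to see who actually consumes that flag (a sketch; the exact contents of the init script will vary with the Jetty version) is to look at both the script and start-stop-daemon's own option list:

        # is --daemon passed to start-stop-daemon itself, or forwarded to jetty's own scripts?
        grep -n -- '--daemon' /etc/init.d/jetty
        # list the options start-stop-daemon actually documents
        start-stop-daemon --help 2>&1 | grep -i -e daemon -e background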

    Read the article

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that less only holds a limited amount of data before the command is stopped/killed. To test, run (careful! this uses lots of system memory very fast!):
        $ cat /dev/zero | less
    From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests it would be limited in such a way. In addition, I couldn't find any documentation on the subject via Google. Any light shed on this quite surprising discovery would be great! System information: quad-core Intel i7, 8 GB RAM.
        $ uname -a
        Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        $ less --version
        less 458 (GNU regular expressions)
        Copyright (C) 1984-2012 Mark Nudelman
        less comes with NO WARRANTY, to the extent permitted by law.
        For information about the terms of redistribution, see the file named README in the less distribution.
        Homepage: http://www.greenwoodsoftware.com/less
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description: Ubuntu 14.04 LTS
        Release: 14.04
        Codename: trusty
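    For anyone repeating the experiment, less does document per-file buffer controls (the -b/--buffers and -B options; check man less on your version for the exact spelling), so a gentler test might be:

        # -B stops buffers growing without bound for piped input; --buffers sets the cap (in kilobytes)
        cat /dev/zero | less -B --buffers=10240
        # with -B alone, only the default buffer space is used for the pipe
        cat /dev/zero | less -B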

    Read the article

  • How to search a file for matching whole lines?

    - by WilliamKF
    I have a command which sends a series of numbers to stdout, each on a new line. I need to determine whether a particular number exists in the list. The match needs to be exact, not a substring. For example, a simple approach that does not work would be:
        /run/command/outputing/numbers | grep -c <numberToSearch>
    If the count is non-zero a match was found; if it is zero, the number is not in the list. However, this gives a false positive on the following list when searching for '456':
        1234567
        98765
        23
        1771
    The issue is that numberToSearch can match a subsequence of digits on a line, whereas I only want hits on the whole line. I looked at the man page for grep and did not see any way to match only whole lines. Is there a way to do this, or would I be better off using awk or sed or some other tool instead? I need a binary answer as to whether the number being searched for is present or not.
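    grep can do this on its own: -x anchors the match to the whole line, -F treats the number as a fixed string rather than a regex, and -q makes the exit status the yes/no answer. A minimal sketch using the placeholder command from the question:

        if /run/command/outputing/numbers | grep -Fxq '456'; then
            echo "number is in the list"
        else
            echo "number is not in the list"
        fi

    grep's exit status is 0 only when a matching whole line was found, so the if test itself is the binary answer.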

    Read the article

  • locale: What is the LANGUAGE variable used for? (and when?)

    - by seya
    I am trying to understand the locales used in Linux. On my Ubuntu 11.10 system, locale prints the following:
        LANG=en_DK.UTF-8
        LANGUAGE=en_GB:en
        LC_CTYPE=en_GB.UTF-8
        LC_NUMERIC="en_DK.UTF-8"
        LC_TIME="en_DK.UTF-8"
        LC_COLLATE=en_GB.UTF-8
        LC_MONETARY="en_DK.UTF-8"
        LC_MESSAGES=en_GB.UTF-8
        LC_PAPER="en_DK.UTF-8"
        LC_NAME="en_DK.UTF-8"
        LC_ADDRESS="en_DK.UTF-8"
        LC_TELEPHONE="en_DK.UTF-8"
        LC_MEASUREMENT="en_DK.UTF-8"
        LC_IDENTIFICATION="en_DK.UTF-8"
        LC_ALL=
    (en_DK is for the international date format, continental European number formatting (1.234,56), etc.) I think I understand what the LC_* family does, that LANG is the fallback if one of them is not set, and that LC_ALL sets all of the LC_* variables to its value. What I don't know yet is what LANGUAGE is used for. The notation en_GB:en reminds me of the Accept-Language HTTP header. With the settings above it would mean British English is used if a translation for it exists; otherwise any existing English translation (en_US, en_AU, ..., whatever) would be used. Am I right so far? Also, which programs actually obey the LANGUAGE setting, and how exactly does it differ from LC_MESSAGES? Unfortunately, man locale only documents the LC_* family, and searching the web for 'linux locale LANGUAGE' or similar is a moot point. (Of course "language" is a word often used when talking about locales, and it may also appear in the output of locale without being discussed.) Can anybody help me out here?
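    A small experiment (a sketch; it assumes the relevant message catalogs, e.g. German translations of coreutils, are actually installed) shows LANGUAGE acting as a colon-separated priority list for gettext messages:

        # error message should come out in German if a de catalog exists, otherwise in English
        LANGUAGE=de:en ls /nonexistent
        # same idea with the question's setting: try en_GB first, then fall back to generic en
        LANGUAGE=en_GB:en ls /nonexistent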

    Read the article

  • Upstart: multiple instances of a service not working.

    - by Dax
    I started playing with MongoDB on Lucid. Now I would like to run a DB server and a config server on the same box. They both launch from the same binary, but with different config files and on different ports. The log and lib directories are split, so one set goes to mongodb and the other to mongoconf. Each process can be started on its own without any problems:
        start mongodb
        stop mongodb
        start mongoconf
        stop mongoconf
    But if I try to start both, the second one just starts and exits. Using 'initctl log-priority debug' I got the following in the logs:
        Jan 6 12:44:12 mongo4 init: event_finished: Finished started event
        Jan 6 12:44:12 mongo4 init: job_process_handler: Ignored event 1 (1) for process 5690
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) main process (5690) terminated with status 1
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) goal changed from start to stop
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) state changed from running to stopping
    man 5 init shows that you can use instance names to differentiate the two. I tried using 'instance mongoconf' in the one upstart script and 'instance mongodb' in the other, and it still fails. I can manually start the other process, so there are definitely no conflicts on port numbers or directories. Any ideas on what to try, or how to get output on why it was 'terminated with status 1'? Thanks
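    One pattern that may be worth trying (a sketch only — the job name, paths and variable here are hypothetical) is a single parameterised job whose instance stanza is a variable, so Upstart treats each start as a distinct instance:

        # /etc/init/mongo.conf (hypothetical job definition)
        description "mongod instance"
        instance $ROLE
        respawn
        exec /usr/bin/mongod --config /etc/mongodb-$ROLE.conf
        # started and stopped per instance:
        #   sudo start mongo ROLE=db
        #   sudo start mongo ROLE=conf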

    Read the article

  • package manager cannot create users? installation script fails?

    - by RapidWebs
    It seems that when I try to install any package which requires the creation of a system user and group, the installer fails with the error: install: invalid user 'username'. When it happened the first time, I assumed it was related to that package or its installation script. But now I am attempting a different package and, alas, the same issue arises! Note: /etc/passwd and /etc/group are not set immutable with chattr. Here is the example output from an attempted installation (of varnish):
        Selecting previously unselected package varnish.
        Unpacking varnish (from .../varnish_3.0.2-1ubuntu0.1_amd64.deb) ...
        Processing triggers for man-db ...
        Setting up libvarnishapi1 (3.0.2-1ubuntu0.1) ...
        Setting up varnish (3.0.2-1ubuntu0.1) ...
        install: invalid user `varnish'
        dpkg: error processing varnish (--configure):
        subprocess installed post-installation script returned error exit status 1
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
        varnish
    Note: running Ubuntu 12.04. For now, I have worked around the issue by manually creating the users and running apt-get install -f, but this does not resolve the actual issue. Any ideas?
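    A hedged diagnostic sketch (varnish is just the example package from above, testdummy is a throwaway account name, and this assumes the post-installation script is a plain shell script):

        # does user/group lookup work at all on this box?
        getent passwd varnish || echo "no varnish user yet (expected before a successful install)"
        # replay the failing maintainer script with tracing to see exactly which command breaks
        sudo sh -x /var/lib/dpkg/info/varnish.postinst configure
        # check that a system user can still be created the usual Debian way
        sudo adduser --system --group --no-create-home testdummy && sudo deluser testdummy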

    Read the article

  • rsync via cron. How do I enable logging?

    - by tetranz
    Hi all. I'm backing up a remote server to another computer using rsync. In cron.daily I have a file with this:
        rsync -avz -e ssh [email protected]:/ /mybackup/
    It uses a public/private key pair to log in. This seems to work well most of the time; however, I've (foolishly) only ever really checked it by looking at the dates on some important files (MySQL dumps) that I know change every day. Obviously, an error could occur after that file. Sometimes it fails: when I run it manually, something like "client reset" sometimes happens. What is the best way to log it so that I can check with certainty whether it completed or not? The cron log doesn't indicate any errors. I haven't tried it, but the rsync man page on the oldish version of CentOS on the backup machine doesn't show the --log-file option. I guess I could redirect stdout, but I don't really want to know about every file. I just want to know if it all worked or not. Thanks
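    A minimal sketch of a cron wrapper that logs only the outcome (the log path is arbitrary, and user@remote stands in for the real source given in the question):

        #!/bin/sh
        LOG=/var/log/rsync-backup.log
        echo "$(date '+%F %T') backup starting" >> "$LOG"
        # -q drops the per-file listing; stderr still carries any real errors into the log
        rsync -aqz -e ssh user@remote:/ /mybackup/ >> "$LOG" 2>&1
        echo "$(date '+%F %T') backup finished, rsync exit status $?" >> "$LOG"

    rsync's exit status is 0 only when the whole transfer succeeded, so a non-zero value in the log is the signal to investigate.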

    Read the article

  • How to route 1 VPN through another on OS X?

    - by Eeep
    Hi everyone, thanks a lot for your help! I've been tinkering with this for a while and have read many posts along with Googling for help, but my knowledge of TCP/IP is really weak. I have access to two different VPN servers:
    1. One is set up in Network Settings and connects through PPP.
    2. One is set up through Tunnelblick and uses OpenVPN.
    I can connect to either tunnel #1 or tunnel #2, but not to both, one after the other. One of my major to-dos this year is to study TCP/IP, but for now, would you be super-helpful and spell the fix out really clearly? I have no experience with routing, DNS, gateways or any of that. If you tell me "set your gateway to XXX.XXX.XX.XXX", can you specify how I get that IP and off of which interface, so I don't get messed up? I can figure out the Terminal just fine if you let me know what to type, and I WILL read the man pages on everything you help me with. Thanks a million!
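    For reference, a hedged sketch of the kind of commands involved on OS X (the addresses are placeholders you would read off your own netstat/ifconfig output; this is not a complete recipe):

        # inspect the current routing table and the tunnel interfaces
        netstat -rn
        ifconfig
        # example only: send traffic for the second VPN server (203.0.113.10 here)
        # through the first tunnel's gateway (10.8.0.1 here), so tunnel 2 rides inside tunnel 1
        sudo route -n add 203.0.113.10 10.8.0.1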

    Read the article

  • Apache mod_disk_cache on a separate drive

    - by pavs
    How can I set up Apache mod_disk_cache on a separate drive from the one where the OS/Apache is installed? I have set up this in my apache2.conf:
        <IfModule mod_cache_disk.c>
        # cache cleaning is done by htcacheclean, which can be configured in
        # /etc/default/apache2
        #
        # For further information, see the comments in that file,
        # /usr/share/doc/apache2/README.Debian, and the htcacheclean(8)
        # man page.
        # This path must be the same as the one in /etc/default/apache2
        CacheRoot /media/cacheHD
        # This will also cache local documents. It usually makes more sense to
        # put this into the configuration for just one virtual host.
        CacheEnable disk /
        # The result of CacheDirLevels * CacheDirLength must not be higher than
        # 20. Moreover, pay attention to file system limits. Some file systems
        # do not support more than a certain number of inodes and
        # subdirectories (e.g. 32000 for ext3)
        CacheDirLevels 2
        CacheDirLength 1
        </IfModule>
    It doesn't seem to be caching anything at all. The drive itself is a freshly installed SSD formatted with ext4.
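    A few hedged checks that often explain an empty cache on Debian/Ubuntu (module names as shipped with Apache 2.4, where the module is called cache_disk; adjust to your layout):

        # make sure the cache modules are actually enabled
        sudo a2enmod cache cache_disk
        sudo service apache2 restart
        # confirm the Apache user can write to the cache root on the other drive
        sudo chown -R www-data:www-data /media/cacheHD
        sudo -u www-data touch /media/cacheHD/write-test && echo "cache root is writable"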

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.
Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than series.
- No significant performance improvement between parallel 4 threads and 8 threads.
- Transform with binaries provides better performance than base64.
- In all cases, chunk size in 1MB - 2MB gets better performance.
Hope this helps, Shaun. All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Speech Recognition Grammar Rules using delphi code

    - by XBasic3000
    I need help to make ISeechRecoGrammar without using xml format. Like creating it on runtime on delphi. example: procedure TForm1.FormCreate(Sender: TObject); var AfterCmdState: ISpeechGrammarRuleState; temp : OleVariant; Grammar: ISpeechRecoGrammar; PropertiesRule: ISpeechGrammarRule; ItemRule: ISpeechGrammarRule; TopLevelRule: ISpeechGrammarRule; begin SpSharedRecoContext.EventInterests := SREAllEvents; Grammar := SpSharedRecoContext.CreateGrammar(m_GrammarId); TopLevelRule := Grammar.Rules.Add('TopLevelRule', SRATopLevel Or SRADynamic, 1); PropertiesRule := Grammar.Rules.Add('PropertiesRule', SRADynamic, 2); ItemRule := Grammar.Rules.Add('ItemRule', SRADynamic, 3); AfterCmdState := TopLevelRule.AddState; TopLevelRule.InitialState.AddWordTransition(AfterCmdState, 'test', temp, temp, '****', 0, temp, temp); Grammar.Rules.Commit; Grammar.CmdSetRuleState('TopLevelRule', SGDSActive); end; can someone reconstruct or midify this delphi code (above) to be exactly same function below(xml). <GRAMMAR LANGID="409"> <!-- "Constant" definitions --> <DEFINE> <ID NAME="RID_start" VAL="1"/> <ID NAME="PID_action" VAL="2"/> <ID NAME="PID_actionvalue" VAL="3"/> </DEFINE> <!-- Rule definitions --> <RULE NAME="start" ID="RID_start" TOPLEVEL="ACTIVE"> <P>i am</P> <RULEREF NAME="action" PROPNAME="action" PROPID="PID_action" /> <O>OK</O> </RULE> <RULE NAME="action"> <L PROPNAME="actionvalue" PROPID="PID_actionvalue"> <P VAL="1">albert</P> <P VAL="2">francis</P> <P VAL="3">alex</P> </L> </RULE> </GRAMMAR> sorry for my english...

    Read the article

  • Attaching .swf assets to Flex3 by calling getDefinitionByName()

    - by Alexander Farber
    Hello! Does anybody please know, how could you attach symbols from an .swf file in the Actionscript part of your Flex3 file? I've prepared a simple test case demonstrating my problem. Everything works (there are icons at the 4 buttons, there is a red circle) - except the getDefinitionByName() part. My target is to attach a symbol from library "dynamically" - i.e. depending at the value of the suit variable at the runtime. Thank you, Alex Symbols.as: package { public class Symbols { [Embed('../assets/symbols.swf', symbol='spades')] public static const SPADES:Class; [Embed('../assets/symbols.swf', symbol='clubs')] public static const CLUBS:Class; [Embed('../assets/symbols.swf', symbol='diamonds')] public static const DIAMONDS:Class; [Embed('../assets/symbols.swf', symbol='hearts')] public static const HEARTS:Class; } } TestCase.mxml: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" creationComplete="onCreationComplete();"> <mx:Script> <![CDATA[ private function onCreationComplete():void { var sprite:Sprite = new Sprite(); var g:Graphics = sprite.graphics; g.lineStyle(1, 0xFF0000); g.beginFill(0xFF0000); g.drawCircle(100, 100, 20); g.endFill(); spriteHolder.addChild(sprite); // XXX stuff below not working, can it be fixed? var suit:String = "SPADES"; var mc:MovieClip = new (getDefinitionByName("Symbols.SPADES") as Class); spriteHolder.addChild(mc); } ]]> </mx:Script> <mx:VBox width="100%"> <mx:Button label="1" icon="{Symbols.SPADES}" /> <mx:Button label="2" icon="{Symbols.CLUBS}" /> <mx:Button label="3" icon="{Symbols.DIAMONDS}" /> <mx:Button label="4" icon="{Symbols.HEARTS}" /> <mx:UIComponent id="spriteHolder" width="200" height="200"/> </mx:VBox> </mx:Application>

    Read the article

  • How do I create a list or set object in a class in Python?

    - by Az
    For my project, the role of the Lecturer (defined as a class) is to offer projects to students. Project itself is also a class. I have some global dictionaries, keyed by the unique numeric id's for lecturers and projects that map to objects. Thus for the "lecturers" dictionary (currently): lecturer[id] = Lecturer(lec_name, lec_id, max_students) I'm currently reading in a white-space delimited text file that has been generated from a database. I have no direct access to the database so I haven't much say on how the file is formatted. Here's a fictionalised snippet that shows how the text file is structured. Please pardon the cheesiness. 0001 001 "Miyamoto, S." "Even Newer Super Mario Bros" 0002 001 "Miyamoto, S." "Legend of Zelda: Skies of Hyrule" 0003 002 "Molyneux, P." "Project Milo" 0004 002 "Molyneux, P." "Fable III" 0005 003 "Blow, J." "Ponytail" The structure of each line is basically proj_id, lec_id, lec_name, proj_name. Now, I'm currently reading the relevant data into the relevant objects. Thus, proj_id is stored in class Project whereas lec_name is a class Lecturer object, et al. The Lecturer and Project classes are not currently related. However, as I read in each line from the text file, for that line, I wish to read in the project offered by the lecturer into the Lecturer class; I'm already reading the proj_id into the Project class. I'd like to create an object in Lecturer called offered_proj which should be a set or list of the projects offered by that lecturer. Thus whenever, for a line, I read in a new project under the same lec_id, offered_proj will be updated with that project. If I wanted to get display a list of projects offered by a lecturer I'd ideally just want to use print lecturers[lec_id].offered_proj. My Python isn't great and I'd appreciate it if someone could show me a way to do that. I'm not sure if it's better as a set or a list, as well. Update After the advice from Alex Martelli and Oddthinking I went back and made some changes and tried to print the results. Here's the code snippet: for line in csv_file: proj_id = int(line[0]) lec_id = int(line[1]) lec_name = line[2] proj_name = line[3] projects[proj_id] = Project(proj_id, proj_name) lecturers[lec_id] = Lecturer(lec_id, lec_name) if lec_id in lecturers.keys(): lecturers[lec_id].offered_proj.add(proj_id) print lec_id, lecturers[lec_id].offered_proj The print lecturers[lec_id].offered_proj line prints the following output: 001 set([0001]) 001 set([0002]) 002 set([0003]) 002 set([0004]) 003 set([0005]) It basically feels like the set is being over-written or somesuch. So if I try to print for a specific lecturer print lec_id, lecturers[001].offered_proj all I get is the last the proj_id that has been read in.

    Read the article

  • Using mx.charts in a Flex Hero mobile project

    - by Alexander Farber
    Hello, Adobe states that Charts are supported in mobile projects but when I try to change the following working files (created project with File - New - Flex Mobile Project - Google Nexus One): MyTest.mxml: <?xml version="1.0" encoding="utf-8"?> <s:MobileApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" firstView="views.MyTestHome"> <s:navigationContent> <s:Button label="Home" click="navigator.popToFirstView();"/> </s:navigationContent> <s:actionContent/> </s:MobileApplication> MyTestHome.mxml: <?xml version="1.0" encoding="utf-8"?> <s:View xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" title="Test Chart"> </s:View> to the new MyTestHome.mxml: <?xml version="1.0" encoding="utf-8"?> <s:View xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:mx="library://ns.adobe.com/flex/mx" xmlns:s="library://ns.adobe.com/flex/spark" title="Test Chart"> <fx:Script> <![CDATA[ import mx.collections.*; [Bindable] private var medalsAC:ArrayCollection = new ArrayCollection( [ {Country: "USA", Gold: 35, Silver:39, Bronze: 29 }, {Country:"China", Gold: 32, Silver:17, Bronze: 14 }, {Country:"Russia", Gold: 27, Silver:27, Bronze: 38 } ]); ]]> </fx:Script> <mx:PieChart id="chart" height="100%" width="100%" paddingRight="5" paddingLeft="5" color="0x323232" dataProvider="{medalsAC}" > <mx:series> <mx:PieSeries labelPosition="callout" field="Gold"> <mx:calloutStroke> <s:SolidColorStroke weight="0" color="0x888888" alpha="1.0"/> </mx:calloutStroke> <mx:radialStroke> <s:SolidColorStroke weight="0" color="#FFFFFF" alpha="0.20"/> </mx:radialStroke> <mx:stroke> <s:SolidColorStroke color="0" alpha="0.20" weight="2"/> </mx:stroke> </mx:PieSeries> </mx:series> </mx:PieChart> </s:View> and also add c:\Program Files\Adobe\Adobe Flash Builder Burrito\sdks\4.5.0\frameworks\libs\datavisualization.swc c:\Program Files\Adobe\Adobe Flash Builder Burrito\sdks\4.5.0\frameworks\libs\sparkskins.swc c:\Program Files\Adobe\Adobe Flash Builder Burrito\sdks\4.5.0\frameworks\libs\mx\mx.swc to Flex Build Path (clicking "Add SWC" button): Then it fails with the error: Could not resolve to a component implementation. Does anybody please have an idea here? Alex

    Read the article

  • android error NoSuchElementException

    - by Alexander
    I have returned a cursor string, but it contains a delimiter. The delimiter is <ENTER>. I have the string quest.setText(String.valueOf(c.getString(1))); and I want to turn the <ENTER> into a new line. What is the best method to achieve this in Android? I understand there is a way to get the delimiter. I want this done for each record. I can iterate through records like so:
        Cursor c = db.getContact(2);
    I tried using a string tokenizer but it doesn't seem to work. Here is the code for the tokenizer. I tested it in plain Java and it works without errors.
        String question = c.getString(1);
        // quest.setText(String.valueOf(c.getString(1)));
        //quest.setText(String.valueOf(question));
        StringTokenizer st = new StringTokenizer(question,"<ENTER>");
        //DisplayContact(c);
        // StringTokenizer st = new StringTokenizer(question, "=<ENTER>");
        while(st.hasMoreTokens()) {
            String key = st.nextToken();
            String val = st.nextToken();
            System.out.println(key + "\n" + val);
        }
    I then tried running it in Android. Here is the error log:
        06-06 22:31:55.251: E/AndroidRuntime(537): FATAL EXCEPTION: main
        06-06 22:31:55.251: E/AndroidRuntime(537): java.util.NoSuchElementException
        06-06 22:31:55.251: E/AndroidRuntime(537): at java.util.StringTokenizer.nextToken(StringTokenizer.java:208)
        06-06 22:31:55.251: E/AndroidRuntime(537): at alex.android.test.database.quiz.TestdatabasequizActivity$1.onClick(TestdatabasequizActivity.java:95)
        06-06 22:31:55.251: E/AndroidRuntime(537): at android.view.View.performClick(View.java:3511)
        06-06 22:31:55.251: E/AndroidRuntime(537): at android.view.View$PerformClick.run(View.java:14105)
        06-06 22:31:55.251: E/AndroidRuntime(537): at android.os.Handler.handleCallback(Handler.java:605)
        06-06 22:31:55.251: E/AndroidRuntime(537): at android.os.Handler.dispatchMessage(Handler.java:92)
        06-06 22:31:55.251: E/AndroidRuntime(537): at android.os.Looper.loop(Looper.java:137)
        06-06 22:31:55.251: E/AndroidRuntime(537): at android.app.ActivityThread.main(ActivityThread.java:4424)
        06-06 22:31:55.251: E/AndroidRuntime(537): at java.lang.reflect.Method.invokeNative(Native Method)
        06-06 22:31:55.251: E/AndroidRuntime(537): at java.lang.reflect.Method.invoke(Method.java:511)
        06-06 22:31:55.251: E/AndroidRuntime(537): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
        06-06 22:31:55.251: E/AndroidRuntime(537): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
        06-06 22:31:55.251: E/AndroidRuntime(537): at dalvik.system.NativeStart.main(Native Method)
    This is the database query:
        public Cursor getContact(long rowId) throws SQLException {
            Cursor mCursor = db.query(true, DATABASE_TABLE, new String[] {KEY_ROWID, question, possibleAnsOne, possibleAnsTwo, possibleAnsThree, realQuestion, UR}, KEY_ROWID + "=" + rowId, null, null, null, null, null);
            if (mCursor != null) {
                mCursor.moveToFirst();
            }

    Read the article

  • Re-Add tabBarController's view into window and the device is in landscape mode

    - by user285553
    Hello, I have a tabBarController-based app with 4 tabs, one of which is for logging in/out. The main idea is that when I log out, I release the tabBarController (this releases all 4 view controllers too - I do this because I want to refresh all the views after logging out so they look just like the first time). After that, I alloc it again, add the view controllers again, set the tabBarItem titles, etc. This works fine. The problem appears in landscape mode: my logout view is painted to fit the landscape area, but after disconnecting (when I release and alloc the tabBarController again) the view is painted in portrait mode even though the device is in landscape. After disconnecting, I call releaseOldTabView and then createNewTabView, and afterwards the tabBarController is in portrait instead of landscape.

        -(void)releaseOldTabView:(BOOL)tabViewEnabled{
            if(tabViewEnabled){
                [tabBarController.view removeFromSuperview];
                [tabBarController release];
            }
        }

        -(void)createNewTabView{
            tabBarController = [[UITabBarController alloc] init];
            NSMutableArray *oneArray = [[NSMutableArray alloc] init];
            tabBarController.delegate = self;

            LoginViewController *login = [[LoginViewController2 alloc] initWithNibName:@"LoginViewController" bundle:nil];
            SecondViewController *secondController = [[SecondViewController alloc] initWithNibName:@"SecondViewController" bundle:nil];
            ThirdViewController *thirdController = [[ThirdViewController alloc] init];
            FourthViewController *fourthController = [[FourthViewController alloc] init];

            login.tabBarItem = [[UITabBarItem alloc] initWithTitle:@"Login" image:[UIImage imageNamed:@"connect.png"] tag:0];
            secondController.tabBarItem = [[UITabBarItem alloc] initWithTitle:@"TabBar2" image:[UIImage imageNamed:@"tabBar2.png"] tag:1];
            thirdController.tabBarItem = [[UITabBarItem alloc] initWithTitle:@"TabBar3" image:[UIImage imageNamed:@"tabBar3.png"] tag:2];
            fourthController.tabBarItem = [[UITabBarItem alloc] initWithTitle:@"TabBar4" image:[UIImage imageNamed:@"tabBar4.png"] tag:3];

            [oneArray addObject:login];
            [oneArray addObject:secondController];
            [oneArray addObject:thirdController];
            [oneArray addObject:fourthController];
            [tabBarController setViewControllers:oneArray];

            //[tabBarController.view convertRect:tabBarController.view.frame toView:window];
            [window addSubview:tabBarController.view];

            [oneArray release];
            [login release];
            [secondController release];
            [thirdController release];
            [fourthController release];
        }

    After calling this method the tab bar view is in portrait, but the status bar (the top bar which shows the carrier name) is painted normally (landscape). I also tried [tabBarController.view convertRect:tabBarController.view.frame toView:window]; but without any success. Can anyone give me a hand? Thanks in advance, Alex.

    Read the article

  • Python: How best to parse a simple grammar?

    - by Rosarch
    Ok, so I've asked a bunch of smaller questions about this project, but I still don't have much confidence in the designs I'm coming up with, so I'm going to ask a broader question. I am parsing pre-requisite descriptions from a course catalog. The descriptions almost always follow a certain form, which makes me think I can parse most of them. From the text, I would like to generate a graph of course pre-requisite relationships. (That part will be easy once I have parsed the data.) Some sample inputs and outputs:

        "CS 2110"                        => ("CS", 2110)                                # 0
        "CS 2110 and INFO 3300"          => [("CS", 2110), ("INFO", 3300)]              # 1
        "CS 2110, INFO 3300"             => [("CS", 2110), ("INFO", 3300)]              # 1
        "CS 2110, 3300, 3140"            => [("CS", 2110), ("CS", 3300), ("CS", 3140)]  # 1
        "CS 2110 or INFO 3300"           => [[("CS", 2110)], [("INFO", 3300)]]          # 2
        "MATH 2210, 2230, 2310, or 2940" => [[("MATH", 2210), ("MATH", 2230), ("MATH", 2310)], [("MATH", 2940)]]  # 3

    If the entire description is just a course, it is output directly.
    If the courses are conjoined ("and"), they are all output in the same list.
    If the courses are disjoined ("or"), they go in separate lists.
    In the last example, we have both "and" and "or".

    One caveat that makes it easier: it appears that the nesting of "and"/"or" phrases is never deeper than shown in example 3. What is the best way to do this? I started with PLY, but I couldn't figure out how to resolve the reduce/reduce conflicts. The advantage of PLY is that it's easy to manipulate what each parse rule generates:

        def p_course(p):
            'course : DEPT_CODE COURSE_NUMBER'
            p[0] = (p[1], int(p[2]))

    With pyparsing, it's less clear how to modify the output of parseString(). I was considering building upon @Alex Martelli's idea of keeping state in an object and building up the output from that, but I'm not sure exactly how that is best done.

        def addCourse(self, str, location, tokens):
            self.result.append((tokens[0][0], tokens[0][1]))

        def makeCourseList(self, str, location, tokens):
            dept = tokens[0][0]
            new_tokens = [(dept, tokens[0][1])]
            new_tokens.extend((dept, tok) for tok in tokens[1:])
            self.result.append(new_tokens)

    For instance, to handle "or" cases:

        def __init__(self):
            self.result = []
            # ...
            self.statement = (course_data + Optional(OR_CONJ + course_data)).setParseAction(self.disjunctionCourses)

        def disjunctionCourses(self, str, location, tokens):
            if len(tokens) == 1:
                return tokens
            print "disjunction tokens: %s" % tokens

    How does disjunctionCourses() know which smaller phrases to disjoin? All it gets is tokens, but what has been parsed so far is stored in result, so how can the function tell which data in result corresponds to which elements of tokens? I guess I could search through the tokens, then find an element of result with the same data, but that feels convoluted... What's a better way to approach this problem?
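    As an aside, because the nesting never goes deeper than or-groups containing and-groups, the structure can also be recovered without a formal grammar, by splitting on "or" first, then on commas/"and", while carrying the last department code forward for bare numbers. The sketch below shows that idea; it is written in Java purely to illustrate the control flow (it is not the Python solution asked about), the course regex is an assumption about the catalog format, and it always returns the nested list-of-groups shape rather than the three output shapes above.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class PrereqSketch {
            // Assumed course pattern: optional uppercase department code plus a 4-digit number.
            private static final Pattern COURSE = Pattern.compile("([A-Z]+)?\\s*(\\d{4})");

            // Returns a list of alternative groups; courses inside one group are conjoined.
            static List<List<String>> parse(String text) {
                List<List<String>> alternatives = new ArrayList<>();
                String lastDept = null;
                for (String orPart : text.split("\\bor\\b")) {           // disjunction level
                    List<String> group = new ArrayList<>();
                    for (String piece : orPart.split(",|\\band\\b")) {   // conjunction level
                        Matcher m = COURSE.matcher(piece);
                        if (m.find()) {
                            if (m.group(1) != null) {
                                lastDept = m.group(1);                   // remember "CS", "MATH", ...
                            }
                            group.add("(" + lastDept + ", " + m.group(2) + ")");
                        }
                    }
                    if (!group.isEmpty()) {
                        alternatives.add(group);
                    }
                }
                return alternatives;
            }

            public static void main(String[] args) {
                // Prints [[(MATH, 2210), (MATH, 2230), (MATH, 2310)], [(MATH, 2940)]]
                System.out.println(parse("MATH 2210, 2230, 2310, or 2940"));
            }
        }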

    Read the article

  • Read from file in eclipse

    - by Buzkie
    I'm trying to read from a text file to feed data into my Java program. However, Eclipse continuously gives me a "Source not found" error no matter where I put the file. I've made an additional sources folder in the project directory; the file in question is in both that folder and the project's bin folder, and it still can't be found. I even put a copy on my desktop and tried pointing Eclipse there when it asked me to browse for the source lookup path. No matter what I do, it can't find the file. Here's my code in case it's pertinent:

        System.out.println(System.getProperty("user.dir"));
        File file = new File("file.txt");
        Scanner scanner = new Scanner(file);

    In addition, it says the user directory is the project directory, and there is a copy of the file there too. I have no clue what to do. Thanks, Alex

    Update: after attempting the suggestion below and refreshing again, I was greeted by a host of errors:

        FileNotFoundException(Throwable).<init>(String) line: 195
        FileNotFoundException(Exception).<init>(String) line: not available
        FileNotFoundException(IOException).<init>(String) line: not available
        FileNotFoundException.<init>(String) line: not available
        URLClassPath$JarLoader.getJarFile(URL) line: not available
        URLClassPath$JarLoader.access$600(URLClassPath$JarLoader, URL) line: not available
        URLClassPath$JarLoader$1.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        URLClassPath$JarLoader.ensureOpen() line: not available
        URLClassPath$JarLoader.<init>(URL, URLStreamHandler, HashMap) line: not available
        URLClassPath$3.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        URLClassPath.getLoader(URL) line: not available
        URLClassPath.getLoader(int) line: not available
        URLClassPath.access$000(URLClassPath, int) line: not available
        URLClassPath$2.next() line: not available
        URLClassPath$2.hasMoreElements() line: not available
        ClassLoader$2.hasMoreElements() line: not available
        CompoundEnumeration<E>.next() line: not available
        CompoundEnumeration<E>.hasMoreElements() line: not available
        ServiceLoader$LazyIterator.hasNext() line: not available
        ServiceLoader$1.hasNext() line: not available
        LocaleServiceProviderPool$1.run() line: not available
        AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
        LocaleServiceProviderPool.<init>(Class<LocaleServiceProvider>) line: not available
        LocaleServiceProviderPool.getPool(Class<LocaleServiceProvider>) line: not available
        NumberFormat.getInstance(Locale, int) line: not available
        NumberFormat.getNumberInstance(Locale) line: not available
        Scanner.useLocale(Locale) line: not available
        Scanner.<init>(Readable, Pattern) line: not available
        Scanner.<init>(ReadableByteChannel) line: not available
        Scanner.<init>(File) line: not available

    Code used:

        System.out.println(System.getProperty("user.dir"));
        File file = new File(System.getProperty("user.dir") + "/file.txt");
        Scanner scanner = new Scanner(file);
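    A small aside that may help untangle the two symptoms: the "Source not found" page is usually the Eclipse debugger reporting that it has no source attachment for the JDK class it stopped in, which is a separate issue from whether file.txt itself can be found. A minimal sketch that prints where the JVM is actually looking and fails with a clear message when the file is missing (the error message text is just an example):

        import java.io.File;
        import java.io.FileNotFoundException;
        import java.util.Scanner;

        public class ReadFileDemo {
            public static void main(String[] args) throws FileNotFoundException {
                // Eclipse launches programs from the project root by default, so a relative
                // "file.txt" is resolved against that directory (not bin/ or a source folder).
                File file = new File(System.getProperty("user.dir"), "file.txt");
                System.out.println("Looking for: " + file.getAbsolutePath());

                if (!file.exists()) {
                    throw new FileNotFoundException("Put file.txt at " + file.getAbsolutePath());
                }

                Scanner scanner = new Scanner(file);
                while (scanner.hasNextLine()) {
                    System.out.println(scanner.nextLine());
                }
                scanner.close();
            }
        }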

    Read the article

  • EXC_BAD_ACCESS when simply casting a pointer in Obj-C

    - by AlexChilcott
    Hi all, frequent visitor but first post here on StackOverflow; I'm hoping you might be able to help me out with this. I'm fairly new to Obj-C and Xcode, and I'm faced with a really weird problem. Googling hasn't turned up anything whatsoever. Basically, I get an EXC_BAD_ACCESS signal on a line that, as far as I can see, doesn't do any dereferencing or anything like that. I'm wondering if you have any idea where to look for this. I've found a workaround, but I have no idea why it works... The line the broken version barfs on, where I get my bad access signal, is:

        LevelEntity *le = entity;

    THIS VERSION WORKS:

        NSArray *contacts = [self.body getContacts];
        for (PhysicsContact *contact in contacts) {
            PhysicsBody *otherBody;
            if (contact.bodyA == self.body) {
                otherBody = contact.bodyB;
            }
            if (contact.bodyB == self.body) {
                otherBody = contact.bodyA;
            }
            id entity = [otherBody userData];
            if (entity != nil) {
                LevelEntity *le = entity;
                CGPoint point = [contact contactPointOnBody:otherBody];
            }
        }

    THIS VERSION DOESN'T WORK:

        NSArray *contacts = [self.body getContacts];
        for (NSUInteger i = 0; i < [contacts count]; i++) {
            PhysicsContact *contact = [contacts objectAtIndex:i];
            PhysicsBody *otherBody;
            if (contact.bodyA == self.body) {
                otherBody = contact.bodyB;
            }
            if (contact.bodyB == self.body) {
                otherBody = contact.bodyA;
            }
            id entity = [otherBody userData];
            if (entity != nil) {
                LevelEntity *le = entity;
                CGPoint point = [contact contactPointOnBody:otherBody];
            }
        }

    Here, the only difference between the two examples is the way I enumerate through my array. In the first version (which works) I use for (... in ...), whereas in the second I use for (...; ...; ...). As far as I can see, these should be the same. This is seriously weirding me out. Does anyone have similar experience or an idea of what's going on here? That would be really great :) Cheers, Alex

    Read the article

  • How to add Items to GridView in C# Windows Store App (Win8)

    - by flexage
    To keep things simple, let's just say I have a C# Windows Store project (Windows 8) that has the following items: a GridView (PlatformsGrid), a List<PlatformSearchResult> (allPlatforms), and a DataTemplate (PlatformDataTemplate) in StandardStyles.xaml. allPlatforms is a collection of PlatformSearchResult objects populated from an online API, with the following 3 properties: ID, Name and Alias. I am able to add a new item to the GridView for each object in my allPlatforms collection; however, the items are blank and do not show the data from my objects. A quick summary of the current code looks like this:

    XAML markup:

        <!-- Platforms Content -->
        <GridView x:Name="PlatformsGrid"
                  Grid.Row="1"
                  CanReorderItems="True"
                  CanDragItems="True"
                  ItemTemplate="{StaticResource PlatformDataTemplate}">
            <GridView.ItemsPanel>
                <ItemsPanelTemplate>
                    <WrapGrid MaximumRowsOrColumns="2"
                              VerticalChildrenAlignment="Top"
                              HorizontalChildrenAlignment="Center" />
                </ItemsPanelTemplate>
            </GridView.ItemsPanel>
        </GridView>

    Data template:

        <!-- Platform Item Template -->
        <DataTemplate x:Key="PlatformDataTemplate">
            <Grid Background="#FF939598" Height="250" Width="250">
                <Image Source="/SampleImage.png" Stretch="UniformToFill"/>
                <StackPanel Orientation="Vertical" Background="#CC000000" Height="90" VerticalAlignment="Bottom">
                    <TextBlock Text="{Binding Name}" Margin="10,3,0,0" Width="242" Height="62"
                               TextTrimming="WordEllipsis" TextWrapping="Wrap" HorizontalAlignment="Left"/>
                    <TextBlock Text="{Binding Alias}" Margin="10,2,0,0" Width="186" Height="14"
                               TextTrimming="WordEllipsis" HorizontalAlignment="Left" FontSize="9" Opacity="0.49"/>
                </StackPanel>
            </Grid>
        </DataTemplate>

    Controlling function:

        private async void FetchItemInfo_Loaded(object sender, RoutedEventArgs e)
        {
            // Get list of top games
            List<PlatformSearchResult> allPlatforms = new List<PlatformSearchResult>();
            allPlatforms = await GamesDB.GetPlatforms();

            // Dynamically create platform tiles
            foreach (PlatformSearchResult platform in allPlatforms)
            {
                PlatformsGrid.DataContext = platform;
                PlatformsGrid.Items.Add(platform);
            }
        }

    How do I get the added items to show the appropriate object properties (ignoring the image for now)? I'm just interested in populating the content of the TextBlocks. I appreciate any help anyone can provide! Thanks, Alex.

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn't support building Microsoft's own Setup and Deployment projects (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you'll get the following warning:

        The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.

    This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format defined by Microsoft quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield, which promises to be more MSBuild friendly and to offer more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon. But how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You'll want to build your setup projects after you have successfully compiled the projects.

    1. Install Visual Studio 2010 on the build server(s).
    2. Open your build process template (remember to branch or copy the xaml file before modifying it!).
    3. Locate the Try to Compile the Project activity.
    4. Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the Run MSBuild for Project activity.
    5. Drop an instance of the WriteBuildMessage activity inside the Handle Standard Output section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: this is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity). Set the Message property to stdOutput.
    6. Drop an instance of the WriteBuildError activity into the Handle Error Output section. Set the Message property to errOutput.
    7. Select the InvokeProcess activity and set the values of the parameters to:

    The finished workflow should look like this:

    This will generate the MSI files, but they won't be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly:

    1. Drop a Sequence activity somewhere after the Copy to Drop location activity.
    2. Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers.
    3. Drop a FindMatchingFiles activity in the Sequence activity and set the properties to:
    4. Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers.
    5. Drop an InvokeProcess activity inside the ForEach activity. FileName: "xcopy.exe". Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation)

    The Sequence activity should look like this:

    Save the build process template and check it in. Run the build and verify that the MSIs are built and copied to the drop location.
Note 1: One of the drawbacks of using devenv like this in a team build is that, since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to be rebuilt. In TFS 2008, this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010, the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you have a problem with this: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx

Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.

    Read the article

< Previous Page | 149 150 151 152 153 154 155 156 157 158 159 160  | Next Page >