Search Results

Search found 7533 results on 302 pages for 'knowledge transfer'.


  • FFmpeg Video Hosting for Linux and Windows Server

    - by Aditi
    FFmpeg hosting is a special type of web hosting where the host servers have video transcoding software loaded on them, which allows the automatic conversion of videos from one format to another. FFmpeg is a cross-platform solution for recording, converting, transcoding and streaming audio and video. It includes libavcodec – the leading audio/video codec library. FFmpeg hosting gets its name from a set of server-side programs (modules) called FFmpeg. There are a number of applications and web scripts available which allow webmasters to create their own video sharing websites (a typical transcode step is sketched at the end of this item). Video hosting typically requires:
    PHP 4.3 and above (including CLI support)
    Mencoder and Mplayer
    FFmpeg-PHP
    MySQL database server
    LAME MP3 Encoder
    Libogg + Libvorbis
    GD Library 2 or higher
    CGI-BIN
    There are a number of web service providers who offer FFmpeg hosting. Following is a list of some of the best FFmpeg hosting providers for both Linux and Windows Server.
    Dream Host: Dreamhost provides web-based email access, mail filtering, spam filtering, unlimited email IDs, a vacation autoresponder, Python support, full CGI access and many more services. Price: $7.95 View Details
    Micfo: It offers unlimited disk space and bandwidth. Other services include a free domain for life and free website transfer, with many more services. All in all, one of the best options to consider. Price: $5 View Details
    Host Upon: HostUpon offers FFmpeg hosting on all their hosting packages, with readily installed modules to start a video website or social network with video uploading. Scripts such as Boonex Dolphin / PHPmotion / Social Engine / ABKsoft Scripts / Joomla Video Plugin / Clipshare / ClipBucket / Social Media / Rayzz / Vidi Script work with their FFmpeg. Their FFmpeg hosting plan offers 24/7/365 support with a typical response time of 15 minutes or less. Price: $5.95 View Details
    DownTown Host: DownTown Host provides full and exceptional support by live chat and telephone. It has high-powered, modern servers and the finest web server technology. It offers free search engine submission and continuous data backup protection, with free email forwarding and site moves. There are many more services too.
    Site5: This FFmpeg service provider offers an uptime guarantee, real-time stats on each server and many more attractive services. Price: $4.95 View Details
    Cirtex Hosting: Cirtex Hosting allows you to host 7 websites & domains and provides unlimited storage space and monthly bandwidth. It also offers FTP and email accounts and many more services. Price: $2.49 View Details
    FLV Hosting: FLV Hosting supplies RTMP server streaming for large-size video streaming and server-side recording. It is flexible and costs less, and they customize to the client's requirements. Price: $9.95 View Details
    AptHost: This hosting service provides 24x7x365 premium support and fully FFmpeg-enabled services. Price: $4.95 View Details
    HostMDS: Great support, priced low. It provides SSH access, CGI, Ruby on Rails, Perl, PHP, MySQL, FrontPage extensions, 24/7 support, free domain transfer and spam filtering. It offers instant account setup, low-latency fast bandwidth & much more! They were formerly known as Vistapages. Price: $4.95 View Details
    Related posts: Best WordPress Video Themes for a Video Blog | Free Web Based Applications | 24+ Coda Alternatives for Windows and Linux
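    For illustration, a minimal sketch of the transcode step such scripts typically run on upload - the paths, the FLV target and the 22050 Hz audio rate are assumptions, not details taken from any provider above:

        <?php
        // Hypothetical paths; FLV only accepts certain audio rates, hence -ar 22050
        $in  = escapeshellarg('/tmp/upload.avi');
        $out = escapeshellarg('/var/www/videos/clip.flv');
        // Run the ffmpeg binary installed by the host to convert the upload
        exec("ffmpeg -i $in -ar 22050 $out", $lines, $status);
        if ($status !== 0) {
            die('transcoding failed');
        }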


  • Autoscaling in a modern world… Part 4

    - by Steve Loethen
    Now that I have the rules and services XML files in the cloud, it is time to sever the bonds of earth and live totally in the cloud. I have to host the autoscaling object in Azure as well, point it to the rules, tell it the management certs and get out of the way. A couple of questions. Where to host? The most obvious place to me was a worker role: a simple, single-purpose worker role, doing nothing but watching my app. Here are the steps I used.
    1) Created a project. Separate project from my web site. I wanted to be able to run the web in the cloud and the autoscaler locally for debugging purposes. Seemed like the easiest way.
    2) Add the Wasabi block to the project.
    3) Configure the settings. I used the same settings used for the console app. It points to the same web role and uses the same rules file.
    4) Make sure the certificate needed to manage the role is added to the cert store in the sky (“LocalMachine” and “My” are the default locations).
    I ran the worker role in the local fabric. It worked. I then published to the cloud, and verified it worked again. Here is what my code looked like.

        public override bool OnStart()
        {
            Trace.WriteLine("Set Default Connection Limit", "Information");
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            Trace.WriteLine("Set up configuration change code", "Information");
            // set up config
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

            Trace.WriteLine("Get current diagnostic configuration", "Information");
            // Get current diagnostic configuration
            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

            Trace.WriteLine("Set Diagnostic Buffer Size", "Information");
            // Set diagnostic buffer size
            dmc.Logs.BufferQuotaInMB = 4;

            Trace.WriteLine("Set log transfer period", "Information");
            // Set log transfer period
            dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            Trace.WriteLine("Set log verbosity", "Information");
            // Set log filter to verbose
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

            Trace.WriteLine("Start the diagnostic monitor", "Information");
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

            Trace.WriteLine("Get the current Autoscaler from the EntLib Container", "Information");
            scaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

            Trace.WriteLine("Start the autoscaler", "Information");
            scaler.Start();

            Trace.WriteLine("Call the base class OnStart", "Information");
            return base.OnStart();
        }

        public override void OnStop()
        {
            Trace.WriteLine("Stop the Autoscaler", "Information");
            scaler.Stop();
        }

    I did have to turn on some basic logging for Wasabi, which I will cover in the next post. This let me figure out that I hadn’t done the certificate step.


  • The enterprise vendor con - connecting SSDs using SATA 2 (3Gbits), thus limiting their performance

    - by tonyrogerson
    When comparing SSD against hard drive performance, it really makes me cross when folk think comparing an array of SSDs running on 3Gbits/sec to hard drives running on 6Gbits/sec is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with solid state drives, they compare four SSDs connected at 3Gbits/sec against ten 10Krpm drives connected at 6Gbits/sec [Tony slaps forehead while shouting DOH!].
    It is true that in the case of hard drives it probably doesn’t make much difference whether the link is 3Gbit or 6Gbit, because SAS and SATA are both end-to-end protocols rather than a shared bus architecture like SCSI, so the hard drive doesn’t share bandwidth and probably can’t get near the 600MiBytes/sec throughput that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16, a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, 100% sequential read) I only get 347MiBytes/sec sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps. OK, if that was 3Gbits it would be less – around 280MiBytes/sec – oh, but wait a minute [...fingers tap desk].
    You’ll struggle to find in the commodity space an SSD that doesn’t have the SATA 3 (6Gbits) interface. SSDs are fast: not only low latency and high IOps, they also offer a very large sustained transfer rate. Consider the OCZ Agility 3 – it so happens that in my master's dissertation I did the same test but on a different box, and I got 374MiBytes/sec at an average latency of 2.67ms per IO, equating to 47.9K IOps. The cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive sat in a box connected with SATA 2 (3Gbits) would only yield around 280MiBytes/sec, thus losing almost 100MiBytes/sec of throughput and a ton of IOps too.
    So why the hell are “enterprise” vendors still only connecting SSDs at 3Gbits? Well, my conspiracy theory states that they have no interest in you moving to SSD because they’d lose so much money. The argument that they use SATA 2 doesn’t wash: SATA 3 has been out for some time now and all the commodity stuff you buy uses it. Consider the cost not in terms of price per GB but price per IOps – SSDs absolutely thrash hard drives on that. It was true that the opposite also held, that hard drives thrashed SSDs on price per GB, but is that true now? I’m not so sure. A 300GByte 2.5” 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.09 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB. OK, I compared an “enterprise” hard drive with a “commodity” SSD, so things get a little more complicated here: most “enterprise” SSDs are SLC and most commodity ones are MLC, and SLC gives more performance and wear – I’ll talk about that another day. For now though, don’t get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn’t cut it; SSDs need 6Gbit to breathe, and even that they are pushing.
    Alas, SSDs are connected using SATA, so all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.


  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3
    The Oracle Solaris Crash Analysis Tool team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates to Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.
    Release Notes
    General blast support: The blast GUI has been removed and is no longer supported.
    Oracle Solaris 2.6 support: As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read their crash dumps.
    New Commands
    Sanity command: Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were being run. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.
    Interface Changes
    scat_explore -r and -t options: The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:
    scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.]N
    Where:
    -v Verbose mode: the command will print messages highlighting what it's doing.
    -a Auto mode: the command does not prompt for input from the user as it runs.
    -d dest Instructs scat_explore to save its output in the directory dest instead of the present working directory.
    -r base_dir Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If the output is not named using the -d option, scat_explore names its output file "scat_explore_system_name_hostid_lbolt_value_corefile_name".
    -t Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.
    Tag Name / Definition:
    FATAL An extremely important message which should be investigated.
    WARNING A warning that may or may not have anything to do with the crash.
    ERROR An error, usually printed with a suggested command.
    ALERT Used to indicate something the tool discovered.
    INFO Purely informational message.
    INFO2 A follow-up to an INFO-tagged message.
    REDZONE Usually used when printing memory info showing something is in the kernel's REDZONE.
    N The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.
    Example:
    $ scat --scat_explore -a -v -r /tmp vmcore.0
    #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
    #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
    #Extracting crash data...
    #Gathering standard crash data collections...
    #Panic string indicates a possible hang...
    #Gathering Hang Related data...
    #Creating tar file...
    #Compressing tar file...
    #Successful extraction
    SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
    Sending scat_explore results: The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.
    Known Issues
    There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon:
    Display of timestamps in threads and clock information is incorrect in some cases.
    There are alignment issues with some of the tables produced by the tool.


  • Options for different domain and hosting

    - by Carl
    The situation: I have a hosting service (one.com) on which I have installed a wordpress.org site in a subdirectory 'wordpress': myhost.com/wordpress/ (myhost.com is actually my own domain, but it already has contents and I don't want wordpress/ to appear in the root of that domain.) I want to use a second domain for this site. Thinking I would be able to forward to the wordpress site without problems, I registered the domain at GoDaddy.com: mydomain.com
    What I want: when my visitors type in mydomain.com, I want them to see the contents of myhost.com/wordpress/, and the same for all subpages (mydomain.com/a/subpage fetches from myhost.com/wordpress/a/subpage). Just a redirect isn't enough; I want my visitors to see only mydomain.com as their domain.
    Some notes: If I set up forwarding with URL masking at GoDaddy, they just serve a full frame pointing to myhost.com/wordpress/. This isn't good enough for me, since mydomain.com will always show up in the address bar, also for subpages (I want mydomain.com/a/subpage to show in the address bar for a subpage). I believe this could in principle be done with a .htaccess file with URL rewriting (see the sketch at the end of this item), but I have no hosting with GoDaddy so I can't upload such a file there, and hosting with GoDaddy is very expensive (of course) so I don't want to do that. I don't think I can use DNS settings; the host of mydomain.com says they don't allow anyone else to point to their name servers. If possible, I wouldn't want to re-install the wordpress site, as it would take quite some time; I'd prefer to keep it at myhost.com/wordpress/ (if possible). Anything involving transferring the domain is supposed to take 5-7 working days, and I would need my site up and running earlier than that, so I'd like to avoid it if possible.
    Am I locked in? As it seems, I am rather locked in with GoDaddy. I can't use the domain with .htaccess since I can't upload such a file (and won't pay for hosting by GoDaddy). I can't use any of their forwarding options since none of them do what I want (one just forwards, and the one that masks the URL does it with frames). Would you agree?
    Possible solutions:
    1. Transfer the domain to a hosting service with reasonable hosting pricing, as opposed to GoDaddy (I'd probably use one.com, the same host as for myhost.com, in that case), and there either re-install wordpress on the new account, or use .htaccess with URL rewriting on the new account to fetch the contents from myhost.com/wordpress/. Can this be set up to work with subpages as well, so visitors won't ever see "myhost.com/wordpress", just "mydomain.com"? I.e., mydomain.com/a/subpage/ would fetch from myhost.com/wordpress/a/subpage/?
    2. This might be a long shot: find some free (preferably) hosting that allows pointing to their nameservers, make DNS settings at GoDaddy so that my domain appears at that site, and there put a .htaccess file with URL rewriting to forward to myhost.com/wordpress/. Could this be possible? What services could I use in that case? As I see it, this would be the only way to avoid both transferring the domain (taking 5-7 working days) and re-installing the wordpress site.
    Sorry for the long question. All info and ideas are welcome.
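    For illustration, a minimal mod_rewrite sketch of the rewriting idea above, for whichever host ends up serving mydomain.com. The [P] proxy flag assumes that host has mod_proxy enabled (and WordPress's own URL settings would still have to agree); the domain names are the placeholders from the question:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://myhost.com/wordpress/$1 [P,L]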


  • C#: How to avoid WIA-error when scanning documents with 2400dpi or more?

    - by Stephan_W
    Hello, when we scan a document with a resolution of 2400dpi or higher, we receive (for example) one of the following error messages ("Ausnahme von HRESULT" is the German-locale "Exception from HRESULT"):
    COMException: Ausnahme von HRESULT: 0x80010100 (RPC_E_SYS_CALL_FAILED)
    or
    COMException: Ausnahme von HRESULT: 0x8021006F
    in one of the following lines:
    img = itm.Transfer(scanFormat.ScanFormat) as WIA.ImageFile;
    img = ip.Apply(img as WIA.ImageFile);
    Some screenshots of the mentioned errors: http://www.amarant-it.de/TempDownload/WIA_Error01.png, or the same path with WIA_Error02.png and WIA_Error03.png.
    For scanning we use the following code:

        #region Image-Convert-Settings
        //IP.Filters.Add IP.FilterInfos("Convert").FilterID
        //IP.Filters(1).Properties("FormatID").Value = wiaFormatJPEG
        WIA.IImageProcess ip = new WIA.ImageProcessClass();
        object convert = "Convert";
        WIA.IFilterInfo fi = ip.FilterInfos.get_Item(ref convert);
        ip.Filters.Add(fi.FilterID, 0);
        convert = "FormatID";
        object formatstring = scanFormat.ScanFormat;
        WIA.IFilter filter;
        foreach (WIA.IFilter fTemp in ip.Filters)
        {
            filter = fTemp;
            WIA.IProperty prop = filter.Properties.get_Item(ref convert);
            prop.set_Value(ref formatstring);
        }
        #endregion

        #region Image-Scan + Convert
        img = itm.Transfer(scanFormat.ScanFormat) as WIA.ImageFile;
        img = ip.Apply(img as WIA.ImageFile);
        img.SaveFile("D:\\scan2." + img.FileExtension);
        Image image = Image.FromFile("D:\\scan2." + img.FileExtension);
        ilImages.Images.Add(image.ToString(), image);
        alImages.Add(image);
        if (ImageScanned != null)
        {
            ImageScanned(image);
        }
        #endregion

    Can anyone help us with this problem? Thanks


  • asp.net mvc2 - controller for master page?

    - by ile
    I've just finished my first ASP.NET MVC (2) CMS. The next step is to build a website that will show data from the CMS's database. This is the website design:
    #1 (red box) - displays article categories. ViewModel:

        public class CategoriesDisplay
        {
            public CategoriesDisplay() { }
            public int CategoryID { set; get; }
            public string CategoryTitle { set; get; }
        }

    #2 (brown box) - displays the last x articles, skipping those from green box #3. ViewModel:

        public class ArticleDisplay
        {
            public ArticleDisplay() { }
            public int CategoryID { set; get; }
            public string CategoryTitle { set; get; }
            public int ArticleID { set; get; }
            public string ArticleTitle { set; get; }
            public string URLArticleTitle { set; get; }
            public DateTime ArticleDate;
            public string ArticleContent { set; get; }
        }

    #3 (green box) - displays the last x articles. Uses the same ViewModel as brown box #2.
    #4 (blue box) - displays a list of upcoming events. Uses dataContext.Model.Event as its ViewModel.
    Boxes #1, #2 and #4 will repeat all over the site, and they are part of the master page. So, my question is: what is the best way to transfer this data from the Model to a Controller and finally to the View pages? Should I make a controller for the master page and a ViewModel class that will wrap all these classes together (see the sketch at the end of this item)? OR should I create partial views for each of these boxes and make each of them inherit the appropriate class (if it is even possible that it works this way)? OR should I put this repeated code in all controllers and transfer the additional data via ViewData, which would probably be the worst way? OR is there maybe a better and simpler way that I don't know/see?
    Thanks in advance, Ile
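    For illustration, a minimal sketch of what the wrapper ViewModel from the first option might look like - the class name and property grouping are hypothetical, not part of the CMS described above:

        using System.Collections.Generic;

        // One object the master-page controller can populate and every view can bind to
        public class MasterPageViewModel
        {
            public IList<CategoriesDisplay> Categories { get; set; }    // box #1
            public IList<ArticleDisplay> RecentArticles { get; set; }   // box #2
            public IList<ArticleDisplay> LatestArticles { get; set; }   // box #3
            public IList<Event> UpcomingEvents { get; set; }            // box #4
        }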


  • Magento MAGMI: Product attributes (custom options) not showing up in import

    - by Rodgers and Hammertime
    When importing a CSV into Magento with the MAGMI import tool, I am unable to import custom options (as in size: small/medium/large). The import manages to put in the basic products, but the custom options don't transfer across. By custom options I mean the fields Title, Input Type, Is Required, Sort Order, and then per option Title, Price, Price Type, SKU, Sort Order (repeated for each option) and so on, found in the custom options menu. Even using the example CSV from the MAGMI SourceForge wiki:
    sku,name,description,price,Size:drop_down:1
    T-Shirt1,T-Shirt,A T-Shirt,5.00,Small|Medium|Large
    T-Shirt2,T-Shirt2,Another T-Shirt,6.00,XS|S|M|L|XL
    ...it fails to import the attributes. So I'm simply using MAGMI with the supplied example data from SourceForge on a blank Magento product list, and it doesn't transfer properly. Can anyone shed any light on what might be wrong? I am using Magento ver. 1.6.1.0, if that changes anything. Thanks.


  • How to keep character encoding with database queries.

    - by JasonS
    Hi, I am doing the following. 1) I am exporting a database and saving it to a file called dump.sql. 2) The file is then transferred to a different server via PHP FTP. 3) When the file has been successfully transferred, the administrator has the option to run a 'dbtransfer' script on the new host. 4) This script breaks the dump up and runs the queries line by line. This works great - however there is a problem with foreign-language encoding. We are using UTF-8.
    Step 1: This works fine; the file is in UTF-8 format.
    Step 3: When I test the contents of the dump.sql file using mb_check_encoding(), the string comes back as UTF-8.
    Step 4: This creates tables with utf8_general_ci encoding and the information is dumped in. When I check the table after the transfer, I get records like this: 'ç,Ç,ö,Ö,ü,Ü,ı,İ,ş,Ş,ğ,Ğ'. I don't understand how a UTF-8 string can lose its encoding when it goes into the database. Am I missing a step? Do I need to run some sort of function to ensure the string is parsed as UTF-8? Once the system is installed I can save foreign-language queries; it is just the transfer that is messing up. Any ideas?
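    One thing worth ruling out - offered as an assumption, not a confirmed diagnosis: if the 'dbtransfer' script never declares the connection character set, the server may read the UTF-8 bytes as latin1 on the way in. A minimal sketch, assuming the legacy mysql_* API such scripts used (credentials are placeholders):

        <?php
        $link = mysql_connect('localhost', 'user', 'pass'); // hypothetical credentials
        mysql_select_db('target_db', $link);
        // Declare the connection charset *before* replaying dump.sql
        mysql_query("SET NAMES 'utf8'", $link);
        foreach (file('dump.sql') as $query) {
            mysql_query($query, $link);
        }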


  • emacs tramp performance

    - by Oleg Pavliv
    Is there a way to improve Emacs TRAMP performance? For me it's faster to open an external FTP client (FileZilla), transfer files to the local disk and open them in an external editor (Notepad) than to open them with Emacs. I use Emacs 23.1 under Windows XP. I tried different values of tramp-default-method (telnet, pscp, ftp); all of them have the same performance. Profiling results with elp-instrument-package are the following (I opened 3 remote files of 1.5 MB each):
    tramp-file-name-handler              1461  350.41599999  0.2398466803
    tramp-sh-file-name-handler           1461  350.02699999  0.2395804243
    tramp-send-command                    227  179.63400000  0.7913392070
    tramp-send-command-and-check          205  177.77600000  0.8672000000
    tramp-wait-for-regexp                 227  176.47800000  0.7774361233
    tramp-wait-for-output                 226  176.40000000  0.7805309734
    tramp-barf-unless-okay                 18  133.46699999  7.4148333333
    tramp-handle-insert-file-contents       3  132.046       44.015333333
    tramp-handle-file-local-copy            3  131.281       43.760333333
    tramp-accept-process-output          2375  112.95100000  0.0475583157
    So the actual file transfer takes 132 sec, about 1/3 of the total time. Why does it spend so much time in tramp-sh-file-name-handler? I tried to advise the function tramp-sh-file-name-handler to store and return cached results, but it does not work; probably this function has some side effects. Any ideas how to improve TRAMP performance?
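    A few commonly suggested TRAMP tweaks, offered as assumptions to experiment with rather than a verified fix for this profile:

        ;; settings to test in your init file -- values are assumptions, adjust to taste
        (setq tramp-default-method "pscp")  ; putty-based methods often behave better on Windows
        (setq vc-handled-backends nil)      ; skip version-control checks on remote files
        (setq tramp-verbose 1)              ; reduce chatter in the TRAMP process buffer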


  • Posting xml from classic asp to asp.net

    - by Chris Dunaway
    I apologize if this has been asked before; I searched and didn't find anything that matched my situation. Also bear in mind I am fairly new to asp/asp.net development. My current project is a relatively simple e-commerce site. The customer will connect to the site, select products, input shipping and billing information, payment information (credit card) and submit the order. The project is being split into two parts: the store front, which includes displaying the items and taking the customer's shipping and billing information, and the payment site, which will collect the customer's credit card, compute tax, and save the order into the company's system. The reason the site was split up was that our side (the payment side) already has facilities for credit card handling and tax computation. There may also be some regulatory issues that the store front side does not want to deal with (which we already do). I'm working on the payment portion of the app and I am using asp.net. The store front side is being written in classic asp (not my decision). Each part will be hosted on a different server. The problem I am having is transferring the contents of the "shopping cart" to our app so that we can collect the cc info and submit the order. We had thought that the classic asp could somehow post an xml fragment which contains the billing/shipping info and the items selected. Our side would display a summary of the order, securely collect the credit card info, and submit the order to our system. But I have been unable to post or send the xml from a classic asp page on one server to our asp.net application on another; it all works just fine when I test on the same server. How can I post (or otherwise transfer) the shopping cart data from classic asp to asp.net across server boundaries and transfer control to the asp.net application? As I said, I am new to web development, so this is proving quite a challenge for me. Thanks
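    A minimal sketch of the receiving (ASP.NET) side, assuming the classic ASP page POSTs the cart as raw XML in the request body - the page and the cart structure are hypothetical:

        using System.IO;
        using System.Xml;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Read the raw XML the classic ASP page sent over
            using (var reader = new StreamReader(Request.InputStream, Request.ContentEncoding))
            {
                string xml = reader.ReadToEnd();
                var cart = new XmlDocument();
                cart.LoadXml(xml);  // throws if the payload isn't well-formed XML
                // pull shipping/billing/items out of 'cart' and continue the payment flow
            }
        }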


  • Change CulturalInfo after button click

    - by Bart
    I have a multilingual asp.net site. There is a master page and default.aspx. In the master page I put two buttons: one to click when I want to change the language to English, the second for Polish. I want to change the language after a click on these buttons (and all changes should appear automatically on the page). Here is the code for both:

        protected void EnglishButton_Click(object sender, ImageClickEventArgs e)
        {
            string selectedLanguage = "en-US";
            // Sets the cookie that is to be used by InitializeCulture() in the content page
            HttpCookie cookie = new HttpCookie("CultureInfo");
            cookie.Value = selectedLanguage;
            Response.Cookies.Add(cookie);
            Server.Transfer(Request.Path);
        }

        protected void PolishButton_Click(object sender, ImageClickEventArgs e)
        {
            string selectedLanguage = "pl-PL";
            // Sets the cookie that is to be used by InitializeCulture() in the content page
            HttpCookie cookie = new HttpCookie("CultureInfo");
            cookie.Value = selectedLanguage;
            Response.Cookies.Add(cookie);
            Server.Transfer(Request.Path);
        }

    In default.aspx.cs I have InitializeCulture():

        protected override void InitializeCulture()
        {
            HttpCookie cookie = Request.Cookies["CultureInfo"];
            // if there is some value in the cookie
            if (cookie != null && cookie.Value != null)
            {
                Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(cookie.Value);
                Thread.CurrentThread.CurrentUICulture = new CultureInfo(cookie.Value);
            }
            else // if no value has been sent by the cookie, set the default language
            {
                Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("pl-PL");
                Thread.CurrentThread.CurrentUICulture = new CultureInfo("pl-PL");
            }
            base.InitializeCulture();
        }

    I added resource files, and in one label I show the actual culture:
    Welcome.Text = "Culture: " + System.Globalization.CultureInfo.CurrentCulture.ToString();
    The problem is that when I run this app and click e.g. the English button (the default language is Polish), there is no effect. If I click it a second time or press F5, the changes are applied and the label reads Culture: en-US. The same happens if I want to change the language back to Polish (it works after a second click, or one click and a refresh). What am I doing wrong?


  • c#: how to read parts of a file? (DICOM)

    - by Xaisoft
    I would like to read a DICOM file in C#. I don't want to do anything fancy; for now I would just like to know how to read in the elements, but first I would actually like to know how to read the header to see if it is a valid DICOM file. The file consists of binary data elements. The first 128 bytes are unused (set to zero), followed by the string 'DICM'. This is followed by header information, which is organized into groups. A sample DICOM header:
    First 128 bytes: unused
    Followed by the characters 'D','I','C','M'
    Followed by extra header information such as:
    0002,0000, File Meta Elements Group Len: 132
    0002,0001, File Meta Info Version: 256
    0002,0010, Transfer Syntax UID: 1.2.840.10008.1.2.1
    0008,0000, Identifying Group Length: 152
    0008,0060, Modality: MR
    0008,0070, Manufacturer: MRIcro
    In the above example, the header is organized into groups. The group 0002 hex is the file meta information group, which contains 3 elements: one defines the group length, one stores the file version and the third stores the transfer syntax.
    Questions:
    How do I read the header and verify that it is a DICOM file by checking for the 'D','I','C','M' characters after the 128-byte preamble?
    How do I continue to parse the file, reading the other parts of the data?
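    A minimal C# sketch of the preamble check described above, assuming a local file path; a real parser would continue from here by reading (group, element) tags and their value lengths:

        using System.IO;

        static bool LooksLikeDicom(string path)
        {
            using (var reader = new BinaryReader(File.OpenRead(path)))
            {
                if (reader.BaseStream.Length < 132) return false;
                reader.ReadBytes(128);            // skip the unused 128-byte preamble
                var magic = reader.ReadChars(4);  // should spell out DICM
                return new string(magic) == "DICM";
            }
        }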


  • How to write this typical MySQL query (how to use a subquery column in the main query)

    - by I Like PHP
    I have two tables, shown below:
    table_joining
    id  join_id(PK)  transfer_id(FK)  unit_id  transfer_date  joining_date
    1   j_1          t_1              u_1      2010-06-05     2010-03-05
    2   j_2          t_2              u_3      2010-05-10     2010-03-10
    3   j_3          t_3              u_6      2010-04-10     2010-01-01
    4   j_5          NULL             u_3      NULL           2010-06-05
    5   j_6          NULL             u_4      NULL           2010-05-05
    table_transfer
    id  transfer_id(PK)  pastUnitId  futureUnitId  effective_transfer_date
    1   t_1              u_3         u_1           2010-06-05
    2   t_2              u_6         u_1           2010-05-10
    3   t_3              u_5         u_3           2010-04-10
    Now I want the details (using join_id) of all employees currently working in unit u_3, meaning I want only:
    j_1 (has transferred, but the effective_transfer_date is a future date, so right now still in u_3)
    j_2 (transferred and right now in u_3, because the effective_transfer_date has passed)
    j_6 (right now in u_3 and never transferred)
    What I need to take care of, as far as I know:
    <1> first check from table_joining whether transfer_id is NULL
    <2> if transfer_id is NULL, then check unit_id='u_3' where joining_date <= CURDATE() (meaning that person has already joined u_3)
    <3> if transfer_id is NOT NULL, then go to table_transfer using transfer_id (the foreign key reference)
    <4> now look at the effective_transfer_date for that transfer_id, checking whether effective_transfer_date <= CURDATE()
    <5> if the transfer date has passed (meaning the transfer has been done), return futureUnitId, otherwise return pastUnitId
    I used two separate queries but don't know how to join them. For steps <1> and <2>:
    SELECT unit_id FROM table_joining WHERE joining_date<=CURDATE() AND transfer_id IS NULL AND unit_id='u_3'
    For step <5>:
    SELECT IF(effective_transfer_date <= CURDATE(),futureUnitId,pastUnitId) AS currentUnitID FROM table_transfer
    // here, how do we select only those rows which have currentUnitID='u_3'?
    Please guide me through the process. I'm just confused with JOINs; I think a LEFT JOIN can return the data I need, but I'm not getting how to implement it (see the sketch below). Thanks for helping me always.
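    A sketch of one way to combine the two queries along the steps above - untested, assuming the schemas shown:

        SELECT j.join_id,
               IF(j.transfer_id IS NULL,
                  j.unit_id,                                  -- steps <1>/<2>: never transferred
                  IF(t.effective_transfer_date <= CURDATE(),  -- steps <3>/<4>
                     t.futureUnitId,                          -- step <5>: transfer done
                     t.pastUnitId)                            -- step <5>: transfer still pending
               ) AS currentUnitID
        FROM table_joining j
        LEFT JOIN table_transfer t ON t.transfer_id = j.transfer_id
        WHERE j.joining_date <= CURDATE()
        HAVING currentUnitID = 'u_3';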


  • GMail appearing to ignore Reply-To.

    - by Samuurai
    I'm using a Gmail account to send emails from my website. I'm using the same account to pick up emails which are generated by the contact facility on my site. I'm using the Reply-To field to attempt to make it easier to hit reply and easily get back to people. The message comes up with the 'From' address and ignores the 'Reply-To' address. Here's my header:

    Return-Path: <[email protected]>
    Received: from svr1 (ec2-79-125-266-266.eu-west-1.compute.amazonaws.com [79.125.266.266])
        by mx.google.com with ESMTPS id u14sm23273123gvf.17.2010.03.10.14.33.24
        (version=TLSv1/SSLv3 cipher=RC4-MD5);
        Wed, 10 Mar 2010 14:33:25 -0800 (PST)
    Received: from localhost ([127.0.0.1] helo=www.rds.com)
        by aquacouture with esmtp (Exim 4.69)
        (envelope-from <[email protected]>)
        id 1NpUSx-0001dK-JM for [email protected]; Wed, 10 Mar 2010 22:33:23 +0000
    User-Agent: CodeIgniter
    Date: Wed, 10 Mar 2010 22:33:23 +0000
    From: "New Inquiry" <[email protected]>
    Reply-To: "Beren" <[email protected]>
    To: [email protected]
    Subject: =?utf-8?Q?Test?=
    X-Sender: [email protected]
    X-Mailer: CodeIgniter
    X-Priority: 3 (Normal)
    Message-ID: <[email protected]>
    Mime-Version: 1.0
    Content-Type: multipart/alternative; boundary="B_ALT_4b981e3390ccd"

    This is a multi-part message in MIME format.
    Your email application may not support this format.

    --B_ALT_4b981e3390ccd
    Content-Type: text/plain; charset=utf-8
    Content-Transfer-Encoding: 8bit

    test

    --B_ALT_4b981e3390ccd
    Content-Type: text/html; charset=utf-8
    Content-Transfer-Encoding: quoted-printable

    test

    --B_ALT_4b981e3390ccd--


  • What is wrong with this mail header text?

    - by dnagirl
    The following $header is being sent via PHP's mail($to,$subject,$content,$header) command. The mail arrives and appears to have an attachment, but the mail text is empty, as is the file. I think this has something to do with line spacing, but I can't see the problem. I have tried putting the contents (between the boundaries) in $contents rather than appending them to $header; it doesn't make a difference. Any thoughts?

    From: [email protected]
    Reply-To: [email protected]
    X-Mailer: PHP 5.3.1
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="7425195ade9d89bc7492cf520bf9f33a"

    --7425195ade9d89bc7492cf520bf9f33a
    Content-type: text/plain; charset=iso-8859-1
    Content-Transfer-Encoding: 7bit

    this is a test message.

    --7425195ade9d89bc7492cf520bf9f33a
    Content-Type: application/pdf; name="test.pdf"
    Content-Transfer-Encoding: base64
    Content-Disposition: attachment; filename="test.pdf"

    JVBERi0xLjMKJcfsj6IKNSAwIG9iago8PC9MZW5ndGggNiAwIFIvRmlsdGVyIC9GbGF0ZURlY29k
    ZT4+CnN0cmVhbQp4nE2PT0vEMBDFadVdO4p/v8AcUyFjMmma5CqIIF5cctt6WnFBqLD1+4Np1nY3
    c3lvfm+GyQ4VaUY11iQ2PTyuHG5/Ibdx9fIvhi3swJMZX24c602PTzENegwUbNAO4xcoCsG5xuWE
    . ... the rest of the file .
    MDAwMDA2MDYgMDAwMDAgbiAKMDAwMDAwMDcwNyAwMDAwMCBuIAowMDAwMDAxMDY4IDAwMDAwIG4g
    CjAwMDAwMDA2NDcgMDAwMDAgbiAKMDAwMDAwMDY3NyAwMDAwMCBuIAowMDAwMDAxMjg2IDAwMDAw
    IG4gCjAwMDAwMDA5MzIgMDAwMDAgbiAKdHJhaWxlcgo8PCAvU2l6ZSAxNCAvUm9vdCAxIDAgUiAv
    SW5mbyAyIDAgUgovSUQgWzxEMURDN0E2OUUzN0QzNjI1MDUyMEFFMjU0MTMxNTQwQz48RDFEQzdB
    NjlFMzdEMzYyNTA1MjBBRTI1NDEzMTU0MEM+XQo+PgpzdGFydHhyZWYKNDY5MwolJUVPRgo=
    --7425195ade9d89bc7492cf520bf9f33a--

    $header ends without a line break.


  • Problem sending email with Codeigniter - Headers sent in the message body

    - by Brian
    Having a strange issue with the email class in CodeIgniter. When I send email directly to my Gmail account's email address, it works fine. However, if I send email to a different email address and use POP3 to import that email address into Gmail, then for some reason all the headers are included in the message body. Here's the code for sending the email:

        $this->email->clear();
        $config['mailtype'] = "html";
        $this->email->initialize($config);
        $this->email->set_newline("\r\n");
        $this->email->from('[email protected]', 'Website');
        $this->email->to('[email protected]');
        $this->email->message($message);

    Here's what arrives in my inbox when the email is sent to an account which is imported into Gmail via POP3:

    Date: Fri, 7 Jan 2011 15:07:04 +0000
    From: "Website" <[email protected]>
    Reply-To: "[email protected]" <[email protected]>
    X-Sender: [email protected]
    X-Mailer: CodeIgniter
    X-Priority: 3 (Normal)
    Message-ID: <[email protected]>
    Mime-Version: 1.0
    Content-Type: multipart/alternative; boundary="B_ALT_4d272c1835c46"

    This is a multi-part message in MIME format.
    Your email application may not support this format.

    --B_ALT_4d272c1835c46
    Content-Type: text/plain; charset=utf-8
    Content-Transfer-Encoding: 8bit

    this is the email message content

    --B_ALT_4d272c1835c46
    Content-Type: text/html; charset=utf-8
    Content-Transfer-Encoding: quoted-printable

    <html>
    <body>
    <p>this is the email message content </p>
    </body>
    </html>

    --B_ALT_4d272c1835c46--
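    One commonly suggested tweak - an assumption to test, not a confirmed fix: set CodeIgniter's 'crlf' preference as well as the newline, since some relays rewrite bare line endings and break the boundary between headers and body on the way to the POP3 host:

        $config['mailtype'] = "html";
        $config['crlf']     = "\r\n";  // line terminator used inside the MIME body
        $config['newline']  = "\r\n";  // line terminator used for the headers
        $this->email->initialize($config);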


  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:
    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    3. Anything else?
    -- Based on the responses I have gotten, let me clarify a little bit:
    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?
    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.
    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.
    ~ Andrew
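    For reference, a hedged sketch of SqlBulkCopy options that often help sustained loads - the connection string and DataTable are placeholders, and TableLock assumes an exclusive load window:

        using System.Data;
        using System.Data.SqlClient;

        using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
        {
            bulk.DestinationTableName = "BulkData";
            bulk.BatchSize = 5000;          // tune against log growth and IO behaviour
            bulk.BulkCopyTimeout = 0;       // no timeout for long-running loads
            bulk.WriteToServer(rowsTable);  // rowsTable holds the pre-sorted chunk
        }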


  • Problems pulling data from NSMutableArray for MapKit?

    - by Graeme
    Hi, I have a class (DataImporter) which has the code to download an RSS feed. I also have a view and separate class (TableView) which displays the data in a UITableView and starts the parsing process, storing the parsed information in an NSMutableArray (items). Now I wish to add a UIMapView which displays the items from the items NSMutableArray. I'm struggling to transfer the data into the new class (mapView) to show it on the map - and I preferably don't want to have to create a new class to download the data again for the map. Is there a way I can transfer the information from the NSMutableArray (items) to the mapView? Thanks.
    Code for viewDidLoad in the MapKit controller:

        Data *data = nil;
        NSString *ilocation = [data locations];
        NSString *ilocation2 = @"New Zealand";
        NSString *inewlString;
        inewlString = [ilocation stringByAppendingString:ilocation2];
        NSLog(@"inewlString=%@", inewlString);
        if (forwardGeocoder == nil)
        {
            forwardGeocoder = [[BSForwardGeocoder alloc] initWithDelegate:self];
        }
        // Forward geocode!
        [forwardGeocoder findLocation:inewlString];

    Code for parsing data into the NSMutableArray:

        - (void)beginParsing
        {
            NSLog(@"Parsing has begun");
            //self.navigationItem.rightBarButtonItem.enabled = NO;
            // Allocate the array for song storage, or empty the results of previous parses
            if (incidents == nil)
            {
                NSLog(@"Grabbing array");
                self.datas = [NSMutableArray array];
            }
            else
            {
                [datas removeAllObjects];
                [self.tableView reloadData];
            }
            // Create the parser, set its delegate, and start it.
            self.parser = [[DataImporter alloc] init];
            parser.delegate = self;
            [parser start];
        }
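    A minimal sketch of handing the parsed array across without downloading again - 'items' is a hypothetical retained NSArray property that the map view controller would declare, and the class name is a placeholder:

        // Share the already-downloaded data with the map controller before presenting it
        MapViewController *mapVC = [[MapViewController alloc] initWithNibName:@"MapViewController" bundle:nil];
        mapVC.items = self.datas;   // no second fetch; both controllers read the same array
        [self.navigationController pushViewController:mapVC animated:YES];
        [mapVC release];            // pre-ARC, matching the manual memory management above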


  • Web Workers - Transferable Objects for JSON

    - by kclem06
    HTML5 web workers are very slow when using worker.postMessage on a large JSON object. I'm trying to figure out how to transfer a JSON object to a web worker - using the 'transferable objects' types in Chrome - in order to increase the speed of this. Here is what I'm referring to, and it appears it should speed this up quite a bit: http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast
    I'm having trouble finding a good example of this (and I don't believe I want to use an ArrayBuffer). Any help would be appreciated. I'm imagining something like this:

        worker = new Worker('workers.js');
        var large_json = {};
        for (var i = 0; i < 20000; ++i) {
            large_json[i] = i;
            large_json["test" + i] = "string";
        }
        // How to make this call use transferable objects?
        // Takes approx 2 seconds to serialize this for me currently.
        worker.webkitPostMessage(large_json);
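    A sketch of the ArrayBuffer route, despite the reluctance above - plain objects cannot be transferred, only structured-cloned, so the JSON has to be packed into a buffer first. The worker is assumed to reverse the encoding and JSON.parse the result:

        var json = JSON.stringify(large_json);
        var buf = new ArrayBuffer(json.length * 2);  // 2 bytes per UTF-16 code unit
        var view = new Uint16Array(buf);
        for (var i = 0; i < json.length; i++) {
            view[i] = json.charCodeAt(i);            // pack the string into the buffer
        }
        worker.webkitPostMessage(buf, [buf]);        // second argument marks buf as transferable
        // buf.byteLength is now 0 here: ownership has moved to the worker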


  • Java mail attachment not working on Tomcat

    - by losintikfos
    Hello guys, I have an application which e-mails confirmations. The email part utilises the Commons Email API. The simple code which does the send is shown below:

        import org.apache.commons.mail.*;
        ...
        // Create the attachment
        EmailAttachment attachment = new EmailAttachment();
        attachment.setURL(new URL("http://cashew.org/doc.pdf"));
        attachment.setDisposition(EmailAttachment.ATTACHMENT);
        attachment.setDescription("Testing attach");
        attachment.setName("doc.pdf");

        // Create the email message
        MultiPartEmail email = new MultiPartEmail();
        email.setHostName("mail.cashew.com");
        email.addTo("[email protected]");
        email.setFrom("[email protected]");
        email.setSubject("Testing");
        email.setMsg("testing message");

        // add the attachment
        email.attach(attachment);

        // send the email
        email.send();

    My problem is: when I execute this application from Eclipse, the email is sent with the attachment without any issues. But when I deploy the application to Tomcat (I have tried both versions 5 and 6, no joy), the e-mail is sent with the content below:

    ------=_Part_0_25002283.1275298567928
    Content-Type: text/plain; charset=us-ascii
    Content-Transfer-Encoding: 7bit

    testing

    Regards, los

    ------=_Part_0_25002283.1275298567928
    Content-Type: application/pdf; name="doc.pdf"
    Content-Transfer-Encoding: base64
    Content-Disposition: attachment; filename="doc.pdf"
    Content-Description: Testing attach

    JVBERi0xLjQNJeLjz9MNCjYzIDAgb2JqDTw8L0xpbmVhcml6ZWQgMS9MIDMxMzE4Mi9PIDY1L0Ug
    Mjg2NjY5L04gMS9UIDMxMTgwMi9IIFsgMjgzNiAzNzZdPj4NZW5kb2JqDSAgICAgICAgICAgICAg
    DQp4cmVmDQo2MyAxMjcNCjAwMDAwMDAwMTYgMDAwMDAgbg0KMDAwMDAwMzM4MCAwMDAwMCBuDQow
    MDAwMDAzNTIzIDAwMDAwIG4NCjAwMDAwMDQzMDcgMDAwMDAgbg0KMDAwMDAwNTEwOSAwMDAwMCBu
    DQowMDAwMDA2Mjc5IDAwMDAwIG4NCjAwMDAwMDY0MTAgMDAwMDAgbg0KMDAwMDAwNjU0NiAwMDAw
    MCBuDQowMDAwMDA3OTY3IDAwMDAwIG4NCjAwMDAwMDkwMjMgMDAwMDAgbg0KMDAwMDAwOTk0OSAw
    MDAwMCBuDQowMDAwMDExMDAwIDAwMDAwIG4NCjAwMDAwMTIwNTkgMDAwMDAgbg0KMDAwMDAxMjky
    MCAwMDAwMCBuDQowMDAwMDEyOTU0IDAwMDAwIG4NCjAwMDAwMTI5ODIgMDAwMDAgbg0KMDAwMDAx
    .......
    CnN0YXJ0eHJlZg0KMTE2DQolJUVPRg0K
    ------=_Part_0_25002283.1275298567928--

    One thing I have also noticed is that the header information does not show the TO and Subject values. Hmm, pretty weird. I have to point out that the above is not DEBUG output; it is the actual message received in my Outlook client. Can someone help me please! Does anyone know what's going on?


  • How do I process multipart http responses in Ruby Net:HTTP?

    - by seal-7
    There is so much information out there on how to generate multipart responses or do multipart file uploads, but I can't seem to find any information on how to process a multipart HTTP response. Here is some IRB output from a multipart HTTP response I am working with:

    >> response.http.content_type
    => "multipart/related"
    >> response.http.body[0..2048]
    => "\r\n------=_Part_3_806633756.1271797659309\r\nContent-Type: text/xml; charset=UTF-8\r\nContent-Transfer-Encoding: binary\r\nContent-Id: <A0FCC4333C6D0FCA346B97FAB6B61818>\r\n\r\n<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Body><ns1:runReportResponse soapenv:encodingStyle="http://www.w3.org/2003/05/soap-encoding" xmlns:ns1="http://192.168.1.200:8080/jasperserver/services/repository"><ns2:result xmlns:ns2="http://www.w3.org/2003/05/soap-rpc">runReportReturn</ns2:result><runReportReturn xsi:type="xsd:string">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;\n&lt;operationResult version=&quot;2.0.1&quot;&gt;\n\t&lt;returnCode&gt;&lt;![CDATA[0]]&gt;&lt;/returnCode&gt;\n&lt;/operationResult&gt;\n</runReportReturn></ns1:runReportResponse></soapenv:Body></soapenv:Envelope>\r\n------=_Part_3_806633756.1271797659309\r\nContent-Type: application/pdf\r\nContent-Transfer-Encoding: binary\r\nContent-Id: <report>\r\n\r\n%PDF-1.4\n%\342\343\317\323\n3 0 obj
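    A rough sketch of one way to take such a body apart, assuming response.http behaves like a Net::HTTPResponse: pull the boundary parameter out of the Content-Type header and split the body on it:

        # boundary="..." lives in the full Content-Type header value
        boundary = response.http['content-type'][/boundary="?([^";]+)"?/, 1]
        response.http.body.split("--#{boundary}").each do |part|
          headers, _, payload = part.partition("\r\n\r\n")
          # inspect 'headers' for Content-Type / Content-Id and route 'payload'
          # accordingly (XML envelope vs. binary PDF in the example above)
        end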


  • php file force download

    - by droidus
    When I use this code to download this image (only used for testing purposes) and then open the downloaded image, all it gives me is an error. I tried it in Chrome. Opening it with Windows Photo Viewer, it says that it can't display the picture because it is empty. Here is the code:

        <?PHP
        // Define the path to file
        $file = 'http://www.media.lonelyplanet.com/lpi/12553/12553-11/469x264.jpg';
        if(!file) {
            // File doesn't exist, output error
            die('file not found');
        } else {
            header('Content-Description: File Transfer');
            header('Content-Type: application/octet-stream');
            header('Content-Disposition: attachment; filename='.basename($file));
            header('Content-Transfer-Encoding: binary');
            header('Expires: 0');
            header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
            header('Pragma: public');
            header('Content-Length: ' . filesize($file));
            ob_clean();
            flush();
            readfile($file);
            exit;
        }
        ?>
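    A corrected sketch, under two assumptions about the failure: the condition tests the bare word file (an undefined constant, which is always truthy, so !file is always false) instead of $file, and filesize()/file_exists() don't work on http:// URLs, so Content-Length ends up empty. The local path below is hypothetical:

        <?php
        // A local path -- filesize() and file_exists() need one, unlike a remote URL
        $file = '/var/www/images/469x264.jpg'; // hypothetical local copy of the image
        if (!file_exists($file)) {             // note: $file, not the bare word 'file'
            die('file not found');
        }
        header('Content-Description: File Transfer');
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename=' . basename($file));
        header('Content-Transfer-Encoding: binary');
        header('Content-Length: ' . filesize($file));
        readfile($file);
        exit;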


  • My multipart email script sends HTML messages just fine, but the plain text alternative doesn't render

    - by hsatterwhite
    I have a script set up to send out multipart emails with plain text and HTML versions. The HTML messages work just fine, but when I use an email client that only does plain text, the plain text message does not render and I get the following:

    --
    This message was generated automatically by Me
    http://www.somewebsite.com/

        $html_msg = $message_details;
        $plain_text_msg = strip_tags($message_details);
        $headers = <<<HEADERS
        From: Me <[email protected]>
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="==PHP-alt$mime_boundary"
        HEADERS;

        // Use our boundary string to create plain text and HTML versions
        $message = <<<MESSAGE
        --==PHP-alt$mime_boundary
        Content-Type: text/plain; charset="iso-8859-1"
        Content-Transfer-Encoding: 7bit

        $plain_text_msg

        --
        This message was generated automatically by Me
        http://www.somewebsite.com/

        If you did not request this message, please notify [email protected]

        --==PHP-alt$mime_boundary
        Content-Type: text/html; charset="iso-8859-1"
        Content-Transfer-Encoding: 7bit

        <html>
        <body>
        $html_msg
        <p>
        --<br />
        This message was generated automatically as a demonstration on <a href="http://www.somewebsite.com/">Me</a>
        </p>
        <p>
        If you did not request this message, please notify <a href="mailto:[email protected]">[email protected]</a>
        </p>
        </body>
        </html>
        --==PHP-alt$mime_boundary--
        MESSAGE;

