Search Results

Search found 48797 results on 1952 pages for 'read write'.

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB, an Elastically Scalable Database System

    - by pinaldave
    I have been recently exploring the Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB, where I wanted to look at the Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer the following simple questions:

    - What is Elasticity?
    - What is Scalability?
    - How do ACID properties vary from NoSQL concepts?
    - What are the prevailing problems in the current database system architectures?
    - Why is NuoDB an innovative and welcome change in the database paradigm?

    Elasticity

    This word's original form is used in many different ways, and honestly it does a decent job of holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (databases, application servers), it has come to mean chiefly one thing: allow stretching of resources without reaching the breaking point (on demand). What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/Bandwidth in the form of a container (a process or a bunch of processes combined as modules). When it is about increasing resources, the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node, there are two questions which come to mind:

    1) Can we add another node to our software system?
    2) If yes, does adding a new node cause downtime for the system?

    Let us assume we have added a new node, and see what the new needs of the system are once it is there:

    - Balancing incoming requests to multiple nodes
    - Synchronization of a shared state across multiple nodes
    - Identification of "downstate" and resolution action to bring it to "upstate"

    Adding a new node has its advantages as well. Here are a few of the positive points:

    - Throughput can increase nearly horizontally across the nodes throughout the system
    - Response times of the application will improve as in-between layer interactions will be improved

    Now, let us put the above concepts in the perspective of a database. When we mention the term "running out of resources" or "application is bound to resources", the resources can be CPU, Memory or Bandwidth. The regular approach to "gain scalability" in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck, we look at the data buffers, locks, query plans or indexes. After a point even this is not enough, as there needs to be an efficient way of managing such a large workload on a "single machine" across memory- and CPU-bound (right kind of scheduling) workloads. We next move on to either read/write separation of the workload or functionality-based sharding, so that we still have control of the individual parts. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read, and for reporting applications to "aggregate the data" in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload.

    Scalability

    In the context of databases/applications, scalability means three main things:

    - Ability to handle normal loads without pressure, e.g. X users at the Y utilization of resources (CPU, Memory, Bandwidth) on the Z kind of hardware (4-processor, 32 GB machine with 15000 RPM SATA drives and a 1 GHz network switch) with T throughput
    - Ability to scale up to an expected peak load which is greater than the normal load, with acceptable response times
    - Ability to provide acceptable response times across the system, e.g. response time in S milliseconds (or an agreed-upon unit of measure) – 90% of the time

    The Issue – Need of Scale

    In normal cases one can plan for load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from hardware and software partners and best practices, bottlenecks can be identified and requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve, and most operational people need the ability to scale horizontally. This helps in getting better throughput, as there are physical limits in terms of adding resources (Memory, CPU, Bandwidth and Storage) indefinitely. Today we have different options to achieve scalability:

    Read & Write Separation

    The idea here is to do actual writes to one store and configure slaves receiving the latest data with acceptable delays. Slaves can be used for balancing out reads. We can also explore functional separation or sharding as well. We can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when the schema or the workload pattern changes. As the requirement grows, one still needs to deal with the need for scale in manual ways by providing an abstraction in the middle-tier code.
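    To make the read & write separation idea concrete, here is a minimal, illustrative sketch (not from the original article) of a routing layer in Java that hands writes to the primary store and reads to a replica; the DataSource wiring and names are hypothetical:

        import javax.sql.DataSource;
        import java.sql.Connection;
        import java.sql.SQLException;

        /** Routes read-only work to a replica and all writes to the primary. */
        public class ReadWriteRouter {
            private final DataSource primary;   // the single store that accepts writes
            private final DataSource replica;   // receives replicated data with some delay

            public ReadWriteRouter(DataSource primary, DataSource replica) {
                this.primary = primary;
                this.replica = replica;
            }

            /** Reads tolerate replication lag, so they are balanced out to the replica. */
            public Connection connectionForRead() throws SQLException {
                Connection c = replica.getConnection();
                c.setReadOnly(true);
                return c;
            }

            /** Writes always go to the primary. */
            public Connection connectionForWrite() throws SQLException {
                return primary.getConnection();
            }
        }

    The middle-tier abstraction mentioned above is exactly this kind of wrapper: client code asks for a read or a write connection and never needs to know which physical node serves it.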
    Using NoSQL solutions

    The idea is to flatten out the structures in general, to keep all values which are retrieved together at the same store, and to provide a flexible schema. The issue with these stores is that they mostly compromise on consistency (no ACID guarantees) and one has to use a non-SQL dialect to work with the store. The other major issue is education with NoSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in a simple SQL manner and learn other skill sets? Or, for that matter, give up on the ACID guarantee and start dealing with consistency issues?

    Hybrid Deployment – Mac, Linux, Cloud, and Windows

    One of the challenges we see today across on-premise vs cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling (as it is a shared deployment) of resources and its ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today's world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues.

    An Ideal Database Ability List

    - A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources
    - A system which does not compromise on the ACID guarantees or require developers to learn new paradigms
    - A system which does not force-fit a new way of interacting with the database by learning a non-SQL dialect
    - A system which does not force-fit its mechanisms for providing availability across its various modules
    Well, NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

  • Embed a Python persistence layer into a C++ application - good idea?

    - by Rickard
    Say I'm about to write an application with a thin GUI layer, a really fat calculation layer (doing computationally heavy calibrations and other long-running stuff) and a fairly simple persistence layer. I'm looking at building the GUI + calculation layer in C++ (using Qt for the GUI parts). Now - would it be a crazy idea to build the persistence layer in Python, using SQLAlchemy, and embed it into the C++ application, letting the layers interface with each other through lightweight data transfer objects (written in C++ but accessible from Python)? (The other alternative I'm leaning towards would probably be to write the app in Python from the start, using the PyQt wrapper, and then calling into C++ for the computational tasks.) Thanks, Rickard

  • XML parsing with SAX | how to handle special characters?

    - by cedar715
    We have a Java application that pulls data from SAP, parses it and renders it to the users. The data is pulled using the JCo connector. Recently we were thrown an exception:

        org.xml.sax.SAXParseException: Character reference "&#00" is an invalid XML character.

    So, we are planning to write a new level of indirection where ALL special/illegal characters are replaced BEFORE parsing the XML. My questions here are:

    1. Is there any existing (open source) utility that does this job of replacing illegal characters in XML?
    2. Or, if I had to write such a utility, how should I handle them?
    3. Why is the above exception thrown?

    Thank you.
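    If no ready-made utility turns out to fit, a minimal sketch of option 2 could look like the following (the class and method names are hypothetical; the ranges in the helper are the XML 1.0 character set, and the input is assumed to be available as a String before it reaches the SAX parser):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        /** Drops raw characters and numeric character references that are illegal in XML 1.0. */
        public final class XmlSanitizer {

            private static final Pattern CHAR_REF = Pattern.compile("&#(x[0-9a-fA-F]{1,6}|[0-9]{1,7});");

            /** Removes illegal numeric character references such as &#00; before parsing. */
            public static String stripInvalidCharRefs(String xml) {
                Matcher m = CHAR_REF.matcher(xml);
                StringBuffer out = new StringBuffer();
                while (m.find()) {
                    String body = m.group(1);
                    int cp = body.startsWith("x")
                            ? Integer.parseInt(body.substring(1), 16)
                            : Integer.parseInt(body);
                    m.appendReplacement(out, isLegalXmlChar(cp) ? Matcher.quoteReplacement(m.group()) : "");
                }
                m.appendTail(out);
                return out.toString();
            }

            /** Removes raw characters that fall outside the XML 1.0 character set. */
            public static String stripInvalidRawChars(String xml) {
                StringBuilder out = new StringBuilder(xml.length());
                for (int i = 0; i < xml.length(); ) {
                    int cp = xml.codePointAt(i);
                    if (isLegalXmlChar(cp)) {
                        out.appendCodePoint(cp);
                    }
                    i += Character.charCount(cp);
                }
                return out.toString();
            }

            // XML 1.0 allows #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD and #x10000-#x10FFFF.
            private static boolean isLegalXmlChar(int cp) {
                return cp == 0x9 || cp == 0xA || cp == 0xD
                        || (cp >= 0x20 && cp <= 0xD7FF)
                        || (cp >= 0xE000 && cp <= 0xFFFD)
                        || (cp >= 0x10000 && cp <= 0x10FFFF);
            }
        }

    As for question 3: the parser rejects &#00 because NUL is simply not a legal XML 1.0 character, even when written as a character reference, so either the references are stripped before parsing (as above) or the producer on the SAP side has to stop emitting them.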

  • Java generic return type

    - by Colby77
    Hi, I'd like to write a method that can accept a type param (or whatever the method can figure the type out from) and return a value of this type, so I don't have to cast the return value. Here is the method:

        public Object doIt(Object param) {
            if (param instanceof String) {
                return "string";
            } else if (param instanceof Integer) {
                return 1;
            } else {
                return null;
            }
        }

    When I call this method and pass it a String, even though I know the return type will be a String, I have to cast the returned Object. The same goes for the Integer param. How should I write this method so that it accepts a type param and returns that type?
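    One possible direction (a sketch, not necessarily the only answer) is to make the method generic so the compiler infers the type parameter from the argument; the unchecked casts are confined to branches where the runtime check already guarantees the type:

        public class TypedReturn {

            @SuppressWarnings("unchecked")
            public static <T> T doIt(T param) {
                if (param instanceof String) {
                    return (T) "string";            // param is a String here, so the cast cannot fail
                } else if (param instanceof Integer) {
                    return (T) Integer.valueOf(1);  // param is an Integer here
                } else {
                    return null;
                }
            }

            public static void main(String[] args) {
                String s = doIt("anything");        // no cast at the call site
                Integer n = doIt(42);               // no cast at the call site
                System.out.println(s + " / " + n);
            }
        }

    If the mapping from input type to result type gets more involved, passing a Class<T> token (e.g. doIt(String.class)) is the other common idiom.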

  • NHibernate ICriteria with a bag

    - by plunk
    Hi, just a quick question. If I've got two tables that are joined by a third table with a many-to-many relationship, is it possible to write an ICriteria with expressions on one of the tables and the join table? Let's say the mapping file looks something like:

        <bag name="Bag" table="JoinTable" cascade="none">
          <key column="Data_ID"/>
          <many-to-many class="Data2" column="Data2_ID"/>
        </bag>

    Is it then possible to write an ICriteria like the following?

        ICriteria crit = session.CreateCriteria(typeof(Data));
        crit.Add(Expression.Eq("Name", name));
        crit.Add(Expression.Between("Date", startDate, endDate));
        crit.Add(Expression.Eq("Bag", data2IDNumber));

    When I try this, it tells me the expected type is IList, whereas the actual type is Bag. Thanks.

  • How to keep my functions (objects/methods) 'lean and mean'

    - by Michel
    Hi, in all the (Agile) articles I read about this: keep your code and functions small and easy to test. How should I do this with the 'controller' or 'coordinator' class? In my situation, I have to import data. In the end I have one object which coordinates this, and I was wondering if there is a way of keeping the coordinator lean(er) and mean(er). My coordinator now does the following (pseudocode):

        // Write to the log that the import has started
        Log.StartImport()

        // Get the data in Excel sheet format
        result = new Downloader().GetExcelFile()
        // Log this step
        Log.LogStep(result)

        // Convert the data to intern objects
        result = new Converter().ConvertData(result);
        // Log this step
        Log.LogStep(result)

        // Write the data
        result = Repository.SaveData(result);
        // Log this step
        Log.LogStep(result)

    Imho, this is one of those 'know it all' classes, or at least one that is 'not lean and mean'? Or am I taking this lean-and-mean thing too far, and is it impossible to program an import without some kind of 'fat' importer/coordinator? Michel
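    One way to slim such a coordinator down (a sketch in Java rather than the pseudocode above, with hypothetical step names) is to model each stage as a named step and let a tiny pipeline own the repetitive logging, so the coordinator only declares the sequence:

        import java.util.List;
        import java.util.function.UnaryOperator;

        /** Runs the steps in order and does the per-step logging in exactly one place. */
        public class ImportPipeline<T> {

            /** A named stage, so the log can say what just ran. */
            public record Step<T>(String name, UnaryOperator<T> work) {}

            private final List<Step<T>> steps;

            public ImportPipeline(List<Step<T>> steps) {
                this.steps = steps;
            }

            public T run(T input) {
                System.out.println("Import started");                      // Log.StartImport()
                T current = input;
                for (Step<T> step : steps) {
                    current = step.work().apply(current);
                    System.out.println("Finished step: " + step.name());   // Log.LogStep(result)
                }
                return current;
            }
        }

    The simplification here is that every step consumes and produces the same result type; in real code each stage (download, convert, save) would more likely implement a small per-step interface, but the point stands: the coordinator stays a list of steps, and the cross-cutting logging lives in one loop.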

  • Exporting tasks to C using DPI

    - by Alphaneo
    I have a Verilog-based test bench, interfaced to C source using DPI. Now, using DPI, I am planning to write my whole firmware. To do this I need three things:

    - Register read
    - Register write
    - Interrupt handler

    As I understand it, register reads and writes are tasks that I need to export from the RTL test bench, and the interrupt handler I implemented by importing a function from C. I checked most of the Cadence documentation and found no useful hints. I have also registered with the Cadence users community, but it seems that I cannot ask questions until they approve my registration. Just in case someone is aware of this, I would appreciate their help.

  • Getting error while transferring PGP file through FTP: The underlying connection was closed: An unexpected error occurred on a receive

    - by sumeet Sharma
    I am trying to upload a PGP-encrypted file through FTP, but I am getting an error message as follows:

        The underlying connection was closed: An unexpected error occurred on a receive.

    I am using the following code and getting the error at the line: Stream ftpStream = response.GetResponse(); Is there anyone who can help me out ASAP? Following is the code sample:

        FtpWebRequest request = WebRequest.Create("ftp://ftp.website.com/sample.txt.pgp") as FtpWebRequest;
        request.UsePassive = true;
        FtpWebResponse response = request.GetResponse() as FtpWebResponse;
        Stream ftpStream = response.GetResponse();
        int bufferSize = 8192;
        byte[] buffer = new byte[bufferSize];
        using (FileStream fileStream = new FileStream("localfile.zip", FileMode.Create, FileAccess.Write))
        {
            int nBytes;
            while ((nBytes = ftpStream.Read(buffer, 0, bufferSize)) > 0)
            {
                fileStream.Write(buffer, 0, nBytes);
            }
        }

    Regards, Sumeet

  • SVN post-commit hook gets stuck while starting a process

    - by Oded
    Hi, I've built a script in VS that receives the two arguments sent by the post-commit hook. The script runs svn log to retrieve data about the revision (author, date, files). When I run the solution from VS with constant values for the arguments, it runs perfectly. When I execute the exe file, it also runs perfectly. When I wire it up as the hook script, it fails where it should read from the process:

        process.Start();
        process.WaitForExit();
        str = process.StandardOutput.ReadToEnd();
        process.WaitForExit();
        if (!process.HasExited)
        {
            try
            {
                process.Kill();
            }
            catch (Exception e3)
            {
                // process is terminated
            }
            // Write errors
        }

    Thanks. EDIT: The commit window gets stuck and never completes the commit. I write the code in C#... there are no errors shown...
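    A common cause of exactly this symptom (regardless of language) is waiting for the child process before draining its standard output: once the pipe buffer fills, the child blocks on its writes and the parent blocks in the wait, so the commit hangs. Purely as an illustration of the safe ordering, here is a small Java sketch (the command line and repository path are made up) that reads the output first and waits afterwards:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class RunSvnLog {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Hypothetical command, standing in for the hook's "svn log" call.
                ProcessBuilder pb = new ProcessBuilder("svn", "log", "-r", "HEAD", "file:///var/svn/repo");
                pb.redirectErrorStream(true);          // fold stderr into stdout so no pipe fills up unread
                Process process = pb.start();

                StringBuilder output = new StringBuilder();
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(process.getInputStream()))) {
                    String line;
                    while ((line = reader.readLine()) != null) {   // drain the output BEFORE waiting
                        output.append(line).append(System.lineSeparator());
                    }
                }
                int exitCode = process.waitFor();                  // now it is safe to wait
                System.out.println("exit=" + exitCode);
                System.out.print(output);
            }
        }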

  • Classic ASP Request.Form removes spaces?

    - by alex
    I'm trying to figure this oddity out... in Classic ASP I seem to be losing spaces in Request.Form values. I.e., Request.Form("json") is

        {"project":{"...","administrator":"AlexGorbatchev", "anonymousViewUrl":null,"assets":[],"availableFrom":"6/10/20104:15PM"...

    However, CStr(Request.Form) is

        json={"project":{"__type":"...":"Alex Gorbatchev", "anonymousViewUrl":null,"assets":[],"availableFrom":"6/10/2010 4:15 PM"...

    Here's the entire code :)

        <%@ language="VBSCRIPT"%>
        <%
        Response.Write(CStr(Request.Form("json")))
        Response.Write(CStr(Request.Form))
        %>

    Somebody please tell me I haven't lost all my marbles...

  • How to pass the 'argument-line' of one PowerShell function to another?

    - by jwfearn
    I'm trying to write some PowerShell functions that do some stuff and then transparently call through to an existing built-in function. I want to pass along all the arguments untouched. I don't want to have to know any details of the arguments. I tried using 'splat' to do this with @args, but that didn't work as I expected.

    In the example below, I've written a toy function called myls which is supposed to print hello! and then call the same built-in function, Get-ChildItem, that the built-in alias ls calls, with the rest of the argument line intact. What I have so far works pretty well:

        function myls
        {
            Write-Output "hello!"
            Invoke-Expression("Get-ChildItem " + $MyInvocation.UnboundArguments -join " ")
        }

    A correct version of myls should be able to handle being called with no arguments, with one argument, with named arguments, from a line containing multiple semi-colon delimited commands, and with variables in the arguments, including string variables containing spaces. The tests below compare myls and the built-in ls: [NOTE: output elided and/or compacted to save space]

        PS> md C:\p\d\x, C:\p\d\y, C:\p\d\"jay z"
        PS> cd C:\p\d
        PS> ls                          # no args
        PS> myls                        # pass
        PS> cd ..
        PS> ls d                        # one arg
        PS> myls d                      # pass
        PS> $a="A"; $z="Z"; $y="y"; $jz="jay z"
        PS> $a; ls d; $z                # multiple statements
        PS> $a; myls d; $z              # pass
        PS> $a; ls d -Exclude x; $z     # named args
        PS> $a; myls d -Exclude x; $z   # pass
        PS> $a; ls d -Exclude $y; $z    # variables in arg-line
        PS> $a; myls d -Exclude $y; $z  # pass
        PS> $a; ls d -Exclude $jz; $z   # variables containing spaces in arg-line
        PS> $a; myls d -Exclude $jz; $z # FAIL!

    Is there a way I can re-write myls to get the behavior I want?

  • Database-independent row-level security solution

    - by Filip
    Hi, does anybody know of a database-independent Java/C# authorization library? This library should support read, write, delete and insert actions across a company organizational structure. Something like this:

    - user can see all documents
    - user can enter a new document assigned to his unit
    - user can change all documents assigned to his unit and all subordinate units
    - user can delete documents that are assigned to him

    I should also be able to create custom actions (besides read, write, ...), connect them to a certain class and assign that "security token" to a user (e.g. document.expire). If there aren't any libraries, either free or commercial, is there a book that could be useful in implementing this functionality? Thanks.
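    If nothing off the shelf fits, a minimal Java sketch of the kind of API being described (all names hypothetical, the organizational-tree logic left to the implementation) might look like this:

        import java.util.Set;

        /** A permission check that is decoupled from any particular database or ORM. */
        public interface AuthorizationService {

            /** Built-in actions; custom ones such as "document.expire" are plain strings too. */
            String READ = "read";
            String WRITE = "write";
            String DELETE = "delete";
            String INSERT = "insert";

            /** True if the user may perform the action on the given resource, directly or via a subordinate unit. */
            boolean isAllowed(String userId, String action, String resourceType, String resourceId);

            /** The units whose documents the user can change: his own unit plus all subordinate units. */
            Set<String> writableUnits(String userId);
        }

    A data-access layer would then call isAllowed before every read/write/delete/insert, and use writableUnits to append a row-level filter (e.g. WHERE unit_id IN (...)) to its queries.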

  • Why is a very good PHP framework - Qcodo (or Qcubed, its branch) - so unpopular?

    - by Pawel
    I am wondering why this framework (Qcodo) is almost forgotten and totally unpopular. I started using it a few years ago and it is the only thing that keeps me with PHP. Yeah... its development is stuck (that's why there is now a more active branch, Qcubed), but it is still a very good piece of software. Its main advantages:

    - Event-driven (something like ASP.NET), no spaghetti code
    - Powerful code generation
    - Good ORM
    - Follows DRY
    - Very simple AJAX support
    - Is fun to write

    Since then I wanted to be trendy and checked Django, but I cannot write a normal request-based web application (it just doesn't feel right). Don't believe it? chess.com is written with it, and surely there are plenty of others. My two questions are:

    1. Have you heard of it (PHP people)?
    2. If you are using it, what is your opinion about it (show us examples of your work)?

    Thanks

  • Writing JSON with XSLT

    - by JP
    Hi, I'm trying to write XSLT to transform a specific web page to JSON. The following code demonstrates how Ruby would do this conversion, but the XSLT doesn't generate valid JSON (there's one too many commas inside the array) - anyone know how to write XSLT to generate valid JSON?

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        doc = Nokogiri::HTML(open('http://bbc.co.uk/radio1/playlist'))
        xslt = Nokogiri::XSLT(DATA.read)
        puts out = xslt.transform(doc)

        # Now follows the XSLT
        __END__
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns="http://www.w3.org/1999/xhtml">
          <xsl:output method="text" encoding="UTF-8" media-type="text/plain"/>
          <xsl:template match="/">
            [
            <xsl:for-each select="//*[@id='playlist_a']//div[@class='artists_and_songs']//ul[@class='clearme']">
              {'artist':'<xsl:value-of select="li[@class='artist']" />','track':'<xsl:value-of select="li[@class='song']" />'},
            </xsl:for-each>
            ]
          </xsl:template>
        </xsl:stylesheet>

  • How to use regex to match ASTERISK in awk

    - by Ken Chen
    I'm still pretty new to regular expressions and have just started learning to use awk. What I am trying to accomplish is writing a ksh script that reads in lines from text and, for every line that matches the following:

        *RECORD 0000001 [some_serial_#]

    replaces $2 (i.e. 000001) with a different number. So essentially the script reads in a batch record dump, replaces the record number with date+record#, and writes to a separate file. This is what I'm thinking the format should be:

        awk 'match($0,"/*FTR")!=0{$2="$DATE-n++"; print $0}
             match($0,"/*FTR")==0{print $0}' $BATCH > $OUTPUT

    but obviously "/*FTR" is not going to work, and I'm not sure if changing $2 and then writing the whole line is the correct way to do this. So I am in need of some serious enlightenment.

  • Does BinaryWriter.Flush() also flush the underlying FileStream object?

    - by jacob sebastian
    I have got a code snippet as follows:

        Dim fstream = New FileStream(some file here)
        Dim bwriter = New BinaryWriter(fstream)

        While Not end of file
            read from source file
            bwriter.Write()
            bwriter.Flush()
        End While

    The question I have is the following: when I call bwriter.Flush(), does it also flush the fstream object? Or should I have to explicitly call fstream.Flush(), as in the following example?

        While Not end of file
            read from source file
            bwriter.Write()
            bwriter.Flush()
            fstream.Flush()
        End While

    A few people suggested that I need to call fstream.Flush() explicitly to make sure that the data is written to the disk (or the device). However, my testing shows that the data is written to the disk as soon as I call the Flush() method on the bwriter object. Can someone confirm this?

  • Tomcat 6 MySQL JDBC DataSource configuration

    - by Bob
    Hi, I've always used Spring's dependency injection to get DataSource objects and use them in my DAOs, but now I have to write an app without that. With Spring I can write something like this:

        <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
          <property name="driverClassName" value="com.mysql.jdbc.Driver" />
          <property name="url" value="jdbc:mysql://127.0.0.1/app?characterEncoding=UTF-8" />
          <property name="username" value="u" />
          <property name="password" value="p" />
        </bean>

    But how can I use a DataSource in my DAOs without Spring or anything? I'm using servlets and JSPs only. Performance is a very important factor. Thanks in advance!
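    One common answer (sketched below; the resource name, credentials and pool sizes are placeholders) is to let Tomcat 6 manage a pooled DataSource declared in context.xml and look it up through JNDI from the DAO code:

        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;
        import java.sql.Connection;
        import java.sql.SQLException;

        public class DataSourceProvider {

            /*
             * Assumes a resource declared in the web app's META-INF/context.xml, for example:
             *
             * <Resource name="jdbc/app" auth="Container" type="javax.sql.DataSource"
             *           driverClassName="com.mysql.jdbc.Driver"
             *           url="jdbc:mysql://127.0.0.1/app?characterEncoding=UTF-8"
             *           username="u" password="p"
             *           maxActive="20" maxIdle="5" maxWait="10000"/>
             */
            public static DataSource lookup() throws NamingException {
                InitialContext ctx = new InitialContext();
                return (DataSource) ctx.lookup("java:comp/env/jdbc/app");
            }

            /** DAOs borrow a pooled connection and must close it so it returns to the pool. */
            public static Connection borrowConnection() throws NamingException, SQLException {
                return lookup().getConnection();
            }
        }

    Because the container owns the pool, servlets and JSPs get connection reuse without any Spring wiring, which also helps with the performance concern.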

  • Efficiently check string for one of several hundred possible suffixes

    - by Ghostrider
    I need to write a C/C++ function that would quickly check if a string ends with one of ~1000 predefined suffixes. Specifically, the string is a hostname and I need to check if it belongs to one of several hundred predefined second-level domains. This function will be called a lot, so it needs to be written as efficiently as possible. Bitwise hacks, etc. - anything goes as long as it turns out fast. The set of suffixes is predetermined at compile time and doesn't change. I am thinking of either implementing a variation of Rabin-Karp, or writing a tool that would generate a function with nested ifs and switches custom-tailored to the specific set of suffixes. Since the application in question is 64-bit, to speed up comparisons I could store suffixes of up to 8 bytes in length as a const sorted array and do a binary search within it. Are there any other reasonable options?
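    The question asks for C/C++, but one more data-structure option is language-neutral and worth sketching (shown here in Java purely as an illustration): hash the suffix set and probe once per distinct suffix length, which for second-level domains is a very small number of lookups per call:

        import java.util.HashSet;
        import java.util.Set;
        import java.util.TreeSet;

        /** Answers "does this hostname end with one of the known suffixes?" */
        public final class SuffixMatcher {

            private final Set<String> suffixes = new HashSet<>();
            private final TreeSet<Integer> lengths = new TreeSet<>();

            public SuffixMatcher(Iterable<String> predefinedSuffixes) {
                for (String s : predefinedSuffixes) {
                    suffixes.add(s);
                    lengths.add(s.length());
                }
            }

            public boolean endsWithKnownSuffix(String host) {
                // Only a handful of distinct suffix lengths exist, so this loop is short.
                for (int len : lengths) {
                    if (len > host.length()) {
                        break;                     // lengths are iterated in ascending order
                    }
                    if (suffixes.contains(host.substring(host.length() - len))) {
                        return true;
                    }
                }
                return false;
            }
        }

    The same idea translates directly to C++ (an unordered_set of suffixes plus a small array of lengths), and it avoids the back-tracking subtlety of a binary search over a sorted suffix array, where the matching entry is not necessarily the immediate neighbor of the probe position.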

  • VBScript XML problem

    - by user181421
    Hello friends, I have this VBScript that calls a web service written in .NET (2010). I'm getting an error at the last line and can't figure it out. This is the web service: http://www.kollelbaaleibatim.com/services/getinfo.asmx/GetFronpageInfo

        Dim xmlDOC
        Dim bOK
        Dim J
        Dim HTTP
        Dim ImagePathLeftCar, ImagePathRightCar
        Dim CarIDLeft, CarIDRight
        Dim ShortTitleLeftCar, ShortTitleRightCar
        Dim DescriptionLeftCar, DescriptionRightCar
        Dim PriceLeftCar, PriceRightCar

        Set HTTP = CreateObject("MSXML2.XMLHTTP")
        Set xmlDOC = CreateObject("MSXML.DOMDocument")
        xmlDOC.Async = False
        HTTP.Open "GET", "http://www.kollelbaaleibatim.com/services/getinfo.asmx/GetFronpageInfo", false
        HTTP.setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
        HTTP.Send()

        dim xmldoc2
        set xmldoc2 = Server.CreateObject("Microsoft.XMLDOM")
        xmldoc2.async = False
        bOK = xmldoc2.load(HTTP.responseXML)
        if Not bOK then
            response.write( "Error loading XML from HTTP")
        end if
        response.write( xmldoc2.documentElement.xml) 'Prints a good looking xml
        ShortTitleLeftCar = xmldoc2.documentElement.selectSingleNode("LeftCarShortTitle").text 'ERROR HERE

  • Scipy Negative Distance? What?

    - by disappearedng
    I have an input file which is all floating point numbers to 4 decimal places, i.e.:

        13359 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0007 ...

    (the first is the id). My class uses the loadVectorsFromFile method, which multiplies each number by 10000 and then int()s them. On top of that, I also loop through each vector to ensure that there are no negative values inside. However, when I perform _hclustering, I am continually seeing the error "Linkage Z contains negative values". I seriously think this is a bug, because I checked my values, the values are nowhere small enough or big enough to approach the limits of the floating point numbers, and the formula that I used to derive the values in the file uses absolute value (my input is DEFINITELY right). Can someone enlighten me as to why I am seeing this weird error? What is going on that is causing this negative distance error?

        def loadVectorsFromFile(self, limit, loc, assertAllPositive=True, inflate=True):
            """Inflate to prevent "negative" distance, we use 4 decimal points, so *10000"""
            vectors = {}
            self.winfo("Each vector is set to have %d limit in length" % limit)
            with open( loc ) as inf:
                for line in filter(None, inf.read().split('\n')):
                    l = line.split('\t')
                    if limit:
                        scores = map(float, l[1:limit+1])
                    else:
                        scores = map(float, l[1:])
                    if inflate:
                        vectors[ l[0]] = map( lambda x: int(x*10000), scores) #int might save space
                    else:
                        vectors[ l[0]] = scores
            if assertAllPositive:
                #Assert that it has no negative value
                for dirID, l in vectors.iteritems():
                    if reduce(operator.or_, map( lambda x: x < 0, l)):
                        self.werror( "Vector %s has negative values!" % dirID)
            return vectors

        def main( self, inputDir, outputDir, limit=0,
                  inFname="data.vectors.all", mappingFname='all.id.features.group.intermediate'):
            """ Loads vector from a file and start clustering
            INPUT
                vectors is { featureID: tfidfVector (list), }
            """
            IDFeatureDic = loadIdFeatureGroupDicFromIntermediate( pjoin(self.configDir, mappingFname))
            if not os.path.exists(outputDir):
                os.makedirs(outputDir)

            vectors = self.loadVectorsFromFile( limit, pjoin( inputDir, inFname))

            for threshold in map( lambda x: float(x)/30, range(20,30)):
                clusters = self._hclustering(threshold, vectors)
                if clusters:
                    outputLoc = pjoin(outputDir, "threshold.%s.result" % str(threshold))
                    with open(outputLoc, 'w') as outf:
                        for clusterNo, cluster in clusters.iteritems():
                            outf.write('%s\n' % str(clusterNo))
                            for featureID in cluster:
                                feature, group = IDFeatureDic[featureID]
                                outline = "%s\t%s\n" % (feature, group)
                                outf.write(outline.encode('utf-8'))
                            outf.write("\n")
                else:
                    continue

        def _hclustering(self, threshold, vectors):
            """function which you should call to vary the threshold
            vectors: { featureID: [ tfidf scores, tfidf score, .. ]
            """
            clusters = defaultdict(list)
            if len(vectors) > 1:
                try:
                    results = hierarchy.fclusterdata( vectors.values(), threshold, metric='cosine')
                except ValueError, e:
                    self.werror("_hclustering: %s" % str(e))
                    return False
                for i, featureID in enumerate( vectors.keys()):

  • Javascript comma operator

    - by Claudiu
    When combining assignment with the comma operator (something that you shouldn't do, probably), how does JavaScript determine which value is assigned? Consider these two snippets:

        function nl(x) { document.write(x + "<br>"); }
        var i = 0;
        nl(i+=1, i+=1, i+=1, i+=1);
        nl(i);

    And:

        function nl(x) { document.write(x + "<br>"); }
        var i = 0;
        nl((i+=1, i+=1, i+=1, i+=1));
        nl(i);

    The first outputs 1 4, while the second outputs 4 4. What are the parentheses doing here?

  • In PHP, how to send HTML tags in mail

    - by zahir hussain
        <script type="text/javascript">
            var head = 23;
        </script>
        <?php
        $h = "<script language='javascript'> document.write(head);</script>";
        $headers = "MIME-Version: 1.0" . "\r\n";
        $headers .= "Content-type:text/html;charset=ISO-8859-1" . "\r\n";
        $mail_from = '[email protected]';
        $name = "husssain";
        $headers .= "From: $name <$mail_from>";
        $con = "Welcome to world";
        $to = "[email protected]";
        $send_contact = mail($to, $con, $h, $headers);
        ?>

    Then I send the $h to my mail ID, but I receive only this:

        <script language='javascript'> document.write(head);</script>

  • Writing UTF8 text to file

    - by sonofdelphi
    I am using the following function to save text to a file (on IE-8 w/ActiveX).

        function saveFile(strFullPath, strContent) {
            var fso = new ActiveXObject( "Scripting.FileSystemObject" );
            var flOutput = fso.CreateTextFile( strFullPath, true ); //true for overwrite
            flOutput.Write( strContent );
            flOutput.Close();
        }

    The code works fine if the text is fully Latin-9 but when the text contains even a single UTF-8 encoded character, the write fails. The ActiveX FileSystemObject does not support UTF-8, it seems. I tried UTF-16 encoding the text first but the result was garbled. What is a workaround?

  • Windows API - Beginner help

    - by nXqd
    I am trying to create a very simple app using the Windows API. I've done some small console apps, but this is the first time I'm working with Win32 apps. I've searched and found a document from forgers which is recommended on this site. But when I try to write the very first lines:

        #include <stdafx.h>
        #include <windows.h>

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                           LPSTR lpCmdLine, int nCmdShow)
        {
            MessageBoxW (NULL, "Good bye Cruel World", "Note", MB_OK );
            return 0;
        }

    it doesn't work (I erased the lines from the default project created by VS 2008 and wrote these lines). Waiting for your answers :)
