Search Results

Search found 2122 results on 85 pages for 'yoav str'.

Page 21 of 85

  • Exception: "Given final block not properly padded" in Linux, but it works in Windows

    - by user1685364
    My application works on Windows but fails on Linux with a "Given final block not properly padded" exception. Configuration: JDK version: 1.6, Windows: version 7, Linux: CentOS 5.8 64-bit. My code is below: import java.io.IOException; import java.io.UnsupportedEncodingException; import java.security.InvalidKeyException; import java.security.Key; import java.security.NoSuchAlgorithmException; import java.security.SecureRandom; import javax.crypto.BadPaddingException; import javax.crypto.Cipher; import javax.crypto.IllegalBlockSizeException; import javax.crypto.KeyGenerator; import javax.crypto.NoSuchPaddingException; import sun.misc.BASE64Decoder; import sun.misc.BASE64Encoder; public class SecurityKey { private static Key key = null; private static String encode = "UTF-8"; private static String cipherKey = "DES/ECB/PKCS5Padding"; static { try { KeyGenerator generator = KeyGenerator.getInstance("DES"); String seedStr = "test"; generator.init(new SecureRandom(seedStr.getBytes())); key = generator.generateKey(); } catch(Exception e) { } } // SecurityKey.decodeKey("password") public static String decodeKey(String str) throws Exception { if(str == null) return str; Cipher cipher = null; byte[] raw = null; BASE64Decoder decoder = new BASE64Decoder(); String result = null; cipher = Cipher.getInstance(cipherKey); cipher.init(Cipher.DECRYPT_MODE, key); raw = decoder.decodeBuffer(str); byte[] stringBytes = null; stringBytes = cipher.doFinal(raw); // Exception!!!! result = new String(stringBytes, encode); return result; } } At the line cipher.doFinal(raw); the following exception is thrown: javax.crypto.BadPaddingException: Given final block not properly padded. How can I fix this issue?
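
    A likely cause, though not confirmed here, is that seeding SecureRandom is not a portable way to derive a key: the Windows and Linux providers can turn the same seed into different key bytes, so the Linux JVM decrypts with the wrong key and the padding check fails. A minimal sketch of deriving the DES key from fixed bytes instead (the class name and the 8-byte passphrase "test1234" are made up for illustration):

        import java.security.Key;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.DESKeySpec;

        public class FixedDesKey {
            // builds the same 8-byte DES key on every platform,
            // instead of relying on a platform-dependent SecureRandom seed
            public static Key load() throws Exception {
                DESKeySpec spec = new DESKeySpec("test1234".getBytes("UTF-8"));
                return SecretKeyFactory.getInstance("DES").generateSecret(spec);
            }
        }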

    Read the article

  • What is the difference between using MD5.Create and MD5CryptoServiceProvider?

    - by byte
    In the .NET Framework there seem to be a couple of ways to calculate an MD5 hash, but there is something I don't understand. What is the distinction between the following two methods? What sets them apart from each other? They seem to produce identical results: public static string GetMD5Hash(string str) { MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider(); byte[] bytes = ASCIIEncoding.Default.GetBytes(str); byte[] encoded = md5.ComputeHash(bytes); StringBuilder sb = new StringBuilder(); for (int i = 0; i < encoded.Length; i++) sb.Append(encoded[i].ToString("x2")); return sb.ToString(); } public static string GetMD5Hash2(string str) { System.Security.Cryptography.MD5 md5 = System.Security.Cryptography.MD5.Create(); byte[] bytes = Encoding.Default.GetBytes(str); byte[] encoded = md5.ComputeHash(bytes); StringBuilder sb = new StringBuilder(); for (int i = 0; i < encoded.Length; i++) sb.Append(encoded[i].ToString("x2")); return sb.ToString(); }
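
    MD5 is an abstract base class, and MD5.Create() asks the crypto configuration for the default registered implementation, which on the desktop .NET Framework is MD5CryptoServiceProvider; that is why the two methods hash identically. A small sketch to check what Create() returns (the exact type may differ on other runtimes):

        using System;
        using System.Security.Cryptography;

        class Md5Check
        {
            static void Main()
            {
                using (MD5 md5 = MD5.Create())
                {
                    // on the full .NET Framework this typically prints
                    // System.Security.Cryptography.MD5CryptoServiceProvider
                    Console.WriteLine(md5.GetType().FullName);
                }
            }
        }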

    Read the article

  • Java UnknownFormatConversionException

    - by user1672458
    The code below is throwing this error, and I'm not sure why. It's clearly a problem with outputting String.format to the str variable, but I don't know what's wrong with it. Exception in thread "main" java.util.UnknownFormatConversionException: Conversion = 'i' at java.util.Formatter$FormatSpecifier.conversion(Unknown Source) at java.util.Formatter$FormatSpecifier.<init>(Unknown Source) at java.util.Formatter.parse(Unknown Source) at java.util.Formatter.format(Unknown Source) at java.util.Formatter.format(Unknown Source) at java.lang.String.format(Unknown Source) at Donor.toString(Donor.java:41) at Donor.main(Donor.java:65) - import java.util.Scanner; public class Donor { public String name; public int age; public double donation; Donor() { //Initialized to these values for debugging name = "NoName"; age = 0; donation = 0; } Donor(String nameinit, int ageinit, double donationinit) { name = nameinit; age = ageinit; donation = donationinit; } public String toString() { String str = ""; str = String.format("%s-30%i-6$%d-20", name, age, donation); return str; } public static void main(String[] args) { Scanner input = new Scanner(System.in); String nameinit = null; int ageinit = -1; double donationinit = -1; String outp = null; System.out.print("Enter the donor's name: "); nameinit = input.nextLine(); System.out.print("Enter the donor's age: "); ageinit = input.nextInt(); System.out.print("Enter the donation amount: "); donationinit = input.nextDouble(); Donor d = new Donor(nameinit, ageinit, donationinit); outp = d.toString(); System.out.printf("%s30 %s6 %s10", "Name", "Age", "Donation"); System.out.print("\n" + outp); input.close(); } }
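
    Java's Formatter has no %i conversion (that is C's printf); integers use %d, floating-point values use %f, and field widths such as 30 or -6 go between the % and the conversion letter. A hedged sketch of what the format string was probably aiming for (the column widths are guesses at the intended layout):

        public class FormatDemo {
            public static void main(String[] args) {
                String name = "NoName";
                int age = 0;
                double donation = 0;
                // %-30s, %-6d and %-20.2f left-justify each value into a fixed-width column
                String str = String.format("%-30s%-6d$%-20.2f", name, age, donation);
                System.out.println(str);
            }
        }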

    Read the article

  • C++ Returning a Reference

    - by Devil Jin
    Consider the following code, where I am returning a double& and a string&. It works fine in the case of the double but not in the case of the string. Why the difference in behavior? In both cases the compiler does not even throw the warning "returning address of local variable or temporary", since I am returning a reference. #include <iostream> #include <string> using namespace std; double &getDouble(){ double h = 46.5; double &hours = h; return hours; } string &getString(){ string str = "Devil Jin"; string &refStr = str; return refStr; } int main(){ double d = getDouble(); cout << "Double = " << d << endl; string str = getString(); cout << "String = " << str.c_str() << endl; return 0; } Output: $ ./a.exe Double = 46.5 String =
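
    Both functions return a reference to a local that is destroyed when the function ends, so both are undefined behavior; the double only appears to work because the dead stack value happens to survive, while the string's buffer is released by its destructor. A sketch of the usual fix, returning by value:

        #include <iostream>
        #include <string>

        // the local is copied (or moved) out before it is destroyed, so the result is safe
        std::string getString() {
            std::string str = "Devil Jin";
            return str;
        }

        int main() {
            std::cout << "String = " << getString() << std::endl;
            return 0;
        }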

    Read the article

  • Parsing CSV string and binding it to listbox

    - by Amit Ranjan
    I have split comma-separated values into a string array, something like this: str[0] = "210", str[1] = "abc.pdf", str[2] = "211", str[3] = "xyz.docx" and so on. Please note that the even positions (0, 2, 4, 6, 8) hold a number and the odd positions hold a string. I have a class AttachmentModel: Public Class AttachmentModel Private _attachmentID As Integer = 0 Private _attachmentPath As String = "" ''' <summary> ''' Get Set Attachment ID ''' </summary> ''' <value></value> ''' <returns></returns> ''' <remarks></remarks> Public Property AttachmentID() As Integer Get Return _attachmentID End Get Set(ByVal value As Integer) _attachmentID = value End Set End Property ''' <summary> ''' Get Set Attachment Path ''' </summary> ''' <value></value> ''' <returns></returns> ''' <remarks></remarks> Public Property AttachmentPath() As String Get Return _attachmentPath End Get Set(ByVal value As String) _attachmentPath = value End Set End Property End Class Using the above, I want to set the values and bind them to the list box, using a List(Of AttachmentModel), as in the sketch below.
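
    A minimal sketch of one way to do it, assuming str is the split array and lstAttachments is a Windows Forms ListBox (the control name is made up; an ASP.NET ListBox would use DataTextField, DataValueField and DataBind instead):

        ' assumes Imports System.Collections.Generic
        Dim attachments As New List(Of AttachmentModel)()
        For i As Integer = 0 To str.Length - 2 Step 2
            Dim item As New AttachmentModel()
            item.AttachmentID = Integer.Parse(str(i))   ' even position: the numeric id
            item.AttachmentPath = str(i + 1)            ' odd position: the file name
            attachments.Add(item)
        Next
        lstAttachments.DataSource = attachments
        lstAttachments.DisplayMember = "AttachmentPath"
        lstAttachments.ValueMember = "AttachmentID"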

    Read the article

  • Not able to deserialize an object

    - by Ravisha
    I have the following piece of code, in which I am trying to serialize and deserialize an object of the StringResource class. Please note that Resource1.stringXml comes from a resource file. If I pass strelemet.OuterXml I get the object back from deserialization, but if I pass Resource1.stringXml I get the following exception: {"< STRING xmlns='' was not expected."} System.Exception {System.InvalidOperationException} class Program { static void Main(string[] args) { StringResource str = new StringResource(); str.DELETE = "CanDelete"; str.ID= "23342"; XmlElement strelemet = SerializeObjectToXmlNode (str); StringResource strResourceObject = DeSerializeXmlNodeToObject<StringResource>(Resource1.stringXml); Console.ReadLine(); } public static T DeSerializeXmlNodeToObject<T>(string objectNodeOuterXml) { try { TextReader objStringsTextReader = new StringReader(objectNodeOuterXml); XmlSerializer stringResourceSerializer = new XmlSerializer(typeof(T),string.Empty); return (T)stringResourceSerializer.Deserialize(objStringsTextReader); } catch (Exception excep) { return default(T); } } public static XmlElement SerializeObjectToXmlNode(object obj) { using (MemoryStream memoryStream = new MemoryStream()) { try { XmlSerializerNamespaces xmlNameSpace = new XmlSerializerNamespaces(); xmlNameSpace.Add(string.Empty, string.Empty); XmlWriterSettings writerSettings = new XmlWriterSettings(); writerSettings.CloseOutput = false; writerSettings.Encoding = System.Text.Encoding.UTF8; writerSettings.Indent = false; writerSettings.OmitXmlDeclaration = true; XmlWriter writer = XmlWriter.Create(memoryStream, writerSettings); XmlSerializer xmlserializer = new XmlSerializer(obj.GetType()); xmlserializer.Serialize(writer, obj, xmlNameSpace); writer.Close(); memoryStream.Position = 0; XmlDocument serializeObjectDoc = new XmlDocument(); serializeObjectDoc.Load(memoryStream); return serializeObjectDoc.DocumentElement; } catch (Exception excep) { return null; } } } } public class StringResource { [XmlAttribute] public string DELETE; [XmlAttribute] public string ID; }
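
    That message usually means the root element of the XML does not match what the serializer expects for the type. If the stored XML's root really is <STRING>, one option (a sketch, assuming that root name) is to construct the serializer with a matching XmlRootAttribute:

        using System.Xml.Serialization;

        static class StringResourceXml
        {
            // a serializer that accepts <STRING ...> as the document root
            // instead of the default <StringResource>
            public static readonly XmlSerializer Serializer =
                new XmlSerializer(typeof(StringResource), new XmlRootAttribute("STRING"));
        }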

    Read the article

  • Using generics to make an algorithm work on lists of "something" instead of only String's

    - by Binary255
    Hi, I have a small algorithm which switches the positions of adjacent characters in a String: class Program { static void Main(string[] args) { String pairSwitchedStr = pairSwitch("some short sentence"); Console.WriteLine(pairSwitchedStr); Console.ReadKey(); } private static String pairSwitch(String str) { StringBuilder pairSwitchedStringBuilder = new StringBuilder(); for (int position = 0; position + 1 < str.Length; position += 2) { pairSwitchedStringBuilder.Append((char)str[position + 1]); pairSwitchedStringBuilder.Append((char)str[position]); } return pairSwitchedStringBuilder.ToString(); } } I would like to make it as generic as possible, possibly using generics. What I'd like is something which works with anything that is built up from a list of instances, including strings, arrays and linked lists. I suspect that the solution must use generics, as the algorithm works on a list of instances of T (where T is ... something). The version of C# isn't of interest; I guess the solution will be nicer if features from C# 2.0 are used.
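
    One possible shape, sketched here rather than prescribed: accept any indexable sequence and yield the elements with each adjacent pair swapped, so the same method serves char arrays, List<T> and other IList<T> implementations (a string can be passed after ToCharArray):

        using System.Collections.Generic;

        static class PairSwitcher
        {
            // yields the elements of 'items' with each adjacent pair swapped;
            // generics and iterators as used here are available from C# 2.0
            public static IEnumerable<T> PairSwitch<T>(IList<T> items)
            {
                for (int position = 0; position + 1 < items.Count; position += 2)
                {
                    yield return items[position + 1];
                    yield return items[position];
                }
            }
        }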

    Read the article

  • Ajax form submission in Google App Engine with jQuery

    - by user271785
    could not figure out why it is not working: i need to send request to server, generate some fragment of html in python with meanCal method, and then want that fragment embedded into submitting html file using calculation method and dynamically shows in dyContent div. all the processes are done by single click on submit button in a form. any suggestions??? thanks in advance. the submitting html: <div id="dyContent" style="height: 200px;"> waiting for user... {{ mgs }} </div> <div id="leturetext"> <form id="mean" method="post" action="/calculation"> <select name="meanselect"> <option value=10>example</option> <option value=11>exercise</option> </select> <input type="button" name="btnMean" value="Check Results" /> </form> </div> <script type="text/javascript"> $(document).ready(function() { //$("#btnMean").live("click", function() { $("#mean").submit(function(){ $.ajax({ type: "POST", cache: false, url: "/meanCal", success: function(html) { $("#dyContent").html(html); } }); return false; }); }); </script> python: class MainHandler(webapp.RequestHandler): def get(self): path = self.request.path if doRender(self, path): return doRender(self,'index.htm') class calculationHandler(webapp.RequestHandler): def post(self): doRender(self, 'Diagnostic_stats.htm', {'mgs' : "refreshed.", }) def get(self): doRender(self, 'Diagnostic_stats.htm') class meanHandler(webapp.RequestHandler): def get(self): global GL index = self.request.get('meanselect'.value) if (index == 10): allData = GL.exampleData dataString = ','.join(map(str, allData)) dataMean = (str)(stats.lmean(allData)) doRender(self, 'Result.htm', { 'dataIn' : dataString, 'MEAN' : "Example Mean is: " + dataMean, }) return else: allData = GL.exerciseData dataString = ','.join(map(str, allData)) dataMean = (str)(stats.lmean(allData)) doRender(self, 'Result.htm', { 'dataIn' : dataString, 'MEAN' : "Exercise Mean is: " + dataMean, }) def main(): global GL GL = GlobalVariables() application = webapp.WSGIApplication( [('/calculation', calculationHandler), ('/meanCal', meanHandler), ('.*', MainHandler), ], debug=True) wsgiref.handlers.CGIHandler().run(application) if __name__ == '__main__': main()

    Read the article

  • Vim + OmniCppComplete and completing members of class members

    - by Robert S. Barnes
    I've noticed that I can't seem to complete members of class members using OmniCppComplete. For example, given the following files: // foo.h #include <string> class foo { public: void set_str(const std::string &); std::string get_str_reverse( void ); private: std::string str; }; // foo.cpp #include "foo.h" using std::string; string foo::get_str_reverse ( void ) { string temp; temp.assign(str); reverse(temp.begin(), temp.end()); return temp; } /* ----- end of method foo::get_str ----- */ void foo::set_str ( const string &s ) { str.assign(s); } /* ----- end of method foo::set_str ----- */ I've set up tags for stdlibc++ and generated the tags for these two files using: ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q . When I type temp. in the cpp I get a list of string member functions as expected. But if I type str. omnicomplete spits out "Pattern Not Found". I've noticed that the temp. completion only works if I have the using std::string; declaration. How do I get completion to work on my class members?

    Read the article

  • Use a proxy in Python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to use a public anonymous proxy and fetch a webpage, but I got a rather strange error. The code (I have Python 2.4): import urllib2 def get_source_html_proxy(url, pip, timeout): # timeout in seconds (maximum number of seconds willing for the code to wait in # case there is a proxy that is not working, then it gives up) proxy_handler = urllib2.ProxyHandler({'http': pip}) opener = urllib2.build_opener(proxy_handler) opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib2.install_opener(opener) req=urllib2.Request(url) sock=urllib2.urlopen(req) timp=0 # a counter that is going to measure the time until the result (webpage) is # returned while 1: data = sock.read(1024) timp=timp+1 if len(data) < 1024: break timpLimita=50000000 * timeout if timp==timpLimita: # 5 millions is about 1 second break if timp==timpLimita: print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data) return str(data) else: print "This proxy " + IPul + "= good proxy. " + "It returns the following IP: " + str(data) return str(data) # Now, I call the function to test it for one single proxy (IP:port) that does not support user and password (a public high anonymity proxy) #(I put a proxy that I know is working - slow, but is working) rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50) print rez The error: Traceback (most recent call last): File "./public_html/cgi-bin/teste5.py", line 43, in ? rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50) File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy sock=urllib2.urlopen(req) File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen return _opener.open(url, data) File "/usr/lib64/python2.4/urllib2.py", line 358, in open response = self._open(req, data) File "/usr/lib64/python2.4/urllib2.py", line 376, in _open '_open', req) File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain result = func(*args) File "/usr/lib64/python2.4/urllib2.py", line 573, in lambda r, proxy=url, type=type, meth=self.proxy_open: \ File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open if '@' in host: TypeError: iterable argument required I do not know why the character "@" is an issue (I have no such in my code. Should I have?) Thanks in advance for your valuable help.
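
    One thing worth checking, offered as an assumption rather than a verified diagnosis: older urllib2 releases expect the proxy value to be a full URL including the scheme, and a bare host:port can leave the parsed host as None inside proxy_open, which would produce exactly this TypeError. A sketch of the changed handler setup:

        import urllib2

        # give the proxy a scheme so urllib2 can split the user/host parts correctly
        proxy_handler = urllib2.ProxyHandler({'http': 'http://93.84.221.248:3128'})
        opener = urllib2.build_opener(proxy_handler)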

    Read the article

  • Memory problems while code is running (Python, Networkx)

    - by MIN SU PARK
    I wrote some code to generate a graph with 379613734 edges, but it couldn't finish because of memory: it takes about 97% of the server's memory by the time it gets through 62 million lines, so I killed it. Do you have any idea how to solve this problem? My code is like this: import os, sys import time import networkx as nx G = nx.Graph() ptime = time.time() j = 1 for line in open("./US_Health_Links.txt", 'r'): #for line in open("./test_network.txt", 'r'): follower = line.strip().split()[0] followee = line.strip().split()[1] G.add_edge(follower, followee) if j%1000000 == 0: print j*1.0/1000000, "million lines done", time.time() - ptime ptime = time.time() j += 1 DG = G.to_directed() # P = nx.path_graph(DG) Nn_G = G.number_of_nodes() N_CC = nx.number_connected_components(G) LCC = nx.connected_component_subgraphs(G)[0] n_LCC = LCC.nodes() Nn_LCC = LCC.number_of_nodes() inDegree = DG.in_degree() outDegree = DG.out_degree() Density = nx.density(G) # Diameter = nx.diameter(G) # Centrality = nx.betweenness_centrality(PDG, normalized=True, weighted_edges=False) # Clustering = nx.average_clustering(G) print "number of nodes in G\t" + str(Nn_G) + '\n' + "number of CC in G\t" + str(N_CC) + '\n' + "number of nodes in LCC\t" + str(Nn_LCC) + '\n' + "Density of G\t" + str(Density) + '\n' # sys.exit() # j += 1 The edge data is like this: 1000 1001 1000245 1020191 1000 10267352 1000653 10957902 1000 11039092 1000 1118691 10346 11882 1000 1228281 1000 1247041 1000 12965332 121340 13027572 1000 13075072 1000 13183162 1000 13250162 1214 13326292 1000 13452672 1000 13844892 1000 14061830 12340 1406481 1000 14134703 1000 14216951 1000 14254402 12134 14258044 1000 14270791 1000 14278978 12134 14313332 1000 14392970 1000 14441172 1000 14497568 1000 14502775 1000 14595635 1000 14620544 1000 14632615 10234 14680596 1000 14956164 10230 14998341 112000 15132211 1000 15145450 100 15285998 1000 15288974 1000 15300187 1000 1532061 1000 15326300 Lastly, has anybody here had experience analyzing Twitter link data? It's quite hard for me to take a directed graph and calculate the average/median indegree and outdegree of the nodes. Any help or ideas?
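
    If the in-/out-degree statistics are the main goal, a rough alternative (a sketch, not a drop-in replacement for the connected-component and density calculations) is to tally degrees in a single pass without keeping the whole graph in memory:

        from collections import defaultdict

        indegree = defaultdict(int)
        outdegree = defaultdict(int)
        with open("./US_Health_Links.txt") as f:
            for line in f:
                follower, followee = line.split()[:2]
                outdegree[follower] += 1
                indegree[followee] += 1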

    Read the article

  • jQuery Ajax JSON Serializable

    - by willsonchan
    I am learning to use jQuery Ajax to handle JSON, so I wrote some demo code. HTML code: $(function () { $("#add").click(function () { var json = '{ "str":[{"Role_ID":"2","Customer_ID":"155","Brands":"Chloe;","Country_ID":"96;"}]}'; $.ajax({ url: "func.aspx/GetJson", type: "POST", contentType: "application/json", dataType: 'json', data: json, success: function (result) { alert(result); }, error: function () { alert("error"); } }); }); }); <div> <input type="button" value="add" id="add" /> </div> I have an input and bind a script function to it; now the problem comes from my C# function, which looks like this: [WebMethod] public static string GetJson(object str) { return str.ToString();//good for work } [Serializable] public class TestClass { public TestClass() { } public TestClass(string role_id, string customer_id, string brands, string countryid) { this.Role_ID = role_id; this.Customer_ID = customer_id; this.Brands = brands; this.Country_ID = countryid; } public string Role_ID { get; set; } public string Customer_ID { get; set; } public string Brands { get; set; } public string Country_ID { get; set; } } When I use public static string GetJson(object str) everything works fine, no error at all. But when I try to use my own class TestClass, Firebug tells me "Type 'TestClass' is not supported for deserialization of an array." Can anybody help me?
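
    The posted JSON makes "str" an array (note the [...]), so a parameter typed as a single TestClass cannot receive it. A sketch of one way around it, assuming the page method should accept the whole array:

        // requires using System.Collections.Generic;
        [WebMethod]
        public static string GetJson(List<TestClass> str)
        {
            // work with the first element of the posted array
            return str.Count > 0 ? str[0].Brands : string.Empty;
        }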

    Read the article

  • Cross-thread operation not valid: accessed from a thread other than the thread it was created on.

    - by user307524
    Hi, I want to remove checked items from a CheckedListBox (WinForms control) in a class-file method which I am calling asynchronously using a delegate, but it shows me this error message: Cross-thread operation not valid: Control 'checkedListBox1' accessed from a thread other than the thread it was created on. I have tried InvokeRequired but got the same error again. Sample code is below: private void button1_Click(object sender, EventArgs e) { // Create an instance of the test class. Class1 ad = new Class1(); // Create the delegate. AsyncMethodCaller1 caller = new AsyncMethodCaller1(ad.TestMethod1); //callback delegate IAsyncResult result = caller.BeginInvoke(checkedListBox1, new AsyncCallback(CallbackMethod)," "); } In the class file, the code for TestMethod1 is: private delegate void dlgInvoke(CheckedListBox c, Int32 str); private void Invoke(CheckedListBox c, Int32 str) { if (c.InvokeRequired) { c.Invoke(new dlgInvoke(Invoke), c, str); c.Items.RemoveAt(str); } else { c.Text = ""; } } // The method to be executed asynchronously. public string TestMethod1(CheckedListBox chklist) { for (int i = 0; i < 10; i++) { string chkValue = chklist.CheckedItems[i].ToString(); //do some other database operation based on checked items. Int32 index = chklist.FindString(chkValue); Invoke(chklist, index); } return ""; }
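
    In the posted Invoke helper the Items.RemoveAt call still runs on the worker thread even after the marshalled call returns, which is enough to trigger the exception. A sketch of the usual pattern (RemoveItem is a made-up name), where the method re-enters itself on the UI thread and then stops:

        // Action<,> needs .NET 3.5; on 2.0 declare an equivalent two-argument delegate
        private void RemoveItem(CheckedListBox c, int index)
        {
            if (c.InvokeRequired)
            {
                // marshal the whole call onto the UI thread, then return
                c.Invoke(new Action<CheckedListBox, int>(RemoveItem), c, index);
                return;
            }
            c.Items.RemoveAt(index);
        }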

    Read the article

  • Finding Local IP via Socket Creation / getsockname

    - by BSchlinker
    I need to get the IP address of a system within C++. I followed the logic and advice of another comment on here and created a socket and then utilized getsockname to determine the IP address which the socket is bound to. However, this doesn't appear to work (code below). I'm receiving an invalid IP address (58.etc) when I should be receiving a 128.etc Any ideas? string Routes::systemIP(){ // basic setup int sockfd; char str[INET_ADDRSTRLEN]; sockaddr* sa; socklen_t* sl; struct addrinfo hints, *servinfo, *p; int rv; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_DGRAM; if ((rv = getaddrinfo("4.2.2.1", "80", &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return "1"; } // loop through all the results and make a socket for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("talker: socket"); continue; } break; } if (p == NULL) { fprintf(stderr, "talker: failed to bind socket\n"); return "2"; } // get information on the local IP from the socket we created getsockname(sockfd, sa, sl); // convert the sockaddr to a sockaddr_in via casting struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)sa; // get the IP from the sockaddr_in and print it inet_ntop(AF_INET, &(sa_ipv4->sin_addr.s_addr), str, INET_ADDRSTRLEN); printf("%s\n", str); // return the IP return str; }
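
    Two things stand out in the snippet: sa and sl are passed to getsockname uninitialized, and the socket is never connected, so no local address has been assigned to it yet. A rough sketch of the relevant part (for UDP, connect() only records the peer and lets the kernel pick the local address):

        // after the socket() loop, before reading the local address
        if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
            perror("connect");
        }

        sockaddr_storage local;
        socklen_t len = sizeof(local);
        getsockname(sockfd, reinterpret_cast<sockaddr*>(&local), &len);

        char str[INET_ADDRSTRLEN];
        sockaddr_in *sa_ipv4 = reinterpret_cast<sockaddr_in*>(&local);
        inet_ntop(AF_INET, &sa_ipv4->sin_addr, str, INET_ADDRSTRLEN);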

    Read the article

  • OmniCppComplete: Completing on Class Members which are STL containers

    - by Robert S. Barnes
    Completion on class members which are STL containers is failing. Completion on local objects which are STL containers works fine. For example, given the following files: // foo.h #include <string> class foo { public: void set_str(const std::string &); std::string get_str_reverse( void ); private: std::string str; }; // foo.cpp #include "foo.h" using std::string; string foo::get_str_reverse ( void ) { string temp; temp.assign(str); reverse(temp.begin(), temp.end()); return temp; } /* ----- end of method foo::get_str ----- */ void foo::set_str ( const string &s ) { str.assign(s); } /* ----- end of method foo::set_str ----- */ I've generated the tags for these two files using: ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q . When I type temp. in the cpp I get a list of string member functions as expected. But if I type str. omnicppcomplete spits out "Pattern Not Found". I've noticed that the temp. completion only works if I have the using std::string; declaration. How do I get completion to work on my class members which are STL containers?

    Read the article

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing I've compared using a for loop and increment variables versus list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. It takes the same time using a each method. Am I doing something wrong or is there a better approach? Here is a greatly simplified and shortened data structure: list = [ {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'biscuits and gravy'}, {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'peaches and cream'}, {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'}, {'key1': False, 'dontcare': False, 'ignoreme': True, 'key2': False, 'filenotfound': 'over and under'}, {'key1': True, 'dontcare': True, 'ignoreme': False, 'key2': True, 'filenotfound': 'Scotch and... well... neat, thanks'} ] Here is the for loop version: #!/usr/bin/env python # Python 2.6 # count the entries where key1 is True # keep a separate count for the subset that also have key2 True key1 = key2 = 0 for dictionary in list: if dictionary["key1"]: key1 += 1 if dictionary["key2"]: key2 += 1 print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2) Output for the data above: Counts: key1: 3, subset key2: 2 Here is the other, perhaps more Pythonic, version: #!/usr/bin/env python # Python 2.6 # count the entries where key1 is True # keep a separate count for the subset that also have key2 True from operator import itemgetter KEY1 = 0 KEY2 = 1 getentries = itemgetter("key1", "key2") entries = map(getentries, list) key1 = len([x for x in entries if x[KEY1]]) key2 = len([x for x in entries if x[KEY1] and x[KEY2]]) print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2) Output for the data above (same as before): Counts: key1: 3, subset key2: 2 I'm a tiny bit surprised these take the same amount of time. I wonder if there's something faster. I'm sure I'm overlooking something simple. One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist and I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.
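
    For what it's worth, a sketch of a third variant that skips both the explicit counters and the intermediate lists by summing generator expressions (the input is renamed data here, since list shadows the built-in):

        # data is the list of dictionaries from the question (renamed from 'list')
        key1 = sum(1 for d in data if d["key1"])
        key2 = sum(1 for d in data if d["key1"] and d["key2"])
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)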

    Read the article

  • Why does this program take up so much memory?

    - by Adrian
    I am learning Objective-C. I am trying to release all of the memory that I use. So, I wrote a program to test if I am doing it right: #import <Foundation/Foundation.h> #define DEFAULT_NAME @"Unknown" @interface Person : NSObject { NSString *name; } @property (copy) NSString * name; @end @implementation Person @synthesize name; - (void) dealloc { [name release]; [super dealloc]; } - (id) init { if (self = [super init]) { name = DEFAULT_NAME; } return self; } @end int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; Person *person = [[Person alloc] init]; NSString *str; int i; for (i = 0; i < 1e9; i++) { str = [NSString stringWithCString: "Name" encoding: NSUTF8StringEncoding]; person.name = str; [str release]; } [person release]; [pool drain]; return 0; } I am using a mac with snow leopard. To test how much memory this is using, I open Activity Monitor at the same time that it is running. After a couple of seconds, it is using gigabytes of memory. What can I do to make it not use so much?
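
    stringWithCString:encoding: returns an autoreleased object, and with the whole loop inside a single autorelease pool a billion of those strings pile up before the pool is drained (the extra [str release] is also an over-release, since the loop never owned str). A sketch of the usual pattern, draining an inner pool on each iteration:

        for (i = 0; i < 1e9; i++) {
            NSAutoreleasePool *inner = [[NSAutoreleasePool alloc] init];
            person.name = [NSString stringWithCString: "Name" encoding: NSUTF8StringEncoding];
            [inner drain];   // releases this iteration's autoreleased string
        }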

    Read the article

  • Should not a tail-recursive function also be faster?

    - by Balint Erdi
    I have the following Clojure code to calculate a number with a certain "factorable" property. (what exactly the code does is secondary). (defn factor-9 ([] (let [digits (take 9 (iterate #(inc %) 1)) nums (map (fn [x] ,(Integer. (apply str x))) (permutations digits))] (some (fn [x] (and (factor-9 x) x)) nums))) ([n] (or (= 1 (count (str n))) (and (divisible-by-length n) (factor-9 (quot n 10)))))) Now, I'm into TCO and realize that Clojure can only provide tail-recursion if explicitly told so using the recur keyword. So I've rewritten the code to do that (replacing factor-9 with recur being the only difference): (defn factor-9 ([] (let [digits (take 9 (iterate #(inc %) 1)) nums (map (fn [x] ,(Integer. (apply str x))) (permutations digits))] (some (fn [x] (and (factor-9 x) x)) nums))) ([n] (or (= 1 (count (str n))) (and (divisible-by-length n) (recur (quot n 10)))))) To my knowledge, TCO has a double benefit. The first one is that it does not use the stack as heavily as a non tail-recursive call and thus does not blow it on larger recursions. The second, I think is that consequently it's faster since it can be converted to a loop. Now, I've made a very rough benchmark and have not seen any difference between the two implementations although. Am I wrong in my second assumption or does this have something to do with running on the JVM (which does not have automatic TCO) and recur using a trick to achieve it? Thank you.

    Read the article

  • How to get spacing between characters printed using TextOut?

    - by life-warrior
    I'm trying to calculate the size of each cell (containing text like "ff" or "a0") so that 32 cells fit into the window by width. However, charWidth*2 doesn't represent the width of a cell, since it doesn't take the spacing between characters into account. How can I obtain a font size such that 32 cells of two characters like "ff" fit exactly into the window's client area? Courier is a fixed-width font. RECT rect; ::GetClientRect( hWnd, &rect ); LONG charWidth = (rect.right-rect.left)/BLOCK_SIZE/2-2; int oldMapMode = ::SetMapMode( hdc, MM_TEXT ); HFONT font = CreateFont( charWidth*2, charWidth, 0, 0, FW_DONTCARE, FALSE, FALSE, FALSE, DEFAULT_CHARSET, OUT_OUTLINE_PRECIS, CLIP_DEFAULT_PRECIS, CLEARTYPE_QUALITY, FF_ROMAN, _T("Courier") ); HGDIOBJ oldFont = ::SelectObject( hdc, font ); for( int i = 0; i < BLOCK_SIZE; ++i ) { CString str; str.Format( _T("%.2x"), (unsigned char)*(g_memAddr+i) ); SIZE size; ::TextOut( hdc, (size.cx+2)*i+1, 1, str, _tcslen((LPCTSTR)str) ); }
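
    Rather than deriving the cell width from the requested font height, the rendered width of a two-character cell can be measured once the font is selected (note that in the posted loop the SIZE variable is read before anything fills it in). A sketch:

        // with the Courier font already selected into hdc
        SIZE cell = { 0 };
        ::GetTextExtentPoint32( hdc, _T("ff"), 2, &cell );
        // cell.cx is the exact width of a two-character cell in this font;
        // comparing it with (rect.right - rect.left) / BLOCK_SIZE shows whether
        // 32 such cells fit across the client area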

    Read the article

  • Multiple HTTP requests using sockets in java

    - by codeomnitrix
    How can I send multiple HTTP requests from my Java program using sockets? I have tried the following: import java.net.*; import java.io.*; class htmlPageFetch{ public static void main(String[] args){ try{ Socket s = new Socket("127.0.0.1", 80); DataInputStream dIn = new DataInputStream(s.getInputStream()); PrintWriter dOut = new PrintWriter(s.getOutputStream(), true); dOut.println("GET /mytesting/justCheck.html HTTP/1.1\r\nHost:localhost\r\n\r\n"); boolean more_data = true; String str; int i = 0; while(more_data){ str = dIn.readLine(); if(str==null){ //Now server has stopped sending data //So now write again the inputs dOut.println("GET /mytesting/justCheck1.html HTTP/1.1\r\nHost:localhost\r\n\r\n"); continue; } System.out.println(str); } }catch(IOException e){ } } } But when I send the request again it is not processed. Why not? Thanks in advance.
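
    readLine() returns null only when the server closes the connection, so on a keep-alive HTTP/1.1 connection the loop never sees the end of the first response; the response framing (Content-Length or chunking) has to be handled, or delegated to a higher-level API. A sketch that leaves the framing to HttpURLConnection instead of a raw socket:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class HtmlPageFetch {
            public static void main(String[] args) throws Exception {
                String[] paths = {"/mytesting/justCheck.html", "/mytesting/justCheck1.html"};
                for (String path : paths) {
                    HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://127.0.0.1" + path).openConnection();
                    BufferedReader in =
                        new BufferedReader(new InputStreamReader(conn.getInputStream()));
                    for (String line; (line = in.readLine()) != null; ) {
                        System.out.println(line);
                    }
                    in.close();
                }
            }
        }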

    Read the article

  • Simple App Engine Sessions Implementation

    - by raz0r
    Here is a very basic class for handling sessions on App Engine: """Lightweight implementation of cookie-based sessions for Google App Engine. Classes: Session """ import os import random import Cookie from google.appengine.api import memcache _COOKIE_NAME = 'app-sid' _COOKIE_PATH = '/' _SESSION_EXPIRE_TIME = 180 * 60 class Session(object): """Cookie-based session implementation using Memcached.""" def __init__(self): self.sid = None self.key = None self.session = None cookie_str = os.environ.get('HTTP_COOKIE', '') self.cookie = Cookie.SimpleCookie() self.cookie.load(cookie_str) if self.cookie.get(_COOKIE_NAME): self.sid = self.cookie[_COOKIE_NAME].value self.key = 'session-' + self.sid self.session = memcache.get(self.key) if self.session: self._update_memcache() else: self.sid = str(random.random())[5:] + str(random.random())[5:] self.key = 'session-' + self.sid self.session = dict() memcache.add(self.key, self.session, _SESSION_EXPIRE_TIME) self.cookie[_COOKIE_NAME] = self.sid self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH print self.cookie def __len__(self): return len(self.session) def __getitem__(self, key): if key in self.session: return self.session[key] raise KeyError(str(key)) def __setitem__(self, key, value): self.session[key] = value self._update_memcache() def __delitem__(self, key): if key in self.session: del self.session[key] self._update_memcache() return None raise KeyError(str(key)) def __contains__(self, item): try: i = self.__getitem__(item) except KeyError: return False return True def _update_memcache(self): memcache.replace(self.key, self.session, _SESSION_EXPIRE_TIME) I would like some advices on how to improve the code for better security. Note: In the production version it will also save a copy of the session in the datastore. Note': I know there are much more complete implementations available online though I would like to learn more about this subject so please don't answer the question with "use that" or "use the other" library.
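
    Two easy hardening steps, offered as suggestions rather than a full review: session ids built from random.random() are guessable, and the cookie is readable from page JavaScript. A sketch using os.urandom for the id and marking the cookie HttpOnly (the httponly attribute needs a Cookie module version that supports it):

        import os

        # inside Session.__init__, replacing the random.random() based id
        sid = os.urandom(16).encode('hex')   # 128 bits from the OS CSPRNG
        self.cookie[_COOKIE_NAME] = sid
        self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH
        self.cookie[_COOKIE_NAME]['httponly'] = True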

    Read the article

  • Export the DataGrid data to text in ASP.NET + C#

    - by SRIRAM
    Problem: the compiler says there is no assembly reference/namespace for Database. Database db = DatabaseFactory.CreateDatabase(); DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles"); DataSet ds = db.ExecuteDataSet(selectCommandWrapper); StringBuilder str = new StringBuilder(); for(int i=0;i<=ds.Tables[0].Rows.Count - 1; i++) { for(int j=0;j<=ds.Tables[0].Columns.Count - 1; j++) { str.Append(ds.Tables[0].Rows[i][j].ToString()); } str.Append("<BR>"); } Response.Clear(); Response.AddHeader("content-disposition", "attachment;filename=FileName.txt"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "application/vnd.text"; System.IO.StringWriter stringWrite = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite); Response.Write(str.ToString()); Response.End();
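
    Database and DatabaseFactory come from the Enterprise Library Data Access Application Block, so the project needs a reference to that assembly plus the matching using directive (a sketch for the older block version that also defines DBCommandWrapper; assembly names vary between Enterprise Library releases):

        // add a project reference to Microsoft.Practices.EnterpriseLibrary.Data.dll, then:
        using Microsoft.Practices.EnterpriseLibrary.Data;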

    Read the article

  • Convert an int to a list of individual digits faster?

    - by user478514
    All, I want to define a converter from an int to the list of its digits, e.g. 987654321 to [9, 8, 7, 6, 5, 4, 3, 2, 1]. If the int has fewer than 9 digits, for example 10, the list should be zero-padded to [0, 0, 0, 0, 0, 0, 0, 1, 0], and if it has more than 9 digits, for example 9987654321, the list should be [9, 9, 8, 7, 6, 5, 4, 3, 2, 1]. >>> i 987654321 >>> l [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> z = [0]*(len(unit) - len(str(l))) >>> z.extend(l) >>> l = z >>> unit [100000000, 10000000, 1000000, 100000, 10000, 1000, 100, 10, 1] >>> sum([x*y for x,y in zip(l, unit)]) 987654321 >>> int("".join([str(x) for x in l])) 987654321 >>> l1 = [int(x) for x in str(i)] >>> z = [0]*(len(unit) - len(str(l1))) >>> z.extend(l1) >>> l1 = z >>> l1 [9, 8, 7, 6, 5, 4, 3, 2, 1] >>> a = [i//x for x in unit] >>> b = [a[x] - a[x-1]*10 for x in range(9)] >>> if len(b) == len(a): b[0] = a[0] # fix the a[-1] issue >>> b [9, 8, 7, 6, 5, 4, 3, 2, 1] I tested the solutions above, but they may not be as fast or simple as I would like, and they may have a length-related bug. Can anyone share a better solution for this kind of conversion? Thanks!
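
    A sketch of a short version (digits is a made-up name) that zero-pads to the fixed nine-digit width and widens automatically for longer numbers:

        def digits(n, width=9):
            # str() gives the digits in order; zfill pads on the left with zeros
            return [int(c) for c in str(n).zfill(width)]

        print digits(987654321)    # [9, 8, 7, 6, 5, 4, 3, 2, 1]
        print digits(10)           # [0, 0, 0, 0, 0, 0, 0, 1, 0]
        print digits(9987654321)   # [9, 9, 8, 7, 6, 5, 4, 3, 2, 1]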

    Read the article

  • Simplifying for-if messes with better structure?

    - by HH
    # Description: you are given a bitwise pattern and a string # you need to find the number of times the pattern matches in the string # any one liner or simple pythonic solution? import random def matchIt(yourString, yourPattern): """find the number of times yourPattern occurs in yourString""" count = 0 matchTimes = 0 # How can you simplify the for-if structures? for coin in yourString: #return to base if count == len(pattern): matchTimes = matchTimes + 1 count = 0 #special case to return to 2, there could be more this type of conditions #so this type of if-conditionals are screaming for a havoc if count == 2 and pattern[count] == 1: count = count - 1 #the work horse #it could be simpler by breaking the intial string of lenght 'l' #to blocks of pattern-length, the number of them is 'l - len(pattern)-1' if coin == pattern[count]: count=count+1 average = len(yourString)/matchTimes return [average, matchTimes] # Generates the list myString =[] for x in range(10000): myString= myString + [int(random.random()*2)] pattern = [1,0,0] result = matchIt(myString, pattern) print("The sample had "+str(result[1])+" matches and its size was "+str(len(myString))+".\n" + "So it took "+str(result[0])+" steps in average.\n" + "RESULT: "+str([a for a in "FAILURE" if result[0] != 8])) # Sample Output # # The sample had 1656 matches and its size was 10000. # So it took 6 steps in average. # RESULT: ['F', 'A', 'I', 'L', 'U', 'R', 'E']
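
    For counting matches, a sketch of a more direct structure that drops the counter/reset bookkeeping entirely by comparing each pattern-length window of the sequence against the pattern (this counts overlapping occurrences, which may or may not be the intent):

        def match_count(seq, pattern):
            plen = len(pattern)
            # count every position whose next plen items equal the pattern
            return sum(1 for i in range(len(seq) - plen + 1) if seq[i:i + plen] == pattern)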

    Read the article

  • Optimization of Function with Dictionary and Zip()

    - by eWizardII
    Hello, I have the following function: def filetxt(): word_freq = {} lvl1 = [] lvl2 = [] total_t = 0 users = 0 text = [] for l in range(0,500): # Open File if os.path.exists("C:/Twitter/json/user_" + str(l) + ".json") == True: with open("C:/Twitter/json/user_" + str(l) + ".json", "r") as f: text_f = json.load(f) users = users + 1 for i in range(len(text_f)): text.append(text_f[str(i)]['text']) total_t = total_t + 1 else: pass # Filter occ = 0 import string for i in range(len(text)): s = text[i] # Sample string a = re.findall(r'(RT)',s) b = re.findall(r'(@)',s) occ = len(a) + len(b) + occ s = s.encode('utf-8') out = s.translate(string.maketrans("",""), string.punctuation) # Create Wordlist/Dictionary word_list = text[i].lower().split(None) for word in word_list: word_freq[word] = word_freq.get(word, 0) + 1 keys = word_freq.keys() numbo = range(1,len(keys)+1) WList = ', '.join(keys) NList = str(numbo).strip('[]') WList = WList.split(", ") NList = NList.split(", ") W2N = dict(zip(WList, NList)) for k in range (0,len(word_list)): word_list[k] = W2N[word_list[k]] for i in range (0,len(word_list)-1): lvl1.append(word_list[i]) lvl2.append(word_list[i+1]) I have used the profiler to find that it seems the greatest CPU time is spent on the zip() function and the join and split parts of the code, I'm looking to see if there is any way I have overlooked that I could potentially clean up the code to make it more optimized, since the greatest lag seems to be in how I am working with the dictionaries and the zip() function. Any help would be appreciated thanks!
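
    On the zip()/join/split hot spot: the word-to-number table can be built straight from the dictionary keys with enumerate, which avoids formatting the number list into a string and splitting both lists back apart (a sketch; numbering starts at 1 as in the original):

        # replaces the keys/numbo/WList/NList/W2N block
        W2N = dict((word, str(n + 1)) for n, word in enumerate(word_freq))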

    Read the article
