Search Results

Search found 4616 results on 185 pages for 'c strings'.

  • Endless problems with a very simple python subprocess.Popen task

    - by Thomas
    I'd like Python to send around half a million integers, each in the range 0-255, to an executable written in C++. The executable will then respond with a few thousand integers, each on its own line. This seems like it should be very simple to do with subprocess, but I've had endless trouble. Right now I'm testing with this code:

        // main()
        u32 num;
        std::cin >> num;
        u8* data = new u8[num];
        for (u32 i = 0; i < num; ++i)
            std::cin >> data[i];
        // test output / spit it back out
        for (u32 i = 0; i < num; ++i)
            std::cout << data[i] << std::endl;
        return 0;

    Building an array of strings ("data") in Python, each like "255\n", and then using:

        output = proc.communicate("".join(data))[0]

    ...doesn't work (it says stdin is closed; maybe it's too much at one time). Using proc.stdin and proc.stdout directly hasn't worked either. This should be so very simple, but I'm getting constant exceptions and/or no output data returned to me. My Popen is currently:

        proc = Popen('aux/test_cpp_program', stdin=PIPE, stdout=PIPE, bufsize=1)

    Advise me before I pull my hair out. ;)
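    A minimal sketch of the Python side (my own illustration, assuming a Python 2-era interpreter and that the executable reads the count first, then whitespace-separated values; on Python 3 the payload would have to be bytes). Note also that if u8 is unsigned char, std::cin >> data[i] extracts a single character rather than a formatted number, so the C++ side would likely need to read into a wider integer type:

        # Sketch: hand communicate() the whole input at once; it writes it,
        # closes stdin, and collects all of stdout before returning.
        from subprocess import Popen, PIPE

        values = list(range(256)) * 2000   # roughly half a million ints in 0-255
        payload = "%d\n%s\n" % (len(values), "\n".join(str(v) for v in values))

        proc = Popen(["aux/test_cpp_program"], stdin=PIPE, stdout=PIPE)
        out, _ = proc.communicate(payload)
        numbers = [int(tok) for tok in out.split()]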

  • SQL INSTR() using CSV. Need exact match rather than partial match

    - by Alastair Pitts
    This is a follow-up issue relating to the answer for http://stackoverflow.com/questions/2445029/sql-placeholder-in-where-in-issue-inserted-strings-fail

    Quick background: we have a SQL query that uses a placeholder value to accept a string, which represents a unique tag/id. Usually this is only a single tag, but we needed the ability to use a CSV string for multiple tags, returning a combined result. In the answer we received from the vendor, they suggested the use of the INSTR function, like so:

        select * from pitotal
        where tag IN (SELECT tag FROM pipoint WHERE INSTR(?, tag) <> 0)
          and time between 'y' and 't'

    This works perfectly well 99% of the time; the issue is when one tag is also a substring of other parts of the CSV string. E.g., the placeholder value is:

        'northdom,southdom,eastdom,westdom'

    and possible tags include north or northdom. What happens, since north is a substring of northdom, is that both tags are returned instead of just northdom, which is what we actually want. I'm not strong on SQL, so I couldn't work out how to make the match exact or split the CSV string; help would be appreciated. Is there a way to split the CSV string or make it look for an exact match?

  • How do I cast <T> to varbinary and still be able to perform a CONVERT on the SQL side? Implicatio

    - by Biff MaGriff
    Hello, I'm writing an application that will allow a user to define custom quizzes and then allow another user to respond to the questions. Each question of the quiz has a corresponding datatype. All the responses to all the questions are stored vertically in my [Response] table. I currently use 2 fields to store the response:

        //Response schema
        ResponseID    int
        QuizPersonID  int
        QuestionID    int
        ChoiceID      int             //maps to Choice table, used for drop down lists
        ChoiceValue   varbinary(MAX)  //used to store a user entered value

    I'm using .NET 3.5, C#, and SQL Server 2008. I'm thinking that I would want to store different datatypes in the same field and then, in my SQL report proc, CONVERT to the proper datatype. I'm thinking this is ideal because I only have to check one field. I'm also thinking it might be more trouble than it is worth. I think my other options are to store the data as strings in the db (yuck), or to have a column for each datatype I might use. So what I would like to know is: how would I format my datatypes in C# so that they can be converted properly in SQL? What is the performance hit for converting in SQL? Should I just make a whole whack of columns, one for each datatype?

  • Doctesting functions that receive and display user input - Python (tearing my hair out)

    - by GlenCrawford
    Howdy! I am currently writing a small application with Python (3.1), and like a good little boy, I am doctesting as I go. However, I've come across a method that I can't seem to doctest. It contains an input(), and because of that, I'm not entirely sure what to place in the "expecting" portion of the doctest. Example code to illustrate my problem follows:

        """
        >>> getFiveNums()
        Howdy. Please enter five numbers, hit <enter> after each one
        Please type in a number:
        Please type in a number:
        Please type in a number:
        Please type in a number:
        Please type in a number:
        """

        import doctest

        numbers = list()  # stores 5 user-entered numbers (strings, for now) in a list

        def getFiveNums():
            print("Howdy. Please enter five numbers, hit <enter> after each one")
            for i in range(5):
                newNum = input("Please type in a number:")
                numbers.append(newNum)
            print("Here are your numbers: ", numbers)

        if __name__ == "__main__":
            doctest.testmod(verbose=True)

    When running the doctests, the program stops executing immediately after printing the "Expecting" section, waits for me to enter five numbers one after another (without prompts), and then continues. I don't know what, if anything, I can place in the Expecting section of my doctest to be able to test a method that receives and then displays user input. So my question (finally) is: is this function doctestable?
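    One common workaround (a sketch of my own, not from the question) is to stub out input() for the duration of the test so the function becomes deterministic; the doctest then only has to match the printed output:

        # Sketch for Python 3, where input() lives in the builtins module.
        import builtins

        def fake_input(prompt=""):
            print(prompt)   # echo the prompt so it appears in the expected output
            return "42"     # canned answer standing in for the user

        real_input = builtins.input
        builtins.input = fake_input
        try:
            getFiveNums()   # the getFiveNums from the question above, now deterministic
        finally:
            builtins.input = real_input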

  • Take most significant 8 bytes of the MD5 hash of a string as a long (in Ruby)

    - by Nate Murray
    Hey Friends, I'm trying to implement a Java "hash" function in Ruby. Here's the Java side:

        import java.nio.charset.Charset;
        import java.security.MessageDigest;

        /**
         * @return most significant 8 bytes of the MD5 hash of the string, as a long
         */
        protected long hash(String value) {
            byte[] md5hash;
            md5hash = md5Digest.digest(value.getBytes(Charset.forName("UTF8")));
            long hash = 0L;
            for (int i = 0; i < 8; i++) {
                hash = hash << 8 | md5hash[i] & 0x00000000000000FFL;
            }
            return hash;
        }

    So far, my best guess in Ruby is:

        # WRONG - doesn't work properly.
        #!/usr/bin/env ruby -wKU
        require 'digest/md5'
        require 'pp'

        md5hash = Digest::MD5.hexdigest("0").unpack("U*")
        pp md5hash
        hash = 0
        0.upto(7) do |i|
          hash = hash << 8 | md5hash[i] & 0x00000000000000FF
        end
        pp hash

    The problem is, this Ruby code doesn't match the Java output. For reference, the above Java code, given these strings, returns the corresponding longs:

        "00038c53790ecedfeb2f83102e9115a522475d73" => -2059313900129568948
        "0"                                        => -3473083983811222033
        "001211e8befc8ac22dd265ecaa77f8c227d0007f" =>  3234260774580957018

    Thoughts:

    - I'm having problems getting the UTF-8 bytes from the Ruby string.
    - In Ruby I'm using hexdigest; I suspect I should be using just digest instead.
    - The Java code is taking the MD5 of the UTF-8 bytes, whereas my Ruby code is taking the bytes of the MD5 (as hex).

    Any suggestions on how to get the exact same output in Ruby?
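    For cross-checking the expected values, here is a sketch in Python (my own illustration, not from the question) of the same computation: hash the UTF-8 bytes, take the raw digest (not the hex form), and reinterpret the first 8 bytes as a big-endian signed 64-bit integer:

        import hashlib
        import struct

        def java_style_hash(value):
            digest = hashlib.md5(value.encode("utf-8")).digest()  # raw bytes, not hexdigest
            return struct.unpack(">q", digest[:8])[0]             # big-endian signed long

        print(java_style_hash("0"))  # -3473083983811222033, matching the Java reference value

    This also points at the likely fix on the Ruby side: use Digest::MD5.digest and unpack the raw bytes with "C*" rather than unpacking the hex string.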

  • IValueConverter from string

    - by Aleksandar Toplek
    I have an Enum that needs to be shown in a ComboBox. I have managed to get the enum values into the combobox using ItemsSource, and I'm trying to localize them. I thought that could be done using a value converter, but as my enum values are already strings, the compiler throws an error that IValueConverter can't take a string as input. I'm not aware of any other way to convert them to another string value. Is there some other way to do that (not the localization but the conversion)? I'm using this markup extension to get the enum values:

        [MarkupExtensionReturnType(typeof (IEnumerable))]
        public class EnumValuesExtension : MarkupExtension
        {
            public EnumValuesExtension() {}

            public EnumValuesExtension(Type enumType)
            {
                this.EnumType = enumType;
            }

            [ConstructorArgument("enumType")]
            public Type EnumType { get; set; }

            public override object ProvideValue(IServiceProvider serviceProvider)
            {
                if (this.EnumType == null)
                    throw new ArgumentException("The enum type is not set");
                return Enum.GetValues(this.EnumType);
            }
        }

    and in Window.xaml:

        <Converters:UserTypesToStringConverter x:Key="userTypeToStringConverter" />
        ....
        <ComboBox ItemsSource="{Helpers:EnumValuesExtension Data:UserTypes}"
                  Margin="2" Grid.Row="0" Grid.Column="1"
                  SelectedIndex="0" TabIndex="1" IsTabStop="False">
            <ComboBox.ItemTemplate>
                <DataTemplate DataType="{x:Type Data:UserTypes}">
                    <Label Content="{Binding Converter=userTypeToStringConverter}" />
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>

    And here is the converter class; it's just a test class, no localization yet:

        public class UserTypesToStringConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return (int) ((Data.UserTypes) value) == 0 ? "Fizicka osoba" : "Pravna osoba";
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return default(Data.UserTypes);
            }
        }

  • Why is my multithreaded Java program not maxing out all my cores on my machine?

    - by James B
    Hi, I have a program that starts up, creates an in-memory data model, and then creates a (command-line-specified) number of threads to run several string-checking algorithms against an input set and that data model. The work is divided amongst the threads along the input set of strings, and then each thread iterates over the same in-memory data model instance (which is never updated again, so there are no synchronization issues). I'm running this on a Windows 2003 64-bit server with 2 quad-core processors, and from looking at Windows Task Manager the cores aren't being maxed out (nor do they look like they are being particularly taxed) when I run with 10 threads. Is this normal behaviour? It appears that 7 threads all complete a similar amount of work in a similar amount of time, so would you recommend running with 7 threads instead? Should I run it with more threads? Although I assume this could be detrimental, as the JVM will do more context switching between the threads. Alternatively, should I run it with fewer threads? Alternatively, what would be the best tool I could use to measure this? Would a profiling tool help me out here - indeed, is one of the several profilers better at detecting bottlenecks (assuming I have one here) than the rest? Note, the server is also running SQL Server 2005 (this may or may not be relevant), but nothing much is happening on that database when I am running my program. Note also, the threads are only doing string matching; they aren't doing any I/O or database work or anything else they may need to wait on. Thanks in advance, -James

  • Extending ASP.NET role providers

    - by Quick Joe Smith
    Because the RoleProvider interface seems to treat roles as nothing more than simple strings, I'm wondering if there is any non-hacky way to apply an optional value for a role on a per-user basis. Our current login management system implements roles as key-value pairs, where the value part is optional and usually used to clarify or limit the permissions granted by a role. For example, a role 'editor' might contain a user 'barry', but for 'barry' it will have an optional value 'raptors', which the system would interpret to mean that Barry can only edit articles filed under the 'raptors' category. I have seen elsewhere a suggestion to simply create additional delimited roles, such as 'editor.raptors' or somesuch. That's not really going to be ideal because it would greatly bloat the number of roles, and I can tell it's going to be a very hard sell to replace our current implementation (which is also far from ideal, but has the advantage of being custom made to work with our user database). I can tell already that the concatenation method mentioned above is going to involve a lot of tedious string-splitting and partial matching. Is there a better way?

  • What Language Feature Can You Just Not Live Without?

    - by akdom
    I always miss Python's built-in docstrings when working in other languages. I know this may seem odd, but it allows me to cut down significantly on excess comments while still providing a clean description of my code and any interfaces therein. What language feature can you just not live without? If someone were building a new language and they asked you what one feature they absolutely must include, what would it be?

    This is getting kind of long, so I figured I'd do my best to summarize. Paraphrased to be language agnostic; if you know of a language which uses something mentioned, please add it in the parentheses to the right of the feature. And if you have a better format for this list, by all means try it out (if it doesn't seem to work, I'll just roll back).

    - Regular Expressions ~ torial (Perl)
    - Garbage Collection ~ SaaS Developer (Python, Perl, Ruby, Java, .NET)
    - Anonymous Functions ~ Vinko Vrsalovic (Lisp, Python)
    - Arithmetic Operators ~ Jeremy Ross (Python, Perl, Ruby, Java, C#, Visual Basic, C, C++, Pascal, Smalltalk, etc.)
    - Exception Handling ~ torial (Python, Java, .NET)
    - Pass By Reference ~ Chris (Python)
    - Unified String Format ~ WalloWizard (C#)
    - Generics ~ torial (Python, Java, C#)
    - Integrated Query Equivalent to LINQ ~ Vyrotek (C#)
    - Namespacing ~ Garry Shutler ()
    - Short Circuit Logic ~ Adam Bellaire ()

  • How to best show progress info when using ADO.NET?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations - specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or what its progress is. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, and then 100%. I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate for each DataRow its estimated size based on the datatype of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine, before updating, an estimated total transfer size, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or a waiting mouse cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser file download progress bar), could I at least know when each DataRow/DataTable finishes, or something? How do you best show this kind of progress info using ADO.NET?

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, via the following method, to serialize a dictionary describing all the data points to a file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the exact same object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speed-ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
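    For what it's worth, a sketch of the kind of head-to-head timing involved (my own illustration; the file names and data shape are invented, and this assumes a Python 2-era interpreter where cPickle and simplejson were the usual choices). The binary pickle protocol alone often accounts for much of the gap, since the default protocol 0 is text-based:

        import time
        import cPickle as pickle   # on Python 3 this would just be `pickle`
        import simplejson

        data = dict((str(i), [i, float(i), "x" * 10]) for i in range(54000))

        start = time.time()
        with open("points.pickle", "wb") as f:
            pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)  # binary protocol: smaller, faster
        print("pickle dump: %.2fs" % (time.time() - start))

        start = time.time()
        with open("points.json", "w") as f:
            simplejson.dump(data, f)                       # human-readable but slower
        print("json dump: %.2fs" % (time.time() - start))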

  • How is the 'is' keyword implemented in Python?

    - by Srikanth
    ... the is keyword that can be used for equality in strings.

        >>> s = 'str'
        >>> s is 'str'
        True
        >>> s is 'st'
        False

    I tried both __is__() and __eq__(), but they didn't work:

        >>> class MyString:
        ...     def __init__(self):
        ...         self.s = 'string'
        ...     def __is__(self, s):
        ...         return self.s == s
        ...
        >>> m = MyString()
        >>> m is 'ss'
        False
        >>> m is 'string'  # <--- Expected to work
        False

        >>> class MyString:
        ...     def __init__(self):
        ...         self.s = 'string'
        ...     def __eq__(self, s):
        ...         return self.s == s
        ...
        >>> m = MyString()
        >>> m is 'ss'
        False
        >>> m is 'string'  # <--- Expected to work, but again failed
        False

    Thanks for your help!
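    A short aside (my own illustration, not from the question): is compares object identity and cannot be overloaded - the True in the first transcript is a side effect of small-string interning, not an equality test. Equality goes through __eq__, which is what == dispatches to:

        class MyString(object):
            def __init__(self):
                self.s = 'string'

            def __eq__(self, other):
                return self.s == other   # == compares against the wrapped string

        m = MyString()
        print(m == 'string')   # True: __eq__ is consulted
        print(m is 'string')   # False: different objects; identity can't be customized
        print(m is m)          # True: the same object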

  • Dynamic Linq Property Converting to Sql

    - by Matthew Hood
    I am trying to understand dynamic LINQ and expression trees. Very basically, I am trying to do an equals, supplying the column and value as strings. Here is what I have so far:

        private IQueryable<tblTest> filterTest(string column, string value)
        {
            TestDataContext db = new TestDataContext();

            // The IQueryable data to query.
            IQueryable<tblTest> queryableData = db.tblTests.AsQueryable();

            // Compose the expression tree that represents the parameter to the predicate.
            ParameterExpression pe = Expression.Parameter(typeof(tblTest), "item");
            Expression left = Expression.Property(pe, column);
            Expression right = Expression.Constant(value);
            Expression e1 = Expression.Equal(left, right);

            MethodCallExpression whereCallExpression = Expression.Call(
                typeof(Queryable),
                "Where",
                new Type[] { queryableData.ElementType },
                queryableData.Expression,
                Expression.Lambda<Func<tblTest, bool>>(e1, new ParameterExpression[] { pe }));

            // Create an executable query from the expression tree.
            IQueryable<tblTest> results = queryableData.Provider.CreateQuery<tblTest>(whereCallExpression);
            return results;
        }

    That works fine for columns in the DB, but fails for properties in my code, e.g.:

        public partial class tblTest
        {
            public string name_test
            {
                get { return name; }
            }
        }

    giving an error that it cannot be converted into SQL. I have tried rewriting the property as an Expression<Func> but with no luck. How can I convert simple properties so they can be used with LINQ in this dynamic way? Many thanks

  • Problem with .Contains

    - by Rene
    Hello there, I have a little problem which I can't solve. I want to use an SQL IN statement in LINQ. I've read in this forum and in others that I have to use .Contains (with reverse-thinking notation :-)). As input I have a list of Guids. I first copied them into an array and then did something like this:

        datatoget = (from p in objectContext.MyDataSet
                     where ArrayToSearch.Contains(p.Subtable.Id.ToString())
                     select p).ToList();

    datatoget is the result in which all records matching Subtable.Id (which is a Guid) should be stored. Subtable is a detail table of MyData, and the Id is a Guid type. I've tried several things (converting the Guid to a string and then using .Contains, etc.), but I always get an exception which says: 'LINQ to Entities' doesn't recognize the method 'Boolean Contains(System.Guid)' and is not able to translate this method into a memory expression. (Something like that, because I'm using the German version of VS2008.) I am using L2E with .NET 3.5 and am programming in C# with VS 2008. I've read several examples, but it doesn't work. Is it perhaps because of using Guids instead of strings? I've also tried to write my own compare function, but I don't know how to integrate it so that .NET calls my function for comparing.

  • AppEngine JDO-QL : Problem with multiple AND and OR in query

    - by KlasE
    Hi, I'm working with App Engine (Java/JDO) and am trying to do some querying with lists. I have the following class:

        @PersistenceCapable(identityType = IdentityType.APPLICATION, detachable="true")
        public class MyEntity {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            Key key;

            @Persistent
            List<String> tags = new ArrayList<String>();

            @Persistent
            String otherVar;
        }

    The following JDO-QL works for me:

        ( tags.contains('a') || tags.contains('b')
          || tags.contains('c') || tags.contains('d') )
        && otherVar > 'a' && otherVar < 'z'

    This seems to return all results where tags has one or more strings with one or more of the given values, together with the inequality search on otherVar. But the following does not work:

        (tags.contains('a') || tags.contains('_a'))
        && (tags.contains('b') || tags.contains('_b'))
        && otherVar > 'a' && otherVar < 'z'

    In this case I want all hits where there is at least one a (either a or _a) and one b (either b or _b), together with the inequality search as before. But the problem is that I also get back the results where there is an a but no b, which is not what I want. Maybe I have missed something obvious, or made a coding error, or possibly there is a restriction on how you can write these queries in App Engine, so any tips or help would be very welcome. Regards, Klas

  • Reading UTF-8 XML and writing it to a file with Python

    - by Harri
    I'm trying to parse a UTF-8 XML file and save some parts of it to another file. The problem is that this is my first Python script ever and I'm totally confused about the character encoding problems I'm finding. My script fails immediately when it tries to write a non-ASCII character to a file, but it can print it to the command prompt (at least at some level). Here's the XML (the parts that matter, at least; it's a *.resx file which contains UI strings):

        <?xml version="1.0" encoding="utf-8"?>
        <root>
          <resheader name="foo">
            <value>bar</value>
          </resheader>
          <data name="lorem" xml:space="preserve">
            <value>ipsum öä</value>
          </data>
        </root>

    And here's my Python script:

        from xml.dom.minidom import parse

        names = []
        values = []

        def getStrings(path):
            dom = parse(path)
            data = dom.getElementsByTagName("data")
            for i in range(len(data)):
                name = data[i].getAttribute("name")
                names.append(name)
                value = data[i].getElementsByTagName("value")
                values.append(value[0].firstChild.nodeValue.encode("utf-8"))

        def writeToFile():
            with open("uiStrings-fi.py", "w") as f:
                for i in range(len(names)):
                    line = names[i] + '="' + values[i] + '"'  # varName='varValue'
                    f.write(line)
                    f.write("\n")

        getStrings("ResourceFile.fi-FI.resx")
        writeToFile()

    And here's the traceback:

        Traceback (most recent call last):
          File "GenerateLanguageFiles.py", line 24, in <module>
            writeToFile()
          File "GenerateLanguageFiles.py", line 19, in writeToFile
            line = names[i] + '="' + values[i] + '"'  # varName='varValue'
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128)

    How should I fix my script so it would read and write UTF-8 characters properly? The files I'm trying to generate would be used in test automation with Robot Framework.
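    A sketch of one way around this (an assumption on my part about the goal - a UTF-8 output file): keep the values as unicode objects instead of calling .encode("utf-8") in getStrings, and open the output file with an explicit encoding so the encoding happens once, at the boundary:

        # Python 2-era sketch; io.open returns a file object that accepts unicode
        # and encodes it on the way out.
        import io

        def writeToFile(names, values, path="uiStrings-fi.py"):
            with io.open(path, "w", encoding="utf-8") as f:
                for name, value in zip(names, values):
                    # name and value are unicode here; no manual .encode() anywhere
                    f.write(u'%s="%s"\n' % (name, value))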

  • A function where small changes in input always result in large changes in output

    - by snowlord
    I would like an algorithm for a function that takes n integers and returns one integer. For small changes in the input, the resulting integer should vary greatly. Even though I've taken a number of courses in math, I have not used that knowledge very much and now I need some help... An important property of this function should be that if it is used with coordinate pairs as input and the result is plotted (as a grayscale value, for example) on an image, any repeating patterns should only be visible if the image is very big. I have experimented with various algorithms for pseudo-random numbers with little success, and finally it struck me that md5 almost meets my criteria, except that it is not for numbers (at least not from what I know). That resulted in something like this Python prototype (for n = 2; it could easily be changed to take a list of integers, of course):

        import hashlib

        def uniqnum(x, y):
            return int(hashlib.md5(str(x) + ',' + str(y)).hexdigest()[-6:], 16)

    But obviously it feels wrong to go through strings when both input and output are integers. What would be a good replacement for this implementation (in pseudo-code, Python, or whatever language)?
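    One way to stay in integer land (a sketch; the fixed-width packing layout is my choice, not the poster's): pack the inputs as binary and hash those bytes directly, then pull an integer back out of the digest:

        import hashlib
        import struct

        def uniqnum(*nums):
            # Pack each integer as a big-endian signed 64-bit value, then hash the bytes.
            packed = struct.pack(">%dq" % len(nums), *nums)
            digest = hashlib.md5(packed).digest()
            # Reinterpret the first 8 digest bytes as one unsigned 64-bit integer.
            return struct.unpack(">Q", digest[:8])[0]

        print(uniqnum(3, 4))
        print(uniqnum(3, 5))   # a tiny input change yields a wildly different output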

  • Proper use of the IDisposable interface

    - by cwick
    I know from reading the MSDN documentation that the "primary" use of the IDisposable interface is to clean up unmanaged resources: http://msdn.microsoft.com/en-us/library/system.idisposable.aspx. To me, "unmanaged" means things like database connections, sockets, window handles, etc. But I've seen code where the Dispose method is implemented to free managed resources, which seems redundant to me, since the garbage collector should take care of that for you. For example:

        public class MyCollection : IDisposable
        {
            private List<String> _theList = new List<String>();
            private Dictionary<String, Point> _theDict = new Dictionary<String, Point>();

            // Die, you gravy sucking pig dog!
            public void Dispose()
            {
                _theList.Clear();
                _theDict.Clear();
                _theList = null;
                _theDict = null;
            }
        }

    My question is, does this make the garbage collector free memory used by MyCollection any faster than it normally would?

    Edit: So far people have posted some good examples of using IDisposable to clean up unmanaged resources such as database connections and bitmaps. But suppose that _theList in the above code contained a million strings, and you wanted to free that memory now, rather than waiting for the garbage collector. Would the above code accomplish that?

  • Why use SQL database?

    - by martinthenext
    I'm not quite sure stackoverflow is a place for such a general question, but let's give it a try. Being exposed to the need of storing application data somewhere, I've always used MySQL or sqlite, just because it's always done like that. As it seems, the whole world is using these databases, including most software products, frameworks, etc. It is rather hard for a beginning developer like me to ask the question - why? Ok, say we have some object-oriented logic in our application, and the objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm ok with that - to put it simply, our database rows sometimes will need to have references to other tables' rows. But why use the SQL language for interaction with such a database? An SQL query is a text message. I can understand this is cool for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment? If you had to write a data storage from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside the client application and passed to the database. And it would surely name tables and columns by id numbers, not ASCII strings. In the case of changes in the table structure, those byte queries could be recompiled according to the new db schema, stored in XML or something like that. What are the problems with my idea? Is there any reason for me not to write it myself and to use an SQL database instead?

  • Output iterator's value_type

    - by wilhelmtell
    The STL commonly defines an output iterator like so:

        template<class Cont>
        class insert_iterator
            : public iterator<output_iterator_tag,void,void,void,void> {
            // ...

    Why do output iterators define value_type as void? It would be useful for an algorithm to know what type of value it is supposed to output. For example, a function that translates a URL query "key1=value1&key2=value2&key3=value3" into any container that holds key-value string elements:

        template<typename Ch,typename Tr,typename Out>
        void parse(const std::basic_string<Ch,Tr>& str, Out result)
        {
            std::basic_string<Ch,Tr> key, value;
            // loop over str, parse into p ...
            *result = typename iterator_traits<Out>::value_type(key, value);
        }

    The SGI reference page for value_type hints this is because it's not possible to dereference an output iterator. But that's not the only use of value_type: I might want to instantiate one in order to assign it to the iterator.

  • Optimizing drawing on UITableViewCell

    - by Brian
    I am drawing content to a UITableViewCell and it is working well, but I'm trying to understand if there is a better way of doing this. Each cell has the following components:

    - Thumbnail on the left side - could come from a server, so it is loaded async
    - Title string - variable length, so each cell could be a different height
    - Timestamp string
    - Gradient background - the gradient goes from the top of the cell to the bottom and is semi-transparent, so that background colors shine through with a gloss

    It currently works well. The drawing occurs as follows:

    1. UITableViewController inits/reuses a cell, sets the needed data, and calls [cell setNeedsDisplay]
    2. The cell has a CALayer for the thumbnail - thumbnailLayer
    3. In the cell's drawRect it draws the gradient background and the two strings
    4. The cell's drawRect then calls setIcon, which gets the thumbnail and sets the image as the contents of the thumbnailLayer. If the image is not found locally, it sets a loading image as the contents of the thumbnailLayer and asynchronously gets the thumbnail. Once the thumbnail is received, it is reset by calling setIcon again, which resets the thumbnailLayer.contents

    This all currently works, but using Instruments I see that the thumbnail is compositing with the gradient. I have tried the following to fix this:

    - Setting the cell's backgroundView to a view whose drawRect would draw the gradient, so that the cell's drawRect could draw the thumbnail, and using setNeedsDisplayInRect would allow me to only redraw the thumbnail after it loaded --- but this resulted in the backgroundView's drawing (gradient) covering the cell's drawing (text).
    - I would just draw the thumbnail in the cell's drawRect, but when setNeedsDisplay is called, drawRect will just overlap another image and the loading image may show through. I would clear the rect, but then I would have to redraw the gradient.
    - I would try to draw the gradient in a CAGradientLayer and store a reference to it, so I can quickly redraw it, but I figured I'd have to redraw the gradient if the cell's height changes.

    Any ideas? I'm sure I'm missing something, so any help would be great.

  • Script working with mysql and php into a textarea and back

    - by Tribalcomm
    I am trying to write a custom script that will keep a list of strings in a textarea. Each line of the textarea will be a row from a table. The problem I have is how to work the script to allow for adding, updating, or deleting rows based on a submit. So, for instance, I currently have 3 rows in the database:

        john
        sue
        mark

    I want to be able to delete sue and add richard, and it will delete the row with sue and insert a row for richard. My code so far is as follows. To query the db and list it in the textarea:

        $basearray = mysql_query("SELECT name FROM mytable ORDER BY name");
        ?>
        <textarea name="names" cols=6 rows=12>
        <?php
        foreach($basearray as $base){
            echo $base->name."\n";
        }
        ?>
        </textarea>

    After the submit, I have:

        <?php
        $namelist = $_REQUEST[names];
        $newarray = explode("\n", $namelist);
        foreach($newarray as $name) {
            if (!in_array($name, $basearray)) {
                mysql_query("DELETE FROM mytable WHERE word='$name'");
            } elseif (in_array($name, $basearray)) {
                ;
            } else {
                mysql_query("INSERT INTO mytable (name) VALUES ('$name')");
            }
        }
        ?>

    Please tell me what I am doing wrong. I am not getting any functions to work when I edit the contents of the textarea. Thanks!

  • how to know location of return address on stack c/c++

    - by Dr Deo
    I have been reading about a function that can overwrite its return address:

        void foo(const char* input)
        {
            char buf[10];

            //What? No extra arguments supplied to printf?
            //It's a cheap trick to view the stack 8-)
            //We'll see this trick again when we look at format strings.
            printf("My stack looks like:\n%p\n%p\n%p\n%p\n%p\n%p\n\n"); //%p, i.e. expect pointers

            //Pass the user input straight to secure code public enemy #1.
            strcpy(buf, input);
            printf("%s\n", buf);

            printf("Now the stack looks like:\n%p\n%p\n%p\n%p\n%p\n%p\n\n");
        }

    It was suggested that this is how the stack would look:

        Address of foo = 00401000
        My stack looks like:
        00000000
        00000000
        7FFDF000
        0012FF80
        0040108A  <-- We want to overwrite the return address for foo.
        00410EDE

    Questions:

    - Why did the author arbitrarily choose the second-to-last value as the return address of foo()?
    - Are values added to the stack from the bottom or from the top?
    - Apart from the function return address, what are the other values I apparently see on the stack? I.e., why isn't it filled with zeros?

    Thanks.

  • Using LINQ to SQL and chained Replace

    - by White Dragon
    I need to replace multiple strings with others in a query:

        from p in dx.Table
        where p.Field.Replace("A", "a").Replace("B", "b").ToLower() == SomeVar
        select p

    which provides a nice single SQL statement with the relevant REPLACE() SQL commands. All good :) I need to do this in a few queries around the application, so I'm looking for some help in this regard - something that will work as above, as a single SQL hit/command on the server. It seems from looking around that I can't use Regex, as there is no SQL equivalent. Being a LINQ newbie, is there a nice way for me to do this? E.g., is it possible to get it as an IQueryable "var result", say, and pass that to a function to add the needed .Replace()'s and pass it back? Can I get a quick example of how, if so?

    EDIT: This seems to work! Does it look like it would be a problem?

        var data = from p in dx.Videos
                   select p;
        data = AddReplacements(data, checkMediaItem);
        theitem = data.FirstOrDefault();
        ...

        public IQueryable<Video> AddReplacements(IQueryable<Video> DataSet, string checkMediaItem)
        {
            return DataSet.Where(p =>
                p.Title.Replace(" ", "-").Replace("&", "-").Replace("?", "-") == checkMediaItem);
        }

  • Approximate string matching with a letter confusion matrix?

    - by zigglenaut
    I'm trying to model a phonetic recognizer that has to isolate instances of words (strings of phones) out of a long stream of phones that doesn't have gaps between each word. The stream of phones may have been poorly recognized, with letter substitutions/insertions/deletions, so I will have to do approximate string matching. However, I want the matching to be phonetically-motivated, e.g. "m" and "n" are phonetically similar, so the substitution cost of "m" for "n" should be small, compared to say, "m" and "k". So, if I'm searching for [mein] "main", it would match the letter sequence [meim] "maim" with, say, cost 0.1, whereas it would match the letter sequence [meik] "make" with, say, cost 0.7. Similarly, there are differing costs for inserting or deleting each letter. I can supply a confusion matrix that, for each letter pair (x,y), gives the cost of substituting x with y, where x and y are any letter or the empty string. I know that there are tools available that do approximate matching such as agrep, but as far as I can tell, they do not take a confusion matrix as input. That is, the cost of any insertion/substitution/deletion = 1. My question is, are there any open-source tools already available that can do approximate matching with confusion matrices, and if not, what is a good algorithm that I can implement to accomplish this?
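    If no existing tool fits, here is a sketch of the standard dynamic-programming edit distance with the unit costs swapped for confusion-matrix lookups (the cost table below is a toy stand-in, not real phonetic data):

        # costs[(x, y)]: cost of substituting x with y; ('', y) is insertion, (x, '') deletion.
        costs = {('m', 'n'): 0.1, ('n', 'm'): 0.1, ('m', 'k'): 0.7, ('k', 'm'): 0.7}

        def cost(x, y):
            if x == y:
                return 0.0
            return costs.get((x, y), 1.0)   # default cost for pairs not in the matrix

        def weighted_edit_distance(a, b):
            n, m = len(a), len(b)
            d = [[0.0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                d[i][0] = d[i - 1][0] + cost(a[i - 1], '')               # deletions
            for j in range(1, m + 1):
                d[0][j] = d[0][j - 1] + cost('', b[j - 1])               # insertions
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d[i][j] = min(d[i - 1][j] + cost(a[i - 1], ''),      # delete
                                  d[i][j - 1] + cost('', b[j - 1]),      # insert
                                  d[i - 1][j - 1] + cost(a[i - 1], b[j - 1]))  # substitute
            return d[n][m]

        print(weighted_edit_distance("mein", "meim"))   # 0.1 with the toy costs above

    For the word-spotting part (isolating words in a gapless stream), the usual variant is to zero out the first row so a match may begin anywhere in the stream, then scan the last row for low-cost end points.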
