Search Results

Search found 3228 results on 130 pages for 'vb6 conversion'.


  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert to a BufferedImage. In abbreviated code for the client: public String writeAndReadSocket(String request) { // Write text to the socket BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream())); bufferedWriter.write(request); bufferedWriter.flush(); // Read text from the socket BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream())); // Read the prefixed size int size = Integer.parseInt(bufferedReader.readLine()); // Get that many bytes from the stream char[] buf = new char[size]; bufferedReader.read(buf, 0, size); return new String(buf); } public BufferedImage stringToBufferedImage(String imageBytes) { return ImageIO.read(new ByteArrayInputStream(s.getBytes())); } and the server: # Twisted server code here # The analog of the following method is called with the proper client # request and the result is written to the socket. def worker_thread(): img = draw_function() buf = StringIO.StringIO() img.save(buf, format="PNG") img_string = buf.getvalue() return "%i\r%s" % (sys.getsizeof(img_string), img_string) This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case. Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the bytestream directly produces the same errors. I have a version of this working where the client socket isn't persistent, ie. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.

    Read the article

  • Why is a minus sign prepended to my BigInteger?

    - by kyrogue
    package ewa; import java.io.UnsupportedEncodingException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.logging.Level; import java.util.logging.Logger; import java.math.BigInteger; /** * * @author Lotus */ public class md5Hash { public static void main(String[] args) throws NoSuchAlgorithmException { String test = "abc"; MessageDigest md = MessageDigest.getInstance("MD5"); try { md.update(test.getBytes("UTF-8")); byte[] result = md.digest(); BigInteger bi = new BigInteger(result); String hex = bi.toString(16); System.out.println("Pringting result"); System.out.println(hex); } catch (UnsupportedEncodingException ex) { Logger.getLogger(md5Hash.class.getName()).log(Level.SEVERE, null, ex); } } } I am testing the conversion of bytes to hex, and the end result has a minus sign at the beginning of the string. Why does this happen? I have read the docs, which say a minus sign will be added, but I do not understand why. Will the minus sign affect the hash result? I am going to use this to hash passwords stored in my database.

    Read the article

  • How to send an image to a remote server using web services in Android (currently only saved to a byte array)

    - by satyamurthy
    I want to get an image from the SD card and store it on a remote server. I read the image from the SD card and convert it to a byte array using a Bitmap, but when I observe the byte array it shows some different values and does not match the byte array produced by the .NET conversion on the server. Can you please help, or suggest a solution if you have one? The following is the code I am using: FileInputStream fin = new FileInputStream(new File("/sdcard/pictures/1.jpg")); BufferedInputStream bis = new BufferedInputStream(fin,3000); byte[] data = new byte[bis.available()]; bis.read(data, 0, data.length); byte[] data1=new byte[data.length]; for (int i = 0; i < data.length; i++) { System.out.print(data[i]); data1[i]=data[i]; } System.out.println("5..................."+data1); Bitmap bitmap = BitmapFactory.decodeByteArray(data1,0,data1.length); System.out.println("6..................."+data1.length); Log.v("hgfjohfjghjdfhgj",""+bitmap); if(bitmap!=null) image.setImageBitmap(bitmap); else Log.e("Bitmap "," Not Created");

    Read the article

  • Overloading generic implicit conversions

    - by raichoo
    Hi I'm having a little scala (version 2.8.0RC1) problem with implicit conversions. Whenever importing more than one implicit conversion the first one gets shadowed. Here is the code where the problem shows up: // containers class Maybe[T] case class Nothing[T]() extends Maybe[T] case class Just[T](value: T) extends Maybe[T] case class Value[T](value: T) trait Monad[C[_]] { def >>=[A, B](a: C[A], f: A => C[B]): C[B] def pure[A](a: A): C[A] } // implicit converter trait Extender[C[_]] { class Wrapper[A](c: C[A]) { def >>=[B](f: A => C[B])(implicit m: Monad[C]): C[B] = { m >>= (c, f) } def >>[B](b: C[B])(implicit m: Monad[C]): C[B] = { m >>= (c, { (x: A) => b } ) } } implicit def extendToMonad[A](c: C[A]) = new Wrapper[A](c) } // instance maybe object maybemonad extends Extender[Maybe] { implicit object MaybeMonad extends Monad[Maybe] { override def >>=[A, B](a: Maybe[A], f: A => Maybe[B]): Maybe[B] = { a match { case Just(x) => f(x) case Nothing() => Nothing() } } override def pure[A](a: A): Maybe[A] = Just(a) } } // instance value object identitymonad extends Extender[Value] { implicit object IdentityMonad extends Monad[Value] { override def >>=[A, B](a: Value[A], f: A => Value[B]): Value[B] = { a match { case Value(x) => f(x) } } override def pure[A](a: A): Value[A] = Value(a) } } import maybemonad._ //import identitymonad._ object Main { def main(args: Array[String]): Unit = { println(Just(1) >>= { (x: Int) => MaybeMonad.pure(x) }) } } When uncommenting the second import statement everything goes wrong since the first "extendToMonad" is shadowed. However, this one works: object Main { implicit def foo(a: Int) = new { def foobar(): Unit = { println("Foobar") } } implicit def foo(a: String) = new { def foobar(): Unit = { println(a) } } def main(args: Array[String]): Unit = { 1 foobar() "bla" foobar() } } So, where is the catch? What am I missing? Regards, raichoo

    Read the article

  • What -W values in gcc correspond to which actual warnings?

    - by SebastianK
    Preamble: I know that disabling warnings is not a good idea. Anyway, I have a technical question about this. Using GCC 3.3.6, I get the following warning: choosing ... over ... because conversion sequence for the argument is better. Now, I want to disable this warning as described in the gcc warning options by providing an argument like -Wno-theNameOfTheWarning, but I don't know the name of the warning. How can I find out the name of the option that disables this warning? I am not able to fix the warning, because it occurs in a header of an external library that cannot be changed. It is in boost serialization (rx(s, count)): template<class Archive, class Container, class InputFunction, class R> inline void load_collection(Archive & ar, Container &s) { s.clear(); // retrieve number of elements collection_size_type count; unsigned int item_version; ar >> BOOST_SERIALIZATION_NVP(count); if(3 < ar.get_library_version()) ar >> BOOST_SERIALIZATION_NVP(item_version); else item_version = 0; R rx; rx(s, count); std::size_t c = count; InputFunction ifunc; while(c-- > 0){ ifunc(ar, s, item_version); } } I have already tried #pragma GCC system_header but this had no effect. Using -isystem instead of -I also does not work. The general question remains: I know the text of the warning message, but I do not know how it correlates with the gcc warning options.

    Read the article

  • What's the best way of accessing a DRb object (e.g. Ruby Queue) from Scala (and Java)?

    - by Tom Morris
    I have built a variety of little scripts using Ruby's very simple Queue class, and share the Queue between Ruby and JRuby processes using DRb. It would be nice to be able to access these from Scala (and maybe Java) using JRuby. I've put together something using Scala and the JSR-223 interface to access jruby-complete.jar. import javax.script._ class DRbQueue(host: String, port: Int) { private var engine = DRbQueue.factory.getEngineByName("jruby") private var invoker = engine.asInstanceOf[Invocable] engine.eval("require \"drb\" ") private var queue = engine.eval("DRbObject.new(nil, \"druby://" + host + ":" + port.toString + "\")") def isEmpty(): Boolean = invoker.invokeMethod(this.queue, "empty?").asInstanceOf[Boolean] def size(): Long = invoker.invokeMethod(this.queue, "length").asInstanceOf[Long] def threadsWaiting: Long = invoker.invokeMethod(this.queue, "num_waiting").asInstanceOf[Long] def offer(obj: Any) = invoker.invokeMethod(this.queue, "push", obj.asInstanceOf[java.lang.Object]) def poll(): Any = invoker.invokeMethod(this.queue, "pop") def clear(): Unit = { invoker.invokeMethod(this.queue, "clear") } } object DRbQueue { var factory = new ScriptEngineManager() } (It conforms roughly to the java.util.Queue interface, but I haven't declared the interface because it doesn't implement the element and peek methods, because the Ruby class doesn't offer them.) The problem with this is the type conversion. JRuby is fine with Scala's Strings - because they are Java strings. But if I give it a Scala Int or Long, or one of the other Scala types (List, Set, RichString, Array, Symbol) or some other custom type, the conversion doesn't work. This seems unnecessarily hacky: surely there has got to be a better way of doing RMI/DRb interop without having to use the JSR-223 API. I could make it so that the offer method serializes the object to, say, a JSON string and takes a structural type of only objects that have a toJson method. I could then write a Ruby wrapper class (or just monkeypatch Queue) that would parse the JSON. Is there any point in carrying on with trying to access DRb from Java/Scala? Might it just be easier to install a real message queue? (If so, any suggestions for a lightweight JVM-based MQ?)

    Read the article

  • How to convert string with double high/wide characters to normal string [VC++6]

    - by Shaitan00
    My application typically receives a string in the following format: " Item $5.69 " Some constants I always expect: - the LENGTH always 20 characters - the start index of the text always [5] - and most importantly the index of the DECIMAL for the price always [14] In order to identify this string correctly I validate all the expected constants listed above .... Some of my clients have now started sending the string with Double-High / Double-Wide values (pairs of characters which represent a single readable character) similar to the following: " Item $\x80\x90.\x81\x91\x82\x92 " For testing I simply scan the string character-by-character, compare char[i] and char[i+1] and replace these pairs with their corresponding single character when a match is found (works fine) as follows: for (int i=0; i < sData.length(); i++) { char ch = sData[i] & 0xFF; char ch2 = sData[i+1] & 0xFF; if (ch == '\x80' && ch2 == '\x90') zData.replace("\x80\x90", "0"); else if (ch == '\x81' && ch2 == '\x91') zData.replace("\x81\x91", "1"); else if (ch == '\x82' && ch2 == '\x92') zData.replace("\x82\x92", "2"); ... ... ... } But the result is something like this: " Item $5.69 " Notice how this no longer matches my expectation: the length is now 17 (instead of 20) due to the 3 conversions, and the decimal is now at index 13 (instead of 14) due to the conversion of the "5" before the decimal point. Ideally I would like to convert the string to a normal readable format while keeping the constants (length, index of text, index of decimal) in the same place (so the rest of my application is re-usable) ... or any other suggestion (I'm pretty much stuck with this)... Is there a STANDARD way of dealing with this type of character? Any help would be greatly appreciated, I've been stuck on this for a while now ... Thanks,
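
    As a hedged illustration, here is a minimal C++ sketch of a table-driven version of the pair-collapsing loop; only the 0/1/2 mappings come from the question, and the remaining digit pairs are assumptions added for completeness:

        #include <cstddef>
        #include <string>

        // Collapse known double-high/double-wide pairs into single readable characters.
        // Pair values beyond '2' are assumptions extrapolated from the question.
        std::string collapseWidePairs(const std::string& sData)
        {
            static const struct { char hi; char lo; char single; } table[] = {
                { '\x80', '\x90', '0' }, { '\x81', '\x91', '1' }, { '\x82', '\x92', '2' },
                { '\x83', '\x93', '3' }, { '\x84', '\x94', '4' }, { '\x85', '\x95', '5' },
                { '\x86', '\x96', '6' }, { '\x87', '\x97', '7' }, { '\x88', '\x98', '8' },
                { '\x89', '\x99', '9' }
            };
            std::string out;
            out.reserve(sData.size());
            for (std::string::size_type i = 0; i < sData.size(); ) {
                bool matched = false;
                if (i + 1 < sData.size()) {
                    for (std::size_t t = 0; t < sizeof(table) / sizeof(table[0]); ++t) {
                        if (sData[i] == table[t].hi && sData[i + 1] == table[t].lo) {
                            out += table[t].single;   // one readable char replaces the pair
                            i += 2;
                            matched = true;
                            break;
                        }
                    }
                }
                if (!matched) {
                    out += sData[i];   // ordinary character, copied as-is
                    ++i;
                }
            }
            return out;
        }

    The collapsed string is necessarily shorter, so one option is to re-pad the price field with spaces after conversion so that the decimal point lands back at index 14, leaving the existing fixed-position validation untouched.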

    Read the article

  • Short names versus long names in Windows

    - by normski
    I have some code which gets the short name from a file path using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext". However, following conversion to short and then back to long, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT". The short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS2008). Does anybody have any idea why this might be happening? DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0); if(pathLengthNeeded != 0) { WCHAR* shortPath = new WCHAR[pathLengthNeeded]; DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded); if(newPathNameLength != 0) { UI_STRING unicodePath(shortPath); std::string asciiPath = StringFromUserString(unicodePath); pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(),NULL, 0); if(pathLengthNeeded != 0) {// convert it back to a long path if possible. For goodness sake can't we use Unicode throughout? char* longPath = new char[pathLengthNeeded]; DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded); if(newPathNameLength != 0) { std::string longPathString(longPath, newPathNameLength); asciiPath = longPathString; } delete [] longPath; } SetFullPathName(asciiPath); } delete [] shortPath; }
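
    For comparison, here is a minimal sketch that performs both conversions with the wide-character APIs (GetShortPathNameW / GetLongPathNameW), avoiding the intermediate wide-to-ANSI conversion; the helper name and the bail-out error handling are illustrative only:

        #include <windows.h>
        #include <string>

        // Round-trip a path through its short form using only wide-character calls.
        std::wstring roundTripPath(const std::wstring& original)
        {
            DWORD len = ::GetShortPathNameW(original.c_str(), NULL, 0);
            if (len == 0) return original;
            std::wstring shortPath(len, L'\0');
            if (::GetShortPathNameW(original.c_str(), &shortPath[0], len) == 0) return original;

            len = ::GetLongPathNameW(shortPath.c_str(), NULL, 0);
            if (len == 0) return original;
            std::wstring longPath(len, L'\0');
            DWORD written = ::GetLongPathNameW(shortPath.c_str(), &longPath[0], len);
            if (written == 0) return original;
            longPath.resize(written);   // on success, 'written' excludes the terminating null
            return longPath;
        }

    If the long name still resolves to the wrong folder this way, the ANSI conversion can be ruled out and the short-name mapping itself (PROGRA~2 on that particular volume) becomes the suspect.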

    Read the article

  • jQuery/JavaScript Date form validation

    - by Victor Jackson
    I am using the jQuery date picker calendar in a form. Once submitted, the form passes params along via the URL to a third-party site. Everything works fine, except for one thing. If the value inserted into the date field by the datepicker calendar is subsequently deleted, or if the default date that is in the form on page load is deleted, and the form is submitted, I get the following error: "Conversion from string "" to type 'Date' is not valid." The solution to my problem is really simple: I want to validate the text field where the date is submitted and send a default date (the current date) if the field is empty for any reason. The problem is that I am terrible at JavaScript and cannot figure out how to do this. Here is the form code for my date field. [var('default_date' = date)] <input type="text" id="datepicker" name="txtdate" value="[$default_date]" onfocus="if (this.value == '[$default_date]') this.value = '';" onchange="form.BeginDate.value = this.value; form.EndDate.value = this.value;" /> <input type="hidden" name="BeginDate" value="[$default_date]"/> <input type="hidden" name="EndDate" value="[$default_date]"/>

    Read the article

  • UILabel + IRR, KRW and KHR currencies with wrong symbol

    - by serb
    Hi, I'm experiencing issues when converting a decimal to currency for Korean Won, Cambodian Riel and Iranian Rial and showing the result in the UILabel text. The conversion itself passes just fine and I can see the correct currency symbol in the debugger; even NSLog prints the symbol well. But if I assign this NSString instance to the UILabel text, the currency symbol is shown as a crossed box instead of the correct symbol. There is no other code in between, and it does not matter what font I use. I tried to print ₩ (Korean Won) using the Unicode value (0x20A9) or even using the UTF8 representation (\xe2\x82\xa9), but all I get is the crossed box on the label. Any other currency supported by the iPhone SDK and NSLocale (nearly 170 currencies) works perfectly fine, no matter how exotic the currency is. Anyone else experiencing the same problem? Is there a "cure" for this? Thanks EDIT: -(NSString *)decimalToCurrency:(NSDecimalNumber *)value byLocale:(NSLocale *)locale { NSNumberFormatter *fmt = [[NSNumberFormatter alloc] init]; [fmt setLocale: locale]; [fmt setNumberStyle: NSNumberFormatterCurrencyStyle]; NSString *res = [fmt stringFromNumber: value]; [fmt release]; return res; } lbValue.text = [self decimalToCurrency: price byLocale: koreanLocale];

    Read the article

  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. As such, this is how I plan to amend the application; I would appreciate any suggestions to improve the approach. Database modifications All TIMESTAMPs will be converted to DATETIME values; this is to ensure consistency in approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application, and will be more portable when I eventually manage to escape from MySQL). All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST) Query modifications All usage of NOW() to be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc. Application modifications The application must store the time zone and preferred date format (e.g. US vs the rest of the world) All timestamps will be converted according to the user settings before being displayed All input timestamps will be converted to UTC according to the user settings before being input Additional notes Converting formats will be done at the application level for several main reasons The approach to converting time zones varies from DB to DB, so handing it there will be non-portable (and I really hope to be migrating away from MySQL some time in the not-to-distant future). MySQL TIMESTAMPs have limited ranges to the permitted dates (~1970 to ~2038) MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year). Is there anything that I'm missing here, or does anyone have better suggestions for the approach?

    Read the article

  • Entity Framework generates values for NOT NULL columns which have defaults defined in the db.

    - by Muhammad Kashif Nadeem
    Hi, I have a table Customer. One of the columns in the table is DateCreated. This column is NOT NULL, but a default value is defined for it in the db. When I add a new Customer using EF4 from my code: var customer = new Customer(); customer.CustomerName = "Hello"; customer.Email = "[email protected]"; // Watch out commented out. //customer.DateCreated = DateTime.Now; context.AddToCustomers(customer); context.SaveChanges(); the above code generates the following query: exec sp_executesql N'insert [dbo].[Customers]([CustomerName], [Email], [Phone], [DateCreated], [DateUpdated]) values (@0, @1, null, @2, null) select [CustomerId] from [dbo].[Customers] where @@ROWCOUNT > 0 and [CustomerId] = scope_identity() ',N'@0 varchar(100),@1 varchar(100),@2 datetime2(7) ',@0='Hello',@1='[email protected]',@2='0001-01-01 00:00:00' and throws the following error: The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value. The statement has been terminated. Can you please tell me how to keep EF from generating values for NOT NULL columns that have default values defined at the db level? DB: DateCreated DATETIME NOT NULL DateCreated Properties in EF: Nullable: False Getter/Setter: public Type: DateTime DefaultValue: None Thanks.

    Read the article

  • JRuby app throws exception in Spring

    - by mat3001
    I am trying to run a JRuby app in Spring. I use Eclipse to run it. But it doesn't compile. Does anybody know what's going on here? Exception in thread "Launcher:/oflaDemo" [INFO] [Launcher:/oflaDemo] org.springframework.beans.factory.support.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@4a009ab0: defining beans [placeholderConfig,web.context,web.scope,web.handler,demoService.service]; parent: org.springframework.beans.factory.support.DefaultListableBeanFactory@f5d8d75 [INFO] [Launcher:/installer] org.red5.server.service.Installer - Installer service created org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'web.scope' defined in ServletContext resource [/WEB-INF/red5-web.xml]: Initialization of bean failed; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert property value of type [org.springframework.scripting.jruby.JRubyScriptFactory] to required type [org.red5.server.api.IScopeHandler] for property 'handler'; nested exception is java.lang.IllegalArgumentException: Cannot convert value of type [org.springframework.scripting.jruby.JRubyScriptFactory] to required type [org.red5.server.api.IScopeHandler] for property 'handler': no matching editors or conversion strategy found at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:480) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) I don't have a lot of experience with Spring, so I would really appreciate hits. If you're familiar with red5 - yes it's the oflademo app supplied by red5.

    Read the article

  • Help with Java Generics: Cannot use "Object" as argument for "? extends Object"

    - by AniDev
    Hello, I have the following code: import java.util.*; public class SellTransaction extends Transaction { private Map<String,? extends Object> origValueMap; public SellTransaction(Map<String,? extends Object> valueMap) { super(Transaction.Type.Sell); assignValues(valueMap); this.origValueMap=valueMap; } public SellTransaction[] splitTransaction(double splitAtQuantity) { Map<String,? extends Object> valueMapPart1=origValueMap; valueMapPart1.put(nameMappings[3],(Object)new Double(splitAtQuantity)); Map<String,? extends Object> valueMapPart2=origValueMap; valueMapPart2.put(nameMappings[3],((Double)origValueMap.get(nameMappings[3]))-splitAtQuantity); return new SellTransaction[] {new SellTransaction(valueMapPart1),new SellTransaction(valueMapPart2)}; } } The code fails to compile when I call valueMapPart1.put and valueMapPart2.put, with the error: The method put(String, capture#5-of ? extends Object) in the type Map is not applicable for the arguments (String, Object) I have read on the Internet about generics and wildcards and captures, but I still don't understand what is going wrong. My understanding is that the value of the Map's can be any class that extends Object, which I think might be redundant, because all classes extend Object. And I cannot change the generics to something like ? super Object, because the Map is supplied by some library. So why is this not compiling? Also, if I try to cast valueMap to Map<String,Object>, the compiler gives me that 'Unchecked conversion' warning. Thanks!

    Read the article

  • Speed of QHash lookups using QStrings as keys.

    - by Ryan R.
    I need to draw a dynamic overlay on a QImage. The component parts of the overlay are defined in XML and parsed out to a QHash<QString, QPicture> where the QString is the name (such as "crosshairs") and the QPicture is the resolution independent drawing. I then draw components of the overlay as they are needed at a position determined during runtime. Example: I have 10 pictures in my QHash composing every possible element in a HUD. During a particular frame of video I need to draw 6 of them at different positions on the image. During the next frame something has changed and now I only need to draw 4 of them but 2 of those positions have changed. Now to my question: If I am trying to do this quickly, should I redefine my QHash as QHash<int, QPicture> and enumerate the keys to counteract the overhead caused by string comparisons; or are the comparisons not going to make a very big impact on performance? I can easily make the conversion to integer keys as the XML parser and overlay composer are completely separate classes; but I would like to use a consistent data structure across the application. Should I overcome my desire for consistency and re-usability in order to increase performance? Will it even matter very much if I do?
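
    For scale, a QHash<int, QPicture> lookup with enum keys would look roughly like the sketch below (the enum names and drawing call are illustrative, not from the question); whether it is worth the consistency trade-off really depends on profiling, since with only ten short keys the per-lookup cost of hashing a QString is usually small:

        #include <QHash>
        #include <QPainter>
        #include <QPicture>

        // Illustrative enum standing in for the XML element names ("crosshairs", ...).
        // The XML parser would map each name to its enum value once, at load time.
        enum OverlayElement { Crosshairs, Altimeter, Compass };

        void drawCrosshairs(QPainter &painter, const QHash<int, QPicture> &overlays)
        {
            // Integer lookup; QPicture is implicitly shared, so the copy is cheap.
            const QPicture pic = overlays.value(Crosshairs);
            painter.drawPicture(10, 20, pic);
        }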

    Read the article

  • How do I send floats in window messages?

    - by yngvedh
    Hi, what is the best way to send a float in a Windows message using C++ casting operators? The reason I ask is that the approach which first occurred to me did not work. For the record, I'm using the standard Win32 function to send messages: PostWindowMessage(UINT nMsg, WPARAM wParam, LPARAM lParam) What does not work: Using static_cast<WPARAM>() does not work since WPARAM is typedef'ed to UINT_PTR and will do a numeric conversion from float to int, effectively truncating the value of the float. Using reinterpret_cast<WPARAM>() does not work since it is meant for use with pointers and fails with a compilation error. I can think of two workarounds at the moment: Using reinterpret_cast in conjunction with the address-of operator: float f = 42.0f; ::PostWindowMessage(WM_SOME_MESSAGE, *reinterpret_cast<WPARAM*>(&f), 0); Using a union: union { WPARAM wParam, float f }; f = 42.0f; ::PostWindowMessage(WM_SOME_MESSAGE, wParam, 0); Which of these is preferred? Is there another, more elegant way of accomplishing this?
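
    A third option, not mentioned in the question, is to copy the bytes rather than cast, which avoids both the truncation of static_cast and the aliasing pitfalls of dereferencing &f as a WPARAM*. A minimal sketch, using the standard PostMessage for illustration and an application-defined message id as an assumption:

        #include <windows.h>
        #include <cstring>

        // Pack a float into a WPARAM by copying raw bytes; sizeof(float) <= sizeof(WPARAM)
        // on both 32-bit and 64-bit Windows, so the value fits.
        WPARAM packFloat(float f)
        {
            WPARAM w = 0;
            std::memcpy(&w, &f, sizeof(f));
            return w;
        }

        // Recover the float on the receiving side of the message.
        float unpackFloat(WPARAM w)
        {
            float f = 0.0f;
            std::memcpy(&f, &w, sizeof(f));
            return f;
        }

        // Sending side (WM_APP + 1 is an illustrative message id):
        // ::PostMessage(hwnd, WM_APP + 1, packFloat(42.0f), 0);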

    Read the article

  • C++ Pointer member function with templates assignment with a member function of another class

    - by Agusti
    Hi, I have this class: class IShaderParam{ public: std::string name_value; }; template<class TParam> class TShaderParam:public IShaderParam{ public: void (TShaderParam::*send_to_shader)( const TParam&,const std::string&); TShaderParam():send_to_shader(NULL){} TParam value; void up_to_shader(); }; typedef TShaderParam<float> FloatShaderParam; typedef TShaderParam<D3DXVECTOR3> Vec3ShaderParam; In another class, I have a vector of IShaderParam* and functions that I want to assign to "send_to_shader". I'm trying to assign references to these functions like this: Vec3ShaderParam *_param = new Vec3ShaderParam; _param->send_to_shader = &TShader::setVector3; This is the function: void TShader::setVector3(const D3DXVECTOR3 &vec, const std::string &name){ //... } And this is the class with the vector of IShaderParam*: class TShader{ std::vector<IShaderParam*> params; public: Shader effect; std::string technique_name; TShader(std::string& afilename):effect(NULL){}; ~TShader(); void setVector3(const D3DXVECTOR3 &vec, const std::string &name); When I compile the project with Visual Studio C++ Express 2008 I receive this error: Error 2 error C2440: '=' : cannot convert from 'void (__thiscall TShader::* )(const D3DXVECTOR3 &,const std::string &)' to 'void (__thiscall TShaderParam::* )(const TParam &,const std::string &)' c:\users\isagoras\documents\mcv\afoc\shader.cpp 127 Can I do the assignment? No? I don't know how :-S Yes, I know that I can achieve the same objective with other techniques, but I want to know how I can do this..
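
    The assignment fails because the class a member function belongs to is part of the pointer-to-member type: a void (TShader::*)(...) simply is not a void (TShaderParam::*)(...). A hedged sketch of one workaround, simplified to float and to placeholder class bodies (the member and variable names beyond those in the question are illustrative), is to declare the member pointer as a pointer-to-member of TShader and store the TShader* to invoke it on:

        #include <iostream>
        #include <string>

        // Minimal stand-in for the shader class from the question.
        class TShader {
        public:
            void setFloat(const float& v, const std::string& name)
            {
                std::cout << name << " = " << v << "\n";   // placeholder for the D3DX effect call
            }
        };

        template <class TParam>
        class TShaderParam {
        public:
            // Pointer to a member of TShader (not of TShaderParam), plus the object to call it on.
            typedef void (TShader::*Sender)(const TParam&, const std::string&);

            TShaderParam() : send_to_shader(0), owner(0) {}

            void up_to_shader()
            {
                if (owner && send_to_shader)
                    (owner->*send_to_shader)(value, name_value);
            }

            Sender      send_to_shader;
            TShader*    owner;        // assumption: each param knows which shader it feeds
            TParam      value;
            std::string name_value;
        };

        int main()
        {
            TShader shader;
            TShaderParam<float> param;
            param.owner = &shader;
            param.send_to_shader = &TShader::setFloat;   // the pointer-to-member types now match
            param.name_value = "time";
            param.value = 1.5f;
            param.up_to_shader();                        // prints "time = 1.5"
            return 0;
        }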

    Read the article

  • Why should I use core.autocrlf in Git

    - by Rich
    I have a Git repository that is accessed from both Windows and OS X, and that I know already contains some files with CRLF line-endings. As far as I can tell, there are two ways to deal with this: Set core.autocrlf to false everywhere, Follow the instructions here (echoed on GitHub's help pages) to convert the repository to contain only LF line-endings, and thereafter set core.autocrlf to true on Windows and input on OS X. The problem with doing this is that if I have any binary files in the repository that: a). are not correctly marked as binary in gitattributes, and b). happen to contain both CRLFs and LFs, they will be corrupted. It is possible my repository contains such files. So why shouldn't I just turn off Git's line-ending conversion? There are a lot of vague warnings on the web about having core.autocrlf switched off causing problems, but very few specific ones; the only that I've found so far are that kdiff3 cannot handle CRLF endings (not a problem for me), and that some text editors have line-ending issues (also not a problem for me). The repository is internal to my company, and so I don't need to worry about sharing it with people with different autocrlf settings or line-ending requirements. Are there any other problems with just leaving line-endings as-is that I am unaware of?

    Read the article

  • Adding array to an object breaks the array

    - by DisgruntledGoat
    I have an array like this (output from print_r): Array ( [price] => 700.00 [room_prices] => Array ( [0] => [1] => [2] => [3] => [4] => ) [bills] => Array ( [0] => Gas ) ) I'm running a custom function to convert it to an object. Only the top-level should be converted, the sub-arrays should stay as arrays. The output comes out like this: stdClass Object ( [price] => 700.00 [room_prices] => Array ( [0] => Array ) [bills] => Array ( [0] => Array ) ) Here is my conversion function. All it does is set the value of each array member to an object: function array_to_object( $arr ) { $obj = new stdClass; if ( count($arr) == 0 ) return $obj; foreach ( $arr as $k=>$v ) $obj->$k = $v; return $obj; } I can't figure this out for the life of me!

    Read the article

  • JavaScript automatically converts some special characters

    - by noplacetoh1de
    I need to extract an HTML substring with JS, and the extraction is position-dependent. I store special characters HTML-encoded. For example: HTML: <div id="test"><p>l&ouml;sen &amp; gr&uuml;&szlig;en</p></div> Text: lösen & grüßen My problem lies in the JS part, for example when I try to extract the fragment l&ouml;, which has the HTML-dependent starting position of 3 and the end position of 9 inside the <div> block. JS seems to convert some special characters internally, so that the count from 3 to 9 is wrongly interpreted as "lösen " and not "l&ouml;". Other special characters like the &amp; are not affected by this. So my question is: does someone know why JS is behaving that way? Characters like &auml; or &ouml; are being converted, while characters like &amp; or &nbsp; stay plain. Is there any way to avoid this conversion? I've set up a fiddle to demonstrate this: JSFiddle Thanks for any help! EDIT: Maybe I've explained it a bit confusingly, sorry for that. What I want is the HTML: <p>l&ouml;sen &amp; gr&uuml;&szlig;en</p>. Every special character should stay unconverted, except the HTML tags, like in the HTML above. But JS converts the &ouml; or &uuml; into ö or ü automatically, which is what I need to avoid.

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (containing more columns but not important in this case): starttime, start time in which you can contact him. endtime, final hour in which you can contact him. region_id, region where the provider resides. In USA: California, Texas, etc. In UK: England, Scotland, etc starttime and endtime are time without timezone columns, but, "indirectly", their value has time zone of the region in which the provider resides. For example: starttime | endtime | region_id (time zone of region) | "real" st | "real" et ----------|----------|---------------------------------|-----------|----------- 03:00:00 | 17:00:00 | 1 (EGT => -1) | 02:00:00 | 16:00:00 Often I need to get the list of suppliers whose time range is within the current server time (taking into account the time zone conversion). The problem is that the time zones aren't "constant", ie, they may change during the summer time. However, this change is very specific to the region and not always carried out at the same time: EGT <= EGST, ART <= ARST, etc. The question is: 1. Is it necessary to use a webservice to update every so often the time zones in the regions? Does anyone know of a web service that can serve? 2. Is there a better approach to solve this problem? Thanks in advance. UPDATE I will give an example to clarify what I'm trying to get. In the table providers I found this records: idproviders | starttime | endtime | region_id ------------|-----------|----------|----------- 1 | 03:00:00 | 17:00:00 | 23 (Texas) 2 | 04:00:00 | 18:00:00 | 23 (Texas) If I execute the query in January, with this information: Server time (UTC offset) = 0 hours Texas providers (UTC offset) = +1 hour Server time = 02:00:00 I should get the following results: idproviders = 1 If I execute the query in June, with this information: Server time (UTC offset) = 0 hours Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone has changed) Server time = 02:00:00 I should get the following results: idproviders = 1 and 2

    Read the article

  • Execute a function to affect different template class instances

    - by Samer Afach
    I have a complicated problem, and I need help. I have a base class, class ParamBase { string paramValue; //... } and a bunch of class templates with different template parameters. template <typename T> class Param : public ParamBase { T value; //... } Now, each instance of Param has a different template parameter: double, int, string... etc. To make it easier, I have a vector of base class pointers that contains all the instances that have been created: vector<ParamBase*> allParamsObjects; The question is: how can I run a single function (global or member or anything, your choice) that converts each of those instances' string paramValue, whatever its template argument, and saves the conversion result with the appropriate type in Param::value? This has to be run over all objects that are saved in the vector allParamsObjects. So if the template argument of the first Param is double, paramValue has to be converted to double and saved in value; and if the second Param's argument is int, then the paramValue of the second has to be converted to int and saved in value... etc. I feel it's almost impossible... Any help would be highly appreciated :-)
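
    It is not impossible; one common pattern is a pure virtual function on the base class whose templated override performs the type-specific parsing. A minimal sketch under that assumption (using std::istringstream for the conversion; member names follow the question where possible, the rest are illustrative):

        #include <cstddef>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        class ParamBase {
        public:
            std::string paramValue;
            virtual ~ParamBase() {}
            virtual void convert() = 0;   // each Param<T> knows its own target type
        };

        template <typename T>
        class Param : public ParamBase {
        public:
            T value;
            virtual void convert()
            {
                std::istringstream iss(paramValue);
                iss >> value;   // note: for std::string this reads a single whitespace-delimited token
            }
        };

        // The single function the question asks for: one loop over the base pointers.
        void convertAll(std::vector<ParamBase*>& allParamsObjects)
        {
            for (std::size_t i = 0; i < allParamsObjects.size(); ++i)
                allParamsObjects[i]->convert();
        }

        int main()
        {
            std::vector<ParamBase*> allParamsObjects;
            Param<double>* p = new Param<double>();
            p->paramValue = "3.14";
            allParamsObjects.push_back(p);

            convertAll(allParamsObjects);
            std::cout << p->value << "\n";   // prints 3.14

            delete p;
            return 0;
        }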

    Read the article

  • Convert a variable with mixed date formats to one format in R

    - by jalapic
    A sample of my dataframe: date 1 25 February 1987 2 20 August 1974 3 9 October 1984 4 18 August 1992 5 19 September 1995 6 16-Oct-63 7 30-Sep-65 8 22 Jan 2008 9 13-11-1961 10 18 August 1987 11 15-Sep-70 12 5 October 1994 13 5 December 1984 14 03/23/87 15 30 August 1988 16 26-10-1993 17 22 August 1989 18 13-Sep-97 I have a large dataframe with a date variable that has multiple formats for dates. Most of the formats in the variable are shown above- there are a couple of very rare others too. The reason why there are multiple formats is that the data were pulled together from various websites that each used different formats. I have tried using straightforward conversions e.g. strftime(mydf$date,"%d/%m/%Y") but these sorts of conversion will not work if there are multiple formats. I don't want to resort to multiple gsub type editing. I was wondering if I am missing a more simple solution? Code for example: structure(list(date = structure(c(12L, 8L, 18L, 6L, 7L, 4L, 14L, 10L, 1L, 5L, 3L, 17L, 16L, 11L, 15L, 13L, 9L, 2L), .Label = c("13-11-1961", "13-Sep-97", "15-Sep-70", "16-Oct-63", "18 August 1987", "18 August 1992", "19 September 1995", "20 August 1974", "22 August 1989", "22 Jan 2008", "03/23/87", "25 February 1987", "26-10-1993", "30-Sep-65", "30 August 1988", "5 December 1984", "5 October 1994", "9 October 1984"), class = "factor")), .Names = "date", row.names = c(NA, -18L), class = "data.frame")

    Read the article

  • Why can't BOOST_FOREACH handle a const boost::ptr_map?

    - by psaghelyi
    int main() { typedef boost::ptr_map<int, char> MyMap; MyMap mymap; mymap[1] = 'a'; mymap[2] = 'b'; mymap[3] = 'c'; BOOST_FOREACH(MyMap::value_type value, mymap) { std::cout << value.first << " " << value.second << std::endl; } MyMap const & const_mymap = mymap; BOOST_FOREACH(MyMap::value_type value, const_mymap) { std::cout << value.first << " " << value.second << std::endl; } } The following error message comes from GCC at the second BOOST_FOREACH: error: conversion from 'boost::ptr_container_detail::ref_pair<int, const char* const>' to non-scalar type 'boost::ptr_container_detail::ref_pair<int, char* const>' requested I reckon that this is a weakness of the pointer container's ref_pair...
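
    The mismatch is that a const ptr_map yields a ref_pair whose second member is a pointer to const, which the non-const MyMap::value_type cannot bind to. One workaround sketch is to iterate with the container's const_iterator explicitly (note that .second is a pointer, so it is dereferenced to get the char):

        #include <boost/ptr_container/ptr_map.hpp>
        #include <iostream>

        typedef boost::ptr_map<int, char> MyMap;

        // Iterate a const ptr_map without relying on BOOST_FOREACH deducing the
        // (non-const) ref_pair type.
        void printAll(const MyMap& m)
        {
            for (MyMap::const_iterator it = m.begin(); it != m.end(); ++it)
                std::cout << it->first << " " << *it->second << std::endl;
        }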

    Read the article

  • Make Errors: Missing Includes in C++ Script?

    - by Abs
    Hello all, I just got help on how to compile this script a few minutes ago on SO, but I have managed to get errors. I am only a beginner in C++ and have no idea what the errors below mean or how to fix them. This is the script in question. I have read the comments from some users suggesting they changed the #include parts, but it seems to be exactly what the script has, see this comment. [root@localhost wkthumb]# qmake-qt4 && make g++ -c -pipe -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -Wall -W -D_REENTRANT -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -I/usr/lib/qt4/mkspecs/linux-g++ -I. -I/usr/include/QtCore -I/usr/include/QtGui -I/usr/include -I. -I. -I. -o main.o main.cpp main.cpp:5:20: error: QWebView: No such file or directory main.cpp:6:21: error: QWebFrame: No such file or directory main.cpp:8: error: expected constructor, destructor, or type conversion before ‘*’ token main.cpp:11: error: ‘QWebView’ has not been declared main.cpp: In function ‘void loadFinished(bool)’: main.cpp:18: error: ‘view’ was not declared in this scope main.cpp:18: error: ‘QWebSettings’ has not been declared main.cpp:19: error: ‘QWebSettings’ has not been declared main.cpp:20: error: ‘QWebSettings’ has not been declared main.cpp: In function ‘int main(int, char**)’: main.cpp:42: error: ‘view’ was not declared in this scope main.cpp:42: error: expected type-specifier before ‘QWebView’ main.cpp:42: error: expected `;' before ‘QWebView’ make: *** [main.o] Error 1 I have the web kit on my Fedora Core 10 machine: qt-4.5.3-9.fc10.i386 qt-devel-4.5.3-9.fc10.i386 Thanks all for any help
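
    The compile line shows include paths for QtCore and QtGui but none for QtWebKit, which is where QWebView and QWebFrame live in Qt 4. A hedged sketch of the module-qualified includes, plus the qmake change that pulls the module in (whether a separate QtWebKit devel package is also needed on Fedora is an assumption to verify):

        // Module-qualified includes for the classes named in the errors; they belong to
        // the QtWebKit module, not QtCore or QtGui.
        #include <QtWebKit/QWebView>
        #include <QtWebKit/QWebFrame>

        // The project file must also enable the module so qmake adds the QtWebKit
        // include path and links the library; this line goes in the .pro file
        // (shown as a comment because this sketch is a C++ source file):
        //   QT += webkit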

    Read the article
