Search Results

Search found 8301 results on 333 pages for 'types'.

  • IE won't load PDF in a window created with window.open

    - by Dean
    Here's the problem, which only occurs in Internet Explorer (IE). I have a page with links to several different types of files. Clicking one of these links executes a JavaScript function that opens a new window and loads the specific file. This works great, unless the file I need to open in the new window is a PDF, in which case the window is blank even though the URL is in the address field. Refreshing that window with F5 doesn't help. However, if I put the cursor in the address field and press Enter, the PDF loads right up. This problem only occurs in IE. I have seen it in IE 7 and 8 and am using Adobe Acrobat Reader 9. In Firefox (PC and Mac) everything works perfectly. In Chrome (Mac) the PDF is downloaded, in Safari (Mac) it works, and in Opera (Mac) it prompts me to open or save. Basically, everything works fine except IE. I have searched for similar problems and have seen some posts suggesting adjustments to some of IE's Internet Options. I have tried this, but it doesn't help, and the problem wasn't exactly the same anyway. Here's the JavaScript function I use to open the new window:

        function newwin(url, w, h) {
            win = window.open(url, "temp", "width=" + w + ",height=" + h + ",menubar=yes,toolbar=yes,location=yes,status=yes,scrollbars=auto,resizable=yes");
            win.focus();
        }

    You can see that I pass in the URL as well as the height, h, and width, w, of the window. I've used a function like this for years and as far as I know have never had a problem. I call the newwin() function like this:

        <a href="javascript:newwin('/path/document.pdf',400,300)">document.pdf</a>

    (Yes, I know there are other, better ways than using inline JS, and I've even tried some of them because I've run out of things to try, but nothing works.) So, if anyone has an idea as to what might be causing this problem, I'd love to hear it.
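
    One workaround sometimes suggested for IE's Acrobat plugin is to open an empty window first and assign the URL afterwards, which mimics the "put the cursor in the address field and press Enter" behaviour described above; this is an untested sketch, not a confirmed fix:

        function newwin(url, w, h) {
            // Open a blank window first, then navigate it; some IE + Adobe Reader
            // combinations only render the PDF when the URL is assigned after
            // the window already exists.
            var win = window.open("", "temp", "width=" + w + ",height=" + h +
                ",menubar=yes,toolbar=yes,location=yes,status=yes,scrollbars=auto,resizable=yes");
            win.location.href = url;
            win.focus();
        }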

  • Cascade Saves with Fluent NHibernate AutoMapping - Old Answer Still Valid?

    - by Glenn
    I want to do exactly what this question asks: http://stackoverflow.com/questions/586888/cascade-saves-with-fluent-nhibernate-automapping, i.e. use Fluent NHibernate mappings to turn on "cascade" globally, once for all classes and relation types, rather than setting it for each mapping individually. The answer to the earlier question looks great, but I'm afraid the Fluent NHibernate API altered its .WithConvention syntax last year and broke the answer... either that or I'm missing something. I keep getting a bunch of namespace-not-found errors relating to IOneToOnePart, IManyToOnePart and all their variations:

        The type or namespace name 'IOneToOnePart' could not be found (are you missing a using directive or an assembly reference?)

    I've tried the official example DLLs, the RTM DLLs and the latest build, and none of them seem to make VS 2008 see the required namespace. The second problem is that I want to use the class with my AutoPersistenceModel, but I'm not sure where to put this line:

        .ConventionDiscovery.AddFromAssemblyOf<CascadeAll>()

    in my factory creation method:

        private static ISessionFactory CreateSessionFactory()
        {
            return Fluently.Configure()
                .Database(SQLiteConfiguration.Standard.UsingFile(DbFile))
                .Mappings(m => m.AutoMappings
                    .Add(AutoMap.AssemblyOf<Shelf>(type => type.Namespace.EndsWith("Entities"))
                        .Override<Shelf>(map => { map.HasManyToMany(x => x.Products).Cascade.All(); })
                    )
                ) // end mappings
                .ExposeConfiguration(BuildSchema)
                .BuildSessionFactory(); // finalizes the whole thing to send back
        }

    Below is the class and the using statements I'm trying:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using FluentNHibernate.Conventions;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;
        using FluentNHibernate.Mapping;

        namespace TestCode
        {
            public class CascadeAll : IHasOneConvention, IHasManyConvention, IReferenceConvention
            {
                public bool Accept(IOneToOnePart target) { return true; }
                public void Apply(IOneToOnePart target) { target.Cascade.All(); }

                public bool Accept(IOneToManyPart target) { return true; }
                public void Apply(IOneToManyPart target) { target.Cascade.All(); }

                public bool Accept(IManyToOnePart target) { return true; }
                public void Apply(IManyToOnePart target) { target.Cascade.All(); }
            }
        }
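
    In case it helps whoever answers: in Fluent NHibernate builds from around 1.0 RTM onward, the Accept/Apply pair was replaced by instance-based convention interfaces, so a global-cascade convention looks roughly like the sketch below (written against the post-1.0 convention API from memory, so treat the exact member names as assumptions), and it is registered with .Conventions.Add<CascadeAll>() on the AutoPersistenceModel rather than via ConventionDiscovery:

        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        public class CascadeAll : IHasOneConvention, IHasManyConvention, IReferenceConvention
        {
            public void Apply(IOneToOneInstance instance) { instance.Cascade.All(); }
            public void Apply(IOneToManyCollectionInstance instance) { instance.Cascade.All(); }
            public void Apply(IManyToOneInstance instance) { instance.Cascade.All(); }
        }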

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of:

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz') ON id = textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.:

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control over the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited-resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
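
    For comparison, the Core Data version of the prefix query against a separate word entity would look like the sketch below; the entity and attribute names are assumptions, and the attribute can be marked as indexed in the model editor:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Word"
                                       inManagedObjectContext:context]];
        // Prefix match, equivalent to word BETWEEN 'foo' AND 'fooz' in the raw SQL
        [request setPredicate:[NSPredicate predicateWithFormat:@"word BEGINSWITH %@", prefix]];
        [request setFetchLimit:50];   // plays the role of LIMIT 50
        NSError *error = nil;
        NSArray *matches = [context executeFetchRequest:request error:&error];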

  • How to eager load sibling data using LINQ to SQL?

    - by Scott
    The goal is to issue the fewest queries to SQL Server using LINQ to SQL without using anonymous types. The return type for the method will need to be IList<Child1>. The relationships are as follows:

        Parent
            Child1
                Grandchild1
            Child2

    Parent to Child1 is a one-to-many relationship. Child1 to Grandchild1 is a one-to-n relationship (where n is zero to infinity). Parent to Child2 is a one-to-n relationship (where n is zero to infinity). I am able to eager load the Parent, Child1 and Grandchild1 data, resulting in one query to SQL Server. This query with load options eager loads all of the data except the sibling data (Child2):

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        dataContext.LoadOptions = loadOptions;
        IQueryable<Child1> children = from child in dataContext.Child1
                                      select child;

    I need to load the sibling data as well. One approach I have tried is splitting the query into two LINQ to SQL queries and merging the result sets together (not pretty); however, upon accessing the sibling data it is lazy loaded anyway. Adding the sibling load option will issue a query to SQL Server for each Grandchild1 and Child2 record (which is exactly what I am trying to avoid):

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        loadOptions.LoadWith<Parent>(o => o.Child2List);
        dataContext.LoadOptions = loadOptions;
        IQueryable<Child1> children = from child in dataContext.Child1
                                      select child;

        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=1
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=2
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=3
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=4

    I've also written LINQ to SQL queries to join in all of the data in hopes that it would eager load it; however, when the LINQ to SQL EntitySet of Child2 or Grandchild1 is accessed, it lazy loads the data. The reason for returning the IList<Child1> is to hydrate business objects. My thoughts are I am either approaching this problem the wrong way, should be calling a stored procedure, or my organization should not be using LINQ to SQL as an ORM. Any help is greatly appreciated. Thank you, -Scott
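
    One pattern that avoids the per-parent round trips is to fetch the Child2 rows for all relevant parents in one extra query and stitch them up in memory before hydrating the business objects; a rough sketch (property names such as Id and ForeignKeyToParent are assumed from the question, not known):

        var children = dataContext.Child1.ToList();   // one query: Child1 + Grandchild1 + Parent via LoadOptions
        var parentIds = children.Select(c => c.Parent.Id).Distinct().ToList();
        var siblings = dataContext.Child2
            .Where(s => parentIds.Contains(s.ForeignKeyToParent))   // second query: a single IN (...) over Child2
            .ToList();
        var siblingsByParent = siblings.ToLookup(s => s.ForeignKeyToParent);
        // hand each business object its sibling list instead of touching the lazy EntitySet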

  • Windows Service HTTPListener Memory Issue

    - by crawshaws
    Hi all, I'm a complete novice at the "best practices" etc. of writing code. I tend to just write it, and if it works, why fix it? Well, this way of working is landing me in some hot water. I am writing a simple Windows service to serve a single web page. (This service will be incorporated into another project which monitors the services and some folders on a group of servers.) My problem is that whenever a request is received, the memory usage jumps up by a few KB per request and keeps going up on every request. Now, I've found that by putting GC.Collect in the mix it stops at a certain number, but I'm sure it's not meant to be used this way. I was wondering if I am missing something or not doing something I should to free up memory. Here is the code:

        Public Class SimpleWebService : Inherits ServiceBase
            'Set the values for the different event log types.
            Public Const EVENT_ERROR As Integer = 1
            Public Const EVENT_WARNING As Integer = 2
            Public Const EVENT_INFORMATION As Integer = 4

            Public listenerThread As Thread
            Dim HTTPListner As HttpListener
            Dim blnKeepAlive As Boolean = True

            Shared Sub Main()
                Dim ServicesToRun As ServiceBase()
                ServicesToRun = New ServiceBase() {New SimpleWebService()}
                ServiceBase.Run(ServicesToRun)
            End Sub

            Protected Overrides Sub OnStart(ByVal args As String())
                If Not HttpListener.IsSupported Then
                    CreateEventLogEntry("Windows XP SP2, Server 2003, or higher is required to " & "use the HttpListener class.")
                    Me.Stop()
                End If
                Try
                    listenerThread = New Thread(AddressOf ListenForConnections)
                    listenerThread.Start()
                Catch ex As Exception
                    CreateEventLogEntry(ex.Message)
                End Try
            End Sub

            Protected Overrides Sub OnStop()
                blnKeepAlive = False
            End Sub

            Private Sub CreateEventLogEntry(ByRef strEventContent As String)
                Dim sSource As String
                Dim sLog As String
                sSource = "Service1"
                sLog = "Application"
                If Not EventLog.SourceExists(sSource) Then
                    EventLog.CreateEventSource(sSource, sLog)
                End If
                Dim ELog As New EventLog(sLog, ".", sSource)
                ELog.WriteEntry(strEventContent)
            End Sub

            Public Sub ListenForConnections()
                HTTPListner = New HttpListener
                HTTPListner.Prefixes.Add("http://*:1986/")
                HTTPListner.Start()
                Do While blnKeepAlive
                    Dim ctx As HttpListenerContext = HTTPListner.GetContext()
                    Dim HandlerThread As Thread = New Thread(AddressOf ProcessRequest)
                    HandlerThread.Start(ctx)
                    HandlerThread = Nothing
                Loop
                HTTPListner.Stop()
            End Sub

            Private Sub ProcessRequest(ByVal ctx As HttpListenerContext)
                Dim sb As StringBuilder = New StringBuilder
                sb.Append("<html><body><h1>Test My Service</h1>")
                sb.Append("</body></html>")
                Dim buffer() As Byte = Encoding.UTF8.GetBytes(sb.ToString)
                ctx.Response.ContentLength64 = buffer.Length
                ctx.Response.OutputStream.Write(buffer, 0, buffer.Length)
                ctx.Response.OutputStream.Close()
                ctx.Response.Close()
                sb = Nothing
                buffer = Nothing
                ctx = Nothing
                'This line seems to keep the mem leak down
                'System.GC.Collect()
            End Sub
        End Class

    Please feel free to criticise and tear the code apart, but please BE KIND. I have admitted I don't tend to follow best practice when it comes to coding.
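
    One pattern worth trying (an educated guess, not a verified diagnosis): a brand-new Thread object is created for every request, and each thread's stack and bookkeeping hang around until the GC collects them, which can look exactly like a slowly climbing working set. A sketch of the same loop using pooled threads instead; ProcessRequest's signature changes to accept an Object so it matches the WaitCallback delegate:

        Public Sub ListenForConnections()
            HTTPListner = New HttpListener
            HTTPListner.Prefixes.Add("http://*:1986/")
            HTTPListner.Start()
            Do While blnKeepAlive
                Dim ctx As HttpListenerContext = HTTPListner.GetContext()
                'Reuse a pooled thread instead of allocating a new Thread per request
                ThreadPool.QueueUserWorkItem(AddressOf ProcessRequest, ctx)
            Loop
            HTTPListner.Stop()
        End Sub

        Private Sub ProcessRequest(ByVal state As Object)
            Dim ctx As HttpListenerContext = DirectCast(state, HttpListenerContext)
            'body unchanged from the original ProcessRequest
        End Sub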

  • .NET XML Serialization without <?xml> root node

    - by Graphain
    Hi, I'm trying to generate XML like this:

        <?xml version="1.0"?>
        <!DOCTYPE APIRequest SYSTEM "https://url">
        <APIRequest>
          <Head>
            <Key>123</Key>
          </Head>
          <ObjectClass>
            <Field>Value</Field>
          </ObjectClass>
        </APIRequest>

    I have a class (ObjectClass) decorated with XML serialization attributes like this:

        [XmlRoot("ObjectClass")]
        public class ObjectClass
        {
            [XmlElement("Field")]
            public string Field { get; set; }
        }

    And my really hacky intuitive thought to just get this working is to do this when I serialize:

        ObjectClass inst = new ObjectClass();
        XmlSerializer serializer = new XmlSerializer(inst.GetType(), "");
        StringWriter w = new StringWriter();
        w.WriteLine(@"<?xml version=""1.0""?>");
        w.WriteLine("<!DOCTYPE APIRequest SYSTEM");
        w.WriteLine(@"""https://url"">");
        w.WriteLine("<APIRequest>");
        w.WriteLine("<Head>");
        w.WriteLine(@"<Field>Value</Field>");
        w.WriteLine(@"</Head>");
        XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
        ns.Add("", "");
        serializer.Serialize(w, inst, ns);
        w.WriteLine("</APIRequest>");

    However, this generates XML like this:

        <?xml version="1.0"?>
        <!DOCTYPE APIRequest SYSTEM "https://url">
        <APIRequest>
        <Head>
        <Key>123</Key>
        </Head>
        <?xml version="1.0" encoding="utf-16"?>
        <ObjectClass>
          <Field>Value</Field>
        </ObjectClass>
        </APIRequest>

    i.e. the Serialize call is automatically adding an <?xml ...?> declaration. I know I'm attacking this wrong, so can someone point me in the right direction? As a note, I don't think it will make practical sense to just make an APIRequest class with an ObjectClass in it (because there are, say, 20 different types of ObjectClass that each need this boilerplate around them), but correct me if I'm wrong.
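
    If the only goal is to stop Serialize() from emitting its own declaration, wrapping the StringWriter in an XmlWriter that is configured to omit it is the standard route; a sketch:

        XmlWriterSettings settings = new XmlWriterSettings();
        settings.OmitXmlDeclaration = true;                      // suppress the inner <?xml ... ?>
        settings.ConformanceLevel = ConformanceLevel.Fragment;   // allow writing into a larger document
        using (XmlWriter xw = XmlWriter.Create(w, settings))
        {
            serializer.Serialize(xw, inst, ns);                  // writes just <ObjectClass>...</ObjectClass>
        }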

  • How to Determine the Size of MSADO Command Parameters

    - by Adam
    I am new to MS ADO and trying to understand how to set the size on command parameters as created by command.CreateParameter(Name, Type, Direction, Size, Value). The documentation says the following:

        Size: Optional. A Long value that specifies the maximum length for the parameter value in characters or bytes. ... If you specify a variable-length data type in the Type argument, you must either pass a Size argument or set the Size property of the Parameter object before appending it to the Parameters collection; otherwise, an error occurs.

    1.) What should one pass for fixed-size parameters? Is it a "don't care"? I was a bit confused by the example found here, in which they set Size to 3 for an adInteger parameter with Value set to a variant of type VT_I2:

        pPrmByRoyalty->Type = adInteger;
        pPrmByRoyalty->Size = 3;
        pPrmByRoyalty->Direction = adParamInput;
        pPrmByRoyalty->Value = vtroyal;

    VT_I2 implies two bytes. A tagVARIANT struct is 16 bytes. How did they land on three? I see that the enum value for adInteger happens to be three, but I suspect that is just a coincidence. So it's a bit confusing what to pass for fixed-size parameters. The team I'm working with has always passed sizeof(int) for adInteger, and it seems to work. Is that correct? Now, for "variable-length" parameters, we are instructed by the documentation to pass "the maximum length ... in characters or bytes".

    2.) For adVarChar, is it sufficient to pass the max width as defined in the database?

    3.) What about the wide types (e.g. adVarWChar)? Is it characters or bytes?

    4.) How about adVariant, which could contain fixed- or variable-length data?

    5.) Do arrays ever come into play here? (We don't pass them as parameters, just curious.)

    Any references or personal insights are welcome.
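
    For what it's worth, a common pattern with the #import-generated ADO smart pointers treats Size as ignored for fixed-length types (0 is typical) and as the column's defined maximum for variable-length types; this is a sketch of conventional usage, not an authoritative answer to what Size "should" be:

        // Fixed-length type: Size is ignored for adInteger, so 0 is a common choice.
        _ParameterPtr pInt = cmd->CreateParameter(_bstr_t("id"), adInteger,
                                                  adParamInput, 0, _variant_t(42L));
        cmd->Parameters->Append(pInt);

        // Variable-length type: Size is the column's maximum length in characters.
        _ParameterPtr pName = cmd->CreateParameter(_bstr_t("name"), adVarChar,
                                                   adParamInput, 50, _variant_t("Smith"));
        cmd->Parameters->Append(pName);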

  • Succinct introduction to C++/CLI for C#/Haskell/F#/JS/C++/... programmer

    - by Henrik
    Hello everybody, I'm trying to write integrations with the operating system and with things like Active Directory and Ocropus. I know a bunch of programming languages, including those listed in the title. I'm trying to learn exactly how C++/CLI works, but can't find succinct, exact and accurate descriptions online from the searching that I have done. So I ask here. Could you tell me the pitfalls and features of C++/CLI? Assume I know all of C# and start from there. I'm not an expert in C++, so some of my questions' answers might be "just like C++", but remember that I am at C#. I would like to know things like:

    - Converting C++ pointers to CLI pointers, and any differences in passing by value / doubly indirect pointers / CLI pointers from C#/C++, and what is 'recommended'.
    - How gcnew, __gc and __nogc work with polymorphism, structs, inner classes and interfaces.
    - The "fixed" keyword; does that exist?
    - Compiling DLLs loaded into the kernel with C++/CLI: possible? Loaded as device drivers? Invoked by the kernel? What does this mean anyway (i.e. what does it mean to load something into the kernel exactly, and how do I know if it is)?
    - L"my string" versus "my string"? wchar_t? How many types of chars are there? Are we safe in treating chars as uint32s, or what should one treat them as to guarantee language indifference in code?
    - Finalizers (~ClassName() {}) are discouraged in C# because there are no guarantees they will run deterministically, but since in C++ I have to use "delete" or use copy constructors so as to stack-allocate memory, what are the recommendations for C#/C++ interactions?
    - What are the pitfalls when using reflection in C++/CLI?
    - How well does C++/CLI work with the IDisposable pattern and with SafeHandle and SafeHandleZeroOrMinusOneIsInvalid?
    - I've read briefly about asynchronous exceptions when doing DMA operations; what are these?
    - Are there limitations you impose upon yourself when using C++ with CLI integration rather than just doing plain C++?
    - Attributes in C++ similar to attributes in C#?
    - Can I use the full meta-programming patterns available in C++ through templates now and still have it compile like ordinary C++?
    - Have you tried writing C++/CLI with Boost? What are the optimal ways of interfacing the Boost library with C++/CLI; can you give me an example of passing a lambda expression to an iterator/foldr function?
    - What is the preferred way of exception handling? Can C++/CLI catch managed exceptions now?
    - How well does dynamic IL generation work with C++/CLI? Does it run on Mono?
    - Any other things I ought to know about?
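
    To anchor the terminology before any answers: __gc/__nogc belong to the old Managed Extensions syntax; current C++/CLI (VS2005 and later) uses ref classes, tracking handles (^) and gcnew, as in this minimal sketch:

        // C++/CLI: a ref class lives on the managed heap; ^ is a tracking handle
        ref class Greeter
        {
        public:
            void Say() { System::Console::WriteLine("hello from C++/CLI"); }
        };

        int main()
        {
            Greeter^ g = gcnew Greeter();   // gcnew instead of new; the GC owns the object
            g->Say();
            return 0;
        }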

  • Mixing policy-based design with CRTP in C++

    - by Eitan
    I'm attempting to write a policy-based host class (i.e., a class that inherits from its template class), with a twist: the policy class is also templated by the host class, so that it can access its types. One example where this might be useful is where a policy (used like a mixin, really) augments the host class with a polymorphic clone() method. Here's a minimal example of what I'm trying to do:

        template <template <class> class P>
        struct Host : public P<Host<P> > {
            typedef P<Host<P> > Base;
            typedef Host* HostPtr;
            Host(const Base& p) : Base(p) {}
        };

        template <class H>
        struct Policy {
            typedef typename H::HostPtr Hptr;
            Hptr clone() const {
                return Hptr(new H((Hptr)this));
            }
        };

        Policy<Host<Policy> > p;
        Host<Policy> h(p);

        int main() {
            return 0;
        }

    This, unfortunately, fails to compile, in what seems to me like a circular type dependency:

        try.cpp: In instantiation of ‘Host<Policy>’:
        try.cpp:10:   instantiated from ‘Policy<Host<Policy> >’
        try.cpp:16:   instantiated from here
        try.cpp:2: error: invalid use of incomplete type ‘struct Policy<Host<Policy> >’
        try.cpp:9: error: declaration of ‘struct Policy<Host<Policy> >’
        try.cpp: In constructor ‘Host<P>::Host(const P<Host<P> >&) [with P = Policy]’:
        try.cpp:17:   instantiated from here
        try.cpp:5: error: type ‘Policy<Host<Policy> >’ is not a direct base of ‘Host<Policy>’

    If anyone can spot an obvious mistake, or has successfully mixed CRTP into policies, I would appreciate any help.
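
    The circularity bites because the class-level typedef (typedef typename H::HostPtr Hptr;) forces H to be complete while Policy<Host<P>> is still being instantiated as Host's base class. Member function bodies, by contrast, are only instantiated when called, at which point H is complete; so one fix is to keep every mention of H's innards inside clone(). A sketch of that rearrangement:

        template <class H>
        struct Policy {
            // No class-level typedefs that look inside H here; H may be incomplete.
            H* clone() const {
                // By the time clone() is actually called, H is a complete type,
                // and Policy<H> is known to be a base of H.
                return new H(static_cast<const H&>(*this));
            }
        };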

  • Need to set cursor position to the end of a contentEditable div, issue with selection and range objects

    - by DavidR
    I'm forgetting about cross-browser compatibility for the moment; I just want this to work. What I'm doing is trying to modify a script (you probably don't need to know this) located at typegreek.com. The basic script is found here. Basically, when you type in characters, it converts the character you are typing into a Greek character and prints it onto the screen. What I'm trying to do is get it to work on contentEditable divs (it only works for textareas). My issue is with this one function: the user types a key, it gets converted to a Greek key and goes to a function, where it gets sorted through some ifs, and where it ends up is where I can add div support. Here is what I have so far; myField is the div, myValue is the Greek character:

        //Get selection object...
        var userSelection
        if (window.getSelection) {userSelection = window.getSelection();}
        else if (document.selection) {userSelection = document.selection.createRange();}

        //Now get the cursor position information...
        var startPos = userSelection.anchorOffset;
        var endPos = userSelection.focusOffset;
        var cursorPos = endPos;

        //Needed later when reinserting the cursor...
        var rangeObj = userSelection.getRangeAt(0)
        var container = rangeObj.startContainer

        //Now take the content from pos 0 -> cursor, add in myValue,
        //then insert everything after myValue to the end of the line.
        myField.textContent = myField.textContent.substring(0, startPos)
            + myValue
            + myField.textContent.substring(endPos, myField.textContent.length);

        //Now the issue is, this updates the string, and returns the cursor to the
        //beginning of the div, so that at the next keypress the character is
        //inserted into the beginning of the div.
        //So we need to reinsert the cursor where it was.
        //Re-evaluate the cursor position, taking into account the added character.
        var cursorPos = endPos + myValue.length;

        //Set the character position.
        rangeObj.setStart(container,cursorPos)

    Now, this works only as long as I don't type more than the size of the original text. Say I had 30 characters in the div beforehand. If I type more than those 30, it adds character 31 but places the cursor back at 30. I can type character 32 at pos. 31, then character 33 at pos. 32, but if I try to put character 34 in, it adds the character and sets the cursor back at 32. The issue is that the function for adding the new character screws up if cursorPos is greater than what is defined in the range. Any ideas?
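
    A guess at the root cause, based on the posted code: assigning to myField.textContent replaces the div's text node, so 'container' keeps pointing at a node that is no longer in the document, and setStart() on the stale node can only address offsets that existed before the edit. Re-grabbing the fresh text node and installing a new collapsed range avoids that; a sketch:

        // After updating myField.textContent, re-acquire the *new* text node:
        var textNode = myField.firstChild;
        var newRange = document.createRange();
        newRange.setStart(textNode, cursorPos);   // cursorPos = endPos + myValue.length
        newRange.collapse(true);                  // collapse to a caret, no selection
        userSelection.removeAllRanges();          // discard the stale selection
        userSelection.addRange(newRange);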

  • JSON problems when making a Ruby on Rails application

    - by Prince Merdz
    So I'm using Bitnami to learn Ruby on Rails. I previously tried the manual installation of Ruby and Rails and was met by the same problem, so I thought I should first try the easy package deal of Bitnami. Anyway, my problem with json is that it causes bundle install to fail. First, the automatic bundle install that rails new runs fails because of an SSL error, which is easily solved by changing the source in the Gemfile from https to http. However, when I try to bundle install, it hits another error when it tries to install json:

        C:\RubyStack-3.2.7-0\projects\testing>bundle install
        Fetching gem metadata from http://rubygems.org/.........
        Using rake (0.9.2.2)
        Using i18n (0.6.0)
        Using multi_json (1.3.6)
        Installing activesupport (3.2.8)
        Using builder (3.0.0)
        Installing activemodel (3.2.8)
        Using erubis (2.7.0)
        Using journey (1.0.4)
        Using rack (1.4.1)
        Using rack-cache (1.2)
        Using rack-test (0.6.1)
        Using hike (1.2.1)
        Using tilt (1.3.3)
        Using sprockets (2.1.3)
        Installing actionpack (3.2.8)
        Using mime-types (1.19)
        Using polyglot (0.3.3)
        Using treetop (1.4.10)
        Using mail (2.4.4)
        Installing actionmailer (3.2.8)
        Using arel (3.0.2)
        Using tzinfo (0.3.33)
        Installing activerecord (3.2.8)
        Installing activeresource (3.2.8)
        Using bundler (1.1.5)
        Using coffee-script-source (1.3.3)
        Using execjs (1.4.0)
        Using coffee-script (2.2.0)
        Using rack-ssl (1.3.2)
        Installing json (1.7.5) with native extensions
        Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
        C:/RUBYST~1.7-0/ruby/bin/ruby.exe extconf.rb
        creating Makefile
        make
        0 [main] echo 5244 open_stackdumpfile: Dumping stack trace to echo.exe.stackdump
        make: *** [generator-i386-mingw32.def] Error 5
        Gem files will remain installed in C:/RUBYST~1.7-0/ruby/lib/ruby/gems/1.9.1/gems/json-1.7.5 for inspection.
        Results logged to C:/RUBYST~1.7-0/ruby/lib/ruby/gems/1.9.1/gems/json-1.7.5/ext/json/ext/generator/gem_make.out
        An error occured while installing json (1.7.5), and Bundler cannot continue.
        Make sure that `gem install json -v '1.7.5'` succeeds before bundling.

    This is the gem_make.out file it produces after trying to install json (by the way, Windows also shows an error that echo.exe has stopped working while running gem install json):

        C:/RUBYST~1.7-0/ruby/bin/ruby.exe extconf.rb
        creating Makefile
        make
        0 [main] echo 5244 open_stackdumpfile: Dumping stack trace to echo.exe.stackdump
        make: *** [generator-i386-mingw32.def] Error 5

    I can't even start learning RoR, since the setup is already a huge pain. (I have no prior experience with web frameworks, just desktop programming.) Help?
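
    For what it's worth, the echo.exe stack dump during make usually points at a broken or mismatched native build toolchain on Windows, for example a Cygwin echo/make shadowing the RubyInstaller DevKit binaries on PATH. Under that assumption, a typical check-and-repair sequence looks like this (the DevKit path is a placeholder; dk.rb is run from the extracted DevKit directory):

        REM Which echo/make win on PATH? A Cygwin hit here is a common culprit.
        where echo
        where make

        REM Register the DevKit that matches the Ruby version, then retry the one gem:
        cd C:\DevKit
        ruby dk.rb init
        ruby dk.rb install
        gem install json -v '1.7.5'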

  • Integers not properly returned from a property list (plist) array in Objective-C

    - by Gaurav
    In summary, I am having a problem where I read what I expect to be an NSNumber from an NSArray contained in a property list, and instead of getting a number such as '1', I get what looks to be a memory address (i.e. '61879840'). The numbers are clearly correct in the property list. Below is my process for creating the property list and reading it back.

    Creating the property list: I have created a simple Objective-C property list with arrays of integers within one root array:

        <array>
            <array>
                <integer>1</integer>
                <integer>2</integer>
            </array>
            <array>
                <integer>1</integer>
                <integer>2</integer>
                <integer>5</integer>
            </array>
            ... more arrays with integers ...
        </array>

    The arrays are NSArray objects and the integers are NSNumber objects. The property list has been created and serialized using the following code:

        // factorArray is an NSArray that contains NSArrays of NSNumbers as described above
        // serialize and compress factorArray as a property list, Factors-bin.plist
        NSString *error;
        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *plistPath = [rootPath stringByAppendingPathComponent:@"Factors-bin.plist"];
        NSData *plistData = [NSPropertyListSerialization dataFromPropertyList:factorArray
                                format:NSPropertyListBinaryFormat_v1_0
                                errorDescription:&error];

    Inspecting the created plist, all values and types are correct.

    Reading the property list: the property list is read in as data and then converted to an NSArray:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"Factors" ofType:@"plist"];
        NSData *plistData = [[NSData alloc] initWithContentsOfFile:path];
        NSPropertyListFormat format;
        NSString *error = nil;
        NSArray *factorData = (NSArray *)[NSPropertyListSerialization propertyListFromData:plistData
                                             mutabilityOption:NSPropertyListImmutable
                                             format:&format
                                             errorDescription:&error];

    Cycling through factorData to see what it contains is where I see the erroneous integers:

        for (int i = 0; i < 10; i++) {
            NSArray *factorList = (NSArray *)[factorData objectAtIndex:i];
            NSLog(@"Factors of %d\n", i + 1);
            for (int j = 0; j < [factorList count]; j++) {
                NSLog(@"  %d\n", (NSNumber *)[factorList objectAtIndex:j]);
            }
        }

    I see the correct number of values, but the values themselves are incorrect, e.g.:

        Factors of 3
        61879840 (should be 1)
        61961200 (should be 3)
        Factors of 4
        61879840 (should be 1)
        61943472 (should be 2)
        61943632 (should be 4)
        Factors of 5
        61879840 (should be 1)
        61943616 (should be 5)
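
    Almost certainly the plist round-trip is fine and only the logging is off: [factorList objectAtIndex:j] returns an NSNumber *, i.e. an object pointer, and printing a pointer with the %d format shows its address rather than its value. Either of these variants should print the expected integers:

        NSNumber *factor = [factorList objectAtIndex:j];
        NSLog(@"  %@", factor);              // %@ lets NSNumber print its own value
        NSLog(@"  %d", [factor intValue]);   // or unbox to a plain int for %d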

  • How can I pass latitude and longitude values from UIViewController to MKMapView?

    - by jerincbus
    I have a detail view that includes three UIButtons, each of which pushes a different view onto the stack. One of the buttons is connected to an MKMapView. When that button is pushed I need to send the latitude and longitude variables from the detail view to the map view. I'm trying to add the declarations in the IBAction:

        - (IBAction)goToMapView {
            MapViewController *mapController = [[MapViewController alloc] initWithNibName:@"MapViewController" bundle:nil];
            mapController.mapAddress = self.address;
            mapController.mapTitle = self.Title;
            mapController.mapLat = self.lat;
            mapController.mapLng = self.lng;
            //Push the new view on the stack
            [[self navigationController] pushViewController:mapController animated:YES];
            [mapController release];
            //mapController = nil;
        }

    In my MapViewController.h file I have:

        #import <UIKit/UIKit.h>
        #import <MapKit/MapKit.h>
        #import "DetailViewController.h"
        #import "CourseAnnotation.h"

        @class CourseAnnotation;

        @interface MapViewController : UIViewController <MKMapViewDelegate> {
            IBOutlet MKMapView *mapView;
            NSString *mapAddress;
            NSString *mapTitle;
            NSNumber *mapLat;
            NSNumber *mapLng;
        }
        @property (nonatomic, retain) IBOutlet MKMapView *mapView;
        @property (nonatomic, retain) NSString *mapAddress;
        @property (nonatomic, retain) NSString *mapTitle;
        @property (nonatomic, retain) NSNumber *mapLat;
        @property (nonatomic, retain) NSNumber *mapLng;
        @end

    And in the pertinent parts of the MapViewController.m file I have:

        @synthesize mapView, mapAddress, mapTitle, mapLat, mapLng;

        - (void)viewDidLoad {
            [super viewDidLoad];
            [mapView setMapType:MKMapTypeStandard];
            [mapView setZoomEnabled:YES];
            [mapView setScrollEnabled:YES];
            MKCoordinateRegion region = { {0.0, 0.0 }, { 0.0, 0.0 } };
            region.center.latitude = mapLat; //40.105085;
            region.center.longitude = mapLng; //-83.005237;
            region.span.longitudeDelta = 0.01f;
            region.span.latitudeDelta = 0.01f;
            [mapView setRegion:region animated:YES];
            [mapView setDelegate:self];

            CourseAnnotation *ann = [[CourseAnnotation alloc] init];
            ann.title = mapTitle;
            ann.subtitle = mapAddress;
            ann.coordinate = region.center;
            [mapView addAnnotation:ann];
        }

    But when I try to build, I get 'error: incompatible types in assignment' for both the lat and lng variables. So my questions are: am I going about passing the variables from one view to another the right way? And does MKMapView accept latitude and longitude as a string or a number?
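
    The build error is the compiler refusing to assign an NSNumber * (an object pointer) to region.center.latitude, which is a CLLocationDegrees, i.e. a plain double. Keeping the properties as NSNumber and unboxing at the assignment site should clear both errors:

        region.center.latitude  = [mapLat doubleValue];   // was: mapLat (an NSNumber *)
        region.center.longitude = [mapLng doubleValue];   // was: mapLng (an NSNumber *)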

  • Maven: install jar file during build process

    - by Venkata
    I have got a requirement as follows: I need to run an Ant build file during the Maven build process, invoking the build.xml from my pom.xml file. I have done that using maven-antrun-plugin. Now I need to install the jar file generated by the Ant build into my local repository automatically, before Maven compiles my project source. I tried using build-helper-maven-plugin but it did not help; either I am doing something wrong, or I am not doing it right. Please help.

    Update: Thank you. The Maven Ant tasks worked for me as well. However, I am running into the following exception at the end of the build process. Any help is highly appreciated.

        org.apache.tools.ant.ExitException: Permission (java.lang.RuntimePermission exitVM) was not granted.
            at org.apache.tools.ant.types.Permissions$MySM.checkExit(Permissions.java:196)
            at java.lang.Runtime.exit(Runtime.java:99)
            at java.lang.System.exit(System.java:275)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:376)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:599)
            at org.apache.tools.ant.taskdefs.ExecuteJava.run(ExecuteJava.java:217)
            at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:152)
            at org.apache.tools.ant.taskdefs.Java.run(Java.java:771)
            at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:221)
            at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:135)
            at org.apache.tools.ant.taskdefs.Java.execute(Java.java:108)
            at org.apache.maven.artifact.ant.Mvn.execute(Mvn.java:81)
            at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:599)
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
            at org.apache.tools.ant.Task.perform(Task.java:348)
            at org.apache.tools.ant.Target.execute(Target.java:390)
            at org.apache.tools.ant.Target.performTasks(Target.java:411)
            at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
            at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
            at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
            at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
            at org.apache.tools.ant.Main.runBuild(Main.java:809)
            at org.apache.tools.ant.Main.startAnt(Main.java:217)
            at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
            at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
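
    For the original requirement (install the Ant-built jar before compilation), one approach is to bind the standard maven-install-plugin's install-file goal to an early lifecycle phase, after the antrun execution that produces the jar; this is a sketch, and all coordinates and paths are placeholders:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-install-plugin</artifactId>
          <executions>
            <execution>
              <id>install-ant-built-jar</id>
              <phase>process-resources</phase> <!-- runs before compile; antrun must run earlier still -->
              <goals><goal>install-file</goal></goals>
              <configuration>
                <file>${project.build.directory}/ant-output/mylib.jar</file>
                <groupId>com.example</groupId>
                <artifactId>mylib</artifactId>
                <version>1.0</version>
                <packaging>jar</packaging>
              </configuration>
            </execution>
          </executions>
        </plugin>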

  • java.sql.SQLException: Parameter number X is not an OUT parameter

    - by Frederik
    Hi guys, I'm struggling to get the result OUT variable from a MySQL stored procedure. I get the following error:

        java.sql.SQLException: Parameter number 3 is not an OUT parameter

    The stored procedure looks like this:

        CREATE DEFINER=`cv_admin`@`localhost` PROCEDURE `CheckGameEligibility`(
            IN gID INT(10),
            IN uID INT(10),
            OUT result TINYINT(1)
        )
        BEGIN
            # Do lots of stuff, then eventually:
            SET result = 1;
        END

    My Java function takes an array of strings* and creates the CallableStatement object dynamically:

        public static int callAndReturnResult( String sql , String[] values ) {
            int out = 0 ;
            try {
                // construct the SQL. Creates: CheckGameEligibility(?, ?, ?)
                sql += "(" ;
                for( int i = 0 ; i < values.length ; i++ ) {
                    sql += "?, " ;
                }
                sql += "?)" ;
                System.out.println( "callAndReturnResult("+sql+"): constructed SQL: " + sql );

                // Then the statement
                CallableStatement cstmt = DB.prepareCall( sql );
                for( int i = 0 ; i < values.length ; i++ ) {
                    System.out.println( "  " + (i+1) + ": " + values[ i ] ) ;
                    cstmt.setString(i+1, values[ i ] );
                }
                System.out.println( "  " + (values.length+1) + ": ? (OUT)" ) ;
                cstmt.registerOutParameter( values.length + 1 , Types.TINYINT );
                cstmt.execute();
                out = cstmt.getInt( values.length );
                cstmt.close();
            } catch( Exception e ) {
                System.out.println( "*** db trouble: callAndReturnResult(" + sql + " failed: " + e );
                e.printStackTrace() ;
            }
            return out ;
        }

    *) I suppose I should be using an int array instead of a string array, but that doesn't seem to be what the error message is about. Anyway, here's the output it generates:

        callAndReturnResult(CheckGameEligibility(?, ?, ?)): constructed SQL: CheckGameEligibility(?, ?, ?)
          1: 57
          2: 29
          3: ? (OUT)
        *** db trouble: callAndReturnResult(CheckGameEligibility(?, ?, ?) failed: java.sql.SQLException: Parameter number 3 is not an OUT parameter
            at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1075)
            at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
            at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:984)
            at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:929)
            at com.mysql.jdbc.CallableStatement.checkIsOutputParam(CallableStatement.java:692)
            at com.mysql.jdbc.CallableStatement.registerOutParameter(CallableStatement.java:1847)
            at org.apache.commons.dbcp.DelegatingCallableStatement.registerOutParameter(DelegatingCallableStatement.java:92)
            at Tools.callAndReturnResult(Tools.java:156)

    Any ideas what might be the problem? :) Thanks in advance!
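
    Two things stand out, assuming a stock MySQL Connector/J setup: JDBC expects the call escape syntax {call ...} for stored procedures, and the OUT value is read back from index values.length although it was registered at values.length + 1. A sketch of the corrected call for this procedure:

        // Use the JDBC call escape so the driver prepares a stored-procedure call:
        CallableStatement cstmt = DB.prepareCall("{call CheckGameEligibility(?, ?, ?)}");
        cstmt.setString(1, "57");
        cstmt.setString(2, "29");
        cstmt.registerOutParameter(3, Types.TINYINT);
        cstmt.execute();
        int out = cstmt.getInt(3);   // read the OUT value from the index it was registered at
        cstmt.close();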

  • error working with wsdl files in visual studio 2008

    - by deostroll
    Hi. I got a WSDL file by email. At first I didn't know how to use it. I simply saved the file to my disk, opened Visual Studio, added a service reference, provided the path to the file, and the service was discovered. I opened the object browser to see the types and methods that got imported. I figure anything that ends with the name 'Client' is a good place to start using the web service. I've tried using a simple method to get data, but it has run into an exception. I need help resolving it.

        System.InvalidOperationException was unhandled
          Message="The XML element 'ListsRequest' from namespace 'http://www.asd.org/MGMMIRAGE.MDM.WS/Customer' references a method and a type. Change the method's message name using WebMethodAttribute or change the type's root element using the XmlRootAttribute."
          Source="System.Xml"
          StackTrace:
            at System.Xml.Serialization.XmlReflectionImporter.ReconcileAccessor(Accessor accessor, NameTable accessors)
            at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean openModel, XmlMappingAccess access)
            at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean openModel)
            at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc)
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.XmlSerializerImporter.ImportMembersMapping(XmlName elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean isEncoded, String mappingKey)
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, String mappingKey)
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.LoadBodyMapping(MessageDescription message, String mappingKey, MessagePartDescriptionCollection& rpcEncodedTypedMessageBodyParts)
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.CreateMessageInfo(MessageDescription message, String key)
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.EnsureMessageInfos()
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.EnsureMessageInfos()
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.get_Request()
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.CreateFormatter()
            at System.ServiceModel.Description.XmlSerializerOperationBehavior.System.ServiceModel.Description.IOperationBehavior.ApplyClientBehavior(OperationDescription description, ClientOperation proxy)
            at System.ServiceModel.Description.DispatcherBuilder.BindOperations(ContractDescription contract, ClientRuntime proxy, DispatchRuntime dispatch)
            at System.ServiceModel.Description.DispatcherBuilder.ApplyClientBehavior(ServiceEndpoint serviceEndpoint, ClientRuntime clientRuntime)
            at System.ServiceModel.Description.DispatcherBuilder.BuildProxyBehavior(ServiceEndpoint serviceEndpoint, BindingParameterCollection& parameters)
            at System.ServiceModel.Channels.ServiceChannelFactory.BuildChannelFactory(ServiceEndpoint serviceEndpoint)
            at System.ServiceModel.ChannelFactory.CreateFactory()
            at System.ServiceModel.ChannelFactory.OnOpening()
            at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
            at System.ServiceModel.ChannelFactory.EnsureOpened()
            at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)
            at System.ServiceModel.ChannelFactory`1.CreateChannel()
            at System.ServiceModel.ClientBase`1.CreateChannel()
            at System.ServiceModel.ClientBase`1.CreateChannelInternal()
            at System.ServiceModel.ClientBase`1.get_Channel()
            at MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortTypeClient.MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortType.CountryCodeGet(CountryCodeGetRequest request) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Service References\MDMWebSrvc\Reference.cs:line 2983
            at MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortTypeClient.CountryCodeGet(String countryCode) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Service References\MDMWebSrvc\Reference.cs:line 2989
            at MDMWSDemo.Program.Main(String[] args) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Program.cs:line 15
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException:
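
    The exception text itself names the collision: the element 'ListsRequest' is used both as a method message name and as a type. If regenerating the reference doesn't resolve it, one workaround sometimes used (an assumption here, since the generated code isn't shown) is to re-root the generated type in a partial class so it no longer shadows the method's message:

        // Hypothetical partial class placed alongside the generated Reference.cs:
        [System.Xml.Serialization.XmlRoot("ListsRequestType",
            Namespace = "http://www.asd.org/MGMMIRAGE.MDM.WS/Customer")]
        public partial class ListsRequest
        {
            // generated members unchanged
        }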

  • Trouble cross-referencing two XML child nodes in AS3

    - by Dwayne
    I am building a mini language translator out of XML nodes and ActionScript 3. I have my XML ready. It contains a series of nodes with two children each. Here is a sample:

        <translations>
            <entry>
                <english>man</english>
                <cockney>geeza</cockney>
            </entry>
            <entry>
                <english>woman</english>
                <cockney>lily</cockney>
            </entry>
        </translations>

    The AS3 part consists of one input text field named "textfield_txt" where the English word will be typed in, an output text field for the translation called "cockney_txt", and finally one button to execute the translation called "generate_mc". The idea is to have ActionScript look through the XML for the key English word after the user types it in the text field, cross-reference the children, then return the Cockney translation as a value. The trouble is, when I test it I get no response or error messages; it's completely silent. I'm not sure what I'm doing wrong. At present, I have set up a conditional statement to tell me whether the function works or not. The result is: no, it's not! Here's the code below. I hope someone can help. Cheers!

        generate_mc.buttonMode = true;

        var English:String;
        var myXML:XML;
        var myLoader:URLLoader = new URLLoader();
        myLoader.load(new URLRequest("Language.xml"));
        myLoader.addEventListener(Event.COMPLETE, processXML);

        function processXML(e:Event):void {
            myXML = new XML(e.target.data);
        }

        generate_mc.addEventListener(MouseEvent.CLICK, onClick);

        function onClick(event:MouseEvent) {
            English = textfield.text;
            cockney_txt.text = myXML.translations.entry.cockney;
            if (textfield.text.toLowerCase() == myXML.translations.entry.english.toLowerCase) {
                //return myXML.translations.entry.cockney;
                trace("success");
            } else {
                trace("try again!"); // ***I get this as a result
            }
        }
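
    Two likely culprits, inferred from the posted code rather than tested: since myXML's root element is already <translations>, the entries live at myXML.entry (not myXML.translations.entry), and toLowerCase is referenced without parentheses, so the comparison is against the function object itself. An E4X filter can do the whole lookup; a sketch:

        function onClick(event:MouseEvent):void {
            var query:String = textfield.text.toLowerCase();
            // E4X predicate: select the <entry> whose <english> text matches the input
            var match:XMLList = myXML.entry.(english.toString().toLowerCase() == query);
            if (match.length() > 0) {
                cockney_txt.text = match.cockney.toString();
                trace("success");
            } else {
                trace("try again!");
            }
        }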

  • How to do validation on both client and server side for a service that is a stored procedure (returns a complex type)

    - by Tai
    Hi, I am doing Silverlight 4. In my database I have a stored procedure (taking two parameters) which returns rows (with extra fields), so I have to make a complex type for those rows in my Models and make a service to call that imported stored procedure. RIA will automatically create a matching entity (for the complex type) and an operation for me. However, I don't know how to validate the parameters of the operation on both the client and server side. For example, a parameter must be an integer only (and greater than 10), or a datetime only. Below is my XAML. I am using the DomainDataSource control and don't know how to validate the two parameter fields; there are two TextBoxes where the user types the parameter values. Please help me, thank you.

        <riaControls:DomainDataSource AutoLoad="False"
            d:DesignData="{d:DesignInstance my1:USPFinancialAccountHistory, CreateList=true}"
            Height="0" LoadedData="uSPFinancialAccountHistoryDomainDataSource_LoadedData"
            Name="uSPFinancialAccountHistoryDomainDataSource"
            QueryName="GetFinancialAccountHistoryQuery" Width="0" Margin="0,0,705,32">
            <riaControls:DomainDataSource.DomainContext>
                <my:USPFinancialAccountHistoryContext />
            </riaControls:DomainDataSource.DomainContext>
            <riaControls:DomainDataSource.QueryParameters>
                <riaControls:Parameter ParameterName="fiscalYear" Value="{Binding ElementName=fiscalYearTextBox, Path=Text}" />
                <riaControls:Parameter ParameterName="fiscalPeriod" Value="{Binding ElementName=fiscalPeriodTextBox, Path=Text}" />
            </riaControls:DomainDataSource.QueryParameters>
        </riaControls:DomainDataSource>

        <StackPanel Height="30" HorizontalAlignment="Left" Orientation="Horizontal" VerticalAlignment="Top">
            <sdk:Label Content="Fiscal Year:" Margin="3" VerticalAlignment="Center" />
            <TextBox Name="fiscalYearTextBox" Width="60" />
            <sdk:Label Content="Fiscal Period:" Margin="3" VerticalAlignment="Center" />
            <TextBox Name="fiscalPeriodTextBox" Width="60" />
            <Button Command="{Binding Path=LoadCommand, ElementName=uSPFinancialAccountHistoryDomainDataSource}" Content="Load" Margin="3" Name="uSPFinancialAccountHistoryDomainDataSourceLoadButton" />
        </StackPanel>

        <telerik:RadGridView ItemsSource="{Binding ElementName=uSPFinancialAccountHistoryDomainDataSource, Path=Data}"
            Name="uSPFinancialAccountHistoryRadGridView" Grid.Row="1" IsReadOnly="True"
            DataLoadMode="Asynchronous" AutoGenerateColumns="False" ShowGroupPanel="False">
            <telerik:RadGridView.Columns>
                <telerik:GridViewDataColumn Header="Account Number" DataMemberBinding="{Binding AccountNumber}"/>
                <telerik:GridViewDataColumn Header="Department Number" DataMemberBinding="{Binding DepartmentNumber}"/>
                <telerik:GridViewDataColumn Header="Period code" DataMemberBinding="{Binding PeriodCode}" />
                <telerik:GridViewDataColumn Header="Total Debit" DataMemberBinding="{Binding TotalDebit}" DataFormatString="{}{0:C2}"/>
                <telerik:GridViewDataColumn Header="Total Credit" DataMemberBinding="{Binding TotalCredit}" DataFormatString="{}{0:C2}"/>
                <telerik:GridViewDataColumn Header="Period Total" DataMemberBinding="{Binding PeriodTotal}" DataFormatString="{}{0:C2}"/>
                <telerik:GridViewDataColumn Header="Year To Date" DataMemberBinding="{Binding YearToDate}" DataFormatString="{}{0:C2}"/>
            </telerik:RadGridView.Columns>
        </telerik:RadGridView>
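
    For the server side, one simple approach is to guard the parameters inside the domain service query and throw a ValidationException, which RIA Services surfaces to the Silverlight client as a load error; typed int parameters already reject non-integer input at binding time. (DataAnnotations attributes such as [Range] on the query parameters are another avenue, but whether they get propagated to the generated client proxy for query parameters is worth verifying.) A sketch with assumed names, including the assumed EF function-import call:

        public IQueryable<USPFinancialAccountHistory> GetFinancialAccountHistory(int fiscalYear, int fiscalPeriod)
        {
            // Business rule example: fiscalYear must be an integer greater than 10.
            if (fiscalYear <= 10)
                throw new ValidationException("fiscalYear must be greater than 10.");
            if (fiscalPeriod < 1 || fiscalPeriod > 13)
                throw new ValidationException("fiscalPeriod must be between 1 and 13.");
            return this.ObjectContext.GetFinancialAccountHistory(fiscalYear, fiscalPeriod).AsQueryable();
        }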

  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused, as none of the sites that I found online directly and clearly answered some of the questions I have. So here are my questions about hardware acceleration in Java:

    1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10 I think), is hardware acceleration enabled by default? I read somewhere that someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping() is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?

    2) I have seen a couple of articles here and there about the difference between a BufferedImage and a VolatileImage, mainly saying that VolatileImage is the hardware-accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated in my environment? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of a VolatileImage, in the case where both are accelerated, is that VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built in as well, just hidden from the user, in case the memory is dumped?

    3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read()? In a tutorial I have been reading about setting up the rendering window properly (tutorial), it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get "hardware accelerated" images for fast drawing, which ties into question 2 about whether it is hardware accelerated.

    4) This is less about hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said anything like this clearly.

    5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?

    Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.
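
    Regarding question 1, one concrete way to experiment is to flip the documented Java2D pipeline switches and watch what the runtime reports; these properties must be set before any AWT/Java2D class loads (or be passed as -D flags on the command line). One caveat worth verifying: on Apple's own Java 6 VM the OpenGL flag may not apply, since Apple shipped its own Quartz pipeline (apple.awt.graphics.UseQuartz). A sketch:

        public final class PipelineCheck {
            public static void main(String[] args) {
                // Ask for the OpenGL pipeline; "True" (capital T) also prints a
                // confirmation line to the console when the pipeline is enabled.
                System.setProperty("sun.java2d.opengl", "True");
                // Log which render loops run, revealing accelerated vs. software paths.
                System.setProperty("sun.java2d.trace", "log");
                // ... create the JFrame/Canvas/BufferStrategy after this point ...
            }
        }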

  • Oracle sample data problems

    - by Jay
    So, I have this java based data trasformation / masking tool, which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas with half a million records in some. The schemas are : SH, OE, HR, IX and etc. So, I installed 10g, found out that the installation scripts are under ORACLE_HOME/demo/scripts. I customized these scripts a bit to run in batch mode. That solves one half of my requirement - to create source data for my testing my data transformation software. The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it up in to the same table in a different schema. Now, I have two issues in creating my target schema and emptying it. I would like this in a batch job. But the oracle scripts you get, the sample schema names are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is kind of irritating coz the sample schemas are kind of complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and lot of weird stuff. I would like the target schemas (TR_HR, TR_OE,...) to be empty. But some of the schemas have circular references, which would not allow me to delete data. The only work around seems to be removing certain foreign keys, deleting data and then adding the constraints back. Is there any easy way to all this, without all this fuss? I would need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies.. for instance.. a child table that has children up to 5 levels, a parent table that refers to an IOT table and an IOT table that refers to a non-IOT table etc..). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.

  • What is the best python module skeleton code?

    - by user213060
    == Subjective Question Warning == Looking for well supported opinions or supporting evidence. Let us assume that skeleton code can be good. If you disagree with the very concept of module skeleton code then fine, but please refrain from repeating that opinion here. Many python IDE's will start you with a template like: print 'hello world' That's not enough... So here's my skeleton code to get this question started: My Module Skeleton, Short Version: #!/usr/bin/env python """ Module Docstring """ # ## Code goes here. # def test(): """Testing Docstring""" pass if __name__=='__main__': test() and, My Module Skeleton, Long Version: #!/usr/bin/env python # -*- coding: ascii -*- """ Module Docstring Docstrings: http://www.python.org/dev/peps/pep-0257/ """ __author__ = 'Joe Author ([email protected])' __copyright__ = 'Copyright (c) 2009-2010 Joe Author' __license__ = 'New-style BSD' __vcs_id__ = '$Id$' __version__ = '1.2.3' #Versioning: http://www.python.org/dev/peps/pep-0386/ # ## Code goes here. # def test(): """ Testing Docstring""" pass if __name__=='__main__': test() Notes: """ ===MODULE TYPE=== Since the vast majority of my modules are "library" types, I have constructed this example skeleton as such. For modules that act as the main entry for running the full application, you would make changes such as running a main() function instead of the test() function in __main__. ===VERSIONING=== The following practice, specified in PEP8, no longer makes sense: __version__ = '$Revision: 1.2.3 $' for two reasons: (1) Distributed version control systems make it neccessary to include more than just a revision number. E.g. author name and revision number. (2) It's a revision number not a version number. Instead, the __vcs_id__ variable is being adopted. This expands to, for example: __vcs_id__ = '$Id: example.py,v 1.1.1.1 2001/07/21 22:14:04 goodger Exp $' ===VCS DATE=== Likewise, the date variable has been removed: __date__ = '$Date: 2009/01/02 20:19:18 $' ===CHARACTER ENCODING=== If the coding is explicitly specified, then it should be set to the default setting of ascii. This can be modified if necessary (rarely in practice). Defaulting to utf-8 can cause anomalies with editors that have poor unicode support. """ There are a lot of PEPs that put forward coding style recommendations. Am I missing any important best practices? What is the best python module skeleton code? Update Show me any kind of "best" that you prefer. Tell us what metrics you used to qualify "best".

    Read the article

  • Linux Device Driver: Symbol "memcpy" not found

    - by Hinton
    Hello, I'm trying to write a Linux device driver. It worked really well until I tried to use memcpy. I don't even get a compiler error; when I run make it just warns me:

    WARNING: "memcpy" [/root/homedir/sv/main.ko] undefined!

    When I try to load the module via insmod, I get on the console:

    insmod: error inserting './main.ko': -1 Unknown symbol in module

    and in dmesg:

    main: Unknown symbol memcpy (err 0)

    I include the following:

    #include <linux/module.h>
    #include <linux/moduleparam.h>
    #include <linux/init.h>
    #include <linux/kernel.h>   /* printk() */
    #include <linux/slab.h>     /* kmalloc() */
    #include <linux/fs.h>       /* everything... */
    #include <linux/errno.h>    /* error codes */
    #include <linux/types.h>    /* size_t */
    #include <linux/fcntl.h>    /* O_ACCMODE */
    #include <linux/cdev.h>
    #include <asm/system.h>     /* cli(), *_flags */
    #include <asm/uaccess.h>    /* copy_*_user */

    The function using memcpy:

    static int dc_copy_to_user(char __user *buf, size_t count, loff_t *f_pos,
                               struct sv_data_dev *dev)
    {
        char data[MAX_KEYLEN];
        size_t i = 0;

        /* Copy the bulk as long as there are 10 more bytes to copy */
        while (i < (count + MAX_KEYLEN)) {
            memcpy(data, &dev->data[*f_pos + i], MAX_KEYLEN);
            ec_block(dev->key, data, MAX_KEYLEN);
            if (copy_to_user(&buf[i], data, MAX_KEYLEN)) {
                return -EFAULT;
            }
            i += MAX_KEYLEN;
        }
        return 0;
    }

    Could someone help me? I thought memcpy lived in linux/string.h, but I get the same error with that header included. I'm using kernel 2.6.37-rc1 (I'm building for User-Mode Linux, which only works since 2.6.37-rc1). Any help is greatly appreciated. My Makefile:

    # Context-dependent makefile that can be called directly and will invoke
    # itself through the kernel module building system.
    KERNELDIR=/usr/src/linux

    ifneq ($(KERNELRELEASE),)
    EXTRA_CFLAGS+=-I $(PWD) -ARCH=um
    obj-m := main.o
    else
    KERNELDIR ?= /lib/modules/$(shell uname -r)/build
    PWD = $(shell pwd)

    all:
        $(MAKE) V=1 ARCH=um -C $(KERNELDIR) M=$(PWD) modules

    clean:
        rm -rf Module.symvers .*.cmd *.ko .*.o *.o *.mod.c .tmp_versions *.order
    endif
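
    If the exported memcpy symbol simply cannot be resolved in this User-Mode Linux configuration, one hedged workaround (a sketch that sidesteps the missing symbol rather than fixing the root cause) is a local byte-copy helper; the name dc_copy_bytes is illustrative:

    /* Plain byte-by-byte copy: needs no external memcpy symbol. */
    static void dc_copy_bytes(char *dst, const char *src, size_t n)
    {
        size_t j;

        for (j = 0; j < n; j++)
            dst[j] = src[j];
    }

    Inside dc_copy_to_user, the memcpy call would then read:

    dc_copy_bytes(data, &dev->data[*f_pos + i], MAX_KEYLEN);

    This trades a little speed for a module that links regardless of which architecture-specific memcpy implementation the build expects.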

    Read the article

  • F# Equivalent to Enumerable.OfType<'a>

    - by Joel Mueller
    ...or, how do I filter a sequence of classes by the interfaces they implement?

    Let's say I have a sequence of objects that inherit from Foo, a seq<#Foo>. In other words, my sequence will contain one or more of four different subclasses of Foo. Each subclass implements a different independent interface that shares nothing with the interfaces implemented by the other subclasses. Now I need to filter this sequence down to only the items that implement a particular interface.

    The C# version is simple:

    void MergeFoosIntoList<T>(IEnumerable<Foo> allFoos, IList<T> dest)
        where T : class
    {
        foreach (var foo in allFoos)
        {
            var castFoo = foo as T;
            if (castFoo != null)
            {
                dest.Add(castFoo);
            }
        }
    }

    I could use LINQ from F#:

    let mergeFoosIntoList (foos:seq<#Foo>) (dest:IList<'a>) =
        System.Linq.Enumerable.OfType<'a>(foos)
        |> Seq.iter dest.Add

    However, I feel there should be a more idiomatic way to accomplish it. I thought this would work:

    let mergeFoosIntoList (foos:seq<#Foo>) (dest:IList<'a>) =
        foos
        |> Seq.choose (function | :? 'a as x -> Some(x) | _ -> None)
        |> Seq.iter dest.Add

    However, the compiler complains about :? 'a, telling me:

    This runtime coercion or type test from type 'b to 'a involves an indeterminate type based on information prior to this program point. Runtime type tests are not allowed on some types. Further type annotations are needed.

    I can't figure out what further type annotations to add. There's no relationship between the interface 'a and #Foo except that one or more subclasses of Foo implement that interface. Also, there's no relationship between the different interfaces that can be passed in as 'a except that they are all implemented by subclasses of Foo. I eagerly anticipate smacking myself in the head as soon as one of you kind people points out the obvious thing I've been missing.
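
    For what it's worth, one sketch that typically satisfies the compiler here, using nothing beyond the code in the question: box each element to obj first, so the runtime type test no longer starts from the statically indeterminate #Foo type:

    let mergeFoosIntoList (foos: seq<#Foo>) (dest: IList<'a>) =
        foos
        |> Seq.choose (fun f ->
            // Boxing to obj makes :? an ordinary runtime type test.
            match box f with
            | :? 'a as a -> Some a
            | _ -> None)
        |> Seq.iter dest.Add

    The LINQ version with Enumerable.OfType remains perfectly reasonable; the boxed version just keeps the whole thing in F# pipeline style.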

    Read the article

  • Marshalling polymorphic objects in JAX-WS

    - by pkchukiss
    I'm creating a JAX-WS type web service, with operations that return a WebServiceReply object. The WebServiceReply class itself contains a field of type Object. The individual operations populate that field with a few different data types, depending on the operation. Publishing the WSDL (I'm using NetBeans 6.7) and getting an ASP.NET application to retrieve and parse the WSDL was fine, but when I tried to call an operation, I received the following exception:

    javax.xml.ws.WebServiceException: javax.xml.bind.MarshalException
    - with linked exception:
    [javax.xml.bind.JAXBException: class [LDataObject.Patient; nor any of its super class is known to this context.]

    How do I mark the annotations in the DataObject.Patient class, as well as the WebServiceReply class, to get this to work? I haven't been able to find a definitive resource on marshalling based upon annotations within the target classes either, so it would be great if anybody could point me to that too.

    WebServiceReply.java

    @XmlRootElement(name="WebServiceReply")
    public class WebServiceReply {
        private Object returnedObject;
        private String returnedType;
        private String message;
        private String errorMessage;
        ..........
        // Getters and setters follow
    }

    DataObject.Patient.java

    @XmlRootElement(name="Patient")
    public class Patient {
        private int uid;
        private Date versionDateTime;
        private String name;
        private String identityNumber;
        private List<Address> addressList;
        private List<ContactNumber> contactNumberList;
        private List<Appointment> appointmentList;
        private List<Case> caseList;
    }

    Solution (thanks to Gregory Mostizky for his answer): I edited the WebServiceReply class so that all the possible return objects extend from a new class ReturnValueBase, and added the annotations using @XmlSeeAlso to ReturnValueBase. JAXB worked properly after that!

    Nonetheless, I'm still learning about JAXB marshalling in JAX-WS, so it would be great if anyone can still post a tutorial on this. Gregory: you might want to add to your answer that the return objects need to subclass ReturnValueBase. Thanks a lot for your help! I had been going bonkers over this problem for so long!
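
    A minimal sketch of the fix described in the solution above; the class bodies are abbreviated, ReturnValueBase comes from the poster's description, and the @XmlSeeAlso list would name every concrete return type the service actually uses:

    // ReturnValueBase.java
    import javax.xml.bind.annotation.XmlSeeAlso;

    // Listing the subclasses here registers them with the JAXB context,
    // which is what the MarshalException was complaining about.
    @XmlSeeAlso({Patient.class})
    public abstract class ReturnValueBase {
    }

    // Patient.java
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlRootElement(name = "Patient")
    public class Patient extends ReturnValueBase {
        // fields and accessors as in the question...
    }

    // WebServiceReply.java
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlRootElement(name = "WebServiceReply")
    public class WebServiceReply {
        // Typed as the base class rather than Object, so the marshaller
        // knows every possible runtime type via @XmlSeeAlso.
        private ReturnValueBase returnedObject;
        private String returnedType;
        private String message;
        private String errorMessage;
        // getters and setters...
    }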

    Read the article
