Search Results

Search found 3985 results on 160 pages for 'contexts and dependency injection'.

  • Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

    - by Adam
    I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting bitmap. Long term, I do not need to display the resulting image -- I just need to analyze it in memory. Near term, I am displaying it on screen to help with debugging.

    I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, the outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them. I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look correct. Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is empty while the one created from NSAffineTransform contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. That seems very strange to me.

    My understanding (from reading the Core Image programming guide) is that a CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until it is drawn. Given that, it would make sense that the CIImage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform, yet doesn't have data after running the CILineOverlay filter.

    Basically, I am trying to determine whether creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words, whether that should force the bitmap to be populated. If someone knows the answer, please let me know -- it will save me a few hours of trial-and-error experimenting!

    Finally, if the answer is "you must draw to a graphics context", then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide (Graphics Contexts, listings 2-7 and 2-8: drawing to a bitmap graphics context)? That is the path I am about to head down, but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. If there is an easier or better way, please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. Since I already have code that works with an NSBitmapImageRep, I would prefer to convert myResult into an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me.

        CIImage *myResult = [transform valueForKey:@"outputImage"];
        // imageRepWithCIImage: returns a ready-made rep; no separate alloc is needed
        NSCIImageRep *ir = [NSCIImageRep imageRepWithCIImage:myResult];
        NSImage *outputImage = [[[NSImage alloc] initWithSize:
            NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease];
        [outputImage addRepresentation:ir];
        [outputImageView setImage:outputImage];
        NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage:myResult];

    Thanks, Adam
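
    One way to force the recipe to render -- a minimal sketch, not from the original post, using AppKit's drawing additions to CIImage; the variable names are illustrative:

        // Draw the CIImage into an offscreen NSImage to force rendering,
        // then capture the rendered pixels into an NSBitmapImageRep.
        NSSize size = NSMakeSize(inputImage.size.width, inputImage.size.height);
        NSImage *scratch = [[[NSImage alloc] initWithSize:size] autorelease];
        [scratch lockFocus];
        [myResult drawAtPoint:NSZeroPoint
                     fromRect:NSMakeRect(0, 0, size.width, size.height)
                    operation:NSCompositeCopy
                     fraction:1.0];
        NSBitmapImageRep *rendered = [[NSBitmapImageRep alloc]
            initWithFocusedViewRect:NSMakeRect(0, 0, size.width, size.height)];
        [scratch unlockFocus];
        // rendered.bitmapData now points at the pixel bytes for examination.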

  • Online voice chat: Why client-server model vs. peer-to-peer model?

    - by sstallings
    I am adding online voice chat to a Silverlight app. I've been reviewing current apps, services and SDKs found through online searches and forums, and I'm finding that the majority of these implement a client-server (C/S) model. I'm trying to understand why that model is chosen over a peer-to-peer (P2P) model. To me, P2P seems preferable: going directly between peers would be more efficient (fewer IP hops and no processing along the way by a server computer), and there would be no need for a server with its costs and dependencies. I found that some products offer the ability to switch from P2P to C/S if P2P proves insufficient.

    As I thought more about it, I could see that C/S could be better when more than two peers are involved in a conversation: the server (supposedly with more bandwidth) could do a better job of relaying each peer's outgoing traffic to the multiple other peers. In C/S many-to-many voice chatting, each peer's upstream broadband (which is where the bottleneck inherently is) would only have to carry each item of voice traffic once; the server would then use its superior bandwidth to relay the message to the other peers. But in one-on-one voice chatting, it seems that P2P would be best: a server would not reduce either peer's bandwidth requirements and would only add unnecessary overhead, dependency and cost.

    For one-on-one voice chatting:

    - Am I mistaken on anything above?
    - Would peer-to-peer be best?
    - Would a server provide anything of value that could not be provided by a client-only program?
    - Is there anything else I should be taking into consideration?
    - Lastly, can you recommend any Silverlight P2P or C/S voice chat products?

    Thanks in advance for any info.

  • RESTfully Nesting Resource Routes with Single Identifiers

    - by Craig Walker
    In my Rails app I have a fairly standard has_many relationship between two entities. A Foo has zero or more Bars; a Bar belongs to exactly one Foo. Both Foo and Bar are identified by a single integer ID value, and these values are unique across all of their respective instances. Bar is existence-dependent on Foo: it makes no sense to have a Bar without a Foo.

    There are two ways to RESTfully reference instances of these classes. Given a Foo.id of "100" and a Bar.id of "200":

    1. Reference each Foo and Bar through their own top-level URL routes, like so:

        /foo/100
        /bar/200

    2. Reference Bar as a nested resource through its instance of Foo:

        /foo/100
        /foo/100/bar/200

    I like the nested routes in #2, as they more closely represent the actual dependency relationship between the entities. However, it does seem to involve a lot of extra work for very little gain. Assuming that I know about a particular Bar, I don't need to be told about a particular Foo; I can derive that from the Bar itself. In fact, I probably should be validating the routed Foo everywhere I go (so that you couldn't do /foo/150/bar/200, assuming Bar 200 is not assigned to Foo 150). Ultimately, I don't see what this buys me. So, are there any other arguments for or against these two routing schemes?
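
    For what it's worth, Rails 2.3's shallow routes split the difference between the two schemes -- a sketch, assuming the resources are named foos and bars:

        # config/routes.rb -- generates /foos/100/bars for the collection,
        # but short /bars/200 URLs (no enclosing foo) for individual members.
        ActionController::Routing::Routes.draw do |map|
          map.resources :foos, :shallow => true do |foo|
            foo.resources :bars
          end
        end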

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A and B) are constantly updated with user transactions; the other table (C) contains product info that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process, and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load, to make sure we have the most efficient configuration. There is one column in the product table that is used in the view and has to be updated each week. The problem we are now encountering is that as transaction volume increases on tables A and B, the weekly update to table C is causing deadlocks.

    I have tried several different methods, to no avail:

    1. I was hoping we could change the view so that table C could be a dirty read ("WITH (NOLOCK)"), but apparently that functionality is not available with indexed views.
    2. I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view.
    3. I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to point it at the new table. However, when I did that, the indexes on my view were dropped, and it took quite a bit of time to recreate them.
    4. We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.

    Questions:

    a) We are using SQL Server 2005. Does SQL Server 2008 have new indexed-view functionality that would help us? Is there now a way to do dirty reads with an indexed view?
    b) Is there a better approach to altering an existing view to point to a new table?

    Thanks!
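
    For reference, the chunked update in item 4 is usually shaped like the sketch below -- the table and column names here are invented, and each batch commits separately so locks are released between chunks:

        -- Hypothetical names: update ProductInfo.WeeklyValue in small batches.
        DECLARE @rows INT;
        SET @rows = 1;
        WHILE @rows > 0
        BEGIN
            BEGIN TRAN;
            UPDATE TOP (100) dbo.ProductInfo
            SET    WeeklyValue = NewValue
            WHERE  WeeklyValue <> NewValue;   -- only touch rows that still need the change
            SET @rows = @@ROWCOUNT;
            COMMIT;
        END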

  • rake test:units fails with status ()

    - by ander163
    New user here; I haven't been building tests as I go, so I'm an idiot. The application runs, but the tests fail. Here is what appears to be relevant:

        ....
        ** Execute test:units
        /usr/local/bin/ruby -I"lib:test" "/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" \
          "test/unit/event_test.rb" "test/unit/helpers/calendar1_helper_test.rb" "test/unit/helpers/events_helper_test.rb" \
          "test/unit/helpers/homepage_helper_test.rb" "test/unit/helpers/main_helper_test.rb" "test/unit/helpers/mobile_helper_test.rb" \
          "test/unit/helpers/notes_helper_test.rb" "test/unit/helpers/password_resets_helper_test.rb" "test/unit/helpers/projects_helper_test.rb" \
          "test/unit/helpers/search_helper_test.rb" "test/unit/helpers/start_helper_test.rb" "test/unit/helpers/superadmin_helper_test.rb" \
          "test/unit/helpers/tasks_helper_test.rb" "test/unit/helpers/user_sessions_helper_test.rb" "test/unit/helpers/users_helper_test.rb" \
          "test/unit/note_test.rb" "test/unit/notifier_test.rb" "test/unit/project_test.rb" "test/unit/task_test.rb" \
          "test/unit/user_session_test.rb" "test/unit/user_test.rb"
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        /usr/lib/ruby/gems/1.8/gems/hpricot-0.6.164/lib/universal-java1.6/fast_xs.bundle: [BUG] Segmentation fault
        ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0]
        rake aborted!
        Command failed with status (): [/usr/local/bin/ruby -I"lib:test" "/usr/loc...]
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:995:in `sh'
        /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1010:in `call'

  • Can I create template-based library objects in Dreamweaver CS5?

    - by Danjah
    At work we need two 'streams' of template. The first is general layout templates, like the ones already available in the MX through CS5 packages (except we'd have our own customised ones). The second is more granular objects, some of which are functional. In both cases, I don't want Jimmy to be able to wreak havoc inside anything other than the 'editable regions' which make up the templates.

    Now this is fine if I stick with the first scenario (layout templates), where there's simply a big chunk of editable region for good ole Jim to sprawl into; think of this as the 'body content' area. But I really do need these granular library (or snippet) objects to work in the same way. Unfortunately, with my attempts so far they don't work as I'd have thought -- perhaps for good reason?

    When I create a blank template and throw in my chunk of HTML (unobtrusive JS and external CSS use selectors in this HTML to provide style and function) and save it as a new library item or snippet, all looks well. Then I create a new document based on a layout template and save it as a plain HTML file (still all good so far). Next I drop in my custom library item... still all good... but then I go to save the document and it only allows me to save it as a new template! I expected it would just allow me to save it as HTML and simply respect the defined editable regions, as happens in the containing page's 'body content' editable region.

    Apologies if that got specific and technical quite quickly, but it is quite particular. If you want some example files, let me know and I'll zip some up. Many thanks :)

    P.S. It is not a requirement that library objects somehow inject their dependency files into the newly created page -- I already know what they'll be. Also, I know I must 'detach from original' once I drop a library item into a document, which then allows customisation of the library object.

  • Execute SQL on CSV files via JDBC

    - by Markos Fragkakis
    Dear all, I need to apply an SQL query to CSV files (comma-separated text files). My SQL is predefined by another tool and is not eligible for change; it may contain embedded selects and table aliases in the FROM part.

    For my task I considered four approaches. The first two are open-source (this is a project requirement) libraries that provide JDBC drivers over CSV; the others are heavier alternatives:

    1. CsvJdbc
    2. XlSQL
    3. JBoss Teiid
    4. Creating an Apache Derby DB, loading all CSVs as tables, and executing the query there.

    These are the problems I encountered:

    1. CsvJdbc does not accept the syntax of my SQL (the internal selects and table aliases). Furthermore, it has not been maintained since 2004.
    2. I could not get XlSQL to work, as it has as a dependency a SAX parser that causes exceptions when parsing other documents. Similarly, no change since 2004.
    3. I have not checked whether Teiid supports the syntax, but it seems like overhead. It needs several entities defined (Virtual Databases, Bindings), although the mailing list told me the last release supports runtime creation of the required objects. Has anyone used it for such a simple task? (Normally it can connect to several types of data, like CSV, XML or other DBs, and create a virtual, unified one.)
    4. Can this even be done easily?

    Of the four things I considered/tried, only 3 and 4 seem viable to me. Any advice on these, or any other way in which I can query my CSV files? Cheers
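
    On option 4, a minimal sketch of what the Derby route could look like -- the table layout, file name and query are invented for illustration, and each CSV has to match a table schema declared up front:

        import java.sql.*;

        public class CsvViaDerby {
            public static void main(String[] args) throws Exception {
                // Embedded Derby: the whole "database server" lives in-process.
                Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
                Connection conn = DriverManager.getConnection("jdbc:derby:csvdb;create=true");
                Statement st = conn.createStatement();
                // One table per CSV; the schema must be declared up front.
                st.execute("CREATE TABLE ORDERS (ID INT, CUSTOMER VARCHAR(64), TOTAL DECIMAL(10,2))");
                // Derby's built-in import procedure loads a comma-separated file
                // (NULLs take the defaults: comma delimiter, double-quote, system codeset).
                st.execute("CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(NULL, 'ORDERS', 'orders.csv', NULL, NULL, NULL, 0)");
                // The predefined SQL (subselects, aliases and all) then runs against Derby.
                ResultSet rs = st.executeQuery("SELECT o.CUSTOMER, SUM(o.TOTAL) FROM ORDERS o GROUP BY o.CUSTOMER");
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " " + rs.getBigDecimal(2));
                }
            }
        }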

  • Is jQuery always the answer?

    - by Kibbee
    I've come across a couple of questions, such as this one, and I really have to wonder why "use jQuery" seems to be the answer whenever somebody asks how to do something in JavaScript. I understand that jQuery can save you a lot of time and can help you out a lot, especially when you are doing a lot of fancy JavaScript on your site. However, in instances like this, and in many others, it seems like it's just jumping around the problem instead of answering the question.

    I also feel this builds too much dependency on libraries. I've seen way too many developers who rely so heavily on libraries that, if they encounter a situation without one, they are completely unable to function. There are already enough developers who don't know JavaScript, without telling everybody to skip learning JavaScript and just use jQuery.

    So, to reiterate the question: do you think there's too much of a tendency to use jQuery for small pieces of JavaScript, when most of jQuery's functionality isn't being used? Should developers be fluent in bare JavaScript so they don't become too dependent on libraries?

    [Additional related conversation topic] Does the existence of jQuery give too much slack to the browser developers who write the JavaScript engines? If we just have workarounds to cover all the inconsistencies, what pressure is there on browser makers to ensure that their JavaScript implementation works as it should? I feel this extrapolates the same problem discussed in SO Podcast #36 of "be conservative in what you send, liberal in what you accept": by being so liberal with bad JavaScript engines, and using a common library to work around the flaws, we are promoting their use and extending the problem.

  • Getting Unity to dispose of injected objects

    - by Johan Levin
    Is there a way to make Unity dispose of property-injected objects as part of Teardown?

    The background is that I am working on an application that uses ASP.NET MVC 2, Unity and WCF. We have written our own MVC controller factory that uses Unity to instantiate the controller, and WCF proxies are injected using the [Dependency] attribute on public properties of the controller. At the end of the page life cycle, the ReleaseController method of the controller factory is called, and we call IUnityContainer.Teardown(theMvcController). At that point the controller is disposed as expected, but I also need to dispose of the injected WCF proxies. (Actually I need to call Close and/or Abort on them rather than Dispose, but that is a later problem.)

    I could of course override the controllers' Dispose methods and clean up the proxies there, but I don't want the controllers to have to know about the lifecycles of the injected interfaces, or even that they refer to WCF proxies. If I need to write code for this myself, what would be the best extension point? I'd appreciate any pointers.
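
    One possible extension point -- a hedged sketch, not the poster's code: do the cleanup in the custom controller factory, which already owns the controller lifecycle, by reflecting over the [Dependency]-decorated properties (the container field name here is illustrative):

        // In the custom controller factory: dispose of [Dependency]-injected
        // proxies before tearing the controller down. Requires System.Linq.
        public override void ReleaseController(IController controller)
        {
            var injectedProps = controller.GetType().GetProperties()
                .Where(p => p.IsDefined(typeof(DependencyAttribute), true));
            foreach (var prop in injectedProps)
            {
                var disposable = prop.GetValue(controller, null) as IDisposable;
                if (disposable != null)
                    disposable.Dispose(); // for WCF proxies, prefer Close()/Abort() via ICommunicationObject
            }
            this.container.Teardown(controller); // 'container' is the factory's IUnityContainer
        }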

  • Error while using JSFUnit/HtmlUnit/CSSParser

    - by brianf
    We've just recently converted our project to using Maven for builds and dependency management, and after the conversion I'm getting the following exception when trying to run any JSFUnit tests in my project:

        Exception class=[java.lang.UnsupportedOperationException]
        com.gargoylesoftware.htmlunit.ScriptException: CSSRule com.steadystate.css.dom.CSSCharsetRuleImpl is not yet supported.
            at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:527)
            at net.sourceforge.htmlunit.corejs.javascript.Context.call(Context.java:537)
            ...

    All the dependencies and JARs for JSFUnit were pulled with Maven from the JBoss repository (http://repository.jboss.com/maven2/). We're using the following dependencies in the project:

    - jboss-jsfunit-core 1.2.0.Final
    - jboss-jsfunit-richfaces 1.2.0.Final
    - richfaces-ui 3.3.2.GA
    - openfaces 2.0
    - JSF 1.2_12
    - Facelets 1.1.14

    Before the dependencies were managed by Maven, we were able to run our JSFUnit tests just fine. I was able to semi-fix the issue by using an ss_css2.jar file that someone had tucked into our WEB-INF/lib directory (from before the Maven conversion). I'm hoping to find out if there's something else I can do to fix the dependencies in Maven, rather than resorting to managing some of the dependencies myself.
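
    A hedged sketch of the usual Maven move here: exclude the transitive CSS parser that HtmlUnit pulls in, then pin the version that matches the working ss_css2.jar. The exclusion coordinates below are guesses and would need to be checked against the output of `mvn dependency:tree`:

        <dependency>
          <groupId>org.jboss.jsfunit</groupId>
          <artifactId>jboss-jsfunit-core</artifactId>
          <version>1.2.0.Final</version>
          <exclusions>
            <exclusion>
              <!-- hypothetical coordinates for the steadystate CSS parser -->
              <groupId>net.sourceforge.cssparser</groupId>
              <artifactId>cssparser</artifactId>
            </exclusion>
          </exclusions>
        </dependency>
        <!-- then declare the known-good parser version as an explicit dependency -->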

  • Unit testing with mocks when the SUT is leveraging the Task Parallel Library

    - by StevenH
    I am trying to unit test / verify that a method is being called on a dependency by the system under test. The dependency is IFoo. The dependent class is IBar, implemented as Bar. When Start() is called on a Bar instance, Bar calls Start() on each IFoo in a new System.Threading.Tasks.Task.

    Unit test (Moq):

        [Test]
        public void StartBar_ShouldCallStartOnAllFoo_WhenFoosExist()
        {
            // ARRANGE: create foos and set up expectations
            var mockFoo0 = new Mock<IFoo>();
            mockFoo0.Setup(foo => foo.Start());
            var mockFoo1 = new Mock<IFoo>();
            mockFoo1.Setup(foo => foo.Start());

            // Add mock objects to a collection
            var foos = new List<IFoo> { mockFoo0.Object, mockFoo1.Object };

            IBar sutBar = new Bar(foos);

            // ACT
            sutBar.Start(); // should call mockFoo.Start()

            // ASSERT
            mockFoo0.VerifyAll();
            mockFoo1.VerifyAll();
        }

    Implementation of IBar as Bar:

        class Bar : IBar
        {
            private IEnumerable<IFoo> Foos { get; set; }

            public Bar(IEnumerable<IFoo> foos)
            {
                Foos = foos;
            }

            public void Start()
            {
                foreach (var foo in Foos)
                {
                    var f = foo; // copy: in C# 4 and earlier, a lambda must not capture the loop variable
                    Task.Factory.StartNew(() => { f.Start(); });
                }
            }
        }

    It appears that the issue is that the call to foo.Start() takes place on another thread (/task), so the mock (Moq framework) can't detect it before the test's assertions run. But I could be wrong. Thanks
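
    A sketch of the usual way to make that verification deterministic -- not from the original post -- is to have Start() observe task completion before returning, so the asserts only run once every foo.Start() has fired:

        // Requires System.Linq. Materialize one task per foo, then block until
        // all have run, so callers (and tests) see Start() as completed work.
        public void Start()
        {
            var tasks = Foos
                .Select(foo => Task.Factory.StartNew(() => foo.Start()))
                .ToArray();
            Task.WaitAll(tasks);
        }

    Whether blocking belongs in the production Start() is a design choice; an alternative is to return the tasks (or expose a completion signal) and have the test wait on those instead.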

  • Converting a Linq expression tree that relies on SqlMethods.Like() for use with the Entity Framework

    - by JohnnyO
    I recently switched from using LINQ to SQL to the Entity Framework. One of the things that I've really struggled with is getting a general-purpose IQueryable extension method that was built for LINQ to SQL to work with the Entity Framework. This extension method depends on the Like() method of SqlMethods, which is LINQ to SQL specific.

    What I really like about this extension method is that it allows me to dynamically construct a SQL LIKE statement on any object at runtime, by simply passing in a property name (as a string) and a query clause (also as a string). Such an extension method is very convenient for grids like Flexigrid or jqGrid. Here is the LINQ to SQL version (taken from this tutorial: http://www.codeproject.com/KB/aspnet/MVCFlexigrid.aspx):

        public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
        {
            var type = typeof(T);
            var property = type.GetProperty(propertyName);
            var parameter = Expression.Parameter(type, "p");
            var propertyAccess = Expression.MakeMemberAccess(parameter, property);
            var constant = Expression.Constant("%" + keyword + "%");
            var like = typeof(SqlMethods).GetMethod("Like", new Type[] { typeof(string), typeof(string) });
            MethodCallExpression methodExp = Expression.Call(null, like, propertyAccess, constant);
            Expression<Func<T, bool>> lambda = Expression.Lambda<Func<T, bool>>(methodExp, parameter);
            return source.Where(lambda);
        }

    With this extension method, I can simply do the following:

        someList.Like("FirstName", "mike");
        // or
        anotherList.Like("ProductName", "widget");

    Is there an equivalent way to do this with the Entity Framework? Thanks in advance.
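
    A hedged sketch of the common workaround: build the same predicate around string.Contains, which LINQ providers (including the Entity Framework) translate to a LIKE '%keyword%' query. This removes the SqlMethods dependency rather than reproducing arbitrary LIKE patterns:

        public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
        {
            var parameter = Expression.Parameter(typeof(T), "p");
            var propertyAccess = Expression.Property(parameter, propertyName);
            // p.Property.Contains(keyword) -- translated by the provider to SQL LIKE
            var contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
            var call = Expression.Call(propertyAccess, contains, Expression.Constant(keyword));
            var lambda = Expression.Lambda<Func<T, bool>>(call, parameter);
            return source.Where(lambda);
        }

    StartsWith and EndsWith can be substituted the same way for 'keyword%' and '%keyword' patterns.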

  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using the JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem. I am running into trouble with Exchange, which responds with:

        HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'.

    (2.1 quoted the charset value, but otherwise gave the same response.) Apparently I need to dictate the exact Content-Type header, down to the space, for Exchange to be happy. Is there a way for me to do this without being forced to manually rebuild the dependency? I currently rely on published Maven artifacts, and would like to continue doing so if at all possible.

    The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but I cannot add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI, not from the generated code. The resulting HTTP transport is handled by the standard http/https client from Sun JRE5 or JRE6.
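
    For what it's worth, JAX-WS lets a client attach HTTP headers through the request context -- a sketch, with the explicit caveat that whether the RI honors an override of Content-Type specifically (rather than merely adding new headers) would need to be verified against this Metro version:

        import java.util.Collections;
        import java.util.List;
        import java.util.Map;
        import javax.xml.ws.BindingProvider;
        import javax.xml.ws.handler.MessageContext;

        // 'port' is the generated Exchange service port, which is a BindingProvider
        Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
        Map<String, List<String>> headers =
            Collections.singletonMap("Content-Type",
                Collections.singletonList("text/xml; charset=utf-8"));
        ctx.put(MessageContext.HTTP_REQUEST_HEADERS, headers);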

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do.

    I have a list of tests, each with the following attributes:

    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. Then, for each test:

    - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
    - the list of items is sent to the URI (using POST or stdin);
    - the result is retrieved as a YAML file listing the state and comments for the test, for each tested item;
    - the results are stored in the DB;
    - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, Graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be, for example:

    - backup: hosted on the backup machine(s), called through HTTP; checks whether each machine's backup went well;
    - DNS: hosted on the local machine, called via stdin; checks whether each machine's FQDN has a valid DNS entry.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
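
    Since a Python rewrite is on the table, a minimal Python 3 sketch of the scheduling core -- dependency-ordered, parallel where possible. This is illustrative only; the DB selection, POST/stdin dispatch and YAML handling are elided behind the run_one callable:

        from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

        def run_tests(tests, run_one):
            """tests: dict name -> set of dependency names; run_one: callable(name)."""
            done = set()
            running = {}           # future -> test name
            pending = dict(tests)  # tests not yet scheduled
            with ThreadPoolExecutor(max_workers=8) as pool:
                while pending or running:
                    # Schedule every test whose dependencies have all completed.
                    ready = [n for n, deps in pending.items() if deps <= done]
                    if not ready and not running:
                        raise RuntimeError("dependency cycle among: %s" % sorted(pending))
                    for name in ready:
                        running[pool.submit(run_one, name)] = name
                        del pending[name]
                    # Block until at least one running test finishes.
                    finished, _ = wait(running, return_when=FIRST_COMPLETED)
                    for fut in finished:
                        fut.result()  # re-raise any exception from the test runner
                        done.add(running.pop(fut))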

  • Error while trying to install CommunityEngine: NameError - "undefined local variable or method 'map'"

    - by floatingfrisbee
    I'm trying to install CommunityEngine using the instructions here: http://github.com/bborn/communityengine

    At first I thought the problem might be that I had Rails 2.3.5 and desert 0.5.3, which were higher versions than those mentioned on the installation site. However, moving to Rails 2.3.4 and desert 0.5.2 did not work. Any ideas as to what might be going on?

        $ script/generate plugin_migration
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        /cygdrive/c/users/me/jesse/projects/ceng1/config/routes.rb:2: undefined local variable or method `map' for main:Object (NameError)
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:147:in `load_without_new_constant_marking'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:147:in `load_without_desert'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:18:in `load'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:32:in `__each_matching_file'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:17:in `load'
            from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!'
            from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `each'
            from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!'
            from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:266:in `reload!'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:537:in `initialize_routing'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:188:in `process'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `send'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `run'
            from /cygdrive/c/users/me/jesse/projects/ceng1/config/environment.rb:6
            from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/generate.rb:1
            from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from script/generate:3
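
    For context -- an inference from the error, not from the post -- Rails 2.x only provides a `map` object inside the routes draw block, so a routes.rb whose line 2 references `map` at the top level would produce exactly this NameError. The expected shape is something like the sketch below (the from_plugin line is what CommunityEngine's install docs add, worth double-checking against its README):

        # config/routes.rb (Rails 2.x): map only exists inside the draw block
        ActionController::Routing::Routes.draw do |map|
          map.from_plugin :community_engine
          map.root :controller => 'base'   # illustrative
        end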

  • Table valued function only returns CLR error

    - by Anthony
    I have read-only access to a database that was set up for a third-party, closed-source app. One group of (hopefully) useful table functions only returns the error:

        Failed to initialize the Common Language Runtime (CLR) v2.0.50727 with HRESULT 0x80131522. You need to restart SQL server to use CLR integration features. (severity 16)

    In theory, the third-party app should be able to use these functions (either directly or indirectly), so I'm convinced I'm not setting things up right. I'm very new to SQL Server, so I could be missing something obvious -- or something really slight; I have no idea. Here is an example of a query that returns the above error:

        SELECT *
        FROM dbo.UncompressDataDateRange(4, 'Apr 24 2010 12:00AM', 'Apr 30 2010 12:00AM')

    The function takes three parameters:

    - the data set (int) -- the data has 6 classifications, and the giant table this pulls from has a column to indicate which is which;
    - startDate (smalldatetime);
    - endDate (smalldatetime).

    There are other, similar functions that expand on the same idea, all returning the same error.

    Quick note: I'm not sure if this is relevant, but I was able to connect to the database via SQL Studio (though without the privileges to script the functions as code), and I checked the dependencies of the sample function above. It turns out it is a dependent of a view that I have gotten to work, and that view is a dependent of the larger, much hairier data table. This makes me think I should somehow be pointing the function at the results of the view, but I'm not seeing any documentation that shows how that is done.
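
    For reference -- this addresses the error text rather than the poster's read-only permissions -- CLR integration is a server-level option that an administrator would enable like so, and per the message the instance may also need a restart afterwards:

        -- Requires server-level ALTER SETTINGS permission (e.g. sysadmin)
        EXEC sp_configure 'clr enabled', 1;
        RECONFIGURE;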

  • Compiling and linking libcurl to create a stand-alone DLL

    - by Haraldo
    Hi, I've managed to compile a DLL with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section, among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This is the first version I've managed to compile with no link issues etc. The problem I have now is that my DLL needs libcurl.dll to function, and that is not OK: my DLL needs to be independent. I have no idea how to implement this; it's taken all day just to get what I've got compiled.

    My current project settings:

    - Runtime Library is set to Multi-threaded DLL (debug/release respectively) under C/C++ > Code Generation.
    - A number of preprocessor symbols are set, CURL_STATICLIB being one of them.
    - Configuration Type is set to Dynamic Library.
    - Use of MFC is set to "Use MFC in a static library".
    - Additional Library Directories is set to the lib folders (debug/release respectively).

    I've noticed there is a curllib_static.lib file, which I've tried instead of curllib.lib as an additional dependency, but the project only compiles with the latter. This is driving me nuts! So I guess I need some guidance on how to make my DLL completely static so it doesn't have any dependencies. I notice my DLL currently depends on:

        CURLLIB.DLL
        MSVCR90D.DLL

    As I'm pretty new to C++, it could be a setting I'm missing in VS 2008, but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a) etc., but when I do this I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.
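
    A hedged sketch of how a static libcurl consumer is usually wired up on MSVC -- the extra system libraries listed are the commonly cited set for static builds; the exact list depends on how the .lib was built (SSL builds need more):

        /* Define CURL_STATICLIB before including curl.h so the headers do not
           declare __declspec(dllimport) symbols. */
        #define CURL_STATICLIB
        #include <curl/curl.h>

        /* Link the static curl library plus the Win32 libraries it depends on. */
        #pragma comment(lib, "curllib_static.lib")
        #pragma comment(lib, "ws2_32.lib")    /* Winsock */
        #pragma comment(lib, "wldap32.lib")   /* LDAP, referenced by libcurl */
        #pragma comment(lib, "winmm.lib")

    Note that MSVCR90D.DLL is the debug C runtime; a release build with the static runtime (/MT) removes that dependency as well.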

  • Delphi - working with DLLs for beginners

    - by doubleu
    Hi there, I'm a total newbie when it comes to DLLs, and I don't need to create them -- I just need to use one. I've read some tutorials, but they weren't as helpful as I'd hoped. Here's how I started: I downloaded the SDK which I need to use (ESTOS Tapi Server). I read the docs and spotted the DLL I need to use, which is ENetSN.dll, and so I registered it. Next I used Dependency Walker to take a look at the DLL -- and I was puzzled, because there are only these functions: DllCanUnloadNow, DllGetClassObject, DllRegisterServer and DllUnregisterServer, and these are not the functions mentioned in the docs. I think I have to call DllGetClassObject to get an object out of the DLL with which I can start to work. Unfortunately, the tutorials I found don't explain how this is done (or I didn't understand them). There are also three examples delivered, for VB and C++, but I wasn't able to 'translate' them into Delphi. If somebody knows a tutorial where this is explained, or could give me a pointer in the right direction, I would be very thankful.
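
    An aside, not from the post: those four exports are the signature of a COM server (which is also why it needed registering), so the object is normally obtained through COM rather than by calling DllGetClassObject directly. A hedged Delphi sketch, where the ProgID is entirely made up and would have to come from the SDK docs or the registry:

        uses ComObj;

        var
          Server: OleVariant;
        begin
          // COM locates the class via the registry entries DllRegisterServer created
          Server := CreateOleObject('EstosENetSN.Server'); // hypothetical ProgID
          // Server.SomeMethod(...);  -- methods as per the SDK documentation
        end;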

  • How can I use a class with the same name from another namespace in my class?

    - by Beau Simensen
    I have two classes with the same name in different namespaces, and I want one of these classes to reference the other. The reason is that I am migrating to some newer code, and I want to update the old code to simply pass through to the newer code. Here is a super basic example:

        #include <string>
        using std::string;

        namespace project { namespace legacy {
        class Content {
        public:
            Content(const string& url) : url_(url) { }
            string url() { return url_; }
        private:
            string url_;
        };
        }} // namespace project::legacy

        namespace project { namespace current {
        class Content {
        public:
            Content(const string& url) : url_(url) { }
            string url() { return url_; }
        private:
            string url_;
        };
        }} // namespace project::current

    I expected to be able to do the following to project::legacy::Content, but I am having trouble with some linker issues. Is this an issue with how I'm trying to do this, or do I need to look more closely at my project files to see if I have some sort of weird dependency issue?

        #include "project/current/Content.h"

        namespace project { namespace legacy {
        class Content {
        public:
            Content(const string& url) : actualContent_(url) { }
            string url() { return actualContent_.url(); }
        private:
            project::current::Content actualContent_;
        };
        }} // namespace project::legacy

    The test application compiles fine if I reference an instance of project::current::Content, but if I reference project::current::Content from project::legacy::Content I get:

        undefined reference to `project::current::Content::Content(...)'

    UPDATE: As it turns out, this was a GNU Autotools issue and was unrelated to the actual topic. Thanks to everyone for their help and suggestions!

  • Analyzing Windows crash dumps generated on XP/32 machines with Win7/64 ?

    - by Martin
    We have a problem with analyzing our Windows crash-dumps that were created on customer Windows XP/32 boxes on our development machines. Many of our development machines are now Win7/64 boxes, but it appears that the crash-dumps generated under Windows XP cannot full resolve their binary dependency, thereby leading to warnings when displaying the call stacks in Visual Studio (2005). For example, the msvcr80.dll cannot be resolved when loaded from a Win7 machine when the dump was generated on Windows XP: On XP, the WinSxS path appears to be C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll -- on Win7, the WinSxS path to the same DLL version seems to be: x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d08d7da0442a985d (I got this info from a forum thread on codeguru that link to an msdn article.) Visual Studio (2005) can now no longer correctly resolve the binaries for the crash-dump. How can I get Visual Studio to resolve all the correct binaries for my dump file? Note: I have already correctly set up the symbol server. The public symbols for most system DLLs (kernel32.dll, etc) and our symbols of our own DLLs are correctly loaded. It is just that the symbols of DLLs that reside in the WinSxS folder are not loaded, because it appears that Vista/7 uses a different path scheme for these DLLs than XP does and therefore Visual Studio cannot find the dll (not the pdb) on the local dev machine and so cannot load the corresponding symbols for the dump file.

  • Will the following NHibernate interface mapping work?

    - by Ben Aston
    I'd like to program against interfaces when working with NHibernate, due to type dependency issues within the solution I am working on. SO questions such as this one indicate it is possible. I have an ILocation interface and a concrete Location type. Will the following work?

    HBM mapping:

        <class name="ILocation" abstract="true" table="ILocation">
          <id name="Id" type="System.Guid" unsaved-value="00000000-0000-0000-0000-000000000000">
            <column name="LocationId" />
            <generator class="guid" />
          </id>
          <union-subclass table="Location" name="Location">
            <property name="Name" type="System.String"/>
          </union-subclass>
        </class>

    Detached criteria usage via the interface:

        var criteria = DetachedCriteria.For<ILocation>().Add(Restrictions.Eq("Name", "blah"));
        var locations = criteria.GetExecutableCriteria(UoW.Session).List<ILocation>();

    Are there any issues with not using the hilo ID generator, and/or with this approach in general?

  • Specify package path for android:entries

    - by Priyank
    Hi. I am using the following code in an Android preferences page to show a list of items. The list and values are located in a file at "app/res/xml/time.xml":

        <ListPreference
            android:title="Time unit list"
            android:summary="Select the time unit"
            android:dependency="Main_Option"
            android:key="listPref"
            android:defaultValue="1"
            android:entries="?xml:time/timet"
            android:entryValues="@xml:time/timet_values" />

    The code for time.xml is as follows:

        <?xml version="1.0" encoding="utf-8"?>
        <resources>
            <string-array name="timet">
                <item>seconds</item>
                <item>minutes</item>
                <item>hours</item>
            </string-array>
            <string-array name="timet_values">
                <item>3600</item>
                <item>60</item>
                <item>1</item>
            </string-array>
        </resources>

    I am not able to reference these values in my preferences XML file (the code snippet above); it gives an error. How can I specify the packaged path for the ListPreference's entries and entryValues? Any help is appreciated. Cheers
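
    A hedged aside on the usual Android convention: array resources are compiled from res/values/ (any file name there works, e.g. res/values/arrays.xml) and referenced as @array/..., not from res/xml/. The conventional layout would be:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- res/values/arrays.xml -->
        <resources>
            <string-array name="timet">
                <item>seconds</item>
                <item>minutes</item>
                <item>hours</item>
            </string-array>
            <string-array name="timet_values">
                <item>3600</item>
                <item>60</item>
                <item>1</item>
            </string-array>
        </resources>

    and then, in the preference XML:

        android:entries="@array/timet"
        android:entryValues="@array/timet_values"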

  • How to delay static initialization within a property

    - by Mystagogue
    I've made a class that is a cross between a singleton (fifth version) and a (dependency-injectable) factory -- call it a "mono-factory"? It works, and looks like this:

        public static class Context
        {
            public static BaseLogger LogObject = null;

            public static BaseLogger Log
            {
                get { return LogFactory.instance; }
            }

            class LogFactory
            {
                static LogFactory() { }
                internal static readonly BaseLogger instance =
                    LogObject ?? new BaseLogger(null, null, null);
            }
        }

        // USAGE EXAMPLE:
        // Optional initialization, done once when the application launches...
        Context.LogObject = new ConLogger();
        // Example invocation used throughout the rest of the code...
        Context.Log.Write("hello", LogSeverity.Information);

    The idea is that the mono-factory could be expanded to handle more than one item (e.g. more than a logger). But I would have liked to write it like this:

        public static class Context
        {
            private static BaseLogger LogObject = null;

            public static BaseLogger Log
            {
                get { return LogFactory.instance; }
                set { LogObject = value; }
            }

            class LogFactory
            {
                static LogFactory() { }
                internal static readonly BaseLogger instance =
                    LogObject ?? new BaseLogger(null, null, null);
            }
        }

    The above does not work: the moment the Log property is touched (by a setter invocation), the code path related to the getter is executed... which means LogFactory's "instance" field is always set to the plain BaseLogger (setting LogObject is always too late!). So is there a decoration or other trick I can use to keep the "get" path of the Log property lazy while the set path is being invoked?
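
    A sketch of one conventional alternative, not from the post: drop the nested-class trick and defer the choice to the first read, so a set that happens before the first get wins (double-checked locking with a volatile field keeps it thread-safe; names match the post's example):

        public static class Context
        {
            private static readonly object sync = new object();
            private static volatile BaseLogger log;

            public static BaseLogger Log
            {
                get
                {
                    if (log == null)
                    {
                        lock (sync)
                        {
                            // First read decides: injected value wins, else the default.
                            if (log == null)
                                log = new BaseLogger(null, null, null);
                        }
                    }
                    return log;
                }
                set { log = value; } // inject before the first get to override the default
            }
        }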

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, this is a biggie. I have a well-structured yet monolithic code base with a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I deploy on application servers that may have different, conflicting versions of my libraries. I depend on around 30 JARs right now and am midway through wrapping them as bundles with bnd.

    Some of my modules are easy to declare versioned dependencies for, such as my networking components: they statically reference classes within the JRE and other bundled libraries. But my JDBC-related components instantiate drivers via Class.forName(...) and can use any of a number of drivers.

    I am breaking everything up into OSGi bundles by service area: my core classes/interfaces; reporting-related components; database-access (JDBC) components; and so on. I wish for my code to still be usable without OSGi, via a single JAR with all my dependencies (via JarJar), as well as to be modular via the OSGi metadata and granular bundles with dependency information.

    - How do I configure my bundle and my code so that it can dynamically utilize any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)?
    - Is there a runtime method to detect whether I am running in an OSGi container that is portable across containers (Felix/Equinox/etc.)?
    - Do I need to use a different class-loading mechanism when I am in an OSGi container?
    - Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver from my database module?
    - I also have a second method of obtaining a driver (via JNDI, which is only really applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?
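
    On the detection question, one widely used idiom -- a sketch, valid only if the OSGi API is at least optionally importable (e.g. Import-Package with resolution:=optional in the manifest); MyClass stands in for any class in the bundle:

        // Returns true when this class was loaded by an OSGi bundle class loader.
        static boolean inOsgiContainer() {
            try {
                return org.osgi.framework.FrameworkUtil.getBundle(MyClass.class) != null;
            } catch (NoClassDefFoundError e) {
                return false; // OSGi API not present: plain classpath mode
            }
        }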

  • Ruby on Rails: undefined method 'version_requirements' when attempting to start server after new install

    - by ezabak
    Hi there, I recently had to do a fresh install of Ruby on Rails. When I attempted to start the server for a project I had been working on before the new install, I received the following error:

        $ ruby script/server
        => Booting WEBrick...
        ./script/../config/../vendor/rails/railties/lib/rails/gem_dependency.rb:107:in `requirement': undefined method `version_requirements' for #<Gem::Dependency:0xb74bf764> (NoMethodError)
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `map'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:165:in `process'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `send'
            from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `run'
            from /media/78C0-455B/bidmc/schedule/config/environment.rb:13
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/servers/webrick.rb:59
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
            from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/server.rb:49
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
            from script/server:3

    I have the latest versions of ruby, rubygems, and rails. Any suggestions? Thanks.
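
    For background -- an inference from the error shape, not stated in the post -- newer RubyGems releases removed Gem::Dependency#version_requirements, which breaks the vendored Rails 2.x this project carries. A widely circulated workaround patched around it in config/boot.rb, just before Rails::Initializer runs (worth verifying against the project's exact Rails and RubyGems versions):

        if Gem::VERSION >= "1.3.6"
          module Rails
            class GemDependency
              def requirement
                r = super
                (r == Gem::Requirement.default) ? nil : r
              end
            end
          end
        end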
