Search Results

Search found 8389 results on 336 pages for 'shared calendar'.

Page 140/336 | < Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >

  • VB.net: Saving a Metafile / EMF as a bitmap (.tiff)

    - by pehaada
    Currently I have a third-party control that generates a Metafile. I can save the .wmf file to disk without issue. The problem is how do I render the Metafile as a TIFF file. Currently I have the following code to get my metafile and save it:

        Dim mf As Metafile = page.GetImage(TXTextControl.Page.PageContent.All)
        Dim enhMetafileHandle As IntPtr = mf.GetHenhmetafile()
        Dim h As IntPtr
        Dim bufferSize As UInteger = GetEnhMetaFileBits(enhMetafileHandle, 0, h)
        Dim buffer(CInt(bufferSize)) As Byte
        GetEnhMetaFileBits(enhMetafileHandle, bufferSize, buffer)
        Dim msMetafileStream As New MemoryStream
        msMetafileStream.Write(buffer, 0, CInt(bufferSize))
        Dim baMetafileData() As Byte
        baMetafileData = msMetafileStream.ToArray
        Dim g As Graphics = Graphics.FromImage(mf)
        mf.Dispose()
        File.WriteAllBytes("c:\a.wmf", baMetafileData)
    End Sub

        <System.Runtime.InteropServices.DllImportAttribute("gdi32.dll", EntryPoint:="GetEnhMetaFileBits")> _
        Public Shared Function GetEnhMetaFileBits(ByVal hEMF As System.IntPtr, ByVal nSize As UInteger, ByVal lpData As IntPtr) As UInteger
        End Function

        <System.Runtime.InteropServices.DllImportAttribute("gdi32.dll", EntryPoint:="GetEnhMetaFileBits")> _
        Public Shared Function GetEnhMetaFileBits(<System.Runtime.InteropServices.InAttribute()> ByVal hEMF As System.IntPtr, ByVal nSize As UInteger, ByVal lpData() As Byte) As UInteger
        End Function

    I've tried all sorts of Image and Graphics calls and just can't save the metafile as a .tiff. Any suggestions would be great. I even tried to create a new bitmap and draw the metafile onto it; I always end up with a GDI exception being thrown.

    Read the article

  • How to deal with recursive dependencies between static libraries using the binutils linker?

    - by Jack Lloyd
    I'm porting an existing system from Windows to Linux. The build is structured with multiple static libraries. I ran into a linking error where a symbol (defined in libA) could not be found in an object from libB. The linker line looked like:

        g++ test_obj.o -lA -lB -o test

    The problem of course being that by the time the linker finds it needs the symbol from libA, it has already passed it by, and it does not rescan, so it simply errors out even though the symbol is there for the taking. My initial idea was of course to simply swap the link order (to -lB -lA) so that libA is scanned afterwards, and any symbols missing from libB that are in libA are picked up. But then I found there is actually a recursive dependency between libA and libB! I'm assuming the Visual C++ linker handles this in some way (does it rescan by default?). Ways of dealing with this I've considered:

    * Use shared objects. Unfortunately this is undesirable from the perspective of requiring PIC compilation (this is performance-sensitive code and losing %ebx to hold the GOT would really hurt), and shared objects aren't needed.
    * Build one mega ar of all of the objects, avoiding the problem.
    * Restructure the code to avoid the recursive dependency (which is obviously the Right Thing to do, but I'm trying to do this port with minimal changes).

    Do you have other ideas to deal with this? Is there some way I can convince the binutils linker to perform rescans of libraries it has already looked at when it is missing a symbol?
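
    For what it's worth, the binutils linker can be told to rescan a set of archives until no new symbols are resolved; a sketch of what that link line could look like, using the library names from the question:

        # group the mutually dependent archives so ld iterates over them
        g++ test_obj.o -Wl,--start-group -lA -lB -Wl,--end-group -o test

        # older but equivalent trick: list an archive more than once
        g++ test_obj.o -lA -lB -lA -o test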

    Read the article

  • Enterprise Eclipse Provisioning - Or - How to share your standard Eclipse setup with other developers

    - by nikhil
    We use Eclipse as the IDE for developing all sorts of Java/J2EE applications in our 150-odd-person IT department. One of the common problems we have been seeing is that developers download and install different versions of Eclipse and plugins based on their personal likes and dislikes. We have been trying to bring some consistency to this and have standardized on the version and the plugins that developers should be using. So the problem now is how do we distribute this installation to the team. We have zipped the directories and shared them through a shared drive. But I am looking for a better solution using some kind of provisioning tool for Eclipse, with which people can install the IDE or get updates. Has anyone faced this problem? What are your solutions to this? How do you ensure a standard Eclipse environment across developers? I found Yoxos as a potential solution to this. Does anyone have any experience with it? Can p2 be used to do this?
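
    On the last point: p2 ships a headless "director" application that can script an install from an internal update site, so every developer provisions the same feature versions. A rough sketch only -- the repository URL and the installable-unit ID below are placeholders, not real ones:

        eclipse -nosplash \
          -application org.eclipse.equinox.p2.director \
          -repository http://updates.example.corp/helios-standard \
          -installIU com.example.standard.ide.feature.group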

    Read the article

  • Django 404 page not showing up

    - by Matthew Doyle
    Hey all, I'm in the middle of putting up my first Django application on shared hosting. This should be an easy thing, but I am just not seeing it. I tried to follow the directions of the Django documentation, and created a 404.html page within my template folder. I just wrote "This is a 404 page." in the .html file. I also did the same thing for a 500.html page and wrote in it "This is a 500 page." However when I hit a 'bad page' I get a standard 404 page from the browser ("Oops! This link appears to be broken." in Chrome) when I would expect "This is a 404 page." What's even more interesting is that out of frustration I wrote {% asdfjasdf %} in the 404.html, and instead of getting the "Oops!..." error I get "This is a 500 page," so it definitely sees the 404.html template. Here's what I can confirm:

    * Debug = False
    * I am running Apache on shared hosting
    * I have not done anything special with .htaccess and 404 errors
    * If I run with Debug = True, it says it's a 404 error
    * I am using FastCGI

    Anything else anyone thinks I could try? Thank you very much!
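
    One thing worth ruling out, since the custom 500 template clearly does get served: Chrome (and IE) replace very short error responses, roughly under 512 bytes, with their own "friendly" page, and a 404.html containing a single sentence is well under that. Padding the template past that size, or returning a fuller page from an explicit handler, is a cheap test. A minimal sketch, with illustrative module names:

        # urls.py -- optional; Django falls back to templates/404.html otherwise
        handler404 = 'mysite.views.page_not_found'

        # views.py
        from django.shortcuts import render_to_response

        def page_not_found(request):
            # keep the rendered body over ~512 bytes, or the browser
            # substitutes its own error page
            response = render_to_response('404.html')
            response.status_code = 404
            return response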

    Read the article

  • In C#, what thread will Events be handled in?

    - by Ben
    Hi, I have attempted to implement a producer/consumer pattern in C#. I have a consumer thread that monitors a shared queue, and a producer thread that places items onto the shared queue. The producer thread is subscribed to receive data... that is, it has an event handler and just sits around and waits for an OnData event to fire (the data is being sent from a 3rd-party API). When it gets the data, it sticks it on the queue so the consumer can handle it. When the OnData event does fire in the producer, I had expected it to be handled by my producer thread. But that doesn't seem to be what is happening. The OnData event seems as if it's being handled on a new thread instead! Is this how .NET always works... are events handled on their own thread? Can I control what thread will handle events when they're raised? What if hundreds of events are raised near-simultaneously... would each have its own thread? Thanks in advance! Ben
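
    Not .NET-specific, but the underlying behaviour is the same in most languages: a plain event handler runs synchronously on whichever thread raises the event, so if the third-party API fires OnData from its own worker or thread-pool threads, that is where the handler executes. A small Python sketch of the idea, with the usual queue hand-off to a dedicated consumer thread:

        import threading
        import queue

        work = queue.Queue()

        def on_data(item):
            # executes on whatever thread invoked it -- here, the "api-thread"
            print("handler running on", threading.current_thread().name)
            work.put(item)

        def consumer():
            while True:
                item = work.get()
                if item is None:
                    break
                print("consumed", item, "on", threading.current_thread().name)

        threading.Thread(target=consumer, name="consumer").start()

        # simulate a third-party source raising the event from its own thread
        api = threading.Thread(target=lambda: on_data(42), name="api-thread")
        api.start()
        api.join()
        work.put(None)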

    Read the article

  • Eclipse plugin installation/update issues

    - by The Elite Gentleman
    I've installed the following Team repository plugins (along with their dependencies) for Eclipse Helios (using the Eclipse updater):

    * MercurialEclipse 1.7.1
    * Subclipse 1.6.17
    * Subversive SVN

    All of these are the latest in Eclipse Marketplace. My problem is when I go to Eclipse "Preferences", under "Team" I only see CVS, but under Eclipse Marketplace I can see that these plugins are installed (it gives me an option to uninstall them). How do I configure my Team repositories to reflect under "Team" in Preferences? Also, there is an update for "Eclipse IDE for Java EE Developers", but when I try to update it, the following error occurs:

        Cannot complete the install because of a conflicting dependency.
        Software being installed: Eclipse IDE for Java EE Developers 1.3.2.20110301-1807 (epp.package.jee 1.3.2.20110301-1807)
        Software currently installed: Shared profile 1.0.0.1276787175574 (SharedProfile_epp.package.jee 1.0.0.1276787175574)
        Only one of the following can be installed at once:
          toolingepp.package.jee.configuration 1.3.2.20110301-1807
          toolingepp.package.jee.configuration 1.3.0.20100617-0521
        Cannot satisfy dependency:
          From: Shared profile 1.0.0.1276787175574 (SharedProfile_epp.package.jee 1.0.0.1276787175574)
          To: toolingepp.package.jee.configuration [1.3.0.20100617-0521]
        Cannot satisfy dependency:
          From: Eclipse IDE for Java EE Developers 1.3.2.20110301-1807 (epp.package.jee 1.3.2.20110301-1807)
          To: toolingepp.package.jee.configuration [1.3.2.20110301-1807]

    How do I solve it? Yes, I've spent days Googling for this issue but none of it solved my problem. Thanks in advance.

    Read the article

  • Looking for streaming xml pretty printer in C/C++ using expat or libxml2

    - by Mark Zeren
    I'm looking for a streaming XML pretty printer for C/C++ that's either self-contained or that uses libxml2 or expat. I've searched a bit and not found one. It seems like something that would be generally useful. Am I missing an obvious tool that does this? Background: I have a library that outputs XML without whitespace, all on one line. In some cases I'd like to pretty print that output. I'm looking for a BSD-ish licensed C/C++ library or sample code that will take a raw XML byte stream and pretty print it. Here's some pseudo code showing one way that I might use this functionality:

        void my_write(const char* buf, int len);

        PrettyPrinter pp(bind(&my_write));
        while (...) {
            // ... get some more xml ...
            const char* buf = xmlSource.get_buf();
            int len = xmlSource.get_buf_len();
            int written = pp.write(buf, len); // calls my_write with pretty printed xml
            // ... error handling, maybe call write again, etc. ...
        }

    I'd like to avoid instantiating a DOM representation. I already have dependencies on the expat and libxml2 shared libraries, and I'd rather not add any more shared library dependencies.
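
    Not the C/C++ answer, but the shape of a streaming pretty printer over expat-style callbacks is small enough to sketch; here it is in Python (whose expat handlers map closely onto the C API), ignoring corner cases such as mixed content and namespaces:

        import sys
        import xml.parsers.expat

        class PrettyPrinter:
            def __init__(self, write=sys.stdout.write, indent="  "):
                self.write, self.indent, self.depth = write, indent, 0
                self.parser = xml.parsers.expat.ParserCreate()
                self.parser.StartElementHandler = self.start
                self.parser.EndElementHandler = self.end
                self.parser.CharacterDataHandler = self.chars

            def start(self, name, attrs):
                pairs = "".join(' %s="%s"' % kv for kv in attrs.items())
                self.write("%s<%s%s>\n" % (self.indent * self.depth, name, pairs))
                self.depth += 1

            def end(self, name):
                self.depth -= 1
                self.write("%s</%s>\n" % (self.indent * self.depth, name))

            def chars(self, data):
                if data.strip():
                    self.write("%s%s\n" % (self.indent * self.depth, data.strip()))

            def feed(self, buf, final=False):
                # streaming: call repeatedly with whatever chunk just arrived
                self.parser.Parse(buf, final)

        PrettyPrinter().feed('<a><b x="1">hi</b></a>', final=True)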

    Read the article

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The MSDN page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time there are any changes made to the Front End they need to be deployed to every user machine. My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving this on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers like possible corruption etc.? Thank you

    EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a follow-up to my previous question, From SQL Server to MS Access 2007.

    Read the article

  • How to minimize total cost of shortest path tree

    - by Michael
    I have a directed acyclic graph with positive edge-weights. It has a single source and a set of targets (vertices furthest from the source). I find the shortest paths from the source to each target. Some of these paths overlap. What I want is a shortest path tree which minimizes the total sum of weights over all edges. For example, consider two of the targets. Given all edge weights equal, if they share a single shortest path for most of their length, then that is preferable to two mostly non-overlapping shortest paths (fewer edges in the tree equals lower overall cost). Another example: two paths are non-overlapping for a small part of their length, with high cost for the non-overlapping paths, but low cost for the long shared path (low combined cost). On the other hand, two paths are non-overlapping for most of their length, with low costs for the non-overlapping paths, but high cost for the short shared path (also, low combined cost). There are many combinations. I want to find solutions with the lowest overall cost, given all the shortest paths from source to target. Does this ring any bells with anyone? Can anyone point me to relevant algorithms or analogous applications? Cheers!
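
    A possible first step, sketched below in Python and offered only as an outline: one Dijkstra pass from the source gives the distances, and keeping just the edges with dist[u] + w == dist[v] yields the "shortest-path DAG" containing every shortest path. The remaining problem -- choosing the cheapest sub-DAG of that DAG that still reaches every target -- is the Steiner-tree-like part where an exact search or a sharing-greedy heuristic would go.

        import heapq
        from collections import defaultdict

        def shortest_path_dag(graph, source):
            """graph: {u: [(v, w), ...]} with positive weights.
            Returns (dist, dag) where dag keeps only edges on some shortest path."""
            dist = defaultdict(lambda: float("inf"))
            dist[source] = 0.0
            pq = [(0.0, source)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue  # stale queue entry
                for v, w in graph.get(u, ()):
                    if d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(pq, (dist[v], v))
            dag = {u: [(v, w) for v, w in edges if dist[u] + w == dist[v]]
                   for u, edges in graph.items()}
            return dist, dag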

    Read the article

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair, running SQL05 Standard. Shared storage is on an HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first 4 and X: used for a backup destination. Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- this is where "whatever.ndf" should have been. Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong... Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep. The questions:

    1. Why did the passive node think the whatever.ndf files were supposed to go to drive K:, when this drive didn't exist as a resource on the active node?
    2. How can I get the cluster nodes "re-synced" so that failover can be accomplished?

    I don't know that there wasn't a "K:" drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

    Read the article

  • Creating/Maintaining a large project-agnostic code library

    - by bufferz
    In order to reduce repetition and streamline testing/debugging, I'm trying to find the best way to develop a group of libraries that many projects can utilize. I'd like to keep individual executables relatively small, and have shared libraries for math, database, collections, graphics, etc. that were previously scattered among several projects and in many cases duplicated (bad!). This library is to be in an SVN repo and several programmers will be working on it. This library will be in constant development along with the executables that utilize it. For example, I want a code file in ProjectA to look something like the following:

        using MyCompany.Math.2D;                   // static 2D math methods
        using MyCompany.Math.3D;                   // static 3D math methods
        using MyCompany.Comms.SQL;                 // static methods for doing simple SQLDB I/O
        using MyCompany.Graphics.BitmapOperations; // static methods that play with bitmaps

    So in my ProjectA solution file in Visual Studio, in order to develop/debug the MyCompany library I have to add several projects (Math, Comms, Graphics). Things get pretty cluttered and solution files get out of date quickly between programmer SVN commits. I'm just looking for a high-level approach to maintaining a large, shared code base in an SVN repository. I am fully willing to radically redesign my approach. I'm looking for that warm fuzzy feeling you get when your design approach is spot on and development is fluid and natural. Any ideas? Thanks!!
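
    One common Subversion-native way to share such a library across many consuming projects (hedged: the URLs below are made up) is svn:externals -- each executable's working copy pulls the shared code, optionally pinned to a known-good revision, without duplicating it in its own tree:

        # run in the consuming project's root, then commit the property change
        svn propset svn:externals "MyCompany https://svn.example.com/shared/MyCompany/trunk" .

        # or pin consumers to a revision of the library that is known to build
        svn propset svn:externals "MyCompany -r1234 https://svn.example.com/shared/MyCompany/trunk" .

        svn update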

    Read the article

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers:

    * 1 central Linux server, a VPS
    * 8 satellite Linux servers, "crappy shared hostings"

    I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server, and then have a scheduled process that runs every now and then and synchronizes them (only outwardly, no need to try to find "new" files in the satellite servers). There are a couple of catches though: I can't have any custom software in the satellite servers, or do strange command line things that'll auto connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything. I need to send the files over FTP. I also need to have, in my central server, a list of the files that are available in each of the satellite servers, to make sure they are ready before I send traffic to them. If I were to do this manually, the steps would be:

    1. get the list of files in a satellite server
    2. compare to my own, and send the files that are missing
    3. get the list of files again, and store it in my central database

    I'd like to know what tools are out there that can alleviate as much of this as possible, first the syncing, and then the "getting the list of files available in the other server". I'm going to be doing everything from PHP, and I'm not sure if there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel
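
    The plan above is to drive this from PHP, but the list/upload/re-list flow is compact enough to sketch; here is a rough Python/ftplib version, with host, credentials and paths as placeholders and no retry or error handling:

        from ftplib import FTP
        import os

        def sync_outward(host, user, password, local_dir, remote_dir):
            """Upload files missing on one satellite, return its final listing."""
            ftp = FTP(host)
            ftp.login(user, password)
            ftp.cwd(remote_dir)
            already_there = set(ftp.nlst())
            for name in sorted(os.listdir(local_dir)):
                path = os.path.join(local_dir, name)
                if os.path.isfile(path) and name not in already_there:
                    with open(path, "rb") as fh:
                        ftp.storbinary("STOR " + name, fh)
            listing = ftp.nlst()   # step 3: record this in the central database
            ftp.quit()
            return listing

        files = sync_outward("satellite1.example.com", "user", "secret",
                             "/var/www/shared-files", "/public_html/files")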

    Read the article

  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using Rakefiles deeper in the tree; i.e. the top-level Rakefile says how to build the project (big picture) and the lower-level ones build a specific module (local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks: so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g. /Source/Module/code.foo and friends should be built using the instructions in /Source/Module/Rakefile; and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (a la recursive make) or just creates separate build environments. Either way it should be self-containable enough to be processed by a queue, so that non-dependent modules could be built simultaneously. The problem is, how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all Rakefiles have a :default.) Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Anyone have a clue on how to properly do something like a recursive rake?

    Read the article

  • Windows Azure ASP.NET MVC Role behaves strangely when redirecting from HTTP to HTTPS

    - by Rinat Abdullin
    Subj. I've got an ASP.NET MVC 2 Worker Role application that does not differ much from the default template. When attempting a redirect from HTTP to HTTPS (this happens when we access a controller secured by the usual RequireSSL attribute implementation) we get a blank page with a "Bad Request" message. IntelliTrace shows this:

        Thrown: "The file '/Views/Home/LogOnUserControl.aspx' does not exist." (System.Web.HttpException)

    The call stack is really short:

        [External Code]
        App_Web_vfahw7gz.dll!ASP.views_shared_site_master.__Render__control1(System.Web.UI.HtmlTextWriter __w = {unknown}, System.Web.UI.Control parameterContainer = {unknown})
        [External Code]
        App_Web_bsbqxr44.dll!ASP.views_home_index_aspx.ProcessRequest(System.Web.HttpContext context = {unknown})
        [External Code]

    The user control reference is the usual one in /Views/Shared/Site.Master:

        <div id="logindisplay">
            <% Html.RenderPartial("LogOnUserControl"); %>
        </div>

    And the partial view LogOnUserControl.ashx is located in Views/Shared (and it is ASHX, not ASPX). The problem shows up when we try to access site pages that require auth and redirect. These pages are secured by the RequireSSL attribute (Redirect == true):

        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]
        public sealed class RequireSslAttribute : FilterAttribute, IAuthorizationFilter
        {
            public bool Redirect { get; set; }

            // Methods
            public void OnAuthorization(AuthorizationContext filterContext)
            {
                // this gets messy when we are running custom ports
                // within the local dev fabric, hence we disable the code in debug
        #if !DEBUG
                if (filterContext == null)
                {
                    throw new ArgumentNullException("filterContext");
                }
                if (filterContext.HttpContext.Request.IsSecureConnection) return;
                var canRedirect = string.Equals(filterContext.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase);
                if (canRedirect && Redirect)
                {
                    var builder = new UriBuilder
                    {
                        Scheme = "https",
                        Host = filterContext.HttpContext.Request.Url.Host,
                        Path = filterContext.HttpContext.Request.RawUrl
                    };
                    filterContext.Result = new RedirectResult(builder.ToString());
                }
                else
                {
                    throw new HttpException(0x193, "Access forbidden. The requested resource requires an SSL connection.");
                }
        #endif
            }
        }

    Obviously we compile in RELEASE for this case. Does anybody have any idea what could cause this strange exception and how to get rid of it?

    Read the article

  • Converting Makefile to Visual Studio Terminology Questions (First time using VS)

    - by Ukko
    I am an old Unix guy who is converting a makefile-based project over to Microsoft Visual Studio. I got tasked with this because I understand the Makefile, which chokes VS's automatic import tools. I am sure there is a better way than what I am doing, but we are making things fit into the customer's environment and that is driving my choices. So gmake is not a valid answer, even if it is the right one ;-) I just have a couple of questions on terminology that I think will be easy for an experienced (or junior) user to answer. Presently, a "make all" will generate several executables and a shared library. How should I structure this? Is it one "solution" with multiple projects? There is a body of common code (say 50%) that is shared between the various executable targets that is not in a formal library, if that matters. I thought I could just set up the first executable and then add targets for the others, but that does not seem to work. I know I am working against the tool, so what is the right way? I am also using Visual C++ 2010 Express to try and do this, so that may also be a problem if multiple targets are not supported without using Visual C++ 2010 (insert superlative). Thanks, this is really one of those questions that should be answerable by a quick chat with the resident Windows developer at the water cooler. So, I am asking at the virtual water cooler; I also spring for a virtual frosty beverage after work.

    Read the article

  • Just messed up a server misusing chown, how to execute it correctly?

    - by Jack Webb-Heller
    Hi! I'm moving from an old shared host to a dedicated server at MediaTemple. The server is running the Plesk control panel but, as far as I can tell, there's no way via the interface to do what I want to do. On the old shared host, running cPanel, I created a .zip archive of all the website's files. I downloaded this to my computer, then uploaded it with FTP to the new host account I'd set up. Finally, I logged in via SSH, navigated to the directory the zip was stored in (something like /var/www/vhosts/mysite.com/httpdocs/) and ran the unzip command on the file sitearchive.zip. This extracted everything just fine, and the site appeared to work. The problem: when I tried to edit a file through FTP, I got "Error - 160: Permission Denied". When I Get Info for the file I'm trying to edit, it says the owner and group is swimwir1. I attempted to use chown at this point to change the owner -- and yes, as you may be able to tell, I'm a little inexperienced in SSH ;) -- luckily the server was new, since the command I ran,

        chown -R newuser /

    appeared to mess a load of stuff up. The reason I used / on the end rather than /var/www/vhosts/mysite.com/httpdocs/ was because I'd already cd'ed into there, so I presumed the / was relative to where I was working. This may be the case, I have no idea; either way, Plesk was no longer accessible, although Apache and other things continued to work. I realised my mistake and, deciding it wasn't worth the hassle of 1) being an amateur and 2) trying to fix it, I just reprovisioned the server to start afresh. So -- what do I do to change the owner of these files correctly? Thanks for helping out a confused beginner! Jack
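
    For reference, a sketch of the scoped version of the command. The psacln group is what Plesk conventionally assigns to a domain's web content, and the FTP user name here is a placeholder, so check the ownership on a freshly created vhost first and copy whatever it shows:

        # an absolute path keeps the change inside the one vhost;
        # a bare trailing "/" means the filesystem root, not "here"
        chown -R ftpuser:psacln /var/www/vhosts/mysite.com/httpdocs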

    Read the article

  • Have error message show when form is created through AJAX

    - by Railslearner
    I have a page called /add that you can add a Dog on, and the form is in its own partial. I'm using Simple Form and Twitter Bootstrap. I added the files for the main Bootstrap but use a gem for simple_form to work with it, just so you know.

    DogsController:

        # new.js.erb (deleted new.html.erb)
        def new
          @dog = Dog.new
          respond_to do |format|
            format.js
          end
        end

        # create.js.erb
        def create
          @dog = current_user.dogs.new(params[:dog])
          respond_to do |format|
            if @dog.save
              format.html { redirect_to add_url, notice: 'Dog was successfully added.' }
              format.json { render json: @dog, status: :created, location: @dog }
              format.js
            else
              format.html { render 'pages/add' }
              format.json { render json: @dog.errors, status: :unprocessable_entity }
            end
          end
        end

    dogs/_form.html.erb:

        <%= simple_form_for(@dog, :remote => true) do |f| %>
          <%= render :partial => "shared/error_message", :locals => { :f => f } %>
          <%= f.input :name %>
          <%= f.button :submit, 'Done' %>
        <% end %>

    This line:

        <%= render :partial => "shared/error_message", :locals => { :f => f } %>

    is for Bootstrap, so it renders the errors HTML correctly.

    PagesController:

        def add
          respond_to do |format|
            format.html
          end
        end

    pages/add.html.erb:

        <div id="generate-form">
        </div>

    dogs/new.js.erb:

        $("#generate-form").html("<%= escape_javascript(render(:partial => 'dogs/form', locals: { dog: @dog })) %>");

    Now how would I get this to render the error partial as if it was still on my dogs/new.html.erb, since it's being created through AJAX? I don't need client-side validations, do I?

    Read the article

  • Difference between KeywordQuery, FullTextQuerySearch type for Object Model and Web service Query

    - by Raghu
    Initially I believed these 3 to be doing more or less the same thing, with just the notation being different. Until recently, when I noticed that there does exist a big difference between the results of KeywordQuery/FullTextQuerySearch and the Web service Query. I used both the KeywordQuery and FullTextSqlQuery methods to search for the value of a custom column XYZ with value (ASDSADA-21312ASD-ASDASD).

    When I run this query as a FullTextSqlQuery:

        FullTextSqlQuery myQuery = new FullTextSqlQuery(site);
        {
            // Construct query text
            String queryText = "Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD') ";
            myQuery.QueryText = queryText;
            myQuery.ResultTypes = ResultType.RelevantResults;
        };

        // execute the query and load the results into a datatable
        ResultTableCollection queryResults = myQuery.Execute();
        ResultTable resultTable = queryResults[ResultType.RelevantResults];

        // Load table with results
        DataTable queryDataTable = new DataTable();
        queryDataTable.Load(resultTable, LoadOption.OverwriteChanges);

    I get the following result representing the document:

    * Title: TestPDF
    * path: http://SharepointServer/Shared Documents/Forms/DispForm.aspx?ID=94
    * author: null
    * isDocument: false

    Do note the Path and isDocument fields of the above result.

    Web Service Method: Then I tried the Web service Query method. I used the SharePoint Search Service Tool available at http://sharepointsearchserv.codeplex.com/ and ran the same query, i.e. Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD'). This time I got the following results:

    * Title: TestPDF
    * path: http://SharepointServer/Shared Documents/TestPDF.pdf
    * author: null
    * isDocument: true

    Again, note the path. While the search results from the 2nd method are useful as they provide me the file path exactly, I can't seem to understand why method 1 is not giving me the same results. Why is there a discrepancy between the two results?

    Read the article

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code is meant to execute a filter on an 8bppIndexed digital image, writing the pixel values to a byte[] buffer to be converted again (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct

    Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know, I should use lock and unlock, but I prefer one headache at a time. Anyway this code should be a sort of identity transform, and at last I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;

            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e

    It is quite clear that I am losing a pixel per row, but I cannot understand why: I have carefully controlled all the parameters (OutputImageCols, OutputImageRows, and the byte[] byteBuffer length and content), even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help identify where the problem is? Thanks a lot
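
    A likely culprit to check, hedged since it depends on how byteBuffer is laid out: GDI+ pads every bitmap row to a 4-byte boundary, so an 8bppIndexed row's stride is the width rounded up to a multiple of 4. At 256 the packed buffer and the bitmap agree, but at 254 the bitmap wants 256 bytes per row, and a single Marshal.Copy of a tightly packed 254-bytes-per-row buffer drifts by two pixels every row; copying row by row to data.Scan0 + row * data.Stride avoids this. The arithmetic, in a tiny Python sketch:

        def stride_8bpp(width):
            # GDI+ scan lines start on 4-byte boundaries; 8bppIndexed is 1 byte/pixel
            return (width + 3) & ~3

        print(stride_8bpp(256))  # 256: packed buffer matches the bitmap layout
        print(stride_8bpp(254))  # 256: two padding bytes per row the packed buffer lacks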

    Read the article

  • Can't return a List from a Compiled Query.

    - by Andrew
    I was speeding up my app by using compiled queries for queries which were getting hit over and over. I tried to implement it like this:

        Function Select(ByVal fk_id As Integer) As List(Of SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id)
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, List(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                (From u In db.SomeEntities _
                 Where u.SomeLinkedEntity.ID = fk_id _
                 Select u).ToList())

    This did not work and I got this error message:

        Type : System.ArgumentNullException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Message : Value cannot be null.
        Parameter name: value

    However, when I changed my compiled query to return IQueryable instead of List like so:

        Function Select(ByVal fk_id As Integer) As List(Of SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id).ToList()
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, IQueryable(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                From u In db.SomeEntities _
                Where u.SomeLinkedEntity.ID = fk_id _
                Select u)

    It worked fine. Can anyone shed any light as to why this is? BTW, compiled queries rock! They sped up my app by a factor of 2.

    Read the article

  • Java: library that does nice formatted log outputs

    - by WizardOfOdds
    I cannot track down a library I once came across that allowed formatting log output statements in a much nicer way than what is usually seen. One of the features I remember is that it could 'offset' the log message depending on the 'nestedness' of where the log statement was occurring. That is, instead of this:

        DEBUG | DefaultBeanDefinitionDocumentReader.java| 86 | Loading bean definitions
        DEBUG | AbstractAutowireCapableBeanFactory.java| 411 | Finished creating instance of bean 'MS-SQL'
        DEBUG | DefaultSingletonBeanRegistry.java| 213 | Creating shared instance of singleton bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java| 383 | Creating instance of bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java| 459 | Eagerly caching bean 'MySQL' to allow for resolving potential circular references
        DEBUG | AutowireCapableBeanFactory.java| 789 | Another debug message

    It would show something like this:

        DEBUG | DefaultBeanDefinitionDocumentReader.java | 86  | Loading bean definitions
        DEBUG | AbstractAutowireCapableBeanFactory.java  | 411 | Finished creating instance of bean 'MS-SQL'
        DEBUG | DefaultSingletonBeanRegistry.java        | 213 | Creating shared instance of singleton bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java          | 383 | Creating instance of bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java          | 459 | |__ Eagerly caching bean 'MySQL' to allow for resolving potential circular references
        DEBUG | AutowireCapableBeanFactory.java          | 789 | |__ Another debug message

    This is an example I just made up (VeryLongCamelCaseClassNamesNotMine). But I remember seeing such cleanly formatted log output, and it was really much nicer than anything I had seen before; in addition to being just plain nicer, it was also easier to read, for it reproduced some of the logical organization of the code. Yet I cannot find anymore what that library was. I'm pretty sure it was fully compatible with log4j or slf4j.

    Read the article

  • iPhone static library Clang/LLVM error: non_lazy_symbol_pointers

    - by Bekenn
    After several hours of experimentation, I've managed to reduce the problem to the following example (C++):

        extern "C" void foo();

        struct test {
            ~test() { }
        };

        void doTest() {
            test t; // 1
            foo();  // 2
        }

    This is being compiled for iOS devices in XCode 4.2, using the provided Clang compiler (Apple LLVM compiler 3.0) and the iOS 5.0 SDK. The project is configured as a Cocoa Touch Static Library, and "Enable Linking With Shared Libraries" is set to No because I'm building an AIR native extension. The function foo is defined in another external library. (In my actual project, this would be any of the C API functions defined by Adobe for use in AIR native extensions.) When attempting to compile this code, I get back the error:

        FATAL:incompatible feature used: section type non_lazy_symbol_pointers (must specify "-dynamic" to be used)
        clang: error: assembler command failed with exit code 1 (use -v to see invocation)

    The error goes away if I comment out either of the lines marked 1 or 2 above, or if I change the build setting "Enable Linking With Shared Libraries" to Yes. (However, if I change the build setting, then I get multiple "ld warning: unexpected srelocation type 9" warnings when linking the library into the final project, and the application crashes when running on the device.) The build error also goes away if I remove the destructor from test. So: Is this a bug in Clang? Am I missing some all-important and undocumented build setting? The interaction between an externally-provided function and a struct with a destructor is very peculiar, to say the least.

    Read the article

  • GIT clone repo across local file system

    - by Jon
    Hi all, I am a complete noob when it comes to Git. I have been just taking my first steps over the last few days. I set up a repo on my laptop, pulled down the trunk from an SVN project (had some issues with branches, not got them working), but all seems OK there. I now want to be able to pull or push from the laptop to my main desktop. The reason being the laptop is handy on the train as I spend 2 hours a day travelling and can get some good work done, but my main machine at home is great for development. So I want to be able to push / pull from the laptop to the main computer when I get home. I thought the simplest way of doing this would be to just have the code folder shared out across the LAN and do:

        git clone file://192.168.10.51/code

    Unfortunately this doesn't seem to be working for me. I open a Git Bash prompt and type the above command while I am in C:\code (the shared folder for both machines), and this is what I get back:

        Initialized empty Git repository in C:/code/code/.git/
        fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    How can I share the repository between the two machines in the simplest of ways? There will be other locations that will be official storage points and places where the other devs and the CI server etc. will pull from; this is just so that I can work on the same repo across two machines. Thanks
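
    A sketch of forms that tend to work better here (hedged: in Git a file:// URL takes a local filesystem path rather than a host name, which is why the command above got reinterpreted relative to the Git install directory; "code" is assumed to be both the share name and the repository folder from the question):

        # UNC path to the share (Git for Windows accepts forward slashes)
        git clone //192.168.10.51/code

        # or map the share to a drive letter first and clone from that
        net use Z: \\192.168.10.51\code
        git clone Z:/code

        # plain local path when the repository really is on this machine
        git clone C:/code my-clone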

    Read the article

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need Redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this:

        remote: Counting objects: 4648, done.
        remote: Compressing objects: 100% (2837/2837), done.
        error: git-upload-pack: git-pack-objects died with error.
        fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    I'm able to clone it locally without a problem, and I ran "git fsck", which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while the local one is 1.6.0.4. I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail...

    Edit: I have a number of Windows binaries in the repo, because I just built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again; perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.
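
    One pattern consistent with these symptoms (large binaries, an older git on the serving side, local clones fine) is git-pack-objects running out of memory while delta-compressing the binaries during the clone. A hedged sketch of settings sometimes applied on the remote repository to rein that in -- the values are arbitrary starting points, not recommendations:

        git config pack.windowMemory 32m
        git config pack.packSizeLimit 64m
        git config pack.threads 1
        git repack -a -d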

    Read the article

  • GCC Dynamic library building problem

    - by Sirish Kumar
    I am new to Linux. While compiling with a dynamic library I am getting a segmentation fault error. I have two files.

    ctest1.c:

        void ctest1(int *i)
        {
            *i = 10;
        }

    ctest2.c:

        void ctest2(int *i)
        {
            *i = 20;
        }

    I have compiled both files to a shared library named libtest.so using the following command:

        gcc -shared -W1,-soname,libtest.so.1 -o libtest.so.1.0.1 ctest1.o ctest2.o -lc

    And I have written another program, prog.c, which uses the functions exported by this library.

    prog.c:

        #include <stdio.h>

        void (ctest1)(int);
        void (ctest2)(int*);

        int main()
        {
            int a;
            ctest1(&a);
            printf("%d", a);
            return 0;
        }

    And then I built the executable with the following command:

        gcc -Wall prog.c -L. -o prog

    But when I run the generated executable I get the SegmentationFault error. When I checked the header of prog with ldd it shows:

        linux-vdso.so.1 => (0x00007f99dff000)
        libc.so.6 => /lib64/libc.so.6 (0x0007feeaa8c1000)
        /lib64/ld-linux-x86-64.so.2 (0x00007feeaac1c000)

    Can somebody tell me what the problem is?
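
    For reference, a sketch of the conventional way to build, link and run against such a library (hedged: this deliberately leaves the prototypes in prog.c alone, though declaring ctest1 as taking int while passing &a is itself worth a second look):

        # build position-independent objects, then the shared library with a proper soname
        gcc -fPIC -c ctest1.c ctest2.c
        gcc -shared -Wl,-soname,libtest.so.1 -o libtest.so.1.0.1 ctest1.o ctest2.o

        # the linker looks for libtest.so, the runtime loader for libtest.so.1
        ln -s libtest.so.1.0.1 libtest.so.1
        ln -s libtest.so.1 libtest.so

        # link the program against the library and run it
        gcc -Wall prog.c -L. -ltest -o prog
        LD_LIBRARY_PATH=. ./prog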

    Read the article
