Search Results

Search found 22884 results on 916 pages for 'team build'.

Page 297/916

  • error A2070: invalid instruction operands IN SSE MASM64

    - by Green
    When compiling this with ml64.exe (64-bit MASM), the SSE instructions give me an error. What do I need to do to use SSE instructions in 64 bit?

        .code
        test PROC
            movlps [rdx], xmm7    ;;error A2070: invalid instruction operands
            ;//Inc in vec ptr
            add rsi, 16
            movhlps xmm6, xmm7
            movss [rdx+8], xmm6   ;;error A2070: invalid instruction operands
            ret
        test ENDP
        end

    I get the error:

        1>Performing Custom Build Step
        1> Assembling: extasm.asm
        1>extasm.asm(6) : error A2070: invalid instruction operands
        1>extasm.asm(10) : error A2070: invalid instruction operands
        1>Microsoft (R) Macro Assembler (x64) Version 8.00.50727.215
        1>Copyright (C) Microsoft Corporation. All rights reserved.
        1>Project : error PRJ0019: A tool returned an error code from "Performing Custom Build Step"
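    Offered as a hedged guess rather than a confirmed fix: ml64 generally cannot infer the operand size of a bare memory reference, so SSE stores through [rdx] tend to need explicit size qualifiers. The two flagged lines would then become:

        movlps qword ptr [rdx], xmm7
        movss  dword ptr [rdx+8], xmm6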

    Read the article

  • SSL cert install in Mongrel

    - by normalocity
    Running a RoR app. I've created a self-signed SSL cert for my test/dev environment, which consists of a single mongrel, and Mac OS X Leopard. Can I install the SSL cert in such a way that mongrel will use it, or do I have to build a different server stack to handle SSL (e.g. Apache in front of a mongrel cluster), and install the cert inside of Apache? For my production server, I'll eventually build a server stack like this, so it's no big deal if I have to go that direction - it's just a question of now or later. Thanks!
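    If it does come down to terminating SSL in front of Mongrel, a bare-bones Apache virtual host along these lines is the usual shape (requires mod_ssl and mod_proxy; the cert paths and Mongrel port are placeholders for whatever the dev box actually uses):

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /path/to/self-signed.crt
            SSLCertificateKeyFile /path/to/self-signed.key
            ProxyPass        / http://127.0.0.1:3000/
            ProxyPassReverse / http://127.0.0.1:3000/
        </VirtualHost>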

    Read the article

  • Lambda Expression to be used in Select() query

    - by jameschinnock
    Hi, I am trying to build a lambda expression, containing two assignments (as shown further down), that I can then pass to a Queryable.Select() method. I am trying to pass a string variable into a method and then use that variable to build up the lambda expression so that I can use it in a LINQ Select query. My reasoning behind it is that I have a SQL Server datasource with many column names. I am creating a charting application that will allow the user to select, say by typing in the column name, the actual column of data they want to view in the y-axis of my chart, with the x-axis always being the DateTime. Therefore, they can essentially choose what data they chart against the DateTime value (it's a data warehouse type app). I have, for example, a class to store the retrieved data in, and hence use as the chart source:

        public class AnalysisChartSource
        {
            public DateTime Invoicedate { get; set; }
            public Decimal yValue { get; set; }
        }

    I have (purely experimentally) built an expression tree for the Where clause using the string value, and that works fine:

        public void GetData(String yAxis)
        {
            using (DataClasses1DataContext db = new DataClasses1DataContext())
            {
                var data = this.FunctionOne().AsQueryable<AnalysisChartSource>(); //just to get some temp data in....
                ParameterExpression pe = Expression.Parameter(typeof(AnalysisChartSource), "p");
                Expression left = Expression.MakeMemberAccess(pe, typeof(AnalysisChartSource).GetProperty(yAxis));
                Expression right = Expression.Constant((Decimal)16);
                Expression e2 = Expression.LessThan(left, right);
                Expression expNew = Expression.New(typeof(AnalysisChartSource));
                LambdaExpression le = Expression.Lambda(left, pe);
                MethodCallExpression whereCall = Expression.Call(
                    typeof(Queryable), "Where",
                    new Type[] { data.ElementType },
                    data.Expression,
                    Expression.Lambda<Func<AnalysisChartSource, bool>>(e2, new ParameterExpression[] { pe }));
            }
        }

    However, I have tried a similar approach for the Select statement but just can't get it to work, as I need the Select() to populate both X and Y values of the AnalysisChartSource class, like this:

        .Select(c => new AnalysisChartSource { Invoicedate = c.Invoicedate, yValue = c.yValue }).AsEnumerable();

    How on earth can I build such an expression tree... or, possibly more to the point, is there an easier way that I have missed entirely?
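    For what it's worth, the object-initializer form can be expressed in an expression tree with Expression.MemberInit and Expression.Bind. A minimal sketch, assuming the AnalysisChartSource class above, a data source already in hand, and that the user-chosen column named by yAxis is a Decimal property (variable names here are made up):

        ParameterExpression c = Expression.Parameter(typeof(AnalysisChartSource), "c");
        // copy Invoicedate straight across, and bind the user-chosen column to yValue
        MemberInitExpression body = Expression.MemberInit(
            Expression.New(typeof(AnalysisChartSource)),
            Expression.Bind(typeof(AnalysisChartSource).GetProperty("Invoicedate"),
                            Expression.Property(c, "Invoicedate")),
            Expression.Bind(typeof(AnalysisChartSource).GetProperty("yValue"),
                            Expression.Property(c, yAxis)));
        var selector = Expression.Lambda<Func<AnalysisChartSource, AnalysisChartSource>>(body, c);
        var projected = data.Select(selector);   // Queryable.Select accepts the expression directly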

    Read the article

  • How can I turn the structure of an XML file into a folder structure using ANT

    - by 1ndivisible
    I would like to be able to pass an XML file to an ANT build script and have it create a folder structure mimicking the nodal structure of the XML, using the build file's parent directory as the root. For example, given:

        <root>
            <folder1>
                <folder1-1/>
            </folder1>
            <folder2/>
            <folder3>
                <folder3-1/>
            </folder3>
        </root>

    ant would create:

        folder1
         -folder1-1
        folder2
        folder3
         -folder3-1

    I know how to create a directory, but I'm not sure how to have ANT parse the XML.

    Read the article

  • CruiseControl.Net Publisher email failing to send e-mail

    - by CrimsonX
    So here's my problem: I cannot seem to be able to configure CruiseControl.NET to send out an e-mail to me when a build occurs. I copied the example from the documentation (http://confluence.public.thoughtworks.org/display/CCNET/Email+Publisher) and filled it in with my own values. Here's the relevant section of ccnet.config:

        <publishers>
          <merge>
            <files>
              <file>C:\Build\Temp\*.xml</file>
            </files>
          </merge>
          <xmllogger logDir="buildlogs" />
          <statistics>
            <statisticList>
              <statistic />
            </statisticList>
          </statistics>
          <email includeDetails="TRUE" mailhostUsername="user" mailhostPassword="password" useSSL="TRUE">
            <from>[email protected]</from>
            <mailhost>mail.mycompanysmtpserver.com</mailhost>
            <users>
              <user name="MyName Lastname" group="buildmaster" address="[email protected]" />
            </users>
            <groups>
              <group name="buildmaster">
                <notifications>
                  <notificationType>Always</notificationType>
                </notifications>
              </group>
            </groups>
            <modifierNotificationTypes>
              <NotificationType>Failed</NotificationType>
              <NotificationType>Fixed</NotificationType>
            </modifierNotificationTypes>
            <subjectSettings>
              <subject buildResult="StillBroken" value="Build is still broken for {CCNetProject}" />
            </subjectSettings>
          </email>
        </publishers>

    I have had a successful CruiseControl.NET server configured for some time, and it successfully updates people through CCTray, but I need to add e-mail support as well. I've already looked at the relevant StackOverflow articles like this one, and many more, and tried my hand at googling the solution, but I don't know what I could be doing wrong. The only other thing I'd like to mention is that I validated I can send/receive e-mails using my username/password with the SMTP server I received from IT (my mail works properly); I followed the steps on testing an SMTP server (http://support.microsoft.com/kb/153119) and everything looks fine. Anyone have any ideas as to why I'm getting this problem, or how to troubleshoot it further?
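    One quick way to separate SMTP trouble from publisher-config trouble is to reproduce the send from a scratch .NET console app with System.Net.Mail, reusing the same host, credentials and SSL flag. A sketch (the addresses are placeholders, since the real ones are redacted above):

        using System.Net;
        using System.Net.Mail;

        class SmtpSanityCheck
        {
            static void Main()
            {
                var client = new SmtpClient("mail.mycompanysmtpserver.com")
                {
                    EnableSsl = true,   // mirrors useSSL="TRUE" in ccnet.config
                    Credentials = new NetworkCredential("user", "password")
                };
                client.Send("build@example.com", "me@example.com",
                            "CC.NET SMTP test", "If this arrives, the server accepts these settings.");
            }
        }

    If this throws, the exception text should point at the real problem (authentication, SSL handshake, relay rights) more precisely than the build log.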

    Read the article

  • Unity and Object Creation

    - by William
    I am using Unity as my IoC container. I am trying to implement a type of IProviderRepository. The concrete implementation has a constructor that accepts a type of IRepository. When I remove the constructor parameter from the concrete implementation, everything works fine. I am sure the container is wired correctly. When I try to create the concrete object with the constructor, I receive the following error:

        "The current build operation (build key Build Key[EMRGen.Infrastructure.Data.IRepository`1[EMRGen.Model.Provider.Provider], null]) failed: The current type, EMRGen.Infrastructure.Data.IRepository`1[EMRGen.Model.Provider.Provider], is an interface and cannot be constructed. Are you missing a type mapping? (Strategy type BuildPlanStrategy, index 3)"

    Is it possible to achieve the above mentioned functionality with Unity? Namely, have Unity infer a concrete type from the interface and also inject the constructor of the concrete type with the appropriate concrete object based on the constructor parameters. Below is a sample of my types defined in Unity and a skeleton class listing for what I want to achieve. IProviderRepository is implemented by ProviderRepository, which has a constructor that expects a type of IRepository.

        <typeAlias alias="ProviderRepositoryInterface" type="EMRGen.Model.Provider.IProviderRepository, EMRGen.Model" />
        <typeAlias alias="ProviderRepositoryConcrete" type="EMRGen.Infrastructure.Repositories.Providers.ProviderRepository, EMRGen.Infrastructure.Repositories" />
        <typeAlias alias="ProviderGenericRepositoryInterface" type="EMRGen.Infrastructure.Data.IRepository`1[[EMRGen.Model.Provider.IProvider, EMRGen.Model]], EMRGen.Infrastructure" />
        <typeAlias alias="ProviderGenericRepositoryConcrete" type="EMRGen.Infrastructure.Repositories.EntityFramework.ApplicationRepository`1[[EMRGen.Model.Provider.Provider, EMRGen.Model]], EMRGen.Infrastructure.Repositories" />
        <!-- Provider Mapping-->
        <typeAlias alias="ProviderInterface" type="EMRGen.Model.Provider.IProvider, EMRGen.Model" />
        <typeAlias alias="ProviderConcrete" type="EMRGen.Model.Provider.Doctor, EMRGen.Model" />

        //Illustrate the call being made inside my class
        public class PrescriptionService
        {
            PrescriptionService()
            {
                IUnityContainer uc = UnitySingleton.Instance.Container;
                UnityServiceLocator unityServiceLocator = new UnityServiceLocator(uc);
                ServiceLocator.SetLocatorProvider(() => unityServiceLocator);
                IProviderRepository pRepository = ServiceLocator.Current.GetInstance<IProviderRepository>();
            }
        }

        public class GenericRepository<IProvider> : IRepository<IProvider>
        {
        }

        public class ProviderRepository : IProviderRepository
        {
            private IRepository<IProvider> _genericProviderRepository;

            //Explicit public default constructor
            public ProviderRepository(IRepository<IProvider> genericProviderRepository)
            {
                _genericProviderRepository = genericProviderRepository;
            }
        }
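    One observation, hedged since only fragments of the config are shown: the failed build key closes IRepository<> over the concrete Provider, while the ProviderGenericRepositoryInterface alias closes it over IProvider, so the registration may simply never match the key Unity is asked to resolve. Programmatically, equivalent wiring that uses an open-generic mapping (so any closing of IRepository<T> resolves) would look roughly like this:

        var container = new UnityContainer();

        // open generic: any IRepository<T> resolves to ApplicationRepository<T>
        container.RegisterType(typeof(IRepository<>), typeof(ApplicationRepository<>));

        // constructor injection of IRepository<...> into ProviderRepository then happens automatically
        container.RegisterType<IProviderRepository, ProviderRepository>();

        IProviderRepository repo = container.Resolve<IProviderRepository>();

    The same open-generic registration can be expressed in the XML config; whether ApplicationRepository<T> accepts the type arguments involved here depends on its constraints, which aren't shown.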

    Read the article

  • How do I create an Access 2003 MDE programmatically or by command line in Access 2007?

    - by Ned Ryerson
    I have a legacy Access 2003 database file that must remain in that format to preserve its menus and toolbars. I have recently moved to Access 2007 in my build environment and will be deploying the compiled Access 2003 program with the Access 2007 runtime. In Access 2003, I could script the process of creating an MDE with the Access Developer Extensions (WZADE.mde) using the command line and an .xml file of build preferences (without creating an install package). The Access 2007 developer extensions do not seem to offer a similar option. I can "Package a Solution", but it creates an accdr and buries it in a CD installer. I've tried programmatic options like DoCmd.RunCommand acMakeMDEFILe and SysCmd(603, mdbpath, mdepath), but they no longer work in Access 2007. Of course, I can manually create an MDE using Database Tools > Create MDE, but that is not scriptable as far as I can tell.

    Read the article

  • Silverlight 4.0 question - synchronous calls to asmx web service

    - by Anvar
    Hi, I have a Silverlight problem. I have to deal with a fairly large legacy application that has some web services exposed (regular asmx, not WCF). What I need to do is build a Silverlight app that consumes those web services. Due to business logic, I need the same web methods to be called from the Silverlight app synchronously. I was able to build the web service consumption, but only asynchronously, because that is the default Silverlight behavior. Is there a way to make asynchronous calls synchronous in Silverlight? I looked here and googled around but came across only WCF examples. I would appreciate it if somebody could give me a code example for a regular asmx web service. I use Silverlight 4.0. Thanks!
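    There is no true synchronous channel in Silverlight, but the async call can be wrapped so it looks synchronous to the caller, provided the wait happens off the UI thread (blocking the UI thread deadlocks, because the completed event is posted back to it). A sketch with made-up proxy and method names, not a drop-in solution:

        ThreadPool.QueueUserWorkItem(_ =>
        {
            var proxy = new LegacyServiceSoapClient();   // hypothetical asmx proxy name
            var done = new ManualResetEvent(false);
            string result = null;

            proxy.GetDataCompleted += (s, e) => { result = e.Result; done.Set(); };
            proxy.GetDataAsync();
            done.WaitOne();   // safe here: this is a thread-pool thread, not the UI thread

            // marshal the now-available result back to the UI
            Deployment.Current.Dispatcher.BeginInvoke(() => UseResult(result));   // UseResult is hypothetical
        });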

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
    I’ve shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems that this can cause. Still, many sites use that method because those problems are fairly rare, and because most people assume it’s the only way to get the job done. Plus, it works in medium trust. More recently, I’ve shown how you can use WPF APIs to do the same thing and get JPEG thumbnails, only 2.5 times faster than GDI (even now that GDI really ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost that you may or may not care about: it won’t work in medium trust. It’s also just as unsupported as the GDI option.

    What I want to show today is how to use the Windows Imaging Components from ASP.NET APIs directly, without going through WPF. The approach has the great advantage that it’s been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don’t offer MTA support for image decoding. MTA support is only available on Windows 7, Vista and Windows Server 2008. On 2003 and XP, you’ll only get STA support. That means that the thread safety we so badly need for server applications is not guaranteed on those operating systems. To make it work, you’d have to spin up specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We’ll assume that we’re fine with all this and that we’re running on 7 or 2008 under full trust.

    Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won’t have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project but I’ve also included it in the sample that you can download from the link at the end of the article.

    In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won’t do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we’re going to reuse for lots of other tasks:

        var photo = File.ReadAllBytes(photoPath);
        var factory = (IWICComponentFactory)new WICImagingFactory();
        var inputStream = factory.CreateStream();
        inputStream.InitializeFromMemory(photo, (uint)photo.Length);
        var decoder = factory.CreateDecoderFromStream(inputStream, null,
            WICDecodeOptions.WICDecodeMetadataCacheOnLoad);
        var frame = decoder.GetFrame(0);

    We can read the dimensions of the frame using the following (somewhat ugly) code:

        uint width, height;
        frame.GetSize(out width, out height);

    This enables us to compute the dimensions of the thumbnail, as I’ve shown in previous articles. We now need to prepare the output stream for the thumbnail.
    WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream), and doesn’t directly understand .NET streams. It does provide a number of implementations, but not exactly what we need here. We need to output to memory because we’ll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer, but we won’t know the length of the buffer before we resize. To solve that problem, I’ve built a derived class from MemoryStream that also implements IStream. The implementation is not very complicated, it just delegates the IStream methods to the base class, but it involves some native pointer manipulation.

    Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it’s a lossless format, and because WIC does support PNG compression. That compression is not very efficient though, and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG and 7 times larger than 85%-quality JPEG, which is more than acceptable quality. As a consequence, we’ll use JPEG. The JPEG encoder can be prepared as follows:

        var encoder = factory.CreateEncoder(Consts.GUID_ContainerFormatJpeg, null);
        encoder.Initialize(outputStream, WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache);

    The next operation is to create the output frame:

        IWICBitmapFrameEncode outputFrame;
        var arg = new IPropertyBag2[1];
        encoder.CreateNewFrame(out outputFrame, arg);

    Notice that we are passing in a property bag. This is where we’re going to specify our only parameter for encoding, the JPEG quality setting:

        var propBag = arg[0];
        var propertyBagOption = new PROPBAG2[1];
        propertyBagOption[0].pstrName = "ImageQuality";
        propBag.Write(1, propertyBagOption, new object[] { 0.85F });
        outputFrame.Initialize(propBag);

    We can then set the resolution for the thumbnail to be 96, something we weren’t able to do with WPF and had to hack around:

        outputFrame.SetResolution(96, 96);

    Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail:

        outputFrame.SetSize(thumbWidth, thumbHeight);
        var scaler = factory.CreateBitmapScaler();
        scaler.Initialize(frame, thumbWidth, thumbHeight,
            WICBitmapInterpolationMode.WICBitmapInterpolationModeFant);

    The scaler is using the Fant method, which I think is the best looking one even if it seems a little softer than cubic (zoomed here to better show the defects): [image comparison: Cubic, Fant, Linear, Nearest neighbor]

    We can write the source image to the output frame through the scaler:

        outputFrame.WriteSource(scaler, new WICRect { X = 0, Y = 0, Width = (int)thumbWidth, Height = (int)thumbHeight });

    And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream:

        outputFrame.Commit();
        encoder.Commit();
        var outputArray = outputStream.ToArray();
        outputStream.Close();

    That byte array can then be sent to the output stream and to the cache file. Once we’ve gone through this exercise, it’s only natural to wonder whether it was worth the trouble.
    I ran this method, as well as GDI and WPF resizing, over thirty twelve-megapixel images for JPEG qualities between 70% and 100% and measured the file size and time to resize. Here are the results: [charts: size of resized images; time to resize thirty 12-megapixel images]

    Not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly at 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size.

    The time to resize is more interesting. WPF and WIC get similar times, although WIC seems to always be a little faster. Not surprising, considering WPF is using WIC. The margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level, only the size does. This means that the only decision you have to make here is size versus visual quality.

    This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, but with some additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn’t work in medium trust. That is a problem, and shows the way for future server-friendly managed wrappers around WIC.

    The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip

    The benchmark code can be found here (you’ll need to add your own images to the Images directory and then add those to the project, with content and copy if newer in the properties of the files in the solution explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip

    WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools

    To conclude, here are some of the resized thumbnails at 85% fant.

    Read the article

  • Implementing google dashboard type interface in asp.net

    - by Sam_Cogan
    I'm looking at implementing a Google IG type dashboard in a .NET app. There are a number of options I've found to do this, and I'm trying to establish which is going to be the best to use, in terms of speed, versatility, etc. So far, the options I am looking at are either to use ASP.NET WebParts and .NET Ajax, which would make it quicker to build but which I'm concerned is going to make the application bulky and slow, or to custom build an interface using jQuery and either .NET MVC or WebForms. Does anyone have any thoughts on what the best option may be, or any options I may have missed? All I want to do here is to allow users to customise a dashboard with a number of components (which will be user controls). I do also have access to Telerik controls, but I'm not sure if they would be any use here.

    Read the article

  • SSAS/SSRS remove parameter from cube report destroys report

    - by Jon
    Group, we built a data cube using SSAS and are now building SSRS reports off of that cube. Not sure if anyone has come across this, but when you build the report using the wizard and include parameters, all looks fine. However, if you are in the report after the wizard is complete, and you decide you want to remove one of the parameters you created, it breaks the report, and the only way to get it back is to re-create the whole report. Any way you can remove or add parameters after the initial build without destroying your report? Thanks in advance for the help! I love this forum!

    Read the article

  • Using .net 3.5 assemblies in asp.net 2.0 web application

    - by masterik
    I have a .NET assembly built against the 3.5 framework. This assembly has a class Foo with two method overloads:

        public class Foo
        {
            public T Set<T>(T value);
            public T Set<T>(Func<T> getValueFunc);
        }

    I'm referencing this assembly in my ASP.NET 2.0 web application to use the first overload of the Set method (the one without Func). But on build I get an error saying that I should reference System.Core to use the System.Func delegate... but I'm not using that type... Is there a workaround to solve this? PS: There is no option to convert my web application to target the 3.5 framework.
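    The compiler presumably has to load the assembly containing System.Func just to perform overload resolution on Set, which would explain the error even though the Func overload is never called. If the 3.5 assembly can be changed, one hedged workaround is to replace the Func<T> overload with a delegate declared in the assembly itself, so its public surface no longer mentions System.Core types. A sketch (the delegate name and method bodies are invented):

        // declared alongside Foo, inside the 3.5 assembly
        public delegate T ValueFactory<T>();

        public class Foo
        {
            public T Set<T>(T value) { return value; }
            public T Set<T>(ValueFactory<T> getValueFunc) { return getValueFunc(); }
        }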

    Read the article

  • qemu-kvm virtual machine virtio network freeze under load

    - by Rick Koshi
    I'm having a problem with my virtual machines, where the network will freeze under heavy load. I'm using CentOS 6.2 as both host and guest, not using libvirt, just running qemu-kvm directly as follows: /usr/libexec/qemu-kvm \ -drive file=/data2/vm/rb-dev2-www1-vm.img,index=0,media=disk,cache=none,if=virtio \ -boot order=c \ -m 2G \ -smp cores=1,threads=2 \ -vga std \ -name rb-dev2-www1-vm \ -vnc :84,password \ -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=virtio \ -net tap,vlan=0,ifname=tap84,script=/etc/qemu-ifup \ -monitor unix:/var/run/vm/rb-dev2-www1-vm.mon,server,nowait \ -rtc base=utc \ -device piix3-usb-uhci \ -device usb-tablet /etc/qemu-ifup (used by the above command) is a very simple script, containing the following: #!/bin/sh sudo /sbin/ifconfig $1 0.0.0.0 promisc up sudo /usr/sbin/brctl addif br0 $1 sleep 2 And here's the info on br0 and other interfaces: avl-host3 14# brctl show bridge name bridge id STP enabled interfaces br0 8000.180373f5521a no bond0 tap84 virbr0 8000.525400858961 yes virbr0-nic avl-host3 15# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff 3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff 4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 18:03:73:f5:52:1e brd ff:ff:ff:ff:ff:ff 5: em4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 18:03:73:f5:52:20 brd ff:ff:ff:ff:ff:ff 6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff inet6 fe80::1a03:73ff:fef5:521a/64 scope link valid_lft forever preferred_lft forever 7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff inet 172.16.1.46/24 brd 172.16.1.255 scope global br0 inet6 fe80::1a03:73ff:fef5:521a/64 scope link valid_lft forever preferred_lft forever 8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500 link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff 12: tap84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 link/ether ba:e8:9b:2a:ff:48 brd ff:ff:ff:ff:ff:ff inet6 fe80::b8e8:9bff:fe2a:ff48/64 scope link valid_lft forever preferred_lft forever bond0 is a bond of em1 and em2. virbr0 and virbr0-nic are vestigial interfaces left over from CentOS's default installation. They are unused (as far as I know). The guest runs perfectly until I run a large 'rsync', when the network will freeze after some seemingly-random time (usually under a minute). When it freezes, there is no network activity in or out of the guest. I can still connect to the guest's console via vnc, but it is unable to speak out its network interface. Any attempt to 'ping' from the guest gives a "Destination Host Unreachable" error for 3/4 packets and no reply for every fourth packet. 
Sometimes (perhaps two thirds of the time), I can bring the interface back to life by doing a "service network restart" from the guest's console. If this works (and if I do it before the rsync times out), the rsync will resume. Usually it will freeze again within a minute or two. If I repeat, the rsync will eventually finish, and I presume the machine goes back to waiting for another period of heavy load. Throughout the whole process, there are no console errors or relevant (that I can see) syslog messages on either guest or host machine. If the "service network restart" doesn't work the first time, trying again (and again and again) never seems to work. The command completes normally, with normal output, but the interface stays frozen. However, a soft reboot of the guest machine (without restarting qemu-kvm) always seems to bring it back. I am aware of the "lowest mac address" assignment problem, where the bridge takes on the mac address of the slave interface with the lowest mac address. This causes temporary network freezes, but is definitely not what's happening for me. My freezes are permanent until manual intervention, and you can see from the 'ip addr show' output above that the mac address being used by br0 is that of the physical ethernet. There are no other virtual machines running on the host. I've verified that each virtual machine on the subnet has its own unique mac address. I have rebuilt the guest machine several times, and I have tried this on three different host machines (identical hardware, built identically). Oddly, I do have one virtual host (the second of this series) which never seemed to have a problem. It never had its network freeze when it was running the same rsync during its build. It's particularly odd because it was the second build. The first, on a different host, did have the freezing problem, but the second did not. I assumed at the time that I had done something wrong with the first build, and that the problem was resolved. Unfortunately, the problem reappeared when I built the third VM. Also unfortunately, I can't do many tests with the working VM, as it's now in production use, and I'm hoping I can find the cause of this issue before that machine starts having problems. It's possible that I just got really lucky while running the rsync on the working machine, and that one time it didn't freeze. Of course it's possible that I somehow changed the build scripts without realizing it and re-broke something, but I can't find any such thing. In any case, I'm hoping someone has some idea what could cause this. Addendum: Preliminary tests suggest that I don't have the problem if I substitute e1000 for virtio in the first -net flag to qemu-kvm. I don't consider this a solution, but it is suitable for a stopgap. Has anyone else had (or better yet, solved) this problem with the virtio network driver?
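    For anyone wanting the stopgap from the addendum spelled out, the only change is the NIC model in the first -net flag of the qemu-kvm invocation above:

        -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=e1000 \

    Everything else stays the same; emulated e1000 costs more CPU than paravirtualized virtio, which is why it is a stopgap rather than a fix.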

    Read the article

  • Map a column to be IDENTITY in db with EF4 Code-Only

    - by Tomas Lycken
    Although I have marked my ID column with .Identity(), the generated database schema doesn't have IDENTITY set to true, which gives me problems when I'm adding records. If I manually edit the database schema (in SQL Management Studio) to have the Id column marked IDENTITY, everything works as I want it - I just can't make EF do that by itself. This is my complete mapping:

        public class EntryConfiguration : EntityConfiguration<Entry>
        {
            public EntryConfiguration()
            {
                Property(e => e.Id).IsIdentity();
                Property(e => e.Amount);
                Property(e => e.Description).IsRequired();
                Property(e => e.TransactionDate);
                Relationship(e => (ICollection<Tag>)e.Tags).FromProperty(t => t.Entries);
            }
        }

    As I'm using EF to build and re-build the database for integration testing, I really need this to be done automatically...

    Read the article

  • PostSharp , PDB Debugging and Referenced Assemblies ...

    - by Anil Bisnoi
    When using PostSharp with a referenced assembly that has proper PDB info (checked with chkmatch), the debug info strangely gets lost during the Visual Studio build and post-compile process, and I get the following error when using chkmatch to compare the assembly after the build: "Error: Debug information not found in the executable." So the debugger doesn't step into this assembly. Does PostSharp properly write back the assemblies without destroying the PDB location offset info? I saw no valid offset info in the DLL written back by PostSharp when inspecting it with a hex editor. What's the workaround for this? Thx, Anil Bisnoi

    Read the article

  • _CopyWebApplication with web.config transformations

    - by Jeremy
    I am trying to have my web application automatically Publish when a Release build is performed. I'm doing this using the _CopyWebApplication target. I added the following to my .csproj file:

        <!-- Automatically Publish in Release build. -->
        <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />
        <Target Name="AfterBuild">
          <RemoveDir Directories="$(ProjectDir)..\Output\MyWeb" ContinueOnError="true" />
          <MSBuild Projects="MyWeb.csproj"
                   Properties="Configuration=Release;WebProjectOutputDir=$(ProjectDir)..\Output\MyWeb;OutDir=$(ProjectDir)bin\"
                   Targets="ResolveReferences;_CopyWebApplication" />
        </Target>

    This works, but with one issue. The difference between this output and the output generated when using the Publish menu item in Visual Studio is that the Web.Release.config transformation is not applied to the Web.config file when using the MSBuild method. Instead, Web.config, Web.Release.config, and Web.Debug.config are all copied. Any ideas are appreciated.
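    The transform is done by a separate task, TransformXml, which the Publish pipeline invokes but _CopyWebApplication does not. One possible approach, sketched under the assumption of the standard VS 2010 file layout and the output path used above, is to run that task yourself at the end of the AfterBuild target and delete the transform sources from the output:

        <UsingTask TaskName="TransformXml"
                   AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />

        <!-- added after the <MSBuild> call inside the AfterBuild target above -->
        <TransformXml Source="Web.config"
                      Transform="Web.Release.config"
                      Destination="$(ProjectDir)..\Output\MyWeb\Web.config" />
        <Delete Files="$(ProjectDir)..\Output\MyWeb\Web.Release.config;$(ProjectDir)..\Output\MyWeb\Web.Debug.config" />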

    Read the article

  • How do you include additional files using VS2010 web deployment packages?

    - by Jason
    I am testing out using the new web packaging functionality in visual studio 2010 and came across a situation where I use a pre-build event to copy required .dll's into my bin folder that my app relies on for API calls. They cannot be included as a reference since they are not COM dlls that can be used with interop. When I build my deployment package those files are excluded when I choose the option to only include the files required to run the app. Is there a way to configure the deployment settings to include these files? I have had no luck finding any good documentation on this.
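    One approach often suggested for the VS 2010 packaging pipeline, offered here as a sketch rather than a confirmed recipe: hook the CopyAllFilesToSingleFolderForPackage step and add the extra binaries as FilesForPackagingFromProject items. Roughly (the target name and dll pattern are made up; the pattern assumes the pre-build event has already copied the files into bin):

        <PropertyGroup>
          <CopyAllFilesToSingleFolderForPackageDependsOn>
            IncludeApiDlls;
            $(CopyAllFilesToSingleFolderForPackageDependsOn);
          </CopyAllFilesToSingleFolderForPackageDependsOn>
        </PropertyGroup>

        <Target Name="IncludeApiDlls">
          <ItemGroup>
            <FilesForPackagingFromProject Include="bin\MyApi*.dll">
              <DestinationRelativePath>bin\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
            </FilesForPackagingFromProject>
          </ItemGroup>
        </Target>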

    Read the article

  • Reference a GNU C (POSIX) DLL built in GCC against Cygwin, from C#/NET

    - by Dale Halliwell
    Here is what I want: I have a huge legacy C/C++ codebase written for POSIX, including some very POSIX-specific stuff like pthreads. This can be compiled on Cygwin/GCC and run as an executable under Windows with the Cygwin DLL. What I would like to do is build the codebase itself into a Windows DLL that I can then reference from C# and write a wrapper around it to access some parts of it programmatically. I have tried this approach with the very simple "hello world" example at http://www.cygwin.com/cygwin-ug-net/dll.html and it doesn't seem to work.

        #include <stdio.h>

        extern "C" __declspec(dllexport) int hello();

        int hello()
        {
            printf ("Hello World!\n");
            return 42;
        }

    I believe I should be able to reference a DLL built with the above code in C# using something like:

        [DllImport("kernel32.dll")]
        public static extern IntPtr LoadLibrary(string dllToLoad);
        [DllImport("kernel32.dll")]
        public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);
        [DllImport("kernel32.dll")]
        public static extern bool FreeLibrary(IntPtr hModule);

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        private delegate int hello();

        static void Main(string[] args)
        {
            var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "helloworld.dll");
            IntPtr pDll = LoadLibrary(path);
            IntPtr pAddressOfFunctionToCall = GetProcAddress(pDll, "hello");
            hello hello = (hello)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall, typeof(hello));
            int theResult = hello();
            Console.WriteLine(theResult.ToString());
            bool result = FreeLibrary(pDll);
            Console.ReadKey();
        }

    But this approach doesn't seem to work. LoadLibrary returns null. It can find the DLL (helloworld.dll); it is just like it can't load it or find the exported function. I am sure that if I get this basic case working I can reference the rest of my codebase in this way. Any suggestions or pointers, or does anyone know if what I want is even possible? Thanks.

    Edit: Examined my DLL with Dependency Walker (great tool, thanks) and it seems to export the function correctly. Question: should I be referencing it as the function name Dependency Walker seems to find (_Z5hellov)?

    Edit 2: Just to show you I have tried it, linking directly to the dll at a relative or absolute path (i.e. not using LoadLibrary):

        [DllImport(@"C:\.....\helloworld.dll")]
        public static extern int hello();

        static void Main(string[] args)
        {
            int theResult = hello();
            Console.WriteLine(theResult.ToString());
            Console.ReadKey();
        }

    This fails with: "Unable to load DLL 'C:.....\helloworld.dll': Invalid access to memory location. (Exception from HRESULT: 0x800703E6)"

    Edit 3: Oleg has suggested running dumpbin.exe on my dll; this is the output:

        Dump of file helloworld.dll
        File Type: DLL
        Section contains the following exports for helloworld.dll
            00000000 characteristics
            4BD5037F time date stamp Mon Apr 26 15:07:43 2010
                0.00 version
                   1 ordinal base
                   1 number of functions
                   1 number of names
            ordinal hint RVA      name
                  1    0 000010F0 hello
        Summary
            1000 .bss
            1000 .data
            1000 .debug_abbrev
            1000 .debug_info
            1000 .debug_line
            1000 .debug_pubnames
            1000 .edata
            1000 .eh_frame
            1000 .idata
            1000 .reloc
            1000 .text

    Edit 4: Thanks everyone for the help, I managed to get it working. Oleg's answer gave me the information I needed to find out what I was doing wrong. There are two ways to do this. One is to build with the gcc -mno-cygwin compiler flag, which builds the DLL without the Cygwin DLL, basically as if you had built it in MinGW. Building it this way got my hello world example working!

    However, MinGW doesn't have all the libraries that Cygwin has in the installer, so if your POSIX code has dependencies on these libraries (mine had heaps) you can't go this way. And if your POSIX code didn't have those dependencies, why not just build for Win32 from the beginning? So that's not much help unless you want to spend time setting up MinGW properly.

    The other option is to build with the Cygwin DLL. The Cygwin DLL needs an initialization function, init(), to be called before it can be used. This is why my code wasn't working before. The code below loads and runs my hello world example.

        //[DllImport(@"hello.dll", EntryPoint = "#1", SetLastError = true)]
        //static extern int helloworld(); //don't do this! cygwin needs to be init first

        [DllImport("kernel32", CharSet = CharSet.Ansi, ExactSpelling = true, SetLastError = true)]
        static extern IntPtr GetProcAddress(IntPtr hModule, string procName);

        [DllImport("kernel32", SetLastError = true)]
        static extern IntPtr LoadLibrary(string lpFileName);

        public delegate int MyFunction();

        static void Main(string[] args)
        {
            //load cygwin dll
            IntPtr pcygwin = LoadLibrary("cygwin1.dll");
            IntPtr pcyginit = GetProcAddress(pcygwin, "cygwin_dll_init");
            Action init = (Action)Marshal.GetDelegateForFunctionPointer(pcyginit, typeof(Action));
            init();

            IntPtr phello = LoadLibrary("hello.dll");
            IntPtr pfn = GetProcAddress(phello, "helloworld");
            MyFunction helloworld = (MyFunction)Marshal.GetDelegateForFunctionPointer(pfn, typeof(MyFunction));
            Console.WriteLine(helloworld());
            Console.ReadKey();
        }

    Thanks to everyone that answered~~

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 2

    - by Tarun Arora
    Welcome back, in part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why Performance Testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In this blog post I’ll get into the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. Tools => Options => Test Tools Have you visited the treasures of Visual Studio Menu bar tools => Options => Test Tools lately? The options to enable disable prompts on creating, editing, deleting or running manual/automated tests can be controller from here. The default test project language and default test types created on a new test project creation could be selected/unselected from here. Ever wondered how you can change the default limit of 25 test results, this can again be changed from here. If you record a lot of Web Tests and wish for the web test recorder to start with “that” URL populated, well this again can be specified from here. If you haven’t so far, I would urge you to spend 2 minutes in the test tools options.   Test Menu => Ready Steady Test Action! The Test tools are under the Test Menu in Visual Studio, apart from being able to create a new Test and Test List you can also load an existing vsmdi file. You can also manage your test controllers from here. A solution can have one or more test setting files, but there can only be one active test settings file at any time. Again, this selection can be done from here.  You can open the various test windows from under the windows option from the test menu. If you open the Test view window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right clicking a test in the test list and choosing properties from the context menu.    So, what is a vsmdi file? vsmdi stands for Visual Studio Test Metadata File. Placed under the Solution Items this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an xml file you will see a series of Test Links nested with in the list Test List tags along with the Run Configuration tag. When in visual studio you run tests, the IDE looks at the vsmdi file to see what tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests need to run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in the team builds. Web Performance Test – The Truth! In Visual Studio 2010 “Web Tests” have been renamed to “Web Performance Tests”. Apart from renaming this test type there have been several improvements to this test type in visual studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum and a frequent question from many users is “Do Web Tests support Pages that run JavaScript?” I will start with a little bit of background before answering this question. Web Performance Tests operate at the HTTP Layer, but why? To enable you to generate high loads with a relatively low amount of hardware, Web performance tests are driven at the protocol layer rather than instantiating a browser.The most common source of confusion is that users do not realize Web Performance Tests work at the HTTP layer. The tool adds to that misconception. 
After all, you record in IE, and when running a Web test you can select which browser to use, and then the result viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The Web test engine works at the HTTP layer, and does not instantiate a browser. What does that mean? In the diagram below, you can see there are no browsers running when the engine is sending and receiving requests. Does that mean I can’t test pages that use Java script? The best example for java script generating HTTP traffic is AJAX calls. The most common example of browser plugins are Silverlight or Flash. The Web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to web performance test pages that use java script or plugin and play back the results but the playback engine will not show the java script or plug in results in the ‘browser control’. If you want to test the page behaviour as a result of the java script or plug in consider using Coded UI Tests. This page looks like it failed, when in fact it succeeded! Looking closely at the response, and subsequent requests, it is clear the operation succeeded. As stated above, the reason why the browser control is pasting this message is because java script has been disabled in this control. So, to reiterate, the web performance test recorder: - Sends and receives data at the HTTP layer. - Does NOT run a browser. - Does NOT run java script. - Does NOT host ActiveX controls or plugins. There is a great series of blog posts from Ed Glas, i would highly recommend his blog to any one performing Load/Performance testing through Visual Studio. Demo – Web Performance Test [Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration   [Demo]–Visual Studio Ultimate 2010: Web Performance Test   In this short video I try and answer the following questions, Why is performance Testing important? How does Visual Studio Help you performance Test your applications? How do i record a web performance test? How do make a web performance test data driven, transaction driven, loop driven, convert to code, add validations? Best practices for recording Web Performance Tests. I have a web performance test, what next? Creating the Web Performance Test was the first step towards load testing your application. Now that we have the base test we can test the page behaviour when N-users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and are expecting increased traffic on the web site, can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load, you can work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that it is always a good idea to have some benchmarks around how the application performs under light loads for short duration, under heavy load for long duration and soak test the application run a constant load for a very week or two to record the effects of constant load for really long durations, this is a great way of identifying how your application handles the default IIS application pool reset which by default is configured to once every 25 hours. These bench marks will act as the perfect yard stick to measure performance gains when you start making improvements. 
    BUT there are some best practices! => Goal Based Load Testing Approach

    Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal! You can optimize your application once you know where the pain points are. There is no point performing a load test of 5000 users if your intranet application will only have 100 simultaneous users; it is important to keep focused on the real goals of the project. So the idea is to have a user story around your load testing scenarios and test realistically. It is recommended that you follow the outline below. It is an iterative process: refine your objectives, identify the key scenarios, determine the expected workload, pick the key metrics you want to report, record the web performance tests, simulate load, and analyse the results.

    Is your application already deployed in production? This is great! You can analyse the IIS logs to understand the user behaviour. But what are IIS logs? The IIS logs allow you to record events for each application and web site on the web server. You can create separate logs for each of your applications and web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can use the IIS logs to identify any attempts to gain unauthorized access to your web server.

    How to configure IIS logs? For those ninjas who already have IIS logging configured (by the way, it's on by default) and need a way to analyse the logs, there is the Windows IIS utility Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others; it allows you to export the result of the queries to many output formats such as CSV, XML, SQL Server, charts and others; and it works well with IIS 5, 6, 7 and 7.5. See the frequently used Log Parser queries.

    Demo - Load Test

    [Demo] - Visual Studio Ultimate 2010: Load Testing

    In this short video I try and answer the following questions:
    - Types of performance testing?
    - How to perform goal driven load testing, analyse the test run result and generate a report?

    Recap: a quick recap of what we have covered so far.

    Thank you for taking the time out and reading this blog post. In part III of this blog series I'll be getting into the details of test result analysis, test result drill through, test report generation, test run comparison, and the ASP.NET Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc., please leave a comment. See you in Part III.
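    As a concrete starting point with the Log Parser tool mentioned above, a typical "busiest pages" query over W3C-format IIS logs looks roughly like this (the file mask is an assumption; it varies per site and IIS version):

        logparser -i:IISW3C -o:DATAGRID "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"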

    Read the article

  • NetBeans not liking libraries in lib-src

    - by DJTripleThreat
    I'm working on a project with a group that is using Eclipse, but I'm using NetBeans. Up until today this wasn't an issue. When updating from the repo, I found they have added some source code as a library under a directory called /lib-src. When I try to compile the code I get an error that it can't find certain packages... these are the packages under /lib-src. Using NetBeans I can add the library as a folder, so now the references to those packages are happy. However, I'm getting this new error when compiling:

        UNEXPECTED TOP-LEVEL ERROR:
        java.lang.OutOfMemoryError: Java heap space
            at java.util.HashMap.addEntry(HashMap.java:753)
            at java.util.HashMap.put(HashMap.java:385)
            at com.android.dx.dex.file.ClassDataItem.addStaticField(ClassDataItem.java:134)
            at com.android.dx.dex.file.ClassDefItem.addStaticField(ClassDefItem.java:280)
            at com.android.dx.dex.cf.CfTranslator.processFields(CfTranslator.java:159)
            at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:130)
            at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:85)
            at com.android.dx.command.dexer.Main.processClass(Main.java:297)
            at com.android.dx.command.dexer.Main.processFileBytes(Main.java:276)
            at com.android.dx.command.dexer.Main.access$100(Main.java:56)
            at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:228)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:134)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108)
            at com.android.dx.command.dexer.Main.processOne(Main.java:245)
            at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183)
            at com.android.dx.command.dexer.Main.run(Main.java:139)
            at com.android.dx.command.dexer.Main.main(Main.java:120)
            at com.android.dx.command.Main.main(Main.java:87)

        /home/aaron/NetBeansProjects/xbmc-remote/nbproject/build-impl.xml:411: exec returned: 3
        BUILD FAILED (total time: 1 minute 25 seconds)

    I can include the build-impl.xml file if you need it, but I don't think that is the main issue. Any ideas?
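    A guess at the cause, since dx appears in the trace: the dex step runs in its own JVM launched via exec from the ANT build, so NetBeans' own memory settings don't reach it. The commonly suggested workaround is to raise the default heap in the dx launcher that ships with the Android SDK (dx.bat on Windows, the dx shell script elsewhere); the exact variable name varies by SDK version, but it is roughly:

        # near the top of the SDK's dx script (name and default are version-dependent)
        defaultMx="-Xmx1024M"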

    Read the article

  • ASP.NET - Missing #includes cause compilation errors: Failed to map the path '...'

    - by frankadelic
    I have an ASP.NET application which features some server-side includes. For example: <!--#include virtual="/scripts.inc" --> These files are not present in my ASP.NET website project because my website starts in a virtual directory: /path-to-my-application When I choose Build Web Site, I get this error: Failed to map the path '/scripts.inc' Visual Studio cannot resolve these include files that are defined at the root directory level. They are not visible in the website project. Aside from manually commenting out the #include references, is there any way I can get the website to build? Can I force Visual Studio to ignore those errors and compile the site? Once the website is pushed out to IIS, there is no problem, because all the #include files are in place. NOTE - Web Controls are not an option for this application. Please assume #include files are a requirement. Also, I cannot move the include files since they are used by other applications.

    Read the article

  • Spring 3 JSON with MVC

    - by stevedbrown
    Is there a way to build Spring Web calls that consume and produce application/json formatted requests and responses respectively? Maybe this isn't Spring MVC, I'm not sure. I'm looking for Spring libraries that behave in a similar fashion to Jersey/JSON. The best case would be if there was an annotation that I could add to the Controller classes that would turn them into JSON service calls. A tutorial showing how to build Spring Web Services with JSON would be great.

    EDIT: I'm looking for an annotation based approach (similar to Jersey).
    EDIT 2: Like Jersey, I am looking for REST support (POST, GET, DELETE, PUT).
    EDIT 3: Most preferably, this will be the pom.xml entries and some information on using spring-js with Jackson, the Spring-native version of things.
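    For reference, Spring 3's MVC annotations can do this natively: with Jackson on the classpath and <mvc:annotation-driven/> in the configuration, @RequestBody and @ResponseBody (de)serialize JSON automatically. A sketch with invented class, URL and service names:

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.*;

        @Controller
        @RequestMapping("/accounts")
        public class AccountController {

            @Autowired
            private AccountService accountService;   // hypothetical service

            @RequestMapping(value = "/{id}", method = RequestMethod.GET)
            @ResponseBody                             // return value serialized to JSON by Jackson
            public Account get(@PathVariable long id) {
                return accountService.find(id);
            }

            @RequestMapping(method = RequestMethod.POST)
            @ResponseBody
            public Account create(@RequestBody Account account) {   // JSON request body deserialized
                return accountService.save(account);
            }
        }

    On the pom side, the relevant pieces are spring-webmvc plus a Jackson dependency (org.codehaus.jackson:jackson-mapper-asl in the Spring 3.0 era).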

    Read the article

  • Web Site in solution where "Rebuild Solution" compile succeeds cannot launch debugger

    - by fordareh
    I have a solution that includes a Web Site (created using the web site template, not the web app project template - converting isn't an option, btw). When I rebuild all, the compile succeeds, but strangely displays 3 errors, all of which are "Could not get dependencies for project reference 'PROJNAME'". When I try to launch the debugger, I get the "There were build errors." dialogue. Two questions: If I choose the 'Yes' option in the debug error dialogue to run the last successful build, will it run on the code that my Rebuild All just compiled? And how do I resolve this issue? I checked this post (http://stackoverflow.com/questions/863379/could-not-get-dependencies-for-project-reference) and am disheartened by my prospects. What is strange, though, is that I added these same projects to a separate web site solution that compiled/debugged fine, removed the test web site and re-added the target website I would like to debug, and it failed in the same manner. Is there a secret web site .proj file for .NET web sites?

    Read the article
