Search Results

Search found 3679 results on 148 pages for 'definition'.

Page 60/148 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • Real World Nuget

    - by JoshReuben
    Why NuGet
    A higher level of granularity for managing references. When you have solutions of many projects that depend on solutions of many projects, and so on, NuGet lets you escape from Solution Hell.
    Links
    · Using a GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
    · Creating a Nuspec file - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
    · Consuming a NuGet package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
    · Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
    · Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
    · Versioning - http://docs.nuget.org/docs/reference/versioning
    POC Folder Structure
    POC Setup Steps
    · Install Package Explorer
    · Source
    o Create a source solution – configure the output directory for the projects (Project > Properties > Build > Output Path)
    · Package
    o Add assemblies to the package from the output directory (drag & drop) - add the net folder
    o File > Export – saves the .nuspec file and lib contents:
    <?xml version="1.0" encoding="utf-16"?>
    <package>
      <metadata>
        <id>MyPackage</id>
        <version>1.0.0.3</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <summary />
      </metadata>
    </package>
    o File > Save – saves the .nupkg file
    · Create Target Solution
    o In Tools > Options: configure the package source & add the package. Select projects. Output from the package manager (PowerShell console):
    ------- Installing...MyPackage 1.0.0 -------
    Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
    Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
    Successfully installed 'MyPackage 1.0.0'.
    Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
    Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
    Added file 'packages.config'.
    Added file 'packages.config' to project 'AssemblyX'
    Added file 'repositories.config'.
    Successfully added 'MyPackage 1.0.0' to AssemblyX.
    ==============================
    o A packages folder is created at the solution level
    o A packages.config file is generated in each project:
    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="MyPackage" version="1.0.0" targetFramework="net40" />
    </packages>
    A local packages folder is created for the package versions installed. Each folder contains the downloaded .nupkg file and its unpacked contents – e.g. the DLLs that the project references. Note: this folder is not checked in.
    UpdatePackages
    o Configure the Package Manager to automatically check for updates
    o Browse packages - it automatically picked up the updates
    Update Procedure
    · Modify the source
    · Change the source version in the assembly info
    · Build the source
    · Open the last package in Package Explorer
    · Increment the package version number and re-add the assemblies
    · Save the package with the new version number and export its definition
    · In the target solution – Tools > Manage NuGet Packages – click on All to trigger a refresh, then click on recent packages to see the updates
    · If problematic, delete the packages folder
    Versioning
    uninstall-package mypackage
    install-package mypackage -version 1.0.0.3
    uninstall-package mypackage
    install-package mypackage -version 1.0.0.4
    Dependencies
    <?xml version="1.0" encoding="utf-16"?>
    <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
      <metadata>
        <id>MyDependentPackage</id>
        <version>1.0.0</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <dependencies>
          <group targetFramework=".NETFramework4.0">
            <dependency id="MyPackage" version="1.0.0.4" />
          </group>
        </dependencies>
      </metadata>
    </package>
    Using NuGet without committing packages to source control
    http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages
    Right-click on the Solution node in Solution Explorer and select Enable NuGet Package Restore — recall that the packages folder is not part of the solution. If you get "downloading package 'Nuget.build' failed", configure the proxy to support the certificate for https://nuget.org/api/v2/ & allow unrestricted access to packages.nuget.org.
    To test connectivity: get-package -listavailable
    To test NuGet Package Restore: delete the packages folder and open Visual Studio as admin.
    In the NuGet MSBuild targets:
    <Import Project="$(SolutionDir)\.nuget\nuget.targets" />
    TFSBuild Integration
    Modify the Nuget.targets file:
    <RestorePackages Condition=" '$(RestorePackages)' == '' ">True</RestorePackages>
    …
    <PackageSource Include="\\IL-CV-004-W7D\Packages" />
    Add the system environment variable EnableNuGetPackageRestore=true & restart the "Visual Studio Team Foundation Build Service Host" service.
    Important: ensure Network Service has access to the Packages folder.
    Nugetter TFS Build integration
    Add the Nugetter build process templates to TFS source control. For the Build Controller, specify the location of the custom assemblies. Generate the .nuspec file from Package Explorer: File > Export. Edit the file elements – remove path info from the src and target attributes:
    <?xml version="1.0" encoding="utf-16"?>
    <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
      <metadata>
        <id>Common</id>
        <version>1.0.0</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <dependencies>
          <group targetFramework=".NETFramework3.5" />
        </dependencies>
      </metadata>
      <files>
        <file src="CommonTypes.dll" target="CommonTypes.dll" />
        <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
    …
    Add the .nuspec file to the solution so that it is available for the build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec
    Add a Build Process Definition based on the Nugetter build process template and configure the build process – specify:
    · the .sln to build
    · the base path (output directory)
    · the Nuget.exe file path
    · the .nuspec file path
    Copy DLLs to a binary folder
    1) Set copy local for an assembly reference to false
    2) MSBuild Copy Task – modify the .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx
    <ItemGroup>
      <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
    </ItemGroup>
    <Target Name="BeforeBuild">
      <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
    </Target>
    3) Set the probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx
    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <probing privatePath="SourceAssemblies"/>
        </assemblyBinding>
      </runtime>
    </configuration>
    Forcing 'copy local = false'
    The following generic PowerShell script was added to the package's install.ps1:
    param($installPath, $toolsPath, $package, $project)
    if ($project.Object.Project.Name -ne "CopyPackages") {
      $asms = $package.AssemblyReferences | %{$_.Name}
      foreach ($reference in $project.Object.References) {
        if ($asms -contains $reference.Name + ".dll") {
          $reference.CopyLocal = $false;
        }
      }
    }
    An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.
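    For reference, the Package Explorer and console steps above can also be approximated from the command line with nuget.exe; a rough sketch, assuming nuget.exe is on the PATH and using the package share mentioned above (the feed name TeamPackages is just an example):
      nuget pack MyPackage.nuspec
      rem produces MyPackage.1.0.0.4.nupkg next to the .nuspec
      nuget sources add -Name TeamPackages -Source \\IL-CV-004-W7D\Packages
      nuget install MyPackage -Version 1.0.0.4 -Source TeamPackages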

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
    At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience – and how to publish once-written information in both of them. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice on this topic.
    My two blogs
    I have two working blogs: this one here, and a technology and programming blog for the local market. My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career and they have very good web statistics too.
    My local blog
    My local blog is about programming, the web and technology. It has a far wider target audience than this blog here has. For example, on my local blog I also blog about local events, cool new concept phones, different sites providing interesting services and so on. But local readers can also find there my postings about how to solve one or another programming problem, and postings about Microsoft technologies I am playing with. So far my local blog has a lot of readers for such a small country as Estonia. That blog has brought me a lot of cool contacts and I have had a lot of interesting discussions there about different technical topics.
    Why I started this blog
    Living in a small country is different from living in a big one. In a small country you have fewer people and therefore a smaller audience, so you have to target more than one technical topic to find enough readers. At the same time you are still interested in your main topics and you want to reach more people who share the same interests as you. Practically, one day you will grow out of the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with it? Every second of my time I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be more boring to work on different problems and solutions.
    Defining target audiences
    One thing you should always do when having more than one blog is define the target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process. You can do it the simple way and just as effectively. Here is how I defined the target audiences for my blogs:
    local blog – the reader of my local blog is an IT professional, software developer, technology innovator or just someone who is interested in technology;
    this blog – the reader of this blog is an experienced professional software developer who works with Microsoft technologies, or a software developer who is open-minded and open to new technologies and interesting solutions to development problems.
    You can see how the local blog – due to a small market with fewer people – has a wider definition for its audience, while this blog is heavily targeted at Microsoft technologies and especially at software development. On the practical side I think these decisions were made well, because it is very hard to build up a popular general IT blog. On the global level it is better to target some specific niche and find readers who are professionals in your favorite topics.
    Thanks to this blog I have found new friends who are professional developers and I am very happy about all the discussions I have had with them.
    Publishing content to different blogs
    My local blog and this blog have some overlapping topics like .NET, databases and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I publish the same thing on my local blog as well? Well, it really depends on the definition of your target audiences. If they match then of course it is a good idea to translate your post and publish it on the other blog too. But if you have different audiences then you may need to modify your posting before publishing it. The questions you have to answer are:
    is the target audience interested in this topic?
    is the target audience expecting a more specific and deeper handling of this topic, or a more general one?
    is the problem you are discussing relevant for the target audience or not?
    You have to answer these questions and then make your decision. If you need to modify your original posting, take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.
    Cross-posting and referencing
    It is tempting to save the time that preparing a blog post takes, and if you are done with a posting on one blog it may seem like a good idea to make a short posting on the other blog and add a reference to the first one, where the topic is discussed at greater length. Well, don't do it – all your readers expect good quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience and some people may be interested in a more specific handling of the topic. In this case feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. For example, Estonian is a complex language and online translation tools make very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits both blogs well I will write my posting on one blog and then answer the previous three questions before posting the same thing on the other one.
    Conclusion
    Growing out of your local market is not anything mysterious if you are living in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding these new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high quality content to your audience. It is something you learn over time, and you will learn something new every day when you are posting to your global blog. You may ask: if a global blog is a much more complex thing to do, is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing when you start, but if you work on your global blog and improve it over time you will get over all the obstacles pretty soon. Just don't forget one thing – content is king and your readers expect high quality from you.

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000; or a business process that handles article reviews in which a group of reviewers need to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow: 1) Approval Scenarios: manage documents and other transactional data through approval chains . For example: approve expense report, vacation approval, hiring approval, etc. 2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people. 3) Case Management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time. SOA 11g Human Workflow includes the following features: Assignment and routing of tasks to the correct users or groups. Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task. Presentation of tasks to end users through a variety of mechanisms, including a Worklist application. Organization, filtering, prioritization and other features required for end users to productively perform their tasks. Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks. Human Workflow Architecture The Human Workflow component is divided into 3 modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task. Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated?, etc Stages and Participants When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel. You can combine them to create any usage you require. See diagram below: Assignment and Routing There are different ways a task can be assigned and routed: Single Approver: task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified with the decision. If the task is assigned to a group then once one of managers acts on it, the task is completed. Parallel : task is assigned to a set of people that must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote. 
Serial : participants must work in sequence. The most common scenario for this is management chain escalation. FYI (For Your Information) : task is assigned to participants who can view it, add comments and attachments, but can not modify or complete the task. Task Actions The following is the list of actions that can be performed on a task: Claim : if a task is assigned to a group or multiple users, then the task must be claimed first to be able to act on it. Escalate : if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy). Pushback : the task is sent back to the previous assignee. Reassign :if the participant is a manager, he/she can delegate a task to his/her reports. Release : if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete the task. Any of the other assignees can claim and complete the task. Request Information and Submit Information : use when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees. Suspend and Resume :if a task is not relevant, it can be suspended. A suspension is indefinite. It does not expire until Resume is used to resume working on the task. Withdraw : if the creator of a task does not want to continue with it, for example, he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next. Renew : if a task is about to expire, the participant can renew it. The task expiration date is extended one week. Notifications Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following: Assigned/renewed/delegated/reassigned/escalated Completed Error Expired Request Info Resume Suspended Added/Updated comments and/or attachments Updated Outcome Withdraw Other Actions (e.g. acquiring a task) Here is an example of an email notification: Worklist Application Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can: Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks. Filter tasks view based on various criteria. Work with standard work queues, such as high priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example, high priority tasks, tasks due in 24 hours, expense approval tasks and more. Define custom work queues. Gain proxy access to part of another user's tasks. Define custom vacation rules and delegation rules. Enable group owners to define task dispatching rules for shared tasks. Collect a complete workflow history and audit trail. Use digital signatures for tasks. Run reports like Unattended tasks, Tasks productivity, etc. Here is a screenshoot of what the Worklist Application looks like. On the right hand side you can see the tasks that have been assigned to the user and the task's detail. 
References Introduction to SOA Suite 11g Human Workflow Webcast Note 1452937.2 Human Workflow Information Center Using the Human Workflow Service Component 11.1.1.6 Human Workflow Samples Human Workflow APIs Java Docs

    Read the article

  • Box2D blocky map. Body, Fixtures a huge map and performance

    - by Solom
    Right now I'm still in the planning phase of my very first game. I'm creating a "Minecraft"-like game in 2D that features blocks that can be destroyed, as well as players moving around the map. For the map I chose a 2D array of integers that represent the block ID. For testing purposes I created a huge map (16348 * 256), and in my prototype that didn't use Box2D everything worked like a charm: I only rendered those blocks that were within the bounds of my camera and got a straight 60 fps. The problem started when I decided to use an existing physics solution rather than implementing my own. What I had before was basically simple hitboxes around the blocks, and I had to manually check whether the player collided with any of those in his neighborhood. For more advanced physics as well as the collision detection I want to switch over to Box2D. The problem I have right now is... how to go about the bodies? I mean, the blocks are of a static body type. They don't move on their own, they are just there to be collided with. But as far as I can see, every block needs its own body with a rectangular fixture attached to it, so as to be destroyable. For a huge map such as mine, this turns out to be a real performance bottleneck. (In fact even a rather small map [compared to the other] of 1024*256 is unplayable.) I mean, I create thousands upon thousands of blocks. Even if I just render those that are in my immediate neighborhood there are hundreds of them, and (at least with the debugRenderer) I drop to 1 fps really quickly (on my own "monster machine"). I thought about strategies like creating just one body, attaching multiple fixtures and, only if a fixture got hit, separating it from the body, creating a new one and destroying it, but this didn't turn out quite as successful as hoped. (In fact the core just dumps. Ah hello C!
    I really missed you :X) Here is the code:
    public class Box2DGameScreen implements Screen {
        private World world;
        private Box2DDebugRenderer debugRenderer;
        private OrthographicCamera camera;
        private final float TIMESTEP = 1 / 60f; // 1/60 of a second -> 1 frame per second
        private final int VELOCITYITERATIONS = 8;
        private final int POSITIONITERATIONS = 3;
        private Map map;
        private BodyDef blockBodyDef;
        private FixtureDef blockFixtureDef;
        private BodyDef groundDef;
        private Body ground;
        private PolygonShape rectangleShape;

        @Override
        public void show() {
            world = new World(new Vector2(0, -9.81f), true);
            debugRenderer = new Box2DDebugRenderer();
            camera = new OrthographicCamera();
            // Pixel:Meter = 16:1
            // Body definition
            BodyDef ballDef = new BodyDef();
            ballDef.type = BodyDef.BodyType.DynamicBody;
            ballDef.position.set(0, 1);
            // Fixture definition
            FixtureDef ballFixtureDef = new FixtureDef();
            ballFixtureDef.shape = new CircleShape();
            ballFixtureDef.shape.setRadius(.5f); // 0,5 meter
            ballFixtureDef.restitution = 0.75f; // between 0 (not jumping up at all) and 1 (jumping up the same amount as it fell down)
            ballFixtureDef.density = 2.5f; // kg / m²
            ballFixtureDef.friction = 0.25f; // between 0 (sliding like ice) and 1 (not sliding)
            // world.createBody(ballDef).createFixture(ballFixtureDef);
            groundDef = new BodyDef();
            groundDef.type = BodyDef.BodyType.StaticBody;
            groundDef.position.set(0, 0);
            ground = world.createBody(groundDef);
            this.map = new Map(20, 20);
            rectangleShape = new PolygonShape();
            // rectangleShape.setAsBox(1, 1);
            blockFixtureDef = new FixtureDef();
            // blockFixtureDef.shape = rectangleShape;
            blockFixtureDef.restitution = 0.1f;
            blockFixtureDef.density = 10f;
            blockFixtureDef.friction = 0.9f;
        }

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            debugRenderer.render(world, camera.combined);
            drawMap();
            world.step(TIMESTEP, VELOCITYITERATIONS, POSITIONITERATIONS);
        }

        private void drawMap() {
            for(int a = 0; a < map.getHeight(); a++) {
                /*
                if(camera.position.y - (camera.viewportHeight/2) > a) continue;
                if(camera.position.y - (camera.viewportHeight/2) < a) break;
                */
                for(int b = 0; b < map.getWidth(); b++) {
                    /*
                    if(camera.position.x - (camera.viewportWidth/2) > b) continue;
                    if(camera.position.x - (camera.viewportWidth/2) < b) break;
                    */
                    /*
                    blockBodyDef = new BodyDef();
                    blockBodyDef.type = BodyDef.BodyType.StaticBody;
                    blockBodyDef.position.set(b, a);
                    world.createBody(blockBodyDef).createFixture(blockFixtureDef);
                    */
                    PolygonShape rectangleShape = new PolygonShape();
                    rectangleShape.setAsBox(1, 1, new Vector2(b, a), 0);
                    blockFixtureDef.shape = rectangleShape;
                    ground.createFixture(blockFixtureDef);
                    rectangleShape.dispose();
                }
            }
        }

        @Override
        public void resize(int width, int height) {
            camera.viewportWidth = width / 16;
            camera.viewportHeight = height / 16;
            camera.update();
        }

        @Override
        public void hide() {
            dispose();
        }

        @Override
        public void pause() {
        }

        @Override
        public void resume() {
        }

        @Override
        public void dispose() {
            world.dispose();
            debugRenderer.dispose();
        }
    }
    As you can see I'm facing multiple problems here. I'm not quite sure how to check for the bounds, but also, if the map is bigger than 24*24 - like 1024*256 - Java just crashes -.-. And with 24*24 I get like 9 fps. So it seems I'm doing something really terrible here, and I assume that there must be a (much more performant) way, even with Box2D's awesome physics. Any other ideas? Thanks in advance!
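    Not from the original post, but as a sketch of the strategy it gestures at - one shared static body, with fixtures kept only for blocks in a window around the camera and created once when the window moves rather than on every drawMap() call - something along these lines could work. The isSolid() helper on the post's Map class and the window size are assumptions:
    import com.badlogic.gdx.math.Vector2;
    import com.badlogic.gdx.physics.box2d.Body;
    import com.badlogic.gdx.physics.box2d.Fixture;
    import com.badlogic.gdx.physics.box2d.FixtureDef;
    import com.badlogic.gdx.physics.box2d.PolygonShape;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;

    /** Keeps Box2D fixtures only for blocks inside a window around the camera.
     *  Sketch only: assumes map.isSolid(x, y) exists and blocks are 1 m x 1 m. */
    public class BlockFixtureCache {
        private final Body ground;   // the single static body shared by all block fixtures
        private final Map map;       // the game's own Map class from the post above
        private final HashMap<Long, Fixture> active = new HashMap<Long, Fixture>();

        public BlockFixtureCache(Body ground, Map map) {
            this.ground = ground;
            this.map = map;
        }

        /** Call when the camera has moved to a new cell, NOT every frame from drawMap(). */
        public void update(int camX, int camY, int halfWidth, int halfHeight) {
            // drop fixtures for blocks that left the active window
            List<Long> gone = new ArrayList<Long>();
            for (Long key : active.keySet()) {
                int x = (int) (key >> 32);
                int y = key.intValue();
                if (Math.abs(x - camX) > halfWidth || Math.abs(y - camY) > halfHeight) gone.add(key);
            }
            for (Long key : gone) ground.destroyFixture(active.remove(key));

            // create fixtures for solid blocks that entered the window
            for (int y = camY - halfHeight; y <= camY + halfHeight; y++) {
                for (int x = camX - halfWidth; x <= camX + halfWidth; x++) {
                    if (x < 0 || y < 0 || x >= map.getWidth() || y >= map.getHeight()) continue;
                    long key = ((long) x << 32) | (y & 0xffffffffL);
                    if (active.containsKey(key) || !map.isSolid(x, y)) continue;
                    PolygonShape shape = new PolygonShape();
                    shape.setAsBox(0.5f, 0.5f, new Vector2(x, y), 0); // half-extents for a 1x1 block
                    FixtureDef def = new FixtureDef();
                    def.shape = shape;
                    def.friction = 0.9f;
                    active.put(key, ground.createFixture(def));
                    shape.dispose();
                }
            }
        }
    }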

    Read the article

  • A Linker Resolution Problem in a C++ Program

    - by Vlad
    We have two source files, a.cpp and b.cpp, and a header file named constructions.h. We define a simple C++ class with the same name (class M, for instance) in each source file. The file a.cpp looks like this:
    #include "iostream"
    #include "constructions.h"

    class M {
        int i;
    public:
        M() : i( -1 ) { cout << "M() from a.cpp" << endl; }
        M( int a ) : i( a ) { cout << "M(int) from a.cpp, i: " << i << endl; }
        M( const M& b ) { i = b.i; cout << "M(M&) from a.cpp, i: " << i << endl; }
        M& operator = ( M& b ) { i = b.i; cout << "M::operator =(), i: " << i << endl; return *this; }
        virtual ~M() { cout << "M::~M() from a.cpp" << endl; }
        operator int() { cout << "M::operator int() from a.cpp" << endl; return i; }
    };

    void test1() {
        cout << endl << "Example 1" << endl;
        M b1;
        cout << "b1: " << b1 << endl;
        cout << endl << "Example 2" << endl;
        M b2 = 5;
        cout << "b2: " << b2 << endl;
        cout << endl << "Example 3" << endl;
        M b3(6);
        cout << "b3: " << b3 << endl;
        cout << endl << "Example 4" << endl;
        M b4 = b1;
        cout << "b4: " << b4 << endl;
        cout << endl << "Example 5" << endl;
        M b5;
        b5 = b2;
        cout << "b5: " << b5 << endl;
    }

    int main(int argc, char* argv[]) {
        test1();
        test2();
        cin.get();
        return 0;
    }
    The file b.cpp looks like this:
    #include "iostream"
    #include "constructions.h"

    class M {
    public:
        M() { cout << "M() from b.cpp" << endl; }
        ~M() { cout << "M::~M() from b.cpp" << endl; }
    };

    void test2() { M m; }
    Finally, the file constructions.h contains only the declaration of the function "test2()" (which is defined in "b.cpp"), so that it can be used in "a.cpp":
    using namespace std;
    void test2();
    We compiled and linked these three files using either VS2005 or the GNU 4.1.0 compiler and the 2.16.91 ld linker under Suse. The results are surprising and different between the two build environments. But in both cases it looks like the linker gets confused about which definition of the class M it should use.
    If we comment out the definition of test2() from b.cpp and its invocation from a.cpp, then all the C++ objects created in test1() are of the type M defined in a.cpp and the program executes normally under Windows and Suse. Here is the run output under Windows:
    Example 1
    M() from a.cpp
    M::operator int() from a.cpp
    b1: -1
    Example 2
    M(int) from a.cpp, i: 5
    M::operator int() from a.cpp
    b2: 5
    Example 3
    M(int) from a.cpp, i: 6
    M::operator int() from a.cpp
    b3: 6
    Example 4
    M(M&) from a.cpp, i: -1
    M::operator int() from a.cpp
    b4: -1
    Example 5
    M() from a.cpp
    M::operator =(), i: 5
    M::operator int() from a.cpp
    b5: 5
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    If we enable the definition of test2() in "b.cpp" but comment out its invocation from main(), then the results are different. Under Suse, the C++ objects created in test1() are still of the type M defined in a.cpp and the program still seems to execute normally. The VS2005 versions behave differently in Debug or Release mode: in Debug mode, the program still seems to execute normally, but in Release mode, b1 and b5 are of the type M defined in b.cpp (as the constructor invocation proves), although the other member functions called (including the destructor) belong to M defined in a.cpp.
    Here is the run output for the executable built in Release mode:
    Example 1
    M() from b.cpp
    M::operator int() from a.cpp
    b1: 4206872
    Example 2
    M(int) from a.cpp, i: 5
    M::operator int() from a.cpp
    b2: 5
    Example 3
    M(int) from a.cpp, i: 6
    M::operator int() from a.cpp
    b3: 6
    Example 4
    M(M&) from a.cpp, i: 4206872
    M::operator int() from a.cpp
    b4: 4206872
    Example 5
    M() from b.cpp
    M::operator =(), i: 5
    M::operator int() from a.cpp
    b5: 5
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    Finally, if we allow the call to test2() from main, the program misbehaves in all circumstances (that is, under Suse and under Windows in both Debug and Release modes). The Windows-Debug version finds a memory corruption around the variable m, defined in test2(). Here is the Windows output in Release mode (test2() seems to have created an instance of M defined in b.cpp):
    Example 1
    M() from b.cpp
    M::operator int() from a.cpp
    b1: 4206872
    Example 2
    M(int) from a.cpp, i: 5
    M::operator int() from a.cpp
    b2: 5
    Example 3
    M(int) from a.cpp, i: 6
    M::operator int() from a.cpp
    b3: 6
    Example 4
    M(M&) from a.cpp, i: 4206872
    M::operator int() from a.cpp
    b4: 4206872
    Example 5
    M() from b.cpp
    M::operator =(), i: 5
    M::operator int() from a.cpp
    b5: 5
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M() from b.cpp
    M::~M() from b.cpp
    And here is the Suse output. The objects created in test1() are of the type M defined in a.cpp, but the object created in test2() is also of the type M defined in a.cpp, unlike the object created under Windows, which is of the type M defined in b.cpp. The program crashed in the end:
    Example 1
    M() from a.cpp
    M::operator int() from a.cpp
    b1: -1
    Example 2
    M(int) from a.cpp, i: 5
    M::operator int() from a.cpp
    b2: 5
    Example 3
    M(int) from a.cpp, i: 6
    M::operator int() from a.cpp
    b3: 6
    Example 4
    M(M&) from a.cpp, i: -1
    M::operator int() from a.cpp
    b4: -1
    Example 5
    M() from a.cpp
    M::operator =(), i: 5
    M::operator int() from a.cpp
    b5: 5
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M::~M() from a.cpp
    M() from a.cpp
    M::~M() from a.cpp
    Segmentation fault (core dumped)
    I couldn't make the angle brackets appear using Markdown, so I used quotes around the header file name iostream. Otherwise, the code could be copied verbatim and tried. It is purely scholastic. The statement cin.get() at the end of main() was included just to facilitate running the program directly from VS2005 (it causes the output window to stay open until we could analyze the output). We are looking for a software engineer in Sunnyvale, CA and may offer that position to the programmer capable of providing an intelligent and comprehensive explanation of these anomalies. I can be contacted at [email protected].
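    For readers puzzling over the same behaviour: defining two different classes with the same name M at namespace scope in two translation units breaks the One Definition Rule, which is exactly the "linker gets confused" effect described above, and the linker is free to pick either definition. A minimal sketch (not part of the original exercise) of how b.cpp could keep its helper class private to that translation unit is an unnamed namespace:
    #include <iostream>
    #include "constructions.h"

    namespace {                // unnamed namespace: this M has internal linkage,
        class M {              // so it no longer clashes with the class M in a.cpp
        public:
            M()  { std::cout << "M() from b.cpp" << std::endl; }
            ~M() { std::cout << "M::~M() from b.cpp" << std::endl; }
        };
    }

    void test2() { M m; }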

    Read the article

  • NSClient++: external script with optional arguments

    - by syneticon-dj
    I am trying to define an external script which would take optional arguments in NSClient++ 0.4.1 on Windows. Following the nsclient-full.ini example code I have defined
    mycheck=cmd /C echo C:\mydir\myscript.ps1 %ARGS% | powershell.exe -command -
    which simply yields the string %ARGS% passed as the only argument to myscript.ps1, no matter what I specify in my call through NRPE (using Nagios' check_nrpe, if that matters). I then tried to rewrite the definition as
    mycheck=cmd /C echo C:\mydir\myscript.ps1 $ARG1$ $ARG2$ | powershell.exe -command -
    (myscript.ps1 would take up to two arguments), which does help a bit. At least, if two arguments are provided, I can fetch them via the args[] array. The trouble starts when the call has fewer than two arguments - in this case the literal strings $ARG1$ and $ARG2$ are passed through as arguments. Handling this case in the code of myscript.ps1 makes the whole argument processing routine ugly at best. Is there a sane way of defining optional parameters to an external script which would not pass NSClient's variable names if no parameter has been specified?
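    A sketch of the in-script workaround hinted at above (the script and its parameter names are hypothetical, not from the post): give the parameters defaults and treat an unexpanded NSClient++ macro the same as a missing argument.
    param(
        [string]$Target = "",
        [string]$Threshold = ""
    )
    # NSClient++ passes the literal text $ARG1$ / $ARG2$ when no value was supplied,
    # so blank those out before using them.
    if ($Target -like '$ARG*$')    { $Target = "" }
    if ($Threshold -like '$ARG*$') { $Threshold = "" }
    if ([string]::IsNullOrEmpty($Threshold)) { $Threshold = "80" }   # assumed default

    Write-Output "checking '$Target' against threshold $Threshold"
    exit 0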

    Read the article

  • Reconfigure RAID on Dell PowerEdge T710

    - by Stefano Borini
    I have a Dell PowerEdge T710 under my feet at this very moment, with RedHat Enterprise Server 5.3. I have six 1 TB disks and two 500 GB disks. parted reports two devices, one of 500 GB and the other of 4 TB. So I assume the RAID has been set up as a mirror for the two 500 GB disks, and as RAID 5 for the remaining ones. I say "I assume" because it does not quite add up: with 6 disks in RAID 5 I should obtain a total space of 5 TB, not 4 TB. It's not RAID 10 either: that would end up as a 3 TB unit. How can I check, and eventually modify, the RAID array definition? On the Fujitsu Siemens box I played with some time ago I had the chance to enter the controller BIOS at boot, but here I don't see a clear way to perform this operation.

    Read the article

  • mono: application not running when web.config in subfolder

    - by Quandary
    I run ASP.NET on Mono. I wanted to run BlogEngine: http://www.dotnetblogengine.net/ I downloaded it, put it in /var/www/blogengine, went to /var/www/blogengine on the console and started xsp2. I went to http://localhost:8080, and it ran without problems. Then I stopped xsp2, went to /var/www and started xsp2 there. I went to http://localhost:8080/blogengine and it ended with a strange error:
    The section <blogProvider> can't be defined in this configuration file (the allowed definition context is 'MachineToApplication'). (/var/www/blogengine/Web.Config line 9)
    The problem seems to be that it stops working as soon as the xsp2 root folder is anything other than the application root folder... Do I have to configure anything? Or what else is wrong?
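    The error is consistent with Web.config being read for a plain subdirectory rather than an application root - sections marked MachineToApplication may only appear in an application's own root config. If memory serves, xsp2 can map a virtual path to its own application; a hedged sketch (check xsp2 --help for the exact option spelling):
    cd /var/www
    xsp2 --port 8080 --applications /blogengine:/var/www/blogengine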

    Read the article

  • Junos custom-attack signature pattern syntax

    - by James Hawkwind
    I am stuck at a point with the configuration of a custom-attack signature in Junos. According to the Junos Custom Attack Definition documentation page, I can set up a custom attack based upon a signature in the packet. In the documentation you can specify a "pattern" to match, but it fails to describe what the pattern syntax should be. Particularly, I want to match the HEX values of 8C 00 13 00 in the first four bytes of the TCP data payload. Does anyone know how to accomplish this correctly?

    Read the article

  • Dolby Digital Live (DDL) on Asus Rampage II Gene (Creative X-Fi Extreme)

    - by kevyn
    Hi there, I have an Asus Rampage II Gene motherboard which has X-Fi extreme built in. I can get it to work with Windows 7 ok using the Creative drivers, however when I try and install the DDL/DTS add on pack from Creative I get the error message: "There are no supported audio device available. You need to close the application. Click OK to close the application now" I don't understand it because I have the Creative software installed ok and supporting the sound without any problems. In Device manager the audio device comes up as 'High definition audio device' and uses driver: 6.1.7600.16385 from Microsoft. I tried using the Creative drivers which show up as 'soundmax HD audio' however these do not allow any of the Creative products to run properly. Please can anyone offer any help? Or even just confirm that DDL can work with my onboard sound? Windows 7 Ultimate 64 bit 6GB DDR3 XFX GS8800 384mb Asus Rampage II Gene Intel i7 920 (2.66)

    Read the article

  • Setting alias in Windows PowerShell

    - by westsider
    In PowerShell, I type: PS C: sal cdp "cd 'C:\Users\ec\Documents\Visual Studio 2010\Projects'" I get no error from this, and PS C: gal cdp shows definition as: cd 'C:\Users\ec\Documents\Visual Studio 2010\Projects' But, when I try to use cdp, I get this: Cannot resolve alias 'cdp' because it refers to term 'cd 'C:\Users\ec\Documents\Visual Studio 2010\Projects'', which is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again. At line:1 char:4 + cdp <<<<  + CatergoryInfo   : ObjectNotFound (dsp:String) [], CommandNotFoundException  + FullyQualifiedErrorId   : AliasNotResolvedException I am guessing that this is trivially easy. So I apologize in advance if that is the case. I have googled and googled and have also read through Windows PowerShell Cookbook.
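    For context on why this fails (a sketch, not from the post): a PowerShell alias can only point at a command name, not at a command plus arguments, so the usual workaround is a small function instead:
    function cdp {
        Set-Location 'C:\Users\ec\Documents\Visual Studio 2010\Projects'
    }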

    Read the article

  • problem connecting to datasource defined in freetds.conf

    - by pkaeding
    I can connect successfully to my database using tsql when I bypass the freetds.conf file, like so: % TDSVER=8.0 tsql -H 10.100.102.202 -p 1086 -U sa After I enter my password, I am presented with a 1> prompt, and it is ready for my commands. However, if I try to connect using the definition in my freetds.conf file, like this: % tsql -S Millie -U sa after entering my password, it seems to be trying to generate a prompt, but it just keeps counting. I will see 1, followed by 2, etc, without ever displaying a > character. Here is what I have for my freetds.conf: [global] # TDS protocol version tds version = 8.0 text size = 64512 [Millie] host = 10.100.102.202 port = 1086 What could be causing this anomaly? If it helps, here is the output of tsql -C: % tsql -C Compile-time settings (established with the "configure" script) Version: freetds v0.82 freetds.conf directory: /usr/local/etc MS db-lib source compatibility: no Sybase binary compatibility: no Thread safety: yes iconv library: yes TDS version: 5.0 iODBC: no unixodbc: no
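    The counting prompt generally means tsql is still waiting for the connection to come up. Two environment variables that FreeTDS honours can help narrow down whether the [Millie] section is being read at all (paths are assumptions):
    # confirm which freetds.conf is actually read (tsql -C shows the compiled-in
    # directory, /usr/local/etc above); FREETDSCONF points at a file explicitly
    env FREETDSCONF=/usr/local/etc/freetds.conf tsql -S Millie -U sa

    # write a protocol trace to see how far the login gets
    env TDSDUMP=/tmp/freetds.log tsql -S Millie -U sa
    less /tmp/freetds.log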

    Read the article

  • CPU Usage at 100% with "Hardware Interrupts"

    - by eventualEntropy
    After turning on my desktop one day, I found that my CPU usage was maxed out at 100%, with 99% of that going to hardware "Interrupts". I tried to enable/disable all my devices one by one through the device manager, and found that I could get the CPU usage used by the Interrupts down to 50% by disabling all devices labelled "USB Host Controller" (except the ones for the mouse/keyboard). I found that I also got 10-20% more from disabling "High Definition Audio Controller". Following the tutorial at: http://www.msfn.org/board/topic/140263-how-to-get-the-cause-of-high-cpu-usage-by-dpc-interrupt/ Led me to similar conclusions (that is, that the culprit is mostly "USB Host Controller"): I've tried updating my asus motherboard driver and my video card driver. This is on Windows 7 64 bit. I've spent hours trying to figure this out and I'm running out of ideas short of formatting (which might still not fix it!).

    Read the article

  • Immutable hard links on ext3/4?

    - by shovas
    In my research on file versioning at the fs level, snapshotting, and related ideas, I took a look at hard-links and exactly what they are and how they behave. Using rsync you can get a pretty slick poor man's snapshotting system up and running on file systems that don't natively support it. But, can you get immutable hard links on ext3/4 or any other file systems for that matter? My definition for immutable hard link is: A hard link which, when changed on one location, becomes a regular copy and no longer a hard link. I would like this because it would enable snapshotting use of the source data to link against instead of a copy of the data (in the case of the rsync snapshotting technique). I have gigabytes of data that can't be duplicated due to space restrictions but I have enough room if I can intelligently snapshot individual changed files with the rest linked to the source not a copy. Given all that, is there some other technique, feature or technology I'm really looking for?
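    For reference, the rsync technique alluded to above usually relies on --link-dest; a minimal sketch (paths are placeholders):
    # snapshot.1 re-uses (hard-links) every file that is unchanged relative to snapshot.0,
    # so only changed files consume additional space
    rsync -a --delete --link-dest=/backups/snapshot.0 /data/ /backups/snapshot.1/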

    Read the article

  • kmemsize problems in VPS even when there is about 500MB free mem

    - by Amer
    Hello, I have a site hosted on a Plesk VPS with 512MB memory and keep on getting kmemsize in "black zone" QoS errors. The soft limit of kmemsize is 12,288,832 and hard limit is 13,517,715. The definition Virtuozzo gives is: Size of unswappable memory, allocated by the operating system kernel. What's eating up the kmemsize? Is there any way to reconfigure and increase the kmemsize? The servers barely have any load or processing. Thanks for the help...
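    Inside an OpenVZ/Virtuozzo container the current kmemsize usage and failure count are visible in /proc/user_beancounters, which makes it easier to see what is hitting the barrier; a quick check might look like this (assuming the file is readable from inside the VPS):
    # held/maxheld are current and peak usage; barrier/limit are the soft and hard
    # limits; a growing failcnt means kernel allocations are being refused
    grep -E 'uid|kmemsize' /proc/user_beancounters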

    Read the article

  • Evaluate a Munin graph defined in munin.conf

    - by Ztyx
    Hi, I have defined an additional graph (in Munin, munin.conf) that calculates the total size of my MySQL database. The index and data sizes are extracted from an external plugin. The definition looks like this: [...] [Database;my.host.com] address my.host.com use_node_name yes dbsize.update no dbsize.graph_args --base 1024 -l 0 dbsize.graph_title Total database size dbsize.graph_vlabel bytes dbsize.graph_category mysql dbsize.graph_info The total database size. dbsize.graph_order the_sum dbsize.the_sum.sum \ my.host.com:mysql_size.index \ my.host.com:mysql_size.datas dbsize.the_sum.label data+index dbsize.the_sum.type GAUGE dbsize.the_sum.min 0 [...] Now, is it possible to extract the current value of this graph? Running # munin-run dbsize or # munin-run my.host.com:dbsize does not seem to work.
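    Since the_sum is computed from the two underlying mysql_size fields, one way to get the current number outside of Munin's graphing is to read those RRD files directly with rrdtool and add the values up; a hedged sketch (the file names assume Munin's usual <host>-<plugin>-<field>-g.rrd layout, so adjust the paths to what actually exists under /var/lib/munin):
    rrdtool lastupdate /var/lib/munin/Database/my.host.com-mysql_size-index-g.rrd
    rrdtool lastupdate /var/lib/munin/Database/my.host.com-mysql_size-datas-g.rrd
    # each command prints the newest timestamp and value; the graph's value is their sum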

    Read the article

  • Windows 7 Disk Management Spanned Volume vs Striped Volume

    - by Kairan
    Im looking for a reason why a person would use a Spanned volume rather than a Striped volume? If my understanding is correct Striped: Faster read/write speed than spanned, but I "assume" more wear+tear Spanned: No speed benefit like striped, but data is written sequentially and fills up Drive1 before filling up Drive2, so it saves on wear+tear Beyond that Im not sure if there is any other deciding factor on which to use. Definition found below: A striped volume uses the free space on more than one physical hard disk to create a bigger volume. Unlike a spanned volume, a striped volume writes across all volumes in the stripe in small blocks, distributing the load across the disks in the volume. The portions of disk used to create the volume need to be the same size; the size of the smallest free space included in the striped volume will determine.

    Read the article

  • Live noise-filter on line-in

    - by Damon Gant
    I'm running the following setup: an Xbox 360 is hooked up to my (PC) screen via an HDMI/DVI converter. Because the Xbox has no dedicated sound output except for optical S/PDIF, I'm also using the AV/RCA output - just the audio - which is connected to an old stereo, which is then connected to my PC's line-in. I'm now experiencing some noise. I'm using one of the standard "Realtek High Definition Audio" cards, which doesn't seem to offer this kind of functionality. Is there software that will play back audio straight off a device while running filters on it? It doesn't have to create a device of its own, I just want to listen to it. Here's a sample: http://puu.sh/1suY6

    Read the article

  • Using LDAP to store customer data

    - by mechcow
    We wish to store some data in 389 Directory Server LDAP that doesn't fit that well into the standard set of schemas that come with the product. Nothing too amazing, things like:
    when the customer joined
    are they currently active
    customer certificate[1]
    which environment they are using
    My question is this: should we register an OID and start writing up our own custom schema, OR is there a standard schema definition not provided by Directory Server that we can download and use that would fit our needs? Should we munge/hack existing attributes and store the data in there (I'm strongly opposed to this, but would be interested in arguments about why it's better than extending)?
    [1] I know there is a field for this (userCertificate) but we don't want to use it to authenticate the user for the purposes of binding.
    Using CentOS 5.5 with 389 Directory Server 8.1
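    To give a feel for what the "write our own schema" route involves, here is a hedged example of a custom 389 DS schema snippet; the 1.3.6.1.4.1.99999 arc is a placeholder (a real definition would use an OID arc registered to your organisation) and the attribute names are made up:
    dn: cn=schema
    attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'customerJoinDate'
      DESC 'When the customer joined' SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 SINGLE-VALUE )
    attributeTypes: ( 1.3.6.1.4.1.99999.1.2 NAME 'customerActive'
      DESC 'Whether the customer is currently active' SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 SINGLE-VALUE )
    attributeTypes: ( 1.3.6.1.4.1.99999.1.3 NAME 'customerEnvironment'
      DESC 'Which environment the customer uses' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
    objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'customerExtras' SUP top AUXILIARY
      MAY ( customerJoinDate $ customerActive $ customerEnvironment ) )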

    Read the article

  • What is Openflow?

    - by jmreicha
    Okay, I've been thinking about this a lot recently, but I think I may be more confused now than when I started. I can't find anything about OpenFlow that helps me understand what it is. None of the websites I have found give a solid definition of what OpenFlow really is and what it does - or at least, if they do, I can't comprehend it. Is there an easy way to explain this standard so my small brain can understand it? I understand that OpenFlow is a way to abstract networking away from switches so that it can be managed from software, and so on. I feel like it might be a little easier to grasp if I had examples of application, but so far searching has failed me. What is OpenFlow? How can it help me? What does it offer?
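    One concrete way to picture it: an OpenFlow controller (or an admin, by hand) pushes match-plus-action flow rules into a switch's flow table, instead of the switch deciding forwarding entirely on its own. With Open vSwitch, for instance, a rule can be injected like this (bridge name and addresses are made up):
    # "traffic addressed to 10.0.0.5 goes out port 2" - decided by software, not by the switch
    ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.0.0.5,actions=output:2"
    ovs-ofctl dump-flows br0   # show the flow table the switch is currently using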

    Read the article

  • NHibernate won't persist DateTime SqlDateTime overflow

    - by chris raethke
    I am working on an ASP.NET MVC project with NHibernate as the backend and am having some trouble getting some dates to write back to my SQL Server database tables. These date fields are NOT nullable, so the many answers here about how to setup nullable datetimes have not helped. Basically when I try to save the entity which has a DateAdded and a LastUpdated fields, I am getting a SqlDateTime overflow exception. I have had a similar problem in the past where I was trying to write a datetime field into a smalldatetime column, updating the type on the column appeared to fix the problem. My gut feeling is that its going to be some problem with the table definition or some type of incompatible data types, and the overflow exception is a bit of a bum steer. I have attached an example of the table definition and the query that NHibernate is trying to run, any help or suggestions would be greatly appreciated. CREATE TABLE [dbo].[CustomPages]( [ID] [uniqueidentifier] NOT NULL, [StoreID] [uniqueidentifier] NOT NULL, [DateAdded] [datetime] NOT NULL, [AddedByID] [uniqueidentifier] NOT NULL, [LastUpdated] [datetime] NOT NULL, [LastUpdatedByID] [uniqueidentifier] NOT NULL, [Title] [nvarchar](150) NOT NULL, [Term] [nvarchar](150) NOT NULL, [Content] [ntext] NULL ) exec sp_executesql N'INSERT INTO CustomPages (Title, Term, Content, LastUpdated, DateAdded, StoreID, LastUpdatedById, AddedById, ID) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7, @p8)',N'@p0 nvarchar(21),@p1 nvarchar(21),@p2 nvarchar(33),@p3 datetime,@p4 datetime,@p5 uniqueidentifier,@p6 uniqueidentifier,@p7 uniqueidentifier,@p8 uniqueidentifier',@p0=N'Size and Colour Chart',@p1=N'size-and-colour-chart',@p2=N'This is the size and colour chart',@p3=''2009-03-14 14:29:37:000'',@p4=''2009-03-14 14:29:37:000'',@p5='48315F9F-0E00-4654-A2C0-62FB466E529D',@p6='1480221A-605A-4D72-B0E5-E1FE72C5D43C',@p7='1480221A-605A-4D72-B0E5-E1FE72C5D43C',@p8='1E421F9E-9A00-49CF-9180-DCD22FCE7F55' In response the the answers/comments, I am using Fluent NHibernate and the generated mapping is below public CustomPageMap() { WithTable("CustomPages"); Id( x => x.ID, "ID" ) .WithUnsavedValue(Guid.Empty) . GeneratedBy.Guid(); References(x => x.Store, "StoreID"); Map(x => x.DateAdded, "DateAdded"); References(x => x.AddedBy, "AddedById"); Map(x => x.LastUpdated, "LastUpdated"); References(x => x.LastUpdatedBy, "LastUpdatedById"); Map(x => x.Title, "Title"); Map(x => x.Term, "Term"); Map(x => x.Content, "Content"); } <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-lazy="false" assembly="MyNamespace.Core" namespace="MyNamespace.Core"> <class name="CustomPage" table="CustomPages" xmlns="urn:nhibernate-mapping-2.2"> <id name="ID" column="ID" type="Guid" unsaved-value="00000000-0000-0000-0000-000000000000"><generator class="guid" /></id> <property name="Title" column="Title" length="100" type="String"><column name="Title" /></property> <property name="Term" column="Term" length="100" type="String"><column name="Term" /></property> <property name="Content" column="Content" length="100" type="String"><column name="Content" /></property> <property name="LastUpdated" column="LastUpdated" type="DateTime"><column name="LastUpdated" /></property> <property name="DateAdded" column="DateAdded" type="DateTime"><column name="DateAdded" /></property> <many-to-one name="Store" column="StoreID" /><many-to-one name="LastUpdatedBy" column="LastUpdatedById" /> <many-to-one name="AddedBy" column="AddedById" /></class></hibernate-mapping>
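    One detail worth checking against the mapping above (a hedged guess, not something stated in the question): SQL Server's datetime type only goes back to 1753-01-01, while an unassigned .NET DateTime defaults to 0001-01-01, and writing that default through NHibernate raises exactly this SqlDateTime overflow. The query shown already carries valid dates, so this may not be the cause here, but a sketch of making sure the audit fields are always populated before an insert looks like this:
    public class CustomPage
    {
        public CustomPage()
        {
            // Ensure the NOT NULL audit columns never carry DateTime.MinValue
            // (0001-01-01), which is below SQL Server's datetime range.
            DateAdded = DateTime.Now;
            LastUpdated = DateTime.Now;
        }

        public virtual Guid ID { get; set; }
        public virtual DateTime DateAdded { get; set; }
        public virtual DateTime LastUpdated { get; set; }
        // ... remaining properties as in the mapping above
    }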

    Read the article

  • Need help trouble shooting Https webserver error - SSL Handshake failed

    - by DerNalia
    I followed this guide: http://hints.macworld.com/article.php?story=20041129143420344
    Here is my virtual host definition:
    <VirtualHost *:443>
        SSLEngine on
        SSLProxyEngine On
        RequestHeader set Front-End-Https "On"
        CacheDisable *
        SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
        DocumentRoot "/Users/me/projects/myproject/public"
        ServerName ssl.mydomain.com
        ServerAlias *.ssl.mydomain.com
        SSLCertificateKeyFile "/private/etc/apache2/certs/webserver.nopass.key"
        SSLCertificateFile "/private/etc/apache2/certs/newcert.pem"
        SSLCACertificateFile "/private/etc/apache2/certs/demoCA/cacert.pem"
        SSLCARevocationPath "/private/etc/apache2/certs/demoCA/crl"
        ErrorLog "/Users/me/Desktop/ssl.log"
        ProxyPass / https://localhost:3002/
        ProxyPassReverse / https://localhost:3002
        ProxyPreserveHost on
    </VirtualHost>
    And when I try connecting to the server via the web browser, I get this error:
    [Thu Feb 02 16:50:40 2012] [error] (502)Unknown error: 502: proxy: pass request body failed to 127.0.0.1:3002 (localhost)
    [Thu Feb 02 16:50:40 2012] [error] [client 96.11.81.39] proxy: Error during SSL Handshake with remote server returned by /session/new
    [Thu Feb 02 16:50:40 2012] [error] proxy: pass request body failed to 127.0.0.1:3002 (localhost) from 96.11.81.39 ()
    How do I debug / fix this?
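    Since the backend on localhost:3002 is itself HTTPS, one thing worth isolating (an assumption on my part, not from the post) is whether mod_ssl is rejecting the backend's certificate during the proxy handshake; relaxing proxy-side verification temporarily narrows that down. A sketch, assuming these mod_ssl directives are available in the Apache build in use:
    # inside the same <VirtualHost *:443> block, for diagnosis only
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerExpire off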

    Read the article

  • dhcpd: varying vendor-class-identifier

    - by jessicah
    I'm having trouble selectively sending parameters in response to a DHCP Inform packet using groups (or even without, just using host declarations) for bootp stuff. My configuration file right now looks like: subnet 130.123.131.128 netmask 255.255.255.128 { allow unknown-clients; } host dev-mac-09 { option vendor-class-identifier "example-identifier"; hardware ethernet 10:9a:dd:51:ff:83; } If I put vendor-class-identifier in the global scope, using tcpdump I can see that the client receives the vendor class option successfully. If I take it out, and just keep it in the host scope (or group scope), the client never receives the option. Specifying option dhcp-parameter-request list 60 doesn't help either. I did try using a class definition inside a group, but then it applied even if the host wasn't a part of the group. As an aside, how do I get detailed logging? At least something to indicate what groups and things got used to generate the response to the client.

    Read the article

  • graphics cards with no HDMI output

    - by Noam Gal
    I am currently looking at Gainward GTX260 896MB GS GLH, but I've seen it also on the x275 version - the card doesn't have an HDMI output, only two dvi and one tv out (s-video?). They claim they support HDMI using a dvi-HDMI converter. Will I get a true high definition quality on my TV (assuming it supports it) like that? Or is it not as good, and I should stick to cards that have an HDMI output (ATI), or pay way too much for x295? What about connecting the audio? The x260 comes with an internal spdif cable - does that mean I can connect my soundcard to my graphics card, and have the audio come out through the dvi, and into the HDMI cable? Or am I mixing it all up here, and I have to somehow connect the sound to the TV using a seperate cable (Hoping it has a seperate audio-in for the HDMI channel)?

    Read the article

  • Live CD with good anti-virus software to scan/repair Windows?

    - by overtherainbow
    Hello, I browsed through the archives, and it seems like there's no live CD from which to run a good, up-to-date anti-virus application, at least to check whether a Windows host has been compromised The Ultimate Boot CD has only three AV applications, and their virus definition is from... 2007 In a report, ClamAV scored very low. It's nice that it's open-source, but if it's not as good as commercial alternatives... Those of you into this kind of thing, do you confirm that there's just no good live CD to inspect Windows hosts, and possibly repair them? If there is, what do you recommend? Thank you.

    Read the article
