Search Results

Search found 1862 results on 75 pages for 'stuart brand'.

Page 6/75 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Architecture Forum in the North 2010 - Hosted by Black Marble

    - by Stuart Brierley
    On Thursday the 8th of December I attended the "Architecture Forum in the North 2010" hosted by Black Marble. This was the third time the annual event had been held, and it was pitched as featuring Black Marble and Microsoft UK architecture experts focusing on "Tools and Methods for Architects.... a unique opportunity to provide IT Managers, IT and software architects from Northern businesses the chance to learn about the latest technologies and best practices from Microsoft in the field of Architecture....insightful information about the latest techniques, demonstrating how with Microsoft's architecture tools and technologies you can address your current business needs." Following a useful overview of the Architecture features of Visual Studio 2010, the rest of the day was given over to the various features and ways to make use of Microsoft's Azure offerings. While I did feel that a wider spread of technologies could have been covered (maybe a bit of SharePoint or even BizTalk), the technological and architectural overviews of the Azure platform were well presented, informative and useful. The day was well organised and all those involved were friendly and approachable for questions and discussions. If you are in "the North" and get a chance to attend next year, I would highly recommend it.

    Read the article

  • BizTalk 2009 - Pipeline Component Wizard Error

    - by Stuart Brierley
    When attempting to run the BizTalk Server Pipeline Component Wizard for the first time I encountered an error that prevented the creation of the pipeline component project: System.ArgumentException: "Value does not fall within the expected range". I found the solution for this error in a couple of places - first a reference to the issue on the CodePlex project, and then a fuller description on a blog referring to the BizTalk 2006 implementation. To resolve this issue you need to make a change to the downloaded wizard solution code, in the file BizTalkPipelineComponentWizard.cs at around line 304.

    From:

        // get a handle to the project. using mySolution will not work.
        pipelineComponentProject = this._Application.Solution.Projects.Item(Path.Combine(Path.GetFileNameWithoutExtension(projectFileName), projectFileName));

    To:

        // get a handle to the project. using mySolution will not work.
        if (this._Application.Solution.Projects.Count == 1)
            pipelineComponentProject = this._Application.Solution.Projects.Item(1);
        else
        {
            // In Doubt: Which project is the target?
            pipelineComponentProject = this._Application.Solution.Projects.Item(projectFileName);
        }

    Following this you need to uninstall the wizard, rebuild the solution and install it again.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation with Input Parameters

    - by Stuart Brierley
    As previously noted in my post on Schema Generation using the Community ODBC Adapter, I ran into a problem when trying to generate a schema to represent a MySQL stored procedure that had input parameters. After a bit of investigation and a few dead ends I managed to figure out a way around this issue - detailed below are both the problem and the solution in case you ever run into this yourself.

    The Problem

    Imagine a stored procedure that is coded as follows in MySQL:

        StuTest(in DStr varchar(80))
        BEGIN
          Declare GRNID int;
          Select grn_id into GRNID from grn_header where distribution_number = DStr;
          Select GRNID;
        END

    This is quite a simple stored procedure but can be used to illustrate the issue with parameters quite nicely. When generating the schema using the Add Generated Items wizard, I tried selecting "Stored Procedure" and then, in the Statement Information window, typing the stored procedure name: StuTest. Pressing Generate then gives the following error: "Incorrect Number of arguments for Procedure StuTest; expected 1, got 0". If you attempt to supply a value for the parameter, you end up with a schema that will only ever supply the parameter value that you specify. For example, supplying StuTest('123') will always call the procedure with a parameter value of 123.

    The Solution

    I tried contacting Two Connect about this, but their experience of testing the adapter with MySQL was limited. After looking through the code for the ODBC adapter myself and trying a few things out, I was eventually able to use the ODBC adapter to call a test stored procedure using a two-way send port. In the generate schema wizard, instead of selecting Stored Procedure I had to choose SQL Script, detailing the following script:

        Call StuTest(@InputParameter)

    By default this would create a request schema with an attribute called InputParameter, with a SQL type of NVarChar(1). In most cases this is not going to be correct for the stored procedure being called. To change the type from the default that is applied you need to select the "Override default query processing" check box when specifying the script in the wizard. This then opens the BizTalk ODBC Override window, which lets you change the properties of the parameters and also test out the query script. Once I had done this I was then able to generate the correct schema, which included an attribute representing the parameter. By deploying the schema assembly I was then able to try the ODBC adapter out on a two-way send port. When supplied with an appropriate message instance (for the generated request schema) this send port successfully returned the expected response.
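    As a side note, the same parameterised call can be exercised directly against the DSN with System.Data.Odbc before involving BizTalk at all - a minimal sketch, assuming a system DSN named TestDatabase and the StuTest procedure above (the DSN name and parameter value are placeholders):

        using System;
        using System.Data.Odbc;

        class StuTestCheck
        {
            static void Main()
            {
                // DSN name is a placeholder - use whichever system DSN the adapter is configured with.
                using (var connection = new OdbcConnection("DSN=TestDatabase;"))
                {
                    connection.Open();

                    // ODBC call escape syntax with a positional parameter marker.
                    using (var command = new OdbcCommand("{CALL StuTest(?)}", connection))
                    {
                        command.Parameters.Add("@InputParameter", OdbcType.VarChar, 80).Value = "123";

                        // The procedure returns a single scalar (the GRN id).
                        object grnId = command.ExecuteScalar();
                        Console.WriteLine(grnId);
                    }
                }
            }
        }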

    Read the article

  • My New Job

    - by Stuart Brierley
    Last year I started a new job with a logistics company in the North of England, where I was responsible for the management, design and development of IT integration strategies, architectures and solutions using BizTalk Server 2009. This included the design and implementation of the BizTalk Server 2009 infrastructure, the definition of development standards, mentoring a fellow developer in the ways of BizTalk and migrating a number of existing solutions from Softshare over to BizTalk 2009. Unfortunately I then realised that, following this initial set-up, there didn't actually seem to be that much BizTalk work for me to get stuck into, and reluctantly I have now moved on from this role to a very similar role with the country's largest office supplies company. Based in Sheffield, we distribute office supplies on a UK-wide basis and computer supplies across Europe. The situation here is slightly different from when I first joined my previous employer. Whereas that was a greenfield installation with no previous BizTalk solutions in place, my new employer currently has a number of live BizTalk 2000 (!) and BizTalk 2006 solutions in place. Unfortunately the infrastructure around these is less than ideal, with no clear distinction between development and test environments and no source control whatsoever! We are currently building a proposal for a new BizTalk Server 2010 implementation, where I am hopeful of being able to implement fully independent development, test and pseudo-live environments, alongside an enterprise-level live installation. We should also be introducing Team Foundation Server to the development process, thereby giving us some much-needed source control capabilities. Following this is likely to be a period of migration for the existing BizTalk solutions, along with the onward development of new projects and initiatives - I'm hoping to be a busy man for the foreseeable future :o)

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Receive Location

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter and also using the ODBC adapter to generate schemas. But what about creating a receive location? An ODBC receive location will periodically poll the configured database using the stored procedure or SQL string defined in your request schema. If you need to, begin by adding a new receive port to your BizTalk configuration. Create a new receive location, select the ODBC adapter and click Address. You will now be shown the ODBC Community Adapter Transport properties window. Select Connection String and you will be shown the Choose Data Source window. If you have already created the Test Database source when generating a schema from ODBC this will be shown (if not, go and take a look in my previous post to see how this is done). You will then need to choose the SQL command that will be run by the receive port. In this case I have deployed the Test Mapping schemas that I created previously and selected the Request schema. You should now have populated the appropriate properties for the ODBC Community Adapter. Finally, set the standard receive location properties and your ODBC receive location is now ready.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Send Port

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter, using the ODBC adapter to generate schemas and, latterly, the creation of a receive port using the ODBC adapter. But what about creating a send port? Select to add a new Send Port, select the ODBC adapter and click Configure. Clicking Connection String will open the Data Source window. Choose one of your system data sources and press OK. This will now update the Transport properties. Select OK. All that remains is to set the standard send port properties and your ODBC send port is now ready.

    Read the article

  • What's the best way of marketing to programmers?

    - by Stuart
    Disclaimer up front - I'm definitely not going to include any links in here - this question isn't part of my marketing! I've had a few projects recently where the end product is something that developers will use. In the past I've been on the receiving end of all sorts of marketing - as a developer I've gotten no end of junk - 1000s of pens, tee-shirts and mouse pads; enough CDs to keep my desk tea-free; some very useful USB keys with some logos I no longer recognise; a small forest's worth of leaflets; a bulging spam folder full of ignored emails, etc... So that's my question - What are good ways to market to developers? And as an aside - are developers the wrong people to target? - since we so often don't have a purchasing budget anyways!

    Read the article

  • Customer Engagement: Are Your Customers Engaged With Your Brands?

    - by Michael Snow
    Engaging Customers is Critical for Business Growth

    This week we'll be spending some time looking at Customer Engagement. We all have stories about how we try to engage our customers better than ever before. We all know that successfully engaging customers is critical to an organization's business success. We also know that engaging our customers is more challenging today than ever before. There is so much noise to compete with for getting anyone's attention.

    Over the last decade and a half we've watched as the online channel became a primary one for conducting our business and even managing our lives. And during this whole process of evolution, the customer journey has grown increasingly complex. Customers themselves have assumed increasing power and influence over the purchase process and over setting the tone and pace of the relationships they have with brands, and you see the evidence of this in the really high expectations that customers have today. They expect brand experiences that are personalized and relevant - in other words, they want experiences that demonstrate that the brand understands their interests, preferences and past interactions with them. They also expect their experience with a brand and the community surrounding it to be social and interactive - it's no longer acceptable to have a static, one-way dialogue with your customer base, or to fail to connect your customers with fellow customers, or with your employees and partners. And on top of all this, customers expect us to deliver this rich and engaging, personalized and interactive experience in a consistent way across a variety of channels, including web, mobile and social channels or even offline venues such as in-store or via a call center. As a result, delivering on these expectations and successfully engaging your customers is a great challenge today.

    Customers expect a personal, engaging and consistent online customer experience. Today's consumer expects to engage with your brand and the community surrounding it in an interactive and social way. Customers have come to expect a lot from the online customer experience.

    They expect it to be personal:
      - Accessible: regardless of my device, via my existing online identities
      - Relevant: content that interests me
      - Customized: to be able to tailor my online experience

    They expect it to be engaging:
      - Social: so I can share content with my social networks
      - Intuitive: to easily find what I need
      - Interactive: so I can interact with online communities

    And they expect it to be consistent across the online experience - so you had better have your brand and information ducks in a row.

    These expectations are not limited to your customers by any means. Your employees (and partners) are also expecting to be empowered with engagement tools across their internal and external communications and interactions with customers, partners and other employees. We had a great conversation with Ted Schadler from Forrester Research entitled "Mobile is the New Face of Engagement" that is now available on-demand. Take a look at all the webcasts available to watch from our Social Business Thought Leader Series.

    Social capabilities have become pervasive and have changed customers' expectations for their online experiences. The days of one-direction communication with customers are at an end. Today's customers expect to engage in a dialogue with your brand and the community surrounding it in an interactive and social way. You have a very short window of opportunity to engage a customer before they go to another site in their pursuit of information, products or services. In fact, customers who engage with brands via social media tend to spend more than customers who don't - between 20% and 40% more. And your customers are also increasingly influenced by their social networks - 40% of consumers say they factor in Facebook recommendations when making purchasing decisions. This means a few different things for today's businesses: incorporating forms of social interaction such as commenting or reviews, and tightly integrating your online experience with your customers' social networking experiences, is crucial for maintaining the eyeballs on your desired pages.

    Notes/Sources:
      - 93% - Cone Finds that Americans Expect Companies to Have a Presence in Social Media - http://www.coneinc.com/content1182
      - 40% of consumers factor in Facebook recommendations when making decisions about purchasing (Increasing Campaign Effectiveness with Social Media, Syncapse, March 2011)
      - 20%-40% - Customers who engage with a company via social media spend this percentage more with that company than other customers (Source: Bain & Company report, "Putting Social Media to Work")

    Read the article

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly some info - I'm using DirectX 11 , C++ and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but i have reached a snag. My current system produces a terrain model from a height map complete with multiple texture coordinates, normals, binormals and tangents for rendering. Now when i was using a simple vertex and pixel shader combination everything worked perfectly but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high detail model but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader then through the hull into the domain and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader i pass the information into the domain shader per patch and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly because i think the way that i am passing the data through is incorrect. If anyone can help me out but suggesting an alternative way to get the required data into the pixel shader? or by showing me the correct way to handle the data in the hull shader id be very thankful! cbuffer TessellationBuffer { float tessellationAmount; float3 padding; }; struct HullInputType { float3 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float2 tex2 : TEXCOORD1; }; struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; struct HullOutputType { float3 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float2 tex2 : TEXCOORD1; float4 depthPosition : TEXCOORD2; }; ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain("tri")] [partitioning("integer")] [outputtopology("triangle_cw")] [outputcontrolpoints(3)] [patchconstantfunc("ColorPatchConstantFunction")] HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; output.position = patch[pointId].position; output.tex = patch[pointId].tex; output.tex2 = patch[pointId].tex2; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; output.binormal = patch[pointId].binormal; return output; } Edited to include the domain shader:- [domain("tri")] PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch) { float3 vertexPosition; PixelInputType output; // Determine the position of the new vertex. 
vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.position = mul(float4(vertexPosition, 1.0f), worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); output.depthPosition = output.position; output.tex = patch[0].tex; output.tex2 = patch[0].tex2; output.normal = patch[0].normal; output.tangent = patch[0].tangent; output.binormal = patch[0].binormal; return output; }
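    For reference, the same barycentric weighting that the domain shader above applies to position also needs to be applied to the per-vertex texture coordinates (and normals/tangents), rather than copying patch[0]. A minimal sketch of that interpolation, written in C# with System.Numerics purely for illustration - in HLSL the equivalent would use uvwCoord and the patch attributes:

        using System;
        using System.Numerics;

        static class UvInterpolation
        {
            // Weight each control point's UV by the barycentric coordinates
            // (SV_DomainLocation), just as the domain shader does for position.
            static Vector2 InterpolateUv(Vector3 uvw, Vector2 uv0, Vector2 uv1, Vector2 uv2)
            {
                return uvw.X * uv0 + uvw.Y * uv1 + uvw.Z * uv2;
            }

            static void Main()
            {
                // At the patch centre (1/3, 1/3, 1/3) the result is the average of the
                // three UVs, rather than patch[0]'s UV repeated across the whole patch.
                var uv = InterpolateUv(new Vector3(1f / 3f, 1f / 3f, 1f / 3f),
                                       new Vector2(0, 0), new Vector2(1, 0), new Vector2(0, 1));
                Console.WriteLine(uv);
            }
        }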

    Read the article

  • SAB BizTalk Archiving Pipeline Component v0.2

    - by Stuart Brierley
    Just released to CodePlex is an updated version of my archiving pipeline component for BizTalk. The changes in this release are:
      - Addition of FTP adapter macros to the base macros and File adapter macros.
      - A fix for the issue of garbage collection of data streams within pipelines, as discussed in this previous blog entry.
      - The component now looks for OutboundTransportType in addition to InboundTransportType to pick up the send port transport type; the %InboundTransportType% macro has therefore been changed to %TransportType%.
    An initial outline of the project can be read here.
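    The stream garbage collection fix follows the usual BizTalk pipeline component pattern: any stream the component creates is registered with the pipeline context's resource tracker so that it is not collected or disposed before the messaging engine has finished with the message. A rough sketch of that pattern (simplified, and not the component's actual code):

        using System.IO;
        using Microsoft.BizTalk.Component.Interop;
        using Microsoft.BizTalk.Message.Interop;
        using Microsoft.BizTalk.Streaming;

        public class ArchivingComponentSketch   // illustrative fragment of an IComponent implementation
        {
            public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
            {
                Stream original = message.BodyPart.GetOriginalDataStream();

                // Wrap the body in a seekable stream so it can be read for archiving
                // and still be read again by the rest of the pipeline.
                var seekable = new ReadOnlySeekableStream(original);

                // Register the stream with the pipeline context so it stays alive
                // until BizTalk has finished processing the message.
                context.ResourceTracker.AddResource(seekable);

                message.BodyPart.Data = seekable;
                return message;
            }
        }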

    Read the article

  • Netflix Rolls Out Polished New iPhone and Android Apps [Video]

    - by Jason Fitzpatrick
    If you're a Netflix subscriber, you've got a brand spanking new mobile interface to take for a spin. Last week Netflix released a brand new iOS interface; this week it's a brand new Android interface. The above video showcases the new iOS interface for mobile playback on devices like the iPhone and iPad. The slick new layout makes it even easier to browse new content and resume watching content you've paused at home or on the go. For a peek at the new (and similar) Android interface, check out the video below. For more information about the respective apps, visit their download pages to read up and grab a copy. Netflix for Android / iOS

    Read the article

  • Is there an equivalent to the Mac OS X software Hazel that runs on Ubuntu?

    - by Stuart Woodward
    Is there an equivalent to the Mac OS X software Hazel that runs on Ubuntu? "Hazel watches whatever folders you tell it to, automatically organizing your files according to the rules you create. It features a rule interface [..]. Have Hazel move files around based on name, date, type, what site/email address it came from [..] and much more. Automatically put your music in your Music folder, movies in Movies. Keep your downloads off the desktop and put them where they are supposed to be." This question probably won't make sense unless you have used Hazel, but basically you can define rules via the GUI to move and rename files automatically to create an automated workflow.

    Read the article

  • MySQL - Configuration

    - by Stuart Brierley
    Having previously detailed how to install MySQL Server, the next step is configuring MySQL. The MySQL configuration wizard can either be run immediately following installation from the MySQL installation wizard, or manually from the Start Menu. Following the splash screen you can then choose whether to run a detailed or standard configuration. The detailed configuration allows you to create the optimal configuration for your specific machine, whereas the standard configuration creates a general configuration that can then be manually tuned. I chose detailed. You are then asked to choose the type of server instance that is being configured. In this case it is a developer machine. Following this you are asked to choose the type of database usage that you expect on the server. I opted for multifunctional. You must then specify the location of the InnoDB tablespace. Next, specify the number of concurrent connections to the server. Now you must configure the networking options. I left Strict mode enabled as this is the recommended option, but I disabled TCP/IP networking as I wanted to restrict this MySQL installation to the local machine only. Set the character set that is best suited to your use - for me this was the default standard character set. Next up is the option to run MySQL as a service and whether or not to include the MySQL directories in the Windows PATH. I kept the "install as a Windows service" option enabled, but unchecked the "Launch MySQL server automatically" option. This is because I only wanted MySQL running when I specifically want to use it. I also enabled the "include in Windows PATH" option. You can then change the security settings for the MySQL installation. I opted to change the root password, disable root from local machines and disable anonymous access. You are now ready to execute the configuration. Once completed you should hopefully see the completed screen with lots of nice ticks against the various configuration tasks.
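    With automatic launch disabled, the service has to be started whenever it is needed. A small sketch of doing that on demand from code using System.ServiceProcess (run elevated; the service name "MySQL" is a placeholder for whatever name the configuration wizard registered):

        using System;
        using System.ServiceProcess;

        class StartMySql
        {
            static void Main()
            {
                // Service name is a placeholder - check services.msc for the actual name.
                using (var service = new ServiceController("MySQL"))
                {
                    if (service.Status != ServiceControllerStatus.Running)
                    {
                        service.Start();
                        service.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                    }

                    service.Refresh();
                    Console.WriteLine("MySQL service is {0}.", service.Status);
                }
            }
        }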

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter. Within your BizTalk 2009 project, right-click the project and select Add Generated Items. In the resultant window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click Next. You must now set the connection string. To start with, choose Set, then New DSN (data source name). You now need to define the Data Source you will be connecting to. On the User DSN tab select Add and then the driver you want to use. In this case I am going to use the MySQL ODBC Driver. A User DSN will only be visible on the current machine with you as a user. * Although I initially set up a User DSN and this was fine for creating schemas with, I later realised that you actually need a System DSN, as the BizTalk host service needs this to be able to connect to the database on a receive or send port. You will then be asked to set up the MySQL ODBC Data Source. In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from the mysql.ini; with this line in place the connection would fail, as there is no apparent way to specify the pipe name in the ODBC driver configuration. This will then update the User DSN tab with the new Data Source. Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK. On the ODBC transport window select Next. You will now be presented with the Schema Information window, where you must supply the namespace, type and root element names for your schema. Next, choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure. * I later discovered that this option is fine for MySQL stored procedures without input parameters, but failed for MySQL stored procedures with input parameters. (I will be posting on the way to handle input parameters soon.) Next you will need to specify the name of the stored procedure. In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL: Select * from TestTable; The table itself has three columns: Name, Sex and Married. Selecting Finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have:
      - An empty schema for the request; after all, I have no parameters for the stored procedure.
      - A response schema comprised of a Table Record with Name, Sex and Married children.
    Next I will be looking at the use of the ODBC adapter with:
      - Receive ports
      - Send ports
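    A quick way to confirm that the DSN really was registered as a System DSN (machine-wide, and therefore visible to the BizTalk host service) rather than a User DSN is to look it up under the ODBC registry keys - a rough sketch, assuming the DSN is named TestDatabase; note that 32-bit drivers on a 64-bit machine are listed under SOFTWARE\Wow6432Node\ODBC\ODBC.INI\ODBC Data Sources instead:

        using System;
        using Microsoft.Win32;

        class CheckSystemDsn
        {
            static void Main()
            {
                const string dsnName = "TestDatabase";   // placeholder - the DSN created in the wizard

                // System DSNs are registered per machine; User DSNs live under HKEY_CURRENT_USER instead.
                using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources"))
                {
                    string driver = key == null ? null : key.GetValue(dsnName) as string;

                    Console.WriteLine(driver == null
                        ? dsnName + " is not registered as a system DSN."
                        : dsnName + " is a system DSN using driver: " + driver);
                }
            }
        }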

    Read the article

  • Changing the RSS and Dynamic Views layout when using Blogger as a Podcast index

    - by Stuart
    I'm trying to set up a podcast service at present. This is just a 'spare time' task, so I wanted a quick, easy way to do it. To get this working:
      - I've ripped (with owner permission) some YouTube content across to MP3 and hosted this content on Azure Blob Storage.
      - I've posted blog posts - with linked MP3 content - inside a Blogger website.
      - I've registered the RSS feed with iTunes.
    This all seems to be working OK - http://dotnetmobilepodcast.blogspot.co.uk/ - however, when it comes to a couple of final touches, I'm hitting problems.
    RSS: I would like to add iTunes metadata to the RSS feed. However, I can't find any way to do this inside the Blogger system. To work around this I've tried using FeedBurner with its StreamCast plugin. However, the output from FeedBurner doesn't seem to be accepted by iTunes - e.g. http://feeds.feedburner.com/MobileAppCSharpPodcasts leads to a very unhelpful "11111" message. Is there any other way I can get this iTunes metadata content into the Blogger RSS feed - e.g. maybe an alternative service or a Yahoo! Pipe?
    Showing the MP3 files in the blog: I'm trying to work out how to automatically display the linked enclosures inside the blog posts, but the Blogger Dynamic Views don't seem to have any way of doing this. I've found the HTML in those views very difficult to follow. If necessary I can work around this using manual entries in each blog post... but I'd prefer to do this programmatically if I can.

    Read the article

  • BizTalk 2010 - BAM Portal - No Views to Display

    - by Stuart Brierley
    Our latest BizTalk Server 2010 development project is utilising BizTalk as the integration ring around a new and sizable implementation of Dynamics AX 2012. With this project we have decided to use BAM to monitor the processes within our various new applications. Although I have been specialising in BizTalk for around 9 years, this is my first time using BAM, so it is an interesting process to be going through. Recently, when deploying a solution, I was attempting to check the BAM Portal to see that the View I had created was properly deployed and that the Activity I was populating was being surfaced in the Portal as expected. Initially I was presented with the message "No view to display" in the "My Views" area of the BAM Portal landing page. This was because you need to set the permissions on the views that you want to see from the command line using the bm.exe tool:

        bm.exe add-account -AccountName:YourServerOrDomain\YourUsername -View:YourView

    This tool can be found in the BAM folder at the BizTalk installation location: C:\Program Files (x86)\Microsoft BizTalk Server 2010\Tracking

    Read the article

  • Adjusting server-side tickrate dynamically

    - by Stuart Blackler
    I know nothing of game development/this site, so I apologise if this is completely foobar. Today I experimented with building a small game loop for a network game (think MW3, CSGO etc). I was wondering why they do not build in automatic rate adjustment based on server performance? Would it affect the client that much if the client knew this frame is based on this tickrate? Has anyone attempted this before? Here is what my noobish C++ brain came up with earlier. It will improve the tickrate if it has been stable for x ticks. If it "lags", the tickrate will be reduced down by y amount: // GameEngine.cpp : Defines the entry point for the console application. // #ifdef WIN32 #include <Windows.h> #else #include <sys/time.h> #include <ctime> #endif #include<iostream> #include <dos.h> #include "stdafx.h" using namespace std; UINT64 GetTimeInMs() { #ifdef WIN32 /* Windows */ FILETIME ft; LARGE_INTEGER li; /* Get the amount of 100 nano seconds intervals elapsed since January 1, 1601 (UTC) and copy it * to a LARGE_INTEGER structure. */ GetSystemTimeAsFileTime(&ft); li.LowPart = ft.dwLowDateTime; li.HighPart = ft.dwHighDateTime; UINT64 ret = li.QuadPart; ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */ ret /= 10000; /* From 100 nano seconds (10^-7) to 1 millisecond (10^-3) intervals */ return ret; #else /* Linux */ struct timeval tv; gettimeofday(&tv, NULL); uint64 ret = tv.tv_usec; /* Convert from micro seconds (10^-6) to milliseconds (10^-3) */ ret /= 1000; /* Adds the seconds (10^0) after converting them to milliseconds (10^-3) */ ret += (tv.tv_sec * 1000); return ret; #endif } int _tmain(int argc, _TCHAR* argv[]) { int sv_tickrate_max = 1000; // The maximum amount of ticks per second int sv_tickrate_min = 100; // The minimum amount of ticks per second int sv_tickrate_adjust = 10; // How much to de/increment the tickrate by int sv_tickrate_stable_before_increment = 1000; // How many stable ticks before we increase the tickrate again int sys_tickrate_current = sv_tickrate_max; // Always start at the highest possible tickrate for the best performance int counter_stable_ticks = 0; // How many ticks we have not lagged for UINT64 __startTime = GetTimeInMs(); int ticks = 100000; while(ticks > 0) { int maxTimeInMs = 1000 / sys_tickrate_current; UINT64 _startTime = GetTimeInMs(); // Long code here... cout << "."; UINT64 _timeTaken = GetTimeInMs() - _startTime; if(_timeTaken < maxTimeInMs) { Sleep(maxTimeInMs - _timeTaken); counter_stable_ticks++; if(counter_stable_ticks >= sv_tickrate_stable_before_increment) { // reset the stable # ticks counter counter_stable_ticks = 0; // make sure that we don't go over the maximum tickrate if(sys_tickrate_current + sv_tickrate_adjust <= sv_tickrate_max) { sys_tickrate_current += sv_tickrate_adjust; // let me know in console #DEBUG cout << endl << "Improving tickrate. New tickrate: " << sys_tickrate_current << endl; } } } else if(_timeTaken > maxTimeInMs) { cout << endl; if((sys_tickrate_current - sv_tickrate_adjust) > sv_tickrate_min) { sys_tickrate_current -= sv_tickrate_adjust; } else { if(sys_tickrate_current == sv_tickrate_min) { cout << "Please reduce sv_tickrate_min..." << endl; } else{ sys_tickrate_current = sv_tickrate_min; } } // let me know in console #DEBUG cout << "The server has lag. 
Reduced tickrate to: " << sys_tickrate_current << endl; } ticks--; } UINT64 __timeTaken = GetTimeInMs() - __startTime; cout << endl << endl << "Total time in ms: " << __timeTaken; cout << endl << "Ending tickrate: " << sys_tickrate_current; char test; cin >> test; return 0; }

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain 2 tasks:
      - Job (pass information from one server to another)
        - Fetch task (get the data, slowly)
        - Send task (send the data, comparatively quickly)
    The difficulty we're having is that we don't know whether to break the tasks into separate jobs, or process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method:
    Split
      - Job lease length reflects job length: rather than the total of the two
      - Finer granularity on recovery: if we lose outgoing connectivity we can tell them all to retry
      - The starting state of the second task is saved to job history: helps with debugging (although similar logging could be added in the single-task method)
    Single
      - Single job to be scheduled: less processing overhead
      - Data not stale on recovery: if the outgoing downtime is quite long, the pending Send jobs could be outdated
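    Purely as a sketch with invented type names, the two shapes might look like this - the split approach has to persist the fetched data as the second job's starting state, while the single approach only ever holds it in memory under one lease:

        // Illustrative only - not the real queue API.

        // Option 1: split. The Fetch handler returns a Send job to be queued,
        // so each job's lease only covers its own task and the Send job's
        // starting state (the fetched data) is recorded in job history.
        class FetchJobHandler
        {
            public SendJob Handle(FetchJob job)
            {
                byte[] data = FetchSlowly(job.SourceServer);
                return new SendJob { TargetServer = job.TargetServer, Data = data };
            }

            byte[] FetchSlowly(string server) { return new byte[0]; }   // stub
        }

        // Option 2: single. Both tasks run under one lease; on recovery the
        // data is re-fetched, so it can never be stale.
        class TransferJobHandler
        {
            public void Handle(TransferJob job)
            {
                byte[] data = FetchSlowly(job.SourceServer);
                SendQuickly(job.TargetServer, data);
            }

            byte[] FetchSlowly(string server) { return new byte[0]; }    // stub
            void SendQuickly(string server, byte[] data) { }             // stub
        }

        class FetchJob    { public string SourceServer; public string TargetServer; }
        class SendJob     { public string TargetServer; public byte[] Data; }
        class TransferJob { public string SourceServer; public string TargetServer; }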

    Read the article

  • Ubuntu 12.04: I'm trying to upgrade my GeForce 8400 to a GeForce GTX 560, getting a black screen

    - by Stuart Anderson
    I have an NVidia GeForce 8400 which is working fine in Ubuntu 12.04. I have just purchased an NVidia GeForce GTX 560 and have been trying to install it. No matter what I do, all I get is a black screen on boot up. I have tried:
      1. Additional Drivers, driver version 295.40 - this works for the GeForce 8400 but gives me a black screen with the GTX 560 when I boot.
      2. Downloading driver version 295.40 from NVidia's site - I was able to install it successfully and it works with my 8400, but it gives me a black screen with the GTX 560 when I boot.
    Are there any options I can try?

    Read the article

  • Audio PC Software running on Ubuntu

    - by Stuart
    Hi, I recently built my own home studio PC (i5 CPU, 8GB RAM, solid state drive, etc.) - basically the fastest PC I've ever built. I have a 32-bit version of XP and all the music software I have runs on this. However, I want to use all the RAM and can only do so by moving to 64-bit Windows. My questions are:
      - Will Ubuntu run my audio software, or would I need to get Linux-specific audio software?
      - Are there any good (pro) shareware Linux-based multi-track audio software packages?
      - Will VST plug-ins work through Ubuntu?
    Cheers, Stu.

    Read the article

  • Cannot get my wifi to work on my Acer Aspire 5552

    - by stuart
    I am very new to this forum and also to Linux. I have managed to install Wubi on my laptop fine and I can get on the net with ethernet, but when I try to download and install the wifi drivers it keeps giving me what I think is a generic /var/log error and just starts to hang on install, then says it will not install. Can anyone help please, as I really want to move from Windows and this is not helping me... A big thanks in advance :)

    Read the article

  • Entrepreneur Needs Programmers, Architects, or Engineers?

    - by brand-newbie
    Hi guys (ladies included). I posted on a related site, but THIS is the place to be. I want to build a specialized website. I am an entrepreneur and am refining valuations now for venture capitalists: i.e., determining how much cash I will need. I need help in understanding what human resources I need (i.e., software programmers, architects, engineers, etc.). Trust me, I have read most - if not all - of the threads here on the subject, and I can tell you I am no closer to the answer than ever.

    Here's my technology problem: the website will include two main components: a search engine (web crawler) and a very large database. The search engine will not be a competitor to Google, obviously; however, it "will" require bots to scour the web. The website will be, basically, a statistical database, where users should be able to pull up any statistic from "numerous" fields. Like any entrepreneur with a web-based vision, I'm "hoping" to get 100+ million registered users eventually. However, practically, we will start as small as feasible.

    As regards the technology (database architecture, servers, etc.), I do want quality, quality, quality. My priorities are speed, and the capability to be scalable - so that if I "did" get globally large, we could do it without having to re-engineer anything. In other words, I want the back-end and the "infrastructure" to be scalable and professional, with emphasis on quality.

    I am not an IT professional. Although I've built several Joomla-based websites, I'm just a rookie who's only used minor JavaScript coding to modify a few plug-ins and components. The business I'm trying to create requires specialization and experts. I want to define the problem and let a capable team create the final product, and I will stay totally hands off.

    So who do you guys suggest I hire to run this thing? A software engineer? I was thinking I would need a "database engineer", a "systems security engineer", and maybe 2 or 3 "programmers" for the search engine. Also a web designer, and maybe a part-time graphic designer, everyone working under a single software engineer. What do you guys think? Who should I hire? I REALLY need help from some people in the industry (YOU guys) on this. Is this project do-able in 6 months? If so, how many people will I need? Who exactly needs to head up this thing - a senior software engineer, an embedded engineer, a C/C++ engineer, a Java engineer, a database engineer? And do I build this thing in Ruby or Java?

    Read the article

  • Javascript won't execute in iPhone Safari

    - by Stuart Meyer
    I'm running into this issue only because I recently purchased an iPhone. The javascript for a picture carousel on my website (http://www.stuartmeyerphotography.com) won't execute in Safari for iPhone. I thought it worked on Mac Safari last I checked with a friend who had a Mac (a year ago), but now I need to go back and check that too to make sure it works on the Mac. "View source" on my website would show the entire html page, but I've pulled the code from the body section to show here: carousel({id:'Photos', border:'', size_mode:'image', width:120, height:120, sides:8, steps:75, speed:4, direction:'left', images:['mainthumbs/babiesthumb.jpg','mainthumbs/engagementsthumb.jpg','mainthumbs/dancethumb1.jpg','mainthumbs/artistthumb.jpg','mainthumbs/portraitsthumb1.jpg','mainthumbs/seniorsthumb1.jpg','mainthumbs/wedthumb1.jpg'], links:['babies/babies.html','engagements/engagemainshow/engagementpictures.html','dance/dancepictures.html','artists/artists.html','portraits/portraits.html','seniors/highschoolseniors.html','weddings/weddings.html'], lnk_base:'', lnk_targets:['_iframe1', '_iframe1', '_iframe1', '_iframe1', '_iframe1', '_iframe1', '_iframe1' ], lnk_attr:['width=200,height=300,top=200,menubar=yes', 'width=300,height=200,left=400,scrollbars=yes', 'width=150,height=250,left=200,top=100', ''], titles:['Babies', 'Engagements', 'Dance', 'Artists', 'Portraits', 'HS Seniors', 'Weddings'], image_border_width:1, image_border_color:'#E3F0A1' });   </div> Any thoughts? -Stuart

    Read the article

  • This is a great job opportunity!!! [closed]

    - by Stuart Gordon
    ASP.NET MVC Web Developer / London / £450pd / £25-£50,000pa / Interested? Contact [email protected]!
    As a web developer within the engineering department, you will work with a team of enthusiastic developers building a new ASP.NET MVC platform for online products, utilising exciting cutting-edge technologies and methodologies (elements of Agile, Scrum, Lean, Kanban and XP), as well as developing new stand-alone web products that conform to W3C standards.
    Key Responsibilities and Objectives:
      - Develop ASP.NET MVC websites utilising frameworks and enterprise search technology.
      - Develop and expand content management and delivery solutions.
      - Help maintain and extend existing products.
      - Formulate ideas and visions for new products and services.
      - Be a proactive part of the development team and provide support and assistance to others when required.
    Qualification/Experience Required:
    The ideal candidate will have a web development background and be educated to degree level in a Computer Science/IT related course, plus ASP.NET MVC experience. The successful candidate needs to be able to demonstrate commercial experience in all or most of the following skills:
      - Essential: ASP.NET MVC with C# (Visual Studio), Castle, NHibernate, XHTML and JavaScript. Experience of Test Driven Development (TDD) using tools such as NUnit.
      - Preferable: Experience of Continuous Integration (TeamCity and MSBuild), SQL Server (T-SQL), experience of source control such as Subversion (plus TortoiseSVN), jQuery.
      - Learn: Fluent NHibernate, S#arp Architecture, Spark (view engine), Behaviour Driven Design (BDD) using MSpec.
    Furthermore, you will possess good working knowledge of W3C web standards, web usability and web accessibility, and understand the basics of search engine optimisation (SEO). You will also be a quick learner, have good communication skills and be a self-motivated and organised individual.

    Read the article
