Search Results

Search found 1264 results on 51 pages for 'sam drake'.

Page 8/51 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • West Palm Beach Developer Group March 2012 Meeting Recap

    - by Sam Abraham
    Our West Palm Beach Developer Group March 2012 meeting featured Herve Roggero, Azure MVP and co-founder of Pyn Logic. Herve covered multiple case studies on migrating existing applications to SQL Azure. This event was quite popular, filling our meeting room. We would like to thank PC Professor for hosting our meeting and Sherlock Technology for sponsoring the free food. Our April 24th, 2012 meeting will feature Will Tartak, who will be demonstrating how he used ServiceStack.net to quickly develop Windows Phone and Android applications. For more information and to register, please visit: http://www.fladotnet.com/Reg.aspx?EventID=585 Below are some photos of our March event.

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a “Program Execution Recorder” that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefit it brings to both developers and testers alike, so I see no value in simply repeating that information. In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application. I will also provide some helpful links that cover IntelliTrace in more detail at the end of my article for reference.
    IntelliTrace is set up by default to only capture execution events. Microsoft did such a great job of optimizing its recording process that I haven't felt the slightest performance hit with IntelliTrace running while debugging my solutions and projects. For my purposes here, however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events, as shown in Figures 1 and 2. Changing capture options requires us to stop our debugging session and start over for the new settings to take effect.
    Figure 1 – Access IntelliTrace options via the Tools->Options menu items
    Figure 2 – Change IntelliTrace options to capture call information as well as events
    Notice the warning that capturing call information in addition to the default events-only setting can degrade performance. I have found this warning to be true: my subsequent tests showed slower page load times compared to rendering those same pages with the events-only option selected.
    Execution recording starts automatically with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to “play back” the code we have executed so far. To replay code, the first step is to “break” the current execution, as shown in Figure 3.
    Figure 3 – Break to replay recording
    A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First off, we start with the events view, as shown in Figure 4, until we find an interesting event that needs further studying.
    Figure 4 – Going through IntelliTrace's events and picking a specific entry of interest
    We can now, for instance, study how the highlighted HTTP GET request is being handled by clicking on the “Calls View” for that particular event. Notice that IntelliTrace shows us all the calls that took place in servicing that GET request. Double-clicking on any call takes us to a more granular view of the call stack within that call, down to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11, just like good old VS2008 debugging let us do.
    Figure 5 – Switching to calls view on an event of interest
    Figure 6 – Double-clicking on a call shows a more granular view of the call stack
    In conclusion, the introduction of IntelliTrace as a new addition to the VS developers' tool arsenal enhances the development and debugging experience and effectively tackles the “no-repro” problem. It will also hopefully enhance my audience's experience listening to me speak about the MVC2 page lifecycle, which I can now easily demonstrate visually, thereby improving the probability of keeping everybody awake a little longer.
    IntelliTrace references:
    http://msdn.microsoft.com/en-us/magazine/ee336126.aspx
    http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

    Read the article

  • How to solve a CUDA crash when running the CUDA example fluidsGL?

    - by sam
    I use Ubuntu 12.04 64-bit with a GTX 560 Ti. I installed CUDA by following these instructions:
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/toolkit/cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/drivers/devdriver_4.2_linux_64_295.41.run
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/sdk/gpucomputingsdk_4.2.9_linux.run
      chmod +x cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      sudo ./cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      echo "/usr/local/cuda/lib64" > ~/cuda.conf
      echo "/usr/local/cuda/lib" >> ~/cuda.conf
      sudo mv ~/cuda.conf /etc/ld.so.conf.d/cuda.conf
      sudo ldconfig
      echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
      chmod +x gpucomputingsdk_4.2.9_linux.run
      ./gpucomputingsdk_4.2.9_linux.run
      sudo apt-get install build-essential libx11-dev libglu1-mesa-dev freeglut3-dev libxi-dev libxmu-dev gcc-4.4 g++-4.4
      sed 's/g++ -fPIC/g++-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/gcc -fPIC/gcc-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/-L$(SHAREDDIR)\/lib/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current $(NVCUVIDLIB)/-L$(SHAREDDIR)\/lib $(NVCUVIDLIB)/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    After I run ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/fluidsGL it gets stuck; even the mouse and keyboard stop responding. How can I solve this? Thank you~

    Read the article

  • Remove URLs to unindex blog content from Google, then copy the blog's content to a new blog [closed]

    - by sam
    Possible Duplicate: migrating PR / rankings from one site to another
    I've been writing a blog for the past year or so, with about 300 published articles; the blog has been running under a subdomain, blog.mysite.com. We are now looking to change the URL of our site, so the blog is going to have to be ported over to a subdomain on the new site. We would really like to keep the backlog / archive of all the articles we have written, but we don't want to be penalized for having duplicate content. Could we just remove / unindex the URLs from Google in Webmaster Tools, then export the blog and import it back into our new blog? Would Google still see this as duplicate content, or because I've removed the URLs do they no longer have a copy of it? Thanks

    Read the article

  • Setting up mod_proxy - Plesk, Apache, .htaccess?

    - by sam
    I want to set up mod_proxy so that my blog runs under a subdirectory of my site rather than a subdomain, so I get the SEO backlink benefit. What I'm looking to do is take my Tumblr blog, which is running at blog.mysite.com (which is in turn mapped from myblog.tumblr.com), and have it running at mysite.com/blog. How can I set up mod_proxy to do this? Is it just something that I can set up from inside my .htaccess file? I've got my site hosted on an Apache server, using Plesk as a control panel. I contacted my web host and they told me mod_rewrite could achieve it. They gave me the following, but said they won't provide further support for mod_rewrite as it's something they don't support:
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^example.co.uk/blog$
      RewriteCond %{REQUEST_URI} !/standard
      RewriteRule ^(.*)$ http://example.tumblr.com$1 [R]
      </IfModule>
    Ideally I'd like to use the mod_proxy method, as it is recommended from an SEO point of view in this article: http://www.seomoz.org/blog/what-is-a-reverse-proxy-and-how-can-it-help-my-seo
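    For context, a reverse proxy like this is normally configured at the server or virtual host level rather than in .htaccess, because ProxyPass is not allowed in per-directory context; from .htaccess the closest equivalent is a RewriteRule with the [P] (proxy) flag, which still requires mod_proxy to be loaded. Below is only a minimal sketch, assuming mod_proxy and mod_proxy_http are enabled; the hostnames are the placeholders from the question, not a tested configuration:
      # Sketch only: serve mysite.com/blog from myblog.tumblr.com (placeholder names).
      # Goes in the vhost config (in Plesk, e.g. the additional Apache directives box), not .htaccess.
      ProxyRequests Off
      ProxyPass /blog http://myblog.tumblr.com
      ProxyPassReverse /blog http://myblog.tumblr.com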

    Read the article

  • Bluetooth fix for Ubuntu 12.04, lenovo G580

    - by Sam Abraham
    Bluetooth is not working: it shows as turned on, but the manager indicates Bluetooth is disabled. I uninstalled the default manager and installed Blueman. It's the same with Blueman; clicking on connect to devices gets the response 'adapters not found'. I've found many more people with the same problem, but the fixes found in the archive don't work for me. I've tried a couple of things from the forum. I'm not familiar with computer hardware or software, but I have been using Ubuntu because it saves me money, it's fairly easy to use and it does not tax my mid-range laptop. Any help will be appreciated.

    Read the article

  • How to convince boss to buy Visual Studio 2012 Professional

    - by Sam Leach
    The main advantage is the use of ReSharper and other add-ons, but we need to make a convincing argument for the purchase of Visual Studio 2012 Professional. We are currently using Visual Studio 2012 Express for Windows. It is quite good, but it is hard to switch to after having used the full Professional version in the past. So far the team has compiled the following list:
    - Extract Interface function missing. Very useful for clean SOLID code.
    - No add-on support. Can't install StyleCop or productivity tools: AnkhSVN, spell checker, Productivity Power Tools, GhostDoc, Regex Editor, PowerCommands.
    - The exception assistant is limited in the Express edition. This is a big annoyance. See http://www.lifehacker.com.au/2013/01/ive-given-up-on-visual-studio-express-2012-for-windows-desktop-heres-why/
    - Different tools provided by MS, like certificate generation.
    - Possibility of creating a test project based on source code.
    We do server development in C#, so any web add-ons or anything else are useless to us. The reason I am asking is that I am sure people have been in the same position. What approach did you use, and can you think of additions or amendments to the above list? Thanks,

    Read the article

  • SEO Pros and cons of having your blog in a subdirectory or subdomain

    - by sam
    From an SEO point of view, is it better to have your blog running as part of your site (i.e. /blog) so that it generates more content for the site, OR is it better to have it running as a subdomain (i.e. blog.) of your main site (correct me if I'm wrong, but Google sees subdomains as separate sites?), so that the main site gets lots of external links from the blog, but then again the blog generates no extra content for the main site?

    Read the article

  • Propagating Apache rules automatically for all folders beneath root

    - by Sam
    Hi folks, my webroot folder /httpdocs contains a .htaccess file. The first lines look like this:
      RewriteEngine on
      RewriteBase /
      AddDefaultCharset UTF-8
      Options +FollowSymLinks -Indexes -ExecCGI
      # DirectoryIndex index.php /index.php
      # ServerSignature Off
    Now, I want all of the settings I have put in it to be propagated automatically to the other folders as well. How can I do that? Thanks very much for any suggestions.
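    For reference, directives in a .htaccess file already apply to the directory it lives in and to every subdirectory beneath it; a deeper .htaccess only overrides the directives it repeats (with the caveat that if a subfolder's .htaccess enables RewriteEngine itself, the parent's rewrite rules are no longer inherited unless RewriteOptions Inherit is set). The following is just an illustrative sketch, assuming the host's AllowOverride permits these directives; the subfolder name is hypothetical:
      # /httpdocs/.htaccess -- inherited by every folder below the webroot
      RewriteEngine on
      RewriteBase /
      AddDefaultCharset UTF-8
      Options +FollowSymLinks -Indexes -ExecCGI

      # /httpdocs/downloads/.htaccess -- only needed to change something locally,
      # e.g. re-enable directory listings for this one folder (hypothetical example)
      Options +Indexes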

    Read the article

  • What would be the best way to code a rudimentary LIMS?

    - by Sam
    Hi, I hope that I am asking the right question in the right place. If not, please do not hesitate to point me to the right place. I am working in a laboratory where computers/programming/programmers are all used only in a support role, but I believe that we need better information organization and management. Given the diverse information generated in the lab, I believe that we need some sort of Laboratory Information Management System. Is there a good starting point for this? I am not myself a trained programmer. I can code reasonably well in Python and R, and if necessary I can learn MySQL and PHP.

    Read the article

  • New META TAGS with positive effects for seo ranking in 2011 and beyond

    - by Sam
    Hi all, I'm trying to make an up-to-date chart of meta tags, for all of us, with their purposes, their use and their good (or bad) effects on search engines / being found. Also, does anybody know new/promising meta tags? I will add yours to my list so this chart is the result of live discussion and stays up to date. Also, it would be creative to invent your own useful meta tag, because we are the ones making the web, or aren't we?
    LEGEND
    P PURPOSE? What does this meta tag do in 2011, if anything
    N NECESSARY? Does every site really need it or not?
    G GOOD Whether it will have a good effect on your site being found
    I INVENTED meta tag, who knows, it may be accepted in a year!
    META "METANAME" = PURPOSE? - NECESSARY? - GOOD EFFECT?
    #### important
    meta "title" = P concise summary + teaser - N very - G extremely
    meta "description" = P description + teaser - N yes - G very
    meta "robots" = P if needed, to skip default dmoz/yahoodir listing - N no - G?
    #### new & promising! Thanks for input (John, )
    meta "original-source" = P url of whoever broke the news gets credit - N? - G?
    meta "syndication-source" = P url for syndication of published news - N? - G?
    meta "canonical" = P? - N? - G?
    #### seems obsolete
    meta "keywords" = P some keywords - N+G not for google but yahoo likes them
    meta "language" = P overrule guesswork by defining language - N no - G?
    meta "page-topic" = P topic/theme - N? - G?
    meta "abstract" = P short summary - N? - G?
    meta "copyright" = ?
    #### invented by me
    meta "audience" = P filtered audience: "+seniors, +parents, -children, -youth"
    meta "mood" = P specifies textual style: "discussion, informative, commercial, sexual, fictional, scientific, romantic, therapeutic, technical"

    Read the article

  • Getting a 500 internal error when setting a 301 redirect using .htaccess

    - by sam
    I'm trying to use a 301 redirect to direct users and bots to my new site, but when I put the .htaccess live I keep getting a 500 internal error. The site is actually a subdomain which I want to redirect to another subdomain on another site (I'm not sure if that's relevant, but I thought I should include it). The site is hosted on an Apache server. The 301 .htaccess code I'm using is:
      Options +FollowSymLinks
      RewriteEngine on
      RewriteRule (.*) http://www.blog.mysite.co.uk/$1 [R=301,L]
    Any idea what might be wrong with this?
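    One common cause of a 500 with a snippet like this is the Options line: if the host's AllowOverride setting does not include Options, any Options directive in .htaccess triggers an internal server error. Purely as an illustrative sketch (the target hostname below is the placeholder from the question, not a tested value), the redirect itself can be expressed without that line:
      # Sketch only: permanent redirect of every request to the new subdomain.
      # Try dropping the Options line if the host does not allow it in .htaccess.
      RewriteEngine on
      RewriteRule ^(.*)$ http://www.blog.mysite.co.uk/$1 [R=301,L]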

    Read the article

  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010, and many online resources discussing the upgrade process are a "Bing/Google search" away. As I happen to be speaking about MVC2 and Visual Studio 2010 at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more info), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual Studio 2010, to demonstrate how simple the upgrade process is. In the next few lines, I will briefly touch on upgrading to MVC2 in Visual Studio 2008, then discuss the upgrade process using Visual Studio 2010 in more detail while highlighting the advantage of its multi-targeting support.
    Using Visual Studio 2008 SP1
    For upgrading to MVC2 using VS2008 SP1, a Microsoft white paper [1] presents two approaches: 1) using a provided automated upgrade tool, or 2) manually upgrading the project. I personally prefer using the automated tool, although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route one takes to upgrade.
    Using Visual Studio 2010
    Life is much easier for developers who have already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard, as shown in Figures 1, 2, 3 and 4. As we proceed with the upgrade process, the wizard asks whether we want to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (Figure 5). VS2010 does a good job with multi-targeting: we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (multi-targeting lets us develop for frameworks as early as .Net 2.0 in VS2010).
    Figure 1 – Open solution file using VS2010
    Figure 2 – VS2010 conversion wizard
    Figure 3 – Ready to convert to VS2010 confirmation screen
    Figure 4 – VS2010 solution conversion progress
    Figure 5 – Confirm target framework upgrade
    In an attempt to make my demonstration realistic, I opted to keep the project targeted at the .Net 3.5 Framework. After the successful completion of the conversion process, a quick sanity check revealed that the NerdDinner project is still targeted at the .Net 3.5 framework, as shown in Figure 6. Inspecting the Web.config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (Figure 7), and hence we should now be able to leverage the newly introduced features in MVC2 and VS2010 with no effort or time invested in modifying existing code.
    Figure 6 – Confirm target framework remained .Net 3.5
    Figure 7 – Confirm MVC DLL version has been upgraded
    In conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 enables us to adopt this latest and greatest development tool while supporting development in frameworks as early as .Net 2.0.
    References
    1. "Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2", http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

    Read the article

  • wordpress sites are slow on shared hosting but plain html/css sites are fast

    - by sam
    I've got a shared hosting account: unlimited sites, unlimited GB, unlimited bandwidth, etc. Of course, because it's shared, and a cheap plan at that, there are too many sites on each server and it all runs slowly due to lack of RAM. What I've found is that my plain HTML/CSS/JS sites run an awful lot faster than my WordPress sites on this hosting, and I was trying to work out why. I'm not exactly sure how a browser sends a request for a page or what the full process of request and delivery is, but are my HTML sites running faster because they just serve static files to the browser, whereas the WordPress sites have to query the database and build each page before it is delivered? Is that correct, or am I completely off course?
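    One rough way to check this hypothesis (just a sketch; the URLs below are placeholders for one of the static pages and one of the WordPress pages) is to compare the time to first byte of each, which includes the server-side page-generation time:
      # Compare time to first byte for a static page vs a WordPress page (placeholder URLs)
      curl -s -o /dev/null -w "static:    %{time_starttransfer}s\n" http://example.com/static-page.html
      curl -s -o /dev/null -w "wordpress: %{time_starttransfer}s\n" http://example.com/blog/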

    Read the article

  • Export emails from Open-Xchange webmail and import them to Horde webmail

    - by sam
    I'm moving hosting and need to move the emails stored with the old domain/hosting. The hosting I want to transfer from uses Open-Xchange for its webmail, and the hosting I want to move to uses Horde for its webmail. Is there an automated way to do this from inside the Open-Xchange webmail, or is there a way I can do it from inside Outlook / Mac Mail? Not sure if this is the right place for this; if not, please advise.

    Read the article

  • Recaptcha php problem [closed]

    - by Sam Gabriel
    Hey guys, I'm using reCAPTCHA and I've got a problem: when a user clicks the signup button it redirects him to the sign-up verification page, and here is the code, found at the very top of that page, that checks the reCAPTCHA data the user entered:
      <?php
      require_once('recaptchalib.php');
      $privatekey = "***";
      $resp = recaptcha_check_answer($privatekey,
                                     $_SERVER["REMOTE_ADDR"],
                                     $_POST["recaptcha_challenge_field"],
                                     $_POST["recaptcha_response_field"]);
      if (!$resp->is_valid) {
          header('location: signup.php');
      }
      ?>
    But it seems that whatever I type in the reCAPTCHA box, be it right or wrong, I get redirected to the signup.php page. Here is the reCAPTCHA code in the signup.php page:
      <?php
      ini_set('display_errors', 'On');
      error_reporting(E_ALL | E_STRICT);
      require_once('recaptchalib.php');
      $publickey = "***";
      echo recaptcha_get_html($publickey);
      ?>
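    A quick way to narrow this down (just a diagnostic sketch, assuming the classic recaptchalib.php, whose response object exposes an error field alongside is_valid) is to confirm that the challenge fields actually arrive in the POST and to log the library's error code before redirecting:
      <?php
      require_once('recaptchalib.php');
      $privatekey = "***";

      // Temporary diagnostics: if these are missing, the form is not posting the
      // reCAPTCHA fields to this page (e.g. wrong form action or a GET submit).
      error_log('challenge: ' . (isset($_POST["recaptcha_challenge_field"]) ? 'present' : 'MISSING'));
      error_log('response:  ' . (isset($_POST["recaptcha_response_field"]) ? 'present' : 'MISSING'));

      $resp = recaptcha_check_answer($privatekey,
                                     $_SERVER["REMOTE_ADDR"],
                                     $_POST["recaptcha_challenge_field"],
                                     $_POST["recaptcha_response_field"]);

      if (!$resp->is_valid) {
          // The old recaptchalib reports an error code such as "incorrect-captcha-sol";
          // logging it shows whether the key, the fields or the answer is the problem.
          error_log('recaptcha error: ' . $resp->error);
          header('Location: signup.php');
          exit;
      }
      ?>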

    Read the article

  • What's the best practice for linking to/from guest posts on other blogs?

    - by sam
    When writing a guest blog for a site, I include a link back to my site - an inbound link. If I were to write a post on my blog publicising my guest post, that would mean there were reciprocal links, thus cancelling each other out, correct? If I made the link (on my blog) back to the guest post nofollow, would that cancel the effect, meaning I still get link juice from the guest post? Further down the line if the site I posted on as a guest wanted to write a post for my site, what is the best way for me to prevent re-introducing the reciprocal link problem?

    Read the article

  • Singular or Plural Nouns as file names for better Search & SEO friendliness? [closed]

    - by Sam
    Possible Duplicate: Should I use singular or plural nouns in a domain name and why?
    Dear folks, here are two scenarios where the file names should best represent the search volume of audiences searching for them.
    Scenario 1
    website.org/en/logo.php
    website.org/en/brochure.php
    website.org/en/poster.php
    website.org/en/design.php
    OR Scenario 2
    website.org/en/logos.php
    website.org/en/brochures.php
    website.org/en/posters.php
    website.org/en/designs.php
    Q1. What do you intuitively think would be best?
    Q2. What do the facts in general show: do people search for the singular or the plural?
    Q3. Do search engines have a common rule of thumb for this?
    Q4. Should I pick one and go with it consistently, or does it depend on the word?
    Thanks very much for your ideas/suggestions. I really don't know which one to go for.

    Read the article

  • Work Item Traceability in TFS 2010

    - by Sam Patrick
    I have created a Windows Forms project (VS solution) under a TFS 2010 project. I may eventually add more solutions to the TFS project. My question: can we create a Use Case WIT for a specific solution within a TFS project? Furthermore, is it possible to create a "traceability matrix" that starts at the use case level and goes down to the code level (at least the namespace level) of that particular VS solution?

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.
    Brief Explanation
    My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient. (Sort of like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.
    Long Explanation
    Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the viewer). However, prior to being viewed in the viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag).
    The program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000).
    I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices:
    Choice 1: Individual meta-data vs centralized meta-data
    Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images.
    Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes at intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient

    Read the article

  • What is a better abstraction layer for D3D9 and OpenGL vertex data management?

    - by Sam Hocevar
    My rendering code has always been OpenGL. I now need to support a platform that does not have OpenGL, so I have to add an abstraction layer that wraps OpenGL and Direct3D 9. I will support Direct3D 11 later. TL;DR: the differences between OpenGL and Direct3D cause redundancy for the programmer, and the data layout feels flaky. For now, my API works a bit like this. This is how a shader is created:
      Shader *shader = Shader::Create(
          " ... GLSL vertex shader ... ",
          " ... GLSL pixel shader ... ",
          " ... HLSL vertex shader ... ",
          " ... HLSL pixel shader ... ");
      ShaderAttrib a1 = shader->GetAttribLocation("Point", VertexUsage::Position, 0);
      ShaderAttrib a2 = shader->GetAttribLocation("TexCoord", VertexUsage::TexCoord, 0);
      ShaderAttrib a3 = shader->GetAttribLocation("Data", VertexUsage::TexCoord, 1);
      ShaderUniform u1 = shader->GetUniformLocation("WorldMatrix");
      ShaderUniform u2 = shader->GetUniformLocation("Zoom");
    There is already a problem here: once a Direct3D shader is compiled, there is no way to query an input attribute by its name; apparently only the semantics stay meaningful. This is why GetAttribLocation has these extra arguments, which get hidden in ShaderAttrib. Now this is how I create a vertex declaration and two vertex buffers:
      VertexDeclaration *decl = VertexDeclaration::Create(
          VertexStream<vec3,vec2>(VertexUsage::Position, 0, VertexUsage::TexCoord, 0),
          VertexStream<vec4>(VertexUsage::TexCoord, 1));
      VertexBuffer *vb1 = new VertexBuffer(NUM * (sizeof(vec3) + sizeof(vec2)));
      VertexBuffer *vb2 = new VertexBuffer(NUM * sizeof(vec4));
    Another problem: the information VertexUsage::Position, 0 is totally useless to the OpenGL/GLSL backend because it does not care about semantics. Once the vertex buffers have been filled with or pointed at data, this is the rendering code:
      shader->Bind();
      shader->SetUniform(u1, GetWorldMatrix());
      shader->SetUniform(u2, blah);
      decl->Bind();
      decl->SetStream(vb1, a1, a2);
      decl->SetStream(vb2, a3);
      decl->DrawPrimitives(VertexPrimitive::Triangle, NUM / 3);
      decl->Unbind();
      shader->Unbind();
    You see that decl is a bit more than just a D3D-like vertex declaration; it kind of takes care of rendering as well. Does this make sense at all? What would be a cleaner design? Or a good source of inspiration?

    Read the article

  • Juju didn't configure rabbitmq for openstack?

    - by SaM
    I have installed Ubuntu OpenStack HA with Juju on all 24 servers, but my OpenStack is not working at all. On the dashboard, on every page I get errors saying "could not retrieve usage information", "could not retrieve volume information", "could not retrieve ..." etc. I spent hours and have discovered that Juju has not done the configuration correctly. I found that on the cloud controller, in nova.conf, Juju has added a RabbitMQ vhost entry, but that virtual host was not added in RabbitMQ. Then how is it supposed to work? And on the juju-gui canvas RabbitMQ is all green and working fine, which in reality it is not. I am really wondering if Juju has done the correct configuration on all 24 servers; I am getting the feeling that it would have been faster if I had deployed OpenStack manually instead of using Juju. Why was the virtual host entry not added in RabbitMQ? How should I solve this?

    Read the article

  • Backlink anchor text / keyword strategy post-Penguin

    - by sam
    I've heard a lot recently about over-optimisation of off-site backlink anchor text. What I've heard from SEOmoz was that the sites most affected by the Penguin update had over 60% of their backlink anchor text the same, so Google saw this as unnatural and penalized them. Which kind of makes sense, as it's not normal to have such a high density on one word / phrase. If I were building links for a gastro pub in London (this is purely hypothetical), the sort of keywords I would go after are "gastro pub in london" and "london gastro pub". If I were to mix up the anchor text by having:
    gastro pub
    gastro pub in london
    london gastro pub
    would this be seen as OK? Or would these all be seen as broad-match keywords and counted as one phrase, making me fall foul of the Penguin update?

    Read the article

  • Network really slow with TL-WN951N wireless card

    - by Sam
    I literally just installed Ubuntu and it seems to be working great, except the network is deadly slow. I'm running a TL-WN951N wireless card which can download at about 600-700 KB/s in Windows, but in Ubuntu the max speed it seems to get is around 5 KB/s. I guess I should note that my WAP is only wireless-G, but like I said, I can get much better speeds in Windows. I'm testing the speeds by downloading files from here: http://mirror.internode.on.net/pub/test/ Anyone have any idea? I saw some people recommend downloading drivers and compiling them myself, but I'm really new to all of this and would appreciate someone babying me through it so I don't brick my computer! Here are the results of lspci -v:
      04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
          Subsystem: Giga-byte Technology GA-EP45-DS5 Motherboard
          Flags: bus master, fast devsel, latency 0, IRQ 44
          I/O ports at c000 [size=256]
          Memory at e9110000 (64-bit, prefetchable) [size=4K]
          Memory at e9100000 (64-bit, prefetchable) [size=64K]
          [virtual] Expansion ROM at e9120000 [disabled] [size=64K]
          Capabilities: <access denied>
          Kernel driver in use: r8169
          Kernel modules: r8169
      05:02.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01)
          Subsystem: Atheros Communications Inc. Device 3071
          Flags: bus master, 66MHz, medium devsel, latency 168, IRQ 18
          Memory at e9200000 (32-bit, non-prefetchable) [size=64K]
          Capabilities: <access denied>
          Kernel driver in use: ath9k
          Kernel modules: ath9k

    Read the article
