Search Results

Search found 3056 results on 123 pages for 'ben daniel'.

Page 23/123 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Connecting/Removing a Second Monitor with Multiple Workspaces

    - by Ben
    I am running 12.04 on a laptop. I use workspaces extensively to manage different projects. When I take my laptop home, I plug in an additional monitor. The problem is that whenever I connect the additional monitor, all of my windows move to workspace 1, regardless of which workspace they were in beforehand. This forces me to go through each window and manually move it back to where I had it. I don't think adding a monitor should cause windows to move around between workspaces. Any thoughts or ideas on how to fix this? Know of any workarounds? Thanks in advance!
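    One possible workaround, sketched below and untested, is to snapshot window geometry before plugging the monitor in and put everything back afterwards. It assumes the wmctrl package is installed and that the workspaces are Compiz viewports (Unity's default), so a window's workspace is encoded in its absolute x/y position.

        #!/bin/sh
        # Save or restore window positions across a monitor hot-plug (assumes wmctrl is installed).
        STATE="$HOME/.window-layout"
        case "$1" in
          save)
            # Record window id, desktop and geometry (x y width height).
            wmctrl -l -G | awk '{print $1, $2, $3, $4, $5, $6}' > "$STATE"
            ;;
          restore)
            # Move every recorded window back to its saved position.
            while read -r id desk x y w h; do
              wmctrl -i -r "$id" -e "0,$x,$y,$w,$h"
            done < "$STATE"
            ;;
          *)
            echo "usage: $0 save|restore"
            ;;
        esac

    Run it with the save argument before connecting the monitor and with restore afterwards.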

    Read the article

  • Triple boot problem with Windows 7, Ubuntu 12.04 & Fedora 17

    - by daniel
    I just installed Fedora 17 after Ubuntu 12.04, but now I can't boot into either of the two Linux installs. I also have Windows 7 installed and I can still boot into it; I edited the boot menu with EasyBCD. During the Fedora 17 installation I used the standard partitioning and created separate "/boot", "/", "swap" and "/home" partitions for it. Is this fixable, or do I have to reinstall both operating systems? Also, is it possible to share one swap partition? I am currently on an Ubuntu live CD. Thanks for any guidance.
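    A commonly suggested recovery path, assuming it is the boot loader that the Fedora install overwrote, is to reinstall Ubuntu's GRUB from the live CD and let os-prober rediscover Windows and Fedora. The sketch below is untested here and the device names are placeholders; check sudo fdisk -l for the real Ubuntu root partition first. (Two Linux installs can safely share a single swap partition, as long as neither relies on it for hibernation.)

        # From the Ubuntu live session (placeholder device names):
        sudo fdisk -l                           # identify the Ubuntu root partition, e.g. /dev/sda5
        sudo mount /dev/sda5 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt grub-install /dev/sda  # put GRUB back on the MBR
        sudo chroot /mnt update-grub            # os-prober should pick up Windows 7 and Fedora 17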

    Read the article

  • Ubuntuone fails to sync with 'File Sync starting...' displayed

    - by a different ben
    I am on 12.04 using ubuntuone-client 3.0.1-0ubuntu1.0.1. I actually have two machines that I sync with, having the same Ubuntu version and ubuntuone-client version. One is fine, the other is not. File sync has frozen within a user-defined folder under my home folder. The graphical client reports in the top-right corner: 'File Sync starting...', but this doesn't change. I have two files with changes that show a syncing overlay in Nautilus. They are both very small text files. Here are some details:

        harb@joan:~$ u1sdtool --status
        State: READY
        connection: With User Not Network
        description: ready to connect
        is_connected: False
        is_error: False
        is_online: False
        queues: WORKING
        harb@joan:~$ u1sdtool --current-transfers
        Current uploads: 0
        Current downloads: 0

    The status seems to suggest that I am not connected to a network, however I am connected to a network - in fact I am accessing this machine via NX. Is it not working because I am connected via NX? Happy to provide other info, just not sure what would be useful.
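    The status output suggests the daemon believes the machine is offline (it appears to take its cue from NetworkManager rather than probing the network itself), which can happen on headless or NX sessions. As a first step - only a guess, using the stock client tools - it may be worth restarting the sync daemon and telling it to connect explicitly:

        u1sdtool --quit       # stop the running syncdaemon
        u1sdtool --connect    # start it again and ask it to connect
        u1sdtool --status     # check whether the state moves past READY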

    Read the article

  • Critical Patch Update for Oracle Fusion Middleware - CPU October 2013

    - by Daniel Mortimer
    The latest Critical Patch Update (CPU) has been released for Oracle products. Start your reading here: Critical Patch Updates, Security Alerts and Third Party Bulletin. This is the home page containing links to all "Critical Patch Updates" released to date, along with sections detailing Security Alerts, the Third Party Bulletin, Public Vulnerabilities Fixed, Policies, and Reporting Security Vulnerabilities. On this page you will find the link to the Oracle Critical Patch Update Advisory - October 2013. The advisory lists the support documents that cover patch availability for all Oracle products. For Oracle Fusion Middleware, go to: Patch Set Update and Critical Patch Update October 2013 Availability Document [ID 1571391.1]. If you hit any unexpected errors when applying the CPU patches, check out the known issues documented in these two support documents: Critical Patch Update October 2013 Oracle Fusion Middleware Known Issues [ID 1571369.1] and Critical Patch Update October 2013 Database Known Issues [ID 1571653.1]. And lastly, for an informal summary of what the Critical Patch Update fixes, check out the blog post by the "Oracle Software Security Assurance" team: October 2013 Critical Patch Update Released.

    Read the article

  • OpenGL flickering near the edges

    - by Daniel
    I am trying to simulate particles moving around the scene with OpenCL for computation and OpenGL for rendering with GLUT. There is no OpenCL-OpenGL interop yet, so the drawing is done in the older fixed-pipeline way. Whenever circles get close to the edges, they start to flicker. The drawing should draw a part of the circle on the top of the scene and a part on the bottom. The effect is the following: the balls you see on the bottom should be one part on the bottom and one part on the top - wrapping around the scene, so to say - but they constantly flicker. The code for drawing them is:

        void Scene::drawCircle(GLuint index){
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glTranslatef(pos.at(2*index), pos.at(2*index+1), 0.0f);
            glBegin(GL_TRIANGLE_FAN);
            GLfloat incr = (2.0 * M_PI) / (GLfloat) slices;
            glColor3f(0.8f, 0.255f, 0.26f);
            glVertex2f(0.0f, 0.0f);
            glColor3f(1.0f, 0.0f, 0.0f);
            for(GLint i = 0; i <= slices; ++i){
                GLfloat x = radius * sin((GLfloat) i * incr);
                GLfloat y = radius * cos((GLfloat) i * incr);
                glVertex2f(x, y);
            }
            glEnd();
        }

    If it helps, this is the reshape method:

        void Scene::reshape(GLint width, GLint height){
            if(0 == height)
                height = 1; // Prevent division by zero
            glViewport(0, 0, width, height);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluOrtho2D(xmin, xmax, ymin, ymax);
            std::cout << xmin << " " << xmax << " " << ymin << " " << ymax << std::endl;
        }

    Read the article

  • Project Euler 6: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 6. As always, any feedback is welcome.

        # Euler 6
        # http://projecteuler.net/index.php?section=problems&id=6
        # Find the difference between the sum of the squares of
        # the first one hundred natural numbers and the square
        # of the sum.
        import time
        start = time.time()

        square_of_sums = sum(range(1,101)) ** 2
        sum_of_squares = reduce(lambda agg, i: agg+i**2, range(1,101))
        print square_of_sums - sum_of_squares

        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')
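    As a quick sanity check on the brute-force approach, the same difference follows from the standard closed forms for the sum and the sum of squares of the first n natural numbers:

        \left(\sum_{i=1}^{100} i\right)^2 - \sum_{i=1}^{100} i^2
          = \left(\frac{100 \cdot 101}{2}\right)^2 - \frac{100 \cdot 101 \cdot 201}{6}
          = 5050^2 - 338350
          = 25164150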

    Read the article

  • Tiled/TMX C++ Library/Parser

    - by Ben
    Where can I find an easy to use and up to date C++ parser/library for the .tmx map format (used by the Tiled Map Editor) ? EDIT: David's comment, 'Unless you want to build your game around the format of the parser..', got me thinking... So I have downloaded pugixml, which is an easy to use xml-parser with very straightforward documentation. Together with the spec for the TMX Map Format, I think I'll give it a try myself. I'll probably compare with Cocos2d-x's CCTMXTiledMap at some point.

    Read the article

  • Overheating laptop

    - by Moncef ben slimane
    I've been using Ubuntu for about two months. When I installed it on my laptop it never overheated, but then one day, I don't know what happened, it started overheating (70°C at idle). I've tried whatever I could find on the net, and I also can't change the CPU frequency (it's an i5 M460 @ 2.53 GHz). I have tried Jupiter (no result), lm-sensors (likewise), and the CPU frequency indicator for Unity (the CPU won't move from 2.5 GHz). Any help? (I'm a C++ user and PHP coder...)
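    A first diagnostic step - only a sketch, assuming an Intel CPU on the stock 12.04 cpufreq drivers - is to check the readings and governors from the command line rather than through the indicator applets:

        sudo apt-get install cpufrequtils lm-sensors
        sudo sensors-detect                  # answer the prompts once, then:
        sensors                              # confirm the temperature readings
        cpufreq-info                         # list available governors and the current frequency
        sudo cpufreq-set -c 0 -g powersave   # set the governor for core 0 (repeat with -c 1, -c 2, ...)

    If cpufreq-info shows no governors at all, the frequency driver is not loading, which would also explain why the CPU is stuck at 2.5 GHz. Cleaning dust out of the fan is worth ruling out as well.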

    Read the article

  • Should I add old code into my repository?

    - by Ben Brocka
    I've got an SVN repository of a PHP site and the last programmer didn't use source control properly. As a result, only code since I started working here is in the Repo. I have a bunch of old copies of the full code base saved in files as "backups" but they're not in source control. I don't know why most of the copies were saved nor do I have any reasonable way to tag them to a version number. Due to upgrades to the frameworks and database drivers involved, the old code is quite defunct; it no longer works on the current server config. However, the previous programmers had some...unique...logic, so I hate to be completely without old copies to refer to what on earth they were doing. Should I keep this stuff in version control? How? Wall off the old code in separate Tags/branches?
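    One low-effort option, sketched below with placeholder URLs and folder names, is to park the old snapshots in the existing repository under a separate legacy area, so they are versioned and searchable without touching trunk history or the day-to-day checkout:

        REPO=http://svn.example.com/mysite          # placeholder repository URL
        svn mkdir -m "Area for pre-source-control snapshots" $REPO/legacy
        svn import ./backup-snapshot-1 $REPO/legacy/snapshot-1 -m "Import legacy snapshot 1 (date unknown)"
        svn import ./backup-snapshot-2 $REPO/legacy/snapshot-2 -m "Import legacy snapshot 2 (date unknown)"

    Each snapshot ends up as its own folder, which plays the same role as a tag, and the defunct code stays out of trunk while remaining available to consult.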

    Read the article

  • Brightness problem after upgrading to Ubuntu 13.10

    - by Daniel Yunus
    I recently upgraded to 13.10. On 13.04 the brightness worked after I added the script below, but since the upgrade to 13.10 the same code is no longer working on my ASUS Slimbook X401U:

        #!/bin/sh -e
        #
        # rc.local
        #
        # This script is executed at the end of each multiuser runlevel.
        # Make sure that the script will "exit 0" on success or any other
        # value on error.
        #
        # In order to enable or disable this script just change the execution
        # bits.
        #
        # By default this script does nothing.
        echo 0 > /sys/class/backlight/acpi_video0/brightness
        exit 0
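    One thing worth checking first - a guess, since backlight interfaces often change between releases and graphics drivers - is whether acpi_video0 still exists on 13.10, or whether the panel is now exposed under a different name:

        ls /sys/class/backlight/
        # If a different interface (e.g. intel_backlight) is listed, point the
        # rc.local line at that one instead; the device name below is a placeholder.
        cat /sys/class/backlight/intel_backlight/max_brightness
        echo 0 | sudo tee /sys/class/backlight/intel_backlight/brightness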

    Read the article

  • Request Validation in ASP.NET 4.0

    - by Ben Bastiaensen
    Up to ASP.NET 3.5, Request Validation is enabled by default. In order to disable it for a page you needed to set the ValidateRequest attribute in the page directive to false. This is no longer enough in ASP.NET 4.0. If you want to use the old behaviour you need to add the following setting to web.config:

        <httpRuntime requestValidationMode="2.0" />

    Of course, you need to check all input on the page for XSS or other malicious input if you set the page's request validation to false.

    Read the article

  • Add Reference with Search

    - by Daniel Cazzulino
    If you have been using VS2010 for any significant amount of time, you have surely come across the awkward, slow and hard-to-use Add Reference dialog. Despite some (apparent) improvements over the VS2008 behavior, in its current form it's even LESS usable than before. A brief, non-exhaustive summary of the typical grief with this dialog:

    - Scrolling a list of *hundreds* of entries (300+ typically).
    - No partial matching when typing: yes, you can type in the list to get to the desired entry, but the matching is performed in an exact manner, from the beginning of the assembly name. So, to get to the (say) "Microsoft.VisualStudio.Settings" assembly, you actually have to type the first two segments in their entirety before starting to type "Settings"...

    Read full article

    Read the article

  • How can I mount an AFS filesystem?

    - by Ben
    My current method is to mount the filesystem via SSH using Nautilus's graphical interface, but I would much prefer to be able to use some tool that mounts the AFS filesystem and gives me access to AFS-specific features (permissions, etc.). I've tried installing OpenAFS via apt-get, but so far the kernel module has refused to compile. Also, assuming I get OpenAFS installed, I'm not quite sure how to actually mount the remote filesystem to, say, /media/afs or some directory. I'm running Maverick with the 2.6.36-020636-generic kernel from http://kernel.ubuntu.com/~kernel-ppa/mainline/ Thanks for the help!
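    For reference, a minimal OpenAFS client setup usually looks like the sketch below (the cell name is a placeholder, and the DKMS package may still fail to build against a mainline 2.6.36 kernel, which would match the compile errors described above):

        sudo apt-get install openafs-client openafs-krb5 openafs-modules-dkms
        echo "your.cell.example.org" | sudo tee /etc/openafs/ThisCell
        sudo service openafs-client restart
        ls /afs/your.cell.example.org     # the client mounts the whole AFS namespace under /afs

    Unlike the SSH/Nautilus approach, there is no per-directory mount to do by hand: once the client and kernel module are running, everything appears under /afs, and the AFS-specific tools (fs listacl, fs setacl, etc.) work against those paths.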

    Read the article

  • Common way of animating 'motion' for walk cycle animations

    - by Ben Hymers
    I've just posted this at the Blender artists' forums before realising I would probably get a better response from a more game development-specific audience, so apologies for cross-posting! It's for the right reasons :) I'm a programmer trying to animate a character walking for a game project, using Ogre. I've made a very simple walk cycle in Blender and exported it to Ogre, and it plays just fine. By fine, I mean it works, but there's terrible foot sliding. This is because I just animated the walk in-place (at the origin) in Blender, and of course I don't know what "speed of walk" that corresponds to, so when I move the character in-game the motion doesn't necessarily match up with the movement of the feet in the animation. So my question is: what's the normal approach for this kind of thing? At work we use Maya, and the animators either animate a special 'moveTrans' node that represents the "position" of the character (or have the exporter generate it for them from the movement of the root node), then the game can read this to know how fast the animation moves the character. So in the Maya file, the character will walk forward for one cycle and this extra node will follow along with them by their feet. I've not seen anything like this in open-source land, and there's certainly no provision for that in the Ogre Exporter script. What do you chaps normally do for this?

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?”

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business – and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers, and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model, or “Customer Maturity Framework” as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are much further behind on the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they’ll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers.

    But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in process

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - Production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey!

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Unable to install files with apt-get: "unable to locate package"

    - by Ben Casling
    I'm having issues with Ubuntu Server 12.04 installed on an HP 550 laptop. When I try sudo apt-get install <programname>, e.g. apache2, it will not work, saying E: Unable to locate package apache2. I have tried to look at/edit the sources, but that doesn't work either; the gedit command is broken too (I am trying gedit /etc/apt/sources.list, for those wondering). Is this a case of the network not being configured properly? It downloaded a language pack easily enough during the installation, though. How do I fix this? A prompt reply would be appreciated.
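    A first thing to try - just a sketch of the usual checks on a fresh server install, where the package index may never have been downloaded and gedit (a desktop editor) is not installed:

        sudo apt-get update                   # refresh the package lists first
        sudo nano /etc/apt/sources.list       # nano ships with the server install
        apt-cache policy apache2              # confirm the package is now visible
        sudo apt-get install apache2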

    Read the article

  • Computer science curriculum for non-CS major?

    - by Daniel
    Hi all, I would like to have some ideas for building up my foundation CS skills. I have started programming computers 10 years ago and have made a pretty good career out of it. However, I cannot stop thinking that the path that brought me here was very particular, and if something goes wrong (e.g. I get laid off) it would be harder to find a job here in the US on the same salary level, OR in a top company. The reason I say that is that I am a self-learner; my degree is not in Computer Science so although I master C/C++/Java, I do not have the formal CS and mathematical background that many other software developers (esp. here in the US) have. When I look at job interview questions from Apple, Google, Amazon, I have the impression that I'd flunk those technical interviews at some point. Don't get me wrong, I know my algorithms and data structures, but when things dive too deeply into the CS realm I am in trouble. What can I do to close the gap? I was thinking about a MSc in CS, but will I even UNDERSTAND what's going on there if I'm not a CS undergrad? Should I go back to basics and get a BSc in CS instead? I always tend to go into self-study mode when I want to learn new stuff, but I have the impression that I will need more formal education in CS if I want to have a shot at working at those kinds of companies. Thank you!

    Read the article

  • Do Not Uninstall Flag on Apt?

    - by Daniel C. Sobral
    Does the Debian/Ubuntu package infrastructure have some way of marking packages so that they never get uninstalled, no matter the pinning of other packages? My problem is that, sometimes, packages installed by Puppet (coming from non-standard repositories, of course) cause other packages to get uninstalled -- in particular, openssh-{server,client}. The way this happens is that packages A and B depend on different versions of package C. If A is installed and one asks to install B, then the version of C changes. The new version of C is incompatible with A, so A gets uninstalled. The funny thing is that the process is then reversed: on the next run, Puppet notices that A is not installed and tries to install it. So, basically, I want to make sure A never gets uninstalled, which would prevent B from getting installed. That would be reported as an error, making me aware of the issue. If anyone cares, Puppet uses the following command to install packages:

        /usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install <package>
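    One option worth testing - only a sketch, since whether the conflict then surfaces as a reported error depends on the apt version and how the dependencies are expressed - is to place the critical packages on hold so they are not silently replaced when a new version of C is pulled in:

        sudo apt-mark hold openssh-server openssh-client
        # older releases without apt-mark can use dpkg selections instead:
        echo "openssh-server hold" | sudo dpkg --set-selections

    sudo apt-mark showhold lists what is currently held, and apt-mark unhold reverses it.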

    Read the article

  • Why don't %MEM values add up to mem in top?

    - by ben
    I'm currently debugging performance issues with my VPS and for that I'm trying to understand which of the processes eat the most memory. Reading top, here's what I get:

        Mem:   366544k total,  321396k used,   45148k free,     380k buffers
        Swap: 1048572k total,  592388k used,  456184k free,    7756k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        12339 ruby   20  0  844m  74m 2440 S    0 20.8  0:24.84 ruby
        12363 ruby   20  0  844m  73m 1576 S    0 20.6  0:00.26 ruby
        21117 ruby   20  0  171m  33m 1792 S    0  9.3  2:03.98 ruby
        11846 ruby   20  0  858m  21m 1820 S    0  6.0  0:09.15 ruby
        21277 ruby   20  0  219m  11m 1648 S    0  3.2  2:00.98 ruby
          792 root   20  0  266m  10m 1024 S    0  3.0  1:40.06 ruby
          532 mysql  20  0  234m 4760 1040 S    0  1.3  0:41.58 mysqld
          793 root   20  0  250m 4616  984 S    0  1.3  1:20.55 ruby
          586 root   20  0  156m 4532  848 S    0  1.2  6:17.10 god
        12315 ruby   20  0  175m 2412 1900 S    0  0.7  0:07.55 ruby
         3844 root   20  0 44036 2132 1028 S    0  0.6  1:08.22 ruby
        10939 ruby   20  0  179m 1884 1724 S    0  0.5  0:08.33 ruby
         4660 ruby   20  0  229m 1592 1440 S    0  0.4  2:55.46 ruby
         3879 nobody 20  0 37428  964  520 S    0  0.3  0:01.99 nginx

    As you can see my memory is about 90% used (which is my issue) but when you add up the %MEM values, it goes to about 50-60% only. Same thing, RES doesn't add up to ~350mb. Why? Am I misunderstanding their meaning? Thanks
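    Part of the answer is that %MEM is each process's resident set (RES) as a share of physical RAM, so anything paged out to swap (roughly 578 MB used here), kernel memory and page cache never show up in that column, and RES also counts shared pages more than once across processes. A couple of commands that make this visible (smem is an extra package and only a suggestion):

        free -m                      # the "-/+ buffers/cache" row is the application-level usage
        grep -E 'MemTotal|MemFree|Buffers|Cached|Slab' /proc/meminfo
        sudo apt-get install smem
        smem -r -s swap              # per-process RSS, proportional share (PSS) and swap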

    Read the article

  • Getting Started with FMW 11g - Advisor Webcast Recordings

    - by Daniel Mortimer
    Predating the creation of this blog, there have been two Oracle Support Advisor Webcasts which are worth reviewing - especially if you are tackling installation and/or patching of Oracle Fusion Middleware 11g for the first time.

    - How to Plan for a New Installation of Oracle Fusion Middleware 11g: Webcast Recording, Slides (PDF)
    - Oracle Fusion Middleware 11g Patching Concepts and Tools: Webcast Recording, Slides (PDF)

    Ignore the duration of the recording indicated by the link; you can skip forward to the main presentation and demo, which shapes up at about 45 minutes long - the rest is Q&A and blurb. The Support Advisor Webcast schedule and recordings are found via these support documents: Advisor Webcast Current Schedule [Doc ID 740966.1] and Advisor Webcast Archived Recordings [Doc ID 740964.1]. Note: you will need a My Oracle Support login to access these documents.

    Read the article

  • Origins of code indentation

    - by Daniel Mahler
    I am interested in finding out who introduced code indentation, as well as when and where it was introduced. It seems so critical to code comprehension, but it was not universal. Most Fortran and Basic code was (is?) unindented, and the same goes for Cobol. I am pretty sure I have even seen old Lisp code written as continuous, line-wrapped text. You had to count brackets in your head just to parse it, never mind understanding it. So where did such a huge improvement come from? I have never seen any mention of its origin. Apart from original examples of its use, I am also looking for original discussions of indentation.

    Read the article

  • Project Euler 20: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 20. As always, any feedback is welcome.

        # Euler 20
        # http://projecteuler.net/index.php?section=problems&id=20
        # n! means n x (n - 1) x ... x 3 x 2 x 1
        # Find the sum of digits in 100!
        import time
        start = time.time()

        def factorial(n):
            if n == 0:
                return 1
            else:
                return n * factorial(n-1)

        print sum([int(i) for i in str(factorial(100))])

        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')

    Read the article

  • ASP.Net Fails to Detect IE10 without .Net Hotfix

    - by Ben Barreth
    Benny Mathew recently alerted us that he couldn’t create, edit or delete posts on GeeksWithBlogs in IE10 (Windows 8). It turns out the problem is that ASP.Net fails to detect IE10, causing a JavaScript error on postback. We’ll be applying a hotfix to the .Net framework on GWB shortly to fix this issue. In the meantime you can use the simple workaround outlined below. (Note that if you create posts using Windows Live Writer you won’t have this issue creating posts.)

    1. Log into your GWB account and go to the “Posts” page.
    2. Hit F12 to bring up the developer window in IE10.
    3. Click on the ‘Browser Mode’ option and change it to IE9.
    4. You should now be able to create/edit/delete posts in GWB.

    Note this also fixes any other sites in IE10 that might not yet have the hotfix applied. You can tell the missing hotfix is the likely culprit if you’re using IE10 and see the following error in the Web Developer Console area:

        SCRIPT5009: '__doPostBack' is undefined

    Let us know ASAP if there are other issues you are experiencing that aren’t fixed by this workaround!

    Read the article

  • Choosing a subject to create a site for

    - by Daniel
    For 6 years I've been working as a web developer, and I would like to finally create my own site. I thought I had everything needed: programming and administration skills, a good designer I found, my own servers, and an understanding of how to operate a site. But I've missed a very important (or perhaps the most important) point: how do I find a subject for my site? My hobby and my work are the same thing - IT - but I don't want to create just another tech blog or news aggregator; I want something different. First I thought things like Google Trends or the Google top 1000 could help, but I got lost in thousands of options and can't look at them all (well, I actually can, but it'll take at least a couple of months). So my question is: how did you start? Where did you get the idea?

    Read the article

  • How to Reinstall Ubuntu on Acer C7 Chromebook

    - by Daniel
    I installed ChrUbuntu 13.04 recently and it worked great. However, I was updating some programs when a pop-up appeared asking if I wanted to upgrade to the newest version; I didn't realize 13.10 was still unstable and buggy. I am unsure how to go back. I've been told that I need to wipe the Ubuntu partition completely, but I don't see why that is necessary. Basically, I want to remove Ubuntu 13.10 and use the partition it is installed on to install 13.04. Sorry for being a Linux noob, but I hope you can help. Thanks for your time.

    Read the article
