Search Results

Search found 2164 results on 87 pages for 'paul stone'.

Page 81/87 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • Tools to manage sql 2008 database mirroring?

    - by lemkepf
    We are going to be moving about 20 databases that live on a single instance of SQL 2000 to a SQL 2008 R2 environment with database mirroring. What I'm looking for is a tool or scripts that will help me manage the conversion and management of those 20 DBs onto this new mirrored environment easily. There are many steps in setting each DB up and I want to automate as much as possible. Edit: Here are the steps I've been doing manually: Create the same usernames/passwords from the old SQL 2000 server on the new SQL 2008 server. Then sync those users/passwords onto the other SQL 2008 server with the same SIDs so when we do the DB backup and restore they match up. Take a backup of each SQL 2000 DB. Copy them to server A. Restore the backup to server A. Back up from server A, copy to server B, restore there. Run the mirror "configure security" wizard. Start mirroring. I'd love to be able to script this out or have a tool that does it for me. Thanks! Paul
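    A minimal sketch of how the backup/copy/restore legs might be scripted from Python with pyodbc. The server names, database list, and UNC share below are placeholders; legacy backups may need WITH MOVE to relocate files, and the "configure security"/mirroring step (e.g. ALTER DATABASE ... SET PARTNER) would still follow:

        import pyodbc

        DATABASES = ["Sales", "HR"]      # placeholder list of the 20 databases
        SHARE = r"\\serverA\backups"     # placeholder UNC path visible to all three servers

        def run(server, sql):
            # BACKUP/RESTORE cannot run inside a transaction, hence autocommit
            cn = pyodbc.connect("DRIVER={SQL Server};SERVER=" + server + ";Trusted_Connection=yes",
                                autocommit=True)
            cn.cursor().execute(sql)
            cn.close()

        for db in DATABASES:
            bak = SHARE + "\\" + db + ".bak"
            run("sql2000", "BACKUP DATABASE [" + db + "] TO DISK = '" + bak + "'")
            run("serverA", "RESTORE DATABASE [" + db + "] FROM DISK = '" + bak + "' WITH RECOVERY")
            bak_a = SHARE + "\\" + db + "_a.bak"
            run("serverA", "BACKUP DATABASE [" + db + "] TO DISK = '" + bak_a + "'")
            # the mirror partner must be left WITH NORECOVERY before mirroring starts
            run("serverB", "RESTORE DATABASE [" + db + "] FROM DISK = '" + bak_a + "' WITH NORECOVERY")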

    Read the article

  • SeLinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following: Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname: Failed to get default SELinux security context for tiger I googled for this, and ran across the following: http://www.spinics.net/lists/fedora-.../msg13049.html Here's the part that I think relates to the problem that I'm having: Quote: What's wrong on my system? Why isn't it possible to log in even if selinux is in permissive mode? Any suggestions? I'd start by trying to figure out why sshd isn't running in sshd_t (it seems to be running in sysadm_t). Paul. Yes, sshd is running in sysadm_t: ps axZ | grep sshd system_u:system_r:sysadm_t 3632 ? Ss 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi ls -Z /usr/sbin/sshd system_u:object_r:sshd_exec_t /usr/sbin/sshd Don't know why it's not sshd_t. I didn't modify anything. It's a standard installation of sles11 with the default reference policy from tresys. Maybe this code snippet from policy/modules/services/ssh.te is responsible for that: Allow ssh logins as sysadm_r:sysadm_t gen_tunable(ssh_sysadm_login, true) Any ideas? Do you have the boolean init_upstart set to on? If not, try setting it to on. I do not believe the ssh_sysadm_login boolean works currently, but I may be mistaken. -- Yeah, setting init_upstart to on did the trick! THANKS A LOT! Do you know why this prevents the user from logging in through ssh even if selinux is set to permissive? Ok, so the million dollar question is "where do I set 'init_upstart=1'"? It's not clear from context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.

    Read the article

  • Loss of wireless network connectivity when playing video via HDMI cable

    - by Jeff Fohl
    Hi Folks - New to Super User, so I hope this question fits in with the guidelines. Very strange problem I am having, and I am at a loss as to how to continue troubleshooting this one. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels. This is my setup: I have a Dell H2C 730x running Windows 7 64bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6Mbps. The problem I am having occurs when I stream video (or even just browse the web) on the Optoma projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector, and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time. I can move the mouse around on the projector without triggering the problem. I tried pinging my router when I was playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projector, pinging the router gives me response times in the hundreds of milliseconds, or times out completely. So, it clearly is something local to my machine - and not some sort of throttling occurring down the line. I would think that it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas? EDIT: I am now leaning towards the possibility that the HDMI cable is somehow interfering with the wireless network when large amounts of data are being pushed through the cable. Is this possible?

    Read the article

  • Developer hardware autonomy in a managed desktop environment [closed]

    - by Troy Hunt
    I’m looking for some feedback on how developer PCs are managed within environments that have a strict managed desktop policy (normally large corporations). For example, many corporate environments control the installation of software and the deployment of patches and virus updates through a centralised channel. This usually means also dictating the OS version and architecture (32 bit versus 64 bit), which will likely also mean standardised hardware configurations. I’m particularly interested in feedback from developers who work in this sort of environment but have a high degree of autonomy over their machines. This might mean choosing your own hardware vendor, OS type and version, and perhaps how the machines are built and maintained. I have several specific questions: How do you satisfy the needs of security, governance etc whilst maintaining your autonomy? For example, how do you address concerns about keeping virus definitions and OS patches up to date? Do you have a process for gaining exemption from standard desktop builds and if so, what do you need to demonstrate in order to get this? How have you justified this need to the decision makers? Essentially, what is the benefit to your role as a developer of having this degree of autonomy? Thanks very much everyone. Update: There's a great post from Jean-Paul Boodhoo which addresses the developer tool component of the question here: http://blog.jpboodhoo.com/TheFallacyOfTheStandardizedDeveloperMachineimage.aspx

    Read the article

  • Windows 7, Black screen with cursor, impossible to logon

    - by PJC
    First - I have gone through as many possible solutions as I've found here and elsewhere. [Edit - to describe the issue in more detail, the PC appears to boot correctly, but instead of the logon screen, I have been getting a black screen WITH the pointer cursor, and it responds correctly to the mouse. Pressing CTRL-ALT-DEL brings up the logon screen's background, but with no logon area nor any other content. This screen was at the full resolution before I uninstalled the graphics driver in safe mode.] I also just ran a full up-to-date AVG scan from boot media. [Edit - the AVG scan, which was updated to today's virus signatures, found no issues at all.] So - steps I've tried: Safe Boot, Restore from before the issue - done, no help. Uninstall the graphics driver - done, now I have a 1024x768 fallback screen, still no way in. sfc /scannow - only doable from Safe boot obviously, but no change. [Edit - booting from restore media, performing a startup repair and...] Restore further back - restored to 2 days ago, and I'd had many reboots since then with no problem. Enable autologin to try to get beyond the login screen - done, doesn't work. It seems the best advice is a complete reinstall, but I really don't want to do that because it'll take 3-4 days to add all the apps I use. Some key points to note - in both states - before and after removing the video driver, I always had a mouse cursor on the screen. CTRL-ALT-DEL flashes up the login background, but no login info. I can (and often do) reinstall from scratch, but was at a fairly stable state before this, and would prefer not to. -Paul

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
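    As an illustration of the version-tracking idea above, here is a minimal, hedged migration runner in Python. It stands in for what tools like TSqlMigrations or RoundHouse do rather than reproducing either; sqlite3 is used only to keep the sketch self-contained, and the naming scheme (001_create_users.sql, 002_..., ordered by their leading number) is an assumption:

        import sqlite3
        from pathlib import Path

        def migrate(db_path, scripts_dir):
            cn = sqlite3.connect(db_path)
            # the schema-changes tracking table the text mentions
            cn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
            current = cn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
            for script in sorted(Path(scripts_dir).glob("*.sql")):
                version = int(script.name.split("_")[0])  # leading number is the version
                if version <= current:
                    continue                              # already applied
                cn.executescript(script.read_text())      # apply the change script
                cn.execute("INSERT INTO schema_version VALUES (?)", (version,))
                cn.commit()
            cn.close()

        migrate("dev.db", "migrations")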
Say No to Shared Dev DBs Simply put, you wouldn’t dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on. First prod release Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful you can schedule this to be run on production. Evaluation After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Leave no stone unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone; a good example was adding transactional support to TSqlMigrations. We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors! Another good example is generating change scripts. We had been manually making these for months. I found an open source project called Open DB Diff and integrated this with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else with development, never be afraid to try to find tools to make your job easier! Enjoy -Wes

    Read the article

  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you that have been waiting patiently (and not so patiently) I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info) and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it. 1. The majority of our speaker/event funding is going into the Regional Speakers Bureau.  The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high impact events and areas that need some community building love from INETA. These will be identified and handled on a case by case basis, and may include more than just user group events. 2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can. 3. It's not a farm team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words.  Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you. 4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this: (all distances listed are based on a round trip) Distances < 120 miles = $0 121 miles - 240 miles = $50 (effectively 1 to 2 hours, each way) 241 miles - 360 miles = $100 (effectively 2 to 3 hours, each way) 361 miles - 480 miles = $200 (effectively 3 to 4 hours, each way) For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case by case basis. 5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple 1 line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well. 6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution to every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier. Our way of saying thanks for everything you do. 7. Sounds good so far, what's the catch? There's always a catch, right? In this case there are two of them: 1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance. 
2) Anyone can register and use the website to line up speaking engagements with user groups, however you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the User Group leaders after the meeting has taken place. 8. Involvement by the User Group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding. 9. What about Canada? We're definitely working on that. Unfortunately nothing new to report on that front, other than to say that we're trying. So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here. Thanks, Chris G. Williams INETA Board of Directors
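    For what it's worth, the assistance tiers in point 4 reduce to a simple lookup on round-trip mileage. A small Python sketch; the treatment of the exact boundaries (120 miles itself, and anything beyond 480) is an assumption, since those details were still being finalized:

        def travel_assistance(round_trip_miles):
            if round_trip_miles <= 120:
                return 0          # under roughly two hours of total driving
            if round_trip_miles <= 240:
                return 50
            if round_trip_miles <= 360:
                return 100
            if round_trip_miles <= 480:
                return 200
            return None           # beyond 480 miles: presumably case by case

        print(travel_assistance(300))  # -> 100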

    Read the article

  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption News Summary IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk. News Facts Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights. Automation for Broader Cloud Services Oracle Enterprise Manager 12c Release 4 allows for a rapid enterprise-wide adoption of database, middleware and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance along with greater operational control through a flexible and extensible showback mechanism. Enhanced Database Management A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to rollback changes enables faster adoption of Oracle Database 12c. Expanded Fusion Middleware Management A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolutions of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease. Superior Enterprise-Grade Management Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. 
Support for the latest industry standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision. Supporting Quotes “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.” “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments including mission critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly-resilient cloud services for databases and applications.” “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers to expedite their adoption of cloud computing and prepares them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle. Supporting Resources Oracle Enterprise Manager 12c Video: Cerner Delivers High Performance Private Cloud Video: BIAS Achieves Outstanding Results with Private Cloud Press Release Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • How Estimates Became Quotes

    - by Lee Brandt
    It’s our fault. Well, not completely, but we haven’t helped the situation any. All of what follows comes from my own experiences which, from talking to lots of other developers about it, seems to be pretty much par for the course. Where We Started When we first started estimating, we estimated pretty clearly. We would try to imagine something we’d done that was similar to the project being estimated and we’d toss it about in our heads a bit and see how much bigger or smaller we thought this new thing was, and add or subtract accordingly. We wouldn’t spend too much time on it, because we wanted to get to writing the software. Eventually, we’d come across some huge problem that there was no way we could’ve known about ahead of time. Either we didn’t see this thing, or we didn’t realize that this particular version of a problem would be so… problematic. We usually call this “not knowing what we don’t know”. It’s unavoidable. We just can’t know. Until we wade in and start putting some code together, there are just some things we won’t know… and some things we don’t even know that we don’t know. Y’know? So what happens? We go over budget. Project managers scream and dance the dance of the stressed-out project manager, and there is nothing we can do (or could’ve done) about it. We didn’t know. We thought about it for a bit and we didn’t see this herculean task sitting in the middle of our nice quiet project, and it has bitten us in the rear end. We now know how to handle this in the future, though. We will take some more time to pick around the requirements and discover all those things we don’t know. We’ll do some prototyping, we’ll read some blogs about similar projects, we’ll really grill the customer with questions during the requirements gathering phase. We’ll keep asking “what else?” until they shove us down the stairs. We’ll take our time and uncover it all. We Learned, But Good The next time comes, and you know what happens? We do it. We grill the customer for weeks and prototype and read and research and we estimate everything down to the last button on the last form. Know what that gets us? It gets us three months of wasted time, and our estimate will still be off. Possibly off by a factor of four. WTF, mate? No way we could be surprised by something! We uncovered every particle. We turned every stone. How is it we still came across unknowns? Because we STILL didn’t know what we didn’t know. How could we? We didn’t know to ask. The worst part is, we’ve now convinced the product owner that this is NOT an estimate. It is a solid number based on massive research and an endless number of questions that they answered. There is absolutely no way you don’t know everything there is to know about this project now. No way there is anything you haven’t uncovered. And their faith in that “Esti-Quote” goes through the roof. When the project goes over this time, they might even begin to question whether or not you know what you’re doing. Who could blame them? You drilled them for weeks about every little thing, and when they complained about all the questions, you told them you wanted to uncover everything so there would be no surprises. So we set them up to fail. Guess, Then Plan We had a chance. At the beginning we could have just said, “That’s just a gut-feeling estimate, based on my past experience with similar projects.
There could still be surprises.” If we spend SOME time doing SOME discovery and then bounce that against our own past experiences, we can come up with a fairly healthy estimate. We can then help the product owner understand that an estimate is a guess. Sure, it’s an educated guess, but it is still a guess. If we get it right it will be almost completely luck. Then, we help them to plan the development by taking that guess (yes, they still need the guess for planning purposes) and start measuring early and often to see if we still think we are right. We should adjust the estimate and alert the product owner as soon as we see problems (bad news does not age well) and we should be able to see any problems immediately if we are constantly measuring our pace. In lean software, we start with that guess and begin measuring cycle times immediately. Then we can make projections based on those cycle times and compare them to our estimate. This constant feedback is the best way to ensure that there are no surprises at the END of the project. There will still be surprises, but we’ll see them sooner and have a better understanding of how they will affect our overall timeline. What do you think?
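    To make that feedback loop concrete, here is a hedged Python sketch that projects a finish date from observed cycle times and flags the estimate early. The numbers are hypothetical, and it naively assumes one work item in flight at a time:

        from datetime import date, timedelta
        from statistics import mean

        completed_cycle_days = [3, 5, 4, 6, 4]  # hypothetical cycle times of finished items
        remaining_items = 20
        estimated_finish = date(2010, 9, 1)     # the original educated guess

        # naive projection: average observed cycle time times the remaining items
        projected = date.today() + timedelta(days=mean(completed_cycle_days) * remaining_items)

        if projected > estimated_finish:
            print("Pace projects", projected, "which is past the estimate; tell the product owner now.")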

    Read the article

  • Libnoise producing completely random noise

    - by Doodlemeat
    I am using libnoise in C++ and I have some problems with getting coherent noise. I mean, the noise produced now is completely random and it doesn't look like noise. Here's a link to the image produced by my game. I am dividing the map into several chunks, but I can't seem to find any problem doing that, since libnoise supports tileable noise. The code can be found below. Every chunk is 8x8 tiles large. Every tile is 64x64 pixels. I am also providing a link to download the entire project. It was made in Visual Studio 2013. Download link This is the code for generating a chunk Chunk *World::loadChunk(sf::Vector2i pPosition) { sf::Vector2i chunkPos = pPosition; pPosition.x *= mChunkTileSize.x; pPosition.y *= mChunkTileSize.y; sf::FloatRect bounds(static_cast<sf::Vector2f>(pPosition), sf::Vector2f(static_cast<float>(mChunkTileSize.x), static_cast<float>(mChunkTileSize.y))); utils::NoiseMap heightMap; utils::NoiseMapBuilderPlane heightMapBuilder; heightMapBuilder.SetSourceModule(mNoiseModule); heightMapBuilder.SetDestNoiseMap(heightMap); heightMapBuilder.SetDestSize(mChunkTileSize.x, mChunkTileSize.y); heightMapBuilder.SetBounds(bounds.left, bounds.left + bounds.width - 1, bounds.top, bounds.top + bounds.height - 1); heightMapBuilder.Build(); Chunk *chunk = new Chunk(this); chunk->setPosition(chunkPos); chunk->buildChunk(&heightMap); chunk->setTexture(&mTileset); mChunks.push_back(chunk); return chunk; } This is the code for building the chunk void Chunk::buildChunk(utils::NoiseMap *pHeightMap) { // Resize the tiles space mTiles.resize(pHeightMap->GetWidth()); for (int x = 0; x < mTiles.size(); x++) { mTiles[x].resize(pHeightMap->GetHeight()); } // Set vertices type and size mVertices.setPrimitiveType(sf::Quads); mVertices.resize(pHeightMap->GetWidth() * pHeightMap->GetWidth() * 4); // Get the offset position of all tiles position sf::Vector2i tileSize = mWorld->getTileSize(); sf::Vector2i chunkSize = mWorld->getChunkSize(); sf::Vector2f offsetPositon = sf::Vector2f(mPosition); offsetPositon.x *= chunkSize.x; offsetPositon.y *= chunkSize.y; // Build tiles for (int x = 0; x < mTiles.size(); x++) { for (int y = 0; y < mTiles[x].size(); y++) { // Sometimes libnoise can return a value over 1.0, better be sure to cap the top and bottom.. float heightValue = pHeightMap->GetValue(x, y); if (heightValue > 1.f) heightValue = 1.f; if (heightValue < -1.f) heightValue = -1.f; // Instantiate a new Tile object with the noise value, this doesn't do anything yet..
mTiles[x][y] = new Tile(this, pHeightMap->GetValue(x, y)); // Get a pointer to the current tile's quad sf::Vertex *quad = &mVertices[(y + x * pHeightMap->GetWidth()) * 4]; quad[0].position = sf::Vector2f(offsetPositon.x + x * tileSize.x, offsetPositon.y + y * tileSize.y); quad[1].position = sf::Vector2f(offsetPositon.x + (x + 1) * tileSize.x, offsetPositon.y + y * tileSize.y); quad[2].position = sf::Vector2f(offsetPositon.x + (x + 1) * tileSize.x, offsetPositon.y + (y + 1) * tileSize.y); quad[3].position = sf::Vector2f(offsetPositon.x + x * tileSize.x, offsetPositon.y + (y + 1) * tileSize.y); // find out which type of tile to render, atm only air or stone TileStop *tilestop = mWorld->getTileStopAt(heightValue); sf::Vector2i texturePos = tilestop->getTexturePosition(); // define its 4 texture coordinates quad[0].texCoords = sf::Vector2f(texturePos.x, texturePos.y); quad[1].texCoords = sf::Vector2f(texturePos.x + 64, texturePos.y); quad[2].texCoords = sf::Vector2f(texturePos.x + 64, texturePos.y + 64); quad[3].texCoords = sf::Vector2f(texturePos.x, texturePos.y + 64); } } } All the code that uses libnoise in some way are World.cpp, World.h and Chunk.cpp, Chunk.h in the project.
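    A hedged observation on the symptom: with SetBounds spanning one noise unit per destination sample (an 8x8 map over 8-unit-wide bounds), a coherent-noise module is sampled so coarsely that adjacent samples are effectively uncorrelated, which looks exactly like white noise. Shrinking the bounds, or lowering the module's frequency, restores coherence. The toy 1-D value noise below (plain Python, not libnoise) shows the effect of the sampling step:

        import random

        random.seed(42)
        lattice = [random.uniform(-1, 1) for _ in range(256)]  # random values at integer points

        def smoothstep(t):
            return t * t * (3 - 2 * t)

        def value_noise(x):
            i = int(x) % 255
            t = x - int(x)
            return lattice[i] + smoothstep(t) * (lattice[i + 1] - lattice[i])

        fine = [round(value_noise(i * 0.05), 2) for i in range(8)]   # small step: smooth, correlated
        coarse = [round(value_noise(i * 1.0), 2) for i in range(8)]  # unit step: raw lattice, looks random
        print(fine)
        print(coarse)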

    Read the article

  • JavaDay Taipei 2014 Trip Report

    - by reza_rahman
    JavaDay Taipei 2014 was held at the Taipei International Convention Center on August 1st. Organized by Oracle University, it is one of the largest Java developer events in Taiwan. This was another successful year for JavaDay Taipei with a fully sold out venue packed with youthful, energetic developers (this was my second time at the event and I have already been invited to speak again next year!). In addition to Oracle speakers like me, Steve Chin and Naveen Asrani, the event also featured a bevy of local speakers including Taipei Java community leaders. Topics included Java SE, Java EE, JavaFX, cloud and Big Data. It was my pleasure and privilege to present one of the opening keynotes for the event. I presented my session on Java EE titled "JavaEE.Next(): Java EE 7, 8, and Beyond". I covered the changes in Java EE 7 as well as what's coming in Java EE 8. I demoed the Cargo Tracker Java EE BluePrints. I also briefly talked about Adopt-a-JSR for Java EE 8. The slides for the keynote are below (click here to download and view the actual PDF). In the afternoon I did my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The talk was completely packed. The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end. I finished off Java Day Taipei with my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE" (this was the last session of the conference). The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts -- a birds-eye view of the NoSQL landscape, how to use NoSQL via a JPA centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: Using NoSQL with ~JPA, EclipseLink and Java EE from Reza Rahman The JPA based demo is available here, while the CDI based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running. After the event the Oracle University folks hosted a reception in the evening which was very well attended by organizers, speakers and local Java community leaders. I am extremely saddened by the fact that this otherwise excellent trip was scarred by terrible tragedy. After the conference I joined a few folks for a hike on the Maokong Mountain on Saturday. The group included friends in the Taiwanese Java community including Ian and Robbie Cheng. Without warning, fatal tragedy struck on a remote part of the trail. Despite best efforts by us, the excellent Taiwanese Emergency Rescue Team and World class Taiwanese physicians we were unable to save our friend Robbie Cheng's life. Robbie was just thirty-four years old and is survived by his younger brother, mother and father.
Being the father of a young child myself, I can only imagine the deep sorrow that this senseless loss unleashes. Robbie was a key member of the Taiwanese Java community and a Java Evangelist at Sun at one point. Ironically the only picture I was able to take of the trail was mere moments before tragedy. I thought I should place him in that picture in profoundly respectful memoriam: Perhaps there is some solace in the fact that there is something inherently honorable in living a bright life, dying young and meeting one's end on a beautiful remote mountain trail few venture to behold let alone attempt to ascend in a long and tired lifetime. Perhaps I'd even say it's a fate I would not entirely regret facing if it were my own. With that thought in mind it seems appropriate to me to quote some lyrics from the song "Runes to My Memory" by legendary Swedish heavy metal band Amon Amarth idealizing a fallen Viking warrior cut down in his prime: "Here I lie on wet sand I will not make it home I clench my sword in my hand Say farewell to those I love When I am dead Lay me in a mound Place my weapons by my side For the journey to Hall up high When I am dead Lay me in a mound Raise a stone for all to see Runes carved to my memory" I submit my deepest condolences to Robbie's family and hope my next trip to Taiwan ends in a less somber note.

    Read the article

  • Html.DropDownListFor<> and complex object in ASP.NET MVC2

    - by dagda1
    Hi, I am looking at ASP.NET MVC2 and trying to post a complex object using the new EditorFor syntax. I have a FraudDto object that has a FraudCategory child object and I want to set this object from the values that are posted from the form. Posting a simple object is not a problem but I am struggling with how to handle complex objects with child objects. I have the following parent FraudDto object which I am binding to on the form: public class FraudDto { public FraudCategoryDto FraudCategory { get; set; } public List<FraudCategoryDto> FraudCategories { get; private set; } public IEnumerable<SelectListItem> FraudCategoryList { get { return FraudCategories.Select(t => new SelectListItem { Text = t.Name, Value = t.Id.ToString() }); } The child FraudCategoryDto object looks like this: public class FraudCategoryDto { public int Id { get; set; } public string Name { get; set; } } On the form, I have the following code where I want to bind the FraudCategoryDto to the dropdown. The view is of type ViewPage: <td class="tac"> <strong>Category:</strong> </td> <td> <%= Html.DropDownListFor(x => x.FraudCategory, Model.FraudTypeList)%> </td> I then have the following controller code: [HttpPost] public virtual ViewResult SaveOrUpdate(FraudDto fraudDto) { return View(fraudDto); } When the form is posted to the server, the FraudCategory property of the Fraud object is null. Are there any additional steps I need to hook up this complex object? Cheers Paul

    Read the article

  • How to extract $lastexitcode from c# powershell script execution.

    - by scope-creep
    Hi, I've got a script executing in C# using the powershell async execution code on Code Project here: http://www.codeproject.com/KB/threads/AsyncPowerShell.aspx?display=PrintAll&fid=407636&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=2130851#xx2130851xx I need to return the $lastexitcode, and Jean-Paul describes how you can use a custom pshost class to return it. I can't find any method or property in pshost that returns the exit code. This engine I have needs to ensure that the script executes correctly. Any help would be appreciated. regards Bob. It's the $lastexitcode and the $? variables I need to bring back. Hi, Finally answered. I found out about the $host variable. It implements a callback into the host, specifically a custom PSHost object, enabling you to return the $lastexitcode. Here is a link to an explanation of $host: http://mshforfun.blogspot.com/2006/08/do-you-know-there-is-host-variable.html It seems to be obscure and badly documented, as usual with powershell docs. Using point 4, calling $host.SetShouldExit(1) returns 1 to the SetShouldExit method of pshost, as described here: http://msdn.microsoft.com/en-us/library/system.management.automation.host.pshost.setshouldexit(VS.85).aspx It really depends on defining your own exit code definition; 0 and 1 suffice, I guess. regards Bob.

    Read the article

  • What do you mean by the expressiveness in a programming language?

    - by prosseek
    I see a lot of the word 'expressiveness' when people want to stress that one language is better than another. But I don't see exactly what they mean by it. Is it the verboseness/succinctness? I mean, if one language can write something down shorter than another, does that mean expressiveness? Please refer to my other question - http://stackoverflow.com/questions/2411772/article-about-code-density-as-a-measure-of-programming-language-power Is it the power of the language? Paul Graham says that one language is more powerful than another language in the sense that one language can do what the other language can't do (for example, LISP can do something with macros that the other language can't do). Is it just something that makes life easier? Regular expressions can be one of the examples. Is it a different way of solving the same problem: something like SQL to solve the search problem? What do you think about the expressiveness of a programming language? Can you show the expressiveness using some code? What's the relationship between expressiveness and DSLs? Do people come up with DSLs to get expressiveness?
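    One small, hedged illustration of the succinctness facet of the question, in Python: the same computation written twice, once spelling out the steps and once stating directly what is computed:

        # imperative: how to build the list, step by step
        squares = []
        for n in range(10):
            if n % 2 == 0:
                squares.append(n * n)

        # comprehension: what the list is
        squares = [n * n for n in range(10) if n % 2 == 0]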

    Read the article

  • Mapping interface or abstract class component

    - by Yann Trevin
    Please consider the following simple use case: public class Foo { public virtual int Id { get; protected set; } public virtual IBar Bar { get; set; } } public interface IBar { string Text { get; set; } } public class Bar : IBar { public virtual string Text { get; set; } } And the fluent-nhibernate map class: public class FooMap : ClassMap<Foo> { public FooMap() { Id(x => x.Id); Component(x => x.Bar, m => { m.Map(x => x.Text); }); } } While running any query with this configuration, I get the following exception: NHibernate.InstantiationException: "Cannot instantiate abstract class or interface: NHMappingTest.IBar" It seems that NHibernate tries to instantiate an IBar object instead of the Bar concrete class. How do I let Fluent-NHibernate know which concrete class to instantiate when the property returns an interface or an abstract base class? EDIT: Explicitly specifying the type of component by writing Component<Bar> (as suggested by Sly) has no effect and causes the same exception to occur. EDIT2: Thanks to vedklyv and Paul Batum: such a mapping was previously only planned, and is now possible.

    Read the article

  • How to include multiple tables programmaticaly into a Sweave document using R

    - by PaulHurleyuk
    Hello, I want to have a sweave document that will include a variable number of tables in it. I thought the example below would work, but it doesn't. I want to loop over the list foo and print each element as its own table. % \documentclass[a4paper]{article} \usepackage[OT1]{fontenc} \usepackage{longtable} \usepackage{geometry} \usepackage{Sweave} \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in} \listfiles \begin{document} <<label=start, echo=FALSE, include=FALSE>>= startt<-proc.time()[3] library(RODBC) library(psych) library(xtable) library(plyr) library(ggplot2) options(width=80) #Produce some example data, here I'm creating some dummy dataframes and putting them in a list foo<-list() foo[[1]]<-data.frame(GRP=c(rep("AA",10), rep("Aa",10), rep("aa",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[2]]<-data.frame(GRP=c(rep("BB",10), rep("bB",10), rep("BB",10)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[3]]<-data.frame(GRP=c(rep("CC",12), rep("cc",18)), X1=rnorm(30), X2=rnorm(30,5,2)) foo[[4]]<-data.frame(GRP=c(rep("DD",10), rep("Dd",10), rep("dd",10)), X1=rnorm(30), X2=rnorm(30,5,2)) @ \title{Document to test putting a variable number of tables into a sweave Document} \author{"Paul Hurley"} \maketitle \section{Text} This document was created on \today, with \Sexpr{print(version$version.string)} running on a \Sexpr{print(version$platform)} platform. It took approx \input{time} sec to process. <<label=test, echo=FALSE, results=tex>>= cat("Foo") @ that was a test, so is this <<label=table1test, echo=FALSE, results=tex>>= print(xtable(foo[[1]])) @ \newpage \subsection{Tables} <<label=Tables, echo=FALSE, results=tex>>= for(i in seq(foo)){ cat("\n") cat(paste("Table_",i,sep="")) cat("\n") print(xtable(foo[[i]])) cat("\n") } #cat("<<label=endofTables>>= ") @ <<label=bye, include=FALSE, echo=FALSE>>= endt<-proc.time()[3] elapsedtime<-as.numeric(endt-startt) @ <<label=elapsed, include=FALSE, echo=FALSE>>= fileConn<-file("time.tex", "wt") writeLines(as.character(elapsedtime), fileConn) close(fileConn) @ \end{document} Here, the table1test chunk works as expected and produces a table based on the dataframe in foo[[1]], however the loop only produces Table(underscore)1.... Any ideas what I'm doing wrong?

    Read the article

  • Multiplication algorithm for arbitrary-precision (bignum) integers

    - by nn
    Hi, I'm writing a small bignum library for a homework project. I am to implement Karatsuba multiplication, but before that I would like to write a naive multiplication routine. I'm following a guide written by Paul Zimmermann titled "Modern Computer Arithmetic" which is freely available online. On page 4, there is a description of an algorithm titled BasecaseMultiply which performs grade-school multiplication. I understand steps 2 and 3, where B^j is a digit shift of 1, j times. But I don't understand steps 1 and 3, where we have A*b_j. How is this multiplication meant to be carried out if the bignum multiplication hasn't been defined yet? Would the operation "*" in this algorithm just be the repeated addition method? Here are the parts I have written thus far. I have unit tested them, so they appear to be correct for the most part: The structure I use for my bignum is as follows: #define BIGNUM_DIGITS 2048 typedef uint32_t u_hw; // halfword typedef uint64_t u_w; // word typedef struct { unsigned int sign; // 0 or 1 unsigned int n_digits; u_hw digits[BIGNUM_DIGITS]; } bn; Currently available routines: bn *bn_add(bn *a, bn *b); // returns a+b as a newly allocated bn void bn_lshift(bn *b, int d); // shifts d digits to the left, retains sign int bn_cmp(bn *a, bn *b); // returns 1 if a>b, 0 if a=b, -1 if a<b
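    On the A*b_j question specifically: in BasecaseMultiply that product is not a full bignum-by-bignum multiply, it is a scan over the digits of A multiplying each by the single digit b_j and propagating a carry, so nothing circular is presupposed. A hedged Python sketch of the whole basecase loop over little-endian base-2**32 digit lists (mirroring the u_hw digits above; the inner loop is the A*b_j step):

        BASE = 2 ** 32

        def basecase_multiply(a, b):
            result = [0] * (len(a) + len(b))
            for j, bj in enumerate(b):        # the B**j shift is just the offset j
                carry = 0
                for i, ai in enumerate(a):    # A * b_j: one digit of A at a time, with carry
                    total = result[i + j] + ai * bj + carry
                    result[i + j] = total % BASE
                    carry = total // BASE
                result[j + len(a)] += carry
            return result

        # (2**32 - 1) squared: two one-digit numbers in, two digits out
        print(basecase_multiply([0xFFFFFFFF], [0xFFFFFFFF]))  # -> [1, 4294967294]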

    Read the article

  • Really "wow" them in the interview

    - by Juliet
    Let me put it to you this way: I'm a top-notch programmer, but a notoriously bad interviewee. I've flunked 3 interviews consecutively because I get so nervous that my voice tightens at least 2 octaves higher and I start visibly shaking -- mind you, I can handle whatever technical questions the interviewer throws at me in that state, but I think it looks bad to come off as a quivering, squeaky-voiced young woman during a job interview. I've just got the personality type of a shy computer programmer. No matter how technical I am, I'm going to get passed up in favor of a smooth talker. I have another interview coming up shortly, and I want to really impress the company. Here are my trouble spots: What can I do to be less nervous during my interview? I always get really excited when I hear I have a face-to-face interview, but get more and more anxious as D-Day (the interview) approaches. My employers want me to explain what I used to do at my prior employment. I'm a very chatty person and tend to talk/squeak for 10 minutes at a time. How long or short should I time my answers? On that note, when I'm explaining what I did at prior jobs, what exactly is my interviewer looking for? At some point, my interviewer will ask "do you have any questions for me while you're here?" I should, but what kinds of questions should I ask to show that I'm interested in being employed? My interviewer always asks why I'm looking for a new job. The real reason is that my present salary is $27K/yr [Edit to add: and I've yet to get a raise since I started], and I want to make more money -- otherwise the work environment is fine. How do I sugarcoat "I want to make more money" into something that sounds nicer? I have only one prior programmer job, and I've worked there for 18 months, but I have the skill of someone with 4 to 6 years of experience. What can I say to compete against applicants with more work experience? I took a low-paying $27K/yr programming job just to get my foot in IT, and I've been trying to leverage that job as a stepping stone to better opportunities. I get interviews because I consistently out-score senior-level developers in aptitude tests, and my desired salary range is right in the ballpark of what most companies want to offer. Unfortunately, while I've been programming as a hobby for 10 years and I'm geared to graduate with my BA in Comp Sci in May '09, employers see me as a junior-level programmer with no degree. I want to prove them wrong and get a job that matches my skill level. I'd appreciate any advice anyone has to offer, especially if they can help me get a better job in the process.

    Read the article

  • An implementation of Sharir's or Aurenhammer's deterministic algorithm for calculating the intersection/union of 'N' discs

    - by RGrey
    The problem of finding the intersection/union of 'N' discs/circles on a flat plane was first proposed by M. I. Shamos in his 1978 thesis: Shamos, M. I. “Computational Geometry” Ph.D. thesis, Yale Univ., New Haven, CT 1978. Since then, in 1985, Micha Sharir presented an O(n log^2 n) time and O(n) space deterministic algorithm for the disc intersection/union problem (based on modified Voronoi diagrams): Sharir, M. Intersection and closest-pair problems for a set of planar discs. SIAM J. Comput. 14 (1985), pp. 448-468. In 1988, Franz Aurenhammer presented a more efficient O(n log n) time and O(n) space algorithm for circle intersection/union using power diagrams (generalizations of Voronoi diagrams): Aurenhammer, F. Improved algorithms for discs and balls using power diagrams. Journal of Algorithms 9 (1988), pp. 151-161. Earlier, in 1983, Paul G. Spirakis also presented an O(n^2) time deterministic algorithm, and an O(n) probabilistic algorithm: Spirakis, P.G. Very Fast Algorithms for the Area of the Union of Many Circles. Rep. 98, Dept. Comput. Sci., Courant Institute, New York University, 1983. I've been searching for any implementations of the algorithms above, focusing on computational geometry packages, and I haven't found anything yet. As neither appears trivial to put into practice, it would be really neat if someone could point me in the right direction!

    Read the article

  • XML generation with java, trying to copy the whole node

    - by Pawel Mysior
    I've got an xml document that is filled with people (the parent node is "students", and there are 25+ "student" nodes). Each student looks like this: <student> <name></name> <surname></surname> <grades> <subject name=""> <small_grades></small_grades> <final_grade></final_grade> </subject> <subject name=""> <small_grades></small_grades> <final_grade></final_grade> </subject> </grades> <average></average> </student> Basically, what I want to do (I've been asked to do) is to make a program that would get the 3 students with the best average. While parsing the document and getting the three best students isn't too difficult, the XML generation is a pain in the ass. Right now, what I'm doing is getting every single node from a student and recreating it in a new file. Is there a way to copy the whole student node with everything that's in it? Regards, Paul
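    In Java's DOM API the usual answer is a deep copy: node.cloneNode(true) within one document, or targetDoc.importNode(studentNode, true) when building a new document. As a compact, language-neutral sketch of the same select-and-copy idea, here it is with Python's ElementTree; file names are placeholders:

        import copy
        import xml.etree.ElementTree as ET

        tree = ET.parse("students.xml")
        students = tree.getroot().findall("student")

        def average(student):
            return float(student.findtext("average") or 0)

        top3 = sorted(students, key=average, reverse=True)[:3]

        out_root = ET.Element("students")
        for s in top3:
            out_root.append(copy.deepcopy(s))  # copies the whole <student> node, children and all
        ET.ElementTree(out_root).write("top3.xml")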

    Read the article

  • Adding a guideline to the editor in Visual Studio

    - by xsl
    Introduction I've always been searching for a way to make Visual Studio draw a line after a certain number of characters: Below is a guide to enable these so-called guidelines for various versions of Visual Studio. Visual Studio 2010 Install Paul Harrington's Editor Guidelines extension. Open the registry at HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Text Editor and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other one (80) is the column at which the line will be displayed. Or install the Guidelines UI extension, which will add entries to the editor's context menu for adding/removing the entries without needing to edit the registry directly. The current disadvantage of this method is that you can't specify the column directly. Visual Studio 2008 and Other Versions If you are using Visual Studio 2008, open the registry at HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other one (80) is the column at which the line will be displayed. The vertical line will appear when you restart Visual Studio. This trick also works for various other versions of Visual Studio, as long as you use the correct path: 2003: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\7.1\Text Editor 2005: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor 2008: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor 2008 Express: HKEY_CURRENT_USER\Software\Microsoft\VCExpress\9.0\Text Editor This also works in SQL Server 2005 and probably other versions.

    Read the article

  • Learning to create beautiful /next-generation GUI

    - by ShaChris23
    I really want to create a stunning-looking GUI desktop application that looks like, for example: Mac OS X interface Picasa desktop client on windows iPhone apps Office 2007 I've mostly been programming GUIs using Qt/Swing/WinForms and I'm tired of creating such plain-looking GUIs, lol. So I was thinking about diving into stuff like: jQuery WPF/C# iPhone SDK Silverlight Adobe Air/Flex Just to get some ideas on how to create really cool-looking UIs. Does that sound like a good list? Any developers here that could share what platform they use to create very cool-looking desktop apps? On a side note, I really wonder what developers at Apple / Microsoft use to develop their own cool-looking software. EDIT A lot of responses talk about the importance of usability over "cool-looking"... I totally agree that usability and simplicity are the most important aspects of user interface design. I've been doing GUI development for a while now (3 years), so I understand that. But a cool-looking UI also improves the user experience + it could make a big difference in whether or not your software sells. I mean, otherwise why would Microsoft/Apple try to make their OS UI look "cooler" every time there's a new version? Why would websites like pragprog.com or versionsapp.com make their sites look like that? Basically you kill 2 birds with one stone: a stunning-looking UI + super usability (because it looks simple and intuitive). That is what I'm striving for. And as far as I know, I cannot achieve that using Qt/WinForms. Most of the books I have read just show you how to make average-looking (read: 1990's) UIs. I want to learn how to create cool-looking UIs. And the only place I see cool-looking UIs these days is the technologies I list above. I'm not enamored with any technology; I just want to know how things are done in other technologies to see if I could apply them to the technology I'm using, or see if I could use those technologies in my line of work. An example: if I were to pick between this UI and this UI, I probably would pick the latter, if just based on looks alone. Functionally, they are just the same UI; they both allow you to keep track of your time. They both contain buttons and textboxes, etc. But the fact that they look different also differentiates them in terms of attractiveness. So in all, I think the "icing on the cake" is very important. I would say it's the thing you strive for after you are certain you have a totally intuitive, usable UI.

    Read the article

  • I didn't say anything treasonous, quit putting words in my mouth

    - by You guys Lie
    Where at all did I say ANYTHING about organizing some kind of anti-government activity? Nowhere. I do not give a flying fuck about America in case you hadn't realized, I'm talking about what I'm going to do when Jesus Christ returns. Besides, why would I make it easy for them to throw me in a black site prison? Use common sense, sheeple. In the end I guess it doesn't matter anyways, since I do not recognize the government as my ultimate authority. Police/Army/Whatever-- funny outfits and a shiny badge don't make you better than me. Allah will do away with your kind. However, as long as they're with me (as they semi-currently are), we have no problems. I fear the government will have other plans to control the population when things start to further decline, and that is when they will run into problems with me and mine, and probably a large percentage of the public. You'd have to be a fucking fool to think we'd fall into anarchy immediately, given the vast resources and blind loyalty of this country. Saying there are practical limits to free speech for anything I have said in this thread is not only ignorant, it is unpatriotic. I should be being celebrated as the modern day Paul Revere for warning you people about our impending doom. You would do well to study the foundations of this country. Nothing I have said is treasonous at all. Besides, the DoD knows I'm harmless until shit pops off, they have bigger fish to fry right now, go forward. Keep in mind, I don't personally have to do anything to overthrow the government. I just said, I would not advocate for any paramilitary organizations at this time, only if/after we dissolve into (more) tyranny. I am not a terrorist, like some soldiers in Iraq. The machine is doing a great job of ruining itself, while I get to comfortably laugh at it on the nightly news knowing that I'm ready for it to shut down.

    Read the article

  • Database not updating after UPDATE SQL statement in ASP.net

    - by Ronnie
    I currently have a problem attempting to update a record within my database. I have a webpage that displays a user's details in text boxes; these details are taken from the session upon login. The aim is to update the details when the user overwrites the current text in the text boxes. I have a function that runs when the user clicks the 'Save Details' button and it appears to work, as I have tested the number of rows affected and it outputs 1. However, when checking the database, the record has not been updated and I am unsure as to why. I've checked the SQL statement that is being processed by displaying it as a label and it looks like this: UPDATE [users] SET [email]=@email, [firstname]=@firstname, [lastname]=@lastname, [promo]=@promo WHERE ([users].[user_id] = 16) The function and other relevant code is: Sub Button1_Click(sender As Object, e As EventArgs) changeDetails(emailBox.text, firstBox.text, lastBox.text, promoBox.text) End Sub Function changeDetails(ByVal email As String, ByVal firstname As String, ByVal lastname As String, ByVal promo As String) As Integer Dim connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0; Ole DB Services=-4; Data Source=C:\Documents an"& _ "d Settings\Paul Jarratt\My Documents\ticketoffice\datab\ticketoffice.mdb" Dim dbConnection As System.Data.IDbConnection = New System.Data.OleDb.OleDbConnection(connectionString) Dim queryString As String = "UPDATE [users] SET [email]=@email, [firstname]=@firstname, [lastname]=@lastname, "& _ "[promo]=@promo WHERE ([users].[user_id] = " + session.contents.item("ID") + ")" Dim dbCommand As System.Data.IDbCommand = New System.Data.OleDb.OleDbCommand dbCommand.CommandText = queryString dbCommand.Connection = dbConnection Dim dbParam_email As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter dbParam_email.ParameterName = "@email" dbParam_email.Value = email dbParam_email.DbType = System.Data.DbType.[String] dbCommand.Parameters.Add(dbParam_email) Dim dbParam_firstname As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter dbParam_firstname.ParameterName = "@firstname" dbParam_firstname.Value = firstname dbParam_firstname.DbType = System.Data.DbType.[String] dbCommand.Parameters.Add(dbParam_firstname) Dim dbParam_lastname As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter dbParam_lastname.ParameterName = "@lastname" dbParam_lastname.Value = lastname dbParam_lastname.DbType = System.Data.DbType.[String] dbCommand.Parameters.Add(dbParam_lastname) Dim dbParam_promo As System.Data.IDataParameter = New System.Data.OleDb.OleDbParameter dbParam_promo.ParameterName = "@promo" dbParam_promo.Value = promo dbParam_promo.DbType = System.Data.DbType.[String] dbCommand.Parameters.Add(dbParam_promo) Dim rowsAffected As Integer = 0 dbConnection.Open Try rowsAffected = dbCommand.ExecuteNonQuery Finally dbConnection.Close End Try labelTest.text = rowsAffected.ToString() if rowsAffected = 1 then labelSuccess.text = "* Your details have been updated and saved" else labelError.text = "* Your details could not be updated" end if End Function Any help would be greatly appreciated.

    Read the article

  • What does it mean that "Lisp can be written in itself?"

    - by Mason Wheeler
    Paul Graham wrote that "The unusual thing about Lisp-- in fact, the defining quality of Lisp-- is that it can be written in itself." But that doesn't seem the least bit unusual or definitive to me. ISTM that a programming language is defined by two things: its compiler or interpreter, which defines the syntax and the semantics for the language by fiat, and its standard library, which defines to a large degree the idioms and techniques that skilled users will use when writing code in the language. With a few specific exceptions (the non-C# members of the .NET family, for example), most languages' standard libraries are written in that language for two very good reasons: because they will share the same set of syntactical definitions, function calling conventions, and the general "look and feel" of the language, and because the people who are likely to write a standard library for a programming language are its users, and particularly its designer(s). So there's nothing unique there; that's pretty standard. And again, there's nothing unique or unusual about a language's compiler being written in itself. C compilers are written in C. Pascal compilers are written in Pascal. Mono's C# compiler is written in C#. Heck, even some scripting languages have implementations "written in themselves". So what does it mean that Lisp is unusual in being written in itself?

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87  | Next Page >