Search Results

Search found 7976 results on 320 pages for 'sl dev'.

  • How can I find out the device ID of my unmounted DVD?

    - by fred.bear
    When I put a DVD into the DVD drive, it appears in Nautilus Places, but is not automatically mounted (this is by personal choice). In this unmounted state, mount (of course) reports nothing, and likewise for df, but Nautilus is aware of the DVD hardware unit and has read the label, which it shows in Places. So it seems to me that Nautilus has already accessed the DVD device (did it temporarily mount it?). The main point of my question was how to find the device ID of an unmounted device, but as I've been writing this, I now think it may not be as simple as that. This issue came up because I wanted to test the command cat iso-pieces.* | growisofs -Z /dev/dvd=/dev/stdin, but then realized that I didn't know how to get my DVD's device ID. And does the above command require a mounted device, or does it write directly to the device? As you can see, I'm a bit vague about devices :) Come to think of it, maybe Nautilus read the DVD device directly, because when all is said and done, something has to read/write directly to it. info growisofs says: "Under Linux it will most likely be an ide-scsi device such as /dev/scd0". How can I find this ID via a script?
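    For the script part, a minimal hedged sketch (assuming a typical udev setup, where the drive appears as a node like /dev/sr0 and udev maintains /dev/dvd and /dev/cdrom symlinks pointing at it):

        lsblk -o NAME,TYPE,LABEL    # optical drives show TYPE "rom"; needs newer util-linux, no mount required
        ls -l /dev/dvd /dev/cdrom   # udev-maintained symlinks, if present
        readlink -f /dev/dvd        # resolves the symlink to the real node, e.g. /dev/sr0

    For what it's worth, growisofs records to the raw, unmounted device node, so no mount is needed for the command above; tools can likewise read a disc's label straight off the device without mounting it, which would explain what Nautilus shows.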

    Read the article

  • How do I partition my hard drive to install Kubuntu?

    - by Xdflames
    I am a complete newbie to partitioning and I would like some guidance here. Can anyone explain what exactly I should be doing? I am installing Kubuntu 12.04 on a laptop currently running Windows 7. My current partitions are:
        /dev/sda
            /dev/sda1  ntfs  104 MB     (35 MB used)
            /dev/sda2  ntfs  319965 MB  (87164 MB used)
    Basically, all I want is a partition for my OS (which will be Kubuntu 12.04) and a partition for all of my data. I also want to start fresh with my hard drive, keeping only what I mentioned here. I am installing Kubuntu from a flash drive (I set it up as a bootable device with Universal-USB-Installer-1.9.0.9), as I do not have any blank CDs/DVDs to burn to. What should I name my partitions? What should I set the type as? What size do they need to be? Edit: My hard drive is 320 GB; I just looked it up in my BIOS. This computer will mainly be used for internet browsing and overall just messing around with the Linux OS.
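    For what it's worth, a hedged sketch of one common layout for a 320 GB disk (sizes are suggestions only; in the installer's manual partitioning step you assign a mount point and type rather than a name):

        # Example layout to create in the installer (or from a shell with parted):
        #   /dev/sda1  ext4        ~30 GB   mount point /       (Kubuntu system)
        #   /dev/sda2  linux-swap  ~4 GB                        (roughly RAM-sized)
        #   /dev/sda3  ext4        rest     mount point /home   (your data)
        # Shell equivalent -- this erases the whole disk first:
        sudo parted /dev/sda --script \
            mklabel msdos \
            mkpart primary ext4 1MiB 30GiB \
            mkpart primary linux-swap 30GiB 34GiB \
            mkpart primary ext4 34GiB 100%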

    Read the article

  • What is the recommended way to empty a SSD?

    - by Lekensteyn
    I've just received my new SSD, since the old one died. This Intel 320 SSD supports TRIM. For testing purposes, my dealer put malware, err, Windows on it. I want to get rid of it and install Kubuntu. It does not have to be a "secure wipe"; I just need to empty the disk in the most healthy way. I believe that dd if=/dev/zero of=/dev/sda just fills the blocks with zeroes, thereby costing another write cycle (correct me if I'm wrong). I've seen the answer How to enable TRIM, but it looks like it's suited for clearing empty blocks, not wiping the disk. hdparm seems to be the program to do it, but I'm not sure whether it clears the whole disk or only cleans empty blocks. From its manual page: --trim-sector-ranges For Solid State Drives (SSDs). EXCEPTIONALLY DANGEROUS. DO NOT USE THIS OPTION!! Tells the drive firmware to discard unneeded data sectors, destroying any data that may have been present within them. This makes those sectors available for immediate use by the firmware's garbage collection mechanism, to improve scheduling for wear-leveling of the flash media. This option expects one or more sector range pairs immediately after the option: an LBA starting address, a colon, and a sector count, with no intervening spaces. EXCEPTIONALLY DANGEROUS. DO NOT USE THIS OPTION!! E.g. hdparm --trim-sector-ranges 1000:4 7894:16 /dev/sdz How can I make all blocks appear as empty using TRIM?
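    If the goal is simply "every block discarded, nothing written", a hedged alternative to hand-computed hdparm sector ranges is blkdiscard from newer util-linux releases, which TRIMs the whole device in one pass. It destroys all data, and the device name below is a placeholder:

        sudo blkdiscard /dev/sdX    # discard (TRIM) every sector on the device

    The other common route is an ATA Secure Erase via hdparm's --security-erase, which asks the drive firmware itself to reset all cells; a plain discard, though, writes nothing at all, which is what the question asks for.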

    Read the article

  • Is looking for code examples constantly a sign of a bad developer?

    - by Newly Insecure
    I am a comp sci student with several years of experience in C and C++. For the last few years I've been constantly working with Java/Objective-C doing app dev, and now I have switched to web dev and am mainly focused on Ruby on Rails. I came to the realization that (as with app dev, really) I reference other code wayyyy too much. I constantly google functionality for lots of things I imagine I should be able to do from scratch, and it's really cracked my confidence a bit. Basic fundamentals are not an issue. I hate to use this as an example, but I can run through javabat in both Java/Python at a sprint; obviously not an accomplishment, but what I mean to say is that I think I have a strong base in the fundamentals. I know what I need to use typically, but reference syntax constantly. I would love some advice and input on this, as it has been holding me back pretty solidly in terms of looking for work in this field, even though I'm finishing my degree. My main reason for asking is not really about employment, but more that I don't want to be the only guy at a hackathon not hammering out nonstop code, sitting there with 20 Google/GitHub tabs open, and I have refrained from attending any due to a slight lack of confidence... Is a person a bad developer for constantly looking to code examples for moderate to complex tasks?

    Read the article

  • How do I remove old kernel options listed in GRUB 2? [closed]

    - by user12809
    Possible Duplicate: Is there a way to remove/hide old kernel versions? I installed Ubuntu Tweak in Ubuntu 11.10, went to Janitor, and selected and removed the old kernels that appeared there (3.0.0-12). Now the only linux-image that appears as 'Installed' in Synaptic Package Manager (SPM) is the most recent one (3.0.0-13), which is the one I want. It did not, however, eliminate the kernel listing in GRUB 2. At boot, the following options still appear:
        3.0.0-13-generic
        3.0.0-13-generic (recovery mode)
        3.0.0-12-generic (on /dev/sde5)
        3.0.0-12-generic (recovery mode) (on /dev/sde5)
    And in Terminal, when I change directory (cd) to /boot and then list (ls), I get the following kernels: 3.0.0-13, 2.6.38-12, 2.6.38-8. There is no change when I run sudo update-grub in Terminal. 1) What is /dev/sde5, and where is it located in the file system, so I can delete it? 2) Why the differences between what appears as installed in SPM, what appears at boot in GRUB 2, and what shows when I list the contents of /boot in Terminal? Ultimately, I simply want to remove the 3.0.0-12 kernel options at boot in GRUB 2. What is the best and simplest way to do that?
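    A hedged sketch of what is likely going on and how to confirm it: /dev/sde5 is a separate partition holding another (older) installation, not part of this root filesystem; GRUB's os-prober scans all partitions when update-grub runs and adds entries for whatever it finds, independent of what SPM shows as installed here:

        sudo mkdir -p /mnt/sde5
        sudo mount -o ro /dev/sde5 /mnt/sde5   # read-only: look, don't touch
        ls /mnt/sde5/boot                      # the 3.0.0-12 kernels likely live here
        sudo umount /mnt/sde5

        # If nothing on that partition is needed, stop os-prober adding its
        # entries, then rebuild the menu:
        echo 'GRUB_DISABLE_OS_PROBER=true' | sudo tee -a /etc/default/grub
        sudo update-grub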

    Read the article

  • Install Ubuntu side by side with Windows

    - by Igal
    I'm trying to set up both Windows 7 and Ubuntu 14.04 Desktop on the same machine. I've partitioned the disk into 3 parts, so that I can have: Windows; Ubuntu; a shared partition for files. I've installed Windows 7 on the first partition (which created a small 100 MB partition for boot), so now I have 4 partitions on the disk, which is all it can take. Now I am installing Ubuntu, and it's asking me whether I want to: Install Ubuntu inside Windows 7; Replace Windows 7 with Ubuntu (No!); Something else. I want the Ubuntu installation to go into the partition that I prepared for it. Should I choose "Something else"? If I do so, will I be able to choose which OS to load at boot? Can anyone explain how "Ubuntu inside Windows" works? It says that it will allow me to choose which OS to load at boot, which is desired. UPDATE: When choosing "Something else" I also see an option for "Device for boot loader installation": /dev/sda (the SSD disk itself); /dev/sda1 (the Windows 7 loader, 100 MB partition); /dev/sda4 (one of the other partitions). Which one should I choose there? TIA!
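    For what it's worth, a hedged sketch of the conventional choice on a BIOS/MBR machine ("Something else" gives you the manual partitioner, and GRUB provides the boot menu):

        # From the live session, confirm which partition is which first:
        sudo lsblk -f /dev/sda
        # Boot loader device: pick /dev/sda (the disk's MBR) rather than
        # /dev/sda1 or /dev/sda4; GRUB installed there chain-loads the
        # Windows loader and offers both systems in a menu at each boot.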

    Read the article

  • How often do you look for code examples?

    - by Newly Insecure
    I am a comp sci student with several years of experience in C and C++. For the last few years I've been constantly working with Java/Objective-C doing app dev, and now I have switched to web dev and am mainly focused on Ruby on Rails. I came to the realization that (as with app dev, really) I reference other code wayyyy too much. I constantly google functionality for lots of things I imagine I should be able to do from scratch, and it's really cracked my confidence a bit. Basic fundamentals are not an issue. I hate to use this as an example, but I can run through javabat in both Java/Python at a sprint; obviously not an accomplishment, but what I mean to say is that I think I have a strong base in the fundamentals. I was wondering how often you guys reference other code, and whether it just boils down to a lack of memorization of intricate tasks on my part. I know what I need to use typically, but reference syntax constantly. I would love some advice and input on this, as it has been holding me back pretty solidly in terms of looking for work in this field, even though I'm finishing my degree. My main reason for asking is not really about employment, but more that I don't want to be the only guy at a hackathon not hammering out nonstop code, sitting there with 20 Google/GitHub tabs open, and I have refrained from attending any due to a slight lack of confidence... tl;dr: I google for code examples for basically ALL semi-advanced/advanced functionality. How do I fix this, and do you do this as well?

    Read the article

  • Documents stored on separate internal drive, Ubuntu doesn't notice on startup

    - by PlanoAlto
    My machine has Windows 7 Ultimate x64 and Ubuntu 12.04 LTS running side by side on a single hard drive with the GRUB bootloader, each with 500 GB storage. I keep my personal documents on a separate 1 TB hard drive so they remain isolated from any changes I make to the OS drive, but when Ubuntu starts it does not seem to notice my documents drive. While I've installed and worked with Ubuntu 12.04 Server x32 before, using it as a desktop OS is new to me. I use my documents drive for all of my personal data, including wallpapers and music, so it is imperative that Ubuntu recognize it on startup. Concerning the two specific examples: Ubuntu loads with the default blue-colored desktop instead of my desired picture of the spectacular Carina galaxy. When I right-click the desktop and select "Change Desktop Background", it wakes up from its amnesia and loads the proper background. As for my music, Rhythmbox defaults to an empty library upon reboot, forcing me to reload the settings manually each time. This gets quite tedious, because I certainly can't work to my full potential without my music. The second thing I would like to address is making Ubuntu point the document directories in ~ to their appropriate counterparts on the 1 TB documents drive. I realize that this question is not new, but when I created the symbolic links, they established themselves inside the directories and did not convert the directories themselves into symbolic links. I also prefer not to move the files themselves from their current location on the 1 TB drive. I believe this would also help the Rhythmbox library problem, considering it's a default directory for the music player. Excerpt from fstab:
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sdb6 during installation
        UUID=057ac83e-76ad-460d-86e5-b6d46e9b1d80 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sdb7 during installation
        #UUID=1183df90-23fc-44e4-aa17-4e7c9865d5cb none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0
    That's enough content for one question. I really like the Ubuntu experience so far, since it doesn't treat me like an idiot out of the box (can't say the same for Windows), so I can't wait to hear from the community! Thanks for your help in advance.
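    A hedged sketch for both halves (the UUID, filesystem type, and the /data mount point below are placeholders): give the documents drive a permanent /etc/fstab entry so it is mounted before the desktop session starts, and replace the stock home directories with symlinks rather than creating links inside them:

        sudo blkid                              # note the documents drive's UUID
        # then append a line like this to /etc/fstab (adjust UUID, fs type, options):
        # UUID=XXXX-XXXX  /data  ntfs  defaults  0  2
        sudo mkdir -p /data && sudo mount -a    # test the new entry

        # ln -s into an *existing* directory only nests a link inside it,
        # which matches the symptom described; remove the empty stock
        # directory first, then link:
        rmdir ~/Music
        ln -s /data/Music ~/Music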

    Read the article

  • How to make the internal subwoofer work on an Asus G73JW?

    - by CodyLoco
    I have an Asus G73JW laptop which has an internal subwoofer built in. Currently, the system detects the internal speakers as a 2.0 system (or I can change to 4.0, which is the only other option). I found a bug report here: https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/673051 which discusses the bug, and according to it a fix was sent upstream back at the end of 2010. I would have thought this would have made it into 12.04, but I guess not? I tried following the link given at the very bottom to install the latest ALSA drivers, here: https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules, however I keep running into an error when trying to install:
        sudo apt-get install linux-alsa-driver-modules-$(uname -r)
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-alsa-driver-modules-3.2.0-24-generic
        E: Couldn't find any package by regex 'linux-alsa-driver-modules-3.2.0-24-generic'
    I believe I have added the repository correctly:
        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa
        [sudo] password for codyloco:
        You are about to add the following PPA to your system:
        This PPA will be used to provide testing versions of packages for supported Ubuntu releases.
        More info: https://launchpad.net/~ubuntu-audio-dev/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.7apgZoNrqK --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 4E9F485BF943EF0EABA10B5BD225991A72B194E5
        gpg: requesting key 72B194E5 from hkp server keyserver.ubuntu.com
        gpg: key 72B194E5: public key "Launchpad Ubuntu Audio Dev team PPA" imported
        gpg: Total number processed: 1
        gpg: imported: 1 (RSA: 1)
    And I also ran an update as well (I followed the instructions in the fix above). Any ideas?
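    A hedged first check: the PPA may simply not build that package for this particular kernel. Something like the following (package name taken from the question, not verified against the archive) shows what the PPA actually ships:

        sudo apt-get update                           # re-read the indexes after adding the PPA
        apt-cache search linux-alsa-driver-modules    # list the kernel versions it is built for
        apt-cache policy linux-alsa-driver-modules-$(uname -r)

    If nothing matches the running kernel, the usual fallback is waiting for the PPA to catch up, or booting a kernel version the package does exist for.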

    Read the article

  • Hard drive clicking noise on Acer AO722

    - by Blank
    I'm running Ubuntu 11.10 on an Acer Aspire One 722. Whenever I'm on battery power I get a clicking sound from my hard drive every 5 seconds or so (this does not happen when the laptop is plugged in). I'm dual-booting with Windows 7 and I don't get the clicking sound in Windows. The clicking sound stops when I run the command sudo hdparm -B 254 /dev/sda. Also, according to sudo smartctl -H /dev/sda, my hard drive is healthy. Is this clicking sound something I can just ignore? Or is it a serious problem that will eventually damage my computer? If so, how would I fix it? I have tried adding hdparm -B 254 /dev/sda to my /etc/rc.local file, but I still run into the clicking problem if my computer boots while plugged in and is then unplugged. I'm also finding this fix to be unreliable: sometimes it works, sometimes it does not. Is this a good solution, and is there a better way of doing this? Also, would running my laptop with a -B value of 254 have any negative effects? (I read somewhere about a lower level protecting the hard drive from bumps.)
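    One hedged way to make the setting survive AC/battery transitions on releases of that era: pm-utils runs every executable in /etc/pm/power.d each time the power source changes, so a small hook there reapplies the value where rc.local (which runs only at boot) misses it. Path and filename below are assumptions:

        sudo tee /etc/pm/power.d/keep-hdparm >/dev/null <<'EOF'
        #!/bin/sh
        # Reapplied on every power-source change; 254 keeps aggressive
        # head parking (the clicking) off at a small cost in power use.
        hdparm -B 254 /dev/sda
        EOF
        sudo chmod +x /etc/pm/power.d/keep-hdparm

    As for negative effects: -B 254 mostly disables frequent head parking, so the trade-off is slightly higher power draw and less protection against knocks while running, versus far fewer load/unload cycles on the drive.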

    Read the article

  • Can a Silverlight application authenticate against a local LDAP/Active Directory server?

    - by caryden
    If I have an externally hosted application (www.outside.com) outside the firewall, but users within a company wanted to be able to enable LDAP authentication against their local (behind the firewall) AD server (acting as LDAP) or other LDAP server (call it ldap.inside.com), how would this be done? It seems technically possible, in that when a user tries to log in to outside.com through a client-side Silverlight interface, the SL app could connect to the outside.com login service and be told to authenticate that user against ldap.inside.com. The SL app would then make the calls to ldap.inside.com to authenticate the user. Of course, there is the issue of how the server is securely notified that the client authenticated itself... Has anyone done this?

    Read the article

  • Silverlight local storage

    - by IMHO
    As you may know, Silverlight has support for local storage. We are looking at creating an SL application that will work in offline mode. This application may require quite a bit of data to be cached on the client side. The obvious solution, using local storage with some sort of XML-based structure, won't work, as our PoC showed, due to performance issues. We are looking at several 3rd-party solutions that implement light database engines on top of SL local storage. If you have solved this problem in the past or have any ideas, I would appreciate some pointers and ideas.

    Read the article

  • SensorManager getOrientation doesn't display text and only works in debug mode?

    - by Aidan
    Hi guys, my code appears to crash. I'm trying to set up a SensorManager for getting the orientation of the device. I also have a listener that should update when conditions change. But when I run this code it crashes...
        public void sensor() {
            // Locate the SensorManager using Activity.getSystemService
            sm = (SensorManager) getSystemService(SENSOR_SERVICE);
            sm.getOrientation(mR, mOrientation);
            // Register your SensorEventListener
            sm.registerListener(sl, sm.getDefaultSensor(Sensor.TYPE_ORIENTATION), SensorManager.SENSOR_DELAY_NORMAL);
        }

        private final SensorEventListener sl = new SensorEventListener() {
            @Override
            public void onAccuracyChanged(Sensor sensor, int accuracy) { }

            @Override
            public void onSensorChanged(SensorEvent event) {
                if (event.sensor.getType() == Sensor.TYPE_ORIENTATION) {
                    Global.Orientation = "Orientation is equal to: " + SensorManager.getOrientation(mR, mOrientation);
                }
            }
        };
    sm is also defined as a class-wide variable: SensorManager sm;. I've got some other classes outputting the orientation to a screen, which I know work. The problem is somewhere in these methods. Am I doing it wrong, or is there a better way of doing this?

    Read the article

  • Windows Mobile Silverlight?

    - by eidylon
    Is it possible to develop Silverlight apps to run on WinMo devices? Searching the web, I see in articles from 2008 and 2009 that they were adding Silverlight support in WinMo 6.1; for example: "Internet Explorer Mobile: The new version of Internet Explorer Mobile adds the ability to easily view full-screen Web pages and multimedia on the Web with a smartphone. Microsoft's press release states the new version takes advantage of 'Internet Explorer 6 technologies' and supports industry standards such as H.264, Adobe Flash and Microsoft Silverlight. The update will be available to mobile phone partners in the third quarter of 2008, with the first Windows Mobile phones using the new version expected to be available by the end of 2008." But I have found an SL app supposedly geared for mobile devices (as much as I hate WeatherBug), and when I try going there in PIE on my WinMo 6.1 device, it shows me the little "get Silverlight" image button, but clicking it doesn't do anything. So, what is the story? Is SL/WinMo development possible, or not?

    Read the article

  • Why does a user have to enter "Profile" data to enter data into other tables?

    - by Greg McNulty
    EDIT: It appears the user has to enter some data for his profile, otherwise I get the error below. I guess if there is no profile data, the user cannot continue to enter data into other tables by default? I do not want to make entering user profile data a requirement for using the rest of the site's functionality; how can I get around this? Currently I have been testing everything with the same user and everything has been working fine. However, when I created a new user for the very first time and tried to enter data into my custom table, I get the following error: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_UserData_aspnet_Profile". The conflict occurred in database "C:\ISTATE\APP_DATA\ASPNETDB.MDF", table "dbo.aspnet_Profile", column 'UserId'. The statement has been terminated. Not sure why I am getting this error. I have the user controls set up in ASP.NET 3.5; however, all I am using is my own table, or at least so I am aware. I have a custom UserData table that includes the columns: id, UserProfileID, CL, LL, SL, DateTime (id is the auto-incremented int). The intent is that all users will add their data to this table, and as I mentioned above it has been working fine for the first user I created. However, when I created a new user I am getting this problem. Here is the code that updates the database:
        protected void Button1_Click(object sender, EventArgs e)
        {
            // connect to database
            MySqlConnection database = new MySqlConnection();
            database.CreateConn();
            // create command object
            Command = new SqlCommand(queryString, database.Connection);
            // add parameters (used to prevent SQL injection)
            Command.Parameters.Add("@UID", SqlDbType.UniqueIdentifier);
            Command.Parameters["@UID"].Value = Membership.GetUser().ProviderUserKey;
            Command.Parameters.Add("@CL", SqlDbType.Int);
            Command.Parameters["@CL"].Value = InCL.Text;
            Command.Parameters.Add("@LL", SqlDbType.Int);
            Command.Parameters["@LL"].Value = InLL.Text;
            Command.Parameters.Add("@SL", SqlDbType.Int);
            Command.Parameters["@SL"].Value = InSL.Text;
            Command.ExecuteNonQuery();
        }
    Source Error: Line 84: Command.ExecuteNonQuery();

    Read the article

  • Calling a WCF service (HTTPS, UserName) from Silverlight

    - by Andrew Kalashnikov
    Hello colleagues. I've created a WCF service with transport security over HTTPS. I also use UserName authentication as described at http://msdn.microsoft.com/en-us/library/cc949025.aspx, so I can use my Membership and Role providers. When I work with this service from ASP.NET, all is OK: var client = new RegistratorClient(); client.ClientCredentials.UserName.UserName = ConfigurationManager.AppSettings["registratorLogin"]; client.ClientCredentials.UserName.Password = ConfigurationManager.AppSettings["registratorPassword"]; But in my Silverlight application I can't do the same. When I set up the credentials and call the WCF service, I get the standard browser window asking for username and password. When I enter them, the SL application works well, but this prompt is very annoying. I can't use clientCredentialType="Basic" in my SL config. What should I do to call my WCF service silently? Big thanks.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT. If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great – we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4TB of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about to get my automated database deployment process set up? Can’t I just install one of the many release management tools available and, hey presto, I’m ready? If only it were that simple. Below I list some of the areas where it’s worth spending a little time, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices.

    You and Your Team

    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: from 2008 to present, overall development costs reduced by 40%; number of programs under development increased by 140%; development costs per program down 78%; firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%). But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing – that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay, isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process, and so on. Or, of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager – and possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do – or, if you’re the DBA yourself, get the dev team up to speed with your plans. Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “what we want, to do continuous delivery properly” and “what we’re currently stuck with”. Some of these differences are:
        What we want: each developer with their own dedicated database environment. What we’ve got: a single shared “development” environment, used by everyone at once.
        What we want: an integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: in fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!) – whether you have a full suite of unit tests running is a different question…
        What we want: a separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
        What we want: a proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: hopefully a pre-production box of some sort – but does it match production closely!?
        What we want: a production environment reproducible from source control. What we’ve got: a production box which has drifted significantly from anything in source control.
    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!) and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: dedicated development databases; an integration server used for testing continuous integration and running unit tests (NB: this is the point at which deployments are automatic, without human intervention – each deployment after this point is a one-click, but human, action); QA, where QA engineers use a one-click deployment process to automatically* deploy chosen releases for testing; pre-production, the environment you use to test the production release process; and production. (* A note on the use of the word “automatic”: when carrying out automated deployments, this does not mean that the deployment is happening without human intervention, i.e. that something is just deploying over and over again. It means that the process of carrying out the deployment is automatic, in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.) Actions: Get your environments set up and ready. Set access permissions appropriately. Make sure everyone understands what the environments will be used for (it’s not a “free-for-all”, with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”: (1) Check pre-production matches production (use a schema compare tool, like SQL Compare) – sometimes done by taking a backup from production and restoring it into pre-prod. (2) Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production; this generates a script. (3) A user (generally the DBA) reviews the script – this often involves manually checking updates against a spreadsheet or similar. (4) Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped). (5) If all is working, run the script on production.* (* This assumes there’s no problem with production drifting away from pre-production in the interim time period – i.e. someone hacking something into the production box without going through the proper change management process. This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up at www.sqllighthouse.com if you’re interested in testing early versions.) There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process keys in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process: the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for those boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments. Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don’t, and when things go wrong you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying more frequently than before – no longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. (NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.) There are various options, which we’ll explore in subsequent articles: immediately restore from backup; have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!); or have fallback environments, for example using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will depend primarily on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business work, your appetite for investment, and your requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group – no one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Visual Studio Express 2012 debug mode doesn't work

    - by user2350086
    I have a project in Visual Studio that I have been working on for a while, and I have used the debugger extensively. Recently I changed some settings and I have lost the ability to stop the program and step through code. I can't figure out what I had changed that might have affected this. If I put a breakpoint in my code and try to have the program stop there, it doesn't. The break point shows up white with a red outline. If I hover the mouse over it, it says "The breakpoint will not currently be hit. No executable code of the debugger's target code type is associated with this line. Possible causes include: conditional compilation, compiler optimizations, or the target architecture of this line is not supported by the current debugger code type." I know for a fact that the program executes the code where the breakpoint is because I put the breakpoint in the beginning of the InitializeComponent method. The program displays the window fine, but does not stop at the breakpoint. Yes, I am running in debug mode. It seems as though there is a disconnect between the compiled code and the source code displayed. Does anyone know what that would be, or know which compiler settings I should check to re-enable debugging? Here are the compiler options: /GS /analyze- /W3 /Zc:wchar_t /I"D:\dev\libcurl-7.19.3-win32-ssl-msvc\include" /Zi /Od /sdl /Fd"Debug\vc110.pdb" /fp:precise /D "WIN32" /D "_DEBUG" /D "_UNICODE" /D "UNICODE" /errorReport:prompt /WX- /Zc:forScope /Oy- /clr /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\mscorlib.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Data.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Drawing.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.DataVisualization.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Xml.dll" /MDd /Fa"Debug\" /EHa /nologo /Fo"Debug\" /Fp"Debug\Prog.pch" The linker options are: /OUT:"D:\dev\Prog\Debug\Prog.exe" /MANIFEST /NXCOMPAT /PDB:"D:\dev\Prog\Debug\Prog.pdb" /DYNAMICBASE "curllib.lib" "winmm.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /FIXED:NO /DEBUG /MACHINE:X86 /ENTRY:"Main" /INCREMENTAL /PGD:"D:\dev\Prog\Debug\Prog.pgd" /SUBSYSTEM:WINDOWS /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /ManifestFile:"Debug\Prog.exe.intermediate.manifest" /ERRORREPORT:PROMPT /NOLOGO /LIBPATH:"D:\dev\libcurl-7.19.3-win32-ssl-msvc\lib\Debug" /ASSEMBLYDEBUG /TLBID:1

    Read the article

  • How to profile a Silverlight MVVM application with a lot of custom controls

    - by tomo
    There is a quite big LOB Silverlight application, and we wrote a lot of custom controls which are rather heavy in drawing. All data is loaded by an RIA service, processed, and bound (using the INotifyPropertyChanged interface) to the view. The problem is that the first drawing takes a lot of time; subsequent calls to the service (server) and redrawing are quite fast. I used the EQATEC profiler to track the problem. I saw that processing takes only a couple of milliseconds, so my idea is that the drawing by the SL engine is slow. I'm wondering if it is possible to somehow profile processes inside SL to check which drawing operations are taking too much time. Are there any guidelines on how to implement faster drawing of complex custom controls?

    Read the article

  • Linux: How to find all serial devices (ttyS, ttyUSB, ..) without opening them?

    - by Thomas Tempelmann
    What is the proper way to get a list of all available serial ports/devices on a Linux system? In other words, when I iterate over all devices in /dev/, how do I tell which ones are serial ports in the classic way, i.e. those usually supporting baud rates and RTS/CTS flow control? The solution would be coded in C. I ask because I am using a 3rd-party library that does this clearly wrong: it appears to only iterate over /dev/ttyS*. The problem is that there are, for instance, serial ports over USB (provided by USB-RS232 adapters), and those are listed under /dev/ttyUSB*. And reading the Serial-HOWTO at Linux.org, I get the idea that there'll be other namespaces as well, as time goes on. So I need to find the official way to detect serial devices. The problem is that there appears to be none documented, or I can't find it. I imagine one way would be to open all files from /dev/tty* and call a specific ioctl() on them that is only available on serial devices. Would that be a good solution, though? Update: hrickards suggested looking at the source for "setserial". Its code does exactly what I had in mind. First, it opens a device with: fd = open (path, O_RDWR | O_NONBLOCK) Then it invokes: ioctl (fd, TIOCGSERIAL, &serinfo) If that call returns no error, then it's a serial device, apparently. I found similar code here, which suggested also adding the O_NOCTTY option. There is one problem with this approach, though: when I tested this code on BSD Unix (i.e. OS X), it worked as well; however, serial devices provided through Bluetooth cause the system (driver) to try to connect to the Bluetooth device, which takes a while before it returns with a timeout error. This is caused by just opening the device. And I can imagine that similar things can happen on Linux as well – ideally, I should not need to open the device to figure out its type. I wonder if there's also a way to invoke ioctl functions without an open, or to open a device in a way that does not cause connections to be made? Any ideas?
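    Since the update worries about having to open() each device, here is a hedged shell sketch of the sysfs approach, which ports directly to C by reading the same paths: on reasonably modern kernels, every tty backed by real hardware exposes a device/ directory under /sys/class/tty, and the bound driver name distinguishes 8250/16550 ports, USB adapters, and so on, with no open() at all:

        for d in /sys/class/tty/*/device; do
            tty=/dev/$(basename "$(dirname "$d")")            # e.g. /dev/ttyS0, /dev/ttyUSB0
            drv=$(basename "$(readlink -f "$d/driver")")      # e.g. serial8250, ftdi_sio
            printf '%s\tdriver=%s\n' "$tty" "$drv"
        done

    One caveat: the legacy serial8250 driver may still claim ttyS nodes with no UART actually behind them, so the TIOCGSERIAL check (type != PORT_UNKNOWN) remains the definitive test for those.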

    Read the article

  • Bullet indents in PowerPoint 2007 compatibility mode via .NET interop issue

    - by L. Shaydariv
    Hello. I've got a really difficult bug and I can't see the fix. This has been driving me insane for a long time. Let's consider the following scenario: 1) There is a PowerPoint 2003 presentation. It contains only one slide and only one shape, but the shape contains a text frame including a bulleted list with an arbitrary textual structure. 2) There is a requirement to get bullet indents for every bulleted paragraph using PowerPoint 2007. I can satisfy the requirement by opening the presentation in compatibility mode and applying the following VBA script:
        With ActivePresentation
            Dim sl As Slide: Set sl = .Slides(1)
            Dim sh As Shape: Set sh = sl.Shapes(1)
            Dim i As Integer
            For i = 1 To sh.TextFrame.TextRange.Paragraphs.Count
                Dim para As TextRange: Set para = sh.TextFrame.TextRange.Paragraphs(i, 1)
                Debug.Print para.Text; para.indentLevel, sh.TextFrame.Ruler.Levels(para.indentLevel).FirstMargin
            Next i
        End With
    which produces the following output:
        A 1 0
        B 1 0
        C 2 24
        D 3 60
        E 5 132
    Obviously, everything is perfect: it has shown the proper list item text, list item level and bullet indent. But I can't see a way to reach the same result using C#. Let's add a COM reference to Microsoft.Office.Interop.PowerPoint 2.9.0.0 (taken from MSPPT.OLB, MS Office 12):
        // presentation = ...("presentation.ppt")... a PowerPoint 2003 presentation
        Slide slide = presentation.Slides[1];
        Shape shape = slide.Shapes[1];
        for (int i = 1; i <= shape.TextFrame.TextRange.Paragraphs(-1, -1).Count; i++)
        {
            TextRange paragraph = shape.TextFrame.TextRange.Paragraphs(i, 1);
            Console.WriteLine("{0} {1} {2}", paragraph.Text, paragraph.IndentLevel,
                shape.TextFrame.Ruler.Levels[paragraph.IndentLevel].FirstMargin);
        }
    Oh, man... What is this? I've got problems here. First, the paragraph.Text value is trimmed as soon as a '\r' character is found (however, paragraph.Text[0] really returns the first character O_o). But it's OK, I can shut my eyes to this. But... but, second, I can't understand why the first margins are always zero, no matter which level they belong to. They are always zero in compatibility mode... It's hard to believe it... :) So is there any way to fix this, or at least a workaround? I'd accept any help regarding the subject. I can't even find any article related to the issue. :( Probably you have been face to face with it at some point... Or is it just a bug with no fix that must be reported to Microsoft? Thank you.

    Read the article

  • Virtual dedicated server repetitive draining RAM, OOM constantly

    - by Deerly
    My Linux (Fedora Red Hat 7) virtual dedicated server has been experiencing OOM multiple times a day for the past several days. I thought the issue was with spamd/spamassassin, but after disabling this the errors remain. The highest usage displayed by ps faux --cumulative:
        USER   PID   %CPU %MEM VSZ    RSS    TTY STAT START TIME COMMAND
        root   28412 8.7  0.5  309572 109308 ?   Sl   22:15 0:17 /usr/java/jdk1.
        mysql  7716  0.0  0.0  136256 18000  ?   Sl   22:12 0:00 _ /usr/libexe
        named  17697 0.0  0.0  120904 15316  ?   Ssl  22:09 0:00 /usr/sbin/named
    I'm not running any Java applications, so I'm not sure why the top entry is showing up. It is frustrating, as I barely have anything running on the server and use the tiniest fraction of bandwidth. Any help or suggestions on zeroing in on the source of the drain would be much appreciated! Thanks!
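    A hedged way to identify the mystery JVM before blaming anything else (PID taken from the listing above; the flags are standard procps options):

        ps -fp 28412             # full command line of that java process
        ps -ef --forest | less   # find its parent in the process tree
        free -m                  # actual memory use vs. buffers/cache

    For what it's worth, hosting control panels commonly bundled with virtual dedicated servers (Plesk and similar) often ship a Java service of their own, which would explain a JVM nobody started by hand.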

    Read the article

  • How can I further optimize this color difference function?

    - by aLfa
    I have made this function to calculate color differences in the CIE Lab colorspace, but it lacks speed. Since I'm not a Java expert, I wonder if any Java guru around has some tips that can improve the speed here. The code is based on the matlab function mentioned in the comment block. /** * Compute the CIEDE2000 color-difference between the sample color with * CIELab coordinates 'sample' and a standard color with CIELab coordinates * 'std' * * Based on the article: * "The CIEDE2000 Color-Difference Formula: Implementation Notes, * Supplementary Test Data, and Mathematical Observations,", G. Sharma, * W. Wu, E. N. Dalal, submitted to Color Research and Application, * January 2004. * available at http://www.ece.rochester.edu/~gsharma/ciede2000/ */ public static double deltaE2000(double[] lab1, double[] lab2) { double L1 = lab1[0]; double a1 = lab1[1]; double b1 = lab1[2]; double L2 = lab2[0]; double a2 = lab2[1]; double b2 = lab2[2]; // Cab = sqrt(a^2 + b^2) double Cab1 = Math.sqrt(a1 * a1 + b1 * b1); double Cab2 = Math.sqrt(a2 * a2 + b2 * b2); // CabAvg = (Cab1 + Cab2) / 2 double CabAvg = (Cab1 + Cab2) / 2; // G = 1 + (1 - sqrt((CabAvg^7) / (CabAvg^7 + 25^7))) / 2 double CabAvg7 = Math.pow(CabAvg, 7); double G = 1 + (1 - Math.sqrt(CabAvg7 / (CabAvg7 + 6103515625.0))) / 2; // ap = G * a double ap1 = G * a1; double ap2 = G * a2; // Cp = sqrt(ap^2 + b^2) double Cp1 = Math.sqrt(ap1 * ap1 + b1 * b1); double Cp2 = Math.sqrt(ap2 * ap2 + b2 * b2); // CpProd = (Cp1 * Cp2) double CpProd = Cp1 * Cp2; // hp1 = atan2(b1, ap1) double hp1 = Math.atan2(b1, ap1); // ensure hue is between 0 and 2pi if (hp1 < 0) { // hp1 = hp1 + 2pi hp1 += 6.283185307179586476925286766559; } // hp2 = atan2(b2, ap2) double hp2 = Math.atan2(b2, ap2); // ensure hue is between 0 and 2pi if (hp2 < 0) { // hp2 = hp2 + 2pi hp2 += 6.283185307179586476925286766559; } // dL = L2 - L1 double dL = L2 - L1; // dC = Cp2 - Cp1 double dC = Cp2 - Cp1; // computation of hue difference double dhp = 0.0; // set hue difference to zero if the product of chromas is zero if (CpProd != 0) { // dhp = hp2 - hp1 dhp = hp2 - hp1; if (dhp > Math.PI) { // dhp = dhp - 2pi dhp -= 6.283185307179586476925286766559; } else if (dhp < -Math.PI) { // dhp = dhp + 2pi dhp += 6.283185307179586476925286766559; } } // dH = 2 * sqrt(CpProd) * sin(dhp / 2) double dH = 2 * Math.sqrt(CpProd) * Math.sin(dhp / 2); // weighting functions // Lp = (L1 + L2) / 2 - 50 double Lp = (L1 + L2) / 2 - 50; // Cp = (Cp1 + Cp2) / 2 double Cp = (Cp1 + Cp2) / 2; // average hue computation // hp = (hp1 + hp2) / 2 double hp = (hp1 + hp2) / 2; // identify positions for which abs hue diff exceeds 180 degrees if (Math.abs(hp1 - hp2) > Math.PI) { // hp = hp - pi hp -= Math.PI; } // ensure hue is between 0 and 2pi if (hp < 0) { // hp = hp + 2pi hp += 6.283185307179586476925286766559; } // LpSqr = Lp^2 double LpSqr = Lp * Lp; // Sl = 1 + 0.015 * LpSqr / sqrt(20 + LpSqr) double Sl = 1 + 0.015 * LpSqr / Math.sqrt(20 + LpSqr); // Sc = 1 + 0.045 * Cp double Sc = 1 + 0.045 * Cp; // T = 1 - 0.17 * cos(hp - pi / 6) + // + 0.24 * cos(2 * hp) + // + 0.32 * cos(3 * hp + pi / 30) - // - 0.20 * cos(4 * hp - 63 * pi / 180) double hphp = hp + hp; double T = 1 - 0.17 * Math.cos(hp - 0.52359877559829887307710723054658) + 0.24 * Math.cos(hphp) + 0.32 * Math.cos(hphp + hp + 0.10471975511965977461542144610932) - 0.20 * Math.cos(hphp + hphp - 1.0995574287564276334619251841478); // Sh = 1 + 0.015 * Cp * T double Sh = 1 + 0.015 * Cp * T; // deltaThetaRad = (pi / 3) * e^-(36 / (5 * pi) * hp - 11)^2 double powerBase = hp - 4.799655442984406; double deltaThetaRad = 1.0471975511965977461542144610932 * Math.exp(-5.25249016001879 * powerBase * powerBase); // Rc = 2 * sqrt((Cp^7) / (Cp^7 + 25^7)) double Cp7 = Math.pow(Cp, 7); double Rc = 2 * Math.sqrt(Cp7 / (Cp7 + 6103515625.0)); // RT = -sin(delthetarad) * Rc double RT = -Math.sin(deltaThetaRad) * Rc; // de00 = sqrt((dL / Sl)^2 + (dC / Sc)^2 + (dH / Sh)^2 + RT * (dC / Sc) * (dH / Sh)) double dLSl = dL / Sl; double dCSc = dC / Sc; double dHSh = dH / Sh; return Math.sqrt(dLSl * dLSl + dCSc * dCSc + dHSh * dHSh + RT * dCSc * dHSh); }

    Read the article

  • MS Access: How can I count distinct values using an Access query?

    - by Sadat
    Here is the current complex query, given below:
        SELECT DISTINCT Evaluation.ETCode, Training.TTitle, Training.Tcomponent, Training.TImpliment_Partner,
               Training.TVenue, Training.TStartDate, Training.TEndDate, Evaluation.EDate, Answer.QCode,
               Answer.Answer, Count(Answer.Answer) AS [Count], Questions.SL, Questions.Question
        FROM ((Evaluation INNER JOIN Training ON Evaluation.ETCode = Training.TCode)
              INNER JOIN Answer ON Evaluation.ECode = Answer.ECode)
              INNER JOIN Questions ON Answer.QCode = Questions.QCode
        GROUP BY Evaluation.ETCode, Answer.QCode, Training.TTitle, Training.Tcomponent, Training.TImpliment_Partner,
               Training.Tvenue, Answer.Answer, Questions.Question, Training.TStartDate, Training.TEndDate,
               Evaluation.EDate, Questions.SL
        ORDER BY Answer.QCode, Answer.Answer;
    There is another column, Training.TCode. I need to count distinct Training.TCode values; can anybody help me? If you need more information, please let me know.

    Read the article
