Search Results

Search found 50994 results on 2040 pages for 'simple solution'.

Page 127/2040

  • Simple Ubuntu Server - expanding disk space by adding a new drive to an existing setup (LVM or RAID0) - how?

    - by NightWolf
    I have a 1TB ext4 partition mounted at / with all my data and Ubuntu 11.04 (natty) installed. Now this drive is almost full (I used it as a database server for some processing). RAID0 is OK, I can take a failure (touch wood), but I need a way to grow this partition. I have a new 1TB drive I want to add; however, as my Ubuntu install and all my data are on that one partition, I'm not sure how I can go about setting up a RAID0 or LVM array without losing all my data. So the question is: how can I extend my existing ext4 partition over two physical drives without losing data? Thanks!

    Read the article

  • Simple, centralized user management on a small LAN - NIS or LDAP?

    - by einpoklum
    I'm setting up a small LAN for my team. It will, for all intents and purposes, not be connected to any external networks. I would like it to have centralized control of user accounts (at least, I think I'd like that; I'm also considering using puppet, so theoretically I could just push /etc/passwd changes, or something). The number of machines is fixed, but not very small. Mostly they're 'attached' to a single user, but sometimes people work remotely on someone else's box; and there are a couple of servers. I've read this question, but my scenario is much simpler (even simpler than in this question) and I'd like to do something (relatively) quick, with not much hassle, but not a dirty totally-insecure hack. Is NIS relevant for my scenario? If not, what's the most hassle-free way to set up LDAP (or LDAP+Kerberos) to achieve the same? Notes: I have no experience with setting up either NIS or LDAP. We use Debian-flavored Linux distributions, mainly Kubuntu 12.04 (not my choice, but that's the way it is).

    Read the article

  • Simple Windows+Linux server provisioning? Chef/Puppet/Ansible etc

    - by Andrew
    I'm primarily a developer, part-time devops, and manage servers here and there for my projects. I want to automate provisioning of web/app/database servers going forward. For my projects I manage a mixture of Windows and Linux servers (VPS, cloud and dedicated). I've briefly investigated Chef/Puppet/Ansible, and I want to find something that: is easy to learn and understand (I don't want to invest weeks into understanding a complicated piece of tech); ideally does not require a server ("master server") to hold the configurations; supports provisioning of Windows and Linux servers; and comes with suitable documentation to get started. Does anyone have any advice on which tool is best suited? Thanks
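    For a sense of what the "no master server" requirement amounts to, a minimal push-over-SSH sketch is below; the hostnames and tasks are placeholders and it assumes key-based SSH login already works. Ansible implements this agentless push model properly (inventories, idempotent modules, Windows support over WinRM), which is one reason it is often the low-ceremony choice.

    # Minimal sketch of master-less "push" provisioning over SSH. Hostnames and
    # tasks are hypothetical placeholders; real tools add inventories, idempotent
    # modules, error reporting and Windows support.
    import subprocess

    INVENTORY = ["web1.example.com", "db1.example.com"]   # hypothetical hosts
    TASKS = [
        "sudo apt-get update -qq",
        "sudo apt-get install -y nginx",
    ]

    def provision(host: str) -> None:
        """Run each task on the host over SSH, stopping at the first failure."""
        for task in TASKS:
            print(f"[{host}] {task}")
            subprocess.run(["ssh", host, task], check=True)

    if __name__ == "__main__":
        for host in INVENTORY:
            provision(host)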

    Read the article

  • MySQL simple replication problem: 'show master status' produces 'Empty set'?

    - by simon
    I've been setting up MySQL master replication (on Debian 6.0.1) following these instructions faithfully: http://www.neocodesoftware.com/replication/ I've got as far as: mysql > show master status; but this is unfortunately producing the following, rather than any useful output: Empty set (0.00 sec) The error log at /var/log/mysql.err is just an empty file, so that's not giving me any clues. Any ideas? This is what I have put in /etc/mysql/my.cnf on one server (amended appropriately for the other server):
    server-id = 1
    replicate-same-server-id = 0
    auto-increment-increment = 2
    auto-increment-offset = 1
    master-host = 10.0.0.3
    master-user = <myusername>
    master-password = <mypass>
    master-connect-retry = 60
    replicate-do-db = fruit
    log-bin = /var/log/mysql-replication.log
    binlog-do-db = fruit
    And I have set up users and can connect from MySQL on Server A to the database on Server B using the username/password/IP address above.

    Read the article

  • Why does my simple RAID 1 backup storage perform really slowly sometimes?

    - by randomguy
    I bought 2x Samsung F3 EcoGreen 2TB hard disks to make backup storage. I put them in RAID 1 (mirror) mode, made a single partition and formatted it to NTFS, running Windows 7. For some reason, accessing the drive's contents (simply by navigating folders) is sometimes really slow; opening D:/photos/ can take several seconds before it starts showing any of the folder's contents. The same applies to other folders. What could be causing this, and what could I do to improve the performance? I remember there was an option somewhere inside Windows to choose fast access but less reliable persistence (read/write) - a tick box inside some dialog. At the time it felt like a good idea to untick it and get more reliable persistence but slower access, but now I'm regretting it and I'm unable to find this dialog, though I've looked hard. I don't know if it would make any difference. Oh, and I've run a disk scan and defrag on the drive: no errors, and speed isn't improved.

    Read the article

  • Is it possible to create a simple frontend indexer for OpenBitTorrent torrents?

    - by SimonK
    I run a website which distributes a few files every now and again, live music performances by a rock band. I create a torrent file, set the trackers to openbittorrent, publicbt and other similar open trackers. I upload the torrent file to my forum, my users download it and the files are shared. No problems there. What I would like to do, though, is index those torrents properly on my website so I can follow seeders/leechers and other stats online. I know the open torrent trackers don't have an index, but I am aware of many, many indexing sites that do exactly that - I just don't know how. So what I'm asking is: what do I need to do to do that myself? I simply want to create a page that lists the torrents I and other users on my site create, the seeder/leecher ratio and a link to the torrent file, etc. What data do I need to be able to do that? I'm proficient in general web design, but I don't know what data I would need to pull the required info on the torrents. Thanks
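    For reference, the seeder/leecher numbers usually come from the tracker's "scrape" endpoint: take the announce URL, replace "announce" with "scrape", and pass the torrent's urlencoded 20-byte info_hash; the tracker replies with a bencoded dictionary holding "complete" (seeders), "incomplete" (leechers) and "downloaded" counts. A minimal sketch follows; the tracker URL and info_hash are placeholders, and it assumes the tracker actually exposes scrape.

    # Sketch of polling a tracker scrape endpoint for seeder/leecher counts.
    # The scrape URL and info_hash below are placeholders.
    import urllib.parse
    import urllib.request

    def bdecode(data: bytes, i: int = 0):
        """Minimal bencode decoder; returns (value, next_index)."""
        if data[i:i+1] == b"i":                          # integer: i<digits>e
            end = data.index(b"e", i)
            return int(data[i+1:end]), end + 1
        if data[i:i+1] == b"l":                          # list: l ... e
            i, out = i + 1, []
            while data[i:i+1] != b"e":
                value, i = bdecode(data, i)
                out.append(value)
            return out, i + 1
        if data[i:i+1] == b"d":                          # dict: d ... e
            i, out = i + 1, {}
            while data[i:i+1] != b"e":
                key, i = bdecode(data, i)
                value, i = bdecode(data, i)
                out[key] = value
            return out, i + 1
        colon = data.index(b":", i)                      # byte string: <len>:<bytes>
        length = int(data[i:colon])
        start = colon + 1
        return data[start:start + length], start + length

    def scrape(scrape_url: str, info_hash_hex: str) -> dict:
        raw_hash = bytes.fromhex(info_hash_hex)
        url = scrape_url + "?info_hash=" + urllib.parse.quote_from_bytes(raw_hash, safe="")
        with urllib.request.urlopen(url, timeout=10) as resp:
            decoded, _ = bdecode(resp.read())
        stats = decoded[b"files"][raw_hash]
        return {"seeders": stats[b"complete"],
                "leechers": stats[b"incomplete"],
                "completed": stats[b"downloaded"]}

    if __name__ == "__main__":
        # Substitute a real scrape URL and a real torrent's info_hash here.
        print(scrape("http://tracker.example.org/scrape", "00" * 20))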

    Read the article

  • Is there a simple context-menu add-in that could make up for the Windows 7 status bar deficiency?

    - by DanO
    Edit: I initially asked about free disk space and selected item size. It has since been pointed out that the selected item size summary is still available natively in the details pane. I had read elsewhere (Wikipedia) that this was removed along with disk free space, which is not the case. Only free disk space has been completely removed; selection size is still available. Is there a context menu add-in out there that could show the free disk space of the relevant drive when you right click? This would go a long way to compensating for one of the only steps backward I've discovered in Windows 7 so far. I doubt anyone had created one specifically for this need before Windows 7, because this information was previously easily accessible in the status bar. I thought about creating one, but it has been a while since I have messed with the Shell API, and I know there are coders out there who could do it faster and better. If you've heard of one, or know of something else to make up for this Microsoft misstep, I'd appreciate hearing about it. If MS were listening to the community they would already have a powertoy or add-in of some kind to un-break this (they could even release it unsupported), as there seem to be many power users who are extremely annoyed by this feature-removal decision. If anyone has seen something, please post it here. As it has been only 4 days since the official Windows 7 release, I'll wait at least a week to choose an answer. Here's a prototype screenshot: SU question 19232 is related.
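    For what it's worth, the lookup itself is trivial; the hard part is the shell-extension plumbing. A sketch of just the underlying free-space query such an add-in would run (the drive path is a placeholder):

    # Not a context-menu add-in, only the underlying query one would make:
    # free space on the volume containing a given path (Python 3.3+ stdlib).
    import shutil

    def free_space_report(path: str) -> str:
        usage = shutil.disk_usage(path)            # named tuple: total, used, free (bytes)
        gib = 1024 ** 3
        return (f"{path}: {usage.free / gib:.1f} GiB free of "
                f"{usage.total / gib:.1f} GiB ({usage.free / usage.total:.0%} free)")

    if __name__ == "__main__":
        print(free_space_report("C:\\"))           # hypothetical drive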

    Read the article

  • How to output a simple network activity plot in console in Linux?

    - by Vi.
    There's tload that plots load average. There's iftop that shows network usage as bars. How to do something like this:
    # tcpdump -i eth0 --plot 'host 1.2.3.4'
    13:45:03 |             | 0 in 0 out
    13:45:04 |O            | 0 in 1MB out
    13:45:05 |OOOI         | 500 KB in 4MB out
    13:45:06 |OIIII        | 6MB in 1MB out
    13:45:07 |             | 0 in 0 out
    13:45:08 |IIIIIIIIIIII | 53M in 0 out
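    Nothing I know of ships with that exact --plot option, but a rough stand-in can be scripted from the byte counters in /proc/net/dev. A sketch (Linux only; the interface name is a placeholder, and unlike the imagined tcpdump filter it cannot restrict the plot to a single host):

    # Rough console plot of per-second interface throughput from /proc/net/dev.
    import time

    IFACE = "eth0"          # placeholder interface name
    SCALE = 1 << 20         # one 'I' or 'O' per MiB transferred in the interval

    def counters(iface: str):
        with open("/proc/net/dev") as f:
            for line in f:
                name, _, rest = line.partition(":")
                if name.strip() == iface:
                    fields = rest.split()
                    return int(fields[0]), int(fields[8])   # rx_bytes, tx_bytes
        raise ValueError(f"interface {iface!r} not found")

    def human(n: float) -> str:
        for unit in ("B", "KB", "MB", "GB"):
            if n < 1024:
                return f"{n:.0f}{unit}"
            n /= 1024
        return f"{n:.0f}TB"

    if __name__ == "__main__":
        rx_prev, tx_prev = counters(IFACE)
        while True:
            time.sleep(1)
            rx, tx = counters(IFACE)
            d_in, d_out = rx - rx_prev, tx - tx_prev
            rx_prev, tx_prev = rx, tx
            bar = "I" * (d_in // SCALE) + "O" * (d_out // SCALE)
            print(f"{time.strftime('%H:%M:%S')} |{bar:<12}| "
                  f"{human(d_in)} in {human(d_out)} out")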

    Read the article

  • Seemingly simple skinning problem in Flex4; style gives disco effect

    - by Cheradenine
    I'm doing something wrong, but I can't figure out what. Simple project in flex4, whereby I create a skinned combobox (fragments at end). If I turn on the 3 skin references (over-skin, up-skin, down-skin), the combobox appears to simply stop working. If I remove the up-skin, hovering over the combo produces a flickering effect, where it appears to apply the style, then remove it immediately. I get the same thing with a button instead of a combo. I'm sure it's something really simple, but it's evading me. <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" minWidth="955" minHeight="600" xmlns:containers="flexlib.containers.*"> <fx:Declarations> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <fx:Style> @namespace s "library://ns.adobe.com/flex/spark"; @namespace mx "library://ns.adobe.com/flex/mx"; #myCombo { over-skin: ClassReference("nmx.MyComboSkin"); up-skin: ClassReference("nmx.MyComboSkin"); down-skin: ClassReference("nmx.MyComboSkin"); } </fx:Style> <fx:Script> <![CDATA[ [Bindable] public var items:Array = ["A","B","C"]; ]]> </fx:Script> <mx:Canvas backgroundColor="#ff0000" width="726" height="165" x="20" y="41"> <mx:ComboBox id="myCombo" x="10" y="10" prompt="Hospital" dataProvider="{items}"> </mx:ComboBox> </mx:Canvas> </s:Application> Skin Definition: package nmx { import flash.display.GradientType; import flash.display.Graphics; import mx.skins.Border; import mx.skins.ProgrammaticSkin; import mx.skins.halo.ComboBoxArrowSkin; import mx.skins.halo.HaloColors; import mx.utils.ColorUtil; public class MyComboSkin extends ProgrammaticSkin { public function MyComboSkin() { super(); } override protected function updateDisplayList(w:Number, h:Number):void { trace(name); super.updateDisplayList(w, h); var arrowColor:int = 0xffffff; var g:Graphics = graphics; g.clear(); // Draw the border and fill. switch (name) { case "upSkin": case "editableUpSkin": { g.moveTo(0,0); g.lineStyle(1,arrowColor); g.lineTo(w-1,0); g.lineTo(w-1,h-1); g.lineTo(0,h-1); g.lineTo(0,0); } break; case "overSkin": case "editableOverSkin": case "downSkin": case "editableDownSkin": { // border /*drawRoundRect( 0, 0, w, h, cr, [ themeColor, themeColor ], 1); */ g.moveTo(0,0); g.lineStyle(1,arrowColor); g.lineTo(w-1,0); g.lineTo(w-1,h-1); g.lineTo(0,h-1); g.lineTo(0,0); // Draw the triangle. g.beginFill(arrowColor); g.moveTo(w - 11.5, h / 2 + 3); g.lineTo(w - 15, h / 2 - 2); g.lineTo(w - 8, h / 2 - 2); g.lineTo(w - 11.5, h / 2 + 3); g.endFill(); } break; case "disabledSkin": case "editableDisabledSkin": { break; } } } } }

    Read the article

  • Which part of this simple script is breaking internet explorer?

    - by user961627
    I'm writing a simple virtual keyboard for Arabic (Indic) digits: just links that, when clicked, produce the corresponding Unicode Indic character. The following is my HTML, in the body tag:
    <a href="#" id='start'>Start</a>
    <div id='vkb' style='padding:20px;font-size:16pt; border:2px solid #eee; width:250px' dir='ltr'>
    <a class='key' href='#' id='0'>&#1632;</a>
    <a class='key' href='#' id='1'>&#1633;</a>
    <a class='key' href='#' id='2'>&#1634;</a>
    <a class='key' href='#' id='3'>&#1635;</a>
    <a class='key' href='#' id='4'>&#1636;</a><br />
    <a class='key' href='#' id='5'>&#1637;</a>
    <a class='key' href='#' id='6'>&#1638;</a>
    <a class='key' href='#' id='7'>&#1639;</a>
    <a class='key' href='#' id='8'>&#1640;</a>
    <a class='key' href='#' id='9'>&#1641;</a>
    <br />
    <a href="#" id='stop'>Stop</a>
    </div>
    <div id='output' /></div>
    This is my CSS:
    a { text-decoration:none; }
    .key { padding:7px; background-color:#fff; margin:5px; border:2px solid #eee; display:inline-block; }
    .key:hover { background-color:#eee; }
    And this is my JavaScript:
    <script type="text/javascript" src="js/jquery.js"></script>
    <script>
    $(document).ready(function(){
        var toprint = "";
        $('#vkb').hide();
        $('#start').click(function(e){
            toprint = "";
            $('#vkb').show();
        });
        $('#stop').click(function(e){
            $('#vkb').hide();
            ret = ar2ind(toprint);
            $('#output').text(ret);
            toprint = "";
        });
        $('#vkb').click(function(e){
            var $key = $(e.target).closest('.key');
            var pressed = $key.attr('id');
            if(pressed === undefined){
                pressed = "";
            }
            toprint = toprint + pressed;
        });
    });
    function ar2ind(str) {
        str = str.replace(/0/g, "٠");
        str = str.replace(/1/g, "١");
        str = str.replace(/2/g, "٢");
        str = str.replace(/3/g, "٣");
        str = str.replace(/4/g, "٤");
        str = str.replace(/5/g, "٥");
        str = str.replace(/6/g, "٦");
        str = str.replace(/7/g, "٧");
        str = str.replace(/8/g, "٨");
        str = str.replace(/9/g, "٩");
        return str;
    }
    </script>
    It seems simple enough, but it's crashing in IE9. (It might be crashing in earlier versions too, but I haven't been able to check.)

    Read the article

  • hosting simple python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (which are used to gather custom data for a monitoring tool) with a "container" to handle boilerplate tasks like: fetching a script's configuration from a file (keeping that info up to date if the file changes, and handling decryption of sensitive config data); running multiple instances of the same script in different threads instead of spinning up a new process for each one; and exposing an API for caching expensive data and storing persistent state from one script invocation to the next. Today, script authors must handle the issues above, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers. Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks. An HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):
    def run(config, context, cache):
        results = http_library_call(config.url, config.http_method, config.username, config.password, ...)
        return { html : results.html, status_code : results.status, headers : results.response_headers }
    def init(config, context, cache):
        config.max_threads = 20 # up to 20 URLs at one time (per process)
        config.max_processes = 3 # launch up to 3 concurrent processes
        config.keepalive = 1200 # keep process alive for 10 mins without another call
        config.process_recycle.requests = 1000 # restart the process every 1000 requests (to avoid leaks)
        config.kill_timeout = 600 # kill the process if any call lasts longer than 10 minutes
    A database-data fetching script example might look like this (in pseudocode):
    def run(config, context, cache):
        expensive = context.cache["something_expensive"]
        for record in db_library_call(expensive, context.checkpoint, config.connection_string):
            context.log(record, "logDate") # log all properties, optionally specify name of timestamp property
            last_date = record["logDate"]
            context.checkpoint = last_date # persistent checkpoint, used next time through
    def init(config, context, cache):
        cache["something_expensive"] = get_expensive_thing()
    def shutdown(config, context, cache):
        expensive = cache["something_expensive"]
        expensive.release_me()
    Is this API appropriately "pythonic", or are there things I should do to make this more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs, so I suspect I'm missing useful Python idioms.) Specific questions: is it natural to pass a "config" object into a method and ask the callee to set various configuration options, or is there another preferred way to do this? When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the head of most scripters.) My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it?
    If not, what other mechanism would be more natural? I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead? Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
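    Not an answer to the API questions, but a minimal sketch of the kind of container described above, under the question's own run()/init() naming: it loads a JSON config file, re-reads it when the file's modification time changes, and schedules run() on a thread pool with a shared cache dict. The config file name, the SimpleNamespace-style config object and the importable script module are assumptions made for illustration only.

    import importlib
    import json
    import os
    import types
    from concurrent.futures import ThreadPoolExecutor

    CONFIG_PATH = "script_config.json"        # hypothetical config file

    class Container:
        def __init__(self, script_module: str, max_threads: int = 4):
            self.script = importlib.import_module(script_module)
            self.cache = {}                    # shared across invocations
            self.context = types.SimpleNamespace(checkpoint=None, log=print)
            self._mtime = None
            self.config = self._load_config()
            self.pool = ThreadPoolExecutor(max_workers=max_threads)
            if hasattr(self.script, "init"):
                self.script.init(self.config, self.context, self.cache)

        def _load_config(self):
            """Re-read the config file only when its mtime changes."""
            mtime = os.path.getmtime(CONFIG_PATH)
            if mtime != self._mtime:
                with open(CONFIG_PATH) as f:
                    self._config = types.SimpleNamespace(**json.load(f))
                self._mtime = mtime
            return self._config

        def invoke(self):
            """Schedule one invocation of the script's run() hook."""
            self.config = self._load_config()
            return self.pool.submit(self.script.run,
                                    self.config, self.context, self.cache)

    # Usage sketch: result = Container("my_script").invoke().result()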

    Read the article

  • SQL SERVER – Fix: Error : 402 The data types ntext and varchar are incompatible in the equal to operator

    - by pinaldave
    Some errors are very simple to understand, but the solution for them is not easy to figure out. Here is one such error: it clearly suggests where the problem is, but does not tell you what the solution is. Additionally, there are multiple solutions, so developers often get confused about which one is correct and which one is not. Let us first recreate the scenario and understand where the problem is. Let us run the following:
    USE Tempdb
    GO
    CREATE TABLE TestTable (ID INT, MyText NTEXT)
    GO
    SELECT ID, MyText FROM TestTable WHERE MyText = 'AnyText'
    GO
    DROP TABLE TestTable
    GO
    When you run the above script it will give you the following error:
    Msg 402, Level 16, State 1, Line 1
    The data types ntext and varchar are incompatible in the equal to operator.
    One of the questions I often receive is that varchar is surely compatible with the equal to operator, so why does this error show up? Well, the answer is much simpler: I think we have not understood the error message properly. The ntext and varchar types are not compatible when compared with each other using the equal sign. Now let us change the data type on the right side of the comparison to nvarchar from varchar. To do that we will put N' before the string.
    USE Tempdb
    GO
    CREATE TABLE TestTable (ID INT, MyText NTEXT)
    GO
    SELECT ID, MyText FROM TestTable WHERE MyText = N'AnyText'
    GO
    DROP TABLE TestTable
    GO
    When you run the above script it will give the following error:
    Msg 402, Level 16, State 1, Line 1
    The data types ntext and nvarchar are incompatible in the equal to operator.
    You can see that the error message also suggests that now we are comparing ntext to nvarchar. Now that we have understood the error properly, let us see various solutions to the above problem.
    Solution 1: Convert the data types to match each other using the CONVERT function, changing the data type of MyText to nvarchar in the query.
    SELECT ID, MyText FROM TestTable WHERE CONVERT(NVARCHAR(MAX), MyText) = N'AnyText'
    GO
    Solution 2: Convert the data type of the column from NTEXT to NVARCHAR(MAX) (TEXT to VARCHAR(MAX)).
    ALTER TABLE TestTable ALTER COLUMN MyText NVARCHAR(MAX)
    GO
    Now you can run the original query again and it will work fine.
    Solution 3: Use LIKE instead of the equal to operator.
    SELECT ID, MyText FROM TestTable WHERE MyText LIKE 'AnyText'
    GO
    Well, any of the three solutions will work. Here is my suggestion: if you can change the column data type from ntext or text to nvarchar or varchar, you should follow that path, as the text and ntext datatypes are marked as deprecated. As all developers will have to change the deprecated data types eventually anyway, it is a good idea to change them early. If for any reason you cannot convert the original column, use Solution 1 as a temporary fix. Solution 3 is not the best solution; use it as a last option. Did I miss any other method? If yes, please let me know and I will add the solution to the original blog post with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • My search what the Cloud will mean for my Work, part 2

    - by Kay Sellenrode
    My experience with the cloud and why work will change and not disappear. Until now I have had multiple experiences with the cloud, mostly good. I have worked on multiple cloud solutions in the past, but let me describe them as 0.x versions. For me the first real, serious cloud experience was a bit more than a year ago, when our company switched from an in-house server to Microsoft BPOS as a complete replacement. Since we are a small consultancy firm and don't have that much else to do than consulting, our IT requirements are quite simple: we need mail and storage space for our documents. With the in-house server we had multiple outages during a year, mostly due to lack of administration. Being consultants in the field and hardly having time to maintain a server, BPOS was and still is the right solution for us. Since the migration we have had fewer outages and a much more robust solution. Have we run into issues with BPOS for our own environment? No, not that I'm aware of. Based on this experience I took a stance on the deployability of BPOS and cloud solutions: they are suitable for the MKB (Dutch for medium and small businesses). Most small businesses don't have enough work to hire a full-time IT admin, and hiring a service provider to maintain their own server might be even more costly than hiring an admin. So, seeing the capabilities of BPOS and the needs of most businesses, I see it as a great solution that gives the business a complete server replacement for a fixed price per user, resulting in a clear budget for IT spending, something most small businesses have been looking for, for a long time. Right now I'm deploying BPOS with a customer, and I am running into some of the Cloud 1.0 issues. In my opinion BPOS is a good, working Cloud version 1.0 solution. What do I mean by 1.0? Well, 1.0 is mostly a tested solution (unlike the 0.x versions) but still has quite some limitations, caused by too little market experience. In my opinion this is also the reason why we don't see that many BPOS customers yet, and why I think Office 365 will make a huge difference. What I have seen of 365 shows me it is a Cloud 2.0 version, meaning it has all the needed features and is much more flexible for the customer. This is also why I see changes happening in my field of work: changes, not unemployment, due to cloud solutions. Cloud 1.0 solutions gave me the idea that if every customer adopted them I would be out of work. But in reality Cloud 1.0 solutions are here just to set the market needs. The Cloud 2.0 and higher versions will give the customer much more flexibility, but will also require a consultant. Where the 1.0 versions are simple to set up and maintain, the 2.0 solutions need more thought upfront and afterwards. For example, BPOS in its 1.0 version brings you a very simplified Exchange 2007 solution, suitable for some customers; with Office 365 you receive an almost full-blown Exchange 2010 solution, and I expect this to be even more customizable in the next version. In my search for the changes to my work I try to regularly write a post with my thoughts around the Cloud and the impact on my work as a consultant. I'm also planning to present around this topic, so if anyone is interested in seeing me present on it, you're more than welcome to contact me.

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package. We had neither the resources nor inclination to build-in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats, from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL Query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate data range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll spot quickly trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly. We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.
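    As a rough illustration of the underlying idea (the article does this through SQL Monitor's custom metrics rather than a script): poll a query that returns a single integer and keep timestamped samples to build a baseline. The connection string and metric query below are placeholders, and the third-party pyodbc package is assumed.

    import datetime
    import time

    import pyodbc

    CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=...;DATABASE=...;Trusted_Connection=yes"
    METRIC_SQL = "SELECT COUNT(*) FROM dbo.Comments"     # hypothetical metric query
    samples = []                                         # (timestamp, value) pairs

    def poll_once() -> int:
        conn = pyodbc.connect(CONN_STR)
        try:
            value = conn.cursor().execute(METRIC_SQL).fetchone()[0]
        finally:
            conn.close()
        samples.append((datetime.datetime.utcnow(), value))
        return value

    if __name__ == "__main__":
        while True:
            print(poll_once())
            time.sleep(60)        # one sample a minute is plenty for a baseline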

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long since had to context switch between two IDEs, Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little overkill. This is where SQL Connect comes in. This is an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints of Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution as follows: Then choose your existing development database and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v.3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control). .and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto loaded) and select Import SQL Source Control project Locate your working folder and click Import. This creates a Red Gate database project under your solution: From here you can modify your development database, and manage your changes in source control. To associate your development database with the project, right click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and Visual Studio project in sync is as easy as clicking on a button. One you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion on SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website. Technorati Tags: SQL Server

    Read the article

  • If I build a solution in C#, how do I add a database to the final program?

    - by DevX
    Apologies if this is a very basic question, but I am preparing to create an executable for a Visual Studio C# application. (My first time!) My application uses a database that I'm currently storing in SQL Server. This works fine while I'm coding, since I created the database manually on my computer. I see that VS2008 has created an .exe in my bin/debug folder, but how do I ensure that any new user (who doesn't already have the database) doing a fresh install gets the database as well?

    Read the article

  • Share code between projects in a solution in Visual Studio 2008, when building a common assembly is not an option

    - by Binary255
    Hi, I create an add-on for the product Foo. There are different versions of Foo, namely versions 1, 2, 3 and 4. These versions have a mostly, but not fully, compatible API. I currently have 5 projects: DotNetCommon (the common methods which could be used if I create an add-on or something other than the Foo product), FooOne, FooTwo, FooThree and FooFour. The Foo* projects contain the add-in for versions 1-4 of Foo. There are a lot of duplicated files in the Foo* projects, as there are a lot of things in the API which are identical for all versions of Foo. It would be nice to separate out everything which is common to all Foo versions. Why not just create a common assembly for all versions of Foo called FooCommon? If I put all the classes which are common to all versions of Foo into a new library project, I would still have to choose which version of Foo the new FooCommon should reference. As said, they are not identical.

    Read the article

  • Tree-view wiki replacement solution for SharePoint, like Confluence?

    - by Melih Öztürk
    Hi to all, I keep my process documents on SVN and I want to create a wiki page that includes the information about these files. We use SharePoint in the company for basic document sharing and team sites. As is mentioned in http://stackoverflow.com/questions/256407/what-are-your-biggest-complaints-about-sharepoint, the SharePoint wiki lacks usability. I need an easy-to-use wiki tool which is capable of showing content the way Wikipedia does, and it would be great if I could have the SharePoint tree view and Active Directory authentication also. I googled and found Atlassian's Confluence, and it seems that this product is capable of meeting the requirements. We use Jira for issue tracking, so we can use its reporting in dashboards, and Confluence has a wiki part which displays wiki pages in a tree view, which is what it should look like. Has anyone used Confluence, or does anyone have an idea of other products which meet my requirements?

    Read the article

  • MongoDB vs. Redis vs. Cassandra for a fast-write, temporary row-storage solution

    - by Mark Bao
    Hi there, I'm building a system that tracks and verifies ad impressions and clicks. This means that there are a lot of insert commands (about 90/second average, peaking at 250) and some read operations, but the focus is on performance and making it blazing-fast. The system is currently on MongoDB, but I've been introduced to Cassandra and Redis since then. Would it be a good idea to go to one of these two solutions, rather than stay on MongoDB? Why or why not? Thank you
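    For a sense of what the write path might look like if Redis were chosen: pipelined counter increments plus a capped raw-event list keep each impression to a single round trip. The host, key names and event schema below are made-up placeholders (the verification step is not shown), and the third-party redis package is assumed.

    import json
    import time

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def record_impression(ad_id: str, ip: str) -> None:
        bucket = time.strftime("%Y%m%d%H%M", time.gmtime())     # per-minute counter bucket
        pipe = r.pipeline(transaction=False)
        pipe.incr(f"imp:{ad_id}:{bucket}")                      # cheap aggregate counter
        pipe.lpush("imp:raw", json.dumps({"ad": ad_id, "ip": ip, "t": time.time()}))
        pipe.ltrim("imp:raw", 0, 999999)                        # cap the raw-event queue
        pipe.execute()                                          # one network round trip

    if __name__ == "__main__":
        record_impression("ad-42", "203.0.113.9")
        print(r.get("imp:ad-42:" + time.strftime("%Y%m%d%H%M", time.gmtime())))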

    Read the article

  • Unable to record tests in JMeter; here is the log file. Can somebody tell me the solution?

    - by mrinalini
    2010/06/07 17:36:24 INFO - jmeter.util.JMeterUtils: Setting Locale to en_US 2010/06/07 17:36:25 INFO - jmeter.JMeter: Loading user properties from: E:\mrinalini\jakarta-jmeter-2.3.4\bin\user.properties 2010/06/07 17:36:25 INFO - jmeter.JMeter: Loading system properties from: E:\mrinalini\jakarta-jmeter-2.3.4\bin\system.properties 2010/06/07 17:36:25 INFO - jmeter.JMeter: Copyright (c) 1998-2009 The Apache Software Foundation 2010/06/07 17:36:25 INFO - jmeter.JMeter: Version 2.3.4 r785646 2010/06/07 17:36:25 INFO - jmeter.JMeter: java.version=1.6.0_16 2010/06/07 17:36:25 INFO - jmeter.JMeter: java.vm.name=Java HotSpot(TM) Client VM 2010/06/07 17:36:25 INFO - jmeter.JMeter: os.name=Windows XP 2010/06/07 17:36:25 INFO - jmeter.JMeter: os.arch=x86 2010/06/07 17:36:25 INFO - jmeter.JMeter: os.version=5.1 2010/06/07 17:36:25 INFO - jmeter.JMeter: file.encoding=Cp1252 2010/06/07 17:36:25 INFO - jmeter.JMeter: Default Locale=English (United States) 2010/06/07 17:36:25 INFO - jmeter.JMeter: JMeter Locale=English (United States) 2010/06/07 17:36:25 INFO - jmeter.JMeter: JMeterHome=E:\mrinalini\jakarta-jmeter-2.3.4 2010/06/07 17:36:25 INFO - jmeter.JMeter: user.dir =E:\mrinalini\jakarta-jmeter-2.3.4\bin 2010/06/07 17:36:25 INFO - jmeter.JMeter: PWD =E:\mrinalini\jakarta-jmeter-2.3.4\bin 2010/06/07 17:36:25 INFO - jmeter.JMeter: IP: 10.254.1.127 Name: cura-dws-06 FullName: cura-dws-06.curasoftware.co.in 2010/06/07 17:36:25 INFO - jmeter.JMeter: Loaded icon properties from org/apache/jmeter/images/icon.properties 2010/06/07 17:36:26 INFO - jmeter.engine.util.CompoundVariable: Note: Function class names must contain the string: '.functions.' 2010/06/07 17:36:26 INFO - jmeter.engine.util.CompoundVariable: Note: Function class names must not contain the string: '.gui.' 2010/06/07 17:36:26 INFO - jmeter.util.BSFTestElement: Registering JMeter version of JavaScript engine as work-round for BSF-22 2010/06/07 17:36:26 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Cannot find .className property for htmlParser, using default 2010/06/07 17:36:26 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/html is 2010/06/07 17:36:26 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xhtml+xml is 2010/06/07 17:36:26 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xml is 2010/06/07 17:36:26 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/xml is 2010/06/07 17:36:26 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/vnd.wap.wml is org.apache.jmeter.protocol.http.parser.RegexpHTMLParser 2010/06/07 17:36:27 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.http.modifier.gui.ParamModifierGui 2010/06/07 17:36:27 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.http.modifier.gui.UserParameterModifierGui 2010/06/07 17:36:27 INFO - jmeter.protocol.http.sampler.HTTPSampler: Maximum connection retries = 10 2010/06/07 17:36:27 INFO - jmeter.protocol.http.sampler.HTTPSampler: Connection and read timeouts are available on this JVM 2010/06/07 17:36:27 WARN - jmeter.gui.util.MenuFactory: Missing jar? Could not create org.apache.jmeter.visualizers.MailerVisualizer. 
java.lang.NoClassDefFoundError: javax/mail/MessagingException 2010/06/07 17:36:27 INFO - jmeter.samplers.SampleResult: Note: Sample TimeStamps are START times 2010/06/07 17:36:27 INFO - jmeter.samplers.SampleResult: sampleresult.default.encoding is set to ISO-8859-1 2010/06/07 17:36:38 INFO - jmeter.services.FileServer: Default base=E:\mrinalini\jakarta-jmeter-2.3.4\bin 2010/06/07 17:36:38 INFO - jmeter.services.FileServer: Set new base=E:\mrinalini\jakarta-jmeter-2.3.4\bin 2010/06/07 17:36:38 INFO - jmeter.save.SaveService: Testplan (JMX) version: 2.2. Testlog (JTL) version: 2.2 2010/06/07 17:36:38 INFO - jmeter.save.SaveService: Using SaveService properties file encoding UTF-8 2010/06/07 17:36:38 INFO - jmeter.save.SaveService: Using SaveService properties file version 697317 2010/06/07 17:36:38 INFO - jmeter.save.SaveService: Using SaveService properties version 2.1 2010/06/07 17:36:38 INFO - jmeter.save.SaveService: All converter versions present and correct 2010/06/07 17:36:41 INFO - jmeter.protocol.http.proxy.Proxy: Proxy will remove the headers: If-Modified-Since,If-None-Match,Host 2010/06/07 17:36:41 INFO - jmeter.protocol.http.proxy.Daemon: Creating Daemon Socket on port: 8080 2010/06/07 17:36:41 INFO - jmeter.protocol.http.proxy.Daemon: Proxy up and running! 2010/06/07 17:37:55 INFO - jmeter.protocol.http.proxy.Daemon: Proxy Server stopped

    Read the article

  • MS SQL Server 15MM rows, simple COUNT query. 15+ seconds?

    - by john
    We took over a website from another company after a client decided to switch. We have a table that grows by about 25k records a day and is currently at 15MM records. The table looks something like:
    id (PK, int, not null)
    member_id (int, not null)
    another_id (int, not null)
    date (datetime, not null)
    SELECT COUNT(id) FROM tbl can take up to 15 seconds, and a simple inner join on 'another_id' takes over 30 seconds. I can't imagine why this is taking so long. Any advice? SQL Server 2005 Express.
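    For reference, a sketch of one common workaround: COUNT(id) has to scan the table or an index, but on SQL Server 2005 and later a metadata-based row count can be read from sys.dm_db_partition_stats almost instantly. The connection string and table name are placeholders and the third-party pyodbc package is assumed; an index on another_id is the usual fix for the slow join.

    import pyodbc

    CONN_STR = "DRIVER={SQL Server};SERVER=...;DATABASE=...;Trusted_Connection=yes"

    FAST_ROWCOUNT = """
    SELECT SUM(p.row_count)
    FROM sys.dm_db_partition_stats AS p
    WHERE p.object_id = OBJECT_ID('dbo.tbl')   -- hypothetical table name
      AND p.index_id IN (0, 1)                 -- heap or clustered index only
    """

    def fast_rowcount() -> int:
        conn = pyodbc.connect(CONN_STR)
        try:
            return conn.cursor().execute(FAST_ROWCOUNT).fetchone()[0]
        finally:
            conn.close()

    if __name__ == "__main__":
        print(fast_rowcount())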

    Read the article

  • Problem sending mail with simple .net application. Server refusal error.

    - by Fatih
    I have a very simple .NET application for testing SMTP on .NET, but I am receiving this weird error: "System.Net.Mail.SmtpException: Failure sending mail. --- System.Net.WebException: Unable to connect to the remote server --- System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it". The SMTP server is remote and doesn't need any kind of authentication, so I don't need credentials. But I can send mail from this computer with Outlook, using the same SMTP server and the same SMTP settings, without any problem. Any ideas? It will be appreciated so much.
    Imports System.Net.Mail
    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
        Dim smtp As New SmtpClient
        smtp.Host = "10.241.128.220"
        smtp.Port = 25
        smtp.Send("[email protected]", "[email protected]", "test", "test")
    End Sub
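    Since "actively refused" is a TCP-level rejection, a useful first check is whether this machine can open port 25 on that server at all (Outlook may be using a different port, profile or proxy). A small stdlib probe; the address is the one from the question.

    import socket

    HOST, PORT = "10.241.128.220", 25

    try:
        sock = socket.create_connection((HOST, PORT), timeout=5)
    except OSError as exc:
        print(f"cannot reach {HOST}:{PORT} -> {exc}")
    else:
        print(f"TCP connection to {HOST}:{PORT} succeeded")
        try:
            print("server banner:", sock.recv(200).decode(errors="replace").strip())
        except OSError:
            print("connected, but no SMTP banner within 5 seconds")
        finally:
            sock.close()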

    Read the article

  • FREED(id): message release sent to freed object error - solution?

    - by Meko
    Hi. In my iPhone app I am getting: objc[597]: FREED(id): message release sent to freed object=0x3b81780. What could cause this error? Is it about memory allocation? I have a UITableView and a modal view that includes some text fields. It takes a username from the modal view, searches for it on the internet and gets some images from the internet. It gets the data, but when the modal view disappears the app exits and gives that error. When the modal view closes it runs my method and gets the value, but then it exits the app.

    Read the article
