Search Results

Search found 322 results on 13 pages for 'headache'.

Page 11/13

  • Drag Drop copy file

    - by Graham Warrender
    I've perhaps done something marginally stupid, but can't see what it is!

        string pegasusKey = @"HKEY_LOCAL_MACHINE\SOFTWARE\Pegasus\";
        string opera2ServerPath = @"Server VFP\";
        string opera3ServerPath = @"O3 Client VFP\";
        string opera2InstallationPath = null;
        string opera3InstallationPath = null;

        // Gets the Opera installation paths and reads them into opera*InstallationPath
        opera2InstallationPath = (string)Registry.GetValue(pegasusKey + opera2ServerPath + "System", "PathToServerDynamic", null);
        opera3InstallationPath = (string)Registry.GetValue(pegasusKey + opera3ServerPath + "System", "PathToServerDynamic", null);

        string Filesource = null;
        string[] FileList = (string[])e.Data.GetData(DataFormats.FileDrop, false);
        foreach (string File in FileList)
            Filesource = File;
        label.Text = Filesource;

        if (System.IO.Directory.Exists(opera3InstallationPath))
        {
            System.IO.File.Copy(Filesource, opera3InstallationPath);
            MessageBox.Show("File Copied from " + Filesource + "\n to " + opera3InstallationPath);
        }
        else
        {
            MessageBox.Show("Directory Doesn't Exist");
        }

    The user drags the file onto the window; I then get the installation path of an application, which is then used as the destination for the source file. When the application runs, it throws a "directory not found" error. But surely if the directory doesn't exist it should step into the else statement? A simple application that is becoming a headache!
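    The error is consistent with File.Copy expecting a full destination *file* path rather than a directory; when only the directory is supplied, the copy fails even though Directory.Exists was true. A minimal sketch of that variant, reusing the names from the question:

        // Sketch: build a destination file path from the drop source
        string destination = System.IO.Path.Combine(
            opera3InstallationPath,
            System.IO.Path.GetFileName(Filesource));

        if (System.IO.Directory.Exists(opera3InstallationPath))
        {
            // the third argument overwrites an existing copy of the file
            System.IO.File.Copy(Filesource, destination, true);
            MessageBox.Show("File copied from " + Filesource + "\n to " + destination);
        }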

    Read the article

  • Refining Search Results [PHP/MySQL]

    - by Dae
    I'm creating a set of search panes that allow users to tweak their results set after submitting a query. We pull commonly occurring values in certain fields from the results and display them in order of their popularity - you've all seen this sort of thing on eBay. So, if a lot of rows in our results were created in 2009, we'll be able to click "2009" and see only rows created in that year. What in your opinion is the most efficient way of applying these filters? My working solution was to discard entries from the results that didn't match the extra arguments, like:

        while ($row = mysql_fetch_assoc($query)) {
            foreach ($_GET as $key => $val) {
                if ($val !== $row[$key]) {
                    continue 2;
                }
            }
            // Output...
        }

    This method should hopefully only query the database once in effect, as adding filters doesn't change the query - MySQL can cache and reuse one data set. On the downside it makes pagination a bit of a headache. The obvious alternative would be to build any additional criteria into the initial query, something like:

        $sql = "SELECT * FROM tbl MATCH (title, description) AGAINST ('$search_term')";
        foreach ($_GET as $key => $var) {
            $sql .= " AND " . $key . " = " . $var;
        }

    Are there good reasons to do this instead? Or are there better options altogether? Maybe a temporary table? Any thoughts much appreciated!
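    If the second route is taken, the $_GET keys and values go straight into SQL, so a column whitelist plus escaping is worth the few extra lines. A sketch under the same mysql_* API the question uses (the column names are hypothetical):

        // Only allow known filter columns, and escape every value
        $allowed = array('year', 'category', 'condition'); // hypothetical columns
        $sql = "SELECT * FROM tbl WHERE MATCH (title, description) AGAINST ('"
             . mysql_real_escape_string($search_term) . "')";
        foreach ($_GET as $key => $val) {
            if (in_array($key, $allowed, true)) {
                $sql .= " AND `" . $key . "` = '" . mysql_real_escape_string($val) . "'";
            }
        }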

    Read the article

  • Deploying a web-site on IIS from another program

    - by slo2ols
    Hi, I developed a web-site on the ASP.NET 3.5 SP1 platform, and additionally I have two Windows services. My task is to build an install package. I decided that Visual Studio install projects do not meet my requirements, so I am designing my own installer for this project, because I need to resolve many questions and problems in the install process. My problem: I need to deploy the web-site into IIS, but I don't know how to do it easily. I found a Microsoft tool, the Web Deployment Tool, but I didn't find any documentation. And must I include this tool in my installer for deployment at the customer's site? On the other hand, I found the SDC Tasks Library, and it looks like a solution for me. But I saw many topics where people had problems, and because the project was dead nobody could help them. I know it is a long story... My question: how can I deploy the web-site from another program (I know that IIS versions have some differences, and that is another headache), set a virtual directory, an application pool (very important), a type of authentication and so forth? Thanks.
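    On IIS 7+, the pieces listed at the end (site, application pool, authentication) can all be scripted with appcmd, which an installer can shell out to; IIS 6 would need the adsutil.vbs ADSI scripts instead. A hedged sketch with placeholder site and pool names:

        rem Sketch for IIS 7+ only; "MySite" and "MyAppPool" are placeholders.
        %windir%\system32\inetsrv\appcmd add apppool /name:MyAppPool /managedRuntimeVersion:v2.0
        %windir%\system32\inetsrv\appcmd add site /name:MySite /bindings:http/*:80: /physicalPath:"C:\inetpub\MySite"
        %windir%\system32\inetsrv\appcmd set app "MySite/" /applicationPool:MyAppPool
        %windir%\system32\inetsrv\appcmd set config "MySite" /section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost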

    Read the article

  • How do I stop Chrome from yellowing my site's input boxes?

    - by davebug
    Among other text and visual aids on a form submission, post-validation, I'm coloring my input boxes red to signify the interactive area needing attention. On Chrome (and for Google Toolbar users) the auto-fill feature re-colors my input forms yellow. Here's the complex issue: I want auto-complete allowed on my forms, as it speeds up users logging in. I am going to check into the ability to turn the autocomplete attribute off if/when there's an error triggered, but it is a complex bit of coding to programmatically turn off the auto-complete for the single affected input on a page. This, to put it simply, would be a major headache. So to try to avoid that issue, is there any simpler method of stopping Chrome from re-coloring the input boxes?

    [edit] I tried the !important suggestion below and it had no effect. I have not yet checked Google Toolbar to see if the !important attribute would work for that. As far as I can tell, there isn't any means other than using the autocomplete attribute (which does appear to work).
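    Later WebKit builds expose the autofilled state as a pseudo-class, and because the yellow is applied as a background color, an inset box-shadow can paint over it where background rules (even !important ones) cannot. A sketch of that workaround - whether it helps depends on the Chrome version, and it does nothing for Google Toolbar:

        input:-webkit-autofill {
            /* paint over WebKit's yellow with the validation red */
            -webkit-box-shadow: 0 0 0 1000px #fbb inset;
            -webkit-text-fill-color: #000;
        }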

    Read the article

  • PHP modifying and combining array

    - by Industrial
    Hi everyone, I have a bit of an array headache going on. The function does what I want, but since I am not yet well acquainted with PHP's array/looping functions, my question is whether there's any part of this function that could be improved from a performance perspective. I tried to be as complete as possible in my descriptions at each stage of the function, which, briefly described, prefixes all keys in an array, fills up eventual empty/non-valid keys with '' and removes the prefixes before returning the array:

        $var = myFunction(array('key1', 'key2', 'key3', '111'));

        function myFunction($keys)
        {
            $prefix = 'prefix_';
            $keyCount = count($keys);

            // Prefix each key and remove old keys
            for ($i = 0; $i < $keyCount; $i++) {
                $keys[] = $prefix . $keys[$i];
                unset($keys[$i]);
            }
            // output: array('prefix_key1', 'prefix_key2', 'prefix_key3', 'prefix_111')

            // Get all keys from memcached. Only returns valid keys
            $items = $this->memcache->get($keys);
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3')
            // note: key 111 was not found in memcache.

            // Fill up eventual keys that are not valid/empty from memcache
            $return = $items + array_fill_keys($keys, '');
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3', 'prefix_111' => '')

            // Remove the prefixes for each result before returning the array to the application
            foreach ($return as $k => $v) {
                $expl = explode($prefix, $k);
                $return[$expl[1]] = $v;
                unset($return[$k]);
            }
            // output: array('key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', '111' => '')

            return $return;
        }

    Thanks a lot!
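    For what it's worth, the prefix/unprefix round trip can be collapsed with array_map and array_combine; a sketch assuming PHP 5.3+ closures and that this runs inside the same class, so $this->memcache is available:

        public function myFunction(array $keys)
        {
            $prefix   = 'prefix_';
            $prefixed = array_map(function ($k) use ($prefix) { return $prefix . $k; }, $keys);

            // fetch, fill misses with '', then re-key on the unprefixed names
            $items = $this->memcache->get($prefixed) + array_fill_keys($prefixed, '');
            return array_combine($keys, array_map(function ($pk) use ($items) {
                return $items[$pk];
            }, $prefixed));
        }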

    Read the article

  • Two Objects created with the same Address in Flex

    - by James
    Hi, I have an issue in Flex which is causing a bit of a headache! I am adding objects to an ArrayCollection, but in doing so another ArrayCollection is also picking up these changes, even though there is no binding occurring. I can see from the debugger that the two ACs have the same address, but for the life of me I can't figure out why. I have two ArrayCollections:

        model.index.rows     // The main array collection
        model.index.holdRows // The array collection that imitates the above

    This phantom data binding occurs only for the first iteration in the loop; for all others it writes just the once. The reason this is proving troublesome is that it creates duplicate entries in my datagrid.

        public override function handleMessage(message:IMessage):void {
            super.handleMessage(message);
            if (message is IndexResponse) {
                var response:IndexResponse = message as IndexResponse;
                model.index.rows.removeAll();
                model.index.indexIsEmpty = response.nullIndex;
                if (model.index.indexIsEmpty !== true) {
                    // Update the index model from the response. Note: each property of the
                    // row object will be shown in the UI as a column in the grid
                    response.index.forEach(function(entry:EntryData, i:int, a:Array):void {
                        var row:Object = { fileID: entry.fileID, dadName: entry.dadName };
                        entry.tags.forEach(function(tag:Tag, i:int, a:Array):void {
                            row[tag.name] = tag.value;
                        });
                        model.index.rows.addItem(row);
                    });
                    if (model.index.indexForNetworkView == true) {
                        model.index.holdRows.source = model.index.holdRows.source.concat(model.index.rows.source);
                        model.index.indexCounter++;
                        model.index.indexForNetworkView = false;
                        controller.indexController.showNetwork(model.index.indexCounter);
                    }
                    model.index.rows.refresh();
                    controller.networkController.show();
                }
            }
        }

    Has anyone encountered something similar and can propose a solution?
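    Although concat() returns a new Array, the row objects inside it are still the same references held by rows, so later mutation of those rows shows up in both collections. One hedged guess at a fix is to deep-copy the rows before holding them:

        import mx.utils.ObjectUtil;

        // Clone each row so holdRows keeps its own objects, not shared references
        var copies:Array = [];
        for each (var row:Object in model.index.rows.source) {
            copies.push(ObjectUtil.copy(row));
        }
        model.index.holdRows.source = model.index.holdRows.source.concat(copies);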

    Read the article

  • Silverlight horizontal stretch and get position issue

    - by David
    I have a Grid (container) which in turn has several grids (subContainers) arranged by rows. Each one of those "subContainers" has different columns and controls, and each of them has its horizontal alignment set to stretch. It has to stay that way, since the layout of this viewer depends on it. I use the "container" to set each control in its adequate position. So far so good. Now comes my headache... I want to remove a control from the grid and put it in a canvas, at the same exact position; only, the position it returns is as if the control were set at the beginning of the grid, not its true position. For testing purposes, I've set the "subContainers" horizontal alignment to center and (although the layout is totally wrong) every control lands in its right position when sent to a canvas, which doesn't happen when the alignment is stretch. Here's the code I'm using to get the position:

        GeneralTransform gt = nc.TransformToVisual(gridZoom);
        Point offset = gt.Transform(new Point());

    So you can understand: for example, my first control should be somewhere like (80, 1090), but the point that I get is (3,3). Can anyone help me? Thanks
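    A (3,3)-style offset often means the transform was read before the stretched layout pass had run. One hedged thing to try is forcing a layout pass on the container first, reusing the question's names:

        // Make sure measure/arrange have completed before asking for coordinates
        gridZoom.UpdateLayout();

        GeneralTransform gt = nc.TransformToVisual(gridZoom);
        Point offset = gt.Transform(new Point(0, 0));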

    Read the article

  • Help dealing with data dependency between two registration forms

    - by franko75
    I have a tricky issue here with the registration of both a user and his/her pet. Both the user and the pet are treated as separate entities and both require separate registration forms. However, the user's pet has to be linked to the user via a foreign key in the database. The process is basically that when a new user joins the site, first they register their pet, then they register themselves. The reason for this order is to check the pet's eligibility for the site (there are some criteria to be met) first, instead of having the user sign up only to then find out their pet is ineligible. It is this ordering of the form submissions which is causing me a bit of a headache, as follows...

    The site is being developed with an MVC framework. The user registration process is managed via a method in the User_form controller, while the pet registration process is managed via a method in the Pet_form controller. The pet registration form happens first, and the pet data can be saved without the owner_id at this stage, with the user id possibly being added (e.g. by retrieving the pet's id from the session) following user registration. However, doing it this way could potentially result in redundant data: pet records would be created in the database, but if the user doesn't actually register themselves too, the pets will be ownerless records in the DB.

    The other option is to serialize the new pet's data at the pet registration stage and not save it to the DB until the user fills out their registration form too. Once the user is created, I can pass the serialised data AND the owner_id to a method in the Pet model which can update the DB. However, I also need to set the newly created $pet to $this->pet, which I then access for a sequence of other related forms. Should I just set the session variable in the model method? Then in the Pet controller constructor, do a check for a pet stored in the session, and if found, assign it to $this->pet...

    If this makes any sense to anybody and you have some advice, I'd be grateful to hear it!
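    For illustration, a framework-agnostic sketch of the serialize-then-save flow (all method names here are placeholders, not the poster's framework):

        // Pet form submit: stash the validated pet, don't persist yet
        $_SESSION['pending_pet'] = serialize($petData);

        // User form submit, after the user row is created:
        $pet = unserialize($_SESSION['pending_pet']);
        $pet['owner_id'] = $newUserId;
        $petId = $this->petModel->save($pet);   // hypothetical model call
        $_SESSION['pet_id'] = $petId;           // later forms re-hydrate $this->pet from this
        unset($_SESSION['pending_pet']);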

    Read the article

  • How should I create my own 'now' / DateTime.Now?

    - by Michel
    Hi all, I'm starting to build a part of a system which will hold a lot of DateTime validations, and a lot of 'if it was done before now' or 'if it will start in an hour' etc. The usual way to go is to use DateTime.Now to get the actual time. I predict, however, that during unit tests that will give me a real headache, because I will have to set up my test data for the time when the test runs instead of using a default set of test data. So I thought: why not use my own 'now', so I can set the current datetime to any moment in time? As I don't want to set the test server's internal clock, I was thinking about this solution, and I was wondering what you think of it. The basic idea is that I use my own DateTime class. That class gives you the current datetime, but you can also set your own time from outside.

        public static class MyDateTime
        {
            private static TimeSpan _TimeDifference = TimeSpan.Zero;

            public static DateTime Now
            {
                get { return DateTime.Now + _TimeDifference; }
            }

            public static void SetNewNow(DateTime newNow)
            {
                _TimeDifference = newNow - DateTime.Now;
            }

            public static void AddToRealTime(TimeSpan timeSpan)
            {
                _TimeDifference = timeSpan;
            }

            public static void SubtractFromRealTime(TimeSpan timeSpan)
            {
                _TimeDifference = -timeSpan;
            }
        }
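    For illustration, this is how a test could pin "now" with the class above (the Order type and assertion are hypothetical):

        // Freeze "now" at a known moment, exercise the code, then reset
        MyDateTime.SetNewNow(new DateTime(2010, 5, 1, 12, 0, 0));
        var order = new Order();                  // hypothetical system under test
        Assert.IsTrue(order.CreatedBefore(MyDateTime.Now.AddHours(1)));
        MyDateTime.SetNewNow(DateTime.Now);       // drift back to ~real time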

    Read the article

  • How to combine two separate unrelated Git repositories into one with single history timeline

    - by Antony
    I have two unrelated (not sharing any ancestor check-in) Git repositories. One is a super-repository which contains a number of smaller projects (let's call it repository A). The other is just a makeshift local Git repository for a smaller project (let's call it repository B). Graphically, it looks like this:

        A0-B0-C0-D0-E0-F0-G0-HEAD (repo A)
        A0-B0-C0-D0-E0-F0-G0-HEAD (remote/master bare repo pulled & pushed from repo A)
        A1-B1-C1-D1-E1-HEAD (repo B)

    Ideally, I would really like to merge repo B into repo A with a single history timeline, so it would appear that I originally started the project in repo A. Graphically, this would be the ideal end result:

        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (new repo A)
        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (remote/master bare repo pulled & pushed from repo A)

    I have been doing some reading on submodules and subtree (Pro Git is a pretty good book, by the way), but both of them seem to cater to maintaining two separate branches, with submodules being able to pull changes from upstream and subtrees being slightly less of a headache. Both solutions require additional and specialized git commands to handle check-ins and syncs between the master and the submodule/subtree branch. Both solutions also result in multiple timelines (with --squash you even get three timelines with subtree). The closest solution from SO seems to talk about "graft", but is that really it? The goal is to have a single unified repository where I can pull/push check-ins, so that there is no more repo B, just repo A in the end.
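    One hedged way to end up with a single line of history is to fetch B into A and replay B's commits on top of A's tip; this rewrites B's SHAs and appends its commits rather than interleaving them by date, but it does leave exactly one timeline:

        cd repoA
        git remote add repoB /path/to/repoB
        git fetch repoB
        git checkout -b import-b repoB/master
        git rebase master          # replay B's commits onto A's history
        git checkout master
        git merge import-b         # fast-forwards to the combined line
        git remote rm repoB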

    Read the article

  • Visual Studio 2010 is not allowing me to debug my code

    - by Tejs
    So, this interesting issue has been plaguing me for the past couple of hours. Visual Studio 2010 Ultimate no longer attaches the debugger and lets me debug my code. If I use the built-in development server, then everything works fine. If I switch to Use Local IIS Web Server (http://localhost/), then all it does is attach to w3wp.exe, but no DLLs or PDBs are loaded for anything. I can go to Debug > Windows > Modules, and literally nothing is loaded in this window. Conversely, when using the built-in development server, the Modules window displays all the DLLs and shows that the symbols for my DLLs have been loaded. Something is obviously amiss. The VS installation is completely bone stock. In IIS, my website is configured with ASP.NET 2.0 (because no 3.5 exists to select from the drop-down), along with the read / log visits / index this resource options checked on the "Home Directory" tab. Some of my failed ideas:

    1) If I attach to the iexplore.exe instance where the website is displayed, it loads Internet Explorer's DLLs, but not mine.
    2) I've restarted the computer multiple times.
    3) I've invoked devenv.exe /resetuserdata once.
    4) I've confirmed that every project is indeed set to debug and not release.
    5) Deleted all \bin contents and rebuilt the solution.
    6) Deleted the entire solution and re-pulled it from source control.

    Can someone tell me what is wrong with this thing? I'm going to have an aneurysm from the headache this is causing me.
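    One hedged thing to try when the worker process attaches with an empty Modules window is re-registering ASP.NET with IIS, so the site actually runs under the runtime the debugger expects:

        rem Re-register ASP.NET 2.0 with IIS (32-bit path shown), then restart IIS
        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
        iisreset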

    Read the article

  • Tips for using Subversion and XCode in a team project

    - by FelipeUY
    Hi to all. I've been working on an Xcode (iPhone) project with three other people. We have the project in a Subversion repository, but we still don't completely understand some aspects of the Subversion + Xcode methodology:

    1) Each time someone commits a single file, it may or may not appear in the projects of the other developers - even when the same person who creates the new files adds them to the repository and then commits them. Why does that happen? Any suggestions?
    2) Each person involved in the project can't do a "Commit entire project" without causing a considerable headache to the rest of the developers... any idea how this should be done?

    The working methodology that we are trying to implement is that only one developer (generally the leader of the project) can commit the entire project, but he must inform the rest of the team, so everybody can be prepared to receive a message asking them to discard their changes and read the new files from the repository. I need suggestions or advice on how to handle a project with multiple developers using Subversion. We have read the Subversion handbook and many other messages on Stack Overflow, but I still can't find any useful advice. Thanks for any tip!
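    For what it's worth: new files only show up for teammates after they run an update, and the .xcodeproj project file also has to be committed so the files appear inside Xcode's project tree. A common per-developer rhythm with a plain svn client (Xcode's SCM UI wraps the same operations; file names here are placeholders):

        svn update                       # pull teammates' changes first
        svn status                       # review what you actually touched
        svn add Classes/NewThing.h Classes/NewThing.m   # new files must be added explicitly
        svn commit -m "Add NewThing" Classes/NewThing.h Classes/NewThing.m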

    Read the article

  • Laravel4: Checking many-to-many relationship attribute even when it does not exist

    - by Simo A.
    This is my first time with Laravel and also Stack Overflow... I am developing a simple course management system. The main objects are User and Course, with a many-to-many relationship between them. The pivot table has an additional attribute, participant_role, which defines whether the user is a student or an instructor within that particular course.

        class Course extends Eloquent {
            public function users()
            {
                return $this->belongsToMany('User')->withPivot('participant_role')->withTimestamps();
            }
        }

    However, there is an additional role, system admin, which can be defined on the User object. And this brings me mucho headache. In my blade view I have the following:

        @if ($course->pivot->participant_role == 'instructor')
            // do something here...
        @endif

    This works fine when the user has been assigned to that particular $course and there is an entry for this user-course combination in the pivot table. However, the problem is the system administrator, who also should be able to access course details. The if-clause above gives a "Trying to get property of non-object" error, apparently because there is no entry in the pivot table for the system administrator (as the role is defined on the User object). I could probably solve the problem by using some off-the-shelf bundle for handling role-based permissions. Or I could (but don't want to) do something like this with two nested if-clauses:

        if (!empty($course->pivot)) {
            if ($course->pivot->participant_role == 'instructor') {
                // do something...
            }
        }

    Other options (suggested in partially similar entries on Stack Overflow) would include 1) reading all the pivot table entries into an array and using in_array to check if a role exists, or even 2) an SQL left join to do this at the database level. However, ultimately I am looking for a compact one-line solution. Can I do anything similar to this, which unfortunately does not seem to work?

        if (!empty($course->pivot->participant_role == 'instructor')) {
            // do something...
        }

    The shorter the answer, the better :-). Thanks a lot!
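    A minimal sketch of a null-safe one-liner - plain PHP short-circuiting, nothing Laravel-specific assumed: when pivot is null the first operand is falsy, so the property access on the right is never evaluated and the non-object error can't fire.

        @if ($course->pivot && $course->pivot->participant_role == 'instructor')
            {{-- do something here... --}}
        @endif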

    Read the article

  • How to handle javascript & css files across a site?

    - by Industrial
    Hi everybody, I have had some thoughts recently on how to handle shared JavaScript and CSS files across a web application. In a current web application that I am working on, I have quite a large number of different JavaScript and CSS files placed in a folder on the server. Some of the files are reused, while others are not. On a production site, it's quite stupid to have a high number of HTTP requests and many kilobytes of unnecessary JavaScript and redundant CSS being loaded. The solution to that is of course to create one big bundled file per page that only contains the necessary information, which is then minimized and sent compressed (GZIP) to the client. There's no worry in creating a bundle of JavaScript files and minimizing them manually if you were going to do it once, but since the app is continuously maintained and things do change and develop, it quite soon becomes a headache to do this manually while pushing out new updates that feature changes to the JavaScript and/or CSS files in production. What's a good approach to handle this? How do you handle this in your application?
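    For illustration, the whole concatenate-and-minify step can be scripted so it runs on every deploy rather than by hand; a sketch assuming the YUI Compressor jar is available and using placeholder file names:

        # Bundle then minify per page; wire this into the deploy script
        cat js/jquery.plugins.js js/app.core.js js/page.home.js > build/home.js
        java -jar yuicompressor.jar build/home.js -o build/home.min.js

        cat css/reset.css css/layout.css css/home.css > build/home.css
        java -jar yuicompressor.jar build/home.css -o build/home.min.css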

    Read the article

  • How to send java.util.logging to log4j?

    - by matt b
    I have an existing application which does all of its logging against log4j. We use a number of other libraries that either also use log4j, or log against Commons Logging, which ends up using log4j under the covers in our environment. One of our dependencies even logs against slf4j, which also works fine since it eventually delegates to log4j as well.

    Now, I'd like to add ehcache to this application for some caching needs. Previous versions of ehcache used commons-logging, which would have worked perfectly in this scenario, but as of version 1.6-beta1 they have removed the dependency on commons-logging and replaced it with java.util.logging instead. Not really being familiar with the built-in JDK logging available with java.util.logging, is there an easy way to have any log messages sent to JUL logged against log4j, so I can use my existing configuration and setup for any logging coming from ehcache?

    Looking at the javadocs for JUL, it looks like I could set up a bunch of environment variables to change which LogManager implementation is used, and perhaps use that to wrap log4j Loggers in the JUL Logger class. Is this the correct approach? Kind of ironic that a library's use of built-in JDK logging would cause such a headache when (most of) the rest of the world is using 3rd party libraries instead.
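    Since slf4j is already in the classpath chain, the jul-to-slf4j bridge is the usual answer here: it registers a JUL Handler that republishes records to SLF4J, which then delegates to log4j. A sketch (requires jul-to-slf4j.jar in addition to the existing slf4j/log4j binding); call it once at startup, before ehcache initializes:

        import org.slf4j.bridge.SLF4JBridgeHandler;

        public final class JulBridgeBootstrap {
            public static void install() {
                // drop JUL's default console handler, then route JUL -> SLF4J -> log4j
                java.util.logging.LogManager.getLogManager().reset();
                SLF4JBridgeHandler.install();
            }
        }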

    Read the article

  • C++: Conceptual problem in designing an interpreter

    - by sub
    I'm programming an interpreter for an experimental programming language (educational, fun, ...). So far, everything has gone well (tokenizer & parser), but I'm getting a huge problem with some of the data structures in the part that actually runs the tokenized and parsed code. My programming language basically has only two types, int and string, and they are represented as C++ strings (the std class) and ints. Here is a short version of the data structure that I use to pass values around:

        enum DataType { Null, Int, String };

        class Symbol {
        public:
            string identifier;
            DataType type;
            string stringValue;
            int intValue;
        };

    I can't use a union because string doesn't allow me to. The structure above is beginning to give me a headache. I have to scatter code like this everywhere in order to make it work, and it is beginning to grow unmaintainable:

        if (mySymbol.type == Int) {
            mySymbol.intValue = 1234;
        } else {
            mySymbol.stringValue = "abcde";
        }

    Is there any better way to solve this? I hope so!
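    One sketch (C++98-era, to match the question) keeps both members but funnels all access through checked setters/getters, so the type test lives in exactly one place; boost::variant would be the heavier-weight alternative. DataType here is the enum from the question:

        #include <stdexcept>
        #include <string>

        class Symbol {
        public:
            std::string identifier;

            void set(int v)                { type_ = Int;    intValue_ = v; }
            void set(const std::string& v) { type_ = String; stringValue_ = v; }

            int asInt() const {
                if (type_ != Int) throw std::runtime_error("type error: expected Int");
                return intValue_;
            }
            const std::string& asString() const {
                if (type_ != String) throw std::runtime_error("type error: expected String");
                return stringValue_;
            }

        private:
            DataType type_;
            std::string stringValue_;
            int intValue_;
        };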

    Read the article

  • Rails form with better url

    - by Sam
    wow, switching to REST is a different paradigm for sure and is mainly a headache right now.

    view:

        <% form_tag(businesses_path, :method => "get") do %>
          <%= select_tag :business_category_id,
                options_for_select(@business_categories.collect { |bc| [bc.name, bc.id] }.insert(0, ["All Containers", 0]),
                                   which_business_category(@business_category)),
                { :onchange => "this.form.submit();" } %>
        <% end %>

    controller:

        def index
          @business_categories = BusinessCategory.find(:all)
          if params[:business_category_id].to_i != 0
            @business_category = BusinessCategory.find(params[:business_category_id])
            @businesses = @business_category.businesses
          else
            @businesses = Business.all
          end
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @businesses }
          end
        end

    routes:

        map.resources

    What I want is a better URL than what this form is producing, which is the following:

        http://localhost:3000/businesses?business_category_id=1

    Without REST I would have done something like http://localhost:3000/business/view/bbq (bbq as the permalink), or I would have done http://localhost:3000/business_categories/view/bbq and gotten the businesses that are associated with the category, but I don't really know the best way of doing this. So the two questions are: first, what is the best logic for finding a business by its category using the latter form, and second, how to get that into a pretty URL, all through RESTful routes in Rails.
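    A hedged sketch of the slug approach: override to_param so generated resource URLs carry a permalink, and look the record up by it. This assumes a permalink column on the table, in Rails 2.x style to match the question:

        class BusinessCategory < ActiveRecord::Base
          has_many :businesses

          # business_category_path(cat) now yields /business_categories/bbq
          def to_param
            permalink
          end
        end

        # the controller lookup then switches from id to slug:
        @business_category = BusinessCategory.find_by_permalink(params[:id])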

    Read the article

  • PHP SELECT FROM WHERE col IN not working using array

    - by Sam Ram San
    I'm trying to pull some data from MySQL using an array that was fetched by a first query. Everything's fine all the way to the implode; after that, it's been a headache for me. Can someone help me?

        <?php
        $var = 'somedata';
        include("config/conect.php");

        $zip = "SELECT * FROM table WHERE firstcol LIKE '%$var%' ORDER BY seconcol";
        $results = $connetction->query($zip);
        while ($row = $results->fetch_array()) {
            $mycode[] = $row['zip'];
        }
        array_pop($mycode);
        $mycode = implode(', ', $mycode);
        //print_r($mycode);
        echo '<br /><br /><br />';

        $usr = "SELECT * FROM reg_temp WHERE zip IN('" . join("','", $mycode) . "')";
        $results = $asies->query($usr);
        while ($row = $results->fetch_array()) {
            $name = $row['name'];
            echo $name;
        }
        ?>
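    The likely culprit: after implode(), $mycode is already a single comma-separated string, and join("','", $mycode) over a string can't produce the quoted list the IN clause needs. A sketch of the fix, keeping $mycode as an array and quoting each value (mysqli-style, matching the $connetction->query() calls above):

        // Skip the implode() line entirely; $mycode stays an array here
        $quoted = array();
        foreach ($mycode as $z) {
            $quoted[] = "'" . $connetction->real_escape_string($z) . "'";
        }
        $usr = "SELECT * FROM reg_temp WHERE zip IN (" . implode(',', $quoted) . ")";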

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I'm having some trouble with profiles and would like to reach out for some help. I've tried to do some research to help myself along, but I'm not making much progress on my own. I've pretty much taken over the sysadmin duties for my small lab. I don't have much experience to justify it, besides being the only one with the time and dedication to go at it (the environment was in a state of disrepair). My network and domain are extremely small by most standards, about 10 users at a time, but their activity on the network is pretty intensive and we work with fairly large files. None of the network is online, which is nice at the moment because it spares me another headache.

    On to my profile problem. I have set up roaming profiles for the users on the network. After a little research, I think I will be switching this to a hybrid of folder redirection and roaming profiles, as this seems to be best practice. I also don't want users having to wait a long time if they have a bloated profile.

    I've finally got a build working using MDT. We have Mac Pros, and it wasn't fun getting everything to play nice. The way I did this was by setting up a reference computer, installing all the software and tools that each user would need, and editing the settings and preferences to how we need them. I then used MDT to do a sysprep and capture to create an image of my reference computer. Using the reference image I can push out images to the rest of the desktops in the environment.

    The issue I'm having is when we join a computer to the domain. The user can log on and operate fine on the computer, but I'd like more. When the user is logged on with their domain user name, they lose a lot of the icons I had on my reference image, as well as the desktop background and some other miscellaneous settings. I would love to have the user log on with their domain user name and see the icons and desktop environment as I set them up on the reference computer. I'm not sure if this is possible, or something simple that I'm missing, but any help would be greatly appreciated!
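    One hedged approach for exactly this: have sysprep copy the reference account's profile into Default User via CopyProfile, so every new logon (local or domain) inherits those icons and the desktop background. An unattend.xml fragment (specialize pass; the processorArchitecture value depends on the image):

        <settings pass="specialize">
          <component name="Microsoft-Windows-Shell-Setup"
                     processorArchitecture="amd64"
                     publicKeyToken="31bf3856ad364e35"
                     language="neutral" versionScope="nonSxS">
            <CopyProfile>true</CopyProfile>
          </component>
        </settings>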

    Read the article

  • Why can't I connect to remote Microsoft SQL Server through SSH tunnel?

    - by Alexander
    I have at home a D-Link DIR-615 C1 router with DD-WRT. I set up the SSH server on the router, and log on through an SSH2-RSA passphrase-protected key. That router is the gateway between the local network and the internet. One of the computers on that network has Microsoft SQL Server 2008 installed, with the TCP/IP protocol enabled on port 1433. I've set up port forwarding on the router, so that remote connections are possible and are, in fact, working (some developers log on remotely without problems).

    I am part of another network that has internet access through a proxy server which only has ports 80 and 443 open. I can't connect to that MSSQL server directly because port 1433 is closed on this network. I connected (using PuTTY) through port 443 to my router's SSH server, and set up two tunnels. One is for RDP (3389), and it's working. The other is for port 1433, to connect to the server. I can't connect through the SSH tunnel to the MS SQL Server, either through telnet or through GUI clients. Am I missing something?

    Additional details: on connect, I get this error from SQL Server Management Studio:

        TITLE: Connect to Server
        Cannot connect to localhost:14330.
        ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while
        establishing a connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured to allow
        remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a
        connection to SQL Server) (Microsoft SQL Server, Error: 3)
        For help, click:
        http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=3&LinkId=20476
        BUTTONS: OK

    The tunnel is configured like this:

        L14330 192.168.0.103:1433

    192.168.0.103 is the permanent address of the SQL Server on the LAN. I also successfully forwarded TCP traffic of port 3389 to that IP, so tunneling is working to that IP address. When connecting without the tunnel, through Microsoft SQL Server Management Studio, using the same method, the connection establishes. Too bad my proxy doesn't allow port 1433 traffic, or I wouldn't have this headache.
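    Two hedged observations from the error text: SSMS separates host and port with a comma, not a colon, and "Named Pipes Provider" in the message suggests the attempt never used TCP at all. So the server name worth trying in the connect dialog would be:

        tcp:localhost,14330

    The tcp: prefix forces the TCP provider through the tunnel instead of Named Pipes.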

    Read the article

  • Cannot install grub to RAID1 (md0)

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS, and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:

        # go to superuser
        sudo bash

        # see RAID state; State should be "clean, degraded"
        mdadm -Q -D /dev/md0

        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1

        # see partitions
        fdisk -l

        # shut down the computer
        shutdown now
        # physically replace the old disk with the new, start the system again

        # see partitions
        fdisk -l

        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda

        # recreate the id for sda
        sfdisk --change-id /dev/sda 1 fd

        # add sda1 to the RAID
        mdadm /dev/md0 --add /dev/sda1

        # see RAID state; State should be "clean, degraded, recovering"
        mdadm -Q -D /dev/md0

        # to see status you can use
        cat /proc/mdstat

    This is my mdadm output after the sync:

        /dev/md0:
                Version : 0.90
          Creation Time : Wed Feb 17 16:18:25 2010
             Raid Level : raid1
             Array Size : 470455360 (448.66 GiB 481.75 GB)
          Used Dev Size : 470455360 (448.66 GiB 481.75 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Thu Nov 1 15:19:31 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
                   UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11
                 Events : 0.11049560

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       17        1      active sync   /dev/sdb1

    After the rebuild completed, "fdisk -l" says I have no valid partition table on /dev/md0. This is my fdisk -l output:

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00057d19

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sda2       940910985   976768064    17928540    5  Extended
        /dev/sda5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    This is my grub install output:

        root@answe:~# grub-install /dev/sda
        /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition
        labels or both partition label and filesystem. This is not supported yet..
        /usr/sbin/grub-setup: error: embedding is not possible, but this is required for
        cross-disk install.
        root@answe:~# grub-install /dev/sdb
        Installation finished. No error reported.

    So:
    1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0
    2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices: /dev/md0"

    I cannot boot my system except from /dev/sdb1 and /dev/sda1, but in DEGRADED mode... Can anybody resolve this issue? I have a big headache with this.
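    The grub-setup warning usually means something on the raw /dev/sda looks like a filesystem signature in addition to the partition table. A hedged sequence to investigate and retry (wipefs only lists signatures unless given an offset to erase):

        wipefs /dev/sda                  # list signatures on the raw disk, non-destructive
        # if a stray filesystem signature shows on /dev/sda itself (not sda1):
        #   wipefs -o <offset> /dev/sda  # erase just that one signature
        grub-install --recheck /dev/sda
        update-grub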

    Read the article

  • Routing through multiple subinterfaces in Debian

    - by Kstro21
    My question is as simple as the title: I have a Debian 6 box with 2 NICs and 3 different subnets on a single interface, just like this:

        auto eth0
        iface eth0 inet static
            address 192.168.106.254
            netmask 255.255.255.0

        auto eth0:0
        iface eth0:0 inet static
            address 172.19.221.81
            netmask 255.255.255.248

        auto eth0:1
        iface eth0:1 inet static
            address 192.168.254.1
            netmask 255.255.255.248

        auto eth1
        iface eth1 inet static
            address 172.19.216.3
            netmask 255.255.255.0
            gateway 172.19.216.13

    eth0 is connected to a switch with 3 different VLANs; eth1 is connected to a router. There are no iptables DROP rules, so all traffic is allowed.

    Now, passing traffic through eth0 is OK, and passing traffic through eth0:0 is OK, but passing traffic through eth0:1 is not working. I can ping the IP address of that subinterface from a PC where this IP is the default gateway, but I can't get to servers in the subnet of the eth1 interface. The traffic is not passing, even when I set iptables to log all the traffic in the FORWARD chain and I can see the traffic there - the traffic is still not really passing. And the funny thing is I can do it all the other way around, I mean, passing from eth1 to eth0:1 - RDP, telnet, ping, etc. Doing some work with iptables, I managed to pass some traffic from eth0:1 to eth1; the iptables rules look like this:

        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m multiport --dports 25,110,5269 -j DNAT --to-destination 172.19.216.1
        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 172.19.216.9
        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 172.19.216.11
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 172.19.221.80/29 -j SNAT --to-source 172.19.221.81
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 192.168.254.0/29 -j SNAT --to-source 192.168.254.1
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -o eth0 -j SNAT --to-source 192.168.106.254

    Doing this it works, but it is really a headache to have to map each port to a server - imagine if I move the service to another server. So now I have doubts: can Debian route through multiple subinterfaces? Does a limit exist for this? If not, what am I doing wrong, given that I have the same setup with other subnets and it is working OK? Without the iptables rules in the nat table, it doesn't work. Thanks, and I hope for good comments/answers.
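    A hedged first check, since forwarding works for one subinterface but not another: strict reverse-path filtering can silently drop packets that arrive on eth0:1 when the replies would leave by a different route. Loosening it (and confirming forwarding) costs nothing to try:

        sysctl -w net.ipv4.ip_forward=1            # confirm forwarding is on at all
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.eth0.rp_filter=0
        sysctl -w net.ipv4.conf.eth1.rp_filter=0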

    Read the article

  • The Challenge with HTML5 – In Pictures

    - by dwahlin
    I love working with Web technologies and am looking forward to the new functionality that HTML5 will ultimately bring to the table (some of which can be used today). Having been through the div versus layer battle back in the IE4 and Netscape 4 days, I think we're headed down that road again as a result of browsers implementing features differently. I've been spending a lot of time researching and playing around with HTML5 samples and features (mainly because we're already seeing demand for training on HTML5), and there's a lot of great stuff there that will truly revolutionize web applications as we know them. However, browsers just aren't there yet, and many people outside of the development world don't really feel a need to upgrade their browser if it's working reasonably well (Mom and Dad come to mind), so it's going to be a while.

    There's a nice test site at http://www.HTML5Test.com that runs through different HTML5 features and scores how well they're supported. They don't test for everything and are very clear about that on the site:

        "The HTML5 test score is only an indication of how well your browser supports the upcoming HTML5 standard and related specifications. It does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform.

        The score is calculated by testing for the many new features of HTML5. Each feature is worth one or more points. Apart from the main HTML5 specification and other specifications created by the W3C HTML Working Group, this test also awards points for supporting related drafts and specifications. Some of these specifications were initially part of HTML5, but are now further developed by other W3C working groups. WebGL is also part of this test despite not being developed by the W3C, because it extends the HTML5 canvas element with a 3d context.

        The test also awards bonus points for supporting audio and video codecs and supporting SVG or MathML embedding in a plain HTML document. These tests do not count towards the total score because HTML5 does not specify any required audio or video codec. Also SVG and MathML are not required by HTML5; the specification only specifies rules for how such content should be embedded inside a plain HTML file.

        Please be aware that the specifications that are being tested are still in development and could change before receiving an official status. In the future new tests will be added for the pieces of the specification that are currently still missing. The maximum number of points that can be scored is 300 at this moment, but this is a moving goalpost."

    It looks like their tests haven't been updated since June, but the numbers are pretty scary as a developer, because it means I'm going to have to do a lot of browser sniffing before assuming a particular feature is available to use. Not that much different from what we do today as far as browser sniffing, you say? I'd have to disagree, since HTML5 takes it to a whole new level. In today's world we have script libraries such as jQuery (my personal favorite), Prototype, script.aculo.us, the YUI Library, MooTools, etc. that handle the heavy lifting for us. Until those libraries handle all of the key HTML5 features available, it's going to be a challenge.
    Certain features, such as Canvas, are supported fairly well across most of the major browsers, while other features such as audio and video are hit or miss depending upon which codec you want to use. Run the tests yourself to see what passes and what fails for different browsers. You can also view the HTML5 Test Suite Conformance Results at http://test.w3.org/html/tests/reporting/report.htm (a work in progress). The table below lists the scores that the HTML5Test site returned for the different browsers I have installed on my desktop PC and laptop. A specific list of tests run and features supported is given when you go to the site. Note that I went ahead and tested the IE9 beta, and it didn't do nearly as well as I expected it would; but it's not officially out yet, so I expect that number will change a lot.

    Am I opposed to HTML5 as a result of these tests? Of course not - I'm actually really excited about what it offers. However, I'm trying to be realistic, and I feel it'll definitely add a new level of headache to the Web application development process, having been through something like this many years ago. On the flipside, developers that are able to target a specific browser (typically Intranet apps) or master the cross-browser issues are going to release some pretty sweet applications. Check out http://html5gallery.com/ for a look at some of the more cutting-edge sites out there that use HTML5. Also check out the http://www.beautyoftheweb.com site that Microsoft put together to showcase IE9.

    [Browser scores table - only the row labels survived extraction: Chrome 8, Safari 5 for Windows, Opera 10, Firefox 3.6, Internet Explorer 9 Beta (note that it's still beta), Internet Explorer 8.]
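    Given the uneven scores, per-feature detection (what libraries like Modernizr automate) is a safer plan than user-agent sniffing; a small hedged example:

        // Detect support instead of sniffing the browser string
        function supportsCanvas() {
            var el = document.createElement('canvas');
            return !!(el.getContext && el.getContext('2d'));
        }

        function canPlay(mimeType) {
            var el = document.createElement('video');
            return !!(el.canPlayType && el.canPlayType(mimeType).replace(/no/, ''));
        }

        if (supportsCanvas()) {
            // safe to draw with the 2d context here
        }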

    Read the article

  • BIP and Mapviewer Mash Up I

    - by Tim Dexter
    I was out in Yellowstone last week soaking up various wildlife and a bit too much rain ... good to be back, until the 95F heat yesterday. Taking a little break from the Excel templates; the dev folks are planning an Excel patch in the next week or so that will add a mass of new functionality. At the risk of completely misleading you, I'm going to hang back a while. What I have written so far holds true and will continue to do so.

    This week, I have been mostly eating 'mapviewer' ... answers on a postcard please, TV show and character. I had a request to show how BIP can call MapViewer and render a dynamic map in an output. So I hit the books and colleagues for some answers. MapViewer is Oracle's geographic information system, hereby known as GIS. I use it a lot in our BIEE demos, where the interaction with the maps is very impressive. Need a map of California and its congressional districts? I have contacts; Jerry and David with their little black box of maps. Once in my possession, I can build highly interactive, clickable maps that allow the user to drill into more information using a very friendly interface driving BIEE content and navigation. But what about maps in BIP output?

    Bryan Wise, who has written some articles on this blog, did some work a while back with the PL/SQL API interface. The extract for the report called a function that in turn called the MapViewer server, passing a set of mapping requirements; it then returned a URL to a cached copy of that map. Easy to then have BIP render that image. That's still very doable. You need to install a couple of packages and then load the MapViewer Java APIs into the database. Then you can write your function against the APIs. A little involved? Maybe, but the database is doing all the heavy lifting for you.

    I thought I would investigate another method for getting the maps back into BIP. There is a URL interface you can call; this involves building an XML message to be passed to the MapViewer server. It's pretty straightforward to use on the MapViewer side. On the BIP side things are a little more tricksy. After some unexpected messing about I finally got the ubiquitous Hello World map to render using the URL method. Not the most exciting map in the world - lots of ocean and a rather long URL to get it to render:

        http://127.0.0.1:9704/mapviewer/omserver?xml_request=%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E

    Notice all of the encoding in the URL string to handle the spaces, quotes, etc. All necessary to get BIP to make the call to the MapViewer server correctly without truncating the URL if it hits a real space rather than a %20. With that in mind, constructing the URL was pretty simple. I'm not going to get into the content of the URL too much; for that you need to bone up on the MapViewer XML API. Check out the home page here and the documentation here.

    To make the template portable I used the standard CURRENT_SERVER_URL parameter from the BIP server and declared that in my template:

        <?param@begin:CURRENT_SERVER_URL;'myserver'?>

    Ignore the 'myserver'; that was just a dummy value for testing. At runtime it will resolve to:

        http://yourserver:port/xmlpserver

    Not quite what we need, as MapViewer has its own server path; in my case I needed 'mapviewer/omserver?xml_request=' as the fixed path to the MapViewer request URL.
    A little concatenation and substringing later, I came up with:

        <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?>

    That's the basic URL that I can then build on. To get the Hello World map I need to add the following:

        <map_request title="Hello World" datasource="cagis" format="GIF_STREAM"/>

    Those angle brackets were the source of my headache: BIP's XSLT engine was attempting to process them rather than just pass them through. Hok Min to the rescue ... again. I owe him lunch when I get out to HQ again! To solve the problem, I needed to escape all the characters and white space and then use native XSL to assign the string to a parameter:

        <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param>

    I did not need to assign it to a parameter, but I felt that if I were going to do anything more serious than Hello World, like plotting points of interest on the map, I would need to dynamically build the URL, so using a set of parameters or variables that I then concatenate would be easier. Now I had the initial server string and the request; all I then did was combine the two using a concat:

        concat($mURL,$pXML)

    Embedding that into an image tag:

        <fo:external-graphic src="url({concat($mURL,$pXML)})"/>

    and I was done. Notice the curly braces to get the concat evaluated prior to the image call. As you will see next time, building the XML message to go onto the URL can get quite complex, but I have used it with some data. Ultimately, it would be easier to build an extension to BIP to handle the data to be plotted; it would then build the XML message, call MapViewer and return a URL to the map image for BIP to render. More on that next time ...

    Read the article
