Search Results

Search found 6438 results on 258 pages for 'layer groups'.


  • How would I authenticate against a local windows user on another machine in an ASP.NET application?

    - by Daniel Chambers
    In my ASP.NET application, I need to be able to authenticate/authorise against local Windows users/groups (ie. not Active Directory) on a different machine, as well as be able to change the passwords of said remote local Windows accounts. Yes, I know Active Directory is built for this sort of thing, but unfortunately the higher ups have decreed it needs to be done this way (so authentication against users in a database is out as well). I've tried using DirectoryEntry and WinNT like so: DirectoryEntry user = new DirectoryEntry(String.Format("WinNT://{0}/{1},User", serverName, username), username, password, AuthenticationTypes.Secure) but this results in an exception when you try to log in more than one user: Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again. I've tried making sure my DirectoryEntries are used inside a using block, so they're disposed properly, but this doesn't seem to fix the issue. Plus, even if that did work it is possible that two users could hit that line of code concurrently and therefore try to create multiple connections, so it would be fragile anyway. Is there a better way to authenticate against local Windows accounts on a remote machine, authorise against their groups, and change their passwords? Thanks for your help in advance.
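    A possible alternative worth trying (a minimal sketch, untested against this exact setup; the machine name "REMOTEBOX" and the group name are placeholders) is System.DirectoryServices.AccountManagement, which can target a remote machine's local account database and opens a short-lived context per call instead of holding a WinNT:// connection. UserPrincipal also exposes ChangePassword/SetPassword, which may cover the password-change requirement; an overload of PrincipalContext accepts connect credentials if the IIS identity cannot read the remote SAM.

        using System;
        using System.Linq;
        using System.DirectoryServices.AccountManagement;

        public static class RemoteLocalAccounts
        {
            // Validate a local Windows account on another machine ("REMOTEBOX" is a placeholder).
            public static bool Validate(string machine, string user, string password)
            {
                using (var ctx = new PrincipalContext(ContextType.Machine, machine))
                {
                    return ctx.ValidateCredentials(user, password);
                }
            }

            // Authorise against the remote machine's local groups.
            public static bool IsInLocalGroup(string machine, string user, string groupName)
            {
                using (var ctx = new PrincipalContext(ContextType.Machine, machine))
                using (var principal = UserPrincipal.FindByIdentity(ctx, user))
                {
                    return principal != null &&
                           principal.GetGroups().Any(g => g.Name.Equals(groupName, StringComparison.OrdinalIgnoreCase));
                }
            }
        }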

    Read the article

  • Connecting grouped dots/points on a scatter plot based on distance

    - by ToNoY
    I have 2 sets of depth point measurements, for example:
    > a
      depth value
    1     2     2
    2     4     3
    3     6     4
    4     8     5
    5    16    40
    6    18    45
    7    20    58
    > b
      depth value
    1    10    10
    2    12    20
    3    14    35
    I want to show both groups in one figure, plotted against depth and with different symbols, as you can see here:
    plot(a$value, a$depth, type='b', col='green', pch=15)
    points(b$value, b$depth, type='b', col='red', pch=14)
    The plot seems okay, but the annoying part is that the green symbols are all connected (though I do want connecting lines). I want a connection only where a group has continuous data points at a 2 m interval, i.e. the symbols should be connected with a line from 2 to 8 m (green), then group B symbols should be connected from 10 to 14 m (red), and then group A symbols should be connected again (green). In other words, I do NOT want to see a connection between the 8 m sample and the 16 m sample in group A. An easy solution might be to divide group A into two parts (say, A-shallow and A-deep) and plot A-shallow, B, and A-deep separately, but that is completely impractical because I have thousands of data points in hundreds of groups, i.e. I have to produce many depth profiles. Therefore, there has to be a way to program it so that dots are NOT connected beyond a prescribed depth interval (e.g. 2 m in this case) within a particular group of samples. Any idea?
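    One way this is often handled in base R (a sketch only; it assumes the data frames keep the depth/value column names shown above): split each group into segments wherever the depth gap exceeds the threshold, and draw each segment separately so the line breaks at large gaps.

        # Sketch: draw each group, but only connect consecutive points whose depth gap <= `gap`.
        plot_gapped <- function(d, gap = 2, add = FALSE, ...) {
          d   <- d[order(d$depth), ]
          seg <- cumsum(c(TRUE, diff(d$depth) > gap))   # new segment where the gap is exceeded
          if (!add) plot(d$value, d$depth, type = "n")  # set up the axes once
          for (s in split(d, seg)) points(s$value, s$depth, type = "b", ...)
        }

        plot_gapped(a, col = "green", pch = 15)
        plot_gapped(b, col = "red",   pch = 14, add = TRUE)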

    Read the article

  • Using resources with the same name in Xcode

    - by Roberto
    Is there a way to add multiple resources with the same name to an Xcode project and have one of them take priority over the others? Example: I added 2 files, both called icon.png, to an Xcode project. They are in different folders in the file system (Folder1/icon.png and Folder2/icon.png) and in different groups in Xcode. Is there a way to tell Xcode to have Folder2/icon.png take priority over Folder1/icon.png? And if only one icon.png exists, then use that one. Thanks. EDIT (2010-12-23): You can have multiple files with the same name in an Xcode project even if they are not in separate folder references, as long as they are in separate groups. Once compiled, the app bundle (which is flat, with no folders in it) will only have one copy of the file (icon.png). How do you pick which copy of the file to use? I was told you can do this for BlackBerry. It works something like this: the compiler goes down the list of files in the project and adds them to the app bundle; if it sees a duplicate it overwrites it (or not), so the files at the bottom (or at the top) have higher precedence and end up in the final bundle.
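    One commonly suggested workaround (a sketch only, not a confirmed Xcode feature; the paths come from the example above and the build-setting variables assume a standard application target) is to let the build copy whichever icon.png it likes and then overwrite the bundled file in a Run Script build phase placed after "Copy Bundle Resources":

        # Run Script build phase (after "Copy Bundle Resources"):
        # force Folder2/icon.png to win, regardless of which copy Xcode bundled first.
        cp -f "${SRCROOT}/Folder2/icon.png" \
              "${BUILT_PRODUCTS_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/icon.png"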

    Read the article

  • iPhone: Same Rows Repeated in Each Section of Grouped UITableview

    - by Rank Beginner
    I have an app that is a list of tasks, like a to-do list. The user configures the tasks and that goes to the SQLite db. The list is displayed in a tableview. The SQL table in question consists of a taskid int, groupname varchar, taskname varchar, lastcompleted datetime, nextdue datetime, weighting integer. I currently have it working by creating an array from each column in the SQL table. In the tableView:cellForRowAtIndexPath: method, I create the controls for each task by binding their values to the array for each column. I want to add configurable task groups that should display as the section titles. I got the task groups to display as the section headers. My problem is that all the task rows are repeated in each group under each header. How do I get the correct rows to show up only under the correct section? I'm really new to development, period, and took up teaching myself how to develop iPhone apps as a hobby. So, pretty please, be a little more detailed than you normally would with a professional developer. :)
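    A rough sketch of the usual pattern (the property and key names here are assumptions, not the asker's actual code): group the rows once, keyed by group name, keep an ordered array of group names for the sections, and have each data-source method look only at its own section instead of the full task list.

        // Assumed properties:
        //   self.groupNames   : NSArray of section titles (one per task group)
        //   self.tasksByGroup : NSDictionary mapping group name -> NSArray of task dictionaries

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return [self.groupNames count];
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            NSString *group = [self.groupNames objectAtIndex:section];
            return [[self.tasksByGroup objectForKey:group] count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            NSString *group = [self.groupNames objectAtIndex:indexPath.section];
            NSDictionary *task = [[self.tasksByGroup objectForKey:group] objectAtIndex:indexPath.row];

            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"TaskCell"];
            if (cell == nil) {
                // pre-ARC style; drop the autorelease under ARC
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:@"TaskCell"] autorelease];
            }
            cell.textLabel.text = [task objectForKey:@"taskname"];
            return cell;
        }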

    Read the article

  • SQL return ORDER BY result as an array

    - by EarthMind
    Is it possible to return groups as an associative array? I'd like to know if a pure SQL solution is possible. Note that I realize I could be making things unnecessarily complex, but this is mainly to give me an idea of the power of SQL. My problem: I have a list of words in the database that should be sorted alphabetically and grouped into separate groups according to the first letter of the word. For example: ape broom coconut banana apple should be returned as array( 'a' => 'ape', 'apple', 'b' => 'banana', 'broom', 'c' => 'coconut' ) so I can easily create sorted lists by first letter (i.e. clicking "A" only shows words starting with a, "B" with b, etc.). This should make it easier for me to load everything in one query and make the sorted list JavaScript based, i.e. without having to reload the page (or use AJAX). Side notes: I'm using PostgreSQL, but a solution for MySQL would be fine too so I can try to port it to PostgreSQL. Scripting language is PHP.
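    SQL itself cannot hand back a PHP associative array, but PostgreSQL can do the grouping server-side so a single query returns one row per letter. A sketch, assuming a table words(word text) (array_agg with ORDER BY needs PostgreSQL 9.0+):

        -- One row per first letter, with that letter's words aggregated and already sorted.
        SELECT substr(lower(word), 1, 1) AS letter,
               array_agg(word ORDER BY word) AS words
        FROM words
        GROUP BY substr(lower(word), 1, 1)
        ORDER BY letter;

    On MySQL the closest equivalent would be GROUP_CONCAT(word ORDER BY word), which returns a delimited string to split in PHP; either way the PHP side just loops over the rows and builds $result[$row['letter']].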

    Read the article

  • C# : Regular Expression

    - by Pramodh
    I have a set of row data as follows:
    List<String> l_lstRowData = new List<string> { "Data 1 32:01805043*0FFFFFFF", "Data 3, 20.0e-3", "Data 2, 1.0e-3 172:?:CRC", "Data 6" };
    and two Lists, namely "KeyList" and "ValueList":
    List<string> KeyList = new List<string>();
    List<string> ValueList = new List<string>();
    I need to fill the two List<String> from the data in l_lstRowData using pattern matching, and here is my pattern for this:
    String l_strPattern = @"(?<KEY>(Data|data|DATA)\s[0-9]*[,]?[ ][0-9e.-]*)[ \t\r\n]*(?<Value>[0-9A-Za-z:?*!. \t\r\n\-]*)";
    Regex CompiledPattern = new Regex(l_strPattern, RegexOptions.IgnoreCase | RegexOptions.IgnorePatternWhitespace);
    So finally the two Lists should contain:
    KeyList: { "Data 1" } { "Data 3, 20.0e-3" } { "Data 2, 1.0e-3" } { "Data 6" }
    ValueList: { "32:01805043*0FFFFFFF" } { "" } { "172:?:CRC" } { "" }
    Scenario: The group KEY in the pattern should match "Data" followed by an integer value and, if there is a comma (,), also the next string, i.e. a double value. The group Value in the pattern should match the string after the whitespace: in the first string it should match 32:01805043*0FFFFFFF, and in the 3rd, 172:?:CRC. Here is my sample code:
    for (int i = 0; i < l_lstRowData.Count; i++) { MatchCollection M = CompiledPattern.Matches(l_lstRowData[i], 0); KeyList.Add(M[0].Groups["KEY"].Value); ValueList.Add(M[0].Groups["Value"].Value); }
    But my pattern is not working in this situation. Please help me rewrite my pattern.
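    One possible simplification (a sketch checked only against the four sample rows above, not against the full data set): make the ", <number>" part of the key optional and let the value be whatever non-space text follows, if anything.

        // KEY  = "Data <n>" plus an optional ", <number>" part
        // Value = the trailing non-space text, or "" when there is none
        string pattern = @"(?<KEY>Data\s+\d+(?:,\s*[0-9.eE+-]+)?)\s*(?<Value>\S.*)?";
        Regex re = new Regex(pattern, RegexOptions.IgnoreCase);

        foreach (string row in l_lstRowData)
        {
            Match m = re.Match(row);
            if (!m.Success) continue;                       // skip rows that do not start with "Data n"
            KeyList.Add(m.Groups["KEY"].Value.Trim());      // e.g. "Data 2, 1.0e-3"
            ValueList.Add(m.Groups["Value"].Value.Trim());  // e.g. "172:?:CRC", or "" when absent
        }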

    Read the article

  • hash fragments and collisions cont.

    - by Mark
    For this application of mine I feel like I can get away with a 40 bit hash key, which seems awfully low, but see if you can confirm my reasoning (I want a small key because I want a small filename, and the key will be converted to a filename): (Note: only accidental collisions are a concern - no security issues.) A key point here is that the population in question is divided into groups, and a collision is only relevant if it occurs within the same group. A "group" is a directory on a user's system (the contents of files are hashed, and a collision is only relevant if it occurs for files within the same directory). So, speculating roughly 100,000 potential users, say 2^17, that corresponds to 2^18 "groups" assuming 2 directories per user on average. So with a 40 bit key I can expect 2^(20+9) files created (among all users) before a collision occurs for some user somewhere. (Or IOW 2^((40+18)/2), due to the "birthday effect".) That's an average of 4096 unique files created per user, for 2^17 users, before a single collision occurs for some user somewhere. And then that long again before another collision occurs somewhere (right?)
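    For what it's worth, the standard birthday estimate seems to land in the same ballpark (a rough check in the question's own notation, not a proof): with b = 40 hash bits, G = 2^18 directories and m files per directory,

        expected within-directory collisions  ≈  G * m^2 / 2^(b+1)
                                              =  2^18 * m^2 / 2^41
                                              =  m^2 / 2^23

        setting this to 1:  m ≈ 2^11.5 ≈ 2900 files per directory, i.e. roughly 2^29.5 files overall

    which is close to the 2^29 total / 4096-per-user figure above (within a factor of about two).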

    Read the article

  • has_many :through default values

    - by David Lyod
    I need to design a system that tracks users' memberships in groups, with varying roles (currently three).
    class Group < ActiveRecord::Base
      has_many :memberships
      has_many :users, :through => :memberships
    end
    class Role < ActiveRecord::Base
      has_many :memberships
      has_many :users, :through => :memberships
    end
    class Membership < ActiveRecord::Base
      belongs_to :user
      belongs_to :role
      belongs_to :group
    end
    class User < ActiveRecord::Base
      has_many :memberships
      has_many :groups, :through => :memberships
    end
    Ideally I want to simply set @group.users << @user and have the membership get the correct role. I can use :conditions to select data that has been inserted manually, like this:
    :conditions => ["memberships.grouprole_id = ? ", Grouprole.find_by_name('user')]
    But when creating the membership to the group, the grouprole_id is not being set. Is there a way to do this? At present I have a somewhat repetitive piece of code for each user role in my Group model.
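    One way to avoid the repetition (a sketch only; it follows the Role/Membership models declared above rather than the grouprole_id column in the :conditions snippet, and the default role name 'user' is a guess) is to stop relying on @group.users << @user and go through a small helper that creates the join row with the role filled in:

        class Group < ActiveRecord::Base
          has_many :memberships
          has_many :users, :through => :memberships

          # Create the membership explicitly so the role is never left unset.
          def add_user(user, role_name = 'user')
            memberships.create!(:user => user, :role => Role.find_by_name(role_name))
          end

          # Users holding a given role in this group.
          def users_with_role(role_name)
            users.all(:conditions => ["memberships.role_id = ?", Role.find_by_name(role_name).id])
          end
        end

        @group.add_user(@user)             # gets the default 'user' role
        @group.add_user(@admin, 'admin')   # or any other role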

    Read the article

  • Why is the exception not caught?

    - by Álvaro García
    I have the following code:
    List<MyEntity> lstAllMyRecords = miDbContext.MyEntity.ToList<MyEntity>();
    foreach (MyEntity iterator in lstMainRecord)
    {
        tasks.Add(TaskEx.Run(() =>
        {
            try
            {
                checkData(lstAllMyRecords.Where(n => n.IDReference == iterator.IDReference).ToList<MyEntity>());
            }
            catch (CustomRepository ex)
            {
                // handle my custom repository exception
            }
            catch (Exception)
            {
                throw;
            }
        }));
    } //foreach
    Task.WaitAll(tasks.ToArray());
    I get all the records from my database, and in the foreach loop I group the records that share the same IDReference. Then I check whether the data is correct with the checkData method. The checkData method throws a custom exception if something is wrong. I would like to catch this exception and handle it. The problem is that with this code the exceptions are not caught and everything seems to work without errors, but I know that is not true. I tried checking only one group of records that I know has problems: if I check only that one group, the loop executes once, only one task is created, and in that case the exception is caught. But if I have many groups, no exception is thrown. Why is the exception caught when I only have one task, but not when there are many groups? Thanks.
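    Two things may be worth ruling out here (a sketch built on the snippet above, not a confirmed diagnosis): before C# 5 the foreach variable is shared by every lambda, so all tasks can end up filtering on the last IDReference; and anything that escapes (or is rethrown from) the inner try/catch only resurfaces from Task.WaitAll wrapped in an AggregateException.

        foreach (MyEntity entity in lstMainRecord)
        {
            MyEntity current = entity;   // per-iteration copy for the closure (pre-C# 5 semantics)
            tasks.Add(TaskEx.Run(() =>
            {
                try
                {
                    checkData(lstAllMyRecords
                        .Where(n => n.IDReference == current.IDReference)
                        .ToList<MyEntity>());
                }
                catch (CustomRepository ex)
                {
                    // handle the custom exception
                }
            }));
        }

        try
        {
            Task.WaitAll(tasks.ToArray());
        }
        catch (AggregateException ae)
        {
            foreach (var inner in ae.InnerExceptions)
            {
                // anything rethrown inside a task ends up here, not in the inner catch
            }
        }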

    Read the article

  • Why is Active Record firing an extra query when I use the includes method to fetch data?

    - by riddhi_agrawal
    I have the following model structure:
    class Group < ActiveRecord::Base
      has_many :group_products, :dependent => :destroy
      has_many :products, :through => :group_products
    end
    class Product < ActiveRecord::Base
      has_many :group_products, :dependent => :destroy
      has_many :groups, :through => :group_products
    end
    class GroupProduct < ActiveRecord::Base
      belongs_to :group
      belongs_to :product
    end
    I wanted to minimize my database queries, so I decided to use includes. In the console I tried something like
    groups = Group.includes(:products)
    and my development log shows the following calls:
    Group Load (403.0ms) SELECT `groups`.* FROM `groups`
    GroupProduct Load (60.0ms) SELECT `group_products`.* FROM `group_products` WHERE (`group_products`.group_id IN (1,3,14,15,16,18,19,20,21,22,23,24,25,26,27,28,29,30,33,42,49,51))
    Product Load (22.0ms) SELECT `products`.* FROM `products` WHERE (`products`.`id` IN (382,304,353,12,63,103,104,105,262,377,263,264,265,283,284,285,286,287,302,306,307,308,328,335,336,337,340,355,59,60,61,247,309,311,66,30,274,294,324,350,140,176,177,178,64,240,327,332,338,380,383,252,254,255,256,257,325,326))
    Product Load (10.0ms) SELECT `products`.* FROM `products` WHERE (`products`.`id` = 377) LIMIT 1
    I can see why the first three calls are necessary, but I don't understand why the last one is made:
    Product Load (10.0ms) SELECT `products`.* FROM `products` WHERE (`products`.`id` = 377) LIMIT 1
    Any idea why this is happening? Thanks in advance. :)

    Read the article

  • bonding module parameters are not shown in /sys/module/bonding/parameters/

    - by c4f4t0r
    I have a server with SUSE 11 SP1, kernel 2.6.32.54-0.3-default. With modinfo bonding I see all the parameters, but they do not show up under /sys/module/bonding/parameters/.
    modinfo bonding | grep ^parm
    parm: max_bonds:Max number of bonded devices (int)
    parm: num_grat_arp:Number of gratuitous ARP packets to send on failover event (int)
    parm: num_unsol_na:Number of unsolicited IPv6 Neighbor Advertisements packets to send on failover event (int)
    parm: miimon:Link check interval in milliseconds (int)
    parm: updelay:Delay before considering link up, in milliseconds (int)
    parm: downdelay:Delay before considering link down, in milliseconds (int)
    parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
    parm: mode:Mode of operation : 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
    parm: primary:Primary network device to use (charp)
    parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner (slow/fast) (charp)
    parm: ad_select:803.ad aggregation selection logic: stable (0, default), bandwidth (1), count (2) (charp)
    parm: xmit_hash_policy:XOR hashing method: 0 for layer 2 (default), 1 for layer 3+4 (charp)
    parm: arp_interval:arp interval in milliseconds (int)
    parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
    parm: arp_validate:validate src/dst of ARP probes: none (default), active, backup or all (charp)
    parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC. none (default), active or follow (charp)
    In /sys/module/bonding/parameters there are only two entries:
    ls -l /sys/module/bonding/parameters/
    total 0
    -rw-r--r-- 1 root root 4096 2013-10-17 11:22 num_grat_arp
    -rw-r--r-- 1 root root 4096 2013-10-17 11:22 num_unsol_na
    I found some of these parameters under /sys/class/net/bond0/bonding/, but when I try to change one I get the following error:
    echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
    -bash: echo: write error: Operation not permitted

    Read the article

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, I wasn't really sure if this should go here or on Stack Overflow - admins, please move if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of our web layer to accelerate the site and reduce calls to the backend. However, we have some concerns, and I was wondering how most people approach them: A/B testing - how do you test two "versions" of each page and compare? I mean, how does Varnish know which page to serve up? If and how do you save separate versions of each page? Feature rollout - how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and later increase that to 20%. How do you handle code deployments? Do you purge your entire Varnish cache on every deployment? (We have deployments on a daily basis.) Or do you just let it slowly expire (using TTL)? Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.
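    For the A/B part, a frequently used pattern (sketched below in Varnish 3 VCL; the cookie and header names are made up for the example, and this is not a complete config) is to bucket each client with a cookie and fold the bucket into the cache hash so both variants are cached independently. Feature rollout to 10%/20% then comes down to how the backend (or VCL) assigns that cookie, and deployments can be handled with targeted bans by URL pattern rather than flushing the whole cache.

        sub vcl_recv {
            # Put returning clients in the bucket their cookie names; default to "A".
            if (req.http.Cookie ~ "ab_bucket=B") {
                set req.http.X-AB = "B";
            } else {
                set req.http.X-AB = "A";
            }
        }

        sub vcl_hash {
            # Cache variant A and variant B as separate objects for the same URL;
            # the built-in vcl_hash still appends URL and host after this.
            hash_data(req.http.X-AB);
        }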

    Read the article

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience where latency should be minimized. So, we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - Clients We can geographically distribute the EC2 nodes (US East/West, and Ireland) but the problem is that a client in the EU would hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way with that to direct based on routing... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!

    Read the article

  • What is the point of PPPoE?

    - by aaa90210
    I am trying to expand my knowledge of networking beyond the basics. I have started reading about PPP, and how it is used in DSL modems with PPPoE and PPPoA. My first impression of PPP was "well that seems pretty similar to Ethernet". They are both data link layer protocols. They both have fields to identify the encapsulated protocol (e.g. IP). They both have related protocols to assign IP addresses (DHCP and NCP). So my first question was "so what's the point of PPP, why not just use Ethernet?". The answer to that was fairly straightforward - Ethernet is not supported over a wide range of media like serial lines, and is a fairly specific technology to LAN's using CAT5 or similar. HOWEVER - then I was reading about PPPoE, and the obvious thought was "well if we are doing something over Ethernet, then Ethernet must be available and in use, so why not just use it?". In other words, PPPoE seems to be encapsulating one data-link layer protocol in another very similar protocol. Why do IP-inside-PPP-inside-Ethernet when we could just be doing IP-inside-Ethernet, and use DHCP rather than NCP to assign the IP address to the home router? Thanks

    Read the article

  • How would I force Debian to use the physical sector size on a hard disk?

    - by Confused User
    I just purchased a few new 3TB WD drives. These have physical 4k sectors, but there is some sort of layer providing 512B logical sectors (see the partition table below). In order to try to get some more speed out of my hard drives, I would like to get rid of this logical layer and actually use the physical 4k sectors. However, I can't figure out how to do this (or even if it's possible) from the man pages of fdisk and parted, or from searching Google. Does anybody know how this could be done? As to why this is relevant, this page demonstrates that merely aligning the sectors properly can already make up to a 25% speed difference for reads, and more than 2500% for writes in some cases! Getting rid of the logical sectors in favor of the physical ones should improve speeds even more. Thanks!
    $ parted /dev/sdc
    GNU Parted 2.3
    Using /dev/sdc
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print
    Model: ATA WDC WD30EZRX-00M (scsi)
    Disk /dev/sdc: 3001GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Number  Start   End     Size    File system  Name  Flags
     1      1049kB  3001GB  3001GB               zfs
     9      3001GB  3001GB  8389kB
    P.S. I don't care about the data on the drives; I was just playing with different file systems. Also, this is my first time posting here, so please let me know if my posts should be formatted differently, etc.

    Read the article

  • Windows 7 host does not respond to ping

    - by gencha
    Today I tried printing on a shared printer on one of our homegroup members. Sadly it did not work (the printer is marked as offline). Shortly after, I noticed I can't even ping the machine that owns the printer (I also cannot remotely access it in any other way I've tried). Currently I'm trying to ping the machine from the router both computers are connected to, and the machine in question doesn't answer. I do receive the echo requests (as verified with Wireshark). I also added a rule in the Windows Firewall to specifically allow ICMP echo requests, but that didn't change anything. I also tried netsh firewall set icmpsetting 8 enable, but that didn't change anything either. Completely disabling the Windows Firewall has no effect on the issue either. One has to wonder: where does Windows log when and why it ignores incoming packets? How can I get to the bottom of this? Here are some ways I found to dig deeper into the issue: enabling logging on the Windows Firewall, and enabling Windows Filtering Platform auditing. Both methods at least give more insight. The plain log file is full of entries like this:
    2011-11-11 14:35:27 DROP ICMP 192.168.133.1 192.168.133.128 - - 84 - - - - 8 0 - RECEIVE
    So the ICMP packets are being dropped as if that was intended. The Event Viewer now gives a little more detail:
    The Windows Filtering Platform has blocked a packet.
    Application Information: Process ID: 4, Application Name: System
    Network Information: Direction: Inbound, Source Address: 192.168.133.1, Source Port: 0, Destination Address: 192.168.133.128, Destination Port: 8, Protocol: 1
    Filter Information: Filter Run-Time ID: 214517, Layer Name: Receive/Accept, Layer Run-Time ID: 44
    This same entry is always repeated, with two pieces of information changing:
    Process ID: 420, Application Name: \device\harddiskvolume2\windows\system32\svchost.exe
    The service host with PID 420 hosts the following services: Windows Audio, DHCP Client, Windows Event Log, HomeGroup Provider, TCP/IP NetBIOS Helper, Security Center. Additionally, there is currently another problem with the same machine: even though my network is set to be a "Home network", I am unable to create a new homegroup.

    Read the article

  • Can two mocha for After Effects X-Spline Layers be merged?

    - by George Profenza
    Hello, I'm new to mocha for After Effects, but like the auto tracking feature. There are a few markers I'm adding, but there's one which is giving me a bit of a headache. I'm tracking a circle that moves around a larger object, so it moves in front of it initially, then behind, being occluded by the larger object, then in front again. I tried to track the circle in the first part(when it's in front, before being occluded) and in the second part(when it's in front again, coming out on the other side of the large object) using a single X-Spline. The problem is the second part starts in a different location and I don't know how to 'move' the X-Spline to the new position and track from there, without affecting the previous keyframes. As a workaround I use two X-Spline Layers, export the data to .txt files, then manually merge the two files into a new one containing keyframes from both X-Spline Layers. Is there an easier way to do this (either merging two X-Spline Layer, or using a single X-Spline Layer that can move to a new location without affecting previous keyframes) ? Any suggestion would help.

    Read the article

  • AVConv increases song duration when converting MP3

    - by chauffch
    I am struggling with the following issue. I want to convert an MP3 ADTS file into a pure MP3. I am using avconv on Ubuntu 12.10. The outcome is a file that has the same size, but the duration is now longer.
    $ ls -l
    total 6436
    -rw-r--r-- 1 teuf teuf 6586514 nov. 25 09:25 Blindsided_Bon_Iver.mpga
    $ file Blindsided_Bon_Iver.mpga
    Blindsided_Bon_Iver.mpga: MPEG ADTS, layer III, v1, 160 kbps, 44.1 kHz, JntStereo
    $ avconv -i Blindsided_Bon_Iver.mpga -c copy Blindsided_Bon_Iver.mp3
    avconv version 0.8.4-4:0.8.4-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
      built on Nov 6 2012 16:50:25 with gcc 4.6.3
    [mp3 @ 0x8c6e240] max_analyze_duration reached
    Input #0, mp3, from 'Blindsided_Bon_Iver.mpga':
      Duration: 00:05:29.29, start: 0.000000, bitrate: 160 kb/s
      Stream #0.0: Audio: mp3, 44100 Hz, stereo, s16, 160 kb/s
    Output #0, mp3, to 'Blindsided_Bon_Iver.mp3':
      Metadata:
        TSSE : Lavf53.21.0
      Stream #0.0: Audio: libmp3lame, 44100 Hz, stereo, 160 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    Press ctrl-c to stop encoding
    size= 6432kB time=329.30 bitrate= 160.0kbits/s
    video:0kB audio:6432kB global headers:0kB muxing overhead 0.002080%
    $ ls -l
    total 12868
    -rw-rw-r-- 1 teuf teuf 6586129 nov. 27 22:26 Blindsided_Bon_Iver.mp3
    -rw-r--r-- 1 teuf teuf 6586514 nov. 25 09:25 Blindsided_Bon_Iver.mpga
    $ file Blindsided_Bon_Iver.mp3
    Blindsided_Bon_Iver.mp3: Audio file with ID3 version 2.4.0, contains: MPEG ADTS, layer III, v1, 32 kbps, 44.1 kHz, Stereo
    Amarok shows the new file with a duration of 25:27 and a lot of silence. Am I using an incorrect option? Is it a bug in avconv? Any ideas how to fix it?

    Read the article

  • Network Load Balancing and AnyCast Routing

    - by user126917
    Hi all, can anyone advise on problems with the following? I am planning to install the following setup on my estate: I have 2 sites that both have a large number of users. The goals are to keep things simple for the users and to have automatic failover above the database level. Our database will exist at the primary site and be asynchronously mirrored to the secondary site, with manual failover procedures. The database generates sequential IDs, so distributing it is not an option. I plan to site IIS boxes at both sites, with all of the business logic and heavy operations on them. The connections to SQL will be lightweight and DB reads will be cached on IIS. On this layer I plan to use Windows Network Load Balancing and have the same IP or IPs across all IIS boxes at both sites. This way there will be automatic failover and no single point of failure, and users can have one web address regardless of which site they are in and automatically be load balanced to their local IIS. This is great, but obviously our two sites are on different subnets, and since this will be one IP address carrying most of our traffic, we can't go broadcasting everything across the link between the sites. To solve this problem we plan to use anycast routing at the network layer to route traffic to the nearest box that is listening, which will be determined by the network load balancing. Has anyone used this setup before? Can anyone think of any issues with it? Also, some specifics I can't find anywhere at the moment: if my Windows box is assigned an IP and is listening on that IP, but network load balancing is not accepting specific traffic, will anycast route away from it? And can I anycast at the socket level?

    Read the article

  • Recover data from Dynamic Disk (MBR) bigger than 2TB

    - by Helder
    Here is the situation: a Promise Array FastTrak TX4310 with 3 disks (750 GB each) in RAID5. This comes to around 1500 GB of data. Last week I had the idea of expanding the RAID with an additional 750 GB disk. This would bring the volume to around 2250 GB. I plugged the disk in and used the WebPAM software to do the RAID expansion. However, I didn't count on the MBR 2TB limit, as I didn't remember that the disk was using MBR instead of GPT, and I didn't check it prior to the expansion. After a couple of days of expansion, today when I got home, the disk in Windows Disk Management showed the message "Invalid disk", and when I try to activate it, it says "The operation is not allowed on the Invalid pack". From what I can tell, the logical volume on the RAID expanded and passed that info up to the Windows layer, and I ended up with a "larger than 2TB" MBR disk. I'm hoping that somehow I can still recover some data from this, and I was wondering if I can "rewrite" the MBR structure back to the 1500 GB partition size so I can access the partition in Windows. Right now I'm doing an "Analyse" with TestDisk, as I hope the program will pick up the old 1500 GB structure and allow me to somehow revert back to it. I think that even though the logical drive in the RAID is bigger than 2TB, I can somehow correct the MBR to show the 1500 GB partition again. I had a similar problem once and was able to recover the data using a similar method. What do you guys think? Is it a dead end? Am I totally screwed because of the extra RAID layer I'm not accounting for? Or is there another way forward with this? Thanks all!

    Read the article

  • What are valid 'ack' values?

    - by WileECanisLatrans
    I'm having an issue with a vendor who claims the cause of a problem is an invalid 'ack' value in the TCP data. I'm using Java, so I didn't write this layer. I used snoop to capture the traffic on the wire and am using Wireshark to display the data. Here is what is happening: after receiving a multi-packet (5) message, I see a multi-packet (3) response. The first packet in the response has an 'ack' value that is different from the 'ack' value in the other two packets. The vendor claims this data is suspect. I've provided sample data below. I'm not a TCP expert, so I don't know if this is a problem or not. I've tried to find something on valid ack values, and it seems to me the value should be 80018, but that doesn't mean 78345 is wrong. I found this on the web and it seems to apply, but I'm not sure: "the ack value of any data segment is considered valid as long as it does not acknowledge data ahead of the next segment to send". Thanks for your help. My understanding is that the vendor has written their own TCP layer.
    source   seq    ack    len
    vendor   75465  10924  0
    vendor   75465  10924  1440
    vendor   76905  10924  1440
    vendor   78345  10924  1440
    vendor   79785  10924  233
    me       10924  78345  0
    me       10924  80018  0
    me       10924  80018  197
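    Working through the byte accounting (assuming the len column is TCP payload bytes) suggests both acks are plausible cumulative acknowledgements rather than corruption:

        75465 + 1440 = 76905    end of 2nd segment
        76905 + 1440 = 78345    end of 3rd segment   -> ack 78345 covers the first two data segments
        78345 + 1440 = 79785    end of 4th segment
        79785 +  233 = 80018    end of 5th segment   -> ack 80018 covers everything sent

    In other words, 78345 acknowledges the data received up to that point and 80018 acknowledges the rest; neither acknowledges data ahead of what was sent, so by the rule quoted above both values look valid.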

    Read the article

  • How do you get AWS VPC EC2 instances to be able to see the AWS APIs?

    - by Peter Mounce
    We're spinning up infrastructure inside of an AWS VPC via CloudFormation. We're using auto-scaling groups to bring up VPC-EC2 instances (so we don't bring up instances directly; ASGs manage that). Inside of a VPC, EC2 instances only have a private IP; they cannot see the outside world without further work. When these instances spin up, we have some bootstrap tasks that require talking to the various AWS APIs. We also have some ongoing tasks that require AWS API traffic. How are you tackling this apparent chicken-and-egg problem? We've read about:
    - NAT instances, but we don't like this so much because it's another layer in our stack;
    - assigning Elastic IPs to each VPC instance that needs to talk, but a) they all do, b) since we're using ASGs, we don't know which instances to assign EIPs to at provision time, and c) we'd need to set up something to monitor those ASGs and assign EIPs when instances are terminated and replaced;
    - spinning up an instance (actually a load-balanced pair, probably spanning AZs) to act as an AWS-API proxy for all API traffic.
    I guess I'm wondering whether there's some kind of back door we can open that allows our VPC EC2 instances access to the AWS API endpoints, but nothing else, with cheap setup complexity and without adding another network-hop layer to our infrastructure for serving requests.

    Read the article

  • What Device/System to use as a "router on a stick"

    - by Jeff Leyser
    I need to create several distinct VLANs, and provide a way for traffic to move between them. A "router on a stick" approach seems ideal:
    Internet
       |
    Router with Trunking Capability ("router on a stick")
       *
       *   Trunk between router and switch
       *
    Switch with Trunking Capability
       |            |            |            |            |
    LAN 1        LAN 2        LAN 3        LAN 4        LAN 5
    10.0.1.0/24  10.0.2.0/24  10.0.3.0/24  10.0.4.0/24  10.0.5.0/24
    We have trunk-capable layer-2 switches. The question is what to use as the router on a stick. My choices seem to be:
    1) Use an existing Cisco 5505 ASA firewall. It appears the ASA can do the routing, but it's a 100 Mbps device, so it seems sub-optimal at best.
    2) Buy a router. This seems overkill.
    3) Buy a layer-3 switch. Also seems overkill.
    4) Use an existing Linux box as a router.
    5) Use a new Linux box as a router.
    6) Something I'm not thinking of.
    I think either (4) or (5) is my best option, but I'm not sure how to choose between them. I expect the amount of traffic that has to cross the VLANs to be somewhat small, but bursty. How much load does routing add to a CentOS machine?
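    If a Linux box ends up being the router, the "stick" side is just 802.1Q subinterfaces plus IP forwarding; a minimal sketch (the interface name, VLAN IDs, and gateway addresses are assumptions chosen to match the diagram above, and the 8021q module must be available):

        # eth0 carries the trunk from the switch; one subinterface per VLAN.
        ip link add link eth0 name eth0.10 type vlan id 10
        ip addr add 10.0.1.1/24 dev eth0.10
        ip link set eth0.10 up

        ip link add link eth0 name eth0.20 type vlan id 20
        ip addr add 10.0.2.1/24 dev eth0.20
        ip link set eth0.20 up
        # ...repeat for VLANs 30, 40 and 50 (10.0.3.1, 10.0.4.1, 10.0.5.1)...

        # Route between the VLANs.
        sysctl -w net.ipv4.ip_forward=1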

    Read the article

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer. As a test, I have the app running successfully in Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing at this stage is at a very low scale. My question is this: has anyone had real-world experience running a high scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost seem to be lower than rolling lots of EC2 instances? My assumption would be that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated goal of calibrating 10,000 CCs as a single 1.2 Ghz CPU seems on average to be much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

    Read the article

  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high availability virtualization, either via Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by the other physical servers. So far so good: the physical cluster and the VM itself are highly available. However, the service being provided - say SQL Server, MSDTC, or any other service - is actually provided by the VM image and the virtualized operating system. So I imagine there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster - failing over a physical node would not do any good. Does this necessitate building a virtual failover cluster on top of the physical one, or is this not necessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right) and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?

    Read the article
