Search Results

Search found 16748 results on 670 pages for 'port block'.

  • jQuery blockUI in my document.ready function not working

    - by kumar
    Hello Friends, I have this code. I added the blockUI script file to my master page: <script src="/Scripts/Jquery.blockUI.js" type="text/javascript"></script> The code below is in my master page's document.ready: <script type="text/javascript"> $(document).ready(function () { $.blockUI({ message: $('#question'), css: { width: '275px'} }); }); </script> <div id="question" style="display:none; cursor: default"> <h2 class="padding"><br />An unexpected system error has occurred while processing your request.<br /></h2> <h3>We apologize for this inconvenience.<br /> Please report this error to your system administrator with the following information:<br /><br /> Session id is:</h3> <input type="button" id="OK" value="OK" /> </asp:Content> In my document.ready function, blockUI is not working. Can anybody tell me why? Thanks

    Read the article

  • How to set baud rate to 307200 on Linux?

    - by cairol
    Basically I'm using the following code to set the baud rate of a serial port: struct termios options; tcgetattr(fd, &options); cfsetispeed(&options, B115200); cfsetospeed(&options, B115200); tcsetattr(fd, TCSANOW, &options); This works very well. But now I have to communicate with a device that uses a baud rate of 307200. How can I set that? cfsetispeed(&options, B307200); doesn't work; there is no B307200 defined.
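
    Rates with no Bxxx constant can be requested on Linux through the kernel's termios2 interface and its BOTHER flag instead of cfsetispeed/cfsetospeed: fill a struct termios2 via the TCGETS2/TCSETS2 ioctls with c_cflag |= BOTHER and c_ispeed/c_ospeed set to 307200. As a hedged sketch of the same idea in Python: recent pyserial releases use that mechanism transparently on Linux, so the whole job may reduce to the following (the device path is a placeholder, and whether the rate is accepted also depends on the driver and hardware):

      import serial  # pyserial

      # pyserial falls back to the termios2/BOTHER ioctl on Linux when the
      # requested rate has no matching Bxxx constant, so a non-standard rate
      # such as 307200 can be asked for directly
      port = serial.Serial('/dev/ttyUSB0', baudrate=307200, timeout=1)
      port.write(b'\x01')
      print(port.read(16))
      port.close()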

    Read the article

  • Loop every x elements

    - by Cimm
    What's the best way to show a list with 20 images in rows of 5? Or, in other words, how do I clean up this ugly snippet? <div class="row"> <% @images.each_with_index do |image, index| %> <%= image_tag image.url %> <% if index != 0 && index % 5 == 0 %> </div><div class="row"> <% end %> <% end %> </div>
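
    The underlying operation is chunking the collection into fixed-size rows before rendering, rather than tracking indices inside the markup; Ruby's Enumerable#each_slice does exactly this. A language-neutral sketch of the grouping step, in Python:

      # group items into consecutive rows of n (the last row may be shorter);
      # this mirrors what Ruby's each_slice(5) would do in the template
      def each_slice(items, n):
          for i in range(0, len(items), n):
              yield items[i:i + n]

      for row in each_slice(list(range(20)), 5):
          print(row)  # render one <div class="row"> per chunk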

    Read the article

  • SLRequest returns nil before block completes

    - by jaytr0n
    I'm having a little trouble thinking through this and deciding if it's a design flaw on my behalf or if I'm missing a piece that could make this work. Basically I'm using the new SLRequest to make a Twitter API call. After the data is returned, I'd like to put it into an object and return that object: -(AUDSlide *) getFollowerSlide { SLRequest *getRequest = [SLRequest requestForServiceType:SLServiceTypeTwitter requestMethod:SLRequestMethodGET URL:[NSURL URLWithString:[NSString stringWithFormat:@"https://api.twitter.com/1.1/followers/ids.json?user_id=%@", [[self.twitterAccount valueForKey:@"properties"] valueForKey:@"user_id"]]] parameters:nil]; getRequest.account = twitterAccount; AUDSlide *slide = [[AUDSlide alloc] init]; [getRequest performRequestWithHandler:^(NSData *responseData, NSHTTPURLResponse *urlResponse, NSError *error) { if ([urlResponse statusCode] == 200) { NSError *jsonError = nil; NSDictionary *list = [NSJSONSerialization JSONObjectWithData:responseData options:0 error:&jsonError]; NSLog(@"Number of Followers: %u", [[list objectForKey:@"ids"] count]); slide.title = @"Followers"; slide.number = [NSString stringWithFormat:@"%u",[[list objectForKey:@"ids"] count]]; } else{ slide.title = @"Error"; slide.number = [NSString stringWithFormat: @"E%u", [urlResponse statusCode]]; } }]; return slide; } Of course the slide is returned before the call is complete and returns a nil object. So I'm not sure if I should try to force this into a synchronous request (that seems like it could be a bad idea) or rethink the design. Does anyone have any advice?
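
    The usual resolution is to make the method itself asynchronous: accept a completion handler and hand the finished slide to it from inside performRequestWithHandler, instead of returning a value that cannot exist yet. The shape of that change, sketched in Python with hypothetical names:

      import threading

      def get_follower_slide(completion):
          # stands in for SLRequest's performRequestWithHandler: the network
          # call runs in the background and delivers the finished slide to a
          # caller-supplied callback instead of returning it
          def handler():
              slide = {"title": "Followers", "number": "42"}
              completion(slide)
          threading.Timer(0.1, handler).start()

      # the caller updates its UI inside the callback, not after the call returns
      get_follower_slide(lambda slide: print(slide["title"], slide["number"]))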

    Read the article

  • Common block usage in Fortran

    - by Crystal
    I'm new to Fortran and just doing some simple things for work. And as a new programmer in general, not sure exactly how this works, so excuse me if my explanation or notation is not the best. At the top of the .F file there are common declarations. The person explaining it to me said think of it like a struct in C, and that they are global. Also in that same .F file, they have it declared with what type. So it's something like: COMMON SOMEVAR INTEGER*2 SOMEVAR And then when I actually see it being used in some other file, they declare local variables, (e.g. SOMEVAR_LOCAL) and depending on the condition, they set SOMEVAR_LOCAL = 1 or 0. Then there is another conditional later down the line that will say something like IF (SOMEVAR_LOCAL. eq. 1) SOMEVAR(PARAM) = 1; (Again I apologize if this is not proper Fortran, but I don't have access to the code right now). So it seems to me that there is a "struct" like variable called SOMEVAR that is of some length (2 bytes of data?), then there is a local variable that is used as a flag so that later down the line, the global struct SOMEVAR can be set to that value. But because there is (PARAM), it's like an array for that particular instance? Thanks. Sorry for my bad explanation, but hopefully you will understand what I am asking.
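
    That reading is essentially right: a COMMON block is globally shared storage visible to every routine that declares it, INTEGER*2 makes SOMEVAR a 2-byte integer type, and SOMEVAR(PARAM) = 1 is ordinary array indexing into it (assuming SOMEVAR is dimensioned as an array somewhere). As a loose cross-language analogy only, not real Fortran semantics, the pattern described above looks like this in Python:

      class SomeCommon:
          # stands in for the COMMON block: one shared record that every
          # routine "declaring" it reads and writes
          somevar = [0] * 10   # SOMEVAR, an array of 2-byte integers in the Fortran

      def routine(param, condition):
          somevar_local = 1 if condition else 0   # the local flag variable
          if somevar_local == 1:
              SomeCommon.somevar[param] = 1       # the write is visible everywhere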

    Read the article

  • Accessing data feed

    - by racket99
    I have a Java program on my desktop which displays financial data gleaned from the web. It is a 3rd party application. What I would like to do is intercept the data before it goes to the Java application and record it into a flat file for the purpose of later data analysis. Is this at all possible? I imagine the data are available and are entering my computer through some port which the Java app picks up and then displays. Help appreciated. Thanks
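
    Assuming the feed arrives over plain, unencrypted TCP and the application lets you configure the server host and port it connects to, one approach is to interpose a small logging proxy and point the Java application at it; everything flowing through is appended to a flat file. A minimal sketch with placeholder host names and ports (a TLS-encrypted feed would defeat this, in which case a packet capture tool is the alternative to investigate):

      import socket, threading

      def pipe(src, dst, log):
          # copy bytes one direction, appending everything to the log file
          while True:
              data = src.recv(4096)
              if not data:
                  break
              log.write(data)
              dst.sendall(data)

      def proxy(listen_port, feed_host, feed_port, logfile):
          srv = socket.socket()
          srv.bind(("127.0.0.1", listen_port))
          srv.listen(1)
          client, _ = srv.accept()                      # the Java app connects here
          feed = socket.create_connection((feed_host, feed_port))
          with open(logfile, "ab") as log:
              # record both directions; the server-to-app stream carries the data
              threading.Thread(target=pipe, args=(client, feed, log), daemon=True).start()
              pipe(feed, client, log)

      # proxy(9000, "feed.example.com", 5000, "feed.log")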

    Read the article

  • If I write a framework that gets information from the Internet, should I make a delegate or use blocks?

    - by Time Machine
    Say I'm writing a publicly available framework for the Vimeo API. This framework needs to get information from the Internet. Because this can take some time, I need to use threading to prevent the UI from hanging. Foundation uses delegates for this, like NSURLConnectionDelegate. However, Game Kit uses blocks as callback functions. What is the recommended way of doing this? I know blocks aren't supported in standard GCC versions, but they require much less code from whoever uses my framework. Delegates, on the other hand, are real methods, and when protocols are used, I'm sure the methods are implemented. Thanks.
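
    Restated outside Objective-C, the trade-off is between a named interface the caller must implement (a delegate, which can be checked against a protocol) and an ad-hoc closure the caller simply passes in (a block, with far less ceremony). A small Python sketch of the two shapes:

      # delegate style: the caller implements a known interface,
      # so the framework can rely on the method existing
      class FetchDelegate:
          def did_finish(self, data):
              print("delegate received", data)

      def fetch_with_delegate(delegate):
          delegate.did_finish(b"payload")

      # block/closure style: less ceremony for the caller
      def fetch_with_block(on_finish):
          on_finish(b"payload")

      fetch_with_delegate(FetchDelegate())
      fetch_with_block(lambda data: print("closure received", data))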

    Read the article

  • How does this code block work?

    - by Justin John
    I can't understand how the following code works. $start = 1; while($start<10){ if ($start&1) { echo "ODD ".$start." <br/> "; } else { echo "EVEN ".$start." <br/> "; } $start++; } The $start&1 test separates ODD and EVEN. Output ODD 1 EVEN 2 ODD 3 EVEN 4 ODD 5 EVEN 6 ODD 7 EVEN 8 ODD 9 If we use $start&2 instead of $start&1, it returns a different pattern. How do &1, &2, etc. work here?
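
    & is bitwise AND: $start & 1 keeps only the lowest bit of the number, which is 1 for odd values and 0 for even ones, so the branch alternates every step. $start & 2 tests the second bit instead, which flips every two numbers, hence the different pattern. The same masks demonstrated in Python:

      for n in range(1, 10):
          # n & 1 isolates bit 0 (odd/even); n & 2 isolates bit 1
          print(n, "n & 1 =", n & 1, "n & 2 =", n & 2)
      # n & 1 cycles 1,0,1,0,...  n & 2 cycles 0,2,2,0,0,2,2,0,0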

    Read the article

  • Converting Java code block to Objective-C

    - by user1123688
    I am trying to convert my Android app to iOS. This will be my first iOS app. I can't seem to translate this code properly. If someone wouldn't mind showing me how it's done, that would be greatly appreciated. //scrambBase20 is a byte array String descramble(String input){ char[] ret; ret = input.toCharArray(); int offset = -scrambBase20.length; for(int i=0;i<input.length();i++){ if(i%scrambBase20.length==0) offset+=scrambBase20.length; ret[scrambBase20[i%scrambBase20.length]+offset]=(char) ((byte) (input.charAt(i))^0x45); } String realRet = ""; for (char x : ret){ realRet+=x; } realRet = realRet.trim(); return realRet; }
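
    The logic itself is language-neutral: each input byte is XORed with 0x45 and stored at a permuted position taken from scrambBase20, with the offset advancing by the table length once per pass over the table. A Python rendering of the same algorithm (the example table below is a made-up placeholder) may make the Objective-C port easier to check against:

      def descramble(data: bytes, scramb_base20: bytes) -> str:
          out = bytearray(len(data))
          offset = -len(scramb_base20)
          for i, b in enumerate(data):
              if i % len(scramb_base20) == 0:
                  offset += len(scramb_base20)
              # permuted store position, XOR-decoded with 0x45 as in the Java
              out[scramb_base20[i % len(scramb_base20)] + offset] = b ^ 0x45
          return out.decode("latin-1").strip()

      # usage with a made-up 4-byte table (the real app's table has 20 entries)
      # print(descramble(scrambled_bytes, bytes([2, 0, 3, 1])))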

    Read the article

  • Chalk Talk, Glenn Block – Leith, Edinburgh 12th March 2011

    - by David Christiansen
    Exciting news. I am proud to announce that Glenn Block from Microsoft  will be coming all the way from Seattle to Scotland on the 12th March to talk to you!. Glenn is a PM on the WCF team working on Microsoft’s future HTTP and REST stack and has been involved in some pretty exciting and ground-breaking Microsoft development mind-shifts in recent times. Don’t miss the chance to hear him speak and ask him questions. Brief history of Glenn Prior to WCF he was a PM on the new Managed Extensibility Framework in .NET 4.0. Glenn has a breadth of experience both inside and outside Microsoft developing software solutions for ISVs and the enterprise. Glenn has also been very active in involving folks from the community in the development of software at Microsoft. This has included shipping several products under open source licenses, as well as assisting other teams looking to do so. Glenn is also a frequent speaker at local and international events and user groups.  When he's not working and playing with technology, he spends his time with his wife and daughter either at their home in Seattle or at one of the local coffee shops. Glenn Block on the web mvcConf 2 - Glenn Block: Take some REST with WCF (Feb 2011) @gblock on twitter My Technobabble - Glenn’s Blog Sponsored by Storm ID is an award winning full service digital agency in Edinburgh

    Read the article

  • Is there a command-line utility app which can find a specific block of lines in a text file, and replace it?

    - by fred.bear
    UPDATE (see end of question) The text "search and replace" utility programs I've seen seem to only search on a line-by-line basis... Is there a command-line tool which can locate one block of lines (in a text file) and replace it with another block of lines? For example: does the target file contain this exact group of lines: 'Twas brillig, and the slithy toves Did gyre and gimble in the wabe: All mimsy were the borogoves, And the mome raths outgrabe. 'Beware the Jabberwock, my son! The jaws that bite, the claws that catch! Beware the Jubjub bird, and shun The frumious Bandersnatch!' I want this so that I can replace multiple lines of text in a file and know I'm not overwriting the wrong lines. I would never replace "The Jabberwocky" (Lewis Carroll), but it makes a novel example :) UPDATE: ..(sub-update) My comments below about when not to use sed are only in this context: don't push any tool too far beyond its design intent (I use sed quite often, and consider it to be invaluable). I just now found an interesting web page about sed and when not to use it, so because of all the sed answers I'll post the link: it is part of the sed FAQ on SourceForge. Also, I'm pretty sure there is some way diff can do the job of locating the block of text (once it's located, the replacement is quite straightforward, using head and tail)... diff dumps all the necessary data, but I haven't yet worked out how to filter it (I'm still working on it).
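
    If a short script is acceptable in place of a dedicated utility, the requirement - replace the block only when it matches exactly - is a single substring operation on the file's full contents. A minimal Python sketch, with the file name and replacement text as placeholders:

      from pathlib import Path

      old_block = (
          "'Twas brillig, and the slithy toves\n"
          "Did gyre and gimble in the wabe:\n"
      )
      new_block = "REPLACEMENT LINE 1\nREPLACEMENT LINE 2\n"

      path = Path("poem.txt")
      text = path.read_text()
      if old_block not in text:
          raise SystemExit("exact block not found; file left untouched")
      path.write_text(text.replace(old_block, new_block, 1))  # first occurrence only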

    Read the article

  • Cisco Catalyst 3550 + Alteon 184 Load-Balancing Issues

    - by upkels
    I have just deployed a couple Cisco Catalyst 3550 switches, and a couple Alteon 184 Web Switches for load-balancing. I can ping all RIPs and VIPs to/from the Alteon. Topology Before: (server) <- (Alteon) <- (Internet) Topology Now: (server) <- (3550) <- Alteon <- (Internet) Cisco Port Configuration (Alteon Uplink Port): description LB_1_PORT_9_PRIMARY switchport access vlan 10 switchport mode access switchport nonegotiate speed 100 duplex full Alteon Port 9 Configuration (VLAN 10 WAN): >> Main# /c/port 9/cur Current Port 9 configuration: enabled pref fast, backup gig, PVID 10, BW Contract 1024 name UPLINK >> Main# /c/port 9/fast/cur Current Port 9 Fast link configuration: speed 100, mode full duplex, fctl none, auto off Cisco Configuration (Load-Balanced Servers Port): description LB_1_PORT_1_PRIMARY switchport access vlan 30 switchport mode access switchport nonegotiate speed 100 duplex full Alteon Port 1 Configuration (VLAN 30 LOAD-BALANCED LAN): >> Main# /c/port 1/cur Current Port 1 configuration: enabled pref fast, backup gig, PVID 30, BW Contract 1024 name LB_PORT_1 >> Main# /c/port 1/fast/cur Current Port 1 Fast link configuration: speed 100, mode full duplex, fctl both, auto on Each of my servers are on vlan 10 and 30, properly communicating. I have tried to turn on VLAN tagging on the Alteon, however it seems to cause all communications to stop working. When I tcpdump -i vlan30 on any of the webservers, I see normal ARP communications, and some STP communications, which may or may not be part of the problem: ... 15:00:51.035882 STP 802.1d, Config, Flags [none], bridge-id 801e.00:11:5c:62:fe:80.8041, length 42 15:00:51.493154 IP 10.1.1.254.33923 > 10.1.1.1.http: Flags [S], seq 707324510, win 8760, options [mss 1460], length 0 15:00:51.493336 IP 10.1.1.1.http > 10.1.1.254.33923: Flags [S.], seq 3981707623, ack 707324511, win 65535, options [mss 1460], len gth 0 15:00:51.493778 ARP, Request who-has 10.1.3.1 tell 10.1.3.254, length 46 etc... I'm not sure if I've provided enough information, so please let me know if any more is necessary. Thank you!

    Read the article

  • Advice on coding Perl to work like PHP

    - by user272273
    I am first and foremost a Perl coder, but like many, I also code in PHP for client work, especially web apps. I am finding that I am duplicating a lot of my projects in the two languages, but using different paradigms (e.g. for handling CGI input and session data) or functions. What I would like to do is start to code my Perl in a way which is structured more like PHP, so that I a) keep one paradigm in my head and b) can more quickly port scripts from one to the other. Specifically, I am asking if people could advise how you might do the following in Perl? 1) Reproduce the functionality of $_SESSION, $_GET etc., e.g. by wrapping up the param() method of CGI.pm and a session library? 2) Templating library that is similar to PHP I am used to mixing my code and HTML in the PHP convention, e.g. <h1>HTML Code here</h1> <? print "Hello World\b"; ?> Can anybody advise on which Perl templating engine (and possibly configuration) will allow me to code similarly? 3) PHP function library Does anybody know of a library for Perl which reproduces a lot of the PHP built-in functions?

    Read the article

  • Moving delegate-related function to a different thread

    - by Chris
    Hello everybody. We are developing a library in C# that communicates with the serial port. We have a function that is given to a delegate. The problem is that we want it to run in a different thread. We tried creating a new thread (called DatafromBot) while keeping the event subscription as follows (first line): comPort.DataReceived += new SerialDataReceivedEventHandler(comPort_DataReceived); DatafromBot = new Thread(comPort_DataReceived); DatafromBot.Start(); comPort_DataReceived is defined as: Thread DatafromBot; public void comPort_DataReceived(object sender, SerialDataReceivedEventArgs e) { ... } The following errors occur: Error 3 The best overloaded method match for 'System.Threading.Thread.Thread(System.Threading.ThreadStart)' has some invalid arguments C:...\IR52cLow\CommunicationManager.cs 180 27 IR52cLow Error 4 Argument '1': cannot convert from 'method group' to 'System.Threading.ThreadStart' C:...\IR52cLow\CommunicationManager.cs 180 38 IR52cLow Any ideas how we should convert this to get it to compile? Please note that comPort.DataReceived (note the "." instead of the "_") lies within a system library and cannot be modified. Thanks for your time! Chris

    Read the article

  • Listing serial (COM) ports on Windows?

    - by Eli Bendersky
    Hello, I'm looking for a robust way to list the available serial (COM) ports on a Windows machine. There's this post about using WMI, but I would like something less .NET specific - I want to get the list of ports in a Python or a C++ program, without .NET. I currently know of two other approaches: Reading the information in the HARDWARE\\DEVICEMAP\\SERIALCOMM registry key. This looks like a great option, but is it robust? I can't find a guarantee online or in MSDN that this registry cell indeed always holds the full list of available ports. Trying to call CreateFile on COMN, with N a number from 1 to something. This isn't good enough, because some COM ports aren't named COMN. For example, some virtual COM ports created are named CSNA0, CSNB0, and so on, so I wouldn't rely on this method. Any other methods/ideas/experience to share? Edit: by the way, here's a simple Python implementation of reading the port names from registry: import _winreg as winreg import itertools def enumerate_serial_ports(): """ Uses the Win32 registry to return an iterator of serial (COM) ports existing on this computer. """ path = 'HARDWARE\\DEVICEMAP\\SERIALCOMM' try: key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) except WindowsError: return for i in itertools.count(): try: val = winreg.EnumValue(key, i) yield (str(val[1]), str(val[0])) except EnvironmentError: break
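
    If depending on pyserial is acceptable, its list_ports helper wraps the platform-specific enumeration and reports virtual ports under their real device names as well; a short sketch (attribute names as in recent pyserial releases):

      from serial.tools import list_ports  # pyserial

      for port in list_ports.comports():
          # device is the name to open (e.g. 'COM3'); description comes
          # from the driver, which helps identify virtual ports
          print(port.device, "-", port.description)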

    Read the article

  • WinCE and PC USB communication

    - by sebeksd
    We are developing a device and we need to find a good solution for one piece of needed functionality. The thing is that we need to communicate between WinCE 6.0 (ARM) and Windows on a PC. The easiest way is of course a COM port, but in our case that is impossible (all serial ports are used on the WinCE side and we don't want to add one more). The second option is LAN, but for us it is not the best option for a few reasons. So there is a third option we could use: USB to USB communication, but how to do that? Of course WinCE is the USB device and the PC is the USB host, so all the hardware basics are met. We could use ActiveSync, but there are a few problems with it: - WinCE 6.0 does not work with WMDC (drivers on the device just crash after connecting it to the PC) and I didn't find any solution, so in this case we would need to use WinXP on the PC side (old ActiveSync) - we would need to restrict communication with ActiveSync to only our application; no other unauthorized software should be allowed (as far as I know this is impossible to achieve). So probably the best way to do what we need is to communicate through USB like a standard COM (serial) connection. The question is how that could be done. Do we need to write a driver on WinCE and also a driver on Windows (PC), or is there a better solution? Maybe some driver for WinCE 6.0 that would emulate a virtual COM port on the PC side (and of course allow standard Read/Write to it on the WinCE side)? Could someone tell me if something like that exists?

    Read the article

  • Fake serial communication under Linux

    - by kigurai
    I have an application where I want to simulate the connection between a device and a "modem". The device will be connected to a serial port and will talk to the software modem through that. For testing purposes I want to be able to use a mock software device to test sending and receiving data. Example Python code: device = Device() modem = Modem() device.connect(modem) device.write("Hello") modem_reply = device.read() Now, in my final app I will just pass /dev/ttyS1 or COM1 or whatever for the application to use. But how can I do this in software? I am running Linux and the application is written in Python. I have tried making a FIFO (mkfifo ~/my_fifo) and that does work, but then I'll need one FIFO for writing and one for reading. What I want is to open ~/my_fake_serial_port and read and write to that. I have also played with the pty module, but can't get that to work either. I can get a master and slave file descriptor from pty.openpty(), but trying to read or write to them only causes an IOError with a Bad File Descriptor message.
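
    pty.openpty() returns raw OS-level file descriptors, so they must be used with os.read/os.write (or wrapped via os.fdopen) rather than treated as file objects; using them with the wrong API is a likely source of that IOError. A minimal working sketch, with the pty switched to raw mode so echo and line buffering don't alter the bytes:

      import os, pty, tty

      master, slave = pty.openpty()
      tty.setraw(master)   # raw mode: no echo, no line buffering
      print("fake serial port:", os.ttyname(slave))  # a /dev/pts/N path to hand to the app

      os.write(master, b"Hello")        # "modem" side sends
      print(os.read(slave, 100))        # device side receives b'Hello'

      os.write(slave, b"OK")            # device replies
      print(os.read(master, 100))       # modem side receives b'OK'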

    Read the article

  • .NET SerialPort DataReceived event thread interference with main thread

    - by Kiran
    I am writing a serial communication program using the SerialPort class in C# to interact with a strip machine connected via an RS232 cable. When I send a command to the machine it responds with some bytes, depending on the command. For example, when I send a "\D" command I expect to download the machine's program data, 180 bytes, as a continuous string. The machine's manual suggests as a best practice sending an unrecognized character, such as a comma (,), to make sure the machine is initialized before sending the first command in the cycle. My serial communication code is as follows: public class SerialHelper { SerialPort commPort = null; string currentReceived = string.Empty; string receivedStr = string.Empty; private bool CommInitialized() { try { commPort = new SerialPort(); commPort.PortName = "COM1"; if (!commPort.IsOpen) commPort.Open(); commPort.BaudRate = 9600; commPort.Parity = System.IO.Ports.Parity.None; commPort.StopBits = StopBits.One; commPort.DataBits = 8; commPort.RtsEnable = true; commPort.DtrEnable = true; commPort.DataReceived += new SerialDataReceivedEventHandler(commPort_DataReceived); return true; } catch (Exception ex) { return false; } } void commPort_DataReceived(object sender, SerialDataReceivedEventArgs e) { SerialPort currentPort = (SerialPort)sender; currentReceived = currentPort.ReadExisting(); receivedStr += currentReceived; } internal int CommIO(string outString, int outLen, ref string inBuffer, int inLen) { receivedStr = string.Empty; inBuffer = string.Empty; if (CommInitialized()) { commPort.Write(outString); } System.Threading.Thread.Sleep(1500); int i = 0; while ((receivedStr.Length < inLen) && i < 10) { System.Threading.Thread.Sleep(500); i += 1; } if (!string.IsNullOrEmpty(receivedStr)) { inBuffer = receivedStr; } commPort.Close(); return inBuffer.Length; } } I am calling this code from a Windows form as follows: len = SerialHelperObj.CommIO(",",1,ref inBuffer, 4) len = SerialHelperObj.CommIO(",",1,ref inBuffer, 4) If(inBuffer == "!?*O") { len = SerialHelperObj.CommIO("\D",2,ref inBuffer, 180) } A valid return value from the serial port looks like this: \D00000010000000000010 550 3250 0000256000 and so on ... I am getting something like this: \D00000010D,, 000 550 D,, and so on... I suspect my comm calls are interfering with each other when I send commands: I try to check the result of the comma command before initiating the actual command, but the DataReceived handler is inserting bytes from the previous communication cycle. Can anyone please shed some light on this? I lost quite some hair just trying to get this to work, and I am not sure where I am going wrong.

    Read the article

  • Why is port 500 in use and how can I free it? VPNC error

    - by kirill_igum
    I tried to use Network Manager to connect to my university's VPN; it didn't work. Then I used command-line vpnc: > sudo vpnc [sudo] password for kirill: Enter IPSec gateway address: vpn.net.**.edu Enter IPSec ID for vpn.net.**.edu: ** Enter IPSec secret for **@vpn.net.**.edu: Enter username for vpn.net.**.edu: ** Enter password for **@vpn.net.**.edu: vpnc: Error binding to source port. Try '--local-port 0' Failed to bind to 0.0.0.0:500: Address already in use Then I did this: sudo vpnc --local-port 0 with the same config and it all worked. I'd like to be able to use the Network Manager GUI to connect to the VPN. I wanted to find out which program uses port 500: > sudo netstat -a |grep 500 tcp 0 0 *:17500 *:* LISTEN udp 0 0 *:4500 *:* udp 0 0 *:17500 *:* unix 3 [ ] STREAM CONNECTED 63500 unix 3 [ ] STREAM CONNECTED 12500 @/tmp/.X11-unix/X0 There is nothing that uses 500. I'm using Ubuntu 10.10 on a ThinkPad X201t.
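
    Note that none of those netstat lines actually shows port 500: 17500, 4500 and the socket inodes 63500/12500 merely contain the digits "500", and netstat -a prints well-known ports by service name anyway (UDP 500 would appear as "isakmp"). One quick way to check whether UDP 500 is genuinely taken is to try binding it, e.g. in Python:

      import socket

      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      try:
          s.bind(("0.0.0.0", 500))   # binding ports below 1024 needs root
          print("UDP port 500 is free")
      except OSError as e:
          print("UDP port 500 is busy or not permitted:", e)
      finally:
          s.close()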

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
    The test cases were: chunk size 512KB, 1MB, 2MB or 4MB; upload mode sequential, parallel (unlimited), or parallel with a limit (4 threads, 8 threads); chunk format base64 string or binary; target file 100MB. Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than sequential. - No significant performance improvement between parallel 4 threads and 8 threads. - Transferring binaries provides better performance than base64. - In all cases, a chunk size of 1MB - 2MB gets the best performance.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • SQL SERVER – Dedicated Access Control for SQL Server Express Edition – An error occurred while obtaining the dedicated administrator connection (DAC) port.

    - by pinaldave
    Recently I faced a very interesting situation. For some reason we were not able to log into the production server for one of our clients. The server was very busy, and we had to get in and bring it back to a normal state. When all attempts failed, I decided to log in using the Dedicated Administrator Connection (DAC). However, when I attempted to connect using DAC it threw the following error: C:\Users\pinald>sqlcmd -A -d master -S .\SQLEXPRESS Sqlcmd: Error: Microsoft SQL Server Native Client 11.0 : SQL Server Network Interfaces: An error occurred while obtaining the dedicated administrator connection (DAC) port. Make sure that SQL Browser is running, or check the error log for the port number [xFFFFFFFF]. Sqlcmd: Error: Microsoft SQL Server Native Client 11.0 : Login timeout expired. Sqlcmd: Error: Microsoft SQL Server Native Client 11.0 : A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. I was a bit taken aback, as I knew my commands were correct, and if DAC does not work there should be some serious reason for it. When I inquired further about the SQL Server version, I learned that it was the SQL Server Express edition that was deployed. To conserve resources, SQL Server Express does not listen on the DAC port. There is an additional step to be done if DAC is to be used with SQL Server Express: enabling a trace flag makes the DAC connection possible. Here is a quick method to enable DAC on SQL Server Express. Go to Start >> All Programs >> Microsoft SQL Server (your version) >> Configuration Tools >> SQL Server Configuration Manager. Click on SQL Server Services >> select your SQL Server Express instance >> right click Properties >> select Startup Parameters. On the Startup Parameters tab, add the startup parameter -T7806 (a trace flag). Click OK and RESTART the SQL Server Express instance. Now try once again to connect to SQL Server Express and it will work just fine. This is an absolutely documented method in BOL, and SQL Server Express does need to be restarted. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Error Messages, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Server Express

    Read the article

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables: iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080 This works, but I wonder: If this has any significant performance impact compared to binding directly to port 80 If I could make a similar setup also work for HTTPS (or if that must run on 443) If there's a way to prevent other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently for other users somehow? ...or if I should use authbind/privbind instead? Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to: exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ... For privbind: exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ... (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain): [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#] I haven't found a solution for that yet (after searching the web, it seems that this isn't so easy?). But maybe the solution with iptables is good enough - what do you think? Thanks, Chris

    Read the article
