Search Results

Search found 45843 results on 1834 pages for 'network access'.


  • Outlook security alert after adding a second wireless access point to the network

    - by Mark
    Just added a Netgear WG103 Wireless Access Point in our conference room to allow visitors to access the internet through our internal network. When it is switched on, visitors can connect to the internet and everything works fine. Except that while the Access Point is switched on, normal users of the network get a Security Alert when they try to start Outlook 2007. The Security Alert is the same as the one shown in question 148526 asked by desiny back in June 2010 (http://serverfault.com/questions/148526/outlook-security-alert-following-exchange-2007-upgrade-to-sp2), except that rather than "autodiscover.ad.unc.edu" my security alert references our "Remote.server.org.uk". If I view the certificate it relates to "Netgear HTTPS:....", but the only Netgear equipment we have is the new Access Point installed in the conference room. If the Access Point is not switched on we do not get the Security Alert. At first I thought it was because we had selected "WPA-PSK & WPA2-PSK" Network Authentication, but the alert continues to appear even if we opt for "Shared Key" WEP Data Encryption. I do not understand why adding a Netgear Wireless Access Point would cause Outlook to issue a Security Alert when users try to read their email. Does anyone know how to get rid of the Security Alert? Thanks in advance for reading this and helping me out.
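    A hedged first check (my addition, not part of the question): if client machines resolve Remote.server.org.uk to the access point's address, the WG103's own HTTPS admin interface would explain the Netgear certificate. From a client PC (the autodiscover name below is a guess based on the domain shown):

        nslookup Remote.server.org.uk
        nslookup autodiscover.server.org.uk

    If either name comes back with the WG103's IP, this is a DNS or proxy-discovery problem rather than an Outlook one.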

    Read the article

  • Cannot access BIOS on a Lenovo U410

    - by Michael
    I recently took a step into Linux on my Lenovo IdeaPad U410; after a couple of hours I managed to get it installed with the drivers. However, I no longer have the ability to access the BIOS. I tried the usual Fn+F2, F2, F1, Del, Tab, F12, F11; all to no avail. I was wondering, is there something different to be done when running Ubuntu? I know that the BIOS would generally not be affected by the OS. Does anyone have any suggestions?

    Read the article

  • Installing wireless drivers without internet access [closed]

    - by Lucas Jones
    Possible Duplicate: How can I install and download drivers without internet? (This is related to my other question; my approach there didn't work.) My friend has (I'm quite sure) a Broadcom wireless chipset. However, he doesn't have any wired internet access on the machine, so his only option is to boot into Windows (he is using Wubi) and download packages there. This means we can't use the Hardware Drivers dialog to install the drivers. He can't fetch the repository information, so the Broadcom driver packages aren't showing up in Synaptic. Is there any way to get Wi-Fi working?
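    One hedged way around this (my sketch; the package names assume the chipset wants the Broadcom STA driver from the standard Ubuntu repositories): download the .deb files for bcmwl-kernel-source and its dkms dependency from packages.ubuntu.com on the Windows side, copy them over, and install them together so dpkg can resolve them against each other:

        sudo dpkg -i dkms_*.deb bcmwl-kernel-source_*.deb   # install both in one pass
        sudo modprobe wl                                    # load the STA driver

    If dpkg still names a missing dependency, fetch that package the same way and repeat.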

    Read the article

  • Is it possible to keep nm-applet running between invocations of WM startup?

    - by serverninja
    I am using nm-applet to interface with NetworkManager, running xmonad as a window manager. My X sessions (including nm-applet) are set up with a /usr/local/bin/xmonad.start script. My question is, how can I keep nm-applet running in the background as long as X is running, but not necessarily xmonad? As mentioned above, it is being started with xmonad (and dying with it when xmonad is restarted, etc). I am using gdm to manage my X sessions, and I'm running 10.10. Where's a good place to start nm-applet to suit my particular needs? I need to remove it from the control of xmonad, but don't know where to start it otherwise. Any help, tips, etc appreciated.

    Edit: the problem seems to be with how I have integrated xmonad. I have the session entry as a file in /usr/share/xsessions/xmonad.desktop with the following contents:

        [Desktop Entry]
        Encoding=UTF-8
        Name=XMonad
        Comment=Lightweight tiling window manager
        Exec=/usr/local/bin/xmonad.start
        Icon=xmonad.png
        Type=XSession

    /usr/local/bin/xmonad.start contains the following:

        #!/bin/bash
        xrdb -merge ~/.Xresources
        xcompmgr -c &
        trayer --edge top --align right --SetDockType true --SetPartialStrut true --expand true --width 8 --heighttype pixel --height 18 --transparent true --alpha 0 --tint 0x000000 &
        gnome-settings-daemon &
        gnome-screensaver &
        if [ -x /usr/bin/nm-applet ] ; then
            nm-applet --sm-disable &
        fi
        /usr/bin/urxvtd -q -o -f &
        eval `ssh-agent` &
        if [ -x /usr/bin/gnome-power-manager ] ; then
            sleep 1
            gnome-power-manager &
        fi
        /usr/bin/gnome-volume-control-applet &
        exec xmonad

    The question is how do I integrate xmonad, gdm, X, etc in such a manner to replicate the behavior I currently have except with nm-applet (and possibly other programs) running whether or not xmonad is?
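    A hedged alternative (my sketch, not from the original post): hand nm-applet to the session's XDG autostart machinery instead of starting it from xmonad.start, so the session rather than the window manager owns it. This assumes gnome-session (or another autostart-aware session manager) is running under gdm:

        mkdir -p ~/.config/autostart
        printf '%s\n' '[Desktop Entry]' 'Type=Application' \
            'Name=Network Manager Applet' 'Exec=nm-applet --sm-disable' \
            > ~/.config/autostart/nm-applet.desktop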

    Read the article

  • Davicom DM9601 USB LAN NIC Ubuntu 11.10 issue

    - by Gaurav_Java
    I have a Davicom DM9601 USB Ethernet card. When I plug in the device, it is detected and drivers are loaded, but I can't connect to the internet using it. It works perfectly on XP and on another laptop, but it is not working on Ubuntu 11.10. How can I install the driver for this? I have tried many things but nothing is working. I tried the driver from this link, but it doesn't compile (or maybe I'm doing something wrong). I found this one too, but I don't know how to follow its steps. This is my lsusb output:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 004: ID 064e:a103 Suyin Corp. Acer/HP Integrated Webcam [CN0314]
        Bus 003 Device 002: ID 08ff:1600 AuthenTec, Inc. AES1600
        Bus 005 Device 002: ID 0a46:9601 Davicom Semiconductor, Inc. DM9601 Fast Ethernet Adapter
        Bus 006 Device 002: ID 046d:c045 Logitech, Inc. Optical Mouse
        Bus 003 Device 003: ID 0a5c:2101 Broadcom Corp. Bluetooth Controller
        Bus 004 Device 002: ID 04d9:1702 Holtek Semiconductor, Inc.

    But when I connect to the internet from a different system, it starts working.
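    A couple of hedged checks (my addition): the in-kernel driver for this adapter is the dm9601 module, so before building anything from source it is worth confirming the module is loaded and that the interface merely lacks an address:

        lsmod | grep dm9601     # is the driver loaded?
        sudo modprobe dm9601    # load it if not
        dmesg | tail            # note the interface name the kernel registers
        sudo dhclient eth1      # 'eth1' is hypothetical; use the name dmesg reports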

    Read the article

  • Advice on developing a social network [on hold]

    - by Siraj Mansour
    I am doing research on assembling a team, choosing the right tools, and estimating the cost to develop a highly responsive social network that can handle a lot of users. Think of the Facebook concept, but limited to the basic package for now: profiles, friends, posts, updates, media upload/download, streaming, chat and inbox messaging. We certainly do not expect it to be as popular as Facebook or to handle the same number of users and requests, but in its own game it has to be a monster, and expandable later on. Setting aside the hosting and server side, I am looking for technical advice and opinions: what kind of team do I need? How many developers, and with what expertise? What are the right tools, languages, frameworks and environments? Any ideas about the infrastructure? Quick thoughts on the development process? A rough estimate of the development cost (neglecting the cost of servers)? Please use references, if you have any, to support your ideas. I know my question is too broad, but my knowledge is very limited and I need detailed help; for any help you can offer, I thank you in advance.

    Read the article

  • redirect subdomain (weblog) to new domain without .htaccess access for a 301

    - by fafa
    I have a problem that I can't find the solution to on the web. I have a blog with PR 1 at the subdomain "aaaa.domain.com", where "domain.com" is a blog host. I have now bought a domain, "newdomain.com", and I want to tell Google Webmaster Tools to redirect the old subdomain to the new domain and move the traffic to it. But I can't access .htaccess to set up a 301 redirect; all I can do is put HTML code into the page. How can I do this? When I use "Change of Address" in Google Webmaster Tools it says: "Restricted to root level domains only". Sorry for my bad English.
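    Since only page-level HTML is available, a hedged workaround (not a true 301, but search engines treat a cross-domain canonical hint similarly) is to add something like this to the old blog's template, with the real new URL substituted:

        <link rel="canonical" href="http://newdomain.com/" />
        <meta http-equiv="refresh" content="0; url=http://newdomain.com/" />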

    Read the article

  • Realtek RTL8168/8111E onboard NIC not recognized - the everlasting problem?

    - by Nikioko
    I used the newest vendor driver for my onboard NIC (board: ASUS M5A97 Pro) and these sites to get it running on kernel 3.0: http://www.twm-kd.com/linux/realtek-rtl81688111e-and-ubuntu-linux/ The problem now is that this doesn't work with either kernel 3.1 or 3.2: https://code.google.com/p/r8168/issues/detail?id=6 My question is: does anyone have a solution to get it running on the newest kernel? Or is this a plague I have to carry forever?
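    For reference, the usual build sequence for Realtek's out-of-tree r8168 driver looks roughly like this (a sketch; it assumes the r8168 tarball from Realtek's site and headers matching the running kernel):

        sudo apt-get install build-essential linux-headers-$(uname -r)
        tar xjf r8168-*.tar.bz2 && cd r8168-*
        sudo ./autorun.sh     # builds, installs and loads the module
        echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf

    The module must be rebuilt for every new kernel unless it is wrapped in DKMS, which is likely the real cure here.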

    Read the article

  • Connecting to VPN / proxy with different port number

    - by user779159
    I have an IP address, port, username, and password for one of those VPN / proxy services that lets you use an IP address other than the one your ISP gives you, but in the Edit Connections VPN GUI I don't see where to enter a port. I enter the IP address under 'Gateway' and the username and password where it asks for them, but it doesn't work. I also tried entering the IP address followed by a colon and the port (e.g. "1.1.1.1:9999"), but that doesn't work either. Any idea how I could make it work? I'm using Ubuntu 12.04 LTS. Thanks.

    Read the article

  • What You Said: Your Favorite Remote Desktop Access Tools and Tips

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite remote desktop access tools and tips; now we're back to highlight your favorite tools and how you use them. The two prevailing themes among all the tools suggested were pricing and ease of deployment. On that front, LogMeIn had a strong following. Mtech writes: I use LogMeIn and am amazed the free version can be used even for business purposes. I also felt so bad and wanted to pay for the Pro version just out of gratitude, but they called me personally from the USA and said why pay when the free version does all you need! What a company.

    Read the article

  • How to setup passwordless SSH access for root user

    - by Cerin
    I need to configure a machine so software installation can be automated remotely via SSH. Following the wiki, I was able to set up SSH keys so my user can access the machine without a password, but I still need to manually enter my password when I use sudo, which an automated process obviously shouldn't have to do. Although my /etc/ssh/sshd_config has PermitRootLogin yes, I can't seem to log in as root, presumably because it's not a "real" account with a separate password. How do I configure SSH keys so a process can remotely log in as root on Ubuntu?
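    A minimal sketch of the usual answer (assuming key-based root login is acceptable on this machine): give root its own authorized_keys and restrict root login to keys only. The alternative, passwordless sudo for a dedicated automation user, is shown commented out ("deploy" is a hypothetical username):

        sudo mkdir -p /root/.ssh && sudo chmod 700 /root/.ssh
        sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys
        # in /etc/ssh/sshd_config, prefer: PermitRootLogin without-password
        sudo service ssh restart
        # or: echo 'deploy ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/deploy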

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world status would need to be updated about once per minute. I am looking for an answer that persuades me to perform the world update either on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:

    - Reading 5 private entity variables (fetched from the datastore)
    - Fetching as many as 20 static variables (from the datastore or persisted in server memory)
    - Writing 5 entity variables

    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore, and would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
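    For reference, the GAE-side option described above amounts to a single cron entry; a minimal sketch (the handler URL is a hypothetical name):

        cron:
        - description: per-minute world update
          url: /tasks/world_update
          schedule: every 1 minutes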

    Read the article

  • dnsmasq not running

    - by Yevgeniy M.
    I installed Ubuntu 12.04 on a netbook with a 16 GB SSD. To keep the installation small I used the mini.iso I got from here. Everything worked fine, but I noticed that dnsmasq does not get started by NetworkManager. On a different machine I installed 12.04 from a regular iso, and netstat shows dnsmasq running and listening on port 53. NetworkManager.conf looks identical on both systems; the line dns=dnsmasq is present. Although I do not really need dnsmasq (name resolution works fine without it), I would like to know why dnsmasq runs on one system but not on the other, and how I could adjust this behavior. Thanks in advance!
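    A hedged guess worth checking (my addition): NetworkManager spawns its caching instance from the dnsmasq-base package, which a mini.iso install may never have pulled in:

        dpkg -l dnsmasq-base                  # present on the machine where it works?
        sudo apt-get install dnsmasq-base
        sudo restart network-manager
        sudo netstat -lnp | grep :53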

    Read the article

  • Why do I have to reconnect my usb router cable?

    - by Searock
    I have an iBall Baton ADSL2+ router. It's working fine, but the problem is that when I boot into Ubuntu I have to unplug the USB cable and then plug it in again before it starts working. Why do I have to reconnect my USB cable? Let me know if you need more details. Edit: I am using a direct connection, meaning I don't have to enter a username or password; I am connected to the internet as soon as I start my router. The problem is that if I start my router before my computer, I have to reconnect my USB cable. Thanks.

    Read the article

  • No mobile broadband wizard with a GOBI2000

    - by XenGi
    My details: ThinkPad T410, GOBI 2000 WWAN, Ubuntu 12.10 with GNOME 3. In Ubuntu 12.04, Debian Squeeze and Debian Wheezy my GOBI 2000 works like a charm, but in Ubuntu 12.10 the connection wizard doesn't show up. I configured the card the normal way: added the thinkpad PPA, installed gobi-loader-tp, and copied the firmware files to /lib/firmware/gobi/ (the same files work under Debian). The card is correctly recognised and my SIM is detected too. The only problem is that when I try to set up a new connection the wizard doesn't start; simply nothing happens. Any idea?

    Read the article

  • Server 2012R2 – PowerShell Web Access

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2014/05/17/server-2012r2--powershell-web-access.aspx

    Haha … sometimes I joke that there is nothing worse than a Linux fanboi imprisoned in a Windows engineer's body. Maybe someday I will start blogging about my noob experiences. However, let's stick to the point. Sometimes the easiest solutions are the best. After a couple of tries at reaching my left pocket with my right hand, I'm going to follow the easy path. Today's plan is very simple: I'm going to take advantage of Server 2012 and install a web gateway to the PowerShell console. After that I will be able to execute PoSH from any device, including Linux.

        Install-WindowsFeature -Name WindowsPowerShellWebAccess -IncludeManagementTools
        Install-PswaWebApplication -UseTestCertificate
        Add-PswaAuthorizationRule -UserName * -ComputerName * -ConfigurationName *

    Let's test it …

    Read the article

  • Wireless BCM4311 driver install

    - by user113910
    Exasperated with Ubuntu 12.04 and the BCM4311 driver. HP 2133 (was SUSE 10). Installed 12.04 in March. It ran well until November with no updates, then updates killed my wireless controller. Trawled forums and elsewhere for a fix. Tried sudo apt-get-ing and gedit-ing. Finally got it working (on full update with the STA driver). Whew! Thought that was it, only for it to fail again after 3-4 days. Tried all the fixes again, but apt-get-ing and gedit-ing no longer fix it. I'm not without experience either: my first 'boot-loader' was input from a 20-switch panel, sequenced to a 4096-bit core store, run to input larger processes from a paper tape reader. I've gone through acoustic-coupled networks-to-billboards (before 'the net'), pre-DOS, DOS, Windows 3.1 and all the others. This is a tough problem. Any advice?
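    One hedged recovery path (my sketch, not from the post): the BCM4311 is also supported by the open b43 driver, so when an update breaks the STA (wl) package, switching drivers sometimes sidesteps the problem. This assumes temporary wired or USB-tethered access for the firmware download:

        sudo apt-get purge bcmwl-kernel-source
        sudo apt-get install firmware-b43-installer b43-fwcutter
        sudo modprobe -r wl; sudo modprobe b43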

    Read the article

  • access denied when trying to open terminal on desktop

    - by chris
    OK, so here's the skinny. I just moved some files using the sudo su command so I could move them to the bin folder in the file system. After closing the terminal I tried to reopen it from the desktop and got "permission denied". I then rebooted, and now I can't access my account: when I try to log in it starts to boot, then goes back to the login screen. If I boot up in xterm I get this message: bash: /home/chris/.bashrc: Permission denied. I'm currently running Xubuntu 10.04 and would like to get back into that user. Can someone please help me? Not a noob, but close to it. Thanks to anyone who helps, and the quicker the better.
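    The symptoms (a login loop plus a .bashrc permission error right after a sudo su session) usually mean root now owns files in the home directory. A hedged fix from the xterm or a recovery console, assuming the username really is chris:

        sudo chown -R chris:chris /home/chris
        # the login loop itself is typically a root-owned ~/.Xauthority or ~/.ICEauthority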

    Read the article

  • Ubuntu 12.04 Network Connection Unavailable

    - by user211315
    I recently added an EDIMAX EW-7811Un to my computer to allow wireless networking. That was working. Now, after a reboot, I am unable to connect to either the wireless or the wired network. The output of ifconfig is:

        eth0      Link encap:Ethernet  HWaddr bc:ae:c5:46:19:6d
                  inet6 addr: fe80::beae:c5ff:fe46:196d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:231 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:62615 (62.6 KB)
                  Interrupt:51 Base address:0x4000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:664 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:664 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:52639 (52.6 KB)  TX bytes:52639 (52.6 KB)

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
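    Worth noting (my observation, not the poster's): eth0 is up and transmitting but has no IPv4 address, which points at DHCP rather than the hardware. A hedged first step:

        sudo dhclient -v eth0    # force a DHCP request and watch the lease exchange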

    Read the article

  • Hyper-V for Developers, Part 1: Internal Networks

    Over the last year, we've been working with Microsoft to build training and demo content for the next version of Office Communications Server, code-named Microsoft Communications Server 14. This involved building multi-server demo environments in Hyper-V, getting them running on demo servers which we took to TechEd, PDC, and other training events, and sometimes connecting the demo servers to the show networks at those events. ITPro stuff that should scare the hell out of a developer! It can get ugly when I occasionally have to venture into ITPro land. Let's leave it at that. Having gone through this process about 10 to 15 times in the last year, I finally have it down. This blog series is my attempt to put all that knowledge in one place, if anything so I can find it when I need it again. I'll start with the most simple scenario and then build on top of it in future blog posts. If you're an ITPro, please resist the urge to laugh at how trivial this is.

    Internal Hyper-V Networks

    Let's start simple. An internal network is one that is intended only for the virtual machines that are going to be on that network; it enables them to communicate with each other.

    Create an Internal Network

    On your host machine, fire up the Hyper-V Manager and click the Virtual Network Manager in the Actions panel. Select Internal and leave all the other default values. Give the virtual network a name, and leave all the other default values. After the virtual network is created, open the Network and Sharing Center and click Change Adapter Settings to see the list of network connections. The only thing I recommend that you do is to give this connection a friendly label, e.g. "Hyper-V Internal". When you have multiple networks and virtual networks on the host machine, this helps group the networks so you can easily differentiate them from each other. Otherwise, don't touch it; only bad things can happen.

    Connect the Virtual Machines to the Internal Network

    I'm assuming that you have more than one virtual machine already configured in Hyper-V, for example a Domain Controller, an Exchange Server, and a SharePoint Server. What you need to do is basically plug the network into the virtual machine. In order to do this, the machine needs to have a virtual network adapter. If the VM doesn't have a network adapter, open the VM's Settings and click Add Hardware in the left pane. Choose the virtual network to which to bind the adapter. If you already have a virtual network adapter on the VM, simply connect it to the virtual network.

    Assign IP Addresses to the Virtual Machines on the Internal Network

    Open the Network and Sharing Center on your VM; there should only be one network at this time. Open the Properties of the connection, select Internet Protocol Version 4 (TCP/IPv4) and hit Properties. In this environment, I'm assigning IP addresses as 192.168.0.xxx. This particular VM has an IP address of 192.168.0.40 with a subnet mask of 255.255.255.0, and a DNS server of 192.168.0.18. DNS is running on the Domain Controller VM, which has an IP address of 192.168.0.18. Repeat this process on every VM in your environment, obviously assigning a unique IP address to each. In an environment with a domain controller, you should now be able to ping the machines from each other.

    What Next?

    After completing this process, here's what you still cannot do:

    - Access the internet from any of the VMs
    - Remote desktop to a VM from the host
    - Remote desktop to a VM over the network

    In the next post, we'll take a look at configuring an External network adapter on the virtual machines. We'll then build on top of that so that you can RDP into the VMs from the host machine and over the network.

    Read the article

  • Map a drive to root of a server (\\server) in Vista

    - by Andy T
    Hi, in Win XP I can very easily map a network drive to the root of my NAS server. I browse to it in Explorer (\\192.168.1.70), choose "Map Network Drive", choose the drive letter, done. In Vista, this does not seem possible. I have to go to "Map Network Drive" from 'Computer', then enter the address, but it will only let me map to specific shares (sub-folders off of the server root) and NOT to the server root share. Since my NAS has built-in shares (music, photo, video, etc.), I would have to have drive letters for all of these, which I absolutely don't want. Can anyone tell me: how come I can easily map to the server root from XP, but not in Vista? Is there something fundamentally different in the networking across the two OSes? Or do I just need to do things a different way? Hope someone can help. Thanks, AT

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code: the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what the Azure SDK code looks like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually, so for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB per entity), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring.

    Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases, which are stored off-site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the servers, files are also stored locally on development machines, and again backed up using version control software.

    Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the failing server to get back online.

    The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS.

    According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • how to add programs to ubuntu without internet access

    - by captainandcoke
    I don't have internet access at home, and it takes me about half an hour to ride my bike to the library. I have downloaded .deb files to try to install on my home computer, but every one I have downloaded says it can't install because it depends on package X. The next day I download package X, and it requires package Y. Is there any way to find out what ALL the sub-dependencies are for .deb files? I have tried to boot from USB or an external hard drive on the library computers, but the security settings prevent this. Also, I do not know anyone with a Linux computer.
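    A hedged way to break the chain (my sketch): the home machine already has the package index from the install media, so apt can print the complete list of .deb URLs an install would need without downloading anything. Run the first command at home, take the list to the library, then install everything in one pass:

        apt-get --print-uris -qq install <package> | cut -d"'" -f2 > urls.txt
        # download every URL in urls.txt at the library, copy the .debs home, then:
        sudo dpkg -i *.deb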

    Read the article

  • Can't connect to Wireless Network - Ubuntu 12.04 LTS & Sabrent A111N USB Dongle

    - by Ohgodwhy
    I've been trying to connect to this network for quite some time. I can't connect to the router directly with a wire, but I can access the router with other wireless devices without any issues. I had previously tried several other wifi NICs, but none of them would load properly. Today, I went and bought a new (supported) Sabrent A111N USB dongle, which said explicitly that it works with Linux 2.4+. I popped the dongle in and, lo and behold, it immediately said that there were available wireless connections. I selected my connection and tried to connect, but it just loops, saying "Wireless Disconnected" and then attempting to connect again, over and over. ifconfig and iwconfig both show my device in a ready and working state. However, iwlist wlan0 scan says that there are no results found. I don't get it... At one point, I could see the CPU in the DHCP client list under the router, but it doesn't fully make the connection (something about a timeout?). Any help would be appreciated.

        Bus 001 Device 002: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN
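    A hedged suggestion (mine, not the poster's): the 0bda:8176 RTL8188CUS is driven by the 8192cu/rtl8192cu module, whose USB power saving is a commonly reported cause of exactly this connect/disconnect loop. If the vendor-style 8192cu module is in use, disabling power management is worth a try:

        echo "options 8192cu rtw_power_mgnt=0 rtw_enusbss=0" | sudo tee /etc/modprobe.d/8192cu.conf
        sudo modprobe -r 8192cu && sudo modprobe 8192cu    # or simply reboot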

    Read the article
