Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

Page 118 of 1051

  • Linux network stack : adding protocols with an LKM and dev_add_pack

    - by agent0range
    Hello, I have recently been trying to familiarize myself with the Linux networking stack and device drivers (I have both of the similarly named O'Reilly books), with the eventual goal of offloading UDP. I have already implemented UDP on the NIC, but now the hard part... Rather than ask for assistance on this larger goal, I was hoping someone could clarify for me a particular snippet I found. It is part of an LKM which registers a new protocol (OTP) that acts as a filter between the device driver and the network stack: http://www.phrack.org/archives/55/p55_0x0c_Building%20Into%20The%20Linux%20Network%20Layer_by_lifeline%20&%20kossak.txt (Note: this Phrack article contains three different modules; the code for the OTP is at the bottom of the page.) In the init function of his example he has:

        otp_proto.type = htons(ETH_P_ALL);
        otp_proto.func = otp_func;
        dev_add_pack(&otp_proto);

    which (if I understand correctly) should register otp_proto as a packet sniffer and put it into the ptype_all data structure. My question is about dev_add_pack: is it the case that a protocol registered as a filter will always be placed at this layer, between L2 and the device driver? Or could I, for instance, make such filtering occur between the application and transport layers (to analyze socket parameters) using the same process? I apologize if this is confusing; I am having some trouble wrapping my head around the bigger picture when it comes to modules altering kernel stack functionality. Thanks
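
    For context, this is roughly what a complete registration of that kind looks like as a standalone module. A minimal sketch against a 2.6-era kernel (the handler signature has changed across kernel versions), with otp_rcv as a placeholder name:

        #include <linux/module.h>
        #include <linux/netdevice.h>
        #include <linux/skbuff.h>
        #include <linux/if_ether.h>

        /* Called for every received frame, since .type is ETH_P_ALL. */
        static int otp_rcv(struct sk_buff *skb, struct net_device *dev,
                           struct packet_type *pt, struct net_device *orig_dev)
        {
            /* inspect skb here; ETH_P_ALL taps get their own reference
               to the skb, so release it when done */
            kfree_skb(skb);
            return 0;
        }

        static struct packet_type otp_proto = {
            .type = htons(ETH_P_ALL),   /* match all protocols */
            .func = otp_rcv,
        };

        static int __init otp_init(void)
        {
            dev_add_pack(&otp_proto);   /* lands in ptype_all */
            return 0;
        }

        static void __exit otp_exit(void)
        {
            dev_remove_pack(&otp_proto);
        }

        module_init(otp_init);
        module_exit(otp_exit);
        MODULE_LICENSE("GPL");

    Handlers registered through dev_add_pack() are invoked from the core receive path (netif_receive_skb and friends), so they always sit at the driver/L2 boundary; filtering at other layers uses different mechanisms (netfilter hooks at the IP layer, or LSM hooks at the socket layer), not this API.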

    Read the article

  • Where can I configure wireless to be passed on to the VirtualBox guest?

    - by huahsin68
    I have WinXP installed in VirtualBox, hosted on Linux. I have a TP-Link (TP-WN321G) USB wifi adapter and have its driver installed inside WinXP. When I plug in the wifi adapter, an option labelled "Ralink 802.11g WLAN [0101]" appears under VirtualBox's USB icon. If I tick that option, Device Manager detects the hardware and shows it as TP-Link, but when I look into its properties it says no driver is installed. I did try to install the Ralink driver, but still no luck. Just curious: why is my wifi adapter a TP-Link, but the option shows Ralink? How can I make the wireless network available inside Windows XP?

    Read the article

  • mmap only the needed pages of a kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631 I need something similar, but without having to allocate a buffer: the buffer is large in theory, but the user-space program only needs to access some parts of it (it mocks some registers of a piece of hardware). As I cannot allocate such a large buffer with vmalloc_user() (32-bit kernel, embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are actually requested by user space. So: I use a my_mmap() function for the device file (actually the .mmap field of a struct uio_info) to set up the fields of the vma; then, in the .fault handler of its vm_operations_struct (a function I named my_fault()), I should return a page. Except that in my_fault() I cannot obtain a page through:

        vmf->page = vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT));

    since there is no allocated buffer:

        my_buf = vmalloc_user(MY_BUF_SIZE);

    fails with "allocation failed: out of vmalloc space - use vmalloc= to increase size." (and there is no room or swap to increase that vmalloc= parameter). So I would need to get a page from the kernel and fill in the vmf->page field. How do I allocate a page (I assume the offset of the page is known, as it is vmf->pgoff)? And what base memory should I use instead of my_buf? PS: I also set vma->flags |= VM_NORESERVE; (in my_mmap()), but I am not sure it helps. Is there any vmalloc_user_unreserved()-like function (lazy allocation, let's say)? Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (e.g. 500) to /proc/sys/vm/overcommit_ratio before trying my_buf = vmalloc_user(<large_size>) didn't work.
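
    A lazy-allocation .fault handler along the lines of the quoted answer usually looks something like the sketch below (written against the 2.6.3x-era fault signature; MY_NPAGES and the my_pages table are hypothetical bookkeeping, standing in for whatever maps a pgoff to its backing page):

        #include <linux/mm.h>

        #define MY_NPAGES 16                     /* hypothetical size of the mocked region */
        static struct page *my_pages[MY_NPAGES]; /* one entry per faulted-in page */

        static int my_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
        {
            struct page *page;

            if (vmf->pgoff >= MY_NPAGES)         /* access beyond the mocked region */
                return VM_FAULT_SIGBUS;

            page = my_pages[vmf->pgoff];
            if (!page) {
                page = alloc_page(GFP_KERNEL | __GFP_ZERO);
                if (!page)
                    return VM_FAULT_OOM;
                my_pages[vmf->pgoff] = page;     /* our own long-lived reference */
            }

            get_page(page);                      /* extra reference for the page tables */
            vmf->page = page;
            return 0;
        }

    The point is that alloc_page() takes the place of the vmalloc_to_page(my_buf + ...) lookup: there is no backing buffer at all, just individual pages created the first time each offset is touched (and released with __free_page() when the device goes away).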

    Read the article

  • How to switch between screens inside screen?

    - by André Andrade
    I have to work inside two environments, one Windows (local) and one Linux (remote). I have installed the screen utility in both. I am able to open a screen on my Windows box; then, in one tab, I opened an ssh connection to the remote Linux machine and started another screen there. Sample:

        linux   -- |0 linux remote 0| 1 linux remote 1
        windows -- |0 linux | 9 windows

    I can switch between "linux remote 0" and "linux remote 1" using Alt+<number>. This is configured in .screenrc (bindkey "^[0" select 0). How can I switch to "9 windows"?
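
    One common way out (a suggestion based on screen's standard features, not tested against this exact setup) is to give the inner screen a different command character, so the two instances never compete for the same keys. In the remote ~/.screenrc:

        # inner (remote) screen: move the command key from C-a to C-b
        escape ^Bb

    Alternatively, with default bindings, pressing C-a a forwards a literal C-a to the inner screen, and an analogous pass-through binding could be set up for Alt-based shortcuts like the ones above.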

    Read the article

  • Fake serial communication under Linux

    - by kigurai
    I have an application where I want to simulate the connection between a device and a "modem". The device will be connected to a serial port and will talk to the software modem through that. For testing purposes I want to be able to use a mock software device to test sending and receiving data. Example Python code:

        device = Device()
        modem = Modem()
        device.connect(modem)
        device.write("Hello")
        modem_reply = device.read()

    Now, in my final app I will just pass /dev/ttyS1 or COM1 or whatever for the application to use. But how can I do this in software? I am running Linux and the application is written in Python. I have tried making a FIFO (mkfifo ~/my_fifo) and that does work, but then I would need one FIFO for writing and one for reading. What I want is to open ~/my_fake_serial_port and read from and write to that. I have also played with the pty module, but can't get that to work either. I can get master and slave file descriptors from pty.openpty(), but trying to read or write to them only causes an IOError with a "Bad file descriptor" message.
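
    For what it's worth, the mechanism the pty module wraps is the openpty() call, and it does essentially what is asked here: it creates a device node that behaves like a serial port. A minimal C sketch of the idea (error handling trimmed; the "modem" talks through the master end):

        #include <pty.h>     /* openpty(); link with -lutil */
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int master, slave;
            char name[64];

            if (openpty(&master, &slave, name, NULL, NULL) < 0) {
                perror("openpty");
                return 1;
            }
            printf("fake serial port: %s\n", name);  /* e.g. /dev/pts/3 */

            /* pass `name` to the application as its serial port; whatever it
               writes there can be read from `master`, and vice versa */
            write(master, "Hello from modem\r\n", 18);
            return 0;
        }

    In Python the same calls are exposed as os.openpty() and os.ttyname(slave); using os.read()/os.write() directly on the file descriptors, rather than wrapping them in file objects, is the usual way to drive them.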

    Read the article

  • Can't open device

    - by yoavstr
    I have a little problem. I installed this module into my kernel and it is registered under /proc. When I try to open() it from user mode I get the following message: "Can't open device file: my_dev"

        static int module_permission(struct inode *inode, int op, struct nameidata *foo)
        {
            /* if it's a write */
            if ((op == 2) && (writer == DOESNT_EXIST)) {
                writer = EXIST;
                return 0;
            }
            /* if it's a read */
            if (op == 4) {
                numOfReaders++;
                return 0;
            }
            return -EACCES;
        }

        int procfs_open(struct inode *inode, struct file *file)
        {
            try_module_get(THIS_MODULE);
            return 0;
        }

        static struct file_operations File_Ops_4_Our_Proc_File = {
            .read    = procfs_read,
            .write   = procfs_write,
            .open    = procfs_open,
            .release = procfs_close,
        };

        static struct inode_operations Inode_Ops_4_Our_Proc_File = {
            .permission = module_permission,   /* check for permissions */
        };

        int init_module()
        {
            /* create the /proc file */
            Our_Proc_File = create_proc_entry(PROC_ENTRY_FILENAME, 0644, NULL);

            /* check if the /proc file was created successfully */
            if (Our_Proc_File == NULL) {
                printk(KERN_ALERT "Error: Could not initialize /proc/%s\n",
                       PROC_ENTRY_FILENAME);
                return -ENOMEM;
            }

            Our_Proc_File->owner     = THIS_MODULE;
            Our_Proc_File->proc_iops = &Inode_Ops_4_Our_Proc_File;
            Our_Proc_File->proc_fops = &File_Ops_4_Our_Proc_File;
            Our_Proc_File->mode      = S_IFREG | S_IRUGO | S_IWUSR;
            Our_Proc_File->uid       = 0;
            Our_Proc_File->gid       = 0;
            Our_Proc_File->size      = 80;

            /* initialize the writer status */
            writer = DOESNT_EXIST;
            numOfReaders = 0;

            printk(KERN_INFO "/proc/%s created\n", PROC_ENTRY_FILENAME);
            return 0;
        }

    Read the article

  • Storage drives are causing system crashes

    - by Chad
    I'm running CentOS 5.4 with a 750 GB (NTFS) drive and a 2 TB drive for storage. Originally I installed the 750; everything seemed fine, and then I installed the 2 TB drive, already partitioned with NTFS. I noticed that when I copied a lot of videos the machine would crash (no mouse or response from the server) about 20 minutes in. After doing some troubleshooting I noticed the 750 would also crash when doing the same task, so I decided that NTFS might be the problem. I unmounted the 2 TB drive and tried to partition and format it as ext2, but it would crash at the "writing inode tables" step. Looking at the dmesg logs, I believe this is the error: "mtrr: type mismatch for e0000000,10000000 old: write-back new: write-combining". Any idea what could be causing this?

    Read the article

  • Is memory allocation in Linux non-blocking?

    - by Mark
    I am curious to know whether allocating memory using the default new operator is a non-blocking operation. e.g.

        struct Node {
            int a, b;
        };
        ...
        Node *foo = new Node();

    If multiple threads tried to create a new Node, and one of them was suspended by the OS in the middle of allocation, would it block the other threads from making progress? The reason I ask is that I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle nodes. The throughput performance of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to cause as much OS pre-emption as possible. The throughput of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. *Edit: pointing me to the code for the C++ memory allocator for Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.

    Read the article

  • Capabilities & Linux & Java

    - by Marek Jelen
    Hi, I am experimenting with Linux capabilities for a Java application... I do not want to add capabilities to the interpreter (JVM) itself, so I tried to write a simple wrapper (with debugging information printed to stdout):

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/capability.h>
        #include <unistd.h>

        int main(int argc, char *argv[]) {
            cap_t cap = cap_get_proc();
            if (!cap) {
                perror("cap_get_proc");
                exit(1);
            }
            printf("%s: running with caps %s\n", argv[0], cap_to_text(cap, NULL));
            return execlp("/usr/bin/java", "-server", "-jar", "project.jar", (char *)NULL);
        }

    This way I can see that the capability is set for this executable:

        ./runner: running with caps = cap_net_bind_service+p

    And getcap shows:

        runner = cap_net_bind_service+ip

    I have the capability set to be inheritable, so there should be no problem. However, Java still doesn't want to bind to privileged ports :-( I am getting this error:

        sun/nio/ch/Net.java:-2:in `bind': java.net.SocketException: Permission denied (NativeException)

    Can someone help me resolve this? Thanks in advance

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical, and they all use RAID. To date, I've therefore been backing up the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

        • the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
        • each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
        • partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5 GB file is inflating the size of the tarball and kill it)
        • again due to the size issue, I'm leaving out stuff which it would be nice to include, such as the contents of users' home directories. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
        • there must be a better way

    So, my question is: how should I be doing this properly? The requirements are:

        • needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
        • should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
        • should continue to scale with a couple more boxes, slightly more data, etc.
        • preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
        • an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3; the proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or, in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
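
    As a rough worked example of the trade-off (assuming hash-derived path components, a common way of building such trees): one-character hex components give a fan-out of 16 per level, so two levels yield 16^2 = 256 leaf directories (about 11,700 files each for three million files) and three levels yield 16^3 = 4,096 (about 730 each); with two-character components the fan-out is 256, so a single level already brings you down to roughly 11,700 files per directory and two levels to about 45.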

    Read the article

  • Kernel module for /proc

    - by sb2367
    How do I write a kernel module that creates a directory in /proc named mymod, with a file in it named mymodfile? The file should accept a number ranging from 1 to 3 when written to, and return the following messages when read, based on the number already written into it:

        • 1: current system time (with microsecond precision)
        • 2: system uptime
        • 3: number of processes currently in the system

    "The Output":

        root@Paradise# echo 1 > /proc/mymod/mymodfile
        root@Paradise# cat /proc/mymod/mymodfile
        08:30:24 342us
        root@Paradise# echo 2 > /proc/mymod/mymodfile
        root@Paradise# cat /proc/mymod/mymodfile
        up 1 day, 8 min
        root@Paradise# echo 3 > /proc/mymod/mymodfile
        root@Paradise# cat /proc/mymod/mymodfile
        process count: 48

    Please give me some hints on how to write such a kernel module, and on how to compile and install it.
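
    As a hint to get started, below is a rough skeleton using the create_proc_entry() interface (the pre-3.10 procfs API, matching the era of the other questions on this page). The handlers here only store and echo back the mode number; formatting the three requested outputs is left to fill in:

        #include <linux/module.h>
        #include <linux/proc_fs.h>
        #include <linux/uaccess.h>

        static struct proc_dir_entry *dir, *file;
        static int mode;   /* last number written: 1, 2 or 3 */

        static int mymod_write(struct file *f, const char __user *buf,
                               unsigned long count, void *data)
        {
            char kbuf[4] = "";
            unsigned long n = count < 3 ? count : 3;

            if (copy_from_user(kbuf, buf, n))
                return -EFAULT;
            mode = kbuf[0] - '0';   /* crude: expects "1".."3" */
            return count;
        }

        static int mymod_read(char *page, char **start, off_t off,
                              int count, int *eof, void *data)
        {
            *eof = 1;
            /* switch (mode): format time / uptime / process count into page */
            return sprintf(page, "mode=%d\n", mode);
        }

        static int __init mymod_init(void)
        {
            dir = proc_mkdir("mymod", NULL);
            file = create_proc_entry("mymodfile", 0644, dir);
            if (!file)
                return -ENOMEM;
            file->read_proc = mymod_read;
            file->write_proc = mymod_write;
            return 0;
        }

        static void __exit mymod_exit(void)
        {
            remove_proc_entry("mymodfile", dir);
            remove_proc_entry("mymod", NULL);
        }

        module_init(mymod_init);
        module_exit(mymod_exit);
        MODULE_LICENSE("GPL");

    To build it out of tree, a one-line Kbuild makefile (obj-m += mymod.o) plus:

        make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
        insmod mymod.ko

    For the three outputs themselves, do_gettimeofday(), the global jiffies counter, and for_each_process() from <linux/sched.h> are reasonable starting points to research.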

    Read the article

  • Boost program not working on Linux

    - by Martin Lauridsen
    Hi SOF, I have this program which uses Boost::Asio for sockets. I pretty much altered some code from the Boost examples. The program compiles and runs just like it should on Windows in VS. However, when I compile the program on Linux and run it, I get a segmentation fault. I posted the code here. The command I use to compile it is this:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -lboost_thread -static -lpthread

    By commenting out code, I have found that I get the segmentation fault due to the following line:

        boost::asio::io_service io_service;

    Can anyone provide any assistance as to what the problem (and the solution) may be? Thanks! Edit: I tried reducing the program to a minimal example, using no other libraries or headers, just boost/asio.hpp:

        #define DEBUG 0
        #include <boost/asio.hpp>

        int main(int argc, char* argv[]) {
            boost::asio::io_service io_service;
            return 0;
        }

    I also removed the other library inclusions and linking from the compilation command, but this minimal example still gives me a segmentation fault.

    Read the article

  • NULL pointer dereference in swiotlb_unmap_sg_attrs() on disk IO

    - by Inductiveload
    I'm getting an error I really don't understand when reading or writing files using a PCIe block device driver. I seem to be hitting an issue in swiotlb_unmap_sg_attrs(), which appears to be doing a NULL dereference of the sg pointer, but I don't know where this is coming from, as the only scatterlist I use myself is allocated as part of the device info structure and persists as long as the driver does. There is a stack trace to go with the problem. It tends to vary a bit in its exact details, but it always crashes in swiotlb_unmap_sg_attrs(). I think it's likely I have a locking issue, as I am not sure how to handle the locks around the IO functions. The lock is already held when the request function is called; I release it before the IO functions themselves are called, as they need an (MSI) IRQ to complete. The IRQ handler updates a "status" value, which the IO function is waiting for. When the IO function returns, I take the lock back up and return to request queue handling. The crash happens in blk_fetch_request() during the following:

        if (!__blk_end_request(req, res, bytes)) {
            printk(KERN_ERR "%s next request\n", DRIVER_NAME);
            req = blk_fetch_request(q);
        } else {
            printk(KERN_ERR "%s same request\n", DRIVER_NAME);
        }

    where bytes is updated by the request handler to be the total length of the IO (the summed length of each scatter-gather segment).
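
    For comparison, the usual shape of a request function that sleeps while the queue lock is dropped looks roughly like this sketch (do_pcie_io() is a hypothetical stand-in for "start the transfer and wait for the MSI completion"):

        /* called by the block layer with q->queue_lock already held */
        static void my_request_fn(struct request_queue *q)
        {
            struct request *req = blk_fetch_request(q);

            while (req) {
                int res;

                spin_unlock_irq(q->queue_lock);  /* drop the lock while sleeping */
                res = do_pcie_io(req);           /* hypothetical: IO + wait for MSI */
                spin_lock_irq(q->queue_lock);    /* retake before touching the queue */

                if (!__blk_end_request(req, res, blk_rq_bytes(req)))
                    req = blk_fetch_request(q);
            }
        }

    Two things worth auditing against this pattern: __blk_end_request() must be called with the queue lock held, and the byte count handed to it has to match what the request actually contained (blk_rq_bytes() is the safe choice); a mismatch can leave the block layer completing a request whose mapping was never what it expected, which would fit a NULL sg dereference in the unmap path.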

    Read the article

  • Help Installing phpMyAdmin on Linux Server

    - by Joe Majewski
    I constantly get this error when attempting to view index.php in the phpMyAdmin folder I have set up: "Cannot start session without errors, please check errors given in your PHP and/or webserver log file and configure your PHP installation properly." I am using subversion with three co-workers, and I am trying to install this in my repository, which is at /svni3/myusername/intranet/ on our Linux development server. I do have shell access, so I would be able to install it somewhere else if that may be causing the problem. My config.inc.php file looks like this (only the things I changed):

        $cfg['blowfish_secret'] = 'blah';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        //$cfg['Servers'][$i]['user'] = 'username';
        //$cfg['Servers'][$i]['password'] = 'password';
        $cfg['Servers'][$i]['host'] = 'localhost';

    Other possibly helpful information: when I ping localhost it works, and when I run the command "ls -l /var/lib/php" I get:

        drwxrwx--- 2 root apache 4096 2009-04-17 02:31 session

    If there's anything else that may be helpful, let me know; this has been plaguing me for a few hours now.

    Read the article

  • Why does my Linux signal handler run only once?

    - by Henry Fané
    #include <iostream>
    #include <signal.h>
    #include <fenv.h>
    #include <string.h>

    void signal_handler(int sig, siginfo_t *siginfo, void *context)
    {
        std::cout << " signal_handler " << fetestexcept(FE_ALL_EXCEPT) << std::endl;
        throw "exception";
    }

    void divide()
    {
        float a = 1000., b = 0., c, f = 1e-300;
        c = a / b;
        std::cout << c << " and f = " << f << std::endl;
    }

    void init_sig_handler()
    {
        feenableexcept(FE_ALL_EXCEPT);

        struct sigaction sa, initial_sa;
        sa.sa_sigaction = &signal_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_SIGINFO;   /* man sigaction(3): allows a
                                       void (*)(int, siginfo_t *, void *) handler */
        sigaction(SIGFPE, &sa, &initial_sa);
    }

    int main(int argc, char **argv)
    {
        init_sig_handler();
        while (true) {
            try {
                sleep(1);
                divide();
            } catch (const char *a) {
                std::cout << "Exception in catch: " << a << std::endl;
            } catch (...) {
                std::cout << "Exception in ..." << std::endl;
            }
        }
        return 0;
    }

    This produces the following output on Linux with g++ 4.2:

        signal_handler 0
        Exception in catch: exception
        inf and f = 0
        inf and f = 0
        inf and f = 0
        inf and f = 0

    So the signal handler is executed the first time, but the next FP exception does not trigger the handler again. Where am I wrong?
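
    One plausible explanation worth checking (an educated guess, not a verified diagnosis): the kernel blocks SIGFPE while its handler runs, and leaving the handler via throw bypasses the normal return path that would restore the signal mask, so SIGFPE stays blocked after the first catch. Installing the handler with SA_NODEFER avoids the masking in the first place:

        sa.sa_flags = SA_SIGINFO | SA_NODEFER;  /* don't block SIGFPE while handling it */

    or, equivalently, the mask can be cleared explicitly before throwing:

        sigset_t unblock;
        sigemptyset(&unblock);
        sigaddset(&unblock, SIGFPE);
        sigprocmask(SIG_UNBLOCK, &unblock, NULL);

    Note also that throwing a C++ exception out of a signal handler is not portable in general; with g++ it relies on building with -fnon-call-exceptions.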

    Read the article

  • sysklogd ignores my log facilities

    - by Synther Lawrence
    I'm using sysklogd 1.5.5. All I want is to get local0 entries into the /var/log/vr file. My conf:

        *.*;local0.none    /var/log/messages
        local0.*           /var/log/vr

    When I do logger -p local0.info "local0 test from logger", the message appears in /var/log/vr. That's OK. But the following sends its message to /var/log/messages instead of /var/log/vr:

        #include <stdlib.h>
        #include <syslog.h>

        int main(int argc, char const* argv[]) {
            openlog(NULL, LOG_PID, LOG_LOCAL0);
            syslog(LOG_INFO, "local0 test from app\n");
            closelog();
            return 0;
        }

    Where am I wrong?

    Read the article

  • CSV parser works on Windows, not Linux

    - by ladookie
    I'm parsing a CSV file that looks like this:

        E1,E2,E7,E8,,,
        E2,E1,E3,,,,
        E3,E2,E8,,,
        E4,E5,E8,E11,,,

    I store the first entry of each line in a string, and the rest go into a vector of strings:

        while (getline(file_input, line)) {
            stringstream tokenizer;
            tokenizer << line;
            getline(tokenizer, roomID, ',');
            vector<string> aVector;
            while (getline(tokenizer, adjRoomID, ',')) {
                if (!adjRoomID.empty()) {
                    aVector.push_back(adjRoomID);
                }
            }
            Room aRoom(roomID, aVector);
            rooms.addToTail(aRoom);
        }

    On Windows this works fine; however, on Linux the first entry of each vector mysteriously loses its first character. For example, in the first iteration through the loop, roomID would be E1 and aVector would hold 2, E7, E8; in the second iteration, roomID would be E2 and aVector would hold 1, E3. Notice the missing E's in the first entry of aVector. When I put in some debugging code, it appears the value is initially stored correctly in the vector, but then something overwrites it. Kudos to whoever figures this one out; it seems bizarre to me. rooms is declared as:

        DLList<Room> rooms

    where DLList stands for doubly-linked list.

    Read the article

  • Java socket bug on Linux (0xFF sent, -3 received)

    - by Marius
    While working on a WebSocket server in Java I came across this strange bug. I've reduced it down to two small Java files: one is the server, the other is the client. The client simply sends 0x00, the string Hello, and then 0xFF (per the WebSocket specification). On my Windows machine, the server prints the following:

        Listening
        byte: 0
        72 101 108 108 111 recieved: 'Hello'

    While on my Unix box the same code prints this:

        Listening
        byte: 0
        72 101 108 108 111 -3

    Instead of receiving 0xFF it gets -3, never breaks out of the loop, and never prints what it has received. The important part of the code looks like this:

        byte b = (byte)in.read();
        System.out.println("byte: " + b);
        StringBuilder input = new StringBuilder();
        b = (byte)in.read();
        while ((b & 0xFF) != 0xFF) {
            input.append((char)b);
            System.out.print(b + " ");
            b = (byte)in.read();
        }
        inputLine = input.toString();
        System.out.println("recieved: '" + inputLine + "'");
        if (inputLine.equals("bye")) {
            break;
        }

    I've also uploaded the two files to my server: Server.java Client.java My Windows machine is running Windows 7 and my Linux machine is running Debian.

    Read the article

  • C++: Help with cin difference between Linux and Windows

    - by Krashman5k
    I have a Win32 console program that I wrote, and it works fine. The program takes input from the user, performs some calculations and displays the output; standard stuff. For fun, I am trying to get the program to work on my Fedora box, but I am running into an issue with clearing cin when the user inputs something that does not match my variable type. Here is the code in question:

        void CParameter::setPrincipal()
        {
            double principal = 0.0;
            cout << endl << "Please enter the loan principal: ";
            cin >> principal;

            while (principal <= 0)
            {
                if (cin.fail())
                {
                    cin.clear();
                    cin.ignore(INT_MAX, '\n');
                }
                else
                {
                    cout << endl << "Please enter a number greater than zero. Please try again." << endl;
                    cin >> principal;
                }
            }
            m_Parameter = principal;
        }

    This code works on Windows. For example, if the user tries to enter a char (versus a double), the program informs the user of the error, resets cin, and gives the user another opportunity to enter a valid value. When I move this code to Fedora it compiles fine, but when I run the program and enter an invalid type, the while loop never breaks to let the user change the input. My questions are: how do I clear cin when invalid data is entered in the Fedora environment, and how should I write this code so it works in both environments (Windows and Linux)? Thanks in advance for your help!

    Read the article

  • malloc in kernel

    - by yoavstr
    When I try to call malloc in kernel mode:

        res = (ListNode *)malloc(sizeof(ListNode));

    the compiler screams at me:

        /root/ex3/ex3mod.c:491: error: implicit declaration of function ‘malloc’

    What should I do?
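
    For what it's worth, there is no malloc() inside the kernel; the C standard library is not available there, and the in-kernel equivalent is kmalloc() from <linux/slab.h>. A minimal sketch:

        #include <linux/slab.h>

        ListNode *res = kmalloc(sizeof(*res), GFP_KERNEL);  /* GFP_ATOMIC in interrupt context */
        if (!res)
            return -ENOMEM;   /* kmalloc can fail: always check */
        /* ... */
        kfree(res);

    No cast of the return value is needed in kernel C, and every kmalloc() has to be paired with a kfree().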

    Read the article

  • Compile error: C program over telnet (Linux)

    - by lilyrose07
    #include <stdio.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <pthread.h>

    int count = 0;

    void *thread_function(void *arg)
    {
        while (count < 10) {
            if (count % 2 == 1)
                count++;
            else
                sleep(1);
        }
        return NULL;
    }

    int main(int argc, char *argv[])
    {
        pthread_t a_thread[2];
        void *thread_result;
        int n;

        while (count < 10) {
            if (count % 2 == 0) {
                printf("%d", count);
                count++;
            } else {
                sleep(1);
            }
        }

        for (n = 0; n < 2; n++) {
            pthread_create(&(a_thread[n]), NULL, thread_function, NULL);
        }

        while (count == 9) {
            pthread_join(a_thread[0], &thread_result);
        }
        while (count == 10) {
            pthread_join(a_thread[1], &thread_result);
        }

        printf("%d", count);
        return 0;
    }

    Over telnet on Linux I run gcc za.c and get this error:

        undefined reference to `pthread_create', `pthread_join' in function 'main'

    Why?
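
    For what it's worth, that is a linker error rather than a compiler error: the pthread functions live outside plain libc, so the build command has to pull them in, e.g.:

        gcc -pthread za.c -o za

    (older toolchains spell it gcc za.c -lpthread). With the threads library linked, the undefined references to pthread_create and pthread_join should disappear.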

    Read the article

  • iptables: forward traffic for an IP not active on the host itself

    - by gucki
    I have a KVM guest whose network card is connected to the host using a tap device. The tap device is part of a bridge on the host, together with eth0, so it can access the public network. So far everything works: the guest can access the public network and can be accessed from it. Now, the KVM process on the host provides a VNC server for the guest, which listens on 127.0.0.1:5901 on the host. Is there any way to make this VNC server accessible via the IP address which the guest is using (e.g. 192.168.0.249), without preventing the guest from using that same IP (port 5901 is not used by the guest)? It should also work when the guest is not using any IP address at all. So basically I just want to pretend that IP xx is on the host, and answer/forward traffic for port 5901 to the host itself. I tried using this NAT rule on the host, but it doesn't work (IP forwarding is enabled on the host):

        iptables -t nat -A PREROUTING -p tcp --dst 192.168.0.249 --dport 5901 -j DNAT --to-destination 127.0.0.1:5901

    I assume this is because the IP 192.168.0.249 is not bound to any interface, so no ARP requests for it get answered and no packets for this IP arrive at the host. How can I make it work? :)

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is essentially a SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. For an introduction to SQL Azure, check out the post by Pinal here. In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add features that are not currently available in SQL Azure, or features that need to be customized for flexibility. In this way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud: it can perform background processes, which makes it the ideal tool for handling processes such as synchronization and backup. First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

        • Scaling out access over multiple databases enables us to handle workloads efficiently
        • As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data

    And some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:

        • Backing up a SQL Azure database onto local infrastructure
        • Rather than investing in local infrastructure for increased workloads, such workloads can be handled by the cloud
        • Extending data to datacenters located across the world, to enable efficient data access from remote locations

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. If you add a new worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf). Enabling synchronization of SQL Azure databases through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and metadata tables that enable synchronization. This is a one-time step, and its code goes in the Setup() function, which is called once when the worker role starts. The second step is the continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done on a continuous basis by calling the Sync() function inside the while loop, so the logic that synchronizes changes between the SQL Azure databases should go in Sync(). Discussing the coding part step by step is out of the scope of this article.
    Therefore, let me suggest a resource, which is given here. Also note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here), and you will reference some libraries before you start coding; details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter; for pricing information, go here. Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available as a CTP and allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code; however, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you want such functionality now, you have the option of developing custom code using the Sync Framework. This code can also easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm; this is what the code looks like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don't you think that by writing such code we are imitating the functionality provided by SQL Server Agent for an on-premises SQL Server? Think about it: we are scheduling our administrative task by writing custom code; in other words, we have developed a "lightweight SQL Server Agent for SQL Azure"! Since SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role. If you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices available are SQL Azure or Azure tables. Note that this solution requires custom code and is not UI-driven; however, for now, it can act as a temporary solution until SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that SQL Server Agent provides, but it does open up an interesting avenue for scheduling tasks such as backup and synchronization of SQL Azure databases by writing custom code in the Azure worker role. Now, let us look at one more possibility: running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload locally, consider the data transfer cost; if you upload to blobs residing in the same datacenter, no transfer cost applies, but the cost of the blob storage does. So, before choosing an option, evaluate your preferences with the cost of each option in mind. In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or mail me at Paras[at]student-partners[dot]com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, for Windows Azure. When he is away from his laptop, you will find him taking deep dives into automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As the data in your applications grows, it's the database that usually becomes the bottleneck. It's hard to scale a relational DB, so the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools and techniques that can create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements are typically data analytics, an effective schema (so an information worker can self-service), historical data, and better performance (flat data, no joins). This is where the need for a data warehouse, or an OLAP system, arises. A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, and so on. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are three major areas:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables, and there are also out-of-the-box features provided by some databases; SQL Server 2008, for example, offers Change Data Capture and Change Tracking to address such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system, and loads them into the data warehouse. There are many tools out there that can help fill this gap; SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural concern in the data warehouse arena is often covered under something called Slowly Changing Dimensions (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data; this creates multiple records for a given entity in the relational database, which is why data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one) which is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD in sales, that number would go in a sales fact table, and it could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table containing the dates between which that sale was made. These agent and time tables are called dimensions, and they provide context to the numbers stored in the fact tables. This schema structure, with a fact at the center surrounded by dimensions, is called a star schema. A similar structure, with the difference that the dimension tables are normalized, is called a snowflake schema. This relational structure of facts and dimensions serves as the input for another analysis structure called a cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it is a multidimensional structure where dimensions define the sides of the cube and facts define its content. Facts are often called measures inside a cube. Dimensions often tend to form a hierarchy; for example, products may be broken into categories, and categories in turn into individual items. Category and item are often referred to as levels, and their constituents as members, with the overall structure called a hierarchy. Measures are rolled up along the dimensional hierarchy, and these rolled-up measures are called aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry: it will sink in as you start working with cubes.

    Let's look at a few other terms that we run into when talking about data warehouses. ODS, or Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction in their report. Then there is another set of users who typically don't care how current the data is: mostly senior-level executives who are interested in trending, mining, forecasting and strategizing, and don't care about one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and OLAP cubes we saw earlier; the only difference is that the data inside an ODS is short-lived, say for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse, in turn, can periodically sync with the ODS, either daily or weekly, depending on business drivers. Data marts are another frequently discussed topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage and track, so they are often scaled down to something called a data mart, which supports a specific segment of business such as sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts alongside a central data warehouse: the data warehouse acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
