Search Results

Search found 8161 results on 327 pages for 'django queries'.

Page 308/327

  • How to get rid of messages addressed to non-existent subdomains?

    - by user71061
    Hi! I have a small problem with my sendmail server and need your help :-) My situation is as follows: user mailboxes are hosted on an MS Exchange server, and all mail to and from the outside world is relayed through my sendmail box:

        Exchange server ----- sendmail server ----- Internet

    My servers accept messages for one main domain (say, my.domain.com) and for a few other domains (let's narrow it to just one, say my_other.domain.com). After configuring sendmail with the abbreviated sendmail.mc file shown below, essentially everything works, but there is one small problem. I want to reject messages addressed to non-existent recipients as early as possible (to avoid sending non-delivery reports), so my sendmail server makes LDAP queries to the Exchange server to validate every recipient address. This works well for both domains, but not for subdomains. Such subdomains do not exist, but someone (I mean those eager spammers :-) could try addresses like user@any_host.my.domain.com or user@any_host.my_other.domain.com, and for those addresses the results are as follows:

    - Messages to user@sendmail_hostname.my.domain.com are rejected with the error "Unknown user" (due to the additional LDAPROUTE_DOMAIN line in my sendmail.mc file; this is the expected behaviour).
    - Messages to user@any_other_hostname.my.domain.com are rejected with the error "Relaying denied". It is a little strange to me why the error is different this time, but still OK: the message was rejected, and I don't care very much which error code is returned to the sender (spammer).
    - Messages to user@sendmail_hostname.my_other.domain.com and user@any_other_hostname.my_other.domain.com are rejected with the error "Unknown user", but only when there is no user@my_other.domain.com mailbox (on the Exchange server). If such a mailbox exists, then all three addresses (i.e. user@my_other.domain.com, user@sendmail_hostname.my_other.domain.com and user@any_other_hostname.my_other.domain.com) are accepted. (Adding an additional line LDAPROUTE_DOMAIN(my_sendmail_host.my_other.domain.com) to my sendmail.mc file doesn't change anything.)

    My abbreviated sendmail.mc file is as follows (sendmail 8.14.3-5).
    Both domains are listed in the /etc/mail/local-host-names file (FEATURE(use_cw_file)):

        define(`_USE_ETC_MAIL_')dnl
        include(`/usr/share/sendmail/cf/m4/cf.m4')dnl
        OSTYPE(`debian')dnl
        DOMAIN(`debian-mta')dnl
        undefine(`confHOST_STATUS_DIRECTORY')dnl
        define(`confRUN_AS_USER',`smmta:smmsp')dnl
        FEATURE(`no_default_msa')dnl
        define(`confPRIVACY_FLAGS',`needmailhelo,needexpnhelo,needvrfyhelo,restrictqrun,restrictexpand,nobodyreturn,authwarnings')dnl
        FEATURE(`use_cw_file')dnl
        FEATURE(`access_db', , `skip')dnl
        FEATURE(`always_add_domain')dnl
        MASQUERADE_AS(`my.domain.com')dnl
        FEATURE(`allmasquerade')dnl
        FEATURE(`masquerade_envelope')dnl
        dnl define(`confLDAP_DEFAULT_SPEC',`-p 389 -h my_exchange_server.my.domain.com -b dc=my,dc=domain,dc=com')dnl
        dnl define(`ALIAS_FILE',`/etc/aliases,ldap:-k (&(|(objectclass=user)(objectclass=group))(proxyAddresses=smtp:%0)) -v mail')dnl
        FEATURE(`ldap_routing',, `ldap -1 -T<TMPF> -v mail -k proxyAddresses=SMTP:%0', `bounce')dnl
        LDAPROUTE_DOMAIN(`my.domain.com')dnl
        LDAPROUTE_DOMAIN(`my_other.domain.com ')dnl
        LDAPROUTE_DOMAIN(`my_sendmail_host.my.domain.com')dnl
        define(`confLDAP_DEFAULT_SPEC', `-p 389 -h "my_exchange_server.my.domain.com" -d "CN=sendmail,CN=Users,DC=my,DC=domain,DC=com" -M simple -P /etc/mail/ldap-secret -b "DC=my,DC=domain,DC=com"')dnl
        FEATURE(`nouucp',`reject')dnl
        undefine(`UUCP_RELAY')dnl
        undefine(`BITNET_RELAY')dnl
        define(`confTRY_NULL_MX_LIST',true)dnl
        define(`confDONT_PROBE_INTERFACES',true)dnl
        define(`MAIL_HUB',` my_exchange_server.my.domain.com.')dnl
        FEATURE(`stickyhost')dnl
        MAILER_DEFINITIONS
        MAILER(smtp)dnl

    Could someone more experienced with sendmail advise me how to reject messages to those unwanted subdomains? P.S. Mailboxes @my_other.domain.com are used only for receiving messages and never for sending.

    Read the article

  • Scoping a home dev server

    - by AbhikRK
    Hi. I'm looking to build a multi-purpose home development server. In this post I'm looking to outline what I want from such a system, the 'why's of it to some limited extent, and finally some rudiments of how I'm looking to go about it. I'm mostly a developer, with just about some sysadmin familiarity, so please excuse, correct me, and advise on any ignorance that comes across in the following ;-)

    It will serve the following goals to start with:
    1. NAS (looking at using ZFS)
    2. Source control repo, e.g. a Git server
    3. Database, e.g. a MySQL server
    4. Continuous integration, e.g. a Hudson server
    5. Other stuff as and when it comes up, e.g. RabbitMQ etc.
    6. A development sandbox to play around with new stuff

    I want to achieve as high an uptime for 2-5 as possible. They should run as independent services and with minimal maintenance (e.g. TurnKey Linux appliances). I'm thinking of running them as individual Xen DomUs; then maybe the NAS can be the Dom0 and 6 can be another DomU. The user for this would be mostly me. I can see 2-4 sometimes being used by 2-3 users, but that would be infrequent. I'm looking for a repeatable setup; ideally I'd like to automate it through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN/on-the-go.

    My queries are:
    1. Is putting 1-6, all of them, on a single box a good idea? If so, what kind of hardware should I be looking at for a low-cost, low-power setup?
    2. Although not at present, in future I might be looking at adding audio/media servers to the mix. Would that impact the answers to 1?
    3. I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use?
    4. I had a look at the SheevaPlug and was wondering if I could split off the NAS on its own using that, but ruled it out preliminarily due to its reported heating issues. Is it something I should still consider?

    Thanks in advance

    Read the article

  • Internet Working, Browsing Not.

    - by jeffreypriebe
    I have a very odd problem that I can't resolve. I am connected to the internet, but my browsing doesn't work. I don't mean a web browser - I mean browsing: Firefox, Chrome and curl all fail to successfully connect to an HTTP address. However, existing connections, e.g. to mail in Outlook (Exchange Server and also an IMAP server), continue to work. Also, the internet is up; I can confirm this both from my machine (other ports/connections) and from any other computer connected to the same network. Additionally, it appears to be HTTP-specific, not simply a port issue, as HTTP over port 8443 (Tortoise SVN if you must know - running over HTTP, not over SVN) also fails.

    I am using Windows Vista SP2 (build 6002). The problem seems to "creep up": after running the computer for a few hours it will fail (I have found no way to reproduce it systematically). It also seems more prone to happen on days when the internet connection is flaky already (not sure why it is flaky, it just is - lots of failed browsing requests, and I have to retry/reload often).

    What I have tried when the problem arises (none of it has yielded any resolution):
    - Resetting the network connection (disconnect, reconnect)
    - Disabling/re-enabling the network adapter
    - Double-checking the IP settings
    - Double-checking the HOSTS file. Note: DNS continues to work (both new and cached responses to DNS queries). (Thanks for the suggestion, Daniel and antenore.)
    - Checking the routing tables (IPv4 only, as IPv6 is beyond my understanding)
    - Resetting all involved hardware (routers and modems)
    - Closing and reopening browsers
    - Looking for malware interference: ran HijackThis; looked for suspicious processes using Sysinternals procexp; looked for Explorer hijacks, LSA provider interference and Winsock provider interference using Sysinternals Autoruns; ran a complete anti-virus scan
    - Reviewing the output of netstat -onab to see if there were stuck open ports or unusual processes running somewhere

    The only thing that works is a full reboot. That works 100% of the time to restore browsing. What else can I try to nail down the problem?

    Read the article

  • Ubuntu Server 12.04 CPU Load

    - by zertux
    I have a server (2x hexa-core Xeon E5649 2.53GHz w/HT, 32GB RAM and 20000 GB bandwidth) running Ubuntu Server 12.04 LTS. The server runs LAMP and serves one website only; the estimated number of simultaneous users is ~15,000. At the moment I have around 2000 users online, and each of them runs about 50 MySQL queries (small ones, mostly SELECT and INSERT) from the beginning to the end of the session. The server CPU load is high at this number of connections, while RAM usage is only about 1GB out of 32GB. It's worth mentioning that the server was running very fast with no problems at all, but I am concerned about the load average. http://s12.postimage.org/z7hi6mz3h/photo.png

        top - 03:02:43 up 9 min,  2 users,  load average: 50.83, 30.14, 12.83
        Tasks: 432 total,   1 running, 430 sleeping,   0 stopped,   1 zombie
        Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 66.5%id, 33.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:  32939992k total,  3111604k used, 29828388k free,    84108k buffers
        Swap:  2048280k total,        0k used,  2048280k free,  1621640k cached

          PID USER      PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         2860 root      20  0 25820 2288 1420 S    3  0.0   0:11.18 htop
         1182 root      20  0     0    0    0 D    2  0.0   0:01.46 kjournald
         1935 mysql     20  0 12.3g 161m 7924 S    1  0.5 102:31.45 mysqld
           11 root      20  0     0    0    0 S    0  0.0   0:00.38 kworker/0:1
         1822 www-data  20  0  247m  25m 4188 D    0  0.1   0:01.81 apache2
         2920 www-data  20  0     0    0    0 Z    0  0.0   0:01.20 apache2 <defunct>
         2942 www-data  20  0  247m  23m 3056 D    0  0.1   0:00.20 apache2
         3516 www-data  20  0  247m  23m 3028 D    0  0.1   0:00.06 apache2
         3521 www-data  20  0  247m  23m 3020 D    0  0.1   0:00.09 apache2
         3664 www-data  20  0  247m  23m 3132 D    0  0.1   0:00.09 apache2
         3674 www-data  20  0  247m  23m 3252 D    0  0.1   0:00.06 apache2
         3713 www-data  20  0  247m  23m 3040 D    0  0.1   0:00.09 apache2
            1 root      20  0 24328 2284 1344 S    0  0.0   0:03.09 init
            2 root      20  0     0    0    0 S    0  0.0   0:00.00 kthreadd
            3 root      20  0     0    0    0 S    0  0.0   0:00.01 ksoftirqd/0
            6 root      RT  0     0    0    0 S    0  0.0   0:00.00 migration/0
            7 root      RT  0     0    0    0 S    0  0.0   0:00.00 watchdog/0
            8 root      RT  0     0    0    0 S    0  0.0   0:00.00 migration/1
            9 root      20  0     0    0    0 S    0  0.0   0:00.00 kworker/1:0

        root@server:~/codes# vmstat 1
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd     free   buff   cache  si so  bi    bo    in    cs us sy id wa
        19  0      0 29684012  86112 1689844   0  0  19   590   254   231 48  0 47  5
        23  0      0 29704812  86128 1697672   0  0   4   320 11100  8121 77  1 22  0
        33  0      0 29671044  86156 1705308   0  0   0  5440 13190  9140 95  1  4  0
        33  3      0 29670088  86160 1706288   0  0   0 32932 12275  7297 99  0  1  0
        35  0      0 29693456  86188 1710724   0  0   4   676 12701  7867 98  1  1  0
        ^C

    I have not changed any of the default configurations that come with Ubuntu. Is this load normal for such a powerful server? Is there any optimization I can make to Apache/MySQL to minimize the load? What do you recommend?
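
    One low-risk first step (a hedged suggestion only; the file path and thresholds below are examples, not taken from this server): given the 33% I/O wait and the apache2/mysqld processes stuck in D state above, the load looks I/O-bound rather than CPU-bound, so turning on the MySQL slow query log will show which of those ~50 queries per session actually hit disk.

        -- Standard MySQL 5.5 server variables; the values are illustrative only.
        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
        SET GLOBAL long_query_time = 1;                  -- log anything slower than 1 second
        SET GLOBAL log_queries_not_using_indexes = 'ON';

        -- Quick look at concurrency and table-scan pressure while the site is busy:
        SHOW GLOBAL STATUS LIKE 'Threads_running';
        SHOW GLOBAL STATUS LIKE 'Select_scan';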

    Read the article

  • Mixed sessions with Classic ASP on IIS 7.5 and Windows 2008 R2 64 bit

    - by Marcin
    We recently had an issue with a server upgrade from IIS 6 on Windows 2003 to IIS 7.5 on Windows 2008 R2 64 bit. We have a number of websites running on Classic ASP. All the sites sit under a particular site, e.g. www.example.com/foo and www.example.com/foobar. On IIS 6 each site was set up as a virtual directory and things worked fine. Since moving to the new setup, a lot of websites seem to have mixed sessions. To be clear, this is not an app pool recycling issue; rather, the sessions are populated with information when the user hits the site, and while browsing they get sessions from different sites. We've determined this based on:
    - a few customers called up and reported that their shopping cart contained items with names belonging to a different site
    - our own testing showed that some queries being run would try to bring in products from a different site

    We've tried:
    - disabling dynamic caching
    - converting each site to be a virtual application (if I understand correctly, the virtual directory/application concepts were changed/refined somewhat in IIS 7, although to be honest I'm not clear what the difference is)
    - various application pool changes (using the .NET 2 framework, classic and integrated modes, changing the process model to NetworkIdentity), all to no avail.

    The only thing we haven't tried is changing it to run as a 32 bit application. We're not using HTTP-only cookies, so when I open up a browser and type document.cookie into the dev console in Firefox/Chrome/IE I see that there are multiple ASPSESSIONID=... values, whereas previously I believe there was only one. Finally, we use server-side JScript for the Classic ASP pages, not VBScript, so we have code similar to the below:

        //the user's login account as a jscript object
        Session("user") = { email : "[email protected]", id : 123 };

    and if we execute a line of code like this:

        Response.Write( typeof(Session("user")) );

    then when things are running correctly we get "object", as expected. When the session gets trashed, the output is "unknown" and we are also unable to access the fields within the JScript object (e.g. the .email or .id fields). Much appreciated if anyone can provide any pointers about how to resolve this; everything on Google seems to point to different issues.

    Read the article

  • How to recover a MySQL InnoDB table?

    - by Kau-Boy
    When I try to launch the Plesk administration page of the server I get the following error:

        ERROR: PleskMainDBException
        MySQL query failed: MySQL server has gone away

    The MySQL server itself is working, but it seems that the Plesk database is somehow corrupt and any action on this database results in a restart of the mysql process, so even queries to other databases on the same MySQL server get lost. If I try to connect to the Plesk database using phpMyAdmin, I can only see the number of tables the database had originally, but I am not able to open the table listing; as soon as I try, the mysql process crashes again. Connecting to the database over SSH works, and I can even run a SELECT statement against any table and get a result. I don't know if it is a Plesk error, an error in the psa database, or even a problem with the MySQL server. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how can I do it, and will all my settings get lost doing it?

    EDIT: Trying to dump the database, I get the following error:

        mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES

    EDIT: I could work out that the table 'data_bases' is responsible for the crash of the MySQL server process, but trying to repair it using a REPAIR TABLE statement doesn't work.

    EDIT: I have now dropped the whole database and restored it from a dump. But when I try to recover the data_bases table I get the following error:

        ERROR 1005 (HY000) at line 24: Can't create table './psa/data_bases.frm' (errno: 121)

    I am not able to create the table again; somewhere in the MySQL system there is still some information about this table. I tested the same thing locally: if I just delete the table files and then try to create the table again I get the same error, whereas if I drop the table through MySQL, I can create it again afterwards. But trying to drop the table through MySQL crashes the whole MySQL system. Is there any way to solve this issue?
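
    For reference, one thing worth checking before rebuilding anything else (a hedged sketch, not a guaranteed fix): MySQL error 121 on CREATE TABLE is commonly caused by InnoDB still holding a foreign-key constraint whose name clashes with one defined in the table being recreated, so listing the constraints the server still knows about can reveal the leftover.

        -- 'psa' is the Plesk database named above; adjust if yours differs.
        SELECT CONSTRAINT_SCHEMA, TABLE_NAME, CONSTRAINT_NAME, CONSTRAINT_TYPE
        FROM information_schema.TABLE_CONSTRAINTS
        WHERE CONSTRAINT_SCHEMA = 'psa';

        -- If mysqld stays up long enough, this shows whether MySQL itself thinks the table is intact.
        -- As a last resort, starting mysqld with innodb_force_recovery = 1..4 (a my.cnf option, not SQL)
        -- often keeps it alive long enough for a clean mysqldump.
        CHECK TABLE psa.data_bases;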

    Read the article

  • Excessive Outbound DNS Traffic

    - by user1318414
    I have a VPS which I have had for 3 years on one host without issue. Recently, the host started sending an extreme amount of outbound DNS traffic to 31.193.132.138. Due to the way that Linode responded to this, I have recently left Linode and moved to 6sync. The server was completely rebuilt on 6sync with the exception of the Postfix mail configuration. Currently, the daemons running are: sshd, nginx, postfix, dovecot, php5-fpm (localhost only), spampd (localhost only) and clamsmtpd (localhost only). Given that the server was 100% rebuilt, I can't find any serious exploits against the above daemons, passwords have been changed, SSH keys don't even exist on the rebuild yet, etc., it seems extremely unlikely that this is a compromise being used to DoS the address. The provided IP is noted online as a known spam source. My initial assumption was that something was attempting to use my Postfix server as a relay, and that the bogus addresses it was providing were domains with that IP registered as their nameserver. Given my Postfix configuration, I would expect DNS queries for things such as SPF information to occur in equal or greater numbers than the attempted spam e-mails. Both Linode and 6sync have said that the outbound traffic is extremely disproportionate. The following is all the information I received from Linode regarding the outbound traffic:

        21:28:28.647263 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647264 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647264 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647265 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647265 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647266 IP 97.107.134.33 > 31.193.132.138: udp

    6sync cannot confirm whether or not the recent spike in outbound traffic was to the same IP or over DNS, but I have presumed as much. For now my server is blocking the entire 31.0.0.0/8 subnet to help deter this while I figure it out. Does anyone have any idea what is going on?

    Read the article

  • What the best way to achieve RPO of zero and lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which processes high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor, but their support in South Africa is very poor; we had a few issues, simply couldn't ever resolve them, and had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site, but this is:
    - a bad solution
    - getting the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB intensive.

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives:
    - RPO: zero loss. This is transactional data with financial impact, so we can't lose anything.
    - RTO: tending to zero. The business depends on our ability to process transactions; every minute we are down we are losing money.

    I have looked at a few of the other questions/answers but none meet our case exactly:
    - SQL Server 2008 failover strategy - Log shipping or replication?
    - How to achieve the following RTO & RPO with logshipping only using SQL Server?
    - What is the best of two approaches to achieve DB Replication?

    My current thinking is that we should use mirroring, but I am concerned that for RPO: 0 we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option. Our current DR process is to:
    1. Stop incoming traffic to the primary site and allow all in-flight transactions to complete.
    2. Allow the replication to DR to complete.
    3. Change network routing to route to the DR site.
    4. Start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions).

    In other words, the DR database needs to catch up with the primary as quickly as possible and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log shipping too), and can anyone suggest other considerations that we should keep in mind?
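
    For reference, synchronous (high-safety) database mirroring on SQL Server 2008 R2 is only a few statements once endpoints and security are in place; the sketch below is illustrative only, with placeholder server names, database name and port, and it omits the endpoint/certificate setup.

        -- Run on the mirror (DR) server first, then on the principal:
        ALTER DATABASE Payments SET PARTNER = 'TCP://principal.example.local:5022';   -- on the mirror
        ALTER DATABASE Payments SET PARTNER = 'TCP://drserver.example.local:5022';    -- on the principal

        -- High-safety mode: every commit waits for the mirror, which is what RPO = 0 implies:
        ALTER DATABASE Payments SET PARTNER SAFETY FULL;

        -- Optional witness for automatic failover, which helps the RTO side:
        ALTER DATABASE Payments SET WITNESS = 'TCP://witness.example.local:5022';

    Whether the synchronous commit latency over the WAN is acceptable is exactly the performance trade-off raised above.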

    Read the article

  • Some domain names not resolving on local network

    - by Solignis
    I am not really sure where to start with this one... I have a small network set up with some Linux servers (Ubuntu 11.04 Server). Two servers (NS01, NS02) are running BIND 9, configured as master and slave respectively. One server (MX01) is running Zimbra ZCS 7.1.1; it has a private BIND 9 server running to achieve a split-DNS configuration. This DNS server does not interact with the other two; it just forwards the queries it cannot resolve itself to those two, and that is it. No zone transfers. Zimbra is hosting 3 domains at the moment: solignis.local, solignis.com and campbellsurvey.net.

    The problem: from within my network I cannot connect to mail.campbellsurvey.net. When I say I cannot connect, I mean that if I open Firefox and type https://mail.campbellsurvey.net I go nowhere; the address is supposed to bring up my Zimbra webmail, but it goes nowhere. The odd thing is that if I try the same thing from outside the network, the website comes up like normal. If I try to create an account in Thunderbird to connect to the same server using IMAP4 or POP3, I get an error saying that Thunderbird cannot find the domain name. Even the Zimbra client fails. It is as if, from within my own walls, campbellsurvey.net does not exist, but if I step outside I can get it to work with no problem at all.

    I had thought maybe the problem was with the DNS server (BIND 9), so just to eliminate it as a possibility I configured a Windows server I use for VMware vCenter as a DNS server to see what would happen. The result was the same: it is like something is preventing connections to those domains, but I have checked the various firewalls, port forwards, etc. So I am running out of ideas. I know this is not a lot of information to work from, and I can give more details about certain things as needed; I am just trying to figure out what could be going wrong. Any help you could offer would be much appreciated.

    Read the article

  • How to disable or tune filesystem cache sharing for OpenVZ?

    - by gertvdijk
    For OpenVZ, an example of container-based virtualization, it seems that the host and all guests share the filesystem cache. This sounds paradoxical when talking about virtualization, but it is actually a feature of OpenVZ, and it makes sense too: because only one kernel is running, it is possible to benefit from sharing the same pages of filesystem cache in memory. And while it sounds beneficial, I think the setup here actually suffers in performance because of it. Here's why I think so: my machines aren't actually sharing any files on disk, so I can't benefit from this feature in OpenVZ. Several OpenVZ machines are running MySQL with MyISAM tables. MyISAM relies on the system's filesystem cache for caching of data files, unlike InnoDB's buffer pool. Also, some virtual machines are known to do heavy and large I/O operations on the same filesystem in the host. For example, when running cat *.MYD > /dev/null on some large database in one machine, I saw the filesystem cache shrinking in another, monitored by htop. This essentially flushes all the useful filesystem cache in the guests (FIFO), and so it flushes the MySQL caches in the guests. Now users are complaining that MySQL is very slow. And it is: some simple SELECT queries take several seconds at times when disk I/O is heavily used by other machines.

    So, simply put: is there a way to avoid the filesystem cache being wiped out by other virtual machines in container-based virtualization?

    Some thoughts:
    - Choosing the algorithm for flushing the filesystem cache in the kernel (possible? how?)
    - Reserving a certain amount of pages for a single VM (there seems to be no option for filesystem-cache type pages, from reading man vzctl)
    - Will running MySQL on another filesystem get me anywhere?

    If not, I think my alternatives are:
    - Use KVM for the MySQL/MyISAM VMs. KVM actually assigns memory to the VM and does not allow caches to be swapped out unless a balloon driver is used.
    - Move to InnoDB and tune the buffer pools, dirty pages, etc. (a sketch of the table change is below). This is currently considered 'nice to have' for the long term, as not everyone responsible for administration of the system understands InnoDB.
    - More suggestions welcome.

    System software: Proxmox (now 1.9, could be upgraded to 2.x). One big LV assigned for the VMs.
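
    On the InnoDB alternative above, the data-side change itself is small; the sketch below is illustrative only (database and table names are invented), and on MySQL of this vintage innodb_buffer_pool_size is a my.cnf setting that requires a server restart.

        -- Move a MyISAM table into InnoDB so its caching happens in the buffer pool,
        -- which noisy OpenVZ neighbours cannot flush the way they flush the page cache:
        ALTER TABLE mydb.big_table ENGINE = InnoDB;

        -- Confirm the current buffer pool size (set innodb_buffer_pool_size in my.cnf):
        SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';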

    Read the article

  • SQL Transactional Replication snapshot not applying

    - by dmch2
    Hi, I'm using SQL transactional replication with pull subscriptions to replicate databases (each hosting its own distribution database) from several servers across a VPN to a central server. I've got the first 2 databases working fine, but the 3rd one is causing me problems. My subscription server is SQL 2008; the source systems are all SQL 2005. The source databases are a few hundred MB in size and contain audit data, so they are simply growing slowly by adding new records at approx. 1 KB a second. As far as the replication monitor, agent logs and event logs show, everything is working fine - except that no data appears in my subscription database. The distribution agent doesn't seem to want to read the snapshot (and hence the initial state and schema) from the publisher. New transactions aren't applied, although they do seem to be arriving OK, as the replication monitor shows things like '5 transactions with 10 commands were delivered'. I would expect (as in previous cases) to see statements about data being BCPed in the replication monitor. The snapshot is on the publisher in a shared folder. The subscriber can view the snapshot OK (\\repldata) and the alternate snapshot folder is pointing at it, but the distribution agent doesn't seem to be making an attempt to read it. I tried changing the snapshot path to something that's incorrect and didn't even get an error saying that it couldn't access it. After lots of googling etc. I found that sp_MSget_repl_commands is called by the subscriber on the distribution database on the publisher. Running a profiler trace I can see that it's only called for one agent id. After a reinit it's called for sequence number 0x0 as expected, so I thought that would mean it would look for the snapshot. However, looking on the publisher I see that there's data for two agents - the snapshot agent and the log reader agent (which is the one being queried). So I guess I need to tell the distribution agent to get the data for both. But how? And more importantly, why? It worked fine on the other two servers I've replicated. I'm not an SQL novice, but this is pretty much my first go at replication, so don't be afraid to accuse me of missing something obvious/stupid! I can get log files (e.g. from the distribution agent) if you want, but they don't seem to have any errors in them - it just starts up and starts applying log reader agent changes. Cheers, Dave
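
    For reference, the blunt instrument when a distribution agent will not pick up a snapshot is to force a reinitialization so a fresh snapshot is generated and delivered; this is a hedged sketch only, with placeholder publication and subscriber names, and it does not explain the two-agent oddity described above.

        -- Run at the publisher, in the published database:
        EXEC sp_reinitsubscription
             @publication = N'AuditPublication',
             @article     = N'all',
             @subscriber  = N'CentralSubscriber';

        -- Then regenerate the snapshot so the distribution agent has something to apply:
        EXEC sp_startpublication_snapshot @publication = N'AuditPublication';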

    Read the article

  • Why is BIND giving me a SERVFAIL in this case? (Notes inside)

    - by imaginative
    Woke up this morning to a bunch of the following:

        root@foo:/etc/bind# dig @1.2.3.4 foo.example.com

        ; <<>> DiG 9.6.1-P2 <<>> @1.2.3.4 foo.example.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36121
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;;foo.example.com. IN A

        ;; Query time: 0 msec
        ;; SERVER: 1.2.3.4#53(1.2.3.4)
        ;; WHEN: Thu Apr 1 09:57:59 2010
        ;; MSG SIZE rcvd: 31

    Some background on the fictitious "1.2.3.4": it's a slave name server in my nameserver "farm". Technically I have ns1 (the master) and ns2/ns3. Currently ns1/ns2 are down for maintenance, so I left ns3 serving live traffic. That's the point, DNS is supposed to be resilient. Now the odd part is, "1.2.3.4" had been serving requests for example.com just fine for the last 4-5 days. This morning I got a phone call that it was non-responsive. After investigation I saw the message you see above, SERVFAIL. I looked into the zone file and saw the following:

        example.com IN SOA ns1.example.com. hostmaster.mail.example.com. (

    I wondered at this point whether the nameserver thought it was not authoritative over example.com, and adjusted it to the following:

        example.com IN SOA ns3.example.com. hostmaster.mail.example.com. (

    After that, it started responding again for all authoritative queries for example.com. I have no idea why. I thought these things were supposed to be normalized upon zone transfer from ns1 to ns3? Can someone please explain why this happened and how to prevent it from happening in the future? I've never had a similar problem, and because I don't understand it well, I might be missing some critical information in this question, so please let me know if I can add any further detail to make things clearer. One more thing to note: I have other domains that I'm authoritative for that have their SOA still saying ns1.example.com. and not ns3.example.com. Those domains are serving requests just fine! Is it a matter of time before they stop too and I have to change their SOA to ns3.example.com? Is this also only required because ns1 and ns2 are currently offline?

    Read the article

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules-of-thumb exist for a good RAM to Disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get. But now for some performance details! During normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely high read as it needs to deduplicate records identified in the first half of processing. This is the workflow for which "keep your working set in RAM" is made for, and we're designing around that assumption. The entire dataset is continually hit with random queries from end-user derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so will be very likely to incur disk hits. A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old, and is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of Replicas to help distribute the Read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB is a good rule-of-thumb for slow disks, As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQL Server 2008 instance with Profiler for one day. The overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6GB. I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly ~20,000 MB, i.e. 20 GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding?

    I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM. Or maybe it would just slow down the process of chewing up the whole RAM. If a lot fewer pages are read, maybe it'll all run much better even when usage is high (less time swapping, etc.)? Currently, our only option is to restart SQL Server once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I already know (not much yet) would be much appreciated. Leo.
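
    For reference, before digging through all the bad T-SQL by hand, the missing-index DMVs in SQL Server 2008 give a quick shortlist of candidates; treat the estimated benefit as a rough hint only, and review each suggestion before creating anything.

        SELECT TOP 10
               mid.statement AS table_name,
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns,
               migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) AS est_benefit
        FROM sys.dm_db_missing_index_group_stats AS migs
        JOIN sys.dm_db_missing_index_groups AS mig ON migs.group_handle = mig.index_group_handle
        JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle
        ORDER BY est_benefit DESC;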

    Read the article

  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard As a SharePoint developer (or IT Professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination - removing individual web parts from the page hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that are loading too slow. The Developer Dashboard is turned off by default and I'll go over 3 different ways to display it. Here is a screenshot of what the Developer Dashboard looks like when displayed at the bottom of the page:   You can see on the left side the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long it took to complete (obviously the kind of processing depends on what the web part does). On the right side you would see the different database calls issued through the SharePoint Object Model to process the page. You will notice that each of these database queries are actually a hyperlink and clicking on it displays a pop-up window that shows the actual SQL Query Text, the Call Stack that triggered the database call, and the IO statistics of that query. Enabling the Developer Dashboard Option 1: Managed Code   The Developer Dashboard is a farm-wide setting and the code above won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page (see screenshot below) where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development since a Site Collection Admin can turn it on or off and for a particular site only. The first cool thing about this is that the Site Collection Admin that turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility on who gets to see the Developer Dashboard output, you can set the SPDeveloperDashboardSettings.RequiredPermissions to control which group of users will have the permission to see the output. Option 2: Using stsadm Using stsadm, you can run the following command to configure the Developer Dashboard: STSADM –o setproperty –pn developer-dashboard –pv OnDemand To successfully execute this command, be sure you that are running as a Farm Admin. Option 3: Using PowerShell For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint Object Model: You can of course parameterized the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter. 
Events and the Developer Dashboard  Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events are monitored by default (for a web part it will events in the base web part class). Let's say you have a click event that could take some time, for example a web service call. And you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "Monitored Scope". And each scope has a set of "Monitors" that measures and counts calls and timings which appears in the Developer Dashboard. Below is an example on how to get your custom code to get included in the Developer Dashboard by wrapping it inside a new monitored scope: The code above would include your new scope "My long web service call" into the Developer Dashboard and would log the time it took to complete processing. In my opinion, wrapping your custom code in a SPMonitoredScope is a SharePoint development best practice since it provides you visibility and a better understanding on the performance of your components.

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations.  Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging.  Some of the data flows, however, are daily batch type transmissions.  For the daily batch transmissions, it was decided that we'd use SSIS to pull the data direct from DB2 to either a SQL Server or flat file.  I'm not at all an SSIS guy, I've done a bit here and there, but mainly for situations were we needed to move data from a dev environment to QA, mostly informal stuff like that.  And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy.  Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS flatform (that's what the DB in DB2 means, right?).   One of my first goals when I came onto this project was to develop of POC SSIS package to pull some data from DB2 and dump it to a flat file.  It sounded like a pretty straight forward task.  As always, the devil is in the details.  Configuring the DB2 connection manager took a bit of trial and error.  As such, I thought I'd post my experiences here in hopes that they might save someone the efforts I went through.  That being said, please keep in mind, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on. Before you get started, you need to figure out how you're going to connect to DB2.  From the research I did, it looks like there are a few options.  IBM has both an OLE DB and .Net data provider which can be found here.  I installed their client access tools and tried to use both the .Net and OLE DB providers but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect.  I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out.  The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2.  This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here. As it turns out, I already had Microsoft's driver installed on my dev VM, which stuck me as odd since I hadn't installed it.  I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM.  However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.   Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter in your host name, login credentials and initial catalog. A couple of things to note here.  First, the Initial catalog needs to be the same as your host name.  Not sure why that is, but trust me, it just does.  Second, for credentials, in my environment, we're using what the client's iSeries people refer to as "profiles".  I guess this is similar to SQL auth in the SQL Server world.  In other words, they've given me a username and password for connecting to DB, so I've entered it here. Next, click the Data Links button.  
On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out.  From the little bit I've read, packages are used to control SQL compilation and each DB2 connection needs one.  The package collection, I believe, controls where your package is created.  One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority. Next click the ellipsis next to the Network drop-down.  Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my hostname here. Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform.  Process binary as character was necessary to handle some of the DB2 data types.  I had a few columns that showed all their data as "System.Byte[]".  Checking Process binary as character resolved this. At this point, you should be good to go.  You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration.  The Test Connection button is obvious, this just verifies you can connect to the host using the configuration data you've entered.  The Packages button will attempt to connect to the host and create the packages required to execute queries. This isn't meant to be a comprehensive look SSIS and DB2, these are just some of the notes I've come up with since I've started working with DB2 and SSIS.  I'm sure as I continue developing my packages, I'll find more quirks and will post them here.
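
    One small addition of my own (not from the original walkthrough): once the connection manager validates, a trivial query against DB2's dummy table from an OLE DB Source is a quick way to prove end-to-end connectivity before wiring up the real extract.

        -- SYSIBM.SYSDUMMY1 exists on both DB2 for i and DB2 LUW, so this should return exactly one row:
        SELECT CURRENT TIMESTAMP AS server_time
        FROM SYSIBM.SYSDUMMY1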

    Read the article

  • Oracle Open World 2012: SQL Developer Recap

    - by thatjeffsmith
    Last week was the ‘big show’ in San Francisco. I was very happy to meet many of you in person. And many of you had questions – lots of questions! We had full or overflowing rooms for our sessions and hands-on-labs. The SQL Developer ‘booths’ were also slammed several times. So exciting to see so many of YOU excited about SQL Developer. It’s very cool to hear the stories of our tools saving you and your organizations so much time (and money!) Instead of doing a Day 0 – Day 9 recap, I thought I’d share with you the questions that I heard more than once. And just for giggles, I’ll throw in some answers as well So in no particular order… What’s the difference between Oracle SQL Developer & Oracle SQL Developer Data Modeler? Mathematically speaking – two words. But as far as the actual modeling features go, there’s no difference between the two applications. The same ‘code’ or features as it pertains to data modeling and design are in both tools. However, in SQL Developer you have all of the OTHER features fighting for real estate in the UI. So I have a general rule of thumb – if you spend MOST of your time in the database, use SQL Developer. And if you spend most of your time in the data model, run the separate and dedicated program, Oracle SQL Developer Data Modeler. Here’s a couple of screenshots to drive home the UI point: Oracle SQL Developer Oracle SQL Developer Data Modeler running INSIDE of SQL Developer. Notice how the Modeler menu items fold under the file menu? Oracle SQL Developer Data Modeler Easier to navigate and manipulate your models with the stand alone modeler. Just no worksheet to run your ad-hoc queries, etc. Don’t forget you can disable the Data Modeler inside of SQL Developer via the Extensions preference page. How can I model my table partitions? Partitioning is defined via the Physical model. So after you have finished your relational model, you need to generate a physical model. Oracle SQL Developer Data Modeler Physical Model and Partitioning Open the properties for your physical model table. Enable the ‘partitioned’ property. Once you do so, the ‘Partitioning’ page will activate. Lots and lots of partitioning support and options here But what about Interval Partitioning? An extension of range partitioning in 11gR2, we don’t currently support this partitioning scheme in SQL Developer. But we’re working on it! Can SQL Developer ignore column order when comparing models? Yes! After you start a model compare, one of your options is to disregard the order of an attribute or column definition. Tell SQL Developer you don’t care when your column shows up, just as long as it DOES show up. Wow, you got a lot of questions around modeling! Is that normal? Yes! While we appreciate that many folks inherit their applications and associated designs, new applications are being ‘born’ every day. Since both of our tools are free for anyone to design their new Oracle applications with, we attract a fair amount of attention I want to do a Hands On Lab. How do I get your software and instructional guides? Go here. Download VirtualBox. Then download the VB image. Import the appliance. Start it. Connect oracle/oracle on the OEL VM. Click on ‘Start Here’ in the desktop. Follow the instructions. If you need help, ask away! You went too fast in your Tips & Tricks session. Do you have cliff notes? Yes! And you’re SO close to finding them! Just go to my SQL Developer resources page. All of my tips are documented on this blog somewhere. I’ve indexed the most popular ones on the resource page. 
You can use the Search dialog on the right to find the rest. Or just send me a comment or question, and I’ll do my best to answer them as they come in.
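
    For readers wondering what the interval partitioning mentioned above looks like in plain DDL, here is an illustrative 11gR2 example (table, column and partition names are invented); this is the scheme the modeler cannot yet represent.

        CREATE TABLE sales_history (
          sale_id   NUMBER,
          sale_date DATE
        )
        PARTITION BY RANGE (sale_date)
        INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
        (PARTITION p_first VALUES LESS THAN (DATE '2012-01-01'));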

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU, memory by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. Property Type Description AccessMode Enumeration This property determines how the Filename property is interpreted. The values available are: DirectInput Variable Filename String This property holds the path for trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately API methods that expose the schema of a trace file have known issues and are unreliable, put simply the data often differs from what was specified. To overcome these limitations the component uses  some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml  - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml    - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. 
    You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry, How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads: Trace Sources for SQL Server 2005 | Trace Sources for SQL Server 2008

    Version History
    SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)
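
    As a footnote to the example usages above (an aside, not part of the component itself): the same .trc files can also be queried directly in T-SQL for quick spot checks; the file path below is a placeholder.

        SELECT TOP 10 TextData, Duration / 1000 AS duration_ms, CPU, Reads, Writes
        FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
        WHERE EventClass IN (10, 12)   -- RPC:Completed, SQL:BatchCompleted
        ORDER BY Duration DESC;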

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server Architecture is a very deep subject, and covering it in a single post is an almost impossible task. However, it is a very popular topic among both beginners and advanced users. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about Beginning SQL Server Architecture. As stated earlier this is a deep subject, and in this first article of the series he covers the basic terminology; in future articles he will explore the subject further. Anil Kumar Yadav is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are the Relational Engine, the Storage Engine and the SQL OS. Now we will discuss and understand each one of them.
1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Different tasks of the Relational Engine: Query Processing, Memory Management, Thread and Task Management, Buffer Management, Distributed Query Processing.
2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN etc.). To understand more, let’s focus on the following diagram. When we talk about any database in SQL Server, there are two types of files that are created at the disk level – the data file and the log file. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing the transactions performed on the database. Let’s understand the data file and log file in more detail:
Data File: The data file stores data in the form of data pages (8 KB each), and these data pages are logically organized into extents.
Extents: Extents are logical units in the database. An extent is a combination of 8 data pages, i.e. 64 KB. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages (index, system, object data etc.); uniform extents, on the other hand, are dedicated to only one type.
Pages: Some of the page types that can be stored in SQL Server are:
Data Page: Holds the data entered by the user, but not data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image or xml.
Index: Stores the index entries.
Text/Image: Stores LOB (Large Object) data such as text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml.
GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): Used for saving information related to the allocation of extents.
PFS (Page Free Space): Information related to page allocation and the unused space available on pages.
IAM (Index Allocation Map): Information pertaining to the extents that are used by a table or index, per allocation unit.
BCM (Bulk Changed Map): Keeps information about the extents changed by a bulk operation.
DCM (Differential Change Map): Information about the extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.
Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL). Sufficient information is logged to be able to roll back transactions if requested and to recover the database in case of failure. Write-ahead logging is used to create the log entries, transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model.
3) SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating system layer with a powerful API, enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management (which deals with the buffer pool and log buffer) and deadlock detection using the blocking and locking structures. Other services include exception handling and hosting for external components such as the Common Language Runtime (CLR). I guess this brief article gives you an idea of the various terms related to SQL Server architecture. In future articles we will explore them further.
Guest Author: The author of the article, Anil Kumar Yadav, is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
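To make the page and extent sizing above concrete, here is a small, hedged C# sketch (not from the original post) that estimates how many 8 KB pages and 8-page extents a table would occupy. The row size and row count are made-up illustration values; 8,060 bytes is the usual in-row data limit per page.

```csharp
using System;

class ExtentEstimate
{
    static void Main()
    {
        // Illustrative assumptions: ~200 bytes per row, 100,000 rows.
        const int rowBytes = 200;
        const int rowCount = 100000;

        const int usableBytesPerPage = 8060;  // in-row data limit on an 8 KB page
        const int pagesPerExtent = 8;         // 8 pages = one 64 KB extent

        int rowsPerPage = usableBytesPerPage / rowBytes;                  // 40 rows per page
        int pages = (int)Math.Ceiling(rowCount / (double)rowsPerPage);    // 2,500 data pages
        int extents = (int)Math.Ceiling(pages / (double)pagesPerExtent);  // 313 extents

        Console.WriteLine($"{rowsPerPage} rows/page, {pages} pages, {extents} extents (~{extents * 64} KB)");
    }
}
```

The same arithmetic is what makes the 8-page/64 KB extent a convenient allocation unit: small tables can start in mixed extents, while larger ones are allocated whole uniform extents.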

    Read the article

  • ICC Cricket World Cup 2011- Free Online Live Streaming, Mobile Apps, TV and Radio Guide

    - by Kavitha
    The ICC Cricket World Cup 2011 will be hosted jointly by Bangladesh, India and Sri Lanka. The 10th edition of the World Cup will be held between 19 February and 2 April 2011. The tournament starts in Dhaka on 19 February with the inaugural match between India and Bangladesh. The 43-day-long ICC World Cup Cricket 2011 event will host 49 matches, with day matches starting as early as 9.30am IST and day-night matches starting at 2.30pm IST. Here is our guide to following the 2011 ICC Cricket World Cup live on your computers, televisions, mobiles and radios.
Free Live Streaming On The Web (Official & Unofficial)
http://espnstar.com will live stream all the matches of World Cup 2011, and they will be available in HD quality as ESPN Star is the official broadcaster of the World Cup 2011 cricket event. This is the first time ever that a cricket world cup event is streamed online officially. If you are not able to access the official live streaming of the Cricket World Cup due to regional restrictions, point your browser to any of the following unofficial live streams on the web. NOTE: MAKE SURE THAT YOUR ANTIVIRUS and ANTIMALWARE software are up and running before opening any of these sites.
crictime.com - this site offers 6 live streaming servers that carry World Cup 2011 cricket match streams. Don’t mind the ads that are displayed left, right and center and just enjoy the cricket. Web pages dedicated to the world cup streaming are already live and you can bookmark them for your reference.
cricfire.com/live-cricket - cricfire gathers cricket live streams available around the web and provides them for easy access. They also provide links for watching highlights and other post-match analysis shows.
Other sites that provide live streaming videos: extracover.net, webcric.com
Searching for Unofficial Streams On Live Video Streaming Sites
One of the best ways to find the unofficial streams is to look for live streaming feeds on popular video streaming websites. We can be reasonably assured that these sites do not spread malware and spammy ads as they are well established. Here are the queries that you can use to search the popular sites:
FreedoCast  http://freedocast.com/search.aspx?go=cricket%20world%20cup
Justin.tv      http://www.justin.tv/search?q=cricket+world+cup
Ustream.tv  http://www.ustream.tv/discovery/live/all?q=cricket%20world%20cup
TV Channels That Telecast Cricket World Cup Live
Even though the web is the place where we spend most of our entertainment time, TVs are still popular for watching sports events; perhaps 90% of us are going to follow the cricket world cup matches on television sets. Here is the list of TV channels that paid whopping amounts of money for broadcasting rights and are going to telecast live cricket:
Afghanistan – Ariana Television Network: Lemar TV
Australia – Nine Network, Fox Sports
Bangladesh – Bangladesh Television
Canada – Asian Television Network
China – ESPN Star Sports
Europe (Except UK & Ireland) – Eurosport2
Fiji – Fiji TV
India – ESPN Star Sports, Star Cricket, DD National (mostly India matches alone)
Ireland – Zee Cafe
Jamaica – Television Jamaica
Middle East – Arab Radio and Television Network
Nepal – ESPN Star Sports
New Zealand – Sky Sport
Pacific Islands – Sky Pacific
Pakistan – GEO Super, Pakistan Television Corporation
Pan-Africa – South African Broadcasting Corporation
Singapore – Star Cricket
South Africa – Supersport, Sabc3 Sport
Sri Lanka – Sri Lanka Rupavahini Corporation
United Kingdom – Sky Sports HD
USA – Willow Cricket, DirecTV, Dish Network
West Indies – Caribbean Media Corporation
Radio Stations That Provide Live Commentary
Don’t we listen to radio? Yes, we still listen to radio, especially when we are on the go. Radios are part of our mobiles as well as music players like iPods. Here are the stations that you can tune into for live cricket commentary:
Australia – ABC Local Radio
Bangladesh – Bangladesh Betar
Canada, Central America – EchoStar
India – All India Radio
Pakistan, United Arab Emirates – Hum FM
Sri Lanka – FM Derana
United Kingdom, Ireland – BBC Radio
West Indies – Caribbean Media Corporation
Watch World Cup Cricket On Your Mobile
This section is for Indian users. The 3G rollout is happening at a very fast pace in all parts of India, and most metros and towns are able to access 3G services. With 3G on your mobile you will be able to watch live ICC world cup cricket on your Reliance mobile, and you can read more about it here.
Top 10 Cricket Websites
Check out our earlier post on the top 10 cricket websites for information. This article titled, ICC Cricket World Cup 2011 - Free Online Live Streaming, Mobile Apps, TV and Radio Guide, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
Description: Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:
Private data (do not want to send or store sensitive data off-site)
High dollar investment in current infrastructure
Applications currently running well, but which may need additional periodic capacity
Current applications not designed in a stateless fashion
In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping the majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.
Implementation: There are multiple methods for implementing a hybrid architecture, in a spectrum from very little interaction between the local infrastructure and Windows or SQL Azure to a great deal. The patterns fall into two broad schemas, and even these can be mixed.
1. Client-Centric Hybrid Patterns
In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as a client, the user’s device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it’s the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario.
2. System-Centric Hybrid Patterns
Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities, with code calls coming from a series of in-process client applications. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Description Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract. There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.
Connection resiliency - On-premise applications normally have low latency and good connection properties, something you’re not always guaranteed in a distributed and hybrid application. Whether the client is centralized or distributed, the code should be able to handle extended retry logic (see the sketch after this excerpt).
Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure App Fabric ACS feature to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.
Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a service architecture, you need to plan for sequential message handling and lifecycle.
Resources:
How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
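As a minimal illustration of the "extended retry logic" called out above, here is a hedged C# sketch (not part of the original post) of a generic retry wrapper with exponential backoff that a hybrid client could place around calls to a remote, Azure-hosted service. The endpoint URL is a made-up placeholder.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class ResilientCall
{
    // Retries an operation that may fail transiently, backing off 1s, 2s, 4s, ...
    public static async Task<T> WithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (HttpRequestException) when (attempt < maxAttempts)
            {
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
            }
        }
    }
}

class Example
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Hypothetical Azure-hosted endpoint, purely for illustration.
        string result = await ResilientCall.WithRetryAsync(
            () => client.GetStringAsync("https://example-hybrid-app.cloudapp.net/api/orders"));
        Console.WriteLine(result.Length);
    }
}
```

The same wrapper works whether the caller is the centralized web client or a distributed compiled client; only the operations passed in differ.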

    Read the article

  • How to deploy jBPM 3.2.2 console on Oracle 10g iAS

    - by Balint Pato
    Hi! Does anybody have experience regarding deployment of the jBPM Administration Console on Oracle 10g iAS? I successfully deployed it using an .ear, security mappings working, I can even login to the console, Hibernate finds the JNDI datasource but it cannot find the TransactionManager. I see no log, only the exception thrown in the jsf page: Can anybody help me? The hibernate.cfg.xml file now looks like this: <?xml version='1.0' encoding='utf-8'?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <!-- hibernate dialect --> <property name="hibernate.dialect">org.hibernate.dialect.Oracle9Dialect</property> <!-- JDBC connection properties (begin) === <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property> <property name="hibernate.connection.url">jdbc:hsqldb:mem:jbpm</property> <property name="hibernate.connection.username">sa</property> <property name="hibernate.connection.password"></property> ==== JDBC connection properties (end) --> <property name="hibernate.cache.provider_class">org.hibernate.cache.HashtableCacheProvider</property> <!-- DataSource properties (begin) --> <property name="hibernate.connection.datasource">java:/JbpmDS</property> <!-- DataSource properties (end) --> <!-- JTA transaction properties (begin) --> <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</property> <!-- <property name="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</property>--> <!-- JTA transaction properties (end) --> <!-- CMT transaction properties (begin) === <property name="hibernate.transaction.factory_class">org.hibernate.transaction.CMTTransactionFactory</property> <property name="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</property> ==== CMT transaction properties (end) --> <!-- logging properties (begin) --> <property name="hibernate.show_sql">true</property> <property name="hibernate.format_sql">true</property> <property name="hibernate.use_sql_comments">true</property> <--==== logging properties (end) --> <!-- ############################################ --> <!-- # mapping files with external dependencies # --> <!-- ############################################ --> <!-- following mapping file has a dependendy on --> <!-- 'bsh-{version}.jar'. --> <!-- uncomment this if you don't have bsh on your --> <!-- classpath. you won't be able to use the --> <!-- script element in process definition files --> <mapping resource="org/jbpm/graph/action/Script.hbm.xml"/> <!-- following mapping files have a dependendy on --> <!-- 'jbpm-identity.jar', mapping files --> <!-- of the pluggable jbpm identity component. --> <!-- Uncomment the following 3 lines if you --> <!-- want to use the jBPM identity mgmgt --> <!-- component. 
--> <!-- identity mappings (begin) --> <mapping resource="org/jbpm/identity/User.hbm.xml"/> <mapping resource="org/jbpm/identity/Group.hbm.xml"/> <mapping resource="org/jbpm/identity/Membership.hbm.xml"/> <!-- identity mappings (end) --> <!-- following mapping files have a dependendy on --> <!-- the JCR API --> <!-- jcr mappings (begin) === <mapping resource="org/jbpm/context/exe/variableinstance/JcrNodeInstance.hbm.xml"/> ==== jcr mappings (end) --> <!-- ###################### --> <!-- # jbpm mapping files # --> <!-- ###################### --> <!-- hql queries and type defs --> <mapping resource="org/jbpm/db/hibernate.queries.hbm.xml" /> <!-- graph.action mapping files --> <mapping resource="org/jbpm/graph/action/MailAction.hbm.xml"/> <!-- graph.def mapping files --> <mapping resource="org/jbpm/graph/def/ProcessDefinition.hbm.xml"/> <mapping resource="org/jbpm/graph/def/Node.hbm.xml"/> <mapping resource="org/jbpm/graph/def/Transition.hbm.xml"/> <mapping resource="org/jbpm/graph/def/Event.hbm.xml"/> <mapping resource="org/jbpm/graph/def/Action.hbm.xml"/> <mapping resource="org/jbpm/graph/def/SuperState.hbm.xml"/> <mapping resource="org/jbpm/graph/def/ExceptionHandler.hbm.xml"/> <mapping resource="org/jbpm/instantiation/Delegation.hbm.xml"/> <!-- graph.node mapping files --> <mapping resource="org/jbpm/graph/node/StartState.hbm.xml"/> <mapping resource="org/jbpm/graph/node/EndState.hbm.xml"/> <mapping resource="org/jbpm/graph/node/ProcessState.hbm.xml"/> <mapping resource="org/jbpm/graph/node/Decision.hbm.xml"/> <mapping resource="org/jbpm/graph/node/Fork.hbm.xml"/> <mapping resource="org/jbpm/graph/node/Join.hbm.xml"/> <mapping resource="org/jbpm/graph/node/MailNode.hbm.xml"/> <mapping resource="org/jbpm/graph/node/State.hbm.xml"/> <mapping resource="org/jbpm/graph/node/TaskNode.hbm.xml"/> <!-- context.def mapping files --> <mapping resource="org/jbpm/context/def/ContextDefinition.hbm.xml"/> <mapping resource="org/jbpm/context/def/VariableAccess.hbm.xml"/> <!-- taskmgmt.def mapping files --> <mapping resource="org/jbpm/taskmgmt/def/TaskMgmtDefinition.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/def/Swimlane.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/def/Task.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/def/TaskController.hbm.xml"/> <!-- module.def mapping files --> <mapping resource="org/jbpm/module/def/ModuleDefinition.hbm.xml"/> <!-- bytes mapping files --> <mapping resource="org/jbpm/bytes/ByteArray.hbm.xml"/> <!-- file.def mapping files --> <mapping resource="org/jbpm/file/def/FileDefinition.hbm.xml"/> <!-- scheduler.def mapping files --> <mapping resource="org/jbpm/scheduler/def/CreateTimerAction.hbm.xml"/> <mapping resource="org/jbpm/scheduler/def/CancelTimerAction.hbm.xml"/> <!-- graph.exe mapping files --> <mapping resource="org/jbpm/graph/exe/Comment.hbm.xml"/> <mapping resource="org/jbpm/graph/exe/ProcessInstance.hbm.xml"/> <mapping resource="org/jbpm/graph/exe/Token.hbm.xml"/> <mapping resource="org/jbpm/graph/exe/RuntimeAction.hbm.xml"/> <!-- module.exe mapping files --> <mapping resource="org/jbpm/module/exe/ModuleInstance.hbm.xml"/> <!-- context.exe mapping files --> <mapping resource="org/jbpm/context/exe/ContextInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/TokenVariableMap.hbm.xml"/> <mapping resource="org/jbpm/context/exe/VariableInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/ByteArrayInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/DateInstance.hbm.xml"/> <mapping 
resource="org/jbpm/context/exe/variableinstance/DoubleInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/HibernateLongInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/HibernateStringInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/LongInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/NullInstance.hbm.xml"/> <mapping resource="org/jbpm/context/exe/variableinstance/StringInstance.hbm.xml"/> <!-- job mapping files --> <mapping resource="org/jbpm/job/Job.hbm.xml"/> <mapping resource="org/jbpm/job/Timer.hbm.xml"/> <mapping resource="org/jbpm/job/ExecuteNodeJob.hbm.xml"/> <mapping resource="org/jbpm/job/ExecuteActionJob.hbm.xml"/> <!-- taskmgmt.exe mapping files --> <mapping resource="org/jbpm/taskmgmt/exe/TaskMgmtInstance.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/exe/TaskInstance.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/exe/PooledActor.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/exe/SwimlaneInstance.hbm.xml"/> <!-- logging mapping files --> <mapping resource="org/jbpm/logging/log/ProcessLog.hbm.xml"/> <mapping resource="org/jbpm/logging/log/MessageLog.hbm.xml"/> <mapping resource="org/jbpm/logging/log/CompositeLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/ActionLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/NodeLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/ProcessInstanceCreateLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/ProcessInstanceEndLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/ProcessStateLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/SignalLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/TokenCreateLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/TokenEndLog.hbm.xml"/> <mapping resource="org/jbpm/graph/log/TransitionLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/VariableLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/VariableCreateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/VariableDeleteLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/VariableUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/ByteArrayUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/DateUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/DoubleUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/HibernateLongUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/HibernateStringUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/LongUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/context/log/variableinstance/StringUpdateLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/TaskLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/TaskCreateLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/TaskAssignLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/TaskEndLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/SwimlaneLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/SwimlaneCreateLog.hbm.xml"/> <mapping resource="org/jbpm/taskmgmt/log/SwimlaneAssignLog.hbm.xml"/> </session-factory> </hibernate-configuration> ---- edit --- I have already tried the hibernate.transaction.manager_lookup_class to set to the JBoss version (org.hibernate.transaction.JBossTransactionManagerLookup) it did not work...well it's not that suprising...I'll try now: org.hibernate.transaction.OC4JTransactionManagerLookup I tried with CMT instead of JTA, but it didn't work 
also.

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the textbook answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One question, however, is what if you can't predict the growth of demand required of your Commerce Platform, or you need the ability to scale up during busy seasons such as Christmas for a retail environment, but are hesitant about maintaining the infrastructure on a year-round basis? The obvious answer is to utilise the many elastic cloud infrastructure providers that are establishing themselves in the ever-growing market; the problem, however, is that Commerce Server is still a product which has a legacy, tightly coupled dependency on Windows and IIS components. Commerce Server 2009 codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host them in multiple places – leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple end-points. All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away within this version of the application, it is still not fully supported to be deployed exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising the Azure App Fabric for its service bus and authentication services, and Windows Azure to host any online presence you may require. The architecture would be something similar to this: This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services are constructed around the specific business domains of the application, which based on your business model would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing (a sketch of such a service contract follows this excerpt). The App Fabric service bus is then used to further abstract and aggregate the services, making them available to the cloud and subsequently secured by App Fabric's authentication services. These services are now available for consumption by any client, using any supported technology – not just .NET. This means you are now able to construct apps for iPhone, integrate with Java-based POS devices, and cover many other potential uses. This aggregation is useful and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.
The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale and the elasticity of the cloud. Just before the launch of the Azure Platform, Domino's Pizza actually managed to run their whole Super Bowl operation on the scalability of Windows Azure, simply switching back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer which needs the support of a scalable infrastructure – such as a high-demand, public-facing e-Commerce portal, or a promotional element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure also supports other languages, meaning that with this approach you can technically build a Commerce Server presentation layer in Java, PHP, or Ruby – or equally in ASP.NET or Silverlight – without having to change any of the underlying business or Commerce Server implementation. This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now with the introduction of a WCF capability in Commerce Server 2009/2009 R2 the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost-effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!
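As a rough illustration of the kind of domain-centred WCF service described above, here is a minimal, hedged C# sketch of a catalogue service contract. The interface, operation names and data contract are invented for illustration only and are not part of Commerce Server's actual API.

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical catalogue-domain contract; all names are illustrative.
[ServiceContract(Namespace = "http://example.org/commerce/catalogue")]
public interface ICatalogueService
{
    [OperationContract]
    ProductSummary GetProduct(string productId);

    [OperationContract]
    IList<ProductSummary> Search(string keyword, int maxResults);
}

[DataContract]
public class ProductSummary
{
    [DataMember] public string ProductId { get; set; }
    [DataMember] public string DisplayName { get; set; }
    [DataMember] public decimal ListPrice { get; set; }
}
```

An on-premise host would implement this contract against the underlying Commerce Server components and expose the endpoint through the AppFabric Service Bus, leaving cloud-hosted presentation layers (or non-.NET clients) to consume it like any other WCF service.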

    Read the article
