Search Results

Search found 4043 results on 162 pages for 'mod cluster'.


  • BizTalk - how do I set up MSMQ load balancing and high availability?

    - by FullOfQuestions
    Hi, from what I understand, in order to achieve MSMQ load balancing, one must use a technology such as NLB (Network Load Balancing), and in order to achieve MSMQ high availability, one must cluster the related BizTalk host (and hence the underlying servers have to be in a cluster themselves). Yet, according to Microsoft documentation, NLB and Failover Clustering are not compatible. See this link for reference: http://support.microsoft.com/kb/235305 Can anyone please explain to me how MSMQ load balancing and high availability can be achieved? Thank you in advance, M

    Read the article

  • Hadoop on Azure C# Isotope

    - by Sreesankar
    During the initial release of HadoopOnAzure, Microsoft provided the C# Isotope SDK as a programmatic interface to the Hadoop cluster on Azure. After the HDInsight release, this was removed from the downloads. Moreover, when trying the previous version of the SDK, we get a 500 - Internal Server Error. Any idea if this service has been disabled? If so, what's the alternative way to programmatically interface with the HDInsight cluster on Azure?

    Read the article

  • Is nginx / node.js / postgres a very scalable architecture?

    - by Luc
    I have an app running with: one instance of nginx as the frontend (serving static files); a cluster of node.js processes for the backend (using the cluster and expressjs modules); one instance of Postgres as the DB. Is this architecture sufficient if the application needs scalability (this is only for HTTP / REST requests) for: 500 requests per second (each request only fetches data from the DB; the data could be several KB, with no big computation needed after the fetch); 20,000 users connected at the same time. Where could the bottlenecks be?
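
    For a rough sense of scale, a back-of-envelope sketch (Python; the 8 KB response size is an assumption, since the question only says "several KB"):

        req_per_sec = 500
        resp_bytes = 8 * 1024                    # assumed "several KB" per response

        bandwidth = req_per_sec * resp_bytes     # bytes per second
        print(bandwidth / 1e6, "MB/s")           # ~4.1 MB/s
        print(bandwidth * 8 / 1e6, "Mbit/s")     # ~33 Mbit/s

        # At these rates the network is rarely the limit; the single Postgres
        # instance (connection pool, disk I/O) is the more likely bottleneck.

    The last comment is a heuristic, not a measurement; the 20,000 concurrent users are likely more a question of file descriptors and per-connection memory than raw throughput.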

    Read the article

  • How to create a matrix format in Perl

    - by shaq
    I have an array of arrays in which each inner array is like:

        clusterA gene1 1
        clusterA gene2 0
        clusterB gene1 1
        clusterB gene2 0

    I want to produce a file like:

        name gene1 gene2
        clusterA 1 0
        clusterB 1 0

    Current attempt:

        if (condition) {
            @array = ($cluster, $genes, "1");
        }
        elsif (not condition) {
            @array = ($cluster, $genes, "0");
        }
        push @AoA, [ @array ];

    @AoA is my array of arrays.
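
    For comparison, the pivot itself (group rows by cluster, columns by gene) in a minimal Python sketch, using hypothetical data matching the question:

        rows = [
            ("clusterA", "gene1", 1), ("clusterA", "gene2", 0),
            ("clusterB", "gene1", 1), ("clusterB", "gene2", 0),
        ]

        matrix = {}                  # one dict of gene -> value per cluster
        genes = []
        for cluster, gene, value in rows:
            matrix.setdefault(cluster, {})[gene] = value
            if gene not in genes:
                genes.append(gene)

        print("name", *genes)        # name gene1 gene2
        for cluster in sorted(matrix):
            print(cluster, *(matrix[cluster][g] for g in genes))

    The same shape works in Perl with a hash of hashes keyed by cluster and gene.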

    Read the article

  • How to set a global hook for WH_CALLWNDPROCRET?

    - by user261882
    Hello, I want to set a global hook that tracks which application is active. In my main program I am doing the following:

        HMODULE mod = ::GetModuleHandle(L"HookProcDll");
        HHOOK rslt = ::SetWindowsHookEx(WH_CALLWNDPROCRET, MyCallWndRetProc, mod, 0);

    The hook procedure, MyCallWndRetProc, exists in a separate DLL called HookProcDll.dll and watches for the WM_ACTIVATE message. The thing is that the code gets stuck on the line where I set the hook, i.e. the line where I call ::SetWindowsHookEx. Then Windows becomes unresponsive, my task bar disappears and I am left with an empty desktop, and I have to reset the computer. What am I doing wrong, and why does Windows become unresponsive? And do I need to inject HookProcDll.dll into every process in order to set the global hook, and how can I do that?

    Read the article

  • NTRU Pseudo-code for computing Polynomial Inverses

    - by Neville
    Hello all. I was wondering if anyone could tell me how to implement line 45 of the following pseudo-code.

        Require: the polynomial to invert a(x), N, and q.
        1:  k = 0
        2:  b = 1
        3:  c = 0
        4:  f = a
        5:  g = 0 {Steps 5-7 set g(x) = x^N - 1.}
        6:  g[0] = -1
        7:  g[N] = 1
        8:  loop
        9:    while f[0] = 0 do
        10:     for i = 1 to N do
        11:       f[i - 1] = f[i]           {f(x) = f(x)/x}
        12:       c[N + 1 - i] = c[N - i]   {c(x) = c(x) * x}
        13:     end for
        14:     f[N] = 0
        15:     c[0] = 0
        16:     k = k + 1
        17:   end while
        18:   if deg(f) = 0 then
        19:     goto Step 32
        20:   end if
        21:   if deg(f) < deg(g) then
        22:     temp = f {Exchange f and g}
        23:     f = g
        24:     g = temp
        25:     temp = b {Exchange b and c}
        26:     b = c
        27:     c = temp
        28:   end if
        29:   f = f XOR g
        30:   b = b XOR c
        31: end loop
        32: j = 0
        33: k = k mod N
        34: for i = N - 1 downto 0 do
        35:   j = i - k
        36:   if j < 0 then
        37:     j = j + N
        38:   end if
        39:   Fq[j] = b[i]
        40: end for
        41: v = 2
        42: while v < q do
        43:   v = v * 2
        44:   StarMultiply(a; Fq; temp; N; v)
        45:   temp = 2 - temp mod v
        46:   StarMultiply(Fq; temp; Fq; N; v)
        47: end while
        48: for i = N - 1 downto 0 do
        49:   if Fq[i] < 0 then
        50:     Fq[i] = Fq[i] + q
        51:   end if
        52: end for
        53: {Inverse Poly Fq returns the inverse polynomial, Fq, through the argument list.}

    The function StarMultiply returns a polynomial (array) stored in the variable temp. Basically, temp is a polynomial (I'm representing it as an array) and v is an integer (say 4 or 8), so what exactly does temp = 2 - temp mod v equate to in normal language? How should I implement that line in my code? Can someone give me an example? The above algorithm is for computing inverse polynomials for NTRUEncrypt key generation. The pseudo-code can be found on page 28 of this document. Thanks in advance.
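
    In plain language, step 45 is the coefficient-wise reduction of the polynomial 2 - temp(x) modulo v: the constant term becomes (2 - temp[0]) mod v, and every other coefficient becomes (-temp[i]) mod v. A minimal sketch (Python, assuming temp is a plain list of integer coefficients):

        def two_minus_mod(temp, v):
            """Coefficient array of (2 - temp(x)) mod v."""
            out = [(-c) % v for c in temp]   # negate every coefficient mod v
            out[0] = (2 - temp[0]) % v       # ...then add 2 to the constant term
            return out

        # With v = 4: 2 - (3 + 2x + x^2) = -1 - 2x - x^2 = 3 + 2x + 3x^2 (mod 4)
        print(two_minus_mod([3, 2, 1], 4))   # [3, 2, 3]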

    Read the article

  • How to get Trac to run with Apache?

    - by ajsie
    I have some problems getting Trac to run with Apache; I have no idea what to do and the tutorial I followed doesn't work. I have an empty /etc/apache2/httpd.conf. Should it be empty? Then I followed the tutorial (http://trac.edgewall.org/wiki/TracModPython) and typed in:

        LoadModule python_module modules/mod_python.so

    so now it contains one row. I have Ubuntu and I installed mod_python with:

        apt-get install libapache2-mod-python libapache2-mod-python-doc

    However, when I run a2enmod mod_python it says:

        ERROR: Module mod_python does not exist!

    But I have checked that it exists in /usr/lib/apache2/modules/mod_python.so. So what's the problem?

    Read the article

  • Tomcat gzip while chunked issue

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to its HTTP response headers, it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Content-Encoding: gzip
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    The problem is that when I ask the server for a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk was received, but after ungzipping I see that the response is partial. I have never seen this behavior with gzip turned off in the request headers. So my question is: is this a common Tomcat issue? Maybe one of its mods which is doing the compression? Or maybe some kind of proxy issue? I can't tell the version of Tomcat or which gzip mod they use, but feel free to ask; I'll try asking my service provider. Thanks.
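
    One way to narrow this down is to pull the raw bytes and decompress them yourself: a gzip stream carries its own end-of-stream marker, so a truncated body fails loudly even when the chunked framing looked complete. A sketch (Python; the URL is a placeholder, and it assumes the server really does return gzip):

        import gzip
        import urllib.request

        req = urllib.request.Request(
            "http://example.com/datasource",      # placeholder URL
            headers={"Accept-Encoding": "gzip"},
        )
        raw = urllib.request.urlopen(req).read()  # urllib reassembles the chunks

        try:
            body = gzip.decompress(raw)
            print("intact:", len(raw), "compressed ->", len(body), "bytes")
        except EOFError:
            # stream ended before the gzip end-of-stream marker:
            # the body was cut off upstream (server or proxy)
            print("truncated gzip body:", len(raw), "bytes received")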

    Read the article

  • Division, Remainders and only Real Numbers Allowed

    - by Senica Gonzalez
    Trying to figure out this pseudo-code. The following is assumed: I can only use unsigned and signed integers (or longs); division returns a whole number, with no remainder; MOD returns the remainder as a whole number; fractions and decimals are not handled.

        INT I = 41828;
        INT C = 15;
        INT D = 0;
        D = (I / 65535) * C;

    How would you handle a fraction (or decimal value) in this situation? Is there a way to use a negative value to represent the remainder? In this example, I / 65535 should be 0.638; however, with the limitations, I get 0, with a MOD of 638. How can I then multiply by C to get the correct answer? Hope that makes sense.
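
    The standard integer-only fix is to multiply before dividing, so the precision never has to live in a fraction: (I / 65535) * C gives the same answer as (I * C) / 65535, but only the latter survives integer truncation. A quick sketch (Python, with the question's numbers):

        I = 41828
        C = 15

        bad  = (I // 65535) * C    # 0 -- dividing first throws everything away
        good = (I * C) // 65535    # 9 -- the true value is ~9.573
        rem  = (I * C) % 65535     # 37605, the leftover if you still need it

        print(bad, good, rem)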

    Read the article

  • Using the mpz_powm functions from the GMP/MPIR libraries with negative exponents

    - by Mihai Todor
    Please consider the following code:

        mpz_t x, n, out;
        mpz_init_set_ui(x, 2UL);
        mpz_init_set_ui(n, 7UL);
        mpz_init(out);

        mpz_invert(out, x, n);
        gmp_printf("%Zd\n", out);        // prints 4. 2 * 4 (mod 7) = 1. OK

        mpz_powm_ui(out, x, -1UL, n);    // prints 1. 2 * 1 (mod 7) = 2. How come?
        gmp_printf("%Zd\n", out);

        mpz_clear(x);
        mpz_clear(n);
        mpz_clear(out);

    I am unable to understand how the mpz_powm functions handle negative exponents, although, according to the documentation, they are supposed to support them. I would expect that raising a number to -1 modulo n is equivalent to inverting it modulo n. What am I missing here?
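
    A detail worth checking when reproducing this: mpz_powm_ui takes an unsigned long, so -1UL is not negative at all; it wraps to ULONG_MAX. A quick cross-check of both readings (Python, whose built-in pow accepts genuinely negative exponents since 3.8; the 64-bit unsigned long is an assumption about the platform):

        # a true exponent of -1 is the modular inverse:
        assert pow(2, -1, 7) == 4            # 2 * 4 = 8 = 1 (mod 7)

        # but -1UL passed as unsigned wraps to 2**64 - 1 on an LP64 system,
        # and 2**(2**64 - 1) mod 7 happens to be 1 -- the puzzling output:
        assert pow(2, 2**64 - 1, 7) == 1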

    Read the article

  • What are block expressions actually good for?

    - by Helper Method
    I just solved the first problem from Project Euler in JavaFX for the fun of it and wondered what block expressions are actually good for. Why are they superior to functions? Is it because of the narrowed scope? Less to write? Performance? Here's the Euler example. I used a block here but I don't know if it actually makes sense:

        // sums up all numbers from low to high exclusive which are divisible by a or b
        function sumDivisibleBy(a: Integer, b: Integer, high: Integer) {
            def low = if (a <= b) a else b;
            def sum = {
                var result = 0;
                for (i in [low .. <high] where i mod 3 == 0 or i mod 5 == 0) {
                    result += i
                }
                result
            }
        }

    Does a block make sense here?

    Read the article

  • How can I convert a decimal to a fraction?

    - by CornyD
    How do I convert an indefinite decimal (i.e. .333333333...) to a string fraction representation (i.e. "1/3")? I am using VBA and the following is the code I used (I get an overflow error at the line b = a Mod b):

        Function GetFraction(ByVal Num As Double) As String
            If Num = 0# Then
                GetFraction = "None"
            Else
                Dim WholeNumber As Integer
                Dim DecimalNumber As Double
                Dim Numerator As Double
                Dim Denomenator As Double
                Dim a, b, t As Double
                WholeNumber = Fix(Num)
                DecimalNumber = Num - Fix(Num)
                Numerator = DecimalNumber * 10 ^ (Len(CStr(DecimalNumber)) - 2)
                Denomenator = 10 ^ (Len(CStr(DecimalNumber)) - 2)
                If Numerator = 0 Then
                    GetFraction = WholeNumber
                Else
                    a = Numerator
                    b = Denomenator
                    t = 0
                    While b <> 0
                        t = b
                        b = a Mod b
                        a = t
                    Wend
                    If WholeNumber = 0 Then
                        GetFraction = CStr(Numerator / a) & "/" & CStr(Denomenator / a)
                    Else
                        GetFraction = CStr(WholeNumber) & " " & CStr(Numerator / a) & "/" & CStr(Denomenator / a)
                    End If
                End If
            End If
        End Function
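
    The overflow itself likely comes from Mod: VBA's Mod operator coerces its operands to integer types, and a Double holding a large scaled numerator will not fit in a Long (a plausible reading of runtime error 6, not verified against this exact code). For comparison, the fraction-recovery step in a short Python sketch; its standard library walks the continued-fraction expansion, which is the robust way to turn an indefinite decimal into a "nice" ratio:

        from fractions import Fraction

        # limit_denominator() returns the closest fraction with a small
        # denominator, which is what an indefinite decimal like 0.333... needs
        print(Fraction(0.333333333).limit_denominator(10000))   # 1/3
        print(Fraction(0.125).limit_denominator(10000))         # 1/8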

    Read the article

  • Is it possible that a Greasemonkey script can work on one computer but not on another?

    - by plastic cloud
    I'm writing a Greasemonkey script for somebody else. He is a moderator and I am not, and the script will help him do some moderating things. Now the script works for me, as far as it can work for me (as I am not a mod), but even the things that work for me are not working for him. I checked his versions of the Greasemonkey plugin and Firefox and he is up to date. The only thing that's really different is that I'm on a Mac and he is on a PC, but I wouldn't think that would be any problem. This is one of the functions that is not working for him. He does get the first and third GM_log messages, but not the second one ("got some (1) ..."):

        kmmh.trackNames = function(){
            GM_log("starting to get names from the first "+kmmh.topAmount+" page(s) from leaderboard.");
            kmmh.leaderboardlist = [];
            for (var p=1; p<=(kmmh.topAmount); p++){
                var page = "http://www.somegamesite.com/leaderboard?page="+ p;
                var boardHTML = "";
                dojo.xhrGet({
                    url: page,
                    sync: true,
                    load: function(response){
                        boardHTML = response;
                        GM_log("got some (1) => "+boardHTML.length);
                    },
                    handleAs: "text"
                });
                GM_log("got some (2) => "+boardHTML.length);
                //create dummy div and place leaderboard html in there
                var dummy = dojo.create('div', { innerHTML: boardHTML });
                //search through it
                var searchN = dojo.query('.notcurrent', dummy).forEach(function(node,index){
                    if(index >= 10){
                        kmmh.leaderboardlist.push(node.textContent); // add names to array
                    }
                });
            }
            GM_log("all names from "+ kmmh.topAmount +" page(s) of leaderboard ==> "+ kmmh.leaderboardlist);
        }

    Does anyone have any idea what could be causing this?

    EDIT: I know I had to write according to what he would see on his mod screen, so I asked him to copy-paste the source of pages and so on. Besides that, this part of the script does not depend on being a mod or not. I got everything else working for him; just this function still doesn't, on either of his PCs.

    Read the article

  • VBA long overflow

    - by HK_CH
    Hi, I am trying to do some maths in Excel VBA (prime factorization) and I am hitting the limit of the Long data type (runtime error 6: Overflow). Is there any way to get around this and still stay within VBA? (I am aware that the obvious answer is to use another, more appropriate programming language.) Thanks for the help in advance!

    EDIT: Thank you, it works insofar as I am able to get the big numbers into the variables now. However, when I try to apply the Mod function (bignumber Mod 2, for example) it still fails with runtime error 6: Overflow.

    Read the article

  • How do I dynamically import a module in App Engine?

    - by Scott Ferguson
    I'm trying to dynamically load a class from a specific module (called 'commands') and the code runs totally fine on my local setup, running from a local Django server. It bombs out, though, when I deploy to Google App Engine. I've tried adding the commands module's parent module to the import as well, to no avail (on either setup, in that case). Here's the code:

        mod = __import__('commands.%s' % command, globals(), locals(), [command])
        return getattr(mod, command)

    App Engine just throws an ImportError whenever it hits this. And to clarify, it doesn't bomb out on the commands module. If I have a command like 'commands.cat' it can't find 'cat'.
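
    For reference, the same lookup written with importlib, which resolves dotted submodules directly (a sketch; importlib.import_module needs Python 2.7+, so whether it is available depends on the App Engine runtime in use):

        import importlib

        def load_command(command):
            # import commands.<command> and return the attribute named after
            # it, mirroring the __import__/fromlist version above
            mod = importlib.import_module('commands.%s' % command)
            return getattr(mod, command)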

    Read the article

  • getResponseHeader('last-modified') does not change value

    - by telexper
        var page = 'data/appointments/<? echo $_SESSION['name']; ?><? echo $_SESSION['last']; ?>App.html';
        var lM;

        function checkModified(){
            $.get(page, function(a,a,x){
                var mod = x.getResponseHeader('last-modified');
                alert (lM);
                alert ("page" +mod);
            });
        }

    When I alert the last-modified header from my page, it outputs the same value even when I have deleted all my cookies and cache, deleted the file from the server, and replaced it. It still outputs one value: Tue, Oct 23, 2012 3:37:41 GMT.

    Read the article

  • Why can Haskell deduce the [] type in this function?

    - by Sili
    rho x = map (((flip mod) x).(\a -> a^2-1)) (rho x)

    This function will generate an infinite list. I tested it in GHCi; the function type is:

        *Main> :t rho
        rho :: Integral b => b -> [b]

    If I define a function like this:

        fun x = ((flip mod) x).(\a -> a^2-1)

    the type is:

        *Main> :t fun
        fun :: Integral c => c -> c -> c

    My question is: how can Haskell deduce the function type to be b -> [b]? We don't have any [] value in this function. Thanks!

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I designed and configured a 4-node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using DRBD in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. In the storage servers I created a GFS2 FS to hold the data, which is wired to DRBD. I chose GFS2 mainly because of the announced performance and also because the volume size has to be pretty high. Since we entered production I have been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I found out that NFS stops answering for a while and outputs the following log lines:

        Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

    In this case, the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, it would become stable.
    I've increased it several times and apparently, after a while, the log lines started appearing less often. The value is now 32. After investigating the issue further, I came across a different hang, despite the NFS messages still appearing in the logs. Sometimes the GFS2 FS simply hangs, which causes both NFS and the storage webserver to stop serving files. Both stay hung for a while and then resume normal operations. These hangs leave no trace on the client side (also no NFS "not responding" messages) and, on the storage side, the log system appears to be empty, even though rsyslogd is running. The nodes connect through a 10Gbps non-dedicated connection, but I don't think this is an issue because the GFS2 hang is confirmed even when connecting directly to the active storage server. I've been trying to solve this for a while now and I tried different NFS configuration options before I found out the GFS2 FS is also hanging. The NFS mount is exported as such:

        /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

    And the NFS client mounts with:

        mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

    After some tests, these were the configurations that yielded the most performance for the cluster. I am desperate to find a solution for this, as the cluster is already in production mode and I need to fix it so that these hangs won't happen in the future, and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether this will solve the problem, as the volume size is extremely large at this point. The servers are hosted by a third-party enterprise and I don't have physical access to them. Best regards.

    EDIT 1: The servers are physical servers and their specs are:

        Webservers:
          Intel Bi Xeon E5606 2x4 2.13GHz
          24GB DDR3
          Intel SSD 320 2 x 120GB Raid 1

        Storage:
          Intel i5 3550 3.3GHz
          16GB DDR3
          12 x 2TB SATA

    Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, where the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between either the servers or the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps in figuring out the problem.
    EDIT 2: Relevant software versions:

        CentOS 2.6.32-279.9.1.el6.x86_64
        nfs-utils-1.2.3-26.el6.x86_64
        nfs-utils-lib-1.1.5-4.el6.x86_64
        gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
        kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
        drbd84-utils-8.4.2-1.el6.elrepo.x86_64

    DRBD configuration on the storage servers:

        #/etc/drbd.d/storage.res
        resource storage {
            protocol C;

            on <server1 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server1 ip>:7788;
                meta-disk internal;
            }

            on <server2 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server2 ip>:7788;
                meta-disk internal;
            }
        }

    NFS configuration on the storage servers:

        #/etc/sysconfig/nfs
        RPCNFSDCOUNT=32
        STATD_PORT=10002
        STATD_OUTGOING_PORT=10003
        MOUNTD_PORT=10004
        RQUOTAD_PORT=10005
        LOCKD_UDPPORT=30001
        LOCKD_TCPPORT=30001

    (Can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?)

    GFS2 configuration:

        # gfs2_tool gettune <mountpoint>
        incore_log_blocks = 1024
        log_flush_secs = 60
        quota_warn_period = 10
        quota_quantum = 60
        max_readahead = 262144
        complain_secs = 10
        statfs_slow = 0
        quota_simul_sync = 64
        statfs_quantum = 30
        quota_scale = 1.0000 (1, 1)
        new_files_jdata = 0

    Storage network environment:

        eth0    Link encap:Ethernet  HWaddr <mac address>
                inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
                inet6 addr: <ip address> Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
                TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

        eth0:0  Link encap:Ethernet  HWaddr <mac address>
                inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    The IP addresses are statically assigned with the given network configurations:

        DEVICE="eth0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        ONBOOT="yes"
        TYPE="Ethernet"
        IPADDR=<ip address>
        NETMASK=<net mask>

    and

        DEVICE="eth0:0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        IPADDR=<ip failover>
        NETMASK=<net mask>
        ONBOOT="yes"
        BROADCAST=<bcast address>

    Hosts file to allow for a graceful NFS failover, in conjunction with the NFS option fsid=25 set on both storage servers:

        #/etc/hosts
        <storage ip failover address> active.storage.vlan
        <webserver ip failover address> active.service.vlan

    As you can see, packet errors are down to 0. I've also run ping for a long time without any packet loss. The MTU size is the normal 1500. As there is no VLAN by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key reason for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details, please tell me.

    EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while.

        INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Can't get MySQL to install

    - by James Marthenal
    I'd like to think I know what I'm doing in a Unix shell, but maybe not. I made a mistake in a configuration file for MySQL, so I decided to just uninstall it and then reinstall it, so I did:

        sudo apt-get --purge remove mysql-server mysql-server-5.0 mysql-client

    The files were deleted, so I then tried to install it, but it didn't ask me for a root password or anything else, so I uninstalled it using the above command again and then did:

        sudo rm -rf /etc/mysql
        sudo rm /etc/init.d/mysql
        sudo rm -rf /var/lib/mysql*

    I then restarted the computer and installed it again:

        sudo apt-get install mysql-server mysql-client

    It asked for a root password, and everything looked like it would work, until I saw this:

        $ sudo apt-get install mysql-server mysql-client
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          mysql-server-5.0
        Suggested packages:
          tinyca
        The following NEW packages will be installed:
          mysql-client mysql-server mysql-server-5.0
        0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded.
        Need to get 0B/27.4MB of archives.
        After this operation, 86.7MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        WARNING: The following packages cannot be authenticated!
          mysql-server-5.0 mysql-client mysql-server
        Authentication warning overridden.
        Preconfiguring packages ...
        Can't exec "/tmp/mysql-server-5.0.config.28101": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
        open2: exec of /tmp/mysql-server-5.0.config.28101 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
        mysql-server-5.0 failed to preconfigure, with exit status 255
        Selecting previously deselected package mysql-server-5.0.
        (Reading database ... 160284 files and directories currently installed.)
        Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
        Selecting previously deselected package mysql-client.
        Unpacking mysql-client (from .../mysql-client_5.0.51a-24+lenny5_all.deb) ...
        Selecting previously deselected package mysql-server.
        Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
        Processing triggers for man-db ...
        Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
        Stopping MySQL database server: mysqld.
        /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory
        dpkg: error processing mysql-server-5.0 (--configure):
          subprocess post-installation script returned error exit status 1
        Setting up mysql-client (5.0.51a-24+lenny5) ...
        dpkg: dependency problems prevent configuration of mysql-server:
          mysql-server depends on mysql-server-5.0; however:
          Package mysql-server-5.0 is not configured yet.
        dpkg: error processing mysql-server (--configure):
          dependency problems - leaving unconfigured
        Errors were encountered while processing:
          mysql-server-5.0
          mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Now I can't seem to figure out what to do. I just want to get a clean MySQL installation at this point. I'm running the latest stable release of Debian. All help is appreciated. Thanks!
    Edit: I looked at this similar question, which suggests that I uninstall mysql-common, but when I try to do so I see:

        The following packages will be REMOVED:
          apache2 apache2-mpm-prefork apache2-utils apache2.2-common git-svn
          libapache2-mod-php5 libapache2-mod-python libapache2-svn libaprutil1
          libdbd-mysql-perl libdbd-mysql-rubygem libmysql-ruby libmysql-ruby1.8
          libmysql-rubygem libmysqlclient15-dev libmysqlclient15off librdf-perl
          librdf0 libserf-0-0 libsvn-perl libsvn1 mysql-client-5.0 mysql-common
          mytop ndn-apache22-php5 ndn-apache22-svn ndn-interpreters ndn-lighttpd
          ndn-netsaint-plugins ndn-perl-modules ndn-php5-cgi ndn-php5-xcache
          ndn-php53 ndn-php53-suhosin ndn-rubygems php5 php5-mcrypt php5-mysql
          proftpd proftpd-mod-mysql python-django python-mysqldb python-subversion
          python-svn subversion subversion-tools trac zendoptimizer
        0 upgraded, 0 newly installed, 48 to remove and 1 not upgraded.

    Eeek! Any suggestions?

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer's blog. You can also find the WebLogic documentation here.

    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call results in invoking the back-end service to get the result. This result is then stored in the cache for future requests to access. I'm thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result Caching in a dedicated JVM

    This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large, and you may want to protect your OSB Java heap. In this example, the client will call an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and when called again, the respective results are retrieved from the cache rather than the external system.

    Step 1 – Set up your Coherence Server

    Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I'm using the default Cache Config and Override files in the domain:

        -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m
        -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml
        -Dtangosol.coherence.cluster=OSB-cluster
        -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dcom.sun.management.jmxremote

    Just in case you need it, here is my Coherence Server classpath:

        /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:
        /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:
        /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar

    By default, OSB will try to create a local result cache instance.
    You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:

        -Dtangosol.coherence.distributed.localstorage=false
        -DOSB.coherence.cluster=OSB-cluster

    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.

    Step 2 – Configure your Business Service

    Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I'm keying it on the unique Employee Id.

    The Results

    As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the results cache.

    Read the article

  • eSTEP Newsletter November 2012

    - by uwes
    Dear Partners,

    We would like to inform you that the November '12 issue of our newsletter is now available. The issue contains information on the following topics:

    News from Corp: Oracle Celebrates 25 Years of SPARC Innovation; IDC White Paper Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement: Oracle Solaris 11.1 Availability (data sheet, new features, FAQs, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement: Oracle Solaris Cluster 4.1 Availability (new features, FAQs, cluster corp page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available.

    Technical Section: Oracle white papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground, with an additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Ops Center Manager 12c; Maximizing your SPARC T4 Oracle Solaris Application Performance, with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Post New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle Sun ZFS Storage Appliance Reference Architecture for VMware vSphere4; Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600, with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving Down the High Cost of Storage, Thin Provisioning with Pillar Axiom 600, Pillar Axiom 600 - System Overview and Architecture; Migrate to Oracle's SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems.

    Learning & Events: Recently delivered Techcasts: Learning Paths; Oracle Database 11g: Database Administration (New) - Learning Path; Webcast: Drill Down on Disaster Recovery; What Are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine.

    References: ARTstor Selects Oracle's Sun ZFS Storage 7420 Appliances To Support Rapidly Growing Digital Image Library; Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year; Oracle's CRM Cloud Service Powers Innovation: Applications on Demand; Technology on Demand.

    How to: How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to Prepare a Sun ZFS Storage Appliance to Serve as a Storage Device with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System in Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System; How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11; Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud Handbook.

    You will find the newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access.
    The link to the portal is shown below.

        URL: http://launch.oracle.com/
        PIN: eSTEP_2011

    Previously published newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2012

    - by Bob Rhubart
    Every day ArchBeat searches the web for content created by and for community members, and then shares that content via social media. Here's the list of the Top 10 most popular items posted on the OTN ArchBeat Facebook Page for November 2012.

    1. One-Stop Shop for Oracle Webcasts
    Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content.

    2. OAM/OVD JVM Tuning
    Vinay from the Oracle Fusion Middleware Architecture Group (otherwise known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    3. White Paper: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads
    This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology.

    4. Architected Systems: "If you don't develop an architecture, you will get one anyway..."
    "Can you build a system without taking care of architecture?," asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge."

    5. Backup and Recovery of an Exalogic vServer via rsync
    "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack all of these components have to be thought about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer."

    6. This Week on the OTN Architect Community Home Page
    Make time to check out this week's features on the OTN Solution Architect Homepage, including: SOA Practitioner Guide: Identifying and Discovering Services; a technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; and the podcast Are You Future Proof?

    7. Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley
    "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic – when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    8. OIM 11g: Multi-thread approach for writing custom scheduled job | Saravanan V S
    Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs."

    9. How to Create Virtual Directory in Weblogic Server | Zeeshan Baig
    Oracle ACE Zeeshan Baig shows you how in six easy steps.

    10. SOA Galore: New Books for Technical Eyes Only
    Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.
    Thought for the Day

    "Humans are the best value in computers -- where else can you get a non-linear computer weighing only about 160 lbs, having a billion binary decision elements, that can be mass-produced by unskilled labour?" - Anonymous

    Source: SoftwareQuotes.com

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time, we need to be selective about whatever we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL with SQL features and transactions.

    As a part of learning NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database.

    The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now, with their software release on Oct 31, everyone can use it. It is now available as forever-free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB.

    ClustrixDB allows users to:

    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the database manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible; use MySQL drivers, tools and replication

    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution.

    ClustrixDB sounds like a very promising solution, so I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:

    - Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category; uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – Top 2 fastest-growing e-commerce company in the US; used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology, so that in the future, when the topic is NewSQL, I know what I am talking about.
    Read more about why ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that in the future, when we discuss its technical aspects, we are all on the same page. The software can be downloaded here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: Clustrix

    Read the article
