Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • Get your content off Blogger.com

    - by Daniel Moth
    Due to blogger.com deprecating FTP publishing I've decided to move my blog. When I think of the content of a blog, 4 items come to mind: blog posts, comments, binary files that the blog posts linked to (e.g. images, ZIP files) and the CSS+structure of the blog.

    1. Binaries
    The binary files you used in your blog posts are sitting on your own web space, so really blogger.com is not involved with that. Nothing for you to do at this stage; I'll come back to these in another post.

    2. CSS and structure
    In the best case this exists as a separate CSS file on your web space (so no action for now) or, in the worst case, like me, your CSS is embedded with the HTML. In the latter case, simply navigate from your dashboard to "Template" then "Edit HTML" and copy-paste the contents of the box. Save that locally in a txt file and we'll come back to that in another post.

    3. Blog posts and comments
    The blog posts and comments exist in all the HTML files on your own web space. Parsing HTML files to extract them can be painful, so it is easier to download the XML files from blogger's servers that contain all your blog posts and comments.

    3.1 Single XML file, but incomplete
    The obvious thing to do is go into your dashboard "Settings" and under the "Basic" tab look at the top next to "Blog Tools". There is a link there to "Export blog" which downloads an XML file with both comments and posts. The problem is that it only contains 200 comments - if you have more than that, you will lose the surplus. Also, this XML file has a lot of noise compared to the better solution described next. (Note that a tool I will refer to in a future post deals with either kind of XML file.)

    3.2 Multiple XML files
    First you need to find your blog ID. In case you don't know what that is, navigate to the "Template" as described in section 2 above. You will find references to the blog id in the HTML there, but you can also see it as part of the URL in your browser: blogger.com/template-edit.g?blogID=YOUR_NUMERIC_ID. Mine is 7 digits. You can now navigate to these URLs to download the XML for your posts and comments respectively:
    blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=1
    blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=1
    Note that you can only get 500 posts at a time and only 200 comments at a time. To get more than that you have to change the URL and download the next batch. To get you started, to get the XML for the next 500 posts and next 200 comments respectively you'd have to use these URLs:
    blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=501
    blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=201
    ...and so on and so forth. Keep all the XML files in the same folder on your local machine (with nothing else in there).

    4. Validating the XML, aka editing older blog posts
    The XML files you just downloaded really contain HTML fragments inside for all your blog posts. If you are like me, your blog posts did not conform to XHTML, so passing them to an XML parser (which is what we will want to do) will result in the XML parser choking. So the next step is to fix that. This can be no work at all for you, a huge time sink, or just a couple of hours of pain (which was my case). The process I followed was to attempt to load the XML files using XmlDocument.Load and wait for the exception to be thrown from my code. The exception would point to the exact offending line and column, which would help me fix the issue. Rather than fix it in the XML itself, I would go back and edit the offending blog post and fix it there - recommended! Then I'd repeat the cycle until the XML could be loaded in the XmlDocument. To give you an idea, some of the issues I encountered are: extra or missing quotes in img and href elements, direct usage of chevrons instead of encoding them as &lt;, missing closing tags, mismatched nested pairs of elements and capitalization of HTML elements. For a full list of things that may go wrong see this.

    5. Opportunity for other changes
    I also found a few posts that did not have a category assigned so I fixed those too. I took the further opportunity to create new categories and tag some of my blog posts with them. Note that I did not remove/change categories of existing posts, but only added new ones.

    In another post we'll see how to use the XML files you stored in the local folder… Comments about this post welcome at the original blog.
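
    Below is a minimal C# sketch of the validation loop described in section 4, assuming the downloaded XML files sit alone in a local folder; the folder path is an illustrative placeholder, not from the original post.

        // Validation loop sketch: load each downloaded XML file and report
        // where the parser chokes, so the offending blog post can be fixed.
        using System;
        using System.IO;
        using System.Xml;

        class BlogXmlValidator
        {
            static void Main()
            {
                // Hypothetical folder holding only the downloaded XML files.
                string folder = @"C:\BlogExport";
                foreach (string file in Directory.GetFiles(folder, "*.xml"))
                {
                    try
                    {
                        new XmlDocument().Load(file); // throws on malformed XHTML
                        Console.WriteLine("OK: " + file);
                    }
                    catch (XmlException ex)
                    {
                        // Line/column point at the offending markup; edit the
                        // original blog post, re-download, and run again.
                        Console.WriteLine("{0}: line {1}, column {2}: {3}",
                            file, ex.LineNumber, ex.LinePosition, ex.Message);
                    }
                }
            }
        }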

    Read the article

  • Invalid Opcode 0000

    - by Mr47
    At random times (usually when watching a movie in XBMC), the computer locks up. I can still sometimes SSH in and get the 'dmesg' output before that locks up too. A hard reboot is usually required to get things going again. I have cut out the date/time/server columns for easier reading, please do ask if these seem relevant omissions... System: Ubuntu 11.04 (2.6.38-8-server) x64 X11 installed with IceWm (and XBMC) Core 2 Duo E8400 @ 3.00GHz 8 GB RAM Asus P5Q premium motherboard Primary harddrive: OCZ Vertex 2 60 GB (SSD) Other harddrives: various 750GB, 1TB, 1.5TB & 2TB (WD & samsung) Any important information I am not supplying is purely a sign of my incompetence in these matters, so please do ask and excuse me for my inabilities... invalid opcode: 0000 [#1] SMP last sysfs file: /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_map CPU 0 Modules linked in: parport_pc ppdev vesafb snd_hda_codec_analog tuner_simple tuner_types wm8775 tda9887 tda8290 tea5767 tuner cx25840 ir_lirc_codec lirc_dev snd_hda_intel snd_hda_codec snd_hwdep snd_pcm ir_sony_decoder snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq rc_rc6_mce ivtv ir_jvc_decoder cx2341x i2c_algo_bit v4l2_common mceusb videodev ir_rc6_decoder ir_rc5_decoder snd_timer ir_nec_decoder nvidia(P) btusb bluetooth rc_core v4l2_compat_ioctl32 tveeprom snd_seq_device pata_marvell psmouse shpchp serio_raw snd asus_atk0110 soundcore snd_page_alloc lp parport firewire_ohci firewire_core crc_itu_t r8169 sky2 ahci libahci Pid: 4597, comm: xbmc.bin Tainted: P 2.6.38-8-server #42-Ubuntu System manufacturer P5Q Premium/P5Q Premium RIP: 0010:[<ffffffff8119bc4a>] [<ffffffff8119bc4a>] do_mpage_readpage+0x9a/0x510 RSP: 0018:ffff88021f5a59d8 EFLAGS: 00210246 RAX: 0000000000000020 RBX: ffff88021f5a5ac8 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 0000000015e36fe0 RDI: 0000000000000000 RBP: ffff88021f5a5a98 R08: ffff88021f5a5ac8 R09: ffff88021f5a5b38 R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 R13: 000000000000b148 R14: 0000000000000001 R15: ffff8802067034b8 FS: 00007f3f34eb1700(0000) GS:ffff8800cfc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007feb8d515000 CR3: 000000021d744000 CR4: 00000000000406f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Process xbmc.bin (pid: 4597, threadinfo ffff88021f5a4000, task ffff88021f63db80) Stack: ffff88021f5a5a28 ffffffff8116019d ffff88021f5a5b40 0000002000000003 ffff88021f5a5b38 ffff880206703370 ffffea0006d800f8 0000000000000000 ffffea0006d800f8 0000000c811270b5 ffff88021f5a5a68 ffffffff8110bdba Call Trace: [<ffffffff8116019d>] ? mem_cgroup_cache_charge+0xed/0x130 [<ffffffff8110bdba>] ? add_to_page_cache_locked+0xea/0x160 [<ffffffff8119c232>] mpage_readpages+0x102/0x150 [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20 [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20 [<ffffffff81149475>] ? alloc_pages_current+0xa5/0x110 [<ffffffff8120157d>] ext4_readpages+0x1d/0x20 [<ffffffff81116a9b>] __do_page_cache_readahead+0x14b/0x220 [<ffffffff81116ed1>] ra_submit+0x21/0x30 [<ffffffff81116ff5>] ondemand_readahead+0x115/0x230 [<ffffffff811171a0>] page_cache_async_readahead+0x90/0xc0 [<ffffffff8110b184>] ? file_read_actor+0xd4/0x170 [<ffffffff812de72e>] ? radix_tree_lookup_slot+0xe/0x10 [<ffffffff8110c521>] do_generic_file_read.clone.23+0x271/0x450 [<ffffffff8110d1ba>] generic_file_aio_read+0x1ca/0x240 [<ffffffff8100a82e>] ? 
    __switch_to+0x20e/0x2f0 [<ffffffff81164c82>] do_sync_read+0xd2/0x110 [<ffffffff8108b61c>] ? hrtimer_try_to_cancel+0x4c/0xe0 [<ffffffff81279083>] ? security_file_permission+0x93/0xb0 [<ffffffff81164fa1>] ? rw_verify_area+0x61/0xf0 [<ffffffff81165463>] vfs_read+0xc3/0x180 [<ffffffff81165571>] sys_read+0x51/0x90 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b Code: ff ff 48 c7 85 78 ff ff ff 00 00 00 00 49 d3 ee b9 0c 00 00 00 2b 4d 8c 48 8b b2 c8 00 00 00 ba 01 00 00 00 41 0f af c6 49 d3 e5 <0f> 36 4d 8c 4c 01 e8 d3 e2 4c 8d 44 16 ff 48 8b 53 20 49 d3 f8 RIP [<ffffffff8119bc4a>] do_mpage_readpage+0x9a/0x510 RSP <ffff88021f5a59d8> ---[ end trace ac6cd2f4692205a3 ]---

    Please note that the error is ALWAYS occurring at do_mpage_readpage+0x9a/0x510 with the same numbers after it. I've tried to come up with the possible meaning of these, but couldn't get any further. I've also noticed that the top block from the call trace is always the following, with the exact same numbers:

    [<ffffffff8116019d>] ? mem_cgroup_cache_charge+0xed/0x130 [<ffffffff8110bdba>] ? add_to_page_cache_locked+0xea/0x160 [<ffffffff8119c232>] mpage_readpages+0x102/0x150 [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20 [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20

    Could this indicate a hard drive issue, a RAM issue or something else entirely?

    Read the article

  • Unable to install Konstruktor program

    - by George Barnick
    I've recently switched to Linux, having a good knowledge of how it works before switching, since I've had experience working with Ubuntu-based servers for several months. I've been trying to install a LEGO CAD program that I have found, based on an open source library. Information on that is here. The program I'm looking at is called Konstruktor, and for the most part it should be working. One dependency, however, does not seem to install. The missing dependency is "kdelibs5", which I know what it is and what it's for, but I can't get it to install. When trying to install, I get this error:

    george@GEORGE-PC:~$ sudo apt-get install kdelibs5
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package kdelibs5 is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source
    E: Package 'kdelibs5' has no installation candidate

    When trying to install the proper package of Konstruktor for my system, I get this error:

    george@GEORGE-PC:~$ sudo dpkg -i '/home/george/Downloads/konstruktor_0.9-beta1-2_amd64.deb'
    Selecting previously unselected package konstruktor.
    (Reading database ... 224684 files and directories currently installed.)
    Unpacking konstruktor (from .../konstruktor_0.9-beta1-2_amd64.deb) ...
    dpkg: dependency problems prevent configuration of konstruktor:
     konstruktor depends on kdelibs5 (>= 4:4.4.5); however:
      Package kdelibs5 is not installed.
    dpkg: error processing konstruktor (--install):
     dependency problems - leaving unconfigured
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf-2.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...
    Processing triggers for mime-support ...
    Processing triggers for hicolor-icon-theme ...
    Processing triggers for shared-mime-info ...
    Unknown media type in type 'all/all'
    Unknown media type in type 'all/allfiles'
    Unknown media type in type 'uri/mms'
    Unknown media type in type 'uri/mmst'
    Unknown media type in type 'uri/mmsu'
    Unknown media type in type 'uri/pnm'
    Unknown media type in type 'uri/rtspt'
    Unknown media type in type 'uri/rtspu'
    Errors were encountered while processing:
     konstruktor

    I have tried updating and adding apt repositories for KDE packages, but I continuously get the same error. I tried the package "kdelibs5-dev", which installed fine, but "kdelibs5" won't, and Konstruktor still won't install. I am on Ubuntu 13.10 with GNOME shell on amd64 architecture. Any help would be much appreciated. (I originally posted my issue here.) Thanks ahead of time.

    Read the article

  • Bullet physics debug drawing not working

    - by Krishnabhadra
    Background
    I am following on from this question, which isn't answered yet. Basically I have a cube and a UVSphere in my scene, with the UVSphere on top of the cube without touching it. Both were exported from Blender. When I run the app, the UVSphere circles around the cube 3 or 4 times and then jumps out of the scene. What I actually expected was for the sphere to fall onto the top of the cube.

    What this question is about
    From the comment to the linked question, I got to know about Bullet debug drawing, which helps in debugging by drawing outlines of the physics bodies that are normally invisible. I did some research on that and came up with the code given below. From whatever I have read, the code below should work, but it doesn't.

    My Code
    My Bullet initialization code:

    -(void) initializeScene {
        /*Setup physics world*/
        _physicsWorld = [[CC3PhysicsWorld alloc] init];
        [_physicsWorld setGravity:0 y:-9.8 z:0];

        /*Setting up debug draw*/
        MyDebugDraw *draw = new MyDebugDraw;
        draw->setDebugMode(draw->getDebugMode()
            | btIDebugDraw::DBG_DrawWireframe);
        _physicsWorld._discreteDynamicsWorld->setDebugDrawer(draw);

        /*Setup camera and lamp*/
        …………..

        //This simpleCube.pod contains the cube
        [self addContentFromPODFile: @"simpleCube.pod"];
        //This file contains the sphere
        [self addContentFromPODFile: @"SimpleSphere.pod"];
        [self createGLBuffers];

        CC3MeshNode* cubeNode = (CC3MeshNode*)[self getNodeNamed:@"Cube"];
        CC3MeshNode* sphereNode = (CC3MeshNode*)[self getNodeNamed:@"Sphere"];
        // both cubeNode and sphereNode are not nil from this point

        float *cVertexData = (float*)((CC3VertexArrayMesh*)cubeNode.mesh)
            .vertexLocations.vertices;
        int cVertexCount = ((CC3VertexArrayMesh*)cubeNode.mesh)
            .vertexLocations.vertexCount;
        btTriangleMesh* cTriangleMesh = new btTriangleMesh();

        int offset = 0;
        for (int i = 0; i < (cVertexCount / 3); i++) {
            unsigned int index1 = offset;
            unsigned int index2 = offset+6;
            unsigned int index3 = offset+12;
            cTriangleMesh->addTriangle(
                btVector3(cVertexData[index1], cVertexData[index1+1], cVertexData[index1+2]),
                btVector3(cVertexData[index2], cVertexData[index2+1], cVertexData[index2+2]),
                btVector3(cVertexData[index3], cVertexData[index3+1], cVertexData[index3+2]));
            offset += 18;
        }
        [self releaseRedundantData];

        /*Create a triangle mesh from the vertices*/
        btBvhTriangleMeshShape* cTriMeshShape =
            new btBvhTriangleMeshShape(cTriangleMesh, true);
        btCollisionShape *sphereShape = new btSphereShape(1);
        gTriMeshObject = [_physicsWorld createPhysicsObjectTrimesh:cubeNode
            shape:cTriMeshShape mass:0 restitution:1.0 position:cubeNode.location];
        sphereObject = [_physicsWorld createPhysicsObject:sphereNode
            shape:sphereShape mass:1 restitution:0.1 position:sphereNode.location];
        sphereObject.rigidBody->setDamping(0.1, 0.8);

        /*Enable debug drawing*/
        _physicsWorld._discreteDynamicsWorld->debugDrawWorld();
    }

    And my btIDebugDraw implementation (MyDebugDraw.h):

    //MyDebugDraw.h
    class MyDebugDraw: public btIDebugDraw {
        int m_debugMode;
    public:
        virtual void drawLine(const btVector3& from, const btVector3& to,
            const btVector3& color);
        virtual void drawContactPoint(const btVector3& PointOnB,
            const btVector3& normalOnB, btScalar distance,
            int lifeTime, const btVector3& color);
        virtual void reportErrorWarning(const char* warningString);
        virtual void draw3dText(const btVector3& location,
            const char* textString);
        virtual void setDebugMode(int debugMode);
        virtual int getDebugMode() const;
    };

    void MyDebugDraw::drawLine(const btVector3& from, const btVector3& to,
        const btVector3& color){
        LogInfo(@"Works!!");
        glPushMatrix();
        glColor4f(color.getX(), color.getY(), color.getZ(), 1.0);
        const GLfloat line[] = {
            from.getX()*1, from.getY()*1, from.getZ()*1, //point A
            to.getX()*1, to.getY()*1, to.getZ()*1 //point B
        };
        glVertexPointer( 3, GL_FLOAT, 0, &line );
        glPointSize( 5.0f );
        glDrawArrays( GL_POINTS, 0, 2 );
        glDrawArrays( GL_LINES, 0, 2 );
        glPopMatrix();
    }

    void MyDebugDraw::drawContactPoint(const btVector3 &PointOnB,
        const btVector3 &normalOnB, btScalar distance,
        int lifeTime, const btVector3 &color){
    }

    void MyDebugDraw::reportErrorWarning(const char *warningString){
    }

    void MyDebugDraw::draw3dText(const btVector3 &location,
        const char *textString){
    }

    void MyDebugDraw::setDebugMode(int debugMode){
    }

    int MyDebugDraw::getDebugMode() const{
        return DBG_DrawWireframe;
    }

    My Problem
    The drawLine method is getting called, and I can see the cube and sphere in place. The sphere again does some circling around the cube before jumping off, but no debug lines are getting drawn.

    Read the article

  • Why does my MySQL remote-connection fail (VLAN)?

    - by Johannes Nielsen
    Ubuntu community! Again I have a problem with my special friend MySQL :D

    I have got two servers - a database server and a web server - which are connected via VLAN. Now I want the web server to have remote access to the database server's MySQL. So I created the user user in mysql.user. user's Host is xxx.yyy.zzz.9, which is the internal IP address of the web server. xxx.yyy.zzz.0 is the network. I also created user with Host %. As long as I use MySQL on the database server logging in as user, everything works fine. But trying to log in as user from xxx.yyy.zzz.9 using mysql -h xxx.yyy.zzz.8 -u user -p (where xxx.yyy.zzz.8 is the database server's internal IP), I get

    ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.yyy.zzz.8' (110)

    So I tried to activate Bind-Address in the my.cnf file. Well, if I use xxx.yyy.zzz.8, nothing changes. But if I try xxx.yyy.zzz.9 and try to restart MySQL, I get

    mysql stop/waiting
    start: Job failed to start

    I checked the log files and found - nothing. The database server's MySQL doesn't even register that the web server tries to connect remotely. My idea is that maybe I didn't configure the VLAN properly, even though I asked someone who actually knows such stuff and he told me I did everything right. What I wrote into /etc/networking/interfaces is:

    #The VLAN
    auto eth1
    iface eth1 inet static
    address xxx.yyy.zzz..8
    netmask 255.255.255.0
    network xxx.yyy.zzz.0
    broadcast xxx.yyy.zzz.255
    mtu 1500

    ifconfig returns

    eth1 Link encap:Ethernet HWaddr xxxxxxxxxxxxxx
    inet addr:xxx.yyy.zzz.8 Bcast:xxx.yyy.zzz.255 Mask:255.255.255.0
    inet6 addr: xxxxxxxxxxxxxxx/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:241146 errors:0 dropped:0 overruns:0 frame:0
    TX packets:9765 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:17825995 (17.8 MB) TX bytes:566602 (566.6 KB)
    Memory:fb900000-fb920000

    for eth1, which is what I configured. (This is for the database server; the web server looks similar.) ethtool eth1 returns:

    Settings for eth1:
    Supported ports: [ TP ]
    Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
    Supported pause frame use: No
    Supports auto-negotiation: Yes
    Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Speed: 100Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    MDI-X: Unknown
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000003 (3) drv probe
    Link detected: yes

    (This is for the database server; the web server looks similar.) Actually I think everything is right, but it still doesn't work. Is there someone with an idea?

    EDIT: I commented out Bind-Address in my.cnf after it didn't work.

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory
    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, that deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly.

    A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure
    Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so don't have the same scalability problems as atomic memory operations.

    Performance tests
    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64 bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here. Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did.

    So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6 core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment!

    Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved
    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
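
    To make the circular-buffer idea concrete, here is a minimal C# sketch of a fixed-size sync-free SPSC queue. It is not the author's NAct implementation, just an illustration, and it assumes the System.Threading.Volatile class (.NET 4.5+) for the memory barriers mentioned above.

        // One thread may call TryEnqueue; one other thread TryDequeue.
        // No locks, no CAS: each index is written by exactly one thread,
        // and Volatile.Read/Write order the item writes with the indexes.
        using System.Threading;

        public sealed class SpscQueue<T>
        {
            private readonly T[] _buffer;
            private int _head; // written only by the consumer
            private int _tail; // written only by the producer

            public SpscQueue(int capacity)
            {
                _buffer = new T[capacity + 1]; // one slot stays empty
            }

            public bool TryEnqueue(T item)
            {
                int tail = _tail;
                int next = (tail + 1) % _buffer.Length;
                if (next == Volatile.Read(ref _head)) return false; // full
                _buffer[tail] = item;
                Volatile.Write(ref _tail, next); // publish after the write
                return true;
            }

            public bool TryDequeue(out T item)
            {
                int head = _head;
                if (head == Volatile.Read(ref _tail)) // empty
                {
                    item = default(T);
                    return false;
                }
                item = _buffer[head];
                Volatile.Write(ref _head, (head + 1) % _buffer.Length);
                return true;
            }
        }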

    Read the article

  • Oracle Systems and Solutions at OpenWorld Tokyo 2012

    - by ferhat
    Oracle OpenWorld Tokyo and JavaOne Tokyo will start next week, April 4th. We will cover Oracle systems and Oracle Optimized Solutions in several keynote talks and general sessions. The full schedule can be found here. Come by the DemoGrounds to learn more about mission critical integration and optimization of the complete Oracle stack. Our Oracle Optimized Solutions experts will be on hand to discuss several of Oracle's systems solutions and technologies 1-1.

    Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested, and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments.

    Oracle Optimized Solutions, Servers, Storage, and Oracle Solaris Sessions, Keynotes, and General Session Talks

    Wednesday, April 4
    - 9:00 - 11:15: Keynote: ENGINEERED FOR INNOVATION - Engineered Systems. Mark Hurd, President, Oracle; Takao Endo, President & CEO, Oracle Corporation Japan; John Fowler, EVP of Systems, Oracle; Ed Screven, Chief Corporate Architect, Oracle (English session, K1-01)
    - 11:50 - 12:35: Simplifying IT: Transforming the Data Center with Oracle's Engineered Systems. Robert Shimp, Group VP, Product Marketing, Oracle (English session, S1-01)
    - 15:20 - 16:05: Introducing Tiered Storage Solution for low cost Big Data Archiving (S1-33)
    - 16:30 - 17:15: Simplifying IT - IT System Consolidation that also Accelerates Business Agility (S1-42)

    Thursday, April 5
    - 9:30 - 11:15: Keynote: Extreme Innovation. Larry Ellison, Chief Executive Officer, Oracle (English session, K2-01)
    - 11:50 - 13:20: General Session: Server and Storage Systems Strategy. John Fowler, EVP of Systems, Oracle (English session, G2-01)
    - 16:30 - 17:15: Top 5 Reasons why ZFS Storage Appliance is "The cloud storage", by SAKURA Internet Inc. (L2-04)
    - 16:30 - 17:15: The UNIX based Exa* Performance IT Integration Platform - SPARC SuperCluster (S2-42)
    - 17:40 - 18:25: Full stack solutions of hardware and software with SPARC SuperCluster and Oracle E-Business Suite to minimize the business cost while maximizing the agility, performance, and availability (S2-53)

    Friday, April 6
    - 9:30 - 11:15: Keynote: Oracle Fusion Applications & Cloud. Robert Shimp, Group VP, Product Marketing; Anthony Lye, Senior VP (English session, K3-01)
    - 11:50 - 12:35: IT at Oracle: The Art of IT Transformation to Enable Business Growth (English session, S3-02)
    - 13:00 - 13:45: ZFS Storage Appliance: Architecture of high efficiency and high performance (S3-13)
    - 14:10 - 14:55: Why "Niko Niko doga" chose ZFS Storage Appliance to support their growing requirements and storage infrastructure, by DWANGO Co, Ltd. (S3-21)
    - 15:20 - 16:05: Osaka University: Lower TCO and higher flexibility for student study by Virtual Desktop, by Osaka University (S3-33)

    Oracle Developer Sessions with Oracle Systems and Oracle Solaris

    Friday, April 6
    - 13:00 - 13:45: Oracle Solaris 11 Developers (D3-03)
    - 13:00 - 14:30: Oracle Solaris Tuning Contest (Hands-On Lab, D3-04)
    - 14:00 - 14:35: How to build high performance and high security Oracle Database environment with Oracle SPARC/Solaris (English session, D3-13)
    - 15:00 - 15:45: IT Assets preservation and constructive migration with Oracle Solaris virtualization (D3-24)
    - 16:00 - 17:30: The best packaging system for cloud environment - Creating an IPS package (D3-34)

    Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions.

    Read the article

  • Idea to develop a caching server between IIS and SQL Server

    - by John
    I work on a few high traffic websites that all share the same database and that are all heavily database driven. Our SQL Server is maxed out; although we have already implemented many changes that have helped, the server is still working too hard. We employ some caching in our websites, but the type of queries we use rules out SQL dependency caching. We tried SQL replication to get a kind of load balancing, but that didn't prove very successful, because the replication process is quite demanding on the servers too, and it needed to be done frequently as it is important that data is up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database servers, but as a lot of the sites are customised based on the user, we can only do so much.

    Anyway, the reason for this question... Varnish gave me an idea for a possible application that might help in this situation. Just like Varnish sits between a web browser and the web server and caches responses from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first-time query then it will get recorded, the result requested from the SQL server and stored locally on the cache server. If it's a repeat request within a set time then the result gets retrieved from the local copy without the query being sent to the SQL server. The caching server could also take advantage of SQL dependency caching notifications.

    This seems like a good idea in theory. There's still the same amount of data moving back and forward from the web server, but the SQL server is relieved of the work of processing the repeat queries. I wonder how difficult it would be to build a service that sort of emulates requests and responses from SQL Server, whether SQL Server's own caching is doing enough of this already that this wouldn't be a benefit, or even whether someone has done this before and I haven't found it? I would welcome any feedback or any references to any relevant projects.
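
    For illustration only, here is a rough C# sketch of the idea: a proxy layer that serves repeat queries from a local cache for a set time. The class name and the 60-second window are hypothetical choices, not an existing product, and a real version would need parameterised queries and cache invalidation.

        // Hypothetical middle layer: cache query results keyed on the SQL
        // text, so repeat requests within the window never reach SQL Server.
        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Runtime.Caching;

        public class CachingQueryProxy
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;
            private readonly string _connectionString;

            public CachingQueryProxy(string connectionString)
            {
                _connectionString = connectionString;
            }

            public DataTable Query(string sql)
            {
                // Repeat request within the window? Serve the local copy.
                var cached = (DataTable)Cache.Get(sql);
                if (cached != null) return cached;

                // First-time query: fetch the result from the real server.
                var table = new DataTable();
                using (var connection = new SqlConnection(_connectionString))
                using (var adapter = new SqlDataAdapter(sql, connection))
                {
                    adapter.Fill(table); // only cache misses hit SQL Server
                }

                // Record the result for a set time (60 seconds here).
                Cache.Set(sql, table, DateTimeOffset.Now.AddSeconds(60));
                return table;
            }
        }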

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how hard you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 Beta 2 upgrade and was really starting to annoy me. Every time I try to delete it I get the message:

    "Controller cannot be deleted because there are build in progress" - Manage Build Controller dialog

    Figure: Deleting a ghost controller does not always work.

    I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the "tbl_BuildQueue" table in the Team Project Collection database, and sure enough there was the nasty little beggar.

    Figure: The ghost build was easily spotted.

    Adam Cogan asked me: "Why did you suspect this one?" Well, there are a number of things that led me to suspect it:
    - QueueId is very low: look at the other items, they are in the thousands, not single digits
    - ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to "zzUnicorn"
    - DefinitionId: this is a very low number, and I looked it up in "tbl_BuildDefinition" and it did not exist
    - QueueTime: as we did not upgrade to TFS 2010 until late 2009, a date of 2008 for a queued build is very suspect
    - Status: a status of 2 means that it is still queued

    This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010, it would have created the "zzUnicorn" controller to handle any build servers that already existed. I had previously deleted the agent, but leaving the controller just looks untidy.

    Now that the ghost build has been identified there are two options:
    - Delete the row: I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported.
    - Set the Status to cancelled (recommended): this is the best option, as TFS will then clean it up itself.

    So I set the Status of this build to 2 (cancelled) and sure enough it disappeared after a couple of minutes, and I was then able to delete the "zzUnicorn" controller.

    Figure: Almost completely clean.

    Now all I have to do is get rid of that untidy "zzBunyip" agent, but that will require rewriting one of our build scripts, which will have to wait for now.

    Technorati Tags: ALM, TFBS, TFS 2010
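
    For reference, the change described above can be made with a few lines of ADO.NET. This is a hypothetical sketch: the connection string and QueueId are placeholders, the status value simply follows the post (verify the status enum for your TFS version), and, as the post stresses, touching the Team Project Collection database directly is at your own risk.

        // Mark the ghost build as cancelled so TFS cleans it up itself.
        using System.Data.SqlClient;

        class CancelGhostBuild
        {
            static void Main()
            {
                // Placeholder connection string for the collection database.
                const string connectionString =
                    "Server=TFSSQL;Database=Tfs_DefaultCollection;Integrated Security=true";
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "UPDATE tbl_BuildQueue SET Status = 2 WHERE QueueId = @QueueId",
                    connection))
                {
                    // The ghost build's QueueId from tbl_BuildQueue.
                    command.Parameters.AddWithValue("@QueueId", 1);
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }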

    Read the article

  • Keep website and webservices warm with zero coding

    - by oazabir
    If you want to keep your websites or webservices warm and save users from seeing the long warm-up time after an application pool recycle, an IIS restart, a new code deployment or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the site and services and keep them warm. Here's how:

    First get tinyget from here. Download and install the IIS 6.0 Resource Kit on some PC. Then copy tinyget.exe from "c:\program files…\IIS 6.0 ResourceKit\Tools\tinyget" to the server where your IIS 6.0 or IIS 7 is running. Then create a batch file that will hit the pages and webservices. Something like this:

    SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
    "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
    "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200

    First I am hitting the homepage to keep the webpage warm. Then I am hitting the webservice URL with the ?WSDL parameter, which allows ASP.NET to compile the service if not already compiled, and to walk through all the operations and reflect on them, thus loading all related DLLs into memory and reducing the warm-up time when hit. Tinyget takes the server's name or IP in the -srv parameter and then the actual URI in the -uri parameter. I have specified which HTTP response code to expect in the -status parameter. It ensures the site is alive and is returning an HTTP 200 code.

    Besides just warming up a site, you can do some load testing on the site. Tinyget can run in multiple threads and run loops to hit some URL. You can literally blow up a site with commands like this:

    "%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200

    Tinyget is also pretty useful for running automated tests. You can record HTTP posts in a text file and then use it to make HTTP posts to some page. Then you can add a matching clause to check for a certain string in the output, to ensure the correct response is given. Thus, with some simple command line commands, you can warm up a site, do some transactions, validate that the site is giving the correct response, and run a load test to ensure the server is performing well. A very cheap way to get a lot done.
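
    If tinyget isn't available, the same warm-up loop is easy to script yourself; the following C# sketch is an illustration of mine rather than part of the original post, reusing the author's example URLs.

        // Minimal warm-up client: request each URL and report the HTTP
        // status, mirroring tinyget's -uri and -status:200 checks.
        using System;
        using System.Net;

        class WarmUp
        {
            static void Main()
            {
                string[] urls =
                {
                    "http://dropthings.omaralzabir.com/",
                    "http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL"
                };
                foreach (string url in urls)
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("{0} -> {1}", url, (int)response.StatusCode);
                    }
                }
            }
        }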

    Read the article

  • What is hogging my connection?

    - by SF.
    At times it seems like dozens, if not hundreds of root-owned HTTP connections spring up. This is not much of a problem on LAN or WLAN as each of them seems to transfer very little, but if I use GPRS link, my ping times go into minutes (seriously, 80000ms is not infrequent!) and all connections grind to a halt waiting till these end. This usually lasts some 15 minutes and ends about when I start troubleshooting it for real. I've managed to capture a fragment of Nethogs output NetHogs version 0.8.0 PID USER PROGRAM DEV SENT RECEIVED ? root 37.209.147.180:59854-141.101.114.59:80 0.013 0.000 KB/sec ? root 37.209.147.180:59853-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:52804-173.194.70.95:80 0.000 0.000 KB/sec 1954 bw /home/bw/.dropbox-dist/dropbox ppp0 0.000 0.000 KB/sec ? root 37.209.147.180:59851-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:59850-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:52801-173.194.70.95:80 0.000 0.000 KB/sec 13301 bw /usr/lib/firefox/firefox ppp0 0.000 0.000 KB/sec ? root unknown TCP 0.000 0.000 KB/sec Unfortunately, it doesn't display the owning process of these. Does anyone recognize these addresses or is able to suggest how to troubleshoot it further or disable it? Is it some automatic update or something like that? EDIT: per request; netstat -n, for obvious reason that normal netstat won't ever launch as all DNS requests are hogged just the same. netstat -n Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 1 93.154.166.62:51314 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44098 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59855 141.101.114.59:80 FIN_WAIT1 tcp 1 0 192.168.43.224:38237 213.189.45.39:443 CLOSE_WAIT tcp 1 0 93.154.146.186:35167 75.101.152.29:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32939 199.15.160.100:80 CLOSE_WAIT tcp 1 0 192.168.43.224:55619 63.245.217.207:443 CLOSE_WAIT tcp 1 0 93.154.146.186:60210 75.101.152.29:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32944 199.15.160.100:80 CLOSE_WAIT tcp 0 1 37.209.147.180:52804 173.194.70.95:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46606 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:52619 107.22.246.76:80 CLOSE_WAIT tcp 415 0 93.154.146.186:36156 82.112.106.104:80 CLOSE_WAIT tcp 1 0 93.154.146.186:50352 107.22.246.76:443 CLOSE_WAIT tcp 1 0 192.168.43.224:55000 213.189.45.44:443 CLOSE_WAIT tcp 0 1 37.209.147.180:59853 141.101.114.59:80 FIN_WAIT1 tcp 1 0 192.168.43.224:32937 199.15.160.100:80 CLOSE_WAIT tcp 1 0 192.168.43.224:56055 93.184.221.40:80 CLOSE_WAIT tcp 415 0 93.154.146.186:36155 82.112.106.104:80 CLOSE_WAIT tcp 0 1 37.209.147.180:44097 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:35166 75.101.152.29:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32943 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:46607 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:36422 23.21.151.181:443 CLOSE_WAIT tcp 1 0 192.168.43.224:36081 93.184.220.148:80 CLOSE_WAIT tcp 1 0 192.168.43.224:44462 213.189.45.29:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32938 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:36419 23.21.151.181:443 CLOSE_WAIT tcp 0 497 93.154.166.62:51313 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59851 141.101.114.59:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44095 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46611 23.21.151.181:80 CLOSE_WAIT tcp 1 0 192.168.43.224:38236 213.189.45.39:443 CLOSE_WAIT tcp 0 171 37.209.147.180:45341 173.194.113.146:443 ESTABLISHED tcp 0 1 37.209.147.180:52801 
173.194.70.95:80 FIN_WAIT1 tcp 1 0 192.168.43.224:36080 93.184.220.148:80 CLOSE_WAIT tcp 0 1 37.209.147.180:59856 141.101.114.59:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44096 198.252.206.16:80 FIN_WAIT1 tcp 0 1 93.154.166.62:57471 108.160.162.49:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59854 141.101.114.59:80 FIN_WAIT1 tcp 0 171 37.209.147.180:45340 173.194.113.146:443 ESTABLISHED tcp 0 168 37.209.147.180:45334 173.194.113.146:443 FIN_WAIT1 tcp 1 0 93.154.146.186:46609 23.21.151.181:80 CLOSE_WAIT tcp 0 1248 93.154.166.62:58270 64.251.23.59:443 FIN_WAIT1 tcp 0 1 37.209.147.180:59850 141.101.114.59:80 FIN_WAIT1 tcp 1 0 93.154.146.186:35181 75.101.152.29:80 CLOSE_WAIT tcp 232 0 93.154.172.168:46384 198.252.206.25:80 ESTABLISHED tcp 1 0 93.154.146.186:52618 107.22.246.76:80 CLOSE_WAIT tcp 1 0 93.154.172.168:36298 173.194.69.95:443 CLOSE_WAIT tcp 1 0 93.154.146.186:60209 75.101.152.29:443 CLOSE_WAIT tcp 0 168 37.209.147.180:45335 173.194.113.146:443 FIN_WAIT1 tcp 415 0 93.154.146.186:36157 82.112.106.104:80 CLOSE_WAIT tcp 1 0 192.168.43.224:36082 93.184.220.148:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32942 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:50350 107.22.246.76:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32941 199.15.160.100:80 CLOSE_WAIT tcp 0 534 37.209.147.180:44089 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46608 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:46612 23.21.151.181:80 CLOSE_WAIT udp 0 0 37.209.147.180:49057 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:51631 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:34827 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:35908 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:44106 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:42184 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:54485 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:42216 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:51961 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:48412 193.41.112.14:53 ESTABLISHED The interesting lines from ping got lost, but the summary over past few hours is: --- 8.8.8.8 ping statistics --- 107459 packets transmitted, 104376 received, +22 duplicates, 2% packet loss, time 195427362ms rtt min/avg/max/mdev = 24.822/528.132/90538.257/2519.263 ms, pipe 90 EDIT: Per request: Happened again, reboot didn't help but cleaned up all "hanging" processes. 
Currently netstat shows: bw@pony:/var/log$ netstat -n -t Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 93.154.188.68:42767 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:50270 173.194.69.189:443 ESTABLISHED tcp 0 0 93.154.188.68:45250 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:53488 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:53490 173.194.32.198:80 ESTABLISHED tcp 0 159 93.154.188.68:42741 74.125.239.143:443 LAST_ACK tcp 0 0 93.154.188.68:45808 198.252.206.25:80 ESTABLISHED tcp 0 0 93.154.188.68:52449 173.194.32.199:443 ESTABLISHED tcp 0 0 93.154.188.68:52600 173.194.32.199:443 TIME_WAIT tcp 0 0 93.154.188.68:50300 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:45253 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:46252 173.194.32.204:443 ESTABLISHED tcp 0 0 93.154.188.68:45246 190.93.244.58:80 ESTABLISHED tcp 0 0 93.154.188.68:47064 173.194.113.143:443 ESTABLISHED tcp 0 0 93.154.188.68:34484 173.194.69.95:443 ESTABLISHED tcp 0 0 93.154.188.68:45252 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:54290 173.194.32.202:443 ESTABLISHED tcp 0 0 93.154.188.68:47063 173.194.113.143:443 ESTABLISHED tcp 0 0 93.154.188.68:53469 173.194.32.198:80 TIME_WAIT tcp 0 0 93.154.188.68:45242 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:53468 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:50299 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:42764 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:45256 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:58047 108.160.162.105:80 ESTABLISHED tcp 0 0 93.154.188.68:45249 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:50297 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:53470 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:34100 68.232.35.121:443 ESTABLISHED tcp 0 0 93.154.188.68:42758 74.125.239.143:443 ESTABLISHED tcp 0 0 93.154.188.68:42765 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:39000 173.194.69.95:80 TIME_WAIT tcp 0 0 93.154.188.68:50296 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:53467 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:42766 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:45251 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:45248 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:45247 190.93.244.58:80 ESTABLISHED tcp 0 159 93.154.188.68:50254 173.194.69.189:443 LAST_ACK tcp 0 0 93.154.188.68:34483 173.194.69.95:443 ESTABLISHED Output of ps: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.8 0.0 3628 2092 ? Ss 16:52 0:03 /sbin/init root 2 0.0 0.0 0 0 ? S 16:52 0:00 [kthreadd] root 3 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/0] root 4 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/0:0] root 6 0.0 0.0 0 0 ? S 16:52 0:00 [migration/0] root 7 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/0] root 8 0.0 0.0 0 0 ? S 16:52 0:00 [migration/1] root 10 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/1] root 11 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/1] root 12 0.0 0.0 0 0 ? S 16:52 0:00 [migration/2] root 14 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/2] root 15 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/2] root 16 0.0 0.0 0 0 ? S 16:52 0:00 [migration/3] root 17 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/3:0] root 18 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/3] root 19 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/3] root 20 0.0 0.0 0 0 ? S< 16:52 0:00 [cpuset] root 21 0.0 0.0 0 0 ? S< 16:52 0:00 [khelper] root 22 0.0 0.0 0 0 ? S 16:52 0:00 [kdevtmpfs] root 23 0.0 0.0 0 0 ? S< 16:52 0:00 [netns] root 24 0.0 0.0 0 0 ? S 16:52 0:00 [sync_supers] root 25 0.0 0.0 0 0 ? 
S 16:52 0:00 [bdi-default] root 26 0.0 0.0 0 0 ? S< 16:52 0:00 [kintegrityd] root 27 0.0 0.0 0 0 ? S< 16:52 0:00 [kblockd] root 28 0.0 0.0 0 0 ? S< 16:52 0:00 [ata_sff] root 29 0.0 0.0 0 0 ? S 16:52 0:00 [khubd] root 30 0.0 0.0 0 0 ? S< 16:52 0:00 [md] root 42 0.0 0.0 0 0 ? S 16:52 0:00 [khungtaskd] root 43 0.0 0.0 0 0 ? S 16:52 0:00 [kswapd0] root 44 0.0 0.0 0 0 ? SN 16:52 0:00 [ksmd] root 45 0.0 0.0 0 0 ? SN 16:52 0:00 [khugepaged] root 46 0.0 0.0 0 0 ? S 16:52 0:00 [fsnotify_mark] root 47 0.0 0.0 0 0 ? S 16:52 0:00 [ecryptfs-kthrea] root 48 0.0 0.0 0 0 ? S< 16:52 0:00 [crypto] root 59 0.0 0.0 0 0 ? S< 16:52 0:00 [kthrotld] root 70 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/2:1] root 71 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_0] root 72 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_1] root 73 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_2] root 74 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_3] root 75 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/u:2] root 76 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/u:3] root 79 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/1:1] root 99 0.0 0.0 0 0 ? S< 16:52 0:00 [deferwq] root 100 0.0 0.0 0 0 ? S< 16:52 0:00 [charger_manager] root 101 0.0 0.0 0 0 ? S< 16:52 0:00 [devfreq_wq] root 102 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/2:2] root 106 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_4] root 107 0.0 0.0 0 0 ? S 16:52 0:00 [usb-storage] root 108 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_5] root 109 0.0 0.0 0 0 ? S 16:52 0:00 [usb-storage] root 271 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/1:2] root 316 0.0 0.0 0 0 ? S 16:52 0:00 [jbd2/sda1-8] root 317 0.0 0.0 0 0 ? S< 16:52 0:00 [ext4-dio-unwrit] root 440 0.1 0.0 2820 608 ? S 16:52 0:00 upstart-udev-bridge --daemon root 478 0.0 0.0 3460 1648 ? Ss 16:52 0:00 /sbin/udevd --daemon root 632 0.0 0.0 3348 1336 ? S 16:52 0:00 /sbin/udevd --daemon root 633 0.0 0.0 3348 1204 ? S 16:52 0:00 /sbin/udevd --daemon root 782 0.0 0.0 2816 596 ? S 16:52 0:00 upstart-socket-bridge --daemon root 822 0.0 0.0 6684 2400 ? Ss 16:52 0:00 /usr/sbin/sshd -D 102 834 0.2 0.0 4064 1864 ? Ss 16:52 0:01 dbus-daemon --system --fork root 857 0.0 0.1 7420 3380 ? Ss 16:52 0:00 /usr/sbin/modem-manager root 858 0.0 0.0 4784 1636 ? Ss 16:52 0:00 /usr/sbin/bluetoothd syslog 860 0.0 0.0 31068 1496 ? Sl 16:52 0:00 rsyslogd -c5 root 869 0.1 0.1 24280 5564 ? Ssl 16:52 0:00 NetworkManager avahi 883 0.0 0.0 3448 1488 ? S 16:52 0:00 avahi-daemon: running [pony.local] avahi 884 0.0 0.0 3448 436 ? S 16:52 0:00 avahi-daemon: chroot helper root 885 0.0 0.0 0 0 ? S< 16:52 0:00 [kpsmoused] root 892 0.0 0.1 25696 4140 ? Sl 16:52 0:00 /usr/lib/policykit-1/polkitd --no-debug root 923 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_6] root 959 0.0 0.0 0 0 ? S< 16:52 0:00 [krfcommd] root 970 0.0 0.1 7536 3120 ? Ss 16:52 0:00 /usr/sbin/cupsd -F colord 976 0.1 0.3 55080 10396 ? Sl 16:52 0:00 /usr/lib/i386-linux-gnu/colord/colord root 979 0.0 0.0 4632 872 tty4 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty4 root 987 0.0 0.0 4632 884 tty5 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty5 root 994 0.0 0.0 4632 884 tty2 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty2 root 995 0.0 0.0 4632 868 tty3 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty3 root 998 0.0 0.0 4632 876 tty6 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty6 root 1022 0.0 0.0 2176 680 ? Ss 16:52 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket root 1029 0.0 0.0 3632 664 ? Ss 16:52 0:00 /usr/sbin/irqbalance daemon 1030 0.0 0.0 2476 120 ? Ss 16:52 0:00 atd root 1031 0.0 0.0 2620 880 ? Ss 16:52 0:00 cron root 1061 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/3:2] root 1064 0.0 1.0 34116 31072 ? 
SLsl 16:52 0:00 lightdm root 1076 13.4 1.2 118688 37920 tty7 Ssl+ 16:52 0:55 /usr/bin/X :0 -core -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswit root 1085 0.0 0.0 0 0 ? S 16:52 0:00 [rts_pstor] root 1087 0.0 0.0 0 0 ? S 16:52 0:00 [rtsx-polling] root 1095 0.0 0.0 0 0 ? S< 16:52 0:00 [cfg80211] root 1127 0.0 0.0 0 0 ? S 16:52 0:00 [flush-8:0] root 1130 0.0 0.0 6136 1824 ? Ss 16:52 0:00 /sbin/wpa_supplicant -B -P /run/sendsigs.omit.d/wpasupplicant.pid -u -s -O /va root 1137 0.0 0.1 24604 3164 ? Sl 16:52 0:00 /usr/lib/accountsservice/accounts-daemon root 1140 0.0 0.0 0 0 ? S< 16:52 0:00 [hd-audio0] root 1188 0.0 0.1 34308 3420 ? Sl 16:52 0:00 /usr/sbin/console-kit-daemon --no-daemon root 1425 0.0 0.0 4632 872 tty1 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty1 root 1443 0.1 0.1 29460 4664 ? Sl 16:52 0:00 /usr/lib/upower/upowerd root 1579 0.0 0.1 16540 3272 ? Sl 16:53 0:00 lightdm --session-child 12 19 bw 1623 0.0 0.0 2232 644 ? Ss 16:53 0:00 /bin/sh /usr/bin/startkde bw 1672 0.0 0.0 4092 204 ? Ss 16:53 0:00 /usr/bin/ssh-agent /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/ bw 1673 0.0 0.0 5492 384 ? Ss 16:53 0:00 /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/.gnupg/gpg-agent-in bw 1676 0.0 0.0 3848 792 ? S 16:53 0:00 /usr/bin/dbus-launch --exit-with-session /usr/bin/startkde bw 1677 0.5 0.0 5384 2180 ? Ss 16:53 0:02 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session root 1704 0.3 0.1 25348 3600 ? Sl 16:53 0:01 /usr/lib/udisks/udisks-daemon root 1705 0.0 0.0 6620 728 ? S 16:53 0:00 udisks-daemon: not polling any devices bw 1736 0.0 0.0 2008 64 ? S 16:53 0:00 /usr/lib/kde4/libexec/start_kdeinit +kcminit_startup bw 1737 0.0 0.5 115200 15588 ? Ss 16:53 0:00 kdeinit4: kdeinit4 Running... bw 1738 0.1 0.2 116756 8728 ? S 16:53 0:00 kdeinit4: klauncher [kdeinit] --fd=9 bw 1740 0.6 1.0 340524 31264 ? Sl 16:53 0:02 kdeinit4: kded4 [kdeinit] bw 1742 0.0 0.0 8944 2144 ? S 16:53 0:00 /usr/lib/i386-linux-gnu/gconf/gconfd-2 bw 1746 0.2 0.4 92028 14688 ? S 16:53 0:00 /usr/bin/kglobalaccel bw 1748 0.0 0.4 90804 13500 ? S 16:53 0:00 /usr/bin/kwalletd bw 1752 0.1 0.5 103764 15152 ? S 16:53 0:00 /usr/bin/kactivitymanagerd bw 1758 0.0 0.0 2144 280 ? S 16:53 0:00 kwrapper4 ksmserver bw 1759 0.1 0.5 150016 16088 ? Sl 16:53 0:00 kdeinit4: ksmserver [kdeinit] bw 1763 2.2 1.0 178492 32100 ? Sl 16:53 0:08 kwin bw 1772 0.2 0.5 106292 16340 ? Sl 16:53 0:00 /usr/bin/knotify4 bw 1777 0.9 1.1 246120 32912 ? Sl 16:53 0:03 /usr/bin/krunner bw 1778 6.3 2.7 389884 80216 ? Sl 16:53 0:23 /usr/bin/plasma-desktop bw 1785 0.0 0.0 2844 1208 ? S 16:53 0:00 ksysguardd bw 1789 0.1 0.4 82036 14176 ? S 16:53 0:00 /usr/bin/kuiserver bw 1805 0.3 0.1 61560 5612 ? Sl 16:53 0:01 /usr/bin/akonadi_control root 1806 0.0 0.0 0 0 ? S 16:53 0:00 [kworker/0:2] bw 1808 0.1 0.2 211852 8460 ? Sl 16:53 0:00 akonadiserver bw 1810 0.4 0.8 244116 25360 ? Sl 16:53 0:01 /usr/sbin/mysqld --defaults-file=/home/bw/.local/share/akonadi/mysql.conf --da bw 1874 0.0 0.0 35284 2956 ? Sl 16:53 0:00 /usr/bin/xsettings-kde bw 1876 0.0 0.3 68776 9488 ? Sl 16:53 0:00 /usr/bin/nepomukserver bw 1884 0.4 0.9 173876 29240 ? SNl 16:53 0:01 /usr/bin/nepomukservicestub nepomukstorage bw 1902 6.1 2.1 451512 63924 ? Sl 16:53 0:21 /home/bw/.dropbox-dist/dropbox bw 1906 3.8 1.0 142368 32376 ? Rl 16:53 0:13 /usr/bin/yakuake bw 1933 0.0 0.1 54636 4680 ? Sl 16:53 0:00 /usr/bin/zeitgeist-datahub bw 1943 0.5 1.5 164836 46836 ? Sl 16:53 0:01 python /usr/bin/printer-applet bw 1945 0.1 0.1 99636 5048 ? 
S<l 16:53 0:00 /usr/bin/pulseaudio --start --log-target=syslog rtkit 1947 0.0 0.0 21336 1248 ? SNl 16:53 0:00 /usr/lib/rtkit/rtkit-daemon bw 1958 0.0 0.1 44204 3792 ? Sl 16:53 0:00 /usr/bin/zeitgeist-daemon bw 1972 0.0 0.0 27008 2684 ? Sl 16:53 0:00 /usr/lib/gvfs/gvfsd bw 1974 0.1 0.5 90480 16660 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res bw 1984 0.1 0.5 90472 16636 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res bw 1985 0.3 0.9 148800 28304 ? S 16:53 0:01 /usr/bin/akonadi_archivemail_agent --identifier akonadi_archivemail_agent bw 1992 0.1 0.5 90020 16148 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res bw 1993 0.1 0.5 90132 16452 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res bw 1994 0.1 0.5 90564 16332 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_0 bw 1995 0.1 0.5 90676 16732 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_1 bw 1996 0.1 0.5 90468 16800 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_maildir_resource akonadi_maildir_resou bw 1999 0.2 0.6 99324 19276 ? S 16:53 0:00 /usr/bin/akonadi_maildispatcher_agent --identifier akonadi_maildispatcher_agen bw 2006 0.3 0.9 148808 28332 ? S 16:53 0:01 /usr/bin/akonadi_mailfilter_agent --identifier akonadi_mailfilter_agent bw 2017 0.0 0.1 50256 4716 ? Sl 16:53 0:00 /usr/lib/zeitgeist/zeitgeist-fts bw 2024 0.2 0.6 103632 18376 ? Sl 16:53 0:00 /usr/bin/akonadi_nepomuk_feeder --identifier akonadi_nepomuk_feeder bw 2043 0.0 0.0 4484 280 ? S 16:53 0:00 /bin/cat bw 2101 0.2 0.7 113600 22396 ? Sl 16:53 0:00 /usr/lib/kde4/libexec/polkit-kde-authentication-agent-1 bw 2105 0.2 0.7 114196 22072 ? Sl 16:53 0:00 /usr/bin/nepomukcontroller bw 2156 0.3 1.0 333188 31244 ? Sl 16:54 0:01 /usr/bin/kmix bw 2167 0.0 0.0 6548 2724 pts/2 Ss 16:54 0:00 /bin/bash bw 2177 0.2 0.7 113496 22960 ? Sl 16:54 0:00 /usr/bin/klipper bw 2394 3.5 1.2 52932 35596 ? SNl 16:54 0:11 /usr/bin/virtuoso-t +foreground +configfile /tmp/virtuoso_hX1884.ini +wait root 2460 0.0 0.0 6184 1876 pts/2 S 16:54 0:00 sudo -s root 2500 0.0 0.0 6528 2700 pts/2 S 16:54 0:00 /bin/bash root 2599 0.0 0.0 5444 1280 pts/2 S+ 16:54 0:00 /bin/bash bin/aero root 2606 0.1 0.0 9836 2500 pts/2 S+ 16:54 0:00 wvdial aero2 root 2619 0.0 0.0 3504 1280 pts/2 S 16:54 0:00 /usr/sbin/pppd 57600 modem crtscts defaultroute usehostname -detach user aero bw 2653 0.0 0.0 6600 2880 pts/3 Ss 16:54 0:00 /bin/bash bw 2676 0.4 0.8 130296 24016 ? SNl 16:54 0:01 /usr/bin/nepomukservicestub nepomukfilewatch bw 2679 0.1 0.7 101636 22252 ? SNl 16:54 0:00 /usr/bin/nepomukservicestub nepomukqueryservice bw 2681 0.2 0.8 109836 24280 ? SNl 16:54 0:00 /usr/bin/nepomukservicestub nepomukbackupsync bw 3833 46.0 9.7 829272 288012 ? Rl 16:55 1:46 /usr/lib/firefox/firefox bw 3903 0.0 0.0 35128 2804 ? Sl 16:55 0:00 /usr/lib/at-spi2-core/at-spi-bus-launcher bw 4708 0.1 0.0 6564 2736 pts/4 Ss 16:56 0:00 /bin/bash root 5210 0.0 0.0 0 0 ? S 16:57 0:00 [kworker/u:0] root 6140 0.2 0.0 0 0 ? S 16:58 0:00 [kworker/0:1] root 6371 0.5 0.0 6184 1868 pts/4 S+ 16:59 0:00 sudo nethogs ppp0 root 6411 17.7 0.2 8616 6144 pts/4 S+ 16:59 0:05 nethogs ppp0 bw 6787 0.0 0.0 5464 1220 pts/3 R+ 16:59 0:00 ps auxw

    Read the article

  • EPM 11.1.2 - R&A Database Connections Disappear from the "Database Connection Manager"

    - by Powder
    When accessing the database connection panel through Reporting and Analysis, all previously entered database connections fail to appear. This is due to a bug in the Windows SMB2 protocol, and to work around it you have to disable the protocol. On Windows 2008 the protocol is automatically enabled, and this needs to be done on both the servers and the clients. Note that "server" is the server which hosts the RAF repository service and the RM1 folder; "client" is the server which hosts the replicated Repository service that accesses repository files via the network, i.e. \\<server_host>\RM1

    In order to disable SMB 2.0 on the server side, follow these steps:
    1. Run "regedit" on the Windows Server 2008 based computer.
    2. Expand and locate the subtree as follows: HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters
    3. Add a new REG_DWORD key with the name of "Smb2" (without quotation marks). Value name: Smb2; Value type: REG_DWORD; 0 = disabled, 1 = enabled.
    4. Set the value to 0 to disable SMB 2.0, or set it to 1 to re-enable SMB 2.0.
    5. Reboot the server.

    To disable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the "client" systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
    sc config mrxsmb20 start= disabled
    Note there's an extra " " (space) after the "=" sign.

    To re-enable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the "client" systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/mrxsmb20/nsi
    sc config mrxsmb20 start= auto
    Again, note there's an extra " " (space) after the "=" sign.
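
    If you prefer to script the server-side registry change rather than use regedit, the same value can be set programmatically. This C# one-off is my illustration, not part of the original note; it must run elevated, and the server still needs a reboot afterwards.

        // Disable SMB 2.0 on the server by creating/setting the Smb2 value.
        using Microsoft.Win32;

        class DisableSmb2
        {
            static void Main()
            {
                Registry.SetValue(
                    @"HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters",
                    "Smb2",
                    0, // 0 = disabled, 1 = enabled
                    RegistryValueKind.DWord);
            }
        }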

    Read the article

  • Reasons to Use a VM For Development

    - by George Stocker
    Background: I work at a start-up company where one team uses virtual machines to connect to a remote server for development, and another team (the team I'm on) uses local IIS/SQL Server 2005/Visual Studio installations. Team VM is located about 1000 miles from Team Non-VM, and the servers the VMs run on are located near Team VM (latency, for those wondering, is about 50ms). A person high up in the company is pushing for Team Non-VM to use virtual machines for programming, development, and testing. The last point we agree on: we want virtual machines for testing configurations and various aspects of the web application in a 'clean' state.
    The problem: what we don't agree on is having developers use RDP to connect to a remote desktop containing Visual Studio, SQL Server, and IIS to do the same development we could do locally on our laptops. I've tried the VM set-up, and besides the color issue there is a latency issue that is rather noticeable. Also, since we're a start-up, a good number of employees occasionally work from home on our work laptops, and this move would cut off the laptops; they'd be turned in.
    Stated reasons to use remote VMs for development (not testing!):
    - They work for Team VM.
    - They keep the source code "safe".
    - If we want to work from home, we could just use our home PCs.
    - Licenses (I don't know what the argument is, only that it's been used).
    Stated reasons not to use remote VMs for development:
    - We like working from home; we get a lot done on our own time.
    - We're not going to use our home PCs to do work-related stuff.
    - The latency is noticeable.
    - Support for the VMs (if they go down, or if we need a new VM) takes a while.
    - We don't have administrative privileges on the VMs and are unable to change settings as needed.
    What I'm looking for from the community is this: what reasons would you give for not using VMs for development? Keep in mind these are remote VMs -- this isn't a VM running on a local desktop; it's using the laptop (or a desktop) as a thin client for a remote VM. Also, on the other side of the coin: is there something we're missing that makes VMs more palatable for development?
    Edit: I think 'safe' is used in terms of corporate espionage, or more precisely: if a laptop gets stolen, the thief would have access to our source code. The former, as we've pointed out, is always going to be a possibility; companies address it with litigation, and there isn't a technical solution as far as I can see. The latter point (though I don't know its usefulness in a corporate scenario) is mitigated by TrueCrypt'ing the entire volume.

    Read the article

  • Oracle VM networking under the hood and 3 new templates

    - by Chris Kawalek
    We have a few cool things to tell you about. First up: have you ever wondered what happens behind the scenes in the network when you live migrate your Oracle VM server workload? Or how Oracle VM implements the network infrastructure you configure through your point-and-click actions in the GUI? Really, how do they do this? For an in-depth view of the Oracle VM for x86 networking model, look "Under the Hood" at networking in Oracle VM Server for x86 with our best practices engineer, in a blog post on OTN Garage.
    Next, making things simple in Oracle VM is what we strive every day to deliver to our user community. With that, we are pleased to bring you updates on three new Oracle application templates:
    - E-Business Suite 12.1.3 for Oracle Exalogic: Oracle VM templates for Oracle E-Business Suite 12.1.3 (x86 64-bit for Oracle Exalogic Elastic Cloud) contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and the software install (via the EBS Rapid Install). For further details, please review the announcement.
    - JD Edwards EnterpriseOne 9.1 and JD Edwards EnterpriseOne Tools 9.1.2.1 for x86 servers and Oracle Exalogic: the Oracle VM templates for JD Edwards EnterpriseOne provide a method to rapidly install JD Edwards EnterpriseOne 9.1 and Tools 9.1.2.1. The complete stack includes Oracle Database 11g R2 and Oracle WebLogic Server 10.3.5 running on Oracle Linux 5. The templates can be installed to Oracle VM Server for x86 release 3.x and to the Oracle Exalogic Elastic Cloud.
    - PeopleSoft PeopleTools 8.5.2.10 for Oracle Exalogic: this virtual deployment package delivers a "quick start" PeopleSoft middle-tier template on Oracle Linux for the Oracle Exalogic Elastic Cloud.
    And last: are you wondering why we say "fast" and "rapid" when we refer to using Oracle VM templates to virtualize Oracle applications? Read the Evaluator Group lab validation report quantifying deployment speeds up to 10x faster than with VMware vSphere. You can also check out our on-demand webcast, Quantifying the Value of Application-Driven Virtualization.

    Read the article

  • A brief introduction to BRM and architecture

    - by Yani Miguel
    Oracle Communications Billing and Revenue Management (Oracle BRM) is the telecom industry's leading solution for communications service providers. This post introduces BRM, starting with the basics.
    History: Portal was a billing and revenue management solution for the communications industry created by Portal Software. In 2006 Oracle acquired Portal Software and the solution was renamed BRM. Today Oracle BRM is the first end-to-end packaged enterprise software suite for the communications industry; that said, BRM is just one more product in the catalog of OSS solutions that Oracle offers. BRM can bill and manage all communications services, including wireline, wireless, broadband, cable, voice over IP, IPTV, music, and video.
    BRM architecture: BRM's architecture consists of 4 layers, or tiers. Across these layers sit the data, the business logic, and the interfaces that connect the graphical client tools.
    Application tier: this layer provides GUI client tools, which communicate with the other layers through open APIs. Some BRM client applications are Customer Center, Pricing Center, Universal Event Loader, Web Server, BRM Billing Application, Collections Center, and Permissioning Center. Furthermore, this layer is where real-time external events arrive.
    Business process tier: although all layers are equally important, I think this one deserves the most attention, because BRM's functionality is implemented in this tier. All the functions that give life to BRM live in this layer, coded as C functions called opcodes (the "System Processes" in the image). Any change or additional functionality should be made here, so when we customize the product we will most of the time be programming in this layer (the "Business Policies" in the image). Its features: it implements Portal system functionality; it validates data from the application tier; it modifies Portal behavior through business policies, which can be customized; and it triggers external systems using event notification.
    Object tier: this layer is responsible for translating BRM requests into database language and into external-system requests. Without it, the business logic coming from the business process tier could not be understood by the relational database.
    Data tier: this layer is responsible for the storage of the BRM database and the databases of external systems. External systems include credit card, tax, and directory servers.
    Finally, it's important to note that BRM is designed to integrate easily with the following solutions: AIA 2.4, Siebel CRM, E-Business Suite (G/L only), Communications Services Gatekeeper, and Oracle BI Publisher.
    Personally, I think BRM could improve by migrating its client-server architecture to a fully web platform that works with Oracle Middleware, like any product of the Fusion Middleware family. Hopefully there are already initiatives in this area.

    Read the article

  • Entity Framework 4.0: Optimal and horrible SQL

    - by DigiMortal
    Lately I gave an Entity Framework 4.0 session where I introduced the new features of Entity Framework. During the session the audience and I found out how Entity Framework 4.0 can generate optimized SQL. After the session I also showed one horrible example of how awful the SQL generated by Entity Framework can be. In this posting I will cover both examples.
    Optimal SQL: before going to the code, take a look at the following model. There is a class called Event, and I will use this class in my query. Here is the LINQ to Entities query that uses a small anonymous type:
    var query = from e in _context.Events
                select new { Id = e.Id, Title = e.Title };
    Debug.WriteLine(((ObjectQuery)query).ToTraceString());
    Running this code gives us the following SQL:
    SELECT
        [Extent1].[event_id] AS [event_id],
        [Extent1].[title] AS [title]
    FROM [dbo].[events] AS [Extent1]
    This is really small: no additional fields in the SELECT clause. Nice, isn't it?
    Horrible SQL: Ayende Rahien's blog shows us the darker side of Entity Framework 4.0 queries. You can find a comparison between NHibernate, LINQ to SQL, and LINQ to Entities in the posting "What happens behind the scenes: NHibernate, Linq to SQL, Entity Framework scenario analysis". In this posting I will show you the resulting query and let you think about how much better it could be done. Well, it is not something we want to see running on our servers. I hope that the EF team improves the generated SQL to an acceptable level before Visual Studio 2010 is released. There is also a moral to this example: you should always check the queries that your O/R-mapper generates. Behind the curtains it may silently generate queries that perform badly, and in that case you need to optimize your data querying strategy.
    Conclusion: Entity Framework 4.0 is a new product with a lot of new features, and it is clear that not everything is 100% super in its first release. But it is still a great step forward, and I hope that on 12.04.2010 we will have a promising new O/R-mapper available to use in our projects. If you want to read more about Entity Framework 4.0 and Visual Studio 2010, please feel free to follow this link to the list of my Visual Studio 2010 and .NET Framework 4.0 postings.
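    To see the difference for yourself, compare a narrow projection against a full-entity query side by side; a minimal sketch, assuming an ObjectContext with an Events set like the one in the post (variable names are illustrative, not from the original):

        // using System.Data.Objects; using System.Diagnostics; using System.Linq;
        // _context is assumed to expose ObjectSet<Event>.

        // Projection: only Id and Title end up in the SELECT list.
        var slim = from e in _context.Events
                   select new { e.Id, e.Title };

        // Full entity: every mapped column is fetched, needed or not.
        var fat = from e in _context.Events
                  select e;

        // ToTraceString() reveals the SQL each query will run.
        Debug.WriteLine(((ObjectQuery)slim).ToTraceString());
        Debug.WriteLine(((ObjectQuery)fat).ToTraceString());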

    Read the article

  • Why Does Ejabberd Start Fail?

    - by Andrew
    I am trying to install ejabberd 2.1.10-2 on my Ubuntu 12.04.1 server. This is a fresh install, and ejabberd never installs successfully.
    The install: every time, apt-get hangs on this:
    Setting up ejabberd (2.1.10-2ubuntu1) ...
    Generating SSL certificate /etc/ejabberd/ejabberd.pem...
    Creating config file /etc/ejabberd/ejabberd.cfg with new version
    Starting jabber server: ejabberd............ failed.
    The dots just go on forever until it times out, or until I 'killall' the beam, beam.smp, epmd, and ejabberd processes. I've turned off all firewall restrictions. Here's the output of epmd -names while the install is hung:
    epmd: up and running on port 4369 with data:
    name ejabberdctl at port 42108
    name ejabberd at port 39621
    And after it fails:
    epmd: up and running on port 4369 with data:
    name ejabberd at port 39621
    At the same time (during and after), the output of both netstat -atnp | grep 5222 and netstat -atnp | grep 5280 is empty.
    The crash file: a crash dump file is created at /var/log/ejabberd/erl_crash.dump. The slogan (i.e. the reason for the crash) is:
    Slogan: Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
    It's alive? Whenever I try to relaunch ejabberd with service ejabberd start, the same thing happens, even if I've killed all the processes first. However, when I killall the processes listed above again and run su - ejabberd -c /usr/sbin/ejabberd, this is the output I get:
    Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:0] [kernel-poll:false]
    Eshell V5.8.5 (abort with ^G)
    (ejabberd@ns1)1>
    =INFO REPORT==== 15-Oct-2012::12:26:13 === I(<0.478.0>:ejabberd_listener:166) : Reusing listening port for 5222
    =INFO REPORT==== 15-Oct-2012::12:26:13 === I(<0.479.0>:ejabberd_listener:166) : Reusing listening port for 5269
    =INFO REPORT==== 15-Oct-2012::12:26:13 === I(<0.480.0>:ejabberd_listener:166) : Reusing listening port for 5280
    =INFO REPORT==== 15-Oct-2012::12:26:13 === I(<0.40.0>:ejabberd_app:72) : ejabberd 2.1.10 is started in the node ejabberd@ns1
    Then the server appears to be running: I get a login prompt when I access http://mydomain.com:5280/admin/. Of course I can't log in unless I create an account. At this point, the output of netstat -atnp | grep 5222 and netstat -atnp | grep 5280 is as follows:
    tcp 0 0 0.0.0.0:5222 0.0.0.0:* LISTEN 19347/beam
    tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN 19347/beam
    ejabberdctl: even when ejabberd appears to be running, trying to do anything with ejabberdctl fails. For example, trying to register a user:
    root@ns1:~# ejabberdctl register myusername mydomain.com mypassword
    Failed RPC connection to the node ejabberd@ns1: nodedown
    I have no idea what I'm doing wrong. This happens on two different servers I have with identical software installed (really not much of anything). Please help. Thanks.
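    Not an answer, but a hedged diagnostic sketch: "Failed RPC connection ... nodedown" from ejabberdctl frequently comes down to an Erlang cookie or node-name mismatch between the control script and the running node. Some commands worth running (paths assume the Debian/Ubuntu layout; adjust as needed):

        # Which Erlang nodes does epmd actually know about?
        epmd -names

        # ejabberdctl talks to the server over Erlang RPC; both sides must
        # share the same cookie. Compare what the daemon uses...
        sudo cat /var/lib/ejabberd/.erlang.cookie

        # ...with the cookie of the user running ejabberdctl (root's here):
        sudo cat /root/.erlang.cookie

        # Running the control script as the ejabberd user picks up the
        # daemon's own cookie and often sidesteps the mismatch:
        sudo -u ejabberd ejabberdctl status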

    Read the article

  • DHCP server with multiple interfaces on Ubuntu destroys the default gateway

    - by Henrik Alstad
    I use Ubuntu, and I have many interfaces: eth0, which is my internet connection and gets its info from a DHCP server totally outside of my control, and eth1, eth2, eth3, and eth4, for which I have created a DHCP server (ISC DHCP server). It seems to work, and I even get an IP address from the foreign DHCP server on the internet-facing interface. However, for some reason my gateway for eth0 seems to have broken after I installed my local DHCP server for eth1-eth4. (I think so because I get an IP for eth0 and I can ping other hosts on the local network, but I cannot reach the internet.)
    My eth0-specific info in /etc/network/interfaces:
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet dhcp
    auto eth1
    iface eth1 inet static
        address 10.0.1.1
        netmask 255.255.255.0
        network 10.0.1.0
        broadcast 10.0.1.255
        gateway 10.0.1.1
        mtu 8192
    auto eth2
    iface eth2 inet static
        address 10.0.2.1
        netmask 255.255.255.0
        network 10.0.2.0
        broadcast 10.0.2.255
        gateway 10.0.2.1
        mtu 8192
    My /etc/default/isc-dhcp-server:
    INTERFACES="eth1 eth2 eth3 eth4"
    So why does my local DHCP server break the gateway for eth0 when I tell it not to listen on eth0? Does anyone see the problem, or know what I can do to fix it? The problem does indeed seem to be the gateways. netstat -nr gives:
    0.0.0.0    10.X.X.X      0.0.0.0    UG    0 0 0 eth3
    It should have been:
    0.0.0.0    129.2XX.X.X   0.0.0.0    UG    0 0 0 eth0
    So for some reason, my local DHCP server overrides the gateway I get from the network's DHCP.
    Edit: dhcpd.conf looks like this (I included info only for the eth1 subnet):
    ddns-update-style none;
    not authoritative;
    subnet 10.0.1.0 netmask 255.255.255.0 {
      interface eth1;
      option domain-name "example.org";
      option domain-name-servers ns1.example.org, ns2.example.org;
      default-lease-time 600;
      max-lease-time 7200;
      range 10.0.1.10 10.0.1.100;
      host camera1_1 {
        hardware ethernet 00:30:53:11:24:6E;
        fixed-address 10.0.1.10;
      }
      host camera2_1 {
        hardware ethernet 00:30:53:10:16:70;
        fixed-address 10.0.1.11;
      }
    }
    Also, it seems the gateway is set correctly if I run /etc/init.d/networking restart in a terminal, but that's not helpful for me: I need the correct gateway to be set during startup, and I'd rather find the source of the problem.
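    One observation, offered as a guess rather than a confirmed fix: in /etc/network/interfaces, a "gateway" line in a static stanza installs a default route through that interface, so with eth1-eth4 each declaring themselves as gateway, one of them (eth3 above) can win over the default route DHCP assigns to eth0. Since this box is itself the router for those subnets, the local stanzas arguably should not carry gateway lines at all; a minimal sketch, with addresses kept from the question:

        auto eth1
        iface eth1 inet static
            address 10.0.1.1
            netmask 255.255.255.0
            broadcast 10.0.1.255
            mtu 8192
            # no "gateway" line: this host is the router for 10.0.1.0/24,
            # so it must not install a default route pointing at itself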

    Read the article

  • Partner Webcast – More out of ODA with DB Options - 19 July 2012

    - by Thanos
    The simple, reliable, affordable path to high-availability databases. Critical business data needs to be available 24/7 for users and customers, but it can be a struggle to find the time and resources to build a highly available database system that's reliable and affordable. That's why Oracle created the new Oracle Database Appliance: a complete package of software, server, storage, and networking. The Oracle Database Appliance integrates the world's most popular database, Oracle Database 11g, with system software, servers, storage, and networking in a single box. Business gets the benefit of a reliable, secure, and highly available database to support applications and maintain continuity, as well as groundbreaking ease of use. But that is not all: with support for all Oracle Database options, the Oracle Database Appliance can be the ideal solution for many use cases.
    The benefits?
    - Unmatched performance, reliability, and security for your data that's there when you need it, which is all the time.
    - Fast installation, simple deployment, easy management. Out of the box.
    - Significant cost savings and reduced risk and complexity compared to integrating all the elements yourself.
    - Ongoing lower total cost of ownership, with multiple automated support, detection, and correction functions that also save you time.
    Discover the Oracle Database Appliance value proposition and learn how to position it and combine it with database options to capture new business and roll out solutions safely and with maximum cost efficiency.
    Agenda:
    - Oracle Database and Engineered Systems innovation
    - What is the Oracle Database Appliance?
    - The Oracle Database Appliance value proposition
    - Oracle Database Appliance with Database Options
    - Oracle Database Appliance Partners Business
    Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle technologies as well as upcoming partner webcasts and events.

    Read the article

  • SQLAuthority News – #TechEDIn – TechEd India 2012 – Things to Do and Explore for SQL Enthusiast

    - by pinaldave
    TechEd India 2012 is just 48 hours away, and I have been receiving lots of requests about how SQL enthusiasts can maximize the time they will spend at TechEd India 2012. Trust me, TechEd is the biggest tech event in India, and it is much larger in magnitude than we can imagine. There are plenty of tracks and lots of things to do; honestly, we would need to clone ourselves multiple times to cover the event completely. However, I am going to talk about SQL enthusiasts only right now. In this post, I'll share a few things they can do at this big event. But before I start talking about specific things, there is one thing which is a must: the keynotes. There are amazing keynotes planned for every single day at TechEd India 2012. One should not miss them at all.
    Social media: I am a big believer in social media. I am everywhere: Twitter, Facebook, LinkedIn and GPlus. I suggest you follow the tag #TechEdIn and contribute to the healthy conversation going on right now. You may want to follow a few of the SQL Server enthusiasts who are also attending events like TechEd India. This way, you will know where they are and you can contribute along with them. For a good start, you can follow all the speakers who are presenting at the event. I have linked all the speakers' names to their respective Twitter accounts.
    Networking: do not stop meeting new people. Introduce yourself. Catch the speakers after their sessions. Meet other SQL experts and discuss SQL as well as life aside from SQL. The best way to start a conversation is to talk about something new. Here are a few lines I usually use when I have to break the ice:
    - SQL Server 2012 is just released and I have installed it.
    - How many SQL Server sessions are you going to attend? I am going to attend _________
    - I am a big fan of SQL Server.
    Sessions agenda, day 1:
    - T-SQL Rediscovered with SQL Server 2012 - Jacob Sebastian
    - Catapult your data with SQL Server 2012 Integration Services - Praveen Srivatsa
    - Processing Big Data with SQL Server 2012 and Hadoop - Stephan Forte
    - SQL Server Misconceptions and Resolution: A Practical Perspective - Pinal Dave and Vinod Kumar
    - Securing with ContainedDB in SQL Server 2012 - Pranab Majumdar
    Day 2:
    - Hands-on Lab: Exploring Power View with SQL Server 2012 - Ravi S. Maniam
    - Hands-on Lab: SQL Server 2012 AlwaysOn Availability Groups - Amit Ganguli
    Day 3:
    - Peeling SQL Server like an Onion: Internals Debunked - Vinod Kumar
    - Speed Up! Parallel Processes and Unparalleled Performance - Pinal Dave
    - Keeping Your Database Available: 'AlwaysOn' - Balmukund Lakhani
    - Lesser Known Facts of SQL Server Backup and Restore - Amit Banerjee
    - Top five reasons why you want SQL Server 2012 BI - Praveen Srivatsa
    Product booth and event partners: there will be a dedicated SQL Server booth at the event. I suggest you stop by and talk with the SQL Server experts there. Additionally, there will be booths for various event partners. Stop by their booths and see if they have a product which can help your career. I know that Pluralsight has recently released my course on their online learning site, and if that interests you, you can talk about the subject with them.
    Bring your camera: make a list of the people you want to meet. Follow them on Twitter or send them an email to learn their location. Introduce yourself, meet them, and have your conversation. Do not forget to take a photo with them, and later on share the photo on social media. It would also be nice to send everyone an email with high-resolution images attached, if you have their email addresses.
    After-hours parties: these are not always just about eating and meeting friends; sometimes they are very informative. Last time I ended up meeting an SQL expert, and we ended up talking for hours about various aspects of SQL Server. After 4 hours, we figured out that he stays in the same apartment complex as mine, and since we have had an excellent friendship, he has since become a family friend. So my advice is to seek out who is meeting where in the evening and see if you can get invited to the parties. Make new friends, but never lose mutual respect by doing something silly.
    Meet me: I will be at the event for three days straight, around the SQL tracks. Please stop by and introduce yourself. I would like to meet you and talk to you. Meeting folks from the community is very important, as we all speak the same language at the end of the day: SQL Server.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual, that the intent is to avoid marketing hype and FUD tactics, and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument rather than FUD fight, there are a few areas in his comparison that should be discussed.
    Scaling is not free. For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.
    Choosing the denominator radically changes the picture. When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator, and Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is arbitrary (not necessarily comparable across vendors), fluid (modern chips shift chip resources in response to load), and invisible (unless you have a microscope, you can't see it). By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get:
    13161.07 EjOPS / 4 chips  = 3290 EjOPS per chip for IBM
    57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle
    The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because:
    - I can see chips and count them;
    - I can accurately compare the number of chips in my system to the count in some other vendor's system; and
    - The probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".
    SPEC fair use requirements. Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule (www.spec.org/fairuse.html, section I.D.2). For example, Mr. Kharkovski's footnote (1) begins: "Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result)." The questionable tactic, from a fair use point of view, is that there is no such metric at the designated location. At www.spec.org you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM.
    Despite the implication of the footnote, you will not find any mention of 449, nor anything that says 823. SPEC says that you can, under its fair use rule, derive your own values, but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."
    Substantiation and transparency. Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, explained, and substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hardware, software, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.
    The T5 claim of "fastest processor". Mr. Kharkovski several times questions Oracle's claim of the fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or it just might be useful to consider performance-per-chip: each SPARC T5 chip allows 128 hardware threads to execute simultaneously (16 cores x 8 threads). The industry-standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+: for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for peak tuning and for base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC vs. 2170 for POWER7+. You can find the details on the March 2013 BestPerf CPU2006 page.
    SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org on 1 July 2013, and can be rechecked here.
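    Running the quoted numbers both ways shows how completely the denominator decides the story; a quick worked check, using only figures cited above:

        Per core (footnote figures):   IBM    823 EjOPS/core   vs  Oracle   449 EjOPS/core
        Per chip (quoted chip counts): IBM    13161.07 / 4   = 3290 EjOPS/chip
                                       Oracle 57422.17 / 16  = 3588 EjOPS/chip

    Same two results, opposite winners.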

    Read the article

  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE: I work on a small team of devs (4 guys). They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use it:
    - The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour each, which is insufficient.
    - Team members directly modify code on the server. This keeps code out of sync, requires comparison just to be sure you are working with the latest code, and creates complex merge problems.
    - Time estimates offered by developers exclude the time required to fix any of these problems. So if I say "no, it will take 10x longer", I have to constantly explain these issues, and I risk management perceiving me as "slow".
    - The physical files on the server differ in unknown ways across ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain.
    - Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it.
    - Developers argue that using source control is wasteful because merging is error-prone and difficult. This is a difficult point to argue, because when source control is misused this badly and continually bypassed, it is indeed error-prone. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required just to synchronize the code to start with is time-consuming. Multiply this by the 10+ projects we manage.
    - Permanent files are often stored in the same directory as the web project, so a full publish erases those files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.
    So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?

    Read the article

  • My Package Version Number Appears Greater Yet apt-get Doesn't Select It

    - by nutznboltz
    Backstory: it was determined that, when using lxc container VMs, the Nagios nrpe shutdown script, run on the host of the containers, would kill the nrpe processes inside the containers. This was remediated by changing the script to use pidfiles instead of searching the process table for the nrpe process. Regrettably, start-stop-daemon is a C program that resulted from translating a Perl script, and it shows. There are far too many global variables in start-stop-daemon.c, and although there are some nice blocks of comments, there are far too few comments that explain the intent behind variable names such as "schedule" (the string "schedule" appears in many contexts). The manual page for start-stop-daemon strongly suggests that unless you use the "--retry" option, the program may return before the process it signalled actually calls exit() and terminates; however, it doesn't actually state this in plain English. The obtuseness of start-stop-daemon is most likely the reason the "fixed" version of the script includes a dubious comment indicating that sometimes the pid file has not been removed; I can easily see why someone would not realize that the --retry option was missing. This bug also causes failures when the script is given the "restart" option: the nrpe daemon will shut down but not start up again. Did I mention that since applying the update our nrpe servers started crashing over and over? Repairing this is why I am doing this work. I have been working on remediating the fix; you can see my current work in this PPA.
    Actual question: the upstream version number of nagios-nrpe-server in lucid-updates is
    2.12-4ubuntu1.10.04.1
    My PPA uses this version number:
    2.12-4ubuntu1.10.04.1.1~ppa1~lucid1
    I check the rules here and use this test program, and I am led to believe that the version number I use in my PPA is greater than the one in lucid-updates, yet when I ran:
    sudo add-apt-repository ppa:nutznboltz/nrpe-unbreak-lp-600941
    sudo apt-get update
    sudo aptitude dist-upgrade
    the replacement package was not installed. I was able to install it using:
    sudo aptitude install nagios-nrpe-server=2.12-4ubuntu1.10.04.1.1~ppa1~lucid1
    Can anyone explain this behavior? Why didn't my version number appear greater to "aptitude dist-upgrade"? Thanks.
    $ cat /etc/apt/preferences
    Package: *
    Pin: release a=lucid-backports
    Pin-Priority: 400

    Package: *
    Pin: release a=lucid-security
    Pin-Priority: 990

    Package: *
    Pin: release a=lucid-updates
    Pin-Priority: 900

    Package: *
    Pin: release a=lucid-proposed
    Pin-Priority: 400
    $ ls /etc/apt/preferences.d/
    $
    That should not make any difference, as a PPA cannot be in any of those pockets. I went ahead and bumped the version number in the PPA to 2.12-4ubuntu1.10.04.2~ppa1~lucid1; I'll see if that makes a difference. I do notice that lintian complains:
    W: nagios-nrpe-server: debian-revision-not-well-formed 2.12-4ubuntu1.10.04.2~ppa1~lucid1
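    For what it's worth, dpkg itself can arbitrate how the two strings compare; a quick check with real tools (version strings taken verbatim from the question):

        $ dpkg --compare-versions 2.12-4ubuntu1.10.04.1.1~ppa1~lucid1 gt 2.12-4ubuntu1.10.04.1 \
            && echo "PPA version sorts higher"
        PPA version sorts higher

        # apt-cache policy shows which candidate apt actually selects and the
        # pin priority attached to each origin, which often explains why a
        # higher version gets passed over:
        $ apt-cache policy nagios-nrpe-server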

    Read the article

  • Help choosing a cost-effective game server for a Flash client

    - by Sapots Thomas
    I am developing a Flash-based game, primarily for desktops, to be hosted on the Facebook platform (like CityVille, The Sims Social, etc.). The gameplay doesn't involve real-time communication between players, unlike an MMORPG; here each player plays in his own world without any knowledge of other online players. I've written almost 95% of the game logic in ActionScript on the client side. I used SmartFoxServer Pro on the server side (mostly for getting data from the DB), and the entire server code is an extension written in Java. I'm using JSON as the protocol for communication.
    Although I love SmartFoxServer, as an indie it's tough for me to afford the unlimited-users license. Moreover, it's limited to just one machine. So I'm looking for an alternative to SmartFoxServer now.
    The reason for choosing SmartFoxServer earlier was to use the server properties it supports. Server properties on SmartFoxServer take advantage of the socket connection and are essentially server-side objects in Java which store some data for the player, which he can change frequently during the game. When he logs out, the extension can write the final state to the DB (I'm using MySQL). This significantly reduces the number of DB UPDATE/INSERT calls made during the game (a rough sketch of the pattern follows below). I love the way this works, since the data is secure on the server side and SmartFoxServer is known to be scalable. (Although I'm not sure whether this approach is widely used in the gaming industry; since this is not an MMORPG, I'm putting all players in the lobby.)
    So my question is whether any of the free, community-supported servers like RedDwarf, Firebase, or BlazeDS can provide a similar architecture, so that I can use server properties without many code changes?
    EDIT: I am not insisting on the exact same feature (that's asking too much!), but at least a viable messaging system on the server, so that I can send ActionScript objects from the client using JSON/binary to keep it fast. Or maybe some completely different way to implement what I need here. Thanks in advance.
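    To make the "server properties" pattern concrete, here is a minimal, framework-agnostic sketch in Java (the class and DAO names are hypothetical, not the SmartFoxServer API): keep each player's mutable state in memory during play, and flush it to the DB once at logout.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Assumed persistence hook; any MySQL-backed DAO would do here.
        interface PlayerDao {
            void saveState(String playerId, Map<String, Object> state);
        }

        public class PlayerStateStore {
            private final Map<String, Map<String, Object>> states = new ConcurrentHashMap<>();

            // Called many times per session; touches memory only.
            public void set(String playerId, String key, Object value) {
                states.computeIfAbsent(playerId, id -> new ConcurrentHashMap<>()).put(key, value);
            }

            // Called once on logout; one DB write replaces many per-action UPDATEs.
            public void flushToDb(String playerId, PlayerDao dao) {
                Map<String, Object> state = states.remove(playerId);
                if (state != null) {
                    dao.saveState(playerId, state);
                }
            }
        }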

    Read the article

  • links for 2011-02-16

    - by Bob Rhubart
    - On the Software Architect Trail: software architect is the #1 job, according to a 2010 CNN-Money poll. In this article in Oracle Magazine, several members of the OTN architect community talk about the career paths that led them to this lucrative role. (tags: oracle oraclemagazine softwarearchitect)
    - Oracle Technology Network Architect Day, Denver: registration opens soon for this event, to be held in Denver on March 23, 2011. (tags: oracle otn entarch)
    - How the Internet Gets Inside Us (The New Yorker): "It isn't just that we've lived one technological revolution among many; it's that our technological revolution is the big social revolution that we live with." - Adam Gopnik (tags: internet progress technology innovation)
    - The Insider Threat: Understand and Mitigate Your Risks (CSO webcast): February 23, 2011 at 1:00 PM EST / 10:00 AM PST. Speakers: Randy Trzeciak, lead of the CERT insider threat research team, and Roxana Bradescu, Director of Database Security at Oracle. (tags: oracle CERT security)
    - The Tom Kyte Blog: An Interesting Read... Tom looks at "an internet security firm brought down by not following the most *basic* of security principles." (tags: security oracle)
    - Jason Williamson: Oracle as a Service in the Cloud: "It is not trivial to migrate large amounts of pre-relational or 'devolved' relational data. To do this, we again must revert back to a tight roadmap to migration and leverage the growing tools and services that we have." - Jason Williamson (tags: oracle cloud soa)
    - Edwin Biemond (Java / Oracle SOA blog): Building an asynchronous web service with JAX-WS: "Building an asynchronous web service can be complex, especially when you are used to synchronous web services where you can wait for the response in your favorite tool." - Oracle ACE Edwin Biemond (tags: oracle oracleace java soa)
    - Shared Database Servers (The SaaS Report): "Outside the virtualization world, there are capabilities of Oracle Database which can be used to prevent resource contention and guarantee SLA." - Shivanshu Upadhyay (tags: oracle database cloud SaaS)
    - White Paper: Experiencing the New Social Enterprise: "Increasingly, organizations recognize the mandate to create a modern user experience that transforms existing business processes and increases business efficiency and agility." (tags: e20 enterprise2.0 socialcomputing oracle)
    - Clusterware 11gR2: Setting up an Active/Passive failover configuration: Gilles Haro illustrates the steps necessary to achieve "a fully operational 11gR2 database protected by automatic failover capabilities." (tags: oracle clusterware)
    - Oracle ERP: How to overcome local hurdles in a global implementation: "The corporate world becomes a global village as many companies expand their business and offices around different countries and even continents. And this number keeps increasing. This globalization raises interesting questions..." - Jan Verhallen (tags: oracle capgemini entarch erp)
    - Webcast: Successful Strategies for Optimizing Your Data Warehouse: Thursday, March 3, 2011, 10 a.m. PT / 1 p.m. ET. Speakers: Mala Narasimharajan (Senior Product Marketing Manager, Oracle Data Integration) and Denis Gray (Principal Product Manager, Oracle Data Integration). (tags: oracle dataintegration datawarehousing)

    Read the article
