Search Results

Search found 2410 results on 97 pages for 'alex worden'.

Page 94/97 | < Previous Page | 90 91 92 93 94 95 96 97  | Next Page >

  • JCP.Next - Early Adopters of JCP 2.8

    - by Heather VanCura
    JCP.Next is a series of three JSRs (JSR 348, JSR 355 and JSR 358), to be defined through the JCP process itself, with the JCP Executive Committee serving as the Expert Group. The proposed JSRs will modify the JCP's processes - the Process Document and the Java Specification Participation Agreement (JSPA) - and will apply to all new JSRs for all Java platforms. The first - JCP.next.1, or more formally JSR 348, Towards a new version of the Java Community Process - was completed and put into effect in October 2011 as JCP 2.8. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. The second - JSR 355, Executive Committee Merge - is also Final; you can read the JCP 2.9 Process Document. As part of the JSR 355 Final Release, the JCP Executive Committee published revisions to the JCP Process Document (version 2.9) and the EC Standing Rules (version 2.2). The changes went into effect following the 2012 EC Elections in November. The third - JSR 358, A major revision of the Java Community Process - was submitted in June 2012. This JSR will modify the Java Specification Participation Agreement (JSPA) as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR) expects to spend a considerable amount of time working on it. The JSPA is defined by the JCP as "a one-year, renewable agreement between the Member and Oracle." The success of the Java community depends upon an open and transparent JCP program. JSR 358, A major revision of the Java Community Process, is now in process and can be followed on java.net. The following JSRs and Spec Leads were the early adopters of JCP 2.8, who voluntarily migrated their JSRs from JCP 2.x to JCP 2.8 or above. More candidates for 2012 JCP Star Spec Leads!
      JSR 236, Concurrency Utilities for Java EE (Anthony Lai/Oracle), migrated April 2012
      JSR 308, Annotations on Java Types (Michael Ernst, Alex Buckley/Oracle), migrated September 2012
      JSR 335, Lambda Expressions for the Java Programming Language (Brian Goetz/Oracle), migrated October 2012
      JSR 337, Java SE 8 Release Contents (Mark Reinhold/Oracle) - EG Formation, migrated September 2012
      JSR 338, Java Persistence 2.1 (Linda DeMichiel/Oracle), migrated January 2012
      JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services (Santiago Pericas-Geertsen, Marek Potociar/Oracle), migrated July 2012
      JSR 340, Java Servlet 3.1 Specification (Shing Wai Chan, Rajiv Mordani/Oracle), migrated August 2012
      JSR 341, Expression Language 3.0 (Kin-man Chung/Oracle), migrated August 2012
      JSR 343, Java Message Service 2.0 (Nigel Deakin/Oracle), migrated March 2012
      JSR 344, JavaServer Faces 2.2 (Ed Burns/Oracle), migrated September 2012
      JSR 345, Enterprise JavaBeans 3.2 (Marina Vatkina/Oracle), migrated February 2012
      JSR 346, Contexts and Dependency Injection for Java EE 1.1 (Pete Muir/RedHat), migrated December 2011

    Read the article

  • The Red Gate Guide to SQL Server Team based Development Free e-book

    - by Mladen Prajdic
    After about six months of work, the new book I've coauthored with Grant Fritchey (Blog|Twitter), Phil Factor (Blog|Twitter) and Alex Kuznetsov (Blog|Twitter) is out. They're all smart folks I talk to online, and this book is packed with good ideas backed by years of experience. The book contains a good deal of information about the things you need to think of when doing any kind of multi-person database development. Although it's meant for SQL Server, the principles can be applied to any database platform out there. In the book you will find information on: writing readable code, documenting code, source control and change management, deploying code between environments, unit testing, reusing code, and searching and refactoring your code base. I've written chapter 5 about database testing and chapter 11 about SQL refactoring. In the database testing chapter (chapter 5) I cover why you should test your database, why it is a good idea to have a database access interface composed of stored procedures, views and user-defined functions, and what and how to test. I talk about the many testing methods, like black- and white-box testing, unit and integration testing, and error and stress testing, and why and how you should do all of those. Sometimes you have to convince management to include testing in the development lifecycle, so I give some pointers and tips on how to do that. Testing databases is a bit different from testing object-oriented code in that, to have independent unit tests, you need to roll back your changes after each test. The chapter shows you ways to do this and also how to avoid it. At the end I show how to test various database objects and how to test access to them. In the SQL refactoring chapter (chapter 11) I cover why to refactor and where to even begin refactoring. I also show you a way to achieve the set-based mindset for solving SQL problems, which is crucial to good set-based SQL programming, and a few commonly seen problems to refactor. These problems include: using functions on columns in the WHERE clause, SELECT * problems, long stored procedures with many input parameters, one subquery per condition in the SELECT statement, the "cursors are good for anything" problem, using too-large data types everywhere, and the "using your data in code for business logic" anti-pattern. You can read more about it and download it here: The Red Gate Guide to SQL Server Team-based Development. Hope you like it, and send me feedback if you wish to.
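    The rollback approach mentioned for chapter 5 can be sketched roughly as follows. This is my own minimal illustration, not code from the book: a JUnit fixture that opens a transaction before each test and rolls it back afterwards, so every database unit test starts from the same state. The connection string, table and stored procedure names are hypothetical placeholders.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      import org.junit.After;
      import org.junit.Before;
      import org.junit.Test;
      import static org.junit.Assert.assertEquals;

      public class CustomerProcedureTest {

          private Connection conn;

          @Before
          public void openTransaction() throws Exception {
              // Hypothetical connection string; point it at a disposable test database.
              conn = DriverManager.getConnection(
                      "jdbc:sqlserver://localhost;databaseName=TestDb", "testUser", "testPassword");
              conn.setAutoCommit(false); // everything the test does stays inside one transaction
          }

          @Test
          public void insertCustomerAddsOneRow() throws Exception {
              try (Statement stmt = conn.createStatement()) {
                  stmt.executeUpdate("EXEC dbo.InsertCustomer 'Jane', 'Doe'"); // hypothetical procedure
                  try (ResultSet rs = stmt.executeQuery(
                          "SELECT COUNT(*) FROM dbo.Customer WHERE LastName = 'Doe'")) {
                      rs.next();
                      assertEquals(1, rs.getInt(1));
                  }
              }
          }

          @After
          public void rollbackTransaction() throws Exception {
              conn.rollback(); // undo the test's changes so the next test is independent
              conn.close();
          }
      }

    Wrapping each test in a transaction like this keeps tests independent without having to rebuild the test data between runs.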

    Read the article

  • Oracle ODP.NET and Windows PowerShell

    - by cjandaus
    Well known in the Microsoft world, met with little more than a shrug in the Oracle world: the so-called Scripting Guys. As the name suggests, their Hey, Scripting Guy! Blog is about scripting, and therefore, of course, about Windows PowerShell. Yes, the days of the DOS command window and batch files are over. PowerShell is a powerful scripting environment on Windows that even Unix/Linux administrators should appreciate. We proved years ago, in an Oracle workshop series, that it is also a wonderful way to access Oracle databases. Back then I was joined by Klaus Rohe of Microsoft, who later also gave a joint talk with me at the DOAG conference. Our shared goal, then as now, was to convince Oracle users of the excellent integration between Oracle, Windows and .NET. What better way than to have both vendors confirm it together? The perpetual doubters in particular welcomed this. Since then, PowerShell had dropped off my radar, and Oracle users had stopped raising the topic as well. Perhaps because it is too new or too unknown? Rather unlikely ... Maybe it is because people simply assume that PowerShell is only really usable with Microsoft products, or because they are told that only integration with Microsoft's own SQL Server database is possible. And that, of course, is not true - as usual (Microsoft Active Directory comes to mind, among other things, but more on that another time). All the more reason I am pleased to read a brand-new blog post on exactly this topic, which Alex Keh (Product Manager for Windows and .NET at Oracle headquarters in San Francisco) pointed out to me. What makes it even better is that this post comes from the Microsoft world, confirming between the lines that the Oracle database and our .NET integration via the Oracle Data Provider for .NET (ODP.NET) play an important role there too. In that spirit: two thumbs up for the Scripting Guys! The post is called Use Oracle ODP.NET and PowerShell to Simplify Data Access, and despite a few minor slips, the article is highly recommended as an introduction to the topic. Let me know where you stand on this integration, whether PowerShell is or could become important for you in practice, and whether there are features you miss that Oracle should implement in the future. Thank you!

    Read the article

  • Highlights from recent Yammer video

    - by Eric Jensen
    A few weeks back, Ryan Kennedy of Yammer gave a talk about Berkeley DB Java Edition. You can find it posted here on Alex Popescu's Blog, or go directly to the video post itself. It was full of useful nuggets of information, such as why they chose to use BDB JE, performance, and some tips & tricks at the end. At over 40 minutes, the video is quite long. Ryan is an entertaining speaker, so I suggest you watch all of it. But if you only have time for the highlights, here are some times you can sync to:
      06:18 - hear the Berkeley DB JE features that led Yammer to select it, including: replication; auto leader election and failover; configurable durability and consistency guarantees
      23:10 - system performance characteristics
      35:08 - tips and tricks for using Berkeley DB JE
    I know the Berkeley DB development team is very pleased that BDB JE is working out well for Yammer. We definitely encourage others out there to take note of this success, especially if your requirements are similar to Yammer's (which Ryan outlines at the beginning of his talk).
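    For readers who have not touched Berkeley DB JE before, here is a rough sketch of what basic use of its embedded Java base API looks like: an environment directory, a database handle, and byte-array keys and values. This sketch is mine, not from the talk; the directory, database name and data are made up, and exact configuration details can vary between JE releases.

      import java.io.File;
      import java.nio.charset.StandardCharsets;

      import com.sleepycat.je.Database;
      import com.sleepycat.je.DatabaseConfig;
      import com.sleepycat.je.DatabaseEntry;
      import com.sleepycat.je.Environment;
      import com.sleepycat.je.EnvironmentConfig;
      import com.sleepycat.je.LockMode;
      import com.sleepycat.je.OperationStatus;

      public class BdbJeSketch {
          public static void main(String[] args) throws Exception {
              File envHome = new File("je-env");
              envHome.mkdirs();                              // JE expects the directory to exist

              EnvironmentConfig envConfig = new EnvironmentConfig();
              envConfig.setAllowCreate(true);                // create the environment on first use
              Environment env = new Environment(envHome, envConfig);

              DatabaseConfig dbConfig = new DatabaseConfig();
              dbConfig.setAllowCreate(true);
              Database db = env.openDatabase(null, "messages", dbConfig);

              // Keys and values are plain byte arrays wrapped in DatabaseEntry.
              DatabaseEntry key = new DatabaseEntry("user:42".getBytes(StandardCharsets.UTF_8));
              DatabaseEntry value = new DatabaseEntry("hello".getBytes(StandardCharsets.UTF_8));
              db.put(null, key, value);

              DatabaseEntry found = new DatabaseEntry();
              OperationStatus status = db.get(null, key, found, LockMode.DEFAULT);
              if (status == OperationStatus.SUCCESS) {
                  System.out.println(new String(found.getData(), StandardCharsets.UTF_8));
              }

              db.close();
              env.close();
          }
      }

    The features Ryan highlights (replication, leader election, durability settings) sit on top of this same embedded key-value model rather than a separate server process.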

    Read the article

  • Splitting an MP4 file

    - by Asaf Chertkoff
    What is the fastest and least resource-consuming method for splitting an MP4 file?
    @Alex: it didn't work, and I don't know why. See the output here:
      asafche@asafche-laptop:~$ ffmpeg -vcodec copy -ss 0 -t 00:10:00 -i /home/asafche/Videos/myVideos/MAH00124.MP4 /home/asafche/Videos/myVideos/eh.mp4
      FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1.1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --extra-version=4:0.5.1-1ubuntu1.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
        libavutil 49.15. 0 / 49.15. 0
        libavcodec 52.20. 1 / 52.20. 1
        libavformat 52.31. 0 / 52.31. 0
        libavdevice 52. 1. 0 / 52. 1. 0
        libavfilter 0. 4. 0 / 0. 4. 0
        libswscale 0. 7. 1 / 0. 7. 1
        libpostproc 51. 2. 0 / 51. 2. 0
        built on Mar 31 2011 18:53:20, gcc: 4.4.3
      Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) -> 59.94 (60000/1001)
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/asafche/Videos/myVideos/MAH00124.MP4':
        Duration: 00:15:35.96, start: 0.000000, bitrate: 5664 kb/s
        Stream #0.0(und): Video: h264, yuv420p, 1280x720, 59.94 tbr, 59.94 tbn, 119.88 tbc
        Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16
      Output #0, mp4, to '/home/asafche/Videos/myVideos/eh.mp4':
        Stream #0.0(und): Video: libx264, yuv420p, 1280x720, q=2-31, 90k tbn, 59.94 tbc
        Stream #0.1(und): Audio: 0x0000, 48000 Hz, stereo, s16, 64 kb/s
      Stream mapping:
        Stream #0.0 -> #0.0
        Stream #0.1 -> #0.1
      Unsupported codec for output stream #0.1
    It says something about a different frame rate...

    Read the article

  • Power Your Cloud with Oracle Fusion Middleware

    - by user753488
    Introducing the biggest and most strategic event for Fusion Middleware this year: Power Your Cloud with Oracle Fusion Middleware. Running in over 50 cities across the globe, this event is aimed at architects, IT managers, and technical leaders like you who are using Fusion Middleware or trying to learn more about middleware in the context of Cloud computing. Join us for a special kickoff on Wednesday, June 29th in Chicago for the first event in North America. This event features an exclusive keynote from Rick Schultz, VP of Technology Product Marketing. Cloud is certainly all the rage, but what can we make of it? Alex Andrianopoulos, Vice President of Product Marketing for Fusion Middleware, states: “Not since Java was unveiled have we seen something so transformative hit the industry. The promised benefits of Cloud are many, significant, and deliver value to both IT organizations as well as the Line of Business. The benefits range from lower data center costs, to significantly reduced environmental impact, to the ability to capture more of the opportunities that the market presents through increased agility in resource deployment and dramatically reduced time to market.” With an ROI so promising, why isn’t everyone on Cloud already? It’s a question a lot of IT managers are struggling with. While the promised benefits of Cloud computing can be immense, achieving them requires much more than the adoption of a new architecture, the virtualization of servers, or the outsourcing of some or all of the IT resources. These may be useful steps towards moving to a Cloud computing blueprint, but on their own they do not deliver Cloud computing and its associated benefits to the enterprise. This is exactly what we’ll be addressing in the event series: ways you can leverage the Complete, Open and Integrated capabilities of Oracle Fusion Middleware today to get one step closer to Cloud, whether you’re:
      Leveraging Exalogic Elastic Cloud to consolidate your applications
      Improving agility with Oracle SOA to generate a foundation for shared data services
      Securing and managing your Cloud using Oracle Identity Management and Oracle Enterprise Manager
      Migrating from mainframe to Cloud using Oracle Tuxedo, Coherence and GoldenGate
      Building applications in the Cloud more swiftly and easily with Oracle’s WebCenter Suite
    Join us for the first-of-its-kind event in Chicago this week by registering now, or find an event near you. Learn more about Oracle Fusion Middleware and Cloud computing today on the Oracle.com website by going to http://www.Oracle.com/goto/Middleware4Cloud

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation. Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left. I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading! Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all? Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
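    One way to approximate the Vector2 scale, working only from the projection described above (screen = (x, 0.707 * (y + z))), is to project the aim direction and its in-plane perpendicular and use the lengths of the two projected vectors as the length and width scale. The sketch below is mine, not part of the original question; it is written in plain Java rather than XNA/C#, and it deliberately ignores the skew mentioned in the note above, so it can only fake the effect rather than reproduce a true perspective.

      // Approximate length/width scale for an arrow lying in the XZ ground plane,
      // under the projection screen = (x, 0.707 * (y + z)) described in the question.
      // Assumes the sprite's local X axis points along the aim direction; skew is ignored.
      public final class AimerScale {

          private static final double PERSPECTIVE = 0.707;

          /** aimAngle is the aim direction in the XZ ground plane, in radians (0 = +X). */
          public static double[] scaleFor(double aimAngle) {
              // World-space aim direction on the ground plane and its in-plane perpendicular.
              double dirX = Math.cos(aimAngle), dirZ = Math.sin(aimAngle);
              double perpX = -Math.sin(aimAngle), perpZ = Math.cos(aimAngle);

              // Project both onto the screen; y = 0 on the ground plane, so only z is scaled.
              double lengthScale = Math.hypot(dirX, PERSPECTIVE * dirZ);   // along the arrow
              double widthScale = Math.hypot(perpX, PERSPECTIVE * perpZ);  // across the arrow
              return new double[] { lengthScale, widthScale };
          }

          public static void main(String[] args) {
              System.out.println(java.util.Arrays.toString(scaleFor(0)));           // facing sideways: ~[1.0, 0.707]
              System.out.println(java.util.Arrays.toString(scaleFor(Math.PI / 2))); // facing north/south: ~[0.707, 1.0]
          }
      }

    The two printed cases match the behaviour described in the question: the arrow reads longer and narrower when firing sideways, and shorter and wider when firing north or south.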

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. Just a quick recap:
    Standard behavior tree
      Each execution tick the tree is traversed from the root in depth-first order.
      The execution order is implicitly expressed by the tree structure. So in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.
    Event-driven BT
      During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
      The first traversal implicitly produces a depth-first ordered queue in the scheduler.
      Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks will be enqueued in the scheduler.
      Without parallel nodes in the tree there will be up to 1 task running in the scheduler.
      Without parallel nodes, the tasks in the queue (excluding dynamic priority implementations) will always be ordered in a depth-first order (is this right?)
    Now, from my understanding of a possible implementation, there are two requirements I think must be guaranteed by a correct implementation (I'm not sure, though):
      The result of the traversal should be independent of which implementation strategy is used.
      The traversal result must be deterministic.
    I'm struggling trying to guarantee both in the case of parallel nodes. Here's an example:
      Parallel_1
      -->Sequence_1
      ---->leaf_A
      ---->leaf_B
      -->leaf_C
    Considering a FIFO policy of the scheduler, before the leaf_A node terminates the tasks in the scheduler are:
      P1(suspended), S1(suspended), leaf_A(running), leaf_C(running)
    When leaf_A terminates, leaf_B will be scheduled (at the end of the queue), so the queue will become:
      P1(suspended), S1(suspended), leaf_C(running), leaf_B(running)
    In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: do I understand correctly how event-driven BTs work? How can I guarantee that the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?
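    To make the ordering concern concrete, here is a small stand-alone simulation of the example above. It is my own sketch, not code from any of the referenced articles, and it models only the scheduling queue: when leaf_A succeeds, Sequence_1 wakes up and appends leaf_B at the tail of the FIFO queue, so from then on leaf_B ticks after leaf_C, unlike in a plain depth-first traversal.

      import java.util.ArrayList;
      import java.util.List;

      public class FifoSchedulerDemo {
          public static void main(String[] args) {
              // Only the running leaves are listed; Parallel_1 and Sequence_1 stay suspended as observers.
              List<String> running = new ArrayList<>();
              running.add("leaf_A");
              running.add("leaf_C");

              for (int tick = 1; tick <= 3; tick++) {
                  System.out.println("tick " + tick + " update order: " + running);
                  for (String task : new ArrayList<>(running)) {   // tick tasks in queue order
                      if (task.equals("leaf_A") && tick == 1) {
                          // leaf_A succeeds; its observer (Sequence_1) enqueues leaf_B at the tail.
                          running.remove("leaf_A");
                          running.add("leaf_B");
                      }
                      // other tasks just keep running
                  }
              }
              // Prints:
              //   tick 1 update order: [leaf_A, leaf_C]
              //   tick 2 update order: [leaf_C, leaf_B]   <- leaf_B now ticks after leaf_C
              //   tick 3 update order: [leaf_C, leaf_B]
              // whereas a full depth-first traversal from the root would evaluate leaf_B before leaf_C.
          }
      }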

    Read the article

  • IIS 7.5 Basic authorization issue

    - by Alsin
    When I log on using the correct user name\password (I always copy-paste them) I get a 401.1 error. The user name and password are correct (the user is created locally on the server, not a domain one). I can run a program as this user (runas /noprofile /user:tmp notepad.exe). Basic authorization's default domain is the server name; the realm is empty. I've saved a FailedReqLogFile. AUTH_BASIC_LOGON_FAILED shows ErrorCode="Logon failure: unknown user name or bad password. (0x8007052e)" and MODULE_SET_RESPONSE_ERROR_STATUS shows ModuleName="BasicAuthenticationModule", Notification="AUTHENTICATE_REQUEST", HttpStatus="401", HttpReason="Unauthorized", HttpSubStatus="1", ErrorCode="Logon failure: unknown user name or bad password. (0x8007052e)", ConfigExceptionInfo="". And one more thing - if I use my domain login\password it works! Basic Authentication is the only authentication enabled in the application... Could you please suggest how I can troubleshoot and fix this issue? Maybe somebody has hit it before... Best regards, Alex
    UPDATE: I get the 401.1 when trying to access the site from the local host. I can actually access the files from a remote host.

    Read the article

  • Authenticating Active Directory Users to Mac OS X Mavericks Server L2TP VPN Service

    - by dean
    We have a Windows Server 2012 Active Directory infrastructure that consists of two domain controllers. Bound to the Active Directory domain is a Mac OS X Mavericks Server 10.9.3. The server runs Profile Manager and VPN services. My Active Directory users are able to authenticate to the Profile Manager, but not the VPN. I have found several threads on other forums of other users reporting similar issues; here is just one of many references: https://discussions.apple.com/thread/5174619 It appears as though the issue is related to a CHAP authentication failure. Can anyone suggest what next troubleshooting steps I might take? Is there a way to liberalize the authentication mechanism to include MSCHAP? Here is an excerpt of the transaction from the logs. Please note the domain has been changed to example.com.
      Jun 6 15:25:03 profile-manager.example.com vpnd[10317]: Incoming call... Address given to client = 192.168.55.217
      Jun 6 15:25:03 profile-manager.example.com pppd[10677]: publish_entry SCDSet() failed: Success!
      Jun 6 15:25:03 --- last message repeated 2 times ---
      Jun 6 15:25:03 profile-manager.example.com pppd[10677]: pppd 2.4.2 (Apple version 727.90.1) started by root, uid 0
      Jun 6 15:25:03 profile-manager.example.com pppd[10677]: L2TP incoming call in progress from '108.46.112.181'...
      Jun 6 15:25:03 profile-manager.example.com racoon[257]: pfkey DELETE received: ESP 192.168.55.12[4500]->108.46.112.181[4500] spi=25137226(0x17f904a)
      Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP connection established.
      Jun 6 15:25:04 profile-manager kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 0)
      Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connect: ppp0 <--> socket[34:18]
      Jun 6 15:25:04 profile-manager.example.com pppd[10677]: CHAP peer authentication failed for alex
      Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connection terminated.
      Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnecting...
      Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnected
      Jun 6 15:25:04 profile-manager.example.com vpnd[10317]: --> Client with address = 192.168.55.217 has hung up

    Read the article

  • "What happens?" server performance monitor

    - by AlexAtNet
    Hello! After reviewing some threads about server monitoring software I ended up with a simple question: which of the server monitoring tools should I use for automatic detection of "abnormal" situations, with recommendations on how to fix them? I am looking for software that checks system performance after installation and calculates some average load values (memory, CPU, etc.). Then, when something happens (CPU load increases to 20%), it should try to detect the reason. If it is Apache, it should check the access logs; if MySQL, it should check the MySQL logs and tell me what happened. If it is because some user is decoding a lot of images, I'd like to know which command was executed, when, and under which user name. The same goes for disk usage, memory, number of processes, threads and so on. Ideally, this software should periodically check the system and report problems: errors in the PHP error log, outdated packages, security vulnerabilities. In other words, I'm looking for software that will keep my simple Debian/Apache/PHP/MySQL server healthy without forcing me to monitor the charts every day. I hope that such a program exists. Thanks, Alex

    Read the article

  • cset shield --kthread on: should I use this?

    - by lori
    I'm reading up on CPU shielding using Alex Tsariounov's cset utility here: https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility/tutorial
    In the tutorial I'm finding the wording around migrating kernel threads from having access to all CPUs to running only in a certain cpuset a bit ambiguous. The tutorial says the following:
      Some kernel threads can be moved into the unshielded system cpuset as well. These are the threads that are not bound to specific CPUs. If a kernel thread is bound to a specific CPU, then it is generally not a good idea to move that thread to the system set because at worst it may hang the system and at best it will slow the system down significantly. These threads are usually the IRQ threads on a real time Linux kernel, for example, and you may want to not move these kernel threads into system. If you leave them in the root cpuset, then they will have access to all CPUs.
    The tutorial then goes on to say:
      However, if your application demands an even "quieter" shield, then you can move all movable kernel threads into the unshielded system set with the following command.
        [zuul:cpuset-trunk]# cset shield -k on
        cset: --> activating kthread shielding
        cset: kthread shield activated, moving 70 tasks into system cpuset...
        [==================================================]%
        cset: done
    I am confused by this final sentence. By using the word "however", it seems to suggest that you typically should not move the movable kernel threads into the unshielded system set. Is this the case, or is it safe to move kernel threads which can be moved into a cpuset, thereby preventing them from running on some CPUs?

    Read the article

  • Timestamp Updating Constantly on /dev/null

    - by motorleague
    I've been working on a problem with a /dev/null file on an AIX system (just for background, it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace. At present I can't entirely rule out this being something specific to the application used at my workplace, so I compared with the CentOS and Debian based computers at home last night. The CentOS box, which runs 24 hours, had a mod time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it would have had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, which was a just-booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from it and STDOUT to it, and the modification time was unchanged (I'm not 100% sure whether directing data to /dev/null constitutes "writing to it" in the way it would for a normal file). So my question is essentially: could anybody please offer any advice with regard to what circumstances (permissions changes etc. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.

    Read the article

  • Windows Server 2008 (x64): Wont boot past bios page

    - by WebSolProv
    Happy New Year! A month or so ago I inherited responsibility for administering a small network, for my sins. The domain controller (yes, there is only one, and yes, I know it is best practice to have two even in a small domain setup) went down overnight and I have been trying all day to get it back up and running. Unfortunately this machine also administers our entire Active Directory setup:
    1) It goes through the BIOS without any errors, nothing whatsoever.
    2) It gets to the "safe mode, safe mode with networking, normal" menu, and if you select either of the safe mode options it loads a few files then reboots. If you select normal it just runs for a bit (it doesn't get to the Windows splash screen) and then reboots again.
    3) If you select Windows repair, it asks for an image to repair to; however, it would appear that none was taken that can be used (!!) or one is not being shown.
    4) I have tried repairing the boot sector and the boot configuration using Bootrec.exe, both of which it says completed successfully, but it still doesn't work.
    5) I have tried swapping the drives into another server to rule out hardware, and that didn't work either, so clearly it's the OS.
    6) I have tried running chkdsk, which ran fine, and also a memory check, which was also fine.
    We do have another machine on the network that was installed as a DC for when we decommission the current infrastructure, but when I try to "promote" it to the lead DC I get "you cannot modify domain or trust information because a PDC emulator cannot be contacted", so I am unable to replicate the Active Directory details. If anyone can think of any direction I should follow it would be greatly appreciated. Thanks, Alex

    Read the article

  • Multiple if functions

    - by user2948699
    I have a problem I'm hoping someone could help me with. I have five different text values in five cells. I am trying to combine these values into one cell with a comma between each. However, the trick is that if there is no value in (H6) then it must place the word "and" between cells (F6) and (G6). If there is a value in (H6) then it must place the word "and" between (G6) and (H6). In the same statement I must also include: if there is no value in (G6) then it must place the word "and" between cells (E6) and (F6). Please see the attached image. I am trying to get the highlighted statements into one cell, so multiple IF statements in one cell. Anyone?
      =IF(G8=0,(D8)&", "&(E8)&" and "&(F8),(D8)&", "&(E8)&", "&(F8)&" and "&(G8)
      =IF(H8=0,(D8)&", "&(E8)&", "&(F8)&" and "&(G8),(D8)&", "&(E8)&", "&(F8)&", "&(G8)&" and "&(H8)))
    I can't figure out the code. Many thanks. Alex
    Edit: The original image can be found here if the size of the inlined one is too small.

    Read the article

  • Top-Rated JavaScript Blogs

    - by Andreas Grech
    I am currently trying to find some blogs that talk (almost solely) about the JavaScript language, due to the fact that most of the time, bloggers with real-life experience at work or in home development can explain certain quirks and hidden features more clearly and concisely than most 'Official Language Specifications'. Below is a list of blogs that are JavaScript based (I will update the list as more answers flow in):
      DHTML Kitchen, by Garrett Smith
      Robert's Talk, by Robert Nyman
      EJohn, by John Resig (of jQuery)
      Crockford's JavaScript Page, by Douglas Crockford
      Dean.edwards.name, by Dean Edwards
      Ajaxian, by various (@Martin)
      The JavaScript Weblog, by various
      SitePoint's JavaScript and CSS Page, by various
      AjaxBlog, by various
      Eric Lippert's Blog, by Eric Lippert (talks about JScript and JScript.Net)
      Web Bug Track, by various (@scunliffe)
      The Strange Zen Of JavaScript, by Scott Andrew
      Alex Russell (of Dojo) (@Eran Galperin)
      Ariel Flesler (@Eran Galperin)
      Nihilogic, by Jacob Seidelin (@llimllib)
      Peter's Blog, by Peter Michaux (@Borgar)
      Flagrant Badassery, by Steve Levithan (@Borgar)
      ./with Imagination, by Dustin Diaz (@Borgar)
      HedgerWow (@Borgar)
      Dreaming in Javascript, by Nosredna
      spudly.shuoink.com, by Stephen Sorensen
      Yahoo! User Interface Blog, by various (@Borgar)
      remy sharp's b:log, by Remy Sharp (@Borgar)
      JScript Blog, by the JScript Team (@Borgar)
      Dmitry Baranovskiy’s Web Log, by Dmitry Baranovskiy
      James Padolsey's Blog (@Kenny Eliasson)
      Perfection Kills; Exploring JavaScript by example, by Juriy Zaytsev
      DailyJS (@Ric)
      NCZOnline (@Kenny Eliasson), by Nicholas C. Zakas
    Which top-rated blogs am I currently missing from the above list that you think any JavaScript developer should read (and follow)?

    Read the article

  • How do I break down an NSTimeInterval into year, months, days, hours, minutes and seconds on iPhone?

    - by willc2
    I have a time interval that spans years and I want all the time components from year down to seconds. My first thought is to integer divide the time interval by seconds in a year, subtract that from a running total of seconds, divide that by seconds in a month, subtract that from the running total and so on. That just seems convoluted and I've read that whenever you are doing something that looks convoluted, there is probably a built-in method. Is there? I integrated Alex's 2nd method into my code. It's in a method called by a UIDatePicker in my interface.
      NSDate *now = [NSDate date];
      NSDate *then = self.datePicker.date;
      NSTimeInterval howLong = [now timeIntervalSinceDate:then];
      NSDate *date = [NSDate dateWithTimeIntervalSince1970:howLong];
      NSString *dateStr = [date description];
      const char *dateStrPtr = [dateStr UTF8String];
      int year, month, day, hour, minute, sec;
      sscanf(dateStrPtr, "%d-%d-%d %d:%d:%d", &year, &month, &day, &hour, &minute, &sec);
      year -= 1970;
      NSLog(@"%d years\n%d months\n%d days\n%d hours\n%d minutes\n%d seconds", year, month, day, hour, minute, sec);
    When I set the date picker to a date 1 year and 1 day in the past, I get:
      1 years
      1 months
      1 days
      16 hours
      0 minutes
      20 seconds
    which is 1 month and 16 hours off. If I set the date picker to 1 day in the past, I am off by the same amount.
    Update: I have an app that calculates your age in years, given your birthday (set from a UIDatePicker), yet it was often off. This proves there was an inaccuracy, but I can't figure out where it comes from. Can you?

    Read the article

  • How to prevent buffer overflow in C/C++?

    - by alexpov
    Hello, I am using the following code to redirect stdout to a pipe, then read all the data from the pipe into a buffer. I have two problems. First problem: when I send a string (after redirection) bigger than the pipe's BUFF_SIZE, the program stops responding (a deadlock or something). Second problem: when I try to read from the pipe before something was sent to stdout, I get the same response - the program stops responding and the _read call gets stuck... The issue is that I don't know the amount of data that will be sent to the pipe after the redirection. I don't know how to handle the first problem, and I'll be glad for help. The second problem I solved with a simple workaround: right after the redirection I print a space character to stdout, but I guess that this solution is not the correct one ...
      #include <fcntl.h>
      #include <io.h>
      #include <iostream>

      #define READ 0
      #define WRITE 1
      #define BUFF_SIZE 5

      using namespace std;

      int main()
      {
          int stdout_pipe[2];
          int saved_stdout;
          saved_stdout = _dup(_fileno(stdout));            // save stdout
          if (_pipe(stdout_pipe, BUFF_SIZE, O_TEXT) != 0)  // make a pipe
          {
              exit(1);
          }
          fflush(stdout);
          if (_dup2(stdout_pipe[1], _fileno(stdout)) != 0) // redirect stdout to the pipe
          {
              exit(1);
          }
          ios::sync_with_stdio();
          setvbuf(stdout, NULL, _IONBF, 0);
          // anything sent to stdout goes now to the pipe
          // printf(" ");   // workaround for the second problem
          printf("123456"); // first problem
          char buffer[BUFF_SIZE] = {0};
          int nOutRead = 0;
          nOutRead = _read(stdout_pipe[READ], buffer, BUFF_SIZE); // second problem
          buffer[nOutRead] = '\0';
          // reconnect stdout
          if (_dup2(saved_stdout, _fileno(stdout)) != 0)
          {
              exit(1);
          }
          ios::sync_with_stdio();
          printf("buffer: %s\n", buffer);
      }
    Thanks, Alex

    Read the article

  • forkpty - socket

    - by Alexxx
    Hi, I'm trying to develop a simple "telnet/server" daemon which has to run a program on a new socket connection. This part is working fine. But I have to associate my new process with a pty, because this process has some terminal capabilities (like a readline). The code I've developed is (where socketfd is the new socket file descriptor for the new incoming connection):
      int masterfd, pid;
      const char *prgName = "...";
      char *arguments[10] = ....;

      if ((pid = forkpty(&masterfd, NULL, NULL, NULL)) < 0)
          perror("FORK");
      else if (pid)
          return pid;
      else
      {
          close(STDOUT_FILENO);
          dup2(socketfd, STDOUT_FILENO);
          close(STDIN_FILENO);
          dup2(socketfd, STDIN_FILENO);
          close(STDERR_FILENO);
          dup2(socketfd, STDERR_FILENO);
          if (execvp(prgName, arguments) < 0)
          {
              perror("execvp");
              exit(2);
          }
      }
    With that code, the stdin/stdout/stderr file descriptors of my prgName are associated with the socket (when looking with ls -la /proc/PID/fd), and so the terminal capabilities of this process don't work. A test with a connection via ssh/sshd to the remote device, executing prgName "locally" (under the ssh connection), shows that the stdin/stdout/stderr fds of that process are associated with a pty (and so the terminal capabilities of the process work fine). What am I doing wrong? How do I associate my socketfd with the pty (created by forkpty)? Thanks, Alex

    Read the article

  • Help with editing data in entity framework.

    - by Levelbit
    The title of this question is simple because there is no easy explanation for what I'm trying to ask you. I hope you'll understand in the end :). I have two tables in my database: Company and Location (relationship: one to many) and corresponding entity sets. In my WPF application I have a datagrid which I want to fill with locations, and I want to be able to edit every row in a separate window as some form of details view (so I don't want to edit my data in the datagrid). I did this by accessing the Location entity from the selected row, creating a new Location entity, and copying the properties from the original entity to the newly created one - something like cloning the object. After editing, if I press OK the changed data is copied back to the original object, and if I press Cancel nothing happens. Of course, you're probably thinking I could use the NoTracking option and the AttachAsModified method, as was mentioned as a solution in some earlier questions (see: http://stackoverflow.com/questions/803022/changing-entities-in-the-entityframework) by Alex James, but let's say I had some problems with that and I have my own reasons for doing this. Finally, because the navigation property (Company) of my Location entity is assigned to the newly created Location object (during cloning), for some reason an additional object - a copy of the object I want to edit from the datagrid - is created in the object context without my intending it (a similar problem: http://blogs.msdn.com/alexj/archive/2009/12/02/tip-47-how-fix-up-can-make-it-hard-to-change-relationships.aspx). So, when I call ObjectContext.SaveChanges it inserts an additional row of data into my database table Location, the same as the one I wanted to edit. I have read about something like this, but I don't quite understand why that happens and how to block or override this behaviour. I hope I was as clear as I could be. Please share solutions or some other ideas.

    Read the article

  • How to parse text fragments located outside tags (inbetween tags) by simplehtmldom?

    - by moogeek
    Hello! I'm using simplehtmldom to parse HTML and I'm stuck on parsing plain text located outside of any tag (but between two different tags):
      <div class="text_small">
          <b>Adress:</b> 7 Hange Road<br>
          <b>Phone:</b> 415641587484<br>
          <b>Contact:</b> Alex<br>
          <b>Meeting Time:</b> 12:00-13:00<br>
      </div>
    Is it possible to get these values of Adress, Phone, Contact and Meeting Time? I wonder if there is an opportunity to pass CSS selectors into the nextSibling/previousSibling functions...
      foreach($html->find('div.text_small') as $div_descr) {
          foreach($div_descr->find('b') as $b) {
              if ($b->innertext=="Adress:") {
                  //someaction
              }
              if ($b->innertext=="Phone:") {
                  //someaction
              }
              if ($b->innertext=="Contact:") {
                  //someaction
              }
              if ($b->innertext=="Meeting Time:") {
                  //someaction
              }
          }
      }
    What should I use instead of "someaction"?
    Update: Yes, I don't have access to edit the target page. Otherwise, would it be worth it? :)

    Read the article

  • How do I store complex objects in javascript?

    - by Colen
    Hello, I need to be able to store objects in javascript, and access them very quickly. For example, I have a list of vehicles, defined like so:
      { "name": "Jim's Ford Focus", "color": "white", isDamaged: true, wheels: 4 }
      { "name": "Bob's Suzuki Swift", "color": "green", isDamaged: false, wheels: 4 }
      { "name": "Alex's Harley Davidson", "color": "black", isDamaged: false, wheels: 2 }
    There will potentially be hundreds of these vehicle entries, which might be accessed thousands of times. I need to be able to access them as fast as possible, ideally in some useful way. For example, I could store the objects in an array. Then I could simply say vehicles[0] to get the Ford Focus entry, vehicles[1] to get the Suzuki Swift entry, etc. However, how do I know which entry is the Ford Focus? I want to simply ask "find me Jim's Ford Focus" and have the object returned to me, as fast as possible. For example, in another language, I might use a hash table, indexed by name. How can I do this in javascript? Or, is there a better way? Thanks.

    Read the article

  • How can I create a dynamic LINQ query in C# with possible multiple group by clauses?

    - by FordPrefect141
    I have been a programmer for some years now but I am a newcomer to LINQ and C#, so forgive me if my question sounds particularly stupid. I hope someone may be able to point me in the right direction. My task is to come up with the ability to form a dynamic multiple-group-by LINQ query within a C# script, using a generic list as the source. For example, say I have a list containing multiple items with the following structure:
      FieldChar1 - character
      FieldChar2 - character
      FieldChar3 - character
      FieldNum1 - numeric
      FieldNum2 - numeric
    In a nutshell, I want to be able to create a LINQ query that will sum FieldNum1 and FieldNum2 grouped by any one, two or all three of the FieldChar fields, decided at runtime depending on the user's requirements, as well as selecting those FieldChar fields in the same query. I have dynamic.cs in my project, which includes a GroupByMany extension method, but I have to admit I am really not sure how to put these to use. I am able to get the desired results if I use a query with hard-wired group by clauses, but not dynamically. Apologies for any erroneous nomenclature; I am new to this language, but any advice would be most welcome. Many thanks, Alex

    Read the article

  • ExecutorService - scaling

    - by Stanciu Alexandru-Marian
    I am trying to write a program in Java using ExecutorService and its invokeAll function. My question is: does invokeAll solve the tasks simultaneously? I mean, if I have two processors, will there be two workers at the same time? I can't make it scale correctly: it takes the same time to complete the problem whether I use newFixedThreadPool(2) or 1.
      List<Future<PartialSolution>> list = new ArrayList<Future<PartialSolution>>();
      Collection<Callable<PartialSolution>> tasks = new ArrayList<Callable<PartialSolution>>();
      for (PartialSolution ps : wp) {
          tasks.add(new Map(ps, keyWords));
      }
      list = executor.invokeAll(tasks);
    Map is a class that implements Callable and wp is a vector of PartialSolution, a class that holds some information at different times. Why doesn't it scale? What could be the problem? Thank you, Alex
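    For what it's worth, invokeAll does hand the tasks to the pool's worker threads and runs up to poolSize of them concurrently; it only blocks the calling thread until all of them have completed. So with CPU-bound tasks, newFixedThreadPool(2) on a two-core machine should roughly halve the wall-clock time. The following self-contained sketch (mine, not the poster's code) can be used to check whether the pool itself scales on a given machine; if this scales but the real program doesn't, the bottleneck is likely inside the Map tasks themselves, for example synchronization on shared state or tasks that are too small to parallelize.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.Callable;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      public class InvokeAllScaling {

          // A deliberately CPU-bound task so scaling is visible.
          static Callable<Long> busyTask() {
              return () -> {
                  long sum = 0;
                  for (long i = 0; i < 200_000_000L; i++) {
                      sum += i % 7;
                  }
                  return sum;
              };
          }

          static long timeWithThreads(int threads) throws Exception {
              ExecutorService executor = Executors.newFixedThreadPool(threads);
              List<Callable<Long>> tasks = new ArrayList<>();
              for (int i = 0; i < 4; i++) {
                  tasks.add(busyTask());
              }
              long start = System.nanoTime();
              executor.invokeAll(tasks);          // blocks until every task has completed
              long elapsedMs = (System.nanoTime() - start) / 1_000_000;
              executor.shutdown();
              return elapsedMs;
          }

          public static void main(String[] args) throws Exception {
              System.out.println("1 thread : " + timeWithThreads(1) + " ms");
              System.out.println("2 threads: " + timeWithThreads(2) + " ms"); // roughly half on a 2-core machine
          }
      }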

    Read the article

  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be in-efficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages. The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. Then, if the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated and served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML direct from the database and serve it (therefore avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage. However, one issue that I am having is that I can't figure out how to programatically check when the version of code uploaded to the app engine was updated. I am open to any suggestions on this approach, or other approaches for caching generated html. Note that while memcache could help in this situation, I believe that it is not the final solution since I really only need to re-generate html when the code is updated (as opposed to every time the memcache expires). Kind Regards, and thank you in advance for any suggestions you may be able to offer. -Alex

    Read the article
