Search Results

Search found 1002 results on 41 pages for 'infinite'.

  • Meet our 2009 Oracle Graduates in South Africa

    - by anca.rosu
    Focusing on the broader Oracle community, Oracle South Africa initiated its first skills development programme in May 1988. Since its inception the programme has developed and improved and every year more graduates are taken on board. The Oracle Graduate Programme is made up of specific learning paths designed around customer, partner and Oracle specifications and is structured to meet the urgent skills requirements in the Oracle “economy”. The training programmes have a specific duration and incorporate both theoretical and practical application of Oracle product sets. It is aimed at creating: Meaningful employment for graduates; Learning opportunities for individuals within the organization so that career growth opportunities are exploited to the fullest; Capacity building for small enterprises which is aligned to Oracle’s Enterprise Development Programme Meet our five graduates who joined us in December 2008 and have spent over a year with us! Let’s get their initial feedback on the graduate programme and on their assignment to Jordan. Lector   On the Oracle Graduate Programme: “The Oracle Graduate Programme is an experience of a life time. I would not trade it for anything. It’s challenging and rewarding. I am proud and happy to be in an organization like Oracle” On the assignment in Jordan: "Friendly, welcoming people, world class instructors always willing to go the extra mile. What more can you ask for?"   Lungile On the Oracle Graduate Programme: “I joined Oracle as part of the graduate intake for pre-sales in order to develop my skills and knowledge. Working at Oracle has been an overwhelmingly positive experience as it has encouraged me to progress with my personal development. I am hugely grateful. It has been a great challenge and an awesome opportunity.” On the assignment to Jordan: “Going to Jordan was a great opportunity and the experience of a lifetime. The people were very welcoming and friendly. The culture was totally different from ours - the food, the clothes and the weather. It was an amazingly different experience to work from Sunday to Thursday with Friday and Saturday as the weekend.” Thabo On the Oracle Graduate Programme: “Life is an infinite learning path. I truly value growth. I believe for one to grow, one needs to be challenged to your full potential. The Oracle Graduate Programme offers real growth – and so much more.” On the assignment to Jordan: “I was amazed by the cultural differences. I now understood that to be part of the global community, I must embrace our similarities and understand our differences.”   Albeauty On the Oracle Graduate Programme: “Responsibility, dedication, focus and taking initiative … these are the key points I learned from Oracle. It is such an honour to finally be part of the Oracle family. The graduate programme itself was a great experience as I managed to learn how Oracle operates – it has been the highlight of my year. I believe that my hard work will assist in the growth of the company.” On the Jordan assignment: “A memory worth embracing. Going to Jordan was a great opportunity as I learned a lot with respect to integration between different cultures and getting to adapt to all things different. I, along with almost every other graduate, discovered that Oracle is far more than a database company. 
Now I know there is far more to the ‘Big Red’ name.” Emmanuel On the Oracle Graduate Programme: “The programme gave me invaluable exposure to the ICT sector and also provided an opportunity to travel, network and exchange ideas with others. The formal training helped me to improve my presentation skills and gave me a better understanding of business etiquette and communication.” On the assignment to Jordan: “It was my first trip abroad. It was a great chance to get to know each other. I had the opportunity to share ideas, share personal stuff as a team. We met experts who gave us superb training in Oracle Technologies. It was great.”   If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com.   Technorati Tags: Oracle community,South Africa,Graduate Programme,Jordan,Technologies

    Read the article

  • What is this code?

    - by Aerovistae
    This is from the Evolution of a Programmer "joke", at the "Master Programmer" level. It seems to be C++, but I don't know what all this bloated extra stuff is, nor did any Google searches turn up anything except the joke I took it from. Can anyone tell me more about what I'm reading here? [ uuid(2573F8F4-CFEE-101A-9A9F-00AA00342820) ] library LHello { // bring in the master library importlib("actimp.tlb"); importlib("actexp.tlb"); // bring in my interfaces #include "pshlo.idl" [ uuid(2573F8F5-CFEE-101A-9A9F-00AA00342820) ] cotype THello { interface IHello; interface IPersistFile; }; }; [ exe, uuid(2573F890-CFEE-101A-9A9F-00AA00342820) ] module CHelloLib { // some code related header files importheader(<windows.h>); importheader(<ole2.h>); importheader(<except.hxx>); importheader("pshlo.h"); importheader("shlo.hxx"); importheader("mycls.hxx"); // needed typelibs importlib("actimp.tlb"); importlib("actexp.tlb"); importlib("thlo.tlb"); [ uuid(2573F891-CFEE-101A-9A9F-00AA00342820), aggregatable ] coclass CHello { cotype THello; }; }; #include "ipfix.hxx" extern HANDLE hEvent; class CHello : public CHelloBase { public: IPFIX(CLSID_CHello); CHello(IUnknown *pUnk); ~CHello(); HRESULT __stdcall PrintSz(LPWSTR pwszString); private: static int cObjRef; }; #include <windows.h> #include <ole2.h> #include <stdio.h> #include <stdlib.h> #include "thlo.h" #include "pshlo.h" #include "shlo.hxx" #include "mycls.hxx" int CHello:cObjRef = 0; CHello::CHello(IUnknown *pUnk) : CHelloBase(pUnk) { cObjRef++; return; } HRESULT __stdcall CHello::PrintSz(LPWSTR pwszString) { printf("%ws\n", pwszString); return(ResultFromScode(S_OK)); } CHello::~CHello(void) { // when the object count goes to zero, stop the server cObjRef--; if( cObjRef == 0 ) PulseEvent(hEvent); return; } #include <windows.h> #include <ole2.h> #include "pshlo.h" #include "shlo.hxx" #include "mycls.hxx" HANDLE hEvent; int _cdecl main( int argc, char * argv[] ) { ULONG ulRef; DWORD dwRegistration; CHelloCF *pCF = new CHelloCF(); hEvent = CreateEvent(NULL, FALSE, FALSE, NULL); // Initialize the OLE libraries CoInitiali, NULL); // Initialize the OLE libraries CoInitializeEx(NULL, COINIT_MULTITHREADED); CoRegisterClassObject(CLSID_CHello, pCF, CLSCTX_LOCAL_SERVER, REGCLS_MULTIPLEUSE, &dwRegistration); // wait on an event to stop WaitForSingleObject(hEvent, INFINITE); // revoke and release the class object CoRevokeClassObject(dwRegistration); ulRef = pCF->Release(); // Tell OLE we are going away. 
CoUninitialize(); return(0); } extern CLSID CLSID_CHello; extern UUID LIBID_CHelloLib; CLSID CLSID_CHello = { /* 2573F891-CFEE-101A-9A9F-00AA00342820 */ 0x2573F891, 0xCFEE, 0x101A, { 0x9A, 0x9F, 0x00, 0xAA, 0x00, 0x34, 0x28, 0x20 } }; UUID LIBID_CHelloLib = { /* 2573F890-CFEE-101A-9A9F-00AA00342820 */ 0x2573F890, 0xCFEE, 0x101A, { 0x9A, 0x9F, 0x00, 0xAA, 0x00, 0x34, 0x28, 0x20 } }; #include <windows.h> #include <ole2.h> #include <stdlib.h> #include <string.h> #include <stdio.h> #include "pshlo.h" #include "shlo.hxx" #include "clsid.h" int _cdecl main( int argc, char * argv[] ) { HRESULT hRslt; IHello *pHello; ULONG ulCnt; IMoniker * pmk; WCHAR wcsT[_MAX_PATH]; WCHAR wcsPath[2 * _MAX_PATH]; // get object path wcsPath[0] = '\0'; wcsT[0] = '\0'; if( argc > 1) { mbstowcs(wcsPath, argv[1], strlen(argv[1]) + 1); wcsupr(wcsPath); } else { fprintf(stderr, "Object path must be specified\n"); return(1); } // get print string if(argc > 2) mbstowcs(wcsT, argv[2], strlen(argv[2]) + 1); else wcscpy(wcsT, L"Hello World"); printf("Linking to object %ws\n", wcsPath); printf("Text String %ws\n", wcsT); // Initialize the OLE libraries hRslt = CoInitializeEx(NULL, COINIT_MULTITHREADED); if(SUCCEEDED(hRslt)) { hRslt = CreateFileMoniker(wcsPath, &pmk); if(SUCCEEDED(hRslt)) hRslt = BindMoniker(pmk, 0, IID_IHello, (void **)&pHello); if(SUCCEEDED(hRslt)) { // print a string out pHello->PrintSz(wcsT); Sleep(2000); ulCnt = pHello->Release(); } else printf("Failure to connect, status: %lx", hRslt); // Tell OLE we are going away. CoUninitialize(); } return(0); }

    Read the article

  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines. Why use DCOM? In GlassFish 3.1 SSH is used as the standard way to run commands on remote nodes for clustering.  It is very difficult for users to get SSH configured properly on Windows.  SSH does not come with Windows so we have to depend on third party tools.  And then the user is forced to install and configure these tools -- which can be tricky. DCOM is available on all supported platforms.  It is built-in to Windows. The idea is to use DCOM to communicate with remote Windows nodes.  This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes. Implementation HighlightsTwo open Source Libraries have been added to GlassFish: Jcifs – a SAMBA implementation in Java J-interop – A Java implementation for making DCOM calls to remote Windows computers.   Note that any supported platform can use DCOM to work with Windows nodes -- not just Windows.E.g. you can have a Linux DAS work with Windows remote instances.All existing SSH commands now have a corresponding DCOM command – except for setup-ssh which isn’t needed for DCOM.  validate-dcom is an all new command. New DCOM Commands create-node-dcom delete-node-dcom install-node-dcom list-nodes-dcom ping-node-dcom uninstall-node-dcom update-node-dcom validate-dcom setup-local-dcom (This is only available via Update Center for GlassFish 3.1.2) These commands are in-place in the trunk (4.0).  And in the branch (3.1.2) Windows Configuration Challenges There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service-pack, special drivers, software, configuration etc.  Later versions of Windows err on the side of tightening security be default.  This means that the Windows host may need to have configuration changes made.These configuration changes mostly need to be made by the user.  setup-local-dcom will assist you in making required changes to the Windows Registry.  See the reference blogs for details. The validate-dcom Command validate-dcom is a crucial command.  It should be run before any other commands.  If it does not run successfully then there is no point in running other commands.The validate-dcom command must be used from a DAS machine to test a different Windows machine.  If  validate-dcom runs successfully you can be confident that all the DCOM commands will work.  Conversely, the opposite is also true:  If validate-dcom fails, then no DCOM commands will work. What validate-dcom does Verify that the remote host is not the local machine. Resolves the remote host name Checks that the remote DCOM port is being listened on (135, 139) Checks that the remote host’s File Sharing is enabled (port 445) It copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct It runs a script that it copied on-the-fly to the remote host. Tips and Tricks The bread and butter commands that use DCOM are existing commands like create-instance, start-instance etc.   All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command.  This means that you can track these commands easily on the remote machine with the usual tools.  E.g. using AS_LOGFILE, looking at log files, etc.  It’s easy to attach a debugger to the remote asadmin process, “just in time”, if necessary. 
    How to debug the remote commands: edit the asadmin.bat file in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+) and add these options to the java call: -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234. Now if you run, say, start-instance on the DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234. It will be running start-local-instance and patiently waiting for you to attach.

    Read the article

  • Tip #13 java.io.File Surprises

    - by ByronNevins
    There is an assumption that I've seen in code many times that is totally wrong.  And this assumption can easily bite you.  The assumption is: File.getAbsolutePath and getAbsoluteFile return paths that are not relative.  Not true!  Sort of.  At least not in the way many people would assume.  All they do is make sure that the beginning of the path is absolute.  The rest of the path can be loaded with relative path elements.  What do you think the following code will print? public class Main {    public static void main(String[] args) {        try {            File f = new File("/temp/../temp/../temp/../");            File abs  = f.getAbsoluteFile();            File parent = abs.getParentFile();            System.out.println("Exists: " + f.exists());            System.out.println("Absolute Path: " + abs);            System.out.println("FileName: " + abs.getName());            System.out.printf("The Parent Directory of %s is %s\n", abs, parent);            System.out.printf("The CANONICAL Parent Directory of CANONICAL %s is %s\n",                        abs, abs.getCanonicalFile().getParent());            System.out.printf("The CANONICAL Parent Directory of ABSOLUTE %s is %s\n",                        abs, parent.getCanonicalFile());            System.out.println("Canonical Path: " + f.getCanonicalPath());        }        catch (IOException ex) {            System.out.println("Got an exception: " + ex);        }    }} Output: Exists: trueAbsolute Path: D:\temp\..\temp\..\temp\..FileName: ..The Parent Directory of D:\temp\..\temp\..\temp\.. is D:\temp\..\temp\..\tempThe CANONICAL Parent Directory of CANONICAL D:\temp\..\temp\..\temp\.. is nullThe CANONICAL Parent Directory of ABSOLUTE D:\temp\..\temp\..\temp\.. is D:\tempCanonical Path: D:\ Notice how it says that the parent of d:\ is d:\temp !!!The file, f, is really the root directory.  The parent is supposed to be null. I learned about this the hard way! getParentXXX simply hacks off the final item in the path. You can get totally unexpected results like the above. Easily. I filed a bug on this behavior a few years ago[1].   Recommendations: (1) Use getCanonical instead of getAbsolute.  There is a 1:1 mapping of files and canonical filenames.  I.e each file has one and only one canonical filename and it will definitely not have relative path elements in it.  There are an infinite number of absolute paths for each file. (2) To get the parent file for File f do the following instead of getParentFile: File parent = new File(f, ".."); [1] http://bt2ws.central.sun.com/CrPrint?id=6687287
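
    As a quick illustration of recommendation (1), here is a minimal sketch (the path is hypothetical, echoing the post's example): canonicalizing first and then asking for the parent means no ".." elements survive, so the root correctly reports a null parent rather than a parent path that canonicalizes to D:\temp as in the post's output.

        import java.io.File;
        import java.io.IOException;

        public class CanonicalParent {
            public static void main(String[] args) throws IOException {
                File f = new File("/temp/../temp/../temp/../");  // hypothetical path from the post
                // Resolve to the one-and-only canonical form before navigating
                File canonical = f.getCanonicalFile();
                System.out.println("Canonical: " + canonical);
                // For the root directory this prints null, as it should
                System.out.println("Parent of canonical: " + canonical.getParent());
            }
        }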

    Read the article

  • Having a hard time having consecutive animations for an attack

    - by Kelby Styler
    So I've been trying to figure this out for about 8 hours now...It's driving me nuts because I am pretty sure that it is something dead simple that I am just not understanding. I had everything working fine when I was just cycling through the animation: Idle - Attack - Attack 1 - Attack 2. Just in an infinite loop. The problem now is that I want it to go Attack - check if x time passes if ctrl pressed before x passes move to Attack 1, if not move back to Idle - Then either Attack 1 or Idle depending on how long has passed. I've almost gotten it a few time, but something always happens where it falls apart if I press ctrl too fast or after multiple cycles of the animation. Any help would be appreciated, I'm just at my wits end on this one. I've been looking at this so long that I just don't know where to go anymore. Code is below, here is the controller using UnityEngine; using System.Collections; public class MeleeAttack : MonoBehaviour { public int damage; public bool Attack; public bool Attack1; public bool Attack2; public bool Idle; private Animator animator; private int attnum = 0; private float count = 2f; private float timeLeft; //Gives value to damage output void MAttackDmg () { if (Input.GetKeyDown (KeyCode.RightControl) || Input.GetKeyDown (KeyCode.LeftControl)) { switch (attnum) { case (0): Attack = true; damage = 2; animator.SetBool ("Attack", Attack); attnum++; Idle = false; animator.SetBool ("Idle", Idle); timeLeft = count; break; case (1): Attack1 = true; damage = 2; animator.SetBool ("Attack1", Attack1); attnum++; Idle = false; animator.SetBool ("Idle", Idle); timeLeft = count; break; case (2): Attack2 = true; damage = 2; animator.SetBool ("Attack2", Attack2); attnum = 0; Idle = false; animator.SetBool ("Idle", Idle); timeLeft = count; break; } } if (Input.GetKeyUp (KeyCode.RightControl) || Input.GetKeyUp (KeyCode.LeftControl)) { switch (attnum) { case (0): Debug.Log ("false"); damage = 0; if (timeLeft <= 0f) { Attack2 = false; animator.SetBool ("Attack2", Attack2); Debug.Log ("t1"); Idle = true; animator.SetBool ("Idle", Idle); attnum = 0; timeLeft = count; } break; case (1): Debug.Log ("false1"); damage = 0; if (timeLeft <= 0f) { Debug.Log ("t2"); Attack = false; animator.SetBool ("Attack", Attack); Idle = true; animator.SetBool ("Idle", Idle); attnum = 0; timeLeft = count; } break; case (2): Debug.Log ("false2"); damage = 0; if (timeLeft <= 0f) { Attack1 = false; animator.SetBool ("Attack1", Attack1); Debug.Log ("t3"); Idle = true; animator.SetBool ("Idle", Idle); attnum = 0; timeLeft = count; } break; } } } // Use this for initialization void Awake () { animator = GetComponent<Animator> (); } // Update is called once per frame void Update () { timeLeft -= Time.deltaTime;; MAttackDmg (); } void Start (){ timeLeft = count; } }

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor built for aspiring companies to create innovative internet based applications and solutions. Whether you’re a garage startup with very little capital or a Fortune 1000 company, the ability to quickly setup, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your finger tips is great peace of mind. Drivers Cost avoidance Time to market Scalability Solution Here’s a sketch of how a basic Software as a Service solution might be built out: Ingredients Web Role – this hosts the core web application. Each web role will host an instance of the software and as the user base grows, additional roles can be spun up to meet demand. Access Control – this service is essential to managing user identity. It’s backed by a full blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability but it’s also flexible enough to be combined with external Identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry specific identity providers as well. Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there’s a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant’s proprietary data to ensure isolation from other tenants. Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input and provides finer grained control over the system’s scalability and throughput. Caching (optional) – as a web site traffic grows caching can be leveraged to keep frequently used read-only, user specific, and application resource data in a high-speed distributed in-memory for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token based security model that works alongside the Access Control service. Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users’ data is delivered securely. Training & Examples These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure (16 labs) Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. 
New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML SQL Azure (7 labs) Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data. Windows Azure Services (9 labs) As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure. Developing Applications for the Cloud, 2nd Edition (eBook) This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud. Fabrikam Shipping (SaaS reference application) This is a full end to end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it in a full subscription based solution. The demo you find here is the result of that investigation See my Windows Azure Resource Guide for more guidance on how to get started, including more links web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Why enumerator structs are a really bad idea

    - by Simon Cooper
    If you've ever poked around the .NET class libraries in Reflector, I'm sure you would have noticed that the generic collection classes all have implementations of their IEnumerator as a struct rather than a class. As you will see, this design decision has some rather unfortunate side effects... As is generally known in the .NET world, mutable structs are a Very Bad Idea; and there are several other blogs around explaining this (Eric Lippert's blog post explains the problem quite well). In the BCL, the generic collection enumerators are all mutable structs, as they need to keep track of where they are in the collection. This bit me quite hard when I was coding a wrapper around a LinkedList<int>.Enumerator. It boils down to this code: sealed class EnumeratorWrapper : IEnumerator<int> { private readonly LinkedList<int>.Enumerator m_Enumerator; public EnumeratorWrapper(LinkedList<int> linkedList) { m_Enumerator = linkedList.GetEnumerator(); } public int Current { get { return m_Enumerator.Current; } } object System.Collections.IEnumerator.Current { get { return Current; } } public bool MoveNext() { return m_Enumerator.MoveNext(); } public void Reset() { ((System.Collections.IEnumerator)m_Enumerator).Reset(); } public void Dispose() { m_Enumerator.Dispose(); } } The key line here is the MoveNext method. When I initially coded this, I thought that the call to m_Enumerator.MoveNext() would alter the enumerator state in the m_Enumerator class variable and so the enumeration would proceed in an orderly fashion through the collection. However, when I ran this code it went into an infinite loop - the m_Enumerator.MoveNext() call wasn't actually changing the state in the m_Enumerator variable at all, and my code was looping forever on the first collection element. It was only after disassembling that method that I found out what was going on The MoveNext method above results in the following IL: .method public hidebysig newslot virtual final instance bool MoveNext() cil managed { .maxstack 1 .locals init ( [0] bool CS$1$0000, [1] valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator CS$0$0001) L_0000: nop L_0001: ldarg.0 L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator L_0007: stloc.1 L_0008: ldloca.s CS$0$0001 L_000a: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext() L_000f: stloc.0 L_0010: br.s L_0012 L_0012: ldloc.0 L_0013: ret } Here, the important line is 0002 - m_Enumerator is accessed using the ldfld operator, which does the following: Finds the value of a field in the object whose reference is currently on the evaluation stack. So, what the MoveNext method is doing is the following: public bool MoveNext() { LinkedList<int>.Enumerator CS$0$0001 = this.m_Enumerator; bool CS$1$0000 = CS$0$0001.MoveNext(); return CS$1$0000; } The enumerator instance being modified by the call to MoveNext is the one stored in the CS$0$0001 variable on the stack, and not the one in the EnumeratorWrapper class instance. Hence why the state of m_Enumerator wasn't getting updated. Hmm, ok. Well, why is it doing this? If you have a read of Eric Lippert's blog post about this issue, you'll notice he quotes a few sections of the C# spec. In particular, 7.5.4: ...if the field is readonly and the reference occurs outside an instance constructor of the class in which the field is declared, then the result is a value, namely the value of the field I in the object referenced by E. 
And my m_Enumerator field is readonly! Indeed, if I remove the readonly from the class variable then the problem goes away, and the code works as expected. The IL confirms this: .method public hidebysig newslot virtual final instance bool MoveNext() cil managed { .maxstack 1 .locals init ( [0] bool CS$1$0000) L_0000: nop L_0001: ldarg.0 L_0002: ldflda valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator L_0007: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext() L_000c: stloc.0 L_000d: br.s L_000f L_000f: ldloc.0 L_0010: ret } Notice on line 0002, instead of the ldfld we had before, we've got a ldflda, which does this: Finds the address of a field in the object whose reference is currently on the evaluation stack. Instead of loading the value, we're loading the address of the m_Enumerator field. So now the call to MoveNext modifies the enumerator stored in the class rather than on the stack, and everything works as expected. Previously, I had thought enumerator structs were an odd but interesting feature of the BCL that I had used in the past to do linked list slices. However, effects like this only underline how dangerous mutable structs are, and I'm at a loss to explain why the enumerators were implemented as structs in the first place. (interestingly, the SortedList<TKey, TValue> enumerator is a struct but is private, which makes it even more odd - the only way it can be accessed is as a boxed IEnumerator!). I would love to hear people's theories as to why the enumerators are implemented in such a fashion. And bonus points if you can explain why LinkedList<int>.Enumerator.Reset is an explicit implementation but Dispose is implicit... Note to self: never ever ever code a mutable struct.

    Read the article

  • Real Time BI in the Real World

    - by tobin.gilman(at)oracle.com
    One of my favorite BI offerings from Oracle is a solution called Oracle Real Time Decisions. Whenever I mention this product in customer meetings, eyes light up. There are some fascinating examples of customers using it to up-sell, cross-sell, increase customer retention, and reduce risk in real time, with off the charts return on investment. I plan to share some of those stories in a future blog. In this post, however, I want to share some far more common real time analytics use case scenarios that are being addressed with widely deployed Oracle BI and data integration technologies. Not all real time BI applications require continuous learning, predictive modeling, and data mining. Many simply require the ability to integrate, aggregate, and access information that is current (typically within a few minutes or a few seconds). The use cases are infinite. A few I've seen: purchasing agents need to match demand against available inventory; manufacturing planners need to monitor current parts and material against scheduled build plans; airline agents need to match ticket demand against flight schedules; human resources managers need to track the status of global hiring requisitions against current headcount authorizations... you get the idea. One way of doing this is to run reports or federated queries directly against transactional systems. That approach can be viable if you only need to access simple data sets on rare occasions. High volume and complex queries can quickly bog down performance of mission critical transactional systems. There is an architecturally simple way of solving the problem, and it's being applied by real companies around the world to solve real needs in real time. Cbeyond is an Atlanta, GA based provider of voice, data and mobile business applications. They deliver real time information to their call center agents as they are interacting with their customers. The data they need resides in production CRM and other transactional systems, but instead of reporting directly off those systems, data is first moved to an operational data store (ODS). Rather than running data intensive, time consuming, and performance degrading batch ETL routines to populate the ODS, Cbeyond uses Oracle Golden Gate software to incrementally capture and move only the changed records from log files of the transactional systems every few minutes.
There is no impact on transactional system performance, and the information needed by call center representatives is up to date.  Oracle Business Intelligence software presents the information to services reps in a rich, visual, and highly interactive format. Avea is similar to Cbeyond.  They are a telecommunications company who integrates billing and customer information in an ODS that is accessed by their call center agents in real time using Oracle Golden Gate and Oracle Business Intelligence.  They've taken it a step further by using the ODS to feed a data warehouse.  The operational data store provides the current information needed by call center agents during "in flight" customer interactions.  The data warehouse is used for more sophisticated analysis of historical data.  For maximum performance, both the ODS and data warehouse run on the Oracle Exadata Database Machine. These are practical illustrations of companies addressing real time reporting and analysis needs using established business intelligence/data warehousing methodologies and tools common to many IT departments.  If real time BI could benefit your organization, you may be already be closer than you thought to having the pieces in place to solving the problem.    Give us a shout if you are interested in learning more or if you have an interesting use or approach to real-time BI.

    Read the article

  • Speed up executable program Linux. Bit Toggling

    - by AK_47
    I have a ZyBo circuit board which has a ArmV7 processor. I wrote a C program to output a clock and a corresponding data sequence on a PMOD. The PMOD has a switching speed of up to 50MHz. However, my program's created clock only has a max frequency of 115 Hz. I need this program to output as fast as possible because the PMOD I'm using is capable of 50MHz. I compiled my program with the following code line: gcc -ofast (c_program) Here is some sample code: #include <stdio.h> #include <stdlib.h> #define ARRAYSIZE 511 //________________________________________ //macro for the SIGNAL PMOD //________________________________________ //DATA //ZYBO Use Pin JE1 #define INIT_SIGNAL system("echo 54 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio54/direction"); #define SIGNAL_ON system("echo 1 > /sys/class/gpio/gpio54/value"); #define SIGNAL_OFF system("echo 0 > /sys/class/gpio/gpio54/value"); //________________________________________ //macro for the "CLOCK" PMOD //________________________________________ //CLOCK //ZYBO Use Pin JE4 #define INIT_MYCLOCK system("echo 57 > /sys/class/gpio/export"); system("echo out > /sys/class/gpio/gpio57/direction"); #define MYCLOCK_ON system("echo 1 > /sys/class/gpio/gpio57/value"); #define MYCLOCK_OFF system("echo 0 > /sys/class/gpio/gpio57/value"); int main(void){ int myarray[ARRAYSIZE] = {//hard coded array for signal data 1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0,0,1,0,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,0,0,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,1,1,0,0,1,0,0,0,1,0,1,0,0,1,1,1,0,0,1,0,1,0,1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,1,0,0,1,0,1,0,0,1,1,1,1,1,1,0,0,1,0,0,1,1,0,1,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,1,0,0,0,1,0,0,0,1,0,1,0,0,0,1,0,0,1,0,1,1,1,1,0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,0,1,0,0,0,1,0,0,0,1,0,1,0,1,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,1,0,1,1,1,1,1,0,0,1,1,1,0,0,1,1,0,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,1,0,1,0,1,0,1,1,0,1,0,0,0,1,1,1,0,1,0,1,0,1,0,1,0,1,0,1,0,0,1,0,0,0,0,1,1,1,0,1,1,1,1,0,1,1,0,1,0,1,0,1,0,0,1,0,1,1,1,0,1,1,1,0,0,1,1,1,0,1,0,0,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,0,0,1,0,1,1,0,1,0,1,1,1,0,0,0,0,0,1,0,0,0,1,0,1,1,1,1,1,1,1,0,0,0,0,0,1,1,0,1,1,1,1,1,1,1,1,0,1,1,0,1,0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,1,0,1,0,1,0,0,1,1,0,0,1,1,0,1,0,0,1,0,0,1,0,1,1,1,1,1,1,0,1,1,0,1,0,1,1,1,1,1,1,0,0,1,1,0,1,1,0,0,1,1,0,1,1,0,1,0,1,0,1,0,1,0,0,1,1,1,0,1,1,0,0,0,0,1,1,0,1,1,0,1,1,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0 }; INIT_SIGNAL INIT_MYCLOCK; //infinite loop int i; do{ i = 0; do{ /* 1020 is chosen because it is twice the size needed allowing for the changes in the clock. (511= 0-510, 510*2= 1020 ==> 0-1020 needed, so 1021 it is) */ if((i%2)==0) { MYCLOCK_ON; if(myarray[i/2] == 1){ SIGNAL_ON; }else{ SIGNAL_OFF; } } else if((i%2)==1) { MYCLOCK_OFF; //dont need to change the signal since it will just stay at whatever it was. } ++i; } while(i < 1021); } while(1); return 0; } I'm using the 'system' call to tell the system to output 1 volt or 0 volts onto a pin on the board (to represent the data signal and clock signal. One pin for the data and another for the clock). That was the only way I knew to tell the system to output a voltage. What can I do to make my executable program output to be at least in the magnitude of MegaHertz?

    Read the article

  • How can I make a WPF TreeView data binding lazy and asynchronous?

    - by pauldoo
    I am learning how to use data binding in WPF for a TreeView. I am procedurally creating the Binding object, setting Source, Path, and Converter properties to point to my own classes. I can even go as far as setting IsAsync and I can see the GUI update asynchronously when I explore the tree. So far so good! My problem is that WPF eagerly evaluates parts of the tree prior to them being expanded in the GUI. If left long enough this would result in the entire tree being evaluated (well actually in this example my tree is infinite, but you get the idea). I would like the tree only be evaluated on demand as the user expands the nodes. Is this possible using the existing asynchronous data binding stuff in the WPF? As an aside I have not figured out how ObjectDataProvider relates to this task. My XAML code contains only a single TreeView object, and my C# code is: public partial class Window1 : Window { public Window1() { InitializeComponent(); treeView.Items.Add( CreateItem(2) ); } static TreeViewItem CreateItem(int number) { TreeViewItem item = new TreeViewItem(); item.Header = number; Binding b = new Binding(); b.Converter = new MyConverter(); b.Source = new MyDataProvider(number); b.Path = new PropertyPath("Value"); b.IsAsync = true; item.SetBinding(TreeView.ItemsSourceProperty, b); return item; } class MyDataProvider { readonly int m_value; public MyDataProvider(int value) { m_value = value; } public int[] Value { get { // Sleep to mimick a costly operation that should not hang the UI System.Threading.Thread.Sleep(2000); System.Diagnostics.Debug.Write(string.Format("Evaluated for {0}\n", m_value)); return new int[] { m_value * 2, m_value + 1, }; } } } class MyConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { // Convert the double to an int. int[] values = (int[])value; IList<TreeViewItem> result = new List<TreeViewItem>(); foreach (int i in values) { result.Add(CreateItem(i)); } return result; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new InvalidOperationException("Not implemented."); } } } Note: I have previously managed to do lazy evaluation of the tree nodes by adding WPF event handlers and directly adding items when the event handlers are triggered. I'm trying to move away from that and use data binding instead (which I understand is more in spirit with "the WPF way").

    Read the article

  • Mocking using boost::shared_ptr and AMOP

    - by Edison Gustavo Muenz
    Hi, I'm trying to write mocks using amop. I'm using Visual Studio 2008. I have this interface class: struct Interface { virtual void Activate() = 0; }; and this other class which receives pointers to this Interface, like this: struct UserOfInterface { void execute(Interface* iface) { iface->Activate(); } }; So I try to write some testing code like this: amop::TMockObject<Interface> mock; mock.Method(&Interface::Activate).Count(1); UserOfInterface user; user.execute((Interface*)mock); mock.Verifiy(); It works! So far so good, but what I really want is a boost::shared_ptr in the execute() method, so I write this: struct UserOfInterface { void execute(boost::shared_ptr<Interface> iface) { iface->Activate(); } }; How should the test code be now? I tried some things, like: amop::TMockObject<Interface> mock; mock.Method(&Interface::Activate).Count(1); UserOfInterface user; boost::shared_ptr<Interface> mockAsPtr((Interface*)mock); user.execute(mockAsPtr); mock.Verifiy(); It compiles, but obviously crashes, since at the end of the scope the variable 'mock' gets double destroyed (because of the stack variable 'mock' and the shared_ptr). I also tried to create the 'mock' variable on the heap: amop::TMockObject<Interface>* mock(new amop::TMockObject<Interface>); mock->Method(&Interface::Activate).Count(1); UserOfInterface user; boost::shared_ptr<Interface> mockAsPtr((Interface*)*mock); user.execute(mockAsPtr); mock->Verifiy(); But it doesn't work, somehow it enters an infinite loop, before I had a problem with boost not finding the destructor for the mocked object when the shared_ptr tried to delete the object. Has anyone used amop with boost::shared_ptr successfully?

    Read the article

  • VBA/SQL recordsets

    - by intruesiive
    The project I'm asking about is for sending an email to teachers asking what books they're using for the classes they're teaching next semester, so that the books can be ordered. I have a query that compares the course number of this upcoming semester's classes to the course numbers of historical textbook orders, pulling out only those classes that are being taught this semester. That's where I get lost. I have a table that contains the following: -Professor -Course Number -Year -Book -Title The data looks like this: professor year course number title smith 13 1111 Pride and Prejudice smith 13 1111 The Fountainhead smith 13 1222 The Alchemist smith 12 1111 Pride and Prejudice smith 11 1222 Infinite Jest smith 10 1333 The Bible smith 13 1333 The Bible smith 12 1222 The Alchemist smith 10 1111 Moby Dick johnson 12 1222 The Tipping Point johnson 11 1333 Anna Kerenina johnson 10 1333 Everything is Illuminated johnson 12 1222 The Savage Detectives johnson 11 1333 In Search of Lost Time johnson 10 1333 Great Expectations johnson 9 1222 Proust on the Shore Here's what I need the code to do "on paper": Group the records by professor. Determine every unique course number in that group, and group records by course number. For each unique course number, determine the highest year associated. Then spit out every record with that professor+course number+year combination. With the sample data, the results would be: professor year course number title smith 13 1111 Pride and Prejudice smith 13 1111 The Fountainhead smith 13 1222 The Alchemist smith 13 1333 The Bible johnson 12 1222 The Tipping Point johnson 11 1333 Anna Kerenina johnson 12 1222 The Savage Detectives johnson 11 1333 In Search of Lost Time I'm thinking I should make a record set for each teacher, and within that, another record set for each course number. Within the course number record set, I need the system to determine what the highest year number is - maybe store that in a variable? Then pull out every associated record so that if the teacher ordered 3 books the last time they taught that class (whether it was in 2013 or 2012 and so on) all three books display. I'm not sure I'm thinking of record sets in the right way, though. My SQL so far is basic and clearly doesn't work: SELECT [All].Professor, [All].Course, Max([All].Year) FROM [All] GROUP BY [All].Professor, [All].Course; Thanks for your help.

    Read the article

  • PyQt threads and signals - how to properly retrieve values

    - by Cawas
    Using Python 2.5 and PyQt, I couldn't find any question this specific in Python, so sorry if I'm repeating the other Qt referenced questions below, but I couldn't easily understand that C code. I've got two classes, a GUI and a thread, and I'm trying to get return values from the thread. I've used the link in here as base to write my code, which is working just fine. To sum it up and illustrate the question in code here (I don't think this code will run on itself): class MainWindow (QtGui.QWidget): # this is just a reference and not really relevant to the question def __init__ (self, parent = None): QtGui.QWidget.__init__(self, parent) self.thread = Worker() # this does not begin a thread - look at "Worker.run" for mor details self.connect(self.thread, QtCore.SIGNAL('finished()'), self.unfreezeUi) self.connect(self.thread, QtCore.SIGNAL('terminated()'), self.unfreezeUi) self.connect(self.buttonDaemon, QtCore.SIGNAL('clicked()'), self.pressDaemon) # the problem begins below: I'm not using signals, or queue, or whatever, while I believe I should def pressDaemon (self): self.buttonDaemon.setEnabled(False) if self.thread.isDaemonRunning(): self.thread.setDaemonStopSignal(True) self.buttonDaemon.setText('Daemon - converts every %s sec'% args['daemonInterval']) else: self.buttonConvert.setEnabled(False) self.thread.startDaemon() self.buttonDaemon.setText('Stop Daemon') self.buttonDaemon.setEnabled(True) # this whole class is just another reference class Worker (QtCore.QThread): daemonIsRunning = False daemonStopSignal = False daemonCurrentDelay = 0 def isDaemonRunning (self): return self.daemonIsRunning def setDaemonStopSignal (self, bool): self.daemonStopSignal = bool def __init__ (self, parent = None): QtCore.QThread.__init__(self, parent) self.exiting = False self.thread_to_run = None # which def will be running def __del__ (self): self.exiting = True self.thread_to_run = None self.wait() def run (self): if self.thread_to_run != None: self.thread_to_run(mode='continue') def startDaemon (self, mode = 'run'): if mode == 'run': self.thread_to_run = self.startDaemon # I'd love to be able to just pass this as an argument on start() below return self.start() # this will begin the thread # this is where the thread actually begins self.daemonIsRunning = True self.daemonStopSignal = False sleepStep = 0.1 # don't know how to interrupt while sleeping - so the less sleepStep, the faster StopSignal will work # begins the daemon in an "infinite" loop while self.daemonStopSignal == False and not self.exiting: # here, do any kind of daemon service delay = 0 while self.daemonStopSignal == False and not self.exiting and delay < args['daemonInterval']: time.sleep(sleepStep) # delay is actually set by while, but this holds for N second delay += sleepStep # daemon stopped, reseting everything self.daemonIsRunning = False self.emit(QtCore.SIGNAL('terminated')) Tho it's quite big, I hope this is pretty clear. The main point is on def pressDaemon. Specifically all 3 self.thread calls. The last one, self.thread.startDaemon() is just fine, and exactly as the example. I doubt that represents any issue. The problem is being able to set the Daemon Stop Signal and retrieve the value if it's running. I'm not sure that it's possible to set a stop signal on QtCore.QtThread, because I've tried doing the same way and it didn't work. But I'm pretty sure it's not possible to retrieve a return result from the emit. So, there it is. 
    I'm using direct calls to the thread class, and I'm almost positive that's not a good design and will probably fail when running under stress. I read about that queue, but I'm not sure it's the proper solution here, or if I should be using Qt at all, since this is Python. And just maybe there's nothing wrong with the way I'm doing it.

    Read the article

  • Writing a Servlet that checks to see if JSP's exist and forwards to another JSP if they aren't

    - by Omar Kooheji
    I've been tasked with writing a servlet that intercepts a call to a JSP in a specific directory, checks that the file exists and, if it does, just forwards to that file; if it doesn't, I'm to forward to a default JSP. I've set up the web.xml as follows: <servlet> <description>This is the description of my J2EE component</description> <display-name>This is the display name of my J2EE component</display-name> <servlet-name>CustomJSPListener</servlet-name> <servlet-class> ... CustomJSPListener</servlet-class> <load-on-startup>1</load-on-startup> </servlet> ... <servlet-mapping> <servlet-name>CustomJSPListener</servlet-name> <url-pattern>/custom/*</url-pattern> </servlet-mapping> And the doGet method of the servlet is as follows: public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { logger.debug(String.format("Intercepted a request for an item in the custom directory [%s]",request.getRequestURL().toString())); String requestUri = request.getRequestURI(); // Check that the file name contains a text string if (requestUri.toLowerCase(Locale.UK).contains("someText")){ logger.debug(String.format("We are interested in this file [%s]",requestUri)); File file = new File(requestUri); boolean fileExists = file.exists(); logger.debug(String.format("Checking to see if file [%s] exists [%s].",requestUri,fileExists)); // if the file exists just forward it to the file if (fileExists){ getServletConfig().getServletContext().getRequestDispatcher( requestUri).forward(request,response); } else { // Otherwise redirect to default.jsp getServletConfig().getServletContext().getRequestDispatcher( "/custom/default.jsp").forward(request,response); } } else { // We aren't responsible for checking this file exists just pass it on to the requested jsp getServletConfig().getServletContext().getRequestDispatcher( requestUri).forward(request,response); } } This seems to result in an error 500 from Tomcat; I think this is because the servlet is forwarding to the same folder, which is then being intercepted again by the servlet, resulting in an infinite loop. Is there a better way to do this? I'm led to believe that I could use filters to do this, but I don't know very much about them.
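
    One detail worth separating out from the question above: new File(requestUri) checks a path on the server's filesystem relative to the JVM's working directory, not a path inside the deployed web application, so the existence test can misfire before the forwarding loop is even reached. Below is a minimal sketch (an assumption, not the questioner's code) of an existence check that looks inside the web app for the /custom/* mapping shown above; it only covers the check itself, and the forward back into /custom/* would still need to be broken separately, for example by keeping the real JSPs somewhere the servlet does not intercept.

        // Strip the context path so what remains is a web-app-relative path such as /custom/foo.jsp
        String path = request.getRequestURI().substring(request.getContextPath().length());
        // getResource returns null if nothing is deployed at that path;
        // MalformedURLException is an IOException, so the method's throws clause already covers it
        boolean exists = getServletContext().getResource(path) != null;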

    Read the article

  • Need help with implementation of the jQuery LiveUpdate routine

    - by miCRoSCoPiC_eaRthLinG
    Hey all, Has anyone worked with the LiveUpdate function (may be a bit of a misnomer) found on this page? It's not really a live search/update function, but a quick filtering mechanism for a pre-existing list, based on the pattern you enter in a text field. For easier reference, I'm pasting the entire function in here: jQuery.fn.liveUpdate = function(list){ list = jQuery(list); if ( list.length ) { var rows = list.children('li'), cache = rows.map(function(){ return this.innerHTML.toLowerCase(); }); this .keyup(filter).keyup() .parents('form').submit(function(){ return false; }); } return this; function filter(){ var term = jQuery.trim( jQuery(this).val().toLowerCase() ), scores = []; if ( !term ) { rows.show(); } else { rows.hide(); cache.each(function(i){ var score = this.score(term); if (score > 0) { scores.push([score, i]); } }); jQuery.each(scores.sort(function(a, b){return b[0] - a[0];}), function(){ jQuery(rows[ this[1] ]).show(); }); } } }; I have this list, with members as the ID. And a text field with say, qs as ID. I tried binding the function in the following manner: $( '#qs' ).liveUpdate( '#members' ); But when I do this, the function is called only ONCE when the page is loaded (I put in some console.logs in the function) but never after when text is keyed into the text field. I also tried calling the routine from the keyup() function of qs. $( '#qs' ).keyup( function() { $( this ).liveUpdate( '#members' ); }); This ends up going into infinite loops (almost) and halting with "Too much recursion" errors. So can anyone please shed some light on how I am supposed to actually implement this function? Also while you are at it, can someone kindly explain this line to me: var score = this.score(term); What I want to know is where this member method score() is coming from? I didn't find any such method built into JS or jQuery. Thanks for all the help, m^e

    Read the article

  • a problem in socks.h

    - by janathan
    i use this (http://www.codeproject.com/KB/IP/Socks.aspx) lib in my socket programing in c++ and copy the socks.h in include folder and write this code: include include include include include include "socks.h" define PORT 1001 // the port client will be connecting to define MAXDATASIZE 100 static void ReadThread(void* lp); int socketId; int main(int argc, char* argv[]) { const char temp[]="GET / HTTP/1.0\r\n\r\n"; CSocks cs; cs.SetVersion(SOCKS_VER4); cs.SetSocksPort(1080); cs.SetDestinationPort(1001); cs.SetDestinationAddress("192.168.11.97"); cs.SetSocksAddress("192.168.11.97"); //cs.SetVersion(SOCKS_VER5); //cs.SetSocksAddress("128.0.21.200"); socketId = cs.Connect(); // if failed if (cs.m_IsError) { printf( "\n%s", cs.GetLastErrorMessage()); getch(); return 0; } // send packet for requesting to a server if(socketId > 0) { send(socketId, temp, strlen(temp), 0); HANDLE ReadThreadID; // handle for read thread id HANDLE handle; // handle for thread handle handle = CreateThread ((LPSECURITY_ATTRIBUTES)NULL, // No security attributes. (DWORD)0, // Use same stack size. (LPTHREAD_START_ROUTINE)ReadThread, // Thread procedure. (LPVOID)(void*)NULL, // Parameter to pass. (DWORD)0, // Run immediately. (LPDWORD)&ReadThreadID); WaitForSingleObject(handle, INFINITE); } else { printf("\nSocks Server / Destination Server not started.."); } closesocket(socketId); getch(); return 0; } // Thread Proc for reading from server socket. static void ReadThread(void* lp) { int numbytes; char buf[MAXDATASIZE]; while(1) { if ((numbytes=recv(socketId, buf, MAXDATASIZE-1, 0)) == -1) { printf("\nServer / Socks Server has been closed Receive thread Closed\0"); break; } if (numbytes == 0) break; buf[numbytes] = '\0'; printf("Received: %s\r\n",buf); send(socketId,buf,strlen(buf),0); } } but when compile this i get an error . pls help me thanks

    Read the article

  • C++ question on prime numbers.

    - by user278330
    Hello. I am trying to make a program that determines if the number is prime or composite. I have gotten thus far. Could you give me any ideas so that it will work? All primes will work; however, because composites have values of r that are both != 0 and == 0, they will always be classified as prime. How can I fix this? int main() { int pNumber, limit, x, r; limit = 0; x = 2; cout << "Please enter any positive integer: " ; cin >> pNumber; if (pNumber < 0) { cout << "Invalid. Negative Number. " << endl; return 0; } else if (pNumber == 0) { cout << "Invalid. Zero has an infinite number of divisors, and therefore neither composite nor prime." << endl; return 0; } else if (pNumber == 1) { cout << "Valid. However, one is neither prime nor composite" << endl; return 0; } else { while (limit < pNumber) { r = pNumber % x; x++; limit++; } if (r == 0) cout << "Your number is composite" << endl; else cout << "Your number is prime" << endl; } return 0; }

    Read the article

  • How to make this sub-sub-query work?

    - by Josh Weissbock
    I am trying to do this in one query. I asked a similar question a few days ago but my personal requirements have changed. I have a game type website where users can attend "classes". There are three tables in my DB. I am using MySQL. I have four tables: hl_classes (int id, int professor, varchar class, text description) hl_classes_lessons (int id, int class_id, varchar lessonTitle, varchar lexiconLink, text lessonData) hl_classes_answers (int id, int lesson_id, int student, text submit_answer, int percent) hl_classes stores all of the classes on the website. The lessons are the individual lessons for each class. A class can have infinite lessons. Each lesson is available in a specific term. hl_classes_terms stores a list of all the terms and the current term has the field active = '1'. When a user submits their answers to a lesson it is stored in hl_classes_answers. A user can only answer each lesson once. Lessons have to be answered sequentially. All users attend all "classes". What I am trying to do is grab the next lesson for each user to do in each class. When the users start they are in term 1. When they complete all 10 lessons in each class they move on to term 2. When they finish lesson 20 for each class they move on to term 3. Let's say we know the term the user is in by the PHP variable $term. So this is my query I am currently trying to massage out but it doesn't work. Specifically because of the hC.id is unknown in the WHERE clause SELECT hC.id, hC.class, (SELECT MIN(output.id) as nextLessonID FROM ( SELECT id, class_id FROM hl_classes_lessons hL WHERE hL.class_id = hC.id ORDER BY hL.id LIMIT $term,10 ) as output WHERE output.id NOT IN (SELECT lesson_id FROM hl_classes_answers WHERE student = $USER_ID)) as nextLessonID FROM hl_classes hC My logic behind this query is first to For each class; select all of the lessons in the term the current user is in. From this sort out the lessons the user has already done and grab the MINIMUM id of the lessons yet to be done. This will be the lesson the user has to do. I hope I have made my question clear enough.

    Read the article

  • Avoiding stack overflows in wrapper DLLs

    - by peachykeen
    I have a program to which I'm adding fullscreen post-processing effects. I do not have the source for the program (it's proprietary, although a developer did send me a copy of the debug symbols, in .map format). I have the code for the effects written and working, no problems. My issue now is linking the two. I've tried two methods so far: Use Detours to modify the original program's import table. This works great and is guaranteed to be stable, but the users I've talked to aren't comfortable with it, it requires installation (beyond extracting an archive), and there's some question whether patching the program with Detours is valid under the terms of the EULA. So, that option is out. The other option is the traditional DLL replacement. I've wrapped OpenGL (opengl32.dll), and I need the program to load my DLL instead of the system copy (just drop it in the program folder with the right name; that's easy). I then need my DLL to load the Cg framework and runtime (which rely on OpenGL) and a few other things. When Cg loads, it calls some of my functions, which call Cg functions, and I tend to get stack overflows and infinite loops. I need to be able to either include the Cg DLLs in a subdirectory and still use their functions (I'm not sure if it's possible to have my DLL's import table point to a DLL in a subdirectory), or I need to dynamically link them (which I'd rather not do, just to simplify the build process), something to force them to refer to the system's file (not my custom replacement). The entire chain is: Program loads DLL A (named opengl32.dll). DLL A loads Cg.dll and dynamically links (GetProcAddress) to sysdir/opengl32.dll. I now need Cg.dll to also refer to sysdir/opengl32.dll, not DLL A. How would this be done? Edit: How would this be done easily without using GetProcAddress? If nothing else works, I'm willing to fall back to that, but I'd rather not if at all possible. Edit2: I just stumbled across the function SetDllDirectory in the MSDN docs (on a totally unrelated search). At first glance, that looks like what I need. Is that right, or am I misjudging? (Off to test it now.) Edit3: I've solved this problem by doing things a bit differently. Instead of dropping in an OpenGL32.dll, I've renamed my DLL to DInput.dll. Not only does it have the advantage of only having to export one function instead of well over 120 (for the program, Cg, and GLEW), I don't have to worry about functions running back in (I can link to OpenGL as usual). To get into the calls I need to intercept, I'm using Detours. All in all, it works much better. This question, though, is still an interesting problem (and hopefully will be useful for anyone else trying to do crazy things in the future). Both the answers are good, so I'm not sure yet which to pick...
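
    For reference, a minimal sketch of the classic wrapper-DLL pattern this question circles around: load the real system opengl32.dll by absolute path so that nothing resolves back into the replacement DLL. This is an illustration only, not the route the poster ultimately took (he switched to a DInput.dll proxy plus Detours), and real-world wrappers also need to export the stubs (e.g. via a .def file).

        #include <windows.h>

        // Load the genuine opengl32.dll from the system directory by absolute path,
        // so calls made by Cg (or by this wrapper itself) cannot loop back into the wrapper.
        static HMODULE LoadRealOpenGL() {
            char path[MAX_PATH];
            UINT len = GetSystemDirectoryA(path, MAX_PATH);      // e.g. C:\Windows\System32
            if (len == 0 || len > MAX_PATH - 14) return NULL;    // leave room for "\\opengl32.dll"
            lstrcatA(path, "\\opengl32.dll");
            return LoadLibraryA(path);
        }

        // Example forwarding stub: resolve the real entry point once, then delegate to it.
        extern "C" void WINAPI glFlush(void) {
            typedef void (WINAPI *PFNGLFLUSH)(void);
            static PFNGLFLUSH real = NULL;
            if (real == NULL) {
                HMODULE gl = LoadRealOpenGL();
                if (gl != NULL) real = (PFNGLFLUSH)GetProcAddress(gl, "glFlush");
            }
            if (real != NULL) real();
        }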

    Read the article

  • DriverManager always returns my custom driver regardless of the connection URL

    - by JGB146
    I am writing a driver to act as a wrapper around two separate MySQL connections (to distributed databases). Basically, the goal is to have all applications interact with my driver instead of requiring each application to sort out which database holds the desired data. Most of the code for this is in place, but I'm having a problem: when I attempt to create connections via the MySQL driver, the DriverManager returns an instance of my driver instead of the MySQL driver. I'd appreciate any tips on what could be causing this and what could be done to fix it! Below are a few relevant snippets of code. I can provide more, but there's a lot, so I'd need to know what else you want to see. First, from MyDriver.java: public MyDriver() throws SQLException { DriverManager.registerDriver(this); } public Connection connect(String url, Properties info) throws SQLException { try { return new MyConnection(info); } catch (Exception e) { return null; } } public boolean acceptsURL(String url) throws SQLException { if (url.contains("jdbc:jgb://")) { return true; } return false; } It is my understanding that this acceptsURL function will dictate whether or not the DriverManager deems my driver a suitable fit for a given URL. Hence it should only be passing connections from my driver if the URL contains "jdbc:jgb://", right? Here's the code from MyConnection.java: Connection c1 = null; Connection c2 = null; /** *Constructors */ public DDBSConnection (Properties info) throws SQLException, Exception { info.list(System.out); //included for testing Class.forName("com.mysql.jdbc.Driver").newInstance(); String url1 = "jdbc:mysql://server1.com/jgb"; String url2 = "jdbc:mysql://server2.com/jgb"; this.c1 = DriverManager.getConnection( url1, info.getProperty("username"), info.getProperty("password")); this.c2 = DriverManager.getConnection( url2, info.getProperty("username"), info.getProperty("password")); } And this tells me two things. First, the info.list() call confirms that the correct user and password are being sent. Second, because we enter an infinite loop, we see that the DriverManager is providing new instances of my connection as matches for the MySQL URLs instead of the desired MySQL driver/connection. FWIW, I have separately tested implementations that go straight to the MySQL driver using this exact syntax (albeit only one at a time), and was able to successfully interact with each database individually from a test application outside of my driver.
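
    For what it's worth, DriverManager.getConnection tries each registered driver's connect() in turn, and the JDBC contract says connect() should return null when handed a URL the driver does not understand; a connect() that never checks the URL will therefore claim the jdbc:mysql:// URLs too and recurse. A minimal sketch of that guard follows (MyDriver/MyConnection are the names from the question; MyConnection itself is not shown, and the remaining Driver methods are only stubs):

        import java.sql.*;
        import java.util.Properties;
        import java.util.logging.Logger;

        public class MyDriver implements Driver {
            static {
                try {
                    DriverManager.registerDriver(new MyDriver());
                } catch (SQLException e) {
                    throw new ExceptionInInitializerError(e);
                }
            }

            @Override
            public boolean acceptsURL(String url) {
                return url != null && url.startsWith("jdbc:jgb://");
            }

            @Override
            public Connection connect(String url, Properties info) throws SQLException {
                if (!acceptsURL(url)) {
                    return null; // not ours: DriverManager falls through to the real MySQL driver
                }
                return new MyConnection(info); // the poster's wrapper Connection (not shown here)
            }

            // Boilerplate stubs so the sketch satisfies the java.sql.Driver interface.
            @Override public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) { return new DriverPropertyInfo[0]; }
            @Override public int getMajorVersion() { return 1; }
            @Override public int getMinorVersion() { return 0; }
            @Override public boolean jdbcCompliant() { return false; }
            @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException {
                return Logger.getLogger(MyDriver.class.getName());
            }
        }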

    Read the article

  • Algorithm for finding symmetries of a tree

    - by Paxinum
    I have n sectors, enumerated 0 to n-1 counterclockwise. The boundaries between these sectors are infinite branches (n of them). The sectors live in the complex plane, and for n even, sectors 0 and n/2 are bisected by the real axis, and the sectors are evenly spaced. These branches meet at certain points, called junctions. Each junction is adjacent to a subset of the sectors (at least 3 of them). Specifying the junctions (in prefix order, let's say, starting from the junction adjacent to sectors 0 and 1) and the distances between the junctions uniquely describes the tree. Now, given such a representation, how can I see if it is symmetric wrt the real axis? For example, with n=6, the tree (0,1,5)(1,2,4,5)(2,3,4) has three junctions on the real line, so it is symmetric wrt the real axis. If the distance between (0,1,5) and (1,2,4,5) is equal to the distance from (1,2,4,5) to (2,3,4), it is also symmetric wrt the imaginary axis. The tree (0,1,5)(1,2,5)(2,4,5)(2,3,4) has 4 junctions, and it is never symmetric wrt either the imaginary or the real axis, but it has 180-degree rotation symmetry if the distance between the first two and the distance between the last two junctions in the representation are equal. Edit: This is actually for my research. I have posted the question at MathOverflow as well, but my days in competition programming tell me that this is more like an IOI task. Code in Mathematica would be excellent, but Java, Python, or any other language readable by a human suffices. Here are some examples (pretend the double edges are single and we have a tree): http://www2.math.su.se/~per/files.php?file=contr_ex_1.pdf http://www2.math.su.se/~per/files.php?file=contr_ex_2.pdf http://www2.math.su.se/~per/files.php?file=contr_ex_5.pdf Example 1 is described as (0,1,4)(1,2,4)(2,3,4)(0,4,5) with distances (2,1,3). Example 2 is described as (0,1,4)(1,2,4)(2,3,4)(0,4,5) with distances (2,1,1). Example 5 is described as (0,1,4,5)(1,2,3,4) with distances (2). So, given the description/representation, I want to find some algorithm to decide if it is symmetric wrt the real axis, the imaginary axis, and rotation by 180 degrees. The last example has 180-degree symmetry. (These symmetries correspond to special kinds of potentials in the Schroedinger equation, which have nice properties in quantum mechanics.)
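
    As a starting point only, here is a small Python sketch of a necessary condition for real-axis symmetry under the conventions above: reflection in the real axis sends sector k to (n - k) mod n, so the multiset of junction sector-sets must be invariant under that relabelling. Distances and the adjacency between the matched junctions still have to be compared separately, so this is not a complete test.

        from collections import Counter

        def reflect_real(junction, n):
            """Relabel a junction's sectors under reflection in the real axis: k -> (n - k) % n."""
            return frozenset((n - k) % n for k in junction)

        def junction_multiset(junctions, n, relabel=None):
            if relabel is None:
                return Counter(frozenset(j) for j in junctions)
            return Counter(relabel(j, n) for j in junctions)

        def maybe_symmetric_real_axis(n, junctions):
            """Necessary (not sufficient) condition: the junction sets map onto themselves."""
            return junction_multiset(junctions, n) == junction_multiset(junctions, n, reflect_real)

        # First example from the question: n = 6, tree (0,1,5)(1,2,4,5)(2,3,4) -> True
        print(maybe_symmetric_real_axis(6, [(0, 1, 5), (1, 2, 4, 5), (2, 3, 4)]))
        # Example 5 from the question, (0,1,4,5)(1,2,3,4) -> False for the real axis
        print(maybe_symmetric_real_axis(6, [(0, 1, 4, 5), (1, 2, 3, 4)]))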

    Read the article

  • ArithmeticException thrown during BigDecimal.divide

    - by polygenelubricants
    I thought java.math.BigDecimal is supposed to be The Answer™ to the need for performing infinite precision arithmetic with decimal numbers. Consider the following snippet: import java.math.BigDecimal; //... final BigDecimal one = BigDecimal.ONE; final BigDecimal three = BigDecimal.valueOf(3); final BigDecimal third = one.divide(three); assert third.multiply(three).equals(one); // this should pass, right? I expect the assert to pass, but in fact the execution doesn't even get there: one.divide(three) causes an ArithmeticException to be thrown! Exception in thread "main" java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result. at java.math.BigDecimal.divide It turns out that this behavior is explicitly documented in the API: In the case of divide, the exact quotient could have an infinitely long decimal expansion; for example, 1 divided by 3. If the quotient has a non-terminating decimal expansion and the operation is specified to return an exact result, an ArithmeticException is thrown. Otherwise, the exact result of the division is returned, as done for other operations. Browsing around the API further, one finds that there are in fact various overloads of divide that perform inexact division, e.g.: final BigDecimal third = one.divide(three, 33, RoundingMode.DOWN); System.out.println(three.multiply(third)); // prints "0.999999999999999999999999999999999" Of course, the obvious question now is "What's the point???". I thought BigDecimal is the solution when we need exact arithmetic, e.g. for financial calculations. If we can't even divide exactly, then how useful can this be? Does it actually serve a general purpose, or is it only useful in a very niche application where you fortunately just don't need to divide at all? If this is not the right answer, what CAN we use for exact division in financial calculations? (I mean, I don't have a finance major, but they still use division, right???).
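
    To make the inexact-division escape hatch concrete, here is a small illustrative sketch (my own example, not from the question): supply an explicit scale or MathContext plus a RoundingMode whenever the quotient may not terminate, and plain divide stays exact whenever the expansion does terminate.

        import java.math.BigDecimal;
        import java.math.MathContext;
        import java.math.RoundingMode;

        public class BigDecimalDivision {
            public static void main(String[] args) {
                BigDecimal one = BigDecimal.ONE;
                BigDecimal three = BigDecimal.valueOf(3);

                // Money-style: fix the scale (2 decimal places) and name the rounding mode.
                BigDecimal twoPlaces = one.divide(three, 2, RoundingMode.HALF_EVEN); // 0.33

                // Or bound the total precision instead of the scale.
                BigDecimal bounded = one.divide(three, new MathContext(34, RoundingMode.HALF_EVEN));

                // Exact division is still exact whenever the expansion terminates.
                BigDecimal eighth = BigDecimal.ONE.divide(BigDecimal.valueOf(8)); // 0.125

                System.out.println(twoPlaces + " " + bounded + " " + eighth);
            }
        }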

    Read the article

  • Are there programs that iteratively write new programs?

    - by chris
    For about a year I have been thinking about writing a program that writes programs. This would primarily be a playful exercise that might teach me some new concepts. My inspiration came from negentropy and the ability for order to emerge from chaos, and new chaos to arise out of order, in infinite succession. To be more specific, the program would start by writing a short random string. If the string compiles, the program will log it for later comparison. If the string does not compile, the program will try to rewrite it until it does compile. As more strings (mini 'useless' programs) are logged, they can be parsed for similarities and used to generate a grammar. This grammar can then be drawn on to write more strings that have a higher probability of compilation than purely random strings. This is obviously more than a little silly, but I thought it would be fun to try and grow a program like this. And as a byproduct I get a bunch of unique programs that I can visualize and call art. I'll probably write this in Ruby due to its simple syntax and dynamic compilation, and then I will visualize the results in Processing using ruby-processing. What I would like to know is: Is there a name for this type of programming? What currently exists in this field? Who are the primary contributors? BONUS! - In what ways can I procedurally assign value to output programs beyond compiles(y/n)? I may want to extend the functionality of this program to generate a program based on parameters, but I want the program to define those parameters through running the programs that compile and assigning meaning to the programs' output. This question is probably more involved than reasonable for a bonus, but if you can think of a simple way to get something like this done in less than 23 lines or one hyperlink, please toss it into your response. I know that this is not quite metaprogramming, and from the little I know of AI and generative algorithms, they are usually more goal-oriented than what I am thinking of. What would be optimal is a program that continually rewrites and improves itself so I don't have to ^_^
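
    A toy sketch of the generate-and-test loop described above, written in Python purely for illustration (the poster plans to use Ruby): the built-in compile() stands in for the "does it compile" check, candidate strings are kept very short so valid ones turn up quickly, and an attempt cap guards against bad luck.

        import random
        import string

        def random_program(max_len=4):
            """Produce a short random string of printable characters (a candidate 'program')."""
            length = random.randint(1, max_len)
            return "".join(random.choice(string.printable) for _ in range(length))

        def compiles(src):
            """Return True if the string is syntactically valid Python."""
            try:
                compile(src, "<candidate>", "exec")
                return True
            except (SyntaxError, ValueError):
                return False

        def harvest(n_wanted=5, max_attempts=200_000):
            """Generate-and-test loop: keep candidates that compile, up to a limit of attempts."""
            kept = []
            attempts = 0
            while len(kept) < n_wanted and attempts < max_attempts:
                attempts += 1
                candidate = random_program()
                if compiles(candidate):
                    kept.append(candidate)
            return kept

        if __name__ == "__main__":
            for program in harvest():
                print(repr(program))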

    Read the article

  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question; it's a design question that has to do with avoiding mutable state, functional thinking, and that sort of thing. It just happens that I'm using Scala. Given this set of requirements: Input comes from an essentially infinite stream of random numbers between 1 and 10. Final output is either SUCCEED or FAIL. There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times, so they all may have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself. Pseudocode: if (first number == 1) SUCCEED else if (first number >= 9) FAIL else { first = first number rest = rest of stream for each (n in rest) { if (n == 1) FAIL else if (n == first) SUCCEED else continue } } Here is a possible mutable implementation: sealed trait Result case object Fail extends Result case object Succeed extends Result case object NoResult extends Result class StreamListener { private var target: Option[Int] = None def evaluate(n: Int): Result = target match { case None => if (n == 1) Succeed else if (n >= 9) Fail else { target = Some(n) NoResult } case Some(t) => if (n == t) Succeed else if (n == 1) Fail else NoResult } } This will work, but it smells to me. StreamListener.evaluate is not referentially transparent, and the use of the NoResult token just doesn't feel right. It does have the advantage, though, of being clear and easy to use/code. Besides, there has to be a functional solution to this, right? I've come up with 2 other possible options: Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener, which doesn't feel right. Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach, I don't see how that could happen, since each listener is forcing evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation. Is there any standard Scala/FP idiom I'm overlooking here?
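
    One common functional idiom, sketched below as an illustration rather than the canonical answer: keep the listener immutable and have evaluate return both the verdict and the listener to use for the next number, so the caller threads the state instead of the listener mutating it. The driver uses LazyList, the Scala 2.13 replacement for Stream.

        object ListenerSketch {
          sealed trait Result
          case object Fail extends Result
          case object Succeed extends Result
          case object NoResult extends Result

          // Immutable listener: evaluate returns the verdict plus the listener to use next.
          final case class StreamListener(target: Option[Int] = None) {
            def evaluate(n: Int): (Result, StreamListener) = target match {
              case None =>
                if (n == 1) (Succeed, this)
                else if (n >= 9) (Fail, this)
                else (NoResult, StreamListener(Some(n)))
              case Some(t) =>
                if (n == t) (Succeed, this)
                else if (n == 1) (Fail, this)
                else (NoResult, this)
            }
          }

          // Thread the listener through the stream until a verdict appears.
          @annotation.tailrec
          def run(numbers: LazyList[Int], listener: StreamListener = StreamListener()): Result =
            numbers.headOption match {
              case Some(n) =>
                listener.evaluate(n) match {
                  case (NoResult, next) => run(numbers.tail, next)
                  case (verdict, _)     => verdict
                }
              case None => NoResult
            }

          def main(args: Array[String]): Unit = {
            val numbers = LazyList.continually(scala.util.Random.nextInt(10) + 1) // 1 to 10
            println(run(numbers))
          }
        }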

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >