Search Results

Search found 5572 results on 223 pages for 'cpu'.


  • response.redirect to classic asp failing {Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack.}

    - by jeff
    I have the following code pasted below. For some reason the Response.Redirect seems to be failing: it maxes out the CPU on my server and just doesn't do anything. The .NET code uploads the file fine, but does not redirect to the ASP page to do the processing. I know this is absolute rubbish – why would you have .NET code redirecting to classic ASP? – but it is a legacy app. I have tried putting false or true etc. at the end of the redirect, as I have read other people have had issues with this. Please help, as it's driving me insane! It's so strange: it runs locally on my machine but won't run on my server! I am getting the following error when I debug remotely: {Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack.}

    (UPDATED) After debugging remotely and taking the redirect out of the try/catch, I have found that the redirect resolves to the correct location, but after it leaves the redirect it just seems to get lost (almost as if it can't navigate away from the cobra_import project) instead of going back up a level to COBRA/pages. Why is this? This has worked previously!

        public void btnUploadTheFile_Click(object Source, EventArgs evArgs)
        {
            // need to check that the uploaded file is an xls file.
            string strFileNameOnServer = "PJI3.txt";
            string strBaseLocation = ConfigurationSettings.AppSettings["str_file_location"];
            if ("" == strFileNameOnServer)
            {
                txtOutput.InnerHtml = "Error - a file name must be specified.";
                return;
            }
            if (null != uplTheFile.PostedFile)
            {
                try
                {
                    uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
                    txtOutput.InnerHtml = "File <b>" + strBaseLocation + strFileNameOnServer + "</b> uploaded successfully";
                    Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp");
                }
                catch (Exception e)
                {
                    txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation + strFileNameOnServer + "</b><br>" + e.ToString();
                }
            }
        }
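
    One thing worth ruling out (a common cause of exactly this symptom, though not a confirmed diagnosis of this app): Response.Redirect(url) ends the response by throwing a ThreadAbortException, and a catch (Exception) block like the one above will swallow it. A minimal sketch of the usual rearrangement – redirect outside the try/catch, or with endResponse set to false:

        try
        {
            uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
        }
        catch (Exception e)
        {
            txtOutput.InnerHtml = "Error saving: " + e;
            return; // do not redirect if the save failed
        }
        // Outside the try/catch, the ThreadAbortException raised by the redirect
        // is no longer swallowed. Alternatively, Response.Redirect(url, false)
        // skips Response.End and lets the page finish normally.
        Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp");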

    Read the article

  • How to do the processing and keep GUI refreshed using databinding?

    - by macias
    History of the problem: this is a continuation of my previous question, "How to start a thread to keep GUI refreshed?", but since Jon shed new light on the problem, I would have had to completely rewrite the original question, which would have made that topic unreadable. So: a new, very specific question.

    The problem – two pieces:
    1. CPU-hungry, heavy-weight processing as a library (back-end)
    2. WPF GUI with databinding which serves as a monitor for the processing (front-end)

    Current situation: the library sends so many notifications about data changes that, although it works within its own thread, it completely jams the WPF databinding mechanism. As a result, not only does monitoring the data not work (it is not refreshed), but the entire GUI is frozen while the data is processed.

    The aim: a well-designed, polished way to keep the GUI up to date. I am not saying it should display the data immediately (it can even skip some changes), but it cannot freeze while doing computation.

    Example (simplified, but it shows the problem). XAML part:

        <StackPanel Orientation="Vertical">
            <Button Click="Button_Click">Start</Button>
            <TextBlock Text="{Binding Path=Counter}"/>
        </StackPanel>

    C# part (please NOTE this is one piece of code, but there are two sections to it):

        public partial class MainWindow : Window, INotifyPropertyChanged
        {
            // GUI part
            public MainWindow()
            {
                InitializeComponent();
                DataContext = this;
            }

            private void Button_Click(object sender, RoutedEventArgs e)
            {
                var thread = new Thread(doProcessing);
                thread.IsBackground = true;
                thread.Start();
            }

            // this is the non-GUI part -- do not mess with GUI here
            public event PropertyChangedEventHandler PropertyChanged;

            public void OnPropertyChanged(string property_name)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(property_name));
            }

            long counter;
            public long Counter
            {
                get { return counter; }
                set
                {
                    if (counter != value)
                    {
                        counter = value;
                        OnPropertyChanged("Counter");
                    }
                }
            }

            void doProcessing()
            {
                var tmp = 10000.0;
                for (Counter = 0; Counter < 10000000; ++Counter)
                {
                    if (Counter % 2 == 0)
                        tmp = Math.Sqrt(tmp);
                    else
                        tmp = Math.Pow(tmp, 2.0);
                }
            }
        }

    Known workarounds (please do not repost them as answers). The first two are based on Jon's ideas:
    1. Pass the GUI dispatcher to the library and use it for sending notifications. Why is it ugly? Because there could be no GUI at all.
    2. Give up databinding COMPLETELY (one widget with databinding is enough for jamming) and instead poll the data from time to time and update the GUI manually. Well, I didn't learn WPF just to give up on it now ;-)
    3. And this one is mine – it is ugly, but its simplicity kills: before sending a notification, freeze the thread – Thread.Sleep(1) – to let the potential receiver "breathe". It works, it is minimalistic, it is ugly though, and it ALWAYS slows down computation even if no GUI is there.

    So... I am all ears for real solutions, not some tricks.
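
    For comparison, a sketch of a fourth shape the list doesn't cover: keep the binding, but rate-limit the notifications at the source so the dispatcher queue never floods. The 100 ms interval is an arbitrary assumption, and a final unconditional notification would be needed when processing completes so the last value is shown:

        long counter;
        int lastNotify; // Environment.TickCount at the last PropertyChanged

        public long Counter
        {
            get { return counter; }
            set
            {
                if (counter == value) return;
                counter = value;
                // Publish to the binding at most ~10 times per second;
                // intermediate values are silently skipped.
                int now = Environment.TickCount;
                if (now - lastNotify >= 100)
                {
                    lastNotify = now;
                    OnPropertyChanged("Counter");
                }
            }
        }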

    Read the article

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have a strange memory corruption problem. After many hours of debugging and trying, I think I found something. For example, I do a simple string assignment:

        sTest := 'SET LOCK_TIMEOUT ';

    However, the result sometimes becomes:

        sTest = 'SET LOCK'#0'TIMEOUT '

    So the _ gets replaced by a 0 byte. I have seen this happen once (reproducing is tricky, dependent on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copy (for moves of 9 to 32 bytes):

        ...
        @@SmallMove: {9..32 Byte Move}
          fild  qword ptr [eax+ecx] {Load Last 8}
          fild  qword ptr [eax]     {Load First 8}
          cmp   ecx, 8
          jle   @@Small16
          fild  qword ptr [eax+8]   {Load Second 8}
          cmp   ecx, 16
          jle   @@Small24
          fild  qword ptr [eax+16]  {Load Third 8}
          fistp qword ptr [edx+16]  {Save Third 8}
        ...

    Using the FPU view and two memory debug views (Delphi - View - Debug - CPU - Memory) I saw it go wrong... once... but could not reproduce it. This morning I read something about the 8087CW mode, and yes, if this is changed to $27F I get memory corruption! Normally it is $133F. The difference between $133F and $027F is that $027F sets up the FPU for doing less precise calculations (limiting to Double instead of Extended) and different infinity handling (which was used for older FPUs, but is not used any more).

    Okay, now I found why, but not when! I changed the working of my AsmProfiler with a simple check (so all functions are checked at enter and leave):

        if Get8087CW = $27F then                       //normally $1372?
          if MainThreadID = GetCurrentThreadId then    //only check mainthread
            DebugBreak;

    I "profiled" some units and dll's and bingo (see stack):

        Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376)
        pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345)))
        pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345)))
        Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0)
        ExtCtrls.TImage.Paint
        Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0))

    So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in PNG (included in D2007)? Or is the System.Move function not failsafe?

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them?

    As I said in that question, I know about the following:
    1. You can implement "coroutines" as threads/thread pools behind the scenes.
    2. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible.
    3. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation.
    4. There are also various JNI-based approaches to coroutines.

    I'll address each one's deficiencies in turn.

    Thread-based coroutines. This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages.

    JVM bytecode manipulation. This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work), with the advantage that you have only one architecture to worry about and get right. It also ties you down to running your code only on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs.

    The Da Vinci Machine. The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed, I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use it for cool experiments, but not for any code I expect to release to the real world. This also has a problem similar to the JVM bytecode manipulation solution above: it won't work on alternative stacks (like Android's).

    JNI implementation. This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing, and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely, but this, too, makes the point of doing things in Java entirely moot.

    So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

    Read the article

  • C# DLL Deployed in COM+. Error while accessing the methods.

    - by Dakshinamurthy
    I have a C# DLL (ABService) deployed in COM+, and my OS is Windows 2008. I have given a strong name to this DLL and its dependent DLLs. When I access the methods of this DLL through localhost, or if I add a reference to the client project, the methods execute successfully. Simply put: if I access the DLL from the same machine via a reference, it works. So I think there is no problem with the way I deployed it in COM+. I suspect the problem may be in the OS and Visual Studio 2008 combination. I built all the DLLs with Visual Studio 2008 with target CPU x86 and target framework 2.0. Below are the code variants I have tried and the errors they produce. I need to create the object for the DLL on the server machine (64-bit) and access its methods from the client (32-bit).

    Code:

        Type svr = Type.GetTypeFromProgID("ABService.Service", strserver1url[2], false);
        ABService.Service service1 = (ABService.Service)Activator.CreateInstance(svr);
        strresult = service1.ExecuteService(orequest.xml);

    Error:

        {"Retrieving the COM class factory for remote component with CLSID {77BF00E0-41AC-3967-9E72-A4927CC0B880} from machine 10.105.138.64 failed due to the following error: 80040154."}

    Code:

        Type svr = Type.GetTypeFromProgID("ABService.Service", strserver1url[2], true);
        object Service1 = null;
        Service1 = (ABService.Service)Activator.CreateInstance(svr, true);
        strresult = Convert.ToString(ReflectionHelper.Invoke(Service1, "ExecuteService", new object[] { orequest.xml }));
        Service1 = null;

    Error:

        Retrieving the COM class factory for remote component with CLSID {77BF00E0-41AC-3967-9E72-A4927CC0B880} from machine ftpsite failed due to the following error: 80040154.

    With the code below, if I put a VB DLL in COM+ instead of the C# .NET DLL, the method executes successfully.

    Code:

        ords = new RDS.DataSpace();
        ords.InternetTimeout = 600000;
        object M_Service = null;
        ABService.Service oabservice = null;
        M_Service = ords.CreateObject("ABService.Service", url);
        strresult = Convert.ToString(ReflectionHelper.Invoke(M_Service, "ExecuteService", new object[] { orequest.xml }));

    Error:

        {"Object doesn't support this property or method 'ExecuteService'"}

    Code:

        object obj = Interaction.CreateObject("ABService.Service", "10.105.138.64");
        strresult = Convert.ToString(ReflectionHelper.Invoke(obj, "ExecuteService", new object[] { orequest.xml }));

    Error:

        {"Cannot create ActiveX component."}

    Code:

        object obj = Activator.GetObject(typeof(ABService.Service), @"http://10.105.138.64:80/ABANET");
        strresult = Convert.ToString(ReflectionHelper.Invoke(obj, "ExecuteService", new object[] { orequest.xml }));

    Error:

        InnerException {"The remote server returned an error: (405) Method Not Allowed."} System.Exception {System.Net.WebException}
        Message "Exception has been thrown by the target of an invocation."

    Read the article

  • What is the fastest way to convert bool to byte?

    - by Amir Rezaei
    What is the fastest way to convert a bool to a byte? I want this mapping: False = 0, True = 1. Note: I don't want to use any if statement.

    Update: I don't want to use a conditional statement. I don't want the CPU to halt or guess the next statement. I want to optimize this code:

        private static string ByteArrayToHex(byte[] barray)
        {
            char[] c = new char[barray.Length * 2];
            byte k;
            for (int i = 0; i < barray.Length; ++i)
            {
                k = ((byte)(barray[i] >> 4));
                c[i * 2] = (char)(k > 9 ? k + 0x37 : k + 0x30);
                k = ((byte)(barray[i] & 0xF));
                c[i * 2 + 1] = (char)(k > 9 ? k + 0x37 : k + 0x30);
            }
            return new string(c);
        }

    Update: The length of the array is very large – on the order of terabytes – therefore I need to optimize where possible. I shouldn't need to explain myself; the question is still valid.

    Update: I'm working on a project and looking at others' code. That's why I didn't provide the function in the first place. I didn't want to spend time explaining it when people have opinions about the code. I shouldn't need to provide the background of my work, or a function that was not written by me, in my question. I have started to optimize it part by part. If I needed help with the whole function, I would have asked that in another question. That is why I asked this very simply at the beginning. Unfortunately people couldn't keep themselves to the question. So please, if you want to help, answer the question.

    Update: For those who want to see the point of this question, this example shows how two if statements could be removed from the code:

        byte A = k > 9;  // if only this were possible: (k > 9) == 0 || 1
        c[i * 2] = A * (k + 0x30) - (A - 1) * (k + 0x30);
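
    A sketch of two branch-free ways to get the 0/1 byte in C#. The unsafe cast relies on the CLR representing a bool as a single byte that holds 0 or 1 for values produced by comparisons – treat that as an assumption to verify on your target runtime:

        static class BoolByte
        {
            // Branch-free: reinterpret the bool's single byte directly.
            // Requires compiling with /unsafe.
            static unsafe byte ToByte(bool b)
            {
                return *(byte*)&b;
            }

            // Same mapping via the BCL; Convert.ToByte(bool) is likely a
            // conditional internally, so it may still branch.
            static byte ToByteSafe(bool b)
            {
                return Convert.ToByte(b);
            }
        }

    Applied to the hex function, byte A = BoolByte.ToByte(k > 9) gives the 0/1 the last example asks for. That said, a 16-entry lookup string such as "0123456789ABCDEF" indexed by k removes the comparison altogether.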

    Read the article

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question I have about calculating on the fly vs. storing fields in a database. I have read a few articles suggesting it is preferable to calculate when you can, but I would like to know whether that still applies to the following two examples.

    Example 1. Say you are storing data relating to a car. You store the fuel tank size in litres, and how many litres it uses per 100 km. You also want to know how many km it can travel, which can be calculated from the tank size and economy. I see two ways of doing this:
    1. When a car is added or updated, calculate the number of km and store it as a static field in the database.
    2. Every time a car is accessed, calculate the number of km on the fly.

    Because the car's economy/tank size doesn't change (although it could be edited), the km figure is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste CPU time, as opposed to simply storing it in a separate field in the database and calculating only when a car is added or updated?

    My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Let's say we have an app which has categories and items. We have a view where we display all the categories, with a count of the items inside each category. Again, I'm wondering what's better: performing a MySQL query to count all the items in each category every single time the page is accessed, or storing the count in a field in the categories table and updating it when an item is added or deleted?

    I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow compared to storing the data in a field. If that's not the case then please let me know; I just want to learn when to use either method. On a small scale I guess it wouldn't matter either way, but for apps like Facebook – would they really count the number of friends you have every time someone views your profile, or would they just store it as a field?

    I'd appreciate any responses to both of these scenarios, and any resource that might explain the benefits of calculating vs. storing. Thanks in advance, Christian
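
    For scale, the car case is the cheap end of the spectrum: the derived value is one arithmetic expression over columns already in the row, so recomputing it per read costs almost nothing. A sketch of that shape as a derived property (the class and property names are made up for illustration):

        class Car
        {
            public double TankLitres { get; set; }
            public double LitresPer100Km { get; set; }

            // Derived on read: one division and one multiplication,
            // so storing this would buy essentially nothing.
            public double RangeKm
            {
                get { return TankLitres / LitresPer100Km * 100.0; }
            }
        }

    The item-count case is different in kind: the "calculation" there is an aggregate over another table, whose cost grows with the number of rows scanned – which is why counter caches are a common compromise for it.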

    Read the article

  • Is it Bad Practice to use C++ only for the STL containers?

    - by gmatt
    First, a little background... In what follows, I use C, C++ and Java for coding (general) algorithms – not GUIs and fancy programs with interfaces, but simple command line algorithms and libraries.

    I started out learning about programming in Java. I got pretty good with Java and I learned to use the Java containers a lot, as they tend to reduce the complexity of bookkeeping while guaranteeing great performance. I intermittently used C++, but I was definitely not as good with it as with Java, and it felt cumbersome. I did not know C++ well enough to work in it without having to look up every single function, and so I quickly reverted to sticking with Java as much as possible.

    I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrating too much attention on a much too high-level language and needed more experience with how a CPU interacts with memory and what's really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date.

    For obvious reasons, I could not use assembly language to code on a daily basis; it was mostly reserved for fun diversions. After learning more about the computer through this experience, I realized that C++ is much closer to the "level of 1's and 0's" than Java is, but I still felt it to be incredibly obtuse – like a Swiss army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagement" to not abstract away what is really going on.

    However, I did miss one thing about Java: the containers. In particular, a simple container (like the STL vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks almost entirely like C with containers from C++ thrown in – the only feature I use from C++.

    I'd like to know whether it is considered okay in practice to use just one feature of C++ and ignore the rest in favor of C-style code?

    Read the article

  • How to interpret kernel panics?

    - by Owen
    Hi all, I'm new to the Linux kernel and can barely understand how to debug kernel panics. I have the error below and I don't know where in the C code I should start checking. I was thinking maybe I could echo which functions are being called, so I could check where in the code this null pointer is dereferenced. What print function should I use? How do you interpret the error message below?

        Unable to handle kernel NULL pointer dereference at virtual address 0000000d
        pgd = c7bdc000
        [0000000d] *pgd=4785f031, *pte=00000000, *ppte=00000000
        Internal error: Oops: 17 [#1] PREEMPT
        Modules linked in: bcm5892_secdom_fw(P) bcm5892_lcd snd_bcm5892 msr bcm5892_sci bcm589x_ohci_p12 bcm5892_skeypad hx_decoder(P) pinnacle hx_memalloc(P) bcm_udc_dwc scsi_mod g_serial sd_mod usb_storage
        CPU: 0 Tainted: P (2.6.27.39-WR3.0.2ax_standard #1)
        PC is at __kmalloc+0x70/0xdc
        LR is at __kmalloc+0x48/0xdc
        pc : [c0098cc8] lr : [c0098ca0] psr: 20000093
        sp : c7a9fd50 ip : c03a4378 fp : c7a9fd7c
        r10: bf0708b4 r9 : c7a9e000 r8 : 00000040
        r7 : bf06d03c r6 : 00000020 r5 : a0000093 r4 : 0000000d
        r3 : 00000000 r2 : 00000094 r1 : 00000020 r0 : c03a4378
        Flags: nzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user
        Control: 00c5387d Table: 47bdc008 DAC: 00000015
        Process sh (pid: 1088, stack limit = 0xc7a9e260)
        Stack: (0xc7a9fd50 to 0xc7aa0000)
        fd40: c7a6a1d0 00000020 c7a9fd7c c7ba8fc0
        fd60: 00000040 c7a6a1d0 00000020 c71598c0 c7a9fd9c c7a9fd80 bf06d03c c0098c64
        fd80: c71598c0 00000003 c7a6a1d0 bf06c83c c7a9fdbc c7a9fda0 bf06d098 bf06d008
        fda0: c7159880 00000000 c7a6a2d8 c7159898 c7a9fde4 c7a9fdc0 bf06d130 bf06d078
        fdc0: c79ca000 c7159880 00000000 00000000 c7afbc00 c7a9e000 c7a9fe0c c7a9fde8
        fde0: bf06d4b4 bf06d0f0 00000000 c79fd280 00000000 0f700000 c7a9e000 00000241
        fe00: c7a9fe3c c7a9fe10 c01c37b4 bf06d300 00000000 c7afbc00 00000000 00000000
        fe20: c79cba84 c7463c78 c79fd280 c7473b00 c7a9fe6c c7a9fe40 c00a184c c01c35e4
        fe40: 00000000 c7bb0005 c7a9fe64 c79fd280 c7463c78 00000000 c00a1640 c785e380
        fe60: c7a9fe94 c7a9fe70 c009c438 c00a164c c79fd280 c7a9fed8 c7a9fed8 00000003
        fe80: 00000242 00000000 c7a9feb4 c7a9fe98 c009c614 c009c2a4 00000000 c7a9fed8
        fea0: c7a9fed8 00000000 c7a9ff64 c7a9feb8 c00aa6bc c009c5e8 00000242 000001b6
        fec0: 000001b6 00000241 00000022 00000000 00000000 c7a9fee0 c785e380 c7473b00
        fee0: d8666b0d 00000006 c7bb0005 00000300 00000000 00000000 00000001 40002000
        ff00: c7a9ff70 c79b10a0 c79b10a0 00005402 00000003 c78d69c0 ffffff9c 00000242
        ff20: 000001b6 c79fd280 c7a9ff64 c7a9ff38 c785e380 c7473b00 00000000 00000241
        ff40: 000001b6 ffffff9c 00000003 c7bb0000 c7a9e000 00000000 c7a9ff94 c7a9ff68
        ff60: c009c128 c00aa380 4d18b5f0 08000000 00000000 00071214 0007128c 00071214
        ff80: 00000005 c0027ee4 c7a9ffa4 c7a9ff98 c009c274 c009c0d8 00000000 c7a9ffa8
        ffa0: c0027d40 c009c25c 00071214 0007128c 0007128c 00000241 000001b6 00000000
        ffc0: 00071214 0007128c 00071214 00000005 00073580 00000003 000713e0 400010d0
        ffe0: 00000001 bef0c7b8 000269cc 4d214fec 60000010 0007128c 00000000 00000000
        Backtrace:
        [] (__kmalloc+0x0/0xdc) from [] (gs_alloc_req+0x40/0x70 [g_serial])
         r8:c71598c0 r7:00000020 r6:c7a6a1d0 r5:00000040 r4:c7ba8fc0
        [] (gs_alloc_req+0x0/0x70 [g_serial]) from [] (gs_alloc_requests+0x2c/0x78 [g_serial])
         r7:bf06c83c r6:c7a6a1d0 r5:00000003 r4:c71598c0
        [] (gs_alloc_requests+0x0/0x78 [g_serial]) from [] (gs_start_io+0x4c/0xac [g_serial])
         r7:c7159898 r6:c7a6a2d8 r5:00000000 r4:c7159880
        [] (gs_start_io+0x0/0xac [g_serial]) from [] (gs_open+0x1c0/0x224 [g_serial])
         r9:c7a9e000 r8:c7afbc00 r7:00000000 r6:00000000 r5:c7159880 r4:c79ca000
        [] (gs_open+0x0/0x224 [g_serial]) from [] (tty_open+0x1dc/0x314)
        [] (tty_open+0x0/0x314) from [] (chrdev_open+0x20c/0x22c)
        [] (chrdev_open+0x0/0x22c) from [] (__dentry_open+0x1a0/0x2b8)
         r8:c785e380 r7:c00a1640 r6:00000000 r5:c7463c78 r4:c79fd280
        [] (__dentry_open+0x0/0x2b8) from [] (nameidata_to_filp+0x38/0x50)
        [] (nameidata_to_filp+0x0/0x50) from [] (do_filp_open+0x348/0x6f4)
         r4:00000000
        [] (do_filp_open+0x0/0x6f4) from [] (do_sys_open+0x5c/0x170)
        [] (do_sys_open+0x0/0x170) from [] (sys_open+0x24/0x28)
         r8:c0027ee4 r7:00000005 r6:00071214 r5:0007128c r4:00071214
        [] (sys_open+0x0/0x28) from [] (ret_fast_syscall+0x0/0x2c)
        Code: e59c4080 e59c8090 e3540000 159c308c (17943103)
        ---[ end trace be196e7cee3cb1c9 ]---
        note: sh[1088] exited with preempt_count 2
        process '-/bin/sh' (pid 1088) exited. Scheduling for restart.
        Welcome to Wind River Linux

    Read the article

  • C++ DLL creation for C# project - No functions exported

    - by Yeti
    I am working on a project that requires some image processing. The front end of the program is C# (because the guys thought it is a lot simpler to make the UI in it). However, as the image processing part needs a lot of CPU juice, I am writing that part in C++. The idea is to link it to the C# project and just call a function from a DLL to do the image processing, then allow the C# environment to process the data afterwards.

    Now the only problem is that it seems I am not able to build the DLL. Simply put, the compiler refuses to put any function into the DLL that I compile. Because the project requires some development-time testing, I created two projects in one C++ solution: one for the DLL and another console application. The console project holds all the files, and I just include the corresponding header in my DLL project file. I thought the compiler would take the functions I marked for export and build the DLL from them. Nevertheless, this does not happen.

    Here is how I declared the functions in the header:

        extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf,
            int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag,
            ObjectInformation* robot1, ObjectInformation* robot2,
            ObjectInformation* robot3, ObjectInformation* robot4,
            ObjectInformation* puck);

        extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(IplImage* imgInput,
            CvRect &imgROI, CvScalar &refHSVColorLow, CvScalar &refHSVColorHi);

    Followed by the implementations in the cpp file:

        extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(IplImage* imgInput,
            CvRect &imgROI, CvScalar &refHSVColorLow, CvScalar &refHSVColorHi)
        {
            // ...
            return cvPoint((int)(M10 / M00) + imgROI.x, (int)(M01 / M00) + imgROI.y);
        }

        extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf,
            int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag,
            ObjectInformation* robot1, ObjectInformation* robot2,
            ObjectInformation* robot3, ObjectInformation* robot4,
            ObjectInformation* puck)
        {
            // ...
        }

    And my main file for the DLL project looks like:

        #ifdef _MANAGED
        #pragma managed(push, off)
        #endif

        /// <summary> Include files. </summary>
        #include "..\ImageProcessingDebug\ImageProcessingTest.h"
        #include "..\ImageProcessingDebug\ImageProcessing.h"

        BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
        {
            return TRUE;
        }

        #ifdef _MANAGED
        #pragma managed(pop)
        #endif

    Needless to say, it does not work. A quick look with DLL Export Viewer 1.36 reveals that no function is inside the library. I don't get it. What am I doing wrong?

    As a side note, I am using C++ objects (and here is the C++ DLL part) such as the vector – however, only for internal usage. These will not appear in the headers of either function, as you can observe from the previous code snippets. Any ideas? Thx, Bernat
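
    For the C# side of the fence, a sketch of how such an export is typically consumed once it actually lands in the DLL. The DLL name is a placeholder, and note that even with extern "C", MSVC decorates __stdcall exports (e.g. _RobotData@36), so an explicit EntryPoint or a .def file may be needed:

        using System;
        using System.Runtime.InteropServices;

        static class ImageProcessingNative
        {
            // "ImageProcessing.dll" and the decorated name are assumptions;
            // check the real export table with a tool like DLL Export Viewer.
            [DllImport("ImageProcessing.dll", EntryPoint = "_RobotData@36",
                       CallingConvention = CallingConvention.StdCall)]
            public static extern void RobotData(
                byte[] buf,
                IntPtr pToNewBackgroundImage,
                IntPtr pToBackgroundImage,
                [MarshalAs(UnmanagedType.I1)] bool initFlag,
                IntPtr robot1, IntPtr robot2, IntPtr robot3, IntPtr robot4,
                IntPtr puck);
        }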

    Read the article

  • Search for the row with the highest number of cells in a table with colspan

    - by user593029
    What is an efficient way to find the highest number of cells in a row of a big table containing numerous colspans? (Merged cells – colspan – should be ignored, so in the example below the highest number of cells is 4, in the first row.) Is it js/jquery with a regular expression, or just a loop with bubble sort? I found the link below explaining a use of regex – is that the ideal way? Can someone suggest pseudo code for this?

    High cpu consumption due to a jquery regex patch

        <table width="156" height="84" border="0" >
          <tbody>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px" colspan="2"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px" colspan="2"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px" colspan="2"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
          </tbody>
        </table>

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format. Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system:

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics:

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema. I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question. I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info. The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

    runs table:

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

    spectra table:

        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

    datapoints table:

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?

    Read the article

  • Swing: does DefaultBoundedRangeModel coalesce multiple events?

    - by Jason S
    I have a JProgressBar displaying a BoundedRangeModel which is extremely fine-grained, and I was concerned that updating it too often would slow down my computer. So I wrote a quick test program (see below) which has a 10 Hz timer, but each timer tick makes 10,000 calls to microtick(), which in turn increments the BoundedRangeModel. Yet it seems to play nicely with the JProgressBar; my CPU is not working hard to run the program.

    How does JProgressBar or DefaultBoundedRangeModel do this? They seem to be smart about how much work is done to update the JProgressBar, so that as a user I don't have to worry about how often I update the BoundedRangeModel's value.

        package com.example.test.gui;

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.BoundedRangeModel;
        import javax.swing.DefaultBoundedRangeModel;
        import javax.swing.JFrame;
        import javax.swing.JPanel;
        import javax.swing.JProgressBar;
        import javax.swing.Timer;

        public class BoundedRangeModelTest1 extends JFrame {
            final private BoundedRangeModel brm = new DefaultBoundedRangeModel();
            final private Timer timer = new Timer(100, new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent arg0) {
                    tick();
                }
            });

            public BoundedRangeModelTest1(String title) {
                super(title);
                JPanel p = new JPanel();
                p.add(new JProgressBar(this.brm));
                getContentPane().add(p);
                this.brm.setMaximum(1000000);
                this.brm.setMinimum(0);
                this.brm.setValue(0);
            }

            protected void tick() {
                for (int i = 0; i < 10000; ++i) {
                    microtick();
                }
            }

            private void microtick() {
                this.brm.setValue(this.brm.getValue() + 1);
            }

            public void start() {
                this.timer.start();
            }

            static public void main(String[] args) {
                BoundedRangeModelTest1 f = new BoundedRangeModelTest1("BoundedRangeModelTest1");
                f.pack();
                f.setVisible(true);
                f.setDefaultCloseOperation(EXIT_ON_CLOSE);
                f.start();
            }
        }

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have two programs here, both doing exactly the same task: setting a boolean array / vector to the value true. The program using a vector takes 27 seconds to run, whereas the program using an array of 5 times greater size takes less than 1 s. I would like to know the exact reason why there is such a major difference. Are vectors really that inefficient?

    Program using vectors:

        #include <iostream>
        #include <vector>
        #include <ctime>
        using namespace std;

        int main(){
            const int size = 2000;
            time_t start, end;
            time(&start);
            vector<bool> v(size);
            for(int i = 0; i < size; i++){
                for(int j = 0; j < size; j++){
                    v[i] = true;
                }
            }
            time(&end);
            cout<<difftime(end, start)<<" seconds."<<endl;
        }

    Runtime - 27 seconds

    Program using array:

        #include <iostream>
        #include <ctime>
        using namespace std;

        int main(){
            const int size = 10000; // 5 times more size
            time_t start, end;
            time(&start);
            bool v[size];
            for(int i = 0; i < size; i++){
                for(int j = 0; j < size; j++){
                    v[i] = true;
                }
            }
            time(&end);
            cout<<difftime(end, start)<<" seconds."<<endl;
        }

    Runtime - < 1 seconds

    Platform - Visual Studio 2008
    OS - Windows Vista 32 bit SP1
    Processor - Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz
    Memory (RAM) - 1.00 GB

    Thanks, Amare

    Read the article

  • Browsers (IE and Firefox) freeze when copying a large amount of text

    - by Matt
    I have a web application – a Java servlet – that delivers data to users in the form of a text printout in a browser (text marked up with HTML in order to display in the browser as we want it to). The text does display in different colors, though most of it is black. One typical mode of operation is this:

    1. User submits a form to request data.
    2. Servlet delivers HTML file to browser.
    3. User does CTRL+A to select all the text.
    4. User does CTRL+C to copy all the text.
    5. User goes to a text editor and does CTRL+V to paste the text.

    In the testing where I'm having this problem, step #2 successfully loads all the data – we wait for that to complete. We can scroll down to the end of what the browser loaded and see the end of the data. However, the browser freezes on step #3 (Firefox) or on step #4 (IE). Because step #2 finishes, I think it is a browser/memory issue and not an issue with the web application.

    If I run queries to deliver smaller amounts of data (but after several queries we get the same data we would have gotten above in one query) and copy/paste this text, the file I save it into ends up being about 8 MB. If I save the browser's displayed HTML to a file on my computer via File-Save As from the browser menu, it works fine and the file is about 22 MB.

    We've tried this on 2 different computers at work (both running Windows XP, with at least 2 GB of RAM and many GB of free disk space), using Firefox and IE. We also tried it on a home computer from a home network outside of work (thinking our IT security software might be causing the problem), running Windows 7 using IE, and still had the problem. When I've done this, I can see whichever browser I'm using utilizing the CPU at 50%. Firefox's memory usage grows to about 1 GB; IE's stays in the several hundred MBs. We once let this run for half an hour, and it did not complete.

    I'm most likely going to modify the web app to offer the option of delivering a plain text file for download, and I imagine that will get the users what they need. But in the meantime – and because I'm curious, and I don't like my application freezing people's browsers – does anyone have any ideas about the browser freezing? I understand that sometimes you just reach your memory limit, but 22 MB sounds to me like an amount I should be able to copy to the clipboard.

    Read the article

  • Suggest an alternative way to organize/build a database solution.

    - by Hamish Grubijan
    We are using Visual Studio 2010, but this was first conceived with VS2003. I will forward the best suggestions to my team. The current setup almost makes me vomit. It is a C# solution with most projects containing .sql files. Because we support Microsoft, Oracle, and Sybase, we home-brewed a pre-processor, much like the C preprocessor, except that substitutions are performed by a home-brewed C# program without using yacc and tools like that. #ifdefs are used for conditional macro definitions, and yeah – macros are the way this is done. A macro can expand to another macro or two, but this should eventually terminate. Only macros have #ifdef in them – the rest of the SQL-like code just uses these macros.

    Now, the various configurations: Debug, MNDebug, MNRelease, Release, SQL_APPLY_ALL, SQL_APPLY_MSFT, SQL_APPLY_ORACLE, SQL_APPLY_SYBASE, SQL_BUILD_OUTPUT_ALL, SQL_COMPILE, as well as 2 more. Also: Any CPU, Mixed Platforms, Win32.

    What drives me nuts is having to configure it correctly, choosing the right one out of 12 x 3 = 36 configurations, and having to substitute the database name depending on the type of database: config, main, or gateway. I am thinking that configuration should be reduced to just Debug, Release, and SQL_APPLY. Also, using 0, 1, and 2 seems so 80s ... Finally, I think my intention to build or not build 3 types of databases for 3 types of vendors should be configured with just a tic-tac-toe board like:

        XOX
        OOX
        XXX

    In this case it would mean build MSFT+CONFIG, all SYBASE, and all GATEWAY. Still, the overall approach, which uses a text file, a pre-processor and many configurations, seems incredibly clunky. It is year 2010 now, and someone out there is bound to have a very clean and/or creative tool/solution. The only pro is that the existing collection of macros has been well tested.

    Have you ever had to write SQL that would work for several vendors? How did you do it?

    SqlVars.txt (every one of 30 users makes a copy of a template and modifies this to suit their needs):

        // This is the default parameters file and should not be changed.
        // You can overwrite any of these parameters by copying the appropriate
        // section to override into SqlVars.txt and providing your own information.

        //Build types are 0-Config, 1-Main, 2-Gateway
        BUILD_TYPE=1
        REMOVE_COMMENTS=1

        // Login information used when applying to a Microsoft SQL server database
        SQL_APPLY_MSFT_version=SQL2005
        SQL_APPLY_MSFT_database=msftdb
        SQL_APPLY_MSFT_server=ABC
        SQL_APPLY_MSFT_user=msftusr
        SQL_APPLY_MSFT_password=msftpwd

        // Login information used when applying to an Oracle database
        SQL_APPLY_ORACLE_version=ORACLE10g
        SQL_APPLY_ORACLE_server=oradb
        SQL_APPLY_ORACLE_user=orausr
        SQL_APPLY_ORACLE_password=orapwd

        // Login information used when applying to a Sybase database
        SQL_APPLY_SYBASE_version=SYBASE125
        SQL_APPLY_SYBASE_database=sybdb
        SQL_APPLY_SYBASE_server=sybdb
        SQL_APPLY_SYBASE_user=sybusr
        SQL_APPLY_SYBASE_password=sybpwd

        ... (THIS GOES ON)

    Read the article

  • NVIDIA graphics driver in Ubuntu 12.04

    - by user924501
    My overall goal is to be able to code CUDA-enabled applications. However, after many days of searching, following installation walkthroughs, and reinstalling countless times after driver failures, I'm here as a last resort. I cannot get Ubuntu 12.04 LTS to install the NVIDIA 295.59 driver for my GeForce GT 540M graphics card.

    My main system specs are as follows (I believe having the Intel processor may be the problem):

    - DELL Laptop XPS L502X
    - Intel® Core™ i7-2620M CPU @ 2.70GHz × 4
    - Intel integrated graphics
    - 64 bit
    - NVIDIA GeForce GT 540M
    - Ubuntu 12.04 LTS

    All other specs are irrelevant, unless I forgot something?

    Methods tried:

    1. aptitude install nvidia-current (all packages). Results: nothing really happened. Nothing appeared in the Additional Drivers menu, nor would the NVIDIA X Server Settings application allow access, because it thought there was no NVIDIA X Server installed.

    2. Downloaded the driver from nvidia.com. Set nomodeset in the grub boot menu through /boot/grub/grub.cfg. Went to the console and turned off lightdm. Installed the driver, but it said the pre-install failed (mean anything?). Started up lightdm again. Results: NVIDIA X Server Settings still didn't notice it. I even ran nvidia-xconfig multiple times, and also went into the config file to make sure the driver setting said "nvidia".

    3. aptitude install nvidia-173 (all packages). Results: couldn't find the xorg-video-abi-10 virtual package. It was nowhere to be found, and the Ubuntu forums everywhere had no answers; lots of people were having this problem.

    This is easily done in Windows – simply download the driver and debug in Visual Studio with no problems at all. I'd like clear step-by-step instructions on how I should go about this. I'm relatively new to Linux, but I can find my way around pretty well, so you aren't talking to a straight-up beginner. Also, if you think another thread may have the answer, please post it, because I had a hard time looking for my specific type of problem.

    TL;DR: I want to have access to my GPU so I can code with CUDA while in Ubuntu 12.04 on my 64-bit laptop, which also has Intel integrated graphics on the processor.

    Solution:

        sudo apt-add-repository ppa:ubuntu-x-swat/x-updates && sudo apt-get update && sudo apt-get upgrade && sudo apt-get install nvidia-current

    Read the article

  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question because I already found out the answer, but still interesting thing. I always thought that hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes of time on a Core 2 CPU. The code does the following: it maintains the collection todo of items it needs to process. At each iteration it takes an item from this collection (doesn't matter which item), deletes it, processes it if it wasn't processed (possibly adding more items to process), and repeats this until there are no items to process. The culprit seems to be the Dictionary.Keys.First() operation. The question is why is it slow? Stopwatch watch = new Stopwatch(); watch.Start(); HashSet<int> processed = new HashSet<int>(); Dictionary<int, int> todo = new Dictionary<int, int>(); todo.Add(1, 1); int iterations = 0; int limit = 500000; while (todo.Count > 0) { iterations++; var key = todo.Keys.First(); var value = todo[key]; todo.Remove(key); if (!processed.Contains(key)) { processed.Add(key); // process item here if (key < limit) { todo[key + 13] = value + 1; todo[key + 7] = value + 1; } // doesn't matter much how } } Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed); This results in: Iterations: 923007; Time: 00:02:09.8414388. Simply changing Dictionary to SortedDictionary yields: Iterations: 499976; Time: 00:00:00.4451514. 300 times faster while having only 2 times less iterations. The same happens in java. Used HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().

    Read the article

  • Optimizing GDI+ drawing?

    - by user146780
    I'm using C++ and GDI+. I'm going to be making a vector drawing application and want to use GDI+ for the drawing. I've created a simple test to get familiar with it:

        case WM_PAINT:
            GetCursorPos(&mouse);
            GetClientRect(hWnd, &rct);
            hdc = BeginPaint(hWnd, &ps);
            MemDC = CreateCompatibleDC(hdc);
            bmp = CreateCompatibleBitmap(hdc, 600, 600);
            SelectObject(MemDC, bmp);
            g = new Graphics(MemDC);

            for(int i = 0; i < 1; ++i)
            {
                SolidBrush sb(Color(255,255,255));
                g->FillRectangle(&sb, rct.top, rct.left, rct.right, rct.bottom);
            }

            for(int i = 0; i < 250; ++i)
            {
                pts[0].X = 0;
                pts[0].Y = 0;
                pts[1].X = 10 + mouse.x * i;
                pts[1].Y = 0 + mouse.y * i;
                pts[2].X = 10 * i + mouse.x;
                pts[2].Y = 10 + mouse.y * i;
                pts[3].X = 0 + mouse.x;
                pts[3].Y = (rand() % 600) + mouse.y;

                Point p1, p2;
                p1.X = 0;
                p1.Y = 0;
                p2.X = 300;
                p2.Y = 300;
                g->FillPolygon(&b, pts, 4);
            }

            BitBlt(hdc, 0, 0, 900, 900, MemDC, 0, 0, SRCCOPY);
            EndPaint(hWnd, &ps);
            DeleteObject(bmp);
            g->ReleaseHDC(MemDC);
            DeleteDC(MemDC);
            delete g;
            break;

    I'm wondering if I'm doing it right, or if I have areas killing the CPU, because right now it takes ~1 sec to render this, and I want it to be able to redraw itself very quickly. Thanks.

    In a real situation, would it be better to just figure out the portion of the screen to redraw and only redraw the elements within the bounds of it?

    Read the article

  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links if it matches a list of terms from my database. I have a fairly huge list of terms – around 6,000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with code, I somehow managed to add over 40,000 random characters to our database – the majority of which required manual removal. Since then I've lost faith in that idea, and the simpler PHP solutions simply weren't efficient enough to deal with the amount of data and the quantity of terms.

    My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on the page. This answer has an idea which I'd like to attempt. I would use AJAX to retrieve the terms from the database, to build an object such as this:

        var words = [
            { word: 'Something', link: 'http://www.something.com' },
            { word: 'Something Else', link: 'http://www.something.com/else' }
        ];

    When the object has been built, I'd use this kind of code:

        //for each array element
        $.each(words, function() {
            //store it ("this" is gonna become the dom element in the next function)
            var search = this;
            $('.message').each(function() {
                //if it's exactly the same
                if ($(this).text() === search.word) {
                    //do your magic tricks
                    $(this).html('<a href="' + search.link + '">' + search.link + '</a>');
                }
            });
        });

    Now, at first sight there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do?

    One option would be to perform some of the overhead within the PHP script that the AJAX communicates with. For instance, I could send the ID of the post, and the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms... then the return call to the JavaScript could simply be the matching terms, which would significantly reduce the number of matches the above jQuery would make (around 50 at most).

    I have no problem with the script taking a few seconds to "load" on the user's browser, as long as it isn't impacting their CPU usage or anything like that.

    So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance,

    Read the article

  • AudioTrack lag: obtainBuffer timed out

    - by BTR
    I'm playing WAVs on my Android phone by loading the file and feeding the bytes into AudioTrack.write() via the FileInputStream -> BufferedInputStream -> DataInputStream chain. The audio plays fine, and while it does I can easily adjust sample rate, volume, etc. on the fly with nice performance. However, it's taking about two full seconds for a track to start playing. I know AudioTrack has an inescapable delay, but this is ridiculous. Every time I play a track, I get this:

        03-13 14:55:57.100: WARN/AudioTrack(3454): obtainBuffer timed out (is the CPU pegged?) 0x2e9348 user=00000960, server=00000000
        03-13 14:55:57.340: WARN/AudioFlinger(72): write blocked for 233 msecs, 9 delayed writes, thread 0xba28

    I've noticed that the delayed write count increases by one every time I play a track – even across multiple sessions – from the time the phone has been turned on. The block time is always 230-240 ms, which makes sense considering a minimum buffer size of 9600 on this device (9600 / 44100). I've seen this message in countless searches on the Internet, but it usually seems to be related to not playing audio at all or to skipping audio. In my case, it's just a delayed start.

    I'm running all my code in a high-priority thread. Here's a truncated-yet-functional version of what I'm doing. This is the thread callback in my playback class. Again, this works (only playing 16-bit, 44.1 kHz, stereo files right now); it just takes forever to start and prints that obtainBuffer/delayed write message every time.

        public void run() {
            // Load file
            FileInputStream mFileInputStream;
            try {
                // mFile is an instance of a custom file class -- this is correct,
                // so don't sweat this line
                mFileInputStream = new FileInputStream(mFile.path());
            } catch (FileNotFoundException e) {}
            BufferedInputStream mBufferedInputStream = new BufferedInputStream(mFileInputStream, mBufferLength);
            DataInputStream mDataInputStream = new DataInputStream(mBufferedInputStream);

            // Skip header
            try {
                if (mDataInputStream.available() > 44)
                    mDataInputStream.skipBytes(44);
            } catch (IOException e) {}

            // Initialize device
            mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                    ConfigManager.SAMPLE_RATE,
                    AudioFormat.CHANNEL_CONFIGURATION_STEREO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    ConfigManager.AUDIO_BUFFER_LENGTH,
                    AudioTrack.MODE_STREAM);
            mAudioTrack.play();

            // Initialize buffer
            byte[] mByteArray = new byte[mBufferLength];
            int mBytesToWrite = 0;
            int mBytesWritten = 0;

            // Loop to keep thread running
            while (mRun) {
                // This flag is turned on when the user presses "play"
                while (mPlaying) {
                    try {
                        // Check if data is available
                        if (mDataInputStream.available() > 0) {
                            // Read data from file and write to audio device
                            mBytesToWrite = mDataInputStream.read(mByteArray, 0, mBufferLength);
                            mBytesWritten += mAudioTrack.write(mByteArray, 0, mBytesToWrite);
                        }
                    } catch (IOException e) { }
                }
            }
        }

    If I can get past the artificially long lag, I can easily deal with the inherent latency by starting my write at a later, predictable position (i.e., skipping past the minimum buffer length when I start playing a file).

    Read the article

  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute code which first transfers data from CPU to GPU memory and vice versa. In spite of increasing the volume of data, the data transfer time remains the same, as if no data transfer were actually taking place. I am posting the code.

        #include <stdio.h>      /* Core input/output operations */
        #include <stdlib.h>     /* Conversions, random numbers, memory allocation, etc. */
        #include <math.h>       /* Common mathematical functions */
        #include <time.h>       /* Converting between various date/time formats */
        #include <cuda.h>       /* CUDA related stuff */
        #include <sys/time.h>

        __global__ void device_volume(float *x_d, float *y_d)
        {
            int index = blockIdx.x * blockDim.x + threadIdx.x;
        }

        int main(void)
        {
            float *x_h, *y_h, *x_d, *y_d, *z_h, *z_d;
            long long size = 9999999;
            long long nbytes = size * sizeof(float);
            timeval t1, t2;
            double et;

            x_h = (float*)malloc(nbytes);
            y_h = (float*)malloc(nbytes);
            z_h = (float*)malloc(nbytes);
            cudaMalloc((void **)&x_d, size * sizeof(float));
            cudaMalloc((void **)&y_d, size * sizeof(float));
            cudaMalloc((void **)&z_d, size * sizeof(float));

            gettimeofday(&t1, NULL);
            cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice);
            cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice);
            cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice);
            gettimeofday(&t2, NULL);
            et = (t2.tv_sec - t1.tv_sec) * 1000.0;      // sec to ms
            et += (t2.tv_usec - t1.tv_usec) / 1000.0;   // us to ms
            printf("\n %ld\t\t%f\t\t", nbytes, et);
            et = 0.0;
            //printf("%f %d\n", seconds, CLOCKS_PER_SEC);

            // launch a kernel with a single thread to greet from the device
            //device_volume<<<1,1>>>(x_d,y_d);

            gettimeofday(&t1, NULL);
            cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost);
            cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost);
            cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost);
            gettimeofday(&t2, NULL);
            et = (t2.tv_sec - t1.tv_sec) * 1000.0;      // sec to ms
            et += (t2.tv_usec - t1.tv_usec) / 1000.0;   // us to ms
            printf("%f\n", et);

            cudaFree(x_d);
            cudaFree(y_d);
            cudaFree(z_d);
            return 0;
        }

    Can anybody help me with this issue? Thanks

    Read the article

  • Top Tweets SOA Partner Community – March 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity
    SOA Community: SOA Community Newsletter February 2012 wp.me/p10C8u-o0
    Marc: Reading through the #OFM 11.1.1.6, patchset 5 documentation. What is the best way to upgrade your whole dev…prd street.
    SOA Community: Thanks for the successful and super interesting #sbidays! Wonderful discussions around the integration, case management and security tracks
    Torsten Winterberg: Bookmarked the new Opitz technology blog yet? The Cattle Crew bit.ly/yLPwBD will be posting findings there regularly from now on.
    OTNArchBeat: Unit Testing Asynchronous BPEL Processes Using soapUI | @DanielAmadei bit.ly/x9NsS9
    Rolando Carrasco: Human Task video in BPM 11g. By @edwardo040. bit.ly/wki9CA cc @OracleBPM @OracleSOA @soacommunity
    Marcel Mertin: SOA Security Hands-On by Dirk Krafzig and Mamoon Yunus at #sbidays is also great!
    SOA Community: Workshop day #sbidays #BPMN2.0 by Volker Stiehl from #SAP – great training – now I can model & execute in #bpmsuite #soacommunity
    Simone Geib: Just updated our advanced #soasuite #otn page with a number of very interesting @orclateamsoa blog posts: bit.ly/advancedsoasui…
    OTNArchBeat: Start Small, Grow Fast: SOA Best Practices article by @biemond, @rluttikhuizen, @demed bit.ly/yem9Zv
    Steffen Miller: Nice new features in SOA Suite Business Rules #PS5 – testing rules with scenarios and output validation bit.ly/zj64Q3 @SOACOMMUNITY
    OTNArchBeat: SOA Blackbelt training by David Shaffer, April 30th–May 4th 2012 bit.ly/xGdC24
    OTNArchBeat: What have BPM, big data, social tools, and business models got in common? | Andy Mulholland bit.ly/xUkOGf
    SOA Community: Live hacking at #sbidays – cheaper shopping, bias cracking, payment systems, secure your SOA! pic.twitter.com/y7YaIdug
    SOA Community: Future #BPM & #ACM solutions can make use of ontologies, based on #sparql #sbidays pic.twitter.com/xLb1Z5zs
    Simone Geib: @soacommunity: SOA Blackbelt training by David Shaffer, April 30th–May 4th 2012 wp.me/p10C8u-nX
    Biemond: Changing your ADF Connections in Enterprise Manager with PS5: With Patch Set 5 of Fusion Middleware you can fina… bit.ly/zF7Rb1
    Marc: HUGE (!) CPU and heap improvement on Oracle Fusion Middleware tinyurl.com/762drzp @wlscommunity @soacommunity #OSB #SOA #WLS
    SOA Community: Networking @ SOA & BPM Partner Community blogs.oracle.com/soacommunity/e… #soacommunity #otn #opn #oracle
    SOA Community: Published the SOA Partner Community newsletter February edition – READ it. Not yet a member? oracle.com/goto/emea/soa #soacommunity #otn #opn
    AMIS, Oracle & Java Blog by Lucas Jellema: "Book Review: Do More with SOA Integration: Best of Packt" (December 2011, various authors) bit.ly/wq633E
    Jon petter hjulstad: @SOASimone Excellent summary! Lots of new features!
    Simone Geib: Do you want to know what's new in #soasuite #PS5? Go to bit.ly/xBX06f and let me know what you think
    SOA Community: Unit Testing Asynchronous BPEL Processes Using soapUI oracle.com/technetwork/ar… #soacommunity #soa #otn #oracle #bpel
    Eric Elzinga: Oracle Fusion Middleware Partner Community Forum Malaga, The Overview bit.ly/AA9BKd #ofmforum
    SOA&Cloud Symposium: The February issue of the Service Technology Magazine is now published. servicetechmag.com
    SOA Community: Oracle SOA Suite 11g Database Growth Management – must read! oracle.com/technetwork/da… #soacommunity #soa #purging
    demed: Have you exposed internal processes to mobile devices using #oraclesoa? Interested in an article? DM me! #osb #rest #multichannel #mobile
    orclateamsoa: A-Team SOA Blog: Enhanced version of Thread Dump Analyzer (TDA A-Team) ow.ly/1hpk7l
    SOA Community: BPM Suite #PS5 (11.1.1.6) available for download soacommunity.wordpress.com/2012/02/22/soa… Send us your feedback! #soacommunity #bpmsuite #opn
    SOA Community: SOA Suite #PS5 (11.1.1.6) available for download soacommunity.wordpress.com/2012/02/22/soa… Send us your feedback! #soacommunity #soasuite
    SOA Community: BPM Suite #PS5 (11.1.1.6) available for download. List of new BPM features blogs.oracle.com/soacommunity/e… #soacommunity #bpm #bpmsuite #opn
    OracleBlogs: BPM in Utilities Industry ow.ly/1hC3fp
    OTNArchBeat: Demystifying Oracle Enterprise Gateway | Naresh Persaud bit.ly/xtDNe2
    OTNArchBeat: Architect's Guide to Big Data; Test BPEL Processes Using SoapUI; Development Debate bit.ly/xbDYSo
    Frank Nimphius: Finished my book review of "Do More with SOA Integration: Best of Packt". Here are my review comments: bit.ly/x2k9OZ
    Lucas Jellema: That is my one stop-and-go download center for #PS5: edelivery.oracle.com/EPD/Download/g…
    Lucas Jellema: Interesting piece of documentation: Fusion Applications Extensibility Guide – docs.oracle.com/cd/E15586_01/f… source for design time @ run time inspira
    Lucas Jellema: Strongly improved support for testing Business Rules at design time in #PS5 – see docs.oracle.com/cd/E23943_01/u…
    Lucas Jellema: SOA Suite 11gR1 PS5: new BPEL component testing – docs.oracle.com/cd/E23943_01/d…
    Lucas Jellema: PS5 available for CEP (Complex Event Processing) – a personal favorite of mine: oracle.com/technetwork/mi…
    Lucas Jellema: What's New in Fusion Developer's Guide 11gR1 PS5: docs.oracle.com/cd/E23943_01/w…
    Lucas Jellema: BPMN Correlation (FMW 11gR1 PS5): docs.oracle.com/cd/E23943_01/d…
    Lucas Jellema: Modifying running BPM process instances (FMW 11gR1 PS5): docs.oracle.com/cd/E23943_01/d…
    Lucas Jellema: SOA Suite 11gR1 PS5 – new aggregation pattern: docs.oracle.com/cd/E23943_01/d… routing multiple messages to the same instance
    Melvin van der Kuijl: Automating Testing of SOA Composite Applications in PS5. docs.oracle.com/cd/E23943_01/d…
    Cato Aune: SOA Suite PS5 Enterprise Deployment Guide is available in ePub docs.oracle.com/cd/E23943_01/c… Much better than PDF on a Galaxy Note
    SOA Community: JDeveloper 11.1.1.6 is available for download bit.ly/wGYrwE #soacommunity
    SOA Community: Your first experience with #PS5 – let us know @soacommunity – send us your tweets and blog posts! #soacommunity
    Jon petter hjulstad: WLS 10.3.6 new features, e.g. better logging of JDBC use: docs.oracle.com/cd/E23943_01/w…
    Heidi Buelow: Get it now! RT @soacommunity: BPM Suite PS5 11.1.1.6 available for download bit.ly/AgagT5 #bpm #soacommunity
    Jon petter hjulstad: SOA Suite PS5 EDG contains OSB! docs.oracle.com/cd/E23943_01/c…
    Jon petter hjulstad: Testing Oracle Rules from JDeveloper is easier in PS5: docs.oracle.com/cd/E23943_01/u…
    Biemond®: What's New in Oracle Service Bus 11.1.1.6.0 oracle.com/technetwork/mi…
    Jon petter hjulstad: Admin guide New and Changed Features for PS5, e.g. GridLink data sources: docs.oracle.com/cd/E23943_01/c…
    Andreas Koop: Unbelievable! #OFM doc lib growth from 11g PS4 to 11g PS5 by 1.2G!
    SOA Community: ODI PS5 is available oracle.com/technetwork/mi… #odi #soacommunity
    SOA Community: Service Bus 11g Development Cookbook soacommunity.wordpress.com/2012/02/20/ser… #osb #soacommunity #ace #opn
    For regular information on Oracle SOA Suite, become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required).
    Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN,SOA,BPM

    Read the article

  • Announcing Windows Azure Mobile Services

    - by Leniel Macaferi
    I'm excited to announce a new capability we are adding to Windows Azure today: Windows Azure Mobile Services. Windows Azure Mobile Services makes it incredibly easy to connect a scalable cloud backend to your client and mobile applications. These services let you easily store structured data in the cloud that can span devices and users, and integrate that data with user authentication. You can also send updates to clients via push notifications. Today's release lets you add these capabilities to any Windows 8 app in literally minutes, and provides a super productive way for you to quickly turn your ideas into applications. We will also be adding support to enable these same scenarios for Windows Phone, iOS, and Android devices soon. Read this getting-started tutorial, which walks through how you can build (in less than 5 minutes) a simple, cloud-enabled Windows 8 "Todo List" app using Windows Azure Mobile Services. Or watch this video, where I show how to build it step by step.
    Getting Started: If you don't yet have a Windows Azure account, you can sign up for a no-obligation free trial. Once signed up, click the "preview features" section just below the "account" tab on the www.windowsazure.com website and enable your account for the "Mobile Services" preview. Instructions on how to enable these new features can be found here. Once Mobile Services is enabled, log in to the Windows Azure Portal, click the "New" button, and choose the new "Mobile Services" icon to create your first mobile backend. Once created, you'll see a quick-start page like the one below, with instructions on how to connect your mobile service to an existing Windows 8 client app you have already started implementing, or how to create and connect a brand-new Windows 8 client app to the mobile backend. Read this getting-started tutorial for step-by-step instructions on how to build (in less than 5 minutes) a simple Windows 8 "Todo List" app that stores its data in Windows Azure.
    Storing Data in the Cloud: Storing data in the cloud with Windows Azure Mobile Services is incredibly easy. When you create a Windows Azure Mobile Service, we automatically associate it with a SQL database inside Windows Azure. The Windows Azure Mobile Service backend then provides built-in support that lets remote applications securely store and retrieve data through it (using secure REST endpoints and a JSON-based OData format), without you having to write or deploy any custom server code. Built-in management support is provided within the Windows Azure Portal for creating new tables, browsing data, creating indexes, and controlling access permissions. This makes it incredibly easy to connect client applications to the cloud, and lets desktop application developers without much server-side coding experience be productive from the very start.
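    As a rough, minimal sketch of that programming model (assuming the Mobile Services client SDK for Windows 8; the service URL, application key, TodoItem class, and table columns are hypothetical placeholders, not values from this post), a client-side query might look like this:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.MobileServices; // Mobile Services client SDK

    public class TodoItem
    {
        // Columns of a hypothetical 'TodoItem' table created through the portal
        public int Id { get; set; }
        public string Text { get; set; }
        public bool Complete { get; set; }
    }

    public class TodoPage
    {
        // Placeholder service URL and application key (shown on the portal's quick-start page)
        private readonly MobileServiceClient client =
            new MobileServiceClient("https://your-service.azure-mobile.net/", "YOUR-APPLICATION-KEY");

        public async Task<List<TodoItem>> LoadActiveItemsAsync()
        {
            // A LINQ query over a strongly typed POCO; the client library translates it
            // into a REST call against the service's OData-style table endpoint, e.g.:
            //   GET https://your-service.azure-mobile.net/tables/TodoItem?$filter=(complete eq false)
            return await client.GetTable<TodoItem>()
                               .Where(item => item.Complete == false)
                               .ToListAsync();
        }
    }

    On Windows 8, the returned list would typically be assigned to a XAML control's ItemsSource to populate the UI.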
    They can focus on building the client application experience, taking advantage of Windows Azure Mobile Services to provide the cloud backend services they need. The sketch above shows the kind of Windows 8 C#/XAML client-side code that could be used to query data from a Windows Azure Mobile Service. Client application developers using C# can write queries like this using LINQ and strongly typed POCO objects, which are then translated into HTTP REST queries that execute against a Windows Azure Mobile Service. Developers don't need to write or deploy any custom server-side code to let client-side code like this run asynchronously and populate the client UI.
    Because Mobile Services is part of Windows Azure, developers can later choose to augment or extend their solution with server-side functionality and more advanced business logic if they wish. This provides maximum flexibility, and lets developers grow their solutions to meet any need.
    User Authentication and Push Notifications: Windows Azure Mobile Services also makes it incredibly easy to integrate user authentication/authorization and push notifications into your applications. You can use these capabilities to enable authentication and control access permissions to the data you store in the cloud in a granular way. You can also send push notifications to users/devices when data changes. Windows Azure Mobile Services supports the concept of "server scripts" (small pieces of script that run on the server in response to actions), which make enabling these scenarios very easy. Below are links to step-by-step tutorials for common authentication/authorization/push scenarios you can use with Windows Azure Mobile Services and Windows 8 apps: Enabling User Authentication; Authorizing Users; Getting Started with Push Notifications; Push Notifications to multiple Users.
    Manage and Monitor your Mobile Service: Just like every other service in Windows Azure, you can monitor usage and metrics of your Mobile Service backend using the "Dashboard" tab within the Windows Azure Portal. The Dashboard tab provides a monitoring view that shows the API calls, bandwidth, and server CPU cycles consumed by your Windows Azure Mobile Service. You can also use the "Logs" tab within the portal to see error messages. This makes it easy to monitor and track how your application is doing.
    Scale Up as Your Business Grows: Windows Azure Mobile Services now lets every Windows Azure customer create and run up to 10 Mobile Services for free, in a shared, multi-tenant hosting environment (where your Mobile Service backend is one of several applications running on a shared set of server resources). This provides an easy way to start implementing your projects at no cost at all (note: each free Windows Azure account also includes a 1GB SQL database that you can use with any number of applications or Windows Azure Mobile Services).
    If your client application becomes popular, you can click the "Scale" tab of your Mobile Service and switch from "Shared" to "Reserved" mode. Doing so lets you isolate your applications so that you are the only customer within a virtual machine. This lets you elastically scale the amount of resources your applications consume, allowing you to scale capacity up (or down) with your traffic. With Windows Azure you pay for compute capacity by the hour, which lets you scale your resources up and down to match only what you need. This enables a super flexible model that is ideal for new mobile application scenarios, as well as for startups that are just getting going.
    Summary: I have only scratched the surface of what you can do with Windows Azure Mobile Services; there are many more capabilities to explore. With Windows Azure Mobile Services you will be able to build mobile application scenarios faster than ever, enabling even better user experiences by connecting your client applications to the cloud. Visit the Windows Azure Mobile Services dev center to learn more, and build your first Windows 8 app connected to Windows Azure today. And read this getting-started tutorial for step-by-step instructions on how to build (in less than 5 minutes) a simple, cloud-enabled Windows 8 "Todo List" app using Windows Azure Mobile Services.
    Hope this helps,
    - Scott
    P.S. In addition to blogging, I am also using Twitter for quick updates and to share links. Follow me at: twitter.com/ScottGu
    Text translated from the original post by Leniel Macaferi.

    Read the article

  • Thursday Community Keynote: "By the Community, For the Community"

    - by Janice J. Heiss
    Sharat Chander, JavaOne Community Chairperson, began Thursday's Community Keynote. As part of the morning's theme of "By the Community, For the Community," Chander noted that 60% of the material at the 2012 JavaOne conference was presented by Java Community members. "So next year, when the call for papers starts, put in your submissions," he urged.
    From there, Gary Frost, Principal Member of Technical Staff, AMD, expanded upon Sunday's Strategy Keynote exploration of Project Sumatra, an OpenJDK project targeted at bringing Java to heterogeneous computing platforms (which combine the CPU and the parallel processor of the GPU into a single piece of silicon). Sumatra entails enhancing the JVM to make maximum use of these advanced platforms. Within this development space, AMD created the Aparapi API, which converts Java bytecode into OpenCL for execution on such GPU devices. The Aparapi API was open sourced in September 2011.
    Whether it was zooming in on a Mandelbrot set, "the game of life," or a swarm of 10,000 Dukes in a space-bound gravitational dance, Frost's demos, using an Aparapi/OpenCL implementation, produced stunningly faster display results. He indicated that the Java 9 timeframe is where they see Project Sumatra coming to ultimate fruition, employing the Lambdas of Java 8.
    Returning to the theme of the keynote, Donald Smith, Director, Java Product Management, Oracle, explored a mind map graphic demonstrating the importance of community in terms of fostering innovation. "It's the sharing and mixing of culture, the diversity, and the rapid prototyping," he said. Within this topic, Smith brought up a panel of representatives from Cloudera, Eclipse, Eucalyptus, Perrone Robotics, and Twitter--ideal manifestations of community and innovation in the world of Java.
    Marten Mickos, CEO, Eucalyptus Systems, explored his company's open source cloud software platform, written in Java and used by gaming companies, technology companies, media companies, and more. Chris Aniszczyk, Operations Engineering, Twitter, noted the importance of the JVM in terms of their multiple-language development environment. Mike Olson, CEO, Cloudera, described his company's Apache Hadoop-based software, support, and training. Mike Milinkovich, Executive Director, Eclipse Foundation, noted that they have about 270 tools projects at Eclipse, with 267 of them written in Java. Milinkovich added that Eclipse will even be going into space in 2013, as part of the control software on various experiments aboard the International Space Station. Lastly, Paul Perrone, CEO, Perrone Robotics, detailed his company's robotics and automation software platform built 100% on Java, including Java SE and Java ME--"on rat, to cat, to elephant-sized systems." Milinkovich noted that communities are by nature so good at innovation because of their very openness--"The more open you make your innovation process, the more ideas are challenged, and the more developers are focused on justifying their choices all the way through the process."
    From there, Georges Saab, VP Development Java SE OpenJDK, continued the topic of innovation and helping the Java Community to "Make the Future Java." Martijn Verburg, representing the London Java Community (winner of a Duke's Choice Award 2012 for their activity in OpenJDK and the JCP), soon joined Saab onstage. Verburg detailed the LJC's "Adopt a JSR" program--"to get day-to-day developers more involved in the innovation that's happening around them." From its London launching pad, the innovative program has spread to Brazil, Morocco, Latvia, India, and more.
    Other active participants in the program joined Verburg onstage--Ben Evans, London Java Community; James Gough, Stackthread; Bruno Souza, SOUJava; Richard Warburton, jClarity; and Cecelia Borg, Oracle--OpenJDK Onboarding. Together, the group explored the goals and tasks inherent in the Adopt a JSR program--from organizing hack days (testing prototype implementations), to managing mailing lists and forums, to triaging issues, to evangelism--all with the goal of fostering greater community/developer involvement but, equally importantly, building better open standards. "Come join us, and make your ecosystem better!" urged Verburg.
    Paul Perrone returned to profile the latest in his company's robotics work around Java--including the AARDBOTS family of smaller robotic vehicles, running the Perrone MAX platform on top of the Java JVM. Perrone took his "Rumbles" four-wheeled robot out for a spin onstage--a roaming, ARM-based security-bot vehicle, complete with IR, ultrasonic, and "cliff" sensors (the latter for the raised stage at JavaOne). As an ultimate window into the future of robotics, Perrone displayed a "head-set" controller--a sensor directed at the forehead to monitor brainwaves, for the someday implementation of brain-to-robot control.
    Then, just when it seemed this might be the end of the day's futuristic offerings, a mystery voice from offstage pronounced "I've got some toys"--proving to be guest visitor James Gosling, there to explore his cutting-edge work with Liquid Robotics. While most think of robots as something with wheels or arms or lasers, Gosling explained, the Liquid Robotics vehicle is an entirely new and innovative ocean-going 'bot. Looking like a floating surfboard with an attached set of underwater wings, the autonomous devices roam the oceans using only the energy of ocean waves to propel them, and a single actuated rudder to steer. "We have to accomplish all guidance just by wiggling the rudder," Gosling said. The devices offer applications ranging from self-installing weather buoy, to pollution monitoring station, to marine mammal monitoring device, to climate change data gathering, to even ocean life genomic sampling.
    The early versions of the vehicle used C code on very tiny industrial microcontrollers, where they had to "count the bytes one at a time." But the latest generation vehicles, which just hit the water a week or so ago, employ an ARM processor running Linux and the ARM version of JDK 7. Gosling explained that vehicle communication from remote locations is achieved via the Iridium satellite network. But because of the costs of this communication path, the data must be sent in very small bursts--using SBD (short burst data). "It costs $1/kb, so that rules everything in the software design," said Gosling. "If you were trying to stream a Netflix video over this, it would cost a million dollars a movie. …We don't have a 'big data' problem," he quipped. There are currently about 150 Liquid Robotics vehicles out traversing the oceans. Gosling demonstrated real-time satellite tracking of several vehicles currently at sea, noting that Java is actually particularly good at AI applications, since the language's garbage collection facilitates complex data structures.
    To close out his time onstage, Gosling of course participated in the ceremonial Java tee-shirt toss out to the audience. In parting, Chander passed the JavaOne Community Chairperson baton to Stephen Chin, Java Technology Evangelist, Oracle. Onstage in full motorcycle gear, Chin noted that he'll soon be touring Europe by motorcycle, meeting Java Community members and streaming live via UStream--the ultimate manifestation of community and technology! He also reminded attendees of the upcoming JavaOne Latin America 2012 in São Paulo, Brazil (December 4-6, 2012), and noted that the conference's CFP (call for papers) has been extended for one more week. "Remember, December is summer in Brazil!" Chin said.

    Read the article
