
  • Using elapsed time for SlowMo in XNA

    - by Dave Voyles
    I'm trying to create a slow-mo effect in my pong game, so that when a player presses a button the paddles and ball suddenly move at a far slower speed. I think I understand the concepts behind adjusting timing in XNA, but I'm not sure how to incorporate them into my design exactly. The updates for my bats (paddles) are done in my Bat.cs class:

        /// <summary>
        /// Controls the bat moving up the screen
        /// </summary>
        public void MoveUp()
        {
            SetPosition(Position + new Vector2(0, -moveSpeed));
        }

        /// <summary>
        /// Controls the bat moving down the screen
        /// </summary>
        public void MoveDown()
        {
            SetPosition(Position + new Vector2(0, moveSpeed));
        }

        /// <summary>
        /// Updates the position of the AI bat, in order to track the ball
        /// </summary>
        /// <param name="ball"></param>
        public virtual void UpdatePosition(Ball ball)
        {
            size.X = (int)Position.X;
            size.Y = (int)Position.Y;
        }

    The rest of my game updates are done in my GameplayScreen.cs class (I'm using the XNA game state management sample):

        class GameplayScreen
        {
            ...
            bool slow;
            ...

            public override void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen)
            {
                base.Update(gameTime, otherScreenHasFocus, false);
                if (IsActive)
                {
                    // SlowMo stuff
                    Elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
                    if (Slowmo) Elapsed *= .8f;
                    MoveTimer += Elapsed;

                    double elapsedTime = gameTime.ElapsedGameTime.TotalMilliseconds;
                    if (Keyboard.GetState().IsKeyDown(Keys.Up)) slow = true;
                    else if (Keyboard.GetState().IsKeyDown(Keys.Down)) slow = false;
                    if (slow) elapsedTime *= .1f;

                    // Updating bat positions
                    leftBat.UpdatePosition(ball);
                    rightBat.UpdatePosition(ball);

                    // Updating the ball position
                    ball.UpdatePosition();
                    ...

    And finally, my fixed time step is set in the constructor of my Game1.cs class:

        /// <summary>
        /// The main game constructor.
        /// </summary>
        public Game1()
        {
            IsFixedTimeStep = slow = false;
        }

    So my question is: where do I place the MoveTimer or elapsedTime so that my bat will slow down accordingly?
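
    For context, a minimal sketch of how a scaled elapsed time usually gets fed into movement so a slow-mo multiplier affects everything downstream. The time-taking overloads and the speed-per-second convention are assumptions for illustration, not part of the original code:

        // Sketch: compute one scaled delta per frame, pass it everywhere.
        float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        if (slow)
            elapsed *= 0.1f;                      // whole simulation runs at 1/10 speed

        leftBat.UpdatePosition(ball, elapsed);    // hypothetical overloads that
        rightBat.UpdatePosition(ball, elapsed);   // accept the scaled delta
        ball.UpdatePosition(elapsed);

        // In Bat.cs, movement becomes time-based instead of per-frame,
        // with moveSpeed reinterpreted as units per second:
        public void MoveUp(float elapsed)
        {
            SetPosition(Position + new Vector2(0, -moveSpeed * elapsed));
        }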


  • Copyrighting software, templates, etc. under real name or screen name?

    - by Abluescarab
    My question is hopefully simple--should I copyright my work (art, software, web design, etc.) under my real name or my screen name? My real name and screen name are also easily connected with a bit of searching, so does it really matter in the end? I'm not a professional (at this point). I read this article: Is it a bad idea to sell Android apps in the Android Market under your real name? and they recommended releasing on the app market under a company name. I also read this article: On what name should I claim copyright in open source software?, but that didn't answer my question. I know it probably matters for big projects, but for little projects, does it matter? Thanks ahead of time!


  • Developer career feeling like going back in time every new job [closed]

    - by komediant
    Is there a good category for this question?

    My background is a bachelor in ICT, and as a hobby I have been programming since I was around twelve, I think. I started with QBasic, Pascal, C, Java, et cetera. I have now been working for about eight or nine years, half in the academic/medical world and half in the company world.

    A few years ago I started with frameworks. I began with Grails (on top of Spring/Hibernate), which was a heavenly job: very productive and no hassle. At my previous job I developed in pure Spring/Hibernate Java, which meant a bit more writing of annotations and XML and no conventions like Grails. But still, I liked Spring/Hibernate a lot, along with the professional setup: a development street, versioning, Jenkins/Sonar, log4j and a good IDE like IntelliJ. It felt quite "clear" and organised, although I knew Grails felt a bit more productive.

    But... at my current job almost half the code is pure servlets, hard-coded JDBC (connections handled by yourself), scriptlets in all JSP pages, no service layer, no versioning, no Maven, HTML in the DAO layer, JAR hell, no hot-swap deployment locally; for every change you have to deploy and hope it works fine on the server. All local development needs ugly scriptlet tags to check which environment it is running in. Et cetera. Every now and then the developers work on into the evening (I don't), and still lots of issues are not solved and new projects are waiting. I hear the developers complaining, but somehow they feel like what they have now is "advanced", or they are in a sort of comfort zone. The lead developer seems open to new things, but half of the time he says he can implement MVC-framework features himself instead of using what is already out there.

    So in short, I currently feel like I am missing all the modern framework techniques and that the company is moving forward very slowly. I have only worked here for two months. What I do now is also write some partially ugly code, but it goes completely against my nature and I feel uncomfortable with it. Coding something takes longer than estimated, my manager complains about why it takes so long, and I feel ashamed for needing so much time. Where I was used to just writing a query, I now build up whole try/catch methods. My manager knows my complaints, and the developers do too. There will be a meeting to line out plans for 2013 on technology and the issues I and the company are facing. I am not looking for another job yet; it's close to where I live and the economy is fragile.

    Has anyone else had this kind of career experience, feeling like you are going backwards in technology with every new job? And how did you cope with it?


  • Why does ffmpeg stop randomly in the middle of a process?

    - by acidzombie24
    ffmpeg feels like it's taking a long time. When I look at my output file, I see it stops at between 6 and 8 MB; a fully encoded file is about 14 MB. Why does ffmpeg stop? My code locks up on StandardOutput.ReadToEnd(). I had to kill the process (after seeing it not move for more than 10 seconds, when previously I saw it update every second); only then do I get the results of stdout and stderr. stdout is "" and stderr is below. The output message shows where the file size ended. I also see a drop in my CPU usage when it stops. I copied the argument from Visual Studio, cd'd to the same working directory, ran the command (bin/ffmpeg) and pasted the argument in; there it was able to complete.

        int soundProcess(string infn, string outfn)
        {
            string aa, aa2;
            aa = aa2 = "DEAD";
            var app = new Process();
            app.StartInfo.UseShellExecute = false;
            app.StartInfo.RedirectStandardOutput = true;
            app.StartInfo.RedirectStandardError = true;
            app.StartInfo.FileName = @"bin\ffmpeg.exe";
            app.StartInfo.Arguments = string.Format(@"-i ""{0}"" -ab 192k -y {2} ""{1}""", infn, outfn, param);
            app.Start();
            try
            {
                app.PriorityClass = ProcessPriorityClass.BelowNormal;
            }
            catch (Exception ex)
            {
                if (!Regex.IsMatch(ex.Message, @"Cannot process request because the process .*has exited"))
                    throw;   // rethrow without resetting the stack trace
            }
            aa = app.StandardOutput.ReadToEnd();
            aa2 = app.StandardError.ReadToEnd();
            app.WaitForExit();
            if (aa2.IndexOf("could not find codec parameters") != -1)
                return 1;
            else if (aa == "DEAD" || aa2 == "DEAD")
                return -1;
            else if (aa2.Length != 0)
                return -2;
            else
                return 0;
        }

    The output of stderr (stdout is empty):

        FFmpeg version SVN-r15815, Copyright (c) 2000-2008 Fabrice Bellard, et al.
          configuration: --enable-memalign-hack --enable-postproc --enable-swscale --enable-gpl --enable-libfaac --enable-libfaad --enable-libgsm --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libx264 --enable-libxvid --disable-ffserver --disable-vhook --enable-avisynth --enable-pthreads
          libavutil     49.12. 0 / 49.12. 0
          libavcodec    52. 3. 0 / 52. 3. 0
          libavformat   52.23. 1 / 52.23. 1
          libavdevice   52. 1. 0 / 52. 1. 0
          libswscale     0. 6. 1 /  0. 6. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Nov 13 2008 10:28:29, gcc: 4.2.4 (TDM-1 for MinGW)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\dev\src\trunk\prjname\prjname\App_Data/temp/m/o/6304266424778814852':
          Duration: 00:12:53.36, start: 0.000000, bitrate: 154 kb/s
            Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16
        Output #0, ipod, to 'C:\dev\src\trunk\prjname\prjname\App_Data\temp\m\o\2.m4a':
            Stream #0.0(und): Audio: libfaac, 44100 Hz, stereo, s16, 192 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        size=      87kB time=4.74 bitrate= 150.7kbits/s
        size=     168kB time=9.06 bitrate= 151.9kbits/s
        ...
        size=    6212kB time=334.76 bitrate= 152.0kbits/s
        size=    6327kB time=340.99 bitrate= 152.0kbits/s
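
    A note for anyone debugging this: one well-documented cause of exactly this hang is redirecting both streams and reading them synchronously one after the other. ReadToEnd() on stdout blocks until the child exits, while ffmpeg blocks writing to a stderr pipe whose buffer is full, so both sides wait forever. A minimal sketch of the usual workaround, draining stderr asynchronously (the argument string here is a placeholder):

        using System.Diagnostics;
        using System.Text;

        var app = new Process();
        app.StartInfo.FileName = @"bin\ffmpeg.exe";
        app.StartInfo.Arguments = "-i in.mov -ab 192k -y out.m4a"; // placeholder
        app.StartInfo.UseShellExecute = false;
        app.StartInfo.RedirectStandardOutput = true;
        app.StartInfo.RedirectStandardError = true;

        var stderr = new StringBuilder();
        app.ErrorDataReceived += (s, e) => { if (e.Data != null) stderr.AppendLine(e.Data); };

        app.Start();
        app.BeginErrorReadLine();                        // stderr drains in the background
        string stdout = app.StandardOutput.ReadToEnd();  // safe: only one synchronous read
        app.WaitForExit();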


  • Implication of variable context switching time

    - by Rob
    Hi, I know that the constant context-switching time of the Linux scheduler was a big achievement. I was just asking myself what the implications of a non-constant switching time would be. The only obvious case I can think of is real-time systems, where we have to meet deadlines; there it is obviously not ideal if the switching time is "random". Are there any other good reasons that favour constant switching times? Many thanks, Rob


  • How to keep Windows 2003 Daylight Saving values updated

    - by SirMoreno
    My web app runs on Windows 2003 with .NET 3.5. I have users from Israel (GMT+2), and Israel switched to daylight saving time on 26/3/10, so now it's GMT+3. I use TimeZoneInfo.ConvertTime, which doesn't know the daylight saving switch is on 26/3/10, so it still converts to GMT+2. I asked on Stack Overflow: http://stackoverflow.com/questions/2530834/problem-with-timezoneinfo-converttime-missed-the-daylight-saving-switch/2532104#2532104 and I was told that I need to update: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones\Israel Standard Time\Dynamic DST. I found this update: http://support.microsoft.com/kb/976098, which is supposed to fix the Dynamic DST data for 2010. Is this the update I need? Where can I find an update that handles 2011, 2012, ...? Will I need to update my Windows every year to get the DST right? Thanks.
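
    For what it's worth, a small sketch of why the registry data matters here, assuming .NET 3.5 on Windows: TimeZoneInfo builds its adjustment rules from the Dynamic DST registry key quoted above, so ConvertTime only becomes correct once that data has been patched (by the KB update or a later cumulative time-zone update):

        using System;

        class DstCheck
        {
            static void Main()
            {
                // "Israel Standard Time" is the registry ID quoted in the question.
                TimeZoneInfo israel = TimeZoneInfo.FindSystemTimeZoneById("Israel Standard Time");
                DateTime local = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, israel);
                // With stale Dynamic DST data, both the converted time and this flag are wrong.
                Console.WriteLine("{0} (DST in effect: {1})", local, israel.IsDaylightSavingTime(local));
            }
        }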


  • Meinberg ntp for Windows - force more frequent updates?

    - by ana
    I have a Windows VM that has time drift problems; left to its own devices it will drift by minutes per day. I know there are issues with timekeeping on VMs, but I was hoping that using the Meinberg NTP service would be successful. It works, generally, in that the time gets corrected in one big step about once every 70 minutes, when it's at about a 10-minute offset. This has totally confused me, as I thought NTP was meant to drift gently back towards the right time, and to panic and die if the offset was more than 3 minutes. So (a) what is happening, and (b) how do I make it update more regularly?
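
    For reference, a minimal ntp.conf sketch of the knobs usually involved (server names are placeholders and the values are starting points, not recommendations): minpoll/maxpoll bound how often ntpd polls its servers, and "tinker panic 0" stops ntpd from refusing to run when the offset is huge, which matters on VMs that resume with wildly wrong clocks.

        # Poll between 2^4 = 16 s and 2^6 = 64 s instead of the defaults
        server 0.pool.ntp.org iburst minpoll 4 maxpoll 6
        server 1.pool.ntp.org iburst minpoll 4 maxpoll 6

        # Never give up because the offset looks implausibly large
        tinker panic 0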


  • Win 7 time service error

    - by casterle
    I see a warning in my Win7 Event Log: "The time service has not synchronized the system time for 86400 seconds because none of the time service providers provided a usable time stamp." The message suggests that I run: w32tm /resync. When I try to run that command, I get an error: "The following error occurred: The specified module could not be found. (0x8007007E)". I can run w32tm with no arguments and get the expected list of w32tm commands, so the program is accessible and runs. Any help is greatly appreciated.
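
    In case it helps to reproduce a fix, the usual repair sequence when w32time itself looks broken is to re-register the service (a sketch; run from an elevated prompt, and note that /unregister discards the service's settings):

        net stop w32time
        w32tm /unregister
        w32tm /register
        net start w32time
        w32tm /resync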


  • Parameters for selection of operating system, memory and processor for an embedded system?

    - by James
    I am developing embedded real-time system software (in the C language). I have designed the software architecture: we know the various objects required, the interactions required between the various objects, and the IPC communication between tasks. Based on this information, I need to decide on the operating system (RTOS), microprocessor and memory size requirements. (Most likely I would be using Quadros, as it has been suggested by the client based on their prior experience in similar projects.) But I am confused about which one to begin with, since the choice of one could impact the selection of the others. Could you also guide me on the parameters to consider to estimate the memory requirements from the software design (lower limit and upper limit of the memory requirement)? (The cost of the component(s) can be ignored for this evaluation.)


  • Wireshark vs Netmon for precise time tagging

    - by Nic
    I'm using Wireshark to time-tag and get some statistics on multicast traffic. When there is not much traffic, the stats look good, but as soon as there is a bunch of packets arriving at the same time, I get stats that are not even possible (e.g. a round-trip time of 0 ms). I'm wondering if Netmon could be more precise at time-tagging packets because it is not relying on the WinPcap driver? Has anybody already faced the same situation? Thanks a lot, Nic


  • Memory management (segmentation and paging) in 80286 and 80386: How does it work?

    - by Andrew J. Brehm
    I found lots of Web sites and books explaining how memory management worked on the 8086 and later x86 CPUs in real mode. I understand, I think, how two 16-bit values, the segment address and the offset, are combined to get a linear 20-bit physical address (shift the segment four bits to the left, then add the offset; segments are 64K and start every 16 bytes). But I couldn't find any good Web sites or books that explain how memory management works in protected mode, specifically the differences between the 80286 and 80386. Can anyone point me to a good Web site or book (or explain it right here)? (For extra credit, i.e. an upvote: how does it work in long mode?)
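
    As a quick sanity check of the real-mode scheme described in the question, here is a small C sketch of that 20-bit computation:

        #include <stdio.h>

        /* Real mode: physical = (segment << 4) + offset, 20 bits on an 8086. */
        int main(void)
        {
            unsigned short segment = 0x1234, offset = 0x5678;
            unsigned long physical = ((unsigned long)segment << 4) + offset;
            /* Prints: 1234:5678 -> 179B8 */
            printf("%04X:%04X -> %05lX\n", (unsigned)segment, (unsigned)offset,
                   physical & 0xFFFFFul);
            return 0;
        }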


  • Python to see if a file falls into a date range

    - by tod
    I have a script where I want an if statement to check whether a file falls between several specified date periods as it searches directories. I have been able to get the time in %Y-%m-%d format for the file, but I can't seem to be able to specify a date range for Python to check against. Any ideas/help? Here's what I have so far:

        for name in files:
            f = os.path.join(root, name)
            dt = os.path.getmtime(f)
            nwdt = time.gmtime(dt)
            ndt = time.strftime('%Y-%m-%d', nwdt)
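
    A sketch of one way to finish this: compare date objects rather than formatted strings, since datetime.date supports ordering directly (the range bounds below are placeholders; files and root are as in the snippet above):

        import os
        from datetime import date, datetime

        start = date(2010, 1, 1)     # placeholder bounds
        end = date(2010, 12, 31)

        for name in files:
            f = os.path.join(root, name)
            mdate = datetime.utcfromtimestamp(os.path.getmtime(f)).date()
            if start <= mdate <= end:
                print(f, "is in range")

    (Comparing the '%Y-%m-%d' strings lexicographically would also work, since that format sorts chronologically, but real date objects are harder to get subtly wrong.)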


  • NTP daemon or ntpdate doesn't synchronize

    - by user2862333
    I'm having some problems with synchronization against an NTP server.

    1) The NTP daemon doesn't sync the system clock at all, even though it's running (confirmed with /etc/init.d/ntp status). Forcing it to sync with ntpd -q or ntpd -gq does not work either.

    2) Stopping the NTP daemon and syncing manually with ntpdate does give me the following output:

        ~# ntpdate -d 0.debian.pool.ntp.org
        6 Nov 16:48:53 ntpdate[4417]: ntpdate [email protected] Sat May 12 09:07:19 UTC 2012 (1)
        transmit(79.132.237.5)
        receive(79.132.237.5)
        transmit(85.234.197.2)
        receive(85.234.197.2)
        transmit(194.50.97.34)
        receive(194.50.97.34)
        transmit(79.132.237.1)
        receive(79.132.237.1)
        [the same transmit/receive cycle repeats three more times]

        server 79.132.237.5, port 123
        stratum 2, precision -20, leap 00, trust 000
        refid [79.132.237.5], delay 0.05141, dispersion 0.00145
        transmitted 4, in filter 4
        reference time:      d624e3b1.f490b90d  Wed, Nov  6 2013 16:50:09.955
        originate timestamp: d624e457.eaaf787c  Wed, Nov  6 2013 16:52:55.916
        transmit timestamp:  d624e36c.4a7036fd  Wed, Nov  6 2013 16:49:00.290
        filter delay:  0.08537  0.05141  0.05151  0.06346  0.00000  0.00000  0.00000  0.00000
        filter offset: 235.6038 235.6087 235.6095 235.6068 0.000000 0.000000 0.000000 0.000000
        delay 0.05141, dispersion 0.00145
        offset 235.608782

        server 85.234.197.2, port 123
        stratum 2, precision -20, leap 00, trust 000
        refid [85.234.197.2], delay 0.05151, dispersion 0.00336
        transmitted 4, in filter 4
        reference time:      d624e3e7.dc6cd02b  Wed, Nov  6 2013 16:51:03.861
        originate timestamp: d624e458.1c91031f  Wed, Nov  6 2013 16:52:56.111
        transmit timestamp:  d624e36c.7da1d882  Wed, Nov  6 2013 16:49:00.490
        filter delay:  0.05765  0.07750  0.06013  0.05151  0.00000  0.00000  0.00000  0.00000
        filter offset: 235.6048 235.6014 235.6035 235.6078 0.000000 0.000000 0.000000 0.000000
        delay 0.05151, dispersion 0.00336
        offset 235.607826

        server 194.50.97.34, port 123
        stratum 3, precision -23, leap 00, trust 000
        refid [194.50.97.34], delay 0.03021, dispersion 0.00090
        transmitted 4, in filter 4
        reference time:      d624e38d.2bce952c  Wed, Nov  6 2013 16:49:33.171
        originate timestamp: d624e458.4dbbc114  Wed, Nov  6 2013 16:52:56.303
        transmit timestamp:  d624e36c.b0d38834  Wed, Nov  6 2013 16:49:00.690
        filter delay:  0.03030  0.03636  0.03091  0.03021  0.00000  0.00000  0.00000  0.00000
        filter offset: 235.6095 235.6085 235.6098 235.6105 0.000000 0.000000 0.000000 0.000000
        delay 0.03021, dispersion 0.00090
        offset 235.610589

        server 79.132.237.1, port 123
        stratum 3, precision -20, leap 00, trust 000
        refid [79.132.237.1], delay 0.05113, dispersion 0.00305
        transmitted 4, in filter 4
        reference time:      d624dfcb.6acea332  Wed, Nov  6 2013 16:33:31.417
        originate timestamp: d624e458.838672ad  Wed, Nov  6 2013 16:52:56.513
        transmit timestamp:  d624e36c.e405181c  Wed, Nov  6 2013 16:49:00.890
        filter delay:  0.06345  0.05113  0.05681  0.05656  0.00000  0.00000  0.00000  0.00000
        filter offset: 235.6087 235.6038 235.6010 235.6074 0.000000 0.000000 0.000000 0.000000
        delay 0.05113, dispersion 0.00305
        offset 235.603888

        6 Nov 16:49:00 ntpdate[4417]: step time server 79.132.237.5 offset 235.608782 sec

    Clearly, ntpdate can reach the NTP server(s), but when I check the clock afterwards it hasn't changed and is still displaying the wrong time. Any ideas as to what the problem might be would be much appreciated.
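
    One detail worth knowing when reading the output above: ntpdate's -d flag is documented as debug mode, in which it goes through all the steps but does not actually adjust the local clock, so a -d run leaving the time unchanged is expected. A sketch of a run that does step the clock (same pool host as in the question):

        /etc/init.d/ntp stop
        ntpdate 0.debian.pool.ntp.org
        /etc/init.d/ntp start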


  • Time sync in data center

    - by ak
    We currently have a setting to sync time when the spread is more than 5 minutes, but it's getting to the point where some applications don't accept that. What is the best practice out there for syncing time on all Windows and Unix boxes against a time server or domain controller? The Windows time service is not made for high accuracy, i.e. better than 10 seconds. What are the alternatives?


  • Modifying / Resetting the Last reboot time

    - by user3711455
    I am trying to recreate a problem I encountered, to try to confirm its root causes. One of the possible theories is that the problem was caused by the server not having been restarted for a long time. Since I've already restarted the computer, is there a way of resetting/modifying the "last reboot time" so the computer thinks it hasn't restarted for a long time? I am checking it with: systeminfo | find "System Boot Time". The computer is running Windows XP Embedded, if that helps. Thanks
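
    For cross-checking while experimenting, the same figure systeminfo reports can, to my knowledge, also be read from WMI's Win32_OperatingSystem class (assuming the WMI command-line tooling is included in the XP Embedded image):

        wmic os get lastbootuptime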


  • Looking for examples of "real" uses of continuations

    - by Sébastien RoccaSerra
    I'm trying to grasp the concept of continuations, and I have found several small teaching examples like this one from the Wikipedia article:

        (define the-continuation #f)

        (define (test)
          (let ((i 0))
            ; call/cc calls its first function argument, passing
            ; a continuation variable representing this point in
            ; the program as the argument to that function.
            ;
            ; In this case, the function argument assigns that
            ; continuation to the variable the-continuation.
            ;
            (call/cc (lambda (k) (set! the-continuation k)))
            ;
            ; The next time the-continuation is called, we start here.
            (set! i (+ i 1))
            i))

    I understand what this little function does, but I can't see any obvious application of it. While I don't expect to use continuations all over my code anytime soon, I wish I knew a few cases where they can be appropriate. So I'm looking for more explicitly useful code samples of what continuations can offer me as a programmer. Cheers!
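
    (For a flavour of the kind of answer being asked for, here is a sketch of one classic small use, not taken from the question: call/cc as a structured early exit, the way break or an exception would be used in other languages.)

        ; Search a list, escaping the traversal as soon as a match is found.
        (define (find-first pred lst)
          (call/cc
            (lambda (return)              ; calling "return" jumps out of the whole for-each
              (for-each
                (lambda (x) (if (pred x) (return x)))
                lst)
              #f)))                       ; fell through: nothing matched

        (find-first even? '(1 3 4 5))     ; => 4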


  • How to get the real bounds with Google Maps when fully zoomed out

    - by brad
    I have a map that shows location points based on the bounds of the map. Any time the map is moved or zoomed, I find the bounds and query for locations that fall within those bounds. Unfortunately, I'm unable to display all my locations when fully zoomed out. The reason is that Google Maps reports the min/max longitude as whatever is at the edge of the map, but if you zoom out enough, you can get a longitudinal range that excludes visible locations. For instance, zoom your map out so that you see North America twice, on the far left and far right. The min/max longitudes are around -36.5625 to 170.15625, but this almost completely excludes North America, which lies in the -180 to -60 range. Obviously this is bothersome, as you can actually see the continent of North America (twice), but when I query for locations in the range Google Maps gives me, North America isn't returned. My code for finding the bounds is:

        bounds = gmap.getBounds();
        min_lat = bounds.getSouthWest().lat()
        max_lat = bounds.getNorthEast().lat()

    Has anyone encountered this, and can anyone suggest a workaround? Off the top of my head I can only think of a hack: to check the zoom level and hardcode the min/max longitudes to -180/180 if necessary, which is definitely unacceptable.
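
    A sketch of the usual workaround, using v2-era Maps API calls like those in the question (getZoom/getSize assumed available; the 256-pixel tile size is the standard Google Maps world width at zoom 0): if the viewport is at least as wide as the world at the current zoom level, longitude is effectively unbounded, so clamp the query range to the full -180..180.

        var bounds = gmap.getBounds();
        var minLng = bounds.getSouthWest().lng();
        var maxLng = bounds.getNorthEast().lng();

        // World width in pixels is 256 * 2^zoom (256px tiles at zoom 0).
        var worldWidth = 256 * Math.pow(2, gmap.getZoom());
        if (gmap.getSize().width >= worldWidth) {
            minLng = -180;   // whole world visible: don't trust the reported edges
            maxLng = 180;
        }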


  • Is rand_r really thread safe?

    - by terry
    Well, the rand_r function is supposed to be thread safe. However, looking at its implementation, I cannot see how it keeps itself from being changed by other threads. Suppose two threads invoke rand_r at the same time with the same seed variable; a read-write race will occur. The glibc implementation of rand_r is listed below. Does anybody know why rand_r is called thread safe?

        int
        rand_r (unsigned int *seed)
        {
          unsigned int next = *seed;
          int result;

          next *= 1103515245;
          next += 12345;
          result = (unsigned int) (next / 65536) % 2048;

          next *= 1103515245;
          next += 12345;
          result <<= 10;
          result ^= (unsigned int) (next / 65536) % 1024;

          next *= 1103515245;
          next += 12345;
          result <<= 10;
          result ^= (unsigned int) (next / 65536) % 1024;

          *seed = next;

          return result;
        }
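
    A sketch of the intended usage pattern, for contrast: all of rand_r's state lives in the caller-supplied seed, so it is thread safe in the sense that each thread can own a private seed, as below; it makes no promise about two threads sharing one seed.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        static void *worker(void *arg)
        {
            /* The seed lives on this thread's stack: no sharing, no race. */
            unsigned int seed = (unsigned int)(size_t)arg;
            int i;
            for (i = 0; i < 3; i++)
                printf("thread %lu: %d\n", (unsigned long)(size_t)arg, rand_r(&seed));
            return NULL;
        }

        int main(void)
        {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, worker, (void *)1);
            pthread_create(&t2, NULL, worker, (void *)2);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
        }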


  • Interface vs. abstract in PHP: a real-world scenario

    - by jason
    The goal is to learn whether to use abstract classes or interfaces, or both... I'm designing a program which allows a user to de-duplicate all their images, but in the process, rather than just building classes, I'd like to build a set of libraries that will allow me to re-use the code for other possible future purposes. In doing so I would like to learn interfaces vs. abstract classes, and I was hoping someone could give me input on using either. Here is what the current program will do:

        - recursively scan a directory for all files
        - determine whether each file's type is an image type
        - compare each file's MD5 checksum against all other files found, and only keep the ones which are not duplicates
        - store the total duplicates found at the end and display the size taken up
        - copy the files that are not duplicates into folders by date, for example a Year/Month folder, with the filename being the file creation date

    While I could just create a bunch of classes, I'd like to start learning more about interfaces and abstraction in PHP. So if I take the scan-directory class as the first example (see the sketch after this description), I have several methods:

        ScanForFiles($path)
        FindMD5Checksum()
        FindAllImageTypes()
        getFileList()

    ScanForFiles could be public to allow anyone to access it, and it stores in an object the entire directory list of files found and many details about them, for example extension, size, filename, path, etc. FindMD5Checksum runs against the fileList object created by ScanForFiles and adds the MD5 if needed. FindAllImageTypes runs against the fileList object as well and adds whether the files are image types. FindMD5Checksum and FindAllImageTypes are optionally run methods, because the user might not intend to run them all the time, or at all. getFileList returns the fileList object itself. While I have a working copy, I've revamped it a few times trying to figure out whether I need to go with an interface or an abstract class, or both. I'd like to know how an expert OO developer would design it, and why.
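
    A sketch of one way the two concepts combine for this design (all names invented for illustration): the interface declares the contract callers rely on, the abstract class shares the optional plumbing, and a concrete scanner fills in the traversal.

        <?php
        // Contract: anything that can scan a source and hand back a file list.
        interface FileScanner
        {
            public function scanForFiles($path);
            public function getFileList();
        }

        // Shared plumbing for the optional steps lives in an abstract base.
        abstract class BaseScanner implements FileScanner
        {
            protected $fileList = array();

            public function getFileList()
            {
                return $this->fileList;
            }

            public function findMD5Checksum()
            {
                foreach ($this->fileList as &$file) {
                    $file['md5'] = md5_file($file['path']);
                }
            }

            // Each concrete scanner decides how to walk its source.
            abstract public function scanForFiles($path);
        }

        class DirectoryScanner extends BaseScanner
        {
            public function scanForFiles($path)
            {
                $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($path));
                foreach ($it as $f) {
                    if ($f->isFile()) {
                        $this->fileList[] = array('path' => $f->getPathname(), 'size' => $f->getSize());
                    }
                }
            }
        }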


  • Seriously, It’s Time to Get Your Content Act Together

    - by Mike Stiles
    Branded content, content marketing, social content, brand journalism, we’re seeing those terms more and more. Why? The technology tools are coming together. We should know. We can gather big data, crunch it, listen to the public, moderate, respond, get to know the customer intimately, know what they like, know what they want, we can target, distribute, amplify, measure engagement and reaction, modify strategy and even automate a great deal of all that. An amazing machine, a sleek, smooth-running engine has been built such that all the parts can interact and work together to deliver peak performance and maximum output.

    But that engine isn’t going anywhere without any gas. Content is the gas. Yes, we curate other people’s content. We can siphon their gas. There’s tech to help with that too. But as for the creation of original, worthwhile content made for a specific audience, our audience, machines can’t do that…at least not yet. Curated content is great. But somebody has to originate the content for it to be curated and shared. And since the need for good, curated content is obviously large and the desire to share is there, it’s a winning proposition for a brand to be a consistent producer of original content.

    And yet, it feels like content is an issue we’re avoiding. There’s a reluctance to build a massive pipeline if you have no idea what you’re going to run through it. The C-suite often doesn’t know what content is, that it’s different from ads, where to get it, who makes it, how long it should be, what the point of it is if there’s no hard sell of the product, what it costs, how to use it, how to measure it, how to make sure it’s good, or how to make sure it will keep flowing. It could be the reason many brands aren’t pulling the trigger on socially enabling the enterprise. And that’s a shame, because there are a lot of creative, daring, experimental, uniquely talented entertainers and journalists chomping at the bit to execute content for brands. But for many corporate executives, content is “weird,” and the people who make it are even weirder. The content side of the equation is human. It’s art, but art that can be informed by data.

    The natural inclination is for brands to turn to their agencies for such creative endeavors. But agencies are falling into one of two categories. They’re failing to transition from ads to content. In “Content Era, What’s the Role of Agencies?” Alexander Jutkowitz says agencies were made for one-hit campaigns, not ongoing content. Or, they’re ready and capable but can’t get clients to do the right things. Agencies have to make money, even if it means continuing to do the wrong things because that’s all the client will agree to. So what we wind up with in the pipeline is advertising, marketing-heavy content, content that was obviously created or spearheaded by non-creative executives, random & inconsistent content, copy written for SEO bots, and other completely uninteresting nightmares. Frank Rose, author of “The Art of Immersion,” writes, “Content without story and excitement is noise pollution.”

    In the old days, you made an ad and inserted it into shows made by people who knew what they were doing. You could bask in that show’s success and leverage their audience. Now, you are tasked with attracting, amassing and holding your own audience. You may just want to make, advertise and sell your widgets. But now there’s a war on for a precious commodity, attention. People are busy. They have filters to keep uninteresting and irrelevant things out. They value their time and expect value back when they give it up. Joe Pulizzi, founder of the Content Marketing Institute, says, "Your customers don't care about you, your products, your services…they care about themselves, their wants and their needs."

    Is it worth getting serious about content and doing it right? 61% of consumers feel better about a company that delivers custom content (Custom Content Council). Interesting content is one of the top 3 reasons people follow brands on social (Content+). 78% of consumers think organizations that provide custom content want to build good relationships with them (TMG Custom Media). On the B2B side, 80% of business decision makers prefer to get company info in a series of articles vs. an ad.

    So what’s the hang-up? Cited barriers to content marketing are lack of human resources (42%) and lack of budget (35%). 54% of brands don’t have a single on-site, dedicated content creator. And only 38% of brands have a content marketing strategy.

    Tech has built the biggest, most incredible stage for brands that’s ever been built. Putting something on that stage is your responsibility. Do a bad show, or no show at all, and you’ll be the beautiful, talented actress that never got discovered.

    @mikestiles
    Photo: Gabriella Fabbri, stock.xchng


  • Android: can't get two virtual joysticks to move independently and at the same time

    - by Cole
        @Override
        public boolean onTouch(View v, MotionEvent event) {
            float r = 70;
            float centerLx = (float) (screenWidth * .3425);
            float centerLy = (float) (screenHeight * .4958);
            float centerRx = (float) (screenWidth * .6538);
            float centerRy = (float) (screenHeight * .4917);
            float dx = 0;
            float dy = 0;
            float theta;
            float c;

            int action = event.getAction();
            int actionCode = action & MotionEvent.ACTION_MASK;
            int pid = (action & MotionEvent.ACTION_POINTER_INDEX_MASK)
                    >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
            int fingerid = event.getPointerId(pid);
            int x = (int) event.getX(pid);
            int y = (int) event.getY(pid);

            c = FloatMath.sqrt(dx * dx + dy * dy);
            theta = (float) Math.atan(Math.abs(dy / dx));

            switch (actionCode) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_POINTER_DOWN:
                // If touching down on the left stick, set the left stick ID to this finger ID.
                if (x < screenWidth / 2 && c < r * .8) {
                    lsId = fingerid;
                    dx = x - centerLx;
                    dy = y - centerLy;
                    touchingLs = true;
                } else if (x > screenWidth / 2 && c < r * .8) {
                    rsId = fingerid;
                    dx = x - centerRx;
                    dy = y - centerRy;
                    touchingRs = true;
                }
                break;

            case MotionEvent.ACTION_MOVE:
                if (touchingLs && fingerid == lsId) {
                    dx = x - centerLx;
                    dy = y - centerLy;
                } else if (touchingRs && fingerid == rsId) {
                    dx = x - centerRx;
                    dy = y - centerRy;
                }
                c = FloatMath.sqrt(dx * dx + dy * dy);
                theta = (float) Math.atan(Math.abs(dy / dx));

                // If touching outside the left radius and moving the left stick
                if (c >= r && touchingLs && fingerid == lsId) {
                    if (dx > 0 && dy < 0) {
                        // top right quadrant
                        lsX = r * FloatMath.cos(theta);
                        lsY = -(r * FloatMath.sin(theta));
                        Log.i("message", "top right");
                    }
                    if (dx < 0 && dy < 0) {
                        // top left quadrant
                        lsX = -(r * FloatMath.cos(theta));
                        lsY = -(r * FloatMath.sin(theta));
                        Log.i("message", "top left");
                    }
                    if (dx < 0 && dy > 0) {
                        // bottom left quadrant
                        lsX = -(r * FloatMath.cos(theta));
                        lsY = r * FloatMath.sin(theta);
                        Log.i("message", "bottom left");
                    } else if (dx > 0 && dy > 0) {
                        // bottom right quadrant
                        lsX = r * FloatMath.cos(theta);
                        lsY = r * FloatMath.sin(theta);
                        Log.i("message", "bottom right");
                    }
                }
                if (c >= r && touchingRs && fingerid == rsId) {
                    if (dx > 0 && dy < 0) {
                        // top right quadrant
                        rsX = r * FloatMath.cos(theta);
                        rsY = -(r * FloatMath.sin(theta));
                        Log.i("message", "top right");
                    }
                    if (dx < 0 && dy < 0) {
                        // top left quadrant
                        rsX = -(r * FloatMath.cos(theta));
                        rsY = -(r * FloatMath.sin(theta));
                        Log.i("message", "top left");
                    }
                    if (dx < 0 && dy > 0) {
                        // bottom left quadrant
                        rsX = -(r * FloatMath.cos(theta));
                        rsY = r * FloatMath.sin(theta);
                        Log.i("message", "bottom left");
                    } else if (dx > 0 && dy > 0) {
                        // bottom right quadrant
                        rsX = r * FloatMath.cos(theta);
                        rsY = r * FloatMath.sin(theta);
                        Log.i("message", "bottom right");
                    }
                } else {
                    if (c < r && touchingLs && fingerid == lsId) {
                        lsX = dx;
                        lsY = dy;
                    }
                    if (c < r && touchingRs && fingerid == rsId) {
                        rsX = dx;
                        rsY = dy;
                    }
                }
                break;

            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_POINTER_UP:
                if (fingerid == lsId) {
                    lsId = -1;
                    lsX = 0;
                    lsY = 0;
                    touchingLs = false;
                } else if (fingerid == rsId) {
                    rsId = -1;
                    rsX = 0;
                    rsY = 0;
                    touchingRs = false;
                }
                break;
            }
            return true;
        }

    There's a left joystick and a right joystick. Right now only one will move at a time. If someone could set me on the right track I would be incredibly grateful, because I've been having nightmares about this problem.
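
    One observation that may help: in ACTION_MOVE events, the pointer index encoded in the action bits is not meaningful; a single move event carries coordinates for every active pointer, so handling only getX(pid)/getY(pid) updates one stick at most. A sketch of the usual pattern, looping over all pointers in the event (updateLeftStick/updateRightStick are hypothetical helpers standing in for the quadrant math above):

        case MotionEvent.ACTION_MOVE:
            // One MOVE event batches all active pointers; visit each one.
            for (int i = 0; i < event.getPointerCount(); i++) {
                int id = event.getPointerId(i);
                float px = event.getX(i);   // note: pointer index i, not the id
                float py = event.getY(i);
                if (touchingLs && id == lsId) {
                    updateLeftStick(px, py);
                } else if (touchingRs && id == rsId) {
                    updateRightStick(px, py);
                }
            }
            break;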


  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders, and I'm facing some problems. Below you can see an example of my rendered scene:

    The process of shadow mapping I'm following is to render the scene to the framebuffer using a view matrix from the light's point of view and the projection and model matrices used for normal rendering. In the second pass, I send the above MVP matrix from the light's point of view to the vertex shader, which transforms the position to light space. The fragment shader does the perspective divide and changes the position to texture coordinates. Here is my vertex shader:

        #version 150 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform mat4 lightMVP;
        uniform float scale;

        in vec3 in_Position;
        in vec3 in_Normal;
        in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;
        smooth out vec4 lightspace_Position;

        void main(void){
            pass_Normal = NormalMatrix * in_Normal;
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0);
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    And my fragment shader:

        #version 150 core

        struct Light{
            vec3 direction;
        };

        uniform Light light;
        uniform sampler2D inSampler;
        uniform sampler2D inShadowMap;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;
        smooth in vec4 lightspace_Position;

        out vec4 out_Color;

        float CalcShadowFactor(vec4 lightspace_Position){
            vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w;
            vec2 UVCoords;
            UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
            UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;
            float Depth = texture(inShadowMap, UVCoords).x;
            if(Depth < (ProjectionCoords.z + 0.001))
                return 0.5;
            else
                return 1.0;
        }

        void main(void){
            vec3 Normal = normalize(pass_Normal);
            vec3 light_Direction = -normalize(light.direction);
            vec3 camera_Direction = normalize(-pass_Position);
            vec3 half_vector = normalize(camera_Direction + light_Direction);
            float diffuse = max(0.2, dot(Normal, light_Direction));
            vec3 temp_Color = diffuse * vec3(1.0);
            float specular = max(0.0, dot(Normal, half_vector));
            float shadowFactor = CalcShadowFactor(lightspace_Position);
            if(diffuse != 0 && shadowFactor > 0.5){
                float fspecular = pow(specular, 128.0);
                temp_Color += fspecular;
            }
            out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0);
        }

    One of the problems is self-shadowing: as you can see in the picture, the crate has its own shadow cast on itself. What I have tried is enabling polygon offset (i.e. glEnable(GL_POLYGON_OFFSET_FILL) and glPolygonOffset(GLfloat, GLfloat)), but it didn't change much. As you see in the fragment shader, I have put a static offset value of 0.001, but I have to change the value depending on the distance of the light to get more desirable effects, which is not very handy. I also tried using front-face culling when I render to the framebuffer, but that didn't change much either.

    The other problem is that pixels outside the light's view frustum get shaded. The only object that is supposed to be able to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are some common practices? Should I pick an orthographic projection? From googling around a bit, I understand that these issues are not that trivial. Does anyone have any easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask me if you need more information on my code. Here is a comparison with and without shadow mapping of a close-up of the crate. The self-shadowing is more visible.
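
    For reference, a sketch of the two standard mitigations the question touches on, in one place: slope-scaled polygon offset for the acne, and back-face rendering into the depth map. The FBO handle and draw call are placeholders, and the offset values are starting points to tune, not recommendations:

        /* Depth pass into the shadow FBO with slope-scaled bias and
           back-face rendering, which together remove most self-shadowing. */
        glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);     /* assumed FBO handle */
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(2.0f, 4.0f);      /* factor scales with depth slope */
        glCullFace(GL_FRONT);             /* put the bias on back faces     */
        drawSceneDepthOnly();             /* placeholder draw call          */
        glCullFace(GL_BACK);
        glDisable(GL_POLYGON_OFFSET_FILL);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

    For a directional light, an orthographic projection is indeed the common choice; fitting its bounds snugly around the shadow casters keeps depth precision high and shrinks the out-of-frustum problem.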


  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture of what I have:

    As you can see above, the image has lighting, but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards, in the wrong direction, I think. But the problem is a little deeper: I'm just plotting the shadow onto the screen, which I know is wrong; I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct, so there has to be something wrong with my shader code somewhere. Here's what my code looks like.

    Vertex:

        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;           // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate;  // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;   // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    Fragment:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;           // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate;  // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;   // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting

            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); // max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert );
            vec4 LightingColor = ( DiffuseElement + AmbientElement );
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if( LightCoordinate.w > 0.0 )
            {
                ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor;
            // Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;)
            // produces the wrong effect:
            gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st );
        }

    I wanted to make sure the coordinates were correct for the shadow map, so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong; the shadows SHOULD be the opposite way around (look at how the image is: the areas shaded by normal lighting face the opposite direction to the shadows). Maybe my matrices are bad, or something wrong is going in? They're isolated and appear to be correct; nothing else going in is unusual. When I view from the light's point of view and get the MVP matrices for it, they're correct.

    EDIT: I added an image so you can see what happens when I use the correct command at the end of the GLSL. That's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?
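
    One pattern worth checking against the vertex shader above, offered as a sketch rather than a diagnosis: the [0,1] remap is applied to LightCoordinate.xy while it is still a homogeneous clip-space value, before the divide by w, and z is never remapped at all. Since (0.5 * x + 0.5) / w is not the same as 0.5 * (x / w) + 0.5, the conventional order is divide first, then remap all three components:

        // Fragment side: perspective divide first...
        vec3 proj = LightCoordinate.xyz / LightCoordinate.w;
        // ...then remap x, y AND z from [-1,1] NDC into [0,1] texture/depth space.
        proj = proj * 0.5 + 0.5;
        float stored = texture2D(ShadowMap, proj.xy).z;
        float ShadowFactor = (stored + 0.001 < proj.z) ? 0.5 : 1.0;

    (Equivalently, the remap can stay in the vertex shader if it is done with a bias matrix multiplied into LightModelViewProjectionMatrix, which scales and translates the w component correctly.)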

